
Researchers develop way to train artificial intelligence to recognise tumours

by Pragati Singh

One can train artificial intelligence (AI) to determine whether or not a tissue image contains a tumour. Until now, however, it has remained unclear how the AI arrives at its decision. The Research Center for Protein Diagnostics (PRODI) at Ruhr-Universität Bochum has developed a new approach that makes an AI’s decision transparent and thus trustworthy.

The research team headed by Professor Axel Mosig describes the approach in the journal Medical Image Analysis. For the study, bioinformatics specialist Axel Mosig collaborated with Professor Andrea Tannapfel, head of the Institute of Pathology, the oncologist Professor Anke Reinacher-Schick of the St. Josef Hospital of the Ruhr-Universität, and the biophysicist and founding director of PRODI, Professor Klaus Gerwert.

The team developed a neural network, i.e. an AI, that can classify whether a tissue sample contains a tumour or not. To this end, they fed the AI a large number of microscopic tissue images, some of which contained tumours, while others were tumour-free.
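The article does not describe the team’s network architecture or training pipeline. The following is a minimal sketch of how such a binary tumour/no-tumour classifier is commonly trained, assuming PyTorch; the TissueClassifier class and the placeholder image and label tensors are illustrative, not the study’s actual setup.

```python
# Minimal sketch (assumptions: PyTorch; the real PRODI architecture,
# data pipeline and hyperparameters are not described in this article).
import torch
import torch.nn as nn

class TissueClassifier(nn.Module):
    """Small CNN that maps an RGB tissue tile to tumour/no-tumour logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = TissueClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch: 8 tissue tiles with labels 1 = tumour, 0 = tumour-free.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```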

“Neural networks are initially a black box: it’s unclear which distinguishing features a network learns from the training data,” explains Axel Mosig. Unlike human experts, they cannot explain their decisions. “However, for medical applications in particular, it’s vital that the AI is capable of explanation and thus trustworthy,” adds bioinformatics scientist David Schuhmacher, who contributed to the study.

The AI is founded on testable hypotheses.

The explainable AI developed by the Bochum team is therefore based on the only kind of meaningful statements that science can make: falsifiable hypotheses. If a hypothesis is false, it must be possible to demonstrate this through an experiment. Artificial intelligence usually follows the principle of inductive reasoning: it builds a general model from specific observations, namely the training data, and then uses that model to assess all subsequent observations.

The philosopher David Hume identified the underlying problem 250 years ago, and it is easy to illustrate: no matter how many white swans we observe, we can never conclude from this data that all swans are white and that no black swans exist. Science therefore makes use of so-called deductive logic.

In this approach, a general hypothesis is the starting point. For example, the hypothesis that all swans are white is falsified as soon as a single black swan is spotted.
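As a toy illustration (not from the study), the sketch below contrasts the two modes of reasoning: an inductive rule generalised from observed swans, and a deductive, falsifiable hypothesis that a single black swan refutes.

```python
# Toy illustration (not from the study) of the two modes of reasoning.
observed_swans = ["white", "white", "white"]

def inductive_prediction(new_swan: str) -> bool:
    """Inductive: generalise from specific observations; no number of
    confirming white swans can ever prove the rule correct."""
    return new_swan in set(observed_swans)

def all_swans_are_white(swan: str) -> bool:
    """Deductive: a falsifiable hypothesis; one counterexample refutes it."""
    return swan == "white"

print(all_swans_are_white("black"))  # False: the hypothesis is falsified
```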

An activation map reveals the location of the tumour.

“At first glance, inductive AI and the deductive scientific method appear almost incompatible,” says Stephanie Schörner, a physicist who likewise contributed to the study. But the researchers found a way. Their novel neural network not only classifies whether a tissue sample contains a tumour or is tumour-free, it also generates an activation map of the microscopic tissue image.
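The article does not specify how the activation map is computed. One widely used technique that fits the classifier sketched above is class activation mapping (CAM), in which each convolutional feature map is weighted by its contribution to the tumour class; the sketch below continues the hypothetical TissueClassifier from earlier.

```python
# Sketch of a class activation map, reusing the hypothetical
# TissueClassifier from the earlier snippet (CAM is an assumption here,
# not necessarily the mapping technique used in the study).
import torch

def class_activation_map(model, image, target_class=1):
    """Weight each convolutional feature map by its contribution to the
    tumour class, yielding a coarse heatmap over the tissue image."""
    with torch.no_grad():
        feats = model.features(image.unsqueeze(0))      # (1, 32, H, W)
        weights = model.head[-1].weight[target_class]   # (32,)
        cam = (weights[:, None, None] * feats[0]).sum(0)
        cam = torch.relu(cam)                           # keep positive evidence
        return cam / (cam.max() + 1e-8)                 # normalise to [0, 1]

heatmap = class_activation_map(model, torch.randn(3, 224, 224))
```

High values in the resulting heatmap mark the regions of the image whose features drove the tumour prediction.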

The activation map is based on a testable hypothesis, namely that the activation derived from the neural network corresponds exactly to the tumour regions in the sample.

Site-specific molecular methods can be used to verify this hypothesis.
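Assuming such a site-specific reference annotation can be expressed as a binary mask over the image, the match between activation map and ground truth can be quantified, for example with a Dice coefficient; the following sketch uses placeholder tensors.

```python
# Sketch of scoring the hypothesis: compare the activation map against a
# binary tumour mask obtained from site-specific molecular measurements
# (both tensors below are placeholders).
import torch

def dice_overlap(cam, reference_mask, threshold=0.5):
    """Dice coefficient between the thresholded activation map and the
    ground-truth mask: 1.0 = perfect agreement, 0.0 = no overlap."""
    pred = (cam >= threshold).float()
    truth = reference_mask.float()
    intersection = (pred * truth).sum()
    return (2 * intersection / (pred.sum() + truth.sum() + 1e-8)).item()

cam = torch.rand(56, 56)                   # activation map from the network
reference_mask = torch.rand(56, 56) > 0.5  # molecularly verified tumour mask
print(dice_overlap(cam, reference_mask))   # a low score would falsify the hypothesis
```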

“Thanks to the interdisciplinary structures at PRODI, we have the best preconditions for incorporating the hypothesis-based approach into the development of trustworthy biomarker AI in the future, for example to be able to distinguish between certain therapy-relevant tumour subtypes,” concludes Axel Mosig.

 
