Digital assistants aid disease diagnosis

Computed tomography scan of a brain showing a haemorrhage (red region) that caused a stroke. Credit: Alfred Pasieka/SPL

The faster a person is diagnosed after a stroke, the sooner they can be treated and the more cognitive function can be preserved. But it takes time to identify the problem from a brain scan — an average of 87 minutes after the images have been captured for cases flagged as urgent, according to one study1. In that time, tissue dies. Researchers have found that someone experiencing a stroke that cuts off flow through a large blood vessel will typically lose 120 million neurons, 830 billion synaptic connections and 714 kilometres of nerve fibre in 1 hour — the equivalent of ageing by about 3.5 years2.

Cutting down the time it takes radiologists to diagnose people with stroke, therefore, could lead to better outcomes. “They notify me sooner and I can operate,” says Eric Oermann, a neurosurgeon at Mount Sinai Health System in New York City. Oermann, who directs Mount Sinai’s artificial-intelligence (AI) consortium, AISINAI, has been studying whether the technology can help to speed up diagnosis.

He and his colleagues ran computed tomography (CT) images of brains through a type of AI known as a deep neural network1. First, the system was shown images whose radiologist annotations had been interpreted using a natural-language processing tool developed by Oermann and radiologist John Zech, now of Columbia University in New York City. Oermann’s hope was that, given enough example images to study, the algorithm would learn to identify features that distinguish a healthy brain from one experiencing an ischaemic stroke, a haemorrhagic stroke or the build-up of fluid known as hydrocephalus.
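To make the two-stage idea concrete, here is a minimal sketch, not the team’s actual pipeline: crude keyword matching stands in for the natural-language processing step that turns report text into labels, and a small convolutional network stands in for the scan classifier. All names and data are invented for illustration.

```python
# Sketch only: keyword matching substitutes for the real NLP labelling tool.
import torch
import torch.nn as nn

LABELS = ["normal", "ischaemic", "haemorrhagic", "hydrocephalus"]

def weak_label(report: str) -> int:
    """Map a free-text radiology report to a class index (crude stand-in)."""
    text = report.lower()
    for i, keyword in enumerate(LABELS[1:], start=1):
        if keyword in text:
            return i
    return 0  # nothing mentioned, treat as normal

# A small convolutional classifier for single-channel CT slices.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(LABELS)),
)

# Toy batch: eight random 64x64 "scans", labelled from their report text.
scans = torch.randn(8, 1, 64, 64)
reports = ["no acute finding"] * 4 + ["acute haemorrhagic stroke"] * 4
labels = torch.tensor([weak_label(r) for r in reports])

loss = nn.CrossEntropyLoss()(model(scans), labels)
loss.backward()  # gradients for one step; an optimiser would then adjust weights
```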

Their system was unable to match the accuracy of a radiologist in diagnosing these three conditions. But it was good at quickly flagging brain scans that it deemed in need of attention — and therefore could be used to alert radiologists to examine urgent cases more rapidly. That, Oermann says, can make a big difference. Brain scans often sit waiting to be examined by radiologists for four hours or longer. If the system reshuffled the queue according to its assessment of urgency, the wait for a radiologist’s examination could shrink to just a few minutes for the most urgent cases, he says.
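How such a reshuffling might work is simple to sketch. Assuming the model emits a single urgency score per scan (a detail not specified in the study), a priority queue lets flagged cases jump a first-come-first-served reading list:

```python
# Minimal triage-queue sketch; the urgency values are illustrative model outputs.
import heapq
import itertools

queue = []                    # entries: (-urgency, arrival order, scan id)
arrival = itertools.count()   # breaks ties in favour of earlier scans

def enqueue(scan_id: str, urgency: float) -> None:
    heapq.heappush(queue, (-urgency, next(arrival), scan_id))

def next_scan() -> str:
    """The scan a radiologist should read next."""
    return heapq.heappop(queue)[2]

enqueue("scan-001", 0.12)   # routine
enqueue("scan-002", 0.97)   # model suspects a haemorrhage
enqueue("scan-003", 0.08)   # routine

print(next_scan())  # scan-002 is read in minutes rather than hours
```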

Oermann’s view is that AI will have a major impact on medical care. “I think everyone knows it’s going to radically change medicine in the twenty-first century,” he says. Indeed, researchers at hospitals, universities, small start-ups and computing giants are studying ways to use AI to identify and classify various conditions — from breast cancer and rare genetic disorders to depression. And although most algorithms with highly accurate diagnostic capabilities are years away from the clinic, a few might be much closer.

Field of vision

The promise of AI lies in its powerful ability to identify patterns that can elude humans, either because the signs are too subtle or because they emerge only from huge sets of data. The technology has become more viable in the past decade through the use of deep neural networks, which are built to mimic the brain. In these, the nodes that make up the network represent individual neurons, and the mathematical weights that connect them represent synapses. To train a neural network, researchers provide an input of example images, such as brain scans. This is translated into a set of numbers that might describe where each individual pixel in a scan falls on a 100-point scale from black to white. A hidden layer of nodes multiplies those input values by the weights of the connections and produces a numerical output. This output is compared with the correct answer supplied during training, the weights are adjusted to make the two agree better, and the process is repeated. Eventually, the system develops a mathematical model of what a brain haemorrhage looks like, and can say how closely a new scan resembles the images on which it was trained.
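That loop of weighting, comparing and adjusting fits in a few lines. The sketch below uses random numbers in place of real scans and an invented label per image; it shows the textbook procedure, not the code from any of the studies described here:

```python
# One-hidden-layer network trained by gradient descent; data are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 64))                   # 20 "scans", 64 pixel values each
y = (X.mean(axis=1) > 0.5).astype(float).reshape(-1, 1)  # toy label per scan

W1 = rng.normal(scale=0.1, size=(64, 8))   # input-to-hidden weights ("synapses")
W2 = rng.normal(scale=0.1, size=(8, 1))    # hidden-to-output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    hidden = sigmoid(X @ W1)               # hidden nodes weight and sum the pixels
    out = sigmoid(hidden @ W2)             # numerical output between 0 and 1
    err = out - y                          # compare output with the known label
    # Nudge every weight down the error gradient so output and label agree better.
    grad_out = err * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

print(float(np.mean(err ** 2)))            # the error shrinks as training repeats
```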

Saeed Hassanpour, an electrical engineer who studies biomedical data science at the Geisel School of Medicine at Dartmouth College in Hanover, New Hampshire, trained a neural network to classify a common type of cancer called adenocarcinoma from microscope slides containing samples of lung tissue3. How this condition is treated depends on the grade and stage of the tumour, but making that determination can be tricky. Affected cells can fall into any of five distinct subtypes, and most tumours contain a mixture. Some subtypes are associated with high survival rates, but if even a small number of cancer cells are of the deadlier variety, treatment must be more aggressive. Often, pathologists do not agree on what they’re seeing.

Hassanpour trained a neural network using a set of slides on which three Dartmouth pathologists had labelled the cellular patterns they saw. The neural network was then given a set of unannotated slides and asked to identify subtypes for itself. For any given slide, it had a 66.6% chance of agreeing with at least two of the three pathologists.
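That measure of success is easy to state in code. In the sketch below, both the subtype labels and the predictions are invented; the real study compared the network’s per-slide classification with the pathologists’ annotations:

```python
# Fraction of slides where the model matches at least two of three pathologists.
# All labels below are invented for illustration.
model_pred    = ["acinar", "solid", "lepidic", "papillary", "acinar"]
pathologist_1 = ["acinar", "solid", "lepidic", "micropapillary", "solid"]
pathologist_2 = ["acinar", "solid", "acinar", "papillary", "acinar"]
pathologist_3 = ["papillary", "solid", "lepidic", "papillary", "mucinous"]

agree = [
    sum(p == m for p in (p1, p2, p3)) >= 2
    for m, p1, p2, p3 in zip(model_pred, pathologist_1,
                             pathologist_2, pathologist_3)
]
print(f"{100 * sum(agree) / len(agree):.1f}% of slides")  # here: 80.0%
```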
