The idea behind using a neural network for image recognition is that you don’t have to tell it what to look for in an image. You don’t even have to worry about what it’s looking for. With enough training, the neural network should be able to pick out details with which to make accurate identifications.
For tasks like figuring out whether there's a cat in an image, neural networks don't offer much, if any, advantage over the actual neurons in our visual system. But where they can potentially shine is in cases where we don't know what to look for. Images can carry subtle information that a human doesn't know how to read but that a neural network, given the right training, can pick up.
Now researchers have done just that by getting a deep learning algorithm to identify heart disease risks using an image of a patient’s retina.
The idea isn’t as crazy as it might sound. The retina has a rich collection of blood vessels, and problems that affect circulation as a whole, such as high cholesterol or high blood pressure, leave detectable traces there. So a research team made up of folks from Google and Verily Life Sciences decided to see how well a deep learning network could extract that information from retinal images.
To train the network, they used a total of nearly 300,000 patient images tagged with information relevant to heart disease, such as age, smoking status, blood pressure, and BMI. Once trained, the system was run on another 13,000 images to see how well it performed.
By simply looking at the retinal images, the algorithm was usually able to get to within 3.5 years of a patient’s actual age. It also did well in estimating the patient’s blood pressure and body mass index. Given those successes, the team then trained a similar network to use the images to estimate the risk of a major heart problem within the next five years. It ended up having similar performance to a calculation that used many of the factors mentioned above to estimate heart risk, but the algorithm did it all based on an image rather than a few tests and a detailed questionnaire.
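Getting "to within 3.5 years" of a patient's actual age is a mean absolute error, a standard way to score this kind of regression. A minimal sketch of the metric, using made-up predictions and ages rather than anything from the paper:

```python
def mean_absolute_error(predicted, actual):
    """Average absolute difference between predicted and true values."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Hypothetical predicted vs. actual ages for five patients (illustration only)
predicted_ages = [52.1, 60.4, 47.8, 71.2, 58.9]
actual_ages = [55, 58, 49, 68, 60]

print(mean_absolute_error(predicted_ages, actual_ages))  # prints 2.16 (years)
```

A network evaluated this way would report one number like this over all 13,000 test images, so a single badly misjudged patient barely moves the score.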
The nice thing about this work is that the algorithm was set up in such a way that it could report what it focused on to make its diagnoses. For things like age, smoking status and blood pressure, the software focused on characteristics of the blood vessels. Training to predict gender caused it to focus on specific features scattered around the eye, while body mass index ended up without any clear focus, suggesting signals from BMI are scattered across the retina.
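The general idea behind this kind of reporting is a saliency map: score each input pixel by how strongly the model's output depends on it, then highlight the high-scoring regions. The paper's actual mechanism is more sophisticated, but the core concept can be sketched with a toy linear model, where the sensitivity of the output to each pixel is just that pixel's weight (all names and numbers here are hypothetical):

```python
def linear_predict(weights, pixels):
    """Toy stand-in for a network: a weighted sum of pixel intensities."""
    return sum(w * x for w, x in zip(weights, pixels))

def saliency(weights):
    """Sensitivity of the output to each pixel.

    For a linear model, d(output)/d(pixel) is the pixel's weight, so the
    saliency is just |weight|. A deep network would get these values by
    backpropagating gradients from the output to the input image.
    """
    return [abs(w) for w in weights]

weights = [0.0, 0.9, -0.1, 0.05]   # hypothetical: pixel 1 dominates
pixels = [0.2, 0.8, 0.5, 0.1]      # hypothetical "retinal" intensities

scores = saliency(weights)
most_informative = scores.index(max(scores))
print(most_informative)  # prints 1: pixel 1 drives the prediction most
```

Applied to a real retina network, high-saliency regions trace out whatever anatomy the model leans on, which is how one can see it attending to blood vessels for blood pressure but to no particular region for BMI.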
The researchers say that even a training set of 300,000 images is small for a deep learning algorithm, so they think they could do better with more data to work with. And improvement is needed: they note that matching the diagnostic calculation is not an especially high bar, since the calculation itself comes with a large uncertainty. With some improvement, the algorithm could prove a useful diagnostic tool, as retinal images are often taken to screen for eye problems associated with diabetes, which in turn is often associated with heart disease.
Nature Biomedical Engineering, 2018. DOI: 10.1038/s41551-018-0195-0