January 7, 2018 weblog
Eyes as a portal to cardiovascular risk factors
Researchers from Google Research, Verily Life Sciences, and the Division of Cardiovascular Medicine, Stanford School of Medicine, are showing that the eyes have it in offering a portal to one's health status.
The team is treating the eye as the information portal. Their work is described in the paper, "Predicting Cardiovascular Risk Factors from Retinal Fundus Photographs using Deep Learning," which is up on arXiv.
Ryan Poplin, Avinash Varadarajan, Katy Blumer, Yun Liu, Michael McConnell, Greg Corrado, Lily Peng, and Dale Webster are the authors.
"We predict cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (within 3.26 years), gender (0.97 AUC), smoking status (0.71 AUC), HbA1c (within 1.39%), systolic blood pressure (within 11.23mmHg) as well as major adverse cardiac events (0.70 AUC)."
A convolutional neural network was their tool. Mihai Andrei, writing in ZME Science, described this type of network as "a feed-forward algorithm inspired by biological processes, especially the connectivity pattern between neurons, commonly used in image analysis."
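At the heart of such a network is the convolution operation itself: a small filter (kernel) slides over an image, and at each position the sum of element-wise products becomes one output value. A minimal plain-Python sketch of that idea (illustrative only; real networks stack many filters and learn their weights from data):

```python
# Minimal sketch of the 2-D convolution at the core of a convolutional
# neural network. A small kernel slides over a grayscale image; each
# output pixel is the sum of element-wise products at that position.

def conv2d(image, kernel):
    """Valid (no-padding) 2-D convolution of an image with a kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel applied to an image whose left half is dark (0)
# and right half is bright (1): the response peaks exactly at the edge.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1], [-1, 1]]
result = conv2d(image, kernel)  # each row: [0.0, 2.0, 0.0]
```

In a trained network, kernels like this are not hand-designed; the training process learns whichever filters best predict the target, which is why the models can pick up on retinal features humans had not associated with these risk factors.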
Ryan Whitwam in ExtremeTech talked about their methodology. To develop its retina-scanning neural network, Google needed a lot of data. It used retinal images from patients to set up the network. Later, it validated the network's deep learning abilities using two different data sets of patients.
The authors wrote: "We developed deep learning models using retinal fundus images from 48,101 patients from UK Biobank and 236,234 patients from EyePACS, and validated these models using images from 12,026 patients from UK Biobank and 999 patients from EyePACS."
The team used an open-source software library for machine intelligence, TensorFlow.
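A minimal sketch, in TensorFlow/Keras, of the kind of convolutional model such a pipeline builds. The layer sizes and the tiny input shape here are illustrative assumptions for brevity; the paper's actual models were based on the much larger Inception-v3 architecture trained on retinal fundus photographs:

```python
import tensorflow as tf

# Illustrative stand-in for a fundus-image classifier: a few convolutional
# layers feeding a single sigmoid output (e.g. a binary risk factor such
# as smoking status). Layer sizes and input shape are assumptions, not
# the paper's architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),          # small stand-in image
    tf.keras.layers.Conv2D(8, 3, activation="relu"),   # learnable image filters
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # probability of the label
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```

Continuous targets such as age or systolic blood pressure would instead use a linear output and a regression loss; the same convolutional backbone can serve both.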
The authors noted that "Markers of cardiovascular disease, such as hypertensive retinopathy and cholesterol emboli, can often manifest in the eye."
So, could they accurately predict health metrics? After all, wrote the authors, "Risk stratification is key to identifying and managing groups at risk for cardiovascular disease, which remains the leading cause of death globally."
ZME Science described the results: "They were able to predict age (within 3.26 years), and within an acceptable margin, gender, smoking status, systolic blood pressure as well as major adverse cardiac events."
The authors presented more numbers in their results: age (within 3.26 years), gender (0.97 AUC), smoking status (0.71 AUC), HbA1c (within 1.39%), systolic blood pressure (within 11.23mmHg) as well as major adverse cardiac events (0.70 AUC).
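For readers unfamiliar with the metric, an AUC figure such as "0.97" is the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A small sketch of that rank-based definition, with hypothetical scores (not data from the paper):

```python
# AUC computed directly from its rank definition: the fraction of
# positive/negative pairs in which the positive case is scored higher
# (ties count as half).

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for six patients (1 = has the risk factor).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(auc(scores, labels))  # 8 of 9 pairs ranked correctly -> 0.888...
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, which is why 0.97 for gender indicates a far stronger signal than 0.70-0.71 for smoking status and cardiac events.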
In the bigger picture, Scientific American wrote about deep learning sharpening views of cells and genes. "Neural networks are making biological images easier to process." Amy Maxmen said scientists are using the CNN approach "to find mutations in genomes and predict variations in the layout of single cells."
Maxmen wrote that the team's work "is part of a wave of new deep-learning applications that are making image processing easier and more versatile—and could even identify overlooked biological phenomena."
Several reports quoted Philip Nelson, a director of engineering at Google Research, saying, "machines can now see things that humans might not have seen before."
Andrei turned to why this matters as a research path. "Doctors today rely heavily on blood tests to determine cardiovascular risks, so having a non-invasive alternative could save a lot of costs and time, while making visits to the doctor less unpleasant."
The authors are aware that more research is warranted.
They wrote, "The dataset size is relatively small for deep learning." They said a significantly larger dataset or a population with more cardiovascular events may enable more accurate deep learning models to be trained and evaluated with high confidence.
Whitwam raised the point that the study is still just in preprint and "Other researchers will need to go over the models and validate the results before we'll know the impact, but it could be a boon to medicine."
Since a retina scan is simple and noninvasive, even if not 100 percent accurate, it could nonetheless provide more data to doctors, Whitwam said on Friday.
The paper's abstract: Traditionally, medical discoveries are made by observing associations and then designing experiments to test these hypotheses. However, observing and quantifying associations in images can be difficult because of the wide variety of features, patterns, colors, values, shapes in real data. In this paper, we use deep learning, a machine learning technique that learns its own features, to discover new knowledge from retinal fundus images. Using models trained on data from 284,335 patients, and validated on two independent datasets of 12,026 and 999 patients, we predict cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (within 3.26 years), gender (0.97 AUC), smoking status (0.71 AUC), HbA1c (within 1.39%), systolic blood pressure (within 11.23mmHg) as well as major adverse cardiac events (0.70 AUC). We further show that our models used distinct aspects of the anatomy to generate each prediction, such as the optic disc or blood vessels, opening avenues of further research.
© 2018 Tech Xplore