Artificial intelligence (AI) has the potential to be “transformative” in medicine, including in vascular surgery. This is according to Alan Karthikesalingam, the UK Research Lead at Google Health (London, UK) and a vascular surgery lecturer, who spoke on the topic at this year’s British Society of Endovascular Therapy (BSET) Annual Meeting (24 June, online). The presenter noted in particular that three-dimensional (3D) positioning and navigation is an “exciting” area of development in the field of vascular surgery, but stressed that “high-quality, prospective, randomised evidence” is required to prove clinical- and cost-effectiveness.
Karthikesalingam first gave an overview of deep learning, which he described as the field of AI that has seen “the most impressive progress relating to medicine in recent years”. He noted that deep learning is something most people use every day: it powers features such as finding oneself, or one’s pets, in a phone’s photo library, and it offers autocomplete suggestions in email programmes.
“Most computer programmes are limited by the instructions they are provided”, Karthikesalingam stated, noting that the difference with deep learning is that it comprises mathematical functions that learn from examples.
Karthikesalingam spoke specifically about the field of supervised learning and, in particular, computer vision applications. One of the “most famous” applications of supervised learning, he noted, is object recognition in computer vision, whereby scientists train a deep neural network to be able to classify the presence of an object in an image, for example detecting the presence of a cat or dog in a photograph. “Over hundreds of thousands, maybe even millions of examples,” the presenter detailed, “these networks become really good at the task, to the point where they can classify even really ambiguous images correctly, sometimes even to the levels of humans”.
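A minimal sketch of the supervised-learning loop described above, assuming nothing beyond NumPy: it fits a logistic-regression classifier, far simpler than a deep neural network but trained in the same way, from labelled examples, to synthetic two-class data. The data, feature clusters, and hyperparameters are all invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_classifier(X, y, lr=0.1, steps=2000):
    """Fit a logistic-regression classifier by gradient descent.

    X: (n_samples, n_features) array of inputs (e.g. image features).
    y: (n_samples,) array of 0/1 labels (e.g. 0 = cat, 1 = dog).
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)           # predicted probability of class 1
        grad_w = X.T @ (p - y) / len(y)  # gradient of cross-entropy loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return (sigmoid(X @ w + b) >= 0.5).astype(int)

# Synthetic "examples": two feature clusters standing in for two classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)),   # class 0
               rng.normal(+1, 0.5, (50, 2))])  # class 1
y = np.array([0] * 50 + [1] * 50)

w, b = train_classifier(X, y)
accuracy = np.mean(predict(X, w, b) == y)
```

The point of the sketch is the shape of the process, not the model: the program is never told what distinguishes the classes, only shown labelled examples, and the parameters `w` and `b` are adjusted until predictions match the labels. A deep network replaces the single linear function with millions of stacked ones, which is what lets it handle ambiguous photographs rather than tidy clusters.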
Medical applications of AI
Addressing BSET viewers, the presenter detailed some work Google has carried out into the application of supervised learning in computed tomography (CT) scans, training the technology to learn to identify cancerous nodules in screening for lung cancer.
He also stressed that deep learning can help medical professionals to understand “something a bit more nuanced” about an image rather than simply either the presence or absence of a particular disease. “We might want to define, for example, the boundaries of a tissue, either normal anatomy or a pathology”, he said, noting a specific application in this case as endovascular aneurysm repair (EVAR) planning.
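The distinction between whole-image classification and boundary delineation can be sketched as per-pixel labelling: a segmentation model assigns a label to every pixel, and a boundary can then be read off the resulting mask. In the hypothetical Python sketch below, simple thresholding stands in for a trained network’s per-pixel output; the synthetic 6×6 “scan” and the threshold are invented for illustration.

```python
import numpy as np

def segment(image, threshold):
    """Toy per-pixel segmentation: label each pixel foreground/background.

    A real system would use a trained network to produce this mask;
    thresholding stands in for that here.
    """
    return (image >= threshold).astype(int)

def boundary_pixels(mask):
    """Count pixels on the boundary of the segmented region: foreground
    pixels with at least one background 4-neighbour."""
    padded = np.pad(mask, 1, constant_values=0)
    count = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] == 1:
                neighbours = [padded[i, j + 1], padded[i + 2, j + 1],
                              padded[i + 1, j], padded[i + 1, j + 2]]
                if min(neighbours) == 0:
                    count += 1
    return count

# Synthetic 6x6 "scan" with a bright 3x3 region standing in for a vessel.
image = np.zeros((6, 6))
image[2:5, 2:5] = 1.0
mask = segment(image, threshold=0.5)
area = int(mask.sum())        # 9 pixels inside the region
edge = boundary_pixels(mask)  # 8 pixels on its boundary
```

For a task such as EVAR planning, it is exactly these derived quantities, areas, diameters, and boundary outlines per slice, rather than a single present/absent label, that make the output clinically useful.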
The ability of this technology to identify new biomarkers and new signals is another application, Karthikesalingam added. He explained, for example, that these systems could be used to identify markers of disease progression, to help predict response to therapy, or to identify novel predictors of disease.
The presenter remarked that this technology “genuinely has the potential to help our patients” by improving the accuracy and availability of care and reducing unwarranted variation, particularly in the field of medical imaging.
Issues in translation
One of the key issues in the translation of this technology to clinical practice relates to the interaction of deep learning systems with clinicians, including how scientists design user interfaces and tools for clinicians so that these systems are accurate, well-calibrated, and easy to use. “This is an important area of research at the moment”, Karthikesalingam relayed.
The presenter also conveyed various “pitfalls” in how this kind of research can be carried out “meaningfully and properly”. He detailed that many of the routine datasets collected in clinical practice “can encode unspoken biases and representation issues in clinical practice today”, especially around treatment and equity of provision of care for underprivileged groups and minorities.
In addition, he noted “important” concerns around data quality, data security, privacy, consent, and confidentiality, as well as regulatory considerations.
High-quality data needed
Moderator Simon Neequaye (Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK) was keen to know Karthikesalingam’s thoughts on the role of deep learning in 3D positioning and navigation with “systems like Cydar [Cydar Medical], FORS [Fiber Optic RealShape, Philips], and robotics”.
“This is an exciting area”, Karthikesalingam responded, noting that he suspects there will be an increase in the availability of such tools over the next five years. He stressed that “the proof will be whether those tools are actually useful in clinical practice”, concluding, however, that the technology is “highly promising” in this area. “If the trends that we see in screening and in diagnostic imaging carry on to interventional settings, it should really make a difference this decade”.
Closing his presentation, Karthikesalingam was keen to emphasise that determining whether these sorts of systems will translate into clinically effective and cost-effective tools will necessitate “high-quality, prospective, randomised evidence in the years to come”.