“A hugely exciting time”: Experts focus in on artificial intelligence in the vascular field

L-R: Tom Carrell, Richard Linder and Randy Moore

Can we trust artificial intelligence (AI)? Will AI replace physician judgement? Experts recently addressed these and other key questions amidst an expansion of AI technologies in the vascular space.  

AI is “here to stay”. This is according to Randy Moore (Calgary, Canada), vascular surgeon and chief medical officer (CMO) of ViTAA Medical Solutions, speaking during a recent CX Vascular Live roundtable discussion. Moore was joined by Richard Linder (Sandy, USA), chair and chief executive officer of Xenter, and Tom Carrell (Barrington, UK), founder and CMO of Cydar Medical and formerly a vascular surgeon, to examine the topic of AI in the medical field.

“Can we trust AI?” was Moore’s opening question to Linder and Carrell, having stated that the discussion would focus on some of the “more controversial” aspects of the topic. Linder’s response was twofold: “not yet” and “it depends”. He elaborated: “I think […] the quality of data, the size of the data, the transparency […] and the sourcing of the data, and how these algorithms are drafted are very important pieces of that question.” 

Carrell concurred that “it depends”. He views AI as an “enabling” technology, or a “complementary additional ability,” adding an extra layer of assurance on top of clinical judgement. However, Carrell warned, “you cannot trust anything—you cannot trust your own clinical judgement 100%, you cannot trust the AI 100%”. 

Moore summarised that perhaps AI should be categorised as a “trusted advisor”. On this note, however, he acknowledged that one of the main challenges is the “natural assumption” that, because “huge bodies of data” are being entered, the outputs are truthful. “We have to make sure that the data that are being entered and being used to drive the algorithms are representative of the general population and free of bias,” he stressed.

Linder agreed with Moore’s point, sharing that he and the team at Xenter have been exploring the concept of physical intelligence for gathering the real-time data needed to inform a clinical decision. These data are put into a healthcare cloud, he explained, offering a standardised, global alternative to datasets tied to specific healthcare records or healthcare systems.

At Cydar Medical, Carrell shared that the team is using data generated through real-world product use to address the issue of bias. Moore added that at ViTAA Medical Solutions, while the team is also focused on developing representative AI technology, “more importantly,” they have built in “explainable” as opposed to “black box” AI. Moore explained that this allows the team to conduct an auditing process and ensures that physicians are aware of the outputs that are being generated so they can make a more informed clinical decision. 

The conversation then moved on to whether AI will ever replace physician judgement. “No,” was Carrell’s short answer, reiterating his belief that the two are “complementary”. He detailed: “I think there are things that humans can do and will be able to do better than AI, things that are kind of outliers where you are relying on other bits of information, other bits of experience.” Carrell does believe AI has a place, however. In contrast to what he summarised as the “plasticity” inherent in human decision-making, AI is “very good at doing some things that humans find time consuming and laborious”. 

Linder continued that the diagnostic capabilities of a physician, honed by what is often decades of experience seeing patients, are “not going to be replaced by AI”. He gave the example of a patient saying ‘I feel great’ in a tone that could range from laboured to upbeat, which are “two totally different responses”. He stressed that AI in the form of voice recognition would simply not pick up such nuanced details.

While Moore concurred that physician judgement will never be replaced by AI, he warned that physicians who do not incorporate AI into their practice will be replaced by those who do. 

“Completely,” Carrell said in agreement, adding that AI “is here today, it is coming, and we are seeing it around in all sorts of elements in our daily life”. He equated ignoring AI with ignoring the advent of multidisciplinary teams, or the advent of good clinical practice. 

According to Linder, AI could be put to beneficial use in the development of clinical decision support tools. In this capacity, he said, it might be able to answer some key questions, for instance about the accuracy and size of a dataset. “There will be a value placed on those decision support tools,” Linder posited, “and I think that is kind of where we all have to focus for each of our companies and products too.”

The discussion then turned to accuracy, with Moore posing the question of how to deal with AI that “hallucinates”. Carrell responded by noting that, at Cydar, the team makes sure the information being presented is “visually inspectable”. He elaborated that, as Cydar technology deals primarily with imaging data, it is visually presented to the user so they can fact-check it, prompting them to ask the pertinent questions: “Does that make sense? Is that matching up to what I am expecting to see?”

Linder continued that he and his team use “multiple modalities” to validate the accuracy of any AI-generated data. The combination of intravascular ultrasound (IVUS), optical coherence tomography (OCT) and angiography can be used to “tri-register,” for example—where multiple imaging modalities are partnered with physiological data and presented in a simple way to enable fast decision-making. “That is the type of platform technology that we are trying to develop,” Linder noted. 

Finally, the trio considered some of the challenges ahead for AI. Linder described data security as “probably one of the most significantly looming issues” in the field. “Patients ultimately own their data, and they want to make sure that they receive their data, that it is secure, and that they can understand and comprehend the clinical decisions [made],” he stressed. 

In addition to this, Moore pointed out that, while there are clinicians who are interested in bringing technology forward, AI “adds an extra layer of complexity to the processes that already require complex [quality management system] processes”.

Taking a step back to look at the broader picture, Carrell opined that this is “a hugely exciting time”. He expressed his hope that “we have finally got the tools” to address some challenges that have been around for a quarter of a century or so.  

Linder, while agreeing with Carrell, also cautioned that the power of predictive data and the regulatory side of things “need to catch up,” but postulated that when they do, AI will have a big impact. “We just need to keep an eye on […] clinical outcomes and make sure that we are improving those,” he stressed.

