Research indicates that healthcare professionals perform better than risk prediction tools at predicting 30-day mortality, morbidity, and revision risk in patients following major lower limb amputation. The PERCEIVE study is led by chief investigator David Bosanquet (Royal Gwent Hospital, Newport, UK), funded by Health and Care Research Wales’ Research for Patient and Public Benefit scheme, and delivered by the Centre for Trials Research (Cardiff University) and the Vascular and Endovascular Research Network (VERN). Presenting early data from the PERCEIVE study during the Sol Cohen Prize Session at the UK Vascular Societies’ Annual Scientific Meeting (VSASM 2021; 1–3 December, Manchester, UK), Brenig Gwilym (Royal Gwent Hospital) stressed that determining the accuracy of healthcare professionals’ predictions is extremely important when validating risk prediction tools to put their performance into context.
“There are a lot of risk prediction tools and they seem to perform very well,” Gwilym, a trainee vascular surgeon, told the VSASM audience. However, he remarked that “the reality is, only a few validation studies are performed”. It is known that healthcare specialists are less accurate at predicting long-term outcomes than short-term ones, Gwilym noted, and therefore the research team set out to investigate how accurate healthcare professionals and amputation risk prediction tools are at predicting outcomes. Additionally, the investigators aimed to analyse the accuracy of predictions across seniority and profession.
The team prospectively collected data for 537 patients across 41 centres over a period of seven months, from October 2020 to May 2021. Inclusion criteria were emergency and elective procedures, with chronic limb-threatening ischaemia, diabetes and acute limb ischaemia as indications. Paediatric, trauma, cancer and revision cases were excluded from this study.
The presenter detailed that healthcare professionals (2,500 predictions)—including consultant surgeons, consultant anaesthetists, trainee surgeons (registrars), and trainee anaesthetists (registrars)—were asked to give preoperative predictions for 30-day outcomes as a percentage for mortality, morbidity and revision. A total of 11 mortality risk prediction tools were included, with one for morbidity and one for revision risk.
Gwilym mentioned that, out of the 52 deaths and 100 morbidity cases, 20 and 17 were due to COVID-19, respectively. Healthcare professionals were deemed “good at predictions, with a general overestimation of risk,” with trainee surgeons performing the worst. Compared to the risk prediction tools, healthcare professionals predicted mortality better in all cases but one, the presenter reported.
In addition, the researchers found that healthcare professionals predicted morbidity less accurately, with consultants performing slightly better. Gwilym nevertheless described healthcare professionals as “good at predicting revision risk”. However, he emphasised that “experience matters—so consultant surgeons were more accurate than the trainees at predicting revision risk”.
Gwilym also drew attention to current global events: “We recognise that COVID-19 has been confounding our results significantly here, especially when you consider mortality and morbidity. Therefore, we performed a sensitivity analysis, excluding all patients who tested positive for COVID-19 pre- or postoperatively”. Communicating the results of this analysis, he relayed that healthcare professionals’ and risk prediction tools’ predictions “do improve, but not by an awful lot”.
To conclude, Gwilym summarised the findings of PERCEIVE, stating that healthcare professionals were “reasonably good” at predicting mortality and revision risk in this cohort, but that morbidity proved difficult to predict. He also highlighted the need for “more extensive validation of risk prediction tools” in future studies.