2 Evidence gaps
This section describes the evidence gaps, why they need to be addressed, and their relative importance for future committee decision making.
The committee will not be able to make a positive recommendation without the essential evidence gaps (see section 2.1) being addressed. The company can strengthen the evidence base by also addressing as many other evidence gaps (see section 2.2) as possible. This will help the committee to make a recommendation by ensuring it has a better understanding of the patient or healthcare system benefits of the technology.
2.1 Essential evidence for future committee decision making
The impact of AI-derived software on a healthcare professional's ability to identify people for whom thrombolysis and thrombectomy are suitable
There is limited evidence on the impact of using AI software alongside healthcare professional interpretation to detect relevant features, such as large vessel occlusions or intracerebral haemorrhage, and to make decisions about the use of thrombolysis and thrombectomy.
The impact of the software on how many people have thrombolysis or thrombectomy
Current evidence about the impact of AI software on how many people have thrombolysis or thrombectomy is limited. It is also confounded by issues such as lack of clarity about whether study populations were comparable before and after introduction of AI software, and the unknown influence of other changes to the care pathway around the time of implementation, for example, because of the COVID-19 pandemic. Further evidence, minimising the limitations and confounding issues affecting the current evidence, is needed to support future committee decision making.
The impact of using the software on time to thrombolysis or thrombectomy
The available evidence suggested that time to treatment with thrombolysis or thrombectomy reduced after the introduction of AI software. But all studies were retrospective and had limitations. Further evidence is needed comparing time to treatment with and without AI software, accounting for confounding factors such as ring-fencing stroke beds and increasing staff numbers. This should also consider the impact on time to treatment for people transferred to other centres for thrombectomy, and whether image-sharing functionality was used to facilitate this.
2.2 Evidence that further supports committee decision making
How often the software is unable to analyse CT brain scans and reasons for this
Software failure could delay diagnosis and access to time-sensitive treatments. However, only 1 study (Kauw et al. 2020) reported technical failure outcomes for any AI software, and clinical experts advised that the failure rate in clinical practice may be higher than the 11% reported. Evidence is needed to establish how often each AI software is unable to guide treatment decisions in stroke, and the reasons for this.