
How this position statement was developed

This position statement has been developed through consideration of how AI methods could be applied to all aspects of evidence considered by NICE. For information, the following sections of this document outline this work. They summarise potential uses of AI methods that NICE is aware have been applied, or are being researched, for HTA-related purposes.

We understand that the use of AI is a rapidly developing field, and so the uses listed in the following sections are not expected to be exhaustive. Additionally, our reflections on the potential uses do not represent endorsement or acceptance of these methods in evidence considered by NICE. Organisations considering using AI methods in their evidence should discuss this with NICE (see paragraph 1.3 in the section on our position on the use of AI in evidence generation and reporting).

Systematic review and evidence synthesis

1.17

Conventional literature search and review processes are largely undertaken manually, and typically require substantial time and resources. AI methods have the potential to automate various steps in these processes.

1.18

Machine learning methods and large language model prompts may be able to support evidence identification by generating search strategies, and by automating the classification of studies (for example, by study design), the primary and full-text screening of records to identify eligible studies, and the visualisation of search results (Cochrane 2024; ISPOR 2024; Fleurence et al. 2024; NICE's guidelines manual).
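
For illustration only, the following minimal sketch shows one form automated screening support could take: a simple supervised classifier that ranks title and abstract records by likelihood of eligibility. The records, labels and model choice are invented for this example and do not represent a method endorsed or accepted by NICE.

```python
# Illustrative sketch only: a simple supervised screening classifier that ranks
# title/abstract records by likelihood of eligibility. The toy records and labels
# are hypothetical; a real workflow would train on a labelled pilot screening set
# and retain human review of all screening decisions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = include at title/abstract stage, 0 = exclude
train_texts = [
    "Randomised controlled trial of drug A versus placebo in adults with condition X",
    "Cohort study of drug A prescribing patterns in primary care",
    "Case report of a rare adverse event with drug B",
    "Double-blind RCT comparing drug A with drug C in condition X",
]
train_labels = [1, 0, 0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Rank unscreened records so reviewers can prioritise likely-eligible studies
new_records = [
    "Open-label extension of an RCT of drug A in condition X",
    "Editorial on the cost of drug A",
]
for text, prob in zip(new_records, model.predict_proba(new_records)[:, 1]):
    print(f"{prob:.2f}  {text}")
```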

1.19

Large language models could be used to automate data extraction from published quantitative and qualitative studies, by providing the AI tool with prompts that specify the preferred output (Cochrane 2024; ISPOR 2024; Fleurence et al. 2024; Reason et al. 2024a). This is less well established than the uses described in paragraph 1.18.
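
For illustration only, a minimal sketch of a structured data-extraction prompt is shown below. The field names are hypothetical, and `call_llm` is a placeholder for whichever model interface an organisation uses; extracted values would still need checking against the source publication by a human reviewer.

```python
# Illustrative sketch only: constructing a structured data-extraction prompt for a
# large language model. `call_llm` is a hypothetical placeholder for the model API
# used; it is not a specific product or library.
import json

EXTRACTION_PROMPT = """Extract the following fields from the study text below and
return them as JSON with exactly these keys:
  study_design, sample_size, intervention, comparator, primary_outcome, effect_estimate
If a field is not reported, use null.

Study text:
{study_text}
"""

def extract_study_data(study_text: str, call_llm) -> dict:
    """Send the extraction prompt to the model and parse the JSON reply."""
    reply = call_llm(EXTRACTION_PROMPT.format(study_text=study_text))
    return json.loads(reply)

# Example usage (hypothetical): extract_study_data(full_text, call_llm=my_model_client)
```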

1.20

Large language models could be provided with prompts to generate the code required to synthesise extracted data in the form of a (network) meta-analysis (Reason et al. 2024b). This is less well established than the uses described in paragraph 1.18.
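
For illustration only, the sketch below shows the kind of pairwise synthesis code a large language model might be prompted to generate: a fixed-effect inverse-variance pooling of log hazard ratios. The study estimates are invented, and any generated analysis code would need to be specified, checked and validated by a human analyst.

```python
# Illustrative sketch only: fixed-effect inverse-variance pooling of hypothetical
# study-level log hazard ratios, as an example of generated synthesis code.
import numpy as np

# Hypothetical study-level log hazard ratios and standard errors
log_hr = np.array([-0.22, -0.35, -0.10])
se = np.array([0.11, 0.15, 0.09])

weights = 1.0 / se**2
pooled = np.sum(weights * log_hr) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"Pooled HR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(ci_low):.2f} to {np.exp(ci_high):.2f})")
```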

1.21

We are aware that Cochrane is developing guidance on the responsible use of AI in evidence synthesis (Cochrane 2024), and the Guidelines International Network has established a working group that will produce guidance and resources (GIN 2024). These are likely to be useful sources of good practices for submitting organisations seeking to use such methods.

Clinical evidence

1.22

Clinical-effectiveness evidence is typically informed by clinical trials on the intervention, clinical trials on the comparator(s) and real-world data. It may include evidence to quantify a treatment effect, establish a side effect profile, or assess the generalisability of trial data to the NHS population.

1.23

AI can be used in trial design, for example to define the inclusion and exclusion criteria and to support participant retention. Pattern recognition and machine learning may be used to avoid excluding people based on factors that do not affect the treatment response. Dosage levels, sample size and trial duration can also be optimised using AI approaches. Natural language processing can be used to mine electronic health records, for example to identify people who meet the trial criteria and have the highest potential for benefit, and to support side effect reporting (Datta et al. 2024; Fleurence et al. 2024; Harrer et al. 2019; Padula et al. 2022).
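
For illustration only, the sketch below shows a very simple rule-based pass over free-text clinical notes to flag potentially trial-eligible people. The notes, criteria and thresholds are invented; a real pipeline would use validated clinical natural language processing tools and clinician review rather than keyword matching alone.

```python
# Illustrative sketch only: flagging potentially trial-eligible people from
# hypothetical free-text notes using simple pattern matching.
import re

notes = {
    "patient_001": "Type 2 diabetes diagnosed 2019. HbA1c 62 mmol/mol. eGFR 78.",
    "patient_002": "Type 2 diabetes. HbA1c 48 mmol/mol. eGFR 35, CKD stage 3b.",
}

def potentially_eligible(note: str) -> bool:
    """Flag notes mentioning type 2 diabetes with HbA1c above a (hypothetical) trial threshold."""
    has_t2dm = bool(re.search(r"type 2 diabetes", note, re.IGNORECASE))
    hba1c = re.search(r"HbA1c\s+(\d+)", note, re.IGNORECASE)
    return has_t2dm and hba1c is not None and int(hba1c.group(1)) >= 58

flagged = [pid for pid, note in notes.items() if potentially_eligible(note)]
print(flagged)  # ['patient_001']
```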

1.24

AI approaches can also be used to identify and adjust for limitations in clinical data. For example, pattern recognition can identify relevant covariates that influence treatment response, which can then be adjusted for in the statistical analyses (Padula et al. 2022).

1.25

AI methods can take into account complex, non-linear relationships between covariates, producing models with fewer structural assumptions than parametric models. This may be particularly useful for improving predictive performance and reducing bias in causal inference, with benefits also for the precision of the effect estimate and estimates of uncertainty (Padula et al. 2022).
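
For illustration only, the sketch below compares a parametric model with a more flexible machine learning model on simulated data containing a non-linear covariate effect. The data-generating process is entirely artificial and is used only to show how fewer structural assumptions can improve predictive fit.

```python
# Illustrative sketch only: comparing a linear-terms logistic regression with a
# gradient-boosted model on simulated data with a non-linear, interacting effect.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(40, 90, n)
biomarker = rng.normal(0, 1, n)
# Outcome risk depends non-linearly on age (U-shaped) with an age-biomarker interaction
logit = -1.0 + 0.002 * (age - 65) ** 2 + 0.8 * biomarker * (age > 75)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
X = np.column_stack([age, biomarker])

for name, model in [("logistic (linear terms)", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC {auc:.2f}")
```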

1.26

AI approaches can be used to produce synthetic data and generate external control arms (Fleurence et al. 2024). This may be applicable when it is unethical to include a placebo arm in a trial. It can also be used to predict clinical effectiveness in different populations, for example, by applying data from a clinical study to a population with different characteristics (Ling et al. 2023; Piciocchi et al. 2024).
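
For illustration only, the sketch below generates a simple synthetic external control arm by sampling from distributions fitted to published summary statistics of a historical cohort. The means, standard deviations and correlation are invented; real synthetic-data and external-control methods are considerably more sophisticated and require careful assessment of bias and comparability.

```python
# Illustrative sketch only: drawing a synthetic control cohort from hypothetical
# historical summary statistics.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical historical summary statistics: mean age and mean baseline score
means = np.array([68.0, 42.0])
sds = np.array([9.0, 11.0])
correlation = 0.3
cov = np.outer(sds, sds) * np.array([[1.0, correlation], [correlation, 1.0]])

# Draw a synthetic cohort of 200 control "patients" with correlated covariates
synthetic_controls = rng.multivariate_normal(means, cov, size=200)
print(synthetic_controls.mean(axis=0))  # should be close to the historical means
```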

1.27

Natural language processing may be used to analyse large amounts of information. This could be applied, for example, to generate an executive summary of the clinical evidence. It may also be used to simplify technical language for the purpose of creating a lay summary (Velupillai et al. 2018; Saleh et al. 2024).

1.28

When AI is used in clinical evidence generation, reporting should be transparent and make use of relevant checklists such as PALISADE to justify its use (Padula et al. 2022) and TRIPOD+AI to explain AI model development (Collins et al. 2024). Any AI approach used (for example, a cohort selection model) should be considered part of the clinical trial and full details provided within a submission.

Real-world data and analysis

1.29

Real-world data may increasingly lend itself to AI approaches as the accessibility and standardisation of large datasets reflecting routine care and real-world populations improve. AI approaches have several potential roles in supporting real-world evidence across numerous stages of evidence generation.

1.30

AI methods may have a role to play in data processing before the development of real-world evidence. For example, natural language processing approaches are being used to generate structured data from unstructured real-world data (Keloth et al. 2023; Fleurence et al. 2024; Li et al. 2022; Soroush et al. 2024). Approaches such as multimodal data integration can combine different data sources into a cohesive dataset (for example, Boehm et al. 2022). In addition, tasks such as data matching and linkage, deduplication, standardisation, data cleaning and quality improvement (for example, error detection and imputation of missing data) are increasingly being automated and are scalable to process large volumes of data (Elliott et al. 2022; Pfitzner et al. 2021; OHDSI 2024).
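
For illustration only, the sketch below flags probable duplicate records using simple string similarity, as one example of automatable data-cleaning steps. The records and the similarity threshold are invented; production pipelines typically use dedicated record-linkage tooling and probabilistic matching.

```python
# Illustrative sketch only: flagging possible duplicate records in hypothetical
# real-world data using basic name similarity and matching dates of birth.
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": 1, "name": "Jane Smith", "dob": "1954-03-02"},
    {"id": 2, "name": "Jane Smyth", "dob": "1954-03-02"},
    {"id": 3, "name": "Arun Patel", "dob": "1967-11-19"},
]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Pairs with matching date of birth and highly similar names are flagged for review
for r1, r2 in combinations(records, 2):
    if r1["dob"] == r2["dob"] and similarity(r1["name"], r2["name"]) > 0.85:
        print(f"Possible duplicate: record {r1['id']} and record {r2['id']}")
```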

1.31

AI approaches may support the efficient selection of relevant populations and observations from large datasets for the purposes of addressing specific research questions (Shivade et al. 2014).

1.32

AI methods can support the estimation of comparative treatment effects (causal inference), primarily through feature selection methods, which select a subset of relevant features for use in model construction. Additionally, analytical methods using AI approaches can provide more targeted estimates of causal effects, sometimes harnessing the predictive capabilities of multiple valid machine learning algorithms (Padula et al. 2022).
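
For illustration only, the sketch below shows an augmented inverse probability weighting (doubly robust) estimate of an average treatment effect, with machine learning models used for the propensity score and outcome regressions. The simulated dataset is artificial; real analyses would add cross-fitting, diagnostics and sensitivity analyses, and would be pre-specified in an analysis plan.

```python
# Illustrative sketch only: a doubly robust (AIPW) estimate of an average
# treatment effect on simulated data with a known true effect of 1.0.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=(n, 3))
propensity = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1])))
a = rng.binomial(1, propensity)                                    # treatment assignment
y = 1.0 * a + x[:, 0] + 0.5 * x[:, 2] ** 2 + rng.normal(size=n)   # true effect = 1.0

# Nuisance models: propensity score and outcome regressions under each treatment
ps = GradientBoostingClassifier(random_state=0).fit(x, a).predict_proba(x)[:, 1]
ps = np.clip(ps, 0.01, 0.99)
m1 = GradientBoostingRegressor(random_state=0).fit(x[a == 1], y[a == 1]).predict(x)
m0 = GradientBoostingRegressor(random_state=0).fit(x[a == 0], y[a == 0]).predict(x)

# Doubly robust (AIPW) estimate of the average treatment effect
ate = np.mean(m1 - m0 + a * (y - m1) / ps - (1 - a) * (y - m0) / (1 - ps))
print(f"Estimated average treatment effect: {ate:.2f}")  # should be close to 1.0
```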

Cost-effectiveness evidence

1.33

Cost-effectiveness evidence is typically informed by economic models. Developing an economic model is a resource-intensive multi-step process involving model conceptualisation, parameter estimation, construction, validation, analysis and reporting steps. AI methods may have a role in several of these steps (Fleurence et al. 2024).

1.34

AI methods have the capability of interrogating complex or different datasets in new ways (for example, Boehm et al. 2022 and Cowley et al. 2023). This may generate new or deeper insights into cost drivers and health outcomes, such as disease progression, surrogate relationships, and clinical pathways. This information could inform the conceptualisation and parameterisation of an economic model (for example, in terms of included health states, transitions, and events; Chhatwal et al. 2024), reducing structural uncertainty.

1.35

Methods using large language models could be used to automate the construction and calibration of new economic models, and generation of model reports (Reason et al. 2024a). Following human-led model conceptualisation and parameter estimation steps, large language model prompts can be designed to generate the code for the economic model. This may permit the construction and comparison of multiple models to assess structural uncertainty (Fleurence et al. 2024).
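
For illustration only, the sketch below shows the kind of simple three-state Markov cohort model that a large language model might be prompted to generate from a human-led model specification. All transition probabilities, costs and utilities are invented, and any generated model code would need full human validation before use in a submission.

```python
# Illustrative sketch only: a three-state Markov cohort model with hypothetical
# inputs, as an example of what generated economic model code might look like.
import numpy as np

states = ["progression_free", "progressed", "dead"]
# Annual transition probability matrix (rows sum to 1)
P = np.array([
    [0.80, 0.15, 0.05],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
])
annual_cost = np.array([10_000.0, 15_000.0, 0.0])   # cost per state per year
utility = np.array([0.80, 0.60, 0.0])               # QALY weight per state
discount = 0.035

cohort = np.array([1.0, 0.0, 0.0])                  # everyone starts progression free
total_cost = total_qalys = 0.0
for year in range(1, 21):                           # 20-year time horizon
    cohort = cohort @ P
    df = 1 / (1 + discount) ** year
    total_cost += df * cohort @ annual_cost
    total_qalys += df * cohort @ utility

print(f"Discounted cost per person: £{total_cost:,.0f}")
print(f"Discounted QALYs per person: {total_qalys:.2f}")
```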

1.36

Large language models can be provided with prompts to reflect new information in an economic model, such as clinical data or comparators, facilitating updates and adaptations (Reason et al. 2024a). In the extreme, AI methods may support economic models being updated in real time, though we are yet to see a published example of this in healthcare.

1.37

Large language model methods can support the replication and cross-validation of existing economic models (Reason et al. 2024a).

1.38

Machine learning methods can be used for simulation optimisation (Amaran et al. 2015). In the context of economic modelling, this could reduce, and ideally minimise, the computational time that a simulation model takes to run. By increasing the efficiency of economic models, more complex models that use fewer simplifying assumptions may become more practical to use, including for probabilistic sensitivity analysis (Fleurence et al. 2024).
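
For illustration only, the sketch below fits a fast statistical emulator (metamodel) to a small number of runs of a slow simulation model, then uses the emulator for a large probabilistic sensitivity analysis. The function `slow_simulation_model` is a hypothetical stand-in for an expensive patient-level simulation, and the parameter ranges are invented.

```python
# Illustrative sketch only: emulating a slow simulation with a Gaussian process
# metamodel so that many probabilistic sensitivity analysis runs become cheap.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

def slow_simulation_model(params: np.ndarray) -> float:
    """Hypothetical stand-in for an expensive simulation returning incremental net benefit."""
    effect, cost = params
    return 20_000 * effect - cost

# A modest number of expensive training runs across the parameter space
train_params = rng.uniform([0.1, 5_000], [0.5, 20_000], size=(60, 2))
train_outputs = np.array([slow_simulation_model(p) for p in train_params])

emulator = make_pipeline(StandardScaler(),
                         GaussianProcessRegressor(normalize_y=True))
emulator.fit(train_params, train_outputs)

# Many cheap emulator evaluations replace full simulation runs in the PSA
psa_params = rng.uniform([0.1, 5_000], [0.5, 20_000], size=(10_000, 2))
psa_outputs = emulator.predict(psa_params)
print(f"Probability of positive net benefit: {np.mean(psa_outputs > 0):.2f}")
```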

Next steps

1.39

NICE will review this position statement if significant new evidence becomes available that might require a change to the use of AI methods in evidence submissions as outlined in this statement.

1.40

If an update to NICE's health technology evaluation manual concerning AI methods is identified as being potentially appropriate, it will be considered as part of the framework for modular updates to NICE's methods manual.

1.41

NICE will monitor any future use of AI methods in NICE evaluations, and consider whether their use poses challenges or opportunities to NICE's guidance development processes. NICE will continue to assess operational considerations around the upskilling and training of staff and committee members. In the future there may be a need to commit to expanding capacity and capabilities across related disciplines that support the adoption of AI in our evaluations (Zemplényi et al. 2023).

1.42

NICE will continue its ongoing policy and research activities examining the use of AI methods in HTA.

1.43

If you have any questions or comments about this position statement, please contact us using the following email address: htalab@nice.org.uk.