NICE's priority areas

Guidance on best practice for AI-based methods to support evidence generation

We aim to establish guidance on the use of innovative AI methodologies that are starting to have, or are likely to have, an impact on evidence generation. This will build on and supersede NICE's position statement on the use of AI in the generation and reporting of evidence considered by NICE. We will set out best practice principles that support scientific rigour, transparency and ethical use. Where possible, we will work with national and international system partners to reduce duplication of effort and increase harmonisation.

What this means in practice

We will:

  • Gather insights on current practices, challenges and expectations regarding the use of AI in evidence generation, drawing on input from expert advisers.

  • Prioritise topic areas for development, such as guidance for assessing the quality and reliability of AI-generated evidence, and considerations for bias, transparency, reproducibility and ethical use.

  • Lead exploratory and applied research to develop relevant case studies of AI applied in evidence generation, and to understand the implications for NICE's methods and processes, including in systematic reviews of evidence and economic modelling.

  • Consider aligning with existing literature and guidelines on AI methodologies, and frameworks from other regulatory bodies and relevant organisations.

  • Support companies and evidence developers through the NICE Advice service.

Our approach will help ensure good practice in evidence generation, fostering both the reliability of, and appropriate trust in, evidence generated with AI.

Evaluating AI-based technologies

NICE has already evaluated various technologies that incorporate AI. To date, these have included AI used for prediction, for diagnosis (for example, in imaging) and in digital therapeutics. As the use of AI in the NHS increases, it is critical that our evaluation programmes address the challenges specific to AI-based technologies, for example additional sources of bias, ethical considerations, societal harms, misuse and autonomy risks.

What this means in practice

We will:

  • Work with system partners to prioritise topics for standards development, covering issues inherent to AI technologies. This may include additional considerations in: the validation of algorithms; the evaluation of adaptive algorithms; the assessment of real-world impact; and bias, acceptability, transparency of evidence generation, algorithmic explainability and cybersecurity. If needed, we will consider developing tools and templates to support the implementation of new standards.

  • Consider aligning with existing guidelines and frameworks from other regulatory bodies and relevant organisations.

  • Train our staff and committees to develop the skills needed to evaluate AI-based technologies, including the ways in which these differ from traditional medical technologies and how to recognise the associated risks.

  • Support companies and evidence developers through the NICE Advice service, the AI and Digital Regulations Service (AIDRS), and other engagement work.

We will engage with regulatory bodies to ensure that our evaluation processes are aligned with the latest regulatory standards and best practices, allowing us to assess new AI technologies quickly and effectively as they emerge.

Using AI to streamline our processes and increase efficiency and effectiveness

We will explore how AI can help us improve internal processes, particularly in enhancing efficiency, accuracy and quality. We will do this in collaboration with suppliers and system partners, ensuring we guard against risks and prioritise cybersecurity.

What this means in practice

We will:

  • Learn from successful and unsuccessful implementations of AI projects, and gather case studies to demonstrate potential value.

  • Explore AI tools to automate tasks, speed up guidance production, enhance data analytics and support decision making, while assessing risks.

  • Adhere to ethical guidelines, ensuring transparency, accountability and compliance with relevant regulations, standards, licences and copyright frameworks.

  • Establish processes for performance monitoring, cybersecurity and risk mitigation, ensuring safety, responsibility and fairness.

  • Train our staff to ensure effective use of these tools and to build AI literacy within the organisation.