Consultation questions:

  • Are there any additional implementation factors that need to be considered?

  • Please let us know of any other ongoing studies with the technologies to include in table 1.
The content on this page is not current guidance and is only for the purposes of the consultation process.

3 Approach to evidence generation

3.1 Evidence gaps and ongoing studies

Table 1 summarises the evidence gaps and ongoing studies that might address them. Information about evidence status is derived from the external assessment group's report; evidence not meeting the scope and inclusion criteria is not included.

The table shows the evidence available to the committee when the guidance was published. Some studies listed as ongoing may now be published.

Table 1 Evidence gaps and ongoing studies

Evidence gap | AVATAR Therapy for managing symptoms | SlowMo for managing symptoms | CareLoop for preventing relapse
Change in targeted psychosis symptoms | Evidence available | Evidence available | Limited evidence
Long-term change in targeted psychosis symptoms | Limited evidence. Ongoing studies | Limited evidence | No evidence
Rate of relapse or worsening of symptoms | No evidence | No evidence | Limited evidence. Ongoing study
Resource use | Limited evidence | Limited evidence. Ongoing studies | Limited evidence
Intervention-related adverse events | Evidence available. Ongoing studies | Evidence available | Limited evidence
Frequency of use and completion | Limited evidence | Evidence available. Ongoing study | Limited evidence

3.2 Data sources

Several data collections, each with different strengths and weaknesses, could potentially support evidence generation. NICE's real-world evidence framework provides detailed guidance on assessing the suitability of a real-world data source to answer a specific research question.

The Mental Health Services Dataset (MHSDS) is a mandated national data collection that could collect the necessary data. But it may not routinely collect all the outcome measures that were identified in the early value assessment for this evidence generation plan. Also, data may not have been submitted for all people using mental health services and there are potential issues with data quality. NHS England has suggested that modifying the MHSDS could take up to 2 years, so it is unlikely that modification could happen in time to support data collection for this evidence generation plan.

Some mental health trusts use systems, such as the Clinical Record Interactive Search (CRIS) system, that allow de-identified data from electronic health records to be provided for research. This could support creating a dataset based on information from the different trusts' clinical records, which may include data on clinical outcomes of people with psychosis disorders. This could be used to increase and improve the data collected from the study proposed in this plan.

The quality and coverage of real-world data collections are of key importance when used in generating evidence. Active monitoring and follow up through a central coordinating point is an effective and viable approach to ensure good-quality data with broad coverage.

3.3 Evidence collection plan

Prospective controlled cohort studies are suggested as an approach to addressing the evidence gaps. These could incorporate a qualitative survey.

In such studies, 2 or more groups of people are followed over time and their outcomes compared. The studies should enrol a representative population to include adults with symptoms of psychosis, or who are at risk of relapse, who would likely be offered a digital technology in usual practice. Companies will need to clearly define their intended population.

The companies should prespecify the claimed benefits and position of their technologies in the clinical pathway for psychosis to justify their selected comparison population. The intended use of the technology should be clearly described: for example, continued or repeat use, use as a component of a standard care psychological intervention, or use as a standalone intervention. Comparators for technologies for managing symptoms of psychosis (AVATAR Therapy and SlowMo) include cognitive behavioural therapy for psychosis (CBTp), other psychological interventions such as group therapy or supportive counselling, or a waiting list. For the technology that aims to prevent relapse (CareLoop), comparators include healthcare professional review and follow up. The comparator or review protocol should be clearly described.

For comparing the technology with active treatment, start of follow up should be from the point of starting treatment. For comparing active treatment with waiting list, start of follow up should be from the point of referral for treatment. Eligibility criteria (for example, indication for referral and an assessment of the risk and suitability of digitally enabled therapy for the person), and the time point of starting follow up should be reported. Eligibility criteria should be consistent across comparison groups to avoid selection bias.

Data should be collected for all groups, at appropriate intervals from the start of follow up for a minimum of 6 months, and ideally for 12 months. Comparator data could be from different centres, with comparable populations and care pathways, that do not have access to the technologies. Ideally multiple sites should be enrolled, representing the variety of care across the NHS. The included services for standard care and the treatment options must be described, including their composition and, ideally, performance against national outcomes for the relevant condition should be reported.

Using digital technologies may worsen symptoms of delusion and paranoia in some people. So patient outcomes should be closely monitored and collected, with interim analyses and clear escalation plans specified in protocols.

Because the suggested study design is non-randomised, it is important that appropriate steps are taken to balance confounding factors across the comparison groups at baseline. This includes clearly defined and consistent enrolment criteria across the comparison groups and techniques such as matching or adjustment approaches (for example, propensity score methods) to ensure comparable groups. High-quality data on patient characteristics is needed to correct for differences and to assess who the technologies may not be suitable for. Important confounding factors should be identified, with input from clinical experts during protocol development.
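As an illustration of the matching techniques mentioned above, the sketch below implements greedy 1:1 nearest-neighbour matching on the propensity score with a caliper. It is a minimal sketch only: it assumes propensity scores have already been estimated (for example, by logistic regression of treatment status on baseline characteristics), and all identifiers and values are hypothetical rather than taken from this plan.

```python
# Minimal sketch of 1:1 nearest-neighbour propensity score matching
# with a caliper. Illustrative only: propensity scores are assumed to
# have been estimated already; all names and values are hypothetical.

def match_nearest_neighbour(treated, controls, caliper=0.05):
    """Greedily pair each treated person with the closest unmatched
    control by propensity score, discarding pairs whose score
    difference exceeds the caliper.

    treated, controls: lists of (person_id, propensity_score) tuples.
    Returns a list of (treated_id, control_id) pairs.
    """
    available = dict(controls)  # control_id -> score, still unmatched
    pairs = []
    # Matching the hardest-to-match (highest score) treated people
    # first is a common greedy heuristic.
    for person_id, score in sorted(treated, key=lambda t: -t[1]):
        if not available:
            break
        best_id = min(available, key=lambda c: abs(available[c] - score))
        if abs(available[best_id] - score) <= caliper:
            pairs.append((person_id, best_id))
            del available[best_id]  # each control is used at most once
    return pairs


treated = [("t1", 0.62), ("t2", 0.35), ("t3", 0.90)]
controls = [("c1", 0.60), ("c2", 0.33), ("c3", 0.50), ("c4", 0.10)]
print(match_nearest_neighbour(treated, controls))
# t3 (score 0.90) finds no control within the caliper and is left unmatched
```

Unmatched people (like "t3" above) should be reported, because dropping them changes the population the comparison applies to.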

Incomplete records can also lead to bias if unaccounted for. Loss to follow up, with reasons, should be reported over the data collection period. Data collection should follow a predefined protocol and quality assurance processes should be put in place to ensure the integrity and consistency of data collection.

An enrolment period should be included that is long enough to account for learning effects when implementing the new technologies.

Data may be collected through a combination of primary data collection, routine NHS data sources, and by the technologies themselves.

The study should consider uptake of the technologies among people who were eligible for them. By also considering historical data, the study may identify changes in overall access to treatment when this is a claimed benefit.

Feedback can also be collected through a survey or structured interviews with people using the technologies. Robustness of survey results depends on comprehensive distribution across people who are eligible and on the sample being representative of the population of potential users.

See NICE's real-world evidence framework, which provides guidance on the planning, conduct, and reporting of real-world evidence studies. This document also provides best practice principles for robust design of real-world evidence when assessing comparative treatment effects using a prospective cohort study design.

3.4 Data to be collected

The following data should be collected for the technologies and their comparators to address the evidence gaps:

  • Baseline data including:

    • age and gender

    • current and previous treatments including antipsychotic medicine, other medicines, and psychological therapy

    • medical history including duration of psychosis symptoms, current psychiatric comorbidities, alcohol or drug issues

    • the indication for referral

    • symptom severity

    • risk classification or other characteristics that may be related to the likelihood of choosing to access the technology, for example, socioeconomic status, language, ethnicity or region, or important confounders identified with input from clinical experts

    • assessment of whether digital treatment is suitable for the person, and willingness to have it, with reasons for refusal

  • Clinical-effectiveness measures taken from baseline at appropriate time intervals for a minimum of 6 months, and ideally for 12 months:

    • For AVATAR Therapy: assess auditory verbal hallucinations using the Psychotic Symptom Rating Scales, auditory hallucinations (PSYRATS‑AH).

    • For SlowMo: assess distressing worries or paranoia using the Psychotic Symptom Rating Scales, delusions (PSYRATS‑DEL) and the Green et al. Paranoid Thought Scales (GPTS).

    • For CareLoop: monitor symptoms to prevent relapse using the Positive and Negative Syndrome Scale (PANSS).

    • For all 3 technologies: assess functional outcomes (for example, using the Work and Social Adjustment Scale [WSAS] or the Global Assessment of Functioning [GAF] scale).

    • For all 3 technologies: record rate of relapse (that is, need for urgent review, change in antipsychotic medicine, referral to crisis care or hospital for psychiatric treatment) and time to relapse or worsening of symptoms.

  • Any adverse effects associated with use of the technology, including worsening delusion and paranoia, and incidence of suicide and self-harm.

  • Resource use before, during and after treatment. This should include time to implement and maintain the technologies and use them between appointments, for example to check alerts. It should also include the average number of treatment sessions per person, and the level of support provided (defined by healthcare professional grade and time) and any resource use associated with relapse (such as hospital care).

  • Access to treatment, including average waiting time from referral to treatment for psychosis, for people having standard care and for people using the technologies.

  • Patient and staff experience of using the technology.

  • Use of the technology including:

    • number of people accessing services with the relevant clinical indication

    • number of people offered the technology

    • number and proportion who started using the technology

    • engagement over time including frequency of use (continued or repeat use)

    • rates of stopping treatment

    • reasons why people stop using the technologies (for example, because of improvement in symptoms, lack of improvement or other reasons)

  • Information about any updates to the technologies during the observation period.
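Time to relapse, listed above, is a censored time-to-event outcome: people who are lost to follow up or relapse-free at study end contribute partial information. As a minimal sketch of how such data could be summarised, the following computes a Kaplan-Meier survival estimate in plain Python; all data values are hypothetical, and a real analysis would use an established statistics package.

```python
# Minimal Kaplan-Meier sketch for time to relapse with right-censoring.
# Illustrative only; data values are hypothetical.

def kaplan_meier(times, relapsed):
    """Return [(time, survival_probability)] at each observed relapse time.

    times:    follow-up time for each person (for example, days).
    relapsed: True if relapse was observed at that time, False if the
              person was censored (lost to follow up or relapse-free).
    """
    events = sorted(zip(times, relapsed))
    at_risk = len(events)
    survival = 1.0
    curve = []
    i = 0
    while i < len(events):
        t = events[i][0]
        d = 0  # relapses at time t
        n = 0  # people leaving the risk set at time t (relapse or censored)
        while i < len(events) and events[i][0] == t:
            d += events[i][1]
            n += 1
            i += 1
        if d:
            survival *= 1 - d / at_risk  # step down only at relapse times
            curve.append((t, survival))
        at_risk -= n
    return curve


times    = [30, 60, 60, 90, 120, 180]
relapsed = [True, True, False, False, True, False]
print(kaplan_meier(times, relapsed))
```

With the hypothetical data above the curve steps down at days 30, 60 and 120; the censored people still count towards the risk set until they leave it, which is what distinguishes this from a naive relapse proportion.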

3.5 Evidence generation period

The evidence generation period should be 3 years, or less if enough evidence is available. This will be enough time to implement the study, collect and analyse the data, and write a report.