NICE process and methods

3 Clinical effectiveness

Section 3 provides detailed guidance on the level of information that should be included in the evidence submission template about the clinical effectiveness of the appraised technology.

Evidence on outcomes should be obtained from a systematic review, defined as systematically locating, including, appraising and synthesising the evidence to obtain a reliable and valid overview of the data.

When completing the template, also refer to the NICE guide to the methods of technology appraisal (section 5.2), the NICE guide to the processes of technology appraisal (section 3.2) and the NICE process and methods addenda.

For further information on how to implement the approaches described in the NICE methods guide, see the technical support documents produced by the NICE Decision Support Unit[1] about evidence synthesis:

  • Introduction to evidence synthesis for decision making (technical support document 1).

  • A generalised linear modelling framework for pairwise and network meta-analysis of randomised controlled trials (technical support document 2).

  • Heterogeneity: subgroups, meta-regression, bias and bias-adjustment (technical support document 3).

  • Inconsistency in networks of evidence based on randomised controlled trials (technical support document 4).

  • Evidence synthesis in the baseline natural history model (technical support document 5).

  • Embedding evidence synthesis in probabilistic cost-effectiveness analysis: software choices (technical support document 6).

  • Evidence synthesis of treatment efficacy in decision making: A reviewer's checklist (technical support document 7).

  • Methods for population-adjusted indirect comparisons in submissions to NICE (technical support document 18).

3.1 Identification and selection of relevant studies

This section provides guidance on identifying and selecting relevant studies that provide evidence for:

  • the technology being appraised

  • comparator technologies, when an indirect or mixed treatment comparison is carried out.

This information should be submitted as appendix D to the main submission.

To identify and select relevant studies, it is expected that a systematic literature search will be carried out in line with the NICE guide to the methods of technology appraisal sections 5.2.2 and 5.2.4.

In exceptional circumstances a systematic literature search may not be necessary. If a systematic literature search is not included in the submission, the company must confirm that no additional relevant studies have been done outside its organisation. See the instructions at the start of the user guide for more details of NICE's requirements, and section 3.1 of the NICE guide to the processes of technology appraisal.

Advise whether a search strategy was developed to identify relevant studies. If a search strategy was developed and a literature search carried out, provide details under the subheadings listed in this section. Key aspects of study selection can be found in Systematic reviews: CRD's guidance for undertaking reviews in health care (University of York Centre for Reviews and Dissemination).

Search strategy

Describe the search strategies used to retrieve relevant clinical data. The methods used should be justified with reference to the decision problem. Sufficient detail should be provided so that the results may be reproduced. This includes a full list of all information sources and the full electronic search strategies for all databases, including any limits applied.

Identifying evidence for comparator(s)

Clinical evidence for the comparator(s) must include all of the studies considered relevant in the NICE technology appraisal(s) of the comparator(s). The references for these studies can be found in the company evidence submission for those technology appraisals and in the ERG report (which may have identified additional relevant studies). Because the original literature search must be updated, the systematic literature search for comparator evidence in a cost-comparison analysis may have different date limits from the search for the intervention technology. The start date for the search strategy to retrieve new data on the comparator should be the end date used for the literature searches in the NICE technology appraisal(s) of each comparator. Specify whether each study comes from the original technology appraisal or from a new search.

Study selection

Provide details of the treatments to be compared. This should include all treatments identified in the final NICE scope. If additional treatments have been included, the rationale should be provided. For example, additional treatments may be added to make a connected network for a mixed treatment comparison.

Describe the inclusion and exclusion selection criteria, language restrictions and the study selection process in a table. Justification should be provided to ensure that the rationale for study selection is transparent. A suggested table format is provided below.

Table X Eligibility criteria used in the search strategy

| Clinical effectiveness | Inclusion criteria | Exclusion criteria |
|------------------------|--------------------|--------------------|
| Population             |                    |                    |
| Intervention           |                    |                    |
| Comparators            |                    |                    |
| Outcomes               |                    |                    |
| Study design           |                    |                    |
| Language restrictions  |                    |                    |

A flow diagram of the numbers of studies included and excluded at each stage should be provided using a validated statement for reporting systematic reviews and meta-analyses, such as the PRISMA flow diagram. The total number of studies in the flow diagram should equal the total number of studies listed in section 2.1.

When data from a single study have been drawn from more than 1 source (for example, a poster and a published report) or when trials are linked (for example, an open-label extension to an RCT), this should be clearly stated.

Provide a complete reference list of included studies. For studies involving comparator treatments, state whether the publication was included in the published NICE technology appraisal(s) of each comparator treatment.

Provide a complete reference list of excluded studies.

For indirect and mixed treatment comparisons

Summary of trials included in indirect or mixed treatment comparisons

In a table provide a summary of the trials used to carry out the indirect comparison or mixed treatment comparison. A suggested table format is presented below. When there are more than 2 treatments in the comparator sets for synthesis, include a network diagram.

Table X Summary of the trials used to carry out the indirect or mixed treatment comparison

| References of trial | Intervention A | Intervention B | Intervention C | Intervention D |
|---------------------|----------------|----------------|----------------|----------------|
| Trial 1             |                |                |                |                |
| Trial 2             |                |                |                |                |
| Trial 3             |                |                |                |                |
| Trial 4             |                |                |                |                |
| [Add more rows as needed] |          |                |                |                |

If the table or network diagram provided does not include all the trials that were identified in the search strategy, the rationale for exclusion should be provided.

Methods and outcomes of studies included in indirect or mixed treatment comparisons

Provide the rationale for the outcome measure chosen, along with the rationale for the outcome scale selected.

Discuss the populations in the included trials, especially if they are not the same as the populations specified in the NICE scope. If they are not the same:

  • provide a rationale to justify including the study

  • describe the assumptions made about the impact or lack of impact this may have on the relative treatment effect

  • explain whether an adjustment has been made for these differences.

Describe whether there are apparent or potential differences in patient populations between the trials. If this is the case, explain how this has been taken into account.

Provide the following for each trial included:

  • table(s) of the methods

  • table(s) of the outcomes and the results

  • table(s) of the participants' baseline characteristics.

For studies which will be detailed in section 3.3 of the main submission (that is, studies assessing the intervention technology), cross reference the submission rather than repeating the information in appendix D.

Methods of analysis of studies included in indirect or mixed treatment comparisons

Provide a clear description of the indirect or mixed treatment comparison methodology. If the company considers that an indirect treatment comparison or mixed treatment comparison is inappropriate, the rationale should be provided and alternative analyses explored (for example, naive indirect comparison or a narrative overview). Refer to the NICE guide to the methods of technology appraisal, sections 5.2.16 to 5.2.18.

For studies which will be detailed in section 3.4 of the main submission (that is, studies assessing the intervention technology), cross reference the submission rather than repeating the information in appendix D.

Supply the code for any statistical programming used (for example, the WinBUGS code).
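Purely for illustration (this is not part of the template and not a required format), the logic of a simple anchored indirect comparison of the Bucher type can be sketched as below; a full mixed treatment comparison would normally be implemented in dedicated software such as WinBUGS, as noted above. All names and numbers in the sketch are hypothetical placeholders.

```python
# Illustrative sketch only: an anchored (Bucher) indirect comparison of
# treatments A and B through a common comparator C, on the log hazard
# ratio scale. All values are hypothetical placeholders.
import math

def bucher_indirect(log_hr_ac, se_ac, log_hr_bc, se_bc):
    """Indirect estimate of A versus B via the common comparator C."""
    log_hr_ab = log_hr_ac - log_hr_bc          # log HR(A vs B)
    se_ab = math.sqrt(se_ac**2 + se_bc**2)     # variances of independent trials add
    lower = math.exp(log_hr_ab - 1.96 * se_ab)
    upper = math.exp(log_hr_ab + 1.96 * se_ab)
    return math.exp(log_hr_ab), (lower, upper)

# Hypothetical trial summaries: HR(A vs C) = 0.70 (SE of log HR 0.12),
# HR(B vs C) = 0.85 (SE of log HR 0.10)
hr, ci = bucher_indirect(math.log(0.70), 0.12, math.log(0.85), 0.10)
print(f"Indirect HR, A vs B: {hr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```

The same subtraction-on-the-log-scale logic applies to odds ratios or relative risks; only within-trial contrasts are combined, so the randomised comparisons are preserved.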

Risk of bias of studies included in indirect or mixed treatment comparisons

Provide a complete quality assessment of each trial.

Identify any risk of bias within the trials identified, and describe any adjustments made to the analysis.

See section 3.5 of the user guide for more details of what should be included here. For studies which will be detailed in section 3.5 of the main submission (that is, studies assessing the intervention technology), cross reference the submission rather than repeating the information in appendix D.

3.2 List of relevant clinical effectiveness evidence

NICE prefers RCTs that directly compare the technology with 1 or more relevant comparators. However, such evidence may not always be available and may not be sufficient to quantify the effect of treatment over the course of the disease. Therefore, data from non-randomised and non-controlled studies may be needed to supplement RCT data. In addition, data from trials that compare the technology with non-relevant comparators may be needed to enable the technology and the comparators to be linked in an indirect or mixed treatment comparison. Please provide details of the RCTs and non-randomised and non-controlled trials identified in the systematic literature review for the technology being appraised. A suggested table format for each source of evidence is given below. Indicate whether the trial was used to support the application for marketing authorisation.

Table X Clinical effectiveness evidence

| Study | [Clinical trial name or primary author surname (year published)] |
|-------|------------------------------------------------------------------|
| Study design | |
| Population | |
| Intervention(s) | |
| Comparator(s) | |
| Indicate if trial supports application for marketing authorisation | (yes/no) |
| Reported outcomes specified in the decision problem | |
| All other reported outcomes | |

3.3 Summary of methodology of the relevant clinical effectiveness evidence

It is expected that all key aspects of methodology will be in the public domain; if a company wishes to submit aspects of the methodology in confidence, prior agreement must be obtained from NICE.

3.3.1 Items 3 to 6b of the CONSORT checklist should be provided for all RCTs identified in section 3.2.

  • Trial design – brief description of trial design, including details of randomisation if applicable.

  • Eligibility criteria – a comprehensive description of the eligibility criteria used to select the trial participants, including any definitions and any assessments used in recruitment.

  • Settings and locations where the data were collected – describe the locations where the trial was carried out, including the country and, if applicable, the care setting (for example, primary care [GP or practice nurse], secondary care [inpatient, outpatient, day case]).

  • Trial drugs and concomitant medications – provide details of trial drugs and comparator(s), with dosing information and titration schedules if appropriate. Provide an overview of concomitant medications permitted and disallowed during the trial.

  • Outcomes specified in the scope – please state whether the outcomes were pre-specified or analysed post hoc.

3.3.2 Provide a comparative summary of the methodology of the trials in a table. A suggested table format is presented below.

Table X Comparative summary of trial methodology

| Trial number (acronym) | Trial 1 | Trial 2 | [Add more columns as needed] |
|------------------------|---------|---------|------------------------------|
| Location | | | |
| Trial design | | | |
| Eligibility criteria for participants | | | |
| Settings and locations where the data were collected | | | |
| Trial drugs (the interventions for each group with sufficient details to allow replication, including how and when they were administered) | | | |
| Intervention(s) (n=[x]) and comparator(s) (n=[x]) | | | |
| Permitted and disallowed concomitant medication | | | |
| Primary outcomes (including scoring methods and timings of assessments) | | | |
| Pre-planned subgroups | | | |

3.3.3 In a table describe the characteristics of the participants at baseline for each of the trials in your submission. Provide details of baseline demographics, including age, sex and relevant variables describing disease severity and duration and appropriate previous treatments and concomitant treatment. Highlight any differences between trial groups. A suggested table format is presented below.

Table X Characteristics of participants in the studies across treatment groups

| Trial number (acronym) and baseline characteristic | Treatment group X | Treatment group Y | [Add more columns as needed] |
|----------------------------------------------------|-------------------|-------------------|------------------------------|
| Trial 1 (n=[x]) | (n=[x]) | (n=[x]) | (n=[x]) |
| Age | | | |
| Sex | | | |
| [Add more rows as needed] | | | |
| Trial 2 (n=[x]) | (n=[x]) | (n=[x]) | (n=[x]) |
| Age | | | |
| Sex | | | |
| [Add more rows as needed] | | | |

Adapted from Pharmaceutical Benefits Advisory Committee (2008) Guidelines for preparing submissions to the Pharmaceutical Benefits Advisory Committee (Version 4.3). Canberra: Pharmaceutical Benefits Advisory Committee.

3.4 Statistical analysis and definition of study groups in the relevant clinical effectiveness evidence

3.4.1 During completion of this section consider items 7a (sample size), 7b (interim analyses and stopping guidelines), 12a (statistical methods used to compare groups for primary and secondary outcomes) and 12b (methods for additional analyses, such as subgroup analyses and adjusted analyses) of the CONSORT checklist.

3.4.2 For each trial identified in section 3.2, provide details of the trial population included in the primary analysis of the primary outcome and methods used to take account of missing data (for example, a description of the intention-to-treat analysis carried out, including censoring methods, or whether a per-protocol analysis was carried out).

3.4.3 For each trial, provide details of the statistical tests used in the primary analysis. Also provide details of the primary hypothesis or hypotheses under consideration, the power of the trial and a description of sample size calculation, including rationale and assumptions in a table. State whether each trial was designed as a superiority, equivalence or non-inferiority trial; state the equivalence boundary and non-inferiority margin where relevant. Justify non-inferiority margins selected, in relation to clinically important differences. If the outcomes were adjusted for covariates, provide the rationale. A suggested table format is presented below.

3.4.4 For non-randomised and non-controlled evidence such as observational studies, the potential biases should be identified before data analysis, either by a thorough review of the subject area or discussion with experts in the clinical discipline. Ideally these should be quantified and adjusted for.
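For illustration only, the kind of sample-size and power calculation described in section 3.4.3 might look like the normal-approximation sketch below for a dichotomous primary outcome. The design values (response rates, margin, alpha, power) and function names are hypothetical; the submission should report the trial's own pre-specified calculation.

```python
# Illustrative sketch only: normal-approximation sample size per group for a
# dichotomous primary outcome. All design values are hypothetical placeholders.
import math
from statistics import NormalDist

def n_superiority(p1, p2, alpha=0.05, power=0.9):
    """Two-sided superiority comparison of two independent proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

def n_noninferiority(p, margin, alpha=0.025, power=0.9):
    """One-sided non-inferiority test, assuming equal true response rates."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil((z_a + z_b) ** 2 * 2 * p * (1 - p) / margin ** 2)

print(n_superiority(0.60, 0.45))      # hypothetical response rates of 60% vs 45%
print(n_noninferiority(0.60, 0.10))   # hypothetical 10 percentage point margin
```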

Table X Summary of statistical analyses

| Trial number (acronym) | Hypothesis objective* | Statistical analysis | Sample size, power calculation | Data management, patient withdrawals |
|------------------------|-----------------------|----------------------|--------------------------------|--------------------------------------|
| Trial 1 | | | | |
| Trial 2 | | | | |
| [Add more rows as needed] | | | | |

*Include whether the hypothesis was tested as a superiority, equivalence or non-inferiority trial.

Participant flow in the relevant randomised controlled trials

In appendix D provide details of the numbers of participants who were eligible to enter the trials. Include the number of participants randomised and allocated to each treatment. Provide details of and the rationale for participants who crossed over treatment groups, were lost to follow-up or withdrew from the RCT. Provide a CONSORT diagram showing the flow of participants through each stage of each of the trials.

3.5 Quality assessment of the relevant clinical effectiveness evidence

In appendix D, provide the complete quality assessment for each trial.

3.5.1 The validity of the results of an individual RCT or non-randomised or non-controlled study will depend on the robustness of its overall design and execution, and its relevance to the decision problem. The quality of each source of evidence identified in section 3.2 should be appraised. Whenever possible, the criteria for assessing published studies should be used to assess the validity of unpublished and part-published studies. The quality assessment will be validated by the ERG.

3.5.2 Describe the methods used for assessing risk of bias and generalisability of individual trials (including whether this was done at the study or outcome level), and how this information is to be used in any data synthesis.

  • The following are the minimum criteria for assessment of risk of bias and generalisability in parallel group RCTs, but the list is not exhaustive:

    • Was the randomisation method adequate?

    • Was the allocation adequately concealed?

    • Were the groups similar at the outset of the study in terms of prognostic factors, for example severity of disease?

    • Were the care providers, participants and outcome assessors blind to treatment allocation? If any of these people were not blind to treatment allocation, what might be the likely impact on the risk of bias (for each outcome)?

    • Were there any unexpected imbalances in drop-outs between groups? If so, were they explained or adjusted for?

    • Is there any evidence to suggest that the authors measured more outcomes than they reported?

    • Did the analysis include an intention-to-treat analysis? If so, was this appropriate and were appropriate methods used to account for missing data?

  • Also consider whether the authors of the study publications declared any conflicts of interest.

  • In addition to parallel group RCTs, there are other randomised designs (for example, randomised crossover trials and randomised cluster trials) in which further quality criteria may need to be considered when assessing bias. Key aspects of quality to be considered can be found in Systematic reviews: CRD's guidance for undertaking reviews in health care (University of York Centre for Reviews and Dissemination).

  • For the quality assessments of non-randomised and non-controlled evidence, use an appropriate and validated quality assessment instrument. Key aspects of quality to be considered can be found in Systematic reviews: CRD's guidance for undertaking reviews in health care (University of York Centre for Reviews and Dissemination). This includes information on a number of initiatives aimed at improving the quality of research reporting.

3.5.3 Consider how closely the trials reflect routine clinical practice in England.

3.5.4 If there is more than 1 trial, tabulate a summary of the responses applied to each of the quality assessment criteria. A suggested table format for the quality assessment results is given below.

Table X Quality assessment results for parallel group RCTs

| Trial number (acronym) | Trial 1 | Trial 2 | [Add more columns as needed] |
|------------------------|---------|---------|------------------------------|
| Was randomisation carried out appropriately? | (yes/no/not clear/N/A) | (yes/no/not clear/N/A) | |
| Was the concealment of treatment allocation adequate? | (yes/no/not clear/N/A) | (yes/no/not clear/N/A) | |
| Were the groups similar at the outset of the study in terms of prognostic factors? | (yes/no/not clear/N/A) | (yes/no/not clear/N/A) | |
| Were the care providers, participants and outcome assessors blind to treatment allocation? | (yes/no/not clear/N/A) | (yes/no/not clear/N/A) | |
| Were there any unexpected imbalances in drop-outs between groups? | (yes/no/not clear/N/A) | (yes/no/not clear/N/A) | |
| Is there any evidence to suggest that the authors measured more outcomes than they reported? | (yes/no/not clear/N/A) | (yes/no/not clear/N/A) | |
| Did the analysis include an intention-to-treat analysis? If so, was this appropriate and were appropriate methods used to account for missing data? | (yes/no/not clear/N/A) | (yes/no/not clear/N/A) | |

Adapted from Systematic reviews: CRD's guidance for undertaking reviews in health care (University of York Centre for Reviews and Dissemination).

3.6 Clinical effectiveness results of the relevant trials

3.6.1 Present results for all outcomes that are important to the decision problem, from the trials identified in section 3.2. These must include outcomes and measures that were used in the cost-effectiveness analysis of the NICE technology appraisal(s) of the comparator(s) specified in the final scope, focusing on any outcomes that the model was sensitive to. Normally, the committee will consider only the same outcome measures as were used in the NICE technology appraisal(s) for the comparator(s). Different outcome measures will be accepted if an empirical mapping tool is available.

3.6.2 Data from intention-to-treat analyses should be presented whenever possible and a definition of the included participants provided. If participants have been excluded from the analysis, the rationale for this should be given. Per-protocol analyses should be presented in addition to intention-to-treat analyses where relevant to the study design and hypothesis. Explain any discrepancies between the intention-to-treat and per-protocol analyses.

3.6.3 The information may be presented graphically to supplement text and tabulated data. If appropriate, please present graphs such as Kaplan–Meier plots.
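As a purely illustrative sketch of what underlies such a plot (with hypothetical data; submissions would present the trial's own analysis, typically with censoring marks and numbers at risk), the Kaplan–Meier product-limit estimate can be computed as follows.

```python
# Illustrative sketch only: Kaplan-Meier product-limit survival estimates
# from hypothetical time-to-event data (event = 1, censored = 0).
times  = [2, 3, 3, 5, 6, 8, 8, 9, 12, 14]   # hypothetical follow-up times (months)
events = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]     # hypothetical event indicators

survival = 1.0
print("time  at risk  events  S(t)")
for t in sorted(set(times)):
    d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
    n = sum(1 for ti in times if ti >= t)
    if d > 0:                                # survival only drops at event times
        survival *= 1 - d / n
        print(f"{t:>4}  {n:>7}  {d:>6}  {survival:.3f}")
```

Plotting the estimates as a step function of time gives the Kaplan–Meier curve.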

3.6.4 For each outcome, provide the following information from each study:

  • The unit of measurement.

  • The size of the effect; for dichotomous outcomes, the results should ideally be expressed both as relative risks (or odds ratios) and as risk (or rate) differences (see the worked sketch after this list). For time-to-event analyses, the hazard ratio is an equivalent statistic. Both absolute and relative data should be presented.

  • A 95% confidence interval.

  • The number of people in each group included in each analysis and whether the analysis was intention-to-treat or per-protocol. State the results in absolute numbers when feasible.

  • When interim data are quoted, this should be clearly stated, along with the point at which the data were taken and the time remaining until completion of the trial. Any analytical adjustments made to account for the interim nature of the data should be described.

  • Other relevant data that may help interpret the results may be included, such as adherence to medication or study protocol.

  • Discuss and justify any clinically important differences in the results between the different arms of a trial and between trials.

  • Specify whether unadjusted and adjusted analyses were performed, and whether the results were consistent.
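The sketch below illustrates, with hypothetical counts, how the relative risk, odds ratio and risk difference with normal-approximation 95% confidence intervals referred to in the list above can be derived from a 2x2 table; it is not part of the template and the function name is illustrative only.

```python
# Illustrative sketch only: effect measures for a dichotomous outcome from a
# 2x2 table of hypothetical counts, with normal-approximation 95% CIs.
import math

def effect_measures(events_t, n_t, events_c, n_c, z=1.96):
    """Relative risk, odds ratio and risk difference with 95% CIs."""
    rt, rc = events_t / n_t, events_c / n_c

    rr = rt / rc
    se_log_rr = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    rr_ci = (math.exp(math.log(rr) - z * se_log_rr),
             math.exp(math.log(rr) + z * se_log_rr))

    odds_ratio = (events_t / (n_t - events_t)) / (events_c / (n_c - events_c))
    se_log_or = math.sqrt(1/events_t + 1/(n_t - events_t)
                          + 1/events_c + 1/(n_c - events_c))
    or_ci = (math.exp(math.log(odds_ratio) - z * se_log_or),
             math.exp(math.log(odds_ratio) + z * se_log_or))

    rd = rt - rc
    se_rd = math.sqrt(rt * (1 - rt) / n_t + rc * (1 - rc) / n_c)
    rd_ci = (rd - z * se_rd, rd + z * se_rd)

    return {"RR": (rr, rr_ci), "OR": (odds_ratio, or_ci), "RD": (rd, rd_ci)}

# Hypothetical arms: 30/100 events on the intervention, 45/100 on the comparator
for measure, (estimate, ci) in effect_measures(30, 100, 45, 100).items():
    print(f"{measure}: {estimate:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```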

3.7 Subgroup analysis

This section should be read with the NICE guide to the methods of technology appraisal, sections 5.10.1 to 5.10.12. Only provide results of subgroup analyses if the technology does not provide similar or greater health benefits at a similar or lower cost than the comparator in the full population for whom the comparator has been recommended by NICE.

3.7.1 Provide details of the subgroup analyses carried out. Specify the rationale and whether they were pre-planned or post-hoc.

3.7.2 Clearly specify the characteristics of the participants in the subgroups and explain the appropriateness of the analysis to the decision problem.

3.7.3 Provide details of the statistical tests used in the primary analysis of the subgroups, including any tests for interaction.

Provide a summary of the results for the subgroups in appendix E.
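For illustration only, a test for treatment-by-subgroup interaction of the kind referred to in section 3.7.3 can be sketched as follows, using hypothetical subgroup estimates on the log relative risk scale; the actual pre-specified interaction tests from the trial should be reported.

```python
# Illustrative sketch only: test of treatment-by-subgroup interaction,
# comparing hypothetical log relative risks estimated in two subgroups.
import math
from statistics import NormalDist

log_rr_1, se_1 = math.log(0.65), 0.15    # hypothetical subgroup 1 estimate
log_rr_2, se_2 = math.log(0.90), 0.18    # hypothetical subgroup 2 estimate

diff = log_rr_1 - log_rr_2
se_diff = math.sqrt(se_1**2 + se_2**2)   # independent subgroups: variances add
z = diff / se_diff
p_interaction = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p value

print(f"Ratio of subgroup RRs: {math.exp(diff):.2f}; "
      f"p for interaction = {p_interaction:.3f}")
```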

3.8 Meta-analysis

This section should be read with the NICE guide to the methods of technology appraisal, sections 5.2.8 to 5.2.11. For further information on how to implement the approaches described in the guide, see the series of technical support documents produced by the NICE Decision Support Unit about evidence synthesis.

3.8.1 If a meta-analysis cannot be conducted and instead a qualitative overview is considered appropriate, summarise the overall results of the individual studies with reference to their critical appraisal.

3.8.2 If a meta-analysis has been performed, include the following in the results:

  • The characteristics and possible limitations of the data (that is, population, intervention, setting, sample sizes and the validity of the evidence) should be fully reported for each study included in the analysis and a forest plot included.

  • A statistical assessment of heterogeneity. If the visual presentation and/or the statistical test indicate that the RCT results are heterogeneous, try to explain the heterogeneity.

  • Statistically combine (pool) the results for both relative risk reduction and absolute risk reduction, using either a fixed-effects or random-effects model as appropriate (an illustrative sketch follows this list).

  • Provide an adequate description of the methods of statistical combination and justify their choice.

  • Carry out sensitivity analysis when appropriate.

  • Tabulate and/or graphically display the individual and combined results (such as through the use of forest plots).
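A minimal sketch of the pooling and heterogeneity statistics referred to in the list above is given below, using hypothetical log relative risks; it is illustrative only and is not a substitute for the full analysis, justification of model choice and forest plots requested.

```python
# Illustrative sketch only: inverse-variance pooling of hypothetical log
# relative risks, with Cochran's Q, I-squared and a DerSimonian-Laird
# random-effects estimate.
import math

log_rr = [-0.35, -0.20, -0.45, -0.10]   # hypothetical study estimates (log RR)
se     = [0.15, 0.12, 0.20, 0.18]       # hypothetical standard errors

w = [1 / s**2 for s in se]                                # fixed-effect weights
fe = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)   # pooled log RR, fixed effect
se_fe = math.sqrt(1 / sum(w))

q = sum(wi * (yi - fe)**2 for wi, yi in zip(w, log_rr))   # Cochran's Q
df = len(log_rr) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0       # I-squared (%)

c = sum(w) - sum(wi**2 for wi in w) / sum(w)              # DerSimonian-Laird scaling term
tau2 = max(0.0, (q - df) / c)                             # between-study variance
w_re = [1 / (s**2 + tau2) for s in se]
re = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"Fixed effect RR:   {math.exp(fe):.2f} "
      f"(95% CI {math.exp(fe - 1.96*se_fe):.2f} to {math.exp(fe + 1.96*se_fe):.2f})")
print(f"Random effects RR: {math.exp(re):.2f} "
      f"(95% CI {math.exp(re - 1.96*se_re):.2f} to {math.exp(re + 1.96*se_re):.2f})")
print(f"Q = {q:.2f} on {df} df; I-squared = {i2:.0f}%; tau-squared = {tau2:.3f}")
```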

3.8.3 If any of the relevant studies listed in section 3.1 are excluded from the meta-analysis, the reasons for doing so should be explained. The impact that each excluded study has on the overall meta-analysis should be explored.

3.9 Indirect and mixed treatment comparisons

3.9.1 In a table provide a summary of the trials used to carry out the indirect comparison or mixed treatment comparison. There is a suggested table format below. When there are more than 2 treatments in the comparator sets for synthesis, include a network diagram. For studies involving comparator treatments, state whether the publication was included in the published NICE technology appraisal(s) for each comparator treatment.

Table X Summary of the trials used to carry out the indirect or mixed treatment comparison

| References of trial | Intervention A | Intervention B | Intervention C | Intervention D |
|---------------------|----------------|----------------|----------------|----------------|
| Trial 1             |                |                |                |                |
| Trial 2             |                |                |                |                |
| Trial 3             |                |                |                |                |
| Trial 4             |                |                |                |                |
| [Add more rows as needed] |          |                |                |                |

3.9.2 If the table or network diagram provided does not include all the trials that were identified in the search strategy, the rationale for exclusion should be provided.

Full details of the methodology for the indirect comparison or mixed treatment comparison should be presented in appendix D, including:

  • the methods used to identify and select the studies

  • methods and outcomes of the included studies

  • quality assessment of the included studies

  • methods of analysis of the indirect comparison or mixed treatment comparison

  • justification for the choice of random or fixed effects model.

See section 3.1 of the user guide for full details of the information required in appendix D.

3.9.3 Provide the results of the analysis. For examples of how to present the results of the analysis, see the NICE Decision Support Unit technical support documents 1 to 3.

3.9.4 Provide the results of the statistical assessment of heterogeneity. The degree of heterogeneity, and the reasons for it, should be explored as fully as possible.

3.9.5 If there is doubt about the relevance of particular trials, present separate sensitivity analyses in which these trials are excluded.

3.9.6 Discuss any heterogeneity between results of pairwise comparisons and inconsistencies between the direct and indirect evidence on the technologies.

3.10 Adverse reactions

3.10.1 Evidence from comparative RCTs and regulatory summaries is preferred, but findings from non-comparative trials may sometimes be relevant. For example, post-marketing surveillance data may demonstrate that the technology shows a relative lack of adverse reactions commonly associated with the comparator, or that the occurrence of adverse reactions is not statistically significantly different from that associated with other treatments.

3.10.2 In a table, summarise the adverse reactions reported in the studies identified in section 3.2. For each intervention group, give the number with the adverse reaction and the frequency, the number in the group, and the percentage with the adverse reaction. Then present the relative risk and risk difference and associated 95% confidence intervals for each adverse reaction.
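For illustration only, with hypothetical group sizes and event counts, the requested layout could be generated as in the sketch below; the relative risk and risk difference formulas are the same normal-approximation ones used in the section 3.6 sketch.

```python
# Illustrative sketch only: per-group adverse reaction counts with relative
# risk and risk difference (normal-approximation 95% CIs). All data are
# hypothetical placeholders.
import math

n_int, n_comp = 210, 205                   # hypothetical group sizes
reactions = {"Nausea": (34, 21),           # hypothetical event counts:
             "Headache": (18, 17),         # (intervention, comparator)
             "Rash": (9, 3)}

for name, (a, c) in reactions.items():
    r1, r0 = a / n_int, c / n_comp
    rr = r1 / r0
    se_log_rr = math.sqrt(1/a - 1/n_int + 1/c - 1/n_comp)
    rr_lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    rr_hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    rd = r1 - r0
    se_rd = math.sqrt(r1 * (1 - r1) / n_int + r0 * (1 - r0) / n_comp)
    rd_lo, rd_hi = rd - 1.96 * se_rd, rd + 1.96 * se_rd
    print(f"{name}: {a}/{n_int} ({100*r1:.1f}%) vs {c}/{n_comp} ({100*r0:.1f}%); "
          f"RR {rr:.2f} ({rr_lo:.2f} to {rr_hi:.2f}); "
          f"RD {rd:.3f} ({rd_lo:.3f} to {rd_hi:.3f})")
```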

In appendix F, provide details of any studies that report additional adverse reactions to those reported by the studies identified in section 3.2. Include the following:

  • Details of the methodology used for the identification, selection and quality assessment of the studies.

  • Examples of search strategies for specific adverse reactions or generic adverse reaction terms. Key aspects of quality criteria for adverse reaction data can be found in Systematic reviews: CRD's guidance for undertaking reviews in health care (University of York Centre for Reviews and Dissemination). Exact details of the search strategy used and a complete quality assessment for each trial should also be provided in appendix F.

  • Details of the methodology of the studies.

  • Adverse reactions. In a table provide details of adverse reactions for each intervention group. For each group, give the number with the adverse reaction and the frequency, the number in the group, and the percentage with the adverse reaction. Then present the relative risk and risk difference and associated 95% confidence intervals for each adverse reaction.

3.10.3 Provide a brief conclusion of the safety of the technology in relation to the decision problem. Comment on the similarities and differences between the technology under appraisal and its comparator(s), with respect to adverse reactions. Provide evidence to confirm whether any differences are statistically significant or clinically meaningful.

3.11 Conclusions about comparable health benefits and safety

3.11.1 Draw conclusions from the evidence supporting superiority, similarity, non-inferiority or equivalence of the technology compared with the comparator(s) specified in the final scope issued by NICE, including any subgroups. Focus on the key outcomes and measures that were used in the cost-effectiveness analysis of the NICE technology appraisal(s) of the comparator(s) (detailed in section 2).

3.11.2 If there are differences in effectiveness between the technology and its comparator(s), comment on whether these are clinically meaningful and provide supporting evidence.

3.11.3 Provide evidence on the clinical or biological plausibility of similarities in health benefits between the technology and the comparator(s).

3.11.4 Refer back to the committee's preferred clinical assumptions from the NICE technology appraisal(s) of the comparator(s) outlined in section 2.1, focusing on key drivers of the cost-effectiveness results, and comment on whether similar assumptions can be made for the technology under appraisal. For example:

  • Issue from previous appraisal: duration of treatment effect was a key driver of cost effectiveness.

  • Committee's conclusion: duration of treatment effect wanes over time.

  • Assumption for new technology and justification: treatment effect also wanes over time similar to the original technology (cross reference the section of the submission that provides supporting evidence).

3.11.5 Describe and explain any uncertainties in the evidence informing your conclusions.

3.12 Ongoing studies

3.12.1 Provide details of all completed and ongoing studies that are expected to provide additional evidence within the next 12 months for the indication being appraised.



[1] Although the Decision Support Unit is funded by NICE, technical support documents are not formal NICE guidance or policy.