NICE process and methods
9 Evidence considered by the Committee
Evidence and commentary are considered by the Committee at 2 stages in the assessment of a procedure:
- when formulating draft recommendations for consultation
- when arriving at their final recommendations.
The evidence that the Committee uses to make its draft decision is mainly from published sources. 'Commentary' refers to the variety of opinion and information from unpublished sources that may be relevant to a procedure (see section 10).
Selection of evidence for the interventional procedures programme is influenced by the following factors:
- NICE interventional procedures guidance addresses only efficacy and safety, not cost effectiveness.
- Depending on the circumstances, either active treatment or sham (placebo) is the preferred comparator in assessing the efficacy and safety of a procedure.
- Detailed recommendations on different indications and patient subgroups are not usually possible because the published data are insufficient.
- Randomised controlled trials (RCTs) are often not available. Non‑randomised comparative studies, case series and case reports may therefore be the main sources of data.
The following sections describe how NICE identifies and selects the evidence for presentation to the Committee. This is done in the form of an overview (see section 9.3), which the Committee uses as the basis for its draft recommendations on a procedure.
9.1 Literature search
The literature search is carried out by the guidance information services team. The aim is to identify as much evidence on the procedure as possible using a comprehensive search strategy, but across a limited number of sources, in line with the rapid nature of guidance development. Developing the search strategy is an iterative process: changes are made to the strategy according to the results retrieved, based on discussions between the guidance information services team and the programme's technical team.
Because of the nature of procedures notified to the programme, there are rarely directly relevant thesaurus headings (MeSH, EmTree). Often a given procedure has no established terminology and is referred to in a variety of ways in different publications. Free‑text searching (words in titles and abstracts) may therefore be more important, and appropriate synonyms, abbreviations and alternative spellings are sought and used extensively in the search strategy.
The search focuses on identifying relevant background information, systematic reviews, health technology assessments (rarely available) and, most importantly, primary research and ongoing or newly reported research in the form of conference proceedings.
Evidence included
Searches are conducted using the sources, and according to the methodology, set out below.
Background information
- US Food and Drug Administration's Manufacturer and User Facility Device Experience (MAUDE) database
- Australian Safety and Efficacy Register of New Interventional Procedures – Surgical (ASERNIP‑S)
- general internet search.
Systematic reviews and health technology assessments
- Cochrane Database of Systematic Reviews
- Health Technology Assessment Database.
Primary research evidence
- Medline
- EMBASE
- Cochrane Central Register of Controlled Trials (CENTRAL)
- Medline In-Process and other non‑indexed citations (Premedline)
- PubMed
- Cumulative Index to Nursing and Allied Health Literature (CINAHL), only when appropriate.
Ongoing research
Databases used include:
- ClinicalTrials.gov
- World Health Organization International Clinical Trials Registry
- National Institute for Health Research Clinical Research Network Coordinating Centre Portfolio Database.
Use of methodological filters
Methodological filters such as the Cochrane Highly Sensitive Search Strategy are seldom used. This is because the evidence base is rarely large enough to warrant such restrictions and because, at the time of the programme's assessment, interventional procedures have rarely been studied in controlled trials. However, a filter based on study design may be applied for some procedures, for example when the efficacy of an established procedure is being called into question by new information. A filter for safety outcomes may be applied for some procedures when there is a large body of evidence that includes systematic reviews, and when complications (morbidities) have been identified as a particular concern.
Language restrictions
Searches include publications in any language. When there is sufficient evidence available in English, selection is limited to English‑language publications. Translation into English of full articles published in languages other than English is only requested by the technical team if the outcomes reported in the non‑English‑language literature differ in nature from those reported in the English‑language literature, or are reported with substantially different frequency – particularly for safety outcomes. Because of resource and timing constraints, NICE may not be able to obtain English translations, even of relevant studies.
Such translations are treated in exactly the same way as English‑language studies (that is, they are included in the evidence summary table of the overview if they are considered to be among the most valid and relevant studies).
Date restrictions
Date restrictions are not normally used when searching for literature on interventional procedures. They are applied only in particular situations, for example, when a technology has evolved, when there is an exceptionally large amount of literature, or when a good‑quality systematic review or health technology assessment exists that has not excluded studies on the basis of study design. When such a review or assessment exists, the search is restricted to studies published after the year of publication of the most recent study it includes.
Timing
The literature search is conducted as close to the relevant Committee meeting as possible, so that it is as up to date as possible. If there are any delays to the assessment of the procedure, a further search (using the same search terms) is conducted shortly before the relevant Committee meeting in case new literature has emerged.
9.2 Selecting the evidence to present to the Committee
The main aim of evidence selection is to highlight the most valid and relevant studies for detailed presentation to the Committee. These studies are presented as part of the evidence summary tables in the overview that is prepared for the procedure. To conduct rapid assessments of novel procedures, the interventional procedures programme limits the studies presented in detail in these tables to those most likely to be relevant and informative. In general, all well‑designed research studies, those reporting on large numbers of patients, those with long follow‑up (if length of follow‑up is relevant to outcomes of the procedure) and any reports of additional important safety outcomes are included. Typically, the number of studies in the tables is 6–8.

The initial screening for eligible studies is done using abstracts downloaded from electronic databases. A study is eligible for inclusion if it includes patients with the appropriate indication, describes the relevant intervention and reports efficacy or safety outcome data, particularly if those outcomes were identified as being important in the brief. If a study cannot be reasonably excluded on the basis of the abstract alone, its eligibility is assessed using the full text of the publication.
The remaining eligible studies (those not included in the evidence summary table) are listed in an appendix, with brief details of each study and its outcomes. The aim of this appendix is to present the overall picture of evidence on the procedure and to allow all relevant studies to be listed without making the overview excessively large. It is possible, however, that other potentially relevant studies may not be included in the appendix because they were not identified by the literature search. Such omissions normally relate to the date on which the literature search was conducted or the nature of the search terms, particularly for novel procedures. Consultees are encouraged to tell NICE about relevant studies during consultation, and any relevant studies highlighted at consultation are incorporated into the evidence overview.
Studies that do not contain clinical information on efficacy and safety outcomes (for example, narrative review articles, animal studies or studies reporting only on physiological outcomes) are not included in the overview, and are therefore not considered by the Committee.
Once all the studies identified in the literature search have been assessed for eligibility, the reference lists of the eligible studies are checked for other studies that may not have been identified by the search strategy. If many potentially eligible studies are identified through this process, the original search strategy is modified and the search is repeated. The newly identified studies are incorporated into the overview as described above.
For some procedures, selecting the studies to include in the overview – and for further appraisal in the evidence summary table – may be a complex and difficult task. This is because some studies have to take priority over others, based on a judgement about their relevance and validity. A particular difficulty arises when there are a disproportionate number of published studies in relation to:
- different subgroups of patients treated with the same procedure
- different devices used for the same procedure, or technical variations of a procedure
- different outcomes (for example, some studies reporting only efficacy and some only safety outcomes; some studies reporting quality‑of‑life outcomes, others not).
In this context, the programme's technical team may take the following actions:
- prioritise a particular subgroup of studies, chosen to provide a balanced view of the evidence
- propose splitting the overview, so that more than 1 piece of guidance is produced.
This approach has to be considered in the context of the need for effective use of programme resources and Committee time, and potential usefulness to the NHS of the resulting guidance. The technical team refer to the brief when prioritising studies.
In practice, judgements about selection are informed by these considerations. Analysts may seek a second opinion from other members of the technical team, and any disagreement about the inclusion or exclusion of a particular study is resolved by consensus. If consensus is not possible, a third opinion is sought from another member of the technical team, usually someone more senior. The person offering the third opinion makes the final decision.
In general, studies that are designed and executed in a way that is most likely to minimise bias are included in the evidence summary table. A number of checks are used to establish whether the right studies have been selected for inclusion, including using the expertise and knowledge of the specialist advisers, the notifier of the procedure, the specialist Committee member and, ultimately, consultees who respond to the consultation on the draft guidance.
The treatment effect of a technology can be summarised as the difference between the health state or quality of life that would, on average, be experienced by patients having the technology, and the health state or quality of life of the same group were they to have standard or sham (placebo) treatment. The following criteria are considered when selecting evidence on safety and efficacy for the overview:
General quality considerations
Quality of evidence relates to the methods used to minimise bias within a study design and in the conduct of a study.
Study design
Levels of evidence are a convenient way to summarise study design according to its capacity to minimise bias. The highest value has traditionally been placed on evidence from systematic reviews or meta‑analyses of RCTs, or from 1 or more well‑designed and executed RCTs. However, the level of evidence is only 1 dimension when considering validity and relevance. Depending on the procedure and the most important outcomes being considered, non‑randomised studies may be more informative, for example, for safety outcomes.
Study size
Assuming that other considerations about study type and methods are equal, priority is usually given to studies that include larger numbers of patients. This is important so that accurate estimates of efficacy and safety can be given, and to maximise the chance of identifying less frequent safety outcomes.
Follow-up duration and completeness
Assuming that other considerations about study type and size are equal, priority is usually given to studies with longer and more complete follow‑up. This is particularly relevant for assessing efficacy and safety in the context of conditions such as cancer and conditions that cause long‑term disability, and for procedures relating to implantable materials or devices. Prolonged follow‑up is also important to detect rare adverse events after procedures.
Patient-focused efficacy and safety outcomes
Patient-focused final, as opposed to surrogate, outcomes are considered particularly important when judging the efficacy of a procedure. For example, evidence that a procedure reduces tumour size carries less weight than evidence about benefits such as enhanced survival or improved quality of life.
Because safety is a key feature of the programme's methods, studies that systematically report adverse events are sought. Safety outcomes are often not well addressed in randomised trials. Large numbers of treated patients are needed to reliably detect uncommon yet serious adverse events. Large case series, surveys, registers and case reports may provide valuable information, for example, for procedures for which there is concern about the potential for rare but serious complications. Although these sources lack the data needed to calculate incidence, they provide information that can be highly relevant. This is particularly the case for serious adverse events that occur with procedures used to treat conditions that have little impact on quality of life or that have a good prognosis.
Procedures for which no comparator (controlled) data are reported
Sometimes, all the evidence for a procedure is from non‑comparative studies (for example, reports of case series). Selected evidence about key efficacy and safety outcomes of established practice may then be presented.
Procedures involving a diagnostic or monitoring test
Some interventional procedures are carried out to obtain diagnostic or monitoring information during the procedure or to enable information to be collected subsequently (for example, carrying out a biopsy or implanting a telemetric device). Although a standard overview is produced for such procedures, there are special considerations in relation to the assessment and Committee decision‑making.
Evidence about diagnostic tests relates to:
- analytical validity – whether the test detects the biomarker of interest in a laboratory setting
- clinical validity – whether the test detects changes in disease state or risk in a clinical setting
- clinical utility (diagnostic and therapeutic yield) – whether the test improves patient outcomes.
Evidence on diagnostic tests largely consists of studies of analytical and clinical validity. Studies showing the impact of diagnostic tests on patient outcomes are less commonly available. All relevant evidence on analytical and clinical validity, and on clinical utility, is included in the efficacy section of the overview. Specialist advice on clinical utility is collected to support the Committee's interpretation of the relevance of the evidence on analytical and clinical validity.
Inclusion of unpublished or non-peer-reviewed data
Efficacy data
Efficacy data that are unpublished or not peer reviewed are not normally selected for presentation to the Committee. This includes conference abstracts, which are not normally considered adequate to support decisions on efficacy. If an abstract report relates to a major and potentially relevant study, then efforts are made to obtain a peer‑reviewed paper of the findings as early as possible. Papers containing relevant evidence that have been accepted for publication are included, provided that the publication date is before the guidance is published.
The programme will use unpublished data from registers if:
- they arise from a data collection exercise recommended in interventional procedures guidance and
- the data collection exercise meets the register standards presented elsewhere in this manual.
Safety data
Data on safety, however immature, may come from abstracts, companies, registers, specialist advisers' reports and other miscellaneous sources. The programme team always brings such data to the Committee's attention, regardless of source, when safety issues relating to serious adverse events are identified. Unpublished evidence is used when this shows safety outcomes that have not been reported in published sources.
9.3 The overview
General approach to the overview
Studies included in the overview often use different terminology to report identical or similar outcomes. For example, erectile dysfunction may also be described as male sexual dysfunction or impotence; insomnia might also be called sleep disturbance. If there is no universally accepted nomenclature for signs and symptoms, the programme team may opt to 'translate' specific signs and symptoms into more widely used or reported terms. The original term is introduced, with an explanation about its subsequent substitution, to improve readability and help with comparisons between studies. Symptom grading scales reported or referred to in the studies are described in the overview provided they are commonly recognised. No pooling or meta‑analysis of data is done by the programme team.
If a denominator is less than 10, the rate is given as a fraction (r/n), without a % value. In studies where only a percentage (x%) is provided in the primary study report, the fraction (r/n) is not usually back‑calculated from assumed values.
Confidence intervals around rate values are not usually calculated; they may be included in the overview if reported in the primary report.
It is usually appropriate to present statistical comparisons in the overview when reporting the results of studies that contain comparative data. When a reported comparative outcome is considered important enough for inclusion in the overview, the p value reported in the primary study is also given. If no significance level is reported, this is stated as 'not reported' or 'NR'.
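Purely as an illustration of the reporting conventions described above (this is not part of the programme's formal methods), the following minimal sketch shows how a rate and a significance level could be presented; the helper functions format_rate and format_p_value are hypothetical names introduced here for illustration only.

```python
from typing import Optional


def format_rate(events: int, denominator: int) -> str:
    """Report a rate as a fraction (r/n); add a % value only when the denominator is 10 or more."""
    fraction = f"{events}/{denominator}"
    if denominator < 10:
        return fraction  # small denominator: fraction only, no % value
    percentage = 100 * events / denominator
    return f"{fraction} ({percentage:.0f}%)"


def format_p_value(p: Optional[float]) -> str:
    """If no significance level is reported in the primary study, state 'NR'."""
    return "NR" if p is None else f"p={p}"


# Example usage (hypothetical figures)
print(format_rate(3, 8))     # "3/8"
print(format_rate(12, 60))   # "12/60 (20%)"
print(format_p_value(None))  # "NR"
```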
Although some interventional procedures assessed by NICE involve implanting or using a medical device, the programme does not evaluate the device itself: the focus is on the procedure. The programme only considers the efficacy and safety of a procedure using devices that are CE marked. Evidence about the procedure relating to devices without CE marking is selected for the overview if the evidence meets the inclusion criteria. If proprietary names of medical devices are specified in the published studies, these names may be included in the overview of the evidence, but interventional procedures guidance does not name companies' devices or brands.
Formal submissions are not used by the programme. However, a search is done for companies producing devices that may be used to do the procedure so that NICE can make a structured information request at the beginning of the assessment of the procedure (see section 10.2).
If NICE is made aware of relevant material not in the public domain, it will consider whether to include this in the overview using the normal approach to selection of evidence for the overview.
Evidence summary table
The evidence summary table included in the overview comprises:
- study details
- analysis (brief critical appraisal)
- efficacy outcomes
- safety outcomes.
Study details
Study details are usually structured as follows (details are included when provided by the primary study report):
- reference (first author – surname and initials – and year)
- study type/design, that is:
  - health technology assessment or systematic review (of RCTs or non‑RCT studies)
  - RCT
  - non-RCT
  - case series
  - case report
- country (or countries) where study was done
- recruitment period
- study population and number (total number of patients and, when relevant, number of patients treated with the procedure of interest)
- age and sex of patients
- patient selection criteria
- technique (details of procedure done) and comparator (where relevant)
- length of follow‑up (mean or median when stated)
- details of conflicts of interest declared by the authors.
Critical appraisal of the evidence (analysis)
The critical appraisal of the studies in the overview identifies issues that might influence the interpretation of the evidence. The critical appraisal addresses key features of the evidence relating to study design, the quality of the study, statistical analysis, effect size and relevance of the outcomes. While several critical appraisal checklists exist, it is difficult to be prescriptive about using such lists because the relative importance of the issues varies according to the procedure, the indication and the available evidence.
The programme analyst may comment on the following issues when reporting on a primary study or systematic review of primary studies:
- patient selection
- patient enrolment or recruitment method (for example, whether it was continuous)
- previous operator training for the procedure
- previous volume of experience of operators or participating units with the procedure
- relevance of outcomes measured
- validity and reproducibility of measurement of outcomes (for example, blinding)
- appropriateness of analysis (for example, intention‑to‑treat analysis)
- completeness of follow‑up, for any studies involving post‑procedure follow‑up
- reasons for loss to follow‑up
- general considerations about validity and generalisability of the studies, when appropriate
- inclusion of the same patients in more than 1 study
- multiple reporting of a single study
- other potential sources of bias.
The evidence summary table in the overview presents the efficacy and safety outcomes reported in the studies. Outcomes are grouped under subheadings where appropriate. Safety, but not efficacy, data from conference abstracts may be presented in the evidence summary table.
The overview also contains advice from specialist advisers and commentary from patient commentators, which are described in section 10.
Sometimes, the volume or complexity of evidence (or the complexity of the procedure) makes it too difficult to present to the Committee in the format of an overview. In this case, NICE commissions an External Assessment Centre to produce a systematic review.
9.4 Systematic reviews
Reasons for commissioning a systematic review
After considering the brief and the available literature, the programme team may decide to refer the procedure to an External Assessment Centre for a systematic review. Criteria used to help identify procedures for which a systematic review might be appropriate include:
- when the evidence base is too large to present in the format of a standard overview
- when the procedure has the potential to cause serious adverse events and the evidence therefore needs complex statistical analysis to enable the Committee to make a decision
- when the procedure has more than 1 indication or involves more than 1 technique.
Occasionally, after considering the overview and specialist advice, the Committee may request a systematic review. This may occur, for example, when the Committee has found that the evidence is difficult to interpret, or considers that it leads to apparently contradictory conclusions.
When a systematic review is needed, NICE selects an External Assessment Centre to carry it out. The systematic review normally takes 6 months to complete, and the standard timeline for developing guidance does not apply. Revised timelines for the development of guidance on the procedure are presented on NICE's website.
Process for carrying out a systematic review
A brief is prepared by an External Assessment Centre and agreed by NICE and the Committee. It describes the aims of the systematic review and the methodology to be used, including a table setting out the relevant population, intervention, comparator and outcomes (PICO).
External Assessment Centres do systematic reviews using methods proposed by the Centre for Reviews and Dissemination and the Cochrane Collaboration. Systematic reviews include evidence from all available relevant scientific sources, including published research and conference abstracts, with the aim of providing the most up‑to‑date body of information. Unpublished sources of information are also sought and, if they are used, this is stated clearly in the report. The review process incorporates a formal assessment of the methodological quality of included full‑text studies, and indicates if material is unpublished.
The systematic review and related documents are published on NICE's website with the consultation document at the time of consultation.
For each systematic review, the External Assessment Centre seeks clinical advice specific to the procedure(s) under assessment. The Centre is responsible for obtaining this advice. In preparing the systematic review, the Centre may also need input from appropriate individuals and organisations, including:
- companies, if a medical device or devices are involved in the procedure
- patient groups, for example, in the interpretation of patient‑reported outcomes
- regulators such as the Medicines and Healthcare products Regulatory Agency and the US Food and Drug Administration, in relation to the regulatory status of products and safety reports.