
    The content on this page is not current guidance and is only for the purposes of the consultation process.

    1 Incorporating economic evaluation

    1.1 Introduction

    This chapter describes the role of economics in developing NICE guidelines, and suggests possible approaches to use when considering and incorporating economic evidence. It also sets out the principles for conducting new economic modelling if the published evidence is insufficient or not applicable for assessing the cost effectiveness of key interventions, services or programmes.

    1.2 The role of economics in guideline development

    Economic evaluation compares the costs and consequences of alternative courses of action. Formally assessing the cost effectiveness of an intervention, service or programme can help decision-makers ensure that maximum gain is achieved from limited resources. If resources are used for interventions or services that are not cost effective, the population as a whole gains fewer benefits.

    It is particularly important for committee members to understand that economic analysis is not only about estimating the resource consequences of a guideline recommendation, but is concerned with evaluating costs in relation to the benefits (including benefits to quality of life) and harms of alternative courses of action. NICE's principles usually take precedence over economics.

    Guideline recommendations should be based on the balance between the estimated costs of the interventions or services and their expected benefits compared with an alternative (that is, their 'cost effectiveness'). In general, the committee should be increasingly certain of the cost effectiveness of a recommendation as the cost of implementation increases.

    Defining the priorities for economic evaluation should start during scoping of the guideline, and should continue when the review questions are being developed. Questions on economic issues mirror the review questions on effectiveness, but with a focus on cost effectiveness. Health economic input in guidelines typically involves 2 stages. The first is a literature review of published economic evidence to determine whether the review questions set out in the scope have already been assessed by economic evaluations. Reviews of economic evidence identify, present and appraise data from studies of cost effectiveness. They may be considered as part of each review question undertaken for a guideline. If existing economic evidence is inadequate or inconclusive for 1 or more review questions, then the second stage may involve a variety of economic modelling approaches such as adapting existing economic models or building new models from existing data.

    The committee may require more robust evidence on the effectiveness and cost effectiveness of recommendations that are expected to have a substantial impact on resources. If there is no robust evidence of cost effectiveness to support these recommendations, economic analysis must be done, or any uncertainties must be offset by a compelling argument in favour of the recommendation. However, the cost impact or savings potential of a recommendation should not be the sole reason for the committee's decision.

    Resource impact is considered in terms of the additional cost or saving above that of current practice for each of the first 5 years of implementing the guideline. Resource impact is defined as substantial if:

    • implementing a single guideline recommendation in England costs more than £1 million per year or

    • implementing the whole guideline in England costs more than £5 million per year.

    The aim is to ensure that the guideline does not introduce a cost pressure into the health and social care system unless the committee is convinced of the benefits and cost effectiveness of the recommendations (NICE 2021).

    Reviews of economic evidence and any economic modelling are quality assured by the developer and a member of NICE staff with responsibility for quality assurance. The nature of the quality assurance will depend on the type of economic evaluation, but will consider the evaluation in terms of the appropriate reference case and be based on a methodology checklist (for example, those in the appendix on appraisal checklists, evidence tables, GRADE and economic profiles).

    1.3 Types of economic analysis

    Common types of health economic analysis are summarised in box 7.1.

    Box 7.1 Types of economic analysis

    Cost-minimisation analysis: a determination of the least costly among alternative interventions that are assumed to produce equivalent outcomes

    Cost-effectiveness analysis (CEA): a comparison of costs in monetary units with outcomes in quantitative non-monetary units (for example, reduced mortality or morbidity)

    Cost–utility analysis (CUA): a form of cost-effectiveness analysis that compares costs in monetary units with outcomes in terms of their utility, usually to the patient, measured in QALYs

    Cost–consequences analysis: a form of cost-effectiveness analysis that presents costs and outcomes in discrete categories, without aggregating or weighting them

    Cost–benefit analysis (CBA): a comparison of costs and benefits, both of which are quantified in common monetary terms

    Cost–utility analysis is a form of cost-effectiveness analysis that uses utility as a common outcome. It considers people's quality of life and the length of life they will gain as a result of an intervention or a programme. The health effects are expressed as quality-adjusted life years (QALYs), an outcome that can be compared between different populations and disease areas. Costs of resources, and their valuation, should be related to the prices relevant to the sector.
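    For illustration (standard health-economics notation rather than additional guidance from this manual), the QALYs generated by an option can be written as the sum of the time spent in each health state weighted by the utility of that state, and cost effectiveness is then summarised as the incremental cost-effectiveness ratio (ICER) of the intervention against its comparator:

        \[
        \text{QALYs} = \sum_{s} u_s \, t_s
        \qquad\qquad
        \text{ICER} = \frac{C_{\text{intervention}} - C_{\text{comparator}}}{E_{\text{intervention}} - E_{\text{comparator}}} = \frac{\Delta C}{\Delta E}
        \]

    where \(u_s\) is the utility of health state \(s\) (anchored at 1 for full health and 0 for death), \(t_s\) is the (discounted) time spent in that state, and \(E\) denotes total QALYs.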

    Cost–utility analysis is required routinely by NICE for the economic evaluation of health-related interventions, programmes and services, for several reasons:

    • When used in conjunction with an NHS and PSS perspective, it provides a single yardstick or 'currency' for measuring the impact of interventions. It also allows interventions to be compared so that resources may be allocated more efficiently.

    • Where possible, NICE programmes use a common method of cost-effectiveness analysis that allows comparisons between programmes.

    If a cost–utility analysis is not possible (for example, when outcomes cannot be expressed using a utility measure such as the QALY), a cost–consequences analysis may be considered. Cost–consequences analysis can consider all the relevant health and non-health effects of an intervention across different sectors and report them without aggregation. A cost–consequences analysis that includes most or all of the potential outcomes of an intervention will be more useful than an analysis that only reports 1 or 2 outcomes.

    A cost–consequences analysis is useful when different outcomes cannot be incorporated into an index measure. It is helpful to produce a table that summarises all the costs and outcomes and enables the options to be considered in a concise and consistent manner. Outcomes that can be monetised are quantified and presented in monetary terms. Some effects may be quantified but cannot readily be put into monetary form (for more details, see the Department for Transport's Transport Analysis Guidance [TAG] unit A2.1, 2019). Some effects cannot readily be quantified (such as reductions in the degree of bullying or discrimination) and should be considered by decision-making committees as part of a cost–consequences analysis alongside effects that can be quantified.

    All effects (even if they cannot be quantified) and costs of an intervention are considered when deciding which interventions represent the best value. Effectively, cost–consequences analysis provides a 'balance sheet' of outcomes that decision-makers can weigh up against the costs of an intervention (including related future costs). However, the outcomes are not separately valued, only quantified; so the study takes no view on whether the cost is worth incurring, only focusing on the cost of different methods of achieving units of outcome.

    Cost-effectiveness analysis uses a measure of outcome (a life year saved, a death averted, a patient-year free of symptoms) and assesses the cost per unit of achieving this outcome by different means. Again, the outcome is not separately valued, only quantified; so the study takes no view on whether the cost is worth incurring, only focusing on the cost of different methods of achieving units of outcome.

    Cost-minimisation analysis is the simplest form of economic analysis, which can be used when the health effects of an intervention are the same as those of the status quo, and when there are no other criteria for whether the intervention should be recommended. For example, cost-minimisation analysis could be used to decide whether a doctor or nurse should give routine injections when it is found that both are equally effective at giving injections (on average). In cost-minimisation analysis, an intervention is cost effective only if its net cost is lower than that of the status quo. The disadvantage of cost-minimisation analysis is that the health effects of an intervention often cannot be considered equal to those of the status quo.

    Cost–benefit analysis considers health and non-health effects but converts them into monetary values, which can then be aggregated. Once this has been done, 'decision rules' are used to decide which interventions to undertake. Several metrics are available for reporting the results of cost–benefit analysis. Two commonly used metrics are the 'benefit cost ratio' (BCR) and the 'net present value' (NPV) – see the Department for Transport's Transport Analysis Guidance (TAG) unit A1.1 (2021) for more information.
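    For illustration (these are the standard definitions used in appraisal guidance such as TAG, not additional text from this manual), with benefits and costs expressed in present-value (PV) terms:

        \[
        \text{NPV} = \text{PV(benefits)} - \text{PV(costs)}
        \qquad\qquad
        \text{BCR} = \frac{\text{PV(benefits)}}{\text{PV(costs)}}
        \]

    An option with a positive NPV (equivalently, a BCR greater than 1) generates more monetised benefit than it costs.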

    1.4 The reference case

    A guideline may consider a range of interventions, commissioned by various organisations and resulting in different types of benefits (outcomes). It is crucial that reviews of economic evidence and economic evaluations undertaken to inform guideline development adopt a consistent approach depending on the type of interventions assessed. The 'reference case' specifies the methods considered consistent with the objective of maximising benefits from limited resources. NICE is interested in benefits to patients (for interventions with health outcomes in NHS and personal social services [PSS] settings), to individuals and community groups (for interventions with health and non-health outcomes in public sector settings) and to people using services and their carers (for interventions with a social care focus).

    Choosing the most appropriate reference case depends on whether or not the interventions undergoing evaluation:

    • are commissioned by the NHS and PSS alone or by any other public sector body

    • focus on social care outcomes.

    The reference case chosen should be agreed for each decision problem (relevant to a review question), should be set out briefly in the scope and detailed in the economic plan. A guideline may use a different reference case for different decision problems if appropriate (for example, if a guideline reviews interventions with non‑health- and/or social care-related outcomes). This should be agreed with NICE staff with responsibility for quality assurance before any economic evaluation is conducted.

    Table 7.1 summarises the reference case according to the interventions being evaluated.

    Table 7.1 Summary of the reference case

    Each element of assessment is listed below, with the approach for each of the three reference cases: interventions funded by the NHS and personal social services (PSS) with health outcomes ('NHS and PSS'); interventions funded by the public sector with health and non-health outcomes ('public sector'); and interventions funded by the public sector with a social care focus ('social care').

    Defining the decision problem
    • All reference cases: the scope developed by NICE.

    Comparator
    • NHS and PSS: interventions routinely used in the NHS, including those regarded as current best practice.
    • Public sector: interventions routinely used in the public sector, including those regarded as best practice.
    • Social care: interventions routinely delivered by the public and non-public social care sector. (Social care costs are the costs of interventions which have been commissioned or paid for, in full or in part, by non-NHS organisations.)

    Perspective on costs
    • NHS and PSS: NHS and PSS; for PSS, include only care that is funded by the NHS (such as 'continuing healthcare' or 'funded nursing care'). Costs borne by people using services that are reimbursed by the NHS or PSS should also be included.
    • Public sector: public sector – often reducing to local government; other (where appropriate), for example, employer. Costs borne by people using services that are reimbursed by the public sector should also be included.
    • Social care: public sector – often reducing to local government; other (where appropriate), for example, employer. Costs borne by people using services and the value of unpaid care may also be included if they contribute to outcomes.

    Perspective on outcomes
    • NHS and PSS: all direct health effects, whether for people using services and/or, when relevant, other people (principally family members and/or informal carers).
    • Public sector: all direct health and relevant non-health effects on individuals. For local government and other settings, where appropriate, non-health effects may also be included.
    • Social care: all direct health and relevant non-health effects on people for whom services are delivered (people using services and/or carers).

    Type of economic evaluation
    • NHS and PSS: cost–utility analysis.
    • Public sector: cost–utility analysis (base case); cost-effectiveness analysis; cost–consequences analysis; cost–benefit analysis; cost-minimisation analysis.
    • Social care: cost–utility analysis (base case); cost-effectiveness analysis; cost–consequences analysis; cost–benefit analysis; cost-minimisation analysis.

    Synthesis of evidence on outcomes
    • All reference cases: based on a systematic review, with a preference for treatment effects from randomised controlled trials (RCTs).

    Time horizon
    • All reference cases: long enough to reflect all important differences in costs or outcomes between the interventions being compared.

    Measuring and valuing health effects
    • All reference cases: quality-adjusted life years (QALYs); the EQ-5D-3L is the preferred measure of health-related quality of life in adults (see the NICE position statement on the EQ-5D-5L).

    Measure of non-health effects
    • NHS and PSS: not applicable.
    • Public sector: where appropriate, to be decided on a case-by-case basis.
    • Social care: capability or social care-related quality of life measures where an intervention results in both health and capability or social care outcomes.

    Source of data for measurement of quality of life
    • All reference cases: reported directly by people using services and/or carers.

    Source of preference data for valuation of changes in health-related quality of life
    • All reference cases: representative sample of the UK population.

    Discounting
    • All reference cases: the same annual rate for both costs and health effects (currently 3.5%).

    Equity considerations: QALYs
    • All reference cases: a QALY has the same weight regardless of who receives the health benefit.

    Equity and health inequality considerations: other
    • All reference cases: equity and health inequality considerations relevant to specific topics, and how these were addressed in the economic evaluation, must be reported.

    Evidence on resource use and costs
    • All reference cases: costs should relate to the perspective used and should be valued using the prices relevant to that perspective.
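    For illustration of the discounting element above (a standard present-value calculation rather than additional guidance), a cost or health effect occurring t years in the future is weighted as:

        \[
        \text{present value} = \frac{\text{future value}}{(1 + r)^{t}}, \qquad r = 0.035
        \]

    so, for example, a cost of £1,000 incurred 10 years after the decision point contributes about £1,000 / 1.035^{10} ≈ £709 to the discounted total.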

    Interventions funded by the NHS and PSS with health outcomes

    For decision problems where the intervention evaluated is solely commissioned by the NHS and does not have a clear focus on non-health outcomes, the reference case for 'interventions funded by the NHS and PSS with health outcomes' should be chosen.

    All relevant NHS and PSS costs that change as a result of an intervention should be taken into account. Important non-NHS and PSS costs should also be identified and considered for inclusion in sensitivity analysis, or to aid decision-making. These may include costs to other central government departments and local government. Service recommendations are likely to have additional costs, which include implementation costs not usually included in the analysis and costs to other government budgets, such as social care. Implementation costs should be included in a sensitivity analysis, where relevant, while costs to other government budgets can be presented in a separate analysis to the base case.

    Biosimilar medicines are considered to differ from the original product in price only (in line with the NICE technology appraisal position statement on the use of biosimilars). It may be necessary to consider modelling scenarios for different levels of uptake for biosimilars, and whether there is any relevant MHRA or government guidance on switching to biosimilars.

    Productivity costs and costs borne by people using services and carers that are not reimbursed by the NHS or PSS should usually be excluded from any analyses. That is, a societal perspective will not normally be used.

    More details on methods of economic evaluation for interventions with health outcomes in NHS and PSS settings can be found in NICE health technology evaluations: the manual (2022). This includes a reference case, which specifies the methods considered by NICE to be the most appropriate for analysis when developing technology appraisal guidance. The reference case is consistent with the NHS objective of maximising health gain from limited resources.

    Interventions funded by the public sector with health and non-health outcomes

    For decision problems where the interventions evaluated are commissioned in full or in part by non‑NHS public sector and other bodies, the reference case for 'interventions funded by the public sector with health and non-health outcomes' should be chosen. For the base-case analysis, a cost–utility analysis should be done using a cost per QALY (quality-adjusted life year) where possible.

    This reference case may be most appropriate for public health interventions paid for by an arm of government, and would consider all the costs of implementing the intervention, and changes to downstream costs. In some cases, the downstream costs are negative, and refer to cost savings. For example, an intervention such as increasing physical activity, whose effects may include preventing type 2 diabetes, may be paid for by local government, but may result in cost savings to the NHS in the form of fewer or delayed cases of diabetes. A public sector cost perspective would aggregate all these costs and cost savings. A narrower local government cost perspective would consider only the cost of implementation, whereas an NHS cost perspective would consider only the cost savings. When examining interventions that are not paid for by an arm of government (such as workplace interventions), the perspective on costs should be discussed and agreed with NICE staff with responsibility for quality assurance.

    Productivity costs should usually be excluded from both the reference-case and non-reference-case analyses; exceptions (for example, when evaluating interventions in the workplace) can only be made with the agreement of NICE staff with responsibility for quality assurance.

    For public health interventions, all direct health effects for people using services or, when relevant, other people such as family members and/or informal carers will be included. Non-health effects may also be included. When required, the perspective will be widened to include sectors that do not bear the cost of an intervention, but receive some kind of benefit from it.

    Interventions with a social care focus

    For decision problems where the interventions evaluated have a clear focus on social care outcomes, the reference case on 'interventions with a social care focus' should be chosen. For the base-case analysis, a cost–utility analysis should be done using a cost per QALY approach where possible.

    Public sector funding of social care for individual service users is subject to eligibility criteria based on a needs assessment and a financial assessment (means test). Therefore, users of social care may have to fund, or partly fund, their own care. A public sector perspective on costs should still be adopted, but should consider different scenarios of funding.

    A public sector perspective is likely to be a local authority perspective for many social care interventions, but downstream costs that affect other public sector bodies may be considered where relevant, especially if they are a direct consequence of the primary aim of the intervention. When individuals may pay a contribution towards their social care, 2 further perspectives may also be pertinent: a wider perspective (which takes account of changes to the amount that individuals and private firms pay towards the cost of care, on top of the public sector contributions) and an individual perspective (which accounts for changes in individual payments only). The value of unpaid care may also be included in sensitivity analysis, or to aid decision-making. The value of unpaid care should be set at the market value of paid care. Productivity costs should usually be excluded from both the reference-case and non-reference-case analyses; exceptions can only be made with the agreement of NICE staff with responsibility for quality assurance.

    For social care interventions, the usual perspective on outcomes will be all effects on people for whom services are delivered including, when relevant, family members and/or informal carers. When required, the perspective may be widened to include sectors that do not bear the cost of an intervention, but receive some kind of benefit from it.

    Other perspectives

    Other perspectives (for example, employers) may also be used to capture significant costs and effects that are material to the interventions. If other perspectives are used, this should be agreed with NICE staff with responsibility for quality assurance before use.

    1.5 Reviewing economic evaluations

    Identifying and examining published economic evidence that is relevant to the review questions is an important component of guideline development. The general approach to reviewing economic evaluations should be systematic, focused and pragmatic. The principal search strategy (see the section on developing search strategies in the chapter on identifying the evidence: literature searching and evidence submission), including search strategies for health economic evidence, should be posted on the NICE website 6 weeks before consultation on the draft guideline.

    Searching for economic evidence

    The approach to searching for economic evidence should be systematic. The strategies and criteria used should be stated explicitly in the guideline and applied consistently.

    The advice in the chapter on identifying the evidence: literature searching and evidence submission may be relevant to the systematic search for economic evaluations. The types of searches that might be needed are described in the following 2 sections.

    Initial scoping search to identify economic evaluations

    A scoping search may be performed to look for economic evaluations relevant to current practice in the UK and therefore likely to be relevant to decision-making by the committee (see the chapter on decision-making committees). This should cover areas likely to be included in the scope (see the chapter on the scope).

    Economic databases (see the appendix on sources for evidence reviews) should be searched using the population terms used in the evidence review. Other databases relevant to the topic and likely to include relevant economic evaluations should also be searched using the population terms with an economics search filter (see the section on developing search strategies in the chapter on identifying the evidence: literature searching and evidence submission). At the initial scoping stage, it may be efficient to limit any searches of databases that are sources for the NHS Economic Evaluation Database (NHS EED) to studies indexed after December 2014, when the searches to identify studies for NHS EED ceased.

    Economic evaluations of social care interventions may be published in journals that are not identified through standard searches. Pragmatic searches based on references of key articles and contacting authors should be considered for identifying relevant papers.

    Further systematic search to identify economic evaluations

    For some review questions a full systematic search, covering all appropriate sources (see the appendix on sources for evidence reviews), should be performed to identify all relevant economic evaluations. There are several methods for identifying economic evaluations and the developer should choose the appropriate method and record the reasons for the choice in the search protocol. It may also be appropriate to focus on cost–utility analysis studies (see the InterTASC Information Specialists' subgroup Search Filters Resource), or to apply geographical search filters, for example to limit studies to OECD countries (as suggested in Ayiku et al., 2022).

    • All relevant review questions could be covered by a single search using the population search terms, combined with a search filter where appropriate, to identify economic evaluations and health-state utility data.

    • Another approach may be to use the search strategies derived from the review questions combined with search filters to identify economic evaluations and health-state utility data. If using this approach, it may be necessary to adapt strategies in some databases to ensure adequate sensitivity (Wood et al. 2017).

    • Another option is to identify economic evaluations and quality-of-life data alongside screening for evidence for effectiveness. Further guidance on searching for economic evaluations is available from SuRe Info.

    Selecting relevant economic evaluations

    The process for sifting and selecting economic evaluations for assessment is essentially the same as for effectiveness studies (see the section on identifying and selecting relevant evidence in the chapter on reviewing research evidence). It should be targeted to identify the papers that are most relevant to current UK practice and therefore likely to inform the committee's decision-making.

    Inclusion criteria for sifting and selecting papers for each review should specify populations and interventions relevant to the review question. They should also specify:

    • An appropriate date range, because older studies may reflect outdated practices.

    • The country or setting, because studies conducted in other countries might not be relevant to the UK. In some cases it may be appropriate to limit consideration to the UK or countries with similar healthcare systems.

    The review should also usually focus on economic evaluations that compare both the costs and consequences of the alternative interventions under consideration. Cost–utility, cost–benefit, cost-effectiveness, cost-minimisation or cost–consequences analyses (see box 7.1 in the section on the role of economics in guideline development) can be considered depending on what the committee deems to be the most relevant perspective and likely outcomes for the question. Non-comparative costing studies, 'burden of disease' studies and 'cost of illness' studies should usually be excluded; but non-comparative costing studies (such as econometric, efficiency, simulation, micro-costing and resource use, and time-series) may be included for some service delivery questions, or flagged if they might be useful for economic modelling. Sometimes, the published economic evidence is extremely sparse. In such cases, the inclusion criteria for studies may be broadened. The decision to do this is taken by the developer in consultation with NICE staff with responsibility for quality assurance and, when appropriate, with the committee or its chair.

    Assessing the quality of economic evaluations

    All economic evaluations relevant to the guideline should be appraised using the methodology checklists (see the appendix on appraisal checklists, evidence tables, GRADE and economic profiles). These should be used to appraise published economic evaluations, as well as unpublished papers, such as studies submitted by registered stakeholders and academic papers that are not yet published. The same criteria should be applied to any new economic evaluations conducted for the guideline (see the section on approaches to original economic evaluation).

    Exclusion of economic evaluations will depend on the applicability of evidence to the NICE decision-making context (usually the reference case), the amount of higher-quality evidence and the degree of certainty about the cost effectiveness of an intervention (when all the evidence is considered as a whole). Lower-quality studies are more likely to be excluded when cost effectiveness (or lack of it) can be reliably established without them. The reasons for such exclusions should be explained in the guideline.

    Sometimes reported sensitivity analyses indicate whether the results of an evaluation or study are robust despite methodological limitations. If there is no sensitivity analysis, judgement is needed to assess whether a limitation would be likely to change the results and conclusions. If necessary, a checklist, such as the health technology assessment checklist for decision-analytic models (Philips et al. 2004), may also be used to give a more detailed assessment of the methodological quality of economic evaluations and modelling studies. Judgements made, and reasons for these judgements, should be recorded in the guideline.

    Summarising and presenting results for economic evaluations

    Cost-effectiveness or net benefit estimates from published or unpublished studies, or from original economic evaluations conducted for the guideline, should be presented in the guideline, for example, using an 'economic evidence profile' (see the appendix on appraisal checklists, evidence tables, GRADE and economic profiles). This should include relevant economic information (applicability, limitations, costs, effects, cost-effectiveness and/or net benefit estimates as appropriate), as well as a statement on the cost-effectiveness with respect to NICE's decision threshold. Costs do not need to be adjusted to present value, but costs from other countries should be converted to Pounds Sterling using an exchange rate from an appropriate and current source (such as HM Revenue and Customs or Organisation for Economic Co-operation and Development).

    It should be explicitly stated if economic information is not available or if it is not thought to be relevant to the review question.

    1.6 Prioritising questions for further economic analysis

    If a high-quality economic analysis that addresses a key issue and is relevant to current practice has already been published, then further modelling may not be needed. However, often the economic literature is not sufficiently robust or applicable. Original economic analyses should only be performed if an existing analysis cannot easily be adapted to answer the question.

    Economic plans

    The full economic plan initially identifies key areas of the scope as priorities for further economic analysis and outlines proposed methods for addressing review questions about cost effectiveness. The full economic plan may be modified during development of the guideline; for example, as evidence is reviewed, it may become apparent that further economic evaluation is not needed or may not be possible for some areas that were initially prioritised. A version of the economic plan setting out the questions prioritised for further economic analysis, the population, the interventions and the type of economic analysis is published on the NICE website at least 6 weeks before the guideline goes out for consultation (see the section on planning the evidence review in the chapter on developing review questions and planning the evidence review). The reasons for the final choice of priorities for economic analysis should be explained in the guideline.

    Discussion of the economic plan with the committee early in guideline development is essential to ensure that:

    • the most important questions are selected for economic analysis

    • the methodological approach is appropriate (including the reference case)

    • all important effects and resource costs are included

    • effects and outcomes relating to a wider perspective are included if relevant

    • additional effects and outcomes not related to health or social care are included if they are relevant

    • economic evidence is available to support recommendations that are likely to lead to substantial costs.

    The number and complexity of new analyses depends on the priority areas and the information needed for decision-making by the committee. Selection of questions for further economic analysis should be based on systematic consideration of the potential value of economic analysis across all key issues.

    Economic analysis is potentially useful for any question in which an intervention, service or programme is compared with another. It may also be appropriate in comparing different combinations or sequences of interventions, as well as individual components of the service or intervention. However, the broad scope of some guidelines means that it may not be practical to conduct original economic analysis for every component.

    The decision about whether to carry out an economic analysis therefore depends on:

    • the potential overall expected benefit and resource implications of an intervention both for individual people and the population as a whole

    • the degree of uncertainty in the economic evidence review and the likelihood that economic analysis will clarify matters.

    Economic modelling may not be warranted if:

    • It is not possible to estimate cost effectiveness. However, in this case, a 'scenario' or 'threshold' analysis may be useful.

    • The intervention has no likelihood of being cost saving and its harms outweigh its benefits.

    • The published evidence of cost effectiveness is so reliable that further economic analysis is not needed.

    • The benefits sufficiently outweigh the costs (that is, it is obvious that the intervention is cost effective) or the costs sufficiently outweigh the benefits (that is, it is obvious that the intervention is not cost effective).

    • An intervention has very small costs, very small benefits and very small budget impact.

    1.7 Approaches to original economic evaluation

    General principles

    Regardless of the methodological approach taken, the general principles described in this section should be observed. Any variation from these principles should be described and justified in the guideline. The decision problem should be clearly stated. This should include a definition and justification of the interventions or programmes being assessed and the relevant groups using services (including carers).

    Developing conceptual models linked to topic areas or review questions may help the health economist to decide what key information is needed for developing effectiveness and cost-effectiveness analyses (see the chapter on the scope for details). Models developed for public health and service delivery topics are likely to relate to several review questions, so most recommendations will be underpinned by some form of modelled analysis.

    The choice of model structure is a key aspect of the design-oriented conceptual model. Brennan's taxonomy of model structures (Brennan et al. 2006) should be considered for guidance on which types of models may be appropriate to the decision problem.

    Even if a fully modelled analysis is not possible, there may be value in the process of model development, because this will help to structure committee discussions. For example, a model might be able to demonstrate how a change in service will affect demand for a downstream service or intervention.

    For service delivery questions, the key challenge is linking changes in service to a health benefit. This poses a challenge for health economic analyses, and finding high-quality evidence of effectiveness will also be difficult. Modelling using scenario analysis is usually needed to generate the health effects used within the health economic analyses. Because of the considerable resource and health impact of any recommendations on service delivery, their cost effectiveness must be considered, either analytically or qualitatively (see the appendix on service delivery – developing review questions, evidence reviews and synthesis).

    Economic analysis should include comparison of all relevant alternatives for specified groups of people affected by the intervention or using services. Any differences between the review questions and the economic analysis should be clearly acknowledged, justified, approved by the committee and explained in the guideline. The interventions or services included in the analysis should be described in enough detail to allow stakeholders to understand exactly what is being assessed. This is particularly important when calculating the cost effectiveness of services.

    An economic analysis should be underpinned by the best-quality evidence. The evidence for treatment effectiveness should be based on, and be consistent with, that identified for the relevant effectiveness review question. However, a model may sometimes differ from the guideline review; for example, additional outcomes or additional timepoints that were not prioritised in the effectiveness review might be extracted from the effectiveness evidence.

    Where there is insufficient evidence in the effectiveness review, usually a research recommendation will be made and modelling will not go ahead. Occasionally, a model will be developed based upon indirect evidence, bespoke analysis of real-world data or expert opinion. This should be discussed with NICE staff responsible for quality assurance and clearly explained and justified in the guideline.

    The structure of any economic model should be discussed and agreed with the committee early in guideline development. The reasons for the structure of the model should be clear. Potential alternatives should be identified and considered for use in sensitivity analysis. If existing economic models are being used, or are informing a new analysis, particularly economic models used previously in the guideline's development, the way these models are adapted or used should be clear.

    Clinical end points that reflect how a patient feels, functions, or how long a patient lives are considered more informative than surrogate outcomes. When using 'final' clinical end points in a model is not possible and data on other outcomes are used to infer the effect on mortality and health-related quality of life, there should be evidence supporting the outcome relationship (see NICE health technology evaluations: the manual (2022) for the three levels of evidence for surrogate relationships that can be considered in decision making). The uncertainty associated with the relationship between the surrogate end points and the final outcomes should be quantified and captured in the model's probabilistic analysis.

    For evaluations of diagnostic testing, there may be some direct benefits from the knowledge gained and some direct harm from the testing, but most of the outcomes come downstream because of treatment or preventive measures being started, modified or stopped. Diagnostic tests can sometimes be evaluated using clinical trials, but this is unusual. If direct data on the impact of a diagnostic test strategy on final outcomes is not available, it will be necessary to combine evidence from different sources. A linked-evidence modelling approach should be used that captures the following components:

    • the pre-test prevalence of the disease,

    • diagnostic test accuracy statistics for the test(s) being evaluated,

    • diagnostic test accuracy statistics for subsequent tests,

    • the treatment pathway for each test outcome,

    • treatment outcomes,

    • adverse events associated with testing including the impact of radiation exposure from an imaging test.
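    As a minimal sketch of how these components can be linked in a simple decision-tree calculation, the following Python fragment combines prevalence, test accuracy and downstream costs and QALYs for a single testing strategy; all parameter values are hypothetical placeholders rather than NICE reference values.

        # Illustrative linked-evidence calculation for one diagnostic strategy.
        # All parameter values are hypothetical placeholders.

        prevalence = 0.10        # pre-test probability of disease
        sensitivity = 0.90       # P(test positive | disease present)
        specificity = 0.80       # P(test negative | disease absent)

        test_cost = 50.0         # cost of the test (£)
        treat_cost = 2000.0      # cost of treating a test-positive person (£)

        # Illustrative mean QALYs for each test outcome, reflecting the treatment
        # pathway that follows (true/false positive, true/false negative)
        qalys = {"TP": 8.0, "FN": 6.0, "FP": 9.5, "TN": 10.0}

        # Probability of each test outcome
        prob = {
            "TP": prevalence * sensitivity,
            "FN": prevalence * (1 - sensitivity),
            "FP": (1 - prevalence) * (1 - specificity),
            "TN": (1 - prevalence) * specificity,
        }

        expected_qalys = sum(prob[k] * qalys[k] for k in prob)
        expected_cost = test_cost + (prob["TP"] + prob["FP"]) * treat_cost

        print(f"Expected cost per person tested:  £{expected_cost:.2f}")
        print(f"Expected QALYs per person tested: {expected_qalys:.3f}")

    Repeating the same calculation for each alternative testing strategy (including no testing) gives the costs and QALYs needed for an incremental comparison.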

    Clinical trial populations often differ from the population of interest and so might not capture the absolute effects of an intervention. Quantifying the baseline risk of health outcomes and how the condition would naturally progress with the comparator(s) can be a useful step when estimating absolute health outcomes in the economic analysis. This can be informed by observational studies. Relative treatment effects seen in randomised trials may then be applied to data on the baseline risk of health outcomes for the populations or subgroups of interest. The methods used to identify and critically evaluate sources of data for these estimates should be reported.
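    For illustration of this step (a standard epidemiological relationship rather than wording from this manual), when the relative effect is reported as a risk ratio (RR), the absolute event probability in the intervention arm over a given period can be estimated as:

        \[
        p_{\text{intervention}} = p_{\text{baseline}} \times \text{RR}
        \]

    where \(p_{\text{baseline}}\) is the baseline probability of the event taken from the observational or comparator data; relative effects reported as hazard ratios need the survival relationship described under time-to-event outcomes below.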

    When outcomes in an economic evaluation are known to be related, a joint synthesis of structurally related outcomes is recommended whenever possible, to account for correlation and to increase precision. Examples of these relations include:

    • network meta-analysis – the correlation between different pair-wise comparisons (see technical support document 6)

    • diagnostic meta-analysis – the inverse correlation between sensitivity and specificity

    • utility mapping equations – the different coefficients in the equation will be correlated (see technical support document 10)

    Studies using survival outcomes, or time-to-event outcomes, often measure the relative effects of treatments using hazard ratios (HRs), which may either be constant over time (proportional hazards) or change over time. When incorporating such outcomes into a model, the proportional hazards assumption should always be assessed (see technical support document 14).
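    For illustration (standard survival-analysis notation rather than additional guidance), the proportional hazards assumption states that the hazard in the intervention arm is a constant multiple of the hazard in the comparator arm at all times, which implies a fixed relationship between the survival functions:

        \[
        h_{\text{intervention}}(t) = \text{HR} \times h_{\text{comparator}}(t)
        \quad\Longrightarrow\quad
        S_{\text{intervention}}(t) = S_{\text{comparator}}(t)^{\text{HR}}
        \]

    If log cumulative hazard plots for the two arms are not roughly parallel over time, the assumption is doubtful and time-varying or flexible survival models may be needed instead.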

    Sometimes there will be a lack of robust quantitative evidence for a key model parameter. In such situations, informal or formal expert elicitation might sometimes be used to identify a plausible distribution of values.

    For service delivery questions, any analysis will need to consider resource constraints. These might be monetary, but might also be resources such as staff, beds, equipment and so on. However, affordability should not be the sole consideration for service recommendations; the impact of any proposed changes on quality of care needs to be considered.

    Before presenting final results to a committee for decision-making, all economic evaluations should undergo rigorous quality assessment and validation to assess inputs, identify logical, mathematical and computational errors, and review the plausibility of outputs. HM Treasury's review of quality assurance of government models (2013) provides guidance on developing the environment and processes required to promote effective quality assurance. This process should be documented.

    Quality assurance of an economic evaluation may take various forms at different stages in development, as detailed in the HM Treasury Aqua Book (2015). It can range from basic steps that should always occur, such as disciplined version control, extensive developer testing of their own model, and independent testing by a colleague with the necessary technical knowledge, to external testing by an independent third party and independent analytical audit of all data and methods used. For developer health economists testing their own evaluation, or those of others ('model busting'), useful and practical validation methods include:

    • 1‑way and n‑way sensitivity analyses, including null values and extreme values

    • ensuring that the model results can be explained, for example, the logic and reason underlying the effect of a particular scenario analysis on results

    • ensuring that predictions of intermediate endpoints (for example, event rate counts) and final endpoints (for example, undiscounted life expectancy) are plausible, including comparison with source materials.
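    As a minimal sketch of the first of these checks (1‑way sensitivity analysis over null and extreme parameter values), the following Python fragment loops one parameter at a time through test values; the model function and parameter ranges are purely illustrative and would be replaced by the real model.

        # One-way sensitivity analysis over a deliberately simple illustrative model.

        def net_monetary_benefit(effect_size, cost, utility_gain, threshold=20000.0):
            """Toy model: NMB = threshold * QALY gain - incremental cost."""
            qaly_gain = effect_size * utility_gain
            return threshold * qaly_gain - cost

        base_case = {"effect_size": 0.5, "cost": 3000.0, "utility_gain": 0.8}

        # Null and extreme values to probe model behaviour, one parameter at a time
        test_values = {
            "effect_size": [0.0, 0.1, 1.0],
            "cost": [0.0, 10000.0],
            "utility_gain": [0.0, 1.0],
        }

        print("Base case NMB:", net_monetary_benefit(**base_case))
        for param, values in test_values.items():
            for value in values:
                scenario = dict(base_case, **{param: value})
                print(f"{param} = {value}: NMB = {net_monetary_benefit(**scenario):,.0f}")

    Checking that each result moves in the expected direction, and can be explained, supports the second and third points in the list above.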

    The results of any analyses conducted to demonstrate external validity should be reported. However, relevant data should not be omitted just to facilitate external validation (for example, excluding trials from the analysis solely so that they can be used for subsequent validation).

    Conventions on reporting economic evaluations should be followed (see Husereau et al. 2022) to ensure that reporting of methods and results is transparent. For time horizons that extend beyond 10 years, it may be useful to report discounted costs and effects for the short (1–3 years) and medium (5–10 years) term. The following results should be presented where available and relevant:

    • endpoints from the analysis, such as life years gained, number of events and survival

    • disaggregated costs

    • total and incremental costs and effects for all options.

    When comparing multiple mutually exclusive options, an incremental approach should be adopted. This should be done by:

    • comparing the interventions sequentially in rank order of cost or outcome, with each strategy compared with the next non-dominated alternative in terms of (for cost-utility analyses) the 'incremental cost per QALY gained';

    • ranking the interventions in order of mean 'net health benefit' [NHB] (or 'net monetary benefit'), where the opportunity cost is specified by the incremental (monetary) cost divided by a specific cost per QALY threshold.

    Comparisons with a common baseline intervention should not be used for decision-making (although the baseline should be included in the incremental analysis if it reflects a relevant option).
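    As a minimal sketch of a fully incremental comparison, the following Python fragment ranks hypothetical strategies by cost, drops simply dominated options, reports ICERs between successive non-dominated options and shows the equivalent net monetary benefit ranking; the strategies, values and £20,000 per QALY threshold are purely illustrative.

        # Fully incremental analysis of mutually exclusive strategies (illustrative data).
        # Simple dominance only; extended dominance would need a further check.

        strategies = [  # (name, total cost in £, total QALYs) - hypothetical values
            ("No intervention", 1000.0, 5.00),
            ("Strategy A", 4000.0, 5.20),
            ("Strategy B", 9000.0, 5.25),
            ("Strategy C", 7000.0, 5.40),
        ]
        threshold = 20000.0  # illustrative cost-per-QALY threshold (£)

        # Rank by cost; keep only options that add QALYs over all cheaper options
        non_dominated = []
        for name, cost, qalys in sorted(strategies, key=lambda s: s[1]):
            if not non_dominated or qalys > non_dominated[-1][2]:
                non_dominated.append((name, cost, qalys))
            # otherwise the option is dominated: more costly but no more effective

        # ICERs between successive non-dominated options
        for (n0, c0, q0), (n1, c1, q1) in zip(non_dominated, non_dominated[1:]):
            print(f"{n1} vs {n0}: ICER = £{(c1 - c0) / (q1 - q0):,.0f} per QALY")

        # Equivalent ranking by net monetary benefit (NMB = threshold * QALYs - cost)
        for name, cost, qalys in strategies:
            print(f"{name}: NMB = £{threshold * qalys - cost:,.0f}")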

    Any comparison of interventions in an economic model that is not based on head-to-head trial comparisons should be carefully evaluated for between-study heterogeneity, and the potential for modifiers of treatment effect should be explored. Limitations should be noted and clearly discussed in the guideline. Ideally, when options have not been directly compared, and, more generally, when there are more than two options, a network meta-analysis should be considered as the best option for evidence synthesis to inform the model (see chapter 6).

    Economic models developed for the guideline are available to registered stakeholders during consultation on the guideline. These models should be fully executable and clearly presented.

    Different approaches to economic analysis

    There are different approaches to economic analysis (see box 7.1 in the section on the role of economics in guideline development for examples). If economic analysis is needed, the most appropriate approach should be considered early during the development of a guideline, and reflect the content of the guideline scope.

    There is often a trade-off between the range of new analyses that can be conducted and the complexity of each piece of analysis. Simple methods may be used if these can provide the committee with enough information on which to base a decision. For example, if an intervention is associated with better effectiveness and fewer adverse effects than its comparator, then an estimate of cost may be all that is needed. Or a simple decision tree may provide a sufficiently reliable estimate of cost effectiveness. In other situations, a more complex approach, such as Markov modelling or discrete event simulation, may be warranted.
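    As a minimal sketch of the kind of cohort Markov model mentioned above, the following Python fragment runs a three-state model over annual cycles with 3.5% discounting; the states, transition probabilities, costs and utilities are purely illustrative.

        import numpy as np

        # Illustrative 3-state Markov cohort model: Well -> Ill -> Dead (annual cycles)
        transition = np.array([
            [0.90, 0.08, 0.02],   # from Well
            [0.00, 0.85, 0.15],   # from Ill
            [0.00, 0.00, 1.00],   # from Dead (absorbing state)
        ])
        state_cost = np.array([200.0, 2500.0, 0.0])    # annual cost per state (£)
        state_utility = np.array([0.85, 0.55, 0.0])    # utility weight per state

        cohort = np.array([1.0, 0.0, 0.0])             # whole cohort starts Well
        discount_rate = 0.035
        n_cycles = 20

        total_cost = 0.0
        total_qalys = 0.0
        for cycle in range(n_cycles):
            discount = 1.0 / (1.0 + discount_rate) ** cycle
            total_cost += discount * (cohort @ state_cost)
            total_qalys += discount * (cohort @ state_utility)
            cohort = cohort @ transition               # move the cohort on one cycle

        print(f"Discounted cost per person:  £{total_cost:,.0f}")
        print(f"Discounted QALYs per person: {total_qalys:.2f}")

    Running the same structure with intervention-specific transition probabilities and costs gives the incremental costs and QALYs needed for the cost–utility analysis.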

    The type of economic analysis that should be considered is informed by the perspective specified in the scope of the guideline, and the extent to which the effects resulting from the intervention extend beyond health.

    Measuring and valuing effects for health interventions

    The measurement of changes in health-related quality of life should be reported directly from people using services (or their carers). The value placed on health-related quality of life of people using services (or their carers) should be based on a valuation of public preferences elicited from a representative sample of the UK population, using a choice-based valuation method such as the time trade‑off or standard gamble. The QALY is the measure of health effects preferred by NICE, and the EQ‑5D is NICE's preferred instrument to measure health-related quality of life in adults.

    For some economic analyses, a flexible approach may be needed, reflecting the nature of effects delivered by different interventions or programmes. If health effects are relevant, the EQ‑5D‑based QALY should be used. When EQ‑5D data are not available from the relevant clinical studies included in the clinical evidence review, EQ‑5D data can be sourced from the literature. The methods used for identifying the data should be systematic and transparent. The justification for choosing a particular data set should be clearly explained. When more than 1 plausible set of EQ‑5D data is available, sensitivity analyses should be carried out to show the impact of the alternative utility values.

    When EQ‑5D data are not available, published mapped EQ‑5D data should be used, or they may be estimated by mapping other health-related quality-of-life measures or health-related effects observed in the relevant studies to the EQ‑5D if data are available. The mapping function chosen should be based on data sets containing both health-related quality-of-life measures. The statistical properties of the mapping function should be fully described, its choice justified, and it should be adequately demonstrated how well the function fits the data. Sensitivity analyses exploring variation in the use of the mapping algorithms on the outputs should be presented.
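    As a minimal sketch of estimating a simple linear mapping function (Python with ordinary least squares; the paired dataset, the source measure and the linear form are all hypothetical, and in practice the choice of functional form and its fit would need to be justified as described above):

        import numpy as np

        # Hypothetical paired data: scores on another instrument and observed EQ-5D utilities
        other_score = np.array([10.0, 25.0, 40.0, 55.0, 70.0, 85.0])   # e.g. a 0-100 scale
        eq5d_utility = np.array([0.45, 0.55, 0.62, 0.71, 0.80, 0.88])

        # Fit EQ-5D utility = a + b * score by ordinary least squares
        X = np.column_stack([np.ones_like(other_score), other_score])
        (a, b), *_ = np.linalg.lstsq(X, eq5d_utility, rcond=None)

        predicted = a + b * other_score
        r_squared = 1 - np.sum((eq5d_utility - predicted) ** 2) / np.sum(
            (eq5d_utility - eq5d_utility.mean()) ** 2)

        print(f"Mapping: utility = {a:.3f} + {b:.4f} * score (R^2 = {r_squared:.3f})")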

    In some circumstances, EQ‑5D data may not be the most appropriate or may not be available. Qualitative empirical evidence on the lack of content validity for the EQ‑5D should be provided, demonstrating that key dimensions of health are missing. This should be supported by evidence that shows that EQ‑5D performs poorly on tests of construct validity and responsiveness in a particular patient group. This evidence should be derived from a synthesis of peer-reviewed literature. In these circumstances, alternative health-related quality of life measures may be used and must be accompanied by a carefully detailed account of the methods used to generate the data, their validity, and how these methods affect the utility values.

    When necessary, consideration should be given to alternative standardised and validated preference-based measures of health-related quality of life that have been designed specifically for use in children. The standard version of the EQ‑5D has not been designed for use in children. NICE does not recommend specific measures of health-related quality of life in children and young people. A generic measure that has been shown to have good psychometric performance in the relevant age ranges should be used. A report by the Decision Support Unit (Brazier et al. 2011) summarises the psychometric performance of several preference-based measures.

    As outlined in NICE health technology evaluations: the manual (2022) and the accompanying NICE position statement on use of the EQ-5D-5L valuation set for England (updated October 2019), the EQ‑5D 5‑level (5L) valuation set is not currently recommended for use by NICE. Guideline developers should use the 3L valuation set for reference-case analyses, where available.

    The QALY remains the most suitable measure for assessing the impact of services, because it can incorporate effects from extension to life and experience of care. It can also include the trade‑offs of benefits and adverse events. However, if linking effects to a QALY gain is not possible, links to a clinically relevant or a related outcome should be considered. Outcomes should be optimised for the lowest resource use. The link (either direct or indirect) of any surrogate outcome, such as a process outcome (for example, bed days), to a clinical outcome needs to be justified. However, when QALYs are not used, issues such as trade‑offs between different beneficial and harmful effects need to be considered.

    Measuring and valuing effects for non-health interventions

    For some decision problems (such as for interventions with a social care focus), the intended outcomes of interventions are broader than improvements in health status. Here broader, preference-weighted measures of outcomes, based on specific instruments, may be more appropriate. For example, social care quality-of-life measures are being developed and NICE will consider using 'social care QALYs' if validated, such as the ASCOT (Adult Social Care Outcome Toolkit) set of instruments used by the Department of Health and Social Care in the Adult Social Care Outcomes Framework indicator on social care-related quality of life.

    Similarly, depending on the topic, and on the intended effects of the interventions and programmes, the economic analysis may also consider effects in terms of capability and wellbeing. For capability effects, use of the ICECAP‑O (Investigating Choice Experiments for the Preferences of Older People CAPability measure for Older people) or ICECAP‑A (Investigating Choice Experiments for the Preferences of Older People CAPability measure for Adults) instruments may be considered by NICE when developing methodology in the future. If an intervention is associated with both health- and non-health-related effects, it may be helpful to present these elements separately.

    Economic analysis for interventions funded by the NHS and PSS with health outcomes

    Economic analyses conducted for decisions about interventions with health outcomes funded by the NHS and PSS should usually follow the reference case in table 7.1 in the section on the reference case. Advice on how to follow approaches described in NICE health technology evaluations: the manual (2022) is provided in the NICE Decision Support Unit's technical support documents. Departures from the reference case may sometimes be appropriate; for example, when there are not enough data to estimate QALYs gained. Any such departures must be agreed with members of NICE staff with responsibility for quality assurance and highlighted in the guideline with reasons given.

    Economic analysis for interventions funded by the public sector with health and non-health outcomes

    The usual perspective for the economic analysis of public health interventions is that of the public sector. This may be simplified to a local government perspective if few costs and effects apply to other government agencies.

    As public health and wellbeing programmes are funded by the public sector, in particular by local authorities, NICE has broadened its approach for the appraisal of interventions in these areas. Local government is responsible not only for the health of individuals and communities, but also for their overall welfare. The tools used for economic evaluation must reflect a wider remit than health and allow greater local variation in the outcomes captured. The nature of the evidence and that of the outcomes being measured may place more emphasis on cost–consequences analysis and cost–benefit analysis for interventions in these areas.

    Whenever there are multiple outcomes, a cost–consequences analysis is usually needed, and the committee weighs up the changes to the various outcomes against the changes in costs in an open and transparent manner. However, for the base-case analysis, a cost–utility analysis should be undertaken using a cost per QALY approach where possible.

    A wider perspective may be used; analyses taking a wider perspective will usually be done using cost–benefit analysis. When a wider perspective is used, it must be agreed with NICE staff with responsibility for quality assurance and highlighted in the guideline with reasons given.

    Economic analysis for interventions with a social care focus

    For social care interventions, the perspective on outcomes should be all effects on people for whom services are delivered (people using services and/or carers). Effects on people using services and carers (whether expressed in terms of health effects, social care quality of life, capability or wellbeing) are the intended outcomes of social care interventions and programmes. Although holistic effects on people using services, their families and carers may represent the ideal perspective on outcomes, a pragmatic and flexible approach is needed to address different perspectives, recognising that improved outcomes for people using services and carers may not always coincide.

    Whenever there are multiple outcomes, a cost–consequences analysis is usually needed, and the committee weighs up the changes to the various outcomes against the changes in costs in an open and transparent manner. However, for the base-case analysis, a cost–utility analysis should be undertaken using a cost per QALY approach where possible.

    Any economic model should take account of the proportion of care that is publicly funded or self-funded. Scenario analysis may also be useful to take account of any known differences between local authorities in terms of how they apply eligibility criteria. Scenario analysis should also be considered if the cost of social care varies depending on whether it is paid for by local authorities or by individual service users; the value of unpaid care should also be taken into account where appropriate.

    Creating clear, transparent decision rules about which costs should be considered, and for which interventions and outcomes, is expected to be particularly difficult for social care. These rules should be discussed with the committee before any economic analysis is undertaken and an approach agreed.

    Identification and selection of model inputs

    An economic analysis uses decision-analytic techniques with outcome, cost and utility data from the best available published sources.

    The reference case across all perspectives (table 7.1 in the section on the reference case) states that evidence on effects should be obtained from a systematic review, with a preference for randomised controlled trials. Some inputs, such as costs, may have standard sources that are appropriate, such as national list prices or a national audit, but for others appropriate data will need to be sourced.

    Additional searches may be needed; for example, if searches for evidence on effects do not provide the information needed for economic modelling. Additional information may be needed on:

    • disease prognosis

    • the relationship between short- and long-term outcomes

    • quality of life

    • adverse events

    • resource use or costs.

    Although it is desirable to conduct systematic literature reviews for all such inputs, this is time-consuming and other pragmatic options for identifying inputs may be used. Informal searches should aim to satisfy the principle of 'saturation' (that is, to 'identify the breadth of information needs relevant to a model and sufficient information such that further efforts to identify more information would add nothing to the analysis'; Kaltenthaler et al. 2011). Studies identified in the review of evidence on effects should be scrutinised for other relevant data, and attention should be paid to the sources of parameters in analyses included in the systematic review of published economic evaluations. Alternatives could include asking committee members and other experts for suitable evidence, or eliciting their opinions, for example through formal consensus methods such as the Delphi technique or the nominal-group technique. If a systematic review is not possible, transparent processes for identifying model inputs should be reported; the internal quality and external validity of each potential data source should be assessed, and their selection justified. If more than 1 suitable source of evidence is found, consideration should be given to synthesis and/or exploration of alternative values in sensitivity analyses.

    For some questions, there may be good reason to believe that relevant and useful information exists outside of literature databases or validated national data sources. Examples include ongoing research, a relatively new intervention and studies that have been published only as abstracts. Typically, the method for requesting information from stakeholders is through a call for evidence (see the appendix on call for evidence and expert witnesses).

    For some guidelines, econometric studies provide a supplementary source of evidence and data for bespoke economic models. For these studies, the EconLit database should be searched as a minimum.

    Real world data

    "Real word data (RWD)" is defined in the NICE real-world evidence framework (2022) as "data relating to patient health or experience, or care delivery collected outside the context of a highly controlled clinical trial". Data from electronic health records, registries, audits and other sources of real world data may be used to better define and inform parameter estimates for economic models. For some parameters, real-world data are considered the most appropriate evidence source (for example, jurisdiction-specific administrative databases can be used to estimate control or untreated event probabilities, resource use counts and unit costs), so they should be the main source of such data in economic models. In addition, real world data may be more up to date and closer to reality, e.g. in relationship to real-world costs and populations. Real-world data should not be considered a substitute for published evidence from randomised controlled trials, when assessing differences in outcomes between interventions. However, RWD can be used to answer different, but related research questions alongside good published RCT evidence, e.g. effectiveness vs efficacy, effects in subgroups of interest, UK-specific head-to-head data and final outcomes. Further guidance on searching and selecting evidence for key model inputs is available from the NICE real-world evidence framework (2022), Kaltenthaler et al. (2011) and Paisley (2016).

    To obtain real-world data, it may be necessary to negotiate access with the organisations and individuals that hold the data, or to ask them to provide a summary for inclusion in the guidance if published reports are insufficient. Any processes used for accessing data will need to be reported in the economic plan and in the guideline. Given the difficulties that organisations may have in extracting audit data, such requests should be focused and targeted: for example, identifying a specific audit and requesting results from the previous 3 years.

    Additional guidance on the use of non-randomised studies and real-world data is provided by the NICE real-world evidence framework (2022), which:

    • identifies when real-world data can be used to reduce uncertainties and improve guidance

    • describes best practices for planning, conducting and reporting real-world evidence studies to improve the quality and transparency of evidence.

    Cost data

    Some information on unit costs may be found in the Personal Social Services Research Unit (PSSRU) report on unit costs of health and social care or NHS England's National Cost Collection (previously the Department of Health's reference costs) (provider perspective). Information on resource impact costings can be found in NICE's process guide on resource impact assessment. The NHS Supply Chain catalogue provides costs of clinical consumables, capital medical equipment and non-medical products. Some information about public services may be better obtained from national statistics or databases, rather than from published studies. Philips et al. (2004) provide a useful guide to searching for data for use in economic models.

    In cases where current costs are not available, costs from previous years should be uprated to current prices using inflation indices appropriate to the cost perspective, such as the hospital and community health services (HCHS) index and the PSS pay and prices index, available from the PSSRU report on unit costs of health and social care (Jones 2021), or the Office for National Statistics (ONS) consumer price index.
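    As a minimal illustration of this uprating step (the cost and index values below are hypothetical placeholders, not figures from the PSSRU report or the ONS index), the adjustment multiplies the historical cost by the ratio of the index values for the two years:

        # Illustrative only: uprate a unit cost reported in 2019 prices to 2021 prices.
        # The cost and index values are hypothetical placeholders, not published figures.
        cost_2019 = 150.00    # unit cost in 2019 prices (£)
        index_2019 = 105.0    # price index value for 2019 (hypothetical)
        index_2021 = 111.3    # price index value for 2021 (hypothetical)

        cost_2021 = cost_2019 * (index_2021 / index_2019)
        print(f"Uprated unit cost: £{cost_2021:.2f}")  # £159.00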

    Wherever possible, costs relevant to the healthcare system in England should be used. However, in cases where only costs from other countries are available, these should be converted to pounds sterling using an exchange rate from an appropriate and current source (such as HM Revenue and Customs or the Organisation for Economic Co-operation and Development).

    Usually the public list prices for technologies (for example, medicines or medical devices) should be used in the reference-case analysis, but as outlined in NICE health technology evaluations: the manual (2022), reference-case analyses should be based on prices that reflect as closely as possible the prices that are paid in the NHS for all evaluations. When there are nationally available price reductions (for example, for medicines procured for use in secondary care through contracts negotiated by the NHS Commercial Medicines Unit), the reduced price should be used in the reference-case analysis to best reflect the price relevant to the NHS. The Commercial Medicines Unit publishes information on the prices paid for some generic medicines by NHS trusts through its Electronic Market Information Tool (eMIT), focusing on medicines in the 'National Generics Programme Framework' for England. Analyses based on price reductions for the NHS will be considered only when the reduced prices are transparent and can be consistently available across the NHS, and when the period for which the specified price is available is guaranteed. When a reduced price is available through a patient access scheme that has been agreed with the Department of Health and Social Care, the analyses should include the costs associated with the scheme. If the price is not listed on eMIT, then the current price listed on the British National Formulary (BNF) should be used. For medicines that are predominantly dispensed in the community, prices should be based on the Drug Tariff. In the absence of a published list price and a price agreed by a national institution (as may be the case for some devices), an alternative price may be considered, provided that it is nationally and publicly available. If no other information is available on costs, local costs obtained from the committee may be used.

    For economic evaluations using the NHS and PSS perspective, the full price paid by the NHS should be captured for all resource use. For example, this means including income tax, National Insurance and pension contributions for staff costs, and VAT for equipment and consumables.

    Quality of life data

    Preference-based quality-of-life data are often needed for economic models. Many of the available search filters are highly sensitive and so, although they identify relevant studies, they also detect a large amount of irrelevant data. An initial broad literature search for quality-of-life data may be a good option, but the amount of information identified may be unmanageable (depending on the key issue being addressed). It may be more appropriate and manageable to incorporate a quality-of-life search filter when performing additional searches for key issues of high economic priority. When searching bibliographic databases for health-state utility values, specific techniques outlined in Ara et al. (2017), Golder et al. (2005) and Papaioannou et al. (2010) may be useful, and specific search filters have been developed that may increase sensitivity (Arber et al. 2017). The identification of quality-of-life data should be guided by the health economist at an early stage during guideline development so that the information specialist can adopt an appropriate search strategy. Resources for identifying useful utility data for economic modelling include the dedicated registries of health-state utility values, such as ScHARRHUD and the Tufts CEA Registry, and the NICE Decision Support Unit's technical support documents.

    Exploring uncertainty

    The committee should discuss any potential bias, uncertainties and limitations of economic models. Sensitivity analysis should be used to explore the impact that potential sources of bias and uncertainty could have on model results.

    Deterministic sensitivity analysis should be used to explore key assumptions and input parameters used in the modelling, as well as to test any bias resulting from the data sources selected for key model inputs. This should test whether and how the model results change under alternative, plausible scenarios. Threshold analysis should also be considered, particularly for highly uncertain parameters, to explore the impact of the parameter on the incremental cost-effectiveness ratio (ICER) or net health benefit (NHB).

    'Tornado' diagrams may be a useful way to present deterministic one-way results. Deterministic threshold analysis might inform decision making when there are influential but highly uncertain parameters. However, if the model is non-linear, deterministic analysis will be less appropriate for decision making.
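    The sketch below (in Python, with entirely hypothetical base-case values and ranges) shows one way a one-way deterministic sensitivity analysis might be generated for a simple two-strategy comparison; the resulting low and high ICERs are the kind of output usually plotted in a tornado diagram:

        # Minimal one-way (deterministic) sensitivity sketch for two strategies.
        # All base-case values and ranges are hypothetical placeholders.

        def icer(effect_new, effect_old, cost_new, cost_old):
            # Incremental cost per QALY gained of 'new' versus 'old'.
            return (cost_new - cost_old) / (effect_new - effect_old)

        base = dict(effect_new=6.2, effect_old=5.8, cost_new=12000.0, cost_old=8000.0)
        ranges = {                       # low and high values, varied one at a time
            "effect_new": (6.0, 6.5),
            "cost_new": (10000.0, 15000.0),
        }

        for name, (low, high) in ranges.items():
            results = [icer(**dict(base, **{name: value})) for value in (low, high)]
            print(f"{name}: ICER from £{min(results):,.0f} to £{max(results):,.0f} per QALY")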

    Probabilistic sensitivity analysis should be used to account for uncertainty arising from imprecision in model inputs. The use of probabilistic sensitivity analysis will often be specified in the health economic plan. Uncertainty associated with all inputs can be reflected simultaneously in the results, so the preferred cost-effectiveness estimates should be those derived from probabilistic analyses when possible. In non-linear decision models, where outputs are a result of a multiplicative function (for example, in Markov models), probabilistic methods also provide the best estimates of mean costs and outcomes. The choice of distributions used should be justified. When doing a probabilistic analysis, enough model simulations should be used to minimise the effect of Monte Carlo error. Reviewing the variance around probabilistic model outputs (net benefits or ICERs) as the number of simulations increases can provide a way of assessing whether the model has been run enough times or whether more runs are needed. Presentation of the results of probabilistic sensitivity analysis could include scatter plots or confidence ellipses, with an option for including cost-effectiveness acceptability curves and frontiers.
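    A minimal probabilistic sensitivity analysis might look like the sketch below (in Python, using NumPy); the distributions, parameter values and the £20,000 per QALY threshold used to calculate net monetary benefit are illustrative assumptions only, not inputs from any particular guideline model:

        # Minimal probabilistic sensitivity analysis sketch (illustrative values only).
        import numpy as np

        rng = np.random.default_rng(seed=1)
        n_sims = 10_000        # number of simulations; increase if Monte Carlo error is material
        threshold = 20_000     # £ per QALY, used to calculate net monetary benefit

        # Sample uncertain inputs: beta for probabilities, gamma for costs, normal for QALY gains.
        p_event = rng.beta(20, 80, n_sims)                          # adverse event probability
        cost_event = rng.gamma(shape=25, scale=200, size=n_sims)    # cost of an event (mean ~£5,000)
        qaly_gain = rng.normal(loc=0.30, scale=0.05, size=n_sims)   # incremental QALYs

        inc_cost = 3_000 + p_event * cost_event       # incremental cost of the intervention
        inc_nmb = threshold * qaly_gain - inc_cost    # incremental net monetary benefit

        print(f"Mean incremental net monetary benefit: £{inc_nmb.mean():,.0f}")
        print(f"Monte Carlo standard error: £{inc_nmb.std(ddof=1) / np.sqrt(n_sims):,.0f}")
        print(f"Probability cost effective at £{threshold:,}/QALY: {(inc_nmb > 0).mean():.2f}")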

    When probabilistic methods are unsuitable, or not possible, the impact of parameter uncertainty should be thoroughly explored using deterministic sensitivity analysis, being mindful of correlated parameters, and the decision not to use probabilistic methods should be justified in the guideline.

    Consideration can be given to reflecting structural assumptions, and the inclusion or exclusion of data sources, in probabilistic sensitivity analysis. In this case, the method used to select the distribution should be outlined in the guideline (Jackson et al. 2011).

    Discounting

    Cost-effectiveness results should reflect the present value of the stream of costs and benefits accruing over the time horizon of the analysis. For the reference case, the same annual discount rate should be used for both costs and benefits. NICE considers that it is usually appropriate to discount costs and health effects at the same annual rate of 3.5%.

    Sensitivity analyses using 1.5% as an alternative rate for both costs and health effects may be presented alongside the reference‑case analysis, particularly for public health guidance. When treatment restores people who would otherwise die or have a very severely impaired life to full or near full health, and when this is sustained over a very long period (normally at least 30 years), cost-effectiveness analyses are very sensitive to the discount rate used. In this circumstance, analyses that use a non-reference-case discount rate for costs and outcomes may be considered. A discount rate of 1.5% for costs and benefits may be considered by the committee if, based on the evidence presented, it is highly likely that the long-term health benefits will be achieved. However, the committee will need to be satisfied that the recommendation does not commit the funder to significant irrecoverable costs.
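    As a simple illustration of the effect of the discount rate (the annual cost and QALY figures below are hypothetical), the present value of a stream accruing over the time horizon can be compared at the 3.5% reference-case rate and the 1.5% alternative rate:

        # Illustrative discounting of annual costs and QALYs (hypothetical values).
        def present_value(stream, rate):
            # Discount a list of annual amounts; year 0 is undiscounted.
            return sum(amount / (1 + rate) ** year for year, amount in enumerate(stream))

        annual_costs = [1000.0] * 10    # £1,000 incurred in each of 10 years
        annual_qalys = [0.1] * 10       # 0.1 QALYs gained in each of 10 years

        for rate in (0.035, 0.015):     # reference-case and alternative rates
            pv_costs = present_value(annual_costs, rate)
            pv_qalys = present_value(annual_qalys, rate)
            print(f"rate {rate:.1%}: discounted costs £{pv_costs:,.0f}, discounted QALYs {pv_qalys:.2f}")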

    Subgroup analysis

    The relevance of subgroup analysis to decision-making should be discussed with the committee. When appropriate, economic analyses should estimate the cost effectiveness of an intervention in each subgroup.

    Local considerations

    For service delivery questions, cost-effectiveness analyses may need to account for local factors, such as the expected number of procedures and the availability of staff and equipment at different times of the day, week and year. Service delivery models may need to incorporate the fact that each local provider may be starting from a different baseline of identified factors (for example, the number of consultants available at weekends). It is therefore important that these factors are identified and considered by the committee. Where possible, results obtained from the analysis should include both the national average and identified local scenarios to ensure that service delivery recommendations are robust to local variation.

    Service failures

    Service designs under consideration might result in occasional service failure – that is, where the service does not operate as planned. For example, a service for treating myocardial infarction may have fewer places where people can be treated at weekends than on weekdays as a result of reduced staffing. Therefore, more people will need to travel further by ambulance and the journey time will also be longer. Given the limited number of ambulances, a small proportion of journeys may be delayed, resulting in consequences in terms of costs and QALYs. Such possible service failures should be taken into account in effectiveness and economic modelling. This effectively means that analyses should incorporate the 'side effects' of service designs.

    Service demand

    Introducing a new service or increasing capacity will often result in an increase in demand. This could mean that a service does not achieve the predicted effectiveness because there is more demand than was planned for. This should be addressed either in the analysis or in considerations.

    1.8 Using economic evidence to formulate guideline recommendations

    For an economic analysis to be useful, it must inform the guideline recommendations. The committee should discuss cost effectiveness in parallel with general effectiveness when formulating recommendations (see the chapter on writing the guideline).

    Within the context of NICE's principles on social value judgements, the committee should be encouraged to consider recommendations that:

    • increase effectiveness at an acceptable level of increased cost or

    • are less effective than current practice, but free up sufficient resources that can be re‑invested in public sector care or services to increase the welfare of the population receiving care.

    The committee's interpretations and discussions should be clearly presented in the guideline. This should include a discussion of potential sources of bias and uncertainty. It should also include the results of sensitivity analyses in the consideration of uncertainty, as well as any additional considerations that are thought to be relevant. It should be explicitly stated if economic evidence is not available, or if it is not thought to be relevant to the question.

    Recommendations for interventions informed by cost–utility analysis

    If there is strong evidence that an intervention dominates the alternatives (that is, it is both more effective and less costly), it should normally be recommended. However, if 1 intervention is more effective but also more costly than another, then the incremental cost-effectiveness ratio (ICER) should be considered.

    Health effects

    The cost per QALY gained should be calculated as the difference in mean cost divided by the difference in mean QALYs for 1 intervention compared with the other.
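    For example (the mean costs and QALYs below are hypothetical), the calculation is simply the incremental cost divided by the incremental QALYs:

        # Worked ICER example with hypothetical mean costs and QALYs per person.
        mean_cost_a, mean_qalys_a = 10_000.0, 2.0    # comparator
        mean_cost_b, mean_qalys_b = 13_000.0, 2.2    # new intervention

        icer = (mean_cost_b - mean_cost_a) / (mean_qalys_b - mean_qalys_a)
        print(f"ICER: £{icer:,.0f} per QALY gained")  # £15,000 per QALY gained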

    If 1 intervention appears to be more effective than another, the committee has to decide whether it represents reasonable 'value for money' as indicated by the relevant ICER. In doing so, the committee should also refer to the principles that guide the development of NICE guidance and standards (also see box 7.2).

    Box 7.2 Comparing the cost effectiveness of different interventions

    "If possible, NICE considers value for money by calculating the incremental cost-effectiveness ratio (ICER). This is based on an assessment of the intervention's costs and how much benefit it produces compared with the next best alternative. It is expressed as the 'cost (in £) per quality-adjusted life year (QALY) gained'. This takes into account the 'opportunity cost' of recommending one intervention instead of another, highlighting that there would have been other potential uses of the resource. It includes the needs of other people using services now or in the future who are not known and not represented. The primary consideration underpinning our guidance and standards is the overall population need. This means that sometimes we do not recommend an intervention because it does not provide enough benefit to justify its cost. It also means that we cannot apply the 'rule of rescue', which refers to the desire to help an identifiable person whose life is in danger no matter how much it costs. Sometimes NICE uses other methods if they are more suitable for the evidence available, for example when looking at interventions in public health and social care.

    Interventions with an ICER of less than £20,000 per QALY gained are generally considered to be cost effective. Our methods manuals explain when it might be acceptable to recommend an intervention with a higher cost-effectiveness estimate. A different threshold is applied for interventions that meet the criteria to be assessed as a 'highly specialised technology'."

    When assessing the cost effectiveness of competing courses of action, the committee should not give particular priority to any intervention or approach that is currently offered. In any situation where 'current practice', compared with an alternative approach, generates an ICER above a level that would normally be considered cost effective, the case for continuing to invest in it should be carefully considered, based on similar levels of evidence and considerations that would apply to an investment decision. The committee should be mindful of whether the intervention is consuming more resource than its value is contributing based on NICE's cost per QALY threshold.

    Equity considerations

    Economic evaluation for NICE guidelines on healthcare, social care and public health interventions does not include equity weighting: a QALY has the same weight for all population groups.

    The estimation of QALYs implies a particular position on comparing health gained between individuals: an additional QALY is of equal value regardless of other characteristics of the individuals, such as their socio-demographic characteristics, their age or their level of health.

    It is important to recognise that care provision, specifically social care, may be means tested, and that this affects the economic perspective in terms of who bears costs – the public sector or the person using services or their family. Economic evaluation should reflect the intentions of the system.

    One of the principles that guide the development of NICE guidance and standards (principle 9) is the aim to reduce health inequalities, and that our guidance should support strategies that improve population health as a whole while offering particular benefit to the most disadvantaged. NICE's recommendations should not be based on evidence of costs and benefits alone, so equity considerations relevant to specific topics, and how these were addressed in economic evaluation, must be reported in the guideline.

    The severity modifiers introduced by the Centre for Health Technology Evaluation (CHTE) for technology appraisal guidance will not be applied to guideline analyses, because these modifiers replaced the previous 'end of life' considerations on the basis of overall cost neutrality to the NHS. For guidelines, the severity of the condition should be captured within the QALY benefits and then considered deliberatively within decision making.

    Distributional cost effectiveness analysis

    NICE recognises the important role NICE guidance can play in the national drive to reduce health inequalities, defined by the UK Government and the NHS as unfair differences in health between more and less socially disadvantaged groups.

    To facilitate its approach to considering health inequalities, NICE commissioned a prototype tool that provides quantitative estimates of the impact of NICE recommendations on health inequalities. The tool uses distributional cost-effectiveness analysis to model changes in health inequality between 5 socioeconomic groups in England, based on the neighbourhood index of multiple deprivation.

    Where data allow, NICE encourages identifying opportunities to pilot the tool. This will help gauge the analytical resources and committee time required to use the tool in practice, and determine its usefulness in stimulating committee deliberation on health inequalities and on trade-offs between cost effectiveness and health inequality effects.

    Non-health effects

    Outside the health sector, it is more difficult to judge whether the benefits accruing to the non-health sectors are cost effective, but it may be possible to undertake cost–utility analysis based on measures of social care-related quality of life. The committee should take into account the factors it considers most appropriate when making decisions about recommendations. These could include non-health-related outcomes that are valued by the rest of the public sector, including social care. It is possible that over time, and as the methodology develops (including the establishment of recognised standard measures of utility for social care), there will be more formal methods for assessing cost effectiveness outside the health sector.

    Recommendations for interventions informed by cost–benefit analysis

    When considering cost–benefit analysis, the committee should be aware that an aggregate of individual 'willingness to pay' (WTP) is likely to be more than public-sector WTP, sometimes by a considerable margin. If a conversion factor has been used to estimate public-sector WTP from an aggregate of individual WTP, the committee should take this into account. In the absence of a conversion factor, the committee should consider the possible discrepancy in WTP when making recommendations that rely on a cost–benefit analysis.

    The committee should also attempt to determine whether any adjustment should be made to convert 'ability‑to‑pay' estimates into those that prioritise on the basis of need and the ability of an intervention to meet that need.

    The committee should not recommend interventions with an estimated negative net present value (NPV) unless other factors such as social value judgements are likely to outweigh the costs. Given a choice of interventions with positive NPVs, committees should prefer the intervention that maximises the NPV, unless other objectives override the economic loss incurred by choosing an intervention that does not maximise NPV.
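    A minimal sketch of this decision rule (in Python, with hypothetical cash flows and an assumed 3.5% discount rate, neither of which is specified by the source text) is shown below:

        # Illustrative net present value (NPV) comparison for a cost–benefit analysis.
        # Cash flows (net benefits, £ per year) and the discount rate are hypothetical.
        def npv(cash_flows, rate=0.035):
            return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

        options = {
            "option A": [-50_000, 20_000, 20_000, 20_000],   # year 0 outlay, then annual net benefits
            "option B": [-30_000, 10_000, 10_000, 10_000],
        }
        npvs = {name: npv(flows) for name, flows in options.items()}
        for name, value in npvs.items():
            print(f"{name}: NPV £{value:,.0f}")
        # Under this rule, option B (negative NPV) would not normally be recommended,
        # and option A would be preferred unless other considerations override it.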

    Care must be taken with published cost–benefit analyses to ensure that the value of all the health and relevant non-health effects has been included. Older cost–benefit analyses, in particular, often consist of initial costs (called 'costs') and subsequent cost savings (called 'benefits') and fail to include monetised health effects and all relevant non-health effects.

    Recommendations for interventions informed by cost–consequences analysis

    The committee should ensure that, where possible, the different sets of consequences do not double count costs or effects. The way that the sets of consequences have been implicitly weighted should be recorded as openly, transparently and accurately as possible. Cost–consequences analysis then requires the decision-maker to decide which interventions represent the best value using a systematic and transparent process. Various tools, such as multi-criteria decision analysis (MCDA), are available to support this part of the process, although attention needs to be given to any weightings used, particularly with reference to the NICE reference case and NICE's principles.

    Recommendations for interventions informed by cost-effectiveness analysis

    If there is strong evidence that an intervention dominates the alternatives (that is, it is both more effective and less costly), it should normally be recommended. However, if 1 intervention is more effective but also more costly than another, then the ICER should be considered. If 1 intervention appears to be more effective than another, the committee has to decide whether it represents reasonable 'value for money' as indicated by the relevant ICER.

    The committee should use an established ICER threshold (see the section on cost–utility analysis). In the absence of an established threshold, the committee should estimate a threshold it thinks would represent reasonable 'value for money' as indicated by the relevant ICER.

    The committee should take account of NICE's principles when making its decisions.

    Recommendations for interventions informed by cost-minimisation analysis

    Cost minimisation can be used when the difference in effects between an intervention and its comparator is known to be small and the cost difference is large (for example, whether doctors or nurses should give routine injections). If it cannot be assumed from prior knowledge that the difference in effects is sufficiently small, ideally the difference should be determined by an equivalence trial, which usually requires a larger sample than a trial to determine superiority or non-inferiority. For this reason, cost-minimisation analysis is only applicable in a relatively small number of cases.

    Recommendations when there is no economic evidence

    When no relevant published studies are found, and a new economic analysis is not prioritised, the committee should make a qualitative judgement about cost effectiveness by considering potential differences in resource use and cost between the options alongside the results of the review of evidence of effectiveness. This may include considering information about unit costs, which should be presented in the guideline. The committee's considerations when assessing cost effectiveness in the absence of evidence should be explained in the guideline.

    Further considerations

    Decisions about whether to recommend interventions should not be based on cost effectiveness alone. The committee should also take into account other factors, such as the need to prevent discrimination and to promote equity. The committee should consider trade‑offs between efficient and equitable allocations of resources. These factors should be explained in the guideline.

    1.9 References

    Ara RM, Brazier J, Peasgood T et al. (2017) The identification, review and synthesis of health state utility values (HSUVs) from the literature. Pharmacoeconomics 35 (Suppl 1): 43–55

    Arber M, Garcia S, Veale T et al. (2016) Performance of search filters to identify health state utility studies. Value in Health 19: A390–1

    Arber M, Garcia S, Veale T et al. (2017) Performance of Ovid MEDLINE search filters to identify health state utility studies. International Journal of Technology Assessment in Health Care 33: 472–80

    Ayiku L, Levay P, Hudson T (2021) The NICE OECD countries geographic search filters: Part 2 – Validation of the MEDLINE and Embase (Ovid) filters. Journal of the Medical Library Association 109(2): 258–66

    Brazier J, Longworth L (2011) NICE DSU Technical support document 8: An introduction to the measurement and valuation of health for NICE submissions.

    Brennan A, Chick SE, Davies R (2006) A taxonomy of model structures for economic evaluation of health technologies. Health Economics 15: 1295–1310

    Centre for Reviews and Dissemination (2007) NHS economic evaluation database handbook [online]

    Department for Transport (2019). Transport Analysis Guidance (TAG) unit A2.1.

    Department for Transport (2021). Transport Analysis Guidance (TAG) unit A1.1.

    Dias S, Sutton AJ, Welton NJ, Ades AE (2011) NICE DSU Technical Support Document 6: Embedding evidence synthesis in probabilistic cost-effectiveness Analysis: software choices.

    Golder S, Glanville J, Ginnelly L (2005) Populating decision-analytic models: the feasibility and efficiency of database searching for individual parameters. International Journal of Technology Assessment in Health Care 21: 305–11

    HM Treasury (2015) The Aqua Book: guidance on producing quality analysis for government. [online; accessed 3 September 2018]

    HM Treasury (2013) Review of quality assurance of government analytical models: final report. [online; accessed 3 September 2018]

    Husereau D, Drummond M, Augustovski F et al. (2022) Consolidated Health Economic Evaluation Reporting Standards 2022 (CHEERS 2022) Statement: Updated Reporting Guidance for Health Economic Evaluations. Value Health 25(1): 3-9.

    Jackson CH, Bojke L, Thompson G et al. (2011) A framework for addressing structural uncertainty in decision models. Medical Decision Making 31: 662–74

    Jones K, Burns A. (2021) Unit Costs of Health and Social Care 2021, Personal Social Services Research Unit, University of Kent, Canterbury.

    Kaltenthaler E, Tappenden P, Paisley S (2011) NICE DSU Technical support document 13: identifying and reviewing evidence to inform the conceptualisation and population of cost-effectiveness models. [online; accessed 3 September 2018]

    Latimer N (2013) NICE DSU Technical support document 14: Survival analysis for economic evaluations alongside clinical trials – extrapolation with patient-level data.

    Longworth L, Rowen D (2011) NICE DSU Technical support document 10: the use of mapping methods to estimate health state utility values. [online; accessed 3 September 2018]

    National Institute for Health and Care Excellence The principles that guide the development of NICE guidance and standards.

    National Institute for Health and Care Excellence (2021) Assessing resource impact process manual: guidelines.

    National Institute for Health and Care Excellence (2022) NICE real-world evidence framework.

    NICE Decision Support Unit (2011) Technical support document series [accessed 3 September 2018]

    Paisley S (2016) Identification of evidence for key parameters in decision-analytic models of cost-effectiveness: a description of sources and a recommended minimum search requirement. Pharmacoeconomics 34: 597–608

    Papaioannou D, Brazier JE, Paisley S (2010) NICE DSU Technical support document 9: the identification, review and synthesis of health state utility values from the literature. [online; accessed 3 September 2018]

    Philips Z, Ginnelly L, Sculpher M et al. (2004) Review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technology Assessment 8: 1–158

    Wood H, Arber M, Isojarvi J et al. (2017) Sources used to find studies for systematic review of economic evaluations. Presentation at the HTAi Annual Meeting. Rome, Italy, 17 to 21 June 2017