
    The content on this page is not current guidance and is only for the purposes of the consultation process.

    3 Approach to evidence generation

    3.1 Evidence gaps and ongoing studies

    Table 1 summarises the evidence gaps and ongoing studies that might address them. Information about evidence status is derived from the external assessment group's report; evidence not meeting the scope and inclusion criteria is not included. The table shows the evidence available to the committee when the guidance was published.

    Table 1 Evidence gaps and ongoing studies

    Evidence gap | Overcoming Bulimia Online
    Comparative evidence about remission, relapse and mortality with the technology used as a self-help intervention | Limited evidence
    Long-term effectiveness and outcomes | Limited evidence
    Reasons for high attrition and barriers to engagement | Limited evidence
    Equity and accessibility concerns | No evidence
    Resource and care pathway impact | No evidence
    Generalisability and population diversity | Limited evidence
    Acceptability and user experience in routine NHS settings | Limited evidence

    3.2 Data sources

    Most of the data, particularly that relating to comparative evidence and attrition, is likely best collected through primary data collection using the technology itself. There are data sources that may collect some of the necessary outcome information, but they may require linkage to the primary data collection. 

    There are several existing data collections, with different strengths and weaknesses, that could potentially support evidence generation. NICE's real-world evidence framework provides detailed guidance on assessing the suitability of a real-world data source to answer a specific research question. Potential data sources, and how they could be combined, are described below.

    Some data, such as therapy start dates and engagement metrics, may be generated through the digital technology itself. These data can be linked with routinely collected datasets where appropriate.

    The Clinical Practice Research Datalink (CPRD) and Hospital Episode Statistics (HES) datasets are well-established and reliable sources of NHS data. However, neither dataset will be modified to add new data fields specific to the technology, so the digital intervention could be adapted to collect the key data items of interest for the evaluation.

    The quality and coverage of real-world data collections are of key importance when they are used to generate evidence. Active monitoring and follow-up through a central coordinating point is an effective and viable approach to ensuring good-quality data with broad coverage.
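    To illustrate how technology-generated data could be linked to a routinely collected extract, a minimal sketch in Python is shown below. It assumes both extracts share a pseudonymised study identifier; the file names and columns (study_id, sessions_completed, gp_visits_12m) are hypothetical and are not fields defined by CPRD or HES.

    ```python
    import pandas as pd

    # Usage data exported from the digital intervention
    # (hypothetical columns: study_id, sessions_completed, last_login).
    usage = pd.read_csv("technology_usage_export.csv", parse_dates=["last_login"])

    # Routinely collected extract, pseudonymised with the same study_id
    # (hypothetical columns: study_id, gp_visits_12m).
    routine = pd.read_csv("routine_data_extract.csv")

    # A left join keeps every technology user, even without a routine-data match;
    # validate raises an error if either file contains duplicate study_id values.
    linked = usage.merge(routine, on="study_id", how="left", validate="one_to_one")

    # Report linkage coverage so the quality of the combined dataset can be assessed.
    linkage_rate = linked["gp_visits_12m"].notna().mean()
    print(f"Linked to routine data: {linkage_rate:.0%}")
    ```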

    3.3 Evidence collection plan

    NICE suggests a mixed-methods approach to address the identified evidence gaps: a prospective comparative cohort study combined with a qualitative survey. The qualitative component should explore user experience, engagement and barriers to access in more depth.

    Data could be collected through a combination of:

    • primary data collection (for example, outcome measures and surveys)

    • data generated through the technology itself (for example, engagement metrics and session completion)

    • routinely collected real-world data sources (for example, CPRD and HES).

    Data collection should follow a predefined protocol, and quality assurance processes should be put in place to ensure the integrity and consistency of data collection. See NICE's real-world evidence framework, which provides guidance on the planning, conduct and reporting of real-world evidence studies. It also sets out best-practice principles for designing robust real-world evidence studies to assess comparative treatment effects.
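    As one way of operationalising these quality assurance processes, the sketch below runs basic completeness, plausibility and consistency checks on a pooled study dataset. It is an illustration only: the thresholds and column names (remission_3m, age, site, study_id) are assumptions, and the actual rules should come from the predefined protocol.

    ```python
    import pandas as pd

    def run_quality_checks(df: pd.DataFrame) -> list[str]:
        """Return a list of human-readable quality-assurance findings."""
        findings = []

        # Completeness: flag key fields with more than 10% missing values.
        for column in ["remission_3m", "age", "site"]:
            missing = df[column].isna().mean()
            if missing > 0.10:
                findings.append(f"{column}: {missing:.0%} missing")

        # Plausibility: ages outside the expected range for the study population.
        out_of_range = df.query("age < 16 or age > 100")
        if not out_of_range.empty:
            findings.append(f"{len(out_of_range)} records with implausible age")

        # Consistency: duplicate participant identifiers.
        duplicates = df["study_id"].duplicated().sum()
        if duplicates:
            findings.append(f"{duplicates} duplicate study_id values")

        return findings

    # Example use: findings = run_quality_checks(pd.read_csv("pooled_study_data.csv"))
    ```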

    Prospective real-world comparative cohort study

    In this type of study, data should be collected from healthcare services where the digital technology is offered and from services where it is not, so that outcomes can be compared. People in both groups should be followed from the point at which they would typically be offered the technology.

    The comparison group should include people from similar services with comparable patient populations and standard care pathways, but without access to the digital technology. Ideally, the study should be conducted across multiple centres to reflect the diversity of NHS service provision.

    Non-random assignment to interventions introduces a risk of confounding bias. So, appropriate methods such as matching or adjustment (for example, propensity score methods) should be used to minimise selection bias and balance confounding factors between groups. High-quality data on patient characteristics will be essential to support these methods. The identification of key confounders should be informed by expert input during protocol development.
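    As an illustration of the adjustment methods mentioned above, the sketch below estimates propensity scores with logistic regression and applies inverse probability of treatment weighting to compare 12-month remission between groups. It is a minimal example only: the dataset, column names (used_technology, remission_12m) and confounder list are assumptions, and a full analysis would also need balance diagnostics and appropriate variance estimation.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical pooled dataset: one row per participant, confounders coded numerically.
    df = pd.read_csv("cohort_data.csv")
    confounders = ["age", "baseline_severity", "comorbid_depression", "deprivation_quintile"]

    # 1. Model the probability of receiving the digital intervention (the propensity score).
    ps_model = LogisticRegression(max_iter=1000)
    ps_model.fit(df[confounders], df["used_technology"])
    df["propensity"] = ps_model.predict_proba(df[confounders])[:, 1]

    # 2. Inverse probability of treatment weights: 1/p for the intervention group,
    #    1/(1 - p) for the comparison group.
    treated = df["used_technology"] == 1
    df["iptw"] = np.where(treated, 1 / df["propensity"], 1 / (1 - df["propensity"]))

    # 3. Weighted remission proportions in each group (a crude adjusted comparison).
    weighted_remission = df.groupby("used_technology").apply(
        lambda g: np.average(g["remission_12m"], weights=g["iptw"])
    )
    print(weighted_remission)
    ```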

    Qualitative survey

    Feedback should be collected through a survey or structured interviews with people who have used the technology. The robustness of the findings will depend on:

    • broad and inclusive distribution across eligible users

    • the sample of respondents being representative of the population of potential users.
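    One simple way to assess representativeness, assuming demographic data are held for all eligible users, is to compare the profile of survey respondents with the profile of everyone who was eligible. The sketch below illustrates this; the file and column names (responded, age_band, ethnicity) are hypothetical.

    ```python
    import pandas as pd

    # Hypothetical extract: one row per eligible user, with a flag for survey response.
    users = pd.read_csv("eligible_users.csv")  # columns: study_id, responded, age_band, ethnicity

    def compare_profiles(df: pd.DataFrame, column: str) -> pd.DataFrame:
        """Proportion of each category among all eligible users versus respondents."""
        all_eligible = df[column].value_counts(normalize=True).rename("all_eligible")
        respondents = (
            df.loc[df["responded"] == 1, column]
            .value_counts(normalize=True)
            .rename("respondents")
        )
        return pd.concat([all_eligible, respondents], axis=1).fillna(0).round(3)

    for characteristic in ["age_band", "ethnicity"]:
        print(compare_profiles(users, characteristic))
    ```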

    3.4 Data to be collected

    Real-world prospective comparative cohort study

    The following information should be collected:

    • remission status at the end of treatment and at 3, 6 and 12 months

    • relapse rate at 3, 6 and 12 months (reappearance of symptoms after remission)

    • abstinence from binge or purge episodes

    • mortality

    • patient-reported outcomes (for example, the Eating Disorder Examination Questionnaire global score, Hospital Anxiety and Depression Scale and Clinical Impairment Assessment questionnaire)

    • health-related quality of life (for example, EQ-5D-5L at baseline, 3, 6 and 12 months)

    • uptake of follow-on treatment after digital intervention

    • impact on reducing the need for more intensive care

    • time to dropout or last session completed

    • engagement metrics (for example, time spent per session or number of logins; see the sketch after this list)

    • session completion rate

    • reported barriers (for example, technical issues, lack of support, lack of perceived benefit)

    • patient characteristics and demographics (for example, age, gender, ethnicity, sexual orientation, socioeconomic status, disability or cognitive impairment, education level, diagnosis and symptom severity)

    • usage data stratified by demographic variables (for example, session completion rates by age, ethnicity, disability status or severity of symptoms)

    • intervention cost per user (licence, support, maintenance)

    • staff time associated with implementation or support

    • number of:

      • binge-eating episodes at baseline, end of treatment, 3, 6 and 12 months

      • days missed from school or work

      • GP visits

      • specialist consultations (for example, psychiatrists or eating disorders services)

      • emergency department visits

      • crisis services use

      • community mental health teams use

      • inpatient admissions

      • missed or cancelled appointments.
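    The engagement and dropout items above could be derived from the technology's own usage logs. A minimal sketch is shown below; the log format, column names (study_id, session_number, completed, started_at) and the assumed 8-session programme length are illustrative and would need to match the technology's actual export.

    ```python
    import pandas as pd

    # Hypothetical session log: one row per started session.
    sessions = pd.read_csv("session_log.csv", parse_dates=["started_at"])

    TOTAL_SESSIONS = 8  # assumed programme length; replace with the real number of modules

    per_user = sessions.groupby("study_id").agg(
        sessions_started=("session_number", "nunique"),
        sessions_completed=("completed", "sum"),
        last_activity=("started_at", "max"),
    )

    # Session completion rate: completed sessions as a share of the full programme.
    per_user["completion_rate"] = per_user["sessions_completed"] / TOTAL_SESSIONS

    # Last session completed, as a simple marker of where dropout occurred.
    last_completed = (
        sessions[sessions["completed"] == 1]
        .groupby("study_id")["session_number"]
        .max()
        .rename("last_session_completed")
    )
    per_user = per_user.join(last_completed)

    print(per_user.head())
    ```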

    Qualitative survey study

    Outcomes to be collected from people who have an eating disorder and from healthcare staff:

    • patient-reported barriers to accessing or using the digital intervention (through questionnaires or interviews)

    • healthcare-professional-reported reasons for not offering the intervention to certain people

    • feedback from excluded people or people who declined to take part

    • transparency of inclusion and exclusion criteria

    • user feedback via semistructured interviews or open-ended surveys

    • feedback from NHS staff (for example, GPs, psychologists, eating disorder specialists).

    Information about the technologies

    Information should be collected about:

    • how the technologies were developed

    • how people are referred to the technology and at what point in their clinical pathway

    • any updates to the technologies.

    As in section 3.3, data collection should follow a predefined protocol, with quality assurance processes in place to ensure the integrity and consistency of data collection, in line with NICE's real-world evidence framework.

    3.5 Evidence generation period

    This will be 2 years, to allow time for set-up, implementation of the technology, data collection, analysis and reporting.

    3.6 Following best practice in study methodology

    Following best practice when conducting studies is paramount to ensuring the reliability and validity of the research findings. Adhering to rigorous guidelines and established standards is crucial for generating credible evidence that can improve care. The NICE real-world evidence framework details some key considerations.