Does it work? Three considerations for getting innovation evaluation right

Sophie Castle-Clarke, Principal Advisor at Health Innovation East, explains how we measure the impact of our projects through robust evaluation.

Published: 14th January 2022

Making changes to how healthcare is delivered – whether in terms of changing a process, introducing a new technology, or trying a new staffing model – needs to be approached carefully.

Robust evaluation allows us to understand the impact of a pilot or programme as well as how effectively it was implemented, and whether there are lessons for implementation elsewhere. The NICE evidence standards framework for digital health technologies usefully sets out the level of evidence needed for different types of innovations that carry different levels of risk.

At Health Innovation East we ensure evaluation is carefully considered right at the start of project planning. But real-world evaluation can be complex. Speaking from experience, here are three considerations for getting evaluations right:

Establish your objectives and review them regularly

When starting an evaluation it is useful to describe the changes being implemented and why, how they fit into care pathways and what is expected of each project stakeholder. Be very clear about the anticipated impacts and ensure there is wide consensus on what the intervention is trying to achieve. This can help identify any gaps or differences of understanding early on. It also serves as a good basis for developing a consistent, agreed and documented theory of change which explicitly states the desired outcomes and any assumptions and external factors that could affect the project.

It is important to review a project’s objectives and theory of change regularly. When pilots are implemented in real-world settings, projects may need to adapt in response to external factors or changing assumptions. Regular reviews ensure objectives can be realigned where necessary, and keeping all stakeholders involved and informed of changes ensures there are no surprises.

Select metrics carefully

Once you’ve agreed your objectives, you need an effective way of measuring them: select methods and metrics that show whether the intended outcomes have been achieved. Using a validated indicator or measure produces robust data which, where appropriate, can be compared to existing evidence. A validated tool ensures that the results are reliable (i.e., consistent across different settings and patient cohorts), valid, and sensitive enough to detect the outcome being tested. Validated tools exist for a wide range of conditions and patient outcomes.

We commissioned Sheffield Hallam University to conduct an evaluation of Active+me – a remote cardiac recovery programme from Aseptika introduced in one of our acute providers during the pandemic to reduce face-to-face events. The primary outcome we were interested in was patient activation – an individual’s knowledge, skill, and confidence in managing their health (Hibbard et al., 2005) [1]. Using the Patient Activation Measure (PAM) – a validated metric which gives each patient a score between 0 and 100 – we could robustly ascertain not only how effective the intervention had been in increasing activation in our cohort, but also how this compared to similar interventions elsewhere.

Among the 46 patients recruited into the pilot, average PAM scores increased from 65.5 to 70.2, and those for high-risk patients increased from 61.9 to 75.0. A similar pilot study found that PAM scores increased by 4.2 points after hospital-based cardiac rehab and 4.8 points after telemetry-based cardiac rehab (Knudsen et al., 2020) [2]. Using the PAM meant that outcomes were comparable and we were able to see the effectiveness of the intervention relative to alternative approaches.
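
For readers who like to see the mechanics, the short Python sketch below shows how a cohort’s change in PAM score can be summarised and set against published benchmarks. The patient-level scores in it are entirely hypothetical; only the benchmark figures from Knudsen et al. (2020) are taken from the text above.

```python
# Minimal sketch of summarising PAM change and comparing it with published
# benchmarks. The paired patient scores below are hypothetical; only the
# benchmark figures quoted in the article are real.

from statistics import mean

def mean_pam_change(baseline_scores, follow_up_scores):
    """Average change in PAM score (0-100 scale) across a cohort."""
    return mean(after - before for before, after in zip(baseline_scores, follow_up_scores))

# Hypothetical paired baseline and follow-up scores for a handful of patients
baseline = [58.1, 63.4, 70.2, 61.0, 66.5]
follow_up = [63.0, 68.9, 72.4, 70.1, 69.8]

pilot_change = mean_pam_change(baseline, follow_up)

# Benchmarks reported by Knudsen et al. (2020), in PAM points gained
benchmarks = {"hospital-based cardiac rehab": 4.2, "telemetry-based cardiac rehab": 4.8}

print(f"Pilot cohort: +{pilot_change:.1f} PAM points")
for label, change in benchmarks.items():
    print(f"{label}: +{change:.1f} PAM points")
```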

Read more about our work with Active+me REMOTE Cardiac Recovery.

Be aware of confounding variables

When interpreting evaluation results, it is important to understand what caused an impact to occur. This can be done by:

  • Assessing how far the intervention was responsible for the outcome (known as assessing the causal contribution), for example by collecting interview data on what was perceived to cause the change, examining the link between the level of engagement with an intervention and the outcome, or checking the results against evidence-based predictions;
  • Comparing the results to the counterfactual (for example by comparing the outcomes of the intervention group with a control group); and
  • Investigating possible alternative explanations (for example by conducting a force field analysis to identify other factors impacting on an intervention).

There are a number of ways to do each of these, and much depends on the particular evaluation in question. For more detail on all three approaches, see the Better Evaluation website.
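
As a simple illustration of the second point – comparing results to the counterfactual – the Python sketch below compares outcome scores for a hypothetical intervention group against a hypothetical control group using a two-sample t-test. The data and the choice of test are illustrative only; they are not drawn from any of our evaluations.

```python
# Illustrative sketch of a simple counterfactual comparison: outcomes for a
# hypothetical intervention group versus a control group receiving treatment
# as usual. Data and test choice are for illustration only.

from statistics import mean
from scipy.stats import ttest_ind  # two-sample t-test

# Hypothetical outcome scores (e.g. a validated patient-reported measure)
intervention = [70.2, 68.5, 74.1, 66.8, 71.9, 69.3]
control = [64.0, 66.2, 63.5, 67.1, 65.8, 64.9]

difference = mean(intervention) - mean(control)
t_stat, p_value = ttest_ind(intervention, control, equal_var=False)

print(f"Mean difference vs counterfactual: {difference:.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```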

We recently commissioned an evaluation of the Skin Analytics teledermatology platform, which gives primary care staff remote access to a dermatology consultant for rapid assessment of skin lesions. The evaluation compared the course of action suggested by the remote consultant with the eventual patient outcome in secondary care, revealing how accurate the remote consultant’s recommendations were. Where the remote consultant recommended an urgent referral to secondary care, 65% of lesions required treatment or longer-term monitoring; this dropped to 38% for standard referrals. The service also recommended immediate discharge back to primary care for 55% of lesions, and just 0.45% of those lesions re-presented to primary care within six months of the initial referral.
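
To show the shape of this kind of analysis, the sketch below cross-tabulates a teledermatology recommendation against the eventual outcome using entirely invented referral records; it is not the analysis used in the evaluation itself.

```python
# Illustrative sketch only: invented (recommendation, eventual outcome) pairs
# showing how recommendations can be checked against what happened next.

from collections import Counter

referrals = [
    ("urgent referral", "treated"),
    ("urgent referral", "monitored"),
    ("urgent referral", "discharged"),
    ("standard referral", "treated"),
    ("standard referral", "discharged"),
    ("discharge to primary care", "no re-presentation"),
    ("discharge to primary care", "no re-presentation"),
]

totals = Counter(rec for rec, _ in referrals)
needed_care = Counter(rec for rec, outcome in referrals if outcome in ("treated", "monitored"))

for rec, total in totals.items():
    share = needed_care.get(rec, 0) / total
    print(f"{rec}: {share:.0%} required treatment or monitoring")
```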

This approach assumes that all referrals made to the teledermatology service would otherwise have been referred directly to secondary care by the GP. One way to test that assumption would have been to also compare the results with practices not using the teledermatology platform (control practices), so that outcomes could be compared against treatment as usual (which may have revealed different levels of risk aversion and referral behaviour between clinicians). However, data and capacity limitations made this difficult, and a level of pragmatism is needed to provide useful data in a timely way.

Final reflections

Real-world evaluations can be complicated, especially in complex healthcare settings. However, it is important that evaluation is robust if we are to fully understand the impact of the changes we make.

Plans for implementation may change as projects progress and evaluations may need to change with them, but a flexible and pragmatic approach means we can gather the evidence the health and care system needs to implement innovation with confidence.

If you want to know more about how Health Innovation East can support you in delivering and evaluating innovation projects, please get in touch.

About the author

Sophie Castle-Clarke
Sophie Castle-Clarke, Principal Advisor at Health Innovation East

Sophie leads the local, regional and commissioned portfolio for the Delivery Team, with particular oversight of the organisation’s evaluations. Prior to joining Health Innovation East in September 2019, Sophie worked in health policy research at the Nuffield Trust and RAND Europe. Sophie gained her MPhil from the University of Cambridge in 2011.


References

  1. Hibbard JH, Mahoney ER, Stockard J, Tusler M. Development and testing of a short form of the Patient Activation Measure. Health Serv Res. 2005;40(6 Pt 1):1918-30.
  2. Knudsen MV, Petersen AK, Angel S, Hjortdal VE, Maindal HT, Laustsen S. Tele-rehabilitation and hospital-based cardiac rehabilitation are comparable in increasing patient activation and health literacy: A pilot study. Eur J Cardiovasc Nurs. 2020;19(5):376-85.
