Guidance

The Evaluation Task Force Strategy 2022-2025

Updated 22 March 2024

Intro to the ETF

The Evaluation Task Force

Where are we today?

Evaluation in Government

Individual government departments are responsible for evaluating the policies and interventions they fund or deliver. Analysts in departments lead on the design and delivery of evaluations, working closely with policy and delivery officials. To help them achieve this, several cross-government teams and resources provide departments with oversight and support to design and deliver robust evaluation. These stakeholders are outlined in the following sections.

The Evaluation Task Force (ETF)

A joint HM Treasury and Cabinet Office unit, the ETF was set up to put evaluation evidence at the heart of government decisions. The team drives continuous improvements in the way HM Government programmes are evaluated to inform decisions on whether they should be continued, expanded, modified or stopped. The ETF provides departments with ‘reactive’ evaluation advice and support (in response to department requests), as well as a ‘proactive’ scrutiny and challenge function (which is guided by HMT priorities and key interests across government). Between April 2021 and August 2022, the ETF advised on 162 programmes covering £47.9bn of spend.

The ETF also acts as the secretariat for the What Works Network, a network of 13 centres covering a total of £250 billion of public spend that generate, collate and translate evidence in key areas for government and practitioners.

The ETF also oversees the £15 million Evaluation Accelerator Fund, which aims to accelerate robust evaluations across government to transform our understanding of the impact of activity in priority policy areas.

Further details on the ETF can also be found on the Evaluation Task Force website.

Where are we today: stakeholders

CGEG

The Cross Government Evaluation Group (CGEG) is a community of practice of cross-disciplinary evaluation specialists, with representation from most major departments, which discusses the demand for, supply of, and promotion of good evaluation practice across HM Government.

AF

The Analysis Function (AF) Evaluation Support Team supports the development of professional standards in evaluation across HM Government. They offer training and guidance on evaluation methods to analysts and other key stakeholders of evaluation (including policy professionals and operational delivery staff) via bespoke workshops and events.

Magenta Book

The Magenta Book, owned by HM Treasury, sets out detailed best practice on evaluation methods, use and dissemination. The ETF is responsible for keeping it up to date and will conduct a periodic review every five years. Ad-hoc requests for updates can be raised by emailing [email protected]

Green Book

The Green Book is guidance owned by HM Treasury on how to appraise policies, programmes and projects. It provides guidance on the design and use of monitoring and evaluation before, during and after implementation, along with additional materials and supplementary guidance published alongside it.

Further details on the different functions can be found on the Evaluation Task Force website.

What Works Network

The network consists of 13 What Works Centres. Each centre is committed to generating, collating and translating evidence in key policy areas for HM Government and practitioners.

Chief Scientific Adviser

The majority of government departments have a Chief Scientific Adviser (CSA). CSAs work alongside other analytical disciplines and senior teams to ensure robust, joined-up evidence is at the core of decisions within departments and across HM Government.

Departmental Director of Analysis

Every department has a Departmental Director of Analysis (DDAn). DDAns are senior civil servants who oversee evaluation delivery, prioritisation, staffing and, often, budgets across their department.

Government Social Research Profession

GSR is the analytical profession within HM Government for civil servants who provide social and behavioural research and advice. GSR supports the development, implementation, review and evaluation of government policy, working with other analytical, policy and delivery professions. [footnote 1]

  • Despite great examples of evaluation in pockets of HM Government, high-quality evaluation of government policies is still the exception rather than the rule, and so the existence and quality of findings are inconsistent. A recent NAO report outlined that much of the £1 trillion of government spending is not evaluated.

  • In December 2019, the Prime Minister’s Implementation Unit found that only 8% of the Government’s £432 billion spending on major projects had robust impact evaluation plans in place, and that 64% (£276 billion of taxpayers’ money) had no evaluation at all.

  • Further, a Public Accounts Committee Inquiry in 2022 found that departments are falling short of requirements on transparency and publication of evaluation findings, with more than one-third of chief analysts reporting that they are only sometimes able to publish evaluation findings as required.

  • A recent NAO report outlined strategic, technical and political barriers to evaluation that the ETF, in collaboration with departments, central evaluation teams and other stakeholders, aims to address by 2025.

What is our mission?

To put robust evaluation evidence at the heart of government decisions, so that HM Government can have confidence the money it spends is delivering better outcomes for the public and value for money.

What are we aiming to achieve?

Our ultimate goal is to increase the effectiveness and efficiency of HM Government decision-making so we have confidence the decisions made by policy-makers and the money HM Government spends are improving people’s lives. We want HM Government programmes to deliver the impacts they set out to achieve and produce better outcomes. With robust evaluation processes[footnote 2] in place, HM Government can learn and continually improve its policies and programmes. To deliver this, we are striving to achieve the following outcomes:

  1. HM Government departments design and deliver robust and proportionate evaluation across their portfolio.

  2. HM Government departments have a transparent approach to their evaluation activity, publishing evaluation outputs in a timely fashion.

  3. Good quality evaluation evidence informs decision-making in HM Government.

How will we achieve our key outcomes?

Our work splits into three interconnected areas:

  1. Embedding evaluation into decision-making processes

  2. Supporting the delivery of priority evaluations

  3. Promoting the importance of evaluation across HM Government

1. Embedding evaluation into decision-making processes

We work closely with colleagues across HM Government to help ensure that decision-making is informed by evaluation evidence and that effective systems, mechanisms and levers are in place within HM Government for delivering, managing and using evaluation evidence. For example:

  • We advise HMT Spending Teams on the evaluation evidence underpinning spending requests, looking at the evidence available and its quality. We also advise on evaluations being planned or carried out, on where evaluations could be improved, and on where new evaluations should be designed.

  • We work with HMT to ensure that departments’ Outcome Delivery Plans include appropriate consideration of evaluation evidence and robust evaluation plans, and commit to publishing evaluations in a timely manner. This also includes overseeing departments’ compliance with their settlement condition to publish an evaluation strategy (and other conditions).

  • We advise on appropriate evaluation plans for a number of major decision-making panels, including providing regular evaluation advice to the Complex Grant Advisory Panels, and audit major projects overseen by the Infrastructure and Projects Authority to ensure that robust evaluation plans are integrated into all major projects.

  • We advocate for the Magenta Book and are responsible for quality-assuring that its existing content and any updates are robust, relevant and of high quality.

  • We are collating useful information on evaluation on our website. This includes curated evaluation training and guidance, as well as featured evaluations from across HM Government.

  • We are designing an Evaluation Registry with the aim of making evaluation evidence more accessible and easier to find. This will help ensure that evidence from listed evaluations is used to inform decision-making in HM Government. Increasing the visibility and accessibility of evaluations will also improve accountability for findings and lessons learned. Moreover, publishing the evidence that supports a policy will help safeguard effective activities under public scrutiny and from spending cuts.

Case study - Embedding evaluation into decision-making processes

Spending Review 2021

The ETF worked with HM Treasury to review the evaluation plans of key business cases during the Spending Review 2021. Feedback from the ETF was used to inform value-for-money judgements on potential spending decisions, and our assessment led to a renewed focus on the importance of evidence to support future spending. This was also reflected in specific settlement conditions requiring ETF involvement in programmes, as well as in general conditions. The general conditions pushed the evaluation agenda across HM Government, for example by asking all departments to commit to and publish departmental evaluation strategies outlining a long-term approach to monitoring and evaluating their programmes and policies.

2. Supporting the delivery of priority evaluations

We work closely with department evaluators, analysts and policy colleagues to help design and deliver robust evaluations in key priority areas and challenge them in areas where evaluation is poor or non-existent. The ETF operates an account management system, with each of the four Evaluation Leads responsible for a few departments. This system enables us to develop close relationships with departments and offer ongoing advice and support where it’s needed. For example:

  • We have identified 10 top priority areas, where we focus most of our advice and support. In these priority areas, we are able to provide bespoke evaluation support throughout the entire lifecycle of a project or programme.

  • The ETF also manages the Evaluation Accelerator Fund, a £15m fund that has so far awarded over £12m to 16 projects run by departments and What Works Centres to help test and evaluate new policies.

  • Where appropriate, we also signpost other evaluation support available to civil servants, such as the What Works Network. Relevant teams are clearly signposted on our website to ensure that civil servants can easily find the right support for their evaluation needs.

  • We also support the evaluation of innovation funds that are trialling and testing new ideas. So far, the ETF has been able to establish robust monitoring and evaluation processes for more than 50 HMT-run innovation funds worth over £500 million.

Case study - Supporting the delivery of priority evaluations

Youth Investment Fund

The Department for Digital, Culture, Media and Sport has invested £368m in the Youth Investment Fund (YIF), its flagship and largest ever capital investment youth programme, designed to level up access to youth services across England.

YIF aims to create, expand and improve local youth facilities and their services, in order to drive positive outcomes for young people, including skills for life and work. The ETF has been involved in the design of the evaluation, recommending a feasibility study that will develop an impact evaluation approach to robustly measure outcomes and value for money. The ETF will sit on the evaluation advisory board once the full impact evaluation launches in January 2023.

3. Promoting the importance of evaluation across HM Government

We work closely with the evaluation community across HM Government to improve the way that the separate parts of the evaluation system work, individually and collectively. For example:

  • We collaborate with the CGEG and AF to raise the profile of evaluation across HM Government, ensuring that all parts of the evaluation system are in regular contact and that their offers complement each other.

  • We work with other professions, where issues are flagged, to integrate and promote robust evaluation processes. For example, the ETF has contributed to awareness-raising opportunities organised by the Finance and Policy professions.

  • We support the What Works Network to ensure departments are taking advantage of the offer provided by relevant What Works Centres that might be able to provide support, resources and further evaluation advice on specific policy issues.

  • We create regular opportunities to raise awareness of evaluation across HM Government, such as evaluation webinars, teach-ins, conferences and news stories on our website. Where possible, we include non-analytical professions in these events to raise awareness of evaluation among non-technical audiences.

Case study - Promoting the importance of evaluation across HM Government

Policy That Works conference

The Evaluation Task Force led the virtual Policy That Works conference between February and March 2022. The conference presented an opportunity to highlight available evaluation support services and good practice across three days of presentations, panel events and workshops designed to support good quality policy evaluation and evidence use across government.

With more than 2,200 registrants, it was HM Government’s biggest analytical conference to date. Participants and speakers came from a breadth of backgrounds, including but not limited to: civil servants, political officials, members of CGEG, the policy, operational delivery and analytical professions, the Analysis Function, and external experts from What Works Centres, research organisations and academia.

What backs up our assumptions?

Our approach has been informed by international good practice for improving public policy evaluation.

In 2016, Canada integrated evaluation reporting and requirements into regular programme budgets (via the Management Accountability Framework), leading to tangible improvements in evaluation and accountability across departments.

The Treasury in Japan only accepts spending proposals if they include a clear plan for impact evaluation. Initial findings show that, while spend on evaluation increased, overall departmental spend went down as a result of departments being able to target expenditure more effectively.

The US federal government introduced the Evidence Act in 2019 to improve evaluation activity in government. A key learning was that although the Evidence Act gave agencies the authority to prioritise funding impact evaluations, it neither integrated this with budget processes nor required any impact evaluation to take place as a condition of receiving programme funding.

In 2022, the Commission on Evidence-Based Policymaking found that the ideal national evidence infrastructure should focus on creating evidence-support systems as well as an evidence-implementation system in order to effectively use evidence in addressing societal challenges.

Our strategy has also been shaped by considering relevant theory.

We have considered leading behavioural science models, such as COM-B and the Extended Parallel Process Model (EPPM), when designing our strategy and thinking through our theory of change, ensuring we target evaluation Capability, Opportunity and Motivation to encourage evaluation Behaviour. We also make sure we provide a balance of challenge and support, so that people feel motivated as well as supported to make the changes necessary to improve evaluation in government.

How will we understand our impact?

We are held to account by our Ministers and our independent Oversight Board.

The ETF is a new endeavour and as such we will monitor and evaluate our work to ensure we are working efficiently, effectively and are on course to deliver the change we want to see.

We will conduct a formative evaluation to continually test, learn, and adapt the way we work to deliver the best possible outcomes. We will take a reflective approach to our work, regularly collecting and acting on feedback from stakeholders. We will take stock as a team through regular reflective sessions.

We will conduct an interim evaluation of our work to understand what we have achieved, how, and to identify how our work could be improved. We have developed a log frame to sit alongside our Theory of Change outlining how we will measure our progress in delivering these outcomes. The table below provides further details on our evaluation plan.

Evaluation Aim | Evaluation Methods
To reflect on and learn how the ETF can do its work better | Regular away days, wash-up sessions after major delivery, interviews with stakeholders, review of collected activity data
To understand what the ETF is doing and how this leads to achieving wider goals | ETF tracker, interviews with stakeholders, review of collected activity data, review of systematic change
To understand what impact the ETF is having and how it could improve | ETF tracker, interviews with stakeholders, review of systematic change and collected activity data, case studies

Measuring ETF outcomes

  • Underpinned by the ETF Theory of Change, we will measure our outcomes over time using a range of output and outcome indicators, to identify whether we are achieving our anticipated outcomes: improving the scale and quality of evaluation, building transparency and supporting evidence-informed decision-making.

  • We will report publicly on each priority indicator on an annual basis via the ETF website. This will include reporting on the interim evaluations.

  • This responds to recommendations in the NAO report on government evaluation (2021) and the Public Accounts Committee’s recent report calling for the ETF to establish quantifiable improvements in the scale and quality of evaluation.

  • Not all priority indicators are wholly controlled or ‘owned’ by the ETF and some will be dependent on cross-government partners working together on building an improved evaluation eco-system.

  • This is not an exhaustive list of all of the indicators that will be used to assess the effectiveness of the ETF and the wider evaluation eco-system in government.

Indicators: Outcome One

OUTCOME ONE: Government departments design and deliver robust and proportionate evaluation across their portfolio

Output indicators | Target: March 2023 | Target: March 2024 | Target: March 2025 | Data source
1.1 Number of evaluations ETF has advised on (cumulative) | 200 | 250 | 300 | ETF data
1.2 Value of programmes ETF has advised on (cumulative) | £50bn | £65bn | £80bn | ETF data
1.3 Number of evaluation awareness-raising opportunities ETF has led in past 12 months | 24 | 24 | 24 | Speaking events, written pieces
1.4 Proportion of Evaluation Accelerator Fund projects on track (RAG rated ‘Green’) | 100% | 100% | 100% | Quarterly monitoring

Outcome indicators | Target: March 2023 | Target: March 2024 | Target: March 2025 | Data source
1.5 Proportion of new [footnote 3] major projects on the GMPP with evaluation plans on the Evaluation Registry (cumulative) | N/A | N/A | 100% | Evaluation Registry
1.6 Proportion of ETF priority projects with robust evaluation plans (cumulative) | 60% | 80% | 100% | ETF data

Indicators: Outcome Two

OUTCOME TWO: Government departments have a transparent approach to their evaluation activity, publishing evaluation outputs in a timely fashion

Output indicators | Target: March 2023 | Target: March 2024 | Target: March 2025 | Data source
2.1 Number of departments with a departmental evaluation strategy published (one-time reporting) | 18 | N/A | N/A | ETF data
2.2 Evaluation Registry is launched online (one-time reporting) | N/A | Yes | N/A | ETF website

Outcome indicators | Target: March 2023 | Target: March 2024 | Target: March 2025 | Data source
2.3 Number of evaluation plans and trial protocols registered on the Evaluation Registry (cumulative) | N/A | N/A | 350 | Evaluation Registry, department data
2.4 Number of evaluations with published reports on the Evaluation Registry | N/A | N/A | 1,200 | Evaluation Registry, department data

Indicators: Outcome Three

OUTCOME THREE: Good quality evaluation evidence informs decision-making in government

Output indicators | Target: March 2023 | Target: March 2024 | Target: March 2025 | Data source
3.1 Number of HMT business cases and spending review bids that the ETF has advised on (cumulative) | 80 | TBC | TBC | ETF data
3.2 Proportion of departments compliant with department-level evaluation settlement conditions (cumulative) | 70% | 80% | 90% | ETF data

Outcome indicators | Target: March 2023 | Target: March 2024 | Target: March 2025 | Data source
3.3 Proportion of senior evaluation stakeholders who agree evaluation evidence informs the design and delivery of policies | 20% | 50% | 60% | ETF annual survey of senior evaluation stakeholders
3.4 Proportion of senior evaluation stakeholders who agree their department needs to do more to ensure evaluation evidence informs spending decisions | 40% | 30% | 20% | As above
3.5 Proportion of senior evaluation stakeholders who agree that senior civil servants (SCS) support evaluation | 40% | 50% | 60% | As above

Where do we want to be in 2025?

  • In 2025, in the areas that are most important to HM Treasury, the ETF has provided good-quality evaluation evidence to inform decisions on whether programmes need to be stopped, modified or expanded.

  • In 2025, all major programmes in government will have a robust evaluation in place, delivering value for money for the public and ensuring the impact of taxpayer money is maximised in the government’s highest profile programmes to improve lives and livelihoods in the UK.

  • In 2025, all departments are publishing findings from policy evaluations in a timely and transparent way, and as a result there is strong trust in the data and evidence behind policy-making. This stops the replication of programmes that we have learned do not work and enhances our ability to learn from the best practice of those that do.

  • In 2025, the work and impact of the ETF makes the UK government a world leader in evidence-based policy making.

  1. The ETF also collaborates with the other government research professions, but the remit and work of the ETF align most closely with the GSR.

  2. The ETF promotes the Nesta Standards of Evidence framework (with evaluations at Nesta Level 3 and above considered robust). We acknowledge that the level of robust and appropriate evaluation may need to be adapted to the context of a policy or programme.

  3. For this purpose, we define new projects as major projects with a start date of March 2023 or later.