Policymakers may have ballpark ways of knowing what is ‘going on out there’, but the sure-fire way to learn whether a policy is achieving its intended outcomes, and for whom, is to evaluate it. Raj Patel examines the challenges of assessing social, economic and public health interventions – and how longitudinal data can help.
According to the National Audit Office, only 8% of major government projects are robustly evaluated, while 64% are not evaluated at all. As a result, it is not always clear how billions of pounds spent are making a difference to people’s lives, or how they could be better spent.
The dynamic nature of policymaking – the perpetual drive for new ideas, and the challenges of measuring impact – means that evaluations are under-utilised as a way of working, of strategic learning and of driving change. Civil servants argue that the focus on delivery, ‘fire-fighting’ and bureaucracy also makes it harder to focus on outcomes. As a result, policies risk being transactional rather than transformative. The NAO investigation identified both demand-side and supply-side challenges in government, but there are also challenges of data.
In the field of social, economic and environmental development, policy proposals can rarely be tested in advance; where they can, randomised controlled trials are often the method of choice. However, observational data also have an important role to play, including in measuring spill-over effects and unintended consequences. The interventions discussed here affect thousands or millions of lives, and the question is rarely as simple as whether something works or not. A range of factors and processes – social and cultural norms, class and economic systems, how different stakeholders respond to new policies, and how something is implemented locally – can affect impact. In some cases, results may not be apparent for years, and what we are looking for is a slowing down or an acceleration of change.
Evaluations are more likely to be feasible, and the results used, if they are designed into the policy at the stage of conception. According to the UK Government Department for Levelling Up, Housing and Communities (DLUHC), “the advantage of considering evaluation evidence from the outset is that it increases the likelihood of generating timely and helpful information to assist in delivering the department’s objectives”.
To stimulate the use of evaluations in improving policies, Understanding Society has started a project on how long-term panel data could help – including a call for policy evaluation fellows. Longitudinal surveys, with their repeated measures, provide a valuable long-term resource for evaluations, and academic researchers familiar with panel data research methods have an opportunity to open up pathways to policy impact.
With regular information collected on behaviours and outcomes, researchers can examine what has changed following the introduction of particular policies. Data are collected from the same set of individuals over time, and across the entire year, allowing comparisons before and after a policy, or between groups (where one group is not affected by it). The Study collects data from across the UK, enabling the evaluation of policies that are devolved, or gradually rolled out across the country.
Longitudinal studies enable various evaluation techniques, with varying degrees of robustness: single-group pre- and post-analysis; interrupted time series analysis; difference-in-differences analysis, where changes in two groups are compared; propensity score matching; constructing a synthetic control group as the counterfactual; and regression discontinuity design. Longitudinal studies can also be cost-effective, providing data which would otherwise be expensive for an evaluation to collect.
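To illustrate the simplest of these designs: a difference-in-differences estimate compares the before/after change in a group affected by a policy with the change in an unaffected group, netting out the trend common to both. A minimal sketch in Python, using simulated data (all numbers below are invented for illustration, not drawn from Understanding Society):

```python
import random

random.seed(42)

# Simulate a two-period panel: an outcome measured for a treated and a
# control group, before and after a hypothetical policy is introduced.
def simulate(n, base, trend, effect):
    pre = [base + random.gauss(0, 1) for _ in range(n)]
    post = [base + trend + effect + random.gauss(0, 1) for _ in range(n)]
    return pre, post

treat_pre, treat_post = simulate(500, base=10.0, trend=1.0, effect=2.0)
ctrl_pre, ctrl_post = simulate(500, base=8.0, trend=1.0, effect=0.0)

def mean(xs):
    return sum(xs) / len(xs)

# Difference-in-differences: the treated group's before/after change,
# minus the control group's change, removes the shared time trend and
# leaves an estimate of the policy effect.
did = (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))
print(f"Estimated policy effect: {did:.2f}")  # close to the true effect of 2.0
```

The key assumption, as in any real evaluation, is that the two groups would have followed parallel trends in the absence of the policy – which is exactly where repeated observations of the same individuals help.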
Evidence-based mechanisms of change
Lack of demand stems from a variety of factors, according to the NAO report, ranging from a lack of political interest or cultural buy-in to evaluation evidence not being well understood by the policy profession. Public spending on evaluations amounts to a fraction of one per cent, and David Halpern, the ‘What Works’ National Adviser, has said that departmental investment in R&D should increase to the national average in the long term, and that all civil servants should understand basic evaluation methods.
In the short to medium term, the growing use of theories of change by many organisations, including government departments, provides an opportunity. Simply defined, a theory of change is a model of how (in principle) a policy, or an organisation, expects delivery to translate into desired outcomes. Theories of change take many forms in different contexts, but they are widely used as evaluative frameworks. If departments set out the mechanisms of change based on research and evaluation evidence, and published these (at least in outline) alongside each major new policy or programme, that could create a more reflective culture.
When using panel data, including a counterfactual can make it easier to infer causation, but it is not always easy to choose a suitable counterfactual or control area. Even with large panel surveys like Understanding Society, it is not always feasible to identify whether respondents have been beneficiaries of particular policies or services.
However, linking survey and administrative data can help with this ‘identification problem’, where information on take-up or use of a service is captured in administrative data sources. For example, a 2021 study evaluated parents’ responses to OFSTED assessments. Using linked household–school administrative data, the researchers observed some households being interviewed before their school was inspected (the control group), and some after (the treated group). Parents receiving good news about school quality significantly decreased the time they invested in their children, compared with parents who received the same news only later. Overall, however, progress on linking large-scale survey and administrative data remains slow.
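The identification strategy in that study can be sketched with a toy example: once survey interviews are linked to administrative inspection records, treatment status falls out of the timing of the two events. The record structure and field names below are hypothetical, purely for illustration:

```python
from datetime import date

# Hypothetical linked records: each household's survey interview date (from
# the panel study) joined to its school's inspection date (from admin data).
INSPECTION_DATE = date(2019, 6, 15)

households = [
    {"id": 1, "interview": date(2019, 3, 1)},   # interviewed before inspection
    {"id": 2, "interview": date(2019, 9, 10)},  # interviewed after inspection
]

# Households interviewed before the inspection had not yet received news
# about school quality (control); those interviewed after had (treated).
for h in households:
    h["treated"] = h["interview"] > INSPECTION_DATE

print([(h["id"], h["treated"]) for h in households])  # [(1, False), (2, True)]
```

Because interview timing is effectively random with respect to inspection dates, comparing the two groups approximates an experiment without any deliberate randomisation.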
Interventions and systems thinking
The pandemic required mobilisation across the whole of government. This way of working is becoming more critical as we face cross-cutting challenges such as the climate crisis, inequality, and an ageing society.
The issue of policy silos is a long-standing one, and the shift towards Outcome Delivery Plans (ODPs) and shared outcomes in government may gradually help to break it down.
One interesting example of collaboration between academia, government and practitioners, which takes a systems approach to policy appraisal and evaluation, is the SIPHER programme. It aims not to answer before-and-after questions about a single policy, but to better understand how many events or interventions contribute to changes in different parts of a system.
The programme has created a synthetic baseline population using the granularity of Census data and the richness of Understanding Society findings. SIPHER will initially focus on three areas of policy evaluation: health, wellbeing and inequality. It will give policymakers and practitioners a resource base with which to appraise policy. This will help inform future policies for spreading economic benefits and opportunities, improving the availability, quality and affordability of housing, and promoting and maintaining good mental health and wellbeing.
A culture of reflective policymaking is vitally important to drive change in people’s lives. Longitudinal data, combined with greater capability in robust evaluation design across academia and government, represent a real opportunity for academics to influence and improve policies.
The content generated on this blog is for information purposes only. This article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.
Image Credit: Lucian Alexe via Unsplash.