Blog Team

November 5th, 2013

Spain should follow the UK, France and the United States in embracing evidence based social innovation

Public policy evaluation is a key factor in improving public programmes and services. David Casado and Blanca Lázaro assess policy evaluation methods in Spain. They argue that Spain lags behind other developed countries, such as the UK, France, and the United States, in its adoption of ‘evidence based social innovation’ processes. They write that Spanish policy evaluation would benefit from embracing experimental evaluation techniques, in which programmes and policies are judged using systematic trials.

The current economic crisis has led to renewed interest in public policy evaluation. At the state, regional and local levels in Spain, political leaders – of all colours and standpoints – have been insisting on the need to ‘improve the effectiveness’ of public programmes and services. This insistence is unquestionably positive, but on its own it is clearly insufficient to move us from aspiration to action, let alone to produce tangible results.

Credit: Miss Messie (CC-BY-SA-3.0)

Spain still lags far behind other developed countries in the area of public policy evaluation. When the impact of some policy or other is being discussed, the results invoked are often based, in the best of cases, on erroneous readings of the available data, if not on anecdotes that cannot be generalised or on ideological apriorisms that lack any empirical basis.

The absence of any evidence as to their effectiveness affects not only programmes that have been operating for some time, such as vocational training courses, assistance for business innovation and many others, but also the pilot programmes in which new formulas for tackling a particular problem are tested. Very often, these pilots are scaled up or abolished on the basis of factors that have little to do with their effectiveness, without us really knowing whether they achieved their objectives or not.

In contrast, since the end of the last decade, governments in the United Kingdom, France and the United States – and, at a more incipient stage, in Australia, Canada, Germany and Ireland – have been working with private philanthropic bodies to promote ‘evidence based innovation’ processes focused on a systematic search for new cost-effective solutions to social problems. In all cases, the emphasis is placed on preventive actions – early childhood, education, the transition to the world of work, the reintegration of former detainees, etc. – the areas that can deliver improved results and greater savings for the taxpayer in the long term. Rigorous and independent evaluation, preferably of the experimental type, has a leading role to play in testing the effectiveness of such actions.

Examples of the kind of questions addressed by such evaluations include the impact of the Bachillerato of excellence (for higher secondary education) in Madrid, or of the programme providing individual computers in Catalan schools (the 1×1 programme). Alternatively, we might ask what can be said about the economic incentives programme for young people in Extremadura who have not completed ESO (compulsory secondary schooling), which was aimed at getting them back into the classroom.

Providing conclusions in such cases is the challenge faced by those involved in evaluation. From this perspective, the impact of a programme is the difference between what really happens to participants and the so-called counterfactual situation: i.e. what would have happened to them had they not participated. It is a challenge because it is obviously impossible for the same person to be simultaneously a participant and a non-participant in the same programme. For this reason, evaluators try to estimate the counterfactual using various techniques, among which so-called experimental evaluation stands out thanks to the solidity of the results it provides.
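
Put more formally – in our own notation, borrowed from the standard potential-outcomes framework rather than from the article itself – the impact for an individual i can be sketched in LaTeX as:

\text{Impact}_i \;=\; Y_i(1) - Y_i(0), \qquad \text{Average impact} \;=\; \mathbb{E}\big[\, Y(1) - Y(0) \mid \text{participants} \,\big]

where Y_i(1) is the outcome observed with participation and Y_i(0) the counterfactual outcome without it. Only one of the two is ever observed for any given person, which is precisely why the second term has to be estimated rather than measured directly.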

This type of evaluative design is identical to that employed in the clinical trials used to establish the effectiveness of a drug. The process starts with an initial group of people who could benefit from a certain programme, and a random procedure is then used to decide who will participate in the programme (the treatment group) and who will not (the control group). Comparing the two groups after a certain time (for example, in the case of a training programme for unemployed people, checking whether participants have found work) gives us a measure of the real effectiveness of the public intervention being evaluated.
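
To make the logic concrete, the following short Python sketch – our own illustration with invented numbers, not part of the original article or of any of the programmes mentioned – simulates such a design: an eligible pool is split at random into treatment and control groups, and the difference in employment rates between the two groups estimates the programme’s effect.

import random

random.seed(42)

# Hypothetical eligible pool for a training programme (all figures are invented).
eligible = list(range(1000))

# Random assignment: half of the pool to the programme (treatment), half to control.
random.shuffle(eligible)
treatment = set(eligible[:500])

def found_work(participated):
    # Simulated outcome: an assumed 30% baseline chance of finding work,
    # plus an assumed 10-point boost for programme participants.
    prob = 0.30 + (0.10 if participated else 0.0)
    return random.random() < prob

outcomes = {person: found_work(person in treatment) for person in eligible}

def employment_rate(group):
    return sum(outcomes[p] for p in group) / len(group)

treated_rate = employment_rate([p for p in eligible if p in treatment])
control_rate = employment_rate([p for p in eligible if p not in treatment])

# Because assignment was random, the control group approximates the counterfactual,
# so this simple difference estimates the programme's impact.
print(f"Employment rate (treatment): {treated_rate:.1%}")
print(f"Employment rate (control):   {control_rate:.1%}")
print(f"Estimated impact:            {treated_rate - control_rate:+.1%}")

Random assignment is what makes the control group a credible stand-in for the counterfactual: with a sufficiently large pool, any remaining difference between the groups can be attributed to the programme itself rather than to pre-existing differences between participants and non-participants.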

The practice of experimental evaluation certainly presents major challenges – entrenched ways of managing programmes, scalability, ethical questions, etc. – and political considerations often weigh heavily. However, our feeling is that the complete absence of experimental evaluations in Spain is a situation that should not be tolerated for much longer.

In this context, the Catalan Institute of Public Policy Evaluation (Ivàlua) organised an International Workshop on the subject in September of this year with a view to disseminating the concept of experimental evaluation and the role that it can play in social innovation processes. In specific terms, three formulas for institutionalising evidence based innovation were examined: France’s Fonds d’Expérimentation pour la Jeunesse (FEJ), Britain’s Education Endowment Foundation (EEF), and the Social Impact Bonds (SIB) of the United Kingdom and the United States.

The private sector participates in all three schemes: in the most traditional way in the FEJ, a ministerial initiative financed jointly with private partners; more prominently in the EEF, which is legally a private entity even though it receives significant government funding; and, in the case of the SIB, through an innovative investor model in which the state nevertheless retains a fundamental role.

Implementing similar initiatives in Spain will require the public and private actors responsible for promoting them to understand that they are necessary and to get involved. This means breaking with the short-term inertia that always falls back on the same formulas of public and social intervention, merely expanding or trimming them depending on the economic cycle.

In other words, public and social organisations should make a firm commitment to knowledge that allows them to grapple with the complexity of social reality on better terms and to maximise their capacity to deal with it. Without this prior conviction, we will be incapable of moving beyond the current ‘declarative’ innovation – very present in discourses, plans and projects – towards a more disciplined, more rigorous innovation that aspires to have a real impact in economic and welfare terms.

Please read our comments policy before commenting.

Note:  This article gives the views of the authors, and not the position of EUROPP – European Politics and Policy, nor of the London School of Economics.

Shortened URL for this post: http://bit.ly/HscxXG

 _________________________________

About the authors

Blanca Lázaro – Catalan Institute for Public Policy Evaluation (Ivàlua)
Blanca Lázaro is Executive Director of Ivàlua, the Catalan Institute for Public Policy Evaluation. She was previously Director General of Services at the Department of Labor and Industry of the Catalan Government (2004 – 2008); Executive Director and project manager of the Barcelona Promotion Foundation (Barcelona Chamber of Commerce Group), from 1998 until 2004; Project Manager and senior consultant at the International Department of the CIREM Foundation (1994 – 1998), and head of the Unit of Studies at the Rector’s Office of the Autonomous University of Barcelona (1991-1994).

David Casado – Catalan Institute for Public Policy Evaluation (Ivàlua)
David Casado is an Analyst at Ivàlua, the Catalan Institute for Public Policy Evaluation. In his work at Ivàlua he has participated in the preparation of several Introductory Guides. He has taught on five editions of the Training Cycle in Public Policy Evaluation and he has participated in the evaluation of several policies and programmes.
