
Andreea Moise

January 17th, 2019

Book Review: Doing Realist Research edited by Nick Emmel, Joanne Greenhalgh, Anna Manzano, Mark Monaghan and Sonia Dalkin


In Doing Realist Research, Nick Emmel, Joanne Greenhalgh, Anna Manzano, Mark Monaghan and Sonia Dalkin draw on the expertise of key specialists who push the boundaries of traditional research approaches to advocate for a more thoughtful and critical application of realist methodologies. This book will support researchers across disciplines to challenge the rigidity of established practice, writes Andreea Moise, and makes a compelling case for integrating aspects of realism or conducting research in accordance with a truly realist paradigm.

Doing Realist Research. Nick Emmel, Joanne Greenhalgh, Anna Manzano, Mark Monaghan and Sonia Dalkin (eds). SAGE. 2018.

Find this book (affiliate link): Amazon

One of the main challenges of conducting research is choosing approaches and methods of investigation that acknowledge the complexities of the social world. Most social and evaluative research takes place in field settings, also conceptualised as ‘open systems’: systems with permeable boundaries that allow dynamic movement and, most importantly, in which participants are not isolated from the multiple and contingent causes exerted by other entities within the complex systems in which they are nested. Although this complexity has been acknowledged by some of the most respected figures in the field, such as Donald T. Campbell, most programme evaluation conducted to date still assumes operation within ‘closed systems’ (i.e. laboratories), where undesired causal noise is forcibly excluded.

Doing Realist Research provides a realist perspective on solving this common issue by drawing on the expertise of key specialists in the field who challenge the boundaries of traditional approaches and advocate for a more thoughtful and critical application of realist methodologies instead. Realist research methodologies are grounded in a realist philosophy of science (ontology) and have an explanatory rather than a judgmental focus, being concerned with explaining the underlying factors and mechanisms that support, shape and make a difference (or not) to the outcomes of interest within complex systems. Building on Ray Pawson and Nick Tilley’s widely cited Realistic Evaluation, each of the twelve chapters in this edited collection adds to understanding by discussing key concepts, offering practical tips for conducting realist research, drawing on eye-opening case studies and signposting readers to other relevant publications. Chapter titles and main case studies are helpfully listed in a table in the book’s introduction, making it easy even for busy readers to delve into the sections that directly support their research interests.

The first chapter of the book describes the long-term, ongoing friendship and collaboration between Pawson and Tilley as well as the tensions that the two encountered in taking evaluation research in a realist direction to answer crucial questions – what works for whom, in what circumstances and how? – that would better address the needs of policymakers and practitioners rather than just answering ‘whether it worked’. The two challenge the hegemony of evidence hierarchies and associated assessment tools that take Randomised Controlled Trials (RCTs) to be the gold standard irrespective of the field setting. This is because in social settings the assumptions behind the strong internal validity claims of RCTs are rarely met (such as full randomisation, successful blinding and independence of targets of intervention and analysis).

Furthermore, policymakers and practitioners are often concerned with how to translate policies and programmes from one jurisdiction to another across time, which taps into the issue of external validity or the generalisability of findings. Brad Astbury expands the examination of causal explanation as the principle of theoretical generalisation in Chapter Four. Drawing on a case study of an early intervention programme for families in crisis, he emphasises the importance of learning cumulatively, drawing on existing theoretical resources such as logic models, as well as the value of bringing ideas (such as stakeholder theories) into relation with evidence. By focusing on aspects of causal explanation, the evidence becomes heavily contextualised and helps to answer the ‘will it work elsewhere?’ question, showing how realist researchers can respond pragmatically to the needs of (research) funders, policymakers and practitioners.

Image Credit: Pixabay (CC0)

In Chapter Seven, continuing to recognise the paramount importance of context, Rob Anderson, Rebecca Hardwick, Mark Pearson and Richard Byng discuss the limitations of the archetypal ‘black box’ evaluations conducted in health, which show minimal interest in how and why a particular configuration of programme resources changes outcomes. Since this has important consequences for the generalisability and usefulness of findings, the authors advocate for a more explanatory economic evaluation and examine the potential synergy between economic and realist evaluation. They argue that a realist approach, which articulates theories, captures resources and identifies the causal processes intervening between programme treatment and outcome, moves evaluation from describing what works to understanding who it works for, in what context and why, and thereby begins to produce generalisable knowledge about whether interventions are likely to work and be cost-effective elsewhere.

Finding relevant data of sufficient rigour is key to successfully conducting a research study. In Chapter Eight, drawing on his experience on the RAMESES Project in developing quality and publication standards for realist reviews, Geoff Wong discusses six challenges encountered during data gathering in realist reviews and suggests how they might be overcome. According to the author, a common problem that evidence synthesis researchers face is that of ‘empty’ reviews, exemplified by the case of reviewing the relevant evidence for legislation to ban smoking in private vehicles carrying children, which identified a single academic study, published in Australia.

Unlike in traditional reviews, relevant data may come from a variety of sources other than research studies, including non-documentary sources such as radio interviews with experts or social media. Wong also advises that searches should be driven not by a methodological hierarchy of evidence but by the need to identify meaningful data with which to develop, refine or test programme theories. In the case example, relevant data came from newspaper articles and consultations with stakeholders, including representatives of major tobacco companies. However, an important caveat is that including diverse forms of evidence should not come at the expense of rigour and transparency, and should remain in line with the quality standards for realist reviews.

Interestingly, the examples discussed throughout this chapter illustrate that not all data gathered by realist reviewers will be of the highest quality, and that even what might be perceived as ‘low-quality’ evidence can be used to build a credible argument underpinning one or more programme theories. The caveat is that any claims reviewers make about the plausibility of their programme theories will rest on both the trustworthiness of their data and the coherence of their arguments. However, while it is acknowledged that fellow researchers, policymakers or decisionmakers may sometimes disagree, there is no further discussion of how these tensions might be resolved.

Since the most prominent examples of the application of realist methodologies throughout the book concern programme evaluation, I found myself reflecting on the attributes of good evaluation and on my experience of working for an ambitious home visiting programme, which embarked on an intensive path of improvement and innovation in response to the publication of findings from an RCT focused on assessing short-term outcomes. Had a realist research approach been taken instead, learning about the programme would likely have surfaced much earlier in the research process, informing decisions about where the programme should focus its investment and energy. Using realist theories and methods in programme evaluation and social research also appears to facilitate a more inclusive approach, incorporating the input of those closer to the programme into the study design (consulting practitioners, service-users and other stakeholders) and making sure their views are represented. In so doing, ‘small contexts may overcome big policy reforms’, as Pawson states in the final chapter.

Overall, this book will support researchers from all disciplines to challenge rigid established practice and to better handle the complexity and multifaceted nature of measuring impact. Rather than being overly prescriptive, it fosters critical thinking and flexibility in making informed choices throughout the research process. When conventional research practices prove too narrow in focus to answer the pressing questions of policymakers, practitioners, funders, commissioners and the general public, the authors make a compelling case for integrating aspects of realism, or for conducting research in accordance with a truly realist paradigm.


Note: This review gives the views of the author, and not the position of the LSE Review of Books blog, or of the London School of Economics and Political Science. The LSE RB blog may receive a small commission if you choose to make a purchase through the above Amazon affiliate link. This is entirely independent of the coverage of the book on LSE Review of Books.


 


About the author


Andreea Moise

Andreea Moise is Principal Quality Improvement Analyst at the Family Nurse Partnership National Unit, a home visiting programme targeted at first-time young mothers and their families. She previously held a number of research and analytical positions in the private, voluntary and public sectors. Her research interests include child development, wellbeing, reducing inequalities and education, with a particular focus on longitudinal modelling and on using data and research to drive improvements in public services. Andreea holds a master’s degree in social research methods and social policy from the London School of Economics and is currently a doctoral researcher at University College London. She tweets at @moise_andreea.



This work by LSE Review of Books is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.0 UK: England & Wales licence.