November 12th, 2018

Less than 5% of papers on the use of research in health policymaking tested interventions to see what worked. But those studies reveal a number of strategies for improvement


Population health policies stand a much better chance of succeeding if they're informed by research evidence. But what are the best ways of making sure this happens? Danielle Campbell and Gabriel Moore conducted a rapid review of the literature on the subject and found that very few studies actually tested interventions to see whether they worked. Those articles that did report on intervention strategies revealed a number of effective approaches, with recommendations including tailored approaches to presenting research findings to policymakers, interactive seminars or roundtables for communicating evidence, and increasing organisational capacity to use research.

The past decade has seen a surge of interest in looking at ways of enhancing the use of research in health policymaking, as well as a growing volume of literature on the subject. Has this interest produced any consensus on how to ensure policy is backed by evidence? And if so, what are the best strategies? Our paper in the latest issue of Public Health Research & Practice, published by the Sax Institute, tackles these questions.

In our rapid review, we looked at the literature on the subject published since 2009. The first thing to note is that, although we identified over 300 papers on the use of research in health policymaking, the vast majority of these were descriptive. Very few – in fact just 14 of 304 articles – actually tested interventions to see whether they worked. There is a serious discrepancy, therefore, between the surging interest in this area and the small number of studies actually testing strategies.

The 14 articles we did find (reporting on 13 intervention strategies) tended to be methodologically weak. Only one study used an experimental design, while one other used a pre/post-test design. The others used a range of approaches and were characterised by an absence of control groups, small sample sizes, and self-report data. Most measured outcomes related to factors that influence research use rather than actual research use.

Given these methodological issues, it was hard to come to any strong conclusions about what works and what doesn’t. Nonetheless, our review did point to some promising themes and possible ways forward.

One theme was the importance of ensuring that policymakers are provided with research syntheses and summaries that match their needs. This was the lesson from a study from Monash University, Australia, which interviewed 43 policymakers about a strategy aimed at supporting the use of systematic reviews in health policy. The policymakers in this study overwhelmingly agreed that research summaries were critical to increasing the use of research. The study demonstrated a need for layered or “graded-entry” formats, ranging from short summaries all the way up to detailed reports. It also showed the need for a mechanism to assess policymakers’ changing research needs so they could be targeted with a relevant synthesis.

This was also the message from a 2011 study from the US, which tested four different types of policy briefs on mammography screening with nearly 300 policymakers. The study authors found different types of policymakers tended to like different types of briefs, with staffers more likely to report a preference for story-focused briefs and legislators preferring data-focused briefs.

Another theme to emerge was the need for better collaboration between researchers and policymakers, and for the two to build closer relationships. One large study, involving nearly 1,000 policymakers, looked at an intervention in which researchers presented their findings directly to policymakers in either traditional seminars or interactive roundtables. Policymakers agreed that such presentations stimulated their thinking, that the interactive roundtables were more relevant to their needs than traditional seminars, and that the new knowledge could be used in their work.

Three of the studies under review focused on increasing organisational capacity to use research. A Canadian study looked at a scheme to improve implementation of best practice guidelines in health promotion programmes using a team of “organisational champions”, while a Dutch study explored the use of masterclasses for public health professionals and policymakers supporting a practice-based research project.

And finally, another theme explored was the impact of funding research infrastructure. A study from the Netherlands used interviews and focus groups to assess the impact of a grant-funded collaborative centre bringing together public health services, municipal departments, and university departments. The study suggested that boundary organisations don’t automatically produce cross-domain interactions, and that cultural changes and leadership are also needed for this to occur.

Overall, our review identifies several approaches that show some promise for improving the use of research in population health policy. They include the following:

  • A system for commissioning rapid reviews
  • Tailored approaches to presenting research findings to policymakers
  • The involvement of policymakers in research teams and networks
  • Interactive seminars and conferencing technology for communicating evidence
  • Initiatives to build capability in people and across organisations
  • Funded institutional-level collaborations.

Our paper highlights a strong interest in building partnerships and furthering interaction between policymakers and researchers; in fact, around half of the studies we looked at focused on this theme. This collaboration between research producers and users can cut across all parts of the research process, from shaping research questions and methodology, to collecting data, to interpreting and implementing the results. But the problem is that, at the moment, there is simply not enough evidence to build an integrated approach encompassing all these elements. What we urgently need now are well-designed studies that look at how to implement strategies to achieve these goals. We also need to identify the most practical metrics for assessing research use.

This blog post is based on the authors’ article, “Increasing the use of research in population health policies and programs: a rapid review”, published in Public Health Research & Practice (DOI: 10.17061/phrp2831816).

Featured image credit: Thomas Drouault, via Unsplash (licensed under a CC0 1.0 license).

Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

About the authors

Danielle Campbell is a Senior Analyst in the New South Wales Ministry of Health, Australia. She is interested in approaches to encourage the generation and use of research and evaluation to improve population health and decrease health inequities.

Gabriel Moore is the Manager, Knowledge Exchange at the Sax Institute in Sydney, Australia, where she oversees the Evidence Check and rapid synthesis programmes. Her research focuses on the use of rapid reviews and the roles and effectiveness of knowledge brokering.


