
As we begin to open up more scientific research to the public it is worth considering the risks involved when complicated, highly technical content is available but misinterpreted. Ellen Collins believes calls for openness often overestimate the enthusiasm and ability of the general public to actually engage with the messiness of research findings. More needs to be done to ensure scientific findings are reported and represented responsibly in public.

I recently fulfilled a lifetime ambition by appearing in the Guardian. Well, OK, not lifetime. I’ve only been reading it for about seventeen years. And when I say ‘appearing’ what I obviously mean is having some research that I worked on alluded to, without any citation, quotation or link to the findings. But still… you take your victories where you find them, right?

Old newspaper

Image credit: ShironekoEuro (CC BY)

Actually, it was a rather dispiriting experience. The journalist had picked up on one finding from our two-year project on student library usage and used it as a hook for her piece on how universities are engaging with big data. The finding was one that I blogged about quite early in the project. At the start of that post, there is a big line in bold type which essentially says ‘this finding is dodgy! Don’t use it!’. We subsequently did some further analysis and came up with a more nuanced interpretation of the data which told a more ambiguous story. Guess which one made it into the piece?

This being the Guardian, any Tom, Dick or Harriet can weigh in with his or her two penn’orth in the comments section. This makes for pretty fun and occasionally informative reading on some of the articles. But most comments on our work fell into one of two categories. First: ‘well durr! how much time and money went into proving this extremely obvious finding?’ and second: ‘surely these idiot researchers can see that not using the library is a symptom of failure, not a cause?’.

This whole situation relates to some things I’ve been considering for a while about public access to research, one of the Government’s big arguments in favour of open access. I know that people hold quite strong views about the public’s ability to engage with academic outputs. I don’t have any evidence on that to sway me either way. But this one experience highlights a few points that I’m not sure we really talk about enough when it comes to openness.

First: research is messy. Being open about this messiness is good, but it carries some heavy risks. Before we blogged the early, flawed but headline-grabbing finding, we had a long conversation about whether it was right to share it. We knew that because it was a hard number telling a positive story about libraries, people would pick it up and use it. I was afraid that the message about its flaws would get lost in re-tellings. But we decided that the project was about being open, and openness means showing your working. Unfortunately, I’ve been proven right. The later, better, results are ignored, and so is the clear health warning on the early, messy ones, because the simple story is too compelling.

Second: just because we make something open doesn’t mean people will actually read it. (Is this the publishing version of horses – water – drinking?). We fell at the first hurdle when the Guardian journalist neglected to link to our blog, showing all the findings. But it’s not that hard to find via Google (there it is, result number five). Instead, people simply engaged with the journalist’s flawed and partial representation of our results. If they had read – even glanced at – the project blog, they would have seen that the finding about dropping out was one tiny part of a much bigger research project on supporting student library usage, which answers the ‘what a waste of time’ objection in the comments. And they would also have seen that almost every blog post about findings stresses that correlation is not causation; our findings are indicators to support interventions or areas for further research, not explanations for student outcomes. So they don’t need to tell us that the relationship isn’t causal – we know. But because people only want or have time to engage with the journalist’s interpretation, they have a very incomplete understanding of the research.

correlation

Image credit: XKCD (CC BY-NC)

Third: what is a researcher supposed to do when this kind of thing happens? I’m trying to clear up some errors in this post but, even on a good day, I can’t claim that Ellen Blogs Research has the Guardian’s reach. Should I go into the comments section and respond to the same misunderstanding each of the seventeen times it occurs? Should I contact the journalist with a hissy-fit email and demand right of reply in the well-read Corrections and Clarifications column?

Finally: some people are really stupid. What’s that? Our findings about undergraduates must be nonsense because you finished your postgraduate degree and got a first without using the library once? Well, thank GOODNESS you were around to clear that one up for us! Our two years of statistical analysis completely fall apart in the face of your single anecdote.

Deep breath. I am aware that this isn’t life-or-death stuff. Nobody is going to suffer because a few hundred Guardian readers go away with a misunderstanding about a fairly specialist research project on student library usage. But these are questions we need to consider as we begin to open up all scientific research for public access, because some of it will be life-or-death stuff. Let’s consider Andrew Wakefield and the MMR nightmare. In this case, a person may have died because of irresponsible scientific reporting and the public’s inability to engage with the messiness of science. People want a clear and simple story, and journalists are happy to provide it. And once that story was in the public domain, it proved extremely difficult to counteract, even among people who, by their own confession, ought to have known better.

Now, we might argue that open access could be a solution to these problems. We no longer have to rely on journalists to interpret the findings, we can go back to them ourselves and see what they actually say. But my experience today suggests that this overestimates the enthusiasm and ability of the general public (or, at least, that bit of it which reads and comments on the Guardian website). And, even if people did go back to the original research, would they understand the findings? I’m pretty sure the chap with the anecdata about his degree success wouldn’t.

I believe in open access. I think it is a good thing that the general public should be able to see the results of scientific research. But I think we also need to acknowledge that making this complicated, messy, highly technical content open to people who don’t have the expertise – or perhaps even the inclination – to explore it properly, is a risk. And that if we are serious about openness we need to do more to help people find, read, understand and critique the original research outputs. I don’t know how we do this. But I’d certainly like to start trying to find out.

This was originally posted on Ellen’s personal blog and is reposted with permission. Her article on a similar subject recently appeared on the Guardian’s Higher Education Network with further discussion on the above.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Ellen Collins is a Research Consultant at the Research Information Network, where she has initiated, developed and managed projects for clients including academic publishers, librarians, funders and policymakers. She is particularly interested in how researchers find, use and share information, and the ways that their behaviour is changing in response to new communications platforms and business models.
