January 31st, 2014

The case for greater transparency in experimental and social science research

Proving public value can be an especially difficult task when high-profile cases of fraud in social science disciplines emerge. Rose McDermott makes the case for greater transparency in both the production and review of social science to restore the legitimacy of the scientific endeavour. While no single practice can eliminate fraud, greater transparency can make it both more difficult to commit and increase the shame associated with violation by making violators' identities public.

In a recent issue of PS edited by Arthur Lupia and Colin Elman dedicated to exploring the issue of openness in political science research, I made an argument about the increasing attention being paid to transparency and openness in experimental research in political science more generally. This issue is important for several reasons, but here I would like to discuss one of the most critical reasons it matters, which I did not address in that piece: I believe that transparency and openness are especially powerful ways to reduce accusations and suspicions surrounding fraud and corruption in research. In addition, I personally would go further, as I did in my comments on the panel surrounding this issue at the most recent American Political Science Association meeting, and call for greater transparency in reviews to diminish corruption in reviewing as well. I speak to each of these issues in turn below.

Image credit: NASA (public domain)

First, recent high-profile accusations and documentation of fraud and scientific misconduct against prominent researchers in both the U.S. and Europe have reduced both elite and public trust in the value of the scientific enterprise at large. The most prominent case, that of Diederik Stapel, a psychologist at Tilburg University, involved “unprecedented” fraud, including, most damningly, the fabrication of data. In the end, numerous high-visibility publications in venues as prestigious as Science were shown to be based on falsified or fabricated data and had to be retracted. A similar high-profile case in the U.S., involving Marc Hauser, a psychologist at Harvard, likewise required the withdrawal of numerous high-impact publications as well as the return of research money to governmental funding sources.

This fact alone makes clear why seeking to reduce or prevent such fraud is critical for all research academics who hope to secure funding for important research in a climate of budget cuts. When academic findings are themselves increasingly subject to scrutiny and scepticism, it becomes easier for policymakers of all sorts to declare a pox on everyone’s house and summarily cut funding to a community perceived to endorse cultural corruption at its core for purposes of personal ambition and financial gain. This perception may not be accurate, but perception often equals reality in the political world, and anyone looking for reasons to cut funding from one area to help donors in another need look no further than the bonanza provided by fraudulent research.

Of course, there are deep sociology-of-knowledge reasons why the two most prominent recent accusations of fraud have been levied against behavioral psychologists. Although some may claim such fraud is more likely to occur in these areas, that is unlikely to be the case. Rather, this “outing” reflects a fundamental shift in the power of sub-fields within psychology: as cognitive scientists and neuroscientists prove ascendant, those with less power find fewer patrons to protect their resource base. A reflection of this is seen in the U.S. Congressional evisceration of funding for political science (unless it directly relates to national security) under the Coburn amendment. It would have been harder to completely eliminate funding for, say, research into cancer or Alzheimer’s disease, since many people have those conditions, or love people who are afflicted by them, and so the value of that research is more obvious.

So part of the problem is that political science does not do a great job of proving its added value to the public, a point which John Aldrich and Arthur Lupia are seeking to address in the American Political Science Association’s Presidential Task Force on public engagement, currently underway. However, the other part of the problem is that work which appears “secret” can make it easier for observers to question its legitimacy, especially if such proprietary information makes it difficult or impossible for others to replicate the findings. Thus, while a lot of attention in the experimental community has gone into registering designs prior to analysis, and this procedure has great academic value in promoting integrity, more attention should go into posting data and .do files when articles are published, so that anyone can examine them, re-analyse them using different data or methods, or attempt to replicate them. So while no single practice can eliminate fraud, or the fear and perception of it, greater transparency can make it more difficult to commit, increase the likelihood that violators will be caught, and increase the shame associated with violation by making violators’ identities public.

One additional strategy to increase transparency in another domain would go far toward reducing corruption in the discipline as well: reviewers’ identities should be public. Double-blind review may have been a reasonable idea in the past, but in the age of the internet it has become a single-blind procedure by default, since reviewers can search out a title or phrase and ascertain authorship through vestiges of work that remain online. This creates unfair advantage. Because pieces are often sent to experts in an author’s area, corrupt or competitive reviewers can seek to reject or delay work in order to gain an advantage, steal ideas which they then strive to publish first, and otherwise derail the entire process of peer review. Sending work to those who are not in the same area is not a fair solution and does not help anyone. However, if reviewers had to make their identities public, they would be less able to get away with duplicitous reviews designed to delay or reject competitors’ work; similarly, if a reviewer then came out with a piece too similar to one they had reviewed, both editors and authors would be aware of this violation of intellectual property, and it would be easier to trace and would provide the paper trail should recompense be demanded. Of course, this may make it harder to find reviewers in an already overtaxed system where many people submit and far fewer review, but it would mean that those who do review would be held to a higher standard of integrity and accountability. The problem of reviewer fatigue needs a broader resolution to get around the reduction in numbers such a policy will inevitably produce, but a higher rate of desk rejections appears the easiest solution. In the end, greater transparency in both production and review works to enhance honesty, improve prospects for funding, increase public and elite perceptions of value, and strengthen the scientific endeavour.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Rose McDermott is a Professor of Political Science at Brown University and a Fellow of the American Academy of Arts and Sciences. She is past president of the International Society of Political Psychology. She is the author of three books and over 100 articles, many on experiments and the experimental method, and co-editor of two additional volumes.

