Proving public value can be especially difficult when high-profile cases of fraud in the social sciences emerge. Rose McDermott makes the case for greater transparency in both the production and review of social science to restore the legitimacy of the scientific endeavour. While no single practice can eliminate fraud, greater transparency can make fraud more difficult to commit and increase the shame associated with violation by making violators' identities public.
In a recent issue of PS edited by Arthur Lupia and Colin Elman dedicated to exploring openness in political science research, I made an argument about the increasing attention being paid to transparency and openness in experimental research in political science more generally. This issue is important for several reasons, but here I would like to discuss one of the most critical reasons it matters, which I did not address in that piece: I believe that transparency and openness are especially powerful ways to reduce accusations and suspicions of fraud and corruption in research. In addition, I personally would go further, as I did in my comments on the panel on this issue at the most recent American Political Science Association meeting, and call for greater transparency in reviews to diminish corruption in reviewing as well. I address each of these issues in turn below.
First, recent high-profile accusations and documentation of fraud and scientific misconduct against prominent researchers in both the U.S. and Europe have reduced elite and public trust in the value of the scientific enterprise at large. The most prominent case, that of Diederik Stapel, a psychologist at Tilburg University, involved “unprecedented” fraud, including, most damningly, the fabrication of data. In the end, numerous high-visibility publications in venues as prestigious as Science were shown to be based on falsified or fabricated data and had to be retracted. A similar high-profile case in the U.S., involving Marc Hauser, a psychologist at Harvard, likewise required the withdrawal of numerous high-impact publications as well as the return of research money to governmental funding sources.
This fact alone makes clear why seeking to reduce or prevent such fraud is critical for all research academics who hope to secure funding for important research in a climate of budget cuts. When academic findings are themselves increasingly subject to scrutiny and scepticism, it becomes easier for policymakers of all sorts to declare a pox on everyone’s house and summarily cut funding to a community perceived to endorse cultural corruption at its core for purposes of personal ambition and financial gain. This perception may not be accurate, but perception often equals reality in the political world, and anyone looking for reasons to cut funding from one area to help donors in another need look no further than the bonanza provided by fraudulent research.
Of course, there are deep sociology-of-knowledge reasons why the two most prominent recent accusations of fraud have been levied against behavioral psychologists. Although some may claim such fraud is more likely to occur in those areas, that is unlikely to be the case. Rather, this “outing” reflects a fundamental shift in the power of sub-fields within psychology: as cognitive scientists and neuroscientists prove ascendant, those with less power find fewer patrons to protect their resource base. A reflection of this is seen in the U.S. Congressional evisceration of funding for political science (unless it directly relates to national security) under the Coburn amendment. It would have been harder to eliminate funding entirely for, say, research into cancer or Alzheimer’s disease, since many people have those conditions, or love people who are afflicted by them, and so its value is more obvious.
So of course part of the problem is that political science does not do a great job of proving its added value to the public, a point that John Aldrich and Arthur Lupia are seeking to address through the American Political Science Association Presidential Task Force on public engagement currently underway. However, the other part of the problem is that work that appears “secret” can make it easier for observers to question its legitimacy, especially if such proprietary information makes it difficult or impossible for others to replicate the findings. Thus, while much attention in the experimental community has gone into registering designs prior to analysis, a procedure of great academic value in promoting integrity, more attention should go into posting data and .do files when articles are published, so that anyone can examine them, re-analyze them using different data or methods, or attempt to replicate them. While no single practice can eliminate fraud, or the fear and perception of it, greater transparency can make fraud more difficult to commit, increase the likelihood that violators will be caught, and increase the shame associated with violation by making violators’ identities public.
One additional strategy to increase transparency in another domain would go far toward reducing corruption in the discipline as well: reviewers’ identities should be public. Double-blind review may have been a reasonable idea in the past, but in the age of the internet it has become a single-blind procedure by default, as reviewers can search out a title or phrase and ascertain authorship through vestiges of work that remain online. This creates unfair advantages. Because pieces are often sent to experts in an author’s area, corrupt or competitive reviewers can seek to reject or delay work in order to gain an edge, steal ideas that they then strive to publish first, and otherwise derail the entire process of peer review. Sending work to those who are not in the same area is not a fair solution and does not help anyone. However, if reviewers had to make their identities public, they would be less able to get away with duplicitous reviews designed to delay or reject competitors’ work; similarly, if a reviewer then came out with a piece too similar to one they had reviewed, both editors and authors would be aware of this violation of intellectual property, and it would be easier to trace, providing a paper trail should recompense be demanded. Of course, this may make it harder to find reviewers in an already overtaxed system in which many people submit and far fewer review, but it would mean that those who do review would be held to a higher standard of integrity and accountability. The problem of reviewer fatigue needs a broader resolution to offset the reduction in reviewer numbers such a policy would inevitably produce; a higher rate of desk rejections appears the easiest solution. In the end, greater transparency in both production and review works to enhance honesty, improve prospects for funding, increase public and elite perceptions of value, and strengthen the scientific endeavor.
Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics.
Rose McDermott is a Professor of Political Science at Brown University and a Fellow of the American Academy of Arts and Sciences. She is past president of the International Society of Political Psychology. She is the author of three books, co-editor of two additional volumes, and author of over 100 articles, many on experiments and the experimental method.
I often wonder how much ‘making up data’ goes on within social science, especially in qualitative research (the kind I tend to do) in which interviewees usually request anonymity, making transparency and data sharing tricky. So it was great to read this post and I definitely agree we need to do more to ensure the legitimacy and integrity of work in the social sciences. However, I’m personally unpersuaded by the argument that revealing reviewers’ identities will help with this.
I’ve now reviewed several academic articles for journals which used this approach and I’ve found it pretty problematic. In two cases, for example, I was asked to review articles by very senior people in my field whom I know (and with whom I would hope to work in future). I didn’t feel it was reasonable to turn down the request to review the articles simply because I knew the authors (I knew them because we work on similar issues, so it seemed understandable that I’d been identified as a potential reviewer; I work in a pretty small field, and if I didn’t review the work of people I knew, I probably wouldn’t be reviewing much). Yet, because I knew them and because I knew they would know my identity as a reviewer, I felt some pressure to give a more positive review than I otherwise might have done. In both cases, I tried to review the articles as if the process was anonymous, but I was advised by colleagues and friends that I should ensure I wasn’t ‘too critical’ if I didn’t want to damage my professional links. In the end, I think my reviews were constructive and fair (I recommended both articles be published, but with some substantial revisions to address what I felt were major weaknesses). The reviews were not well received by the authors, and this process did, I believe, damage the (tentative) links I had with these authors. I don’t think I could have reviewed the articles differently but, in light of this, I’m now extremely hesitant about reviewing in this kind of open format.
If we all worked on a level playing field, sharing reviewers’ identities might function well. However, in a profession as hierarchical as academia, in which the permanent positions and grant applications of junior researchers often depend on the goodwill and sponsorship of more senior figures in their field, I can’t help but worry that open reviewing creates pressure, particularly for more junior academics, to give overly positive reviews (especially if there is any belief that the colleagues in question might be reviewing your articles or grant applications in the near future). I worry that this could lead to more, not less, academic fraud.
My own view is that it is the process of editing and processing reviews that needs to be substantially improved. In the past year, I have witnessed three examples of colleagues who received reviewer feedback consisting of two to five positive/constructive reviews accompanied by one other review that can only be classed as a personal/political attack, with little relevance to the work under review (sometimes making claims about the work in question that are obviously untrue to anyone who has read the work in full). At least some of these reviews probably fit the ‘corrupt or competitive reviewers’ category referred to in this blog, and yet they had the desired impact (all three pieces of work were rejected). Surely this is something editors and funders could do more to identify: if a reviewer’s comments are not based firmly on the work they have been asked to review, or if one reviewer’s comments contrast starkly with multiple others, there would seem to be good reasons for considering the review in question extremely carefully. Yet, too often (especially with grant applications, in which funds are inevitably limited) any negative review (no matter what the basis or validity of the claims) seems to ensure rejection, enabling the kind of corrupt/competitive behaviour described in this blog.
I agree with the tenor of Kat Smith’s comment that open reviewing can inhibit frank comment. Intuitively I felt this might be the case, and I am interested that she has provided a personal case history to this effect. The answer to unfair (anonymous) reviews is for the overseeing editor to exercise some judgement, and for reviewers to be required to identify potential conflicts of interest. I was an editor on a top journal for ten years, with one or two manuscripts coming in each day. First, I exercised a pretty stringent desk review that screened out half the submissions, in part on the grounds that sending weak or out-of-scope manuscripts out for review was a waste of the time of my precious pool of reviewers. Second, I did not rely on personal contacts for reviewers, but instead used the reference list of the manuscript being assessed. This almost always worked for quality manuscripts (which, after desk review, I hoped I had reduced the field to). Finally, I did not allow myself to be persuaded solely by the weight of reviewer opinion (although usually I was, because the material was almost always outside my immediate area of expertise), particularly if the manuscript seemed innovative (if also flawed in some technical respects) or if the reviewer or reviewers seemed to have an axe to grind (which was rare). In cases where irresolvable differences between a reviewer and an author persisted, and I felt that the manuscript was of quality and the “dispute” of merit, I would persuade the reviewer and the author to go public, with a critique and right of reply. This was very often highly instructive since, far from necessarily having some personal axe to grind, the reviewer was actually raising an issue of some merit that was not easy to resolve in the relevant field of scholarship.
One way of combating reviewer fatigue could be simply to raise the bar for publication. The current academic churnalism is a terrible waste of time: academics are forced to re-package and re-publish the same few ideas and insights again and again, and readers waste days looking through articles for a single original thought or salient new data point.