It has become a tradition over the last few years for us to take a look back at the past year’s most popular posts on the Impact blog. This year’s list features a diverse range of topics, from collaborative writing tools to the more theoretical implications of neoliberalism for research openness. Many thanks to all our contributors for creating and allowing us to share such excellent content.
Research collaboration now involves significant online communication. But sending files back and forth between collaborators creates redundant effort, causes unnecessary delays and often leaves people frustrated with the whole idea of collaboration. Luckily, there are many web-based collaborative writing tools, aimed either at the general public or specifically at academic writers, that can help. Christof Schöch surveys the tools available and presents some helpful tips on finding the right one for the job.
Twitter and blogs are not add-ons to academic research, but a simple reflection of the passion that underpins it.
The role of the academic humanist has always been a public one, however mediated through teaching and publication, argues Tim Hitchcock. As central means of participating in public conversations, Twitter and blogging just make good academic sense. Hitchcock looks at how these platforms are facilitating academic collaboration, teaching and public engagement. What starts as a blog ends as an academic output, and an output with a ready-made audience eager to cite it.
It is widely accepted that academic papers are rarely cited or even read. But what kind of data lies behind these assertions? Dahlia Remler takes a look at the academic research on citation practices and finds that while it is clear citation rates are low, much confusion remains over the precise figures and over the methods for conducting accurate citation analysis. In her investigation, Remler asks whether academics are able to answer these key questions. Expert evaluation has, however, rightly discredited the most overblown claims, which stem from embellished journalism.
Why do academics choose useless titles for articles and chapters? Four steps to getting a better title.
An informative title for an article or chapter maximizes the likelihood that your audience correctly remembers enough about your arguments to re-discover what they are looking for. Without embedded cues, your work will sit undisturbed in other scholars’ PDF libraries, or languish unread among hundreds of millions of other documents on the Web. Patrick Dunleavy presents examples of frequently used useless titles and advises using a full narrative title: one that makes your argument, conclusions or findings completely clear.
It’s the Neoliberalism, Stupid: Why instrumentalist arguments for Open Access, Open Data, and Open Science are not enough.
The Open Movement has made impressive strides in the past year, but do these strides represent genuine reform, or are they merely symptomatic of the further expansion and entrenchment of neoliberalism? Eric Kansa argues that it is time for the movement to broaden its long-term strategy: to tackle the need for wider reform in the financing and organization of research and education, and to oppose the all-pervasive trend of universities primarily serving the needs of commerce.
Academics in children’s picture books tend to be elderly men who work in science, called Professor SomethingDumb. Why does this matter? Melissa Terras presents the findings from her two-year search on the representation of academics and argues that these portrayals should be challenged. Such narrow stereotypes, presented and promulgated in these books, continue to percolate back to those who read the books, or have the books read to them.
Stand Up and Be Counted: Why social science should stop using the qualitative/quantitative dichotomy
Qualitative and quantitative research methods have long been treated as distinctly separate, but to what end? Howard Aldrich argues that this simple dichotomy fails to account for the breadth of collection and analysis techniques currently in use. Yet institutional norms and practices keep alive the implicit message that non-statistical approaches are somehow less rigorous than statistical ones.