Scholars, librarians, archivists, publishers and research funders met last week to discuss how to bring about change in scholarly communication given the influence of new technologies. Martin Fenner kicks off a series of posts on the event, running over the next week, by discussing how scholarly innovation is currently approached and what should be reconsidered to build a more sustainable scientific infrastructure.
The Beyond the PDF Conference took place last week in Amsterdam. Unfortunately I was unable to attend in person this time (I took part in the first Beyond the PDF in January 2011), but I watched the livestream of the Business Case panel discussion. How to pay for the development of new scientific infrastructure and tools is something I think about a lot, having moved away from academia to become a developer of scientific software last year. I start from three assumptions:
- there are a lot of great ideas out there to improve scholarly communication
- there is enough money out there to pay for improvements in scholarly communication
- we are frustrated because progress is much slower than we anticipate
If we have enough great ideas and enough money, but don’t see the results we expect, something must be going wrong. A simple answer would be that the people and organizations with the ideas are different from those with the money, but I don’t think that is the reason. My suspicion is that there is a deeper problem: the approach we take to scholarly innovation is broken. Below is how innovation is approached by the major players:
- individual scientists and/or software developers come up with great ideas, but don’t get past the prototype stage because of limited resources
- academic tools and infrastructure are built as part of a funded project (anywhere from 6 months to a few years), but there are no resources to turn this into a service that is persistent beyond the project
- publishers and large academic institutions have the resources to build these tools, but are often less innovative because of their size
- funders pay for projects (see above), but rarely for infrastructure, and they rarely get involved in innovative projects themselves
- commercial organizations can quickly bring great ideas to market (in particular small startups), but it is often unclear how their services are paid for in the long run
At the end of the day it seems that we have a lot of great ideas, but many of them never reach critical mass, and an even smaller number have long-term sustainability. I can think of a number of great projects that have never gained traction, and of a number of great tools and services where I have no idea how their development and operation are paid for. The idea of acquiring a large number of users at any cost and figuring out the business model later is popular with internet startups, but dangerous when we care about tools we still want to be using two years from now. Two projects that are not specific to science, but are important for science and have made this work, are Wikipedia and GitHub. From the long list of tools for scientists I would not pick Mendeley or figshare (both great services, but still in search of sustainability), but arXiv and Papers.
It also doesn’t help that most scientists are a conservative bunch when it comes to technology, and that the scientific market is fairly small compared to the overall number of users. Another big challenge is innovating in an open environment, i.e. making the innovation available to as many people as possible without barriers to access. Some of my personal conclusions from all this are the following:
- we should acknowledge that we have an innovation problem, and it is not simply solved by getting more money
- we also have a collaboration problem: too many people are doing similar things without talking to each other and working together
- scientific infrastructure and tools cost money. We need the right people to pay (ideally not the individual researcher), fair prices, and intelligent business models
- funders should reconsider how they pay for scientific infrastructure, as the project-based approach is broken
- large organizations (commercial, non-commercial and academic) should think about their approach to innovation, in particular how they support innovation outside of their organization
[An overview of the outcomes of the conference can be found here. A Storify of the tweets surrounding the event can be found here. Video from the session will be made available shortly here]
This was originally published on Martin’s blog Gobbledygook and is reposted with permission.
Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics.
Martin Fenner works as a medical doctor and cancer researcher at the Hannover Medical School Cancer Center in Germany. He writes about how the internet is changing scholarly communication. He is one of the organizers of the Science Online London conference and a member of the ORCID board of directors. He believes that open standards that enable collaboration between people and software tools will make the internet a friendlier and more productive place for science and scientists. Martin can be found on Twitter as @mfenner.
Thank you for the post, Martin. It is quite clear that the funding model for transitioning scholarship into the digital age is inefficient at best and completely broken at worst. We look to the funders to provide leadership and coordination, while the funders (as was clear in the evaluation session at BtPDF2) look right back at the researchers and institutions. But the model established for researchers and institutions is not a cooperative one. Unlike materials and pharmaceuticals, it is not clear what the commercialization prospects and pipelines are for many of these tools, particularly as the one industry that might step up (publishing) perhaps has little vested interest in changing the current system. New startups might, but if researchers value only articles and impact factors (as also came up in the evaluation session), it is hard to get new tools accepted. So we end up with thousands of partially realized and resource-starved tools and approaches. Which then get criticized in grant renewals. Which then leads to the next tool and next approach. And so it goes.