The academic community faces a significant problem in staying up to date with new technologies. Often the easiest option for researchers is not to engage rather than to try a new way of working. Andy Tattersall looks at the lack of adoption of digital technologies and argues that in academia the problem has often been a lack of translation: academics are advised how to use Twitter but rarely why. As a result, tools are used sporadically, in silos and incorrectly.

There have been an increasing number of technologies released in the last five years to help academics manage, communicate and deliver their research. Several have now established themselves within academia and can count hundreds of thousands of users, if not millions. Successes include Mendeley, ResearchGate, SlideShare and Prezi. Many of the first Web 2.0 technologies employed by researchers and teachers were not exclusive to the academic community, and whilst that has been a facilitator by reducing the learning curve, it has also been a barrier as the lines between the professional and the personal are blurred.

Even with the increasing number of new and bespoke academic technology start-ups that build on existing models, the idea of using something established like Twitter as a scholarly communication tool is still new to most academics. The majority still do not consider using SlideShare as a repository for their presentations, and when they do, they must also get their heads around copyright and Creative Commons images as alternatives to their reliance on Getty Images and the like. Take, for example, JoVE, which aids scientists' efforts to share and understand each other's experiments using video. For most academics this seems a radical change in approach, and for some an unnecessary one.

Given social and collaborative technologies in their current guise, i.e. Web 2.0 tools and, more recently, social media and the cloud, you would think the majority of academics would be taking every opportunity to use them, but the reality is that they are not. They are not even close to widespread adoption in some fields of research. This is because we are working within a system that predates the majority of those currently working in it. The journal impact factor was first conceived in 1955, and researchers have largely built their careers on publications, memberships and presentations.

In academia the problem has often been a lack of translation: academics are advised how to use Twitter but rarely why; they are told to upload to SlideShare but not how to embed it in a blog post; and most do not know how to write a blog post or why it could benefit their work. From the academic's perspective, all of these things take time, including the time to figure out how to use them, time they often do not have, and the benefits are not always immediate. For an academic moving from minimal use of technology to employing it across multiple aspects of their work, for collaborating, creating and communicating research, there is quite a learning curve. As a result, tools are used sporadically, in silos and incorrectly.

Image credit: Digital Divide, EEF (CC BY 3.0)

Consider what it was like for researchers moving from Web 1.0 to Web 2.0: from a world of static websites to ones that allow interaction, from hard drives to the cloud, from presenting at a conference to delivering a talk over the Web, with the tweets captured in Storify, the video on YouTube, and the slides uploaded to SlideShare and then embedded in a blog post with complementary informal text and post-presentation tweets. Web 2.0 is best described as a 'state of mind', not some kind of higher state of consciousness, but the clicking together of the principles of authorship, creation and sharing, embedding the whole process. For the Web 1.0 academic, that would be like going from first to fifth gear whilst heading up a steep hill: a lot of grind but no momentum. That is not to say these technologies are too clever for academics to understand, far from it, but understanding the wider context and connecting them requires support. We have to understand that the Web is just a means to an end for many academics. Their email and calendar are there, so are Google and the papers they read; anything else has to be proven to have a benefit.

As with research, teaching is going through a period of change, driven increasingly by technology. To take advantage of the many learning technologies available, often freely, academic institutions employ experts such as learning technologists. Apart from the nuts-and-bolts job of making sure virtual learning environments work properly, learning technologists also make the connections between technology and pedagogy. It makes sense that if the teaching community embraces this support mechanism for the modern classroom, the same should be happening in the modern research office.

Much discussion revolves around the problem of digital divides (see, for example, the work of the Tinder Foundation in Sheffield), where those unconnected to the Web do not have the same opportunities as those with a broadband connection. Does that digital divide extend, in the professional setting, to those capitalising on and understanding the issues relating to social media, Creative Commons, infographics, video, altmetrics and so on?

Amongst the many increasing demands on academics is staying up to date with new technologies, which is often treated as an 'as and when' job. This applies not just to niche technologies but to core tools as well, such as email, smart devices and office packages. Often the easiest option is not to engage rather than take a leap of faith and try a new way of working, for example Google Docs or Mendeley instead of Word or Reference Manager.

The reason often stated is that, given technology moves at such a pace, why invest in one tool when a better one may be around the corner? There is some truth in that, yet actually changing one's practice, even getting things wrong and picking the wrong tool, can have benefits. As the 'Web 2.0 state of mind' testifies, individual technologies become less important in themselves and instead become pathways to another output, collaborator or audience. If Microsoft had decided to scrap Word 15 years ago, many would have been left looking in the office stationery cupboard for pen and paper; now that matters less, with OpenOffice and Google Docs to name but a few alternatives.

The new wave of technologies for researchers shows no sign of slowing down, and past problems of too much choice will seem trivial now that there are several ways a scientist can manage a research project, publish it and disseminate it. The problem lies with those who so far have not engaged with these technologies, whether through choice or ignorance; many will at some point need to address this. We are still some way from seeing any form of critical mass within the academic community in the use of blogging, cloud-based working and social media, amongst other things. Yet there have been a few shifts that academics cannot ignore, open access being one of them, and MOOCs within the teaching community. Even these models are not set in stone; nobody can predict what the next ten years will look like in academia. They can hazard a guess, but it will be nothing more than that.

But it is almost a decade since Tim O'Reilly popularised the term 'Web 2.0', and still many academics have achieved an awful lot career-wise without ever really embracing it. That does not mean this will always be the way, as Web 2.0 has branched out into specialised academic tools, more often than not started by PhD students and early career researchers. These innovators are invariably part of a generation who cannot remember a world before the Web.

Even though some of these technologies will fall by the wayside, it does feel like we are at the dawn of a new technological revolution, driven by people who see academia operating in two time zones a full century apart. For academics to take advantage of it all requires some effort, as engaging with these technologies could in the short term prove hugely disruptive for those without support. For those who do apply the Web 2.0 state of mind, it could be beneficial once they set out their objectives and reasons for using technologies that could help shape how research is measured and communicated. Change is happening, and for individual researchers working in a Web 1.0 world it cannot happen overnight; for the most part it requires ongoing support. Without this, opportunities could be missed, and in the longer term core skills such as real-time collaboration over the Web and new forms of writing, reviewing and communicating will become harder to grasp, leaving some further behind than ever. The digital divide is not just between those connected to the Web and those who are not. It is greyer than that: it lies between those who take advantage of the Web, understanding not only its value but also the problems that come with it, and those who do not. For the later-stage career academic the benefits will be harder to quantify, but for those just starting out, it is better to master those tricky gear shifts now rather than grind them later.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Andy Tattersall is an Information Specialist at the School of Health and Related Research (ScHARR) at the University of Sheffield. His role is to scan the horizon for Web and technology opportunities relating to research, teaching and collaboration, and to maintain networks that support this. Andy has a keen interest in new ways of working employing altmetrics, Web 2.0 and social media, while paying close attention to the implications and pitfalls of using such advances. @andy_tattersall
