
October 15th, 2013

Universities can improve academic services through wider recognition of altmetrics and alt-products.


As altmetrics gain traction across the scholarly community, publishers and academic institutions are seeking to develop standards to encourage wider adoption. Carly Strasser provides an overview of why altmetrics are here to stay and how universities might begin to incorporate altmetrics into their own services. While this process might take some time, institutions can begin by encouraging their researchers to recognize the importance of all of their scholarly work (datasets, software, etc.).

Have you heard of “altmetrics”? If not, you will soon. A movement is descending upon academia that will change the way we consider research outputs and researcher impact. Like any good movement, there is still much debate about what altmetrics means, and how it relates to “article-level metrics” (ALMs). But for the purposes of this blog post, here’s a summary:

Currently the metrics for assessing researcher impact focus on publications. In particular, assessment relies on the impact factors of the journals in which researchers publish, and on how many citations those publications garner. There are myriad reasons why this is a problematic system (summarized here, here, and here), but let’s continue with the assumption that the current system is not adequate. Altmetrics focuses on broadening both the things we measure and how we measure them. For instance, article-level metrics (ALMs) report on aspects of the article itself, rather than the journal in which it can be found. ALM reports might include the number of article views, the number of downloads, and the number of references to the article in social media such as Twitter. In addition to measuring the impact of articles in new ways, the altmetrics movement is also striving to expand which scholarly outputs are assessed – rather than focusing on journal articles, we could also be giving credit for other outputs such as datasets, software, and blog posts.

Article-level metrics at PLOS (Image credit: Duncan Hull CC-BY)
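To make the ALM idea concrete, here is a minimal sketch of fetching metrics for a single article from the PLOS ALM application's REST API. The endpoint, parameters, and response fields are assumptions based on the v3 API, and the DOI and API key below are placeholders – treat this as an illustration rather than a definitive recipe.

```python
import requests

# Minimal sketch of querying the PLOS ALM application for one article's
# metrics. Endpoint, parameters, and response fields are assumptions
# based on the v3 API; the DOI and API key are placeholders.
ALM_ENDPOINT = "http://alm.plos.org/api/v3/articles"

def fetch_alm(doi, api_key):
    """Fetch summary article-level metrics for a single DOI."""
    response = requests.get(
        ALM_ENDPOINT,
        params={"api_key": api_key, "ids": doi, "info": "summary"},
    )
    response.raise_for_status()
    article = response.json()[0]  # assumed: one record per requested DOI
    return {
        "title": article.get("title"),
        "views": article.get("views"),
        "shares": article.get("shares"),
        "citations": article.get("citations"),
    }

print(fetch_alm("10.1371/journal.pone.0000000", api_key="YOUR_KEY"))
```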

Altmetrics (and specifically ALMs) have been around since 2007, and adoption of the concept has picked up dramatically in the last two years. The field is exploding, and great tools like the PLOS ALM application, Plum Analytics, Altmetric, and Impact Story are gaining traction thanks to buy-in and participation from organizations like CrossRef and ORCID. The Alfred P. Sloan Foundation has funded two altmetrics-focused workshops run by the open-access publisher (and leader in ALM) PLOS, the second of which took place last week.

As altmetrics gain traction, there is growing concern that we should be thinking about standards or best practices to ensure both credibility and adoption. Enter the National Information Standards Organization (NISO). These folks are partly responsible for all kinds of standards that you probably interact with in your daily life: ISSNs, DOIs, and URLs are among their most widely known outputs. They are an obvious organization to take up standards and best practices for altmetrics; as such, the Alfred P. Sloan Foundation funded a meeting for the NISO Alternative Assessment Metrics Project in conjunction with the PLOS ALM Workshop last week. This meeting was the first of three information-gathering workshops to determine how best to proceed in establishing altmetrics standards.

The Altmetrics Workshops

Last week I attended both the NISO and PLOS workshops at Fort Mason in San Francisco, primarily as a representative of researchers and the universities that support them. In my capacity as a data curation specialist at the California Digital Library (CDL), I think about open science, scholarly communication, data sharing, and data stewardship in the context of the University of California’s 10 campuses and thousands of researchers. I keep an eye on trends and how they might affect researchers and alter how we communicate science; the emerging field of altmetrics certainly fits the bill.

The NISO and PLOS Altmetrics workshops were held at Fort Mason in San Francisco, an iconic location on the bay with views of the Golden Gate Bridge and Alcatraz Island. Fort Mason served as a refugee camp (pictured here) after the 1906 Earthquake and Fire.

In this role, I presented the work of the CDL’s UC Curation Center (UC3 for short) to the NISO Workshop attendees. The UC3 group provides the services below:

  • EZID identifier service, which distributes unique, persistent identifiers such as DOIs (see the sketch after this list)
  • DataUp tool for helping researchers manage, describe, and share their tabular datasets
  • DMPTool for helping researchers create data management plans for their funders
  • Merritt repository for long-term preservation of, and access to, digital materials (including data)
  • Web Archiving Service (WAS) for preserving, searching, and analyzing websites
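To illustrate the first item, here is a rough sketch of minting a DOI through EZID's HTTP API (ANVL-formatted metadata posted over HTTPS with basic authentication). The shoulder and credentials shown are EZID's documented test values as I understand them, and the metadata fields are placeholders; consult the EZID API documentation before relying on any of this.

```python
import requests

# Rough sketch of minting a DOI via EZID's HTTP API: ANVL metadata
# posted over HTTPS with basic auth. The shoulder and credentials are
# EZID's documented *test* values (an assumption); the metadata is a
# placeholder. Real ANVL also requires escaping special characters,
# which this sketch skips.
EZID_BASE = "https://ezid.cdlib.org"

def mint_doi(username, password, shoulder, metadata):
    """Ask EZID to mint a new identifier under the given DOI shoulder."""
    anvl = "\n".join(f"{k}: {v}" for k, v in metadata.items())
    response = requests.post(
        f"{EZID_BASE}/shoulder/{shoulder}",
        data=anvl.encode("utf-8"),
        auth=(username, password),
        headers={"Content-Type": "text/plain; charset=UTF-8"},
    )
    response.raise_for_status()
    # EZID replies with e.g. "success: doi:10.5072/FK2..."; keep the DOI.
    return response.text.split()[1]

doi = mint_doi(
    "apitest", "apitest", "doi:10.5072/FK2",
    {
        "datacite.title": "Example dataset",
        "datacite.creator": "Strasser, Carly",
        "_target": "https://example.org/dataset/1",
    },
)
print(doi)
```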

The question I posed in my NISO presentation was this: What do altmetrics look like for our university-focused services? We don’t publish journal articles, so what do altmetrics look like for our “alt-products” (hat tip to Martin Fenner of PLOS), such as websites in WAS, data in Merritt, or publicly shared data management plans in the DMPTool?

The answer is that “it depends”. We could provide basic altmetrics, like view counts and download counts, for the digital objects we store. A more complex offering would be to support additional metadata collected from users of our materials, such as commenting and annotation. The most challenging implementation would involve integrating services such as Plum Analytics, Altmetric, or Impact Story with our own.
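For the basic tier, the mechanics are straightforward enough to sketch: tally each view and download event against the object's persistent identifier and expose the totals. The snippet below is a hypothetical illustration, not part of any UC3 service.

```python
from collections import Counter

class ObjectMetrics:
    """Hypothetical view/download tallies for repository objects,
    keyed by a persistent identifier (e.g. a DOI or ARK).
    An illustration only -- not a UC3 API."""

    def __init__(self):
        self.views = Counter()
        self.downloads = Counter()

    def record_view(self, object_id):
        self.views[object_id] += 1

    def record_download(self, object_id):
        self.downloads[object_id] += 1

    def report(self, object_id):
        return {
            "id": object_id,
            "views": self.views[object_id],
            "downloads": self.downloads[object_id],
        }

metrics = ObjectMetrics()
metrics.record_view("ark:/99999/example")      # placeholder identifier
metrics.record_download("ark:/99999/example")
print(metrics.report("ark:/99999/example"))
```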

Why haven’t we done anything yet?

There are a few roadblocks to our taking action and implementing altmetrics within the UC3 group, including limited staff resources and confusion over where to begin. But the major hurdle for us is bureaucracy. The University of California system is very large, and has many (many) stakeholders in this space. We at the CDL can’t make our decisions in isolation since what we do affects university libraries, data centers, IT departments, research support offices, and the researchers themselves. The decision-making process involves many (many) individuals at various levels in the UC system, which is a strength (ensures our work is well-received and relevant) but also a weakness – unlike academic publishing companies that are at liberty to be nimble and responsive to community needs, we work on longer time scales.

How should academic institutions proceed in considering altmetrics for alt-products?

First, we should all recognize that this is a new and rapidly developing field of research and development, which means we should expect opinions and implementations to vary widely. Second, we should start encouraging campus researchers to recognize the importance of all of their scholarly work (datasets, software, etc.) by taking advantage of services like Impact Story and figshare. We should also strongly encourage researchers to sign up for an ORCID ID – it is proving to be the emerging favorite for identifying researchers’ work on the web. And finally, we should be open-minded. Few would disagree that the current assessment scheme for academic researchers is flawed, and the altmetrics movement is looking for solutions to remedy these flaws. Altmetrics is NOT all about Twitter and Facebook, despite what many might perceive when first hearing about it. It IS about recognizing that a researcher is more than the sum of their citations.
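On the ORCID point: once a researcher has an ID, their works become listable by machines as well as people. The sketch below assumes ORCID's public REST API and its JSON response shape – the version, path, and fields are my assumptions, and the ID shown is ORCID's well-known example record.

```python
import requests

# Sketch of listing a researcher's works by ORCID ID via the public
# ORCID API. API version, path, and response fields are assumptions;
# the ID below is ORCID's documented example record.
def list_works(orcid_id):
    response = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/works",
        headers={"Accept": "application/json"},
    )
    response.raise_for_status()
    for group in response.json().get("group", []):
        summary = group["work-summary"][0]  # first summary per work group
        yield summary["title"]["title"]["value"]

for title in list_works("0000-0002-1825-0097"):
    print(title)
```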

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Carly Strasser is a data curation specialist at the California Digital Library, part of the University of California system. She has a PhD in Biological Oceanography, which informs her work on helping researchers better manage and share their data. She is involved in development and implementation of many of the UC Curation Center’s services, including the DMPTool (software to guide researchers in creating a data management plan) and DataUp (an application that helps researchers organize, manage, and archive their tabular data).

Posted In: Academic communication | Academic publishing | Citations | Impact
