Andreas Pacher presents the Observatory of International Research (OOIR), a research tool that provides users with easy-to-use overviews of, and information about, whole fields of social science research. Reflecting on the advantages and limitations of other discovery tools and the potential for information overload, Andreas points to the utility of OOIR in producing search results that are both broad-based and tailored to specific academic interests.
Many research discovery tools are too demanding. They are useful only on the condition of extensive user engagement, requiring a prior selection of journals, topics, or related papers, sometimes even in specialized formats such as DOIs or BibTeX lists. Not only are they difficult to use for non-scholars (e.g. journalists working as science editors) and those who merely want a quick glance at a discipline’s current topics, but they also foster a culture of ‘research waste’ by filtering out research from outside a pre-selected scope of interest. Other tools may be easier to use, but they gather metadata from suboptimal sources (e.g. RSS feeds) and in some cases even risk including predatory sources.
OOIR (Observatory of International Research) is a new research discovery tool designed to solve these issues. It follows a simple concept: it automatically lists the latest papers published in a given social science discipline, supplemented with a ‘Trending Papers’ section based on Altmetric Attention Scores and other statistical data (such as citation-based journal interlinkages).
As it does not necessitate any prior selection, OOIR seeks to render research discovery quick and simple, and its usefulness extends to non-scholarly purposes. One may find the latest research in, say, Political Science or History without having to know the field’s ‘top journals’ or anything about DOIs. By breaking down these barriers to entry, OOIR seeks to aid the wider dissemination of social scientific findings.
Unlike other research discovery tools, OOIR is comprehensive (not narrowing down the papers according to users’ topics of interest) and yet limited (not risking the inclusion of predatory outlets but staying confined to SSCI-indexed journals); it fosters interdisciplinarity (by covering eight disciplines) and uses reliable metadata (by using CrossRef’s API rather than RSS feeds).
A culture of research waste?
Two keywords may aid us in understanding why research discovery tools are important: research waste and information overload. Keeping track of the latest insights can be burdensome. Assume that there are 146 traditional Sociology journals, and that Garfield’s Law of Concentration applies, which tells us that 20% of all journals accrue 80% of all citations in a scientific field. If one commits to staying up to date with all papers published in this 20% of journals in a given year, that still leaves 29 outlets, and if each publishes 35 papers a year, this results in over 1,000 articles to screen.
While this strategy mitigates the risk of ‘information overload’, it aggravates the culture of ‘research waste’. With one broad brush, it dismisses almost 120 journals, or over 4,000 papers per year, as irrelevant. Existing research discovery tools reflect similar selection and exclusion biases, rendering them unsuitable for a comprehensive view of a scholarly domain. These tools are either too narrow, too broad, too unreliable, or too demanding for this purpose.
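For readers who want to retrace the arithmetic, here is a minimal back-of-the-envelope sketch; the journal count and papers-per-journal figure are the illustrative assumptions used above, not measured values.

```python
# Back-of-the-envelope illustration of the figures above, assuming
# Garfield's Law of Concentration (20% of journals accrue 80% of citations).
TOTAL_JOURNALS = 146          # illustrative count of traditional Sociology journals
PAPERS_PER_JOURNAL_YEAR = 35  # rough assumption; real output varies widely

core_journals = round(TOTAL_JOURNALS * 0.20)            # -> 29 outlets to follow
core_papers = core_journals * PAPERS_PER_JOURNAL_YEAR   # -> 1,015 articles to screen

dismissed_journals = TOTAL_JOURNALS - core_journals               # -> 117 journals ignored
dismissed_papers = dismissed_journals * PAPERS_PER_JOURNAL_YEAR   # -> 4,095 papers per year

print(f"Followed: {core_journals} journals, {core_papers} papers/year")
print(f"Dismissed: {dismissed_journals} journals, {dismissed_papers} papers/year")
```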
Too narrow are user-tailored recommendations whose algorithms simply spit out titles exhibiting word similarity to already-read articles. Personalized suggestions from Mendeley or ResearchGate follow this principle.
Other tools are too broad, since they include almost any topically proximate journal. Given the festering problem of predatory journals, this all-inclusive approach does not always seem wise. The Philosophy Paperboy, for instance, covers 563 journals, far more than the 190 indexed in Web of Science’s ‘Philosophy’ category.
Furthermore, some discovery instruments harvest their data from RSS feeds, but this source of scholarly metadata can be questionable: RSS feeds do not follow monitored standards and their timeliness is uncertain. Initiatives such as CrimPapers or JournalismResearchNews take this approach, but they would be more reliable if they directly tapped the main repository of scholarly metadata, CrossRef.
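By way of illustration, here is a minimal sketch of what tapping CrossRef directly can look like: the public REST API lets one request a journal’s most recently published works in a consistent, machine-readable format. The ISSN below is a placeholder, and the snippet assumes CrossRef’s standard response structure.

```python
# Minimal sketch: pull a journal's latest papers from CrossRef's public REST API
# (https://api.crossref.org/) instead of relying on an RSS feed.
import requests

ISSN = "0000-0000"  # placeholder; substitute the ISSN of an SSCI-indexed journal
url = f"https://api.crossref.org/journals/{ISSN}/works"
params = {"sort": "published", "order": "desc", "rows": 5}

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

for item in response.json()["message"]["items"]:
    title = (item.get("title") or ["(untitled)"])[0]
    print(item["DOI"], "-", title)
```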
Finally, most tools require (sometimes extensive) user engagement: one needs to know with great precision what one is seeking. They are perfect for targeted literature searches, but impractical for simply getting to know the recent trends in a given domain. This especially raises the bar for non-scholars, who may not be able to provide a preconceived selection of keywords (e.g. Google Scholar, ScienceOpen), a prior choice of specific journals (e.g. Researcher-App), or a predefined list of DOIs (e.g. CitationGecko).
OOIR’s Limitations: Journal Selection and Metadata Quality
OOIR, on the other hand, aims to be helpful without necessitating a preconceived idea of what one is looking for, other than the latest research in a given field. Some limitations should nevertheless be considered.
First, journal selection is based on Web of Science’s SSCI (Social Sciences Citation Index). This reliance is convenient, since it absolves OOIR from picking or excluding journals based on arbitrary criteria. However, it also omits potentially key outlets, especially newer ones in a field.
Secondly, OOIR gathers its main data from CrossRef, but CrossRef is not universal. Of the 850 journals covered, OOIR cannot follow 119 outlets because they do not provide machine-readable metadata to CrossRef. (If one excludes Law, the number of journals left out falls to 56 out of 700.)
Moreover, CrossRef data is perfectly adequate if one merely wants the title, journal, and DOI of each article, but scholarly metadata beyond such basic information is inconsistent. For instance, not all publishers make their references public; this currently affects 261 out of the 850 journals covered. As a result, OOIR’s citation statistics are necessarily of limited quality. OOIR also refrains from listing authors and their affiliations, as such metadata is far from uniform and institutional affiliations are frequently missing. OOIR is thus highly sympathetic to the cause of I4OC (the Initiative for Open Citations), ROR (the Research Organization Registry, a project developing unique identifiers for research organizations), and Metadata 2020 (a collaboration advocating for richer research metadata).
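To make the gap concrete, a hedged sketch: a CrossRef work record only contains a reference list when the publisher has deposited open references, so a simple presence check shows whether citation data is available for a given paper (the DOI below is a placeholder).

```python
# Sketch: check whether open references have been deposited for one work.
# CrossRef only exposes the "reference" field when the publisher makes references public.
import requests

DOI = "10.1000/xyz123"  # placeholder DOI
response = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
response.raise_for_status()
work = response.json()["message"]

if "reference" in work:
    print(f"{DOI}: {len(work['reference'])} open references deposited")
else:
    print(f"{DOI}: no open references available via CrossRef")
```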
Almost every research discovery tool contains impressive functions, but most of them are useful only for targeted literature searches based on precise preconceptions. OOIR seeks to offer something different, rendering access to scholarly trends across the social sciences quick and simple. In so doing, it contributes to the efficiency of social science research, supports the wider dissemination of social science findings, and opens up research discovery tools to a new audience.
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
Image credit: pixel2013 via pixabay
Great initiative, Andreas; you have certainly put your finger on a central problem for all researchers: how to keep up with the literature. But there are some limitations in what you are providing. First, for any researcher, using Altmetrics to indicate trending is not ideal. If you are a researcher working on kidney disease, you want to find the latest papers on kidney disease whether or not people are reading them.
More fundamentally, tracking papers from the journals in which they are published, or from a collection of eight major disciplines, is not sufficiently granular. The approach researchers took 30 years ago, visiting the library and browsing the contents lists of the most reputable journals (which is essentially what you are providing here, albeit in digital form), is utterly outmoded. There are too many journals for that to be possible. What is needed is a tool that can identify relevance at an article level, and do it automatically, based on the preferences you supply. There are such tools available; UNSILO is one of them, but there are others.
Thanks a lot, Michael, for your kind words!
I fully agree with your arguments, especially if the premise is that one really only looks for a narrower range of papers relevant to one’s stated preferences. Many research discovery tools are quite suitable for this purpose (though far from perfect, in my experience; UNSILO may be more interesting, I have not tried it out yet, thanks for the recommendation).
OOIR wants to achieve something different: scholars may not always feel content staying confined within their own subfield. For instance, my subfield of interest is diplomacy, but I also want to know what Political Science in general is publishing (and not only in its best-ranked journals). Sometimes it may even be interesting for me to peek at the latest papers in Geography journals. I find OOIR neither too granular nor too general because it only follows a specific set of ‘tested’ journals, namely those indexed in Web of Science’s Journal Citation Reports (SSCI). This guarantees that the papers listed in OOIR can be expected to be of (more or less) reliable quality.
You are right that Altmetric is imperfect, and I originally hesitated to implement it. However, I found that Altmetric sufficiently indicates (as a mere heuristic) what the scholarly community is interested in. So far, I know of no other proxy that would reveal scholarly trends, especially in disciplines I am ignorant about (e.g. History). Altmetric also offers a friendly API and aesthetic visualizations, which was a big plus.
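For the curious, the kind of lookup involved is straightforward; a minimal sketch using Altmetric’s public details endpoint follows (the DOI is a placeholder, and the endpoint returns a 404 when no attention has been tracked for a paper).

```python
# Sketch: look up the Altmetric Attention Score of a single paper by DOI.
import requests

DOI = "10.1000/xyz123"  # placeholder DOI
resp = requests.get(f"https://api.altmetric.com/v1/doi/{DOI}", timeout=30)

if resp.status_code == 200:
    print(f"{DOI}: Altmetric Attention Score = {resp.json()['score']}")
else:
    print(f"{DOI}: no Altmetric attention tracked for this DOI")
```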
Overall, I agree with your general argument: Research discovery remains improvable!