Andreas Pacher presents the Observatory of International Research (OOIR), a research tool that provides users with easy to use overviews and information for whole fields of social science research. Reflecting on the advantages and limitations of other discovery tools and the potential for information overload, Andreas points to the utility of OOIR in producing search results that are both broad based and tailored to specific academic interests.
Many research discovery tools are too demanding. They are useful only if users engage extensively, requiring a prior selection of journals, topics, or related papers, sometimes even in specialized formats such as DOIs or BibTeX lists. Not only are they difficult to use for non-scholars (e.g. journalists working as science editors) and for those who merely want a quick glance at a discipline’s current topics, but they also foster a culture of ‘research waste’ by filtering out research from outside a pre-selected scope of interest. Other tools may be easier to use, but they gather metadata from suboptimal sources (e.g. RSS feeds) and in some cases even risk including predatory sources.
OOIR (Observatory of International Research) is a new research discovery tool designed to solve these issues. It follows a simple concept: it automatically lists the latest papers published in a given social science discipline, supplemented with a ‘Trending Papers’ section based on Altmetric Attention Scores and other statistical data (such as citation-based journal interlinkages).
Requiring no prior selection, OOIR seeks to render research discovery quick and simple, and its usefulness extends to non-scholarly purposes. One can find the latest research in, say, Political Science or History without having to know the field’s ‘top journals’ or anything about DOIs. By breaking down these barriers to entry, OOIR seeks to aid the wider dissemination of social scientific findings.
Unlike other research discovery tools, OOIR is comprehensive (not narrowing down the papers according to users’ topics of interest) and yet limited (not risking the inclusion of predatory outlets but staying confined to SSCI-indexed journals); it fosters interdisciplinarity (by covering eight disciplines) and uses reliable metadata (by using CrossRef’s API rather than RSS feeds).
A culture of research waste?
Two keywords may aid us in understanding why research discovery tools are important: research waste and information overload. Keeping track of the latest insights can be burdensome. Assume that there are 146 traditional Sociology journals, and that Garfield’s Law of Concentration applies, which tells us that 20% of all journals accrue 80% of all citations in a scientific field. Committing to staying up to date with all papers published in this 20% of journals would still leave us with 29 outlets; if each publishes 35 papers a year, that results in over 1,000 articles to screen.
While this strategy mitigates the risk of ‘information overload’, it aggravates the culture of ‘research waste’. With one broad brush, it dismisses almost 120 journals or 4,200 papers per year as irrelevant. Existing research discovery tools reflect similar selection and exclusion biases, rendering them unsuitable for a comprehensive view into a scholarly domain. The tools are either too narrow, too broad, too unreliable, or too demanding for this purpose.
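As a quick sanity check, the back-of-the-envelope arithmetic above can be reproduced in a few lines. The 146-journal count, the 80/20 split, and the 35 papers per journal are the figures assumed in the text, not empirical data:

```python
total_journals = 146          # traditional Sociology journals (the article's figure)
core_share = 0.20             # Garfield's Law: ~20% of journals draw ~80% of citations
papers_per_journal = 35       # assumed annual output per journal

core = round(total_journals * core_share)        # "core" outlets to follow
core_papers = core * papers_per_journal          # articles to screen per year
excluded = total_journals - core                 # journals dismissed as irrelevant
excluded_papers = excluded * papers_per_journal  # papers discarded per year

print(core, core_papers, excluded, excluded_papers)
# 29 core outlets, over 1,000 articles to screen, ~117 journals (~4,100 papers) dismissed
```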
Too narrow are user-tailored recommendations, whose algorithms simply spit out titles exhibiting word similarity to already-read articles. Personalized suggestions from Mendeley or ResearchGate follow this principle.
Other tools are too broad, since they include almost any topically proximate journal. Given the festering problem of predatory journals, this all-inclusive approach does not always seem wise. The Philosophy Paperboy, for instance, covers 563 journals, far more than the 190 indexed in Web of Science’s ‘Philosophy’ category.
Furthermore, some discovery instruments harvest their data from RSS feeds, but this source of scholarly metadata can be questionable: RSS feeds do not follow monitored standards, and their timeliness is uncertain. Initiatives such as CrimPapers or JournalismResearchNews take this approach, but they would be more reliable if they directly tapped the main repository of scholarly metadata, CrossRef.
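To illustrate what tapping CrossRef directly looks like, here is a minimal Python sketch against CrossRef’s public REST API. The helper functions and the example ISSN (0002-9602, the American Journal of Sociology) are illustrative choices for this post, not part of OOIR’s actual codebase:

```python
import json
import urllib.request

def latest_works_url(issn: str, rows: int = 5) -> str:
    """Build a CrossRef REST API query for a journal's most recently published works."""
    return (f"https://api.crossref.org/journals/{issn}/works"
            f"?rows={rows}&sort=published&order=desc")

def titles_and_dois(response: dict) -> list:
    """Extract (title, DOI) pairs from a parsed CrossRef works response."""
    items = response["message"]["items"]
    return [((item.get("title") or ["(untitled)"])[0], item["DOI"])
            for item in items]

# Live usage (requires network access):
#   data = json.load(urllib.request.urlopen(latest_works_url("0002-9602")))
#   for title, doi in titles_and_dois(data):
#       print(title, "| https://doi.org/" + doi)
```

Unlike an RSS feed, the API response arrives as structured, standardized JSON with DOIs attached, which is what makes the CrossRef route more reliable for aggregators.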
Finally, most tools require (sometimes extensive) user engagement: one needs to know with great precision what one is seeking. They are perfect for specific literature searches, but impractical for simply getting to know the recent trends in a given domain. This especially raises the bar for non-scholars, who may not be able to provide a preconceived selection of keywords (e.g. Google Scholar, ScienceOpen), a prior choice of specific journals (e.g. Researcher-App), or a pre-given list of DOIs (e.g. CitationGecko).
OOIR’s Limitations: Journal Selection and Metadata Quality
OOIR, on the other hand, aims to be helpful without requiring any preconceived idea of what one is looking for, other than the latest research in a given field. Some limitations should nevertheless be considered.
First, journal selection is based on Web of Science’s SSCI (Social Science Citation Index). This reliance is convenient, since it absolves OOIR from personally picking or excluding journals based on arbitrary criteria. However, it also omits potential key outlets, especially if they are newer in the field.
Secondly, OOIR gathers its main data from CrossRef, but CrossRef is not universal. Out of a total of 850 journals covered, OOIR cannot follow 119 outlets because they do not provide machine-readable metadata to CrossRef. (Excluding Law, the number of journals left out falls to 56 out of 700.)
Moreover, CrossRef data are reliable if one merely wants the title, journal, and DOI of each article, but scholarly metadata beyond such basic information are inconsistent. For instance, not all publishers make their references public; this currently affects 261 of the 850 journals covered. As a result, OOIR’s citation statistics are necessarily of limited quality. OOIR also refrains from listing authors and their affiliations, as such metadata are far from uniform and institutional affiliations are frequently missing. OOIR is thus highly sympathetic to the cause of I4OC (the Initiative for Open Citations), ROR (the Research Organization Registry, a project developing unique identifiers for research organizations), and Metadata 2020 (a collaboration advocating for richer research metadata).
Almost every research discovery tool contains impressive functions, but most are useful only for specific literature searches with precise preconceptions. OOIR seeks to offer something different, rendering access to scholarly trends across the social sciences quick and simple. In so doing, it contributes to the efficiency of social science research, supports the wider dissemination of social science findings, and opens up research discovery tools to a new audience.
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
About the author
Andreas Pacher initiated the Observatory of International Research (OOIR), a website which tracks social science journals in eight categories (Anthropology, Area Studies, Communication, Geography, History, Law, Political Science and Sociology) to continually list their latest papers. He studied law and international relations in Vienna (University of Vienna), Paris (SciencesPo), and Shanghai (Fudan University), and works in the public sector of the Republic of Austria (although OOIR is a private project). You can follow OOIR at @ObserveIR.