Yi Bu

June 6th, 2025

Why science needs better knowledge maps

Estimated reading time: 10 minutes

Science maps act as guides to the ever-changing world of scientific production. They help policymakers, academics and students understand where a field is headed, who’s leading the charge, and what ideas are linked. Yi Bu lays out his vision for how these knowledge maps might work, and why their flaws demand urgent attention.


Imagine you’re planning a road trip across a country you’ve never visited. You’d want a trustworthy map, right? One that shows not just highways but also backroads, rest stops, and points of interest. Now picture that same need, but for navigating the world of science. Every day, researchers publish thousands of papers, collaborate across borders and explore new frontiers. To make sense of it all, experts create “knowledge maps”—visual tools that show how ideas, researchers, and discoveries connect. But here’s the catch: how do we know these maps aren’t steering us toward dead ends or worse?

This blog post is about why that question matters and how we might answer it.

What are knowledge maps?

Think of a knowledge map as a simplified guide to a messy, ever-changing landscape. For example, a map might show which scientists often cite each other’s work (like friends who recommend each other’s favourite books). It could group papers by shared keywords, revealing trends like “climate change adaptation” or “AI ethics.” Or it might highlight universities or labs that collaborate frequently, like bustling hubs in a network.

These maps help everyone from policymakers to curious students answer big questions: where is this field headed? Who’s leading the charge? What ideas are linked?

Figure 1: Example of a science map on Information and Library Science using author co-citation analysis.

Source: Author’s journal article 
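To give a feel for how a map like Figure 1 is assembled, here is a minimal sketch of author co-citation analysis in Python. Everything in it is illustrative: the papers and author names are invented, and real studies draw on large bibliographic databases with far more careful cleaning. Dedicated tools such as VOSviewer and CiteSpace automate the same pipeline at scale.

```python
# A minimal sketch of an author co-citation map. The citation
# data below is a toy example, not taken from the article.
from collections import Counter
from itertools import combinations

import networkx as nx

# Hypothetical input: each entry is the set of authors that one
# paper cites together.
papers_cited_authors = [
    {"White", "Griffith", "Small"},
    {"White", "Small"},
    {"Griffith", "Small", "Garfield"},
    {"White", "Garfield"},
]

# Two authors are "co-cited" when the same paper cites both of
# them; the more often that happens, the stronger their link.
cocitations = Counter()
for cited in papers_cited_authors:
    for a, b in combinations(sorted(cited), 2):
        cocitations[(a, b)] += 1

# Build a weighted network: nodes are authors, edge weights are
# co-citation counts. Community detection then yields the
# clusters that a science map draws as neighbourhoods.
G = nx.Graph()
for (a, b), weight in cocitations.items():
    G.add_edge(a, b, weight=weight)

clusters = nx.community.louvain_communities(G, weight="weight", seed=42)
print(clusters)
```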

But creating these maps isn’t straightforward. For instance, if you tried to map a city using only social media check-ins, you might find useful clues, but you’d miss hidden neighbourhoods. Similarly, knowledge maps rely on data like citations or keywords. The choices made while analysing that data shape what the map shows (and hides).

Why some maps mislead more than they inform

Let’s say a map claims to show the “most important” research in renewable energy. But what if it’s biased toward older, well-known studies and misses groundbreaking new work? Or if it lumps together unrelated topics because of a quirk in the data? The stakes are high because flawed maps can misdirect funding, erase emerging fields, or amplify existing biases.

Consider this real-world example. Early maps of “AI ethics” underrepresented non-Western perspectives, framing the field around Silicon Valley priorities. Critical work on labour rights or racial bias in AI from researchers in Africa and India was initially excluded.

The trouble is, evaluating these maps is hard. Unlike a road map, there’s no GPS satellite to verify every detail. Experts in a field might disagree on what’s important. A map that looks right to a computer scientist might baffle a biologist. Plus, science changes fast, and today’s cutting-edge topic could be outdated next year.

Who’s holding the compass?

Behind every knowledge map are invisible forces shaping its boundaries:

Algorithmic opacity. Platforms like Google Scholar dominate map-making tools, but their ranking algorithms are trade secrets. What if they prioritise older, Western institutions or favour journals owned by publishers like Elsevier?

The language barrier. Most scientific papers recorded in international bibliographic databases such as Web of Science and Scopus are published in English, marginalising research in Mandarin, Spanish or Swahili. Maps built on this data risk missing climate solutions from smallholder farmers or traditional medical knowledge.

Who counts as an “expert”? When domain experts disagree (say, on whether AI ethics should prioritise regulation or innovation), whose vision gets coded into the map?

How do we check a map’s accuracy?

Here’s where things get creative. Researchers use a mix of strategies to test knowledge maps, much like you’d cross-check a travel guide with local advice:

Compare to “known landmarks”. Just as you’d trust a map that correctly marks the Eiffel Tower or Grand Canyon, knowledge maps can be tested against trusted benchmarks. For example, does the map highlight Nobel Prize-winning papers or famous scientists where experts expect them? Do keyword clusters match categories in well-respected databases, like medical research headings in PubMed? (A short sketch below shows one way to score this kind of agreement.)

Follow the money. Research funded by grants often reflects real-world priorities. If a map shows strong links between climate science papers and grants about renewable energy, that’s a good sign it’s capturing meaningful connections.

Ask the locals. Imagine showing your travel map to a tour guide. Similarly, domain experts can review knowledge maps and point out errors, like missing topics or odd groupings. Do cancer biologists agree that a map’s clusters represent their field accurately?

Crowdsource opinions. Sometimes, simpler feedback helps. Platforms like Amazon Mechanical Turk can gather input from many users. Tasks like “Find the AI experts on this map” or “Does this grouping make sense?” help spot usability issues.

None of these methods is perfect. Experts might disagree, and crowdsourcing struggles with niche topics. But together they build a clearer picture.
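To make the first of these checks concrete, here is a hedged sketch of the “known landmarks” comparison: score how well a map’s clusters agree with a trusted benchmark labelling, such as curated subject categories. The labels below are invented; only the shape of the check matters.

```python
# A sketch of benchmark validation for a knowledge map's clusters.
# The labels are made-up placeholders, not real data.
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# Benchmark categories for ten papers (e.g. from a curated
# database) versus the clusters the map placed them in.
benchmark_labels = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
map_clusters = [0, 0, 1, 1, 1, 1, 2, 2, 2, 0]

# Both scores run from about 0 (no better than chance) to 1
# (perfect agreement); low values flag maps whose groupings
# drift from what experts would recognise.
print("ARI:", adjusted_rand_score(benchmark_labels, map_clusters))
print("NMI:", normalized_mutual_info_score(benchmark_labels, map_clusters))
```

A single score is not a verdict, of course: a map can agree with the benchmark and still inherit that benchmark’s blind spots, which is exactly why the expert and crowdsourced checks above matter too.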

Why should non-experts care?

You don’t need to be a scientist to benefit from reliable knowledge maps. They matter for at least three reasons. First, transparency in science: maps show how ideas evolve and where certain voices might be overlooked. Second, better decisions: governments and universities use these tools to fund research, hire experts or design curricula, so accurate maps mean smarter investments. Third, learning and curiosity: ever fallen into a Wikipedia rabbit hole? Knowledge maps could guide self-learners through complex topics, becoming a visual Wikipedia for science.

Mapping the future with smarter tools

Today’s knowledge maps are like early paper road atlases: useful but limited. Tomorrow’s tools could be more like interactive GPS. AI assistants could scan millions of papers to update maps in real time, highlighting emerging fields like “quantum biology” or “AI ethics”. Mixed data sources, adding patents, news articles or even social media mentions, could show how research reaches the real world, like tracking how a lab discovery becomes a lifesaving drug. And personalised maps could one day let students, policymakers or entrepreneurs explore custom views tailored to their interests.

But again, none of this works without trust. Just as you wouldn’t use a navigation app that sends you off a cliff, we need ways to ensure these maps are reliable.

The bottom line: knowledge maps are more than pretty visuals. They’re tools for navigating the frontiers of human understanding. But like any tool, they need testing. By combining data checks, expert input and new technologies, we can create maps that guide us wisely instead of leading us astray.

For anyone who cares about science, innovation or lifelong learning, the message is simple—good maps don’t just show the way. They help us ask better questions.

***

This post draws on the author’s article, Towards the Assessment of Mapping Knowledge Domains, published in the Journal of Information Science.


The content generated on this blog is for information purposes only. This article gives the views and opinions of the author and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: nobeastsofierce on Shutterstock.


About the author

Yi Bu

Yi Bu is an Assistant Professor at the Department of Information Management, Peking University. His research interests include quantitative science studies, science of science, scholarly data mining and research policy.

Posted In: Research communication
