
Aurélie Jean

Guillaume Sibout

Mark Esposito

Terence Tse

March 28th, 2025

The generative AI bubble is changing how we see the world

Estimated reading time: 5 minutes

As if social media filter bubbles weren’t enough, we have now reached the age of “generative bubbles”, formed when users engage with generative AI in a narrow or skewed way. Aurélie Jean, Guillaume Sibout, Mark Esposito and Terence Tse write that generative bubbles prevent us from adopting a thoughtful approach to using tools such as ChatGPT, and so we become confined and shaped by how we interact with these technologies.


When generative AI burst onto the scene in November 2022, it came with exaggerated claims about what artificial intelligence can do, including mastering consciousness and emotions. Less discussed, but certainly more important, are the consequences of genAI for the content we consume.

GenAI can give us a distorted perception of the world, with disinformation and misinformation reaching another level. Up until now, we had been worried about filter bubbles. The generative bubble, however, is overtaking them, with more powerful mechanisms and more consequential impacts.

First, filter bubbles

The expression “filter bubble” was coined in 2011 by the American entrepreneur Eli Pariser to describe the filtered information each individual receives on social media through algorithmic recommendations.

A learning algorithm categorises social media users into different classes, based on statistical similarities in their static and dynamic profiles, including their behaviour on the platform (the accounts they follow, the content they like and comment on, the sentiment of their posts, etc.). A given user usually sees content that other users in their class like.
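
To make this mechanism concrete, here is a minimal sketch of the kind of behaviour-based clustering such a recommendation system might rely on. The feature names, the choice of k-means and the “recommend what the rest of the class likes” rule are illustrative assumptions, not a description of any particular platform.

    # Illustrative sketch only: cluster users by behavioural features and
    # recommend content that is popular within each user's class. The feature
    # names, k-means and the popularity rule are assumptions, not any
    # platform's actual design.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Hypothetical static and dynamic profile features per user:
    # [accounts followed, likes per day, comments per day, average post sentiment]
    profiles = rng.normal(size=(1000, 4))

    # Group users into a fixed number of classes by statistical similarity
    features = StandardScaler().fit_transform(profiles)
    classes = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)

    # Hypothetical engagement matrix: which of 50 pieces of content each user liked
    likes = rng.random((1000, 50)) < 0.1

    def recommend(user: int, top_k: int = 5):
        """Return the content most liked by other users in the same class."""
        same_class = classes == classes[user]
        popularity = likes[same_class].sum(axis=0)
        return np.argsort(popularity)[::-1][:top_k]

    print(recommend(0))  # the user mostly sees what their class already likes

If the classes are fixed once and for all, as in this toy example, every user keeps seeing what their class already likes, which is precisely the rigidity described below.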

When poorly designed, this recommendation system can become rigid. The size and shape of each class don’t change over time and so users never move from one class to another. Classes thus become bubbles as they lock users’ minds and filter information based on what other users in the same group are seeing.

The filter bubble amplifies the propagation of fake news and conspiracy theories because of confirmation bias. When you are placed in a class where 90 per cent of users believe the earth is flat, you see algorithmically recommended content reinforcing that belief.

Now, generative bubbles

Generative AI has now created another kind of bubble: content (often shared on social media) generated by algorithms embedded in chatbot technologies such as Gemini, ChatGPT, Le Chat, Copilot or Jasper. A key distinction is that the quality of the generative content, including its precision and accuracy, depends strongly on the quality of the requests (or prompts) given by each user. Whereas in the filter bubble algorithms filter the content a person receives, in the generative bubble users filter, limit and restrict themselves.

The two bubbles are conceptually similar, as they both limit the diversity of perspectives and ideas, but they differ in their underlying mechanisms. The filter bubble refers to being confined to an information ecosystem filtered by algorithms developed for social networks, search engines or online platforms. What we call a generative bubble describes a user’s tendency to engage with a generative tool in a narrow or skewed way, often due to habits, limited skills or knowledge, or the technology’s design. The generative bubble prevents us from adopting a thoughtful approach to using AI tools, and so we become confined and shaped by how we interact with these programmes.

In this sense, the filter bubble generates an external form of discrimination by choosing the content available to users. In contrast, the generative bubble introduces an internal form of discrimination rooted in how users engage with the AI tool and apply its features. This distinction between external and internal discrimination highlights how these two “bubbles” influence user experience and decision-making. 

Generative AI thus leads to a double discrimination: one due to algorithmic biases in its design, the other due to the way it is used. We call the latter adoption bias. Education level, culture and age influence how people use generative AI, that is, how they write prompts. For example, some users may write overly vague queries, fail to define specific criteria to guide the requested content, never experiment with semantic variations of their prompts, or rely on the generated results without questioning the suggestions they receive. This is not limited to individuals with low education or knowledge levels.

Even educated users may not fully grasp all the technical or semantic nuances of generating an algorithmic response. Experts in their respective fields have specific deep-rooted thought patterns in their methods of reasoning, and therefore in how they formulate their queries in a generative AI program. These thought patterns, or biases, can also vary significantly depending on a country’s culture, history, entrepreneurial mindset and the private or public institutions that govern them.

Fighting the generative bubble

The filter bubble is not inevitable. It can be avoided with well-designed AI governance comprising best practices from ideation (formulating the problem) to usage. The idea is to create guidelines on how to reflect on specifications and worst-case scenarios, or how to define the sources of data to collect. Tests should be run on the data used to train and validate the algorithm, both once it is developed and once it is in use (back-testing). Finally, information must be provided to users about the technology, such as what type of AI it is (conditional rules, machine learning algorithms, mathematical equations), what kind and sources of data are used, and what the algorithm does in practice (recommending, predicting, personalising, classifying, generating, etc.).

Governance must consider additional components to ensure the end user understands the ins and outs of the generative technology and how to use it. To that end, here are a few primary solutions to explore:

Inform users about adoption bias

Make them aware of biases based on age, education level or native language (when the primary language used in the generative AI program is different). This allows them to reconsider their position, question their usage or even put questions to the technology provider.

Guide users

Implement methods to improve users’ skills with the technology: how to formulate a query, how to interpret a generated response and verify its accuracy, and which pitfalls to avoid, such as those related to anthropomorphism.

Encourage active experimentation

Motivate users to test different prompt formulations and explore the technology’s features more creatively (see the sketch after this list). Also, encourage questioning, analysing, critiquing and adjusting the generated results rather than accepting them without modification.

Test with multiple proxies

Integrate additional tests in the deployment phase, when the generative AI solution is put into practice. This phase can also involve a limited and controlled group of human testers (beta testers) with varied social and cultural capital, identified by the company that owns the technology.

Backtest the solution explicitly

Explicitly collect user feedback through surveys or questions while the technology is being used. This approach remains limited because users are in their generative bubble and have more difficulty identifying this type of technological discrimination.

Backtest the solution implicitly

Implicitly collect user feedback and behavioural data (or usage data) and compare users according to the types of profile previously identified and studied among the beta testers.

Continuously rethink the technology

Regularly reevaluate the technology’s features and design based on the results from the explicit and implicit backtesting.
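
As a concrete illustration of the “encourage active experimentation” point above, here is a minimal sketch that sends semantic variants of the same request to a chatbot and collects the answers side by side. It assumes the OpenAI Python client and a placeholder model name; the prompts are illustrative and the same pattern works with any comparable API.

    # Illustrative sketch: query a model with semantic variants of the same
    # request and compare the answers. Assumes the OpenAI Python client
    # (pip install openai) and an API key in OPENAI_API_KEY; the model name
    # and the prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()

    variants = [
        "Summarise the causes of the 2008 financial crisis.",
        "Explain to a regulator what triggered the 2008 financial crisis.",
        "List the three most cited causes of the 2008 financial crisis, with caveats.",
    ]

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Putting the answers side by side makes it easier to see what each
    # phrasing emphasises or leaves out, instead of accepting a single output.
    for prompt in variants:
        print("PROMPT:", prompt)
        print("ANSWER:", ask(prompt))
        print("-" * 40)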

New information dynamics

We’ve entered an era in which information is increasingly shared through open data platforms and databases, open source, the internet in general and social media in particular. Information is power, but shared information is a superpower. Generated (AI-aided) information can significantly impact how we see each other and the world. We must pay more attention to how we adopt and use AI communication technologies.


  • This blog post represents the views of its author(s), not the position of LSE Business Review or the London School of Economics and Political Science.
  • Featured image provided by Shutterstock.

About the authors

Aurélie Jean

Aurélie Jean is a computational scientist, entrepreneur and author specialised in algorithmic modelling. She has a PhD in Materials Science and Engineering, with a specialisation in computational mechanics and mathematical morphology, from Mines Paris, PSL University, France.

Guillaume Sibout

Guillaume Sibout is a consultant with In Silico Veritas (ISV), a consultancy specialising in strategies and governance of algorithms. He has an Executive Master's degree in the digital humanities, innovation, transformation, media and marketing from Sciences Po.

Mark Esposito

Mark Esposito is Professor of Strategy and Economics at Hult International Business School and Director of the Futures Impact Lab. He is also a Harvard social scientist, with appointments at the Center for International Development at Harvard Kennedy School, Harvard's Institute for Quantitative Social Science and the Davis Center for Russian and Eurasian Studies.

Terence Tse

Terence Tse is Professor of Finance at Hult International Business School and Co-founder and Executive Director of Nexus FrontierTech. He is also Co-Founder and Chair of Fintech and Business AI at The Chart think tank.

Posted In: Economics and Finance | Technology
