As universities integrate generative AI (GenAI) tools to support students’ learning and better prepare them for the requirements of tomorrow’s labour market, it is crucial to evaluate how students use these technologies and how such use impacts their skill development.
Following Oxford University’s announcement that it will expand its artificial intelligence (AI) offering and capabilities in collaboration with OpenAI, LSE partnered with Anthropic to give all students free access to its AI assistant, Claude for Education.
The tool is designed to support students’ learning by guiding their reasoning process rather than simply providing answers. In preparation for this collaboration, we took stock of where we are, drawing on Anthropic’s recent Education Report, findings from our ongoing research projects through the GENIAL AI Hub, and a student-led[*] convenience-sample survey of 100 LSE students.
The state of play
Based on around half a million anonymised student conversations on Claude.ai in the United States, Anthropic’s Education Report found that students in quantitative and science courses, particularly Computer Science, use Claude disproportionately more in their learning than students on qualitative courses. Although Computer Science students represent only 5.4 percent of American undergraduate students, they accounted for 38.6 percent of interactions on Claude.ai.
In contrast, undergraduate students in Business, Health, and Humanities courses were less represented in the Anthropic study (accounting for only 8.9, 5.5 and 6.4 percent of all users respectively), which suggests that students from these disciplines rely less on Claude.ai in their learning. However, it is important to note that these findings reflect usage patterns specific to Claude.ai and do not necessarily capture broader trends in the adoption of AI-powered tools by university students.
The Anthropic data largely aligns with trends observed at LSE, where our study revealed that management students used AI tools less frequently than their peers in data science, statistics, and other quantitative disciplines. Nevertheless, these patterns could also reflect differences in social science students’ access to, preference for, or familiarity with Claude.ai specifically, rather than GenAI overall. Indeed, the non-representative student-led survey conducted prior to the rollout of Claude for Education licences found that 94 percent of respondents used ChatGPT in their learning, while Claude.ai was used by 21 percent of the 100 surveyed LSE students.
New ways of learning or development challenges?
Anthropic’s Education Report identified four different ways in which students use AI for their learning:
- Direct Problem Solving,
- Direct Output Creation,
- Collaborative Problem Solving,
- and Collaborative Output Creation.
While these use cases hold the potential for new ways of learning, they also raise challenges.
For instance, almost half of all student chats (47 percent) focused on direct problem solving and direct output creation, seeking straightforward answers or creating content with minimal conversational engagement. Although some of these interactions had legitimate learning purposes, others sought answers to multiple-choice and English language test questions, or aimed to paraphrase marketing and business texts to evade plagiarism detection, showing evidence of “offloading significant thinking to the AI”.
The student survey reflected similar trends. Students reported using AI to save time (84 percent) and to enhance understanding (78 percent). Our GENIAL study also observed that students often leverage GenAI tools for task completion and productivity gains rather than deeper conceptual exploration. Without teaching support, the use of GenAI tools leads students to prioritise outputs and performance over the learning journey itself. This ‘mindless use’ of AI can result in students submitting AI-generated work with a limited grasp of underlying concepts and a subsequent loss of foundational skills. While AI offers undeniable efficiencies and opportunities, these findings suggest that it may also hinder the learning process.
Anthropic’s report also flags this risk. The prevalence of direct interactions was particularly high in Humanities, Business, Health, and Education, where students used AI significantly more for output creation (59.9, 58.7, 59.9, and 74.4 percent respectively). Although the results are not conclusive, and output creation may have legitimate value, they reinforce concerns about academic integrity and call for further research in this area.
Even more concerning is the potential long-term impact on critical thinking skills. The report shows that students primarily used Claude for higher-order cognitive tasks, such as creating new material (39.8 percent) and analysing information (30.2 percent), potentially outsourcing thinking and critical evaluation to AI.
These insights are echoed by other studies, which highlight how, instead of completing tasks themselves, knowledge workers increasingly focus on overseeing AI work, verifying its accuracy, and adapting outputs for their specific needs. While GenAI can enhance efficiency, it may also lead to less critical thinking, particularly as users develop higher confidence in GenAI.
This is especially worrying given that, according to the World Economic Forum, 70 percent of employers consider analytical thinking the most important skill in 2025. Hence, while global trends in skill requirements reinforce the urgency of developing critical thinking, students may find themselves on a learning journey that strips them of these essential employability skills.
A way forward
To realise AI’s educational potential while mitigating its negative impacts on academic integrity and learning, we need to develop approaches that:
- Promote critical engagement instead of simply providing answers.
- Include educational features that support students’ skill development by triggering problem-solving and analytical thinking processes.
- Highlight the limitations and inaccuracies of AI outputs, fostering verification practices.
- Protect intellectual property and academic integrity.
AI is undeniably reshaping education. To ensure it enhances rather than undermines the learning experience, we must prioritise the development of critical thinking, conceptual understanding, and independent learning skills. Through thoughtful dialogue between educators, students and technology developers, we can shape the future of learning that will prepare students to succeed in an increasingly complex world.
[*] The survey was run as a non-representative, student-led exercise on a convenience sample of 100 participants in April 2025 by Zain Mirza, a second-year BSc Accounting and Finance student and Anthropic’s student ambassador at LSE. The authors would like to thank Zain for initiating the student survey, and for their inspirational efforts in fostering dialogue.
This blog represents the views of its authors, not the position of the Department of Management or the London School of Economics.