
Alessandro Di Lullo

Daniel A Bielik

November 14th, 2024

Are your students aware of your AI guidelines?


Estimated reading time: 10 minutes


Most students use AI in their studies, but only a few are fully aware of their university’s AI guidelines. Alessandro Di Lullo and Daniel A Bielik suggest why, and how, universities need to communicate their guidelines better to their students.

Students and academics can now access artificial intelligence (AI) tools in a matter of minutes, from more or less anywhere and on any device. According to the Digital Education Council (DEC) Global AI Student Survey 2024, 86% of students regularly use AI in their studies, with 54% using it at least weekly.

In this environment, it becomes crucial for universities to master the art of AI policymaking, which we define as the process of designing, implementing, and regulating guidelines and principles that govern the development, deployment, and use of AI in the institution. In our experience, the art of university AI policymaking is about how to balance three objectives:

  • Boost the adoption of AI and new technologies.
  • Define what constitutes correct and responsible use of AI for different purposes and use cases within the university value chain.
  • Implement effective communication strategies, engage all stakeholders, and equip them with the necessary skills to use AI correctly and responsibly.

In light of this need for balance, and the ever-increasing speed at which technological advancements are forcing university leaders to make decisions in uncertainty, we present insights that provide a structure for informed decision-making.

The communication gap

The DEC Survey gathered nearly 4,000 responses across 16 countries in Europe, Asia, the Americas, and Africa, and from bachelor’s, master’s, and doctoral students in multiple fields of study, offering a diverse range of student viewpoints on AI in education.

The survey shows that a large population of students are not fully aware of the AI guidelines in their university. Specifically, only 5% of students are aware of their university’s AI guidelines and feel they are fully comprehensive.

[Figure: “Student perception of comprehensiveness of AI guidelines and awareness”. Scatter plot of perceived comprehensiveness of AI guidelines (vertical axis, low to high) against awareness of AI guidelines (horizontal axis, low to high): 5% of students (“students with full awareness”) sit at the top right, 32% (“the confused mass”) in the middle, and 29% (“the lost”) at the bottom left. Source: Digital Education Council Global AI Student Survey, 2024]

The correlation between data points on awareness and perceived comprehensiveness indicates that students feel that support and guidelines on the use of AI in higher education are either not fit for purpose or are not communicated effectively. So does this negative perception stop those students from using AI? The data suggests otherwise.

The scale of this issue is reflected in the fact that, among the 29% of students who report having little to no awareness of AI guidelines, 76% of them regularly use AI in their studies. Extrapolating this trend to the broader university student population signals that millions of students may be using AI without sufficient knowledge of how to do so within university guidelines. This reality paves the way for behaviours that could be damaging to both student learning and academic integrity.
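A quick calculation makes the scale concrete. The sketch below simply multiplies the two survey figures; extending the resulting share to the broader student population is our extrapolation, not a reported survey result:

```python
# Back-of-the-envelope estimate from two DEC survey figures:
# 29% of students report little to no awareness of AI guidelines,
# and 76% of that group regularly use AI in their studies.
low_awareness_share = 0.29
ai_use_among_unaware = 0.76

# Implied share of ALL students using AI without knowing the guidelines.
unaware_ai_users = low_awareness_share * ai_use_among_unaware
print(f"{unaware_ai_users:.0%}")  # -> 22%
```

In other words, roughly one in five students may be using AI without knowing what their institution permits.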

Bridging the gap

In response, the Digital Education Council has laid out a series of practical steps in the DEC AI Governance Framework. It outlines six key components that are crucial for implementing effective communication for AI:

  1. Identify relevant stakeholders: including students, educators, and administrators, and ensuring that all parties affected by AI use are considered.
  2. Policy development: defining when, how, and which aspects of AI usage should be communicated. These policies are essential for maintaining transparency and consistency.
  3. Disclosure: defining the appropriate level of transparency regarding AI usage, which includes communicating the rationale behind using AI, the tools being deployed, how the data is stored and secured, and the potential risks and impact on stakeholders.
  4. Communication process and channels: ensuring all users are aware of their obligations and appropriate use policies.
  5. Feedback and review channels: offering stakeholders the opportunity to voice concerns or appeal decisions made by AI systems.
  6. Language: using clear and accessible language to explain AI-related policies. This includes specifying which AI uses are permitted, restricted, or prohibited, and using illustrative guides to enhance understanding.
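For institutions assessing themselves against these components, even a simple checklist can reveal where communication is likely to break down. Below is a minimal sketch in Python; the component names come from the list above, while the status values are entirely hypothetical:

```python
# Hypothetical self-assessment against the six DEC framework components.
# Component names follow the list above; statuses are illustrative only.
framework_status = {
    "identify relevant stakeholders": "done",
    "policy development": "in progress",
    "disclosure": "in progress",
    "communication process and channels": "not started",
    "feedback and review channels": "not started",
    "language": "not started",
}

def open_items(status: dict[str, str]) -> list[str]:
    """Return the components not yet completed."""
    return [name for name, state in status.items() if state != "done"]

print(open_items(framework_status))
```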

Based on our discussions with DEC university members, points three (disclosure) and four (communication process and channels) have emerged as focal points of the conversation. Our research shows that the current state of AI guidelines in most higher education institutions is that, by default, AI use cases are regulated by general institutional guidelines which typically require disclosure of any AI usage. Institutions often provide additional guidelines for some specific AI use cases, reclassifying them into these categories:

  • Open use: permits certain AI uses without requiring disclosure
  • Use with compliance: adds specific guidelines and instructions that students and faculty must follow
  • Prohibited use: bans certain AI uses within the institution

Looking ahead, the evolution toward more comprehensive AI governance will likely see institutions develop clearer, risk-based guidelines for different AI use cases. Our research shows that only a limited number of AI use cases are currently explicitly addressed. As a result, most AI use cases remain under the default ‘Use with disclosure’ category due to the lack of more specific guidelines. Two good examples of disclosure and compliance requirements have been compiled by the Digital Futures Institute at Teachers College, Columbia University, and the University of North Carolina.
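To illustrate how this default-plus-exceptions structure behaves, here is a minimal sketch; the category labels follow the list above, while the individual use cases and the classify_use_case helper are hypothetical:

```python
# Sketch of the default-plus-exceptions policy structure described above.
# Category labels mirror the list; the example use cases are hypothetical.
EXPLICIT_RULES = {
    "grammar checking": "open use",               # permitted, no disclosure needed
    "brainstorming essay ideas": "use with compliance",  # specific instructions apply
    "generating exam answers": "prohibited use",  # banned within the institution
}

DEFAULT_CATEGORY = "use with disclosure"  # the institutional default

def classify_use_case(use_case: str) -> str:
    """Return the policy category for an AI use case, falling back to the
    default 'use with disclosure' when no explicit rule exists."""
    return EXPLICIT_RULES.get(use_case, DEFAULT_CATEGORY)

print(classify_use_case("grammar checking"))   # -> open use
print(classify_use_case("literature review"))  # -> use with disclosure (default)
```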

Broadening the scope and reach

The challenge around communication is not limited to within the walls of the university. Employers, governments, parents, and other stakeholders have a vested interest in knowing that academic integrity is being reasonably maintained as AI use becomes more ubiquitous. It is conceivable that a narrative suggesting AI equals cheating could take hold among the broader populace.

We advise that the higher education community take a proactive approach to external communication to avoid challenges to the social licence of higher education. Undoubtedly, anecdotal accounts of academic integrity breaking down due to student AI use will emerge in mainstream discussions. Similarly, as a sector we must be aware of the possibility that the public will conflate faculty use of AI with a breakdown of teaching and learning standards. Any resulting backlash may not be as extreme as the Luddite movement of the 19th century, but could look more like the challenges social media has faced in the past few years.

Regulators must be reassured that the increasing prevalence of AI use and the evolution of AI use cases are well understood and their risks mitigated. Employers need to feel comfortable that a graduate possesses the skills and knowledge outlined in their testamur. Lawmakers, donors, parents, and students need to know they are getting value for their money and time.

Maintaining the integrity narrative

We see this as the moment to use coordinated communication to get ahead of any narrative that suggests AI equals cheating. A clearly defined strategy and roadmap for curriculum reform and the adoption of authentic assessment techniques, coupled with proper accountability, will not only need to be in place but will also need to be easily explained to the broader community.

We advocate for regional and international collaboration between institutions, and for a coordinated, proactive approach to publishing easy-to-understand ready reckoners on academic integrity. Likewise, regulators and policymakers should be briefed, and their feedback sought, to ensure any concerns they have are being addressed.

As an industry, we must longitudinally measure attitudes towards, and perceptions of, AI use in higher education among employers, regulators, legislators, and the community more broadly, to ensure that we maintain the narrative of integrity around our product, higher education, lest the narrative tide turn against us.

This post is opinion-based and does not reflect the views of the London School of Economics and Political Science or any of its constituent departments and divisions.

Main image: AXP Photography on Unsplash

About the author


Alessandro Di Lullo

Alessandro Di Lullo is Chief Executive Officer of the Digital Education Council, an Adjunct Faculty Member at Singapore Management University, and an Academic Fellow at the University of Hong Kong, focusing on AI governance.


Daniel A Bielik

Daniel Bielik is President of the Digital Education Council and previously served as Ministerial Adviser in Education to the New South Wales Government, Australia.

Posted In: AI
