The rise of generative AI is forcing educators to interrogate exactly what we mean by accountability and transparency. LSE HE Blog Fellow Maha Bali takes up the challenge
An entangled, complex relationship
“AI cannot be considered inevitable, beneficial or transformative in any straightforward way… AI in education is highly contested and controversial. It…requires public deliberation and ongoing oversight if any possible benefits are to be realized and its substantial risks addressed.” – Ben Williamson
I am often in spaces where people have critical conversations around the ethics and impact of generative artificial intelligence (GenAI) in education, and yet in these same spaces I continue to encounter deterministic views of technology and AI, in which AI is seen as inevitable and its presence in education is deemed unquestionably beneficial.
GenAI is still an emerging technology, despite the attention it receives in higher education (HE), and we remain uncertain about the full extent of its benefits and risks. We need to “resist simplistic notions of technological determinism and show how technology’s lack of neutrality is negotiated on the ground”. I consider my relationship with these technologies to be in a state of not-yetness, because the relationship between education and technology is entangled, complex and negotiated. It involves not just teachers and GenAI but also administrators, students, and other technologies, as well as the sociopolitical environment, cultural and economic considerations, and the values and goals of the human actors within them.
Here, I focus on two ethical dimensions that are directly of interest to most academics and educators – accountability and transparency – though there are many more of concern.
An illusion of neutrality
At the many conferences I attend and panels I’m a part of, I rarely notice people giving much thought to the issue of accountability when using GenAI, even given what we know about implicit bias and hallucination in GenAI tools. When a student, educator, or administrator makes an AI-aided decision or uses AI output, they need to avoid reproducing bias. Teams of educators who are aware of potential inequalities in assessment can control the diversity of the data a specialist tool is trained on (anticipatory accountability) and assess potential bias in its outputs (remedial accountability). But if we start using AI in areas such as feedback and learning analytics, our capacity to identify implicit bias could be reduced, while we delegate decisions to what is often incorrectly assumed to be a neutral technology. This can be avoided through comprehensive auditability regimes and transparent design and use practices in which learners are fully informed of the data being collected about them. At the very least, educators should use systems that have some explainability in place, and should be able to justify, with human reasoning, the decisions these systems make on critical issues such as college admissions and recruitment.
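To make “assess potential bias in the outputs” more concrete, here is a minimal sketch of the kind of remedial check a teaching team might run over AI-generated feedback scores, assuming those scores have already been exported to a spreadsheet alongside consented, self-reported group labels. The file name, column names and threshold are illustrative assumptions, not a standard method.

```python
# Minimal sketch of a remedial accountability check: compare mean AI-assigned
# scores across student groups and flag gaps worth a human review.
# The CSV file, column names, and threshold are illustrative assumptions.
import csv
from collections import defaultdict

def group_means(path, score_col="ai_score", group_col="group"):
    """Compute the mean AI score per group from an exported CSV."""
    totals, counts = defaultdict(float), defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row[group_col]] += float(row[score_col])
            counts[row[group_col]] += 1
    return {g: totals[g] / counts[g] for g in totals}

def flag_gaps(means, threshold=5.0):
    """Return group pairs whose mean scores differ by more than `threshold` points."""
    groups = sorted(means)
    return [
        (a, b, means[a] - means[b])
        for i, a in enumerate(groups)
        for b in groups[i + 1:]
        if abs(means[a] - means[b]) > threshold
    ]

if __name__ == "__main__":
    means = group_means("ai_feedback_scores.csv")  # hypothetical export
    for a, b, gap in flag_gaps(means):
        print(f"Review needed: mean score gap of {gap:+.1f} between {a} and {b}")
```

A flagged gap is not proof of bias; it is only a prompt for humans to review the underlying feedback and the data the tool was trained on.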
When learners use GenAI tools as tutors, they may receive information that is factually incorrect or skewed by implicit biases against certain marginalised groups. They might learn incorrect information from hallucinations, publish it online, and repeat it so often that it becomes difficult to use the internet to verify it any more. Such epistemic biases and misinformation existed well before GenAI, and even before the internet, but the difference was that we could usually trace the sources to particular humans whose credibility, biases, and history we could evaluate. GenAI tools give the illusion of neutrality, and humans may lay blame on them, absolving themselves of any responsibility.
With regard to hallucinations, what kind of guardrails are needed to ensure that humans using GenAI do not amplify the mistakes in AI outputs? Might we face productivity losses by automating the wrong thing with AI and then asking humans to spend their time verifying it? Remedial accountability may not be a good use of our time.
We need to ensure that AI use does not reduce human agency, and that we do not pass accountability on to a tool that users cannot themselves hold accountable. So we must push for more explainability in AI systems and for transparency about training data. First, though, we need to discuss our own transparency when using AI.
Transparency beyond plagiarism
Most of the discussion on transparency has revolved around plagiarism and has put the onus on students to be transparent about their use of GenAI. I think the issue of transparency with GenAI operates on multiple levels:
- Transparency of students using AI – to what extent do university policies ask for this? Here is a crowdsourced list of syllabus guidelines collected since early 2023, a useful example of diverse guidelines within one course, and a curation of university-wide guidelines.
- Transparency of researchers using AI – to what extent do journals and research organisations require researchers to disclose their use of GenAI?
- Transparency of teachers using AI – to what extent are teachers transparent about their use of AI in writing syllabi, learning outcomes, lesson plans, and creating learning materials?
How are you approaching GenAI accountability and transparency?
We’d like to know:
- In your own practice, what do/will you do to mitigate the risks related to accountability and transparency in AI (especially those mentioned in this blog post)?
- Are these issues a priority for you? Why or why not?
Please share your thoughts via our survey
As educators and researchers, we have agency over the kind of transparency we require of learners in their assessments and of researchers in their outputs. In areas where we lack agency, we can advocate for developers to build more transparency into GenAI platforms, so that we can trace the credibility of outputs and attribute ideas properly, given that those ideas never actually come from the GenAI platform itself but from the people whose work was used to train it.
I recently came across a striking example of how a lack of transparency with GenAI can cause problems: a colleague, Laura Czerniewicz, got a Google alert that an article she had supposedly co-authored with Cheryl Brown had been cited. When she looked at the citing article, she found a paragraph describing the argument of a paper she could not remember ever having written. Someone had probably used GenAI in their literature review, and the tool had produced a plausible-looking but fictitious citation of work that Laura and Cheryl had never written. Obviously, the authors using the AI should be accountable for this mistake, but what are the responsibilities of the peer reviewers, editors, and publishers? Readers of the article containing the fictitious citation may cite it in turn, assuming its contents are correct, and so the fabricated reference may continue to propagate.
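One practical mitigation is to check every reference in a GenAI-assisted literature review against a bibliographic database before submission. The sketch below assumes Python with the requests library and the public Crossref REST API; the word-overlap heuristic and threshold are illustrative assumptions, and an unmatched reference still needs a human to confirm whether it is genuinely fabricated.

```python
# Minimal sketch: flag cited works that cannot be matched in Crossref.
# Assumes Python 3 with the `requests` library; the matching heuristic is illustrative.
import requests

def crossref_lookup(title, author_surname):
    """Query the public Crossref API for bibliographic candidates."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": f"{title} {author_surname}", "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

def looks_fabricated(title, author_surname):
    """Return True if no Crossref candidate title resembles the cited title."""
    cited_words = set(title.lower().split())
    for item in crossref_lookup(title, author_surname):
        found = " ".join(item.get("title", [""])).lower()
        overlap = len(cited_words & set(found.split())) / max(len(cited_words), 1)
        if overlap > 0.6:  # crude word-overlap threshold, tune as needed
            return False
    return True

if __name__ == "__main__":
    # Entirely hypothetical reference, standing in for one produced by a GenAI tool
    if looks_fabricated("A fully hypothetical article title", "Doe"):
        print("No plausible Crossref match: check this reference by hand.")
```

A script like this cannot prove a reference is fabricated, but it can surface the ones that deserve manual scrutiny before a manuscript reaches reviewers.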
Transparent AI use in research
This is an important issue for academics as researchers, but also as we supervise research and dissertations of students at various levels.
Nature was one of the first journals to publish guidelines asserting that GenAI tools such as ChatGPT do not qualify for authorship because they cannot be held accountable, and requiring any use of GenAI tools to be declared in the methods section. Furthermore, Nature does not accept AI-generated images because of the controversy over the copyright of such images. Peer reviewers are held accountable for their reviews and need to be transparent about their own use of GenAI: I once received a peer review that we suspected relied on undisclosed and unchecked GenAI, and the editor removed that review from consideration.
I publish in a variety of journals, and one with well-thought-out AI guidelines is Open Praxis. It spells out the possible AI uses and what they mean, and asks authors to clarify how and at which stages AI was used: direct contribution, general assistance, specific sections, idea development, editing and reviewing, language translation and localisation, data analysis, data visualisation, code or algorithms.
Institutional ethical review boards may also wish to protect the privacy of human subjects whose data is collected, requiring that informed consent be obtained before their data is put through a GenAI tool for analysis, or at least that the data be anonymised before use. These two examples show how universities can provide guidelines for treating sensitive data cautiously before using GenAI tools at any stage of the research process.
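As a rough illustration of the “anonymise before use” step, the sketch below redacts email addresses and a supplied list of participant names from a transcript before it is passed to any GenAI tool. It assumes Python’s standard re module; real de-identification requires far more than this, so treat it as a starting point rather than a vetted method.

```python
# Minimal sketch: redact obvious identifiers from a transcript before any GenAI analysis.
# Real de-identification needs much more than this; patterns and names are illustrative.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(text, participant_names):
    """Replace email addresses and known participant names with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    for i, name in enumerate(participant_names, start=1):
        text = re.sub(re.escape(name), f"[PARTICIPANT_{i}]", text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    sample = "Interviewer: Thanks, Dana Example (dana@example.edu), for joining."
    print(redact(sample, ["Dana Example"]))
    # -> "Interviewer: Thanks, [PARTICIPANT_1] ([EMAIL]), for joining."
```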
However, given Laura and Cheryl’s story, I’m not sure that even such transparency is enough. What do we need to do to avoid all these issues?
Please share your thoughts in our survey on GenAI accountability and transparency
This post is opinion-based and does not reflect the views of the London School of Economics and Political Science or any of its constituent departments and divisions.
Main image: Microsoft Copilot
I agree that the use of GenAI needs to be transparent, but at the moment it is difficult for users to track their use of AI and share it. Plagiarism has always been an issue in schools, but AI makes the issue even larger. Students start to genuinely forget where they got their information and whose ideas they were.
Additionally, the act of trying to figure out if something was produced by AI is an extremely arduous task, and at the end of the day, it is tricky to prove unless someone admits to using it.
We need to advocate for institutions to subscribe to tools that help track and monitor the writing process instead of waiting until the submission comes through. I’ve been recommending that people check out PowerNotes, as it creates the audit trail we desperately need.