
Natalia Kucirkova

June 7th, 2023

With generative AI, the consequences of no independent verification of EdTech evidence are looming large


Estimated reading time: 3 minutes


As governments grapple with balancing regulation and innovation in generative AI, educational technologies (EdTech) are being developed to bring AI directly into K-12 classrooms. The abundance of EdTech that teaches children to respond in plausible but nonsensical ways requires urgent government oversight. For www.parenting.digital, Natalia Kucirkova discusses the need for international regulation of EdTech evidence and educational impact.

According to ChatGPT, generative AI can transform education by “personalizing learning experiences, providing intelligent tutoring, enhancing content creation and curation, enabling language learning and translation, facilitating data analysis and predictive analytics, promoting accessibility and inclusivity, and improving administrative efficiency.” A major impediment to this vision is the lack of unbiased verification of the impact of AI-based education. For generative AI to produce learning experiences that are viable and useful, the technologies need to be grounded in the learning sciences. However, AI not only generates new learning experiences; it also challenges and undermines the science of learning. In the current, lightly regulated legal landscape for technology, this is deeply problematic.

Missing evidence standards for EdTech

Major capital investments into AI-driven solutions propel EdTech developers to integrate generative AI into existing as well as newly designed apps and learning platforms. These efforts should be balanced by reforms that directly tackle the issue of AI-generated pseudo-science. AI-generated pseudo-science is possible in all subject areas, but educational research is particularly vulnerable to its negative impact on children. This is because of several issues, but one that is less often discussed is the lack of agreed standards of educational evidence.

Even major education clearinghouses use divergent standards, leading to divergent recommendations of what works in education. For EdTech specifically, the U.S. government recommends the ESSA standards of evidence and has recently released a guidance document for applying them in practice.

In the past, these guidelines have been criticized for their mismatch with EdTech's qualities: EdTech is highly personalized, which makes evaluation against standards of generalizability (the ESSA standards of evidence) difficult. In the UK, ensuring that EdTech is based on robust evidence of improving education is one of the requirements of the Digital Futures Commission's Blueprint for Education Data and its proposed certification scheme for EdTech.

AI-generated pseudo-science

With no globally agreed standards of what works, the educational sciences face the risk that AI will deepen the disconnect between educational science and educational practice, a problem already exacerbated by a profit-driven predatory research industry. Mass walk-outs of editors from Elsevier editorial boards revealed that even in the largest publishing houses, studies get published rapidly, often with minimal or biased peer review. AI-generated educational science will not guard against the unethical practices already ongoing in predatory science. Indeed, with generative AI, the problem extends to peer reviews run by chatbots and reliability checks carried out by AI.

For example, Elicit.org already produces research summaries based on a systematic synopsis of large quantities of relevant papers, categorized according to their methodologies, key arguments and published critiques, essentially generating systematic reviews within seconds. Algorithms are being used to rank individual research papers and to determine the quality of science. Without adequate regulation, there is a risk of substandard or misleading offerings infiltrating the market, leading to ineffective or even detrimental educational experiences.

Possible dangers

One can easily imagine the negative chain of effects: EdTech companies collect large amounts of data directly and indirectly from children, teachers and caregivers. The companies regularly run customer surveys and measure learning engagement directly on their platforms. These data are used for product development but, increasingly, also to generate evidence of impact. With a suite of AI research tools, any EdTech company can generate a scientific summary that backs up its model and brand itself as “science-based”. The data can then be presented as “evidence-based EdTech” and sold to schools and districts, which have neither the capacity nor the time to verify the truthfulness of EdTech marketing claims.

Industry-funded research has a long history of unethical and biased practices, and the education industry is not immune to them. Recent revelations of US start-ups lying about their impact and data point to an industry driven by business metrics rather than by a positive impact on children's learning. AI-generated science is likely to exacerbate the risk of undue corporate influence in the production and procurement of ineffective educational technologies in public schools.

Ways forward

To mitigate these risks, governments internationally should develop standards for independently evaluating EdTech evidence and support the integration of that evidence into EdTech design and development. In addition, governments need to regulate how the standards are applied and independently verified. Practical solutions in the form of badges, certificates or transparent reports on EdTech intelligence platforms such as HolonIQ or Dealroom should be urgently implemented.

Another implication is the need for investors to apply refined ethical standards as part of due diligence checks, and for EdTech founders to raise early-stage capital with evidence funding needs in mind. Furthermore, multi-stakeholder engagement in generating evidence is crucial for bending the market towards agile, applied and formative research on EdTech in schools.

In sum, the aim should be to underpin claims about what is educational with better evidence. A combination of government regulation and cross-stakeholder collaboration in developing the infrastructure necessary for its implementation is indispensable if AI's potential is to be harnessed for good. Neither will develop at the speed of AI innovation. Nevertheless, advocating for socially shared regulation, in which humans and AI collaborate in overseeing education, can serve as a new North Star for the EdTech industry.

First published at www.parenting.digital, this post represents the views of the author and not the position of the Parenting for a Digital Future blog, nor of the London School of Economics and Political Science.

You are free to republish the text of this article under a Creative Commons licence, crediting www.parenting.digital and the author of the piece. Please note that images are not included in this blanket licence.

Image credit: Photo by Katerina Holmes on Pexels

About the author

Natalia Kucirkova

Natalia I. Kucirkova is Professor of Early Childhood Education and Development at the University of Stavanger, Norway, and The Open University, UK. Natalia's research concerns innovative ways of supporting children's book reading and digital literacy, and explores the role of personalisation in the early years. Her research takes place collaboratively across the academic, commercial and third sectors. She led the development of the app "Our Story" and co-founded the university spin-out WiKIT, which bridges the learning sciences and the children's EdTech industry.

Posted In: Reflections