Natalia I. Kucirkova of the University of Stavanger, Norway and The Open University, UK, and Dan Lin of The Education University of Hong Kong, both on the advisory board of education certification non-profit Eduevidence, discuss the importance of certification in the EdTech space, where an abundance of apps are available to schools, teachers and parents.
There are currently around 500,000 educational apps available on the Google Play and Apple App stores, and the average school district in the US uses about 2,500 different EdTech tools each year. With so many EdTech tools on the market, educators and institutions often face challenges in selecting and procuring high-quality tools that align with their goals and objectives. A high-quality tool can enhance good teaching, boost children’s learning, and provide personalized learning experiences, but how do we ensure that the EdTech tools teachers use are indeed of good quality? And how do we define the evidence for quality?
The risks of using EdTech tools include weak data privacy, insufficient educational effectiveness, and inadvertent increases in inequality between students, so it is essential that teachers and educational bodies have a way to assess the quality of the tools they use. Quality in EdTech can encompass various factors, including cost-effectiveness, replication, and practical application. These factors might be demonstrated through positive meta-analysis evaluations that take researchers years to complete, or through quick evaluations conducted by the companies themselves. To make these quality parameters more cohesive and systematic, establishing clear criteria and standards is essential. One approach that we have found effective through our work with Eduevidence is developing certifications, which can provide a unified benchmark for assessing and ensuring the quality of educational tools across the industry.
Given that quality has multiple components, various certifications exist to address different aspects of it. To navigate this complex landscape, it is crucial to first understand how these certifications are structured and the evaluation frameworks they use. This understanding helps clarify the diverse standards and criteria that shape what is deemed “quality” in educational technology.
Current certification frameworks
A systematic review carried out in the UK identified 75 different evaluation frameworks for EdTech quality alone. Efforts are being made to consolidate these frameworks under more unified standards. The European Union is currently working on a common evaluation framework for digital content, and a consortium of seven certification providers in the United States has joined forces to create a cohesive approach. Similar efforts are under way at the global level: our research group at Eduevidence.org has been working on a research-driven approach to developing and implementing an international certification system that could reliably demonstrate the impact EdTech organisations make.
Eduevidence draws on a series of academic summaries, reports, and reviews of current certifications, with criteria that integrate EdTech evaluation frameworks under the umbrella of the 5Es dimensions of impact: efficacy, effectiveness, ethics, equity and environment. The 5Es are key impact categories in the learning sciences that focus on the significant areas where EdTech can impact learning. The 5Es framework combines quality assurance, by identifying what EdTech should avoid, with quality aspirations, by highlighting what EdTech could achieve. This dual focus ensures that EdTech tools are not only safe and usable but also inclusive and of added educational value to students and teachers. If EdTech tools are to be integrated into schools or other specific contexts, there must be evidence that they meet these quality criteria. However, the word “evidence” opens a can of worms.
How do we define evidence in terms of quality in EdTech?
In defining what constitutes evidence of quality in an EdTech tool, there is a need to embrace a broad perspective that goes beyond any single evidence approach, discipline, or methodology.
While US standards of evidence, such as those under ESSA (the Every Student Succeeds Act), follow a hierarchical structure focused on empirical evidence, with randomized controlled trials (RCTs) at the top of the evaluation hierarchy, other frameworks (e.g. the EdTech Tulna standards) focus on the technology design.
Reflecting this diversity of approaches, there has been a push to move beyond the notion that “real research”, or indeed EdTech evidence, is limited to RCTs. By integrating qualitative, quantitative, and learning sciences research, along with educational action research where teachers’ voices are central, EdTech companies can answer both efficacy questions (“does it work?”) and effectiveness questions (“could the EdTech solution work in a specific context, with teachers’ support?”). Both efficacy and effectiveness are crucial and must be present in effective EdTech solutions.
A robust evaluation system should thus allow for evidence understood both as a linear journey towards progressively stronger efficacy rigour and as an iterative co-creation process with teachers. As with all research, efficacy and effectiveness studies are part of an ongoing cycle in which it is the accumulation of a research portfolio over time that counts, rather than a single study. In all these research approaches, the weight of evidence matters.
How do we define the weight of evidence?
A key distinguishing mechanism that certifications can promote is an emphasis on the weight of evidence. For example, a master’s thesis holds less weight than a meta-analysis, a distinction well understood in academic research. This is reflected in the Eduevidence certification system through its three levels of Bronze, Silver and Gold certification. Unlike other EdTech certifications that only provide a kitemark for meeting or not meeting standards, a tiered approach ensures that decision-makers can easily identify not only whether EdTech companies have evidence, but also what level of evidence they have.
Second, it is important to pay attention to the overall impact of EdTech companies on children’s learning and education. There is often an imbalance in how companies prioritize different quality aspects: some may focus heavily on ensuring their products cater to marginalized groups but invest less in rigorous efficacy testing, while others might conduct expensive RCTs but only with selected groups of learners. There is a need for balance among the key impact aspects (efficacy, effectiveness, equity, ethics and environment), and EdTech providers should aim for a high “total score”, a composite of their achievements on each of the “E” verticals of impact. In this way, they can achieve a comprehensive and meaningful impact that extends beyond their innovation itself.
Fostering quality assurance and improvement in the EdTech ecosystem
Aligning and consolidating approaches to EdTech selection and procurement is crucial for navigating the saturated EdTech market. However, more is needed to foster an EdTech ecosystem where quality assurance and improvement go hand in hand. In particular, capacity-strengthening approaches that nurture an evidence-based mindset in EdTech founders are key to sustaining a focus on iterative approaches towards impact. Additionally, enhancing teachers’ and other users’ research literacy is vital for recognizing which tools are evidence-based. Their discernment could be greatly facilitated by lists of recommended resources based on independent, rigorous third-party verification of evidence. By providing educators with access to vetted resources, the EdTech ecosystem can promote a culture of evidence-based practice and continuous improvement.
This article gives the views of the authors and does not represent the position of the Media@LSE blog, nor of the London School of Economics and Political Science.