On 5 September I attended this one-day event at the University of Dundee, billed as the UK’s “largest conference dedicated to exploring the best examples of e-assessment in the world today”. LSE has a growing interest in e-assessment (which we might define as the use of IT to facilitate assessment processes), with various pilot projects on the go this year.

Total e-assessment

With that in mind, one presentation in Dundee proved a real eye-opener for me. Linda Morris, an academic in the University of Dundee’s College of Life Sciences, told us that by 2015 the College will have reached the point where all assessment, across all 4 years and including final exams, is done online. Furthermore, this marks the end point of a journey which started a long time ago – in fact they were already using e-assessment for all 1st-year courses by 2003! I felt more than a little embarrassed, to be honest.

The drivers for this change were simple: more students, asking for more feedback, and fewer staff. The paper-based assessment regime was becoming completely unmanageable. A fully online system means no paper, remote access for markers, progress tracking, and easy distribution of feedback. It is also popular with students, many of whom have fallen out of the habit of writing at length by hand (and whose handwriting may be barely legible as a result).

Dundee’s system uses a combination of Exam Online for essay questions and QuestionMark Perception for other question types. This system supports all the forms of submission they need, as well as all their marking requirements: blind marking, multiple markers, inline comments, and marking workflow.

Do people like it? Yes. Linda says “once you start down the road of e-assessment, you won’t get anyone to go back”.

Software

Various vendors were on hand to promote their wares: Surpass, Cirrus, QuestionMark and MyProgress, amongst others. However, I found it hard to see what, if anything, these tools would offer us that Moodle does not already provide. In fact, in some cases the feature set seemed much thinner than that of the Moodle quiz tool.

Keynotes

Peter Reed of the University of Liverpool started the day by identifying institutional problems with the introduction of e-assessment. Such a move is often done in a piecemeal manner, perhaps in response to NSS scores, and as a result fails to be transformational. He also pointed to a lack of flexibility in submission practices, which may assume that all submissions are documents, and prevent students from submitting other digital artefacts.

In thinking about e-assessment at the institutional level, he encouraged us to apply Brookfield’s “4 Lenses”. This framework proposes that any teaching and learning activity should be evaluated from four different perspectives: self-reflection, students, literature (i.e. theory and evidence) and peers (i.e. staff).

For example, through the student lens, we should think about the week-on-week burden of assessment. An assessment won’t be an effective measure of student achievement if that student has 3 other, more pressing assessments in the same week. This can be countered by spreading out the assessment load: instead of a single high-stakes assessment at the end of the module, set lower-stakes assessments throughout the term. Similarly, through the peer lens, we need to think about assessment load across different programmes and different years. Where multiple assessments from different sources fall in the same week, administrative staff or markers may be unable to cope.

In the other keynote, Mark Glynn of Dublin City University spoke about “assessment analytics”, proposing that the “click data” VLEs typically provide are of limited value, and that assessment data is what will provide the really useful analytics. Such analytics may be Descriptive (what happened), Diagnostic (why it happened), Predictive (what will happen) or Prescriptive (what should happen).

I had a problem with one of his ideas for such analytics: to show students how they had performed in relation to their peers. This would be beneficial to the student, he claimed, because they could tell whether 75% was “good” in the context of the overall marking on their assessment. I found this rather depressing; 75% should mean “good”, regardless of how the other students performed. If it does not, then it means we do not know how to mark properly: the percentage grades we assign have no inherent meaning, and assessment becomes simply a process of sorting students into order of achievement, rather than determining how well they have achieved the objectives of the course. The use of technology to patch up these failures of assessment is not exactly inspiring.
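
Purely to make that idea concrete, here is a minimal sketch (in Python, with invented marks – nothing here reflects any real VLE’s analytics) of the kind of peer comparison he described: a percentile rank showing where one student’s mark sits within the cohort.

    def percentile_rank(cohort_marks, student_mark):
        # Percentage of cohort marks at or below the student's mark.
        at_or_below = sum(1 for m in cohort_marks if m <= student_mark)
        return 100 * at_or_below / len(cohort_marks)

    cohort_marks = [58, 62, 65, 70, 71, 74, 75, 78, 80, 85]  # invented example data
    print(percentile_rank(cohort_marks, 75))  # 70.0 - at or above 70% of the cohort

On this made-up cohort, a mark of 75% sits at the 70th percentile – exactly the sort of context Glynn argued students should see, and exactly the sort of comparison I would rather our marking did not need.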

Conclusion

This was a worthwhile conference, with some valuable insights into what other institutions are doing in this area. The day conference was followed by a longer online programme, which is ongoing at the time of writing.

Steve