Ksenia Sawczak

May 16th, 2019

Assessing Impact Assessment – What can be learnt from Australia’s Engagement and Impact Assessment?

Estimated reading time: 6 minutes

The impact agenda is an international and evolutionary phenomenon that has undergone numerous iterations. Discussing the development and recent release of the results of the Australian Engagement and Impact Assessment (EIA), Ksenia Sawczak considers the effectiveness of this latest exercise in impact assessment, finding it to provide an inadequate account of the impact of Australian research and ultimately a shaky evidence base for the development of future policy.


In March, the results of Australia’s first ever round of the Engagement and Impact Assessment (EIA) were released. This included a National Report with university score cards, as well as the publication of impact studies that achieved ratings of “high”. You’d be forgiven for missing this event. Apart from the fact that it was an unpopular exercise, dogged by doubts about its ability to generate meaningful findings, the results were overshadowed by the announcement of Excellence in Research for Australia (ERA) outcomes only days earlier, the well-established ERA being a far larger exercise, which comprehensively evaluates the quality of research undertaken in Australian universities.

Over the years, Australia has had a confused relationship with the impact agenda, with much of this grounded in the vagaries of government. When the idea of a national exercise to evaluate research was first touted in the form of the Research Quality Framework, the focus was to be on both quality and impact of research. This was abandoned in 2007 in favour of ERA, following a change in government.

But the matter of impact did not disappear. Concerns about Australia’s poor performance in industry engagement, as measured by OECD statistics, and an interest in understanding returns on investment (e.g. Boosting the Commercial Returns from Research, 2014), kept leading back to the same question: what value is university-based research bringing to society and what are universities doing to make sure their work is reaching end-users?

After 2007 the agenda shifted from pure impact to the conditions that enable impact. This culminated in the release of the 2015 National Innovation and Science Agenda, which proposed a nation-wide assessment exercise that – according to its authors – would magically lead to improvements in how universities engage with industry. This was despite the fact that, like ERA, the exercise would not be used to inform funding to universities, and it was unclear how its outcomes would inform federal policy.

With a strong interest in not just impact but also the mechanisms that enable it, the Australian Research Council (ARC) designed an exercise that would conduct evaluations in three separate areas: Engagement, Impact and Approach to Impact. The Unit of Assessment (UoA) was the very broad 2-digit Field of Research (FoR) code, and the ratings were low, medium and high. The Engagement component was to be largely data-focused (with a commendable commitment to reusing research income data collected through ERA); however, it morphed into an intensive narrative exercise when it was found that the available data offered only a limited picture. For Impact, universities – regardless of size and scale of activity – were required to present one impact study that acted as a proxy for the entire FoR. The third narrative was for Approach to Impact (i.e. a description of how institutions aim to achieve impact). Being able to tell a good story was always going to be the skill that mattered in achieving high ratings.

Over the same period as the development of the EIA, the interest in impact started to materialise in other forms, notably grant applications, with applicants for ARC funding being required to prepare Benefit and Impact Statements outlining the contribution that their research will make to the economy, society, environment or culture. However, things backfired when it was revealed that in 2017 the then Minister for Education had interfered in funding decisions by knocking back a number of projects that he deemed unworthy of funding on the basis of national interest. As a consequence, in 2018 the ARC announced a new National Interest Test (NIT). While seemingly innocuous, in that grant applicants were still required to address matters of benefit, it demonstrated a largely parochial view of how impact and benefit were to be approached. Announcing the NIT, the new Education Minister declared that: “Introducing a national interest test will give the minister of the day the confidence to look the Australian voter in the eye and say, ‘your money is being spent wisely’.” Yet again, public confidence was presented as the rationale.

The EIA report raises many questions, both in and of itself and in light of the parochial NIT. According to the panels that evaluated submissions, the sector is doing okay for Engagement, Impact and Approach to Impact, with 85%, 88% and 76% respectively being rated ‘medium’ or ‘high’ across the sector. What are we to make of this information? For Impact, nothing. The notion that one narrative alone can accurately represent an entire 2-digit FoR is a nonsense. Consider FoR 16 (Studies in Human Society), which covers anthropology, demography, criminology, human geography, policy and administration, political science, social work and sociology.

Another mystery is how impact intersects, if at all, with the ‘national interest’ agenda. The impact studies released by the ARC make for thought-provoking reading. One of the most interesting is on “The History of the Tentmakers of Cairo” in FoR 19 (Studies in Creative Arts and Writing), which leaned on the work of a sole researcher. Would the worthy project behind the study have passed the Minister’s judgement if it had been submitted as a grant application? We will never know, but given its international scope it is questionable. Regardless, as a result of the EIA’s “proxy” approach to assessing impact, the university which presented the study will for now be known as an institution that produces highly impactful research in FoR 19 (an area in which, incidentally, the same university was rated as “below world average” in ERA).

And what are we to make of the ratings for Approach to Impact and Engagement? Should we be concerned if a university that scores well for Impact bombs in the other two components? Surely, if the Australian government is so fixated on matters of public confidence, the concern should be about the benefit that research has produced for society and not the means by which this has occurred. But how is this to be reconciled with the government’s longstanding interest in seeing enhanced university-industry engagement?

What happens next really hinges upon what policymakers in Australia plan to do with the intelligence they have gathered. But with a poor track record of using assessment outcomes to inform policy, it is difficult to be optimistic. It is ironic that an exercise as comprehensive as the EIA is likely to end up so limited in its utility.

 


Image credit: Ariel Besagar via Unsplash (Licensed under a CC0 1.0 licence).

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.



About the author

Ksenia Sawczak

Ksenia Sawczak is Head of Research and Development in the Faculty of Arts and Social Sciences at the University of Sydney, where she oversees the management of research activities. She is a frequent commentator on issues in Australian higher education and research policy.

Posted In: Impact | Measuring Research | News | Research evaluation
