
Coventry University have devoted time, talent and resources to developing an embedded management tool to help academics plan and capture the impact of their research. Julie Bayley discusses the lessons learnt through the process of creating a functional, self-service solution that appeals to administrators and academics.

As we head away from REF 2014, the HE community is looking to harness learning on ‘research impact’ in advance of future monitoring activities. Expectations that impact may carry a higher weighting in the next REF, alongside funders’ increasing calls for impact plans, are featuring more heavily in university research strategies. Institutions are understandably keen to embed impact management in the post-REF period, but how do we do this? How do we deliver an institution-wide approach in such a nuanced and discursive area? Coventry University have taken a very applied approach to this problem, and the aim of this brief article is to offer insight into that process.

Background: Embedding Research Impact at Coventry (ERIC)


In 2012 a team of Coventry University business engagement, information management (and IT) and academic staff completed a JISC-funded project to develop a pilot impact capture system (ERIC). The result was an in-house tool designed to support researchers to plan and capture the impact of their research, and the prototype, based on REF markers, was received positively in pilot testing. Seeing the potential for such a system, university leads subsequently approved the full development and rollout of the tool across the institution, and we have been working to achieve this goal over the last year.

The system consists of a series of four logic-based drop-down boxes via which academics articulate impacts, from the broad area down to indicative metrics. This is coupled with an evidence repository and an email reminder system to keep the impacts ‘live’. Impact is planned from the outset of a project, monitored, edited, updated or removed, and evidence then uploaded as it becomes available. This self-service system benefits from being user (academic) led, supported centrally and with the scope to configure and capture impact in its broadest terms. The learning insights offered here are based on our ongoing consultation with academics, strategic leads and business staff across the university. These discussions cover usability (content/structure) and behaviour change (facilitators and barriers to using the system), and contribute to the evolving institutional impact strategy.
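As a rough illustration only (this is not ERIC’s actual implementation, and the category names are invented for the example), the cascading drop-down structure described above can be thought of as a nested taxonomy: each selection narrows the valid options for the next box, from broad impact area down to indicative metrics.

```python
# Hypothetical sketch of a four-level cascading impact taxonomy.
# All category and metric names here are illustrative placeholders,
# not the content of Coventry's ERIC system.

IMPACT_TAXONOMY = {
    "Health": {
        "Public health": {
            "Behaviour change": ["Uptake of screening", "Smoking cessation rates"],
        },
    },
    "Policy": {
        "Local government": {
            "Service redesign": ["Policies citing the research", "Budget reallocated"],
        },
    },
}


def options_at(taxonomy, *selections):
    """Return the valid choices for the next drop-down, given selections so far."""
    node = taxonomy
    for choice in selections:
        node = node[choice]  # descend one level per prior selection
    # Intermediate levels are dicts (more drop-downs follow);
    # the final level is a list of indicative metrics.
    return sorted(node) if isinstance(node, dict) else list(node)
```

For example, `options_at(IMPACT_TAXONOMY)` returns the top-level impact areas, while `options_at(IMPACT_TAXONOMY, "Health", "Public health", "Behaviour change")` returns the candidate metrics. The appeal of this kind of structure is that discipline-specific pathways can be added without changing the interface logic.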

So what lessons have we learned?

1.    Partnership is key

Success is dependent on partnerships, starting with the project team. At Coventry, business development leads scoped the corporate need for ERIC and gained strategic support, and ultimately ERIC now forms part of the university’s research strategy. Information management specialists provided the architecture for the system, and governed the information flow and structure. The IT specialist, a programmer positioned within the team from the outset, advised on and actioned solutions, continually mindful of user needs. Finally, as the academic lead I offered the user voice and helped shape the product to best support the academic community. This partnership resulted in collaboratively drawn plans, tested in the user environment and continually aligned with all agendas. Beyond the team, partnerships with key academics (‘champions’) provided a bridge into faculties and peer endorsement.

2.    Engage not enrage: defining impact and markers

The term ‘impact’ is not neutral; individual and disciplinary differences, rooted in the experiences of REF, lead many academics to be reluctant to engage with the term. Common feedback relates to not perceiving their work as having impact, not having tangible impact, not being able to quantify impact, or being unable to track distal effects. At the more challenging end, for some academics ‘proving’ an external change as a result of their work and metricising the answer is at best difficult and at worst an affront to the inherent value they perceive in the work. This is further compounded by focusing on simple metrics and markers of implied quality (e.g. impact factor). Much of the variation in impact engagement rests on a narrow definition and an assumption that only large and measurable effects are worthwhile. A restricted definition of impact can alienate staff, particularly those from arts and humanities, who feel they do not fit the simple impact pathway (input–output–outcome). Part of our consultation has been to broaden understanding of impact and decouple impact capture from impact assessment. This has involved much mapping, deconstruction and dialogue to develop both an aerial understanding and increasingly discipline-specific models. We have much further to go, but the foundations are looking positive, and leading the rollout with the academic voice is crucial to this endeavour.

3.    Minimise user effort

However comprehensive the contents, a system must be easily navigated, with clear processes and minimal ‘clicks’, otherwise the effort of use outweighs the benefits. A partnership approach is crucial to ensure feedback from users is fed directly to the technical experts, and that the user interface is considered throughout. Increasingly, academics are challenged with multiple systems and agendas, and thus it is crucial that any impact agenda is linked to aligned activities on open access, public engagement and staff development to produce a coordinated and centralised approach. Impact needs embedding, and the tools that support it must be very straightforward if we are to successfully equip academics to maximise the benefit of their work.

The reality of developing an institutionally embedded impact system is one of complexity and ongoing effort. As the academic partner in this work I have witnessed the benefit of a genuinely joined-up approach between strategic leads, central support services and end users. I have also been privileged to be given the freedom to investigate how impact is constructed within a disparate academic community, ultimately helping to support and engage researchers more effectively. We are still battling with a number of fundamental issues: how impacts are attributed to individuals and apportioned between teams; how to fully engage those who are resistant to a systematic approach; how to address elements of suspicion about the purpose of the system; and many more. Our system is paradoxically both well developed and in its infancy, and the upcoming larger-scale engagement and training phase will undoubtedly unveil more challenges. But then we’re academics. We’re used to that.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Julie Bayley is Coventry University’s Impact Officer (based within the Business Development Support Office) and a Senior Researcher in Health Psychology (Applied Research Centre in Health and Lifestyle Interventions). Her research covers the broad impact agenda and the application of psychology theory to individual behaviour, drawing these areas together to support the process of embedding impact management in HE. Julie has worked extensively in behaviour change and intervention development, primarily in adolescent sexual health, and currently supports a number of quality improvement activities at Coventry University.
