
Vincent Mitchell

September 7th, 2022

Research assessments tell us what impact research has and who created it, but say little about the why and how.


The introduction of impact into the 2014 Research Excellence Framework (REF) and the results it provided had a transformative effect on perceptions of and approaches to research impact. However, REF 2021 and the results of the first KEF have had a more subdued reception. Vincent W Mitchell argues that a reason for this is a lack of focus on tacit knowledge in both exercises, which, whilst good at mapping what universities and researchers do, are poor at analysing how research impact could be improved.


REF (the Research Excellence Framework) and KEF (the Knowledge Exchange Framework) suffer from a shared problem. They measure what universities are doing and who is doing better or worse. This approach has an obvious limitation: it assumes that people know how to do better quality research and have access to the resources that will enable them to act upon that knowledge. Better questions to ask are: why are some universities better at research impact than others, and how do some institutions find themselves in more impactful KEF clusters?

As we explored in a recent paper, something that is often overlooked in REF and KEF assessments is ‘know how’. For example, REF impact measures apply broadly across many contexts, but focus narrowly in terms of evidence about explicit outcomes: money saved, jobs created or changes to policy. Many of these measures are focused on explicit knowledge, which is codified and thus easily explained, measured and communicated in written form. KEF too favours explicit knowledge measures, partly for efficiency: they are already collected or based on readily available data. These measures allow comparisons about what impact is created and who is doing better at creating it, but they say less about the why and how.

In contrast, tacit knowledge, or know how, is the knowledge that we all draw on while doing things, like driving or teaching; it is difficult to express in language or even to be conscious of. For example, a map represents explicit knowledge, but working out where you are and where you need to go requires tacit knowledge. Crucially, we need tacit knowledge to use explicit knowledge. Similarly, in deploying the explicit knowledge a university has, from either research papers (REF) or teaching materials (KEF), tacit knowledge is required to create the desired outcome.

Focusing on KEF, the indicators it uses are primarily quantitative and reflect explicit knowledge exchange, for example income derived from knowledge, academic staff time involved in the delivery of activities, companies created, and the proportion of publications that have non-academic co-authors. The latter measure is taken from Elsevier’s analysis of co-authorship of papers with non-academic partners using SciVal and Scopus tools. Another source of evidence is the set of questions from the HESA HE Business and Community Interaction (HE-BCI) survey. Table 1 shows a very rough, surface-level analysis of these questions, and we again see the dominance of explicit measures of knowledge over tacit ones, with only one question out of 19 being linked with any form of tacit knowledge.

Table 1: A subjective analysis of HESA questions using a tacit knowledge lens.

This focus on explicit knowledge has undesirable and unintended consequences. For example, the impact dimensions of the REF and the KEF both largely encourage a transactional approach to impact, whereby a specific parcel of formal knowledge is evaluated for its effects on regulation and practice. This misses other important dimensions of impact and, indeed, the role of universities as places generating dialogue, conversation and knowledge, but also informed doubt and pragmatic, evidence-based ways of solving problems using both formal and tacit knowledge. Critical and analytical thinking or creativity skills, for example, largely rely on tacit knowledge. Some of the HESA items are also heavily biased towards the sciences through their focus on the sale of Intellectual Property (IP). Applied to the social sciences this can have perverse effects. For example, business schools produce graduates who have the skills and tacit knowledge sought out by large and prestigious employers. A focus on student start-ups and their IP may have a different and, arguably, perverse result, by rewarding institutions with poor graduate employment rates, where students set up businesses as a measure of last resort.

However, some KEF inputs do point towards elements of tacit knowledge exchange, e.g. public and community engagement, research partnerships, working with the public and third sector, working with business, and skills, enterprise and entrepreneurship. Knowledge exchange is often described as a ‘contact sport’, and I argue it is in this ‘contact’ that tacit knowledge thrives. The KEF engagement metrics, which some have described as ‘trajectory measures’, don’t in themselves say much about the impacts realised, but provide a low-burden way to show that a university is undertaking the type of activities, at a scale, that one might reasonably expect to create impact via their ability to develop tacit knowledge. The HESA analysis in Table 1 also suggests that in over 40% of the questions the activities measured could involve tacit knowledge, or even be driven by it. This know how is more important than the know what of how much money the activity generates. I suggest there are missed opportunities to consider tacit knowledge: the questions with ‘Yes’ in column 4 could be very useful places to look for know how lessons that could be applied to KEF endeavours.

In Australia, the Australian Research Council (ARC 2018, p.44) is unusual and progressive in that it separates out the impacts and outcomes of specific projects, which are case study based (like REF), from the ‘engagement’ environment of the department, which includes a narrative about how impact is brought about. This covers a description of the engagement processes and the purpose of the engagement, how the university discipline engaged with research end-users for mutual benefit, as well as the duration and extent of the engagement activities. This is arguably much more in line with an understanding of the role of tacit knowledge in impact.

Ultimately, a description of what we are doing and who is doing it doesn’t help us to do better. Focusing on how it’s done, which is what I argue tacit knowledge brings to the impact discussion, gets us closer to understanding why the differences exist, and closer to being able to advise on how to generate more impact. REF has been to some extent effective in identifying the best published work and who is doing it, but has struggled to help with the questions of why and how. Tacit knowledge is part of the resources people need to do better research and create impact. KEF could also benefit from a focus on why and how, although here at least there is some acknowledgement of the importance of processes that can create and convey tacit knowledge. What neither KEF nor REF shows, however, is an awareness of the differences in the kinds of knowledge they are trying to assess and influence. This is an unfortunate omission, but one which we could easily begin to understand better in the future.

 



Image Credit: Randy Tarampi via Unsplash. 


 


About the author

Vincent Mitchell

Vincent Mitchell is Professor of Marketing at The University of Sydney Business School. He has spent 30 years researching consumer problems and consumer, market and policy solutions. He is in the top 1% of marketing academics in the world for scholarly impact.

