Impact case studies, such as those produced for the UK's Research Excellence Framework, often present neat linear impact narratives that reflect the transmission of explicit knowledge from the world of research to the world of practice. Vincent W Mitchell, William S Harvey and Geoffrey Wood argue that by privileging easily quantifiable explicit knowledge over subtler socialised forms of tacit knowledge exchange, the impact of the social sciences and of applied fields of research, such as business and management studies, is vastly underestimated.
Tacit knowledge is the knowledge we draw on while doing something (e.g., driving or teaching), but which is difficult to express in language or even to be conscious of. For example, the rules of chess are explicit knowledge, but knowing how to play a game and win requires tacit knowledge (i.e., knowledge that guides you on what to do in which circumstance). Invariably, tacit knowledge is contextual and grounded in practice; critically, tacit knowledge is needed to make use of explicit knowledge.
Consider two types of tacit knowledge. The first is implicit or interactional tacit knowledge, which can be made explicit through externalisation processes of reflexivity, such as getting interviewees to reflect on how they do things. The second is inherent or practice tacit knowledge, which cannot be revealed through externalisation processes and can only be demonstrated and observed during socialisation processes, which allow it to be enacted and emulated: for example, knowing how to operate a complex new piece of software or scientific instrument in a given context or for a given problem. This latter type is encapsulated nicely in Polanyi's memorable phrase, 'we can know more than we can tell', echoed by the former Hewlett-Packard CEO Lew Platt, who once famously said: 'If HP knew what HP knows, we'd be three times more productive'.
In a recent paper, we explored the question of 'where does all the know how go?' in relation to national impact assessment exercises such as the Research Excellence Framework (REF). In some disciplines, external assessments of impact often yield poor results, but is this because we are looking in the wrong place? We suggest the significance of tacit 'know how' is often overlooked in the REF and other national impact evaluation exercises. Instead, a focus on direct explicit effects (e.g., changes in the law in response to a report) detracts from more subtle accounts of knowledge flows and indirect outcomes.
To create, convey and capture these two types of tacit knowledge, we focus on the key processes of externalisation and socialisation, which we exemplify in Figure 1. The first involves the conversion and externalisation of interactional tacit knowledge into explicit knowledge via words, images and concepts. A typical example of this might be books, particularly those with a storytelling 'how to' or 'my guide to' component, where academics write about how things were done. The greater space a book affords allows authors to explain, expound and elaborate on the how and why in much more detail than is normally possible in research articles.
Figure 1: Examples of different ways to create, capture and convey tacit knowledge.
The second involves the exchange of practical tacit knowledge through joint activities and socialisation. In management research and business schools, for example, typical socialisation processes might be industry secondments and communities of practice. Industry secondments allow for the daily practice and demonstration of tacit knowledge, which both practitioners and academics can see, emulate and learn from. In Figure 1, we also note a couple of examples of activities that are low in embedded tacit knowledge, such as company talks, despite these being commonplace in the dissemination of research. In contrast, our example of sitting on an advisory board could provide practice opportunities over many years, as well as many opportunities for reports and minutes to be created and actioned, making it potentially helpful as both an externalisation and a socialisation process.
Current REF impact measures focus on how specific explicit research findings lead to discrete and closed-ended outcomes, e.g., pieces of regulation and formal policy documents. Thus, much of our understanding and dissemination of the tacit knowledge that underpins how research findings are used in practice, in everyday and informal ways of doing things, is lost. This is especially problematic for social science impact, where – as in management – impact is contextual, complicated, messy and not easily observed. When it comes to reporting impacts, it is perhaps not surprising that business schools reach for what is direct and measurable: for example, concrete regulatory change. This explains why so many business and other schools highlight their impact on public policy, discounting all they may do to enrich the everyday practice of management.
For some social science disciplines, this raises the question of whether we are looking in the right place for impact. According to REF research impact statistics and informal assessments by commentators in the popular media, business schools do not perform well here. This is despite the fact that they provide relevant skills and knowledge for employers and deliver superior employment outcomes compared to many other degrees, including STEM subjects. Either this is a major market failure, or people are looking for business school impact in the wrong place, whether by accident or design. We argue that much of the knowledge generated that is of value to employees, both in the classroom and through research and direct interfaces with business, is simply missed in the current impact assessment process. As we have previously discussed, the value of accounting for such research-based teaching and of using students as research translators to create change is largely ignored. This abject neglect of an impact pathway is unjustifiable, demotivating and vastly underrepresents the impact of management and other social science research.
While the higher education sector tries to figure all of this out, it is confusing for stakeholders and even for those who work in the system. The REF and the Knowledge Exchange Framework (KEF) have struggled to deliver clarity and consistency on measures, such as impact, that have been added over the years. This has led to conflicting league tables, when, deep down, there is a shared disciplinary knowledge of who the best researchers and universities are and what the best research looks like, however much tacked-on additional evaluations (including various mechanistic measures) obscure this. The same goes for impact: there is clearly diversity in how seriously organisations take the output of different business schools and their scholars. Greater acknowledgement of the types of knowledge at play in impact would also help those working in higher education to focus on what one hopes is the government's real intention: to encourage and better document impactful work.
A better understanding of the tacit knowledge involved brings nuance into the impact discussion and gets us closer to understanding why differences exist between those who do it well and those who do it less well. In turn, this gets us closer to being able to advise universities and individual researchers on how to have more impact. Although we use examples from the business and management disciplines, we suspect that the same logic applies, and that the importance of tacit knowledge is salient, across all areas of scientific research. Furthermore, as both the KEF and the REF undergo review, we wonder whether the KEF, like the REF, might suffer from a tacit knowledge blind spot.
This post draws on the authors’ article, Where does all the ‘know how’ go? The role of tacit knowledge in research impact, published in Higher Education Research and Development.
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
Image Credit: Valentin Jorel via Unsplash.