
Introduction

8.1  External impacts are rooted in collective ‘tacit knowledge’

8.2 The time lags in achieving impacts

8.3  Generating an evidence base about external impacts

8.4 Comparing organizations’ and disciplines’ performance

8.5   Managing impacts work – potential pitfalls

Summary

Introduction

We live in a social world shaped primarily by organisations. Our links to organisations and professions confer or create personally important identities, often stimulating strongly rooted processes of identification with the places and teams where we work, and the projects we work on – especially in academia, where ‘mission commitment’ is an important incentive. In most cases, organisational experiences and loyalties also trigger a substantial adaptation of our own goals and values to fit in with those of our surroundings (Galbraith, 1969). And organisations, professions and communities are the key determinants of what gets accepted as true or right or appropriate, of what knowledge once produced survives and of the criteria by which it is used, stored or lost. It is in this sense that Mary Douglas stresses in How Institutions Think (1986) that although ‘institutions cannot have minds of their own’, none the less ‘institutions confer identity’, ‘institutions remember and forget’, ‘institutions do the classifying’, and they can also ‘make life or death decisions’.

It should not be surprising then that departments, research laboratories, and research groups and units within universities (which themselves can be thought of as top-level organisations, or as congeries of smaller organisations, or as local network identities) are the primary vehicles for generating external impacts from research. Of course, each of these units is made up of individual academics and researchers. But the importance of academic teams, departmental traditions, research synergies and organisational cultures all mean that in the most successful research environments the whole is far greater than the sum of the parts. From the outside world’s point of view, for any reasonably specialist audience or professionally important issue, the primary unit of perception is the department, or the sub-units within departments, such as research groups and labs. In a more generalised way for specialist audiences, and for the general media and lay public at large, universities are important carriers of reputation and traditions.

We first examine how growing external impacts (construed as occasions of influence) are rooted in the importance of collective ‘tacit knowledge’ in the development of organisations and networks of influence. Next we look at the time periods involved in achieving external impacts, which are generally much longer than those of academic impacts. The third section examines how to generate appropriate systematic information on external impacts. However, ascertaining any given level of occasions of influence in isolation is likely to be of only limited usefulness unless it can be set against a background of well-informed comparisons. In particular, research funders and government regulatory bodies, along with key external stakeholders themselves, may be ‘naïve customers’ with exaggerated expectations of what is possible or desirable in terms of external impacts. Hence in section four we compare across organisations to ensure that a record of external impacts is assessed against meaningful benchmarks. Finally, we conclude the chapter by looking at some of the potential pitfalls and extra sensitivities that academic departments and universities need to take account of as the scope and scale of their external impacts expands.

8.1  External impacts are rooted in collective ‘tacit knowledge’

Departments, laboratories, and research groups and units are the key or essential ‘bearers’ of external impacts for several reasons. They form the key foci of ‘team identity’ for academics, the most important level at which work tasks are organised and specific duties and fields of activity are defined. And, as discussed in section 4.3, the team production of knowledge has been steadily increasing across all fields of academic endeavour over recent decades, although this trend is least strongly marked in the humanities, and has been more modest in the social sciences than in the STEM disciplines. For external audiences these team production aspects are also likely to be of considerable importance. For instance, we know that research for companies seeking comparative advantage often works best when the academic team includes senior academics with relevant experience and a track record in the sub-field, plus younger research staff in touch with the latest developments in analysis methods, IT or other research frontier ‘technical’ elements.

In departments, labs and research groups external impacts also become more visible and will seem more important (when viewed internally or externally). Influences on outside bodies can be put into a better perspective at an organisational level because:

  • Contributions accumulate across individuals, with the experience of external work by one researcher for client A potentially feeding forward to later work by another researcher for client B, both in specific content terms and in terms of knowledge of how to interact and deal effectively with external clients. This last aspect is particularly important when it comes to being able to handle the logistics of tendering, in response to requests for proposals (RFPs) or invitations to tender (ITTs), in the first place. Later on, knowledge of working with outside actors matters for the effective management of relations with such external clients during project completion and negotiations over licensing, patents, dissemination activities, what can be published in journals or reports, follow-on work, and ‘intellectual property rights’ (IPR) issues.
  • Many applied problems that external clients bring up require joined-up solutions that do not fit any one researcher’s competencies fully. Hence an approach from business or from public policy-makers that is backed by funding, whether as a direct commission or inquiry, or in a tendering format, can stimulate the formation of new teams, networks and synergies inside academic departments and research laboratories or units. This ‘focusing’ effect is something that government and foundation research funding bodies also try to achieve in solely academic work by launching research ‘programmes’ with ear-marked funds for work fitting into themes that they specify. But whereas such initiatives are normally consulted on and signalled well in advance, new requests from business or government typically originate at short notice, requiring good ‘horizon scanning’ by departments or labs in order to find out about them. Labs or departments must develop a capacity to respond quickly and creatively to these requests and also have to bear the often substantial tendering costs involved in addressing them.
  • A critically important means of external organisations and external professional practice meshing with academic departments and research laboratories is via the occupational mobility of students or young alumni (such as post-docs) moving into jobs in business or government. Undergraduate and masters students and alums (and in the US doctoral students leaving at the ABD (‘all but dissertation’) stage) often bring to their new employers experience of the whole research environment of the department or lab. When academics and research leaders only put on specialised courses in their own areas, they often leave it to coursework students to integrate and make sense of very diverse intellectual offerings – and the best and brightest often succeed (against all the odds) and carry that value-added with them in moving on. PhD students and post-docs work in more specialised areas, and so they are less exposed to whole-department or whole-lab influences, although peer group networks in the ‘group jeopardy’ situation of doctoral work often compensate a little for this. When they move on, these alums also often go into specific, technical roles within business, the professions and government, bringing with them key knowledge of the newest concepts, methods of analysis or experimentation.

Underlying these effects that tend to make the organisation more than just the sum of its component parts are some key differences between explicit and tacit knowledge. Laboratories and departments of course accumulate a great deal of explicit knowledge, because they combine researchers and academics of different ages, orientations, skill sets and technical capacities within a single disciplinary focus. Including both theorists and empirical researchers fosters a useful dialectic of discovery and integration research, and the cumulative experience of departments or laboratories is valuable for businesses or government in giving them confidence that researchers have collectively tackled similar problems before and achieved success. In addition, a good deal of recent work on innovation and on scientific advances has stressed that scientists and academics invariably operate with many beliefs, practices and standard operating procedures that are only partially documented. Much knowledge that could in principle be made explicit is not in fact crystallised formally or written down, but instead is contained in traditions and working methods that are understood by staff members.

An important strand of organisation theory argues that the same is true of major corporations, government agencies and indeed all formal organisations (Nonaka, 1994; Nonaka and Takeuchi, 1995). Part of what makes different organisational cultures distinctive lies in what is not written down but contained only in the minds or brains of current organisation members. This can be summarised as ‘tacit knowledge’ and its importance is difficult to overstate. It is especially hard for external clients and audiences to perceive or take account of ‘tacit knowledge’ unless the client maintains close and regular contact with the department or laboratory concerned, usually involving regular liaison, frequent visits or seconding staff to work inside the university. This is just as true of STEM disciplines, and indeed it is in exactly these disciplines that ‘serial’ linkages from firms, government funders or major foundations with university units most commonly occur – precisely because recognising and being able to evaluate tacit knowledge is likely to be of critical importance for future investment or funding decisions.

Recent work in the philosophy of science stresses, however, that the importance of tacit knowledge is more general and pervasive than the literature focusing only on innovations in high-tech industries suggests. Collins (2010) argues for the existence of three kinds of tacit knowledge:

  • relational tacit knowledge, consisting of tacit knowledge that could be made explicit under more favourable conditions;
  • ‘somatic’ tacit knowledge that relates to the limitations of human bodies and minds, covering forms of knowing (like how to ride a bike) that could not conceivably be written down for, and implemented by, a sophisticated robot; and
  • collective tacit knowledge, held at the organisational level in the shared understandings of multiple personnel. In Collins’ view this is the most irreducibly tacit form of knowledge, the most resistant to capture or rendering explicit.

The importance of tacit knowledge also underlies many of the difficulties of professional communication between scientists or academics and lay audiences. It explains why researchers are often made uncomfortable by how ‘outsiders’ to their discipline or research area (without access to its tacit knowledge) try to summarise their ‘explicit knowledge’ results or even their general orientation to research. Some authors have argued for a kind of ‘periodic table’ of different kinds of expertise, and have stressed that the extent and character of tacit knowledge available to different kinds of actors underlies many of the key differences that we acknowledge as important for assessing the usefulness and authority of varying forms of expertise (Collins and Evans, 2007).

In an effort to control such effects, there has been something of a movement in universities away from the centralisation of press and communications functions in a university-wide office that is necessarily generalist in its approach, and towards more ‘embedding’ of writers and communications or dissemination experts within laboratories, research centres or major academic departments. This trend often brings into universities former business or government personnel with a directly relevant scientific or academic background, but also with experience of publicising and explaining exactly the same issues, problems and research potentials to lay audiences in key stakeholder organisations.

8.2 The time lags in achieving impacts

In the 40s everyone was excited about supersonic flight and atomic power, and in today’s history books we continue to think of that era being dominated by those technologies. It wasn’t. One might more correctly think of the 40s as a time of tanks, aeroplanes, cars, coal and wheat and pig farming. We inhabit a world where what I call ‘the futurism of the past’ falsely conditions our conception of the past.

David Edgerton, quoted in Sutherland (2006)

The conventional wisdom is that achieving external impacts from academic work involves much longer time periods than those involved in academic impacts, discussed in section 1.2. Yet there is no reason at all to believe a priori that this should be so, so long as we are thinking of external impacts only as ‘recordable occasions of influence’ on society outside the university sector – rather than taking an all-inclusive view of ‘impacts’ as including causal contributions to external organisations’ outputs or outcomes, or positive changes in social welfare. It is clearly true that the diffusion or wide implementation of new ideas and innovations does often (but not always) take time – a point to which we return later in this section. But the initial influence from academia on external organisations need not necessarily be slow in coming.

Indeed some aspects of generating external impacts should show a radically speeded-up process of influence. Wherever university researchers are directly commissioned or contracted to undertake work for business (nearly always) or public policy-makers (often), the grounds for expecting rapid impacts (as influence) are manifold:

  • The research processes involved in commercial or contracted research are typically much less leisurely and far more time-focused than conventional academic work, with much stronger time-disciplines, backed up by contractual or funding penalties for failing to hit agreed timelines and milestones. Where work is directly commissioned or contracted, then there need be no information-access lags in its definition. Direct communication of research needs from the client to academics should in principle be much more focused, swifter and less ambiguous than a process of university researchers trying to anticipate ‘client needs’ in the abstract.
  • The clearance, authorisation or consultation times involved in academic research, especially in management or the social sciences, can be radically reduced. For external academic researchers studying government services or the welfare state these delays are often very long, and sometimes so extended as to be almost insurmountable. The clearance, authorisation or consultation times involved in contracted research for government agencies may still be substantial, especially where the research involves government bodies other than the one commissioning the work: but they are at least tractable. In businesses, commissioned research will often work smoothly too. But in complex corporations consisting of different sub-sections with different interests, the problems of ‘influence costs’ are never completely eliminated for outside researchers (whether consultants or academics).
  • Where researchers are working directly with the owners of proprietary or normally confidential data, the usual time lags involved in getting access to the relevant data are short-circuited. The company or government agency involved instead makes the requisite information available directly, albeit under appropriate NDA (non-disclosure agreement) safeguards. In STEM disciplines researchers may gain access to proprietary technologies or materials, or get to use improved or expensive equipment that would otherwise be unaffordable. In the social sciences researchers may gain access to huge transaction datasets about corporate customers or government service users that allow much better (often ‘real time’) social information to be collated (Savage and Burrows, 2007, 2009; Dunleavy, 2010). Or they may gain access to internal customer or staff surveys and the accumulated results of internal research (focus groups, usability studies, randomised control trials, etc.), all of which greatly reduces the time needed on data collection.
  • Even if brand new research needs to be set up from scratch the normal extremely troublesome delays in gaining permissions, negotiating access for interviews or surveys, securing elite interviews and so on are all dramatically shorter for ‘insider’ research. The development of pilot studies can normally be dramatically speeded up when working directly with government or business clients. And strong external funding can allow main data-gathering periods to be reduced by increasing the number of staff employed to do surveys, and by enabling the use of more expensive or comprehensive techniques for recruiting respondents.
  • Especially in STEM disciplines where academic researchers are working with manufacturing businesses, but also covering other areas (such as most work by researchers in business schools), the race to be first to acquire new knowledge or generate innovations or new techniques and business processes has strong commercial implications. There are much stronger incentives for researchers to make advances, especially where new technology is licensed to firms and the researchers and the universities involved stand to gain most from widespread adoption. A study of 86 US universities found that they give inventors 25 to 68 per cent of income generated, with the average being 41 per cent (Lach and Schankerman, 2007, p. 3). In addition, universities with bonus-pay arrangements generated ‘on average, about 30-40 per cent more income per license, after controlling for other factors’ (op. cit., p. 5). Four fifths of private sector universities had bonus-pay incentives for staff, compared to only half of public sector universities, and private universities also had more generous arrangements.
  • As soon as research is completed and written up it can be directly communicated back to the client. There are none of the lengthy publication timelines involved in conventional academic work, nor any peer review demands creating uncertainty or potential distracting factors. Clients may want research work to be published for marketing or regulatory reasons – for instance, big pharmaceutical companies are legally required to publish the results of all drug trials in some form, and often want favourable studies to be published in the most reputable medical journals achievable. But this is a secondary (marketing) add-on to their getting value-added from the research.
  • There are none of the primary delays in research dissemination and client recognition that most often occur with conventional research. This is not to say that the clients for university research always respond positively to what they receive or act upon it, because we are concerned here only with the first-impact stage of a recordable influence – such as a business person or a government official reading a report of what research has discovered.

Of course, the later stages of corporations or public policy-makers deciding whether to do anything further in response to this primary influence are perhaps just as likely to be protracted as would be the case if the firm or agency had just stumbled on the research in an academic journal. However, where businesses or government agencies have commissioned and paid for research work, rather than just getting it for free, we might hope that their incentives to follow up on it are somewhat increased. A great deal here depends on the balance of the ‘sunk costs’ already expended on the research and the wider ‘change costs’ of doing anything to change production, services or business arrangements. In general this balance should be much more favourable for commissioned research (which in business or government will tend to be focused on incremental improvements) than for solely academic work.

However, only some kinds of academic research are likely to be directly commissioned or contracted by businesses, government agencies or most foundations – namely applied research. Less often, basic research that also fits closely into the ‘discovery’ category (discussed in Chapter 5) may be funded by high-tech businesses, where it may potentially confer comparative advantage in very technical and fast-moving markets (such as IT and perhaps pharmaceuticals). Other kinds of research – basic research and basic research with user-interests – are less likely to be directly contracted or commissioned. In terms of the categories used in Chapter 5, research that falls into the integration and bridging categories, along with blue-skies discovery research, is highly unlikely to be externally commissioned.

For all such un-commissioned research, the possible contributory factors to an ‘impacts gap’ (discussed in Chapter 6) all tend to militate in favour of time lags for achieving external impacts that are longer than those involved in getting academic impacts. Governments and businesses doing general horizon scanning in their areas tend to rely on professional certification in journals in identifying reliable or important research. Hence the external impacts gap factor largely comes on top of the conventional time lags to publication or academic recognition. Thus demand and supply mismatches and weak incentives both imply possible recognition delays – academics responding weakly and late to new government, business or civil society needs; and external organisations missing entirely, or picking up only very late in the day on, research relevant for their needs. Difficulties in communication, especially the esoteric quality of academic professional communication, enhance these risks, as do the cultural differences between sectors. Hence the ‘delayed recognition’ problems for research discussed by Gillies (2010) (see section 1.2) all apply with particular force to mainstream academic research (when not commissioned). He points out that in the case of the papilloma virus causing cervical cancer, the time lags involved in securing academic acceptance of an idea inconsistent with the main paradigm delayed the development of a vaccine against a dangerous and often fatal disease, with large-scale human costs and a significant loss of revenue and profits for drugs companies also.

So over what period should academic departments and research laboratories seek to track and demonstrate external impacts? In preparing for its 2014 Research Excellence Framework (REF) exercise, the English state funding body (HEFCE) acknowledged that the period should be longer than the five-year stretch being used for citations-based and peer-reviewed research. It suggested that seven- or eight-year periods would be most relevant. Most UK universities responded to this suggestion by saying that even eight years is too short to reach a meaningful estimation of impacts – but here ‘impacts’ is being construed in a far broader sense, going beyond occasions of influence to embrace also outputs, outcomes and implied social welfare changes.

And indeed, if for a moment we were to adopt such a maximally extended concept of ‘impacts’ – not just occasions of influence, but also making a difference to the implementation of outputs, the achievement of outcomes and the positive boosting of social welfare – then time lags can be much longer than is often supposed. The historian of technology David Edgerton called one of his key books The Shock of the Old (2006) in order to stress the very long time periods needed for most world-changing technologies to achieve widespread impacts or complete acceptance – for instance, the long period involved in the adoption of electricity, or the eight decades or more that elapsed before the late nineteenth-century internal combustion engine became the mainstay of the inter-country shipping trade (handling almost 95 per cent of goods moved between countries). In the digital era the spread of innovations has clearly speeded up, as in the cases of mobile phones and internet systems. But equally, as Edgerton points out, many of the expected technologies of the early post-war period (like rockets, atomic power and automation) have progressed just as slowly as most late nineteenth-century changes.

A far more specific analysis focusing on the extended impacts of university research is that of J.D. Adams (1990), who looked at the link between the growth of productivity in 20 US industries and publications in scientific journals directly related to them. He found that there were long lags, typically of between 20 and 30 years, between the publication of relevant research and consequent improvements in industry productivity. Even in areas like the sciences, where time lags are shorter and reducing faster, many years still have to elapse between the publication of research and improved economic growth.

8.3  Generating an evidence base about external impacts

Showing how university research feeds into wider economic, societal or public policy development entails three main information-collection steps:

  • making an effort to track and record information that will otherwise be unknown or will be known only informally and thus left implicit;
  • capturing in permanent form information that is explicitly known but only temporarily, and that will otherwise be lost in the normal course of things;
  • encouraging external audiences to express their appreciation of contacts with a department, research lab or university in a more direct and explicit form than they will normally do.

In addition, however, departments and universities need to be able to easily access and arrange information about external impacts in forms that will be plausible and convincing for external funders, government regulators, the media, or other sections of the university. Often funders or regulators will ask for information about external impacts in a particular prescribed form, and their formats will typically differ. They may also change over time, especially in the current period, as interest in proven external impacts grows and universities get better at capturing relevant information. So it is a question of trying to anticipate what their formats for reporting impacts will look like, and of collecting information in forms that are sufficiently flexible to be readily adapted. Our general advice here is that nothing convinces external audiences so much as data and quantitative information. But at the current stage of development of impacts thinking, it will also often be necessary to produce case studies of influence and qualitative accounts and assessments, a topic that we address in detail at the end of the section.

Also, most of the data collection methods recommended here fit closely within our main definition of an external impact as ‘a recordable occasion of influence with a non-university organisation or actor’. But it is important to recognise that many funders, regulators and other parts of universities will probably be ‘naïve customers’ who are still operating with an extended conception of impacts, encompassing not just occasions of influence, but also an expectation or demand for proven causal effects on outputs, outcomes or social welfare. Intellectually we have argued that such conceptions are indefensible and cannot be implemented – but this does not mean that external audiences (and even university hierarchs) will recognise this. Departments, research labs and universities are hence likely to be asked about extended ‘impacts’ over which they actually have no control, and they must be able to put up some form of intellectually coherent and well-evidenced account.

For departments or research laboratories, the first step in understanding external impacts as occasions of influence, and at least getting some handle on extended causal ‘impacts’, is to adopt a systematic approach to recording interactions with all forms of outside audiences for research. Our essential recommendation here is – try to track everything, including especially the following:

Electronic or other records of the department’s or lab’s work being discussed in general media (newspapers, TV, radio and general-interest internet website and blog sources) and specialist media (such as the ‘trade press’, industry journals or magazines, close-to-public policy journals and magazines), and the journals, newsletters or other publications of professions, think tanks, consultancies, trade unions, charities and NGOs. Most general media information is collected by a university’s media and communications office, and so is a free good for the department or lab, although you usually need to ask to be given tailored or more detailed reports. Other easily available data can be provided by the university’s web managers, and sometimes by central units organising major events, although again you usually need to ask to get specifically tailored information.

But it is important to recognise that usually far more information can be gleaned from specialist media that are much closer to the department’s or lab’s areas of interest, and that the knowledge needed to access this data will be largely confined to the department or lab itself. Tapping into it will hence almost always require asking a researcher or a post-graduate student to search the most relevant (especially electronic) sources for coverage of what the department has done. The first time such work is undertaken it may take quite some time to find out what information is available and to collate it back over as long a period as seems relevant. But so long as the initial investigation is well documented, making clear what methods were used and what does and does not work for that department’s profile, subsequent annual top-up exercises can be quickly carried out (see the short sketch after the list below).

It is worth bearing in mind here other public policy, professional or trade forums where the work done by the department’s staff may be discussed. Key sources in the social sciences might include:

  • debates and proceedings in the legislature or Parliament and parliamentary committees;
  • papers, publications and website coverage of research by the national government;
  • sub-national legislatures, executives and bureaucracies; and
  • regional and local councils or health authorities or regional development bodies, etc.
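
By way of illustration, an annual top-up count of this kind can be largely mechanised. The sketch below is ours, not a prescribed tool – the file name and column layout are assumptions – and simply tallies a year’s saved search results per specialist outlet:

```python
# Minimal sketch of an annual 'top-up' media count. Assumes a CSV export of
# saved search results with columns: date, outlet, headline, url.
import csv
from collections import Counter

mentions = Counter()
with open("media_mentions_2011.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row["date"].startswith("2011"):   # keep only this year's items
            mentions[row["outlet"]] += 1

# Report mentions per outlet, most-covered outlets first.
for outlet, n in mentions.most_common():
    print(f"{outlet}: {n} mentions")
```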

Funded linkages, such as research grants or support, consultancies or joint ventures, licensing income, payments made for training sessions or courses, conference or attendance fees for one-off events, and occasional donations or support for events. Much of the activity here will create an easily auditable financial trail in the university or department accounts. The great thing about financial data is that it gives an excellent indication of the scale of outside interest in university research – the more money has flowed through, the greater the interest and presumably the value to the external organisations or actors.

Time commitments by external actors to come to department events or seminars, or to make visits to come and talk to researchers or to consult them on issues. In business, government and civil society organisations, time is money. So the more that external actors give time to department or lab events, and the more senior these personnel are, the greater the imputed external value of what the department or lab is providing. It is worth bearing in mind the total time involved in attending an event, including travelling time to and from it.

Having excellent data on events, including the extended, integrated conception of an ‘event’ discussed in Chapter 9, is a key first step in being able to estimate time commitments. Getting department or lab members to log contacts with all external organisations in the simplest and most time-economical ways is another key step, and should cover meetings, phone calls, emails, advice giving, inward or outward visits, etc. (also discussed in Chapter 9). If this cannot be done then the likelihood is that contacts will be hugely under-recorded. So at the year-end, or even three or four years from now, the department will end up vaguely claiming to funders or regulators that there have been ‘many’ contacts, without any recordable evidence to substantiate this. By contrast, simple logs of contacts, organisations involved, the people spoken to or attending, and the time involved in the event or contact can create the basis for quantifying external time commitments precisely.
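
As a minimal sketch of what such a log might look like (the record fields, organisation names and figures here are all invented for illustration), something as simple as the following is enough to turn scattered contacts into precisely quantifiable time commitments:

```python
# Illustrative contact log: one record per external contact or event.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Contact:
    date: str          # ISO date, e.g. "2011-05-03"
    organisation: str  # external body involved
    people: str        # external people spoken to or attending
    kind: str          # "meeting", "phone call", "email", "visit", "seminar", ...
    hours: float       # time committed by the external actors, incl. travel

log = [
    Contact("2011-05-03", "Department of Health", "policy adviser", "seminar", 3.5),
    Contact("2011-06-14", "Acme Pharma", "R&D director", "visit", 2.0),
    Contact("2011-06-20", "Department of Health", "policy adviser", "phone call", 0.5),
]

# Year-end summary: total external time commitment per organisation.
hours_by_org = defaultdict(float)
counts_by_org = defaultdict(int)
for c in log:
    hours_by_org[c.organisation] += c.hours
    counts_by_org[c.organisation] += 1

for org in sorted(hours_by_org, key=hours_by_org.get, reverse=True):
    print(f"{org}: {hours_by_org[org]:.1f} hours over {counts_by_org[org]} contacts")
```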

Appreciations of contacts or work done are rarely explicit. People come to seminars, clap the speakers at the end, stay and chat over drinks, get ideas, make new contacts, and network with other attendees of interest to them. They think well of the department or lab because of all these things. But you cannot distil out this goodwill or favourable impression unless you ask the participants to record it in some way.

Getting seminar or lecture participants to rate events in response forms is now quite common for paid-for courses (where it is often rather onerous and tick-box style), but it is otherwise rare. So it makes sense for departments or labs to make it easy and expected for contacts to give them some feedback on events, ideally in a form that is very easy to fill in and can let respondents log free-text comments. For instance, pre-populate any response forms or emails with full details of the event or the contact already, and ideally also include the name, organisation and position details of the person being asked to comment, so that respondents do not have to waste time filling in details that should already be there. If staff members have given interviews or seminars or been consulted, it is a great idea to write individually to the organiser (or to senior people met there) and to ask them to very briefly record their appreciation in an email response, pointing out to them that this can be helpful for the department or lab in securing future funding support (on both steps see Chapter 9). Even automated requests for feedback that thank people for coming to events or contacting the department, but also solicit reactions or appreciations, may be useful with regular audiences.
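
Such pre-populated requests can be generated mechanically from the event records. The sketch below is illustrative only – the template wording, fields and attendee details are our own assumptions, not a prescribed format:

```python
# Generate a pre-populated feedback request from stored event records, so
# respondents never have to re-enter details the department already holds.
TEMPLATE = """Dear {name},

Thank you for attending "{event}" on {date}. A sentence or two in reply on
what you and {organisation} got out of the event would be a real help to us
in securing future funding support.

With best wishes,
[Department contact details]"""

attendee = {
    "name": "Dr A. Example",
    "organisation": "Acme Pharma",
    "event": "Research frontiers seminar",
    "date": "14 June 2011",
}

print(TEMPLATE.format(**attendee))
```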

Following up on causal influences to trace extended effects is important in all those cases of funded linkages and time commitments where more significant results might be claimed. Staff members or department/lab leaders who believe that an important effect was achieved on outputs, outcomes or social welfare should make an effort to get that recorded in some way by the external organisation. Do not rest easy with ‘rumours of influence’ and a vague knowledge of what happened next. Instead, commit a little time or effort to making more concrete what you know about the extended ‘impact’ of an intervention or contact.

Asking close organisational or personal contacts of the department at the end of each year to evaluate what they got out of their relationship with the department can also capture causal follow-on effects more synoptically. It is important to also include here cases where a post-doctoral fellow or other skilled or well-trained student moves from the department or laboratory to a company or government agency and has an immediate effect in helping to sort out a problem or to bring a project to a successful conclusion (although in this case later effects cannot plausibly be claimed).

Growing departmental or research lab portfolios of external impacts activities entails recognising that at the collective, organisational level there are many opportunities for creating synergies and improving priorities and performance in contact-seeking. Individual staff members have so many demands on their time from academic work already, and may have such small or episodic external contacts, that it is not easy for them to manage their external contacts in more systematic ways. And too often this knowledge will be both tacit (unavailable to other staff members or the department/lab leaders) and evanescent, getting superseded by new concerns or forgotten before it can be of any wider help to the organisation.

Yet accumulated at the departmental level, broader patterns and synergies will become more visible, as will opportunities to do more and gaps in contact-seeking or contact-making. Creating basic data on what the department has achieved, and then getting discussions of this information at a senior staff committee meeting and more briefly at wider staff meetings, can all help turn an otherwise disorganised mass of contacts into a better understood portfolio of external activities, one that can be actively managed and for which performance over time can be improved.

Developing metrics of performance, that is, quantifiable measures that capture key aspects and stay the same over time so that comparisons can be made, is the final stage in the process of departments or research labs increasing their self-awareness. Some of these metrics may be required anyway by government regulators or funding bodies, or by the university central administration. Often the indicators involved may not be all that valuable in capturing what really matters at the departmental or lab level. But they are often valuable nonetheless, because they alone allow comparisons with other departments in the home university, or with similar departments or research labs elsewhere. Comparisons often trigger productive questions about what practices we are not as yet following but might usefully copy or import.

However, metrics that are sui generis to the department may be more focused on what really matters to it, taking fuller account of factors that make its situation different from others. Such internal metrics can also be kept consistent, even if external comparison metrics must be altered – in response to funders, government officials or university hierarchs changing their minds about what information is to be collected, as they will often do. But purely internal or sui generis measures are also harder to create initially, to maintain consistently over time, or to communicate externally.
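
To illustrate what keeping such measures consistent over time might involve (the log format and metric definitions below are invented assumptions, not prescribed indicators), even a few lines of code can compute stable year-on-year measures from a contact log of the kind described above:

```python
# Simple, stable year-on-year metrics derived from a contact log.
log = [
    {"date": "2011-05-03", "organisation": "Department of Health", "hours": 3.5},
    {"date": "2011-06-14", "organisation": "Acme Pharma", "hours": 2.0},
    {"date": "2012-02-09", "organisation": "Acme Pharma", "hours": 1.0},
]

def yearly_metrics(log, year):
    entries = [c for c in log if c["date"].startswith(str(year))]
    return {
        "contacts": len(entries),                            # occasions of influence
        "organisations": len({c["organisation"] for c in entries}),  # breadth of reach
        "external_hours": sum(c["hours"] for c in entries),  # imputed time commitment
    }

for year in (2011, 2012):
    print(year, yearly_metrics(log, year))
```

Because the definitions never change, the same three numbers can be compared meaningfully from one year to the next, which is precisely what external comparison metrics imposed by funders often fail to allow.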

Writing case studies and other short accounts of external impacts is often a key activity in explaining what has been accomplished to funding bodies – whether foundations, government R and D agencies, or companies. The 2014 Research Excellence Framework (REF) used by the British government requires even the smallest academic departments to provide two case studies of external impacts, and limits the largest departments to providing no more than five or six such case studies. Typically case studies are short qualitative and narrative accounts, following the linkage from university research to influence over an external body, a stage that can be well-documented and where sensible judgements and claims can be made.

However, funders and external scrutinisers are rarely content to stop at this recognition and influence stage (the only sensible definition of impact in our view). Instead, like the UK’s REF process, they typically want departments to go further and to trace out how achieving an external influence then translated into that organisation’s outputs, the outcomes it achieved or the effect on social welfare. This is more difficult for departments to cover. If it is to be an auditable account (and not one that is too vague, too general, or often unsupported), it may well require departments going back to their external partners or to the bodies influenced and asking them to provide some such description or evaluation themselves.

Yet politicians and public policy-makers are often reluctant to commit their debts or influences to paper, because they do not want to be seen as ‘pinching’ ideas or as being dependent upon others for good ideas or information. Equally, public bodies may not wish to be seen issuing statements or responses that publicly favour one university or department over its competitors. Corporations especially may not want to formally credit university researchers with helping them to create value-added for their businesses, lest they open themselves up to legal claims. So it may be best for departments to think of accumulating media or specialist media evidence of these extended impacts, marshalling what social scientists call ‘unobtrusive measures’, rather than relying on being able to ‘cash in’ claims of influence in explicit statements from the bodies or personnel influenced.

Lastly it is worth stressing that the writing of ‘impact’ case studies to meet external requirements, or just for media consumption or to explain the department’s work for external audiences, is often a specialist, bureaucratic art form. For the British REF exercise, for instance, departments and research labs have incentives to try to define cases that span the widest range of the department’s staff – not an easy task when you have perhaps 50 to 80 researchers to cover and only five or six cases are allowed. Similarly for external media, departments will often want accounts of where and how they achieved impacts that are simpler than the complex underlying picture, and yet which are also defensible, supported by good evidence, and do not open the department up to charges of over-claiming or misrepresentation.

For faculties (or Schools) that group together related disciplines, such as the physical sciences, technology disciplines, the social sciences or the humanities, the analysis of external impacts is also important. This may seem surprising, because for most academics and researchers, the department or research laboratory that they belong to is their primary organisational identity. Faculties come a long way down the pecking order of staff members’ identities, usually third or fourth behind the macro-identity of the university and the micro-identity of the departmental sub-group or specific research team or unit that they work in. Nor do most external organisations and actors think of their relationships with universities in terms of faculties. They overwhelmingly see themselves as having relationships with the specific departments at the forefront of the science or knowledge areas that concern them. Or they may interact with university-wide bureaucracies or component bodies, such as sub-companies handling consultancy, contracting and research-licensing or joint venture matters, or the university’s corporate relations and media/communications units.

Yet faculties or schools are often important units within universities for the setting of priorities regarding spending increases or research expansion (or for cutbacks and research retrenchment) across individual departments. It is at the faculty or school level where external contacts and impacts have to be integrated with the university’s resource-allocation process, where promising areas need to be encouraged with seed-corn grants, where pilot linkages need to be nurtured and grown, and where staff recruitment needs to be tweaked to give the right weight to the balance of discovery, integration and application work. Some key IT and web-based communications may also be managed at these levels. Hence the deans and administrators of faculties and schools are often important decision-makers in any major relationships with companies or government departments and agencies, and are always key interlocutors with departments about what is working or not, what is growing and what is fading, and where the university’s comparative advantage for the future (and hence its key mission) will lie. While large departments may have skilled research administrators, smaller ones will not. And so some or many of the key research administrators are usually located at the faculty or school level, where they can accumulate both the necessary broader vision to cover several related disciplines and the information needed to develop knowledge of external contacts.

At faculty level it is important not just to aggregate up information as it stands from the departmental level, but also to try to create a value-added element that compensates for small departments’ characteristically scantier information gathering. Faculty staff should aim to give particular help to smaller departments so that they can contribute to a broad picture at faculty level that is complete and without lacunae. Achieving strong linkages across departments is also an important aspect of the ‘local integration’ of intellectual and research impulses identified earlier (in Chapter 5) as a key function of universities. The closest networks and links are naturally those within faculties, and the administrative importance of faculties or schools means that they are primary information circuits for whole-of-science or related-discipline knowledge transfers and translations.

Faculties and schools should also pay special attention to the synergies between different science departments and to the inter-disciplinary areas that lie uncomfortably across the remits of different departments and research labs. At any one time, some of the most dynamic and rapid-advance fields will lie at the intersection of different disciplines. Politically these interstitial areas are often weak, tending to be marginalised within each of their component departments by the stronger and more numerous staff groups in ‘core’ established sub-fields, where in fact most work may be replicative, confirmatory or only incrementally expanding existing knowledge. It may frequently fall to faculty-level decision-makers to strike the right balance in funding new developments in less tried and more inter-disciplinary areas with the most intellectual promise and the most applied potential.

For universities, the same key points and lessons apply, only at the whole-institution scale. Universities’ central administrations are key centres for allocating resources at a top level between faculties and schools, and for conducting or monitoring some key aspects of external relations – especially via the press or media office, through the university’s online research depository and library service, through centralised IT and web/internet services, via alumni and fund-raising arms, corporate relations units, and of course consultancy and R and D commercialisation arms. University vice-chancellors and their deputies in the UK, or university presidents and provosts elsewhere, are not just important decision-makers but also key conduits for senior politicians, government departments and companies to form links with the university and its departments. Hence their knowledge, dispositions and prejudices are often important drivers for certain kinds of advance that they know well or see as promising. Equally often, the personalities and prejudices of top university leaders can form key constraints on the progress of fields that they understand less well or have less sympathy for.

One of the key roles of top leaders is to nurture and grow the diffuse and often elusive concept of the university’s ‘brand’, capitalising on long-established strengths but also seeking to constantly modernise and keep up-to-date the things that the university is well-known for, and to stop ancient strengths metamorphosing (as they so easily can do) into off-putting ‘legacy’ images. University brands are long-lived, characteristically change rather slowly, and are often double-edged, attracting certain kinds of staff and expertise and repelling others. The same effects operate in the external relations realm also, in motivating possible partners or customers of the university’s research to explore possible linkages, and in motivating alumni and other established contacts to make donations. The brand effects tend to operate powerfully at the stage when potential collaborators or partners are first thinking of where to look for academic help or advice.

The local integration effect of the university in bridging across disciplines is often matched by its top leaders’ central role as a conduit of external influence and information into all the faculties. University leaders move much more widely in elite business, government and professional circles than do even their most senior faculty. So maintaining good communication from departments and faculties to the vice-chancellor or president and their deputies, and good intelligence back from this leadership group to department and research leaders, is often critical for allowing the university to keep abreast of new opportunities. In small countries, and for lower-ranked universities in large countries, the university leadership team is often a key channel for ties to state/regional or local/city elites (covering businesses, public policy-makers, professions and other main civil society organisations). For universities in large countries this effect operates in a more fragmentary way, with their top leaders being key conduits of advance information about how to match other large universities in a much more competitive environment. Top university leaders often have more advance or ‘over-the-horizon’ information about changing government, business or professional priorities. Finally, top leaders play an equally important international intelligence role for large and research-intensive universities, which must increasingly live and thrive in a global university economy, struggling to acquire students, academic talent and direct investment in competition with other major universities across the world. Here top leaders often undertake more overseas trips, especially in forging university partnerships with collaborating institutions. Where senior department and faculty staff also go along, there are strong possibilities for rapid intellectual and knowledge transfer advances, characteristically allowing the information-seeking university to formulate a much more sophisticated and in-touch estimate of where its comparative strengths lie.

8.4 Comparing organizations’ and disciplines’ performance

Even if departments, faculties or universities have assembled good quality information on external impacts as occasions of influence, their decision-makers are often reluctant to do more than cherry-pick some tempting highlights that clearly put them in a good light. Often this stance of flashing only a few isolated titbits of information stems from a public relations fear that publishing more detailed accounts may open the academic unit in question up to criticism – especially the counter-claim that actually the department or university is not doing as well as it should be, given the funding it has received from the government. Academics and universities must often face ‘naïve customers’ in government or business, who often seem to ‘expect the moon’ from relatively small amounts of funding, to want the university contribution to external outcomes delivered in infeasible timescales, and to demand that such extended impacts are documented in unachievable detail. Hence departments or universities often react by holding their cards close to their chests, and contenting themselves with rather vague ‘fairytales of extended causal influence’ that cannot be directly refuted.

However, if universities are to get better at legitimately claiming impacts (as influence), and at educating government and other funders about what kind of wider effects on outputs, outcomes or societal benefits can be reliably traced back to their research, it is important to break out of this cycle. The key step here is to find ways of comparing across departments and research labs and across disciplines. It is no good comparing evidence of the external impacts of a physics department and an English department unless we also know how each kind of department generally performs in a given country and institutional environment. Similarly, the common university fear that somehow ‘naïve’ customers or readers will impugn perfectly creditable impacts scores can best be exorcised by setting performance within an appropriate framework, one that takes account of the difficulties of achieving different kinds of influence over external audiences.

In this respect we follow up on the suggestions made in the previous section on data to collect by briefly reviewing some UK evidence. On funding and financing links from outside firms and government agencies to universities, Figure 8.1 below shows that a website audit of the top ten UK universities in late 2007 (just before the onset of the financial crisis) found 74 different centres or institutes with formal external funding – nearly half being in STEM subjects (including medicine) but with the social sciences next in line, and with very few externally-funded humanities centres or institutes. Unfortunately, we do not have information on the scale of these funding or financial links, which are often not made very explicit by universities or donors. Yet the Figure is already useful in providing some context, especially in showing the social sciences as being not too far behind the physical sciences and medicine in terms of funded unit or centre numbers.

Figure 8.1: The number of research centres and institutes funded by or formally linked to different kinds of sponsor bodies, by discipline group (in our web census of top 10 UK universities, December 2007)

                              Type of sponsoring organisation

                          Government   Third sector    Private sector   Other academic
                          bodies       organisations   companies        institutions     Total
  Social science              14              2               7                5           28
  Medicine                     6             10               2                3           21
  Science and technology       8              1               2                2           13
  Joint disciplines            7              0               1                1            9
  Humanities                   2              1               0                0            3
  Total                       37             14              12               11           74

Of course, Figure 8.1 does not cover other important forms of economic and financial linkage from business, such as the formation of spin-out companies or joint business ventures, where it is clear that the STEM disciplines account for the vast bulk of activity in the UK. Similarly, it will be important to look at other, less formally institutionalised types of linkage, especially the licensing of technologies from universities to businesses, and business support for individual research projects, or the work of post-doc staff or PhD students. Different disciplines within the STEM group, and even different sub-fields within particular disciplines, will often attract sharply varying levels of linkage-attention from external partners. The information on corporate patronage of PhDs in the US also suggests that these patterns can vary considerably over time across many STEM disciplines, with funding reducing sharply in recessions or hard times, but expanding in boom times and in close-to-business areas that are fashionable in those booms.

Turning to the issue of assessing external audiences for different subjects, Figure 8.2 below shows the results of a census of UK central government websites conducted in monthxx 20xx. We recorded all references found in website documents to different forms of university research, and some interesting results emerged – such as the extensive number of references to social policy, medicine, health policy, and law and order research, and the small volume of references to management, economics, technology and geography research.

Figure 8.2: The subject areas of academic research found on government department websites


Taking this analysis a little further, we also looked at which departments in UK central government generated these website document references. Figure 8.3 below shows that the biggest group came from the ministries covering crime and law and order (the Home Office), social security and welfare state systems (the Department of Work and Pensions), overseas aid (the Department for International Development), health, the environment (Defra), and finally transport and education. By contrast, the departments handling local government, taxation and (ironically) innovation and universities had the lowest rates of citing academic papers and university research findings in website documents.

Figure 8.3: The visibility of academic research material on government department websites

Yet there are good reasons to believe that these observed behaviours are specific to different spheres of influence, each associated with different kinds of citation and acknowledgement. For instance, the Treasury and the Bank of England are among the government bodies that are relatively reluctant to cite outside research in documents on their websites. However, both institutions have many specialist economists and financial experts on their staff, pay a great deal of attention to data trends and forecasts of economic variables, and on their internal websites or intranets review and cover a great many economics articles, forecasts and books from university economists and financial experts in the UK and overseas.

Moving from the public policy sphere to look at the general UK media also shows a different set of rankings of the external salience or visibility of different disciplines. Figure 8.4 below shows that the choice of search words to indicate university research makes quite a difference to the rank order of disciplines that results. However, leaving aside these detailed differences, the two halves of Figure 8.4 agree that among university disciplines, medicine, science and technology get the most media coverage. Political scientists and economics/business and finance academics also get a good deal of coverage, often commenting on developments in overseas countries or in economic or business data. They are followed at something of a distance by humanities disciplines like history, English and philosophy, and a range of social sciences, including law and sociology. At the bottom of the public visibility pile in both halves of the Figure are computer science (where there is a lot of IT coverage, but mostly company-focused), languages, anthropology and geography.

Figure 8.4: The disciplines of academic research covered in the UK press, May 2007

(a)

(b) Using search terms ‘Dr’ or ‘new findings’

Method note: 'Other humanities' here includes Classics, Theology and Religious Studies.
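
The media tallies in Figure 8.4 rest on the same counting logic. As a rough illustration, the sketch below applies the Figure 8.4(b) filter – keeping only pieces that use the search terms 'Dr' or 'new findings' – and tallies the surviving coverage by discipline. The article records are invented for illustration:

    from collections import Counter

    # Hypothetical press-archive records: each entry carries the discipline
    # of the academic work covered and the raw text of the article.
    articles = [
        {"discipline": "Medicine", "text": "Dr Jones reports new findings on ..."},
        {"discipline": "Political science", "text": "A professor commented that ..."},
        # ... one record per article retrieved for the census month
    ]

    # Apply the Figure 8.4(b) filter: keep only articles using the search
    # terms 'Dr' or 'new findings', then tally coverage by discipline.
    # (A real analysis would need word-boundary matching, since a plain
    # substring test for 'Dr' would also match words like 'Drive'.)
    terms = ("Dr", "new findings")
    coverage = Counter(
        a["discipline"] for a in articles
        if any(t in a["text"] for t in terms)
    )

    for discipline, n in coverage.most_common():
        print(f"{discipline}: {n} article(s)")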

In comparing across universities, the normal approach to benchmarking is:

  • To locate an individual university against its situation-neighbours. For instance, UK universities have organised themselves into different 'mission groups', with the Russell Group representing the oldest and most prestigious universities, and separate groups for the 'new' universities established by the 1970s and for those converted from polytechnics after 1992. In the US the comparison groups range from the Ivy League, through leading and large state universities, to other private universities and colleges and other state universities. Alternatively, universities offering PhD programmes can be contrasted with those offering four-year degree programmes, and with those offering only two-year programmes. (A minimal sketch of this kind of comparison follows this list.)
  • To locate a given department against the more general background of the university in which it sits. The reasoning here, in relation to impacts, is that even though a middling or low-ranked university may have stellar academics and research programmes in particular departments, it is very difficult for the academics and researchers there to break out of the 'mould' set by the university's brand. Yet the 2008 Research Assessment Exercise (RAE) in the UK showed that a good deal of the work top-ranked by government-appointed review panels was still being carried out by 'pockets' of staff in less research-intensive universities (those with lower overall department rankings in the RAE).
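
As flagged above, here is a minimal sketch of the first kind of comparison, using invented indicator values only: a department's external-impact indicator is set against the average for its situation-neighbours in the same mission group.

    from statistics import mean

    # Invented external-impact indicator values (e.g. occasions of influence
    # per researcher per year) for one department and its situation-neighbours:
    # same-discipline departments at universities in the same mission group.
    own_score = 3.1
    neighbour_scores = [2.4, 4.0, 1.8, 3.5, 2.9]

    # Benchmark the department against the mission-group average.
    benchmark = mean(neighbour_scores)
    gap = own_score - benchmark
    print(f"Mission-group benchmark: {benchmark:.2f}")
    print(f"Own department: {own_score:.2f} ({gap:+.2f} against benchmark)")

The choice of neighbours does most of the work here: a raw score means little until it is set against departments facing comparable conditions.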

In terms of assessing external impacts, it seems especially important to emphasise that both government and business impacts are likely to be constrained in significant ways by a university's general brand and reputation. An excellent department or research lab isolated in a middling or poor university will rarely be able to overcome the information problems this creates for external audiences in recognising the strength of its work. However, in STEM areas departments or labs may partly counteract this problem by building detailed links with specialist industries in niche markets. And at a local or regional level, a strong department or lab may be able to forge useful links with local or regional policy-makers or businesses, especially in the US or Germany, where university funding runs through state or regional governments and is partly conditional on making these sub-national linkages and promoting regional or local economic growth and development.

8.5   Managing impacts work – potential pitfalls

Coming to power is a costly business… Power makes stupid… Politics devours all seriousness for really intellectual things. Friedrich Nietzsche, quoted in Flyvbjerg (1998)

Power is more concerned with defining reality than with dealing with what reality "really" is. Bent Flyvbjerg

As an occupational group, university researchers and academics still operate within many of the older professional-practice ideas associated with the 'private practice' concept, which stresses a dispassionate commitment to advancing knowledge and a socially neutral stance for the profession. Academia is supposed not to take sides in the social struggles between labour and capital, between rich and poor, between haves and have-nots. The development of science or culture or fine art or philosophical thinking should go where it will, pursuing an independent course that is not directly or centrally involved in class struggle or in other forms of distributional and societal conflict. In theory, academics and universities are institutions without vested interests of their own, or ones that should at least strive to act as if they were, taking no side beyond that of promoting knowledge development and the advancement of civilisation.

Of course, expressed in this way it is apparent that these are ideals which any university and any discipline only partly lives up to, and that there are biases in knowledge development in universities and academia which inevitably reflect the interests of academics and researchers themselves and their dependence on state and corporate patronage for research funding. University education has also increasingly moved from being overwhelmingly publicly funded towards at least a 'mixed economy', in which universities can seem like just another kind of corporation marketing services to 'customers'. Equally, the contemporary importance of universities for the flourishing of local, regional and national economies means that the old private-practice image of a small and disinterested group no longer stands up to critical attention.

Nonetheless, universities and academics can and do actively seek to remain relatively autonomous from wider social and economic influences. They characteristically seek to cultivate and protect a key area of independence, of openness and of responsibility to debate on the basis of evidence and well-tested scientific theory. And all university researchers in their hearts accept an obligation to constantly search for improvements in knowledge, and to recognise and adopt them, however uncomfortable they may be for established interests or commitments inside academia or among external actors and interests. This remains at the heart of the concept of professional neutrality in academia.

Sustaining this conception becomes more important (and sometimes more difficult) the more a university's engagement with business or policy-makers increases. Paid-for research or applied work can appear as ill-advised marketing or self-justification unless it is carried out to the highest scientific and professional standards. Critics of a scientific programme or a research conclusion often mobilise politically and will look for means within their power to counter the effectiveness of contrary work from academics or university researchers. An influential perspective on public policy debates in liberal democratic countries portrays them as clashes between adversarial 'advocacy coalitions'. If academics come to form part of one advocacy coalition, as they often do, then the opposing coalition will look for its own academic advisors and proponents. The attacks in 2009 on scientists studying global warming at the University of East Anglia are a key example of this effect.

In politics it is also important to recognise that social science, medical and science/technology researchers often end up trying to 'speak truth to power' in conditions where the powerful are more interested in bending perceptions into more convenient moulds (as in the epigraph from Bent Flyvbjerg at the head of this section). In 2007 academics at the London School of Economics published a long report critical of the Labour government's then flagship policy of introducing an identity card and compiling a huge IT-based register of all UK citizens, which they costed at around £10 to £18 billion, against a government estimate of £5 to £6 billion. The minister in charge denounced the LSE study as 'mad', and the senior civil servant at his ministry rang the LSE Director to wonder aloud how unfortunate such inconvenient research might prove for the university's future funding. In this case the LSE stuck by its academics and strongly resisted government pressure (Whitley et al., 2006). The ID card scheme itself was first drastically cut back in scale by Labour ministers and was opposed by all other parties; it was cancelled by the incoming Conservative-Liberal Democrat government in mid 2010.

However, not all such episodes end with academia winning through. To give one example, again from the UK, in the spring of 2010 the senior professor chairing the UK government's advisory body on drugs policy was forced by the home affairs minister to resign after publishing rankings of the dangerousness of drugs that clearly contradicted the official lists of the most dangerous drugs (for instance, rating alcohol as more dangerous than cocaine). In a second case, a London university academic critical of government policies on using private finance to build National Health Service hospitals gave evidence to a parliamentary select committee and found her research attacked as shallow and biased by a loyal government MP – a charge that was almost impossible to refute in the time available. In all these cases the firepower of politicians or of an organised lobbying group or advocacy coalition is very unequal to that of unsuspecting or unprepared researchers, and the casualties usually fall on the university side.

These considerations suggest some key rules for departments and research laboratories that begin building new relationships with powerful external actors, or become associated in some way with an advocacy coalition that has already attracted counter-mobilisation by an alternative coalition.

  • Pick partners or funders for research work carefully, and make sure that the terms of any funding and research linkage fit clearly within the university's rules for partnerships and research funding – in particular, safeguarding academic freedom to report research freely in appropriate professional journals and reports after the normal time periods. Going public with your research is the best guarantee of its overall quality, both for the university and for research clients. Of course, in many commercially sensitive areas there will need to be appropriate protections for the intellectual property of the company and the university, so some research may not be fully open to disclosure. But publication, so that results can be replicated, should remain the normal goal, even if time delays or restrictions have to be imposed.
  • Most universities will also have safeguards in place to try to ensure that their researchers do not sign up to carry out applied projects that they are not in fact appropriately qualified or experienced to undertake. It is important for reputation maintenance that departments and labs stick to research that they are well qualified to do. This usually also means having some ‘strength in depth’ in the area, so that several researchers can operate in effective teams.
  • It is important for researchers and departments to check that they have the backing of university leaders before entering into fields of work directly for business or government that may become politically controversial in the future. Compiling a 'risk analysis' can be useful here, both to reassure leaders that the research will be of high quality and to show that risks can be mitigated (a minimal sketch follows this list).
  • Once some controversy about external impacts work has arisen, it is also important that department and university leaders back up academic researchers who come under strong ‘political’ challenge, whether by an opposition advocacy coalition or by senior politicians or decision-makers.
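
On the third point, a 'risk analysis' need not be elaborate. The sketch below shows the kind of structured risk register a department might compile and review with university leaders before starting externally funded work; all the entries are invented for illustration.

    # Invented entries for a minimal risk register; a department would adapt
    # the risks, ratings and mitigations to its own project and partners.
    risk_register = [
        {"risk": "Funder seeks to delay or block publication",
         "likelihood": "medium", "impact": "high",
         "mitigation": "Contract guarantees publication after an agreed delay"},
        {"risk": "Findings contradict a flagship government policy",
         "likelihood": "medium", "impact": "high",
         "mitigation": "Brief university leaders before release"},
        {"risk": "Team lacks depth if the lead researcher leaves",
         "likelihood": "low", "impact": "medium",
         "mitigation": "Staff the project with several qualified researchers"},
    ]

    # Flag the items that most need leaders' attention before work begins.
    for r in risk_register:
        if r["impact"] == "high" and r["likelihood"] != "low":
            print(f"Escalate: {r['risk']} -> {r['mitigation']}")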

Summary

  1. While academic departments, labs, and research groups produce a great deal of explicit knowledge, it is their collective ‘tacit knowledge,’ which is the most difficult to communicate to external audiences, that tends to have the most impact.
  2. The changing nature of commissioned academic work means that the time lag in achieving external impacts can be radically reduced, yet any external impact of non-commissioned work is likely to lag far beyond its academic impact.
  3. It is important for individual departments and research labs, for schools or faculties, and for the university as a whole to systematically collect, access and arrange auditable data on external impacts, keeping in mind that some 'naïve customers' – funders, regulators, and other parts of their universities – may insist on proof of 'extended' impacts.
  4. Making meaningful comparisons between universities’ and individual departments’ external impact requires contextual understanding of how departments and universities generally perform in a given country and institutional environment.
  5. Seeking to improve external impact should not mean sacrificing academic independence and integrity; compiling a risk assessment for working with external actors or funders is one way to mitigate the politicisation of one's research.

This work is licensed under a Creative Commons Attribution 3.0 Unported License unless otherwise stated.