When governments invest public money in higher education research, and even more so when businesses, foundations or charities directly fund academic outputs, academics often see the difficulties in recording or demonstrating positive social outcomes as an inhibitor of future funding. Academic outputs can generate specific numbers of citations and be evaluated for quality in other ways. But the looming ‘so what?’ and ‘what next?’ questions tend to go mostly unanswered. Researchers applying for new funding are sometimes driven by crude government or business demands into concocting dubiously plausible claims about the social, business or public policy outcomes that have followed from their work. This straining of credibility characteristically takes the form of researchers or universities ‘credit-claiming’ in multi-causal contexts, where the research involved was perhaps only a tiny element in a complex pattern of far wider influences. This tends to devalue the reputation of research and to debase the coinage of ‘impact’ claims beneath a mixture of university public-relations-speak, general hype and over-claiming, exacerbated by inadequately documented ‘case studies’ of influence.
These sorts of developments feed a general pattern of complaint from government and businesses that:
a) there is a wide impacts gap between research being completed and published and its being recognised or achieving any external impacts beyond the university sector itself; and
b) there is an even wider outcomes (or wider consequences) gap between research being registered or used in some way by non-university actors and its then having any visible effect on how these other actors behave or decide to act.
In this chapter we review and address some of the difficulties that people have in mind when they discuss an ‘impacts gap’, and how this gap might arise in terms of the supply of research by academics, and the demand for research from business, government or civil society. The ‘impacts gap’ label is often also used to cover what we have described as the ‘outcomes gap’, and so we say a little about this extra dimension of the problem. However, our focus here remains solidly on achieving external impacts (defined as occasions of influence) and not on trying to trace the social consequences or outcomes of such impacts.
If there is indeed an impacts problem in UK higher education research, and in the social sciences particularly, it is worth examining what could be the causes of the problem before looking at possible remedies. We have identified five potential kinds of impact gap resulting from: demand and supply mismatches, insufficient incentives problems, poor mutual understanding and communication, cultural mismatch problems and a problem of weak social networks and social capital.
A quick way to come to terms with the possible supply and demand problem for research impact is to consider that 85 per cent of the UK economy is based around the service sector, yet 84 per cent of research funding flows into the STEM disciplines, covering all the physical sciences (Times Higher Education, 2010). Some social scientists argue that politicians, especially in the UK and the US, are overly pre-occupied with an outdated model of ‘science’ that focuses disproportionately on research areas most linked to manufacturing and technology industries. In the UK the charge is that political elites (in alliance with traditionally powerful sectors of the manufacturing industry) are trying to use research funding to create an economy that we don’t actually have, resulting in a surplus of science and technology expertise that can’t possibly be absorbed by the country’s small manufacturing base. In most OECD countries there is a similar potential problem in matching up how governments allocate research funding support and the economic importance of different sectors.
However, there are also clearly some important problems in looking for any one-to-one linkage between discipline groupings and particular parts of the economy. Some US research administrators argue that the apparently almost complete hegemony of the STEM disciplines in US government support for research is deceptive, because it fails to recognise that much of this funding total goes into what they term ‘human-dominated systems’. This concept covers areas like medical sciences, information technology and engineering, where there are close connections between the applied physical sciences and the development of social processes, including many vital service sector processes. They also argue that in these areas physical science or technology innovations often lie at the root of new industrial developments and the success of new service products. For example, the rise of Google was founded on a mathematical algorithm for ranking web pages, and innovations in the web-based handling of networks and rich media lay behind the rise of Facebook (which now has 500 million users worldwide).
It is not feasible to fully separate out the ‘human-dominated systems’ parts of economies or research funding in these numbers, but we have been able to distinguish the importance of the medical sector in GDP numbers (here including both medical manufacturing and pharmaceuticals, and medical services delivery via hospitals and family doctors) and in government research funding. A lesser problem is that although economic data covers the agricultural industry separately from other primary sector industries (such as mining or forestry), we can pick out agricultural research funding but not funding for the other parts of the primary sector. Within these limits, it seems clear that across OECD countries, government funding for the STEM disciplines is always more important than the share of manufacturing in their economies, as Figure 6.1 shows for six major states.
Figure 6.1: The match-up between the economic importance of sectors in the economy (shares of GDP) and the shares of government research funding across discipline groups in six major OECD countries
Sources: GDP data taken from the CIA, World Factbook. Government research funding data taken from National Science Foundation (2010), Table 4-16. Notes: On the X axis of this chart, sectoral outputs are shown as a percentage of GDP using the output (or % contribution) approach; by contrast, the ‘health sector’ number shows health expenditures as a percentage of GDP, so the two are not strictly comparable.
The breakdown of research funding shows the expenditures going to broad discipline groups as a percentage of all government science/research funding. It is important to note that the GDP sectoral percentages refer to billions of dollars, whereas the government research funding percentages refer to much smaller sums.
Yet the Figure also shows that different countries have quite varying policies in how they support different discipline groups. The US has the strongest mismatch between the dominant importance of services in its economy and a research support policy that awards only one in every 16 dollars to the social sciences, and effectively none at all to the humanities. Sweden and Germany show a more ‘moderate’ pattern, with services accounting for around 70 per cent of their economies, and around a fifth of total research support flowing to the humanities and social sciences (HSS). Australia is quite similar, but has a higher HSS share of a quarter. Finally, two countries, Japan and Spain, allocate appreciably more resources to research support for HSS disciplines, a third in Japan’s case and nearly two fifths in Spain.
Part of the explanation for these variations rests on how far countries’ research funding supports medical sciences in relation to their medical industries sector. Sweden gives the medical sciences a share of research funding that is almost four times the importance of the medical industries in their economy, while Japan and Australia give three times as much support. By contrast, Germany and the US only give twice as much funding support to medical sciences in relation to the economic importance of these industries. Lastly, Spain actually gives less support to medical sciences than the medical industries share of their economy.
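The ratios described above can be made concrete with a small calculation. The sketch below uses purely hypothetical illustrative numbers (not taken from Figure 6.1) and simply divides a discipline group's share of government research funding by the matching sector's share of GDP; a ratio well above 1.0 means funding outweighs the sector's economic importance.

```python
# Illustrative sketch: ratio of a discipline group's research funding
# share to the matching sector's share of GDP. All numbers below are
# hypothetical, chosen only to echo the patterns described in the text.

def funding_to_gdp_ratio(funding_share: float, gdp_share: float) -> float:
    """Return how many times a discipline's funding share exceeds
    the corresponding sector's GDP share."""
    return funding_share / gdp_share

# Hypothetical medical-sciences figures (funding share, sector GDP share)
countries = {
    "Sweden":  {"funding_share": 0.24, "gdp_share": 0.06},  # roughly 4x
    "Germany": {"funding_share": 0.16, "gdp_share": 0.08},  # roughly 2x
    "Spain":   {"funding_share": 0.05, "gdp_share": 0.07},  # below 1x
}

for name, shares in countries.items():
    ratio = funding_to_gdp_ratio(shares["funding_share"], shares["gdp_share"])
    print(f"{name}: {ratio:.1f}x")
```

A ratio below 1.0, as in the hypothetical Spanish row, corresponds to the case where medical sciences receive less funding support than the medical industries' share of the economy.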
A further source of variation in funding support allocations may reflect the fact that non-English speaking countries assign more resources to language-related support. In Spain the humanities get more than a seventh of research support, and in Germany an eighth. Although we cannot break the HSS share down in the same way for Japan, it seems clear that the humanities share there is also considerable. However, Sweden only gives one sixteenth of its research support to humanities disciplines.
Figure 6.2 shows some general outcomes of mismatches between the supply side of research (from higher education institutions, or HEIs) and the demand side (from business, government, or civil society).
Figure 6.2: The impact of demand and supply mismatches for research
The incentives for both sets of actors are influenced by the various costs and benefits involved, which can be either concentrated or dispersed. Ideally, of course, HEIs would undertake impact-achieving research at no net cost in response to highly concentrated demand, the situation shown in cell 5 of the Figure. Some engineering and IT departments, and perhaps some business schools, have long-term relationships with major corporations that may approximate this setting. More often, however, universities face high costs (in terms of time and resources) in producing externally-facing outputs, without any certainty that external actors are genuinely interested in considering (or paying for) their labour (as in cell 2 of Figure 6.2). Even when producing research impact is cost-free and benefits academics and HEIs, dispersed benefits on the demand side result in a sub-optimal situation of academics chasing businesspeople, government agencies or other actors with potential solutions to real-life problems (as in cell 6).
Where producing research impact represents costs for academics on the supply side – which is more often the case – it is even more important that research funding responds to existing demand patterns as opposed to politically desired demand. Even where benefits to the demand side are concentrated, if there are dispersed costs (and benefits) on the supply side this will result in strong demand but weak supply (as in cell 3). When benefits to the demand side and costs to the supply side are both concentrated, HEIs face a risk management problem which universities and academic research teams will respond to in different ways (as in cell 1). Where the benefits to the demand side are more dispersed (as in cells 2 and 4) there is an opportunity for government research policy changes to create specific incentives to encourage the take-up of research (e.g. via tax concessions for companies giving universities research funding or making joint investments).
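The cell-by-cell logic just described can be summarised as a simple lookup table. The sketch below encodes only the cells explicitly discussed in the text; the key names and the overall layout of Figure 6.2 are our assumptions, not the original figure.

```python
# Partial sketch of Figure 6.2 as a lookup table. Only cells described
# in the accompanying text are included; key labels are assumptions.
# Keys: (supply-side cost pattern, demand-side benefit pattern).

MISMATCH_OUTCOMES = {
    ("concentrated_cost", "concentrated_benefit"):
        (1, "risk-management problem for HEIs and research teams"),
    ("concentrated_cost", "dispersed_benefit"):
        (2, "high-cost outputs with no assured external interest"),
    ("dispersed_cost", "concentrated_benefit"):
        (3, "strong demand but weak supply"),
    ("no_net_cost", "concentrated_benefit"):
        (5, "ideal: impact research meets concentrated demand"),
    ("no_net_cost", "dispersed_benefit"):
        (6, "academics chasing potential users with solutions"),
}

def diagnose(supply: str, demand: str):
    """Return (cell number, outcome) for a configuration,
    or None for combinations not described in the text."""
    return MISMATCH_OUTCOMES.get((supply, demand))

print(diagnose("dispersed_cost", "concentrated_benefit"))
```

Cells 2 and 4 (dispersed demand-side benefits) are the ones where, on the argument above, targeted policy incentives such as tax concessions have most room to change behaviour.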
But the most fundamental decision for governments to make will still focus on accurately understanding the potential for their economy to productively absorb different types of research, and on maintaining a balance of research funding across discipline groups that responds to that.
A closely related explanation for the existence of an impacts gap (if there is a gap) is that there are too few or too weak incentives, either for universities to undertake applied or potentially applicable research, or for businesses or government users to provide active, consistent demand and associated support for universities’ applied efforts. Academics and researchers often lament that there are weak incentives inside universities or research institutes to undertake applied research. For instance, the Research Assessment Exercise in the UK was widely cited to us as incentivising only pure research, by academics and senior scientific civil servants alike. But it is also possible that the incentives for business or government to take up applied research are weak. For instance, UK universities’ engineering departments complain that they are frequently called in by small and medium-sized firms to sort out acute analytic problems – firms which tend to rely on such support being continuously available, yet do little to generate any continuous engagement or funding support for engineering departments.
There are three problematic conjunctions possible here, and one optimised situation, shown in Figure 6.3.
Figure 6.3: Insufficient incentives problems
In cell 1 there are poor incentives to undertake applied work in the university sector, but equally only fragmented, incoherent, weak or passive demand from business or government potential users. In cell 2 the demand-side users are involved and intelligent customers for research, and back up this stance by offering resources or involvement, but universities and researchers are diffident or reluctant to get involved with applied or applicable research.
In cell 3, by contrast, universities and research labs invest in external-facing research (perhaps because they are given the incentive to do so by specialist government research funding bodies), but then find many difficulties in interesting their presumptive or potential clients in business or inside mainstream government departments and agencies to use or engage with the research. For instance, STEM labs may find that the firms who could benefit from their research are in fact too small, too conservative, too inexpert or too lacking in venture capital to do so. Equally, government research bodies may make ‘political’ decisions to fund university research in fashionable or ‘manufacturing-fetishism’ areas that actually have little commercial potential, while neglecting other ‘hidden innovations’ with much greater business potential (Nesta, 2007).
Finally, in cell 4 there are strong and appropriate incentives for universities and research labs to focus on applied research, and there is active support and a ready market for well-evidenced ideas and solutions from businesses or public sector officials. Here incentives are adequate on both sides, there is no conflict of interest, and business or government engagement with researchers is close, continuous and constructive. Universities and their clients face only the less serious coordination and information-sharing challenges involved in aligning their research priorities.
Historically government research funding bodies have been preoccupied with insufficient incentives problems, especially in the relationships between university STEM research and high tech manufacturing industries. By strengthening the incentives for business to invest in more blue skies research, governments have repeatedly tried to ‘pick winners’ and to influence the specific sectoral shape and content of high tech industrial growth. At the same time funding bodies in recent decades have increased the pressure on academics seeking research grants to show that they will disseminate findings, commercialise research wherever feasible and work co-operatively with industry to realise economic and societal benefits. Financial incentives (tax concessions to businesses and grant ‘conditionalities’ for researchers) plus regulatory measures (such as requiring industrial engagement of researchers) can both have extensive influence in readjusting both demand-side and supply-side incentives. Within universities and research labs, changes in funding arrangements tend to be highly effective (critics say ‘over-effective’) in accomplishing a re-prioritisation of applied research and better communication of existing research outputs, which could stimulate demand. But other more enduring aspects of academic culture may still create difficulties.
Even if there is a reasonable match between the university supply of and external demand for potentially applicable academic research, and even if incentive structures are appropriate for encouraging collaboration between academia and external actors, there may still be an understanding or communication gap between academics and potential clients. Potential clients often voice the view that researchers speak in academic jargon, think in silos, define problems in unnecessarily esoteric ways and cannot extend their specialised knowledge to effectively embrace joined-up problems. Pro-business commentators often add that insulated academics do not empathise with the difficulties and struggles of firms operating in relentlessly competitive environments.
Meanwhile academics tend to believe that business or government clients are content to remain stubbornly ignorant of relevant theoretical knowledge, which they under-value along with pure research, and do not understand which disciplines do what or the basics of the academic division of labour. Researchers in the social sciences and humanities told us in research for the British Academy that government officials are potentially better informed and educated, but are often hamstrung by political interventions and a governmental short-termism that makes attention to academic work highly episodic, selective and hence partial. ‘Evidence-based’ policy-making in this perspective can too easily degenerate into a short-term search by officials for some expedient academic ‘cover’, boosting the legitimacy of what ministers or top policy-makers want to do anyway.
These critical perceptions might partly be explained in terms of each side’s lack of information about the other actors. Potential clients in business or government actually face high information costs in understanding the specialised world of university research, and in entering and acting as ‘intelligent customers’ in the often weakly defined ‘markets’ for applied research. Government officials often have to stick to academics who are supportive of current government policy or who come up with convenient messages, rather than using the researchers with the most expertise. Business users may lack the expertise or intellectual firepower necessary to assess what universities have on offer, and hence can make poor choices of supplier – especially smaller firms or those operating in radically new markets. If external actors have gone directly to a particular academic in the past and found the research unhelpful or irrelevant, this could trigger a ‘market for lemons’ perception that is accentuated by the proliferation of consultancies, think tanks and other ‘impacts interface’ actors.
However, there are grounds for optimism that problems of understanding or communication can be alleviated, if not immediately, at least in the reasonably short term. The physical sciences have greatly improved their standards of internal and external professional communication over the last twenty years and changed the public understanding of science, as witnessed by the growing demand for well-written and authoritative ‘popular science’ books. The social sciences could learn a great deal from the physical sciences, not least in how to better write, design and explain evidence in books, articles and more generalist publications. Other general remedies could be to improve professional communication in academia, especially in the social sciences, and to increase funding for dissemination and communication in research support. Universities and research labs could also sponsor more frequent interaction events that bring academics and external audiences into closer and more extended or continuous contact, a goal of the UK’s Higher Education Innovation Fund (HEIF).
In the physical sciences there are often much stronger incentives underlying efforts at better communication. Venture capitalists have strong incentives to maintain surveillance even of technically difficult areas if they may potentially produce large-benefit innovations or help create competitive advantage (as the ‘Eureka’ model of research as discovery suggests). Similarly it is a truism that university and industry synergies lie behind some of the most dynamic industrial zones located in the hinterland of major university cities and clusters, like the concentration of medical innovators around Boston, Silicon Valley in California (close to Stanford), or the science parks around Cambridge. These strong synergies sustained by spinout companies have few parallels in the social sciences. But in capital cities (like Washington, Brussels (for EU institutions) and London) and other centres of government decision-making, university social sciences often have greater chances of developing applied research for government, trade associations, unions, charities or lobbying clients, in ways that partly parallel the industrial-zone concentrations of the STEM disciplines.
Shortly before graduating [from Cambridge] with a first [in physics], John Browne relates [how]…: “I was made to understand vividly that business was not held in high regard.” He was with friends, walking through Cambridge, when they met one of his professors, the eminent physicist Brian Pippard. Pippard turned to his colleague and said, “This is Browne. He is going to be a captain of industry. Isn’t that amusing?” Catherine Bennett (2010) on former BP Chief Executive Lord Browne. In 2010 Browne wrote a key report for the UK government that recommended radically raising student fees for university degrees, and led to the cessation of most government funding for UK universities’ teaching, except in a few STEM subjects.
A more pessimistic take on communication and understanding problems is offered by analyses that stress much wider, deeper-rooted and hard-to-change cultural differences between academics and universities on the one hand, and their potential clients or patrons in government or business on the other. If we look at the preference structure of academics and the ‘prestige structure’ of universities, most observers would agree that non-applicable (i.e. academic-only) research is ranked as more valuable or preferable than pure applicable research, and both of these are ranked above immediately applicable research in most academics’ value systems.
Meanwhile, potential clients in business and governments have their own preference structure when it comes to research, in which mediated and immediately applicable outputs (produced by think tanks or consultancies) tend to win out over applicable research from academics. Pure and non-applicable research is clearly seen as of little or no interest to business. And despite the repeated evidence that some critical scientific, mathematical or technological discoveries have long-lagged effects, there is a recurrent tendency for government funding bodies to see pure or ‘theory-driven’ research as of academic interest only. Such work is perhaps supported in the interests of maintaining disciplinary balance or coverage, or perhaps helping to attract a good mix of academic talent from overseas, but otherwise it is viewed as paying few dividends.
Working on different time scales can exaggerate this disconnect. While academics often work on long-term research projects, most UK and American businesses operate their investments on two to three-year timescales. (Some major European companies have longer-term investment planning.) Government is similarly short on time, and in the UK policy-making often suffers from a rapid turnover of ministers. For instance, under a Labour government, the UK’s central government ministry covering social security (the Department of Work and Pensions) had 10 different secretaries of state in the eight years between 2001 and 2010, each of whom had different detailed policies from his or her predecessors (Mottram, 2007).
Keeping the government and business informed on what relevant research is available requires that universities and researchers have quick turn-around times for queries, responding to research requests or bidding for business or government contracts. This time pressure is particularly acute when so many other ‘ideas aggregators’ (such as think tanks, management consultants and technology consultants) are keen to fill the gap. These partly ‘parasitic’ intermediaries may also wish to keep clients dumb once hooked, in order to boost their proprietary roles.
The results of long-standing cultural gaps are often that academics and clients meet but talk past each other instead of collaborating meaningfully. Fostering long-run cultural convergence requires efforts to produce long-term, serial encounters between university researchers and their potential external customers and network partners in business or government. Initiatives here include the UK coalition government’s decision (in an otherwise austere public spending climate) to establish ‘an elite network of Technology and Innovation Centres, based on international models such as the Fraunhofer Institutes in Germany’ (BIS, 2010, p. 43). Aimed at high tech industries, government funding is used here to sustain the growth of long-run awareness and relationships between business and research labs at a regional scale. Programmes for academic exchanges with business or government agencies, and for professional staffs in these sectors to spend time in university settings, are strongly developed in the physical sciences in the UK, and are growing but still small-scale in the social sciences. Exchanges need to be two-way to maximise their potential benefits. Along with the continuous modification of business or civil service cultures produced by new intakes of graduates and professional staffs, and the impact of their feedback on universities themselves, it should be feasible to mitigate even long-standing cultural problems and related organisational difficulties in co-operating over a reasonable time period (say a decade) – as the growth of applied research in the UK in the 2000s strongly suggests.
A final approach to understanding an impacts gap looks at the nature of the linkages between academia and impact targets. In the social sciences the interactions between universities and external ‘customers’ for their research are generally not the type of regular, durable, binding, reciprocal, transitive, developmental or cumulative relationships that foster cooperation and mutual benefit. Although we have reviewed evidence of reasonably extensive contacts and linkages between researchers and business or government professional staffs, they none the less tend to be isolated, episodic, inconsistent, and unbalanced or non-reciprocal. The social sciences see more ‘spot market’ exchanges than ‘relational contracting’.
For example, a company may solicit academic input for a short period (which requires investment from the academic or university) but then effectively ‘drop’ the supplier immediately afterwards – perhaps because of changes of personnel (which are often frequent at an executive level in major business corporations), or perhaps because the pressing exigencies of competition require a change of strategy or priorities. The same company may then come back to the same research team later, but unless future ‘client’ needs are reliably signalled in advance it may be almost impossible for the university or research lab involved to guess what work may be needed in the future.
Things are somewhat more stable in government, but there again policy ‘fashions’ and political priorities often change in unpredictable ways. The alternation of political parties in power, allied with constraints on officials’ ability to co-operate with politically ‘unwelcome’ research, may quite often create disruptive agenda changes that undermine effective research development. For instance, six weeks before the 1997 general election an LSE research team funded by the ESRC (a government funding body) sought co-operation from the Home Office (the relevant government department) on devising questions for an election survey researching voters’ attitudes to alternative PR electoral systems, pledged in the manifestos of the Labour and Liberal Democrat parties. Officials responded that they could not provide any inputs at all, because it was not the then Conservative ministers’ belief that any reform of the voting system was needed. Labour duly won a landslide at the 1997 election and embarked on four major voting system reforms, one of which the research team had not fully anticipated and so did not have specific questions included in the survey.
In the physical sciences and STEM disciplines, greater continuity in research relationships can be built up over time, where firms and research labs (and sometimes foundations or charities and labs) cement relationships that can last for long periods and encompass serial instances of co-operation. Sustaining the transactions involved is not cost-free, and the uncertainties and risks produced by normal business cycles and competitive changes always need to be managed. It is only where buoyantly funded government bodies invest in creating major facilities or capabilities over long periods (say 10 to 15 years) that lower transaction-cost, almost purely bureaucratic collaborations can be sustained.
In large or centralised countries (like the US and UK), strong competition between multiple universities for scarce patronage can produce a significant wastage of resources on seeking comparative advantages or negating other research centres’ progress. What economists term ‘influence costs’ (the costs of lobbying, campaigning, manoeuvring and seeking power) may rise and consume some of the national research budget. By contrast, small states in world markets (such as the Scandinavian countries) have ‘group jeopardy’ pressures that tend to foster greater pulling together in the national interest. Small countries with distinct languages characteristically confront shortages of talent and expertise in many niches and market segments where large country companies or governments enjoy the luxury of choosing between alternative university suppliers.
Adjusting the quality of relationships with external ‘customers’ is not easy to accomplish, either in the stronger networks from industry to the STEM disciplines or the more fragmentary and fluctuating networks in the social sciences. But it is possible to encourage the sort of virtuous cycles of academic/client relationships seen more often in Scandinavia and smaller countries and to pursue strategies that tend to foster an accumulation of ‘social capital’ and inter-sectoral trust relationships over the longer term. Pooling government or business funding of research around regionally-based development outcomes appears to have constructive results. Other possible remedies could include incentivising companies (and perhaps government agencies with consistent research needs) to donate more to universities and creating funding opportunities for joint university-client applications. De-siloing research funding pots and encouraging more joined-up scholarship could also help.
Government funders could also do more to get over their ‘rule of law’/fair treatment hang-ups about picking ‘winners’ from the university sector. But they could also require institutions getting larger or more secure funding to much more clearly foster and lead inter-university cooperation at regional and local levels, rather than behaving in a purely self-interested and competitive-aggrandising fashion. Assessing research progress and capabilities cross-nationally, even for a middle-sized nation like the UK, tends to be helpful in forcing universities and research labs to take a more accurate view of their capabilities in a globalising economy and polity.
If there is an impacts gap it has many different aspects, and the character of any overall disjuncture in developing applicable research is likely to vary sharply across different disciplines, countries and time periods. Yet government funding bodies often seek to apply single-tool remedies rather homogeneously across all areas of the university sector, both in the name of fairness and of administrative simplification. The UK government’s blanket proposal to shift to a research funding system in which all disciplines receive 25 per cent of available funding on the basis of demonstrating their ‘impacts’ is a signal case in point. Premised (apparently) on the view that there is an acute incentives gap and an under-supply of applied research by British universities, such blanket moves are highly unlikely to be effective. Such a gross re-targeting of funding will no doubt produce a substantial and visible diversion of effort into finance-attracting research pathways. But if the UK’s impacts gap in fact stems in part from demand and supply mismatches, poor communications, or cultural discontinuities, the additional applied research that is summoned into life may be neither effective nor of good quality, nor likely to generate favourable consequences for the economy or public policy. A more granular view of the problem, and more differentiated strategies addressing the different causal origins of impacts gaps, would clearly be more likely to help produce better tailored and more effective new research.