7.1 A typology of potential influences
7.2 Evidence of external citation rates
7.3 Credit claiming for research
Most of the interesting and least studied topics in social science lie at the intersection of different disciplines (or different sub-fields), often straddling the boundaries of academic silos uncomfortably. So it is with the study of the impacts of academics and university researchers beyond the academy itself, a topic approached somewhat tangentially by sociologists, philosophers of science, education researchers, knowledge management and organisational learning experts, economists and technology-transfer researchers, network analysts, and political scientists and public management specialists. But it would not yet be true to say that any of these sub-fields has really tackled the topic of systematically studying how, why, and where the full range of academics and researchers in higher education have impacts on business and markets, public policy-makers and government, media and cultural organisations, and civil society and NGOs. The approaches adopted so far have all tended to have other slants and preoccupations.
We seek to rectify the resulting gaps in our knowledge by collating evidence and arguments from these different sub-fields to address the following twin questions. Theoretically, what factors might we expect to make a difference to academics achieving external impacts? And what evidence can be brought to bear upon these expectations, especially for the social sciences? We aim to build up a plausible picture of the bases of individual-level factors that tend to enhance the external influence of university researchers. In Chapter 8 we apply these individual-level insights to understanding how different levels of academic organisations acquire and develop their external impacts.
7.1 A typology of potential influences
Considering how academics and researchers achieve external impacts has to start squarely with the problem that different authors and schools of thought within disciplines often take significantly different views of how to understand the physical and social worlds, and of what evidence is relevant and credible for societal actors seeking to determine their own strategies and developments, or to settle public policy decisions. In most fields of university research it is normal to find something approaching at least a three-way split of viewpoints into:
- a dominant conventional wisdom, which tends to monopolise the ‘commanding heights’ in each academic profession. This ‘mainstream’ view always faces difficulties and puzzles in parts of its field where phenomena cannot be well explained. Accordingly it is constantly challenged by
- one or more new and ‘insurgent’ positions, offering a different and novel approach that may over time be worn down or incorporated into the mainstream, or may alternatively succeed in defining an alternative paradigm. In addition,
- the mainstream view may be critiqued by at least one past conventional wisdom or ‘legacy’ position, whose exponents are still fighting rearguard or guerrilla actions on behalf of their now less fashionable approach.
Given this kind of contestation of what counts as ‘knowledge’ or ‘science’ or ‘evidence’, it is commonly a fairly complex problem for governments, businesses, media organisations and civil society organisations to determine what counts as credible expertise.
In their influential book Rethinking Expertise (2007), Harry Collins and Robert Evans stress that even in the physical sciences knowledge relevant for societal decision-making transfuses rather slowly and incompletely towards relevant parties. For instance, they formulate two key rules for the deploying of scientific expertise into public policy making:
- ‘The fifty year rule: Scientific disputes take a long time to reach consensus, and thus there is not much scientific consensus about.
- The velocity rule: Because of the fifty year rule, the speed of political decision-making is usually faster than the development of scientific consensus’.
In a careful and nuanced discussion of how scientific and technical expertise can nonetheless be legitimately and constructively engaged with societal decision-making, Collins and Evans suggest that three bottom-line criteria are important – the credentials of a claimed expert, their experience in the applied field, and their existing track record of operating in this field or of making relevant practical interventions.
Other observers take a more sanguine view of consensus in the sciences. Another Collins, the sociologist Randall Collins, famously argued that the STEM disciplines have far stronger consensus-generating processes than the social sciences and consequently can sustain a more rapidly advancing knowledge frontier. ‘High-Consensus, Rapid-Discovery Science’ as found in the physical sciences began to develop from around 1600 onwards, and subsequently grew at an accelerating pace over time. In Collins’ view, all the STEM disciplines were distinguished by ‘high consensus on what counts as secure knowledge and rapid-discovery of a train of new results’. A ‘law of small numbers’ in intellectual disputes still operates in these disciplines (see Collins, 1998), limiting the number of top-rank theories or competing approaches to between two and seven positions. But in the sciences, disagreements occur only at the research frontier itself, not over the disciplinary foundations:
‘It is the existence of the rapid discovery research front that makes consensus possible on old results. When scientists have confidence they have a reliable method of discovery, they are attracted by the greater payoff in moving to a new problem than in continuing to expound old positions. The research forefront upstages all older controversies in the struggle for attention. Because the field is moving rapidly, prestige goes to the group associated with a lineage of innovations, which carries the implicit promise of being able to produce still further discoveries in the future. Rapid discovery and consensus are part of the same complex; what makes something regarded as a discovery rather than as a phenomenon subject to multiple interpretations is that it soon passes into the realm of consensus, and that depends upon the social motivation to move onward to fresh phenomena’ (Collins, 1994, pp.160-1).
By contrast, in fields without assured rapid-discovery methods, Randall Collins argues that not only is debate between alternative positions pervasive, but academic prestige can often best be built by debating or reinterpreting ‘fundamentals’, ‘the canon’ or classic texts over (and over) again. In this light, the social sciences certainly have recurring-but-moving-on debates. For instance, contemporary theories of the state in political science, spreading also into sociology, philosophy and political economy, have remained recognisably connected across two decades of modern debates (compare Dryzek and Dunleavy (2009) with Dunleavy and O’Leary (1987)).
Our approach to understanding the potential influences bearing upon academics or university researchers achieving external influence is summarised in Figure 7.1, another multi-dimensional or ‘balanced scorecard’ type of framework, this time involving eight main factors (one of which might in turn be further sub-divided). Starting at the centre right position, we move in a clockwise direction through these eight dimensions, commenting on each in turn.
Figure 7.1: A typology of key factors shaping the external influence of academics and university researchers
1. Academic credibility is always likely to be of key importance to university researchers achieving external impacts, without in any way being determinate. Academics with watery or slender academic credentials can sometimes achieve influence with external interests. But for most university researchers who do so, having a bona fide academic record of publications and advancement is an important necessary (but far from sufficient) condition. Other things being equal, academics from more prestigious research universities will tend to be accorded more attention, and have their opinions sought more frequently. And (again ceteris paribus) academics with many publications, strong citations and consequently large h-scores can be expected to have more credibility than those with slender portfolios of publications that have been accorded little notice. Of course these are very large ceteris paribus clauses, and among the things that are in practice highly unlikely to be equal are the seven other factors in Figure 7.1.
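The h-scores mentioned here (h-index values) have a simple arithmetic definition: the largest number h such that a researcher has h publications with at least h citations each. A minimal sketch of this calculation, with a function name and sample citation counts of our own devising rather than anything drawn from the text:

```python
def h_index(citations):
    """Return the largest h such that the researcher has h papers
    with at least h citations each (Hirsch's h-index)."""
    h = 0
    # Rank papers from most to least cited; the h-index is the last
    # rank at which the citation count still matches or exceeds the rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A researcher with papers cited 25, 8, 5, 3, 3 and 1 times has three
# papers with at least three citations each, but not four with four:
print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3
```

The measure rewards sustained bodies of noticed work: a single highly cited paper cannot raise the score above 1, which is why slender portfolios translate into low h-scores however striking one result may be.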
2. Dispositional and sub-field constraints govern whether academics or researchers actively want or try to achieve external impacts, and how likely it is that they can do so given the areas in which they work. In an open survey for the British Academy in 2008 (that is, not a sample survey) we found that only around one in six respondents across the social sciences and humanities expressed ‘purist’ opposition to their discipline seeking greater public policy, business or civil society impacts. This slender piece of evidence meshes with the trend for applied work to grow in many STEM disciplines, and with the already strong rates of applied work and engagement with outside interests. It is possible, therefore, that the strong public expression of ‘purist’ opposition to a greater emphasis upon impacts and applied academic work may be misleading – the product of entrenched ‘traditions’ in academic discourse that were in the past a majority view, and continue to be differentially expressed for ‘bandwagon’ reasons, energising people with an anti-impacts stance and creating a ‘spiral of silence’ for people favouring more applied work. On the other hand, there have been well-documented instances of more purist views of academe, as in the petition submitted to HEFCE by the University and College Union in early 2010, which attracted 13,000 signatures (Times Higher Education, 3 December 2009).
A key influence upon how much academics are willing to engage with achieving external impacts concerns the type of field that they work in. A well-known three-fold distinction was coined by Donald Stokes and is shown in Figure 7.2. It is widely used in surveys to get academics and researchers to situate their own work as falling primarily into one of three categories:
- Basic research is driven by academic and theory-based concerns and has no direct application (or potential for direct application) – it is ‘performed without thought of practical ends’ (Geiger, 1993, p.186, quoting Bush, 1945).
- User-inspired basic research is blue-skies, theory-driven, and concerns fundamentals, but nonetheless responds to the interests of (potential later) users. In Alan T. Waterman’s (1965) terms, this is ‘basic research which may be termed “mission-orientated” – that is, which is aimed at helping to solve some practical problem’ (quoted in Stokes, 1997, p. 62).
- Applied research is directly driven by a concern to answer users’ problems and to improve existing in-use technologies or social arrangements.
Figure 7.2: The three-way division of research
Clearly the more academics’ work falls into the first category, the less likely they are to want to achieve external impacts, or in practice to have much chance of doing so. Academics doing applied research, on the other hand, are likely to have a much easier time achieving some external impacts, and stronger career incentives to do so. Academics in the middle category may less regularly see opportunities for achieving external impacts. But because they are doing basic research, the consequences of success in their work may be more far-reaching – for instance, in STEM disciplines they might secure more basic patents.
3. Networking skills (or accomplishments) are the first of a series of personal qualities and characteristics that are likely to extensively condition which academics achieve external impacts and how much influence they come to exert. If business people, government officials, media journalists or NGO staff are to take advice from an academic expert, quote their arguments or employ them as consultants, they must first know that they exist. In commercial areas, work enquiries, RFPs (requests for proposals) or ITTs (invitations to tender) will characteristically be sent only to academic organisations that are on lists of potential tenderers. Getting onto such lists itself requires considerable amounts of information and preparatory work.
The principal reason why academics are not asked to advise external bodies, even when they have highly relevant and credible expertise, is that the potential recipient of advice has no idea that they exist, and would confront pretty high costs in becoming better informed (often at short notice). By contrast, well-networked academics or university research teams know early on about business contracts, government research and policy initiatives, charity or NGO campaigns, and media foci (like anniversaries). They are plugged in, so that they can easily be asked or consulted; and they are well prepared to respond to the typically very short deadlines for business or government contracts, and to complete the also typically onerous ‘box-filling’ elements of tendering for contracts, applying for grants, or participating in extended public consultation processes.
The obvious difficulty here is that top academics are busy people, and (as Oscar Wilde remarked of socialism) networking seems to take up too many evenings. Making and keeping contacts characteristically involves a lot of scarce time. In addition, the personal characteristics of successful academics and top researchers may not match well with the capabilities needed in successful networking – such as personal confidence, extroversion and an outgoing personality, and an ability to communicate complex ideas simply.
These considerations also arise in other contexts, however. For instance, the people who come up with radically new inventions or innovations in business are often presented as unconventional ‘geeks’ or ‘mad inventor’ types, whose approach makes business executives (‘suits’) doubt or reject their capabilities and ideas. Some business advice texts accordingly recommend two-person teams (dyads) of innovators allied with a more managerial and conventionally dressed/operating ‘product champion’, an executive whose task it is to be the public face of the innovation, smoothing its path through approvals and finance-raising and providing assurance for investors that business plans will be adhered to. Pairings of academic experts with ‘product champions’ are not widely observed in the university sector, but within research teams the development of specialised managerial roles such as the ‘grants entrepreneur’ (see Chapter 1) may parallel those of product champions in business. More generally senior professors often provide ‘ballast’ for work that will actually be carried through by younger (often more ‘geeky’, aka technically capable) research staff.
4. The personal communication capacity of academics is an additional, if often closely related, personal quality. Research results and implications never speak for themselves, and they can only rarely be communicated to elite level personnel by producing a report and assuming that it will be read. Academics who are going to have external impacts must normally be good public speakers, adept at presenting a case for funding, responding to questions, expounding complex issues in a clear way, explaining scientific or technical concepts to a ‘lay’ audience, and, through their personal appearance, communicating informed conviction and confidence in their analysis and academic credibility. These ‘political’ qualities are not universally available in academia, although the requirements of teaching, professional communication at conferences, etc. and increasing elements of formal training during doctoral work or induction as a junior lecturer all tend to give many academics a considerable starting proficiency in this area.
5. Interactional expertise is a different personal quality stressed by Collins and Evans (2007). It denotes the ability to get on constructively with other people while working in extended organisational teams. Because collective ‘tacit knowledge’ matters so much, translating academic knowledge to apply to particular problems and organisational situations is a far from straightforward business. It is a common experience in science that a laboratory or research team has considerable initial difficulty in appreciating exactly what techniques are being used in a different lab, or how to replicate them in a different setting. Frequently this key barrier can only be overcome by visiting the other research lab in person, thereby absorbing a huge amount of contextual and ‘organisational culture’ information that remains latent in other forms of communication.
Similarly, it requires an empathetic competence on the part of scientists, academics or researchers to appreciate how their knowledge or expertise needs to be adapted in order to apply it in particular different organisational settings, such as those of business and government. In the social sciences there is always a huge ‘culture shock’ in considering how knowledge can be translated for a business, government, or other organisation in civil society – which largely explains the increasing emphasis in professional education upon internships, capstone projects and consultancies undertaken directly for external organisations. The same importance of tacit knowledge largely underpins the value of secondment schemes providing academics and researchers with opportunities to work directly in external organisations.
6. External reputation is the first of a complex but important set of conditions that may lie largely outside the control of academics or university researchers themselves. An external reputation operates essentially at two different levels, the first and most important being the insider, elite, or ‘client’ reputation (flow 6a in Figure 7.1). Closely related to networking, one group of people who can build a successful insider reputation are distinguished scientists or stellar academics with effective public personas and strong elite connections, established through university links, academic service on quasi-governmental agencies, consultative committees or professional bodies – and sometimes through party political linkages. At top board level, a few major corporations forge links with very senior outside business academics, economists or scientists, using them either to internalise a ‘challenge’ function for their strategic or technical thinking, or to enhance their long-term horizon-scanning capacity.
In addition, however, there is a much larger group of academics with lesser reputations but who have good contacts with business managers or government officials. They acquire insider reputations as ‘sound’ judges of technical issues, or ‘a safe pair of hands’ for handling more ‘middle-levels of power’ issues – usually because the researchers involved are assiduous networkers, convincing personal communicators, and in personality terms they are ready and able to work co-operatively and to deliver reliably on deadlines. At this level in government, not being linked to a political party, and not having expressed prior strong views on key issues in the media or in NGO campaigns, are often seen by officials as indicative of neutrality, trustworthiness on secrecy concerns and lower public or insider risk. Someone like this is the kind of dispassionate expert who will not ‘bite back’ or make a fuss if their views acquire a different political spin in practice, or if the work they are commissioned to undertake is left on the shelf when things do not work out as initially planned. In business, less well-known university researchers – who can work closely within a company ‘line’, or whose views mesh most closely with other aspects of company strategy or carry conviction with board members or top managers – may be preferred as academic partners over more distinguished but less tractable academics. In short, ministers, government officials and business managers often pick researchers to link with because their views are congenial, rather than because they are impartially ‘the best’ experts for a job, especially where the commission involved is a low-profile one.
Once academics or researchers become involved with external organisations outside higher education there are clearly additional risks for them, arising from the linkage not going well, from their advice being ignored, or from a ‘guilt by association’ effect linking them with controversial government or business policies. Universities and academic professions are critical environments, and senior researchers are naturally sensitive to the implications of attracting criticism from colleagues or the student body, especially if developments occur that might seem to call into question the ‘scientific’ or impartial credentials of their work. Quite often senior academics will turn down possible commissions, contracts or associations with businesses or government departments because they foresee potential negative impacts upon their academic reputations, or because they believe that the linkage will not work and hence could risk damaging their existing, often carefully nurtured, ‘insider’ reputation.
Equally government-academia relationships are sometimes marked by crises where a researcher’s intellectual integrity clashes with a ‘policy’ line being maintained despite the current evidence-base by a minister or politician. For instance, in the autumn of 2009 the Home Secretary in the UK’s Labour government (Alan Johnson) dismissed a medical professor (David Nutt) from the chairmanship of a misuse of drugs advisory body, saying that he had ‘lost confidence’ in the quality of Nutt’s advice. Nutt’s offence was to publish a listing of the dangerousness of drugs that classified cannabis as not causing harm (and hence one that could be legalised), whereas the government’s official classification placed it in the second most dangerous category (BBC News, 2009). Nutt accused the government of not heeding the medical evidence-base in its approach, and his dismissal caused further resignations by academics.
The other key dimension of external reputation is the public or media profile of a researcher (flow 6b in Figure 7.1). Businesses often wish to bring in external experts from universities partly for technical reasons (in which case their technical credentials need to be strong) and/or for quasi-marketing reasons – for instance, to produce a generally favourable ‘think-piece’ or a piece of research that can be useful in high-end marketing terms. Similarly, government agencies sometimes commission research purely for technical assurance that their strategies or policies are appropriate or will work, but more often they also want a public report or document that strengthens the legitimacy of the policy choices made. This legitimacy-seeking by government occurs both in long-run, slow-changing situations, and often in crises also. Most government advice documents in advanced industrial countries either have to be published directly, or may be force-released under Freedom of Information (FOI) or other open-book government policies. ‘Transparency’ requirements are typically strong in technical policy areas. So in both business and government it is often a first priority that the technical or professional credibility of academic experts is unchallenged, and that they do not have a prior public or media reputation that is in any way adverse.
Entering the public policy arenas can often significantly increase the risks that researchers and their universities confront. During 2009, for example, a climate change research centre at the University of East Anglia closely linked with the science of global warming became the target for hostile criticism from warming-denier groups on the political right. By requesting copies of emails between researchers under FOI legislation, the global warming-deniers were able to assemble a dossier of emails in which scientists could be represented as selectively accentuating favourable evidence and seeking to suppress discordant evidence. The resulting storm of controversy significantly damaged public confidence in the science of global warming and required two different university and scientific reports to clear up.
On a much smaller scale, the political risks of public engagement for academics were illustrated by the case of a professor of health policy who gave evidence to a parliamentary Select Committee critical of the use of Private Finance Initiative (PFI) contracts in new building projects. However, one of the MPs on the committee had been briefed by critics in the PFI industry about the professor’s work, and used the oral evidence hearing to impugn their academic credibility – a public criticism for which of course the academic involved had no form of redress (since conventional libel laws do not apply in such top legislature settings).
Normally, with very packed political and media agendas, it might seem highly unlikely that any particular academic’s research could become the focus of sustained attention. However, the expansion of the specialist media close to public policy, business or professional practice has considerably enlarged the scope of what may now get attention. The development of ‘attack blogs’ – whose authors quite often extend their criticism to university research where it is used or cited by opponents – has particularly increased the chances of academic work attracting sustained ‘political’ criticism that goes well beyond the scope of conventional academic critique.
One further implication is worth bringing out explicitly here, namely that academics who frequently write directly for the media as columnists or regular commentators, along with major public intellectuals in the French mode, are typically debarred from many ‘academic’ roles within government or business. They may already have a fixed public reputation on one side of an argument that more or less rules them out from being seen as technically or academically impartial. They may in addition be seen professionally in the US or the UK as a ‘pop academic’, for it is certainly true that the time demands of regular media (or even specialist media) contributions are often very severe, leaving the person involved little time for longer-term academic work, let alone other forms of external impact. Of course there are prominent exceptions to this rule, such as the economist Paul Krugman, who has combined ceaseless commentary for the US media with winning the Nobel Prize in economics (for his earlier work), or the palaeontologist Stephen Jay Gould, whose Natural History column was influential over many years, but who still found the time to compose his magnum opus, The Structure of Evolutionary Theory (2002), in the closing years of his life. But on the whole, for most academics, the demands of maintaining a constant media presence tend to be a barrier to other forms of external engagement.
7. Experience is the penultimate dimension in Figure 7.1 and in Collins and Evans’ (2007) terms it denotes the accumulation of practical knowledge in an area of scientific endeavour and an understanding of its practical applications or extensions. Experience is an especially relevant criterion for governments seeking expert advice about the interpretation of risk factors and of what is known and unknown, particularly in the sense once famously characterised by the US Defence Secretary Donald Rumsfeld as ‘known unknowns’ and ‘unknown unknowns’. Experience also covers the extent to which an academic or researcher has existing knowledge of what is required in working with a particular client or ‘customer’ – especially the organisational know-how to operate successfully outside their academic comfort zone, with its famously long and elastic deadlines, conditional judgements and end-pleas for more evidence. Above all, experience covers the ability of the expert to move (usually through interactions with others, for instance in committee meetings) from technically known ground to broad judgements of possible risks and future developments.
Inherently, the best way to acquire relevant experience is to have carried out exactly the same role previously. The next most useful basis for judging an expert’s experience is that they have previously carried out a parallel or analogous role, perhaps on a smaller scale, or in lower-level contexts that provide many clues and guide points for the current area. These considerations explain why governments especially tend to rely heavily on the same people to carry out successive expert roles. Indeed government agencies with extensive needs for academic advice (such as defence and scientific development funding bodies) regularly run a kind of nomenklatura system, designed to ‘bring on’ a suitable range of researchers and give them the relevant experience to fill future slots when needed. Academics who start down the route of extensive academic service may thus tend to attract serial appointments from public bodies.
Business attitudes towards expert advice generally show a stronger focus on using young researchers and academics in the prime years of their creative flourishing. Venture capitalists especially may be interested in researchers with no experience at all, but with creative potential, innovative thinking and new ideas. They may also be interested in angles of analysis that can help firms achieve a (usually temporary) competitive edge over rivals, again usually associated with younger researchers most in tune with modern methods. Hence especially innovative firms that run on more ad hoc organisational lines may place little value on past experience, which they associate with being sucked into established organisations’ ways of thinking or already-familiar technologies and approaches to problems. Pairings of innovators with more senior ‘product champions’ (that is, business-experienced people, or those with strong ‘insider’ status already) are also more common in innovative business areas (whereas in the public sector, outside experts are often expected to stand alone).
However, in broader business contexts and in more hierarchically-organised firms, the risk-reduction that follows from using outside researchers with experience is still considerably valued. A well-established way of combining the use of these researchers with the characteristic focus on innovation and on new techniques of analysis that confer a (brief) competitive advantage is to ground contacts within academic teams or research labs where the internal division of labour between senior, experienced academics and younger (more technically hot-shot) researchers provides a strong form of collective tacit knowledge. Hence large corporations often wish to maintain contacts with STEM labs over relatively long terms, and to commission applied research from teams whose capabilities they know well.
8. A track record of previous successful work in exactly the area, or in analogous areas, goes beyond simple experience to offer concrete evidence that an academic or a research team has achieved something similar to the current task – whether that be having produced a report or undertaken analysis, or invented a procedure or technique (or conceivably a product) that is similar to solving the current problem. In Alvin Gouldner’s (1973) view, a track record of past success in the same or similar endeavours (like the survival rate of a surgeon or the win rate of a lawyer) is a much better basis for a lay person to make judgements about whether to trust an expert or not than simply looking at their credentials or totting up their experience. A track record is also highly reassuring for risk-averse government or business leaders committing substantial resources or envisaging a later requirement to publish outcomes or justify the spending undertaken.
Looking across all the dimensions in Figure 7.1, we can also identify some important overarching factors that bring some of these dimensions together into different clusters of potentially linked elements. In the first place, there are some strongly age-related influences, especially building up academic credentials (dimension 1), developing networking skills (3), acquiring an established external reputation with insiders (6a), and being able to point to relevant experience (7) and a track record (8) of past work – all of which take time and hence can rarely be done by newly appointed academics. It may also be that researchers in at least their 30s or 40s have more interaction expertise (dimension 5) through committee experience in their university or undertaking academic service roles. However, it is also worth noting that some dimensions in Figure 7.1 are not age-based. More senior academics may simply be less inclined to invest the time required to achieve external impacts, for instance, in undertaking media work. Nor are personal communication skills likely to be affected by age, while younger academics may carry less of a constraining public reputation, not yet being known for fixed views or linked to political or corporate rivals. And, as we noted above, young researchers may be more in touch with forefront research techniques and analysis approaches valued by innovative businesses, especially in mathematical or technical areas.
Secondly, some dimensions in Figure 7.1 are more related to external legitimacy considerations, in cases where the involvement of an academic or researcher is seen as useful for building public confidence, or for strengthening the business, marketing or public policy case for pursuing a given course of action. Government officials or business managers choosing which academic researcher to involve may worry less about getting absolutely the top expert or the very best obtainable research or evidence, in favour of choosing someone with the right profile to present a case authentically and plausibly to the public and the media. Especially important here are the overall fitness (note, not necessarily optimality) of the expert’s academic reputation and credibility (dimension 1 in Figure 7.1); the personal communication capabilities of the researcher in making speeches, fielding media questions or explaining findings in print (5); and the person’s public reputation (6a).
It should go almost without saying that it is unlikely that any university researcher is going to perform highly on all the criteria in Figure 7.1 at the same time. Instead there are likely to be many different combinations of qualities that can generate external impacts, just as we noted in Chapter 3 that there are many different formulas for career trajectories in academia, as different from each other as those of grant entrepreneurs, hub authors, obsessive researchers, ‘pop’ academics and senior teaching-orientated academics. Within the current state of knowledge about external impacts, there is no body of literature or argument that suggests how these combinations work a priori or on theoretical grounds, except to highlight an expectation of diversity. We move on in the next section to consider the available empirical evidence.
7.2: Evidence of external citation rates
The IPD provides a rare source for looking at some of the individual-level influences that may affect how many external impacts academics or researchers accumulate. Our approach here focused on the main Google search engine, and accordingly relies a good deal on Google’s algorithms and systems for screening out duplicate entries. We asked our coders to enter the names of each academic in our database and to work their way through the web pages linked to them in the sequence that Google suggested, but this time filtering out all results that related to the impacts of each academic within the university system. We eliminated all entries relating to academic publications, from book or journal publishers, and all sales entries, from bookshops or web-aggregators. We then collated information for the first 100 instances of external impacts related to that person. This rather laborious approach nonetheless generates high quality data about the electronic footprints left as a residue from academics’ or researchers’ external impacts. The Google ranking algorithms also filtered what got included in this partial census of each academic’s external impacts. Our data covered just over 20 academics in each of five disciplines (120 people in all), shown in Figure 7.3, and generated nearly 6,600 mentions of academics or their research by outside organisations, split fairly evenly across the disciplines.
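The screening rule our coders applied by hand can be expressed as a simple filter over ranked search results. The sketch below is purely illustrative: the domain lists and function names are our own invented assumptions, not the coding scheme actually used in the study.

```python
# Hypothetical sketch of the screening rule: walk results in Google's ranked
# order, drop academic-publisher, bookseller, and university entries, and keep
# only the first 100 external-impact mentions. All domain lists here are
# illustrative assumptions, not the study's actual coding scheme.
ACADEMIC_DOMAINS = ("jstor.org", "springer.com", "wiley.com")
SALES_DOMAINS = ("amazon.", "abebooks.")
UNIVERSITY_HINTS = (".ac.uk", ".edu")

def is_internal(url):
    """True for results reflecting impacts inside academia, or sales pages."""
    patterns = ACADEMIC_DOMAINS + SALES_DOMAINS + UNIVERSITY_HINTS
    return any(p in url for p in patterns)

def external_mentions(ranked_urls, ceiling=100):
    """Keep up to `ceiling` external results, preserving the ranked order."""
    kept = []
    for url in ranked_urls:
        if not is_internal(url):
            kept.append(url)
            if len(kept) == ceiling:
                break
    return kept
```

Because the procedure stops at the ceiling, each person’s count is right-censored at 100 – a feature that matters for the statistics reported below.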
Figure 7.3: Summary of external impacts mentions gathered for academics in the IPD
Discipline | Mentions | Percentage of all mentions | Academic rank | Mentions | Percentage of all mentions |
---|---|---|---|---|---|
Economics | 1616 | 24.5 | Lecturer | 2207 | 33.5 |
Geography | 1515 | 23.0 | Senior Lecturer | 1608 | 24.4 |
Law | 1123 | 17.0 | Professor | 2782 | 42.2 |
Political Science | 1249 | 18.9 | Total | 6597 | 100 |
Sociology | 1094 | 16.6 | | | |
Total | 6597 | 100 | | | |
Note: Our main Google search method was to look for and record the first 100 references to an academic’s or researcher’s work made by external sources outside the university sector. Hence it is important to note that there is a maximum ceiling of 100 external references here. We covered somewhat more than 20 academics per discipline.
Professors generated the largest total of external impacts references, though not by a wide margin, because our method capped collection at 100 references per person, a ceiling that half of the professors reached. Lecturers taken as a whole generated somewhat more references than senior lecturers, whose roles may be more inwards-facing or teaching orientated. However, Figure 7.4 shows that the median number of references per senior lecturer was slightly greater than for lecturers. Per person, professors as a group had many more external impacts references than their more junior ranked colleagues. Indeed, half of professors achieved the full 100 mentions that we collected, compared with only the top quarter of lecturers and senior lecturers.
Figure 7.4: The number of citations by external sources for academics and researchers across academic ranks (for five social science disciplines in the IPD)
Rank | Maximum | Upper quartile | Median | Lower quartile | Minimum | Mean | Standard deviation | N |
---|---|---|---|---|---|---|---|---|
Lecturers | 100 | 100 | 28.5 | 5 | 0 | 43 | 41 | 49 |
Senior Lecturers | 100 | 100 | 34 | 9 | 0 | 49 | 44 | 35 |
Professors | 100 | 100 | 100 | xx | 0 | 78 | 39 | 36 |
Notes: as for Figure 7.3.
Turning to the nature of the external organisations referencing university researchers in the social sciences, Figure 7.5 shows that for lecturers and senior lecturers the most common external source was think tanks, confirming the view of them in Chapter 6 as assiduous collators of university research and important ideas aggregators in the contemporary period. The second largest source of external impacts for academics in these two ranks was interest groups, with other civil society organisations coming in at a closely similar level. Thus for lecturers and senior lecturers the main external impacts occur in the sectors of society closest to their discipline and most interested in their line of research. Figure 7.5 also shows that press and media interest was moderate, that impacts with government were slightly less, and that impacts with business were less again. Turning to professors, their pattern of external influences in Figure 7.5 is rather different. It appears more rounded because professors attract somewhat higher proportions of their total references from government and from the press. Their references from business and diverse sources are also slightly raised compared with more junior staff.
Figure 7.5: Which kind of external sources referred to academics and researchers across academic ranks (for five social science disciplines in the IPD)
Looking across the five disciplines included in our pilot database, Figure 7.6 shows that the largest source of external references to economists is think tanks, more so than for other disciplines, followed by civil society sources, interest groups and government.
Figure 7.6: Which kind of external sources referred to academics and researchers across five social science disciplines (in the IPD)
In fact, across all five social science disciplines in Figure 7.6, government seems to account for around 10 per cent of external references (least for geographers). Perhaps somewhat unexpectedly, political scientists as a group attract the most references from business, as well as from interest groups, and fewer from civil society – but like economists their greatest external influence is on think tanks. Geographers have a completely different pattern, with much less influence upon think tanks, a much greater impact on civil society, and the lowest rate of influence on interest groups of any of our disciplines. At the bottom of Figure 7.6 the patterns for law academics and sociologists are surprisingly similar, with a strong dominance of civil society and interest group influences. Both have few business impacts, but law academics are mentioned somewhat more by government agencies and officials, while sociologists score relatively higher in press and media coverage.
A key question arising from the first section of this chapter concerns how far the external impacts of social science academics can be shown to correlate with their academic citation scores. Inherent in the previous analysis is the idea that academic credibility is only one of many different factors that shape external influence – hence we might expect to see a relatively low correlation between academic and external impacts. Across our complete set of 120 academics in the pilot database the correlation coefficient is in fact 0.42, significant at the 1 per cent level – so academics who are cited more in the academic literature of the social sciences are also clearly cited more in Google references from non-academic actors. However, this correlation could, of course, run both ways: it may show that university researchers with greater academic credibility also have more external influence, but it could equally suggest that academics judged significant or influential externally attract extra academic citations as a result.
Figure 7.7 shows that this linkage is weak for lecturers (whose academic publication records are often still limited), strongest for senior lecturers, and weak for professors. This last result is almost certainly an artefact of our method, since we imposed a ceiling of 100 external references on all individuals in the database, a limit that half of the professors in our sample ran up against, creating a severely skewed and non-Gaussian distribution for this group. Turning to the linkages across disciplines shown in the second part of the Figure, the connection between external influence and academic citations is strongest for academics in law and sociology, somewhat weaker for economists, and both much weaker and non-significant for the geographers and political scientists in our sample.
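The attenuating effect of the 100-reference ceiling can be illustrated with a toy simulation. All the numbers below are invented for the sketch (they are not IPD data): we generate a latent external-mentions variable that tracks academic citations, then impose the collection ceiling, and the measured correlation falls below the uncensored one.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(7)
# Illustrative latent model: external mentions follow academic citations plus
# noise. Slope, means and spreads are all invented parameters for this sketch.
citations = [random.gauss(200, 60) for _ in range(200)]
latent = [0.5 * c + random.gauss(0, 20) for c in citations]
# Impose the study's collection ceiling of 100 references per person
# (roughly half the simulated cases hit the cap).
capped = [min(max(m, 0), 100) for m in latent]

r_latent = pearson(citations, latent)   # strong underlying association
r_capped = pearson(citations, capped)   # attenuated by the ceiling
```

Censoring half a group at the ceiling removes exactly the variation that carries the association, which is why the professors’ observed coefficient understates their true linkage.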
Figure 7.7: Correlation coefficients between cleaned academic citation scores and external actors citing influence, in the IPD
Seniority | Lecturer | Senior Lecturer | Professor |
---|---|---|---|
Correlation coefficient | 0.278 | 0.552 | 0.22 |
Significance level | p < 0.1 | p < 0.01 | Not significant |
N | 48 | 36 | 36 |

Discipline | Sociology | Law | Economics | Geography | Political Science |
---|---|---|---|---|---|
Correlation coefficient | 0.595 | 0.591 | 0.415 | 0.299 | 0.194 |
Significance level | p < 0.01 | p < 0.05 | p < 0.05 | Not significant | Not significant |
N | 24 | 24 | 24 | 24 | 24 |
Notes: As for Figure 7.3
7.3: Credit claiming for research
Using a screened version of main Google references seems to be an effective and increasingly relevant method for tracing external references, perhaps especially for the social sciences. It is also interesting to consider briefly the behaviour of individual lead researchers in seeking to identify impacts. We draw on two analyses. The first covers an intensive assessment of 20 projects funded by the ESRC in political science, where project lead researchers were asked to nominate five main stakeholders with an interest in the success of and outcomes from their project, and to indicate the degree of impact which they claimed at the end of the project (LSE Public Policy Group, 20xx). Figure 7.8 shows the relationships between the outputs achieved by the project and the impacts claims made by researchers, and it is immediately apparent that there is no worthwhile or substantial pattern, with researchers in the top left-hand quadrant seeming to over-claim in ‘hype’ mode, and those in the bottom right-hand quadrant seeming to under-claim for impacts in a diffident or unperceptive mode.
Figure 7.8: Impacts claimed by project lead researchers for their projects, graphed against the number of references achieved
However, in Figure 7.9 we record the revised impacts claims that were arrived at by the Impacts Project researchers who moderated the project documents in detail, and also re-assessed the impacts achieved using somewhat better-developed methods than the overwhelmingly intuitive or ‘common sense’ accounts written up by the lead researchers in their response documents. The key effect here is to create a reasonable, if still weak, correlation between the total output score for projects and the moderated impact assessments, with a lower standard deviation and a more recognisable (if still variable) pattern.
Figure 7.9: Revised degree of association between our overall moderated impact evaluation and our impact score from our unobtrusive web search
In a related analysis for the British Academy we also looked intensively at the promises on impacts made by 37 successful applicants from the humanities and social sciences disciplines seeking research grant support, and compared them in detail with the impacts claimed by the lead researchers at the end of the project. Figure 7.10 shows the results. There is a rather simple pattern of alteration. In applying for grants, the researchers principally mentioned impacts relating to government agencies and bodies, with influence on think tanks as the second most common claim. In post-completion reports, the two switched positions, with most of the influence claims relating to think tanks, and with claims for government impacts considerably reduced. By contrast, claims for influence over foreign governments and international organisations were the third most common in both pre- and post-research reports. Claims of business impacts were slender at both stages.
Figure 7.10: Anticipated and achieved impacts claimed by research leads for British Academy funded projects
Notes: Number of projects examined = 37