April 6th, 2016

Getting our hands dirty: why academics should design metrics and address the lack of transparency.

Metrics in academia are often an opaque mess, filled with biases and ill-judged assumptions that are used in overly deterministic ways. By getting involved with their design, academics can productively push metrics in a more transparent direction. Chris Elsden, Sebastian Mellor and Rob Comber introduce an example of designing metrics within their own institution. Using the metric of grant income, their tool ResViz shows a chord diagram of academic collaboration and aims to encourage a multiplicity of interpretations.

This piece is part of a series on the Accelerated Academy.

There has been much anxiety about metrics and their multifarious implications for contemporary scholarship. The evocative notion of a ‘metric tide’ seems simultaneously inevitable and all-encompassing. In this article, we argue for the need, and the opportunity, to direct, harness and at times resist the flow. But to do so, we need to complement critiques of metrics by getting our hands dirty: reflectively and critically designing metrics ourselves.

From our own discipline of human-computer interaction (HCI) we might characterise this approach as design-led research – critically making and doing to actualise and confront the issues in an area. At Open Lab, we use these methods to critically investigate technology across domains, in health, education and governance. This has much in common with recent methodological moves within the social sciences. Outlining a manifesto for ‘live methods’, Back & Puwar argue for the need to “develop empirical devices and probes that produce affects and reactions that re-invent relations to the social and environmental”.

Studying the design of metrics has two aims. First, to make more transparent the nature of their construction – the very fact that metrics are designed – the result of many overlapping factors and interests. Second, to reveal and seek to mediate the culture around metrics by producing systems that are for rather than of academics. In what follows, we introduce an example of designing metrics within our own institution, and reflect on the lessons we are starting to learn.

ResViz: visualising collaboration through funded projects

ResViz is a visualisation that we developed, and which was funded, in close collaboration with our university management. Built on an internal university dataset of externally funded projects, ResViz shows a chord diagram of academic collaboration (by individual academic, school and faculty), using the metric of grant income. The visualisation is based on live data, updated nightly, and is interactive. The data cannot be downloaded, but multiple perspectives on it are provided to encourage exploration, with collaboration as the central focus.

ResViz, seen in use here, is an interactive visualisation that shows grant income and academic collaboration at Newcastle University.
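To make concrete what ‘designing’ such a metric involves, here is a minimal Python sketch of the kind of aggregation a chord diagram like this rests on. The records, the field names and the choice to credit each collaborating pair with the full project income are all our illustrative assumptions, not the actual ResViz implementation.

    # Illustrative sketch: aggregating grant income into pairwise
    # collaboration weights, the structure a chord diagram is drawn from.
    # All records and field names here are hypothetical.
    from collections import defaultdict
    from itertools import combinations

    projects = [
        {"title": "Project A", "income": 250_000, "schools": ["Computing", "Sociology"]},
        {"title": "Project B", "income": 120_000, "schools": ["Computing", "Medicine"]},
    ]

    def collaboration_matrix(projects):
        """Sum grant income over every pair of collaborating schools."""
        matrix = defaultdict(float)
        for p in projects:
            for a, b in combinations(sorted(set(p["schools"])), 2):
                # Design decision: each pair is credited with the FULL
                # project income; it could equally be split or weighted.
                matrix[(a, b)] += p["income"]
        return dict(matrix)

    print(collaboration_matrix(projects))
    # {('Computing', 'Sociology'): 250000.0, ('Computing', 'Medicine'): 120000.0}

Even this toy example forces a choice, whether income is counted once or once per partner, that silently shapes what the visualisation appears to say.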

Notably, however, ResViz is eventually intended to be available to all staff, and so will make this data about funding and collaboration visible for the first time. Senior managers envisioned that ResViz (like other metrics) could be not only a tool for performance management, but one that was transparent throughout the university. As such, it might promote self-management and inquiry into the funding landscape and collaboration by academic staff at all career stages.

Given the wide-ranging concerns of ‘audit culture’ and ‘management by metrics’, we do not see the above agendas naively, or as unproblematic. Nevertheless, the university, through a variety of opaque processes and policies, is already using these metrics. As designers, we saw the potential in ResViz to investigate the implications of making metrics transparent, and sought opportunities for their deliberation, contestation and use by academics themselves. Furthermore, we sought to complement emerging frameworks around ‘responsible metrics’ and investigate their practical application. Alongside our involvement in the design of ResViz, we have used the pilot deployment as an opportunity to interview 20 key stakeholders, from senior managers and administrators to senior and early-career researchers. This is getting our hands dirty – but we advocate design as a crucial and productive partner to the long-held critique and suspicion of metrics.

How metrics are made

We should understand metrics as designed artefacts. While critiques about ill-judged assumptions, biases and the aura of ‘dataism’ all hold true, our experience with ResViz has shown us that it’s more nuanced than this. Producing a system like ResViz entails multiple partners and data sources coming together with different and at times conflicting agendas.

In our case, this entailed metadata such as staff names coming from central IT; research contracts from HR; and project memberships from the internal project management system. Each of these data sources may be separately managed – with different individuals responsible for how the data is applied and shared, the internal consistency of that data, and the technical requirements for its use. Each individual (with their own professional ethics and judgments regarding metrics) would ideally want to know exactly how this data would come to be used at each stage. The resulting design is inevitably a compromise of many different responsibilities, data processes, and politics coming together – with a potentially huge impact on the final design.
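As a hedged illustration of why these seams matter, consider a toy join across such sources. The record shapes and the shared staff_id key are assumptions made for this sketch, not the university's actual schema.

    # Hypothetical sketch: joining staff metadata (central IT), research
    # contracts (HR) and project memberships (project management system).
    staff = {"s1": "A. Researcher"}                  # from central IT
    contracts = {"s1": {"role": "Lecturer"}}         # from HR
    memberships = [
        {"staff_id": "s1", "project": "P-042"},
        {"staff_id": "s9", "project": "P-077"},      # no matching record
    ]

    merged, unmatched = [], []
    for m in memberships:
        sid = m["staff_id"]
        if sid in staff and sid in contracts:
            merged.append({**m, "name": staff[sid], **contracts[sid]})
        else:
            # Records that fail to join vanish from the metric unless the
            # pipeline deliberately surfaces them; that too is a design decision.
            unmatched.append(m)

    print(len(merged), "joined;", len(unmatched), "dropped without surfacing")

It is precisely at these seams, who owns each source, how identifiers line up, and what happens to the leftovers, that the politics of a metric become concrete.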

Image credit: User experience design training by Andy Bright, Flickr, CC BY-SA

For example, while many of the project funders are public charities and research councils who provide their own public listings of funded projects, there are also privately funded industry research projects that would not otherwise be publicly visible. As such, a decision was made to anonymise all funder and project titles within ResViz. While this is of less consequence for administrators and senior managers who may already have access to this data, it significantly hampers the meaning and usefulness for academics – who might have relied on such details as a means of discovering new funding opportunities. These sorts of implications dramatically affect how these metrics can be used, and crucially how they are perceived – as a management tool rather than an academic one.
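One way such anonymisation could be implemented, offered purely as a sketch since ResViz's actual scheme is not described here, is to replace each title with a stable but meaningless label, so the chart stays consistent across nightly updates while the identifying detail is lost:

    # Hypothetical sketch of title anonymisation; not ResViz's actual scheme.
    import hashlib

    def anonymise(title: str, prefix: str = "Project") -> str:
        # A stable hash keeps the label consistent across nightly updates
        # without revealing the underlying funder or project title.
        digest = hashlib.sha256(title.encode("utf-8")).hexdigest()[:6]
        return f"{prefix} {digest}"

    print(anonymise("Wearable Sensing for Rehabilitation"))
    # prints something like "Project 3a91f0" (the label varies by title)

The income figures remain visible, but exactly the details academics might have used to discover new funding opportunities are gone: the trade-off described above.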

Generally, our position is in favour of greater transparency around metrics and their use. In this way they might be better understood and contested, especially by those with positions of power. However, as is clear with ResViz, total transparency is rarely achievable. Even in this case, where university management explicitly espouses transparency as a principle, in practice there are frequent limitations. Through the design of metrics we can recognise where compromises are made, why and to what consequence. Ultimately, we can then understand what metrics are workable within existing and technically supported processes and how these can be oriented for academics.

Mediating metric culture

In speaking with staff and administrators, however, it became clear that the deficiencies of any data are only ever half the problem. For many, it was as much about how ResViz was to be used, and by whom. Would ResViz be a way of raising questions or determining answers? Could staff from other fields understand discipline-specific funding opportunities? Did they have the necessary prior context to read the data appropriately?

So while we’re insisting on design-led research into metrics, we should be wary of any suggestion that the right techniques, better visualization or more data would resolve all concerns. Individuals and managers clearly have a responsibility to use and judge metrics with care. That said, once again, we position good design as that which is responsive to and mediates particular cultures around metrics in an organisation. For example, how does ResViz contribute to or counter concerns about target-driven management? Is there any impact on internal ‘REF’ reviews? These local politics are critical in the culture around metric-based systems, their acceptance and enactment.


It is clear though, that many academics feel in a bind in their relation to metrics. As Burrows has suggested, it’s ‘play or be played’. In the same breath, participants would talk about how they could make use of ResViz in communicating a particular narrative, while simultaneously warning of the way others, unlicensed as such, might interpret these metrics.

“I think to be able to use this to communicate in a selective way, externally or internally, would be very useful, but I worry about how having open access to it, you’re seeing a very limited dimension of a person’s professional role and how that could be interpreted or misinterpreted.” (Senior academic)

And while such caution is warranted, part of the problem with this view is that it assumes there is one canonical reading of the data and the reality it presents. By contrast, we hoped the interactive and exploratory nature of ResViz would encourage a multiplicity of perspectives, with any one view of the data a necessarily partial indicator of a greater whole. Drawing on the field of ‘Critical InfoViz’, ResViz aims to be a ‘questioning lens’. It lacks a great deal of necessary context, but seeks to display humility, and is positioned ideally as a starting point rather than a conclusion.

Metrics exist, are often used in an opaque fashion, and carry a risk of being appropriated too deterministically. Beyond critique, getting involved with the design of metrics offers the opportunity to create tools that productively bring them into question, demonstrating their capacity for multiple interpretations. However, this is about more than simple resistance. A multiplicity of views creates the means for a wider constituency to engage with, learn from and make sense of the vast and potentially telling data that the university holds. It allows people to create and present their own stories with this data. The risk is that this could further propagate a problematic culture of academic quantification and measurement – even amongst peers.

Yet we would argue it is the current asymmetry, monopoly and lack of transparency in the way existing metrics are enacted that is most potentially damaging. A wider, collective and interdisciplinary understanding of metrics as necessarily partial, subjective and indicative, mediated by careful and critically reflective design, is surely a necessary counter.

This post is part of a series on the Accelerated Academy and is based on the authors’ contribution to Power, Acceleration and Metrics in Academic Life (2–4 December 2015, Prague), which was supported by Strategy AV21 – The Czech Academy of Sciences. Videocasts of the conference can be found on the Sociological Review.

Note: This article gives the views of the authors, and not the position of the LSE Impact blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns about posting a comment below.

About the Authors

Chris Elsden is a PhD student with a background in sociology, whose research concerns fieldwork and design for the experience of living and working a ‘data-driven life’. He can be found on Twitter @ElsdenChris.

Sebastian Mellor is a PhD student with a background in Mathematics, Statistics, and Machine Learning and an interest in visualisation, interaction design, infrastructure development, and technologies for enhancing research, collaboration, and technical exploration.

Rob Comber is a social psychologist, and lecturer in Computer-Mediated Communication. His research primarily examines the methods and tools for communities, including data communities.

The authors are all part of Open Lab, a human-computer interaction, social and ubiquitous computing research group in the School of Computing Science at Newcastle University, which can be found on Twitter @OpenLab_Ncl.
