Debates about the accelerated academy often focus on the contemporary condition of acceleration, or on the transfusion of an ideology of acceleration from other domains into an unsuspecting academy. In contrast, Justin Tyler Clark argues that the intellectual underpinnings of the accelerated academy lie not in administrative and bureaucratic developments, but in attempts to create a quantified social science modelled on the natural sciences, and in particular in the development of the timed intelligence and achievement test.
This post is part of the Accelerated Academy series, you can find all the posts in the series here.
Recently, an Irish colleague shared a Gaelicism that sums up the problem with contemporary academia’s obsession with assessment: “You don’t fatten a pig by weighing it.” Too much time that could be devoted to research and teaching is spent instead on translating these core activities into the numerical scores that fill institutional reputation surveys, faculty performance reviews, student teaching evaluations, grant budgets, impact factors, university rankings, and countless other metrics. As many critics observe, so much weighing of the pig distracts faculty from their traditional mission, inflates the cost of administration, and remakes the temporal life of the university in ways this blog’s Accelerated Academy section has explored for some time. How did this happen? How have academics gone from “measuring” and “weighing” nature, culture, and society to measuring and quantifying themselves?
Over the past decade or two, this question has been approached from many angles. In The Tyranny of Metrics (2018), economic historian Jerry Z. Muller identifies higher education’s metrics obsession as a response to a heightened demand for accountability. Such accounts come in the form of spreadsheets rather than narratives; as the scholar Christopher Newfield has shown, numbers “count” more than words in administrative decisions. What is called “continuous assessment,” especially when it demands quantification of the unquantifiable, inevitably wastes time and money, which perversely creates the need for yet more time-consuming accounting. According to Stephen Ball in “The Teacher’s Soul and the Terrors of Performativity” (2003), this new assessment regime empties out human relationships in the academy, replacing the “autonomous or collective self” with a careerist self performatively dedicated to external assessment standards.
The anthropologist David Graeber’s Bullshit Jobs (2018) attributes the passion for metrics not to policymaker expectations but to a far older cause: like the corporations it increasingly emulates, the academy operates under the logic of feudalism. Just as medieval lords increased their prestige by acquiring far more retainers than necessary, so university administrators establish their fiefdoms by assigning underlings to the modern equivalent of doorknob-polishing. The status of a dean is proportionate to the number of staff he or she employs in drawing up assessment-driven planning “visions” that rarely come to fruition.
A third explanation looks beyond public policy and organisational culture to the uncertain identity of the university. As the late literary critic Bill Readings argued in The University in Ruins (1997), the 20th-century university exchanged the outdated ideal of Culture for a more inclusive, yet purposely vague, concept: Excellence. Like other current university marketing buzzwords (e.g. “service,” “leadership,” and “creativity”) that the writer William Deresiewicz argues have little meaning, “excellence” is a useful ideal precisely because no one knows what it is. Metrics create an illusion of consensus around concepts like “excellence.” Yet whatever the rhetoric of excellence means, argues Filip Vostal in Accelerating Academia: The Changing Structure of Academic Time (2016), its effect is quite material. Such rhetoric multiplies the number of “professional and bureaucratic demands that can be experienced as time-pressure” and demands the speediest possible adaptation to the latest conception of “excellence.”
Whatever function metrics serve in the academy, they are often seen as symptomatic of a particular historical development: the ascendance of neoliberalism. We are busier in part because the logic of Taylorism, having spread from the factory floor to the corporate office, has now infected the university itself. Academics, it would seem, bear a double burden: we are not only pressured to publish and teach more in the same amount of time, but also to perform the role of the scientific manager, accounting for that productivity. At a time of professional precarity, it is difficult to say no.
My own work doesn’t so much dispute these developments as reconsider what made them possible in the first place. Rather than being a late, innocent casualty of neoliberal metricisation, as much of the previous literature suggests, the university, I argue, has instead become the target of one of its own early 20th-century innovations and is complicit in the historical origins of acceleration. As such, a seminal moment in the rise of acceleration can be traced to the nascent discipline of psychology’s desire to establish itself as a quantitative science and, as part of this drive, to a singular invention: the timed intelligence and achievement test.
Today, the use of timed tests like the SAT (the U.S.’s three-hour-fifty-minute undergraduate entrance exam) to assess prospective and current students seems almost beyond challenge. While we sometimes question the relevance and neutrality of the tests’ content, we rarely question the implicit relationship between intelligence, knowledge, and speed inherent in them. The public’s belief in this relationship runs deep, reflected in the popularity of bestselling books on quick thinking by Malcolm Gladwell and Daniel Kahneman, as well as in the tendency to represent the genius of figures such as Sherlock Holmes in terms of their rapid thought.
Yet for most of the 19th century, at least in the United States, educators had little use for timed tests. The stereotypical figure of intelligence and learning was the deliberate, self-possessed orator, who rushed neither thought nor words. The value of this more measured idea of intelligence is paradoxically underscored by the impact of corporate demands for faster communication. In an echo of the more recent effects of acceleration, the professional status of stenographers declined precisely as their profession embraced “speed tests” that evaluated them according to a new measure of ability: words per minute (WPM).
Ironically, it was psychologists, not clerks or economists, who helped legitimise WPM and similar measurements as scientifically meaningful. In the late 19th century, the clock was one of the few instruments psychologists had for measuring the intangible operations of the mind. But while time studies were obviously useful outside the laboratory, on the assembly line, they had far less utility within it. When the first generation of American experimental psychologists brought their undergraduate subjects into campus laboratories, they did not think of themselves as measuring the speed of mental processes, but rather their duration. As early reaction-time experiments revealed, the undergraduates with shorter reaction times (or longer ones, for that matter) were not necessarily the highest achievers. Only when psychologists began to model their experiments on the clerical speed tests they had previously inspired could they begin to advance the claim that faster was better. Ultimately, such timed testing did not simply express, but create, for experts and public alike, the conviction that smarter minds were faster.
This conviction is crucial to the operation of the modern academy, insofar as it allows academic work to be evaluated through the model of industrial efficiency. The institutional practice of evaluating scholars according to the speed of their publications, and the worth of those publications according to the citations they generate within certain time windows, depends on an implicit conviction that “speed of thought” exists and governs the production, evaluation, and discussion of scholarly labour.
Ultimately, to respond to the accelerated academy, scholars need to look back at least a century, to our predecessors’ role in reconceiving intellectual work as something that could be accelerated. When academics first designed the scale of mental speed, few dreamed they would one day find themselves standing on it. Any critique of the accelerated academy must take into account not only the contemporary factors of neoliberalism and the knowledge economy, but also the modes of thought that enable it and the role played by early 20th-century psychology in establishing them.
This post draws on the author’s article, “The secret of quick thinking: The invention of mental speed in America, 1890–1925,” published in Time & Society.
Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
Featured image credit: Agê Barros via Unsplash.