In Language and the Rise of the Algorithm, Jeffrey Binder weaves together the past five centuries of mathematics, computer science and linguistic thought to examine the development of algorithmic thinking. According to Juan M. del Nido, Binder’s nuanced interdisciplinary work illuminates attempts to maintain and bridge the boundary between technical knowledge and everyday language.
Language and the Rise of the Algorithm. Jeffrey Binder. The University of Chicago Press. 2023.
Arguably, the history of what we now call algorithmic thinking is also the history of the consolidation of algebra, mathematics, calculus and formal logic as tools for composing, enunciating, and thinking about abstractions such as “some flowers are red”. But in less obvious ways, Language and the Rise of the Algorithm shows, it is also the history of trying to compute with, and often in spite of, language, to convey a meaningful proposition about the world. In other words, it is the history of ensuring that “red” actually means red – that we are all clear on who sets what red means (for example, experts through definition or ordinary people through usage) and agree on it – and of whether agreeing about these things is what matters when we use language.
The history of what we now call algorithmic thinking […] is also the history of trying to compute with, and often in spite of, language, to convey a meaningful proposition about the world.
Harking back to the 1500s, the first of the book’s five chapters examines attempts to use symbols to free writing from words at a time when vernaculars were plentiful, grammars unstable and literacy rates low. Algebra was not then considered part of mathematics proper, but its rules, expressed in spoken language, were used for practical purposes like calculating taxes and inheritance. From myriad writing experiments emerged algebraic symbols: uncertain and indeterminate, they enabled computational reasoning about unknown values, a revolution that peaked when Viète first used letters in equations in 1591 (33-36).
Algebra was not [in the 1500s] considered part of mathematics proper but its rules, expressed in spoken language, were used for practical purposes like calculating taxes and inheritance
Chapter Two explores Leibniz’s attempts to produce a philosophical language made of symbols and unburdened by words, such that morals, metaphysics, and experiences are all subject to calculation. This was not an exercise in spitting out numbers; the aim was to demonstrate the reasoning behind every step of communication: a truth-producing machine (62-64). The messiness of communication struck back: how can one ensure that all terms and their nuances are understood in the same way by different people? Leibniz argued that knowledge was divinely installed in us, waiting to be unlocked by devices such as his, but Locke’s argument that knowledge comes from sensory experience and requires an agreement over what things mean won the day (79), paving the way towards an emphasis on concepts and form.
Leibniz argued that knowledge was divinely installed in us, waiting to be unlocked […] but Locke’s argument that knowledge comes from sensory experience and requires an agreement over what things mean won the day
Leibniz also sought to resolve political differences through that language. Chapter Three argues Condorcet shared this goal and the premise that vernaculars were a hindrance, but contrary to Leibniz, he believed universal ideas needed to be taught, not uncovered. Condillac’s and Stanhope’s experiments with other logical machines – actual, material devices designed to think in logical terms through objects – epitomised two tensions framing the century after the French Revolution: first, the matter of whether the people, and their vernacular culture, or the learned, and their enlightened culture, should govern shared meanings – that is to say, give meaning – and second, whether algebra should focus on philosophical and conceptual explanations or on formal definitions and rules (121).
The latter drive would prevail, and as Chapter Four shows, rigour came to emanate not from verbal definitions or clarity of meanings, but from axiomatic systems judged on consistency: meanings are irrelevant to the formal rules by which the system operates (148). Developing this consistency would not require the complete replacement of vernaculars Leibniz and Condorcet argued for: rather, symbolic forms would work alongside vernaculars to produce truth values, as with Boolean logic – the one powering search engines, for example. The fifth and last chapter, “Mass Produced Software Components”, examines the rise of programming languages, in particular ALGOL, and the consolidation of what algorithms are expected to deliver regardless of specifics: intelligible, actionable results within a given amount of time (166).
Binder’s rigorous dissection of debates over language, philosophy, geometry, algebra, history and culture spanning 500 years integrates debates that most disciplines today, aside from some strands of media studies and Science and Technology Studies, tend to treat separately
This book is a tightly packed, erudite contribution to the growing concern in the Humanities with algorithms. Binder’s rigorous dissection of debates over language, philosophy, geometry, algebra, history and culture spanning 500 years integrates debates that most disciplines today, aside from some strands of media studies and Science and Technology Studies, tend to treat separately or with a poor sense of their inbuilt connections. A welcome result of this exercise is the historicisation of certain critiques of technological interventions in politics that, generally lacking this kind of integrated, long-range view, we tend to treat as novel and cutting-edge. For example, an 1818 obituary for Charles Mahon, third Earl of Stanhope and inventor of the Demonstrator, a “reasoning machine”, already claimed that technical solutions for other-than-technical problems such as his tend to replicate the biases of their creators (113), and often reproduce the very problems they were intended to solve. This critique of technoidealism is now commonplace in the social sciences.
A second benefit of the author’s mode of writing is not explicit in the book but is arguably more consequential. From Bacon’s dismissal of words as “idols of the market” in 1623 (15) to the goal of PageRank’s developers in the 1990s to remove human judgement through mechanisation (200), the book traces attempts across the centuries to free reason and knowledge from language and rhetoric. In doing this, Language and the Rise of the Algorithm effectively serves as a highly persuasive history of the affects, ethics and aspirations of technocratic reason and rule. The book cuts across the histories of bureaucracy and expertise and the birth of governmentality to tell us how an abstraction emerged in the way we make meaning work – an abstraction we are asked to trust in, and argue for, partly because it is the kind of abstraction it ended up being.
The book traces attempts across the centuries to free reason and knowledge from language and rhetoric
This is a rich and nuanced book, at times encyclopaedic in scope, and except for a slight jump in complexity and some jargon in the fifth and last chapter, it will be accessible to readers lacking prior knowledge of algorithms, mathematics or language philosophy. It will be of interest to scholars across the social sciences and humanities, from philosophy and history to sociology and anthropology, as well as readers in political science, government studies and economics for the reasons listed above. It could work as course material for very advanced students.
This post gives the views of the author, and not the position of the LSE Review of Books blog, or of the London School of Economics and Political Science.