Taming Silicon Valley by Gary Marcus examines the growing influence of AI, controlled by profit-driven Big Tech companies, on society and the dangers of AI’s unchecked development. Marcus’s insights and his proposals for stronger oversight, democratic accountability and regulation are compelling, though implementing such change will be an uphill battle, writes M. Kerem Koban.
Taming Silicon Valley: How We Can Ensure That AI Works for Us. Gary F. Marcus. MIT Press. 2024.
Artificial intelligence (AI) is everywhere, and is re-structuring professional, individual and societal norms, procedures and regulations. With the rise of Generative AI, discussions around the role and the function of AI, and its dangers, have become increasingly salient. In this context, Gary Marcus’s Taming Silicon Valley examines how AI interacts with our lives, how Big Tech (made up of companies like Meta, Google and OpenAI) is influencing policies, and how to claim rights for “fair, just, and trustworthy” AI given that “[W]e aren’t passive passengers on this journey” (5).
The origins and development of AI
The book begins by examining the origins of AI: a research programme at Dartmouth College in 1956 which aimed to study machines and figure out how to make them intelligent. Training them to play chess was one of the first tasks. The next stage was integrating intelligence with the processing power of large language models (LLMs), a key development that enabled machines not only to operate in (relatively) intelligent ways but, more importantly, to start generating content, guiding preferences and performing tasks that we used to do ourselves. As such, AI can now store statistics, words and images, and generate different types of content, among its many other capabilities.
Has such a highly capable “organism” been useful for us? A recent edited volume on the governance of Generative AI raised concerns about breaches of intellectual property rights, power politics shaping the regulation of AI and the emotional impact on “low-skilled” labour arising from the automation risk. Similarly, Marcus points to various immediate threats of Generative AI: political disinformation (eg, fake accounts, fake content), market manipulation (eg, fake news about firms or elections), accidental misinformation (eg, LLMs generating false content without deliberate intention to manipulate), bias and discrimination (eg, the social service scandals in the Netherlands and Australia), among others.
AI’s dangers and the struggle to hold Big Tech accountable
Taming Silicon Valley explores these immediate threats while also reflecting on the difficulty of incentivising Big Tech to address them. Marcus argues strongly that “the desire to do good decreases with time, in the eternal quest for growth” (84). He claims that while Big Tech has been oriented towards productivity and human empowerment since the late 1950s (83), recent years have witnessed what he calls “moral descent”: profit maximisation at the expense of data subjects and society more generally. This “moral descent” activates various tactics to manipulate public opinion about AI and Big Tech and to influence policy processes. Marcus notes that Big Tech emphasises future profits and technological breakthroughs to maintain the “AI hype”, minimises the risks of (generative) AI and mounts public attacks on its critics (86-98). Additionally, Big Tech seeks to influence policymaking in its own interest through lobbying and employing regulators.
The question that emerges is how ordinary people can counter the influence of these firms and make AI fair, accountable and beneficial to us. Marcus argues that we must insist on our data rights and privacy, demand transparency, incentivise “good AI”, and hold the tech companies liable for their operations. He argues that we should invest in AI literacy and in research on trustworthy AI that generates reliable content, information and knowledge, establish independent oversight with multiple layers, and ensure international coordination for AI governance.

Yet such measures are easier said than done, considering the institutional power Big Tech currently enjoys. Many public services, along with private businesses, rely on these companies and their investments in AI infrastructure, affording them substantial influence over policymaking. Relatedly, this power distorts regulatory policies. Countering the Big Tech lobby would require the institutionalisation of democratic procedures and oversight, as Rahman puts it cogently. But the tech companies would surely resist democratic oversight and, if it were implemented, refuse to comply with it. Lobbying, revolving-door arrangements and policy advice, along with many other channels of policy influence, are likely to impede governments from implementing stricter regulation.
Challenging Big Tech through democratic institutions
Democratisation to counter power and domination could be enabled, however, by requiring “minipublics” (independent representatives of citizen/consumer groups or non-governmental organisations) on the executive boards of these companies (similar to McCarthy’s recommendation for regulating finance). Another channel is what Gerald Epstein argues for in a recent book on regulating the banking sector: “[T]o oppose the power of the Bankers’ Club and the more general assault on democracy, working for political candidates that support democracy and publicly oriented policies… is critical” (278). In a similar vein, nominating and selecting politicians who could challenge Big Tech’s domination may be the best way to address the twelve threats Marcus identifies and to insist on the solutions he proposes.
The “tech coup” carried out by the “Big Tech oligarchy” (see Zucman, Slobodian, Moynihan) now seems not only to hold institutional power through control of the infrastructure (ie, neurons), but, more worryingly, to have established control over the executive branch of the state apparatus. Big Tech’s state capture, or in other words its technocolonisation of the state apparatus, is threatening democracy. Many recent legal, administrative and policy decisions in the US (and elsewhere around the world, owing to oligarchic, feudal features) indicate an extremist drive to de-institutionalise democratic norms and procedures, and pose an existential threat to the administrative state.
Alongside the threat to democracy is the pressure on the state apparatus, and specifically on the bureaucracy, exacerbated by the ascent of populism (with authoritarian and even fascist leanings) around the world. A stark example is the transformation of the federal bureaucracy in the US by Elon Musk’s DOGE. Actions such as dismantling or restructuring public organisations, firing public employees, and cutting funding for social services and universities aim to detach the bureaucratic apparatus from society, and will eventually lead to a significant decline in state capacity to perform administrative and policy functions. Given that the administrative branch is fundamental to regulating, steering and guiding business and society, the attack on and challenge to the bureaucracy is not surprising. Yet we need the bureaucratic apparatus to counter power and domination and to democratise Big Tech and AI regulation and governance.
Finally, the increasing power of Big Tech and AI could cause a “tragedy of the commons”, as Marcus claims: “the field of AI is making a mistake, sacrificing long-term benefit for short-term gain … everybody goes for the trout while they are plentiful, and suddenly there are no trout left at all. What’s in everybody’s apparent self-interest in the short term is in nobody’s interest in the long term” (171). To counter this trend, Mansouri and Bailey suggest five ways to be “anti-AI”, as opposed to taking a pessimistic and passive stance against business power: dismantle, “smash”, tame, escape, or resist.
None of these alternatives alone is sufficient to direct AI and Big Tech towards social benefit. This is also because Big Tech will hardly sail against the tide (which is buoyed by the self-fulfilling AI hype it contributes to) and impose self-regulation, especially when gains are lucrative and business has achieved state capture in the US. Nor would it be desirable to dismantle or resist altogether, given the potential benefits of AI. Bottom-up pressure, international policy coordination, oppositional political force around the world, and encouraging Big Tech to share the fruits and benefits with society are critical to ensuring that AI works for us all.
Note: This review gives the views of the author and not the position of the LSE Review of Books blog, nor of the London School of Economics and Political Science.
Image: JHVEPhoto on Shutterstock.