Artificial intelligence has survived the hype: investment from public and private actors keeps rising, the total market is growing, AI's potential to shape firms' operational capacity is better understood, and nations now look to AI as the next general-purpose technology for driving growth and competitiveness. The issue has come to the fore with new demand for public use of, and investment in, novel AI applications for public health management. Across businesses, there is a need to build workforce resilience against uncertainty; across nations, international cooperation can produce common standards for responsible data sharing and analytics provision. It is therefore time to ask grander questions: what should Europe demand from the firms and individuals designing and leveraging artificial intelligence? What function should we make artificial intelligence serve in society?
There can only be iterative answers. While the EU may have a unique ability to experiment with the regionalisation and international organisation of collective visions for AI, we need a better understanding of the AI scenarios we want to avoid, and institutions that strengthen the EU should those scenarios emerge. The quality of such institutions will depend on an improved collective vision of what AI can provide, what citizens want, and how best to marry the two. That, in turn, will be decided by how the EU builds and pursues a vision for democratised AI, and by how well it understands the different ways this process may unfold. The process must be steered towards an environment that incentivises collective value creation.
On the surface, the economic case for improved AI access is simple. Over time, firms that can more effectively leverage and integrate strategic information in global markets will outperform those that cannot, whether by updating products faster, identifying new markets more clearly, or engaging consumers more effectively. In doing so, AI can reshape the microeconomic environments of everyday activity for individuals and businesses, and with them the strategic economic potential of industries and sectors, as well as the innovation agendas that follow. Indeed, it is essential to understand that whatever shape the AI market takes will help form and mould all other markets.
Because of this exceptional long-run potential, the AI industry has become concentrated. The pool of machine learning engineers is limited, which has driven wages that only large firms can afford. These firms hold the data and infrastructure to deploy models in value-maximising ways, and they can afford to fund novel research unconnected to short-term turnover demands.
This limited talent pool is compounded by a trend of major platforms competing over how AI is democratised: investing in open machine learning libraries, open-sourcing best-in-class models that others cannot afford to build (GPT, BERT), and buying major machine learning collaboration platforms, as with Google's acquisition of Kaggle and Microsoft's acquisition of GitHub. But the race to democratise, or rather the race to own the platform through which firms integrate AI into their technology stacks and deepen their consumer relationships, is being joined by a broader spectrum of actors: IBM's continuing work on Watson, AT&T's Acumos, H2O.ai, Ericsson's AutoML investments, DataRobot, and beyond.
These investments have reshaped the nature of competition: it is less a split between the race for algorithmic novelty and the race to build the platform than a contest over which algorithmic novelty can be leveraged and automated by a new generation of automated machine learning (AutoML) and no-code and low-code tools. The issue, then, is less whether AI will be diffused and democratised than what the scenarios for its diffusion will be: whether democratisation works in favour of collective value creation or entrenches existing market power; whether standards will be empowering, enabling, and inclusive, or institutions and practices extractive; whether democratisation empowers a new generation of firms and citizens or establishes a second digital divide.
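To make that shift concrete, here is a minimal sketch, in Python with the open-source scikit-learn library, of what a basic form of automated model selection looks like today. The dataset, model, and parameter grid are illustrative assumptions, not anything from this post; the point is only that a generic search loop now substitutes for much of the hand-tuning that once required scarce machine learning expertise.

```python
# A minimal sketch of automated model selection with scikit-learn.
# Dataset, model, and grid below are illustrative choices, not a
# recommendation: any tabular task would make the same point.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# A bundled example dataset stands in for a firm's own data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The search loop explores hyperparameter combinations automatically,
# cross-validating each candidate; this is the kind of routine tuning
# that AutoML and no-code platforms wrap behind a point-and-click UI.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]},
    cv=5,
)
search.fit(X_train, y_train)

print("best configuration:", search.best_params_)
print("held-out accuracy:", search.score(X_test, y_test))
```

Commercial AutoML offerings go further, automating feature engineering and model choice as well, but the economics are the same: capability that once demanded a high-wage specialist becomes a commodity loop any mid-sized firm can run.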
The question compounds. Responsible democratisation means human-centric and user-centric standards must be conceived more broadly, considering what happens when a multitude of such standards interact with one another, and when AI applications interact and compete inter-culturally and internationally. Indeed, there are no value-neutral AI applications.
We cannot expect the divisions to be clear; they will be murky, mixing exceptionally novel solutions for public value with highly extractive institutional frameworks, in both corporate and government uses of these technologies. The focus should be to look beyond ethics towards the political economy that determines which ethical approaches succeed. Questions of data ownership, limits on the vertical integration of digital services, barriers to scale, and the restructuring of public commitments and duty of care become endemic to the very question of a viable corporate strategy. It is about more than imagining a post-advertising internet, though that would take us far; it is about finding the first principles of how we want information to relate to the fundamentals of human agency in a world in which our daily lives are increasingly conducted through digital interfaces.
Europe is afforded a unique opportunity to lead the safe exploration and pursuit of democratised AI, improving the market for AI solutions that advance new capacities, new practices, and new institutions across all sectors, including the public sector. An AI in every home, an AI in every business: in small ways this is already the case, but the gulf between minor exploration and better deployment of best-in-class open systems remains both wide and deep.
♣♣♣
Notes:
- This blog post expresses the views of its author(s), not the position of LSE Business Review or the London School of Economics.
- Featured image by Giannis Skarlatos on Unsplash
Josh Entsminger is a doctoral candidate at UCL's Institute for Innovation and Public Purpose. He investigates policy approaches to building and shaping national artificial intelligence ecosystems. He has contributed to two World Economic Forum reports, and has been published in Project Syndicate, HBR Arabic, MIT Tech Review Insights, California Management Review Insights, and the World Economic Forum Blog. His articles have been syndicated to 10 national newspapers and translated into six languages.
Mark Esposito holds appointments as professor of business and economics at Hult International Business School and at the Thunderbird School of Global Management at Arizona State University. He has been a faculty member at Harvard University since 2011. Mark is a socio-economic strategist researching the Fourth Industrial Revolution and global shifts. He works at the interface between business, technology and government, and co-founded Nexus FrontierTech, an artificial intelligence company. He has authored or co-authored over 150 peer-reviewed publications and 11 books, among them Understanding How the Future Unfolds (2017) and The AI Republic (2019).
Terence Tse is a professor at ESCP Business School and a co-founder and executive director of Nexus FrontierTech, an AI company. He has worked with more than thirty corporate clients and intergovernmental organisations in advisory and training capacities. In addition to being a sought-after global speaker, he has written over 110 published articles and three books, including The AI Republic: Building the Nexus Between Humans and Intelligent Automation.