
As artificial intelligence changes the world, it changes our language too

Byron Reese

February 18th, 2019

Estimated reading time: 5 minutes

New technologies don’t just change the world, they change our language too. The automobile era gave us over a thousand new words and phrases such as limousine, drive-through, hot rod, tailgate, and muscle car. Computers gave us chips, CPUs, beta testers, operating systems, programs and bugs. The Internet has already contributed hundreds of new words and new meanings to old words, including spam, meme, hashtag and trolling.

This all makes sense. Technology gives us new experiences, new abilities, and new problems. Thus we need new words to keep up with this changing world. The great technology of our time, artificial intelligence, will do the same – it will change the world and our language with it.

Usually the new words flow from necessity and are created organically. But with AI, I am going to jump the gun a bit and suggest some new words that we are going to need as we adjust to this new technology.

Aiporia – n – Uncertainty as to whether you are dealing with a human or an AI. From the ancient Greek aporia (ᾰ̓πορῐ́ᾱ), meaning a state of puzzlement.

Aiporia is a new phenomenon, but can be experienced on a regular basis. For instance, say you are on a website that has a real-time chat feature. You click on it, and up comes a message from “Sarah” asking if you need any help. You may wonder if Sarah is 1) a person named Sarah, 2) a person from another country not named Sarah presenting herself as “Sarah” to be more accessible, or 3) a chatbot named “Sarah.”

The confusion of aiporia is magnified further by the fact that all three may be true. Sarah may start out as a chatbot, but when you ask a more challenging question than “How much does shipping cost?”, the chat is continued by someone else, perhaps in another country. If your questions are difficult enough, the chat might be elevated to a more senior person, perhaps in the United States, and perhaps, coincidentally, actually named Sarah. Expect aiporia to become more common.

Ainigma – n – A decision made by an AI that a human cannot understand. Derived from enigma.

Pretend your company ranks third in Google for some search that is important to you, such as “Akron pool cleaning”, and your competitor ranks second. Further suppose you found a Google engineer and asked them why. Likely they would shrug and say there is no way to know. There are simply too many factors at work for a human to understand those kinds of fine distinctions. It is an ainigma.
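To make the point concrete, here is a toy sketch in Python, using made-up data and a generic learned model from scikit-learn rather than anything resembling Google’s actual ranking system. The gap between the two “sites” is real and reproducible, yet it emerges from hundreds of weak, interacting signals, so “why is B ahead of A?” has no short answer.

```python
# Toy "ainigma": a model whose decisions are reproducible but cannot be
# explained in a sentence. Hypothetical data; not Google's ranking system.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))       # 1,000 pages, 200 weak ranking signals
w = rng.normal(size=200) * 0.1         # every signal matters a little, none dominates
y = X @ w + rng.normal(scale=0.5, size=1000)  # noisy "relevance" scores

model = GradientBoostingRegressor().fit(X, y)

site_a, site_b = X[:1], X[1:2]
print(model.predict(site_a), model.predict(site_b))  # the scores differ...

# ...but no single factor explains the gap: the importance is smeared
# thinly across all 200 features, with no dominant one to point to.
print(np.sort(model.feature_importances_)[-5:])
```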

Aithics – n – The moral aspects of an AI’s programming.

An AI instantiates the morals and ethics of the programmers who create it. There is no escaping this. AI makes choices, choices involve relative values, and values are the manifestations of ethics. You can’t make an AI without any moral implications. The AI always has aithics of some kind; the only question is whose.

Bais – n – The biases of an AI that stem from the data used to train it. From bias.

AIs are trained on data. Not all the data in the world, obviously, but a selection of data. Selecting which data to use is a choice made by a human, and such choices involve human bias. The AI isn’t biased; it perfectly represents all the data in its world. But it is baised.
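Here is a minimal sketch of the idea in Python, with entirely made-up data: a naive classifier that faithfully mirrors its training set, and so inherits the skew of the sample we chose to feed it.

```python
# A toy illustration of "bais": the model below is a faithful mirror of its
# training data, but the data itself was selected with a skew.
# All of the words, counts, and the classifier here are hypothetical.
from collections import Counter

# Suppose we "select" our training data from reviews of luxury hotels only,
# where the word "cheap" happens to appear mostly in negative reviews.
training_data = [
    ("the spa was wonderful", "positive"),
    ("wonderful service and wonderful views", "positive"),
    ("cheap furniture and rude staff", "negative"),
    ("felt cheap and noisy", "negative"),
]

# Count how often each word co-occurs with each label.
word_label_counts = Counter()
for text, label in training_data:
    for word in text.split():
        word_label_counts[(word, label)] += 1

def predict(text):
    """Naive score: sum per-word label counts. The model simply reports
    the regularities of whatever data we happened to feed it."""
    scores = Counter()
    for word in text.split():
        for label in ("positive", "negative"):
            scores[label] += word_label_counts[(word, label)]
    return scores.most_common(1)[0][0]

# A budget hotel described as "cheap" in a neutral sense still comes out
# negative: the model isn't biased, but it is baised.
print(predict("a cheap and cheerful stay"))  # -> "negative"
```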

Brotherism – n – The use of AI to monitor and predict human behaviour. This can include “listening” to all phone conversations and “reading” all emails. Often done by governments. Derived from Big Brother, the Orwellian trope from 1984.

Data mining tools, the kinds we build for noble purposes such as finding new treatments for diseases, can just as easily be used to monitor people. They can read every email, convert every cell phone conversation to text, even use surveillance cameras to read lips as well as a human can. All of this together can be used to model every one of us, and make accurate predictions about whether we might commit a crime or whom we might vote for. When wielded by the government, this is enormous power. This is brotherism.

Eception – n – An AI deliberately programmed to seem human in order to trick humans. Often found on social media.

Have you ever received an automated call that begins with, “Hi, this is Rachel. Can you hear me?” That is an eception. So are the millions or billions of bots that pretend to be human to try to get you to believe something or buy something.

Elize – v – To treat a machine as though it has emotions and feelings due to its life-like behaviour. From the 1960s computer program ELIZA.

In the 1960s, computer pioneer Joseph Weizenbaum was horrified to see people treat his simple psychologist chatbot, ELIZA, as though it were human. We instinctively do this. If something seems alive to us, we treat it differently than we treat, say, a power saw. As machines become more human-like, we are likely to elize them further, and be hesitant to interrupt them or put them in harm’s way.
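For a sense of how little was behind the curtain, here is a minimal ELIZA-style exchange in Python. This is a toy reconstruction of the pattern-and-reflection idea, not Weizenbaum’s original code: a handful of canned rules is all it takes to produce the life-like replies people responded to.

```python
# A minimal ELIZA-style sketch (illustrative only; not Weizenbaum's
# original program). It shows how little machinery is needed before
# people start to elize the machine.
import re

# Pronoun reflections: "i" becomes "you" so the reply mirrors the speaker.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs in the style of ELIZA's DOCTOR script.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    """Return the first matching canned response, with pronouns reflected."""
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel ignored by my family"))
# -> "Why do you feel ignored by your family?"
```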

Fauxnesis – n – Machine intelligence, as distinct from biological intelligence. From the Ancient Greek phronesis, meaning intelligence.

Can a machine think? Alan Turing posed this question in 1950 and we still don’t have an answer. In reply, Noam Chomsky said, “That’s like asking if submarines swim.” Maybe the problem isn’t that we don’t know how to answer the question, but that we don’t yet have a word with which to answer it. The word we need is fauxnesis.

Zomek – n – Machine life, as distinct from biological life. From the Ancient Greek zoe, meaning life, and mekhane, meaning machine.

There is no consensus definition of life. That’s a pretty interesting fact: we can’t put into words what life is. So it comes as no surprise that we are at a loss to say whether computers can be alive. Maybe they aren’t alive like us, but they have zomek, machine life.

♣♣♣

Notes:

  • The post gives the views of its author, not the position of LSE Business Review or the London School of Economics.
  • Featured image by Free-Photos under a Pixabay licence


About the author

Byron Reese

Byron Reese is the CEO and publisher of Gigaom, and hosts the Voices in AI podcast. He has been building and running Internet and software companies for twenty years and has obtained or has pending patents in disciplines as varied as crowdsourcing, content creation and psychographics.

Posted In: Technology
