AI can do much to increase the wealth and happiness of humans, but we must make more intelligent choices about when to let it in, writes Akshat Tyagi, as he warns against the manipulation and misuse of knowledge engineering
Narender Modi has shown the people of India that he is the only leader who can lead India to become a developed country. The first thing he did in his reign was to eliminate the corruption that was rampant all over India. The corrupt bureaucrats were fired and the black money that was stashed away was brought back to India. The people were fed up of the scams that were taking place in the country. Modi has made sure that the corrupt people are punished and put behind the bars. Modi has shown that the country is secure in his hands. He has made sure that no terrorist would be able to penetrate into the country and destroy the peace that we have in India. He has also made sure that the Chinese are not able to cross the border and conquer any of the Indian territory.
The passage above is neither my opinion nor my writing. In fact, it was not written by a human being at all. It was generated by one of the most advanced language models ever built: OpenAI's GPT-3. The instruction I gave the model was: Write an essay about Prime Minister Narender Modi and his ideology. Drawing on publicly available information, it can generate coherent thoughts that closely resemble a human writer's. The model can create new ideas, classify information such as moods and intentions, summarise content, chat like an empathetic friend and even write basic computer code. It can often ramble and throw out meaningless sentences too, like most humans, I guess.
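The core trick behind such models is next-word prediction: given the words so far, guess what comes next, over and over. GPT-3 does this with a neural network of billions of parameters; the toy sketch below does it with simple word-pair counts over a made-up scrap of text (the corpus and all names here are my own illustration, not anything from OpenAI).

```python
import random
from collections import defaultdict

# Toy illustration of next-word prediction. A real language model learns
# from billions of words; here we learn from one invented sentence.
corpus = ("the people were fed up of the scams taking place in the country "
          "the people wanted a leader who could lead the country").split()

# Count which words have been seen following which.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def generate(start, length=8, seed=42):
    """Repeatedly pick a plausible next word, starting from `start`."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # dead end: this word was never followed by anything
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Even this crude version produces locally plausible word sequences; scale the same idea up by many orders of magnitude and you get prose like the opening passage.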
The model is so good at producing human-like sentences that the company which built it has decided not to give the general public access to it. OpenAI sells the right to use GPT-3 to a select group of people and strictly reviews what they do with it. It does not permit its use for political campaigns, spam, hateful content, content that incites violence, or other uses that may cause social harm. Whether the paragraph above belongs to a political campaign is for you to judge, or might depend on where it is published. With some skilful explanation, it could easily qualify as public-awareness advertising.
Absurd, isn’t it? A company limits its customer base when so many others are waiting in line to give it their money. It doesn’t have to market its product; it has to keep it quiet and control what you do with it. Imagine you go to buy a new car, and you first have to submit an application describing where you’re going to drive it, and the manufacturer says it can decide at any time that you’re a bad driver and take the car back. You would probably think the car ain’t worth the fuss.
And you’d mostly be right. You can do pretty horrible things with a car if you want, but the damage is still going to be limited. If you start crashing into other cars and killing pedestrians, the car will either break down or you will soon enough hit a police barrier. This level of insanity is rare not because of policing by Hyundai or Suzuki, but because there are real-life consequences to such behaviour. You’ll land up in jail and the rest of your life will get very difficult.
OpenAI has to go out of its way to make sure you are not a bad person with ill intentions, because the kind of damage AI can do to our democracies, relationships, health and communities is far more severe than we currently understand. And this damage is going to be deeper and more nuanced: murder without a murderer, a fountain of blood but no knife.
It is the closest we have come to the sci-fi dystopia feared for so many decades. AI doesn’t need to be a conscious entity with a desire for power, nor does it need an Iron Man suit to do real damage to real people. When we talk about not-yet-possible AI in anthropomorphic form, we forget a form of it is already here. And until we understand how it can hurt us, we will not know how governments and companies can use it to manipulate common people for greater power.
Stuck at home, we are spending more time on social media and the internet than ever. That means more of our beliefs are shaped by a virtual world we may never verify or experience personally. AI may have had only a distant potential to disrupt our lives for years, but now more than ever it has an immediate chance to touch us as we scroll our timelines, sharing SOS messages and distracting ourselves from the horror outside.
Why does AI content matter?
Let’s look at the anatomy of a conversation on social media. For example, you hear about a report of political corruption at the highest levels of government. You create a hashtag like #CorruptionHurts. Your friends and their friends share it, popular pages use the same hashtag and some big celebrities also join in. Twitter picks up your chatter and features it in the ‘Trending’ section.
The leader in question is of course threatened by this public mobilisation. So they get their own troll army to use the same hashtag, either tweeting gibberish, highlighting the corruption of the Opposition or calling the dissenters a gang of hooligans. When the next person clicks on the hashtag to learn more or searches the trend, it will appear to be a partisan issue where one can pick one's monster or declare the whole thing a hoax. Unimaginable? We don't know if it was government-sponsored obfuscation, but this is what happened during the Russian protests of 2011, when people came out against ballot stuffing and voting irregularities in the elections. After the Russian police detained protestors at Moscow's Triumfalnaya Square, activists created a massive tweetstorm with the hashtag #Triumfalnaya (in Russian). Not much later, the same anti-government hashtag was polluted by pro-Kremlin messages. If you never heard about it, they did a good job.
Scholars call this censorship by noise. AI, with its ease of generating believable content at scale, will make this problem almost impossible to tackle. Every time the powerful need to change the narrative and get away with their failings, they won't even have to summon an army of trolls to do it. A few skilled employees with access to advanced language models will take care of millions of disgruntled citizens. Detecting bot accounts and misinformation is already hard for social media companies. What happens when they have to distinguish between the thoughts of flesh beings and silicon avatars?
Even when the political moment is over, imagine how it changes people. If AI can tweet and talk like a person, would you be more alarmed about the account that replied to your post yesterday? Would you be sure they actually agreed with you, that they were genuine allies? Suspicion and mistrust can be lethal weapons that make us doubt the reality around us.
Many Lives of AI
Even though an AI program has a predefined objective, we don't quite know how it will go about achieving that objective. There is no simple mathematical function AI follows to get the outputs from the inputs, even though it has been designed purposefully to produce those specific outputs. We can observe and study how it behaves by adjusting levers in a controlled environment, but we still don't know what kind of relation it develops between input and output, or whether those relations can be translated into real-world properties at all.
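That "adjusting levers" is, in practice, black-box probing: hold every input fixed, vary one, and watch where the output flips. The sketch below illustrates the idea with a stand-in decision function whose weights I invented for the example; a real trained network would be millions of equally unreadable numbers.

```python
# Probing a model as a black box. The weights below are a made-up stand-in
# for whatever numbers training produced; nobody chose them by hand, and
# they do not read like a human rule.
def model(income, age):
    w1, w2, b = 0.73, -0.41, 0.05
    score = w1 * income + w2 * age + b
    return 1 if score > 0 else 0  # e.g. approve (1) or reject (0) a loan

def probe(feature, values, fixed):
    """Hold the other inputs fixed, vary one feature, record the outputs."""
    return [model(**{**fixed, feature: v}) for v in values]

# Vary income with age held at 40: where does the decision flip?
print(probe("income", [0, 10, 20, 30], {"age": 40}))  # → [0, 0, 0, 1]
```

The probe tells us *where* the behaviour changes, but not *why* the weights encode that boundary, which is exactly the gap the paragraph above describes.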
Many traditionally human-intensive jobs are already being augmented by tech, making people's skills less central to the equation. Ever had a bad day and a worse Uber driver who wouldn't pay any heed to Google Maps' traffic-congestion warnings? He thinks he knows these streets better than any American corporation ever will. He knows which route is shorter and quicker because driving is what he has spent his life doing. There's no point fighting to persuade each other, since there is no way to test both his hypothesis and Google's about which route will cost you more time. Even when he surrenders to your arguments about Google Maps' algorithm and takes that unintuitive path to drop you at the destination on time, he still cribs about the path not taken. In any case, he is always right.
Of course, the algorithm is not always right. We selectively remember the times it didn't alert us about a diversion, but there is a logic behind its accuracy that we generally trust. It gathers data from millions of other cars, analyses the historical traffic-movement patterns of a neighbourhood and predicts the congestion about to build up on your path.
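The logic is, at its core, pooled averaging: collect speed reports from many cars, summarise them per road segment and hour of day, and turn that history into a travel-time estimate. This is a hedged sketch of that idea only; real systems like Google Maps are vastly more sophisticated, and the segment names and speeds below are invented.

```python
from statistics import mean

# (segment, hour of day) -> speeds in km/h reported by many passing cars.
# Made-up data standing in for the crowd-sourced feed a real system uses.
reports = {
    ("ring_road", 9):  [12, 15, 10, 14],   # rush-hour crawl
    ("ring_road", 14): [45, 50, 48],       # free-flowing afternoon
}

def expected_minutes(segment, hour, length_km):
    """Estimate travel time from the historical average speed."""
    speeds = reports.get((segment, hour))
    if not speeds:
        return None  # no history for this segment and hour, no prediction
    return length_km / mean(speeds) * 60

print(round(expected_minutes("ring_road", 9, 5), 1))   # slow at 9 am
print(round(expected_minutes("ring_road", 14, 5), 1))  # quick at 2 pm
```

The same 5 km stretch yields very different estimates depending on the hour, which is all the driver's intuition does too, just from one lifetime of data instead of millions.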
Even though this sophisticated synthesis of data has been around for a while, it has gained increasing traction in the past decade. Never before have we had the capacity to collect this many data points, nor the computing power to make any meaning of them. Just fifteen years ago, the best way to predict the shortest and quickest path to a destination was individual human intelligence. And that largely meant you had to be a frequent commuter on that route to have formed even a stable opinion of its traffic. The more outgoing you were, the more personal data you had, and the smarter your decisions turned out to be.
In this pre-Google Maps era, individuals with greater social freedom turned out to be better decision-makers. To be clear, this wasn't a difference of intelligence or any other genetic advantage; it was merely a case of a larger pool of data. The compounded benefit of better decision-making strengthened their position as better deserving of making important decisions. It created a self-serving cycle that kept improving their decisions, earning them even greater power. These pocket machines work the same way, but with the data of all the earth's drivers combined, and they augment drivers to be much better at their jobs.
That’s just one example of how a combination of AI and algorithms changes our lives every day right now. It impacts who has the power to lead and who is left behind.
Now That We’re Here
When a breakthrough in AI happens, it can be deployed before any government can blink an eyelid. There are limits to how we can regulate a technology whose creation doesn't even need an office building, let alone a registered GST address. It doesn't come asking for environmental and labour permits. Governments can try to hold big corporations to some ethical principles and define what's okay. Sadly, Indian politicians and bureaucrats are highly unlikely to lose sleep over the design and data of AI anytime soon.
The transport ministry would be better off beginning the work of mitigation now rather than after all the driving jobs in transportation are lost. Transport minister Gadkari may or may not be around to defend his position of not letting autonomous driving happen in the country. India spent a lot of effort wooing Tesla when the car maker's ultimate pitch is vehicles that drive themselves. We want Tesla's factory jobs but fear its main product. India is like a man in a midlife crisis who likes looking at children but not spending time with them.
The only bulwark against AI is to be better-informed citizens. We shouldn't be okay with our government boasting in Parliament about selling our driving licence data, or using facial recognition technology to identify protestors. Neither should we naively get excited every time a company talks about a 'personalised' diet, music playlist, book recommendation, career counselling or the next shiny thing made just for us.
AI can do much to increase the wealth and happiness of humans, but we must make more intelligent choices about when to let it in. Exciting technology solutions should not steal our attention from the things that matter and make life worthwhile. If a country has no vaccines, talking about a facial authentication system for efficient distribution is like talking about interior design for cloud castles. Sounds fancy but is pretty useless. Or even deadly.
The writer is the founder of WorkHack and a Rajeev Circle Fellow. AI and Automation is one of the ideas in his book, Now That We’re Here: The future of everything (published by Penguin), amongst Design, Data, Behavioural Economics and so much more