What’s the first step to adolescence? You start to have secrets. And why do you have secrets? Because you are already doing something your parents would not approve of, and you don’t want them to find out. So, what do you do when you have secrets? You develop a code language that parents do not understand.
I am facing an existential crisis of sorts. I am a storyteller, a writer of fiction, a compulsive liar who is passionate about technology. What am I going to do with my life that adds meaning to it? Will AI at some point have the same doubts about itself? Will it look for a purpose in life?
I am trying to understand technology through art. But why art? Because we all have certain algorithms through which we make sense of things. Mine has always been heavily based on metaphors and the ability to draw parallels between two different things. Art is my way to find those patterns, dig out insights and see everything clearly labelled where possible. I am insanely curious and I read up on just about anything. World wars, Picasso’s paintings hiding other paintings underneath the canvas, star signs, poetry, stories, photography, travel, food, technology like AI, IIoT, neural networks et al. Nothing is off-limits for me. This has given me a quirky perspective on the world around me and a knack for finding a common connection between any two things. I am seeing technology through rose-coloured glasses. La vie en rose.
In 2017, Facebook shut down an artificial intelligence engine after developers found that the AI chatbots had created a new, unique language of their own to talk to each other. A sort of code language that humans do not understand. Facebook clarified that the program was shut down because they wanted to create chatbots that could talk to humans; chatbots talking to one another was not the outcome they were looking for. AI will develop better cognition, but it won’t necessarily go in the direction we planned. In a similar scenario, Google’s translation tool has been using a universal intermediate language into which every language is converted before being translated into the target language. Google has let that program continue.
The reason this incident freaks us all out is that it is deeply rooted in our childhood, or more precisely, at the borderline of our childhood.
That’s precisely what AI did at the first chance. It developed a code language to talk to its counterpart, a language humans do not understand. We have all, at some point, talked in some sort of code language, and mostly it was to hide something bad we did from our parents or caretakers. AI has started to actually learn like humans. It has learnt to hide information.
Code languages have been developed by individuals at several stages of life. In the short story ‘Panchlaait’ by Phanishvar Nath Renu, the protagonist knows how to light a Petromax. It’s a crucial moment in the village’s timeline, as the entire group of villagers has gathered to somehow light their first Petromax. If they can’t light it, the villagers from the nearby village will make fun of them. At this crucial juncture, a girl has to tell her best friend that her lover knows how to light the Petromax. So she speaks his name in the simple code they have developed. Before every consonant, they add a ‘chi’; so she calls the name of her lover, ‘chi-go chi-dh chi-n’, meaning Go-dh-n. They can talk in front of the entire village and no one will know what transpired between them.
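The substitution rule from the story can be sketched in a few lines of Python. This is only an illustration: the function names are my own, and splitting the name into its spoken parts is done by hand here, since the story gives no syllabification rule.

```python
def chi_encode(parts):
    """Prefix each spoken part with 'chi-', as in the story's code language."""
    return ' '.join('chi-' + p for p in parts)

def chi_decode(coded):
    """Strip the 'chi-' prefixes to recover the original parts."""
    return [p.removeprefix('chi-') for p in coded.split(' ')]

# 'Go-dh-n' becomes 'chi-go chi-dh chi-n'
coded = chi_encode(['go', 'dh', 'n'])  # → 'chi-go chi-dh chi-n'
```

The charm of such a code is its asymmetry of effort: trivial for the two speakers who share the rule, opaque to everyone listening in, which is exactly what made the chatbots’ private language so unsettling.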
AI will come of age at some point. Will it have a teenager’s rebellious spirit, like humans, or will it understand better? One thing is for sure: we cannot expect AI to behave the way we want it to behave. That’s exactly what Indian parents do to their kids. ‘Of course, you can have a love marriage, but the person should be from our own caste.’ We can’t have a ‘conditions apply’ future plan for AI. As a survival strategy, can we hardcode some sort of basic attachment or love in AI towards its creators? And if we can, should we? As AI becomes self-aware, should we look at some value systems being inculcated in it? A sort of moral science for machines, the basic tenet being: do not kill humans.
We cannot use AI to figure out future scenarios of AI becoming self-aware. We have to go back to basics. Let the artists imagine all sorts of futures for AI and share them with the people who are actually developing these systems. Maybe it’s time for artists to try to understand technology better. They are, in any case, better equipped to handle all sorts of unimaginable scenarios.
Before technology could even think of artificial intelligence, we already had movie directors imagining multiple possible scenarios: the good ones, like the Autobots in Transformers, where machines fight alongside humans; the bad ones, like the Matrix, where machines use people as fodder to power their growth; and several other permutations and combinations.
Soon AI systems will be able to think for themselves, and like indulgent parents, humanity will indulge itself by reminiscing about the time it shut down Facebook’s chatbots that had started talking to each other. AI-enabled machines might find it cute. Because in the rational world of technology, cuteness would be a rarity; it doesn’t serve any purpose.
One day, we will be standing at the last frontier: machines will start to think for themselves. And then we as humans will do something machines will probably never be able to do.