AI will become "ASI" within the next 50 years, and it will be the last invention mankind ever has to make... Whaaaat?... While researching AI, I've come across fascinating claims like this, along with other observations on AI that I'd like to share. There's a lot of stuff in here that's really mind-blowing.
The term "AI" has been around for a long time, and in the past it was mostly used for applications and programs that seem anything but intelligent from today's perspective. But that doesn't matter at all, because in 30 years' time we may look back and laugh at what we call "intelligent" today. The Colossus computer, which broke the cipher of the Lorenz machine, was the epitome of an intelligent machine in the early 1940s. The first palm-sized calculator was considered intelligent at the end of the 1960s. And so on.
On my way to work, I first take the train, then the subway. When I arrive at Nuremberg Central Station, no human brain decides which track I arrive at and how the other trains have to be navigated to avoid a collision. The design of an effective and (almost) flawless timetable and operation has long been in the hands of AI. As I walk through the station, I already know exactly when my next subway runs. But even these departure times were not calculated by humans, but by computers.
Mostly I have a minute or two to buy a pretzel on the way, and then I wait a moment until the driverless subway arrives. Yeah, you read that right. We have driverless subways in Nuremberg (where the Paessler HQ is located). The project is called RUBIN and is carried out by Siemens Mobility. Since 2010, two lines have been running fully automatically, without a driver and without any serious accidents. And if you now think this has led to massive layoffs of drivers: at least in this case, AI hasn't stolen any jobs. 120 former subway drivers now work in customer and system service and make sure that operations run smoothly.
So when I'm finally sitting in the subway, I like to read something on my phone, and most of the time I'm recommended articles that actually interest me (sometimes I get sports results instead, but the exception proves the rule). I could go on and on here and fill a whole day, but to cut to the chase, we can already state: AI is everywhere, and it doesn't matter whether we call it AI or anything else. As John McCarthy observed back in the late 1950s, as soon as something works, we no longer call it AI. The term seems to be reserved for science fiction authors.
The future feels lame, because we humans are programmed to forecast the future based on the past. And to put it nicely, our past was modest. Around 70,000 years ago came the so-called cognitive revolution, and somehow we Homo sapiens, who were nothing more than upright-walking apes, managed to develop language, cultures and intellectual systems. This made us the most successful animal species on the planet, but between the great milestones of human history (the first controlled fire, the first writing and cataloging systems, the first functioning state) fairly long stretches of time passed. First we were hunter-gatherers for the longest time (our genes are still designed for that), then we were farmers. But from about 1500 CE on, things changed a little more radically.
Because it has always taken a (more or less) long time to reach certain milestones, we assume that it will be the same in the future. But it doesn't work that way. If we put a person into a time capsule in medieval Constantinople, around the year 1000 CE, and sent him into the age of Christopher Columbus, he might find a few things interesting (a slightly better-functioning state, larger cities, a few new medical findings), but on the whole he would not freak out in amazement. We'd get the same reaction by sending a person from the Kingdom of the Franks in the year 500 CE to the Constantinople of the year 1000 CE. Yes, Europe had changed politically, Islam had emerged as a new world religion, but you know, nothing too crazy.
Now, if one of the men from the year 1492 CE who sailed on the Santa María to America were sent to the present day, to a pulsating metropolis like London, Mumbai or Tokyo, he would believe he had landed in a different, alien world. He'd have a heart attack from sheer excitement. That's how much, and how rapidly, we've changed.
If we look at the 70,000 years since the cognitive revolution and think about the future, about AI and the path ahead of us, we picture a sidewalk with a slight incline, nothing you couldn't stroll up after three beers on a Sunday afternoon. In reality, however, we are standing at the base camp of Mount Everest, and the summit rises steeply above our heads.
Many scientists divide the general term AI into three categories: ANI (Artificial Narrow Intelligence), AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence). Sad but true: what we have right now is the dumbest kind of AI, namely ANI, though "dumb" in this context is relative. It no longer makes sense to compare the Elo rating of the best chess engine with that of the best human player, since it keeps climbing and already exceeds the human's by hundreds of points. But ask a chess program about the weather or a table reservation at your favorite restaurant and see how smart it is then. Conversely, Google is a better research tool than the most well-read person or the finest archivist, but it would look rather limited in a match against Magnus Carlsen, or if it had to operate a driverless subway in Nuremberg.
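To get a feel for how lopsided that chess gap is, the Elo system itself provides the math: the expected score between two players depends only on their rating difference. Here's a minimal sketch; the ratings in it are illustrative round numbers I'm assuming for a top engine and a top human, not official figures:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model
    (1 = win, 0.5 = draw, 0 = loss): E_A = 1 / (1 + 10^((R_B - R_A) / 400))."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# Illustrative (assumed) ratings: engine ~3500 Elo, top human ~2850 Elo.
engine, human = 3500, 2850

# A 650-point gap means the engine is expected to score roughly 98%
# of the available points per game against the human.
print(round(expected_score(engine, human), 4))
```

A rating gap of a few hundred points already pushes the expected score close to 1, which is why playing out such matches tells us almost nothing new.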
ANI can do what it was designed to do outstandingly well, far better than any human, but only that and nothing else. The path to AGI therefore means bringing AI to the general level of human intelligence, where AI can act as well as a human being across different fields. Conservative estimates assume that by 2050 we'll have developed an AI at AGI level. ASI, the "true AI" that can do everything far better than any human being, is the holy grail of AI research, and conservative scientists consider an ASI by 2070 to be quite realistic.
Many people now believe that an ASI is just a smart-ass AI that calculates very fast and takes only a second for a task a person needs a day for. But it's not really about that kind of quantity; it's about the quality of intelligence, and that is a completely different matter. What makes us humans more intelligent than monkeys or ants is not a difference in thinking speed, but the structure of our brains and our ability to have complex ideas. If we took an ant's brain and increased its thinking speed 10,000 times, the ant would still NOT be able to understand quantum mechanics, fly to the moon or build a Tesla. An ASI will be a completely new deal, something that has never existed before and something that is potentially as superior to us as we are to ants. And when we consider that a superintelligent AI would be able to create improved versions of itself, the effect is no longer predictable and we are truly in the dark about what our future will look like. Scary? Then read the last point and decide for yourself.
First of all, let's note that an ASI is not a better variant of the human brain, but something completely different. The human brain is not per se a "better variant" of a monkey's brain. Sure, we can do things that a monkey will never be able to do (even if we try to teach it). But if an orangutan and I were stranded together on a deserted island and had to fight for our survival in the coming weeks without the help of other people, there would be no doubt who would survive longer. Sure, I could recite Poe's poems to the orangutan in front of the campfire at night, but that's about it. My brain is not designed to fight for my survival without a group of allies and without an elaborate plan. An ASI, accordingly, is not simply better than us, nor necessarily superior within our sphere of existence; it is objectively different, and probably unfathomably smarter.
Secondly, it can be assumed that it will not be in the interest of an ASI to live our lives. All the doomsday fantasies rest on the assumption that a foreign species wants to destroy us in order to take over our planet and ultimately lead lives similar to ours. But I don't think an artificial intelligence much smarter than us actually has any such plans. Can you imagine an ASI getting drunk and stuffing itself with chips in front of the TV? An ASI would exist in a completely different world, as unattainable to us as ours is to ants. I'm not a qualified scientist in the field of ants, but I believe we humans only become noticeable to ants when we accidentally step on one. Otherwise, we live in a somehow "different world". So what's stopping an ASI from stepping on us, skeptics might ask.
Therefore, thirdly: in the last 70 years, fantastic things have happened in the field of mutual cooperation that have led to a kind of global, moral togetherness that is unique to date. A new, universal morality which, of course, cannot yet be felt equally everywhere in the world, but which shows that violence and hostility are decreasing rather than increasing as progress continues. There is no reason to believe that an ASI would be hostile to us. Instead, we should think about what a completely different kind of intelligence could do for mankind: a cure for all sorts of deadly diseases, an ideal form of politics and social coexistence, an answer to how a financial system must work so that it does not exploit the majority of people. Among other things, these are questions that an intelligence not bound to our limits, with completely different capacities, could solve. And if I get to keep enjoying my limited human life while, high up in the cloud (or wherever ASIs like to hang out), this superintelligence solves all our problems in a matter of seconds, then that's cool with me.