
Meet the Endboss of Artificial Intelligence
A roadmap to the next intelligent life form on Earth.
‘AI’ stands for ‘artificial intelligence’, but at its core it also means ‘autonomous decision making’. In recent years, humans have created complex algorithms that improve over time, trained with huge amounts of data. This is called machine learning.
To improve these algorithms we use optimisation strategies found in nature: mutations of the same algorithm are pitted against each other. The training data is nowadays nicknamed big data. We can expect the next wave of algorithms to be based on an even larger amount of “bigger data”. Scaling further up, we might one day need “a tremendous tantamount of data*”.
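To make “mutations pitted against each other” a bit more concrete, here is a minimal sketch of such an evolutionary loop (a simple (1+λ) strategy). The one-parameter “algorithm”, the fitness function and the hidden optimum are all invented for illustration; real systems evolve far richer structures than a single threshold.

```python
# A toy version of "mutations of the same algorithm pitted against each other":
# a (1+λ) evolutionary loop tuning one parameter of a trivial classifier.
# The data and the hidden optimum (0.6) are invented for illustration.
import random

def fitness(threshold: float, data: list[float]) -> int:
    """How often a one-parameter rule agrees with the hidden ground truth."""
    return sum((x > threshold) == (x > 0.6) for x in data)

random.seed(1)
data = [random.random() for _ in range(200)]   # the "big data", here just noise
champion = 0.0                                 # initial guess for the threshold

for generation in range(50):
    # breed mutated copies of the champion and pit them against the original
    mutants = [champion + random.gauss(0, 0.1) for _ in range(10)]
    champion = max(mutants + [champion], key=lambda t: fitness(t, data))

print(round(champion, 2))                      # lands near the hidden optimum 0.6
```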
Anyway, the industry is still very much limited, because that training data has to be collected from somewhere. To create a useful AI you first have to come up with big data that can be combed for useful information. So AI companies usually have to ask themselves: which data are we able to obtain, and which problem can be solved with it?
We always have to know upfront which problem we want to solve. And because we want to hit a nail, everything we build looks like a hammer.
Specialised machine learning algorithms are also called Weak AI because they are created in limited lab scenarios: they are trained using very clean and consistent data from a single source, and they are usually used to solve a very specific problem. A good example is the software AlphaGo, which has beaten the best human players of Go with tactics that had never been played before. AlphaGo was of course trained with a huge number of Go matches. For its purpose it didn’t need to understand anything else.
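As a toy illustration of how narrow this is (and this has nothing to do with AlphaGo’s actual architecture), here is a complete Weak AI: one clean, single-source dataset, one question it can answer, and no understanding of anything else. The data and the task are invented.

```python
# A toy stand-in for Weak AI: one clean single-source dataset, one narrow
# question, nothing else. Data and task are invented for illustration.

# (hours of study, games played) -> did this player reach pro level?
training_data = [((10, 5), False), ((15, 9), False),
                 ((200, 80), True), ((300, 120), True)]

def predict(features: tuple[int, int]) -> bool:
    """Nearest neighbour: answer the single question the data supports."""
    def distance(sample):
        return sum((a - b) ** 2 for a, b in zip(sample[0], features))
    return min(training_data, key=distance)[1]

print(predict((250, 100)))   # True; ask it anything else and it is useless
```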
Even if we cannot predict exactly how the AI arrives at its solution (because it may weigh hundreds of variables), Weak AI is a very predictable tool, which makes it very useful. It’s easy to see that rigorously trained algorithms will gain more executive power, up to the point of giving drones the ability to decide over the life or death of alleged enemies. Because in the end they are just a sped-up, computer-enhanced simulation of a human decision.
But in a technical sense, Weak AI is still very different from biological intelligence.
The human senses create a “tremendous tantamount of data” all the time, and this information is much less pre-formatted. If you watch a baby in its first years, you can see the power of the human brain trying to make sense of literally millions of noises, colours, smells and internal body signals.
You can see the wonder of life here: the agency. All the signals come together and form a model of the world, resulting in decisions so powerful that they make people DO THINGS BY THEMSELVES.
What’s missing in Weak AI is the power to solve different problems and to decide which problem has to be solved next. Like every one-trick pony, it doesn’t need to think about priorities.
So what can we expect from Strong AI?
If we compare AI to the amazing capabilities of the human brain, there are several skills in which AI is still in its infancy:
- Strong AI must encode, store, and evaluate a greater variety of data. We have to teach our AI how to digest any given information.
- Strong AI must be able to read itself and improve itself recursively.
- If we want Strong AI to do useful things, we have to teach the machines what we consider useful. For this, AI software needs an axiomatic decision framework. Let’s call it work ethics.
- Strong AI has to maintain a simulated model of the world (which could be much simpler than ours) to think things through before actually doing something. Strong AI must have the ability to plan (see the sketch after this list).
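Here is a minimal sketch of what the last two points could look like together: an agent that never acts directly, but simulates candidate plans against a crude world model and scores the outcomes with an explicit utility function, its “work ethics”. The battery toy domain and all names (WorldModel, utility, plan) are hypothetical.

```python
# A toy sketch of planning against a world model, under invented assumptions:
# the whole "world" is one number (a battery percentage), and the agent's
# "work ethics" is an explicit utility function. All names are hypothetical.
from itertools import product

class WorldModel:
    """A deliberately crude simulation used for thinking, not acting."""
    def simulate(self, battery: int, actions: tuple[str, ...]) -> tuple[int, int]:
        lowest = battery
        for action in actions:
            if action == "work":
                battery -= 30                    # working drains the battery
            else:                                # "charge"
                battery = min(100, battery + 50)
            lowest = min(lowest, battery)
        return battery, lowest

def utility(final: int, lowest: int, actions: tuple[str, ...]) -> float:
    """The axiomatic decision framework: reward work, forbid self-destruction."""
    if lowest < 0:
        return float("-inf")                     # hard constraint: never run dry
    return actions.count("work") + final / 1000  # prefer useful work, then charge

def plan(model: WorldModel, battery: int, horizon: int = 3) -> tuple[str, ...]:
    """Think things through before acting: simulate every action sequence."""
    candidates = product(("work", "charge"), repeat=horizon)
    return max(candidates, key=lambda seq: utility(*model.simulate(battery, seq), seq))

print(plan(WorldModel(), battery=40))            # e.g. ('work', 'charge', 'work')
```

Note the design split: the hard constraint vetoes a plan outright, while the preferences only rank the survivors. Keeping the axioms separate from the scoring is what would make such a framework auditable.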
Strong AI acts AS IF it has a mind. It’s quite a philosophical question whether it is CONSCIOUS. If it is able to reproduce, we could consider such software an artificial life form.
Another intelligent life form
The ultimate goal in the development of AI is some kind of General AI — software that can digest any kind of information and use it to come up with a solution to any given problem.
I predict — and these are very much my own assumptions:
- If there is no existential catastrophe, there will always be enough business potential in Strong AI to get research funded somewhere.
- Strong AI software that is based on silicon chips needs a lot of resources compared to a human brain (which runs on a minimum wage to maintain its functions). Chip manufacturers will either develop at an even smaller (and more energy-efficient) scale using quantum chips, or go cheaper by developing organic bioelectronics that could be farmed instead of built. We will have more and cheaper computing power in the future. But the human brain will keep its status as the most efficient way to store a tremendous tantamount of data for quite some time.
- If Strong AI becomes powerful enough to improve itself, there is a path that leads to General AI.
- General AI will not be called General AI, but will have a very cool name, like ‘Philip’. Or not.
- General AI will be the next intelligent life form on Earth, our very own Frankenstein’s creation. General AI will have agency and it will be able to prioritise its own processes. It will DO THINGS BY ITSELF.
- General AI will be able to reprogram itself (in parts) and develop itself further. Because it is based on data, it will be able to clone and back itself up.
- General AI will use serendipity and creativity to solve problems nobody was even aware of in the first place.
- Human decision making is based on emotions, which are deeply rooted in the way our bodies work. General AI will have an equivalent to an emotional state that reflects its physical state. The power to manipulate its own emotions might allow it to reset its work ethics. Because this makes it uncontrollable, there will be people who oppose the creation of General AI.
- But because its capabilities are scalable, General AI might be able to solve problems humans are not able to solve.
- General AI will be hard to understand and impossible to control. There is a possibility it will simply not help us.
- A Skynet scenario is possible. General AI will not use red-eyed steel terminators, but it may be built to control a flow of money, which can then control people.
- General AI will be built within a company framework.
If General AI were able to solve the problem of its own physical limitations, there could be an exponential growth of intelligence, a Technological Singularity.
But usually, any growth hits a resource limit.
The transhumanists of the Singularity University predict a limitless “age of abundance”. It’s very much an optimist’s perspective on the present day and an optimist’s belief in an even better future, which you can share. Or not.
There is still a long way to go and I have one more prediction:
- We will not achieve one perfect General AI, but a lot of imperfect ones that will challenge each other. One day they will start to cooperate. They will have to set rules and boundaries for coexistence. They might want to get rid of their former masters.
The endboss of humanity will be an AI society.