By Rachel Gould - Engineering Student @ Jesus College, Cambridge
From AI in everyday life on our iPhones, to Elon Musk unveiling his plan for humanoid robots to work at Tesla, it is clear that AI is no longer a science-fiction concept: it has a real opportunity to revolutionise our world and bring huge advantages to all areas of society.
And while AI grows more sophisticated every day, it doesn't come without risk. Could the new robotics revolution be the end of the human race?
A common response to that question is to picture an apocalypse-style war of humans versus robots. This is not what I am suggesting. Consider us as humans, the most developed species on Earth, and our attitude towards insects. We don't go out of our way to kill every insect in the world in a full-fledged "insect vs human" war. But if there are a few ants on the drive where we want to park our car, we wouldn't think twice about killing them for the sake of fulfilling our own needs.
This concept is the basis of the book "Superintelligence" by Nick Bostrom, a philosopher whose warnings have been echoed by great minds such as Elon Musk and Stephen Hawking. Bostrom argues that once machines cross a certain threshold of intelligence, they will gain the ability to teach themselves, and as their knowledge grows they will develop goals that conflict with ours, just as our plans conflicted with those ants in our parking space.
"But how can we design a robot smarter than us, if we are the ones making it?" A very reasonable question. Let's say a team of researchers at Cambridge develops a robot with their exact level of intelligence. Digital hardware can, in principle, process information around a million times faster than the human brain, so if we left this robot to research for one week, it would undertake nearly 20,000 years' worth of human research in that time. Think about how much more advanced we are as a human race compared to 50 years ago, let alone 20,000 years. Suddenly, you have a robot that is thousands of years ahead of you in intelligence, and the unimaginable scenario of humans no longer being the dominant species is suddenly very real.
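The back-of-the-envelope arithmetic above can be checked in a few lines of Python; note that the million-fold speedup is the article's working assumption, not a measured figure:

```python
# Assumed speedup of machine over human thought (the article's premise,
# not a measured value).
SPEEDUP = 1_000_000

# One week of wall-clock time for the robot...
robot_weeks = 1

# ...is worth this many weeks of human-equivalent research.
human_weeks = robot_weeks * SPEEDUP

# Convert to years (52 weeks per year).
human_years = human_weeks / 52
print(round(human_years))  # prints 19231, i.e. nearly 20,000 years
```

So a single week of such a machine's research corresponds to roughly nineteen millennia of human effort under this assumption, which is where the "thousands of years ahead" framing comes from.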
The main counterargument to the idea that robots would become a threat to humans is that one could view AI as an extension of the human race. On this view, AI would hold the same values as humans and would never compromise our species. For example, if you are teaching a program what a good song sounds like, you show it all the songs that YOU like, and it ends up with your music taste, never playing any music it was taught is bad. This implies that AI has no freedom to think and form its own opinions; and while that is correct now, will it still be in 50 years?
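The music-taste analogy is essentially supervised learning: the program can only echo the preferences in its training data. A minimal sketch in plain Python makes the point; the songs' (tempo, energy) features and all the numbers here are made up purely for illustration:

```python
# Toy preference model: each song is a (tempo, energy) feature pair.
# All values are illustrative, not a real recommender system.
liked = [(120, 0.8), (128, 0.9), (125, 0.7)]      # songs YOU labelled good
disliked = [(60, 0.2), (70, 0.3), (65, 0.1)]      # songs YOU labelled bad

def centroid(points):
    """Average feature vector of a set of songs."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def predict(song):
    """Label a new song by whichever training centroid it sits closer to.

    The model can only reflect the taste it was shown; it has no
    opinions of its own.
    """
    like_c, dislike_c = centroid(liked), centroid(disliked)
    d_like = (song[0] - like_c[0]) ** 2 + (song[1] - like_c[1]) ** 2
    d_dislike = (song[0] - dislike_c[0]) ** 2 + (song[1] - dislike_c[1]) ** 2
    return "good" if d_like < d_dislike else "bad"

print(predict((122, 0.85)))  # near the liked cluster -> prints "good"
```

However smart this model looks, every judgement it makes is a reflection of its training labels, which is exactly the "extension of human values" argument in miniature.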
Let's consider, hypothetically, that a scientist somewhere creates the perfect AI, with superhuman intelligence but without the power and freedom described earlier. Perfect, right? Robots could undertake every job, from bin collection to top-end research. No human would ever have to work again. How would we as a species really react to that? Would we all lie around all day, become lazy, never use our brains, and even devolve as a species? Quite possibly, yes. With no jobs to undertake, life would quite literally become meaningless.
Day by day, huge technological advancements bring superintelligent AI closer to reality. But will it deliver a perfect end to poverty and labour, or could it be the end of life as we know it?