By Rachel Gould - Engineering Student @ Jesus College, Cambridge
From the AI we use every day on our iPhones, to Elon Musk unveiling his plan for humanoid robots to work at Tesla, it is clear that AI is no longer a science-fiction concept: it has the potential to revolutionise our world and bring huge advantages to every area of society.
And while AI becomes more sophisticated every day, it does not come without risk. Could the new robotics revolution be the end of the human race?
A common response to that question is to picture an apocalypse-style war of humans versus robots. This is not what I am suggesting. Consider instead our attitude, as the most developed species on Earth, towards insects. We don't go out of our way to kill every insect in the world in a full-fledged "insect vs human" war. But if there are a few ants on the drive where we want to park our car, we wouldn't think twice about killing them for the sake of fulfilling our own needs.
This concept is the basis of the book "Superintelligence" by Nick Bostrom, a philosopher whose warnings have been echoed by great minds such as Elon Musk and Stephen Hawking. Bostrom argues that once machines cross a certain threshold of intelligence, they will gain the ability to teach themselves, and as their knowledge grows, they may develop goals that conflict with ours, just like those ants in our parking space.
"But how can we design a robot smarter than us, if we are the ones making it?" A very reasonable question. Let's say a team of researchers at Cambridge develops a robot with exactly their level of intelligence. If a computer can process information a million times faster than the human brain, then leaving this robot to research for just one week would yield nearly 20,000 years' worth of human research (1,000,000 weeks divided by 52 weeks per year is roughly 19,000 years). Th