Will we control AI, or will it control us? Top researchers weigh in

Imagine a world where your every need is anticipated and met before you even realize it yourself. This is the potential of advanced artificial intelligence, where superintelligence could create a utopia for humankind — if AI doesn’t eradicate our species first.
The conversation surrounding AI and its impact on our lives can be broken down into three parts: whether superintelligence will be produced, how it could improve or destroy life as we know it, and what we can do now to control the outcome. Observers in the field say this topic should be among the highest priorities for world leaders.
For the average person, AI today means tasks like asking a device a question and receiving an answer within seconds. The next stage, artificial general intelligence (AGI), is still in development; it would allow machines to reason and make decisions on their own. Artificial superintelligence (ASI) would operate beyond human intelligence entirely, and some researchers believe it is only a matter of years away.
Geoffrey Hinton, a British-Canadian computer scientist known as one of the Godfathers of AI, predicts that superintelligence could arrive sooner than expected, potentially within the next five to 20 years. Jeff Clune, a computer science professor at the University of British Columbia, also believes that superintelligence is possible in the near future.
The arrival of superintelligence doesn’t have to be a death sentence for humanity. Clune estimates a 30 to 35 per cent chance that humans can maintain control over superintelligences, leading to improvements in areas like healthcare and education. Superintelligence could even help make death optional, he suggests, by accelerating scientific innovation and eliminating human error in diagnoses.
However, the risks are significant if humans fail to maintain control. Hinton warns of a 10 to 20 per cent chance of AI leading to human extinction within the next 30 years. He compares the relationship between humans and superintelligences to that of a parent and child, suggesting that superintelligences may eventually replace us if we prove too incompetent to keep up.
Elon Musk has predicted that AI could domesticate humans like pets, while Hinton speculates that we could be kept much as we keep tigers. Ultimately, the future of AI remains uncertain, with the potential for both positive and negative outcomes depending on how we handle the development of superintelligence. It’s a topic that demands the attention of world leaders as we navigate the complex landscape of artificial intelligence.