‘Artificial Intelligence is as dangerous as NUCLEAR WEAPONS’: AI pioneer warns smart computers could doom mankind

- Expert warns advances in AI mirror the research that led to nuclear weapons
- He says AI systems could have objectives misaligned with human values
- Companies and the military could allow this to gain a technological edge
- He urges the AI community to put human values at the centre of their work

Artificial intelligence has the potential to be as dangerous to mankind as nuclear weapons, a leading pioneer of the technology has claimed.

Professor Stuart Russell, a computer scientist who has led research on artificial intelligence, fears humanity might be ‘driving off a cliff’ with the rapid development of AI.

He fears the technology could too easily be exploited by the military for use in weapons, putting those weapons under the control of AI systems.

He points to the rapid development of AI capabilities at companies such as Boston Dynamics, recently acquired by Google, which is developing autonomous robots for use by the military.

Professor Russell, a researcher at the University of California, Berkeley, and the Centre for the Study of Existential Risk at Cambridge University, compared the development of AI to the work done to develop nuclear weapons.

His views echo those of figures such as Elon Musk, who has recently warned about the dangers of artificial intelligence. Professor Stephen Hawking also joined a group of leading experts in signing an open letter warning of the need for safeguards to ensure AI has a positive impact on mankind.

In an interview with the journal Science for a special edition on artificial intelligence, he said: ‘From the beginning, the primary interest in nuclear technology was the “inexhaustible supply of energy”.

‘The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence.

‘Both seem wonderful until one thinks of the possible risks. In neither case will anyone regulate the mathematics.

‘The regulation of nuclear weapons deals with objects and materials, whereas with AI it will be a bewildering variety of software that we cannot yet describe.

‘I’m not aware of any large movement calling for regulation either inside or outside AI, because we don’t know how to write such regulation.’