The memo calls for a new study that would “identify the science, engineering, and policy problems that must be solved to permit greater operational use of autonomy across all war-fighting domains… Emphasis will be given to exploration of the bounds, both technological and social, that limit the use of autonomy across a wide range of military operations. The study will ask questions such as: What activities cannot today be performed autonomously? When is human intervention required? What limits the use of autonomy? How might we overcome those limits and expand the use of autonomy in the near term as well as over the next 2 decades?”
A Defense Department official very close to the effort framed the request more simply. “We want a real roadmap for autonomy,” he told Defense One. What does that mean, and how would a “real roadmap” influence decision-making in the years ahead? If the results of the Defense Science Board’s 2015 Summer Study on Autonomy are eventually made public, the report’s findings could confirm or refute some of our worst fears about the future of artificial intelligence.
2014: THE YEAR THE SMART PEOPLE FREAKED OUT ABOUT AI
In the event that robots one day attempt to destroy humanity, 2014 will be remembered as the year that two of technology’s great geek heroes, Elon Musk and Stephen Hawking, predicted it would happen. And if that never comes to pass, 2014 will go down as the year two of the world’s smartest people had a media panic attack about robots for no reason.
In August, Musk tweeted that artificial intelligence could be more dangerous than nuclear weapons, and in October he likened it to “summoning a demon.” Hawking, meanwhile, told the BBC in December that humanistic artificial intelligence could “spell the end of the human race.” The context for the claim was a discussion of the AI aide that helps Hawking speak despite the theoretical physicist’s crippling ALS.
The statements surprised many, as they seemed to rise from thin air. After all, 2014 was not a year in which artificial intelligence killed anyone or even really made headlines. A few thousand more people encountered Siri, Apple’s AI administrative assistant program for the iPhone, and, despite improvements, found the experience frustrating and disappointing. (It’s no wonder that fewer than 15 percent of iPhone owners have ever even used Siri.) IBM searched for new applications for Watson beyond winning quiz shows. Computers continued to beat humans at chess and continued to not understand chess in any remotely human way: not why we play, not why we sometimes quit, not the significance of chess in Ingmar Bergman’s masterpiece The Seventh Seal, nada. When a computer finally passed the Turing Test, a commonly cited measure for strong artificial intelligence, the response from many in the technology community, after some gleeful reposting, was to reject the Turing Test as a useful metric for measuring humanistic AI.
The route to a humanistic artificial brain is as murky as ever. Inventor and Google director of engineering Ray Kurzweil has suggested that it will be possible only after humanity creates a map of the human brain accurate to the sub-cellular level, a prize that seems far off.