
Artificial Intelligence Coming Soon – Why it may be very dangerous

Wednesday, December 5, 2012 18:17

(Before It's News)

Artificial Intelligence

 

Why is AI Dangerous?

To put it in a single sentence, I’d say that it’s because only a minority of cognitively possible goal sets place a high priority on the continued survival of human beings and the structures we value.

Another reason is that we can't specify what we value in enough mathematical detail to transfer it to a new kind of mind without enormous difficulty.

It would be easy if we could just transfer over the goal set of a “typical human” or a “nice person” and hope for the best. But there’s a problem: we have no experimental evidence of what happens when a human being can modify their own goals, or increase their own intelligence and/or physical power exponentially.

What little evidence we have of scenarios where people acquire a lot of power in a short amount of time indicates that the outcomes are usually not pretty. In fact, we have complicated democratic mechanisms built into our society to guard against these types of outcomes.

Most AI designers are missing the challenge because no one wants to take responsibility for creating the first truly intelligent being. They just want to play with their program. The idea of taking any responsibility for the products of one’s research is a relatively recent notion, one that holds weight with only a minority of scientists and engineers, even today. This is usually because scientists and engineers are embedded in a large institutional apparatus that places responsibility so far up the chain of command that the actual researchers are absolved of most, if not all, responsibility.

Back to the original issue of goal sets. Here are some likely applications for the most advanced AI technologies in the next 10-20 years:

  • Intelligence analysis and wargaming
  • Law enforcement
  • Analyzing interstate politics
  • Finance, banking, & investing
  • Controlling combat robots
  • Automating work flows

There are many others, but I put these at the top of the list because they have the most economic or political importance, and therefore will attract the most research money.

As AI in these areas progresses, the systems will go from outputting decisions only when explicitly requested, to outputting decisions continually and automatically. When a human worker consults the machine for input, it will be more like dipping a cup into a stream and tapping into the preexisting flow of knowledge consolidation and decision-making, rather than flicking on a light switch or pressing “run” for a conventional computer program.
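The contrast above can be sketched in code. The following is a minimal illustration (all names hypothetical, not from any real system): a conventional program that computes a decision only when explicitly asked, versus a system that emits decisions continuously, which a human merely samples from, like dipping a cup into a stream.

```python
import itertools

def decide_on_request(query):
    """Conventional style: compute a decision only when explicitly requested,
    like pressing 'run' on a program."""
    return f"decision for {query!r}"

def continuous_decision_stream():
    """Streaming style: emit a fresh decision on every tick, whether or not
    anyone is currently listening."""
    for tick in itertools.count():
        yield f"decision at tick {tick}"

# The human worker 'dips a cup into the stream', sampling the ongoing flow:
stream = continuous_decision_stream()
latest = next(stream)

# Versus the conventional one-shot request:
on_demand = decide_on_request("quarterly risk report")

print(latest)     # decision at tick 0
print(on_demand)  # decision for 'quarterly risk report'
```

The generator stands in for a system whose knowledge consolidation runs regardless of whether a consumer is attached; the function call stands in for today's on-demand tools.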

