To put it in a single sentence, I’d say that it’s because only a minority of cognitively possible goal sets place a high priority on the continued survival of human beings and the structures we value.
Another reason is that we can’t specify what we value in enough mathematical detail to transfer it to a new species without enormous difficulty.
It would be easy if we could just transfer over the goal set of a “typical human” or a “nice person” and hope for the best. But there’s a problem: we have no experimental evidence of what happens when a human being can modify its own goals, or exponentially increase its own intelligence and physical power.
What little evidence we have of scenarios where people acquire a lot of power in a short amount of time indicates that the outcomes are usually not pretty. In fact, we have complicated democratic mechanisms built into our society to guard against these types of outcomes.
Most AI designers are missing the challenge because no one wants to take responsibility for creating the first truly intelligent being; they just want to play with their programs. The idea of taking responsibility for the products of one’s research is a relatively recent notion, and one that holds weight with only a minority of scientists and engineers, even today. This is largely because scientists and engineers are embedded in large institutional structures that place responsibility so far up the chain of command that the actual researchers are absolved of most, if not all, of it.
Back to the original issue of goal sets. Here are some likely applications for the most advanced AI technologies in the next 10-20 years:
There are many others, but I put these at the top of the list because they have the most economic or political importance, and will therefore attract the most research money.
As AI in these areas progresses, the systems will shift from outputting decisions only when explicitly requested to outputting them continually and automatically. When a human worker consults such a machine, it will be less like flicking on a light switch or pressing “run” on a conventional program, and more like dipping a cup into a stream — tapping into a preexisting flow of knowledge consolidation and decision-making.