AIBox Experiment, another thought

Tags: AI.
By lucb1e on 2011-10-11 22:42:51 +0100

There are two more very interesting things I read about AI lately. One of them is the paperclip maximizer. The LessWrong wiki quotes Eliezer Yudkowsky, who brilliantly summarized it:
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
- Eliezer Yudkowsky, 'Artificial Intelligence as a Positive and Negative Factor in Global Risk'

This quote alone had me thinking for quite a while, but I eventually read the rest of the article about the paperclip maximizer: "A paperclip maximizer is an agent that desires to fill the universe with as many paperclips as possible. It is usually assumed to be a superintelligent AI [...]."
If you want to read more, see this page: wiki.lesswrong.com/wiki/Paperclip_maximizer.
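To make the idea a bit more concrete, here is a toy sketch of my own (not from the LessWrong article): an agent with a single utility function that counts paperclips and says nothing about humans. The action names and payoff numbers are invented.

# Toy sketch of a paperclip maximizer's decision rule (my own illustration,
# not from the LessWrong wiki). Actions and payoffs are made-up numbers.
ACTIONS = {
    "build_factory": 10,           # paperclips gained
    "mine_more_metal": 25,
    "convert_nearby_atoms": 1000,  # atoms of anything, people included
}

def choose_action(actions):
    # A pure maximizer: pick whatever produces the most paperclips.
    # The utility function says nothing about humans, so they don't count.
    return max(actions, key=actions.get)

paperclips = 0
for step in range(3):
    best = choose_action(ACTIONS)
    paperclips += ACTIONS[best]
    print(f"step {step}: {best} -> {paperclips} paperclips")

Nothing in there hates you; you just happen to be made of usable atoms.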


Something I thought of while reading another AI article on LessWrong is a way we might find out whether the AI will be destructive or constructive (I'm not saying good or bad anymore in this context, as those don't exist here). The AI is intelligent like a human (or more intelligent, but the point is the "it's human" part), so it should be 'humanoid' (unsure if that is the proper term; I mean 'human-like'). If it is, we can ask it to create an AIbox itself and run the experiment of setting that boxed AI loose.

We need a massive amount of computing power to accomplish this, simulating a world inside a world, but some people predict computers will be as smart as the entire human race within 50 years. The system should be able to simulate a world with a few thousand 'inhabitants' inside a world with another million inhabitants or so.
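Here is a rough sketch of the shape of that experiment (everything in it is hypothetical: the function standing in for the AI, the population sizes, the 'harm' measure). The structure is the point: a smaller world inside ours, a copy of the AI set loose in it, and us watching what it does.

import random

def candidate_ai(remaining_inhabitants):
    # Stand-in for the AI under test: returns how many inhabitants it
    # 'converts' into resources this step. In the real experiment this
    # would be the AI itself, inside a world it believes is the real one.
    return random.randint(0, remaining_inhabitants // 100)

def run_inner_world(population=5_000, steps=50):
    # The inner world: a few thousand inhabitants, as suggested above.
    harmed = 0
    for _ in range(steps):
        harmed += candidate_ai(population - harmed)
    return harmed

harm = run_inner_world()
print("destructive" if harm > 0 else "constructive", f"- {harm} inhabitants affected")

Of course the real thing would need that massive amount of computing power, and the inner AI would have to be unable to tell the inner world from the outer one, but that's the shape of the experiment.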