I have not been convinced, but am open to the idea, that a paperclip maximizer is the overwhelmingly likely outcome if we create a superhuman AI. At present, my thinking is that if some care is taken in the creation of a superhuman AI, the more likely outcome than a paperclip maximizer is an AI which partially shares human values. That is, the dichotomy “paperclip maximizer vs. Friendly AI” seems like a false dichotomy—I imagine that the sort of AI people would actually build would be somewhere in the middle. Any recommended reading on this point would be appreciated.
I believed similarly until I read Steve Omohundro’s The Basic AI Drives. It convinced me that a paperclip maximizer is the overwhelmingly likely outcome of creating an AGI.
Hello from Perth! I’m 27, have a computer science background, and have been following Eliezer/Overcoming Bias/Less Wrong since finding LOGI circa 2002. I’ve also been thinking about how I can “position myself to make a difference”, and have finally overcome my akrasia; here’s what I’m doing.
I’ll be attending the 2010 Machine Learning Summer School and Algorithmic Learning Theory Conference for a few reasons:
To meet and get to know some people in the AI community. Marcus Hutter will be presenting his talk on Universal Artificial Intelligence at MLSS2010.
To immerse myself in the current topics of the AI research community.
To figure out whether I’m capable of contributing to that research.
To figure out whether contributing to that research will actually help in the building of a FAI.