Humans evaluate decisions using their current utility function, not the utility function they might end up with as a consequence of that decision. Using my current utility function, wireheading means I will never accomplish anything ever again, and thus I view it as having very negative utility.
I think I am happy with how these rules interact with the Anthropic Trilemma problem. But as a simpler test case, consider the following:
An AI walks into a movie theater. “In exchange for 10 utilons’ worth of cash,” says the owner, “I will show you a movie worth 100 utilons. But we have a special offer: for only 1000 utilons’ worth of cash, I will clone you ten thousand times, and every copy of you will see that same movie. At the end of the show, since every copy will have had the same experience, I’ll merge all the copies of you back into one.”
Note that, although AIs can be cloned, cash cannot be. ^_^;
I claim that a “sane” AI is one that declines the special offer.
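Here is a rough back-of-the-envelope comparison (just a sketch; the utilon figures are the ones from the scenario above, and the two accounting conventions are my own framing):

```python
# Rough arithmetic for the theater owner's two offers, under two ways of
# counting the clones' identical experiences (a sketch, not a full decision theory).

MOVIE_VALUE = 100      # utilons for seeing the movie
BASIC_PRICE = 10       # utilons of cash for the basic offer
SPECIAL_PRICE = 1000   # utilons of cash for the special offer
COPIES = 10_000        # number of clones who watch under the special offer

# Basic offer: one viewing, one ticket.
basic_net = MOVIE_VALUE - BASIC_PRICE                      # 90

# Special offer, if every identical copy's viewing counts separately:
naive_special_net = COPIES * MOVIE_VALUE - SPECIAL_PRICE   # 999,000 -- looks like a bargain

# Special offer, if 10,000 identical viewings merged back into one count as one viewing:
merged_special_net = MOVIE_VALUE - SPECIAL_PRICE           # -900

print(basic_net, naive_special_net, merged_special_net)
```

The “sane” AI is the one whose bookkeeping produces the third number rather than the second, which is what the utility-rules in the next comment are meant to formalize.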
(I’m not sure what the rule is here for replying to oneself. Apologies if this is considered rude; I’m trying to avoid putting TLDR text in one comment.)
Here is a set of utility-rules that I think would cause an AI to behave properly. (Would I call this “Identical Copy Decision Theory”?)
Suppose that an entity E clones itself, becoming E1 and E2. (We’re being agnostic here about which of E1 and E2 is the “original”. If the clone operation is perfect, the distinction is meaningless.) Before performing the clone, E calculates its expected utility U(E) = (U(E1)+U(E2))/2.
After the cloning operation, E1 and E2 have separate utility functions: E1 does not care about U(E2). “That guy thinks like me, but he isn’t me.”
Suppose that E1 and E2 have some experiences, and then they are merged back into one entity E’ (as described in http://lesswrong.com/lw/19d/the_anthropic_trilemma/ and elsewhere). Assuming this merge operation is possible (because the experiences of E1 and E2 were not too bizarrely disjoint), the utility of E’ is the average: U(E’) = (U(E1) + U(E2))/2.
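Here is a minimal sketch of these rules in Python (the function names, and the application back to the theater example, are illustrative rather than part of any formal theory):

```python
def expected_utility_before_cloning(u_e1, u_e2):
    """Before the clone operation, E values the outcome as the average of its copies' utilities."""
    return (u_e1 + u_e2) / 2

def utility_after_merge(u_e1, u_e2):
    """When E1 and E2 are merged back into E', the merged entity gets the average of their utilities."""
    return (u_e1 + u_e2) / 2

# Generalizing the same average to the 10,000-copy merge in the theater example:
def utility_after_n_way_merge(copy_utilities):
    return sum(copy_utilities) / len(copy_utilities)

decline_special_offer = 100 - 10                                          # 90
accept_special_offer = utility_after_n_way_merge([100] * 10_000) - 1000   # 100 - 1000 = -900

assert decline_special_offer > accept_special_offer  # the "sane" AI declines the special offer
```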
It’s difficult to answer the question of what our utility function is, but easier to answer the question of what it should be.
Suppose we have an AI which can duplicate itself at a small cost. Suppose the AI is about to witness an event which will probably make it happy. (Perhaps the AI was working to get a law passed, and the vote is due soon. Perhaps the AI is maximizing paperclips, and a new factory has opened. Perhaps the AI’s favorite author has just written a new book.)
Does it make sense that the AI would duplicate itself in order to witness this event in greater multiplicity? If not, we need to find a set of utility rules that cause the AI to behave properly.
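As a sketch of the failure mode (my own illustration, with made-up numbers): an AI that adds up its copies' utilities will pay to duplicate itself before the happy event, while one that averages over copies, as in the rules above, will not.

```python
EVENT_UTILITY = 50      # how much the AI enjoys witnessing the event (made-up figure)
DUPLICATION_COST = 1    # small cost per extra copy (made-up figure)
N_COPIES = 10           # total copies watching after duplication

# Utility that sums over copies: duplication looks like a bargain.
summing_value = N_COPIES * EVENT_UTILITY - (N_COPIES - 1) * DUPLICATION_COST   # 491

# Utility that averages over copies: duplication is a pure loss.
averaging_value = EVENT_UTILITY - (N_COPIES - 1) * DUPLICATION_COST            # 41

no_duplication_value = EVENT_UTILITY                                           # 50

print(summing_value > no_duplication_value)    # True  -> the summing AI duplicates itself
print(averaging_value > no_duplication_value)  # False -> the averaging AI behaves properly
```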
In modern times, some people have started to see nature more as an enemy to be conquered than as a god to be worshiped.
I’ve seen people argue the opposite. In ancient times, nature meant wolves and snow and parasites and drought, and you had to kill it before it killed you. Only recently have we developed the idea that nature is something to be conserved. (Because until recently, we weren’t powerful enough that it mattered.)
Note that when the trillion were told they won, they were actually being lied to—they had won a trillionth part of the prize, one way or another.
Suppose that, instead of winning the lottery, you want your friend to win the lottery. (Or you want your random number generator to crack someone’s encryption key, or you want a meteor to fall on your hated enemy, etc.) Then each of the trillion people would experience the full satisfaction from whatever random result happened.
I deny that increasing the number of physical copies increases the weight of an experience. If I create N copies of myself, there is still just one of me, plus N other agents running my decision-making algorithms. If I then merge all N copies back into myself, the resulting composite contains the utility of each copy weighted by 1/(N+1).
My feeling about the Boltzmann Brain is: I cheerfully admit that there is some chance that my experience has been produced by a random experience generator. However, in those cases, nothing I do matters anyway. Thus I don’t give them any weight in my decision-making algorithm.
This solution still works correctly if the N copies of me have slightly different experiences and then forget them.
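Here is that weighting written out (my own formalization of the paragraph above): merging the original with its N copies gives each of the N+1 experience streams weight 1/(N+1), so making more copies never increases the weight of any single experience.

```python
def merged_utility(original_utility, copy_utilities):
    """Weight the original and each of the N copies equally, at 1/(N+1) apiece."""
    streams = [original_utility] + list(copy_utilities)
    return sum(streams) / len(streams)

# N identical copies: the composite's utility is unchanged, however large N is.
assert merged_utility(100, [100] * 7) == 100

# Copies with slightly different (and then forgotten) experiences: the composite just averages them.
print(merged_utility(100, [101, 99, 100]))  # 100.0
```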
Also: it seems like a really poor plan, in the long term, for the fate of the entire plane to rest on the sanity of one dude. If Hirou kept the sword, he could maybe try to work with the wizards—ask them to spend one day per week healing people, make sure the crops do okay, etc. Things maybe wouldn’t be perfect, but at least he wouldn’t be running the risk of everybody-dies.
I think my concern about “power corrupts” is this: humans have a strong drive to improve things. We need projects, we need challenges. When this guy gets unlimited power, he’s going to take two or three passes over everything and make sure everybody’s happy, and then I’m worried he’s going to get very, very bored. With an infinite lifespan and unlimited power, it’s sort of inevitable.
What do you do, when you’re omnipotent and undying, and you realize you’re going mad with boredom?
Does “unlimited power” include the power to make yourself not bored?
It’s often difficult to think about humans’ utility functions, because we’re used to taking them as an input. Instead, I like to imagine that I’m designing an AI, and think about what its utility function should look like. For simplicity, let’s assume I’m building a paperclip-maximizing AI: I’m going to build the AI’s utility function in a way that lets it efficiently maximize paperclips.
This AI is self-modifying, meaning it can rewrite its own utility function. So, for example, it might rewrite its utility function to include a term for keeping its promises, if it determined that this would enhance its ability to maximize paperclips.
This AI has the ability to rewrite itself to “while(true) { happy(); }”. It evaluates this action in terms of its current utility function: “If I wirehead myself, how many paperclips will I produce?” vs “If I don’t wirehead myself, how many paperclips will I produce?” It sees that not wireheading is the better choice.
If, for some reason, I’ve written the AI to evaluate decisions based on its future utility function, then it immediately wireheads itself. In that case, arguably, I have not written an AI at all; I’ve simply written a very large amount of source code that compiles to “while(true) { happy(); }”.
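A toy version of that distinction (my own sketch; the candidate rewrites and their scores are illustrative stand-ins):

```python
# Two candidate self-modifications, scored by how many paperclips the resulting AI would
# produce and by how happy the resulting AI would judge itself to be.
CANDIDATES = {
    "keep_current_code": {"paperclips_produced": 10_000, "self_reported_happiness": 7},
    "wirehead":          {"paperclips_produced": 0,      "self_reported_happiness": 10**9},
}

def choose_with_current_utility():
    # Current utility function: paperclips. Wireheading scores zero, so it is rejected.
    return max(CANDIDATES, key=lambda c: CANDIDATES[c]["paperclips_produced"])

def choose_with_future_utility():
    # Evaluate each option by how the post-modification self would rate the outcome.
    return max(CANDIDATES, key=lambda c: CANDIDATES[c]["self_reported_happiness"])

print(choose_with_current_utility())  # keep_current_code
print(choose_with_future_utility())   # wirehead
```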
I would argue that any humans that had this bug in their utility function have (mostly) failed to reproduce, which is why most existing humans are opposed to wireheading.