Is Omega even necessary to this problem?
I would consider transferring control to Staply if and only if I were sure that Staply would make the same decision were our positions reversed (in this way it's reminiscent of the prisoner's dilemma). If I were so convinced, then shouldn't I consider Staply's argument even in a situation without Omega?
If Staply is in fact running the same decision algorithm I am, then he shouldn't even have to voice the offer. I should arrive at the conclusion that he should control the universe as soon as I find out that it can produce more staples than paperclips, whether that comes as a revelation from Omega or as the result of cosmological research.
My intuition rebels at this conclusion, but I think it's being misled by heuristics. A human could not convince me of this proposal, but only because I can't know that we share decision algorithms (i.e., that they would definitely do the same in my place).
This looks to me like a prisoner’s dilemma problem where expected utility depends on a logical uncertainty. I think I would cooperate with prisoners who have different utility functions as long as they share my decision theory.
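To make that concrete, here is a toy sketch in Python of how the expected-utility comparison might go. All the numbers, the 50/50 prior over which good the universe favors, and the 50/50 assignment of initial control are made-up illustrative assumptions; the sketch just compares two policies under the premise that Staply verifiably runs the same algorithm I do, evaluated from the prior (before the logical uncertainty about which good "wins" is resolved):

```python
# Toy model of the Clippy/Staply handover. Assumes both agents verifiably run
# the same decision procedure and evaluate policies from a prior over the
# logical fact "which good is this universe better at producing?".
# All constants below are made-up illustrative numbers.

P_PAPERCLIP_UNIVERSE = 0.5   # prior probability that paperclips are the higher-yield good
HIGH_YIELD = 10 ** 30        # units of your good if the universe favors it
LOW_YIELD = 10 ** 20         # units of your good if the universe favors the other good
P_IN_CONTROL = 0.5           # prior probability that you are the agent handed control


def expected_paperclips(policy: str) -> float:
    """Clippy's expected paperclip count, given that both agents follow `policy`.

    "keep":  whoever holds control optimizes for its own good.
    "defer": whoever holds control hands it to the agent whose good
             the universe can produce more of.
    """
    if policy == "keep":
        # Clippy only produces when it happens to hold control, and its yield
        # depends on which kind of universe this turns out to be.
        return P_IN_CONTROL * (
            P_PAPERCLIP_UNIVERSE * HIGH_YIELD
            + (1 - P_PAPERCLIP_UNIVERSE) * LOW_YIELD
        )
    if policy == "defer":
        # Because Staply runs the same algorithm, Clippy ends up producing
        # exactly when paperclips are the higher-yield good, no matter who
        # was handed control initially -- and always at the high yield.
        return P_PAPERCLIP_UNIVERSE * HIGH_YIELD
    raise ValueError(f"unknown policy: {policy}")


for policy in ("keep", "defer"):
    print(f"{policy:>5}: {expected_paperclips(policy):.2e} expected paperclips")
# Output with these numbers:
#  keep: 2.50e+29 expected paperclips
# defer: 5.00e+29 expected paperclips
```

With these made-up numbers, the "defer" policy wins from behind the veil of the logical uncertainty, even though after learning that staples win it looks like a pure loss.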
(Disclaimers: I have read most of the relevant LW posts on these topics, but have never jumped into discussion on them and claim no expertise. I would appreciate corrections if I misunderstand anything.)
Perhaps I am missing something, but if my utility function is based on paperclips, how do I ever arrive at the conclusion that Staply should be in charge? I get no utility from that, unless my utility function places an even higher value on allowing entities whose utility functions produce a larger output than mine to take precedence over my own utility from paperclips.