Pardon me, my question skipped far too many inferential steps for me to be comfortable that my meaning is clear. Allow me to query for the underlying premises more clearly:
* Is quantum-destroying-the-entire-universe suicide different to plain quantum-I-killed-myself-in-a-box suicide?
That is to say, does Eliezer consider it rational to optimise for the absolute tally of Everett branches or the percentage of them? In “The Bottom Line” Eliezer gives an example definition of my effectiveness as a rationalist: how well my decision optimizes the percentage of Everett branches in which I don’t get killed by faulty brakes.
Absurd as it may be, let us say that the LHC, or perhaps the LHSM (Large Hadron Spaghetti Monster), destroys the entire universe. If particular Everett branches are completely obliterated by the Large Hadron Spaghetti Monster, do I still count those branches when I compute my percentage, or do I count only the worlds where there is an actual universe left to count?
I can certainly imagine that some would consider the parts of the Everett tree that no longer exist in any manner at all to have been ‘pruned’, think the end result is rather neat, and so decide that their utility function optimizes the number of Everett branches containing the desired outcome divided by the number of branches that still exist. Another could say that they simply want to maximise the absolute number of Everett branches that contain desired outcomes; to them, the LHSM eating an entire Everett branch would be no different from any other event that makes an Everett branch undesirable.
The above is my intuitive interpretation of the core of Eliezer’s parting question. Obviously, if we are optimizing for the percentage of Everett branches, then it is rational to create an LHSM rigged to eat the branch if it contains nuclear terrorism, global economic crashes, or Eliezer accidentally unleashing the Replicators upon us all. If, however, we are optimizing for the absolute count of desirable Everett branches, then rigging the LHSM to fire whenever the undesirable outcome occurs is merely a waste of resources.
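A minimal sketch of the divergence, in Python, with branch counts invented purely for illustration: under the percentage rule the rigged LHSM looks like an improvement, while under the absolute rule it changes nothing.

```python
# Toy model: each branch is tagged "desired", "undesired", or "obliterated".
# All counts are made up for illustration only.

def percentage_score(branches):
    """Fraction of still-existing branches that contain the desired outcome."""
    existing = [b for b in branches if b != "obliterated"]
    return sum(b == "desired" for b in existing) / len(existing) if existing else 0.0

def absolute_score(branches):
    """Raw count of branches containing the desired outcome."""
    return sum(b == "desired" for b in branches)

# Without the rigged LHSM: 90 ordinary branches, 10 containing nuclear terrorism.
no_lhsm = ["desired"] * 90 + ["undesired"] * 10
# With the LHSM rigged to fire on nuclear terrorism: those 10 branches cease to exist.
with_lhsm = ["desired"] * 90 + ["obliterated"] * 10

print(percentage_score(no_lhsm), percentage_score(with_lhsm))  # 0.9 vs 1.0 -- rigging "helps"
print(absolute_score(no_lhsm), absolute_score(with_lhsm))      # 90 vs 90 -- rigging is a waste
```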
* Are there fates worse than the universe being obliterated?
I note this question simply to acknowledge that factors other than the significant one of whether we count not-actually-Everett branches could weigh on the answer to Eliezer’s question. Perhaps Joe considers the obliteration of the universe to be an event like any other. The analogy would perhaps be to having x% of Everett branches go off in a straight line together, all rather transparent and not going anywhere in particular, while the remaining (100−x)% of Everett branches head off in the typical way. Joe happens to assign −100 utility to the universe being obliterated, +2 to getting a foot massage and −3,000 to being beaten by a girl. Joe would compare multiplying that x% of not-really-Everett branches by −100 (fire the LHSM) with multiplying it by −3,000 (leave the universe intact). It would be rational for Joe to create an LHSM that would be fired whenever he suffered the feared humiliation. That is, unless he anticipated 1,450 foot massages in return for keeping the universe intact!
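A quick back-of-the-envelope check of those numbers, as a Python sketch (the variable names and the 5% branch measure are my own illustrative choices; the utilities are Joe’s from the paragraph above):

```python
# Joe's utilities, as given above; variable names are mine.
U_OBLITERATE = -100   # universe destroyed, treated as an ordinary outcome
U_MASSAGE    = +2     # one foot massage
U_BEATEN     = -3000  # the feared humiliation

x = 0.05  # arbitrary measure of the branches where the humiliation occurs

eu_fire = x * U_OBLITERATE   # rig the LHSM: those branches become -100 each
eu_keep = x * U_BEATEN       # leave the universe intact: they stay at -3000

print(eu_fire > eu_keep)     # True -- firing looks rational on these numbers

# Break-even number of foot massages for keeping the universe around:
print((U_OBLITERATE - U_BEATEN) / U_MASSAGE)  # 1450.0, the figure quoted above
```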
It seems to me that for it to be rational to LHSM the universe in the event of nuclear terrorism or global economic collapse, and yet not rational to use the LHSM to make a friendly AI, one of the following must hold:
Universe obliteration must be evaluated as a standard Everett branch.
AND
Universe obliteration must be assigned a greater utility than nuclear terrorism or global economic collapse.
(This possibility is equivalent to standard quantum suicide, with the caveat that tails was going to give you cancer anyway and you’d rather be dead.)
OR, ALTERNATIVELY
Universe obliteration is different from quantum suicide. Obliterated universes don’t count at all in the utility function, so preventing nuclear terrorism by obliterating the universe makes the average world a better place once you do the math.
AND
The complications involved in using the same LHSM to create a friendly AI are just not worth the hassle. (Or it is otherwise irrational for some reason unrelated to the rather large amount of universe obliteration that would be going on.)
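For concreteness, here is that disjunction restated as a boolean condition (my own paraphrase in Python, with flag names I made up; it is not anything Eliezer has endorsed):

```python
# My restatement of the two routes above; the flag names are mine.

def lhsm_on_disaster_but_not_for_fai(
    obliteration_is_standard_branch: bool,   # route 1, first conjunct
    obliteration_beats_disaster: bool,       # route 1, second conjunct
    obliterated_branches_dont_count: bool,   # route 2, first conjunct
    fai_via_lhsm_not_worth_hassle: bool,     # route 2, second conjunct
) -> bool:
    route_1 = obliteration_is_standard_branch and obliteration_beats_disaster
    route_2 = obliterated_branches_dont_count and fai_via_lhsm_not_worth_hassle
    return route_1 or route_2
```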
Eliezer never implied an answer as to whether he would fire a universe-destroying LHC to prevent disaster. I wonder: if he did endorse that policy, would he also endorse using the same mechanism to further his research aim?