…which is why the hordes of would-be meddling dabblers haven’t killed us all already.
And now for a truly horrible thought:
I wonder to what extent we’ve been “saved” so far by anthropics. Okay, that’s probably not the dominant effect. I mean, yeah, it’s quite clear that AI is, as you note, REALLY hard.
But still, I can’t help but wonder just how little or much that’s there.
If you think anthropics has saved us from AI many times, you ought to believe we will likely die soon, because anthropics doesn’t constrain the future, only the past. Each passing year without catastrophe should weaken your faith in the anthropic explanation.
The first sentence seems obviously true to me, the second probably false.
My reasoning: to make observations and update on them, I must continue to exist. Hence I expect to make the same observations & updates whether or not the anthropic explanation is true (because I won’t exist to observe and update on AI extinction if it occurs), so observing a “passing year without catastrophe” actually has a likelihood ratio of one, and is not Bayesian evidence for or against the anthropic explanation.
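A toy simulation may make the likelihood-ratio claim concrete. The per-year catastrophe probabilities and the ten-year window below are made-up numbers, purely for illustration:

```python
# Sketch: compare "fraction of all worlds with no catastrophe yet" against
# "fraction of still-existing observers who see no catastrophe yet".
import random

def survives(p_cat, years=10):
    """One simulated world: True if no catastrophe occurs in any of `years` years."""
    return all(random.random() > p_cat for _ in range(years))

def run(p_cat, n_worlds=100_000):
    alive = sum(survives(p_cat) for _ in range(n_worlds))
    p_unconditional = alive / n_worlds               # over all worlds
    p_given_alive = 1.0 if alive else float("nan")   # over observers who still exist: always 1
    return p_unconditional, p_given_alive

for label, p_cat in [("dangerous world (anthropic explanation true)", 0.5),
                     ("safe world (anthropic explanation false)", 0.01)]:
    uncond, cond = run(p_cat)
    print(f"{label}: P(no catastrophe) ~ {uncond:.4f}, "
          f"P(no catastrophe | I exist to observe) = {cond}")
```

Both hypotheses give a conditional probability of one, which is the sense in which the likelihood ratio equals one; the hypotheses only come apart in the unconditional column.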
Wouldn’t the anthropic argument apply just as much in the future as it does now? The world not being destroyed is the only observable result.
The future hasn’t happened yet.
Right. My point was that in the future you are still going to say “wow, the world hasn’t been destroyed yet” even if in 99% of alternate realities it was. cousin_it said:
Each passing year without catastrophe should weaken your faith in the anthropic explanation.
Which shouldn’t be true at all.
If you cannot observe a catastrophe happen, then not observing a catastrophe is not evidence for any hypothesis.
“Not observing a catastrophe” != “observing a non-catastrophe”. If I’m playing Russian roulette and I hear a click and survive, I see good reason to take that as extremely strong evidence that there was no bullet in the chamber.
But doesn’t the anthropic argument still apply? Worlds where you survive playing Russian roulette are going to be ones where there wasn’t a bullet in the chamber. You should expect to hear a click when you pull the trigger.
As it stands, I expect to die (p=1/6) if I play Russian roulette. I don’t hear a click if I’m dead.
That’s the point. You can’t observe anything if you are dead, therefore any observations you make are conditional on you being alive.
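Since the same conditioning question keeps coming up, here is a minimal sketch of the two conventions for the roulette case (one bullet, six chambers, a single spin and trigger pull; the numbers are purely illustrative):

```python
# Sketch: "probability the fired chamber was loaded", computed two ways.
import random

N = 100_000
deaths = 0
loaded_given_click = []   # among survivors: was the fired chamber loaded?

for _ in range(N):
    loaded = random.randrange(6)   # chamber holding the bullet
    fired = random.randrange(6)    # chamber under the hammer after the spin
    if fired == loaded:
        deaths += 1                # no click is ever heard in these games
    else:
        loaded_given_click.append(fired == loaded)   # always False here, by construction

# Over all games, the fired chamber is loaded about 1/6 of the time: the risk is real.
print("P(fired chamber loaded)               ~", deaths / N)
# Conditional on hearing a click, it was never loaded: the "extremely strong evidence" above.
print("P(fired chamber loaded | heard click)  =",
      sum(loaded_given_click) / len(loaded_given_click))
```

Both numbers are correct; the dispute in the rest of the thread is over which one should matter to the person deciding whether to pull the trigger.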
Those universes where you die still exist, even if you don’t observe them. If you carry your logic to its conclusion, there would be no risk to playing russian roulette, which is absurd.
The standard excuse given by those who pretend to believe in many worlds is that you are likely to get maimed in the universes where you get shot but don’t die, which is somewhat unpleasant. If you come up with a more reliable way to commit quantum suicide, like using a nuke, they find another excuse.
Methinks that is still a lack of understanding, or a disagreement on utility calculations. I myself would rate the universes where I die as lower utility still than those where I get injured (indeed the lowest possible utility).
Better still if in all the universes I don’t die.
I do think ‘a disagreement on utility calculations’ may indeed be a big part of it. Are you a total utilitarian? I’m not. A big part of that comes from the fact that I don’t consider two copies of myself to be intrinsically more valuable than one. Perhaps they are instrumentally valuable, if the copies can interact, sync their experiences, and cooperate, but that’s another matter. With experience-syncing, I am mostly indifferent to the number of copies of myself that exist (leaving aside potential instrumental benefits), but without it I assign decreasing utility as the number of copies increases, since I place zero terminal value on multiplicity but positive terminal value on the uniqueness of my identity.
My brand of utilitarianism is informed substantially by these preferences. I adhere to neither average nor total utilitarianism, but I lean closer to average. Whilst I would be against the use of force to turn a population of 10 with X utility each into a population of 3 with (X + 1) utility each, I would in isolation consider the latter preferable to the former (there is no inconsistency here—my utility function simply admits information about the past).
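For concreteness, the population comparison above works out as follows (X = 10 is an arbitrary value, chosen only to make the arithmetic visible):

```python
# Sketch of the 10-person vs 3-person comparison under total and average utilitarianism.
X = 10
pop_a = [X] * 10        # 10 people with X utility each
pop_b = [X + 1] * 3     # 3 people with X + 1 utility each

print("total utility:  ", sum(pop_a), "vs", sum(pop_b))                            # 100 vs 33
print("average utility:", sum(pop_a) / len(pop_a), "vs", sum(pop_b) / len(pop_b))  # 10.0 vs 11.0
```

A total utilitarian prefers the larger population, an average utilitarian the smaller one, which is where the lean towards average comes in.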
That line of thinking leads directly to recommending immediate probabilistic suicide, or at least indifference to it. No thanks.
How so?
I’m saying that you can only observe not dying. Not that you shouldn’t care about universes that you don’t exist in or observe.
The risk in Russian roulette is that, in the worlds where you do survive, you will probably be lobotomized, or drop the gun and shoot someone else, etc. Ignoring that, there is no risk, as long as you don’t care about universes where you die.
Ok. I find this assumption absolutely crazy, but at least I comprehend what you are saying now.
Well think of it this way. You are dead/non-existent in the vast majority of universes as it is.
How is that relevant? If I take some action that results in the death of myself in some other Everett branch, then I have killed a human being in the multiverse.
Think about applying your argument to this universe. You shoot someone in the head, they die instantly, and then you say to the judge “well think of it this way: he’s not around to experience this. besides, there’s other worlds where I didn’t shoot him, so he’s not really dead!”
You can’t appeal to common sense. That’s the point of quantum immortality: it defies our common-sense notions about death. Obviously, since we are used to assuming a single-threaded universe, where death is equivalent to ceasing to exist.
Of course, if you kill someone, you still cause that person pain in the vast majority of universes, as well as grieving to their family and friends.
If star-trek-style teleportation was possible by creating a clone and deleting the original, is that equivalent to suicide/murder/death? If you could upload your mind to a computer but destroy your biological brain, is that suicide, and is the upload really you? Does destroying copies really matter as long as one lives on (assuming the copies don’t suffer)?
You absolutely appeal to common sense on moral issues. Morality is applied common sense, in the Minsky view of “common sense” as an assortment of deductions and inferences extracted from the tangled web of my personal experiential and computational history. Morality is the result of applying that common-sense knowledge base against possible actions in a planning algorithm.
Quantum “immortality” involves a sudden, unexpected, and unjustified redefinition of “death.” That argument works if you buy the premise. But I don’t.
If you are saying that there is no difference between painlessly, instantaneously killing someone in one branch while letting them live in another, versus letting that person live in both, then I don’t know how to proceed. If you’re going to say that, then you might as well make yourself indifferent to the arrow of time as well, in which case it doesn’t matter if that person dies in all branches, because he still “exists” in history.
Now I no longer know what we are talking about. According to my morality, it is wrong to kill someone. The existence of other branches where that person does not die does not have even epsilon difference on my evaluation of moral choices in this world. The argument from the other side seems inconsistent to me.
And yes, Star Trek transporters and destructive uploaders are death machines, a position I’ve previously articulated on LessWrong.
You are appealing to a terminal value that I do not share. I think caring about clones is absurd. As long as one copy of me lives, what difference does it make if I create and delete a thousand others? It doesn’t change my experience or theirs. Nothing would change and I wouldn’t even be aware of it.
From my point of view, I do not like the thought that I might be arbitrarily deleted by a clone of myself. I therefore choose to commit to not deleting clones of myself, thus preventing myself from being deleted by any clones that share that commitment.
If you cannot observe a catastrophe happen, then not observing a catastrophe is not evidence for any hypothesis.
I don’t think this is quite true (it can redistribute probability between some hypotheses). But this strengthens your position rather than weakening it.
Ok, correct.
Retracted: Not correct. What was I thinking? Just because you don’t observe the universes where the world was destroyed doesn’t mean those universes don’t exist.