Or a statement should have been made that identical instances matter
Not many people hold the view that if eternal inflation is true then there is nothing wrong with hitting people with hot pokers, since the relevant brain states exist elsewhere anyway. In Bostrom’s paper he could find only a single backer of the view. In talking to many people, I have seen it expressed more than once, but still only in a very small minority of cases. Perhaps not including it in that post looms large for you because you have a strong intuition that it would be OK to torture and kill if the universe were very large, or because you think it very unlikely that the universe is large, but it’s a niche objection to address.
After all, one could include such a discussion as a rider in every post talking about trying to achieve anything for oneself or others: “well, reading this calculus textbook seems like it could teach you interesting math, but physicists say we might be living in a big universe, in which case there’s no point since brains in all states already exist, if you don’t care about identical copies.”
If there is any nonzero probability that the universe is NOT very large (or the copy counting is a bit subtle about the copies which are effectively encoding state onto coordinates), then all you have done is scale all the utilities down, which does not affect any decision.
That’s an incredibly bad thing to do to our friends who believe themselves to be utilitarians, as those people are going to selectively scale down just some of the utilities and then act, in self-interest or otherwise, on the resulting big differences, doing something stupid.
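To make the scaling point concrete, here is a minimal sketch with made-up utility numbers: multiplying every utility by the same nonzero constant (say, the probability that the universe is small) leaves the ranking of actions unchanged, whereas discounting only the harm term can flip the decision.

```python
# Toy utilities (numbers are made up): an act with a small benefit and a large harm.
benefit, harm = 5.0, -100.0
p_small = 1e-6  # any nonzero probability that the universe is NOT very large

def decide(b, h):
    # Take the act iff its total utility beats doing nothing (utility 0).
    return "act" if b + h > 0.0 else "refrain"

# Uniform scaling: both terms multiplied by the same constant -- the decision is unchanged.
assert decide(benefit, harm) == decide(benefit * p_small, harm * p_small)

# Selective scaling: discounting only the harm "because the brain states exist
# elsewhere anyway" flips the decision.
print(decide(benefit, harm))            # refrain
print(decide(benefit, harm * p_small))  # act
```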
edit: also, the problems with counting redundant hardware multiple times, and with thick-wired utility monsters, in the version of utilitarianism that does count extra copies, don’t go away if the world is big. If you have a solid argument that utilitarianism does not work unless it counts the extra copies the same, that means utilitarianism does not work. Which I believe is the case. Morals are an engineered / naturally selected solution to the problem of peer-to-peer intellectual and other cooperation, which requires nodes not to model each other in undue detail, and that rules out direct, straightforward utilitarianism. Utilitarianism is irreparably broken. It’s fake reductionism, where you substitute one irreducible concept for another.
(or the copy counting is a bit subtle about the copies which are effectively encoding state onto coordinates)
That’s an interesting idea, thanks. Maybe caring about anthropic probabilities or measures of conscious experiences directly would make more sense than caring about the number of copies as a proxy.
If you take that idea seriously and assume that all anthropic probabilities of conscious experiences must sum to 1, then torture vs dustspecks seems to lose some of its sting, because the total disutility of dustspecking remains bounded and not very high, no matter how many people you dustspeck. (That’s a little similar to the “proximity argument”, which says faraway people matter less.) And being able to point out the specific person to be tortured means that person doesn’t have too low weight, so torturing that single person would be worse than dustspecking literally everyone else in the multiverse. I don’t remember if anyone made this argument before… Of course there could be any number of holes in it.
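A rough numerical sketch of that argument (all measures and disutility values here are invented for illustration): if the measures of conscious experiences sum to 1, the total disutility of specking everyone is capped, while the pointed-at torture victim keeps a non-negligible measure.

```python
# All numbers below are invented; they only illustrate the bound.
speck_disutility = 1.0      # disutility of one dust speck, per unit of anthropic measure
torture_disutility = 1e7    # disutility of the torture, per unit of anthropic measure

# Dustspecking literally everyone: the combined measure of the specked can be at most 1,
# so the total disutility is bounded regardless of how many "people" that is.
max_speck_harm = speck_disutility * 1.0

# The specific person pointed out for torture retains a non-negligible measure.
victim_measure = 1e-3
torture_harm = torture_disutility * victim_measure

print(torture_harm > max_speck_harm)  # True: the torture comes out worse under this weighting
```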
Also note that the thicker wires argument is not obviously wrong, because for all we know, thicker wires could affect subjective probabilities. It sounds absurd, sure, but so does the fact that lightspeed is independent of observer speed.
ETA: the first version of this comment mixed up Pascal’s mugging and torture vs dustspecks. Sorry. Though maybe a similar argument could be made for Pascal’s mugging as well.
Thinking about it some more: maybe the key is that it is not enough for something to exist somewhere, just as it is not enough for the output tape in Solomonoff induction to contain the desired output string somewhere within it: the tape must begin with it. (Note that this is a critically important requirement.) If you are using Solomonoff induction (suppose you have an oracle, the universe is computable, and so on), then your model contains not only the laws of the universe but also a locator, and my intuition is that the model with the simplest locator is shorter than the next simplest model by a very large number of bits, so all models other than the one with the simplest locator have to be ignored entirely.
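A toy illustration of that intuition (the program lengths are made up): under a 2^-length prior, a model whose locator is even a few hundred bits simpler than the runner-up carries essentially all of the weight.

```python
from fractions import Fraction

# Made-up total program lengths in bits: laws of the universe plus a locator.
# The first model's locator is 500 bits simpler than the next one's.
model_lengths = [10_000, 10_500, 10_501]

weights = [Fraction(1, 2 ** L) for L in model_lengths]
shares = [w / sum(weights) for w in weights]

# Everything except the simplest model carries roughly 2^-500 of the weight,
# i.e. it can be ignored entirely for practical purposes.
print(float(1 - shares[0]))  # ~5e-151, effectively zero
```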
If we require that the locator be present somehow in the whole description, then the ultra-distant copies are very different while the nearby copies are virtually the same, and the Kolmogorov complexity of the concatenated strings can be used as the count, so that nearby copies are not counted twice (the thick-wired monster only weighs a teeny tiny bit more).
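One crude way to see the concatenation point, using a general-purpose compressor as a very rough stand-in for Kolmogorov complexity: describing a string together with a near-identical copy of itself costs barely more than describing it once, while an unrelated string adds its full cost.

```python
import os
import zlib

def c(data: bytes) -> int:
    # Compressed size as a (very rough) stand-in for Kolmogorov complexity.
    return len(zlib.compress(data, 9))

x = os.urandom(20_000)                   # an incompressible "brain state"
near_copy = x[:-1] + bytes([x[-1] ^ 1])  # a nearby copy, differing in one bit
y = os.urandom(20_000)                   # a genuinely different, distant "copy"

print(c(x))              # ~20000 bytes: the baseline
print(c(x + near_copy))  # only slightly more than c(x): the nearby copy adds almost nothing
print(c(x + y))          # ~2 * c(x): the distant, different string adds its full cost
```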
TBH I feel, though, that utilitarianism goes in the wrong direction entirely. Morals can be seen as an evolved / engineered solution to peer-to-peer intellectual and other cooperation, essentially. It relies on trust, not on detailed mutual modeling (which wastes computing power), and the actions are not quite determined by the expected state (which you can’t model), even though the system is engineered with some state in mind.
edit: also, I think that whatever it is that raises the problem with distant copies or MWI is subjectively disproved by the fact that it doesn’t save you from brain damage of any kind (you can get drunk, pass out, and wake up with slightly fewer neurons). So we basically know that something is screwed up with the naive counting of probabilities, or else the world is small.