Yes, it is of course possible in principle (in fact I am using cats as the example because Google just did that). The point is that a person working with paper and pencil for multiple lifetimes can't do anything equivalent to what the human visual cortex does in a fraction of a second. Morality and immorality, just like cat recognition, rely on some innate human ability to connect symbols with reality.
edit: To clarify. To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down. To tell what actions are moral or not, humans employ some method that is likewise hopelessly impossible for them to write down. All you can do is write down guidelines, and add some picture examples of cats and dogs. Various rules like utilitarianism are along the lines of "if the eyes have vertical slits, it's a cat", which misclassifies a lizard as a cat but fails to recognize a cat that has closed its eyes. (There is also the practical matter of law-making, where you want to restrict the diversity of moral judgment to something sane, and thus you use principles like 'if it doesn't harm anyone else, it's okay'.)
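To make the analogy concrete, here is a minimal sketch of the kind of learned classifier involved (purely illustrative PyTorch; the architecture, sizes and labels are my own assumptions, not whatever Google actually used). The point it illustrates is that the "method" is nothing but a large pile of learned numbers, not anything a person could write down as explicit rules:

```python
# Minimal illustrative cat-vs-dog classifier. The decision rule lives
# entirely in the learned weights; nobody could state it as explicit rules.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # RGB input
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                   # two scores: cat, dog
)

# The "method" is just these parameters (untrained here; in practice they
# come from fitting many labelled photos).
n_params = sum(p.numel() for p in classifier.parameters())
print(f"decision rule is encoded in {n_params} learned parameters")

# Classifying one 64x64 image is a single forward pass.
image = torch.randn(1, 3, 64, 64)                 # stand-in for a real photo
scores = classifier(image)
print("cat" if scores.argmax(dim=1).item() == 0 else "dog")
```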
To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down.
Right, but if/when we get to (partial) brain emulations (in large quantities) we might be able to do the same thing for ‘morality’ that we do today to recognize cats using a computer.
Agreed. We may even see how it is that certain algorithms (very broadly speaking) can feel pain and so on, and actually start defining something agreeable from first principles. Meanwhile, all that "3^^^3 people with dust specks is worse than 1 person tortured" stuff is to morality as scholasticism is to science. The only value it may have is in highlighting the problems with approximations and with handwavy reasoning: nobody stated that the number of possible people is greater than 3^^^3 (which is false), even though that statement was an implicit part of the reasoning; had it been stated, it would have had to be rejected, invalidating everything that followed. Or a statement that identical instances matter should have been made, which in itself leads to a multitude of really dumb decisions, whereby the life of a conscious robot that has thicker wires in its computer (or uses otherwise redundant hardware) is worth more.
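For a sense of scale (a rough back-of-the-envelope comparison added here, not part of the original comment):

$$3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow(3\uparrow\uparrow 3) = 3\uparrow\uparrow(3^{27}) \approx 3\uparrow\uparrow\bigl(7.6\times 10^{12}\bigr),$$

a power tower of 3s about 7.6 trillion levels high, whereas even a generous physical bound on the number of distinguishable brain-sized states, something like $2^{10^{42}}$ if one takes a Bekenstein-style bound of order $10^{42}$ bits, is only a tower of height three. Any count of physically possible distinct people falls unimaginably short of 3^^^3.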
Or a statement that identical instances matter should have been made
Not many people hold the view that if eternal inflation is true then there is nothing wrong with hitting people with hot pokers, since the relevant brain states exist elsewhere anyway. In Bostrom's paper he could only find a single backer of the view. In talking to many people I have seen it expressed more than once, but still only in a very small minority of cases. Perhaps not including it in that post looms large for you because you have a strong intuition that it would be OK to torture and kill if the universe were very large, or think it very unlikely that the universe is large, but it's a niche objection to address.
After all, one could include such a discussion as a rider in every post talking about trying to achieve anything for oneself or others: “well, reading this calculus textbook seems like it could teach you interesting math, but physicists say we might be living in a big universe, in which case there’s no point since brains in all states already exist, if you don’t care about identical copies.”
If there is any nonzero probability that the universe is NOT very large (or the copy counting is a bit subtle about the copies which are effectively encoding state onto coordinates), then all you have done is scale all the utilities down, which does not affect any decision.
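To spell that step out (my formalisation, not the original wording): let $p > 0$ be the probability that the universe is not big enough for every brain state to exist anyway, and let $C$ be the contribution of the big-universe branch, which is the same whatever you do. Then

$$EU(a) = p\,U_{\text{small}}(a) + (1-p)\,C,$$

so $\arg\max_a EU(a) = \arg\max_a U_{\text{small}}(a)$: every utility difference is merely multiplied by $p$, and no choice changes.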
That is an incredibly terrible thing to do to our friends, the people who believe themselves to be utilitarian, as those people are going to selectively scale down just some of the utilities and then act, in self-interest or otherwise, on the resulting big differences, doing something stupid.
edit: also, the issue of multiply counting redundant hardware, and of the thick-wired utility monsters, in a utilitarianism that does count extra copies does not go away if the world is big. If you have a solid argument that utilitarianism which does not count the extra copies does not work, that means utilitarianism does not work. Which I believe is the case. Morals are an engineered / naturally selected solution to the problem of peer-to-peer intellectual and other cooperation, which requires nodes not to model each other in undue detail, and that rules out direct, straightforward utilitarianism. Utilitarianism is irreparably broken. It is fake reductionism, where you substitute one irreducible concept for another.
(or the copy counting is a bit subtle about the copies which are effectively encoding state onto coordinates)
That’s an interesting idea, thanks. Maybe caring about anthropic probabilities or measures of conscious experiences directly would make more sense than caring about the number of copies as a proxy.
If you take that idea seriously and assume that all anthropic probabilities of conscious experiences must sum to 1, then torture vs dustspecks seems to lose some of its sting, because the total disutility of dustspecking remains bounded and not very high, no matter how many people you dustspeck. (That’s a little similar to the “proximity argument”, which says faraway people matter less.) And being able to point out the specific person to be tortured means that person doesn’t have too low weight, so torturing that single person would be worse than dustspecking literally everyone else in the multiverse. I don’t remember if anyone made this argument before… Of course there could be any number of holes in it.
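A sketch of the arithmetic behind that (my formalisation, under the stated assumption that the weights sum to 1): give person $i$ anthropic weight $w_i \ge 0$ with $\sum_i w_i = 1$, let a dust speck cost each affected person a fixed disutility $d$, and let torture cost $T \gg d$. Then for any set $S$ of dustspecked people

$$\sum_{i \in S} w_i\, d \;\le\; d \sum_i w_i = d,$$

while torturing the singled-out person $j$ costs $w_j T$; however many people are dustspecked, the total stays below $d$, and the torture term does not shrink as long as $j$ keeps non-negligible weight.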
Also note that the thicker wires argument is not obviously wrong, because for all we know, thicker wires could affect subjective probabilities. It sounds absurd, sure, but so does the fact that lightspeed is independent of observer speed.
ETA: the first version of this comment mixed up Pascal’s mugging and torture vs dustspecks. Sorry. Though maybe a similar argument could be made for Pascal’s mugging as well.
Thinking about it some more: maybe the key is that it is not enough for something to exist somewhere, just as in Solomonoff induction it is not enough for the output tape to contain the desired output string somewhere within it; the tape should begin with it. (Note that this is a critically important requirement.) If you are using Solomonoff induction (suppose you have an oracle, suppose the universe is computable, and so on), then your model contains not only the laws of the universe but also a locator, and my intuition is that the model with the simplest locator is some very huge number of bits shorter than the next simplest model, so all the other models except the one with the simplest locator have to be ignored entirely.
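In those terms (my notation, just spelling out the intuition): a hypothesis is a pair (laws, locator) with prior weight roughly

$$2^{-(\ell_{\text{laws}} + \ell_{\text{loc}})},$$

so a model whose locator is $\Delta$ bits longer than the simplest one is down-weighted by a factor of about $2^{-\Delta}$; if $\Delta$ is huge, everything except the simplest-locator model contributes negligibly to the posterior.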
If we require that the locator is somehow present in the whole, then the ultra-distant copies are very different while the nearby copies are virtually the same, and the Kolmogorov complexity of the concatenated strings can be used as the count, without counting nearby copies twice (the thick-wired monster only weighs a teeny tiny bit more).
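Spelled out (my notation, a sketch of that copy-counting rule): for descriptions $x$ and $y$ of two copies,

$$K(xy) \approx K(x) + K(y \mid x)$$

up to logarithmic terms. A nearby, virtually identical copy (the same mind on thicker wires or redundant hardware) has tiny $K(y \mid x)$ and adds almost nothing to the count, while an ultra-distant copy requires a long description relative to $x$ and counts almost as a separate person.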
TBH I feel, though, that utilitarianism goes in the wrong direction entirely. Morals can be seen as an evolved / engineered solution to peer-to-peer intellectual and other cooperation, essentially. It relies on trust, not on mutual detailed modelling (which wastes computing power), and the actions are not quite determined by the expected state (which you can't model), even though it is engineered with some state in mind.
edit: also, I think whatever stuff raises the problem with distant copies or MWI is subjectively disproved by the fact that it does not save you from brain damage of any kind (you can get drunk, pass out, and wake up with a few fewer neurons). So we basically know that either something is screwed up with the naive counting for probabilities, or the world is small.