I was referring to the linear-additive nature of the dust-speck so-called suffering in the number of people who get dust specks.
3^^^3 is far, far larger than the number of distinct mind states of anything human-like. You can only be dust-specking something like 10^(10^20) distinct human-like entities at most. I recall I posted about that a while back. You shouldn’t be multiplying anything by 3^^^3.
TBH, my ‘common sense’ explanation of why EY chooses to adopt the torture > dust specks stance (I say chooses because it is entirely up for grabs here, plus his position is fairly incoherent) is that he seriously believes his work has a non-negligible chance of influencing the lives of an enormous number of people, and consequently, if he can internalize torture > dust specks, he is free to rationalize any sort of thing he can plausibly do, even if AI extinction risk does not exist.
[edit: this response was to an earlier version of the above comment, before it was edited. Some of it is no longer especially apposite to the comment as it exists now.]
I was referring to the linear-additive nature of dust specks.
Well, I agree that 3^^^3 dust specks don’t quite add linearly… long before you reach that ridiculous mass, I expect you get all manner of weird effects that I’m not physicist enough to predict. And I also agree that our intuitions are that dust specks add linearly.
But surely it’s not the dust-specks that we care about here, but the suffering? That is, it seems clear to me that if we eliminated all the dust specks from the scenario and replaced them with something that caused an equally negligible amount of suffering, we would not be changing anything that mattered about the scenario.
And, as I said, it’s not at all clear to me that I intuit linear addition of suffering (whether it’s caused by dust-specks, torture, or something else), or that the scenario depends on assuming linear addition of suffering. It merely depends on assuming that addition of multiple negligible amounts of suffering can lead to an aggregate-suffering result that is commensurable with, and greater than, a single non-negligible amount of suffering.
It’s not clear to me that this assumption holds, but the linear-addition objection seems like a red herring to me.
You can only be dust-specking something like 10^(10^20) distinct human-like entities at most.
Ah, I see.
Yeah, sure, there’s only X possible ways for a human to be (whether 10^(10^20) or some other vast number doesn’t really matter), and there’s only Y possible ways for a dust speck to be, and there’s only Z possible ways for a given human to experience a given dust speck in their eye. So, sure, we only have (XYZ) distinct dust-speck-in-eye events, and if (XYZ) << 3^^^3 then there’s some duplication. Indeed, there’s vast amounts of duplication, given that (3^^^3/(XYZ)) is still a staggeringly huge number.
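(For concreteness, a rough back-of-envelope in Knuth up-arrow notation, taking XYZ ≈ 10^(10^20) as the assumed bound from the number suggested above:)

```latex
3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow(3\uparrow\uparrow 3) = 3\uparrow\uparrow 7\,625\,597\,484\,987
\quad\text{(a tower of 3s about }7.6\times 10^{12}\text{ levels high)},
\qquad
\frac{3\uparrow\uparrow\uparrow 3}{XYZ} \approx \frac{3\uparrow\uparrow 7\,625\,597\,484\,987}{10^{10^{20}}} \approx 3\uparrow\uparrow\uparrow 3,
```

since 10^(10^20) sits roughly between 3↑↑4 and 3↑↑5, and subtracting 3↑↑4 from the top exponent of a tower trillions of levels high changes essentially nothing.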
Agreed.
I’m still curious about what difference that makes.
Well, some difference that it should make: it should lead to severe discounting of the ‘reasoning method’ that arrived at the 3^^^3 dust specks > torture conclusion without ever coming across the exhaustion-of-states issue, in all fields where it was employed, and to severe discounting of anything that came from that process previously. If it failed even when it went against intuition, it’s even more worthless when it goes along with intuition.
I get the feeling that attempts to ‘logically’ deliberate on morality from some simple principles like “utility” are similar to trying to recognize cats in pictures by reading the R,G,B value array and doing some arithmetic. If someone hasn’t got a visual cortex they can’t see, even if they do an insane amount of deliberate reasoning.
similar to trying to recognize cats in pictures by reading the R,G,B value array and doing some arithmetic
But a computer can recognize cats by reading pixel values in pictures? Maybe not as efficiently and accurately as people, but that’s because brains have a more efficient architecture/algorithms than today’s generic computers.
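For whatever it’s worth, here is a minimal sketch of what “arithmetic on the R,G,B array” looks like as classification: plain logistic regression on flattened pixel values, trained on synthetic data with a made-up labeling rule (not a real cat detector, just the mechanics):

```python
# Logistic regression on flattened R,G,B values. The "images" and labels are
# synthetic (a made-up rule stands in for "is a cat"); a real detector would
# need real labeled images and a far better model.
import numpy as np

rng = np.random.default_rng(0)
n, h, w = 200, 16, 16                     # 200 toy "images", 16x16 RGB
X = rng.random((n, h * w * 3))            # flattened R,G,B values in [0, 1)
y = (X.mean(axis=1) > 0.5).astype(float)  # hypothetical labeling rule

wts = np.zeros(X.shape[1])
b = 0.0
lr = 0.1
for _ in range(500):                      # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ wts + b)))
    grad = p - y
    wts -= lr * (X.T @ grad) / n
    b -= lr * grad.mean()

pred = 1.0 / (1.0 + np.exp(-(X @ wts + b))) > 0.5
print("training accuracy:", (pred == y.astype(bool)).mean())
```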
Yes, it is of course possible in principle (in fact I am using cats as the example because Google just did that). The point is that a person can’t do anything equivalent to what the human visual cortex does in a fraction of a second, even using paper and pencil for multiple lifetimes. Morality and immorality, just like cat recognition, rely on some innate human ability to connect symbols with reality.
edit: To clarify. To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down. To tell which actions are moral or not, humans employ some method that is likewise hopelessly impossible for them to write down. All you can do is write down guidelines and add some example pictures of cats and dogs. Rules like utilitarianism are along the lines of “if the eyes have vertical slits, it’s a cat”, which mis-recognizes a lizard as a cat but does not recognize a cat that has closed its eyes. (There is also the practical matter of law-making, where you want to restrict the diversity of moral judgment to something sane, and thus you use principles like ‘if it doesn’t harm anyone else, it’s okay’.)
To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down.
Right, but if/when we get to (partial) brain emulations (in large quantities) we might be able to do the same thing for ‘morality’ that we do today to recognize cats using a computer.
Agreed. We may even see how it is that certain algorithms (very broadly speaking) can feel pain etc., and actually start defining something agreeable from first principles. Meanwhile, all that “3^^^3 people with dust specks is worse than 1 person tortured” stuff is to morality as scholasticism is to science. The only value it may have is in highlighting the problem with approximations and with handwavy reasoning: nobody said that the number of possible people is > 3^^^3 (which is false), even though such a statement was part of the reasoning and should have been stated and then rejected, invalidating everything that followed. Or a statement that identical instances matter should have been made, which in itself leads to a multitude of really dumb decisions whereby the life of a conscious robot that has thicker wires in its computer (or uses otherwise redundant hardware) is worth more.
Or a statement that identical instances matter should have been made
Not many people hold the view that if eternal inflation is true then there is nothing wrong with hitting people with hot pokers, since the relevant brain states exist elsewhere anyway. In Bostrom’s paper he could only find a single backer of the view. In talking to many people, I have seen it expressed more than once, but still only a very small minority of cases. Perhaps not including it in that post looms large for you because you have a strong intuition that it would be OK to torture and kill if the universe were very large, or think it very unlikely that the universe is large, but it’s a niche objection to address.
After all, one could include such a discussion as a rider in every post talking about trying to achieve anything for oneself or others: “well, reading this calculus textbook seems like it could teach you interesting math, but physicists say we might be living in a big universe, in which case there’s no point since brains in all states already exist, if you don’t care about identical copies.”
If there is any nonzero probability that the universe is NOT very large (or that the copy-counting is a bit subtle about copies which are effectively encoding state onto coordinates), then all you have done is scale all the utilities down, which does not affect any decision.
That’s an incredibly terrible thing to do to our friends who believe themselves to be utilitarian, as those people are going to selectively scale down just some of the utilities and then act, in self-interest or otherwise, on the resulting big differences, doing something stupid.
edit: also, the issue of multiple-counting redundant hardware, and the thick-wired utility monsters in the version of utilitarianism that does count extra copies, doesn’t go away if the world is big. If you have a solid argument that utilitarianism which doesn’t count the extra copies does not work, that means utilitarianism does not work. Which I believe is the case. Morals are an engineered / naturally selected solution to the problem of peer-to-peer intellectual and other cooperation, which requires nodes not to model each other in undue detail, and that rules out direct, straightforward utilitarianism. Utilitarianism is irreparably broken. It’s fake reductionism where you substitute one irreducible concept for another.
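To put the first point slightly more formally (my sketch, with p an assumed nonzero probability that the universe is small and C the constant contribution of the big-world branch on the “extra copies add nothing” view):

```latex
EU(a) = p\,U_{\text{small}}(a) + (1-p)\,C
\quad\Longrightarrow\quad
\arg\max_a EU(a) = \arg\max_a U_{\text{small}}(a) \quad \text{for any } p > 0,
```

so a uniform discount merely rescales and shifts the utilities and flips no decision; the danger is only in applying the discount selectively.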
(or the copy counting is a bit subtle about the copies which are effectively encoding state onto coordinates)
That’s an interesting idea, thanks. Maybe caring about anthropic probabilities or measures of conscious experiences directly would make more sense than caring about the number of copies as a proxy.
If you take that idea seriously and assume that all anthropic probabilities of conscious experiences must sum to 1, then torture vs dustspecks seems to lose some of its sting, because the total disutility of dustspecking remains bounded and not very high, no matter how many people you dustspeck. (That’s a little similar to the “proximity argument”, which says faraway people matter less.) And being able to point out the specific person to be tortured means that person doesn’t have too low weight, so torturing that single person would be worse than dustspecking literally everyone else in the multiverse. I don’t remember if anyone made this argument before… Of course there could be any number of holes in it.
Also note that the thicker wires argument is not obviously wrong, because for all we know, thicker wires could affect subjective probabilities. It sounds absurd, sure, but so does the fact that lightspeed is independent of observer speed.
ETA: the first version of this comment mixed up Pascal’s mugging and torture vs dustspecks. Sorry. Though maybe a similar argument could be made for Pascal’s mugging as well.
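A toy formalization of the bounded-disutility argument above (my own sketch: m_i is person i’s anthropic measure with the measures summing to at most 1, d the per-person speck disutility, and m_T and D the measure and disutility of the torture victim):

```latex
\text{Disutility}_{\text{specks}} = \sum_i m_i\, d \le d \sum_i m_i \le d,
\qquad
\text{Disutility}_{\text{torture}} = m_T\, D,
```

so as long as m_T is not itself astronomically small, m_T·D exceeds d and the torture comes out worse, however many people get specked.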
Thinking about it some more: maybe the key is that it is not enough for something to exist somewhere, just as it is not enough for the output tape in Solomonoff induction to contain the desired output string somewhere within it: it should begin with it. (Note that this is a critically important requirement.) If you are using Solomonoff induction (suppose you have an oracle, suppose the universe is computable, and so on), then your model contains not only the laws of the universe but also a locator, and my intuition is that the model with the simplest locator is some very huge number of bits shorter than the next simplest model, so all the other models except the one with the simplest locator have to be ignored entirely.
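A toy illustration of why a “very huge” gap in locator length makes the other models ignorable under the usual 2^(-length) prior (the bit counts below are invented purely for illustration):

```python
# Toy Solomonoff-style weighting: each model = laws-of-universe + locator,
# prior weight proportional to 2^(-description length in bits). The bit counts
# are made up; the point is only that even a gap of a few hundred bits makes
# every non-simplest model's weight negligible, let alone a "very huge" gap.
models = {
    "simplest locator":      1_000,
    "next simplest locator": 1_250,
    "baroque locator":       1_800,
}

shortest = min(models.values())
weights = {name: 2.0 ** -(bits - shortest) for name, bits in models.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name:>22}: relative posterior weight ~ {w / total:.3e}")
```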
If we require that the locator be present somehow in the whole, then the ultra-distant copies are very different while the nearby copies are virtually the same, and the Kolmogorov complexity of the concatenated strings can be used as the count, not counting nearby copies twice (the thick-wired monster only weighs a teeny tiny bit more).
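And a crude illustration of the “complexity of the concatenation as the count” idea, using zlib compression as a very rough stand-in for Kolmogorov complexity (the strings are arbitrary placeholders for mind states):

```python
# Compressed length as a crude proxy for Kolmogorov complexity: a near-identical
# nearby copy adds almost nothing to the total, while a genuinely different
# string adds roughly its own full complexity.
import zlib

def c(s: str) -> int:
    return len(zlib.compress(s.encode(), 9))

state_a = "the quick brown fox jumps over the lazy dog " * 50      # one "mind state"
state_b = "colourless green ideas sleep furiously tonight " * 50   # a genuinely different one

print("C(a)      ~", c(state_a))
print("C(a + a)  ~", c(state_a + state_a))  # duplicate: barely more than C(a)
print("C(a + b)  ~", c(state_a + state_b))  # distinct: roughly C(a) + C(b)
```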
TBH I feel, though, that utilitarianism goes in the wrong direction entirely. Morals can be seen as an evolved / engineered solution to peer-to-peer intellectual and other cooperation, essentially. That solution relies on trust, not on mutual detailed modeling (which wastes computing power), and the actions are not quite determined by the expected state (which you can’t model), even though it is engineered with some state in mind.
edit: also, I think whatever stuff raises the problem of distant copies or MWI is subjectively disproved by its not saving you from brain damage of any kind (you can get drunk, pass out, and wake up with a few fewer neurons). So we basically know something’s screwed up with naive counting of probabilities, or the world is small.
it should lead to severe discounting of the ‘reasoning method’ that arrived at the 3^^^3 dust specks > torture conclusion without ever coming across the exhaustion-of-states issue.
This is mistaken. E.g. see this post which discusses living in a Big World, as in eternal inflation theories where the universe extends infinitely and has random variation so that somewhere in the universe every possible galaxy or supercluster will be realized, and all the human brain states will be explored.
Or see Bostrom’s paper on this issue, which is very widely read around here. Many people think that our actions can still matter in such a world, e.g. that it’s better to try to give people chocolate than to torture them here on Earth, even if in some ludicrously distant region there are brains that have experienced all the variations of chocolate and torture.
it should lead to severe discounting of the ‘reasoning method’ that arrived at the 3^^^3 dust specks > torture conclusion without ever coming across the exhaustion-of-states issue.
Even better, to my mind, is to think about the scenario from the ground up and form my own conclusions, rather than start with some intuitive judgment about someone else’s writeup about it and then update that judgment based on things they didn’t mention in that writeup.
If someone hasn’t got a visual cortex they can’t see, even if they do an insane amount of deliberate reasoning
It’s not clear to me that I correctly understand what you mean here, but given my current understanding, I disagree. All my visual cortex is doing is performing computations on the output of my eyes; if that’s seeing, then anything else that performs the same computations can see just as well.
Even better, to my mind, is to think about the scenario from the ground up and form my own conclusions, rather than start with some intuitive judgment about someone else’s writeup about it and then update that judgment based on things they didn’t mention in that writeup.
The point is that the approach is flawed; one should always learn from mistakes. The issue here is in building an argument which is superficially logical (it conforms to the structure of something a logical, rational person might say, something you might have a logical character in a movie say) but is fundamentally a string of very shaky intuitions, which are only correct if nothing outside the argument interferes, rather than a series of solid steps.
It’s not clear to me that I correctly understand what you mean here, but given my current understanding, I disagree. All my visual cortex is doing is performing computations on the output of my eyes; if that’s seeing, then anything else that performs the same computations can see just as well.
In theory. In practice it takes a ridiculous number of operations, and you can’t Chinese-room vision without a slowdown by a factor of billions: decades for a single cat recognition versus a fraction of a second, and that’s if you’ve got an algorithm for it.
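Rough arithmetic behind “decades versus a fraction of a second” (all the numbers here are assumptions, just to show the order of magnitude):

```python
# Assumed: ~1e9 multiply-adds for one image recognition by a modern classifier,
# and one multiply-add every ~5 seconds with paper and pencil.
ops_per_image = 1e9
seconds_per_op_by_hand = 5.0
seconds_per_year = 3.156e7

years = ops_per_image * seconds_per_op_by_hand / seconds_per_year
print(f"~{years:,.0f} years of pencil-and-paper work for one image")  # ~158 years
```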
I disagree with pretty much all of this, as well as with most of what seem to be the ideas underlying it, and don’t see any straightforward way to achieve convergence rather than infinitely ramified divergence, so I suppose it’s best for me to drop the thread here.