There’s no fundamental reason why value should be linear in number of dust specks
Yeah, that has always been my main problem with that scenario.
There are different ways to sum multiple sources of something. Consider series vs. parallel electrical circuits; the total depends greatly on how the individual voltage sources (or resistors, or whatever) are combined.
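To make that concrete, here’s a minimal sketch (Python, with made-up component values) of how the very same parts give very different totals depending on the combination rule:

```python
# The same individual components aggregate very differently depending on
# the rule used to combine them.  Values are arbitrary examples.
resistors = [100.0, 100.0, 100.0]  # ohms

# Series: resistances simply add.
r_series = sum(resistors)                           # 300.0

# Parallel: reciprocals add, so the total is smaller than any single part.
r_parallel = 1.0 / sum(1.0 / r for r in resistors)  # ~33.3

print(r_series, r_parallel)
```

The open question below is which rule, if either, is the right analogy for summing suffering across separate minds.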
When it comes to suffering, well, suffering only exists in consciousness, and each point of consciousness—each mind involved—experiences its own dust speck individually. There is no conscious mind in that scenario who is directly experiencing the totality of the dust specks and suffers accordingly. It is in no way obvious to me that the “right” way to consider the totality of that suffering is to just add it up. Perhaps it is. But unless I missed something, no one arguing for torture so far has actually shown it (as opposed to just assuming it).
Suppose we make this about (what starts as) a single person. Suppose that you, yourself, are going to be copied into all that humongous number of copies. And you are given a choice: before that happens, you will be tortured for 50 years. Or you will be unconscious for 50 years, but after the copying each of your copies will get a dust speck in the eye. Either way you get copied; that’s not part of the choice. After that, whatever your choice, you will be able to continue with your lives.

In that case, I don’t care about doing the “right” math that will make people call me rational; I care about being the agent who is happily NOT writhing in pain with 50 more years of it ahead of him.
EDIT: come to think of it, assume the copying template is taken from you before the 50 years start, so we don’t have to consider memories and lasting psychological effects of torture. My answer remains the same: even if in the future I won’t remember the torture, I don’t want to go through it.
As far as I know, TvDS doesn’t assume that value is linear in dust specks. As you say, there are different ways to sum multiple sources of something. In particular, there are many ways to sum the experiences of multiple individuals.
For example, the whole problem evaporates if I decide that people’s suffering only matters to the extent that I personally know those people. In fact, much less ridiculous problems also evaporate… e.g., in that case I also prefer that thousands of people suffer so that I and my friends can live lives of ease, as long as the suffering hordes are sufficiently far away.
It is not obvious to me that I prefer that second way of thinking, though.
e.g., in that case I also prefer that thousands of people suffer so that I and my friends can live lives of ease, as long as the suffering hordes are sufficiently far away.
It is arguable (in terms of revealed preferences) that first-worlders typically do prefer that. This requires a slightly non-normative meaning of “prefer”, but a very useful one.
Oh, absolutely. I chose the example with that in mind.
I merely assert that “but that leads to thousands of people suffering!” is not a ridiculous moral problem for people (like me) who reveal such preferences to consider, and it’s not obvious that a model that causes the problem to evaporate is one that I endorse.
Well, it sure uses linear intuition. 3^^^3 is bigger than the number of distinct states; it’s far past the point where you are only increasing the count of exactly-duplicated dust-speck experiences, so you could reasonably expect the total to flatten out.
One can go perverse and proclaim that one treats duplicates the same, but then if there’s a button you can press to replace everyone’s mind with the mind of the happiest person, you should press it.
I think the stupidity of utilitarianism is the belief that morality is about states, rather than about dynamic processes and state transitions. A simulation of a pinprick slowed down 1,000,000 times is not ultra-long torture. ‘Murder’ is a form of irreversible state transition. Morality as it exists is about state transitions, not about states.
It isn’t clear to me what the phrase “exactly-duplicated” is doing there. Is there a reason to believe that each individual dust-speck-in-eye event is exactly like every other? And if so, what difference does that make? (Relatedly, is there a reason to believe that each individual moment of torture is different from all the others? If it turns out that it’s not, does that imply something relevant?)
In any case, I certainly agree that one could reasonably expect the negative value of suffering to flatten out no matter how much of it there is. It seems unlikely to me that fifty years of torture is anywhere near the asymptote of that curve, though… for example, I would rather be tortured for fifty years than be tortured for seventy years.
But even if it somehow is at the asymptotic limit, we could recast the problem with ten years of torture instead, or five years, or five months, or some other value that is no longer at that limit, and the same questions would arise.
So, no, I don’t think the TvDS problem depends on intuitions about the linear-additive nature of suffering. (Indeed, the more I think about it, the less convinced I am that I have such intuitions, as opposed to approaches-a-limit intuitions. This is perhaps because thinking about it has changed my intuitions.)
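For what it’s worth, a toy saturating-disutility curve (all parameters invented; TvDS itself specifies nothing of the sort) shows how both halves of that intuition can coexist: the curve approaches a limit, yet seventy years still comes out clearly worse than fifty.

```python
import math

# Toy bounded-disutility model: u(t) = U_MAX * (1 - exp(-t / TAU)).
# U_MAX and TAU are made-up parameters chosen only to show the shape.
U_MAX = 1.0   # asymptotic (maximum possible) disutility
TAU = 40.0    # years; controls how quickly the curve saturates

def disutility(years):
    return U_MAX * (1.0 - math.exp(-years / TAU))

print(disutility(50))  # ~0.71
print(disutility(70))  # ~0.83 -- noticeably worse, so 50 years isn't at the limit yet
```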
I was referring to the linear additivity of the dust-speck so-called suffering, in the number of people with dust specks.
3^^^3 is far, far larger than the number of distinct mind states of anything human-like. You can only be dust-speck-ing something like 10^(10^20) distinct human-like entities maximum. I recall I posted about that a while back. You shouldn’t be multiplying anything by 3^^^3.
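To give a feel for the gap (taking 10^(10^20) as the rough bound claimed above, not an established figure), iterated logarithms make the comparison tractable:

```python
import math

# 3^^3 (a tower of three 3s) is still small enough to compute exactly:
three_tetrated_3 = 3 ** (3 ** 3)
print(three_tetrated_3)      # 7625597484987

# 3^^^3 = 3^^(3^^3): a power tower of 3s of height 7,625,597,484,987.
# Each log base 3 peels off exactly one level of that tower, so it would
# take roughly 7.6 trillion iterated logs to reduce 3^^^3 to a small number.

# The claimed bound of 10**(10**20) mind states collapses after only three:
log1 = 1e20                  # log10 of 10**(10**20)
log2 = math.log10(log1)      # 20.0
log3 = math.log10(log2)      # ~1.3
print(log1, log2, log3)
```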
TBH, my ‘common sense’ explanation of why EY chooses to adopt the torture > dust specks stance (I say ‘chooses’ because it is entirely up for grabs here, plus his position is fairly incoherent) is that he seriously believes his work has a non-negligible chance of influencing the lives of an enormous number of people, and consequently, if he can internalize torture > dust specks, he is free to rationalize any sort of thing he can plausibly do, even if AI extinction risk does not exist.
[edit: this response was to an earlier version of the above comment, before it was edited. Some of it is no longer especially apposite to the comment as it exists now.]
I was referring to linear-additive nature of dust specks.
Well, I agree that 3^^^3 dust specks don’t quite add linearly… long before you reach that ridiculous mass, I expect you get all manner of weird effects that I’m not physicist enough to predict. And I also agree that our intuitions are that dust specks add linearly.
But surely it’s not the dust-specks that we care about here, but the suffering? That is, it seems clear to me that if we eliminated all the dust specks from the scenario and replaced them with something that caused an equally negligible amount of suffering, we would not be changing anything that mattered about the scenario.
And, as I said, it’s not at all clear to me that I intuit linear addition of suffering (whether it’s caused by dust-specks, torture, or something else), or that the scenario depends on assuming linear addition of suffering. It merely depends on assuming that addition of multiple negligible amounts of suffering can lead to an aggregate-suffering result that is commensurable with, and greater than, a single non-negligible amount of suffering.
It’s not clear to me that this assumption holds, but the linear-addition objection seems like a red herring to me.
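As a toy illustration of that weaker assumption (all numbers invented): even a strongly sublinear aggregation rule, provided it is unbounded, eventually dwarfs a single large harm.

```python
# Invented magnitudes: one dust speck = 1e-9 "suffering units",
# fifty years of torture = 1e9 units.
SPECK = 1e-9
TORTURE = 1e9

def aggregate_specks(n_people):
    # strongly sublinear (square root), but unbounded
    return SPECK * n_people ** 0.5

print(aggregate_specks(10 ** 12))  # 1e-03: still negligible
print(aggregate_specks(10 ** 40))  # 1e+11: already exceeds TORTURE
```

Whether any such unbounded rule is the right one is exactly the assumption flagged above as unproven.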
You can only be dust-speck-ing something like 10^(10^20) distinct human-like entities maximum.
Ah, I see.
Yeah, sure, there’s only X possible ways for a human to be (whether 10^(10^20) or some other vast number doesn’t really matter), and there’s only Y possible ways for a dust speck to be, and there’s only Z possible ways for a given human to experience a given dust speck in their eye. So, sure, we only have (XYZ) distinct dust-speck-in-eye events, and if (XYZ) << 3^^^3 then there’s some duplication. Indeed, there’s vast amounts of duplication, given that (3^^^3/(XYZ)) is still a staggeringly huge number.
Agreed.
I’m still curious about what difference that makes.
Well, some difference that it should make: it should lead to severe discounting of the ‘reasoning method’ that arrived at the ‘3^^^3 dust specks > torture’ conclusion without ever coming across the exhaustion-of-states issue, in all fields where that method was employed, and to severely discounting anything that came from that process previously. If it failed even when it went against intuition, it’s even more worthless when it goes along with intuition.
I get the feeling that attempts to ‘logically’ deliberate on morality from some simple principles like “utility” are similar to trying to recognize cats in pictures by reading the R,G,B values as an array of numbers and doing some arithmetic. If someone hasn’t got a visual cortex, they can’t see, even if they do an insane amount of deliberate reasoning.
similar to trying to recognize cats in pictures by reading the R,G,B values as an array of numbers and doing some arithmetic
But a computer can recognize cats by reading pixel values in pictures? Maybe not as efficiently and accurately as people, but that’s because brains have a more efficient architecture/algorithms than today’s generic computers.
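For what it’s worth, “reading pixel values and doing some arithmetic” is literally all such a program does. Here is a deliberately tiny sketch (a nearest-centroid classifier on flattened RGB arrays, with random placeholder data standing in for real cat photos):

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder "images": random arrays standing in for real cat / non-cat photos.
cats = rng.normal(0.6, 0.1, size=(50, 32 * 32 * 3))
noncats = rng.normal(0.4, 0.1, size=(50, 32 * 32 * 3))

# "Training" is just arithmetic: average the pixel arrays for each class.
cat_centroid = cats.mean(axis=0)
noncat_centroid = noncats.mean(axis=0)

def classify(image):
    # More arithmetic: compare distances to the two class centroids.
    d_cat = np.linalg.norm(image - cat_centroid)
    d_non = np.linalg.norm(image - noncat_centroid)
    return "cat" if d_cat < d_non else "not a cat"

print(classify(rng.normal(0.6, 0.1, size=32 * 32 * 3)))  # most likely "cat"
```

(Google’s actual system was of course a large neural network, not anything this crude; the point is only that it bottoms out in arithmetic over pixel values.)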
Yes, it is of course possible in principle (in fact I am using cats as an example because Google just did that). The point is that a person can’t do anything equivalent to what the human visual cortex does in a fraction of a second, even using paper and pencil for multiple lifetimes. Morality and immorality, just like cat recognition, rely on some innate human ability to connect symbols with reality.
edit: To clarify. To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down. To tell what actions are moral or not, humans employ some method that is likewise hopelessly impossible for them to write down. All you can do is write down guidelines, and add some picture examples of cats and dogs. Various rules like utilitarianism are along the lines of “if the eyes have vertical slits, it’s a cat”, which mis-recognizes a lizard as a cat but does not recognize a cat that has closed its eyes. (There is also the practical matter of law-making, where you want to restrict the diversity of moral judgment to something sane, and thus you use principles like ‘if it doesn’t harm anyone else it’s okay’.)
To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down.
Right, but if/when we get to (partial) brain emulations (in large quantities) we might be able to do the same thing for ‘morality’ that we do today to recognize cats using a computer.
Agreed. We may even see how it is that certain algorithms (very broadly speaking) can feel pain etc., and actually start defining something agreeable from first principles. Meanwhile, all that ‘3^^^3 people with dust specks is worse than 1 person tortured’ stuff is to morality as scholasticism is to science. The only value it may have is in highlighting the problem with approximations and with handwavy reasoning: nobody said that the number of possible people is >3^^^3 (which is false), even though such a statement was part of the reasoning and should have been stated, and then rejected, invalidating everything that followed. Or a statement that identical instances matter should have been made, which in itself leads to a multitude of really dumb decisions whereby the life of a conscious robot that has thicker wires in its computer (or uses otherwise redundant hardware) is worth more.
Or a statement that identical instances matter should have been made
Not many people hold the view that if eternal inflation is true then there is nothing wrong with hitting people with hot pokers, since the relevant brain states exist elsewhere anyway. In Bostrom’s paper he could only find a single backer of the view. In talking to many people, I have seen it expressed more than once, but still only in a very small minority of cases. Perhaps not including it in that post looms large for you because you have a strong intuition that it would be OK to torture and kill if the universe were very large, or think it very unlikely that the universe is large, but it’s a niche objection to address.
After all, one could include such a discussion as a rider in every post talking about trying to achieve anything for oneself or others: “well, reading this calculus textbook seems like it could teach you interesting math, but physicists say we might be living in a big universe, in which case there’s no point since brains in all states already exist, if you don’t care about identical copies.”
If there is any nonzero probability that the universe is NOT very large (or the copy counting is a bit subtle about the copies which are effectively encoding state onto coordinates), all you have done is scale all the utilities down, which does not affect any decision.
That’s an incredibly terrible thing to do for our friends who believe themselves to be utilitarians, though, as those people are going to selectively scale down just some of the utilities and then act, in self-interest or otherwise, on the resulting big differences, doing something stupid.
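A minimal sketch of the scaling point (with hypothetical utility numbers): multiplying every utility by the same positive constant never changes which option wins, whereas scaling only some of them can flip the decision.

```python
def best_option(utilities):
    # pick the option whose (dis)utility is least bad
    return max(utilities, key=utilities.get)

u = {"torture": -1e9, "dust specks": -3e6}      # made-up numbers

uniform = {k: 0.001 * v for k, v in u.items()}  # scale everything down
print(best_option(u), best_option(uniform))     # same choice both times

# Selectively scaling down only one of the utilities can change the choice:
selective = dict(u, torture=u["torture"] * 1e-6)
print(best_option(selective))                   # now the answer flips
```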
edit: also, the issue with multiply-counting redundant hardware, and the thick-wired utility monsters in a utilitarianism that does count extra copies, doesn’t go away if the world is big. If you have a solid argument that a utilitarianism which doesn’t count the extra copies separately does not work, that means utilitarianism does not work, which I believe is the case. Morals are an engineered / naturally selected solution to the problem of peer-to-peer intellectual and other cooperation, which requires nodes not to model each other in undue detail, which rules out direct, straightforward utilitarianism. Utilitarianism is irreparably broken. It’s fake reductionism where you substitute one irreducible concept for another.
(or the copy counting is a bit subtle about the copies which are effectively encoding state onto coordinates)
That’s an interesting idea, thanks. Maybe caring about anthropic probabilities or measures of conscious experiences directly would make more sense than caring about the number of copies as a proxy.
If you take that idea seriously and assume that all anthropic probabilities of conscious experiences must sum to 1, then torture vs dustspecks seems to lose some of its sting, because the total disutility of dustspecking remains bounded and not very high, no matter how many people you dustspeck. (That’s a little similar to the “proximity argument”, which says faraway people matter less.) And being able to point out the specific person to be tortured means that person doesn’t have too low weight, so torturing that single person would be worse than dustspecking literally everyone else in the multiverse. I don’t remember if anyone made this argument before… Of course there could be any number of holes in it.
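A rough numerical sketch of that argument, under its own assumptions (every specific number here is invented): if the anthropic weights of the dustspecked experiences must sum to at most 1, the aggregate can never exceed one speck’s worth of disutility, while the pointed-out torture victim keeps a non-negligible weight.

```python
# Invented numbers, only to illustrate the bounded-measure argument.
SPECK_DISUTILITY = 1e-9        # one dust speck, for one experience
TORTURE_DISUTILITY = 1e9       # fifty years of torture
VICTIM_MEASURE = 1e-12         # anthropic weight of the specific victim

def total_speck_disutility(weights):
    # weights: anthropic measures of the dustspecked experiences, summing to <= 1
    return sum(w * SPECK_DISUTILITY for w in weights)

n = 10 ** 6                    # however many people get specked...
weights = [1.0 / n] * n        # ...their weights still sum to 1
print(total_speck_disutility(weights))      # 1e-09, bounded by SPECK_DISUTILITY

print(VICTIM_MEASURE * TORTURE_DISUTILITY)  # 1e-03, vastly larger
```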
Also note that the thicker wires argument is not obviously wrong, because for all we know, thicker wires could affect subjective probabilities. It sounds absurd, sure, but so does the fact that lightspeed is independent of observer speed.
ETA: the first version of this comment mixed up Pascal’s mugging and torture vs dustspecks. Sorry. Though maybe a similar argument could be made for Pascal’s mugging as well.
Thinking about it some more: maybe the key is that it is not enough for something to exist somewhere, just as it is not enough for the output tape in Solomonoff induction to contain the desired output string somewhere within it; the tape should begin with it. (Note that this is a critically important requirement.) If you are using Solomonoff induction (suppose you have an oracle, suppose the universe is computable, and so on), then your model contains not only the laws of the universe but also a locator, and my intuition is that the model with the simplest locator is some very huge length shorter than the next simplest model, so all the models except the one with the simplest locator have to be ignored entirely.
If we require that the locator is somehow present in the whole, then the ultra-distant copies are very different while the nearby copies are virtually the same, and the Kolmogorov complexity of the concatenated strings can be used as the count, not counting nearby copies twice (the thick-wired monster only weighs a teeny tiny bit more).
TBH, I feel, though, that utilitarianism goes in the wrong direction entirely. Morals can be seen as an evolved / engineered solution to peer-to-peer intellectual and other cooperation, essentially. It relies on trust, not on mutual detailed modeling (which wastes computing power), and the actions are not quite determined by the expected state (which you can’t model), even though it is engineered with some state in mind.
edit: also I think that whatever stuff raises the problem with distant copies or MWI is subjectively disproved by its not saving you from brain damage of any kind (you can get drunk, pass out, and wake up with a few fewer neurons). So we basically know something’s screwed up with naive counting of probabilities, or the world is small.
It should lead to severe discounting of the ‘reasoning method’ that arrived at the ‘3^^^3 dust specks > torture’ conclusion without ever coming across the exhaustion-of-states issue.
This is mistaken. E.g. see this post which discusses living in a Big World, as in eternal inflation theories where the universe extends infinitely and has random variation so that somewhere in the universe every possible galaxy or supercluster will be realized, and all the human brain states will be explored.
Or see Bostrom’s paper on this issue, which is very widely read around here. Many people think that our actions can still matter in such a world, e.g. that it’s better to try to give people chocolate than to torture them here on Earth, even if in some ludicrously distant region there are brains that have experienced all the variations of chocolate and torture.
It should lead to severe discounting of the ‘reasoning method’ that arrived at the ‘3^^^3 dust specks > torture’ conclusion without ever coming across the exhaustion-of-states issue.
Even better, to my mind, is to think about the scenario from the ground up and form my own conclusions, rather than start with some intuitive judgment about someone else’s writeup about it and then update that judgment based on things they didn’t mention in that writeup.
If someone hasn’t got a visual cortex, they can’t see, even if they do an insane amount of deliberate reasoning
It’s not clear to me that I correctly understand what you mean here, but given my current understanding, I disagree. All my visual cortex is doing is performing computations on the output of my eyes; if that’s seeing, then anything else that performs the same computations can see just as well.
Even better, to my mind, is to think about the scenario from the ground up and form my own conclusions, rather than start with some intuitive judgment about someone else’s writeup about it and then update that judgment based on things they didn’t mention in that writeup.
The point is that the approach is flawed; one should always learn from mistakes. The issue here is in building an argument which is superficially logical—which conforms to the structure of something a logical, rational person might say, something you might have a logical character in a movie say—but which is fundamentally a string of very shaky intuitions that are only correct if nothing outside the argument interferes, rather than solid steps.
It’s not clear to me that I correctly understand what you mean here, but given my current understanding, I disagree. All my visual cortex is doing is performing computations on the output of my eyes; if that’s seeing, then anything else that performs the same computations can see just as well.
In theory. In practice it takes a ridiculous number of operations, and you can’t Chinese-room vision without a slowdown by a factor of billions: decades for a single cat recognition versus a fraction of a second, and that’s if you’ve got an algorithm for it.
I disagree with pretty much all of this, as well as with most of what seem to be the ideas underlying it, and don’t see any straightforward way to achieve convergence rather than infinitely ramified divergence, so I suppose it’s best for me to drop the thread here.
I think the stupidity of utilitarianism is the belief that morality is about states, rather than about dynamic processes and state transitions.
“State” doesn’t have to mean “frozen state” or something similar; it could mean “state of the world/universe”. E.g. “a state of the universe” in which many people are being tortured includes the torture process in its description. I think this is how it’s normally used.
Well, if you are to coherently take it that the transitions have value, rather than the states, then you arrive at a morality that regulates the transitions that the agent should try to make happen, ending up with a morality that is more about means than about ends.
I think it’s simply that pain feels like a state rather than a dynamic process, and so utilitarianism treats it as a state, while doing something feels like a dynamic process, so utilitarianism doesn’t treat it as a state and is only concerned with the difference in utilities.