I still have trouble seeing where people are coming from on this. My moral judgment software does not accept 3^^^3 dust specks as an input. And I don’t have instructions for dealing with such cases by assigning a dust speck a value of −1 util and assigning torture a utility that is very low but still greater than −3^^^3 utils. I recognize my brain is just not equipped to deal with such numbers, and I am comfortable adjusting my empirical beliefs involving incomprehensibly large numbers in order to compensate for bias. But I am not comfortable adjusting my moral judgments in this way—because while I have a model of an ideally rational agent, I do not have a model of an ideally moral agent, and I am deeply skeptical that one exists. In other words, I recognize my ‘utility function’ is buggy, but my ‘utility function’ says I should keep the bugs, since otherwise I might no longer act in the buggy way that constitutes ethical behavior.
The claim that the answer is “obvious” is troubling.
I originally thought the answer was more obvious if you follow Eliezer’s suggestion and start doing the math like it’s a physics problem. However, when I continued following up, I found that, interestingly, the problem has even more lessons and can be related back to other posts by Eliezer and Yvain.
To begin considering the physical implications of generating 3^^^3 motes of dust, I’ll quote part of my post from the previous thread:
However, 3^^^3 is incomprehensibly bigger than any of that.
You could turn every cubic Planck length in the observable universe into a speck of dust. At Answerbag’s 2.5 x 10^184 cubic Planck lengths, that’s still not enough dust. http://www.answerbag.com/q_view/33135
At this point, I thought maybe another universe made of 10^80 computronium atoms is running universes like ours as simulations on individual atoms. That means 10^80 x 2.5 x 10^184 cubic Planck lengths of dust. But that’s still not enough dust: 2.5 x 10^264 specks of dust is still WAY less than 3^^^3.
At this point, I considered checking whether I could get enough dust specks if I literally converted everything in all Everett branches since the Big Bang into dust, but my math abilities fail me. I’ll try coming back to this later.
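For a rough sense of the scale gap, here is a quick back-of-the-envelope sketch (the 2.5 x 10^184 Planck-volume figure is the Answerbag number above; the rest is just Knuth up-arrow arithmetic, and the script is purely illustrative):

```python
import math

planck_volumes = 2.5e184                  # cubic Planck lengths in the observable universe
simulated_specks = 1e80 * planck_volumes  # one speck-universe simulated on each of 10^80 atoms
print(f"{simulated_specks:.1e}")          # ~2.5e264 -- a 265-digit number

tower_height = 3 ** (3 ** 3)              # 3^^3 = 3^27 = 7,625,597,484,987
print(tower_height)

# 3^^4 = 3^(3^^3) is far too large to print, but its digit count is easy:
digits_of_3_up_up_4 = tower_height * math.log10(3)
print(f"~{digits_of_3_up_up_4:.2e} digits")   # ~3.64e12 digits

# 3^^^3 = 3^^(3^^3) is a power tower of 3s that is 7,625,597,484,987 levels
# tall.  Even 3^^4, one rung above 3^^3, dwarfs 2.5e264, so no physical
# inventory of specks comes anywhere near 3^^^3.
```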
Essentially, if you treat it as a physics dilemma, and not an ethics dilemma, you realize very quickly that you are essentially converting multiple universes not even into disutility, but into dust to fuel disutility. Unless whatever inflicts the disutility is somehow epiphenomenal and does not obey physics, that many specks/speck equivalents becomes a physics problem rather rapidly regardless of what you do.
If you then recast the problem as “I can either have one random person be tortured, or I can have countless universes, many containing life, turned into dust that will be used to hurt the residents of countless other universes,” then the ethics problem becomes much simpler. I have a prohibition against torture, but I have a substantially larger and more thorough prohibition against destroying inhabited planets, stars, or galaxies.
I understand there is a possible counter to this in “Well, what of the least convenient world, where the second law of thermodynamics does not apply, and generating this much dust WON’T destroy universes; it will merely inconvenience people.”
But then there are other problems. If 1 out of every googol people loses their life because of the momentary blindness from rubbing their eyes, then by choosing specks I’m effectively telling the guy that consigning countless people to death is a better choice than torturing one. That’s not something I’m willing to accept either.
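(A quick illustrative sketch of that arithmetic, reusing the one-in-a-googol rate from above and substituting 3^^4 for 3^^^3, which is already a vast underestimate:)

```python
import math

googol = 10 ** 100
tower_height = 3 ** 27                                # 3^^3 = 7,625,597,484,987
digits_of_3_up_up_4 = tower_height * math.log10(3)    # 3^^4 has ~3.64e12 digits

# Dividing the number of speck victims by a googol removes only 100 digits:
digits_of_deaths = digits_of_3_up_up_4 - math.log10(googol)
print(f"~{digits_of_deaths:.2e} digits' worth of deaths")  # still ~3.64e12 digits
```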
You could attempt to FURTHER counter this by saying “Well, what of the least convenient world, where this dust ISN’T going to kill people; it will merely inconvenience people! That’s it!”
But then there are other problems still. If 1 out of every googol people simply receives a substantial injury because of the momentary blindness from rubbing their eyes, then…
“ERGH! No injuries! Just magical inconvenience dust! This disutility has no further physical implications!”
As a side note, I think belief in belief may relate. I’m willing to accept a world which is inconvenient, to a point. But the person who you are arguing with is acting extremely similarly to someone who has an invisible dragon in his garage. So if I don’t believe in the invisible dragon, why would I trust this person?
But now suppose that we say to the claimant, “Okay, we’ll visit the garage and see if we can hear heavy breathing,” and the claimant quickly says no, it’s an inaudible dragon. We propose to measure carbon dioxide in the air, and the claimant says the dragon does not breathe. We propose to toss a bag of flour into the air to see if it outlines an invisible dragon, and the claimant immediately says, “The dragon is permeable to flour.”
Now, it is further possible to say “Your possible skepticism is not an issue! This is the least convenient possible world, so before giving you the choice he hit you with a beam which generates complete trust in what he says. You are completely sure the threat is real.”
But that doesn’t help. It’s easy to say “The threat of 3^^^3 dust specks is real.” I was taking that as a given BEFORE I started considering the problem harder. It was only after considering the problem harder that I realized it is potentially possible that part is fake, and that this is going to involve an iterated loop.
(As a brief explanation of what makes the mistake even worse on the small-risk side: any scenario even slightly similar to this almost always involves an iterated loop, where you have to torture another person, and then another, and then another, until you have consigned the entire Earth to a lifetime of torture against a threat that was never real in the first place.)
I really am going to have to give this problem more thought to try to lay out the implications. It’s a really good problem that way.
Edit: Eliezer didn’t write Least Convenient Possible World. Yvain did. Fixed.
As a side note, I think belief in belief may relate. I’m willing to accept a world which is inconvenient, to a point. But the person who you are arguing with is acting extremely similarly to someone who has an invisible dragon in his garage.
No. Just no. A factual claim and a hypothetical thought experiment are not the same thing. When you object to the details of the hypothetical thought experiment, and yet aren’t convinced by any modifications correcting those details either, then you’re simply showing that this isn’t your true objection.
So many people seem to be trying to find ways to dance around the plain and simple issue of whether the multiplication of small disutilities should be considered morally equivalent to (or worse than) a single humongous disutility.
On my part I say simply: YES. Torturing a person for 50 years is morally better than inflicting the momentary annoyance of a single dust speck on each of 3^^^3 people. I don’t see much sense in any arguments more complicated than a multiplication.
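Written out, the multiplication I mean is nothing more than this (ε and T are placeholder symbols of my own, not anything given in the original problem):

```latex
\[
  \underbrace{3\uparrow\uparrow\uparrow 3}_{N}\,\varepsilon \;>\; T
  \quad\Longleftrightarrow\quad
  N \;>\; \frac{T}{\varepsilon},
\]
% where \varepsilon > 0 is the disutility of one speck and T is the large but
% finite disutility of 50 years of torture.  T/\varepsilon is an ordinary
% finite number, and N = 3^^^3 exceeds it, so the specks total is worse.
```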
As simple and plain as that. As for people who’ve already written their bottom line differently, I am sure they can find whatever excuses they want to explain it. But I just urge them to introspect for a second and actually see whether that bottom line was affected at all by the argument they placed above it.
So many people seem to be trying to find ways to dance around the plain and simple issue of whether the multiplication of small disutilities should be considered morally equivalent to (or worse than) a single humongous disutility.
On my part I say simply: YES. Torturing a person for 50 years is morally better than inflicting the momentary annoyance of a single dust speck on each of 3^^^3 people. I don’t see much sense in any arguments more complicated than a multiplication.
I agree that this is the critical point, but you present this disagreement as if multiplying were the default approach and the burden of proof fell entirely on any different evaluation method.
Myself, however, I’ve never heard a meaningful, persuasive argument in favour of naive utilitarian multiplication in the first place. I do believe that there is some humongous x_John above which it will be John’s rational preference to take a 1/x_John chance of torture rather than suffer a dust speck. But I do not believe that a dust speck in Alice’s eye is abstractly commensurable with a dust speck in Bob’s eye, or Alice’s torture with Bob’s torture, and a fortiori I also do not believe that 3^^^3 dust specks are commensurable with one random torture.
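To be explicit about what I mean by x_John, under a standard expected-utility reading (ε and T here are placeholder symbols of my own for the disutility John assigns to a speck and to the torture, respectively):

```latex
\[
  \frac{T}{x_{\mathrm{John}}} \;<\; \varepsilon
  \quad\Longleftrightarrow\quad
  x_{\mathrm{John}} \;>\; \frac{T}{\varepsilon},
\]
% i.e. once the probability of torture is small enough, John rationally
% prefers the gamble to a certain speck.  This is a claim about one person's
% preferences, not a licence to add up harms across 3^^^3 different people.
```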
If John has to make a choice between the two (assuming he isn’t one of the affected people), he needs to consider the two possible worlds as a whole and decide which one he likes better, and he might have all sorts of reasons for favouring the dust speck world—for example, he might place some value on fairness.
I already came to that conclusion (Torture) when I posted on October 7th in the previous thread. When I was thinking about the problem again on October 11th, I didn’t want to just repeat the exact same cached thoughts, so I tried to think, “Is there anything else about the problem I’m not thinking about?”
And then I thought, “Oh look, concepts similar to what other people mentioned in blog posts I’ve read. I’ll use THEIR cached thoughts. Aren’t I wise, parroting back the words of expert rationalists?” But that’s a horrible method of thinking that won’t get me anywhere.
Furthermore, I ended my post with “I’m going to have to give this more thought,” which is a stupid way to end a post. If it needs more thought, it needs more thought, so why did I post it?
So actually, I agree with your downvote. In retrospect, there are several reasons why that is actually a bad post, even though it seemed to make sense at the time.
For clarity: I didn’t downvote you.
Thank you for the clarification about that.
Either way, I’ll retract it but not blank it: checking other threads seems to indicate that blanking is inappropriate, because it leaves people wondering what was said, but it should still be retracted.