For those who choose torture: if the question were instead framed as follows, would you still choose torture?
“Assuming you know your lifespan will be at least 3^^^3 days, would you choose to experience 50 years' worth of torture, inflicted a day at a time at intervals spread evenly across your lifespan starting tomorrow, or one dust speck a day for the next 3^^^3 days of your life?”
Clever, but not, I think, very illuminating -- 3^^^3 is just as fantastically, intuition-breakingly huge as it ever was, and using the word “tomorrow” adds a nasty hyperbolic discounting exploit on top of that. All the basic logic of the original still seems to apply, and so does the conclusion: if a dust speck is in any way commensurate with torture (a condition assumed by the OP, but denied by enough objections that I think it’s worth pointing out explicitly), pick Torture, otherwise pick Specks.
One of the frustrating things about the OP is that most of the objections to it are based on more or less clever intuition pumps, while the post itself is essentially making a utilitarian case for ignoring your intuitions. Tends to lead to a lot of people talking past each other.
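For readers unfamiliar with the notation, here is a minimal sketch of Knuth's up-arrow recursion (the function name is my own), just to make concrete why 3^^^3 is intuition-breaking: 3^^3 is already 3^27, and 3^^^3 is a power tower of that many threes.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^(n) b: one arrow is ordinary exponentiation,
    each additional arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = 3^^(3^^3) is a power tower of 7,625,597,484,987 threes,
# hopelessly beyond evaluation, which is exactly the point.
```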
I’ve heard this rephrasing before, but it means less than you might think. Human instinct tells us to postpone the bad as much as possible. Put aside the dust speck issue for the moment and compare torture to torture: I’d be tempted to choose 1000 years of torture over a single year of torture, if the 1000 years were a few million years in the future but the single year had to start now.
Does that mean I must concede that 1000 years of torture are less bad than a single year? Surely not. It just illustrates human hyperbolic discounting.
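To make the discounting point concrete, here is a toy illustration of the standard hyperbolic form, perceived value ≈ value / (1 + k·delay). The parameter k and the time units are assumptions for illustration, not fitted to anything.

```python
# Toy hyperbolic discounting: perceived badness = badness / (1 + k * delay).
# k and the units are illustrative assumptions, not empirical values.
def perceived_badness(badness, delay_years, k=1.0):
    return badness / (1 + k * delay_years)

year_now = perceived_badness(badness=1, delay_years=0)               # 1 year of torture, starting now
millennium_later = perceived_badness(badness=1000, delay_years=3e6)  # 1000 years, millions of years away

print(year_now, millennium_later)  # 1.0 vs ~0.0003: the far-future option *feels* much smaller
```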
I would almost undoubtedly choose a dust speck a day for the rest of my life. So would most people.
The question remains whether that would be the right choice… and, if so, how to capture the principles underlying that choice in a generalizable way.
For example, in terms of human intuition, it’s clear that the difference between suffering for a day and suffering for five years plus one day is not the same as the difference between suffering for fifty years and suffering for fifty-five years, nor between zero days and five years. The numbers matter.
But it’s not clear to me how to project the principles underlying that intuition onto numbers that my intuition chokes on.
Could it be that the 50 years' worth of torture would also amount to more than a dust speck's worth of daily discomfort, from the psychological trauma of having been tortured, for the remaining 3^^^3 days?
What if the 50 years of torture come at the end of the lifespan?
I still would rather just take the dust speck now and then, though. Nothing forbids me from having a function more nonlinear than 3^^…^3 with n up-arrows: as a messily wired neural network, I can easily implement imprecise algebra on numbers far beyond any up-arrow notation, or even numbers x, y, z… such that any finite multiple of x is less than y, any finite multiple of y is less than z, and so on. Infinities are not hard to implement at all. Consider comparisons on arrays done lexicographically, so that a > b whenever a[1] > b[1], with a[2] only breaking ties. I’m using strings when I need that property in software, so that I can always make some value that takes precedence.
edit: Note that one could think of the comparison between values in the above example as a comparison of a[1]*bignumber + a[2], which may seem sensible, then learn of the up-arrows, get mind-boggled, and reason that the up-arrows in a[2] will exceed bignumber. But they will never change the outcome of the comparison under the actual logic, where a[1] always matters more than a[2].
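A minimal sketch of the kind of comparison described above: Python's built-in tuple ordering happens to be lexicographic, which gives exactly the "a[1] always matters more than a[2]" property (indices below are 0-based, unlike in the comment).

```python
# Disutility as a pair: element 0 dominates, element 1 only breaks ties.
# Tuples compare lexicographically, so no value in the second slot,
# however astronomically large, can ever outweigh the first slot.
torture = (1, 0)         # one unit of the dominant kind of harm
specks = (0, 10 ** 100)  # an enormous count of the subordinate kind

print(torture > specks)                     # True: the dominant component wins
print((0, 10 ** 100) < (0, 10 ** 100 + 1))  # True: the second slot still orders ties
```

String comparison is lexicographic in the same way, which is presumably why strings work for this purpose in software.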
Sure, if I factor in the knock-on effects of 50 years of torture (or otherwise ignore the original thought experiment and substitute my own) I might come to different results.
Leaving that aside, though, I agree that the nature of my utility function over suffering is absolutely relevant here, and it’s entirely possible for that function to be such that BIGNUMBER x SMALLSUFFERING is worth less than SMALLNUMBER x BIGSUFFERING even if BIGNUMBER >>>>>> SMALLNUMBER.
The key word here is possible though. I don’t really know that it is.
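For concreteness, here is one hypothetical shape such a function could take, a sketch under invented parameters rather than a claim about anyone's actual values: let the total disutility of repeated harms saturate at a cap that grows with the severity of each individual harm, so that piling up arbitrarily many tiny harms never catches up to a modest number of severe ones.

```python
import math

def total_disutility(severity, count, cap_scale=1000.0):
    """Hypothetical bounded aggregator (illustration only): repeated harms of a
    given severity saturate at cap = cap_scale * severity**2 instead of adding
    up linearly forever."""
    cap = cap_scale * severity ** 2
    return cap * (1 - math.exp(-count * severity / cap))

specks = total_disutility(severity=1e-9, count=10 ** 100)   # 10**100 stands in for 3^^^3
torture = total_disutility(severity=1.0, count=50 * 365)    # 50 years of torture-days

print(specks)   # bounded above by 1e-15, no matter how large count gets
print(torture)  # roughly 1000, vastly larger
```

Under this made-up shape, BIGNUMBER x SMALLSUFFERING stays below SMALLNUMBER x BIGSUFFERING however big BIGNUMBER gets; whether any such shape actually describes what we care about is exactly the open question above.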