The idea is that we can take a finite list of items like this
Torture for 50 years
Torture for 40 years
...
Torture for 1 day
...
Broken arm
Broken toe
...
Papercut
Sneeze
Dust Speck
Presented with such a list, you must insist that two items on it are incomparable. In fact, you must claim that some item is incomparably worse than the next item. I don’t think that any number of broken toes is better than a broken arm; a million broken toes is clearly worse. Follow this chain of reasoning for each pair of items on the list. Claiming incomparability is a claim that no matter how much I try to subdivide my list, one item will still be infinitely worse than the next.
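The chain argument can be made concrete with a toy model. All the harms and ratios below are made-up illustrative numbers, not claims about actual disutilities; the point is only that if every adjacent pair is comparable by some finite ratio, the ratios compose, so the endpoints are comparable too.

```python
# Toy model of the comparability chain. The ratios are hypothetical:
# "how many of the next (lesser) item is as bad as one of this item".
harms = ["torture 50y", "torture 1 day", "broken arm",
         "broken toe", "papercut", "dust speck"]

adjacent_ratios = [20_000, 500, 100, 1_000, 10_000]  # assumed, finite

# If each adjacent pair is comparable by a finite ratio, they multiply:
ratio = 1
for r in adjacent_ratios:
    ratio *= r

# So some finite number of dust specks is worse than 50 years of torture.
print(f"1 x {harms[0]} ~ {ratio:,} x {harms[-1]}")
```

Denying the conclusion therefore requires at least one of the `adjacent_ratios` to be infinite, i.e. one adjacent pair to be genuinely incomparable.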
The idea of bouncing back is also not useful. Firstly, it isn’t a sharp boundary: you can mostly recover but still be somewhat scarred. Secondly, you can replace an injury with something that takes twice as long to bounce back from, and the two still seem comparable. Something that takes most of a lifetime to bounce back from is comparable to something that you don’t bounce back from. This breaks only if you assume immortality, or that bouncing back 5 seconds before you drop dead is of such morally overwhelming significance that doing so is incomparable to not doing so.
Broken arms vs toes: I agree that any number of broken toes wouldn’t be better than a broken arm. But that’s the point: these are _comparable_.
Incomparable breaks occur where you put the ellipses in your list. Torture for 40-50 years vs torture for 1 day is qualitatively distinct. I imagine a human being can bounce back from torture for 1 day: have scars, but manage to prosper. That would be hellishly more difficult after torture for 40 years. We could count torture by the day, 1 to (365*40), and there would be a point of no return in that range: a duration of torture a person can’t bounce back from. It would depend on the person, on what happens during and after, etc., which is why it’s not possible to compute that day. That doesn’t mean we should ignore how humans work.
Here’s the main beef I have with Dust Specks vs Torture: statements like “1 million broken toes” or “3^^^3 dust specks” disregard human experience. That many dust specks on one person is torture. One on each person is _practically nothing_. I’m simulating people experiencing these, and the result I arrive at is this: choose the best outcome from (0 utils * 3^^^3) vs (-3^^^3 utils). This is easy to answer.
You may say “but 1 dust speck on a person isn’t 0 utils, it’s a very small negative utility” and yes, technically you’re correct. But before doing the sum over people, take a look at the people. *Distribution matters.*
Humans don’t work like linear sensory devices, so utility can’t work linearly either.
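A minimal sketch of why the distribution over people matters, assuming (hypothetically) a superlinear per-person response to specks. The exponent and counts are illustrative only:

```python
def disutility(specks):
    # Assumed superlinear per-person response (illustrative):
    # one speck is negligible; many specks on one person compound
    # toward torture-level suffering.
    return specks ** 2

n = 1_000_000  # total specks to distribute (illustrative)

spread = n * disutility(1)     # one speck each, on n people
concentrated = disutility(n)   # all n specks on one person

# Same total specks, wildly different aggregate disutility:
print(spread)        # 1,000,000
print(concentrated)  # 1,000,000,000,000
```

Under a linear per-person response the two distributions would tie; any superlinearity at all makes the concentrated case vastly worse, which is exactly the “look at the people before summing” point.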
I like this insight: not only nonlinear but actually discontinuous. Some marginal instants of torture are hugely negative, mixed in with others that are more mildly negative. This is due to something that’s often forgotten in these discussions: the ongoing impact of a momentary experience.
Being “broken” by torture may make it impossible to ever recover enough for any future experiences to be positive. There may be a few quanta of brokenness, but it’s not the case that every marginal second is all that bad; only some of them are.
To me, all the talk about utilities seems broken at a fundamental level. An implication of Timeless Decision Theory should be that an agent running TDT calculates utilities as an integral over time, so final values don’t have any explicit time dependence.
This fixes a lot of things, especially when combined with the texture of human experience. Utility should be a function of the states of the world it affects, integrated over time. Since we don’t get to make that calculation in detail, we can approximate by choosing the kinds of actions that minimize future bad impacts and maximize good ones.
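The time-integral view can be sketched numerically. Here I assume a hypothetical exponential recovery curve after an injury; the half-lives and lifetime are illustrative, and the Riemann sum stands in for the integral we can’t actually compute:

```python
import math

def wellbeing(t, recovery_halflife):
    """Assumed well-being at time t (years) after an injury of
    initial severity -1.0, recovering exponentially."""
    return -math.exp(-t * math.log(2) / recovery_halflife)

def total_disutility(recovery_halflife, lifetime=80.0, dt=0.01):
    # Riemann-sum approximation of the integral over the remaining life.
    t, total = 0.0, 0.0
    while t < lifetime:
        total += wellbeing(t, recovery_halflife) * dt
        t += dt
    return total

# An injury you bounce back from quickly costs little when integrated
# over a lifetime; one with a decades-long recovery dominates the whole
# trajectory, approaching the case of never bouncing back at all.
print(total_disutility(0.1))   # fast recovery: small integrated cost
print(total_disutility(40.0))  # most-of-a-lifetime recovery: huge cost
```

The integral also captures the earlier “bouncing back” point: as the recovery half-life grows toward the remaining lifetime, the integrated cost approaches that of a permanent injury continuously, with no magic boundary.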
This is the only view of utility I can think of that preserves the “wisdom of the elders” point of view. It’s strange how often the elders turn out to be right as one ages, in saying “only care for the ones caring for you”, “focus on bettering yourself and not wallowing in bad circumstances”, etc. These are the kinds of actions that incorporate the notion that life is ongoing. One only realizes these in an experiential way after having access to dozens of years of memories to reflect on.
Consequentialism (and utilitarianism as well, IMO) is broad enough to incorporate both the necessity of universality and the view of virtue ethics, if one thinks in the timeless utility perspective.
What if I make each time period in the “...” one nanosecond shorter than the previous?
You must then believe that there is some length of time, t > most of a day, such that everyone in the world being tortured for t minus 1 nanosecond is better than one person being tortured for t.
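The sheer length of this chain can be computed directly. A sketch, counting the 1-nanosecond steps between one day and 50 years (365-day years assumed, leap days ignored):

```python
NS_PER_DAY = 24 * 60 * 60 * 10**9  # nanoseconds in a day

one_day = NS_PER_DAY
fifty_years = 50 * 365 * NS_PER_DAY

steps = fifty_years - one_day
print(f"{steps:,} steps of 1 ns each")  # ~1.6 * 10**18 steps

# Denying comparability of the endpoints means asserting that at some
# single step t -> t - 1 ns in this chain, "everyone tortured for
# t - 1 ns" flips from being worse to being better than "one person
# tortured for t".
```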
Suppose there were a strong clustering effect in human psychology, such that less than a week of torture left people’s minds in one state, and more than a week left them broken. I would still expect the possibility of intermediate cases on the borderline. In something as messy as human psychology, I would expect no perfectly sharp black-and-white cutoff. If we zoom in far enough, we find that the space of possible quantum wavefunctions is continuous.
There is a sense in which specks and torture feel incomparable, but I don’t think this is your sense of incomparability; to me it feels like moral uncertainty about which huge number of specks to pick. I would also say that “don’t torture anyone” and “don’t commit atrocities based on convoluted arguments” are good ethical injunctions. If you think that your own reasoning processes are not very reliable, and you think philosophical thought experiments rarely happen in real life, then implementing the general rule “if I think I should torture someone, go to the nearest psych ward” is a good idea. However, I would want a perfectly rational AI which never made mistakes to choose torture.
we find the need for a weird cut-off point, like a broken arm
For the cut-off point on a broken arm, I recommend the elbow [not a doctor].
Suppose there were a strong clustering effect in human psychology, such that less than a week of torture left people’s minds in one state, and more than a week left them broken. I would still expect the possibility of intermediate cases on the borderline. In something as messy as human psychology, I would expect no perfectly sharp black-and-white cutoff. If we zoom in far enough, we find that the space of possible quantum wavefunctions is continuous.
I agree! You’ve made my point for me: it is precisely this messiness which grants us continuity on average. Some people will take longer than others to suffer the qualitatively incomparable damage of torture, and so the expected impact of any significant torture will have a component at the severity level of 50 years’ torture. Hence: comparable (in expectation).
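A sketch of the continuity-on-average point: even if each individual either breaks or doesn’t (a discrete outcome), spreading the breaking points across the population makes the *expected* disutility a smooth function of duration. The logistic distribution, mean, spread, and costs here are all assumed for illustration:

```python
import math

BROKENNESS_COST = 10_000.0  # assumed one-time cost of being "broken"

def prob_broken(days, mean=7.0, spread=2.0):
    # Assumed logistic distribution of breaking points across people.
    return 1.0 / (1.0 + math.exp(-(days - mean) / spread))

def expected_disutility(days):
    # Per-day cost plus the population-averaged brokenness cost.
    return days + prob_broken(days) * BROKENNESS_COST

# No jump: the expected cost rises smoothly through the "one week"
# region, so any significant torture already carries some weight at
# the 50-years-torture severity level.
for d in (5, 6, 7, 8, 9):
    print(d, round(expected_disutility(d), 1))
```

Taking the expectation over people is what replaces each individual’s step function with a smooth curve, which is the sense in which the messiness restores comparability.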