[Question] Discomfort Stacking
I’m pretty new here so apologies if this is a stupid question or if it has been covered before. I couldn’t find anything on this topic so thought I’d ask the question before writing a full post on the idea.
If we believe that discomfort can be quantified and ‘stacked’ (e.g. X people with specks of dust in their eye = 1 death), is there any reason why this has to scale linearly from all perspectives?
What if the total can be less than the sum of its parts depending on the observer?
Picture a dynamic logarithmic scale of discomfort stacking with a ‘hard cap’: every new instance contributes less and less to the total, to the point where the graph flatlines.
Each discrete level of discomfort has a different starting value—so an infinite number of something extremely mild could never amount to the value of even a single instance of extreme torture.
Every individual instance is still ‘worth’ the full n=1 level of discomfort – but, when stacked, the aggregate value shifts dynamically, though only to an observer looking at the entire set of cumulative instances.
No matter how many people have a speck of dust in their eye – to an outside observer it would never amount to the cumulative discomfort of even one single death, despite every individual feeling the full extent of it as if they were the only one.
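As a purely illustrative sketch of this kind of model – the tier names, starting values, and caps below are invented numbers for the example, not a claimed calibration:

```python
import math

# Illustrative only: per-tier (starting value for a single instance, hard cap
# that the stacked total can never exceed). All numbers are invented.
TIERS = {
    "dust_speck":      (1.0,          50.0),
    "paper_cut":       (100.0,        5_000.0),
    "broken_leg":      (10_000.0,     500_000.0),
    "extreme_torture": (1_000_000.0,  1_000_000_000_000.0),
}

def stacked_badness(tier: str, n: int) -> float:
    """Total badness of n instances of one discomfort tier.

    A single instance (n = 1) is worth the tier's full starting value, but the
    aggregate grows only logarithmically in n and flatlines at the tier's cap.
    """
    start, cap = TIERS[tier]
    raw = start * (1 + math.log(n))   # logarithmic stacking
    return min(raw, cap)              # hard cap: this is where the graph flatlines

# With these made-up numbers, no quantity of dust specks ever reaches the
# starting value of a single instance of extreme torture:
print(stacked_badness("dust_speck", 10**1000))   # capped at 50.0
print(stacked_badness("extreme_torture", 1))     # 1000000.0
```

With these invented numbers, each tier’s cap happens to sit below the starting value of the next tier up, so no quantity of a lesser discomfort can ever outweigh a single instance of a greater one – one possible way of realising the ‘different starting value’ idea above.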
A number of us (probably a minority around here) don’t think “stacking” or any simple, legible aggregation function is justified – not within an individual over time, and certainly not across individuals. There is a ton of nonlinearity and relativity in how we perceive and value changes in world-state.
I’m unsure myself.
I wouldn’t want to simply avoid the question as it’s very possible that it could become a practical problem at some point in the future.
Do you think a logarithmic scale makes more sense than a linear scale?
I kind of doubt it. Practical problems will have complexity and details that overwhelm this simple model, making it near-irrelevant. Alternatively, it may be worth trying to frame a practical decision where this matters – one that an individual or small group could make (so as not to have to abstract away crowd and public-choice issues).
Yes, but it probably doesn’t fix the underlying problem that quantifications are unstable and highly variable across agents.
For clarity – I do agree that a simple stacking model like this has its flaws, as each individual ‘unit’ of discomfort / pain will never be exactly equal in practice.
If you have the time, I’d like you to read my response to CstineSublime’s comment.
Regardless of whether or not some version of this thought experiment ever becomes a serious practical problem, if there were some non-zero chance of a situation where a decision like this had to be made, wouldn’t it make sense to have some method of comparison?
In reality, every ‘stacking theory’ requires a bunch of assumptions, but if the decision had to be made for some reason, do you think a logarithmic scale is a more appropriate one to use?
I think that insisting on comparing unmeasurable and different things is an error. If forced to do so, you can make up whatever numbers you like, and nobody can prove you wrong. If you make up numbers that don’t fully contradict common intuitions based on much-smaller-range and much-more-complicated choices, you can probably convince yourself of almost anything.
Note that on smaller, more complicated, specific decisions, there are many that seem to be inconsistent with this comparison: some people accept painful or risky surgery over chronic annoyances, some don’t. There are extremely common examples of failing to mitigate pretty serious harm for distant strangers, in favor of mild comfort for oneself and closer friends/family (as well as some examples of the reverse). There are orders of magnitude in variance, enough to overwhelm whatever calculation you think is universal.
Assuming that this article is a reaction to “Torture vs. Dust Specks”, the hypothetical number of people suffering from dust specks was specified as 3^^^3, which in practice is an unimaginably large number. Big numbers such as “the number of particles in the entire known universe” are not sufficient even to describe its number of digits. Therefore, using a logarithmic scale changes nothing.
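For concreteness (my own arithmetic, in Knuth’s up-arrow notation): taking a logarithm of a power tower merely removes one level from the tower, so

\[
\log_3\bigl(3\uparrow\uparrow\uparrow 3\bigr)
= \log_3\bigl(3\uparrow\uparrow(3\uparrow\uparrow 3)\bigr)
= 3\uparrow\uparrow\bigl(3\uparrow\uparrow 3 - 1\bigr)
= 3\uparrow\uparrow 7{,}625{,}597{,}484{,}986 ,
\]

which is still a tower of threes nearly eight trillion levels high. The logarithm alone does nothing to let a single torture outweigh it; only the cap does.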
Logarithmic scale with a hard cap is an inelegant solution, comparable to a linear scale with a hard cap.
What you probably want instead is some formula like in the theory of relativity, where the speed of a rocket approaches but never reaches a certain constant c. For example, you might claim that if the badness of any specific thing is X, then the badness of this thing happening even to a practically infinite number of people is still only approaching some finite value C*X. (Not sure if C is constant across different kinds of suffering.)
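One concrete curve with that shape – a guess at a formula, not something the comment specifies – would be, for n people each suffering something of badness X:

\[
B(n) \;=\; C\,X\,\bigl(1 - e^{-n/C}\bigr),
\qquad
B(1) \approx X \ \text{for large } C,
\qquad
\lim_{n\to\infty} B(n) \;=\; C\,X .
\]

Unlike a hard cap, the marginal badness of each additional sufferer never reaches zero here; it just shrinks exponentially.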
That seems like a nice justification for scope insensitivity. We are not insensitive, it’s just that saving 2,000 birds or saving 200,000 birds really has approximately the same moral value!
The problem with this justification is what qualifies as the “same kind of suffering”. Suppose that infinite people getting a dust speck in their eyes aggregates into 1000 units of badness. If instead an infinite number of people get a dust speck in their left eyes, and an infinite number of different people get a dust speck in their right eyes, does this aggregate into 1000 or 2000 units of badness, and why? What about dust specks vs sand specks?
Or is this supposed to aggregate over different kinds of suffering? So even an almost infinite number of people, each one mildly discomforted in a unique way, are a less bad outcome than one person suffering horribly?
...in short, it is not enough to say “in this specific scenario, I would define the proper way to calculate utility this way”; you should provide a complete theory, and then see how well it works in other scenarios.
(Also, you need to consider practically infinitely small numbers of people—that is, people suffering a certain fate with a microscopically tiny probability.)
Reality is structured such that there tend to be an endless number of (typically very complicated) ways of increasing a probability by a tiny amount. The problem with putting a hard cap on the desirability of some need or want is that the agent will completely disregard that need or want in order to affect the probability of a need or want that is not capped (e.g., the need to avoid people being tortured), even if that effect is extremely small.
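A hedged sketch of that failure mode, reusing the invented speck/torture numbers from the earlier snippet: once the capped term is saturated, its marginal weight is exactly zero, so an arbitrarily small shift in the probability of the uncapped (or much-higher-capped) harm dominates the decision.

```python
import math

SPECK_START, SPECK_CAP = 1.0, 50.0     # invented numbers, as before
TORTURE_BADNESS = 1_000_000.0          # effectively uncapped by comparison

def speck_total(n: int) -> float:
    return min(SPECK_START * (1 + math.log(n)), SPECK_CAP)

def expected_badness(n_specks: int, p_torture: float) -> float:
    return speck_total(n_specks) + p_torture * TORTURE_BADNESS

# Option A: the status quo.
a = expected_badness(n_specks=10**1000, p_torture=0.500000)
# Option B: dust-speck astronomically many *more* people, in exchange for a
# one-in-a-million reduction in the probability of the torture.
b = expected_badness(n_specks=10**100_000, p_torture=0.499999)

print(b < a)   # True: the saturated speck term cannot register the extra specks
```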
I’m confused – is the death-to-discomfort comparison based on the cumulative grief and despair that the loved ones and friends of the person who died might experience? Or are you suggesting that death is a superlatively uncomfortable event for the individual who is dying?
I can’t see a way of making discomfort fungible with death, at least partly because experiencing discomfort requires someone to go on living.
I suppose you’re right.
Maybe ‘death’ was a poor example as it inherently leads us to a state of relief from discomfort. If we instead take the example of ‘extreme torture’, then it makes more sense to compare the two.
The ‘discomfort’ I was referring to was more from a ‘physical sensation’ perspective rather than any second-order effects.
Imagine these experiences occur in a closed system with no influence on the outside world. Each person has been brought into existence by some higher power specifically for the purposes of this experiment. They have no family, no friends, and are genetically identical.
Imagine 1,000,000,000 participants with a single rational observer. The observer is forced by the higher power to make a choice – so some method of comparison is required.
Would it make more sense for the observer to choose for every single one of the participants to be burdened with a speck of dust in their eye, or for one single participant to be subjected to ‘extreme torture’?
Is there any point where increasing the number changes your mind?
For me it doesn’t matter how many participants there are – the option of torture should never be taken.
The ‘logarithmic stacking theory’ allows this to work mathematically, while a linear model does not.
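With the invented numbers from the sketch above (per-speck badness 1, a single torture at 10^6), the arithmetic for a billion participants looks like this:

\[
\text{linear: } 10^9 \times 1 = 10^9 > 10^6,
\qquad
\text{log-with-cap: } \min\bigl(1 + \ln 10^9,\ 50\bigr) \approx 21.7 < 10^6 ,
\]

so the linear model says the specks are worse and the torture should be taken, while the capped logarithmic model never reaches that conclusion.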
Is there some level of discomfort short of extreme torture for a billion to suffer where the balance shifts?
In my model, yes.
Whether it be a paper-cut, a punch in the face, a leg break, a limb amputation, etc. – there is some level of discomfort where the starting value for one single person is higher than the limit of the log graph for 1,000,000,000+ people dealing with some lesser degree of discomfort.
That doesn’t necessarily have to be ‘extreme torture’ – this was just a more ‘obvious’ scenario that I used as an example.
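In symbols (my own paraphrase of the claim above): writing f_A(n) for the capped stacking curve of the lesser discomfort A, and s_B for the single-instance starting value of the greater discomfort B, the condition is simply

\[
s_B \;>\; \lim_{n\to\infty} f_A(n) \;=\; \mathrm{cap}_A ,
\]

so no number of instances of A, however large, ever outweighs a single instance of B.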
Oh, sure. I was wondering about the reverse question: is there something that doesn’t really qualify as torture where subjecting a billion people to it is worse than subjecting one person to torture.
I’m also interested in how this forms some sort of “layered” discontinuous scale. If it were continuous, then you could form a chain of relations of the form “10 people suffering A is as bad as 1 person suffering B”, “10 people suffering B is as bad as 1 person suffering C”, and so on to span the entire spectrum.
Then it would take some additional justification for saying that 100 people suffering A is not as bad as 1 person suffering C, 1000 A vs 1 D, and so on.
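Spelling the chain out (my rendering of the argument above): if the trade-offs compose transitively, then

\[
10\,A \sim 1\,B,\qquad 10\,B \sim 1\,C,\qquad 10\,C \sim 1\,D
\;\;\Longrightarrow\;\;
1000\,A \sim 1\,D,
\]

so a ‘layered’ scale has to break the chain somewhere: there must be two adjacent levels for which no finite number of instances of the lesser one is as bad as a single instance of the greater.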