A number of us (probably a minority around here) don’t think “stacking” or any simple, legible, aggregation function is justified, not within an individual over time and certainly not across individuals. There is a ton of nonlinearity and relativity in how we perceive and value changes in world-state.
I’m unsure myself. I wouldn’t want to simply avoid the question as it’s very possible that it could become a practical problem at some point in the future.
I kind of doubt it. Practical problems will have complexity and details that overwhelm this simple model, making it near-irrelevant. Alternately, it may be worth trying to frame a practical decision that an individual or small group (so as not to have to abstract away crowd and public choice issues) could make where this is important.
Do you think a logarithmic scale makes more sense than a linear scale?
Yes, but it probably doesn’t fix the underlying problem that quantifications are unstable and highly variable across agents.
For clarity – I do agree that a simple stacking model like this has its flaws, as each individual ‘unit’ of discomfort / pain will never be exactly equal in practice.
If you have the time, I’d like you to read my response to CstineSublime’s comment.
Regardless of whether or not some version of this thought experiment ever becomes a serious practical problem, if there were some non-zero chance of a situation where a decision like this had to be made, wouldn’t it make sense to have some method of comparison?
In reality, every ‘stacking theory’ requires a bunch of assumptions, but if the decision had to be made for some reason, do you think a logarithmic scale is a more appropriate one to use?
I think that insisting on comparing unmeasurable and different things is an error. If forced to do so, you can make up whatever numbers you like, and nobody can prove you wrong. If you make up numbers that don’t fully contradict common intuitions based on much-smaller-range and much-more-complicated choices, you can probably convince yourself of almost anything.
Note that on smaller, more complicated, specific decisions, many choices seem inconsistent with this comparison: some people accept painful or risky surgery over chronic annoyances, some don’t. There are extremely common examples of failing to mitigate pretty serious harm for distant strangers, in favor of mild comfort for oneself and closer friends/family (as well as some examples of the reverse). There are orders of magnitude in variance, enough to overwhelm whatever calculation you think is universal.
Do you think a logarithmic scale makes more sense than a linear scale?
Assuming that this article is a reaction to “Torture vs. Dust Specks”, the hypothetical number of people suffering from dust specks was specified as 3^^^3, which in practice is an unimaginably large number. Big numbers such as “the number of particles in the entire known universe” are not sufficient even to describe its number of digits. Therefore, using a logarithmic scale changes nothing.
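To give a sense of why (my own arithmetic here, using standard Knuth up-arrow identities rather than anything from the thread): 3^^^3 is a power tower of 3s roughly 7.6 trillion levels tall, and taking a logarithm only strips a single level off that tower:

$$3\uparrow\uparrow\uparrow 3 \;=\; 3\uparrow\uparrow(3\uparrow\uparrow 3) \;=\; 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987, \qquad \log_{10}\!\bigl(3\uparrow\uparrow n\bigr) \;=\; \bigl(3\uparrow\uparrow(n-1)\bigr)\cdot\log_{10}3.$$

So the logarithm of 3^^^3 is itself a power tower only one level shorter, which is why a logarithmic rescaling leaves the comparison essentially untouched.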
Logarithmic scale with a hard cap is an inelegant solution, comparable to a linear scale with a hard cap.
What you probably want instead is some formula like in the theory of relativity, where the speed of a rocket approaches but never reaches a certain constant c. For example, you might claim that if the badness of any specific thing is X, then the badness of this thing happening even to a practically infinite number of people still only approaches some finite value C*X. (Not sure if C is constant across different kinds of suffering.)
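As a sketch of what such a formula could look like (this particular functional form and the parameter k are illustrative choices of mine, not something specified here): with X the badness of one instance and n the number of people affected,

$$B(n) \;=\; C\,X\,\frac{n}{n+k}, \qquad B(n) < C\,X \ \text{ for every finite } n, \qquad \lim_{n\to\infty} B(n) = C\,X,$$

so the aggregate badness approaches but never reaches the cap C*X, much as a rocket’s speed approaches but never reaches c.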
That seems like a nice justification for scope insensitivity. We are not insensitive, it’s just that saving 2,000 birds or saving 200,000 birds really has approximately the same moral value!
The problem with this justification is what qualifies as the “same kind of suffering”. Suppose that an infinite number of people getting a dust speck in their eyes aggregates into 1000 units of badness. If instead, an infinite number of people get a dust speck in their left eyes, and an infinite number of different people get a dust speck in their right eyes, does this aggregate into 1000 or 2000 units of badness, and why? What about dust specks vs sand specks?
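A toy calculation makes this concrete. The sketch below assumes a capped, saturating aggregator of the kind proposed above; the cap of 1000 units and the shape of the function are made-up illustrative parameters, not anything from the discussion:

```python
# Toy model: total badness of the same harm happening to n people,
# saturating toward `cap` as n grows. Cap and shape are illustrative only.
def aggregate(n, cap=1000.0, k=10.0):
    return cap * n / (n + k)

n = 10**15  # some very large number of dust-speck sufferers

# Counted as one kind of suffering ("dust speck in an eye"):
print(aggregate(n))                         # ~1000 units

# The same people, relabelled as two kinds ("left eye" vs "right eye"):
print(aggregate(n / 2) + aggregate(n / 2))  # ~2000 units
```

Nothing about the world differs between the two calculations, only how the suffering is categorised, yet the total doubles; any capped scheme therefore needs a principled answer to what counts as the “same kind of suffering”.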
Or is this supposed to aggregate over different kinds of suffering? So even an almost infinite number of people, each one mildly discomforted in a unique way, are a less bad outcome than one person suffering horribly?
...in short, it is not enough to say “in this specific scenario, I would define the proper way to calculate utility this way”; you should provide a complete theory, and then see how well it works in other scenarios.
(Also, you need to consider practically infinitely small numbers of people—that is, people suffering a certain fate with a microscopically tiny probability.)