How many units of physical pain per speck?
How many units of perceived mistreatment per speck?
How many instances of basic human rights infringements per speck?
I don’t have a terminal value along the lines of: “People should never experience an infinitesimally irritating dust speck hitting their eye.”
I do have a terminal value along the lines of: “People should never go through torture.”
So in this case we can make another calculation, which is: a googolplex of instances that are compatible with my terminal values vs. a single event that is not.
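To make the contrast concrete, here is a minimal sketch; the numbers, the function names, and the very idea of assigning torture a finite disutility score are hypothetical illustrations for this comment, not anyone’s actual utility assignments. It only shows how an additive rule and a terminal-value rule come apart on the same inputs.

```python
# Hypothetical illustration: an additive rule that converts every harm into
# one currency vs. a terminal-value (lexicographic) rule under which no
# number of speck events outweighs a single instance of torture.

SPECK_DISUTILITY = 1           # hypothetical: one "unit" of harm per speck
TORTURE_DISUTILITY = 10**12    # hypothetical: a finite score for 50 years of torture
N_SPECKS = 10**100             # a googol of speck events (a googolplex literal
                               # would not even fit in memory)

def additive_choice(n_specks: int) -> str:
    """Everything is convertible: pick whichever option has less total harm."""
    total_speck_harm = n_specks * SPECK_DISUTILITY
    return "inflict the torture" if total_speck_harm > TORTURE_DISUTILITY else "inflict the specks"

def terminal_value_choice(n_specks: int) -> str:
    """Torture violates a terminal value, specks do not; the count never enters into it."""
    return "inflict the specks"

print(additive_choice(N_SPECKS))        # -> inflict the torture
print(terminal_value_choice(N_SPECKS))  # -> inflict the specks
```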
I’d like to add, however, that if these events are causally connected, then the dust specks would become the obvious choice. There is surely some probability of getting into a car accident due to blinking, and there are lots of other ways to make essentially the same argument. Anyway, that aspect was not emphasized in either post, so I take it that it was not intended either.
If the initial option 1 had been written as “Save 400 lives with certainty; 100 people die with certainty”, it would be less misleading. Because if you interpret option 1 as no one dying, it actually is the correct choice, although that becomes clear later anyway.
Because if you interpret option 1 as no one dying
Such a reading would, frankly, be at the very least extremely careless.
When the juxtaposition is between saving 400 lives or saving 500 lives, it’s obvious that an additional 100 people are NOT being saved in the first scenario.
I don’t have a terminal value along the lines of: “People should never experience an infinitesimally irritating dust speck hitting their eye.”
I do have a terminal value along the lines of: “People should never go through torture.”
Are you sure you can identify your terminal values as well as that? Most people can’t.
If so, can you please give a full list of your terminal values, or as full such a list as you can make it? Thanks in advance.
Identifying a single value does not require you to identify all your values, which your sardonic comment seems to suggest. I chose that phrasing because it was plausible. In giving this example of terminal values I did not want to get into a full analysis of what’s wrong here; I merely intended to point out that torture is not the obvious option, and to do so with a compact reply. The post seems to assume convertibility between dust specks and torture; if you can come up with a couple of ways of weighing the situation in which convertibility does not follow, it becomes a trivial matter to keep listing them. That, in my opinion, is sufficient to conclude that there is no obvious, absolutely correct answer, and that you should proceed with care instead of shrugging and giving the torture verdict.

Most of the actual problems with the dilemma do not stem from the number googolplex, but rather from this being a hypothetical setup which seems to eliminate consequences, and consequences are usually a big part of what people perceive as right and wrong. You can argue, however, that when examining consequences you will eventually hit some kind of terminal values. So there you have it.
If you don’t consider this particular type of disutility (dust speck) convertible into the other (torture), the standard follow-up argument is to ask you to identify the smallest kind of disutility that might nonetheless be somehow convertible.
The typical list of examples includes “a year’s unjust imprisonment”, “a broken leg”, “a splitting migraine”, “a bout of diarrhea”, “an annoying hiccup”, “a papercut”, “a stubbed toe”.
Would any of these, if it took the place of the “dust speck”, change your position so that it is now in favour of averting 3^^^3 repetitions of the lesser disutility rather than averting the torture of the single person?
E.g. is it better to save a single person from being tortured for 50 years, or better to save 3^^^3 people from suffering a year’s unjust imprisonment?
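To spell out the additive reading being assumed here (which, of course, you may reject), the comparison comes out the same at every rung of that list:

$$
3\uparrow\uparrow\uparrow 3 \cdot d_{\text{lesser}} \;\gg\; d_{\text{torture}},
$$

whenever both harms are assigned finite values on remotely comparable scales, since $3\uparrow\uparrow\uparrow 3$ dwarfs any such ratio $d_{\text{torture}}/d_{\text{lesser}}$.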
As you can see from both of my comments above, it’s not the mathematical aspect that’s problematic. Your choosing the word “disutility” means you’ve already accepted that these harms are convertible into a single currency.
In what manner would you prefer someone to decide such dilemmas? Arguing that the various sufferings might not be convertible at all is more of an additional problem, not a solution—not an algorithm that indicates how a person or an AI should decide.
I don’t expect that you think that an AI should explode in such a dilemma, nor that it should prefer to save neither potential torture victim nor potential dustspecked multitudes....