I don’t have a terminal value along the lines of: “People should never experience an infinitesimally irritating dust speck hitting their eye.”
I do have a terminal value along the lines of: “People should never go through torture.”
Are you sure you can identify your terminal values as well as that? Most people can’t.
If so, can you please give a full list of your terminal values, or as full such a list as you can make it? Thanks in advance.
Identifying a single value does not require identifying all of your values, which is what your sardonic comment seems to suggest. I chose that phrasing because it was plausible. In creating this example of terminal values I did not want to get into a full analysis of what’s wrong here; I merely intended to point out, in a compact reply, that torture is not the obvious option. The post seems to presuppose convertibility between dust specks and torture; if you can come up with a couple of ways of weighing the situation under which convertibility does not follow, it becomes trivial to keep listing them. That, in my opinion, is sufficient to conclude that there is no obvious, ultimately and absolutely correct answer, and that you should proceed with care instead of shrugging and giving the torture verdict. Most of the actual problems with the dilemma stem not from the sheer size of the number, but from this being a hypothetical setup that seems to eliminate consequences, and consequences are usually a big part of what people perceive as right and wrong. You can, however, argue that in examining consequences you will eventually hit some kind of terminal values. So there you have it.
If you don’t consider this particular type of disutility (the dust speck) convertible into the other (torture), the standard follow-up argument is to ask you to identify the smallest kind of disutility that might nonetheless be somehow convertible.
The typical list of examples includes “a year’s unjust imprisonment”, “a broken leg”, “a splitting migraine”, “a bout of diarrhea”, “an annoying hiccup”, “a papercut”, and “a stubbed toe”.
Would any of these, if it took the place of the “dust speck”, change your position so that you would now prefer to avert 3^^^3 repetitions of the lesser disutility rather than avert the torture of the single person?
E.g., is it better to save a single person from being tortured for 50 years, or to save 3^^^3 people from suffering a year’s unjust imprisonment?
As you can see from both of my comments above, it’s not the mathematical aspect that’s problematic. By choosing the word “disutility” you’ve already accepted these as units convertible into a single currency.
In what manner would you prefer someone to decide such dilemmas? Arguing that the various sufferings might not be convertible at all is an additional problem rather than a solution: it is not an algorithm that tells a person or an AI how to decide.
I don’t expect that you think an AI should explode when faced with such a dilemma, nor that it should prefer to save neither the potential torture victim nor the potential dust-specked multitudes...
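To make the two stances concrete, here is a minimal Python sketch of the contrast being argued over, under a toy setup invented purely for illustration (the disutility numbers and severity tiers are my assumptions, not anything from the post): a single-currency aggregator, which decides by multiplying per-person harm by headcount, versus a lexicographic rule that refuses to trade any number of lower-tier harms against a higher-tier one.

from dataclasses import dataclass

# Toy model: each harm gets a made-up per-person "disutility" score and a
# severity tier. Both are illustrative assumptions, not values from the post.
@dataclass
class Harm:
    name: str
    disutility: float  # per-person badness on a single shared currency
    tier: int          # 0 = trivial, 2 = torture-grade (hypothetical scale)

SPECK = Harm("dust speck", 1e-9, tier=0)
TORTURE = Harm("50 years of torture", 1e9, tier=2)

def aggregating_rule(harm_a: Harm, count_a: float,
                     harm_b: Harm, count_b: float) -> str:
    """Single-currency view: multiply per-person disutility by headcount and
    avert whichever total is larger. With enough sufferers, the specks win."""
    if harm_a.disutility * count_a > harm_b.disutility * count_b:
        return harm_a.name
    return harm_b.name

def lexicographic_rule(harm_a: Harm, count_a: float,
                       harm_b: Harm, count_b: float) -> str:
    """Incommensurable view: a higher severity tier always dominates,
    no matter how many people suffer the lower-tier harm."""
    if harm_a.tier != harm_b.tier:
        return harm_a.name if harm_a.tier > harm_b.tier else harm_b.name
    return aggregating_rule(harm_a, count_a, harm_b, count_b)

# 3^^^3 is far too large to represent directly; float("inf") stands in for
# "unimaginably many people".
many = float("inf")
print("aggregating rule averts:", aggregating_rule(SPECK, many, TORTURE, 1))
print("lexicographic rule averts:", lexicographic_rule(SPECK, many, TORTURE, 1))

The sketch only makes the disagreement explicit: the aggregating rule averts the specks (i.e., gives the torture verdict), while the lexicographic rule averts the torture. It does not say which rule is the right one, which is precisely the point in dispute.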