The fundamental question of how to help a person or group of people DESPITE their irrationality and inability to optimize for themselves is … unsolved at best. On at least two dimensions:
What do you optimize for? They don’t have coherent goals, and you wouldn’t have access to them anyway. Do you try to maximize their end-of-life expressions of satisfaction? Short-term pain relief? Simple longevity, with no quality metric? It’s hard enough to answer this for oneself, let alone others.
If their goals (or happiness/satisfaction metrics) conflict with yours, or with what you’d want them to want, how much of your satisfaction do you sacrifice for theirs?
And even if you have good answers for those, you have to decide how much you trust them not to harm you, either accidentally because they’re stupid (or constrained by context that you haven’t modeled), or intentionally because they care less about you than you do about them. If you KNOW that they strongly prefer the truth, and that lying to them does them harm, but they’re idiots who’ll blow up the world, does this justify taking away their agency?
I’m happy with a confident “yes” to that last question.
Me too, but I recognize that I’m much less happy with people applying the reasoning to take away my self-direction and choice. I’m uncomfortable with the elitism that says “I’m better at it, so I follow different rules”, but I don’t know any better answer.
If we change “blow up the world” to “kill a fly”, at what point does the confidence start to waver?
If we change “will blow up” to “maybe blow up” to “might blow up”, when does it start to waver?
Another rather extreme edge case comes from Star Control II. The Ur-Quan hold that letting a random sentient species loose in the universe carries an unacceptable risk that it turns out to be homicidal, or builds a torture world and kills all other life. Their two internal factions disagree on whether dominating all other species is enough (The Path of Now and Forever) or whether exterminating species until only Ur-Quan life remains is called for (The Eternal Doctrine). Because of their species’ history and peculiar makeup, they have reason to believe they are in an unusually good position to understand the risks posed by xenolife.
Ruminating on the Ur-Quan, I came to the position that, yes, allowing other species to live (free) does pose a risk of extremely bad outcomes, but that risk is small compared to the expected richness that additional life brings. What the Ur-Quan are doing is excessive, but if “will they blow up the world?” automatically warranted an infinitely confident “yes” to outlaw status, then their argument would carry through: the only way to make sure is to nuke/enslave (most of) the world.
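To make that tradeoff concrete, here is a minimal expected-value sketch; the probability p of a species turning out catastrophic, the value V of letting it live freely, and the catastrophe cost C are purely illustrative placeholders, not anything from the game or the comment above:

\[
\mathbb{E}[\text{let live}] = (1-p)\,V - p\,C, \qquad \mathbb{E}[\text{enslave/exterminate}] \approx 0.
\]

For small p and finite C the first quantity stays positive, so freedom wins; the Ur-Quan stance amounts to treating C as effectively unbounded (or p as certain), which flips the comparison no matter how large V is.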
I guess on a more human scale: having bats around means they might occasionally serve as jumping-off points for pretty nasty viruses. The mere possibility of this is not enough to jump to the conclusion that bats should be made extinct. And for positions in human organizations, the fact that a post is filled by a human, who is therefore fallible, doesn’t mean its holder should be barred from exercising any of its powers.
A state works through its ministers and agents. As the investigator properly assigned to the case, it is not as if you are working against the system.
I guess part of the evaluation is that living in a world with a superpower trying to incite war means the world has a background chance of blowing up anyway. And knowing that they are trying to incite war by assassination could be used for longer-term peacekeeping (shifting counterespionage resources, etc.). Exposing emotionally charged circumstances risks immediate, less-than-deliberate action, but clouding the decision apparatus with falsehoods weakens contact with reality, which has its own error rates.