Spoilers for Man in the High Castle ahead.
LW has thought about Petrov somewhat, and I found myself torn balancing various virtues in a fictional situation that seems to be a limit case.
Kido, as the highest-ranking officer of the Kempeitai (military and secret police), gets the job of figuring out who tried to shoot the emperor. He is expected to succeed in this or forfeit his life. He discovers that a sniper from a superpower with nukes essentially confesses to the act. He thinks that relaying this information up the chain would push his country into a war it would lose (his country doesn’t have nukes). Instead he suppresses the evidence and prepares to fail at his task. At the eleventh hour a “safe” suspect presents itself and the plot moves on.
Does Kido here do a heroic act like Petrov? He is tasked with a truth-uncovering mission and fabricates a falsehood to bolster stability. I know that “the first victim of war is truth”, but here the fabrication seems to be made in the name of peace. With Petrov, he knew the system wasn’t reliable and that other people might not have the skill or context to evaluate the data better. Here it seems that reporting the facts upstream would not place the decision makers in a worse position. I guess if multiple people know about it, it would be harder to pretend it didn’t take place. Leaking it to the press could force the emperor’s/capital’s hand, but are 5-7 people up the chain knowing about it too much?
For an AI that is supposed to cooperate in not bringing about catastrophic outcomes, would it be proper to prioritize non-war over technical accuracy, or is the corruption of truth unacceptable here, taking away the autonomy of the AI’s operators/customers to decide their own fate?
Even if your superiors know what really happened, they’ll want you to fabricate a plausible lie for the public story. They’ll also want you to carry the risk so that if the truth does eventually come out they can deny knowing about the coverup.
When the agent willingly chooses death, I don’t think there is any significant risk left to take on.
There is the side of responsibility of bearing shame, which can transcend death. I guess I found an aspect of it I didn’t previously realise: when you think a situation will only resolve with an evil act, and you could punt the decision to another party, it can seem like a favour to make the act happen via the party that carries the stain most gracefully.
The setting seems so morally grey that being complicit in the coverup would hardly be a blip on the radar. Later on, when the generals disagree with the emperor’s confidants, they pretty much stage a coup by excluding the capital people from decision making. Part of the danger from Ripper in Strangelove is that he can just act as if he received an order without actually receiving one. When Kido acts without consultation, how does he know that he is not operating from a faulty “bodily fluids” motive? What is the difference between a coup and exercising implicit autonomy, if any?
The fundamental question of how to help a person or group of people DESPITE their irrationality and inability to optimize for themselves is … unsolved at best. On at least two dimensions:
What do you optimize for? They don’t have coherent goals, and you wouldn’t have access to them anyway. Do you try to maximize their end-of-life expressions of satisfaction? Short-term pain relief? Simple longevity, with no quality metric? It’s hard enough to answer this for oneself, let alone others.
If their goals (or happiness/satisfaction metrics) conflict with yours, or with what you’d want them to want, how much of your satisfaction do you sacrifice for theirs?
And even if you have good answers for those, you have to decide how much you trust them not to harm you, either accidentally because they’re stupid (or constrained by context that you haven’t modeled), or intentionally because they care less about you than you about them. If you KNOW that they strongly prefer the truth, and you’re doing them harm to lie, but they’re idiots who’ll blow up the world, does this justify taking away their agency?
I’m happy with a confident “yes” to that last question.
Me too, but I recognize that I’m much less happy with people applying the reasoning to take away my self-direction and choice. I’m uncomfortable with the elitism that says “I’m better at it, so I follow different rules”, but I don’t know any better answer.
If we change “blow up the world” to “kill a fly”, at what point does the confidence start to waver?
If we change “will blow up” to “maybe blow up” to “might blow up”, when does it start to waver?
Another edge case comes from Star Control II. The Ur-Quan are of the opinion that letting a random sentient species exist in the universe carries an unacceptable risk that it turns out to be a homicidal one, or makes a torture world and kills all other life. The two internal factions disagree on whether dominating all other species is enough (The Path of Now and Forever) or whether speciecide until only Ur-Quan life remains is called for (The Eternal Doctrine). Because of their species’ history and special makeup, they have reason to believe they are in an enhanced position to understand xenolife risks.
Ruminating on the Ur-Quan, I came to the position that, yes, allowing other species to live (free) does pose a risk of extremely bad outcomes, but this is small compared to the (expected) richness-addition of life. What the Ur-Quan are doing is excessive, but if “will they blow up the world?” auto-warranted an infinitely confident yes for outlaw status, then their argument would carry through: the only way to make sure is to nuke/enslave (most of) the world.
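A rough way to make that trade-off explicit (my own framing, not anything stated in the game): let the species live free exactly when the expected richness it adds outweighs the expected catastrophe cost,

\[
p_{\text{catastrophe}} \cdot C_{\text{catastrophe}} \;<\; \mathbb{E}[\text{richness added}].
\]

The Ur-Quan policy amounts to treating \(p_{\text{catastrophe}}\) as effectively 1 (or \(C_{\text{catastrophe}}\) as unbounded), which makes the left side dominate no matter how much richness is on offer; with realistically small probabilities the inequality goes the other way.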
I guess on a more human scale: having bats around means they might occasionally serve as jumping-off points for pretty nasty viruses. The mere possibility of this is not enough to jump to the conclusion that bats should be made extinct. And for positions in human organizations, the fact that a post is filled by a fallible human doesn’t mean its holder should be barred from exercising any of its powers.
A state works through its ministers/agents. As the investigator properly assigned to the case, it is not as if you are working against the system.
I guess part of the evaluation is that living in a world with a superpower trying to incite war means the world has a background chance of blowing up anyway. And knowing that they are trying to incite war by assassination could be used for longer-term peacekeeping (shifting counterspy resources etc.). Exposing emotionally charged circumstances risks immediate, less-than-deliberate action, but clouding the decision apparatus with falsehoods makes contact with reality weaker, which has its own error rates.