When it comes to epistemic warfare, the dark times are already here, confidence 85%. (People on both sides of a given political divide would agree they are already here, though of course for different reasons: pro-choice and pro-life, Brexit and Remain, Republican and Democrat.)
when considerations of proportionality and mitigating collateral damage are applied
Do you have a more concrete model for when x units of censorship/lying are appropriate for y utils/hedons/whatever? Not a trick question, although I doubt any two people could agree on such a model unless highly motivated (“you can’t come out of the jury room until you agree”). The question may be important when it comes time to teach an AI how to model our utility function.
My intuitive model would be "no censorship or lying is ever appropriate for less than n utils, and p units of censorship or lying are never appropriate for any number of utils short of a guaranteed FAI". And then... a vast grayness in the middle. n is fairly large; I can't think of any remotely feasible political goal in the U.S. that I'd endorse my representatives lying and censoring to accomplish.
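For concreteness, here is a minimal sketch of that piecewise rule as code. The threshold names and values (N_UTILS_FLOOR, P_SEVERITY_CEILING) are hypothetical placeholders for illustration only, not numbers anyone above has proposed:

```python
# Hypothetical sketch of the piecewise decision rule described above.
# Both thresholds are illustrative placeholders, not proposed values.

N_UTILS_FLOOR = 1_000_000      # below this expected benefit, never lie/censor
P_SEVERITY_CEILING = 100       # at or above this severity, never lie/censor
                               # (short of a guaranteed FAI)

def intervention_verdict(severity_units: float, expected_utils: float,
                         guaranteed_fai: bool = False) -> str:
    """Classify a proposed censorship/lying intervention by severity and benefit."""
    if expected_utils < N_UTILS_FLOOR:
        return "forbidden"          # benefit too small, full stop
    if severity_units >= P_SEVERITY_CEILING and not guaranteed_fai:
        return "forbidden"          # too severe for any ordinary payoff
    # Everything in between is the "vast grayness": no rule, only judgment.
    return "gray area"
```

Note that the sketch only formalizes the two hard edges; the middle region is deliberately left as "gray area" rather than a computed tradeoff.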
I’d endorse widespread lying and censorship to prevent/avoid/solve a handful of seemingly intractable and also highly dangerous Nash equilibria with irreversible results, like climate change. We’d need to come up with some Schelling fences first, since you wouldn’t want to just trust my judgment (I don’t).
I think you need legible rules for norms to scale in an adversarial game, so they can't be direct utility-threshold rules.
Proportionality is harder to make legible, but when lies are directed at political allies, that's clear friendly fire or betrayal. Lying to the general public also shouldn't fly; that's indiscriminate.
I really don't think lying and censorship are going to help with climate change. We already have publication bias and hype on one side, and corporate lobbying plus other lies on the other. You would probably have to take another approach to earn trust and credibility when joining the fray so late. If there had been greater honesty and accuracy, we'd have invested more in nuclear power a long time ago; but now that other renewable tech has descended the learning curve faster, different options make sense going forward. In the Cold War, anti-nuclear movements were partly hijacked by communists trying to make the U.S. weaker and to shift the focus from mutual to unilateral action. There's a lot of bad stuff, influenced by lies in the distant past, that constrains options in the future.

I guess it would be interesting to see which deception campaigns in history are most widely considered good and successful after the fact. I assume most are wartime ones, such as the Allied deception about the D-Day landings.
Fair points. Upon reflection, I would probably want to know in advance that the Dark Arts intervention was going to work before authorizing it, and we’re not going to get that level of certainty short of an FAI anyway, so maybe it’s a moot point.
The term "dark times" is relative. Seeing darkness in the present doesn't mean the future won't be darker.