Policy discussions follow strong contextualizing norms

tl;dr: because policy discussions follow contextualizing norms, value judgements are often interpreted as endorsements of (uncooperative) actions; people used to decoupling norms should carefully account for this in order to communicate effectively.
One difficulty in having sensible discussions of AI policy is a gap between the norms used in different contexts—in particular the gap between decoupling and contextualizing norms. Chris Leong defines them as follows:
Decoupling norms: It is considered eminently reasonable to require the truth of your claims to be considered in isolation—free of any potential implications. An insistence on raising these issues despite a decoupling request is often seen as sloppy thinking or an attempt to deflect.
Contextualising norms: It is considered eminently reasonable to expect certain contextual factors or implications to be addressed. Not addressing these factors is often seen as sloppy or an intentional evasion.
LessWrong is one setting which follows very strong decoupling norms. Another is discussion of axiology in philosophy (i.e. which outcomes are better or worse than others). In discussions of axiology, it’s taken for granted that claims are made without considering cooperative or deontological considerations. For example, if somebody said “a child dying by accident is worse than an old person being murdered, all else equal”, then the local discussion norms would definitely not treat this as an endorsement of killing old people to save children from accidents; everyone would understand that there are other constraints in play.
By contrast, in environments with strong contextualizing norms, claims about which outcomes are better or worse than others can be interpreted as endorsements of related actions. Under these norms, the sentence above about accidents and murders could be taken as (partial) endorsement of killing old people in order to save children, unless the speaker added relevant qualifications and caveats.
In particular, I claim that policy discussions tend to follow strong contextualizing norms. I think this is partly for bad reasons (people in politics avoid decoupled statements because they’re easier to criticise) and partly for good reasons. In particular, decoupling norms make it easier to:
Construct negative associations such as stereotypes by saying literally true things about non-central examples of a category.
“Set the agenda” in underhanded ways—e.g. by steering conversations towards hypotheticals which put the burden of proof on one’s opponents, while retreating to the defence of “just asking questions”.
Exploit the lossiness of some communication channels (e.g. by getting your opponents to say things which will be predictably misinterpreted by the media).
However, I’m less interested in arguing about whether these norms are a good idea, and more interested in their implications for one’s ability to communicate effectively. One implication: there are many statements which, if said directly in policy discussions, will be taken as implying other statements that the speaker didn’t mean to imply. Most of these are not impossible to say, but instead need to be said much more carefully in order to convey the intended message. The additional effort required may make some people decide it’s no longer worthwhile to say those things; I think this is not dishonesty, but rather responsiveness to the costs of communication. To me it seems analogous to how some statements need to be said very carefully in order to convey the intended message under high-decoupling norms (e.g. under high-decoupling norms, saying that someone is arguing in bad faith is a serious accusation which requires strong justification).
In particular, under contextualizing norms, saying “outcome X is worse than outcome Y” can be seen as an endorsement of acting in ways which achieve outcome Y instead of outcome X. There are a range of reasons why you might not endorse this despite believing the original statement (even aside from reputational/coalitional concerns). For example, if outcome Y is “a war”:
You might hold yourself to deontological constraints about not starting wars.
You might worry that endorsing some wars would make other non-endorsed wars more likely.
You might hold yourself to decision-theoretic constraints like “people only gave me the ability to start the war because they trusted that I wouldn’t do it”.
If many people disagree with the original claim, then you might think that unilaterally starting the war is defecting in an epistemic prisoner’s dilemma.
If many people have different values from you, then you might think that unilaterally starting the war is defecting in a moral prisoner’s dilemma.
One way of summarizing these points is as different facets of cooperative/deontological morality. To restate the point, then: under contextualizing norms, unless you carefully explain that you’re setting aside cooperative constraints, claims like “X is worse than Y” can often reasonably be interpreted as a general endorsement of causing Y in order to avoid X. I strongly recommend that people who are used to decoupling norms take this into account when interacting in environments with contextualizing norms, especially public ones.
This also goes the other way, though. For example, I think discussions of international policy are higher-decoupling in some ways than discussions of domestic policy or individual morality. In particular, advocacy of widely-supported interventions backed by violence is not taboo in the international context; whereas advocacy of violence by individuals, or by governments against their own citizens, is strongly taboo. I think this taboo is very important, and urge others not to break it, nor to conflate advocacy of lawful force (carried out by legitimate (Schelling point) authorities) with advocacy of unlawful force (carried out unilaterally, in a way that undermines our general ability to coordinate to avoid violence).