We think that most people who see political speech know it to be political speech and automatically discount it. We hope that speaking in a different way will cut through these filters.
That’s one reason why an outspoken approach could be better. But it seems like you’d want some weighing of the pros and cons here? (Possible drawbacks of such messaging could include it being more likely to be ignored, to provoke a backlash, or to polarize the issue, etc.)
Like, presumably the experts who recommend being careful what you say also know that some people discount obviously political speech, yet they recommend (and practice) that caution anyway. If so, that would suggest this one reason is not, on its own, enough to override the experts’ opinion and practice.
Could we talk about a specific expert you have in mind, who thinks this is a bad strategy in this particular case?
AI risk is a pretty weird case, in a number of ways: it’s highly counter-intuitive, not particularly politically polarized / entrenched, seems to require unprecedentedly fast and aggressive action by multiple countries, is almost maximally high-stakes, etc. “Be careful what you say, try to look normal, and slowly accumulate political capital and connections in the hope of swaying policymakers long-term” isn’t an unconditionally good strategy, it’s a strategy adapted to a particular range of situations and goals. I’d be interested in actually hearing arguments for why this strategy is the best option here, given MIRI’s world-model.
(Or, separately, you could argue against the world-model, if you disagree with us about how things are.)
I don’t really have a settled view on this; I’m mostly just interested in hearing a more detailed version of MIRI’s model. I also don’t have a specific expert in mind, but I guess the type of person that Akash occasionally refers to—someone who’s been in DC for a while, focuses on AI, and has encouraged a careful/diplomatic communication strategy.
“Be careful what you say, try to look normal, and slowly accumulate political capital and connections in the hope of swaying policymakers long-term” isn’t an unconditionally good strategy, it’s a strategy adapted to a particular range of situations and goals.
I agree with this. I also think that being more outspoken is generally more virtuous in politics, though I see drawbacks to it as well. I guess I’d have liked the OP to mention some of the possible drawbacks of the outspoken strategy and whether there are sensible ways to mitigate them, or at least to make clear that MIRI thinks they’re outweighed by the advantages. (There’s some discussion, e.g., the risk of being “discounted or uninvited in the short term”, but this seems to be mostly drawn from the “ineffective” bucket, not from the “actively harmful” bucket.)
AI risk is a pretty weird case, in a number of ways: it’s highly counter-intuitive, not particularly politically polarized / entrenched, seems to require unprecedentedly fast and aggressive action by multiple countries, is almost maximally high-stakes, etc.
Yeah, I guess this is a difference in worldview between me and MIRI: I have longer timelines, am less doomy, and am more bullish on forceful government intervention, which leads me to think that increased variance is probably bad in general.
That said, I’m curious why you think AI risk is highly counterintuitive (compared to, say, climate change). The argument seems to boil down to a pretty simple, understandable (if reductive) core (“AI systems will likely be very powerful, perhaps more powerful than humans; controlling them seems hard; and all that seems scary”), and it has indeed been transmitted successfully in roughly that form in the past, in films and other media.
I’m also not sure why it’s relevant here that AI risk is relatively unpolarized. If anything, that seems like it should make it more important not to cause further polarization (at least if a highly visible moral issue being relatively unpolarized represents an unstable equilibrium)?