I think this is a good object-level post. The problem is, I don’t think MIRI is operating at the object level. A quote from their communications strategy: “The main audience we want to reach is policymakers.”
Communication is no longer a passive background channel for observing the world; speech becomes an action that changes it. Predictions start to influence the very things they predict.
Say AI doom is a certainty. People will be afraid and stop research. A few years later, doom doesn’t happen and everyone complains.
Say AI doom is an impossibility. Research continues, something something paperclips. A few years later, nobody complains, because no one is alive to.
(This example is itself overly simplistic; real-world politics and speech acts are even more counterintuitive.)
So MIRI has become a political organization. Their stated goal is “STOP AI”, and they have taken the radical approach to it. Politics is different from rationality, and radical politics is different from standard politics.
For example, they say they want to shatter the Overton window. Infighting usually breaks groups, but while it lasts, opponents have to engage with their position, which is itself a stated subgoal.
It’s ironic that a certain someone wrote “Politics is the Mind-Killer” over a decade ago. But because of that, I think they know what they are doing. And it might work in the end.
Interesting, thank you. I think that all makes sense, and I’m sure it plays at least some part in their strategy. I’ve wondered about this possibility a little bit.
Yudkowsky has been consistent in his belief that doom is nearly certain without a lot more time to work on alignment. He has publicly held that opinion, and spent a huge amount of effort explaining and arguing for it, since well before the current wave of success with deep networks. So I think for him, at least, it’s a sincerely held belief.
Your point about stated beliefs changing reality is important. Everything is safer if you think it’s dangerous: you’ll take more precautions.
With that in mind, I think it’s pretty important for even optimists to heavily sprinkle in the message “this will probably go well IF everyone involved is really careful”.