Taking AGI more seriously; seeing warning shots; etc. Like I said, I think these people are reasonable, but even the most reasonable people have a strong instinct to rally around their group’s flag when it’s being attacked. I don’t think most OS people are hardcore libertarians; I just think they don’t take the risks seriously enough right now to abandon the thing that has historically worked really well (especially since they’re probably disproportionately seeing the most unreasonable arguments from alignment people, because that’s how Twitter works).
In general there’s a strong tendency amongst rationalists to assume that if people haven’t come around on AI risk yet, they’ll never come around. But this is just bad modeling of how other people work. You should model most people in these groups as, say, 5x less open to abstract arguments than you, and 5x more responsive to social consensus. Once the arguments start seeming less abstract (and they start “feeling the AGI”, as Ilya puts it), and the social consensus builds, there’s plenty of scope for people to start caring much more about alignment.