There might be more obvious ways, but the ones I can think of are either only temporary, local disruptions or cannot be executed by a small coordinated group.
I understand that LW and the AI Safety community do not want to be associated with terrorism and ending civilisation; however, Eliezer has talked about blowing up AI labs, so I am uncertain where the line would be drawn here.
I accept that “preemptively destroying civilization” might be excluded from the definition of PWAs, but is that something that is discussed at all on LW or in the AI Safety community? It seems to me that if you believe with 99.99% confidence that AGI will kill us, or worse, torture us, then it should be on the table.
Thank you for your answer!