I’m confused about what this response is arguing against. As far as I can tell, no major alignment organization is arguing that collapse is desirable, so I do not understand what you’re arguing for.
No organisation, but certainly the individual known as Dzoldzaya, who wrote both the article I linked and the comment I was replying to. By Dzoldzaya’s use of “We”, they place themselves within the AI Safety/etc. community, if not any particular organisation.
Here, Dzoldzaya recommends teaming up with neo-Luddites to stop AI, by unspecified means that I would like to see elaborated on. (What does ‘KOL’ mean in this context?) There, Dzoldzaya considers trying to delay AI by destroying society, for example by engineering a pandemic that kills 95% of the population (i.e. 7.6 billion people). The only problem Dzoldzaya sees with such proposals is that they might not work, e.g. if the pandemic kills 99.9% instead and makes recovery impossible. But it would delay catastrophic AI and give us a better chance at the far future where (Dzoldzaya says) most value lies, against which 7.6 billion people are nothing. Dzoldzaya claims to be only 40% sure that such plans would lead to a desirable outcome, but given the vast future at stake, a 40% shot at it is a loud call to action.
ETA: Might “KOL” mean “voice”, as in “bat kol”?
KOL = Key Opinion Leaders, as in a small group of influential people within the neo-Luddite space. My argument here was simply that people concerned about AI alignment need to be politically astute, and more willing to find allies with whom they may be less aligned.
I think it’s probably a problem that those interested in AI alignment are far more aligned with techno-optimists, whom I see as pretty dangerous allies, than with more cautious, less technologically sophisticated groups (bureaucrats or neo-Luddites).
Don’t know why you feel the need to use my unrelated post to attempt to discredit my comment here; it strikes me as pretty bad form on your part. But, to state the obvious, a 40% shot at a desirable outcome is obviously not a call to action if the other 60% is very undesirable (I mention that the negative outcomes involve either extinction or something worse).