KOL = Key Opinion Leaders, as in a small group of influential people within the neo-Luddite space. My argument here was simply that people concerned about AI alignment need to be politically astute, and more willing to find allies with whom they may be less aligned.
I think it’s probably a problem that those interested in AI alignment are far more aligned with techno-optimists, whom I see as pretty dangerous allies, than with more cautious, less technologically sophisticated groups (bureaucrats or neo-Luddites).
Don’t know why you feel the need to use my unrelated post to attempt to discredit my comment here; it strikes me as pretty bad form on your part. But, to state the obvious, a 40% shot at a desirable outcome is obviously not a call to action if the other 60% is very undesirable (I noted that the negative outcomes involve extinction or worse).