Quite so, which is why I support MIRI despite their marketing techniques being much too fearmongering-laden, in my opinion.
Even though I do understand why they are that way: back in the SIAI days, Eliezer believes he came dangerously close to actually building an AI before realizing it would destroy the human race. Fair enough that he's afraid of what all the other People Like Eliezer might do, but without being able to see his AI designs from that period, there's really no way for the rest of us to judge whether it would have destroyed the human race or just gone kaput like so many other supposed AGI designs. Private experience, however, does not make for persuasive marketing material.
Well, that sounds like a new area of AI safety engineering to explore, no? How to check your work before doing something potentially dangerous?
I believe that is MIRI’s stated purpose.