What is your certainty for this conclusion, and what rigorous process of validation did you use to arrive at it?
My certainty is fairly high, though of course not absolute. I base it on my knowledge of how humans form moral convictions: how very few individuals will abandon cached moral beliefs, and the reasons I have ever encountered for individuals doing so (whether through my own study of psychology or reports of others' studies—including the ten years I have spent cohabitating with a student of abnormal psychology); personal observations of the behaviors of extremists and conformists; and a whole plethora of other such items that I just haven't the energy to list right now.
I do not presume to know what secret research on the subject is or is not being sponsored by governments around the world, but if any such government-sponsored work is happening in secret, I consider it significantly more likely that it is uFAI.
I’m not particularly given to conspiratorial paranoia. DARPA is the single most likely sponsor of such work, and having been in touch with some individuals from that “area of the world,” I know that our military has strong reservations about advancing weaponized autonomous AI.
Besides, the theoretical groundwork for AGI in general is insufficient to even begin to assign high probability to AI itself coming about anytime within the next generation, Friendly or otherwise. IA (intelligence augmentation) is far more likely to occur, frankly—especially given the work of folks like Theodore Berger.
However, here you have contradicted yourself: you claim to have no special knowledge, yet you also assign high probability to uFAI researchers surviving a conscientious pogrom of AI researchers.