Any person so unfamiliar with rational skepticism that they would not doubt their own conclusions and go through a rigorous process of validation before making a “90%” certainty statement would be immune to the kind of discourse this site focuses on in the first place.
What is your certainty for this conclusion, and what rigorous process of validation did you use to arrive at it?
I’m curious as to what makes you believe this to be the case. As far as I am aware, the fundamental AGI research ongoing in the world is currently being conducted in universities.
I do not presume to know what secret research on the subject is or is not being sponsored by governments around the world, but if any such government-sponsored work is happening in secret, I consider it significantly more likely to be uFAI, and its participants significantly less likely to be convinced of the need for Friendliness than independent (and thus far less protected) researchers.
What is your certainty for this conclusion, and what rigorous process of validation did you use to arrive at it?
My certainty is fairly high, though of course not absolute. I base it on my knowledge of how humans form moral convictions: how very few individuals will abandon cached moral beliefs, and the reasons I have ever encountered for individuals doing so (whether through my own study of psychology or through reports of others’ studies, including the ten years I have spent cohabiting with a student of abnormal psychology), on personal observations of the behavior of extremists and conformists, and on a whole plethora of other such items that I just haven’t the energy to list right now.
I do not presume to know what secret research on the subject is or is not being sponsored by governments around the world, but if any such government-sponsored work is happening in secret, I consider it significantly more likely to be uFAI
I’m not particularly given to conspiratorial paranoia. DARPA is the single most likely sponsor of such work, and having been in touch with some individuals from that “area of the world,” I know that our military has strong reservations about the idea of advancing weaponized autonomous AI.
Besides, the theoretical groundwork for AGI in general is insufficient to even begin to assign high probability to AI itself coming about anytime within the next generation, Friendly or otherwise. IA (intelligence amplification) is far more likely to occur, frankly, especially given the work of folks like Theodore Berger.
However, you have contradicted yourself here: you claim to have no special knowledge, yet you also assign high probability to uFAI researchers surviving a conscientious pogrom of AI researchers.