If I do not seem to consider it, that is because it is already a necessary part of the calculus of determining whether a belief is valid. That is why I mentioned “material evidence” at all: as an indicator that checks and confirmations are necessary to a sufficiently rigorous epistemology.
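To put that calculus in concrete terms, here is a minimal Bayesian sketch; the hypothesis H, the evidence E, and every number in it are purely illustrative assumptions of mine rather than anything taken from this exchange. Starting from an even prior, a single check whose outcome is nine times as likely if the belief is true as if it is false is what it takes to reach a 90% credence:

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} \;=\; \frac{0.9 \times 0.5}{0.9 \times 0.5 + 0.1 \times 0.5} \;=\; 0.9$$

From a less charitable prior the required likelihood ratio only grows (roughly 36:1 from a 20% prior), which is the sense in which checks and confirmations are load-bearing rather than optional.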
Don’t you think that a remotely responsible post should have at the very least emphasized that significantly more than you did?
If tomorrow some lone nut murders an AI researcher, and after being arrested says they found encouragement in your specific words, and also says they never noticed you saying anything about “checks and confirmations”, wouldn’t you feel remotely responsible?
And as a side note, the lone nuts you’d be encouraging would be much more likely to murder FAI researchers than the uFAI researchers who’d be working in military bases with the support of Russia, or China, or North Korea, or America. Therefore, if anything, they’d be more likely to bring about the world’s doom, not prevent it.
Don’t you think that a remotely responsible post should have at the very least emphasized that significantly more than you did?
Any person so unfamiliar with rational skepticism that they would not doubt their own conclusions and go through a rigorous process of validation before reaching a “90%” certainty statement would be immune to the kind of discourse this site focuses on in the first place.
It’s not just implicit; it’s necessary to reach that state. It’s not irresponsible to know your audience.
If tomorrow some lone nut murders an AI researcher, and after being arrested says they found encouragement in your specific words,
Then they are a lunatic who does not know how to reason and would have done it regardless. In fact, this is already a real-world problem, and my words have no impact on such individuals one way or the other.
and also says they never noticed you saying anything about “checks and confirmations”, wouldn’t you feel remotely responsible?
No. Nor should I. Any person who could come to a statement of “I am 90% certain of X” (as I used that 90% as a specific inclusion in the counterfactual) who also could not follow the dialogue-as-it-was to the reasonable conclusion that it was a counterfactual… well, they would have had their conclusion long before they read my words.
And as a side note, the lone nuts you’d be encouraging would be much more likely to murder FAI researchers than the uFAI researchers who’d be working in military bases with the support of Russia, or China, or North Korea, or America.
I’m curious as to what makes you believe this to be the case. As far as I am aware, the fundamental AGI research ongoing in the world is currently being conducted in universities. The uFAI and FAI ‘crowds’ are undifferentiated, today, in terms of their accessibility.
Any person so unfamiliar with rational skepticism that they would not doubt their own conclusions and go through a rigorous process of validation before reaching a “90%” certainty statement would be immune to the kind of discourse this site focuses on in the first place.
What is your certainty for this conclusion, and what rigorous process of validation did you use to arrive at it?
I’m curious as to what makes you believe this to be the case. As far as I am aware, the fundamental AGI research ongoing in the world is currently being conducted in universities.
I do not presume to know what secret research on the subject is or is not being sponsored by governments around the world, but if any such government-sponsored work is happening in secret, I consider it significantly more likely that it is uFAI, and significantly less likely that its participants could be convinced of the need for Friendliness than independent (and thus significantly more unprotected) researchers.
What is your certainty for this conclusion, and what rigorous process of validation did you use to arrive at it?
My certainty is fairly high, though of course not absolute. I base it on my knowledge of how humans form moral convictions: how very few individuals will abandon cached moral beliefs, and the reasons I have ever encountered for individuals doing so (either through my own study of psychology or through reports of others’ studies, including the ten years I have spent cohabiting with a student of abnormal psychology); on personal observations of the behaviors of extremists and conformists; and on a whole plethora of other such items that I just haven’t the energy to list right now.
I do not presume to know what secret research on the subject is or is not being sponsored by governments around the world, but if any such government-sponsored work is happening in secret, I consider it significantly more likely that it is uFAI
I’m not particularly given to conspiratorial paranoia. DARPA is the single most likely source for such work, and, having been in touch with some individuals from that “area of the world,” I know that our military has strong reservations about the idea of advancing weaponized autonomous AI.
Besides, the theoretical groundwork for AGI in general is insufficient to even begin to assign a high probability to AI itself coming about anytime within the next generation, Friendly or otherwise. IA (intelligence amplification) is far more likely to occur, frankly, especially with the work of folks like Theodore Berger.
However, you have contradicted yourself here: you claim to have no special knowledge, yet you also assign a high probability to uFAI researchers surviving a conscientious pogrom of AI researchers.