Most of us frown on irresponsible encouragements to criminal acts.
As well you should. Of course, this carries a number of interesting assumptions:
The assumption of irresponsibility.
The assumption of encouragement.
The assumption of the ‘wrongness’ of criminal acts.
Let me rephrase this: if you believed, very strongly (say, confidence over 90%), that there was a strong chance a specific person was going to destroy the world, and you also knew that only you were willing to acknowledge the material evidence which led you to this conclusion...
Would you find it acceptable to sit still and let the world end merely because ensuring the survival of the human race would be criminal?
In that counterfactual, I do not. I find it reprehensibly irresponsible, in fact.
Logos, you don’t need to preach about utilitarian calculations to us. You have it the other way around. We don’t condemn your words because we can’t make such calculations; we condemn them because we can make those calculations better than you do.
It was your posts I condemned and downvoted as irresponsible; it was your posts’ utility that I considered negative, not lone heroic actions that saved the world from inventors of doom. You did none of the latter; you did some of the former. So it’s the utility of the former that’s being judged.
Also, if I ever found myself perceiving that “only I was willing to acknowledge the material evidence which led me to this conclusion...”, the probabilities would be heavily in favour of my own mind having cracked, rather than of my being the only rational person in the world. We run on corrupted hardware!
That you don’t seem to consider that, nor do you urge others to consider it, is part of the fatal irresponsibility of your words.
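(A minimal sketch of the Bayesian comparison being gestured at above. Every number below is an invented, purely illustrative assumption; none of these priors or likelihoods comes from either poster.)

```python
# Rough two-hypothesis Bayes comparison: "a specific person really is about to
# destroy the world, and only I see the evidence" vs. "my own reasoning has cracked".
# All numbers are assumptions chosen only to illustrate the shape of the argument.

prior_world_ender = 1e-6        # prior: this specific person really will destroy the world
prior_mind_cracked = 1e-3       # prior: my own mind/hardware has gone wrong

# Likelihood of observing "only I acknowledge the evidence" under each hypothesis
p_obs_if_world_ender = 0.01     # even if true, "literally no one else sees it" is unlikely
p_obs_if_cracked = 0.5          # a cracked mind readily produces private, unshareable evidence

joint_world_ender = prior_world_ender * p_obs_if_world_ender
joint_cracked = prior_mind_cracked * p_obs_if_cracked

# Posterior, restricted to these two hypotheses
posterior_world_ender = joint_world_ender / (joint_world_ender + joint_cracked)
print(f"P(real world-ender | only I see it) ~ {posterior_world_ender:.1e}")  # ~2.0e-05
```

Under these assumed numbers, the posterior lands overwhelmingly on “my mind has cracked”, which is the sense in which the probabilities would be “heavily in favour” of that hypothesis.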
Logos, you don’t need to preach about utilitarian calculations to us. You have it the other way around. We don’t condemn your words because we can’t make such calculations; we condemn them because we can make those calculations better than you do.
Then do so.
That you don’t seem to consider that, nor do you urge others to consider it, is part of the fatal irresponsibility of your words.
I don’t seem to consider it because it is already a necessary part of the calculus of determining whether a belief is valid. This would be why I mentioned “material evidence” at all—an indicator that checks and confirmations are necessary to a sufficiently rigorous epistemology. The objection of “but it could be a faulty belief” is irrelevant here. We have already done away with it in the formation of the specific counterfactual. That it is an exceedingly unlikely counterfactual does not change the fact that it is a useful counterfactual.
What I’m elucidating here is a rather ugly version of a topic Eliezer discussed in his Sword of Good parable: to be effective in discerning what is morally correct, one must be in the practice and habit of throwing away cached moral beliefs and evaluating even the most unpleasant of situations according to one’s accepted epistemological framework’s methodology for such evaluations.
The AI serial-killer scenario is one such example.
I don’t seem to consider it because it is already a necessary part of the calculus of determining whether a belief is valid. This would be why I mentioned “material evidence” at all—an indicator that checks and confirmations are necessary to a sufficiently rigorous epistemology.
Don’t you think that a remotely responsible post should have at the very least emphasized that significantly more than you did?
If tomorrow some lone nut murders an AI researcher, and after being arrested says they found encouragement in your specific words, and also says they never noticed you saying anything about “checks and confirmations”, wouldn’t you feel remotely responsible?
And as a sidenote, the lone nuts you’d be encouraging would be much more likely to murder FAI researchers than the uFAI researchers who would be working in military bases with the support of Russia, or China, or North Korea, or America. Therefore, if anything, they’d be more likely to bring about the world’s doom, not prevent it.
Don’t you think that a remotely responsible post should have at the very least emphasized that significantly more than you did?
Anyone so unfamiliar with rational skepticism that they would not doubt their own conclusions and go through a rigorous process of validation before reaching a “90%” certainty statement would be immune to the kind of discourse this site focuses on in the first place.
It’s not just implicit; it’s necessary to reach that state. It’s not irresponsible to know your audience.
If tomorrow some lone nut murders an AI researcher, and after being arrested says they found encouragement in your specific words,
Then they are a lunatic who does not know how to reason and would have done it one way or the other. In fact, this is already a real-world problem, and my words have no impact one way or the other on those individuals.
and also says they never noticed you saying anything about “checks and confirmations”, wouldn’t you feel remotely responsible?
No. Nor should I. Any person who could come to a statement of “I am 90% certain of X” (which is why I specifically included that 90% in the counterfactual) and yet could not follow the dialogue as it was to the reasonable conclusion that it was a counterfactual… well, they would have had their conclusion long before they read my words.
And as a sidenote, the lone nuts you’d be encouraging would be much more likely to murder FAI researchers than the uFAI researchers who would be working in military bases with the support of Russia, or China, or North Korea, or America.
I’m curious as to what makes you believe this to be the case. As far as I am aware, the fundamental AGI research ongoing in the world is currently being conducted in universities. The uFAI and FAI ‘crowds’ are undifferentiated, today, in terms of their accessibility.
Anyone so unfamiliar with rational skepticism that they would not doubt their own conclusions and go through a rigorous process of validation before reaching a “90%” certainty statement would be immune to the kind of discourse this site focuses on in the first place.
What is your certainty for this conclusion, and what rigorous process of validation did you use to arrive at it?
I’m curious as to what makes you believe this to be the case. As far as I am aware, the fundamental AGI research ongoing in the world is currently being conducted in universities.
I do not presume to know what secret research on the subject is or is not happening sponsored by governments around the world, but if any such government-sponsored work is happening in secret I consider it significantly more likely that it is uFAI, and its participants significantly less likely to be convinced of the need for Friendliness than independent (and thus significantly more unprotected) researchers.
What is your certainty for this conclusion, and what rigorous process of validation did you use to arrive at it?
My certainty is fairly high, though of course not absolute. I base it on my knowledge of how humans form moral convictions: how very few individuals will abandon cached moral beliefs, and the reasons I have encountered for individuals doing so (through study of psychology and reports of others’ studies of psychology, including the ten years I have spent cohabiting with a student of abnormal psychology); on personal observations of the behaviors of extremists and conformists; and on a whole plethora of other such items that I just haven’t the energy to list right now.
I do not presume to know what secret research on the subject is or is not happening sponsored by governments around the world, but if any such government-sponsored work is happening in secret I consider it significantly more likely that it is uFAI
I’m not particularly given to conspiratorial paranoia. DARPA is the single most likely resource for such work, and, having been in touch with some individuals from that “area of the world”, I know that our military has strong reservations about the idea of advancing weaponized autonomous AI.
Besides, the theoretical groundwork for AGI in general is insufficient to even begin to assign high probability to AI itself coming about anytime within the next generation, Friendly or otherwise. IA (intelligence amplification) is far more likely to occur, frankly, especially given the work of folks like Theodore Berger.
However, you here have contradicted yourself: you claim to have no special knowledge, yet you also assign high probability to uFAI researchers surviving a conscientious pogrom of AI researchers.
This is contradictory.