That would require that I had asserted I agreed with the underlying premise that UFAI was a significant risk.
At the moment, I do not.
I also find it rather unsurprising that the comment in question has been down-voted as far as it has been, though once again I find that, while I am not surprised, I am disappointed with LW in general. This is happening too often, I fear.
Most of us frown on irresponsible encouragements to criminal acts.
As well you should. Of course, this carries a number of interesting assumptions:
The assumption of irresponsibility.
The assumption of encouragement.
The assumption of the ‘wrongness’ of criminal acts.
Let me rephrase this: if you believed, very strongly (say, with confidence over 90%), that there was a strong chance that a specific person was going to destroy the world, and you also knew that only you were willing to acknowledge the material evidence which led you to this conclusion...
Would you find it acceptable to sit still and let the world end, merely because ensuring the survival of the human race would be a criminal act?
In that counterfactual, I do not. I find it reprehensibly irresponsible, in fact.
Logos, you don’t need to preach about utilitarian calculations to us. You have it the other way around. We don’t condemn your words because we can’t make them, we condemn them because we can make them better than you.
It was your posts I condemned and downvoted as irresponsible; it was your posts’ utility that I considered negative, not lone heroic actions that saved the world from inventors of doom. You did none of the latter; you did some of the former. So it’s the utility of the former that’s judged.
Also, if I ever found myself perceiving that “only I was willing to acknowledge the material evidence which led me to this conclusion...”, the probabilities would be severely in favour of my own mind having cracked, rather than me being the only rational person in the world. We run on corrupted hardware!
That you don’t seem to consider that, nor do you urge others to consider it, is part of the fatal irresponsibility of your words.
Logos, you don’t need to preach about utilitarian calculations to us. You have it the other way around. We don’t condemn your words because we can’t make them, we condemn them because we can make them better than you.
Then do so.
That you don’t seem to consider that, nor do you urge others to consider it, is part of the fatal irresponsibility of your words.
I don’t seem to consider it because it is a necessary part of the calculus of determining whether a belief is valid. This would be why I mentioned “material evidence” at all—an indicator that checks and confirmations are necessary to a sufficiently rigorous epistemology. The objection of “but it could be a faulty belief” is irrelevant here. We have already done away with it in the formation of the specific counterfactual. That it is an exceedingly unlikely counterfactual does not change the fact that it is a useful counterfactual.
What I’m elucidating here is a rather ugly version of a topic that Eliezer was discussing in his Sword of Good parable: to be effective in discerning what is morally correct, one must be in the practice and habit of throwing away cached moral beliefs and evaluating even the most unpleasant of situations according to one’s accepted epistemological framework’s methodology for such evaluations.
The AI serial-killer scenario is one such example.
I don’t seem to consider it because it is a necessary part of the calculus of determining whether a belief is valid. This would be why I mentioned “material evidence” at all—an indicator that checks and confirmations are necessary to a sufficiently rigorous epistemology.
Don’t you think that a remotely responsible post should have at the very least emphasized that significantly more than you did?
If tomorrow some lone nut murders an AI researcher, and after being arrested says they found encouragement in your specific words, and also says they never noticed you saying anything about “checks and confirmations”, wouldn’t you feel remotely responsible?
And as a sidenote, the lone nuts you’d be encouraging would be much more likely to murder FAI researchers, than those uFAI researchers that’d be working in military bases with the support of Russia, or China, or North Korea, or America. Therefore, if anything, they’d be more likely to bring about the world’s doom, not prevent it.
Don’t you think that a remotely responsible post should have at the very least emphasized that significantly more than you did?
Any person so unfamiliar with rational skepticism that they would not doubt their own conclusions and go through a rigorous process of validation before reaching a “90%” certainty statement would be immune to the kind of discourse this site focuses on in the first place.
It’s not just implicit; it’s necessary to reach that state. It’s not irresponsible to know your audience.
If tomorrow some lone nut murders an AI researcher, and after being arrested says they found encouragement in your specific words,
Then they are a lunatic who does not know how to reason and would have done it one way or the other. In fact, this is already a real-world problem—and my words have no impact, one way or the other, on those individuals.
and also says they never noticed you saying anything about “checks and confirmations”, wouldn’t you feel remotely responsible?
No. Nor should I. Any person who could come to a statement of “I am 90% certain of X” (as I used that 90% as a specific inclusion in the counterfactual) who also could not follow the dialogue-as-it-was to the reasonable conclusion that it was a counterfactual… well, they would have had their conclusion long before they read my words.
And as a sidenote, the lone nuts you’d be encouraging would be much more likely to murder FAI researchers, than those uFAI researchers that’d be working in military bases with the support of Russia, or China, or North Korea, or America.
I’m curious as to what makes you believe this to be the case. As far as I am aware, the fundamental AGI research under way in the world is being conducted in universities. The uFAI and the FAI ‘crowd’ are undifferentiated, today, in terms of their accessibility.
Any person so unfamiliar with rational skepticism that they would not doubt their own conclusions and go through a rigorous process of validation before reaching a “90%” certainty statement would be immune to the kind of discourse this site focuses on in the first place.
What is your certainty for this conclusion, and what rigorous process of validation did you use to arrive at it?
I’m curious as to what makes you believe this to be the case. As far as I am aware, the fundamental AGI research under way in the world is being conducted in universities.
I do not presume to know what secret research on the subject is or is not happening sponsored by governments around the world, but if any such government-sponsored work is happening in secret, I consider it significantly more likely that it is uFAI, and significantly less likely that its participants could be convinced of the need for Friendliness than independent (and thus significantly more unprotected) researchers.
What is your certainty for this conclusion, and what rigorous process of validation did you use to arrive at it?
My certainty is fairly high, though of course not absolute. I base it on my knowledge of how humans form moral convictions: how very few individuals will abandon cached moral beliefs, and the reasons I have ever encountered for individuals doing so—whether through my own study of psychology, through reports of others’ studies of psychology (including the ten years I have spent cohabiting with a student of abnormal psychology), through personal observations of the behaviors of extremists and conformists, or through a whole plethora of other such items that I just haven’t the energy to list right now.
I do not presume to know what secret research on the subject is or is not happening sponsored by governments around the world, but if any such government-sponsored work is happening in secret, I consider it significantly more likely that it is uFAI
I’m not particularly given to conspiratorial paranoia. DARPA is the single most likely source for such work, and having been in touch with some individuals from that “area of the world”, I know that our military has strong reservations about the idea of advancing weaponized autonomous AI.
Besides, the theoretical groundwork for AGI in general is insufficient to even begin to assign a high probability to AI itself coming about anytime within the next generation, Friendly or otherwise. IA (intelligence augmentation) is far more likely to occur, frankly, especially given the work of folks like Theodore Berger.
However, you here have contradicted yourself: you claim to have no special knowledge, yet you also assign high probability to uFAI researchers surviving a conscientious pogrom of AI researchers. This is contradictory.
Quoted for posterity.
In that case, allow me to add that I believe the current likelihood of UFAI to be well below any other known species-level existential risk, and that I also believe that the current crop of AGI researchers are sufficiently fit to address this problem.
That would require that I had asserted I agreed with the underlying premise that UFAI was a significant risk.
At the moment, I do not.
I wouldn’t be terribly surprised, though, if this were the sort of consideration likely to be conveniently ignored by those in charge of enforcing the relevant laws in your jurisdiction!
Anyone interested in “reporting” me to local law enforcement need only message me privately and I will provide them with my full name, address, and contact information for my local law enforcement.
I am that confident that this is a non-issue.
Send to: logos01@TempEmail.net (Address will expire on Nov. 23, 2011)
What are you trying to prove, here? What’s the point of this?
The demonstration of the invalidity of the concern that this dialogue would be treated legally as a death threat, and furthermore of the insincerity of its being raised as a concern: after a window of more than 24 hours, not one message has arrived at that address (unless one was somehow removed between the intervals at which I checked it).
This, then, is evidence against the legitimacy of the complaint; evidence that what is really motivating these responses isn’t concern that this dialogue would be treated as a death threat, but some other thing. Precisely what that other thing is, my offer cannot distinguish.
Or maybe, you know, everyone here knows it wasn’t actually a death threat and has no desire to get you in legal trouble for no reason, but wanted to warn you it could be perceived that way out of genuine concern?
“As it stands your comment could be interpreted as a death threat. This is not cool and likely illegal.”
Logos, you don’t need to preach about utilitarian calculations to us. You have it the other way around. We don’t condemn your words because we can’t make them, we condemn them because we can make them better than you. (Note particularly in this case the willful refusal to accept the counterfactual, and the accusations of irresponsibility for “not emphasizing strongly enough” skepticism in reaching conclusions.)
No, what’s going on here is something significantly “other” than “everyone here knows it wasn’t actually a death threat [...] but wanted to warn you it could be perceived that way.”—those are mutually exclusive conditions, by the way; either everyone does not know this, or it can’t be perceived that way.
The truly ironic thing is that there isn’t a legitimate interpretation of my words that could make them a death threat. I responded to an initial counterfactual with a query as to the moral justification of refusing to take individual action in an end-of-the-world-if-you-don’t scenario.
In attempting to explore this, I was met with repeated willful refusals to engage the scenario, admonitions to “not be creepy”, and bald assertions that “I’m not better at moral calculus but worse”.
These responses, I cannot help but conclude, are demonstrative of cached moral beliefs inducing emotional responses that override clear-headed reasoning. I’m used to this; the overwhelming majority of people are frankly unable to start from the ‘sociopathic’ (that is, morally agnostic) view and work their way back to a sound moral epistemology. It is no surprise to me that the population of LW is mainly composed of “neurotypical” individuals. (Please note: this is not an assumption of superiority on my part.)
This is unfortunate, but… short of ‘taking the karma beating’ there’s really no way for me to demonstratively point that out in any effective way.
I don’t think I’m going to continue to respond any further in this thread, though. It’s ceased being useful to any extent, insofar as I can see.
What’s there to be disappointed with?
In this case? The demonstrated inability to parse counterfactuals from postulates, in emotionally charged contexts.
A counterfactual situation whose consequent is a death threat may still be a death threat, depending on your jurisdiction. You might want to seek legal advice, which I’m unable to provide.
A counterfactual situation whose consequent is a death threat may still be a death threat, depending on your jurisdiction.
The facility with which free exercise (free speech) would be applied to this particular dialogue leaves me sufficiently confident that I have absolutely no legal concerns to worry about whatsoever. The entire nature of counterfactual dialogue is such that you are making it clear that you are not associating the topic discussed with any particular reality. I.e., you are not actually advocating it.
And, frankly, if LW isn’t prepared to discuss the “harder” questions of how to apply our morality in such murky waters, and is only going to restrict itself to the “low-hanging fruit”—well… I’m fully justified in being disappointed in the community.
I expect better, you see, of a community that prides itself on “claiming” the term “rationalist”.