I know both of you are speaking hypothetically, but please don’t make comments that could be read as advocating murder, or that could be read as creepily cavalier about the possibility.
I understand that this topic has a high yuck factor—but it is the duty of the rigorously disciplined rationalist to maintain that discipline even in the face of uncomfortable thoughts.
You’re missing Steven’s point: “avoid looking needlessly creepy”.
I’m not missing it. I’m rejecting it.
“Yuck factor” has nothing to do with it. The “duty of the rigorously disciplined rationalist” does not include ignoring others’ reactions to your statements.
Avoiding unpleasant and “creepy” topics merely because others find them unpleasant is a failure of that duty. The duty does, in fact, include ignoring others’ reactions to your choice of topic.
The topic was already framed, and the most vehement reactions have been to statements that were carefully framed with context; those reactions ignore that very context even as they are voiced.
To allow an entire topic to be squelched for no better reason than others saying “that is creepy”, or something analogous, is in fact a failure mode.
I really don’t want to be perceived as advocating murder. Please don’t get hung up on my use of the word “murder”; I really just meant deliberate killing. What I was talking about would be no more murder than the US military’s killing of Osama Bin Laden. Murder is bad and illegal. For the USGov to kill Bin Laden was both legal and good, hence definitely not murder.
Maybe if it turns out that UFAI is a big problem in, say, the 2030s, then pro-AI people will be viewed in that decade somewhat the way pro-Bin Laden people are viewed now.
You shouldn’t do it because it’s an invitation for people to get sidetracked. We try to avoid politics for the same reason.
Sidetracked from what?
From the topic, in this case “selection effects in estimates of global catastrophic risk”. If you casually mention that you don’t particularly care about humans, or that personally killing a bunch of them may be an effective strategy, the discussion is effectively hijacked. So it doesn’t matter that you don’t wish to do anybody harm.
I can’t control what other people say, but I didn’t at any point say that I don’t care about humans, nor did I say that personally killing anyone is ever a good idea.
My main point was that the probabilities of various x-risks don’t matter. My side point was that if it turned out that UFAI was a significant risk, then politically enforced Luddism would be the logical response. I like to make that point once in a while in the hope that SingInst will realize the wisdom of it.
It would be a response, but you have described it as “logical” instead of with an adjective describing some of its relative virtues.
Also, distinguish the best response for society from the best response for an advocate, even if you think they are nearly the same, just to show you’ve considered that.