Selection Effects in estimates of Global Catastrophic Risk
Here’s a poser that occurred to us over the summer, and one that we couldn’t really come up with any satisfactory solution to. The people who work at the Singularity Institute have a high estimate of the probability that an Unfriendly AI will destroy the world. People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, then nuclear war is the single most likely way in which you might die next year).
It seems like there are good reasons to take these numbers seriously, because Eliezer is probably the world expert on AI risk, and Hellman is probably the world expert on nuclear risk. However, there’s a problem—Eliezer is an expert on AI risk because he believes that AI risk is a bigger risk than nuclear war. Similarly, Hellman chose to study nuclear risks and not AI risk because he had a higher than average estimate of the threat of nuclear war.
It seems like it might be a good idea to know what the probability of each of these risks is. Is there a sensible way for these people to correct for the fact that the people studying these risks are those who have high estimates of them in the first place?
This isn’t right. Eliezer got into the AI field because he wanted to make a Singularity happen sooner, and only later determined that AI risk is high. Even if Eliezer thought that nuclear war is a bigger risk than AI, he would still be in AI, because he would be thinking that creating a Singularity ASAP is the best way to prevent nuclear war.
I suggest that if you have the ability to evaluate the arguments on an object level, then do that, otherwise try to estimate P(E|H1) and P(E|H2) where E is the evidence you see and H1 is the “low risk” hypothesis (i.e., AI risk is actually low), H2 is the “high risk” hypothesis, and apply Bayes’ rule.
Here’s a simple argument for high AI risk. “AI is safe” implies that either superintelligence can’t be created by humans, or any superintelligence we do create will somehow converge to a “correct” or “human-friendly” morality. Either of these may turn out to be true, but it’s hard to see how anyone could (justifiably) have high confidence in either of them at this point in our state of knowledge.
As for P(E|H1) and P(E|H2), I think it’s likely that even if AI risk is actually low, there would be someone in the world trying to make a living out of “crying wolf” about AI risk, so that alone (i.e., an apparent expert warning about AI risk) doesn’t increase the posterior probability of H2 much. But what would be the likelihood of that person also creating a rationalist community and trying to “raise the sanity waterline”?
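A minimal numeric sketch of that update, with made-up numbers (neither the prior nor the likelihoods below come from the comment above; they are purely illustrative):

```python
# Illustrative Bayes update. E = "an apparent expert warns loudly about AI risk";
# H2 = "AI risk is actually high"; H1 = "AI risk is actually low".
# All numbers below are hypothetical.
prior_h2 = 0.10                # prior that the risk is high
prior_h1 = 1 - prior_h2

p_e_given_h2 = 0.90            # a loud warner is very likely if the risk is real
p_e_given_h1 = 0.60            # but still fairly likely if it isn't (selection effect)

posterior_h2 = (p_e_given_h2 * prior_h2) / (
    p_e_given_h2 * prior_h2 + p_e_given_h1 * prior_h1
)
print(f"P(H2 | E) = {posterior_h2:.3f}")   # ~0.143, up from the 0.10 prior
```

On these invented numbers the warning moves the posterior only modestly, which is the point of comparing P(E|H1) with P(E|H2): evidence that is nearly as likely under the “low risk” hypothesis as under the “high risk” one is weak evidence.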
I think people should discount risk estimates fairly heavily when an organisation is based around doom mongering. For instance, The Singularity Institute, The Future of Humanity Institute and the Bulletin of the Atomic Scientists all seem pretty heavily oriented around doom. Such organisations initially attract those with high risk estimates, and they then actively try and “sell” their estimates to others.
Obtaining less biased estimates seems rather challenging. The end of the world would obviously be an unprecedented event.
The usual way of eliciting probability is with bets. However, with an apocalypse this doesn’t work too well: a bet that only pays out after everyone is dead can never be collected, so attempts to use bets have some serious problems.
That’s why I refuse to join SIAI or FHI. If I did, I’d have to discount my own risk estimates, and I value my opinions too much for that. :)
One should read materials from the people in the organization from before it was formed and grant those extra credence depending on how much one suspects the organization has written its bottom line first.
Note, however, that this systematically fails to account for the selection bias whereby doom-mongering organisations arise from groups of individuals with high risk estimates.
In the case of Yudkowsky, he started out all “yay, Singularity”—and was actively working on accelerating it:
http://www.wired.com/science/discoveries/news/2001/04/43080?currentPage=all
This was written before he hit on the current doom-mongering scheme. According to your proposal, it appears that we should be assigning such writings extra credence—since they reflect the state of play before the financial motives crept in.
Yes, those writings were also free from financial motivation and less subject to the author’s feeling the need to justify them than currently produced ones. However, notice that other considerations from before there was a financial motivation also militate against them rather strongly.
An analogy: if someone wants a pet and begins by thinking that they would be happier with a cat than a dog, and writes why, and then thinks about it more and decides that no, they’d be happier with a dog, and writes why, and then gets a dog, and writes why that was the best decision at the time with the evidence available, and in fact getting a dog was actually the best choice, the first two sets of writings are much more free from this bias than the last set. The last set is valuable because it was written with the most information available and after the most thought. The second set is more valuable than the first set in this way. The first set is in no similar way more valuable than the second set.
As an aside, that article is awful. Most glaringly, he said:
I don’t see a special problem...evaluate the arguments, try to correct for biases. Business as usual. Or do you suspect there is a new type of bias at work here?
One way of testing this is to see whether people are willing to discuss existential risk threats that cannot be solved by giving them money. Such comments do exist (see for example Stephen Hawking’s comments about the danger of aliens). It is however interesting to note that he’s made similar remarks about the threat of AI. (See e.g. here.) I’m not sure whether such evaluations are relevant.
Also, I don’t think it follows that people like Yudkowsky and Hellman necessarily decide to study the existential risks they do because they have a higher than average estimate for the threats in question. They may just have internalized the threats more. Most humans simply don’t internalize existential risks in a way that alters their actions, even if they are willing to acknowledge high probabilities of problems.
An attitude of “faster” might help a little to deal with the threat from aliens.
Our actions can probably affect the issue—at least a little—so money might help.
Hawking’s comments are pretty transparently more about publicity than fundraising, though.
I’d prefer humanity choose to cooperate with aliens if we are in the stronger position. But I agree that we shouldn’t expect them to do the same, and that this does argue for generic importance of developing technology faster. (On the other hand, intelligent life seems to be really rare, so trying to outrace others might be a bad idea if there isn’t much else, or if the reason there’s so little is because of some future filtration event.)
Nuclear weapons have been available on the “black market” (thanks to sloppy soviet handling practices) for decades, yet no terrorist or criminal group has ever used a nuclear fission initiation device. Nuclearrisk.org claims “terrorists may soon get their own button on the vest”, citing Al-Qaeda’s open desire to acquire nuclear weapons.
I am unable, if I assume fully honest and rational assessments, to reconcile these points of fact with one another. They disagree with each other. Given that many of these assessments of risk also carry the implicit assumption that if a single nuke is used, the whole world will start glowing in the dark (see http://news.stanford.edu/news/2009/july22/hellman-nuclear-analysis-071709.html for an example of this, from Martin Hellman himself), it gets further absurd.
In other words; folks need to be careful, when crafting expert opinions, to avoid Déformation professionnelle.
Cite please. From Pinker’s new book:
200 Soviet nukes lost in Ukraine—article from Sept 13, 2002. There have been reported losses of nuclear submarines at sea since then as well (though those are improbably recoverable). Note: even if that window is closed now, it was open then, and no terrorist groups used that channel to acquire nukes—nor is there, as your citation notes, even so much as an actually recorded attempt to do so—in the entirety of that window of opportunity.
When dozens of disparate extremist groups failed to even attempt to acquire a specific category of weapon, we can safely at that point generalize into a principle that governs how ‘terrorists’ interact with ‘nukes’ (in this case) such that they are exceedingly unlikely to want to do so.
In this case, I assert it is because all such groups are inherently political, and as such the knowable political fallout (pun intended) of using a nuclear bomb is sufficient that it in and of itself acts as a deterrent against their use: I am possessed of a strong belief that any terrorist organization that used a nuclear bomb would be eradicated by the governments of every nation on the planet. There is no single event more likely to unify the hatred of all mankind against the perpetrator than the rogue use of a nuclear bomb; we have stigmatized them to that great an extent.
A Pravda article about an accounting glitch is not terribly convincing. Accounting problems do not even mean that the bombs were accessible at any point (assuming they existed), much less that they have been available ‘on the “black market” (thanks to sloppy soviet handling practices) for decades’! Srsly.
(Nor do lost submarines count; the US and Russia have difficulties in recovering them, black-market groups are right out, even the drug cartels can barely build working shallow subs.)
You’ve missed the point of what I was asserting with that article.
I was demonstrating that the Soviets did not keep proper track of their nuclear weapons, to the point where even they did not know how many they had. The rest follows from there with public-knowledge information not the least of which being the extremity of corruption that existed in the CCCP.
Risk mitigation groups would gain some credibility by publishing concrete probability estimates of “the world will be destroyed by X before 2020” (and similar for other years). As many of the risks are a rather short event (think nuclear war / asteroid strike / singularity), the world will be destroyed by a single cause and the respective probabilities can be summed. I would not be surprised if the total probability comes out well above 1. Has anybody ever compiled a list of separate estimates?
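A toy illustration of summing such estimates, with entirely invented figures (no group has published these particular numbers):

```python
# Hypothetical published estimates of P("the world is destroyed by X before 2020").
# If each risk is a short, single-cause event, the estimates can simply be summed;
# a total above 1 would show the combined set of estimates is incoherent.
estimates = {
    "unfriendly AI": 0.30,
    "nuclear war": 0.35,
    "engineered pandemic": 0.25,
    "asteroid impact": 0.15,
}
total = sum(estimates.values())
print(f"Implied P(doom before 2020) = {total:.2f}")   # 1.05 -> jointly incoherent
```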
On a related note, how much of the SIAI is financed on credit? Any group which estimates high risks of disastrous events should be willing to pay higher interest rates than market average. (As the expected amount of repayments is reduced by the nontrivial probability of everyone dying before maturity of the contract).
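A rough sketch of the implied premium (not the commenter’s own calculation), assuming a risk-neutral lender on a one-year loan who is repaid in full unless extinction occurs first; the helper name is purely illustrative:

```python
# If the borrower assigns annual extinction probability p, the lender recovers
# nothing in that case, so the quoted rate r must satisfy
#   (1 - p) * (1 + r) = 1 + r_market   =>   r = (1 + r_market) / (1 - p) - 1
def doom_adjusted_rate(market_rate: float, p_doom: float) -> float:
    return (1 + market_rate) / (1 - p_doom) - 1

# e.g. a 10% annual doom estimate against a 5% market rate (hypothetical numbers)
print(f"{doom_adjusted_rate(0.05, 0.10):.1%}")   # ~16.7%
```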
I don’t care at all about the long-term survival of the human race. Is there any reason I should? I care about the short-term survival of humanity but only because it affects me and other people that I care about. But going to prison would also affect me and the people I care about so it would be a big deal. At least like 25% as bad as the end of humanity.
Certainly that is true in this case. I’m not going to put a lot of work into developing an elaborate plan to do something that I don’t think should be done.
Define “long-term”, then, as “more than a decade from today”. I.e.; “long-term” includes your own available lifespan.
Would you be so kind as to justify this assertion for me? I find my imagination insufficient to the task of assigning equivalent utility metrics to “me in prison” == 0.25x “end of the species”.
… I really hate it when people reject counterfactuals on the basis of their being counterfactuals alone. It’s a dishonest conversational tactic.
Well, I give equivalent utility to “death of all the people I care about” and “end of the species.” Thinking about it harder, I feel like “death of all the people I care about” is more like 10-100X worse than my own death. Me going to prison for murder is about as bad as my own death, so it’s more like .01-.1x end of humanity. Can you imagine that?
I was considering writing a long thing about your overconfidence in thinking you could carry out such a plan without any (I am presuming) experience doing that kind of thing. I was going to explain how badly you are underestimating the complexity of the world around you and overestimating how far you can stray from your own personal experience and still make reasonable predictions. But this is just a silly conversation that everyone else on LW seems to hate, so why bother?
I’m curious, now, as to what nation or state you live in.
Well—in this scenario you are “going to die” regardless of the outcome. The only question is whether the people you care about will. Would you kill others (who were themselves also going to die if you did nothing) and allow yourself to die, if it would save people you cared about?
(Also, while it can lead to absurd consequences—Eliezer’s response to the Sims games for example—might I suggest a re-examination of your internal moral consistency? As it stands it seems like you’re allowing many of your moral intuitions to fall in line with evolutionary backgrounds. Nothing inherently wrong with that—our evolutionary history has granted us a decent ‘innate’ morality. But we who ‘reason’ can do better.)
I didn’t list any plan. This was intentional. I’m not going to give pointers to others who might be seeking them out for reasons I personally haven’t vetted on how to do exactly what this topic entails. That, unlike what some others have criticized about this conversation, actually would be irresponsible.
That being said, the fact that you’re addressing this to the element you are is really demonstrating a further non sequitur. It doesn’t matter whether or not you believe the scenario plausible: what would your judgment of the rightfulness of carrying out the action yourself in the absence of democratic systems be?
Why allow your opinions to be swayed by the emotional responses of others?
In my case, I’m currently sitting at −27 on my 30-day karma score. That’s not even the lowest I’ve been in the last thirty days. I’m not really worried about my popularity here. :)
I live in Illinois. I am curious as to why you are curious.
Probably. For instance, I would try to defend my wife/child from imminent physical harm even if it put me in a lot of danger. If that meant trying to kill someone then I would do that but in that case it would be justifiable and I probably wouldn’t go to prison if I survived.
I feel like we are doomed to talk about different things. I think you are talking about “morally right,” which I don’t usually think about unless I am trying to convince someone to do something against their own interest. I observe that large democratic governments deliberately kill people all the time without consequence. I also observe that individuals have more trouble doing so. Consequently, I think that individuals trying to kill people is a bad idea. So it’s not right in the same sense that exercising a 60 delta call 3 mos from expiration is not right.
My opinions are unaffected but my actions might be. If I am telling jokes and everyone is staring at me stone faced I’m likely to stop.
I imagine if you lived in Norway you would not be of that opinion.
What are Norwegian prisons like?
Yeah, that’s… what I was getting at. (Was this meant as a refutation somehow? I’m confused.)
How many people you didn’t know would you equate to being “of equal concern” to you as one person you do know when deciding whether or not it’s worth it to risk your own life to save them? Please express this as a ratio—unknowns:knowns -- and then, if you like, knowns:loveds.
“because Eliezer is probably the world expert on AI risk”
There are no experts on AI risk. There’s nowhere to get expertise from. He read some SF, got caught up in an idea, did not study (self-study or otherwise) CS or any actually relevant body of knowledge to the point of producing anything useful, and he is a very convincing writer. You’ll get actual experts in 2050. He’s a dilettante.
People follow some sort of distribution on their risk estimates. Eliezer is just the far far off end of the bell curve on the risk estimate for AI, among those with writing skills. He does make some interesting points, but he’s not a risk estimator.
I think you need to consider this point further. Before you go through the effort of estimating a probability, it is good to know if there is any value to such an estimate. For instance, if you did a lot of work and figured out that the probability that the world would be destroyed by UFAI was 5%, would that change your behavior in any way? What if you found it to be 50% or .000005%? Personally, I don’t think I would do much differently. Maybe in the 50% case I would vote for the mass murder of all AI researchers, but currently I don’t know of any major political candidates with that in their platform. Other than that it seems like pretty useless information to me.
If you were willing to assert that AI researchers should be ‘murdered’, why would you limit yourself to the political process to that end? Why not start picking them off in various ways through your own direct actions? (Such as saving up enough money to put out hits on them all simultaneously, etc..)?
What I’m getting at is; why do you restrict what you believe to be “right and necessary” to a democratic process when you could take individual action to that end as well?
There are several reasons why I wouldn’t want to personally murder AI researchers, even if I believed that they were going to destroy the world (which I don’t).
1. I don’t want to go to prison.
2. I generally don’t like killing mammals. People are some of the least cute mammals out there but it would still take an emotional toll to kill them. I’d rather outsource the killing of mammals to others.
3. I think that killing all AI researchers would be fairly effective government policy in terms of reducing the risk of UFAI. I think that my trying to kill AI researchers would do nothing to prevent UFAI.
I know both of you are speaking hypothetically, but please don’t make comments that could be read as advocating murder, or that could be read as creepily cavalier about the possibility.
I understand that this topic has a high yuck factor—but it is the duty of the rigorously disciplined rationalist to maintain that discipline even in the face of uncomfortable thoughts.
You’re missing Steven’s point: “avoid looking needlessly creepy”.
I’m not missing it. I’m rejecting it.
“Yuck factor” has nothing to do with it. The “duty of the rigorously disciplined rationalist” does not include ignoring others’ reactions to your statements.
Avoiding unpleasant and “creepy” topics merely because others find them unpleasant is to fail in that duty. That duty does, in fact, include ignoring others’ reactions to your choice of topic.
The topic was already framed; and the reactions have been most vehement to statements well-framed with context, ignoring that context as they espouse those reactions.
To allow an entire topic to be squelched for no better reason than others saying “that is creepy”, or something analogous, is in fact a failure mode.
I really don’t want to be perceived as advocating murder. Please don’t get hung up on my use of the word “murder.” I really just meant deliberate killing. What I was talking about would not be murder any more than when the US military killed Osama Bin Laden. Murder is bad and illegal. For the USGov to kill Bin Laden was both legal and good, hence definitely not murder.
Maybe if it turns out that UFAI is a big problem in like the 2030s then pro-AI people will be viewed in that decade somewhat like how pro-Bin Laden people are viewed now.
You shouldn’t do it because it’s an invitation for people to get sidetracked. We try to avoid politics for the same reason.
Sidetracked from what?
From the topic, in this case “selection effects in estimates of global catastrophic risk”. If you casually mention you don’t particularly care about humans or that personally killing a bunch of them may be an effective strategy the discussion is effectively hijacked. So it doesn’t matter that you don’t wish to do anybody harm.
I can’t control what other people say but I didn’t at any point say that I don’t care about humans, nor did I say that personally killing anyone is a good idea ever.
My main point was that the probabilities of various xRisks don’t matter. My side point was that if it turned out that UFAI was a significant risk, then politically enforced Luddism would be the logical response. I like to make that point once in a while in the hopes that SingInst will realize the wisdom of it.
It would be a response, but you have described it as “logical” instead of with an adjective describing some of its relative virtues.
Also, distinguish the best response for society and the best response for an advocate, even if you think they are nearly the same, just to show you’ve considered that.
That would require that I had asserted I agreed with the underlying premise that UFAI was a significant risk.
At the moment, I do not.
I also find it rather unsurprising that the comment in question has been as far down-voted as it has been, though once again I am left noting how while I am not surprised, I am disappointed with LW in general. This is happening too often, I fear.
Most of us frown on irresponsible encouragements to criminal acts.
As well you should. Of course, this carries a number of interesting assumptions:
1. The assumption of irresponsibility.
2. The assumption of encouragement.
3. The assumption of the ‘wrongness’ of criminal acts.
Let me rephrase this: If you believed—very strongly (say confidence over 90%) -- there was a strong chance that a specific person was going to destroy the world, and you also knew that only you were willing to acknowledge the material evidence which led you to this conclusion...
Would you find sitting still and letting the world end merely because ensuring the survival of the human race was criminal an acceptable thing to do?
In that counterfactual, I do not. I find it reprehensibly irresponsible, in fact.
Logos, you don’t need to preach about utilitarian calculations to us. You have it the other way around. We don’t condemn your words because we can’t make them, we condemn them because we can make them better than you.
It was your posts I condemned and downvoted as irresponsible, it was your posts’ utility that I considered negative, not lone heroic actions that saved the world from inventors of doom. You did none of the latter, you did some of the former. So it’s the utility of the former that’s judged.
Also, if I ever found myself perceiving that “only I was willing to acknowledge the material evidence which led me to this conclusion...”, the probabilities would be severely in favour of my own mind having cracked, rather than me being the only rational person in the world. We run on corrupted hardware!
That you don’t seem to consider that, nor do you urge others to consider it, is part of the fatal irresponsibility of your words.
Then do so.
I don’t seem to consider it because it is a necessary part of the calculus of determining whether a belief is valid. This would be why I mentioned “material evidence” at all—an indicator that checks and confirmations are necessary to a sufficiently rigorous epistemology. The objection of “but it could be a faulty belief” is irrelevant here. We have already done away with it in the formation of the specific counterfactual. That it is an exceedingly unlikely counterfactual does not change the fact that it is a useful counterfactual.
What I’m elucidating here is a rather ugly version of a topic that Eliezer was discussing with his Sword of Good parable: to be effective in discerning what is morally correct one must be in the practice and habit of throwing away cached moral beliefs and evaluating even the most unpleasant of situations according to their accepted epistemological framework’s methodology for such evaluations.
The AI serial-killer scenario is one such example.
Don’t you think that a remotely responsible post should have at the very least emphasized that significantly more than you did?
If tomorrow some lone nut murders an AI researcher, and after being arrested says they found encouragement in your specific words, and also says they never noticed you saying anything about “checks and confirmations”, wouldn’t you feel remotely responsible?
And as a sidenote, the lone nuts you’d be encouraging would be much more likely to murder FAI researchers, than those uFAI researchers that’d be working in military bases with the support of Russia, or China, or North Korea, or America. Therefore, if anything, they’d be more likely to bring about the world’s doom, not prevent it.
Any person insufficiently familiar with rational skepticism to the point that they would not doubt their own conclusions and go through a rigorous process of validation before reaching a “90%” certainty statement would be immune to the kind of discourse this site focuses on in the first place.
It’s not just implicit; it’s necessary to reach that state. It’s not irresponsible to know your audience.
Then they are a lunatic who does not know how to reason and would have done it one way or the other. In fact, this is already a real-world problem—and my words have no impact on that one way or the other on those individuals.
No. Nor should I. Any person who could come to a statement of “I am 90% certain of X” (as I used that 90% as a specific inclusion in the counterfactual) who also could not follow the dialogue-as-it-was to the reasonable conclusion that it was a counterfactual… well, they would have had their conclusion long before they read my words.
I’m curious as to what makes you believe this to be the case. As far as I am aware, the fundamental AGI research ongoing in the world is currently being conducted in universities. The uFAI and the FAI ‘crowd’ are undifferentiated, today, in terms of their accessibility.
What is your certainty for this conclusion, and what rigorous process of validation did you use to arrive to it?
I do not presume to know what secret research on the subject is or is not happening sponsored by governments around the world, but if any such government-sponsored work is happening in secret I consider it significantly more likely that it is uFAI, and significantly less likely that its participants would be likely to be convinced of the need for Friendliness than independent (and thus significantly more unprotected) researchers.
My certainty is fairly high, though of course not absolute. I base it off of my knowledge of how humans form moral convictions; how very few individuals will abandon cached moral beliefs, and the reasons I have ever encountered for individuals doing so (either through study of psychology, reports of others’ studies of psychology—including the ten years I have spent cohabitating with a student of abnormal psychology), personal observations into the behaviors of extremists and conformists, and a whole plethora of other such items that I just haven’t the energy to list right now.
I’m not particularly given to conspiratorial paranoia. DARPA is the single most likely resource for such, and having been in touch with some individuals from that “area of the world” I know that our military has strong reservations with the idea of advancing weaponized autonomous AI.
Besides, the theoretical groundwork for AGI in general is insufficient to even begin to assign high probability to AI itself coming about anytime within the next generation, Friendly or otherwise. IA (intelligence augmentation) is far more likely to occur, frankly. Especially with the work of folks like Theodore Berger.
However, you here have contradicted yourself: you claim to have no special knowledge yet you also assign high probability to uFAI researchers surviving a conscientious pogrom of AI researchers.
This is contradictory.
Quoted for posterity.
In that case, allow me to add that I believe the current likelihood of UFAI to be well below any other known species-level existential risk, and that I also believe that the current crop of AGI researchers are sufficiently fit to address this problem.
I wouldn’t be terribly surprised, though, if this were the sort of consideration likely to be conveniently ignored by those in charge of enforcing the relevant laws in your jurisdiction!
Anyone interested in “reporting” me to local law enforcement need only message me privately and I will provide them with my full name, address, and contact information for my local law enforcement.
I am that confident that this is a non-issue.
Send to: logos01@TempEmail.net (Address will expire on Nov. 23, 2011)
What are you trying to prove, here? What’s the point of this?
The demonstration of the invalidity of the raised concern of this dialogue being treated legally as a death threat, and furthermore the insincerity of its being raised as a concern: after a larger than 24-hour window not one message has arrived at that address (unless it was removed between the intervals I checked it, somehow).
This, then, is evidence against the legitimacy of the complaint; evidence for the notion that what’s really motivating these responses, then, isn’t concerns that this dialogue would be treated as a death threat, but some other thing. What precisely that other thing is, my offer could not differentiate between.
Or maybe, you know, everyone here knows it wasn’t actually a death threat and has no desire to get you in legal trouble for no reason, but wanted to warn you it could be perceived that way out of genuine concern?
“As it stands your comment could be interpreted as a death threat. This is not cool and likely illegal.”
Logos, you don’t need to preach about utilitarian calculations to us. You have it the other way around. We don’t condemn your words because we can’t make them, we condemn them because we can make them better than you. ( Note particularly in this case the willful refusal to accept the counterfactual and the accusations of irresponsibility for “not emphasizing strongly enough” skepticism in reaching conclusions. )
No, what’s going on here is something significantly “other” than “everyone here knows it wasn’t actually a death threat [...] but wanted to warn you it could be perceived that way.”—those are mutually exclusive conditions by the way; either everyone does not know this, or it can’t be perceived that way.
The truly ironic thing is that there isn’t a legitimate interpretation of my words that could make them a death threat. I responded to an initial counterfactual with a query as to the moral justification of refusing to take individual action in an end-of-the-world-if-you-don’t scenario.
In attempting to explore this, I was met with repeated willful refusals to engage the scenario, admonitions to “not be creepy”, and bald assertions that “I’m not better at moral calculus but worse”.
These responses, I cannot help but conclude, are demonstrative of cached moral beliefs inducing emotional responses overriding clear-headed reasoning. I’m used to this; the overwhelming majority of people are frankly unable to start from the ‘sociopathic’ (morally agnostic, that is) view and work their way back to a sound moral epistemology. It is no surprise to me that the population of LW is mainly comprised of “neurotypical” individuals. (Please note: this is not an assumption of superiority on my part.)
This is unfortunate, but… short of ‘taking the karma beating’ there’s really no way for me to demonstratively point that out in any effective way.
I don’t think I’m going to continue to respond any further in this thread, though. It’s ceased being useful to any extent, insofar as I can see.
What’s there to be disappointed with?
In this case? The demonstrated inability to parse counterfactuals from postulates, in emotionally charged contexts.
A counterfactual situation whose consequent is a death threat may still be a death threat, depending on your jurisdiction. You might want to seek legal advice, which I’m unable to provide.
The facility with which free exercise (free speech) would be applied to this particular dialogue leaves me sufficiently confident that I have absolutely no legal concerns to worry about whatsoever. The entire nature of counterfactual dialogue is such that you are making it clear that you are not associating the topic discussed with any particular reality. I.e.; you are not actually advocating it.
And, frankly, if LW isn’t prepared to discuss the “harder” questions of how to apply our morality in such murky waters, and is only going to reserve itself to the “low-hanging fruit”—well… I’m fully justified in being disappointed in the community.
I expect better, you see, of a community that prides itself on “claiming” the term “rationalist”.