Is this blog post potentially a source of X-risk? Popularizing the idea of “mix together chemicals you got in the mail” sounds like an attack vector for an AGI wanting to escape its box.
At the risk of arguing from fictional evidence, Eliezer writes in That Alien Message of an AGI coming up with a cover story to convince someone outside the box to construct nanomachines (amino acids!) to do its bidding.
We sent messages [...] to labs that did their equivalent of DNA sequencing and protein synthesis. We found some unsuspecting schmuck, and gave it a plausible story [...], and told it to mix together some vials it got in the mail. Protein-equivalents that self-assembled into the first-stage nanomachines, that built the second-stage nanomachines, that built the third-stage nanomachines...
“Make your own COVID-19 vaccine at home!” sounds like a pretty compelling cover story, to the point that nobody on LessWrong has yet commented on the possibility of this being the machinations of a malicious actor, despite this very specific scenario already being wargamed by Eliezer himself!
I’m not accusing the author of secretly being an AI writing this story; I think that’s rather unlikely. A less fictional framing is that normalizing the idea of “it’s really easy to try your own biology experiments at home” may increase X-risk from accidental or intentional biological hazards created by people who think this blog post is cool and are inspired to try their own experimentation at home.
More generally, if this blog post is an information hazard, what are the norms on LessWrong for discussing or promoting potential attention hazards? Currently we’re incentivized to get upvotes for cool, intellectually interesting posts and comments, and we don’t get upvotes for keeping information hazards to ourselves. ;)
I think another John Wentworth post is applicable here. It’s not hard to invent reasons why any given post might increase existential risk by some amount. (What if your comment encourages pro-censorship attitudes that hamper the collective intellectual competence we need to reduce existential risk?) In order to not function as trolling, you need to present a case for the risk being plausible, not just possible.
TIL, thanks for the information on that. I’m not trying to troll, my apologies if my comment comes across that way. It’s just interesting to me that this specific scenario was written about before, yet wasn’t surfaced in the discussion.
Is this blog post potentially a source of X-risk? Popularizing the idea of “mix together chemicals you got in the mail” sounds like an attack vector for an AGI wanting to escape its box.
No, an AGI would already have to have escaped its box to take over the companies from which you can order chemicals and send out what it wants. From there it could simply pay people to do what it wants. Emerald Cloud Lab also exists and could mix up whatever the AGI wants. Plenty of people take drugs they order on the Darknet.