This is nice to see. I’ve been generally kind of unimpressed by what have felt like overly generous handwaves re: gray gooey nanobots, and I do think biological cells are probably our best comparison point for how nanobots might work in practice.
That said, I see some of the discussion here veering in the direction of brainstorming novel ways to do harm with biology, which we have a general norm against in the biosecurity community – just wanted to offer a nudge to y’all to consider the cost vs. benefit of sharing takes in that direction. Feel free to follow up with me over DM!
I don’t see specifically gray gooey nanobots having a visible presence on LW. When people gesture at nanotech, it’s mostly in the sense of molecular manufacturing: local, self-contained infrastructure for producing advanced things like computers, a macroscale activity. This is important for quickly instantiating designs that can’t be constructed on existing infrastructure, with the molecular manufacturing capability itself bootstrapped from things like existing RNA printers.
This way, bringing new things into physical existence only requires having their designs, given a sufficiently versatile manufacturing toolset. If there is no extended delay from incrementally upgrading production facilities all over the world, the ability to design machines thousands of times faster than human civilization translates directly into the ability to manufacture them quickly.
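As a rough back-of-envelope illustration (the numbers here are arbitrary assumptions, not estimates): if a design effort that would take human civilization 30 years runs a thousand times faster, the serial time collapses to

$$\frac{30 \times 365\ \text{days}}{1000} \approx 11\ \text{days},$$

so given a sufficiently versatile manufacturing toolset, design speed rather than factory build-out becomes the binding constraint.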
(The diamondoid bacterium things Yudkowsky keeps mentioning don’t particularly need self-replication capabilities to make the same point; they could just as well be pumped out by Zerg queens foraging underground. The details don’t matter for the point being made: there are many independent ways of eating the world, and they don’t overall become less effective just because some of them turn out, on further reflection, to be infeasible.)
It’s a fair point that this topic touches on potential infohazards. I don’t think anything I’ve said so far is particularly novel, although in saying it I’m perhaps making the ideas less obscure. I also haven’t really gone into much detail (mostly because of my relative lack of expertise). My main aim has been to nudge others into taking the threats more seriously, even after seeing a related strawman cut down.
Strong +1 to this
I’m also happy to discuss the norms question further one-on-one -- the best way to contact me, anonymously or non-anonymously, is through this short form.
I assume the strong +1 was specifically on the infohazards angle? (Which I also strongly agree with.)
Yep, that’s right—thanks for clarifying!
Remember, we are talking about the power of intelligence here.
For nanobots to be possible, there needs to be one plan that works. For them to be impossible, every plan needs to fail.
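One way to make this asymmetry concrete (with purely illustrative numbers, not estimates): if there are $N$ independent candidate plans, each with only a small probability $p$ of being feasible, then

$$P(\text{at least one works}) = 1 - (1 - p)^N, \qquad \text{e.g.}\ p = 0.05,\ N = 100\ \Rightarrow\ 1 - 0.95^{100} \approx 0.994.$$

Ruling out any single plan barely moves that number; ruling out nanobots entirely requires all $N$ to fail.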
How unassailably solid did the argument for airplanes look before any were built?