If I understand your argument, it is as follows:
1. Self-replicating nanotech that also does something useful, and also also outcompetes biological life, and also also also faithfully self-replicates (i.e. you don’t end up in a situation where the nanobots that do the “replicate” task better at the cost of the “do something useful” task replicate better and end up taking over) is hard enough that even if it’s technically physically possible, it won’t be the path that the minimum-viable-superintelligence takes to gaining power.
2. There probably isn’t any other path to “sufficient power in the physical world to make more computer chips” that does not route through “humans do human-like stuff at human-like speeds for you”.
3. That implies that the sequence of events “the world looks normal and not at all like all of the chip fabs are fully automated, and then suddenly all the humans die of something nobody saw coming” is unlikely to happen.
4. But this is a contingent fact about the world as it is today, and it’s entirely possible to screw up this nice state of affairs, accidentally or intentionally.
5. Therefore, even if you think that you are on a path to a pivotal act, if your plan starts to look like “and in step 3 I give my AI a fully-automated factory which can produce all components of itself given sufficient raw materials and power, and can also build a chip fab, and then in step 4 I give my AI instructions that it should perform an act which looks to me like a pivotal act, which it will surely do by doing something amazing with nanotech”, you should stop and reevaluate your plan.
Does this sound like an accurate summary to you?
Also, as a side note, is there accepted terminology to distinguish between “an act that the actor believes will be pivotal” and “an act that is in fact pivotal”? I find myself wanting to make that distinction quite a bit, and it would be nice if there were a standard term for it.
I think 1-4 are good summaries of the arguments I’m making about nanobots. I would add another point: the reason it is hard to make nanobots is not a lack of computational ability (although that could also be a bottleneck) but simply a lack of knowledge about the physical world, which can only be resolved by learning more about the physical world in a way that is relevant to making nanobots.
On point 5, from my current perspective, I think the idea of pivotal acts is totalitarian, not a good idea, and most likely to screw things up if ever attempted. So I wasn’t mainly trying to make a statement about them here (that would be another post). I was making a side argument about them that is roughly summarized in point 5: giving an AI full physical capabilities seems like a very dangerous step, and if it is part of your plan for a pivotal act, you should be especially worried that you are making things worse.