There may be a nanotech critical point. Getting to full advanced nanotech probably involves many stages of bootstrapping. If lots of nanobots have been designed on a computer, then the earliest stage of the bootstrapping process might be the last one to be designed. (Building a great nanobot with a mediocre nanobot may well be easier than building the mediocre nanobot from something even worse.) This would mean a sudden transition in which one group suddenly had usable nanotech.
So, can a team of 100 very smart humans, working together with hand-coded nanotech, stop an ASI from being created?
I would be unsurprised if blindly scanning and duplicating a human, to a resolution where memories and personality were preserved, were not that hard with hand-coded nanotech. (Something like a few researcher-months of effort.)
Making nanomachines that destroy GPUs seems not that hard either.
Nor does making enough money to simply buy up all the GPUs and top AI talent available.
Actually, finding everyone capable of doing AI research and paying them 2x as much to do whatever they like (as long as they don't publish and don't run their code on non-toy problems) sounds like a good plan in general.
"For only $10,000 a year in fine whisky, we can keep this researcher too drunk to do any dangerous AI research." But thousands more researchers like him still spend their nights sober, adjusting hyperparameters. That's why we're asking you to help in this charity appeal.
(This idea is meant more as interesting wild speculation; unintended incentives exist. That isn't to say it would be totally useless.)
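For a rough sense of scale on the buy-them-out plan above, here is a back-of-envelope sketch. Every figure in it (researcher headcount, salary, GPU spend) is an assumed placeholder chosen for illustration, not sourced data:

```python
# Back-of-envelope cost of the "pay everyone 2x to not do AI research" plan.
# All numbers below are assumed placeholders, not sourced data.

researchers = 30_000   # assumed: people capable of frontier AI research
avg_salary = 200_000   # assumed: average current salary, USD/year
multiplier = 2         # pay them 2x to work on whatever they like

annual_cost = researchers * avg_salary * multiplier
print(f"Annual buy-out cost: ${annual_cost / 1e9:.0f}B")  # -> $12B

# For comparison: an assumed annual global spend on AI accelerators.
gpu_market = 50e9      # assumed, USD/year
print(f"Assumed GPU market:  ${gpu_market / 1e9:.0f}B")   # -> $50B
```

Under these made-up numbers, the plan costs on the order of tens of billions per year, i.e. large but not obviously out of reach for a group that already makes "enough money to buy all the GPUs".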
If destroying GPUs is the goal, there seem to be much simpler, less speculative ways than nanomachines. The semiconductor industry is among the most vulnerable, as the pandemic has shown, with an incredibly long supply chain in which most steps have only a single supplier or a handful of them, defended against sabotage largely by "no one would actually do such a thing".
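To see why a long chain of single-source steps is so fragile, here is a toy calculation with purely assumed numbers: if each of 20 single-supplier steps independently has a 2% chance of disruption in a given year, the chain as a whole is disrupted about a third of the time.

```python
# Toy illustration of long-chain fragility; both numbers are assumptions
# chosen purely for illustration, not industry data.

single_source_steps = 20   # assumed: steps with effectively one supplier
p_disruption = 0.02        # assumed: chance any one step fails in a year

# The chain works only if every step works (assuming independence).
p_chain_fails = 1 - (1 - p_disruption) ** single_source_steps
print(f"P(chain disrupted in a year) = {p_chain_fails:.0%}")  # ~33%
```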
Of course, that assumes we don't have a huge hardware overhang (in which case current stockpiles might already be sufficient for doom), and that ASI will be based heavily on GPU computing at all.