“We could simulate a bunch of human-level scientists trying to build nanobots.” This idea seems far-fetched:
If it were easy to create nanotechnology just by hiring a bunch of human-level scientists, we could do that directly, without using AI at all.
Perhaps we could simulate thousands and thousands of human-level intelligences (although of course these would not be remotely human-like intelligences; they would be part of a deeply alien AI system) at accelerated speeds. But this seems like it would probably be more hardware-intensive than just turning up the dial and running a single superintelligence, since the compute bill scales with the number of simulated minds times the speedup factor (see the back-of-envelope sketch after this list). In other words, this proposal seems to have a very high “alignment tax”. And even after paying that hefty tax, I’d still be worried about alignment problems if I were simulating thousands of alien intelligences at super-speed!
Besides all the hardware you’d need, wouldn’t this be very complicated to implement on the software side, with not much overlap with today’s AI designs?
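To make the “hardware-intensive” worry concrete, here is a minimal back-of-envelope sketch. Every number in it (per-mind compute, headcount, speedup) is a hypothetical placeholder I made up, not an estimate anyone has defended; the point is just that the cost multiplies linearly:

```python
# Back-of-envelope: compute cost of simulating many human-level minds at speed.
# ALL NUMBERS BELOW ARE HYPOTHETICAL PLACEHOLDERS, chosen only to show the scaling.

FLOPS_PER_MIND = 1e16   # assumed sustained FLOP/s for one human-level simulation
NUM_MINDS = 10_000      # "thousands and thousands" of simulated scientists
SPEEDUP = 100           # subjective years per wall-clock year

total_flops = FLOPS_PER_MIND * NUM_MINDS * SPEEDUP
print(f"Sustained compute needed: {total_flops:.1e} FLOP/s")
# => 1.0e+22 FLOP/s: a factor of NUM_MINDS * SPEEDUP = 1,000,000 over one
# real-time mind. Whatever a single superintelligence would cost to run, this
# proposal has to beat a millionfold multiplier before it starts looking cheap.
```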
Has anyone done a serious analysis of how much semiconductor capacity could be knocked out using things like cruise missile strikes on fabs plus nationalizing and shutting down supercomputers? I would be interested to know whether this is truly a path towards disabling something like 90% of the world’s useful-to-AI-research compute, or whether the number is much smaller because there is too much random GPU capacity out there in the wild even after you commandeer TSMC fabs and AWS datacenters.
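I haven’t seen such an analysis, but here is a toy decomposition of what it would have to pin down. The category shares below are invented placeholders, not real market data; the sketch only shows that the headline number turns almost entirely on the size of the dispersed-GPU tail:

```python
# Toy decomposition of "fraction of AI-useful compute you could disable."
# The shares below are INVENTED PLACEHOLDERS, not real market data.

compute_share = {                      # assumed fraction of world AI-relevant compute
    "hyperscaler_datacenters": 0.55,   # e.g. AWS/Azure/GCP-style facilities
    "national_supercomputers": 0.10,
    "smaller_clusters":        0.15,
    "dispersed_gpus":          0.20,   # consumer cards, small labs, "in the wild"
}

seizable = ["hyperscaler_datacenters", "national_supercomputers"]
disabled = sum(compute_share[k] for k in seizable)
print(f"Disabled by seizing big targets: {disabled:.0%}")   # 65% under these guesses
print(f"Still running in the wild:       {1 - disabled:.0%}")
# Whether the answer is ~90% or much lower depends on the dispersed-GPU share,
# which is exactly the quantity a serious analysis would need to estimate.
```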