Sure, but this takes time and resources, and you get sublinear scaling in compute/$ for datacenters/supercomputers. Nvidia doesn't yet produce a million high-end GPUs in an entire year. GPT-4 training already used a noticeable fraction of Nvidia's flagship GPU output. Nvidia/TSMC can't easily scale this up by many OOMs; even one OOM will take time.
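To give a rough sense of the supply math, here's a back-of-envelope sketch; the numbers (annual flagship output, GPUs per frontier run) are illustrative assumptions, not reported figures:

```python
# Back-of-envelope sketch of the GPU supply constraint.
# All numbers below are illustrative assumptions, not reported figures.

flagship_gpus_per_year = 500_000   # assumed annual output of flagship datacenter GPUs
gpus_per_frontier_run = 25_000     # assumed GPUs tied up by one GPT-4-scale training run

fraction_of_supply = gpus_per_frontier_run / flagship_gpus_per_year
print(f"One frontier run uses ~{fraction_of_supply:.0%} of a year's flagship output")

# Scaling a training run by successive orders of magnitude at constant hardware
# quickly consumes, then exceeds, total annual output:
for ooms in range(1, 4):
    needed = gpus_per_frontier_run * 10**ooms
    print(f"+{ooms} OOM -> {needed:,} GPUs "
          f"({needed / flagship_gpus_per_year:.1f}x annual output)")
```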
Or you build some biological compute-substrate that literally just makes very large brain blobs that you can somehow use for computation.
There are some early demonstrations of small neural circuits built this way, but it's very far from any practical tech, with much riding on the 'somehow'.
There are tons of different ways to get many OOMs of improvement here.
Where? Your two poor examples provide very little, and do not multiply together.
You seem to repeatedly be switching back and forth between “what is feasible with current tech” and “what is feasible with future tech”. If you don’t think that superhuman AI can make novel technological developments, then of course you shouldn’t expect any kind of fast takeoff really. That position also seems pretty weak to me.
My model is one of mostly smooth, continuous (but crazy transformative) progress following something like the Roodman model to a singularity around ~2048, vs EY's model of a sudden hard takeoff of a single AGI. To the extent I'm switching back and forth between the near future and the farther future, it is primarily because I'm replying to those construing my arguments about the near future to apply to the farther future, or vice versa.
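To spell out what I mean by "something like the Roodman model": its deterministic core is superexponential (hyperbolic) growth, roughly dY/dt = a·Y^(1+s) with s > 0, which reaches a finite-time singularity smoothly rather than via a sudden discontinuity. A minimal sketch (parameter values are illustrative only, and the actual model adds a stochastic term):

```python
# Minimal deterministic sketch of hyperbolic ("Roodman-style") growth:
#   dY/dt = a * Y**(1 + s),  with s > 0
# which diverges at a finite time t* = Y0**(-s) / (a * s), unlike exponential growth.
# Parameter values are illustrative only; the real model is stochastic.

a, s, Y0 = 0.03, 0.5, 1.0

t_singularity = Y0**(-s) / (a * s)
print(f"Finite-time singularity at t* ~ {t_singularity:.1f}")

# Closed-form solution up to t*:  Y(t) = (Y0**(-s) - a*s*t)**(-1/s)
for t in range(0, int(t_singularity), 10):
    Y = (Y0**(-s) - a * s * t) ** (-1 / s)
    print(f"t = {t:3d}  Y = {Y:10.2f}")
```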
Makes sense, but I think the key point to then pay attention to is the question of how fast AGI could make technological hardware and software progress. Also, my current model of Eliezer thinks that the hard takeoff stuff is more likely to happen after the AI has killed everyone (or almost everyone), not before, so it's also not super clear how much that matters (the section in your post about bioweapons touches on this a bit, but doesn't seem that compelling to me, which makes sense since it's very short and clearly an aside).
Also, my current model of Eliezer thinks that the hard takeoff stuff is more likely to happen after the AI has killed everyone (or almost everyone)
If EY's current model has shifted more toward AGI killing everyone with a supervirus vs. nanotech, then analyzing that in more detail would require going deeper into molecular biology, bioweapons research, SOTA vaccine tech, etc., most of which is distal from my background and interests. But at the outset I do of course believe that biotech is more likely than Drexlerian nanotech as the path a rogue AGI would use to kill many humans.