EY’s doom model—or more accurately my model of his model—is one where, in the near future, an AGI not much smarter than us running on normal hardware (e.g. GPUs) “rewrites its own source code”, resulting in a noticeably more efficient AI which then improves the code further, and so on, bottoming out in many OOMs of improvement in efficiency and then strong nanotech killing us.
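For concreteness, here is a toy sketch of the compounding loop I take this model to imply. The per-rewrite gain and the diminishing-returns factor below are made-up numbers purely for illustration, not anything EY has specified:

```python
# Toy model of recursive software self-improvement (illustrative only).
# 'gain' is the efficiency multiplier from the first rewrite and 'decay'
# crudely models diminishing returns on later rewrites; both are made up.
import math

def total_efficiency_gain(gain=2.0, decay=0.9, rewrites=20):
    efficiency = 1.0
    for i in range(rewrites):
        efficiency *= 1.0 + (gain - 1.0) * decay**i
    return efficiency

e = total_efficiency_gain()
print(f"total multiplier: {e:.0f}x (~{math.log10(e):.1f} OOM)")
```

Whether the loop actually compounds to many OOMs, rather than petering out after a rewrite or two, is exactly the point in dispute below.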
I made the same point on the other post, but I don’t understand this. Eliezer does not believe that you somehow get to improve the thermodynamic efficiency of your hardware by rewriting the code that runs on your hardware. This doesn’t even have anything to do with thermodynamic efficiency limits, since we are talking about algorithmic progress here.
Maybe you intended to write something else here, since this feels like a non-sequitur.
Where do the many OOM come from?

They need to come from some combination of software and hardware. Eliezer’s model seems to source much of that from software initially, but also from hardware, probably via nanotech, and he cites brain thermodynamic inefficiency to support this. Otherwise, why do you think he cites thermodynamic efficiency?
I’ve already written extensively about the software of intelligence and made tangible predictions well in advance which have in fact come to pass (universal learning, the scaling hypothesis, etc.).

In my model the brain is reasonably efficient in both hardware and software, and I have extensive arguments for both. The software argument is softer and less quantitative, but it is supported by my predictive track record.
I mean, you just make more GPUs. Or you do some work on reversible computation or optical interconnect. Or you build some biological compute-substrate that literally just makes very large brain blobs that you can somehow use for computation. There are so many ways that seem really very feasible to me.
The key point is that this physical limit here really doesn’t matter very much. There are tons of different ways to get many OOMs of improvement here.
Sure, but this takes time and resources, and you get sublinear scaling in compute/$ for datacenters/supercomputers. Nvidia doesn’t yet produce a million high-end GPUs in an entire year, and GPT-4’s training already used a noticeable fraction of Nvidia’s flagship GPU output. Nvidia/TSMC can’t easily scale this up by many OOMs—even one OOM will take time.
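Here is the rough back-of-envelope I have in mind; every number below (annual flagship GPU output, GPUs per frontier training run) is a hypothetical placeholder, not a claimed figure:

```python
# Back-of-envelope: how many OOMs of extra training compute can you get
# just by buying more GPUs, given finite annual flagship GPU production?
# Both inputs are hypothetical placeholders, not claimed real figures.
import math

annual_flagship_gpus = 1e6      # assumed yearly output of top-end GPUs
gpus_per_frontier_run = 2.5e4   # assumed GPUs used in one frontier training run

headroom = annual_flagship_gpus / gpus_per_frontier_run
print(f"~{headroom:.0f}x headroom (~{math.log10(headroom):.1f} OOM) "
      "before fab capacity itself has to scale")
```

On placeholder numbers like these you get an OOM or so from simply buying more of the existing output; past that you are waiting on new fabs.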
Or you build some biological compute-substrate that literally just makes very large brain blobs that you can somehow use for computation.
There are some early demonstrations of small neural circuits built this way, but it’s very far from any practical tech, with much riding on the ‘somehow’.
There are tons of different ways to get many OOMs of improvement here.
Where? Your two poor examples provide very little, and do not multiply together.
You seem to repeatedly be switching back and forth between “what is feasible with current tech” and “what is feasible with future tech”. If you don’t think that superhuman AI can make novel technological developments, then of course you shouldn’t expect any kind of fast takeoff really. That position also seems pretty weak to me.
My model is one of mostly smooth, continuous (but crazy transformative) progress following something like the Roodman model to a singularity ~2048-ish, vs EY’s model of a sudden hard takeoff of a single AGI. To the extent I’m switching back and forth between the near future and the farther future, it is primarily because I’m replying to those construing my arguments about the near future to apply to the farther future, or vice versa.
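To spell out what I mean by the Roodman model: as I understand it, its deterministic core is hyperbolic rather than exponential growth, roughly dW/dt ∝ W^(1+s) with s > 0, which diverges at a finite date. A minimal numeric sketch with made-up parameters, just to show the finite-time-singularity shape:

```python
# Hyperbolic (superexponential) growth: dW/dt = a * W**(1 + s) with s > 0
# reaches arbitrarily large values in finite time, unlike exponential growth.
# Parameters are made up; only the qualitative shape matters here.
a, s = 0.05, 0.5
W, t, dt = 1.0, 0.0, 0.01

while W < 1e12 and t < 1000:   # integrate until growth effectively diverges
    W += a * W**(1 + s) * dt   # simple Euler step
    t += dt

print(f"trajectory blows past 1e12 at t ~ {t:.1f} (a finite-time singularity)")
```

The point is that smooth, continuous growth and an eventual singularity are compatible; no discontinuous jump by a single AGI is required.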
Makes sense, but I think the key point to then pay attention to is the question of how fast AGI could make technological hardware and software progress. Also, my current model of Eliezer thinks that the hard takeoff stuff is more likely to happen after the AI has killed everyone (or almost everyone), not before, so it’s also not super clear how much that matters (the section in your post about bioweapons touches on this a bit, but doesn’t seem that compelling to me, which makes sense since it’s very short and clearly an aside).
Also, my current model of Eliezer thinks that the hard takeoff stuff is more likely to happen after the AI has killed everyone (or almost everyone)
If EY’s current model has shifted more toward AGI killing everyone with a supervirus vs nanotech, then analyzing that in more detail would require going deeper into molecular biology, bioweapons research, SOTA vaccine tech, etc—most of which is distal from my background and interests. But at the outset I do of course believe that biotech is more likely than Drexlerian nanotech as the path a rogue AGI would use to kill many humans.