You seem to be repeatedly switching back and forth between “what is feasible with current tech” and “what is feasible with future tech”. If you don’t think that superhuman AI can make novel technological developments, then of course you shouldn’t really expect any kind of fast takeoff. That position also seems pretty weak to me.
My model is one of mostly smooth, continuous (but crazily transformative) progress following something like the Roodman model to a singularity around 2048, vs. EY’s model of a sudden hard takeoff of a single AGI. To the extent that I’m switching between the near future and the farther future, it is primarily because I’m replying to those who construe my arguments about the near future as applying to the farther future, or vice versa.
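(For reference, the Roodman model here is David Roodman’s fit of gross world product to a stochastic superexponential growth process, whose median projection diverges around mid-century. A minimal deterministic sketch of that kind of dynamic, with my own parameter names and the stochastic part stripped out, is

$$\dot{y} = s\,y^{1+B}, \qquad B > 0,$$

which integrates to

$$y(t) = \left[\, y_0^{-B} - sB\,(t - t_0) \,\right]^{-1/B},$$

so output blows up at the finite time $t^{*} = t_0 + \tfrac{1}{sB\,y_0^{B}}$; fitting $s$ and $B$ to the historical series is what puts $t^{*}$ somewhere in the mid-21st century.)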
Makes sense, but I think the key thing to then pay attention to is the question of how fast AGI could make technological hardware and software progress. Also, my current model of Eliezer thinks that the hard takeoff stuff is more likely to happen after the AI has killed everyone (or almost everyone), not before, so it’s also not super clear how much that matters (the section in your post about bioweapons touches on this a bit, but doesn’t seem that compelling to me, which makes sense since it’s very short and clearly an aside).
Also, my current model of Eliezer thinks that the hard takeoff stuff is more likely to happen after the AI has killed everyone (or almost everyone)
If EY’s current model has shifted more toward AGI killing everyone with a supervirus vs. nanotech, then analyzing that in more detail would require going deeper into molecular biology, bioweapons research, SOTA vaccine tech, etc., most of which is distant from my background and interests. But at the outset I do of course believe that biotech is more likely than Drexlerian nanotech as the path a rogue AGI would use to kill many humans.