I do not believe that 3a is sufficiently logically supported. The criticisms of AI risk that have seemed strongest to me are the ones pointing out that the AI alignment community does not engage with the various barriers that undercut this argument. Against them, conjecture about what protein folding and ribosomes might one day make possible is a really weak counterargument, based as it is on no empirical or evidentiary reasoning.
Specifically, I believe further nuance is needed about the can-vs.-will distinction in the assumption that the first AGI to make a hostile move will have sufficient capability to reasonably guarantee a decisive strategic advantage. Sure, it’s of course possible that some combination of overhang risk and covert action allows a leading AGI to make some amount of progress above and beyond humanity’s in terms of technological advancement. But the scope and scale of that advantage is critical, and I believe it is strongly overstated. I can accept that an AGI could foom overnight; that does not mean it will, simply by virtue of it being hypothetically possible.
All linked resources and supporting arguments share a common thread of taking it for granted that cognition alone can give an AGI a decisive technology lead. My model is instead that cognition is an input with logarithmically diminishing returns to the rate of technological change. A little bit of extra cognition will certainly speed up scientific progress on exotic technological fronts, but an excess of cognition is not fungible for the other necessary inputs to technological progress, such as experimentation for hypothesis testing and problem-solving around real-world constraints, like the unforeseen implementation difficulties that come with unexplored technological frontiers.
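To make that concrete, here is a toy sketch of the model I have in mind (the functional form, numbers, and the `progress_rate` function are made up purely for illustration, not a claim about real dynamics): if returns to cognition are logarithmic while experimental throughput enters linearly, then a thousandfold increase in cognition buys roughly the same speedup as a tenfold increase in experimental capacity.

```python
import math

def progress_rate(cognition: float, experiment_throughput: float) -> float:
    """Toy model of technological progress per unit time.

    Assumes, purely for illustration, logarithmic returns to cognition
    and linear returns to experimental throughput (the bottleneck input).
    """
    return math.log(1.0 + cognition) * experiment_throughput

baseline    = progress_rate(cognition=1.0,    experiment_throughput=1.0)
smarter     = progress_rate(cognition=1000.0, experiment_throughput=1.0)
faster_labs = progress_rate(cognition=1.0,    experiment_throughput=10.0)

print(f"baseline:             {baseline:.2f}")     # ~0.69
print(f"1000x cognition:      {smarter:.2f}")      # ~6.91, only ~10x baseline
print(f"10x experimentation:  {faster_labs:.2f}")  # ~6.93, also ~10x baseline
```

Under these (admittedly stipulated) assumptions, piling on cognition alone cannot deliver the runaway lead the fast-takeoff story needs.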
Based on this, I think the fast takeoff hypothesis falls apart and a slow takeoff hypothesis is a much more reasonable place to reason from.
Against them, conjecture about what protein folding and ribosomes might one day make possible is a really weak counterargument, based as it is on no empirical or evidentiary reasoning.
I’m not sure I’ve parsed this correctly, but if I have, can I ask what unsupported conjecture you think undergirds this part of the argument? It’s difficult to say what counts as “empirical” or “evidentiary” reasoning in domains where the entire threat model is “powerful stuff we haven’t managed to build ourselves yet”, given we can be confident that set isn’t empty. (Also, keep in mind that nanotech is merely presented as a lower bound of how STEM-AGI might achieve DSA, being a domain where we already have strong reasons to believe that significant advances which we haven’t yet achieved are nonetheless possible.)
Cognition should instead be thought of as an input with logarithmically diminishing returns to the rate of technological change.
Why? This doesn’t seem to be how it worked with humans, where it was basically a step function from a technology not existing to existing.
A little bit of extra cognition
This sure is assuming a good chunk of the opposing conclusion.
...but an excess of cognition is not fungible for the other necessary inputs to technological progress, such as experimentation for hypothesis testing and problem-solving around real-world constraints, like the unforeseen implementation difficulties that come with unexplored technological frontiers.
And, sure, but it’s not clear why any of this matters. What is the thing that we’re going to (attempt to) do with AI, if not use it to solve real-world problems?
And, sure, but it’s not clear why any of this matters. What is the thing that we’re going to (attempt to) do with AI, if not use it to solve real-world problems?
It matters because the original poster isn’t saying we won’t use it to solve real-world problems, but rather that real-world constraints (i.e., the laws of physics) will limit its speed of advancement.
An AI likely cannot easily predict a chaotic system unless it can simulate reality at high fidelity. I guess the OP is assuming the TAI won’t have this capability, so even if we do solve real-world problems with AI, it is still limited by real-world experimentation requirements.
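As a minimal sketch of why fidelity matters so much here (using the logistic map as a stand-in for “a chaotic system”; the tolerance and step count are arbitrary choices of mine): an initial-condition error of one part in a billion grows until the forecast is useless within a few dozen iterations, which is roughly why simulation cannot fully substitute for real-world experimentation.

```python
def logistic_map(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions differing by one part in a billion.
true_path  = logistic_map(0.400000000)
model_path = logistic_map(0.400000001)

# Report the first step where the "prediction" is off by more than 0.1.
for step, (a, b) in enumerate(zip(true_path, model_path)):
    if abs(a - b) > 0.1:
        print(f"prediction diverges badly by step {step}")
        break
```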