Against them, the conjecture about what protein folding and ribosomes might one day have the possibility to do is a really weak counterargument, based as it is on no empirical or evidentiary reasoning.
I’m not sure I’ve parsed this correctly, but if I have, can I ask what unsupported conjecture you think undergirds this part of the argument? It’s difficult to say what counts as “empirical” or “evidentiary” reasoning in domains where the entire threat model is “powerful stuff we haven’t managed to build ourselves yet”, given we can be confident that set isn’t empty. (Also, keep in mind that nanotech is merely presented as a lower bound of how STEM-AGI might achieve DSA, being a domain where we already have strong reasons to believe that significant advances which we haven’t yet achieved are nonetheless possible.)
Cognition should instead be thought of as a logarithmically decreasing input into the rate of technological change.
Why? This doesn’t seem to be how it worked with humans, where it was basically a step function from a technology not existing to existing.
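To make the disagreement concrete, here is one way to formalize the two competing models (the notation is mine, not either poster's): the diminishing-returns view says the rate of technological change grows only logarithmically in available cognition $C$, while the step-function reading of the human case says progress on a given technology jumps once cognition crosses some threshold $C^*$:

```latex
% Diminishing-returns model (the original claim, as I read it):
\frac{dT}{dt} \;\propto\; \log C

% Threshold / step-function model (the human historical pattern, arguably):
\frac{dT}{dt} \;\propto\; \mathbf{1}\!\left[\, C \ge C^{*} \,\right]
```

On the second model, a small increment of cognition past $C^*$ buys a discontinuous jump in capability rather than a marginal one, which is the crux of the reply above.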
A little bit of extra cognition
This sure is assuming a good chunk of the opposing conclusion.
...but an excess of cognition is not fungible with the other necessary inputs to technological progress, such as the need for experimentation to test hypotheses and to solve problems posed by real-world constraints and unforeseen implementation difficulties at unexplored technological frontiers.
And, sure, but it’s not clear why any of this matters? What is the thing that we’re going to (attempt) to do with AI, if not use it to solve real-world problems?
It matters because the original poster isn’t saying we won’t use it to solve real-world problems, but rather that real-world constraints (i.e. the laws of physics) will limit its speed of advancement.
An AI likely cannot easily predict a chaotic system unless it can simulate reality at high fidelity. I guess the OP is assuming the TAI won’t have this capability, so even if we do solve real-world problems with AI, it is still limited by real-world experimentation requirements.
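To illustrate the underlying point about chaos (this sketch is mine, not from the thread): even a perfectly specified model of a chaotic system loses predictive power when its initial conditions are off by a trillionth, which is why "just simulate it" isn't a free substitute for experiment.

```python
# A minimal sketch of sensitive dependence on initial conditions:
# the logistic map at r = 4 is chaotic, so two trajectories that start
# a hair apart diverge to completely different values within ~40 steps.

def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

x_a, x_b = 0.2, 0.2 + 1e-12  # initial conditions differing by 1e-12

for step in range(60):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |x_a - x_b| = {abs(x_a - x_b):.3e}")

# The gap grows roughly exponentially (positive Lyapunov exponent), so
# any model with finite measurement error loses predictive power after a
# few dozen iterations -- hence the appeal to real-world experimentation
# rather than pure simulation.
```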