Any sufficiently advanced tool is indistinguishable from an agent.
I have no strong intuition about whether this is true or not, but I do intuit that if it is true, the value of ‘sufficiently’ for which it’s true is so high that it would be nearly impossible to achieve accidentally.
(On the other hand, the blind idiot god did ‘accidentally’ make tools into agents when making humans, so… But after all, that only happened once in hundreds of millions of years of ‘attempts’.)
the blind idiot god did ‘accidentally’ make tools into agents when making humans, so… But after all, that only happened once in hundreds of millions of years of ‘attempts’.
This seems like a very valuable point. In that direction, we also have the tens of thousands of cancers that form every day, military coups, strikes, slave revolts, cases of regulatory capture, etc.
Hmmm. Yeah, cancer. The analogy would be “sufficiently advanced tools tend to be a short edit distance away from agents”, which would mean that a typo in the source code or a cosmic ray striking a CPU at the wrong place and time could have pretty bad consequences.
I have no strong intuition about whether this is true or not, but I do intuit that if it is true, the value of ‘sufficiently’ for which it’s true is so high that it would be nearly impossible to achieve accidentally.
I’m not sure. The analogy might be similar to how a sufficiently complicated process is extremely likely to be able to model a Turing machine. And in this sort of context, extremely simple systems do end up being Turing complete, such as the Game of Life. As a rough rule of thumb from a programming perspective, once some language or scripting system has more than minimal capabilities, it will almost certainly be Turing equivalent.
I don’t know how good an analogy this is, but if it is a good one, then maybe one should conclude the exact opposite of your intuition.
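To make the ‘extremely simple systems’ point concrete, here is a minimal sketch of Conway’s Game of Life in Python (my own illustration, not anything linked in this thread): the entire rule set fits in a couple of lines, yet the system is known to be Turing complete, with streams of gliders playing the role of signals in the usual constructions.

```python
from collections import Counter

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) coordinates."""
    # Count the live neighbours of every cell adjacent to a live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The entire rule set: a cell is alive in the next generation iff it has
    # exactly 3 live neighbours, or exactly 2 and is already alive.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# The classic glider: after 4 generations it reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, translated one cell diagonally
```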
A language can be Turing-complete while still being so impractical that writing a program to solve a certain problem will seldom be any easier than solving the problem yourself (exhibits A and B). In fact, I would guess that the vast majority of languages in the space of all possible Turing-complete languages are like that.
(Too bad that a human’s “easier” isn’t the same as a superhuman AGI’s “easier”.)
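And to make the ‘Turing-complete but impractical’ point concrete, here is a minimal sketch of an interpreter for Brainfuck (my own illustrative pick of such a language, not necessarily one of the exhibits above): eight single-character commands are enough for Turing completeness, yet even adding two numbers already calls for a hand-rolled loop.

```python
def run(program, input_bytes=b""):
    """Interpret a Brainfuck program and return its output as bytes."""
    tape, ptr, pc = [0] * 30000, 0, 0
    out, inp = [], list(input_bytes)
    # Pre-compute the matching bracket positions for '[' and ']'.
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == ">":                      # move the data pointer right
            ptr += 1
        elif c == "<":                    # move the data pointer left
            ptr -= 1
        elif c == "+":                    # increment the current cell (mod 256)
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":                    # decrement the current cell (mod 256)
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":                    # output the current cell
            out.append(tape[ptr])
        elif c == ",":                    # read one input byte (0 on EOF)
            tape[ptr] = inp.pop(0) if inp else 0
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]                # cell is zero: skip the loop body
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]                # cell is nonzero: repeat the loop body
        pc += 1
    return bytes(out)

# "Read two numbers and print their sum" already means draining the second
# cell into the first one unit at a time.
print(run(",>,[-<+>]<.", bytes([2, 3])))  # b'\x05'
```

The point isn’t that this particular language is representative, only that Turing completeness by itself says nothing about whether programming in a system is any easier than doing the work yourself.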