Yeah, this looks right. I guess you could rephrase my post as saying that narrow AI could solve most problems we’d want an AI to solve, but with less danger than the designs discussed on LW (e.g. UDT over Tegmark multiverse).
That’s what evolution was saying. As of recently, I expect narrow AI developments to be directly on track to an eventual intelligence explosion.
What narrow AI developments do you have in mind?
Who’s ‘evolution’?
Apparently whoever downvoted understood what Vladimir was saying; can you please explain? I can’t parse “what evolution was saying”.
Vladimir’s writing style has high information density, but he leaves the work of unpacking to the reader. In this context “that’s what evolution was saying” seems to be a shorthand for something like:
Evolution optimized for goals that did not necessarily imply general intelligence, nor did evolution ever anticipate creating a general intelligence. Nevertheless, a general intelligence appeared as the result of evolution’s optimizations. By analogy, we should not be too sure that narrow AI developments will not lead to AGI.
Ah. This seems about right, though I think Vladimir’s statement was denser and/or more ambiguous than usual.