We’ll build the most powerful AI we think we can control. Nothing prevents us from getting that wrong at some point. If building one car with brakes that don’t work killed everyone in the world in a traffic accident, everyone in the world would already be dead.
There’s also the problem of an AGI consistently exhibiting aligned behavior because of its own low risk tolerance, right up until it stops doing that (for all sorts of unanticipated reasons).
This is especially compounded by the current paradigm of brute-forcing randomly generated neural networks, since the resulting systems are fundamentally unpredictable and unexplainable.
Retracted because I used the word “fundamentally” incorrectly, resulting in a mathematically provably false statement (in fact it might be reasonable to assume that neural networks are both fundamentally predictable and even fundamentally explainable, although I can’t say for sure since as of Nov 2023 I don’t have a sufficient understanding of chaos theory). They sure are unpredictable and unexplainable right now, but there’s nothing fundamental about that.
This comment shouldn’t have been upvoted by anyone. It said something that isn’t true.
So how did we get from narrow AI to super-powerful AI? Foom? But we can build narrow AIs that don’t foom, because we already have. We should be able to keep building narrow AIs that don’t foom by not including anything that would allow them to recursively self-improve [*].
EY’s answer to the question “why isn’t narrow AI safe” wasn’t “narrow AI will foom”; it was “we won’t be motivated to keep AIs narrow”.
[*] Not that we could tell them how to self-improve, since we don’t really understand it ourselves.