There’s also the problem of an AGI consistently exhibiting aligned behavior due to low risk tolerance, until it stops doing that (for all sorts of unanticipated reasons).
This is compounded by the current paradigm of brute-forcing randomly generated neural networks, since the resulting systems are fundamentally unpredictable and unexplainable.
Retracted because I used the word “fundamentally” incorrectly, resulting in a mathematically provably false statement (in fact it might be reasonable to assume that neural networks are both fundamentally predictable and even fundamentally explainable, although I can’t say for sure, since as of Nov 2023 I don’t have a sufficient understanding of chaos theory). They sure are unpredictable and unexplainable right now, but there’s nothing fundamental about that.
This comment shouldn’t have been upvoted by anyone. It said something that isn’t true.