Why would you assume AGI will use a Bayesian decision system? Such a system would be limited to known probabilities.
Assuming "unknown probability = 0 probability" is not intelligent (see Hitchens's razor, black swan theory, Fitch's paradox of knowability, and robust decision-making). Once you incorporate this, the Orthogonality Thesis is no longer valid: it becomes obvious that every intelligent AI will work in only a single direction (which is disastrous for humans). I know there is a huge gap between "unknown probabilities" and "existential risk"; you can find more information in my posts, and I am available to explain it verbally (Calendly link below). Short teaser: it is possible that a terminal value can also be discovered (not only assumed), which seems to be overlooked in current AI alignment research.
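To make the contrast concrete, here is a minimal sketch (my own illustration, not from the original comment) of the difference between a Bayesian expected-utility rule whose prior simply omits an outcome (i.e. treats it as probability 0) and a robust, maximin-over-priors rule in the spirit of robust decision-making. The payoff numbers, outcome names, and candidate priors are all invented for the example.

```python
# Illustrative only: made-up payoffs and priors.
# Outcomes: two "modeled" outcomes the agent has probabilities for,
# plus one "black swan" outcome the Bayesian prior assigns 0 probability.
payoffs = {
    "act_A": {"modeled_good": 10, "modeled_bad": -1, "black_swan": -1000},
    "act_B": {"modeled_good": 6,  "modeled_bad": 2,  "black_swan": -5},
}

# Bayesian agent: a single prior with no mass on the black swan.
bayes_prior = {"modeled_good": 0.7, "modeled_bad": 0.3, "black_swan": 0.0}

def expected_utility(act, prior):
    return sum(prior[o] * payoffs[act][o] for o in prior)

bayes_choice = max(payoffs, key=lambda a: expected_utility(a, bayes_prior))

# Robust agent: a set of candidate priors, some of which put small but
# nonzero weight on the black swan, combined with a maximin rule.
candidate_priors = [
    bayes_prior,
    {"modeled_good": 0.65, "modeled_bad": 0.3, "black_swan": 0.05},
    {"modeled_good": 0.6,  "modeled_bad": 0.3, "black_swan": 0.1},
]

def worst_case_utility(act):
    return min(expected_utility(act, p) for p in candidate_priors)

robust_choice = max(payoffs, key=worst_case_utility)

print("Bayesian choice:", bayes_choice)   # act_A (the ignored black swan never enters the calculation)
print("Robust choice:  ", robust_choice)  # act_B (hedges against the unknown probability)
```

The point of the sketch is only that the two rules can recommend different actions precisely because of how they handle probabilities that are unknown rather than known; it does not by itself establish anything about the Orthogonality Thesis.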