We have Knightian uncertainty over our set of environments; it is not a probability distribution over environments. So we might as well go with the maximin policy.
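A minimal sketch of the maximin rule, with hypothetical payoffs: rows are actions, columns are environments. Since there is no distribution over environments, each action is ranked by its worst-case payoff and we pick the best of those.

```python
# Hypothetical payoff table: action -> environment -> payoff.
# Under Knightian uncertainty we have no distribution over environments,
# so we rank each action by its worst case and maximize that.
payoffs = {
    "safe":  {"env_a": 1.0, "env_b": 1.0},
    "risky": {"env_a": 3.0, "env_b": -2.0},
}

def maximin_action(payoffs):
    # Pick the action whose minimum (worst-case) payoff is largest.
    return max(payoffs, key=lambda a: min(payoffs[a].values()))

print(maximin_action(payoffs))  # -> "safe": worst case 1.0 beats -2.0
```

Note the rule ignores how good "risky" could be in `env_a`; only the worst column matters.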
For any fixed n, there are computations which can’t be correctly predicted in n steps.
Logical induction will consider all possibilities equally likely in the absence of a pattern.
Logical induction will treat a sufficiently good pseudorandom algorithm as random.
Any kind of Knightian-uncertainty agent will treat pseudorandom numbers as the output of an adversarial superintelligence unless proven otherwise.
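A toy illustration of the contrast, using Python's stock PRNG as a stand-in for a "sufficiently good" pseudorandom source: a calibrated frequency estimator (the flavor of guarantee logical induction provides) settles near 1/2 on a bit stream it cannot cheaply pattern-match, while a pure worst-case agent must price every such bit as if it were chosen against it.

```python
# Sketch: contrast a frequency estimate with a worst-case valuation
# on the same pseudorandom bit stream.
import random

rng = random.Random(0)             # stand-in for a good PRNG
bits = [rng.getrandbits(1) for _ in range(10_000)]

freq = sum(bits) / len(bits)       # empirical estimate, ends up near 0.5
worst_case = 0.0                   # Knightian: assume every bit goes against you

print(round(freq, 2), worst_case)
```

The point is not that the stream is truly random, only that no cheap test separates it from random, so the inductor's estimates converge to the fair-coin answer.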
Logical induction doesn’t depend on your utility function. Knightian uncertainty does.
There is a phenomenon whereby any sufficiently broad set of hypotheses fails to influence actions: under the set of all hypotheses, anything could happen whatever you do, so every action looks equally bad in the worst case.
However, there are sets of possibilities that are sufficiently narrow to be winnable, yet sufficiently broad to require expending resources combating the hypothetical adversary. If the agent understands most of reality, but not some fundamental particle, it will assume that the particle is behaving in an adversarial manner.
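A small sketch of the flattening effect, with hypothetical payoffs: the worst case over a narrow hypothesis set can still rank actions, but adding an "anything can happen" hypothesis drags every action's worst case down to the same floor, so maximin goes silent.

```python
# Each hypothesis is a function from action to payoff (numbers are
# made up for illustration).
def worst_case(action, hypotheses):
    # Worst payoff the action can receive across the hypothesis set.
    return min(h(action) for h in hypotheses)

narrow = [
    lambda a: 2.0 if a == "cooperate" else -1.0,
    lambda a: 1.0 if a == "cooperate" else 3.0,
]
broad = narrow + [lambda a: -10.0]  # punishes every action maximally

for action in ("cooperate", "defect"):
    print(action, worst_case(action, narrow), worst_case(action, broad))
```

Under `narrow`, "cooperate" wins on worst case (1.0 vs -1.0); under `broad`, both actions bottom out at -10.0 and the rule no longer distinguishes them.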
If someone takes data from a particle physics experiment it does not understand and processes it on a badly coded, insecure computer, this agent will assume the computer is now running an adversarial superintelligence. It would respond with some extreme measure, like blowing up the whole physics lab.
Logical induction doesn’t have interesting guarantees in reinforcement learning, and doesn’t reproduce UDT in any non-trivial way. It just doesn’t solve the problems infra-Bayesianism sets out to solve.
> Logical induction will consider a sufficiently good pseudorandom algorithm as being random.
A pseudorandom sequence is, by definition, indistinguishable from random by any cheap algorithm, not only logical induction but also a bounded infra-Bayesian agent.
> If it understands most of reality, but not some fundamental particle, it will assume that the particle is behaving in an adversarial manner.
No. Infra-Bayesian agents have priors over infra-hypotheses; they don’t start with complete Knightian uncertainty over everything and gradually reduce it. The Knightian uncertainty might “grow” or “shrink” as a result of updates.