I’m in the process of writing summaries. I replied as soon as I read your response.
If you want people to read what you write, learn to write in a readable way.
You are pretty much the first person to give me feedback on this, so I do not have an accurate sense of how opaque it is.
In algorithmic representations:
Separate hypotheses are inseparable.
Hypotheses are required to be a complete world model (account for every part of the input).
Conflicting hypotheses cannot be held simultaneously. This stems mainly from there being no requirement that hypotheses run in a finite amount of space and time.
There are other issues with Solomonoff induction in its current form, such as an inability to tolerate error, an inability to separate the input in the first place, and an inability to exchange error for simplicity, among others. Some of these are addressable with this particular extension of SI; others are addressable with other extensions.
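To make a couple of these limitations concrete, here is a minimal toy model (my own illustration, not part of the formalism under discussion): each hypothesis must reproduce the entire observation exactly to receive any weight, so a single mismatched bit is as fatal as getting everything wrong.

```python
# Toy sketch: Solomonoff-style weighting with no error tolerance.
# Hypotheses, descriptions, and lengths here are invented for illustration.

def prior(description_length: int) -> float:
    # Universal-prior style weighting: shorter hypotheses get more mass.
    return 2.0 ** -description_length

def weight(hypothesis, observed: str) -> float:
    """Prior mass if the hypothesis reproduces the full observation
    exactly; otherwise exactly zero (no partial credit for near misses)."""
    predicted = hypothesis["predict"](len(observed))
    if predicted != observed:  # one wrong bit is as fatal as all wrong
        return 0.0
    return prior(hypothesis["length"])

# Two toy hypotheses over binary strings (lengths are made up).
all_zeros   = {"length": 5, "predict": lambda n: "0" * n}
alternating = {"length": 8, "predict": lambda n: ("01" * n)[:n]}

data = "000000"
print(weight(all_zeros, data))    # 2**-5 = 0.03125: exact match survives
print(weight(alternating, data))  # 0.0: one mismatch, hypothesis discarded
```

The hard cutoff in `weight` is the point: there is no way for a hypothesis to pay a small penalty for a small error, which is what "an inability to exchange error for simplicity" amounts to in this setting.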
There is a similar intuition about nondeterministic hypotheses, paired with a requirement that only part of the hypothesis must match the output: nondeterministic Turing machines can be simulated by deterministic Turing machines by simulating every possible execution flow. That strikes me as somewhat dodgy, though.
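The simulation mentioned above can be sketched as follows (a toy machine of my own invention, not a claim about any particular construction): the nondeterministic transition relation maps a configuration to a set of successors, and a deterministic simulator explores every execution flow breadth-first, accepting if any single branch accepts.

```python
from collections import deque

def simulate(transitions, start, accept, tape, max_steps=1000):
    """Deterministically explore all branches of a nondeterministic TM.
    Configurations are (state, head, tape); returns True iff some branch
    reaches the accepting state within max_steps expansions."""
    frontier = deque([(start, 0, tape)])
    steps = 0
    while frontier and steps < max_steps:
        state, head, tp = frontier.popleft()
        steps += 1
        if state == accept:
            return True
        sym = tp[head] if 0 <= head < len(tp) else "_"
        for nxt, write, move in transitions.get((state, sym), []):
            # Write the symbol, extending the tape with a blank if needed.
            if 0 <= head < len(tp):
                new_tp = tp[:head] + write + tp[head + 1:]
            else:
                new_tp = tp + write
            frontier.append((nxt, head + move, new_tp))
    return False

# Hypothetical machine: on reading "1" it branches two ways,
# but only one branch has a path to the accepting state.
trans = {
    ("q0", "1"): [("guess1", "1", 1), ("guess0", "1", 1)],
    ("guess1", "_"): [("acc", "_", 0)],
}
print(simulate(trans, "q0", "acc", "1"))  # True: one branch reaches acc
print(simulate(trans, "q0", "acc", "0"))  # False: no branch accepts
```

The breadth-first frontier is what makes the simulation deterministic: rather than "choosing" a branch, it carries every branch forward at once, which is exactly why accepting when only some branches match feels analogous to letting only part of a hypothesis match the output.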