Why do you insist on making life harder on yourself? If the problem isn’t solved satisfactorily in a simple world model, e.g. a deterministic finite process with whatever nice mathematical properties you’d like, it’s not yet time to consider more complicated situations, with various poorly-understood kinds of uncertainty, platonic mathematical objects, and so on and so forth.
Why do you insist on making life harder on yourself?
I thought it might be interesting to sketch the outline of a possible solution to the level 4 multiverse decision problem, so people can get a sense of how much work is left to be done (i.e., a lot). This is also a subject that I’ve been interested in for a long time, so I couldn’t resist bringing it up.
Anyway, I gave two other examples with simple world models. Can you suggest other simple models that I should test this theory with?
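For readers wondering what such a testbed might look like concretely, here is a minimal sketch of a "simple world model" in the sense suggested above: a deterministic finite process reduced to a pure function from actions to outcomes, against which a candidate decision rule can be checked. The `world`, `utility`, and `decide` names and the toy transitions are my own illustration, not anything proposed in the thread.

```python
# A minimal sketch (illustrative only): the world is a deterministic finite
# process, modeled as a pure function from the agent's action to a final
# state, and a decision theory is tested by which action it selects.

def world(action: str) -> str:
    """Toy deterministic world: each action leads to exactly one final state."""
    transitions = {"press": "light_on", "wait": "light_off"}
    return transitions[action]

def utility(state: str) -> float:
    """Toy preferences over final states."""
    return {"light_on": 1.0, "light_off": 0.0}[state]

def decide(actions):
    """Naive decision rule: enumerate the finite action set, simulate the
    deterministic world for each action, and pick the best outcome."""
    return max(actions, key=lambda a: utility(world(a)))

print(decide(["press", "wait"]))  # -> "press"
```

Because the world here is finite and deterministic, every component of the decision problem is fully specified, which is exactly what makes such models useful as a first test before introducing uncertainty or platonic mathematical objects.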
I have thought a bit about these decision theory issues lately, and my ideas seem somewhat similar to yours, though not identical; see
http://goertzel.org/CounterfactualReprogrammingDecisionTheory.pdf
if you’re curious...
-- Ben Goertzel
It’s the “do what a superintelligence would do” decision theory!!!