Enumerate mathematical objects by representing them in a description language and enumerating all strings. Look for structures that are in some sense indistinguishable from “you” (taboo “you”, and solve a few philosophical problems along the way). There’s your set of possible universes. Distribute probability in some way.
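Roughly what I mean, as a toy sketch (the `contains_you` predicate is a placeholder for the entire unsolved “you”-identification problem, not a real proposal):

```python
from itertools import count, product

def enumerate_descriptions():
    """Enumerate every finite binary string in length-lexicographic order.
    Each string is read as a description of a candidate universe."""
    for length in count(0):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def possible_universes(contains_you, limit=1_000_000):
    """Keep the descriptions whose decoded structure is indistinguishable
    from 'you'. `contains_you` is a hypothetical predicate standing in for
    the unsolved 'you'-identification problem."""
    worlds = []
    for i, desc in enumerate(enumerate_descriptions()):
        if i >= limit:  # we can only ever check a finite prefix of the enumeration
            break
        if contains_you(desc):
            worlds.append(desc)
    return worlds
```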
Bayesian inference falls out of aggregating sets of possible worlds and talking about total probability.
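Concretely, conditioning is just dividing the total prior weight of one set of worlds by that of another. A toy sketch (all names are placeholders):

```python
def posterior(worlds, prior, hypothesis, evidence):
    """Bayesian conditioning as aggregation over possible worlds:
    P(hypothesis | evidence) = P(hypothesis and evidence) / P(evidence),
    where each probability is just the total prior weight of the worlds
    in that set. `prior` maps a world to its weight; `hypothesis` and
    `evidence` are predicates over worlds."""
    p_evidence = sum(prior(w) for w in worlds if evidence(w))
    p_joint    = sum(prior(w) for w in worlds if evidence(w) and hypothesis(w))
    return p_joint / p_evidence if p_evidence > 0 else None
```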
In the same stroke with which you solve the “you”-identification problem, solve the value-identification problem, so that you can distribute utility over possible worlds too. Exercising the logical power to actually observe the worlds that involve you on a close enough level will involve some funky shit where you end up determining/observing your entire future utility-maximizing policy/plan. This will involve crazy recursion, turning this whole thing inside-out, and novel work in math on programs deducing their own output (see TDT, UDT, and whatever solves their problems).
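At the coarsest level, the thing being approximated looks something like the sketch below. `outcome(world, policy)` hides the whole self-referential problem (what actually happens in a world when this very agent runs this very policy), which is where the TDT/UDT-style recursion lives; every name here is a placeholder:

```python
def best_policy(worlds, prior, utility, policies, outcome):
    """Pick the policy with the highest prior-weighted utility over possible
    worlds. `outcome(world, policy)` stands in for the hard part: deducing
    what happens in `world` when this very agent runs `policy`."""
    def expected_utility(policy):
        return sum(prior(w) * utility(outcome(w, policy)) for w in worlds)
    return max(policies, key=expected_utility)
```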
Approximating this thing will be next to impossible, but we have an existence proof by example (humans), so get to it. (We don’t have proof that lawful recursion is possible, though, if I understand correctly.)
Our current half-assed version of the inference thing (Solomonoff Induction) uses Turing Machines (ick) as the description language, and P′(x) = 2^(−L(x)), where L(x) is the length of the string x describing the universe (that’s an improper prior, but renormalization handles that quickly).
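In toy form, restricted to a finite set of enumerated descriptions (which is exactly what forces the renormalization):

```python
def length_prior(descriptions):
    """Toy version of the Solomonoff-style length prior: weight each
    description x by 2^(-len(x)), then renormalize over the finite set we
    actually enumerated. Over *all* finite binary strings the raw weights
    diverge (there are 2^L strings of each length L, each weighted 2^(-L)),
    which is why the real construction needs prefix-free codes or
    renormalization to get a proper prior."""
    raw = {x: 2.0 ** (-len(x)) for x in descriptions}
    total = sum(raw.values())
    return {x: w / total for x, w in raw.items()}
```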
We have proofs that P′ = 1 for every universe does not work (no free lunch, or is that not the right theorem here?), and if we fix the length prior, we can pack all of our degrees of freedom into the design of the description language. (Or is that almost all? Proof, anyone?)
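For reference, the closest standard result I know of here is the invariance theorem: for any two universal description languages U and V there is a constant c_{V,U} (roughly, the length of a V-interpreter written in U) such that

$$K_U(x) \;\le\; K_V(x) + c_{V,U} \quad\Longrightarrow\quad 2^{-K_U(x)} \;\ge\; 2^{-c_{V,U}} \cdot 2^{-K_V(x)},$$

so switching languages rescales the length prior by at most a constant factor. That says the language choice is where the remaining freedom lives, but also that it can only matter by a bounded amount of evidence; it doesn’t tell us which language to pick.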
This leaves just the design of the description language. Computable programming languages seem OK, but they all have unjustified inductive bias. Basically we have to figure out which one is a close approximation to our prior. Turing machines don’t seem particularly privileged in this respect.
EDIT: Bolded the Tl;dr.
EDIT: Downvotes? WTF? Can we please have a norm that people can speculate freely in meditation threads without being downvoted? At least point out flaws… If it’s not about logical flaws, I don’t know what it is, and the downvote carries very nearly no information.