I can’t point you to existing resources, but from my perspective, I assumed an algorithmic ontology because it seemed like the only way to make decision theory well-defined (at least potentially, after solving various open problems). That is, for an AI that knows its own source code S, you could potentially define the “consequences of me doing X” as the logical consequences of the statement “S outputs X”. I’m not sure, by contrast, how this could even potentially be defined under a physicalist ontology, since it seems impossible for even an ASI to know the exact details of itself as a physical system.
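To make that slightly more concrete, the kind of definition I have in mind is roughly the proof-based formulation below. This is only a sketch: the formal theory T, the utility symbol U, and what to do when nothing relevant is provable are illustrative stand-ins for open problems, not settled machinery.

$$\text{choose}\quad \arg\max_{X}\ \max\bigl\{\,u \;:\; T \vdash (\text{“}S\text{ outputs }X\text{”} \rightarrow U = u)\,\bigr\}$$

The point is that the consequences of “S outputs X” are logical consequences of a sentence about S’s source code, which is only available to an agent that actually knows that source code.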
This does lead to the problem that I don’t know how to apply LDT to humans (who do not know their own source code), which does make me somewhat suspicious that the algorithmic ontology might be a wrong approach (although a physicalist ontology doesn’t seem to help). I mentioned this as problem #6 in “UDT shows that decision theory is more puzzling than ever”.
ETA: I was (and still am) also strongly influenced by Tegmark’s Mathematical Universe Hypothesis. What’s your view on it?
Thanks, that’s helpful!
I am indeed interested in decision theory that applies to agents other than AIs that know their own source code. Though I’m not sure why it’s a problem for the physicalist ontology that the agent doesn’t know the exact details of itself — it seems plausible to me that “decisions” might just be a vague concept, which we still want to be able to reason about under bounded rationality. E.g. under physicalist EDT, what I ask myself when I consider a decision to do X is, “What consequences do I expect conditional on my brain-state going through the process that I call ‘deciding to do X’ [and conditional on all the other relevant info I know, including my own reasoning about this decision, per the Tickle Defense]?” But I might be missing your point.
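Spelled out a bit (the notation V, D_X, and K below is just mine for this comment, not anything standard), the comparison I have in mind is:

$$V(X) \;=\; \mathbb{E}\bigl[\,U \;\big|\; D_X,\ K\,\bigr]$$

where D_X is the event that my brain-state goes through the process I call “deciding to do X”, K is everything else I know (including my own reasoning about this decision, per the Tickle Defense), and I take the X with the highest V(X). Nothing in this requires knowing the exact physical details of my brain, only being able to condition on a vaguely characterized event.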
Re: mathematical universe hypothesis: I’m pretty unconvinced, though I at least see the prima facie motivation (IIUC: we want an explanation for why the universe we find ourselves in has the dynamical laws and initial conditions it does, rather than some others). I’m not an expert here; this is just based on some limited exploration of the topic. My main objections:
The move from “fundamental physics is very well described by mathematics” to “physics is (some) mathematical structure” seems like a map-territory error. I just don’t see the justification for this.
I worry about giving description-length complexity a privileged status when setting priors / judging how “simple” a hypothesis is. The Great Meta-Turing Machine in the Sky, as described by Schmidhuber, has a very short description yet scores very poorly by the speed prior (see the rough comparison below).
It’s very much not obvious to me that conscious experience is computable. (This is a whole can of worms in this community, presumably :).)
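To gesture at why the Schmidhuber example bites (this is a simplified comparison; the actual definitions of the universal prior and the speed prior are more careful, and ℓ(p), t(p) are just my notation):

$$P_{\text{length}}(p) \;\propto\; 2^{-\ell(p)}, \qquad P_{\text{speed}}(p) \;\propto\; 2^{-\ell(p) - \log_2 t(p)} \;=\; \frac{2^{-\ell(p)}}{t(p)}$$

where ℓ(p) is the length of a program p generating the observation in question and t(p) is the time it takes to do so. The “run every program” meta-machine has a tiny ℓ(p) but an astronomically large t(p) before it reaches any particular observer’s data, so it looks maximally simple under the first weighting and hopeless under the second.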