IIUC, Yudkowsky’s epistemology is essentially that Solomonoff induction is the ideal of unbounded epistemic rationality that any boundedly rational reasoner should try to approximate.
I contest that Solomonoff induction is the self-evident ideal of epistemic rationality.
Seconded. There seems to be no reason to privilege Turing machines or any particular encoding; both choices carry an unavoidable inductive bias that is essentially arbitrary.
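To make the encoding-dependence point concrete, here is a toy sketch (not real Solomonoff induction, which is incomputable): the Solomonoff-style prior weights each output x by summing 2^-len(p) over programs p producing x. The "machines" below are hypothetical finite lookup tables I made up for illustration; two machines computing the same set of outputs can assign them different program lengths, hence different priors.

```python
def prior(machine):
    """Sum 2**-len(p) over bitstring programs p, grouped by output.

    `machine` is a dict mapping program bitstrings to outputs --
    a stand-in for a universal machine, for illustration only.
    """
    m = {}
    for program, output in machine.items():
        m[output] = m.get(output, 0.0) + 2.0 ** -len(program)
    return m

# Two hypothetical encodings with the same outputs but different
# assignments of program length (a prefix-free code in both cases).
machine_a = {"0": "x", "10": "y", "110": "z"}
machine_b = {"0": "y", "10": "x", "110": "z"}

print(prior(machine_a))  # {'x': 0.5, 'y': 0.25, 'z': 0.125}
print(prior(machine_b))  # {'y': 0.5, 'x': 0.25, 'z': 0.125}
```

Both machines yield the same hypotheses, but which one counts as "simple" flips with the choice of encoding; the invariance theorem only bounds the disagreement by a machine-dependent multiplicative constant, it doesn't eliminate it.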
I think you’re missing the relevant piece: bounded rationality.
And it doesn’t matter what the Solomonoff prior actually looks like if you can’t compute it.