So the answer is “it depends on the languages involved”.
I thought it was talking about the overhead to describe an object in an absolute sense, but it turns out the constant is related to the difficulty of language emulation.
Well, maybe you could create a graph that, for each pair of languages, contains the two constants, and then use methods such as HodgeRank (implementation), the uncovered set, or the top cycle to assign a single number to each language, which would give you a simplicity comparison between languages. Ideas (with a little more detail) here and here (“Towards the Best Programming Language for Universal Induction”).
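A rough sketch of what the HodgeRank step could look like, assuming the pairwise numbers are the additive constants (i.e. the length of the shortest interpreter for one language written in the other). The language names and constants below are made up for illustration; HodgeRank on a complete graph reduces to a least-squares fit of a per-language potential to the antisymmetric pairwise differences:

```python
import numpy as np

# Hypothetical additive constants: K[i][j] = length of the shortest
# interpreter for language j written in language i (made-up numbers).
langs = ["lisp", "python", "brainfuck"]
K = np.array([
    [0.0, 120.0, 300.0],
    [150.0, 0.0, 250.0],
    [900.0, 800.0, 0.0],
])

# Antisymmetric pairwise flow: Y[i, j] > 0 means emulating j in i
# costs more than emulating i in j.
Y = K - K.T

# HodgeRank on the complete graph: find potentials s minimizing
# sum over pairs of (s[i] - s[j] - Y[i, j])^2 via least squares.
n = len(langs)
rows, targets = [], []
for i in range(n):
    for j in range(i + 1, n):
        row = np.zeros(n)
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
        targets.append(Y[i, j])
s, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
s -= s.mean()  # potentials are only defined up to an additive shift

# Lower potential = cheaper to reach from other languages = "simpler"
# under this (illustrative) convention.
ranking = sorted(zip(langs, s), key=lambda t: t[1])
print(ranking)
```

The residual of the fit also tells you how far the pairwise constants are from being consistent with any single global ranking, which is exactly the failure mode the uncovered-set and top-cycle methods are meant to tolerate.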
Fun hypothesis: I suspect that doing this, or constructing a prior over programming languages that gets updated according to observations (a sort of two-level AIXI), collapses UDASSA into egoism, via the programming language that says “my observation is the output of the empty program”.
So does that mean you worked a little on the additive constant issue I talked about in the question?
“Worked” as in “I thought a bit and had ideas that were shot down by others, though some intuitions remain”, yes. I was motivated by this podcast, which contains a good explanation of the issues, and mainly by philosophical problems with AIXI & Solomonoff induction rather than by anything concrete. And it doesn’t seem super important, so I haven’t written any of it up.