More attempts to solve important problems are good. So kudos.
Let me see if I get this straight: you look at which programs simulate a system, and find the Pareto boundary of such programs by ranking them by their Levin complexity. Then, conditioning your probability distribution on this set, you compute its entropy. For a flat Pareto boundary, you have a constant Levin complexity, really the largest Levin complexity of any program which can simulate the system. Is that right?
That would be confusing, because it seems like you can just inject extra code and arbitrary epicycles into a program and retain its type + results, to increase its Levin complexity without bound. So I don’t see how you can have a Pareto frontier.
Now, I understand I’m probably misunderstanding this, so my judgement is likely to change. But I don’t think this solution has the right flavour. It seems too ad hoc to work and doesn’t match my intuitions.
EDIT:
My intuitions are more like “how abstract is this thing” or “how many ideas does this thing permit” when I’m thinking of how complex it is, and that seems like what you’re suggesting as an intuition as well. That doesn’t quite seem to fit this notion. Like, if I look at it, it doesn’t seem to be telling me that “this is the number of sensible abstractions you can have”. Unless you view “sensible” as meaning Pareto optimal in this sense, but I don’t think I do. I agree that calculating the entropy as a way to measure the number of distinct ideas is elegant, though.
That would be confusing, because it seems like you can just inject extra code and arbitrary epicycles into a program and retain its type + results, to increase its Levin complexity without bound. So I don’t see how you can have a Pareto frontier.
The Pareto frontier consists of those programs with the smallest possible description length/runtime (more precisely, those programs with an optimal tradeoff between runtime and description length: programs such that you can’t make the runtime shorter without increasing the description length, and vice versa). So adding extra code without making the program faster wouldn’t put you on the Pareto frontier. The Levin complexity is constant along lines of unit slope in my parameterization, so I use it to measure how even the slope of the frontier is.
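For concreteness, here’s a minimal toy sketch of how I’m picturing this. The candidate programs and their (description length, runtime) numbers are made up, and I’m assuming the standard Kt(p) = |p| + log t(p) form of Levin complexity:

```python
import math

# Hypothetical programs that all reproduce the same system's behaviour,
# represented only by (description length in bits, runtime in steps).
# These numbers are made-up toy data, not the output of any real search.
candidates = [
    (10, 4096),  # short but slow
    (14, 1024),
    (20, 64),    # longer but fast
    (22, 512),   # dominated by (20, 64): longer and slower
    (30, 64),    # dominated by (20, 64): longer, same speed
]

def pareto_frontier(points):
    """Keep points not dominated in both description length and runtime."""
    frontier = []
    for dl, t in points:
        dominated = any(
            (dl2 <= dl and t2 <= t) and (dl2 < dl or t2 < t)
            for dl2, t2 in points
        )
        if not dominated:
            frontier.append((dl, t))
    return sorted(frontier)

def levin_complexity(dl, t):
    """Kt = description length + log2(runtime)."""
    return dl + math.log2(t)

frontier = pareto_frontier(candidates)
for dl, t in frontier:
    print(f"|p| = {dl:2d} bits, t = {t:5d} steps, Kt = {levin_complexity(dl, t):.1f}")

# A perfectly even frontier (unit slope in (|p|, log2 t) coordinates) would give
# the same Kt at every frontier point; the spread in Kt measures unevenness.
kts = [levin_complexity(dl, t) for dl, t in frontier]
print("Kt spread along frontier:", max(kts) - min(kts))
```

In these coordinates a flat frontier would give the same Kt everywhere, so the spread in Kt along the frontier is one crude measure of how uneven it is.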
Like, if I look at it, it doesn’t seem to be telling me that “this is the number of sensible abstractions you can have”
My intuition is that sensible abstractions should either (a) let you compress/predict part of the system or (b) allow you to predict things faster. I think that this captures a lot of what makes abstractions good, although in practice things can be more complicated (e.g. in a human context, the goodness of an abstraction might be related to how well it relates to a broader memeplex or how immediately useful it is for accomplishing a task). I’m trying to abstract (ha) away from those details and focus on observer-independent qualities that any good abstraction has to have.
I agree that the definition is unfortunately a bit ad hoc. My main problem with it is that it doesn’t seem to naturally ‘hook’ into the dynamics of the system: it’s too much about how a passive observer would evaluate the states after the fact, and not ‘intrinsic’ enough. My hope is that it will be easier to get better definitions after trying to construct some examples.