That would be confusing, because it seems like you can just inject extra code and arbitrary epicycles into a program while retaining its type and results, increasing its Levin complexity without bound. So I don’t see how you can have a Pareto frontier.
The Pareto frontier consists of those programs with an optimal tradeoff between description length and runtime: programs such that you can’t make the runtime shorter without increasing the description length, and vice versa. So adding extra code without making the program faster wouldn’t put you on the Pareto frontier. The Levin complexity is constant along lines of unit slope in my parameterization, so I use it to measure how even the slope of the frontier is.
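A minimal sketch of the idea, treating programs as hypothetical (description length in bits, runtime in steps) pairs: padded programs are dominated and never land on the frontier, and a unit-slope trade (paying 2 extra bits to cut runtime by a factor of 4) leaves the Levin complexity K + log₂(t) unchanged.

```python
import math

def pareto_frontier(points):
    """Keep the non-dominated points: (k, t) is dominated if some
    other point is <= in both coordinates and < in at least one."""
    frontier = []
    for k, t in points:
        dominated = any(
            (k2 <= k and t2 <= t) and (k2 < k or t2 < t)
            for k2, t2 in points
        )
        if not dominated:
            frontier.append((k, t))
    return sorted(set(frontier))

def levin_complexity(k, t):
    # Levin complexity: description length plus log of runtime.
    return k + math.log2(t)

# Hypothetical (description-length, runtime) pairs for illustration.
# (12, 4096) is (10, 1024) padded with dead code: longer but no
# faster, so it is dominated and drops off the frontier.
programs = [(10, 1024), (12, 256), (20, 64), (12, 4096), (30, 64)]
frontier = pareto_frontier(programs)
print(frontier)   # → [(10, 1024), (12, 256), (20, 64)]
print([levin_complexity(k, t) for k, t in frontier])
# → [20.0, 20.0, 26.0] — the first two lie on the same unit-slope line
```

Note that (10, 1024) and (12, 256) have equal Levin complexity: trading 2 bits of description for a 4× speedup moves along a line of unit slope, which is why Levin complexity serves as a measure of how even the frontier's slope is.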
Like, if I look at it, it doesn’t seem to be telling me that “this is the number of sensible abstractions you can have”
My intuition is that sensible abstractions should either (a) let you compress/predict part of the system or (b) allow you to predict things faster. I think that this captures a lot of what makes abstractions good, although in practice things can be more complicated (e.g., in a human context, the goodness of an abstraction might be related to how well it fits into a broader memeplex or how immediately useful it is for accomplishing a task. I’m trying to abstract (ha) away from those details and focus on the observer-independent qualities that any good abstraction has to have.)
I agree that the definition is unfortunately a bit ad hoc. My main problem with it is that it doesn’t seem to naturally ‘hook’ into the dynamics of the system: it’s too much about how a passive observer would evaluate the states after the fact, and not ‘intrinsic’ enough. My hope is that it will be easier to get better definitions after trying to construct some examples.