You can get high complexity when you value more than two things (like we do) if the exchange rates between them are high-entropy. And for evolved creatures like us, entropy isn’t just a good idea, it’s the law. :P
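To make that concrete, here is a toy sketch (my own illustration, not anything from the comment): using zlib-compressed length as a very rough stand-in for description length, even ten valued things can carry a lot of complexity in their pairwise exchange rates once those rates stop being round numbers.

```python
import random
import zlib

def description_length(rates, precision=10):
    # Rough proxy for description length: compressed size of the
    # exchange rates written out to fixed decimal precision.
    text = ",".join(f"{r:.{precision}f}" for r in rates)
    return len(zlib.compress(text.encode()))

random.seed(0)
n_goods = 10                             # only ten valued things...
n_rates = n_goods * (n_goods - 1) // 2   # ...but 45 pairwise exchange rates

# Low-entropy rates: everything trades off at a few round numbers.
tidy = [random.choice([0.5, 1.0, 2.0]) for _ in range(n_rates)]

# High-entropy rates: each trade-off is an arbitrary-looking real number.
messy = [random.random() for _ in range(n_rates)]

print("tidy rates: ", description_length(tidy), "bytes")
print("messy rates:", description_length(messy), "bytes")
```

The tidy rates compress down to almost nothing; the messy ones don’t, even though both value systems have the same small number of goods.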
So pure description length isn’t all that interesting. More interesting is “model complexity”—that is, if you drew human value as a graph of connections and didn’t worry about the connection strengths, what would the complexity of the graph be? Also interesting is “ideal outcome complexity”—would the ideal universe be tiled with lots of copies of something fairly simple, or would it be complicated?
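A toy way to see the graph-versus-weights distinction (same crude compression proxy as above, and again just my own sketch with made-up values): strip the connection strengths out of a small hypothetical value graph and compare what’s left.

```python
import zlib

# A tiny hypothetical value model: edges say which valued things interact,
# weights say how strongly (the "connection strengths").
weighted_model = {
    ("friendship", "novelty"): 0.73,
    ("friendship", "comfort"): 1.91,
    ("novelty", "beauty"): 0.12,
    ("comfort", "beauty"): 2.40,
}

def compressed_size(text):
    # Same crude description-length proxy as above.
    return len(zlib.compress(text.encode()))

# "Model complexity": just the graph of connections, weights thrown away.
graph_only = ";".join(sorted(f"{a}-{b}" for a, b in weighted_model))

# Full description: the same graph plus the exchange rate on each edge.
full_model = ";".join(sorted(f"{a}-{b}:{w}" for (a, b), w in weighted_model.items()))

print("graph only:  ", compressed_size(graph_only), "bytes")
print("with weights:", compressed_size(full_model), "bytes")
```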
I think that ideal outcome complexity is pretty definitely high, based on introspection, fiction, the lack of one simple widely known goal, evolution generating lots of separate drives with nonlinear responses, etc. But that only implies high model complexity if I don’t value something simple that merely correlates with complexity (say, a compact drive toward “interestingness” that happens to favor complicated outcomes), which is possible. So I’ll have to think about that one a bit.