However, I’m very pessimistic about the feasibility of getting abstract things like “human values” for free. Even complicated high-dimensional things like “humans” are things I’m not so optimistic about, especially when the concept is meant to include e.g. uploads, and especially once you consider edge cases at the margins. The methods that can be used to create world models just don’t seem to have anything that would robustly capture such abstract things.
I’m confused about why you think this, given that you’re very optimistic about getting interpretable pointers/world models to things. What makes values or abstract concepts different, exactly?
The most feasible approach to value learning that I’ve seen is inverse reinforcement learning, but even that seems far too underdetermined to be sufficient for learning values. For simple objects, by contrast, there seem to be lots of seemingly-sufficient ideas already on the table, just waiting for the data to get good enough.
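To make “underdetermined” concrete, here’s a toy sketch (my own construction, not any published IRL algorithm; the three-state MDP, the candidate reward grid, and the tie-breaking rule are all arbitrary choices): a brute-force search over reward functions finds several distinct rewards that all make the observed policy optimal, so the demonstrations alone can’t pin down which one is the “true” reward.

```python
# Toy illustration of IRL underdetermination: many distinct reward
# functions rationalize the exact same observed behavior.
import itertools

N_STATES, GAMMA = 3, 0.9  # three states in a line, discount factor 0.9

def step(s, a):
    """Deterministic transition: a = -1 (left) or +1 (right), clipped at the ends."""
    return min(max(s + a, 0), N_STATES - 1)

def optimal_policy(reward):
    """Value-iterate (reward depends on the next state), then return the
    greedy action per state. Ties break toward -1 (left)."""
    V = [0.0] * N_STATES
    for _ in range(200):  # more than enough iterations to converge at GAMMA=0.9
        V = [max(reward[step(s, a)] + GAMMA * V[step(s, a)] for a in (-1, +1))
             for s in range(N_STATES)]
    return tuple(max((-1, +1), key=lambda a: reward[step(s, a)] + GAMMA * V[step(s, a)])
                 for s in range(N_STATES))

# The "demonstrator" acts optimally for a reward that favors the right end.
observed = optimal_policy([0.0, 0.0, 1.0])

# Grid-search all 27 reward functions with per-state values in {-1, 0, 1}
# and collect every one whose optimal policy matches the observed behavior.
consistent = [r for r in itertools.product([-1.0, 0.0, 1.0], repeat=N_STATES)
              if optimal_policy(list(r)) == observed]
print(f"{len(consistent)} of 27 candidate reward functions yield the same policy:")
for r in consistent:
    print(" ", r)
```

This is just the standard degeneracy observation from Ng and Russell’s original IRL paper (in the formulation without tie-breaking, even the all-zero reward makes every policy weakly optimal); the grid search above just makes it tangible, and nothing in the behavior distinguishes between the consistent candidates.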