I’m very optimistic about the feasibility of creating world-models with interpretable pointers to “objects”. Things like chairs. In fact, my optimism is sufficiently strong that I tend to take such world-models for granted when thinking about how to achieve alignment. Furthermore, I expect interpretable world-models to be a necessary condition for alignment.
However, I’m very pessimistic about the feasibility of getting abstract things like “human values” and the like for free. Even for complicated high-dimensional things like “humans”, especially when meant to include e.g. uploads, I’m not so optimistic (particularly once you consider certain challenges at the margins). The methods that can be used to create world-models just don’t seem to have anything that would robustly capture such abstract things.
I’m confused about why you think this if you’re very optimistic about getting interpretable pointers/world-models to things. What makes values or abstract concepts different, exactly?
The most feasible approach to value learning that I’ve seen is inverse reinforcement learning, but even that seems way too underdetermined to be sufficient for learning values. Whereas for simple objects it seems like there are lots of seemingly-sufficient ideas on the table, just waiting until the data gets good enough.
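To make “underdetermined” a bit more concrete, here’s a toy sketch (my own illustration, with made-up numbers, not taken from any particular IRL setup): two quite different reward functions over a tiny MDP that induce exactly the same optimal behavior. Since IRL only ever sees behavior, demonstrations alone can’t distinguish between them; this is the standard reward-ambiguity / potential-shaping observation.

```python
import numpy as np

# Toy deterministic MDP: 2 states, 2 actions (0 = stay, 1 = switch).
# next_state[s][a] gives the successor state.
next_state = np.array([[0, 1],
                       [1, 0]])
gamma = 0.9

def greedy_policy(reward):
    """Run value iteration, then return the greedy policy.
    reward[s][a] is the reward for taking action a in state s."""
    V = np.zeros(2)
    for _ in range(1000):
        Q = reward + gamma * V[next_state]
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# Reward 1: +1 whenever the agent moves into (or stays in) state 1.
R1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])

# Reward 2: R1 plus a potential-based shaping term
# F(s, a, s') = gamma * phi(s') - phi(s), with an arbitrary potential phi.
# The numbers change everywhere, but the optimal policy provably doesn't.
phi = np.array([5.0, -3.0])
R2 = R1 + gamma * phi[next_state] - phi[:, None]

print(greedy_policy(R1))  # [1 0]: switch to state 1, then stay there
print(greedy_policy(R2))  # same policy, despite very different reward numbers
```

Both calls print the same policy even though the two reward functions disagree everywhere, which is the sense in which the “true” reward is underdetermined by behavior alone.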