The stronger version is: EUT is inadequate as a theory of agents (for the same reasons, and in the same ways) not only during an agent’s “growing up” period, but throughout its entire lifespan. I think the stronger version is the case, for several reasons. For example:
agents are continuously exposed to novel “ontological entities” (entities they have not yet formed evaluative stances toward), and not just while “growing up”
there is a (generative) logic that governs how an agent “grows up” (develops into a “proper agent”), and that same logic continues to apply throughout an agent’s lifespan
I think this is a very important point; my post on value systematization is a (very early) attempt to gesture towards what an agent “growing up” might look like.
Yeah neat, I haven’t yet gotten around to reading it, but it’s definitely on my list. It seems (and some folks have suggested to me) that it’s quite related to the sort of thing I’m discussing in the value change problem too.
There are some similarities, although I’m focusing on AI values, not human values. Also, it seems like the value change stuff is thinking about humanity at the level of an overall society, whereas I’m thinking about value systematization mostly at the level of an individual AI agent. (Of course, widespread deployment of an agent could have a significant effect on its values, if it continues to be updated. But I’m mainly focusing on the internal factors.)