the connotation would be something roughly like “speaker is an agent (always assumed in the background), and this has a negative impact on their goals/utility/whatever”. That does not actually require mapping out the whole valence-learning algorithm in the human brain.
I agree that understanding “flourishing-as-understood-by-the-speaker” can just have a pointer to the speaker’s goals/utility/valence/whatever; I was saying that maximizing “flourishing-as-understood-by-the-speaker” needs to be able to query/unpack those goals/utility/valence/whatever in detail. (I think we’re on the same page here.)
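To make the pointer-vs-unpack distinction concrete, here is a minimal Python sketch. All names here are hypothetical illustrations, not anything from the discussion: merely *referring* to “flourishing-as-understood-by-the-speaker” only needs to hold an opaque reference to the speaker’s utility, whereas *maximizing* it forces that reference to actually be evaluated/queried on candidate outcomes.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Hypothetical stand-in for "the speaker's goals/utility/valence/whatever":
# a function from a world-description to how good the speaker finds it.
SpeakerUtility = Callable[[str], float]

@dataclass
class FlourishingConcept:
    """'Flourishing-as-understood-by-the-speaker' as an opaque pointer.

    Understanding/referring only requires holding the reference;
    the utility function is never called here.
    """
    speaker_utility: SpeakerUtility  # held, but not evaluated

def maximize_flourishing(concept: FlourishingConcept,
                         candidate_worlds: Iterable[str]) -> str:
    """Maximizing the concept is what forces us to query/unpack the
    pointer: the utility must be evaluated on each candidate world."""
    return max(candidate_worlds, key=concept.speaker_utility)
```

The point of the sketch is just that `FlourishingConcept` compiles and is usable without ever calling `speaker_utility`, while `maximize_flourishing` cannot avoid invoking it; that asymmetry is the pointer-vs-detail distinction in miniature.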