Hi Caerulea-Lawrence, thanks for your comment. The reason we say: “If you don’t understand that worldview, then you’ll be unable to predict what these groups will do. You will also struggle to communicate with them in a way that they care about, or persuade them to do things differently.” is not that we are trying to convince anyone to adopt a particular worldview with this piece; it’s that we are trying to motivate people to see other perspectives even when they are stuck in their own. That is, there are instrumental reasons to try to see things from other people’s perspectives, even if you are convinced you’re 100% right and they are totally wrong.
I wonder what about this piece makes you think we’re trying to use it to promote a particular worldview? The intention of the piece is precisely the opposite: to promote understanding of multiple worldviews (and learning what the different worldviews have to offer).
A major goal of this piece is to be fair to every worldview without advocating for any worldview in particular. This is hard to do, and it’s possible we failed in specific ways. If you have specific examples of us being unfair to a worldview, please let us know; if you make a case we find convincing that we’ve given short shrift to a perspective, we’ll change it. We’ve already done so based on past feedback on this piece (updating a few of the descriptions based on feedback from people who hold the worldview in question). We’re trying to describe each worldview in a way that most of the people who hold that view would agree with and endorse.
The way you define values in your comment:
“From the AI “engineering” perspective, values/valued states are “rewards” that the agent adds themselves in order to train (in RL style) their reasoning/planning network (i.e., generative model) to produce behaviours that are adaptive but also that they like and find interesting (aesthetics). This RL-style training happens during conscious reflection.”
is just something different than what I’m talking about in my post when I use the phrase “intrinsic values.”
From what I can tell, you seem to be arguing:
[paraphrasing] “In this one line of work, we define values this way,” and then jumping from there to “therefore, you are misunderstanding values,” when actually I think you’re just using the phrase to mean something different from what I’m using it to mean.