As mentioned, I did think of this model before, and I also disagree with Justin/Convergence on how to use it.
Let's say the underlying space for the vector field is the state of the world. Should we really remove curl? I'd say no. It is completely valid to want to move along some particular path, even a circle, or, more likely, a spiral.
Alternatively, let's say the underlying space for the vector field is world histories. Now we should remove curl, because any circular preference in this space is inconsistent (preferring history A to B, B to C, and C back to A). But what even is the vector field in this picture?
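For concreteness, here is a minimal sketch of what I take "removing curl" to mean, assuming the state space is something like R^n where a Helmholtz/Hodge-style decomposition applies (I'm guessing at the formalism Justin/Convergence intend): split the values vector field into a gradient part and a rotational part, and keep only the gradient part, which is the piece that behaves like a utility function.

\[
\mathbf{v}(x) \;=\; \underbrace{-\nabla U(x)}_{\text{curl-free part}} \;+\; \underbrace{\mathbf{r}(x)}_{\text{rotational part},\ \nabla \cdot \mathbf{r} = 0},
\qquad
\oint_{\gamma} \big(-\nabla U\big) \cdot d\mathbf{x} \;=\; 0 \ \text{ for every closed loop } \gamma .
\]

Dropping the rotational term is exactly what makes every loop integral vanish, i.e. rules out circular preferences. My disagreement above is about whether we actually want to do that when the underlying space is world states rather than world histories.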
***
My reason for considering values as a vector is that that is sort of how they feel to me from the inside. I have noticed that my own values are very different depending on my current mood and situation.
When I'm sad/depressed, I become a selfish hedonist. All I care about is becoming happy again.
When I'm happy, I have more complex and more altruistic values. I care about truth and the well-being of others.
It's like these wants are not tracking my global values at all, but just pointing out a direction in which I want to move. I doubt I even have global values: that would be very complicated, and besides, what would they be for? (Except for building a superintelligent AI, but that did not come up much in our ancestral environment.)