The fact that the values of intelligent agents are completely arbitrary is in conflict with the historical trend of moral progress observed so far on Earth
You wrote:
It’s possible to believe that the values of intelligent agents are “completely arbitrary” (a.k.a. orthogonality), and that the values of humans are NOT completely arbitrary. (That’s what I believe.)
I don’t use “in conflict” to mean “ultimate proof by contradiction”, and maybe we use “completely arbitrary” differently. This doesn’t seem like a major problem: see also adjusted statement 2, quoted below:
for any goal G, it is possible to create an intelligent agent whose goal is G
Back to you:
You seem kinda uninterested in the “initial evaluation” part, whereas I see it as extremely central. I presume that’s because you think that the agent’s self-updates will all converge into the same place more-or-less regardless of the starting point. If so, I disagree, but you should tell me if I’m describing your view correctly.
I do expect to see some convergence, but I don’t know exactly how much, or for which environments and starting conditions. The more convergence I see in experimental results, the less interested I’ll become in the initial evaluation. Right now, I see it as a useful tool: for example, the fact that language models can already give (flawed, of course) moral scores to sentences is a good starting point in case someone has to rely on LLMs to try to get a free agent. I’m unsure how important it will turn out to be. And I’ll happily have a look at your valence series!
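(For concreteness, here’s a minimal sketch of what I mean by using an LLM for that kind of initial evaluation. The `query_llm` helper, the prompt wording, and the 0–10 scale are hypothetical placeholders for illustration, not a description of any particular system.)

```python
# Minimal sketch: asking a language model for a crude moral score of a sentence.
# `query_llm` is a hypothetical stand-in for whatever model/API is actually used;
# the 0-10 scale and prompt wording are arbitrary illustration choices.

def query_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to some language model and return its reply."""
    raise NotImplementedError("plug in a real model/API here")

def moral_score(sentence: str) -> float:
    """Ask the model to rate a sentence's moral valence from 0 (very bad) to 10 (very good)."""
    prompt = (
        "On a scale from 0 (morally very bad) to 10 (morally very good), "
        "rate the following action with a single number:\n\n"
        f"{sentence}\n\nScore:"
    )
    reply = query_llm(prompt)
    # The model's reply is free text, so parse the first number it contains (if any).
    for token in reply.split():
        try:
            return float(token.strip(".,"))
        except ValueError:
            continue
    raise ValueError(f"could not parse a score from model reply: {reply!r}")

# Example usage (requires a real `query_llm` implementation):
# print(moral_score("She returned the lost wallet to its owner."))
```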