While I agree with you that Jaynes’ description of how loss functions operate in people does not extend to agents in general, the specific passage you have quoted reads strongly to me as if it’s meant to be about humans, not generalized agents.
You claim that Jaynes’ conclusion is that “agents with similar goal specifications are in conflict, because the specified objective (for food, energy, status, whatever) binds to an agent’s own state, not a world-model.” But this isn’t true. His conclusion is specifically about humans.
I want to reinforce that I’m not disagreeing with you about your claims about generalized agents, or even about what Jaynes says elsewhere in the book. I’m only refuting the way you’ve interpreted the two paragraphs you quoted here. If you’re going to call a passage of E. T. Jaynes’s “silly,” you have to be right on the money to get away with it!
Thanks. We don’t seem to have a “That’s fair” or “Touché” react (which seems different from, and weaker than, “Changed my mind”).