Reworded somewhat. E is the expectation value, as is now stated; it does not need to be calculated. We just need to know that a maximiser will always make the decision that maximises the expected value of U, while a satisficer may sometimes make a different decision; hence the presence of a U-maximiser increases the expected value of U over the presence of an otherwise equivalent U-satisficer.
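The distinction can be sketched in a few lines of Python. This is a minimal illustration, assuming a finite action set with known expected utilities; the action names, utility numbers, and threshold are all made up for the example.

```python
# Illustrative expected utilities E[U | action]; numbers are arbitrary.
expected_utility = {"a": 1.0, "b": 3.0, "c": 2.0}

def maximiser(actions):
    # A U-maximiser always picks the action with the highest expected utility.
    return max(actions, key=expected_utility.get)

def satisficer(actions, threshold=1.5):
    # A U-satisficer accepts any action whose expected utility clears the
    # threshold; here it simply takes the first acceptable one it considers.
    for a in actions:
        if expected_utility[a] >= threshold:
            return a

actions = ["a", "c", "b"]
# The satisficer settles for "c" (E[U] = 2.0), while the maximiser holds out
# for "b" (E[U] = 3.0); the maximiser's choice is never worse in expectation.
assert expected_utility[maximiser(actions)] >= expected_utility[satisficer(actions)]
```

The satisficer's choice depends on which acceptable option it happens to consider first, which is exactly why it may differ from the maximiser's.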
An agent is “an entity which is capable of Action”: an AI, a human being, or a collection of neurons that can do stuff. It’s a general term here, so I didn’t define it.