More precisely, goals (= preference) are in what the system does (which includes all the processes happening inside the system as well); this is simply a statement that the system determines its own preference (while the encoding is disregarded, so what matters is the behavior, not the particular atoms that implement that behavior). Of course, the system's actions need not be optimal according to the system's goals (preference).
On the other hand, two agents can be said to have the same preference if they would agree (on reflection, which is not actually available) on what should be done in each epistemic state. This doesn't necessarily mean they'll solve the optimization problem the same way, but they are working on the same optimization problem. This is also a way out of the ontology problem: equivalence by preference doesn't mention the real world.
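A minimal formalization of this equivalence (the notation here is illustrative, not from the original): write $D^*_A(e)$ for what agent $A$ would, on reflection, conclude should be done in epistemic state $e$, with $E$ the set of epistemic states.

$$\mathrm{Pref}(A) = \mathrm{Pref}(B) \;\iff\; \forall e \in E:\; D^*_A(e) = D^*_B(e)$$

Here $D^*$ is the idealized on-reflection judgment, not the action the agent actually manages to compute, and the quantification ranges over epistemic states rather than over states of the world, which is why the definition avoids mentioning the real world.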