The game is not over!
Michael Vassar said: “[FAI is ..] An optimization process that brings the universe towards the target of shared strong attractors in human high-level reflective aspiration.”
For the sake of not dragging out the argument too much, let’s assume I know what an optimization process and a human are.
What are “shared strong attractors”? You can’t use the words “shared”, “strong”, “attractor”, or any synonyms.
What’s a “high-level reflective aspiration”? You can’t use the words “high-level”, “reflective”, “aspiration”, or any synonyms.
Caledonian said: “Then declaring the intention to create such a thing takes for granted that there are shared strong attractors.”
We can’t really say if there are “shared strong attractors” one way or the other until we agree on what that means. Otherwise it’s like arguing about whether falling trees make “sound” in the forest. We must let the taboo game play out before we start arguing about things.
Shared strong attractors: values/goals that more than [some percentage] of humans would have at reflective equilibrium.
High-level reflective aspirations: ditto, but without the “[some percentage] of humans” part; that is, the values/goals a given human would have at reflective equilibrium.
Reflective equilibrium*: a state in which an agent cannot increase its expected utility (eta: according to its current utility function) by changing its utility function, thought processes, or decision procedure, and has the best available knowledge with no false beliefs.
*IIRC this is a technical term in decision theory, so if the technical definition doesn’t match mine, use the former.
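For concreteness, here is one way to write that condition out (a sketch in notation of my own, not part of the original definition): let U be the agent’s current utility function, B its beliefs, and π its thought processes and decision procedure. The agent is in reflective equilibrium when no alternative does better as judged by the current U:

$$\forall\,(U', \pi'):\quad \mathbb{E}_{B}\big[\,U \mid \text{adopt } (U', \pi')\,\big] \;\le\; \mathbb{E}_{B}\big[\,U \mid \text{keep } (U, \pi)\,\big],$$

together with the side condition that B reflects the best available knowledge and contains no false beliefs. Per the parenthetical above, the expectation is scored by the current U, not by the candidate U′.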
“a state in which an agent cannot increase its expected utility by changing its utility function”
Surely if you could change your utility function you could always increase your expected utility that way, e.g. by defining the new utility function to be the old utility function plus a positive constant.
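To spell out the arithmetic behind this objection (a worked example of my own, not from the original comment): if the new utility function is U′(x) = U(x) + c for some constant c > 0, then for any distribution over outcomes X, linearity of expectation gives

$$\mathbb{E}[U'(X)] \;=\; \mathbb{E}[U(X) + c] \;=\; \mathbb{E}[U(X)] + c \;>\; \mathbb{E}[U(X)],$$

so measured by the new function, expected utility has gone up no matter what the agent actually does.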
I think Normal_Anomaly means “judged according to the old utility function”.
EDIT: Incorrect gender imputation corrected.
I do mean that, fixed. By the way, I am female (and support genderless third-person pronouns, FWIW).
Thank you, that makes sense to me now.
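To close the loop in the same notation as above (again a sketch of my own): judged by the old U, swapping in U′ = U + c changes nothing, because adding a constant does not alter any decision the agent makes, so the distribution over outcomes is unchanged and

$$\mathbb{E}_{B}\big[\,U \mid \text{adopt } U' = U + c\,\big] \;=\; \mathbb{E}_{B}\big[\,U \mid \text{keep } U\,\big].$$

The apparent improvement exists only as measured by the new U′, which is exactly why the definition scores the change by the current utility function.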