What do you mean by “follow a utility function”? Why do you think humans don’t do it?
Humans are neither independent nor transitive. Human preferences change over time, depending on arbitrary factors, including how choices are framed. Humans suffer because of things they cannot affect, and humans suffer because of details of their probability assessment (e.g. ambiguity aversion). That bears repeating: humans have preferences over their state of knowledge. The core of this is that “assessment of fact” and “values” are not disconnected in humans, not disconnected at all. Humans feel good when a team they support wins, without them contributing anything to the victory. They will accept false compliments, and can be flattered. Social pressure changes most values quite easily.
Need I go on?
If it isn’t there, what does it mean to have a correct solution to the FAI problem?
A utility function which, if implemented by the AI, would result in a positive, fulfilling, worthwhile existence for humans. Even if humans had a utility function, it’s not clear that a ruling FAI should have the same one, incidentally. The utility function is for the AI, and it aims to capture as much of human value as possible; it might just be the utility function of a nanny AI (make reasonable efforts to keep humanity from developing dangerous AIs, going extinct, or regressing technologically; otherwise, let them be).
What do you mean by “follow a utility function”? Why do you think humans don’t do it?
Humans are neither independent nor transitive…
You still haven’t defined “follow a utility function”. Humans are not ideal rational optimizers of their respective utility functions. That doesn’t mean they don’t have them. Deep Blue often plays moves which are not ideal; nevertheless, I think it’s fair to say it optimizes for winning. If you make intransitive choices, it doesn’t mean your terminal values are intransitive. It means your choices are not optimal.
Human preferences change over time...
This is probably the case. However, the changes are slow; otherwise humans wouldn’t behave coherently at all. The human utility function is only defined approximately, but the FAI problem only makes sense in the same approximation. In any case, if you’re programming an AI, you should equip it with the utility function you have at that moment.
...humans have preferences over their state of knowledge...
Why do you think it is inconsistent with having a utility function?
...what does it mean to have a correct solution to the FAI problem?
A utility function which, if implemented by the AI, would result in a positive, fulfilling, worthwhile existence for humans.
How can you know that a given utility function has this property? How do you know the utility function I’m proposing doesn’t have this property?
Even if humans had a utility function, it’s not clear that a ruling FAI should have the same one, incidentally.
Isn’t it? Assume your utility function is U. Suppose you have the choice of creating a superintelligence optimizing U or a superintelligence optimizing something other than U, say V. Why would you choose V? Choosing U will obviously result in an enormous expected increase in U, which is what you want to happen, since you’re a U-maximizing agent. Choosing V will almost certainly result in a lower expected value of U: if the V-AI chooses a strategy X that leads to higher expected U than the strategy that would be chosen by a U-AI, then it’s not clear why the U-AI wouldn’t choose X.
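To make the comparison concrete, here is a toy numerical sketch; the outcomes, probabilities and utility values are invented purely for illustration:

```python
# Toy illustration: a U-maximizing builder deciding which superintelligence to create.
# All outcomes, probabilities and utility numbers are invented for the example.

U = {"flourishing": 10.0, "mediocre": 2.0, "extinct": 0.0}

# Hypothetical distributions over futures, conditional on which AI is built.
# A U-optimizer steers the world toward high-U futures; a V-optimizer produces
# high-U futures only insofar as V happens to correlate with U.
p_given_U_AI = {"flourishing": 0.90, "mediocre": 0.09, "extinct": 0.01}
p_given_V_AI = {"flourishing": 0.30, "mediocre": 0.50, "extinct": 0.20}

def expected_U(dist):
    return sum(p * U[outcome] for outcome, p in dist.items())

print("E[U | build U-AI] =", expected_U(p_given_U_AI))  # ≈ 9.18
print("E[U | build V-AI] =", expected_U(p_given_V_AI))  # = 4.0
# A U-maximizing builder picks the U-AI: by construction, delegating to an
# optimizer of something else cannot be expected to do better in U's own terms.
```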
Humans are not ideal rational optimizers of their respective utility functions.
Then why claim that they have one? If humans have intransitive preferences (A>B>C>A), as I often do, then why claim that actually their preferences are secretly transitive but they fail to act on them properly? Nothing we know about the brain points to there being a hidden box with a pristine and pure utility function, that we then implement poorly.
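(For concreteness, this is what a genuine cycle like A>B>C>A exposes you to in principle, if nothing intervenes; the items, the fee and the trading loop below are invented for illustration.)

```python
# Toy money pump against a cyclic preference A > B > C > A.
# prefers[(x, y)] == True means "would swap y for x, plus a small fee".
prefers = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}

fee = 1.0          # what the agent will pay for each "upgrade"
holding = "C"
money_paid = 0.0

# A trader offers the preferred item in a cycle; the agent accepts every time.
for offered in ["B", "A", "C", "B", "A", "C"]:
    if prefers.get((offered, holding), False):
        holding = offered
        money_paid += fee

print(holding, money_paid)  # ends holding "C" again, having paid 6.0
# With a genuine cycle the agent can be led back to its starting item
# arbitrarily many times, paying a fee at every step.
```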
...humans have preferences over their state of knowledge...
Why do you think it is inconsistent with having a utility function?
They have preferences like ambiguity aversion, e.g. being willing to pay to find out, during a holiday, whether they were accepted for a job, while knowing that they can’t make any relevant decisions with that early knowledge. This is not compatible with following a standard utility function.
They have preferences like ambiguity aversion, e.g. being willing to pay to find out, during a holiday, whether they were accepted for a job, while knowing that they can’t make any relevant decisions with that early knowledge. This is not compatible with following a standard utility function.
I don’t know what you mean by a “standard” utility function. I don’t even know what you mean by “following”. You want to find out because uncertainty makes you nervous, being nervous is unpleasant, and pleasure is a terminal value. That is entirely consistent with having a utility function, and with my formalism in particular.
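A minimal sketch of how paying to find out early can be the expected-utility-maximizing choice once utility is allowed to depend on the state of knowledge itself; the prices and the nervousness penalty are invented:

```python
# Toy model: the utility function has a term for the agent's state of knowledge,
# because unresolved uncertainty is experienced as unpleasant. Numbers invented.

days_of_holiday = 7
nervousness_per_day = 2.0    # disutility per day spent not knowing the outcome
price_of_finding_out = 5.0   # cost of getting the answer now

# Option 1: don't pay; stay uncertain (and nervous) for the whole holiday.
u_wait = -nervousness_per_day * days_of_holiday   # -14.0

# Option 2: pay now; the nervousness stops either way, only the fee is lost.
u_pay = -price_of_finding_out                     # -5.0

print(max(("wait", u_wait), ("pay", u_pay), key=lambda x: x[1]))  # ('pay', -5.0)
# The information changes no decision, yet paying is the higher-utility option,
# because knowledge states themselves carry (dis)utility here.
```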
Humans are not ideal rational optimizers of their respective utility functions.
Then why claim that they have one? If humans have intransitive preferences (A>B>C>A), as I often do, then why claim that actually their preferences are secretly transitive but they fail to act on them properly?
In what epistemology are you asking this question? That is, what is the criterion according to which the validity of an answer would be determined?
If you don’t think human preferences are “secretly transitive”, then why do you suggest the following:
Whenever revealed preferences are non-transitive or non-independent, use the person’s stated meta-preferences to remove the issue. The AI thus calculates what the person would say if asked to resolve the transitivity or independence (for people who don’t know about the importance of resolving them, the AI would present them with a set of transitive and independent preferences, derived from their revealed preferences, and have them choose among them).
What is the meaning of asking a person to resolve intransitivities if there are no transitive preferences underneath?
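(For concreteness, one toy reading of the quoted procedure, not the quoted author’s actual algorithm: enumerate candidate transitive orderings, score each by how many revealed pairwise choices it reproduces, and let the person choose among the best-scoring ones. The items, the revealed choices, and the agreement score below are all assumptions.)

```python
# Toy reading of the quoted proposal: derive transitive preference orderings
# from intransitive revealed choices, then ask the person to pick one.
from itertools import permutations

items = ["A", "B", "C"]
# Revealed pairwise choices, containing a cycle: A over B, B over C, C over A.
revealed = {("A", "B"), ("B", "C"), ("C", "A")}

def agreement(order):
    """Count revealed choices that a strict ordering (best first) reproduces."""
    rank = {x: i for i, x in enumerate(order)}
    return sum(1 for winner, loser in revealed if rank[winner] < rank[loser])

# Every total order of three items can satisfy at most 2 of the 3 cyclic choices.
candidates = sorted(permutations(items), key=agreement, reverse=True)
best_score = agreement(candidates[0])
options = [order for order in candidates if agreement(order) == best_score]

print(options)
# [('A', 'B', 'C'), ('B', 'C', 'A'), ('C', 'A', 'B')]
# The AI would present such maximally-agreeing transitive orderings to the
# person and let their stated choice resolve the intransitivity.
```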
That is, what is the criterion according to which the validity of an answer would be determined?
Those are questions for you, not for me. You’re claiming that humans have a hidden utility function. What do you mean by that, and what evidence do you have for your position?
I’m claiming that it is possible to define the utility function of any agent. For unintelligent “agents” the result is probably unstable. For intelligent agents the result should be stable.
The evidence is that I have a formalism which produces this definition in a way compatible with intuition about “agent having a utility function”. I cannot present evidence which doesn’t rely on intuition since that would require having another more fundamental definition of “agent having a utility function” (which AFAIK might not exist). I do not consider this to be a problem since all reasoning falls back to intuition if you ask “why” sufficiently many times.
I don’t see any meaningful definition of intelligence or instrumental rationality without a utility function. If we accept that humans are (approximately) rational / intelligent, they must (in the same approximation) have utility functions.
It also seems to me (again, intuitively) that the very concept of “preference” is incompatible with e.g. intransitivity. In the approximation in which it makes sense to speak of “preferences” at all, it makes sense to speak of preferences compatible with the VNM axioms, ergo a utility function. The same goes for the concept of “should”. If it makes sense to say one “should” do something (for example, build an FAI), there must be a utility function according to which she should do it.
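For reference, the result being leaned on here, the von Neumann-Morgenstern representation theorem, says roughly the following:

```latex
% von Neumann-Morgenstern representation theorem (informal statement).
% If a preference relation over lotteries satisfies completeness, transitivity,
% continuity and independence, then there is a utility function u, unique up
% to positive affine transformation, such that for all lotteries L_1, L_2:
\[
  L_1 \succeq L_2
  \iff
  \mathbb{E}_{L_1}\!\left[u(x)\right] \;\ge\; \mathbb{E}_{L_2}\!\left[u(x)\right].
\]
```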
Bottom line: eventually it all hits philosophical assumptions which have no further formal justification. However, this is true of all reasoning. IMO the only valid way to disprove such assumptions is either by reductio ad absurdum or by presenting a different set of assumptions which is better in some sense. If you have such an alternative set of assumptions for this case, or a wholly different way to resolve philosophical questions, I would be very interested to know.
I’m claiming that it is possible to define the utility function of any agent.
It is trivially possible to do that. Since no two choices are strictly identical, you just add enough details to make each choice unique, and then choose a utility function that will always reach that choice (“subject has a strong preference for putting his left foot forwards when seeing an advertisement for deodorant on Tuesday mornings that are the birthdays of prominent Dutch politicians”).
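(Spelled out, the trivial construction is just a lookup table that awards utility 1 to whatever the subject actually did in each fully-specified situation; the situations and actions below are invented.)

```python
# Toy version of the trivial construction: overfit a "utility function" to a
# log of fully-specified observed choices.

observed_choices = {
    ("deodorant ad", "Tuesday morning", "Dutch politician's birthday"): "step left foot forward",
    ("deodorant ad", "Wednesday", "no birthday"): "keep walking",
}

def trivial_utility(situation, action):
    """Utility 1 for exactly the action the subject was observed to take, else 0."""
    return 1.0 if observed_choices.get(situation) == action else 0.0

# Under this 'utility function' the subject is a perfect maximizer by construction,
# which is why the construction tells us nothing about their actual values.
print(trivial_utility(("deodorant ad", "Tuesday morning", "Dutch politician's birthday"),
                      "step left foot forward"))  # 1.0
```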
A good simple model of human behaviour is that of different modules expressing preferences and short-circuiting the decision-making in some circumstances, and a more rational system (“system 2”) occasionally intervening to prevent loss through money pumps. So people are transitive in their ultimate decisions, often and to some extent, but their actual decisions depend strongly on which choices are presented first (i.e. their low-level preferences are intransitive, but the rational part of them prevents loops). Would you say these beings have no preferences?
I’m claiming that it is possible to define the utility function of any agent.
It is trivially possible to do that. Since no two choices are strictly identical, you just add enough details to make each choice unique, and then choose a utility function that will always reach that choice
My formalism doesn’t work like that, since the utility function is a function over possible universes, not over possible choices. There is no trivial way to construct a utility function with respect to which the given agent’s intelligence is close to maximal. However, it still might be the case that we need to give larger weight to simple utility functions (otherwise we’re left with selecting a maximum over an infinite set, and it’s not clear why one exists). As I said, I don’t have the final formula.
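To gesture at the general shape such a weighting could take, and explicitly not as the final formula, a trade-off of this kind is usually written as a behavioural-fit term minus a complexity penalty:

```latex
% Generic shape of a simplicity-weighted selection criterion (illustration only,
% not the actual formalism). K(U) is the description length of U, and the
% Intelligence term stands for "how close to maximal the agent's intelligence
% is with respect to U".
\[
  U^{*} \in \arg\max_{U}
  \Big( \mathrm{Intelligence}(\mathrm{agent}, U) \;-\; \lambda\, K(U) \Big),
  \qquad \lambda > 0 .
\]
```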
A good simple model of human behaviour is that of different modules expressing preferences and short-circuiting the decision-making in some circumstances, and a more rational system (“system 2”) occasionally intervening to prevent loss through money pumps. So people are transitive in their ultimate decisions, often and to some extent, but their actual decisions depend strongly on which choices are presented first (i.e. their low-level preferences are intransitive, but the rational part of them prevents loops). Would you say these beings have no preferences?
I’d say they have a utility function. Imagine a chess AI that selects moves by one of two strategies. The first strategy (“system 1”) uses simple heuristics like “check when you can” that produce an answer quickly and save precious time. The second strategy (“system 2”) runs a minimax algorithm with a 10-move-deep search tree. Are all of the agent’s decisions perfectly rational? No. Does it have a utility function? Yes: winning the game.
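A minimal sketch of that kind of agent, using Nim instead of chess and an invented “time pressure” trigger (both are illustrative assumptions, not a claim about how any real engine works):

```python
# Toy stand-in for the chess example: one agent, one objective (winning),
# two decision strategies. The game is normal-play Nim (take 1-3 sticks,
# whoever takes the last stick wins); chess itself is too big to sketch.
from functools import lru_cache

MOVES = (1, 2, 3)

def system1_move(pile):
    """Fast heuristic: grab as many sticks as allowed. Cheap, often suboptimal."""
    return min(3, pile)

@lru_cache(maxsize=None)
def wins(pile):
    """System 2 helper: exact minimax. True if the player to move can force a win."""
    return any(m == pile or not wins(pile - m) for m in MOVES if m <= pile)

def system2_move(pile):
    """System 2: play a winning move if one exists, else defer to the heuristic."""
    for m in MOVES:
        if m <= pile and (m == pile or not wins(pile - m)):
            return m
    return system1_move(pile)

def agent_move(pile, time_pressure):
    # Which strategy fires depends on circumstances, but both serve the same
    # terminal goal, winning, and that shared goal is the utility function.
    return system1_move(pile) if time_pressure else system2_move(pile)

print(agent_move(6, time_pressure=True))   # heuristic takes 3, handing the opponent a forced win
print(agent_move(6, time_pressure=False))  # minimax takes 2, leaving the opponent a losing pile of 4
```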