While humans may not be maximizing pleasure, they are certainly maximizing some utility function that can be characterized. An FAI could then be programmed to optimize that function, thereby capturing human concerns.
You might be interested in the Allais paradox, which is an example of humans demonstrating behavior that doesn’t maximize any utility function. If you’re familiar with the Von Neumann–Morgenstern characterization of utility functions, this becomes clearer than it would from just knowing what a utility function is.
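To make the paradox concrete, here is a minimal sketch in Python (the payoff amounts and probabilities are the standard textbook figures; the utility assignments are arbitrary placeholders). Most people prefer gamble 1A over 1B but also 2B over 2A, and no expected-utility maximizer can do both, because the two preferences turn on exactly the same algebraic quantity.

```python
def expected_utility(lottery, u):
    """Expected utility of a lottery given a utility u for each outcome."""
    return sum(p * u[outcome] for p, outcome in lottery)

# Outcomes in millions of dollars.
gamble_1a = [(1.00, 1)]                        # $1M for sure
gamble_1b = [(0.10, 5), (0.89, 1), (0.01, 0)]  # 10% $5M, 89% $1M, 1% nothing
gamble_2a = [(0.11, 1), (0.89, 0)]             # 11% $1M, 89% nothing
gamble_2b = [(0.10, 5), (0.90, 0)]             # 10% $5M, 90% nothing

# For ANY utility assignment, EU(1A) - EU(1B) equals EU(2A) - EU(2B),
# so an agent who prefers 1A must also prefer 2A. The common human
# pattern (1A and 2B) is therefore reproduced by no utility function.
for u in [{0: 0, 1: 1, 5: 2}, {0: 0, 1: 10, 5: 11}, {0: -3, 1: 0.5, 5: 7}]:
    d1 = expected_utility(gamble_1a, u) - expected_utility(gamble_1b, u)
    d2 = expected_utility(gamble_2a, u) - expected_utility(gamble_2b, u)
    print(round(d1, 10) == round(d2, 10))  # True for every assignment
```

Algebraically, EU(1A) − EU(1B) = 0.11·u($1M) − 0.10·u($5M) − 0.01·u($0), which is exactly EU(2A) − EU(2B), so whatever utilities you assign, the two choices have to go the same way.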
Sorry to respond to this two years late. I’m aware of the paradox and the VNM theorem. Just because humans are inconsistent/irrational doesn’t mean they aren’t maximizing a utility function, however.
Firstly, you can have a utility function and just be bad at maximizing it (and yes, this contradicts the rigorous mathematical definitions we all know and love, but English doesn’t always bend to their will, and we both know what I mean without my having to be pedantic).
Secondly, if you consider each subsequent dollar you attain to be less valuable, this makes perfect sense. The same logic applies in tournament poker, where taking a 50:50 chance of either going broke or doubling your stack is considered a terrible play: going broke costs you your entire equity, while doubling your stack gains you less than that, because a doubled stack is worth less than twice as much in prize money, so the gamble’s expected value is below what you hold by staying out of it. You can see this with a simple calculation, or just by noting that if everyone plays aggressively like this, I can do nothing and still make it into the prize pool, because the other players will eliminate each other faster than the blinds eat away at my own stack.
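To make that “simple calculation” concrete, here is a minimal sketch using the Independent Chip Model (ICM), which the comment doesn’t name but which formalizes the idea that a doubled stack is worth less than twice as much; the stack sizes and payout structure below are made up for illustration.

```python
from itertools import permutations

def icm_equities(stacks, payouts):
    """Prize-money equity of each player under the Independent Chip Model.

    The probability of a given finishing order is the product of each
    player's share of the chips not yet claimed by higher finishers.
    Brute force over all orderings, so only practical for small fields.
    """
    equities = [0.0] * len(stacks)
    for order in permutations(range(len(stacks))):
        prob, remaining = 1.0, sum(stacks)
        for player in order:
            prob *= stacks[player] / remaining
            remaining -= stacks[player]
        for place, player in enumerate(order):
            if place < len(payouts):
                equities[player] += prob * payouts[place]
    return equities

# Four players, equal 1000-chip stacks, $100 prize pool paying 50/30/20:
# everyone's equity is $25, i.e. exactly their buy-in.
print(icm_equities([1000, 1000, 1000, 1000], [50, 30, 20]))

# After a 50:50 all-in between two of them, the winner holds half the
# chips but is worth only about $38, not $50. The flip's expected value
# was therefore 0.5 * 0 + 0.5 * ~$38, roughly $19, worse than the $25
# of simply staying out of the pot.
print(icm_equities([2000, 1000, 1000], [50, 30, 20]))
```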
But I digress. Let’s cut to the chase: you can do what you want, but you can’t choose your wants. Along the same lines, a straight man, no matter how intelligent he becomes, will still find women arousing. An AI can be designed to have the motives of a selfless, benevolent human (the so-called Artificial Gandhi Intelligence), and that will be enough. Ultimately, humans want to be satisfied, and if it’s not in their nature to be permanently so, then they will concede to changing their nature with FAI-developed science.
That’s not exactly true. The Allais paradox does help to demonstrate why explicit utility functions are a poor way to model human behavior, though.