A SENSIBLY DESIGNED MIND WOULD NOT RESOLVE ALL ORTHOGONAL METRICS INTO A SINGLE OBJECTIVE FUNCTION
Why? As you say, humans don’t. But human minds are weird, overcomplicated, messy things shaped by natural selection. If you write a mind from scratch, while understanding what you’re doing, there’s no particular reason you can’t just give it a single utility function and have that work well. It’s one of the things that makes AIs different from naturally evolved minds.
This perfect utility function is an imaginary, impossible construction. It would be mistaken from the moment it is created.
This intelligence will inevitably be drawn into allocating scarce resources among billions of people, some of whose wants are orthogonal.
There is no doing that perfectly, only well enough.
People satisfice, and so would an intelligent machine.
How do you know? It’s a strong claim, and I don’t see why the math would necessarily work out that way. Once you aggregate preferences fully, there might still be one best solution, and then it would make sense to take it.
Obviously you do need a tie-breaking method for when there’s more than one, but that’s just an implementation detail of an optimizer; it doesn’t turn you into a satisficer.
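To make the distinction concrete, here is a minimal sketch (the functions and data are illustrative, not from any source above): a maximizer with a tie-break still scores every option and picks the best, while a satisficer stops at the first option that clears a threshold.

```python
def maximize(options, utility, tie_break):
    """Score every option, then break ties among the best."""
    scores = {opt: utility(opt) for opt in options}
    best = max(scores.values())
    candidates = [opt for opt, s in scores.items() if s == best]
    # Tie-breaking is a detail of the optimizer; it still examines everything.
    return tie_break(candidates)

def satisfice(options, utility, threshold):
    """Return the first option whose utility clears the threshold."""
    for opt in options:
        if utility(opt) >= threshold:
            return opt
    return None  # nothing was good enough

options = [3, 5, 7, 7]
print(maximize(options, utility=lambda x: x, tie_break=min))   # 7
print(satisfice(options, utility=lambda x: x, threshold=5))    # 5
```

Note that the two procedures disagree (7 vs. 5) even on this tiny example: the satisficer never looks past the first acceptable option, which is exactly the behavioral difference the thread is arguing about.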
I fully agree. Resource limitation is a core principle of every purposeful entity: matter, energy, and time never allow true maximization. For any project, the constraints come down to this: within a fixed time and fiscal budget, the outcome must be of sufficiently high value to enough customers to return the investment and turn a profit soon. A maximizing AGI would never stop optimizing and simulating; no one would pay the electricity bill for such an indecisive maximizer.
Satisficing and heuristics should be our focus. Gerd Gigerenzer (Max Planck/Berlin) published his excellent book Risk Savvy in English this year. Using portfolio optimization as an example, he lays out simple rules for dealing with uncertainty:
For a complex, diffuse problem with many unknowns and many options: use simple heuristics.
For a simple, well-defined problem with known constraints: use a complex model.
The recent banking crisis illustrates this: complex valuation models failed to predict it. Gigerenzer is currently developing simple heuristic rules together with the Bank of England.
For the complex, ill-defined control problem we should not try to find a complex utility function. With the advent of AGI we may get only one try.