On the Pinker excerpt:
He is partway to a legitimate point.
The real distinction is not between male and female minds. The issue is whether to design a mind around the pursuit of a single, mathematically optimal objective.
Pinker is right that single-mindedly pursuing one narrow objective would be psychotic for a person.
Omohundro, for his part, points out that the computing time required to make decisions by formal optimization explodes as more knowledge about the real world is built into the optimization.
Herbert Simon, meanwhile, points out that people do not optimize; they SATISFICE. Using heuristics, they choose an answer to the problem at hand that is NOT OPTIMAL but GOOD ENOUGH, then move on to the next problem.
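To make the contrast concrete, here is a minimal sketch of satisficing in the Simon sense: search stops as soon as an option clears an aspiration level, instead of scoring every candidate to find the true optimum. The toy problem, the scoring function, and the thresholds are all invented for illustration.

```python
import random

def satisfice(options, evaluate, good_enough, budget=50):
    """Search until an option clears the aspiration level, then stop."""
    sample = random.sample(options, min(budget, len(options)))
    best = None
    for n, option in enumerate(sample, start=1):
        score = evaluate(option)
        if best is None or score > best[1]:
            best = (option, score)
        if score >= good_enough:          # "good enough" reached: stop searching
            return option, score, n
    return best[0], best[1], len(sample)  # budget exhausted: keep the best seen

# Hypothetical toy problem: find a number reasonably close to 1000
# without scoring all 10,000 candidates.
options = list(range(10_000))
choice, score, looked_at = satisfice(
    options, evaluate=lambda x: -abs(x - 1000), good_enough=-50)
print(choice, score, looked_at)
```

Note that the aspiration level and the search budget are both design choices; an optimizer has neither, and pays for it in evaluations.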
In finance, likewise, when constructing a portfolio you do not optimize a single objective (see Sharpe and Markowitz). If anything, you trade off two: risk and reward.
Resolving just these two orthogonal metrics into a single utility function requires a lot of cognitive labor: you have to figure out the decision-maker's level of "risk aversion." That is a lot of work, and frequently the decision-maker simply rebels.
So now you are trying to optimize this financial portfolio with the two orthogonal metrics of risk and reward collapsed into one. Are you going to construct a probability distribution over time for every possible security in your universe? No, you are going to screen away a lot of duds first and think harder about the securities that have a better chance of entering the portfolio.
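A rough sketch of that workflow, with entirely made-up numbers and thresholds: a cheap screen first, then a Markowitz-style mean-variance score that collapses risk and reward using a risk-aversion coefficient, applied only to the survivors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical universe: rough expected returns and volatilities for 500 securities.
expected_return = rng.normal(0.06, 0.04, size=500)
volatility = rng.uniform(0.05, 0.40, size=500)

# Step 1: a cheap screen throws away obvious duds before any harder analysis.
candidates = np.flatnonzero((expected_return > 0.0) & (volatility < 0.30))

# Step 2: collapse risk and reward into one score with a risk-aversion
# coefficient, in the spirit of mean-variance utility.
risk_aversion = 3.0   # the preference parameter that is so hard to elicit
score = expected_return[candidates] - risk_aversion * volatility[candidates] ** 2

# Only the survivors get ranked; nobody estimated a full return
# distribution for every security in the universe.
shortlist = candidates[np.argsort(score)[::-1][:10]]
print(shortlist)
```

Everything interesting hides in that single risk_aversion number, which is exactly the cognitive labor described above.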
When thinking about mind design, or simply when trying to think effectively, always incorporate:
-Bounded rationality
-Diversification
-Typically, some level of risk aversion.
-A cost to obtaining new pieces of information, and value of information
-Many, complex goals.
Many of these goals do not require very much thinking to determine that they are “somewhat important.”
Suppose we have several little goals, such as getting enough food, avoiding cold, avoiding pain, caring for family, helping our friends, and helping humanity.
Resolving them from orthogonal metrics into a single goal would take a lot of effort. So instead we do something simpler: automatically eat enough, avoid cold, and avoid pain, unless some exception is triggered. We do not re-balance these factors every moment.
That process does not always work out perfectly, but it works out better than complete analysis paralysis.
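One way to picture that "defaults plus exceptions" scheme is the sketch below. The sensor names, thresholds, and actions are invented; the point is only that routine goals run as cheap habits, and the expensive explicit trade-off reasoning fires only when an exception is triggered.

```python
# Invented sensors and thresholds; cheap habits by default, deliberation on exception.
DEFAULT_ROUTINES = [
    lambda s: "eat" if s["hunger"] > 0.7 else None,
    lambda s: "seek_warmth" if s["temperature_c"] < 15 else None,
    lambda s: "withdraw" if s["pain"] > 0.5 else None,
]

def deliberate(state):
    # Placeholder for the rare, expensive, explicit trade-off reasoning.
    return "replan"

def act(state):
    if state.get("emergency"):            # exception triggered: think hard
        return deliberate(state)
    for routine in DEFAULT_ROUTINES:      # otherwise run cheap habits;
        action = routine(state)           # no global re-weighting each step
        if action:
            return action
    return "work_on_long_term_goals"

print(act({"hunger": 0.2, "temperature_c": 21, "pain": 0.0}))  # work_on_long_term_goals
print(act({"hunger": 0.9, "temperature_c": 21, "pain": 0.0}))  # eat
```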
A SENSIBLY DESIGNED MIND WOULD NOT RESOLVE ALL ORTHOGONAL METRICS INTO A SINGLE OBJECTIVE FUNCTION, nor try to assess a probability distribution over every possible fact.
DROP THE PAPERCLIP MAXIMIZERS ALREADY. They are fun to think about, but they have little to do with how minds will eventually be designed.
A SENSIBLY DESIGNED MIND WOULD NOT RESOLVE ALL ORTHOGONAL METRICS INTO A SINGLE OBJECTIVE FUNCTION
Why? As you say, humans don't. But human minds are weird, overcomplicated, messy things shaped by natural selection. If you write a mind from scratch, while understanding what you're doing, there's no particular reason you can't just give it a single utility function and have that work well. It's one of the things that makes AIs different from naturally evolved minds.
This perfect utility function is an imaginary, impossible construction. It would be mistaken from the moment it is created.
This intelligence is invariably going to get caught up in the process of allocating certain scarce resources among billions of people. Some of their wants are orthogonal.
There is no doing that perfectly, only well enough.
People satisfice, and so would an intelligent machine.
How do you know? It's a strong claim, and I don't see why the math would necessarily work out that way. Once you aggregate preferences fully, there might still be one best solution, and then it would make sense to take it. Obviously you do need a tie-breaking method for when there's more than one, but that's just an implementation detail of the optimizer; it doesn't turn you into a satisficer instead.
I fully agree. Resource limitation is a core principle of every purposeful entity; matter, energy, and time never allow maximization. For any project, the constraints come down to this: within a fixed time and fiscal budget, the outcome must be of sufficiently high value to enough customers to earn a return on the investment soon. A maximizing AGI would never stop optimizing and simulating. No one would pay the electricity bill for such an indecisive maximizer.
Satisficing and heuristics should be our focus. Gerd Gigerenzer (Max Planck Institute, Berlin) published his excellent book Risk Savvy in English this year. Using the example of portfolio optimization, he explains simple rules for dealing with uncertainty:
For a complex, diffuse problem with many unknowns and many options: use simple heuristics.
For a simple, well-defined problem with known constraints: use a complex model.
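One simple portfolio rule of the kind Gigerenzer points to is 1/N, equal weights across assets, which requires no estimation at all. The toy comparison below pits it against a "complex model" (minimum-variance weights from an estimated covariance matrix) on invented data; it is only an illustration of the two styles, not a claim about which wins on real markets.

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_days = 20, 250

# Hypothetical daily returns for a small universe of assets.
returns = rng.normal(0.0004, 0.01, size=(n_days, n_assets))

# Simple heuristic: 1/N, equal weight, nothing to estimate.
w_equal = np.full(n_assets, 1.0 / n_assets)

# Complex model: minimum-variance weights from an estimated covariance matrix,
# exactly the kind of estimate that becomes fragile when data are scarce.
cov = np.cov(returns, rowvar=False)
inv = np.linalg.inv(cov)
ones = np.ones(n_assets)
w_minvar = inv @ ones / (ones @ inv @ ones)

for name, w in [("1/N", w_equal), ("min-variance", w_minvar)]:
    port = returns @ w
    print(f"{name:12s} mean={port.mean():.5f} stdev={port.std():.5f}")
```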
The recent banking crisis illustrates the point: complex evaluation models failed to predict it. Gigerenzer is currently developing simple heuristic rules together with the Bank of England.
For the complex, ill-defined control problem, we should not try to find a complex utility function. With the advent of AGI, we might get only one try.
Just to go a bit further with Pinker: as an exercise, try for once to imagine a Nurturing AGI. What would it act like? How would it be designed?