You are implicitly equating “singleton” and “simple values” here in a way that doesn’t seem at all justified to me.
No, I don’t. What I am saying is that you need a variety of agents with different utility functions around to get the diversity that can give rise to enough selection pressure. I am further saying that a “singleton” won’t be able to predict the actions of new and improved versions of itself just by running sandboxed simulations, not only because of logical uncertainty but also because it is computationally intractable to predict the real-world payoff of changes to its own decision procedures.
I am also saying that you need complex values to give rise to the drives necessary to function in a complex world. You can’t just tell an AI to protect itself. What would that even mean? What changes are illegitimate? What constitutes “self”? Those are all unsolved problems that are simply assumed to be solvable whenever risks from AI are discussed.
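To make the intractability point concrete, here is a toy back-of-envelope sketch in Python. Every number in it is invented, and candidate_successors is a hypothetical helper, but it shows how fast the space of self-modifications outruns any fixed simulation budget:

```python
# Toy back-of-envelope (all numbers hypothetical): suppose a decision
# procedure has k tunable components and each component admits m candidate
# rewrites. There are then m**k distinct successor designs, and judging
# any one of them means simulating the modified agent against the world.

def candidate_successors(components: int, rewrites_per_component: int) -> int:
    """Number of distinct modified decision procedures to evaluate."""
    return rewrites_per_component ** components

for k in (10, 20, 40):
    n = candidate_successors(k, rewrites_per_component=3)
    print(f"k={k:>2} components -> {n:.2e} successor designs to sandbox-test")
```

Even at three rewrites per component, forty components already give roughly 10^19 candidates, which is the sense in which exhaustive sandbox testing is hopeless.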
...when we can’t think of a high-expected-utility route, we try low-expected-utility routes, because that’s what there is. And if enough of us do that, we often discover unexpected utility on those routes. That said, if there are two routes I can take, and path A has a high chance of getting me what I want, and path B has a low chance of getting me what I want, I take path A.
What I am talking about is concurrence: many agents pursuing many different goals in parallel. What I claim won’t work is the kind of argument put forth by people like Steven Landsburg, that you should contribute to just the one charity you deem most important. The real world is not that simple. Much progress is made due to unpredictable synergy: “treating rare diseases in cute kittens” might yield insights that help cure cancer in humans.
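A minimal toy model of that point (every number is made up purely for illustration): the lone expected-utility maximizer is right to take the safe route, yet a society spread across many individually unpromising routes collectively uncovers far more value.

```python
import random

random.seed(0)

# Route A pays 1.0 with probability 0.9 (expected utility 0.9). Each of
# 1000 obscure routes pays 50.0 with probability 0.01 (expected utility
# 0.5), so no single rational agent prefers any one of them to route A.
P_A, PAYOFF_A = 0.9, 1.0
P_OBSCURE, PAYOFF_OBSCURE = 0.01, 50.0
N_EXPLORERS = 1000

def take(p: float, payoff: float) -> float:
    """Attempt a route once; succeed with probability p."""
    return payoff if random.random() < p else 0.0

lone = take(P_A, PAYOFF_A)  # the individually rational choice
society = sum(take(P_OBSCURE, PAYOFF_OBSCURE) for _ in range(N_EXPLORERS))

print(f"lone maximizer on route A:        {lone:5.1f}")
print(f"1000 explorers on obscure routes: {society:5.1f} collectively")
```

Each explorer does worse in expectation than the lone maximizer, but together they surface payoffs that nobody could have singled out in advance.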
If you are an AI with simple values, you will simply lack the creativity, due to a lack of drives, to pursue the huge spectrum of research that a society of humans pursues. Simple values will let an AI solve some well-defined, narrow problems, but they will leave it unable to make use of the broad range of synergistic effects of cultural evolution. Cultural evolution is the result of the interaction of a wide range of utility functions.
I agree that, unlike mammals, self-replicating DNA with sources of random mutation is as likely to explore path A as path B. I don’t think it’s a coincidence that mammals as a class achieve their goals faster than self-replicating DNA with sources of random mutation does.
The difference is that mammals have goals, complex values, which allow them to make use of evolutionary discoveries and adapt them to their own purposes.
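To illustrate that contrast in a sandbox (the setup is entirely artificial): blind mutation plus selection eventually reaches a fixed target, but a searcher that explicitly represents the goal gets there in a small fraction of the steps.

```python
import random

random.seed(1)

N = 20
TARGET = [1] * N  # toy "goal": a 20-bit string of all ones

def fitness(genome):
    """How many bits already match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def blind_evolution():
    """Random mutation plus selection: flip a random bit, keep the change
    only if fitness did not drop. The goal is represented nowhere."""
    genome, steps = [0] * N, 0
    while fitness(genome) < N:
        steps += 1
        i = random.randrange(N)
        before = fitness(genome)
        genome[i] ^= 1
        if fitness(genome) < before:
            genome[i] ^= 1  # selection: discard harmful mutations
    return steps

def goal_directed():
    """An agent that represents the goal fixes one wrong bit per step."""
    genome, steps = [0] * N, 0
    while fitness(genome) < N:
        steps += 1
        genome[genome.index(0)] = 1  # it knows which bits are wrong
    return steps

print("blind mutation + selection:", blind_evolution(), "steps")
print("goal-directed search:      ", goal_directed(), "steps")
```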