Optimization is best done by an architecture that performs trials, inspects the results, makes modifications, and iterates. Typically, no sentient agents need to be harmed during such a process, nor do you need multiple intelligent agents to perform it.
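A minimal sketch of such a trial-inspect-modify loop, assuming a toy one-dimensional objective; the function names, the Gaussian mutation, and the parameters here are all illustrative, not anything the argument depends on:

```python
import math
import random

def objective(x):
    # A bumpy toy landscape to maximize: many local optima, one global.
    return math.sin(3 * x) - 0.1 * x * x

def optimize(iterations=1000, step=0.5):
    x = random.uniform(-5, 5)                  # initial trial
    best = objective(x)                        # inspect the result
    for _ in range(iterations):
        candidate = x + random.gauss(0, step)  # make a modification
        score = objective(candidate)           # inspect again
        if score > best:                       # keep the change only if it helped
            x, best = candidate, score
    return x, best

print(optimize())
```

Nothing in this loop feels anything; it just proposes, measures, and keeps or discards.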
Some of your problems will be so complicated that each trial will be undertaken by an organization as complex as a corporation or an entire nation.
If these corporations and nations are non-intelligent and non-conscious, or even merely unemotional, and incorporate no such intelligences within themselves, then you have a dead world devoid of consciousness.
If they do incorporate agents, then for those agents not to be “harmed”, they must not feel bad when their trials fail. What would it mean to build agents that weren’t disappointed when they failed to find a good optimum? It would mean stripping out emotions, and probably consciousness, as the intermediary between goals and actions. See “dead world” above.
Besides being the one great horror we must avoid above all else, building a superintelligence devoid of emotions ignores the purpose of emotions.
First, emotions are heuristics. When the search space is too spiky for you to know what to do, you reach into your gut and pull out the good/bad result of a blended multilevel model of similar situations.
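Read computationally, that “blended multilevel model of similar situations” might be something like a similarity-weighted average over remembered outcomes: a constant-time heuristic score where a full search would be intractable. A minimal sketch under that assumption; the feature tuples, the Gaussian similarity kernel, and gut_feeling itself are invented for illustration:

```python
import math

# Remembered situations: (feature vector, how well it turned out, in [-1, 1]).
memory = [((0.9, 0.1), 0.8), ((0.2, 0.7), -0.5), ((0.8, 0.3), 0.6)]

def gut_feeling(situation, memory, bandwidth=0.5):
    """Similarity-weighted blend of past outcomes: a heuristic, not a search."""
    def similarity(a, b):
        return math.exp(-(math.dist(a, b) / bandwidth) ** 2)
    weights = [similarity(situation, feats) for feats, _ in memory]
    total = sum(weights)
    if total == 0:
        return 0.0  # nothing similar remembered: no feeling either way
    return sum(w * outcome for w, (_, outcome) in zip(weights, memory)) / total

print(gut_feeling((0.85, 0.2), memory))  # close to the good memories: feels good
```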
Second, emotions let an organism be autonomous. Because agents have drives that make them look after their own interests, it is easier to build a complicated network of such agents that doesn’t need totalitarian top-down Stalinist control. See economic theory.
Third, emotions introduce necessary biases into otherwise overly-rational agents. Suppose you’re doing a Monte Carlo simulation with 1000 random starts. One of these starts is doing really well. Rationally, the other random starts should all copy it, because they want to do well. But you don’t want that to happen. So it’s better if they’re emotionally attached to their particular starting parameters.
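Here is a sketch of that dynamic under assumed toy settings (a two-peak objective, Gaussian local moves, and a crude diversity measure, all invented for illustration). Restarts that copy the current leader collapse onto a single basin almost immediately, while restarts “attached” to their own parameters keep many regions of the space under exploration:

```python
import math
import random

def objective(x):
    # Two peaks: an easy mediocre one near 0, a better one near 4.
    return math.exp(-x ** 2) + 2.0 * math.exp(-(x - 4) ** 2)

def search(copy_leader, starts=1000, steps=200):
    xs = [random.uniform(-2.0, 6.0) for _ in range(starts)]
    for _ in range(steps):
        leader = max(xs, key=objective)
        nxt = []
        for x in xs:
            if copy_leader:
                x = leader                   # the "rational" move: imitate the best
            cand = x + random.gauss(0, 0.1)  # small local modification
            nxt.append(cand if objective(cand) > objective(x) else x)
        xs = nxt
    return len({round(x, 1) for x in xs})    # rough count of regions still occupied

print("copying the leader:", search(True), "distinct regions")
print("attached to own start:", search(False), "distinct regions")
```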
It would be interesting if the free market did not actually reach an optimal equilibrium when its agents were purely rational, because such agents would copy the more successful agents so faithfully that no risks would be taken. There is some evidence of this in the monotony of the movies and video games that large companies produce.
You might object that the evidence for the advantages of competition is best interpreted as evidence of our inability to manage large complex structures effectively. We are so bad at it that even a stupid evolutionary algorithm can do better, despite all the duplication and wasted effort that so obviously involves. Companies developing competing products to fill a niche, each in ignorance of the others’ efforts, often are exactly the stupid waste of time they seem to be. In the future, the objection goes, our management skills will improve.
This is the argument for communism. Why should we resurrect it? What conditions will change so that this now-unworkable approach will work in the future? I don’t think there are any such conditions that don’t require stripping your superintelligence of most of the possible niches where smaller consciousnesses could reside inside it.