The reason for that is that the AIs that are worrying are those with human-like levels of ability.
The AIs that are worrying have to beat the (potentially much simpler) partial AIs, which can reach autistic-savant-like levels of ability and beyond in the fields most relevant to being actually powerful. You can't focus just on the human-level AGIs when you consider the risks. The AGIs have to be able to technologically outperform contemporary human civilization to a significant extent. That would not happen if both AGIs and humans are substantially bottlenecked on running essentially the same highly optimized (possibly self-optimized) non-general-purpose algorithms to solve domain-specific problems.
hence the details of its design are less important than meta considerations.
The meta considerations in question look identical to the least effective branches of philosophy.
I think the problem is "how to convince philosophers that high intelligence will not automatically imply certain goals", i.e. that they are being incorrectly meta.
I think so too, albeit in a different way: I do not think that high intelligence will automatically imply that the goals are within the class of "goals which we have no clue how to define mathematically but which are really easy to imagine".
Since we don’t have any idea how the first AGI will be built (assuming it can be built),
why bother focusing down on the current details when we’re pretty certain they won’t be relevant?
I have trouble parsing the logical structure of this argument. The fact that we don't have any idea how the first AGI will be built wouldn't make reasoning that employs faulty concepts relevant. Furthermore, being certain that something is irrelevant without having studied it is a very Dunning-Kruger-prone form of thought.
Furthermore, I can't see how in the world you can be certain that it is irrelevant that (for example) the AI has to work efficiently in a peer-to-peer topology with very substantial lag (i.e. no needless simulation of other nodes of itself, significant ignorance of the content of other nodes, local nodes lacking sight of the global picture, etc.) when it comes to how it will interact with other intelligences. We truly do not know that a hyper-morality won't fall out of this as a technological solution, considering that our own morality was produced as a solution to the problem of cooperation. (A toy sketch of the constraint follows below.)
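To make that constraint concrete, here is a minimal toy sketch (purely illustrative; all names are made up, and this is not anyone's actual design) of a node that acts on its own state plus lagged summaries of its peers, without simulating the other nodes or having any global view:

```python
# Toy sketch only: a distributed-agent node that never simulates its peers.
# It keeps coarse, stale summaries of them and decides from its local view.

import time
from dataclasses import dataclass, field


@dataclass
class PeerSummary:
    """Coarse, lagged summary of a remote node -- not a simulation of it."""
    node_id: str
    summary: dict       # e.g. aggregate statistics the peer chose to share
    sent_at: float      # timestamp; by arrival this is already stale


@dataclass
class Node:
    node_id: str
    local_state: dict = field(default_factory=dict)
    peer_summaries: dict = field(default_factory=dict)  # node_id -> PeerSummary

    def receive(self, msg: PeerSummary) -> None:
        # Keep only the latest (still lagged) summary per peer.
        prev = self.peer_summaries.get(msg.node_id)
        if prev is None or msg.sent_at > prev.sent_at:
            self.peer_summaries[msg.node_id] = msg

    def act(self) -> dict:
        # Decisions use local state plus stale peer summaries; there is no
        # global picture and no attempt to model peers' internals.
        staleness = {pid: time.time() - s.sent_at
                     for pid, s in self.peer_summaries.items()}
        return {"node": self.node_id,
                "known_peers": list(staleness),
                "staleness_sec": staleness}
```

The point of the sketch is only that each node's decision procedure is bounded by ignorance of and lag behind the rest of the system, which is the kind of structural fact the argument says cannot be dismissed as irrelevant.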
Also, I can't see how it can be irrelevant that (as a good guess) an AGI is ultimately a mathematical function that calculates outputs from inputs using elementary operations, and that any particular instance of the AGI is a machine computing this function. That's a meta consideration built from the ground up rather than from a concept of monolithic 'intelligence'. 'Symbol grounding' may be a logical impossibility (and the feeling that symbols are grounded may well be a delusion that works via fallacies); in any case, we don't see how it can be solved. It's like free will: a lot of people feel very sure that they have something in their mind that's clearly not compatible with reductionism. Well, I think there can be a lot of other things we feel very sure we have which are incompatible with reductionism in less obvious ways.
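As a purely illustrative sketch of that framing (the policy and names below are placeholders, not anyone's actual proposal), the agent can be written as a function from an input history to an output, with a running instance being whatever machine happens to evaluate that function:

```python
# Toy sketch only: "agent as function", instance as the machine evaluating it.

from typing import Callable, Sequence

Observation = bytes
Action = bytes

# The agent, abstractly: a map from the full history of inputs to an output.
AgentFunction = Callable[[Sequence[Observation]], Action]


def example_policy(history: Sequence[Observation]) -> Action:
    # Arbitrary placeholder decision rule built from elementary operations.
    return b"wait" if len(history) < 3 else b"act"


def run_instance(agent: AgentFunction, stream: Sequence[Observation]) -> list[Action]:
    """One particular machine computing the function over an input stream."""
    history: list[Observation] = []
    outputs: list[Action] = []
    for obs in stream:
        history.append(obs)
        outputs.append(agent(history))
    return outputs


print(run_instance(example_policy, [b"x", b"y", b"z", b"w"]))
# -> [b'wait', b'wait', b'act', b'act']
```

Nothing in this picture appeals to a monolithic 'intelligence'; whatever the system does is whatever this function computes on its inputs.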
edit: to summarize, my opinion is that everything here is far, far, far too speculative to warrant investigation. It's like trying to prevent the Hindenburg disaster, the bombing of Dresden, the atomic bombings of Hiroshima and Nagasaki, and the risk of nuclear war by thinking of the flying carpet as the flying vehicle (because bird-morphizing is not cool).
the AI has to work efficiently in a peer-to-peer topology with very substantial lag (i.e. no needless simulation of other nodes of itself, significant ignorance of the content of other nodes, local nodes lacking sight of the global picture, etc.) when it comes to how it will interact with other intelligences. We truly do not know that a hyper-morality won't fall out of this as a technological solution, considering that our own morality was produced as a solution to the problem of cooperation.
Yes. The SIAI worldview doesn't seem to pay much attention to how morality necessarily evolved as the cooperative glue necessary for the social super-organism meta-transition.
edit: to summarize, my opinion is that everything here is far, far, far too speculative to warrant investigation.
Well, my opinion is that this is far too dangerous (compared with other risks to humanity) not to investigate. Philosophical tools are weak, but they have yet to prove weak enough that we should shelve the ongoing project.
It seems to me that (a) you are grossly overestimating the productivity of symbolic manipulation over a significant number of symbols with highly speculative meanings, and (b) you do not seem to dedicate due effort to investigating existing software, or to verifying the relevance of those symbols and improving them. Symbolic manipulation is only as relevant as the symbols being manipulated.