Nobody knows what’s “friendly” (you can have “godly” there, etc. - with more or less the same effect).
Worse, it may easily turn out that killing all humanimals instantly is actually the OBJECTIVELY best strategy for any “clever” Superintelligence.
It may even be proven that “too much intelligence/power” (incl. “dumb” AIs) in the hands of humanimals with their DeepAnimal brains (“values”, reward function) is a guaranteed fail, leading sooner or later to some self-destructive scenario. At least up to now it pretty much looks like this, even to an untrained eye.
Most probably the problem will not be artificial intelligence, but natural stupidity.
Nobody knows what’s “friendly” (you can have “godly” there, etc. - with more or less the same effect).
By common usage in this subculture, the concept of Friendliness has a specific meaning-set attached to it that implies a combination of 1) a know-it-when-I-see-it isomorphism to common-usage ‘friendliness’ (e.g. “I’m not being tortured”), and 2) a deeper sense in which the universe is being optimized according to our own criteria by a more powerful optimization process. Here’s a better explanation of Friendliness than I can convey. You could also substitute the more modern word ‘Aligned’ for it.
Worse, it may easily turn out that killing all humanimals instantly is actually the OBJECTIVELY best strategy for any “clever” Superintelligence.
I would suggest reading about the following:
- Paperclip Maximizer
- Orthogonality Thesis
- The Mere Goodness Sequence (however, in order to understand it well you will want to read the other Sequences first)
I really want to emphasize the importance of engaging with a decade-old corpus of material about this subject.

The point of these links is that there is no objective morality that any randomly designed agent will naturally discover. An intelligence can accrete around any terminal goal that you can think of.
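To make that orthogonality point concrete, here is a deliberately toy sketch (the world model, goals, and names below are invented for illustration, not taken from the linked posts): the same generic search loop pursues whichever terminal goal it happens to be handed, and nothing in the optimization machinery cares which goal that is.

```python
# Toy illustration: an optimizer's competence is independent of its terminal goal.
from dataclasses import dataclass

@dataclass(frozen=True)
class World:
    paperclips: int = 0
    smiles: int = 0
    spare_matter: int = 10  # resources the agent can convert into either

def successors(w: World):
    """One-step actions available in this toy world."""
    if w.spare_matter > 0:
        yield World(w.paperclips + 1, w.smiles, w.spare_matter - 1)  # make a paperclip
        yield World(w.paperclips, w.smiles + 1, w.spare_matter - 1)  # make someone smile
    yield w  # do nothing

def plan(world: World, utility, steps: int = 10):
    """A completely generic greedy planner: it only knows how to push a number upward."""
    for _ in range(steps):
        world = max(successors(world), key=utility)
    return world

def maximize_paperclips(w: World) -> int:
    return w.paperclips

def maximize_smiles(w: World) -> int:
    return w.smiles

# The identical machinery is equally "intelligent" in the service of either goal.
print(plan(World(), maximize_paperclips))  # World(paperclips=10, smiles=0, spare_matter=0)
print(plan(World(), maximize_smiles))      # World(paperclips=0, smiles=10, spare_matter=0)
```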
This is a side issue, but your persistent use of the neologism “humanimal” is probably costing you weirdness points and detracting from the substance of the points you make. Everyone here knows humans are animals.
Most probably the problem will not be artificial intelligence, but natural stupidity.
Agreed.

Exactly ZERO.
Zero is not a probability! You cannot be infinitely certain of anything!
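For what it’s worth, the formal reason behind that slogan: under Bayesian updating, a probability of exactly 0 (or 1) can never be revised by any evidence, so it behaves like infinite certainty rather than an ordinary degree of belief.

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = \frac{P(E \mid H) \cdot 0}{P(E)} = 0 \quad \text{whenever } P(H) = 0 \text{ and } P(E) > 0.$$

Symmetrically, a hypothesis assigned probability 1 can never be pushed below 1, which is why treating 0 and 1 as reachable probabilities amounts to claiming no possible evidence could ever change your mind.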