If AI is naturally far more difficult than intelligence enhancement, no harm done
I should probably write a more detailed response to Eliezer’s argument at some point. But for now it seems worth pointing out that if UFAI is of comparable difficulty to IA, but FAI is much harder (as seems plausible), then attempting to build FAI would cause harm by diverting resources away from IA and contributing to the likelihood of UFAI coming first in other ways.
What if UFAI (of the dangerous kind) is incredibly difficult compared to harmless but useful AI, such as a system that can analytically (not by mere brute-forcing) find the inputs to any computable function that maximize its output, and which, for example, understands ODEs?
We could cure every disease, including mortality, with it; we could use it to improve itself; and we could use it to design the machinery for mind uploading, all with comparatively little effort, since it would take over much of the cognitive workload. But it won’t help us make the ‘utility function’ in the SI sense (paperclips, etc.), as that is a problem of definition.
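To make the contrast concrete, here is a minimal sketch (in Python) of the kind of interface such a ‘math AI’ might expose; the function name and the toy grid are purely illustrative assumptions, not anything from the original comment. The placeholder body merely brute-forces a finite set of candidates, whereas the system described above would find the maximizing input analytically for arbitrary computable objectives.

```python
# Hypothetical interface sketch only: the system being imagined would solve
# this analytically for any computable objective; this stub just brute-forces
# a finite candidate set so the interface is concrete.
from typing import Callable, Iterable, Tuple


def maximize(objective: Callable[[float], float],
             candidates: Iterable[float]) -> Tuple[float, float]:
    """Return (best_input, best_output) of `objective` over `candidates`."""
    best_x, best_y = None, float("-inf")
    for x in candidates:
        y = objective(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y


if __name__ == "__main__":
    # Toy usage: maximize -(x - 2)^2 + 3 over a coarse grid on [-5, 5].
    grid = [i / 100.0 for i in range(-500, 501)]
    x, y = maximize(lambda x: -(x - 2.0) ** 2 + 3.0, grid)
    print(x, y)  # roughly 2.0, 3.0
```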
I feel that the term ‘unfriendly AI’ is a clever rhetorical device. The above-mentioned math AI is not friendly, but neither is it unfriendly. Several such units could probably be combined to cobble together a natural language processing system as well, though nothing like ‘hearing a statement and then adopting a real-world goal matching its general gist’.
Cousin_it (who took a position similar to yours) and Nesov had a discussion about this, and I tend to agree with Nesov. But perhaps this issue deserves a more extensive discussion. I will give it some thought and maybe write a post.
The discussion you link is purely ideological: pessimistic, narrow-minded cynicism about the human race (on Nesov’s side) versus the normal view, with no justification whatsoever for either side.
The magical optimizer allows for space colonization (probably), cures for every disease, solutions to energy problems, and so on. We do not have as much room for intelligent improvement when it comes to destroying ourselves: the components for deadly diseases come pre-made by evolution, nuclear weapons have already been invented, and so on. The capacity for destruction is bounded by what we have to lose (and we already have the capacity to lose everything), while the capacity for growth is bounded by the much larger value of what we may gain.
Sure, the magical friendly AI is better than anything else. So is a flying carpet better than a car.
When you focus so much on the notion that others are stupid, you forget how hostile the universe we live in is, and you neglect how important it is to save ourselves from external-ish factors. As long as viruses like the common cold and flu can exist and spread widely, it is only a matter of time until a terrible pandemic kills an enormous number of people (and potentially cripples the economy). We haven’t even gotten rid of dangerous parasites yet; we aren’t even at the top of the food chain, really, if you count parasites. We are also stuck on a rock hurtling through space full of rocks, and we can’t go anywhere.
What if, as I suspect, UFAI is much easier than IA, where IA is at the level you’re hoping for? Moreover, what evidence can you offer that researchers of von Neumann’s intelligence face a significantly smaller difficulty gap between UFAI and FAI than those of mere high intelligence? For some determinacy, let “significantly smaller difficulty gap” mean that von Neumann level intelligence gives at least twice the probability of FAI, conditional on GAI.
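Spelling that stipulation out (the notation below is ad hoc, introduced only for clarity, not taken from the original comment):

```latex
% Ad-hoc formalization of the stipulated "significantly smaller difficulty gap".
\[
  P(\mathrm{FAI} \mid \mathrm{GAI},\ \text{von Neumann-level researchers})
  \;\ge\; 2 \cdot
  P(\mathrm{FAI} \mid \mathrm{GAI},\ \text{merely high-intelligence researchers})
\]
```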
Basically, I think you overestimate the value of intelligence.
Which is not to say that a parallel track of IA might not be worth a try.
Moreover, what evidence can you offer that researchers of von Neumann’s intelligence face a significantly smaller difficulty gap between UFAI and FAI than those of mere high intelligence? For some determinacy, let “significantly smaller difficulty gap” mean that von Neumann level intelligence gives at least twice the probability of FAI, conditional on GAI.
If it’s the case that even researchers of von Neumann’s intelligence cannot attempt to build FAI without creating unacceptable risk, then I expect they would realize that (assuming they are not less rational than we are) and find even more indirect ways of building FAI (or of optimizing the universe for humane values in general), for example by building an MSI-2.
I had a post about this.