Not disrupting complex systems doesn’t seem to me to be a universal human value (just as Greenpeace’s value system is not universally ours, either). But you’re right, treating an AI as just another kind of grey goo is probably not a good approach.
The problem is that it will still be us who create that AI, so it will end up having values related to us. It would take a deliberate effort on our part to build something that isn’t a member of the FAI-like sphere you wrote about (on which I agree with pangel’s comment), for example by ordering it to leave us alone and build stuff out of Jupiter instead. But then… what’s the point? If this AI were to prevent any further AI development on Earth, that would be a nice case of “ugly just-not-friendly-enough AI messing with humanity”; but if it weren’t, then some other AI developed later could still end up converting the planet to paperclips.
We have international treaties to this effect. Greenpeace just assigns it a particularly high value, compared to the rest of us, who assign it a much smaller one. Still, if we had fewer resource and R&D limitations, we would be able to preserve animals much better: the value of animals as animals would stay the same, while the cost of alternative ways of acquiring the resources would be lower.
As for the effort to build something that’s not a member of the FAI-like sphere, that’s where the majority of real effort to build AI lies today. Look at the real projects that use techniques with known practical spinoffs (neural networks) and that have the computing power: Blue Brain, for instance. The FAI effort is a microscopic, neglected fraction of overall AI effort.
Also, preventing paperclippers doesn’t strike me as a particularly bad scenario. A smarter AI wouldn’t need to use clumsy, bureaucracy-style mechanisms like forbidding all AI development.