I’ve thought of this from the angle of the Fermi paradox. AFAIK, Fermi thought war was a major filter. Spam is a minor indicator that individual sociopathy could be another filter as individual power increases. How far are we from home build-a-virus kits?
The major hope [1] I can see is that any of the nano or bio tech which could be used to destroy the human race will have a run-up period, and there will be nano and bio immune systems which might be good enough that the human race won’t be at risk, even though there may be large disasters.
[1] Computer programs seem much more able to self-optimize than nano and bio systems. Except that, of course, a self-optimizing AI would use nano and bio methods if they seemed appropriate.
This is not a cheering thought. I think the only reasonably popular ideology which poses a major risk is the “humanity is a cancer on the planet” sort of environmentalism—it seems plausible that a merely pretty good self-optimizing AI tasked with eliminating the human race for the sake of other living creatures would be a lot easier to build than an FAI, and it might be possible to pull a group of people together to work on it.
“Planet-cancer” environmentalists don’t own server farms or make major breakthroughs in computer science, unless they’re several standard deviations above the norm in both logistical competence and hypocrisy. Accordingly, they’d be working with techniques someone else developed. It’s true that a general FAI would be harder to design than even a specific UFAI, but an AI with a goal along the lines of ‘restore Earth to its pre-humanity state and then prevent humans from arising, without otherwise disrupting the glorious purity of Nature’ probably isn’t easier to design than an anti-UFAI with the goal ‘identify other AIs that are trying to kill us all and destroy everything we stand for, then prevent them from doing so, minimizing collateral damage while you do so,’ while the latter would have more widespread support and therefore more resources available for its development.
You’re adding constraints to the “humanity is a cancer” project which make it a lot harder. Why not settle for “wipe out humanity in a way that doesn’t cause much damage and let the planet heal itself”?
The idea of an anti-UFAI is intriguing. I’m not sure it’s much easier to design than an FAI.
I think the major barrier to the development of a “wipe out humans” UFAI is that the work would have to be done in secret.
It seems to me that an anti-UFAI that does not also prevent the creation of FAIs would, necessarily, be just as hard to make as an FAI. Identifying an FAI without having a model of what one is that’s good enough to build one yourself seems implausible.
Am I wrong?
You’re at least plausible.
An anti-UFAI could have terms like ‘minimal collateral damage’ in its motivation that would cause it to prioritize stopping faster or more destructive AIs over slower or friendlier ones, voluntarily limit its own growth, accept ongoing human supervision, and cleanly self-destruct under appropriate circumstances.
An FAI is expected to make the world better, not just keep it from getting worse, and as such would need to be trusted with far more autonomy and long-term stability.
I’d also be worried about:
depressed microbiologists
religious fanatics who have too much trust that ‘God will protect them’ from their virus
Buddhists who lose their memetic immune system and start taking the ‘material existence is inherently undesirable’ aspect of their religion seriously, or for that matter a practitioner of an Abrahamic religion who takes the idea of heaven seriously.
Buddhists don’t seem to go bad that way. I’m not sure that “material existence is undesirable” is a fair description of the religion—what people seem to conclude from meditation is that most of what they thought they were experiencing is an illusion.