Right. So, if we are playing the game of giving counter-intuitive technical meanings to ordinary English words, humans have thrived for millions of years—with their “UnFriendly” peers and their “UnFriendly” institutions. Evidently, “Friendliness” is not necessary for human flourishing.
I agree with this part of Chrysophylax’s comment: “It’s not necessary when the UnFriendly people are humans using muscle-power weaponry.” Humans can be non-Friendly without immediately destroying the planet because humans are a lot weaker than a superintelligence. If you gave a human unlimited power, they would almost certainly make the world vastly worse than it currently is. We should be at least as worried, then, about giving an AGI arbitrarily large amounts of power, until we’ve figured out reliable ways to safety-proof optimization processes.
It’s not necessary when the UnFriendly people are humans using muscle-power weaponry. A superhumanly intelligent self-modifying AGI is a rather different proposition, even with only today’s resources available. Given that we have no reason to believe molecular nanotech is impossible, an AI that is even slightly UnFriendly could be a disaster.
Consider the situation where the world finds out that DARPA has finished an AI (for example). Would you expect America to release the source code? Given our track record on issues like evolution and whether American citizens need to arm themselves against the US government, how many people would consider it an abomination and/or a threat to their liberty? What would the self-interested response of every dictator (for example, Kim Jong Il’s successor) with nuclear weapons be? Even a Friendly AI poses a danger until fighting against it is not only useless but obviously useless, and making an AI Friendly is, as has been explained, really freakin’ hard.
I also take issue with the statement that humans have flourished. We spent most of those millions of years being hunter-gatherers. “Nasty, brutish and short” is the phrase that springs to mind.