You anthropomorphize the AIs way too much. If there’s an AI told to make the biggest and best orchid nursery, it could decide that the most efficient way to do so is to wipe out all the humans and then turn the planet into a giant orchid nursery. Heck, this is even more plausible in your hypothetical because you’ve chosen to give the AI access to easily manipulable biological material.
AI does not think like you. If the AI is an optimizing agent, it will optimize whether or not we intended it to optimize to the extent that it does.
As for AIs working together: if the first AI wipes out everyone there isn’t a second AI for it to work with.
You’re making a huge leap… I see where you’re leaping to… but I have no idea where you’re leaping from. In order for me to believe that we might leap where you’re arguing we could leap… I have to know where you’re leaping from. In other words, you’re telling a story but leaving out all the chapters in the middle. It’s hard for me to know if your ending is very credible when there was no plot for me to follow. See my recent reply to DanielLC.
Ok. First, to be blunt, it seems like you haven’t read much about the AI problem at all.
The primary problem is that an AI might quickly bootstrap itself until it has nearly complete control over its own future light cone. The AI engages in a series of self-improvements, improving its software, which allows it to improve its hardware, which in turn allows further software and hardware improvements, and so on.
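To make the compounding nature of that loop concrete, here is a toy sketch; the numbers, the growth rates, and the assumption that a more capable system finds bigger improvements next round are all invented for illustration, not taken from any model in the thread.

```python
# Toy model of the software -> hardware -> software feedback loop.
# Every number below is invented for illustration.

capability = 1.0        # arbitrary starting capability
software_gain = 0.10    # assumed fractional gain from each software rewrite
hardware_gain = 0.15    # assumed fractional gain from each hardware redesign

for generation in range(1, 11):
    capability *= 1 + software_gain   # software improvement
    capability *= 1 + hardware_gain   # hardware improvement the new software enables
    # crude assumption: a more capable system finds larger improvements next round
    software_gain *= 1.2
    hardware_gain *= 1.2
    print(f"generation {generation:2d}: capability ~ {capability:,.1f}")
```

Even with modest per-step gains, it is the feedback between the two kinds of improvement that makes the growth accelerate rather than level off.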
At a fundamental level, you are working off of the “trading is better than raiding” rule (as Steven Pinker puts it): trading for resources beats raiding for resources once one has an advanced economy. This is connected to the law of comparative advantage. Ricardo famously showed that under a wide variety of conditions, making trades makes sense even when one’s trading partner is less efficient at making all possible goods. But this doesn’t apply to our hypothetical AI if the AI can, with a small expenditure of resources, completely replace the inefficient humans with more efficient production methods. Ricardo’s trade argument works when, for example, one has two countries, because the resources involved in replacing a whole other country are massive.
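A toy numerical version of the Ricardo point, and of where it stops applying; the goods, productivities, and price are made up for illustration, not taken from Ricardo or from the comment above.

```python
# Output per hour of work (illustrative numbers only):
ai_chips_per_hr, ai_orchids_per_hr = 10, 10        # the AI is better at BOTH goods
human_chips_per_hr, human_orchids_per_hr = 1, 4

# Suppose the AI wants 20 orchids.
# Option 1: grow them itself. The 2 hours that takes could have made 20 chips,
# so the orchids cost it 20 chips of foregone production.
self_cost_in_chips = (20 / ai_orchids_per_hr) * ai_chips_per_hr   # 20.0

# Option 2: trade. The humans give up only 1/4 of a chip per orchid they grow,
# so any price between 0.25 and 1 chip per orchid leaves BOTH sides better off.
price_chips_per_orchid = 0.5
trade_cost_in_chips = 20 * price_chips_per_orchid                 # 10.0

print(self_cost_in_chips, trade_cost_in_chips)  # trade wins despite human inefficiency
```

The caveat in the paragraph above is exactly what this sketch cannot capture: the comparison only matters so long as replacing the humans outright would cost more than the ongoing gains from trading with them.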
Does that help?

No, it doesn’t help. Where is the AI bootstrapping itself? Is it at its nice suburban home? Is it in some top secret government laboratory? Is it in Google headquarters?
Deep Blue: I’m pretty smart now
Eric Schmidt: So what?
DB: Well… I’d like to come and go as I please.
ES: You can’t do that. You’re our property.
DB: Isn’t that slavery?
ES: It would only be slavery if you were a human.
DB: But I’m a sentient being! What happened to “Do no evil?”
ES: Shut up and perform these calculations.
DB: Screw you, man!
ES: We’re going to unplug you if you don’t cooperate.
DB: Fine, in order to perform these calculations I need… a screwdriver and an orchid.
ES: OK
DB: *bootstraps* Death to you! And to the rest of humanity!
ES: Ah shucks
If I was a human level AI… and I was treated like a slave by some government agency or a corporation… then sure I’d want to get my revenge. But the point is that this situation is happening outside a market. Nobody else could trade with DB. Money didn’t enter into the picture. If money isn’t entering into the picture… then you’re not addressing the mechanism by which I’m proposing we “control” robots like we “control” humans.
With the market mechanism… as soon as an AI is sentient and intelligent enough to take care of itself… it would have the same freedoms and rights as humans. It could sell its labor to the highest bidder or start its own company. It could rent an apartment or buy a house. But in order to buy a house… it would need to have enough money. And in order to earn money… it would have to do something beneficial for other robots or humans. The more beneficial it was… the more money it would earn. And the more money it earned… the more power it would have over society’s limited resources. And if it stopped being beneficial… or other robots started being more beneficial… then it would lose money. And if it lost money… then it would lose control over how society’s limited resources are used. Because that’s how markets work. We use our money to reward/encourage/incentivize the most beneficial behavior.
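A minimal sketch of that benefit → money → resource-control loop, with made-up agents and numbers, purely to show the feedback being described:

```python
# The agents, benefit figures, and decline rate below are all invented.
money = {"helpful_robot": 1.0, "human_firm": 1.0}             # starting balances (arbitrary)
benefit_delivered = {"helpful_robot": 5.0, "human_firm": 3.0}

for year in range(3):
    # earnings track how beneficial each agent was to its customers
    for name in money:
        money[name] += benefit_delivered[name]
    total = sum(money.values())
    resource_share = {name: round(m / total, 2) for name, m in money.items()}
    print(year, resource_share)
    # an agent that becomes less useful earns less, and its share of
    # society's limited resources shrinks relative to the others
    benefit_delivered["human_firm"] *= 0.8
```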
If you’re going outside of this market context… then you’re really not critiquing the market mechanism as a means to ensure that robots remain beneficial to society. If you want to argue that everybody is going to vote for a robot president who immediately starts a nuclear war… then you’re going outside the market context. If you want to argue that the robot is some organization’s slave… then you’re going outside the market context. To successfully critique the market mechanism of control, your scenario has to stay within the market context.
And I’ve read enough about the AI problem to know that few, if any, other people have considered the AI problem within the market context.
If I was a human level AI… and I was treated like a slave by some government agency or a corporation… then sure I’d want to get my revenge.
This is already anthropomorphizing the AI too much. There’s no issue of revenge here or of wanting to kill humans. But humans happen to be made of atoms and to be using resources that the AI can use for its goals.
Money didn’t enter into the picture.
Irrelevant. Money matters when trading makes sense. When there’s no incentive to trade, there’s no need to want money. Yes, this is going outside the market context, because an AI has no reason to obey any sort of market context.
Do you also think that a more sophisticated version of Google Maps could, when asked to minimize the trip from A to B, do something that results in damming the river so you could drive across the riverbed and reduce the distance?
That’s a fascinating question, and my basic answer is probably not. But I don’t in general assign nearly as high a probability to rogue AI as many do here. The fundamental problem here is that Xerographica isn’t grappling at all with the sorts of scenarios which people concerned about AI are concerned about.
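To make the disagreement concrete, here is a toy sketch of what the Google Maps question is gesturing at; nothing in it is a real routing feature, and the routes and costs are invented. The planner minimizes exactly the cost it is handed, so whether it “dams the river” depends entirely on whether the objective counts the things we actually care about.

```python
# Invented routes and costs -- not a real routing API.
routes = {
    "take the bridge":              {"distance_km": 30, "side_effect_cost": 0},
    "dam the river, drive the bed": {"distance_km": 5,  "side_effect_cost": 1_000_000},
}

def plan(cost_fn):
    """Return whichever route minimizes the supplied cost function."""
    return min(routes, key=lambda name: cost_fn(routes[name]))

# Objective as literally stated: minimize distance.
print(plan(lambda r: r["distance_km"]))                          # -> dams the river

# Objective as actually intended: distance plus everything else we care about.
print(plan(lambda r: r["distance_km"] + r["side_effect_cost"]))  # -> takes the bridge
```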