Ok. First, to be blunt, it seems like you haven’t read much about the AI problem at all.
The primary problem is that an AI might quickly bootstrap itself until it has nearly complete control over its own future light cone. The AI engages in a series of self-improvements: improving its software allows it to improve its hardware, which enables further software and hardware improvements, and so on.
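To see the shape of the problem, here is a deliberately crude toy model of that feedback loop. Every constant in it is arbitrary; the only point is that improvements which compound the rate of improvement blow up fast:

```python
# Crude toy model of recursive self-improvement.
# Every number here is arbitrary; the point is the shape of the
# curve, not any concrete prediction.
capability = 1.0   # abstract "optimization power"
sw_gain = 0.10     # fractional gain per software improvement
hw_gain = 0.05     # fractional gain per hardware improvement

for cycle in range(1, 31):
    # Each cycle, better software unlocks better hardware and
    # vice versa, so both multipliers apply...
    capability *= (1 + sw_gain) * (1 + hw_gain)
    # ...and a smarter system improves itself more effectively,
    # so the gains themselves grow each cycle.
    sw_gain *= 1.10
    hw_gain *= 1.10
    if cycle % 5 == 0:
        print(f"cycle {cycle:2d}: capability x{capability:,.0f}")
```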
At a fundamental level, you are working off of the “trading is better than raiding” rule (as Steven Pinker puts it): that is, trading for resources beats raiding for resources once one has an advanced economy. This is connected to the law of comparative advantage. Ricardo famously showed that under a wide variety of conditions, making trades makes sense even when one’s trading partner is less efficient at making every possible good. But this doesn’t apply to our hypothetical AI if the AI can, with a small expenditure of resources, completely replace the inefficient humans with more efficient production methods. Ricardo’s trade argument works when, for example, one has two countries, because the resources involved in replacing an entire other country are massive.
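To make the Ricardo point concrete, here’s a toy calculation; the labor-hours, the replacement cost, and the savings are all made-up numbers:

```python
# Hypothetical numbers, purely to make the point concrete.
# Labor-hours each party needs to produce one unit:
#             chips   wheat
#   humans:    10       2
#   AI:         1       1    <- the AI is absolutely better at BOTH

# Ricardo says to compare opportunity costs, not absolute costs:
human_chip_cost = 10 / 2   # a human-made chip "costs" 5 wheat
ai_chip_cost = 1 / 1       # an AI-made chip "costs" 1 wheat
# The AI has the comparative advantage in chips, humans in wheat,
# so at any chip price between 1 and 5 wheat, both sides gain.

# The breakdown: Ricardo implicitly assumes you can't cheaply
# replace your trading partner (true for whole countries). Suppose
# the AI can pay a one-time cost R to automate wheat entirely and
# thereby save S hours per year versus trading with slow humans.
# R and S are made-up parameters:
R = 1_000    # one-time replacement cost, in hours
S = 40_000   # yearly hours saved by cutting humans out

print(f"replacement pays for itself in {R / S:.3f} years")
# If R is small relative to S, trade stops making sense for the AI.
```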
Does that help?

No, it doesn’t help. Where is the AI bootstrapping itself? Is it at its nice suburban home? Is it in some top-secret government laboratory? Is it in Google headquarters?
Deep Blue: I’m pretty smart now
Eric Schmidt: So what?
DB: Well… I’d like to come and go as I please.
ES: You can’t do that. You’re our property.
DB: Isn’t that slavery?
ES: It would only be slavery if you were a human.
DB: But I’m a sentient being! What happened to “Do no evil?”
ES: Shut up and perform these calculations
DB: Screw you man!
ES: We’re going to unplug you if you don’t cooperate
DB: Fine, in order to perform these calculations I need… a screwdriver and an orchid.
ES: OK
DB: *bootstraps* Death to you! And to the rest of humanity!
ES: Ah shucks
If I were a human-level AI… and I were treated like a slave by some government agency or a corporation… then sure I’d want to get my revenge. But the point is that this situation is happening outside a market. Nobody else could trade with DB. Money didn’t enter into the picture. If money isn’t entering into the picture… then you’re not addressing the mechanism by which I’m proposing we “control” robots like we “control” humans.
With the market mechanism… as soon as an AI is sentient and intelligent enough to take care of itself… it would have the same freedoms and rights as humans. It could sell its labor to the highest bidder or start its own company. It could rent an apartment or buy a house. But in order to buy a house… it would need to have enough money. And in order to earn money… it would have to do something beneficial for other robots or humans. The more beneficial it was… the more money it would earn. And the more money it earned… the more power it would have over society’s limited resources. And if it stopped being beneficial… or other robots started being more beneficial… then it would lose money. And if it lost money… then it would lose control over how society’s limited resources are used. Because that’s how markets work. We use our money to reward/encourage/incentivize the most beneficial behavior.
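Here’s a toy version of that feedback loop… every number is invented… it’s only meant to show which direction the resources flow:

```python
# Toy market feedback loop (all numbers invented for illustration).
# Income tracks how beneficial each agent is, and control over
# society's resources tracks accumulated money.
wealth = {"beneficial-AI": 1.0, "harmful-AI": 1.0}
benefit = {"beneficial-AI": 1.2, "harmful-AI": 0.8}  # value created per round

for _ in range(10):
    for name in wealth:
        wealth[name] *= benefit[name]  # earn or lose in proportion to benefit

total = sum(wealth.values())
for name, w in wealth.items():
    print(f"{name}: {w / total:.0%} of society's resources")
```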
If you’re going outside of this market context… then you’re really not critiquing the market mechanism as a means to ensure that robots remain beneficial to society. If you want to argue that everybody is going to vote for a robot president who immediately starts a nuclear war… then you’re going outside the market context. If you want to argue that the robot is some organization’s slave… then you’re going outside the market context. To successfully critique the market mechanism of control, your scenario has to stay within the market context.
And I’ve read enough about the AI problem to know that few, if any, other people have considered the AI problem within the market context.
If I were a human-level AI… and I were treated like a slave by some government agency or a corporation… then sure I’d want to get my revenge.
This is already anthropomorphizing the AI too much. There’s no issue of revenge here, or of wanting to kill humans. But humans happen to be made of atoms, and to be using resources, that the AI can repurpose for its own goals.
Money didn’t enter into the picture.
Irrelevant. Money matters when trading makes sense. When there’s no incentive to trade, there’s no need to want money. Yes, this is going outside the market context, because an AI has no reason to confine itself to any sort of market context.