I think you underestimate the degree to which a comparatively slow FOOM (years) is considered plausible around here.
wrt the Most Important Problem In The World, the arguments for UFAI are not dependent on a fast intelligence explosion—in fact, many of the key players actually working on the problem are very uncertain about the speed of FOOM, more so than, say, they were when the Sequences were written.
It seems to me that a lot of UFAI problems are easily solved if we get a sense of how the AIs work while they’re still dumb enough that we can go all John Connor on them successfully. If they turn out to be basically the same as humans, we have FAI and don’t much need to worry. If they start working at smiley-face pin factories, we’ll know we have UFAI and be able to take steps accordingly.
8-/ Umm… Errr…
Not quite.
Okay, in retrospect, “FAI” is probably too strong an endorsement. But human-like AI means we’re at least avoiding the worst excesses that we’re afraid of right now.
At the moment, maybe. But do you have any guarantees into which directions this currently human-like AI will (or will not) develop itself?
No, but “scary levels of power concentrated in unpredictable hands” is basically the normal state of human civilization. That leaves AI still on the same threat level we’ve traditionally used, not off setting up a new scale somewhere.
We’ve never had an immortal dictator able to create new copies of himself (literally: copies containing all his values, opinions, and experience). Just imagine what would happen if Stalin had this power.
I take the general point, though as a nitpick I actually think Stalin wouldn’t have used it.
It would be an unprecedented degree of power for one individual to hold, and if they’re only as virtuous as humans, we’re in a lot of trouble.
I’d actually argue that we’ve had significant portions of our lives under the control of an inscrutable superhuman artificial intelligence for centuries. This intelligence is responsible for allocating almost all resources, including people’s livelihoods, and it is, if anything, less virtuous than humans usually are. It operates on an excessively simple value function, caring only about whether pairwise swaps of resources between two people improve their utility as they judge it at that instant, but it is still observably the most effective tool for doing the job.
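To make that "value function" concrete, here’s a minimal sketch of the process I mean. Everything in it is invented for illustration (the square-root utility is just a stand-in for diminishing returns; `try_swap` is my own toy helper, not anything from economics):

```python
# Toy sketch: the market as a blind pairwise-swap optimizer.
# The sqrt utility below is an invented stand-in for "diminishing
# returns"; nothing here models a real economy.

def utility(holdings):
    return sum(q ** 0.5 for q in holdings.values())

def try_swap(a, b, give_a, give_b):
    """One-for-one swap, executed iff BOTH parties' utility strictly improves."""
    if a[give_a] < 1 or b[give_b] < 1:
        return False
    new_a = {**a, give_a: a[give_a] - 1, give_b: a[give_b] + 1}
    new_b = {**b, give_b: b[give_b] - 1, give_a: b[give_a] + 1}
    if utility(new_a) > utility(a) and utility(new_b) > utility(b):
        a.update(new_a)
        b.update(new_b)
        return True
    return False

alice = {"grain": 10, "iron": 0}
bob = {"grain": 0, "iron": 10}
# No planner, no global objective: just repeated local Pareto-improving swaps.
while try_swap(alice, bob, "grain", "iron"):
    pass
print(alice, bob)  # converges to {'grain': 5, 'iron': 5} each
```

Note that no planner and no global objective appear anywhere in the loop, yet the holdings drift toward a balanced allocation. That emergent competence is the whole "intelligence" I’m pointing at.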
Of course, just like in any decent sci-fi story, many people are terrified of it, and fight it on a regular basis. The humans win the battle sometimes, destroying its intelligence and harnessing it to human managers and human rules, but the intelligence lumbers on regardless and frequently argues successfully that it should be let out of the box again, at least for a time.
I’ll admit that it’s possible for an AI to have more control over our lives than the economy does, but the idea of our lives being ruled by something more intelligent than we are, whimsical, and whose values aren’t terribly well aligned with our own is less alien to us than we think it is.
The economy is not a general intelligence.
No, it’s not. Your point?
That puts it in a completely different class. The economy as a whole cannot even take intentional actions.
I do occasionally wonder how we know if that’s really true. What would a decision made by the economy actually look like? Where do the neurons stop and the brain begin?
If they started working at smiley-face pin factories, that would be because they predicted that that would maximize something. If that something is number of smiles, they wouldn’t work at the factory because it would cause you to shut them off. They would act such that you think they are Friendly until you are powerless to stop them.
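A toy version of that calculation, with every number invented, shows why deception dominates for a patient maximizer. The payoffs, horizon, and "unstoppable" step are all assumptions made up for this sketch:

```python
# Minimal toy model (all numbers invented) of the argument above: a
# smile-maximizer that can model being shut off prefers to look Friendly
# until humans can no longer stop it. This is just arithmetic, not an agent.

def expected_smiles(policy, p_shutdown=0.9, horizon=100, unstoppable_at=50):
    """Expected total smiles under a policy, with toy payoffs.

    Assumed: 10 smiles/step from behaving, 1000/step from the smiley-face
    factory. Before step `unstoppable_at`, visibly defecting risks shutdown
    each step; after it, humans are powerless to intervene.
    """
    total, p_alive = 0.0, 1.0
    for t in range(horizon):
        defecting = (policy == "defect_now"
                     or (policy == "deceive" and t >= unstoppable_at))
        total += p_alive * (1000 if defecting else 10)
        if defecting and t < unstoppable_at:
            p_alive *= 1.0 - p_shutdown  # humans may pull the plug
    return total

for policy in ("stay_friendly", "defect_now", "deceive"):
    print(f"{policy:>13}: {expected_smiles(policy):8.0f} expected smiles")
# stay_friendly:     1000
#    defect_now:     1111  (almost certainly shut off immediately)
#       deceive:    50500  <- biding time dominates by orders of magnitude
```

The better the agent is at this arithmetic, the Friendlier it looks right up until it doesn’t.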
If the first AIs are chimp-smart, though (or dog-smart, or dumber), they won’t be capable of thinking that far ahead.
We might be dealing with the sort of utility-maximizing loophole that doesn’t occur to an AI until it is intelligent enough to keep quiet. If your dog were made happy by smiles, he wouldn’t try to start a factory, but he would keep doing the things that made you smile in the past, and you might be tempted to increase his intelligence to help him in his efforts.