Okay, in retrospect, “FAI” is probably too strong an endorsement. But human-like AI means we’re at least avoiding the worst excesses that we’re afraid of right now.
At the moment, maybe. But do you have any guarantees about the directions in which this currently human-like AI will (or will not) develop itself?
No, but “scary levels of power concentrated in unpredictable hands” is basically the normal state of human civilization. That leaves AI on the same threat scale we’ve always lived with, not off on some new scale of its own.
We’ve never had an immortal dictator able to create new copies of himself (literally: copies carrying all his values, opinions, and experience). Just imagine what would have happened if Stalin had had that power.
I take the general point, though as a nitpick I actually think Stalin wouldn’t have used it.
It would be an unprecedented degree of power for one individual to hold, and if they’re only as virtuous as humans, we’re in a lot of trouble.
I’d actually argue that significant portions of our lives have been under the control of an inscrutable superhuman artificial intelligence for centuries. This intelligence is responsible for allocating almost all resources, including people’s livelihoods, and it is, if anything, less virtuous than humans usually are. It operates on an excessively simple value function, caring only about whether pairwise swaps of resources between two people improve their utility as they judge it at that instant, yet it is still observably the most effective tool for the job.
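To make the analogy concrete, here’s a minimal toy sketch of that value function as I’m describing it. Everything in it (the agents, the random linear utilities, the trade rule) is my own invented illustration, not a standard model: the system’s only rule is that a swap happens when both parties judge it an improvement at that instant, and total utility drifts upward anyway.

```python
import random

N_GOODS = 4

# Hypothetical agent: private random weights over goods, plus a bundle of goods.
def make_agent():
    return {
        "weights": [random.random() for _ in range(N_GOODS)],
        "bundle": [random.randint(0, 5) for _ in range(N_GOODS)],
    }

def utility(agent):
    # Each agent judges its bundle by its own weights -- nothing global.
    return sum(w * q for w, q in zip(agent["weights"], agent["bundle"]))

def try_swap(a, b, i, j):
    """a offers one unit of good i for one unit of b's good j.
    The trade fires only if BOTH judge themselves better off right now."""
    if a["bundle"][i] == 0 or b["bundle"][j] == 0:
        return False
    gain_a = a["weights"][j] - a["weights"][i]  # a loses i, gains j
    gain_b = b["weights"][i] - b["weights"][j]  # b loses j, gains i
    if gain_a > 0 and gain_b > 0:
        a["bundle"][i] -= 1; a["bundle"][j] += 1
        b["bundle"][j] -= 1; b["bundle"][i] += 1
        return True
    return False

agents = [make_agent() for _ in range(10)]
before = sum(utility(a) for a in agents)
for _ in range(10_000):  # repeated random pairwise encounters
    a, b = random.sample(agents, 2)
    try_swap(a, b, random.randrange(N_GOODS), random.randrange(N_GOODS))
after = sum(utility(a) for a in agents)
print(f"total utility by the agents' own lights: {before:.1f} -> {after:.1f}")
```

No agent, and no central planner, ever looks at the total; it rises purely as a side effect of the one myopic rule.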
Of course, just like in any decent sci-fi story, many people are terrified of it, and fight it on a regular basis. The humans win the battle sometimes, destroying its intelligence and harnessing it to human managers and human rules, but the intelligence lumbers on regardless and frequently argues successfully that it should be let out of the box again, at least for a time.
I’ll admit that it’s possible for an AI to have more control over our lives than the economy does, but the idea of our lives being ruled by something more intelligent than we are, whimsical, and whose values aren’t terribly well aligned with our own is less alien to us than we think it is.
The economy is not a general intelligence.
No, it’s not. Your point?
That puts it in a completely different class. The economy as a whole cannot even take intentional actions.
I do occasionally wonder how we could know whether that’s really true. What would a decision made by the economy actually look like? Where do the neurons stop and the brain begin?