“You can’t enslave something by creating it with a certain set of desires which you then allow it to follow.”
So if Africans had been engineered to believe that they existed in order to serve Europeans, Europeans wouldn’t actually be enslaving them in the process? And what about the daughter whose father raised her in such a way that she genuinely wants to have sex with him? These things aren’t so far off from reality. You’re saying there is no real moral significance to either event: it’s not slavery, black people just know their place; and it’s not abuse, she’s simply been raised to have a genuine sexual desire for her father. What Eliezer is proposing might, in fact, be worse. Imagine black people and children actually being engineered for these purposes, without even the possibility of a revelation along the lines of “Maybe my conditioning was unfair.”
“Accidents happen.
CFAI 3.2.6: The Riemann Hypothesis Catastrophe
CFAI 3.4: Why structure matters”
These (fictional) accidents happen in scenarios where the AI already has enough power to turn the solar system into “computronium” (i.e. unlimited access to physical resources), which is unreasonable. Evidently nobody thinks to try to stop it either, say by cutting its power or blowing it up. I guess the thought is that AGIs will be immune to bombs and hardware disruptions by means of sheer intelligence (much as our intelligence makes us immune to bullets), so once one starts trying to destroy the solar system there’s literally nothing you can do.
It would take a few weeks, possibly months or years, to destroy even just the planet Earth, even assuming all the planning had already been done.
The level of “intelligence” (if you can call it that) you’re talking about with an AI that is able to draw up plans to destroy Earth (or the solar system), evade detection or convince humans to help it, actually enact its plans, and survive the whole thing, is beyond the scope of realistic dreams for the first AI. It amounts to belief in a trickster deity, one which only FAI, the benevolent god, can save you from.
“Comment by Michael Vassar”
More of the same. Of course bad things can happen when you give something unlimited power, but that’s not what we should be talking about.
“Not if aliens are extremely rare.”
That’s true. But how rare is extremely rare? Are you grasping the astronomical spatial and historical scales involved in a statement such as “… takes over the entire lightcone preventing any interesting life from ever arising anywhere”?