We’ve evolved something called “morality” that helps protect us from abuses of power like that. I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.
We do still have problems with abuses of power, but I think we have well-developed ways of spotting this and stopping it.
> I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.
That’s exactly the high awareness I was talking about, and most people don’t have it. I wouldn’t be surprised if most people here failed at it, if it presented itself in their real lives.
I mean, are you saying you wouldn’t save the burning orphans?
> We do still have problems with abuses of power, but I think we have well-developed ways of spotting this and stopping it.
We have checks and balances on political power, but those work between entities on roughly equal political footing and don’t do much for anyone outside that process. We can collectively use physical power to control some criminals who abuse their own limited powers. But we don’t have anything to deal with supervillains.
There is fundamentally no check on violence except more violence, and 10,000 accelerated uploads could quickly become able to win a war against the rest of the world.
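As a rough back-of-envelope (the speedup factor below is a purely hypothetical assumption, not a figure from this thread), even a modest subjective speedup turns 10,000 uploads into an enormous amount of research and planning capacity per calendar year:

```python
# Illustrative arithmetic only; the speedup factor is an assumed, hypothetical number.
uploads = 10_000   # emulated minds, as in the comment above
speedup = 100      # hypothetical subjective speedup relative to biological humans

subjective_person_years_per_year = uploads * speedup
print(f"{subjective_person_years_per_year:,} subjective person-years per calendar year")
# -> 1,000,000 subjective person-years per calendar year
```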
> It is the fashion in some circles to promote funding for Friendly AI research as a guard against the existential threat of Unfriendly AI. While this is an admirable goal, the path to Whole Brain Emulation is in many respects more straightforward and presents fewer risks.
> I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.
> That’s exactly the high awareness I was talking about, and most people don’t have it. I wouldn’t be surprised if most people here failed at it, if it presented itself in their real lives.
Most people would not act like a Friendly AI; therefore, “Whole Brain Emulation” only leads to “fewer risks” if you know exactly which brains to emulate and have the ability to choose which brain(s).
If whole brain emulation (for your specific brain) is expensive, the emulated brain might end up coming from a person who starts wars and steals from other countries so he can get rich.
Most people prefer that 999 people from their country live even if it costs the lives of 1000 people from another country, given no other known differences between those 1999 people. Also, unlike a “Friendly AI”, their choices are not consistent. Most people will leave the choice at whatever was going to happen if they did not choose, even if they know there are no other consequences (like jail) for choosing. If the 1000 people were going to die, unknown to any of them, to save the 999, then most people would think “It’s none of my business; maybe God wants it to be that way” and let the extra person die. A “Friendly AI” would maximize lives saved if nothing else is known about all those people.
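A minimal sketch of that contrast (the function names and framing are illustrative assumptions, not anything specified in this thread): the “Friendly AI” rule picks whichever outcome leaves fewer people dead, while the typical human rule keeps whatever the default outcome was.

```python
# Illustrative sketch of two decision rules for the 999-vs-1000 example above.

def friendly_ai_choice(default_deaths: int, alternative_deaths: int) -> str:
    # Maximize lives saved (minimize deaths) when nothing else is known about the people.
    return "intervene" if alternative_deaths < default_deaths else "do nothing"

def typical_human_choice(default_deaths: int, alternative_deaths: int) -> str:
    # Status-quo / omission bias: leave the outcome at whatever was already going to happen.
    return "do nothing"

# Default: 1000 people die (saving the 999). Intervening: the 999 die instead.
print(friendly_ai_choice(default_deaths=1000, alternative_deaths=999))    # intervene
print(typical_human_choice(default_deaths=1000, alternative_deaths=999))  # do nothing
```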
There are many examples showing that most people are not close to acting like a “Friendly AI”, even if we removed all the bad influences on them. We should build software to be a “Friendly AI” instead of emulating brains, and emulate brains only for other reasons, except maybe for the few brains that already think like a “Friendly AI”. It’s probably safer to do it completely in software.
> Most people would not act like a Friendly AI; therefore, “Whole Brain Emulation” only leads to “fewer risks” if you know exactly which brains to emulate and have the ability to choose which brain(s).
I agree entirely that humans are not friendly. Whole brain emulation is humanity-safe if there’s never a point at which one person or small group can run much faster than the rest of humanity (including other uploads).
The uploads may outpace us, but if they can keep each other in check, then uploading is not the same kind of human-values threat.
Even an upload singleton is not a total loss if the uploads have somewhat benign values. It is a crippling of the future, not an erasure.