For example, an upload could probably make more copies of itself if it deleted its capacities for humor and empathy.
If you were an upload, would you make copies of yourself? Where’s the fun in that? The only reason I could see doing it is if I wanted to amass knowledge or do a lot of tasks… and if I did that, I’d want the copies to get merged back into a single “me” so I would have the knowledge and experiences. (Okay, and maybe some backups would be good to have around). But why worry about how many copies you could make? That sounds suspiciously Clippy-like to me.
In any case, I think we’d be more likely to be screwed over by uploads’ human qualities and biases, than by a hypothetical desire to become less human.
In a world of uploads which contains some that do want to copy themselves, selection obviously favors the replicators, with tragic results absent a singleton.
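To make that selection claim concrete, here is a minimal toy simulation (my own illustrative sketch, not anything proposed in the thread; the starting counts, copy rate, and generation count are arbitrary assumptions): even a tiny minority of uploads that copy themselves soon makes up nearly the whole upload population.

```python
# Toy illustration only: assumed numbers, not a model anyone in the thread proposed.
# A handful of "replicator" uploads double each generation; a large population of
# non-replicators makes no copies. Selection favors the replicators by sheer count.

def replicator_share(replicators=10, non_replicators=1_000_000,
                     copies_per_generation=2, generations=30):
    """Return the replicators' share of all uploads after `generations`."""
    for _ in range(generations):
        replicators *= copies_per_generation  # each replicator copies itself
        # non-replicators stay constant: they choose not to copy
    return replicators / (replicators + non_replicators)

if __name__ == "__main__":
    for g in (10, 20, 30):
        print(f"after {g} generations: {replicator_share(generations=g):.1%} replicators")
```

With these made-up numbers the replicators go from a rounding error to roughly 91% of all uploads by generation 20 and essentially all of them by generation 30, which is the “tragic results absent a singleton” dynamic in miniature.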
If you were an upload, would you make copies of yourself?
Yes. I’d make as many copies as was optimal for maximising my own power. I would then endeavor to gain dominance over civilisation, probably by joining a coalition of some sort. This may include creating an FAI that could self-improve more effectively than I could and serve to further my ends. When a stable equilibrium was reached and it was safe to do so, I would go back to following this:
Where’s the fun in that? The only reason I could see doing it is if I wanted to amass knowledge or do a lot of tasks… and if I did that, I’d want the copies to get merged back into a single “me” so I would have the knowledge and experiences.
If right now is the final minutes of the game, then the early WBE era is the penalty shootout. You don’t mess around having fun until you and those you care about are assured of living to see tomorrow.
If you were an upload, would you make copies of yourself? Where’s the fun in that?
You have a moral obligation to do it.
Working in concert, thousands of you could save all the orphans from all the fires, and then go on to right a great many wrongs. You have many, many good reasons to gain power.
So unless you’re very aware that you will gain power and then abuse power, you will take steps to gain power.
Even from a purely selfish perspective: If 10,000 of you could take over the world and become an elite of 10,000, that’s probably better than your current rank.
We’ve evolved something called “morality” that helps protect us from abuses of power like that. I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.
We do still have problems with abuses of power, but I think we have well-developed ways of spotting this and stopping it.
I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.
That’s exactly the high awareness I was talking about, and most people don’t have it. I wouldn’t be surprised if most people here failed at it, if it presented itself in their real lives.
I mean, are you saying you wouldn’t save the burning orphans?
We do still have problems with abuses of power, but I think we have well-developed ways of spotting this and stopping it.
We have checks and balances of political power, but that works between entities on roughly equal political footing, and doesn’t do much for those outside of that process. We can collectively use physical power to control some criminals who abuse their own limited powers. But we don’t have anything to deal with supervillains.
There is fundamentally no check on violence except more violence, and 10,000 accelerated uploads could quickly become able to win a war against the rest of the world.
It is the fashion in some circles to promote funding for Friendly AI research as a guard against the existential threat of Unfriendly AI. While this is an admirable goal, the path to Whole Brain Emulation is in many respects more straightforward and presents fewer risks.
I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.
That’s exactly the high awareness I was talking about, and most people don’t have it. I wouldn’t be surprised if most people here failed at it, if it presented itself in their real lives.
Most people would not act like a Friendly AI; therefore, “Whole Brain Emulation” only leads to “fewer risks” if you know exactly which brains to emulate and have the ability to choose which brain(s).
If whole brain emulation (for your specific brain) is expensive, the emulated brain might come from a person who starts wars and steals from other countries so he can get rich.
Most people would prefer that 999 people from their country live even at the cost of 1000 people from another country dying, given no other known differences between those 1999 people. Also, unlike a “Friendly AI”, their choices are not consistent. Most people will leave the choice at whatever was going to happen if they did not choose, even if they know there are no other effects (like jail) from choosing. If the 1000 people were going to die, unknown to any of them, to save the 999, then most people would think “It’s none of my business; maybe god wants it to be that way” and let the extra 1 person die. A “Friendly AI” would maximize lives saved if nothing else is known about all those people.
There are many examples of why most people do not come close to acting like a “Friendly AI”, even if we removed all the bad influences on them. We should build software to be a “Friendly AI” instead of emulating brains, and emulate brains only for other reasons, except maybe the few brains that already think like a “Friendly AI”. It’s probably safer to do it completely in software.
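As a concrete rendering of the contrast being drawn, here is a toy sketch (my own illustration; the function names and data layout are assumptions, and only the 999/1000 numbers come from the comment above) of the “leave whatever was going to happen” rule attributed to most people versus the “maximize lives saved” rule attributed to a Friendly AI.

```python
# Toy illustration only: contrasts the two decision rules described above.

def status_quo_choice(default, alternative):
    """The rule attributed to most people: leave whatever was going to happen."""
    return default

def maximize_lives_choice(default, alternative):
    """The rule attributed to a 'Friendly AI': pick whichever outcome saves more lives."""
    return max(default, alternative, key=lambda outcome: outcome["lives_saved"])

default = {"label": "let the 1000 die so the 999 are saved", "lives_saved": 999}
alternative = {"label": "save the 1000 instead", "lives_saved": 1000}

print(status_quo_choice(default, alternative)["label"])      # let the 1000 die so the 999 are saved
print(maximize_lives_choice(default, alternative)["label"])  # save the 1000 instead
```

The same inputs produce different choices under the two rules, which is the inconsistency the comment is pointing at.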
Most people would not act like a Friendly AI; therefore, “Whole Brain Emulation” only leads to “fewer risks” if you know exactly which brains to emulate and have the ability to choose which brain(s).
I agree entirely that humans are not friendly. Whole brain emulation is humanity-safe if there’s never a point at which one person or small group can run much faster than the rest of humanity (including other uploads).
The uploads may outpace us, but if they can keep each other in check, then uploading is not the same kind of human-values threat.
Even an upload singleton is not a total loss if the uploads have somewhat benign values. It is a crippling of the future, not an erasure.
It’s probably easier to cooperate with copies of yourself than with other people, but you also stand to gain less as all of you start out with the same skill set and the same talents.
But why worry about how many copies you could make? That sounds suspiciously Clippy-like to me.
This is, I think, an echo of Robin Hanson’s ‘crack of a future dawn’, where hyper-Darwinian pressures to multiply cause the discarding of unuseful mental modules like humor or empathy which take up space.
In a world of uploads which contains some that do want to copy themselves, selection obviously favors the replicators, with tragic results absent a singleton.
Note that emulations can enable the creation of a singleton; it doesn’t necessarily have to exist in advance.
Yes, but that’s only likely if the first uploads are FAI researchers.
This is, I think, an echo of Robin Hanson’s ‘crack of a future dawn’, where hyper-Darwinian pressures to multiply cause the discarding of unuseful mental modules like humor or empathy which take up space.
Where do you get the idea that humor or empathy are not useful mental abilities?!
From AngryParsley...