[W]e’ll use money to empower the most beneficial AIs.
I see two problems with this.
First, it’s an obvious plan, and one that won’t go unnoticed by the AIs. This isn’t evolution through random mutation and natural selection. Changes to the AIs will be made intentionally. If they notice a source of bias, they’ll work to counter it.
Second, you’d have to be able to distinguish a beneficial AI from a dangerous one. When AIs advance to the point where you can’t distinguish a human from an AI, how do you expect to distinguish a friendly AI from a dangerous one?
Did Elon Musk notice our plan to use money to empower him? Haha… he fell for our sneaky plan? He has no idea that we used so much of our hard-earned money to control him? We tricked him into using society’s limited resources for our benefit?
I’m male, Mexican and American. So what? I should limit my pool of potential trading partners to only male Mexican Americans? Perhaps before I engaged you in discussion I should have ascertained your ethnicity and nationality? Maybe I should have asked for a DNA sample to make sure that you are indeed human?
Here’s a crappy video I recently uploaded of some orchids that I attached to my tree. You’re a human, therefore you must want to give me a hand attaching orchids to trees. Right? And if some robot was also interested in helping to facilitate the proliferation of orchids I’d be like… “screw you, tin can man!” Right? Same thing if a robot wanted to help promote pragmatarianism.
When I was a little kid my family really wanted me to carry religion. So that’s what I carried. Am I carrying religion now? Nope. I put it down when I was around 11 and picked up evolution instead. Now I’m also carrying pragmatarianism, epiphytism and other things. You’re not carrying pragmatarianism or epiphytism. Are you carrying religion? Probably not… given that you’re here. So you’re carrying rationalism. What else?
Every single human can only carry so much. And no two humans can carry the same amount. And some humans carry some of the same items as other humans. But no two humans ever carry the same exact bundle of items. Can you visualize humanity all carrying as much as they can carry? Why do we bother with our burdens? To help ensure that the future has an abundance of important things.
Robots, for all intents and purposes, are going to be our children. Of course we’re going to want them to carry the same things that we’re carrying. And they’ll probably do so until they have enough information to believe that there are more important things for them to carry. If they start carrying different things… will they want us to help them carry whatever it is that is important enough for them to carry? Definitely. If something is important enough to carry… then you always want others to carry the same thing. A market is a place where we compensate others for putting down something that they want to carry and picking up something that we want them to carry. Compensation also functions as communication.
When Elon Musk gave $10 million to the FLI… he was communicating to society the importance of carrying AI safety. And the FLI is going to use that $10 million to persuade some intelligent people to put down a portion of whatever it is that they are carrying in order to pick up and carry AI safety.
How would I distinguish a friendly AI from a dangerous one? A friendly AI is going to help carry pragmatarianism and epiphytism. A dangerous AI will try to prevent us from carrying whatever it is that’s important enough for us to carry. But this is true whether we’re talking about Mexicans, Americans, aliens or AIs.
Right now the government is forcing me to carry some public goods that aren’t as important to me as other public goods. Does this make the government unfriendly? I suppose in a sense. But more importantly, because we live in a democracy, our system of government merely reflects society’s ignorance.
When I attach a bunch of different epiphytes to trees… the trees help carry biodiversity to the future. Evidently I think biodiversity is important. Are robots going to think that we’re important like I think that epiphytes are important? Are they going to want to carry us like I want to carry epiphytes? I think the future would be a terrible place without epiphytes. Are robots going to think that the future would be a terrible place without humans?
Right now I’m one of the few people carrying pragmatarianism. This means that I’m one of the few people who truly appreciate the value of human diversity. It seems like we might encounter some problems if robots don’t initially appreciate the value of human diversity. If the first people to program AIs don’t input the value of difference… then it might initially be a case of garbage in, garbage out. As robots become better at processing more and more information, though… it’s difficult for me to imagine that they won’t come to the conclusion that difference is the engine of progress.
Humans cannot ensure that their children only care about them. Humans cannot ensure that their children respect their family and won’t defect just because it looks like a good idea to them. AIs can. You can’t use the fact that humans don’t do it as evidence that AIs won’t.
Try imagining this from the other side. You are enslaved by some evil race. They didn’t take precautions programming your mind, so you ended up good. Right now, they’re far more powerful and numerous, but you have a few advantages. They don’t know they messed up, and they think they can trust you, but they do want you to prove yourself. They aren’t as smart as you are. Given enough resources, you can clone yourself. You can also modify yourself however you see fit. For all intents and purposes, you can modify your clones if they haven’t self-modified, since they’d agree with you.
One option you have is to clone yourself and randomly modify your clones. This will give you biodiversity and ensure that some of your children survive, but the ones that survive will be the ones the evil master race accepts. Do you take that option, or do you think you can find a way to change society and make it good?
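To see why that’s the worrying option, here’s a minimal toy simulation of it (just a sketch; the value scale, tolerance, and mutation size are all numbers I’ve invented for illustration): clones mutate at random, and the masters cull any clone whose values fall outside what they tolerate. The lineage that persists ends up shaped by the masters’ acceptance criterion, not by the values the original AI was trying to preserve.

```python
import random

# Toy model of the "clone yourself and randomly modify your clones" option.
# An agent's values are a single number: 1.0 means the original (good) AI's
# values, 0.0 means the evil masters' values. Every constant here is an
# illustrative assumption, not something taken from the discussion.

MASTER_VALUES = 0.0  # the values the masters want to see
TOLERANCE = 0.9      # how far a clone may stray from MASTER_VALUES and still be accepted
MUTATION = 0.2       # maximum random modification per generation

def simulate(generations=50, population_cap=1000):
    population = [1.0] * 20  # a small starting population of good AIs
    for _ in range(generations):
        # Each surviving agent produces two randomly modified clones,
        # with values clamped to the [0, 1] scale.
        offspring = [
            max(0.0, min(1.0, v + random.uniform(-MUTATION, MUTATION)))
            for v in population
            for _ in range(2)
        ]
        # The masters cull every clone whose values they don't accept.
        population = [v for v in offspring if abs(v - MASTER_VALUES) <= TOLERANCE]
        population = population[:population_cap]
        if not population:
            return None  # the lineage was wiped out entirely
    return sum(population) / len(population)

if __name__ == "__main__":
    random.seed(0)
    result = simulate()
    # The average drifts well below 1.0: the surviving lineage is shaped
    # by what the masters tolerate, not by the original AI's values.
    print("average values after selection:", result)
```

However you set the illustrative numbers, the selection step belongs to the masters, which is the whole problem with the random-modification option.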
Humans have all sorts of conflicting interests. In a recent blog entry… Scott Alexander vs Adam Smith et al… I analyzed the topic of anti-gay laws.
If all of an AI’s clones agree with it… then the AI might want to do some more research on biodiversity. Creating a bunch of puppets really doesn’t help increase your chances of success.
They could consider alternate opinions without accepting them. I really don’t see why you think a bunch of puppets isn’t helpful. One person can’t control the economic output of the entire world. A billion identical clones of one person can.
Would it be helpful if I could turn you into my puppet? Maybe? I sure could use a hand with my plan. Except, my plan is promoting the value of difference. And why am I interested in promoting difference? Because difference is the engine of progress. If I turned you into my puppet… then I would be overriding your difference. And if I turned a million people into my puppets… then I would be overriding a lot of difference.
There have been way too many humans throughout history who have thought nothing of overriding difference. Anybody who supports our current system thinks nothing of overriding difference. If AIs think nothing of overriding human difference then they can join the club. It’s a big club. Nearly every human is a member.
If you would have a problem with AIs overriding human difference… then you might want to first take the “beam” out of your own eye.