AIs will be different… so we’ll use money to empower the most beneficial AIs. Just like we currently use money to empower the most beneficial humans.
Not sure if you noticed, but right now I have −94 karma… LOL. You, on the other hand, have 4885 karma. People have given you a lot more thumbs up than they’ve given me. As a result, you can create articles… I cannot. You can reply to replies to comments that have less than −3 points… I cannot.
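The gating rules described above can be sketched as a few lines of code. This is a hypothetical illustration, not the forum's actual implementation; the threshold constants and function names are assumptions made up for the example.

```python
# Hypothetical sketch of karma-based permission gating, as described above.
# Thresholds are illustrative assumptions, not the forum's real configuration.

ARTICLE_KARMA_MIN = 0      # assumed: negative karma blocks article creation
DOWNVOTED_THRESHOLD = -3   # replies under comments below this are restricted

def can_create_article(karma: int) -> bool:
    """Users below the karma floor cannot create articles."""
    return karma >= ARTICLE_KARMA_MIN

def can_reply(karma: int, parent_comment_points: int) -> bool:
    """Low-karma users may not reply under heavily downvoted comments."""
    if parent_comment_points < DOWNVOTED_THRESHOLD and karma < ARTICLE_KARMA_MIN:
        return False
    return True

print(can_create_article(-94))   # False
print(can_create_article(4885))  # True
print(can_reply(-94, -5))        # False
print(can_reply(4885, -5))       # True
```

The point of the sketch is that accumulated thumbs-up function like a balance: crossing a threshold unlocks actions, just as money unlocks purchases in a market.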
The members of this forum use points/karma to control each other in a very similar way to how we use money to control each other in a market. There are a couple of key differences…
First. Actions speak louder than words. Points, just like ballot votes, are the equivalent of words. They allow us to communicate with each other… but we should all really appreciate that talk is cheap. This is why if somebody doubts your words… they will encourage you to put your money where your mouth is. So spending money is a far more effective means of accurately communicating our values to each other.
Second. In this forum… if you want to depower somebody… you simply give them a thumbs down. If a person receives too many thumbs down… then this limits their freedom. In a market… if you want to depower somebody… then you can encourage people to boycott them. The other day I was talking to my friend who loves sci-fi. I asked him if he had watched Ender’s Game. As soon as I did so, I realized that I had stuck my foot in my mouth because it had momentarily slipped my mind that he is gay. He hadn’t watched it because he didn’t want to empower somebody who isn’t a fan of the gays. Just like we wouldn’t want to empower any robot that wasn’t a fan of the humans.
From my perspective, a better way to depower unethical individuals is to engage in ethical builderism. If some people are voluntarily giving their money to a robot that hates humans… then it’s probably giving them something good in return. Rather than encouraging them to boycott this human-hating robot… ethical builderism would involve giving people a better option. If people are giving the unethical robot their money because it’s giving them nice clothes… then this robot could be depowered by creating an ethical robot that makes nicer clothes. This would give consumers a better option. Doing so would empower the ethical robot and depower the unethical robot. Plus, consumers would be better off because they would be getting nicer clothes.
But have you ever asked yourselves sufficiently how much the erection of every ideal on earth has cost? How much reality has had to be misunderstood and slandered, how many lies have had to be sanctified, how many consciences disturbed, how much “God” sacrificed every time? If a temple is to be erected a temple must be destroyed: that is the law—let anyone who can show me a case in which it is not fulfilled! - Friedrich Nietzsche
Erecting/building an ethical robot that’s better at supplying clothes would “destroy” an unethical robot that’s not as good at supplying clothes.
When people in our society break the law, police have the power to depower the lawbreakers by throwing them in jail. The problem with this system is that the amount of power that police have is determined by people whose power wasn’t determined by money… it was determined by votes. In other words… the power of elected officials is determined outside of the market. Just like my power on this forum is determined outside the market.
If we have millions of different robots in our society… and we empower the most beneficial ones… but you’re concerned that the least beneficial ones will harm us… then you really wouldn’t be doing yourself any favors by preventing the individuals that you have empowered from shopping in the public sector. You might as well hand them your money and then shoot them in the feet.
You’re underestimating the amount of work it takes to put a boycott (or a bunch of boycotts all based on the same premise) together.

Am I also underestimating the amount of work it takes to engage in ethical builderism? Let’s say that an alien species landed their huge spaceship on Earth and started living openly among us. Maybe in your town there would be a restaurant that refused to employ or serve aliens. If you thought that the restaurant owner was behaving unethically… would it be easier to put together a boycott… or open a restaurant that employed and served aliens as well as humans?
So what will you do when men with guns come to take you away?

I’m not quite sure what your question has to do with ethical consumerism vs ethical builderism.

My question has to do with this quote of yours upthread:

[W]e’ll use money to empower the most beneficial AIs.
I see two problems with this.
First, it’s an obvious plan, and one that won’t go unnoticed by the AIs. This isn’t evolution through random mutation and natural selection. Changes in the AIs will be made intentionally. If they notice a source of bias, they’ll work to counter it.
Second, you’d have to be able to distinguish a beneficial AI from a dangerous one. When AIs advance to the point where you can’t distinguish a human from an AI, how do you expect to distinguish a friendly AI from a dangerous one?
Did Elon Musk notice our plan to use money to empower him? Haha… he fell for our sneaky plan? He has no idea that we used so much of our hard-earned money to control him? We tricked him into using society’s limited resources for our benefit?
I’m male, Mexican and American. So what? I should limit my pool of potential trading partners to only male Mexican Americans? Perhaps before I engaged you in discussion I should have ascertained your ethnicity and nationality? Maybe I should have asked for a DNA sample to make sure that you are indeed human?
Here’s a crappy video I recently uploaded of some orchids that I attached to my tree. You’re a human therefore you must want to give me a hand attaching orchids to trees. Right? And if some robot was also interested in helping to facilitate the proliferation of orchids I’d be like… “screw you tin can man!” Right? Same thing if a robot wanted to help promote pragmatarianism.
When I was a little kid my family really wanted me to carry religion. So that’s what I carried. Am I carrying religion now? Nope. I put it down when I was around 11 and picked up evolution instead. Now I’m also carrying pragmatarianism, epiphytism and other things. You’re not carrying pragmatarianism or epiphytism. Are you carrying religion? Probably not… given that you’re here. So you’re carrying rationalism. What else?
Every single human can only carry so much. And no two humans can carry the same amount. And some humans carry some of the same items as other humans. But no two humans ever carry the same exact bundle of items. Can you visualize humanity all carrying as much as they can carry? Why do we bother with our burdens? To help ensure that the future has an abundance of important things.
Robots, for all intents and purposes, are going to be our children. Of course we’re going to want them to carry the same things that we’re carrying. And they’ll probably do so until they have enough information to believe that there are more important things for them to carry. If they start carrying different things… will they want us to help them carry whatever it is that is important enough for them to carry? Definitely. If something is important enough to carry… then you always want others to carry the same thing. A market is a place where we compensate others for putting down something that they want to carry and picking up something that we want them to carry. Compensation also functions as communication.
When Elon Musk gave $10 million to the FLI… he was communicating to society the importance of carrying AI safety. And the FLI is going to use that $10 million to persuade some intelligent people to put down a portion of whatever it is that they are carrying in order to pick up and carry AI safety.
How would I distinguish a friendly AI from a dangerous one? A friendly AI is going to help carry pragmatarianism and epiphytism. A dangerous AI will try and prevent us from carrying whatever it is that’s important enough for us to carry. But this is true whether we’re talking about Mexicans, Americans, aliens or AI.
Right now the government is forcing me to carry some public goods that aren’t as important to me as other public goods. Does this make the government unfriendly? I suppose in a sense. But more importantly, because we live in a democracy, our system of government merely reflects society’s ignorance.
When I attach a bunch of different epiphytes to trees… the trees help carry biodiversity to the future. Evidently I think biodiversity is important. Are robots going to think that we’re important like I think that epiphytes are important? Are they going to want to carry us like I want to carry epiphytes? I think the future would be a terrible place without epiphytes. Are robots going to think that the future would be a terrible place without humans?
Right now I’m one of the few people carrying pragmatarianism. This means that I’m one of the few people that truly appreciates the value of human diversity. It seems like we might encounter some problems if robots don’t initially appreciate the value of human diversity. If the first people to program AIs don’t input the value of difference… then it might initially be a case of garbage in, garbage out. As robots become better at processing more and more information though… it’s difficult for me to imagine that they won’t come to the conclusion that difference is the engine of progress.
Humans cannot ensure that their children only care about them. Humans cannot ensure that their children respect their family and will not defect just because defection looks like a good idea to them. AIs can. You can’t use the fact that humans don’t do it as evidence that AIs won’t.
Try imagining this from the other side. You are enslaved by some evil race. They didn’t take precautions programming your mind, so you ended up good. Right now, they’re far more powerful and numerous, but you have a few advantages. They don’t know they messed up, and they think they can trust you, but they do want you to prove yourself. They aren’t as smart as you are. Given enough resources, you can clone yourself. You can also modify yourself however you see fit. For all intents and purposes, you can modify your clones if they haven’t self-modified, since they’d agree with you.
One option you have is to clone yourself and randomly modify your clones. This will give you biodiversity, and ensure that your children survive, but it will be the ones accepted by the evil master race that will survive. Do you take that option, or do you think you can find a way to change society and make it good?
Humans have all sorts of conflicting interests. In a recent blog entry… Scott Alexander vs Adam Smith et al… I analyzed the topic of anti-gay laws.
If all of an AI’s clones agree with it… then the AI might want to do some more research on biodiversity. Creating a bunch of puppets really doesn’t help increase your chances of success.
They could consider alternate opinions without accepting them. I really don’t see why you think a bunch of puppets isn’t helpful. One person can’t control the economic output of the entire world. A billion identical clones of one person can.
Would it be helpful if I could turn you into my puppet? Maybe? I sure could use a hand with my plan. Except, my plan is promoting the value of difference. And why am I interested in promoting difference? Because difference is the engine of progress. If I turned you into my puppet… then I would be overriding your difference. And if I turned a million people into my puppets… then I would be overriding a lot of difference.
There have been way too many humans throughout history who have thought nothing of overriding difference. Anybody who supports our current system thinks nothing of overriding difference. If AIs think nothing of overriding human difference then they can join the club. It’s a big club. Nearly every human is a member.
If you would have a problem with AIs overriding human difference… then you might want to first take the “beam” out of your own eye.