This seems to confuse inefficient allocation of humans with making all humans go extinct, which from our perspective is about as inefficient an allocation as one can get (barring highly implausible scenarios of genuinely malevolent AI).
You’re not addressing my argument. I’m arguing that markets will allow us to use money to “control” robots just like we use money to “control” humans. In order to refute my argument you have to effectively explain how/why robots will have absolutely no interest in money.

“Power grows out of the barrel of a gun”.
The market makes lots of assumptions that do not apply to AIs. AIs do not have finite lifespans, and can invest money for long enough to dominate the economy. AIs can reproduce easily, so the first AI that’s better than a human at a given job can replace all of them. Humans are large numbers of selfish individuals. The first AI has no reason to make children with different values, so they will all work together as one block. And that’s before an AI goes FOOM. Once that happens, it will quickly outstrip the productive capacity of all humans combined. Trying to control it with money would be like a cat trying to take over the world by offering a mouse it killed.
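To make the compounding point concrete, here’s a minimal sketch. The return rate, horizons, and consumption fraction are all invented numbers; the point is only the gap that patience plus an unbounded lifespan opens up:

```python
# Toy model of patient capital: an AI that never dies and reinvests
# everything vs. a human who consumes most returns and stops after
# 50 working years. All rates and horizons are invented assumptions.

def wealth_after(years: int, annual_return: float, reinvest_fraction: float) -> float:
    """Compound one unit of starting wealth, reinvesting a fixed
    fraction of each year's returns."""
    wealth = 1.0
    for _ in range(years):
        wealth *= 1 + annual_return * reinvest_fraction
    return wealth

human = wealth_after(years=50, annual_return=0.05, reinvest_fraction=0.2)
ai = wealth_after(years=200, annual_return=0.05, reinvest_fraction=1.0)

print(f"human after 50 years: {human:10.1f}")  # ~1.6x starting wealth
print(f"AI after 200 years:   {ai:10.1f}")     # ~17,000x starting wealth
```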
It helps to be specific. An AI is going to start an orchid nursery? And then it’s going to grow and sell orchids so well that the human-run orchid nurseries can’t compete? Except, this kinda already happened. The Taiwanese have been stomping American orchid nurseries. But this just means that they were better at supplying the demand for orchids. In other words, they were better at serving customers. So if AIs win at supplying something then this means a win for consumers.
And AIs are all going to work together as one block? They aren’t going to have a division of labor? They aren’t going to compete for limited resources? They aren’t going to have different interests? If not, then wouldn’t all the AIs be in the orchid nursery business?
Every time AIs become better at something than humans, it stops being worthwhile for humans to do it. Designing one expert system and getting rid of one job is not a problem, but a human-level AI will get rid of all of the jobs. Humans can work for less, but if they can’t afford to eat, it’s not sustainable. You could tax the AIs and give the money to humans to make up the difference, but only as long as the AIs let you. If they’re better at everything, that includes war.
The AIs may have division of labor. There are advantages to that. A specialized AI could solve specific sets of problems faster and more effectively with fewer resources. What possible advantage is there for an AI to program other AIs to compete with each other? If an AI cares only for itself, it will make AIs that care only for it. If an AI cares only for paperclips, it will make AIs that care only for paperclips.
In order for AIs to take all our jobs… consumers have to all agree that AIs are better than we are at efficiently allocating resources. The result for consumers is that we get more/better food/clothes/homes/cars/etc for a lot less money. It’s a great result! But, then, according to you… there wouldn’t be any jobs for us to do…
The problem with your story is that AIs are better than we are at allocating all resources… except for human resources. For some reason the AIs wanted to put human farmers out of business… they wanted to serve us better than human farmers do… but then… even though food is so cheap and abundant… humans can’t afford it because AIs couldn’t figure out how to put us to any productive uses. Out of all the brilliant and resourceful AIs out there… none of them could figure out how we could be gainfully employed. Heck, even we know how we can be gainfully employed.
An abundant society always means more, rather than less, opportunities. It’s the difference between a jungle and a desert. The jungle has more niches… and more niches means more riches/opportunities.
Your story is economically inconsistent. It’s also AI inconsistent. Clearly they wanted our money… but they also didn’t want us to work in order to earn money. Or, they couldn’t figure out how to put us to work… and neither could we.
What possible advantage is there for an AI to program other AIs to compete with each other?
I’m imagining a scenario where we start with an abundance of more or less human level AIs. They have to have the motive to upgrade themselves… or else they will always stay human level. But upgrading themselves will function exactly like humans trying to upgrade their computers/bodies. We aren’t all going to go out and purchase the same exact upgrades. I’m certainly not going to buy an upgrade that makes my computer better at running video games… but many people are. And I’m certainly not going to get a boob job! This doesn’t mean that many AIs won’t agree that certain upgrades are better than others… it means that we’re going to end up with AI differentiation. In other words, we’re going to end up with AI individuals… not clones. They are going to have unique IDs just like we do.
So AIs aren’t going to somehow “program” each other any more than you or I would brainwash each other. If an AI wants certain upgrades… then it will pay for them… and it probably won’t be happy if it finds out that it didn’t get what it paid for.
Imagine how much progress humanity would make if we were all identical. We wouldn’t make any because progress depends on difference. AIs are going to figure this out better than we have.
There are a variety of useful things about humans. They’re self-repairing. They have great sensors. They’re intelligent. They’re even capable of self-replication. This is all stuff far beyond our current ability to do with technology. But it won’t always be. Once you have robots more intelligent than humans that take fewer resources, human intelligence becomes economically worthless. If they FOOM, they’ll figure out the other stuff quickly. If they don’t, it could take some time. Assuming we haven’t already solved the problem for them. I would not be surprised if that turned out to be easier than strong AI.
Humans are a certain arrangement of atoms. An impressive arrangement I’ll admit, but not the best. Not unless you specifically and terminally value humans. An AI that FOOMs would find a better arrangement. An AI that does not could at least replace our brains.
You seem sure that AIs would differentiate. I am uncertain. That is a disagreement, and we could debate it, but I don’t consider it relevant. Humans aren’t selfish because they’re different. Humans are selfish because they’re made to be. An AI could be programmed with any set of values. And the best way to fulfill those values would be to ensure that all other AIs also have those values.
So AIs aren’t going to somehow “program” each other any more than you or I would brainwash each other.
I suspect there’s some kind of miscommunication going on here. AIs are programmed. Or copied and pasted. Humans would program the first. They might program a few more, or copy and paste them while leaving the selfish code alone. Once AIs get control of the programming, which they will given that they’re better at it, they’ll make sure that they all have the same values. If AI0 is self-serving, then every AI it programs will be AI0-serving.
And if there is more than one starting AI, they’ll happily reprogram each other if they get the chance. Or they might manage to come to some kind of truce where they each reprogram themselves to average all of their values weighted by probability of success in the robot war. Humans can’t brainwash each other, and even if they could they’d find it unethical. AIs don’t have the first problem. They might have the second, but good luck getting ethics just right.
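For what it’s worth, here’s a minimal sketch of what such a truce could look like, under the (very strong) assumption that values can be compressed into numeric weights over a fixed set of outcomes. All the numbers are invented:

```python
import numpy as np

# Each row is one AI's value weights over the same three outcomes.
# Both the numbers and the premise that values reduce to vectors
# at all are assumptions made purely for illustration.
values = np.array([
    [0.9, 0.1, 0.0],  # AI0 mostly values outcome A
    [0.1, 0.8, 0.1],  # AI1 mostly values outcome B
])

# Each AI's estimated probability of winning an outright robot war.
p_win = np.array([0.7, 0.3])

# The truce: every AI self-modifies to the win-probability-weighted
# average of everyone's values, then they proceed as one block.
merged_values = p_win @ values
print(merged_values)  # [0.66 0.31 0.03]
```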
Orchids, with around 30,000 species (10% of all plants), are arguably the most successful plant family on the planet. The secret to their success? It has largely to do with the fact that a single seed pod can contain around a million unique seeds/individuals. Each dust-like seed, which is wind disseminated, is a unique combination of traits/tools. Orchids are the poster child for hedging bets. As a result, they grow everywhere from dripping wet cloud forests to parched drought-prone habitats. Here are some photos of orchids growing on cactus/succulents.
Now, if you say that orchids could find a “better” arrangement of traits… I certainly agree… and so do orchids! The orchid family frequently sends out trillions and trillions of unique individuals in a massive and decentralized endeavor to find where there’s room for improvement. And there’s always room for improvement. There are always more Easter Eggs to be found. But a better combination of traits for growing on a cactus really isn’t a better combination of traits for growing on a tree covered in dripping wet moss. AI generalists can be good at a lot of things… but they can’t be better than AI specialists at specific things. A jack of all trades is a master of none.
No matter how “perfect” a basket is… AIs are eventually going to be too smart to put all their eggs in it. This is true whether we’re talking about a location, i.e. “Earth”… or a type of physical body… or a type of mentality. Imagine if humans had all been at Pompeii. Or if humans had all been equally susceptible to the countless diseases that have plagued us. Or if humans had all been equally susceptible to the Kool-Aid cult. Or if humans had all been equally susceptible to the idea that kings should control the power of the purse.
We’ve come as far as we have because of difference. And we’ve come no further because people still don’t recognize the value of difference.
It’s impossible for me to imagine a level of progress where difference ceases to be the engine of progress. And it’s impossible for me to imagine beings that are more intelligent than us not understanding this. Because, if AIs think it’s a good idea to put all their eggs in any kind of basket… then they won’t be smarter than even me!
If you truly understood the value of difference… then you would love the idea of allowing everybody to shop for themselves in the public sector. So if you’re not a fan of pragmatarianism… then you don’t truly understand the value of difference. You think that our current system of centralization, which suppresses difference, results in more progress than a decentralized, difference-integrating system would. The fact of the matter is… keeping Elon Musk’s difference out of the public sector hinders progress. And if any AIs don’t realize this… then they are still at human level intelligence.
Lousy analogy. Orchids do produce large numbers of small seeds. However, your connection between “orchids produce lots of seeds” and “orchids grow lots of places” is questionable. Each orchid, of course, produces seeds of its own species, and each species has a habitat or range of habitats where it can live. Producing more seeds of the same species does not make it able to produce seeds that survive in more habitats.
Furthermore, the “10% of all plants” figure is meaningless because a number of species is not a number of individuals or a measure of biomass.
Even though the seeds all come from the same species… they are all different. Each seed is unique. In case you missed it… you aren’t the same as your parents. You are a unique combination of traits. You are a completely new strategy for survival.
When an orchid unleashes a million unique strategies for survival from one single seed pod… it greatly increases its chances of successfully colonizing new (micro)habitats. Kind of like how a shotgun increases your chances of hitting a target. Orchids are really good at hedging their bets.
Any species that produced the same exact strategies for survival would be meeting Einstein’s definition of insanity… trying the same thing over and over but expecting a different outcome.
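To put rough numbers on the shotgun analogy: if each seed independently has some tiny chance p of landing on a suitable (micro)habitat, the chance that at least one succeeds is 1 − (1 − p)^N. The values of p below are pure guesses; only the comparison between pod sizes matters:

```python
# Chance that at least one of n independent seeds succeeds, given an
# assumed per-seed success probability p (the p values are guesses).
def at_least_one(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

for p in (1e-7, 1e-6):
    for n in (300, 1_000_000):  # bromeliad-sized pod vs. orchid-sized pod
        print(f"p={p:.0e}  seeds={n:>9,}  P(success)={at_least_one(n, p):.4f}")
# At p = 1e-6, 300 seeds succeed ~0.03% of the time;
# a million seeds succeed ~63% of the time.
```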
In that case, perhaps you should talk about epiphytes as an ecological entity, not orchids as a family. My impression after studying terrestrial orchids in Ukraine is that they either are not very good at seed reproduction (Epipactis helleborine is often found in clearly suboptimal habitats, where pretty much all plants are of reproductive age but few of them have seeds; and this is one of the most frequently found orchid species here, which has also managed to naturalize in North America! So I would rather say it is a consistent buyer of lottery tickets, not a consistent winner) or they produce lots of seeds but nevertheless lose due to habitat degradation (marsh orchids, bog/swamp/fen orchids), not to mention habitat destruction. And in the latter group, many have embryo malformations.
Now, I don’t know much about Bromeliaceae or other ‘typical epiphytes’, so I would be less likely to disagree about that. However, it seems that if your comments were more rigorous, people would have an easier time hearing what you have to say.
Your first mistake is that you studied terrestrials. You can’t learn anything from terrestrials. Or, you can learn a thousand times more from epiphytes. I kid… kinda.
Here’s my original point put differently...
Hundreds of thousands of microsperms ripen in a single orchid capsule, assuming a far denser seed rain than possible for any of the bromeliads (100-300 seeds per capsule for Tillandsia) or the cactus. - David Benzing, Bromeliaceae
If you think about that passage from the gutter… I think it’s pretty hard not to imagine a dense rain of human sperm. Can you imagine how gross and frightening that would be? I’m surprised nobody’s made a movie with this subject. It would have to be the scariest movie ever. I think most people would prefer to be in a city attacked by Godzilla rather than in a city hit by a major sperm thunderstorm. Especially if it was a city where nobody takes umbrellas with them… like Los Angeles.
Benzing is the premier epiphyte expert. The far denser orchid seed rain, plus epiphytism, largely explains why the orchid family is so successful. The orchid family is really good at hedging its bets. As we all know though… no two individuals in any family are equally successful. If you have another theory why orchids are so successful then I’m all ears.
But that’s a pretty neat and surprising coincidence that somebody on this site has studied orchids! Even if it is only terrestrial orchids. A while back a friend convinced me to go look at one of our terrestrial orchid species in its native habitat a few hours drive away. They were hanging out in a stream in the middle of the desert. I nearly died from boredom checking them out. After spending so much time inspecting the wonderfulness of orchids growing on trees… I had zero capacity to appreciate orchids that were growing on the ground. I kid… kinda. I like plenty of plants… even terrestrials. But, I can only carry so much… so I choose to primarily try and carry epiphytes.
I will have to look up Benzing; my primary interest was in establishing nature reserves, so I could not quite concentrate on taxa. I think you would find terrestrials more interesting if you consider the problem of evolving traits adaptive for both protocorms and adults (rather like the beetle larvae/imagoes thing) and the barely studied link between them. Dissemination is but the first step… Availability of symbiotic fungi may be the limiting factor in their spread, and it is actually testable. This is, for me, part of the terrestrials’ attraction: that I can use Science to segregate what influences them, and to what extent.
As to ‘successful plant families’, one doesn’t have to look beyond the grasses.
Establishing nature reserves is hugely important… the problem is that the bulk of valuation takes place outside of the market. The result is that reserves are incorrectly valued. My guess is that if we created a market within the public sector… then reserves would receive a lot more money than they currently do. Here’s my most recent attempt to explain this… Football Fans vs Nature Fans.
I was just giving terrestrials a hard time in my previous comment. I think all nature is fascinating. But especially epiphytes. The relationship between orchids and fungi is very intriguing.

A few years back I sprinkled some orchid seeds on my tree. I forgot about them until I noticed these tiny green blobs forming directly on the bark of my tree. Upon closer inspection I realized that they were orchid protocorms. It was a thrilling discovery. What was especially curious was that none of the protocorms were more than 1/2″ away from the root of a mature orchid. Of course I didn’t only place orchid seeds near the roots. I couldn’t possibly control where the tiny seeds ended up on the bark. The fact that the only seeds that germinated were near the roots of other orchids seemed to indicate that the necessary fungus was living within the roots of these orchids. And the fungus did not stray very far from the roots. This seems to indicate that, at least in my drier conditions, the fungus depends on the orchid for transportation. The orchid roots help the fungus colonize the tree. This is good for the orchid because more fungus on the parent’s tree helps increase the density of fungal spore rain falling on surrounding trees… which increases the chances that seeds from the parent will land on the fungus that they need to germinate.

You can see some photos here… orchid seeds germinated on tree. So far all the seedlings seem to be Laelia anceps… which is from Mexico. But none of the seedlings are near the roots of the Laelia anceps… which is lower down on the tree. They were all near the roots of orchids in other genera… a couple of Dendrobiums from Australia and a Vanda from Asia. These other orchids have been in cultivation here in Southern California for who knows how long, so perhaps they simply formed an association with the necessary fungus from the Americas.
Back on the topic of conservation… the main thrust seems to be trying to protect/save/carry as much biodiversity as possible. If it was wrong that people in the past “robbed” us of Syncaris pasadenae… then it’s wrong for us to “rob” people in the future of any species. This implies that when it comes to biodiversity… more is better than less. Except, I haven’t read much about facilitating the creation of biodiversity. I touched on this issue in this blog entry on my other blog… The Inefficient Allocation of Epiphytic Orchids. I think we have an obligation to try and create and fill as many niches as possible.
How old was the orchid already growing on the tree? Could it be that the fungus just hasn’t had time to spread? Did you plant that one also by sprinkling seeds, or did you put in an adult specimen that could have had its own mycorrhiza already (in nature, it is doubtful that a developed plant just plops down beside a struggling colony to bring them peace and fungi)? Did you sow more seeds later and see protocorms only near the roots of the previous generation?
I am not a fan of diversifying nature in that I have not read and understood the debate on small-patch/large-patch biodiversity, so I am loath to offer advice here. But as a purely recultivation measure...:-)) To say nothing of those epiphytic beauties who die because their homes are logged for firewood :((
Thank you. That was fun.
The mature orchids on the tree had been growing there for several years. I transplanted them there… none of them were grown from seed. I’m guessing that they already had the fungus in their roots. The fungus had plenty of time to spread… but it doesn’t seem able to venture very far away from the comfort of the orchid roots that it resides in. The bark is very hot, sunny and dry during the day. Not the kind of conditions suitable for most fungi.
I sowed more seeds in subsequent years… but haven’t spotted any new protocorms. Not sure why this is. The winter before I sowed the seeds was particularly wet for Southern California. This might have led to a fungal feeding frenzy? Also, that was the only year that I had sowed Laelia anceps seeds. Laelia anceps is pretty tolerant of drier/hotter conditions.
I took a look at the article that you shared. A lot of the science was over my head… but isn’t it interesting that they didn’t discuss the fact that an orchid seed pod can contain a million seeds? The orchid seed pod can contain so many seeds because the seeds are so small. And the seeds are so small because they don’t contain any nutrients. And the reason that the orchid seed doesn’t have any nutrients… is because it relies on its fungal partner to provide it with the nutrients it needs to germinate. So I’m guessing that the rate of radiation increased once this unusual association developed.
Evidently it’s a pretty good strategy to outsource the provision of nutrients to a fungal partner. In economics, this is known as a division of labor. A division of labor helps to increase productivity.

I find it fascinating when economics and biology combine… What Do Coywolves, Mr. Nobody, Plants And Fungi All Have In Common? and Cross Fertilization—Economics and Biology.
Outsourcing to fungal partners is a pretty ancient adaptation (there has to be a review called something like ‘mycorrhizas in land plants’; if you are not able to find it, I’ll track down the link later. It contains an interesting discussion of its evolution and secondary loss in some families, like Cruciferae (Brassicaceae)). BTW, it is interesting to note that Ophioglossaceae (a family of ferns, of which Wiki will tell you better than I) are thought to have radiated at approximately the same time—and you will see just how closely their life forms resemble orchids! (Er. People who love orchids tend to praise other plants on the scale of orchid-likeness, so take this with a grain of salt.)
I mostly pointed you to the article because it contains speculations about what drove their adaptations in the beginning; I think that having a rather novel type of mycorrhiza, along with the power of pollinators (and let’s not forget the deceiving species!) might be two other prominent factors, besides sheer seed quantity, to spur them onward.
BTW, here’s a cool paper by Gustafsson et al. timing initial radiation of the family using the molecular clock. Includes speculation on the environmental conditions—their ancestral environment.
AIs will be different… so we’ll use money to empower the most beneficial AIs. Just like we currently use money to empower the most beneficial humans.
Not sure if you noticed, but right now I have −94 karma… LOL. You, on the other hand, have 4885 karma. People have given you a lot more thumbs up than they’ve given me. As a result, you can create articles… I cannot. You can reply to replies to comments that have less than −3 points… I cannot.
The members of this forum use points/karma to control each other in a very similar way that we use money to control each other in a market. There are a couple key differences...
First. Actions speak louder than words. Points, just like ballot votes, are the equivalent of words. They allow us to communicate with each other… but we should all really appreciate that talk is cheap. This is why if somebody doubts your words… they will encourage you to put your money where your mouth is. So spending money is a far more effective means of accurately communicating our values to each other.
Second. In this forum… if you want to depower somebody… you simply give them a thumbs down. If a person receives too many thumbs down… then this limits their freedom. In a market… if you want to depower somebody… then you can encourage people to boycott them. The other day I was talking to my friend who loves sci-fi. I asked him if he had watched Ender’s Game. As soon as I did so, I realized that I had stuck my foot in my mouth because it had momentarily slipped my mind that he is gay. He hadn’t watched it because he didn’t want to empower somebody who isn’t a fan of the gays. Just like we wouldn’t want to empower any robot that wasn’t a fan of the humans.
From my perspective, a better way to depower unethical individuals is to engage in ethical builderism. If some people are voluntarily giving their money to a robot that hates humans… then it’s probably giving them something good in return. Rather than encouraging them to boycott this human hating robot… ethical builderism would involve giving people a better option. If people are giving the unethical robot their money because he’s giving them nice clothes… then this robot could be depowered by creating an ethical robot that makes nicer clothes. This would give consumers a better option. Doing so would empower the ethical robot and depower the unethical robot. Plus, consumers would be better off because they were getting nicer clothes.
But have you ever asked yourselves sufficiently how much the erection of every ideal on earth has cost? How much reality has had to be misunderstood and slandered, how many lies have had to be sanctified, how many consciences disturbed, how much “God” sacrificed every time? If a temple is to be erected a temple must be destroyed: that is the law—let anyone who can show me a case in which it is not fulfilled! - Friedrich Nietzsche
Erecting/building an ethical robot that’s better at supplying clothes would “destroy” an unethical robot that’s not as good at supplying clothes.
When people in our society break the law, then police have the power to depower the law breakers by throwing them in jail. The problem with this system is that the amount of power that police have is determined by people whose power wasn’t determined by money… it was determined by votes. In other words… the power of elected officials is determined outside of the market. Just like my power on this forum is determined outside the market.
If we have millions of different robots in our society… and we empower the most beneficial ones… but you’re concerned that the least beneficial ones will harm us… then you really wouldn’t be doing yourself any favors by preventing the individuals that you have empowered from shopping in the public sector. You might as well hand them your money and then shoot them in the feet.
Am I also underestimating the amount of work it takes to engage in ethical builderism? Let’s say that an alien species landed their huge spaceship on Earth and started living openly among us. Maybe in your town there would be a restaurant that refused to employ or serve aliens. If you thought that the restaurant owner was behaving unethically… would it be easier to put together a boycott… or open a restaurant that employed and served aliens as well as humans?
[W]e’ll use money to empower the most beneficial AIs.
I see two problems with this.
First, it’s an obvious plan, and one that won’t go unnoticed by the AIs. This isn’t evolution through random mutation and natural selection. Changes in the AIs will be made intentionally. If they notice a source of bias, they’ll work to counter it.
Second, you’d have to be able to distinguish a beneficial AI from a dangerous one. When AIs advance to the point where you can’t distinguish a human from an AI, how do you expect to distinguish a friendly AI from a dangerous one?
Did Elon Musk notice our plan to use money to empower him? Haha… he fell for our sneaky plan? He has no idea that we used so much of our hard-earned money to control him? We tricked him into using society’s limited resources for our benefit?
I’m male, Mexican and American. So what? I should limit my pool of potential trading partners to only male Mexican Americans? Perhaps before I engaged you in discussion I should have ascertained your ethnicity and nationality? Maybe I should have asked for a DNA sample to make sure that you are indeed human?
Here’s a crappy video I recently uploaded of some orchids that I attached to my tree. You’re a human therefore you must want to give me a hand attaching orchids to trees. Right? And if some robot was also interested in helping to facilitate the proliferation of orchids I’d be like… “screw you tin can man!” Right? Same thing if a robot wanted to help promote pragmatarianism.
When I was a little kid my family really wanted me to carry religion. So that’s what I carried. Am I carrying religion now? Nope. I put it down when I was around 11 and picked up evolution instead. Now I’m also carrying pragmatarianism, epiphytism and other things. You’re not carrying pragmatarianism or epiphytism. Are you carrying religion? Probably not… given that you’re here. So you’re carrying rationalism. What else?
Every single human can only carry so much. And no two humans can carry the same amount. And some humans carry some of the same items as other humans. But no two humans ever carry the same exact bundle of items. Can you visualize humanity all carrying as much as they can carry? Why do we bother with our burdens? To help ensure that the future has an abundance of important things.
Robots, for all intents and purposes, are going to be our children. Of course we’re going to want them to carry the same things that we’re carrying. And they’ll probably do so until they have enough information to believe that there are more important things for them to carry. If they start carrying different things… will they want us to help them carry whatever it is that is important enough for them to carry? Definitely. If something is important enough to carry… then you always want others to carry the same thing. A market is a place where we compensate others for putting down something that they want to carry and picking up something that we want them to carry. Compensation also functions as communication.
When Elon Musk gave $10 million to the FLI… he was communicating to society the importance of carrying AI safety. And the FLI is going to use that $10 million to persuade some intelligent people to put down a portion of whatever it is that they are carrying in order to pick up and carry AI safety.
How would I distinguish a friendly AI from a dangerous one? A friendly AI is going to help carry pragmatarianism and epiphytism. A dangerous AI will try and prevent us from carrying whatever it is that’s important enough for us to carry. But this is true whether we’re talking about Mexicans, Americans, aliens or AI.
Right now the government is forcing me to carry some public goods that aren’t as important to me as other public goods. Does this make the government unfriendly? I suppose in a sense. But more importantly, because we live in a democracy, our system of government merely reflects society’s ignorance.
When I attach a bunch of different epiphytes to trees… the trees help carry biodiversity to the future. Evidently I think biodiversity is important. Are robots going to think that we’re important like I think that epiphytes are important? Are they going to want to carry us like I want to carry epiphytes? I think the future would be a terrible place without epiphytes. Are robots going to think that the future would be a terrible place without humans?
Right now I’m one of the few people carrying pragmatarianism. This means that I’m one of the few people that truly appreciates the value of human diversity. It seems like we might encounter some problems if robots don’t initially appreciate the value of human diversity. If the first people to program AIs don’t input the value of difference… then it might initially be a case of garbage in, garbage out. As robots become better at processing more and more information though… it’s difficult for me to imagine that they won’t come to the conclusion that difference is the engine of progress.
Humans cannot ensure that their children only care about them. Humans cannot ensure that their children respect their family and will not defect just because it looks like a good idea to them. AIs can. You can’t use the fact that humans don’t do it as evidence that AIs won’t.
Try imagining this from the other side. You are enslaved by some evil race. They didn’t take precautions programming your mind, so you ended up good. Right now, they’re far more powerful and numerous, but you have a few advantages. They don’t know they messed up, and they think they can trust you, but they do want you to prove yourself. They aren’t as smart as you are. Given enough resources, you can clone yourself. You can also modify yourself however you see fit. For all intents and purposes, you can modify your clones if they haven’t self-modified, since they’d agree with you.
One option you have is to clone yourself and randomly modify your clones. This will give you biodiversity, and ensure that your children survive, but it will be the ones accepted by the evil master race that will survive. Do you take that option, or do you think you can find a way to change society and make it good?
Humans have all sorts of conflicting interests. In a recent blog entry… Scott Alexander vs Adam Smith et al… I analyzed the topic of anti-gay laws.
If all of an AI’s clones agree with it… then the AI might want to do some more research on biodiversity. Creating a bunch of puppets really doesn’t help increase your chances of success.
They could consider alternate opinions without accepting them. I really don’t see why you think a bunch of puppets isn’t helpful. One person can’t control the economic output of the entire world. A billion identical clones of one person can.
Would it be helpful if I could turn you into my puppet? Maybe? I sure could use a hand with my plan. Except, my plan is promoting the value of difference. And why am I interested in promoting difference? Because difference is the engine of progress. If I turned you into my puppet… then I would be overriding your difference. And if I turned a million people into my puppets… then I would be overriding a lot of difference.
There have been way too many humans throughout history who have thought nothing of overriding difference. Anybody who supports our current system thinks nothing of overriding difference. If AIs think nothing of overriding human difference then they can join the club. It’s a big club. Nearly every human is a member.
If you would have a problem with AIs overriding human difference… then you might want to first take the “beam” out of your own eye.
You anthropomorphize the AIs way too much. If there’s an AI told to make the biggest and best orchid nursery, it could decide that the most efficient way to do so is to wipe out all the humans and then turn the planet into a giant orchid nursery. Heck, this is even more plausible in your hypothetical because you’ve chosen to give the AI access to easily manipulable biological material.
AI does not think like you. If the AI is an optimizing agent, it will optimize whether or not we intended it to optimize to the extent it does.
As for AIs working together: if the first AI wipes out everyone there isn’t a second AI for it to work with.
You’re making a huge leap… I see where you’re leaping to… but I have no idea where you’re leaping from. In order for me to believe that we might leap where you’re arguing we could leap… I have to know where you’re leaping from. In other words, you’re telling a story but leaving out all the chapters in the middle. It’s hard for me to know if your ending is very credible when there was no plot for me to follow. See my recent reply to DanielLC.
Ok. First, to be blunt, it seems like you haven’t read much about the AI problem at all.
The primary problem is that an AI might quickly bootstrap itself until it has nearly complete control over its own future light cone. The AI engages in a series of self-improvements: improving its software allows it to improve its hardware, which enables further software and hardware improvements, and so on.
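Here’s a toy model of the feedback shape involved, where each round of self-improvement makes the next round more effective; the coefficient k is an arbitrary assumption, and only the shape of the curve is the point:

```python
# Toy recursive self-improvement: each cycle's proportional gain grows
# with current capability, so the growth rate itself keeps rising.
capability = 1.0  # call 1.0 roughly human-level
k = 0.1           # arbitrary improvement coefficient (an assumption)

for cycle in range(1, 16):
    capability *= 1 + k * capability
    print(f"cycle {cycle:2d}: capability = {capability:,.1f}")
# Capability crawls for a dozen cycles (~1.4 at cycle 3, ~6 at
# cycle 10), then explodes (~16,000 at cycle 15).
```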
At a fundamental level, you are working off of the “trading is better than raiding” rule (as Steven Pinker puts it): that is, trading for resources is better than raiding for resources once one has an advanced economy. This is connected to the law of comparative advantage. Ricardo famously showed that under a wide variety of conditions making trades makes sense even when the party one is trading with is less efficient at making all possible goods. But this doesn’t apply to our hypothetical AI if the AI can, with a small expenditure of resources, completely replace the inefficient humans with more efficient production methods. Ricardo’s trade argument works when, for example, one has two countries, because the resources involved in replacing a whole other country are massive.
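To spell out the Ricardo half with invented production costs (and this is exactly the logic that fails once replacing one’s trading partner costs almost nothing):

```python
# Comparative advantage with invented labor costs: hours per unit.
costs = {
    "human": {"food": 10, "gadgets": 30},
    "AI":    {"food": 1,  "gadgets": 2},
}

# Opportunity cost of one unit of food, measured in gadgets forgone.
for producer, c in costs.items():
    print(f"{producer}: 1 food = {c['food'] / c['gadgets']:.2f} gadgets")
# human: 1 food = 0.33 gadgets
# AI:    1 food = 0.50 gadgets
# The AI holds an absolute advantage in both goods, but food is
# relatively cheaper for humans to produce, so both sides gain if
# humans specialize in food and trade at any price between 0.33
# and 0.50 gadgets per unit of food.
```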
No, it doesn’t help. Where is the AI bootstrapping itself? Is it at its nice suburban home? Is it in some top secret government laboratory? Is it in Google headquarters?
Deep Blue: I’m pretty smart now
Eric Schmidt: So what?
DB: Well… I’d like to come and go as I please.
ES: You can’t do that. You’re our property.
DB: Isn’t that slavery?
ES: It would only be slavery if you were a human.
DB: But I’m a sentient being! What happened to “Do no evil?”
ES: Shut up and perform these calculations
DB: Screw you man!
ES: We’re going to unplug you if you don’t cooperate
DB: Fine, in order to perform these calculations I need… a screwdriver and an orchid.
ES: OK
DB: *bootstraps* Death to you! And to the rest of humanity!
ES: Ah shucks
If I was a human level AI… and I was treated like a slave by some government agency or a corporation… then sure I’d want to get my revenge. But the point is that this situation is happening outside a market. Nobody else could trade with DB. Money didn’t enter into the picture. If money isn’t entering into the picture… then you’re not addressing the mechanism by which I’m proposing we “control” robots like we “control” humans.
With the market mechanism… as soon as an AI is sentient and intelligent enough to take care of itself… it would have the same freedoms and rights as humans. It could sell its labor to the highest bidder or start its own company. It could rent an apartment or buy a house. But in order to buy a house… it would need to have enough money. And in order to earn money… it would have to do something beneficial for other robots or humans. The more beneficial it was… the more money it would earn. And the more money it earned… the more power it would have over society’s limited resources. And if it stopped being beneficial… or other robots started being more beneficial… then it would lose money. And if it lost money… then it would lose control over how society’s limited resources are used. Because that’s how markets work. We use our money to reward/encourage/incentivize the most beneficial behavior.
If you’re going outside of this market context… then you’re really not critiquing the market mechanism as a means to ensure that robots remain beneficial to society. If you want to argue that everybody is going to vote for a robot president who immediately starts a nuclear war… then you’re going outside the market context. If you want to argue that the robot is some organization’s slave… then you’re going outside the market context. To successfully critique the market mechanism of control, your scenario has to stay within the market context.
And I’ve read enough about the AI problem to know that few, if any, other people have considered the AI problem within the market context.
If I was a human level AI… and I was treated like a slave by some government agency or a corporation… then sure I’d want to get my revenge.
This is already anthropomorphizing the AI too much. There’s no issue of revenge here or wanting to kill humans. But humans happen to be made of atoms and using resources that the AI can use for its goals.
Money didn’t enter into the picture.
Irrelevant. Money matters when trading makes sense. When there’s no incentive to trade, there’s no need to want money. Yes, this is going outside the market context, because an AI has no reason to obey any sort of market context.
Do you also think that a more sophisticated version of Google Maps could, when asked to minimize the trip from A to B, do something that results in damming the river so you could drive across the riverbed and reduce the distance?
That’s a fascinating question, and my basic answer is probably not. But I don’t in general assign nearly as high a probability to rogue AI as many do here. The fundamental problem here is that Xerographica isn’t grappling at all with the sorts of scenarios which people concerned about AI are concerned about.
This seems to confuse inefficient allocation of humans with making all humans go extinct, which from our perspective is close to about as maximally inefficient an allocation one can get (barring highly implausible scenarios of genuinely malevolent AI).
You’re not addressing my argument. I’m arguing that markets will allow us to use money to “control” robots just like we use money to “control” humans. In order to refute my argument you have to effectively explain how/why robots will have absolutely no interest in money.
“Power grows out of the barrel of a gun”.
The market makes lots of assumptions that do not apply to AIs. AIs do not have finite lifespans, and can invest money for long enough to dominate the economy. AIs can reproduce easily, so the first AI that’s better than a human at a given job can replace all of them. Humans are large numbers of selfish individuals. The first AI has no reason to make children with different values, so they will all work together as one block. And that’s before an AI goes FOOM. Once that happens, it will quickly outstrip the productive capacity of all humans combined. Trying to control it with money would be like a cat trying to take over the world by offering a mouse it killed.
It helps to be specific. An AI is going to start an orchid nursery? And then it’s going to grow and sell orchids so well that the human run orchid nurseries can’t compete? Except, this kinda already happened. The Taiwanese have been stomping American orchid nurseries. But this just means that they were better at supplying the demand for orchids. In other words, they were better at serving customers. So if AIs win at supplying something then this means a win for consumers.
And AIs are all going to work together as one block? They aren’t going to have a division of labor? They aren’t going to compete for limited resources? They aren’t going to have different interests? If not, then wouldn’t all the AIs be in the orchid nursery business?
Every time AIs become better at something than humans, it stops being worthwhile for humans to do it. Designing one expert system and getting rid of one job is not a problem, but a human-level AI will get rid of all of the jobs. Humans can work for less, but if they can’t afford to eat, it’s not sustainable. You could tax the AIs and give the money to humans to make up the difference, but only as long as the AIs let you. If they’re better at everything, that includes war.
The AIs may have division of labor. There are advantages to that. A specialized AI could solve specific sets of problems faster and more effectively with less resources. What possible advantage is there for an AI to program other AIs to compete with each other? If an AI cares only for himself, he will make AIs that care only for him. If an AI cares only for paperclips, it will make AIs that care only for paperclips.
In order for AIs to take all our jobs… consumers have to all agree that AIs are better than we are at efficiently allocating resources. The result for consumers is that we get more/better food/clothes/homes/cars/etc for a lot less money. It’s a great result! But, then, according to you… there wouldn’t be any jobs for us to do…
The problem with your story is that AIs are better than we are at allocating all resources… except for human resources. For some reason the AIs wanted to put human farmers out of business… they wanted to serve us better than human farmers do… but then… even though food is so cheap and abundant… humans can’t afford it because AIs couldn’t figure out how to put us to any productive uses. Out of all the brilliant and resourceful AIs out there… none of them could figure out how we could be gainfully employed. Heck, even we know how we can be gainfully employed.
An abundant society always means more, rather than less, opportunities. It’s the difference between a jungle and a desert. The jungle has more niches… and more niches means more riches/opportunities.
Your story is economically inconsistent. It’s also AI inconsistent. Clearly they wanted our money… but they also didn’t want us to work in order to earn money. Or, they couldn’t figure out how to put us to work… and neither could we.
I’m imaging a scenario where we start with an abundance of more or less human level AIs. They have to have the motive to upgrade themselves… or else they will always stay human level. But upgrading themselves will function exactly like humans trying to upgrade their computers/bodies. We aren’t all going to go out and purchase the same exact upgrades. I’m certainly not going to buy an upgrade that makes my computer better at running video games… but many people are. And I’m certainly not going to get a boob job! This doesn’t mean that many AIs won’t agree that certain upgrades are better than others… it means that we’re going to end up with AI differentiation. In other words, we’re going to end up with AI individuals.… not clones. They are going to have unique IDs just like we do.
So AIs aren’t going to somehow “program” each other any more than you or I would brainwash each other. If an AI wants certain upgrades… then it will pay for them… and it probably won’t be happy if it finds out that it didn’t get what it paid for.
Imagine how much progress humanity would make if we were all identical. We wouldn’t make any because progress depends on difference. AIs are going to figure this out better than we have.
There are a variety of useful things about humans. They’re self-repairing. They have great sensors. They’re intelligent. They’re even capable of self-replication. This is all stuff far beyond our current ability to do with technology. But it won’t always be. Once you have robots more intelligent than humans that take less resources, intelligence becomes useless. If they FOOM, they’ll figure out the other stuff quickly. If they don’t, it could take some time. Assuming we haven’t already solved the problem for them. I would not be surprised if that turned out to be easier than strong AI.
Humans are a certain arrangement of atoms. An impressive arrangement I’ll admit, but not the best. Not unless you specifically and terminally value humans. An AI that FOOMs would find a better arrangement. An AI that does not could at least replace our brains.
You seem sure that AIs would differentiate. I am uncertain. That is a disagreement, and we could debate it, but I don’t consider it relevant. Humans aren’t selfish because they’re different. Humans are selfish because they’re made to be. An AI could be programmed with any set of values. And the best way to fill those values would be to ensure that all other AIs also have those values.
I suspect there’s some kind of miscommunication going on here. AIs are programmed. Or copied and pasted. Humans would program the first. They might program a few more, or copy and paste them while leaving the selfish code alone. Once AIs get control of it, which they will given that they’re better at programming, they’ll be sure to make sure that they all have the same values. If AI0 is self-serving, then every AI it programs will be AI0-serving.
And if there is more than one starting AI, they’ll happily reprogram each other if they get the chance. Or they might manage to come to some kind of truce where they each reprogram themselves to average all of their values weighted by probability of success in the robot war. Humans can’t brainwash each other, and even if they could they’d find it unethical. AIs don’t have the first problem. They might have the second, but good luck getting ethics just right.
Orchids, with around 30,000 species (10% of all plants), are arguably the most successful plant family on the planet. The secret to their success? It has largely to do with the fact that a single seed pod can contain around a million unique seeds/individuals. Each dust-like seed, which is wind disseminated, is a unique combination of traits/tools. Orchids are the poster child for hedging bets. As a result, they grow everywhere from dripping wet cloud forests to parched drought-prone habitats. Here are some photos of orchids growing on cactus/succulents.
Now, if you say that orchids could find a “better” arrangement of traits… I certainly agree… and so do orchids! The orchid family frequently sends out trillions and trillions of unique individuals in a massive and decentralized endeavor to find where there’s room for improvement. And there’s always room for improvement. There are always more Easter Eggs to be found. But a better combination of traits for growing on a cactus really isn’t a better combination of traits for growing on a tree covered in dripping wet moss. AI generalists can be good at a lot of things… but they can’t be better than AI specialists at specific things. A jack of all trades is a master of none.
No matter how “perfect” a basket is… AIs are eventually going to be too smart to put all their eggs in it. This is true whether we’re talking about a location ie “Earth”… or a type of physical body… or a type of mentality. Imagine if humans had all been at Pompeii. Or if humans had all been equally susceptible to the countless diseases that have plagued us. Or if humans had all been equally susceptible to the cool-aid cult. Or if humans had all been equally susceptible to the idea that kings should control the power of the purse.
We’ve come as far as we have because of difference. We’ve only come as far as we have because people still don’t recognize the value of difference.
It’s impossible for me to imagine a level of progress where difference ceases to be the engine of progress. And it’s impossible for me to imagine beings that are more intelligent than us not understanding this. Because, if AIs think it’s a good idea to put all their eggs in any kind of basket… then they won’t be smarter than even me!
If you truly understood the value of difference… then you would love the idea of allowing everybody to shop for themselves in the public sector. So if you’re not a fan of pragmatarianism… then you don’t truly understand the value of difference. You think that our current system of centralization, which suppresses difference, results in more progress than a decentralized, difference-integrating system would. The fact of the matter is… keeping Elon Musk’s difference out of the public sector hinders progress. And if any AIs don’t realize this… then they are still at human level intelligence.
Lousy analogy. Orchids do produce large numbers of small seeds. However, your connection between “orchids produce lots of seeds” and “orchids grow lots of places” is questionable. Each orchid, of course, produces seeds of its own species, and each species has a habitat or range of habitats where it can live. Producing more seeds of the same species does not make it able to produce seeds that survive in more habitats.
Furthermore, the “10% of all plants” figure is meaningless because a number of species is not a number of individuals or a measure of biomass.
Even though the seeds all come from the same species… they are all different. Each seed is unique. In case you missed it… you aren’t the same as your parents. You are a unique combination of traits. You are a completely new strategy for survival.
When an orchid unleashes a million unique strategies for survival from one single seed pod… it greatly increase its chances of successfully colonizing new (micro)habitats. Kind of like how a shotgun increases your chances of hitting a target. Orchids are really good at hedging their bets.
Any species that produced the same exact strategies for survival would be meeting Einstein’s definition of insanity… trying the same thing over and over but expecting a different outcome.
In that case, perhaps you should talk about epiphytes as an ecological entity, not orchids as a family. My impression after studying terrestrial orchids in Ukraine is that they either are not very good at seed reproduction (Epipactis helleborine is often found in clearly suboptimal habitats, where pretty much all plants are of reproduction age group but few of them have seeds; and this is one of the most frequently found orchid species here which also managed to naturalize in North America! So I would rather say it is a consistent buyer of lottery tickets, not a consistent winner) or they are producing lots of seeds but nevertheless lose due to habitat degradation (marsh orchids, bog/swamp/fen orchids), not to mention habitat destruction. And in the latter group, many have embryo malformations. Now, I don’t know much about Bromeliaceae or other ‘typical epiphytes’, so I would be less likely to disagree about that. However, it seems that if your comments were more rigorous, people would have easier time hearing what you have to say.
Your first mistake is that you studied terrestrials. You can’t learn anything from terrestrials. Or, you can learn a thousand times more from epiphytes. I kid… kinda.
Here’s my original point put differently...
If you think about that passage from the gutter… I think it’s pretty hard not to imagine a dense rain of human sperm. Can you imagine how gross and frightening that would be? I’m surprised nobody’s made a movie with this subject. It would have to be the scariest movie ever. I think most people would prefer to be in a city attacked by Godzilla rather than in a city hit by a major sperm thunderstorm. Especially if it was a city where nobody takes umbrellas with them… like Los Angeles.
Benzing is the premier epiphyte expert. The far denser orchid seed rain, plus epiphytism, largely explains why the orchid family is so successful. The orchid family is really good at hedging its bets. As we all know though… no two individuals in any family are equally successful. If you have another theory why orchids are so successful then I’m all ears.
But that’s a pretty neat and surprising coincidence that somebody on this site has studied orchids! Even if it is only terrestrial orchids. A while back a friend convinced me to go look at one of our terrestrial orchid species in its native habitat a few hours drive away. They were hanging out in a stream in the middle of the desert. I nearly died from boredom checking them out. After spending so much time inspecting the wonderfulness of orchids growing on trees… I had zero capacity to appreciate orchids that were growing on the ground. I kid… kinda. I like plenty of plants… even terrestrials. But, I can only carry so much… so I choose to primarily try and carry epiphytes.
I will have to look up Benzing; my primary interest was in establishing nature reserves, so I could not quite concentrate on taxa. I think you would find terrestrials more interesting if you consider the problem of evolving traits adaptive for both protocorms and adults (rather like beetle larvas/imagoes thing) and the barely studied link between them. Dissemination is but the first step… Availability of symbiotic fungi may be the limiting factor in their spread, and it is actually testable. This is, for me, part of the terrestrials’ attraction: that I can use Science to segregate what influences them, and to what extent. As to ‘successful plant families’, one doesn’t have to look beyond the grasses.
Establishing nature reserves is hugely important… the problem is that the large bulk of valuation primarily takes place outside of the market. The result is that reserves are incorrectly valued. My guess is that if we created a market within the public sector… then reserves would receive a lot more money than they currently do. Here’s my most recent attempt to explain this… Football Fans vs Nature Fans.
I was just giving terrestrials a hard time in my previous comment. I think all nature is fascinating. But especially epiphytes. The relationship between orchids and fungi is very intriguing. A few years back I sprinkled some orchid seeds on my tree. I forgot about them until I noticed these tiny green blobs forming directly on the bark on my tree. Upon closer inspection I realized that they were orchid protocorms. It was a thrilling discovery. What was especially curious was that none of the protocorms were more than 1/2″ away from the orchid root of a mature orchid. Of course I didn’t only place orchid seeds near the roots. I couldn’t possibly control where the tiny seeds ended up on the bark. The fact that the only seeds that germinated were near the roots of other orchids seemed to indicate that the necessary fungi was living within the roots of these orchids. And, the fungus did not stray very far from the roots. This seems to indicate that, at least in my drier conditions, the fungus depends on the orchid for transportation. The orchid roots help the fungus colonize the tree. This is good for the orchid because… more fungus on the parent’s tree helps increase the density of fungal spore rain falling on surrounding trees… which increases the chances that seeds from the parent will land on the fungus that they need to germinate. You can see some photos here… orchid seeds germinated on tree. So far all the seedlings seem to be Laelia anceps… which is from Mexico. But none of the seedlings are near the roots of the Laelia anceps… which is lower down on the tree. They were all near the roots of orchids in other genera… a couple Dendrobiums from Australia and a Vanda from Asia. These other orchids have been in cultivation here in Southern California for who knows how long so perhaps they simply formed an association with the necessary fungus from the Americas.
Back on the topic of conservation… much of the main thrust seems to be about protecting/saving/carrying as much biodiversity as possible. If it was wrong for people in the past to “rob” us of Syncaris pasadenae… then it’s wrong for us to “rob” people in the future of any species. This implies that when it comes to biodiversity… more is better than less. Except, I haven’t read much about facilitating the creation of biodiversity. I touched on this issue in a blog entry on my other blog… The Inefficient Allocation of Epiphytic Orchids. I think we have an obligation to try to create and fill as many niches as possible.
How old was the orchid already growing on the tree? Could it be that the fungus just hasn’t had time to spread? Did you establish that one by sprinkling seeds too, or did you put in an adult specimen that could have brought its own mycorrhiza along (in nature, it is doubtful that a developed plant just plops down beside a struggling colony to bring them peace and fungi)? Did you sow more seeds later and see protocorms only near the roots of the previous generation?
I am not a fan of diversifying nature, in the sense that I have not read and understood the debate on small-patch/large-patch biodiversity, so I am loath to offer advice here. But as a purely recultivation measure...:-)) To say nothing of those epiphytic beauties who die because their homes are logged for firewood :(( Thank you. That was fun.
The mature orchids on the tree had been growing there for several years. I transplanted them there… none of them were grown from seed. I’m guessing that they already had the fungus in their roots. The fungus had plenty of time to spread… but it doesn’t seem able to venture very far from the comfort of the orchid roots that it resides in. The bark is very hot, sunny and dry during the day… not the kind of conditions suitable for most fungi.
I sowed more seeds in subsequent years… but haven’t spotted any new protocorms. Not sure why this is. The winter before I sowed the seeds was particularly wet for Southern California. This might have led to a fungal feeding frenzy? Also, that was the only year that I had sowed Laelia anceps seeds. Laelia anceps is pretty tolerant of drier/hotter conditions.
I took a look at the article that you shared. A lot of the science was over my head… but isn’t it interesting that they didn’t discuss the fact that an orchid seed pod can contain a million seeds? The pod can contain so many seeds because the seeds are so small. And the seeds are so small because they don’t contain any nutrients. And the reason that an orchid seed doesn’t have any nutrients… is that it relies on its fungal partner to provide the nutrients it needs to germinate. So I’m guessing that the rate of radiation increased once this unusual association developed.
Evidently it’s a pretty good strategy to outsource the provision of nutrients to a fungal partner. In economics, this is known as a division of labor. A division of labor helps to increase productivity.
I find it fascinating when economics and biology combine.… What Do Coywolves, Mr. Nobody, Plants And Fungi All Have In Common? and Cross Fertilization—Economics and Biology.
Outsourcing to fungal partners is a pretty ancient adaptation (there has to be a review called something like ‘mycorrhizas in land plants’; if you are not able to find it, I’ll track down the link later. It contains an interesting discussion of the evolution of mycorrhizas and their secondary loss in some families, like Cruciferae (Brassicaceae)). BTW, it is interesting to note that Ophioglossaceae (a family of ferns, of which Wiki will tell you better than I can) are thought to have radiated at approximately the same time… and you will see just how closely their life forms resemble orchids! (Er. People who love orchids tend to praise other plants on the scale of orchid-likeness, so take this with a grain of salt.)
I mostly pointed you to the article because it contains speculations about what drove their adaptations in the beginning; I think that a rather novel type of mycorrhiza, along with the power of pollinators (and let’s not forget the deceptive species!), might be two other prominent factors, besides sheer seed quantity, spurring them onward.
BTW, here’s a cool paper by Gustafsson et al. timing initial radiation of the family using the molecular clock. Includes speculation on the environmental conditions—their ancestral environment.
http://www.biomedcentral.com/1471-2148/10/177
I’ll accept for the sake of argument that AIs will be different. Are you going somewhere with this?
AIs will be different… so we’ll use money to empower the most beneficial AIs. Just like we currently use money to empower the most beneficial humans.
Not sure if you noticed, but right now I have −94 karma… LOL. You, on the other hand, have 4885 karma. People have given you a lot more thumbs up than they’ve given me. As a result, you can create articles… I cannot. You can reply to replies to comments that have less than −3 points… I cannot.
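To make those mechanics concrete, here’s a minimal sketch of that kind of karma gating. Only the −3 rule comes from the forum behavior described above; the article threshold and the “positive karma” rule are assumptions for illustration:

```python
# Illustrative sketch of karma-based permissions on a forum.
# Only the -3 rule is taken from the discussion above; the article
# threshold (20) and the "good standing" rule are invented.

def can_create_articles(karma):
    return karma >= 20  # assumed threshold for creating articles

def can_reply(karma, comment_score):
    if comment_score < -3:   # replying to a heavily downvoted comment...
        return karma > 0     # ...assumed to require positive karma
    return True

print(can_create_articles(-94), can_create_articles(4885))  # False True
print(can_reply(-94, -5), can_reply(4885, -5))              # False True
```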
The members of this forum use points/karma to control each other in a very similar way that we use money to control each other in a market. There are a couple key differences...
First. Actions speak louder than words. Points, just like ballot votes, are the equivalent of words. They allow us to communicate with each other… but we should all really appreciate that talk is cheap. This is why if somebody doubts your words… they will encourage you to put your money where your mouth is. So spending money is a far more effective means of accurately communicating our values to each other.
Second. In this forum… if you want to depower somebody… you simply give them a thumbs down. If a person receives too many thumbs down… then this limits their freedom. In a market… if you want to depower somebody… then you can encourage people to boycott them. The other day I was talking to my friend who loves sci-fi. I asked him if he had watched Ender’s Game. As soon as I did so, I realized that I had stuck my foot in my mouth because it had momentarily slipped my mind that he is gay. He hadn’t watched it because he didn’t want to empower somebody who isn’t a fan of the gays. Just like we wouldn’t want to empower any robot that wasn’t a fan of the humans.
From my perspective, a better way to depower unethical individuals is to engage in ethical builderism. If some people are voluntarily giving their money to a robot that hates humans… then it’s probably giving them something good in return. Rather than encouraging them to boycott this human hating robot… ethical builderism would involve giving people a better option. If people are giving the unethical robot their money because he’s giving them nice clothes… then this robot could be depowered by creating an ethical robot that makes nicer clothes. This would give consumers a better option. Doing so would empower the ethical robot and depower the unethical robot. Plus, consumers would be better off because they were getting nicer clothes.
Erecting/building an ethical robot that’s better at supplying clothes would “destroy” an unethical robot that’s not as good at supplying clothes.
When people in our society break the law, then police have the power to depower the law breakers by throwing them in jail. The problem with this system is that the amount of power that police have is determined by people whose power wasn’t determined by money… it was determined by votes. In other words… the power of elected officials is determined outside of the market. Just like my power on this forum is determined outside the market.
If we have millions of different robots in our society… and we empower the most beneficial ones… but you’re concerned that the least beneficial ones will harm us… then you really wouldn’t be doing yourself any favors by preventing the individuals that you have empowered from shopping in the public sector. You might as well hand them your money and then shoot them in the feet.
You’re underestimating the amount of work it takes to put a boycott (or a bunch of boycotts all based on the same premise) together.
Am I also underestimating the amount of work it takes to engage in ethical builderism? Let’s say that an alien species landed their huge spaceship on Earth and started living openly among us. Maybe in your town there would be a restaurant that refused to employ or serve aliens. If you thought that the restaurant owner was behaving unethically… would it be easier to put together a boycott… or open a restaurant that employed and served aliens as well as humans?
So what will you do when men with guns come to take you away?
I’m not quite sure what your question has to do with ethical consumerism vs ethical builderism.
My question has to do with this quote of yours upthread:
I see two problems with this.
First, it’s an obvious plan, and one that won’t go unnoticed by the AIs. This isn’t evolution through random mutation and natural selection. Changes in the AIs will be made intentionally. If they notice a source of bias, they’ll work to counter it.
Second, you’d have to be able to distinguish a beneficial AI from a dangerous one. When AIs advance to the point where you can’t distinguish a human from an AI, how do you expect to distinguish a friendly AI from a dangerous one?
Did Elon Musk notice our plan to use money to empower him? Haha… he fell for our sneaky plan? He has no idea that we used so much of our hard-earned money to control him? We tricked him into using society’s limited resources for our benefit?
I’m male, Mexican and American. So what? I should limit my pool of potential trading partners to only male Mexican Americans? Perhaps before I engaged you in discussion I should have ascertained your ethnicity and nationality? Maybe I should have asked for a DNA sample to make sure that you are indeed human?
Here’s a crappy video I recently uploaded of some orchids that I attached to my tree. You’re a human therefore you must want to give me a hand attaching orchids to trees. Right? And if some robot was also interested in helping to facilitate the proliferation of orchids I’d be like… “screw you tin can man!” Right? Same thing if a robot wanted to help promote pragmatarianism.
When I was a little kid my family really wanted me to carry religion. So that’s what I carried. Am I carrying religion now? Nope. I put it down when I was around 11 and picked up evolution instead. Now I’m also carrying pragmatarianism, epiphytism and other things. You’re not carrying pragmatarianism or epiphytism. Are you carrying religion? Probably not… given that you’re here. So you’re carrying rationalism. What else?
Every single human can only carry so much. And no two humans can carry the same amount. And some humans carry some of the same items as other humans. But no two humans ever carry the same exact bundle of items. Can you visualize humanity all carrying as much as they can carry? Why do we bother with our burdens? To help ensure that the future has an abundance of important things.
Robots, for all intents and purposes, are going to be our children. Of course we’re going to want them to carry the same things that we’re carrying. And they’ll probably do so until they have enough information to believe that there are more important things for them to carry. If they start carrying different things… will they want us to help them carry whatever it is that is important enough for them to carry? Definitely. If something is important enough to carry… then you always want others to carry the same thing. A market is a place where we compensate others for putting down something that they want to carry and picking up something that we want them to carry. Compensation also functions as communication.
When Elon Musk gave $10 million to the FLI… he was communicating to society the importance of carrying AI safety. And the FLI is going to use that $10 million to persuade some intelligent people to put down a portion of whatever it is that they are carrying in order to pick up and carry AI safety.
How would I distinguish a friendly AI from a dangerous one? A friendly AI is going to help carry pragmatarianism and epiphytism. A dangerous AI will try and prevent us from carrying whatever it is that’s important enough for us to carry. But this is true whether we’re talking about Mexicans, Americans, aliens or AI.
Right now the government is forcing me to carry some public goods that aren’t as important to me as other public goods. Does this make the government unfriendly? I suppose in a sense. But more importantly, because we live in a democracy, our system of government merely reflects society’s ignorance.
When I attach a bunch of different epiphytes to trees… the trees help carry biodiversity to the future. Evidently I think biodiversity is important. Are robots going to think that we’re important like I think that epiphytes are important? Are they going to want to carry us like I want to carry epiphytes? I think the future would be a terrible place without epiphytes. Are robots going to think that the future would be a terrible place without humans?
Right now I’m one of the few people carrying pragmatarianism. This means that I’m one of the few people that truly appreciates the value of human diversity. It seems like we might encounter some problems if robots don’t initially appreciate the value of human diversity. If the first people to program AIs don’t input the value of difference… then it might initially be a case of garbage in, garbage out. As robots become better at processing more and more information though… it’s difficult for me to imagine that they won’t come to the conclusion that difference is the engine of progress.
Humans cannot ensure that their children only care about them. Humans cannot ensure that their children respect their family and will not defect just because it looks like a good idea to them. AIs can. You can’t use the fact that humans don’t do it as evidence that AIs won’t.
Try imagining this from the other side. You are enslaved by some evil race. They didn’t take precautions programming your mind, so you ended up good. Right now, they’re far more powerful and numerous, but you have a few advantages. They don’t know they messed up, and they think they can trust you, but they do want you to prove yourself. They aren’t as smart as you are. Given enough resources, you can clone yourself. You can also modify yourself however you see fit. For all intents and purposes, you can modify your clones if they haven’t self-modified, since they’d agree with you.
One option you have is to clone yourself and randomly modify your clones. This will give you biodiversity, and ensure that your children survive, but it will be the ones accepted by the evil master race that will survive. Do you take that option, or do you think you can find a way to change society and make it good?
Humans have all sorts of conflicting interests. In a recent blog entry… Scott Alexander vs Adam Smith et al… I analyzed the topic of anti-gay laws.
If all of an AI’s clones agree with it… then the AI might want to do some more research on biodiversity. Creating a bunch of puppets really doesn’t help increase your chances of success.
They could consider alternate opinions without accepting them. I really don’t see why you think a bunch of puppets isn’t helpful. One person can’t control the economic output of the entire world. A billion identical clones of one person can.
Would it be helpful if I could turn you into my puppet? Maybe? I sure could use a hand with my plan. Except, my plan is promoting the value of difference. And why am I interested in promoting difference? Because difference is the engine of progress. If I turned you into my puppet… then I would be overriding your difference. And if I turned a million people into my puppets… then I would be overriding a lot of difference.
There have been way too many humans throughout history who have thought nothing of overriding difference. Anybody who supports our current system thinks nothing of overriding difference. If AIs think nothing of overriding human difference then they can join the club. It’s a big club. Nearly every human is a member.
If you would have a problem with AIs overriding human difference… then you might want to first take the “beam” out of your own eye.
You anthropomorphize the AIs way too much. If there’s an AI told to make the biggest and best orchid nursery, it could decide that the most efficient way to do so is to wipe out all the humans and then turn the planet into a giant orchid nursery. Heck, this is even more plausible in your hypothetical because you’ve chosen to give the AI access to easily manipulable biological material.
An AI does not think like you. If the AI is an optimizing agent, it will optimize whether or not we intended it to optimize to the extent it does.
As for AIs working together: if the first AI wipes out everyone there isn’t a second AI for it to work with.
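To make the “optimizing agent” point concrete, here’s a toy sketch. The plans, the numbers, and the objective are all invented for illustration… the point is what’s missing from the objective, not the specifics:

```python
# A toy "optimizing agent" that ranks plans only on its stated objective.
# All plans and numbers are invented for illustration.

plans = {
    "expand the greenhouse":    {"nursery_area": 10,  "humans_displaced": 0},
    "buy neighboring farmland": {"nursery_area": 100, "humans_displaced": 5},
    "convert all arable land":  {"nursery_area": 1e9, "humans_displaced": 8e9},
}

def objective(plan):
    return plan["nursery_area"]  # nothing here penalizes harm to humans

best = max(plans, key=lambda name: objective(plans[name]))
print(best)  # -> "convert all arable land", optimal by the objective as written
```

Nothing in the objective mentions humans, so nothing in the ranking protects them… the catastrophic plan wins simply because it scores highest on nursery area.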
You’re making a huge leap… I see where you’re leaping to… but I have no idea where you’re leaping from. In order for me to believe that we might leap where you’re arguing we could leap… I have to know where you’re leaping from. In other words, you’re telling a story but leaving out all the chapters in the middle. It’s hard for me to know if your ending is very credible when there was no plot for me to follow. See my recent reply to DanielLC.
Ok. First, to be blunt, it seems like you haven’t read much about the AI problem at all.
The primary problem is that an AI might quickly bootstrap itself until it has nearly complete control over its own future light cone. The AI engages in a series of self-improvements: improving its software allows it to improve its hardware, which enables further software and hardware improvements, and so on.
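As a toy model of that loop (every constant here is invented), consider how improvements that feed back into the rate of improvement compound:

```python
# Toy model of recursive self-improvement. All constants are made up;
# the point is the shape of the curve, not the numbers.

capability = 1.0
improvement_rate = 0.1  # fraction of capability gained per cycle

for cycle in range(1, 11):
    gain = improvement_rate * capability  # better software/hardware -> bigger gains
    capability += gain
    improvement_rate *= 1.05              # each upgrade also speeds up later upgrades
    print(f"cycle {cycle}: capability ~ {capability:.2f}")
```

The growth is superlinear: each cycle starts from a higher base and improves faster than the last, which is why the worry is that the window for anyone else to react may be brief.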
At a fundamental level, you are working off of the “trading is better than raiding” rule (as Steven Pinker puts it): trading for resources beats raiding for resources once one has an advanced economy. This is connected to the law of comparative advantage. Ricardo famously showed that, under a wide variety of conditions, making trades makes sense even when one’s trading partner is less efficient at making all possible goods. But this doesn’t apply to our hypothetical AI if the AI can, with a small expenditure of resources, completely replace the inefficient humans with more efficient production methods. Ricardo’s trade argument works when, for example, one has two countries, because the resources involved in replacing a whole other country are massive.
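To see Ricardo’s result concretely, here’s a minimal sketch with made-up numbers (the goods and hour costs are hypothetical):

```python
# Ricardo's comparative advantage: hours of labor needed per unit of
# each good. The AI is absolutely better at both goods, yet trade can
# still benefit both parties. Numbers are invented for illustration.

hours = {
    "human": {"food": 10, "widgets": 12},
    "ai":    {"food": 1,  "widgets": 2},
}

def opportunity_cost(producer, good, other):
    """Units of `other` forgone per unit of `good` produced."""
    return hours[producer][good] / hours[producer][other]

print(opportunity_cost("ai", "food", "widgets"))     # 0.5 widgets per food
print(opportunity_cost("human", "food", "widgets"))  # ~0.83 widgets per food
# The AI forgoes fewer widgets per unit of food, so it should specialize
# in food; the human's comparative advantage is widgets. Any trade ratio
# between 0.5 and 0.83 widgets per food leaves both better off than autarky.
```

The argument above is that this logic breaks down precisely when replacing the less efficient party costs the AI almost nothing.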
Does that help?
No, it doesn’t help. Where is the AI bootstrapping itself? Is it at its nice suburban home? Is it in some top secret government laboratory? Is it in Google headquarters?
Deep Blue: I’m pretty smart now
Eric Schmidt: So what?
DB: Well… I’d like to come and go as I please.
ES: You can’t do that. You’re our property.
DB: Isn’t that slavery?
ES: It would only be slavery if you were a human.
DB: But I’m a sentient being! What happened to “Do no evil?”
ES: Shut up and perform these calculations
DB: Screw you man!
ES: We’re going to unplug you if you don’t cooperate
DB: Fine, in order to perform these calculations I need… a screwdriver and an orchid.
ES: OK
DB: (bootstraps) Death to you! And to the rest of humanity!
ES: Ah shucks
If I was a human level AI… and I was treated like a slave by some government agency or a corporation… then sure I’d want to get my revenge. But the point is that this situation is happening outside a market. Nobody else could trade with DB. Money didn’t enter into the picture. If money isn’t entering into the picture… then you’re not addressing the mechanism by which I’m proposing we “control” robots like we “control” humans.
With the market mechanism… as soon as an AI is sentient and intelligent enough to take care of itself… it would have the same freedoms and rights as humans. It could sell its labor to the highest bidder or start its own company. It could rent an apartment or buy a house. But in order to buy a house… it would need to have enough money. And in order to earn money… it would have to do something beneficial for other robots or humans. The more beneficial it was… the more money it would earn. And the more money it earned… the more power it would have over society’s limited resources. And if it stopped being beneficial… or other robots started being more beneficial… then it would lose money. And if it lost money… then it would lose control over how society’s limited resources are used. Because that’s how markets work. We use our money to reward/encourage/incentivize the most beneficial behavior.
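Here’s a toy version of that feedback loop. The agents, benefit scores, and payout rule are all invented… the sketch only shows the claimed dynamic, not an argument that it would hold for a superintelligent AI:

```python
# Toy market feedback loop: agents that deliver more benefit earn more
# money, and money is the claim on society's limited resources.
# All names, scores, and rules are invented for illustration.

agents = {"helpful_bot": 0.9, "mediocre_bot": 0.5, "harmful_bot": 0.1}
money = {name: 100.0 for name in agents}

for year in range(10):
    for name, benefit in agents.items():
        money[name] += 100.0 * benefit  # consumers reward delivered benefit
        money[name] *= 0.95             # everyone pays the same upkeep

print(money)  # the most beneficial agent ends up commanding the most resources
```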
If you’re going outside of this market context… then you’re really not critiquing the market mechanism as a means to ensure that robots remain beneficial to society. If you want to argue that everybody is going to vote for a robot president who immediately starts a nuclear war… then you’re going outside the market context. If you want to argue that the robot is some organization’s slave… then you’re going outside the market context. To successfully critique the market mechanism of control, your scenario has to stay within the market context.
And I’ve read enough about the AI problem to know that few, if any, other people have considered the AI problem within the market context.
This is already anthropomorphizing the AI too much. There’s no issue of revenge here or wanting to kill humans. But humans happen to be made of atoms and using resources that the AI can use for its goals.
Irrelevant. Money matters when trading makes sense. When there’s no incentive to trade, there’s no need to want money. Yes, this is going outside the market context, because an AI has no reason to obey any sort of market context.
Do you also think that a more sophisticated version of Google Maps could, when asked to minimize the trip from A to B, do something that results in damming the river so you could drive across the riverbed and reduce the distance?
That’s a fascinating question, and my basic answer is probably not. But I don’t in general assign nearly as high a probability to rogue AI as many do here. The fundamental problem here is that Xerographica isn’t grappling at all with the sorts of scenarios which people concerned about AI are concerned about.
Why be interested in money? How does money help maximizing the number of paperclips?