Imagine a group of 100 world-renowned scientists and military strategists. Could such a group easily wipe away the Roman empire when beamed back in time?
Imagine a group of 530 Spaniards...
At the risk of confirming every negative stereotype RationalWiki and the like have of us...have you read the Sequences? I’m reluctant to write a full response to this, but I think large parts of the Sequences were written to address some of these ideas.
I’m afraid I had the same reaction. XiXiDu’s post seems to take the “shotgun” approach of listing every thought that popped into XiXiDu’s head, without applying much of a filter. It’s exhausting to read. Or, as one person I know put it, “XiXiDu says a lot of random shit.”
I understand what you’re saying, but, speaking from a strictly nitpicky perspective, I don’t think the situation is analogous. The Roman Empire had many more soldiers to throw at the problem; much more territory to manage; comparatively better technology; and, perhaps more importantly, a much more robust and diverse—and therefore memetically resistant—society. They would therefore fare much better than the Aztecs did.
The conquistadors climbed to the top of a volcano to harvest sulphur for gunpowder. You can count on uploads in our society, just as on some Navy SEALs sent into the Roman world, to take analogous actions. Neither would just wait for help from nowhere. They would improvise, as the conquistadors once did.
Understood, but there’s only so much the conquistadors can do even with gunpowder. Guns can do a lot of damage against bronze swords and armor, but if they have more soldiers than you have bullets, then you’ll still lose.
Of course, if the conquistadors could build a modern tank, they’d be virtually invincible. But in order to do that, they’d need to smelt steel, vulcanize rubber, refine petroleum, manufacture electronics, etc. Even if they had perfect knowledge of these technologies, they couldn’t duplicate them in ye olde Aztec times, because such technologies require a large portion of the world’s population to be up to speed. There’s a limit to how much you can do armed with nothing but a pocket knife and a volcano.
I think this was XiXiDu’s point: knowledge alone is not enough, you also need to put in a lot of work (which is often measured in centuries) in order to apply it.
knowledge alone is not enough, you also need to put in a lot of work (which is often measured in centuries) in order to apply it.
Understood that, too! But one can optimize and outsource a lot. The conquistadors employed Indians and enslaved Aztecs and Incas. Besides, the subjective time of an upload can be vast. A good idea can trim a lot of the work that needs to be done. And at least my upload would be full of ideas.
Agreed; just as a single conquistador—or better yet, a modern engineer—transported into the Roman Empire would be full of ideas. He would know how to forge steel, refine petroleum, design electronic circuits, genetically engineer plants and animals, write software, plus many other things. But he wouldn’t be able to actually use most of that knowledge.
In order to write software, you need a computer. In order to build a computer, you need… well, you need a lot of stuff that outsourced Aztec (or Roman) slaves just wouldn’t be able to provide. You could enslave everyone on the continent, and you still wouldn’t be able to make a single CPU. Sure, if you were patient, very lucky, and long-lived, you could probably get something going within the next century or so. But that’s hardly a “FOOM”, and the Romans would have a hundred years to stop you, if they decided that your plans for the future aren’t to their liking.
But that’s hardly a “FOOM”, and the Romans would have a hundred years to stop you,
Exactly. And here the parable breaks down. The upload just might have those centuries: virtual subjective time of thousands of years to devise a cunning plan, before we humans even begin to discuss its advantage. Yudkowsky wrote a short story about this: http://lesswrong.com/lw/qk/that_alien_message/
Bugmaster’s point was that it takes a century of action by external parties, not a century of subjective thinking time. The timetable doesn’t get advanced all that much by super-intelligence. Real-world changes happen on real-world timetables. And yes, the rate of change might be exponential, but exponential curves grow slowly at first.
And meanwhile, other things are happening in that century that might upset the plans and that cannot be arbitrarily controlled even by super-intelligence.
Err… minor quibble. Exponential curves grow at the same rate all the time. That is, if you zoom in on the e^x graph at any point at any scale, it will look exactly the same as it did before you zoomed in.
I think we are using “rate” in different ways. The absolute rate of change per unit time for an exponential is hardly constant; if you look at the segment of e^x near, say, e^10, it’s growing much faster than it is near e^(-10).
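To make the two senses of “rate” concrete, here is a minimal numerical sketch (plain Python, standard library only; the numbers are purely illustrative):

```python
import math

# The distinction for f(x) = e^x: the relative rate f'(x)/f(x) is always 1,
# but the absolute rate f'(x) = e^x is tiny near x = -10 and enormous near x = 10.
def rates(x, h=1e-6):
    absolute = (math.exp(x + h) - math.exp(x)) / h  # finite-difference derivative
    relative = absolute / math.exp(x)                # derivative divided by the value itself
    return absolute, relative

for x in (-10, 0, 10):
    a, r = rates(x)
    print(f"x = {x:+3d}   absolute rate = {a:.4e}   relative rate = {r:.4f}")
```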
asr got my point exactly right.
Guns? I thought horses were their main advantage.
(What are the Aztecs gonna do, burn down all the grass in the continent?)
The OP used gunpowder as the example, so I went with it. You might be right about horses, though.
At the risk of confirming every negative stereotype RationalWiki and the like have of us...have you read the Sequences?
He’s read them well enough to collect a fairly complete index of cherry picked Eliezer quotes to try to make him look bad. I don’t think lack of exposure to prerequisite information is the problem here.
The index wedrifid was alluding to, if anyone cares: http://shityudkowskysays.tumblr.com/
I actually loved reading it. Some of those are up there among my favorite EY quotes. Arrogant, sometimes needing context to make them make sense and sometimes best left unsaid for practical reasons but still brilliant. For example:
I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them—just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified.
There is also a quote there that I agree should remain visible, to Eliezer’s shame, until such time that he swallows his ego and publicly admits that it was an utterly idiotic way to behave. Then there is at least one quote which really deserves a disclaimer in a footnote—that EY has already written an entire sequence on admitting how stupid he was to think the way he thought when he wrote it!
I was actually rather disappointed that the list only went on for a page or two. I was looking forward to reading all the highlights and lowlights. He deserves at least a few hundred best-of and worst-of quotes!
There’s always sorting in http://www.ibiblio.org/weidai/lesswrong_user.php?u=Eliezer_Yudkowsky
Then there is at least one quote which really deserves a disclaimer in a footnote...
By following the link below the quote, people could learn that he claims he no longer agrees with what he wrote there. But I have added an extra disclaimer now.
Thanks for making me find out what the Roko-thing was about :(
Yes, I created that blog. And it is not meant to give Eliezer Yudkowsky a bad name. The context of each quote is provided.
I think it is useful to emphasize some of the beliefs held within this community.
Here is what someone over at Google+ wrote, which I agree with:
And actually I think exhibiting these quotes ‘out of context’ is quite a useful activity, since they make very striking claims which might be missed or glossed over in the middle of a lengthy & intricate argument. The context is available on the click of a button, and people can then judge for themselves how well they stand up.
Anyway, how could I possibly libel someone by publishing what he and his followers believe to be true and good?
You know people are lazy; how many will click through to see the context? (Have you attached Javascript handlers to record click-throughs and compared them against the page traffic?)
Anyway, how could I possibly libel someone by publishing what he and his followers believe to be true and good?
How could I possibly libel someone by quoting edited, out-of-context things he has written? “I did [...] have sex with that woman.”
What would you have done if you had meant to give him a bad name but nonetheless had to refrain from simply lying?
That’s fairly easy. There are many ways to do that, although he is already pretty good at it himself.
First I would start acting like Otto E. Rössler with respect to risks from AI. Then I would write to as many AI researchers, computer scientists, popular bloggers, politicians, etc. as possible about how THIS IS CRUNCH TIME, “it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us”. And to back up my claims I would frequently cite posts and papers written by Yudkowsky and talk about how he is probably the most rational person alive and how most AI researchers are just biased.
No lies. Hands-free.
Then I would write to as many AI researchers, computer scientists, popular bloggers, politicians, etc. as possible about how THIS IS CRUNCH TIME,
Not to nitpick or anything, but since you don’t actually seem to believe it’s “crunch time”, the strategy you outlined would indeed be a series of lies, regardless of whether Eliezer believes it true.
The tumblr strikes me as pretty much like the thing where you repeat what the other person says in a high-pitched voice, only minus the high-pitched voice.
Usual asshole talk from you again. I agree with 99.99% of what Eliezer writes. And just because he writes some shit doesn’t mean that I don’t respect him or that I want to give him a bad name.
Actually even you agree with me that he has been exaggerating some things in the past.
I agree with far less than that! Probably on the order of 95%. That’s higher than with most authors but I suspect lower than with, say, Yvain. Mind you Eliezer covers more controversial and original subjects than Yvain so I should expect Yvain to be more reliably correct than Eliezer. (Where from my perspective correctness cannot be distinguished from what I, all things including humility considered, agree with.)
Actually even you agree with me that he has been exaggerating some things in the past.
Certainly. Enough so that ‘even [me]’ has less meaning when applied to myself than perhaps most people who have read all his posts multiple times. I’m also rather more free with publicly disagreeing with Eliezer (or anyone) than some others who may have specific disagreements with him. You often experience that as ‘asshole talk’. And quite rightly so. (Publicly) disagreeing with people is almost always instrumentally irrational.
I don’t want to be a jerk about this or belabor this point, but in order to decide exactly how I want to go in responding to this: have you read through all the sequences?
I don’t want to be a jerk about this or belabor this point, but in order to decide exactly how I want to go in responding to this: have you read through all the sequences?
No, I can’t do that yet. It would take me half a year and I perceive other things to be more important, e.g. improving my math education.
Are you saying that people like me, who do not have the capabilities and time to read hundreds of posts, are excluded from asking about and discussing those issues? What do you suggest that I do, just ignore it? And where do you draw the line? Why is it sufficient to read the sequences? Why not exclude everyone who doesn’t understand Gödel machines and AIXI?
Well, we have to distinguish between social and epistemic spheres.
Socially it would be rude to exclude you from the debate or to say you don’t have a right to discuss these issues, especially when reading the Sequences is so time-consuming.
Epistemically, of course if you haven’t read the strongest arguments in favor of the importance of Friendly AI, you’re less likely to be right when you say it’s not important, and if gaining knowledge is time-consuming, then you either consume that time or continue lacking that knowledge.
Now there’s something tricky going on here. I can think of two extreme cases. The first is related to the courtier’s reply where some theist says an atheist can’t make atheist arguments unless she’s read every theologian from St. Augustine on and knows apologetics backwards and forwards; this is unfair, and I think it’s what you’re talking about. The second is where someone who knows nothing about medicine tries to argue with a neurosurgeon about his technique and the neurosurgeon tells him that he hasn’t been to medical school or read any of the literature on neurosurgery and so his opinions are completely meaningless; this one seems very fair and I would agree with the neurosurgeon.
I’m not really sure where the key difference lies, or which extreme this case falls into. But I asked if you’d read the Sequences for two reasons.
First, because I believed more or less what you believe now before I read the Sequences, and the Sequences changed my mind. If you’ve read the Sequences, then you might have found a flaw I missed and I should investigate what you’re saying further; if not, the simplest explanation is that you’re wrong for the same reasons I was wrong when I was in your position, and if you were to read the Sequences you’d have the same change of heart I did.
And second, because if you’d read the Sequences it would be worth debating some of these points with you; but since you haven’t looked at a much better piece that debates these points, I would recommend you do that instead, which would save me some time and make you more likely to be convinced. I realize that the Sequences are long, but that’s because supporting Eliezer’s view of Friendly AI is complicated and takes at least that long: I couldn’t write a reply to your thoughts which is as convincing as Eliezer’s in less space than it took Eliezer to write his. There’s a reason textbooks on neurosurgery aren’t light reading.
First, because I believed more or less what you believe now before I read the Sequences, and the Sequences changed my mind.
This is fascinating! Is there any way for you to expand on the following points:
What is it that you believed more or less before reading the Sequences?
What do you think it is that I believe with respect to risks from AI?
Is it possible to narrow down which posts in particular made you change your mind?
If you’ve read the Sequences, then you might have found a flaw I missed and I should investigate what you’re saying further;
I am not sure how much I have read. Maybe 30 posts? I haven’t found any flaws so far. But I feel that there are huge flaws.
How I feel is best exemplified by what Eliezer wrote about Pascal’s mugging, “I’d sooner question my grasp of “rationality” than give five dollars to a Pascal’s Mugger because I thought it was “rational”.”
And second, because if you’d read the Sequences it would be worth debating some of these points with you;
Is there anything you could ask me, or make me do, that would enable you to find out if it is worth it for you to debate me, even if I haven’t read most of the Sequences?
P.S. Thank you for taking the time and effort.
I am not sure how much I have read. Maybe 30 posts?
Given the amount of activity you’ve applied to arguing about these topics (you wrote 82 LW posts during the last 1.5 years), I must say this is astonishing!
Given the amount of activity you’ve applied to arguing about these topics (you wrote 82 LW posts during the last 1.5 years), I must say this is astonishing!
If you look closely, I hadn’t written a single post and didn’t voice any criticism for years before the Roko incident.
I honestly didn’t expect there to be anything in the Sequences that could change my mind about the topic. Especially since smarter people than me read all of the Sequences and think that you are wrong.
It’s probably not entirely fair to compare my case to yours because I started reading the Sequences before I was part of this community, and so I was much less familiar with the idea of Friendly AI than you are. But to answer your questions:
Before reading the Sequences, I assumed unfriendly AI was one more crazy speculative idea about the future, around the level of “We’ll discover psionics and merge into a single cosmic consciousness” and not really worthy of any more consideration.
I think you believe that superintelligent AI may not be possible, that it’s unlikely to “go foom”, and that in general it’s not a great use of our time to worry about it.
That’s a good question. Looking over the post list I’m surprised that I can’t find any that look like the sort of thing that would do that directly (there’s a lot about how it’s important to build a Friendly AI as opposed to just throw one together and assume it will be Friendly, but if I understand you right we don’t disagree there). It could have been an indirect effect of realizing that the person who wrote these was very smart and he believed in it. It could have been that they taught me enough rationality to realize I might be wrong about this and should consider changing my mind. And it could have been just very gradual worldview change. You said you were reading the debate with Robin, and that seems like a good starting point. The two dependency thingies labelled “Five Sources of Discontinuity” and “Optimization and the Singularity” here also give me vague memories of being good. But I guess that either I was wrong about the Sequences being full of brilliant pro-Singularity arguments, or they’re more complicated than I thought. Maybe someone else who’s read them more recently than I have can answer this better?
...which shouldn’t discourage you from reading the Sequences. They’re really good. Really. They might or might not directly help you on this question, but they’ll be indirectly helpful on this and many other things. It’s a really good use of your time (debating with me isn’t; I don’t claim any special insight on this issue beyond what I’ve picked up from the Sequences and elsewhere, and I don’t think I’ve ever posted any articles on AI simply because I wouldn’t even meet this community’s lax standards for expertise).
(Addendum to my other comment)
Here is why I believe that reading the Sequences might not be worth the effort:
1) According to your survey, 38.5% of all people have read at least 75% of the Sequences yet only 16.5% think that unfriendly AI is the most fearsome existential risk.
2) The following (smart) people have read the Sequences, and more, but do not agree about risks from AI:
Robin Hanson
Katja Grace (who has been a visiting fellow)
John Baez (who interviewed Eliezer Yudkowsky)
Holden Karnofsky
Ben Goertzel
According to your survey, 38.5% of all people have read at least 75% of the Sequences yet only 16.5% think that unfriendly AI is the most fearsome existential risk.
So what? I’m not even sure that Eliezer himself considers uFAI the most likely source of extinction. It’s just that Friendly AI would help save us from most of the other possible sources of extinction too (not just from uFAI), and from several other sources of suffering too (not just extinction), so it kills multiple birds with one stone to figure it out.
As a point of note, I myself didn’t place uFAI as the most likely existential risk in that survey. That doesn’t mean I share your attitude.
No, but in the light of an expected utility calculation, why would I read the Sequences?
Assuming you continue to write posts authoritatively about subjects related to said sequences—including criticisms of the contents therein—having read the sequences may reduce the frequency of you humiliating yourself.
They contain many insights unrelated to AI (looking at the sequences wiki page, it seems that most AI-ish things are concentrated in the second half). And many people had fun reading them. I think it would be a better use of time than trying to generically improve your math education that you speak of elsewhere (I don’t think it makes sense to learn math as an instrumental goal without a specific application in mind—unless you simply like math, in which case knock yourself out).
From a theoretical standpoint, you should never expect that observing something will shift your beliefs in some particular direction (and, guess what, there’s a post about that). This doesn’t work for humans—we can be convinced of things and we can expect to be convinced even if we don’t want to. But then, the fact that the sequences fail to convince many people shouldn’t be an argument against reading them. At least now you can be sure that they’re safe to read and won’t brainwash you.
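The post being alluded to is presumably the one on conservation of expected evidence; the underlying identity, spelled out:

```latex
% Conservation of expected evidence: before seeing E, the expected posterior
% equals the prior, so no particular direction of update can be anticipated.
\[
\mathbb{E}_{E}\bigl[\, P(H \mid E) \,\bigr]
  \;=\; \sum_{e} P(E = e)\, P(H \mid E = e)
  \;=\; \sum_{e} P(H \wedge E = e)
  \;=\; P(H)
\]
```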
I assumed unfriendly AI was one more crazy speculative idea about the future, around the level of “We’ll discover psionics and merge into a single cosmic consciousness” and not really worthy of any more consideration.
I do not believe that it is that speculative and I am really happy that there are people like Eliezer Yudkowsky who think about it.
In most of my submissions I try to show that there are a lot of ways in which the idea of superhuman AI could fail to be a risk.
Why do I do that? The reasons are my main points of disagreement with Eliezer Yudkowsky and others who believe that “this is crunch time”: 1) I believe that they are overconfident when it comes to risks from AI, and that the evidence simply doesn’t allow you to dramatize the case the way they do, and 2) I believe that they are overconfident when it comes to their methods of reasoning.
I would never have criticized them if they had said, 1) “AI might pose a risk. We should think about it and evaluate the risk carefully.” and 2) “Here are some logical implications of AI being a risk. We don’t know if AI is a risk, so those implications are secondary and should be discounted accordingly.”
But that is not what is happening. They portray friendly AI as a moral imperative and use the full weight of all logical implications of risks from AI to blow up its expected utility.
And that’s where my saying that I “found no flaws but feel that there are flaws” comes into play.
I understand that if P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X). The problem is that, as muflax put it, I don’t see how you can believe in the implied invisible and remain even remotely sane. It does not work out. Even though on an intellectual level I completely agree with it, my intuition is yelling that something is very wrong here. It is ear-battering. I can’t ignore it. Call it irrational or just sad, I can’t help it.
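For reference, the identity being invoked here is just the product rule:

```latex
% If Y follows from X with near-certainty, conjoining Y costs almost no probability mass.
\[
P(X \wedge Y) \;=\; P(Y \mid X)\, P(X) \;\approx\; P(X)
\qquad \text{whenever } P(Y \mid X) \approx 1
\]
```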
I think you believe that superintelligent AI may not be possible, that it’s unlikely to “go foom”, and that in general it’s not a great use of our time to worry about it.
It is fascinating. If I could work directly on it, then I would do it. But giving away my money? Here we get to point #1, mentioned above.
Is there enough evidence that my money would make a difference? This question is deep. The question is not just about the likelihood of a negative Singularity, but also the expected utility of contributing any amount of money to friendly AI research. I am seriously unable to calculate that. I don’t even know if I should get an MRI to check for unruptured brain aneurysms.
Another problem is that I am not really altruistic. I’d love to see everybody happy. But that’s it. But then I also don’t really care about myself that much. I only care if I might suffer, but not about being dead. That’s what makes the cryonics question pretty easy for me. I just don’t care enough.
It could have been an indirect effect of realizing that the person who wrote these was very smart and he believed in it.
This is one of the things I don’t understand. I don’t think Eliezer is that smart. But even if he were, I don’t think that increases the probability of him being right about some extraordinary ideas very much. Especially since I have chatted with other people who are equally smart and who told me that he is wrong.
There are many incredibly smart people who hold really absurd ideas.
The biggest problem is that he hasn’t achieved much. All he did was put together some of the work of other people, especially in the field of rationality and heuristics and biases. And he wrote a popular fanfic. That’s it.
Yeah, he got some rich people to give him money. But the same people also support other crazy ideas with the same amount of money. That’s little evidence.
It could have been that they taught me enough rationality to realize I might be wrong about this and should consider changing my mind.
Sure, I am very likely wrong. But that argument cuts both ways.
You said you were reading the debate with Robin, and that seems like a good starting point.
I will try. Right now I am rather put off by Eliezer’s style of writing. I have a hard time understanding what he is saying, while Robin is very clear and I agree with just about everything he says.
But I will try to continue and research everything I don’t understand.
...which shouldn’t discourage you from reading the Sequences. They’re really good. Really.
In what respect? Those posts that I have read were quite interesting. But I even enjoy reading a calculus book right now. And just as I expect never to actually benefit from learning calculus, I don’t think that it is instrumentally useful to read the Sequences. It is not as if I am raving mad. I have enough rationality to live a good life without the Sequences.
If you mean that they are good at convincing you of risks from AI, then I also ask: how sure are you that they are not only convincing but actually factually right? Do you believe that you have the expertise that is necessary to discern a good argument about artificial intelligence from one that is not even wrong?
It’s a really good use of your time (debating with me isn’t;
Just one last question, if you allow. What are you doing against risks from AI? Do you pursue a career where you can earn a lot of money to contribute it to SIAI?
What’s your numerical probability estimate that, assuming no one puts much work into stopping it, Unfriendly AI will seriously damage or destroy human civilization in the next few centuries?
Mine is...hmm...I don’t know. Maybe 50%? I’m not sure. I do know that if there were an asteroid nearby with the same probability of impacting Earth, I’d be running up to people and shaking them and shouting “WHY AREN’T WE BUILDING MORE ASTEROID DEFLECTORS?! WHAT’S WRONG WITH YOU? PEOPLE!” I don’t know if I believe in unconditional moral imperatives, but if there were a 50% chance of an asteroid striking Earth soon, or even a 10% chance, and no one was doing anything about it, I would at least feel an imperative conditional on some of my other beliefs to try to help stop it.
So maybe part of what the Sequences did for me was help calibrate my brain well enough so that I noticed the similarity between the asteroid and the AI case.
The remaining sane part seems to be a matter of personal psychology. My undergrad philosophy prof once told a story of how a classmate of hers had to be committed to an institution after reading a book on nihilism: he just started doubting everything and went mad. My prof read the same book on nihilism, thought it made some interesting arguments that she couldn’t immediately refute, went back to her everyday life, and a few years later reconsidered it and found some possible refutations.
I have always sympathized more with my professor’s point of view: I can read arguments which if taken seriously would be nightmarish or imply total doubt, admit the arguments seem plausible, and then go back to making pizza or doing homework or whatever. I’m not sure what the difference is between people like myself and my professor, and people like you and my professor’s classmate. Maybe you’re more rational, deep down? Or maybe you’re just naturally depressed and anxious, and your brain latches onto this as an excuse? I don’t know. In any case, I don’t at all think of being able to resist terrifying implications as a rationalist skill and I’m not sure what I would do in your position.
(in my own case, I have pretty much decided to live a normal life but give a reasonable amount of what I make to SIAI and associated charities, probably volunteer for them if they need it, and leave it at that. Are the algorithms that produced this plan optimal? No. Are they a heck of a lot better than going insane and resisting the idea of friendly AI with all my might because if I accepted it I would have to go insane and give away all my money? Yes.)
Another problem is that I am not really altruistic. I’d love to see everybody happy. But that’s it. But then I also don’t really care about myself that much. I only care if I might suffer, but not about being dead. That’s what makes the cryonics question pretty easy for me. I just don’t care enough.
...I would describe this as being altruistic; also, I share your intuitions about death and cryonics.
There are many incredibly smart people who hold really absurd ideas.
Okay, point. I guess I got the impression Eliezer was both smart and rational; that he was smart in exactly the way that ought to prevent him from holding absurd ideas. This is an intuition, so I can’t really justify it.
If you mean that they are good at convincing you of risks from AI, then I also ask: how sure are you that they are not only convincing but actually factually right? Do you believe that you have the expertise that is necessary to discern a good argument about artificial intelligence from one that is not even wrong?
This seems like a fully general counterargument. “Sure, the evidence for evolution sounds convincing; but how do you know it’s actually true and you aren’t just being tricked?”
Just one last question, if you allow. What are you doing against risks from AI? Do you pursue a career where you can earn a lot of money to contribute it to SIAI?
I’m pursuing a career as a doctor. Despite a recent major setback, I’m still hoping to get there within a year or so. After that, yeah, I do intend to donate a lot to SIAI—albeit, as I said before, I don’t claim I’ll be anywhere near perfect.
Maybe 50%? I’m not sure. I do know that if there were an asteroid nearby with the same probability of impacting Earth, I’d be running up to people and shaking them and shouting “WHY AREN’T WE BUILDING MORE ASTEROID DEFLECTORS?! WHAT’S WRONG WITH YOU? PEOPLE!”
That would be more than enough to devote a big chunk of the world’s resources to friendly AI research, given the associated utility. But you can’t just make up completely unfounded conjectures, then claim that we don’t have evidence either way (50% chance) but that the utility associated with a negative outcome is huge and we should therefore take it seriously. Because that reasoning will ultimately make you privilege random high-utility outcomes over theories based on empirical evidence.
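To put that worry in arithmetic terms, here is a toy sketch (all probabilities and utilities below are made up purely for illustration):

```python
# A made-up "ignorance" probability times a huge stipulated disutility swamps
# everything else, no matter how flimsy the underlying conjecture is.
scenarios = {
    "well-evidenced risk":    (0.01, 1e6),   # (probability, disutility)
    "unfounded conjecture A": (0.50, 1e12),
    "unfounded conjecture B": (0.50, 1e15),
}

for name, (p, loss) in scenarios.items():
    print(f"{name:24s} expected loss = {p * loss:.3e}")

# The made-up conjectures dominate purely because their stipulated stakes are
# larger: the "privileging random high-utility outcomes" failure mode.
```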
This seems like a fully general counterargument. “Sure, the evidence for evolution sounds convincing; but how do you know it’s actually true and you aren’t just being tricked?”
You can’t really compare that. The arguments for evolution are pretty easy to understand and the evidence is overwhelming. But Eliezer Yudkowsky could tell me anything about AI and I would have no way to tell if he was right or not even wrong.
After that, yeah, I do intend to donate a lot to SIAI—albeit, as I said before, I don’t claim I’ll be anywhere near perfect.
I see. That makes me take you much more seriously.
But Eliezer Yudkowsky could tell me anything about AI and I would have no way to tell if he was right or not even wrong.
You know, one upside of logic is that, if someone tells you proposition x is true, gives you the data, and shows their steps of reasoning, you can tell whether they’re lying or not. I’m not a hundred percent onboard with Yudkowsky’s AI risk views, but I can at least tell that his line of reasoning is correct as far as it goes. He may be making some unjustified assumptions about AI architecture, but he’s not wrong about there being a threat. If he’s making a mistake of logic, it’s not one I can find. A big, big chunk of mindspace is hostile-by-default.
I’m not rich. My gross annual salary is lower than Eliezer Yudkowsky’s or Nick Bostrom’s. (mentioned since you keep using me as your example commenter for donations)
I am not sure how much I have read. Maybe 30 posts? I haven’t found any flaws so far. But I feel that there are huge flaws.
I just spent over a minute trying to emphasize the ‘argument from blatant ignorance’ point here in a way that didn’t just seem petty. I don’t think I succeeded.
In my professional life, I am frequently asked questions about a complex system I have some responsibility for. For pieces of that system, I got sufficiently tired of answering certain recurring questions that I wrote a set of documents that addressed those areas.
How would you recommend I reply to someone who asks me questions about those areas and, when directed to the documents I wrote, replies that they don’t have time to read the document, they just want to participate in a conversation with me about the subject in which I answer their specific questions?
I usually deal with these types of questions by sending them a link to a specific document. For example, I might say, “take a look at the flow diagram at ${url}, then read the notes on ${node}, they will explain why you are getting that error”. If the person comes back and says, “I read your notes but I have no idea what they mean”, I’d point him to some introductory material; but in practice, this happens rarely, because my notes are full of links.
One thing I used to do, but stopped doing, was to say, “open up the project wiki and read through all the pages in it”. The people who come to me with questions are looking for an effective solution to a specific problem, not for general education and/or enlightenment. They have tangible goals just like I do, after all, and there are only so many hours in a day.
How would you recommend I reply to someone who asks me questions about those areas and, when directed to the documents I wrote, replies that they don’t have time to read the document, they just want to participate in a conversation with me about the subject in which I answer their specific questions?
You could answer in the same way that a lot of the AI researchers and computer scientists I have written to (400+ by now) did: “Your fears are the result of a lack of knowledge. Come back once you have enough background to ask intelligent questions about AI.”
But then nobody learns anything because everyone thinks that they are obviously right and the others are idiots.
You might have seen the following sentences in some of my recent submissions:
″...highly specific, conjunctive, non-evidence-backed speculations on possible bad outcomes.”
″… in my view they’ve just sprinkled enough mathematics and logic over their fantasies to give them a veneer of respectability.”
″… I … personally think that they are naïve as far as the nature of human intelligence goes. I think they are mostly very bright and starry-eyed adults who never quite grew out of their science-fiction addiction as adolescents. None of them seems to have a realistic picture about the nature of thinking...”
Those lines are copied straight out of emails from people who know a lot more than I do.
So what do I recommend? I recommend that both sides do the same thing that evolutionary biologists did when creationists attacked. Write a book on it, or create documents that can actually be read by people with a large inferential distance. But don’t tell them to read hundreds of blog posts that are largely unrelated to the problem in question, or your papers that only someone with a doctorate in machine learning can understand.
OK. So, suppose I decide to write a book on it, as you suggest. What do you recommend I do during the years that that book-writing process is going on? And does that recommendation change if it turns out that I’m wrong and merely think that I’m right, vs. if I’m actually right?
What do you recommend I do during the years that that book-writing process is going on?
Talk about the subject of the book as much as possible, even if you end up talking bullshit, because the book is meant to explain why that talk is bullshit. People talking bullshit is exactly what it takes to write a good book on dissolving bullshit.
And does that recommendation change if it turns out that I’m wrong and merely think that I’m right, vs. if I’m actually right?
No. See ‘Darwin’s Dangerous Idea’ by Daniel Dennett. The book starts by reviewing all the bullshit people have been saying in the past few hundred years and manages to shed light on culture, popular misconceptions and how you can be wrong.
He actually mentions a few times how creationists and other enemies of evolution allowed evolutionary biologists to hone their arguments and become stronger. And yes, the arguments of the critics were often poor from the point of view of the experts, but strong from the point of view of laymen.
For what it’s worth, I disagree with your first recommendation.
I do agree that Dennett (both in DDI and more generally) has an admirable willingness to engage with the “bullshit” in his field, though he also is willing to unilaterally dismiss vast swaths of it when he decides it’s no longer valuable for him to engage with (see, for example, his treatment of qualia in ‘Consciousness Explained’… or listen to him talk to undergraduates, if he still does that; he was always a treat to listen to).
Are you saying that people like me, who do not have the capabilities and time to read hundreds of posts, are excluded from asking about and discussing those issues?
No. He’s saying:
I’m reluctant to write a full response to this, but I think large parts of the Sequences were written to address some of these ideas.
I don’t want to be a jerk about this or belabor this point, but in order to decide exactly how I want to go in responding to this: have you read through all the sequences?
However, I will say that EY wrote at least part of the sequences because he got sick and tired of seeing people who try to reason about AI fall immediately into some obvious failure state. You have a tendency to fall into these failure states; e.g., Generalizing from Fictional Evidence during the whole Rome Sweet Rome allusion.
What do you suggest that I do, just ignore it?
Would you rather continue running around in circles, banging your head against the wall? Even if you did read the sequences, there’d still be no guarantee that you wouldn’t continue doing the same thing. But, to paraphrase Yudkowsky, at least you’d get a saving throw.
Why is it sufficient to read the sequences? Why not exclude everyone who doesn’t understand Gödel machines and AIXI?
Nobody said anything of the sort. Again, Yvain’s trying to formulate a response.
You have a tendency to fall into these failure states; e.g., Generalizing from Fictional Evidence during the whole Rome Sweet Rome allusion.
I have had to say this like 10 times now. I am getting the impression that nobody actually read what I wrote.
The whole point was to get people thinking about how an AI is actually going to take over the world, in practice, rather than just claiming it will use magic.
If the IRC channel and the discussion section is any evidence, all you’ve managed to accomplish is to get people to think about how to take over ancient Rome using modern tech.
With the help of tens of thousands of Xiu Mayan warriors, it would take more than 170 years for the Spanish to establish full control of the Maya homelands, which extended from northern Yucatán to the central lowlands.
That you got 22 upvotes for that is incredibly sad.
So wiping out the Aztecs isn’t enough, you wanted the Spaniards to quickly establish full control over the Mayan homelands as well?
That seems to me like moving the goalposts: you no longer just want the scientists to wipe out the Roman Empire, you now also want them to “establish full control” over both the Romans’ former territory and the neighboring Persian empire?
That you got 22 upvotes for that is incredibly sad.
Downvoted for the continuing rudeness, and the continuing whining about how people vote on others’ comments.
Also downvoted for the aforementioned “moving the goalposts”.
So wiping out the Aztecs isn’t enough, you wanted the Spaniards to quickly establish full control over the Mayan homelands as well?
I never argued that a hundred people from today, given machine guns, would have no chance to wipe out the Roman Senate and kill the Roman consul. Which doesn’t mean that an AI could do the same. But even if it were to wipe out Washington, that would not constitute world domination.
The Spaniards, with the help of their Tlaxcallan allies, managed to kill the ruler and destabilize the Aztec empire and its rule. Yet most of the Mesoamerican cultures were intact afterwards. It took over a hundred years to defeat those.
Anyway, that incident can’t be compared with the Roman empire, much less with the world of today.
That seems to me like moving the goalposts...
You seem to have no clue what my goal was in the first place. I tried to exemplify the difficulties of taking over the world with an example that bears some similarity to the idea that an AI might just take over the world. In reality it would be much more difficult for an AI to do the same, due to its fragility and its technological and civilizational dependencies.
Downvoted for the continuing rudeness, and the continuing whining about how people vote others.
I am not whining about voting behavior, but about the fact that people who apparently try to be rational could be fooled that easily and act like a cult to protect their beliefs against the least amount of criticism.
Also downvoted for the aforementioned “moving the goalposts”.
You are wrong and any amount of downvoting won’t make you right.
You seem to have no clue what my goal was in the first place
The question was specific: “Could such a group easily wipe away the Roman empire when beamed back in time?”
Of course, you can define “wipe away” as necessarily something even more extreme than what the Goths or the Ottomans managed to do to the actual Roman Empires.
I am not whining about voting behavior
On this thread alone you’ve complained about the votes on three different comments, Yvain’s, asr’s, and gwern’s. Your actions as observed from outside are indistinguishable from those of someone who is whining about voting behaviour.
but about the fact that people who apparently try to be rational could be fooled that easily and act like a cult to protect their beliefs against the least amount of criticism.
You have responded to (even milder) criticism of this post of yours far more aggressively and far more rudely than anything I’ve seen leveled against you.
I’m interested in protecting the standards of polite discourse that are part of LessWrong, standards which you seem determined to destroy by continuous and gratuitous insults. I haven’t even touched any of the actual content of your post, only commented on your rude responses to criticism of it.
I agree that it’s sad I got 22 upvotes; it was a cheap shot at an example I don’t think was really relevant. But here’s another cheap shot: if you learned a future history book would say “With the help of tens of trillions of nanobots, it would take more than 170 years for unfriendly superintelligences to eradicate the last pockets of human resistance beneath the crust of Ganymede”, would you follow that up with “See, I was right, it wasn’t such a big deal after all”?
I agree with all of Aris Katsaris’ points as well, but, again, I don’t think this example is too relevant to AI. Although this whole concept of historical analogies for AI isn’t great, a slightly better example than “530 Spaniards vs. Mesoamerica” might be “1 Spaniard infected with smallpox vs. Mesoamerica”. AIs don’t have to play fair.
...if you learned a future history book would say “With the help of tens of trillions of nanobots,
Full stop. If advanced nanotechnology comes first then my above post is rendered obsolete. I do completely agree that an AI in possession of advanced nanotech assemblers will be able to take over the world.
So, the AI, even if it comes before advanced nanotechnology, would just need to invent some advanced nanotechnology and convince some people to manufacture it, is that right?
Imagine a group of 530 Spaniards...
At the risk of confirming every negative stereotype RationalWiki and the like have of us...have you read the Sequences? I’m reluctant to write a full response to this, but I think large parts of the Sequences were written to address some of these ideas.
I’m afraid I had the same reaction. XiXiDu’s post seems to take the “shotgun” approach of listing every thought that popped into XiXiDu’s head, without applying much of a filter. It’s exhausting to read. Or, as one person I know put it, “XiXiDu says a lot of random shit.”
I understand what you’re saying, but, speaking from a strictly nitpicky perspective, I don’t think the situation is analogous. The Roman Empire had many more soldiers to throw at the problem; much more territory to manage; comparatively better technology; and, perhaps more importantly, a much more robust and diverse—and therefore memetically resistant—society. They would therefore fare much better than the Aztecs did.
Conquistadors climbed to the top of a volcano to harvest sulphur for ammunition production. You can count on uploads in our society, as on some Navy Seals sent into the Roman world, to do analog actions. They both would not just wait for the help from nowhere. They would improvise as conquistadors once did.
Understood, but there’s only so much the conquistadors can do even with gunpowder. Guns can do a lot of damage against bronze swords and armor, but if they have more soldiers than you have bullets, then you’ll still lose.
Of course, if the conquistadors could build a modern tank, they’d be virtually invincible. But in order to do that, they’d need to smelt steel, vulcanize rubber, refine petroleum, manufacture electronics, etc. Even if they had perfect knowledge of these technologies, they couldn’t duplicate them in ye olde Aztec times, because such technologies require a large portion of the world’s population to be up to speed. There’s a limit to how much you can do armed with nothing but a pocket knife and a volcano.
I think this was XiXiDu’s point: knowledge alone is not enough, you also need to put in a lot of work (which is often measured in centuries) in order to apply it.
Understood that, too! But one can optimize and outsource a lot. Conquistadors employed Indians, enslaved Aztecs and Incas. Besides, the subjective time of an upload can be vast. A good idea can trim a lot of work need to be done. And at least my upload would be full of ideas.
Agreed; just as a single conquistador—or better yet, a modern engineer—transported into the Roman Empire would be full of ideas. He would know how to forge steel, refine petroleum, design electronic circuits, genetically engineer plants and animals, write software, plus many other things. But he wouldn’t be able to actually use most of that knowledge.
In order to write software, you need a computer. In order to build a computer, you need… well, you need a lot of stuff that outsourced Aztec (or Roman) slaves just wouldn’t be able to provide. You could enslave everyone on the continent, and you still wouldn’t be able to make a single CPU. Sure, if you were patient, very lucky, and long-lived, you could probably get something going within the next century or so. But that’s hardly a “FOOM”, and the Romans would have a hundred years to stop you, if they decided that your plans for the future aren’t to their liking.
Exactly. And here the parable breaks down. The upload just might have those centuries. Virtual subjective time of thousands of years to devise a cunning plan, before we the humans even discuss their advantage. Yudkowsky has wrote a short story about this. http://lesswrong.com/lw/qk/that_alien_message/
Bugmaster’s point was that it takes a century of action by external parties, not a century of subjective thinking time. The timetable doesn’t get advanced all that much by super-intelligence. Real-world changes happen on real-world timetables. And yes, the rate of change might be exponential, but exponential curves grow slowly at first.
And meanwhile, other things are happening in that century that might upset the plans and that cannot be arbitrarily controlled even by super-intelligence.
Err… minor quibble.
Exponential curves grow at the same rate all the time. That is, if you zoom in on the x^2 graph at any point at any scale, it will look exactly the same as it did before you zoomed in.
I think we are using “rate” in different ways. The absolute rate of change per unit time for an exponential is hardly constant; If you look at the segment of e^x near, say, e^10, it’s growing much faster than it is at e^(-10).
asr got my point exactly right.
Guns? I thought horses were their main advantage.
(What are the Aztecs gonna do, burn down all the grass in the continent?)
The OP used gunpowder as the example, so I went with it. You might be right about horses, though.
He’s read them well enough to collect a fairly complete index of cherry picked Eliezer quotes to try to make him look bad. I don’t think lack of exposure to prerequisite information is the problem here.
The index wedrifid was alluding to, if anyone cares: http://shityudkowskysays.tumblr.com/
I actually loved reading it. Some of those are up there among my favorite EY quotes. Arrogant, sometimes needing context to make them make sense and sometimes best left unsaid for practical reasons but still brilliant. For example:
There is also a quote there that I agree should remain visible, to Eliezer’s shame, until such time that he swallows his ego and publicly admits that it was an utterly idiotic way to behave. Then there is at least one quote which really deserves a disclaimer in a footnote—that EY has already written an entire sequence on admitting how stupid he was to think the way he thought when he wrote it!
I was actually rather disappointed when the list only went for a page or two. I was looking forward to reading all the highlights and lowlights. He deserves at least a few hundred best of and worst of quotes!
There’s always sorting in http://www.ibiblio.org/weidai/lesswrong_user.php?u=Eliezer_Yudkowsky
By following the link below the quote people could learn that he claims that he doesn’t agree with what he wrote there anymore. But I added an extra disclaimer now.
Thanks for making me find out what the Roko-thing was about :(
Yes, I created that blog. And it is not meant to give Eliezer Yudkowsky a bad name. The context of each quote is provided.
I think it is useful to emphasize some of the beliefs hold within this community.
Here is what someone over at Google+ wrote that I agree with,
Anyway, how could I possible libel someone by publishing what he and his followers believe to be true and good?
You know people are lazy; how many will click through to see the context? (Have you attached Javascript handlers to record click-throughs and compared them against the page traffic?)
How could I possibly libel someone by quoting out of context edited things he has written? “I did [...] have sex with that woman.”
What would you have done if you had meant to give him a bad name but nonetheless had to refrain from simply lying?
That’s fairly easy. There are many ways to do that, although he is already pretty good at it himself.
First I would start acting like Otto E. Rössler with respect to risks from AI. Then I would write as many AI researchers, computer scientists, popular bloggers and politicians etc. as possible about how THIS IS CRUNCH TIME, “it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us” And to back up my claims I would frequently cite posts and papers written by Yudkowsky and talk about how he is probably the most rational person alive and how most AI researchers are just biased.
No lies. Hands-free.
Not to nitpick or anything, but since you don’t actually seem to believe it’s “crunch time”, the strategy you outlined would indeed be a series of lies, regardless of whether Eliezer believes it true.
The tumblr strikes me as pretty much like the thing where you repeat what the other person says in a high-pitched voice, only minus the high-pitched voice.
Usual asshole talk from you again. I agree with 99.99% of what Eliezer writes. And just because he writes some shit doesn’t mean that I don’t respect him or that I want to give him a bad name.
Actually even you agree with me that he has been exaggerating some things in the past.
I agree with far less than that! Probably on the order of 95%. That’s higher than with most authors but I suspect lower than with, say, Yvain. Mind you Eliezer covers more controversial and original subjects than Yvain so I should expect Yvain to be more reliably correct than Eliezer. (Where from my perspective correctness cannot be distinguished from what I, all things including humility considered, agree with.)
Certainly. Enough so that ‘even [me]’ has less meaning when applied to myself than perhaps most people who have read all his posts multiple times. I’m also rather more free with publicly disagreeing with Eliezer (or anyone) than some others who may have specific disagreements with him. You often experience that as ‘asshole talk’. And quite rightly so. (Publicly) disagreeing with people is almost always instrumentally irrational.
I don’t want to be a jerk about this or belabor this point, but in order to decide exactly how I want to go in responding to this: have you read through all the sequences?
No, I can’t do that yet. It would take me half a year and I perceive other things to be more important, e.g. improving my math education.
Are you saying that people like me, who do not have the capabilities and time to read hundreds of posts, are excluded from asking about and discussing those issues? What do you suggest that I do, just ignore it? And where do you draw the line? Why is it sufficient to read the sequences? Why not exclude everyone who doesn’t understand Gödel machines and AIXI?
Well, we have to distinguish between social and epistemic spheres.
Socially it would be rude to exclude you from the debate or to say you don’t have a right to discuss these issues, especially when reading the Sequences is so time-consuming.
Epistemically, of course if you haven’t read the strongest arguments in favor of the importance of Friendly AI, you’re less likely to be right when you say it’s not important, and if gaining knowledge is time-consuming, then you either consume that time or continue lacking that knowledge.
Now there’s something tricky going on here. I can think of two extreme cases. The first is related to the courtier’s reply where some theist says an atheist can’t make atheist arguments unless she’s read every theologian from St. Augustine on and knows apologetics backwards and forwards; this is unfair and I think what you’re talking about. The second is where someone who knows nothing about medicine tries to argue with a neurosurgeon about his technique and the neurosurgeon tells him that he hasn’t been to medical school or read any of the literature on neurosurgery and so his opinions are completely meaningless; this one seems very fair and I would agree with the neurosurgeon.
I’m not really sure where the key difference lies, or which extreme this case falls into. But I asked if you’d read the Sequences for two reasons.
First, because I believed more or less what you believe now before I read the Sequences, and the Sequences changed my mind. If you’ve read the Sequences, then you might have found a flaw I missed and I should investigate what you’re saying further; if not, the simplest explanation is that you’re wrong for the same reasons I was wrong when I was in your position, and if you were to read the Sequences you’d have the same change of heart I did.
And second, because if you’d read the Sequences it would be worth debating some of these points with you; but since you haven’t looked at a much better piece that debates these points, I would recommend you do that instead, which would save me some time and make you more likely to be convinced. I realize that the Sequences are long, but that’s because supporting Eliezer’s view of Friendly AI is complicated and takes at least that long: I couldn’t write a reply to your thoughts which is as convincing as Eliezer’s in less space than it took Eliezer to write his. There’s a reason textbooks on neurosurgery aren’t light reading.
This is fascinating! Is there any way for you to expand on the following points:
What is it that you believed more or less before reading the Sequences?
What do you think is it that I believe with respect to risks from AI?
Is it possible to narrow down on what posts in particular made you change your mind?
I am not sure how much I have read. Maybe 30 posts? I haven’t found any flaws so far. But I feel that there are huge flaws.
How I feel is best exemplified by what Eliezer wrote about Pascal’s mugging, “I’d sooner question my grasp of “rationality” than give five dollars to a Pascal’s Mugger because I thought it was “rational”.”
Is there anything you could ask me, or make me do, that would enable you to find out if it is worth it for you to debate me, even if I haven’t read most of the Sequences?
P.S. Thank you for taking the time and effort.
Given the amount of activity you’ve applied to arguing about these topics (you wrote 82 LW posts during the last 1.5 years), I must say this is astonishing!
If you look closely, I hadn’t written a single post and didn’t voice any criticism for years before the Roko incident.
I honestly didn’t expect there to be anything in the Sequences that could change my mind about the topic. Especially since smarter people than me read all of the Sequences and think that you are wrong.
It’s probably not entirely fair to compare my case to yours because I started reading the Sequences before I was part of this community, and so I was much less familiar with the idea of Friendly AI than you are. But to answer your questions:
Before reading the Sequences, I assumed unfriendly AI was one more crazy speculative idea about the future, around the level of “We’ll discover psionics and merge into a single cosmic consciousness” and not really worthy of any more consideration.
I think you believe that superintelligent AI may not be possible, that it’s unlikely to “go foom”, and that in general it’s not a great use of our time to worry about it.
That’s a good question. Looking over the post list I’m surprised that I can’t find any that look like the sort of thing that would do that directly (there’s a lot about how it’s important to build a Friendly AI as opposed to just throw one together and assume it will be Friendly, but if I understand you right we don’t disagree there). It could have been an indirect effect of realizing that the person who wrote these was very smart and he believed in it. It could have been that they taught me enough rationality to realize I might be wrong about this and should consider changing my mind. And it could have been just very gradual worldview change. You said you were reading the debate with Robin, and that seems like a good starting point. The two dependency thingies labelled “Five Sources of Discontinuity” and “Optimization and the Singularity” here also give me vague memories of being good. But I guess that either I was wrong about the Sequences being full of brilliant pro-Singularity arguments, or they’re more complicated than I thought. Maybe someone else who’s read them more recently than I have can answer this better?
...which shouldn’t discourage you from reading the Sequences. They’re really good. Really. They might or might not directly help you on this question, but they’ll be indirectly helpful on this and many other things. It’s a really good use of your time (debating with me isn’t; I don’t claim any special insight on this issue beyond what I’ve picked up from the Sequences and elsewhere, and I don’t think I’ve ever posted any articles on AI simply because I wouldn’t even meet this community’s lax standards for expertise).
(Addendum to my other comment)
Here is why I believe that reading the Sequences might not be worth the effort:
1) According to your survey, 38.5% of respondents have read at least 75% of the Sequences, yet only 16.5% think that unfriendly AI is the most fearsome existential risk.
2) The following (smart) people have read the Sequences, and more, but do not agree about risks from AI:
Robin Hanson
Katja Grace (who has been a visiting fellow)
John Baez (who interviews Eliezer Yudkowsky)
Holden Karnofsky
Ben Goertzel
So what? I’m not even sure that Eliezer himself considers uFAI the most likely source of extinction. It’s just that Friendly AI would help save us from most of the other possible sources of extinction too (not just from uFAI), and from several other sources of suffering too (not just extinction), so figuring it out kills multiple birds with one stone.
As a point of note, I myself didn’t place uFAI as the most likely existential risk in that survey. That doesn’t mean I share your attitude.
I hope I didn’t claim that the Sequences, or any argument, were 100% effective in changing the mind of every single person who read them.
Also, Ben Goertzel has read all the Sequences? That makes that recent conversation with Luke kind of sad.
No, but in the light of an expected utility calculation, why would I read the Sequences?
Assuming you continue to write posts authoritatively about subjects related to said sequences—including criticisms of the contents therein—having read the sequences may reduce the frequency of you humiliating yourself.
They contain many insights unrelated to AI (looking at the sequences wiki page, it seems that most AI-ish things are concentrated in the second half). And many people had fun reading them. I think it would be a better use of time than trying to generically improve your math education that you speak of elsewhere (I don’t think it makes sense to learn math as an instrumental goal without a specific application in mind—unless you simply like math, in which case knock yourself out).
From a theoretical standpoint, you should never expect that observing something will shift your beliefs in some particular direction (and, guess what, there’s a post about that). This doesn’t work for humans—we can be convinced of things and we can expect to be convinced even if we don’t want to. But then, the fact that the sequences fail to convince many people shouldn’t be an argument against reading them. At least now you can be sure that they’re safe to read and won’t brainwash you.
I do not believe that it is that speculative and I am really happy that there are people like Eliezer Yudkowsky who think about it.
In most of my submissions I try to show that there are a lot of possibilities of how the idea of superhuman AI could fail to be a risk.
Why do I do that? Because of my main points of disagreement with Eliezer Yudkowsky and others who believe that “this is crunch time”: 1) I believe that they are overconfident when it comes to risks from AI, and that the evidence simply doesn’t allow you to dramatize the case the way they do, and 2) I believe that they are overconfident when it comes to their methods of reasoning.
I would never have criticized them if they had said, 1) “AI might pose a risk. We should think about it and evaluate the risk carefully.” and 2) “Here are some logical implications of AI being a risk. We don’t know if AI is a risk, so those implications are secondary and should be discounted accordingly.”
But that is not what is happening. They portray friendly AI as a moral imperative and use the full weight of all logical implications of risks from AI to blow up its expected utility.
And that’s where my saying that I “found no flaws but feel that there are flaws” comes into play.
I understand that if P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X). The problem is that, as muflax put it, I don’t see how you can believe in the implied invisible and remain even remotely sane. It does not work out. Even though on an intellectual level I completely agree with it, my intuition is yelling that something is very wrong here. It is ear-battering. I can’t ignore it. Call it irrational or just sad, I can’t help it.
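(Spelled out, the step I am granting here is just the product rule; a minimal sketch of the algebra, nothing more:

\[
P(X \wedge Y) = P(Y \mid X)\,P(X) \approx P(X) \quad \text{when } P(Y \mid X) \approx 1.
\]

The intellectual part really is that trivial; it is the intuitive part that refuses to follow.)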
It is fascinating. If I could work directly at it then I would do it. But giving away my money? Here we get to point #1, mentioned above.
Is there enough evidence that my money would make a difference? This question is deep. The question is not just about the likelihood of a negative Singularity, but also the expected utility of contributing any amount of money to friendly AI research. I am seriously unable to calculate that. I don’t even know if I should get an MRI to check for unruptured brain aneurysms.
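To make concrete what I mean by being “seriously unable to calculate that”, here is a toy sketch of the calculation’s structure; every number in it is a placeholder I made up for illustration, not a claim anyone here has made:

```python
# A toy expected-value sketch of the donation question, with entirely
# made-up placeholder numbers. The point is only that the sign of the
# answer swings with inputs nobody can pin down.

def expected_value_of_donation(
    p_bad_ai: float,          # probability of a catastrophic AI outcome
    p_research_helps: float,  # probability that FAI research averts it at all
    marginal_effect: float,   # fraction of that research my donation buys
    value_at_stake: float,    # value of averting the outcome (arbitrary units)
    donation_cost: float,     # what the donation costs me (same units)
) -> float:
    benefit = p_bad_ai * p_research_helps * marginal_effect * value_at_stake
    return benefit - donation_cost

# Two "reasonable-looking" sets of guesses, opposite conclusions:
print(expected_value_of_donation(0.5, 0.1, 1e-8, 1e12, 100))    # roughly +400: donate
print(expected_value_of_donation(0.01, 0.01, 1e-12, 1e12, 100)) # roughly -100: don't
```

The structure is trivial; it is the inputs that I have no way to estimate, and the answer is hostage to them.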
Another problem is that I am not really altruistic. I’d love to see everybody happy. But that’s it. But then I also don’t really care about myself that much. I only care if I might suffer, but not about being dead. That’s what makes the cryonics question pretty easy for me. I just don’t care enough.
This is one of the things I don’t understand. I don’t think Eliezer is that smart. But even if he was, I don’t think that increases the probability of him being right about some extraordinary ideas very much. Especially since I have chatted with other people that are equally smart who told me that he is wrong.
There are many incredibly smart people who hold really absurd ideas.
The biggest problem is that he hasn’t achieved much. All he did was put together some of the work of other people, especially in the fields of rationality and heuristics and biases. And he wrote a popular fanfic. That’s it.
Yeah, he got some rich people to give him money. But the same people also support other crazy ideas with the same amount of money. That’s little evidence.
Sure, I am very likely wrong. But that argument cuts both ways.
I will try. Right now I am very put off by Eliezer’s style of writing. I have a hard time understanding what he is saying, while Robin is very clear and I agree with just about everything he says.
But I will try to continue and research everything I don’t understand.
In what respect? Those posts that I have read were quite interesting. But I even enjoy reading a calculus book right now. And just as I expect to never actually benefit from learning calculus, I don’t think that it is instrumentally useful to read the Sequences. It is not like I am raving mad. I have enough rationality to live a good life without the Sequences.
If you mean that they are good at convincing you of risks from AI, then I also ask you: how sure are you that they are not only convincing but actually factually right? Do you believe that you have the expertise necessary to discern a good argument about artificial intelligence from one that is not even wrong?
Just one last question, if you allow. What are you doing against risks from AI? Are you pursuing a career where you can earn a lot of money to contribute to SIAI?
What’s your numerical probability estimate that, assuming no one puts much work into stopping it, Unfriendly AI will seriously damage or destroy human civilization in the next few centuries?
Mine is...hmm...I don’t know. Maybe 50%? I’m not sure. I do know that if there were an asteroid nearby with the same probability of impacting Earth, I’d be running up to people and shaking them and shouting “WHY AREN’T WE BUILDING MORE ASTEROID DEFLECTORS?! WHAT’S WRONG WITH YOU PEOPLE?!” I don’t know if I believe in unconditional moral imperatives, but if there were a 50% chance of an asteroid striking Earth soon, or even a 10% chance, and no one was doing anything about it, I would at least feel an imperative conditional on some of my other beliefs to try to help stop it.
So maybe part of what the Sequences did for me was help calibrate my brain well enough so that I noticed the similarity between the asteroid and the AI case.
The “remaining sane” part seems to be a matter of personal psychology. My undergrad philosophy prof once told a story of how a classmate of hers had to be committed to an institution after reading a book on nihilism: he just started doubting everything and went mad. My prof read the same book on nihilism, thought it made some interesting arguments that she couldn’t immediately refute, went back to her everyday life, and a few years later reconsidered it and found some possible refutations.
I have always sympathized more with my professor’s point of view: I can read arguments which if taken seriously would be nightmarish or imply total doubt, admit the arguments seem plausible, and then go back to making pizza or doing homework or whatever. I’m not sure what the difference is between people like myself and my professor, and people like you and my professor’s classmate. Maybe you’re more rational, deep down? Or maybe you’re just naturally depressed and anxious, and your brain latches onto this as an excuse? I don’t know. In any case, I don’t at all think of being able to resist terrifying implications as a rationalist skill and I’m not sure what I would do in your position.
(in my own case, I have pretty much decided to live a normal life but give a reasonable amount of what I make to SIAI and associated charities, probably volunteer for them if they need it, and leave it at that. Are the algorithms that produced this plan optimal? No. Are they a heck of a lot better than going insane and resisting the idea of friendly AI with all my might because if I accepted it I would have to go insane and give away all my money? Yes.)
...I would describe this as being altruistic; also, I share your intuitions about death and cryonics.
Okay, point. I guess I got the impression Eliezer was both smart and rational; that he was smart in exactly the way that ought to prevent him from holding absurd ideas. This is an intuition, so I can’t really justify it.
This seems like a fully general counterargument. “Sure, the evidence for evolution sounds convincing; but how do you know it’s actually true and you aren’t just being tricked?”
I’m pursuing a career as a doctor. Despite a recent major setback, I’m still hoping to get there within a year or so. After that, yeah, I do intend to donate a lot to SIAI—albeit, as I said before, I don’t claim I’ll be anywhere near perfect.
That would be more than enough to devote a big chunk of the world’s resources to friendly AI research, given the associated utility. But you can’t just make up completely unfounded conjectures, then claim that we don’t have evidence either way (50% chance) but that the utility associated with a negative outcome is huge and we should therefore take it seriously. Because that reasoning will ultimately make you privilege random high-utility outcomes over theories based on empirical evidence.
You can’t really compare that. The arguments for evolution are pretty easy to understand and the evidence is overwhelming. But Eliezer Yudkowsky could tell me anything about AI and I would have no way to tell if he was right or not even wrong.
I see. That makes me take you much more seriously.
You know, one upside of logic is that, if someone tells you proposition x is true, gives you the data, and shows their steps of reasoning, you can tell whether they’re lying or not. I’m not a hundred percent onboard with Yudkowsky’s AI risk views, but I can at least tell that his line of reasoning is correct as far as it goes. He may be making some unjustified assumptions about AI architecture, but he’s not wrong about there being a threat. If he’s making a mistake of logic, it’s not one I can find. A big, big chunk of mindspace is hostile-by-default.
I’m not rich. My gross annual salary is lower than Eliezer Yudkowsky’s or Nick Bostrom’s. (mentioned since you keep using me as your example commenter for donations)
I just spent over a minute trying to emphasize the ‘argument from blatant ignorance’ point here in a way that didn’t just make me seem petty. I don’t think I succeeded.
In my professional life, I am frequently asked questions about a complex system I have some responsibility for. For pieces of that system, I got sufficiently tired of answering certain recurring questions that I wrote a set of documents that addressed those areas.
How would you recommend I reply to someone who asks me questions about those areas and, when directed to the documents I wrote, replies that they don’t have time to read the document, they just want to participate in a conversation with me about the subject in which I answer their specific questions?
I usually deal with these types of questions by sending them a link to a specific document. For example, I might say, “take a look at the flow diagram at ${url}, then read the notes on ${node}, they will explain why you are getting that error”. If the person comes back and says, “I read your notes but I have no idea what they mean”, I’d point him to some introductory material; but in practice, this happens rarely, because my notes are full of links.
One thing I used to do, but stopped doing, was to say, “open up the project wiki and read through all the pages in it”. The people who come to me with questions are looking for an effective solution to a specific problem, not for general education and/or enlightenment. They have tangible goals just like I do, after all, and there are only so many hours in a day.
You could answer the same way that a lot of the AI researchers and computer scientists I have written to (400+ by now) did: “Your fears are the result of a lack of knowledge. Come back once you have enough background to ask intelligent questions about AI.”
But then nobody learns anything because everyone thinks that they are obviously right and the others are idiots.
You might have seen the following sentences in some of my recent submissions:
“...highly specific, conjunctive, non-evidence-backed speculations on possible bad outcomes.”
“… in my view they’ve just sprinkled enough mathematics and logic over their fantasies to give them a veneer of respectability.”
“… I … personally think that they are naïve as far as the nature of human intelligence goes. I think they are mostly very bright and starry-eyed adults who never quite grew out of their science-fiction addiction as adolescents. None of them seems to have a realistic picture about the nature of thinking...”
Those lines are copied straight out of emails from people who know a lot more than I do.
So what do I recommend? I recommend that both sides do the same thing that evolutionary biologists did when creationists attacked. Write a book on it, or create documents that can actually be read by people with a large inferential distance. But don’t tell them to read hundreds of blog posts that are largely unrelated to the problem in question, or your papers that only someone with a doctorate in machine learning can understand.
OK.
So, suppose I decide to write a book on it, as you suggest.
What do you recommend I do during the years that that book-writing process is going on?
And does that recommendation change if it turns out that I’m wrong and merely think that I’m right, vs. if I’m actually right?
Talk about the subject of the book as much as possible, even if you talk bullshit, because the book is meant to explain why it is bullshit. People talking bullshit is exactly what it takes to write a good book on dissolving bullshit.
No. See ‘Darwin’s Dangerous Idea’ by Daniel Dennett. The book starts by reviewing all the bullshit people have been saying in the past few hundred years and manages to shed light on culture, popular misconceptions and how you can be wrong.
He actually mentions a few times how creationists and other enemies of evolution allowed evolutionary biologists to hone their arguments and become stronger. And yes, the arguments of the critics were often poor from the point of view of the experts, but strong from the point of view of laymen.
OK, thanks… that’s clear.
For what it’s worth, I disagree with your first recommendation.
I do agree that Dennett (both in DDI and more generally) has an admirable willingness to engage with the “bullshit” in his field, though he also is willing to unilaterally dismiss vast swaths of it when he decides it’s no longer valuable for him to engage with (see, for example, his treatment of qualia in ‘Consciousness Explained’… or listen to him talk to undergraduates, if he still does that; he was always a treat to listen to).
No. He’s saying:
However, I will say that EY wrote at least part of the sequences because he got sick and tired of seeing people who try to reason about AI fall immediately into some obvious failure state. You have a tendency to fall into these failure states; e.g., Generalizing from Fictional Evidence during the whole Rome Sweet Rome allusion.
Would you rather continue running around in circles, banging your head against the wall? Even if you did read the sequences, there’d still be no guarantee that you wouldn’t continue doing the same thing. But, to paraphrase Yudkowsky, at least you’d get a saving throw.
Nobody said anything of the sort. Again, Yvain’s trying to formulate a response.
I have had to say this like 10 times now. I am getting the impression that nobody actually read what I wrote.
The whole point was to get people thinking about how an AI is actually going to take over the world, in practice, rather than just claiming it will use magic.
If the IRC channel and the discussion section is any evidence, all you’ve managed to accomplish is to get people to think about how to take over ancient Rome using modern tech.
No, you don’t, as this post alone demonstrates. This is why I consider you a troll.
That wouldn’t work out well...
That you got 22 upvotes for that is incredibly sad.
So wiping out the Aztecs isn’t enough, you wanted the Spaniards to quickly establish full control over the Mayan homelands as well?
That seems to me like moving the goalposts: you no longer just want the scientists to wipe out the Roman Empire, you now also want them to “establish full control” over both the Romans’ former territory and the neighboring Persian Empire?
Downvoted for the continuing rudeness, and the continuing whining about how people vote on others.
Also downvoted for the aforementioned “moving the goalposts”.
I never argued that a hundred people from today, given machine guns, would have no chance to wipe out the Roman Senate and kill the Roman consul. Which doesn’t mean that an AI could do the same. But even if it were to wipe out Washington, that would not constitute world domination.
The Spaniards, with the help of their Tlaxcallan allies, managed to kill the ruler and destabilize the Aztec Empire and its rule. Yet most of the Mesoamerican cultures were intact afterwards. It took over a hundred years to defeat them.
Anyway, that incident can’t be compared with the Roman Empire, and much less with the world of today.
You seem to have no clue what my goal was in the first place. I tried to exemplify the difficulties of taking over the world with an example that bears some similarity to the idea that an AI might just take over the world. In reality it would be much more difficult for an AI to do the same, due to its fragility and its technological and civilizational dependencies.
I am not whining about voting behavior, but about the fact that people who apparently try to be rational can be fooled that easily and act like a cult to protect their beliefs against the least amount of criticism.
You are wrong and any amount of downvoting won’t make you right.
The question was specific: “Could such a group easily wipe away the Roman empire when beamed back in time?”
Of course, you can define “wipe away” as necessarily something even more extreme than what the Goths or the Ottomans managed to do to the actual Roman Empires.
On this thread alone you’ve complained about the votes on three different comments, Yvain’s, asr’s, and gwern’s. Your actions as observed from outside are indistinguishable from those of someone who is whining about voting behaviour.
You responded to (even milder) criticism of this post of yours far more aggressively and rudely than anything I’ve seen leveled against you.
I’m interested in protecting the standards of polite discourse that are part of LessWrong, standards you seem determined to destroy by continuous and gratuitous insults. I haven’t even touched any of the actual content of your post, only commented on your rude responses to criticism of it.
I agree that it’s sad I got 22 upvotes; it was a cheap shot at an example I don’t think was really relevant. But here’s another cheap shot: if you learned a future history book would say “With the help of tens of trillions of nanobots, it would take more than 170 years for unfriendly superintelligences to eradicate the last pockets of human resistance beneath the crust of Ganymede”, would you follow that up with “See, I was right, it wasn’t such a big deal after all”?
I agree with all of Aris Katsaris’ points as well, but, again, I don’t think this example is too relevant to AI. Although this whole concept of historical analogies for AI isn’t great, a slightly better example than “530 Spaniards vs. Mesoamerica” might be “1 Spaniard infected with smallpox vs. Mesoamerica”. AIs don’t have to play fair.
Full stop. If advanced nanotechnology comes first, then my above post is rendered obsolete. I do completely agree that an AI in possession of advanced nanotech assemblers would be able to take over the world.
So the AI, even if it comes before advanced nanotechnology, would just need to invent some advanced nanotechnology and convince some people to manufacture it, is that right?
I’m pretty sure UFAI isn’t even necessary for advanced nanotech to take over the world.