Did other smart people put in as much time and fail? It’s not about size… find the post that you think required the most intelligence to make; that’s where you estimate intelligence from, while from size you estimate persistence. With regard to topics, the Sequences also cover his opinions, many of which have a low independent probability of being correct. That’s not very good; think what reactions very smart people would have. It may be that the community is smarter than average but has an intelligence cut-off point: picture a much narrower bell curve centred at 115.
My first reaction to “Bayesian” this and that was, “too many words about too trivial a topic”. We put the coolest presidents on the lowest-denomination coins, and we put the coolest mathematicians’ names on things that many 5th graders routinely reinvent at a math olympiad.
Well, we have the entirety of academia. Harvard can’t afford academic journals, so it seems fair to say that academic journals fail entirely at this goal, and one assumes that the people publishing there are, on average, at least one standard deviation above the norm (IQ 115+).
It’s not about size…
I think this idea sabotages more intelligent people than anything else. Yes, it is about size. Intelligence is useless if you don’t use it. Call it “applied intelligence” or some such if you want, but it’s what actually matters in the world: not simply the ability to come up with an idea, but to actually put in the work to implement it. “Genius is one percent inspiration, ninety-nine percent perspiration.”
I don’t care about someone who has had a single idea that happens to be smarter than Eliezer’s best—it’s easy to have a single outlier, it’s much harder to have consistently good ideas. And without those other, consistently good ideas, I have no real reason to pay attention to that one idea.
My first reaction to “Bayesian” this and that was, “too many words about too trivial a topic”.
*laughs* Okay, here we agree! Except… the Sequences aren’t just about high-level concepts. They’re about raising the sanity waterline of society. They’re about teaching people who didn’t come up with this one on their own in 5th grade.
I’m not saying Eliezer is the messiah, or the smartest man on Earth. I’m just saying, he’s done some clearly fairly bright things with his life. I think he’s under-educated in some areas, and flat-out misguided in others, but I can say that about an incredible number of intelligent people.
I don’t care about someone who has had a single idea that happens to be smarter than Eliezer’s best—it’s easy to have a single outlier, it’s much harder to have consistently good ideas
You are replying to someone who thinks the FOOM description is misguided, for example. And there is not much evidence for FOOM; the inferences there are quite shaky. There are many ideas Eliezer has promoted that dilute the “consistently good” label unless you already agree with his priors.
They’re about teaching people who didn’t come up with this one on their own in 5th grade.
And it doesn’t look like it succeeds at this...
There is a range of intelligence+knowledge where you generally understand the underlying concepts and were quite close, but couldn’t put them into shape. Those people would like the Sequences, unless a prior clash (or value clash...) makes them too uncomfortable with the shakier topics. These people are noticeably above the waterline, by the way.
For raising the sanity waterline, the Freakonomics books do more than the Sequences.
Minor note: the intelligence explosion/FOOM idea isn’t due to Eliezer. It originally seems to be due to I. J. Good. I don’t know whether Eliezer came up with it independently of Good, but I suspect he didn’t come up with it on his own.
For raising the sanity waterline, the Freakonomics books do more than the Sequences.
This seems dubious to me. The original book might suggest some interesting patterns and teach one how to do Fermi calculations but not much else. The sequel book has quite a few problems. Can you expand on why you think this is the case?
The slow-takeoff idea (of morality, not of intelligence) can be traced back even to Plato. I guess in Eliezer’s arguments about FOOM there is still some fresh content.
OK, I cannot remember how much of the Freakonomics volumes I have read, as they are trivial enough. My point is that Freakonomics is about seeing incentives and seeing the difference between “forward” and “backward” conditional probabilities. It chooses examples that can be backed by data and where the entire mechanism can be exposed. It doesn’t require much effort or any background to read, and it shows you examples that clearly can affect you, even if indirectly.
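(To make the “forward” versus “backward” distinction concrete, here is a minimal sketch; the disease-screening numbers are invented purely for illustration and are not from the book.)

```python
# Minimal sketch of "forward" vs "backward" conditional probabilities,
# with invented numbers: P(positive test | disease) is not P(disease | positive test).
p_disease = 0.01              # assumed base rate
p_pos_given_disease = 0.95    # "forward" probability (test sensitivity)
p_pos_given_healthy = 0.05    # assumed false-positive rate

p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos  # Bayes' rule, "backward"

print(f"P(positive | disease)  = {p_pos_given_disease:.2f}")
print(f"P(disease  | positive) = {p_disease_given_pos:.2f}")   # ~0.16, far from 0.95
```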
I guess in Eliezer’s arguments about FOOM there is still some fresh content.
Is there anything significant? I haven’t looked that hard, but I haven’t really noticed anything substantial in that bit other than his proposed solution of CEV, and that seems to be the most dubious of the claims.
My point is that Freakonomics is about seeing incentives and seeing the difference between “forward” and “backward” conditional probabilities.
Sure, and this is nice if one is trying to model reality on, say, a policy basis. But this is on the order of, say, a subsequence of a general technique. It won’t do as much for most people’s daily decision-making as, say, awareness of confirmation bias or the planning fallacy would. For this sort of work to be useful, it often requires accurate data, and sometimes models that only appear obvious in hindsight or are not easily testable. That doesn’t raise the sanity waterline much.
The main value I see in Freakonomics is communicating “the heart of science” to a general audience, namely that science is about reaching conclusions that are uncomfortable but true.
namely that science is about reaching conclusions that are uncomfortable but true.
This seems confused to me: science should reach conclusions that are true whether or not they are uncomfortable. Moreover, I’m not at all sure how Freakonomics would have shown your point. Finally, I think the general audience knows something sort of like this already; it is a major reason people don’t like science so much.
I agree! But it’s often easy to arrive at conclusions that are comfortable (and happen to be true). It’s harder when conclusions are uncomfortable (and happen to be true). All other things being equal, folks probably favor the comfortable over the uncomfortable. Lots of folks that care about truth, including LW, worry about cognitive biases for this reason. My favorite Freakonomics example is the relationship between abortion and the crime rate. If their claim were true, it would be an extremely uncomfortable kind of truth.
You may be right that the general audience already knows this about science. I am not sure—I often have a hard time popularizing what I do, for instance, because I can never quite tell what the intended audience knows and what it does not know. A lot of “popular science” seems pretty obvious to me, but apparently it is not obvious to people buying the books (or perhaps it is obvious, and they buy books for some other reason than learning something).
It is certainly the case that mainstream science does not touch certain kinds of questions with a ten foot pole (which I think is rather not in the scientific spirit).
Is there anything significant? I haven’t looked that hard, but I haven’t really noticed anything substantial in that bit other than his proposed solution of CEV, and that seems to be the most dubious of the claims.
For me, FOOM as advertised is dubious, so it is hard to tell. That doesn’t change my point: it requires intelligence to prepare the CEV arguments, but his support for the FOOM scenario, and his arguments for it, break the consistency of high-quality ideas for people like me. So, yes, there is a lot to respect him for, but nothing truly unique, and the “consistency of good ideas” is only there if you already agree with his ideas.
It won’t do as much for most people’s daily decision-making as, say, awareness of confirmation bias or the planning fallacy would.
Well… it is far easier to concede that you don’t understand other people than that you don’t understand yourself. Freakonomics gives you a chance to understand why people do these strange things (spoiler: because it is their best move in a complex world with no overarching sanity enforcement). Seeing incentives is the easiest first step to take, and one which many people haven’t taken yet. After you learn to see that other people’s actions are not what they seem, it is much easier to admit that your own decisions are also not what they seem.
As for the planning fallacy… what do you expect, when there are often incentives to commit it?
For raising the sanity waterline, the Freakonomics books do more than the Sequences.
Hmmm, if I’m going to talk about “applied intelligence” and “practical results”, I really have to concede this point to you, even though I really don’t want to.
The Sequences feel like they demonstrate more intelligence, because they appeal to my level of thinking, whereas Freakonomics feels like it is written to a more average-intelligence audience. But, of course, there’s plenty of stuff written above my level, so unless I privilege myself rather dramatically, I have to concede that Eliezer hasn’t really done anything special. Especially since a lot of his rationalist ideas are available from other sources, if not outright FROM other sources (Bayes, etc.)
I’d still argue that the Sequences are a clear sign that Eliezer is intelligent (“bright”) because clearly a stupid person could not have done this. But I mean that in the sense that probably most post-graduates are also smart—a stupid person couldn’t make it through college.
Um… thank you for breaking me out of a really stupid thought pattern :)
He is obviously PhD-level bright, and probably quite a bit above the average PhD holder. He writes well, he has learned quite a lot of cognitive science, and I think that writing a thesis would be an expenditure of diligence and time more than of effort for him.
On the other hand, some of his writings make me think that he doesn’t have a feel for, say, what is and is not possible with programming, due to relatively limited practice. This also makes me heavily discount his position on FOOM when it clashes with the predictions of people from the field, with the predictions of, say, Jeff Hawkins (who studied both AI and neuroscience), and with Hanson’s economic arguments, all at the same time.
It feels to me that, when he taught himself, he skipped all the fundamentals and everything not immediately rewarding.
The AI position is kind of bizarre. I know that people who themselves have some sort of ability gap when it comes to innovation (similar to lacking mental visualization, but for innovation) assume that all innovation is done by a straightforward serial process (the kind that can be greatly sped up on a computer), much as people who can’t mentally visualize assume that tasks done using mental imagery are done without mental imagery. If you are like this and you come across something like Vinge’s “A Fire Upon the Deep”, then I can see how you may freak out about foom, ‘Novamente is going to kill us all’ style. There are people who think AI will eventually make us obsolete, but very few of them believe in this sort of foom.
As for computation theory, he didn’t skip all the fundamentals, only some parts of some of them. There are some red flags, though.
By the way, I wonder where the “So you want to become Seed AI programmer” article from http://acceleratingfuture.com/wiki (long broken) can be found. It would be useful to have it around, or to have it publicly disclaimed by Eliezer Yudkowsky: it did help me decide whether I see any value in SIAI’s plans or not.
There’s an awful lot of fundamentals, though… I replied to a comment of his very recently. It’s not a question of what he skipped; it’s a question of what few things he didn’t skip. If you have 100 outputs with 10 values each, you get 10^100 actions (and that’s not even big for innovation). There is nothing mysterious about being unable to implement something that deals with that in the naive way. Then, if you are to use better methods than brute-force maximizing, well, some functions are easier to find the maxima of analytically; nothing mysterious about that either. Ultimately, you don’t find successful autodidacts among people who had the opportunity to obtain an education the normal way at a good university.
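(A rough illustration of the combinatorial point above, with invented numbers rather than a claim about any particular system: exhaustive search over the joint action space is hopeless, while an objective whose structure is known, here a separable one, can be maximized cheaply.)

```python
import random

# 100 outputs with 10 possible values each: 10**100 joint actions.
outputs, values = 100, 10
n_actions = values ** outputs
ops_per_second = 1e18                      # assumed, generously, one exaflop
print(f"{n_actions:.1e} actions, ~{n_actions / ops_per_second:.0e} s to enumerate")

# By contrast, a separable objective (total score = sum of per-output scores)
# is maximized by choosing the best value for each output independently:
score = [[random.random() for _ in range(values)] for _ in range(outputs)]
best_action = [max(range(values), key=lambda v: score[i][v]) for i in range(outputs)]
# 100 * 10 = 1000 evaluations instead of 10**100.
```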
At this point you are being somewhat mean. It does look like honest sloppy writing on his part. With a minimum of goodwill I can accept that he meant “effectively maximizing the expectation of”. Also, it would still be somewhat interesting if only precisely one function could be maximized; at least some local value manipulations could be possible, after all. So it is not that obvious.
About autodidacts: the problem here is that even getting an education at some reputable place can still leave you with a lot of skipped fundamentals.
If he means “effectively maximizing the expectation of”, then there is nothing mysterious about different levels of ‘effectively’ being available for different functions, and his rhetorical point about ‘mysteriously’ falls apart.
I agree that education also allows for skipped fundamentals. Self-education can be good if one has good external critique, such as learning to program and having the computer tell you when you’re wrong. Blogging, not so much. Internal critique is possible but rarely works, and it doesn’t work for anything that is in the slightest bit non-rigorous.
I don’t see what exactly you think academia failed at.
As for the sanity and the consistently good ideas: you have to redefine sanity as belief in stuff like foomism, and count as sane doing some sort of theology with God replaced by ‘superintelligence’, a clearly useless pastime if you ask me.
edit: a note on the superintelligence stuff: one could make some educated guesses about what a computational process that did N operations could do, but that will involve a lot of difficult mathematics. As an example of low-hanging fruit, one can show that even scarily many operations (think a Jupiter brain thinking for hours), even given perfect knowledge, won’t let you predict the weather very far: the length of the prediction is ~log(operations) or worse. The powers of prediction, though, are the easiest to fantasise about.
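(A minimal sketch of that claim, under the textbook assumption that forecast error in a chaotic system grows exponentially in time; the Lyapunov exponent and resource figures below are invented for illustration.)

```python
import math

lam = 1.0        # assumed Lyapunov exponent, per day
tolerance = 1.0  # error level at which the forecast becomes useless

def horizon_days(initial_error):
    # Solve initial_error * exp(lam * t) = tolerance for t.
    return math.log(tolerance / initial_error) / lam

for extra_resources in (1, 1e3, 1e6, 1e12):
    # Suppose extra compute/measurement shrinks the initial error proportionally.
    err0 = 1e-3 / extra_resources
    print(f"{extra_resources:>6.0e}x resources -> horizon ~ {horizon_days(err0):5.1f} days")
# Each 1000x increase in resources buys only ~6.9 extra days: horizon ~ log(resources).
```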
I don’t see what exactly you think academia failed at.
Accessibility, both in the sense that much of the published information is NOT freely available to everyone, and in the sense that it tends to be very difficult to approach without a solid grounding in the field (the Sequences are aimed at smart people with maybe a year or two of college under their belt; academia has no such obligation to write clearly, and thus tends to collapse into jargon, etc.).
As for the sanity and the consistently good ideas: you have to redefine sanity as belief in stuff like foomism
A few bad ideas do not necessarily spoil the effort. In my opinion, the ‘cult’ ideas (such as FOOMism) are fairly easy to notice, and you can still gain quite a lot from the site while avoiding those. More importantly, I think anyone that does buy into the ‘cult’ of LessWrong is still probably a few steps ahead of where they started (if nothing else, they’re probably prone to finding some other, potentially more dangerous cult belief if they don’t have something benign like FOOMism to focus on).
Well, before you can proclaim greater success, you have to have some results that you can measure without being biased. I see a counterexample to the teachings actually working in the very statement that they are working, made without solid evidence.
Apparently, knowing of confirmation bias doesn’t make people actually try to follow some sort of process that isn’t affected by the bias; instead it is just assumed that because you know of a bias, it disappears. What I can see here is people learning to rationalize to a greater extent than they learn to be rational (if one can actually learn such a thing anyway). I should stop posting; I was only meaning to message some people in private.
edit: also, see, foom (and other such stuff) is a good counterexample to the claim that there’s some raising of the sanity waterline going on, or some great success at thinking better. TBH the whole AI issue looks like EY never quite won the struggle with the theist instinct, and is doing theology. Is there even any talk about AI where computational complexity etc. is used to guess at what AI won’t be good at? Did anyone here even arrive at the understanding that a computer, whatever it computes, however it computes it, even with scarily many operations per second, will be a bad weather forecaster (and probably a bad forecaster of many other things)? You can be to a human as a human is to a roundworm and only double the capability on things that are logarithmic in the number of operations. That’s a very trivial point, and one I just don’t see understood here.
I should stop posting; I was only meaning to message some people in private.
I understand that you may not reply, given this statement, but …
Are you sure you’re actually disagreeing with Yudkowsky et al.? I agree that it’s plausible that many systems, including the weather, are chaotic in such a way that no agent can precisely predict them, but I don’t think that this disproves the “Foom thesis” (that a self-improving AI is likely to quickly overpower humanity, and therefore that such an AI’s goals should be designed very carefully). Even if some problems (like predicting the weather) are intractable to all possible agents, all the Foom thesis requires is that some subset of relevant problems is tractable to AIs but not to humans.
I agree that insights from computational complexity theory are relevant: if solving a particular problem of size n provably requires a number of operations that is exponential in n, then clearly just throwing more computing power at the problem won’t help solve much larger problem instances. But (competent) Foom-theorists surely don’t disagree with this.
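(A back-of-the-envelope illustration of that point, with invented figures: for a problem costing 2^n steps, a millionfold hardware increase buys only about twenty more units of problem size.)

```python
import math

ops_per_second = 1e9          # assumed baseline machine
n = 50
budget_seconds = 2 ** n / ops_per_second          # ~13 days of wall clock
faster = 1e6 * ops_per_second                     # a million times the hardware
n_reachable = math.log2(faster * budget_seconds)  # same wall-clock budget
print(f"n = {n}: {budget_seconds / 86400:.1f} days; 10^6x hardware reaches n ~ {n_reachable:.0f}")
```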
As to the claim that Yudkowsky et al. are merely doing theology, I agree that there are some similarities between the idea of a God and the idea of a very powerful artificial intelligence, but I don’t think this observation is very relevant to the issue at hand. “Idea X shares some features with the popular Idea Y, but Idea Y is clearly false, therefore the proponents of Idea X are probably mistaken” is not a compelling argument. (I’m aware that this paraphrasing of the “Belief in powerful AI is like religion” argument takes an uncharitable tone, but it doesn’t seem like an inaccurate paraphrase, either.) [EDIT: I shouldn’t have written the previous two sentences the way I did; see Eugine Nier’s criticism in the child comment and my reply in the grandchild.]
As to the claim that Yudkowsky et al. are merely doing theology, I agree that there are some similarities between the idea of a God and the idea of a very powerful artificial intelligence, but I don’t think this observation is very relevant to the issue at hand. “Idea X shares some features with the popular Idea Y, but Idea Y is clearly false, therefore the proponents of Idea X are probably mistaken” is not a compelling argument. (I’m aware that this paraphrasing of the “Belief in powerful AI is like religion” argument takes an uncharitable tone, but it doesn’t seem like an inaccurate paraphrase, either.)
The correct phrasing of that argument is:
Idea Y is popular and false.
Therefore, humans have a bias that makes them overestimate ideas like Y.
Idea X shares many features with idea Y.
Therefore, proponents of idea X are probably suffering from the bias above.
It’s even worse than that. I am using theology more as an empirical example of what you get when the specific features are part of the thought process. Ultimately, what matters is the features in question. If the feature were ‘wearing the same type of hat’, that wouldn’t mean a lot; if the feature is the lack of any attempt to reason in the least sloppy manner (for example, reasoning about the computational complexity issues using math), then that’s a shared cause, not just pattern matching.
Ultimately, what an intelligence would do under the rule that you can just postulate it smart enough to do anything is entirely irrelevant to anything. I do see implicit disagreement with that in doing this sort of thinking.
I accept the correction. I should also take this occasion as a reminder to think twice the next time I’m inclined to claim that I’m paraphrasing something fairly and yet in such a way that it still sounds silly; I’m much better than I used to be at resisting the atavistic temptation (conscious or not) to use such rhetorical ploys, but I still do it sometimes.
My response to the revised argument is, of course, that the mental state of proponents of an Idea X is distinct from the actual truth or falsity of Idea X. (As the local slogan goes, “Reversed Stupidity Is Not Intelligence.”) There certainly are people who believe in the Singularity for much the same reason many people are attracted to religion, but I maintain (as I said in the grandparent) that this isn’t very relevant to the object-level issue: the fact that most of the proponents of Idea X are biased in this-and-such a manner doesn’t tell us very much about Idea X, because we expect there to be biased proponents in favor of any idea, true or false.
I agree that this kind of outside view argument doesn’t provide absolute certainty. However, it does provide evidence that part of your reasons for believing X are irrational reasons that you’re rationalizing. Reduce your probability estimate of X accordingly.
Note that the formulation presented here is one I came up with on my own while searching for the Bayes-structure behind arguments based on the outside view.
I should also take this occasion as a reminder to think twice the next time I’m inclined to claim that I’m paraphrasing something fairly and yet in such a way that it still sounds silly;
I wasn’t talking about idea X itself; I was talking about the process of thinking about idea X. We were discussing how smart EY is, and I used this specific type of thinking about X as a counterexample to the sanity waterline being raised in any way.
One can think about plumbing wrongly, e.g. imagining that pipes grow as part of a pipe plant that must be ripe or the pipes will burst, even though pipes and valves and so on exist and can be thought about correctly, and plumbing is not an invalid idea. It doesn’t matter to the argument I’m making whether AIs would foom (whether the pipes would burst at N bars). It only matters that the reasons for the belief aren’t valid, and aren’t even close to being valid (especially for the post-foom state).
edit: Maybe the issue is that people in the West don’t seem to get enough proofs in math homework early enough. You get bad grades for bad proofs, regardless of whether the things you proved were true or false! Some years of school make you internalize that well enough. Now, the people who didn’t internalize this are very annoying to argue with. They keep asking you to prove the opposite, they produce vague reasoning that’s wrong everywhere and ask you to pinpoint a specific error, they ask you to tell them a better way to reason if you don’t like how they reasoned about it (imagine this for Fermat’s Last Theorem a couple of decades ago, or now for P!=NP), and they make every excuse they can think of to disregard what you say on the basis of some fallacy.
edit2: or rather, they disregard the critique as ‘not good enough’, akin to disregarding a critique of a flawed mathematical proof because the critique doesn’t prove the theorem true or false. Anyway, I just realized that if I think Eliezer is a quite successful sociopath who’s scamming people for money, that results in higher expected utility for me in reading his writings (more curiosity) than if I think he is a self-deluded person and the profitability of the belief is an accident.
edit: Maybe the issue is that people in the West don’t seem to get enough proofs in math homework early enough.
From personal experience, we were introduced to those in our 10th year (might have been 9th?), so I would have been 15 or 16 when I got introduced to the idea of formal proofs. The idea is fairly intuitive to me, but I also have a decent respect for people who seem to routinely produce correct answers via faulty reasoning.
So you consider those answers correct?
I assume you’re referring to that?
A correct ANSWER is different from a correct METHOD. I treat an answer as correct if I can verify it.
Problem: X^2 = 9. Solution: X = 3.
It doesn’t matter how they arrived at “X = 3”; it’s still correct, and I can verify that (3^2 = 9, yep!).
It’s not about whether they disagree, it’s about whether they actually did it themselves; that would make them competent. Re: Nier, writing a reply.
Well, before you can proclaim greater success, you have to have some results that you can measure without being biased. I see a counterexample to the teachings actually working in the very statement that they are working, made without solid evidence.
Hmmmm, you’re right, actually. I was using the evidence of “this has helped me, and a few of my friends”: I have decent anecdotal evidence that it’s useful, but I was definitely overplaying its value simply because it happens to land in the “sweet spot” of my social circle. A book like Freakonomics is aimed at a less intelligent audience, and I’m sure there are plenty of resources aimed at a more intelligent audience. The Sequences just happen to be (thus far) ideal for my own social circle.
Thank you for taking the time to respond. I was caught up exploring an idea and hadn’t taken the time to step back and realize that it was a stupid one.
I do still feel the Sequences are evidence of intelligence: a stupid person could not have written these! But it’s not particularly evidence of an extraordinary level of intelligence. It’s like a post-graduate degree; you have to be smart to get one, but there are a lot of similarly smart people out there.
Well, that would depend on how you define intelligence. What set us apart from other animals is that we could invent the stone axe (the one with the stone actually attached to the stick; that’s hard to do). If I see someone who invented something, I know they are intelligent in this sense. But writings without significant innovation do not let me conclude much. Since IQ tests, we have started mixing up different dimensions of intelligence. IQ tests have very little loading on data-heavy or choices-heavy processing (with very many possible actions), and on some types of work, too.
Cross-domain optimization. Unless there’s some special reason to focus on a more narrow notion.
What did he optimize, beyond being able to make some income in a dubious way? Ultimately, such definitions are pretty useless for computationally bounded processes. Some tasks nowadays involve a choice between very few alternatives (thanks to the “ready-to-eat” alternatives premade by other people), but by and large the interesting ones are those where you generate an action given an enormous number of alternatives.
edit: actually, I commented on a related topic. It’s, by the way, why I don’t think EY is particularly intelligent. Maybe he’s optimizing what he posts for appearance instead of predictive power, though, in which case, okay, he’s quite smart. Ultimately, in my eyes, he’s either a not-very-bright philosopher or a quite bright sociopath; I’m not sure which.
Just to be sure I understand you:
You agree that Eliezer often does well at optimizing for problems with a small answer space (say 10 options), but what you are measuring is instead the ability to perform in situations with a very large answer space (say, 10^100 options), and you don’t see any evidence of that latter ability?
Could you point to some examples that DO demonstrate that latter ability? I’m genuinely curious what sort of resources are available for handling that sort of “large answer space”, and what it looks like when someone demonstrates that sort of intelligence, because it’s exactly the sort of intelligence I tend to be interested in.
I’d definitely agree that a big obstacle a lot of smart people run into is being able to quickly and accurately evaluate a large answer space. I’m not convinced either way on where Eliezer falls on that, though, since I can’t really think of any examples of what it looks like to succeed there.
I can only recall examples where I thought someone clearly had problems, or examples where someone solved it by consolidating the problem into a much smaller answer space (e.g. solving “how to meet women” by memorizing a dozen pickup routines).
Presenting a complex argument requires a whole host of sub-skills.
Ultimately, such definitions are pretty useless for computationally bounded processes. Some tasks nowadays involve a choice between very few alternatives (thanks to the “ready-to-eat” alternatives premade by other people), but by and large the interesting ones are those where you generate an action given an enormous number of alternatives.
I understand from this and the rest of your comment that you have motivated yourself (for some reason) into marginalizing EY and his work. I’ve no particular stake in defending EY: whether or not he is intelligent (and it’s highly probable he’s at least baseline human, all things (reasonably) considered), his work has been useful to me and others, and that’s all that really matters.
On the other hand, you’re uncharitable and unnecessarily derogatory.
Presenting a complex argument requires a whole host of sub-skills.
Nowadays, with the internet, you can reach a billion people; there’s a lot of self-selection in the audience.
On the other hand, you’re uncharitable and unnecessarily derogatory.
He’s spreading utter nonsense, similar in nature to anti-vaccination campaigning. Computational technology is important to medicine, and the belief cluster of “AI etc. is going to kill us all” has already resulted in bombs being sent to people. No, I am not going to be charitable to a person with a real talent for presenting (not as fiction, but as ‘science’) completely misinformed BS that, if he ever gains traction, will be an inspiration for more of this. I’m not charitable to any imams, popes, priests, or cranks. Suppose he were an “autodidact” biochemist (with no accomplishments in biochemistry) telling people about chemical dangers picked from science fiction (and living off donations to support his ‘research’). CS is not any simpler than biochemistry. I’m afraid we need to avoid a politeness bias about such issues.
There is actual world-dangerous work going on in biochemistry. Every single day, people work with Ebola, Marburg, bird/swine flus, and a host of other deadly diseases that have the potential to wipe out huge portions of humanity. All of this is treated EXTREMELY seriously, with quarantines, regulations, laws, and massively redundant safety procedures. This is to protect us from things like Ebola outbreaks in New York that have never happened outside of science fiction. If CS is not any simpler than biochemistry, and yet NO ONE is taking its dangers as seriously as those of biochemistry, then maybe there SHOULD be someone talking about “science fiction” risks.
Perhaps you should instead update on the fact that the experts in the field are clearly not reckless morons who could be corrected by ignorant outsiders, in the case of biochemistry, and probably in the case of CS as well.
I think we are justified, as a society, in taking biological risks much more seriously than computational risks.
My sense is that, in practice, programming is much simpler than biochemistry. With software, we typically are working within a completely designed environment, and one designed to be easy for humans to reason about. We can do correctness proofs for software; we can’t do anything like that for biology.
Programs basically stay put the way they are created; organisms don’t. For practical purposes, software never evolves; we don’t have a measurable rate of bit-flip errors or the like resulting in working-but-strange programs. (And we have good theoretical reasons to believe this will remain true.)
If a virulent disease does break loose, we have a hard time countering it, because we can’t re-engineer our bodies. But we routinely patch deployed computer systems to make them resistant to particular instances of malware. The cost of a piece of experimental malware getting loose is very much smaller than with a disease.
The entire point of researching self-improving AI is to move programs from the world of software that stays put the way it’s created, never evolving, into a world we don’t directly control.
Yes. I think the skeptics don’t take self-improving AI very seriously. Self-modifying programs in general are too hard to engineer, except in very narrow, specialized ways. A self-modifying program that rapidly achieves across-the-board superhuman ability seems like a fairy tale, not a serious engineering concern.
If there were an example of a program that self-improves in any nontrivial way at all, people might take this concern more seriously.
While Ebola outbreaks in New York haven’t happened, Ebola is a real disease, and we know exactly what it would do if there were an outbreak in New York. In all these cases we have a pretty good handle on what the diseases would do, and we’ve seen extreme examples of diseases in history, such as the Black Death wiping out much of Europe. That does seem like a distinct issue, since no one has seen any form of serious danger from AI in the historical or present-day world.
If anything, that underscores it even more: in the small sample we do have, things haven’t done much damage except for the narrow kind of damage they were programmed to do. So the essential point, that we haven’t seen any serious danger from AI, seems valid. (Although there has been some work on automated exploit searchers which, conceivably attached to something like Stuxnet with a more malevolent goal set, could be quite nasty.)
Did other smart people put as much time and fail? It’s not about size… find the post that you think required the most intelligence to make, that’s from where you estimate intelligence, from size you estimate persistence. With regards to topics, it also covers his opinions, many of which have low independent probability of being correct. That’s not very good—think what the reactions very smart people would have—it may be that the community is smarter than average but has an intelligence cut off point. Picture a much narrower bell curve centred at 115.
My first reaction to “Bayesian” this and that was, “too many words about too trivial topic”. We have coolest presidents on lowest denomination coins, and we have many coolest mathematician names on things that many 5th graders routinely reinvent on a math olympiad.
Well, we have the entirety of academia. Harvard can’t afford academic journals, so it seems fair to say that academic journals fail entirely at this goal, and one assumes that the people publishing there are, on average, at least 1 standard deviation above norm (IQ 115+)
I think this idea sabotages more intelligent people than anything else. Yes, it is about size. Intelligence is useless if you don’t use it. Call it “applied intelligence” or some such if you want, but it’s what actually matters in the world—not simply the ability to come up with an idea, but to actually put the work in to implementing it. “Genius is one percent inspiration, 99% perspiration”
I don’t care about someone who has had a single idea that happens to be smarter than Eliezer’s best—it’s easy to have a single outlier, it’s much harder to have consistently good ideas. And without those other, consistently good ideas, I have no real reason to pay attention to that one idea.
laughs Okay, here we agree! Except… the sequences aren’t just about high-level concepts. They’re about raising the sanity line of society. They’re about teaching people who didn’t come up with this one on their own in 5th grade.
I’m not saying Eliezer is the messiah, or the smartest man on Earth. I’m just saying, he’s done some clearly fairly bright things with his life. I think he’s under-educated in some areas, and flat-out misguided in others, but I can say that about an incredible number of intelligent people.
You are answering to someone who thinks that FOOM description is misguided, for example. And there is not so much evidence for FOOM—inferences are quite shaky there. There are many ideas Eliezer has promoted that dilute the “consistently good” definition unless you agree with his priors.
And it doesn’t look like it succeeds on this...
There is a range of intelligence+knowledge where you generally understand the underlying concepts and were quite close but couldn’t put it into shape. Those people would like Sequences unless the prior clash (or value clash...) make them too uncomfortable with shaky topics. These people are noticeably above waterline, by the way.
For raising sanity waterline Freakonomics books do more than Sequences.
Minor note- the intellligence explosion/FOOM idea isn’t due to Eliezer. The idea originally seems to be due to I.J. Good. I don’t know if Eliezer came up with it independently of Good or not but I suspect that Eliezer didn’t come up with it on his own.
This seems dubious to me. The original book might suggest some interesting patterns and teach one how to do Fermi calculations but not much else. The sequel book has quite a few problems. Can you expand on why you think this is the case?
Slow-takeoff idea (of morality, not of intelligence) can be traced back even to Plato. I guess in Eliezer’s arguments about FOOM there is still some fresh content.
OK, I cannot remember how much of Freakonomics volumes I have read, as it is trivial enough. My point is that Freakonomics is about seeing incentives and seeing the difference between “forward” and “backward” conditional probabilities. It chooses examples that can be backed by data and where entire mechanisms can be exposed. It doesn’t require much effort or any background to read, and it shows your examples that clearly can affect you, even if indirectly.
Is there anything significant? I haven’t looked that hard but I haven’t really noticed anything substantial in that bit other than his potential solution of CEV and that seems to be the most dubious bit of the claims.
Sure and this is nice if one is trying to model reality in say a policy basis. But this is on the order of say a subsequence of a general technique. This won’t do much to most people’s daily decision making the same way that say the danger of confirmation bias or the planning fallacy would. This sort of work to be useful often requires accurate data and sometimes models that only appear obvious in hindsight or are not easily testable. That doesn’t impact the sanity waterline that much.
The main value I see in Freakanomics is communicating “the heart of science” to a general audience, namely that science is about reaching conclusions that are uncomfortable but true.
This seems confused to me, science should reach conclusions that are true whether or not they are uncomfortable. Moreover, I’m not at all sure how Freakanomics would have shown your point. Moreover, I think that the general audience knows something sort of like this already- it is a major reason people don’t like science so much.
I agree! But it’s often easy to arrive at conclusions that are comfortable (and happen to be true). It’s harder when conclusions are uncomfortable (and happen to be true). All other things being equal, folks probably favor the comfortable over the uncomfortable. Lots of folks that care about truth, including LW, worry about cognitive biases for this reason. My favorite Freakanomics example is the relationship between abortions and crime rate. If their claim were true, it would be an extremely uncomfortable kind of truth.
You may be right that the general audience already knows this about science. I am not sure—I often have a hard time popularizing what I do, for instance, because I can never quite tell what the intended audience knows and what it does not know. A lot of “popular science” seems pretty obvious to me, but apparently it is not obvious to people buying the books (or perhaps it is obvious, and they buy books for some other reason than learning something).
It is certainly the case that mainstream science does not touch certain kinds of questions with a ten foot pole (which I think is rather not in the scientific spirit).
For me FOOM as advertised is dubious, so hard to tell. That doesn’t change my point: it requires intelligence to prepare CEV arguments, but the fact of his support for FOOM scenario and his arguments break consistency of high quality of ideas for people like me. So, yes, there is a lot to respect him for, but nothing truly unique and “consistency of good ideas” is only there if you already agree with his ideas.
Well… It is way easier to concede that you don’t understand other people than that you don’t understand yourself. Freakonomics gives you a chance to understand why people do these strange things (spoiler: because it is their best move in the complex world with no overaching sanity enforcement). Seeing incentives is the easiest first step to make which many people haven’t made yet. After you learn to see that actions are not what they seem, it is way better to admit that your decisions are also not what they seem.
As for planning fallacy… What do you want when there are often incentives to commit it?
Hmmm, if I’m going to talk about “applied intelligence” and “practical results”, I really have to concede this point to you, even though I really don’t want to.
The Sequences feel like they demonstrate more intelligence, because they appeal to my level of thinking, whereas Freakonomics feels like it is written to a more average-intelligence audience. But, of course, there’s plenty of stuff written above my level, so unless I privilege myself rather dramatically, I have to concede that Eliezer hasn’t really done anything special. Especially since a lot of his rationalist ideas are available from other sources, if not outright FROM other sources (Bayes, etc.)
I’d still argue that the Sequences are a clear sign that Eliezer is intelligent (“bright”) because clearly a stupid person could not have done this. But I mean that in the sense that probably most post-graduates are also smart—a stupid person couldn’t make it through college.
Um… thank you for breaking me out of a really stupid thought pattern :)
He is obviously PhD-level bright and probably quite a bit above average PhD-holder level. He writes well, he has learned quite a lot of cognitive science and I think that writing a thesis would be expenditure of diligence and time more than effort for him.
From the other point of view, some of his writings make me think that he doesn’t have the feel of, for example, what is possible and what is not with programming due to relatively limited practice. This also makes me heavily discount his position on FOOM when it clashes with the predictions of people from the field and with predictions of, say, Jeff Hawkins who studied both AI sciences and neurology and Hanson’s economical arguments at the same time.
It feels to me that he skipped all the fundamentals and everything not immediately rewarding when he taught himself.
The AI position is kind of bizarre. I know that people whom themselves have some sort of ability gap when it comes to innovation—similar to lack of mental visualization capability but for innovation—they assume that all innovation is done by straightforward serial process (the kind that can be very much speed up on computer), similar to how people whom can’t mentally visualize assume that the tasks done using mental imagery are done without mental imagery. If you are like this and you come across something like Vinge’s “a fire upon the deep”, then i can see how you may freak out about foom, ‘novamente is going to kill us all’ style. There are people whom think AI would eventually obsolete us, but very few of them would believe in same sort of foom.
As for computation theory, he didn’t skip all the fundamentals, only some parts of some of them. There are some red flags, though.
By the way, I wonder where “So you want to become Seed AI programmer” article from http://acceleratingfuture.com/wiki (long broken) can be found. It would be useful to have it around or have it publicly disclaimed by Eliezer Yudkowsky: it did help me to decide whether I see any value in SIAI plans or not.
There’s awful lot of fundamentals, though… I’ve replied to a comment of his very recently. It’s not a question of what he skipped, it’s a question of what few things he didn’t skip. You got 100 outputs, 10 values each, you get 10^100 actions here (and that’s not even big for innovation). Nothing mysterious about being unable to implement something that’ll deal with that in the naive way. Then if you are to use better methods than bruteforce maximizing, well, some functions are easier to find maximums of analytically, nothing mysterious about that either. Ultimately, you don’t find successful autodidacts among people who had opportunity to obtain education the normal way at good university.
At this point you are being somewhat mean. It does look like honest sloppy writing on his part. With a minimum of goodwill I can accept that he meant “effectively maximizing the expectation of”. Also, it would still be somewhat interesting if only precisely one function could be maximizied—at least some local value manipulations could be possible, after all. So it is not that obvious.
About autodidacts—the problem here is that even getting education in some reputed place can still leave you with a lot of skipped fundamentals.
If he means effectively maximizing the expectation of, then there is nothing mysterious about different levels of ‘effectively’ being available for different functions and his rhetorical point with ‘mysteriously’ falls apart.
I agree that education also allows for skipped fundamentals. Self education can be good if one has good external critique, such as learning to program and having computer tell you when you’re wrong. Blogging, not so much. Internal critique is possible but rarely works, and doesn’t work for things that are in the slightest bit non rigorous.
I don’t see what exactly you think academia failed at.
For the sanity and consistently good ideas, you got to redefine sanity as beliefs in stuff like foomism and consider sane doing some sort of theology with god replaced by ‘superintelligence’, clearly useless pass time if you ask me.
edit: note on the superintelligence stuff: one could make some educated guesses about what computational process that did N operations could do, but that will involve a lot of difficult mathematics. For example of low hanging fruit—one can show that even scary many operations (think jupiter brain thinking for hours) given perfect knowledge won’t let you predict weather very far—length of prediction is ~log(operations) or worse. The powers of prediction though are the easiest to fantasise about.
Accessibility, both in the sense that much of the published information is NOT freely available to everyone, and in the sense that it tends to be very difficult to approach without a solid grounding in the field (the Sequences are aimed at smart people with maybe a year or two of college under their belt. Academia doesn’t have any such obligation to write clearly, and thus tends to collapse in to jargon, etc.)
A few bad ideas does not necessarily spoil the effort. In my opinion, the ‘cult’ ideas (such as FOOMism) are fairly easy to notice, and you can still gain quite a lot from the site while avoiding those. More importantly, I think anyone that does buy in to the ‘cult’ of LessWrong is still probably a few steps ahead of where they started (if nothing else, they’re probably prone to find some other, potentially more-dangerous cult-belief if they don’t have something benign like FOOMism to focus on)
Well, before you can proclaim greater success, you got to have some results that you can measure without being biased. I see a counter example to the teachings actually working, in the very statement that they are working, w/o the solid evidence.
Apparently knowing of confirmation bias doesn’t make people actually try to follow some sort of process thats not affected by bias, instead it is just assumed that because you know of bias it disappears. What I can see here is people learning how to rationalize to greater extent than to which they learn to be rational (if one can actually learn such a thing anyway). I should stop posting, was only meaning to message some people in private.
edit: also, see, foom (and other such stuff) is a good counter example to claim that there’s some raising of sanity waterline going on, or some great success at thinking better. TBH whole AI issue looks like EY never quite won the struggle with theist instinct, and is doing theology. Is there even any talks about AI where there’s computational complexity etc is used to guess at what AI won’t be good at? Did anyone here even arrive at understanding that a computer, what ever it computes, how ever it computes, even with scary many operations per second, will be a bad weather forecaster? (and probably bad many other things forecaster). You can get to human as human to a roundworm and only double the capability on things that are logarithmic in the operations. That’s a very trivial thing, that I just don’t see understood here.
I understand that you may not reply, given this statement, but …
Are you sure you’re actually disagreeing with Yudkowsky et al.? I agree that it’s plausible that many systems, including the weather, are chaotic in such a way so as that no agent can precisely predict them, but I don’t think that this disproves the “Foom thesis” (that a self-improving AI is likely to quickly overpower humanity and therefore that such an AI’s goals should be designed very carefully). Even if some problems (like predicting the weather) are intractable to all possible agents, all the Foom thesis requires is some subset of relevant problems is tractable to AIs but not humans.
I agree that insights from computational complexity theory are relevant: if solving a particular problem of size n provably requires a number of operations that is exponential in n, then clearly just throwing more computing power at the problem won’t help solve much larger problem instances. But (competent) Foom-theorists surely don’t disagree with this.
As to the claim that Yudkowsky et al. are merely doing theology, I agree that there are some similarities between the idea of a God and the idea of a very powerful artificial intelligence, but I don’t think this observation is very relevant to the issue at hand. “Idea X shares some features with the popular Idea Y, but Idea Y is clearly false, therefore the proponents of Idea X are probably mistaken” is not a compelling argument. (I’m aware that this paraphrasing of the “Belief in powerful AI is like religion” argument takes an uncharitable tone, but it doesn’t seem like an inaccurate paraphrase, either.) [EDIT: I shouldn’t have written the previous two sentences the way I did; see Eugine Nier’s criticism in the child comment and my reply in the grandchild.]
The correct phrasing of that argument is:
Idea Y is popular and false.
Therefore, humans have a bias that makes them overestimate ideas like Y.
Idea X shares many features with idea Y.
Therefore, proponents of idea X are probably suffering from the bias above.
It’s even worse than that. I am using theology more as empirical example of what you get when the specific features are part of thought process. Ultimately what matters is the features in question. If the features were ‘wearing same type of hat’, then that wouldn’t mean a lot, if the feature is lack of attempt to reason in the least sloppy manner (for example the computational complexity things reasoned about using math), then that’s the shared cause, not just pattern matching.
Ultimately, what an intelligence would do under rule that you can just postulate it smart enough to do anything, is entirely irrelevant to anything. I do see implicit disagreement with that, in doing this sort of thinking.
I accept the correction. I should also take this occasion as a reminder to think twice the next time I’m inclined to claim that I’m paraphrasing something fairly and yet in such a way that it still sounds silly; I’m much better than I used to be at resisting the atavistic temptation (conscious or not) to use such rhetorical ploys, but I still do it sometimes.
My response to the revised argument is, of course, that the mental state of proponents of an Idea X is distinct from the actual truth or falsity of Idea X. (As the local slogan goes, “Reversed Stupidity Is Not Intelligence.”) There certainly are people who believe in the Singularity for much the same reason many people are attracted to religion, but I maintain (as I said in the grandparent) that this isn’t very relevant to the object-level issue: the fact that most of the proponents of Idea X are biased in this-and-such a manner doesn’t tell us very much about Idea X, because we expect there to be biased proponents in favor of any idea, true or false.
I agree that this kind of outside view argument doesn’t provide absolute certainty. However, it does provide evidence that part of your reasons for believing X are irrational reasons that you’re rationalizing. Reduce your probability estimate of X accordingly.
Note, that the formulation presented here is one I came up with on my own while searching for the bayesstructure behind arguments based on the outside view.
I wasn’t talking about idea X itself, I was talking about the process of thought about idea X, we were discussing how smart EY is, and I used the specific type of thinking about X as a counter example to sanity waterline being raised in any way.
One can think about plumbing wrong, e.g. imagining that pipes grow as part of a pipe plant that must be ripe or the pipes will burst, even though pipes and valves and so on exist and can be thought of correctly, and plumbing is not an invalid idea. It doesn’t matter to the argument I’m making, whenever AIs would foom (whenever pipes would burst at N bars). It only matters that the reasons for belief aren’t valid, and aren’t even close to being valid. (especially for the post-foom state)
edit: Maybe the issue is that the people in the west seem not to have enough proofs in math homeworks early enough. You get bad grades for bad proofs, regardless of whenever things you proved were true or false! Some years of schools make you internalize that well enough. Now, the people whom didn’t internalize this, they are very annoying to argue with. They keep asking that you prove the opposite, they do vague reasoning that’s wrong everywhere and ask you to pinpoint a specific error, they ask you to tell them the better way to reason if you don’t like how they reasoned about it (imagine this for Fermat’s last theorem a couple decades ago, or now for P!=NP), they do every excuse they can think of, to disregard what you say on basis of some fallacy.
edit2: or rather, disregard the critique as ‘not good enough’, akin to disregarding critique on a flawed mathematical proof if the critique doesn’t prove the theorem true or false. Anyway, I just realized that if I think that Eliezer is a quite successful sociopath who’s scamming people for money, that results in higher expected utility for me reading his writings (more curiosity), than if I think he is a self deluded person and the profitability of belief is an accident.
From personal experience, we got introduced to those in our 10th year (might have been 9th?), so I would have been 15 or 16 when I got introduced to the idea of formal proofs. The idea is fairly intuitive to me, but I also have a decent respect for people who seem to routinely produce correct answers via faulty reasoning.
so you consider those answers correct?
I assume you’re refer to that?
A correct ANSWER is different from a correct METHOD. I treat an answer as correct if I can verify it.
Problem: X^2 = 9 Solution X=3
It doesn’t matter how they arrived at “X=3”, it’s still correct, and I can verify that (3^2 = 9, yep!)
It’s not about whenever they disagree, it’s about whenever they actually did it themselves, that would make them competent. Re: Niler, writing reply.
Hmmmm, you’re right, actually. I was using the evidence of “this has helped me, and a few of my friends”—I have decent anecdotal evidence that it’s useful, but I was definitely over-playing it’s value simply because it happens to land in the “sweet spot” of my social circle. A book like Freakonomics is aimed at a less intelligent audience, and I’m sure there’s plenty of resources aimed at a more intelligent audience. The Sequences just happen to be (thus far) ideal for my own social circle.
Thank you for taking the time to respond—I was caught up exploring a idea and hadn’t taken the time to step back and realize that it was a stupid one.
I do still feel the Sequences are evidence of intelligence—a stupid person could not have written these! But it’s not any particular evidence of an extraordinary level of intelligence. It’s like a post-graduate degree; you have to be smart to get one, but there’s a lot of similarly smart people out there.
Well, that would depend on how you define intelligence. What set us apart from other animals is that we could invent the stone axe (the one with the stone actually attached to the stick; that’s hard to do). If I see someone who invented something, I know they are intelligent in this sense. But writings without significant innovation do not let me conclude much. Since IQ tests, we have been mixing up different dimensions of intelligence. IQ tests have very little loading on data-heavy or choice-heavy processing (tasks with very many possible actions), and the same goes for some types of work.
Cross-domain optimization. Unless there’s some special reason to focus on a more narrow notion.
What did he optimize, beyond being able to make some income in a dubious way? Ultimately, such definitions are pretty useless for computationally bounded processes. Some tasks nowadays involve a choice between very few alternatives, thanks to the “ready-to-eat” alternatives premade by other people, but by and large the interesting tasks are the ones where you generate an action given an enormous number of alternatives.
edit: actually, I commented on a related topic. It’s, by the way, why I don’t think EY is particularly intelligent. Maybe he’s optimizing what he posts for appearance instead of predictive power, though, in which case, okay, he’s quite smart. Ultimately, in my eyes, he’s either a not-very-bright philosopher or a quite bright sociopath; I’m not sure which.
Just to be sure I understand you:
You agree that Eliezer often does well at optimizing for problems with a small answer space (say 10 options), but what you are measuring is instead the ability to perform in situations with a very large answer space (say, 10^100 options), and you don’t see any evidence of that latter ability?
Could you point to some examples that DO demonstrate that latter ability? I’m genuinely curious what sort of resources are available for handling that sort of “large answer space”, and what it looks like when someone demonstrates that sort of intelligence, because it’s exactly the sort of intelligence I tend to be interested in.
I’d definitely agree that a big obstacle a lot of smart people run into is being able to quickly and accurately evaluate a large answer space. I’m not convinced either way about where Eliezer falls on that, though, since I can’t really think of any examples of what it looks like to succeed there.
I can only recall examples where I thought someone clearly had problems, or examples where someone solved it by consolidating the problem to a much smaller answer space (e.g. solving “how to meet women” by memorizing a dozen pickup routines).
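To make the small-vs-large answer space contrast concrete, a toy sketch in Python (the objective and the numbers are mine, purely illustrative): with ten options you just evaluate them all; with a combinatorial space, enumeration is hopeless and you have to restructure the problem, which is exactly what collapsing it to a dozen canned routines does.

    # Small answer space (say 10 options): evaluate every candidate directly.
    print(max(range(10), key=lambda x: -(x - 7) ** 2))  # prints 7

    # Large answer space: 2**300 bit-string "plans" -- far too many to enumerate.
    n = 300
    print(f"candidates: {2 ** n:.2e}")  # ~2.04e+90

    # Restructuring: for this toy objective (maximize the number of 1s),
    # a greedy per-bit choice finds the optimum without enumerating anything.
    print(sum(1 for _ in range(n)))  # 300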
Presenting a complex argument requires a whole host of sub-skills.
I understand from this and the rest of your comment that you have motivated yourself (for some reason) into marginalizing EY and his work. I’ve no particular stake in defending EY: whether or not he is intelligent (and it’s highly probable he’s at least baseline human, all things (reasonably) considered), his work has been useful to me and others, and that’s all that really matters.
On the other hand, you’re uncharitable and unnecessarily derogatory.
Nowadays, with the internet, you can reach a billion people; there’s a lot of self-selection in the audience.
He’s spreading utter nonsense similar in nature to anti-vaccination campaigning. Computational technology is important to medicine, and the belief cluster of “AI etc. is going to kill us all” has already resulted in bombs being sent to people. No, I am not going to be charitable to a person who has a real talent for presenting (not as fiction, but as ‘science’) completely misinformed BS that, if he ever gains traction, will be an inspiration for more of this. I’m not charitable to any imams, popes, priests, or cranks. Suppose he were an “autodidact” biochemist (with no accomplishments in biochemistry) telling people about chemical dangers picked from science fiction (and living off donations to support his ‘research’). CS is not any simpler than biochemistry. I’m afraid we cannot afford a politeness bias about such issues.
There is actual world-dangerous work going on in biochemistry. Every single day, people work with Ebola, Marburg, bird/swine flus, and a host of other deadly diseases that have the potential to wipe out huge portions of humanity. All of this is treated EXTREMELY seriously, with quarantines, regulations, laws, and massively redundant safety procedures. This is to protect us from things like Ebola outbreaks in New York that have never happened outside of science fiction. If CS is not any simpler than biochemistry, and yet NO ONE is taking its dangers as seriously as those of biochemistry, then maybe there SHOULD be someone talking about “science fiction” risks.
Perhaps you should instead update on the fact that the experts in the field clearly are not reckless morons who could be corrected by ignorant outsiders, in the case of biochemistry, and probably in the case of CS as well.
I think we are justified, as a society, in taking biological risks much more seriously than computational risks.
My sense is that in practice, programming is much simpler than biochemistry. With software, we typically work within a completely designed environment, one designed to be easy for humans to reason about. We can do correctness proofs for software; we can’t do anything like that for biology.
Programs basically stay put the way they are created; organisms don’t. For practical purposes, software never evolves; we don’t have a measurable rate of bit-flip errors or the like resulting in working-but-strange programs. (And we have good theoretical reasons to believe this will remain true.)
If a virulent disease does break loose, we have a hard time countering it, because we can’t re-engineer our bodies. But we routinely patch deployed computer systems to make them resistant to particular instances of malware. The cost of a piece of experimental malware getting loose is very much smaller than with a disease.
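A minimal illustration of the “correctness proofs for software” point above, in Python (my own toy example, not a real proof-assistant workflow): because the artifact is fully designed and its input domain can be made finite, we can check a property over every possible input, something with no analogue for a living cell.

    # Exhaustively verify that a branch-free evenness check agrees with the
    # definition for every 8-bit input. Over a finite domain, this exhaustive
    # check amounts to a machine-assisted proof of the property.
    def is_even_bitwise(n):
        return (n & 1) == 0

    assert all(is_even_bitwise(n) == (n % 2 == 0) for n in range(256))
    print("property verified for all 256 inputs")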
The entire point of researching self-improving AI is to move programs from the world of software that stays put the way it’s created, never evolving, into a world we don’t directly control.
Yes. I think the skeptics don’t take self-improving AI very seriously. Self-modifying programs in general are too hard to engineer, except in very narrow, specialized ways. A self-modifying program that rapidly achieves across-the-board superhuman ability seems like a fairy tale, not a serious engineering concern.
If there were an example of a program that self-improves in any nontrivial way at all, people might take this concern more seriously.
While Ebola outbreaks in New York haven’t happened, Ebola is a real disease, and we know exactly what it would do if there were an outbreak in New York. In all these cases we have a pretty good handle on what the diseases would do, and we’ve seen extreme examples of diseases in history, such as the Black Death wiping out much of Europe. That seems like a distinct issue, because no one has seen any form of serious danger from AI in the historical or present-day world.
http://en.wikipedia.org/wiki/Stuxnet
If anything, that underlines the point even more: in the small sample we do have, things haven’t done much damage except for the narrow bit of damage they were programmed to do. So the essential point that we haven’t seen any serious danger from AI seems valid. (Although there has been some work on making automated exploit searchers, which, attached to something like Stuxnet with a more malevolent goal set, could conceivably be quite nasty.)