I don’t see what exactly you think academia failed at.
As for the sanity and the consistently good ideas: you have to redefine sanity as belief in things like foomism, and count as sane a sort of theology with God replaced by ‘superintelligence’. A clearly useless pastime, if you ask me.
edit: a note on the superintelligence stuff: one could make some educated guesses about what a computational process performing N operations could do, but that would involve a lot of difficult mathematics. As an example of low-hanging fruit, one can show that even absurdly many operations (think of a Jupiter brain thinking for hours), given perfect knowledge, won't let you predict the weather very far: the length of the prediction is ~log(operations) or worse. Yet powers of prediction are the easiest thing to fantasise about.
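(A reconstruction of my own, not the commenter's derivation: the standard chaos-theory argument behind that ~log(operations) scaling.)

```latex
% An initial measurement error \varepsilon in a chaotic system with largest
% Lyapunov exponent \lambda > 0 grows roughly exponentially:
\[
  \delta(t) \approx \varepsilon \, e^{\lambda t} .
\]
% The forecast stops being useful once \delta(t) reaches some tolerance \Delta, so
\[
  t^{*} \approx \frac{1}{\lambda} \, \ln\frac{\Delta}{\varepsilon} .
\]
% Extra operations mainly buy a smaller \varepsilon (finer measurements, finer
% grids), so doubling the effort adds only a constant of order (\ln 2)/\lambda
% to the prediction horizon.
```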
Accessibility, both in the sense that much of the published information is NOT freely available to everyone, and in the sense that it tends to be very difficult to approach without a solid grounding in the field. (The Sequences are aimed at smart people with maybe a year or two of college under their belt; academia has no such obligation to write clearly, and thus tends to collapse into jargon, etc.)
A few bad ideas do not necessarily spoil the effort. In my opinion, the ‘cult’ ideas (such as FOOMism) are fairly easy to notice, and you can still gain quite a lot from the site while avoiding them. More importantly, I think anyone who does buy into the ‘cult’ of LessWrong is still probably a few steps ahead of where they started (if nothing else, they would probably be prone to find some other, potentially more dangerous cult belief if they didn't have something benign like FOOMism to focus on).
Well, before you can proclaim greater success, you have to have some results that you can measure without being biased. I see a counterexample to the teachings actually working in the very statement that they are working, made without solid evidence.
Apparently, knowing about confirmation bias doesn't make people actually try to follow some process that isn't affected by bias; instead it is just assumed that because you know of a bias, it disappears. What I see here is people learning how to rationalize to a greater extent than they learn to be rational (if one can actually learn such a thing anyway). I should stop posting; I only meant to message some people in private.
edit: also, note that foom (and other such stuff) is a good counterexample to the claim that there's some raising of the sanity waterline going on, or some great success at thinking better. To be honest, the whole AI issue looks like EY never quite won the struggle with the theist instinct, and is doing theology. Are there even any discussions of AI here where computational complexity and the like are used to guess at what an AI won't be good at? Did anyone here even arrive at the understanding that a computer, whatever it computes, however it computes, even with absurdly many operations per second, will be a bad weather forecaster (and probably a bad forecaster of many other things)? You can be to a human what a human is to a roundworm and only double your capability on tasks that scale logarithmically in the number of operations. That's a very trivial point, and I just don't see it understood here.
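(An illustrative sketch of my own, not something from the thread: a toy chaotic system where shrinking the initial error by many orders of magnitude, i.e. spending vastly more effort on measurement and computation, buys only a modest gain in usable forecast length. The logistic map, the threshold, and the starting point are arbitrary choices of mine.)

```python
# Toy demonstration: the prediction horizon of a chaotic map grows ~log(1/error).
# The chaotic logistic map x -> 4x(1-x) stands in for "the weather".

def divergence_time(eps, threshold=0.1, x0=0.2, max_steps=10_000):
    """Steps until a trajectory started eps away from x0 drifts past threshold."""
    x, y = x0, x0 + eps
    for t in range(max_steps):
        if abs(x - y) > threshold:
            return t
        x = 4.0 * x * (1.0 - x)
        y = 4.0 * y * (1.0 - y)
    return max_steps

for k in range(2, 15, 2):
    eps = 10.0 ** (-k)   # smaller eps ~ far more measurement/computation effort
    print(f"initial error 1e-{k:02d}: usable horizon ~ {divergence_time(eps)} steps")

# The horizon grows roughly linearly in k, i.e. logarithmically in 1/eps:
# a millionfold improvement in precision adds only a fixed number of extra steps.
```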
I understand that you may not reply, given this statement, but …
Are you sure you're actually disagreeing with Yudkowsky et al.? I agree that it's plausible that many systems, including the weather, are chaotic in such a way that no agent can precisely predict them, but I don't think this disproves the “Foom thesis” (that a self-improving AI is likely to quickly overpower humanity, and therefore that such an AI's goals should be designed very carefully). Even if some problems (like predicting the weather) are intractable to all possible agents, all the Foom thesis requires is that some subset of relevant problems is tractable to AIs but not to humans.
I agree that insights from computational complexity theory are relevant: if solving a particular problem of size n provably requires a number of operations that is exponential in n, then clearly just throwing more computing power at the problem won’t help solve much larger problem instances. But (competent) Foom-theorists surely don’t disagree with this.
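(A quick numerical sketch of mine under an assumed 2^n cost model; the budgets are made up. It shows why extra hardware barely moves exponentially hard problems.)

```python
# If solving size n costs 2**n operations, a millionfold speedup only extends
# the largest reachable n by about log2(1_000_000) ~ 20.

def max_size(ops_per_second, seconds=3600, cost=lambda n: 2 ** n):
    """Largest problem size whose cost fits within the operation budget."""
    budget = ops_per_second * seconds
    n = 0
    while cost(n + 1) <= budget:
        n += 1
    return n

base = max_size(1e9)        # one machine: a billion operations per second, for an hour
huge = max_size(1e9 * 1e6)  # a million such machines
print(base, huge, huge - base)  # e.g. 41 61 20
```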
As to the claim that Yudkowsky et al. are merely doing theology, I agree that there are some similarities between the idea of a God and the idea of a very powerful artificial intelligence, but I don’t think this observation is very relevant to the issue at hand. “Idea X shares some features with the popular Idea Y, but Idea Y is clearly false, therefore the proponents of Idea X are probably mistaken” is not a compelling argument. (I’m aware that this paraphrasing of the “Belief in powerful AI is like religion” argument takes an uncharitable tone, but it doesn’t seem like an inaccurate paraphrase, either.) [EDIT: I shouldn’t have written the previous two sentences the way I did; see Eugine Nier’s criticism in the child comment and my reply in the grandchild.]
The correct phrasing of that argument is:
Idea Y is popular and false.
Therefore, humans have a bias that makes them overestimate ideas like Y.
Idea X shares many features with idea Y.
Therefore, proponents of idea X are probably suffering from the bias above.
It's even worse than that. I am using theology more as an empirical example of what you get when those specific features are part of the thought process. Ultimately, what matters is which features are shared. If the shared feature were ‘wearing the same type of hat’, it wouldn't mean much; if the shared feature is the lack of any attempt to reason in a non-sloppy manner (for example, treating the computational complexity questions with actual math), then that shared feature is the common cause, not just a pattern match.
Ultimately, what an intelligence would do under the rule that you can simply postulate it smart enough to do anything is irrelevant to everything. And I do see implicit disagreement with that in this sort of thinking.
I accept the correction. I should also take this occasion as a reminder to think twice the next time I’m inclined to claim that I’m paraphrasing something fairly and yet in such a way that it still sounds silly; I’m much better than I used to be at resisting the atavistic temptation (conscious or not) to use such rhetorical ploys, but I still do it sometimes.
My response to the revised argument is, of course, that the mental state of proponents of an Idea X is distinct from the actual truth or falsity of Idea X. (As the local slogan goes, “Reversed Stupidity Is Not Intelligence.”) There certainly are people who believe in the Singularity for much the same reason many people are attracted to religion, but I maintain (as I said in the grandparent) that this isn’t very relevant to the object-level issue: the fact that most of the proponents of Idea X are biased in this-and-such a manner doesn’t tell us very much about Idea X, because we expect there to be biased proponents in favor of any idea, true or false.
I agree that this kind of outside view argument doesn’t provide absolute certainty. However, it does provide evidence that part of your reasons for believing X are irrational reasons that you’re rationalizing. Reduce your probability estimate of X accordingly.
Note that the formulation presented here is one I came up with on my own while searching for the Bayes-structure behind arguments based on the outside view.
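(A toy numerical sketch of my own of that Bayes-structure; every number below is an assumption made up for illustration.)

```python
# Outside-view update: "X's proponents resemble proponents of known-false ideas"
# is evidence against X only to the extent that such resemblance is likelier
# when X is false than when X is true.
p_x = 0.5                       # assumed prior probability of idea X
p_resemble_given_true = 0.3     # assumed: biased-looking advocates exist even for true ideas
p_resemble_given_false = 0.6    # assumed: they are more common around false ones

posterior = (p_resemble_given_true * p_x) / (
    p_resemble_given_true * p_x + p_resemble_given_false * (1.0 - p_x)
)
print(round(posterior, 3))      # 0.333: the estimate should drop, but not to zero
```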
I wasn't talking about idea X itself; I was talking about the process of thinking about idea X. We were discussing how smart EY is, and I used this specific type of thinking about X as a counterexample to the sanity waterline being raised in any way.
One can think about plumbing wrongly, e.g. by imagining that pipes grow as part of a pipe plant that must ripen or the pipes will burst, even though pipes and valves and so on exist, can be thought about correctly, and plumbing is not an invalid idea. It doesn't matter to the argument I'm making whether AIs would foom (whether the pipes would burst at N bars). It only matters that the reasons for the belief aren't valid, and aren't even close to being valid (especially for the post-foom state).
edit: Maybe the issue is that people in the West don't seem to get enough proofs in their math homework early enough. You get bad grades for bad proofs, regardless of whether the things you proved were true or false! Some years of that in school make you internalize it well enough. The people who didn't internalize it are very annoying to argue with. They keep asking you to prove the opposite; they produce vague reasoning that's wrong everywhere and ask you to pinpoint a specific error; they ask you to tell them a better way to reason if you don't like how they reasoned (imagine this for Fermat's Last Theorem a couple of decades ago, or for P!=NP now); they make every excuse they can think of to disregard what you say on the basis of some fallacy.
edit2: or rather, they disregard the critique as ‘not good enough’, akin to disregarding a critique of a flawed mathematical proof because the critique doesn't itself prove the theorem true or false. Anyway, I just realized that if I think Eliezer is a quite successful sociopath who's scamming people for money, reading his writings has higher expected utility for me (more curiosity) than if I think he is a self-deluded person and the profitability of the belief is an accident.
From personal experience, we were introduced to proofs in our 10th year (might have been 9th?), so I would have been 15 or 16 when I first encountered the idea of a formal proof. The idea is fairly intuitive to me, but I also have a decent respect for people who seem to routinely produce correct answers via faulty reasoning.
So you consider those answers correct?
I assume you're referring to that?
A correct ANSWER is different from a correct METHOD. I treat an answer as correct if I can verify it.
Problem: X^2 = 9. Solution: X = 3.
It doesn’t matter how they arrived at “X=3”, it’s still correct, and I can verify that (3^2 = 9, yep!)
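(A trivial sketch of my own making the same point in code: checking a candidate answer requires no knowledge of the method that produced it.)

```python
# Verification is about the answer, not the method that found it.
def is_solution(candidate, check=lambda x: x ** 2 == 9):
    return check(candidate)

print(is_solution(3))  # True: 3**2 == 9, however the number was arrived at
print(is_solution(4))  # False, however careful the reasoning behind it was
# (-3 would also pass; verifying one answer says nothing about uniqueness.)
```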
It's not about whether they disagree, it's about whether they actually did it themselves; that would make them competent. Re: Nier, writing a reply.
Hmmmm, you're right, actually. I was using the evidence of “this has helped me, and a few of my friends”. I have decent anecdotal evidence that it's useful, but I was definitely overplaying its value simply because it happens to land in the “sweet spot” of my social circle. A book like Freakonomics is aimed at a less intelligent audience, and I'm sure there are plenty of resources aimed at a more intelligent audience. The Sequences just happen to be (thus far) ideal for my own social circle.
Thank you for taking the time to respond. I was caught up exploring an idea and hadn't taken the time to step back and realize that it was a stupid one.
I do still feel the Sequences are evidence of intelligence: a stupid person could not have written them! But they're not particularly strong evidence of an extraordinary level of intelligence. It's like a post-graduate degree; you have to be smart to get one, but there are a lot of similarly smart people out there.
Well, that would depend on how you define intelligence. What set us apart from other animals is that we could invent the stone axe (the one with the stone actually attached to the stick; that's hard to do). If I see that someone invented something, I know they are intelligent in this sense. But writings without significant innovation do not let me conclude much. Since IQ tests came along, we have started mixing up different dimensions of intelligence. IQ tests have very little loading on data-heavy or choices-heavy processing (tasks with very many possible actions), and the same goes for some types of work.
Cross-domain optimization. Unless there’s some special reason to focus on a more narrow notion.
What did he optimize, beyond being able to make some income in a dubious way? Ultimately, such definitions are pretty useless for computationally bounded processes. Some tasks nowadays involve choosing between very few alternatives, thanks to the “ready-to-eat” alternatives premade by other people, but by and large the interesting tasks are the ones where you must generate an action from an enormous number of alternatives.
edit: actually, I commented on a related topic. It is, by the way, why I don't think EY is particularly intelligent. Maybe he's optimizing what he posts for appearance instead of predictive power, in which case, okay, he's quite smart. Ultimately, in my eyes, he's either a not-very-bright philosopher or a quite bright sociopath; I'm not sure which.
Just to be sure I understand you:
You agree that Eliezer often does well at optimizing for problems with a small answer space (say 10 options), but what you are measuring is instead the ability to perform in situations with a very large answer space (say, 10^100 options), and you don’t see any evidence of that latter ability?
Could you point to some examples that DO demonstrate that latter ability? I’m genuinely curious what sort of resources are available for handling that sort of “large answer space”, and what it looks like when someone demonstrates that sort of intelligence, because it’s exactly the sort of intelligence I tend to be interested in.
I'd definitely agree that a big obstacle a lot of smart people run into is being able to quickly and accurately evaluate a large answer space. I'm not convinced either way on where Eliezer falls on that, though, since I can't really think of any examples of what it looks like to succeed there.
I can only recall examples where I thought someone clearly had problems, or examples where someone solved the problem by consolidating it into a much smaller answer space (e.g. solving “how to meet women” by memorizing a dozen pickup routines).
Presenting a complex argument requires a whole host of sub-skills.
I take from this and the rest of your comment that you have motivated yourself (for some reason) into marginalizing EY and his work. I have no particular stake in defending EY: whether or not he is intelligent (and it's highly probable he's at least baseline human, all things reasonably considered), his work has been useful to me and to others, and that's all that really matters.
On the other hand, you’re uncharitable and unnecessarily derogatory.
Nowadays, with the internet, you can reach a billion people; there's a lot of self-selection in the audience.
He's spreading utter nonsense, similar in nature to anti-vaccination campaigning. Computational technology is important to medicine, and the belief cluster of “AI etc. is going to kill us all” has already resulted in bombs being sent to people. No, I am not going to be charitable to a person with a real talent for presenting (not as fiction, but as ‘science’) completely misinformed BS that, if he ever gains traction, will inspire more of this. I'm not charitable to any imams, any popes, any priests, or any cranks. Suppose he were an “autodidact” biochemist (with no accomplishments in biochemistry) telling people about chemical dangers picked from science fiction (and living off donations to support his ‘research’). CS is not any simpler than biochemistry. I'm afraid we cannot afford a politeness bias about such issues.
There is actual world-endangering work going on in biochemistry. Every single day, people work with Ebola, Marburg, bird/swine flus, and a host of other deadly diseases that have the potential to wipe out huge portions of humanity. All of it is treated EXTREMELY seriously, with quarantines, regulations, laws, and massively redundant safety procedures. This is to protect us from things like Ebola outbreaks in New York that have never happened outside of science fiction. If CS is not any simpler than biochemistry, and yet NO ONE is taking its dangers as seriously as those of biochemistry, then maybe there SHOULD be someone talking about “science fiction” risks.
Perhaps you should instead update on the fact that the experts in the field are clearly not reckless morons who could be corrected by ignorant outsiders, in the case of biochemistry, and probably in the case of CS as well.
I think we are justified, as a society, in taking biological risks much more seriously than computational risks.
My sense is that in practice, programming is much simpler than biochemistry. With software, we are typically working within a completely designed environment, one designed to be easy for humans to reason about. We can do correctness proofs for software; we can't do anything like that for biology.
Programs basically stay put the way they are created; organisms don’t. For practical purposes, software never evolves; we don’t have a measurable rate of bit-flip errors or the like resulting in working-but-strange programs. (And we have good theoretical reasons to believe this will remain true.)
If a virulent disease does break loose, we have a hard time countering it, because we can’t re-engineer our bodies. But we routinely patch deployed computer systems to make them resistant to particular instances of malware. The cost of a piece of experimental malware getting loose is very much smaller than with a disease.
The entire point of researching self-improving AI is to move programs from the world of software that stays put the way it was created, never evolving, into a world we don't directly control.
Yes. I think the skeptics don't take self-improving AI very seriously. Self-modifying programs in general are too hard to engineer, except in very narrow, specialized ways. A self-modifying program that rapidly achieves across-the-board superhuman ability seems like a fairy tale, not a serious engineering concern.
If there were an example of a program that self-improves in any nontrivial way at all, people might take this concern more seriously.
While Ebola outbreaks in New York haven't happened, Ebola is a real disease, and we know exactly what it would do if there were an outbreak in New York. In all these cases we have a pretty good handle on what the diseases would do, and we've seen extreme examples of disease in history, such as the Black Death wiping out much of Europe. That does seem like a distinct situation, given that no one has seen any form of serious danger from AI in the historical or present-day world.
http://en.wikipedia.org/wiki/Stuxnet
If anything, that underlines the point even more: in the small sample we do have, these things haven't done much damage except for the narrow damage they were programmed to do. So the essential point, that we haven't seen any serious danger from AI, seems valid. (Although there has been some work on automated exploit searchers which, conceivably attached to something like Stuxnet with a more malevolent goal set, could be quite nasty.)