This is a much bigger problem for your ability to reason about this area than you think.

A relevant quote from Eliezer Yudkowsky (source):

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them – just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified.

And another one (source):
I also think that evaluation by academics is a terrible test for things that don’t come with blatant overwhelming unmistakable undeniable-even-to-humans evidence – e.g. this standard would fail MWI, molecular nanotechnology, cryonics, and would have recently failed ‘high-carb diets are not necessarily good for you’. I don’t particularly expect this standard to be met before the end of the world, and it wouldn’t be necessary to meet it either.
So since academic consensus on the topic is not reliable, and domain knowledge in the field of AI is negatively useful, what are the prerequisites for grasping the truth when it comes to AI risks?
I also think that evaluation by academics is a terrible test for things that don’t come with blatant overwhelming unmistakable undeniable-even-to-humans evidence – e.g. this standard would fail MWI, molecular nanotechnology, cryonics,
I think that in saying this, Eliezer is making his opponents’ case for them. Yes, of course the standard would also let you discard cryonics. One solution to that is to say that the standard is bad. Another solution is to say “yes, and I don’t much care for cryonics either”.
I think that in saying this, Eliezer is making his opponents’ case for them.
Nah, those are all plausibly correct things that mainstream science has mostly ignored and/or made researching taboo.
If you prefer a more clear-cut example, science was wrong about continental drift for about half a century—until overwhelming, unmistakable evidence became available.
The main reason that scientists rejected continental drift was that there was no known mechanism which could cause it; plate tectonics wasn’t developed until the late 1950s.
Continental drift is also commonly invoked by pseudoscientists as a reason not to trust scientists, and if you do so too you’re in very bad company. There’s a reason why pseudoscientists keep using continental drift for this purpose and don’t have dozens of examples: examples are very hard to find. Even if you decide that continental drift is close enough that it counts, it’s a very atypical case. Most of the time, when scientists reject something out of hand, they’re right, or at worst wrong about whether the thing exists but right about the lack of good evidence so far.
The main reason that scientists rejected continental drift was that there was no known mechanism which could cause it; plate tectonics wasn’t developed until the late 1950s.
There was also a great deal of institutional backlash against proponents of continental drift, which was my point.
Continental drift is also commonly invoked by pseudoscientists as a reason not to trust scientists, and if you do so too you’re in very bad company.
Guilt by association? Grow up.
There’s a reason why pseudoscientists keep using continental drift for this purpose and don’t have dozens of examples: examples are very hard to find. Even if you decide that continental drift is close enough that it counts, it’s a very atypical case.
There are many, many cases of scientists being oppressed and dismissed because of their race, their religious beliefs, and their politics. That’s the problem, and that’s what’s going on with the CS people who still think AI Winter implies AGI isn’t worth studying.
There was also a great deal of institutional backlash against proponents of continental drift, which was my point.
So? I’m pretty sure that there would be backlash against, say, homeopaths in a medical association. Backlash against deserving targets (which include people who are correct but because of unlucky circumstances, legitimately look wrong) doesn’t count.
I’m reminded of an argument I had with a proponent of psychic powers. He asked me what if psychic powers happen to be of such a nature that they can’t be detected by experiments, don’t show up in double-blind tests, etc. I pointed out that he was postulating that psi is real but looks exactly like a fake. If something looks exactly like a fake, at some point the rational thing to do is treat it as fake. At that point in history, continental drift happened to look like a fake.
Guilt by association? Grow up.
That’s not guilt by association, it’s pointing out that the example is used by pseudoscientists for a reason, and this reason applies to you too.
There are many, many cases of scientists being oppressed and dismissed because of their race, their religious beliefs, and their politics.
If scientists dismissed cryonics because of the supporters’ race, religion, or politics, you might have a point.
I’ll limit my response to the following amusing footnote:
If scientists dismissed cryonics because of the supporters’ race, religion, or politics, you might have a point.
This is, in fact, what happened between early cryonics and cryobiology.
EDIT: Just so people aren’t misled by Jiro’s motivated interpretation of the link:
However, according to the cryobiologist informant who attributes to this episode the formal hardening of the Society for Cryobiology against cryonics, the repercussions from this incident were far-reaching. Rumors about the presentation—often wildly distorted rumors—began to circulate. One particularly pernicious rumor, according to this informant, was that my presentation had included graphic photos of “corpses’ heads being cut off.” This was not the case. Surgical photos which were shown were of thoracic surgery to place cannula and would be suitable for viewing by any audience drawn from the general public.
This informant also indicates that it was his perception that this presentation caused real fear and anger amongst the Officers and Directors of the Society. They felt as if they had been “invaded” and that such a presentation given during the course of, and thus under the aegis of, their meeting could cause them to be publicly associated with cryonics. Comments such as “what if the press got wind of this,” or “what if a reporter had been there” were reported to have circulated.
Also, the presentation may have brought into sharper focus the fact that cryonicists existed, were really freezing people, and that they were using sophisticated procedures borrowed from medicine, and yes, even from cryobiology, which could cause confusion between the “real” science of cryobiology and the “fraud” of cryonics in the public eye. More to the point, it was clear that cryonicists were not operating in some back room and mumbling inarticulately; they were now right there in the midst of the cryobiologists and they were anything but inarticulate, bumbling back-room fools.
Obviously political.

You’re equivocating on the term “political”. When the context is “race, religion, or politics”, “political” doesn’t normally mean “related to human status”, it means “related to government”. Besides, they only considered it low status based on their belief that it is scientifically nonsensical.
My reply was steelmanning your post by assuming that the ethical considerations mentioned in the article counted as religious. That was the only thing mentioned in it that could reasonably fall under “race, religion, or politics” as that is normally understood.
Most of the history described in your own link makes it clear that scientists objected because they considered cryonics scientific nonsense, not because of race, religion, or politics. The article then tacks on a claim that scientists reject it for ethical reasons, but that isn’t supported by its own history, just by a few quotes with no evidence that these beliefs are prevalent among anyone other than the people quoted.
Furthermore, of the quotes it does give, one of them is vague enough that I have no idea if it means in context what the article claims it means. Saying that the “end result” is damaging doesn’t necessarily mean that having unfrozen people walking around is damaging—it may mean that he thinks cryonics doesn’t work and that having a lot of resources wasted on freezing corpses is damaging.
At a minimum, a grasp of computer programming and CS. Computer programming, not even AI.
I’m inclined to disagree somewhat with Eliezer_2009 on the issue of traditional AI—even basic graph search algorithms supply valuable intuitions about what planning looks like, and what it is not. But even that same (obsoleted now, I assume) article does list computer programming knowledge as a requirement.
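To make the point about graph search concrete, here is a minimal editorial sketch (not part of the comment above) of the kind of basic graph search being referred to: a breadth-first search that returns a “plan” as the shortest sequence of states from a start to a goal. The state graph, the state names, and the function name are all invented for illustration.

```python
# Minimal sketch: breadth-first search as a toy planner. Everything here
# (the graph, the names) is illustrative, not from the original discussion.
from collections import deque


def bfs_plan(graph, start, goal):
    """Return the shortest sequence of states from start to goal, or None."""
    frontier = deque([start])
    came_from = {start: None}  # maps each reached state to its predecessor
    while frontier:
        state = frontier.popleft()
        if state == goal:
            plan = []
            while state is not None:   # walk predecessors back to the start
                plan.append(state)
                state = came_from[state]
            return list(reversed(plan))
        for neighbor in graph.get(state, ()):
            if neighbor not in came_from:
                came_from[neighbor] = state
                frontier.append(neighbor)
    return None  # goal unreachable


# Example: states are rooms, edges are doors; the "plan" is the route.
rooms = {
    "hall": ["kitchen", "study"],
    "kitchen": ["pantry"],
    "study": ["library"],
    "library": ["vault"],
}
print(bfs_plan(rooms, "hall", "vault"))  # ['hall', 'study', 'library', 'vault']
```

The useful intuition is that the plan falls out of nothing more than bookkeeping over states and transitions; there is no foresight anywhere in the loop itself.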
...what are the prerequisites for grasping the truth when it comes to AI risks?
At a minimum, a grasp of computer programming and CS. Computer programming, not even AI.
What counts as “a grasp” of computer programming/science? I can e.g. program a simple web crawler and solve a bunch of Project Euler problems. I’ve read books such as “The C Programming Language”.
I would have taken the Udacity courses on machine learning by now, but the stated requirement is a strong familiarity with Probability Theory, Linear Algebra and Statistics. I wouldn’t describe my familiarity as strong; that will take a few more years.
I am skeptical though. If the reason that I dismiss certain kinds of AI risks is that I lack the necessary education, then I expect to see rebuttals of the kind “You are wrong because of (add incomprehensible technical justification)...”. But that’s not the case. All I see are half-baked science fiction stories and completely unconvincing informal arguments.
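(For reference, and not part of the comment above: the kind of exercise mentioned is roughly the level of Project Euler’s first problem, summing the multiples of 3 or 5 below 1000. A minimal solution might look like the following sketch.)

```python
# Illustrative sketch of a Project Euler-style exercise (Problem 1):
# the sum of all multiples of 3 or 5 below 1000.
def sum_of_multiples(limit=1000):
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)


print(sum_of_multiples())  # 233168
```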
What counts as “a grasp” of computer programming/science?
This is actually a question I’ve thought about quite a bit, in a different context. So I have a cached response to what makes a programmer, not tailored to you or to AI at all. When someone asks for guidance on development as a programmer, the question I tend to ask is, how big is the biggest project you architected and wrote yourself?
The 100 line scale tests only the mechanics of programming; the 1k line scale tests the ability to subdivide problems; the 10k line scale tests the ability to select concepts; and the 50k line scale tests conceptual taste, and the ability to add, split, and purge concepts in a large map. (Line counts are very approximate, but I believe the progression of skills is a reasonably accurate way to characterize programmer development.)
New programmers (not jimrandomh), be wary of line counts! It’s very easy for a programmer who’s not yet ready for a 10k line project to turn it into 50k lines. I agree with the progression of skills though.
Yeah, I was thinking more of “project as complex as an n-line project in an average-density language should be”. Bad code (especially with copy-paste) can inflate line counts ridiculously, and languages vary up to 5x in their base density too.
I would have taken the Udacity courses on machine learning by now, but the stated requirement is a strong familiarity with Probability Theory, Linear Algebra and Statistics. I wouldn’t describe my familiarity as strong; that will take a few more years.
I think you’re overestimating these requirements. I haven’t taken the Udacity courses, but I did well in my classes on AI and machine learning in university, and I wouldn’t describe my background in stats or linear algebra as strong—more “fair to conversant”.
They’re both quite central to the field and you’ll end up using them a lot, but you don’t need to know them in much depth. If you can calculate posteriors and find the inverse of a matrix, you’re probably fine; more complicated stuff will come up occasionally, but I’d expect a refresher when it does.
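For concreteness, here is a minimal sketch (not from the comment above) of the two operations mentioned: a posterior computed with Bayes’ rule and a matrix inverse via numpy. The probabilities and the matrix are made-up illustrative values.

```python
# Illustrative only: the numbers below are invented for the example.
import numpy as np

# Posterior via Bayes' rule: P(disease | positive test).
prior = 0.01            # P(disease)
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

evidence = sensitivity * prior + false_positive * (1 - prior)  # P(positive)
posterior = sensitivity * prior / evidence
print(f"P(disease | positive) = {posterior:.3f}")  # ~0.161

# Inverse of a (non-singular) 2x2 matrix, checked against the identity.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```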
Don’t twist Eliezer’s words. There’s a vast difference between “a PhD in what they call AI will not help you think about the mathematical and philosophical issues of AGI” and “you don’t need any training or education in computing to think clearly about AGI”.

Not learning philosophy, as EY recommends, will not help you with the philosophical issues.
What are the prerequisites for grasping the truth when it comes to AI risks?
Ability to program is probably not sufficient, but it is definitely necessary. But not because of domain relevance; it’s necessary because programming teaches cognitive skills that you can’t get any other way, by presenting a tight feedback loop where every time you get confused, or merge concepts that needed to be distinct, or try to wield a concept without fully sharpening your understanding of it first, the mistake quickly gets thrown in your face.
And, well… it’s pretty clear from your writing that you haven’t mastered this yet, and that you aren’t going to become less confused without stepping sideways and mastering the basics first.
That looks highly doubtful to me.

You mean that most cognitive skills can be taught in multiple ways, and you don’t see why those taught by programming are any different? Or do you have a specific skill taught by programming in mind, and think there are other ways to learn it?
There are a whole bunch of considerations.

First, meta. It should be suspicious to see programmers claiming to possess special cognitive skills that only they can have—it’s basically a “high priesthood” claim. Besides, programming became widespread only about 30 years ago. So, which cognitive skills were very rare until that time?
Second, “presenting a tight feedback loop where … the mistake quickly gets thrown in your face” isn’t a unique-to-programming situation by any means.
Third, most cognitive skills are fairly diffuse and cross-linked. Which specific cognitive skills can’t you get any way other than through programming?
I suspect that what the OP meant was “My programmer friends are generally smarter than my non-programmer friends” which is, um, a different claim :-/
I don’t think programming is the only way to build… let’s call it “reductionist humility”. Nor even necessarily the most reliable; non-software engineers probably have intuitions at least as good, for example, to say nothing of people like research-level physicists. I do think it’s the fastest, cheapest, and currently most common, thanks to tight feedback loops and a low barrier to entry.
On the other hand, most programmers—and other types of engineers—compartmentalize this sort of humility. There might even be something about the field that encourages compartmentalization, or attracts to it people that are already good at it; engineers are disproportionately likely to be religious fundamentalists, for example. Since that’s not sufficient to meet the demands of AGI problems, we probably shouldn’t be patting ourselves on the back too much here.
Can you expand on how you understand “reductionist humility”, in particular as a cognitive skill?

I might summarize it as an intuitive understanding that there is no magic, no anthropomorphism, in what you’re building; that any problems are entirely due to flaws in your specification or your model. I’m describing it in terms of humility because the hard part, in practice, seems to be internalizing the idea that you and not some external malicious agency are responsible for failures.
This is hard to cultivate directly, and programmers usually get partway there by adopting a semi-mechanistic conception of agency that can apply to the things they’re working on: the component knows about this, talks to that, has such-and-such a purpose in life. But I don’t see it much at all outside of scientists and engineers.

In other words, realizing that the reason why if you eat a lot you get fat is not that you piss off God and he takes revenge, as certain people appear to alieve.
internalizing the idea that you and not some external malicious agency are responsible for failures.
So it’s basically responsibility?
...that any problems are entirely due to flaws in your specification or your model.
Clearly you never had to chase bugs through third-party libraries… :-) But yes, I understand what you mean, though I am not sure in which way this is a cognitive skill. I’d probably call it an attitude common to professions in which randomness or external factors don’t play a major role—sure, programming and engineering are prominent here.
You could describe it as a particular type of responsibility, but that feels noncentral to me.
Clearly you never had to chase bugs through third-party libraries...
Heh. A lot of my current job has to do with hacking OpenSSL, actually, which is by no means a bug-free library. But that’s part of what I was trying to get at by including the bit about models—and in disciplines like physics, of course, there’s nothing but third-party content.
I don’t see attitudes and cognitive skills as being all that well differentiated.
But randomness and external factors do predominate in almost everything. For that reason, applying programming skills to other domains is almost certain to be suboptimal.
But randomness and external factors do predominate in almost everything.
I don’t think so, otherwise walking out of your door each morning would start a wild adventure and attempting to drive a vehicle would be an act of utter madness.
They don’t predominate overall because you have learnt how to deal with them. If there were no random or external factors in driving, you could do so with a blindfold on.

...

Make up your mind :-)

Predominate in almost every problem.

Don’t predominate in any solved problem.

Learning to drive is learning to deal with other traffic (external) and not knowing what is going to happen next (random).
Much of the writing on this site is philosophy, and people with a technology background tend not to grok philosophy because they are accustomed to answers that can be looked up, or figured out by known methods. If they could keep the logic chops and lose the impatience, they [might make good philosophers], but they tend not to.

Beg pardon?
it’s necessary because programming teaches cognitive skills that you can’t get any other way, by presenting a tight feedback loop where every time you get confused, or merge concepts that needed to be distinct, or try to wield a concept without fully sharpening your understanding of it first, the mistake quickly gets thrown in your face.
On a complete sidenote, this is a lot of why programming is fun. I’ve also found that learning the Coq theorem-prover has exactly the same effect, to the point that studying Coq has become one of the things I do to relax.
And, well… it’s pretty clear from your writing that you haven’t mastered this yet, and that you aren’t going to become less confused without stepping sideways and mastering the basics first.
People have been telling him this for years. I doubt it will get much better.