Do you have any direct advice to young programmers?
Advice toward what goal(s)? Reducing AI risk?
Becoming involved with the SI, knowing whether they are qualified to be involved with the SI, and if not, becoming qualified to be involved.
Well, we need lots of help besides elite young math/compsci talent. You could contact louie.helm [at] singinst.org and explain your experience and qualifications. Thanks for your interest!
Is it really optimal to dismiss Incorrect as not being elite math/computer science talent so quickly?
Also, are you familiar with growth versus static models of intelligence? This looks to me like you are promoting a static model, which amounts to destroying a public good in my view.
University professors don’t tell students they are too stupid to contribute to the problems they are trying to solve. I don’t see why SI should either.
I didn’t interpret lukeprog’s comment as dismissing Incorrect as not being elite talent. I thought he was just noting that, whether he is “elite” or not, he can contact Louie to find out how he can help.
Correct.
While I agree with most of this (and have upvoted), two points stand out:

Also, are you familiar with growth versus static models of intelligence?

I don’t think bringing this up helps your point very much. While there are individuals whose apparent extreme talent blooms fairly late (e.g., Steven Chu, who didn’t really start being that impressive until he was in college), the stability of IQ scores over time is, on average, a very robust finding, dating back to Spearman’s original research about a hundred years ago. The same holds for other metrics of intelligence. By and large, intelligence is pretty static.

University professors don’t tell students they are too stupid to contribute to the problems they are trying to solve.

This is true, but professors do sometimes tell students when a problem may just be out of their league. To use an extreme example, consider a grad student who walks into their adviser’s office and says they want to prove the Riemann Hypothesis. Even in that case, though, a professor could still direct them to an easier related problem or a helpful question about some aspect of it, so your basic point stands.
Intelligence seems relatively static, but AFAIK once you’ve reached a certain minimum threshold in intelligence, conscientiousness becomes a more important factor for actual accomplishment. (Anecdotally and intuitively, conscientiousness seems more amenable to change, but I don’t know if the psychological evidence supports that.)
Wait, there’s real evidence of durable changes in conscientiousness? Point me its way. The psychology literature does not appear (after a brief search) to support the idea of lasting change. I would be happy to be wrong.
Well, there’s http://commonsenseatheism.com/wp-content/uploads/2011/02/Eisenberger-Learned-industriousness.pdf
Sorry, I should have been more clear: I only have anecdotal evidence, and a rather small sample at that. I’ll edit my comment.
Mind sharing your source for relatively static IQ? I feel like I’ve read otherwise, especially for children.
Childhood IQs don’t correlate that tightly with adult IQs. But once people are in their late teens, change becomes very unlikely.
Yes, at the lower end there’s some flexibility, especially in the mid-teens, but after that IQ is relatively static.
I’m not sure how strongly IQ correlates with real-world abilities (well, actually, I am sure: 0.2–0.6 depending on the task [1]). You don’t need exceptional IQ to do new math (see Richard Feynman), but you do need an interest in math and quite a bit of exposure. Synesthesia can also be helpful.
I’m not finding a non-paywalled version right now, and unfortunately am not at my university at the moment to access it.
How many mathematicians consciously try to extract heuristics from their problem-solving process and keep them in a database, or track how environmental factors like diet and activities affect their productivity?
Has there ever been a team of mathematicians paired with a team of “mathematician optimizers” who observed the mathematicians like lab animals? :D
Soviet Russia produced a remarkable amount of math, and ideologically was well-suited to such testing or design; they ultimately created whole academic cities for science and math, optimized (or at least, not pessimized like the rest of Soviet Russia) for research.
In fact, what I know of the Russian math academic system strikes me as reminiscent of the impression I have of the very successful athletic systems in both Russia and America: take young kids showing promise who have relatives in related areas, push them hard with experienced tutors who are themselves skilled in the area, provide the resources they might need and various incentives for them and their relatives, and don’t let up until they begin to flag in their late 20s/early 30s, at which point they take their tutors’ places.
Read this today, “Rethinking Giftedness and Gifted Education: A Proposed Direction Forward Based on Psychological Science”, which is very germane to this discussion:

Some special schools target a limited number of academic domains, and some focus on more general academic-talent development. The most intensive special schools existed in the Soviet bloc countries. According to Donoghue, Karp, and Vogeli (2000), Chubarikov and Pyryt (1993), and Grigorenko and Clinkenbeard (1994), the impetus for specialized science schools came in the late 1950s from distinguished scientists advocating for educational opportunities to develop future generations of scientists. In order to increase the geographical reach of the schools, several included boarding facilities. Admission to the schools was based on stringent criteria, including having already competed well in regional competitions. The faculty of these schools included pedagogically talented educators (Karp, 2010), and students had the opportunity to work with renowned professors as well.

An example of one of these specialized institutions is the residential Kolmogorov School (Chubarikov & Pyryt, 1993), which enrolls 200 students per year from Russia, Belarus, and beyond. Selection was and continues to be based on a record of success in regional Olympiads. Professors from the prestigious Moscow State University serve as the faculty, the coursework is heavy and intense, and students are expected to conduct independent projects on topics of interest to them. Grigorenko and Clinkenbeard (1994) reported that students attending Soviet special schools were uncharacteristically (for the Soviet Union) encouraged to be intellectually aggressive and competitive. They added that the curriculum in these schools shortchanged the humanities and social sciences, focusing overwhelmingly on excellence in mathematics and science. Although the schools were often denigrated by Soviet educators and psychologists, who argued that outstanding achievement was achieved exclusively from hard work and commitment, these arguments were countered by famous scientific advocates (Donoghue et al., 2000). The schools, which continue to exist in some form today, have graduates on the faculties of the most prestigious institutions in Russia. However, many graduates of these schools are also found in the academic ranks of Western universities, leading Russian policy makers to question the value of further investment.

Donoghue, E. F., Karp, A., & Vogeli, B. R. (2000). Russian schools for the mathematically and scientifically talented: Can the vision survive unchanged? Roeper Review, 22, 121–123. doi:10.1080/02783190009554015
Chubarikov, V. N., & Pyryt, M. (1993). Educating mathematically gifted pupils at the Kolmogorov School. Gifted Education International, 9, 110–130.
Grigorenko, E. L., & Clinkenbeard, P. R. (1994). An inside view of gifted education in Russia. Roeper Review, 16, 167–171. doi:10.1080/02783199409553566
Karp, A. (2010). Teachers of the mathematically gifted tell about themselves and their profession. Roeper Review, 32, 272–280. doi:10.1080/02783193.2010.485306

It also discusses athletics.
I studied in a specialized Soviet school (well, post-Soviet, but with the same teachers). It had a tough entrance exam. I speak in the past tense because it was dismantled. The biggest thing about those schools is that we studied more deeply and with better understanding, instead of skipping ahead to produce prodigies who understand the same topics equally badly but at an earlier age, and never really become very competent at anything.

Also, on the humanities: while the share of humanities coursework may be smaller, the students are smarter, move faster, and still retain and understand more than the average student in a typical humanities course.
Did you just go meta on the process of going less meta?
A syllabus of recommended reading for folks who think they might want to work on FAI could have a really high benefit-to-cost ratio. It could have just as high a net benefit for reaching young talent as SPARC. It wouldn’t necessarily take much effort either: maybe just EY spending an hour brainstorming books an ideal collaborator would have read, and someone setting up a Google group for people working through the syllabus.

I guess this could increase UFAI risk a little, but I still judge it to be positive in expectation. (SPARC could potentially increase UFAI risk too.)
Already done.
Another point: I seem to recall a joke among mathematicians that if it were announced that some famous problem had been solved, without there actually being a solution, someone would try to find the solution for themselves and succeed in finding a valid one.
In other words, how problems are framed may be important, and framing a problem as potentially impossible may make it difficult for folks to solve it.
Additionally, I see little evidence that the problems required for FAI are actually hard problems. This isn’t to say it isn’t a major research endeavor; it may or may not be. All I’m saying is that I don’t see top academics having hammered at the problems involved in building an FAI the way they’ve hammered at, say, proving the Riemann Hypothesis.
EY thinking they are super hard doesn’t seem like much evidence to me; he’s primarily known as a figure in the transhumanist movement and for popular writings on rationality, not for solving research problems. It’s not even clear how much time he’s spent thinking about the problems in between all of the other stuff he does.
FAI might just require lots of legwork on problems that are relatively straightforward to solve, really.
IMO, the extent to which some or most of these books/documents are only tentative suggestions with unclear relevance to the problem should be emphasized; for example, they shouldn’t be introduced with “After learning these basics”, as if the list were definitive and functioned as some sort of prerequisite.
Also, using the words “deep understanding of mathematics, logic, and computation” to refer to the section with Sipser’s introductory text is not really appropriate.
That’s cool and a good intro, but you could also have a list of weaker suggestions over ten times that size to show people what sorts of advanced maths &c. might or might not end up being relevant: e.g., a summary paper from the literature on abstract machines, or something from very young, developing subfields such as quantum algorithmic information theory that teach relevant cognitive-mathematical skills even if they’re not quite fundamental to decision theory. This is also a sly way to interest people from diverse advanced disciplines. Is opportunity cost the reason such a list isn’t around? My apologies if this question is missing the point of the discussion, and I’m sorry it’s only somewhat related to the post, which is an important topic itself.
Nice!
That list doesn’t actually seem very intimidating; for some reason I expected more highly technical AI papers and books. Why do you guys feel you need elite math talent as opposed to typical math grad student level talent? Which problems, if any, related to FAI seem unusually difficult compared to typical math research problems?
Now we’ve come to the point where I’d like to be able to hand you Open Problems in Friendly AI, but I can’t.
In the Singularity Institute open problems document, you write:

Many of the problems related to navigating the Singularity have not yet been stated with mathematical precision, and the need for a precise statement of the problem is part of the problem.
Are you sure raw math talent is the best predictor of a person’s ability to do this? I tend to associate this skill with programming especially, and maybe solving math word problems.
No, I’m not sure. The raw math talent thing is aimed more at the “Eliezer-led basement FAI team” stage.
Does Eliezer have experience with managing research teams?
No. I should have said “Eliezer-guided,” or something. Eliezer doesn’t think it’s a good idea for him to manage the team. We need our “Oppenheimer” for that.