Another point: I seem to recall a joke among mathematicians that if it were merely announced that some famous problem had been solved, without there actually being a solution, someone would set out to find the solution for themselves and succeed.
In other words, how problems are framed may be important, and framing a problem as potentially impossible may make it difficult for folks to solve it.
Additionally, I see little evidence that the problems involved in FAI are actually hard problems. This isn’t to say that it’s not a major research endeavor, which it may or may not be. All I’m saying is I don’t see top academics having hammered at the problems involved in building an FAI the way they’ve hammered at, say, proving the Riemann hypothesis.
The fact that EY thinks they are super hard doesn’t seem like much evidence to me; he’s primarily known as a figure in the transhumanist movement and for popular writings on rationality, not for solving research problems. It’s not even clear how much time he’s spent thinking about the problems in between everything else he does.
FAI might just require lots of legwork on problems that are relatively straightforward to solve, really.
IMO, the extent to which some or most of these books/documents are only tentative suggestions with unclear relevance to the problem should be emphasized; for example, they shouldn’t be introduced with “After learning these basics,” as if the list were definitive and served as some sort of prerequisite.
Also, using the phrase “deep understanding of mathematics, logic, and computation” to describe the section containing Sipser’s introductory text is not really appropriate.
That’s a good intro, but you could also have a list of weaker suggestions over ten times that size to show people what sorts of advanced maths &c. might or might not end up being relevant: e.g., a survey paper from the literature on abstract machines, or material from very young, developing subfields such as quantum algorithmic information theory, which teach relevant cognitive-mathematical skills even if they’re not quite fundamental to decision theory. This is also a sly way to interest people from diverse advanced disciplines. Is opportunity cost the reason such a list isn’t around? My apologies if this question misses the point of the discussion, and I’m sorry it’s only somewhat related to the post, which is an important topic in itself.
Nice!
That list doesn’t actually seem very intimidating; for some reason I expected more highly technical AI papers and books. Why do you guys feel you need elite math talent as opposed to typical math-grad-student-level talent? Which problems, if any, related to FAI seem unusually difficult compared to typical math research problems?
Now we’ve come to the point where I’d like to be able to hand you Open Problems in Friendly AI, but I can’t.
In the Singularity Institute open problems document, you write: “Many of the problems related to navigating the Singularity have not yet been stated with mathematical precision, and the need for a precise statement of the problem is part of the problem.”
Are you sure raw math talent is the best predictor of a person’s ability to do this? I tend to associate this skill especially with programming, and maybe with solving math word problems.
No, I’m not sure. The raw math talent thing is aimed more at the “Eliezer-led basement FAI team” stage.
Does Eliezer have experience with managing research teams?
No. I should have said “Eliezer-guided,” or something. Eliezer doesn’t think it’s a good idea for him to manage the team. We need our “Oppenheimer” for that.
Already done.