Another point: I seem to recall a joke among mathematicians that if it were merely announced that some famous problem had been solved, without an actual solution existing, someone would try to work it out for themselves and succeed in producing a valid proof.
In other words, how problems are framed may be important, and framing a problem as potentially impossible may make it difficult for folks to solve it.
Additionally, I see little evidence that the problems required for FAI are actually hard problems. This isn’t to say it’s not a major research endeavor, which it may or may not be. All I’m saying is that I don’t see top academics having hammered at the problems involved in building an FAI the way they’ve hammered at, say, proving the Riemann hypothesis.
EY thinking they are super hard doesn’t seem like much evidence to me; he’s primarily known as a figure in the transhumanist movement and for popular writings on rationality, not for solving research problems. It’s not even clear how much time he’s spent thinking about the problems in between all of the other stuff he does.
FAI might just require lots of legwork on problems that are relatively straightforward to solve, really.