To translate: don’t buy this LW-Yudkowsky mantra about how hard and dangerous AI is. Do it at home, for yourself, and have no fear!
That doesn’t seem like a correct translation. The idealistic belief that might be helpful for AI researchers is more like “a safe AI design is possible and simple”, rather than “all AI research is safe” or “safe AI research is easy”. There’s a large difference. I don’t think Descartes and Leibniz considered their task easy, but they needed the belief that it’s possible at all.
Okay. Be careful, but don’t be too afraid in your AI research. Above all, don’t just wait for MIRI and its AI projects, for they are more like Locke, Hume, or Hobbes than Leibniz or Descartes.
I’m not sure. There’s a bit of tension between LW-ish beliefs, which are mostly empiricist, and Eliezer’s popularity, which I think owes more to his idealistic and poetic attitude coupled with high intelligence. Maybe people should learn more from the latter, rather than the former :-)
Excuse me, but of course it is. To believe that it’s neither possible nor simple is to believe that human minds, viewed from the outside, are so needlessly complex that bottling up our model-forming and evaluation-forming processes for artificial processing is impossible or intractable.
The problem only looks hard because people are applying the wrong maps. They come at it with maps based on pure logic, microeconomic utility theory, and a priori moral philosophizing, when they really need the maps of statistical learning theory, computational cognitive science, and evaluative psychology.
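For a sense of what one of those maps can look like in code, here is a minimal, hypothetical sketch (the option names, the data, and the training loop are all my own illustration, not anyone’s actual proposal): a tiny Bradley-Terry preference model that recovers a latent evaluation function from pairwise judgments, a toy version of “bottling up” an evaluation-forming process with statistical learning.

```python
import math
import random

# Toy Bradley-Terry preference learner: infer a scalar "evaluation" score
# for each option from pairwise judgments. Purely illustrative data.
random.seed(0)

options = ["walk", "read", "argue_on_forums"]
# (winner, loser) pairs, consistent with read > walk > argue_on_forums
judgments = ([("read", "walk")] * 6
             + [("walk", "argue_on_forums")] * 6
             + [("read", "argue_on_forums")] * 4)

scores = {o: 0.0 for o in options}
lr = 0.1

for _ in range(500):  # stochastic gradient ascent on the log-likelihood
    winner, loser = random.choice(judgments)
    # Bradley-Terry: P(winner beats loser) = sigmoid(score_winner - score_loser)
    p = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
    scores[winner] += lr * (1.0 - p)  # d log P / d score_winner
    scores[loser] -= lr * (1.0 - p)  # -d log P / d score_loser

print(sorted(scores.items(), key=lambda kv: -kv[1]))
# Prints the options ordered read > walk > argue_on_forums: a (very) small
# example of extracting a latent evaluation function from behavioral data.
```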
Sometimes when things look really improbable, it’s because you’ve got a very biased prior.
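To put a number on that, here is a minimal worked example (Bayes’ rule in odds form; the numbers are invented for illustration): even evidence that favors a hypothesis 20:1 leaves it looking improbable if the prior against it was steep enough.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# Invented numbers, for illustration only.

prior_h = 0.001              # a heavily biased prior against hypothesis H
likelihood_ratio = 20.0      # the evidence favors H twenty to one

prior_odds = prior_h / (1.0 - prior_h)
posterior_odds = prior_odds * likelihood_ratio
posterior_h = posterior_odds / (1.0 + posterior_odds)

print(f"P(H | evidence) = {posterior_h:.3f}")  # ~0.020
# H still "looks really improbable" after a 20x update in its favor; the
# remaining improbability comes from the prior, not from the evidence.
```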