The organization section touches on something that concerns me. Developing a new decision theory sounds like it requires more mathematical talent than SI yet has available. I’ve said before that hiring some world-class mathematicians for a year seems likely to do at least one of three things: get said geniuses interested in the problem, produce real progress, or produce a proof that SI’s current approach can’t work. In other words, it seems like the best form of accountability we can hope for, given the theoretical nature of the work.
Now Eliezer is definitely looking for people who might help. For instance, the latest chapter of “Harry Potter and the Methods of Rationality” mentioned:
a minicamp for 20 mathematically talented youths...Most focus will be on technical aspects of rationality (probability theory, decision theory) but also with some teaching of the same mental skills in the other Minicamps.
It also says,
Several instructors of International Olympiad level have already volunteered.
So they technically have something already. And if there exists a high-school student who can help with the problem, or learn to do so, that person seems relatively likely to enjoy HP:MoR. But I worry that Eliezer is thinking too much in terms of his own life story here, and has not had to defend his approach enough.
A reply asked:
On what measure of difficulty are you basing this? We have some guys around here doing a pretty good job.
I phrased that with too much certainty. While I have little, if any, reason to see fully-reflective decision theory as an easier task than self-consistent infinite set theory, I also have no clear reason to think the contrary.
But I’m trying to find the worst scenarios we could plan for. I can think of two broad ways that Eliezer’s current plan could be horribly misguided:
1. if it works well enough to help someone produce an uFAI, but not well enough to stop this in time;
2. if some part of it, such as a fully-reflective decision theory that humans can understand, is mathematically impossible, and SI never realizes this.
Now SI technically seems aware of both problems. The fact that Eliezer went out of his way to help critics understand Löb’s Theorem and that he keeps mentioning said theorem seems like a good sign. But should I believe that SI is doing enough to address #2? Why?
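For reference, here is a compact statement of the theorem, in my own notation: write T for whatever formal system the agent reasons in (assumed to include basic arithmetic), and $\Box P$ for “P is provable in T”.

If $T \vdash \Box P \rightarrow P$, then $T \vdash P$; equivalently, $T \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P$.

Informally, whenever the system proves the reflection instance “if P is provable, then P”, it already proves P itself; so no consistent system can endorse its own soundness across the board, which is roughly why the theorem looks like an obstacle to the kind of self-trusting, fully-reflective agent at issue in #2.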