I, a non-programmer, have a question my programmer friends asked me: why can’t they find any of Eliezer’s papers with hard math in them at all? According to them, it’s all words words words. Excellent words, to be sure, but they’d like to see some hard equations, code, and the like — stuff that would qualify him as a researcher rather than a “mere” philosopher. What should I link them to?
AFAIK, nothing of the kind is publicly available. The closest thing to it is probably his Intuitive Explanation of Bayes’ Theorem; however, Bayes’ Theorem is high-school math. (His Cartoon Guide to Löb’s Theorem might also be relevant, although they may think it’s just more words.) Two relevant quotes by Eliezer:
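For readers who want to judge the difficulty for themselves, the two theorems mentioned above are each one line (the point being that the explanations, not the math, are the substance of those posts):

```latex
% Bayes' Theorem: the posterior probability of A given evidence B.
P(A \mid B) \;=\; \frac{P(B \mid A)\, P(A)}{P(B)}

% Löb's Theorem: if a theory proves "if P is provable then P",
% then it proves P outright (\Box denotes provability).
\Box(\Box P \rightarrow P) \;\rightarrow\; \Box P
```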
On some gut level I’m also just embarrassed by the number of compliments I get for my math ability (because I’m a good explainer and can make math things that I do understand seem obvious to other people) as compared to the actual amount of advanced math knowledge that I have (practically none by any real mathematician’s standard).
My current sense of the problems of self-modifying decision theory is that it won’t end up being Deep Math, nothing like the proof of Fermat’s Last Theorem—that 95% of the progress-stopping difficulty will be in figuring out which theorem is true and worth proving, not the proof. (Robin Hanson spends a lot of time usefully discussing which activities are most prestigious in academia, and it would be a Hansonian observation, even though he didn’t say it AFAIK, that complicated proofs are prestigious but it’s much more important to figure out which theorem to prove.)
The paper on TDT has some words that mean math, but the hard parts are mostly not done.
Eliezer is deliberately working on developing the basics of FAI theory rather than producing code, but even so, either he spends little time writing it down or he is not making much progress.
The SI folk say that they are deliberately not releasing the work they’ve done that’s directly related to AGI. Doing so would speed up the development of an AGI without necessarily speeding up the development of an FAI, and therefore increase existential risk.
Source for both
Viewtifully phrased...
ETA: Retracted the last point, about SI withholding AGI-related work — I couldn’t find my source for it.