I don’t have that information, but being published in a peer-reviewed venue is not a prerequisite for academia to recognize a result. See this example. Getting a paper through peer review is very costly in time and effort (especially at a highly reputable journal, which might accept less than 10% of submissions, and that’s from academics who know what to do to maximize their chances of acceptance), and it may not buy much additional attention from the people who might be able to build upon the work. I tried submitting a paper to an academic crypto conference (for a crypto primitive, not b-money), so I have some experience with this myself. Satoshi and Eliezer aren’t the only very smart people who haven’t tried very hard to publish in academia (i.e., hard enough to get published in a reputable journal). Just from people I know, there’s also Gary Drescher, Paul Christiano (for his approval-directed agent ideas), and other researchers at MIRI, who seem to publish most of their results as technical reports. I guess that also includes you, who haven’t followed your own advice?
Obviously whole fields of academia going in wrong directions represents a huge societal waste, and it would be great to have a solution to fix that; I’m just not sure what the solution is. (Note that among other things, you have to get academics to admit at least implicitly that for decades they’ve been wasting time and other people’s money on wrong approaches.) I haven’t been too concerned about this for decision theory since I’m not sure that further progress in decision theory is really crucial for (or even contributes positively to) AI alignment, but I have been thinking about how to get more academics to switch their focus from ML and AI capability in general to AI alignment, especially to AI alignment ideas that we think are promising and neglected (like Paul’s ideas). So far my idea is to try to influence funders (Open Phil, FLI, other philanthropists) to direct their grants to those specific ideas.
I guess that also includes you, who haven’t followed your own advice?
Yeah. When I was working actively on MIRI math, I picked up the idea that getting stuff peer reviewed is nice but not necessary for progress. My opinion changed in the last couple years, when I was already mostly away. The strategy I’d suggest now is to try to join the academic conversation on their terms, as I did at the Cambridge conference. Ideally, getting publications in journals should be part of that.
I haven’t thought much about talking to funders, good to hear you’re pursuing that.
The strategy I’d suggest now is to try to join the academic conversation on their terms, as I did at the Cambridge conference.
MIRI seems to be doing more of this as well, but I’m not seeing any noticeable results so far. Judging by citations in Google Scholar, in the two years since that conference, it doesn’t look like any academics have picked up on the ideas that you and the MIRI people presented there, or made further progress on them?
One other thing that worries me is, unless we can precisely diagnose what is causing academia to be unable to take the “outsider steps”, it seems dangerous to make ourselves more like academia. What if that causes us to lose that ability ourselves?
I haven’t thought much about talking to funders, good to hear you’re pursuing that.
Well, I’m doing what I can, but I’m not sure I’m the best person for this job: I’m not very social/outgoing, and my opportunities for travel are limited, so it’s hard to meet those funders and build up relationships.
One other thing that worries me is, unless we can precisely diagnose what is causing academia to be unable to take the “outsider steps”, it seems dangerous to make ourselves more like academia. What if that causes us to lose that ability ourselves?
It seems that academic motivations can be “value” (e.g., discovering something of utility) or “momentum” (sort of like a beauty contest; more applicable in abstract areas where utility is not obvious). A possible third is immediate enjoyment, which probably contributed to millennia of number theory before it became useful.
Doing novel, non-incremental things for non-value reasons (like valuing AI safety) is likely to be difficult until enough acceptability is built up for momentum-type motivations. (This also suggests explicitly building up momentum as an intervention.)
My opinion changed in the last couple years, when I was already mostly away.
What triggered this?
Did you mean “likely to be difficult”?
thanks, fixed!