Do you have plans to invite any particular people outside of SIAI to contribute?
Certainly!
What is your expectation that the academic community will seriously engage with a scholarly wiki? What is the relative value of a wiki article vs a journal article? How does this compare to relative cost?
The academic community generally will not usefully engage with AI risk issues unless they (1) hear the arguments and already accept (or are open to) their major premises, or (2) come around to caring by way of personal conversations and relationships. Individual scholarly articles, whether in journals or in a wiki, don’t generally persuade people to care. Everyone has their own list of objections to the basic arguments, and you can’t answer all of them in a single article. (But again, a wiki format is better for this.)
The main value of journal papers or wiki articles on AI risk is not for people who have strong counter-intuitions (e.g. “more intelligence implies more benevolence,” “machines can’t be smarter than humans”). Instead, they are mostly of value to people who already accept the premises of the arguments but hadn’t previously noticed their implications, or who are open enough to the ideas that, with enough clear explanation, they can grok them.
As long as you’re not picky about which journal you get into, the cost of a journal article isn’t much more than that of a good scholarly wiki article. Yes, you have to do more revisions, but in most cases you can ignore the suggested revisions you don’t want to make and just make the ones you do. (Whaddyaknow? Peer review comments are often helpful.) A journal article has some special credibility value in having gone through peer review, while a wiki article has some special usefulness value in being linked directly to articles that explain other parts of the landscape.
A journal article won’t necessarily get read more than a wiki article, though. More people read Bostrom’s preprints on his website than read the same articles in the actual journals. One exception is that journal articles sometimes get picked up by the popular media, whereas the media won’t write a story about a wiki article. But as I said in the OP, it won’t be that expensive to convert material from good scholarly wiki articles into journal articles and vice versa, so we can have both without much extra expense.
I’m not sure I answered your question, though: feel free to ask follow-up questions.
Could this help SIAI recruit FAI researchers?
Heck yes. As near as I can tell, what happens today is this:
An ubermath gets interested enough to devote a dozen or more hours to reading the relevant papers and blog posts, and ends up pretty interested.
In most cases, the ubermath does nothing and contacts nobody, except maybe asking some friends what they think and mostly keeping these crazy-sounding ideas at arm’s length. Or they ask an AI expert what they think of this stuff and the AI expert sends them back a critique of Kurzweil. (Yes, this has happened!)
In some cases, the ubermath hangs out on LW and occasionally comments but doesn’t make direct contact or show us that they are an ubermath.
In other cases, the ubermath makes contact (or, we discover their ubermathness by accident while talking about other subjects), and this leads to personal conversations with us in which the ubermath explains which parts of the picture they got from The Sequences don’t make sense, and we say “Yes, you’re right, and very perceptive. The picture you have doesn’t make sense, because it’s missing these 4 pieces that are among the 55 pieces that have never been written up. Sorry about that. But here, let’s talk it through.” And then the ubermath is largely persuaded and starts looking into decision theory, or thinking about strategy, or starts donating, or collaborates with us and occasionally thinks about whether they’d like to do FAI research for SI some day, when SI can afford the ubermath.
A scholarly AI risk wiki can help ubermaths (and non-ubermaths like myself) to (1) understand our picture of AI risk better, more quickly, more cheaply, and in a way that requires less personal investment from SI, (2) see that there is enough serious thought going into these issues that maybe they should take it seriously and contact us, (3) see where the bleeding edges of research are and where they might contribute, and more.
BTW, an easy way to score a conversation with SI staff is to write one of us an email that simply says “Hi, my name is ___, I got a medal in the IMO or scored well on the Putnam, and I’m starting to think seriously about AI risk.”
We currently spend a lot of time in conversation with promising people, in part because one really can’t get a very good idea of our current situation via the articles and blog posts that currently exist.
(These opinions are my own and may or may not represent those of other SI staffers, for example people who may or may not be named Eliezer Yudkowsky.)
Would it be useful for SIAI to run a math competition to identify ubermaths, or to try contacting people who have done well in existing competitions?
Yes. It’s on our to-do list to reach out to such people, and also to look into sponsoring these competitions, but we haven’t had time to do those things yet.
Who? People at FHI? Other AGI researchers?
(And thanks for good answers to the other questions.)
FHI researchers, AGI researchers, other domain experts, etc.