The FHI already has a private wiki, run by me (with access granted to a few non-FHI people). It hasn’t been a great success. If we do a public wiki, it’s absolutely essential that we get people involved who know how to run a wiki, keep people engaged, and keep it up to date (or else it’s just embarrassing). After the first flush of interest, I’m not confident we have enough people with free time to sustain it.
Would a subsection of an existing major wiki be a better way to go?
You showed me that wiki once. The problem, of course, is that there isn’t enough budget invested in it to make it grow and keep it active. We would only create this scholarly AI risk wiki if we had the funds required to make it worthwhile.
Are you confident you can translate budget into sustained wiki activity?
Not 100%, obviously. But most of the work developing the wiki would be paid work, if that’s what you mean by “activity.”
Well, as long as it’s well curated and maintained, I suppose it could work… But why not work on making the Less Wrong wiki better? That comes attached to the website already.
Anyway, I’m not sure a new wiki has much of an advantage over “a list of recent AI risk papers + links to YouTube videos + a somewhat more frequently updated Less Wrong wiki” for researchers—at least, not enough advantage to justify the costs. A few well-maintained pages (“AI risks”, “friendly AI”, “CEV”, “counterarguments”, “various models”), no more than a dozen at most, that summarise the core arguments with links to the more advanced material, should be enough for what we’d want, I feel.
You might be right. I do have three people right now improving the LW wiki and adding all the pages listed in the OP that aren’t already there.
I would guess that the primary factors related to whether wikis succeed or fail don’t have much to do with whether they are about AI risk or some other topic. So, perhaps beware of extrapolating too much from the FHI wiki data point.
As I said: it’s absolutely essential that we get people involved who know how to run a wiki, keep people involved, and keep it up to date.
Unfortunately, I’m not one of those people.