I intended it as something of an outreach tool, though probably for people who already care about AI risk, since I wouldn’t want it to serve as a reason for somebody to start working on this kind of capability more because they see it’s possible.
Mostly, I did it for epistemics/forecasting: I think it will be useful for the community to know how this particular kind of work is progressing, and since it spans disparate research areas I don’t think it’s being tracked by the research community by default.
Makes sense, thanks. I think the current version of the list is not a significant infohazard since the examples are well-known, but I agree it’s good to be cautious. (I tweeted about it to try to get more examples, but it didn’t get much uptake; happy to delete the tweet if you prefer.) Focusing on outreach to people who care about AI risk seems like a good idea; maybe it could be useful for nudging researchers who don’t work on AI safety because of long timelines to start working on it.
No need to delete the tweet. I agree the examples are not infohazards; they’re all publicly known. I just probably wouldn’t want somebody going to good ML researchers who are currently doing something that isn’t really capabilities work (e.g., applying ML to some other area) and telling them “look at this, AGI soon.”
Thanks! Here is a shorter url: rsi.thomaswoodside.com.