Makes sense, thanks. I think the current version of the list is not a significant infohazard since the examples are well-known, but I agree it's good to be cautious. (I tweeted about it to try to get more examples, but it didn't get much uptake; happy to delete the tweet if you prefer.) Focusing on outreach to people who care about AI risk seems like a good idea. It could be useful for nudging researchers who don't work on AI safety because of long timelines to start working on it.
No need to delete the tweet. I agree the examples are not info hazards; they're all publicly known. I just probably wouldn't want somebody going to good ML researchers who are currently doing something that isn't really capabilities work (e.g., application of ML to some other area) and telling them "look at this, AGI soon."