Promoted to curated: I think the question of whether we are in an AI overhang is pretty obviously relevant to a lot of thinking about AI Risk, and this post covers the topic quite well. I particularly liked the use of a lot of small Fermi estimates, and how the post covered a lot of ground in relatively little writing.
I also really appreciated the discussion in the comments, and felt that Gwern’s comment on AI development strategies in particular helped me build a much better map of the modern ML space (though I wouldn’t want it to be interpreted as a complete map of the space, just a kind of foothold that helped me get a better grasp on thinking about this).
Most of my immediate critiques are formatting-related. I feel like the list section could have used some more clarity, maybe by bolding the name of each bullet-point consideration, but it flowed pretty well as is. I was also a bit concerned that there might be some infohazard-like risks from promoting the idea of being in an AI overhang too much, but after talking to a few more people and thinking about it for a bit, I decided that I don’t think this post adds much additional risk (e.g. by encouraging AI companies to act on being in an overhang and try to drastically scale up models without concern for safety).
Thanks for the feedback! I’ve cleaned up the constraints section a bit, though it’s still less coherent than the first section.
Out of curiosity, what was it that convinced you this isn’t an infohazard-like risk?
Some mixture of:
I think it’s pretty valuable to have open conversation about being in an overhang, and I think on the margin it will make those worlds go better by improving coordination. My current sense is that the perspective presented in this post is already reasonably common among people in ML, so marginally reducing how many people believe it won’t make much of a difference, whereas having good writeups that summarize the arguments seems to have a better chance of creating the kind of common knowledge that allows people to coordinate better here.
This post, more so than other posts in its reference class, emphasizes a bunch of the safety concerns, whereas I would expect the next post to replace it not to do that very much.
Curation in particular mostly sends the post out to more people who are concerned with safety. This post found a lot of traction on HN and other places, so in some sense the cat is out of the bag: if it was harmful, the curation decision won’t change that very much, and it seems like it would unnecessarily hinder the people most concerned about safety if we didn’t curate it (since the considerations do also seem quite relevant to safety work).