Very clear and simple. It is tempting to dismiss this as not significant or novel, but there is a place for presenting basic things well.
And it’s positively framed. We could all use a little hope right now.
There was noise in my model of what AI safety research is supposed to do, and I had learned to ignore it. It surprises me how big a difference it makes, how comparatively calm and settled I feel, to have the typical success narratives in front of me, disambiguated from each other. There's much more confusion to tackle, but it seems more manageable.
The next time I stumble upon an AI discussion, I expect I will look up this post for a refresher and to organize my thoughts on what model each person is using.
Assuming what Wei_Dai wrote is accurate, of course. I can tell that this post is highly approachable, but I'm not in a place to assess whether the approach is *correct*; David Krueger disputed a few cells on the AI forum. Nor am I certain whether it's a useful building block for communication for anyone already in the safety field. I see a few pingback posts and 105 votes. Would it see use in the office, or is it strictly inferior to other models for communicating about AI futures?
I could attempt reading some AI papers and posts and judge whether this post helps me contextualize the research in a meaningful way. If it helps, that would be evidence that it serves an introductory purpose well. Humans like their stories.
**I am so tired I don't want to run the experiment.**
Improve:
- Clarify what CAIS stands for; the acronym is not expanded anywhere on the page.
- Could use some ethos. Since this is a widely accessible post, it makes sense to establish who the author is and why anyone should trust his otherwise opaque assessments.
- Flesh out the description of autonomy.
- Remove parentheticals asking for comments
- Survey which orgs/researchers are considering which scenarios. Including this information gives readers a next action to take if they wish to engage further. As Donald Hobson pointed out, the most common category of work might be "technical work [that] could contribute to most success stories"; that would also be important information worth knowing.
- Could follow up with a similarly basic overview of semi-success stories. I don't recommend doing the same for failure stories; there are too many, and it would be a major bummer.