I’m quite surprised by the lack of discussion on this paper. This is probably one of the most significant papers on AGI I’ve seen, as it outlines a concrete, practical path to its implementation by one of the most important researchers in the field.
There is not a lot of discussion about the paper here on LessWrong yet, but there are a dozen or so comments about it on OpenReview: https://openreview.net/forum?id=BZ5a1r-kVsf
I agree. I just shared the post in a couple of AI safety Slack and Discord channels to try to get it more visibility.
It probably would have gotten more engagement if someone else (e.g., Gwern) had posted it. I’m a low-karma/unpopular account, so few of my posts get seen unless people go looking for new posts.
I rarely read new posts (I read particular posts on things I’m interested in and have alerts for some posters), so it’s not that surprising, I guess? I wouldn’t have read this post myself had someone analogous to me posted it.
Maybe there’s a way to increase the discoverability of posts from low-karma/unpopular users?
I see your post on the frontpage when I scroll down deep enough in Recent Discussion. We just have to write comments here often to keep it here!
Bumping is back!