It seems like the post is implicitly referring to the next big paper on SAEs from one of these labs, similar in newsworthiness to the last Anthropic paper. A big paper won’t be a negative result or a much smaller downstream application, and a big paper would compare its method against baselines if possible, which keeps 165% within the ballpark.
I still agree with your comment, especially the recommendation for a time-based prediction (as I explained in my other comment here).
Thank you for your alignment work :)
Once we get superintelligence, we might get every other technology that the laws of physics allow, even if we aren’t that “close” to these other technologies.
Maybe they believe there is a ≈38% chance of superintelligence by 2039.
PS: Your comment may have caused it to drop to 38%. :)