The purpose of this post is to communicate, not to persuade. It may be that we want to bite the bullet of the strongest form of robustness to scale, and build an AGI that is simply not robust to scale, but if we do, we should at least realize that we are doing that.
While this may not be what is happening here, in general I think that when an author opens with “I acknowledge the following criticisms of my argument,” they make it unfairly socially unacceptable to respond with “Yeah, but the criticisms.” I think this is a bad discourse norm, and people should give that response more often.
I was imagining someone with a bright yet flawed idea reading this post, realizing their idea doesn’t scale, and ending up scrapping something redeemable that people with more expertise could have steelmanned. I’m not presuming that Scott was advocating a “totally scalable or STFU” criterion, but I wanted to put that consideration out there.