For (moral) free speech considerations, the question of whether the censor is a private or government entity is only a proxy. What we actually care about is whether the censor has enough power to actually suppress the ideas they’re censoring.
The example of SSC moderation is a poor guide for our intuitions here, because we should expect different answers to “is censorship OK here?” at different scales. It can simultaneously be fine to ban talking about X at your dinner table and a huge problem to ban it nationally.
If we were to plot venue size against the harm to society from the exercise of censorship power over that venue, I’d expect some kind of increasing curve. Twitter’s moderation policy definitely sits above SSC’s on that curve; it also sits well below, say, the Sedition Act.
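Stated as a formula, as a minimal sketch (the symbols H and s are illustrative labels I’m introducing, not anything beyond the comparison itself):

```latex
% s: a venue's reach (audience size); H(s): expected societal harm from
% censorship power exercised over a venue of reach s. Both are hypothetical
% labels for illustration. The claim is only that H is increasing:
\[
  H'(s) > 0
  \quad\Longrightarrow\quad
  H(s_{\mathrm{SSC}}) < H(s_{\mathrm{Twitter}}) < H(s_{\mathrm{Sedition\ Act}})
\]
% (using s_SSC < s_Twitter < s_Sedition Act, i.e. ordering the venues by reach)
```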
Also, the scale of the event we’re seeing isn’t limited to Twitter and Facebook: the alternative platform the faction tried to flee to has since been evicted by Google, Apple, and Amazon.
The strategy of “apply pressure to every available technology company until they boot your political opponents” is a symmetric weapon: it works just as well for bad intentions as for good ones.
Under this model (where “jailbreaking” amounts to reversing the direction of the fine-tuning), training the model to do things you don’t want and then “jailbreaking” it afterward would be a way to prevent those classes of behavior.