I suspect LLMs could write blogs on par with most humans if we trained and scaffolded them appropriately, but is that really what we want from LLMs?
Claude 3.7 might not write outstanding blogs, but he can help explain why not:
The fundamental mismatch between LLMs and blogging isn’t primarily about capabilities, but about design and motivation:
Current LLMs are RLHF-tuned to be balanced, helpful assistants—essentially the opposite of good bloggers. Assistants hedge, acknowledge all perspectives, and avoid strong stances. Good bloggers take intellectual risks, have distinctive voices, and present unique viewpoints.
Humans blog for reasons LLMs simply don’t have:
Building intellectual reputation in a community
Working through personal confusions
Creative self-expression
The social reward of changing minds
The metrics we use to evaluate LLMs (helpfulness, accuracy, harmlessness) don’t capture what makes blogs compelling (novelty, intellectual risk-taking, personality).
Simply making LLMs more capable won’t bridge this gap. We’d need systems with fundamentally different optimization targets—ones trained to be interesting rather than helpful, to develop consistent viewpoints rather than being balanced, and to prioritize novel insights over comprehensive coverage.
Strongly subsidizing the costs of raising children (and not just in financial terms) would likely produce more pro-social results than a large one-time lump-sum payment. However, that won't do much for folks who skip having children because they think humanity is doomed soon anyway.