I don’t remember seeing it, and based on the title I probably wouldn’t have clicked. I’m not sure exactly what’s wrong with the title, but at first glance it reads like a meaningless string of words (did you use an LLM to translate or create the title?). Some titles that feel more interesting/meaningful:
Why We Resist Improvement
Why Making Things Better Often Sucks
Resisting Improvement
Stepping Away From the Local Maximum
As for the article itself, it feels strangely hard to read to me, even if I can’t explicitly identify it as LLM-generated. My attention just keeps slipping away while I try to read it. This is a feeling I often get from text written by LLMs, especially text not generated at my behest. Nothing in this post, by contrast, gave me that same feeling. So I think it’s probably still worth hand-translating things you want people to read; it might be interesting to post a manual translation of the same article in a month or so and see how it does.
There are probably still plenty of ways you can use LLMs to speed up or enhance the process, e.g.:
- Have it generate 5 different translations of a sentence, then mix and match your favorite parts of each.
- Do a rough translation yourself, then ask the LLM to point out places where it’s awkward or the grammar is incorrect.
- Ask the LLM about the connotations of specific word choices.
The idea itself I found somewhat interesting, and with the right framing I could probably find it more interesting/useful. I agree that 10-20 is a reasonable expectation based on the ideas alone.
Reference class forecasting is correct exactly when the only thing you know about something is that it belongs to that reference class.
In that sense it can supply a reasonable prior, but it does not excuse you from updating on all the additional information you have about the thing.
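To make that concrete, here’s a minimal sketch (with made-up numbers) of what “reference class as prior, then update” looks like: the base rate from the reference class sets the prior, and project-specific evidence shifts it via Bayes’ rule.

```python
def bayes_update(prior, likelihood_ratio):
    """Update P(hypothesis) given evidence with the stated likelihood ratio
    P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    odds = prior / (1 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Reference class (hypothetical): say 30% of similar posts do well.
prior = 0.30

# Additional information about *this* post (hypothetical): some feature
# you judge twice as likely to appear in posts that do well.
posterior = bayes_update(prior, 2.0)
print(round(posterior, 3))  # ≈ 0.462 — the prior moves, but doesn't vanish
```

The point is just that the reference class gives you the starting odds; everything else you know about the specific case is a likelihood ratio you still have to multiply in.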