I’ll probably get disagree points, but I wanted to share my reaction: I honestly don’t mind the AI’s output. I read it all and think it’s just an elaboration of what you said. The only problem I noticed is that it’s too long.
Then again, I’m not an amazing writer, and my critical eye isn’t great when it comes to style. I’ll admit I rarely use assistance, because I have a tight set of points I want to include, and explaining them all to the AI is almost the same effort as writing the post itself.
Awesome ideas! These are some of the things missing for LLMs to have economic impact. Companies expected them to just automate certain jobs, but that’s an all-or-nothing approach that has never worked historically (until it eventually does, but we’re not there yet).
One idea I had while reading Scott Aaronson’s post on his reading burden (https://scottaaronson.blog/?p=8217) is that people with interesting opinions and somewhat of a public presence have a TON of reading to do, not just to keep up with current events, but to observe people’s reactions and track how ideas trend in response to those events. Perhaps LLMs could help with this:
Give the model a collection of your writings and latest opinions, then have it scour online posts and their comments from your favorite sources. Each post plus its comment section is one input, so we’d need longer context. The model looks for opportunities to share your viewpoint and reports whether your viewpoint has already been shared or refuted, or whether there are points your writings haven’t considered. If nothing, you’ve saved yourself the effort! If something, it highlights the important bits.
This might be too many LLM calls depending on the sources, so a retrieval stage is obviously in order. Or that part can be done manually; we seem pretty good at finding a handful of interesting-sounding articles, and we do it anyway while procrastinating.
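Just to make the shape of the idea concrete, here’s a rough sketch of that triage loop. Everything in it is hypothetical: `call_llm` is a stub standing in for whatever model API you’d actually use, and the prompt wording, the `NOTHING` sentinel, and the `Report` structure are all my own invention, not anything from an existing tool.

```python
from dataclasses import dataclass

@dataclass
class Report:
    relevant: bool   # is this thread worth your attention?
    summary: str     # the "important bits", if any

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call. A real implementation
    # would send `prompt` to an LLM endpoint and return its reply.
    return "NOTHING"

def triage(my_writings: str, thread: str) -> Report:
    """Ask the model whether a post + comment section warrants a reply."""
    prompt = (
        "Here are my writings and latest opinions:\n"
        f"{my_writings}\n\n"
        "Here is a post and its comment section:\n"
        f"{thread}\n\n"
        "Has my viewpoint already been shared or refuted there, or does "
        "the thread raise points my writings haven't considered? If none "
        "of these, answer exactly NOTHING; otherwise summarize the "
        "important bits."
    )
    answer = call_llm(prompt)
    if answer.strip() == "NOTHING":
        return Report(relevant=False, summary="")
    return Report(relevant=True, summary=answer)

# One call per thread, so cost scales with the number of threads;
# a retrieval step (or manual skimming) would filter candidates first.
threads = ["Post A plus its comments...", "Post B plus its comments..."]
reports = [triage("My essays and opinions...", t) for t in threads]
worth_reading = [r for r in reports if r.relevant]
```

With the stub always answering NOTHING, `worth_reading` comes back empty; the point is only that each post-plus-comments thread is a single long-context input, and the filter runs before you ever read anything yourself.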