I have a counterpoint, which is that I often see low-effort posts or comments (less often on LessWrong) where I think: “I wish this person had had a discussion with Claude about this before posting.”
I don’t like it when people post verbatim what the models output, for all the reasons you mention, but I do think that having a debate about your ideas with a model can help clarify them. You need to actually manage to have a debate, though, not just get the model to sycophantically agree with you. Try tactics like starting out pretending to hold the opposite of the view you actually hold, then switching. Ask for pros and cons and for insightful critiques, and ask it to avoid padding phrases, social niceties, and flattery.
Then rewrite, in your own words, your updated thought after this process, and it’ll probably be at least a bit improved.
So, this isn’t quite disagreeing with the point here exactly. I guess my summary is, ‘Use LLMs thoughtfully and deliberately, not sloppily and carelessly.’
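The tactics above can be sketched as a small prompt-building helper. This is a minimal illustration only; the function name, the prompt wording, and the two-pass workflow in the docstring are my own assumptions, not a tested recipe:

```python
# Sketch of a "debate, don't agree" prompt, following the tactics above:
# argue the opposite side first, then switch, and ask the model to skip
# flattery and padding. The exact wording is illustrative.

ANTI_SYCOPHANCY_RULES = (
    "Do not flatter me or pad your answer with social niceties. "
    "Give direct pros and cons and insightful critiques of the position."
)

def debate_prompt(thesis: str, argue_for_opposite: bool = True) -> str:
    """Build a prompt asking the model to argue one side of `thesis`.

    Suggested use: run once with argue_for_opposite=True (the model
    attacks your real view), then again with False, compare the two
    sets of arguments, and rewrite your updated thought in your own words.
    """
    side = "against" if argue_for_opposite else "for"
    return (
        f"Argue {side} the following position as strongly as you can: "
        f"{thesis}\n{ANTI_SYCOPHANCY_RULES}"
    )
```

Sending the returned string to whichever chat model you use is left out here, since that part is API-specific.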
That reminds me of a remark attributed to Dijkstra. I forget the exact wording, but it was to the effect that we should make our errors thoughtfully and deliberately, not sloppily and carelessly.
I’d wager anyone with the ability to do this, to entertain views contrary to one’s own, probably writes comments just fine with the unaided mind.
Taking the opposite point of view is a skill you can apply generally; writing good comments on a topic, though, often depends on your own knowledge of that topic.
Recently, a friend wrote something on Facebook about how her doctors didn’t give her sleeping pills immediately but only after running tests, which shows that doctors aren’t as keen on giving people drugs as is commonly argued.
I then had the intuition/cached thought that German doctors simply face different incentives when it comes to prescribing drugs, and asked ChatGPT how the incentives differ. As a result, I got a lot of details that allowed me to write a higher-quality post.
After having that discussion, I even feel like it might be good to write a LessWrong post about the incentive differences, because it touches on the Hansonian claim that health-care interventions in the US could be halved without damaging health outcomes.
When Hanson’s “extreme perspective on health” comes up, I’ve heard people say things like: nobody would suggest reducing the use of clearly useful medicines like antibiotics. In Germany, we use less than half as many antibiotics and have lower infectious-disease mortality than the US, so it’s not far-fetched.
The debate between Scott Alexander and Robin Hanson largely ignored how the incentives for drug use are set, as those details are technical and bureaucratic in a way that keeps them outside the discourse. ChatGPT, however, is quite capable of giving me all those boring bureaucratic details.
Searching for how those bureaucratic details work isn’t very straightforward without LLMs.
Yeah, probably true.
I think there may be some narrow band of potential where going through the process of considering the situation from both sides, and having a conversation as ‘the other side’, actually boosts someone somewhat. Optimistically, practicing this a few times may help someone learn to do it in their own mind in the future.
I struggle with this, and need to attend a prompting bootcamp.