Interesting. If the same analysis could be done with ChatGPT, I’d be curious to hear how you’d frame the question; if it works there, I’d do it consistently.
Can you say more about how it causes harm? I’d like to find a way to reduce that harm, because there’s a lot of good stuff in this sort of analysis, but you’re right that there’s a tendency to use extremely spiky words. A favorite internet poster of mine has some really interesting takes on how it’s important to use soft language and not demand people agree, which folks on that subreddit are in fact pretty bad at doing. It’s hard to avoid it at times, though, when one is impassioned.
You can give ChatGPT the job posting and a brief description of Simon’s experiment, and then just ask it to provide critiques from a given perspective (e.g. “What are some potential moral problems with this plan?”)
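A minimal sketch of how one might frame that question as a single prompt, if you wanted to do it programmatically rather than in the chat UI. The placeholder strings, the `build_prompt` helper, and the perspective label are all illustrative, not anything from the thread:

```python
# Placeholders: paste the real job posting and a brief description
# of the experiment here before sending the prompt anywhere.
JOB_POSTING = "<paste the job posting here>"
EXPERIMENT_SUMMARY = "<brief description of Simon's experiment>"
PERSPECTIVE = "moral"  # could also be "legal", "practical", etc.

def build_prompt(posting: str, summary: str, perspective: str) -> str:
    """Assemble the context followed by the framing question."""
    return (
        "Here is a job posting:\n"
        f"{posting}\n\n"
        "Here is a brief description of the experiment:\n"
        f"{summary}\n\n"
        f"What are some potential {perspective} problems with this plan?"
    )

prompt = build_prompt(JOB_POSTING, EXPERIMENT_SUMMARY, PERSPECTIVE)
print(prompt)
```

The resulting string is what you would paste into ChatGPT (or send via an API client); swapping the perspective word lets you rerun the same analysis consistently.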
Ah, I see. Yeah, that’s solid and makes sense.