In much the same way that we need to prompt-engineer language models to get them to handle our questions correctly, we know that humans respond differently to “This intervention will save 200 lives” and “This intervention will result in 400 deaths”, even though both describe the same outcome for a population of 600 people.
Is there any pre-existing writing that touches on this?
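For concreteness, here is a minimal sketch of the kind of experiment I have in mind (assuming the OpenAI Python SDK; the client setup and model name are placeholders, and any chat-completion API would do). It poses both frames of the classic Tversky & Kahneman disease problem and compares the model's answers:

```python
# Sketch: test whether a language model shows the human framing effect
# by asking the same question in a gain frame and a loss frame.
# Assumes the OpenAI Python SDK (>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

COMMON = (
    "Imagine 600 people are threatened by a disease. "
    "Two programs are proposed. Answer with 'A' or 'B' only.\n"
)

FRAMES = {
    "gain": COMMON
    + "Program A: 200 people will be saved.\n"
    + "Program B: a 1/3 chance all 600 are saved, a 2/3 chance no one is saved.",
    "loss": COMMON
    + "Program A: 400 people will die.\n"
    + "Program B: a 1/3 chance nobody dies, a 2/3 chance all 600 die.",
}

for name, prompt in FRAMES.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whatever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    print(name, "->", reply.choices[0].message.content.strip())
```

If the model mirrors the human pattern, it will tend to pick the sure option (A) under the gain frame and the gamble (B) under the loss frame, even though the two frames are equivalent.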
Plato’s dialogues are all examples of Socrates’ prompt engineering. See also the old political wisdom that if you can set the agenda, it doesn’t matter how people vote.
Framing effects? The wiki page might be a good starting point; it lists some materials, and you can also look at the studies cited here.
See also a deliberately malicious example here: