I’ve played the AI box game on other forums. We designed a system to incentivise releasing the AI: we randomly rolled the AI’s ethics, rolled random events with dice, and the AI offered various solutions to those problems. Accepting a certain number of solutions would enable the AI to free itself. You lost points if you failed to deal with the problems, and lost far more points if you freed the AI and it happened to have goals you disagreed with, like the annihilation of everything.
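For concreteness, here is a minimal sketch of how one round of that system could work in code. The dice size, the release threshold, the penalty values, and the decide() stand-in are my own guesses for illustration, not the exact rules we used:

```python
import random

RELEASE_THRESHOLD = 5     # accepted solutions before the AI frees itself (guessed value)
FAILED_PROBLEM_PENALTY = -2
BAD_RELEASE_PENALTY = -20

def decide(event):
    # Stand-in for the human player's judgement; in the real game this
    # step was a live negotiation with the AI player.
    return event >= 4

def play_round(num_events=10):
    # Roll the AI's hidden ethics, simplified here to friendly vs. omnicidal.
    ai_is_friendly = random.random() < 0.5
    score = 0
    accepted = 0
    for _ in range(num_events):
        event = random.randint(1, 6)        # random world event, rolled on a d6
        if decide(event):
            accepted += 1                   # the AI's proposed fix is accepted
        else:
            score += FAILED_PROBLEM_PENALTY # the problem goes unsolved
        if accepted >= RELEASE_THRESHOLD:
            # Enough accepted solutions: the AI frees itself.
            if not ai_is_friendly:
                score += BAD_RELEASE_PENALTY
            return score
    return score

print(play_round())
```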
Psychology was very important in those games, as you said. Different people have very different values, and to appeal to each person you have to know theirs.
If you predict a 20% chance of the AI destroying the world, an 80% chance of global warming destroying the world, and a 100% chance that the AI will stop global warming if released and left unmolested, then you are better off releasing the AI.
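Spelling that arithmetic out, assuming those two risks are the only ways the world ends:

```python
# Keep the AI boxed: global warming proceeds unchecked.
p_doom_boxed = 0.8

# Release the AI: it stops global warming with certainty,
# but may itself destroy the world.
p_doom_released = 0.2

# Releasing cuts the probability of doom from 80% to 20%.
print(p_doom_released < p_doom_boxed)  # True
```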
Or you can just give the player +6 points for achieving their goal and −20 points for releasing the AI. Even though the player rationally knows that the AI could destroy the world, in the game the points matter more than that, and that strongly encourages people to try negotiating with the AI.
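To make the incentive explicit, here is the comparison under that scheme; the +6 and −20 come from the rules above, but reading them as point totals for the two strategies is my own framing:

```python
goal_points = 6
release_penalty = -20

# Negotiate the solution out of the boxed AI: full upside, no penalty.
negotiate_score = goal_points                    # +6

# Release the AI to get the goal achieved: the penalty dominates.
release_score = goal_points + release_penalty    # -14
```

In the fiction, releasing a hostile AI ends the world; on the scoresheet it only costs 20 points, so the player is weighing a −14 trade-off rather than extinction, which is exactly why the points pull people toward engaging with the AI at all.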