Completely understood here. It'd be different for OpenAI, and more different still for DeepMind. We'd have to tailor outreach. But I would like to try experimenting.
An institution could A/B test interventions like these, and it can talk to people more than once.
We can't take this for granted: when A tells B that B's views are inconsistent, the standard response (as far as I can tell) is for B to default in one direction (often heavily influenced by their status quo), make that direction their consistent view, and then double down every time they're pressed.
It’s possible that we have ~1 shot per person at convincing them.