This doesn’t directly answer your questions, but since the OAA already requires global coordination and agreement to follow the plans spit out by the superintelligent AI, propagandizing people may not be necessary. Especially if we consider that by the time the OAA becomes possible, the economy and science will probably already be largely automated by CoEms and won’t need to involve motivated humans.
Then, the time-boundedness of the plan raises the chances that the plan won’t concern itself with changing people’s values and preferences as a side effect (which will be relevant for the ongoing work of shaping the constraints and desiderata for the next iteration of the plan). Some such interference with values will inevitably happen, though. That’s what Davidad considers when he writes, “A de-pessimizing OAA would effectively buy humanity some time, and freedom to experiment with less risk, for tackling the CEV-style alignment problem—which is harder than merely mitigating extinction risk.”