Perhaps, but I think my point stands. CEV will use a veil of ignorance, or it won’t be coherent. It may be incoherent with the veil as well, but I doubt it. Real human beings look after number one much more than they’d ever care to admit, and won’t take stupid risks when choosing under the veil.
One very intriguing thought about an AI is that it could make the Rawlsian choice a real one. Create a simulated society to the choosers’ preferences, and then beam them in at random...
Isn’t there substantial disagreement over whether the veil of ignorance is sufficient or necessary to justify a moral theory?
Edit: Or just read what Nornagest said
Even with a veil of ignorance, people won’t make the same choices—people fall in different places on the risk aversion/reward-seeking spectrum.