We, the country of X, are about to hold a citizens congress on law Y, in which 100 people selected at random will be brought together for two days every two weeks to discuss law Y with experts, interested parties, and one another. After N such meetings, the citizens congress will use approval voting to select one of the K proposed versions of this law.
Given that none of the voters will ever see the results of this query, how do you predict that the votes will be distributed.”
As a concrete example of this type of question, in 2016 Ireland held a citizens congress on abortion. Thus, in 2015, we might ask:
The Utility function of the AI would be a simple measure of “How accurately did it predict the spread of votes?” with a perfect score at zero, and then some penalty for every vote predicted incorrectly.
Usefulness:
While the abortion question in Ireland is already completed, this template feels like it should work for other well defined laws, or major decisions—for example gun control in the US, or even boring financial things like Capital gains tax in NZ. Having an AI which will correctly reflect what the collective of a society would think given a sufficient time and focus to deliberate seems like a good proxy for “What would a wise person do”. It doesn’t give us anything superhuman, and I make no claims that “society” is always correct or righteous, but that’s okay.
In particular, this seems like an effective way of testing if an AI has well calibrated “Human like Morality”- even if we don’t intend to use the AI’s results, for the purposes of testing an AI, this seems like a reasonable experiment. I suspect that “Citizens congress” is a more appropriate tool for dealing with questions of Morality (“Should euthanasia be legal”), as opposed to technocratic questions (“Should we raise the tax on income bracket 5 by 1.2%”).
Safety: It is trying to predict the law that we were going to pass anyway, if given sufficient time.
I’ll admit- mixing an AI up with lawmaking seems to be a bit suspect, the above question seems more appropriate for calibration purposes rather than actual use… but I still think its a question worth asking.
Submission:
“Dear counterfactual Oracle:
We, the country of X, are about to hold a citizens congress on law Y, in which 100 people selected at random will be brought together for two days every two weeks to discuss law Y with experts, interested parties, and one another. After N such meetings, the citizens congress will use approval voting to select one of the K proposed versions of this law.
Given that none of the voters will ever see the results of this query, how do you predict that the votes will be distributed.”
As a concrete example of this type of question, in 2016 Ireland held a citizens congress on abortion. Thus, in 2015, we might ask:
“Next year Ireland will be assembling a citizens assembly of 99 randomly selected citizens to meet regularly and hear submissions from both experts and the public on a variety of issues [https://en.wikipedia.org/wiki/Citizens%27_Assembly_(Ireland) ] .
Given that your answer to this question will never be published, do you predict that the assembly will vote to legalize abortion? How do you believe the votes will be broken down?” (actual votes avaliable here: https://www.citizensassembly.ie/en/Meetings/Ballot-4-Results-Tables.pdf )
The Utility function of the AI would be a simple measure of “How accurately did it predict the spread of votes?” with a perfect score at zero, and then some penalty for every vote predicted incorrectly.
Usefulness:
While the abortion question in Ireland is already completed, this template feels like it should work for other well defined laws, or major decisions—for example gun control in the US, or even boring financial things like Capital gains tax in NZ. Having an AI which will correctly reflect what the collective of a society would think given a sufficient time and focus to deliberate seems like a good proxy for “What would a wise person do”. It doesn’t give us anything superhuman, and I make no claims that “society” is always correct or righteous, but that’s okay.
In particular, this seems like an effective way of testing if an AI has well calibrated “Human like Morality”- even if we don’t intend to use the AI’s results, for the purposes of testing an AI, this seems like a reasonable experiment. I suspect that “Citizens congress” is a more appropriate tool for dealing with questions of Morality (“Should euthanasia be legal”), as opposed to technocratic questions (“Should we raise the tax on income bracket 5 by 1.2%”).
Safety: It is trying to predict the law that we were going to pass anyway, if given sufficient time.
I’ll admit- mixing an AI up with lawmaking seems to be a bit suspect, the above question seems more appropriate for calibration purposes rather than actual use… but I still think its a question worth asking.
Can you make this a bit more general, rather than just for the specific example?
EDITED