Choice suggestions:
An ideal world would use the empirically validated Eysenck model of left vs. right and authoritarian vs. libertarian for the political section. Oh wait, you basically did. Good job.
Complex Affiliation choice—Inscrutably Idiosyncratic
Autism Spectrum choice—I don't know, but my Autism Spectrum Quotient was [blank]
Question suggestion 1:
Only answer this if you think the chance of a singularity before 2100 is over 1%. Do you feel that influencing the singularity’s outcome is tractable enough to be worth your time or money?
Question suggestion 2:
The singleton AI has taken over, fixed technology near 2014 levels, and now asks you to decide the world's economic priorities for the next century. You must choose from:
A. Maximize GDP
B. Maximize median GDP per capita
C. Maximize number of people making over a certain fixed income that you get to choose as [blank]
There may be any number of priorities besides economic ones. The AI will not dictate how we weigh those non-economic priorities; it only requires that we choose one of the three above and exclude the other two.
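For concreteness, here is a minimal Python sketch of why the question forces a real choice: the three priorities can rank the same economies differently. The incomes and the threshold are invented for illustration; the threshold stands in for the fixed income chosen as [blank].

```python
from statistics import median

# Two toy five-person economies with invented incomes (hypothetical
# numbers, not real data), chosen so the three priorities disagree.
world_a = [10, 20, 30, 40, 400]   # unequal: one very rich person
world_b = [45, 50, 55, 60, 65]    # flatter distribution, lower total

THRESHOLD = 35  # stands in for the fixed income chosen as [blank]

def summarize(incomes):
    """Score an economy under each of the three proposed priorities."""
    return {
        "A: total income (GDP)": sum(incomes),
        "B: median per-capita income": median(incomes),
        "C: people above threshold": sum(1 for x in incomes if x > THRESHOLD),
    }

print(summarize(world_a))  # A: 500, B: 30, C: 2 -> wins on GDP
print(summarize(world_b))  # A: 275, B: 55, C: 5 -> wins on median and headcount
```

Priority A prefers world_a while priorities B and C prefer world_b, which is exactly why the AI insists you pick one and exclude the other two.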
Other question suggestions I'm not making (since I'm limiting myself to two), but maybe someone else will:
Mental Health—add ADHD, and this question:
A two-choice sub-question asking whether your SAT (out of 1600) was taken before or after April 1995, when the test was recentered.