Results of $1,000 Oracle contest!
Almost a year ago, I posted a contest to find the best questions to ask an Oracle AI. I’ve been very slow in grading it, apologies for that, but here are the final results and the sumptuous cash prizes.
First of all, thanks to everyone who wrote an entry; even the suggestions that didn’t win mostly contained interesting ideas that made me think. But here are the prizes:
Counterfactual Oracle
$350 for the best question(s) to ask a counterfactual Oracle, awarded to Wei_Dai for any one of these posts.
The best counterfactual Oracle suggestions revolved around using the Oracle to automate or hasten difficult human work, by getting the Oracle to tell us what humans would have achieved had they done the work. Wei_Dai’s suggestions in this area were the most useful, and he also had an interesting iterated Oracle idea that suggests some other avenues to explore.
Low-Bandwidth Oracle
$350 for the best question(s) to ask a low-bandwidth Oracle: the joint winners are Wei_Dai, for his idea on using the Oracle to predict crime, and William_S, for his idea to get the Oracle to critique plans.
Generally good ideas
There’s a total of $300 “to be distributed as I see fit among the non-winning entries; I’ll be mainly looking for innovative and interesting ideas that don’t quite work.” This award is split among several people:
$150 to cousin_it for finding the “bucket chain” flaw that arises when there are Oracles whose prediction time-ranges overlap.
$50 to evhub for his idea to use Oracles to do iterated amplification and distillation.
$50 to paulfchristiano for a useful critique of evhub’s idea, in the thread starting from here. It increased my understanding of both Paul’s approach and my own.
$25 to Gurkenglas for the idea of using Oracles to see how predictable history was; I have no idea how to make this work precisely, but it is a fascinating idea.
$25 to romeostevensit for a very thorough schema of issues around Oracles.
Thanks again to all who participated, and the winners can contact me to receive their ill-gotten gains.
Well, Oracle, which question of under 1,000 words would be answered by the most influential answer for our future? What answer to which question would be the most earth-shattering?
How can a swarm of nuclear asbestos superintelligent nanobots be synthesised using common household items? (The rhetoric in the answer will keep your guard down for just long enough to publish it.)