Poll Results on AGI
The poll has now settled. I have been busy, so it took me a bit longer than expected to write this up, but without further ado, let us look at some of the results. Over the past month 74 people voted, 70 people were grouped, and 2,095 votes were cast, with 55 statements submitted.
I will briefly list some of the results together with some of my own thoughts, but if you want more detail, go look at the auto-generated report.
Some Results
We will look at some of the majority opinions first, that is, statements that most people agree (or disagree) with. We weigh both the percentage and the number of people who voted. All numbers can be found in the report.
Corporate or academic labs are likely to build AGI before any state actor does. (75% Agree, 18% Pass)
I believe there needs to be a bigger focus on technical research. (64% Agree, 30% Pass)
AI-to-human safety is fundamentally the same kind of problem as any interspecies animal-to-animal cooperation problem. (70% Disagree, 12% Pass)
I think it is most likely the first AGI will be developed in a leading AI lab, but still plausible it will be developed by a smaller group. (65% Agree, 17% Pass)
A sufficiently large AI-related catastrophe could potentially motivate the US government to successfully undertake a pivotal act. (60% Agree, 21% Pass)
Opinion Groups
Pol.is automatically generates opinion groups based on similar voting patterns. In this case the algorithm identified two groups, A and B. Roughly speaking, group A believes that AGI will come soon and will be very dangerous, whereas group B believes that AGI will take some time to arrive and will be only somewhat dangerous.
(During the voting there were three stable groups for a while; the third group roughly believed that AGI will come soon but won’t be that dangerous.)
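To make the grouping step a bit more concrete, here is a minimal sketch of clustering voters by their vote patterns. This is not Pol.is's actual pipeline; as I understand it, Pol.is reduces the voter-by-statement vote matrix with PCA and then clusters it, with its own way of choosing the number of groups. The toy data, the library choices, and the fixed two-group assumption below are all mine.

```python
# Rough sketch: group voters by vote similarity on a voter-by-statement matrix.
# NOTE: this is an illustration, not Pol.is's actual algorithm.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# votes[i, j] = +1 (agree), -1 (disagree), 0 (pass / unseen) for voter i, statement j
rng = np.random.default_rng(0)
votes = rng.choice([-1, 0, 1], size=(70, 55))  # toy data: 70 voters, 55 statements

# Project voters into a low-dimensional "opinion space", then cluster them.
coords = PCA(n_components=2).fit_transform(votes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

for g in range(2):
    members = votes[labels == g]
    # Statements where the group's mean vote differs most from the overall mean
    # are the ones that "make the group special".
    diff = members.mean(axis=0) - votes.mean(axis=0)
    top = np.argsort(-np.abs(diff))[:3]
    print(f"group {g}: {len(members)} voters, most distinguishing statements: {top}")
```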
So let us see which beliefs make group A special. The most important one is that “[they] think the probability of AGI before 2040 is above 50%”. Ninety percent of group A agreed with this statement. The same share also believes that “once a robustly general human-level ‘AI scientist’ is built, it will be capable of self-improving via research to superhuman levels”, whereas only 33% of group B agreed with that statement. Finally, group A strongly (79%) believes that the chance of an AI takeover is above 15%. Group B weakly disagrees with this statement (60%).
The statement “By 2035 it will be possible to train and run an AGI on fewer compute resources than required by PaLM today” separated the two groups quite strongly, with group A mostly agreeing or passing / being unsure, and 71% of group B disagreeing.
Areas of Uncertainty
Some of the statements here probably ended up as uncertain simply because they reference things that not many people know (or know well). One statement I was surprised to see here was “one agent getting the rest of the world to do its bidding is a failure scenario for alignment via dictatorship”, which sounds fairly straightforward, or at least sounds to me like the kind of statement people would have strong gut reactions to.
The other one that I thought was interesting, and probably worth investigating further, is “SotA LLMs >= human brain language centers, it’s the lack of prefrontal-cortex-equivalent-functionality which separates them from AGI.” This seems especially relevant now, since ChatGPT was released between the time I sent out the poll and now. Someone somewhere has probably already made such a comparison, so feel free to link it.
Some Final Thoughts
Overall I think this poll was a decent success. At the very least I had a lot of fun watching the votes come in! I might do more similar polls on other topics in the future. I really recommend looking at the results here!