I thought you were trying to make a population-level case that the more knowledge you have about deep learning, the lower your probability of doom is.
Yes, sort of, but not exactly: deep knowledge of DL and neuroscience in particular is somewhat insulating against many of the doom arguments. People outside the field are not relevant here; I'm only concerned with a fairly elite group who have somewhat rare knowledge. For example, there are only a handful of people on LW whom I would consider demonstrably well read in DL and neuroscience, and they mostly have lower p(doom) than EY/MIRI.
Most people outside the field don't see it as a world-ending issue, and surveys often turn up an average probability of over 10% among experts that it ends up being a world-ending issue.
If you are referring to this survey: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/?ref=warpnews.org, the actual results are near the complete opposite of what you claim:
The median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%.
5% is near my p(doom) and near that of Q Pope (a self-proclaimed optimist). So the median DL respondent in their survey is an optimist, which proves my point.
Also, only a small portion of those sent the survey actually responded, and only a small portion of those who responded (162 individuals) actually answered the doom question. It seems extremely unlikely that responding to that question was correlated with optimism, so there is probably a large sampling-bias effect here.
I will note that I was correct in the number I gave: the mean is 14%, the median is 5%. I didn't know the median was so low, though, so that's a good piece of data to include. And your original claim was about the fraction of people with high p(doom), so the median seems more relevant.
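As a side note on why the mean and median can diverge this much: a small tail of very high answers pulls the mean up while leaving the median untouched. Here is a minimal sketch with made-up, purely illustrative response values (not the actual survey data), chosen only so the headline statistics land near the reported figures:

```python
import statistics

# Hypothetical, illustrative p(doom) answers -- NOT the actual survey responses.
# Most respondents give low probabilities; a small tail gives very high ones.
# Values are chosen only so the summary statistics come out near the reported
# median (5%) and mean (~14%) for 162 respondents.
responses = (
    [0.01] * 40 +   # 40 people at 1%
    [0.05] * 60 +   # 60 people at 5%
    [0.10] * 32 +   # 32 people at 10%
    [0.40] * 20 +   # 20 people at 40%
    [0.80] * 10     # 10 people at 80%
)

print(f"n      = {len(responses)}")                    # 162
print(f"median = {statistics.median(responses):.0%}")  # 5%
print(f"mean   = {statistics.mean(responses):.0%}")    # 14%
```

With a right-skewed distribution like this, the mean tracks the high-probability tail while the median tracks the typical respondent, which is why the two summaries support different readings of the same survey.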
Otherwise, good points. I guess I have more disagreements with DL researchers than I thought.