Hm. Did I give that (false) impression? If I didn’t wish to answer, I wouldn’t even have opened the survey.
To be perfectly clear: I did wish to answer; the survey was constructed in such a way that I could not answer without knowingly adding likely-incorrect data[1].
=====
The subjectivity is a design feature
One person’s feature is another person’s selection bias[2].
(And I am the sort of person who will refuse rather than give likely-incorrect responses.)
I find myself surprisingly commonly[3] selection-biased against, and it is frustrating at best. This is just the latest example[4][5][6].
As in % of surveys that I open but end up dropping, give or take.
Privacy survey that required Skype.
Corporate survey where they described how they filtered out ‘bad’ data by asking synonymous questions and discarding inconsistent answers… except that when a survey asks multiple closely related questions I have a tendency to notice and carefully examine the differences, and said questions weren’t actually quite synonymous (they never are).
Different corporate ‘anonymous’ HR-related survey that could only be taken on the corporate VPN.
As I also responded to Ben Pace, I believe I replied too intellectually defensively to both of your comments as a result of the tone, and I would like to rectify that mistake. So thank you sincerely for the feedback; I agree that we would like neither to exclude anyone unnecessarily nor to have too much ambiguity in the answers we expect. We have updated the survey as a result, and again, please excuse my response.
It is refreshing[1] to be on a forum where people change their mind.
Thank you!
A “few” comments on the revised form:
- Transformative AI (TAI) systems: AI systems that are able to qualitatively transform society in a way as large as the industrial revolution
This hinges very heavily on the definition of “AI systems”. The main issue is that the rise of computing has (arguably) already “qualitatively transform[ed] society in a way as large as the industrial revolution”.
It is very difficult to disentangle ‘TAI’ and ‘effective application of machine learning’ without explicit and careful definitions. I could play devil’s advocate and argue that Facebook/Google/Instagram/TikTok/etc.’s use of machine learning already counts here.
Unfortunately, this means that the signal for the later odds question is drowned out by this ambiguity. (For the sake of example: if I think that something already existing counts under this definition with 40% probability, and I think there is a 20% probability of TAI within the next eighty years given that nothing existing meets said definition, my resulting probabilities (rounded to the options) would be 50%, 50%, 50%. Which looks as though I thought that something was either imminent or never coming.)
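To make the arithmetic explicit, here is a minimal sketch, assuming the survey asks for P(TAI by year T) at a few horizons; the 40% and the 80-year 20% figures are from the example above, while the shorter-horizon conditionals are purely illustrative assumptions:

```latex
% Law of total probability, conditioning on whether something that already
% exists counts as TAI. P(already counts) = 0.4 and the 80-year conditional
% of 0.2 come from the example above; the shorter-horizon conditionals
% (0.05, 0.10) are illustrative assumptions only.
\begin{align*}
P(\text{TAI by } T) &= P(\text{already counts})
  + \bigl(1 - P(\text{already counts})\bigr)\,
    P(\text{TAI by } T \mid \text{nothing existing counts})\\
P(\text{TAI soon})     &\approx 0.4 + 0.6 \times 0.05 = 0.43\\
P(\text{TAI mid-term}) &\approx 0.4 + 0.6 \times 0.10 = 0.46\\
P(\text{TAI in 80 y})  &\approx 0.4 + 0.6 \times 0.20 = 0.52
\end{align*}
% All three land near 50%, so the "already counts" mass dominates whatever
% signal the conditional forecasts carry.
```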
I would suggest putting in an explicit row for ‘TAI already exists’. (Or maybe two: ‘TAI already publicly exists’ and ‘TAI already exists, but in secret’.)
AI safety is important, but my comparative advantage lies elsewhere
This answer implies both “AI safety is important” and “my comparative advantage lies elsewhere”. It is not clear what should happen if one agrees with one of these but not the other.
What do you think are the odds for the following scenarios? [...] TAI will be developed by [...] If AGI is developed today [...]
TAI is distinct from AGI. It is good that you mention the distinction; putting these in the same question can easily result in bias where people assume you mean TAI for all of the scenarios.
As an aside, my answers for these scenarios are very different for your definitions of TAI and AGI.
What do you think are the odds for the following scenarios? [...] If AGI is developed today, it would be net beneficial for humanity’s long-term future
P(insufficient safeguards | rushed development) > P(insufficient safeguards | slow development).
Ditto, P(insufficient safeguards | low tolerance for computational overhead) > P(insufficient safeguards | high tolerance for computational overhead).
Ditto, P(insufficient safeguards | AGI in final development in secret now) > P(insufficient safeguards | AGI in final development when I heard about it while it was under development).[2]
As a result, my answer to this question for the case where AGI is developed today, without my already knowing about it, is far more pessimistic than for the case where it is developed in, say, 80 years.
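A rough sketch of that reasoning in symbols, using my own shorthand (S = insufficient safeguards, B = net beneficial for humanity’s long-term future); the last step is an assumption that a higher chance of insufficient safeguards translates into a lower chance of a beneficial outcome:

```latex
% S = insufficient safeguards, B = net beneficial (shorthand introduced here).
% "AGI today, unknown to me" essentially forces the secret-development branch.
\begin{align*}
P(S \mid \text{AGI today, unknown to me}) &\approx P(S \mid \text{developed in secret, now})\\
  &> P(S \mid \text{development I heard about while it was underway}),\\
\text{hence (by assumption)}\quad
P(B \mid \text{AGI today, unknown to me}) &< P(B \mid \text{AGI developed in, say, 80 years}).
\end{align*}
```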
How concerned are you about each of these problems?
A problem that I am relatively concerned about that you don’t mention: adversarial attacks[3]. It’s related to, but tangential to, ‘Critical AI systems failure’ and ‘AI-enabled cyber attacks/misinformation’.
AI-enabled cyber attacks/misinformation
These are two separate things. It is unclear how to weight this answer if you have different amounts of concern about the two.
Hasn’t changed. Still mandatory.
Not really the correct term, but I don’t know of a better one.
This is largely because I believe most of the groups that could be doing AI development in secret right now are likely to take fewer precautions than average. If you are a military developing AI, for instance, there are rather direct incentives not to add safeguards that prevent the AI from doing anything to harm any human.
This does somewhat conflate machine learning and AI, I am aware. That being said, most approaches to AI I have seen are susceptible to adversarial attacks.
Haha, true, but the feature is luckily not a systematic blockade, more of an ontological one. And sorry for misinterpreting! On another note, I really do appreciate the feedback, and With Folded Hands definitely seems within scope for this sort of answer; great book.