It is refreshing[1] to be on a forum where people change their mind.
Thank you!
A “few” comments on the revised form:
- Transformative AI (TAI) systems: AI systems that are able to qualitatively transform society in a way as large as the industrial revolution
This hinges very heavily on the definition of “AI systems”. The main issue is that the rise of computing has (arguably) already “qualitatively transform[ed] society in a way as large as the industrial revolution”.
It is very difficult to disentangle ‘TAI’ and ‘effective application of machine learning’ without explicit and careful definitions. I could play devil’s advocate and argue that Facebook/Google/Instagram/Tiktok/etc’s use of machine learning already counts here.
Unfortunately, this means that the signal for the later odds question is drowned out. (For the sake of example: if I think that something already existing counts under this definition with 40% probability, and I thought there was a 20% probability of TAI within the next eighty years given that nothing existing met said definition, then my probability of TAI within eighty years is 40% + 60% × 20% = 52%, and my resulting probabilities (rounded to the options) would be 50% 50% 50%. Which looks like I thought that something was either imminent or never.)
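To spell out the arithmetic in that parenthetical (a minimal sketch; the 40%/20% figures and the 50%-step rounding are just my example numbers, not anything from the survey itself):

```python
# Probability that something already existing counts as TAI under a loose definition.
p_already = 0.40
# Probability of TAI within 80 years, given nothing existing qualifies.
p_later_given_not_yet = 0.20

# Total probability of TAI within 80 years (law of total probability):
# it either already exists, or it doesn't yet but arrives within 80 years.
p_within_80y = p_already + (1 - p_already) * p_later_given_not_yet  # 0.52

def round_to_option(p, step=0.5):
    """Round a probability to the nearest answer option (here: 50% steps)."""
    return round(p / step) * step

print(round_to_option(p_already))     # 0.5 -- "already exists" alone looks imminent
print(round_to_option(p_within_80y))  # 0.5 -- 0.52 rounds to the same option
```

Both the "already" mass and the "within eighty years" total collapse onto the same 50% option, which is how the definitional ambiguity drowns out the timeline signal.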
I would suggest putting in an explicit row for ‘TAI already exists’. (Or maybe two: ‘TAI already publicly exists’ and ‘TAI already exists, but in secret’.)
AI safety is important, but my comparative advantage lies elsewhere
This answer implies both “AI safety is important” and “my comparative advantage lies elsewhere”. It is not clear what should happen if one agrees with one of these but not the other.
What do you think are the odds for the following scenarios? [...] TAI will be developed by [...] If AGI is developed today [...]
TAI is distinct from AGI. It is good that you mention the distinction; putting these in the same question can easily result in bias where people assume you mean TAI for all of the scenarios.
As an aside, my answers for these scenarios are very different for your definitions of TAI and AGI.
What do you think are the odds for the following scenarios? [...] If AGI is developed today, it would be net beneficial for humanity’s long-term future
P(insufficient safeguards | rushed development) > P(insufficient safeguards | slow development).
Ditto, P(insufficient safeguards | low tolerance for computational overhead) > P(insufficient safeguards | high tolerance for computational overhead).
Ditto, P(insufficient safeguards | AGI in final development in secret now) > P(insufficient safeguards | AGI in final development when I heard about it while it was under development).[2]
As a result, my answer to this question is far more pessimistic if it was developed today and I didn’t already know about it than if it were developed in, say, 80 years.
How concerned are you about each of these problems?
A problem that I am relatively concerned about that you don’t mention: adversarial attacks[3]. It’s related, but tangential, to ‘Critical AI systems failure’ and ‘AI-enabled cyber attacks/misinformation’.
AI-enabled cyber attacks/misinformation
These are two separate things. It is unclear how to weight this if you have different amounts of concern about the two.
[1] Hasn’t changed. Still mandatory.
[2] This is largely because I believe most of the groups that could be doing AI development in secret right now are likely to take fewer precautions than average. If you are a military developing AI, there are rather direct incentives not to add safeguards that prevent the AI from doing anything to harm any human, for instance.
[3] Not really the correct term, but I don’t know of a better one. This does somewhat conflate machine learning and AI, I am aware. That being said, most approaches towards AI I have seen are susceptible to adversarial attacks.