Thank you for engaging with the actual question, unlike the other comments! What you seem to be gesturing at is a phase transition from “too dumb to be x-risk dangerous, even in a large group” to “x-risk-level dangerous”. I think this phase transition, or lack thereof, would be worth studying, for two reasons:

- it is something we CAN study effectively, because we don’t have to reason about intelligences smarter than ourselves.
- it is something that can become an x-risk WAY EARLIER than a super-intelligent AGI.

Additionally, there is a fair chance to stave off “dying without dignity”, i.e., accidentally unleashing something that was preventable.