[Question] Request for AI risk quotes, especially around speed, large impacts and black boxes

@KatjaGrace, Josh Hart, and I are collecting quotes for different arguments that AI poses an existential risk.

Full list here: https://docs.google.com/spreadsheets/d/1yB1QIHtA-EMPzqJ_57RvvftvXHTI5ZLAy921Y_8sn3U/edit

Currently we are struggling to find quotes from proponents of the following arguments:

  • “Loss of control via speed”: things that might otherwise go well will instead go badly because they are happening too fast

  • “Loss of control via inferiority”: an actor who is much less capable than other actors may slowly lose control of their resources (e.g. a child king)

  • “AI may produce or accelerate destructive multi-agent dynamics”: poorly defined, but in the direction of ‘one AI might be fine, but many AIs plus us in a competitive world will lead to outcomes nobody wants’

  • “Large impacts suggest large risks”: a pure argument from size, that the impacts will be big and that this is concerning

  • “Black boxes”: we understand AI substantially less well than other new, impactful technologies

  • A good Yudkowsky quote for “Risk from competent malign agents”, i.e. that AIs are a risk because they are competent and not aligned with us. I am confident that Yudkowsky thinks this, but I struggle to find a statement of it in under 250 words.

I would love any suggestion, however vague, of where you think good quotes from proponents of these arguments might be found.
