If humanity develops very advanced AI technology, how likely do you think it is that this causes humanity to go extinct or be substantially disempowered?
I would find this difficult to answer, because I don’t know what you mean by “substantially disempowered”.
I’d find it especially hard to understand because you present it as a “peer risk” to extinction. I’d take that as a hint that whatever you meant by “substantially disempowered” was Really Bad(TM). Yet there are a lot of things that could reasonably be described as “substantially disempowered”, but don’t seem particularly bad to me… and definitely not bad on an extinction level. So I’d be lost as to how substantial it had to be, or in what way, or just in general as to what you were getting at with it.