The problem is that there’s too much stuff to be done. From Gates’ perspective, he could spend his time worrying exclusively about AI, or exclusively about global warming, or biological pandemics, etc. etc. etc. He chooses, of course, the broader route of focusing on more than one risk at a time. So the mere fact that AI is on his radar doesn’t necessarily mean he’ll do something about it; if AI is threat #11 on his list of possible x-risks, for instance, he might be too busy worrying about threats #1-10. Whether he acts on AI is an entirely separate issue from whether he is actually concerned about it, so the fact that he is apparently aware of AI risk isn’t as reassuring as it might look at first glance.
Yeah, but worlds where AI is on his radar probably have a much higher Bill-Gates-intervention-rate than those where it isn’t.
The base rate might be low, but I still like to hear that one of the necessary conditions has been met.