How do you know who is going to have the one important insight that leads to a dangerous advance? If I write to everyone, then they have at least heard of risks from AI and may think twice when they notice something dramatic.
I’m not questioning the value of writing to a broad range of people, or your initiative. I’m just discounting the authority of narrow AI experts on GAI—two different fields whose names are misleadingly similar. In this case the discount means that our estimate of existential risk need not increase much. If Pat were a respected and influential GAI researcher, it would be a far, far scarier indicator!