If we accept XiXiDu’s implied premise that these guys are particularly relevant, then their belief that things are fine is itself an existential risk.
How do you know who is going to have the one important insight that leads to a dangerous advance? If I write to everyone, then they have at least heard of risks from AI and may think twice when they notice something dramatic.
Also, my premise is mainly that those people are influential. After all, they have students, coworkers, and friends with whom they might talk about risks from AI. One of them might actually become interested and get involved. And I can tell you that I am in contact with one professor who told me that this is important and that he will now research risks from AI.
You might also tell me who you think is important, and I will write to them.
I’m not questioning the value of writing to a broad range of people, or your initiative. I’m just discounting the authority of narrow-AI experts on GAI: two different fields whose names are misleadingly similar. In this case the discount means that our estimate of existential risk need not increase much. If Pat were a respected and influential GAI researcher, it would be a far, far scarier indicator!