I think the model clearly applies, though almost certainly the effect is less strictly binary than in the surprise party example.
I expect the annoyance would make him a little biased, but he'd still be open to the idea and still maintain solid epistemics.
This is roughly a crux for me, yeah. I think dozens of people emailing him would cause him to (fairly reasonably, actually!) infer that something weird is going on (e.g., people are in a crazy echo chamber) and that he's being targeted for unwanted attention (which he would be!). And it seems important, in a unilateralist's curse way, that this effect is probably unrelated to the overall size of the group of people who have these beliefs about AI. Like, if you multiply the number of AI-riskers by 10, you also multiply by 10 the number of people who, by some context-unaware individual judgement, think they should cold-email Tao. Some of these people will be correct that they should do something like that, but it seems likely that many such people will be incorrect.
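To make that scaling concrete (a purely illustrative calculation; the values of N and p below are made up for the example, not estimates of the actual community): if each of N group members independently concludes, with some small probability p, that they personally should send the email, the expected number of emails he receives grows linearly with N:

$$\mathbb{E}[\text{emails}] = N \cdot p, \qquad \text{e.g. } N = 10{,}000,\; p = 0.01 \;\Rightarrow\; 100 \text{ emails.}$$

So even if each individual judgement is fairly well-calibrated, growing the group alone is enough to flood his inbox, which is the unilateralist's-curse dynamic in miniature.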
Yeah, random internet forum users emailing an eminent mathematician en masse would be strange enough to be non-productive. I for one wasn't suggesting anyone do that, and I don't think it's what the OP suggested. For anyone contemplating sending such an email: the task is best delegated to someone who can not only write a coherent research proposal that sounds relevant to the person approached, but can write the best one.
Mathematicians receive occasional crank emails about solutions to P ?= NP, so anyone doing the outreach needs to be reputable enough to get past their crank filters.