Does whether Eliezer is over-confident or not have any bearing on whether he’s fit to work on FAI?
Oh, I don’t think so, see my response to Eliezer here.
The claim is not credible. I've seen a few examples given, but with no way to determine whether the people "repelled" would ever have been open to mitigating existential risk in the first place. I suspect anyone who genuinely cares about existential risk wouldn't dismiss the idea out of hand merely because a well-known person working to reduce that risk thinks his own work is very valuable. It is unlikely to be their true rejection.
Yes, here it seems there's enough ambiguity in how the publicly available data should be interpreted that we may have a legitimate difference of opinion on account of having had different experiences. As Scott Aaronson mentioned in the Bloggingheads conversation, humans store their information in a form (largely subconscious) that is not readily exchanged.
All I would add to what I’ve said is that if you haven’t already done so, see the responses to michaelkeenan’s comment here (in particular those by myself, bentarm and wedrifid).
If you remain unconvinced, we can agree to disagree without hard feelings :-)