I suppose you also believe that Obama must prove he’s not a Muslim? And must do so again every time someone asserts that he is?
I don’t see the situation that you cite as comparable. Obama has stated that he’s a Christian, and this seriously calls into question the idea that he’s a Muslim.
Has Eliezer ever said something which calls my interpretation of the situation into question? If so I’ll gladly link a reference to it in my top level post.
(As an aside, I agree with Colin Powell that whether or not Obama is a Muslim has no bearing on whether he’s fit to be president.)
Let me say that Eliezer may have already done more to save the world than most people in history. This is going on the assumption that unFriendly AI is a serious existential risk. Even if he is doing it wrong and his work will never directly contribute to FAI in any way, his efforts at publicizing this threat have vastly increased the pool of people who know of it and want to help in some way.
Through his skill at explanation and inspiration, he has brought more attention to this issue than any other single person I know of. The fact that he also has the intellect to work directly on the problem is simply an added bonus. And I strongly doubt that he has driven away anyone who would otherwise have helped.
I definitely agree that some of what Eliezer has done has reduced existential risk. As I’ve said elsewhere, I’m grateful to Eliezer for inspiring me personally to think more about existential risk.
However, as I’ve said, in my present epistemological state I believe that he’s also had (needless) negative effects on existential risk on account of making strong claims with insufficient evidence. See especially my responses to komponisto’s comment. I may be wrong about this.
In any case, I would again emphasize that my most recent posts should not be interpreted as personal attacks on Eliezer. I’m happy to support Eliezer to the extent that he does things that I understand to lower existential risk.
You said you had delusions of messianic grandeur in high school, but that you’re better now. But then you post an exceptionally well-done personal takedown of someone who YOU believe is too self-confident and who (more importantly) has convinced others that his confidence is justified. I think your delusions of messiah-hood are still present, perhaps unacknowledged, and that you are suffering from envy of someone you view as “a more successful messiah”.
My conscious motivation for making my most recent string of posts is given in my Transparency and Accountability posting. I have no conscious awareness of having a motivation of the type that you describe.
Of course, I may be deluded about this (just as anyone may be deluded about what they actually believe). In line with my top level posting, I’m interested in seriously considering the possibility that my unconscious motivations are working against my conscious goals.
However, I see your own impression as very poor evidence that I may be deluded on this particular point, in light of your expressed preference for donating to Eliezer and SIAI even if doing so is not socially optimal:
And my priests are Eliezer Yudkowsky and the SIAI fellows. I don’t believe they leech off of me; I feel they earn every bit of respect and funding they get. But that’s beside the point. The point is that even if the funds I gave were spent sub-optimally, I would STILL give them this money, simply because I want other people to see that MY priests are better taken care of than THEIR priests.
I don’t judge you for having this motivation (we’re all only human). But the fact that you seem interested in promoting Eliezer and SIAI independently of whether doing so benefits broader society has led me to greatly discount your claims and suggestions which relate to Eliezer and SIAI.
(As an aside, I agree with Colin Powell that whether or not Obama is a Muslim has no bearing on whether he’s fit to be president.)
Does whether Eliezer is over-confident or not have any bearing on whether he’s fit to work on FAI?
I believe that he’s also had (needless) negative effects on existential risk on account of making strong claims with insufficient evidence. See especially my responses to komponisto’s comment. I may be wrong about this.
From the comment:
My claim is that on average Eliezer’s outlandish claims repel people from thinking about existential risk.
The claim is not credible. I’ve seen a few examples given, but with no way to determine whether the people “repelled” would ever have been open to mitigating existential risk in the first place. I suspect that anyone who actually cares about existential risk wouldn’t dismiss an idea out of hand just because a well-known person working to reduce risk thinks his own work is very valuable. It is unlikely to be their true rejection.
In any case, I would again emphasize that my most recent posts should not be interpreted as personal attacks on Eliezer.
The latest post made this clear, and cheers for that. But the previous ones were written as attacks on Eliezer. It’s hard to read a diatribe that describes someone as a cult leader who is increasing existential risk and would do best to shut up, and not interpret it as a personal attack.
But the fact that you seem interested in promoting Eliezer and SIAI independently of whether doing so benefits broader society has led me to greatly discount your claims and suggestions which relate to Eliezer and SIAI.
Fair enough; I can’t blame you for that. I’m happy with my enthusiasm.
Does whether Eliezer is over-confident or not have any bearing on whether he’s fit to work on FAI?
Oh, I don’t think so; see my response to Eliezer here.
The claim is not credible. I’ve seen a few examples given, but with no way to determine whether the people “repelled” would ever have been open to mitigating existential risk in the first place. I suspect that anyone who actually cares about existential risk wouldn’t dismiss an idea out of hand just because a well-known person working to reduce risk thinks his own work is very valuable. It is unlikely to be their true rejection.
Yes, so here it seems that there’s enough ambiguity in how the publicly available data should be interpreted that we may have a legitimate difference of opinion on account of having had different experiences. As Scott Aaronson mentioned in the Bloggingheads conversation, humans have their information stored in a form (largely subconscious) such that it’s not readily exchanged.
All I would add to what I’ve said is that if you haven’t already done so, see the responses to michaelkeenan’s comment here (in particular those by me, bentarm, and wedrifid).
If you remain unconvinced, we can agree to disagree without hard feelings :-)