A bias is an error in weighting or proportion or emphasis. This differs from a fallacy, which is an error in reasoning specifically. Just to make up an example, an attentional bias would be a misapplication of attention—the famous gorilla experiment—but there would be no reasoning underlying this error per se. The ad hominem fallacy contains at least implicit reasoning about truth-valued claims.
Yes, it’s possible that AI could be a concern for rationality. But AI is an object of rationality; in this sense, AI is like carbon emissions; it has room for applied rationality, absolutely, but it is not rationality itself. People who read about AI through this medium are not necessarily learning about rationality. They may be, but they also may not be. As such, the overfocus on AI is a massive departure from the original subject matter, much like how it would be if LessWrong became overwhelmed with ways to reduce carbon emissions.
Anyway—that aside, I actually don’t disagree much at all with most of what you said.
The issue is that when these concerns have been applied to the foundation of a community concerned with the same things, they have been staggeringly wrongheaded and have produced the disparities between mission statements and practical realities, which is more or less the basis of my objection. I am no stranger to criticizing intellectual communities; I have outright argued that we should expand the federal defunding criteria to include certain major universities, UC Berkeley itself among them. For all of the faults that have been levied against academia — and I have been enough of a critic of its norms that I appear in Tucker Carlson’s book (“Ship of Fools,” p. 130) as a Person Rebelling Against Academic Norms — I have never had a discussion as absurd as those I have had when questioning why MIRI should receive Effective Altruism funding. It was and still is one of the most bizarre and frankly concerning lines of reasoning I have ever encountered, especially when contrasted with the position of EA leaders on addressing homelessness or the drug war. The concept of LessWrong and much of EA is not, on its face, objectionable; what has resulted absolutely is.
why MIRI should receive Effective Altruism funding
I guess the argument is that (a) a superhuman AI will probably be developed soon, (b) whether it is properly aligned with human values or not will have tremendous impact on the future of humanity, and (c) MIRI is one of the organizations that take this problem most seriously.
If you agree with all three parts, then the funding makes sense. If you disagree with any one of them, it does not. At least from a political perspective, it would be better not to talk about funding missions that require belief in several controversial statements to justify them.
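To make the “several controversial statements” point concrete, here is a minimal sketch with made-up numbers (the credences and the independence assumption are mine, not anything stated in the thread): even if each of (a), (b), and (c) individually seems more likely than not, the funding case rests on their conjunction, which can still end up being a minority position.

```python
# Illustrative sketch only: the probabilities below are hypothetical, not
# taken from the discussion. It shows why a case resting on the conjunction
# of several controversial claims is harder to sell than any single claim.
p_superhuman_ai_soon = 0.7   # hypothetical credence in claim (a)
p_alignment_decisive = 0.7   # hypothetical credence in claim (b)
p_miri_is_effective  = 0.7   # hypothetical credence in claim (c)

# Treating the claims as roughly independent, the joint probability is the product.
joint = p_superhuman_ai_soon * p_alignment_decisive * p_miri_is_effective
print(f"Joint probability of all three claims: {joint:.2f}")  # ~0.34
```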
This is partly about the plausibility of the claims, and partly about prevention vs. reaction. Other EA charities are reactive: a problem already exists, and we want to solve it. In the case of malaria, the point is not to cure the people who are already sick but to prevent other people from getting sick… but even so, people sick with malaria already exist.
I was trying to think of an analogy where humanity spent a lot of resources on prevention, but I actually cannot recall one. Even recently with COVID, a lot of people had to die first; perhaps at the very beginning we could have prevented all of this, but precisely because it had not happened yet, it did not seem important.