In the parent comment you seem to indicate that you do believe this at least to some degree, but in the great-grandparent you suggest that you do not. Which is it?
The described perception is a caricature. That is, it is not a correct description of AI risk proponents, nor is it a correct description of the views of people who dismiss AI risk, even on a popular level. So in no way should it be taken as a straightforward description of something people actually believe. But you insist on taking it in this way. Very well: in that case, it is basically false, with a few grains of truth. There is nothing inconsistent about this, or with my two statements on the matter. Many stereotypes are like this: false, but based on some true things.
It seems to me that attacking someone with a publication history and hundreds of pages of written material available online on the basis of a lack of a degree is an argumentum ad hominem, and is inappropriate on a rationality forum.
I did not attack Yudkowsky on the basis that he lacks a degree. As far as I know, that is a question of fact. I did not say, and I do not think, that it is relevant to whether the AI risk idea is valid.
You are the one who pursued this line of questioning by asking how much truth there was in the original caricature. I did not wish to pursue this line of discussion, and I did not say, and I do not think, that it is relevant to AI risk in any significant way.
By focusing on the origin of their belief, aren’t you committing the genetic fallacy?
No. I did not say that the historical origin of their belief is relevant to whether or not the AI risk idea is valid, and I do not think that it is.
Your assertion that science fiction influenced Yudkowsky’s opinions is unwarranted, irrelevant to the correctness of his argument, and amounts to Bulverism.
As for “unwarranted”: you yourself asked me what truth I thought there was in the caricature, so it was not unwarranted. It is indeed irrelevant to the correctness of his arguments; I did not say, or suggest, or think, that it is.
As for Bulverism, C.S. Lewis defines it as assuming without argument that someone is wrong, and then explaining, e.g. psychologically, how he came by his opinions. I do not assume without argument that Yudkowsky is wrong. I have reasons for that belief, and I stated in the grandparent that I was willing to give them. I do suspect that Yudkowsky was influenced by science fiction. This is not a big deal; many people were. Apparently Ettinger came up with the idea of cryonics after seeing something similar in science fiction. But I would not have commented on this issue if you had not insisted on asking about it. I did not say, and I do not think, that it is relevant to the correctness of the AI risk idea.
your defense of that belief and use of it to attack the AI risk argument amounts to fallacious argumentation inappropriate for LW.
As I said in the first place, I do not take that belief as a literal description even of the beliefs of people who dismiss AI risk. And taken as a literal description, as you insist on taking it, I have not defended that belief. I simply said it is not 100% false; very few things are.
I also did not use it to attack AI risk arguments, as I have said repeatedly in this comment, and as you can easily verify in the above thread.
What is inappropriate on Less Wrong is the kind of heresy trial that you are engaging in here: you yourself insisted on reading that description as a literal one, you yourself insisted on asking me whether I thought there might be any truth in it, and then you falsely attributed to me arguments that I never made.
I did not attack Yudkowsky on the basis that he lacks a degree. As far as I know, that is a question of fact. I did not say, and I do not think, that it is relevant to whether the AI risk idea is valid.
I will. Whether we believe something to be true in practice does depend to some degree on the origin story of the idea; otherwise peer review would be a silly and pointless exercise. Yudkowsky’s ideas, and to a lesser degree Bostrom’s, have not received the level of academic peer review that most scientists would consider necessary before entertaining such a seriously transformative idea. This is a heuristic that shouldn’t be necessary in theory, but is in practice.
Furthermore, academia does have a core value in its training that Yudkowsky lacks: a breadth of cross-disciplinary knowledge that extends beyond one’s personal interests alone. I think it is reasonable to be suspicious of an idea about advanced AI promulgated by two people with very narrow, informal training in the field. Again, this is a heuristic, but a generally good one.
This might be relevant if you knew nothing else about the situation, and if you had no personal assessment of the content of their writings. That might be true of you; it certainly is not true of me.
Meaning you believe EY and Bostrom to have a broad and deep understanding of the various relevant subfields of AI and general software engineering? Because that information is accessible from their writings, and my opinion of it is not favorable.
Or did you mean something else?