I disagree with many of your points, but I don’t have time to reply to all that, so to avoid being logically rude I’ll at least reply to what seems to be your central point, about “relevant expertise as measured by educational credentials and/or accomplishments.”
Who has educational credentials and/or accomplishments relevant to future AGI designs or long-term tech forecasting? Also, do you particularly disagree with what I wrote in AGI Impact Experts and Friendly AI Experts?
Also, in general, I’ll just remind everyone reading this that I don’t think these meta-level debates about proper social epistemology are as productive as object-level debates about strategically relevant facts (e.g. facts relevant to the theses in this post). Argument screens off authority, and all that.
Edit: Also, my view of Holden Karnofsky might be illustrative. I take Holden Karnofsky more seriously than almost anyone on the cost-effectiveness of global health interventions, despite the fact that he has 0 relevant degrees, 0 papers published in relevant journals, 0 awards for global health work, etc. Degrees and papers and so on are only proxy variables for what we really care about, and are easily screened off by more relevant variables, both for the case of Karnofsky on global health and for the case of Bostrom, Yudkowsky, Shulman, etc. on AI risk.
For Karnofsky, and to some extent Bostrom, yes; Shulman is debatable. Yudkowsky tried to get screened (tried to write a programming language, for example, wrote a lot of articles on various topics, many of them wrong, and tried to write technical papers (TDT), really badly), and failed to pass the screening by a very big margin. His entirely irrational arguments about his 10% counterfactual impact are also part of that failure. Omohundro passed with flying colours (his PhD is almost entirely irrelevant at this point, as it is screened off by his accomplishments in AI).
I’ll just remind everyone reading this that I don’t think these meta-level debates about proper social epistemology are as productive as object-level debates about strategically relevant facts....
Exactly. All of this is wasted effort once either FAI or UFAI is developed.
Who has educational credentials and/or accomplishments relevant to future AGI designs or long-term tech forecasting?
There are more relevant accomplishments, less relevant accomplishments, and a lack of accomplishments.
Also, in general, I’ll just remind everyone reading this that I don’t think these meta-level debates about proper social epistemology are as productive as object-level debates about strategically relevant facts
I agree that a discussion of strategically relevant facts would be much more productive. I don’t see facts here, though. I see a lot of speculation, and a lot of making things up to fit the conclusion.
If I were to tell you that I could, for example, win a very high-stakes programming contest (with a difficult, open problem that has many potential solutions which can be ranked by quality), a discussion of my approach to the contest problem between you and me would be almost useless for predicting my victory (provided that basic standards of competence are met), irrespective of whether my idea is good. Prior track record, on the other hand, would be a good predictor. That is how it works for a very well-defined problem; it is not going to be better for a less well-understood problem.
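Since “screens off” is doing the real work in both comments, here is a minimal sketch of that conditional-independence idea, with purely illustrative numbers (the chain authority → argument quality → correctness, and every probability below, are assumptions for illustration, not anything claimed in the thread). Once the quality of the argument is actually observed, the arguer’s track record adds nothing; when argument quality cannot be assessed (as in the contest scenario above), track record remains the informative variable.

```python
# Minimal sketch (illustrative numbers only) of "argument screens off authority"
# as conditional independence in the chain: Authority -> ArgumentQuality -> Correct.

from itertools import product

# Hypothetical priors and conditionals -- all values are assumptions.
p_authority = {True: 0.3, False: 0.7}   # P(strong track record)
p_good_arg = {True: 0.8, False: 0.4}    # P(good argument | track record)
p_correct = {True: 0.9, False: 0.2}     # P(conclusion correct | good argument)

def joint(auth, good, correct):
    """Joint probability P(auth, good, correct) under the chain model."""
    pa = p_authority[auth]
    pg = p_good_arg[auth] if good else 1 - p_good_arg[auth]
    pc = p_correct[good] if correct else 1 - p_correct[good]
    return pa * pg * pc

def prob_correct(given):
    """P(correct | given), where `given` fixes some of {'auth', 'good'}."""
    num = den = 0.0
    for auth, good, correct in product([True, False], repeat=3):
        assignment = {"auth": auth, "good": good}
        if any(assignment[k] != v for k, v in given.items()):
            continue
        p = joint(auth, good, correct)
        den += p
        if correct:
            num += p
    return num / den

# Argument quality observed: authority adds nothing (both print 0.9).
print(prob_correct({"good": True}))
print(prob_correct({"good": True, "auth": True}))

# Argument quality not observable: track record still moves the estimate.
print(prob_correct({"auth": True}))    # ~0.76
print(prob_correct({"auth": False}))   # ~0.48
```

Under these assumed numbers, the last two lines show why track record stays predictive exactly when the argument itself can’t be reliably evaluated, which is the disagreement between the two comments above.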