In what specific areas do you think LWers are making serious mistakes by ignoring or not accepting strong enough priors from experts?
As I said, the ideal is to use expert opinion as a prior unless you have a lot of good info, or you think something is uniquely dysfunctional about an area (it's rationalist folklore that a lot of areas are dysfunctional—"the world is mad"—but I think people are being silly about this). Experts really do know a lot.
You also need to figure out who the actual experts are and what they actually say. That's a non-trivial task—reading reports on science in mainstream media will just stuff your head with nonsense.
It’s true, reading/scholarship is hard (even for scientists).
It's actually much worse than that, because huge breakthroughs themselves are what create new experts. So on the eve of a huge breakthrough, currently recognized experts invariably predict that it is far off, simply because they can't see the novel path towards the solution.
In this sense, everyone who is currently an AI expert is, trivially, someone who has failed to create AGI. The only experts who have any sort of clear understanding of how far off AGI is are either not currently recognized or do not yet exist.
Btw, I don't consider myself an AI expert. I am not sure what "AI expertise" entails; I suppose it means knowing a lot about many things, stats/ML among them, but also a ton of engineering. I think an "AI expert" is sort of like "an airplane expert." Airplanes are too big for one person—you might be an expert on modeling fluids or an expert on jet engines, but not an expert on airplanes.
AI, general singularitarianism, cryonics, life extension?
And the many-worlds interpretation of quantum mechanics. That is, all EY’s hobby horses. Though I don’t know how common these positions are among the unquiet spirits that haunt LessWrong.