This is a cogent, if sparse, high-level analysis of the epistemic distortions around megaprojects in AI and other fields.
It points out that projects like the Human Brain Project and the Fifth Generation Computer Systems project made massive promises, raised around a billion dollars, and totally flopped. I don’t expect this was a simple error; I expect there were indeed systematic epistemic distortions involved, perpetuated at all levels.
It points out that similar-scale projects are being pursued today by various major AI companies globally, and that the same sorts of distortionary anti-epistemic tendencies can still be observed. Critics of the ideas currently getting billions of dollars (deep learning leading to AGI) are met with replies that systematically exclude the possibility of ‘stop, halt, and catch fire’ and instead only allow ‘why are you talking about problems and not solutions’ and ‘do this through our proper channels within the field, not in this unconstrained public forum’. These are exactly the sorts of replies you’d expect to see when a megaproject is protecting itself.
The post also briefly addresses why it’s worth modeling the sociopolitical arguments, and not just the technical ones. I think it’s clear that megaprojects like this are subject to major distortionary forces: when you’re talking about arguments against the position that is literally funding the whole field, it is obviously not acceptable to constrain dialogue to the channels that field controls; that mechanism is open to abuse of power. I like this short section.
The post ends with the claim that ‘people are being duped into believing a lie’. I don’t feel convinced of this.
I tried to write down why in simple terms, but I’m not finding it easy. A few pointers:
A chain is only as strong as its weakest link, but not all organisations are chains. Many mathematicians can be doing nonsense symbol-manipulation while Andrew Wiles solves Fermat’s Last Theorem. I expect there was an overlap between the era in which science is substantially broken, as it is today, and the era when Feynman was around making diagrams and building the atom bomb. During that overlap, a lot of what you could point to as ‘science’ was not actually science and was supported by anti-epistemic arguments, but this was somewhat separable from Feynman, who was still doing real work.
There can be many levels of fraud, combined with many levels of actual competence at the object level. The modern field of ML has mightily impressed me with AlphaGo and GPT and so on. I think the “full scam” position is that these successes are entirely a consequence of increased compute, not ML expertise, and that there is basically not much expertise in these fields at all. I find this plausible, but not at the 50% level. So evidence of anti-epistemic and adversarial behavior does not preclude real work from being done.
I do think it’s pretty normal for projects to have marketing that is run in an epistemically adversarial way, kept at arm’s length, and brings in resources.
I also think that sometimes very competent people are surrounded by distortionary forces. I think I should be able to come up with strong examples here, and I thought a bit about making the case for Thiel or Cummings (both have shown the ability to think clearly, but have also engaged in somewhat political narrative-building). Perhaps Hoover is an example? Still, I think a project can engage adversarially with the outside world and still be competent at its work. But I don’t think I’ve shown that strongly, and in most actual cases I am repulsed by projects that do the adversarial stuff and think it’s delusional to be holding out hope for them. I also think it’s especially delusional to think this about science. Science isn’t supposed to be a place where the real conversation happens in private.
Conclusion
I think this post raises a straightforward and valid hypothesis against which to evaluate the field as a whole. I don’t think it’s sufficiently detailed to convince me that the overall hypothesis holds. I do think it’s a valuable conversation to have; it’s such an important topic, especially for this community. I think this post is valuable, and I expect I will give it a small positive vote in the review, around +2 or +3.
Further Work
Here are some further questions I’d like to see discussed and answered, to get a better picture of this:
What are a few other examples of criticism of the current wave of AI hype, and how were they dealt with?
What do leaders of these projects say on this topic, and in response to criticism?
(I recall an FLI panel with Demis Hassabis on it, where the one detailed argument he made about the decision to put more or less funding into AGI right now was that it will become easier for many groups to build AGI in the future as compute gets cheaper, so in order to have centralized control and be able to include time for safety, we should push as fast as we can on AGI now. I don’t think it’s an unreasonable argument, but I was hardly surprised to hear it coming from him.)
How open are the channels of communication with the field? How easy is it for an outsider to engage with the people in the field?
Who are the funders of AI? To what extent are they interested in public discourse around this subject?
(My guess is that the answer here is something like “the academic field and industry have captured the prestige associated with it so that nobody else is considered reasonable to listen to”.)
What is the state of the object-level arguments around the feasibility of AGI?
Does the behavior of the people who lead the field match up with their claims?
What are some other megaprojects or fields with billions of dollars flowing into them, and how are these dynamics playing out in those areas?