I think there is an additional interpretation that you’re not taking into account, and an eminently reasonable one.
First, to clarify the easy question: unless you believe that there is something mysteriously uncomputable going on in the human brain, the question of whether science can be automated in principle is trivial. Obviously, all you’d need to do is program a sufficiently sophisticated AI, and it would do science automatically. That much is clear.
However, the more important question is—what about our present abilities to automate science? By this I mean both the hypothetical methods we could try and the ones that have actually been tried in practice. Here, at the very least, a strong case can be made that the 20th-century attempt to transform science into a bureaucratic enterprise that operates according to formal, automated procedures has largely been a failure. It has undoubtedly produced an endless stream of cargo-cult science that satisfies all these formal bureaucratic procedures but is nevertheless worthless—or worse. At the same time, it’s unclear how much valid science is coming out except from those scientists who have maintained a high degree of purely informal and private enthusiasm for discovering truth (and perhaps also those in highly practical applied fields where the cash value of innovations provides a stringent reality check).
This is how I read Moldbug: on many important questions, we can only honestly admit that we still have no way to find answers backed by scientific evidence in any meaningful sense of the term, and we have to grapple with less reliable forms of reasoning. Yet there is a widespread idea that if only the proper formal bureaucratic structures are established, we can get “science” to give us answers about whichever questions we find interesting, and that we should guide our lives and policies by the results of such “science.” It’s not hard to see how this situation can give birth to a diabolical network of perverse incentives, producing endless reams of cargo-cult scientific work published by prestigious outlets and venerated as “science” by the general public and the government.
The really scary prospect is that our system of government might lead us to complete disaster, guided by policy prescriptions coming from this perverted system which has, arguably, already become an integral part of it.
Okay, thanks, that tells me what I was looking for: clarification of what it is I’m trying to refute, and what substantive reasons I have to disagree.
So “Moldbug” is pointing out that the attempt to make science into an algorithm has produced a lot of stuff that’s worthless but adheres to the algorithm, and that we can see this using common sense, however much less accurate it might be.
The point I would make in response (and elaborate on in the upcoming article) is that this is no excuse not to look inside the black box that we call common sense and understand why it works and what about it could be improved, whereas the Moldbug view asks that we not do this. As E. T. Jaynes puts it in chapter 1 of Probability Theory: The Logic of Science, the question we should ask is: if we were going to build a robot that infers everything we should infer, what constraints would we place on it?
This exercise is not just some attempt to make robots “as good as humans”; rather, it reveals why that-which-we-call “common sense” works in the first place, and exposes more general principles of superior inference.
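For concreteness, here is a rough paraphrase of the constraints (desiderata) Jaynes imposes on his reasoning robot in chapter 1 (my summary from memory, so the wording is approximate):

(I) Degrees of plausibility are represented by real numbers.

(II) The robot’s reasoning must agree qualitatively with common sense (e.g., if new evidence makes A more plausible, the number assigned to A goes up).

(III) The robot must reason consistently: every valid way of arriving at a conclusion yields the same answer, all relevant evidence is always taken into account, and equivalent states of knowledge receive equivalent plausibilities.

The punchline is that any robot satisfying these desiderata must, after a suitable rescaling of its plausibilities, obey the product and sum rules of probability theory: p(AB|C) = p(A|BC) p(B|C) and p(A|C) + p(not-A|C) = 1. That is exactly the sense in which the black box gets opened: the rules behind “common sense” are derived from explicit requirements rather than merely imitated.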
In short, I claim that we can have Level 3 understanding of our own common sense. That is, contra Moldbug, we can go beyond merely being able to produce its output (Level 1): we can also know why we regard certain things as common sense and not others, and explain why it works, in which domains, and why and where it doesn’t.
This could lead to a good article.