Okay, thanks, that tells me what I was looking for: clarification of what it is I’m trying to refute, and what substantive reasons I have to disagree.
So “Moldbug” is pointing out that the attempt to make science into an algorithm has produced a lot of stuff that’s worthless yet adheres to the algorithm, and that common sense can detect this worthlessness, even if common sense is less accurate than the algorithm purports to be.
The point I would make in response (and will elaborate on in the upcoming article) is that this is no excuse not to look inside the black box we call common sense, understand why it works, and figure out what about it could be improved; the Moldbug view, by contrast, asks that we not do this. As E. T. Jaynes argues in chapter 1 of Probability Theory: The Logic of Science, the question we should ask is: if we were going to build a robot that makes every inference we should make, what constraints would we place on it?
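Jaynes’s answer, for reference, is that his desiderata for the robot (consistency, real-valued plausibilities, qualitative agreement with common sense) force it to reason by the product and sum rules of probability, i.e. by Bayes’ theorem. A minimal sketch of that resulting inference rule (the function name and the example numbers are mine, purely illustrative):

```python
# Jaynes's "robot": the desiderata in Probability Theory: The Logic of
# Science force its plausibility updates to follow Bayes' theorem.
# Minimal sketch: update belief in a hypothesis H given evidence E.

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) via Bayes' theorem, given P(H), P(E|H), P(E|~H)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Illustration: a weakly held hunch (P(H) = 0.1) meets evidence that is
# eight times likelier under H than under its negation.
posterior = bayes_update(0.1, 0.8, 0.1)
print(round(posterior, 3))  # prints 0.471
```

The point of the exercise is not the formula itself but that it was *derived* from constraints any reasonable inference should satisfy, which is exactly the “look inside the black box” move described above.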
This exercise is not just some attempt to make robots “as good as humans”; rather, it reveals why that-which-we-call “common sense” works in the first place, and exposes more general principles of superior inference.
In short, I claim that we can have Level 3 understanding of our own common sense: that, contra Moldbug, we can go beyond merely being able to produce its output (Level 1) to knowing why we regard certain things as common sense and not others, and to explaining why it works, in what domains, and where and why it fails.
This could lead to a good article.