I read Moldbug’s quote as saying: there is currently no system, algorithmic or bureaucratic, that is even remotely close to the power of human intuition, common sense, genius, etc. But there are people who implicitly claim they have such a system, and those people are dangerous liars.
Those quotes do seem to be in conflict, but if he is talking about people who claim they already have the blueprints for such a thing, it would make more sense to read him as saying "it is not possible, with our current level of knowledge, to construct a system of thought that improves on common sense". Is he really pushing back against people who say that it is possible to construct such a system (at some far-off point in the future), or against people who say they have (already) found such a system?
The Moldbug article that the quote comes from does not seem to be expressing anything much like either Silas’ view 1 or view 2. Moldbug clarifies in a comment that he is not making an argument against the possibility of AGI:
Think of it in terms of Searle’s Chinese Room gedankenexperiment. If you can build a true AI, you can build the Chinese Room. Since I do not follow Penrose and the neo-vitalists in believing that AI is in principle impossible, I think the Chinese Room can be built, although it would take a lot of people and be very slow.
My argument is that, not only is it the Room rather than the people in it that speaks Chinese, but (in my opinion) the algorithm that the Room executes will not be one that is globally intelligible to humans, in the way that a human can understand, say, how Windows XP works.
In other words, the human brain is not powerful enough to virtualize itself. It can reason, and with sufficient technology it can build algorithmic devices capable of artificial reason, and this implies that it can explain why these devices work. But it cannot upgrade itself to a superhuman level of reason by following the same algorithm itself.
That sounds like a justification for view 1. Remember, view 1 doesn't itself explain why mechanized reasoners will need continual tweaks to bring them in line with (more-)human reasoning; it remains agnostic on how exactly one justifies that claim.
(Of course, "Moldbug's" view still doesn't seem any more defensible, because it equates a machine virtualizing a human with a machine virtualizing the critical aspects of reasoning, but whatever.)