The Moldbug article that the quote comes from does not seem to be expressing anything much like either Silas’ view 1 or view 2. Moldbug clarifies in a comment that he is not making an argument against the possibility of AGI:
Think of it in terms of Searle’s Chinese Room gedankenexperiment. If you can build a true AI, you can build the Chinese Room. Since I do not follow Penrose and the neo-vitalists in believing that AI is in principle impossible, I think the Chinese Room can be built, although it would take a lot of people and be very slow.
My argument is that, not only is it the Room rather than the people in it that speaks Chinese, but (in my opinion) the algorithm that the Room executes will not be one that is globally intelligible to humans, in the way that a human can understand, say, how Windows XP works.
In other words, the human brain is not powerful enough to virtualize itself. It can reason, and with sufficient technology it can build algorithmic devices capable of artificial reason, and this implies that it can explain why these devices work. But it cannot upgrade itself to a superhuman level of reason by following the same algorithm itself.
That sounds like a justification for view 1. Remember, view 1 doesn’t provide a justification for why there will need to be continual tweaks to mechanized reasoners to bring them in line with (more-)human reasoning, so it remains agnostic on how exactly one justifies this view.
(Of course, “Moldbug’s” view still doesn’t seem any more defensible, because it equates a machine virtualizing a human with a machine virtualizing the critical aspects of reasoning, but whatever.)