I’ve watched some of Vervaeke’s lectures, but they just seem to go on and on without ever reaching whatever his goal is. Likewise Jordan Peterson. Having just read through Valentine’s document (mainly the lecture summaries, rather than the detailed notes), I am still disappointed. Vervaeke simply breaks off at the end, just as it seemed it might get interesting. The document goes up to lecture 26, the last of which suggests there are more to come. I look forward to summaries of them, but more with hope than with expectation.
Yeah, I think you’ll appreciate the summaries we end up with of the second half of the series.
I’ve watched some of Vervaeke’s lectures, but they just seem to go on and on without ever reaching whatever his goal is.
I think this is both fair and unfair, and am trying to figure out how to articulate my sense of it.
I think there’s a way to consider thinking that views it as just being about truth/exactness/etc., and turning everything into propositional knowledge. I think there’s another way to consider thinking that views it as being a delicate balancing act between different layers of knowledge (propositional, procedural, perspectival, and participatory being the four that Vervaeke talks about frequently). I have a suspicion that a lot of his goal is transformative change in the audience, often by something like moving from thinking mostly about propositions to thinking in a balanced way, but from the propositional perspective this will end up seeming empty, or full of lots of things that don’t compile to propositions, or only do so vacuously.
“So what was his point? What does it boil down to?” “Well… boiling it isn’t a good mode of preparation, actually; it kills the nutritional value because the heat degrades the vitamin C.”
Talk of “his goal” reminds me of a line from SSC’s review of 12 Rules for Life: “But I actually acted as a slightly better person during the week or so I read Jordan Peterson’s book.” [Noting that Vervaeke isn’t trying to be a prophet, or to offer his own solution; I think he’s trying to do science on wisdom, and help people realize the situation that they / humanity are in.]
But anyway, let’s jump ahead a lot and talk about my main goal (ignoring, for a moment, the many secondary goals).
There’s a thing that LW-style rationalism holds near its core, which is “rationalists should win”. That is, the procedural commitments to rationality are there because those commitments pay off (the point of believing things is that they pay rent in anticipated experiences, and so on). The ‘art of refining human rationality’ is about developing more psychotechnologies that lead to more winning. It feels to me like there’s a big hole in our understanding that’s at least labeled in this series: the problem of ‘relevance realization’.
As an example, there’s a thing that LW-style Bayesianism does, which says “well, induction is solved in principle by Solomonoff Induction, we just need to make an approximator to that.” But Solomonoff Induction is uncomputable: it weighs every possible program that could have generated the data. Vervaeke identifies this as the problem of combinatorial explosion: the underlying task is impossible, and so you need an impossible machine in order to accomplish it. [He doesn’t address SI directly, but if he did, I think he would describe it as “absurd”, meaning detached from reality, which it is!]
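To make the combinatorial-explosion point concrete, here’s a toy Python sketch of my own (not something from the lectures or the Sequences). It doesn’t implement Solomonoff Induction; it only counts the candidate programs a brute-force enumerator would have to weigh, and that count alone outruns physics long before you try to run any of them.

```python
# Toy illustration (my own, hedged): why "just build an approximator to
# Solomonoff Induction" runs into combinatorial explosion. Solomonoff
# Induction weighs every program consistent with the observed data; here
# we don't even run programs, we merely count the binary programs a
# brute-force enumerator would have to consider up to a given length.

def num_binary_programs(max_len: int) -> int:
    """Count all bitstrings of length 1..max_len (candidate programs)."""
    return sum(2 ** length for length in range(1, max_len + 1))

if __name__ == "__main__":
    for max_len in (10, 50, 100, 300):
        count = num_binary_programs(max_len)
        print(f"programs up to {max_len:>3} bits: about 10^{len(str(count)) - 1}")
    # By ~300 bits the hypothesis count exceeds ~10^80 (a rough count of
    # atoms in the observable universe), and that's before the undecidable
    # part of actually running each candidate program.
```

The point isn’t the exact numbers; it’s that the exhaustive version of the task is the impossible machine Vervaeke is gesturing at, which is why something like relevance realization has to be doing the real work.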
But actual humans somehow have a sense of what considerations are relevant in any particular case, and this has detail and internal structure to it, can be more or less appropriate, and thus should be a branch of psychoengineering. To the extent that AI alignment is about ‘developing machine wisdom’ to best use the machine intelligence, a mechanistic theory of developing human wisdom seems potentially fruitful as an area of study.