I’m curious if you can summarize the relevance to embedded agency
Only sort of. Yoav correctly points to Vervaeke’s new contribution, but I think I’m more impressed with his perspective than with his hypothesis?
That is, he thinks the core thing underlying wisdom is relevance realization, which I’m going to simply describe as the ability to identify what bits of the world (physical and logical) influence each other, in a way which drives how you should look at the world and what actions you should take. [If you think about AlphaGo, ‘relevance realization’ is like using the value network to drive the MCTS, but for a full agent, it bears more deeply on more aspects of cognition.]
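To make the analogy concrete, here is a toy one-ply MCTS sketch in Python (hypothetical game and stub value network, not AlphaGo's actual code): the value network's leaf evaluations steer which branches accumulate visits, i.e. the search's sense of what is "relevant" to look at.

```python
import math

def value_net(state):
    # Stub for a learned value network: scores a state in [0, 1].
    # (Hypothetical; a real net would be trained on game outcomes.)
    return (state * 37 % 100) / 100.0

def puct(q, n_child, n_parent, prior, c=1.5):
    # AlphaGo-style selection score: current value estimate plus an
    # exploration bonus shaped by the prior and visit counts.
    return q + c * prior * math.sqrt(n_parent) / (1 + n_child)

def search(root_children, iterations=200):
    # One-ply MCTS sketch: repeatedly pick the child with the best
    # selection score, evaluate it with the value net, and update
    # its statistics. High-value branches soak up the visits.
    n = {c: 0 for c in root_children}
    w = {c: 0.0 for c in root_children}
    prior = 1.0 / len(root_children)  # uniform priors for simplicity
    for t in range(1, iterations + 1):
        best = max(root_children,
                   key=lambda c: puct(w[c] / n[c] if n[c] else 0.0,
                                      n[c], t, prior))
        w[best] += value_net(best)  # leaf evaluation by the value net
        n[best] += 1
    # The most-visited child is the branch the search deemed most relevant.
    return max(root_children, key=lambda c: n[c])
```

The point of the sketch is just that the value estimate is what makes the search tractable: without it, every branch is equally "relevant" and the tree explodes.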
But this feels like one step: yes, you have determined that wisdom is about realizing relevance, but how do you do that? What does it look like to do that successfully, or poorly?
Here, the history of human thought becomes much more important. “The human condition”, and the perennial problems, and all that are basically the problems of embedded agency (in the context of living in a civilization, at least). Humans have built up significant practices and institutions around dealing with those problems. Here I’m more optimistic about, say, you or Scott hearing Vervaeke describe the problems and previous solutions and drawing your own connections to Embedded Agency and imagining your own solutions, more than I am excited about you just tasting his conclusions and deciding whether to accept or reject them.
Like, saying “instead of building clever robots, we need to build wise robots” doesn’t make much progress. Saying “an aspect of human wisdom is this sort of metacognition that searches for insights that determine how one is misframing reality” leads to “well, can we formalize that sort of metacognition?”.
[In particular, a guess I have about something that will be generative is grappling with the way humans have felt that wisdom was a developmental trajectory—a climbing up / climbing towards / going deeper—more than a static object, or a state that one reaches and then is complete. Like, I notice the more I think about human psychological developmental stages, the more I view particular formalizations of how to think and be in the world as “depicting a particular stage” instead of “depicting how cognition has to be.”]