I’m curious if you can summarize the relevance to embedded agency. This many hours of listening seems like quite a commitment, even at 2x. Is it really worth it? (Sometimes I have a commute or other time when it’s great to have something to listen to, but this isn’t currently true.)
Probably the main idea Vaniver is talking about here is Relevance Realization, which John starts discussing in episode 28 (he stays on the topic for at least a few episodes; see the playlist). But if that also seems like too much, you can read his paper "Relevance Realization and the Emerging Framework in Cognitive Science". It might not be quite as in-depth, but it covers the important stuff.
Of course, I might be wrong about which idea Vaniver was talking about :)
I’m curious if you can summarize the relevance to embedded agency
Only sort of. Yoav correctly points to Vervaeke’s new contribution, but I think I’m more impressed with his perspective than with his hypothesis?
That is, he thinks the core thing underlying wisdom is relevance realization, which I’m going to simply describe as the ability to identify what bits of the world (physical and logical) influence each other, in a way which drives how you should look at the world and what actions you should take. [If you think about AlphaGo, ‘relevance realization’ is like using the value network to drive the MCTS, but for a full agent, it bears more deeply on more aspects of cognition.]
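To make that analogy a bit more concrete, here's a toy sketch (a hypothetical value function over a made-up state space, not AlphaGo's actual implementation) of the pattern I mean: a cheap value estimate decides which branches of a search tree are "relevant" enough to spend compute on, so most of the space is never examined at all.

```python
# Toy illustration: a value estimate steering tree search.
# Everything here (the state space, value_estimate, the constants) is invented
# for illustration; the point is only the structure of value-guided search.

import math
import random


def value_estimate(state):
    """Stand-in for a value network: scores a state without searching below it."""
    return -abs(state - 7) + random.gauss(0, 0.1)  # toy: states near 7 look promising


class Node:
    def __init__(self, state):
        self.state = state
        self.children = []
        self.visits = 0
        self.total_value = 0.0


def select_child(node, c=1.0):
    """UCT-style selection: the value estimate biases which branch gets explored."""
    def score(child):
        mean = (child.total_value / child.visits) if child.visits else value_estimate(child.state)
        explore = c * math.sqrt(math.log(node.visits + 1) / (child.visits + 1))
        return mean + explore
    return max(node.children, key=score)


def search(root_state, iterations=200):
    root = Node(root_state)
    for _ in range(iterations):
        node, path = root, [root]
        # Walk down, always toward the child the value estimate deems most relevant.
        while node.children:
            node = select_child(node)
            path.append(node)
        # Expand a few successors; most possible states are never instantiated.
        for delta in (-1, +1, +2):
            node.children.append(Node(node.state + delta))
        leaf_value = value_estimate(node.state)
        for n in path:  # backpropagate the estimate up the chosen path
            n.visits += 1
            n.total_value += leaf_value
    return max(root.children, key=lambda child: child.visits).state


if __name__ == "__main__":
    print("most-visited first move from state 0:", search(0))
```

The analogy being: the search never evaluates most of the tree, because the value estimate keeps pulling attention toward the branches that matter, which is the narrow, game-playing version of what relevance realization would have to do across all of an agent's cognition.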
But this feels like one step: yes, you have determined that wisdom is about realizing relevance, but how do you do that? What does it look like to do that successfully, or poorly?
Here, the history of human thought becomes much more important. “The human condition”, and the perennial problems, and all that are basically the problems of embedded agency (in the context of living in a civilization, at least). Humans have built up significant practices and institutions around dealing with those problems. Here I’m more optimistic about, say, you or Scott hearing Vervaeke describe the problems and previous solutions and drawing your own connections to Embedded Agency and imagining your own solutions, more than I am excited about you just tasting his conclusions and deciding whether to accept or reject them.
Like, saying “instead of building clever robots, we need to build wise robots” doesn’t make much progress. Saying “an aspect of human wisdom is this sort of metacognition that searches for insights that determine how one is misframing reality” leads to “well, can we formalize that sort of metacognition?”.
[In particular, a guess I have about something that will be generative is grappling with the way humans have felt that wisdom was a developmental trajectory—a climbing up / climbing towards / going deeper—more than a static object, or a state that one reaches and then is complete. Like, I notice the more I think about human psychological developmental stages, the more I view particular formalizations of how to think and be in the world as “depicting a particular stage” instead of “depicting how cognition has to be.”]