Off-topic riff on “Humans are Embedded Agents Too”
One class of insights that comes with Buddhist practice might be summarized as “determinism”: the universe does what it is going to do no matter what the illusory self predicts. Related to this is the larger Buddhist notion of “dependent origination”, the idea that everything (at least within the Hubble volume you find yourself in) is causally linked. This deep deterministic interdependence of the world is hard to appreciate from within our subjective experience, because the creation of ontology opens a gulf that cuts us off from direct interaction and leads us to confuse map and territory. Much of the path of practice is learning to unlearn this confusion, useful though it is: focusing on the map allows us to do much by making better predictions about the territory.
In AI alignment, many difficulties and confusions arise from failing to understand what is there termed embeddedness: the insight that everything happens in the world, not alongside it on the other side of a Cartesian veil. The trouble is that dualism is pernicious and intuitive to humans, even when we deny it, and unlearning it is not as simple as reasoning that the problem exists. Our thinking is so saturated with dualistic notions that we struggle to see the world any other way. I suspect that if we are to succeed at building safe AI, we’ll have to get much better at understanding and integrating the insight of embeddedness.