Vaniver has said most of the things I want to say here, but there are some additional things I want to say:
I think building models of the mind is really hard. I also notice that, in myself, building models of the mind feels scary in a way that often prevents me from thinking sanely in many important situations.
I think the reasons it feels scary are varied and complicated, but a lot of it boils down to this: a purely physically reductionist approach is often difficult when modeling minds; my standards for evidence feel calibrated for domains like physics, the other hard sciences, and mathematics; and it's often hard to communicate my reasons for believing minds work a certain way, since a substantial portion of that evidence is internal and hard to share.
But building explicit and broad models of the mind, as this sequence does, strikes me as essential to being effective in the world.
Overall, I think this sequence had a positive effect on me for two reasons:
It provided me with a set of concrete models of the mind that I have used a few times since then.
It rekindled a certain courage in me to allow myself to build this kind of model in the first place, and I hope it has done the same for others.
For me, at least, the second effect was larger than the first, though both are pretty substantial.
Yeah, that used to bother me too. When I learned about multi-agent theory and pondered it, I of course pointed my attention inward, trying to observe it.
Then agents arose and started talking with each other, arguing that they can't tell whether they're actually representatives of underlying structures and coalitions of the neural substrate, or just one fanciful part engaged in puppet fantasy play. Or what the boundaries between those two even are.
Or whether their apparent existence is valid evidence that multi-agent theories are any good. Well, I suppose I wasn't bothered; they were bothered :) I/They just really badly wanted a real-time brain scan to get context for my perceptions.
Eventually, I embraced the triplethink of operational certainty [minimizes internal conflict, preserves scarce neurotransmitters], meta-doubt, and meta-meta awareness that propositions expressible in conscious language can't capture the complexity of the neural substrate anyway.
All models are wrong, yet modeling is essential.