I agree with the existence of the failure mode and the need to model others in order to win, and also in order to be a kind person who increases the hedons in the world.
But isn’t it the case that if readers notice they’re good at “deliberate thinking and can reckon all sorts of plans that should work in theory to get them what they want, but which fall apart when they have to interact with other humans”, they could add a <deliberately think about how to model other people> step to their “truth” search and thereby reach your desired end point without using the tool you’re advocating for?
In theory, yes. In practice it tends not to work, because thinking through how other people think, deliberately and in enough detail to model them accurately, takes an enormous amount of effort. Most people who model others well seem to do it via implicit models that produce predictions quickly.
I think the point is that people are systems too complex to model well in a deliberate, system-2 sort of way. Even if you eventually succeed in modeling them, you’ll likely get your answer about what to do way too late to be useful. The limitations of our brains force us to do something else (heck, the limitations of physics seem to force this, since idealized Solomonoff inductors run into similar computability problems, cf. AIXI).
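To gesture at why the deliberate route blows up, here’s a toy sketch (my own illustration, not anything from the post): explicitly enumerating another person’s possible reactions a few steps ahead grows exponentially with lookahead depth, while a cached “gut” prediction answers in one lookup. The reaction list, scoring rule, and cache contents are all made up for the example.

```python
import itertools
import time

# Hypothetical set of reactions the other person might have at each step.
REACTIONS = ["agree", "object", "change_topic", "go_quiet"]

def deliberate_model(depth):
    """Explicitly enumerate every reaction sequence `depth` steps ahead (4^depth states)."""
    best, count = None, 0
    for seq in itertools.product(REACTIONS, repeat=depth):
        count += 1
        score = sum(1 for r in seq if r == "agree")  # toy scoring rule
        if best is None or score > best[0]:
            best = (score, seq)
    return best, count

def implicit_model(situation):
    """A cached, gut-level prediction: one lookup, no search."""
    cache = {"criticism": "object", "praise": "agree"}  # made-up priors
    return cache.get(situation, "go_quiet")

if __name__ == "__main__":
    for depth in range(2, 10):
        t0 = time.perf_counter()
        _, states = deliberate_model(depth)
        print(f"depth {depth}: {states} states explored in {time.perf_counter() - t0:.3f}s")
    print("implicit answer:", implicit_model("criticism"))
```

Even in this tiny toy, the deliberate search quadruples with each extra step of lookahead, which is the “answer arrives too late to be useful” problem in miniature.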
This balance between deliberate and implicit modelling can be radically off-kilter if an agent only has access to the deliberate kind. “Read books to understand how to deal with humans passably” is a strategy seen in the wild among those who don’t instinctively build strong implicit models.