Non-epistemic thinking. An agent might rearrange itself to be suitable for different tasks in a way that’s not easy to understand as following rules that produce accurate beliefs. Again, evolution may be an example: although segments of the genome can sometimes be taken to correspond to something (e.g. a niche or element of the environment), they don’t seem to constitute propositions (besides a monotone “this code-fragment is useful in this context”), and it’s not obvious to me that you’d want to say that an agent has beliefs constituted by something other than propositions. It might be wrong to call this “thinking”, but it’s at least rearrangement towards suitability, and in the case of evolution can be very strong, strong enough to matter. Of course, the laws of information theory still apply; the point is that this sort of mind or agent may not be well-interpretable as having beliefs in the sense of propositions, which is a main meaning of the everyday word “belief”.
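As a toy illustration of “rearrangement towards suitability” without propositional content, here is a minimal selection loop in Python. This is my own sketch, not anything from the original: the bit-string genome, the fitness target, and all parameters are made up. The point is only that fragments persist because they happen to be useful in this context, while nothing in the representation asserts anything about the world.

```python
import random

# Toy "genome": a bit string. The target stands in for "what the niche rewards";
# it is an assumption of this sketch, not a claim about real evolution.
TARGET_BEHAVIOR = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    # A fragment is "useful in this context" when it matches the niche.
    return sum(g == t for g, t in zip(genome, TARGET_BEHAVIOR))

def mutate(genome, rate=0.1):
    # Blind variation: flip bits at random, with no model of why.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(best, fitness(best))
```

The loop rearranges the population toward suitability for the task, yet at no point does any part of it encode a belief beyond the monotone “this code-fragment is useful in this context.”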
The classical understanding of negotiation often recommends “rationally irrational” tactics, in which an agent handicaps its own capabilities in order to extract concessions from a counterparty. For example, in the deadly game of chicken, if I visibly throw away my steering wheel, oncoming drivers are forced to swerve for me in order to avoid a crash; but if the oncoming drivers have already blindfolded themselves, they can’t see me throw away my steering wheel, and I am the one forced to swerve for them.
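A minimal sketch of that commitment logic (the payoff numbers are my own illustrative assumptions, not from the text): whoever’s action is credibly fixed first wins, because the other party’s best response to a fixed “straight” is to swerve.

```python
ACTIONS = ("SWERVE", "STRAIGHT")

# Payoffs as (me, them); crashing is much worse than conceding.
PAYOFF = {
    ("SWERVE", "SWERVE"):     (0, 0),
    ("SWERVE", "STRAIGHT"):   (-1, 1),
    ("STRAIGHT", "SWERVE"):   (1, -1),
    ("STRAIGHT", "STRAIGHT"): (-10, -10),
}

def my_best_response(their_fixed_action):
    """My best action when the other driver's action is already fixed."""
    return max(ACTIONS, key=lambda a: PAYOFF[(a, their_fixed_action)][0])

def their_best_response(my_fixed_action):
    """Their best action when my action is fixed and visible to them."""
    return max(ACTIONS, key=lambda a: PAYOFF[(my_fixed_action, a)][1])

# Case 1: I visibly commit to STRAIGHT (throw away my wheel); they observe
# this, and their best response is to swerve.
print(their_best_response("STRAIGHT"))   # -> SWERVE

# Case 2: they blindfolded themselves first, so they cannot condition on my
# commitment and will simply drive STRAIGHT; now my best response is to swerve.
print(my_best_response("STRAIGHT"))      # -> SWERVE
```

The blindfold works as a counter-commitment precisely because it destroys the information channel my own commitment would have exploited.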
Also, skill at self-preservation could have been continuously optimized/selected for at all stages of the evolution of intelligence, including early stages. This includes the neolithic period, when language existed but writing did not, and there was extremely limited awareness of how to succeed at thinking, or even of what thinking is.
It seems plausible that the reason [murphyjitsu] works for many people (where simply asking “what could go wrong?” fails) is that, in our evolutionary history, there was a strong selection pressure in favor of individuals with a robust excuse-generating mechanism. When you’re standing in front of the chief, and he’s looming over you with a stone axe and demanding that you explain yourself, you’re much more likely to survive if your brain is good at constructing a believable narrative in which it’s not your fault.
It wouldn’t be surprising if non-epistemic thinking were already substantially evolved and accessible/retrievable in humans, in which case research into distant cognitive realms is substantially possible with currently available resources.
Oh yeah, that’s (potentially) a great example. At least in the human regime, it does seem like you can get sets of people relating to each other so that they’re very deeply into conflict frames. I wonder if that can extend to arbitrarily capable / intelligent agents.
I think a good example of this is minds that optimize for competitiveness in decision-theoretic situations, for example negotiation and persuasion.