The difficulty for me is that this technique is at war with having an accurate self-concept, and may conflict with good epistemic hygiene generally.
Generally, I see no conflict here, assuming that the thing you’re priming yourself with is not something that might displace your core rationalist foundations.
If you’re riding a horse, it is epistemically rational to incorporate knowledge about the horse into your model of the world (to be aware of how it will react to a pack of wolves or an attractive mare during mating season), and it is instrumentally rational to be able to steer the horse where you want it to carry you.
Same with your mind—if you’re riding an evolutionary kludge, it is epistemically rational to incorporate knowledge about the kludge into your map of reality, and it is instrumentally rational to be able to steer it where you want it to go.
What matters is where you draw the line between the agent and the environment.