I think if you make sure that there is no adverse “set and setting”, the odds might be pretty good.
Two quotes from an article describing a study.
“Twenty-two out of the 36 volunteers described a so-called mystical experience, or one that included feelings of unity with all things, transcendence of time and space as well as deep and abiding joy.”
and
“In follow-up interviews conducted two months later 67 percent of the volunteers rated the psilocybin experience as among the most meaningful of their lives, comparing it to the birth of a first child or the death of a parent, and 79 percent reported that it had moderately or greatly increased their overall sense of well-being or life satisfaction. Independent interviews of family members, friends and co-workers confirmed small but significant positive changes in the subject’s behavior and more follow-ups are currently being conducted to determine if the effects persist a year later.”
This is from a study where drug-naive participants received psilocybin. I think it’s the same study I linked to earlier.
The phrase makes some kind of sense to me (although not in that particular case), so in case you’re not just trying to drop a geeky reference, let me try to explain what I make of this phrase.
Assume members of alien species X have two reasoning modes A and B which account for all their thinking. In my mind, I model these “modes” as logical calculi, but I guess you could translate this to two distinct points in the “space of possible minds”.
An Xian is, at any given time, in either mode A or mode B, but under certain conditions the mode can flip. In addition to these two reasoning modes, there is a heuristic faculty, which guides the application of specific rules in A and B. Some conclusions can be reached in mode A but not in B, and vice versa, so ideally an Xian would master switching between them.
Now here’s the problem: Switching between A and B can only happen if a certain sequence of seemingly nonsensical reasoning steps is taken. Since the sequence is nonsensical, an Xian with a finely tuned heuristic for either A or B will be unlikely to encounter it in the course of normal reasoning.
Now, say that Bloob, an accomplished Xian A-thinker, finds out how to do the switch to B and thus manages to prove a high-value theorem. Bloob will now have major problems communicating his results to his A-thinking peers. They will look at a couple of his proof steps, conclude that they are nonsensical, and label him a crackpot.
Bloob might instead decide (whatever that word means in my story) to target people who are familiar with the switch from A to B. He can show them one of the proof steps and hope that their heuristic “remembers” that it leads to something good down the road. Such a nonsensical proof step might be saying “Shut up and do the impossible”.
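For what it’s worth, the rules of the thought experiment can be sketched as a toy state machine. Everything concrete here (the switch sequence, the theorem names, the two-element trigger) is my own invented illustration, not part of the original story beyond its stated rules:

```python
# Toy model of the Xian thought experiment.
# A mind is in mode "A" or "B"; each mode can derive a different set of
# conclusions, and the only way to flip modes is to walk through a fixed
# sequence of steps that look nonsensical from inside the current mode.

# Hypothetical "nonsensical" trigger sequence (my invention for illustration).
SWITCH_SEQUENCE = ["shut up", "do the impossible"]

class XianMind:
    # Conclusions reachable in each mode; disjoint by construction,
    # so some theorems are simply unreachable without a switch.
    DERIVABLE = {
        "A": {"theorem_1", "theorem_2"},
        "B": {"theorem_3"},  # the high-value theorem only mode B can reach
    }

    def __init__(self):
        self.mode = "A"
        self.recent_steps = []

    def think(self, step):
        """Record a reasoning step; flip modes if the switch sequence occurs."""
        self.recent_steps.append(step)
        n = len(SWITCH_SEQUENCE)
        if self.recent_steps[-n:] == SWITCH_SEQUENCE:
            self.mode = "B" if self.mode == "A" else "A"
            self.recent_steps = []

    def can_prove(self, theorem):
        return theorem in self.DERIVABLE[self.mode]

bloob = XianMind()
assert not bloob.can_prove("theorem_3")  # stuck in mode A
bloob.think("shut up")
bloob.think("do the impossible")         # the "nonsensical" sequence
assert bloob.can_prove("theorem_3")      # now in mode B
```

The point the toy makes concrete: a heuristic tuned to be good at mode A will never emit the switch sequence, because each step in it is useless *within* A; the sequence only pays off by changing which rules apply.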
So, I suspect that humans do have something like those reasoning modes. There are not necessarily just two, and it might not be appropriate to call all of them reasoning, but the main point is that thinking a thought might change the rules of thinking.
I think this idea is very close to the whole area of NLP, hypnosis, and some new-age ideas; e.g., Carlos Castaneda explicitly wants to “teach” you how to shift your mind-state around in the space of possible minds (which, incidentally, is egg-shaped). Not that any of these have ever done anything for me, but then I haven’t tried following them either.
From self-experimentation (sorry), Buddhist meditation seems to be a kind of thinking that can change the rules of thinking, and I think there is some evidence that it actually changes the brain structurally.
Given the possibility of certain thoughts changing the rules of thinking, what is the rational thing to do? If there’s a good answer to this I’m grateful for a link.