I have an internal monologue. It’s a bit like a narrator in my head, narrating my thoughts.
I think—and this is highly speculative on my part—that it’s a sign of thinking mainly with the part of the brain that handles language. Whenever I take one of those questionnaires designed to tell whether I use mainly the left or right side of my brain, I land very heavily on the left side—analytical, linguistic, mathematical. I can use the other side if I want to; but I find it surprisingly easy to become almost a caricature of a left-brain thinker.
My internal monologue quite probably restricts me to (mainly) ideas that are easily expressed in English. Up until now, I could see this as a weakness, but I couldn’t see any easy way around it. (One advantage of the internal monologue, on the other hand, is that I usually find it easy to speak my thoughts out loud, because they’re already in word form.)
But now, you tell me that you don’t seem to have an internal monologue. Does this mean that you can easily think of things that are not easily expressed in English?
Well.. I can easily think of things I subsequently have serious trouble expressing in any language, sure. Occasionally through reflection via visuals (or kinesthetics, or..), but more often not using such modalities at all.
(See sibling post)
Richard Feynman tells the story of how he learned that thinking isn’t only internal monologue.
Okay, visual I can understand. I don’t use it often, but I do use it on occasion. Kinesthetic, I use even less often, but again I can more-or-less imagine how that works. (Incidentally, I also have a lot of trouble catching a thrown object. This may be related.)
But this ‘no modalities at all’… this intrigues me. How does it work?
All I know is some ways in which it doesn’t work.
I can’t speak for Baughn, but as for myself, sometimes it feels like I know ahead of time what I’m going to say as my inner voice, and sometimes this results in me not actually bothering to say it.
I went on vacation during this discussion, and completely lost track of it in the process—oops. It’s an interesting question, though. Let me try to answer.
First off, using a sensory modality for the purpose of thinking. That’s something I do, sure enough; for instance, right now I’m “hearing” what I’m saying at the same time as I’m writing it. Occasionally, if I’m unsure of how to phrase something, I’ll quickly loop through a few options; more often, I’ll do that without bothering with the “hearing” part.
When thinking about physical objects, sometimes I’ll imagine them visually. Sometimes I won’t bother.
For planning, etc. I never bother—there’s no modality that seems useful.
That’s not to say I don’t have an experience of thinking. I’m going to explain this in terms of a model of thought[1] that’s been handy for me (because it seems to fit me internally, and also because it’s handy for models in fiction-writing where I’m modifying human minds), but keep in mind that there is a very good chance it’s completely wrong. You might still be able to translate it to something that makes sense to you.
..basically, the workspace model of consciousness combined with a semi-modular brain architecture. That is to say, the human mind consists of a large number of semi-independent modules, and consciousness is what happens when those modules are all talking to each other using a central workspace. They can also go off and do their own thing, in which case they’re subconscious.
Now, some of the major modules here are sensory. For good reason; being aware of your environment is important. It’s not terribly surprising, then, that the ability to loop information back—feeding internal data into the sensory modules, using their (massive) computational power to massage it—is useful, though it also involves what would be hallucinations if I wasn’t fully aware it’s not real. It’s sufficiently useful that, well, it seems like a lot of people don’t notice there’s anything else going on.
Non-sensory modes of thought, now… sensory modes are frequently useful, but not always. When they aren’t, they’re noise. In that case—and I didn’t quite realise that was going on until now—I’m not just not hallucinating an internal monologue, but in fact entirely disconnecting my senses from my conscious experience. It’s a bit hard to tell, since they’re naturally right there if I check, but I can be extremely easy to surprise at times.
Instead, I have an experience of… everything else. All the modules normally involved with thinking, except the sensory ones. Well, probably not all of them at once, but missing the sensory modules appears to be a sufficiently large outlier that the normal churn becomes insignificant...
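If it helps, here’s a toy Python sketch of that picture. It’s strictly a cartoon: the module names, the broadcast mechanism, and the “attached” flag are all made up to make the model concrete, not a claim about how brains actually work.

```python
# Toy sketch of the "central workspace + semi-independent modules" picture above.
# Nothing here is meant literally; the names and mechanism are invented.

class Module:
    def __init__(self, name):
        self.name = name
        self.attached = True  # sensory modules can drop out of the loop entirely

    def react(self, item):
        # Each attached module contributes its own commentary on whatever
        # is currently in the workspace.
        return f"{self.name}: reaction to {item!r}"

class Workspace:
    def __init__(self, modules):
        self.modules = modules

    def broadcast(self, item):
        # "Consciousness", in this cartoon, is the cross-talk: every attached
        # module sees the item, and its reactions become visible in turn.
        return [m.react(item) for m in self.modules if m.attached]

modules = [Module("language"), Module("vision"), Module("planning"), Module("memory")]
ws = Workspace(modules)

# When a sensory mode is just noise, disconnect it from conscious experience.
next(m for m in modules if m.name == "vision").attached = False
for reaction in ws.broadcast("plan: reply to this comment"):
    print(reaction)
```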
Did that help? Hm. Maybe if you think about said “churn”; it’s not like you always use every possible method of thought you’re capable of, at the same time. I’m just including sensory modalities in the list of hot-swappable ones?
...
This is hard.
One more example, I suppose. I mentioned that, while I was writing this, I hallucinated my voice reading it; this appears to be necessary for actually writing. Not for deciding on the meaning I’m trying to get across, but in order to serialise it as English. Not quite sure what’s going on there, since I don’t seem to be doing it ahead of time—I’m doing it word by word.
1: https://docs.google.com/document/d/1yArXzSQUqkSr_eBd6JhIECdUKQoWyUaPHh_qz7S9n54/edit#heading=h.ug167zx6z472 may or may not be useful in figuring out what I’m talking about; it’s a somewhat more long-winded use of the model. It also has enormous macroplot spoilers for the Death Game SAO fanfic, which.. you probably don’t care about.
Okay, let me summarise your statement so as to ensure that I understand it correctly.
In short, you have a number of internal functional modules in the brain; each module has a speciality. There will be, for example, a module for sight; a module for hearing; a module for language, and so on. Your thoughts consist—almost entirely—of these modules exchanging information in some sort of central space.
The modules are, in effect, having a chat.
Now, you can swap these modules out quite a bit. When you’re planning what to type, for example, it seems you run that through your ‘hearing’ module, in order to check that the word choice is correct; you know that this is not something which you are actually hearing, and thus are in no danger of treating it as a hallucination, but as a side effect of this, your hearing module isn’t processing the actual input from your ears, and you may be missing something that someone else is saying to you. (I imagine that sufficiently loud or out-of-place noises are still wired directly to your survival subsystem, though, and will get your attention as normal.)
But you don’t have to use your hearing module to think with. Or your sight module. You have other modules which can do the thinking, even when those modules have nothing to do. When your sensory modules have nothing to add, you can and do shut them out of the main circuit, ignoring any non-urgent input from those modules.
Your modules communicate by some means which are somehow independent of language, and your thoughts must be translated through your hearing module (which seems to have your language module buried inside it) in order to be described in English.
This is very different to how I think. I have one major module—the language module (not the hearing module, there’s no audio component to this, just a direct language model) which does almost all my thinking. Other modules can be used, but it’s like an occasional illustration in a book—very much not the main medium. (And also like an illustration in that it’s usually visual, though not necessarily limited to two dimensions).
When it comes to my internal thoughts, all modules that are not my language model are unimportant in comparison. I suspect that some modules may be so neglected as to be near nonexistent, and I wonder what those modules could be.
My sensory modules appear to be input-only. I can ignore them, but I can’t seem to consciously run other information into them. (I still dream, which I imagine indicates that I can subconsciously run other information through my sensory modules.)
This leaves me with three questions:
Aside from your sensory modules, what other module(s) do you have?
Am I correct in thinking that you still require at least one module in order to think (but that can be any one module)?
When your modules share information, what form does that information take?
I imagine these will be difficult to translate to language, but I am very curious as to what your answers will be.
Your analysis is pretty much spot on.
It’s interesting to me that you say your hearing and language modules are independent. I mean, it’s reasonably obvious that this has to be possible—deaf people do have language—but it’s absolutely impossible for me to separate the two, at least in one direction; I can’t deal with language without ‘hearing’ it.
And I just checked; it doesn’t appear I can multitask and examine non-language sounds while I’m using language, either. For comparison, I absolutely can (re)use e.g. visual modules while I’m writing this, although it gets really messy if I try to do so while remaining conscious of what they’re doing—that’s not actually required, though.
Aside from your sensory modules, what other module(s) do you have?
Well… my introspection isn’t really good enough to tell, and it’s really more of a zeroth-approximation model than something I have a lot of confidence in. That said, I suspect the question doesn’t have an answer even in principle; that there’s no clear border between two adjacent subsystems, so it depends on where you want to draw the line. It doesn’t help that some elements of my thinking almost certainly only exist as a property of the communication between other systems, not as physical pieces of meat in themselves, and I can’t really tell which is which.
Am I correct in thinking that you still require at least one module in order to think (but that can be any one module)?
I think if it was just one, I wouldn’t really be conscious of it. But that’s not what you asked, so the answer is “Probably yes”.
When your modules share information, what form does that information take?
I’m very tempted to say “conscious experience”, here, but I have no real basis for that other than a hunch. I’m not sure I can give you a better answer, though. Feelings, visual input (or “hallucinations”), predictions of how people or physical systems will behave, plans—not embedded in any kind of visualization, just raw plans—etc. etc. And before you ask what that’s like, it’s a bit like asking what a Python dictionary feels like.. though emotions aren’t much involved, at that level; those are separate.
The one common theme is that there’s always at least one meta-level of thought associated. Not just “Here’s a plan”, but “Here’s a plan, and oh by the way, here’s what everyone else in the tightly knit community you like to call a brain thinks of the plan. In particular, “memory” here just pattern-matched it to something you read in a novel, which didn’t work, but then again a different segment is pointing out that fictional evidence is fictional.”
...without the words, of course.
So the various ideas get bounced back and forth between various segments of my mind, and that bouncing is what I’m aware of. Never the base idea, but all the thinking about the idea… well, it wouldn’t really make sense to be “aware of the base idea” if I wasn’t thinking about it.
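Since a Python dictionary already came up: here’s a made-up rendering of what one such workspace item might look like, meta-commentary included. Every key and every line of commentary is invented purely for illustration.

```python
# A made-up example of one item in the workspace, with the ever-present
# meta-level attached. None of the keys or commentary are meant literally.

workspace_item = {
    "content": "plan: return the library books, borrow new ones",
    "modality": None,  # no sensory rendering attached to this one
    "commentary": {
        "memory": "pattern-matched to something read in a novel (didn't work)",
        "critic": "fictional evidence is fictional",
        "planning": "feasible this afternoon, before the library closes",
    },
}

# On this account, what gets experienced is the commentary bouncing around,
# never the bare "content" entry by itself.
for module, note in workspace_item["commentary"].items():
    print(f"{module}: {note}")
```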
Sight is something else again. It certainly feels like I’m aware of my entire visual field, but I’m at least half convinced that’s an illusion. I’m in a prime position to fool myself about that.
It’s interesting to me that you say your hearing and language modules are independent.
This may be related to the fact that I learnt to read at a very young age; when I read, I run my visual input through my language module; the visual module pre-processes the input to extract the words, which are then run through the language module directly.
At least, that’s what I think is happening.
Running the language module without the hearing module a lot, and from a young age, probably helped quite a bit to separate the two.
Well… my introspection isn’t really good enough to tell, and it’s really more of a zeroth-approximation model than something I have a lot of confidence in.
Hmph. Disappointing, but thanks for answering the question.
I think I was hoping for more clearly defined modules than appears to be the case. Still, what’s there is there.
Now, this is interesting. I’m really going to have to go and think about this for a while. You have a kind of continual meta-commentary in your mind, thinking about what you’re thinking, cross-referencing with other stuff… that seems like a useful talent to have.
It also seems that, by concentrating more on the individual modules and less on the inter-module communication, I pretty much entirely missed where most of your thinking happens.
One question comes to mind: you mention ‘raw plans’. You’ve correctly predicted my obvious question—what raw plans feel like—but I still don’t really have much of a sense of it, so I’d like to poke at that a bit if you don’t mind.
So: how are these raw plans organised?
Let us say, for example, that you need to plan… oh, say, to travel to a library, return one set of books, and take out another. Would the plan be a series of steps arranged in order of completion, or a set of subgoals that need to be accomplished in order (subgoal one: find the car keys); or would the plan be simply a label saying ‘LIBRARY PLAN’ that connects to the memory of the last time you went on a similar errand?
As for me, I have a few different ways that I can formulate plans. For a routine errand, my plan consists of the goal (e.g. “I need to go and buy bread”) and a number of habits (which, now that I think about it, hardly impinge on my conscious mind at all; if I think about it, I know where I plan to go to get bread, but the answer’s routine enough that I don’t usually bother). When driving, there are points at which I run a quick self-check (“do I need to buy bread today? Yes? Then I must turn into the shopping centre...”)
For a less routine errand, my plan will consist of a number of steps to follow. These will be arranged in the order I expect to complete them, and I will (barring unexpected developments or the failure of any step) follow the steps in order as specified. If I were to write down the steps on paper, they would appear horrendously under-specified to a neutral observer; but in the privacy of my own head, I know exactly which shop I mean when I simply specify ‘the shop’; both the denotations and connotations intended by every word in my head are there as part of the word.
If the plan is one that I particularly look forward to fulfilling, I may run through it repeatedly, particularly the desirable parts (“...that ice cream is going to taste so good...”). This all runs through my language system, of course.
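To pin down what I mean by these different shapes of plan, here’s a toy Python sketch. Every name and field in it is invented, and the errands are just the examples from above; it’s only meant to contrast the shapes, not to describe how either of us actually stores a plan.

```python
# Toy sketches of the plan shapes discussed above: a bare goal plus habits,
# an ordered list of steps, and a label that simply points at a remembered
# episode. All names and fields are made up for illustration.

routine_errand = {
    "goal": "buy bread",
    "habits": ["usual shop", "usual route"],  # barely conscious at all
    "self_check": "need bread today? then turn into the shopping centre",
}

ordered_steps = [
    "find the car keys",
    "drive to the library",
    "return the old books",
    "take out the new ones",
]

label_plan = {
    "label": "LIBRARY PLAN",
    "refers_to": "memory of the last similar errand",
}
```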
Sight is something else again. It certainly feels like I’m aware of my entire visual field, but I’m at least half convinced that’s an illusion.
I have a vague memory of having read something suggesting that humans are not aware of their entire visual field, but that there is a common illusion that they are, which agrees with your hypothesis here. I vaguely suspect it might have been in one of the ‘Science of the Discworld’ books, but I am uncertain.