Are you sure the model that your mind was using to report its own thought process corresponded to what was really going on under the bonnet?
I was very confident that I could reliably retain or reconstruct the last five or so actual thoughts of my conscious deliberation, along with their actual ordering or sequencing. Although recalling or reconstructing a thought sometimes took a few seconds of effort or patience, the reconstruction sure felt like a reliable observation rather than an interpretation, the updating of a model, or the use of a learned model. As for extracting generalizations or regularities from those observations: like I said, I can recall making only one (declarative) generalization, but I have no reason to believe that I could not bring the same regularity-noticing, causality-extracting powers to bear on those observations that I can bring to bear on any observations made by my senses. (When the process being observed is very complex, the regularity-noticing consists mostly of implicit learning rather than explicit, declarative learning, or so it seems to me.)