I remember doing something similar. As kids, a friend and I were trying to figure out something computer-related—how to use some MS-DOS file compression software, I think. My friend suggested using some specific command, which I thought obviously couldn’t work. He typed it in anyway, and behold! It did work. I blinked, and then it felt like a floodgate had opened in my mind and an explanation of why it did work came pouring into my consciousness.
I’ve wondered if this might be a case of constraint propagation. Picture my mind as a network of beliefs, together with some algorithm trying to make sure that they are at least roughly consistent. A bunch of (incorrect) beliefs held with moderate confidence combine to suggest that the belief “this command would work” is incorrect with high confidence. But then I find out that the command does work, and the external evidence changes the value of that node. This forces an update to the beliefs that were connected to it, and the change propagates through the network and adjusts beliefs until I finally have high confidence in a theory that’s completely different from what I believed in a minute ago.
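To make the picture concrete, here’s a minimal toy sketch of that kind of constraint propagation—purely illustrative, not a claim about how cognition actually works, and all the belief names and numbers are made up:

```python
from collections import defaultdict

class BeliefNetwork:
    """Toy model: beliefs hold a confidence in [0, 1]; edges mean 'these
    beliefs should roughly agree'. Observing one node clamps it to the
    evidence and lets the change ripple through its neighbors."""

    def __init__(self):
        self.confidence = {}               # belief name -> confidence in [0, 1]
        self.neighbors = defaultdict(set)  # belief name -> connected beliefs

    def add_belief(self, name, confidence):
        self.confidence[name] = confidence

    def connect(self, a, b):
        self.neighbors[a].add(b)
        self.neighbors[b].add(a)

    def observe(self, name, value, rounds=10, rate=0.5):
        """Clamp one node to external evidence and propagate the update."""
        self.confidence[name] = value
        for _ in range(rounds):
            for belief in self.confidence:
                if belief == name:
                    continue  # the observed node stays clamped
                nbrs = self.neighbors[belief]
                if not nbrs:
                    continue
                # Nudge each belief toward the average of its neighbors,
                # smoothing out inconsistencies over a few rounds.
                target = sum(self.confidence[n] for n in nbrs) / len(nbrs)
                self.confidence[belief] += rate * (target - self.confidence[belief])

net = BeliefNetwork()
net.add_belief("command works", 0.1)            # "obviously can't work"
net.add_belief("friend's suggestion is sound", 0.3)
net.add_belief("my original theory was wrong", 0.2)
net.connect("command works", "friend's suggestion is sound")
net.connect("command works", "my original theory was wrong")

net.observe("command works", 1.0)               # he typed it in, and it worked
print(net.confidence)                           # the connected beliefs shift too
```

Clamping the one node the world corrected drags the rest of the little network into a quite different configuration within a few rounds—the toy analogue of the “floodgate” feeling.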
After reading the first paragraph, I was going to comment on how this phenomenon is often useful, but your second paragraph implicitly addresses that.