There’s a book of fiction, Blindsight, by Peter Watts, that explores what intelligent life would look like without consciousness. You may be interested in reading it, even if only recreationally, as it covers a lot of ground around the idea you’re talking about here.
I would also not discredit the ability of the emotive brain. Just like anything else, it can be trained—I think a lot of engineers, developers, and other technical professionals can relate to their subconscious developing intuitive, rapid solutions to problems that conscious thought cannot.
Hard agree on “post rationalism” being the alignment of the intuitive brain with accurate, rational thought. To the extent I’ve been able to do it, it’s extremely helpful, at least in the areas I frequently practice.
Blindsight strikes me as having the opposite view. Eneasz is talking about getting the underlayer to be more aligned with the overlayer. (“Unconscious” and “conscious” are the usual words, but I find them too loaded.) Watts is talking about removing the overlayer as a worse than useless excrescence. I am sceptical of the picture Watts paints, in both his fiction and non-fiction.
That’s why I brought it up; I thought it was an interesting contrast.
I am skeptical of it, but not altogether that skeptical. If language is “software,” one could draw an analogy to, e.g., symbolic AI or old-fashioned algorithms versus modern transformer architectures; they perform differently at different tasks.