I guess that is the conversation stopper. We agree that it takes a lot of steps. We disagree on whether the number makes it only possible in principle or not.
Ah, I was about to reply with a proof-of-concept explanation in terms of molecular modeling (which of course would be hopelessly intractable in practice, but should illustrate the principle), until I saw you say ‘only possible in principle’; are you saying, then, that your objection is that you think even the most efficient software-based techniques would take, say, a million years of supercomputer time to run a few seconds of consciousness?
Well, maybe not that long, but a long, long time to do the ‘lot of little steps’. It does not seem the appropriate tool to me. After all, the much slower component parts of a brain complete a sort of unit of perception in about a third of a second. I believe that is because it is not done step-wise but something like this: the enormous number of overlapping feedback loops can only stabilize in a sort of ‘best fit’ scenario, and it takes very little time for the whole network to home in on the final perception. (Vaguely that sort of thing.)
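That ‘overlapping feedback loops settling into a best fit’ idea is essentially attractor dynamics, and a minimal sketch is easy to write down. Below is a toy Hopfield-style network in Python, offered only as an assumed stand-in for the kind of settling being described (the sizes, patterns, and update rule are all invented for illustration, not anything from Blue Brain or the brain itself): every unit repeatedly adjusts to fit the rest, and the whole network relaxes onto the nearest stored pattern in a handful of sweeps rather than by a long serial search.

```python
import numpy as np

rng = np.random.default_rng(0)

def store_patterns(patterns):
    """Hebbian weights for +/-1 patterns; symmetric, zero diagonal."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)  # no self-feedback
    return w

def settle(w, state, max_sweeps=50):
    """Asynchronously update units until nothing changes (an attractor)."""
    n = len(state)
    for sweep in range(1, max_sweeps + 1):
        changed = False
        for i in rng.permutation(n):  # random update order each sweep
            s = 1 if w[i] @ state >= 0 else -1
            if s != state[i]:
                state[i], changed = s, True
        if not changed:
            return state, sweep  # settled into a 'best fit'
    return state, max_sweeps

# Store three random 100-unit patterns, then present a corrupted cue.
patterns = rng.choice([-1, 1], size=(3, 100))
w = store_patterns(patterns)

cue = patterns[0].copy()
cue[rng.choice(100, size=25, replace=False)] *= -1  # corrupt 25% of units

result, sweeps = settle(w, cue)
print("sweeps until stable:", sweeps)
print("recovered stored pattern:", np.array_equal(result, patterns[0]))
```

Note that even this parallel ‘relaxation’ is emulated here by a serial loop of little steps; whether that emulation stays tractable at brain scale is exactly the quantitative question the exchange leaves open.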
Right, fair enough, then it’s a quantitative question on which our intuitions differ, and the answer depends both on a lot of specific facts about the brain, and on what sort of progress Moore’s Law ends up making over the next few decades. Let’s give Blue Brain another decade or two and see what things look like then.
Personally I have great hopes for Blue Brain. If it figures out how a single cortical unit works (which they seem to be on the way to), and if they can then figure out how to convert that into a chip and put oodles of those chips in the right environment of inputs and interactions with other parts of the brain (the thalamus and basal ganglia especially), then.....
A lot of work, but it has a good chance as long as it avoids the step-by-step algorithm trap.
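As for what the ‘oodles of chips plus thalamic input’ picture might look like computationally, here is a deliberately crude sketch, again with every detail assumed for illustration (64 identical columns, ring-shaped lateral wiring, random noise standing in for thalamic drive, basal ganglia omitted): one weight matrix is reused for every column, and all columns update at once on each global tick rather than one after another.

```python
import numpy as np

rng = np.random.default_rng(1)

N_COLUMNS = 64   # "oodles" scaled down for a toy run
UNITS = 32       # units per column

# One weight matrix reused for every column: the premise is that a single
# repeating cortical unit, once understood, can simply be replicated.
w_local = rng.normal(0, 1 / np.sqrt(UNITS), (UNITS, UNITS))
w_local = (w_local + w_local.T) / 2  # symmetric feedback
np.fill_diagonal(w_local, 0.0)

# Sparse lateral links between columns, arranged as a ring for simplicity.
neighbours = {c: [(c - 1) % N_COLUMNS, (c + 1) % N_COLUMNS]
              for c in range(N_COLUMNS)}

state = rng.choice([-1.0, 1.0], size=(N_COLUMNS, UNITS))

def tick(state, thalamic_input):
    """One global time step: every column updates at once.

    Each column feels its own recurrent feedback, a summary of its
    neighbours' activity, and the thalamic drive; within a tick no
    column waits on any other, which is the non-step-by-step flavour
    of the argument above.
    """
    lateral = np.stack([state[neighbours[c]].mean(axis=0)
                        for c in range(N_COLUMNS)])
    drive = state @ w_local.T + 0.5 * lateral + thalamic_input
    return np.sign(drive)

for t in range(10):
    thalamic_input = rng.normal(0, 0.1, (N_COLUMNS, UNITS))  # stand-in drive
    new_state = tick(state, thalamic_input)
    print(f"tick {t}: {int((new_state != state).sum())} units changed")
    state = new_state
```

A real design would of course need the learned column dynamics and the actual subcortical loops; the sketch only shows the shape of the architecture being gestured at.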