You may be right but my imagination has a problem with it. If there is a way to do analog computing using software in a non step-by-step procedure, then I could imagine a software solution. It is the algorithm that is my problem and not the physical form of the ‘ware’.
I may not be understanding your objection in that case. Are you saying that there’s no way software, being a digital phenomenon, can simulate continuous analog phenomena? If so, I will point to the many cases where we successfully use software to simulate analog phenomena to sufficient precision. If not, can you perhaps rephrase?
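(As a minimal illustration, and nothing brain-specific: here is a toy sketch of digital code approximating a continuous analog process, in this case a simple RC circuit charging, just by taking many small time steps. The component values and step size are arbitrary, chosen only to show the idea.)

```python
# Toy sketch: step-by-step digital approximation of a continuous analog process.
# Model: an RC circuit charging toward a supply voltage, dV/dt = (V_in - V) / (R * C).
# All values here are illustrative placeholders.

R = 1_000.0    # resistance in ohms (arbitrary)
C = 1e-6       # capacitance in farads (arbitrary)
V_in = 5.0     # supply voltage
dt = 1e-6      # time step in seconds; shrink it for more precision

V = 0.0                                # capacitor starts discharged
for step in range(5_000):              # 5 ms of simulated time
    dV = (V_in - V) / (R * C) * dt     # one small discrete step of the continuous dynamics
    V += dV

print(f"voltage after 5 ms: {V:.3f} V")  # analytic answer: 5 * (1 - e**-5), about 4.966 V
```

Halve the step size and the result tracks the analytic curve more closely; that is the sense in which 'sufficient precision' is cheap to buy with more steps.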
I may not be expressing myself well here. I am trying to express what I can and cannot imagine—I do not presume to say that because I cannot imagine something, it is impossible. In fact I believe that it would be possible to simulate the nervous system with digital algorithms in principle, just extremely difficult in practice. So difficult, I think, that I cannot imagine it happening. It is not the ‘software’ or the ‘digital’ that is my block, it is the ‘algorithm’, the stepwise processes that I am having trouble with. How do you imagine the enormous amount and varied nature of feedback in the brain can be simulated by step-by-step logic? I take it that you can imagine how it could be done—so how?
With a lot of steps.
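(To put that less flippantly: here is a rough sketch of what ‘a lot of steps’ means. Take a handful of units that all feed back on one another and advance the whole coupled system in tiny time increments; the feedback is handled simply because every loop gets nudged a little on every step. The weights and inputs below are made up, and nothing here is meant as a model of real neurons.)

```python
import numpy as np

# Toy sketch: overlapping feedback loops advanced by many small discrete steps.
# Five units, each driven by all the others through a made-up weight matrix W.
# Continuous dynamics dx/dt = -x + tanh(W @ x) + input, stepped with a small dt.

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.3, size=(n, n))   # arbitrary feedback weights, kept small so the system settles
external_input = rng.normal(size=n)      # arbitrary constant drive
dt = 0.001                               # small time step

x = np.zeros(n)
for step in range(20_000):               # "a lot of steps": 20 simulated seconds
    dx = -x + np.tanh(W @ x) + external_input
    x = x + dt * dx                      # every feedback loop updated a little on each step

print(np.round(x, 3))                    # the coupled system has relaxed to a steady state
```

Scale that up to a real brain and we are back to the disagreement about how many steps is too many.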
I guess that is the conversation stopper. We agree that it takes a lot of steps. We disagree on whether the number makes it only possible in principle or not.
Ah, I was about to reply with a proof of concept explanation in terms of molecular modeling (which of course would be hopelessly intractable in practice but should illustrate the principle), until I saw you say ‘only possible in principle’; are you saying then that your objection is that you think even the most efficient software-based techniques would take, say, a million years of supercomputer time to run a few seconds of consciousness?
Well, maybe not that long, but a long, long time to do the ‘lot of little steps’. It does not seem the appropriate tool to me. After all, the much slower component parts of a brain do a sort of unit of perception in about a third of a second. I believe that is because it is not done step-wise but something like this: the enormous number of overlapping feedback loops can only stabilize in a sort of ‘best fit scenario’, and it takes very little time for the whole network to home in on the final perception. (Vaguely that sort of thing.)
Right, fair enough, then it’s a quantitative question on which our intuitions differ, and the answer depends both on a lot of specific facts about the brain, and on what sort of progress Moore’s Law ends up making over the next few decades. Let’s give Blue Brain another decade or two and see what things look like then.
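(One small addendum: the ‘settle into a best fit’ picture you describe is very close to what attractor-network models do, and those are simulated with exactly the kind of step-by-step updates we have been arguing about. Here is a rough Hopfield-style sketch, with a made-up stored pattern and no pretence of biological realism.)

```python
import numpy as np

# Toy sketch: a network "homing in" on a best-fit interpretation, one small step at a time.
# Hopfield-style: store one pattern, start from a corrupted version, and let the
# feedback dynamics relax to the nearest stable state. Pattern and noise are made up.

rng = np.random.default_rng(1)
n = 100
stored = rng.choice([-1, 1], size=n)          # the "correct" percept (arbitrary)
W = np.outer(stored, stored) / n              # Hebbian weights for that single pattern
np.fill_diagonal(W, 0)

state = stored.copy()
flipped = rng.choice(n, size=30, replace=False)
state[flipped] *= -1                          # corrupt 30% of the units

for sweep in range(5):                        # a few sweeps of unit-by-unit updates
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("fraction matching stored pattern:", (state == stored).mean())  # 1.0 for this easy case
```

None of which settles the quantitative question, of course; it only shows that ‘settling into a best fit’ is not something step-by-step simulation is barred from in principle.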
Personally I have great hopes for Blue Brain. If it figures out how a single cortex unit works (which they seem to be on the way to), and if they can then figure out how to convert that into a chip and put oodles of those chips in the right environment of inputs and interactions with other parts of the brain (the thalamus and basal ganglia especially), then...
A lot of work, but it has a good chance as long as it avoids the step-by-step algorithm trap.