I figure when we have built an artificial kidney that works as well as a kidney, and an artificial heart that works as well as a heart, and an artificial pancreas that works as well as a pancreas—then it will be reasonable to know whether an artificial brain is a reasonable goal.
If we had figured out how to compute the weather accurately some weeks into the future—then we might know whether we can compute a much more complex system.
If we had the foggiest idea of how the brain actually works—then we might know what level of approximation is good enough.
Don’t hold your breath for a personal upload.
I figure when we have built an artificial kidney that works as well as a kidney, and an artificial heart that works as well as a heart, and an artificial pancreas that works as well as a pancreas—then it will be reasonable to know whether an artificial brain is a reasonable goal.
Building an artificial kidney requires both knowledge of how a kidney works and the physical engineering skill to build an artificial kidney with the same structure. Unlike a kidney, an artificial brain can be implemented in software, so it’s enough just to know how it works.
The comparison would be valid if by an “artificial brain” we meant a brain built out of biological neurons, but we don’t.
“an artificial brain can be implemented in software”
I have never understood what underlies this article of faith, and I know more about all the relevant technical disciplines than your average overeducated schmuck. Could you indulge my incomprehension by explaining it to me as though I just fell off the turnip truck?
I generally understand statements like that to be shorthand for (1) The computations performed by a natural brain can be implemented in a software artificial brain, and (2) those computations are the only aspect of a natural brain I care about.
If you question (1), I doubt I can satisfy your doubts, but I am curious as to what sorts of computations a brain can perform but software can’t.
If you question (2), I’m certain I can’t satisfy your doubts, but I’m curious as to what other aspects of a natural brain you care about and why.
If you understand something else by the phrase in the first place, it might help to unpack your reading more explicitly.
I generally understand statements like that to be shorthand for (1) The computations performed by a natural brain can be implemented in a software artificial brain, and (2) those computations are the only aspect of a natural brain I care about.
This is about right.
(1) The computations performed by a natural brain can be implemented in a software artificial brain, and (2) those computations are the only aspect of a natural brain I care about.
This is precisely what I was fishing for, thank you.
If brains are physical systems, and physics as we know it involves noncomputable processes (cf. the comment below about CTD), then it follows that brains are doing noncomputable stuff. The question is then whether that noncomputable stuff is necessary to aspects of experience that we should care about, presuming that this conversation is ultimately about uploads. And the simple answer is that we don’t know.
I’ve got enough theoretical and practical experience with physics, computers and nervous systems to have noticed the muddles that conspicuously creep in when I try to make these three topics interface, and am a little mystified whenever it seems like other smart people don’t notice them. I suspect a big part of it is just getting a little too happy with metaphors like “brain = computer” without paying close attention to the ways in which these things are dis-analogous.
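Since the thread turns on what “noncomputable” means in practice, here is the standard halting-problem diagonalization rendered as a runnable sketch. Everything in it is illustrative: `make_counterexample` and the toy oracle are hypothetical names, and the point is only that any claimed halting-decider can be handed a program it misjudges.

```python
# Sketch of the halting-problem diagonalization: for ANY claimed
# halting oracle, we can construct a program the oracle gets wrong.
# All names here are illustrative.

def make_counterexample(halts):
    """Build a program that does the opposite of whatever `halts` predicts."""
    def program():
        if halts(program):
            while True:      # oracle said "halts", so loop forever
                pass
        return "halted"      # oracle said "loops", so halt immediately
    return program

# A (deliberately silly) candidate oracle that claims nothing ever halts.
def claims_nothing_halts(prog):
    return False

p = make_counterexample(claims_nothing_halts)
print(p())  # the program halts even though the oracle predicted
            # it would not, so this oracle is wrong about p
```

The same construction defeats any other candidate oracle, which is why “software can’t compute the halting function” is a theorem rather than an engineering limitation; whether any physical system, brains included, does better is exactly the disputed point above.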
The question is then whether that noncomputable stuff is necessary to aspects of experience that we should care about
Well, I would say “that we care about”; it’s not clear to me what it means to say about an aspect of experience that I don’t care about it but I should. But I think that’s a digression.
Leaving that aside, I agree: the only way to be sure that X is sufficient to build a brain is to do X and get a brain out of it. Until we do that, we don’t know.
But if supporting a project to build a brain via X prior to having that certainty is a sign of accepting an “article of faith,” then, well, my answer to your original question is that what underlies that article of faith is support for doing the experiment.
By way of analogy: the only way to be sure that X is sufficient to build a heavier-than-air flying machine is to do X and see if it flies. And if doing X requires a lot of time and energy and enthusiasm, then it’s unsurprising if the first community to do X takes as “an article of faith” that X is sufficient.
I have no problem with that, in and of itself; my question is what happens when someone does X and the machine doesn’t fly. If that breaks the community’s faith and they go on to endorse something else, then I endorse the mechanism underlying their faith.
As for why the idea of mind-as-computation seems plausible… well, of all the tools we have, computers currently seem like the best bet for building an artificial mind out of, so that’s where we’re concentrating our time and energy and enthusiasm. That seems reasonable to me.
That said, if in N years neurobiology advances to the point where we can engineer artificial structures out of neurons as readily as we can out of silicon, I expect you’ll see a lot of enthusiastic endorsement of AI projects built out of neurons. (I also expect that you’ll see a lot of criticism that a mere neural structure lacks something-or-other that brains have, and therefore cannot conceivably be intelligent.)
Heh! Funny, I was just reading the bit in Ainslie’s Breakdown of Will where he discusses the function of beliefs as mobilizing one’s motivations, and now feel obliged to be more tolerant of beliefs that I suspect are confused or mistaken but which motivate activities I endorse—in this case, “running the experiment”, as you say. So I guess I don’t disagree. :) Thanks for the thoughtful response!
I discourage the use of ‘article of faith’ as a rhetorical device.
I use it because I think it an accurate descriptor; what would you propose instead?
I’m guessing the presumption here is that “article of faith” is a pejorative. It’s not: it refers to anything taken as true despite not being demonstrable. We lean on these all the time and that’s okay, but it’s useful to acknowledge when this is the case.
In principle an artificial brain can be implemented in software, but we currently don’t have the technology to do so with the available resources. Overcoming that limitation would require a lot of technological progress: much faster computers or much cleverer software. So there are technological limits to implementing an artificial brain, just as there are to implementing an artificial pancreas. They just happen to be different technological limits.
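To put a rough number on “much faster computers”: a back-of-envelope estimate of the raw compute for a real-time, spiking-level simulation. Every figure below is an order-of-magnitude assumption (the neuron and synapse counts are common textbook estimates; the per-event cost is a guess), not something taken from this thread or the Roadmap.

```python
# Back-of-envelope: raw FLOPS for a real-time, spiking-level
# whole-brain simulation. All numbers are rough assumptions.

neurons = 8.6e10               # ~86 billion neurons (common estimate)
synapses_per_neuron = 1e4      # ~10,000 synapses each (rough)
mean_rate_hz = 1.0             # average firing rate, spikes/s (assumed)
flops_per_event = 10.0         # cost to process one synaptic event (guess)

events_per_second = neurons * synapses_per_neuron * mean_rate_hz
required_flops = events_per_second * flops_per_event

print(f"~{required_flops:.0e} FLOPS")  # roughly 1e16 under these assumptions
```

Under different assumptions (molecular-level detail, higher firing rates, dendritic computation) the figure grows by many orders of magnitude, which is why the question of the appropriate level of description matters so much.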
Do you honestly believe that an artificial brain can be built purely in software in the near future? And if it could, how would it be accurate enough to be some particular person’s brain rather than a generic one? And if it was someone’s brain, could the world afford to do this for more than one or two persons at a time? I am not at all convinced of ‘uploads’.
Do you honestly believe that an artificial brain can be built purely in software in the near future?
Kaj never mentioned “near future” or any timeline for uploads, for that matter. The only thing he did was point out a possible flaw in your argument, yet you took it as a personal insult to your belief.
The Whole Brain Emulation Roadmap implies that it may very well be possible. I don’t have the expertise to question their judgement in this matter.
As this review shows, WBE on the neuronal/synaptic level requires relatively modest increases in microscopy resolution, a less trivial development of automation for scanning and image processing, a research push at the problem of inferring functional properties of neurons and synapses, and relatively business-as-usual development of computational neuroscience models and computer hardware. This assumes that this is the appropriate level of description of the brain, and that we find ways of accurately simulating the subsystems that occur on this level. Conversely, pursuing this research agenda will also help detect whether there are low-level effects that have significant influence on higher level systems, requiring an increase in simulation and scanning resolution.
There do not appear to exist any obstacles to attempting to emulate an invertebrate organism today. We are still largely ignorant of the networks that make up the brains of even modestly complex organisms. Obtaining detailed anatomical information of a small brain appears entirely feasible and useful to neuroscience, and would be a critical first step towards WBE. Such a project would serve as both a proof of concept and a test bed for further development. If WBE is pursued successfully, at present it looks like the need for raw computing power for real-time simulation and funding for building large-scale automated scanning/processing facilities are the factors most likely to hold back large-scale simulations.
Tordmor has commented on my attitude—sorry, I did not mean to sound so put out. I said ‘near future’ because the discussion was about ‘uploads’, so I assumed we were talking about our lifetimes, which in context seemed the near future (about the next 50 years). Making an approximate emulation of some simple invertebrate brain is certainly on the cards. But an accurate emulation of a particular person’s brain is a different ballpark entirely.
I never know exactly what people mean when they say emulation or simulation or model. To what extent is the idea to mimic how the brain does something? To ‘upload’ someone, the receiving computer would need some sort of mapping to the physical brain of that person. This is a very tall order.
Thanks for the link to the Roadmap, which I will be reading.