“an artificial brain can be implemented in software”
I have never understood what underlies this article of faith, and I know more about all the relevant technical disciplines than your average overeducated schmuck. Could you indulge my incomprehension by explaining it to me as though I just fell off the turnip truck?
I generally understand statements like that to be shorthand for (1) The computations performed by a natural brain can be implemented in a software artificial brain, and (2) those computations are the only aspect of a natural brain I care about.
If you question (1), I doubt I can satisfy your doubts, but I am curious as to what sorts of computations a brain can perform but software can’t.
If you question (2), I’m certain I can’t satisfy your doubts, but I’m curious as to what other aspects of a natural brain you care about and why.
If you understand something else by the phrase in the first place, it might help to unpack your reading more explicitly.
This is about right.
This is precisely what I was fishing for, thank you.
If brains are physical systems, and physics as we know it involves noncomputable processes (cf. the comment below about CTD), then it follows that brains are doing noncomputable stuff. The question is then whether that noncomputable stuff is necessary to aspects of experience that we should care about, presuming that this conversation is ultimately about uploads. And the simple answer is that we don’t know.
I’ve got enough theoretical and practical experience with physics, computers, and nervous systems to have noticed the muddles that conspicuously creep in when I try to make these three topics interface, and I’m a little mystified whenever it seems like other smart people don’t notice them. I suspect a big part of it is just getting a little too happy with metaphors like “brain = computer” without paying close attention to the ways in which these things are disanalogous.
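(As an aside, the noncomputability point has one well-worn concrete anchor: the halting problem is the canonical function no software can compute. A minimal sketch of Turing’s diagonal argument; `halts` and `diagonal` are my own illustrative names, standing in for the assumed-impossible oracle and the program that refutes it:)

```python
# Sketch of Turing's diagonal argument. Suppose some total function
# halts(f, x) could decide whether f(x) terminates; `diagonal` below
# turns that assumption into a contradiction. Illustrative names only.

def halts(f, x):
    """Hypothetical halting oracle; assumed to exist, not implementable."""
    raise NotImplementedError("no algorithm decides halting in general")

def diagonal(f):
    """Halt exactly when the oracle says f(f) runs forever."""
    if halts(f, f):
        while True:   # loop forever if f(f) supposedly halts
            pass
    return "halted"

# Feeding diagonal to itself: diagonal(diagonal) halts if and only if
# it does not halt, so no real implementation of `halts` can exist.
```

(Whether brain physics smuggles in anything of this character is, of course, exactly the open question.)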
Well, I would say “that we care about”; it’s not clear to me what it means to say about an aspect of experience that I don’t care about it but I should. But I think that’s a digression.
Leaving that aside, I agree: the only way to be sure that X is sufficient to build a brain is to do X and get a brain out of it. Until we do that, we don’t know.
But if supporting a project to build a brain via X prior to having that certainty is a sign of accepting an “article of faith,” then, well, my answer to your original question is that what underlies that article of faith is support for doing the experiment.
By way of analogy: the only way to be sure that X is sufficient to build a heavier-than-air flying machine is to do X and see if it flies. And if doing X requires a lot of time and energy and enthusiasm, then it’s unsurprising if the first community to do X takes as “an article of faith” that X is sufficient.
I have no problem with that, in and of itself; my question is what happens when someone does X and the machine doesn’t fly. If that breaks the community’s faith and they go on to endorse something else, then I endorse the mechanism underlying their faith.
As for why the idea of mind-as-computation seems plausible… well, of all the tools we have, computers currently seem like the best thing to build an artificial mind out of, so that’s where we’re concentrating our time and energy and enthusiasm. That seems reasonable to me.
That said, if in N years neurobiology advances to the point where we can engineer artificial structures out of neurons as readily as we can out of silicon, I expect you’ll see a lot of enthusiastic endorsement of AI projects built out of neurons. (I also expect that you’ll see a lot of criticism that a mere neural structure lacks something-or-other that brains have, and therefore cannot conceivably be intelligent.)
Heh! Funny, I was just reading the bit in Ainslie’s Breakdown of Will where he discusses the function of beliefs as mobilizing one’s motivations, and now feel obliged to be more tolerant of beliefs that I suspect are confused or mistaken but which motivate activities I endorse—in this case, “running the experiment”, as you say. So I guess I don’t disagree. :) Thanks for the thoughtful response!
I discourage the use of ‘article of faith’ as a rhetorical device.
I use it because I think it an accurate descriptor; what would you propose instead?
I’m guessing the presumption here is that “article of faith” is a pejorative. It’s not: it refers to anything taken as true despite not being demonstrable. We lean on these all the time and that’s okay, but it’s useful to acknowledge when this is the case.