In principle, it is possible to simulate a brain on a computer, and I think it’s meaningful to say that if you could do this, you would know your “source code”. In general, you can think of something’s source code as a (computable) mathematical description of that thing.
Also, the point of the post is to generalize the theory to this domain. Humans don’t know their source code, but they do have models of other people, and use these to make complicated decisions. What would a formalization of this kind of process look like?
It’s not known that a software/hardware distinction is even applicable to brains.
Moreover, if you simulated a brain, you might be simulating in software what was originally done in hardware.
You could think of software as being any element that is programmable—ie, even a physical plugboard can be thought of as software even though it’s not typically the format we store it on.
You could think of a plugboard as hardware, too, so there is no longer a clean hardware/software distinction.
What I’m getting at is that it doesn’t matter if the software is expressed in electron arrangement or plugs or neurons, if it’s computable. I don’t see any trouble here distinguishing between connectome and neuron.
What I am saying is that if you can’t separate software from hardware, you aren’t dealing with software in a reifiable sense.
Hardware is never computable, in the sense that simulated planes don’t fly.
That’s a hypothesis, unproven and untested. Especially if you claim the equivalence between the mind and the simulation—which you have to do in order to say that the simulation delivers the “source code” of the mind.
A mathematical description of my mind would be beyond the capabilities of my mind to understand (and so, know). Besides, my mind changes constantly both in terms of patterns of neural impulses and, more importantly, in terms of the underlying “hardware”. Is neuron growth or, say, serotonin release part of my “source code”?
The laws of physics as we currently understand them are computable (not efficiently, but still), and there is no reason to hypothesize new physics to explain how the brain works. I’m claiming there is an isomorphism.
Dynamic systems have mathematical descriptions also…
What do you mean by that? E.g. quantum mechanics, or even the many-body problem in classical mechanics...
Do note that being able to write a mathematical expression does not necessarily mean it’s computable. Among other things, our universe is finite.
I strongly suspect “computable” is being used in the mathematical sense here, not in the sense of “tractable on a reasonable computer”.
QM is computable. Classical physics is not.
We don’t know whether the universe is finite or infinite.
In the broadest sense, the hypothesis is somewhat trivial. For instance, if we are communicating with an agent over a channel with n bits of information capacity, then there are 2^n possible exchanges. Given any n, it is possible to create a simulation that picks the “right” exchange, such that it is indistinguishable from a human. Where the hypothesis becomes less obvious is when n is not fixed.
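To make the fixed-n point concrete, here is a minimal Python sketch: for a channel of capacity n bits, an agent’s observable behavior is one of finitely many input-to-output mappings, so a lookup table can reproduce it exactly. Everything here (the toy `human_response` function in particular) is illustrative, not part of the original argument.

```python
n = 3  # channel capacity in bits (tiny, for illustration)

# Stand-in for whatever the real agent would reply to each n-bit message.
def human_response(message_bits):
    return message_bits[::-1]

# Build the exhaustive table: one entry per possible n-bit message.
table = {
    format(i, f"0{n}b"): human_response(format(i, f"0{n}b"))
    for i in range(2 ** n)
}

def simulated_response(message_bits):
    # The "simulation" is a pure lookup, with no internal model at all.
    return table[message_bits]

# On this channel, the table is indistinguishable from the original agent.
assert all(simulated_response(m) == human_response(m) for m in table)
```

The table has 2^n entries, which is why the argument only works for a fixed, finite n.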
No, I don’t think so.
Are you making Searle’s Chinese Room argument?
In any case, even if we accept the purely functional approach, it doesn’t seem obvious to me that you must be able to create a simulation which picks the “right” answer in the future. You don’t get to run 2^n instances and say “Pick whichever one satisfies your criteria”.
Well, I did say “In the broadest sense”, so yes, that does imply a purely functional approach.
The claim was that it is possible in principle. And yes, it is possible, in principle, to run 2^n instances and pick the one that satisfies the criteria.
That’s not simulating intelligence. That’s just a crude exhaustive search.
And I am not sure there is enough energy in the universe to run 2^n instances, anyway.
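The “run 2^n instances and pick one” move amounts to brute-force search, which can be sketched in a few lines (the acceptance criterion below is an arbitrary illustration). Note how the candidate count doubles with every added bit, which is exactly the energy objection above.

```python
def brute_force(n, satisfies):
    """Enumerate all 2**n candidate bit strings; return the first
    one the criterion accepts, or None if none qualifies."""
    for i in range(2 ** n):
        candidate = format(i, f"0{n}b")
        if satisfies(candidate):
            return candidate
    return None

# Example criterion: the candidate must contain exactly two 1-bits.
print(brute_force(4, lambda s: s.count("1") == 2))  # → "0011"
```

This finds *a* satisfying candidate, but by exhaustion rather than by anything resembling intelligence, which is the point of the reply above.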