Just read his post. Interesting to see someone start out with the same train of thought but then choose different aspects to focus on.
Any non-local behaviour by the neurons shouldn’t matter if the firing patterns are replicated. I think focusing on the complexity required by the replacement neurons is missing the bigger picture. Unless the contention is that the signals arriving at the motor neurons have been drastically affected by some other process, enough to overturn some long-held understanding of how neurons operate, those effects are minor details.
“The third assumption is one you don’t talk about, which is that switching the substrate without affecting behavior is possible. This assumption does not hold for physical processes in general; if you change the substrate of a plank of wood that’s thrown into a fire, you will get a different process. So the assumption is that computation in the brain is substrate-independent.”
Well, this isn’t the assumption, it’s the conclusion (right or wrong). From what I can tell, the relevant substrate is the firing patterns themselves.
I haven’t delved too deeply into Penrose’s stuff for quite some time. What I read before doesn’t seem to explain how quantum effects are going to influence action potential propagation on a behaviour-altering scale. It seems like throwing a few teaspoons of water at a tidal wave to try to alter its course.
“Well, this isn’t the assumption, it’s the conclusion (right or wrong). From what I can tell, the relevant substrate is the firing patterns themselves.”
You say “Now, replace one neuron with a functionally identical unit, one that takes the same inputs and fires the same way” and then go from there. This step is where you make the third assumption, which you don’t justify.
“I think focusing on the complexity required by the replacement neurons is missing the bigger picture.”
Agreed—I didn’t say that complexity itself is a problem, though; I said something much more specific.
I don’t see how it’s an assumption. Are we considering that the brain might not obey the laws of physics?
I mentioned complexity because you brought up a specific aspect of what determines the firing patterns, and my response is just to say ‘sure, our replacement neurons will take in additional factors as part of their input and output’.
Basically, it seemed that part of your argument is that the neuron black box is unimplementable. I just don’t buy the idea that neurons operate so vastly differently from the rest of reality that their behaviour can’t be replicated.
“I don’t see how it’s an assumption. Are we considering that the brain might not obey the laws of physics?”
If you consider the full set of causal effects of a physical object, then the only way to replicate those exactly is with the same object. This is just generally true; if you change anything about an object, that changes its particle structure, and that comes with measurable changes. An artificial neuron is not going to have exactly 100% the same behavior as a biological neuron.
This is why I made the comment about the plank of wood—it’s just to make the point that, in general, across all physical processes, substrate is causally relevant. This is a direct implication of the laws of physics: every particle has a continuous effect that depends on its precise location, and any two objects have particles in different places, so there is no such thing as a different object that does exactly the same thing.
So any step like “we’re going to take out this thing and then replace it with a different thing that has the same behavior” makes assumptions about the structure of the process. Since the behavior isn’t literally the same, you’re assuming that the system as a whole is such that the differences that do exist “fizzle out”. E.g., you might assume that it’s enough to replicate the changes to the flow of current, whereas the fact that the new neurons have a different mass will fizzle out immediately and not meaningfully affect the process. (If you read my initial post, this is what I was getting at with the abstraction description thing; I was not just making a vague appeal to complexity.)
“it seemed that part of your argument is that the neuron black box is unimplementable”
Absolutely not; I’m not saying that any of these assumptions are wrong or even hard to justify. I’m just pointing out that this is, in fact, an assumption. Maybe this is so pedantic that it’s not worth mentioning? But I think if you’re going to use the word proof, you should get even minor assumptions right. And I do think you can genuinely prove things; I’m not in the “proof is too strong a word for anything like this” camp. So by analogy, if you miss a step in a mathematical proof, you’d get points deducted even if the thing you’re proving is still true, and even if the step isn’t difficult to get right. I really just want people to be more precise when they discuss this topic.
I see what you’re saying, but I disagree with substrate’s relevance in this specific scenario because:
“An artificial neuron is not going to have exactly 100% the same behavior as a biological neuron.”
It just needs to fire at the same time; none of the internal behaviour needs to be replicated or simulated.
So—indulging intentionally in an assumption this time—I do think those tiny differences fizzle out. I think it’s insignificant noise against a strong signal. What matters most in neuron firing is the action potentials. This isn’t some super delicate process that will succumb to the whims of minute quantum effects and picosecond differences.
I assume that, much like a plane doesn’t require feathers to fly, sentience doesn’t require this super exacting molecular detail, especially given how consistently coherent our sentience feels to most people despite how damn messy biology is. People have damaged brains, split brains, brains whose chemical balance is completely thrown off by afflictions or powerful hallucinogens, and yet, through it all, we still have sentience. It seems wildly unlikely that it’s like ‘ah! you’re close to creating synthetic sentience, but you’re missing the serotonin and some quantum entanglement’.
I know you weren’t arguing for that stance; I’m just stating it as a side note.
Nice; I think we’re on the same page now. And fwiw, I agree (except that I think you need a little more than just “fire at the same time”). But yes, if the artificial neurons affect the electromagnetic field in the same way—so they not only fire at the same time, but with precisely the same strength, and also have the same level of charge when they’re not firing—then this should preserve both communication via synaptic connections and gap junctions, as well as any potential non-local ephaptic coupling or brain wave shenanigans, and therefore the change to the overall behavior of the brain will be so minimal that it shouldn’t affect its consciousness. (And note that this concerns the brain’s entire behavior, i.e., the algorithm it’s running, not just its input/output map.)
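To make that condition concrete, here’s a minimal sketch, purely as an illustration rather than anything either of us actually proposed: a toy leaky integrate-and-fire neuron and a hypothetical drop-in replacement that keeps only the input/output rule (same spike times, same spike strength, same resting charge). The class names, parameters, and the model itself are invented simplifications, and the sketch deliberately ignores everything else a real replacement would also need to match (field effects outside of spikes, gap junctions, and so on).

```python
import numpy as np

DT = 0.1          # simulation time step (ms)
V_REST = -65.0    # resting membrane potential (mV)
V_THRESH = -50.0  # firing threshold (mV)
V_RESET = -65.0   # reset potential right after a spike (mV)
TAU = 10.0        # membrane time constant (ms)


class BiologicalNeuron:
    """Toy stand-in for the original neuron: a leaky integrate-and-fire unit."""

    def __init__(self):
        self.v = V_REST  # membrane potential starts at the resting charge

    def step(self, input_current):
        # Leak toward rest while integrating the input current.
        self.v += DT * (-(self.v - V_REST) + input_current) / TAU
        if self.v >= V_THRESH:
            self.v = V_RESET
            return 1.0  # a spike of fixed strength
        return 0.0


class ReplacementNeuron:
    """Hypothetical drop-in unit on a different substrate: a separate
    implementation that keeps only the input/output rule, so it fires at the
    same times, with the same strength, and sits at the same resting charge."""

    def __init__(self):
        self.v = V_REST

    def step(self, input_current):
        self.v += DT * (-(self.v - V_REST) + input_current) / TAU
        if self.v >= V_THRESH:
            self.v = V_RESET
            return 1.0
        return 0.0


def spike_train(neuron, inputs):
    """Drive a neuron with a stream of input currents and record its spikes."""
    return [neuron.step(i) for i in inputs]


rng = np.random.default_rng(0)
inputs = rng.uniform(0.0, 40.0, size=2000)  # identical input stream for both

original = spike_train(BiologicalNeuron(), inputs)
replacement = spike_train(ReplacementNeuron(), inputs)

# Downstream neurons only "see" the spike train, so if the two trains match,
# the rest of the (toy) network behaves identically -- which is exactly the
# condition being made explicit above.
print("identical spike trains:", original == replacement)
print("number of spikes:", int(sum(original)))
```

The replacement here is a literal re-implementation of the same update rule on a “different substrate”; the substance of the whole exchange above is whether anything outside that rule matters for the brain.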
If you want to work more on this topic, I would highly recommend trying to write a proof for why simulations of humans on digital computers must also be conscious—which, as I said in the other thread, I think is harder than the proof you’ve given here. Like, try to figure out exactly which assumptions you do and do not require—both assumptions about how consciousness works and about how the brain works—and try to be as formal/exact as possible. I predict that actually trying to do this will lead to genuine insights in unexpected places. No one has ever attempted this on LW (or at least there have been no attempts that are any good),[1] so this would be a genuinely novel post.
[1] I’m claiming this based on having read every post with the consciousness tag—so I guess it’s possible that someone has written something like this and didn’t tag it, and I’ve just never seen it.