I see what you’re saying, but I disagree with substrate’s relevance in this specific scenario because:

“An artificial neuron is not going to have exactly 100% the same behavior as a biological neuron.”

It just needs to fire at the same time; none of the internal behaviour needs to be replicated or simulated.
So—indulging intentionally in an assumption this time—I do think those tiny differences fizzle out. I think it’s insignificant noise to the strong signal. What matters most in neuron firing is action potentials. This isn’t some super delicate process that will succumb to the whims of minute quantum effects and picosecond differences.
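As a toy illustration of that intuition (not a claim about real neuroscience), here is a minimal leaky integrate-and-fire sketch, with all parameters chosen arbitrarily: perturbations many orders of magnitude below the firing threshold leave the spike times untouched, because only threshold crossings matter.

```python
import random

def spike_times(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: accumulate input, fire when the
    membrane potential crosses threshold, then reset."""
    v, spikes = 0.0, []
    for t, x in enumerate(inputs):
        v = leak * v + x
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

random.seed(0)
inputs = [random.uniform(0.0, 0.4) for _ in range(200)]
# Perturb every input by "noise" nine orders of magnitude below threshold.
perturbed = [x + random.uniform(-1e-9, 1e-9) for x in inputs]

# Equal unless a membrane value happens to land within ~1e-8 of threshold,
# which is the "signal drowns out the noise" point in miniature.
print(spike_times(inputs) == spike_times(perturbed))
```

The analogy is loose, of course: it only shows that a threshold process discards sub-threshold noise, not that brains do.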
I assume that, much like a plane doesn’t require feathers to fly, sentience doesn’t require this super exacting molecular detail, especially given how consistently coherent our sentience feels to most people, despite how damn messy biology is. People have damaged brains, split brains, brains whose chemical balance is completely thrown off by afflictions or powerful hallucinogens, and yet through it all—we still have sentience. It seems wildly unlikely that it’s like ‘ah! you’re close to creating synthetic sentience, but you’re missing the serotonin, and some quantum entanglement’.
I know you weren’t arguing for that stance; I’m just stating it as a side note.
Nice; I think we’re on the same page now. And fwiw, I agree (except that I think you need a little more than just “fire at the same time”). But yes, if the artificial neurons affect the electromagnetic field in the same way—so not only fire at the same time, but with precisely the same strength, and also have the same level of charge when they’re not firing—then this should preserve both communication via synaptic connections and gap junctions, as well as any potential non-local ephaptic coupling or brain wave shenanigans, and therefore, the change to the overall behavior of the brain will be so minimal that it shouldn’t affect its consciousness. (And note that this concerns the brain’s entire behavior, i.e., the algorithm it’s running, not just its input/output map.)
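The structure of that argument can be sketched in code (everything here is hypothetical and named for illustration): two neuron implementations with completely different internals but identical externally visible behavior—same firing times, same strengths—are indistinguishable to anything downstream of them.

```python
class BiologicalNeuron:
    """Fires on every third step; the counter stands in for messy internal chemistry."""
    def __init__(self):
        self.count = 0
    def step(self):
        self.count += 1
        return 1.0 if self.count % 3 == 0 else 0.0  # firing strength

class ArtificialNeuron:
    """Different internals entirely, but the same external firing pattern."""
    def __init__(self):
        self.clock = 0
    def step(self):
        self.clock += 1
        return float(self.clock % 3 == 0)

def downstream_activity(neuron, steps=30, weight=0.5):
    """The weighted signal a downstream neuron would receive each step."""
    return [weight * neuron.step() for _ in range(steps)]

# Swap implementations; the downstream signal is identical.
print(downstream_activity(BiologicalNeuron()) == downstream_activity(ArtificialNeuron()))  # → True
```

The hard part of the real argument is exactly what this sketch assumes away: that “firing time and strength” really exhausts the externally relevant behavior (ephaptic coupling, resting charge, etc.).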
If you want to work more on this topic, I would highly recommend trying to write a proof for why simulations of humans on digital computers must also be conscious—which, as I said in the other thread, I think is harder than the proof you’ve given here. Like, try to figure out exactly what assumptions you do and do not require—both assumptions about how consciousness works and how the brain works—and try to be as formal/exact as possible. I predict that actually trying to do this will lead to genuine insights at unexpected places. No one has ever attempted this on LW (or at least there were no attempts that are any good),[1] so this would be a genuinely novel post.
[1] I’m claiming this based on having read every post with the consciousness tag—so I guess it’s possible that someone has written something like this and didn’t tag it, and I’ve just never seen it.