Nice; I think we’re on the same page now. And fwiw, I agree (except that I think you need a little more than just “fire at the same time”). But yes, if the artificial neurons affect the electromagnetic field in the same way—so they not only fire at the same time, but with precisely the same strength, and also have the same level of charge when they’re not firing—then this should preserve both communication via synaptic connections and gap junctions, as well as any potential non-local ephaptic coupling or brain wave shenanigans. Therefore, the change to the overall behavior of the brain will be so minimal that it shouldn’t affect its consciousness. (And note that this concerns the brain’s entire behavior, i.e., the algorithm it’s running, not just its input/output map.)
If you want to work more on this topic, I would highly recommend trying to write a proof for why simulations of humans on digital computers must also be conscious—which, as I said in the other thread, I think is harder than the proof you’ve given here. Like, try to figure out exactly what assumptions you do and do not require—both assumptions about how consciousness works and about how the brain works—and try to be as formal/exact as possible. I predict that actually trying to do this will lead to genuine insights in unexpected places. No one has ever attempted this on LW (or at least there have been no attempts that are any good),[1] so this would be a genuinely novel post.
I’m claiming this based on having read every post with the consciousness tag—so I guess it’s possible that someone has written something like this and didn’t tag it, and I’ve just never seen it.