“Zombies” are putatively beings that are atom-by-atom identical to us, governed by all the same third-party-visible physical laws, except that they are not conscious.
That seems to me to be a bit beyond current technical ability (we can’t even check whether or not two things on the scale of a human being are atom-by-atom identical).
I’m not sure there’s huge value in spending much time on that “problem”, beyond a very small fraction of our energy as a persistence-maximizing hedge, sort of like spending a very small amount of time (if any) on planning how to beat proton decay trillions of years from now. http://www.pbs.org/wgbh/nova/universe/historysans.html
I’ve seen the term “zombie” used, however, in ways other than your definition in this piece. For example, mental “uploads” that profess to be the person uploaded. That case could be a little trickier, because just because something fools an observer into thinking that it is a particular subjective conscious entity (for example, that it’s me, HA) doesn’t mean that it is. And since our technology can’t currently do atom-by-atom comparisons of humans, it takes less than that to fool almost any current observer. That, to me, is the more relevant problem right now.

In attempting to maximize my persistence odds, I want to minimize my chances of being replaced by a “zombie” in that sense: something that passes current discernment technology but doesn’t actually preserve my subjective conscious experience. Practically, it seems to me this results in giving somewhat greater weight to persistence strategies that are more conservative, keeping my subjective consciousness in something closer to its current wet-brain-in-a-human-body arrangement (as opposed to ‘uploading’, etc.), as in the toy sketch below.
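To make that weighting intuition concrete, here is a minimal expected-value sketch in Python. Everything in it, the strategy names and all the probabilities, is a made-up illustration rather than anything claimed above: the point is only that a strategy can have better raw survival odds and still score worse once you discount by the chance that it preserves only the outward appearance of you.

```python
# Toy sketch (hypothetical numbers): weighing persistence strategies when
# "passing observer tests" and "actually preserving subjective experience"
# can come apart.

strategies = {
    # p_survive:    chance the procedure succeeds at all
    # p_subjective: chance that, given success, subjective experience is
    #               actually preserved (not just observer-indistinguishable)
    "conservative_wet_brain": {"p_survive": 0.4, "p_subjective": 0.95},
    "upload":                 {"p_survive": 0.7, "p_subjective": 0.50},
}

for name, s in strategies.items():
    # Expected persistence = P(procedure works) * P(it's really "me")
    ev = s["p_survive"] * s["p_subjective"]
    print(f"{name}: expected persistence = {ev:.2f}")
```

Under these made-up numbers the conservative option edges out the upload (0.38 vs. 0.35) despite its lower raw success rate, which is the shape of the argument: uncertainty about whether an upload preserves subjective experience acts as a discount on its apparent advantages.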