From a pure physics standpoint, I don’t find the question relevant. That’s a physicist’s application of Occam’s razor: extra postulates about consciousness don’t affect physical calculations, so we should ignore them, just as MWI vs. CI doesn’t affect experimental predictions, so a physicist shouldn’t care which interpretation is used.
But my concern is the intersection of physics and philosophy: what moral weight should I give to possible future outcomes in my utilitarian assessment? Whether a life form is conscious doesn’t matter much from a physicist’s perspective, because it doesn’t affect the biochemical calculations, but it matters a great deal to the question “should I protect this life?”
There is a division in the transhumanist community over whether one should identify with the instance of a computation or with its description. This has practical, real-world consequences: should I sign up for cryonics (the possibility of revival, but with some damage) or brain preservation (less damage, but destructive uploading as the only revival option)?
If the panpsychic consciousness-in-every-interaction postulate I stated is true, then morality and personal utility come down on the side of the instance-of-computation camp, not the description-of-computation camp. That favors cryonics (a long sleep) over brain preservation (kill-and-copy), and it also rules out weird options like quantum suicide.