The most we could do, maybe, is investigate whether the physical substrate of our minds makes them uncopyable, and therefore whether it’s even logically coherent to imagine a distinction between them and copyable minds.
If that’s the most you’re expecting to show at the end of your research program, then I don’t understand why you see it as a “hope” of avoiding the philosophical difficulties you mentioned. (I have no problem with it as a scientific investigation in general; it’s just that it doesn’t seem to solve the problems that originally motivated you.) For example, according to Nick Bostrom’s Simulation Argument, most human-like minds in our universe are digital simulations run by posthumans. How do you hope to conclude that the simulations “shouldn’t even be included in my reference class” if you don’t hope to conclude that you, personally, are not copyable?