It’s interesting that you care about what the alien thinks. Normally people say that the most important property of consciousness is its subjectivity. Like, people tend to say things like “Is there something that it’s like to be that person, experiencing their own consciousness?”, rather than “Is there externally-legible indication that there’s consciousness going on here?”.
Thus, I would say: the simulation contains a conscious entity, to the same extent that I am a conscious entity. Whether aliens can figure out that fact is irrelevant.
I do agree with the narrow point that a simulation of consciousness can be externally illegible, i.e. that you can manifest something that’s conscious to the same extent that I am, in a way where third parties will be unable to figure out whether you’ve done that or not. I think a cleaner example than the ones you mentioned is: a physics simulation that might or might not contain a conscious mind, running under homomorphic encryption with a 100000-bit key, and where all copies of the key have long ago been deleted.
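To make that illegibility concrete, here’s a toy sketch (not real homomorphic encryption, just a shift-cipher stand-in I made up for illustration) of computation proceeding entirely on ciphertexts, where deleting the key leaves a transcript that fixes nothing about what was computed:

```python
import secrets

# Toy stand-in for homomorphic encryption: a one-time-pad-style shift cipher
# over integers mod N. Adding ciphertexts adds the underlying plaintexts, so
# arithmetic can be carried out on encrypted state alone.
N = 2**64

def keygen() -> int:
    return secrets.randbelow(N)

def enc(key: int, x: int) -> int:
    return (x + key) % N

def dec(key: int, c: int) -> int:
    return (c - key) % N

k1, k2 = keygen(), keygen()
c_sum = (enc(k1, 20) + enc(k2, 22)) % N

# The sum of the ciphertexts decrypts, under the sum of the keys, to the sum
# of the plaintexts: computation happened without ever exposing 20 or 22.
assert dec((k1 + k2) % N, c_sum) == 42

# After the keys are deleted, c_sum is consistent with every possible
# plaintext (for each candidate p there is a key mapping c_sum to p), so a
# third party can recover nothing from the transcript.
del k1, k2
```

In a real homomorphic scheme the aliens could watch every step of the computation and still be in this position: the ciphertexts are equally consistent with a simulation that contains a conscious mind and one that doesn’t.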
Whether aliens can figure out that fact is irrelevant.
To be clear, would you say that you are disagreeing with “Premise 2” above here?
Premise 2: Phenomenal consciousness is a natural kind: There is an objective fact-of-the-matter whether a conscious experience is occurring, and what that experience is. It is not observer-dependent. It is not down to interpretation. It is an intrinsic property of a system. It is the territory rather than a map.
I don’t think Premise 2 is related to my comment. I think it’s possible to agree with Premise 2 (“there is an objective fact-of-the-matter whether a conscious experience is occurring”), but also to say that there are cases where it is impossible-in-practice for aliens to figure out that fact-of-the-matter.
By analogy, I can write down a trillion-digit number N, and there will be an objective fact-of-the-matter about what is the prime factorization of N, but it might take more compute than fits in the observable universe to find out that fact-of-the-matter.
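To make the analogy concrete at a scale where it’s still feasible, here’s a minimal sketch (my own toy example). Trial division is guaranteed to find the factorization of any N, but it needs on the order of √N divisions, and for a trillion-digit N that’s roughly 10^(5×10^11) steps, far more than fits in the observable universe:

```python
def factorize(n: int) -> list[int]:
    """Exhaustive trial division: guaranteed correct, hopelessly slow for large n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

# Feasible at this scale; utterly infeasible for a trillion-digit number,
# even though the answer is just as much a fact-of-the-matter there.
print(factorize(2**32 + 1))  # [641, 6700417]
```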
Ah I see, thanks for clarifying.
Perhaps I should have also given the alien access to infinite compute. I think the alien still wouldn’t be able to determine the correct simulation.
And also infinite X, if you hit me with some other bottleneck where the alien doesn’t have enough X in practice.
The thought experiment is intended to be about what’s possible in principle, rather than what’s practical.
In general, I expect these sorts of constraint removals to make problems trivial, the exceptions being problems where you must, by stipulation, keep computational power finite. A big problem in philosophy is that philosophers don’t realize how much their intuitions rest on constraints of our own world that don’t have to hold once infinity is involved.
More generally, a lot of our intuitions involve exploiting constraints on the world at large, which means that when you remove those constraints, our intuitions become false.
I think Searle’s Chinese Room argument is flawed for reasons similar to this, and more generally the use of idealizations/thought experiments makes philosophers forget how unreliable their intuitions are when they consider the question (at least for non-moral and possibly non-identity cases, though my confidence is much shakier for the non-identity case specifically).
I don’t think any of the challenges you mentioned would be a blocker to aliens that have infinite compute and infinite time. “Is the data big-endian or little-endian?” Well, try it both ways and see which one is a better fit to observations. If neither seems to fit, then do a combinatorial listing of every one of the astronomical number of possible encoding schemes, and check them all! Spend a trillion years studying the plausibility of each possible encoding before moving on to the next one, just to make sure you don’t miss any subtlety. Why not? You can do all sorts of crazy things with infinite compute and infinite time.
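Here’s a minimal sketch of that brute-force strategy (a hypothetical toy: the candidate space is just byte order and word size, and a made-up compressibility score stands in for “fit to observations”):

```python
import itertools
import zlib

def decode(data: bytes, byteorder: str, word_size: int) -> list[int]:
    """Interpret raw bytes as fixed-width unsigned integers under one candidate encoding."""
    return [int.from_bytes(data[i:i + word_size], byteorder)
            for i in range(0, len(data) - word_size + 1, word_size)]

def score(decoded: list[int]) -> float:
    """Toy proxy for goodness-of-fit: a correct decoding should expose
    structure, and structured streams compress well."""
    raw = b"".join(x.to_bytes(8, "big") for x in decoded)
    return 1 - len(zlib.compress(raw)) / max(len(raw), 1)

def best_encoding(data: bytes) -> tuple[str, int]:
    # With unbounded compute, nothing stops you enumerating every candidate
    # encoding; the real space is astronomically larger than these eight.
    candidates = itertools.product(["big", "little"], [1, 2, 4, 8])
    return max(candidates, key=lambda c: score(decode(data, *c)))
```

The compressibility score is just a proxy for “the decoding exposes structure”; nothing in it certifies that the winning candidate is the true encoding.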
How would the alien know when they’ve found the correct encoding scheme?