I don’t think it would tell you much, because it only excludes the case where the hard problem (HP) is a meme, not the case where it’s a repeatable error.
Boxed AI tells you there is an HP.
Could be because it is phenomenally conscious, and has noticed there is a real HP.
Could be because it is repeating a conceptual confusion.
Boxed AI tells you there is not an HP.
Could be because it is a zombie, so it can’t understand what phenomenal consciousness (PC) is.
Could be because it has PC, but isn’t subject to the erroneous thinking that causes the HP.
Note that you can’t just stipulate that an AI is or isn’t conscious, or that it’s a perfect reasoner.
Note that philosophers don’t agree on what constitutes a conceptual confusion.
Note that being able to trace back the causal history of an output doesn’t tell you it wasn’t caused by PC: one of the possible solutions to the HP is that certain kinds of physical activity or information processing are identical to PC, so there isn’t necessarily an xor between PC and physical causation. Of course, there is also the fact that human pronouncements have some sort of causal history, and that doesn’t settle much.
Note that, as things stand, the thought experiment is an intuition pump like Mary’s Room, etc.
I would be curious to know what you think about the box solving the meta-problem, discussed just before the addendum.
Do you think it is unlikely that AI would rediscover the hard problem in this setting?