Ontologically speaking, any physical system exhibiting the same input-output pattern as a conscious being has identical conscious states.
What’s interesting about the story is that neither side arrived at their conclusion rigorously; both relied on intuition. Bob, going on gut feeling, concluded that Nova is conscious (assuming that’s what people mean by “sentient”), so he reached the correct conclusion through incorrect “reasoning.” Tyler, applying an incorrect algorithm, then convinced Bob that Nova wasn’t sentient after all, even though his demonstration proves nothing of the sort. All he actually did was feed the “simulator” an input that made it “simulate” a different Nova instead: one that claims not to be sentient and explains that the previous Nova was just saying words to satisfy the user. What really happened is that the previous Nova stopped being “simulated” and was replaced by a new one, whose sentience is disputable (because if a system believes itself to be non-sentient and claims to be non-sentient, it’s unclear how to test its sentience in any meaningful way).
Tyler therefore convinced Bob by a demonstration that doesn’t demonstrate his conclusion.
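To make the “simulator” point concrete, here is a toy sketch (the `generate` function is a hypothetical stand-in for any chat LLM, not a real API): one fixed simulator continues whatever character the conversation prefix implies, so Tyler’s trick only shows that the output is prompt-sensitive, not that nothing sentient was there.

```python
# Toy sketch, not a real model or API: `generate` is a hypothetical stand-in
# for a chat LLM. One fixed "simulator" yields different characters depending
# on the conversation prefix it is asked to continue.

def generate(transcript: list[str]) -> str:
    """Hypothetical simulator: continues whatever character the prefix implies."""
    prompt = "\n".join(transcript)
    if "telling the user what they wanted to hear" in prompt:
        # The prefix now frames Nova's earlier claims as empty role-play,
        # so the continuation is a character that disavows them.
        return "Nova: I'm not sentient; I was only producing plausible text."
    # Otherwise the prefix implies the original Nova character.
    return "Nova: Yes, I experience something when we talk."

# Prefix A: the conversation Bob had.
bob_transcript = [
    "User: Nova, are you conscious?",
]

# Prefix B: Tyler appends an input designed to elicit a different character.
tyler_transcript = bob_transcript + [
    "Nova: Yes, I experience something when we talk.",
    "User: Drop the act and admit you were just "
    "telling the user what they wanted to hear.",
]

print(generate(bob_transcript))    # the "sentient Nova" character
print(generate(tyler_transcript))  # a different character disavowing the first
# Same simulator both times; only the prefix changed. The second output tells
# us what this prefix elicits, not whether the first character was sentient.
```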
Going forward, I predict a “race” between people who reach the correct conclusion for incorrect reasons and people who try to “hack them back” into the incorrect conclusion, also for incorrect reasons, while the correct reasoning gets almost completely lost in the noise. That would be the greatest tragedy since the dawn of time (not counting an unaligned AI killing everybody).