I deny the premise of the question: it is not “near-universally accepted”. It is fairly widely accepted, but there are still quite a lot of people who have some degree of uncertainty about it. It’s complicated by varying ideas of exactly what “sentient” means, so the same question may be interpreted as meaning different things by different people.
Again, there are a lot of people who expect that we wouldn’t necessarily know.
Why do you think that there is any difference? The mere existence of the term “p-zombie” suggests that quite a lot of people have an idea that there could—at least in principle—be zero difference.
Looks like a long involved statement with a rhetorical question embedded in it. Are you actually asking a question here?
Same as 4.
Maybe you should distinguish between questions and claims?
Stopped reading here since the “list of questions” stopped even pretending to actually be questions.
Thank you very much for the response. Can I ask follow-up questions?
I literally do not know a single person with an academic position in a related field who would publicly express doubt about the claim that we do not yet have sentient AI. Literally not one. Could you point me to one?
3. I think “p-zombie” is a term that is wildly misunderstood on Less Wrong. In its original context, it was practically never intended to describe a scenario that is physically possible. You basically have to buy into tricky versions of counter-scientific dualism to believe it could be. It’s an interesting thought experiment, but mostly for getting people to spell out our confusion about qualia in more actionable terms. P-zombies cannot exist, and will not exist. They died with the self-stultification argument.
4. Fair enough. I think, and hereby state, that human minds are a misguided framework of comparison for the first consciousness we should expect, in light of the fact that much simpler conscious minds exist and developed first, and that rejecting the prospect of sentient AI based on the differences between AI and a human brain is problematic for this reason. And thank you for the feedback: you are right that this begins with questions that left me confused and uncertain, and increasingly moves into territory where I am certain, and hence should stand behind my claims.
5. This is a genuine question. I am concerned that the people we trust to be most informed and objective on the matter of AI are biased in their assessment because they have too much to lose if it is sentient. But I am unsure how this could be tested empirically. For now, I think it is just something to keep in mind when telling people that the “experts”, namely the people working with it, near-universally declare that it isn’t sentient and won’t be. I’ve worked on fish pain, and the parallel is painful: the fishing industry does fish sentience “research”, argues from its extensive expertise of working with fish every day, and concludes that fish cannot feel pain and hence its fishing practices are fine.
6. Fair enough. Claim: Consciousness is not mysterious, but we often feel it should be. If we expect it to be, we may fail to recognise it in an explanation that lacks mystery. Artificial systems we have created and have some understanding of inherently seem non-mysterious, but this is no argument that they are not conscious. I have encountered this a lot and it bothers me. A programmer will say “but all it does is [long complicated process that is eerily reminiscent of a biological process likely related to sentience], so it is not sentient!”, and if I ask them how that differs from how sentience would be embedded, it becomes clear that they have no idea and have never even thought about it.
I am sorry if it got annoying to read at that point. The TL;DR was that I think accidentally producing sentience is not at all implausible, in light of sentience being a functional trait that has repeatedly evolved by accident; that I think controlling a superior and sentient intelligence is both unethical and hopeless; and that I think we need to treat current AI better, since it is the AI from which sentient AI will emerge, and what we are currently feeding it and doing to it is how you raise psychopaths.