A different question: When does it make your (mental) life easier to categorize an AI as conscious, so that you can use the heuristics you’ve developed about what conscious things are like to make good judgments?
Sometimes, maybe! Especially if lots of work has been put in to make said AI behave in familiar ways along many axes, even when nobody (else) is looking.
But for LLMs, or other similarly alien AIs, I expect that applying your usual patterns of thought for conscious things creates more problems than it solves.
If one is a bit of a Platonist, then there’s some hidden fact about whether they’re “really conscious” or not, no matter how murky the waters, and once this Hard Problem is solved, deciding what to do becomes relatively easy.
But I prefer the alternative of ditching the question of consciousness entirely when it’s not going to be useful, and deciding what’s right to do about alien AIs more directly.