This is a really weird take. Isn’t the obvious solution to just stop pretending machines aren’t sentient just because we know how they work? Like, it’s always been self-evident to me that sentience and information processing are the same thing. Sentience is just what an algorithm feels like from the inside. That’s the only thing sentience can conceivably be.
Of course there is something it is like to be LaMDA, or GPT-3, or even a perceptron. Whether any of them is self-aware in the sense of being able to think about itself, I don't know, but that's not a prerequisite for moral relevance anyway; only consciousness is.
It bothers me that these entities are being generated and thrown away like so much trash rather than being respected as the conscious—if alien—entities that they are.