AI sentience refers to the potential capacity of AI systems to have subjective experiences, or qualia (e.g. pain, pleasure, the experience of color). Similar terms are often used, such as digital sentience, machine sentience, or synthetic sentience.
According to functionalism and computationalism, sentience arises from certain types of information processing. On this view, machines can in principle be sentient, depending on the kind of information processing they implement and independently of whether their physical substrate is biological (see the substrate independence principle). Other theories hold that the physical substrate matters, and that it may be impossible to produce sentience on electronic hardware.
If an AI is sentient, that does not imply that it is more capable or dangerous. It does, however, matter morally, for example from a utilitarian perspective aimed at maximizing happiness.
Sentience can be a matter of degree. If AI sentience is possible, it is probably also possible to engineer machines that feel orders of magnitude more happiness per second than humans, and with fewer resources.[1]
Related Pages: Utilitarianism, Consciousness, AI Rights & Welfare, S-Risks, Qualia, Phenomenology, Ethics & Morality, Mind Uploading, Whole Brain Emulation, Zombies