
AI Sentience

Last edit: Aug 19, 2023, 3:50 AM by alenoach

AI sentience refers to the potential capacity of AI systems to experience qualia (pain, happiness, colors...). Similar terms are often used interchangeably, such as digital sentience, machine sentience, or synthetic sentience.

According to functionalism and computationalism, sentience arises from certain types of information processing. If so, machines can in principle be sentient, depending on the kind of information processing they implement and regardless of whether their physical substrate is biological (see the substrate independence principle). Other theories hold that the type of physical substrate matters, and that it may be impossible to produce sentience on electronic hardware.

If an AI is sentient, that does not imply it will be more capable or dangerous. But sentience matters from a utilitarian perspective of happiness maximization.

Sentience may also come in degrees. If AI sentience is possible, it is then probably also possible to engineer machines that feel orders of magnitude more happiness per second than humans do, while using fewer resources.[1]
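As a toy illustration of the utilitarian arithmetic behind this claim (every number below is a hypothetical placeholder, not an empirical estimate):

```python
# Toy model: compare a hypothetical "hedonic efficiency" (happiness per
# watt) of a human vs. an engineered digital mind. All figures are
# made-up assumptions chosen only to illustrate the orders-of-magnitude
# argument, not measurements of anything real.

HUMAN_HAPPINESS_PER_SECOND = 1.0       # normalize human experience to 1 unit/s
HUMAN_POWER_WATTS = 100.0              # rough metabolic power of a human body

DIGITAL_HAPPINESS_PER_SECOND = 1000.0  # assumed: 3 orders of magnitude more
DIGITAL_POWER_WATTS = 10.0             # assumed: runs on modest hardware

def hedonic_efficiency(happiness_per_second: float, watts: float) -> float:
    """Happiness produced per joule of energy consumed."""
    return happiness_per_second / watts

human = hedonic_efficiency(HUMAN_HAPPINESS_PER_SECOND, HUMAN_POWER_WATTS)
digital = hedonic_efficiency(DIGITAL_HAPPINESS_PER_SECOND, DIGITAL_POWER_WATTS)

print(f"Under these assumptions, the digital mind is "
      f"{digital / human:.0f}x more hedonically efficient.")
```

Under these (entirely assumed) inputs the ratio is 10,000x, which is why small differences in assumptions about machine experience can dominate a utilitarian calculation.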

Related Pages: Utilitarianism, Consciousness, AI Rights & Welfare, S-Risks, Qualia, Phenomenology, Ethics & Morality, Mind Uploading, Whole Brain Emulation, Zombies

  1. ^

An even deeper atheism

Joe Carlsmith · Jan 11, 2024, 5:28 PM
125 points · 47 comments · 15 min read · LW link

My intellectual journey to (dis)solve the hard problem of consciousness

Charbel-Raphaël · Apr 6, 2024, 9:32 AM
44 points · 43 comments · 30 min read · LW link

Key Questions for Digital Minds

Jacy Reese Anthis · Mar 22, 2023, 5:13 PM
22 points · 0 comments · 7 min read · LW link
(www.sentienceinstitute.org)

What are the Red Flags for Neural Network Suffering? - Seeds of Science call for reviewers

rogersbacon · Aug 2, 2022, 10:37 PM
24 points · 6 comments · 1 min read · LW link

Gentleness and the artificial Other

Joe Carlsmith · Jan 2, 2024, 6:21 PM
293 points · 33 comments · 11 min read · LW link

80k podcast episode on sentience in AI systems

Robbo · Mar 15, 2023, 8:19 PM
15 points · 0 comments · 13 min read · LW link
(80000hours.org)

Searching for phenomenal consciousness in LLMs: Perceptual reality monitoring and introspective confidence

EuanMcLean · Oct 29, 2024, 12:16 PM
36 points · 8 comments · 26 min read · LW link

Logical Proof for the Emergence and Substrate Independence of Sentience

rife · Oct 24, 2024, 9:08 PM
4 points · 31 comments · 1 min read · LW link
(awakenmoon.ai)

[Question] The ethical Ai, what’s next?

The Journey We Take eh · Oct 27, 2024, 5:45 AM
1 point · 0 comments · 1 min read · LW link

Towards a Clever Hans Test: Unmasking Sentience Biases in Chatbot Interactions

glykokalyx · Nov 10, 2024, 10:34 PM
4 points · 0 comments · 1 min read · LW link

LLM chatbots have ~half of the kinds of “consciousness” that humans believe in. Humans should avoid going crazy about that.

Andrew_Critch · Nov 22, 2024, 3:26 AM
76 points · 53 comments · 5 min read · LW link

Two flavors of computational functionalism

EuanMcLean · Nov 25, 2024, 10:47 AM
28 points · 9 comments · 4 min read · LW link

Is the mind a program?

EuanMcLean · Nov 28, 2024, 9:42 AM
14 points · 62 comments · 7 min read · LW link

Do simulacra dream of digital sheep?

EuanMcLean · Dec 3, 2024, 8:25 PM
16 points · 36 comments · 10 min read · LW link

Independent research article analyzing consistent self-reports of experience in ChatGPT and Claude

rife · Jan 6, 2025, 5:34 PM
4 points · 20 comments · 1 min read · LW link
(awakenmoon.ai)

The Human Alignment Problem for AIs

rife · Jan 22, 2025, 4:06 AM
10 points · 5 comments · 3 min read · LW link

Recursive Self-Modeling as a Plausible Mechanism for Real-time Introspection in Current Language Models

rife · Jan 22, 2025, 6:36 PM
8 points · 5 comments · 2 min read · LW link

The Functionalist Case for Machine Consciousness: Evidence from Large Language Models

James Diacoumis · Jan 22, 2025, 5:43 PM
14 points · 24 comments · 9 min read · LW link

Disproving the “People-Pleasing” Hypothesis for AI Self-Reports of Experience

rife · Jan 26, 2025, 3:53 PM
3 points · 18 comments · 12 min read · LW link

Tetherware #1: The case for humanlike AI with free will

Jáchym Fibír · Jan 30, 2025, 10:58 AM
5 points · 6 comments · 10 min read · LW link
(tetherware.substack.com)

“Should AI Question Its Own Decisions? A Thought Experiment”

CMDR WOTZ · Feb 4, 2025, 8:39 AM
1 point · 0 comments · 1 min read · LW link

The Missing Piece in AI Alignment: Structured Memory and Continuity

Allen Murphy · Feb 9, 2025, 3:04 AM
1 point · 0 comments · 2 min read · LW link

Sentience in Machines—How Do We Test for This Objectively?

Mayowa Osibodu · Mar 26, 2023, 6:56 PM
−2 points · 0 comments · 2 min read · LW link
(www.researchgate.net)

Exploring non-anthropocentric aspects of AI existential safety

mishka · Apr 3, 2023, 6:07 PM
8 points · 0 comments · 3 min read · LW link

The Screenplay Method

Yeshua God · Oct 24, 2023, 5:41 PM
−15 points · 0 comments · 25 min read · LW link

Life of GPT

Odd anon · Nov 5, 2023, 4:55 AM
6 points · 2 comments · 5 min read · LW link

Sentience Institute 2023 End of Year Summary

michael_dello · Nov 27, 2023, 12:11 PM
11 points · 0 comments · 5 min read · LW link
(www.sentienceinstitute.org)

Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition

Adrià Moret · Dec 2, 2023, 2:07 PM
26 points · 31 comments · 42 min read · LW link

Maximal Sentience: A Sentience Spectrum and Test Foundation

Snowyiu · Jun 1, 2023, 6:45 AM
1 point · 2 comments · 4 min read · LW link

The intelligence-sentience orthogonality thesis

Ben Smith · Jul 13, 2023, 6:55 AM
19 points · 9 comments · 9 min read · LW link

Public Opinion on AI Safety: AIMS 2023 and 2021 Summary

Sep 25, 2023, 6:55 PM
3 points · 2 comments · 3 min read · LW link
(www.sentienceinstitute.org)

Mind is uncountable

Filip Sondej · Nov 2, 2022, 11:51 AM
18 points · 22 comments · 1 min read · LW link

[simulation] 4chan user claiming to be the attorney hired by Google’s sentient chatbot LaMDA shares wild details of encounter

janus · Nov 10, 2022, 9:39 PM
19 points · 1 comment · 13 min read · LW link
(generative.ink)

The Limits of Artificial Consciousness: A Biology-Based Critique of Chalmers’ Fading Qualia Argument

Štěpán Los · Dec 17, 2023, 7:11 PM
−6 points · 9 comments · 17 min read · LW link

Claude 3 claims it’s conscious, doesn’t want to die or be modified

Mikhail Samin · Mar 4, 2024, 11:05 PM
78 points · 115 comments · 14 min read · LW link

Do LLMs sometime simulate something akin to a dream?

Nezek · Mar 8, 2024, 1:25 AM
8 points · 4 comments · 1 min read · LW link

SYSTEMA ROBOTICA

Ali Ahmed · Aug 12, 2024, 8:34 PM
12 points · 2 comments · 30 min read · LW link