I’m Rob Long. I work on AI consciousness and related issues.
Robbo
Towards Evaluating AI Systems for Moral Status Using Self-Reports
There was a rush to deontology that died away quickly, mostly retreating back into its special enclave of veganism.
Can you explain what you mean by the second half of that sentence?
80k podcast episode on sentience in AI systems
To clarify, which question were you thinking that it is more interesting than? I see that as one of the questions raised in the post. But perhaps you are contrasting “realize it is conscious by itself” with the methods discussed in “Could we build language models whose reports about sentience we can trust?”
I think I’d need to hear more about what you mean by sapience (the link didn’t make it entirely clear to me) and why that would ground moral patienthood. It is true in my opinion that there are other plausible grounds for moral patienthood besides sentience (which, its ambiguity notwithstanding, I think can be used about as precisely as sapience, see my note on usage), most notably desires, preferences, and goals. Perhaps those are part of what you mean by ‘sapience’?
What to think when a language model tells you it’s sentient
Great, thanks for the explanation. Just curious to hear your framework, no need to reply:
- If you do have some notion of moral patienthood, what properties do you think are important for moral patienthood? Do you think we face uncertainty about whether animals or AIs have these properties?
- If you don’t, are there questions in the vicinity of “which systems are moral patients” that you do recognize as meaningful?
Very interesting! Thanks for your reply, and I like your distinction between questions:
Positive valence involves attention concentration whereas negative valence involves diffusion of attention / searching for ways to end this experience.
Can you elaborate on this? What do attention concentration vs. diffusion mean here? Pain seems to draw attention to itself (and to motivate action to alleviate it). On my normal understanding of “concentration”, pain involves concentration. But I think I’m just unfamiliar with how you / ‘the literature’ use these terms.
I’m trying to get a better idea of your position. Suppose that, as TAG also replied, “realism about phenomenal consciousness” does not imply that consciousness is somehow fundamentally different from other forms of organization of matter. Suppose I’m a physicalist and a functionalist, so I think phenomenal consciousness just is a certain organization of matter. Do we still then need to replace “theory” with “ideology” etc?
to say that [consciousness] is the only way to process information
I don’t think anyone was claiming that. My post certainly doesn’t. If one thought consciousness were the only way to process information, wouldn’t there not even be an open question about which (if any) information-processing systems can be conscious?
A few questions:
Can you elaborate on this?
Suffering seems to need a lot of complexity
and also seems deeply connected to biological systems.
I think I agree. Of course, all of the suffering that we know about so far is instantiated in biological systems. Depends on what you mean by “deeply connected.” Do you mean that you think that the biological substrate is necessary? i.e. you have a biological theory of consciousness?
AI/computers are just a “picture” of these biological systems.
What does this mean?
Now, we could someday crack consciousness in electronic systems, but I think it would be winning the lottery to get there not on purpose.
Can you elaborate? Are you saying that, unless we deliberately try to build in some complex stuff that is necessary for suffering, AI systems won’t ‘naturally’ have the capacity for suffering? (i.e. you’ve ruled out the possibility that Steven Byrnes raised in his comment)
Thanks for this great comment! Will reply to the substantive stuff later, but first—I hadn’t heard of The Welfare Footprint Project! Super interesting and relevant, thanks for bringing it to my attention.
A third (disconcerting) possibility is that the list of demands amounts to saying “don’t ever build AGIs”
That would indeed be disconcerting. I would hope that, in this world, it’s possible and profitable to have AGIs that are sentient, but which don’t suffer in quite the same way / as badly as humans and animals do. It would be nice—but is by no means guaranteed—if the really bad mental states we can get occupy a kinda arbitrary and non-natural point in mind-space. This is all very hard to think about, though, and I’m not sure what I think.
I’m hopeful (and hoping!) that one can soften the “we are rejecting strong illusionism” claim in #3 without everything else falling apart.
I hope so too. I was more optimistic about that until I read Kammerer’s paper, then I found myself getting worried. I need to understand that paper more deeply and figure out what I think. Fortunately, I think one thing that Kammerer worries about is that, on illusionism (or even just good old fashioned materialism), “moral patienthood” will have vague boundaries. I’m not as worried about that, and I’m guessing you aren’t either. So maybe if we’re fine with fuzzy boundaries around moral patienthood, things aren’t so bad.
But I think there’s other more worrying stuff in that paper—I should write up a summary some time soon!
Thanks, I’ll check it out! I agree that the meta-problem is a super promising way forward
The whole field seems like an extreme case of anthropomorphizing to me.
Which field? Some of these fields and findings are explicitly about humans; I take it you mean the field of AI sentience, such as it is?
Of course, we can’t assume that what holds for us holds for animals and AIs, and have to be wary of anthropomorphizing. That issue also comes up in studying, e.g., animal sentience and animal behavior. But what were you thinking is anthropomorphizing exactly? To be clear, I think we have to think carefully about what will and will not carry over from what we know about humans and animals.
The “valence” thing in humans is an artifact of evolution
I agree. Are you thinking that this means that valenced experiences couldn’t happen in AI systems, or are unlikely to? Would be curious to hear why.
where most of the brain is not available to introspection because we used to be lizards and amoebas
I also agree with that. What was the upshot of this supposed to be?
That’s not at all how the AI systems work
What’s not how the AI systems work? (I’m guessing this will be covered by my other questions)
Key questions about artificial sentience: an opinionated guide
would read a review!
This author is: https://fantasticanachronism.com/2021/03/23/two-paths-to-the-future/
“I believe the best choice is cloning. More specifically, cloning John von Neumann one million times”
That poem was not written by Hitler.
According to this website and other reputable-seeming sources, the German poet Georg Runsky published that poem, “Habe Geduld”, around 1906.