The whole field seems like an extreme case of anthropomorphizing to me.
Which field? Some of these fields and findings are explicitly about humans; I take it you mean the field of AI sentience, such as it is?
Of course, we can’t assume that what holds for us holds for animals and AIs, and we have to be wary of anthropomorphizing. That issue also comes up in studying, e.g., animal sentience and animal behavior. But what exactly were you thinking is anthropomorphizing here? To be clear, I do think we have to consider carefully what will and will not carry over from what we know about humans and animals.
The “valence” thing in humans is an artifact of evolution
I agree. Are you thinking this means that valenced experiences couldn’t happen in AI systems? Or are unlikely to? I’d be curious to hear why.
where most of the brain is not available to introspection because we used to be lizards and amoebas
I also agree with that. What was the upshot of this supposed to be?
That’s not at all how the AI systems work
What’s not how the AI systems work? (I’m guessing this will be covered by my other questions)