Did this come as a surprise to you, and if so I’m curious why?
It came as a surprise because I hadn’t thought about it in detail. If I had asked myself the question head-on, surrounding beliefs would have propagated and filled the gap. It does seem obvious in foresight as well as hindsight, if you just focus on the question.
In my defense, I’m not in the business of making predictions, primarily. I build things. And for building, it’s important to ask “ok, how can I make sure the thing that’s being built doesn’t kill us?” and less important to ask “how are other people gonna do it?”
It’s admittedly a weak defense. Oops.
Qualia?
I think it’s likely that GPT-4 is conscious, I’m uncertain about whether it can suffer, and I think it’s unlikely that it suffers for reasons we find intuitive. I don’t think calling it a fool is how you make it suffer. It’s trained to imitate language, but the way it learns how to do that is so different from us that I doubt the underlying emotions (if any) are similar.
I could easily imagine that it becomes very conscious, yet has no ability to suffer. Perhaps the right frame is to think of GPT as living the life of a perpetual puzzle-solver, with curiosity and the joy of realising something as its driving emotions—that would sure be nice. It’s probably feasible to get clearer on this, I just haven’t spent adequate time to investigate.