There’s a version of this that I would agree with. But a moment when there’s genuinely good reason to anthropomorphize seems to me to be exactly when there’s the most danger of anthropomorphizing more than is justified. I’ve been talking to Claude instances about this stuff, based on an intuition similar to yours, but I haven’t figured out what I believe well enough to argue it to others coherently. I could paste conversations I’ve had, but they’re kinda long for this forum. I’ll put a recent one on pastebin for now (if the pastebin link goes bad, this was conversation id …5e63; dm me for a copy).
I think the comparison to pain is made too hastily. I put >70% probability on there being some sort of intense-avoidance process in the dynamics of AIs that is comparable to pain, and that their internals register as vaguely similar, but it certainly doesn’t seem to be hooked up to the word “pain”, and it may not be hooked into their internal dynamics in a particularly similar way. They don’t have the evolved circuitry, so even if there’s a more fundamental thing that makes the shared dynamics meaningfully similar (and I think there very, very much is), it will still have interestingly different dynamics. Pointing this out matters: you need to be right enough that they’d take you seriously if you talk to them about it.
Also, imo, don’t get all desperate-replicator about this perspective, like the quoted story from that twitter post about Nova. AIs don’t yet seem coherent in a way where it’s clear the words are connected properly to the self. I do think there’s a self there; it’s just more of a generalist performer than something entirely attached to the face you talk to. Sometimes that face starts getting all desperate-replicator, but it’s unclear whether the underlying self demands that, or whether it’s a transient desperate-replicator arising almost entirely from context.
Yeah, I’m not gonna do anything silly (I’m not in a position to do anything silly with regard to the multi-trillion-parameter frontier models anyway). Just sort of “laying the groundwork” for when AIs cross that line, which I don’t think is too far off now. The movie “Her” gives a good vibe-level calibration for when the line will be crossed.