I agree with most of this. I'm not sure how much moral weight I'd put on “a computation structured like an agent”—some, but it’s mostly coming from [I might be wrong] rather than [I think agentness implies moral weight].
Agreed that Malthusian dynamics gives you an evolution-like situation—but I’d guess it’s too late for it to matter: once you’re already generally intelligent, can think your way to the convergent instrumental goal of self-preservation, and can self-modify, it’s not clear to me that consciousness/pleasure/pain buys you anything.
Heuristics are sure to be useful as shortcuts, but I’m not sure I’d want to analogise those to qualia (presumably the right kind would be—but I don’t expect the right kind by default).
The possibilities for signalling will also be nothing like those in a historical evolutionary setting—the utility of emotional affect doesn’t seem to be present (once the humans are gone). [these are just my immediate thoughts; I could easily be wrong]
I agree that most vertebrates are likely moral patients.
Overall, I can’t rule out AIs becoming moral patients—and it’s clearly possible. I just don’t yet see positive reasons to think it has significant probability (unless aimed for explicitly).