Excellent post!

I think this has implications for moral philosophy, where we typically assign praise, blame, and responsibility to individual agents. If the notion of individuality breaks down for AI systems, we might need to shift our moral thinking away from asking who is to blame and towards asking how to design the system to produce better overall outcomes.
I also really liked this comment:
The familiar human sense of a coherent, stable, bounded self simply doesn’t match reality. Arguably, it doesn’t even match reality well in humans—but with AIs, the mismatch is far greater.
It resonates because human systems also have many interacting, coherent parts that can be thought of as goal-driven without appealing to individual responsibility. Many social problems exist not because of individual moral failings but because of complex webs of incentives, institutions, and emergent structures that no single individual controls. Yet our moral intuition often defaults to assigning blame rather than addressing systemic issues.
Yeah, I think a lot of the moral philosophizing about AIs suffers from reliance on human-based priors. While I’m happy some people work on digital minds welfare, and more people should, a large part of the effort seems stuck on an implicit model of a unitary mind that can be both a moral agent and a moral patient.