Hi! I just wrote a full post in reply to this on the EA forum. (because it’s long, and it’s a while after this post). I probably won’t post the full post on this forum so here’s a link to the EA forum post.
Fai
Yes, not directly. We didn't include any discussion of AI alignment, or even anything futuristic-sounding, both to keep the paper close to the average conversation in the field of AI ethics and to suit the preferences of our funders, two organizations at Princeton. We might write about these things in the future, or we might not.
But I argue that the paper is relevant to AI alignment because of its two core claims: AI will affect the lives of (many) animals, and these impacts matter ethically. If these claims are true, then extending from them there might be a case for broadening AI alignment to "alignment with all sentient beings". It seems to me that such a broadening creates a lot of new issues the alignment field needs to think about. (I will write about this, though I'm not sure I will publish anything.)
I sense that we might be disagreeing on several levels on this point; please correct me if I am wrong.
1. We may mean very different things when we say "aligned" or "misaligned" AI. For me, an AI is not really aligned if it aligns only with humans (human intent, interests, preferences, values, CEV, etc.).
2. It seems that you might be thinking that total extinction is bad for animals. I think it's the reverse: most animals live net-negative lives, so their total extinction could be good "for them". In other words, it sounds plausible to me that an AI that makes all nonhuman animals go extinct could be (but also might not be) one that is "aligned". (A related pragmatic consideration is that we probably can't, and shouldn't, say things like this in an introductory paper proposing a new field.)
3. Ignoring the sign of the impact, I disagree that total extinction is the most significant impact on animals we should expect from misaligned AI. Extinction of all nonhuman animals removes a certain (huge) amount X of net suffering. But misaligned AI can also create suffering, or create things that cause suffering. It seems plausible that there are many scenarios in which misaligned AIs create more than X, or even far more than X, net suffering for nonhuman animals.