If we are lucky enough to find ourselves in one of those situations you describe, where we have an AGI that wants to do what humanity wants (or would/should want) it to want to do, then to what degree is additional training actually required? I’m sure there are many possible such scenarios that vary widely, but my default assumption is that such a system would already be motivated to seek out answers to questions like, “What is it the humans want me to want, and why?” — which would naturally include studying… well, pretty much the entire moral philosophy literature. I wouldn’t put super high odds on that, though.
That said, one of my main concerns about a classical virtue ethics training regimen specifically is that it doesn’t give a clear answer about how to prioritize among virtues (or, more broadly, things some subset of cultures says are virtues) when they conflict, and real humans do in fact disagree and debate about this all the time.