I think the best solution is not to have a powerful AGI that tries to answer ethical questions. Instead, aim for an obedient tool-like AGI, directed by a governing body that fairly represents humanity’s interests.
I mean, you can also have a narrow philosophy-tool AI that helps you with philosophical reasoning, but I recommend against giving it the power to directly enact the policies it endorses.
Thanks for the comment. If an AGI+ answered all my questions “correctly,” we still wouldn’t know whether it was actually aligned, so I certainly wouldn’t endorse giving it power. But if it answered any of my questions “incorrectly,” I’d want to “send it back to the drawing board” before even considering using it as you suggest (as an “obedient tool-like AGI”). It seems to me there’d be too much room for abuse, or for the tool to fall into the wrong hands, if it didn’t have its own ethical guardrails onboard. But maybe I’m wrong (part of me certainly hopes so, because if AGI/AGI+ is ever developed, it’ll more than likely fall into the “wrong hands” at some point, and I’m not at all sure that everyone having one would make the situation better).