Thanks.
There seems to be pretty wide disagreement about how intent-aligned AGI could lead to a good outcome.
For example, even in the first couple of comments on this post:
The comment above (https://www.lesswrong.com/posts/Rn4wn3oqfinAsqBSf/?commentId=zpmQnkyvFKKbF9au2) suggests “wide open decentralized distribution of AI” as the solution to making intent-aligned AGI deployment go well.
And the comment I am replying to here says, “I could see the concerns in this post being especially important if things work out such that a full solution to intent-alignment becomes widely available.”
My guess, and a motivation for writing this post, is that we will see something in between (a) wide and open distribution of intent-aligned AGI (which somehow leads to well-balanced, highly multipolar scenarios) and (b) completely centralized ownership of intent-aligned AGI (by a beneficial group of very conscientious philosopher-AI-researchers).