[Question] What’s the difference between GAI and a government?
I have zero technical AI alignment knowledge. But this question has kept recurring to me for like a year now so I thought I’d ask.
A lot of the arguments for the danger of GAI revolve around the notion that an agent smarter than a human is un-boxable, self-creating, self-enhancing, and not necessarily aligned with human interests.
That pattern-matches very well onto “governments,” “corporations,” and other forms of collective agencies. They have access to collective intelligence far beyond what’s accessible to an individual, and that intelligence brings them power that even the cleverest individual cannot escape in the long run. Their goals aren’t necessarily aligned with human values. They use their intelligence and power to enhance their own intelligence and power. They’re not always successful, but they are often able to learn from their mistakes, and if one agency destroys itself, another takes its place.
How much bearing does this have on technical AI alignment work? Could AI alignment work translate into solutions for the problems we presently face in aligning these agencies with human values? Do the constraints that have so far kept governments and corporations from paperclipping the world map onto any proposed strategies for AI alignment?