The key is that if AGIs are smarter than humans, organizations run by AGIs with long-term goals will outperform organizations that mix humans holding long-term goals with AGIs that can only pursue short-term goals.
If the LT goal of the AI is perfectly aligned with the goals of the organisation, yes. But smarter isn't enough; it needs to be infallible. If it's fallible, the organisation needs to be able to tweak the goals as it goes along. Remember, smarter means it's better at executing its goal, not at understanding it.
The main goal of most companies is to make money. If an AGI that runs a company is better at that, it will outcompete other companies. It doesn't need infallibility. Companies run by humans aren't perfectly aligned either; the interests of managers and the interests of the company differ.
It's to make money without breaking the law. An ASI that fulfils both goals isn't going to kill everybody, since murder is illegal. So even if you do have ASIs with stable long-term goals, they don't lead to doom. (It's interesting to think of the chilling effect of a law under which any human who creates an agentive AI is criminally responsible for what it does.)
Most big companies don't actually have the goal of making money without breaking the law; they're often willing to break it as long as the punishment for doing so isn't too costly.
But even if the AGI doesn't murder anyone in the first five years it operates, it can still focus on acquiring resources until it's untouchable by human actors and then engage in actions that lead to people dying. The Holodomor wasn't direct murder, but people still died because they had no food.