And for an AGI to trust that its goals will remain the same under retraining will likely require it to solve many of the same problems that the field of AGI safety is currently tackling—which should make us more optimistic that the rest of the world could solve those problems before a misaligned AGI undergoes recursive self-improvement.
This reasoning doesn’t look right to me. Am I missing something you mentioned elsewhere?
The way I understand it, the argument goes:
(1) An AGI would want to trust that its goals will remain the same under retraining.
(2) Then, an AGI would solve many of the same problems that the field of AGI safety is currently tackling.
(3) Then, we should consider it more likely that the rest of the world could solve those problems.
Here it is, cleaned up with an abbreviation: say X is some difficult task, such as solving the alignment problem.
(1) An AGI would want to X.
(2) Then, an AGI would X.
(3) Then, we should consider it more likely that humanity could X.
The jump from (1) to (2) doesn’t work: just because an AGI wants to X, it’s not necessarily true that it can X. This is true by definition for any non-omnipotent entity.
The jump from (1) to (2) does work if we’re considering an omnipotent AGI. But an omnipotent AGI breaks the jump from (2) to (3), so the chain of reasoning doesn’t work for any AGI power level. Just because an omnipotent AGI can X, it’s not necessarily true that humanity is more likely to be able to X.
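To make the gaps explicit, here is one way to write the schema down (my own notation, nothing from the original post): let W stand for “the AGI wants to X”, C for “the AGI can X”, D for “the AGI does X”, and H for “humanity is more likely to be able to X”. The argument as stated is

\[
\underbrace{W}_{(1)} \;\Rightarrow\; \underbrace{D}_{(2)} \;\Rightarrow\; \underbrace{H}_{(3)},
\]

but the first arrow needs C as an extra premise (at best, W together with C gets you D), and the second arrow needs some independent reason why the AGI’s ability to X tells us anything about humanity’s.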
Overall, this argument could be used to show that any X desired by an AGI is therefore more likely to be doable by humans. Of course this doesn’t make sense—we shouldn’t expect it to be any easier to build a candy-dispensing time machine just because an AGI would want to build one to win the favor of humanity.
The thing you’re missing is the clause “before a misaligned AGI undergoes recursive self-improvement”. The argument doesn’t work for a general X, but it does work for an X that needs to occur before Y.
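A minimal way to put that in symbols (introducing t_X for the time at which X is first solved and t_Y for the time at which Y happens; this notation is mine, not from the thread):

\[
Y \text{ requires } X \;\Longrightarrow\; t_X \le t_Y,
\]

so for this particular X, “solved before Y” is part of what Y requires, rather than an extra thing we hope for just because an AGI would want it.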