I wonder whether, if “Solving the alignment problem” seems impossible given the currently invested resources, we should rather focus on the problem from different angles.
The basic premise here seems to be
Not solving the alignment problem → Death by AGI
However, I do not think this is quite right, or at least it is not the full picture. While it is certainly true that we need to solve the alignment problem in order to thrive with AGI,
Thriving with AGI → Solving the alignment problem
the implication is not bidirectional: we could solve the alignment problem and still build an AGI without applying that solution, which would still lead to the death of humanity.
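To make the asymmetry explicit, here is the structure in propositional shorthand (the letter names are my own labels, introduced only for readability):

```latex
% S := the alignment problem is solved
% A := the solution is actually applied to the AGI that gets built
% T := humanity thrives with AGI
%
% The premise only grants:
%   T -> S        (thriving requires a solution)
% It does not grant the converse:
%   S -> T        (a solution on paper does not guarantee thriving)
% Counterexample assignment: S true, A false, T false is consistent with T -> S,
% i.e. the solution exists but is never applied to the AGI that gets built.
\[
  (T \rightarrow S) \quad\not\Rightarrow\quad (S \rightarrow T)
\]
```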
Therefore, instead of focusing on the hard problem of alignment, which seems impossible to solve given current resource investments, I think we should focus on the easier problems of AI safety and regulation. I am thinking of regulations that apply to all industries working on large-scale AI projects involving programs running as agents: for example, requiring minimal transparency measures, mandatory reporting, and running systems in sandboxed environments disconnected from the internet. The list could easily be extended…
Such measures would surely prove weak defenses against an almighty AGI, but they could actually help against systems that merely border on AGI. If breach attempts are recognized, that could still help prevent a “weak” AGI from breaking out, and it could help allocate more resources to the problem of AI safety once the threat is recognized as more real.
Moreover, without such a regulatory system in place, even solving the alignment problem might prove of little value, as companies could simply choose to ignore the extra effort. The crucial point here is that implementing AI regulation is a precondition for succeeding in the AGI project, and it could provide a first line of defense that buys us time to work on the alignment problem.
So instead of thinking
Not solving the alignment problem → Death by AGI
we should rather think
Solving AGI AND Not solving the alignment problem AND Not implementing AI regulations → Death by AGI
as well as
Solving AGI AND Solving the alignment problem AND Implementing AI regulations → Thriving with AGI
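In the same shorthand as above (again, the letters are just my own labels), the refined picture reads:

```latex
% G  := an AGI is actually built ("solving AGI")
% S  := the alignment problem is solved
% R  := AI regulations are implemented and enforced
% D  := death by AGI,  Th := thriving with AGI
%
% Original framing:   not S -> D
% Refined framing:
\[
  (G \land \lnot S \land \lnot R) \rightarrow D
  \qquad\text{and}\qquad
  (G \land S \land R) \rightarrow \mathit{Th}
\]
```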
Given the slowness of politics surrounding well-researched risks with more tangible consequences, such as climate change, the situation might still be hopeless, but “dying with dignity” is to me no viable line of retreat for a rationalist community, even as an April Fools’ joke.