I’m a bit late on this but I figure it’s worth a shot:
1.) We don’t have much time left, judging by the recent rate of progress in AI capabilities; significant progress has been made in the last two weeks alone.
2.) The time, money, and manpower devoted to the alignment problem are vanishingly small compared to the resources being poured into advancing AI capabilities.
3.) We don’t have any good idea of what to do, and given how slowly alignment research progresses relative to every other sphere of AI, one can reasonably predict that this state of ignorance will persist until the world ends.
4.) Though I definitely don’t have a gears-level understanding of how AI works, the consensus among alignment researchers appears to be that alignment is extremely difficult, almost intractable. There’s a sub-problem here: the sheer difficulty of the core problem pushes researchers toward easier, less lethal problems in the time before the world ends.
5.) Finally, the most damning reason for pessimism is that alignment, with all of its difficulties, must work on the first try, or else everyone dies.
Despite knowing all this, I don’t know for sure that we’re doomed, as EY seems to think, mostly because of the uncertainty of the subject matter and the unprecedented nature of the technology. But things sure don’t look good.