Hard to be sure without more detail, but your comment gives me the impression that you haven’t thought through the various different branches of how AI and geopolitics might go in the next 10 years.
I, for one, am pretty sure AI control and powerful narrow AI tools will both be key for humanity surviving the next 10 years. I don’t expect us to have robustly solved ASI-alignment in that timeframe.
I also don’t expect us to have robustly solved ASI-alignment in that timeframe. I simply fail to see a history in which AI control work now is a decisive factor. If you insist on making a top-level claim that I haven’t thought through the branches of how things might go, I’d appreciate a more substantive description of the branch I’m not considering.