Yes, that was the question I was asking (I am not certain we are over the threshold, and certainly suspicious of answers before about 1965, so I wanted to find out how far apart our positions were).
I agree that a good enough AI outcome would address coordination problems, but this cuts both ways. A society which deals well with coordination problems must, all else equal, be more likely to achieve AGI safely than one which does not.
Work done early enough seems hard to target at FAI rather than at just accelerating AI in general (though it's possible you could factor out a particular part, such as value loading). Given that I think we see long-term trends towards better coordination and decision-making in society, it is not even clear that such work would be positive in expectation.
There is a counter-consideration that AI might be safer if developed earlier, when less computing power is available, but I guess this is a smaller factor.