I notice you don’t talk about the interaction between the two big goals you’ve held. Your beliefs here presumably hinge on timescales? If most existential risk is a long way off, then improving the coordination and decision-making of society is likely a better route to long-term safety than anything more direct (though perhaps there is something else better still).
If you agree with that, when historically do you guess the changeover point was?
There are a number of factors here. Timescales are certainly important. I obviously can’t re-organize people at will. Even in a best-case scenario, it would take decades or even centuries to transition social systems, to shift away from governments and nations, and so on. If I believed AI would take millennia, then I’d keep addressing coordination problems. However, AI is also on the decades-to-centuries timescale.
Furthermore, developing an FAI would (depending upon your definition of ‘friendly’) address coordination problems. Whether my ideas were flawed or not, developing FAI dominates social restructuring.
“when historically do you guess the changeover point was?”
I’m not quite sure what you mean. Are you asking the historical date at which I believe the value of a person-hour spent on AI research overtook the value of a person-hour spent on restructuring people? I’d guess maybe 1850, in hopes that we’d be ready to build an FAI as soon as we were able to build a computer. This seems like a strange counterfactual to me, though.
Yes, that was the question I was asking (I am not certain we are past the threshold yet, and I am suspicious of answers before about 1965, so I wanted to find out how far apart our positions were).
I agree that a good enough AI outcome would address coordination problems, but this cuts both ways. A society which deals with coordination problems well must, all else being equal, be more likely to achieve AGI safely than one which does not.
Early enough work seems hard to target at FAI rather than just accelerating AI in general (though it’s possible you could factor out a particular part such as value loading). Given that I think we see long-term trends towards better coordination and decision-making in society, it is not even clear this work would be positive in expectation.
There is a counter-consideration that AI might be safer if developed earlier, when less computing power is available, but I guess this is a smaller factor.
It would have been kind of impossible to work on AI in 1850, before even modern set theory was developed, unless by ‘work on AI’ you mean work on mathematical logic in general.
Nice story.