I don’t see your point. It would take an unrealistic world dictatorship (whether it’s “benevolent” seems like irrelevant hair-splitting at that point) to stop the risks (stop the technological progress in the wild!) and allow more time for development of FAI. And in the end, solving FAI still remains a necessary step, even if done by modified/improved people, even if given a safe environment to work in.
> I don’t see your point. It would take an unrealistic world dictatorship (whether it’s “benevolent” seems like irrelevant hair-splitting at that point) to stop the risks (stop the technological progress in the wild!) and allow more time for development of FAI.
You were talking about hundred-year time scales. That’s time enough for neuroscience-based lie detectors to advance a lot, whole brain emulation to take off, democratization in authoritarian countries, continued expansion of EU-like arrangements, and many other things to occur.
> And in the end, solving FAI still remains a necessary step, even if done by modified/improved people, even if given a safe environment to work in.
But from our perspective, if we can get a benevolent non-AI (but perhaps WBE) singleton, it can do the FAI work at leisure and we don’t need to. So the relative marginal impacts of our working on, say, FAI theory or institutional arrangements for WBE need to be weighed against one another.
> You were talking about hundred-year time scales. That’s time enough for neuroscience-based lie detectors to advance a lot, whole brain emulation to take off, democratization in authoritarian countries, continued expansion of EU-like arrangements, and many other things to occur.
It’s also time enough for any of a huge number of other outcomes. It’s not outright impossible, but pretty improbable, that the world will go down this exact road. And don’t underestimate how crazy people are.
> But from our perspective, if we can get a benevolent non-AI (but perhaps WBE) singleton, it can do the FAI work at leisure and we don’t need to. So the relative marginal impacts of our working on, say, FAI theory or institutional arrangements for WBE need to be weighed against one another.
After changing my mind about the value of drifted human preference, I agree that WBE/intelligence enhancement is a viable road. Here are my arguments about the impact of these paths at this point.
WBE is still at least decades away, probably more than a hundred years if you take the planning fallacy into account, and it depends on the development of global technological efforts that are not easily influenced. The value of any “institutional arrangements”, and the viability of arguing for them given the remoteness (hence present irrelevance) and implausibility (to most people) of WBE, also seem doubtful at present. This, in my mind, makes the marginal value of any present effort related to WBE relatively small. It will go up sharply as WBE technology gets closer.
I suspect that FAI theory, once understood, will still be simple enough (if any general theory is possible) and can be developed by vanilla humans (on an unknown timescale, probably decades to hundreds of years, though at some point WBEs overtake these timescale estimates). By the time WBE becomes viable, the risk situation will already be very explosive, so if we can reach a good understanding earlier, we could possibly avoid that risky period entirely. Also, having a viable technical Friendliness programme might give the problem academic recognition (that these risks are as unavoidable as the laws of physics, and not just something to talk about with your friends, like politics or football), which might spread awareness of AI risk to an otherwise unachievable level, helping with institutional changes that promote measures against wild AI and other existential risks. On the other hand, I wouldn’t underestimate human craziness on this point either: technical recognition of the problem may still live side by side with global indifference.