X-risk-alleviating AGI just has to be days late to the party for a supervirus created by a terrorist cell to have crashed it. I guess I’d judge against putting all our eggs in the AI basket.
“We” aren’t deciding where to put all our eggs. The question that matters is how to allocate marginal units of effort. I agree, though, that the answer isn’t always “FAI research”.
There’s another factor: regulation is systemic risk. From a thread (http://esr.ibiblio.org/?p=1551#comments) on Armed and Dangerous:
Indeed, I have made the argument on a Less Wrong thread about existential risk that the best available mitigation is libertarianism. Not just political but social libertarianism, by which I meant a wide divergence of lifestyles: the social equivalent of genetic and behavioral dispersion.
The LW community, like most technocratic groups (e.g., socialists), seems to have this belief that there is some perfect cure for any problem. But there isn’t always; in fact, for most complex social problems there isn’t. Besides the Hayek mentioned earlier, see Thomas Sowell’s “A Conflict of Visions”, its sequel “The Vision of the Anointed”, and his book “Knowledge and Decisions”, an expansion of Hayek’s essay “The Use of Knowledge in Society”.
There is no way to ensure humanity’s survival, but the centralizing tendency seems like a good way to prevent it should the SHTF.
Libertarianism decreases some types of existential risk, and bad outcomes in general, but increases other types (like UFAI). It also seems to lead to Robin Hanson’s ultra-competitive, Malthusian scenario, which many of us would consider a dystopia.
Have you already considered these objections, and still think that more libertarianism is desirable at this point? If so, how do you propose to substantially nudge the future in the direction of more libertarianism?
I think you misunderstand Robin’s scenario; if we survive, the Malthusian scenario is inevitable after some point.
Robin outright dismisses the possibility of a singleton (AI, groupmind or political entity) farsighted enough to steer clear of Malthusian scenarios until the universe runs down. I tend to think this dismissal is mistaken, but I could be convinced that there is a rough trichotomy of human futures: extinction, singleton or burning the cosmic commons.
Of the three possibilities for the far future, the Malthusian scenario is the least bad. A singleton would be worse, and extinction worse yet. That doesn’t mean I favor a Malthusian result, just that the alternatives are worse.
I don’t agree that there are only three non-negligible possibilities, but putting that aside, why do you think the Malthusian scenario would be better than a singleton? (I believe even Robin thinks that a singleton, if benevolent, would be better than the Malthusian scenario.)
He says that a singleton is unlikely but not negligibly so.
Ah, I see that you are right. Thanks.
… seems to have this belief that there is some perfect cure for any problem.

There may not be a single strategy that is perfect on its own, but there will always be an optimum course of action, which may be a mixture of strategies (e.g., dump $X into nanotech safety, $Y into intelligence enhancement, and $Z into AGI development). You might never have enough information to know the optimal strategy to maximise your utility function, but one still exists, and it is worth trying to estimate it (see the toy sketch below).
I mention this because previously I have heard “there is no perfect solution” used as an excuse to give up on systematic/mathematical analysis of a problem and just settle for some arbitrary suggestion of a “good enough” course of action.
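To make “it is worth trying to estimate it” concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration (the budget, the three strategies, and the diminishing-returns numbers are assumptions, not figures from the thread); it only shows that choosing a mixture of strategies to maximise a stated utility function is a well-posed computation, even when no single strategy is perfect.

```python
import itertools

BUDGET = 10  # hypothetical units of funding to allocate

def p_survival(nanotech, intelligence, agi):
    """Invented diminishing-returns model: each unit spent on a risk
    area shrinks that area's residual failure probability."""
    p_nano = 1 - 0.20 * 0.8 ** nanotech      # residual nanotech risk
    p_int  = 1 - 0.10 * 0.9 ** intelligence  # residual stagnation risk
    p_agi  = 1 - 0.30 * 0.7 ** agi           # residual UFAI risk
    return p_nano * p_int * p_agi            # must survive all three

# Brute-force search over all whole-unit allocations of the budget.
best = max(
    ((x, y, BUDGET - x - y)
     for x, y in itertools.product(range(BUDGET + 1), repeat=2)
     if x + y <= BUDGET),
    key=lambda alloc: p_survival(*alloc),
)

print("best mixture (nanotech, intelligence, AGI):", best)
print("estimated P(survival): %.4f" % p_survival(*best))
```

The optimisation itself is trivial; the hard part, as the rest of the exchange suggests, is knowing the model and the numbers well enough for the answer to mean anything.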
It isn’t just that there is no “perfect” solution; for many problems there is no solution at all, just a continuing difficulty that must be continually worked through. Claims of some optimal (or even good-enough) solution to these sorts of social problems are usually a means to advance the claimants’ agendas, especially when they propose using government coercion to force everybody to follow their prescriptions.
That claims of this type are sometimes made to advance agendas does not mean we shouldn’t make these claims, or that all such claims are false. It means such claims need to be scrutinised more carefully.
I agree that more often than not there is no simple solution, and people often accept a false simple solution too readily. But the absence of a simple solution does not mean there is no theoretically optimal strategy for continually working through the difficulty.
Who’s doing that? Governments also use surveillance, intelligence, tactical invasions and other strategies to combat terrorism.