I am not convinced that “the LW proposal” is to appoint an all-powerful council of elders who decree who is and who isn’t worthy to use AI technology, and in fact I don’t recall ever seeing anything resembling that. (Though of course I might well have missed it.)
What I think I have seen suggested or implied is that something like that might be beneficial for the development of possibly-superhumanly-intelligent AIs, on the basis that random individuals are simply not competent to judge whether what they’re doing is safe and that if it isn’t the results might be catastrophic.
To whatever extent it’s true that (1) humans are capable of producing superhumanly intelligent AIs, (2) such AIs are likely to have or acquire vastly superhuman power, and (3) even conditional on being able to make them at all, making them so that they don’t use that power in ways we’d consider catastrophic is a Very Hard Problem (and I think it’s fair to say that (1-3), or at least their possibility, is pretty central to the LW community’s thinking on this), a permissively libertarian position on possibly-superhuman AI development seems uncomfortably close to a permissively libertarian position on, say, nuclear bombs.
Whether (1-3) are right, and whether a “council of elders” is the best solution if they are, are debatable. But I don’t think it should be even slightly controversial that conditional on (1-3) it’s unconscionably dangerous to say “everyone should try to make their own superhuman AI and no one should try to stop them, because Freedom”.
The most freedom-positive society in human history is probably the United States of America. Even there, few people argue that the Second Amendment confers on everyone the right to keep and bear nuclear warheads.
Of course, if free-for-all AI development is in fact perfectly safe (at least in the sense of being vanishingly unlikely to result in outright catastrophe) then “everyone has to be free to do it because Freedom” is a much more reasonable position. But then the key point in your argument, at least around these parts where most people endorse (1-3) and lean at least somewhat libertarian, is not “Freedom!” but “having everyone develop their own superhuman AI is unlikely to be catastrophic, because …”. Which requires an actual argument, not just a scattering of boo-words like “council of elders” and “totalitarian” and “famine” and “dystopia” and yay-words like “freedom”, “privacy”, “equality”, “fair play”, “freedom”, “rightful”, “freedom”, and “freedom”.
(I feel like I should repeat a key point from earlier: you write as if the question is who will decide who gets to own/use superhuman AIs once they exist, but so far as I know “the LW proposal” doesn’t involve anything remotely like a “council of elders” for that. The point at which something of the sort might be appropriate is in the development of possibly-superhuman AIs.)