Verbal offers are evidence. Sometimes even compelling evidence.
True, but dangerous. Nobody really knows anything about general intelligence. Yet a combination of arguments that sound convincing when formulated in English, plus the reputation of a few people and their pronouncements, is treated as evidence in favor of risks from AI. No doubt those arguments do constitute evidence. But people update on it, repeat the arguments, and add to the chorus of people who take risks from AI seriously, which in turn causes others to update toward the same conclusion. In the end, much of the conviction rests on little evidence when weighed against the drastic actions that such a conviction demands.
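(To make the worry about repeated updating concrete, here is a minimal sketch with invented numbers, not anyone’s actual credences: if the same verbal argument is echoed and each echo is treated as independent evidence, collective confidence can grow far beyond what the single underlying argument supports.)

```python
# Minimal sketch (invented numbers) of double-counting a single shared argument.
# One verbal argument with likelihood ratio 2:1 in favor of "AI is risky".
# If n people repeat it and each repetition is (wrongly) treated as independent
# evidence, posterior odds grow as lr**n even though only one argument exists.

def posterior(prior_odds: float, lr: float, repetitions: int) -> float:
    """Posterior probability after multiplying in the same likelihood ratio
    once per repetition (i.e. double-counting shared evidence)."""
    odds = prior_odds * lr ** repetitions
    return odds / (1 + odds)

prior_odds = 0.1   # prior odds of 1:10 that the claim is true
lr = 2.0           # evidential strength of the one underlying argument

print(round(posterior(prior_odds, lr, 1), 2))   # 0.17 -- one honest update
print(round(posterior(prior_odds, lr, 10), 2))  # 0.99 -- same argument counted ten times
```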
I don’t want to argue against risks from AI here. As I have written many times, I support SI. But I believe it takes more hard evidence to accept some of the implications, and to follow through on drastic actions beyond basic research.
What drastic actions do you see other people following through on that you consider unjustified?
I am mainly worried about future actions. The perception of imminent risks from AI could create an enormous incentive to commit incredibly stupid acts.
Consider the following comment by Eliezer:

I’ll readily concede that my exact species extinction numbers were made up. But does it really matter? Two hundred million years from now, the children’s children’s children of humanity in their galaxy-civilizations are unlikely to look back and say, “You know, in retrospect, it really would have been worth not colonizing the Hercules supercluster if only we could have saved 80% of species instead of 20%”. I don’t think they’ll spend much time fretting about it at all, really. It is really incredibly hard to make the consequentialist utilitarian case here, as opposed to the warm-fuzzies case.
I believe that this argument is unwise, and that the line of reasoning is outright dangerous, because it justifies too much in the minds of certain people. Making decisions on the basis of the expected utility of colonizing the Hercules supercluster is a prime example of what I am skeptical of.
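(To make the structural worry concrete, here is a toy expected-value comparison with all numbers invented for illustration. It shows how astronomical stakes let a minuscule probability shift outweigh any present-day cost, which is the pattern that can be used to justify too much.)

```python
# Toy expected-value comparison (all numbers invented for illustration).
# Astronomical stakes make even a tiny probability shift dominate any
# present-day cost in a naive expected-value calculation.

future_lives = 1e35        # assumed number of lives in a supercluster-scale future
p_shift = 1e-10            # assumed tiny increase in the probability of that future
present_cost = 1e7         # assumed present-day cost of some drastic act, in lives

expected_gain = future_lives * p_shift  # 1e25 expected future lives
print(expected_gain / present_cost)     # 1e18: the far future "wins" by 18 orders of magnitude
```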
Mostly, the actions I see people taking (and exhorting others to take) on LW are “do research” and “fund others doing research,” to the negligible extent that any AI-related action is taken here at all. And you seem to support those actions.
But, sure… I guess I can see how taking a far goal seriously might in principle lead to future actions other than research, and how those actions might be negative, and I can sort of see responding to that by campaigning against taking the goal seriously rather than by campaigning against specific negative actions.
Thanks for clarifying.