For what it’s worth, it might be useful to run a poll on what people think the best sounding name is.
[pollid:706]
With all these options, single-choice voting is pretty clearly sub-optimal; Approval or Range Voting would be better.
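(As a quick illustration of the difference, here is a minimal sketch of how an approval-vote tally could work; the ballots and the short term list below are made up for illustration and are not the actual poll options or results.)

```python
from collections import Counter

# Illustrative candidate terms (not the real poll options).
terms = ["AGI Safety", "Friendly AI", "AI Risk", "AI Control"]

# Under approval voting, each ballot is the set of terms the voter
# approves of, rather than a single forced choice.
ballots = [
    {"AGI Safety", "AI Risk"},
    {"Friendly AI"},
    {"AGI Safety", "Friendly AI", "AI Risk"},
    {"AI Control", "AGI Safety"},
]

# Tally one approval per ballot per approved term.
tally = Counter(term for ballot in ballots for term in ballot)

for term in terms:
    print(f"{term}: {tally[term]} approval(s)")

# The most-approved term wins, so similar options don't split the vote.
winner, _ = tally.most_common(1)[0]
print("Winner:", winner)
```

Range voting works the same way except that each ballot assigns a score to every option and the scores are summed.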
“AGI Safety” is the best one given the choices, but “AGI” is a term of art and should probably be avoided if we’re targeting the public.
On the other hand, it does sound technical, which is probably good for recruiting mathematicians.
In my experience, the term “Friendly AI” confuses people—they think it means an AI that is your friend, or an AI that obeys orders.
Sure, though in this case I happen to be thinking about use cases for which Less Wrongers are not my target audience. But it’ll be interesting to see what terms Less Wrongers prefer anyway.
Less Wrongers voting here are primed to include how others outside of LW react to different terms in their calculations. I interpreted “best sounding” as “which will be the most effective term,” and imagine others did as well. Strategic thinking is kind of our thing.
Yup, I meant to imply this with the phrase “for what it’s worth”.
I don’t like the selection of the terms because it groups lots of different goals under one set of terminology. A safe AGI is not necessarily a Friendly AGI, and a Friendly AGI is not necessarily safe in the same sense as a safely contained Unfriendly AGI. For me this rides on the unpacking of the word “safe”: it usually refers to minimizing change to the status quo.
“Control”, likewise, implies that we are containing or constraining an otherwise hostile process. In the case of safety-hardened UFAI, maybe that is what’s being done, but it’s still not actually the same project as FAI.
The world with minimum human suffering is one in which there are no living humans, and the world with the safest, most controlled AGI is one in which AGIs are used more or less only to automate labor that would otherwise be done by humans, never to go beyond what we humans could do on our own. Governments and the fortunate economic classes among the public are going to desire the safest, most controlled AGI; the lower classes are going to have very few options but to accept whatever AGI happens; I personally want Friendly AGI that can be damn well wielded to go beyond human performance at achieving human goals, thus filling the world with Pure Awesomeness.
Yes, but we can hardly call it World Optimization.
Hmmm… I think that might help recruit active participants even if it doesn’t really help get passive supporters.
I emphatically think we should be asking, “What is the base rate of validity for claims similar to ours that the average person has likely heard about?” So if we want to preach a gospel of risk and danger, we should take into account just how seriously the average person or policy-maker has learned from experience to take such warnings, and how many other forms of risk and danger they are trying to weigh at once. Even though FHI, for instance, doesn’t consider global warming an existential risk, I think for the average person the expected value of damage from global warming is a lot higher than the expected value of damage from UFAI or harmful nanotechnology—because their subjective probabilities on any kind of AI or nanotechnology sufficiently powerful to cause harm are extremely low. So our claims get filed under “not nearly as pressing as global warming”.
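(To make the arithmetic behind that explicit, here is a toy expected-damage comparison; every probability and damage figure below is an invented placeholder rather than an estimate, purely to show how a tiny subjective probability swamps a huge potential harm.)

```python
# All numbers are invented placeholders for illustration, not estimates.
# Each entry: (subjective probability of serious harm, damage if it happens).
risks = {
    "global warming": (0.8, 1e12),   # seen as likely; large but bounded damage
    "UFAI":           (1e-6, 1e15),  # far larger damage, but given a tiny probability
}

# Expected damage = probability * damage.
for name, (p, damage) in risks.items():
    print(f"{name}: expected damage ~ {p * damage:.2g}")
```

With placeholder numbers like these, global warming’s expected damage comes out hundreds of times larger than UFAI’s, which is exactly the “filed under less pressing” effect described above.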
(I’ve committed this sin myself, and I’m not entirely sure I consider it a sin. The primary reason I think AI risk should be addressed quickly is that it’s comparatively easy to address, and successfully addressing it has a high positive pay-off in and of itself. If we had to choose one of two risks to justify retooling the entire global economy by force of law, I would have to choose global warming.)