Apparently it’s also common to not include uploads in the definition of AI. For example, here’s Eliezer:
Perhaps we would rather take some other route than AI to smarter-than-human intelligence, say, augment humans instead? To pick one extreme example, suppose the one says: The prospect of AI makes me nervous. I would rather that, before any AI is developed, individual humans are scanned into computers, neuron by neuron, and then upgraded, slowly but surely, until they are super-smart; and that is the ground on which humanity should confront the challenge of superintelligence.
Yeah, there’s a distinction between material targeting a broad audience, where people describe WBE as a form of AI, and some “inside baseball” talk in which “AI” is used in contrast to WBE.
That paper was written for the book “Global Catastrophic Risks”, which I assume is aimed at a fairly general audience. Also, looking at the table of contents for that book, Eliezer’s chapter was the only one talking about AI risks, and he didn’t mention the three risks listed in my post that you consider to be AI risks.
Do you think I’ve given enough evidence to support the position that many people, when they say or hear “AI risk”, are either explicitly thinking of something narrower than your definition of “AI risk”, or have not explicitly considered how to define “AI” but are still thinking of a fairly narrow range of scenarios?
Besides that, can you see my point that an outsider/newcomer who looks at the public materials put out by SI (such as Eliezer’s paper and Luke’s Facing the Singularity website) and typical discussions on LW would conclude that we’re focused on a fairly narrow range of scenarios, which we call “AI risk”?
Yes.