That paper was written for the book “Global Catastrophic Risks”, which I assume is aimed at a fairly general audience. Also, looking at the table of contents for that book, Eliezer’s chapter was the only one talking about AI risks, and he didn’t mention the three risks listed in my post that you consider to be AI risks.
Do you think I’ve given enough evidence to support the position that many people, when they say or hear “AI risk”, are either explicitly thinking of something narrower than your definition of “AI risk”, or have not explicitly considered how to define “AI risk” but are still thinking of a fairly narrow range of scenarios?
Besides that, can you see my point that an outsider/newcomer who looks at the public materials put out by SI (such as Eliezer’s paper and Luke’s Facing the Singularity website) and typical discussions on LW would conclude that we’re focused on a fairly narrow range of scenarios, which we call “AI risk”?
Yes.