SI/LW sometimes gives the impression of being a doomsday cult
To whom? In the post you linked, the main source of the concern (Google hits) turned out not to mean the thing the author originally thought (edit: this is false. Sorry). Merely “raising the issue” is just privileging the hypothesis.
Anywho, is the main idea of this post “this other bad stuff is similarly bad, and SI could be doing similar amounts to reduce the risk of these bad things?” I seem to recall their justification for focusing on AI was that, with self-improving AI, you only need to get it right the first time: one person could eliminate the risk if they could solve the right technical problems. With preventing war or preventing upload labor, on the other hand, you need all or most people to cooperate with you, making the marginal effect of any one group smaller.
The post was triggered by a private message from someone, so unfortunately I can’t link to it.
Anywho, is the main idea of this post “this other bad stuff is similarly bad, and SI could be doing similar amounts to reduce the risk of these bad things?”
Not quite. I’m saying there are a bunch of Singularity-related risks that aren’t AI risks, and a bunch of Singularity-related opportunities that aren’t AI opportunities. The AI-related opportunities affect the non-AI risks, and the non-AI opportunities affect the AI risks. (For example, successfully building FAI would prevent war as much as it prevents UFAI.) We shouldn’t be thinking just about AI risks and opportunities at this point, or giving the impression that we are.