Do you know of research supporting debiasing scope insensitivity by introducing differences in kind that approximately preserve the subjective quantitative relationship? If not I will look for it, but I don’t want to if you already have it at hand.
I am thinking in particular of Project Steve. Rather than counter a list of many scientists who “Dissent from Darwinism” with a list of many scientists who believe evolution works, they made a list of hundreds of scientists named Steve who believe evolution works.
In the mind, "many people" is approximately equal to "many people," be it hundreds or thousands, but "many people" feels like fewer than "many Steves." That's the theory, anyway.
Intuitively it sounds like it should work, but I don’t know if there are studies supporting this.
There’s our solution to scope insensitivity about existential risks. “If unfriendly AI undergoes an intelligence explosion, millions of Steves will die. Won’t somebody please think of the Steves?”