We should trust the professionals who predict these sorts of things.
What? Why? How do you decide which professionals to trust? (Nick Bostrom is just some guy with a PhD; there are lots of those, and most of them aren’t predicting a robot apocalypse. Eliezer Yudkowsky never graduated from high school!)
The reason I’m concerned about existential risk from artificial intelligence is that the arguments actually make sense. (Human intelligence has had a big impact on the planet, check; there’s no particular reason to expect humans to be the most powerful intelligence possible, check; there’s no particular reason to expect an arbitrary intelligence to have humane values, check; humans are made out of atoms that can be used for other things, check and mate.)
If you think your audience just isn’t smart enough to evaluate arguments, then, gee, I don’t know, maybe using a moment of particular receptiveness to plant a seed that gets them to open their wallets to the right professionals later is the best you can do? That’s a scary possibility; I would feel much safer about the fate of a world that knew how to systematically teach methods of thinking that get the right answer, rather than having to gamble on the people who know how to think about objective risks also being able to win a marketing war.
What? Why? How do you decide which professionals to trust?
I was telling my friends and family to prep for the coronavirus very early on. At the time, the main response was, “ok, chill, don’t panic, we’ll see what happens”. Now that things have gotten crazy, they think it’s impressive that I saw this coming ahead of time. That’s what my thinking was for point #3: perhaps this sort of response is common, at least amongst some non-trivial percentage of the population.
If you think your audience just isn’t smart enough to evaluate arguments, then, gee, I don’t know, maybe using a moment of particular receptiveness to plant a seed that gets them to open their wallets to the right professionals later is the best you can do? That’s a scary possibility; I would feel much safer about the fate of a world that knew how to systematically teach methods of thinking that get the right answer, rather than having to gamble on the people who know how to think about objective risks also being able to win a marketing war.
I very much agree, but it seems overwhelmingly likely that we live in a world where we can’t rely on people to evaluate the arguments. And we have to act based on the world that we do live in, even if that world is a sad and frustrating one.
I think you explained this, but it took me some parsing of your comment to quite get it, so here it is spelled out more explicitly: my interpretation of what you’re saying is that “ordinary people” (who aren’t following situations closely) who are trying to figure out whom to trust should update towards trusting people who predicted the coronavirus early (i.e., as an update on those people being Correct Contrarians).
Yes, exactly. Thank you for clarifying. I just read my original comment again and I think I didn’t make it very clear.

I agree with your main criticism. It’s well put too!
That’s a scary possibility; I would feel much safer …
Maybe doing this is the best that one can do (so … shut up and multiply). I don’t think it is (because I’d expect it to backfire).
(But I think we should also pursue teaching people how to think rationally.)
I think AI-risk outreach should focus on the existing or near-term non-friendly AI that people already hate or distrust (and with some good reason) – not as an end goal, but as part of a campaign to bridge the inferential distance from people’s current understanding to the larger risks we imagine and wish to avoid.