I didn’t follow CFAR that closely, so I don’t know how transparent you were that this was a MIX of rationality improvement AND AI-Safety evangelism.
How transparent we were about this varied by year. How much different ones of us were trying to do different mixes of this, across different programs, also varied by year, which changed the ground truth we would have been being transparent about. In the initial 2012 minicamps, we were still legally part of MIRI and included a class or two on AI safety. Then we kinda dropped it from the official stuff; I still had it as a substantial motivation (Julia and some of the others didn’t, I think), and it manifested for me mostly in trying to retain control of the organization, in e.g. wanting to get things like Bayes in the curriculum (because I thought those were needed or helpful for parsing the AI risk argument), and in choices of who to admit. Later (2016? I don’t remember) we brought it back in more explicitly in our declared missions/fundraiser posts/etc., as “rationality for its own sake, for the sake of existential risk.” Also later (2015 on, I think) we ran some specialized AI safety programs, while still not having AI content explicitly in the mainline.
wanting to get things like Bayes in the curriculum (because I thought those were needed or helpful for parsing the AI risk argument)
I do not think this is true. I snapped to ‘Oh God this is right and we’re all dead quite soon’ as a result of reading a short story about postage stamps something like fifteen years ago, and I was totally innocent of Bayesianism in any form.
It’s not a complicated argument at all, and you don’t need any kind of philosophical stance to see it.
I had exactly the same ‘snap’ reaction to my first exposure to ideas like global warming, overpopulation, Malthus, coronavirus, asteroids, dysgenics, animal suffering, many-worlds, euthanasia, etc., ad inf. Just a few clear and simple facts, and maybe a bit of mathematical intuition, but nothing you wouldn’t get from secondary school, lead immediately to a hideous or at least startling conclusion.
I don’t know what is going on with everyone’s inability to get these things. I think it’s more a reluctance to take abstract ideas seriously. Or maybe needing social proof before thinking about anything weird.
I don’t even think it’s much to do with intelligence. I’ve had conversations with really quite dim people who nevertheless ‘just get’ this sort of thing. And many conversations with very clever people who can’t say what’s wrong with the argument but nevertheless can’t take it seriously.
I wonder if it’s more to do with a natural immunity to peer pressure, and in fact, love of being contrarian for the sake of it (which I have in spades, despite being fairly human otherwise), which may be more of a brain malformation than anything else. It feels like it’s related to a need to stand up for the truth even when (possibly even because) people hate you for it.
Maybe the right path here is to find the already existing correct contrarians, rather than to try to make correct contrarians out of normal well-functioning people.
Later (2016? I don’t remember) we brought it back in more explicitly in our declared missions/fundraiser posts/etc., as “rationality for its own sake, for the sake of existential risk.”
Looks like late 2016.