Well, like I said, AI risk is a very important cause, and working on a specific problem can help focus the mind, so running a series of AI-researcher-specific rationality seminars would offer the benefits of (a) reducing AI risk, (b) improving morale, and (c) encouraging rationality researchers to test their theories using a real-world example. That’s why I think it’s a good idea for CFAR to run a series of AI-specific seminars.
What is the marginal benefit gained by moving further along the road to specialization, from “roughly half our efforts these days happen to go to running an AI research seminar series” to “our mission is to enlighten AI researchers”? The only marginal benefit I would expect is the potential for an even more rapid reduction in AI risk, caused by being able to run, e.g., 4 seminars a quarter for AI researchers, instead of 2 for AI researchers and 2 for the general public. I would expect any such potential to be seriously outweighed by the costs I describe in my main post (e.g., losing out on rationality techniques that would be invented by people who are interested in other issues), such that the marginal effect of moving from 50% specialization to 100% specialization would be to increase AI risk. That’s why I don’t want CFAR to specialize in educating AI researchers to the exclusion of all other groups.
What is the marginal benefit gained by moving further along the road to specialization, from “roughly half our efforts these days happen to go to running an AI research seminar series” to “our mission is to enlighten AI researchers”? The only marginal benefit I would expect is the potential for an even more rapid reduction in AI risk, caused by being able to run, e.g., 4 seminars a quarter for AI researchers, instead of 2 for AI researchers and 2 for the general public.
Yes, I agree that this is the important question. I think there are benefits from stronger coordination on AI safety among (1) CFAR staff, (2) CFAR supporters, and (3) CFAR participants that are not captured by a quantitative increase in the number of seminars being run.
In the ideal situation, you try to create a group of people who have common knowledge that everyone else in the group is actually dedicated to AI safety. That common knowledge lets them coordinate better, because they can act and make plans under the assumption that everyone else is dedicated to AI safety, at every level of meta (e.g. when your plans are contingent on someone else’s plans). If CFAR instead continues to publicly present as approximately cause-neutral, those assumptions shatter, and people can’t rely on each other or coordinate as well. It would be pretty difficult to quantify the benefit of this, but I’d be skeptical of any confident, low upper bound.
There are also benefits from CFAR signaling that it cares enough about AI safety in particular to drop cause neutrality; that could encourage some people to take the cause more seriously who otherwise might not have.
Yeah, that pretty much sums it up: do you think it’s more important for rationalists to focus even more heavily on AI research so that their example will sway others to prioritize FAI, or do you think it’s more important for rationalists to broaden their network so that rationalists have more examples to learn from?
Shockingly, as a lawyer who’s working on homelessness and donating to universal income experiments, I prefer a more general focus. Just as shockingly, the mathematicians and engineers who have been focusing on AI for the last several years prefer a more specialized focus. I don’t see a good way for us to resolve our disagreement, because the disagreement is rooted primarily in differences in personal identity.
I think the evidence is undeniable that rationality memes can help young, awkward engineers build a satisfying social life and increase their productivity by 10% to 20%. As an alum of one of CFAR’s first minicamps back in 2011, I’d hoped that rationality would amount to much more than that. I was looking forward to seeing rationalist tycoons, rationalist Olympians, rationalist professors, rationalist mayors, rationalist DJs. I assumed that learning how to think clearly and act accordingly would fuel a wave of conspicuous success, which would in turn attract more resources for the project of learning how to think clearly, in a rapidly expanding virtuous cycle.
Instead, five years later, we’ve got a handful of reasonably happy rationalist families, an annual holiday party, and a couple of research institutes dedicated to pursuing problems that, by definition, will provide no reliable indicia of their success until it is too late. I feel very disappointed.
I think a lot of this is a fair concern (I care about AI but am currently neutral/undecided on whether this change was a good one).
But I also note that “a couple of research institutes” sweeps a lot of work into deliberately innocuous-sounding words.
First, we have lots of startups that aren’t AI-related and that I think were in some fashion facilitated by the overall rationality community project (with CFAR playing a major role in pushing that project forward).
We also have Effective Altruism Global, and the many wings of the EA community that have benefited from CFAR and from Eliezer’s original writings; that has had huge benefits for plenty of cause areas other than AI. We have your aforementioned young, awkward engineers with their 20% increase in productivity, often earning to give (often to non-AI causes) or embarking on startups of their own.
Second, very credible progress has happened on AI safety as a result of the institutions working on it. Elon Musk pledged $10 million to AI safety, and he did that because FLI held a conference bringing him and top AI people together; FLI was able to do that because of a sizeable base of CFAR-inspired volunteers, as well as the FLI leadership having attended CFAR.
Even if everything MIRI does turns out to be worthless (which I also think is unlikely), FLI has demonstrably changed the landscape of AI safety.
do you think it’s more important for rationalists to focus even more heavily on AI research so that their example will sway others to prioritize FAI, or do you think it’s more important for rationalists to broaden their network so that rationalists have more examples to learn from?
I think this question implicitly assumes as a premise that CFAR is the main vehicle by which the rationality community grows. That may be more or less true now, and it may plausibly become less true in the future; but most interestingly, it suggests that you already understand the value of CFAR as a coordination point (for rationality in general). That’s the kind of value I think CFAR is trying to generate in the future as a coordination point for AI safety in particular, because it might in fact turn out to be that important.
I sympathize with your concerns (I would love for the rationality community to be more diverse along all sorts of axes), but I worry they’re predicated on a perspective that treats existential-risk-like topics as luxuries we should maybe devote a little time to, but that aren’t particularly urgent. If you had a stronger sense of urgency around them as a group (not necessarily around any of them individually), you might have more sympathy for people (such as the CFAR staff) who really, really just want to focus on them, even though they’re highly uncertain and even though there are no obvious feedback loops, because they’re important enough to work on anyway.
I am always trying to cultivate a little more sympathy for people who work hard and have good intentions! CFAR staff definitely fit in that basket. If your heart’s calling is reducing AI risk, then work on that! Despite my disappointment, I would not urge anyone who’s longing to work on reducing AI risk to put that dream aside and teach general-purpose rationality classes.
That said, I honestly believe that there is an anti-synergy between (a) cultivating rationality and (b) teaching AI researchers. I think each of those worthy goals is best pursued separately.
That seems fine to me. At some point someone might be sufficiently worried about the lack of a cause-neutral rationality organization to start a new one themselves, and that would probably be fine; CFAR would probably try to help them out. (I don’t have a good sense of CFAR’s internal position on whether they should themselves spin off such an organization.)
At some point someone might be sufficiently worried about the lack of a cause-neutral rationality organization to start a new one themselves, and that would probably be fine
Incidentally, if someone decides to do this, please advertise it here. This change in focus has made me stop my (modest) donations to CFAR. If someone started a cause-neutral rationality-building institute, I’d fund it at a higher(*) level than I funded CFAR.
(*) One of the things that restrained my giving to CFAR in the last few years, other than a lack of money until recently, was uncertainty over their cause neutrality. They seemed to be biased in the causes they pushed for, and that made me hesitant to fund them further. Now that they’ve come out of the closet on the issue, I’m against giving them even one cent.