CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype
Summary: We outline CFAR’s purpose, our history in 2014, and our plans heading into 2015.
One of the reasons we’re publishing this review now is that we’ve just launched our annual matching fundraiser, and we want to give prospective donors the information they need to decide. This is the best time of year to decide to donate to CFAR: donations up to $120k will be matched until January 31.[1]
To briefly preview: For the first three years of our existence, CFAR mostly focused on getting going. We followed the standard recommendation to build a ‘minimum viable product’, the CFAR workshops, that could test our ideas and generate some revenue. Coming into 2013, we had a workshop that people liked (9.3 average rating on “Are you glad you came?”; a more recent random survey showed 9.6 average rating on the same question 6-24 months later), which helped keep the lights on and gave us articulate, skeptical, serious learners to iterate on. At the same time, the workshops are not everything we would want in a CFAR prototype; it feels like the current core workshop does not stress-test most of our hopes for what CFAR can eventually do. The premise of CFAR is that we should be able to apply the modern understanding of cognition to improve people’s ability to (1) figure out the truth (2) be strategically effective (3) do good in the world. We have dreams of scaling up some particular kinds of sanity. Our next goal is to build the minimum strategic product that more directly justifies CFAR’s claim to be an effective altruist project.[2]
Highlights from 2014
Our brand perception improved significantly in 2014, which matters because it leads to companies being willing to pay for workshop attendance. We were covered in Fast Company—twice—the Wall Street Journal, and The Reasoner. Other mentions include Forbes, Big Think, Boing Boing, and Lifehacker. We’ve also had some interest in potential training for tech companies.
Our curriculum is gaining a second tier in the form of alumni workshops. We tried 4 experimental alumni workshops, 3 of which went well enough to be worth iterating on:
The Hamming Question: “What are the most important problems in your life, and why aren’t you working on them?” This 2.5-day workshop was extremely well received, and gave rise to a new unit for our introductory workshop.
Assisting Others[3]: A two-weekend (training, then practicum) workshop investigating the close link between helping others debug their problems, and better debugging your own problems. We ran a version of this in the Bay Area that worked, and an abridged version in the UK that didn’t. (This was our fault. We’re sorry.)
Attention Workshop: A 2.5-day workshop on clearing mental space. This failed and taught us some important points about what doesn’t work.
Epistemic Rationality for Effective Altruists: A standalone 2.5-day workshop on applying techniques from the introductory workshop to factual questions, especially those related to effective altruism. (More on this below.) The attendees from this and the Hamming workshop spontaneously organized recurring meetups for themselves.
Our alumni community continues to grow. There are now 550 CFAR alumni, counting 90 from SPARC. It’s a high-initiative group. Startups by CFAR alumni include: Apptimize; Bellroy; Beeminder; Complice; Code Combat; Draftable; MealSquares; OhmData; Praxamed; Vesparum; Teleport; Watu; Wave; ZeroCater.[4] There is a highly active mailing list with over 400 members, and over 600 conversation threads, over 30 of which were active in the last month. We also ran our first-ever alumni reunion, and started a weekly alumni dojo. This enabled further curricular experimentation, and allowed alumni ideas and experiences to feed into curricular design.
SPARC happened again, with more-honed curriculum and nearly twice as many students.
Basic operations improved substantially. We’ll say more on this in section 2.
Iteration on the flagship workshop continues. We’ll say more on this (including details of what we learned, and what remains puzzling) in section 3.
Improving operations
The two driving themes of CFAR during 2014 were (1) making our operations more stable and sustainable, and (2) pulling our introductory workshop out of a local optimum and getting back on track toward something more like a ‘full prototype’ of the CFAR concept.
At the end of 2013, our bank balance was negative $30,000 and we had borrowed money to make payroll, placing us in the ‘very early stage, struggling startup’ phase. Almost all of our regular operations, such as scheduling interviews for workshop admissions, were being done by hand. Much of our real progress in 2014 consisted of making things run smoothly and getting past the phase where treading water requires so many weekly hours that nobody has time for anything else. Organizational capital is real, and we had to learn the habit of setting aside time and effort for accumulating it. (In retrospect, we were around a year too slow to enter this phase, although in the very early days it was probably correct to be building everything to throw away.)
A few of the less completely standard lessons we think we learned are as follows:
Rank-order your busyness, especially when you’re passing up organizational-capital improvements. Think “This is one of the 3 busiest weekends of the year,” not “I’m too busy to do it right now.” The rank states exactly how large a hit you’re accepting by letting “important but not urgent” work be postponed during every period at least that busy, and it forces calibration.
Even in crunch times, take moments to update. (E.g., do one-sentence journal entries about what just happened / ideas for improvement after each Skype call.) The crunchiest moments are often also the most important to optimize, and even a single sentence of thought can give you a lot of the value from continuing to optimize.
Use arithmetic to estimate the time/money/staff cost of continuing to do Y the usual way, versus optimizing it. If the arithmetic indicates 10X or more savings, do it even if it requires some up-front cost. (No really, actually do the arithmetic.)
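As a concrete illustration of actually doing the arithmetic, here is a minimal sketch. All numbers below (hours, rates, tool cost) are invented for illustration, not CFAR’s actual figures:

```python
# Entirely hypothetical numbers, for illustration only: is it worth
# replacing hand-scheduled interviews with an online scheduling tool?

hours_per_week_manual = 6.0      # staff hours spent scheduling by hand
hours_per_week_automated = 0.5   # residual hours once a tool handles it
weeks_per_year = 50
cost_per_hour = 30               # fully loaded cost of a staff hour, in $

yearly_saving = (hours_per_week_manual - hours_per_week_automated) \
    * weeks_per_year * cost_per_hour
upfront_cost = 2000              # one-time setup cost of the tool, in $

print(yearly_saving)                 # 8250.0
print(yearly_saving / upfront_cost)  # ~4x payback in the first year
```

Five minutes of multiplication like this is often enough to show that an “up-front cost” objection doesn’t survive contact with the numbers.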
We also learned a large number of other standard lessons. As of the end of 2014, we think that basic processes at CFAR have improved substantially. We have several months of runway in the bank account—our finances are still precarious, but at least not negative, and we think they’re on an improving path. Our workshop interviews and follow-up sessions have an online interface for scheduling instead of being done by hand (which frees a rather surprising amount of energy). The workshop instructors are almost entirely not doing workshop ops. Accounting has been streamlined. The office has nutritious food easily available, without the need to quit working when one gets hungry.
CFAR feels like it is out of the very-early-startup stage, and able to start focusing on things other than just staying afloat. We feel sufficiently non-overwhelmed that we can take the highest-value opportunities we run into, rather than having all staff members overcommitted at all times. We have a clearer sense of what CFAR is trying to do; of what our internal decision-making structure is; of what each of our roles is; of the value of building good institutions for recording our heuristic updates; etc. And we have will, momentum, and knowledge with which to continue improving our organizational capital over 2015.
Attempts to go beyond the current workshop and toward the ‘full prototype’ of CFAR: our experience in 2014 and plans for 2015
Where are we spending the dividends from that organizational capital? On more ambitious curriculum: specifically, a “full prototype” of the CFAR aim.
Recall that the premise of CFAR is that we should be able to apply the modern understanding of cognition to improve people’s ability to (1) figure out the truth; (2) be strategically effective; and (3) do good in the world. By a “prototype”, or “minimum strategic product”, we mean a product that actually demonstrates that the above goal is viable (and, thus, that more directly justifies CFAR’s claim to be an effective altruist project). For CFAR, this will probably require meaningfully boosting some fraction of participants along all three axes (epistemic rationality; real-world competence; and tendency to do good in the world). [5]
So that’s our target for 2015. In the rest of this section, we’ll talk about what CFAR did during 2014, go into greater detail on our attempt to build a curriculum for epistemic rationality, and describe our 2015 goals in more detail.
---
One of CFAR’s longer-term premises is that we can eventually apply the full scientific method to the problem of constructing a rationality curriculum (by measuring variations, counting things, re-testing, etc.); we aim to eventually be an evidence-based organization. In our present state this continues to be a lot harder than we would like; our 2014 workshop evaluation, for example, consisted of crude “what do you feel you learned?” surveys and our own gut impressions. The sort of randomized trial we ran in 2012 is extremely expensive for us because it requires randomly not admitting workshop applicants, and we don’t presently have good-enough outcome metrics to justify that expense. Life outcomes, which we see as the gold standard, are big, noisy variables with many contributing factors (there’s a lot that adds to or subtracts from your salary besides having attended a CFAR workshop), which means the randomized tests we can afford to run on life outcomes are underpowered. Testing later ability to perform specific skills doesn’t stress-test the core premise in the same way. In 2014 we continued to track correlational data and ran more detailed random follow-up surveys; this is just enough to keep such analyses in the set of things we regularly do, and to remind ourselves that we are supposed to be doing better science later.
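To illustrate why noisy life outcomes make trials underpowered, here is a textbook normal-approximation power calculation (a generic statistics sketch, not a CFAR analysis). Detecting a subtle effect of 0.1 standard deviations requires on the order of 1,500 people per arm, far beyond workshop scale:

```python
from statistics import NormalDist

def n_per_group(effect_size_sd, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sample comparison of
    means, detecting a difference of `effect_size_sd` standard deviations
    (standard normal-approximation formula)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return 2 * ((z_alpha + z_beta) / effect_size_sd) ** 2

print(round(n_per_group(0.5)))   # 63: a large effect is feasible to detect
print(round(n_per_group(0.1)))   # 1570: a subtle effect is far beyond workshop scale
```

Since outcomes like salary have huge person-to-person variance relative to any plausible workshop effect, the effect size in standard-deviation units is small, and the required sample balloons accordingly.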
At the start of 2014, we thought our workshops had reached a point of decent order, and we were continuing to tweak them. Partway through 2014 we realized we had reached a local optimum and become stuck (well short of a full prototype / minimum strategic product). So then we smashed everything with a hammer and tried:
4 different advanced workshops for alumni:
An epistemic rationality workshop for effective altruist alumni;
An alumnus workshop on focusing attention (failed);
An alumnus workshop on the Hamming Question, “What are your most important life problems? Why aren’t you solving them?”
2 attempts at an alumnus workshop on how to do 1-on-1 teaching / assistance of cognitive skills (first succeeded, second failed; our fault).
A 1.5-day version of the introductory workshop;
A workshop with only 10 participants, with the entire class taught in a single room (extremely popular, but not yet scalable);
Shorter modules breaking up the 60-minute-unit default;
An unconference-style format for the 2014 alumni reunion.
These experiments ended up feeding back into the flagship workshop, and we think we’re now out of the local optimum and making progress again.
Epistemic rationality curriculum
In CFAR’s earliest days, we thought epistemic rationality (figuring out the answers to factual questions) was the main thing we were supposed to teach, and we took some long-suffering volunteers and started testing units on them. It turned out that while all of our material was pretty terrible, the epistemic rationality parts were even more terrible than the rest.
At first our model was that epistemic rationality was hard and we needed to be better teachers, so we set out to learn general teaching skills. People began to visibly enjoy many of our units. But not the units we thought of as “epistemic rationality”. They still visibly suffered through those.
We started to talk about “the curse of epistemic rationality”, and it made us worry about whether it would be worth having a CFAR if we couldn’t resolve it somehow. Figuring out the answers to factual questions, the sort of subject matter that appears in the Sequences, the kind of work that we think of scientists as carrying out, felt to us like it was central to the spirit of rationality. We had a sense (and still do) that if all we could do was teach people how to set up trigger-action systems for remembering to lock their house doors, or even turn an ugh-y feeling of needing to do a job search into a series of concrete actions, this still wouldn’t be making much progress on sanity-requiring challenges over the next decades. We were worried it wouldn’t contribute strategic potential to effective altruism.
So we kept the most essential-feeling epistemic rationality units in the workshop, despite participants’ lowish unit ratings and despite our own feeling that those units weren’t “clicking”. We thought: “Maybe, if we have workshops full of units that people like, we can just make them sit through some units that they don’t like as much, and get people to learn epistemic rationality that way.” The “didn’t like” part was painful no matter what story we stuck on it. We rewrote the Bayes unit from scratch more or less every workshop. All of our “epistemic rationality” units changed radically every month.
One ray of light appeared in mid-2013 with the Inner Simulator unit, which included techniques about imagining future situations to see how surprised you felt by them, and using this to determine whether your Inner Simulator really strongly expected a new hire to work out or whether you are in fact certain that your project will be done by Thursday. This was something we considered to be an “epistemic rationality” unit at the time, and it worked, in the sense that it (a) set up concepts that fed into our other units, (b) seemed to actually convey some useful skills that people noticed they were learning, and (c) people didn’t hate it.
(And it didn’t feel like we were just trying to smuggle it in from ulterior motives about skills we thought effective altruists ought to have, but that we were actually patching concrete problems.)
A miracle had appeared! We ignored it and kept rewriting all the other “epistemic rationality” units every month.
But a lesson that we only understood later started to seep in. We started thinking of some of our other units as having epistemic rationality components in them—and this in turn changed the way we practiced, and taught, the other techniques.
The sea change in our thinking might be summarized as a shift from “epistemic rationality is a matter of whole units devoted to answering factual questions” to “there is a truth element that appears within many skills”: a point where you would like your System 1 or System 2 to see some particular fact as true, figure out what is true, or resolve an argument about what will happen next.
We used to think of Comfort Zone Expansion[6] as being about desensitization. We would today think of it as being about, for example, correcting your System 1's anticipation of what happens when you talk to strangers.
We used to think of Urge Propagation[6] as being about applying behaviorist conditioning techniques to yourself. Today we teach a very different technique under the same name: a technique that is about dialoguing with your affective brain until System 1 and System 2 acquire a common causal model of whether task X will in fact help with the things you most care about.
We thought of Turbocharging[6] as being about instrumental techniques for acquiring skills quickly through practice. Today we would also frame it as, “Suppose you didn’t know you were supposed to be ‘Learning Spanish’. What would an outside-ish view say about what skill you might be practicing? Is it filling in blank lines in workbooks?”
We were quite cheered when we tried eliminating the Bayes unit entirely and found that other, clearly practical units depended on it: they wanted to call on the ability to look for and identify evidence.
Our Focused Grit and Hard Decisions units are entirely “epistemic”—they are straight out just about acquiring more accurate models of the world. But they don’t feel like the old “curse of epistemic rationality” units, because they begin with an actual felt System 1 need (“what shall I do when I graduate?” or similar), and they stay in contact with System 1's reasoning process all the way through.
When we were organizing the UK workshop at the end of 2014, there was a moment where we had the sudden realization, “Hey, maybe almost all of our curriculum is secretly epistemic rationality and we can organize it into ‘Epistemic Rationality for the Planning Brain’ on day 1 and ‘Epistemic Rationality for the Affective Brain’ on day 2, and this makes our curriculum so much denser that we’ll have room for the Hamming Question on day 3.” This didn’t work as well in practice as it did in our heads (though it still went over okay) but we think this just means that the process of our digesting this insight is ongoing.
We have hopes of making a lot of progress here in 2015. It feels like we’re back on track to teaching epistemic rationality—in ways where it’s forced by need to usefully tackle life problems, not because we tacked it on. And this in turn feels like we’re back on track toward teaching that important thing we wanted to teach, the one with strategic implications containing most of CFAR’s expected future value.
(And the units we think of as “epistemic” no longer get rated lower than all our other units; and our alumni workshop on Epistemic Rationality for Effective Altruists went over very well and does seem to have helped validate the propositions that “People who care strongly about EA’s factual questions are good audiences for what we think of as relevant epistemic skills” and “Having learned CFAR basics actually does help for learning more abstract epistemic rationality later”.)
Goals for 2015
In 2015, we intend to keep building organizational capital, and to use those dividends to keep pushing on the epistemic rationality curriculum and toward the minimum strategic product that stress-tests CFAR’s core value propositions. We’ve also set the following concrete goals[7]:
Find some way to track a metric for ‘How likely we think this person is to end up being strategically useful to the world’, even if it’s extremely crude.[8]
Actually start tracking it, even if internally, subjectively, and terribly.
Try to boost alumni scores on the three components of “Figure out true things”, “Be effective” and “Do-gooding” (from our extremely crude measure).
Cause 30 new people to become engaged in high-impact do-gooding in some interesting way, including 10+ with outside high status and no previous involvement with EA.
Cause 10 high-impact do-gooder alumni to say that, because of interacting with CFAR, they became much more skilled/effective/well-targeted on strategically important things. Have this also be plausible to their coworkers.
Nuts, Bolts, and Financial Details
$5.3k/month for office rent;
$30k/month for salaries (includes tax, health insurance, and contractors; our full-time people are still paid $3.5k/month);
$7k/month for total other non-workshop costs (flights and fees to attend others’ trainings; office groceries; storage unit, software subscriptions; …)
Alumni reunion: $34k income; $38k non-staff costs (for ~100 participants)
Hamming: $3.6k revenue; $3k non-staff costs
Assisting thinking: $2.1k revenue; $3.2k non-staff costs
Attention: $3.3k revenue; $2.7k non-staff costs
Epistemic Rationality for Effective Altruists: $5k revenue; $3k costs
Dojo: free.
“A taste of rationality”: $5k revenue; $2.6k non-staff costs.
The big picture and how you can help
[1] That is: by giving up a dollar, you can, given some simplifications, cause CFAR to gain two dollars. Much thanks to Peter McCluskey, Jesse Liptrap, Nick Tarleton, Stephanie Zolayvar, Arram Sabeti, Liron Shapira, Ben Hoskin, Eric Rogstad, Matt Graves, Alyssa Vance, Topher Hallquist, and John Clasby for together putting up $120k in matching funds.
[2] This post is a collaborative effort by many at CFAR.
[3] The title we ran it under was “TA training”, but the name desperately needs revision.
[4] This list is missing several startups I can almost recall, and probably several others I can’t; please PM me if you remember one I missed. Many of the startups on this list have multiple founders who are CFAR alumni. Omitted from this list are startups that were completed before the alumni met us, e.g. Skype; we did, however, include startups that were founded before their founders met us and carried on after they became alumni (even when we had no causal impact on the startups). Also of note: many CFAR alumni are in founding or executive positions at EA-associated non-profits, including CEA, CSER, FLI, Leverage, and MIRI. One reason we’re happy about this is that it means the curriculum we’re developing is being developed in concert with people who are trying to really actually accomplish hard goals, and who therefore want more from techniques than just “does this sound cool”.
[5] Ideally, such a prototype might accomplish increases in (1), (2), and (3) in a manner that felt like facets of a single art, or that all drew upon a common base of simpler cognitive skills (such as subskills for getting accurate beliefs into System 1, for navigating internal disagreement, or for overcoming learned helplessness). A “prototype” would thus also be a product that, when we apply local optimization to it, takes us to curricula that are strategically important to the world, rather than, say, to well-honed “feel inspired about your life” workshops.
Relative to this ideal, the current curriculum seems to in fact accomplish some of (2), for all that we don’t have RCTs yet; but it is less successful at (1) and (3). (We’d like, eventually, to scale up (2) as well.) However, we suspect the curriculum contains seeds toward an art that can succeed at (1) and (3); and we aim to demonstrate this in 2015.
[6] Apologies for the jargon. It is probably about time we wrote up a glossary; but we don’t have one yet. If you care, you can pick up some of the vocabulary from our sample workshop schedule.
I donated $4,000 the other week (or I will have once the check clears).
Thank you so much for this.
Thanks for the detailed update! Donated $1,500.
Thank you! It helps our morale, as well as our budget.
Gave $8000
Donated $300. Happy New Year!
Thanks! We appreciate it a lot; and happy new year to you!
Suggestion: A unit on identifying and escaping bad local optima, if you don’t have one already. It seems to me that an awful lot of people-years are lost to situations that are sub-par but painful to get out of (e.g. crappy jobs).
I’d be curious to see a post-mortem on this and other failed efforts. I like that CFAR is willing to acknowledge when it’s screwed up. That I don’t find this willingness terribly surprising says some nice things about the LW-sphere it pulls from.
Generally upvoted, but I think there’s a significant difference between “tried something that didn’t work” and “screwed up”—the former is executing on a correct decision algorithm (which includes explore as well as exploit patterns), the latter means actually making a bad decision given the available information.
I’d also be curious to see an elaboration on the Attention workshop. The concept of attention as a limited and important resource was one of my main takeaways from the 4-day workshop (+discussions on the alumni list), leading me to the tools I needed to gain better focus and not feel overwhelmed all the time. Now and then I try to explain the concepts in conversations with people who I think might benefit from it, so I’d be interested in how not to do it.
Strongly agree with the last two sentences here.
I gave $50, and plan to give substantially more within a year of graduation. That was one hell of a “big picture” section, Anna.
In case LWers are wondering why MIRI didn’t post to LW about its own fundraising drive, that’s because we already finished it.
Also, if your employer does corporate matching (check here) and you haven’t used it all up yet and you’d like to donate to CFAR, remember to do so before January 1st so that your corporate matching for 2014 doesn’t go unused!
Is it currently better to donate to CFAR or MIRI?
Based on the fact that MIRI has finished its fundraising drive and CFAR has not, I’m gonna guess CFAR. Especially because of the matching.
Any other fundraisers interesting to LW going on?
Charity Science, which fundraises for GiveWell’s top charities, needs $35k to keep going this year. They’ve been appealing to non-EAs from the Skeptics community and lots of other folks, and kind of work as a pretty front-end for GiveWell. More here. (Full disclosure: I’m on their Board of Directors.)
Electology is an organization dedicated to improving collective decision making — that is, voting. We run on a shoestring: somewhere in the low five figures of dollars per year. We’ve helped get organizations such as the German Pirate Party and the various US state Libertarian Parties to use approval voting, and gotten bills brought up in several states (no major victories so far, but we’re just starting).
Is a better voting system worth it, even if most people still vote irrationally? I’d say emphatically yes. Plurality voting is just a disaster as a system, filled with pathological results, perverse incentives, and pernicious equilibria. Credible numerical estimates (utility-based simulations) suggest that better systems such as approval voting offer as much improvement again as the move from dictatorship to democracy was.
The first three words here are in contradiction with the last three words… :-/
I presume you’re saying that utility-based simulations are not credible. I don’t think you’re actually trying to say that they’re not numerical estimates. So let me explain what I’m talking about, then say what parts I’m claiming are “credible”.
I’m talking about Monte Carlo simulations of voter satisfaction efficiency. You use a statistical model to generate thousands of electorates (that is, voters with numeric utilities for candidates); a media model to give the voters information about each other; and a strategy model to turn information, utilities, and choice of voting system into valid ballots for that voting system. Then you see who wins each time, and calculate the average overall utility of those winners. Clearly, there are a lot of questionable assumptions in the statistical, media, and strategy models, but the interesting thing is that exploring various assumptions in all three cases shows that the (plurality − dictatorship) ≈ (good system − plurality) relationship is pretty robust, with various systems such as approval, Condorcet, majority judgment, score, or SODA in place of “good system”.
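The simulation loop described above can be sketched in a few lines. This is a toy illustration with stand-in modeling choices (i.i.d. uniform utilities, honest strategies, no media model), not the actual models or code being discussed:

```python
import random

def simulate_vse(n_elections=2000, n_voters=99, n_cands=4):
    """Toy Monte Carlo in the spirit described above: generate random
    electorates, apply a simple honest-strategy model for each voting
    system, and average the total utility of each system's winner."""
    avg = {"plurality": 0.0, "approval": 0.0, "best": 0.0}
    for _ in range(n_elections):
        # Statistical model: i.i.d. uniform utilities (a crude stand-in).
        utils = [[random.random() for _ in range(n_cands)]
                 for _ in range(n_voters)]
        totals = [sum(v[c] for v in utils) for c in range(n_cands)]

        # Plurality (honest): each voter votes for their single favorite.
        plur_votes = [0] * n_cands
        for v in utils:
            plur_votes[v.index(max(v))] += 1
        avg["plurality"] += totals[plur_votes.index(max(plur_votes))]

        # Approval (honest): approve every candidate above your mean utility.
        appr_votes = [0] * n_cands
        for v in utils:
            threshold = sum(v) / n_cands
            for c in range(n_cands):
                if v[c] > threshold:
                    appr_votes[c] += 1
        avg["approval"] += totals[appr_votes.index(max(appr_votes))]

        # Utility-maximizing ("magic best") winner, for reference.
        avg["best"] += max(totals)
    return {k: total / n_elections for k, total in avg.items()}
```

Even in this crude setup, the approval winner tends to land closer to the utility-maximizing candidate than the plurality winner, which is the qualitative shape of the claim being made.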
There are certainly various ways to criticize the above.
“Don’t believe it”: If you think that I’ve messed up my math or not done a good job with the sensitivity analysis, of course you’d question my conclusions. But if you want to play with my code to check it, it’s here.
“Utilitarianism is a bad metric”: It may not be perfect, but as far as I can tell it’s the only rational way to put numbers on things.
“Democracy is a bad idea”: In other words, if you think that the average voter’s estimate of their utility for a candidate has 0 or negative correlation with their true utility of that candidate winning, then this simulation is garbage. I’d respond with the old saying about democracy being the worst system except all the others.
“The advantages of democracy over dictatorship aren’t in terms of who’s in charge”: if you think that democracy’s clear superiority to dictatorship in terms of human welfare comes from something other than choosing better leaders (such as, for instance, reducing the prevalence of civil wars), then improving the voting system might not have a payoff comparable to instituting voting in the first place. I’d respond that this critique is probably partially right; but on the other hand, better leadership could credibly produce better responses to crises (financial, environmental, and/or existential-risk), which could indeed be on the same order as the democracy dividend.
All in all, taking a more outside view, I see how the combination of the above objections would reduce your estimate of the expected “voting system dividend”. Still, when I “shut up and multiply” I get: $80 trillion world GDP × 2% plausible (conservative) effect size in a good year × .1 plausible portion of good years over time × .5 plausible portion of good years over space (some countries’ economies might already be immune to the kind of harm this could prevent) × .5 chance you trust my simulations × .1 correlation of voter preference with utility × .5 probability leadership makes any difference = about $2 billion/year potential payoff in expected value, even without compounding. That seems to me like (a) quite a conservative choice of factors, (b) not a totally implausible end result, and (c) still big enough to care about. Of course, it’s incredibly back-of-the-envelope, but I invite you to try doing the estimation yourself.
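Spelling out that multiplication (the factor values below are copied directly from the estimate in the text):

```python
# Back-of-the-envelope expected value of the "voting system dividend".
world_gdp = 80e12  # $80 trillion
factors = [
    0.02,  # plausible (conservative) effect size in a good year
    0.1,   # plausible portion of good years over time
    0.5,   # plausible portion of good years over space
    0.5,   # chance you trust the simulations
    0.1,   # correlation of voter preference with utility
    0.5,   # probability leadership makes any difference
]
estimate = world_gdp
for f in factors:
    estimate *= f
print(f"${estimate:,.0f}/year")  # $2,000,000,000/year
```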
Actually, no, that’s not what I mean. I have no problems with numerical estimates in general.
What I mean by “credible”, in this context, is “shown to be relevant to real-life situations” and “supported by empirical data”.
You’ve constructed a model. You’ve played with this model and have an idea of how it behaves in different regimes. That’s all fine. But then you imply that this model reflects the real world and it’s at this point that I start to get sceptical and ask for evidence. Not evidence of how your model works, but evidence that the map matches the territory.
The model is not easy to subject to full, end-to-end testing. It seems reasonable to test it one part at a time. I’m doing the best I can to do so:
I’ve run an experiment on Amazon Mechanical Turk involving hundreds of experimental subjects voting in dozens of simulated elections to probe my strategy model.
I’m working on getting survey data and developing statistical tools to refine my statistical model (mostly, posterior predictive checks; but it’s not easy, given that this is a deeper hierarchical model than most).
In terms of the utilitarian assumptions of my model, I’m not sure how those are testable rather than just philosophical / assumed axioms. Not that I regard these assumptions as truly axiomatic, but that I think they’re pretty necessary to get anywhere at all, and in practice unlikely to be violated severely enough to invalidate the work.
I haven’t started work on testing / refining my media model (other than some head-scratching), but I can imagine how to do at least a few spot checks with posterior predictive checks too.
The assumptions that preference and utility correlate positively, even in an environment where candidates are strategic about exploiting voter irrationality, are certainly questionable. But insofar as these are violated, it would just make democracy a bad idea in general, not invalidate the fact that plurality is still a worse idea than other voting systems such as approval. Also, I think it would be basically impossible to test these assumptions without implausibly accurate and unbiased measurements of true utility. Finally, call me a hopeless optimist, but I do actually have faith that democracy is a good idea because “you can’t fool all the people all the time”.
tl;dr: I’m working on this.
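For concreteness, a posterior predictive check of the kind I mean can be sketched on a toy model. This is a pooled Beta-Binomial with made-up poll counts, far simpler than the actual hierarchical election model; the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data: approval counts out of n voters in 8 polls.
n = 200
observed = np.array([92, 101, 88, 110, 95, 90, 104, 97])

# Simple Beta-Binomial model: Beta(1, 1) prior on the approval rate p.
# Posterior after pooling all polls: Beta(1 + successes, 1 + failures).
successes = observed.sum()
failures = n * len(observed) - successes
posterior_draws = rng.beta(1 + successes, 1 + failures, size=4000)

# Posterior predictive: simulate replicated poll sets from each draw.
replicated = rng.binomial(n, posterior_draws[:, None], size=(4000, len(observed)))

# Test statistic: spread across polls (an overdispersion check).
t_obs = observed.std()
t_rep = replicated.std(axis=1)

# Posterior predictive p-value: fraction of replications at least as
# dispersed as the observed data.  Values near 0 or 1 flag model misfit.
p_value = (t_rep >= t_obs).mean()
print(round(p_value, 2))
```

The same recipe extends to hierarchical models; the hard part there is choosing test statistics that exercise each level of the hierarchy rather than just the pooled totals.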
Democracy is complicated. For a simple example, consider full direct democracy: instant whole-population referendums on every issue. I am not sure anyone considers this a good idea—successful real-life democratic systems (e.g. the US) are built on limited amounts of democracy which is constrained in many ways. Given this, democracy looks to be a Goldilocks-type phenomenon where you don’t want too little, but you don’t want too much either.
And, of course, democracy involves much more than just voting—there are heavily… entangled concepts like the rule of law, human rights, civil society, etc.
Full direct democracy is a bad idea because it’s incredibly inefficient (and thus also boring/annoying, and also subject to manipulation by people willing to exploit others’ boredom/annoyance). This has little or nothing to do with whether people’s preferences correlate with their utilities, which is the question I was focused on. In essence, this isn’t a true Goldilocks situation (“you want just the right amount of heat”) but rather a simple tradeoff (“you want good decisions, but don’t want to spend all your time making them”).
As to the other related concepts… I think this is getting a bit off-topic. The question is, is energy (money) spent on pursuing better voting systems more of a valid “saving throw” than when spent on pursuing better individual rationality. That’s connected to the question of the preference/utility correlation of current-day, imperfectly-rational voters. I’m not seeing the connection to rule of law &c.
No, I don’t think so. It is a bad idea even in a society technologically advanced to make it efficient and even if it’s invoked not frequently enough to make it annoying.
People’s preferences are many, multidimensional, internally inconsistent, and dynamic. I am not quite sure what you want to correlate with a single numerical value of “utility”.
Why are you considering only these two options?
The connection is that what is a “better” voting system depends on the context, context that includes things like rule of law, etc.
You’re raising some valid questions, but I can’t respond to all of them. Or rather, I could respond (granting some of your arguments, refining some, and disputing some), but I don’t know if it’s worth it. Do you have an underlying point to make, or are you just looking for quibbles? If it’s the latter, I still thank you for responding (it’s always gratifying to see people care about issues that I think are important, even if they disagree); but I think I’ll disengage, because I expect that whatever response I give would have its own blemishes for you to find.
In other words: OK, so what?
Some people find blemish-finding services valuable, some don’t :-)
Fair enough. Thanks. Again, I agree with some of your points. I like blemish-picking as long as it doesn’t require open-ended back-and-forth.
(small note: the sentence you quote from me was unclear. “because” related to “presume”, not “saying”. But your response to what I accidentally said is still largely cogent in relation to what I meant to say, so the miscommunication isn’t important. Still, I’ve corrected the original. Future readers: lumifer quoted me correctly.)
Well, Intentional Insights is a Rationality-themed nonprofit dedicated to spreading rationality to a broad audience and thus raising the sanity waterline. We have recently received our official nonprofit designation so haven’t had time to plan out and run a fundraiser as such, but we are accepting donations, and they are tax-deductible: anything you give, whether in time/skills/money, would be super-helpful. We especially appreciate those who become monthly donors, as that allows us to plan ahead and also show other potential donors and granting agencies that we have a good base of support and can bring our mission into the world well. We would be happy to talk more to you on the phone/Skype about this matter if you wish, and/or you can donate on the website itself directly. The donation button is on the top left of the website home page, and the monthly recurring donation indication is just below the donation button itself.
This sounds like a good idea, but I had a look at the website and it is unclear to me exactly how you plan to raise the sanity waterline.
Here’s a description of what we plan to do and how we plan to do it. Let me know any questions you might have!
Donated!
My wife and I are monthly donors, and here’s to CFAR having a great 2015! I’d also love to talk about potential collaborations between CFAR and Intentional Insights as we get our own infrastructure and internal operations set up well in the next month or two.
Thanks! Discussing collaborations sounds good; easiest way to do this is to schedule an appointment with me here.
(Others are also very welcome to do this.)
Anna, will do as we get our plans and infrastructure more clear!
Why? You tend to be marketing your workshops to people who’ve already got significant training in much of Traditional Rationality. In my view, much of the world’s irrationality comes from people who have not even heard of the basics or people whose resource constraints do not allow them to apply what they know, or both. In this model, broad improvements in very fundamental, schoolchild-level rationality education and the alleviation of poverty and time poverty are much stronger prospects for improving the world through prevention of Dumb Moves than giving semi-advanced cognitive self-improvement workshops to the Silicon Valley elite.
Mind, if what you’re really trying to do is propagandize the kind of worldview that leads to taking MIRI seriously, you rather ought to come out and say that.
So, I recently started training in the Alexander Technique, which is a well-developed school of thought and practice on how to use bodies well. It’s been taught for about a century, and during the 1940s there was a brief attempt to teach it in schools to children.
My impression is that the children didn’t get all that much out of it- yes, they had better posture, and the students who might have been klutzier were more coordinated. But the people that keep Alexander alive are mostly the performers and musicians and people with painful movement problems- that is, the sort of people that get enough value out of it that it makes sense for them to take special lessons and think about it in their off time and so on.
Similarly, it might be true that while there is a great mass of irrationality out there, cognitive labor, like any other labor, can be specialized- and so focusing your rationality training on people who specialize in thinking makes sense just as focusing your movement training on people who specialize in movement makes sense. (Here I’m including speaking as movement for reasons that are anatomically obvious.)
But supposing your model is correct—that a broad rationality education would do the most good—I seem to recall hearing about an undergraduate-level rationality curriculum being developed by Keith Stanovich, a CFAR advisor, and I suspect Anna or others may know more details. Once we’ve got an undergraduate curriculum being taught, that should teach us enough to develop high-school level curriculum, and so on down to songs that can be sung in kindergarten.
Why? It seems to me that training people to think well is better, because if they end up disagreeing that gives you valuable information to update on.
This would imply that CFAR should be pitching its workshops to academics and government policymakers. Not to be a dick, but the latest local-mobile-social app-kerjigger is not intensive cognitive labor with a high impact on the world. Actual scientific research and public policy-making are (or, at least, scientific research is fairly intensive cognitive labor… I wouldn’t necessarily say it has a high mean impact on any per-unit basis).
I would hope so! But what information indicates CFAR does this?
That’s good, but I worry that it doesn’t go far enough. The issue is not that we’re failing to teach probability theory to kindergartners—they don’t need it and don’t want it. The issue is that our society allows people to walk around thinking that there isn’t actually an external world to which their actions will be held accountable at all, and that subjective feeling both governs reality and normatively dictates correct actions.
To make an offensive political quip: there is the assertion-based community, and the reality-based community; too many people belong to the former and not nearly enough to the latter. The biggest impact we can have on “raising the sanity waterline” is to move people from the group who believe in a Fideist Theory of Truth (“Things are true by virtue of how I feel about them”) to people who believe in the Correspondence Theory of Truth (“Things are true when they match the world outside my head!”), which also thus inspires people to listen to educated domain experts at all.
To give a flagrantly stupid example, we really really really don’t want society’s way of dealing with the Friendly AI problem determined by people who believe that AIs have souls and would never harm anyone because they don’t have original sin. Giving Silicon Valley executives effectiveness workshops will not avert this problem, while teaching the broad public the very basic worldview that the universe is lawful, rather than consciously optimizing for recognizably humanoid goals, is likely to affect this problem.
My understanding is that CFAR is attended by both present and likely future academics; I don’t know about government policymakers. (I’ve met people on national advisory boards from at least two countries at CFAR workshops, but I don’t pretend to know how much influence they have on those boards, or how much influence those boards have on actual policy.)
At time of writing this comment, there are 14 startups listed in the post. What number of them would you consider local-mobile-social apps? (This seems to be an example of “not to be X” signifying “I am aware this is being an X but would like to avoid paying the relevant penalty.”)
I have always gotten the impression from them that they want to be as cause agnostic as is reasonable, but I can’t speak to their probability estimates over time and thus how they’ve updated.
Are there people working on a reproducible system to help people make this move? It’s not at all obvious to me that this would be the comparative advantage of the people at CFAR. (Though it seems to me that much of the CFAR material is helping people finish making that transition, or, at least, get further along it.)
As far as I understand it, CFAR’s current focus is research and developing their rationality curriculum. The workshops exist to facilitate their research; they’re a good way to test which bits of rationality work and to determine the best way to teach them.
In response to the question “Are you trying to make rationality part of primary and secondary school curricula?” the CFAR FAQ notes that:
So I’m fairly sure they agree with you on the importance of making broad improvements to education. It’s also worth noting that effective altruists are among their list of clients, so you could count that as an effort toward alleviating poverty if you’re feeling charitable.
However they go on to say:
Additionally, for them to change public-school curricula they have to first develop a rationality curriculum, precisely what they’re doing at the moment—building a ‘minimum strategic product’. Giving “semi-advanced cognitive self-improvement workshops to the Silicon Valley elite” is just a convenient way to test this stuff.
You might argue for giving the rationality workshops to “people who have not even heard of the basics”, but there are a few problems with that. First, the number of people CFAR can teach in the short term is a tiny percentage of the population, nowhere near enough to have a significant impact on society (unless those people are high-impact people, but then they’ve probably already heard of the basics). Then there’s the fact that rationality just isn’t viewed as useful in the eyes of the general public, so most people won’t care about learning it. Also, teaching the basics of rationality in a way that sticks is quite difficult.
I don’t think CFAR is aiming to propagandize any worldview; they’re about developing rationality education, not getting people to believe any particular set of beliefs (other than perhaps those directly related to understanding how the brain works). I’m curious about why you think they might be (intentionally or unintentionally) doing so.
I truly wish that I were in a position to help make rationality training part of the public school curriculum, because I think that would be of tremendous value to our society. I do work at a library, and people hold workshops there; libraries could be a good place to “spread the word” to people who might be interested in rationality education but may not have heard about it. The workshop would have to be free of charge, though, and CFAR isn’t there yet.
Improvements to collective decision making seem to be potentially an even bigger win. I mean, voting reform; the kind of thing advocated by Electology. Disclaimer: I’m a board member.
Why do I think that? Individual human decisionmaking has already been optimized by evolution. Sure, that optimization doesn’t fit perfectly with a modern need for rationality, but it’s pretty darn good. However, democratic decisionmaking is basically still using the first system that anybody ever thought of, and Monte Carlo utility simulations show that we can probably make it at least twice as good (using a random dictator as a baseline).
On the other hand, achieving voting reform requires a critical mass, while individual rationality only requires individuals. And Electology is not as far along in organizational growth as CFAR. But it seems to me that it’s a complementary idea, and that it would be reasonable for an effective altruist to diversify their “saving throw” contributions. (We would also welcome rationalist board members or volunteers.)
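For the curious, a toy version of such a Monte Carlo utility simulation can be sketched in Python. The utility distribution, honest-voting strategies, and trial counts here are all illustrative assumptions, not Electology’s actual model:

```python
import numpy as np

rng = np.random.default_rng(42)
N_TRIALS, N_VOTERS, N_CANDS = 2000, 99, 5

plur, appr, dictator, best = [], [], [], []
for _ in range(N_TRIALS):
    # Each voter assigns an i.i.d. normal utility to each candidate.
    u = rng.normal(size=(N_VOTERS, N_CANDS))
    social = u.mean(axis=0)                      # mean utility per candidate

    # Plurality: everyone votes honestly for their favorite.
    votes = np.bincount(u.argmax(axis=1), minlength=N_CANDS)
    plur.append(social[votes.argmax()])

    # Approval: approve every candidate above your personal mean utility.
    approvals = (u > u.mean(axis=1, keepdims=True)).sum(axis=0)
    appr.append(social[approvals.argmax()])

    # Baselines: a random dictator picks their favorite; the ideal
    # outcome picks the candidate maximizing social utility.
    dictator.append(social[u[rng.integers(N_VOTERS)].argmax()])
    best.append(social.max())

def vse(outcomes):
    """Score a voting rule on a scale where the random-dictator
    baseline is 0.0 and the best-possible winner is 1.0."""
    d, b = np.mean(dictator), np.mean(best)
    return (np.mean(outcomes) - d) / (b - d)

print(f"plurality: {vse(plur):.2f}, approval: {vse(appr):.2f}")
```

With impartial-culture utilities like these, approval’s score typically lands well above plurality’s on the random-dictator-to-ideal scale; the “at least twice as good” claim is about this kind of baseline comparison.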
Disclaimer: I now support you. What do you need done, what’s your vision, and where do you work? Making democracy work better has been a pet drive of mine for an extremely long time.
EDIT: Upon your website loading and my finding that you push Approval Voting, I am now writing in about volunteering.
I’m kind of curious; what do you think CFAR’s objective is 5 years from now (assuming they get the data they want and it strongly supports the value of the workshops)?
In all sincerity, I don’t actually know, and am very open to developing an opinion when I get actual information. I reread TFA, and it doesn’t seem to say. It does come out and state that “CFAR is one of the efforts most worth investing in”, but it doesn’t say how that worth will manifest itself within any bounded time period at all.
These are my thoughts as a CFAR workshop alumnus. I don’t have funds to donate right now, so my perspective isn’t backed by a donation, or by a conscious choice not to donate. Feel free to put as much weight on my opinion as (any of) you like. I figured I would comment because providing more data is better than less. I don’t claim that my perspective is typical of CFAR workshop alumni.
After I attended a workshop, realizing that its cost to participants is revenue for CFAR, I did a Fermi estimate of how much revenue CFAR actually achieves. It included an estimate of the revenue and cost per participant, multiplied by the number of participants, minus CFAR’s operating costs. I concluded that at best CFAR would only be making ends meet if its only source of revenue were the workshops. As expensive as the workshops may seem, reading about CFAR’s finances in this post made me realize how seriously CFAR takes its goal of providing and testing a minimum viable product. Regarding their finances and operations, they’re not goofing around.
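A sketch of that kind of Fermi estimate, with every number a made-up placeholder rather than CFAR’s actual figures:

```python
# Back-of-the-envelope workshop economics.  All inputs are hypothetical.
price_per_participant = 3_900          # assumed workshop fee, USD
participants_per_workshop = 25         # assumed cohort size
workshops_per_year = 10                # assumed schedule
variable_cost_per_participant = 1_000  # assumed venue, food, lodging
annual_fixed_costs = 700_000           # assumed salaries, office, overhead

revenue = price_per_participant * participants_per_workshop * workshops_per_year
variable = variable_cost_per_participant * participants_per_workshop * workshops_per_year
net = revenue - variable - annual_fixed_costs
print(net)  # 25000 with these made-up inputs: barely breaking even
```

The point of the exercise isn’t the exact number but the shape of the conclusion: a plausible fee times a plausible headcount leaves little margin over fixed costs if workshops are the only revenue source.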
The CFAR workshop I attended was a great experience for me. I mention it to some friends who seem like the sort who would get a lot out of it. However, I don’t give them a full recommendation, because the cost is often prohibitively expensive for those in or just out of university. My friends tell me this, and I’m well aware of it. Grand hopes for the future aside, I hope that if CFAR received enough donations, it could offer its workshops at a lower cost. I hope this not only for my friends, but for everyone who isn’t attending because of cost, yet whose attendance would benefit themselves, CFAR, and its alumni community. This is personally why I respect their fundraising efforts.
Hooray to CFAR for being one of the few (non-profit) organizations that admit “we tried some stuff that didn’t work well; we’ll be rejigging and testing and improving our efforts in the future!” Kudos! This earnestness is refreshing.
CFAR is taking being part of effective altruism quite seriously. It didn’t seem to me they were treating this association as seriously one year ago. They might have felt as serious, but I wasn’t receiving the signal. I am now. Also, I like their honesty in expressing how they’re not just identifying with effective altruism, but trying to reach the standard of what it ought to be.
Love hearing about how much CFAR has learned in 2014 and your aggressive 2015 goals. Thanks for the look into your operations and the reminder to donate!
Serious question: why do you (either CFAR as an organisation or Anna in particular) think in-person workshops are more effective than, e.g., writing a book or making a MOOC-style series of online lessons for teaching this stuff? Is it actually more about network building than the content of the workshops themselves? Do you not understand how to teach well enough to do it in video format? Are videos inherently less profitable?
I don’t speak for CFAR, but I believe that they wish to develop their product further before actually taking the time to write extensively about it, because the techniques are still under active development and there’s no point in writing a lot about something that may change drastically the next day.
It’s also true that a large part of the benefit of the workshops comes from interacting with other participants and instructors and getting immediate feedback, as well as from becoming a part of the community.
I think the tighter feedback loops are a big point. Being there in person really helps assess what works and what does not.
Of course a change of format can get in the way when that comes around, but I think the workshop format will help anyway.
On the specific suggestion of a book, there’s already a lot of written material on this.
How does CFAR rank other thinking-skills organisations outside the EA/MIRI groups? For instance, is Ember Associates plausibly one of the most important organisations currently existing?
What is Ember Associates? I did a quick google search, and when I clicked on their site, I got a page that said “Website Expired”. What other groups do you have in mind?
It is, or was, an organisation to teach thinking skills. Please don’t focus on the example; it was the first one that came to mind and I didn’t realise the website had expired. The point is that a lot of groups claim to teach thinking skills. Do you consider all such groups to count as EA? If not, what distinguishes CFAR from those that don’t?
Was just about to post the same thing. Having your website expired is definitely evidence against effectiveness.
Donated! Hooray for matching!
Hi Anna—during last year’s fundraiser you said you were allowed to match recurring monthly donations (up to one year’s worth) pledged to CFAR. Do you know if that policy is still in effect?
Yep! If you tell us that you intend to keep the monthly pledge up for the 2015 calendar year, and start a monthly pledge, it is matched at its yearly amount (e.g., if you pledge $n/month, and tell us you intend to keep it (by messaging me here, or emailing me, or commenting in the public thread), it is matched at 12n).
Thanks for bringing this up.
Thank you for posting this. An excellent writeup all around, and it gives me lots of hope that CFAR will continue improving.
Is your definition of “do good in the world” approximately equivalent to “donating to effective charity”? It sounds from this post like it is, and I find that odd. Your startup list is impressive, and personally I would credit the founders of several of those startups (specifically, all the ones I know anything about) with doing good in the world, regardless of their charitable activities or lack thereof.
No, not at all. Donating to effective charity can be highly important; but I’ll be sad, and think something has gone badly wrong, if e.g. CFAR’s altruistic impact occurs exclusively or even mainly through causing such donation; it is important to increase generators of knowledge of what is actually worth doing (rather than e.g. creating copies of CFAR’s founders’ initial beliefs on that subject), to increase people capable of finding important gaps in the world and then filling them, etc.
At the same time, donating to effective charity is both high-impact enough, and simple enough, that I suspect something will have gone badly wrong if we don’t also see a lot of giving of that sort—it’ll suggest an unwillingness to take risks, or to trust others, or to pool together into common efforts, or something similar. I have actually a lot of thoughts on how the above point and this one can both be true, but the subject is a bit unwieldy; I may write a post; in any case, I agree with your nonequivalence.
One idea for measurement in a randomized trial:
In order to apply, you have to list 4 people who would definitely know how awesome you’re being a year from now, and give their contact info. Then, choose 1 of those people 6 months later and 1 person a year later and ask them how awesome the person is being. When you ask, include a “rubric” of various stories of various awesomeness levels, in which the highest levels are not always just $$$ but sometimes are. Ask the people you’re asking to please not contact the person specifically to check awesomeness, because that could introduce bias (“this person is checking, that makes me remember the workshop I did, and feel awesome”).
The 4 people should probably include no couples. Your family, long-term friends...
The one way this breaks down is Facebook. I mean, if your interaction with each person is separate, and the workshop makes you seem more awesome to each of 4 people, it is working. But if it just makes you post more upbeat things on Facebook, that might not translate to actual awesomeness. But I think that’s a really minor factor.
Sure, it’s gonna be a noisy and imperfect measurement. You will have to look at standard deviations and calculate power (including burning all 4 contacts for some people to see the within-subject variance). Also, correct for demographic info on contacts, and various other tricks to increase power. But one way or another, you’ll get a posterior distribution of the causal impact.
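The power calculation can be sketched with the standard two-sample approximation; the effect size and rating spread below are hypothetical placeholders, not measured values:

```python
import math

def required_n_per_group(effect_size, sd):
    """Approximate n per group for a two-sample comparison of mean
    'awesomeness' ratings at two-sided alpha = 0.05 with 80% power:
    n ~ 2 * ((z_{alpha/2} + z_beta) * sd / effect)^2, with
    z_{alpha/2} = 1.96 and z_beta = 0.84."""
    return math.ceil(2 * ((1.96 + 0.84) * sd / effect_size) ** 2)

# Hypothetical: contacts rate awesomeness on a 1-10 rubric, ratings have
# sd ~2.0, and we hope the workshop shifts the mean by 0.5 points.
print(required_n_per_group(0.5, 2.0))  # 251 per group
```

Averaging several contacts per subject shrinks the effective sd (that’s what the within-subject variance estimate buys you), which can cut the required sample size substantially.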