We need something for rapid calibration, rather than slow-to-verify predictions like those on PredictionBook (which are also good to train on).
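For concreteness, here's a sketch of how such a rapid true/false exercise might be scored (everything here, including the function name and the confidence bins, is hypothetical, not an existing tool): each answer is a stated confidence plus the statement's actual truth value, and the report compares stated confidence to observed accuracy per bin.

```python
# Hypothetical sketch of scoring a rapid true/false calibration exercise.
# Each answer is (stated_confidence_in_"True", actual_truth_value).
from collections import defaultdict

def calibration_report(answers, bins=(0.5, 0.6, 0.7, 0.8, 0.9, 1.0)):
    """Group answers by stated confidence and compare to observed accuracy."""
    grouped = defaultdict(list)
    for confidence, is_true in answers:
        # Fold everything onto the 50-100% scale: confidence in the answer actually given.
        p = confidence if confidence >= 0.5 else 1 - confidence
        correct = is_true if confidence >= 0.5 else not is_true
        # Assign to the smallest bin edge that covers the stated confidence.
        bin_edge = min(b for b in bins if b >= p)
        grouped[bin_edge].append(correct)
    for bin_edge in sorted(grouped):
        results = grouped[bin_edge]
        print(f"stated ~{bin_edge:.0%}: actually right "
              f"{sum(results) / len(results):.0%} of {len(results)} answers")

# Example: three statements answered at various confidences in "True".
calibration_report([(0.9, True), (0.7, False), (0.3, False)])
```

With a large database of statements, a player could get this kind of feedback after a few minutes of answering, instead of waiting months for predictions to resolve.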
Academian
Needed: A large database of statements for true/false exercises
Someone at the last minicamp brought his partner, and she seemed to like it. She was highly educated (a PhD student at Harvard in a mathematical science), and wasn’t much into LessWrong before coming.
The particular changes I’ve made (like changing my advisor) have been very personalized for me, by me… but they have been fueled by a few root adjustments:
1) More curiosity about my life choices. Caused in part by being surrounded by a group of smart similar people doing very different things with their lives.
2) More willingness and desire to make up my mind more quickly and effectively about Big Life Decisions. Caused in part by Anna Salamon generating, on the spot, a steady stream of helpful questions for me that I could ask and answer to myself about career choices. I never came to any conclusions that she suggested (which I consider a good sign; I wouldn’t expect someone else to know what I should do with my life from a few conversations), but she gave me a sense that more is possible in terms of how quickly a person can generate important, answerable questions.
3) More curiosity / motivation to experiment with productivity hacks, until I found some that work for me (Getting Things Done system + Pomodoro technique). Caused by being surrounded for a week by productivity-obsessed people with lots of cool ideas, which helped me internalize a belief that there exist popular productivity hacks that would work for me.
4) More desire to Be Successful (which I’d had very little of throughout most of my life), caused by feeling like I was part of a community that I cared about and who might benefit in some small way from my success.
Do you think a religious event would have the same effect on the same people? That is, these mostly atheist people who were all very interested in science and rationality? Or do you just think that there exist people on whom a religious event would have a similar effect?
This is an important distinction for someone deciding whether to attend, because such a person knows whether she is religious or not.
For the purpose of causal inference / intervention evaluation, you must ask if a Christian retreat would have had this effect on those participants. Perhaps Christians feel closer after a Christian event, but I find Christian events somewhat alienating because I’m not Christian. I don’t find aspiring rationalist events alienating, in part because I’m an aspiring rationalist. It’s fun to hang out with people who have common interests, and depending on who you are, that group is a different group… for me, it’s rationalists. Part of the point of the camp is that it has a similar bonding effect that any coming together of people with a deep common interest or aspiration can have, and in this case, the common aspiration is rationality.
Plus, at the camp, I did internalize skills and attitudes that have helped me a lot over the past year (i.e., I’ve improved much more over the past year than I have in previous years), for example, looking more vigilantly for fungibility between my time and money, and looking more at the reasons I do things and finding more effective ways to pursue those reasons...
Those particular effects I wouldn’t expect from a Christian camp, just as the particular effect of feeling close to Jesus is not an effect I’d expect from a rationality camp. I just happen to prefer the “rationality” effects, and these camps are for people with similar such preferences.
Seriously, it’s fun :)
Since a couple of people want before/after information, here’s some: Before minicamp: I was able to work around 5 hours per day on research.
After: 10 hours/day, sustainable for months.
After: Less afraid to try new professional directions than ever before, by a margin much wider than this trait has ever changed for me.
After: Secured $24,000 of grant money from DARPA to work on applications of algebraic geometry to machine learning, my first time trying out applied math. Loving it.
After: Difference in productivity was so noticeable that I’m volunteering my time as an instructor at the next few camps (I taught some at the last camp, too) because I expect it to have further positive, lasting effects on my professional / personal life.
After: Got a new dissertation advisor; many people around me seemed to think that was impossible or risky, but it has gone very well and been very refreshing, given my interests. (Before the camp I was more afraid to make what felt like a “sudden” change, which was actually something I had been thinking about for a year and was not sudden at all.)
Note: My experience at the camp may not have been typical, because I did teach a few sessions at the beginning… but those were not the ideas that stuck with me most and motivated me professionally; they were Anna’s and Luke’s sessions.
Since I’m volunteering to teach for the next few camps, I won’t be able to give participant-side data after the next camp, so let this be my public testimonial: minicamp had a SERIOUS before/after effect on my life, resulting in more exploration, faster decision making (changed my thesis advisor, to great benefit and the surprise of many), and increased productivity. Its benefits are the cause of my volunteering to teach for it, and this comment.
In case this wasn’t done: a physical demonstration of a game like this is important at first, with a concurrent verbal description to tag it for indexing: “Step 1: we do this”, “Step 2: we do this.” Showing beats telling alone. Verbal or written instructions are a low-bandwidth form of communication, better used for tagging/clarifying a demonstration (i.e., naming the steps while you do them) or for error-correcting after a demonstration (i.e., people can look things up if they get confused).
Teaching rationality made me better (at research and other things)
To be clear, I’m saying that minicamp had more of what you call B-type effect on me (so far) than many other such events. So I’m talking about B, not just A. From the OP:
Note that mini-camp was far from the first time I’ve travelled to an event to surround myself with like-minded peers working toward common goals. [...] I’ve been to many such workshops, inside and outside academia (~3 per year for the past 10 years). [...] Yet mini-camp is still topping my charts.
In particular, I’m saying that in my experience it was much more effective, B-wise, than the base-rate of generic peer gatherings like Christianity camps (which I’ve been to).
So no, not everyone who’s excited about minicamp is just talking about A. But yes, I agree with you, A is a lot of the conversation. I’m trying to focus on B.
Well, not everyone elected to provide a testimonial, and there may have been self-selection in favor of optimism. Insisting that everyone write a testimonial might have helped a bit with that.
Maybe a monthly job-posting discussion thread, where jobs are top-level comments?
Mini-camp was indeed awesome, and so was Luke (just add Bayes)
I was at the camp. It was spectacularly awesome in my judgement, too, and Luke was a big part of that. \end{soft.bayesian.evidence}
Specifically, the camp is tied for the title of the most life-altering workshop-like event of my life, and I’ve been to many such events, inside and outside academia (~3 per year for the past 10 years). The tie is with the workshop that got me onto my PhD topic (graphical causal modelling), so that’s saying something.
Frequentist vs Bayesian breakdown: interpretation vs inference
It may be difficult to find a reasonable Bayesian interpretation, and it may only approximate said interpretation, but if it’s at all useful, it will have one.
Observation: This theory that you’ve stated here—that any useful frequentist method will have a Bayesian interpretation—doesn’t serve much in the way of controlled anticipation. Because there is so much flexibility in choosing priors and a loss function, the fact that “every useful frequentist method will be a Bayes method in disguise” doesn’t tell us much about what frequentist methods will turn out to be useful.
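For concreteness, one standard instance of this correspondence (my illustration, not part of the original exchange): ridge regression, usually motivated on frequentist grounds, coincides with the posterior mode under a Gaussian prior,

\[
\hat\beta_{\text{ridge}} = \arg\min_\beta \|y - X\beta\|^2 + \lambda \|\beta\|^2
= \arg\max_\beta \; p(y \mid \beta)\, p(\beta),
\quad \text{where } y \mid \beta \sim \mathcal{N}(X\beta, \sigma^2 I),\;
\beta \sim \mathcal{N}(0, \tau^2 I),\; \lambda = \sigma^2/\tau^2 .
\]

The free choice of \(\tau\) (the prior) is exactly the kind of flexibility that keeps this correspondence from telling us in advance which frequentist methods will be useful.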
It seems to me that the wisdom to treat beliefs as anticipation controllers is more general, and I think more important, than the choice of Bayesian vs Frequentist inference methods. Each school has its own heuristic for quantifying this wisdom.
As for Bayesian vs Frequentist interpretations of what the word “probability” means, I think that’s a different (and sillier) debate.
Michael Jordan dissolves Bayesian vs Frequentist inference debate [video lecture]
Luke, there’s a serious and common misconception in your explanation of the independence axiom (serious enough that I don’t consider this nitpicking). If you could, please fix it as soon as you can to prevent the spread of this unfortunate misunderstanding. I wrote a post to try to dispel misconceptions such as this one, because utility theory is used in a lot of toy decision theory problems, versions of which might actually be encountered by utility-seeking AIs:
For example, the independence axiom of expected utility theory says that if you prefer one apple to one orange, you must also prefer one apple plus a tiny bit more apple over one orange plus that same tiny bit of apple. If a subject prefers A to B, then the subject can’t also prefer B+C to A+C. But Allais (1953) found that subjects do violate this basic assumption under some conditions.
This is not what the independence axiom says. What it says is that, for example, if you prefer an apple over an orange, then you must prefer the gamble [72% chance you get an apple, otherwise you get a cat] over the gamble [72% chance you get an orange, otherwise you get a cat]. The axiom is about mixing probabilistic outcomes, not mixing amounts of various commodities.
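For reference, the usual formal statement (the standard VNM formulation, not quoted from Luke's post): for any lotteries \(L\), \(M\), \(N\) and any \(p \in (0, 1]\),

\[
L \succ M \iff pL + (1-p)N \;\succ\; pM + (1-p)N .
\]

In the example above, \(L\) = apple, \(M\) = orange, \(N\) = cat, and \(p = 0.72\).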
This distinction is important, because for example, if you’d rather have 1 apple than 1 orange, but you’d rather have 1 orange and 0.2 apples than 1.2 apples, you’re not violating the independence axiom, nor instantiating the Allais paradox. You simply don’t like having too much apple, which is fine as far as EU theory is concerned: apple can have negative marginal utility after a certain point. Such explanations are an essential feature, not a shortcoming, of utility theory.
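To see why no axiom is violated, note that a VNM utility function assigns numbers to whole outcomes, and any assignment like the following (illustrative numbers only) represents exactly the preferences just described:

\[
u(1\text{ apple}) = 5 > u(1\text{ orange}) = 4,
\qquad
u(1\text{ orange} + 0.2\text{ apples}) = 4.5 > u(1.2\text{ apples}) = 3 .
\]

The independence axiom only constrains preferences over probability mixtures of outcomes; these four are all sure outcomes, so any ranking of them is permitted.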
The Allais paradox is a legitimate failure of utility theory in describing human behavior, though, so you’re of course right that expected utility theory is largely useless as a predictive tool. Still, I doubt any powerful AGI would exhibit the Allais paradox.
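For readers who haven't seen it, the classic version of the paradox (the standard presentation, not taken from the post above) asks for choices in two pairs of gambles; most people prefer 1A to 1B and 2B to 2A:

\[
\begin{aligned}
\text{1A: } & \$1\text{M w.p. } 1.00 & \qquad \text{1B: } & \$1\text{M w.p. } 0.89,\ \$5\text{M w.p. } 0.10,\ \$0 \text{ w.p. } 0.01 \\
\text{2A: } & \$1\text{M w.p. } 0.11,\ \$0 \text{ w.p. } 0.89 & \qquad \text{2B: } & \$5\text{M w.p. } 0.10,\ \$0 \text{ w.p. } 0.90
\end{aligned}
\]

Both 1A and 1B are a 0.89 chance of \$1M mixed with a 0.11 chance of a sub-lottery (for 1A the sub-lottery is \$1M for sure; for 1B it is \$5M with probability 10/11, else \$0), and 2A and 2B mix the same two sub-lotteries with \$0 instead, so independence permits preferring 1A and 2A, or 1B and 2B, but not the typical 1A-and-2B pattern.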
Otherwise, thanks for the incredibly informative post!
The main descriptive difference between prospect theory and EU theory is that for monetary decisions, EU theory uses one curve (a utility function), whereas prospect theory uses two curves (a value function and a weight function) as well as a framing variable.
The other big difference is that the prospect theory value function is defined relative to a reference point.
That’s what Yvain and I are calling framing.
When people are actually given choices like $10 for sure vs. $21 with p = 0.5, they tend to choose $10 for sure, just as prospect theory predicts (and EU theory does not).
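As an illustration of how the two curves mentioned above produce that prediction, here is a minimal sketch using the Tversky–Kahneman (1992) functional forms; the parameter values (alpha = 0.88, gamma = 0.61) are their published median estimates for gains, not something from this discussion, and the reference point is taken to be current wealth:

```python
# Minimal prospect-theory sketch (gains only, reference point = current wealth).
# Functional forms and median parameter estimates from Tversky & Kahneman (1992).
ALPHA = 0.88   # curvature of the value function for gains
GAMMA = 0.61   # curvature of the probability weighting function for gains

def value(x):
    """Value of a gain x relative to the reference point."""
    return x ** ALPHA

def weight(p):
    """Decision weight assigned to a probability p of a gain."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

sure_thing = value(10)            # $10 for sure   -> about 7.6
gamble = weight(0.5) * value(21)  # $21 with p=0.5 -> about 0.42 * 14.6 = 6.1
print(sure_thing, gamble)         # prospect theory picks the sure $10
```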
What you’re calling EU theory is a very restricted version of EU theory, where you require utility to be a function of total monetary wealth, or total material wealth. You might call it “Expected Utility of Wealth” theory. EU theory is actually much more general, and assigns utility to outcomes rather than amounts of money or even lists of possessions. This is all discussed in
http://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem , and
http://lesswrong.com/lw/244/vnm_expected_utility_theory_uses_abuses_and/
But for predictive purposes, EU theory is so ridiculously general (there are so many situational parameters) that, as far as anyone knows, it has almost no predictive power. So for the purposes of prediction, I think you’re justified in talking about “EUW” theory, because without a highly restrictive assumption like utility being a function of wealth, EU theory has no chance of making predictions.
Nonetheless, I want to encourage you, and anyone else, to make explicit the assumption “utility is a function of wealth” when you’re making it. My reason is that, in toy decision-theory problems, EU theory is usually part of the framework, and it’s a reasonable framework provided we don’t impose the restrictions that make it predictively meaningful and false.
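As a sketch of what I mean by “predictively meaningful and false” (the baseline wealth figure and the utility functions below are my own illustrative assumptions, not data): if utility is a function of total wealth, then for any plausibly concave utility function, the curvature over a $21 range near, say, $30,000 of wealth is negligible, so EUW predicts taking the $21-at-p-0.5 gamble over the sure $10, which is the opposite of what people typically choose.

```python
# Hypothetical sketch: "expected utility of wealth" (EUW) makes a concrete,
# testable prediction for the $10-for-sure vs $21-at-p=0.5 choice.
import math

BASELINE_WEALTH = 30_000  # illustrative assumption, not data

def prefers_gamble(u):
    """Does an EUW agent with utility-of-wealth u take $21 at p=0.5 over $10 for sure?"""
    expected_u_gamble = 0.5 * u(BASELINE_WEALTH + 21) + 0.5 * u(BASELINE_WEALTH)
    return expected_u_gamble > u(BASELINE_WEALTH + 10)

for name, u in [("sqrt", math.sqrt),
                ("log", math.log),
                ("-1/w (quite risk averse)", lambda w: -1.0 / w)]:
    print(name, prefers_gamble(u))  # all True: EUW predicts taking the gamble
```

General EU theory, by contrast, can rationalize either choice, which is exactly why it needs a restriction like EUW before it predicts anything.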
Hmm… this made me think that perhaps two-choice questions are better than true/false questions, because when all the questions have the same two possible answers T/F, there is a base rate of how often the answer “T” is correct, which the player should account for. For real-life questions with two possible answers, like “Who is taller, Alex or Bob?”, there is not really a well-known base rate.
Thanks!