Well, it wouldn’t be blind this way. Attendees of the usual platitude-filled self-help workshop absolutely swear by the gains. There may actually be gains: some people really just need to be reaffirmed via platitudes.
I’d expect to see some gains in both forks of the experiment, for exactly those reasons. The question is whether or not we’d see more gains from a proper rationality course than from a motivation seminar dressed up as one: if so, that’s good evidence that CFAR is teaching something worthwhile. If not, well, that would also be nice to know.
Blinding it would be hard, since we’re dealing with smart people who’d have some reason to be skeptical of the course contents, and especially since many of them would have read LW and would have some idea of what a rationality program should look like. But I think it should be doable.
I dunno what an actual rationality program should look like, but in general I’d say it would put much more emphasis on time. Smart but inefficient people I know already spend too much time in analysis paralysis (i.e. basically doing what’s on that graph out there) rather than doing things that generally get them closer to a multitude of goals under a multitude of possible assumptions about the world.
edit: one curious thing about the world is the enormous size of the solution space. When you are not sure what you’ll want, and not sure of some facts, you can restrict the search to solutions that further either goal under either assumption about the facts. You’ll still miss all the really good solutions even in that restricted space, due to the inability to search all of it, and in a lot of cases the restriction decreases the expected value of the solution you find by only a negligible amount. This is dramatically different from most textbook examples on the subject. Just think about it: there’s probably a tweet-sized text that you could, in theory, come up with which constitutes an insight for an invention on which you can make millions of dollars. You can search for that, or you can, figuratively speaking, ponder which can of juice at the supermarket is cheaper per litre, or try to quantify how much you like either juice and multiply that by the price.
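That restriction is mechanical enough to sketch. A minimal toy version, with the options and all payoffs invented purely for illustration: keep only the options that help under every goal and every assumption about the facts, then take the best worst case among what’s left.

```python
# Toy sketch: all options and payoffs are made up for illustration.
# payoff[option][(goal, world)] = how much the option furthers that goal
# if the world turns out that way.
payoff = {
    "option_1": {("goal_A", "fact_true"): 3, ("goal_A", "fact_false"): 2,
                 ("goal_B", "fact_true"): 2, ("goal_B", "fact_false"): 3},
    "option_2": {("goal_A", "fact_true"): 9, ("goal_A", "fact_false"): -5,
                 ("goal_B", "fact_true"): 1, ("goal_B", "fact_false"): -2},
    "option_3": {("goal_A", "fact_true"): 4, ("goal_A", "fact_false"): 4,
                 ("goal_B", "fact_true"): 5, ("goal_B", "fact_false"): 4},
}

# Restrict the search space: keep options that help under every goal
# and every assumption about the facts.
robust = {name: p for name, p in payoff.items()
          if all(v > 0 for v in p.values())}

# Pick the best worst case among what's left.
best = max(robust, key=lambda name: min(robust[name].values()))
print(best)  # option_3: option_2's brilliance under one assumption doesn't save it
```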
Yea, there is often an emphasis on how an ideal unconstrained agent (with a resource-unlimited, AIXI-like superintelligence in mind) would reason, or on how biases should be avoided, with the assumption that a resource-limited agent should just try to approximate, as closely as its resources allow, what a (practically) resource-unlimited agent would do.
That, however, is a bad assumption to make. For every limitation-class, an entirely different strategy and different heuristics may be optimal.
As an example, when you try to program your old calculator to play speed chess as well as possible, you should not start from the best available chess program. There are biases you should adopt to get the best result, given the limitations. I was once in a grad AI class in which we needed to best a given computer opponent in an obscure game, under severe computational resource limitations. The best program was a seemingly bad hack of heuristics that are wrong in general, but it turned out to use the resources most efficiently. It bested much “better” programs that would’ve beaten it if given more time.
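I can’t reproduce that class project here, but a toy simulation makes the same point, with all parameters invented: a fixed budget of 100 noisy evaluations, spent either on glancing at many options once (“wide”) or on studying a few options repeatedly (“deep”). Which allocation wins depends on the noise level; the optimal strategy is a property of the constraint, not just of the problem.

```python
import random
import statistics

def pick(n_options, reps, noise, values):
    """Spend n_options * reps noisy evaluations and return the true
    value of the apparently best option."""
    chosen = random.sample(range(len(values)), n_options)
    best = max(chosen, key=lambda i: statistics.mean(
        values[i] + random.gauss(0, noise) for _ in range(reps)))
    return values[best]

def average_result(n_options, reps, noise, trials=2000):
    pool_size = 1000  # options to choose among; true values uniform in [0, 1]
    return statistics.mean(
        pick(n_options, reps, noise, [random.random() for _ in range(pool_size)])
        for _ in range(trials))

for noise in (0.0, 2.0):
    wide = average_result(n_options=100, reps=1, noise=noise)
    deep = average_result(n_options=10, reps=10, noise=noise)
    print(f"noise={noise}: wide(100x1)={wide:.3f}  deep(10x10)={deep:.3f}")
# With no noise, wide wins (it sees more options); with heavy noise,
# deep tends to win (its repeated measurements recover the signal).
```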
Take teaching the Bayes equation as if that’s what you’d actually use. Sure, some general ideas (consider the prior, always update on your observations) are useful, but the equation itself? No one at CFAR walks into a supermarket and continuously plugs actual numbers into Bayes’ equation in their head.
Agreed! I teach the Bayes class at workshops, and it’s not a math drill class. It’s about building the habit of paying attention to the components of Bayes’ theorem in everyday life. For example, people usually just ask “Would I be likely to see Y if X were true?” and skip the question “Would I be likely to see Y if X were not true?” So we practice ways to trigger that second thought, so you don’t get tricked by base rates or other pitfalls.
Concrete example: someone you’re interviewing for a job flubs one question, and your first thought is that you shouldn’t hire them, because people who aren’t qualified flub questions. But pause and ask how often you’d expect qualified people to miss one question in an hour-long interview. Your answer will vary based on the kinds of questions you’re asking, but you may be treating the evidence as a stronger signal than it is.
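To make that concrete with made-up numbers (none of these rates come from real data):

```python
# Invented rates, purely for illustration.
p_qualified = 0.5          # prior: half the candidates you interview are qualified
p_flub_given_q = 0.3       # even qualified people sometimes miss a question
p_flub_given_not_q = 0.9   # unqualified people usually miss at least one

# Bayes' theorem: P(qualified | flubbed a question)
evidence = p_qualified * p_flub_given_q + (1 - p_qualified) * p_flub_given_not_q
posterior = p_qualified * p_flub_given_q / evidence
print(posterior)  # 0.25 -- the flub drops them from 50% to 25%, not to ~0%
```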
Or maybe you treated it as a weaker signal than it is. This is a strawman anyway: people who have never in their lives heard of Bayes do compare the flub to their hypothetical idea of how a competent person would do on the exam, and soon thereafter to their actual knowledge of how competent people do, remedying all sorts of miscalibrations.
If anything, in practice the actual problem with interviews is generally that incompetents get through, because incompetents go through far more interviews than anyone else. A diligent ability to never flunk anything (conscientiousness) is, at least, something very useful in the workplace that you can’t fake by preparing specifically for interviews.
Then there’s also the enormous utility disparity between the minor disutility of perhaps running the interviews a little longer, and ending up hiring the best available candidate when no one passes, and the major disutility of hiring an incompetent.
It’s not really advice, but I can see why it’s likeable: there are people who didn’t get hired because they flunked “maybe one question”, and these folks get a fix of endorphins when they rationalize it as the HR person being irrational.
A good test will include a few questions almost no-one can answer. That avoids the problem of having more than one 10⁄10 score.
More generally, a very good test will result in a uniform distribution of scores (rather than a bell curve), maximizing the information content of the score.
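One way to see the information claim in numbers, using a binomial as a stand-in for the bell curve (a sketch, not a model of any real test):

```python
from math import comb, log2

n = 10  # test scored 0..10

# Uniform distribution over the 11 possible scores.
uniform = [1 / (n + 1)] * (n + 1)

# Bell-ish distribution: Binomial(10, 0.5) as a stand-in.
bell = [comb(n, k) * 0.5**n for k in range(n + 1)]

def entropy(dist):
    """Shannon entropy in bits: expected information in one score."""
    return -sum(p * log2(p) for p in dist if p > 0)

print(f"uniform: {entropy(uniform):.2f} bits")  # log2(11), about 3.46
print(f"bell:    {entropy(bell):.2f} bits")     # about 2.71
```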
Well, an even bigger issue with Bayes is how easy it is to get it wrong on graphs in general (ones that contain loops). Worse than that, what we actually have is uncertain topology. All of this should make rational updates much smaller and more compartmentalized-looking than some naive idea of ‘updating’ beliefs from one argument, then from the other, and so on.
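A minimal illustration of the loop problem, with made-up numbers: the same piece of evidence reaches you along two paths, and treating the paths as independent updates counts it twice.

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
prior_odds = 1.0  # 50/50 on hypothesis H
lr = 4.0          # one piece of evidence E favors H 4:1 (invented)

# Correct: E is a single observation, however many people repeat it to you.
correct = prior_odds * lr

# Naive: two sources both mention E; updating once per source
# counts the same evidence twice.
naive = prior_odds * lr * lr

to_prob = lambda odds: odds / (1 + odds)
print(f"correct:        {to_prob(correct):.2f}")  # 0.80
print(f"double-counted: {to_prob(naive):.2f}")    # 0.94
```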
There’s also a lot of very advanced math related specifically to estimation. E.g. an expected utility would be a huge sum, the vast majority of whose terms you do not know. When deciding on some binary action, you have those two sums on both sides of a comparison, and you need to estimate the sign of the difference as accurately as possible (which is dramatically not the same as summing all the available terms); then you need to quantify the expected inaccuracy in your estimate of the sign, and adjust for that. Simply put, it’s complicated, and people who have a good working understanding of such concerns can write important textbooks, software, papers, etc. (which make a lot of difference to the world, as well as make any spin-off ‘workshops’ credible), whereas people who are very far from having any such understanding can do things like estimating 8 lives saved per dollar donated.
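A sketch of the sign-estimation point, with every number invented: model the terms you can’t enumerate as a wide distribution (itself an assumption) and ask how confident you can actually be about the sign of the difference.

```python
import random

# Comparing actions A and B by the sign of EU(A) - EU(B).
known_diff = 0.4  # net contribution of the few terms you actually know (invented)

def unknown_terms_diff():
    # Stand-in for the mass of terms you can't enumerate; each draw is
    # one guess at their net effect on the difference.
    return sum(random.gauss(0, 0.5) for _ in range(10))

samples = [known_diff + unknown_terms_diff() for _ in range(100_000)]
p_positive = sum(s > 0 for s in samples) / len(samples)
print(f"P(EU(A) > EU(B)) is roughly {p_positive:.2f}")
# With unknowns this wide (sd about 1.6 against a known edge of 0.4),
# the sign favors A only about 60% of the time -- far from the certainty
# the known terms alone would suggest.
```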
There’s probably a tweet-sized text that you could, in theory, come up with which constitutes an insight for an invention on which you can make millions of dollars.
Thank you for this. You managed to translate “the world is full of possibilities” into something that hits right in the gut.