Shouldn’t the rationality school suggested by Eliezer, though, be able to train someone to do well on these tests simply by making them very familiar with the literature? Just playing devil’s advocate against your devil’s advocacy: this actually seems pretty ideal, since you would have scientifically benchmarked tests showing what “naive” individuals think when they encounter these problems, against which you could then measure the progress of the “trained” rationalists. The gaming problem would come from people who study rationality but plan to subvert it at some point; the rationalist community would need frequent re-certification so that rationalists don’t rest on their laurels and rely on status to lend an inferred rationality to their decisions.
The problem is if they do well on written questions in classes but no better than average at applying the same knowledge to real life.
This is a problem with “class tests” of anything, of course. I’ve thought (for more than five minutes) about your post, but I didn’t come up with much that is specific to rationality testing. (Except for “automatically build arbitrary but coherent «worlds», let students model them, and then check how well their models fit «reality» afterwards”, which is an obvious application of the definition and has already been suggested several times.)
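To make that parenthetical concrete, here is a minimal sketch (in Python, with made-up names and numbers of my own, not anything from the post) of what such a «world»-modelling test might look like: generate a hidden rule, let the student assign probabilities to new cases, and score the fit against the generated «reality» with a Brier score.

```python
import random

def generate_world(seed=None):
    """Build an arbitrary but coherent «world»: hidden weights that
    deterministically map observations of A, B, C to an outcome."""
    rng = random.Random(seed)
    weights = {v: rng.choice([-1, 1]) for v in "ABC"}
    return lambda obs: sum(w * obs[v] for v, w in weights.items()) > 0

def brier_score(predictions, outcomes):
    """Mean squared error between the student's predicted probabilities
    and the outcomes the hidden world actually produced; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

world = generate_world(seed=42)
rng = random.Random(0)
cases = [{v: rng.choice([0, 1]) for v in "ABC"} for _ in range(20)]
truth = [world(c) for c in cases]

# The student studies some cases, builds a model, then assigns probabilities
# to new ones; a maximally uninformed student answers 0.5 everywhere.
print(brier_score([0.5] * len(truth), truth))  # 0.25, the "know nothing" baseline
```

A trained student should beat the 0.25 baseline; how far below it they land is the measure of how well their model fits the «reality» they were given.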
I’ve come up with a few thoughts on testing in general:
1) As you say, cheap-but-game-able tests are often useful; we do have useful universities despite the problem of universities awarding diplomas to their own students. I think this is more than just “works well enough”; in some cases it’s actually useful:
(a) Having good tests (e.g., run by a third party) requires defining well in advance exactly what you’re testing. But in many cases it can be useful for a school to experiment with what it teaches (and even why), and then the only test needed is internal.
(b) In many (most?) cases, you can’t really test an ability until you actually try using it. There are plausible cases where a quick-and-dirty (but cheap) test (e.g., a university diploma) is needed only to pre-select people (i.e., weed out most incompetents), with the real testing happening on actual work (e.g., hiring interviews and tests, then a probation period). If you make the initial test «better» (e.g., harder to game) but more expensive, you may actually be losing if it isn’t «better» in the sense of more accurately measuring whatever you need people to be good at. (A rough numerical sketch follows below.)
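A back-of-the-envelope illustration of (b), with entirely invented costs and rates (nothing here is measured): a cheap, leaky screen followed by probation can come out ahead of a harder-to-game but expensive one.

```python
def expected_cost_per_good_hire(test_cost, false_pass_rate, probation_cost,
                                competent_fraction=0.2):
    """Expected cost to end up with one competent hire, when a screening
    test is followed by a probation period that catches whoever slipped
    through. All numbers here are illustrative assumptions."""
    # Everyone screened pays the test cost; all competents plus a
    # false_pass_rate share of the incompetents go on to paid probation.
    pass_rate = competent_fraction + (1 - competent_fraction) * false_pass_rate
    cost_per_applicant = test_cost + pass_rate * probation_cost
    # Divide by the fraction of applicants who turn out to be good hires.
    return cost_per_applicant / competent_fraction

# A cheap, game-able screen vs. a pricier, harder-to-game one:
print(expected_cost_per_good_hire(test_cost=100,  false_pass_rate=0.5, probation_cost=5000))  # 15500.0
print(expected_cost_per_good_hire(test_cost=2000, false_pass_rate=0.1, probation_cost=5000))  # 17000.0
```

With these made-up numbers the cheap test wins even though it lets five times as many incompetents through, because probation catches them anyway; change the probation cost and the conclusion flips, which is exactly the “accurate for what you actually need” caveat.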
OK, now I’m getting to what you’re saying about doing well in class but badly in real life. The obvious solution is to do the testing in real life: first weed out the bad as well as you can with an approximate test (how well you do on it tests your map against reality), then “hire” (whatever that means in the context) the people who look promising, make them do real work, and evaluate them there.
You don’t have to evaluate everything they do, as long as you do it randomly (i.e., nobody knows when they’re evaluated). The fact that random testing is done can be safely made public: if you don’t know when it’s done, the only way to “game” this is to actually be as good as you can be all the time.
The random testing can be passive (e.g., audits) or active (e.g., penetration testing). The only trick is that you have to do it often enough to get significant information, and the people being tested must not be able to tell when they’re being tested. For instance, testing for biases can be very useful even in a context where everybody is extensively familiar with them, as long as you do it often enough to have a decent chance of catching people unawares. (This is hard to do, which is why such tests are difficult to run, and why university exams are still useful.)
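As a rough sketch of what “often enough” might mean (the audit rate, function names, and numbers below are my own illustrative assumptions): audit each piece of work with a fixed probability, and note that estimating someone’s real error rate to within a few percent already takes a few hundred random audits.

```python
import random

def should_audit(rng, audit_rate=0.1):
    """Decide, unpredictably, whether a given piece of work gets audited.
    The rate itself can be public; the individual decisions must not be."""
    return rng.random() < audit_rate

def audits_needed(margin=0.05, z=1.96):
    """Rough number of random audits needed to pin down someone's true
    error rate to within +/- margin (worst case p=0.5, normal approximation)."""
    return int((z ** 2 * 0.25) / margin ** 2) + 1

rng = random.Random()  # deliberately unseeded: the audit schedule must not be predictable
work_items = range(1000)
audited = [item for item in work_items if should_audit(rng)]
print(f"{len(audited)} of {len(work_items)} items audited")
print(f"~{audits_needed()} audits for a +/-5% estimate of an error rate")  # ~385
```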
Note that you don’t have to make all tests undetectable; having some tests be detected (especially if it isn’t obvious that they are detectable on purpose) both reminds testees that testing happens and lets you spot people who behave differently when tested than in real life. (This in turn can let you notice when people detect tests you’re trying to keep secret, assuming there’s enough testing going on.)
Oh, and another thing that seems obvious: change tests often enough that they can’t be gamed. This is of course hard and expensive, which is why it isn’t done very often.
I had a similar idea, but I’m still not sure about it. Succeeding in Real Life does seem like a good measure, to a point. How could one gauge one’s success in real life, though? Through yearly income, or net worth? What about happiness or satisfaction?
You have to admit that’s an empirical question, though. It could be that becoming competent enough to do well on rationality tests requires the same skills as applying that knowledge in real life. There are some areas where ‘fake it till you make it’ works, and there are some things you can’t pretend to do without actually succeeding at them.
Test for real life? Ouch.