A Gamification Of Education: a modest proposal based on the Universal Decimal Classification and RPG skill trees
While making an inventory of my personal library and applying the Universal Decimal Classification to it, I found myself discovering a systematized classification of fields of knowledge, nested and organized and intricate, many of which I didn’t even know existed. I couldn’t help but compare how information was classified there with how it was imparted to me in engineering school. I also thought about how software engineers and computer scientists are often mostly self-taught, with even college mostly consisting of “here’s a problem: go forth and figure out a way to solve it”. This made me wonder whether another form of certified and certifiable education could be achieved, and a couple of ideas sort of came to me.
It’s pretty nebulous in my mind so far, but the crux of the concept would be a modular structure of education, where the academic institution establishes precisely what information you need from each module and lets you get on with the activity of learning, with periodic exams that you can sign up for, which certify your level and area of proficiency in each module.
A recommended tree of learning can be established, but it should be possible to skip intermediate tests if passing the final test demonstrates mastery of everything behind it (this would allow people coming from different academic systems to certify their knowledge quickly and easily, thus avoiding the classic “Doctor in Physics from Former Soviet Union, current Taxi Driver in New York” scenario).
Thus, a universal standard of how much you have proven to know about what topics can be established.
Employers would then be free to request profiles in the format of such a tree. It need not be a binary “you must have done all these courses, and only these courses, to work for us”: they would be free to write their utility function for this or that job however they saw fit, with whichever weights and restrictions they needed.
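To make that concrete, here is a minimal sketch of what such an employer utility function could look like; every module name, weight, and threshold below is hypothetical, not part of any existing system:

```python
# A minimal sketch of an employer's scoring function over certified modules.
# All module names, weights, and thresholds here are hypothetical.

def score_candidate(profile, weights, hard_requirements):
    """profile: {module: certified level}; weights: {module: importance};
    hard_requirements: {module: minimum level} (the 'restrictions')."""
    # Restrictions: reject anyone missing a hard requirement.
    for module, minimum in hard_requirements.items():
        if profile.get(module, 0) < minimum:
            return None  # candidate filtered out entirely
    # Otherwise, a simple weighted sum of certified levels.
    return sum(weights.get(m, 0) * level for m, level in profile.items())

candidate = {"railway engineering": 4, "rhetoric": 2, "statistics": 3}
weights = {"railway engineering": 1.0, "rhetoric": 0.3, "statistics": 0.5}
print(score_candidate(candidate, weights, {"railway engineering": 3}))  # 6.1
```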
Students and other learners would be free to advance in whichever tree they required, depending on what kind of profile they want to end up with at what age or point in time. One would determine what to learn based on statistical studies of what elements are, by and large, most desired by employers in, or most predictive of professional success in, the field one wants to work in.
One would find, for example, that mastering the peculiar field of railway engineering is essential to being a proficient railway engineer, but also that having studied, say, things involving people skills (from rhetoric to psychology to management) correlates positively with success in that field.
Conversely, a painter may find that learning about statistics, market predictions, web design, or cognitive biases correlates with a more successful career (whether in terms of income, or of copies sold, or of public exposure… each of us may optimize our own learning according to our own criteria).
One might even be able to calculate whether such complementary education is actually worth one’s time, and which modules are the most cost-efficient.
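As a toy version of that cost-efficiency calculation (every number below is invented purely for illustration):

```python
# Toy cost-efficiency estimate for one complementary module; every number
# here is invented purely for illustration.
hours_to_certify = 300        # study time the module is expected to take
expected_income_lift = 0.04   # hypothetical salary increase it predicts
salary = 40_000               # current yearly income, any currency
years_of_benefit = 10         # horizon over which the lift applies

value_per_hour = expected_income_lift * salary * years_of_benefit / hours_to_certify
print(value_per_hour)  # ~53 per study hour; compare across modules to rank them
```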
I would predict that such a system would help society overall optimize how many people know what skills, and facilitate the learning of new skills and the updating of old ones for everyone, thus reducing structural unemployment, and preventing pigeonholing and other forms of professional arthritis.
I would even dare to predict that, given the vague, statistical, cluster-ish nature of this system, people would be encouraged to learn quite a lot more, and across a much wider range of fields, than they do now, when one must jump through a great many hoops and endure a great many constraints of space and time and coin to get access to some types of education (and to acknowledgement of having acquired them).
Acquiring access to the actual sources of knowledge, a library (virtual or otherwise), lectures (virtual or otherwise), and so on, would be a private matter, up to the learner:
some of them already have the knowledge and just need to get it certified,
others can actually buy the books they want/need, especially if keeping them around as reference will be useful to them in the future,
others can subscribe to one or many libraries, of the on-site sort or by correspondence,
others can buy access to pre-recorded lectures, peruse lectures that are available for free, or enroll in academic institutions whose ostensible purpose is to give lectures and/or otherwise guide students through learning, more or less closely.
The same applies to finding study groups with whom to work on a topic together: I can easily imagine dedicated social networks being created for that purpose, helping people pair up based on physical distance, predicted personal affinity, mutual goals, backgrounds, and so on. Who knows what amazing research teams might be born of the intellectual equivalent of OK!Cupid.
A thing that I would like very much about this system is that it would free us from the strange conflicts of interest that hamper the functioning of traditional educational institutions.
When the ones who teach you are also the ones who grade you, the effort they invest in you can feel like a zero-sum game, especially if they are only allowed to let a percentage of you pass.
When the ones who teach you have priorities other than teaching (usually research, but some teachers are also involved in administrative functions, or even private interests completely outside the university’s ivory tower[1]), this can and often does reduce the energy and dedication they can or will allocate to the actual function of teaching, as opposed to the others.
By separating these functions, and the contradictory incentives they provide, the organizations performing them are free to optimize for each:
Testing is optimized for predicting current and future competence in a subject: the testers whose tests are the most reliable have more employers requiring their certificates, and thus more people requesting to be tested by them.
Teaching is optimized for getting the knowledge across, in service of whatever the heck the students want, whether it be to succeed at the tests or to simply master the subject (I don’t know much game theory, but I’d naively guess that the spontaneous equilibrium between the teaching and testing institutions would lead to both goals becoming identical).
Researching is optimized for research (researchers are not teachers, dang it; those are very different skill-sets!). However, researchers and other experts get to have a pretty big say in what the tests test for and how, because their involvement makes the tests more trustworthy for employers, and because they, too, are employers.
And of course entire meta-institutions can spring from this, whose role is to statistically verify, over the long term,
how good a predictor passing the corresponding test is of professional success in this or that field,
how good a predictor being taught by this or that teaching institution is of passing the test, and
how good a predictor the input of these or those researchers and experts is of the test being reliable.
It occurs to me now that, if one wished to be really nitpicky about who watches the watchmen, I suspect that there would be institutions testing the reliability of those meta-institutions, and so on and so forth… When does it stop? How do we avoid vested interests and little cheats and manipulations pulling off an academic equivalent of the AAA certification of sub-prime junk debt in 2008?
Another discrepancy I’d like to see resolved is the difference between the official time it is supposed to take to obtain this or that degree, or to learn this or that subject, and the actual statistical distribution of that time. Nowadays, a degree that’s supposed to take you five years ends up taking eight or ten years of your life. You find yourself having to go through the most difficult subjects again and again, because they are explained in an extremely rushed way, the materials crammed into a pre-formatted time. Other subjects are so exceedingly easy and thinly spread that you find that going to class is a waste of time, and that you’re better off preparing for them one week before finals. Now, after having written all of the above, my mind is quite spent, and I don’t feel capable of either anticipating the effect of my proposed idea on this particular point, or of offering any solutions. Nevertheless, I wish to draw attention to it, so I’m leaving this paragraph in until I can amend it to something more useful/promising.
I hereby submit this idea to the LW community for screening and sound-boarding. I apologize in advance for your time, just in case this idea turns out to be flawed enough to be unsalvageable. If you deem the concept good but flawed, we could perhaps work on ironing out those kinks together. If, afterwards, this seems to you like a good enough idea to implement, know that good proposals are a dime a dozen; if there is any interest in seeing something like this happen, we would need to move on to properly understanding the current state of secondary and higher education, and figuring out what incentives/powers/leverage are needed to actually get it implemented.
[1] By ivory tower I simply mean the protected environment where professors teach, researchers research, and students study, with multiple buffers between it and the ebb and flow of political, economic, and social turmoil. No value judgement is intended.
EDIT: And now I look upon the title of this article and realize that, though I had comparisons to games in mind, I never got around to writing them down. My inspirations here were mostly Civilization’s Research Trees, RPG Skill Scores and Perks, and, in particular, Skyrim’s skills and perks tree.
Basically, your level at whatever skill improves by studying and by practising it rather than merely by levelling up, and, when you need to perform a task that’s outside your profile, you can go and learn it without having to commit to a class. Knowing the right combination of skills at the right level lets you unlock perks or access previously-unavailable skills and applications. What I like the most about it is that there’s a lot of freedom to learn what you want and be who you want to be according to your own tastes and wishes, but, overall, it sounds sensible and is relatively well-balanced. And of course there’s the fact that it allows you to keep a careful tally of how good you are at what things, and the sense of accomplishment is so motivating and encouraging!
Speaking of which, several networks’ and consoles’ Achievement systems also strike me as motivators: for keeping track of what one has achieved so far, to look back and be able to say “I’ve come a long way” (in an effect similar to that of gratitude journals), and also to accomplish a task and have immediate and universal acknowledgement that you did it, dammit (and, for those who care about that kind of thing, the chance to rub it in the faces of those who haven’t).
I would think our educational systems could benefit from this kind of modularity and from this ability to keep track of things in a systematic way. What do you guys think?
What if this finds that the best predictors for most employers consist of two tests: (1) a <10 minute IQ test, and (2) a test showing a student’s willingness to submit to authority and learn what he was told, in the manner he was told, without getting too far ahead of or behind his classmates?
I am willing to bet money (up to €100; I am a student) that these will not, in fact, be the best predictors, just as grades in college have been shown not to be good predictors of how well a Google employee does, IQ tests have been shown not to be a good predictor of how well a salesman does, and higher salaries have been shown, past a certain point, to be detrimental to productivity for employees whose jobs require creativity, imagination, or otherwise intellectual exertion.
My hypothesis is that a <10 min IQ test will be so inaccurate as to be useless, and that the current school system already favors people who do what they are told, as they are told, without getting too far ahead or behind, with the destructive and wasteful results we all know: they are the ones it grades best, and yet, time and again, when tested against the workplace, those grades have proven to mean little.
IQ test scores are massively correlated with workplace performance.
I find it plausible that among the set of people Google has hired grades don’t predict workplace success, but I bet if Google were to randomly select employees from among the U.S. population (or even from just among computer science majors) grades would be hugely correlated with performance.
How much are you betting?
I can cite my studies, if you want. It may take me some time to find them (an afternoon or two). Can you cite yours?
What are the terms of the bet? I would be willing to bet €100 depending on the terms. Full disclosure: I read a lot of academic research on IQ to help me write this book, one of whose chapters is about IQ.
There is a lot of silly stuff written about IQ. No offense, but do you think you have the needed background to separate the high-quality from the low-quality IQ scholarship?
I only read Talent is Overrated, which cited a few studies to the effect that IQ is a bad predictor of professional performance beyond the short term; to become a world-class performer at some specific skill, you need to spend 20,000 hours of your life deliberately practicing it (trying things outside your comfort zone, failing to get it right, figuring out what to improve and how, getting it right, trying something harder, rinse and repeat; and it’s a pain in the neck). Most people, no matter how smart, tend to stop improving at their jobs after around a year and a half of practice: this includes highly skilled professionals like doctors and lawyers.
And, on the anecdotal side, in my own experience and what I could gather from the people who frequent LessWrong and TVTropes, a high IQ commonly results in huge childhood issues, a miserable social life, and a boatload of akrasia, which seriously hamper whatever base utility it might have. Perhaps there’s a selection bias and only the incompetent brains have the time to hang out here, while competent smart people are too busy actually getting things done?
There is massive selection bias going on here.
I don’t think it has as much to do with free time as with target audience, though. LW attracts a few different clusters of people, but the ones you’re seeing in this context are those who feel their thinking is flawed in some way, and who believe they have a decent chance of fixing it with a cognitive science toolset and vocabulary. The site’s native idiom and interaction style—basically a founder effect—imposes a few more filters, underrepresenting some problems and overrepresenting others. Akrasia and social problems are precisely the issues I’d expect to see a lot of, given those constraints.
TV Tropes… well, that might have more to do with free time. Almost everyone likes media, but if you want to make many original contributions, you need unusual knowledge of media and an analytical attitude towards it. Moreover, the media best represented there tend to be the most time-consuming ones—TV, anime, doorstopper fantasy novels. I don’t think I need to go into too much detail regarding the people most likely to share those requirements.
Offtopic post, but a discussion I wish to pursue nonetheless:
Regarding TVT: it used to be so. Nowadays, works studied in school and the “literary canon” are beginning to find their way in… and all those tropes with silly names, built from mass media, are proving their usefulness as tools of analysis. Of course, getting a movie adaptation or a TV miniseries is one of the best ways to draw troper attention to a work, but classics always end up getting those with some regularity. So let’s just say that the user base has widened. Oh, and many classics are as doorstoppery as modern fantasy sagas, especially stuff from the 19th century, when novels were published as long-running serials in magazines and authors were paid by the word. When people call Eliezer Yudkowsky a terrible writer because of MoR’s lack of tightness or his using it as a vehicle for ideas and lectures, I feel half-tempted to point at the likes of Victor Hugo or Alexandre Dumas or Dickens or Benito Pérez Galdós, just to name a few… surely, if those are the traits of terrible writing, it means that those books have room for improvement, if only by way of abridging them?
Regarding LW: The filters the site imposes on its demographics (language and mode of interaction) worry me: what’s the point of translating MoR into Spanish or French, if afterwards hispanophones and francophones don’t get to read the Sequences in their language and have a site to form a community on? I’m trying to seed a rationalist community at my school, and the language barrier is proving to be troublesome; I often find myself not even knowing what a bias would be called in Spanish.
There have been a few Sequences translation projects discussed, proposed, or started, in various languages, plus a few more for MoR. Of these, only one (diegocaleiro’s, in Portuguese) seems aimed at a persistent community hub. I might have missed some, though.
With regard to the specific issue of bias names, if you don’t have access to cognitive science faculty, I think a good place to start might be finding a translation of Daniel Kahneman’s Thinking, Fast and Slow; Kahneman is a major source for the Sequences, his book’s been widely translated, and there should be a lot of overlap. After that, you might want to try digging into XiXiDu’s resource list. Though there are a few original LW coinages floating around, and I’m afraid you’re on your own with those.
That is junk science.
The correlation between IQ and workplace performance is extremely robust and very well established.
See Intelligence: Knowns and Unknowns
Well, thank goodness I’m on holiday and can actually afford to read all that. Nevertheless, such a violent flip-flop of opinions brings me dangerously close to epistemic learned helplessness, i.e. giving up on learning new stuff because of not feeling confident that I can tell good arguments from bad ones when they come from experts, after getting burned one time too many.
That is a dangerous state for a rationalist to be in. So would you please be a dear and have a look at that book, to figure out how precisely it junk-scienced me and the rest of its readers? It would really help me out and I’d be really grateful.
Also, I have a vested interest in believing the “only deliberate practice matters” thesis. I used to feel that having a high IQ obliged me to get excellent results with little effort, and every time that didn’t happen I felt disappointed in myself, as if I hadn’t lived up to some weird standard. Nowadays I only believe in working as hard as possible for as long as possible, and it serves me much better.
I’m afraid I haven’t read that exact popularization, but if it’s drawing on Ericsson’s research, as it sounds like it is, the explanation is easy enough: Ericsson’s results mislead largely because the studies are correlational, do not control for underlying factors or Matthew effects, and suffer from heavy range restriction, in that he’s already looking at people who are selected or self-selected to be elites.
(i.e. suppose someone studied MIT physicists with a mean IQ of 150 and discovered that, in this group of physicists, Conscientiousness predicted better than IQ who would go on to win Nobels. This is a possible result, and what it would actually demonstrate is “you have to be incredibly brainy to be an MIT physicist in the first place, but once you’ve got that, then other things are also important; which is another way of saying that if we look at the general population, like all the people from IQ 60 to 150, IQ is the overwhelmingly most important trait”, but this is easily popularized to “IQ doesn’t matter!”)
As a practical matter, it’s probably a good idea to believe hard work matters and deliberate practice matters. If you could somehow improve your intelligence, then it might be important to hold correct beliefs about IQ being far more important than practice; but unfortunately, IQ is pretty much fixed, and all that’s left is to make the best use of it possible. Given people matched on IQ and other traits, Conscientiousness will be pretty important.
However, in other contexts, it’s very important to hold the correct beliefs about the relative value of intelligence and ‘just work harder’ - if we were discussing iodization or immigration or whether someone should go into debt for college, for example. Many population-level questions will rest far more on intelligence than other traits.
Understanding which context we’re in can be a hard balancing act, and especially difficult when reading papers making statistical claims (did they control for IQ? Should they control? Or for education? Or for range restriction? Or for reliability of their metrics?); I try to be consistent and clear in any discussions of Conscientiousness or IQ which context we’re in and which we should value or ignore, but I don’t think I always succeed.
Well thank you very much!
...sigh... This may be the first time I find some actual use for compartmentalization, and I can’t do it anymore...
Imagine a world where 50% of your results are genetically determined and 50% of your results are hard work. What would be the best strategy for success in that world, assuming that you already have decent genes? It would be working hard. Not working 50% hard, but working 100% hard.
Seems like you found the right strategy for the wrong reasons. You can keep the strategy; you don’t have to blindly reverse your decisions.
I dunno, at a certain point the marginal utility of one unit of hard work will be less than the marginal utility of one unit of leisure, and it’s quite possible that the point at which that happens depends on how genetically good you are.
You should work 100% hard on whatever you’re working on while you’re working on it, but there might still be cases where you should think about the nature/nurture ratio to get the best outcome.
If outcomes are all about hard work, doggedly aiming for a rare high-reward position that requires a large amount of skill, like a quantitative analyst on Wall Street or a professional athlete, can be a good high risk / high reward strategy. But the more you know outcomes to be affected by genetic talent, the faster you’d want to recognize that some goals are beyond you and direct your 100% effort elsewhere if you find your genetic talent lacking, because then the people who also put in 100% effort but have more genetic talent than you will take all the positions no matter how much effort you put in.
See Hambrick et al, Deliberate practice: Is that all it takes to become an expert?, for a recent critical assessment of the claim that deliberate practice is sufficient for attaining expertise. Gwern provides some quotes.
I’d be curious to see that research that proves IQ test scores are correlated with workplace performance. All jobs or just some? How strongly correlated? How is workplace performance measured? Is it discussed in your book?
A good starting point might be this 2004 review by Schmidt & Hunter. It tabulates correlations of 0.23 to 0.58 between “general mental ability” and “supervisory ratings of job performance”, with the correlation increasing monotonically with a five-level rating of job complexity.
Edit: I should probably add Schmidt & Hunter’s note that those averages “are corrected for measurement error in the dependent variable and for range restriction but not for measurement error in the GMA measure; hence, these are estimates of operational validities, not construct-level correlations.”
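For readers wondering what that correction does, here is a minimal sketch of the standard correction for direct range restriction (Thorndike’s Case II); the example numbers are mine, not Schmidt & Hunter’s:

```python
# Thorndike's Case II correction for direct range restriction:
# r_corrected = r*u / sqrt(1 + r^2 * (u^2 - 1)),
# where u = SD(population) / SD(restricted sample).
import math

def correct_range_restriction(r, u):
    return r * u / math.sqrt(1 + r**2 * (u**2 - 1))

# E.g., an observed r = 0.30 in a sample whose ability spread is only 60%
# of the population's (u = 1/0.6) corrects to roughly 0.46:
print(correct_range_restriction(0.30, 1 / 0.6))  # ~0.46
```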
Even if (1) were possible (which I kind of doubt), it’d be quickly Campbelled into uselessness if employers started to use it.
It’s very useful and, in the U.S., illegal. Employers, I suspect, often favor applicants with degrees from prestigious colleges as a way to get around the direct ban on the use of IQ tests.
Given recent Supreme Court decisions, I’m not sure how much longer this will be a true statement of US law.
The most recent case held that fear of disparate-impact lawsuits was not a justification for disparate treatment (throwing out a promotion test because no minority passed). In a concurrence (agreeing on the result, not the reasoning), one justice noted that disparate-impact employment law is in considerable tension with the constitutional doctrine of equal protection: ConLaw equal protection does not have a disparate impact component, while, as you correctly note, private employment discrimination law does.
There are two possible resolutions of this legal tension:
a) Employers are screwed—any choice subjects them to either (1) a valid disparate impact claim or (2) a valid disparate treatment claim.
b) Disparate impact claims will be tightened substantially—to the point that one would likely need to prove a facially neutral test was selected primarily for its disparate impact. This is functionally equivalent to lowering the legally required evidence to show disparate treatment.
My read of the legal landscape is that (b) is massively more likely than (a). (> .99). The resolution of that tension might even make private disparate impact claims practically impossible to prove. Or disparate impact liability might even be ruled unconstitutional (in the absence of disparate treatment evidence).
Although IQ tests were initially outlawed by the courts on disparate impact grounds, I believe that Congress enacted these IQ test restrictions directly into civil rights laws so they can’t be overturned by courts except on constitutional grounds, which seem unlikely.
The test makers in the Ricci v. DeStefano case you cite went out of their way to have their tests not be just IQ tests.
I’ve looked briefly, and I don’t see a federal statutory cite that explicitly prohibits IQ testing. Most employers’ decisions not to administer an IQ test can be almost completely explained by a risk-averse unwillingness to defend IQ tests. Defending lawsuits costs money, even if one wins, and litigation is always a risk.
In short, I think employer behavior avoiding IQ tests is expected-value maximizing for individual employers; for tragedy-of-the-commons reasons, this might be bad for employers overall. Most importantly, I don’t think there is a federal law specifically prohibiting IQ testing for employment; it’s just expensive for an employer to show that higher IQ is helpful for the specific position the potential employee is seeking.
See this.
That article is consistent with my brief legal research, and increases my confidence that Congress has not explicitly prohibited IQ tests by statute, regardless of whether a potential employee could show disparate impact through use of the IQ test. I think this quote from the article is a reasonable statement of current law:
I make no comment on whether the current state of the law optimizes what it claims, or what it should optimize in some moral sense.
One possible source of information here is what goes on in other countries that do use IQ tests for this end.
In England I’ve witnessed de facto IQ tests being used for grammar school admissions, and job applications to banks & auditors. So some English headmasters & professional service companies’ HR departments think they’re useful, which does not prove they are useful, but is good evidence of it.
I suspect the tests are more useful for the schools than the graduate employers, since there’s less scope for cheating with the school tests (which are virtually always administered like exams AFAIK) than the employment tests (the first round of which can sometimes be taken on any Internet-connected computer, such as the one you’re sitting at in a campus café while your mate hangs over the back of your seat and has a jolly time alternating between calling out answers and ribbing you for feeling the need to cheat), and the employers are sampling a more range-restricted population (stereotypically, 21- or 22-year-olds with 2:1 or 1st class quantitative degrees from impressive-sounding universities).
I don’t doubt that IQ tests can be useful, but I’m sceptical that one shorter than 10 minutes can be so robust as to make anything else (save a test for willingness to submit) unnecessary.
I was speaking more to your Campbelling point than JM’s shortness point. I’ll quickly address the latter now.
I’ve seen papers mention developing or using short versions of IQ tests that give scores correlating well (r > 0.8) with scores on the original full tests. I don’t know whether these “short” tests last <10 minutes, but it wouldn’t surprise me. Also, the US General Social Survey has a 10-item (I think?) vocabulary test that works as a passable proxy for IQ, even though it must take only 5 minutes or so to administer. Given these two existing kinds of test, I wouldn’t be surprised if a 9-minute IQ test had enough reliability to be almost as good as a full-length IQ test.
Note that JM wasn’t saying his two suggested test types would make other tests unnecessary. I agree with you that short tests for IQ & submissiveness wouldn’t make all other predictors of job performance superfluous. (Conscientiousness tests & structured interviews, if I remember rightly, both predict job performance even after controlling for IQ.) JM just said they’d be better predictors than anything else, which wouldn’t surprise me much. AFAIK, a full-length IQ test is the best general predictor of job performance, and the same would presumably be true for a 9-minute IQ test were it reliable enough (q.v. the previous paragraph).
[Edit: Christ, I really like that “wouldn’t surprise me” phrase, don’t I?]
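A back-of-envelope way to see why a drastically shortened test needn’t lose much signal is the Spearman-Brown prophecy formula; real short forms are validated empirically rather than derived this way, and the numbers below are only illustrative:

```python
# Spearman-Brown prophecy formula: expected reliability of a test cut down
# to a fraction k of its original length, given full-length reliability r.
def spearman_brown(r, k):
    return k * r / (1 + (k - 1) * r)

# If a full-length IQ test has reliability ~0.95, a quarter-length version
# would still be expected to land around 0.83:
print(spearman_brown(0.95, 0.25))  # ~0.83
```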
That sounds even more amenable to that problem (if the items are the same in all editions of the test): if employers started to use such a test to sort prospective employees, within a year applicants would probably start memorizing those particular 10 words and >80% of them would ace the test.
The words are not always the same. They are selected from a standard wordbank.
From what I’ve seen in certain similar situations, some people would prepare for such a test by trying to memorize every single word in the wordbank, even if there are several thousand of them; OTOH, if the bank is large enough, then the extent to which they managed to do that would probably correlate with IQ strongly enough. (Unless some candidates use SRSs and others don’t.)
People have already pointed out that IQ tests are considered illegal discrimination, and have been treating that as a mistake, but I think a good case can be made that it is proper to treat them as illegal discrimination. Imagine that a test genuinely detects ability to perform at the job, but is also correlated with race independently of whether people of that race can do the job. Under those circumstances, if you use an IQ test to screen employees, you’ll 1) avoid stupid employees and 2) avoid employees of one race in favor of equally competent people of another.
The overall result will be beneficial to you, since avoiding stupid employees benefits you while avoiding employees of one race has neutral effects on you. But the overall effect on people of that race will be devastating.
This will never be corrected by the market. The employers, after all, benefit unless there are so few available employees that cutting out the good ones of the wrong race reduces the pool of potential employees enough to affect the overall employee quality to a significant degree. How could we prevent this other than by banning IQ tests?
This ‘devastating’ effect sounds drastic. It also seems to directly require the assumption that people of the particular race perform devastatingly poorly on IQ tests. That’s a rather dubious assumption, and when asked to ‘imagine’ a world where this idea is so, any conclusions we draw only apply in that imaginary universe, not this one. This is not a ‘good case’.
When arguing that IQ tests should be illegal discrimination due to drastic racial bias in IQ tests, the reasoning from premise to conclusion is straightforward. The bulk of a ‘good case’ must lie in providing evidence for the premise. Maybe there is sufficient evidence to make that claim about IQ tests, but imagining it is not sufficient.
No, it doesn’t. It only requires the assumption that 1) such people score slightly more poorly, and 2) employers begin choosing employees from the tail end of the distribution. A slight difference in scores on the test will reduce the percentage at the tail end out of proportion to the size of the difference. I don’t remember the formal name for this, but it’s very well known.
Edit: Also, consider what percentage of the population needs to be unemployed for the problem to be considered serious. 10% is bad. If test bias causes an additional 5 or 10% to be unemployed, that’s going to be a huge effect even if 5% or 10% is only a small part of the entire population. Nobody says that the unemployment rate needs to get up to 25% or 50% in order for unemployment to be a serious problem.
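For concreteness, here is the tail effect being gestured at, sketched with an assumed normal model and an illustrative 0.3 SD group difference (the numbers are mine, purely for illustration):

```python
# How a small mean difference gets amplified in the tail of a normal curve.
from scipy.stats import norm

cutoff = 2.0  # employer hires only above +2 SD on the test
shift = 0.3   # hypothetical group difference of 0.3 standard deviations

above_a = norm.sf(cutoff)          # group A: fraction above the cutoff
above_b = norm.sf(cutoff + shift)  # group B: same cutoff, mean 0.3 SD lower

print(above_a / above_b)  # ~2.1: a 0.3 SD gap halves tail representation
```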
I will not quibble about what constitutes ‘devastatingly poor’ beyond observing that I of course agree that the difference need not be numerically large in order to be devastating. I will again point out that you are assuming that there is sufficient bias in the IQ test that, if it is used (instead of whatever other form of measurement or selection could be adopted), the change will be devastating. That is a significant claim, particularly when it is quite possible that subjective evaluations of intelligence by interviewers are more biased than IQ tests. I expect IQ tests may be somewhat racially biased. I know for certain that unstructured human arbitration is racially biased. I don’t know for sure which is worse, but I do observe that the question is sufficiently controversial that some evidence is required.
You asked the reader to imagine a world where some assumption holds—which is fine as far as it goes. It becomes an error in reasoning when you jump from “imagine...” to “a good case” without providing evidence that the imaginary scenario applies to reality. Even the bare statement “I assert that IQ tests are more racially biased than whatever is used in their stead” would have made the case at least coherent, albeit still weak.
The argument was in response to the implicit assumption that even if the tests are biased, we shouldn’t worry about that as long as they can predict performance. “Imagine” takes the place of the word “if” in there and the intended conclusion is “yes, you should worry about whether the tests are biased, because if they are biased, that would be bad.” To make that conclusion I do not need to provide evidence that the imaginary scenario applies to reality.
If I say “imagine that you jumped off a cliff. You’ll get smashed. Maybe you should avoid jumping off cliffs”, I don’t need to provide evidence that you jump off cliffs, because the conclusion is in the form of a conditional that already conditions on whether the imaginary scenario is real.
This sounds like an awesome idea. Does anyone know if such a thing already exists? If not, I would be willing to commit $500 to someone who wants to make it (assuming they already have the appropriate skill set, of course).
I would like to strongly voice my interest in such a service myself. I don’t have spare capital to give, but I want this to happen.
I submit a (very weak) candidacy: I don’t have the skill-set yet, but I’m in a relatively good position to acquire it if I am sufficiently motivated. I know some programming (the very basics of C++ and Python), some statistics (the basics of linear regression, confidence intervals, normal and Student’s t distributions), some advanced math (calculus up to integrals of functions from R³ to R, algebra/geometry up to quadratic functions and the optimization thereof), and some Operations Research (specifically “modeling a problem as a function to maximize or minimize, plus a number of restrictions”, and “optimizing stuff using the Simplex method”). I also know a few basic things about product design, marketing, viability of business plans, and making web pages…
Dammit, my skill-set sucks! Somebody please show up, don’t let us be stuck with me!
Well, I must agree that your skill-set sucks for a project like this. But DAMN, that’s a pretty solid skill-set as skill-sets go.
Thank you, that’s very nice to hear. I myself am rather frustrated with it: I started out with the intention of leaving with a Save The World pack, but nowadays I’m finishing the degree and finding that it’s more like a Now At Least You Have A Vague Idea Where To Begin pack...
Now you’re at the point where the career center tries to usher you into a job at megacorp or lure you into formal graduate school.
Depending on which region of the world you’re in, the term “Engineer” may have certain regulatory obligations, meaning that you may feel the (sometimes debatable) allure of attempting to work enough years (usually 3) under an accredited engineer to earn your own designation; at least, so it seems in some countries.
Yes, precisely; you wouldn’t believe how much pressure we’re getting in that direction. I’d really like to find a proper study on how advisable that is, the interests and pressures involved, and so on. I don’t mind following the mainstream if it’s the best compromise between speed and safety in achieving my goals, but I’d like to be able to determine that. Right now I frankly feel like I’m mostly going in blind.
I think we do get all of the legal privileges (we get to sign projects) and responsibilities (we get to be responsible for anything bad that happens afterwards) as soon as we’re out of school, though I need to investigate that...
Wait, seriously? I’ve got that along with perhaps slightly stronger programming skills, significantly more math, and a whole slew of concepts from an electrical engineering education.
Any good ideas for independent projects that would coincide with these sorts of skills?
Oh, I didn’t mention electrical engineering (specifically, power lines, motors and generators, static and dynamic converters and transformers) because I thought it was completely irrelevant to this particular task, as were my skills in mechanics and machine theory, thermodynamics and heat transfer, hydraulics, and automated control (off the top of my head). Of course, getting hired by rationalists and working among them would be freaking awesome, so if anyone’s got a job for the likes of me, please do tell.
I’m reminded of a couple of students at a German university last year who studied all the material by dividing up the classes between them and exchanging notes, took all the exams, and passed in a few months. The university then turned around and sued them for studying too fast.
http://www.thelocal.de/education/20120703-43517.html
It is not in a university’s interest to do a good job. It’s like any other company: the aim is to extract the most $$$ from you while giving up the least value in return.
In highly competitive markets this leads to fairly marginal profits. But formal education is not a competitive market. Not at all. Very tightly regulated.
The competition for education comes almost entirely around costs for most people. They view education as an expense, not as an investment—which is quite reasonable when you consider the likely quality they’re going to get—and aim to minimise that expense. Offer people a better education and the majority of them won’t be willing to pay much for it, offer them a cheaper education though, or a faster one—which amounts to more or less the same thing....
That’s the financial side of things anyway, and one of the reasons I think your idea’s just never going to happen.
Assorted
1 . I don’t know, but I wouldn’t think this data’s going to be available or reliable. Google did their own research on this, which suggests to me that the data wasn’t available when they looked for it elsewhere. One thing’s just that everyone’s going to record things slightly differently—there’s not an industry standard for measuring this sort of stuff that I’m aware of. The other is that most places are not particularly rational about their HR procedures—there’s a set of skills that goes into even the basic recording of data that I’d not expect them to have. I’d expect it to be more like they sit there with a sheet of paper that they’re marking 1-10 based on their subjective opinion of your answer to a question. That’s probably the extent of any data most people will be expected to generate and, perhaps, keep.
In the case of desired by employers I suspect you might find the data was actually contradictory to predictors of success a lot of the time too. I know, when I go for interviews, I tone down a lot of the qualities that let me do things easily—the corollary of don’t dress better than the boss is don’t act smarter/more skilled than the boss—most people like to hire people they can use as tools for their own success, not colleagues who they have to work with. Especially considering lots of economic behaviour seems to be self-justifying/rent-seeking.
================
That said, I would imagine that what you really want to do is to follow up with people who’ve left university, both those who won and those who lost, and ask them what they wish they’d known then that they know now. Certainly, a few years out of the gate, I could give my university a great deal of feedback.
================
2 . The incentives in HR seem to be risk-averse: to find reasons not to take people on. If you hire someone and they do really well, you don’t get rewarded, whereas if you hire someone and they’re awful, your job is at risk. If that’s heavily weighted in the decision maker’s consideration, then giving people more granular data will result in dramatically fewer hires, since the probability of a concurrent set of criteria being fulfilled is the product of their individual probabilities (see the quick sketch after this list). Even if the individual probabilities are quite high, HR managers will be able to talk their way into using a great number of individual criteria, and then you’ll be hung.
3 . It makes the market more competitive, not necessarily in a good way. Students take on a great deal of the financial risk in education these days. Which honestly seems the wrong way around, but there you go. If rich people can afford to grind their way higher up the tech tree, then you’re essentially pricing poor people out of the market. The rich person only has to be ahead by one or two points—you’re probably going to pick someone who’s even slightly better on paper over someone worse. So there’s an incentive to have large steps in your pricing.
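To put a number on the point in (2), assuming for simplicity that the criteria are statistically independent:

```python
# Ten hiring criteria, each met by a generous 80% of candidates, jointly
# filter out almost everyone (assuming independence; correlated criteria
# would be somewhat less harsh).
print(0.8 ** 10)  # ~0.107: roughly one candidate in nine clears all ten
```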
That in turn reminds me of Nick Bostrom:
Thank you for bringing this up. One of the huge deals of the Industrial Revolution, together with the American System of Manufacturing (basically, interchangeable parts) and the patent system, was the extensive development of Standards: originally, everyone agreed to do things the same way because it made it easier to divvy up tasks between competing manufacturers, and made things cheaper.
Then there were safety standards and regulations, which, in practice, are often about who to blame if anything goes wrong; like taxes, there’s a strong sense of “it ain’t cheating unless you’re caught”.
And then there’s this more recent movement for getting ISO certifications in the workplace and in universities. I haven’t been on the front lines for that, but it would appear that it’s turning out to be kind of a huge deal: standardized organization, standardized customer service, and so on and so forth. The buzzword is “quality”.
To sum it up, the standardization of stuff is a work in progress, and people do realize the value in it.
And that’s terrible! Do we have a name for that bullgum, or shall we just call it an instance of the Tragedy of the Commons? Aren’t there studies on how this attitude is bad for a company’s future and how to disincentivize it from within the company? Or maybe pay a third party to do the hiring?
?! Why is—Huh? I’m sorry, I’m dumbfounded by how freaking stupid this is.
That doesn’t sound economically sustainable to me: the law of conservation of work force (which I just made up) dictates that, in a steady state of work output and productivity, what comes out must come in. The number of hires needed is the number of hires needed. Unless we’re in an economic crunch and companies are explicitly downsizing, adding hoops to jump through is an expensive zero-sum game between HR people and departments, not something prospective employees should be worried about.
Isn’t that already so, though? Only the well-off, and those so brilliant that they get scholarships (and how many full scholarships are there still?) can bear to go through the excruciatingly long, overly detailed and wide studies we have nowadays. I suspect that things wouldn’t change much from today: middle classes and rich immigrants would invest in education because they can’t live off capital alone, lower classes usually won’t bother taking the risk and will stay where they are, and upper classes won’t bother making the effort, or will just study whatever they please at their own leisure.
I haven’t seen this dynamic discussed in the cog sci literature, but colloquially it sounds like an instance of tall poppy syndrome. Tragedy of the commons doesn’t sound right—there isn’t a shared resource being exploited, and coming to individual understandings doesn’t further incentivize those not bound by them.
I thought TPS was for people who were already equals, not for acquiring new resources.
More importantly, how does one work around that? What is it that makes people not mind the success of others? What is it that makes them supportive of said success?
I think I am like this, but I am not sure why. Introspection has its limits, but here is my guess about the contributing factors:
1) I feel good about myself. I love myself, unconditionally. No matter how much success other people achieve, I will always be my precious me. You can’t make me feel bad about myself by being better than me, because that’s completely unrelated. At worst, you can make me curse my akrasia more (which in far mode I would consider a good thing, so thank you for that). -- Funny thing is that this trait is often perceived negatively by other people; it probably violates the social pecking order. (Who am I to feel good about myself, when our superiors and leaders often don’t?)
2) I focus on cooperation. If you have a skill I don’t have, I want you to develop that skill as much as possible, because then we could do more impressive projects together. It’s as if I were outsourcing the skill to you. And if you have a skill I have too, I want you to develop that skill as much as possible, so that I can learn from you. Both cases assume that you are my friend, of course. But if you are not my friend, I can usually simply ignore you. Actually, there are so many existing skills and subskills that, whatever you focus on, you are most likely not doing exactly the same thing I am, so in some sense we are not competitors.
3) I see the big picture. Something like Eliezer’s drawing where you have the village idiot next to Einstein, and there is a lot of space above them. From that perspective, all our local advantages and disadvantages seem trivial. We are a bunch of stupid monkeys and most likely we are all going to die, aren’t we? Competing with other people is like running at the Special Olympics. Even if you win, you’re still retarded.
4) I usually ignore other people, so I often simply don’t notice their success. -- It’s not the only factor, because I also feel supportive when I notice. But perhaps reducing the total amount of perceived success is helpful; maybe more perceptive people are overwhelmed by what they see. (I mean, if you know 99 people as successful as you, and if you perceive their every success, then 99% of the success you see is not yours. Perhaps that makes people sad.)
EDIT: Assuming this model is correct (which is doubtful), the important question is whether this can be taught. It seems to me that people usually aim for (1) with their children and fail to achieve it; which in my opinion does not mean the goal is impossible, just that they are doing it wrong, probably because they don’t have the skill themselves. Part (3) seems easy to teach; part (2) should be possible with longer training and experience. Part (4) probably requires being born an aspie; and if it can be taught, it would be rather controversial.
The main issue in implementing such a system is testing and certification. If you use quizzes then there is nothing to stop the student from looking up the answers in another window. The system can still be made but the tests need to be designed so that students can’t just look up the answers. The solution is applied knowledge problems.
For example, instead of asking someone about the physical properties of a metal, you would ask them what metal they would use for a certain construction and why. Student programmers would be required to write short scripts. Student geographers would interpret maps. Student lawyers could say how a client should behave in different situations.
Basically, you would test whether people can use their knowledge instead of whether they can merely communicate it second-hand. The downside is that such questions are harder to write.
The upside is that it tests for the actual skills (expertise at) rather than the ability to talk about them (expertise on), which is something that should only be required of communicators such as teachers/professors, journalists, lawmakers, and so on.
TL;DR: the difficulty in solving this problem is the availability of good data, NOT the lack of decent skill-set models.
So let’s say you finish an outline of skills in these trees, as you suggest. Now we want to build the statistical models that are at the core of your proposal: “One would determine what to learn based on statistical studies of what elements are, by and large, most desired by employers in, or most predictive of professional success in, the field one wants to work in.” Where exactly do you plan to get the data to actually do this? As you “level up” down your skill tree, the number of people who share any given skill combination is going to get very, very thin. I might be the only person in the world who has both calculated a next-to-next-to-leading-order cross-section at Tevatron energy levels AND used restricted Boltzmann machines to categorize medical billing codes to triage health insurance claims. Backing up a bit, “quantum field theory” + “health care informatics” + “statistics” might have what, a dozen people, all of whom are going to be outlier-ish for the general field of insurance data analysis.
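A toy count of how fast profiles outrun people (the tree size and workforce numbers below are invented):

```python
# Even a modest skill tree with 30 binary "certified or not" nodes admits
# about a billion distinct profiles; the numbers here are invented.
nodes = 30
profiles = 2 ** nodes
workers = 10_000_000  # a hypothetical field's entire workforce

print(profiles)            # 1,073,741,824 possible profiles
print(workers / profiles)  # ~0.009 people per profile on average
```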
Let’s look at data from physics (because I know it well): here is the AIP’s report on physics bachelor’s holders. Bachelor’s degrees outnumber PhDs by quite a bit, so this is the densest you are going to get for certain types of physics skills. http://www.aip.org/statistics/trends/reports/empinibs0910.pdf. If you subtract out the exclusions (unemployed, people who went back to their pre-college job, and the part-time employed), the data is already cut down to 2/3. Of those, the two largest chunks are high-school teachers (about 6% of the total physics bachelors) and engineers (about 7% of the total physics bachelors).
So we have about 350 people who teach high school, probably high-school science, so you can maybe compare physics-educated general science teachers to otherwise-educated general science teachers. A bit more than that are in “engineering” fields, but there are tons of subfields/‘skill-trees’ in engineering. You’ll have <100 per skill-tree at the tree’s base! The data is already very thin, and it’s much thinner for all other career paths.
Career organizations that have much larger memberships (i.e. most career organizations) have much less detailed information because collecting it becomes more of a chore. These larger datasets probably won’t have the information you’d really need to fill out your skill-trees.
Also, I’m willing to bet that the world moves fast enough in a number of fields that you can’t use very deep historical data to make predictions. E.g., two decades ago a physics PhD was a decent way to get a quant job on Wall Street, BUT two decades ago a master’s in financial engineering didn’t exist. Maybe in another decade the MFEs squeeze out the PhDs the way that CS majors squeezed out physicist programmers in the 70s and 80s.
I’m starting a job in an “adaptive learning” startup soon, and many of the points you make here remind me of the things this company does or plans to do. The basic idea of the company is that it collects data about the student as he or she interacts with an electronic course, then uses this to personalize the course and make recommendations for the student’s education path. This isn’t quite the same as what you’re suggesting, where a student independently finds educational content and then gets certified in those areas. However, there are several similarities.
The tree of learning definitely exists. In your idea, the students find their own paths through this tree, but the adaptive learning way is to recommend paths to each student based on their performance in previous courses.
Both ideas allow each student to have some type of profile that showcases exactly what knowledge they have obtained. Adaptive learning could also add to this profile things that are specific to how the student learns (e.g. is it better to introduce this person to this math concept by offering a rigorous proof or by explaining several revealing examples?). Something like that could be helpful for the “intellectual OK!Cupid.”
You envision a scheme wherein the student acquires access to real or virtual libraries of educational content, with varying degrees of completely independent learning and institutionally guided learning. Right now, the company where I will work partners with education content companies (which seem to be mostly textbook publishers), and is only able to offer adaptive learning to students learning through the specific classes created around these textbooks. However, I believe there are plans to make the platform more open, so any course could be built with it.
The main disadvantage of adaptive learning, which your system does better with, is the problem where the teaching and grading is mostly done by the same entity. This seems to be a necessary aspect of adaptive learning, because the data collected about the student as they are taking a test, quiz, or exam is used to learn more about how the student thinks and learns. Still, there is the possibility that an adaptive course can use a test or quiz as a personalized teaching tool, and not only as a way to evaluate performance.
That is absolutely brilliant. Well, I guess the “Nothing New Under The Sun” effect strikes again; it’s really hard to come up with something that is both very original and very good. Still, I’m glad to know that other people are doing something similar, and that a practical, empirical groundwork is being laid: this might be more than a pipe dream after all.
How about starting in a supportive domain that you happen to be interested in where existing taxonomies and skill progressions seem to exist?
http://intelligence.org/courses/
http://rationality.org/recommended-reading-on-rationality/
LW “curriculum”
Tech/skill trees help with the visualization process, and that goes a long way toward motivation. Much of my struggle battling akrasia while learning math and beginner rationality is to try to stop imagining forbidding mountains of concepts, nomenclature and future practice, and instead focus on images of cartoonish/animated tech trees that are more appealing, tractable and familiar (for those of us with gaming backgrounds); ultimately, anything less intimidating than an impassable mountain to climb.
Added bonus: You will probably not find a more willing group of participants to enthusiastically input, rate and measure themselves on their cognitive and rationalist skills.
Added bonus 2: I speculate that CFAR may be able to integrate this type of system as additional value-add for post-workshop alumni.
edit: “Data Scientist” seems to be a buzzword used to trigger competence and desirability in job candidates. Here’s an example of a visual roadmap: http://nirvacana.com/thoughts/becoming-a-data-scientist/. I think other people are willing to create skill trees, but the missing ingredient is the ability to animate and infuse them with meaning over time (i.e. referring back to your ideas of tests for each node/module, with clear achievement and progress indicators).
Going to college and getting a degree by the standard method serves as signalling; it shows that you are someone willing to spend a lot of resources (both time and money) on your skills.
Any method of education that demands fewer resources from the student will inherently subvert this signal, and therefore won’t be accepted by employers.
Testing and certification is an extremely difficult thing to get right, and indeed I’m not sure anyone really has done this. It takes a lot of effort to write good tests. It is almost impossible to simultaneously reduce both false positives and false negatives. If the test is standardized or repeated, teachers teach to and students train for the test in preference to the subject matter. Ideally you want the test to be such that someone who prepares for the test is also preparing for the skills you wish to measure.
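To make that trade-off concrete, here is a toy calculation with entirely invented numbers: suppose competent candidates score roughly N(70, 10) on some test and incompetent ones roughly N(50, 10), and sweep the pass mark:

```python
# Toy numbers only: competent candidates score ~ N(70, 10),
# incompetent ones ~ N(50, 10), and we sweep the pass mark.

from statistics import NormalDist

competent, incompetent = NormalDist(70, 10), NormalDist(50, 10)

for pass_mark in (55, 60, 65):
    false_neg = competent.cdf(pass_mark)        # competent, yet fails
    false_pos = 1 - incompetent.cdf(pass_mark)  # incompetent, yet passes
    print(f"pass mark {pass_mark}: FN {false_neg:.1%}, FP {false_pos:.1%}")

# pass mark 55: FN 6.7%, FP 30.9%
# pass mark 60: FN 15.9%, FP 15.9%
# pass mark 65: FN 30.9%, FP 6.7%
# Raising the bar trades false positives for false negatives;
# with overlapping distributions you cannot drive both to zero.
```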
Are there any existing tests (e.g. bar exam, SAT, CCNA, etc.) that you would point to as suitable for this system? If not, what changes would need to be made to make them suitable?
I will admit that the current system of testing and certification is bad, and I remind myself that I need to watch out for status quo bias here.
“Doctor in Physics from Former Soviet Union, current Taxi Driver in New York” is a problem only because “Doctor in Physics from Current United States, current Taxi Driver in New York” is also a problem. No one doubts the quality of physicists from the former Soviet Union as a group. It’s just that there aren’t enough jobs in physics for all the PhDs we graduate.
Sorry, my bad: I said Doctor when I meant ex-Professor, i.e. someone who used to teach and perhaps even held tenure.
That doesn’t change anything. There are fewer academic physics jobs in the United States than there are qualified people to fill them. Increasing the supply of qualified candidates just makes the discrepancy worse.
I’m actually not sure that “Doctor in Physics, current Taxi Driver” is as much of an issue today as it was 20-30 years ago. Back then I certainly knew people in this or equivalent positions (e.g. a physics PhD working as a taxi driver, auto mechanic, printshop worker, etc.). These days the common path is physics PhD, then software developer.
And yet, in most schools and universities, classrooms are too crowded, and there are too few teachers. How does that work?
In the States? University enrollment grew 37 percent between 2000 and 2010. Student loan reforms increased demand for college education drastically. At the same time, federal and state education funding levels in most places either remained stagnant or dropped relative to inflation, reducing the amount of budget available for hiring faculty. Administrative costs have also skyrocketed in many places, for reasons that are not entirely clear to me.
Most universities attempted to raise tuition and/or hire cheaper faculty (e.g., adjuncts) as stop-gap measures, further weakening the demand for Ph.D.-holding educators. These measures have more or less hit peak effectiveness.
Sounds like a great opportunity for problem-solving. Where to begin?
nitpick: it wasn’t the debt that was AAA-rated, it was certain derivatives based on the debt (which is much less unreasonable: that the AAA rating was undeserved does not follow simply from the fact that the underlying mortgages were sub-AAA).
Employers looking for candidates beyond the entry level tend to be interested in a candidate’s experience and work history far more than in any result of a test. Actuaries may be one exception. Off the top of my head it is the only profession I can think of where promotion within the field is gated by tests, and not just admission to the field. However I am not an actuary, so I may not fully understand how this works.
FYI: I regularly recruit, interview, and hire candidates within my field. I would love to have a test I could give to candidates to tell me who to hire, but I know of no such test. At best, the written and computer-administered tests that exist serve as a bozo filter to eliminate the worst candidates. They do not eliminate all bozos, and they do not distinguish the mildly competent from the ridiculously effective, not even at the entry level.
Absolutely love this idea, just one little comment on the watchdogs.
Why do the watchdogs exist? Because businesses produce demand for accurate measurements of employability, and are willing to pay for a service that offers that information. I’m sure that if a significant number of watchdog companies exist, a business will need to employ a meta-watchdog to determine which watchdog to employ, but the chain does eventually terminate at the employer, who has to personally be the watchdog for which meta-watchdog company to employ.
And that information is easily available (albeit after some lag). Which company reliably recommends the tests that actually produce better employees?
How big is the chain of information when you’re deciding which restaurant to eat at? That’s a perfectly analogous scenario, as far as I can tell.
So the reason employers wouldn’t make the mistake with testers that banks made with moneylenders is… that there isn’t a secondary market of diplomas/graduates? That educational entities with a vested interest in being evaluated well would be a much smaller part of the market of employers than banks with a vested interest in selling bad debt were with respect to the market of debt buyers?
Am I getting this right, or am I just making a spectacle of my ignorance?
I would think businesses wouldn’t make the mistake of employing biased watchdogs. Let’s look at the main incentives:
Businesses want an accurate idea of who is employable.
Schools of course want to appear to produce employable students.
Watchdogs want money.
Now, unless I’m mistaken, being bribed by schools to put them in a favorable light is a temporary bubble, liable to pop when businesses discover that they’ve been lied to. Watchdogs’ main employers would be businesses searching for employees, as far as I’m willing to predict, and failing to successfully out-predict competing watchdogs will result in a watchdog’s failure.
Fraud will happen. Humans tend to lie for short term gains. However, it’s an unsustainable business model in the long run. Fear it greatly as this market is forming, but as a trend it shouldn’t last.
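A toy simulation of that prediction, with every number invented, purely to illustrate the feedback loop I have in mind:

```python
# Toy simulation: businesses reallocate toward the watchdog whose
# recommendations actually worked out. All numbers are invented.

import random

random.seed(0)

accuracy = {"A (bribed)": 0.6, "B (honest)": 0.8}   # hire success rates
share = {"A (bribed)": 0.5, "B (honest)": 0.5}      # initial market share

for year in range(5):
    # Each watchdog's recommendations are tested by 200 hires this year.
    results = {w: sum(random.random() < a for _ in range(200)) / 200
               for w, a in accuracy.items()}
    best = max(results, key=results.get)
    # Businesses slowly migrate toward whichever watchdog performed best.
    for w in share:
        share[w] = min(max(share[w] + (0.1 if w == best else -0.1), 0.0), 1.0)
    print(f"year {year}: {share}")
# The bribed watchdog bleeds customers year after year; the bribe
# buys a temporary bubble, not a sustainable business model.
```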
This is only partially true. Businesses want two things:
1. An accurate metric of who is employable.
2. The ability to co-opt that metric for specific people, in order to promote nepotism without producing the appearance of nepotism.
‘2’ is a rare exception, but it is a VITALLY important one—important enough that any system you design will be doomed to failure if it fails to provide for case 2, or makes its process of providing for case 2 too obvious to outsiders.
EDIT: Okay. I’m going to go ahead and take whatever further karma hit I have to in order to point this out. I lost FORTY-EIGHT karma over the course of five minutes after posting this. Seriously, LW. Fix your goddamn karma system.
I don’t see how nepotism affects the system too horribly. Businesses are still entirely allowed to hire close friends and family despite weaker credentials, along with any other nepotistic behavior that runs counter to the purpose of profit.
The Watchdogs, however, can’t just hand out certification willy-nilly, because they need a universal metric for all competing businesses in a field.
If Business A makes a habit of bribing Watchdog A so that Mr. CEO’s sons can have the same certification as actual students (for the purpose of hiding nepotism), then Watchdog B’s nepotism-free selection methods will locate higher-quality employees more consistently, and garner customers from the majority of companies that are primarily interested in actual profit.
Well, I was speaking from the perspective of someone who has actual experience in writing business credentialing systems. The effect is small enough to come out in the wash most of the time, but it is important enough to require implementation (or at the very least, to prevent robustness in the implementation of goal 1). The problem is that profit-maximization is ultimately less important than status-maintenance in many cases, but the environment requires that profit-maximization appear to be the primary motive.
Ok, I think I understand what you’re saying.
The system needs to be designed in such a way as to allow covert nepotism, since businesses desire the ability to behave nepotistically without appearing to, and this demand is important and consistent enough that the market solution will inevitably include it. And you are also saying that nepotism isn’t too harmful, and that the system we’d attempt to implement should include mechanisms for it.
(God I sometimes hate human behavior. ARGGH)
Is this correct? Have you communicated the correct ideas to me?
And in what ways have business credentialing systems handled this kind of demand historically, and are similar methods applicable here?
From my experience, yes.
In my experience (which is by no means exhaustive), the “recommended” solution has been to ensure that the data entry and transmission processes are performed by humans, rather than fully automated, and that no automated data integrity checks are performed.
Also in my experience, this ‘solution’ does far more harm to the system than the nepotism itself. :(
Ok. So the problem is that designing systems with accessible back doors for nepotism is a larger hit to the effectiveness of the industry than if there were simply blatant nepotism and the systems themselves weren’t compromised.
Is this correct? In your (admittedly non-exhaustive) experience, in what specific ways is the industry harmed?
Yes; that’s my current thesis, anyway.
Because nepotism is being hidden behind a cloud of general inefficiency and unaccountability, systems are designed to be inefficient and unaccountable. This means that plenty of other failure modes ride in on the same train—HR interviewing processes become a nightmare, reporting and recording instances of harassment, fraud, etc. become fraught with roadblocks and potential backlash, and internal and outreach hires based on specific recommendations / vouchsafes become a needlessly complicated process.
That still seems to leave room for (1) one or more broad cycles where most of the watchdog industry grows rich off lies & bias before imploding (as happens with bubbles) or (2) a steadily growing industry with a continually replenished proportion of fraudulent watchdogs that pop into existence, make money from lies, then get swooped on (as happens with consumer fraud).
These seem like generic enough problems, with enough past incidences, that studies of them have probably already been done. Do you happen to know what the existing solutions to these problems are, and how effective they are?
or, alternatively, how harmful the problems are, and whether an imposed solution tends to be cost-effective or not.