Don’t dismiss what non-LWers are trying to say just because they don’t phrase it as a LWer would. “Didn’t offer real accreditation” means that they 1) are skeptical about whether the plan teaches useful skills (doing a Bayesian update on how likely that is, conditional on the fact that you are not accredited), or 2) are skeptical that the plan actually has the success rate you claim (based on their belief that employers prefer accreditation, which ultimately boils down to Bayesianism as well).
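To make that update concrete, here is a minimal sketch with invented numbers; the prior and the likelihoods are purely illustrative, not claims about actual training programs:

```python
# Hypothetical Bayesian update: P(scam | no accreditation).
# All of these numbers are invented for illustration.
prior_scam = 0.10                # prior belief that a given program is a scam
p_no_accred_given_scam = 0.95    # scams rarely bother with accreditation
p_no_accred_given_legit = 0.40   # many legitimate programs also lack it

# Bayes' rule: P(scam | E) = P(E | scam) * P(scam) / P(E)
p_no_accred = (p_no_accred_given_scam * prior_scam
               + p_no_accred_given_legit * (1 - prior_scam))
posterior_scam = p_no_accred_given_scam * prior_scam / p_no_accred
print(f"P(scam | no accreditation) = {posterior_scam:.2f}")  # ~0.21
```

With these made-up inputs, lack of accreditation roughly doubles the suspicion, which is the same move the skeptics are making even if they never state it in these terms.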
Furthermore, it’s hard to estimate the probability that something is a scam. I can’t think of any real-world situations where I would estimate (with reasonable error bars) that something has a 50% chance of being a scam. How would I be able to tell the difference between something with a 50% chance of being a scam and a 90% chance of being a scam?
I don’t think that they’re thinking rationally and just saying things wrong. They’re legitimately thinking wrong.
If they’re skeptical about whether the place teaches useful skills, the evidence that it actually gets people jobs should remove that worry entirely. Their point about accreditation usually came up after I had cited the program’s job statistics. My impression was that they were just reaching for their cached thoughts about dodgy-looking training programs, without considering the evidence that this one worked.
Their point about accreditation usually came up after I had cited the program’s job statistics.
If their point about accreditation was meant to indicate that they are skeptical that the plan leads to useful skills or to getting a job, then having them bring it up when you cite the job statistics is entirely expected. They brought up evidence against getting a job when you gave them evidence for getting one.
(And if you’re thinking that job statistics are such good evidence that even bringing up something correlated with lack of jobs doesn’t affect the chances much, that’s not true. There are a number of ways in which job statistics can be poor evidence, and those people were likely aware that such ways exist.)
To elaborate a bit, one way of producing deceptive figures that I’ve heard about is to count successes only as a percentage of the people who complete the entire program. It makes sense to do this to some degree, since you don’t want to count people who dropped out after a day, but depending on how the program is run, it’s not hard to weed out a lot of people partway through and artificially inflate your success rate.
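Here is a back-of-the-envelope illustration of how much that denominator choice can matter; the cohort sizes are entirely made up:

```python
# Hypothetical cohort; all figures are invented for illustration.
enrolled = 100    # people who started the program
weeded_out = 40   # pushed out partway through ("didn't complete")
completed = enrolled - weeded_out
placed = 50       # completers who found jobs

rate_vs_completers = placed / completed  # the number that gets advertised
rate_vs_enrolled = placed / enrolled     # the number an applicant cares about
print(f"{rate_vs_completers:.0%} of completers placed")  # 83%
print(f"{rate_vs_enrolled:.0%} of enrollees placed")     # 50%
```

The program did exactly the same thing in both calculations; only the denominator changed.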
There’s also the difference between the percentage of people who get jobs and the percentage who keep them, and the possibility that the past performance covers a time period when the job market was better and won’t generalize to your chance of getting a job from the program now. Not to mention that the success rate partly depends on who takes the course. If most of the people who take it are, say, high school graduates with high aptitude but no money for college, their success rate might not translate to that of an adult who moves from another area.
And there’s the possibility of overly literal wording. Has everyone who “got a job” gotten one that uses a skill learned during the program? Is an “average salary” a mean or a median?
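The mean-versus-median distinction isn’t pedantry; a single outlier can pull a mean well above what a typical graduate earns. A toy example with fabricated salaries:

```python
from statistics import mean, median

# Hypothetical graduate salaries; one outlier does most of the work.
salaries = [45_000, 50_000, 52_000, 55_000, 60_000, 250_000]
print(f"mean:   ${mean(salaries):,.0f}")    # $85,333
print(f"median: ${median(salaries):,.0f}")  # $53,500
```

An ad claiming an “average salary” of $85k would be technically true here while describing almost nobody in the cohort.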
Then there’s always the possibility that the success rate is simply false. Sure, false advertising is illegal, but with no oversight, how’s anyone supposed to find that out?
I don’t know specifically about App Academy, but I’ve found a Hacker News thread with some speculation that these “coding bootcamps” might inflate their statistics by holding selective enrollment interviews that screen out most people who aren’t already employable, and/or by hiring their own students as instructors after they complete the program, so that they can be counted as employed, even if only for a short time.