My not understanding how something could have happened is evidence for it not having happened.
If you understand how something could happen, how strong is that evidence that it happened?
Some. I tell you that I mixed chemicals X and Y to make Z. You are initially 99% confident that X and Y don’t make Z and so think I’m probably lying. Then you read that something in the air in the state in which I live causes X and Y to create Z. Won’t your estimate of my having told the truth go up?
I would be even more certain that X and Y alone don’t make Z, and that you were mistaken. I would believe that you mixed X, Y, and a, and made Z, where a is the characteristic of the air of which I was previously unaware.
There is the very weak effect that I am more likely to understand how something happens if it is possible than if it is impossible, and things which are possible are more likely to happen than things which are impossible. Therefore I am more likely to be confused, in general, by things that didn’t happen than by things that did; but I am not more likely to be confused by things that didn’t happen but are possible than by things which did happen and are possible.
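The update being argued about above can be sketched as a toy Bayes calculation. All of the numbers here are hypothetical placeholders, chosen only to show the direction of the update the questioner is describing:

```python
# Toy Bayesian update for the X + Y -> Z story (all numbers hypothetical).
# H = "the speaker told the truth about producing Z".
prior_truth = 0.01            # initially 99% confident the claim is false

# E = "something in the air (call it a) lets X and Y produce Z".
# Learning E makes the claimed observation more plausible if H is true.
p_E_given_truth = 0.9         # a truthful report fits well with the air factor
p_E_given_lie = 0.1           # a fabricated report gains little support from E

posterior = (p_E_given_truth * prior_truth) / (
    p_E_given_truth * prior_truth + p_E_given_lie * (1 - prior_truth)
)
print(round(posterior, 3))    # higher than the 0.01 prior
```

Under these made-up likelihoods the estimate of truth-telling rises from 1% to roughly 8%, which is the questioner’s point; the respondent’s reply amounts to redistributing that probability mass onto “X and Y and a make Z” instead.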
My biggest doubt comes from the fact that it should be trivial to reference at least one grant which is literally as bad as the example given; this could be done without compromising anonymity, given that FOIA requests can originate from any source. Because the details of one grant as bad as the iPod/makeover grant would be fairly weak evidence that almost all grants are horrible, the absence of any such grant in my research is fairly strong evidence that it is not the case that almost all grants in the nation are horrible.
Which is not to say that there couldn’t be districts where horrible grants are the norm, or clearly fraudulent grants.
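The asymmetry claimed here (weak evidence if found, fairly strong evidence if absent) can be made concrete with another toy calculation, again with hypothetical numbers:

```python
# Sketch of the absence-of-evidence argument (all numbers hypothetical).
# H = "almost all grants are as bad as the iPod/makeover example".
prior = 0.5

# If H were true, my research would very likely have turned up at least
# one citable equally-bad grant; if H is false, it still might have.
p_found_given_H = 0.95
p_found_given_not_H = 0.30

# What was actually observed is "no such grant found": update against H.
posterior = ((1 - p_found_given_H) * prior) / (
    (1 - p_found_given_H) * prior + (1 - p_found_given_not_H) * (1 - prior)
)
print(round(posterior, 3))    # well below the 0.5 prior
```

With these placeholder likelihoods the probability of H falls from 50% to under 10%, illustrating why a diligent but empty search counts as substantial evidence against the sweeping claim, even though one found example would only weakly support it.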
Finally, the biggest inconsistencies I found in the original post were
A) That an apparently literate and intelligent person thought that a state-standardized test was an accurate measure of literacy,
B) That a school with a test results problem would still have a 75% pass rate among lowest-class students, and
C) That he never mentions being told by his supervisor that his job was specifically not to evaluate whether the goals were appropriate (that being the job of the department issuing the grant, prior to issuing the grant; if they said that giving students iPods was the goal of the grant, it was sufficient), but only to evaluate whether the goals written into the grant were met. Instead the author describes the conversation as being one of ‘colluding’.
D) (weak) By law, in every state, schools may not give out lists of students who are on free/reduced meal programs, nor of students who failed tests. It is possible that the administrator in question simply violated the law; that the data was provided in a technically non-personally-identifiable manner, such as the student ID numbers that qualified for meal programs; or some combination of the two.
I think that (A) is true because of Spearman’s g. The evidence for g is overwhelming.
Did you just claim that g correlates well enough with two specific factors (s) to measure one of them (the ability to determine the answer expected by the writer of a test) and provide results for a different one (literacy) in the general sense?
Because my position is that most standardized tests measure a combination of the intended subject and the ability of the taker to figure out the test writer. Part of this comes from my observed ability to consistently outperform people with an equal or better knowledge of the subject being tested on many different tests, and most of it comes from my ability to explicitly recognize the test author’s thought patterns in determining which options were available in multiple-choice tests, and to figure out the correct answer to a large number of their questions by looking only at the possible answers.
Yes. I did a huge amount of reading on IQ to write this.
Great. We can use a test of any skill to accurately measure any other skill now, right? It is impossible for someone to have great math skills and poor English literacy, because of general intelligence?
That requires more extraordinary evidence for me to believe than I have seen. What is the most extraordinary citable data that you encountered in your research indicating that specific skills are in general interchangeable?
Just do research on IQ. And we’re talking correlations, so your use of the word “impossible” is incorrect.
I asked if you were making the claim that you could, in general, measure how well someone could guess the teacher’s password (on a test for any subject) and get results which measured their literacy (implied: or calculus) skills. You indicated yes.
If that is the case, it is impossible for people in general to have different literacy and calculus skills, because the same test would generally measure both skills.
Did you instead want to make the claim that people who are poor at guessing passwords are in general worse at a skill than their performance on a hypothetical test which measured only that skill would indicate, and vice versa for people who are good at guessing passwords?
My default case for considering a typical ‘official’ test is that it measures mostly the ability to guess the password, and only somewhat the mastery of the subject. Based on my experience with IQ tests, there is a significant factor of password guessing in them as well.
No disrespect intended, but I would rather not go over the literature on IQ with you. If you are interested, I do discuss this literature in the book, in Chapter 7, which is called “What IQ Tells You”.
Lots of tasks, such as the ability to repeat a sequence of numbers backwards, are highly correlated with IQ.
In general, though, the ability to guess the teacher’s password is not a measure of literacy. To return to the point that started this line of discussion:

A) That an apparently literate and intelligent person thought that a state-standardized test was an accurate measure of literacy
Suppose there were a test with a single score that was 50% based on password guessing and 50% based on literacy: it does not follow, from the fact that password guessing and literacy both correlate with intelligence, that this test measures literacy.
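This can be checked with a small simulation of a Spearman-style model. The loadings below (0.7 on g, 0.7 on an independent specific factor for each skill) are hypothetical, chosen only so that both skills correlate with g while still differing person to person:

```python
import random

random.seed(0)
n = 10_000

# Hypothetical model: each skill = 0.7 * g + 0.7 * (independent s factor).
lit, pwd = [], []
for _ in range(n):
    g = random.gauss(0, 1)
    lit.append(0.7 * g + 0.7 * random.gauss(0, 1))   # literacy
    pwd.append(0.7 * g + 0.7 * random.gauss(0, 1))   # password guessing

def corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# The hypothetical 50/50 composite test score from the text.
score = [0.5 * l + 0.5 * p for l, p in zip(lit, pwd)]

print(round(corr(lit, pwd), 2))    # skills correlate, but far from 1
print(round(corr(score, lit), 2))  # composite only partially tracks literacy
```

Under this model the two skills correlate at roughly 0.5, so plenty of simulated people are strong at one and weak at the other, and the 50/50 composite correlates with literacy well short of 1. Both skills loading on g is compatible with the composite being a noisy, biased measure of literacy, which is the point being made.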
Password guessing is a specific skill, like literacy or calculus. Those skills develop faster in people with high intelligence, but they can be developed to a high level in almost anybody. My experience with education makes me believe that the skill of password guessing is being intentionally taught to the detriment of the skills of literacy or math, precisely because it results in a greater increase in test scores on the tests used to evaluate the teachers.