As a warm-up, and to indicate how I intend to prompt discussion (subject to the group’s feedback), I have posted a summary of the Preface. (ETA: one implication of this method is that it’s up to participants to check back on the post from time to time to see whether new summaries have been posted, and then, after reading the parts summarized, come back and answer this comment. Does that work?)
I will start work today on a summary of as much of Chapter 1 as might make for a nice bite-sized chunk to discuss, and post that in a few days, or sooner if the discussion on the Preface dies down quickly.
Discussion question for the Preface: can you think of further examples of the type of “old ideas” Jaynes refers to?
This American Life, “81 Words” (http://www.thisamericanlife.org/radio-archives/episode/204/81-Words): For some time, the only homosexuals who were studied were in prisons or insane asylums. It took a good bit of work, and some risk, to get the word out that there were homosexuals living non-pathological lives.
However, I’m not sure this is the sort of thing Jaynes had in mind—the old ideas are shaping which data gets collected, so it’s not an example of re-examining the same data set. On the other hand, no amount of study of inmates in prisons and insane asylums could have established that there were homosexuals living ordinary lives.
No, I think this is right on target, and it reminds me of Yvain’s post on “disease”.
Cataloging a particular behavior as a pathology leads to “hidden inferences”, and no amount of new data can lead to correct conclusions without first challenging whichever of those hidden inferences happen to be false. We could ask, “What data are we failing to collect on the causes of obesity, owing to our prevailing model of obesity?”
We could also ask, “What data are we failing to collect about the risks of intentional weight loss, because of our prevailing model of obesity?”
I wonder whether Jaynes’ statement is really true. Here is an example that is on my mind because I’m reading the (thus far) awesome book The Making of the Atomic Bomb. Apologies if I get details wrong:
In the 1930s, a lot of work was done on neutron bombardment of uranium. At some point, Fermi fired slow-moving neutrons at uranium and got a bunch of interesting reaction products, which he concluded were most plausibly transuranic elements. I believe he came to this conclusion because the models of the day discounted the hypothesis that a slow-moving neutron could do anything but release a “small” particle like a helium nucleus, and furthermore there was experimental work ruling out the elements just below uranium on the periodic table.
Later, some odd experimental data from Joliot and Curie seemed inconsistent with the prevailing model. Hahn and Strassmann didn’t believe those results, so they tried to replicate them and found similar anomalies. A careful chemical analysis of the reaction products of uranium bombardment turned up elements like barium, much lower on the periodic table. Meitner and Frisch then provided a new model, which turned out to be right.
So here was data that, analyzed under the old models, seemed implausible. The data was questioned, then replicated, studied, and finally understood. The result was that the old model had to be cast aside for something new: the data was incompatible with the old model, or at least implausible enough under it, that a new model had to be created.
Isn’t this the way knowledge often goes? New data comes along and blows up old ideas because it is inconsistent with, or implausible under, the old model. Does this jibe with Jaynes’ statement?
re: old ideas
I can’t really figure out what he means by that. His example with dangerous doses of artificial sweeteners seems to be about asking the wrong question. It seems logical that no amount of data can get you the right answer if you don’t ask the right questions.
He goes on about mutilating datasets, which seems a sin to me, with gigabytes of storage on my PC. But when the medium of storage is paper, data does get mutilated. Consider a doctor writing up an anamnesis: the patient talks on and on, but only what the doctor considers relevant gets written down. That seems like a perfect example of a mutilated dataset, and of what Jaynes was talking about: if the doctor has a wrong model in mind while collecting data, (s)he is more likely not to record important information.
I heard that the people at CERN don’t let a bit go unstored. But are there variables not measured at all, due to our existing models of the universe?
I believe Jaynes was implying that, since the experimenters didn’t have a threshold model in mind, the experiment didn’t measure a broad enough range of doses to distinguish a linear response from a threshold response. For example, if the only tests of the sweetener were at doses high enough to produce harmful effects, it might be impossible to derive the correct model from that data alone.
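A minimal sketch of that point (my own, not from Jaynes; the threshold, slope, noise level, and dose ranges are all made-up numbers): simulate a true threshold dose-response, then compare how well a straight line and a threshold model fit when we sample only harmful-range doses versus the full range.

```python
# Hypothetical illustration: why a restricted dose range can't distinguish
# a linear dose-response from a threshold dose-response.
import numpy as np

rng = np.random.default_rng(0)

def true_response(dose, t=2.0, slope=1.5):
    # Assumed "true" threshold model: no effect below dose t, linear above it.
    return slope * np.maximum(dose - t, 0.0)

def linear_sse(dose, resp):
    # Ordinary least squares fit of resp ~ a*dose + b; return residual SSE.
    a, b = np.polyfit(dose, resp, 1)
    return float(np.sum((resp - (a * dose + b)) ** 2))

def threshold_sse(dose, resp):
    # Grid search over candidate thresholds t, with the closed-form
    # least-squares slope above each; return the best residual SSE.
    best = np.inf
    for t in np.linspace(0.0, dose.max(), 200):
        x = np.maximum(dose - t, 0.0)
        denom = float(np.dot(x, x))
        slope = float(np.dot(x, resp)) / denom if denom > 0 else 0.0
        best = min(best, float(np.sum((resp - slope * x) ** 2)))
    return best

for lo, hi, label in [(3.0, 10.0, "harmful doses only"),
                      (0.0, 10.0, "full dose range")]:
    dose = rng.uniform(lo, hi, 50)
    resp = true_response(dose) + rng.normal(0.0, 0.3, dose.size)
    print(f"{label:18s}  linear SSE = {linear_sse(dose, resp):6.2f}   "
          f"threshold SSE = {threshold_sse(dose, resp):6.2f}")
```

Under these assumptions, both models fit the restricted-range data about equally well, since the response is effectively linear above the threshold; over the full range, the straight line can’t capture the flat region below the threshold, so its residual error is much larger. In other words, the restricted data simply cannot tell the two models apart.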
Would also like to participate.
I’d like to be part of the group. m3lani3all3n would be my Google Wave, if that’s how we do it...