Unfortunately no, but from your description it sounds a lot like the theory of mind in General Semantics.
I think it’s similar, but Lakoff focuses more on how things are abstracted away. For example, because in childhood affection is usually associated with warmth (e.g. through hugs), the different areas of your brain that code for those things become linked (“neurons that fire together, wire together”). This then becomes the basis of a cognitive metaphor, Affection Is Warmth, such that we can also say “She has a warm smile” or “He gave me the cold shoulder” even though we’re not talking literally about body temperature.
Similarly, in Where Mathematics Comes From: How The Embodied Mind Brings Mathematics Into Being, he summarises his chapter “Boole’s Metaphor: Classes and Symbolic Logic” thusly:
There is evidence … that Container schemas are grounded in the sensory-motor system of the brain, and that they have inferential structures like those just discussed. These include Container schema versions of the four inferential laws of classical logic.
We know … that conceptual metaphors are cognitive cross-domain mappings that preserve inferential structure.
… [W]e know that there is a Classes are Containers metaphor. This grounds our understanding of classes, by mapping the inferential structure of embodied Container schemas to classes as we understand them.
Boole’s metaphor and the Propositional Logic metaphor have been carefully crafted by mathematicians to mathematicize classes and map them onto propositional structures.
The symbolic-logic mapping was also crafted by mathematicians, so that propositional logic could be made into a symbolic calculus governed by “blind” rules of symbol manipulation.
Thus, our understanding of symbolic logic traces back via metaphorical and symbolic mappings to the inferential structure of embodied Container schemas.
That’s what I was getting at above, but I’m not sure I explained it very well. I’m less eloquent than Mr. Lakoff is, I think.
Yes. Since a group of maps can be seen as a set of things in itself, it can be treated as a valid territory. In logic there are also map/territory loops, where the formulas themselves become the territory mapped by those same formulas (akin to talking in English about the English language). This trick is used, for example, in Gödel’s and Tarski’s theorems.
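To make that loop concrete, here is a toy sketch in Python (my own illustration, not anything from Gödel or Tarski): a quine, a program whose output is exactly its own source text, so the map and the territory coincide. Gödel’s proof uses essentially this diagonal trick, just with formulas instead of programs.

```python
# The two lines below are a quine: run them, and they print exactly
# those two lines. The program's text is both the description (map)
# and the thing described (territory).
s = 's = %r\nprint(s %% s)'
print(s % s)
```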
Hmm interesting. I should become more familiar with those.
Yes. Basically the Bayesian definition is more inclusive: e.g. there is no definition of the probability of a single coin toss in the frequency interpretation, but there is in the Bayesian one. Also, in the Bayesian take on probability the frequentist definition emerges as a natural by-product. Plus, the Bayesian framework untangled a lot of frequentist statistics and introduced more powerful methods.
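As a minimal sketch of what that inclusiveness buys you (my own Python illustration, assuming a simple Beta-Bernoulli coin model, not anything from the thread): the Bayesian machinery assigns a probability to a single toss even with zero data, and as tosses accumulate its prediction converges to the long-run frequency, so the frequentist answer falls out as a by-product.

```python
import random

# A coin with unknown bias, modelled with a Beta(1, 1) (uniform) prior.
# alpha and beta act as pseudo-counts of heads and tails seen so far.
alpha, beta = 1.0, 1.0

# Before any data, the probability of heads on a single toss is already
# defined: the posterior predictive alpha / (alpha + beta) = 0.5.
print("P(heads), no data:", alpha / (alpha + beta))

random.seed(0)
true_bias = 0.7
heads = 0
n = 10_000
for _ in range(n):
    toss = random.random() < true_bias  # True = heads
    heads += toss
    alpha += toss
    beta += not toss

# After many tosses the Bayesian prediction and the observed long-run
# frequency agree: the frequentist definition emerges as a by-product.
print("P(heads), posterior:", alpha / (alpha + beta))
print("long-run frequency: ", heads / n)
```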
Oh right for sure, another historical example would be “What’s the probability of a nuclear reactor melting down?” before any nuclear reactors had melted down. But I mean, even if the Bayesian definition covers more than the frequentist definition (which it definitely does), why not just use both definitions and understand that one application is a subset of the other application?
The first two chapters of Jaynes’s book, a pre-print version of which is available online for free, do a great job of explaining Cox’s theorem and using it to derive Bayesian probability. I urge you to read them to fully grasp this point of view.
Right, I think I found the whole thing online, actually. And the first chapter I understood pretty much without difficulty, but the second chapter gave me brainhurt, so I put it down for a while. I think it might be that I never took calculus in school? (something I now regret, oddly enough for the general population) So I’m trying to become stronger before I go back to it. Do you think that getting acquainted with Cox’s Theorem in general would make Jaynes’s particular presentation of it easier to digest?
But I mean, even if the Bayesian definition covers more than the frequentist definition (which it definitely does), why not just use both definitions and understand that one application is a subset of the other application?
You’ll have to ask a frequentist :) Bayesians use both definitions (even though they call long-run frequency… well, long-run frequency), but frequentists refuse to acknowledge the Bayesian definition of probability and Bayesian methods.
but the second chapter gave me brainhurt, so I put it down for a while. I think it might be that I never took calculus in school? (something I now regret, oddly enough for the general population) So I’m trying to become stronger before I go back to it. Do you think that getting acquainted with Cox’s Theorem in general would make Jaynes’s particular presentation of it easier to digest?
I skipped the whole derivation too; it was not that interesting. What is important is at the end of the chapter: developing Cox’s requirements leads to the product and negation rules, and that’s all you need.
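For reference, the two rules in question (my notation, not a quote from Jaynes) are the product rule and the negation (sum) rule; every other manipulation in probability theory can be built from them:

```latex
% Product rule: factor a joint plausibility into conditionals.
P(A B \mid C) = P(A \mid B C)\, P(B \mid C)
% Negation (sum) rule: a proposition and its denial exhaust the options.
P(A \mid C) + P(\bar{A} \mid C) = 1
```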
Hooray, I understand some things!