A counterpoint to the “developing better reasoning skills” point above: it’s known that transfer of learning from one domain to another is often very low.
In my anecdotal experience, math is the most transferable of all skills I’ve learnt.
One of the barriers I run into when I delve into physics is that I have a very rationalist approach to math. I hate terminology and want as little of it as possible in my reasoning. Physics has rather high barriers in that respect: academic physicists don’t much care for mathematical rigour, and don’t precisely specify, say, the abstract algebraic axioms of the structures they are using. But once I reach the point of being able to specify what structure lies behind a physical theory, I can usually intuit it readily.
Physics is domain knowledge compared to mathematical reasoning ability.
I have a bad habit of stating things and then explaining them. I meant that my approach is rationalist in that I:
Hate terminology. Give me axioms, definitions and theorems; then we can discuss them in words later.
Build up my intuitions, and especially weed out the useless ones. I don’t really do proofs unless necessary, and sometimes I even skimp on the formal details, using my connectionist intelligence to its full potential.
I try to explore as much as possible, and look for people to learn from. Proving things is a question of strategy, and many a Nobel laureate has had mentors who were Nobel laureates too.
The Human’s Guide to Words sequence and the concept of “words should refer to something” pertain to the first item.
The Quantum Mechanics sequence and the concept of “It all adds up to normality” pertain to the second item.
The third is based on an inversion of the idea behind the Sequences in general: that I need giants whose shoulders I can stand on. I forget exactly where it is said that the most valuable skills in maths are non-verbal.
I have attempted to disprove these three points through reflexive gedankenexperiments and discourse with more experienced CS and mathematics students, and I have found that doing so is difficult and long-winded, and that the counterarguments are weak.
I also recognize that maths has tremendous instrumental value for the work I plan to do in the future.
All of these are basic Bayesian skills, and I have met several people (CS, maths, and physics students) who were doing things adverse to understanding maths, which could be fixed by implementing any of the above strategies.
The word “rational” does not mean “was discussed in the Sequences” and certainly doesn’t mean “was analogous to something that was discussed in the Sequences”.
I relish the irony of your belief that “words should refer to something” when you readily inflate the meaning of “rational” and “bayesian”.
I have attempted to disprove these three points through reflexive gedankenexperiments and discourse with more experienced CS and mathematics students, and I have found that doing so is difficult and long-winded, and that the counterarguments are weak.
This indicates to me that you’ve assumed I’m criticizing the substance of your advice. This is a false assumption.
Do we agree that, as a member of the species Homo sapiens, you can implement more or less winning strategies, congruent with the utility-concept of ‘making the world a better place’, and that there is an absolute ranking criterion for how good said strategies are?
Do we agree that a very common failure mode of Homo sapiens is statistical biases in their Bayesian cognition, and that these biases have a clear causal origin in our evolutionary history?
Do we agree that said biases hamper Homo sapiens’ ability to implement winning strategies in the general case?
Do we agree that the writings of Eliezer Yudkowsky and the content of this site as a whole describe ways to partially get around these built-in flaws of Homo sapiens?
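A concrete instance of the “statistical biases” in the second question is base-rate neglect: people take a test’s accuracy as the answer and ignore the prior. A minimal sketch, with hypothetical numbers (1% prevalence, 95% sensitivity, 5% false-positive rate) chosen purely for illustration:

```python
# Base-rate neglect: intuition says a positive result from a "95% accurate"
# test means ~95% probability of the condition. Bayes' theorem disagrees
# once the 1% prior is accounted for. All numbers are hypothetical.

prior = 0.01        # P(condition)
sensitivity = 0.95  # P(positive | condition)
false_pos = 0.05    # P(positive | no condition)

# P(positive), by the law of total probability
evidence = sensitivity * prior + false_pos * (1 - prior)

# Bayes' theorem: P(condition | positive)
posterior = sensitivity * prior / evidence

print(f"P(condition | positive) = {posterior:.3f}")  # ~0.161, not 0.95
```

The posterior is roughly 16%, an order of magnitude below the intuitive answer, which is the kind of systematic miscalibration the question is pointing at.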
I am fairly confident that a close reading of my comments will find the interpretation of ‘rational’ to be synonymous with ‘winning-strategy-implementation’, and ‘bayesian’ to be synonymous with ‘lesswrong-site-member/sequence-implementor/bayes-conspiracist’ (in the case that it refers to a person) or ‘Bayesian inference’ (in the case that it refers to cognitive architectures), and I am tempted to edit them as such.
I am nonplussed at your attempt to lull readers into agreeing with you by asking a lot of rhetorical questions. It’d have been less wrong to post just the last paragraph:
I am fairly confident that a close reading of my comments will find the interpretation of ‘rational’ to be synonymous with ‘winning-strategy-implementation’, and ‘bayesian’ to be synonymous with ‘lesswrong-site-member/sequence-implementor/bayes-conspiracist’ (in the case that it refers to a person) or ‘Bayesian inference’ (in the case that it refers to cognitive architectures), and I am tempted to edit them as such.
The missing link in the argument here is how your examples are, in fact, winning strategies. You claimed some superficial resemblance to things in the Sequences, and that you did better than some small sample of humans.
I disapprove of this expanded definition of “bayesian” on the basis that it conflates honest mathematics with handwaving and specious analogies. For example, “it all adds up to normality” is merely a paraphrase of the correspondence principle in QM and does not have any particular legislative force outside that domain.
If mathematical details matter, they should be specified (or be clear anyway; e.g., you don’t define “real numbers” in a physics paper). Physics can require some domain knowledge, but knowledge alone is useless: you need the same general reasoning ability as in mathematics to do anything, in both experimental and theoretical physics.
In fact, many physics problems are solved by reducing them to mathematical problems (that is the physics part) and then solving those mathematical problems (still considered “solving the physical problem”, but purely mathematics).
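A toy illustration of that reduction, with hypothetical numbers: finding a projectile’s time of flight is “physics” only in setting up the model; extracting the answer is plain algebra.

```python
# Physics step: model vertical motion as y(t) = v0*t - 0.5*g*t**2
# and ask when the projectile returns to y = 0.
# Math step: solve the resulting quadratic, t * (v0 - 0.5*g*t) = 0,
# and take the nonzero root.

g = 9.81   # m/s^2, standard gravity
v0 = 20.0  # m/s, hypothetical launch speed, straight up

t_flight = 2 * v0 / g  # nonzero root of the quadratic

print(f"time of flight = {t_flight:.2f} s")  # -> 4.08 s
```

Everything after the model is written down is mathematics; the physical content is exhausted by the choice of equation.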
In my anecdotal experience, math is the most transferable of all skills I’ve learnt.
Add physics to that.
One of the barriers I run into when I delve into physics is that I have a very rationalist approach to math. I hate terminology and want as little of it as possible in my reasoning. Physics has rather high barriers in that respect: academic physicists don’t much care for mathematical rigour, and don’t precisely specify, say, the abstract algebraic axioms of the structures they are using. But once I reach the point of being able to specify what structure lies behind a physical theory, I can usually intuit it readily.
Physics is domain knowledge compared to mathematical reasoning ability.
What does this mean? You only attack problems with high VoI?
I have a bad habit of stating things and then explaining them. I meant that my approach is rationalist in that I:
Hate terminology. Give me axioms, definitions and theorems; then we can discuss them in words later.
Build up my intuitions, and especially weed out the useless ones. I don’t really do proofs unless necessary, and sometimes I even skimp on the formal details, using my connectionist intelligence to its full potential.
I try to explore as much as possible, and look for people to learn from. Proving things is a question of strategy, and many a Nobel laureate has had mentors who were Nobel laureates too.
How are those things particularly rationalist? Sounds to me you’re just using the word in some inflationary sense.
The Human’s Guide to Words sequence and the concept of “words should refer to something” pertain to the first item.
The Quantum Mechanics sequence and the concept of “It all adds up to normality” pertain to the second item.
The third is based on an inversion of the idea behind the Sequences in general: that I need giants whose shoulders I can stand on. I forget exactly where it is said that the most valuable skills in maths are non-verbal.
I have attempted to disprove these three points through reflexive gedankenexperiments and discourse with more experienced CS and mathematics students, and I have found that doing so is difficult and long-winded, and that the counterarguments are weak.
I also recognize that maths has tremendous instrumental value for the work I plan to do in the future.
All of these are basic Bayesian skills, and I have met several people (CS, maths, and physics students) who were doing things adverse to understanding maths, which could be fixed by implementing any of the above strategies.
The word “rational” does not mean “was discussed in the Sequences” and certainly doesn’t mean “was analogous to something that was discussed in the Sequences”.
I relish the irony of your belief that “words should refer to something” when you readily inflate the meaning of “rational” and “bayesian”.
This indicates to me that you’ve assumed I’m criticizing the substance of your advice. This is a false assumption.
Great. Now you have really confused me.
Do we agree that, as a member of the species Homo sapiens, you can implement more or less winning strategies, congruent with the utility-concept of ‘making the world a better place’, and that there is an absolute ranking criterion for how good said strategies are?
Do we agree that a very common failure mode of Homo sapiens is statistical biases in their Bayesian cognition, and that these biases have a clear causal origin in our evolutionary history?
Do we agree that said biases hamper Homo sapiens’ ability to implement winning strategies in the general case?
Do we agree that the writings of Eliezer Yudkowsky and the content of this site as a whole describe ways to partially get around these built-in flaws of Homo sapiens?
I am fairly confident that a close reading of my comments will find the interpretation of ‘rational’ to be synonymous with ‘winning-strategy-implementation’, and ‘bayesian’ to be synonymous with ‘lesswrong-site-member/sequence-implementor/bayes-conspiracist’ (in the case that it refers to a person) or ‘Bayesian inference’ (in the case that it refers to cognitive architectures), and I am tempted to edit them as such.
I am nonplussed at your attempt to lull readers into agreeing with you by asking a lot of rhetorical questions. It’d have been less wrong to post just the last paragraph.
The missing link in the argument here is how your examples are, in fact, winning strategies. You claimed some superficial resemblance to things in the Sequences, and that you did better than some small sample of humans.
I disapprove of this expanded definition of “bayesian” on the basis that it conflates honest mathematics with handwaving and specious analogies. For example, “it all adds up to normality” is merely a paraphrase of the correspondence principle in QM and does not have any particular legislative force outside that domain.
I’ll concede the point, partially because I tire of this discourse.
If mathematical details matter, they should be specified (or be clear anyway; e.g., you don’t define “real numbers” in a physics paper). Physics can require some domain knowledge, but knowledge alone is useless: you need the same general reasoning ability as in mathematics to do anything, in both experimental and theoretical physics.
In fact, many physics problems are solved by reducing them to mathematical problems (that is the physics part) and then solving those mathematical problems (still considered “solving the physical problem”, but purely mathematics).
Logic even more so.