It’s not at all clear to me how Bayes’ rule leads to things like avoiding bottom-line reasoning. It seems plausible to me that for a lot of people there is enough hand-waving that the actual words put forward don’t do the majority of the lifting. That is, a person talking about epistemology refers to Bayes’ rule and explains why habits like avoiding bottom-line reasoning are good, but they don’t materially need Bayes’ rule for that. There might be belief in entailment rather than actual reproducible / objectively verifiable entailment. If I wave a giant “2+2=4” flag while robbing a bunch of banks, in one sense that fact has caused theft and in another it has not. Nor is it clear that anyone who robs banks must believe “2+2=4”.
You can avoid these things while knowing nothing of the Way of Bayes, but the Way of Bayes shows the underlying structure of reality that unifies the reasons for all of them being faults of reasoning.
I am unsure whether the dogmatic tone is meant sarcastically or played straight.
One could argue that God is a convenient and unified way of thinking about what is moral. And it is quite common for prisoners to find great utility in faith. But beliefs with the structure “godlessness is dangerous, for then there would be no right and wrong” cloud thinking a lot and tie one’s beliefs to a specific ontology. Are beliefs like “it’s only good insofar as it aligns with God” and “it’s reasonable only insofar as it aligns with Bayes” meaningfully structurally different?
What if there is a deeper, even more unified way of thinking about why certain cognitive moves are good? What principles do we use to verify that the Way of Bayes checks out?
I am being quite straight, although consciously adopting some of Eliezer’s rhetorical style.
Are beliefs like “it’s only good insofar as it aligns with God” and “it’s reasonable only insofar as it aligns with Bayes” meaningfully structurally different?
They are meaningfully different. The Way of Bayes works; the Way of God does not. “Works” means “capable of leading one’s beliefs to align more closely with reality.” For more about this, it’s all there in the Sequences.
The analogy I was shooting for was “a thing is X only insofar as it approximates Y”, where in one case X = good, Y = God, and in the other X = reasonable, Y = Bayes. The case of X = reasonable, Y = God doesn’t enter into it (although I guess the stance that there is a divine gatekeeper to truth isn’t a totally alien one, but I was not referencing it here).
Part of the reason for the rhetorical style, as far as I understood it, is to make the silly things jump out as silly, so as not to vest too much weight in it.
There is the additional issue that rationalists are not particularly winning, so the claim that “one is broken, one is legit” can be questioned. Because of the heavy redefinition or questioning of definitions, it can be hard to verify that epikunfukas succeed by any metric other than the one defined by their teacher. This is despite one of the central points being reliance on external measures of success. If you fervently follow a teacher who teaches that you should not follow your teacher blindly, you are still fervently following. That a model describes itself as bringing two variables close to each other doesn’t say whether it is a good model (“I am a true model” is not informative).
Part of the reason for the rhetorical style, as far as I understood it, is to make the silly things jump out as silly, so as not to vest too much weight in it.
I can’t speak for Eliezer, but my intention is to imply that this is actually as important as the linguistic devices say it is. There is no irony intended here, no buffer of plausible deniability against being thought to be serious.
I can’t make any sense of your last paragraph, and the non-word “epikunfukas” is the least of it.