You don’t think how the mind works is important? You don’t think the mind’s systematic malfunctions are important? Do you think the Inquisition would have tortured witches, if all were ideal Bayesians?
begging the question
How can one infer a particular route for instrumental rationality from epistemic rationality alone? I can’t think of any way, since there are unaccounted-for variables independent of rationality skills, like the complexity of one’s values (there’s a sequence post on this, too!)
In all human history, every great leap forward has been driven by a new clarity of thought. Except for a few natural catastrophes, every great woe has been driven by a stupidity. Our last enemy is ourselves; and this is a war, and we are soldiers.
Blue or green? It’s this kind of post, I hypothesise, which scared Mark away.
You were born too late to remember a time when the rise of totalitarianism seemed unstoppable, when one country after another fell to secret police and the thunderous knock at midnight, while the professors of free universities hailed the Soviet Union’s purges as progress. It feels as alien to you as fiction; it is hard for you to take seriously.
Presumptuous. And compulsory history lessons indoctrinated this fear into me. Moreover, politics in my country falls broadly along liberal-conservative (manned by anti-totalitarians) vs progressive-socialist lines, so totalitarianism is a salient concept in many political debates. I identify as a liberal (in the classical liberal sense), but don’t precommit to any abstract notions one might assume follow from that label, including those of fellow liberals, because only I can stop my mind from being killed. Its only significance to me is concrete: it’s permission to participate in the social activities of one of the two major political parties in my country.
Stuart Chase and others have come near to claiming that all abstract words are meaningless, and have used this as a pretext for advocating a kind of political quietism. Since you don’t know what Fascism is, how can you struggle against Fascism?
We look back with the clarity of history, and weep to remember the planned famines of Stalin and Mao, which killed tens of millions. We call this evil, because it was done by deliberate human intent to inflict pain and death upon innocent human beings. We call this evil, because of the revulsion that we feel against it, looking back with the clarity of history. For perpetrators of evil to avoid its natural opposition, the revulsion must remain latent. Clarity must be avoided at any cost. Even as humans of clear sight tend to oppose the evil that they see; so too does human evil, wherever it exists, set out to muddle thinking.
In our time, political speech and writing are largely the defence of the indefensible. Things like the continuance of British rule in India, the Russian purges and deportations, the dropping of the atom bombs on Japan, can indeed be defended, but only by arguments which are too brutal for most people to face, and which do not square with the professed aims of the political parties. Thus political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness. Defenceless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called pacification...
How are rationalist writers any different? Consider Tyler’s argument again:
Rationalist taboo?
Tyler Cowen apparently feels that overcoming bias is just as biased as bias: “I view Robin’s blog as exemplifying bias, and indeed showing that bias can be very useful.” I hope this is only the result of thinking too abstractly while trying to sound clever. Does Tyler seriously think that scope insensitivity to the value of human life is on the same level with trying to create plans that will really save as many lives as possible?
Strawmanning your competition, steelmanning your own side – unfair audience manipulation here
Then again, maybe I’m expecting too much from debate: reasoning isn’t about logic, it’s about arguing, even if that’s about improving our logical ability to get better at arguing. It’s not going to work unless we’re at the same internal inferential step.
On scope sensitivity, as I wrote on that sequence article too: eternalism isn’t the rational position. I, for one, discount the value of anything in the future because of my capacity to die, or to change my mind to a position I can’t account for with present information.
As with chocolate cookies, not everything that feels pleasurable is good for you
I suppose you think that manipulation is justified because it advances the rationalist cause, or the MIRI cause, but ends don’t justify means amongst humans.