If by “rationalist”, the LW community means someone who believes it is possible and desirable to make at least the most important judgements solely by the use of reason operating on empirically demonstrable facts, then I am an ex-rationalist. My “intellectual stew” had several forms of formal logic and applied math simmered into it, and was seasoned with a BS in Computer Science at age 23.
By age 28 or so, I concluded that most of the really important things in life were not amenable to this approach, and that the type of thinking I had learned was useful for earning a living, but was woefully inadequate for other purposes.
At age 50, I am still refining the way I think. I come to LW to lurk, learn, and (occasionally) quibble.
I’m probably also an ex-rationalist. Simply looking at the list of biases that I should really be correcting for in making a decision under uncertainty is rather intimidating. I’d like to be right—but do I really want to be right that much?

Frankly, the fact that I still maintain a cryonics membership is really status quo bias: I set that up before:

Reading The Crack of a Future Dawn—downgrade by 2X if uploads/ems dominate and are impoverished to the point of being on the edge of survivable subsistence.

Watching the repugnant Leon Kass lead a cheerleading section for the grim reaper from the chairmanship of W’s bioethics council. Extending human lifespans is a hard enough technical problem—but I hadn’t imagined that there was going to be a whole faction on the side of death. Downgrade the odds by another 2X if there is a faction trying to actively keep cryonicists dead.

Watching Watson perform impressively in an open problem domain. The traditional weakness of classical AI has been brittleness, breaking spectacularly on moving outside of a very narrow domain. That firewall against ufAI has now been breached. Yet another downgrade of 2X for this hazard gaining strength...
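For what it’s worth, the compounding of those downgrades can be sketched numerically. This is only a hypothetical illustration: it assumes the three 2X factors act as independent multipliers on the odds, and the starting odds value is invented for the example.

```python
# Hypothetical sketch: three independent 2X downgrades to the odds of
# cryonics working out, compounding multiplicatively.
base_odds = 1.0          # invented starting odds, for illustration only
downgrades = [2, 2, 2]   # ems at subsistence, deathist faction, ufAI hazard

combined = 1
for d in downgrades:
    combined *= d        # multiplicative compounding of the downgrades

adjusted_odds = base_odds / combined
print(combined)          # 8
print(adjusted_odds)     # 0.125
```

Under those assumptions, the three 2X downgrades together cut the odds by 8X, not 6X—each factor multiplies rather than adds.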
Welcome to Less Wrong!
You might want to post your introduction in the current official “welcome” thread.
LW’s notion of rationality differs greatly from what you described. You may find our version more palatable.