I’m probably also an ex-rationalist. Simply looking at the list of biases that I should really be correcting for in making a decision under uncertainty is rather intimidating. I’d like to be right—but do I really want to be right that much?
Frankly, the fact that I still maintain a cryonics membership is really status quo bias: I set that up before:

- Reading The Crack of a Future Dawn—downgrade by 2X if uploads/ems dominate and are impoverished to the point of being on the edge of survivable subsistence.
- Watching the repugnant Leon Kass lead a cheerleading section for the grim reaper from the chairmanship of W’s bioethics council. Extending human lifespans is a hard enough technical problem—but I hadn’t imagined that there was going to be a whole faction on the side of death. Downgrade the odds by another 2X if there is a faction trying to actively keep cryonicists dead.
- Watching Watson perform impressively in an open problem domain. The traditional weakness of classical AI has been brittleness, breaking spectacularly on moving outside of a very narrow domain. That firewall against ufAI has now been breached. Yet another downgrade of 2X for this hazard gaining strength...
Welcome to Less Wrong!
You might want to post your introduction in the current official “welcome” thread.
LW’s notion of rationality differs greatly from what you described. You may find our version more palatable.