This is an important consideration. I just can’t figure out how to test it.
Furthermore, it is possible to let the head-land simulations run and remain emotionally abstracted from the results.
This is wise. Getting the necessary distance would indeed work, as would improving head-land accuracy, though I’m dubious about the extent to which it can be improved. In any case, I’m not quite to either goal myself yet. And if your own head-land is making accurate predictions, that’s a good thing; I just can’t get those kinds of results out of mine. Yet.
I second this request.
I’d like to make it that, but we’ll see what I can do.
Nah; it was supposed to read “in which I construct.” I just fumbled the editing.
Thank y’kindly. I upvote any and all comments that correct mistakes that would’ve made me look like a sub-lingual doof otherwise.
Glad to hear it. I aim to please.
Thanks; duly noted. I plan to write a few posts on the “road testing” of Less Wrong and Less Wrong-y theories about rationality and the defeat of akrasia, so these are helpful pointers.
Thanks. I expect most of my posts here will be more Useful Practice than True Theory, but only just; my hope is that the Less Wrong community won’t spare the downvotes if I stray too far from rationality and too close to self-help territory.
You’re absolutely right; it’s the overuse of narrative we need to be concerned about. Humanity can’t get by without it, but one inch too much and we’re in self-delusion territory.
We seem to have a population here that already cares, and deeply, about rationality. I do trust them to upvote whatever has a lot to do with rationality and downvote whatever has too little to do with it. In fact, I’d go so far as to submit that we’re doing something wrong if there aren’t enough off-topic-ish, net-negative-karma posts; it would show that posters aren’t taking quite enough risks as regards widening rationality’s domain. I’m wary of the PUA and overly self-help-y talk, sure, but seeing nothing like it around here would be the dead canary in the coal mine.
The more time I spend thinking about it, the more I come to realize that Narrative Is the Enemy, at least where attempts to see and reason clearly are concerned. One heuristic has proven surprisingly useful time and time again, in efforts of rationality as well as creativity: don’t try to deliberately tell a story.
I would submit that it’s less an issue of the biologically-imposed limit to our life spans than the biologically-imposed limit to our predictive abilities, to the amount of “moving part” data our brains can work with simultaneously. Considering that we only seem to achieve anything like accuracy when predicting events on a very, very small scale of both time and complexity, one might argue that we actually plan in too long a term.
More expansion on the possibilities of such a solved computational mathematics might be in order here; even mathematicians will have to crank their imaginations a bit to think through the specific advantages afforded by the formalized-computer-mathematics future.
For rationalist polymaths out there, Isaac Asimov’s The Roving Mind
Paul Graham’s “Lies We Tell Kids”
I used to be a terrible hypochondriac when I was young and a great reader of medical dictionaries. One day I realised that I was not actually frightened of terminal illness but of not getting done the things I wanted to get done.
A.C. Grayling, from a recent Guardian mini-interview
(My interpretation: remember that our various seemingly nonsensical personality tics can mask other, more addressable concerns.)
Seconded and extended: it’s going to need to be made very, very clear that there’s no political slant at work. I’d even recommend going completely sans political subject matter for a little while; poke a few holes in some pundit’s argument and you’ll be assumed to have an ironclad agenda to promote the opposite (and probably also bad) position.
Sounds like the concept of “agility” could be generalized richly indeed.