So.......luminosity equals subtlety of metacognition? Er, I’ll read the sequence. :)
I wonder how much of a correlation there is between people who put effort into self-training in rationality (or communal training, a la Less Wrong) and those who actually train in a martial art. And I don’t mean the “Now I’ll be able to beat people up, Hoo-AH” three-week-course kind of training; I mean real, long-term, long-rewards-curve training. I’ve done aikido on and off for years (my life’s been too hectic to settle down to a single dojo, sadly), and it takes a similar sort of dedication, determination, and self-reflection as a serious foray into training your mind toward rationality. And, I’d go so far as to say, a similar ‘predilection of mind and preference’ (and I’ll let you LWers go to town on that one).
This may be a bit naive, but can an FAI even have a really directive utility function? It would seem to me that, by definition (caveats about using that phrase aside), it would not be running with any ‘utility’ in ‘mind’.
I think, actually, that because we hardly ever play with an optimal strategy, goals are going to be nigh impossible to deduce. Would such an end-from-means deduction even work if the actor were not using the optimal strategy? Humans only play optimally in games on the level of tic-tac-toe (the more rational ones maybe in somewhat more complex situations, but not by much), and as for machines that could use an optimal strategy, we’ve just excluded them from even having such ‘goals’.
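As an aside, here is a minimal sketch (my own illustration, not anything from the comment above) of what ‘optimal strategy’ actually amounts to in a game as small as tic-tac-toe: a plain minimax search over the whole game tree. The board representation, the `minimax` function, and the win-line table are all just illustrative choices.

```python
# A minimal sketch of optimal play in tic-tac-toe via full minimax search.
# Board is a list of 9 cells holding 'X', 'O', or None; 'X' is the maximizing player.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return 'X' or 'O' if a line is complete, else None.
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (value, move) for the side to move: +1 X win, -1 O win, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # full board with no winner: a draw
    best_value, best_move = None, None
    for m in moves:
        board[m] = player
        value, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None
        if best_value is None \
                or (player == 'X' and value > best_value) \
                or (player == 'O' and value < best_value):
            best_value, best_move = value, m
    return best_value, best_move

if __name__ == "__main__":
    empty = [None] * 9
    value, move = minimax(empty, 'X')
    # With both sides playing optimally, tic-tac-toe is a draw (value 0).
    print("game value with optimal play:", value, "| one optimal first move:", move)
```

With both sides playing like this, every game is a draw, which is roughly the point: tic-tac-toe is small enough that ‘optimal’ is fully computable, and it’s in games above this level that humans stop playing optimally and end-from-means inference gets murky.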
I’m a 19-yo female student in the NYC area.
I was mildly ecstatic to find that not only does Less Wrong exist, but its members have articulated absolute loads of things that my own mind had danced around without ever quite putting into words (reservations as to the value of that aside). I actually first became fascinated with Bayesian analysis when I learned about its use in cryptography, and in the pre-computer-age Bombe machine that helped crack the German Enigma code at Bletchley Park. I saw that it could be used in a far less narrow way, applied to plain old everyday rationality, and I’ve been increasingly interested in it since. And along came Less Wrong to blow the idea open into so, so many tangents and applications. :) Just great.
LW has also managed to sort of shock me by covering almost all of the specific areas my autodidacticism has ranged across, from philosophy and theosophy to neurology and quantum physics. And since I am ‘educated’ in a very deep but very patchy manner (as, I suspect, are many people who become unhappy with the rate at which the universe is ‘giving’ them information and decide to SEEK it), LW’s holistic approach to knowledge has been really refreshing, and I’ve had great fun (though not at all in the trivial sense) exploring it for a while. Now I’m going to start in on the Sequences.
I’m also absolutely going to seek out the LW/OB NYC meetups once fall starts—it’s highly difficult for me to find people to have, er, rational and challenging discussions with, not to mention the camaraderie that comes from shared true ‘curiosity’, as per Eliezer’s definition. I see good evidence here on the blog to believe it will live up to my expectations.
Cheers.
Sigh... I’ve certainly seen all the ‘evidence to the contrary’, or at least a significantly representative amount.
This is the long and short of it: artificial sweeteners give you taste, not satiety, so you won’t be as full as you would be if you’d eaten sugar, and hence may eat more. Also, if you overestimate the number of calories you’re ‘saving’ by using sweeteners, you can easily end up eating more, and potentially gaining weight. It’s the stereotypical “Ooh, I drank a Diet Coke instead of a real one and saved 200 calories, so I can have a donut!”
Conclusion: pay attention to EVERYTHING you’re eating, keeping in mind that you DO have a prior notion of ‘how much food you need’, and do so in a manner that consciously minimizes your biases. It’s not that hard, but most people don’t take such a holistic approach, and I’ve never seen it specified as a factor in the ‘studies’ on artificial sweeteners. So the studies are correct, as far as they go, but you and I can hopefully be a little smarter than that... it’s pretty much a problem of overcoming internal bias by acting on as complete information as possible.
Fair enough—I don’t like the syrupiness of regular Coke, but I drink diet, although it certainly doesn’t taste like real sugar. I’d also ask whether you’ve tried artificial sweeteners other than Splenda, because most taste terrible, but Splenda is an entirely different chemical preparation: sucralose, which is made from actual sugar, unlike aspartame (synthesized from amino acids) or saccharin (originally derived from coal tar).
Question, hopefully one betraying my busyness, not laziness :D... can you watch the BBC production of Darwin’s Dangerous Idea instead of reading it? And if so, which sections correspond?
Thanks loads.
Also, you can use Splenda, for no calories at all, and it tastes just fine. I know some people can get downright militant about how awful the stuff is, but they are the same people who buy organic when the term is essentially meaningless, and they seem to hate the thought that you are “cheating” to get deliciousness. I simply say to them “Er, human technology has progressed to the point where I can have, say, a sweet breakfast without consuming any sugar, and I’m going to do so. Cheating has nothing to do with it.” I drink tea with it alllll the time, too. :)
“Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.”
1984, George Orwell (although I really shouldn’t have to attribute this one)
Probably my favorite statement on rationality; it’s so practical for launching off into every other sphere of thought—politics, ethics, theology, maths/physics, and, well, all else that follows.
The most feasible iteration of a Dyson sphere would probably be the least dense one, which would greatly constrain the ways it could be used, and that makes one less likely to be built because it would be less commercially useful. Still, it could happen.
...or that both of you are wrong. Most of the time when people argue, neither party actually has a fundamental grasp of their own position. If both did, it would either change the argument to an ENTIRELY different and more essential one, or dissolve it. And either of those outcomes is an absolute gain for the participants.
Not that I can do anything about this aside from in my own actions, but it’s annoying as hell sometimes.
And what does this make it for those of us who do both?
This is a valid attempt to deal with conflicting stimuli from the world—to create standards to which you adhere consciously because you don’t trust your intuitions to motivate you rationally in the environment with which you must interact. And really, such attention is partially what it means to be conscious/human—to audit your actions ‘from the outside’ instead of merely reacting. And with today’s bizarre and skewed ‘food environment’, as it were, this becomes VERY necessary, especially for people with a predilection for analyzing their own behavior even in such supposedly mundane (but really fundamental) things as food consumption.
Aaaand the takeaway metaphor is that ‘creative’ ideas are probably the explosive ones, but sometimes we still really need to move trucks.
Solidly great, this.