http://rejectiontherapy.com/ - the 30-day rejection challenge seems to fit here. Try, for 30 consecutive days, to provoke genuine rejections or denials of reasonable requests as part of your regular activities, at a rate of one per day.
aausch
I may not be smart enough to debate you point-for-point on this, but I have the feeling about 60% of what you say is crap.
David Letterman, To Bill O’Reilly, in discussion about the supposed War on Christmas, as quoted in “In Letterman appearance, O’Reilly repeated false claim that school changed ‘Silent Night’ lyrics”, Media Matters for America, (2006-01-04) (From Wikiquote)
Alright, imagine that you as the “watched” have a choice on the type of alert you get hit with. And on the number of strangers that have to want to alert you before it happens—maybe your screen gets sent to 10 strangers and they have to achieve a quorum.
Or 10 friends. This could be a Facebook app.
If anyone decides to work on this, please let me know in advance.
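Purely as a sketch of the quorum idea above (the class name, knob names, and defaults are all hypothetical; this is just the vote-counting core, not a real app):

```python
from dataclasses import dataclass, field

@dataclass
class QuorumAlert:
    """Fires once enough watchers agree you're procrastinating.

    `watchers` and `quorum` are the user-chosen knobs from the comment:
    how many strangers (or friends) see your screen, and how many of
    them must vote before the alert actually goes off.
    """
    watchers: int = 10
    quorum: int = 6
    votes: set = field(default_factory=set)

    def vote(self, watcher_id: str) -> bool:
        """Record one watcher's vote (duplicates from the same watcher
        are ignored); return True once the quorum is reached."""
        self.votes.add(watcher_id)
        return len(self.votes) >= self.quorum
```

The point of the quorum is to make a single trigger-happy stranger harmless: no one watcher can set off the alert alone.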
I read this and it immediately shouted at me:
“Chatroulette clone”
I’m not entirely sure how this might work. Maybe something along the lines of:
Every 15 minutes, a screenshot of your work is sent to a complete stranger. If they think you’re procrastinating, they can hit a “horn” button which causes a loud noise to occur on your computer, and broadcasts 5 seconds’ worth of your reaction.
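A minimal sketch of that loop, with everything real stubbed out (`capture`, `send`, and the stranger pool are injected placeholders; an actual version would need screen capture and a matchmaking backend):

```python
import random
import time

def watch_loop(capture, send, strangers, interval_s=15 * 60, rounds=None):
    """Every `interval_s` seconds, capture the screen and ship the shot
    to a randomly chosen stranger. `capture` and `send` are injected so
    the loop can be exercised with stubs; `rounds=None` runs forever."""
    sent = 0
    while rounds is None or sent < rounds:
        # Take a screenshot and send it to one random person in the pool.
        send(random.choice(strangers), capture())
        sent += 1
        if rounds is None or sent < rounds:
            time.sleep(interval_s)
    return sent
```

The horn/broadcast reaction would live on the receiving end; this is only the sender's side of the protocol.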
I had always assumed that the current primary purpose of the LessWrong site is to spread the word—to increase awareness of the existing body of knowledge related to rationality, to present evidence for the benefits of becoming more rational, and to enumerate the techniques required to get there—while only indirectly supporting the actual work of becoming more rational. Similar in nature to the Harry Potter story.
For the actual practice, at least in an online setting, I imagine that something closer to the Lumosity site would be appropriate.
I sometimes look at human conscious thought as software which is running on partially re-programmable hardware.
The hardware can be reprogrammed by two actors—the conscious one, mostly indirectly, and the unconscious one, which seems to have direct access to the wiring of the whole mechanism (including the bits that represent the conscious actor).
I haven’t yet seen a coherent discussion of this kind of model—maybe it exists and I’m missing it. Is there already a coherent discussion of this point of view on this site, or somewhere else?
Does anyone know of studies which measure how much of an effect access to reliable information has on decision making?
I think you are the first person I know of who actively uses Google Wave.
Do not believe in anything simply because you have heard it. Do not believe in anything simply because it is spoken and rumored by many. Do not believe in anything simply because it is found written in your religious books. Do not believe in anything merely on the authority of your teachers and elders. Do not believe in traditions because they have been handed down for many generations. But after observation and analysis, when you find that anything agrees with reason and is conducive to the good and benefit of one and all, then accept it and live up to it.
-- Gautama Buddha
I think I have a similar point of view to yours, on this.
I get it now, thank you.
You would expect rational thought to lead to a higher level of commitment on decisions about religion than about gun control, but a higher level of commitment on these topics is not a good signal of rational thought.
I assume we agree that atheism is no longer a signal of rational thought—if that’s true, are you getting any additional useful information from how loudly someone antagonizes religion?
Most of the educated people I know are ultra-behaviorists.
I like that qualification. It’s hard to make these calls out of the group context.
I wonder what you mean by “hardcore atheists”?
I’m guessing you don’t mean hardcore as in “signaling group membership loudly”, and Eliezer already argued the point that atheism is no longer a valid synonym for reliable, rational thought.
Is there a good source of documentation for the expected side effects of coffee—the “down” period of reduced mental capacity that often occurs a few hours after drinking it, the disrupted sleep cycles in those who don’t normally drink the stuff, and so on?
You do ill if you praise, but worse if you censure, what you do not understand.
-- Leonardo da Vinci
If I don’t eat the bunny, I’m sure to find something else to eat. If I eat the bunny, though, it’s definitely not going to be alive anymore (attack of the zombie digested bunnies anyone?)
There isn’t as much pressure on human evolution to avoid making cuteness mistakes as there is on bunny evolution to be cute. If evolution moved fast enough, and the cuteness complex were hackable, it seems possible to me that things would evolve to hack it (and beat out babies at it).
This quote, for me, shows two ideas: the “I defy the data” move that khafra mentions below, as well as, on Letterman’s side, an ability to accurately detect BS and dismiss it without having spent significant resources on formal debate. That ability seems incredibly useful to me, and definitely worth cultivating.
I associate the second idea with the “Prior Information” chapter of HPMoR.