The enjoyment of the activity factors into whether it is a good use of time.
jschulter:
“my probability that something like string theory is true would go way up after the detection of a Higgs boson”
I’m not sure that this should be the case, as the Higgs is a Standard Model prediction and string theory is an attempt to extend that model. The accuracy of the former says little about whether the latter is sensible or accurate. For a concrete example, this is like allowing the accuracy of Newtonian Mechanics (via, say, some confirmed prediction of a planetary body’s existence based on anomalous orbital data) to influence your confidence in General Relativity before the latter had predicted Mercury’s precession or the Michelson-Morley experiment had been done.
EDIT: Unless, of course, you were initially under the impression that there were flaws in the basic theory that would make the extension fall apart, which I just realized may have been the case for you regarding the Standard Model.
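To make the update explicit (a rough sketch; H, SM, and ST are just my shorthand for “Higgs detected,” “the Standard Model is correct,” and “string theory is correct”):

$$P(ST \mid H) = P(ST)\,\frac{P(H \mid ST)}{P(H)}, \qquad P(H) \approx P(SM) + \epsilon\,P(\neg SM).$$

Since string theory contains the Standard Model, $P(H \mid ST) \approx P(H \mid SM) \approx 1$, so the update factor is roughly $1/P(H)$. That factor is near 1 whenever $P(SM)$ was already near 1, and it only becomes substantial if you doubted the Standard Model to begin with, which is exactly the caveat in the EDIT.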
But it is near-consequentialist: “I’m a hard-working person and hard-working people wouldn’t just give up” → “the act of giving up will make me feel less like a hard-working person and therefore make me less likely to work hard in the future”
That particular element seems like it would incentivize campers to spend the period hyper-aware of their own and others’ specificity, which seems counterproductive to me. The goal is an increase in the specificity of statements made casually, which could be entirely unrelated to the deliberate, self-monitored specificity the prize rewards. Extending the period to, say, a week might work to prevent this; at that point it would be a long-term incentive rather than a prize.
This activity seems like it would tie in well with a unit on hypothesis and experiment generation as well; it reminds me of the 2-4-6 test. Perhaps have two different scoring rules: when trying to teach specificity, give points for getting your partner to guess; when teaching how to find the right hypotheses and tests, give points for guessing correctly.
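As a rough sketch of how the two scoring modes might differ in practice (the hidden rule, the test panel, and the disconfirmation bonus below are all my own illustration, not part of the 2-4-6 task proper):

```python
# Toy 2-4-6-style round with two scoring modes (illustrative only).
def score_round(secret_rule, probes, final_hypothesis, mode):
    """secret_rule / final_hypothesis: predicates on number triples.
    probes: triples the guesser tried, each answered yes/no as in 2-4-6."""
    answers = [(t, secret_rule(t)) for t in probes]
    # Judge the final hypothesis against a fixed panel of test triples.
    panel = [(1, 2, 3), (2, 4, 6), (6, 4, 2), (5, 5, 5), (1, 10, 100)]
    correct = all(final_hypothesis(t) == secret_rule(t) for t in panel)
    if mode == "specificity":
        # Teaching specificity: score only when the partner recovers the rule.
        return int(correct)
    if mode == "hypothesis":
        # Teaching hypothesis/test generation: points for guessing correctly,
        # plus a bonus (my own addition) for probes that drew a "no", since
        # seeking disconfirming tests is the lesson of the 2-4-6 task.
        bonus = sum(1 for _, yes in answers if not yes)
        return 2 * int(correct) + bonus
    raise ValueError(mode)

# Example: the classic hidden rule is "any strictly ascending triple".
ascending = lambda t: t[0] < t[1] < t[2]
print(score_round(ascending, [(2, 4, 6), (1, 2, 3), (6, 4, 2)], ascending, "hypothesis"))  # 3
```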
I unfortunately won’t be able to make it up for this one, despite my strong interest in the subject. Would someone be willing to host me via Skype, though?
This sounds very interesting. Do you have anybody who’s particularly experienced with TDT or other decision theories committed to come? And which business will it be at? That location appears to be a mall of some sort.
Also, is anyone from down here in Tucson looking for a ride up, or does anyone already have one with an extra seat?
Okay, thanks for clarifying the question. I’ve essentially already stated all the “evidence” I’m using for the claim; it’s almost entirely anecdotal, and there are certainly no actual studies that I’ve used to support this particular bullet point. So there is a good chance I stated things in a way that seems overconfident, and I may in fact be overconfident regarding this particular claim, especially considering that I’ve not tested alternate explanations for the efficacy I’ve had. I’d be more than willing to have a detailed discussion regarding both of our experiences and intuitions with the method, but I feel as though this probably isn’t the place (I’ve already messaged you), though I’d be happy to update the wording of the article afterwards if necessary.
The statement about strong visualization (essentially simulating experiences as closely as possible) is taken from the video and from personal (and anecdotal) experience with the method. The reinforcement from actual completion refers to how, once you’ve completed the task you were motivating yourself to do, you should get the feeling of reward you were imagining in order to motivate yourself. Actually experiencing the reward makes it easier to simulate if you need to become motivated again later. Additionally, the mental connection you’ll make between completing the task and the reward makes it less likely that you’ll need to repeat the exercise for that task, unless it has an extremely high activation cost: the next time you go to do the task, one of the first things that comes to mind will likely be the reward you felt the last time(s) you performed it.
Do you read/watch a lot of fiction? I personally end up selecting for fiction which matches my beliefs somewhat closely, and in retrospect that has likely strongly reinforced the connection. This seems like a reasonable candidate for an automatic yet unnoticeable process with those results.
This can actually happen unintentionally as well. One of the things that might have caused the original haunted rationalist problem could be watching/reading too much horror fiction: if most experiences you’ve seen involving an old house end with people tortured and dead, then even if you know they were all fictitious, you will still anticipate, however strongly, bad things happening in old houses. This also makes me wary that my anticipations regarding the future are likely highly influenced by all the science fiction I read, so I know to watch my aliefs in that regard very, very closely.
This example was intended as a possible alief you might want to hold, whether or not it is accurate to your beliefs. There are some people who can reasonably expect never to encounter a dangerous snake in the wild but who are nonetheless very afraid of them (and of all other snakes as well); while respect and fear for dangerous and potentially venomous animals is worthwhile for some, for others it can be a handicap.
I should also mention (though I took this part out of the article) that there are some situations where one might want to alieve things entirely counter to one’s beliefs. The technique allows for cultivation of these types of aliefs as well, and not fearing snakes might be one of them. Other examples could be the alief that cake is not delicious, or that drinking/being drunk is boring and often painful. Note that I do not personally advocate lying to oneself in an overly convincing manner, as that way darkness lies.
Fixed.
Rationalist Judo, or Using the Availability Heuristic to Win
True. I was actually considering omitting the last sentence, as it doesn’t really contribute much, but I wasn’t sure if that would have been misleading as to the original meaning.
“When you are stubbornly making an argument, there is a possibility that you are uninformed, ignorant, in denial, and/or being a jerk. Of course, you might be right.”
“It would be really convenient if rationality, the meme-cluster that we most enjoy and are best-equipped to participate in, also happened to be the best for winning at life.”
As I’ve seen it used here, “rationality” most commonly refers to “the best [meme-cluster] for winning at life,” whatever that actual meme-cluster may be. If it could be shown that believing in the Christian god uniformly improved or did not affect every aspect of believers’ lives regardless of any other beliefs held, I think a majority of LessWrongers would take every effort necessary to actually believe in a Christian god. The problem seems to be how rationality and “the meme-cluster that we most enjoy and are best-equipped to participate in” are equated: these two are currently very similar meme-clusters for the current LessWrong demographic, but they are not necessarily so. “It would be really convenient if the meme-cluster that we most enjoy and are best-equipped to participate in also happened to be the best for winning at life, rationality” seems more sensible.
So, due to bad luck, bad timing, and lack of proper foresight, it seems this attempt was a total bust (well, not total; I got some work done). I’ll try another one sometime this month. Any feedback would be helpful.
I’m wearing a dark red shirt and jeans and typing on a white laptop if that helps.
Wait, so is this on Monday the 3rd or Tuesday the 4th?