“The Fermi paradox is actually quite easily resolvable. There are zillions of aliens teeming all around us. They’re just so technologically advanced that they have no trouble at all hiding all evidence of their existence from us.”
wuwei
I thought Chalmers was an analytic functionalist about cognition, reserving his brand of dualism only for qualia.
Yes, but not all self-help needs to involve positive affirmations.
I was going to ask whether repeating positive statements about oneself has actually been recommended on lesswrong. Then I remembered this post. Perhaps that post would have made a more suitable target than the claim that rationalists should win.
Wouldn’t a rationalist looking to win simply welcome this study along with any other evidence about what does or does not work?
Unless that changes, I wouldn’t particularly recommend programming as a job. I quite like my programming job, but that’s because I like programming and I don’t work in a Dilbert cartoon.
According to an old story, a lord of ancient China once asked his physician, a member of a family of healers, which of them was the most skilled in the art.
The physician, whose reputation was such that his name became synonymous with medical science in China, replied, “My eldest brother sees the spirit of sickness and removes it before it takes shape and so his name does not get out of the house.”
“My elder brother cures sickness when it is still extremely minute, so his name does not get out of the neighborhood.”
“As for me, I puncture veins, prescribe potions, and massage skin, so from time to time my name gets out and is heard among the lords.”
-- Thomas Cleary, Introduction to The Art of War
Do you program for fun?
Take the thoughts of such an one, used for many years to one tract, out of that narrow compass he has been all his life confined to, you will find him no more capable of reasoning than almost a perfect natural. Some one or two rules on which their conclusions immediately depend you will find in most men have governed all their thoughts; these, true or false, have been the maxims they have been guided by. Take these from them, and they are perfectly at a loss, their compass and polestar then are gone and their understanding is perfectly at a nonplus; and therefore they either immediately return to their old maxims again as the foundations of all truth to them, notwithstanding all that can be said to show their weakness, or, if they give them up to their reasons, they with them give up all truth and further enquiry and think there is no such thing as certainty.
-- John Locke, Of the Conduct of Understanding
There is a mathematical style in which proofs are presented as strings of unmotivated tricks that miraculously do the job, but we found greater intellectual satisfaction in showing how each next step in the argument, if not actually forced, is at least something sweetly reasonable to try. Another reason for avoiding [pulling] rabbits [out of the magician’s hat] as much as possible was that we did not want to teach proofs, we wanted to teach proof design. Eventually, expelling rabbits became another joy of my professional life.
-- Edsger Dijkstra
Edit: Added context to “rabbits” in brackets.
Thanks for the explanations.
Testing shows the presence, not the absence of bugs.
-- Edsger Dijkstra
Here’s one way this could be explained: Susie realizes that her name could become a cheap and effective marketing tool if she sells seashells at the seashore. Since that’s something she enjoys doing anyway, she does so.
If that’s how things are, I wouldn’t really call this a cognitive bias.
That’s a good point, but it would be more relevant if this were a policy proposal rather than an epistemic probe.
To answer your second question: No, there aren’t any historical examples I am thinking of. Do you find many historical examples of existential risks?
Edit: Global nuclear warfare and biological weapons would be the best candidates I can think of.
If you decreased everyone’s intelligence to 100 IQ points or lower, I think overall quality of life would decrease, but existential risks would also drastically decrease.
Edit: On second thought, now that I think about nuclear and biological weapons, I might want to take that back while pointing out that these large threats were predominantly created by quite intelligent, well-intentioned and rational people.
You seem to be assuming that the relation between IQ and risk must be monotonic.
I think existential risk mitigation is better pursued by helping the most intelligent and rational efforts than by trying to raise the average intelligence or rationality.
And I will suggest in turn that you are guilty of the catchy fallacy name fallacy. The giant cheesecake fallacy was originally introduced as applying to those who anthropomorphize minds in general, often slipping from capability to motivation because a given motivation is common in humans.
I’m talking about a certain class of humans, and I am not suggesting that they are actually motivated to bring about bad effects. Rather, all it takes is for there to be problems where it is significantly easier to mess things up than to get it right.
I think many of the most pressing existential risks (e.g. nanotech, biotech and AI accidents) come from the likely actions of moderately intelligent, well-intentioned, and rational humans (compared to the very low baseline). If that is right then increasing the number of such people will increase rather than decrease risk.
Increases in rationality can, with some regularity, lead to decreases in knowledge or utility (hopefully only temporarily and in limited domains).
I suspect you aren’t sufficiently taking into account the magnitude of people’s irrationality and the non-monotonicity of rationality’s rewards. I agree that intelligence enhancement would have greater overall effects than rationality enhancement, but rationality’s effects will be more careful and targeted—and therefore more likely to work as existential risk mitigation.
I still have very little idea what you mean by ‘objectification’ and ‘objectify people’.
I was momentarily put off by Roko’s comment on the desire to have sex with the extremely attractive women that money and status would get. This was because of:
the focus on sex, whereas I would desire a relationship.
the connotation of ‘attractive’ which in my mind usually means physical attractiveness, whereas my preferences are dominated by other features of women.
the modifier ‘extremely’ which seems to imply a large difference in utility placed on sex with extremely attractive women vs. very attractive or moderately attractive women, especially when followed by identifying this desire as a generator for desiring high social status rather than vice versa or discussing both directions of causation. (The latter would have made more sense to me in the context of Roko saying we should value social influential power.)
I had negative associations attached to Roko’s comment because I started imagining myself with my preferences adopting Roko’s suggestions. However, I wouldn’t have voiced these negative associations in any phrases along the lines of ‘objectification’ or ‘objectifying’, or in terms of any moral concerns. The use of the word ‘get’ by itself did not strike me as particularly out of place any more than talk of ‘getting a girlfriend/boyfriend’.