The article isn’t so much about Reiki as about intentionally using the placebo effect in medicine: there is some evidence that, for people who currently believe (medicine x) is effective, the placebo effect of fake (medicine x) may be stronger than that of fake (medicine y), while (medicine x) has fewer medically significant side effects than (medicine y).
Thinking Fast and Slow references studies of disbelief requiring attention—which is what I assume you mean by “easier”.
We’re a long way from having any semblance of a complete art of rationality, and I think that holding on to even the names used in the greater Less Wrong community is a mistake. Good names for concepts are important; while it may be confusing in the short term, while we’re still developing the art, we can do better if we don’t tie ourselves to the past. Put the old names at the end of the entry, or under a history heading, but pushing the innovation of jargon forward is valuable.
Meetup: Maine: Automatic Cognition
I’ve been introducing rationality not by name, but by description. As in, “I’ve been working on forming more accurate beliefs and taking more effective action.”
Ionizing radiation, preferably expressed as synthetic heat or pain with a tolerable cap. The various types could be differentiated by location or flavor, but mostly it’s the warning that matters.
There are a significant number of people who judge themselves harshly. Too harshly. It’s not fun and it’s not productive; see Ozy’s post on scrupulosity. It might be helpful for the unscrupulous to judge themselves with a bit more rigor, but leniency has a lot to recommend it, as viewed from over here.
Basic version debug apk here, (more recent) source on GitHub, and Google Play.
The most notable feature lacking is locking the phone when the start time arrives. PM me if you run into problems. Don’t set the end time one minute before the start time, or you’ll only be able to unlock the phone in that minute.
A more advanced version of this would be to lock the phone into “emergency calls only” mode within a specific time window. I don’t know how hard that would be to pull off.
This appears to be possible with the Device Administration API: relock the screen upon receiving an ACTION_USER_PRESENT intent. Neither requires a rooted phone.
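A minimal sketch of that approach in Kotlin, assuming the app is already active as a device administrator with the force-lock policy; isInLockWindow is a hypothetical helper, not anything in the published app:

```kotlin
import android.app.admin.DevicePolicyManager
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent

// Hypothetical helper: true while the configured lock window is active.
// Handling windows that wrap past midnight also avoids the one-minute
// trap mentioned above.
fun isInLockWindow(): Boolean = TODO("compare the current time to the start/end times")

class RelockReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        // ACTION_USER_PRESENT fires each time the user unlocks the screen.
        if (intent.action == Intent.ACTION_USER_PRESENT && isInLockWindow()) {
            val dpm = context.getSystemService(Context.DEVICE_POLICY_SERVICE)
                    as DevicePolicyManager
            dpm.lockNow() // relock immediately; requires an active device admin
        }
    }
}

// Register at runtime, e.g. from a long-running service:
// context.registerReceiver(RelockReceiver(), IntentFilter(Intent.ACTION_USER_PRESENT))
```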
Probably because they have been dead for forty or fifty years.
The best example still living might be Robert Aumann, though his field (economics) is less central a science than that of anyone on your list. Find a well-known modern scientist who is doing impressive work and believes in God in any reasonably traditional sense! It’s not interesting to show a bunch of people who believed in God when >99% of the rest of their society did.
I’m talking about things at the level of selecting which concepts are necessary and useful to implement in a system, or higher. At its simplest, that’s recognizing that you have three types of things with arbitrary attributes attached and implementing a single underlying thing-with-arbitrary-attributes type instead of three special cases (a sketch follows below). You tend to get that kind of review from people with whom you share a project and a social relationship such that they can tell you what you’re doing wrong without offense.
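For example, a sketch of that refactor in Kotlin (all names are invented for illustration):

```kotlin
// One underlying thing-with-arbitrary-attributes type instead of
// three special-cased classes that each carry their own attribute logic.
data class Entity(
    val kind: String,                    // e.g. "user", "device", "room"
    val attributes: Map<String, String>  // arbitrary key-value attributes
)

val user = Entity("user", mapOf("name" to "Ada", "role" to "admin"))
val sensor = Entity("device", mapOf("model" to "DHT22", "indoor" to "true"))
```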
I think the “learn to program by programming” adage came from a lack of places teaching the stuff that makes people good programmers. I’ve never worked with someone who has gone through one of the new programming schools, but I don’t think they purport to turn out senior-level programmers, much less 99th-percentile programmers. As far as I can tell, folks learn everything beyond the mechanics and algorithms of programming from their seniors in the workplace, or discover it for themselves.
So I’d say there are nodes on the graph that I don’t have labels for, and that are not taught formally as far as I know. The best way to learn them is to read lots of big, well-written codebases and try to figure out why everything was done one way and not another. Second best, maybe, is to write a few huge codebases and figure out why things keep falling apart?
Ok, then, humble from the OED: “Having a low estimate of one’s importance, worthiness, or merits; marked by the absence of self-assertion or self-exaltation; lowly: the opposite of proud.”
Clicking out.
I think you understand the concept that I was trying to convey, and are trying to say that ‘humble’ and ‘humility’ are the wrong labels for that concept. Right? I basically agree with the OED’s definition of humility: “The quality of being humble or having a lowly opinion of oneself; meekness, lowliness, humbleness: the opposite of pride or haughtiness.” Note the use of the word opposite, not absence.
Besides, shouldn’t a person who believes himself unworthy tend to accept ideas that contradict his own original beliefs more easily? E.g., “Oh, Dr. Kopernikus claims that the earth ISN’T flat? Well, who am I to come and believe otherwise?”
That’s exactly the problem, at best one ends up following whoever is loudest, at worst one ends up saying “everybody is right” and “but we can’t really know” and not even pretending to try to figure out the truth.
I was speaking more to how someone acts inside than how someone presents themselves. If they believe themselves unworthy or unimportant or without merit, they tend not to reject ideas very well and do a lot of equivocating. (Though, I think, all my evidence for that is anecdotal.)
You might say that they are both traps, at least from a truth seeker’s perspective. The arrogant will not question their belief sufficiently; the humble will not sufficiently believe.
There are other calculations to consider too (edit: and they almost certainly outweigh the torture possibilities)! For instance:
Suppose that you can give one year of life by giving $25 to AMF (GiveWell says $3340 to save a child’s life, not counting the other benefits).
If all MIRI does is delay the development of any type of Unfriendly AI, your $25 would need to let MIRI delay that by, ah, 4.3 milliseconds (139 picoyears). With 10%-a-year exponential future discounting, 100 years before you expect Unfriendly AI to be created if you don’t help MIRI, and no population growth, that $25 now needs to give them enough resources to delay UFAI by about 31 seconds.
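Roughly where the first figure comes from, assuming a world population of about 7.2 billion (one life-year spread across everyone alive):

$$ t = \frac{1\ \text{life-year}}{7.2 \times 10^{9}\ \text{lives}} \approx 1.39 \times 10^{-10}\ \text{years} \approx 139\ \text{picoyears} \approx 4.4\ \text{ms} $$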
This is true for any project that reduces humanity’s existential risk. AI is just the saddest if it goes wrong, because then it goes wrong for everything in (slightly less than) our light cone.
It started happening well before the story was complete...
But what does one maximize?
We cannot maximize more than one thing (except in trivial cases). It’s not too hard to call the thing that we want to maximize our utility, and the balance of priorities and desires our utility function. I imagine that most of the components of that function are subject to diminishing returns, and such components I would satisfice (a sketch follows the list below). So I understand this whole thing as saying that these things have the potential for unbounded linear or superlinear utility?
epistemic rationality
ethics
social interaction
existence
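A rough formalization of what I mean (the notation is mine, just a sketch):

$$ U(x) = \sum_i w_i\, u_i(x_i) $$

Components with diminishing returns ($u_i'' < 0$, bounded above) are the ones I would satisfice; the claim would then be that each of the four areas above has a component whose $u_i$ grows linearly or faster without bound.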
I’m not sure if I’m confused.
My thoughts are that you probably haven’t read Malcolm’s post on communication cultures, or you disagree.
Roughly, different styles of communication cultures (guess, ask, tell) are supported by mutual assumptions of trust in different things (and produce hurt and confusion in the absence of that trust). Telling someone you would enjoy a hug is likely to harm a relationship where the other person’s assumptions are aligned with ask or guess, even if you don’t expect the other person to automatically hug you!
You need to coordinate with people on what type of and which particular culture to use (and that coordination usually happens through inference and experimentation). I certainly expect people who happen to coordinate on a Tell Culture to do better, but I doubt that it works as an intervention, unless they make the underlying adjustments in trust.