Also, there are ways of using uranium-238
BenAlbahari
Sigh.
A 5-second method (that I employ to varying levels of success) is whenever I feel the frustration of a failed interaction, I question how it might have been made more successful by me, regardless of whose “fault” it was. Your “sigh” reaction comes across as expressing the sentiment “It’s your fault for not getting me. Didn’t you read what I wrote? It’s so obvious”. But could you have expressed your ideas almost as easily without generating confusion in the first place? If so, maybe your reaction would be instead along the lines of “Oh that’s interesting. I thought it was obvious but I guess I can see how that might have generated confusion. Perhaps I could...”.
FWIW I actually really like the central idea in this post, and arguably too many of the comments have been side-tracked by digressions on moralizing. However, my hunch is that you probably could have easily gotten the message across AND avoided this confusion. My own specific suggestion here is that stipulative definitions are semantic booby traps, so if possible avoid them. Why introduce a stipulative definition for “moralize” when a less loaded phrase like “suspended judgement” could work? My head hurts reading these comments trying to figure out how each person is using the term “moralize” and I now have to think twice when reading the term on LW, including even your old posts. This is an unnecessary cognitive burden. In any case, my final note here would be to consider that you’d be lucky if your target audience for your upcoming book(s) was anywhere near as sharp as wedrifid. So if he’s confused, that’s a valuable signal.
people who identify as rationalists they seem to moralize slightly less than average
Really? The LW website attracts Asperger’s types, and apparently morality is stuff Asperger’s people like.
Good to see you’ve morally condoned the 5 second method.
rationalists don’t moralize.
don’t go into the evolutionary psychology of politics or the game theory of punishing non-punishers
OK, so you’re saying that to change someone’s mind, identify mental behaviors that are “world view building blocks”, and then to instill these behaviors in others:
...come up with exercises which, if people go through them, causes them to experience the 5-second events
Such as:
...to feel the temptation to moralize, and to make the choice not to moralize, and to associate alternative procedural patterns such as pausing, reflecting...
Or:
...to feel the temptation to doubt, and to make the choice not to doubt, and to associate alternative procedural patterns such as pausing, prayer...
The 5-second method is sufficiently general to coax someone into believing any world view, not just a rationalist one.
I have an image of Eliezer queued up in a coffee shop, guiltily eyeing up the assortment of immodestly priced sugary treats. The reptilian parts of his brain have commandeered the more recently evolved parts of his brain into fervently computing the hedonic calculus of an action that other, more foolish types, might misclassify as a sordid instance of discretionary spending. Caught staring into the glaze of a particularly sinful muffin, he now faces a crucial choice. A cognitive bias, thought to have been eradicated from his brain before the SIAI was founded, seizes its moment. “I’ll take the triple chocolate muffin thank you” Eliezer blurts out. “Are you sure?” the barista asks. “Well I can’t be 100% sure. But the future of intergalactic civilizations may very well depend on it!”
While I’m inclined to agree with the conclusion, this post is perhaps a little guilty of generalizing from one example—the paragraphs building up the case for the conclusion are all “I...” yet when we get to the conclusion it’s suddenly “We humans...”. Maybe some people can’t handle the truth. Or maybe we can handle the truth under certain conditions that so far have applied to you.
P.S. I compiled a bunch of quotes from experts/influential people for the questions “Can we handle the truth?” and “Is self-deception a fault?”.
The chief role of metaethics is to provide far-mode superstimulus for those inclined to rationalize social signals literally.
Ethics and aesthetics have strong parallels here. Consider this quote from Oscar Wilde:
For we who are working in art cannot accept any theory of beauty in exchange for beauty itself, and, so far from desiring to isolate it in a formula appealing to the intellect, we, on the contrary, seek to materialise it in a form that gives joy to the soul through the senses. We want to create it, not to define it. The definition should follow the work: the work should not adapt itself to the definition.
Whereby any theory of art...
merely serves as after-the-fact justification of the sentiments that were already there.
I’ve gone through massive reversals in my metaethics twice now, and guess what? At no time did I spontaneously acquire the urge to rape people. At no time did I stop caring about the impoverished. At no time did I want to steal from the elderly. At no time did people stop having reasons to praise or condemn certain desires and actions of mine, and at no time did I stop having reasons to praise or condemn the desires and actions of others.
Metaethics: what’s it good for...
I believe the primary form of entertainment for the last million years has had plenty of color.
I don’t think social influence alone is a good explanation for the delusion in the video. Or more precisely, I don’t think the delusion in the video can be explained as just a riff on the Asch conformity experiment.
I’m merely less skeptical that the woman in the video is a stooge after hearing what Nancy had to say. But yes, the anchoring techniques he uses in the video might be nothing but deliberate misdirection.
Interesting. This makes me less skeptical of Derren Brown’s color illusion video (summary: a celebrity mentalist uses NLP techniques to convince a woman yellow is red, red is black etc.).
Perhaps the post could be improved if it laid out the types of errors our intuitions can make (e.g. memory errors, language errors, etc.). Each type of error could then be analyzed in terms of how seriously it impacts prevalent theories of cognition (or common assumptions in mainstream philosophy). As it stands, the post seems like a rather random (though interesting!) sampling of cognitive errors that serve to support the somewhat unremarkable conclusion that yes, our seemingly infallible intuitions have glitches.
I dunno Nancy. I mean you start off innocently clicking on a link to a math blog. Next minute you’re following these hyperlinks and soon you find yourself getting sucked into a quantum healing website. I’m still trying to get a refund on these crystals I ended up buying. Let’s face it. These seemingly harmless websites with unrigorous intellectual standards are really gateway drugs to hard-core irrationality. So I have a new feature request: every time someone clicks on an external link from Less Wrong, a piece of Javascript pops up with the message: “You are very probably about to enter an irrational area of the internet. Are you sure you want to continue?” If you have fewer than 100000 karma points, clicking yes simply redirects you to the sequences.
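To take the joke fully seriously for a moment, the feature could be sketched in a few lines of Javascript. The karma threshold and the sequences redirect are lifted straight from the joke; the decision logic is factored out so the wiring to the page stays trivial:

```javascript
// Hypothetical sketch of the external-link warning. The karma threshold
// and redirect target come from the joke above, not from any real feature.
const KARMA_THRESHOLD = 100000;

// Pure decision logic: given the link's host, the site's host, the user's
// karma, and whether they clicked "yes" on the warning, decide the outcome.
function externalLinkAction(linkHost, siteHost, karma, confirmed) {
  if (linkHost === siteHost) return 'allow';      // internal link: no warning
  if (!confirmed) return 'block';                 // user declined the warning
  return karma >= KARMA_THRESHOLD ? 'allow' : 'redirect-to-sequences';
}

// Browser wiring (skipped outside a browser environment).
if (typeof document !== 'undefined') {
  document.addEventListener('click', (event) => {
    const link = event.target.closest('a');
    if (!link) return;
    const confirmed = link.hostname === window.location.hostname ||
      window.confirm('You are very probably about to enter an irrational ' +
                     'area of the internet. Are you sure you want to continue?');
    const action = externalLinkAction(
      link.hostname, window.location.hostname, /* karma: */ 0, confirmed);
    if (action !== 'allow') event.preventDefault();
    if (action === 'redirect-to-sequences') window.location.href = '/sequences';
  });
}
```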
Speaking of the yellow banana, people do a lot of filling in with color.
One of Dennett’s points is that the notion of our mind “filling in” is misleading. In the case of vision, our brain doesn’t “paint in” missing visual data, such as the area in our field of vision not captured by our fovea. Our brains simply lack epistemic hunger for such information; they don’t need it to perform their tasks.
I’ve noticed that this account potentially explains how color works in my dreams. My dreams aren’t even black and white—the visual aspects are mostly just forms. However, if the color has meaning or emotion, it’s there. I recently had a dream where I looked up at the sky, and the moon was huge and black, moving in a rapid arc across the sky then suddenly diving into the Earth causing an apocalyptic wave of dirt to head towards me. The vivid blackness was present, because it meant something to me emotionally. The houses, in comparison, merely had form, but no color. In any case, it seems that the question “Do we dream in color?” can’t be answered adequately if using a “filling in” model of the mind.
Daniel Dennett’s Consciousness Explained is a very relevant piece of work here. Early in his book, he seeks to establish that our intuitions about our own perceptions are faulty, and provides several scientific examples to build his case. The Wikipedia entry on his multiple drafts theory gives a reasonable summary.
You’ve articulated some of the problems of a blogroll well. Perhaps the blogroll idea could be evolved into a concept that better fits the needs of this community, while retaining its core value and simplicity:
1) Alongside a link could be its controversy level, based on the votes for and against the link. By making the controversy explicit, the link can no longer be seen as a straight-up endorsement.
2) Alongside a link could be its ranking based on, say, only the top 50 users. This would let people explicitly see what the majority vs. the “elite rationalists” thought—an interesting barometer of community rationality.
3) Split the “blogroll” in two—all-time most votes vs. most votes in the last week/month. This would alleviate the problem of staleness that Nancy pointed out. This is also nice because the links could be for not just websites, but any interesting new article.
4) Allow discussion of any link. Comments could warn users of applause lights etc. This is perhaps why the current voting system works well for choosing top posts, despite the problems you point out with majority opinion. A poor post/link can never get past the gauntlet of critical comments.
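Points 1 and 2 could be sketched concretely. The controversy formula below (total votes raised to the power of the up/down balance, similar in spirit to Reddit’s) is my own assumption; the proposal above doesn’t prescribe one:

```javascript
// Point 1: a vote-based controversy score. A link with many votes split
// evenly scores high; a one-sided or little-voted link scores low.
// The exact formula is an assumption, not part of the original proposal.
function controversy(upvotes, downvotes) {
  if (upvotes <= 0 || downvotes <= 0) return 0; // one-sided isn't controversial
  const magnitude = upvotes + downvotes;
  const balance = upvotes > downvotes ? downvotes / upvotes
                                      : upvotes / downvotes;
  return Math.pow(magnitude, balance);
}

// Point 2: rank links using only votes cast by, say, the top 50 users.
// `links` is a hypothetical shape: { url, votes: [{ userId, value }] }.
function eliteRanking(links, topUserIds) {
  const elite = new Set(topUserIds);
  return links
    .map((link) => ({
      url: link.url,
      score: link.votes
        .filter((v) => elite.has(v.userId))
        .reduce((sum, v) => sum + v.value, 0),
    }))
    .sort((a, b) => b.score - a.score);
}
```

Displaying both the majority score and the elite score side by side would give exactly the barometer of community rationality that point 2 describes.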
You could generalize this to the point that ordinary posts essentially become a special case of an “internal link”. Anyway, enough about a technical proposal—at this point I’m reluctant to push any harder on this. An impression I have of Less Wrong is that it’s somewhat of a walled garden (albeit a beautiful one!) and that such changes would open it up a little, while maintaining its integrity. The resistance people have seems to be rooted in this—a fear of in any way endorsing “inferior intellectual standards”. What we should instead be fearful of is not doing everything we can to raise the sanity waterline.
Hi Zvi,
A couple of months ago I wrote a covid-19 risk calculator that’s gotten some press, and has even been translated into Spanish. Here’s the link:
https://www.solenya.org/coronavirus
I’ve updated the calculations to leverage your table for age & preconditions, which were better than what I had. You can check the code for the calculator by clicking on the link near the top of the page. I’ve also put a link in that code to your article here.
Note that I’m trying to keep the interface ultra-simple. I get a stream of suggestions (e.g. can you add a separate slider for condition x) which, if all implemented, would have little effect on the overall outcome but would overcomplicate the interface and make the calculator lose its appeal.
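For readers curious about the shape of the calculation, here is an illustrative sketch of combining a base rate with age and precondition multipliers. Every number below is a placeholder of my own, not a figure from the actual calculator or from your table:

```javascript
// Illustrative sketch only: multiply a base fatality rate by risk
// multipliers for age band and precondition. All values are placeholders,
// NOT the actual figures used at solenya.org/coronavirus.
const BASE_IFR = 0.005; // hypothetical population-average fatality rate

const AGE_MULTIPLIER = { '0-49': 0.2, '50-69': 1.0, '70+': 5.0 };
const CONDITION_MULTIPLIER = { none: 1.0, diabetes: 2.0, heartDisease: 3.0 };

function riskOfDeath(ageBand, condition) {
  const risk = BASE_IFR *
    (AGE_MULTIPLIER[ageBand] ?? 1.0) *       // unknown inputs fall back
    (CONDITION_MULTIPLIER[condition] ?? 1.0); // to a neutral multiplier
  return Math.min(risk, 1.0); // a probability can't exceed 1
}
```

This structure also shows why extra sliders add little: a condition whose multiplier is close to 1 barely moves the final number, while still costing interface complexity.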
Thanks,
Ben
Press:
https://www.tomsguide.com/news/coronavirus-calculator
https://www.news18.com/news/tech/can-the-coronavirus-kill-you-this-website-attempts-to-give-you-the-good-or-bad-news-2539469.html
https://www.quo.es/salud/coronavirus/q2004116668/calculadora-probabilidad-morir-coronavirus/