LW 2.0 is a good example of trying to fix something that isn’t broken and ending up breaking it further.
Thanks for trying it out. Hermes is still a work in progress and one of our top priorities now is improving responsiveness.
Looking forward to helping you out!
I recently launched a new service called Hermes. It connects users with dating experts for live texting advice. It runs on a unique platform designed to greatly simplify sharing and discussing text conversations. Since modern dating is changing so rapidly, especially with the rise of online dating apps and a growing population of young people glued to their phones, helping people improve their texting can greatly improve their dating life. I’ve been a software developer and dating coach for over 10 years so this is sort of my passion project.
I’d be happy to get some trial users. General feedback is greatly appreciated too.
In the case of voting for Trump and writing the note in the Wailing Wall, I think there’s little to no risk of having it change your prior beliefs or weaken your self-deception defense mechanisms. They both require you to be dishonest about something that clashes with so many other strong beliefs that it’s highly unlikely to contaminate your belief system. The more dangerous lies are the ones that don’t clash as much with your other beliefs.
How’s that related?
Inter alia, yes. But the step from “rationality is supposed to reduce X” to “I will act as if X has been reduced to negligibility” is not a valid one.
Well, isn’t that a good technique to reduce X? Obviously not in all cases, but I think it’s a valid technique in the cases we’re talking about.
If you value your belief that there are no ghosts, then it’s irrational to be scared by ghosts.
Are you talking about “real” ghosts? You shouldn’t be afraid of real ghosts because they don’t exist, not because you value your belief that there are no ghosts. Why should beliefs have any value for you beyond their accuracy?
Funny you mention that anecdote because I actually wrote it http://lesswrong.com/lw/1l/the_mystery_of_the_haunted_rationalist/w9
Human brains aren’t very good at detaching themselves from their actions.
Isn’t that what rationality is supposed to reduce?
The government picks arbitrary ages for when an individual has the mental capacity to make certain decisions, like drinking alcohol or having sex. But not everyone mentally matures at the same rate. It’d be nice to have an institution that allows minors with good backgrounds and who pass certain intelligence/rationality tests to be exempt from these laws.
observe the features common to the intuitions in different domains, and abstract the common features out.
Have you explicitly factored these out? If so, what are some examples?
I agree
I think it’s because system 1 and system 2 update differently. System 1 often needs experiential evidence in order to update, while system 2 can update using logical deduction alone. Doing a bunch of research is effective in updating system 2, but less so system 1. I’d guess that if you continue being positive and don’t experience any downside to it, then eventually your system 1 will update.
I think interviewers rely more on their intuition to evaluate candidates for managerial positions. For purely engineering positions, a longer, more systematic evaluation is needed.
Yes, the way I wrote the scenario makes it seem like he deliberately got himself into an awkward situation for little benefit in return. And I see how this weakens the scenario as an illustration of the problem. So let me try improving the scenario:
Imagine he determined that refraining from disclosing the information to his mother was ethical. A week later, he finds himself in a similar situation. He wants to drink a couple of beers, but knows that by the time he finishes, he’ll need to drive his mother. This time he has no qualms about drinking, deciding the pleasure of the beers is worth the consequences.
He might then profitably spend those two hours examining the underlying problem: why he chose to have those beers.
Why would this be a problem?
BTW, his mother already knows he’s been drinking.
I didn’t make it clear, but in the scenario she doesn’t know.
So the question is, when your goals conflict with another’s, when is it right to use force or subterfuge to get your way?
In the scenarios with the 5-year-old and the mother, the protagonist’s goal conflicts with what he deems to be an irrational goal. From his perspective, if they were more rational, their goals wouldn’t be conflicting in the first place. So two questions arise: 1) can he make that judgment call about their rationality, and 2) can he remove their ability to act as agents based on his assessment?
you’re treating them as an agent, but an adversarial one.
But if you thought of them as having agency, you’d want to respect their desires and therefore disclose the information, possibly hoping you’d come to some sort of compromise.
Ethicality of Denying Agency
Reliable/predictable isn’t high status.
Why is frame control central to this post? While it explains frame control well, the focus seems to be on people consciously or unconsciously manipulating one another in harmful ways. How to avoid being manipulated, gaslighted, deceived, etc. is an important topic to discuss and a valuable skill to have. And this post offers good advice on it (whether or not it intended to). But it could’ve done so without bringing up the concept of frame control.