I may be reading between the lines too much, but I get the sense that you haven’t been diagnosed by a psychiatrist and aren’t undergoing treatment. If that’s the case, this might not be the best area in which to try to outdo the professionals.
Thank you. They’re still relevant for the topics they cover… good background to see how much of the site is covered by sequences.
Is there a generic form of that for any nth derivative?
Well, that they are the family of solutions, allowing for various transformations.
*-Disclaimer, I haven’t looked at a differential equation in 6 years.
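As for the generic nth-derivative form: if I’m remembering it right (same disclaimer applies), the solutions of f^(n) = f are spanned by exponentials at the nth roots of unity. A sketch of the standard result:

```latex
% Try f(x) = e^{\omega x}: then f^{(n)}(x) = \omega^n e^{\omega x}, so f^{(n)} = f iff \omega^n = 1.
% The general solution is the n-dimensional family built on the nth roots of unity:
f(x) = \sum_{k=0}^{n-1} c_k \, e^{\omega_k x}, \qquad \omega_k = e^{2\pi i k / n}
% n = 1 gives C e^x; n = 2 gives c_0 e^x + c_1 e^{-x};
% n = 4 recovers sin x and cos x as combinations of e^{\pm i x}.
```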
Hello, My name is Dave Coleman. I was raised Atheist Jewish, and have identified as a rationalist my whole life. Browsing through the sequences, I realized I had failed to recognize some deeply ingrained biases.
I value making myself and others happy. Which others, and how happy, is something I’ve always struggled with. I used to have a framework grounded in Jewish ethics, but I’m realizing that those ethics are only clear in comparison to Christian ethics. Much of what I learned and considered was about how to make the Torah and Talmud relevant to modern, atheistic life.
I’m realizing the strong bias we had against saying “maybe it’s not relevant, since it was written by immature goatherds 3,500 years ago who had no knowledge of science or empathy for those outside their tribe.” Admitting that wouldn’t sound wise, so we twist and turn with answers, cluttering what could be a solid system of ethics.
For a while I’ve considered myself a reconstructionist Jew, with the underlying ethos of “do all Jewish traditions by default, but don’t do anything that has a good reason not to be done.” I’ve realized that not polluting my mind with incorrect and biased thought patterns is a good reason to avoid many things.
Another recent change has been an understanding of Judaism in terms of evolutionary fallacies. There is a strong sense in Judaism of being a Chosen People, and of a universal intention that Jews survive as Jews. Assimilation may be the biggest struggle for Jews, bigger even than persecution.
I realized that this is the same fallacy that sees intent in a species’ characteristics. I had been labeling aspects of Judaism that lead to survival as being virtuous in themselves—all of the dietary rituals to keep separate from goyim, the fear and guilt of assimilation. Even the love of learning and the drive to succeed have undertones of “thrive, for that is how you will survive the next pogrom.” Preservation of the culture is virtuous, therefore anything that keeps the culture alive is virtuous.
I remember my first Differential Equations class, when we learned that the function that is its own derivative is f(x)=e^x, and the function that is the negative of its own second derivative is f(x)=sin(x). There was this eerie confusion as I first thought that those functions were just one possible solution, and then realized that, up to scaling and shifting, they described the only solutions. I found it very disturbing that I couldn’t say whether the sine looks as it does by virtue of satisfying that equation, or satisfies that equation by virtue of looking as it does. I still feel slightly uneasy that I can’t assign a causal relationship in one direction or the other.
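A one-line sketch of why nothing else satisfies f' = f, in case the uniqueness seems as eerie to anyone else:

```latex
% If f' = f, multiply by e^{-x} and differentiate:
\frac{d}{dx}\left( f(x)\, e^{-x} \right) = \left( f'(x) - f(x) \right) e^{-x} = 0
\implies f(x)\, e^{-x} = C \implies f(x) = C e^x
% The same kind of argument pins down A sin x + B cos x as the solutions of f'' = -f.
```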
That’s how I view Judaism now. The characteristics of all species and memes are a solution to the equation of survival. There is no intent or deeper meaning than that, and I think I’ve finally let that go.
Oh, and I got here from Reddit, where someone posted a link to the Paperclip Maximizer.
You can import/export from a bookmark file. I’m not sure whether that’s less tedious.
TrailMemes for Sequences
Somewhat relevant is the Gervais Principle. This Principle is based on the idea that a corporate pyramid is topped by “sociopaths,” has “losers” as a foundation, and a culture of ladder-climbing “clueless” between the two:
Sociopaths, in their own best interests, knowingly promote over-performing losers into middle-management, groom under-performing losers into sociopaths, and leave the average bare-minimum-effort losers to fend for themselves.
It’s not a very rigorously investigated principle, though it matches well with my professional experience.
It’s not clear to me how you’re mapping this problem to the trolley problem.
To me the Trolley problem is largely about how much you’re willing to look only at end-states. In the trolley problem you have two scenarios, each with two options, and the corresponding options leave you with identical end states. The same goes for the House Elf problem, assuming that it is in the wizard’s power to create more human-like desires.
The main difference that I see between the Trolley problem cases is “to what extent is the person you’re killing already in danger?” Being already on a track is pretty inherently dangerous. Being on a bridge in a mine isn’t as dangerous. Wandering into a hospital with healthy organs isn’t inherently dangerous at all.
Suppose the house elves were created just wanting to do chores. Would it be moral to leave them like that if you could make them more human? What if they had once been more human and you were now “reverting” them?
My lower brain agrees with you. My upper brain asks if this is just a trolley problem that puts a high moral value on non-intervention.
Scenario A: Option 1: Create house elves out of nothingness, wire them to enjoy doing chores. Option 2: Create house elves out of nothingness, wire them to enjoy human desires.
Scenario B: Option 1: Take existing house elves with human desires, wire them to enjoy doing chores. Option 2: Leave existing house elves with human desires alone.
Is there a non-trolley explanation for why it is immoral to rewire a normal elf, but not immoral to create a new race that is hard-wired for chores? On the trolley questions I was fine with even pushing a supervisor on the tracks, but I couldn’t agree with harvesting a healthy victim for multiple organs.
Instead of creating them from scratch, would it be immoral to take a species that hated chores and wirehead them to enjoy chores?
The house elves seem to be a bit of a shout-out to the Ameglian Major Cow. In that case a mind was wireheaded to enjoy something that was pretty clearly bad for it. Arthur had a problem with this, but they argued that if you were going to eat a Cow, it was more moral to wire it to enjoy being eaten.
If you accept that doing chores is just on a continuum with being tortured or eaten, which EY might, then the question is the same as whether it’s Evil to wirehead someone into enjoying being tortured or eaten.
Edit: For clarity, I don’t think I agree with the claim that creating them is “Evil,” but I think I understand why EY would make a character who makes statements like that.
Morality is in some ways a harder problem than Friendly AI. On the plus side, humans who don’t control nuclear weapons aren’t that powerful. On the minus side, morality has to run locally in 7 billion separate instances, each a person who may have bad information.
So it needs heuristics that are robust against incomplete information. There’s definitely an evolutionary just-so story about the penalty of publicly committing to a risky action. But even without the evolutionary social risk, there is a moral risk to permitting an interventionist murder when you aren’t all-knowing.
This looks just like the Bayesian 101 example of a medical test that is 99% accurate for a disease with a 1% occurrence rate. If you say that I’m in a very rare situation that requires me to commit murder, I have to assume that there are going to be many more situations that could be mistaken for this one. The “least convenient universe” story is tantalizing, but I think it leads us astray here.
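Concretely, assuming the test’s false-positive rate matches its 1% error rate, a positive result only gets you to even odds:

```latex
% Prior: P(D) = 0.01. Test: P(+ \mid D) = 0.99, and assume P(+ \mid \neg D) = 0.01.
P(D \mid +) = \frac{P(+ \mid D)\, P(D)}{P(+ \mid D)\, P(D) + P(+ \mid \neg D)\, P(\neg D)}
            = \frac{0.99 \times 0.01}{0.99 \times 0.01 + 0.01 \times 0.99} = 0.5
```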
Thank you! That information is very helpful.
This seems like a good audience to solve a tip-of-my-brain problem. I read something in the last year about subconscious mirroring of gestures during conversation. The discussion was about a researcher filming a family (mother, father, child) having a conversation, and analyzing a 3-second clip in slow motion for several months. The researcher noted an almost instantaneous mirroring of the speaker’s micro-gestures in the listeners.
I think that I’ve tracked the original researcher down to Jay Haley, though unfortunately the articles are behind a paywall: http://onlinelibrary.wiley.com/doi/10.1111/j.1545-5300.1964.00041.x/abstract
What I can’t remember is who I was reading that referenced it. It was likely to be someone like Malcolm Gladwell or Jared Diamond. Does this strike a chord with anyone?
[For context, I was interested in understanding repeatable thought patterns that span two or more people. I’ve noticed that I have repeated sequences of thoughts, emotions, and states of mind, each reliably triggering the next. I’ve considered my identity at any point to be approximately the set of those repeated patterns. I think that when I’m in a relationship, I develop new sequences of thought/emotion that span my partner’s mind and my own—each state may be dependent on a preceding state in its own or the other mind. I want to understand the modalities by which a state in one mind could consistently trigger a state in the other mind, how that ties in to those twins with conjoined brains, and whether that implies a meaningful overlap in consciousness between myself and my wife.]
I’m around.
I don’t think that the sunk cost consideration is a fallacy in this case.
As far as life can be said to “begin” anywhen, it begins at conception.
You think women’s rights trump kids’ rights or the other way round, okay.
http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/ http://wiki.lesswrong.com/wiki/Politics_is_the_Mind-Killer
You’re arguing definitions, claiming that your definition of “life” is universal, and using an ambiguous definition of “kid” to pull emotional strings. I think we all agree on the anticipated outcomes of a pregnancy. Given how emotional “life” and “kid” are, taboo them.
Can we agree that morality is a set of rules that maximizes global “fun” when executed locally by each person? That’s what it is to me. I don’t think it’s obvious that the moral definition of “life” is constant, or that we should therefore expect a constant mapping to a biological definition. If you had a morality where ending a life unnaturally always carried the same cost, that would be decisive, but I can’t think of a society that values all unnatural terminations of life equally.
Is it moral to assign more value to one life than to another? Is the life of the head of a household with 7 dependents worth more than the life of a hermit, since his family will take a big “fun” hit?
Do we actually mean ending remaining life, seeing as all lives will end at some point, and you can’t take away the years that have already happened? Are some years more fun than others? Are some years in fact negative fun? Do individuals get to decide what is fun for them? Is there some point before which individuals aren’t responsible enough to know what will give them the most fun?
Is it less moral to kill someone when they are awake and die in terror than to kill them in their sleep so that their life simply ends and they don’t experience any more fun?
Answer a bunch of questions like this and I can determine how immoral it is to terminate a life at fertilization (genetic code is unique, except for your identical siblings), gastrulation (no more twins can form), various levels of brain activity (beginnings of a mind), birth (eats, breathes, poops, and communicates), infancy, or cancer-ridden old age. Arguing over whether something is a “life” or not, with no moral context, is about as useful as arguing over what a “sound” is.
Someone linked to the paperclip maximizer wiki page in a post on reddit.
Believe me, I know that high intelligence can skew a professional’s diagnosis. But the underlying disorder is still the same and still treatable with essentially the same methods. You have to shop around a bit anyway to find someone you can work with, and even more so if you are high functioning and cope well.
There’s no reason you can’t do things traditionally as a baseline, and then decide how to proceed; mania is a terrible place to make a decision from.