Yes, unfortunately people do this.
Have you considered that some of us might have utility functions that do have terms for socially distant people? Thus charity can give direct utility to us, which the analysis seems to ignore.
Second, end points are rarely optimal. E.g. eating only tuna and nothing else could be unhealthy and weird, but that does not imply that eating some tuna is unhealthy or weird. Thus your analysis seems to miss the obvious answer.
The Sex at Dawn story is nice but the whole debate seems backwards.
Everyone picks their favorite modern social model and then molds citations and stories to support the idea that it must be natural and that even the ancient hunter-gatherers lived that way...
Popularized evo-psych seems to amount to arguing that a certain way of life is “natural” and therefore “good”.
By the way, is there a name for the “natural → good” bias/fallacy?
I think the issue is whether to use “relative status” or “absolute status”.
Take the karma example: it is not very important what the karma numbers are in absolute terms, but what their relative values are. A couple of friends voting each other up raise the average (and the mode, and whatever statistical marker one prefers). Thus, while their absolute status rises, the relative status of everyone else sinks.
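To make the karma point concrete, here is a minimal Python sketch with made-up scores and hypothetical usernames: two friends trading upvotes raise the group average, while a bystander's relative standing sinks even though her own karma is untouched.

```python
# Toy illustration (made-up karma numbers, hypothetical names).
karma = {"alice": 10, "bob": 10, "carol": 10, "dave": 10, "eve": 10}

def fraction_above(scores, person):
    """Share of the group with strictly higher karma than `person`."""
    above = sum(1 for s in scores.values() if s > scores[person])
    return above / (len(scores) - 1)

print(sum(karma.values()) / len(karma))   # average = 10.0
print(fraction_above(karma, "carol"))     # 0.0 -- nobody outranks carol

# Alice and Bob trade a pile of upvotes.
karma["alice"] += 20
karma["bob"] += 20

print(sum(karma.values()) / len(karma))   # average = 18.0 (absolute numbers rose)
print(fraction_above(karma, "carol"))     # 0.5 -- carol's karma is unchanged,
                                          # yet half the group now outranks her
```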
I think we may have different notions of status, with me thinking of “relative within a given group”.
Could you elaborate or point to a link about status being positive sum?
I don’t really care about the genders of partners, so any gender mix, really. Female + bisexual with mostly female partners at the moment.
Actually the logistics is not so clear-cut.
Let’s say Sarah has two partners, Tom and Maria. Now Sarah has the Wednesday afternoon free. The probability that at least one of her partners has free time is higher than it would be in a monogamous arrangement.
The time needed is not necessarily “everyone at once” but “some suitable combination of people”.
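A back-of-the-envelope sketch of the scheduling point, assuming (purely for illustration) that each partner is independently free on a given afternoon with probability p:

```python
# If each partner is free with (assumed, independent) probability p, the
# chance that at least one of Sarah's partners can join her is higher with
# two partners than with one.

def p_someone_free(p, n_partners):
    """Probability that at least one of n partners is free."""
    return 1 - (1 - p) ** n_partners

p = 0.4  # made-up probability that a given partner is free
print(p_someone_free(p, 1))  # 0.4  -- monogamous case: only Tom
print(p_someone_free(p, 2))  # 0.64 -- Tom or Maria: the free afternoon is
                             #         less likely to be wasted
```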
In my experience of polyamorous relationships, 5:3 would be far more enjoyable.
I am not keen on long-term cryonics for the following reasons: 1) if an unmodified Violet were revived, she would not be happy in the far future; 2) if a sufficiently modified Violet were revived, she would not be me; 3) I don’t place a large value on there being a “Violet” in the far future; 4) there is a risk that my values and the values of whoever wakes Violet up would be incompatible, and avoiding possible “fixing” of my brain is a very high priority; 5) thus I don’t want to be revived in the far future, and death without cryonics seems a safe way to ensure that.
The whole affair smells quite a lot like harassment, and like someone who was not content to stop when asked.
Of course, if this type of preprenup became common, it would create a market for the opposite preprenup: “I will not agree to a prenup, or I will pay max(my_net_worth, partners_net_worth)/2”.
Actually, it would make sense for the same company to market both of them. They could even pay young people something to agree to these contracts, financed by the conflicts the preprenups would create later on.
There are rules for the game that are perceived as fair.
If one participant starts changing the rules in the middle of the game, this 1) makes rule changing acceptable within the game, and 2) forces the other players to analyze the current rules (and any future changes) to ensure the game stays fair.
Cutting the deck probably doesn’t affect the probability distribution (unless you shuffled the deck in a “funny” way). But allowing it sets a precedent for allowing the next rule changes too. Thus you can end up analyzing a new game rather than having fun playing poker.
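A quick simulation sketch of the parenthetical claim, under the assumption that the deck was shuffled uniformly to begin with:

```python
# Sanity check: if the deck was shuffled uniformly, cutting it does not
# change the distribution of what gets dealt. (With a "funny" shuffle this
# would no longer hold.)
import random
from collections import Counter

def top_card_after_cut(cut_at=17, trials=100_000):
    counts = Counter()
    for _ in range(trials):
        deck = list(range(52))
        random.shuffle(deck)                  # uniform shuffle
        deck = deck[cut_at:] + deck[:cut_at]  # cut the deck
        counts[deck[0]] += 1
    return counts

counts = top_card_after_cut()
# Each of the 52 cards ends up on top roughly trials/52 ≈ 1923 times,
# i.e. the top-card distribution stays uniform despite the cut.
print(min(counts.values()), max(counts.values()))
```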
This depends on the situation.
With a rare, already-diagnosed condition it is fairly easy for the patient to have more knowledge than a typical doctor. The doctor heard 15 minutes about it 20 years ago in med school, while the patient has gone through all the recent research.
Self-diagnosing is typically problematic. Self-managing chronic conditions is often quite rational.
So let’s say I’m confronted with this scenario, and I see $1M in the large box.
So let’s get the facts:
1) There is $1M in the large box, and thus (D xor E) = true.
2) I know that I am a one-boxing agent.
3) Thus D = “one boxing”.
4) Thus I know D ≠ E, since the xor is true.
5) I one-box and live happily with my $1,000,000.

When Omega simulates me with the same scenario and without lying, there is no problem.
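For what it’s worth, here is a minimal sketch of steps 1–4, treating D and E as booleans and assuming D simply means “I one-box”; the only point is the logical step from (D xor E) and D to D ≠ E:

```python
# Minimal sketch of the inference above. D and E are treated as booleans,
# with D assumed to mean "I one-box" and E the other term of the xor.

def infer_E(money_in_big_box: bool, D: bool) -> bool:
    # Fact 1: money in the big box means (D xor E) is true.
    assert money_in_big_box, "the scenario stipulates the $1M is visible"
    D_xor_E = True
    # Facts 2-3: I know I am a one-boxing agent, so D is true.
    assert D
    # Fact 4: from (D xor E) = true and D = true, it follows that E = false,
    # i.e. D != E.
    E = D_xor_E != D   # xor: E = (D xor E) xor D
    return E

print(infer_E(money_in_big_box=True, D=True))  # False, so D != E
```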
It seems like many of these mind games are thwarted simply by precommitting to choices.
For the red-and-green case, just toss a coin (or use whatever source of randomness you have).
It seems like precommitting to destroy the AI in such a situation is the best approach.
If one has already decided to destroy it if it makes threats, then: 1) the AI must be suicidal, or else it cannot really simulate you, and 2) it is not very Friendly in any case.
So when the AI simulates you and notices that you are very trigger-happy, it won’t start telling you tales about torturing your copies, provided it has any self-preservation instinct.
If you want non-PC approaches, there are two communities you could look into: salespeople and con artists. The second actually has most of the how-to-hack-people’s-minds material. If you want a kinder version, look for it under the title “social engineering”.