Fair enough
Would it be possible to make those clearer in the post?
I had thought, from the way you phrased it, that the assumption was that for any game, I would be equally likely to encounter a game with the choices and power levels of the original game reversed. This struck me as plausible, or at least a good point to start from.
What you in fact seem to need is that I am equally likely to encounter a game with the outcome under this scheme reversed, but the power levels kept the same. This continues to strike me as a very substantive and almost certainly false assertion about the games I am likely to face.
I therefore don’t see strong evidence that I should reject my informal proof at this point.
I think you and I have very different understandings of the word ‘proof’.
In the real world, agents’ marginals vary a lot, and the gains from trade are huge, so this isn’t likely to come up.
I doubt this claim, particularly the second part.
True, many interactions have gains from trade, but I suspect the weight of these interactions is overstated in most people’s minds by the fact that they are the sort of thing that springs to mind when you talk about making deals.
Probably the most common form of interaction I have with people is when we walk past each other in the street and neither of us hands the other the contents of their wallet. I admit I am using the word ‘interaction’ quite strangely here, but you have given no reason why this shouldn’t count as a game for the purposes of bargaining solutions (we certainly both stand to gain more than the default outcome if we could control the other). My reaction to all but a tiny portion of humanity is to not even think about them, and in a great many cases there is not much to be gained by thinking about them.
I suspect the same is true of marginal preferences: in games with small amounts at stake, preferences should be roughly linear, and where desirable objects are fungible, as they often are, they will be very similar across agents.
In the default, Alice gets nothing. If k is small, she’ll likely get a good chunk of the stuff. If k is large, that means that Bob can generate most of the value on his own: Alice isn’t contributing much at all, but will still get something if she really cares about it. I don’t see this as ultra-unfavourable to Alice!
If k is moderately large, e.g. 1.5 at least, then Alice will probably get less than half of the remaining treasure (i.e. treasure Bob couldn’t have acquired on his own) even by her own valuation. Of course there are individual differences, but it seems pretty clear to me that, compared to other bargaining solutions, this one is quite strongly biased towards the powerful.
This question isn’t precisely answerable without a good prior over games, and any such prior is essentially arbitrary, but I hope I have made it clear that it is at the very least not obvious that there is any degree of symmetry between the powerful and the weak. This renders the x+y > 2h ‘proof’ in your post bogus, as x and y are normalised differently, so adding them is meaningless.
You’re right, I made a false statement because I was in a rush. What I meant to say was that as long as Bob’s utility is linear, then whatever utility function Alice has, there is no way to get all the money.
Are you enforcing that choice? Because it’s not a natural one.
It simplifies the scenario, and suggests.
Linear utility is not the most obviously correct utility function: diminishing marginal returns, for instance.
Why are diminishing marginal returns any more obvious than accelerating marginal returns? The former happens to be the human attitude to the thing humans most commonly gamble with (money), but there is no reason to privilege it in general. If Alice and Bob have accelerating returns, then in general the money will always be given to Bob; if they have linear returns, it will always be given to Bob; if they have diminishing returns, it could go either way. This does not seem fair to me.
varying marginal valuations can push the solution in one direction or the other.
This is true, but the default is for them to go to the powerful player.
Look at a moderately more general example, the treasure splitting game. In this version, if Alice and Bob work together, they can get a large treasure haul, consisting of a variety of different desirable objects. We will suppose that if they work separately, Bob is capable of getting a much smaller haul for himself, while Alice can get nothing, making Bob more powerful.
In this game, Alice’s value for the whole treasure gets sent to 1, and Bob’s value for the whole treasure gets sent to a constant more than 1, call it k. For any given object in the treasure, we can work out what proportion of the total value each thinks it is; if Alice’s number is at least k times Bob’s, then she gets it, otherwise Bob does. This means that if their valuations are identical or even roughly similar, Bob gets everything. There are ways for Alice to get some of it if she values it more, but there are symmetric solutions that favour Bob just as much. The ‘central’ solution is vastly favourable to Bob.
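To make that rule concrete, here is a rough sketch of it in code (the item values and the k below are invented purely for illustration):

    # Sketch of the allocation rule described above: normalise so Alice's total
    # value is 1 and Bob's is k > 1, then give an item to Alice only if her
    # proportional valuation is at least k times Bob's.
    def allocate(alice_values, bob_values, k):
        alice_total = sum(alice_values.values())
        bob_total = sum(bob_values.values())
        allocation = {}
        for item in alice_values:
            alice_share = alice_values[item] / alice_total  # fraction of Alice's total
            bob_share = bob_values[item] / bob_total        # fraction of Bob's total
            allocation[item] = 'Alice' if alice_share >= k * bob_share else 'Bob'
        return allocation

    # With identical (or even roughly similar) valuations, Bob gets everything for any k > 1:
    values = {'gold': 5, 'gems': 3, 'map': 2}
    print(allocate(values, values, k=1.5))  # every item goes to 'Bob'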
It does not. See this post ( http://lesswrong.com/lw/i20/even_with_default_points_systems_remain/ ): any player can lie about their utility to force their preferred outcome to be chosen (as long as it’s admissible). The weaker player can thus lie to get the maximum possible out of the stronger player. This means that there are weaker players with utility functions that would naturally give them the maximum possible. We can’t assume either the weaker player or the stronger one will come out ahead in a trade, without knowing more.
Alice has $1000. Bob has $1100. The only choices available to them are to give some of their money to the other. With linear utility on both sides, the most obvious utility function, Alice gives all her money to Bob. There is no pair of utility functions under which Bob gives all his money to Alice.
If situation A is one where I am more powerful, then I will always face it at high-normalisation, and always face its complement at low normalisation. Since this system generally gives almost everything to the more powerful player, if I make the elementary error of adding the differently normalised utilities I will come up with an overly rosy view of my future prospects.
Your x+y > 2h proof is flawed, since my utility may be normalised differently in different scenarios, but this does not mean I will personally weight scenarios where it is normalised to a large number higher than those where it is normalised to a small number. I would give an example if I had more time.
I didn’t interpret the quote as implying that it would actually work, but rather as implying that (the author thinks) Hanson’s ‘people don’t actually care’ arguments are often quite superficial.
consider that “there are no transhumanly intelligent entities in our environment” would likely be a notion that usefully-modelable-as-malevolent transhumanly intelligent entities would promote
Why?
It seems like a mess of tautologies and thought experiments
My own view is that this is precisely correct and exactly why anthropics is interesting: we really should have a good, clear approach to it, and the fact that we don’t suggests there is still work to be done.
I don’t know if this is what the poster is thinking of, but one example that came up recently for me is the distinction between risk-aversion and uncertainty-aversion (these may not be the correct terms).
Risk aversion is what causes me to strongly not want to bet $1000 on a coin flip, even though its expectancy is zero. I would characterise risk-aversion as an arational preference rather than an irrational bias, primarily because it arises naturally from having a utility function that is non-linear in wealth ($100 is worth a lot if you’re begging on the streets, not so much if you’re a billionaire).
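To illustrate with invented numbers: any strictly concave utility of wealth (log utility with $10,000 of existing wealth, say) turns down the fair $1000 coin flip even though its expected dollar value is zero.

    from math import log

    # Illustration only: log utility of wealth, $10,000 starting wealth, fair $1000 bet.
    wealth, stake = 10_000, 1_000
    u = log  # any strictly concave u gives the same qualitative answer

    eu_decline = u(wealth)
    eu_accept = 0.5 * u(wealth - stake) + 0.5 * u(wealth + stake)
    print(eu_accept < eu_decline)  # True: declining the bet maximises expected utility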
However, something like the Allais paradox can be mathematically proven not to arise from any utility function, however non-linear, and therefore is not explainable by risk aversion. Uncertainty aversion is, roughly speaking, my name for whatever-it-is-that-causes-people-to-choose-irrationally-on-Allais. It seems to work by causing people to strongly prefer certain gains to high-probability gains, and much more weakly prefer high-probability gains to low-probability gains.
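To spell the proof out with the standard Allais numbers (which I haven’t reproduced above): 1A is $1M for certain; 1B is 89% $1M, 10% $5M, 1% nothing; 2A is 11% $1M, 89% nothing; 2B is 10% $5M, 90% nothing. Preferring 1A to 1B and 2B to 2A, as people commonly do, is inconsistent with every possible utility function u:

    1A over 1B:  u(1M) > 0.89 u(1M) + 0.10 u(5M) + 0.01 u(0),
                 i.e. 0.11 u(1M) > 0.10 u(5M) + 0.01 u(0)
    2B over 2A:  0.10 u(5M) + 0.90 u(0) > 0.11 u(1M) + 0.89 u(0),
                 i.e. 0.10 u(5M) + 0.01 u(0) > 0.11 u(1M)

The two conclusions contradict each other, so no assignment of utilities rationalises both choices.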
For the past few weeks I have been in an environment where casual betting for moderate sized amounts ($1-2 on the low end, $100 on the high end) is common, and disentangling risk-aversion from uncertainty aversion in my decision process has been a constant difficulty.
They aren’t isomorphic problems, however it is the case that CDT two-boxes and defects while TDT one boxes and co-operates (against some opponents).
But at some point your character is going to think about something for more than an instant (if they don’t, then I strongly contest that they are very intelligent). In the best case scenario, it will take you a very long time to write this story, but I think there’s some extent to which being more intelligent widens the range of thoughts you can ever think.
That’s clearly the first level meaning. He’s wondering whether there’s a second meaning, which is a subtle hint that he has already done exactly that, maybe hoping that Harry will pick up on it and not saying it directly in case Dumbledore or someone else is listening, maybe just a private joke.
I certainly do not define it the second way. Most people care about something other than their own happiness, and some people may care about their own happiness very little, not at all, or negatively. I really don’t see why a ‘happiness function’ would be even slightly interesting to decision theorists.
I think I’d want to define a utility function as “what an agent wants to maximise”, but I’m not entirely clear how to unpack the word ‘want’ in that sentence; I will admit I’m somewhat confused.
However, I’m not particularly concerned about my statements being tautological, they were meant to be, since they are arguing against statements that are tautologically false.
In that case, I would say their true utility function was “follow the deontological rules” or “avoid being smited by divine clippy”, and that maximising paperclips is an instrumental subgoal.
In many other cases, I would be happy to say that the person involved was simply not utilitarian, if their actions did not seem to maximise anything at all.
(1/36)(1+34p0) is bounded by 1/36; I think a classical statistician would be happy to say that the evidence has a p-value of 1/36 here. The same goes for any test where H_0 is a composite hypothesis: you just take the supremum.
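In symbols, the standard recipe for a composite null is

    p = sup over θ in H_0 of P_θ(T ≥ t_obs),

where T is the test statistic, t_obs its observed value, and larger values of T count as more extreme.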
A bigger problem with your argument is that it is a fully general counter-argument against frequentists ever concluding anything. All data has to be acquired before it can be analysed statistically, all methods of acquiring data have some probability of error (in the real world) and the probability of error is always ‘unknowable’, at least in the same sense that p0 is in your argument.
You might as well say that a classical statistician would not say the sun had exploded because he would be in a state of total Cartesian doubt about everything.
So, I wrote a similar program to Phil’s and got similar averages; here’s a sample of 5 taken while writing this comment:
8.2 6.9 7.7 8.0 7.1
These look pretty similar to the numbers he’s getting. Like Phil, I also get occasional results that deviate far from the mean, much more than you’d expect to happen with an approximately normally distributed variable.
I also wrote a program to test your hypothesis about the sequences being too long, running the same number of trials and seeing what the longest string of heads is. The results are:
19 22 18 25 23
Do these seem abnormal enough to explain the deviation, or is there a problem with your calculations?
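(For anyone who wants to reproduce the longest-run check, here is a rough sketch of what my second program does. The flip count below is illustrative, not the actual number of trials, so adjust it to match.)

    import random

    # Rough sketch of the longest-run check: flip a fair coin n times and
    # record the longest consecutive run of heads.
    def longest_heads_run(n_flips=10**6):  # illustrative flip count
        longest = current = 0
        for _ in range(n_flips):
            if random.random() < 0.5:  # heads
                current += 1
                longest = max(longest, current)
            else:
                current = 0
        return longest

    print([longest_heads_run() for _ in range(5)])
    # The longest run in n fair flips is typically around log2(n), i.e. roughly 20 for a million flips.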
Not quite always
http://www.boston.com/news/local/massachusetts/articles/2011/07/31/a_lottery_game_with_a_windfall_for_a_knowing_few/