I’d strongly agree with that, mainly because, while points with this property exist, they are not necessarily unique. The non-uniqueness is a big issue.
Diffractor
UDT1.01: Logical Inductors and Implicit Beliefs (5/10)
UDT1.01 Essential Miscellanea (4/10)
UDT1.01: Plannable and Unplanned Observations (3/10)
UDT1.01: Local Affineness and Influence Measures (2/10)
UDT1.01: The Story So Far (1/10)
Infrafunctions Proofs
Infrafunctions and Robust Optimization
It was a typo! And it has been fixed.
It is because beach/beach is the surplus-maximizing result. Any Pareto-optimal bargaining solution where money is involved will involve the surplus-maximizing result being played, and a side payment occurring.
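To make that concrete, here's a toy version with made-up payoff numbers (not necessarily the ones from the example under discussion):

```python
# Toy illustration (made-up payoffs): with transferable utility, any
# Pareto-optimal bargain plays the surplus-maximizing outcome, then
# redistributes with a side payment.
outcomes = {
    ("beach", "beach"): (5, 3),   # total surplus 8: the maximum
    ("beach", "movie"): (1, 1),
    ("movie", "beach"): (0, 2),
    ("movie", "movie"): (2, 4),   # total surplus 6
}
best = max(outcomes, key=lambda o: sum(outcomes[o]))
print(best, outcomes[best])       # ('beach', 'beach') (5, 3)
# A side payment t from player 1 to player 2 reaches any split (5 - t, 3 + t)
# of the surplus 8; e.g. t = 1.5 gives (3.5, 4.5), which strictly
# Pareto-dominates the (2, 4) from playing ('movie', 'movie').
```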
I have a reduction of this problem to a (hopefully) simpler problem. First up, establish the notation used.
$[n]$ refers to the set $\{1,2,\dots,n\}$. $n$ is the number of candidates. Use $\Delta$ as an abbreviation for the space $\Delta([n])$; it's the space of probability distributions over the candidates. View $\Delta$ as embedded in $\mathbb{R}^{n}$, and set the origin at the center of $\Delta$.
At this point, we can note that we can biject the following:

1: Functions of type $[n]\to[0,1]$ (the utility functions over candidates)

2: Affine functions of type $\Delta\to[0,1]$

3: Functions of the form $x\mapsto v\cdot x+c$, where $x\in\Delta$, and $v\in\mathbb{R}^{n}$ and $c\in\mathbb{R}$, and everything's suitably set so that these functions are bounded in $[0,1]$ over $\Delta$. (basically, we extend our affine function to the entire space $\mathbb{R}^{n}$ with the Hahn-Banach theorem, and use that every affine function can be written as a linear function plus a constant)

We can reexpress our distribution over utility functions as a distribution over these normal vectors $v$. Now, we can reexpress the conjecture as follows. Is it the case that there exists a $\Lambda\in\Delta\Delta$ (writing $\Delta\Delta$ for the space of lottery-lotteries, i.e. probability distributions over $\Delta$) s.t. for all $\Lambda'\in\Delta\Delta$, we have

$$\mathbb{E}_{v,\,x\sim\Lambda,\,y\sim\Lambda'}\left[\mathrm{sgn}(v\cdot x-v\cdot y)\right]\geq 0$$
Where $\mathrm{sgn}$ is the function that's −1 if the quantity is negative, 0 if 0, and 1 if the quantity is positive. To see the equivalence to the original formulation, we can rewrite things as

$$\mathbb{E}_{v,\,x\sim\Lambda,\,y\sim\Lambda'}\left[\mathbf{1}(v\cdot x>v\cdot y)-\mathbf{1}(v\cdot x<v\cdot y)\right]\geq 0$$

Where the bold 1 is an indicator function. And split up the expectation and realize that this is a probability, so we get

$$\mathbb{P}_{v,\,x\sim\Lambda,\,y\sim\Lambda'}(v\cdot x>v\cdot y)\geq\mathbb{P}_{v,\,x\sim\Lambda,\,y\sim\Lambda'}(v\cdot x<v\cdot y)$$

And this then rephrases as "the chance that a random voter prefers the lottery drawn from $\Lambda$ to the lottery drawn from $\Lambda'$ is at least the chance of the reverse, i.e. $\Lambda$ beats or ties $\Lambda'$".

Which was the original formulation of the problem.
Abbreviating the function $(\Lambda,\Lambda')\mapsto\mathbb{E}_{v,\,x\sim\Lambda,\,y\sim\Lambda'}[\mathrm{sgn}(v\cdot x-v\cdot y)]$ as $f$, then a necessary condition to have a $\Lambda$ that dominates everything is that

$$\sup_{\Lambda}\inf_{\Lambda'}f(\Lambda,\Lambda')\geq 0$$

If you have this property, then you might not necessarily have an optimal $\Lambda$ that dominates everything, but there are $\Lambda$ that get a worst-case expectation arbitrarily close to 0. Namely, even if the worst possible $\Lambda'$ is selected, the violation of the defining domination inequality happens with arbitrarily small magnitude. There might not be an optimal lottery-lottery, but there are lottery-lotteries arbitrarily close to optimal, where this closeness-to-optimality is uniform over every foe. Which seems good enough to me. So I'll be focused on proving this slightly easier statement and glossing over the subtle distinction between that and the existence of truly optimal lottery-lotteries.
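For concreteness, here's a minimal sketch of what computing $f$ looks like when the voter distribution is a finite, equally-weighted list of normal vectors $v$, and the lottery-lotteries are finite mixtures of lotteries (all names and numbers here are mine, purely illustrative):

```python
import numpy as np

def f(lam, lam_p, voters):
    """E over voters v, x ~ lam, y ~ lam_p, of sgn(v . (x - y)).

    lam, lam_p: finite lottery-lotteries as lists of (weight, lottery) pairs,
                each lottery a length-n array in the simplex.
    voters:     shape (num_voters, n); one normal vector v per voter,
                all voters weighted equally for simplicity.
    """
    total = 0.0
    for p, x in lam:
        for q, y in lam_p:
            total += p * q * np.mean(np.sign(voters @ (x - y)))
    return total

# Illustrative: n = 3 candidates, 3 voters with cyclic (rock-paper-scissors) preferences.
voters = np.array([[1.0, 0.0, -1.0],
                   [-1.0, 1.0, 0.0],
                   [0.0, -1.0, 1.0]])
pure = lambda i: [(1.0, np.eye(3)[i])]            # point-mass on a pure candidate
mix = [(0.5, np.eye(3)[0]), (0.5, np.eye(3)[1])]
print(f(pure(2), pure(0), voters))   # 1/3: candidate 2 beats candidate 0, 2 voters to 1
print(f(mix, mix, voters))           # 0.0: any lottery-lottery ties with itself
```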
As it turns out, this slightly easier statement (that sup inf is 0 or higher) can be outright proven assuming the following conjecture.
Stably-Good-Response Conjecture: For every $\epsilon>0$ and $\Lambda'\in\Delta\Delta$, there exists a $\Lambda\in\Delta\Delta$ and a $\delta>0$ s.t.

$$\forall\Lambda''\in\Delta\Delta:\ d(\Lambda',\Lambda'')<\delta\implies f(\Lambda,\Lambda'')>-\epsilon$$

Pretty much, for any desired level of suckage $\epsilon$ and any foe $\Lambda'$, there's a probability distribution $\Lambda$ you can pick which isn't just a good response (this always exists, just pick $\Lambda'$ itself), but a stably good response, in the sense that there's some nonzero level of perturbation $\delta$ to the foe where $\Lambda$ remains a good response no matter how the foe is perturbed.
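(To spell out why picking $\Lambda'$ itself is always a good response: for $x$ and $y$ drawn i.i.d. from the same $\Lambda'$, swapping $x$ and $y$ negates $\mathrm{sgn}(v\cdot x-v\cdot y)$, so

$$f(\Lambda',\Lambda')=\mathbb{E}_{v,\,x,y\sim\Lambda'}\left[\mathrm{sgn}(v\cdot x-v\cdot y)\right]=0$$

which certainly isn't below $-\epsilon$.)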
Theorem 1: Assuming the Stably-Good-Response Conjecture, $\sup_{\Lambda}\inf_{\Lambda'}f(\Lambda,\Lambda')\geq 0$.

I'll derive a contradiction from the assumption that $\sup_{\Lambda}\inf_{\Lambda'}f(\Lambda,\Lambda')<0$. Accordingly, assume the strict inequality.

In such a case, there is some $\epsilon>0$ s.t. $\sup_{\Lambda}\inf_{\Lambda'}f(\Lambda,\Lambda')<-\epsilon$. Let the set $B_{\Lambda}:=\{\Lambda''\mid f(\Lambda,\Lambda'')>-\epsilon\}$. Now, every $\Lambda'$ lies in the interior of $B_{\Lambda}$ for some $\Lambda$, by the Stably-Good-Response Conjecture. Since $\Delta\Delta$ is a compact set, we can isolate a finite subcover and get some finite set of probability distributions $\Lambda_{1},\dots,\Lambda_{m}$ s.t. $\Delta\Delta\subseteq\bigcup_{i\in[m]}B_{\Lambda_{i}}$.

Now, let the set $C_{\Lambda'}:=\{\Lambda\mid f(\Lambda,\Lambda')<-\epsilon\}$. Since $\sup_{\Lambda}\inf_{\Lambda'}f(\Lambda,\Lambda')<-\epsilon$, this family of sets manages to cover all of $CH(\Lambda_{1},\dots,\Lambda_{m})$ (the convex hull of our finite set). Further, for any fixed $\Lambda'$, $\Lambda\mapsto f(\Lambda,\Lambda')$ is a continuous function on this hull (a bit nonobvious, but true nonetheless because there's only finitely many vertices to worry about). Due to continuity, all the sets $C_{\Lambda'}$ will be open. Since we have an open cover of $CH(\Lambda_{1},\dots,\Lambda_{m})$, which is a finite simplex (and thus compact), we can isolate a finite subcover, to get a finite set of $\Lambda'_{1},\dots,\Lambda'_{k}$ s.t. $CH(\Lambda_{1},\dots,\Lambda_{m})\subseteq\bigcup_{j\in[k]}C_{\Lambda'_{j}}$. And now we can go

$$-\epsilon>\max_{\Lambda\in CH(\Lambda_{1},\dots,\Lambda_{m})}\min_{j\in[k]}f(\Lambda,\Lambda'_{j})\geq\max_{\Lambda\in CH(\Lambda_{1},\dots,\Lambda_{m})}\ \min_{\Lambda'\in CH(\Lambda'_{1},\dots,\Lambda'_{k})}f(\Lambda,\Lambda')$$
$$=\min_{\Lambda'\in CH(\Lambda'_{1},\dots,\Lambda'_{k})}\ \max_{\Lambda\in CH(\Lambda_{1},\dots,\Lambda_{m})}f(\Lambda,\Lambda')\geq\min_{\Lambda'\in CH(\Lambda'_{1},\dots,\Lambda'_{k})}\max_{i\in[m]}f(\Lambda_{i},\Lambda')>-\epsilon$$

The first strict inequality was from how all $\Lambda\in CH(\Lambda_{1},\dots,\Lambda_{m})$ had some $\Lambda'_{j}$ which made $\Lambda$ get a bad score (and compactness makes the max attained, so the inequality stays strict). The $\geq$ was from expanding the foe's set of options from the finite set to its convex hull. The $=$ was from how $f$ is a continuous bilinear function when restricted to $CH(\Lambda_{1},\dots,\Lambda_{m})\times CH(\Lambda'_{1},\dots,\Lambda'_{k})$, both of which are compact convex sets, so the minimax theorem can be applied. Then the next $\geq$ was from restricting our set of options down to the finite set $\Lambda_{1},\dots,\Lambda_{m}$, and the final $>$ was from how every $\Lambda'$ had some $\Lambda_{i}$ that'd make $\Lambda'$ get a good score, by construction of the $\Lambda_{i}$ (and compactness to make the inequality a strict one).

But wait, we just showed $-\epsilon>-\epsilon$, that's a contradiction. Therefore, our original assumption must have been wrong. Said original assumption was that $\sup_{\Lambda}\inf_{\Lambda'}f(\Lambda,\Lambda')<0$, so negating it, we've proved that

$$\sup_{\Lambda}\inf_{\Lambda'}f(\Lambda,\Lambda')\geq 0$$
As desired.
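As an aside, the minimax step in that chain only needs the finite-dimensional minimax theorem, since $f$ restricted to the two hulls is determined by the matrix $F_{i,j}=f(\Lambda_{i},\Lambda'_{j})$. Here's a small sketch of computing such a finite zero-sum game's value by linear programming (assuming scipy; the matrix entries are made up):

```python
import numpy as np
from scipy.optimize import linprog

def game_value(F):
    """Value of the zero-sum game F[i, j] = f(Lambda_i, Lambda'_j), with the
    maximizer mixing over rows: maximize t s.t. p^T F >= t, p in the simplex."""
    m, k = F.shape
    c = np.zeros(m + 1); c[-1] = -1.0            # variables (p_1..p_m, t); minimize -t
    A_ub = np.hstack([-F.T, np.ones((k, 1))])    # t - (p^T F)_j <= 0 for each column j
    b_ub = np.zeros(k)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # sum_i p_i = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

F = np.array([[0.0, 0.5, -0.5],
              [-0.5, 0.0, 0.5],
              [0.5, -0.5, 0.0]])   # made-up values of f(Lambda_i, Lambda'_j)
print(game_value(F))               # 0.0 for this antisymmetric matrix
```

For an antisymmetric matrix like this one the value is 0, which mirrors the $f(\Lambda,\Lambda)=0$ symmetry noted earlier.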
What I mean is, the players hit each other up and are like "yo, let's decide on which ROSE point in the object-level game we're heading towards".
Of course, they don’t manage to settle on what equilibrium to go for in the resulting bargaining game, because, again, multiple ROSE points might show up in the bargaining game.
But, the ROSE points in the bargaining game are in a restricted enough zone (what with that whole “must be better than the random-dictator point” thing) to seriously constrain the possibilities in the object-level game. “Worst ROSE point for Alice in the object-level-game” is a whole lot worse for her than “Worst ROSE point for Alice in the bargaining game about what to do in the object-level-game”.
So, the players should be able to ratchet up their disagreement point and go “even if the next round of bargaining fails, at least we can promise that everyone does this well, right? Sure, everyone’s going for their own idea of fairness, but even if Alice ends up with her worst ROSE point in the bargaining game, her utility is going to be at least this high, and similar for everyone else.”
And so, each step of bargaining that successfully happens ratchets up the disagreement point closer to the Pareto frontier, in a way that should quickly converge. If someone disagrees on step 3, then the step-3 disagreement point gets played, which isn’t that short of the Pareto frontier. And if someone doesn’t have time for all this bargaining, they can just break things off at step 10 or something, that’s just about as good as going all the way to infinity.
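As a toy illustration of the ratchet (with a made-up pair of "fairness ideals" standing in for the multiple ROSE points, since I haven't worked out the real dynamics): say the feasible frontier is $u_1+u_2=1$, and each round of bargaining has two candidate fair points, splitting the surplus above the current disagreement point 40/60 or 60/40. Ratcheting the disagreement point to each player's worst candidate gives:

```python
# Toy disagreement-point ratchet (hypothetical stand-in for ROSE bargaining):
# frontier u1 + u2 = 1; two candidate "fair" splits of the remaining surplus.
d = (0.0, 0.0)                       # initial disagreement point
for step in range(1, 8):
    surplus = 1.0 - d[0] - d[1]      # utility still up for grabs
    candidates = [(d[0] + 0.4 * surplus, d[1] + 0.6 * surplus),
                  (d[0] + 0.6 * surplus, d[1] + 0.4 * surplus)]
    # Everyone is guaranteed at least their worst candidate point:
    d = (min(u[0] for u in candidates), min(u[1] for u in candidates))
    print(step, d)                   # leftover surplus shrinks 5x per round
```

The leftover surplus shrinks geometrically, so breaking things off after a few rounds already captures almost everything, which is the behavior I'm gesturing at above.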
Or at least, it should work like this. I haven't proved that it does, and it depends on things like "what does ROSE bargaining look like for n players" and "does the random-dictator-point-dominance thing still hold in the n-player case" and "what's the analogue of that strategy where you block your foe from getting more than X utility, when there are multiple foes?". But this disagreement-point ratcheting is a strategy that addresses your worries with "ever-smaller pieces of the problem live on higher meta-levels, so the process of adding layers of meta actually converges to solving the problem, and breaking it off early solves most of the problem".
Regarding your last comment, yes, you could always just have a foe that's a jerk, but you can at least act so they don't gain from being a jerk, in a way robust against you and a foe having slightly different definitions of "jerk".
I think I have a contender for something which evades the conditional-threat issue stated at the end, as well as obvious variants and strengthenings of it, and which would be threat-resistant in a dramatically stronger sense than ROSE.
There’s still a lot of things to check about it that I haven’t done yet. And I’m unsure how to generalize to the n-player case. And it still feels unpleasantly hacky, according to my mathematical taste.
But the task at least feels possible, now.
EDIT: it turns out it was still susceptible to the conditional-threat issue, but then I thought for a while and came up with a different contender that feels a lot less hacky, and that provably evades the conditional-threat issue. Still lots of work left to be done on it, though.
For 1, it’s just intrinsically mathematically appealing (continuity is always really nice when you can get it), and also because of an intuition that if your foe experiences a tiny preference perturbation, you should be able to use small conditional payments to replicate their original preferences/incentive structure and start negotiating with that, instead.
I should also note that nowhere in the visual proof of the ROSE value for the toy case is continuity used. Continuity just happens to appear.
For 2, yes, it’s part of game setup. The buttons are of whatever intensity you want (but they have to be intensity-capped somewhere for technical reasons regarding compactness). Looking at the setup, for each player pair $i,j$, $c_{i,j}$ is the cap for how much of $j$’s utility $i$ can destroy. These can vary, as long as they’re nonnegative and not infinite. From this, it’s clear “Alice has a powerful button, Bob has a weak one” is one of the possibilities; that would just mean $c_{\text{Alice},\text{Bob}}\gg c_{\text{Bob},\text{Alice}}$. There isn’t an assumption that everyone has an equally powerful button, because then you could argue that everyone just has an equal-strength threat, and then it wouldn’t be much of a threat-resistance desideratum, now would it? Heck, you can even give one player a powerful button and the other a zero-strength button ($c_{i,j}=0$) that has no effect; that fits in the formalism.
So the theorem is actually saying “for all members of the family of destruction games with the button caps set wherever the heck you want, the payoffs are the same as the original game”.
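In symbols, my reading of the destruction-game setup (notation mine, reconstructing from the description above): on top of their action $a_i$ in the original game, each player $i$ picks destruction intensities $d_{i,j}\in[0,c_{i,j}]$ aimed at each other player $j$, and the payoffs are something like

$$U_j(a,d)=u_j(a)-\sum_{i\neq j}d_{i,j}$$

so the caps $c_{i,j}$ bound how much of $j$'s utility player $i$'s button can burn, and $c_{i,j}=0$ recovers a button with no effect.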
My preferred way of resolving it is treating the process of “arguing over which equilibrium to move to” as a bargaining game, and just find a ROSE point from that bargaining game. If there’s multiple ROSE points, well, fire up another round of bargaining. This repeated process should very rapidly have the disagreement points close in on the Pareto frontier, until everyone is just arguing over very tiny slices of utility.
This is imperfectly specified, though, because I’m not entirely sure what the disagreement points would be, because I’m not sure how the “don’t let foes get more than what you think is fair” strategy generalizes to >2 players. Maaaybe disagreement-point-invariance comes in clutch here? If everyone agrees that an outcome as bad or worse than their least-preferred ROSE point would happen if they disagreed, then disagreement-point-invariance should come in to have everyone agree that it doesn’t really matter exactly where that disagreement point is.
Or maybe there’s some nice principled property that some equilibria have, which others don’t, that lets us winnow down the field of equilibria somewhat. Maybe that could happen.
I’m still pretty unsure, but “iterate the bargaining process to argue over which equilibria to go to, you don’t get an infinite regress because you rapidly home in on the Pareto frontier with each extra round you add” is my best bad idea for it.
EDIT: John Harsanyi had the same idea. He apparently had some example where there were multiple CoCo equilibria, and his suggestion was that a second round of bargaining could be initiated over which equilibrium to pick, but that in general, it’d be so hard to compute the n-person Pareto frontier for large n that an equilibrium might be stable because nobody can find a different equilibrium nearby to aim for.
So this problem isn’t unique to ROSE points in full generality (CoCo equilibria have the exact same issue), it’s just that ROSE is the only one that produces multiple solutions for bargaining games, while CoCo only returns a single solution for bargaining games. (bargaining games are a subset of games in general)
Threat-Resistant Bargaining Megapost: Introducing the ROSE Value
So, if you are limited to only pure strategies, for some reason, then yes, Chinese would be on the Pareto frontier.
But if you can implement randomization, then Chinese is not on the Pareto frontier, because both sides agree that “flip a coin, Heads for Sushi, Tails for Italian” is just strictly better than Chinese.
The convex shape consists of all the payoff pairs you can get if you allow randomization.
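A concrete check, with made-up payoff pairs (the post's actual numbers may differ):

```python
# Made-up payoff pairs (Alice, Bob) for the three dinner options:
sushi, italian, chinese = (3.0, 1.0), (1.0, 3.0), (1.5, 1.5)

# 50/50 coin flip between Sushi and Italian:
mix = tuple(0.5 * s + 0.5 * i for s, i in zip(sushi, italian))
print(mix)   # (2.0, 2.0): strictly better than Chinese for both players
assert all(m > c for m, c in zip(mix, chinese))
```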
Alright, this is kind of a Special Interest, so here’s your relevant thought dump.
First up, the image is kind of misleading, in the sense that you can always tack on extra orders of magnitude. You could tack on another thousand orders of magnitude and make it look even longer, or just go “this is 900 OOM’s of literally nothing happening, let’s clip that off and focus on the interesting part”.
Assuming proton decay is a thing (that free protons decay with a ridiculously long half-life)....
ok, I was planning on going “as a ludicrous upper bound, here’s the number”, but, uh, the completely ludicrous upper bound wound up being a WHOLE LOT longer than I thought. I… I didn’t even think it was possible to stall till the evaporation of even a small black hole. But this calculation indicates that if you’re aiming solely at living ludicrously long, you can stall about a googol years, enough for even the largest black holes to evaporate, and to get to the end of the black hole era. I’m gonna need to rethink some stuff.
EDIT: rethought some stuff, realized it doesn’t change my conclusions from when I last looked into this. The fundamental problem is that, for any remotely realistic numbers, if you’re trying to catch the final evaporation of a black hole to harvest its mass-energy, you’ll blow a lot more than the amount of mass-energy that you could gain, in order to wait that long.
Final conclusion: If proton decay is a thing, it’s definitely not worth waiting to the end of a black hole, you’ll want to have things wrapped up far earlier. If proton decay isn’t a thing, you’ll want to wait till the black hole evaporates to catch that final party and last kg of mass-energy. If proton decay is a thing and you’re willing to blow completely ridiculous cosmic amounts of resources on it, you can last till the late parts of the black hole era.
The rough rationale is as follows. Start with 10x the mass of the largest black holes in the universe, around $10^{11}$ solar masses stockpiled. If they’re spinning fast enough, you can extract energy from them; assume you can extract all of it (it’s over 10 percent, so let’s round it up to 100 percent). Assume that the proton-decay time is $10^{40}$ years (a high estimate), and that we use the energy at 100 percent efficiency to make matter (also a high estimate). You can take out one proton, wait for around a proton decay time, take out the next proton, and so on. Then you can take out around $10^{68}$ protons, and each one lasts you around $10^{40}$ years, getting around $10^{108}$ years (high uncertainty). And, coincidentally, natural Hawking radiation finishes off a black hole of that size in around $10^{100}$ years, leaving a small margin left over for silly considerations like “maybe the intelligence needs more than one proton to physically implement”.
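Here's that arithmetic as a script (every input is one of the rough assumptions above; the Hawking evaporation scaling $t\sim 10^{67}\,\text{yr}\times(M/M_\odot)^3$ is the standard one):

```python
# Back-of-envelope version of the above (all inputs are rough assumptions):
M_sun = 2.0e30            # kg
m_p   = 1.67e-27          # kg, proton mass
mass  = 1e11 * M_sun      # kg: ~10x the biggest known black holes
tau_p = 1e40              # yr: assumed proton-decay time (a high estimate)

protons = mass / m_p                  # ~1.2e68 protons' worth of mass-energy
stall   = protons * tau_p             # ~1.2e108 proton-years of runtime
evap    = 1e67 * (mass / M_sun)**3    # yr: Hawking evaporation, ~1e100
print(f"{protons:.0e} protons, {stall:.0e} yr of runtime, {evap:.0e} yr to evaporate")
```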
So, not remotely practical, but maybe something like $10^{79}$ years would actually be doable? That extra 29 OOM’s of wiggle room patches over a lot of sins in this calculation.
But, in terms of what would actually be practical for the far future of humanity, it’d be the strat of “dump as much mass into a fast-spinning black hole as possible. Like, eat the entire Laniakea supercluster complex. Wait a trillion years for the cosmic microwave background radiation to cool to its floor temperature. You’d be in the late Stelliferous era at this point, with a few red dwarfs around, if you didn’t dump all the stars in the mega-black-hole already. Set up some infrastructure around the mega-hole, and use the Blandford-Znajek mechanism to convert the mega-hole spin into electrical power. You should be able to get about a gigawatt of power for the next $10^{46}$ years to run a whole lotta computation and a little bit of maintenance, and if proton decay is messing with things, chop however many OOM’s you need off the time and add those OOM’s to the power output. Party for a trillion trillion trillion eons with your super-optimized low-temperature computing infrastructure”.
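And the corresponding energy budget for the gigawatt figure, under the same rough assumptions (Laniakea's mass and the ~10% extractable-spin-energy fraction are both order-of-magnitude guesses):

```python
# Rough energy budget for the Blandford-Znajek phase (all inputs are
# order-of-magnitude assumptions, not precise figures):
c_light = 3.0e8                      # m/s
M_sun   = 2.0e30                     # kg
mass    = 1e17 * M_sun               # kg: very roughly Laniakea's mass
spin_E  = 0.1 * mass * c_light**2    # J: ~10% of mc^2 extractable as spin energy
power   = 1e9                        # W: one gigawatt
yr      = 3.15e7                     # seconds per year
print(f"~{spin_E / power / yr:.0e} years at a gigawatt")  # ~6e46 years
```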
That original post lays out UDT1.0; I don’t see anything about precomputing the optimal policy within it. The UDT1.1 fix of optimizing the global policy, instead of figuring out the best thing to do on the fly, was first presented here; note that the 1.1 post that I linked came chronologically after the post you linked.