My main argument in favour of risk profiles is to think in terms of the frequency with which events go wrong. It is true that my not getting Covid yesterday should not impact my decisions today; however, making choices that yield a probability $p$ of a bad event over the next month means I’ll have that bad event happen once every $1/p$ months, on average. Due to loss aversion I might want to cap that frequency, even at the cost of reduced expected utility, hence I’ll give myself a risk profile. This ties into the psychological effects of loss, which tend to make negative outcomes weigh more heavily than positive ones. Any thoughts?
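A quick sketch of where the $1/p$ figure comes from (assuming, as an illustration, that the bad event occurs independently each month with probability $p$, so the waiting time is geometric):

$$
\mathbb{E}[\text{months until the first bad event}] = \sum_{k=1}^{\infty} k\,(1-p)^{k-1}\,p = \frac{1}{p},
$$

so, for example, $p = 1/12$ per month corresponds to roughly one bad event per year.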
I have a personal belief that a lot of low-hanging fruit does not get picked because we have masses in which each person benefits a little versus smaller entities with a lot to lose, such as drug companies wanting less enforcement. As such, the invested minority can outlast the majority and prevent these changes from becoming law.
Do you see other factors having more significance? Further, can we avoid these impasses?
They can profit without this sort of Ponzi scheme. The best analogy I have seen is as follows:
Suppose you have 5 phones on the market, and by law short sellers have to buy 10 phones. Since demand will always be higher than supply (the legal requirement forces short sellers to buy), the price will go off to infinity by natural supply/demand mechanics.
The only way to break this is by increasing supply, i.e. if long stock holders decide to sell their shares, as you recommended when you say “get out”. This would not be maximally beneficial for the long holders. Once supply surpasses demand the overpricing immediately breaks, but that doesn’t need to happen. Since short sellers owe something like 120% of the stock (I’m not sure of the exact value), long holders could theoretically agree to each sell only 1% of their stock at a million a share, and this would still work and benefit all long holders.
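To make the arithmetic concrete, a toy sketch (the 120% short interest and the 1% sell fraction are just the illustrative figures from above; the float size is made up):

```python
# Toy model of the squeeze arithmetic described above.
# All numbers are illustrative, not real market data.

float_shares = 1_000_000          # total shares held by long holders
short_interest = 1.20             # shorts owe ~120% of the float (assumed figure)
sell_fraction = 0.01              # each long holder sells only 1% of their holding

forced_demand = short_interest * float_shares   # shares shorts must eventually buy back
offered_supply = sell_fraction * float_shares   # shares long holders put on the market

print(f"Forced demand:  {forced_demand:,.0f} shares")
print(f"Offered supply: {offered_supply:,.0f} shares")
print("Demand exceeds supply:", forced_demand > offered_supply)
# As long as the forced demand exceeds the offered supply, the sellers can
# effectively name their price -- which is the coordination opportunity above.
```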
This was only possible because short sellers overbought their side. The interesting issue here is that even though there is a way for ALL long holders to profit immensely, it would fail if enough of them get scared into selling, so it becomes a real life coordination problem. Do you think they can pull it off?
PS: I’ve worked in finance and found this very interesting, both because of how unusual a short squeeze it is and because of the behavioural side of the situation. I’d like to hear opposing thoughts, and questions if my writing isn’t as clear as it should be. Exciting times!
First of all thank you for your post, it’s very thorough :)
While I want to reread it in case I missed any arguments for this, the main issue I usually have with these trust webs is the propensity for the creation of echo chambers: by relying only on those you trust and on those they trust, you might filter out others’ opinions not because they are less valid, but because you disagree on some fundamental axioms. Have you given any thought to how to avoid echo chambers in these webs of trust?
Best, Miguel
I’d really like this feature as well
If Tim tells the truth with probability $p$, you simply get that you should guess what he said if $p>\frac{1}{1000000}$, and guess something else if $p<\frac{1}{1000000}$. For Tim the optimal choice is to have $p=\frac{1}{1000000}$ in order not to give you any information: anything else is playing on psychology and human biases, which exist in reality, but trying to play a “perfect” game by assuming your opponent is not playing one also leaves you vulnerable to exploitation, as you mentioned.
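A quick sketch of where that threshold comes from (assuming the setup is that Tim names one of the $1{,}000{,}000$ options, and that when he lies the true option is equally likely to be any of the remaining $999{,}999$):

$$
\Pr(\text{correct} \mid \text{guess what Tim said}) = p,
\qquad
\Pr(\text{correct} \mid \text{guess another fixed option}) = \frac{1-p}{999999},
$$

and the first exceeds the second exactly when $p > \frac{1}{1000000}$; at equality Tim’s announcement carries no information, which is why that is his optimal choice.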
It seems you are trying to get a deeper understanding of human fallibility rather than playing optimal games. Have I misunderstood it?
Lovely idea.
Minor point: it feels to me that the average bet isn’t the usual arithmetic average but instead the harmonic mean of all bets taken. The difference might be small, and more importantly there’s no reason why the arithmetic average is fairer than the harmonic average, but it was just a small thing I noticed 😜
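For illustration, a minimal sketch of the two aggregations (assuming the bets are expressed as decimal odds; the numbers are made up):

```python
# Arithmetic vs harmonic mean of a list of bets (illustrative numbers only).

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def harmonic_mean(xs):
    return len(xs) / sum(1 / x for x in xs)

bets = [2.0, 3.0, 10.0]  # e.g. odds offered by three different bettors

print(arithmetic_mean(bets))  # 5.0
print(harmonic_mean(bets))    # ~3.21 -- the two "averages" genuinely differ
```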
I’m up for this
Thank you habryka!
Note: The LaTeX is not rendering properly on this reply. Does anyone know what the reason could be?
I chose $x^2+y^2 \le 2$ because the optimal point in that case, $(1,1)$, is a pair of integers, but the argument holds for any positive real constant, and by using either equality, less than, or not greater than.
There is one thing we assumed, which is that, given the utility function $U(x,y)=x+y$, our proxy utility function is $\hat{U}(x)=x$. This is not necessarily obvious, and even more so if we think of more convoluted utility functions: if our utility were given by some more involved function of $x$ and $y$, what would be our proxy when we only know $x$?
To answer this question generally, my first thought would be to build a map that takes a vector space $V$, a utility function $U:V\to\mathbb{R}$, the manifold $S$ of possible points, and a map from those points to a filtration $\mathcal{F}_s$ that tells us the information we have available when at point $s$, and returns a new utility function $\hat{U}$.
However this full generality seems a lot harder to describe.
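As a sketch of one candidate for such a map (the conditional-expectation form is my own guess at how to formalise “the information available at a point”, not something stated in the comment):

$$
G\big(V,\, U,\, S,\, (\mathcal{F}_s)_{s\in S}\big) = \hat{U},
\qquad
\hat{U}(s) = \mathbb{E}\big[\,U \mid \mathcal{F}_s\,\big],
$$

which in the two-dimensional example reduces to integrating out the unmeasured coordinate, $\hat{U}(x) = \int U(x,y)\,p(y)\,dy$, matching the “integrating out the unknown dimensions” idea.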
Best, Miguel
I think it’s possible to build a Goodhart example on a 2D vector space.
Say you get to choose two parameters $x$ and $y$. You want to maximize their sum, but you are constrained by $x^2+y^2 \le 2$. Then the maximum is attained when $x=y=1$. Now assume that $y$ is hard to measure, so you use $x$ as a proxy. Then you move from the optimal point we had above to the worse situation where $x+y=\sqrt{2}<2$, but the proxy $x=\sqrt{2}>1$.
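For concreteness, a minimal numeric sketch of this example (my own illustration, using a grid search over the constraint boundary rather than anything from the original comment):

```python
import numpy as np

# Constraint: x^2 + y^2 <= 2.  True utility: U(x, y) = x + y.  Proxy: x.
# Search over the boundary of the constraint set, where both optima lie.
thetas = np.linspace(0.0, 2.0 * np.pi, 100_001)
xs, ys = np.sqrt(2) * np.cos(thetas), np.sqrt(2) * np.sin(thetas)

true_best = np.argmax(xs + ys)   # maximise the true utility x + y
proxy_best = np.argmax(xs)       # maximise only the measurable proxy x

print("True optimum:  x=%.3f y=%.3f  x+y=%.3f" % (xs[true_best], ys[true_best], xs[true_best] + ys[true_best]))
print("Proxy optimum: x=%.3f y=%.3f  x+y=%.3f" % (xs[proxy_best], ys[proxy_best], xs[proxy_best] + ys[proxy_best]))
# The true optimum lands near (1, 1) with x+y = 2; the proxy optimum lands
# near (sqrt(2), 0) with x+y ~= 1.414, i.e. the proxy went up while the true
# utility went down -- the Goodhart failure described above.
```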
The key point being that you are searching for a solution on a manifold inside your vector space, but since some dimensions of that vector space are too hard or even impossible to measure, you end up at suboptimal points of your manifold.
In formal terms you have a true utility function $U(x,y)$ based on all the data you have, and a misaligned utility function $\hat{U}(x)$ based on the subspace of known variables, where $\hat{U}$ could be obtained by integrating out the unknown dimensions if we know their probability distribution, or by any other technique that might be more suitable.
Would this count as a more substantive assumption?
Best, Miguel
Edit: added the “In formal terms” paragraph
Have you seen Kialo?
Thank you for your post abramdemski!
I failed to understand why you can’t arrive at a solution for the Single-Shot game via Iterated Play without memory of the previous game. In order to clarify my ideas let me define two concepts first:
Iterated Play with memory: We repeatedly play the game knowing the results of the previous games.
Iterated Play without memory: We repeatedly play the game, while having no memory of the previous play.
The distinction is important: with memory we can at any time search all previous games and act accordingly, allowing for strategies such as Tit-for-Tat and other history-dependent strategies. Without memory we can still learn (for example by applying some sort of Bayesian update to our probability estimates of each move being played), whilst not having access to the previous games before each move. That way we can “learn” how to best play the single-shot version of the game through iterated play.
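As a toy sketch of the “without memory” learning described above (my own illustration: a simple matching game against a fixed opponent, where the learner stores only aggregate counts, never the individual games):

```python
import random

# "Iterated play without memory": before each round the learner only keeps
# aggregate counts of the opponent's past moves, not the games themselves.
random.seed(0)

# Toy coordination game: the learner scores 1 for matching the opponent's move.
counts = {"L": 1, "R": 1}          # pseudo-counts, i.e. a uniform prior

def opponent_move():
    return "L" if random.random() < 0.7 else "R"   # fixed, unknown opponent strategy

for _ in range(1_000):
    p_l = counts["L"] / (counts["L"] + counts["R"])  # estimate of P(opponent plays L)
    my_move = "L" if p_l >= 0.5 else "R"             # best response to the estimate
    counts[opponent_move()] += 1                     # count update; no per-game history is stored

print("Estimated P(L):", round(counts["L"] / (counts["L"] + counts["R"]), 3))
print("Single-shot move the learner settles on:", my_move)
```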
Does what I said above need any clarification, and is there any failure in its logic?
Best Regards, Miguel
You mention that a Martingale is a betting strategy where the player doubles their bet each time.
A Martingale is a fair game (i.e. the expected outcome is zero). If your outcome is given by a coin toss, and you receive only what you bet, then that is a Martingale game (you win £X with probability $\frac{1}{2}$ and lose £X with probability $\frac{1}{2}$ too).
Then you could say that doubling your bet is a betting strategy on a Martingale game, BUT not that a Martingale is a betting strategy where the player doubles their bet each time (in the same way that a dog is an animal but an animal is not necessarily a dog).
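To make the distinction concrete, a small simulation sketch (my own illustration, assuming a fair coin and a finite bankroll): doubling is just one strategy you can run on top of the fair (Martingale) game, and it leaves the expected profit at zero:

```python
import random

# Fair coin-toss game (a Martingale): each round you win or lose your stake
# with probability 1/2 each.  "Doubling" is a strategy layered on top of it.
random.seed(1)

def doubling_strategy(rounds, bankroll=1_000):
    bet = 1
    for _ in range(rounds):
        if bet > bankroll:            # can no longer afford the doubled bet
            break
        if random.random() < 0.5:     # win: recover all previous losses + 1
            bankroll += bet
            bet = 1
        else:                         # lose: double the next bet
            bankroll -= bet
            bet *= 2
    return bankroll - 1_000           # profit relative to the starting bankroll

profits = [doubling_strategy(rounds=100) for _ in range(20_000)]
print("Average profit:", sum(profits) / len(profits))
# With a finite bankroll the average profit hovers around zero: the strategy
# wins small amounts often and occasionally loses a large amount, but the
# underlying game stays fair -- the strategy does not change that.
```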
Does that make sense?
Other than that, I’m very intrigued by the claim made. Definitely worth reading, but my hopes for something worthwhile are slim :P
Would this be a concrete example of the above:
We have two states, S=0 and S=1, as inputs; channel k1 is given by the identity matrix, i.e. it gives us all information about the original input, and k2 loses all information about the initial states (i.e. it always returns S=1 as the output, regardless of the input). Then k1 strictly dominates k2; however, if we preprocess the inputs by mapping them both to S=1, then both channels convey no information, and as such there is no strict domination anymore. Is this so?
More generally, any k1 > k2 can lose the strict domination property via a pregarbling in which all information is destroyed, rendering both channels useless.
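As a small matrix sketch of this (my own encoding: channels as column-stochastic matrices, with the pregarbling applied to the input before either channel):

```python
import numpy as np

# Channels as column-stochastic matrices: entry [out, in] = P(output | input).
k1 = np.array([[1.0, 0.0],          # identity channel: output = input
               [0.0, 1.0]])
k2 = np.array([[0.0, 0.0],          # constant channel: always outputs S=1
               [1.0, 1.0]])
garble = np.array([[0.0, 0.0],      # pre-processing that maps every input to S=1
                   [1.0, 1.0]])

print(np.array_equal(k1 @ garble, k2 @ garble))  # True: after the pregarbling the
# two channels are literally the same constant channel, so neither strictly
# dominates the other -- the strict domination only held for the raw inputs.
```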
Have I missed anything?