“end of the world” images make me wonder if DALL-E thinks the world is flat
Interesting. Though I think extremes represent fewer degrees of freedom; where certain traits/characteristics dominate, and heuristics can better model behaviour. The “typical” person has all the different traits pushing/pulling, and so fewer variables you can ignore. i.e. the typical person might be more representative of hard-mode.
I think identifying the blind spots of the typical AI engineer/architect is an interesting and potentially important goal. Though I’m not sure I follow the reasoning behind identifying the opposite as the path to “modeling the desires of the typical person”?
I think investigating this would be of interest to people working in AI alignment whose ultimate goal is improving the condition of humanity in general. Understanding the needs and wants of the subset of humans most unlike themselves would likely help in modeling the desires of the typical person.
Isn’t that better and more easily accomplished by identifying the median person i.e. in what way is the typical AI engineer different from the general population, and adjusting for that?
Alternatively, one could find what is complementary to autism rather than the opposite of autism; assuming those are not necessarily the same. People who may be attracted to and good at roles/professions like people management, team sports, therapists etc.
The Ontics and The Decouplers
So trying to see what effect immigration has on inflation is fundamentally misguided—if immigration increases supply, which one might think would reduce prices, it’s entirely possible that the government will react by creating more money, undoing this effect, since they can now do so without inflation going up.
This remains my primary question i.e. I definitely wouldn’t think immigration is the only thing that creates inflation. But if we think it’s possible that immigration can impact price, then understanding if and how it could create “inflationary pressure” or “inflationary relief” would be quite useful. Even if the government undoes it with other policies.
So, did the steadily declining immigration rate in the early 20th century contribute to the inflation America saw in the 70s—in addition to the increase in money supply and other policies? Was that stark dip, and then rise for the latter half of the century, purely independent coincidence, or related somehow to inflation? And if so, what role did it play? Similarly, has the recent downward trend in immigration contributed this time?
All of these seem like fairly reasonable and important questions to ask, even if we find the answer to be inconclusive or in the negative. I guess finding it mostly missing from the conversation, even as we talk about supply chains, willingness to work etc., seemed a bit odd to me.
Finally, I was a little confused by:
Deep in the footnotes of the academic papers claiming this, you may see an acknowledgement that the spiral can continue only if the central bank “validates” the price increases by creating more money.
I thought the typical response, even according to Keynesians, is to increase interest rates, thereby reducing the money supply, rather than creating more money. The mechanism could be people buying more treasuries, removing money from circulation. Or people consuming less since borrowing rates are high—especially for housing, cars etc.
While some people ask for price or wage controls, it seems like it’s a fairly fringe view, even amongst those considered “left leaning economists”. Am I misunderstanding something here?
Is there a link between Immigration Policy and Inflation?
Heuristics explain some of the failure to predict emergent behaviours. Much of Engineering relies on “perfect is the enemy of good” thinking. But, extremely tiny errors and costs, especially the non-fungible types, compound and interfere at scale. One lesson may be that as our capacity to model and build complex systems improves, we simultaneously reduce the number of heuristics employed.
Material physical systems do use thresholds, but they don’t completely ignore tiny values (e.g. neurotransmitter molecules don’t just disappear at low potential levels).
What is being lost is related to your intuition in the earlier comment:
if the market is 49.9 / 50.1 in millions of dollars, then you can be fairly confident that 50% is the “right” price.
Without knowing how many people of the “I’ve studied this subject, and still don’t think a reasonable prediction is possible” variety didn’t participate in the market, it’s very hard to place any trust in it being the “right” price.
This is similar to the “pundit” problem where you are only hearing from the most opinionated people. If 60 nutritionists are on TV and writing papers saying eating fats is bad, you may draw the “wrong” conclusion from that; because unknown to you, 40 nutritionists believe “we just don’t know yet”. And these 40 are provided no incentives to say so.
Take the Russia-Kiev question on Metaculus which had a large number of participants. It hovered at 8% for a long time. If prediction markets are to be useful beyond just pure speculation, that market didn’t tell me how many knowledgeable people thought forming an opinion was simply not possible.
The ontological skepticism signal is missing—people saying there is no right or wrong that “exists”—we just don’t know. So be skeptical of what this market says.
As for KBC—most markets allow you to change/sell your bet before the event happens; especially for longer-term events. So my guess is that this is already happening. In fact, the uncertainty index would separate out much of the “What do other people think?” element into its own question.
For locked in markets like ACX where the suggestion is to leave your prediction blank if you don’t know, imagine every question being paired with “What percentage of people will leave this prediction blank?”
All these indicators are definitely useful for a market observer. And betting on these indicators would make for an interesting derivatives market—especially on higher volume questions. The issue I was referring to is that all these indicators are still only based on traders who felt certain enough to bet on the market.
Say 100 people who have researched East-Asian geopolitics saw the question “Will China invade Taiwan this year?”. 20 did not feel confident enough to place a bet. Of the remaining 80 people, 20 bet small amounts because of their lack of certainty.
The market and most of the indicators you mentioned would be dominated by the 60 that placed large bets. A LOT of information about uncertainty would be lost. And this would have been fairly useful information about an event.
The goal would be to capture the uncertainty signal of the 40 that did not place bets, or placed small bets. One way to do that would be to make “uncertainty” itself a bettable property of the question. And one way to accomplish that would be to bet on what percentage of bets are on “uncertainty” vs. a prediction.
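As a rough sketch of what that could look like (the interface and numbers below are invented for illustration, not any existing platform’s API):

```python
# Hypothetical sketch: pair each question with a bettable "uncertain"
# position, so the market reports an uncertainty index alongside price.
from dataclasses import dataclass

@dataclass
class Bet:
    stake: float   # amount wagered
    position: str  # "yes", "no", or "uncertain"

def market_summary(bets):
    """Split total stake into a price signal and an uncertainty index."""
    total = sum(b.stake for b in bets)
    yes = sum(b.stake for b in bets if b.position == "yes")
    no = sum(b.stake for b in bets if b.position == "no")
    uncertain = sum(b.stake for b in bets if b.position == "uncertain")
    priced = yes + no
    return {
        # Implied probability, from those willing to take a side.
        "implied_probability": round(yes / priced, 3) if priced else None,
        # Share of all stake saying "a meaningful prediction isn't possible".
        "uncertainty_index": round(uncertain / total, 3) if total else None,
    }

# The Taiwan example above: 60 large bets, 20 small hesitant bets, and
# 20 researchers who now register their doubt instead of staying out.
bets = (
    [Bet(100, "yes") for _ in range(40)] + [Bet(100, "no") for _ in range(20)]
    + [Bet(10, "yes") for _ in range(10)] + [Bet(10, "no") for _ in range(10)]
    + [Bet(10, "uncertain") for _ in range(20)]
)
print(market_summary(bets))
# {'implied_probability': 0.661, 'uncertainty_index': 0.031}
```

The exact yes/no split is beside the point; what matters is that the researchers who would otherwise stay silent now show up in the summary at all.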
First, I want to dispute the statement that a 50% is uninformative. It can be very informative depending on the value of the outcomes.
Yes, absolutely. 50% can be incredibly useful. Unfortunately, it also represents the “I don’t know” calibration option in most prediction markets. A market at 50% for “Will we discover a civilization-ending asteroid in the next 50 years?” would be cause for much concern.
Is the market really saying that discovering this asteroid is essentially a coin flip with 1:1 odds? More likely it just represents the entire market saying “I don’t know”. It’s these types of 50% that are considered useless, but I think they still convey information—especially if saying “I don’t know” is an informed opinion.
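As a toy illustration (stake totals invented; the “uncertain” position is the hypothetical one from my earlier comment), two markets can sit at the same 50% price while meaning very different things:

```python
# Same 50% price, very different markets (toy numbers).
def summarize(yes, no, uncertain):
    return {"implied_probability": yes / (yes + no),
            "uncertainty_index": uncertain / (yes + no + uncertain)}

print(summarize(yes=5000, no=5000, uncertain=0))  # genuine coin flip: index 0.0
print(summarize(yes=50, no=50, uncertain=9000))   # market-wide "I don't know": index ~0.99
```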
The Bayesian approach to the problem (which is in fact the very problem that Bayes originally discussed!) would require you to provide a distribution of your “expected” (I want to avoid the terms “prior” or “subjective” explicitly here) probabilities
I think there might be an ontological misunderstanding here? I fully agree that one’s expectations are often best represented by a non-normal distribution of outcomes. But this presumes that such a distribution “exists”? If it does, then one way to capture it would be to place multiple bets at different levels, like one does with options for a stock. Metaculus already captures this distribution for the market as a whole—but only for those who were confident and certain enough to place bets.
My suggestion is to also capture signal from those with studied uncertainty who don’t feel comfortable placing bets on ANY distribution. It’s not that their distribution is flat—it’s that, for them, a meaningful distribution does not exist. Their belief is: “I doubt that a meaningful prediction is even possible”.
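A sketch of that distinction (the interface is invented for the example, not Metaculus’s actual API): a forecaster either submits a histogram over probability levels, the way a ladder of option strikes traces out an implied distribution for a stock, or submits None, meaning no meaningful distribution exists for them.

```python
# Hypothetical: pool histogram forecasts, counting "no distribution
# exists" responses separately instead of flattening them into a prior.
from collections import Counter

def pool_forecasts(forecasts):
    pooled = Counter()
    declined = 0
    for f in forecasts:
        if f is None:
            declined += 1     # studied uncertainty, not a flat distribution
        else:
            pooled.update(f)  # merge {probability_level: weight}
    total = sum(pooled.values())
    return {
        "pooled": {p: round(w / total, 2) for p, w in sorted(pooled.items())} if total else {},
        "declined_share": declined / len(forecasts) if forecasts else 0.0,
    }

forecasts = [
    {0.05: 2, 0.10: 5, 0.20: 3},  # a confident, skewed distribution
    {0.40: 1, 0.50: 2, 0.60: 1},  # genuine coin-flip beliefs
    None,                         # "a meaningful prediction isn't possible"
    None,
]
print(pool_forecasts(forecasts))  # declined_share: 0.5 -- half the signal
```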
Capturing Uncertainty in Prediction Markets
I think there is a certain “cost amnesia” that sets in after a “good” decision? Even for fairly large costs. So the “indistinguishability blindness” is often a cognitive response to maintain the image of a good decision, rather than something determined by hard numbers.
Regardless, this is likely entering speculation territory. It’s something I’d noticed in my own life as well as in policy decisions i.e. a negative reaction to talking about fairly large costs because net-benefits were still positive.
Your final conclusion here appears to be—Do not expect your new pleasures to replace the old.
Yes. And thank you! I’m glad you enjoyed it.
One of my goals was to also apply “Do not expect your new pleasures to replace the old” to other types of decision making. It was a critique of using net-benefit analysis on non-fungible costs. The benefits of a policy don’t replace its harms and costs. Tradeoffs are not the same thing as substitution.
Every tradeoff between non-fungibles incurs a debt, and net-benefit hides that behind a single positive number. It’s an unnecessary step—one can still make tradeoffs, while keeping a ledger of this debt.
This influences real-life outcomes in things like City Planning (where housing people quickly, without underlying infrastructure, results in permanent slums) or Politics (where those on the wrong end of the net-gain in jobs are ignored).
These phenomena wouldn’t come as a big surprise if one had recognized the debt incurred every time these tradeoffs were made.
If you can make the product 1000 times better by increasing the cost by 0.0001 times you probably should do it.
Yes—this is not an argument against making that tradeoff. This is an argument for not treating that tradeoff as a substitute i.e. fungible. Fungibility implies two things are essentially interchangeable, with each part indistinguishable from another.
Your weighting or prioritizing didn’t lead to a substitution. You incurred a debt every time you prioritized one category over another. Not all debt is bad, some debt is good—but ignoring debt or incorrectly funging it leads to bad outcomes.
The traditional net-benefit approach leads to exactly this mistake by treating tradeoffs as fungible. Making it ok to incur costs in one category as long as you gain in another category based on some weight and proportion—ad infinitum.
And this actually happens in real life. It’s what leads to things like Technical Debt—where moving quickly is prioritized over clean code. This can be a good thing for a startup, but treating it as fungible can bring the entire org to a stand-still. The same in City Planning—where housing people quickly, without underlying infrastructure, results in permanent slums.
The concept of Non-Fungible Costs is then a useful tool to avoid this tradeoff=substitution spiral.
Could you elaborate? The point was that desires are not always fungible—they don’t neatly add up or cancel out to give you a single satisfaction score. Your decision making math would still pick the suburb because its convenience value outweighs its lack of restaurants. But you don’t suddenly stop caring about restaurants because of that. Convenience isn’t fungible with it.
I’ve added an EDIT in the post towards the end that I think responds to this
> costs and benefits MUST be converted to a single comparable value.
Yes. And now, what should the decision process look like? For monetary decisions, this is easy—money is fungible, so one can use addition/subtraction. But for costs and benefits which are not fungible, can we come up with a decision process, and make sense of our reaction to that decision?
There are two elements to this. The first is purely one of framing—the resulting decision is the same. The second arises from the framing—where the outcome necessitates more decisions. To demonstrate:
Convenience: +100. Restaurants: −80. Since 100 > 80, choose Suburb.

Till here everything is fine. Now, the traditional approach:

Satisfaction = +100 − 80 = +20. So, there is a 20 point increase in satisfaction.

Compare this to recognizing that the costs are not fungible with the benefits:

Satisfaction = (+100 convenience, −80 restaurants). There will be more moments of satisfaction than dissatisfaction.

My claim is that the traditional approach is superficially conclusive. Your satisfaction has increased by 20 points, grumbling feels irrational. It’s not providing any further signals. The non-fungible approach, simply through framing, drives you to look at that −80. To understand the moments of dissatisfaction. And most importantly—drives you to action in order to reduce it.
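In code, the two framings look something like this (numbers from the suburb example; the ledger structure is just my illustration, not a formal method):

```python
# Toy comparison of the two framings. The decision is identical;
# only what stays visible afterwards differs.
convenience, restaurants = +100, -80

# Traditional net-benefit: one number, and the -80 disappears into it.
net = convenience + restaurants
print(f"net satisfaction: {net:+d}")  # +20, so grumbling looks irrational

# Non-fungible framing: the cost stays on the books as a category debt,
# a prompt for follow-up action rather than a closed ledger.
ledger = {"convenience": +100, "restaurants": -80}
debts = {k: v for k, v in ledger.items() if v < 0}
print("outstanding debts:", debts)    # {'restaurants': -80}
```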
Much of the analysis hinges on this, so I think it needs to be thought through more deeply. I would argue that the odds of Putin “being overthrown and jailed or killed” are higher if he gives the order to use nukes than if he accepts “Vietnam”.
The NATO response to nukes would be catastrophic. Any remaining support from China/India would disappear. Further, the war is becoming less popular within Russia. Russia escalating to nukes, and the possibility of all-out war, weakens Putin’s position both internally and externally.
My guess is that withdrawal would also be met with a certain degree of “relief” from a significant portion of the Russian population.
There is a long history of Goliaths accepting and surviving embarrassing defeats. The level of control Putin exerts internally makes it more likely he would survive and spin “Vietnam” into something not too embarrassing for him personally, instead pinning the blame on an incompetent and corrupt military. Much of the Russian news media is already taking this approach.