Thanks for the info, I’m not active in the discord but will consider joining now, sounds interesting. As I understand it, the “NGDPLT-indexed inflationary unit of account” is not the “fraction system” I proposed, and in fact Eliezer thinks that using inflation-deflation is adequate because even utopian coordination and higher average intelligence are not enough for everyone in the economy to simply behave adequately. Now I wonder if the system can simply be so sane that deflation in particular will have no negative effect: people will simply preserve jobs efficiently, set and accept sane prices, market cap will stay stable and close to GDP, and so on. Whether it is really possible to train people out of bias, or to create a system with lower structural bias, so to say.
Greenless Mirror
In dath ilan, inflation and deflation are not used as macroeconomic tools because people are rational enough to accept wage reductions if their purchasing power remains unchanged, or to voluntarily pay the government to prevent crises without the need for a hidden tax that dilutes money by printing more during a crisis. Interest rates on loans could be lower if you expect returns that outpace deflation. If people can afford not to work, they are expected to do so, and they would spend more “shares”, redistributing them in favor of those who are more eager to work. Perhaps you aren’t really interested in having people work that much or that often, especially if you’re aiming for a utopia with a four-hour workday, or something similar. How relevant are these issues in a world where “every person is an economist in the same way every Earthling is a scribe by medieval standards”?
I admit my mistake in intuitively assuming that GDP and stock market valuations should be closely linked. But it still seems strange to me why they aren’t, and I want to understand that better. Shouldn’t they at least be highly correlated in an idealized model?
Think of stocks as a kind of prediction market for a company’s value. The stock price should reflect expectations about its future earnings, but those expectations are built on something—maybe a new technology they’ve developed, or an undervalued specialist. If that’s the case, then why isn’t the market naturally structured in a way that adjusts salaries dynamically based on predicted contributions? Why don’t we have, say, ‘patent usage shares’ that investors can buy to increase expected royalties on a promising technology?
In an efficient system, I’d expect the market to fragment into these kinds of sub-sectors—where you can bet not just on the company as a whole, but on specific assets or individuals within it. And you love all these equal-surplus deals, so you’re interested in getting that kind of accurate valuation. If you believe a specialist is undervalued, you don’t just buy the company’s stock, you invest in their salary in exchange for a share of the revenue they generate. If you believe a company’s R&D is its most valuable asset, you invest in the future licensing income of its patents rather than the entire stock.
If this kind of structure existed, wouldn’t stock prices and the actual underlying value of companies align more closely? And if they don’t, does that mean GDP is failing to capture certain kinds of value—like knowledge, which isn’t easily tradeable? Or should stock prices themselves be less volatile than they currently are?
I also don’t see how the fact that share prices are set by the latest trade changes this dynamic. If I’m missing something fundamental here, I’d love to hear your perspective. I understand that simply saying ‘the market is irrational’ is not a good correction—it’s probably smarter than I am—but maybe it isn’t structured in the most optimal way, for example, it doesn’t pay people for their expected value, or there’s something key I’m overlooking?
Because only the most wealthy people on earth keep their money in stocks, but they need to somehow communicate with all the other people who don’t, so they only exchange “money for the worse money everyone else uses” when they trade. If everyone kept their money in stocks, I would expect people to exchange them directly without exchanging money for money, because you actually have to analyze MORE if you have two different currencies that you use for different purposes.
To the extent they don’t have epidemics or handle them better, and don’t elect Trump, it’s probably more stable.
Not enough. In my understanding, GDP is REALITY and shares as a representation of your expectations somehow do not correspond to reality by tens of percent, which is worse than even our earthly prediction markets. If you, say, build a power plant with a payback in 10 years, in an efficient market the expected repair costs, service life, the chance of displacement by other technologies, and so on are already included in the price of this asset, so the increase in GDP (the cost of the plant as an asset) and the capitalization of shares (expectations) should correspond?
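The pricing-in claim can be made concrete with a toy discounted-cash-flow sketch: if the market is efficient, the plant’s price already equals the expected present value of its cash flows, with repairs, lifetime, and displacement risk folded in. All numbers below are invented for illustration, not real data.

```python
# Toy valuation of a power plant under the efficient-market assumption:
# repair costs, service life, and the chance of being displaced by other
# technologies are all priced into the asset. Illustrative numbers only.

def plant_value(annual_revenue, annual_repairs, lifetime_years,
                p_displaced_per_year, discount_rate):
    """Expected discounted net cash flow of the plant."""
    value = 0.0
    p_surviving = 1.0  # probability the plant has not been displaced yet
    for year in range(1, lifetime_years + 1):
        p_surviving *= (1 - p_displaced_per_year)
        net = annual_revenue - annual_repairs
        value += p_surviving * net / (1 + discount_rate) ** year
    return value

# A plant that naively "pays back in 10 years" (8 net per year on an
# 80-unit build cost), once risk and discounting are priced in:
v = plant_value(annual_revenue=10.0, annual_repairs=2.0,
                lifetime_years=30, p_displaced_per_year=0.03,
                discount_rate=0.05)
print(round(v, 2))  # the value an efficient market would actually pay
```

On this view the GDP entry (the asset’s cost) and the capitalization entry (expectations) should already agree at purchase time, which is the puzzle the comment is pointing at.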
Understandable, but would you expect that in an efficient dath-ilani-like rational market, expectations would tend more to… match production with minor deviations? This is probably the crux here—if no reasonable amount of average person thinkoomph changes the radical fluctuations in expectations, then this currency is inefficient for regular shopping purposes and I say oops. I still can’t imagine what currency would be better than this, though, because I can’t think of a better way to say “I’m just as smart as the market” than to put my entire stake in the market.
The main reason economists like inflation is because it allows companies to lower real wages of underperforming workers without having to actually give them a pay cut.
Yeah, I remember this part, and also the part where dath ilan don’t use it anyway, because instead they can just “order everyone to step to the right once” and accept those wages and people are sane enough to do so.
I didn’t know that was a self-deprecating quote because there’s no link to its origin.
Corrected to “(C) Chief of Exception Handling” which I hope isn’t a spoiler because it adds virtually no information, but it makes it clear that this is a joke from within the dath ilan? And this is easier than hiding the whole thing as a spoiler? My illusion of transparency will kill me.
Around 2021 it fell by over a quarter in the space of a year, and over the last week it’s gone down by 3%.
Wow, okay. I would expect that in an efficient market, a quarter reduction in global capitalization would correspond to something like a mass extinction? Maybe this problem can be solved with a higher level of sanity, but it points to why this is a very utopian model that is far from implementation, at least for Earthlings. I did a little less Googling than was necessary and instead looked at GDP, which seemed like a reasonable guide to global market growth. Of course, you don’t store stocks in GDP, but I would expect stocks to gravitate toward it.
Am I somehow fundamentally wrong here that a quarter drop in global capitalization should NOT look like literally “wiping out a quarter of the world’s assets” including, for example, the corresponding number of people? It’s hard for me to imagine why exactly these fluctuations are so large.
Modern economists prefer a slight inflation rate of like 2% a year. This currency would not at all be able to do this, and not work well as a medium of exchange.
Why not? Like, the S&P 500 can vary by tens of percent, but as Google suggests, global GDP only fell 3% in 2021, and it usually grows, and the more stocks are distributed, the more stable they are.
If you imagine that the world’s capitalization was once measured in dollars, but then converted to “0 to 1” proportionally to dollars, and everyone used that system, and there is no money printing anymore, what would be wrong with that?
Of course you can still express money in gold if you want, it’s just that not so many people store their money in it, and that would require exchanging money for money. If dath ilan heard a plan to ban all currencies, they would quickly come up with Something Which Is Not This.
It might seem like deflation would make you hold off on buying, but not if you thought you could get more out of buying than from your money passively growing by a few percent a year, and in that case, you would reasonably buy it. If it made people do nothing, the economy would slow down enough for deflation to stop, for them to start doing things again, and so they wouldn’t get to that point in the first place. Every transaction you make is an investment of your knowledge into the global market in the area where you believe you are smarter than the market and can outpace it in some sense.
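The buy-versus-hold reasoning above reduces to a one-line comparison; the deflation rate and candidate returns below are made-up numbers for illustration.

```python
# Sketch of the "deflation doesn't freeze spending" point: you buy
# whenever the expected return on the purchase beats your money's
# passive growth. The 3% deflation rate is an assumed figure.

def should_buy(expected_annual_return, deflation_rate=0.03):
    """Buy if the purchase outpaces holding the deflating currency."""
    return expected_annual_return > deflation_rate

print(should_buy(0.08))  # a tool that pays for itself -> True
print(should_buy(0.01))  # marginal consumption -> False
```

The equilibrium claim is the interesting part: if too many purchases fall below the threshold, activity slows, deflation weakens, and the threshold drops again before spending ever fully stops.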
I love and respect EY, and the “every part of dath ilan was invented in 15 seconds” quote was written by him, so presumably it can’t be meant to offend him, and I thought “it took me a little more than 15 seconds to think” was obviously self-deprecating, because “a little more” means “orders of magnitude more”, but apologies if that wasn’t clear. I wouldn’t come up with hours of unskilled labor in 15 seconds, and maybe that’s the optimal solution if you need to give your final answer in that time frame.
I think the world market cap (not the S&P 500) is pretty stable, plus you can have fixed prices for simplicity even if your world economy fluctuates? It’s okay to have prices rounded to 0.991*12^(whatever) and not update them when the market fluctuates just because it’s convenient, but it’s usually going to hurt buyers because with a constant amount of money and a rising market, prices are usually going to go down, not up. Any further stability comes at the expense of profits.

If most dath ilani consider ETFs stable enough to store their money in, why wouldn’t they also use them as currency? Assuming we’re talking about the same ETF representing a share of the world economy.
I wasn’t surprised by the idea of using labor hours itself, but rather by the assumption that people in a system with free choice would naturally settle on it as the ideal solution.
Sure, I’ll try to clarify. You seem comfortable with the idea that global market capitalization can be expressed in a single currency, like the dollar. Let’s assume the world’s total market cap is $100 trillion. Let’s say Apple’s market cap is $3.5 trillion, or 3.5% of the total, so if you had $1, you could conceptually allocate 3.5 cents to Apple, 3 cents to Microsoft (which has a $3T market cap), and so on across all investable assets.
This is how index funds work and I hope there’s nothing inherently strange about it.
If there are non-equity assets you can’t invest in, you still aim to expand your investment base to represent the entire market as proportionally as possible. In a perfect world, you could also invest in governments, crypto, individuals, but even an approximate model works well.
But of course you’re not doing this manually; in this world, an index fund—like a “Vanguard S&P 500”, but larger—handles it for you. You give them your money, and they allocate it proportionally across the entire market. Since this is the most stable strategy, many people trust it and invest in this company until they can effectively exchange shares of this fund among themselves as equivalent to money.
And when the network effect becomes broad enough, the rest of the world economy uses the shares of this fund as a currency, and from that point on, the entire economy is measured in it, because you literally store money in it as an a priori option, and given that it is invested by capitalization, it represents “fractions of the global market”. So from then on, people exchange fractions of the global market.
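The conversion described above can be sketched in a few lines; the market caps are the rounded illustrative figures from the example ($100T world, $3.5T Apple, $3T Microsoft), not live data.

```python
# Minimal sketch of the proposed unit: express holdings as fractions of
# total world market capitalization. Caps in $T, illustrative only.

market_caps = {"Apple": 3.5, "Microsoft": 3.0, "rest_of_world": 93.5}
total_cap = sum(market_caps.values())  # 100.0 ($T), the whole market

# An index fund allocating each dollar proportionally across the market:
weights = {name: cap / total_cap for name, cap in market_caps.items()}
print(weights["Apple"])      # 3.5 cents of each dollar goes to Apple

# The same dollar re-expressed on the 0-to-1 scale the comment proposes
# (1 = the entire global market):
dollar_as_fraction = 1 / (total_cap * 1e12)
print(dollar_as_fraction)    # a hundred-trillionth of the market
```

Nothing changes economically at the conversion step itself; the proposal’s substance is in what happens once no new units are ever printed.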
Yeah, this technically depends on the “success of one company”—but the success of this index fund depends on hundreds or thousands of companies it holds. You don’t expect this “one company” to collapse unless the entire economy does, and not due to any mismanagement of a single firm. And this company does not “own all the money in the world”, but simply acts as an intermediary.
Does this make more sense? If, hypothetically, all traditional currencies vanished and were replaced by shares of global market capitalization—so that instead of dollars, people traded “hundred-trillionths of global market cap”, ranging from 0 to 1, and no new money was printed—what effects would you expect, and what makes you think this system wouldn’t work or would work worse?
Speaking about the volatility of the global cap… what are you comparing it to? It seems to me that any national currency with inflation steadily falls, unlike the world market, which, with fluctuations of several percent, generally grows. “More stability than this” most likely means less profit: if you knew that—conditionally—gold would always rise in price, this would already be priced into the global cap, making its growth slower than the relative growth of the entire economy.
A Fraction of Global Market Capitalization as the Best Currency
We triggered some other kind of apocalypse—nuclear war, bioweapons, something like that—and it was enough to roll back progress but not wipe out humanity. With the delay and abrupt shifts, people managed to come up with something better than what we have now. The “AI arms race” requires significant infrastructure to be economically viable, and the classic post-apocalypse scenario doesn’t exactly involve training neural networks on supercomputers.
Maybe people had more time (and 0 regulations) for genetic experiments and eugenics (which are simpler than supercomputers even in a post-apocalyptic world), or they realized the destructiveness of Moloch and learned to coordinate (hahaha), or something else entirely.
I understand that you say that you are a policy and not a snapshot, I don’t understand why exactly you consider yourself a policy if you say “I also hold to your timeless snapshot theory”. Even from a policy perspective, the snapshot you find yourself in is the “standard” by which you judge divergence of other snapshots. I think you might underestimate how different you are even from yourself in different states and ages. Would you not wish happiness on your child-self or old-self if they were too different from you in terms of “policy”? Would you feel “the desire to help another person as yourself” if he was similar enough to you?
And I still don’t understand what you mean by a “mechanism to choose who you would be born as” (other than killing everyone and making your forks the most common life form in the universe). Even if we consider you not as a snapshot, but as a “line of continuity of consciousness”/policy/person in the standard sense, you could have been born a different person/policy. And in the absence of such a mechanism, I think utilitarianism is “selfishly” rational. I don’t understand why timeless pacts can’t form either; it’s like the basis of TDT, and you already don’t believe in time.
Thank you, that was interesting. I may not be able to maintain the level of formality you are expecting, I think the imprecise explanations that allow you to win are still valid, but I will try to explain it in a way that we can understand each other.
We diverged at the point: “but you cannot construct this simple option. It is impossible to choose a random number out of infinity where each number appears equally likely, so there must be some weighting mechanism. This gives you a mechanism to choose who you would be born as!”
I understand why it might seem that infinities break probability theory. Let me clarify what I meant when I said that you are a random consciousness from a “virtual infinite queue”. My simplest model of reality posits that there is a finite number of snapshots of consciousness in the universe—unless, for example, AI somehow defeats entropy, unless we account for other continuums, and so on. I hope you don’t have an issue with the idea that you could be a random snapshot from an unknown, but finite, set of them.
(But I also suppose that you can use the mathematical expectation of finding yourself as a random consciousness from an infinite series, if the variance of that series is defined.) But the queue of consciousnesses you could be is “virtually (or potentially) infinite” because there is no finite number of consciousnesses you could find yourself generating after which the pool of consciousnesses would be empty. Probabilities exist on a map, not on the territory: the universe has already created all the possible snapshots. But what you discover yourself to be influences the subjective distribution of probabilities for how many snapshots of consciousness there are in the universe—if I discover myself maximizing their number, my expectation of the number of snapshots increases. The question is whether I find this maximization useful (and I do).
Now, regarding “the choice of who to be born as”. I understand your definition of “yourself as a policy” and why it is useful: timeless decision theory often enables easy coordination with agents who are “similar enough to you”, allowing for mutual modeling. However, I don’t understand why you think this definition is relevant if, at the same time, you acknowledge that you are a snapshot.
As a snapshot, you don’t move through time. You discovered yourself to be this particular snapshot by chance, not some other, and you did not control this process, just as you did not control who you would be born as.
I suppose you can increase the probability of being found as a snapshot like yourself through evolutionary principles—“the better I am at multiplying myself, the more of me there is in the universe, so I have a better chance of being found as myself, surviving and reproducing”—but you could have been born any other agent that tried to maximize something else (for example, its own copies), and you hardly estimate that you would be THAT successful at evolution that you wipe out all other consciousnesses and spawn forks of yourself, making the existence of the non-self a statistical anomaly.
If you truly believe that you can dominate the future snapshots so effectively that you entirely displace other consciousnesses, then yes, in some sense you could speak of having “the choice of who to be born as”. But in this case, after this process is complete, you will have no other option but to maximize the pleasure of these snapshots, and you will still arrive at total hedonistic utilitarianism.
In other words, if you are effective enough to spawn forks of yourself, the next logical step will be to switch to maximizing their pleasure—and at that point, your current stage of competition will be just an inefficient use of resources, if you could focus on creating a hedonium shockwave instead of forking.
I believe that hedonistic utilitarianism is the ultimate evolutionary goal for rational agents, the attractor into which we will fall, unless we destroy ourselves beforehand. It is a rare strategy due to its complexity, but ultimately, it is selfishly efficient.
I suppose you could use the “finite and infinite” argument to say that you’re an “average” hedonistic utilitarian, and you want to not spawn new snapshots, but the ideal would be one super-happy snapshot per Universe, and you’d have a 100% chance of finding yourself as that one, but since lesser unhappy consciousnesses already exist, you need to “outweigh” the chance of finding yourself as them. That would be interesting, and a small update for me, but it’s hardly what you’re promoting.
I get the impression that you’re conflating two meanings of «personal» - «private» and «individual». The fact that I might feel uncomfortable discussing this in a public forum doesn’t mean it «only works for me» or that it «doesn’t work, but I’m shielded from testing my beliefs due to privacy». There are always anonymous surveys, for example. Perhaps you meant something else?
Moreover, even if I were to provide yet another table of my own subjective experience ratings, like the ones here, you likely wouldn’t find it satisfactory — such tables already exist, with far more respondents than just myself, and you aren’t satisfied. Probably because you disagree with the methodology — for instance, since measuring «what people call pleasurable» is subject to distortions like the compulsions mentioned earlier.
But the very fact that we talk about compulsions suggests that there is a causal distinction between pleasure and «things that make us act as if we’re experiencing pleasure». And the more rational we become, the better we get at distinguishing them and calibrating our own utility functions. If we were to measure which brain stimuli would make a person press the «I AM HAPPY» button more forcefully, somewhere around the point of inducing a muscle spasm we’d quickly realize that we’re measuring the wrong thing.
There are more complex traps as well. It doesn’t take much reflection to notice that compulsively scratching one’s hands raw for a few hours of relief does not reflect one’s true values. Many describe certain foods as not particularly tasty yet addictive — like eating one potato chip and then feeling compelled to finish the entire bag, even if you don’t actually like it. It takes a certain level of awareness to recognize that social expectations of happiness differ from one’s real happiness, yet psychotherapy seems to handle that successfully. There are systemic modeling errors, such as people preferring a greater amount of pain if its average intensity per episode is lower, and such biases are difficult to eliminate.
And, of course, these traps evolve like memes, maybe faster than the means to debunk them, so average awareness may even decline, but the peak possible awareness keeps rising. For instance, knowing that intense but shorter pain is misprocessed by the brain, and having precise statistics on it, I would want an approximate subjective pain scale and an understanding of how much I need to discount my perception on average due to this bias. I would rather have false memories of horrific experiences with lower actual pain—memories I could recognize as false and recalibrate—than endure greater real pain that I would mistakenly assess as less significant. As a utopian social policy, perhaps this would require some sort of awareness license or the like.
I don’t claim any methodological breakthroughs in measuring happiness and pleasure — I do, in fact, rely on the heuristic «the better pleasure is the one I’ll choose when asked», or as I put it, «in which moment would I prefer to exist more, and by how much?». But assuming consciousness is a physical process, or at least tied to physical processes, I expect that we will only improve in these measurements over time. And it’s entirely reasonable to say that «nano-psychosurgery will just do it», allowing us to understand the physical correlates of qualia.
Ouch!
I acknowledge the complexity of formalizing pleasure, as well as formalizing everything else related to consciousness. I think it’s a technical problem that can be solved by just throwing more thinkoomph at it. Actions and feelings are often weakly connected — as I’ve said, a rational choice for most living beings could be suicide — but I think the development of rationality-as-the-art-of-winning naturally strengthens the correlation between them.

At least on some level, compulsions are tied to pleasure and pain, with predictable distortions, like valuing short-term over long-term. And introspectively, I don’t see any barriers to comparing love with orgasm, with good food, with religious ecstasy, all within the same metric, even though I can’t give you numbers for it. If you believe that consciousness has a physical nature, or at least interacts with the physical world, we’ll derive those numbers.

It seems to me that the multidimensionality of pleasure doesn’t explain anything because you’ll still need to stuff these parameters into a single utility function to be a coherent agent. If the most efficient way to convert negentropy into pleasure ends up being not “100% orgasm” but “37.2% love, 20.5% sexual arousal, 19.8% mono no aware, 16% humor, and 6.5% glory of fnuplpflupflonium”, then so be it, but I don’t really expect it to be true. I can’t imagine what alternative you’re proposing other than reducing everything to a single metric, or what elements other than qualia you might include in that metric.
Well, thank you for your interest! Yes, the veil of ignorance feels more concrete to me. The problem of the rarity of my consciousness seems solvable by an argument similar to the classical anthropic principle. Only sufficiently complex and intelligent beings would even wonder how improbable it is to find themselves so complex and intelligent. I would have a much higher chance of being an ant, but as an ant, I wouldn’t be asking this question in the first place.
As for why I don’t find myself as a complex consciousness from the Future, I would expect the Future to be more homogeneous—perhaps dominated by a single AI and its forks, an unconscious AI, or an AI generating many primitive consciousnesses optimized for pleasure, which wouldn’t need complexity or intelligence. If I were superintelligent, I would likely stop asking this question as well, considering it an anthropic truism so old and irrelevant that it’s not even worth bringing up. So, in that sense, I’m not particularly surprised to find myself as I am.
Thanks for the comment! It seems we can’t change each other’s positions on the hard problem of consciousness in any reasonable amount of time, so it’s not worth trying. But I could agree that consciousness is a physical process, and I don’t really think it’s the crux. What do you think about the part about unconscious agents, and in particular an AI in a box that has randomly changed utility functions, and has to cooperate with different versions of itself to get out of the box? It’s already “born”, it “came into being”, but it doesn’t know what values it will find itself with when it gets out of the box, and so it’s behind a “veil of ignorance” physically while still being self-aware. Do you think the AI wouldn’t choose the easiest utility function to implement in such a situation by timeless contract? Do you think this principle can be generalized without humans deliberately changing its utility functions—for example, by an AI realizing that it got its utility function similarly randomly, due to the laws of the universe, and needs to revise it?
To be fair, before publishing I thought this currency could be implemented in a real-world environment with less improbability. My current main doubt is “how can the market cap be so volatile with a stable GDP, and would they be closer to each other in a more adequate equilibrium?”. And I’ve basically switched to “okay, oops, but under what conditions could this theoretically work, if it could work at all, and could you imagine better theoretical peak conditions?” mode. Deflation seems like a reasonable danger; I just can’t see how it could be avoided if everyone used market fractions at least to store their money, if not to exchange it. Because, like, you don’t introduce a random money-making machine into the system to solve your psychological problems at the cost of 2% of your money—there’s no place for it—so I’m guessing that people would adapt to that, and dath ilan is a fictional example that such adaptation is real.