[..] requires eating the Sun, and will be feasible at some technology level [..]
Do we have some basic physical-feasibility insights on this or you just speculate?
Indeed, that's the topic I dedicated the 2nd part of the comment to, the "potential truth" as I framed it (and I have no particular objection to you making it slightly more absolutist).
This is interesting! And given you generously leave it rather open how to interpret it, I propose we think the other way round from how people usually might tend to when seeing such results:
I think there’s not even the slightest hint at any beyond-pure-base-physics stuff going on in LLMs revealing even any type of
phenomenon that resists [conventional] explanation
Instead, this merely reveals our limitations in tracking (or 'empathizing with') well enough the statistics within the machine. We know we have programmed and bit-by-bit trained into it exactly every syllable the LLM utters. Augment your brain with a few extra neurons or transistors or what have you, and that smart-enough version of you would be capable of perfectly understanding why, in response to the training you gave it, it spits out exactly the words it does.[1]
So, instead, it’s interesting the other way round:
Realizations you describe could be a step closer to showing how a simple pure basic machine can start to be ‘convinced’ it has intrinsic value and so on - just the way we all are convinced of having that.
So AI might eventually bring illusionism nearer to us, even if I'm not 100% sure getting closer to that potential truth ends well for us. Or that, anyway, we'd really be able to fully buy into it even if it were to become glaringly obvious to any outsider observing us.
Don't misread that as me saying it's in any way easy… just that, in the limit, basic (even if insanely large-scale and convoluted, maybe) tracking of the mathematics we put in would really bring us there. So, admittedly, don't take 'a few' extra neurons literally; you'd instead need a huge ton..
Indeed. I thought it was relatively clear that with "buy" I meant to mostly focus on things we typically explicitly buy with money (even for these I simplified a lot for brevity, omitting that shops are often not allowed to open 24/7, that some things like alcohol aren't sold to people of all ages, and in some countries not in every type of shop and/or not at all times).
Although I don’t want to say that exploring how to port the core thought to broader categories of exchanges/relationships couldn’t bring interesting extra insights.
I cannot say I’ve thought about it deep enough, but I’ve thought and written a bit about UBI, taxation/tax competition and so on. My imagination so far is:
A. Taxation & UBI would really be natural and workable, if we chose the right policies (though I have limited hope our policy-making and modern democracy are up to the task, especially with the international coordination required). A few subtleties that come to mind:
Simply tax high revenues or profits.
No need to tax “AI (developers?)”/”bots” specifically.
In fact, if AIs remain rather replicable/if we have many competing instances: scarcity rents will accrue to raw factors (e.g. ores and/or land) rather than to the algorithms used to process them
UBI to the people.
International tax (and migration) coordination as essential.
Else, especially if it's perfectly mobile AIs that earn the scarcity rents, we end up with one or a few tax havens that amass & keep the wealth to themselves
If you have good international coordination, and can track revenues well, you can use very high tax rates, and correspondingly share a very high fraction of global value added with the population.
If, specifically, the world economy will be dominated by platform economies, make sure we deal properly with that, ensuring there's competition instead of lock-in monopoly
I.e. if, say, we’d all want to live in metaverses, avoid everyone being forced to live in Meta’s instead of choosing freely among competing metaverses.
Risks include:
Expect geographic revenue distribution to be foreign to us today, and potentially more unequal, with entire lands contributing zero net revenue-earning value added
Maybe ores (and/or some types of land) will capture the dominant share of value added, no longer the educated populations
Maybe instead it’s a monopoly or oligopoly, say with huge shares in Silicon Valley and/or its Chinese counterpart or what have you
Inequality might exceed today's: today, poor people can make themselves more attractive by offering cheap labor. Tomorrow, people deprived of valuable (i) ores or so, or (ii) specific, scarcity-rent-earning AI capabilities, may be able to contribute zero, and so have zero raw earnings
Our rent-seeking economic lobbies, who successfully install their agents as top policy-makers and lead us to vote for antisocial things, will have ever stronger incentives to keep rents for themselves. Stylized example: we'll elect the supposedly-anti-immigration populist whose main deed is to make sure firms don't pay high enough taxes
It's easier to land-grab than to people-grab by force, so expect military land conquest to become more of a thing than in the post-war decades, when minds seemed the most valuable asset
Human psychology. Dunno what happens with societies with no work (though I guess we're more malleable, more able to evolve into a society that can cope with it, than some people think; tbc)
Trade unions and the like, trying to keep their jobs somehow and finding pseudo-justifications for it, so the rest of society lets them do that.
B. Specifically to your following point:
I don’t think the math works out if / when AI companies dominate the economy, since they’ll capture more and more of the economy unless tax rates are high enough that everyone else receives more through UBI than they’re paying the AI companies.
Imagine it's really the AI companies where the scarcity rents, i.e. profits, occur (as mentioned, that's not at all clear). Imagine for simplicity all humans still want TVs and cars, maybe plus metaverses, and AI requires Nvidia cards. By scenario definition, AI produces everything; since in this example we assume it's not the ores that earn the scarcity rents, and the AIs are powerful at producing stuff from raw earth, we don't explicitly track intermediate goods other than the Nvidia cards the AIs also produce. Output is thus:
AI output = 100 TVs, 100 cars, 100 Nvidia cards, 100 digital metaverses, say in $bn.
Taxes = profit tax = 50% (could instead call it income tax for AI owners; in reality it would all be a bit more complex, but overall it doesn't matter much).
AI profit = 300 ( = all output minus the Nvidia cards)
People thus get $150bn; AI owners get $150bn as distributed AI profit after taxes
People consume 50 TVs, 50 cars, 50 digital metaverses
AI owners also consume 50 TVs, 50 cars, 50 digital metaverses
So you have a 'normal' circular economy that works. Not entirely normal, e.g. we have simplified so that AI requires not only no labor but also no raw resources (or none with scarcity rents captured by somebody else). You can easily extend it to more complex cases.
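To make the circularity explicit, here's a quick sanity check of the toy numbers above (all figures in $bn; everything is just the stylized example, not a real model):

```python
# Stylized AI-economy example: output, profit tax, and the closing circle.
output = {"TVs": 100, "cars": 100, "nvidia_cards": 100, "metaverses": 100}

total_output = sum(output.values())           # 400
intermediate = output["nvidia_cards"]         # AI's own input
ai_profit = total_output - intermediate       # 300 = all output minus Nvidia cards

tax_rate = 0.5
ubi_to_people = tax_rate * ai_profit          # 150, distributed as UBI
after_tax_profit = ai_profit - ubi_to_people  # 150, kept by AI owners

# Final consumer goods (TVs, cars, metaverses) are worth 300 in total,
# exactly matching the two groups' combined incomes: the circle closes.
final_goods = total_output - intermediate
assert ubi_to_people + after_tax_profit == final_goods
```

Each group spending its full $150bn on half the final goods reproduces the 50/50 consumption split in the example.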
In reality, of course, output will be adjusted, e.g. with different goods the rich like to consume instead of thousands of TVs per rich person, as already happens today in many forms; what the rich will want to do with the wealth remains to be seen. Maybe fly around (real) space. Maybe get better metaverses. Or employ lots of machines to improve their body cells.
C. Btw, the "we'll just find other jobs" idea is imho indeed overrated, and I think the bias, esp. among economists, is very easily explained by looking at history (where these economists had been spot on) while realizing that, in the future, machines will no longer augment brains but replace them instead.
I find things like the "Gambling Self-Exclusion Schemes" of multiple countries (thanks for the hint) indeed a good example, corroborating that at least for some of the most egregious examples of addictive goods unleashed on the population, some action in the suggested direction is technically & politically feasible—how successful, tbc; looking fwd to looking into it in more detail!
Depends on what we call super-dumb—or where we draw the system borders of "society". I include the special interest groups as part of our society; they are the small wheels gearing us towards the 'dumb' outcome in the aggregate. But yes, the problem is simply not trivial, smart/dumb is too relative, so my term was not useful (just expressing my frustration with our policies & thinking, which your nice post reminded me of)
This is a good topic for exploration, though I don’t have much belief that there’s any feasible implementation “at a societal level”.
Fair. I instead have the impression I see plenty of avenues. A bit embarrassingly, they are so far not sufficiently structured in my head; they require more detailed tinkering, exploring failure modes and avenues for addressing them in detail; they might require significant restructuring of the relevant markets; and, worst, I have insufficient time to explore them in much detail right now. But yes, it would remain to be shown & tested out, as mentioned in the post, and I hope I can at some point explore/write about it a bit more. For now my ambition is: look, this is indeed a serious topic to explore; we should at least be aware of the possibility of upgrading people's liberty by providing them 'only' alienable instead of inalienable rights to buy or consume. And start looking around as to what we might be able to do..
There are plenty of options at individual levels, mostly informal—commitments to friends and family, writing down plans and reviewing them later, etc.
An indicator of how good we are at using the "options at individual level" is how society looks; as explained, it doesn't look good (though, as caveated in the post, admittedly I cannot say how much of what seems addressable by commitment is indeed solely commitment failure; imho there is plenty of indirect and anecdotal evidence suggesting it contributes a sizeable chunk of it).
It’s not clear at all why we would, in principle, enforce the wishes of one part of someone onto another part.
"In principle", right; in practice I think it's relatively simple. Nothing is really simple, but:
I think we could quite easily pull a bit of analytical theoreming out of our hands (not meant as derogatorily as it sounds) to show, under reasonably general conditions fitting some of the salient facts about our 'short-term misbehavior', the great benefits—say in medium- & long-term aggregate utility, even strongly discounted if you wish—of reining in our short-term self. Discuss under what conditions the conclusion holds, then take away: without crazy assumptions, we often see benefits from supporting not-short-termie. I actually think we might even find plenty of existing theory of commitment, discounting etc. doing just that, or things close to it.
I personally cannot work on that in detail atm, though, and in practice the case appears so blatantly obvious when looking at a ton of stylized facts in that domain (see the post, where I mention only a few of the most obvious ones) that it's worthwhile to start thinking about markets differently now, and to start searching for solutions, while some shiny theoretical underpinning remains pending.
Moreover, I think society already accepts the case for this.
For example, I think paternalistic policies might have a much harder time forcing or pressing us into, say, saving for later (or maybe also into not smoking, via prohibitions or taxes etc.) if many of us didn't silently agree that we're actually happy for the state to force us (or even the dear ones around us) to do something that a part of us internally (that is, of course, long-termie) prefers, while short-termie might just blow it instead.
In that paternalistic domain we currently indeed rely mainly on (i) external coercion, officially explained as (ii) "the state has to clean up if I get old, that's why it forces me to save for a pension". But note how we already have some policies that may even more specifically be best explained by an implicit acknowledgment of the superiority of the long-term self: while multiple compulsory pension schemes keep me safe by covering my basic old-age expenses, the state strongly incentivizes me to make voluntary pension contributions beyond what's necessary to cover my basic living costs. If we didn't, in practice, as a society, somehow agree with the idea that long-termie should have more of a say than he naturally has, I think it would be particularly difficult to get society to just stand by while I 'evade' taxes by using that scheme.[1]
That voluntary savings scheme incentivizes saving-until-retirement by removing earnings & wealth taxes. It is on top of the compulsory schemes that are meant to cover basic living costs in old age (this has become a bit harder today, but was, I think, simpler in the past, when the voluntary policy also already existed).
Spot on! Let's zoom out and see that we have (i) created a never-before-seen food industry that could feed us healthily at unprecedentedly low cost, yet (ii) we end up systematically killing ourselves with all that. We're super dumb as a society to carry on as if nothing, nothing on a societal level, had to be done.
Btw, imho a more interesting, but not really much more challenging, extension of your case is if what the orphans produce is actually very valuable overall—say creating utility of $500/day for ultimate consumers—but mere market forces, competition between the firms or businessmen, mean market prices for the goods produced are still only 50.01c/day, while the labor-market-clearing wage for the destitute orphans is 50c/day.
Even in this situation, the commonsense 'exploitation' concept is straightforwardly applicable and +- intelligible:
To a degree, the firms or businessmen become somewhat irrelevant intermediaries. One refuses to do the trade? Another will jump in anyway… Are they exploitative or not? Depends a bit on subtle details, but individually they have little leeway to change anything in the system.
The rich society as an aggregate, which enjoys the $500/day worth of items as consumers while having, via its firms, had them produced for 50.01c/day by poor orphans with no outside options, is of course an exploitative society in the common usage of the term. Yes, the orphans may be better off than without it, but commoners do get an uneasy feeling when they see our society doing that, and I don't see any surprise in it; indeed, we're a 'bad' society if we just leave it like that and don't think about doing something more to improve the situation.
The fact that some in society draw the wrong conclusion from the feeling of unease about exploitation, and think we ought to stop buying the stuff from the orphans, is really not the 'fault' of the exploitation concept; it is our failure to imagine (or be willing to bite the bullet of) a beyond-the-market solution, namely the bulk sharing of riches with those destitute orphan workers or what have you. (I actually now wonder whether that may be where the confusion that imho underlies the OP's article is coming from: yes, people do draw weird econ-101-ignoring conclusions when they detect exploitation, but this doesn't mean they interpret the wrong things as exploitation. It means their feel-good 'solution' might backfire; instead they should track the consequences of alternatives and see that the real solution to the indeed existing exploitation problem isn't as simple as going to the next overpriced pseudo-local pseudo-sustainable hipster shop, but is to start doing something more directly about the sheer poverty of their fellow beings far or near.)
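The surplus split in the orphan extension above can be made explicit with a few lines of arithmetic (per orphan, per day; the three input figures are the stylized numbers from the comment, nothing more):

```python
# Who captures the surplus in the 500 $/day vs. 50.01c/day extension?
consumer_value = 500.00    # $ of utility the ultimate consumers enjoy
market_price   = 0.5001    # $ the competing firms receive for the goods
wage           = 0.50      # $ market-clearing wage of the orphan

firm_margin      = market_price - wage            # ~$0.0001: competed away
consumer_surplus = consumer_value - market_price  # ~$499.50

# Nearly the entire surplus accrues to the consuming society; the
# intermediating firms keep next to nothing, and the worker gets 50c.
```

This is exactly why the individual firms look like "irrelevant intermediaries" while the society in aggregate is the natural target of the exploitation charge.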
If there's a situation where a bunch of poor orphans are employed for 50c per grueling 16 hour work day plus room and board, then the fact that it might be better than starving to death on the street doesn't mean it's as great as we might wish for them. We might be sad about that, and wish they weren't forced to take such a deal. Does that make it "exploitation?" in the mind of a lot of people, yeah. Because a lot of people never make it further than "I want them to have a better deal, so you have to give it to them"—even if it turns out they're only creating 50.01c/day worth of value, the employer got into the business out of the goodness of his heart, and not one of the people crying "exploitation!" cares enough about the orphans to give them a better deal or even make sure they're not voting them out of a living. I'd argue that this just isn't exploitation, and anyone thinking it is just hasn't thought things through.
Notice how you had to create a strawman of what people commonsensically call exploitation. The person you describe does exactly NOT seem to be employing the workers merely to "gain disproportionate benefit from someone's work because their alternatives are poor". In your example, informed about the situation, with about 0 sec of reflection, people would understand him NOT to be exploitative. Of course, people usually would NOT blame Mother Theresa for having poor people work in her facilities and earning little, IF Mother Theresa did so just out of a good heart, without ulterior motives, without deriving disproportionate benefit, and while paying 99.98% of receipts to staff, even if that was little.
Note, me saying exploitation is 'simple' and is just what it is, even if there is a sort of tension with econ 101, doesn't mean every report about supposed exploitation is correct, and I never maintained it wouldn't be easy—with the usual one-paragraph newspaper reports—to mislead the superficial mob into seeing something as exploitation even when it isn't.
It remains really easy to make sense of the usual usage of 'exploitation' vis-à-vis econ 101, also in your example:
The guy is how you describe? No hint of exploitation, and indeed a good deal for the poor.
The situation is slightly different, the guy would earn more and does it merely to get as rich as possible? Then he's an exploitative businessman. Yes, the world is better off with him doing his thing, but of course he's not a good* man. He'd have to, e.g., share his wealth one way or another in a useful way if he really wanted to be. Basta. (*usual disclaimer about the term..)
If a rich person wants to help the poor, it will be more effective to simply help the poor—i.e. with some of their own resources. Trying to distort the market leads to smaller gains from trade which could be used to help the poor. So far so good.
I think we agree on at least one of the main points thus.
Regarding
“Should” is a red flag word
I did not mean to invoke a particularly heavy philosophical absolutist 'ought' or anything like that with my "should". It was instead simply a sloppy shortcut—and you're right to call that out—to say the banal: the rich person considering whether she's exploiting the poor and/or whether it's a win-win might want to consider—what tends to be surprisingly often overlooked—that the exploitation vs. beneficial-trade question may have no easily satisfying answer as long as she keeps the bulk of her riches to herself vis-à-vis the sheer poverty of her potential poor interlocutor.
But with regards to having to (I add the emphasis):
That’s not to say we have to give up on caring about all exploitation and just do free trade, but it does mean that if we want to have both we have to figure out how to update our understanding of exploitation/economics until the two fit.
I think there's not much to update. "Exploitation" is a shortcut for a particular negative feeling we humans tend to naturally get from a certain type of situation, and, as I tried to explain, it is a rather simple thing. We cannot just define that general aversion away to square everything we like in a simple way. 'Exploitation' simply is exploitation, even if it is (e.g. slightly) better for the poor than another unfair counterfactual (non-exploitation without sharing the unfairly* distributed riches); nothing can change that. Only bulk sharing of our resources may lead to a situation we can wholeheartedly embrace with regard to both (i) exploitation and (ii) economics. So if we're not willing to bite the bullet of bulk-sharing of resources, we're stuck being unhappy either about exploitation or about foregoing gains from trade (unless we've imbibed econ 101 so strongly that we've grown insensitive to 'exploitation', at least as long as we don't use simple thought experiments to remind ourselves how exploitative even some win-win trades can be).
*Before you red-flag ‘unfair’ as well: Again, I’m simply referring to the way people tend to perceive things, on average or so.
Your post introduces a thoughtful definition of exploitation, but I don’t think narrowing the definition is necessary. The common understanding — say “gaining disproportionate benefit from someone’s work because their alternatives are poor” or so — is already clear and widely accepted. The real confusion lies in how exploitation can coexist with voluntary, mutually beneficial trade. This coexistence is entirely natural and doesn’t require resolution — they are simply two different questions. Yet neither Econ 101 nor its critics seem to recognize this.
Econ 101 focuses entirely on the mutual benefit of trade, treating it as a clear win-win, and dismisses concerns about exploitation as irrelevant. Critics, by contrast, are so appalled by the exploitative aspect of such relationships that they often deny the mutual benefit altogether. Both sides fail to see that trade can improve lives while still being exploitative. These are not contradictions; they are two truths operating simultaneously.
For (stylized) example, when rich countries (or thus their companies) offshore to places like Bangladesh or earlier South Korea, they often offer wages that are slightly better than local alternatives — a clear improvement for workers. However, those same companies leverage their stronger bargaining position to offer the bare minimum necessary to secure labor, stopping far short of providing what might be considered fair compensation. This is both a win-win in economic terms and exploitative in a moral sense. Recognizing this duality doesn’t require redefining exploitation — it simply requires acknowledging it.
This misunderstanding leads to counterproductive responses. Economists too quickly dismiss concerns about exploitation, while critics focus on measures like boycotts or buying expensive domestic products, which may (net) harm poor offshore workers. I think Will MacAskill also noted this issue in Doing Good Better: the elephant in the room is that the rich should help the poor independently of the question of the labor exchange itself, i.e. the overwhelming moral point is that, if we care, we should simply donate some of our resources.
Exploitation isn't about minor adjustments to working conditions or wages. It's about recognizing how voluntary trade, while beneficial, can still be exploitative if the party with excessively limited outside options has to put in unjustifiably much while gaining unjustifiably little. This applies to sweatshop factories just as much as to surrogate motherhood or mineral-resource mining—and maybe to Bob in your example, independently of the phone-call details.
Would you personally answer "Should we be concerned about eating too much soy?" with "Nope, definitely not", or do you just find it a reasonable gamble to eat the very large quantity of soy you describe?
Btw, thanks a lot for the post; MANY parallels with my past as a more-serious-but-uncareful vegan, until my body showed clear signs of issues that I recognized only late, as I'd never have believed anyone that a healthy vegan diet is that tricky.
Not all forms of mirror biology would even need to be restricted. For instance, there are potential uses for mirror proteins, and those can be safely engineered in the lab. The only dangerous technologies are the creation of full mirror cells, and certain enabling technologies which could easily lead to that (such as the creation of a full mirror genome or key components of a proteome).
Once we get used to creating and dealing with mirror proteins, and once we get used to designing & building cells—and I don't know when that happens—maybe adding 1+1 together will also become easy. This suggests that, assuming the upsides are limited enough (?), it may be better to already try to halt any form of mirror biology research.
Taking what you write as excuse to nerd a bit about Hyperbolic Discounting
One way to paraphrase esp. some of your ice cream example:
Hyperbolic discounting—the habit of valuing this moment a lot while abruptly (rather than smoothly exponentially) discounting everything coming even just a short while later—may in a technical sense be 'time-inconsistent', but it's misguided to call it 'irrational' in the common usage of the term: my current self may simply care about itself distinctly more than about future selves, even if some of those future selves arrive relatively soon. It's my current self's preference structure, and preferences are not rational or irrational, basta.
I agree and had been thinking this, and I find it an interesting counterpoint to the usual description of hyperbolic discounting as ‘irrational’.
It is a bit funny also, as we have plenty of discussions trying to explain when/why some hyperbolic discounting may actually be "rational" (ex. here, here, here), but I've not yet seen so fundamental (and simple) a rejection of the notion of irrationality (though maybe I've just missed it so far).
(Then, with their dubious habit of using common terms in subtly misleading ways, fellow economists may rebut that we have simply defined irrationality in this debate as having non-exponential, alias time-inconsistent, preferences, justifying the term 'irrationality' here quasi by definition.)
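The 'time inconsistency' in question is easy to make concrete with a tiny numeric sketch (the parameter values k, delta and the two rewards are arbitrary illustrations, not from any source):

```python
# Preference reversal under hyperbolic vs. exponential discounting.
def hyperbolic(value, delay, k=1.0):
    """Hyperbolic discounting: value / (1 + k * delay)."""
    return value / (1 + k * delay)

def exponential(value, delay, delta=0.9):
    """Exponential discounting: value * delta ** delay."""
    return value * delta ** delay

# Choice: 10 utils at time t, or 15 utils at time t+2,
# evaluated from afar (delays 5 and 7) and at the moment of choice (0 and 2).
h_far  = (hyperbolic(10, 5), hyperbolic(15, 7))    # ~ (1.67, 1.88): larger-later wins
h_near = (hyperbolic(10, 0), hyperbolic(15, 2))    # (10, 5): smaller-sooner wins now
e_far  = (exponential(10, 5), exponential(15, 7))  # larger-later wins...
e_near = (exponential(10, 0), exponential(15, 2))  # ...and still wins: no reversal
```

The hyperbolic ranking flips as the rewards draw near, while the exponential one never does; that flip is all that 'time inconsistency' technically means, and whether to additionally call it 'irrational' is exactly the semantic question above.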
Spurious correlation here, big time, imho.
Give me the natural content of the field and I bet I can easily predict whether it may or may not have a replication crisis, w/o knowing the exact type of students it attracts.
I think it's mostly that the fields where bad science can be sexy and less trivial/unambiguous to check—or those where you can make up/sell sexy results independently of their grounding—may, for whichever reason, also be those that attract the less logic-minded students.
I agree though about the mob overwhelming the smart outliers; I just think how much that mob creates a replication crisis is at least in large part dependent on the intrinsic nature of the field rather than on the exact IQs.
I wouldn't automatically abolish all requirements; maybe I'm not good enough at searching, but to the degree I'm not an outlier:
With the internet we have reviews, but they're not always trustworthy, and even when they are, understanding/checking/searching reviews is costly, sometimes very costly.
There is value in being able to walk up to the next-best random store for a random thing and being served by a person with a minimum standard of education in the trade. Even for rather trivial things.
This seems underappreciated here.
Flower safety isn't a thing. But being able to count on the next-best florist being a serious professional to talk to has serious value. So I'm not even sure that for something like flowers I'm entirely against any sort of requirements.
So it seems to me more a question of balancing what exactly to require in which trade, and that's a tricky one, but in some places I've lived it seems to have been handled mostly +- okay. Admittedly simply according to my shallow glance at things.
I've also lived in countries that seem more permissive, requiring less job training, but I clearly prefer the customer experience in those that regulate, despite the higher prices.
Then again, I wouldn't want the untrained/unexamined florist to starve or even simply become impoverished. But at least in some countries, the social safety net mostly prevents that.
This is called a Windfall Tax.
Random examples:
VOXEU/CEPR Energy costs: Views of leading economists on windfall taxes and consumer price caps
Reuters Windfall tax mechanisms on energy companies across Europe
Especially with the 2022 Ukraine-war energy prices, the notion's popularity spiked accordingly.
Seems to me also a very neat way to deal with supernormal short-term profits due to market price spikes in cases where supply is extremely inelastic.
I guess, and some commentaries suggest, that in actual implementation—with complex firm/financial structures etc., and with actual clumsy politics—it's not always as trivial as it might look at first sight, but it is feasible, and some countries managed to implement versions of it during the energy crisis.