Someone who is interested in learning and doing good.
My Twitter: https://twitter.com/MatthewJBar
My Substack: https://matthewbarnett.substack.com/
Regarding this argument,
And as a matter of hard fact, most governments operate a fairly Georgist system with oil exploration and extraction, or just about any mining activities, i.e. they auction off licences to explore and extract.
The winning bid for the licence must, by definition, be approx. equal to the rental value of the site (or the rights to do certain things at the site). And the winning bid, if calculated correctly, will leave the company with a good profit on its operations in future, and as a matter of fact, most mining companies and most oil companies make profits, end of discussion, there is no disincentive for exploration at all.
Or do you think that when Western oil companies rock up in Saudi Arabia, that the Saudis don’t make them pay every cent for the value of the land/natural resources? The Western oil companies just get to keep the additional profits made by extracting, refining, shipping the stuff.
I may be misunderstanding their argument, but it seems to be overstated and overlooks some obvious counterpoints. For one, the fact that new oil discoveries continue to occur in the modern world does not strongly support the claim that existing policies have no disincentive effect. Taxes and certain poorly-designed property rights structures typically reduce economic activity rather than eliminating it entirely.
In other words, disincentives usually result in diminished productivity, not a complete halt to it. Applying this reasoning here, I would frame my argument as implying that under a land value tax, oil and other valuable resources, such as minerals, would still be discovered. However, the frequency of these discoveries would likely be lower compared to the counterfactual because the incentive to invest effort and resources into the discovery process would be weakened as a result of the tax.
Secondly, and more importantly, countries like Saudi Arabia (and other Gulf states) presumably have strong incentives to uncover natural oil reserves for essentially the same reason that a private landowner would: discovering oil makes them wealthier. The key difference between our current system (as described in the comment) and a hypothetical system under a naive land value tax (as described in the post) lies in how these incentives and abilities would function.
Under the current system, governments are free to invest resources in surveying and discovering oil reserves on government-owned property. In contrast, under a naive LVT system, the government would lack the legal ability to survey for oil on privately owned land without the landowner’s permission, even though they’d receive the rental income from this private property via the tax. At the same time, such an LVT would also undermine the incentives for private landowners themselves to search for oil, as the economic payoff for their efforts would be diminished. This means that the very economic actors that could give the government permission to survey the land would have no incentive to let the government do so.
This creates a scenario where neither the government nor private landowners are properly incentivized to discover oil, which seems clearly worse than the present system—assuming my interpretation of the current situation is correct.
Of course, the government could in theory compensate private landowners for discovery efforts, mitigating this flaw in the LVT, but then this just seems like the “patch” to the naive LVT that I talked about in the post.
Thanks for the correction. I’ve now modified the post to cite the World Bank as estimating the true fraction of wealth targeted by an LVT at 13%, which reflects my new understanding of their accounting methodology.
Since 13% is more than double the 6% figure I previously used, this significantly updates me on the viability of a land value tax and its ability to replace other taxes. I weakened my language in the post to reflect this personal update.
That said, nearly all of the arguments I made in the post remain valid regardless of this specific 13% estimate. Additionally, I expect this figure would be significantly revised downward in practice. This is because the tax base for a naive implementation of the LVT would need to be substantially reduced in order to address and eliminate the economic distortions that such a straightforward version of the tax would create. However, I want to emphasize that your comment still provides an important correction.
My revised figure comes from the following explanation given in their report. From ‘The Changing Wealth of Nations 2021’, page 438:
Drawing on Kunte et al. (1998), urban land is valued as a fixed proportion of the value of physical capital. Ideally, this proportion would be country specific. In practice, detailed national balance sheet information with which to compute these ratios was not available. Thus, as in Kunte et al (1998), a constant proportion equal to 24 percent is assumed; therefore the value of urban land is estimated as 24 percent of produced capital stock (machinery, equipment, and structures) in a given year.
To ensure transparency, I will detail the calculations I used to arrive at this figure below:
Total global wealth: $1,152,005 billion
Natural capital: $64,542 billion
Produced capital: $359,267 billion
Human capital: $732,179 billion
Urban land: Calculated as 24% of produced capital, which is 0.24 × $359,267 billion = $86,224.08 billion
Adding natural capital and urban land together gives:
$64,542 billion + $86,224.08 billion = $150,766.08 billion
To calculate the fraction of total wealth represented by natural capital and urban land, we divide this sum by total wealth:
$150,766.08 billion ÷ $1,152,005 billion ≈ 0.1309 (or about 13%)
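For convenience, here is the same arithmetic as a short Python sketch (the dollar figures, in billions, are the World Bank’s, and the 24% urban-land ratio comes from the passage quoted above):

```python
# Reproducing the calculation above; dollar figures are in billions.
total_wealth = 1_152_005
natural_capital = 64_542
produced_capital = 359_267

# The World Bank values urban land at 24% of produced capital.
urban_land = 0.24 * produced_capital      # ≈ 86,224.08

lvt_base = natural_capital + urban_land   # ≈ 150,766.08
fraction = lvt_base / total_wealth        # ≈ 0.1309

print(f"Share of total wealth targeted by an LVT: {fraction:.1%}")
```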
Ideally, I would prefer to rely on an alternative authoritative source to confirm or refine this analysis. However, I was unable to find another suitable source with comparable authority and detail. For this reason, I will continue to use the World Bank’s figures for now, despite the limitations in their methodology.
Here you aren’t just making an argument against LVT. You’re making a more general argument for keeping housing prices high, and maybe even rising (because people might count on that). But high and rising housing prices make lots of people homeless, and the threat of homelessness plays a big role in propping up these prices. So in effect, many people’s retirement plans depend on keeping many other people homeless, and fixing that (by LVT or otherwise) is deemed too disruptive. This does have a certain logic to it, but also it sounds like a bad equilibrium.
I agree this argument could be generalized in the way you suggest, but I want to distinguish between,
Keeping housing prices artificially high by maintaining zoning regulations that act as a barrier to economic growth, in particular by restricting the development of new housing that would drive down the price of existing housing if it were allowed to be constructed.
Keeping the value of property held in land high by not confiscating the ~full rental value of land from people.
While I agree the first policy “does have a certain logic to it”, it also seems more straightforwardly bad than the second approach, since it more directly makes society poorer in order to maintain existing people’s wealth. Moreover, abandoning the first policy does not appear to involve much reneging on prior commitments, unless you interpret local governments as “committing” to keep restrictive zoning regulations for an entire community indefinitely. Even if people indeed interpret governments as making such commitments, I assume most people more strongly interpret the government as making more explicit commitments not to suddenly confiscate people’s property.
I want to emphasize this distinction because a key element of my argument is that I am not relying on a “fairness” objection to LVT in that part of the post. My point is not about whether imposing an LVT would be unfair to people who expected it to never happen, and purchased land under that assumption. If fairness were my only argument, I agree that your response would weaken my position. However, my argument in that section focuses instead on the inefficiency that comes from forcing people to adapt to new economic circumstances unnecessarily.
Here’s why the distinction matters: if we were to abandon restrictive zoning policies and allow more housing to be built, it’s similarly true that many people would face costs as they adapt to the resulting changes. However, this disruption seems like it would likely be offset—more than adequately—by the significant economic growth and welfare gains that would follow from increasing the housing supply. In contrast, adopting a land value tax would force a sudden and large disruption, but without many apparent corresponding benefits to justify these costs. This point becomes clearer if we accept the argument that LVT operates essentially as a zero-sum wealth transfer. In that case, it’s highly questionable whether the benefits of implementing such a tax would outweigh the harm caused by the forced adaptation.
It may be worth elaborating on how you think auctions work to mitigate the issues I’ve identified. If you are referring to either a Vickrey auction or a Harberger tax system, Bryan Caplan has provided arguments for why these proposals do not seem to solve the issue regarding the disincentive to discover new uses for land:
I can explain our argument with a simple example. Clever Georgists propose a regime where property owners self-assess the value of their property, subject to the constraint that owners must sell their property to anyone who offers that self-assessed value. Now suppose you own a vacant lot with oil underneath; the present value of the oil minus the cost of extraction equals $1M. How will you self-assess? As long as the value of your land is public information, you cannot safely self-assess at anything less than its full value of $1M. So you self-assess at $1M, pay the Georgist tax (say 99%), and pump the oil anyway, right?
There’s just one problem: While the Georgist tax has no effect on the incentive to pump discovered oil, it has a devastating effect on the incentive to discover oil in the first place. Suppose you could find a $1M well by spending $900k on exploration. With a 99% Georgist tax, your expected profits are negative $890k. (.01*$1M-$900k=-$890k)
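To spell out the arithmetic in Caplan’s example, here is a minimal sketch; the $1M well value, $900k exploration cost, and 99% tax rate are all taken from the quote above:

```python
# Profit from oil exploration under a 99% Georgist tax on the
# self-assessed land value, using the numbers from Caplan's example.
well_value = 1_000_000       # present value of the oil, net of extraction costs
exploration_cost = 900_000   # cost of finding the well in the first place
tax_rate = 0.99              # Georgist tax on the self-assessed value

profit = (1 - tax_rate) * well_value - exploration_cost
print(f"{profit:,.0f}")      # prints -890,000: exploration no longer pays
```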
While I did agree that Linch’s comment reasonably accurately summarized my post, I don’t think a large part of my post was about the idea that we should now think that human values are much simpler than Yudkowsky portrayed them to be. Instead, I believe this section from Linch’s comment does a better job at conveying what I intended to be the main point,
Suppose in 2000 you were told that a 100-line Python program (that doesn’t abuse any of the particular complexities embedded elsewhere in Python) can provide a perfect specification of human values. Then you should rationally conclude that human values aren’t actually all that complex (more complex than the clean mathematical statement, but simpler than almost everything else).
In such a world, if inner alignment is solved, you can “just” train a superintelligent AI to “optimize for the results of that Python program” and you’d get a superintelligent AI with human values.
Notably, alignment isn’t solved by itself. You still need to get the superintelligent AI to actually optimize for that Python program and not some random other thing that happens to have low predictive loss in training on that program.
Well, in 2023 we have that Python program, with a few relaxations:
The answer isn’t embedded in 100 lines of Python, but in a subset of the weights of GPT-4
Notably the human value function (as expressed by GPT-4) is necessarily significantly simpler than the weights of GPT-4, as GPT-4 knows so much more than just human values.
What we have now isn’t a perfect specification of human values, but instead roughly the level of understanding of human values that an 85th percentile human can come up with.
The primary point I intended to emphasize is not that human values are fundamentally simple, but rather that we now have something else important: an explicit and cheaply computable representation of human values that can be directly utilized in AI development. This is a major step forward because it allows us to incorporate these values into programs in a way that provides clear and accurate feedback during processes like RLHF. This explicitness and legibility are critical for designing aligned AI systems, as they enable developers to work with a tangible and faithful specification of human values rather than relying on poor proxies that clearly do not track the full breadth and depth of what humans care about.
The fact that the underlying values may be relatively simple is less important than the fact that we can now operationalize them, in a way that reflects human judgement fairly well. Having a specification that is clear, structured, and usable means we are better equipped to train AI systems to share those values. This representation serves as a foundation for ensuring that the AI optimizes for what we actually care about, rather than inadvertently optimizing for proxies or unrelated objectives that merely correlate with training signals. In essence, the true significance lies in having a practical, actionable specification of human values that can actively guide the creation of future AI, not just in observing that these values may be less complex than previously assumed.
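As a loose, purely illustrative sketch of what I mean by using an explicit value representation as a training signal: the score_outcome function below is a hypothetical placeholder (standing in for something like an LLM-based judge), not a real model or API.

```python
from typing import Callable, List, Tuple

def score_outcome(text: str) -> float:
    """Hypothetical stand-in for a cheaply computable value model
    (e.g., an LLM-based judge) that rates how good a response is."""
    return float(len(text) % 7)  # placeholder scoring, for illustration only

def label_for_rlhf(prompt: str,
                   responses: List[str],
                   value_fn: Callable[[str], float]) -> List[Tuple[str, str, float]]:
    """Use the explicit value function as a training signal: attach a scalar
    reward to each candidate response, producing the kind of labeled data
    used for reward modeling or RLHF-style fine-tuning."""
    return [(prompt, response, value_fn(response)) for response in responses]

labels = label_for_rlhf("How should I reply?", ["option A", "longer option B"], score_outcome)
print(labels)
```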
Similar constraints may apply to AIs unless one gets much smarter much more quickly, as you say.
I do think that AIs will eventually get much smarter than humans, and this implies that artificial minds will likely capture the majority of wealth and power in the world in the future. However, I don’t think the way that we get to that state will necessarily be because the AIs staged a coup. I find more lawful and smooth transitions more likely.
There are alternative means of accumulating power than taking everything by force. AIs could get rights and then work within our existing systems to achieve their objectives. Our institutions could continuously evolve with increasing AI presence, becoming more directed by AIs with time.
What I’m objecting to is the inevitability of a sudden collapse when “the AI” decides to take over in an untimely coup. I’m proposing that there could just be a smoother, albeit rapid transition to a post-AGI world. Our institutions and laws could simply adjust to incorporate AIs into the system, rather than being obliterated by surprise once the AIs coordinate an all-out assault.
In this scenario, human influence will decline, eventually quite far. Perhaps this soon takes us all the way to the situation you described in which humans will become like stray dogs or cats in our current world: utterly at the whim of more powerful beings who do not share their desires.
However, I think that scenario is only one possibility. Another possibility is that humans could enhance their own cognition to better keep up with the world. After all, we’re talking about a scenario in which AIs are rapidly advancing technology and science. Could humans not share in some of that prosperity?
One more possibility is that, unlike cats and dogs, humans could continue to communicate legibly with the AIs and stay relevant for reasons of legal and cultural tradition, as well as some forms of trade. Our current institutions didn’t descend from institutions constructed by stray cats and dogs. There was no stray animal civilization that we inherited our laws and traditions from. But perhaps if our institutions did originate in this way, then cats and dogs would hold a higher position in our society.
There are enormous hurdles preventing the U.S. military from overthrowing the civilian government.
The confusion in your statement is caused by blocking up all the members of the armed forces in the term “U.S. military”. Principally, a coup is an act of coordination.
Is it your contention that similar constraints will not apply to AIs?
When people talk about how “the AI” will launch a coup in the future, I think they’re making essentially the same mistake you talk about here. They’re treating a potentially vast group of AI entities — like a billion copies of GPT-7 — as if they form a single, unified force, all working seamlessly toward one objective, as a monolithic agent. But just like with your description of human affairs, this view overlooks the coordination challenges that would naturally arise among such a massive number of entities. They’re imagining these AIs could bypass the complex logistics of organizing a coup, evading detection, and maintaining control after launching a war without facing any relevant obstacles or costs, even though humans routinely face these challenges amongst ourselves.
In these discussions, I think there’s an implicit assumption that AIs would automatically operate outside the usual norms, laws, and constraints that govern social behavior. The idea is that all the ordinary rules of society will simply stop applying, because we’re talking about AIs.
Yet I think this simple idea is basically wrong, for essentially the same reasons you identified for human institutions.
Of course, AIs will be different in numerous ways from humans, and AIs will eventually be far smarter and more competent than humans. This matters. Because AIs will be very capable, it makes sense to think that artificial minds will one day hold the majority of wealth, power, and social status in our world. But these facts alone don’t show that the usual constraints that prevent coups and revolutions will simply go away. Just because AIs are smart doesn’t mean they’ll necessarily use force and violently revolt to achieve their goals. Just like humans, they’ll probably have other avenues available for pursuing their objectives.
Asteroid impact
Type of estimate: best model
Estimate: ~0.02% per decade.
Perhaps worth noting: this estimate seems too low to me over longer horizons than the next 10 years, given the potential for asteroid terrorism later this century. I’m significantly more worried about asteroids being directed towards Earth purposely than I am about natural asteroid paths.
That said, my guess is that purposeful asteroid deflection probably won’t advance much in the next 10 years, at least without AGI. So 0.02% is still a reasonable estimate if we don’t get accelerated technological development soon.
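For reference, here is how the baseline per-decade figure compounds over a century, assuming the risk is constant and independent across decades (this arithmetic ignores the deliberate-deflection worry above):

```python
# Cumulative probability of at least one impact over a century, assuming a
# constant, independent 0.02% risk per decade (natural impacts only).
p_per_decade = 0.0002
n_decades = 10

p_century = 1 - (1 - p_per_decade) ** n_decades
print(f"{p_century:.2%}")  # ≈ 0.20% over the century
```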
Does trade here just mean humans consuming, i.e. trading money for AI goods and services? That doesn’t sound like trading in the usual sense, where it is a reciprocal exchange of goods and services.
Trade can involve anything that someone “owns”, including their labor, their property, and any government welfare they receive. Retired people are generally characterized by trading their property and government welfare for goods and services, rather than primarily trading their labor. This is the basic picture I was trying to present.
How many ‘different’ AI individuals do you expect there to be?
I think the answer to this question depends on how we individuate AIs. I don’t think most AIs will be as cleanly separable from each other as humans are, as most (non-robotic) AIs will lack bodies, and will be able to share information with each other more easily than humans can. It’s a bit like asking how many “ant units” there are. There are many individual ants per colony, but each colony can be treated as a unit by itself. I suppose the real answer is that it depends on context and what you’re trying to figure out by asking the question.
A commonly heard recent viewpoint on the development of AI states that AI will be economically impactful but will not upend the dominance of humans. Instead, AI and humans will flourish together, trading and cooperating with one another. This view is particularly popular with a certain kind of libertarian economist: Tyler Cowen, Matthew Barnett, Robin Hanson.
They share the curious conviction that the probability of AI-caused extinction p(Doom) is negligible. They base this on analogizing AI with previous technological transitions of humanity, like the industrial revolution or the development of new communication mediums. A core assumption/argument is that AI will not disempower humanity because they will respect the existing legal system, apparently because they can gain from trades with humans.
I think this summarizes my view quite poorly on a number of points. For example, I think that:
AI is likely to be much more impactful than the development of new communication mediums. My default prediction is that AI will fundamentally increase the economic growth rate, rather than merely continuing the trend of the last few centuries.
Biological humans are very unlikely to remain dominant in the future, pretty much no matter how this is measured. Instead, I predict that artificial minds and humans who upgrade their cognition will likely capture the majority of future wealth, political influence, and social power, with non-upgraded biological humans becoming an increasingly small force in the world over time.
The legal system will likely evolve to cope with the challenges of incorporating and integrating non-human minds. This will likely involve a series of fundamental reforms, and will eventually look very different from the idea of “AIs will fit neatly into human social roles and obey human-controlled institutions indefinitely”.
A more accurate description of my view is that humans will become economically obsolete after AGI, but this obsolescence will happen peacefully, without a massive genocide of biological humans. In the scenario I find most likely, humans will have time to prepare and adapt to the changing world, allowing us to secure a comfortable retirement, and/or join the AIs via mind uploading. Trade between AIs and humans will likely persist even into our retirement, but this doesn’t mean that humans will own everything or control the whole legal system forever.
How could one control AI without access to the hardware/software? What would stop one with access to the hardware/software from controlling AI?
One would gain control by renting access to the model, i.e., the same way you can control what an instance of ChatGPT currently does. Here, I am referring to practical control over the actual behavior of the AI: determining what the AI does, such as what tasks it performs, how it is fine-tuned, or what inputs are fed into the model.
This is not too dissimilar from the high level of practical control one can exercise over, for example, an AWS server that they rent. While Amazon may host these servers, and thereby have the final say over what happens to the computer in the case of a conflict, the company is nonetheless inherently dependent on customer revenue, meaning it cannot feasibly use all of its servers privately for its own internal purposes. As a consequence of this practical constraint, Amazon rents these servers out to the public and does not substantially limit user control over them, leaving end-users with wide discretion over what software is ultimately run.
In the future, these controls could also be determined by contracts and law, analogously to how one has control over their own bank account, despite the bank providing the service and hosting one’s account. Then, even in the case of a conflict, the entity that merely hosts an AI may not have practical control over what happens, as they may have legal obligations to their customers that they cannot breach without incurring enormous costs to themselves. The AIs themselves may resist such a breach as well.
In practice, I agree these distinctions may be hard to recognize. There may be a case in which we thought that control over AI was decentralized, but in fact, power over the AIs was more concentrated or unified than we believed, as a consequence of centralization over the development or the provision of AI services. Indeed, perhaps real control was always in the hands of the government all along, as they could always choose to pass a law to nationalize AI, and take control away from the companies.
Nonetheless, these cases seem adequately described as a mistake in our perception of who was “really in control” rather than an error in the framework I provided, which was mostly an attempt to offer careful distinctions, rather than to predict how the future will go.
If one actor—such as OpenAI—can feasibly get away with seizing practical control over all the AIs they host without incurring high costs to the continuity of their business through loss of customers, then this indeed may surprise someone who assumed that OpenAI was operating under different constraints. However, this scenario still fits nicely within the framework I’ve provided, as it merely describes a case in which one was mistaken about the true degree of concentration along one axis, rather than a case of my concepts intrinsically fitting reality poorly.
It is not always an expression of selfish motives when people take a stance against genocide. I would even go as far as saying that, in the majority of cases, people genuinely have non-selfish motives when taking that position. That is, they actually do care, to at least some degree, about the genocide, beyond the fact that signaling their concern helps them fit in with their friend group.
Nonetheless, and this is important: few people are willing to pay substantial selfish costs in order to prevent genocides that are socially distant from them.
The theory I am advancing here does not rest on the idea that people aren’t genuine in their desire for faraway strangers to be better off. Rather, my theory is that people generally care little about such strangers, when helping those strangers trades off significantly against objectives that are closer to themselves, their family, friend group, and their own tribe.
Or, put another way, distant strangers usually get little weight in our utility function. Our family, and our own happiness, by contrast, usually get a much larger weight.
The core element of my theory concerns the amount that people care about themselves (and their family, friends, and tribe) versus other people, not whether they care about other people at all.
While the term “outer alignment” wasn’t coined until later to describe the exact issue that I’m talking about, I was using that term purely as a descriptive label for the problem this post clearly highlights, rather than implying that you were using or aware of the term in 2007.
Because I was simply using “outer alignment” in this descriptive sense, I reject the notion that my comment was anachronistic. I used that term as shorthand for the thing I was talking about, which is clearly and obviously portrayed by your post, that’s all.
To be very clear: the exact problem I am talking about is the inherent challenge of precisely defining what you want or intend, especially (though not exclusively) in the context of designing a utility function. This difficulty arises because, when the desired outcome is complex, it becomes nearly impossible to perfectly delineate between all potential ‘good’ scenarios and all possible ‘bad’ scenarios. This challenge has been a recurring theme in discussions of alignment, as it’s considered hard to capture every nuance of what you want in your specification without missing an edge case.
This problem is manifestly portrayed by your post, using the example of an outcome pump to illustrate. I was responding to this portrayal of the problem, and specifically saying that this specific narrow problem seems easier in light of LLMs, for particular reasons.
It is frankly frustrating to me that, from my perspective, you seem to have reliably missed the point of what I am trying to convey here.
I only brought up Christiano-style proposals because I thought you were changing the topic to a broader discussion, specifically to ask me what methodologies I had in mind when I made particular points. If you had not asked me “So would you care to spell out what clever methodology you think invalidates what you take to be the larger point of this post—though of course it has no bearing on the actual point that this post makes?” then I would not have mentioned those things. In any case, none of the things I said about Christiano-style proposals were intended to critique this post’s narrow point. I was responding to that particular part of your comment instead.
As far as the actual content of this post, I do not dispute its exact thesis. The post seems to be a parable, not a detailed argument with a clear conclusion. The parable seems interesting to me. It also doesn’t seem wrong, in any strict sense. However, I do think that some of the broader conclusions that many people have drawn from the parable seem false, in context. I was responding to the specific way that this post had been applied and interpreted in broader arguments about AI alignment.
My central thesis in regards to this post is simply: the post clearly portrays a specific problem that was later called the “outer alignment” problem by other people. This post portrays this problem as being difficult in a particular way. And I think this portrayal is misleading, even if the literal parable holds up in pure isolation.
Matthew is not disputing this point, as far as I can tell.
Instead, he is trying to critique some version of[1] the “larger argument” (mentioned in the May 2024 update to this post) in which this point plays a role.
I’ll confirm that I’m not saying this post’s exact thesis is false. This post seems to be largely a parable about a fictional device, rather than an explicit argument with premises and clear conclusions. I’m not saying the parable is wrong. Parables are rarely “wrong” in a strict sense, and I am not disputing this parable’s conclusion.
However, I am saying: this parable presumably played some role in the “larger” argument that MIRI has made in the past. What role did it play? Well, I think a good guess is that it portrayed the difficulty of precisely specifying what you want or intend, for example when explicitly designing a utility function. This problem was often alleged to be difficult because, when you want something complex, it’s difficult to perfectly delineate potential “good” scenarios and distinguish them from all potential “bad” scenarios. This is the problem I was analyzing in my original comment.
While the term “outer alignment” was not invented to describe this exact problem until much later, I was using that term purely as descriptive terminology for the problem this post clearly describes, rather than claiming that Eliezer in 2007 was deliberately describing something that he called “outer alignment” at the time. Because my usage of “outer alignment” was merely descriptive in this sense, I reject the idea that my comment was anachronistic.
And again: I am not claiming that this post is inaccurate in isolation. In both my above comment, and in my 2023 post, I merely cited this post as portraying an aspect of the problem that I was talking about, rather than saying something like “this particular post’s conclusion is wrong”. I think the fact that the post doesn’t really have a clear thesis in the first place means that it can’t be wrong in a strong sense at all. However, the post was definitely interpreted as explaining some part of why alignment is hard — for a long time by many people — and I was critiquing the particular application of the post to this argument, rather than the post itself in isolation.
The object-level content of these norms is different in different cultures and subcultures and times, for sure. But the special way that we relate to these norms has an innate aspect; it’s not just a logical consequence of existing and having goals etc. How do I know? Well, the hypothesis “if X is generally a good idea, then we’ll internalize X and consider not-X to be dreadfully wrong and condemnable” is easily falsified by considering any other aspect of life that doesn’t involve what other people will think of you.
To be clear, I didn’t mean to propose the specific mechanism of: if some behavior has a selfish consequence, then people will internalize that class of behaviors in moral terms rather than in purely practical terms. In other words, I am not saying that all relevant behaviors get internalized this way. I agree that only some behaviors are internalized by people in moral terms, and other behaviors do not get internalized in terms of moral principles in the way I described.
Admittedly, my statement was imprecise, but my intention in that quote was merely to convey that people tend to internalize certain behaviors in terms of moral principles, which explains the fact that people don’t immediately abandon their habits when the environment suddenly shifts. However, I was silent on the question of which selfishly useful behaviors get internalized this way and which ones don’t.
A good starting hypothesis is that people internalize certain behaviors in moral terms if they are taught to see those behaviors in moral terms. This ties into your theory that people “have an innate drive to notice, internalize, endorse, and take pride in following social norms”. We are not taught to see “reaching into your wallet and shredding a dollar” as impinging on moral principles, so people don’t tend to internalize the behavior that way. Yet, we are taught to see punching someone in the face as impinging on a moral principle. However, this hypothesis still leaves much to be explained, as it doesn’t tell us which behaviors we will tend to be taught about in moral terms, and which ones we won’t be taught in moral terms.
As a deeper, perhaps evolutionary explanation, I suspect that internalizing certain behaviors in moral terms helps make our commitments to other people more credible: if someone thinks you’re not going to steal from them because you think it’s genuinely wrong to steal, then they’re more likely to trust you with their stuff than if they think you merely recognize the practical utility of not stealing from them. This explanation hints at the idea that we will tend to internalize certain behaviors in moral terms if those behaviors are both selfishly relevant, and important for earning trust among other agents in the world. This is my best guess at what explains the rough outlines of human morality that we see in most societies.
I’m not sure what “largely” means here. I hope we can agree that our objectives are selfish in some ways and unselfish in other ways.
Parents generally like their children, above and beyond the fact that their children might give them yummy food and shelter in old age. People generally form friendships, and want their friends to not get tortured, above and beyond the fact that having their friends not get tortured could lead to more yummy food and shelter later on. Etc.
In that sentence, I meant “largely selfish” as a stand-in for what I think humans-by-default care overwhelmingly about, which is something like “themselves, their family, their friends, and their tribe, in rough descending order of importance”. The problem is that I am not aware of any word in the English language to describe people who have these desires, except perhaps the word “normal”.
The word selfish usually denotes someone who is preoccupied with their own feelings, and is unconcerned with anyone else. We both agree that humans are not entirely selfish. Nonetheless, the opposite word, altruistic, often denotes someone who is preoccupied with the general social good, and who cares about strangers, not merely their own family and friend circles. This is especially the case in philosophical discussions in which one defines altruism in terms of impartial benevolence to all sentient life, which is extremely far from an accurate description of the typical human.
Humans exist on a spectrum between these two extremes. We are not perfectly selfish, nor are we perfectly altruistic. However, we are generally closer to the ideal of perfect selfishness than to the ideal of perfect altruism, given the fact that our own family, friend group, and tribe tends to be only a small part of the entire world. This is why I used the language of “largely selfish” rather than something else.
The post is about the complexity of what needs to be gotten inside the AI. If you had a perfect blackbox that exactly evaluated the thing-that-needs-to-be-inside-the-AI, this could possibly simplify some particular approaches to alignment, that would still in fact be too hard because nobody has a way of getting an AI to point at anything.
I think it’s important to be able to make a narrow point about outer alignment without needing to defend a broader thesis about the entire alignment problem. To the extent my argument is “outer alignment seems easier than you portrayed it to be in this post, and elsewhere”, then your reply here that inner alignment is still hard doesn’t seem like it particularly rebuts my narrow point.
This post definitely seems to relevantly touch on the question of outer alignment, given the premise that we are explicitly specifying the conditions that the outcome pump needs to satisfy in order for the outcome pump to produce a safe outcome. Explicitly specifying a function that delineates safe from unsafe outcomes is essentially the prototypical case of an outer alignment problem. I was making a point about this aspect of the post, rather than a more general point about how all of alignment is easy.
(It’s possible that you’ll reply to me by saying “I never intended people to interpret me as saying anything about outer alignment in this post” despite the clear portrayal of an outer alignment problem in the post. Even so, I don’t think what you intended really matters that much here. I’m responding to what was clearly and explicitly written, rather than what was in your head at the time, which is unknowable to me.)
One cannot hook up a function to an AI directly; it has to be physically instantiated somehow. For example, the function could be a human pressing a button; and then, any experimentation on the AI’s part to determine what “really” controls the button, will find that administering drugs to the human, or building a robot to seize control of the reward button, is “really” (from the AI’s perspective) the true meaning of the reward button after all! Perhaps you do not have this exact scenario in mind.
It seems you’re assuming here that something like iterated amplification and distillation will simply fail, because the supervisor function that provides rewards to the model can be hacked or deceived. I think my response to this is that I just tend to be more optimistic than you are that we can end up doing safe supervision where the supervisor ~always remains in control, and they can evaluate the AI’s outputs accurately, more-or-less sidestepping the issues you mention here.
I think my reasons for believing this are pretty mundane: I’d point to the fact that evaluation tends to be easier than generation, and the fact that we can employ non-agentic tools to help evaluate, monitor, and control our models to provide them accurate rewards without getting hacked. I think your general pessimism about these things is fairly unwarranted, and my guess is that if you had made specific predictions about this question in the past, about what will happen prior to world-ending AI, these predictions would largely have fared worse than predictions from someone like Paul Christiano.
I’m still kinda confused. You wrote “But across almost all environments, you get positive feedback from being nice to people and thus feel or predict positive valence about these.” I want to translate that as: “All this talk of stabbing people in the back is irrelevant, because there is practically never a situation where it’s in somebody’s self-interest to act unkind and stab someone in the back. So (A) is really just fine!” I don’t think you’d endorse that, right? But it is a possible position—I tend to associate it with @Matthew Barnett. I agree that we should all keep in mind that it’s very possible for people to act kind for self-interested reasons. But I strongly don’t believe that (A) is sufficient for Safe & Beneficial AGI. But I think that you’re already in agreement with me about that, right?
Without carefully reading the above comment chain (forgive me if I need to understand the full discussion here before replying), I would like to clarify what my views are on this particular question, since I was referenced. I think that:
It is possible to construct a stable social and legal environment in which it is in the selfish interests of almost everyone to act in such a way that brings about socially beneficial outcomes. A good example of such an environment is one where theft is illegal and in order to earn money, you have to get a job. This naturally incentivizes people to earn a living by helping others rather than stealing from others, which raises social welfare.
It is not guaranteed that the existing environment will be such that self-interest is aligned with the general public interest. For example, if we make shoplifting de facto legal by never penalizing people who do it, this would impose large social costs on society.
Our current environment has a mix of both of these good and bad features. However, on the whole, in modern prosperous societies during peacetime, it is generally in one’s selfish interest to do things that help rather than hurt other people. This means that, even for psychopaths, it doesn’t usually make selfish sense to go around hurting other people.
Over time, in societies with well-functioning social and legal systems, most people learn that hurting other people doesn’t actually help them selfishly. This causes them to adopt a general presumption against committing violence, theft, and other anti-social acts themselves, as a general principle. This general principle seems to be internalized in most people’s minds as not merely “it is not in your selfish interest to hurt other people” but rather “it is morally wrong to hurt other people”. In other words, people internalize their presumption as a moral principle, rather than as a purely practical principle. This is what prevents people from stabbing each other in the backs immediately once the environment changes.
However, under different environmental conditions, given enough time, people will internalize different moral principles. For example, in an environment in which slaughtering animals becomes illegal and taboo, most people would probably end up internalizing the moral principle that it’s wrong to hurt animals. Under our current environment, very few people internalize this moral principle, but that’s mainly because slaughtering animals is currently legal, and widely accepted.
This all implies that, in an important sense, human morality is not really “in our DNA”, so to speak. Instead, we internalize certain moral principles because those moral principles encode facts about what type of conduct happens to be useful in the real world for achieving our largely selfish objectives. Whenever the environment shifts, so too does human morality. This distinguishes my view from the view that humans are “naturally good” or have empathy-by-default.
Which is not to say that there isn’t some sense in which human morality comes from human DNA. The causal mechanisms here are complicated. People vary in their capacity for empathy and the degree to which they internalize moral principles. However, I think in most contexts, it is more appropriate to look at people’s environment as the determining factor of what morality they end up adopting, rather than thinking about what their genes are.
Competitive capitalism works well for humans who are stuck on a relatively even playing field, and who have some level of empathy and concern for each other.
I think this basically isn’t true, especially the last part. It’s not that humans don’t have some level of empathy for each other; they do. I just don’t think that’s the reason why competitive capitalism works well for humans. I think the reason is instead because people have selfish interests in maintaining the system.
We don’t let Jeff Bezos accumulate billions of dollars purely out of the kindness of our hearts. Indeed, it is often considered far kinder and more empathetic to confiscate his money and redistribute it to the poor. The problem with that approach is that abandoning property rights imposes costs on those who rely on the system to be reliable and predictable. If we were to establish a norm that allowed us to steal unlimited money from Jeff Bezos, many people would reason, “What prevents that norm from being used against me?”
The world pretty much runs on greed and selfishness, rather than kindness. Sure, humans aren’t all selfish, we aren’t all greedy. And few of us are downright evil. But those facts are not as important for explaining why our system works. Our system works because it’s an efficient compromise among people who are largely selfish.
It has come to my attention that this article is currently being misrepresented as proof that I/MIRI previously advocated that it would be very difficult to get machine superintelligences to understand or predict human values. This would obviously be false, and also, is not what is being argued below. The example in the post below is not about an Artificial Intelligence literally at all! If the post were about what AIs supposedly can’t do, the central example would have used an AI! The point that is made below will be about the algorithmic complexity of human values. This point is relevant within a larger argument, because it bears on the complexity of what you need to get an artificial superintelligence to want or value; rather than bearing on what a superintelligence supposedly could not predict or understand. -- EY, May 2024.
I can’t tell whether this update to the post is addressed towards me. However, it seems possible that it is addressed towards me, since I wrote a post last year criticizing some of the ideas behind this post. In either case, whether it’s addressed towards me or not, I’d like to reply to the update.
For the record, I want to definitively clarify that I never interpreted MIRI as arguing that it would be difficult to get a machine superintelligence to understand or predict human values. That was never my thesis, and I spent considerable effort clarifying the fact that this was not my thesis in my post, stating multiple times that I never thought MIRI predicted it would be hard to get an AI to understand human values.
My thesis instead was about a subtly different thing, which is easy to misinterpret if you aren’t reading carefully. I was talking about something which Eliezer called the “value identification problem”, and which had been referenced on Arbital, and in other essays by MIRI, including under a different name than the “value identification problem”. These other names included the “value specification” problem and the problem of “outer alignment” (at least in narrow contexts).
I didn’t expect this much confusion when I wrote the post, because I thought that clarifying what I meant, and repeatedly distinguishing it from things I did not mean, would be sufficient to prevent rampant misinterpretation by so many people. However, evidently, such clarifications were insufficient, and I should have instead gone overboard in my precision and clarity. I think if I re-wrote the post now, I would try to provide something like five independent examples demonstrating how I was talking about a different thing than the problem of getting an AI to “understand” or “predict” human values.
At the very least, I can try now to give a bit more clarification about what I meant, just in case doing this one more time causes the concept to “click” in someone’s mind:
Eliezer doesn’t actually say this in the above post, but his general argument expressed here and elsewhere seems to be that the premise “human value is complex” implies the conclusion: “therefore, it’s hard to get an AI to care about human value”. At least, he seems to think that this premise makes this conclusion significantly more likely.[1]
This seems to be his argument, as otherwise it would be unclear why Eliezer would bring up “complexity of values” in the first place. If the complexity of values had nothing to do with the difficulty of getting an AI to care about human values, then it is baffling why he would bring it up. Clearly, there must be some connection, and I think I am interpreting the connection made here correctly.
However, suppose you have a function that takes as input a state of the world and outputs a number corresponding to how “good” that state of the world is. And further suppose that this function is transparent, legible, and can actually be used in practice to reliably determine the value of a given world state. In other words, you can give the function a world state, and it will spit out a number, which reliably informs you about the value of the world state. I claim that having such a function would simplify the AI alignment problem by reducing it from the hard problem of getting an AI to care about something complex (human value) to the easier problem of getting the AI to care about that particular function (which is simple to point at, as the function can be hooked up to the AI directly).
In other words, if you have a solution to the value identification problem (i.e., you have the function that correctly and transparently rates the value of world states, as I just described), this almost completely sidesteps the problem that “human value is complex and therefore it’s difficult to get an AI to care about human value”. That’s because, if we have a function that directly encodes human value, and can be simply referenced or directly inputted into a computer, then all the AI needs to do is care about maximizing that function rather than maximizing a more complex referent of “human values”. The pointer to “this function” is clearly simple, and in any case, simpler than the idea of all of human value.
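To make the shape of this reduction concrete, here is a toy sketch; value_of_world_state is a hypothetical stand-in for a solved value identification function, and actually getting an AI to optimize it is the inner alignment problem, which the sketch sets aside.

```python
from typing import Callable, Iterable

WorldState = str  # toy representation; a real system would use something far richer

def value_of_world_state(state: WorldState) -> float:
    """Hypothetical stand-in for a solved value identification function:
    a transparent, legible map from world states to how good they are."""
    return float(len(state))  # placeholder scoring, for illustration only

def choose(candidate_states: Iterable[WorldState],
           value_fn: Callable[[WorldState], float]) -> WorldState:
    """If such a function exists, 'caring about human value' reduces to
    pointing the optimizer at it: pick the reachable state it rates highest."""
    return max(candidate_states, key=value_fn)

print(choose(["status quo", "slightly better world"], value_of_world_state))
```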
(This was supposed to narrowly reply to MIRI, by the way. If I were writing a more general point about how LLMs were evidence that alignment might be easy, I would not have focused so heavily on the historical questions about what people said, and I would have instead made simpler points about how GPT-4 seems to straightforwardly try to do what you want, when you tell it to do things.)
My main point was that I thought recent progress in LLMs had demonstrated progress at the problem of building such a function, and solving the value identification problem, and that this progress goes beyond the problem of getting an AI to understand or predict human values. For one thing, an AI that merely understands human values will not necessarily act as a transparent, legible function that will tell you the value of any outcome. However, by contrast, solving the value identification problem would give you such a function. This strongly distinguishes the two problems. These problems are not the same thing. I’d appreciate if people stopped interpreting me as saying one thing when I clearly meant another, separate thing.
This interpretation is supported by the following quote, on Arbital,
Complexity of value is a further idea above and beyond the orthogonality thesis which states that AIs don’t automatically do the right thing and that we can have, e.g., paperclip maximizers. Even if we accept that paperclip maximizers are possible, and simple and nonforced, this wouldn’t yet imply that it’s very difficult to make AIs that do the right thing. If the right thing is very simple to encode—if there are value optimizers that are scarcely more complex than diamond maximizers—then it might not be especially hard to build a nice AI even if not all AIs are nice. Complexity of Value is the further proposition that says, no, this is forseeably quite hard—not because AIs have ‘natural’ anti-nice desires, but because niceness requires a lot of work to specify. [emphasis mine]
It’s common for Georgists to propose a near-100% tax on unimproved land. One can propose a smaller tax to mitigate these disincentives, but that simultaneously shrinks the revenue one would get from the tax, making the proposal less meaningful.