All of these ideas sound awesome and exciting, and precisely the right kind of use of LLMs that I would like to see on LW!
It’s looking like the values of humans are far, far simpler than a lot of evopsych literature and Yudkowsky thought, and related to this, values are less fragile than people thought 15-20 years ago, in the sense that values generalize far better OOD than people used to think 15-20 years ago
I’m not sure I like this argument very much, as it currently stands. It’s not that I believe anything you wrote in this paragraph is wrong per se, but more like this misses the mark a bit in terms of framing.
Yudkowsky had (and, AFAICT, still has) a specific theory of human values in terms of what they mean in a reductionist framework, where it makes sense (and is rather natural) to think of (approximate) utility functions of humans and of Coherent Extrapolated Volition as things-that-exist-in-the-territory.
I think a lot of writing and analysis, summarized by me here, has cast a tremendous amount of doubt on the viability of this way of thinking and has revealed what seem to me to be impossible-to-patch holes at the core of these theories. I do not believe “human values” in the Yudkowskian sense ultimately make sense as a coherent concept that carves reality at the joints; I instead observe a tremendous number of unanswered questions and apparent contradictions that throw the entire edifice into disarray.
But supplementing this reorientation of thinking around what it means to satisfy human values, “prosaic” alignment researchers have pivoted more towards intent alignment and away from doomed-from-the-start paradigms like “learning the true human utility function” or ambitious value learning. They have recognized that realism about (AGI) rationality is likely just straight-up false, that the very specific set of conclusions MIRI-clustered alignment researchers have reached about what AGI cognition will be like is entirely overconfident and seems contradicted by our modern observations of LLMs, and, ultimately, that full value alignment simply is not required for a good AI outcome (or at the very least to prevent AI takeover). So it's not so much that human values (to the extent such a thing makes sense) are simpler, but rather that fulfilling those values is just not needed to nearly as high a degree as people used to think.
Mainly, minecraft isn’t actually out of distribution, LLMs still probably have examples of nice / not-nice minecraft behaviour.
Is this inherently bad? Many of the tasks that will be given to LLMs (or scaffolded versions of them) in the future will involve, at least to some extent, decision-making and processes whose analogues appear somewhere in their training data.
It still seems tremendously useful to see how they would perform in such a situation. At worst, it provides information about a possible upper bound on the alignment of these agentized versions: yes, maybe you're right that you can't say they will perform well in out-of-distribution contexts if all you see are benchmarks and performances on in-distribution tasks; but if they show gross misalignment on tasks that are in-distribution, then this suggests they would likely do even worse when novel problems are presented to them.
a lot of skill ceilings are much higher than you might think, and worth investing in
The former doesn't necessarily imply the latter in general, because even if we are systematically underestimating the realistic upper bound for our skill level in these areas, we would still have to deal with diminishing marginal returns to investing in any particular one. As a result, I am much more confident of the former claim being correct for the average LW reader than of the latter. In practice, my experience tells me that you often have “phase changes” of sorts, where the response to a skill-level increase is binary rather than continuous: either you've hit the activation energy level, and thus unlock the self-reinforcing loop of benefits that flow from the skill (once you can apply it properly and iterate on it or use it recursively), or you haven't, in which case any measurable improvement is minimal. It's thus often more important to get past the critical point than to make marginal improvements either before or after hitting it.
On the other hand, many of the skills you mentioned afterwards in your comment seem relatively general-purpose, so I could totally be off-base in these specific cases.
The document seems to try to argue that Uber cannot possibly become profitable. I would be happy to take a bet that Uber will become profitable within the next 5 years.
This is an otherwise valuable discussion that I'd rather not have on LW, for the standard reasons; it seems a bit too close to the partisan side of the policy/partisanship political discussion divide. I recognize I wrote a comment in reaction to yours (shame on me), and so you were fully within your rights to respond, but I'd rather stop it here.
But rather that if he doesn’t do it, it will be because he doesn’t want to, not because his constituents don’t.
I generally prefer not to dive into the details of partisan politics on LW, but my reading of the comment you are responding to makes me believe that, by “Republicans under his watch”, ChristianKl is referring to Republican politicians/executive appointees and not to Republican voters.
I am not saying I agree with this perspective, just that it seems to make a bit more sense to me in context. The idea would be that Trump has been able to use “leadership” to remake the Republican party in his image and get the party elites to support him only because he has mostly governed as a standard conservative Republican on economic issues (tax cuts for rich people and corporations, attempts to repeal the ACA, deregulation, etc); the symbiotic relationship they enjoy would therefore supposedly have as a prerequisite the idea that Trump would not try to enforce idiosyncratic views on other Republicans too much...
Tabooing words is bad if, by tabooing, you are denying your interlocutors the ability to accurately express the concepts in their minds.
We can split situations where miscommunication about the meaning of words persists despite repeated attempts by all sides to resolve it into three broad categories. On the one hand, you have those that come about because of (explicit or implicit) definitional disputes, such as the famous debate (mentioned in the Sequences) over whether trees that fall make sounds if nobody is around to hear them. Different people might give different responses (literally ‘yes’ vs ‘no’ in this case), but this is simply because they interpret the words involved differently. When you replace the symbol with the substance, you realize that there is no empirical difference in what experiences the two sides anticipate, and thus the entire debate is revealed to be a waste of time. By dissolving the question, you have resolved it.
That does not capture the entire cluster of persistent semantic disagreements, however, because there is a second possible upstream generator of controversy, namely the fact that, oftentimes, the concept itself is confused. This often comes about because one side (or both) reifies or gives undue consideration to a mental construct that does not correspond to reality; perhaps the concept of ‘justice’ is an example of this, or the notion of observer-independent morality (if you subscribe to either moral anti-realism or to the non-mainstream Yudkowskian conception of realism). In this case, it is generally worthwhile to spend the time necessary to bring everyone onto the same page that the concept itself should be abandoned, and we should avoid trying to make sense of reality through frameworks that include it.
But, sometimes, we talk about confusing concepts not because the concepts themselves ultimately do not make sense in the territory (as opposed to our fallible maps), but because we simply lack the gears-level understanding required to make sense of our first-person, sensory experiences. All we can do is bumble around, trying to gesture at what we are confused about (like consciousness, qualia, etc), without the ability to pin it down with surgical precision. Not because our language is inadequate, not[1] because the concept we are homing in on is inherently nonsensical, but because we are like cavemen trying to reason about the nature of the stars in the sky. The stars are not an illusion, but our hypotheses about them (‘they are Gods’ or ‘they are fallen warriors’ etc) are completely incompatible with reality. Not due to language barriers or anything like that, but because we lack the large foundation and body of knowledge needed to even orient ourselves properly around them.
To me, consciousness falls into the third category. If you taboo too many of the most natural, intuitive ways of talking about it, you do not end up with a more careful and precise discussion of the concepts involved. On the contrary, you are instead forcing people who lack the necessary subject-matter knowledge (i.e., arguably all of us) to make up their own hypotheses about how it functions. Of course they will come to different conclusions; after all, the hard problem of consciousness is still far from being settled!
[1] At least not necessarily because of this; you can certainly take an illusionistic perspective on the nature of consciousness.
I agree, but I think this is slightly beside the original points I wanted to make.
I continue to strongly believe that your previous post is methodologically dubious and does not provide an adequate set of explanations of what “humans believe in” when they say “consciousness.” I think the results that you obtained from your surveys are ~ entirely noise generated by forcing people who lack the expertise necessary to have a gears-level model of consciousness (i.e., literally all people in existence now or in the past) to talk about consciousness as though they did, by denying them the ability to express themselves using the language that represents their intuitions best.
Normally, I wouldn’t harp on that too much here given the passage of time (water under the bridge and all that), but literally this entire post is based on a framework I believe gets things totally backwards. Moreover, I was very (negatively) surprised to see respected users on this site apparently believing your previous post was “outstanding” and “very legible evidence” in favor of your thesis.
I dearly hope this general structure does not become part of the LW zeitgeist for thinking about an issue as important as this.
arguments about risks from destabilization/power dynamics and potential conflicts between various actors are probably both more legible and ‘truer’
Say more?
I think the general impression of people on LW is that multipolar scenarios and concerns over “which monkey finds the radioactive banana and drags it home” are in large part a driver of AI racing instead of being a potential impediment/solution to it. Individuals, companies, and nation-states justifiably believe that whichever one of them accesses potentially superhuman AGI first will have the capacity to flip the gameboard at-will, obtain power over the entire rest of the Earth, and destabilize the currently-existing system. Standard game theory explains the final inferential step for how this leads to full-on racing (see the recent U.S.-China Commission’s report for a representative example of how this plays out in practice).
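(If it helps to spell out that final inferential step, here is a minimal sketch of the racing dynamic as a one-shot two-player game, written in Python. The payoff numbers are purely illustrative assumptions of mine; only their ordering matters.)

```python
# A toy model of the racing dynamic. Assumed ordinal structure: being the
# sole racer is best, mutual restraint is good, mutual racing is worse, and
# being the sole abstainer (the other actor gets AGI first) is worst.

payoffs = {  # (row_action, col_action) -> (row_payoff, col_payoff)
    ("restrain", "restrain"): (3, 3),
    ("restrain", "race"):     (0, 4),
    ("race",     "restrain"): (4, 0),
    ("race",     "race"):     (1, 1),
}

def best_response(opponent_action: str) -> str:
    """Row player's best response, given the column player's action."""
    return max(["restrain", "race"],
               key=lambda a: payoffs[(a, opponent_action)][0])

# "race" is a dominant strategy: it is the best response whatever the other
# actor does, so (race, race) is the unique Nash equilibrium even though
# (restrain, restrain) is better for both players.
assert best_response("restrain") == "race"
assert best_response("race") == "race"
```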
I get that we’d like to all recognize this problem and coordinate globally on finding solutions, by “mak[ing] coordinated steps away from Nash equilibria in lockstep”. But I would first need to see an example, a prototype, of how this can play out in practice on an important and highly salient issue. Stuff like the Montreal Protocol banning CFCs doesn’t count because the ban only happened once comparably profitable/efficient alternatives had already been designed; totally disanalogous to the spot we are in right now, where AGI will likely be incredibly economically profitable, perhaps orders of magnitude more so than the second-best alternative.
This is in large part why Eliezer often used to challenge readers and community members to ban gain-of-function research, as a trial run of sorts for how global coordination on pausing/slowing AI might go.
Hmm, that sounds about right based on the usual human-vs-human transfer from Elo difference to performance… but I am still not sure if that holds up when you have odds games, which feel qualitatively different to me from regular games. Based on my current chess intuition, I would expect the ability to win odds games to scale better near the top level than the Elo difference alone would suggest, but I could be wrong about this.
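(For concreteness, the “usual transfer” I have in mind is just the standard logistic Elo expected-score formula, sketched below; whether it carries over to odds games at all is exactly what I am unsure about.)

```python
# Standard Elo expected-score formula: the fraction of points the
# higher-rated player is expected to score, given the rating gap.

def expected_score(rating_diff: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

print(expected_score(400))  # ~0.91
print(expected_score(800))  # ~0.99
```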
rapid time controls
I am very skeptical of this on priors, for the record. I think this statement could be true for superblitz time controls and whatnot, but I would be shocked if knight odds would be enough to beat Magnus in a 10+0 or 15+0 game. That being said, I have no inside knowledge, and I would update a lot of my beliefs significantly if your statement as currently written actually ends up being true.
conditional on you aligning with the political priorities of OP funders or OP reputational management
Do you mean something more expansive than “literally don't pursue projects that are either conservative/Republican-coded or explicitly involved in expanding/enriching the Rationality community”? Which, to be clear, would be less-than-ideal if true, but should be talked about in more specific terms when giving advice to potential grant-receivers.
I get an overall vibe from many of the comments you’ve made recently about OP, both here and on the EA forum, that you believe in a rather broad sense they are acting to maximize their own reputation or whatever Dustin’s whims are that day (and, consequently, lying/obfuscating this in their public communications to spin these decisions the opposite way), but I don’t think[1] you have mentioned any specific details that go beyond their own dealings with Lightcone and with right-coded figures.
[1] Could be a failure of my memory, ofc
One problem is that, at least in my experience, drunk people are much less fun to be around when you yourself are not also drunk. As a non-drinker who sometimes casually hangs out with drinkers, the drinking gap between us often makes the connection less potent.
rather than
You mean “in addition to”, right? Knowing what the AI alone is capable of doing is quite an important part of what evals are about, so keeping it there seems crucial.
[Coming at this a few months late, sorry. This comment by @Steven Byrnes sparked my interest in this topic once again]
I don’t see why we care if evolution is a good analogy for alignment risk. The arguments for misgeneralization/mis-specification stand on their own. They do not show that alignment is impossible, but they do strongly suggest that it is not trivial.
Focusing on this argument seems like missing the forest for the trees.
Ngl, I find everything you've written here a bit… baffling, Seth. Your writing in particular, and your exposition of your thoughts on AI risk more generally, do not use evolutionary analogies, but this only means that posts and comments criticizing analogies with evolution (sample: 1, 2, 3, 4, 5, etc) are just not aimed at you and your reasoning. I greatly enjoy reading your writing and pondering the insights you bring up, but you are simply not even close to the most publicly-salient proponent of “somewhat high P(doom)” among the AI alignment community. It makes perfect sense from the perspective of those who disagree with you (or other, more hardcore “doomers”) on the bottom-line question of AI risk to focus their public discourse primarily on responding to the arguments brought up by the subset of “doomers” who are most salient and also most extreme in their views, namely the MIRI-cluster centered around Eliezer, Nate Soares, and Rob Bensinger.
And when you turn to MIRI and the views that its members have espoused on these topics, I am very surprised to hear that “The arguments for misgeneralization/mis-specification stand on their own” and are not ultimately based on analogies with evolution.
But anyway, to hopefully settle this once and for all, let’s go through all the examples that pop up in my head immediately when I think of this, shall we?
From the section on inner & outer alignment of “AGI Ruin: A List of Lethalities”, by Yudkowsky (I have removed the original emphasis and added my own):
15. Fast capability gains seem likely, and may break lots of previous alignment-required invariants simultaneously. Given otherwise insufficient foresight by the operators, I’d expect a lot of those problems to appear approximately simultaneously after a sharp capability gain. See, again, the case of human intelligence. We didn’t break alignment with the ‘inclusive reproductive fitness’ outer loss function, immediately after the introduction of farming—something like 40,000 years into a 50,000 year Cro-Magnon takeoff, as was itself running very quickly relative to the outer optimization loop of natural selection. Instead, we got a lot of technology more advanced than was in the ancestral environment, including contraception, in one very fast burst relative to the speed of the outer optimization loop, late in the general intelligence game. We started reflecting on ourselves a lot more, started being programmed a lot more by cultural evolution, and lots and lots of assumptions underlying our alignment in the ancestral training environment broke simultaneously. (People will perhaps rationalize reasons why this abstract description doesn’t carry over to gradient descent; eg, “gradient descent has less of an information bottleneck”. My model of this variety of reader has an inside view, which they will label an outside view, that assigns great relevance to some other data points that are not observed cases of an outer optimization loop producing an inner general intelligence, and assigns little importance to our one data point actually featuring the phenomenon in question. When an outer optimization loop actually produced general intelligence, it broke alignment after it turned general, and did so relatively late in the game of that general intelligence accumulating capability and knowledge, almost immediately before it turned ‘lethally’ dangerous relative to the outer optimization loop of natural selection. Consider skepticism, if someone is ignoring this one warning, especially if they are not presenting equally lethal and dangerous things that they say will go wrong instead.)
16. Even if you train really hard on an exact loss function, that doesn’t thereby create an explicit internal representation of the loss function inside an AI that then continues to pursue that exact loss function in distribution-shifted environments. Humans don’t explicitly pursue inclusive genetic fitness; outer optimization even on a very exact, very simple loss function doesn’t produce inner optimization in that direction. This happens in practice in real life, it is what happened in the only case we know about, and it seems to me that there are deep theoretical reasons to expect it to happen again: the first semi-outer-aligned solutions found, in the search ordering of a real-world bounded optimization process, are not inner-aligned solutions. This is sufficient on its own, even ignoring many other items on this list, to trash entire categories of naive alignment proposals which assume that if you optimize a bunch on a loss function calculated using some simple concept, you get perfect inner alignment on that concept.
[...]
21. There’s something like a single answer, or a single bucket of answers, for questions like ‘What’s the environment really like?’ and ‘How do I figure out the environment?’ and ‘Which of my possible outputs interact with reality in a way that causes reality to have certain properties?‘, where a simple outer optimization loop will straightforwardly shove optimizees into this bucket. When you have a wrong belief, reality hits back at your wrong predictions. When you have a broken belief-updater, reality hits back at your broken predictive mechanism via predictive losses, and a gradient descent update fixes the problem in a simple way that can easily cohere with all the other predictive stuff. In contrast, when it comes to a choice of utility function, there are unbounded degrees of freedom and multiple reflectively coherent fixpoints. Reality doesn’t ‘hit back’ against things that are locally aligned with the loss function on a particular range of test cases, but globally misaligned on a wider range of test cases. This is the very abstract story about why hominids, once they finally started to generalize, generalized their capabilities to Moon landings, but their inner optimization no longer adhered very well to the outer-optimization goal of ‘relative inclusive reproductive fitness’ - even though they were in their ancestral environment optimized very strictly around this one thing and nothing else. This abstract dynamic is something you’d expect to be true about outer optimization loops on the order of both ‘natural selection’ and ‘gradient descent’. The central result: Capabilities generalize further than alignment once capabilities start to generalize far.
From “A central AI alignment problem: capabilities generalization, and the sharp left turn”, by Nate Soares, which, by the way, quite literally uses the exact phrase “The central analogy”; as before, emphasis is mine:
My guess for how AI progress goes is that at some point, some team gets an AI that starts generalizing sufficiently well, sufficiently far outside of its training distribution, that it can gain mastery of fields like physics, bioengineering, and psychology, to a high enough degree that it more-or-less singlehandedly threatens the entire world. Probably without needing explicit training for its most skilled feats, any more than humans needed many generations of killing off the least-successful rocket engineers to refine our brains towards rocket-engineering before humanity managed to achieve a moon landing.
And in the same stroke that its capabilities leap forward, its alignment properties are revealed to be shallow, and to fail to generalize. The central analogy here is that optimizing apes for inclusive genetic fitness (IGF) doesn’t make the resulting humans optimize mentally for IGF. Like, sure, the apes are eating because they have a hunger instinct and having sex because it feels good—but it’s not like they could be eating/fornicating due to explicit reasoning about how those activities lead to more IGF. They can’t yet perform the sort of abstract reasoning that would correctly justify those actions in terms of IGF. And then, when they start to generalize well in the way of humans, they predictably don’t suddenly start eating/fornicating because of abstract reasoning about IGF, even though they now could. Instead, they invent condoms, and fight you if you try to remove their enjoyment of good food (telling them to just calculate IGF manually). The alignment properties you lauded before the capabilities started to generalize, predictably fail to generalize with the capabilities.
From “The basic reasons I expect AGI ruin”, by Rob Bensinger:
When I say “general intelligence”, I’m usually thinking about “whatever it is that lets human brains do astrophysics, category theory, etc. even though our brains evolved under literally zero selection pressure to solve astrophysics or category theory problems”.
[...]
Human brains aren’t perfectly general, and not all narrow AI systems or animals are equally narrow. (E.g., AlphaZero is more general than AlphaGo.) But it sure is interesting that humans evolved cognitive abilities that unlock all of these sciences at once, with zero evolutionary fine-tuning of the brain aimed at equipping us for any of those sciences. Evolution just stumbled into a solution to other problems, that happened to generalize to millions of wildly novel tasks.
[...]
Human brains underwent no direct optimization for STEM ability in our ancestral environment, beyond traits like “I can distinguish four objects in my visual field from five objects”.[5]
[5] More generally, the sciences (and many other aspects of human life, like written language) are a very recent development on evolutionary timescales. So evolution has had very little time to refine and improve on our reasoning ability in many of the ways that matter
From “Niceness is unnatural”, by Nate Soares:
I think this view is wrong, and I don’t see much hope here. Here’s a variety of propositions I believe that I think sharply contradict this view:
There are lots of ways to do the work that niceness/kindness/compassion did in our ancestral environment, without being nice/kind/compassionate.
The specific way that the niceness/kindness/compassion cluster shook out in us is highly detailed, and very contingent on the specifics of our ancestral environment (as factored through its effect on our genome) and our cognitive framework (calorie-constrained massively-parallel slow-firing neurons built according to DNA), and filling out those details differently likely results in something that is not relevantly “nice”.
Relatedly, but more specifically: empathy (and other critical parts of the human variant of niceness) seem(s) critically dependent on quirks in the human architecture. More generally, there are lots of different ways for the AI’s mind to work differently from how you hope it works.
The desirable properties likely get shredded under reflection. Once the AI is in the business of noticing and resolving conflicts and inefficiencies within itself (as is liable to happen when its goals are ad-hoc internalized correlates of some training objective), the way that its objectives ultimately shake out is quite sensitive to the specifics of its resolution strategies.
From “Superintelligent AI is necessary for an amazing future, but far from sufficient”, by Nate Soares:
These are the sorts of features of human evolutionary history that resulted in us caring (at least upon reflection) about a much more diverse range of minds than “my family”, “my coalitional allies”, or even “minds I could potentially trade with” or “minds that share roughly the same values and faculties as me”.
Humans today don’t treat a family member the same as a stranger, or a sufficiently-early-development human the same as a cephalopod; but our circle of concern is certainly vastly wider than it could have been, and it has widened further as we’ve grown in power and knowledge.
From the Eliezer-edited summary of “Ngo and Yudkowsky on alignment difficulty”, by… Ngo and Yudkowsky:
Eliezer, summarized by Richard (continued): “In biological organisms, evolution is the ultimate source of consequentialism. A secondary outcome of evolution is reinforcement learning. For an animal like a cat, upon catching a mouse (or failing to do so) many parts of its brain get slightly updated, in a loop that makes it more likely to catch the mouse next time. (Note, however, that this process isn't powerful enough to make the cat a pure consequentialist—rather, it has many individual traits that, when we view them from this lens, point in the same direction.) Another outcome of evolution, which helps make humans in particular more consequentialist, is planning—especially when we're aware of concepts like utility functions.”

From “Comments on Carlsmith's ‘Is power-seeking AI an existential risk?’”, by Nate Soares:
Perhaps Joe thinks that alignment is so easy that it can be solved in a short time window?
My main guess, though, is that Joe is coming at things from a different angle altogether, and one that seems foreign to me.
Attempts to generate such angles along with my corresponding responses:
Claim: perhaps it’s just not that hard to train an AI system to be “good” in the human sense? Like, maybe it wouldn’t have been that hard for natural selection to train humans to be fitness maximizers, if it had been watching for goal-divergence and constructing clever training environments?
Counter: Maybe? But I expect these sorts of things to take time, and at least some mastery of the system’s internals, and if you want them to be done so well that they actually work in practice even across the great Change-Of-Distribution to operating in the real world then you’ve got to do a whole lot of clever and probably time-intensive work.
Claim: perhaps there’s just a handful of relevant insights, and new ways of thinking about things, that render the problem easy?
Counter: Seems like wishful thinking to me, though perhaps I could go point-by-point through hopeful-to-Joe-seeming candidates?
From “Soares, Tallinn, and Yudkowsky discuss AGI cognition”, by… well, you get the point:
Eliezer: Something like, “Evolution constructed a jet engine by accident because it wasn’t particularly trying for high-speed flying and ran across a sophisticated organism that could be repurposed to a jet engine with a few alterations; a human industry would be gaining economic benefits from speed, so it would build unsophisticated propeller planes before sophisticated jet engines.” It probably sounds more convincing if you start out with a very high prior against rapid scaling / discontinuity, such that any explanation of how that could be true based on an unseen feature of the cognitive landscape which would have been unobserved one way or the other during human evolution, sounds more like it’s explaining something that ought to be true.
And why didn’t evolution build propeller planes? Well, there’d be economic benefit from them to human manufacturers, but no fitness benefit from them to organisms, I suppose? Or no intermediate path leading to there, only an intermediate path leading to the actual jet engines observed.
I actually buy a weak version of the propeller-plane thesis based on my inside-view cognitive guesses (without particular faith in them as sure things), eg, GPT-3 is a paper airplane right there, and it’s clear enough why biology could not have accessed GPT-3. But even conditional on this being true, I do not have the further particular faith that you can use propeller planes to double world GDP in 4 years, on a planet already containing jet engines, whose economy is mainly bottlenecked by the likes of the FDA rather than by vaccine invention times, before the propeller airplanes get scaled to jet airplanes.
The part where the whole line of reasoning gets to end with “And so we get huge, institution-reshaping amounts of economic progress before AGI is allowed to kill us!” is one that doesn’t feel particular attractored to me, and so I’m not constantly checking my reasoning at every point to make sure it ends up there, and so it doesn’t end up there.
From “Humans aren’t fitness maximizers”, by Soares:
One claim that is hopefully uncontroversial (but that I’ll expand upon below anyway) is:
Humans are not literally optimizing for IGF, and regularly trade other values off against IGF.
Separately, we have a stronger and more controversial claim:
If an AI’s objectives included goodness in the same way that our values include IGF, then the future would not be particularly good.
I think there’s more room for argument here, and will provide some arguments.
A semi-related third claim that seems to come up when I have discussed this in person is:
Niceness is not particularly canonical; AIs will not by default give humanity any significant fraction of the universe in the spirit of cooperation.
I endorse that point as well. It takes us somewhat further afield, and I don’t plan to argue it here, but I might argue it later.
From “Shah and Yudkowsky on alignment failures”, by the usual suspects:
Yudkowsky: and lest anyone start thinking that was an exhaustive list of fundamental problems, note the absence of, for example, “applying lots of optimization using an outer loss function doesn’t necessarily get you something with a faithful internal cognitive representation of that loss function” aka “natural selection applied a ton of optimization power to humans using a very strict very simple criterion of ‘inclusive genetic fitness’ and got out things with no explicit representation of or desire towards ‘inclusive genetic fitness’ because that’s what happens when you hill-climb and take wins in the order a simple search process through cognitive engines encounters those wins”
From the comments on “Late 2021 MIRI Conversations: AMA / Discussion”, by Yudkowsky:
Yudkowsky: I would “destroy the world” from the perspective of natural selection in the sense that I would transform it in many ways, none of which were making lots of copies of my DNA, or the information in it, or even having tons of kids half resembling my old biological self.
From the perspective of my highly similar fellow humans with whom I evolved in context, they’d get nice stuff, because “my fellow humans get nice stuff” happens to be the weird unpredictable desire that I ended up with at the equilibrium of reflection on the weird unpredictable godshatter that ended up inside me, as the result of my being strictly outer-optimized over millions of generations for inclusive genetic fitness, which I now don’t care about at all.
Paperclip-numbers do well out of paperclip-number maximization. The hapless outer creators of the thing that weirdly ends up a paperclip maximizer, not so much.
From Yudkowsky’s appearance on the Bankless podcast (full transcript here):
Ice cream didn’t exist in the natural environment, the ancestral environment, the environment of evolutionary adeptedness. There was nothing with that much sugar, salt, fat combined together as ice cream. We are not built to want ice cream. We were built to want strawberries, honey, a gazelle that you killed and cooked [...] We evolved to want those things, but then ice cream comes along, and it fits those taste buds better than anything that existed in the environment that we were optimized over.
[...]
Leaving that aside for a second, the reason why this metaphor breaks down is that although the humans are smarter than the chickens, we’re not smarter than evolution, natural selection, cumulative optimization power over the last billion years and change. (You know, there’s evolution before that but it’s pretty slow, just, like, single-cell stuff.)
There are things that cows can do for us, that we cannot do for ourselves. In particular, make meat by eating grass. We’re smarter than the cows, but there’s a thing that designed the cows; and we’re faster than that thing, but we’ve been around for much less time. So we have not yet gotten to the point of redesigning the entire cow from scratch. And because of that, there’s a purpose to keeping the cow around alive.
And humans, furthermore, being the kind of funny little creatures that we are — some people care about cows, some people care about chickens. They’re trying to fight for the cows and chickens having a better life, given that they have to exist at all. And there’s a long complicated story behind that. It’s not simple, the way that humans ended up in that [??]. It has to do with the particular details of our evolutionary history, and unfortunately it’s not just going to pop up out of nowhere.
But I’m drifting off topic here. The basic answer to the question “where does that analogy break down?” is that I expect the superintelligences to be able to do better than natural selection, not just better than the humans.
At this point, I’m tired, so I’m logging off. But I would bet a lot of money that I can find at least 3x the number of these examples if I had the energy to. As Alex Turner put it, it seems clear to me that, for a very high portion of “classic” alignment arguments about inner & outer alignment problems, at least in the form espoused by MIRI, the argumentative bedrock is ultimately based on little more than analogies with evolution.
We think it works like this
Who is “we”? Is it:
only you and your team?
the entire Apollo Research org?
the majority of mechinterp researchers worldwide?
some other group/category of people?
Also, this definitely deserves to be made into a high-level post, if you end up finding the time/energy/interest in making one.
randomly
As an aside (that’s still rather relevant, IMO), it is a huge pet peeve of mine when people use the word “randomly” in technical or semi-technical contexts (like this one) to mean “uniformly at random” instead of just “according to some probability distribution.” I think the former elevates and reifies a way-too-common confusion and draws attention away from the important upstream generator of disagreements, namely how exactly the constitution is sampled.
I wouldn’t normally have said this, but given your obvious interest in math, it’s worth pointing out that the answers to these questions you have raised naturally depend very heavily on what distribution we would be drawing from. If we are talking about, again, a uniform distribution from “the design space of minds-in-general” (so we are just summoning a “random” demon or shoggoth), then we might expect one answer. If, however, the search is inherently biased towards a particular submanifold of that space, because of the very nature of how these AIs are trained/fine-tuned/analyzed/etc., then you could expect a different answer.
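(As a toy illustration of why the choice of distribution matters, here is a sketch that is entirely my own construction, with an arbitrary “region of interest” standing in for whatever property of minds we care about: the same question gets wildly different answers under a uniform draw versus a draw concentrated on a narrow submanifold.)

```python
import random

def in_region(x: float, y: float) -> bool:
    # Arbitrary stand-in property of a sampled "mind", chosen only so that
    # the two distributions below give very different answers.
    return x < 0.1 and y < 0.1

def uniform_sample():
    # "Uniformly at random" over the whole unit square.
    return random.random(), random.random()

def biased_sample():
    # Concentrated near one corner: a stand-in for a training process that
    # only ever reaches a narrow slice of the design space.
    return random.gauss(0.05, 0.02), random.gauss(0.05, 0.02)

n = 100_000
uniform_rate = sum(in_region(*uniform_sample()) for _ in range(n)) / n
biased_rate = sum(in_region(*biased_sample()) for _ in range(n)) / n
print(f"uniform: {uniform_rate:.3f}, biased: {biased_rate:.3f}")
# Roughly 0.01 for the uniform draw, close to 1 for the biased one.
```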
Edit: This comment misinterpreted the intended meaning of the post.
I… don’t think this is necessarily what @EuanMcLean meant? At the risk of conflating his own perspective and ambivalence on this issue with my own, this is a question of personal identity and whether the computationalist perspective, generally considered a “reasonable enough” assumption to almost never be argued for explicitly on LW, is correct. As I wrote a while ago on Rob’s post:
I recognize you wrote in response to me a while ago that you “find these kinds of conversations to be very time-consuming and often not go anywhere.” I understand this, and I sympathize to a large extent: I also find these discussions very tiresome, which became part of why I ultimately did not engage too much with some of the thought-provoking responses to the question I posed a few months back. So it’s totally ok for us not to get into the weeds of this now (or at any point, really). Nevertheless, for the sake of it, I think the “everyday experience” thermostat example does not seem like an argument in favor of computationalism over physicalism-without-computationalism, since the primary generator of my intuition that my identity would be the same in that case is the literal physical continuity of my body throughout that process. I just don’t think there is a “prosaic” (i.e., bodily-continuity-preserving) analogue or intuition pump to the case of WBE or similar stuff in this respect.
Anyway, in light of footnote 10 in the post (“The question of whether such a simulation contains consciousness at all, of any kind, is a broader discussion that pertains to a weaker version of CF that I will address later on in this sequence”), which to me draws an important distinction between a brain-simulation having some consciousness/identity versus having the same consciousness/identity as that of whatever (physically-instantiated) brain it draws from, I did want to say that this particular post seems focused on the latter and not the former, which seems quite decision-relevant to me: