I think this post is very thoughtful, with admirable attempts at formalization and several interesting insights sprinkled throughout. I think you are addressing real questions, including:
Why do people wonder why they ‘really’ did something?
How and when do shards generalize beyond contextual reflex behaviors into goals?
To what extent will heuristics/shards be legible / written in “similar formats”?
That said, I think some of your answers and conclusions are off/wrong:
You rely a lot on selection-level reasoning in a way which feels sketchy.
I doubt your conclusions about GPS optimizing activations directly, as a terminal end and not as yet another tactic.
I doubt the assumptions on GPS being a minimizer, or goals being minimize-distance (although you claimed in another thread this isn’t crucial?)
I don’t see why you think heuristics (shards?) “lose control” to GPS.
I don’t see why you think the value-humans shard has to be perfectly aligned.
Overall, nice work, strong up, medium disagree. :)
[heuristics are] statements of the following form: “if you take such action in such situation, this will correlate with higher reward”.
I think that heuristics are reflections of historical facts of that form, but not statements themselves.
But these tendencies were put there by the selection process because the (E→A)→U correlations are valid.
In a certain set of historical reward-attainment situations, perhaps (because this depends on the learning alg being good, but I’m happy to assume that). Not in general, of course.
a) The World-Model. Initially, there wouldn’t have been a unified world-model. Each individual heuristic would’ve learned some part of the environment structure it cared about, but it wouldn’t have pooled the knowledge with the other heuristics. A cat-detecting circuit would’ve learned how a cat looks like, a mouse-detecting one how mice do, but there wouldn’t have been a communally shared “here’s how different animals look” repository.
However, everything is correlated with everything else (the presence of a cat impacts the probability of the presence of a mouse), so pooling all information together would’ve resulted in improved predictive accuracy. Hence, the agent would’ve eventually converged towards an explicit world-model.
What is the difference, on your view, between a WM which is “explicit” and one which e.g. has an outgoing connection from is-cat circuit to is-animal?
b) Cross-Heuristic Communication.
I really like the insight in this point. I’d strong-up a post containing this, alone.
Anything Else? So far as I can tell now, that’s it. Crucially, under this model, there doesn’t seem to be any pressure for heuristics to make themselves legible in any other way. No summaries of how they work, no consistent formats, nothing.
If the agent is doing SSL on its future observations and (a subset of its) recurrent state activations, then the learning process would presumably train the network to reflectively predict its own future heuristic-firings, so as to e.g. not be surprised by going near donuts and then stopping to stare at them (instead of the nominally “agreed-upon” plan of “just exit the grocery store”).
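Concretely, the kind of setup I have in mind is something like the toy sketch below (the architecture, sizes, and particular auxiliary heads are arbitrary illustrative choices, not a claim about how such an agent would actually be built): a recurrent policy with an extra self-supervised head that predicts its own next hidden state, a stand-in for “future heuristic-firings”, alongside the next observation.

```python
# Toy sketch, not a real training setup: a recurrent agent with SSL heads that
# predict the next observation and the agent's own next hidden state.
import torch
import torch.nn as nn

class ReflectiveAgent(nn.Module):
    def __init__(self, obs_dim=16, hidden_dim=32):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, hidden_dim)
        self.predict_obs = nn.Linear(hidden_dim, obs_dim)        # SSL head: next observation
        self.predict_hidden = nn.Linear(hidden_dim, hidden_dim)  # SSL head: own next hidden state

    def forward(self, obs, h):
        h_next = self.rnn(obs, h)
        return h_next, self.predict_obs(h_next), self.predict_hidden(h_next)

agent = ReflectiveAgent()
obs_seq = torch.randn(10, 1, 16)  # a fake 10-step observation sequence
h = torch.zeros(1, 32)
loss = torch.tensor(0.0)
for t in range(9):
    h, obs_hat, h_hat = agent(obs_seq[t], h)
    with torch.no_grad():  # target: the hidden state the agent will actually end up in
        h_target, _, _ = agent(obs_seq[t + 1], h)
    loss = loss + nn.functional.mse_loss(obs_hat, obs_seq[t + 1])
    loss = loss + nn.functional.mse_loss(h_hat, h_target)
loss.backward()  # gradients push the agent to anticipate its own future "firings"
```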
Furthermore, there should be some consistent formatting since the heuristics are functions h: M_s → A_s. And under certain “simplicity pressures/priors”, heuristics may reuse each other’s deliberative machinery (this is part of how I think the GPS forms). EG there shouldn’t be five heuristics each of which slightly differently computes whether the other side of the room is reachable.
That’s very much non-ideal. The GPS still can’t access the non-explicit knowledge — it basically only gets hunches about it.
So, what does it do? Starts reverse-engineering it. It’s a general-purpose problem-solver, after all — it can understand the problem specification of this, given a sufficiently rich world-model, and then solve it. In fact, it’ll probably be encouraged to do this.
I’m trying to imagine a concrete story here. I don’t know what this means.
I don’t positively buy reasoning about whether “deceptive alignment” is probable, as others use the term. I’d have to revisit it, since it’s on my very long list of alignment reasoning downstream of AFAICT-incorrect premises or reliant on extremely handwavy, vague, and leaky “selection”-based reasoning.
we might imagine a heuristic centered around “chess”, optimized for winning chess games. When active, it would query the world-model, extract only the data relevant to the current game of chess, and compute the appropriate move using these data only.
Just one heuristic for all of chess?
Consider this situation:
I wish this were an actual situation, not an “example” which is syntactic. This would save a lot of work for the reader and possibly help you improve your own models.
That’ll work… if it had infinite time to think, and could excavate all the procedural and implicit knowledge prior to taking any action. But what if it needs to do both in lockstep?
(Flagging that this makes syntactic sense, but I can’t easily give an example of what it means to “excavate” the procedural and implicit knowledge.)
This makes the combination of all contextual goals, let’s call it GΣ
Can you give me an example of what this means for a network which has a diamond-shard and an ice-cream-eating shard?
Prior to the GPS’ appearance, the agent was figuratively pursuing BΣ
Don’t you mean “figuratively pursuing GΣ”? How would one “pursue” contextual behaviors?
So the interim objective can be at least as bad as BΣ.
Flag that I wish you would write this as “during additional training, the interim model performance can be at least as U-unperformant as the contextual behaviors.” I think “bad” leads people to conflate “bad for us” with “bad for the agent” with “low-performance under formal loss criterion” with something else. I think these conflations are made quite often in alignment writing.
Prior to the GPS’ appearance, the agent was figuratively pursuing BΣ (“figuratively” because it wasn’t an optimizer, just an optimized). So the interim objective can be at least as bad as BΣ. On the other hand, pursuing GΣ directly would probably be an improvement, as we wouldn’t have to go through two layers of proxies.
Example? At this point I feel like I’ve gotten off your train; you seem to be assuming a lot of weird-seeming structure and “pressures”, I don’t understand what’s happening or what experiences I should or shouldn’t anticipate. I’m worried that it feels like most of my reasoning is now syntactic.
The obvious solution is obvious: make heuristics themselves control the GPS. The GPS’ API is pretty simple, and depending on the complexity of the cross-heuristic communication channel, it might be simple enough to re-purpose its data formats for controlling the GPS.
I think that heuristics controlling GPS-machinery is probably where the GPS comes from to begin with, so this step doesn’t seem necessary.
Once that’s done, the heuristics can make it solve tasks for them, and become more effective at achieving BΣ (as this will give them better ability to runtime-adapt to unfamiliar circumstances, without waiting for the SGD/evolution to catch them up).
Same objection as above—to “achieve” BΣ? How do you “achieve” behaviors? And, what, this would happen how? What part of training are we in? What is happening in this story, is SGD optimizing the agent to be runtime-adaptive, or..?
Strong disagree. I don’t think this is what the coherence theorems imply. I think explaining my perspective here would be a lot of work, but I can maybe say helpful things like “utility is more like a contextual yardstick governing tradeoffs between eg ice cream eating opportunities and diamond production opportunities, and less like an optimization target which the agent globally and universally optimizes.”
I am also worried about reasoning like “smart agents → coherent over value-relevant → optimizing a ‘utility function’ → argmax on utility functions is scary (does anyone remember AIXI?)”, when really the last step is invalid.
AFAICT I agree wrapper-minds are inefficient (seems like a point against them?).
I don’t know why GPS should control reverse-engineering, rather than there being generalized shards driving GPS.
I think “internalize a system of norms” is not how people’s caring works in bulk, and doesn’t address the larger commonly-activated planning-steering shards I expect to translate robustly across environments (like “go home”, “make people happy”). I agree there is a Shard Generalization Question, but I don’t think “wrapper mind” is a plausible answer to it.
The GPS can recover all of these mechanics, and then just treat the sum of all “activation strengths” as negative utility to minimize-in-expectation.
Seems like assuming “activation strengths increase the further WM values are from target values” leads us to this bizarre GPS goal. While that proposition may be true as a tendency, I don’t see why it should be true in any strict sense, or if you believe that, or whether the analysis hinges on it?
In short, the same way it’s non-trivial to know what heuristics/instincts are built into your mind, it’s non-trivial to know what you’re currently thinking of.
Aside: I think self-awareness arises from elsewhere in the shard ecosystem.
One issue is that the value-humans shard would need to be perfectly aligned with human values, and that’s most of this approach’s promised advantage gone. That’s not much of an issue, though: I think we’d need to do that in any workable approach.
What? Why? Why would a value-human shard bid for plans generated via GPS which involve people dying? (I think I have this objection because I don’t buy/understand your story for how GPS “rederives” values into some alien wrapper object.)
Is there any difference between “goals” and “values”? I’ve used the terms basically interchangeably in this post, but it might make sense to assign them to things of different types.
I use “values” to mean decision-influences, and “goal” to mean, among other things, an instrumental subgoal in the planning process which is relevant to one or more values (e.g. hang out with friends more, as relevant to a friend-shard).
Other points:
I wish the nomenclature had been clearer, with M_s being replaced by e.g. WM_subset.
I think “U” is a bad name for the policy-gradient-providing function (aka reward function).
Thanks for the extensive commentary! Here’s an… unreasonably extensive response.
what it means to “excavate” the procedural and implicit knowledge
On Procedural Knowledge
1) Suppose that you have a shard that looks for a set of conditions like “it’s night AND I’m resting in an unfamiliar location in a forest AND there was a series of crunching sounds nearby”. If they’re satisfied, it raises an alarm, and forms and bids for plans to look in the direction of the noises and get ready for a fight.
That’s procedural knowledge: none of that is happening at the level of conscious understanding, you’re just suddenly alarmed and urged to be on guard, without necessarily understanding why. Most of the computations are internal to the shard, understood by no other part of the agent.
You can “excavate” this knowledge by reflecting on what happened: that you heard these noises in these circumstances, and some process in you responded. Then you can look at what happened afterward (e. g., you were attacked by an animal), and realize that this process helped you. This would allow you to explicate the procedural knowledge into a conscious heuristic (“beware of sound-patterns like this at night, get ready if you hear them”), which you put in the world-model and can then consciously access.
That “conscious access” would allow you to employ the knowledge much more fluidly, such as by:
Incorporating it in plans in advance. (You can know to ensure there’s no sources of natural noise around your camp, like waterfalls, because you’d know that being able to hear your surroundings is important.)
Transferring it to others. (Telling this heuristic to your child, who didn’t yet learn the procedural-knowledge shard itself.)
Generalizing from it. (Translate it by analogy to an alien environment where you have to “listen” to magnetic fields instead. Or to even more abstract environments, like bureaucratic conflicts, where there’s something “like” being in a forest at night (situation-of-uncertain-safety) and “like” hearing crunching noises nearby (subtle-predictors-of-an-impending-attack).)
None of that fluidity, on my understanding, would be easily replicable by the initial shard. If you’re planning in advance, or are teaching someone, it’d only activate if you vividly imagine the specific scenario that’d activate it (“I’m in my camp at night and there’s this noise”), which (1) you may not know to do to begin with, (2) is an excruciatingly slow style of planning. And the non-obvious logical generalizations are certainly not the thing it can do.
If you have that knowledge explicitly, though, you can just connect it to a node like “how to survive in a forest”, and it’d be brought to your attention every time you poke that node.
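To make the procedural-vs-explicit contrast concrete, here’s a minimal toy sketch (all the names and the two-part split are my own illustrative assumptions, not a claim about how real shards or world-models are implemented):

```python
# Toy sketch of "procedural vs. excavated" knowledge. Everything here (the
# condition names, the WorldModel class, the excavate() method) is hypothetical.
from dataclasses import dataclass, field


def night_noise_shard(ctx: dict) -> list:
    """Procedural knowledge: an opaque trigger->bid mapping.

    Nothing in here is readable by the rest of the toy agent; it just emits
    action-bids when its hard-coded conditions fire.
    """
    if ctx.get("time") == "night" and ctx.get("place") == "forest" and ctx.get("heard_crunching"):
        return ["look_toward_noise", "get_ready_to_fight"]
    return []


@dataclass
class WorldModel:
    """Explicit knowledge: heuristics stored as data a planner can query and reuse."""
    heuristics: dict = field(default_factory=dict)

    def excavate(self, topic: str, lesson: str) -> None:
        # "Excavation": after reflecting on when the shard fired and whether that
        # helped, the agent writes the lesson down as a consciously accessible entry.
        self.heuristics[topic] = lesson

    def query(self, topic: str) -> list:
        return [lesson for key, lesson in self.heuristics.items() if topic in key]


# Before excavation: the knowledge only helps when the triggering context occurs.
print(night_noise_shard({"time": "night", "place": "forest", "heard_crunching": True}))

# After excavation: the same knowledge is available to advance planning
# (e.g. via a "how to survive in a forest" node), teaching, and analogy,
# with no crunching sounds anywhere in sight.
wm = WorldModel()
wm.excavate("forest_survival/night_noises",
            "Beware of crunching sounds at night; keep the camp quiet enough to hear them.")
print(wm.query("forest_survival"))
```

The point is just that the excavated form is data a planner can query from an arbitrary mental context, while the shard only does anything when its trigger happens to fire.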
2) Also, in a different thread, you note that the predictions generated by the world-model can sometimes be also hard to make sense of, so maybe it’s not consistently-formatted either. I think what’s happening, there, is that when you imagine concrete scenarios, you’re not using just the world-model — you’re actually “spoofing” the mental context of that scenario, and that can cause your shards to activate as if it were really happening. That allows you to make use of your procedural knowledge without actually being in the situation, and so make better predictions without consciously understanding why you’re making them.
(E. g., the weird-noises-at-night shard puts simulated!you on high alert, and your WM conditions on that, and that makes it consider “is going to be attacked” more likely. So now it’s predicting so, and it’s a more accurate prediction than it would’ve been able to make with just the explicit knowledge, but it doesn’t know why exactly it ended up in this state.)
But none of that makes that procedural knowledge explicit! (Though such simulated counterfactuals are a great way to reverse-engineer it. See: thought experiments to access and reverse-engineer morality heuristics.)
3) Also something worth noting: explicit knowledge can loop non-explicit procedural knowledge in! E. g., you can have an explicit heuristic like “if you’re in a situation like this, listen to your instincts and do what they say”. That’s also entirely in-line with my model: you can know to do the things your shards urge you to, even if you don’t know why. And yet, knowing that a black-box is useful isn’t the same as knowing what’s in it.
(I suppose my definition is kind of circular, here: I’m saying that the world-model is only the thing that’s consciously accessible and consistently-formatted. That’s… Yeah, I think I’ll bite that bullet.)
On Implicit Knowledge
Here, it’s “implicit” that you should be complying with the urge to engage in the contextual behavior B = “if you heard weird noises in the forest at night, be on guard”. The question to answer here is: why? Why does it make sense to be on guard in such circumstances?
There’s several ways to explain it, but let’s go with “because it decreases the chance that a predator could take me by surprise, which is (apparently) something I don’t want to happen”. That’s the implicit contextual goal G here.
Explicating it, and setting it as the plan-making target (“how can I ensure I’m not ambushed?”), can allow you to consciously generate a bunch of other heuristics for achieving it. Like looking out for weird smells as well, or soundless but visible disturbance in the tall grass around you, etc. This, likewise, boosts your ability to generalize: both in the environment you’re in, and even if you end up displaced to e. g. an alien environment.
I also refer you to my previous example of a displaced value-child. Although his study-promoting shards end up inapplicable, he can nonetheless remain studious if he has “be studious” as an explicit goal, in the course of optimizing for which he can re-derive new heuristics appropriate for the completely unfamiliar environment. Another example: the “deontologist vs. utilitarian in an alien society” from the fourth bullet-point here.
Extrapolation
Okay, and this naturally extends into my broader point about value compilation.
Suppose you explicate a bunch of these contextual goals, like “avoid being ambushed by a predator” and “try to escape if you can’t win this fight” and “good places to live have an abundance of prey around”.
You can view these as heuristics as well. Much like the behaviors you were urged to engage in, which only hinted at your actual goal, you can view these derived goals not as your core values, but as yet more hints about your real values. As next-level procedural knowledge, with some hypothetical broader goal that generated them, and which is implicit in them.
Upon reflection on this new set of goals, you can extrapolate them into something like “avoid death”.
Doing that has all the benefits of going from “if at night in a forest and hear crunching sounds, be on guard” to “decreases the chance that a predator could take me by surprise”. You can now pursue death-avoidance across a broader swathe of environments, and with more consistency and fluidity. You can generate new lower-level goals/heuristics for supporting it.
Then you generate some more higher-level goals, e. g. “avoid death” + “make my loved ones happy” + “amass resources for the tribe”, and compile them into something like “human prosperity is important”.
And so on and on, until all contextual behaviors and goals have been incorporated into some unified global goal.
Those last few steps are what you disagree with, I think, but do you see how it’s just a straightforward extrapolation of basic lower-level self-reflection mechanisms? And it passes my sanity-checks: it sure seems consistent with moral philosophy and meaning-of-life questioning and such.
Core Claims
Procedural knowledge is raw shard activations, i. e. urges that have no conscious explanation.
Explicating procedural knowledge allows you to use it in plan-making in a flexible logical manner, instead of relying on being in the right mental context for it to activate.
Imagining future scenarios isn’t just running the WM forward, it’s also spoofing the mental context to provoke shard activations, and that allows you to make use of procedural knowledge for predictions without necessarily understanding it.
The above doesn’t make procedural knowledge part of the WM; nor does it imply that the WM isn’t consistently-formatted. Or, perhaps tautologically, the WM is only that which is consistently-formatted and consciously-accessible.
Implicit knowledge is the set of hypothetical goals that the procedural knowledge is meant to achieve. Explicating it allows one to optimize for these goals in new contexts, and to derive new heuristics for achieving them.
A straightforward extrapolation of this process leads to treating first-order derived contextual goals as just another set of heuristics, which imply some second-level contextual goal.
This process is run iteratively, until all goals are incorporated.
I don’t know why GPS should control reverse-engineering, rather than there being generalized shards driving GPS.
Okay, so my thinking on this updated a bit since I wrote the post. I think the above process, “treat shards as hints towards your goals, then treat the derived goals as hints towards higher-level goals, then iterate”, isn’t something that shard economies want to do. Rather, it’s something that’s convergently “chiseled into” all sufficiently advanced minds by greedy algorithms generating them.
Consider a standard setup, where the SGD is searching for some agent that scores best according to some reward function R. Would you disagree that a wrapper-mind with that function as its terminal objective would be a great design for the SGD to find, by the SGD’s own lights? Not that it would “select” for such a mind, just that it would be pretty good for it if it did find a way to it?
Shard economies and systems of heuristics may be faster out-of-the-box, better adapted to whatever specific environment they’re in. But an R-maximizing wrapper-mind would at least match their performance, given some time to do runtime optimization of itself. If it would improve its ability to optimize for R, it can just derive contextual shards/heuristics for itself and act on them.
In other words, an R-maximizer is strictly more powerful according to R than any shard economy, inasmuch as it can generate any purpose-built shard economy from scratch, and ensure that this shard economy would be optimized for scoring well at R.
Shard economies not governed by wrapper-minds, in turn, are inferior: they’re worse at generalizing (see my points about non-explicit knowledge above), and tend to go astray if placed in unfamiliar environments (where whatever goals they embody no longer correlate with R).
And inasmuch as the level of adversity the agent was subjected to is so strong as to cause it to develop general reasoning at all, it’s probably put in environments so complex/diverse that runtime re-optimization of its entire swathe of heuristics is called-for. Environments where nothing less than this will do.
So the practical advanced mind design is probably something like a shard economy optimized for the immediate deployment environment (for computation speed and good out-of-the-box performance) + an R-aligned wrapper-mind governing it (for handling distribution shifts and for strategic planning). So I speculate that the SGD tries to converge to something like this, for the purposes of maximizing R.
Except, as per section 5, there’s no gradients towards representing R in the agent, so the SGD uses weird hacks to point the GPS in the right direction. It does the following:
Does not codify any object-level terminal goals for the GPS.
Lets shards influence the GPS’ plan-making process.
Lets the GPS reverse-engineer shards, the procedural and implicit knowledge they represent.
Encourages the GPS to treat these reverse-engineered knowledge tidbits as hints towards some hypothetical unified objective that it’s meant to adopt as its real target.
The GPS engages in value compilation as I’ve outlined, and tends to compile goal-spreads that are closer to R the more it engages in this process — inasmuch as the shard economy it’s using to derive its goals is itself optimized for R.
This hack lets the SGD point the “proto-wrapper-mind” in the direction of R without actually building R into it. The agent was already optimized for achieving R, so the SGD basically tasks it with “figure out what you’re optimized for, and go do that”, and the agent complies. (But the unified goal GΣ implicit in the agent’s design isn’t quite R, so we get inner misalignment.)
So, in this very round-about way, we get a goal-maximizer.
I think that heuristics are reflections of historical facts of that form, but not statements themselves.
Does “evidence of historical facts of that form” work for you?
You rely a lot on selection-level reasoning in a way which feels sketchy.
Specific examples? I specifically tried to think in terms of local gradients (“in which direction would it be advantageous for the SGD to move the model from this specific point?”), not global properties (“what is the final mind-design that would best satisfy the SGD’s biases, regardless of the path taken there?”). Or do you disagree with that style of reasoning as well?
What is the difference, on your view, between a WM which is “explicit” and one which e.g. has an outgoing connection from is-cat circuit to is-animal?
I’ve outlined some reasons above — the main point is whether it’s accessible to the GPS/deliberative planner, because if it is, it allows WM-concepts to be employed much more flexibly and generally.
(I’m actually planning a separate post on this matter, though.)
If the agent is doing SSL on its future observations and (a subset of its) recurrent state activations, then the learning process would presumably train the network to reflectively predict its own future heuristic-firings
Yeah, but that’s not shards making themselves legible, that’s a separate process in the agent trying to build their generative models from their externally-observed behavior, no?
Furthermore, there should be some consistent formatting since the heuristics are functions h: M_s → A_s
Consistent input-output formatting, sure: an API, where each shard takes in the WM, then outputs stuff into the planner/the GPS/the bid-resolver/the cross-heuristic communication channel/some such coordination mechanism.
That’s not what I’m getting at. It still wouldn’t let you predict what a shard will do without observing its actions. No consistent design structure, where each shard has a part you can look at and go “aha, that’s what it’s optimizing for!”. No meta-data summary/documentation to this effect attached to every shard.
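Here’s a toy sketch of the kind of “API without documentation” I mean (the particular shards and the additive bid-resolver are hypothetical, purely for illustration):

```python
# Toy sketch: shards share a call signature, but expose no metadata about what
# they're "optimizing for". All names here are hypothetical.
from typing import Callable, Dict, List

WorldModel = Dict[str, float]   # crude stand-in: feature name -> value
Bids = Dict[str, float]         # action name -> bid strength
Shard = Callable[[WorldModel], Bids]


def donut_shard(wm: WorldModel) -> Bids:
    # Ad-hoc internals grown around whatever features this shard happened to latch onto.
    return {"approach_donuts": 2.0 * wm.get("smells_like_donuts", 0.0)}


def leave_store_shard(wm: WorldModel) -> Bids:
    # Different ad-hoc internals, same outward signature.
    return {"exit_store": 1.0} if wm.get("checkout_line_length", 0.0) > 5 else {}


def resolve(wm: WorldModel, shards: List[Shard]) -> str:
    """The bid-resolver only ever sees bids; there is no field it could read to
    learn a shard's 'goal' without running it and watching what it does."""
    totals: Bids = {}
    for shard in shards:
        for action, strength in shard(wm).items():
            totals[action] = totals.get(action, 0.0) + strength
    return max(totals, key=totals.get, default="noop")


print(resolve({"smells_like_donuts": 1.0, "checkout_line_length": 7.0},
              [donut_shard, leave_store_shard]))
```

Same signature everywhere, but the only way anything else in the system learns what a shard “wants” is by running it and watching its bids.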
And under certain “simplicity pressures/priors”, heuristics may reuse each other’s deliberative machinery
Agreed; I think I mention that in the post, even. Issue: such structures would be as ad-hoc as the shards’ inner implementation. You wouldn’t get alliances that change at runtime, where shards can look at each other’s local incentives and choose to e. g. “engage in horse-trading”, or where they can somehow “figure out” that some other shard is doing the same thing they’re doing in this specific context only and so only re-use its activations in that context.
No, you’d just get some shards that are hard-wired to always fire with some other shards, or always inhibit some other shards. These alliances can be rewritten by the reward circuitry, but not by the shards themselves.
That doesn’t require all shards to be legible to each other; that just requires there to be gradients towards some specific chains of shard activations.
I don’t positively buy reasoning about whether “deceptive alignment” is probable
My outline of it here is also written with local gradients, not global selection targets, in mind. You might want to check it out?
Just one heuristic for all of chess?
Yeah, no. I recall wanting to make an aside like “obviously in practice chess-winning will be implemented via a lot of heuristics”, but evidently I didn’t.
Can you give me an example of what [value compilation] means for a network which has a diamond-shard and an ice-cream-eating shard?
First, note that I’m not saying that GΣ is necessarily “simple”, as e. g. a hedonist’s desire for pleasure. It can have many terms that can’t be “merged” together. I’m just saying that we have an impulse to merge terms as much as possible. This is one of the cases where they can’t be merged.
As per 6A, that would go as in the “disjunction” section. I. e., the agent would figure out tradeoffs it’s willing to make WRT diamonds and ice cream, and then go for plans that maximize the weighted sum of diamonds-and-ice-cream it has.
… Alright, I see your point about “utility is not the optimization target”: there’s no inherent reason to think it’d want as many of these things as possible. E. g., ice-cream shard’s activation power may be capped at 1000 ice creams, and the agent may interpret it as a hard limit. But okay, so then it’d try to maximize the probability of achieving that utility cap, or the time it’d stay in the max-utility state, or something along those lines.
Like… There are states in which shards activate, and states in which they’re dormant. Thus, shards steer the agents they’re embedded in towards some world-states. Interpreting/reverse-engineering this behavior into goals, it seems natural to view it as “I want to be in such world-states over such others”. And then the GPS will be tasked with making that happen, and...
Well, it would try to output a “good” plan for making it happen, for some definition of “good”. And… you disagree that this definition has to lead to arg-maxing, okay.
I guess instead of maximizing we can satisfice: as you describe here, we can just generate a bunch of plans and choose one that seems good enough, instead of generating the best possible plan. But:
As agents become more powerful, it becomes easier for them to generate insanely good plans with trivial effort, so we have no guarantees the first idea the hyperintelligent AI would come up with won’t be basically utility-maximizing.
That only applies if the agent’s preferences themselves aren’t maximizable: if it didn’t decide its goal is to have “AS MANY diamonds as possible” instead of “at most 1000 diamonds”, or if it doesn’t have some instrumental goal like “MINIMIZE uncertainty”.
I’m… not sure humans don’t do grader-optimization? It seems like if we all had magical question-answering devices, we’d go around asking them for “the best, most resource-efficient plan for X” all the time. We just don’t have the mental resources for it, ordinarily! It’s as I’d described before: we maximize over (plan quality, resources spent on planning), not plan quality only.
(Re: magical question-answerers, yeah, we’d also want a provision like “but interpret that ask faithfully instead of doing a technical genie”. But that’s not an issue if the agent is the one doing the planning. Like, it doesn’t prompt some separate plan-making module that it has reason to fear would output something that hacks/Goodharts it. It just consciously tries to come up with “a very good plan”, and it’s just so smart it has a lot of slack on optimizing that plan along dimensions like “probability of success” and “the optimal world-state will be very stable”. And then that washes away everything in the universe that the agent is not explicitly optimizing for.)
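To pin down the argmax-vs-satisfice contrast I keep leaning on, here’s a toy sketch (the two-term score, the 1000-ice-cream cap, and the planner internals are all illustrative assumptions, not a model of an actual GPS):

```python
# Toy sketch of "argmax over candidate plans" vs. "satisfice within a planning budget".
# The score function, the cap, and the random plan generator are all hypothetical.
import random

random.seed(0)

def combined_score(diamonds, ice_creams):
    # A compiled tradeoff between two shards, with the ice-cream term capped
    # (standing in for an activation strength that tops out at ~1000 ice creams).
    return diamonds + 0.5 * min(ice_creams, 1000.0)

def propose_plan():
    # Stand-in for the GPS generating one candidate plan's predicted outcome.
    return (random.uniform(0, 2000), random.uniform(0, 2000))

def argmax_planner(n_candidates):
    """Grader-optimization flavour: spend the whole budget, keep the best-scoring plan."""
    return max((propose_plan() for _ in range(n_candidates)),
               key=lambda plan: combined_score(*plan))

def satisficing_planner(threshold, budget):
    """Satisficing flavour: stop at the first plan that clears a 'good enough' bar,
    i.e. trade off plan quality against resources spent on planning."""
    best = None
    for _ in range(budget):
        plan = propose_plan()
        if combined_score(*plan) >= threshold:
            return plan
        if best is None or combined_score(*plan) > combined_score(*best):
            best = plan
    return best  # best-so-far fallback if nothing cleared the bar within the budget

print("argmax:   ", argmax_planner(10_000))
print("satisfice:", satisficing_planner(threshold=1500.0, budget=50))
```

The worry in the first bullet above then reads: as propose_plan gets arbitrarily good, even the satisficer’s first accepted plan can be an extreme one.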
Seems like assuming “activation strengths increase the further WM values are from target values” leads us to this bizarre GPS goal. While that proposition may be true as a tendency, I don’t see why it should be true in any strict sense, or if you believe that, or whether the analysis hinges on it?
I think no, it doesn’t hinge on it, as per the section just above? All we need is for shards to have some preferences for certain world-model-states over others.
Don’t you mean “figuratively pursuing GΣ”? How would one “pursue” contextual behaviors?
In the vacuous way where any agent could be said to maximize what they’re already doing? I did say “figuratively”.
Same objection as above—to “achieve” BΣ? How do you “achieve” behaviors? And, what, this would happen how? What part of training are we in? What is happening in this story, is SGD optimizing the agent to be runtime-adaptive, or..?
… Yeah, okay, that phrasing is very bad. What I meant is: Suppose we have a shard that tries to figure out where a predator could ambush the agent from. Before the GPS, it had some ad-hoc analysis heuristic that was hooked up to a bunch of WM concepts. After the GPS, that shard can instead loop general-purpose planning in, prompt it with “figure out from where the predator can ambush us, here’s some ideas to start”, and the GPS would do better than the shard’s own ad-hoc algorithm.
Hence, we’ll get an agent that would “get better at what it was already doing”.
I agree that “become more effective at achieving BΣ” is a pretty nonsensical way to put it, though.
Flag that I wish you would write this as “during additional training, the interim model performance can be at least as U-unperformant as the contextual behaviors.”
Sure.
I think that heuristics controlling GPS-machinery is probably where the GPS comes from to begin with, so this step doesn’t seem necessary.
Agreed; also think I mentioned that in a footnote. I’m not sure, though, and I think we can design some weird training setups where the GPS might first appear in the WM or something (as part of a simulated human?), so my goal here was to show that the process would go this way regardless of where the GPS originated.
I agree that the way I phrased that there is weird, though.
What? Why? Why would a value-human shard bid for plans generated via GPS which involve people dying?
I don’t think it’d bid for such plans. I think shards have less decision-making power in advanced agents, compared to the GPS’ interpretation of shards’ goals. Inasmuch as there would be imperfections in the value-humans shard’s caring, the GPS would uncover them, and exploit them to make that shard play nicer with other shards.
E. g., suppose the value-humans shard isn’t as upset as we would be if a human got their thumb torn off (and is anomalously non-upset about any second-order effects of that, etc.; it basically ignores tear-a-thumb-off plans), and there’s some shard like “sadistic fun” that really enjoys seeing humans get their fingers torn off. Even if the value-humans shard is much more powerful, the GPS’ desire to integrate all its values would lead to it adopting some combination value where it thinks it’s fine to tear people’s fingers off for fun.
That’s not a realistic example, but I hope it conveys the broader point: any imperfections in value-humans will be exploited by the rest of the shard economy, and the broader process that tries to satisfy the goal implicitly embodied by the shard economy.
And then, even if the value-humans shard is perfect, the AI might just figure out some galaxy-brained merger of it with a bunch of other shards, that makes logical sense to it as an extrapolation, and just override the value-humans shard’s protests. (Returning to a previous example: Suppose we’ve adopted “avoid ambush predators” as our explicit goal, then ended up in a forest environment where we’re ~100% sure there are no predators. The “be afraid of crunchy noises at night” shard would activate, but we’d just dismiss it, because we know it has no clue and we know better.)
I use “values” to mean decision-influences
Mm, I dispute that choice. I think “value” has the connotation of “sacred value” and “terminal value” and “something the agent wouldn’t want to change about themselves”, and that doesn’t clearly map onto “a consistent way the agent’s decisions are steered”? My broad point, here, is that shards-as-decision-influencers aren’t necessarily endorsed by agents in their initial form, and calling them “values” conveys wrong intuitions (for my purposes, at least).
I prefer “proto-values” for shards-when-viewed-as-repositories-of-contextual-goals, and… Yeah, I don’t think I even have anything in my model that works well for “value”. “Intermediary values” as a description of contextual goals, maybe.
Aside: I think self-awareness arises from elsewhere in the shard ecosystem.
Would be interested in your model of that!