Contra shard theory, in the context of the diamond maximizer problem
A bunch of my response to shard theory is a generalization of the argument that niceness is unnatural. In a similar fashion, the other “shards” that the shard theory folks want the AI to learn are unnatural too.
That said, I’ll spend a few extra words responding to the admirably-concrete diamond maximizer proposal that TurnTrout recently published, on the theory that briefly gesturing at my beliefs is better than saying nothing.
I’ll be focusing on the diamond maximizer plan, though this criticism generalizes to shard theory more broadly.
The first “problem” with this plan is that you don’t get an AGI this way. You get an unintelligent robot that steers towards diamonds. If you keep trying to have the training be about diamonds, it never particularly learns to think. When you compromise and start putting it in environments where it needs to be able to think to succeed, then your new reward-signals end up promoting all sorts of internal goals that aren’t particularly about diamond, but are instead about understanding the world and/or making efficient use of internal memory and/or suchlike.
Separately, insofar as you were able to get some sort of internalized diamond-ish goal: if you’re not really careful, you end up with lots of other goals alongside it, such as goals about glittering things, stones cut in stylized ways, proximity to diamond rather than presence of diamond, and so on and so forth.
Furthermore, once you do get it to be smart, all of those little correlates of the training objectives that it latched onto in order to have a gradient up to general intelligence blow the whole plan sky-high when it starts to reflect.
What the AI’s shards become under reflection is very sensitive to the ways it resolves internal conflicts. For instance, in humans, many of our values trigger only in a narrow range of situations (e.g., people care about people enough that they probably couldn’t bring themselves to murder a hundred thousand people one at a time, but they can still drop a nuke), and whether we resolve that as “I should care about people even when they’re not right in front of me” or “I shouldn’t care about people any more than I do when they’re distant and abstract” depends quite a bit on how reflection resolves those inconsistencies.
Or consider the conflict “I really enjoy dunking on the outgroup (but have some niggling sense of unease about this)”. From the fact that the enjoyment of dunking is loud while the niggling doubt is quiet, we can’t conclude that the dunking-on-the-outgroup value will be the one left standing after reflection.
As far as I can tell, the “reflection” section of TurnTrout’s essay says ~nothing that addresses this, and amounts to “the agent will become able to tell that it has shards”. OK, sure, it has shards, but only some of them are diamond-related, and many others are cognition-related or suchlike. I don’t see any argument that reflection will result in the AI settling on “maximize diamond” in particular.
Finally, I’ll note that the diamond maximization problem is not in fact the problem “build an AI that makes a little diamond”, nor even “build an AI that probably makes a decent amount of diamond, while also spending lots of other resources on lots of other stuff” (although the latter would be more progress than the former). The diamond maximization problem (as originally posed by MIRI folk) is the challenge of building an AI that definitely optimizes for a particular simple thing, on the theory that if we knew how to do that (in unrealistically simplified models, allowing for implausible amounts of (hyper)computation), then we would have learned something significant about how to point cognition at targets in general.
TurnTrout’s proposal seems to me to be basically “train it around diamonds, do some reward-shaping, and hope that at least some care-about-diamonds makes it across the gap”. I doubt this works (because the optimum of the shattered correlates of the training objectives that it does get is likely to involve tiling the universe with something that isn’t actually diamond, even if you’re lucky enough that it got a diamond-shard at all, which is dubious), but even if it works a little, it doesn’t seem to me to be teaching us any of the insights that would be possessed by someone who knew how to robustly aim an idealized unbounded (or even hypercomputing) cognitive system in theory.
I am not a huge fan of shard theory, but other people seem into it a bunch. This post captured at least a bunch of my problems with shard theory (though not all of them, and it’s not a perfect post). This means the post has saved me some writing effort a bunch of times.