The niceness shard isn’t just bidding over outcomes; it’s bidding on next thoughts (on my understanding of how this works). And so these thoughts would get bid down.
Seems similar to how I conceptualize this paper’s approach to controlling text generation models using gradients from classifiers. You can think of the niceness shard as implementing a classifier for “is this plan nice?”, and updating the latent planning state in directions that make the classifier more inclined to say “yes”.
The linked paper follows a similar process, but uses a trained classifier and actual gradient descent, and updates LM token representations. Of particular note: the classifiers used in the paper are pretty weak (trained on ~500 examples) and not at all adversarially robust, yet they still work for controlling text generation.
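To make the analogy concrete, here's a toy sketch of the core move: take gradients of a classifier's "yes" probability with respect to a latent state, and nudge the state in that direction. This is not the paper's actual method (which perturbs transformer activations during generation); the linear classifier and all names here are illustrative stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def steer_latent(h, w, b, lr=0.1, steps=10):
    """Nudge latent state h so a toy linear 'niceness' classifier,
    p(nice) = sigmoid(w @ h + b), becomes more confident.
    The gradient of log p(nice) w.r.t. h is (1 - p) * w."""
    h = h.copy()
    for _ in range(steps):
        p = sigmoid(w @ h + b)
        h += lr * (1.0 - p) * w  # gradient ascent on log p(nice)
    return h

rng = np.random.default_rng(0)
h = rng.normal(size=8)   # stand-in for a latent planning state
w = rng.normal(size=8)   # toy "niceness" classifier weights
b = -1.0

before = sigmoid(w @ h + b)
after = sigmoid(w @ steer_latent(h, w, b) + b)
# each step moves h along w, so the classifier's confidence rises
```

The shard analogy would be that the "classifier" and the "gradient step" are both implemented implicitly by the network's internal bidding dynamics rather than by an explicit outer loop.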
I wonder if inserting shards into an AI is really just that straightforward?