Do you think we could build a diamond maximizer using those ideas, though?
They’re almost certainly not sufficient. A full-fledged diamond maximizer would need far more machinery, if only to do the maximization and properly learn the representation.
The concern here is that the representation has to cleanly demarcate what we think of as diamonds.
I think this touches on a related concern, namely Goodharting. If we even slightly mis-specify the utility function at the boundary and the AI optimizes in an unrestrained fashion, we’ll end up with weird situations that are totally uncorrelated with what we were initially trying to get the AI to optimize.
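To make that concrete, here’s a minimal numpy toy (both utility functions are invented for illustration, not meant to model anything real): a proxy that tracks the “true” utility closely over the region it was tuned on, but comes completely apart from it once you optimize without restraint.

```python
# Toy Goodhart sketch: a slightly mis-specified proxy utility that is almost
# perfectly correlated with the true utility near the intended region, yet sends
# an unrestrained optimizer somewhere the true utility is roughly zero.
import numpy as np

def true_utility(x):
    # What we actually care about: rises, peaks around x = 1, then decays.
    return x * np.exp(-x)

def proxy_utility(x):
    # What we accidentally specified: agrees closely for small x, but keeps growing.
    return x / (1.0 + x)

xs = np.linspace(0.0, 10.0, 1001)

# Near the intended region the two are almost perfectly correlated...
near = xs <= 1.0
print("correlation on [0, 1]:",
      np.corrcoef(true_utility(xs[near]), proxy_utility(xs[near]))[0, 1])

# ...but the unconstrained argmax of the proxy sits where the true utility is tiny.
x_star = xs[np.argmax(proxy_utility(xs))]
print("proxy argmax:", x_star, "true utility there:", true_utility(x_star))
```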
If we don’t solve this problem then, I agree, things are extremely difficult at best and completely intractable at worst. However, if we can rein in Goodharting, I don’t think things are intractable.
To make the point, I think the problem of an AI Goodharting a representation is very analogous to the problems being tackled in the field of adversarial perturbations for image classification. In this case, the “representation space” is the image itself. The boundaries are classification boundaries set by the classifying neural network. The optimizing AI that Goodharts everything is usually just some form of gradient descent.

The field started when people noticed that even tiny, imperceptible perturbations to images in one class would fool a classifier into thinking they were images from another class. The interesting thing is that when you take this further, you get deep dreaming and inceptionism. The Lovecraftian dog-slugs that arise from that process are a result of the local optimization properties of SGD combined with the flaws of the classifier, which, I think, is analogous to Goodharting in the case of a diamond maximizer with a learnt ontology. The AI will do something weird and become convinced that the world is full of diamonds; meanwhile, if you ask a human about the world it created, “Lovecraftian” will probably precede “diamond” in the description.
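As a toy version of that dynamic (numpy only, with a linear stand-in for the classifier and made-up names; real adversarial-example work targets deep vision models), plain gradient ascent on the input is enough to walk an example across the learned class boundary with a comparatively small perturbation:

```python
# Minimal sketch: "attack" a fixed linear-logistic classifier by doing gradient
# ascent on its output with respect to the input. The classifier and data are
# stand-ins, not a real vision model.
import numpy as np

rng = np.random.default_rng(0)
d = 100                            # dimensionality of our stand-in "image"
w = rng.normal(size=d)             # weights of the already-trained classifier
b = 0.0

def p_diamond(x):
    """Classifier's probability that x is in the target ('diamond') class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Start from an input the classifier confidently rejects.
x = rng.normal(size=d)
x -= (w @ x + b + 3.0) / (w @ w) * w      # shift along w so the logit is exactly -3
print("before:", p_diamond(x))

# Gradient ascent on the classifier's probability, taken with respect to the input.
x_adv = x.copy()
for _ in range(50):
    p = p_diamond(x_adv)
    x_adv += 0.1 * p * (1.0 - p) * w      # d p / d x for a logistic model

print("after:", p_diamond(x_adv))
print("relative perturbation:", np.linalg.norm(x_adv - x) / np.linalg.norm(x))
```

The input changes only modestly, but the classifier’s verdict flips completely; that gap between “scores as a diamond” and “is a diamond” is exactly the Goodharting worry.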
However, the field of adversarial examples seems to indicate that it’s possible to at least partially overcome this form of Goodharting and, by analogy, the Goodharting that we would see with a diamond maximizer. IMO, the most promising and general solution is to be more Bayesian and keep track of the uncertainty associated with class labels. By keeping track of that uncertainty, it’s possible to avoid class boundaries altogether and optimize towards regions of the space that are more likely to be part of the desired class.
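Here’s a rough sketch of what I mean, with a small ensemble standing in for a Bayesian posterior over classifiers (everything here is invented for illustration): instead of climbing one model’s probability, climb the ensemble’s mean probability while penalizing disagreement, so the optimizer gets steered toward regions the members confidently agree on rather than toward any single model’s boundary.

```python
# Sketch of uncertainty-aware optimization: an ensemble of noisy linear classifiers
# stands in for posterior samples. We ascend mean probability minus a penalty on
# the ensemble's variance, so high-disagreement (high-uncertainty) directions are
# avoided rather than exploited.
import numpy as np

rng = np.random.default_rng(1)
d, k = 100, 10
w_true = rng.normal(size=d)
ws = w_true + 0.3 * rng.normal(size=(k, d))   # k "independently trained" members

def member_probs(x):
    """Each ensemble member's probability that x is in the target class."""
    return 1.0 / (1.0 + np.exp(-(ws @ x)))

def ascent_step(x, lr=0.05, lam=5.0):
    """One gradient step on the objective mean(p) - lam * var(p)."""
    p = member_probs(x)
    s = p * (1.0 - p)                                  # each sigmoid's derivative
    coeff = s * (1.0 - 2.0 * lam * (p - p.mean()))     # analytic gradient weights
    return x + lr * (coeff[:, None] * ws).mean(axis=0)

x = np.zeros(d)
for _ in range(200):
    x = ascent_step(x)

p = member_probs(x)
print("mean probability:", round(p.mean(), 3), "ensemble spread:", round(p.std(), 3))
```

The variance penalty is what stands in for “avoid the class boundary”: the directions where the members disagree are exactly the ones where a single learned boundary, rather than the underlying concept, is doing the work.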
I can’t seem to dig it up right now, but I once saw a paper where they developed a robust classifier. When they used SGD to change a picture from being classified as a cat to being classified as a dog, the underlying image actually went from looking like a cat to looking like a dog. By analogy, a diamond maximizer with a robust classification of diamonds in its representation should actually produce diamonds.
Overall, adversarial examples seem to be a microcosm for evaluating this specific kind of Goodharting. My optimism that we can do robust ontology identification is tied to the success of that field, but at the moment the problem doesn’t seem intractable.
They’re almost certainly not sufficient. A full-fledged diamond maximizer would need far more machinery, if only to do the maximization and properly learn the representation.
Clarification: I meant (but inadequately expressed) “do you think any reasonable extension of these kinds of ideas could get what we want?” Obviously, it would be quite an unfair demand for rigor to ask whether we can do the thing right now.
Thanks for the great reply. I think the remaining disagreement might boil down to the expected difficulty of avoiding Goodhart here. I do agree that using representations is a way around this issue, and it isn’t the representation learning approach’s job to simultaneously deal with Goodharting.
do you think any reasonable extension of these kinds of ideas could get what we want?
Conditional on avoiding Goodhart, I think you could probably get something that looks a lot like a diamond maximizer. It might not be perfect: the situation with the “most diamond” might not be the maximum of its utility function, but I would expect the maximum of its utility function to still contain a very large amount of diamond. For instance, depending on the representation and the way the programmers baked in the utility function, it might have a quirk of only recognizing something as a diamond if it’s stereotypically “diamond shaped”. This would bar it from just building pure carbon planets to achieve its goal.
IMO, you’d need something else outside of the ideas presented to get a “perfect” diamond maximizer.