when alignment-by-default works, we can use the system to design a successor without worrying about amplification of alignment errors
Anything neural-net-related starts from random noise and takes gradient-descent-style steps. This doesn’t get you the global optimum; it gets you some point that is approximately a local optimum, one which depends on the initial noise, the shape of the search space, and the choice of step size.
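To make the dependence on initialization concrete, here is a minimal toy sketch in Python/NumPy (mine, not part of the original argument): gradient descent on a bumpy one-dimensional loss surface, run from several random starting points. The function and step size are invented purely for illustration.

```python
# Toy sketch (illustrative only): gradient descent on a function with several
# local minima. Different random initializations end up in different basins,
# illustrating that training finds *a* local optimum, not *the* global one.
import numpy as np

def f(x):
    # A bumpy loss surface: one global minimum, several local ones.
    return 0.1 * x**2 + np.sin(3 * x)

def grad_f(x):
    return 0.2 * x + 3 * np.cos(3 * x)

def descend(x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

rng = np.random.default_rng(0)
for start in rng.uniform(-6, 6, size=5):
    end = descend(start)
    print(f"start {start:+.2f} -> end {end:+.2f}, loss {f(end):.3f}")
# Different starting noise lands in different basins, with different final losses.
```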
If nothing else, the training data will contain sensor noise.
At best you are going to get something that roughly corresponds to human values.
Just because it isn’t obvious where the noise entered the system doesn’t make it noiseless. Just because you gave the same name to what we actually want and to the value of a neuron in a neural net doesn’t make them the same thing.
Consider the large set of references with representative members “What Alice makes long-term plans towards”, “What Bob’s impulsive actions tend towards”, “What Alice says is good and right when her social circle is listening”, “What Carl listens to when deciding which politician to vote for”, “What news makes Eric instinctively feel good”, “What makes Fred press the reward button during AI training”, etc.
If these all referred to the same preference ordering over states of the world, then we could call that human values, and have a natural concept.
Trees are a fairly natural concept because “tall green things” and “lifeforms that are >10% cellulose” point to a similar set of objects. There are many different simple boundaries in concept-space that largely separate trees from non-trees. Trees are tightly clustered in thing-space.
To the extent that all those references refer to the same thing, we can’t expect an AI to distinguish between them. To the extent that they refer to different concepts, at best the AI will have a separate concept for each.
Suppose you run the microscope AI, and you find that you have a whole load of concepts that kind of match “human values” to different degrees. These represent different people and different embeddings of value. (Of course, “What Carl listens to when deciding which politician to vote for” contains Carl’s distrust of political promises. “What makes Fred press the reward button during AI training” includes the time Fred tripped up and slammed the button by accident. Each of the easily accessible concepts is a bit different and includes its own bit of noise.)
Trees are a fairly natural concept because “tall green things” and “lifeforms that are >10% cellulose” point to a similar set of objects. There are many different simple boundaries in concept-space that largely separate trees from non-trees. Trees are tightly clustered in thing-space.
That’s not quite how natural abstractions work. There are lots of edge cases which are sort-of-trees-but-sort-of-not: logs, saplings/acorns, petrified trees, bushes, etc. Yet the abstract category itself is still precise.
An analogy: consider a Gaussian cluster model. Any given cluster will have lots of edge cases, and lots of noise in the individual points. But the cluster itself—i.e. the mean and variance parameters of the cluster—can still be precisely defined. Same with the concept of “tree”, and (I expect) with “human values”.
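As a hedged illustration of this point, here is a small sketch using scikit-learn’s GaussianMixture (the data and parameters are invented for the example): individual points near the boundary between clusters are genuinely ambiguous, yet the fitted cluster parameters are still sharply estimated.

```python
# Illustration of "precise cluster, fuzzy members": edge-case points get
# ambiguous memberships, but the cluster means are recovered accurately.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
true_means = np.array([[0.0, 0.0], [4.0, 4.0]])
samples = np.vstack([
    rng.normal(true_means[0], 1.0, size=(5000, 2)),
    rng.normal(true_means[1], 1.0, size=(5000, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
print("estimated means:\n", gmm.means_)          # close to [0,0] and [4,4]

# A point halfway between the clusters is a genuine edge case...
edge_case = np.array([[2.0, 2.0]])
print("edge-case membership:", gmm.predict_proba(edge_case))  # roughly 50/50
# ...yet the cluster parameters themselves are still precisely defined.
```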
In general, we can have a precise high-level concept without a hard boundary in the low-level space.
Consider a source of data drawn from a mixture of several Gaussian distributions. If you have a sufficiently large number of samples from this distribution, you can locate the original Gaussians to arbitrary accuracy. (Of course, with a finite number of samples you will have some inaccuracy in estimating the locations of the Gaussians, possibly a lot.)
However, not all distributions share this property. If you look at uniform distributions over rectangles in 2D space, you will find that a uniform distribution over an L-shaped region can be built out of two rectangles in two different ways. More complicated shapes can be built in even more ways. The property that a mixture of Gaussians can be uniquely decomposed into its individual Gaussians does not hold for every family of distributions.
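A minimal sketch of the non-identifiability half of this claim (my own example, with made-up coordinates): the same uniform L-shaped distribution is sampled via two different rectangle decompositions, and the resulting samples are statistically indistinguishable, so no amount of data can tell you which pair of rectangles “really” generated it.

```python
# Two different rectangle decompositions of the same L-shaped uniform
# distribution. Mixture components cannot be recovered from samples alone.
import numpy as np

rng = np.random.default_rng(0)

def sample_L_version_a(n):
    # Decomposition A: a 1x2 vertical rectangle plus a 1x1 square to its right.
    areas = np.array([2.0, 1.0]); p = areas / areas.sum()
    which = rng.choice(2, size=n, p=p)
    x = np.where(which == 0, rng.uniform(0, 1, n), rng.uniform(1, 2, n))
    y = np.where(which == 0, rng.uniform(0, 2, n), rng.uniform(0, 1, n))
    return np.column_stack([x, y])

def sample_L_version_b(n):
    # Decomposition B: a 2x1 horizontal rectangle plus a 1x1 square on top-left.
    areas = np.array([2.0, 1.0]); p = areas / areas.sum()
    which = rng.choice(2, size=n, p=p)
    x = np.where(which == 0, rng.uniform(0, 2, n), rng.uniform(0, 1, n))
    y = np.where(which == 0, rng.uniform(0, 1, n), rng.uniform(1, 2, n))
    return np.column_stack([x, y])

# Both samplers draw from exactly the same L-shaped uniform distribution,
# so their empirical densities match to within sampling error.
a, b = sample_L_version_a(100_000), sample_L_version_b(100_000)
print(np.histogram2d(a[:, 0], a[:, 1], bins=4, range=[[0, 2], [0, 2]])[0] / len(a))
print(np.histogram2d(b[:, 0], b[:, 1], bins=4, range=[[0, 2], [0, 2]])[0] / len(b))
```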
I would expect that whether or not logs, saplings, petrified trees, sparkly plastic Christmas trees, etc. counted as trees would depend on the details of the training data, as well as the network architecture and possibly the random seed.
Note: this is an empirical prediction about current neural networks. I am predicting that if someone takes two networks that have been trained on different datasets, ideally with different architectures, locates the neuron that holds the concept of “tree” in each, and then shows both networks an edge case that is kind of like a tree, the networks will often disagree significantly about how much of a tree it is.
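For what it’s worth, here is a crude toy proxy for that experiment (my sketch, with invented data and hyperparameters, not a claim about how the real test should be run): two small networks trained on different data with different architectures and seeds, compared on an ambiguous point between two well-separated clusters.

```python
# Toy proxy: do two independently trained networks agree on an edge case?
import numpy as np
from sklearn.neural_network import MLPClassifier

def make_data(n, seed):
    r = np.random.default_rng(seed)
    trees = r.normal([0, 0], 1.0, size=(n, 2))      # "tree" cluster
    not_trees = r.normal([5, 5], 1.0, size=(n, 2))  # "non-tree" cluster
    X = np.vstack([trees, not_trees])
    y = np.array([1] * n + [0] * n)
    return X, y

X1, y1 = make_data(500, seed=1)
X2, y2 = make_data(500, seed=2)

net_a = MLPClassifier(hidden_layer_sizes=(32,), random_state=1, max_iter=2000).fit(X1, y1)
net_b = MLPClassifier(hidden_layer_sizes=(16, 16), random_state=2, max_iter=2000).fit(X2, y2)

edge_case = np.array([[2.5, 2.5]])  # halfway between the clusters
print("net A tree-ness:", net_a.predict_proba(edge_case)[0, 1])
print("net B tree-ness:", net_b.predict_proba(edge_case)[0, 1])
# Clear-cut points get near-identical answers; the edge case is where the
# two networks are most likely to diverge.
```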