I don’t think there is a fully clean distinction between goal representations and belief/concept representations
Alright, this is making me excited for your upcoming post.
In humans it often seems like the goal of achieving X is roughly equivalent to a deep-rooted belief that achieving X would be good, where “good” is a kinda fuzzy predicate that we typically don’t look at very hard
I like this framing a lot. I like it so much, in fact, that I intend to use it in my upcoming long post compiling arguments against the theoretical soundness and practical viability of CEV.
The orthogonality thesis was always about what agents were possible, not which were likely. [...] the orthogonality thesis doesn’t imply that goals and beliefs are uncorrelated
This is related to what I wrote in footnote 3 of my previous comment to you. But anyway, would you agree that this is an absolutely terrible name for this concept? When I type "orthogonal" into Google, the very first thing that pops up is a list of definitions containing "[2] STATISTICS (of variates) statistically independent." And even if people aren’t meant to be familiar with this definition, the most common and basic use of "orthogonal", namely in linear algebra, implies that two vectors are not only linearly independent, but also that they are as far from pointing in a "related" direction as mathematically possible!
It completely boggles the mind that “orthogonality” was the word chosen as the encoding of these ideas.
Anyway, I left the most substantive and important comment for last.
And so, given this, when I postulate a pressure to simplify representations my default assumption is that this will apply to both types of representations—as it seems to in my own brain, which often tries very hard to simplify my moral goals in a roughly analogous way to how it tries to simplify my beliefs.
The thing about this is that you don’t seem to be currently undergoing the type of ontological crisis or massive shift in capabilities that would be analogous to an AI getting meaningfully more intelligent due to algorithmic improvements or increased compute or data (if you actually are, godspeed!).
So would you argue that this type of goal simplification and compression happens organically and continuously even in the absence of such a “phase transition”? I have a non-rigorous feeling that this argument would prove too much by implying more short-term modification of human desires than we actually observe in real life.
Relatedly, would you say that your moral goals are simpler now than they were, say, back when you were a child? I am pretty sure that the answer, at least for me, is “definitely not,” and that basically every single time I have grown “wiser” and had my belief system meaningfully altered, I came out of that process with a deeper appreciation for the complexity of life and for the intricacies and details of what I care about.
I’m generally confused by the argument here.
As we examine successively more intelligent agents and their representations, the representation of any particular thing will perhaps be more compressed, but also and importantly, more intelligent agents represent things that less intelligent agents don’t represent at all. I’m more intelligent than a mouse, but I wouldn’t say I have a more compressed representation of differential calculus than a mouse does. Terry Tao is likely more intelligent than I am, likely has a more compressed representation of differential calculus than I do, but he also has representations of a bunch of other mathematics I can’t represent at all, so the overall complexity of his representations in total is plausibly higher.
Why wouldn’t the same thing happen for goals? I’m perfectly willing to say I’m smarter than a dog and a dog is smarter than a paramecium, but it sure seems like the dog’s goals are more complex than the paramecium’s, and mine are more complex than the dog’s. Any given fixed goal might have a more compressed representation in the more intelligent animal (I’m not sure it does, but that’s the premise so let’s accept it), but the set of things being represented is also increasing in complexity across organisms. Driving the point home, Terry Tao seems to have goals of proving theorems I don’t even understand the statement of, and these seem like complex goals to me.
So overall I’m not following from the premises to the conclusions. I wish I could make this sharper. Help welcome.
I think what you’re saying just makes a lot of sense, honestly.
I’d suspect one possible counterargument is that, just like how more intelligent agents with more compressed models can more compactly represent complex goals, they are also capable of drawing ever-finer distinctions that allow them to identify possible goals that have very short encodings in the new ontology, but which don’t make sense at all as stand-alone, mostly-coherent targets in the old ontology (because it is simply too weak to represent them). So it’s not just that goals get compressed, but also that new possible kinds of goals (many of them really simple) get added to the game.
But this process should also allow new goals to arise that have ~ any arbitrary encoding length in the new ontology, because it should be just as easy to draw new, subtle distinctions inside a complex goal (which outputs a new medium- or large-complexity goal) as it would be inside a really simple goal (which outputs the type of new super-small-complexity goal that the previous paragraph talks about). So I don’t think this counterargument ultimately works, and I suspect it shouldn’t change our expectations in any meaningful way.
I’m skeptical of the naming being bad; it fits with that definition and with the common understanding of the word. The Orthogonality Thesis is saying that the two qualities, intelligence and goals/values, are not necessarily related, which may seem trivial nowadays, but there used to be plenty of people going “if the AI becomes smart, even if it is weird, it will be moral towards humans!” through reasoning of the form “smart → not dumb goals like paperclips”.
There’s structure imposed on what minds actually get created, based on what architectures are used, what humans train the AI on, etc. Just as two vectors can be orthogonal in R^2 while the actual points you plot in the space are correlated.
With what definition? The one most applicable here, dealing with random variables (relative to our subjective uncertainty), says “random variables that are independent”. Independence implies zero correlation, even if the converse doesn’t hold.
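To illustrate that one-way implication, here is a minimal numerical sketch; the variables and numbers below are my own toy setup rather than anything from the thread:

    # Toy illustration (assumed setup, not from the thread): independence implies
    # zero correlation, but zero correlation does not imply independence.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Independent variables: sample correlation is ~0, as expected.
    x = rng.normal(size=n)
    y = rng.normal(size=n)
    print(np.corrcoef(x, y)[0, 1])   # ~0.00

    # Dependent but uncorrelated: y2 is a deterministic function of x,
    # yet Cov(X, X^2) = E[X^3] = 0 for a zero-mean symmetric distribution.
    y2 = x ** 2
    print(np.corrcoef(x, y2)[0, 1])  # ~0.00 despite total dependence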
Just as two vectors can be orthogonal in R^2 while the actual points you plot in the space are correlated.
This is totally false as a matter of math if you use the most common definition of orthogonality in this context. I do agree that what you are saying could be correct if you do not think of orthogonality that way and instead simply look at it in terms of the entries of the vectors. But then you enter the realm of trying to capture “goals” and “beliefs” as specific Euclidean vectors, which I don’t think is the best idea for generating intuition: one of the points of the Orthogonality Thesis seems to be to abstract away from the specific representation you choose for intelligence and values (which can bias you one way or another) and to focus instead on the broad, somewhat informal conclusion.
Ah, I rewrote my comment a few times and lost what I was referencing. I was originally referring to the geometric meaning (as an alternative to your statistical definition): two vectors at a right angle to each other.
But the statistical understanding works, from what I can tell? You have your initial space with extreme uncertainty, and the orthogonality thesis simply states that (intelligence, goals) are not related: you can pair any level of intelligence with any goal. They are independent of each other at this most basic level. This is the orthogonality thesis.
Then, in practice, you condition your probability distribution over that space with your more specific knowledge about what minds will be created, and how they’ll be created. You can consider this as giving you a new space, moving probability around.
As an absurd example: if the height and weight of creatures were uncorrelated in principle, but then we update on “this is an athletic human”, then in that new distribution they are correlated! This is what I was trying to get at with my R^2 example; apologies that I was unclear, since I was still coming at it from a frame of normal geometry. (Think: each axis is an independent normal distribution, but then you condition on some knowledge that restricts them such that they become correlated.)
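To make that toy example concrete, here is a quick simulation; the numbers and the “athletic” cutoff are made up purely for illustration:

    # Toy simulation (made-up numbers): height and weight are sampled
    # independently, then become correlated once we condition on a
    # hypothetical "athletic" selection criterion.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    height = rng.normal(175, 10, size=n)   # cm, drawn independently of weight
    weight = rng.normal(75, 15, size=n)    # kg, drawn independently of height
    print(np.corrcoef(height, weight)[0, 1])   # ~0.00 before conditioning

    # "Athletic" here is an invented criterion: weight close to (height - 100).
    athletic = np.abs(weight - (height - 100)) < 5
    print(np.corrcoef(height[athletic], weight[athletic])[0, 1])  # strongly positive

The unconditional draw plays the role of the orthogonality claim itself; the selection step plays the role of everything we know about how minds actually get built.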
I agree that it is an informal argument and that pinning it down to very detailed specifics isn’t necessary or helpful at this low level; I’m merely attempting to explain why orthogonality works. It is a statement about the basic state of minds before we consider details, and they are orthogonal there, because it is an argumentative response to assumptions of the form “smart → not dumb goals”.