So if I understand your point correctly, you expect that something like “give me more compute” will at some point fail to deliver more intelligence, since intelligence isn’t just “more compute”?
Yes. In one sense that is trivial: there are plenty of algorithms you can run on extremely large compute that do not lead to intelligent behavior. In another sense it is non-trivial, because all the algorithms we have that essentially “create maps”, i.e. representations of some reality, need the domain they are supposed to represent or learn specified for them. To create arbitrary domains, an agent needs to make second-order signs its goal (see my last post).
Then I wonder, at what point does that matter? Or more specifically, when does it matter in the context of AI risk?
Clearly there is some relationship between something like “more compute” and “more intelligence”, since something too simple cannot be intelligent, but I don’t know where that relationship breaks down. Evolution clearly found a path for optimizing intelligence via proxy in our brains, and I think the fear is that you may be able to go quite a bit further than human-level intelligence before the extra compute fails to deliver the more meaningful intelligence described in your post.
It seems premature to reject the orthogonality thesis before the strategy of optimizing for things that “obviously bring more intelligence” actually starts to break down.
I think we’re seeing where that relationship between compute and intelligence is breaking down right now: while it’s difficult to see what’s happening inside the top AI companies, it seems like they’re developing new systems and techniques rather than just scaling up the same stuff. In principle, though, I’m not sure it’s possible to know in advance when such a correlation will break down, unless you have a deeper model of the relationship between those correlates (first-order signs) and the higher-level concept in question, which in this case we do not.
As for the orthogonality thesis, my first goal was to dispute its logic, but I think there are also some very practical lessons here. From what I can tell, the limit on intelligence created by an inability to create higher-order values kicks in at a pretty basic level, and it relates to the limits we see emerge in all current machine-learning and LLM-based AI on out-of-distribution tasks. Up till now we’ve just found ways to procure more data to train on, but if machine agents can never be arbitrarily curious the way humans are, through making higher-order signs their goals, then they’ll never be more generally intelligent than us.