I think we’re seeing that relationship break down right now, specifically the one between compute and intelligence: while it’s difficult to see what’s happening inside the top AI companies, it seems they’re developing new systems and techniques rather than just scaling up the same approaches. In principle, though, I’m not sure it’s possible to know in advance when such a correlation will break down, unless you have a deeper model of the relationship between those correlates (first-order signs) and the higher-level concept in question, which, in this case, we do not.
As for the orthogonality thesis, my first goal was to dispute its logic, but I think there are also some very practical lessons here. From what I can tell, the limit on intelligence created by an inability to form higher-order values kicks in at a pretty basic level, and it relates to the limits of all current machine-learning and LLM-based AI that we see emerge on out-of-distribution tasks. Up until now, we’ve just found ways to procure more data to train on, but if machine agents can never be arbitrarily curious the way humans are, through making higher-order signs our goals, then they’ll never be more generally intelligent than we are.
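To make the out-of-distribution point concrete, here’s a minimal sketch (my own toy illustration, not anything specific to LLMs): a flexible model fit on a narrow slice of data can track the target closely inside that slice and still fail badly outside it, no matter how well it scored in training.

```python
import numpy as np

# Toy illustration of out-of-distribution failure: fit a flexible
# model (a degree-9 polynomial, standing in for any learned function
# approximator) on a narrow training range, then evaluate it both
# inside and outside that range.
rng = np.random.default_rng(0)

# Training data: y = sin(x), sampled only on x in [0, 3]
x_train = rng.uniform(0, 3, 200)
y_train = np.sin(x_train)

# Fit the polynomial to the in-distribution data
coeffs = np.polyfit(x_train, y_train, deg=9)

def mse(x):
    """Mean squared error of the fitted model against sin(x)."""
    pred = np.polyval(coeffs, x)
    return np.mean((pred - np.sin(x)) ** 2)

x_in = np.linspace(0, 3, 100)   # in-distribution test points
x_out = np.linspace(5, 8, 100)  # out-of-distribution test points

print(f"in-distribution MSE:      {mse(x_in):.6f}")   # tiny
print(f"out-of-distribution MSE:  {mse(x_out):.2e}")  # explodes
```

The in-distribution error is near zero while the extrapolated error blows up by many orders of magnitude. Gathering more data from the same range wouldn’t fix this; it just widens the slice, which is the pattern I mean when I say we’ve so far papered over the limit by procuring more training data.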