Yet some might recognize the problem of model collapse and its relationship to artificial training data, see how it bears on my speculation, and rule that speculation out as infeasible on complexity and scalability grounds. (And they might be correct. Certainly the scope of what I was describing is impractical at a minimum, and very expensive at a maximum.)
I have no proof yet for what I'm going to say, but: properly distributed training data can be easily tuned with a smaller, more robust dataset, and this would significantly reduce the compute cost of aligning AI systems with an approach similar to ATL.
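To make that claim a bit more concrete, here is a minimal sketch, not the ATL approach itself, of what "tune with a smaller, more robust dataset" could look like: score a large pool of examples with a stand-in quality heuristic, keep a small slice per topic so the original distribution is preserved, and tune on that subset instead of the whole pool. The field names, the scoring heuristic, and the 5% keep fraction are all hypothetical.

```python
# Hedged sketch, not the author's ATL pipeline: distill a large pool into a
# small, distribution-preserving tuning set. Field names and the heuristic
# are hypothetical.
import random
from collections import defaultdict

def quality_score(example: dict) -> float:
    """Stand-in heuristic: mildly prefer longer examples. Swap in a real scorer."""
    return min(len(example["text"]), 2000) / 2000.0

def distilled_subset(pool: list, keep_fraction: float = 0.05) -> list:
    """Keep the top-scoring examples *per topic*, so the small subset keeps
    roughly the same topic distribution as the full pool."""
    by_topic = defaultdict(list)
    for ex in pool:
        by_topic[ex["topic"]].append(ex)
    subset = []
    for examples in by_topic.values():
        examples.sort(key=quality_score, reverse=True)
        k = max(1, int(len(examples) * keep_fraction))
        subset.extend(examples[:k])
    random.shuffle(subset)
    return subset

if __name__ == "__main__":
    topics = ["safety", "coding", "math", "chit-chat"]
    pool = [{"topic": random.choice(topics), "text": "x" * random.randint(50, 3000)}
            for _ in range(10_000)]
    small = distilled_subset(pool)
    print(f"full pool: {len(pool)} examples -> tuning subset: {len(small)} examples")
```

The per-topic split is only there to keep the subset "properly distributed" rather than dominated by whichever topic happens to score best overall.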
I think this aligns with human instinct. While it's not always true, humans seem compelled to constantly work to condense what we know. (An instinctual byproduct of the need for knowledge portability and retention.)
I’m reading a great book right now that talks about this and other things in neuroscience. It has some interesting insights for my work life, not just my interest in artificial intelligence.
As a for instance: I was surprised to learn that someone has worked out the mathematics to measure novelty. (See the related Wired article and the linked paper on the dynamics of correlated novelties.)
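As I understand it, the mechanism in that line of work is an urn model with triggering: drawing a never-seen ball both reinforces it and unlocks a few brand-new balls (the "adjacent possible"), so the number of distinct novelties keeps growing, but sublinearly. Here is a rough toy simulation of that reading; the parameters `rho` and `nu` are illustrative, not taken from the paper.

```python
# Toy urn model with triggering (a hedged reading of "correlated novelties"):
# each draw reinforces the drawn ball; a never-seen ball also unlocks nu new ones.
import random

def simulate(steps: int = 20_000, rho: int = 3, nu: int = 2, seed: int = 0):
    random.seed(seed)
    urn = [0]            # balls identified by integer "colors"; start with one
    next_color = 1
    seen = set()
    history = []         # number of distinct novelties after each draw
    for _ in range(steps):
        ball = random.choice(urn)
        urn.extend([ball] * rho)                 # reinforcement: familiar things recur
        if ball not in seen:                     # a novelty...
            seen.add(ball)
            urn.extend(range(next_color, next_color + nu))  # ...opens the adjacent possible
            next_color += nu
        history.append(len(seen))
    return history

if __name__ == "__main__":
    history = simulate()
    for n in (100, 1_000, 10_000, 20_000):
        print(f"after {n:>6} draws: {history[n - 1]:>5} distinct novelties seen")
```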
I forgot to mention that the principle behind this intuition, which also largely operates in my project, is the Pareto principle.
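A quick back-of-the-envelope illustration of why that matters here, assuming (and it is only an assumption) that the "usefulness" of training examples is heavy-tailed: with Pareto-distributed usefulness, the top 20% of examples carry roughly 80% of the total.

```python
# Hedged Pareto illustration: if example "usefulness" is heavy-tailed,
# a small top slice carries most of the total. alpha ~= 1.16 is the
# textbook shape that yields roughly an 80/20 split.
import random

random.seed(0)
usefulness = [random.paretovariate(1.16) for _ in range(100_000)]
usefulness.sort(reverse=True)
top_20_percent = usefulness[: len(usefulness) // 5]
share = sum(top_20_percent) / sum(usefulness)
print(f"top 20% of examples hold about {share:.0%} of total usefulness")
```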
By the way, about novelties: we are somehow wired to be curious. That very thing terrifies me, because a future AGI will be superior at exercising curiosity. But if that same mechanic can be steered, I see in the novelty aspect a route to alignment, or at least a route to a conceptual approach to it...
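Purely as a sketch of what "steering" that mechanic might mean operationally, and emphatically not a claim about how an AGI would work: in reinforcement learning, curiosity is often approximated with a count-based novelty bonus, and steering it can be as simple as an overseer-controlled coefficient and a hard cap. Everything below (the class name, `beta`, `cap`) is hypothetical.

```python
# Hedged sketch of a steerable curiosity mechanic: a count-based novelty bonus
# whose strength (beta) and ceiling (cap) are set externally. Illustrative only.
from collections import Counter
from math import sqrt

class SteerableCuriosity:
    def __init__(self, beta: float = 0.5, cap: float = 1.0):
        self.counts = Counter()
        self.beta = beta   # how strongly novelty is rewarded (the "steering" knob)
        self.cap = cap     # hard ceiling so curiosity never swamps the task reward

    def bonus(self, state) -> float:
        self.counts[state] += 1
        return min(self.cap, self.beta / sqrt(self.counts[state]))

    def shaped_reward(self, state, task_reward: float) -> float:
        return task_reward + self.bonus(state)

if __name__ == "__main__":
    curiosity = SteerableCuriosity(beta=0.5, cap=0.3)
    for state in ["a", "a", "b", "a", "c"]:
        print(state, round(curiosity.shaped_reward(state, task_reward=0.0), 3))
```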