I like that this got posted here and I'd like to see more from you, and it had a lot of downvotes when I read it, so I upvoted. That said, I heavily disagree: the bitter lesson isn't the only lesson to take home. Scaling compute on your hashmap won't produce intelligence.
And what happens when someone runs a self-play algorithm on reality, anyway? I recognize they don't yet scale to physical environments, but why wouldn't they once one can reliably train in physical environments, given the various other components you'd need to include to get them to work? Sure, perhaps you'd need a few import forklift statements. And of course you'd need to give them a game to win as well, and then, uh oh, it turns out humans can't keep up at that game anymore. The obvious game to apply them to would have very concerning outcomes.
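For concreteness, here's roughly the shape I mean, as a toy sketch only: the game, the agents, and the update rule are all made-up placeholders (ToyGame, Agent, and reinforce are names I'm inventing here), and the point is just that once you have a game supplying a win/loss signal, the same learner on both sides can train against itself.

import random

class ToyGame:
    """Toy zero-sum game standing in for the environment: each round,
    whichever side plays the higher number scores a point."""
    ROUNDS = 10

    def play(self, agent_a, agent_b):
        score = [0, 0]
        history = ([], [])
        for _ in range(self.ROUNDS):
            moves = (agent_a.act(), agent_b.act())
            history[0].append(moves[0])
            history[1].append(moves[1])
            if moves[0] != moves[1]:
                score[0 if moves[0] > moves[1] else 1] += 1
        winner = 0 if score[0] >= score[1] else 1
        return winner, history

class Agent:
    N_ACTIONS = 3

    def __init__(self):
        # Preference weights over the available actions.
        self.weights = [1.0] * self.N_ACTIONS

    def act(self):
        return random.choices(range(self.N_ACTIONS), weights=self.weights)[0]

    def reinforce(self, actions, won):
        # Placeholder update rule: nudge the weights of the actions taken,
        # up on a win, down on a loss, then renormalize.
        scale = 1.05 if won else 0.97
        for a in actions:
            self.weights[a] *= scale
        total = sum(self.weights)
        self.weights = [w * self.N_ACTIONS / total for w in self.weights]

agents = [Agent(), Agent()]
game = ToyGame()

# The self-play loop: the same kind of learner on both sides, with the
# outcome of the game as the only training signal.
for episode in range(2000):
    winner, history = game.play(*agents)
    for player, agent in enumerate(agents):
        agent.reinforce(history[player], won=(player == winner))

print("learned action preferences:", [agent.weights for agent in agents])

Swap ToyGame for an interface to the physical world (the import forklift part) and the loop itself doesn't change; the open question is just which game supplies the reward, which is exactly what worries me.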