Is this part of the reason so many AI researchers think it's cool and enlightened to not believe in highly general architectures?
I do hear the No Free Lunch theorem get thrown around when an architecture fails to solve some problem that its inductive bias doesn't fit, but I think it's invoked more as a vibe than as an actual argument.
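For reference, the Wolpert–Macready statement is (roughly) that for any two search algorithms a_1 and a_2, performance averaged uniformly over all possible objective functions is identical:

    \sum_f P\!\left(d^y_m \mid f, m, a_1\right) = \sum_f P\!\left(d^y_m \mid f, m, a_2\right)

where d^y_m is the sequence of objective values observed after m evaluations. The load-bearing part is the uniform sum over every possible f, which is a much stronger condition than "this architecture's inductive bias doesn't fit this particular task."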