the plausibility of this strategy, to 'endlessly trade computation for better performance' and then run very long/parallelized runs, is precisely one of the scariest aspects of automated ML. it's even more worrying that this is exactly what some people in the field are gunning for, especially when they're directly contributing to the top auto ML scaffolds. although, everything else equal, it might be better to have an early warning sign than a large inference overhang: https://x.com/zhengyaojiang/status/1844756071727923467
(cross-posted from https://x.com/BogdanIonutCir2/status/1844787728342245487)