AI researcher
EmpressCelestia
This paper (https://arxiv.org/abs/2010.14701) shows the existence of constant terms in other generative modelling settings and relates them to the entropy of the dataset, beyond which you can’t compress. It also gives empirical evidence that downstream performance on things like “finetuning a generative model to be a classifier” continues to improve as the loss asymptotes toward the constant. From a physics perspective, the constant term and the coefficients on the power-law pieces are “non-universal data,” while the exponent tells you more about the model, training scheme, problem, etc.
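For concreteness, here is a minimal sketch of the power-law-plus-constant form being described, in the usual scaling-law notation; the symbols L_infty, x_0, and alpha are illustrative names, and the paper’s exact parameterization may differ:

\[
  L(x) \;=\; L_{\infty} \;+\; \left(\frac{x_0}{x}\right)^{\alpha}
\]

Here x is the scaled quantity (parameters, data, or compute), L_infty is the irreducible loss the model asymptotes toward (the constant term, read as an estimate of the dataset’s entropy), x_0 fixes the coefficient of the power-law piece, and alpha is the exponent. In the physics framing above, L_infty and x_0 are the non-universal pieces, while alpha is the part expected to say something more universal about the model, training scheme, and problem.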
San Diego SSC Meetup 5/6/2018
The Google group that will give you access to event announcements and the calendar is here: https://groups.google.com/forum/#!forum/san-diego-ratadj
There’s also a Facebook group: https://www.facebook.com/groups/1944288489124568/
I feel like it should just loop back around to being cool at that point, but I guess that’s not how it works.