In this context, we’re imitating some probability distribution, and the perturbation means we’re slightly adjusting the probabilities, raising some of them and lowering others. The adjustment is small in a multiplicative sense, not an additive sense, hence the use of exponentials. Just as a silly example, maybe I’m training on MNIST digits, but I want the 2′s to make up 30% of the distribution rather than just 10%. The math described above would let me train a GAN that generates 2′s 30% of the time, as in the sketch below.
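A minimal sketch of that multiplicative update, assuming the perturbation is a per-class log-weight (the name `dh` and the exact shift are illustrative, not from the original discussion):

```python
import numpy as np

# Base distribution over MNIST classes: each digit appears 10% of the time.
p = np.full(10, 0.10)

# Hypothetical per-class perturbation dh: a log-weight applied inside an
# exponential. The shift for class 2 is chosen (as a log-odds shift) so
# that, after renormalization, 2's make up 30% of the mass.
dh = np.zeros(10)
dh[2] = np.log(0.30 / 0.70) - np.log(0.10 / 0.90)

# Multiplicative update: new probabilities proportional to p * exp(dh).
w = p * np.exp(dh)
p_new = w / w.sum()

print(p_new[2])     # ~0.30
print(p_new.sum())  # 1.0
```

The exponential keeps every reweighted value positive, so renormalizing always yields a valid distribution; an additive perturbation would not guarantee that.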
I’m not sure what is meant by “the difference from a gradient in SGD”, so I’d need more information to say whether it is different from a perturbation or not. But probably it’s different: perturbations in the above sense are perturbations in the probability distribution over the training data.
Thanks. Your ΔH looked like ∇Q from gradient descent, but you don’t intend to take derivatives, nor to maximize over x, so I was mistaken.