Interesting, I suppose that does seem somewhat useful, for discussion purposes at the very least. I am curious how a gradient-based method can work without continuous parameters: that is counterintuitive to me. Can you throw out some keywords? For reference, keywords for what I was talking about: Metropolis-adjusted Langevin algorithm (MALA), Stochastic Newton, and any MCMC method with ‘Hessian’ in the name.
They don’t work without continuous parameters. If you have a probabilistic program that includes both discrete and continuous parameters, you can use gradient methods to generate Metropolis-Hastings (MH) proposals for your continuous parameters. I don’t think there are any publications that discuss this yet.
Oh, OK, that makes perfect sense. Breaking inference problems into sub-problems and using different methods on each sub-problem seems like a common technique.
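For concreteness, here is a minimal sketch of that decomposition, assuming a toy target with one binary component indicator k and one continuous parameter x (the model and all names are hypothetical, not taken from any particular system): a Gibbs update handles the discrete parameter, and a MALA step supplies the gradient-based MH proposal for the continuous one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy target: discrete k in {0, 1} and continuous x,
# p(k, x) proportional to w[k] * N(x; mu[k], sigma[k]^2).
w = np.array([0.3, 0.7])
mu = np.array([-2.0, 3.0])
sigma = np.array([1.0, 0.5])

def log_joint(k, x):
    # Unnormalized log density of (k, x).
    return (np.log(w[k])
            - 0.5 * ((x - mu[k]) / sigma[k]) ** 2
            - np.log(sigma[k]))

def grad_log_joint(k, x):
    # Gradient with respect to the continuous parameter only.
    return -(x - mu[k]) / sigma[k] ** 2

def mala_step(k, x, eps):
    # Gradient-based (Langevin) proposal for x, conditioned on the
    # current value of the discrete parameter k.
    drift = x + 0.5 * eps**2 * grad_log_joint(k, x)
    x_new = drift + eps * rng.standard_normal()
    drift_back = x_new + 0.5 * eps**2 * grad_log_joint(k, x_new)
    # log q(x | x_new) - log q(x_new | x) for the MH correction.
    log_q_ratio = (-(x - drift_back) ** 2 + (x_new - drift) ** 2) / (2 * eps**2)
    log_alpha = log_joint(k, x_new) - log_joint(k, x) + log_q_ratio
    return x_new if np.log(rng.uniform()) < log_alpha else x

def gibbs_discrete(x):
    # Exact conditional update for k given x: enumerate both values.
    logp = np.array([log_joint(0, x), log_joint(1, x)])
    p = np.exp(logp - logp.max())
    p /= p.sum()
    return rng.choice(2, p=p)

k, x = 0, 0.0
samples = []
for _ in range(5000):
    k = gibbs_discrete(x)         # discrete sub-problem: Gibbs
    x = mala_step(k, x, eps=0.4)  # continuous sub-problem: MALA
    samples.append((k, x))
```

Each update leaves the joint distribution invariant, so alternating them yields a valid sampler; the gradient only ever enters through the continuous sub-problem, conditioned on the current discrete state.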