Today I attended the first of two talks in a two-part mini-workshop on Variational Inference. It’s interesting to think about from the perspective of my recent musings on science-y vs. engineering mindsets, because it highlighted the importance of engineering/algorithmic progress in widening Bayesian methods’ applicability.
The presenter, a fairly well known figure in probabilistic ML who has developed some well known statistical inference algorithms, argued that part of the reason so much time was historically spent debating philosophical issues was that Bayesian inference simply wasn’t computationally tractable until Gelfand & Smith popularized Gibbs sampling in 1990.
To be clear, the type of progress I’m talking about is still “scientific” in the sense that it mostly involves applied math and finding good ways to approximate posterior distributions. But it’s “engineering” in the sense that it’s the messy sort of work I talked about in my other post, where messy means that many of the methods don’t have a good theoretical backing and involve making questionable (at least ex ante) statistical assumptions. The counter, of course, is that these methods may just lack a theoretical backing *yet*; one may still emerge in the future.
I’ll probably have more to say about this once the workshop’s over, but I partly just wanted to record my thoughts while they were fresh.