Summary

Expectation propagation (EP) is a popular technique for approximate Bayesian inference, although it has arguably lost favor in recent years to variational inference. Whereas variational inference was made scalable to massive data sets through stochastic variational inference (Hoffman et al., 2013), and thus attractive to machine learning practitioners, expectation propagation is limited by a memory overhead which scales with the number of data points. This paper proposes an extension of EP which removes this linear memory requirement and thus enables it to scale to massive data.

Let $\mathbf{x} = \{x_n\}_{n=1}^N$ be a data set split into $N$ components, and $p(\mathbf{x} \mid \theta) = \prod_{n=1}^N p(x_n \mid \theta)$ be a model likelihood with prior $p(\theta)$. We aim to learn an approximating distribution $q(\theta)$ such that

$$q(\theta) \propto p(\theta) \prod_{n=1}^N f_n(\theta) \approx p(\theta \mid \mathbf{x}),$$

where each approximating factor $f_n(\theta)$ targets the likelihood term $p(x_n \mid \theta)$.

Following the way Zoubin Ghahramani motivates EP, we would like to minimize $\mathrm{KL}(p(\theta \mid \mathbf{x}) \,\|\, q(\theta))$. (I have no clue who first motivated EP this way; Zoubin’s slides were how I first learned it.) This can be interpreted as a reverse analog of the variational objective: whereas VI is restricted to the support of the posterior and is “mode-seeking” (Minka, 2005), this KL forces $q(\theta)$ to cover at least the support of the posterior and is “inclusive” (Frey, 2000). Minimizing this KL could proceed by iteratively updating the factors $f_n(\theta)$. Denoting the leave-one-out posterior $p_{-n}(\theta) \propto p(\theta \mid \mathbf{x}) / p(x_n \mid \theta)$, we iteratively minimize

$$\mathrm{KL}\big(p(\theta \mid \mathbf{x}) \,\big\|\, p_{-n}(\theta)\, f_n(\theta)\big)$$

for each $n$.

However, $\mathrm{KL}(p(\theta \mid \mathbf{x}) \,\|\, q(\theta))$ is intractable (and so is the iterative version), as it requires calculating the posterior density; therefore we minimize an approximation to it. Consider approximating the leave-one-out posterior with the cavity distribution $q_{-n}(\theta) \propto q(\theta) / f_n(\theta)$, which is the approximating distribution without the $n$th likelihood approximation. Define the tilted distribution $\tilde{p}_n(\theta) \propto q_{-n}(\theta)\, p(x_n \mid \theta)$, which is the corresponding cavity distribution with the true likelihood factor plugged in. EP iteratively solves

$$\min_{f_n}\; \mathrm{KL}\big(\tilde{p}_n(\theta) \,\big\|\, q_{-n}(\theta)\, f_n(\theta)\big),$$

which for exponential-family $q$ reduces to moment matching.

This objective is a good proxy to $\mathrm{KL}(p(\theta \mid \mathbf{x}) \,\|\, p_{-n}(\theta)\, f_n(\theta))$ if the true likelihood factor contributes to the leave-one-out posterior in the same way as it does in the tilted distribution.
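
To make the procedure concrete, here is a minimal sketch of the EP loop, assuming a one-dimensional parameter, a Gaussian prior, and Gaussian site approximations, with the tilted moments computed by quadrature on a grid. The function names and grid-based setup are my own illustration, not code from the paper.

```python
# Minimal EP sketch: Gaussian prior, Gaussian sites, quadrature moments.
import numpy as np

def ep(x, log_lik, prior_mean=0.0, prior_var=1.0, n_iters=10):
    N = len(x)
    # Site natural parameters (precision r, precision-mean b). Note the
    # O(N) storage of per-point sites -- the cost SEP later removes.
    r_sites, b_sites = np.zeros(N), np.zeros(N)
    r, b = 1.0 / prior_var, prior_mean / prior_var  # q starts at the prior
    grid = np.linspace(-10.0, 10.0, 2001)
    for _ in range(n_iters):
        for n in range(N):
            # Cavity distribution q_{-n}: remove site n from q.
            r_cav, b_cav = r - r_sites[n], b - b_sites[n]
            if r_cav <= 0:
                continue  # skip invalid cavities
            # Tilted distribution: cavity times the true likelihood factor.
            log_tilted = -0.5 * r_cav * grid**2 + b_cav * grid + log_lik(x[n], grid)
            w = np.exp(log_tilted - log_tilted.max())
            w /= w.sum()
            # Moment matching: set the new q to the tilted mean/variance.
            mu = np.sum(w * grid)
            var = np.sum(w * (grid - mu) ** 2)
            r_new, b_new = 1.0 / var, mu / var
            # The new site is (new posterior) / (cavity) in natural params.
            r_sites[n], b_sites[n] = r_new - r_cav, b_new - b_cav
            r, b = r_new, b_new
    return b / r, 1.0 / r  # posterior mean and variance

# Usage: Gaussian observations with unit noise, where EP is exact.
mean, var = ep(np.array([0.5, 1.2, -0.3, 0.8]),
               lambda xn, theta: -0.5 * (xn - theta) ** 2)
```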

Calculating the cavity distribution for each $n$ requires explicit storage of the approximating factors $\{f_n(\theta)\}_{n=1}^N$. This implies that EP has an $\mathcal{O}(N)$ memory requirement, which prevents it from scaling to large data sets. To solve this, the authors consider storage of only a single factor $f(\theta)$, with $q(\theta) \propto p(\theta) f(\theta)^N$; rather than trying to capture the individual effect of the likelihood factors, the single factor captures only the average effect. Stochastic expectation propagation (SEP) uses the same iterative KL objective to learn the next approximating factor $f_n(\theta)$, and additionally updates $f(\theta) \leftarrow f(\theta)^{1 - 1/N} f_n(\theta)^{1/N}$. By storing only the average effect of the likelihood factors, SEP approximates the true cavity distributions with an averaged cavity distribution $q_{-1}(\theta) \propto q(\theta) / f(\theta)$.
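
For contrast, here is the SEP variant of the sketch above under the same illustrative assumptions: a single averaged site replaces the $N$ per-point sites, so storage is constant in $N$.

```python
# SEP sketch: one averaged site (r_f, b_f) instead of N per-point sites.
import numpy as np

def sep(x, log_lik, prior_mean=0.0, prior_var=1.0, n_iters=10):
    N = len(x)
    r_f, b_f = 0.0, 0.0                                # single averaged site
    r0, b0 = 1.0 / prior_var, prior_mean / prior_var   # prior natural params
    grid = np.linspace(-10.0, 10.0, 2001)
    for _ in range(n_iters):
        for n in np.random.permutation(N):             # random sweep order
            r, b = r0 + N * r_f, b0 + N * b_f          # q = prior * f(theta)^N
            r_cav, b_cav = r - r_f, b - b_f            # averaged cavity: q / f
            log_tilted = -0.5 * r_cav * grid**2 + b_cav * grid + log_lik(x[n], grid)
            w = np.exp(log_tilted - log_tilted.max())
            w /= w.sum()
            mu = np.sum(w * grid)
            var = np.sum(w * (grid - mu) ** 2)
            r_new, b_new = 1.0 / var, mu / var
            r_n, b_n = r_new - r_cav, b_new - b_cav    # implicit local site
            # f <- f^(1 - 1/N) * f_n^(1/N), i.e. average f_n into f.
            r_f += (r_n - r_f) / N
            b_f += (b_n - b_f) / N
    r, b = r0 + N * r_f, b0 + N * b_f
    return b / r, 1.0 / r  # posterior mean and variance
```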

Discussion

I’m a big fan of the paper. The exposition on EP, assumed density filtering, and message passing is excellent, and it naturally motivates SEP following the technical developments of EP. The use of an averaged cavity distribution is an unfortunate compromise, but it seems necessary in order to scale local approximation techniques such as EP.

The authors make a cool connection to SVI. If the local minimizations in SEP are instead doing the reverse KL, then this is a form of “stochastic” variational message passing. Moreover, if the update for the single factor follows $f(\theta) \leftarrow f(\theta)^{1-\epsilon} f_n(\theta)^{\epsilon}$, where $\epsilon$ uses a Robbins-Monro schedule, then this is the same as stochastic variational inference with a mean-field approximation. An interesting extension then is to apply the equivalent of an adaptive learning rate in stochastic approximations for the geometric scale factor $\epsilon$ when updating $f(\theta)$.
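
As a sketch of that generalization, the fixed $1/N$ step in the SEP sketch above can be swapped for a Robbins-Monro step size; the schedule below is a standard choice of mine, not one prescribed by the paper.

```python
# Hypothetical damped version of the SEP site update, with a
# Robbins-Monro schedule: sum(eps_t) = inf and sum(eps_t^2) < inf.

def step_size(t, t0=1.0, kappa=0.9):
    """eps_t = (t + t0)^(-kappa), valid for kappa in (0.5, 1]."""
    return (t + t0) ** (-kappa)

def damped_site_update(r_f, b_f, r_n, b_n, eps):
    """f <- f^(1 - eps) * f_n^eps, written in natural parameters."""
    return r_f + eps * (r_n - r_f), b_f + eps * (b_n - b_f)
```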

Hogwild (Niu et al., 2011) and asynchronous versions also naturally follow, which is cool. I also think it would have been useful for the paper to fight for the streaming data stance. That is, it’s trivial for stochastic EP to learn on streaming data, whereas it is impossible for vanilla EP, whose per-point factor storage grows without bound as data streams in.

The move to $\mathcal{O}(1)$ memory with SEP does require some compromises in contrast to SVI, unfortunately. For instance, unlike the move from VI to SVI, the move from EP to SEP compromises the approximation quality in order to make it scalable. That is, SVI still performs the same global minimization of $\mathrm{KL}(q(\theta) \,\|\, p(\theta \mid \mathbf{x}))$, and it shares nice properties through the theory of stochastic approximations (Robbins and Monro, 1951), whereas SEP does not preserve the same local minimizations as EP. You can imagine scenarios where an averaged likelihood approximator will not work, or even the “minibatch” extension which stores $K$ averaged factors, each responsible for a disjoint subset of the data. Additionally, it’s not immediately clear how to extend SEP to approximations which preserve dependence between the local factors.

Minor comment: like much of Tom Minka’s work, the authors note that the local KL minimizations can more generally use $\alpha$-divergence measures. I still think this is a little restricting, as in general you just want some approximation; therefore something like a Laplace approximation (Smola et al., 2004) is valid as well, and similarly, nested EP (Riihimaki et al., 2013) fits naturally under this motivation.

I’m excited to hear more about stochastic EP from their poster and spotlight at NIPS! (The same authors have a number of interesting extensions of this paper at a few workshops as well.)