## Section: New Results

### Streaming Bayesian Inference

Participant: Michael Jordan [correspondent].

Large, streaming data sets are increasingly the norm in science and technology. In the streaming setting, simple descriptive statistics can often be computed with a constant number of operations per data point, without revisiting past data or requiring advance knowledge of future data. But such time and memory guarantees are not generally attainable for the complex, hierarchical models that practitioners often have in mind when they collect large data sets. Significant progress on scalable learning procedures has been made in recent years, but the underlying models remain simple and the inferential framework is generally non-Bayesian. The advantages of the Bayesian paradigm (e.g., hierarchical modeling, coherent treatment of uncertainty) currently seem out of reach in the Big Data setting.

An exception to this statement is provided by Hoffman et al. (2010),
who have shown that a class of approximation methods known as
*variational Bayes* (VB) can be usefully deployed for large-scale
data sets. They have applied their approach, referred to as
*stochastic variational inference* (SVI), to the domain of topic
modeling of document collections, an area with a major need for
scalable inference algorithms. VB traditionally uses the variational
lower bound on the marginal likelihood as an objective function, and
the idea of SVI is to apply a variant of stochastic gradient descent
to this objective. Notably, this objective is based on the conceptual
existence of a full data set involving D data points (i.e., documents
in the topic model setting), for a fixed value of D. Although the
stochastic gradient is computed for a single, small subset of data
points (documents) at a time, the posterior being targeted is a posterior
for D data points. This value of D must be specified in advance and is
used by the algorithm at each step. Posteriors for D' data points, for
D' not equal to D, are not obtained as part of the analysis.
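To make concrete how the fixed data-set size D enters every step, the following is a minimal sketch of an SVI-style update for a single global variational parameter in a conjugate exponential-family model (here a Beta-Bernoulli model); the function name, step-size schedule, and parameterization are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def svi_step(lam, minibatch_stats, D, prior, t, kappa=0.7, tau=1.0):
    """One SVI-style update of a global variational parameter `lam`.

    The full-data-set size D enters every step: the minibatch's mean
    sufficient statistics are rescaled by D, so the stochastic gradient
    targets the posterior for exactly D data points. (Illustrative
    sketch for a conjugate exponential-family model.)
    """
    rho = (t + tau) ** (-kappa)                            # decaying step size
    scaled = prior + D * np.mean(minibatch_stats, axis=0)  # rescaled "full-data" estimate
    return (1.0 - rho) * lam + rho * scaled                # convex-combination update

# Beta-Bernoulli example: a minibatch of two "heads" observations,
# pretending the full corpus has D = 1000 points.
prior = np.array([1.0, 1.0])
lam = np.array([5.0, 5.0])
stats = np.array([[1.0, 0.0], [1.0, 0.0]])
lam = svi_step(lam, stats, D=1000, prior=prior, t=0)
```

Note that posteriors for a different number of points D' cannot be read off from `lam`; D is baked into every update.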

We view this lack of a link between the number of documents processed thus far and the posterior being targeted as undesirable in many settings involving streaming data. In this project we aim to develop an approximate Bayesian inference algorithm that is scalable like SVI but is also a truly streaming procedure, in that it yields an approximate posterior after each processed collection of D' data points, not just after a pre-specified "final" number of data points D. To that end, we return to the classical perspective of Bayesian updating, where the recursive application of Bayes' theorem provides a sequence of posteriors, not a sequence of approximations to a fixed posterior. To this classical recursive perspective we bring the VB framework; our updates need not be exact Bayesian updates but may instead be approximations such as VB.
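The recursive perspective can be sketched in a few lines: the (approximate) posterior after batch b serves as the prior for batch b + 1, so a valid posterior is in hand after every batch, with no total count D anywhere. The conjugate Beta-Bernoulli model below admits an exact update; in general the exact step would be replaced by an approximation such as a VB update. Names here are our assumptions.

```python
import numpy as np

def streaming_update(post, batch):
    """One streaming Bayesian update for a Beta-Bernoulli model.

    The previous posterior plays the role of the prior for the next
    batch: we simply add the batch's sufficient statistics (number of
    heads and tails) to the current Beta parameters.
    """
    batch = np.asarray(batch, dtype=float)
    heads = batch.sum()
    return post + np.array([heads, batch.size - heads])

# Recursive application: an intermediate posterior exists after every batch.
post = np.array([1.0, 1.0])           # Beta(1, 1) prior
for batch in ([1, 1, 0], [1, 0]):     # stream of 0/1 observations, in batches
    post = streaming_update(post, batch)
# post now summarizes exactly the 5 points seen so far: Beta(4, 3)
```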

Although the empirical success of SVI is the main motivation for our work, we are also motivated by recent developments in computer architectures, which permit distributed and asynchronous computations in addition to streaming computations. A streaming VB algorithm naturally lends itself to distributed and asynchronous implementations.
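One way to see why streaming updates distribute naturally: under a conjugate exponential-family assumption, per-worker posteriors computed from disjoint data shards can be merged by summing their natural-parameter increments over the shared prior, in any order and asynchronously. The sketch below illustrates this combination rule under that assumption; the function name is ours.

```python
import numpy as np

def combine(prior, worker_posts):
    """Merge per-worker approximate posteriors into one posterior.

    Each worker starts from the shared prior and updates on its own
    data shard; since natural parameters of independent shards add,
    the combined posterior is the prior plus the per-worker increments.
    (Sketch under a conjugate exponential-family assumption.)
    """
    prior = np.asarray(prior, dtype=float)
    return prior + sum(np.asarray(p, dtype=float) - prior
                       for p in worker_posts)

# Two workers, each starting from the same Beta(1, 1) prior:
# worker A saw 2 heads / 1 tail, worker B saw 1 head / 2 tails.
combined = combine([1.0, 1.0], [[3.0, 2.0], [2.0, 3.0]])
```

The result equals the posterior obtained by pooling all the data on one machine, which is what makes asynchronous merging safe in this conjugate setting.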