Tuesday, April 17 at 4:15pm
Frank H.T. Rhodes Hall, 253
We analyze the efficiency of particular Markov chain methods used in Bayesian statistics, giving some of the first meaningful bounds on the convergence rate of such methods as a function of statistical quantities of interest. We first address a popular Gibbs sampling method used for statistical discovery of gene regulatory binding motifs in DNA sequences. We show that, due to multimodality of the posterior distribution, the rate of convergence of the Markov chain often decreases exponentially as a function of the length of the DNA sequence.
This implies that the run time the algorithm needs to attain a fixed accuracy grows exponentially in the sequence length. We then give more general bounds on the convergence rates of Markov chain methods for Bayesian statistics, in the parametric i.i.d. setting, as a function of the number of observations.
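To make the setting concrete, the following is a minimal sketch of the kind of Gibbs site sampler used for motif discovery: each iteration resamples the motif start position in one sequence, conditioning on the current positions in all other sequences. The motif width `w`, the uniform background frequencies, and the pseudocounts here are illustrative assumptions, not details taken from the talk.

```python
import random

def gibbs_motif_sampler(seqs, w, iters=1000, pseudocount=1.0, seed=0):
    """Sketch of a Gibbs site sampler for motif discovery.

    Each sweep resamples one motif start position per sequence from its
    conditional distribution given the motif positions in the other sequences.
    Assumes an ACGT alphabet and a uniform background model.
    """
    rng = random.Random(seed)
    alphabet = "ACGT"
    # Initialize one motif start position per sequence uniformly at random.
    starts = [rng.randrange(len(s) - w + 1) for s in seqs]

    def profile(exclude):
        # Position-specific letter frequencies built from all sequences
        # except the one currently being resampled.
        counts = [{a: pseudocount for a in alphabet} for _ in range(w)]
        for i, s in enumerate(seqs):
            if i == exclude:
                continue
            for j in range(w):
                counts[j][s[starts[i] + j]] += 1
        totals = [sum(c.values()) for c in counts]
        return [{a: counts[j][a] / totals[j] for a in alphabet} for j in range(w)]

    for _ in range(iters):
        for i, s in enumerate(seqs):
            prof = profile(exclude=i)
            # Score every candidate start position in sequence i by its
            # likelihood ratio against a uniform (0.25 per letter) background.
            weights = []
            for pos in range(len(s) - w + 1):
                p = 1.0
                for j in range(w):
                    p *= prof[j][s[pos + j]] / 0.25
                weights.append(p)
            # Sample the new start position in proportion to these weights.
            starts[i] = rng.choices(range(len(weights)), weights=weights)[0]
    return starts

if __name__ == "__main__":
    seqs = ["ATGCATTACGGA", "CCTTACGTGGAT", "GGATTACGACTT"]
    print(gibbs_motif_sampler(seqs, w=5, iters=200))
```

As the abstract notes, when the sequences grow longer the posterior over start positions becomes multimodal, and a single chain of this kind can remain near one mode for a long time, which is the mechanism behind the exponential slowdown in convergence.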