Nested sampling algorithm

The nested sampling algorithm is a computational approach to the Bayesian statistics problems of comparing models and generating samples from posterior distributions. It was developed in 2004 by physicist John Skilling.[1]

Background

Bayes' theorem can be applied to a pair of competing models <math>M_1</math> and <math>M_2</math> for data <math>D</math>, one of which may be true (though which one is unknown) but which cannot both be true simultaneously. The posterior probability for <math>M_1</math> may be calculated as:

<math display="block">
P(M_1 \mid D) = \frac{P(D \mid M_1)\, P(M_1)}{P(D)}
= \frac{P(D \mid M_1)\, P(M_1)}{P(D \mid M_1)\, P(M_1) + P(D \mid M_2)\, P(M_2)}
= \frac{1}{1 + \dfrac{P(D \mid M_2)}{P(D \mid M_1)} \dfrac{P(M_2)}{P(M_1)}}
</math>

The prior probabilities <math>P(M_1)</math> and <math>P(M_2)</math> are already known, as they are chosen by the researcher ahead of time. However, the remaining Bayes factor <math>P(D \mid M_2)/P(D \mid M_1)</math> is not so easy to evaluate, since in general it requires marginalizing over nuisance parameters. Generally, <math>M_1</math> has a set of parameters that can be grouped together and called <math>\theta</math>, and <math>M_2</math> has its own vector of parameters that may be of different dimensionality, but is still termed <math>\theta</math>. The marginalization for <math>M_1</math> is

<math display="block">
P(D \mid M_1) = \int d\theta\, P(D \mid \theta, M_1)\, P(\theta \mid M_1)
</math>

and likewise for <math>M_2</math>. This integral is often analytically intractable, and in these cases it is necessary to employ a numerical algorithm to find an approximation. The nested sampling algorithm was developed by John Skilling specifically to approximate these marginalization integrals, and it has the added benefit of generating samples from the posterior distribution <math>P(\theta \mid D, M_1)</math>.[2] It is an alternative to methods from the Bayesian literature[3] such as bridge sampling and defensive importance sampling.
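
For illustration, once the two marginal likelihoods have been estimated (for example by nested sampling), the posterior model probability follows directly from the expression above. The following minimal Python sketch assumes hypothetical evidence values and equal prior model probabilities; none of these numbers come from the article.

<syntaxhighlight lang="python">
import math

# Hypothetical log-evidences, e.g. as returned by two nested sampling runs.
log_Z1 = -42.0   # log P(D|M1)  (illustrative value)
log_Z2 = -45.0   # log P(D|M2)  (illustrative value)
prior_M1, prior_M2 = 0.5, 0.5        # prior model probabilities chosen a priori

bayes_factor_21 = math.exp(log_Z2 - log_Z1)          # P(D|M2) / P(D|M1)
posterior_M1 = 1.0 / (1.0 + bayes_factor_21 * prior_M2 / prior_M1)
print(f"B21 = {bayes_factor_21:.4f}, P(M1|D) = {posterior_M1:.4f}")
</syntaxhighlight>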

Here is a simple version of the nested sampling algorithm, followed by a description of how it computes the marginal probability density <math>Z = P(D \mid M)</math>, where <math>M</math> is <math>M_1</math> or <math>M_2</math>:

Start with N points θ1, ..., θN sampled from the prior; set Z := 0 and X0 := 1.
for i = 1 to j do        % The number of iterations j is chosen by guesswork.
    Li := min(current likelihood values of the points);
    Xi := exp(-i/N);
    wi := Xi-1 - Xi;
    Z  := Z + Li·wi;
    Save the point with least likelihood as a sample point with weight wi.
    Update the point with least likelihood with some Markov chain Monte Carlo steps
    according to the prior, accepting only steps that keep the likelihood above Li.
end
return Z;

At each iteration, <math>X_i</math> is an estimate of the amount of prior mass covered by the hypervolume in parameter space of all points with likelihood greater than <math>L_i</math>. The weight factor <math>w_i</math> is an estimate of the amount of prior mass that lies between the two nested hypersurfaces <math>\{\theta : P(D \mid \theta, M) = P(D \mid \theta_{i-1}, M)\}</math> and <math>\{\theta : P(D \mid \theta, M) = P(D \mid \theta_i, M)\}</math>. The update step <math>Z := Z + L_i w_i</math> accumulates the sum <math>\textstyle\sum_i L_i w_i</math>, which numerically approximates the integral

<math display="block">
P(D \mid M) = \int P(D \mid \theta, M)\, P(\theta \mid M)\, d\theta = \int P(D \mid \theta, M)\, dP(\theta \mid M)
</math>

In the limit <math>j \to \infty</math>, this estimator has a positive bias of order <math>1/N</math>,[4] which can be removed by using <math>(1 - 1/N)</math> instead of the <math>\exp(-1/N)</math> shrinkage factor in the above algorithm.

The idea is to subdivide the range of <math>f(\theta) = P(D \mid \theta, M)</math> and estimate, for each interval <math>[f(\theta_{i-1}), f(\theta_i)]</math>, how likely it is a priori that a randomly chosen <math>\theta</math> would map to this interval. This can be thought of as a Bayesian's way to numerically implement Lebesgue integration.[5]
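
The following is a minimal, self-contained Python sketch of the loop above. The toy problem (a two-dimensional Gaussian likelihood over a uniform prior box), the random-walk replacement step, and all variable names and step sizes are illustrative assumptions, not part of Skilling's original code.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
ndim, lo, hi = 2, -5.0, 5.0                      # uniform prior box [-5, 5]^2

def log_likelihood(theta):
    """Toy likelihood: standard normal centred at the origin."""
    return -0.5 * np.sum(theta ** 2) - 0.5 * ndim * np.log(2.0 * np.pi)

def constrained_walk(theta, log_l_min, steps=20, scale=0.5):
    """Random-walk steps with respect to the (uniform) prior, accepting only
    moves that keep the likelihood above log_l_min."""
    for _ in range(steps):
        prop = theta + scale * rng.normal(size=ndim)
        if np.all((prop > lo) & (prop < hi)) and log_likelihood(prop) > log_l_min:
            theta = prop
    return theta

N, iterations = 100, 1000                        # N live points; j chosen by guesswork
live = rng.uniform(lo, hi, size=(N, ndim))       # start with N points from the prior
live_logl = np.array([log_likelihood(t) for t in live])

Z, X_prev = 0.0, 1.0                             # running evidence; X_0 = 1
samples, weights = [], []                        # discarded points and their weights

for i in range(1, iterations + 1):
    worst = int(np.argmin(live_logl))            # L_i := least likelihood
    L_i = live_logl[worst]
    X_i = np.exp(-i / N)                         # estimated remaining prior mass
    w_i = X_prev - X_i                           # w_i := X_{i-1} - X_i
    Z += np.exp(L_i) * w_i                       # Z := Z + L_i * w_i
    samples.append(live[worst].copy())
    weights.append(np.exp(L_i) * w_i)
    # Replace the worst point: copy a random survivor and evolve it by MCMC
    # under the hard constraint L > L_i.
    survivor = rng.integers(N - 1)
    if survivor >= worst:
        survivor += 1
    live[worst] = constrained_walk(live[survivor].copy(), L_i)
    live_logl[worst] = log_likelihood(live[worst])
    X_prev = X_i

# Standard final correction: add the contribution of the remaining live points.
Z += X_prev * np.mean(np.exp(live_logl))
print("evidence estimate Z =", Z)                # true value for this toy problem: ~0.01
</syntaxhighlight>

The saved points and weights can be reused as weighted posterior samples, which is the "added benefit" mentioned above.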

Choice of MCMC algorithm

The original procedure outlined by Skilling (given above in pseudocode) does not specify which Markov chain Monte Carlo algorithm should be used to generate new points with higher likelihood.

Skilling's own code examples (such as one in Sivia and Skilling (2006),[6] available on Skilling's website) choose a random existing point and select a nearby point at a random distance from it; if the likelihood is better, the new point is accepted, else it is rejected and the process is repeated. Mukherjee et al. (2006)[7] found higher acceptance rates by selecting points randomly within an ellipsoid drawn around the existing points; this idea was refined into the MultiNest algorithm,[8] which handles multimodal posteriors better by grouping points into likelihood contours and drawing an ellipsoid for each contour.
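
As a rough illustration of the ellipsoidal idea (not the actual MultiNest implementation), one can fit an ellipsoid to the current live points and draw candidates uniformly inside it until one exceeds the current likelihood threshold. The enlargement factor and function names below are assumptions made for this sketch.

<syntaxhighlight lang="python">
import numpy as np

def sample_within_ellipsoid(live_points, log_likelihood, log_l_min,
                            enlarge=1.2, rng=None):
    """Draw a candidate uniformly from an ellipsoid enclosing the live points,
    repeating until its likelihood exceeds log_l_min."""
    rng = rng or np.random.default_rng()
    pts = np.asarray(live_points, dtype=float)
    ndim = pts.shape[1]
    mean = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    # Scale the covariance ellipsoid so that it encloses every live point.
    diff = pts - mean
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    chol = np.linalg.cholesky(cov) * np.sqrt(d2.max()) * enlarge
    while True:
        v = rng.normal(size=ndim)
        v /= np.linalg.norm(v)                   # uniform direction on the sphere
        r = rng.uniform() ** (1.0 / ndim)        # uniform radius inside the unit ball
        candidate = mean + chol @ (r * v)
        if log_likelihood(candidate) > log_l_min:
            return candidate
</syntaxhighlight>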

Implementations

Example implementations demonstrating the nested sampling algorithm are publicly available for download, written in several programming languages.

Applications

Since nested sampling was proposed in 2004, it has been applied in many areas of astronomy. One paper suggested using nested sampling for cosmological model selection and object detection, as it "uniquely combines accuracy, general applicability and computational feasibility."[7] A refinement of the algorithm to handle multimodal posteriors has been suggested as a means to detect astronomical objects in extant datasets.[10] Nested sampling has also been used in finite element model updating, where the algorithm is used to choose an optimal finite element model; this approach has been applied to structural dynamics.[12] The method has also been used in materials modeling, where it can be used to calculate the partition function from statistical mechanics and derive thermodynamic properties.[13]

Dynamic nested sampling

Dynamic nested sampling is a generalisation of the nested sampling algorithm in which the number of samples taken in different regions of the parameter space is dynamically adjusted to maximise calculation accuracy.[14] This can lead to large improvements in accuracy and computational efficiency when compared to the original nested sampling algorithm, in which the allocation of samples cannot be changed and often many samples are taken in regions which have little effect on calculation accuracy.
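
As a rough, hypothetical illustration of the allocation idea (not the algorithm of any particular package), one can examine how much each dead point from an initial run contributes to the evidence sum and target additional live points at the log-likelihood range where that contribution is concentrated. The function name and the 80% threshold below are assumptions for the sketch.

<syntaxhighlight lang="python">
import numpy as np

def target_logl_range(log_L, weights, frac=0.80):
    """Return the log-likelihood interval containing `frac` of the total
    importance weight; extra live points would be concentrated there."""
    log_L = np.asarray(log_L, dtype=float)
    w = np.asarray(weights, dtype=float)         # e.g. L_i * w_i from a first run
    order = np.argsort(log_L)
    cum = np.cumsum(w[order]) / w.sum()
    lo_idx = int(np.searchsorted(cum, (1.0 - frac) / 2.0))
    hi_idx = min(int(np.searchsorted(cum, 1.0 - (1.0 - frac) / 2.0)), len(log_L) - 1)
    return log_L[order][lo_idx], log_L[order][hi_idx]
</syntaxhighlight>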

Several dynamic nested sampling software packages are publicly available.

Dynamic nested sampling has been applied to a variety of scientific problems, including analysis of gravitational waves,[17] mapping distances in space[18] and exoplanet detection.[19]

See also

References

Template:Reflist