Particle filter


Particle filters, also known as sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions to filtering problems for nonlinear state-space systems, such as those arising in signal processing and Bayesian statistical inference.[1] The filtering problem consists of estimating the internal states in dynamical systems when partial observations are made and random perturbations are present in the sensors as well as in the dynamical system. The objective is to compute the posterior distributions of the states of a Markov process, given the noisy and partial observations. The term "particle filters" was first coined in 1996 by Pierre Del Moral in reference to mean-field interacting particle methods used in fluid mechanics since the beginning of the 1960s.[2] The term "Sequential Monte Carlo" was coined by Jun S. Liu and Rong Chen in 1998.[3]

Particle filtering uses a set of particles (also called samples) to represent the posterior distribution of a stochastic process given the noisy and/or partial observations. The state-space model can be nonlinear and the initial state and noise distributions can take any form required. Particle filter techniques provide a well-established methodology[2][4][5] for generating samples from the required distribution without requiring assumptions about the state-space model or the state distributions. However, these methods do not perform well when applied to very high-dimensional systems.

Particle filters update their prediction in an approximate (statistical) manner. The samples from the distribution are represented by a set of particles; each particle has a likelihood weight assigned to it that represents the probability of that particle being sampled from the probability density function. Weight disparity leading to weight collapse is a common issue encountered in these filtering algorithms. However, it can be mitigated by including a resampling step before the weights become too uneven. Several adaptive resampling criteria can be used, including the variance of the weights and the relative entropy with respect to the uniform distribution.[6] In the resampling step, the particles with negligible weights are replaced by new particles in the proximity of the particles with higher weights.

From the statistical and probabilistic point of view, particle filters may be interpreted as mean-field particle interpretations of Feynman-Kac probability measures.[7][8][9][10][11] These particle integration techniques were developed in molecular chemistry and computational physics by Theodore E. Harris and Herman Kahn in 1951, Marshall N. Rosenbluth and Arianna W. Rosenbluth in 1955,[12] and more recently by Jack H. Hetherington in 1984.[13] In computational physics, these Feynman-Kac type path particle integration methods are also used in Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods.[14][15][16] Feynman-Kac interacting particle methods are also strongly related to mutation-selection genetic algorithms currently used in evolutionary computation to solve complex optimization problems.

The particle filter methodology is used to solve hidden Markov model (HMM) and nonlinear filtering problems. With the notable exception of linear-Gaussian signal-observation models (Kalman filter) and wider classes of models (Beneš filter[17]), Mireille Chaleyat-Maurel and Dominique Michel proved in 1984 that the sequence of posterior distributions of the random states of a signal, given the observations (a.k.a. the optimal filter), has no finite recursion.[18] Various other numerical methods based on fixed grid approximations, Markov Chain Monte Carlo techniques, conventional linearization, extended Kalman filters, or determining the best linear system (in the expected cost-error sense) are unable to cope with large-scale systems, unstable processes, or insufficiently smooth nonlinearities.

Particle filters and Feynman-Kac particle methodologies find application in signal and image processing, Bayesian inference, machine learning, risk analysis and rare event sampling, engineering and robotics, artificial intelligence, bioinformatics,[19] phylogenetics, computational science, economics and mathematical finance, molecular chemistry, computational physics, pharmacokinetics, quantitative risk and insurance[20][21] and other fields.

History

Heuristic-like algorithms

From a statistical and probabilistic viewpoint, particle filters belong to the class of branching/genetic type algorithms, and mean-field type interacting particle methodologies. The interpretation of these particle methods depends on the scientific discipline. In Evolutionary Computing, mean-field genetic type particle methodologies are often used as heuristic and natural search algorithms (a.k.a. Metaheuristic). In computational physics and molecular chemistry, they are used to solve Feynman-Kac path integration problems or to compute Boltzmann-Gibbs measures, top eigenvalues, and ground states of Schrödinger operators. In Biology and Genetics, they represent the evolution of a population of individuals or genes in some environment.

The origins of mean-field type evolutionary computational techniques can be traced back to 1950 and 1954 with Alan Turing's work on genetic type mutation-selection learning machines[22] and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.[23][24] The first trace of particle filters in statistical methodology dates back to the mid-1950s; the 'Poor Man's Monte Carlo',[25] that was proposed by John Hammersley et al., in 1954, contained hints of the genetic type particle filtering methods used today. In 1963, Nils Aall Barricelli simulated a genetic type algorithm to mimic the ability of individuals to play a simple game.[26] In evolutionary computing literature, genetic-type mutation-selection algorithms became popular through the seminal work of John Holland in the early 1970s, particularly his book[27] published in 1975.

In Biology and Genetics, the Australian geneticist Alex Fraser also published in 1957 a series of papers on the genetic type simulation of artificial selection of organisms.[28] The computer simulation of the evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)[29] and Crosby (1973).[30] Fraser's simulations included all of the essential elements of modern mutation-selection genetic particle algorithms.

From the mathematical viewpoint, the conditional distribution of the random states of a signal given some partial and noisy observations is described by a Feynman-Kac probability on the random trajectories of the signal weighted by a sequence of likelihood potential functions.[7][8] Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods can also be interpreted as a mean-field genetic type particle approximation of Feynman-Kac path integrals.[7][8][9][13][14][31][32] The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron chain reactions,[33] but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984.[13] One can also quote the earlier seminal works of Theodore E. Harris and Herman Kahn in particle physics, published in 1951, using mean-field but heuristic-like genetic methods for estimating particle transmission energies.[34] In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.[12]

The use of genetic particle algorithms in advanced signal processing and Bayesian inference is more recent. In January 1993, Genshiro Kitagawa developed a "Monte Carlo filter";[35] a slightly modified version of this article appeared in 1996.[36] In April 1993, Neil J. Gordon et al. published in their seminal work[37] an application of a genetic type algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state space or the noise of the system. Independently, Pierre Del Moral[2] and Himilcon Carvalho, Pierre Del Moral, André Monin, and Gérard Salut[38] published work on particle filters in the mid-1990s. Particle filters were also developed in signal processing during 1989-1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut at the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on RADAR/SONAR and GPS signal processing problems.[39][40][41][42][43][44]

Mathematical foundations

From 1950 to 1996, all the publications on particle filters and genetic algorithms, including the pruning and resampling Monte Carlo methods introduced in computational physics and molecular chemistry, present natural and heuristic-like algorithms applied to different situations, without a single proof of their consistency, nor any discussion of the bias of the estimates or of genealogical and ancestral tree-based algorithms.

The mathematical foundations and the first rigorous analysis of these particle algorithms are due to Pierre Del Moral[2][4] in 1996. The article[2] also contains proof of the unbiased properties of a particle approximation of likelihood functions and unnormalized conditional probability measures. The unbiased particle estimator of the likelihood functions presented in this article is used today in Bayesian statistical inference.

Dan Crisan, Jessica Gaines, and Terry Lyons,[45][46][47] as well as Pierre Del Moral and Terry Lyons,[48] created branching-type particle techniques with various population sizes around the end of the 1990s. P. Del Moral, A. Guionnet, and L. Miclo[8][49][50] made further advances in this subject in 2000. Pierre Del Moral and Alice Guionnet[51] proved the first central limit theorems in 1999, and Pierre Del Moral and Laurent Miclo[8] proved them in 2000. The first uniform convergence results with respect to the time parameter for particle filters were developed at the end of the 1990s by Pierre Del Moral and Alice Guionnet.[49][50] The first rigorous analysis of genealogical tree-based particle filter smoothers is due to P. Del Moral and L. Miclo in 2001.[52]

The theory of Feynman-Kac particle methodologies and related particle filter algorithms was developed in 2000 and 2004 in the books.[8][5] These abstract probabilistic models encapsulate genetic type algorithms, particle and bootstrap filters, interacting Kalman filters (a.k.a. Rao–Blackwellized particle filters[53]), and importance sampling and resampling style particle filter techniques, including genealogical tree-based and particle backward methodologies for solving filtering and smoothing problems. Other classes of particle filtering methodologies include genealogical tree-based models,[10][5][54] backward Markov particle models,[10][55] adaptive mean-field particle models,[6] island-type particle models,[56][57] particle Markov chain Monte Carlo methodologies,[58][59] Sequential Monte Carlo samplers,[60][61][62] Sequential Monte Carlo Approximate Bayesian Computation methods,[63] and Sequential Monte Carlo ABC based Bayesian Bootstrap.[64]

The filtering problem

Objective

A particle filter's goal is to estimate the posterior density of the state variables given the observation variables. The particle filter is intended for use with a hidden Markov model, in which the system includes both hidden and observable variables. The observable variables (observation process) are linked to the hidden variables (state process) via a known functional form. Similarly, the probabilistic description of the dynamical system defining the evolution of the state variables is known.

A generic particle filter estimates the posterior distribution of the hidden states using the observation measurement process. With respect to a state-space such as the one below:

X_0 → X_1 → X_2 → X_3 → ⋯   (signal)
 ↓     ↓     ↓     ↓
Y_0   Y_1   Y_2   Y_3        (observation)

the filtering problem is to estimate sequentially the values of the hidden states X_k, given the values of the observation process Y_0, …, Y_k, at any time step k.

All Bayesian estimates of X_k follow from the posterior density p(x_k | y_0, y_1, …, y_k). The particle filter methodology provides an approximation of these conditional probabilities using the empirical measure associated with a genetic type particle algorithm. In contrast, the Markov Chain Monte Carlo or importance sampling approach would model the full posterior p(x_0, x_1, …, x_k | y_0, y_1, …, y_k).

The Signal-Observation model

Particle methods often assume X_k and the observations Y_k can be modeled in this form:

  • X_0, X_1, … is a Markov process on ℝ^{d_x} (for some d_x ≥ 1) that evolves according to the transition probability density p(x_k | x_{k−1}). This model is also often written in a synthetic way as
    <math>X_k \mid X_{k-1}=x_{k-1} \sim p(x_k\mid x_{k-1})</math>
with an initial probability density p(x_0).
  • The observations Y_0, Y_1, … take values in some state space on ℝ^{d_y} (for some d_y ≥ 1) and are conditionally independent provided that X_0, X_1, … are known. In other words, each Y_k only depends on X_k. In addition, we assume that the conditional distribution of Y_k given X_k = x_k is absolutely continuous, and in a synthetic way we have
    <math>Y_k \mid X_k=x_k \sim p(y_k\mid x_k)</math>

An example of a system with these properties is:

<math>X_k = g(X_{k-1}) + W_{k-1}</math>
<math>Y_k = h(X_k) + V_k</math>

where both W_k and V_k are mutually independent sequences with known probability density functions and g and h are known functions. These two equations can be viewed as state space equations and look similar to the state space equations for the Kalman filter. If the functions g and h in the above example are linear, and if both W_k and V_k are Gaussian, the Kalman filter finds the exact Bayesian filtering distribution. If not, Kalman filter-based methods are a first-order approximation (EKF) or a second-order approximation (UKF in general, but if the probability distribution is Gaussian a third-order approximation is possible).

The assumption that the initial distribution and the transitions of the Markov chain are absolutely continuous with respect to the Lebesgue measure can be relaxed. To design a particle filter we simply need to assume that we can sample the transitions X_{k−1} → X_k of the Markov chain X_k, and to compute the likelihood function x_k ↦ p(y_k | x_k) (see for instance the genetic selection-mutation description of the particle filter given below). The absolute continuity assumption on the Markov transitions of X_k is only used to derive in an informal (and rather abusive) way different formulae between posterior distributions using Bayes' rule for conditional densities.
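
For illustration, here is a minimal Python sketch that simulates one signal-observation pair of the above form X_k = g(X_{k−1}) + W_{k−1}, Y_k = h(X_k) + V_k. The particular choices of g and h, the scalar state, and the Gaussian noise densities are illustrative assumptions only; the model itself allows any known densities.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # hypothetical (nonlinear) state transition mean
    return 0.5 * x + 25.0 * x / (1.0 + x**2)

def h(x):
    # hypothetical observation function
    return x**2 / 20.0

def simulate(T, q=1.0, r=1.0):
    """Draw one trajectory of the hidden signal X_k and the observations Y_k."""
    x = np.empty(T)
    y = np.empty(T)
    x[0] = rng.normal(0.0, 1.0)                           # initial density p(x_0)
    y[0] = h(x[0]) + rng.normal(0.0, np.sqrt(r))          # observation noise V_0
    for k in range(1, T):
        x[k] = g(x[k - 1]) + rng.normal(0.0, np.sqrt(q))  # signal noise W_{k-1}
        y[k] = h(x[k]) + rng.normal(0.0, np.sqrt(r))      # observation noise V_k
    return x, y

x_true, y_obs = simulate(100)
</syntaxhighlight>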

Approximate Bayesian computation models

In certain problems, the conditional distribution of observations, given the random states of the signal, may fail to have a density; the latter may be impossible or too complex to compute.[19] In this situation, an additional level of approximation is needed. One strategy is to replace the signal X_k by the Markov chain 𝒳_k = (X_k, Y_k) and to introduce a virtual observation of the form

<math>\mathcal{Y}_k = Y_k + \epsilon\,\mathcal{V}_k \quad\text{for some parameter } \epsilon\in[0,1]</math>

for some sequence of independent random variables 𝒱_k with known probability density functions. The central idea is to observe that

<math>\text{Law}(X_k \mid \mathcal{Y}_0=y_0,\ldots,\mathcal{Y}_k=y_k) \approx_{\epsilon\to 0} \text{Law}(X_k \mid Y_0=y_0,\ldots,Y_k=y_k)</math>

The particle filter associated with the Markov process 𝒳_k = (X_k, Y_k), given the partial observations 𝒴_0 = y_0, …, 𝒴_k = y_k, is defined in terms of particles evolving in ℝ^{d_x + d_y} with a likelihood function given, with some obvious abusive notation, by p(𝒴_k | 𝒳_k). These probabilistic techniques are closely related to Approximate Bayesian Computation (ABC). In the context of particle filters, these ABC particle filtering techniques were introduced in 1998 by P. Del Moral, J. Jacod and P. Protter.[65] They were further developed by P. Del Moral, A. Doucet and A. Jasra.[66][67]

The nonlinear filtering equation

Bayes' rule for conditional probability gives:

<math>p(x_0,\ldots,x_k \mid y_0,\ldots,y_k) = \frac{p(y_0,\ldots,y_k \mid x_0,\ldots,x_k)\,p(x_0,\ldots,x_k)}{p(y_0,\ldots,y_k)}</math>

where

<math>p(y_0,\ldots,y_k) = \int p(y_0,\ldots,y_k \mid x_0,\ldots,x_k)\,p(x_0,\ldots,x_k)\,dx_0\cdots dx_k</math>
<math>p(y_0,\ldots,y_k \mid x_0,\ldots,x_k) = \prod_{l=0}^{k} p(y_l \mid x_l)</math>
<math>p(x_0,\ldots,x_k) = p_0(x_0)\prod_{l=1}^{k} p(x_l \mid x_{l-1})</math>

Particle filters are also an approximation, but with enough particles they can be much more accurate.[2][4][5][49][50] The nonlinear filtering equation is given by the recursion

<math>\begin{cases}
p(x_k \mid y_0,\ldots,y_k) = \dfrac{p(y_k \mid x_k)\,p(x_k \mid y_0,\ldots,y_{k-1})}{\int p(y_k \mid x'_k)\,p(x'_k \mid y_0,\ldots,y_{k-1})\,dx'_k} & \text{(updating)}\\[2ex]
p(x_{k+1} \mid y_0,\ldots,y_k) = \displaystyle\int p(x_{k+1} \mid x_k)\,p(x_k \mid y_0,\ldots,y_k)\,dx_k & \text{(prediction)}
\end{cases}</math>     (Eq. 1)

with the convention p(x_0 | y_0, …, y_{−1}) = p(x_0) for k = 0. The nonlinear filtering problem consists in computing these conditional distributions sequentially.

Feynman-Kac formulation

We fix a time horizon n and a sequence of observations Y_0 = y_0, …, Y_n = y_n, and for each k = 0, …, n we set:

<math>G_k(x_k) = p(y_k \mid x_k).</math>

In this notation, for any bounded function F on the set of trajectories of X_k from the origin k = 0 up to time k = n, we have the Feynman-Kac formula

<math>\begin{aligned}
\int F(x_0,\ldots,x_n)\,p(x_0,\ldots,x_n \mid y_0,\ldots,y_n)\,dx_0\cdots dx_n
&= \frac{\int F(x_0,\ldots,x_n)\left\{\prod_{k=0}^{n} p(y_k \mid x_k)\right\}p(x_0,\ldots,x_n)\,dx_0\cdots dx_n}{\int \left\{\prod_{k=0}^{n} p(y_k \mid x_k)\right\}p(x_0,\ldots,x_n)\,dx_0\cdots dx_n}\\[1ex]
&= \frac{E\left(F(X_0,\ldots,X_n)\prod_{k=0}^{n} G_k(X_k)\right)}{E\left(\prod_{k=0}^{n} G_k(X_k)\right)}
\end{aligned}</math>

Feynman-Kac path integration models arise in a variety of scientific disciplines, including in computational physics, biology, information theory and computer sciences.[8][10][5] Their interpretations are dependent on the application domain. For instance, if we choose the indicator function G_n(x_n) = 1_A(x_n) of some subset of the state space, they represent the conditional distribution of a Markov chain given it stays in a given tube; that is, we have:

<math>E\left(F(X_0,\ldots,X_n) \mid X_0\in A,\ldots,X_n\in A\right) = \frac{E\left(F(X_0,\ldots,X_n)\prod_{k=0}^{n} G_k(X_k)\right)}{E\left(\prod_{k=0}^{n} G_k(X_k)\right)}</math>

and

<math>P\left(X_0\in A,\ldots,X_n\in A\right) = E\left(\prod_{k=0}^{n} G_k(X_k)\right)</math>

as soon as the normalizing constant is strictly positive.

Particle filters

A Genetic type particle algorithm

Initially, such an algorithm starts with N independent random variables <math>(\xi_0^i)_{1\le i\le N}</math> with common probability density p(x_0). The genetic algorithm selection-mutation transitions[2][4]

<math>\xi_k := \left(\xi_k^i\right)_{1\le i\le N} \xrightarrow{\text{selection}} \widehat{\xi}_k := \left(\widehat{\xi}_k^i\right)_{1\le i\le N} \xrightarrow{\text{mutation}} \xi_{k+1} := \left(\xi_{k+1}^i\right)_{1\le i\le N}</math>

mimic/approximate the updating-prediction transitions of the optimal filter evolution (Eq. 1):

  • During the selection-updating transition we sample N (conditionally) independent random variables <math>\widehat{\xi}_k := (\widehat{\xi}_k^i)_{1\le i\le N}</math> with common (conditional) distribution
<math>\sum_{i=1}^{N}\frac{p(y_k\mid \xi_k^i)}{\sum_{j=1}^{N}p(y_k\mid \xi_k^j)}\,\delta_{\xi_k^i}(dx_k)</math>

where δ_a stands for the Dirac measure at a given state a.

  • During the mutation-prediction transition, from each selected particle <math>\widehat{\xi}_k^i</math> we sample independently a transition
<math>\widehat{\xi}_k^i \longrightarrow \xi_{k+1}^i \sim p(x_{k+1}\mid \widehat{\xi}_k^i), \qquad i=1,\ldots,N.</math>

In the above displayed formulae, <math>p(y_k\mid \xi_k^i)</math> stands for the likelihood function <math>x_k\mapsto p(y_k\mid x_k)</math> evaluated at <math>x_k=\xi_k^i</math>, and <math>p(x_{k+1}\mid \widehat{\xi}_k^i)</math> stands for the conditional density <math>p(x_{k+1}\mid x_k)</math> evaluated at <math>x_k=\widehat{\xi}_k^i</math>.

At each time k, we have the particle approximations

<math>\widehat{p}(dx_k\mid y_0,\ldots,y_k) := \frac{1}{N}\sum_{i=1}^{N}\delta_{\widehat{\xi}_k^i}(dx_k) \approx_{N\to\infty} p(dx_k\mid y_0,\ldots,y_k) \approx_{N\to\infty} \sum_{i=1}^{N}\frac{p(y_k\mid \xi_k^i)}{\sum_{j=1}^{N}p(y_k\mid \xi_k^j)}\,\delta_{\xi_k^i}(dx_k)</math>

and

<math>\widehat{p}(dx_k\mid y_0,\ldots,y_{k-1}) := \frac{1}{N}\sum_{i=1}^{N}\delta_{\xi_k^i}(dx_k) \approx_{N\to\infty} p(dx_k\mid y_0,\ldots,y_{k-1})</math>

In the genetic algorithms and evolutionary computing community, the mutation-selection Markov chain described above is often called the genetic algorithm with proportional selection. Several branching variants, including variants with random population sizes, have also been proposed in the articles.[5][45][48] A minimal code sketch of these selection-mutation transitions is given below.
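
The following Python sketch illustrates the selection-mutation transitions above. It is an illustration only: it reuses the hypothetical scalar Gaussian model (functions g and h) from the simulation sketch in the section on the signal-observation model, and performs multinomial selection proportional to the likelihood weights at every step.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
g = lambda x: 0.5 * x + 25.0 * x / (1.0 + x**2)   # hypothetical transition mean (as in the earlier sketch)
h = lambda x: x**2 / 20.0                          # hypothetical observation function

def genetic_particle_filter(y_obs, N=1000, q=1.0, r=1.0):
    """Selection-mutation (bootstrap-type) particle filter; returns the filtering means E[X_k | y_0..y_k]."""
    xi = rng.normal(0.0, 1.0, size=N)                       # xi_0^i drawn from p(x_0)
    means = np.empty(len(y_obs))
    for k, y_k in enumerate(y_obs):
        # selection-updating: weight by the likelihood p(y_k | xi_k^i) and resample
        w = norm.pdf(y_k, loc=h(xi), scale=np.sqrt(r))
        w /= w.sum()
        xi_hat = rng.choice(xi, size=N, p=w)                # selected particles
        means[k] = xi_hat.mean()                            # particle estimate of the filtering mean
        # mutation-prediction: propagate each selected particle through p(x_{k+1} | .)
        xi = g(xi_hat) + rng.normal(0.0, np.sqrt(q), size=N)
    return means
</syntaxhighlight>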

Particle methods, like all sampling-based approaches (e.g., Markov Chain Monte Carlo), generate a set of samples that approximate the filtering density

<math>p(x_k \mid y_0,\ldots,y_k).</math>

For example, we may have N samples from the approximate posterior distribution of X_k, where the samples are labeled with superscripts as:

<math>\widehat{\xi}_k^1,\ldots,\widehat{\xi}_k^N.</math>

Then, expectations with respect to the filtering distribution are approximated by

<math>\int f(x_k)\,p(x_k\mid y_0,\ldots,y_k)\,dx_k \approx_{N\to\infty} \int f(x_k)\,\widehat{p}(dx_k\mid y_0,\ldots,y_k) = \frac{1}{N}\sum_{i=1}^{N}f\left(\widehat{\xi}_k^i\right)</math>     (Eq. 2)

with

<math>\widehat{p}(dx_k\mid y_0,\ldots,y_k) = \frac{1}{N}\sum_{i=1}^{N}\delta_{\widehat{\xi}_k^i}(dx_k)</math>

where δ_a stands for the Dirac measure at a given state a. The function f, in the usual way for Monte Carlo, can give all the moments etc. of the distribution up to some approximation error. When the approximation equation (Eq. 2) is satisfied for any bounded function f we write

<math>p(dx_k\mid y_0,\ldots,y_k) := p(x_k\mid y_0,\ldots,y_k)\,dx_k \approx_{N\to\infty} \widehat{p}(dx_k\mid y_0,\ldots,y_k) = \frac{1}{N}\sum_{i=1}^{N}\delta_{\widehat{\xi}_k^i}(dx_k)</math>

Particle filters can be interpreted as a genetic type particle algorithm evolving with mutation and selection transitions. We can keep track of the ancestral lines

<math>\left(\widehat{\xi}_{0,k}^i,\widehat{\xi}_{1,k}^i,\ldots,\widehat{\xi}_{k-1,k}^i,\widehat{\xi}_{k,k}^i\right)</math>

of the particles i = 1, …, N. The random states <math>\widehat{\xi}_{l,k}^i</math>, with lower indices l = 0, …, k, stand for the ancestors of the individual <math>\widehat{\xi}_{k,k}^i=\widehat{\xi}_k^i</math> at level l = 0, …, k. In this situation, we have the approximation formula

<math>\int F(x_0,\ldots,x_k)\,p(x_0,\ldots,x_k\mid y_0,\ldots,y_k)\,dx_0\cdots dx_k \approx_{N\to\infty} \int F(x_0,\ldots,x_k)\,\widehat{p}(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_k) = \frac{1}{N}\sum_{i=1}^{N}F\left(\widehat{\xi}_{0,k}^i,\widehat{\xi}_{1,k}^i,\ldots,\widehat{\xi}_{k,k}^i\right)</math>     (Eq. 3)

with the empirical measure

<math>\widehat{p}(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_k) := \frac{1}{N}\sum_{i=1}^{N}\delta_{\left(\widehat{\xi}_{0,k}^i,\widehat{\xi}_{1,k}^i,\ldots,\widehat{\xi}_{k,k}^i\right)}(d(x_0,\ldots,x_k))</math>

Here F stands for any bounded function on the path space of the signal. In a more synthetic form, (Eq. 3) is equivalent to

<math>p(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_k) := p(x_0,\ldots,x_k\mid y_0,\ldots,y_k)\,dx_0\cdots dx_k \approx_{N\to\infty} \widehat{p}(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_k) := \frac{1}{N}\sum_{i=1}^{N}\delta_{\left(\widehat{\xi}_{0,k}^i,\ldots,\widehat{\xi}_{k,k}^i\right)}(d(x_0,\ldots,x_k))</math>

Particle filters can be interpreted in many different ways. From the probabilistic point of view they coincide with a mean-field particle interpretation of the nonlinear filtering equation. The updating-prediction transitions of the optimal filter evolution can also be interpreted as the classical genetic type selection-mutation transitions of individuals. The sequential importance resampling technique provides another interpretation of the filtering transitions coupling importance sampling with the bootstrap resampling step. Last, but not least, particle filters can be seen as an acceptance-rejection methodology equipped with a recycling mechanism.[10][5]


The general probabilistic principle

The nonlinear filtering evolution can be interpreted as a dynamical system in the set of probability measures of the form η_{n+1} = Φ_{n+1}(η_n), where Φ_{n+1} stands for some mapping from the set of probability distributions into itself. For instance, the evolution of the one-step optimal predictor η_n(dx_n) = p(x_n | y_0, …, y_{n−1}) dx_n

satisfies a nonlinear evolution starting with the probability distribution η_0(dx_0) = p(x_0) dx_0. One of the simplest ways to approximate these probability measures is to start with N independent random variables <math>(\xi_0^i)_{1\le i\le N}</math> with common probability distribution η_0(dx_0) = p(x_0) dx_0. Suppose we have defined a sequence of N random variables <math>(\xi_n^i)_{1\le i\le N}</math> such that

<math>\frac{1}{N}\sum_{i=1}^{N}\delta_{\xi_n^i}(dx_n) \approx_{N\to\infty} \eta_n(dx_n)</math>

At the next step we sample N (conditionally) independent random variables <math>\xi_{n+1} := (\xi_{n+1}^i)_{1\le i\le N}</math> with common law

<math>\Phi_{n+1}\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{\xi_n^i}\right) \approx_{N\to\infty} \Phi_{n+1}(\eta_n) = \eta_{n+1}</math>

A particle interpretation of the filtering equation

We illustrate this mean-field particle principle in the context of the evolution of the one-step optimal predictors

<math>p(x_{k+1}\mid y_0,\ldots,y_k) = \int p(x_{k+1}\mid x_k)\,\frac{p(y_k\mid x_k)\,p(x_k\mid y_0,\ldots,y_{k-1})}{\int p(y_k\mid x'_k)\,p(x'_k\mid y_0,\ldots,y_{k-1})\,dx'_k}\,dx_k</math>     (Eq. 4)

For k = 0 we use the convention p(x_0 | y_0, …, y_{−1}) := p(x_0).

By the law of large numbers, we have

<math>\widehat{p}(dx_0) = \frac{1}{N}\sum_{i=1}^{N}\delta_{\xi_0^i}(dx_0) \approx_{N\to\infty} p(x_0)\,dx_0</math>

in the sense that

<math>\int f(x_0)\,\widehat{p}(dx_0) = \frac{1}{N}\sum_{i=1}^{N}f(\xi_0^i) \approx_{N\to\infty} \int f(x_0)\,p(x_0)\,dx_0</math>

for any bounded function f. We further assume that we have constructed a sequence of particles (ξki)1iN at some rank k such that

<math>\widehat{p}(dx_k\mid y_0,\ldots,y_{k-1}) := \frac{1}{N}\sum_{i=1}^{N}\delta_{\xi_k^i}(dx_k) \approx_{N\to\infty} p(x_k\mid y_0,\ldots,y_{k-1})\,dx_k</math>

in the sense that for any bounded function f we have

<math>\int f(x_k)\,\widehat{p}(dx_k\mid y_0,\ldots,y_{k-1}) = \frac{1}{N}\sum_{i=1}^{N}f(\xi_k^i) \approx_{N\to\infty} \int f(x_k)\,p(x_k\mid y_0,\ldots,y_{k-1})\,dx_k</math>

In this situation, replacing p(x_k | y_0, …, y_{k−1}) dx_k by the empirical measure <math>\widehat{p}(dx_k\mid y_0,\ldots,y_{k-1})</math> in the evolution equation of the one-step optimal filter stated in (Eq. 4) we find that

<math>p(x_{k+1}\mid y_0,\ldots,y_k) \approx_{N\to\infty} \int p(x_{k+1}\mid x'_k)\,\frac{p(y_k\mid x'_k)\,\widehat{p}(dx'_k\mid y_0,\ldots,y_{k-1})}{\int p(y_k\mid x''_k)\,\widehat{p}(dx''_k\mid y_0,\ldots,y_{k-1})}</math>

Notice that the right hand side in the above formula is a weighted probability mixture

<math>\int p(x_{k+1}\mid x'_k)\,\frac{p(y_k\mid x'_k)\,\widehat{p}(dx'_k\mid y_0,\ldots,y_{k-1})}{\int p(y_k\mid x''_k)\,\widehat{p}(dx''_k\mid y_0,\ldots,y_{k-1})} = \sum_{i=1}^{N}\frac{p(y_k\mid \xi_k^i)}{\sum_{j=1}^{N}p(y_k\mid \xi_k^j)}\,p(x_{k+1}\mid \xi_k^i) =: \widehat{q}(x_{k+1}\mid y_0,\ldots,y_k)</math>

where <math>p(y_k\mid \xi_k^i)</math> stands for the density <math>p(y_k\mid x_k)</math> evaluated at <math>x_k=\xi_k^i</math>, and <math>p(x_{k+1}\mid \xi_k^i)</math> stands for the density <math>p(x_{k+1}\mid x_k)</math> evaluated at <math>x_k=\xi_k^i</math> for i = 1, …, N.

Then, we sample N independent random variables <math>(\xi_{k+1}^i)_{1\le i\le N}</math> with common probability density <math>\widehat{q}(x_{k+1}\mid y_0,\ldots,y_k)</math> so that

<math>\widehat{p}(dx_{k+1}\mid y_0,\ldots,y_k) := \frac{1}{N}\sum_{i=1}^{N}\delta_{\xi_{k+1}^i}(dx_{k+1}) \approx_{N\to\infty} \widehat{q}(x_{k+1}\mid y_0,\ldots,y_k)\,dx_{k+1} \approx_{N\to\infty} p(x_{k+1}\mid y_0,\ldots,y_k)\,dx_{k+1}</math>

Iterating this procedure, we design a Markov chain such that

<math>\widehat{p}(dx_k\mid y_0,\ldots,y_{k-1}) := \frac{1}{N}\sum_{i=1}^{N}\delta_{\xi_k^i}(dx_k) \approx_{N\to\infty} p(dx_k\mid y_0,\ldots,y_{k-1}) := p(x_k\mid y_0,\ldots,y_{k-1})\,dx_k</math>

Notice that the optimal filter is approximated at each time step k using Bayes' formula

<math>p(dx_k\mid y_0,\ldots,y_k) \approx_{N\to\infty} \frac{p(y_k\mid x_k)\,\widehat{p}(dx_k\mid y_0,\ldots,y_{k-1})}{\int p(y_k\mid x'_k)\,\widehat{p}(dx'_k\mid y_0,\ldots,y_{k-1})} = \sum_{i=1}^{N}\frac{p(y_k\mid \xi_k^i)}{\sum_{j=1}^{N}p(y_k\mid \xi_k^j)}\,\delta_{\xi_k^i}(dx_k)</math>

The terminology "mean-field approximation" comes from the fact that we replace at each time step the probability measure p(dxk|y0,,yk1) by the empirical approximation p^(dxk|y0,,yk1). The mean-field particle approximation of the filtering problem is far from being unique. Several strategies are developed in the books.[10][5]

Some convergence results

The analysis of the convergence of particle filters was started in 1996[2][4] and in 2000 in the book[8] and the series of articles.[48][49][50][51][52][68][69] More recent developments can be found in the books.[10][5] When the filtering equation is stable (in the sense that it corrects any erroneous initial condition), the bias and the variance of the particle estimates

<math>I_k(f) := \int f(x_k)\,p(dx_k\mid y_0,\ldots,y_{k-1}) \approx_{N\to\infty} \widehat{I}_k(f) := \int f(x_k)\,\widehat{p}(dx_k\mid y_0,\ldots,y_{k-1})</math>

are controlled by the non-asymptotic uniform estimates

<math>\sup_{k\ge 0}\left|E\left(\widehat{I}_k(f)\right) - I_k(f)\right| \le \frac{c_1}{N}</math>
<math>\sup_{k\ge 0} E\left(\left[\widehat{I}_k(f) - I_k(f)\right]^2\right) \le \frac{c_2}{N}</math>

for any function f bounded by 1, and for some finite constants c_1, c_2. In addition, for any x ≥ 0:

<math>\mathbf{P}\left(\left|\widehat{I}_k(f) - I_k(f)\right| \le c_1\,\frac{x}{N} + c_2\,\sqrt{\frac{x}{N}} \;\text{ and }\; \sup_{0\le k\le n}\left|\widehat{I}_k(f) - I_k(f)\right| \le c\,\sqrt{\frac{x\log(n)}{N}}\right) > 1 - e^{-x}</math>

for some finite constants c1,c2 related to the asymptotic bias and variance of the particle estimate, and some finite constant c. The same results are satisfied if we replace the one step optimal predictor by the optimal filter approximation.

Genealogical trees and Unbiasedness properties

Template:Technical

Genealogical tree based particle smoothing

Tracing back in time the ancestral lines

<math>\left(\widehat{\xi}_{0,k}^i,\widehat{\xi}_{1,k}^i,\ldots,\widehat{\xi}_{k-1,k}^i,\widehat{\xi}_{k,k}^i\right), \qquad \left(\xi_{0,k}^i,\xi_{1,k}^i,\ldots,\xi_{k-1,k}^i,\xi_{k,k}^i\right)</math>

of the individuals <math>\widehat{\xi}_k^i\,(=\widehat{\xi}_{k,k}^i)</math> and <math>\xi_k^i\,(=\xi_{k,k}^i)</math> at every time step k, we also have the particle approximations

<math>\widehat{p}(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_k) := \frac{1}{N}\sum_{i=1}^{N}\delta_{\left(\widehat{\xi}_{0,k}^i,\ldots,\widehat{\xi}_{k,k}^i\right)}(d(x_0,\ldots,x_k)) \approx_{N\to\infty} p(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_k) \approx_{N\to\infty} \sum_{i=1}^{N}\frac{p(y_k\mid \xi_{k,k}^i)}{\sum_{j=1}^{N}p(y_k\mid \xi_{k,k}^j)}\,\delta_{\left(\xi_{0,k}^i,\ldots,\xi_{k,k}^i\right)}(d(x_0,\ldots,x_k))</math>

<math>\widehat{p}(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_{k-1}) := \frac{1}{N}\sum_{i=1}^{N}\delta_{\left(\xi_{0,k}^i,\ldots,\xi_{k,k}^i\right)}(d(x_0,\ldots,x_k)) \approx_{N\to\infty} p(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_{k-1}) := p(x_0,\ldots,x_k\mid y_0,\ldots,y_{k-1})\,dx_0\cdots dx_k</math>

These empirical approximations are equivalent to the particle integral approximations

<math>\int F(x_0,\ldots,x_k)\,\widehat{p}(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_k) := \frac{1}{N}\sum_{i=1}^{N}F\left(\widehat{\xi}_{0,k}^i,\ldots,\widehat{\xi}_{k,k}^i\right) \approx_{N\to\infty} \int F(x_0,\ldots,x_k)\,p(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_k) \approx_{N\to\infty} \sum_{i=1}^{N}\frac{p(y_k\mid \xi_{k,k}^i)}{\sum_{j=1}^{N}p(y_k\mid \xi_{k,k}^j)}\,F\left(\xi_{0,k}^i,\ldots,\xi_{k,k}^i\right)</math>

<math>\int F(x_0,\ldots,x_k)\,\widehat{p}(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_{k-1}) := \frac{1}{N}\sum_{i=1}^{N}F\left(\xi_{0,k}^i,\ldots,\xi_{k,k}^i\right) \approx_{N\to\infty} \int F(x_0,\ldots,x_k)\,p(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_{k-1})</math>

for any bounded function F on the random trajectories of the signal. As shown in[54] the evolution of the genealogical tree coincides with a mean-field particle interpretation of the evolution equations associated with the posterior densities of the signal trajectories. For more details on these path space models, we refer to the books.[10][5]

Unbiased particle estimates of likelihood functions

We use the product formula

<math>p(y_0,\ldots,y_n) = \prod_{k=0}^{n} p(y_k\mid y_0,\ldots,y_{k-1})</math>

with

<math>p(y_k\mid y_0,\ldots,y_{k-1}) = \int p(y_k\mid x_k)\,p(dx_k\mid y_0,\ldots,y_{k-1})</math>

and the conventions p(y_0 | y_0, …, y_{−1}) = p(y_0) and p(x_0 | y_0, …, y_{−1}) = p(x_0) for k = 0. Replacing p(x_k | y_0, …, y_{k−1}) dx_k by the empirical approximation

<math>\widehat{p}(dx_k\mid y_0,\ldots,y_{k-1}) := \frac{1}{N}\sum_{i=1}^{N}\delta_{\xi_k^i}(dx_k) \approx_{N\to\infty} p(dx_k\mid y_0,\ldots,y_{k-1})</math>

in the above displayed formula, we design the following unbiased particle approximation of the likelihood function

<math>p(y_0,\ldots,y_n) \approx_{N\to\infty} \widehat{p}(y_0,\ldots,y_n) = \prod_{k=0}^{n}\widehat{p}(y_k\mid y_0,\ldots,y_{k-1})</math>

with

<math>\widehat{p}(y_k\mid y_0,\ldots,y_{k-1}) = \int p(y_k\mid x_k)\,\widehat{p}(dx_k\mid y_0,\ldots,y_{k-1}) = \frac{1}{N}\sum_{i=1}^{N}p(y_k\mid \xi_k^i)</math>

where <math>p(y_k\mid \xi_k^i)</math> stands for the density <math>p(y_k\mid x_k)</math> evaluated at <math>x_k=\xi_k^i</math>. The design of this particle estimate and its unbiasedness property were proved in 1996 in the article.[2] Refined variance estimates can be found in [5] and [10].
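
As a sketch, the running product above can be accumulated alongside the selection-mutation steps. The code below reuses the hypothetical scalar Gaussian model from the earlier sketches and returns the estimate on the log scale for numerical stability; note that it is the estimator of the likelihood itself, not of its logarithm, that is unbiased.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
g = lambda x: 0.5 * x + 25.0 * x / (1.0 + x**2)   # hypothetical transition mean (as in the earlier sketch)
h = lambda x: x**2 / 20.0                          # hypothetical observation function

def log_likelihood_estimate(y_obs, N=1000, q=1.0, r=1.0):
    """Log of the particle estimate prod_k (1/N) sum_i p(y_k | xi_k^i) of p(y_0,...,y_n)."""
    xi = rng.normal(0.0, 1.0, size=N)                        # predicted particles xi_0^i ~ p(x_0)
    log_like = 0.0
    for y_k in y_obs:
        w = norm.pdf(y_k, loc=h(xi), scale=np.sqrt(r))       # p(y_k | xi_k^i)
        log_like += np.log(w.mean())                         # log of hat p(y_k | y_0..y_{k-1})
        w /= w.sum()
        xi_hat = rng.choice(xi, size=N, p=w)                 # selection
        xi = g(xi_hat) + rng.normal(0.0, np.sqrt(q), size=N) # mutation -> predicted particles at k+1
    return log_like
</syntaxhighlight>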

Backward particle smoothers

Using Bayes' rule, we have the formula

<math>p(x_0,\ldots,x_n\mid y_0,\ldots,y_{n-1}) = p(x_n\mid y_0,\ldots,y_{n-1})\,p(x_{n-1}\mid x_n,y_0,\ldots,y_{n-1})\cdots p(x_1\mid x_2,y_0,y_1)\,p(x_0\mid x_1,y_0)</math>

Notice that

<math>p(x_{k-1}\mid x_k,(y_0,\ldots,y_{k-1})) \propto p(x_k\mid x_{k-1})\,p(x_{k-1}\mid (y_0,\ldots,y_{k-1}))</math>
<math>p(x_{k-1}\mid (y_0,\ldots,y_{k-1})) \propto p(y_{k-1}\mid x_{k-1})\,p(x_{k-1}\mid (y_0,\ldots,y_{k-2}))</math>

This implies that

<math>p(x_{k-1}\mid x_k,(y_0,\ldots,y_{k-1})) = \frac{p(y_{k-1}\mid x_{k-1})\,p(x_k\mid x_{k-1})\,p(x_{k-1}\mid y_0,\ldots,y_{k-2})}{\int p(y_{k-1}\mid x'_{k-1})\,p(x_k\mid x'_{k-1})\,p(x'_{k-1}\mid y_0,\ldots,y_{k-2})\,dx'_{k-1}}</math>

Replacing the one-step optimal predictors p(x_{k−1} | (y_0, …, y_{k−2})) dx_{k−1} by the particle empirical measures

<math>\widehat{p}(dx_{k-1}\mid (y_0,\ldots,y_{k-2})) = \frac{1}{N}\sum_{i=1}^{N}\delta_{\xi_{k-1}^i}(dx_{k-1}) \quad\left(\approx_{N\to\infty} p(dx_{k-1}\mid (y_0,\ldots,y_{k-2})) := p(x_{k-1}\mid (y_0,\ldots,y_{k-2}))\,dx_{k-1}\right)</math>

we find that

<math>p(dx_{k-1}\mid x_k,(y_0,\ldots,y_{k-1})) \approx_{N\to\infty} \widehat{p}(dx_{k-1}\mid x_k,(y_0,\ldots,y_{k-1})) := \frac{p(y_{k-1}\mid x_{k-1})\,p(x_k\mid x_{k-1})\,\widehat{p}(dx_{k-1}\mid y_0,\ldots,y_{k-2})}{\int p(y_{k-1}\mid x'_{k-1})\,p(x_k\mid x'_{k-1})\,\widehat{p}(dx'_{k-1}\mid y_0,\ldots,y_{k-2})} = \sum_{i=1}^{N}\frac{p(y_{k-1}\mid \xi_{k-1}^i)\,p(x_k\mid \xi_{k-1}^i)}{\sum_{j=1}^{N}p(y_{k-1}\mid \xi_{k-1}^j)\,p(x_k\mid \xi_{k-1}^j)}\,\delta_{\xi_{k-1}^i}(dx_{k-1})</math>

We conclude that

<math>p(d(x_0,\ldots,x_n)\mid (y_0,\ldots,y_{n-1})) \approx_{N\to\infty} \widehat{p}_{backward}(d(x_0,\ldots,x_n)\mid (y_0,\ldots,y_{n-1}))</math>

with the backward particle approximation

<math>\widehat{p}_{backward}(d(x_0,\ldots,x_n)\mid (y_0,\ldots,y_{n-1})) = \widehat{p}(dx_n\mid (y_0,\ldots,y_{n-1}))\,\widehat{p}(dx_{n-1}\mid x_n,(y_0,\ldots,y_{n-1}))\cdots \widehat{p}(dx_1\mid x_2,(y_0,y_1))\,\widehat{p}(dx_0\mid x_1,y_0)</math>

The probability measure

<math>\widehat{p}_{backward}(d(x_0,\ldots,x_n)\mid (y_0,\ldots,y_{n-1}))</math>

is the probability of the random paths of a Markov chain <math>(\mathbb{X}_{k,n})_{0\le k\le n}</math> running backward in time from time k = n to time k = 0, and evolving at each time step k in the state space associated with the population of particles ξ_k^i, i = 1, …, N.

  • Initially (at time k = n) the chain <math>\mathbb{X}_{n,n}</math> chooses randomly a state with the distribution
<math>\widehat{p}(dx_n\mid (y_0,\ldots,y_{n-1})) = \frac{1}{N}\sum_{i=1}^{N}\delta_{\xi_n^i}(dx_n)</math>
  • From time k to time (k − 1), the chain starting at some state <math>\mathbb{X}_{k,n}=\xi_k^i</math> for some i = 1, …, N at time k moves at time (k − 1) to a random state <math>\mathbb{X}_{k-1,n}</math> chosen with the discrete weighted probability
<math>\widehat{p}(dx_{k-1}\mid \xi_k^i,(y_0,\ldots,y_{k-1})) = \sum_{j=1}^{N}\frac{p(y_{k-1}\mid \xi_{k-1}^j)\,p(\xi_k^i\mid \xi_{k-1}^j)}{\sum_{l=1}^{N}p(y_{k-1}\mid \xi_{k-1}^l)\,p(\xi_k^i\mid \xi_{k-1}^l)}\,\delta_{\xi_{k-1}^j}(dx_{k-1})</math>

In the above displayed formula, <math>\widehat{p}(dx_{k-1}\mid \xi_k^i,(y_0,\ldots,y_{k-1}))</math> stands for the conditional distribution <math>\widehat{p}(dx_{k-1}\mid x_k,(y_0,\ldots,y_{k-1}))</math> evaluated at <math>x_k=\xi_k^i</math>. In the same vein, <math>p(y_{k-1}\mid \xi_{k-1}^j)</math> and <math>p(\xi_k^i\mid \xi_{k-1}^j)</math> stand for the conditional densities <math>p(y_{k-1}\mid x_{k-1})</math> and <math>p(x_k\mid x_{k-1})</math> evaluated at <math>x_k=\xi_k^i</math> and <math>x_{k-1}=\xi_{k-1}^j</math>. These models allow one to reduce integration with respect to the densities p((x_0, …, x_n) | (y_0, …, y_{n−1})) to matrix operations with respect to the Markov transitions of the chain described above.[55] For instance, for any function f_k we have the particle estimates

<math>\begin{aligned}
\int p(d(x_0,\ldots,x_n)\mid (y_0,\ldots,y_{n-1}))\,f_k(x_k) &\approx_{N\to\infty} \int \widehat{p}_{backward}(d(x_0,\ldots,x_n)\mid (y_0,\ldots,y_{n-1}))\,f_k(x_k)\\
&= \int \widehat{p}(dx_n\mid (y_0,\ldots,y_{n-1}))\,\widehat{p}(dx_{n-1}\mid x_n,(y_0,\ldots,y_{n-1}))\cdots \widehat{p}(dx_k\mid x_{k+1},(y_0,\ldots,y_k))\,f_k(x_k)\\
&= \underbrace{\left[\tfrac{1}{N},\ldots,\tfrac{1}{N}\right]}_{N\text{ times}}\,\mathbb{M}_{n-1}\cdots\mathbb{M}_{k}\begin{bmatrix} f_k(\xi_k^1)\\ \vdots\\ f_k(\xi_k^N)\end{bmatrix}
\end{aligned}</math>

where

<math>\mathbb{M}_k = \left(\mathbb{M}_k(i,j)\right)_{1\le i,j\le N} : \qquad \mathbb{M}_k(i,j) = \frac{p(\xi_k^i\mid \xi_{k-1}^j)\,p(y_{k-1}\mid \xi_{k-1}^j)}{\sum_{l=1}^{N}p(\xi_k^i\mid \xi_{k-1}^l)\,p(y_{k-1}\mid \xi_{k-1}^l)}</math>

This also shows that if

<math>F(x_0,\ldots,x_n) := \frac{1}{n+1}\sum_{k=0}^{n} f_k(x_k)</math>

then

<math>\int F(x_0,\ldots,x_n)\,p(d(x_0,\ldots,x_n)\mid (y_0,\ldots,y_{n-1})) \approx_{N\to\infty} \int F(x_0,\ldots,x_n)\,\widehat{p}_{backward}(d(x_0,\ldots,x_n)\mid (y_0,\ldots,y_{n-1})) = \frac{1}{n+1}\sum_{k=0}^{n}\underbrace{\left[\tfrac{1}{N},\ldots,\tfrac{1}{N}\right]}_{N\text{ times}}\,\mathbb{M}_{n-1}\mathbb{M}_{n-2}\cdots\mathbb{M}_{k}\begin{bmatrix} f_k(\xi_k^1)\\ \vdots\\ f_k(\xi_k^N)\end{bmatrix}</math>
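
As an illustration of the backward chain just described, the sketch below builds the transition matrices 𝕄_k from stored predicted particles and draws one smoothed trajectory by running the chain backward from k = n to k = 0. It assumes the hypothetical scalar Gaussian model of the earlier sketches, so that the transition density p(x_k | x_{k−1}) and the likelihood p(y_{k−1} | x_{k−1}) can both be evaluated pointwise.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
g = lambda x: 0.5 * x + 25.0 * x / (1.0 + x**2)   # hypothetical transition mean (as in the earlier sketch)
h = lambda x: x**2 / 20.0                          # hypothetical observation function

def backward_matrix(xi_k, xi_km1, y_km1, q=1.0, r=1.0):
    """M_k(i, j) proportional to p(xi_k^i | xi_{k-1}^j) p(y_{k-1} | xi_{k-1}^j), each row normalized over j."""
    trans = norm.pdf(xi_k[:, None], loc=g(xi_km1)[None, :], scale=np.sqrt(q))  # p(xi_k^i | xi_{k-1}^j)
    like = norm.pdf(y_km1, loc=h(xi_km1), scale=np.sqrt(r))                    # p(y_{k-1} | xi_{k-1}^j)
    M = trans * like[None, :]
    return M / M.sum(axis=1, keepdims=True)

def sample_backward_path(particles, y_obs):
    """particles[k] holds the predicted particles xi_k^i stored during the forward pass."""
    n = len(particles) - 1
    idx = rng.integers(len(particles[n]))                   # uniform start over xi_n^i
    path = [particles[n][idx]]
    for k in range(n, 0, -1):
        M = backward_matrix(particles[k], particles[k - 1], y_obs[k - 1])
        idx = rng.choice(len(particles[k - 1]), p=M[idx])   # move to xi_{k-1}^j with probability M_k(idx, j)
        path.append(particles[k - 1][idx])
    return np.asarray(path[::-1])
</syntaxhighlight>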

Some convergence results

We shall assume that the filtering equation is stable, in the sense that it corrects any erroneous initial condition.

In this situation, the particle approximations of the likelihood functions are unbiased and the relative variance is controlled by

<math>E\left(\widehat{p}(y_0,\ldots,y_n)\right) = p(y_0,\ldots,y_n), \qquad E\left(\left[\frac{\widehat{p}(y_0,\ldots,y_n)}{p(y_0,\ldots,y_n)} - 1\right]^2\right) \le c\,\frac{n}{N},</math>

for some finite constant c. In addition, for any x ≥ 0:

<math>\mathbf{P}\left(\left|\frac{1}{n}\log\widehat{p}(y_0,\ldots,y_n) - \frac{1}{n}\log p(y_0,\ldots,y_n)\right| \le c_1\,\frac{x}{N} + c_2\,\sqrt{\frac{x}{N}}\right) > 1 - e^{-x}</math>

for some finite constants c1,c2 related to the asymptotic bias and variance of the particle estimate, and for some finite constant c.

The bias and the variance of the particle estimates based on the ancestral lines of the genealogical trees

<math>I_k^{path}(F) := \int F(x_0,\ldots,x_k)\,p(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_{k-1}) \approx_{N\to\infty} \widehat{I}_k^{path}(F) := \int F(x_0,\ldots,x_k)\,\widehat{p}(d(x_0,\ldots,x_k)\mid y_0,\ldots,y_{k-1}) = \frac{1}{N}\sum_{i=1}^{N}F\left(\xi_{0,k}^i,\ldots,\xi_{k,k}^i\right)</math>

are controlled by the non asymptotic uniform estimates

<math>\left|E\left(\widehat{I}_k^{path}(F)\right) - I_k^{path}(F)\right| \le c_1\,\frac{k}{N}, \qquad E\left(\left[\widehat{I}_k^{path}(F) - I_k^{path}(F)\right]^2\right) \le c_2\,\frac{k}{N},</math>

for any function F bounded by 1, and for some finite constants c_1, c_2. In addition, for any x ≥ 0:

<math>\mathbf{P}\left(\left|\widehat{I}_k^{path}(F) - I_k^{path}(F)\right| \le c_1\,\frac{kx}{N} + c_2\,\sqrt{\frac{kx}{N}} \;\text{ and }\; \sup_{0\le k\le n}\left|\widehat{I}_k^{path}(F) - I_k^{path}(F)\right| \le c\,\sqrt{\frac{xn\log(n)}{N}}\right) > 1 - e^{-x}</math>

for some finite constants c1,c2 related to the asymptotic bias and variance of the particle estimate, and for some finite constant c. The same type of bias and variance estimates hold for the backward particle smoothers. For additive functionals of the form

<math>F(x_0,\ldots,x_n) := \frac{1}{n+1}\sum_{0\le k\le n} f_k(x_k)</math>

with

<math>I_n^{path}(F) \approx_{N\to\infty} I_n^{\flat,path}(F) := \int F(x_0,\ldots,x_n)\,\widehat{p}_{backward}(d(x_0,\ldots,x_n)\mid (y_0,\ldots,y_{n-1}))</math>

with functions f_k bounded by 1, we have

<math>\sup_{n\ge 0}\left|E\left(I_n^{\flat,path}(F)\right) - I_n^{path}(F)\right| \le \frac{c_1}{N}</math>

and

<math>E\left(\left[I_n^{\flat,path}(F) - I_n^{path}(F)\right]^2\right) \le c_2\,\frac{n}{N} + \frac{c_3}{N^2}</math>

for some finite constants c_1, c_2, c_3. More refined estimates, including exponentially small probabilities of error, are developed in [10].

Sequential Importance Resampling (SIR)

Monte Carlo filter and bootstrap filter

Sequential importance resampling (SIR), Monte Carlo filtering (Kitagawa 1993[35]), the bootstrap filtering algorithm (Gordon et al. 1993[37]) and single distribution resampling (Bejuri W.M.Y.B et al. 2017[70]) are also commonly applied filtering algorithms, which approximate the filtering probability density p(x_k | y_0, …, y_k) by a weighted set of N samples

<math>\left\{\left(w_k^{(i)}, x_k^{(i)}\right) \;:\; i\in\{1,\ldots,N\}\right\}.</math>

The importance weights <math>w_k^{(i)}</math> are approximations to the relative posterior probabilities (or densities) of the samples such that

<math>\sum_{i=1}^{N} w_k^{(i)} = 1.</math>

Sequential importance sampling (SIS) is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function f can be approximated as a weighted average

<math>\int f(x_k)\,p(x_k\mid y_0,\ldots,y_k)\,dx_k \approx \sum_{i=1}^{N} w_k^{(i)}\,f\left(x_k^{(i)}\right).</math>

For a finite set of samples, the algorithm performance is dependent on the choice of the proposal distribution

<math>\pi(x_k\mid x_{0:k-1}, y_{0:k}).</math>

The "optimal" proposal distribution is given as the target distribution

<math>\pi(x_k\mid x_{0:k-1}, y_{0:k}) = p(x_k\mid x_{k-1}, y_k) = \frac{p(y_k\mid x_k)}{\int p(y_k\mid x_k)\,p(x_k\mid x_{k-1})\,dx_k}\,p(x_k\mid x_{k-1}).</math>

This particular choice of proposal transition was proposed by P. Del Moral in 1996 and 1998.[4] When it is difficult to sample transitions according to the distribution p(x_k | x_{k−1}, y_k), one natural strategy is to use the following particle approximation

<math>\frac{p(y_k\mid x_k)\,p(x_k\mid x_{k-1})\,dx_k}{\int p(y_k\mid x_k)\,p(x_k\mid x_{k-1})\,dx_k} \approx_{N\to\infty} \frac{p(y_k\mid x_k)\,\widehat{p}(dx_k\mid x_{k-1})}{\int p(y_k\mid x_k)\,\widehat{p}(dx_k\mid x_{k-1})} = \sum_{i=1}^{N}\frac{p\left(y_k\mid X_k^i(x_{k-1})\right)}{\sum_{j=1}^{N}p\left(y_k\mid X_k^j(x_{k-1})\right)}\,\delta_{X_k^i(x_{k-1})}(dx_k)</math>

with the empirical approximation

<math>\widehat{p}(dx_k\mid x_{k-1}) = \frac{1}{N}\sum_{i=1}^{N}\delta_{X_k^i(x_{k-1})}(dx_k) \approx_{N\to\infty} p(x_k\mid x_{k-1})\,dx_k</math>

associated with N (or any other large number of samples) independent random samples <math>X_k^i(x_{k-1}),\ i=1,\ldots,N</math>, drawn from the conditional distribution of the random state X_k given X_{k−1} = x_{k−1}. The consistency of the resulting particle filter of this approximation and other extensions are developed in [4]. In the above display, δ_a stands for the Dirac measure at a given state a.

However, the transition prior probability distribution is often used as importance function, since it is easier to draw particles (or samples) and perform subsequent importance weight calculations:

<math>\pi(x_k\mid x_{0:k-1}, y_{0:k}) = p(x_k\mid x_{k-1}).</math>

Sequential Importance Resampling (SIR) filters with the transition prior probability distribution as importance function are commonly known as the bootstrap filter and the condensation algorithm.

Resampling is used to avoid the problem of the degeneracy of the algorithm, that is, avoiding the situation that all but one of the importance weights are close to zero. The performance of the algorithm can also be affected by the proper choice of resampling method. The stratified sampling proposed by Kitagawa (1993[35]) is optimal in terms of variance.

A single step of sequential importance resampling is as follows:

1) For i = 1, …, N draw samples from the proposal distribution
<math>x_k^{(i)} \sim \pi(x_k\mid x_{0:k-1}^{(i)}, y_{0:k})</math>
2) For i = 1, …, N update the importance weights up to a normalizing constant:
<math>\widehat{w}_k^{(i)} = w_{k-1}^{(i)}\,\frac{p(y_k\mid x_k^{(i)})\,p(x_k^{(i)}\mid x_{k-1}^{(i)})}{\pi(x_k^{(i)}\mid x_{0:k-1}^{(i)}, y_{0:k})}.</math>
Note that when we use the transition prior probability distribution as the importance function,
<math>\pi(x_k^{(i)}\mid x_{0:k-1}^{(i)}, y_{0:k}) = p(x_k^{(i)}\mid x_{k-1}^{(i)}),</math>
this simplifies to the following:
<math>\widehat{w}_k^{(i)} = w_{k-1}^{(i)}\,p(y_k\mid x_k^{(i)}),</math>
3) For i = 1, …, N compute the normalized importance weights:
<math>w_k^{(i)} = \frac{\widehat{w}_k^{(i)}}{\sum_{j=1}^{N}\widehat{w}_k^{(j)}}</math>
4) Compute an estimate of the effective number of particles as
<math>\widehat{N}_\mathit{eff} = \frac{1}{\sum_{i=1}^{N}\left(w_k^{(i)}\right)^2}</math>
This criterion reflects the variance of the weights. Other criteria can be found in the article,[6] including their rigorous analysis and central limit theorems.
5) If the effective number of particles is less than a given threshold <math>\widehat{N}_\mathit{eff} < N_\mathit{thr}</math>, then perform resampling:
a) Draw N particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.
b) For i = 1, …, N set <math>w_k^{(i)} = 1/N.</math>
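
A minimal Python sketch of one SIR step is given below. It uses the transition prior as the proposal (so the weight update reduces to the likelihood, as noted in step 2 above) and multinomial resampling triggered by the effective-sample-size criterion of step 5; the scalar Gaussian model functions g and h are the hypothetical ones from the earlier sketches.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
g = lambda x: 0.5 * x + 25.0 * x / (1.0 + x**2)   # hypothetical transition mean (as in the earlier sketch)
h = lambda x: x**2 / 20.0                          # hypothetical observation function

def sir_step(particles, weights, y_k, N_thr, q=1.0, r=1.0):
    """One SIR step with the transition prior p(x_k | x_{k-1}) as the proposal distribution."""
    N = len(particles)
    # 1) draw from the proposal (here: the transition prior)
    particles = g(particles) + rng.normal(0.0, np.sqrt(q), size=N)
    # 2) weight update reduces to the likelihood: w_k proportional to w_{k-1} * p(y_k | x_k)
    weights = weights * norm.pdf(y_k, loc=h(particles), scale=np.sqrt(r))
    # 3) normalize the importance weights
    weights /= weights.sum()
    # 4) effective number of particles
    n_eff = 1.0 / np.sum(weights**2)
    # 5) multinomial resampling when the weights degenerate
    if n_eff < N_thr:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights
</syntaxhighlight>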

The term "Sampling Importance Resampling" is also sometimes used when referring to SIR filters, but the term Importance Resampling is more accurate because the word "resampling" implies that the initial sampling has already been done.[71]

Sequential importance sampling (SIS)

  • Is the same as sequential importance resampling, but without the resampling stage.

"Direct version" algorithm

The "direct version" algorithm is rather simple (compared to other particle filtering algorithms) and it uses composition and rejection. To generate a single sample x at k from <math>p_{x_k\mid y_{1:k}}(x\mid y_{1:k})</math>:

1) Set n = 0 (this will count the number of particles generated so far)
2) Uniformly choose an index i from the range {1, …, N}
3) Generate a test <math>\hat{x}</math> from the distribution p(x_k | x_{k−1}) with <math>x_{k-1}=x_{k-1\mid k-1}^{(i)}</math>
4) Generate the probability of <math>\hat{y}</math> using <math>\hat{x}</math> from p(y_k | x_k) with <math>x_k=\hat{x}</math>, where y_k is the measured value
5) Generate another uniform u from [0, m_k], where <math>m_k = \sup_{x_k} p(y_k\mid x_k)</math>
6) Compare u and <math>p(\hat{y})</math>
6a) If u is larger then repeat from step 2
6b) If u is smaller then save <math>\hat{x}</math> as <math>x_{k\mid k}^{(i)}</math> and increment n
7) If n == N then quit

The goal is to generate N "particles" at k using only the particles from k − 1. This requires that a Markov equation can be written (and computed) to generate an x_k based only upon x_{k−1}. This algorithm uses the composition of the N particles from k − 1 to generate a particle at k and repeats (steps 2–6) until N particles are generated at k.

This can be more easily visualized if x is viewed as a two-dimensional array. One dimension is k and the other dimension is the particle number. For example, x(k, i) would be the i-th particle at k and can also be written <math>x_k^{(i)}</math> (as done above in the algorithm). Step 3 generates a potential x_k based on a randomly chosen particle <math>x_{k-1}^{(i)}</math> at time k − 1 and rejects or accepts it in step 6. In other words, the x_k values are generated using the previously generated x_{k−1}.
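
The composition-rejection loop above can be sketched as follows. The example assumes the hypothetical scalar Gaussian model of the earlier sketches, for which the bound m_k = sup_{x_k} p(y_k | x_k) is simply the peak of the Gaussian observation density; for other models this bound has to be worked out separately.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
g = lambda x: 0.5 * x + 25.0 * x / (1.0 + x**2)   # hypothetical transition mean (as in the earlier sketch)
h = lambda x: x**2 / 20.0                          # hypothetical observation function

def direct_version_step(prev_particles, y_k, q=1.0, r=1.0):
    """Generate N new particles at time k by composition and rejection (steps 1-7 above)."""
    N = len(prev_particles)
    m_k = norm.pdf(0.0, scale=np.sqrt(r))          # sup_x p(y_k | x) for Gaussian observation noise
    new_particles = np.empty(N)
    n = 0
    while n < N:
        i = rng.integers(N)                                           # step 2: pick a particle index uniformly
        x_test = g(prev_particles[i]) + rng.normal(0.0, np.sqrt(q))   # step 3: propose from p(x_k | x_{k-1})
        p_y = norm.pdf(y_k, loc=h(x_test), scale=np.sqrt(r))          # step 4: likelihood of the measurement
        u = rng.uniform(0.0, m_k)                                     # step 5: uniform on [0, m_k]
        if u <= p_y:                                                  # step 6: accept, otherwise repeat
            new_particles[n] = x_test
            n += 1
    return new_particles
</syntaxhighlight>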

Applications

Particle filters and Feynman-Kac particle methodologies find application in several contexts, as an effective means of tackling noisy observations or strong nonlinearities, across the fields listed in the introduction (signal and image processing, robotics, bioinformatics, economics and mathematical finance, and others).

Other particle filters

See also

References


  1. Template:Cite journal
  2. 2.00 2.01 2.02 2.03 2.04 2.05 2.06 2.07 2.08 2.09 Template:Cite journal
  3. Template:Cite journal
  4. 4.0 4.1 4.2 4.3 4.4 4.5 4.6 Template:Cite journal
  5. 5.00 5.01 5.02 5.03 5.04 5.05 5.06 5.07 5.08 5.09 5.10 5.11 Template:Cite book
  6. 6.0 6.1 6.2 Template:Cite journal
  7. 7.0 7.1 7.2 Template:Cite book
  8. 8.0 8.1 8.2 8.3 8.4 8.5 8.6 8.7 Template:Cite book
  9. 9.0 9.1 Template:Cite journal
  10. 10.00 10.01 10.02 10.03 10.04 10.05 10.06 10.07 10.08 10.09 10.10 Template:Cite book
  11. Template:Cite journal
  12. 12.0 12.1 Template:Cite journal
  13. 13.0 13.1 13.2 Template:Cite journal
  14. 14.0 14.1 Template:Cite journal
  15. Template:Cite journal
  16. Template:Cite journal
  17. Template:Cite journal
  18. Template:Cite journal
  19. 19.0 19.1 19.2 Template:Cite journal
  20. Template:Cite book
  21. Template:Cite book
  22. Template:Cite journal
  23. Template:Cite journal
  24. Template:Cite journal
  25. Template:Cite journal
  26. Template:Cite journal
  27. Template:Cite web
  28. Template:Cite journal
  29. Template:Cite book
  30. Template:Cite book
  31. Template:Cite journal
  32. Template:Cite journal
  33. Template:Cite journal
  34. Template:Cite journal
  35. 35.0 35.1 35.2 Template:Cite journal
  36. Template:Cite journal
  37. 37.0 37.1 Template:Cite journal
  38. Template:Cite journal
  39. P. Del Moral, G. Rigal, and G. Salut. Estimation and nonlinear optimal control : An unified framework for particle solutions
    LAAS-CNRS, Toulouse, Research Report no. 91137, DRET-DIGILOG- LAAS/CNRS contract, April (1991).
  40. P. Del Moral, G. Rigal, and G. Salut. Nonlinear and non-Gaussian particle filters applied to inertial platform repositioning.
    LAAS-CNRS, Toulouse, Research Report no. 92207, STCAN/DIGILOG-LAAS/CNRS Convention STCAN no. A.91.77.013, (94p.) September (1991).
  41. P. Del Moral, G. Rigal, and G. Salut. Estimation and nonlinear optimal control : Particle resolution in filtering and estimation. Experimental results.
    Convention DRET no. 89.34.553.00.470.75.01, Research report no.2 (54p.), January (1992).
  42. P. Del Moral, G. Rigal, and G. Salut. Estimation and nonlinear optimal control : Particle resolution in filtering and estimation. Theoretical results
    Convention DRET no. 89.34.553.00.470.75.01, Research report no.3 (123p.), October (1992).
  43. P. Del Moral, J.-Ch. Noyer, G. Rigal, and G. Salut. Particle filters in radar signal processing : detection, estimation and air targets recognition.
    LAAS-CNRS, Toulouse, Research report no. 92495, December (1992).
  44. P. Del Moral, G. Rigal, and G. Salut. Estimation and nonlinear optimal control : Particle resolution in filtering and estimation.
    Studies on: Filtering, optimal control, and maximum likelihood estimation. Convention DRET no. 89.34.553.00.470.75.01. Research report no.4 (210p.), January (1993).
  45. 45.0 45.1 Template:Cite journal
  46. Template:Cite journal
  47. Template:Cite journal
  48. 48.0 48.1 48.2 Template:Cite journal
  49. 49.0 49.1 49.2 49.3 Template:Cite journal
  50. 50.0 50.1 50.2 50.3 Template:Cite journal
  51. 51.0 51.1 Template:Cite journal
  52. 52.0 52.1 Template:Cite journal
  53. 53.0 53.1 Template:Cite conference
  54. 54.0 54.1 Template:Cite journal
  55. 55.0 55.1 Template:Cite journal
  56. Template:Cite journal
  57. Template:Cite arXiv
  58. Template:Cite journal
  59. Template:Cite arXiv
  60. Template:Cite journal
  61. Template:Cite journal
  62. Template:Cite journal
  63. Template:Cite book
  64. Template:Cite journal
  65. Template:Cite journal
  66. Template:Cite journal
  67. Template:Cite journal
  68. Template:Cite journal
  69. Template:Cite book
  70. Template:Cite journal
  71. Template:Cite book
  72. Template:Cite journal
  73. Template:Cite journal
  74. Template:Cite journal
  75. Template:Cite journal
  76. Template:Cite journal
  77. Bonate P: Pharmacokinetic-Pharmacodynamic Modeling and Simulation. Berlin: Springer; 2011.
  78. Dieter Fox, Wolfram Burgard, Frank Dellaert, and Sebastian Thrun, "Monte Carlo Localization: Efficient Position Estimation for Mobile Robots." Proc. of the Sixteenth National Conference on Artificial Intelligence John Wiley & Sons Ltd, 1999.
  79. Sebastian Thrun, Wolfram Burgard, Dieter Fox. Probabilistic Robotics MIT Press, 2005. Ch. 8.3 Template:ISBN.
  80. Sebastian Thrun, Dieter Fox, Wolfram Burgard, Frank Dellaert. "Robust monte carlo localization for mobile robots." Artificial Intelligence 128.1 (2001): 99–141.
  81. Template:Cite journal
  82. Template:Cite journal
  83. Template:Cite arXiv
  84. Template:Cite journal
  85. Template:Cite journal
  86. Template:Cite journal
  87. Template:Cite conference
  88. Template:Cite journal