Thompson sampling

[Figure: Concrete example of Thompson sampling applied to simulated evaluation of treatment efficacy]

Thompson sampling,[1][2][3] named after William R. Thompson, is a heuristic for choosing actions that address the exploration–exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.

Description

Consider a set of contexts $\mathcal{X}$, a set of actions $\mathcal{A}$, and rewards in $\mathbb{R}$. The aim of the player is to play actions under the various contexts so as to maximize the cumulative rewards. Specifically, in each round, the player obtains a context $x \in \mathcal{X}$, plays an action $a \in \mathcal{A}$ and receives a reward $r \in \mathbb{R}$ following a distribution that depends on the context and the issued action.

The elements of Thompson sampling are as follows:[3]

  1. a likelihood function $P(r \mid \theta, a, x)$;
  2. a set $\Theta$ of parameters $\theta$ of the distribution of $r$;
  3. a prior distribution $P(\theta)$ on these parameters;
  4. past observation triplets $\mathcal{D} = \{(x; a; r)\}$;
  5. a posterior distribution $P(\theta \mid \mathcal{D}) \propto P(\mathcal{D} \mid \theta)\,P(\theta)$, where $P(\mathcal{D} \mid \theta)$ is the likelihood function.
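
For concreteness, these elements can be written down for a contextless Bernoulli bandit with a conjugate Beta prior on each arm's success probability. The sketch below is illustrative only; the class and method names are invented for this example and do not come from the cited tutorial.

```python
import numpy as np
from dataclasses import dataclass, field


@dataclass
class BetaBernoulliBelief:
    """Bayesian elements for a contextless Bernoulli bandit.

    Likelihood:  P(r | theta, a) = Bernoulli(theta_a), with r in {0, 1}.
    Parameters:  theta = (theta_1, ..., theta_K), one mean per action.
    Prior:       theta_a ~ Beta(1, 1) (uniform) for every action a.
    Posterior:   theta_a | D ~ Beta(1 + successes_a, 1 + failures_a),
                 the conjugate update given the observed (a, r) pairs in D.
    """
    n_actions: int
    successes: np.ndarray = field(init=False)
    failures: np.ndarray = field(init=False)

    def __post_init__(self) -> None:
        self.successes = np.zeros(self.n_actions)
        self.failures = np.zeros(self.n_actions)

    def update(self, action: int, reward: int) -> None:
        """Fold one observed pair (a, r) into the posterior."""
        if reward == 1:
            self.successes[action] += 1
        else:
            self.failures[action] += 1

    def sample_theta(self, rng: np.random.Generator) -> np.ndarray:
        """Draw one parameter vector theta from the posterior P(theta | D)."""
        return rng.beta(1.0 + self.successes, 1.0 + self.failures)
```

Because the Beta prior is conjugate to the Bernoulli likelihood, the posterior stays in the Beta family and the update reduces to incrementing success and failure counts.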

Thompson sampling consists of playing the action $a^\ast \in \mathcal{A}$ according to the probability that it maximizes the expected reward; action $a^\ast$ is chosen with probability[3]

$\int \mathbb{I}\left[\mathbb{E}(r \mid a^\ast, x, \theta) = \max_{a'} \mathbb{E}(r \mid a', x, \theta)\right] P(\theta \mid \mathcal{D})\, d\theta,$

where ๐•€ is the indicator function.

In practice, the rule is implemented by sampling. In each round, parameters $\theta$ are sampled from the posterior $P(\theta \mid \mathcal{D})$,[3] and an action $a$ is chosen that maximizes $\mathbb{E}[r \mid \theta, a, x]$, i.e. the expected reward given the sampled parameters, the action, and the current context. Conceptually, this means that the player instantiates their beliefs randomly in each round according to the posterior distribution, and then acts optimally according to them. In most practical applications, it is computationally onerous to maintain and sample from a posterior distribution over models. As such, Thompson sampling is often used in conjunction with approximate sampling techniques.[3]
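
In the contextless Bernoulli-bandit case, this per-round rule reduces to drawing one parameter vector from the posterior and pulling the arm whose sampled mean is largest. Below is a minimal sketch under that assumption, with a conjugate Beta(1, 1) prior per arm; the arm means and the function name are illustrative, not from the source.

```python
import numpy as np


def run_thompson_sampling(true_means, n_rounds=10_000, seed=0):
    """Thompson sampling on a Bernoulli bandit with the given (unknown) arm means."""
    rng = np.random.default_rng(seed)
    n_arms = len(true_means)
    successes = np.zeros(n_arms)  # per-arm count of reward 1
    failures = np.zeros(n_arms)   # per-arm count of reward 0
    total_reward = 0

    for _ in range(n_rounds):
        # Sample theta ~ P(theta | D): one draw from each arm's Beta posterior.
        theta = rng.beta(1.0 + successes, 1.0 + failures)
        # Act greedily on the sampled beliefs: for a Bernoulli reward,
        # E[r | theta, a] = theta_a, so the argmax of theta is played.
        arm = int(np.argmax(theta))
        # The environment returns a reward drawn from the chosen arm.
        reward = rng.binomial(1, true_means[arm])
        # Conjugate posterior update with the new observation (a, r).
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward

    return total_reward


# Example: three arms; play should concentrate on the 0.7 arm over time.
print(run_thompson_sampling([0.3, 0.5, 0.7]))
```

As an arm is played, its Beta posterior concentrates around its empirical mean, so clearly suboptimal arms are sampled above the best arm less and less often; this is how the randomized rule phases out exploration without an explicit schedule.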

History

Thompson sampling was originally described by Thompson in 1933.[1] It was subsequently rediscovered numerous times independently in the context of multi-armed bandit problems.[4][5][6][7][8][9] A first proof of convergence for the bandit case was shown in 1997.[4] The first application to Markov decision processes was in 2000.[6] A related approach (see Bayesian control rule) was published in 2010.[5] In 2010 it was also shown that Thompson sampling is instantaneously self-correcting.[9] Asymptotic convergence results for contextual bandits were published in 2011.[7] Thompson sampling has been widely used in many online learning problems, including A/B testing in website design and online advertising,[10] and accelerated learning in decentralized decision making.[11] A Double Thompson Sampling (D-TS) algorithm[12] has been proposed for dueling bandits, a variant of the traditional multi-armed bandit problem in which feedback comes in the form of pairwise comparisons.

Relationship to other approaches

Probability matching


Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time, and negative examples are observed 40% of the time, the observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances, and a class label of "negative" on 40% of instances.
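
As a toy illustration of the 60/40 example above (the function name and variables are hypothetical), a probability-matching predictor simply draws each label with probability equal to its base rate:

```python
import numpy as np


def probability_matching_predictions(base_rates, n_predictions, seed=0):
    """Predict labels with probabilities equal to the class base rates."""
    rng = np.random.default_rng(seed)
    classes = list(base_rates)
    probs = [base_rates[c] for c in classes]
    return rng.choice(classes, size=n_predictions, p=probs)


# 60% positive / 40% negative base rates, as in the example above.
preds = probability_matching_predictions({"positive": 0.6, "negative": 0.4}, 1000)
print({c: int((preds == c).sum()) for c in ("positive", "negative")})
# Roughly 600 "positive" and 400 "negative" predictions, in contrast to a
# majority-class rule, which would label every instance "positive".
```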

Bayesian control rule

A generalization of Thompson sampling to arbitrary dynamical environments and causal structures, known as Bayesian control rule, has been shown to be the optimal solution to the adaptive coding problem with actions and observations.[5] In this formulation, an agent is conceptualized as a mixture over a set of behaviours. As the agent interacts with its environment, it learns the causal properties and adopts the behaviour that minimizes the relative entropy to the behaviour with the best prediction of the environment's behaviour. If these behaviours have been chosen according to the maximum expected utility principle, then the asymptotic behaviour of the Bayesian control rule matches the asymptotic behaviour of the perfectly rational agent.

The setup is as follows. Let $a_1, a_2, \ldots, a_T$ be the actions issued by an agent up to time $T$, and let $o_1, o_2, \ldots, o_T$ be the observations gathered by the agent up to time $T$. Then, the agent issues the action $a_{T+1}$ with probability:[5]

$P(a_{T+1} \mid \hat{a}_{1:T}, o_{1:T}),$

where the "hat"-notation a^t denotes the fact that at is a causal intervention (see Causality), and not an ordinary observation. If the agent holds beliefs θΘ over its behaviors, then the Bayesian control rule becomes

$P(a_{T+1} \mid \hat{a}_{1:T}, o_{1:T}) = \int_{\Theta} P(a_{T+1} \mid \theta, \hat{a}_{1:T}, o_{1:T})\, P(\theta \mid \hat{a}_{1:T}, o_{1:T})\, d\theta,$

where $P(\theta \mid \hat{a}_{1:T}, o_{1:T})$ is the posterior distribution over the parameter $\theta$ given actions $a_{1:T}$ and observations $o_{1:T}$.

In practice, the Bayesian control rule amounts to sampling, at each time step, a parameter $\theta$ from the posterior distribution $P(\theta \mid \hat{a}_{1:T}, o_{1:T})$, where the posterior is computed using Bayes' rule by considering only the (causal) likelihoods of the observations $o_1, o_2, \ldots, o_T$ and ignoring the (causal) likelihoods of the actions $a_1, a_2, \ldots, a_T$, and then sampling the action $a_{T+1}$ from the action distribution $P(a_{T+1} \mid \theta, \hat{a}_{1:T}, o_{1:T})$.
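
To make the two sampling steps concrete, the following sketch uses a finite set of candidate models, each paired with the policy adopted for it, and Bernoulli observations whose likelihood depends only on the current action and model. This is an illustrative construction under those simplifying assumptions, not the formulation of the cited paper; all names and numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete parameter set Theta: each theta bundles an environment
# model (per-action observation probabilities) with the policy adopted for it.
models = [
    {"obs_prob": np.array([0.2, 0.8]), "policy": np.array([0.1, 0.9])},
    {"obs_prob": np.array([0.8, 0.2]), "policy": np.array([0.9, 0.1])},
]
log_weights = np.zeros(len(models))      # uniform prior over Theta (log scale)
true_obs_prob = np.array([0.25, 0.75])   # the unknown environment

for _ in range(1000):
    # Step 1: sample theta from the posterior P(theta | a_hat_{1:T}, o_{1:T}).
    posterior = np.exp(log_weights - log_weights.max())
    posterior /= posterior.sum()
    theta = models[rng.choice(len(models), p=posterior)]

    # Step 2: sample the next action from theta's action distribution.
    action = rng.choice(len(theta["policy"]), p=theta["policy"])

    # The environment responds to the (intervened) action.
    obs = rng.binomial(1, true_obs_prob[action])

    # Posterior update via Bayes' rule: only the observation likelihood
    # P(o | theta, a) is used; the likelihood of the action itself is ignored,
    # because the action is treated as a causal intervention.
    for i, model in enumerate(models):
        p = model["obs_prob"][action]
        log_weights[i] += np.log(p if obs == 1 else 1.0 - p)
```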

Upper-Confidence-Bound (UCB) algorithms

Thompson sampling and upper-confidence bound algorithms share a fundamental property that underlies many of their theoretical guarantees. Roughly speaking, both algorithms allocate exploratory effort to actions that might be optimal and are in this sense "optimistic". Leveraging this property, one can translate regret bounds established for UCB algorithms to Bayesian regret bounds for Thompson sampling[13] or unify regret analysis across both these algorithms and many classes of problems.[14]
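
For intuition about this shared optimism, the sketch below (illustrative counts and names only) contrasts how the two rules score the arms of a Bernoulli bandit given the same statistics: UCB1 adds an explicit confidence bonus to each empirical mean, whereas Thompson sampling obtains optimism from the randomness of a posterior draw, which occasionally ranks a rarely played arm on top.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-arm statistics after some play: observed successes and pull counts.
successes = np.array([30.0, 5.0, 1.0])
pulls = np.array([50.0, 10.0, 2.0])
t = pulls.sum()

# UCB1 index: empirical mean plus an explicit confidence bonus; rarely played
# arms receive a large bonus and therefore look optimistically good.
ucb_index = successes / pulls + np.sqrt(2.0 * np.log(t) / pulls)

# Thompson sampling score: one draw per arm from its Beta posterior; rarely
# played arms have wide posteriors, so their draws are sometimes very high.
ts_score = rng.beta(1.0 + successes, 1.0 + pulls - successes)

print("UCB1 plays arm", int(np.argmax(ucb_index)))
print("Thompson sampling plays arm", int(np.argmax(ts_score)))
```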

References


  1. ↑ 1.0 1.1 Cite error: Invalid <ref> tag; no text was provided for refs named ref1
  2. ↑ Cite error: Invalid <ref> tag; no text was provided for refs named ref1b
  3. ↑ 3.0 3.1 3.2 3.3 3.4 Cite error: Invalid <ref> tag; no text was provided for refs named FnTTutorial
  4. ↑ 4.0 4.1 Cite error: Invalid <ref> tag; no text was provided for refs named ref2
  5. ↑ 5.0 5.1 5.2 5.3 Cite error: Invalid <ref> tag; no text was provided for refs named ref5
  6. ↑ 6.0 6.1 Cite error: Invalid <ref> tag; no text was provided for refs named ref6
  7. ↑ 7.0 7.1 Cite error: Invalid <ref> tag; no text was provided for refs named ref4
  8. ↑ Cite error: Invalid <ref> tag; no text was provided for refs named ref3
  9. ↑ 9.0 9.1 Cite error: Invalid <ref> tag; no text was provided for refs named ref7
  10. ↑ Cite error: Invalid <ref> tag; no text was provided for refs named ref9
  11. ↑ Cite error: Invalid <ref> tag; no text was provided for refs named ref8
  12. ↑ Cite error: Invalid <ref> tag; no text was provided for refs named Wu2016DTS
  13. ↑ Cite error: Invalid <ref> tag; no text was provided for refs named RussoVanRoy2014
  14. ↑ Cite error: Invalid <ref> tag; no text was provided for refs named RussoVanRoy2013