Delay reduction hypothesis


In operant conditioning, the delay reduction hypothesis (DRH; also known as delay reduction theory) is a quantitative description of how choice is allocated among concurrently available chained schedules of reinforcement. The hypothesis states that the greater the improvement in temporal proximity to reinforcement (delay reduction) correlated with the onset of a stimulus, the more effectively that stimulus will function as a conditional reinforcer.[1]

The hypothesis was originally formulated to describe choice behaviour among concurrently available chained schedules of reinforcement;[2] however, the basic principle of delay reduction (T − t_x) as the basis for determining a stimulus's conditionally reinforcing function can be applied more generally to other research areas.[1][3][4]

A variety of empirical data are consistent with the DRH, making it one of the most substantiated accounts of conditional reinforcement to date.[5]

Application to Concurrent Chain Schedules

Given two concurrently available chained schedules of reinforcement, R_a and R_b represent the numbers of responses made during the initial-link stimuli of alternatives A and B, respectively.

t_a and t_b represent the average durations of each alternative's terminal link. T is the average time to terminal reinforcement from the onset of either initial-link stimulus.

\[
\frac{R_a}{R_a + R_b} =
\begin{cases}
\dfrac{T - t_a}{(T - t_a) + (T - t_b)}, & \text{when } t_a < T,\ t_b < T \\
1, & \text{when } t_a < T,\ t_b > T \\
0, & \text{when } t_a > T,\ t_b < T
\end{cases}
\]

The expression T − t_x represents the delay reduction on a given alternative.
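The original model can be sketched as a short function; a minimal illustration, where the function name and the example parameter values are assumptions for demonstration, not from the source:

```python
def drh_choice_proportion(t_a, t_b, T):
    """Predicted proportion of responding allocated to alternative A
    under Fantino's (1969) delay reduction hypothesis.

    t_a, t_b: average terminal-link durations for alternatives A and B.
    T: average time to terminal reinforcement from initial-link onset.
    """
    if t_a < T and t_b < T:
        # Responding is allocated in proportion to each alternative's
        # delay reduction, T - t_x.
        return (T - t_a) / ((T - t_a) + (T - t_b))
    if t_a < T and t_b > T:
        return 1.0  # exclusive preference for A
    if t_a > T and t_b < T:
        return 0.0  # exclusive preference for B
    raise ValueError("model undefined when neither alternative reduces delay")

# Illustrative values: T = 60 s, terminal links of 10 s and 30 s.
# (60 - 10) / ((60 - 10) + (60 - 30)) = 50 / 80 = 0.625
print(drh_choice_proportion(10, 30, 60))  # 0.625
```

Note that the shorter terminal link (the larger delay reduction) draws the larger share of responding, as the hypothesis predicts.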

Extensions to the Original Model

Squires and Fantino (1971)

The original formulation by Fantino predicted that choices with equivalent terminal-link durations would produce equal allocation of responding (e.g., 0.5 across two choices) regardless of the duration of the initial links.[2] Squires and Fantino (1971) proposed including the rate of terminal reinforcement on each choice alternative.[6]

\[
\frac{R_a}{R_a + R_b} =
\begin{cases}
\dfrac{r_a(T - t_a)}{r_a(T - t_a) + r_b(T - t_b)}, & \text{when } t_a < T,\ t_b < T \\
1, & \text{when } t_a < T,\ t_b > T \\
0, & \text{when } t_a > T,\ t_b < T
\end{cases}
\]

The rate of terminal reinforcement is r_x = n_x / (i_x + n_x t_x), where i_x is the average duration of an initial link and n_x is the number of terminal reinforcements obtained during a single entry to a terminal link. A critical prediction of this formulation is that matching is obtained when the terminal links are of equal duration.
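The Squires and Fantino (1971) extension can be sketched the same way; a minimal illustration under the equations above, with function names and example values assumed for demonstration:

```python
def terminal_reinforcement_rate(i_x, t_x, n_x=1):
    """r_x = n_x / (i_x + n_x * t_x): terminal reinforcements per unit
    time on alternative x, given average initial-link duration i_x,
    average terminal-link duration t_x, and n_x reinforcements per
    terminal-link entry."""
    return n_x / (i_x + n_x * t_x)

def squires_fantino_proportion(i_a, t_a, i_b, t_b, T, n_a=1, n_b=1):
    """Predicted choice proportion for alternative A under the
    Squires & Fantino (1971) extension of the DRH."""
    r_a = terminal_reinforcement_rate(i_a, t_a, n_a)
    r_b = terminal_reinforcement_rate(i_b, t_b, n_b)
    if t_a < T and t_b < T:
        # Delay reduction on each alternative is weighted by its
        # rate of terminal reinforcement.
        num = r_a * (T - t_a)
        return num / (num + r_b * (T - t_b))
    if t_a < T and t_b > T:
        return 1.0
    if t_a > T and t_b < T:
        return 0.0
    raise ValueError("model undefined when neither alternative reduces delay")

# Illustrative values: equal 20 s terminal links, unequal initial links
# (10 s vs 30 s), T = 50 s. The predicted proportion equals matching to
# the reinforcement rates: r_a/(r_a + r_b) = (1/30)/((1/30)+(1/50)) = 0.625.
print(squires_fantino_proportion(10, 20, 30, 20, 50))
```

With equal terminal links, the delay-reduction terms cancel and the prediction reduces to matching on r_a and r_b, which is the correction the extension was designed to make.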

References


  1. Fantino, E. (1977). Conditioned reinforcement: Choice and information. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior (pp. 313–339). Prentice-Hall.
  2. Fantino, E. (1969). Choice and rate of reinforcement. Journal of the Experimental Analysis of Behavior, 12(5), 723–730. https://doi.org/10.1901/jeab.1969.12-723
  3. Fantino, E. (2012). Optimal and non-optimal behavior across species. Comparative Cognition & Behavior Reviews, 7, 44–54. https://doi.org/10.3819/ccbr.2012.70003
  4. Shahan, T. A., & Cunningham, P. (2015). Conditioned reinforcement and information theory reconsidered. Journal of the Experimental Analysis of Behavior, 103(2), 405–418. https://doi.org/10.1002/jeab.142
  5. Williams, B. A. (1994). Conditioned reinforcement: Neglected or outmoded explanatory construct? Psychonomic Bulletin & Review, 1(4), 457–475. https://doi.org/10.3758/BF03210950
  6. Squires, N., & Fantino, E. (1971). A model for choice in simple concurrent and concurrent-chains schedules. Journal of the Experimental Analysis of Behavior, 15(1), 27–38. https://doi.org/10.1901/jeab.1971.15-27