State transition algorithm


In global optimization, the state transition algorithm (STA) is an iterative method that generates a sequence of improving approximate solutions for an optimization problem. Owing to its intrinsic properties, STA is able to find a globally optimal solution in probability.

The state transition algorithm[1][2][3][4][5] was first proposed by Zhou et al. It is a stochastic global optimization method that aims to find a global or approximately optimal solution in a reasonable amount of time. In STA, a solution to an optimization problem is regarded as a state, and an update of a solution is regarded as a state transition. Using the state-space representation,[6] STA describes the updating of solutions in a unified framework, and the execution operators that update solutions are expressed as state transition matrices, which makes the method easy to understand and flexible to implement:

$$\mathbf{x}_{k+1} = A_k \mathbf{x}_k + B_k \mathbf{u}_k$$
$$\mathbf{y}_{k+1} = f(\mathbf{x}_{k+1})$$

where:

$\mathbf{x}_k$ stands for a current state, corresponding to a solution to the optimization problem;
$\mathbf{u}_k$ is a function of $\mathbf{x}_k$ and historical states;
$\mathbf{y}_k$ is the fitness value at $\mathbf{x}_k$;
$A_k, B_k$ are state transformation matrices, which can be considered as execution operators;
$f(\cdot)$ is the objective function or evaluation function.
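
The following minimal Python sketch illustrates this unified update; the function and variable names are illustrative rather than taken from the cited references, and $A_k$, $B_k$, $\mathbf{u}_k$ are assumed to be supplied by a concrete state transformation operator.

```python
import numpy as np

# Minimal sketch of one state-transition step (illustrative names).
# A_k, B_k are n-by-n execution operators; u_k is built from the current
# and historical states; f is the objective (evaluation) function.
def state_transition_step(x_k, u_k, A_k, B_k, f):
    x_next = A_k @ x_k + B_k @ u_k   # x_{k+1} = A_k x_k + B_k u_k
    y_next = f(x_next)               # y_{k+1} = f(x_{k+1})
    return x_next, y_next
```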

As a stochastic global optimization method, STA has the following properties:

  • globality, STA is able to search the whole solution space;
  • optimality, STA can guarantee that an optimal solution is found;
  • convergence, the sequence generated by STA is convergent;
  • rapidity, STA has inherent mechanisms that reduce computational complexity;
  • controllability, STA can control the search space flexibly.

Continuous state transition algorithm (CSTA)

In continuous STA, $\mathbf{x}_k \in \mathbb{R}^n$ is a continuous variable, and four special state transformation operators are designed to generate new candidate solutions.

State transformation operators

(1) Rotation transformation (RT)

๐ฑk+1=๐ฑk+α1n๐ฑk2Rr๐ฑk

where $\alpha$ is a positive constant, called the rotation factor, $R_r \in \mathbb{R}^{n \times n}$ is a random matrix with its entries being uniformly distributed random variables defined on the interval $[-1, 1]$, and $\|\cdot\|_2$ is the 2-norm of a vector. The rotation transformation has the functionality to search in a hypersphere with maximal radius $\alpha$, that is to say, $\|\mathbf{x}_{k+1} - \mathbf{x}_k\|_2 \le \alpha$.
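
A minimal Python sketch of the rotation transformation, assuming NumPy; the helper name and random-generator handling are implementation choices, not part of the source.

```python
import numpy as np

def rotation_transform(x, alpha, rng=None):
    """Rotation transformation: random search inside a hypersphere of radius alpha."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    R_r = rng.uniform(-1.0, 1.0, size=(n, n))   # entries uniform on [-1, 1]
    # Assumes x is not the zero vector, so the norm below is positive.
    return x + alpha * (R_r @ x) / (n * np.linalg.norm(x, 2))
```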

(2) Translation transformation (TT)

๐ฑk+1=๐ฑk+βRt๐ฑk๐ฑk1๐ฑk๐ฑk12

where $\beta$ is a positive constant, called the translation factor, and $R_t \in \mathbb{R}$ is a uniformly distributed random variable defined on the interval $[0, 1]$. The translation transformation has the functionality to search along the line from $\mathbf{x}_{k-1}$ to $\mathbf{x}_k$, starting at $\mathbf{x}_k$, with maximal length $\beta$.
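
A matching sketch of the translation transformation under the same assumptions (NumPy, illustrative names); it takes the previous state x_prev in addition to the current one.

```python
import numpy as np

def translation_transform(x, x_prev, beta, rng=None):
    """Translation transformation: random step of length at most beta
    along the line from x_prev to x, starting at x."""
    rng = np.random.default_rng() if rng is None else rng
    R_t = rng.uniform(0.0, 1.0)      # scalar uniform on [0, 1]
    d = x - x_prev                   # search direction; assumes x != x_prev
    return x + beta * R_t * d / np.linalg.norm(d, 2)
```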

(3) Expansion transformation (ET)

๐ฑk+1=๐ฑk+γRe๐ฑk

where $\gamma$ is a positive constant, called the expansion factor, and $R_e \in \mathbb{R}^{n \times n}$ is a random diagonal matrix with its entries obeying the Gaussian distribution. The expansion transformation has the functionality to expand the entries of $\mathbf{x}_k$ to the range $(-\infty, +\infty)$, searching the whole space.
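
A corresponding sketch of the expansion transformation (same assumptions as above):

```python
import numpy as np

def expansion_transform(x, gamma, rng=None):
    """Expansion transformation: Gaussian diagonal scaling of every entry,
    which can push entries anywhere in (-inf, +inf)."""
    rng = np.random.default_rng() if rng is None else rng
    R_e = np.diag(rng.standard_normal(x.size))   # random diagonal, N(0, 1) entries
    return x + gamma * (R_e @ x)
```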

(4) Axesion transformation (AT)

๐ฑk+1=๐ฑk+δRa๐ฑk

where $\delta$ is a positive constant, called the axesion factor, and $R_a \in \mathbb{R}^{n \times n}$ is a random diagonal matrix with its entries obeying the Gaussian distribution and with only one random position having a nonzero value. The axesion transformation aims to search along the axes.
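
A corresponding sketch of the axesion transformation (same assumptions as above):

```python
import numpy as np

def axesion_transform(x, delta, rng=None):
    """Axesion transformation: perturb a single, randomly chosen coordinate."""
    rng = np.random.default_rng() if rng is None else rng
    R_a = np.zeros((x.size, x.size))
    i = rng.integers(x.size)                 # pick one axis at random
    R_a[i, i] = rng.standard_normal()        # single Gaussian diagonal entry
    return x + delta * (R_a @ x)
```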

Regular neighbourhood and sampling

For a given solution $\mathbf{x}_k$, a candidate solution $\mathbf{x}_{k+1}$ is generated by applying one of the aforementioned state transformation operators once. Since the state transition matrix in each state transformation is random, the generated candidate solution is not unique. Based on a given point, a regular neighbourhood is automatically formed when a certain state transformation operator is used.

Since the entries of the state transition matrix obey certain stochastic distributions, for any given solution the new candidate is a random vector, and a realization of that random vector can be regarded as a sample. Considering that any two random state transition matrices in each state transformation operator are independent, a state transformation operator is applied several times (the number of applications is called the degree of search enforcement, SE for short) to the given solution, yielding SE samples.
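
A short Python sketch of this sampling step, reusing the operator sketches above; the helper name and keyword-argument interface are implementation choices.

```python
def sample_candidates(best, operator, SE, **operator_kwargs):
    """Apply one state transformation operator SE times to the incumbent best,
    producing SE independent candidate solutions (samples)."""
    return [operator(best, **operator_kwargs) for _ in range(SE)]

# Example: SE = 30 rotation samples around the incumbent best.
# candidates = sample_candidates(best, rotation_transform, 30, alpha=1.0)
```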

An update strategy

As mentioned above, a total of SE candidate solutions are sampled based on the incumbent best solution. A new best solution, denoted newBest, is selected from the candidate set by virtue of the evaluation function. Then, an update strategy based on the greedy criterion is used to update the incumbent best solution:

$$\mathrm{Best} = \begin{cases} \mathrm{newBest}, & \text{if } f(\mathrm{newBest}) < f(\mathrm{Best}),\\ \mathrm{Best}, & \text{otherwise.} \end{cases}$$
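
A one-line Python sketch of this greedy update (f is the evaluation function; the helper name is illustrative):

```python
def greedy_update(best, candidates, f):
    """Greedy criterion: keep the incumbent unless the best candidate strictly improves it."""
    new_best = min(candidates, key=f)
    return new_best if f(new_best) < f(best) else best
```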

Algorithm procedure of the basic continuous STA

With the state transformation operators, sampling technique and update strategy, the basic continuous STA can be described as follows:

Step 1: Initialize a random solution Best and set $\alpha = \alpha_{\max} = 1$, $\alpha_{\min} = 10^{-4}$, $\beta = 1$, $\gamma = 1$, $\delta = 1$, $f_c = 2$, $k = 0$;

Step 2: Generate SE samples based on the incumbent Best using the expansion transformation and denote the best of them by newBest. If f(newBest) < f(Best), accept newBest as the new incumbent Best (greedy criterion) and then perform the translation transformation in the same way to update the incumbent Best;

Step 3: Generate SE samples based on the incumbent Best using the rotation transformation and denote the best of them by newBest. If f(newBest) < f(Best), accept newBest as the new incumbent Best (greedy criterion) and then perform the translation transformation in the same way to update the incumbent Best;

Step 4: Generate SE samples based on the incumbent Best using the axesion transformation and denote the best of them by newBest. If f(newBest) < f(Best), accept newBest as the new incumbent Best (greedy criterion) and then perform the translation transformation in the same way to update the incumbent Best;

Step 5: Set $k = k + 1$; if $\alpha < \alpha_{\min}$, set $\alpha = \alpha_{\max}$, otherwise set $\alpha = \alpha / f_c$; return to Step 2 until the maximum number of iterations is reached.
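
The following Python sketch strings Steps 1 to 5 together, reusing the operator sketches above. The parameter defaults follow Step 1 where values are given; SE and max_iter are illustrative choices, and the structure is a sketch rather than the authors' reference implementation.

```python
import numpy as np

def continuous_sta(f, x0, SE=20, max_iter=1000,
                   alpha_max=1.0, alpha_min=1e-4,
                   beta=1.0, gamma=1.0, delta=1.0, fc=2.0):
    """Basic continuous STA loop (Steps 1-5); relies on the rotation,
    translation, expansion and axesion sketches defined earlier."""
    best = np.asarray(x0, dtype=float)
    f_best = f(best)
    prev = best.copy()          # historical state used by the translation operator

    def try_operator(op, **kw):
        # Sample SE candidates with one operator, apply the greedy criterion,
        # and on improvement follow up with a translation search (Steps 2-4).
        nonlocal best, f_best, prev
        cand = min((op(best, **kw) for _ in range(SE)), key=f)
        if f(cand) < f_best:
            prev, best, f_best = best, cand, f(cand)
            t_cand = min((translation_transform(best, prev, beta) for _ in range(SE)), key=f)
            if f(t_cand) < f_best:
                prev, best, f_best = best, t_cand, f(t_cand)

    alpha = alpha_max
    for _ in range(max_iter):                                # Step 5: iteration counter
        try_operator(expansion_transform, gamma=gamma)       # Step 2
        try_operator(rotation_transform, alpha=alpha)        # Step 3
        try_operator(axesion_transform, delta=delta)         # Step 4
        alpha = alpha_max if alpha < alpha_min else alpha / fc
    return best, f_best
```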

Philosophy behind the continuous STA

  • The expansion transformation contributes to the globality since it has the functionality to search the whole space;
  • The rotation transformation benefits the optimality, since when $\alpha$ is sufficiently small the incumbent best solution becomes a locally optimal solution;
  • The update strategy based on the greedy criterion contributes to the convergence; that is to say, the sequence $\{f(\mathrm{Best}_k)\}_{k=1}^{\infty}$ is convergent because $f(\mathrm{Best}_{k+1}) \le f(\mathrm{Best}_k)$ and, provided the objective function is bounded below, the monotone convergence theorem applies;
  • The sampling technique (which avoids complete enumeration) and the alternating use of state transformation operators help to reduce the computational complexity;
  • The parameters $\alpha, \beta, \gamma, \delta$ can be adjusted to control the search space.

Applications of STA

STA has found a variety of applications, such as image segmentation,[7] wind power prediction,[8][9] energy consumption in the alumina evaporation process,[10] resolution of overlapping linear sweep voltammetric peaks,[11] PID controller design,[12][13] feature selection,[14] system modeling,[15][16] and dynamic optimization,[17] and it has been shown that STA is comparable to most existing global optimization methods.

References

Template:Reflist
