Arbitrarily varying channel

An arbitrarily varying channel (AVC) is a communication channel model used in coding theory that was first introduced by Blackwell, Breiman, and Thomasian. This channel has unknown parameters that can change over time, and these changes may not follow a uniform pattern during the transmission of a codeword. $n$ uses of this channel can be described by a stochastic matrix $W^n : X^n \times S^n \to Y^n$, where $X$ is the input alphabet, $Y$ is the output alphabet, and $W^n(y|x, s)$ is the probability, given a state sequence $s$ over the state set $S$, that the transmitted input $x = (x_1, \ldots, x_n)$ leads to the received output $y = (y_1, \ldots, y_n)$. The state $s_i \in S$ can vary arbitrarily at each time unit $i$. This channel was developed as an alternative to Shannon's binary symmetric channel (BSC), where the entire nature of the channel is known, to be more realistic to actual network channel situations.
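
As a concrete illustration, the following minimal sketch represents a single-letter AVC $W(y|x, s)$ as an array and simulates $n$ uses of the channel under an arbitrarily chosen state sequence. The alphabets, the transition probabilities, and the function name transmit are hypothetical choices made only for this example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical binary-input, binary-output AVC with two states.
    # W[s, x, y] = probability of output y given input x and state s.
    W = np.array([
        [[0.9, 0.1],    # state 0: behaves like a BSC with crossover 0.1
         [0.1, 0.9]],
        [[0.6, 0.4],    # state 1: a noisier BSC with crossover 0.4
         [0.4, 0.6]],
    ])

    def transmit(x, s, W, rng):
        """Send codeword x through the AVC under state sequence s.
        The state s[i] may change arbitrarily at every time unit i."""
        return np.array([rng.choice(len(W[si, xi]), p=W[si, xi])
                         for xi, si in zip(x, s)])

    n = 8
    x = rng.integers(0, 2, size=n)    # input codeword
    s = rng.integers(0, 2, size=n)    # arbitrary state sequence
    y = transmit(x, s, W, rng)
    print(x, s, y)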

Capacities and associated proofs

Capacity of deterministic AVCs

An AVC's capacity can vary depending on certain parameters.

$R$ is an achievable rate for a deterministic AVC code if it is larger than 0 and if, for every positive $\varepsilon$ and $\delta$ and all sufficiently large $n$, length-$n$ block codes exist that satisfy the following conditions: $\frac{1}{n}\log N > R - \delta$ and $\max_{s \in S^n} \bar{e}(s) \leq \varepsilon$, where $N$ is the number of codewords and $\bar{e}(s)$ is the average probability of error for a state sequence $s$. The largest achievable rate $R$ is the capacity of the AVC, denoted by $c$.
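
For example (with purely hypothetical numbers, and using base-2 logarithms so that rates are in bits per channel use), the sketch below evaluates the rate $\frac{1}{n}\log N$ of a block code and checks the two conditions above against chosen values of $R$, $\delta$, and $\varepsilon$; the worst-case error value is a stand-in rather than something computed from a real code.

    import math

    n, N = 20, 2 ** 10                # hypothetical block length and number of codewords
    R, delta, eps = 0.45, 0.01, 0.05  # hypothetical target rate and tolerances

    rate = math.log2(N) / n           # (1/n) log N = 0.5 bits per channel use
    worst_avg_error = 0.03            # stand-in for max over s in S^n of e_bar(s)

    print(rate > R - delta)           # rate condition: True
    print(worst_avg_error <= eps)     # error condition: True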

The only useful situations are those in which the capacity of the AVC is greater than 0, because only then can the channel transmit a guaranteed amount of data $c$ without errors. We therefore start with a theorem that shows when $c$ is positive in an AVC; the theorems discussed afterward narrow down the range of $c$ for different circumstances.

Before stating Theorem 1, a few definitions need to be addressed:

  • An AVC is symmetric if $\sum_{s \in S} W(y|x, s)\,U(s|x') = \sum_{s \in S} W(y|x', s)\,U(s|x)$ for every $(x, x', y)$, where $x, x' \in X$, $y \in Y$, and $U(s|x)$ is a channel $U : X \to S$.
  • $X_r$, $S_r$, and $Y_r$ are random variables taking values in the sets $X$, $S$, and $Y$ respectively.
  • $P_{X_r}(x)$ is the probability that the random variable $X_r$ is equal to $x$.
  • $P_{S_r}(s)$ is the probability that the random variable $S_r$ is equal to $s$.
  • $P_{X_r S_r Y_r}$ is the joint probability mass function (pmf) of $P_{X_r}(x)$, $P_{S_r}(s)$, and $W(y|x, s)$. It is defined formally as $P_{X_r S_r Y_r}(x, s, y) = P_{X_r}(x)\,P_{S_r}(s)\,W(y|x, s)$.
  • $H(X_r)$ is the entropy of $X_r$.
  • $H(X_r|Y_r)$ is the conditional entropy of $X_r$ given $Y_r$, i.e. the uncertainty that remains on average about $X_r$ once the value of $Y_r$ is known.
  • $I(X_r \wedge Y_r)$ is the mutual information of $X_r$ and $Y_r$, and is equal to $H(X_r) - H(X_r|Y_r)$.
  • $I(P) = \min_{Y_r} I(X_r \wedge Y_r)$, where the minimum is over all random variables $Y_r$ such that $X_r$, $S_r$, and $Y_r$ are distributed in the form of $P_{X_r S_r Y_r}$ (see the sketch following this list for a numerical illustration).

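The following sketch illustrates how $I(P)$ can be estimated numerically for a fixed input distribution $P$: it minimizes the mutual information $I(X_r \wedge Y_r)$ over distributions of the state $S_r$. The AVC is the same hypothetical two-state binary channel used above, and the coarse grid search only approximates the true minimum.

    import numpy as np

    # Hypothetical AVC: W[s, x, y] = P(y | x, s), two states, binary input/output.
    W = np.array([
        [[0.9, 0.1], [0.1, 0.9]],   # state 0
        [[0.6, 0.4], [0.4, 0.6]],   # state 1
    ])

    def mutual_information(P_x, P_s, W):
        """I(X_r ; Y_r) when X_r ~ P_x, S_r ~ P_s and Y_r is produced by W."""
        # Channel seen by the input after averaging over states:
        # V(y|x) = sum_s P_s(s) W(y|x, s)
        V = np.einsum('s,sxy->xy', P_s, W)
        P_xy = P_x[:, None] * V              # joint pmf of (X_r, Y_r)
        P_y = P_xy.sum(axis=0)
        with np.errstate(divide='ignore', invalid='ignore'):
            terms = P_xy * np.log2(P_xy / (P_x[:, None] * P_y[None, :]))
        return np.nansum(terms)              # in bits

    def I_of_P(P_x, W, grid=101):
        """Approximate I(P) = min over state distributions of I(X_r ; Y_r)."""
        best = np.inf
        for a in np.linspace(0.0, 1.0, grid):    # P_s = (a, 1 - a)
            best = min(best, mutual_information(P_x, np.array([a, 1 - a]), W))
        return best

    P = np.array([0.5, 0.5])
    print(I_of_P(P, W))
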
Theorem 1: $c > 0$ if and only if the AVC is not symmetric. If $c > 0$, then $c = \max_P I(P)$.

Proof of first part for symmetry: If we can prove that $I(P)$ is positive when the AVC is not symmetric, and then prove that $c = \max_P I(P)$, we will be able to prove Theorem 1. Assume $I(P)$ were equal to 0. From the definition of $I(P)$, this would make $X_r$ and $Y_r$ independent random variables for some $S_r$, because this would mean that neither random variable's entropy relies on the other random variable's value. By using the equation for $P_{X_r S_r Y_r}$ (and remembering that $P_{X_r} = P$), we can get:

$P_{Y_r}(y) = \sum_{x \in X} \sum_{s \in S} P(x)\,P_{S_r}(s)\,W(y|x, s)$
(since $X_r$ and $Y_r$ are independent random variables, $W(y|x, s) = W'(y|s)$ for some $W'$)
$P_{Y_r}(y) = \sum_{x \in X} \sum_{s \in S} P(x)\,P_{S_r}(s)\,W'(y|s)$
(because only $P(x)$ depends on $x$ now)
$P_{Y_r}(y) = \sum_{s \in S} P_{S_r}(s)\,W'(y|s)\left[\sum_{x \in X} P(x)\right]$
(because $\sum_{x \in X} P(x) = 1$)
$P_{Y_r}(y) = \sum_{s \in S} P_{S_r}(s)\,W'(y|s)$

So now we have a probability distribution on $Y_r$ that is independent of $X_r$. The definition of a symmetric AVC can therefore be rewritten as follows: $\sum_{s \in S} W'(y|s)\,P_{S_r}(s) = \sum_{s \in S} W'(y|s)\,P_{S_r}(s)$; since $U(s|x)$ and $W(y|x, s)$ are both functions based on $x$, they have been replaced with functions based on $s$ and $y$ only. Both sides are now equal to the $P_{Y_r}(y)$ calculated earlier, so the AVC is indeed symmetric when $I(P)$ is equal to 0. Therefore, $I(P)$ can only be positive if the AVC is not symmetric.
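
The symmetry condition itself is easy to test numerically. The sketch below (hypothetical channel; the function name is_symmetric is an arbitrary choice) checks whether one particular channel $U(s|x)$ satisfies the defining equality for all $(x, x', y)$; an AVC is symmetric if some such $U$ exists.

    import numpy as np

    def is_symmetric(W, U, tol=1e-9):
        """Check the symmetry condition
        sum_s W(y|x,s) U(s|x') == sum_s W(y|x',s) U(s|x) for all x, x', y.
        W has shape (S, X, Y); U has shape (X, S) with rows summing to 1."""
        # lhs[x, x', y] = sum_s W[s, x, y] * U[x', s]
        lhs = np.einsum('sxy,zs->xzy', W, U)
        # rhs[x, x', y] = sum_s W[s, x', y] * U[x, s]
        rhs = np.einsum('szy,xs->xzy', W, U)
        return np.allclose(lhs, rhs, atol=tol)

    # Hypothetical example: an AVC whose output ignores the input is
    # symmetrized by any U, e.g. U(s|x) uniform over the states.
    W = np.array([
        [[0.8, 0.2], [0.8, 0.2]],   # state 0: output ignores the input
        [[0.3, 0.7], [0.3, 0.7]],   # state 1: output ignores the input
    ])
    U = np.full((2, 2), 0.5)        # U(s|x) = 1/2 for every x and s
    print(is_symmetric(W, U))       # True for this toy example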

Proof of second part for capacity: See the paper "The capacity of the arbitrarily varying channel revisited: positivity, constraints," referenced below for full proof.

Capacity of AVCs with input and state constraints

The next theorem will deal with the capacity for AVCs with input and/or state constraints. These constraints help to decrease the very large range of possibilities for transmission and error on an AVC, making it a bit easier to see how the AVC behaves.

Before we go on to Theorem 2, a few definitions and a lemma need to be addressed:

For such AVCs, there exist:

- An input constraint $\Gamma$ based on the equation $g(x) = \frac{1}{n}\sum_{i=1}^{n} g(x_i)$, where $x = (x_1, \ldots, x_n) \in X^n$.
- A state constraint $\Lambda$ based on the equation $l(s) = \frac{1}{n}\sum_{i=1}^{n} l(s_i)$, where $s = (s_1, \ldots, s_n) \in S^n$.
- $\Lambda_0(P) = \min \sum_{x \in X,\, s \in S} P(x)\,l(s)\,U(s|x)$, where the minimum is taken over channels $U(s|x)$ that satisfy the symmetry condition above.
- $I(P, \Lambda)$ is very similar to the $I(P)$ equation mentioned previously, $I(P, \Lambda) = \min_{Y_r} I(X_r \wedge Y_r)$, but now any state $s$ or $S_r$ in the equation must obey the state restriction $l(s) \leq \Lambda$.

Assume $g(x)$ is a given non-negative-valued function on $X$ and $l(s)$ is a given non-negative-valued function on $S$, and that the minimum value of each is 0. In the literature on this subject, the exact definitions of $g(x)$ and $l(s)$ (for a single symbol $x_i$) are never described formally. The usefulness of the input constraint $\Gamma$ and the state constraint $\Lambda$ is based on these functions.

For AVCs with input and/or state constraints, the rate $R$ is now limited to codewords $x_1, \ldots, x_N$ that satisfy $g(x_i) \leq \Gamma$, and the state $s$ is limited to all states that satisfy $l(s) \leq \Lambda$. The largest achievable rate is still considered the capacity of the AVC, and is now denoted by $c(\Gamma, \Lambda)$.
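
As a small illustration, the sketch below checks whether a codeword respects the input constraint $\Gamma$ and whether a state sequence respects the state constraint $\Lambda$. Since $g$ and $l$ are not specified formally in the literature, the per-symbol costs here are arbitrary, hypothetical choices.

    import numpy as np

    # Hypothetical per-symbol costs: g might measure the "power" of an input
    # symbol, l the "jamming budget" consumed by a state symbol.
    g_symbol = {0: 0.0, 1: 1.0}     # g on the input alphabet X = {0, 1}
    l_symbol = {0: 0.0, 1: 1.0}     # l on the state alphabet S = {0, 1}

    def g(x):
        """g(x) = (1/n) * sum_i g(x_i) for a codeword x in X^n."""
        return np.mean([g_symbol[xi] for xi in x])

    def l(s):
        """l(s) = (1/n) * sum_i l(s_i) for a state sequence s in S^n."""
        return np.mean([l_symbol[si] for si in s])

    Gamma, Lam = 0.5, 0.25          # hypothetical input and state constraints
    x = [0, 1, 0, 1, 0, 0, 1, 0]    # candidate codeword
    s = [0, 0, 1, 0, 0, 0, 1, 0]    # candidate state sequence

    print(g(x) <= Gamma)            # codeword admissible under Gamma?
    print(l(s) <= Lam)              # state sequence admissible under Lambda?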

Lemma 1: Any codes where $\Lambda$ is greater than $\Lambda_0(P)$ cannot be considered "good" codes, because such codes have a maximum average probability of error greater than or equal to $\frac{N-1}{2N} - \frac{l_{\max}^2}{n(\Lambda - \Lambda_0(P))^2}$, where $l_{\max}$ is the maximum value of $l(s)$. This is not a good maximum average error probability because it is fairly large: $\frac{N-1}{2N}$ is close to $\frac{1}{2}$, and the subtracted term is very small, since the $(\Lambda - \Lambda_0(P))$ value is squared, $\Lambda$ is set to be larger than $\Lambda_0(P)$, and the term shrinks further as the block length $n$ grows. Therefore, it would be very unlikely to receive a codeword without error. This is why the $\Lambda_0(P)$ condition is present in Theorem 2.
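
To get a feel for the size of this bound, a minimal sketch follows, using hypothetical values for every quantity and the expression exactly as displayed above.

    # Lower bound from Lemma 1 on the maximum average probability of error,
    # evaluated for hypothetical values of all quantities involved.
    N = 1024        # number of codewords
    n = 100         # block length
    l_max = 1.0     # maximum value of l(s)
    Lam = 0.5       # state constraint Lambda
    Lam0 = 0.3      # Lambda_0(P) for the chosen input distribution P

    bound = (N - 1) / (2 * N) - l_max ** 2 / (n * (Lam - Lam0) ** 2)
    print(bound)    # roughly 0.25, i.e. a large probability of error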

Theorem 2: Given a positive $\Lambda$ and arbitrarily small $\alpha > 0$, $\beta > 0$, $\delta > 0$, for any block length $n \geq n_0$ and for any type $P$ with conditions $\Lambda_0(P) \geq \Lambda + \alpha$ and $\min_{x \in X} P(x) \geq \beta$, and where $P_{X_r} = P$, there exists a code with codewords $x_1, \ldots, x_N$, each of type $P$, that satisfies the following conditions: $\frac{1}{n}\log N > I(P, \Lambda) - \delta$ and $\max_{l(s) \leq \Lambda} \bar{e}(s) \leq \exp(-n\gamma)$, where the positive constants $n_0$ and $\gamma$ depend only on $\alpha$, $\beta$, $\delta$, and the given AVC.

Proof of Theorem 2: See the paper "The capacity of the arbitrarily varying channel revisited: positivity, constraints," referenced below for full proof.

Capacity of randomized AVCs

The next theorem will be for AVCs with randomized codes. For such AVCs the code is a random variable with values from a family of length-$n$ block codes, and these codes are not allowed to depend on the actual value of the codeword. These codes have the same maximum and average error probability for any channel because of their random nature. These types of codes also help to make certain properties of the AVC more clear.

Before we go on to Theorem 3, we need to define a couple of important terms first:

$W_\zeta(y|x) = \sum_{s \in S} W(y|x, s)\,P_{S_r}(s)$
$I(P, \zeta)$ is very similar to the $I(P)$ equation mentioned previously, $I(P, \zeta) = \min_{Y_r} I(X_r \wedge Y_r)$, but now the pmf $P_{S_r}(s)$ is added to the equation, making the minimum of $I(P, \zeta)$ based on a new form of $P_{X_r S_r Y_r}$, where $W_\zeta(y|x)$ replaces $W(y|x, s)$.
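
A minimal sketch of computing the averaged channel $W_\zeta(y|x)$ for one particular state pmf $P_{S_r}$, reusing the hypothetical two-state binary AVC from the earlier examples:

    import numpy as np

    # Hypothetical AVC: W[s, x, y] = P(y | x, s).
    W = np.array([
        [[0.9, 0.1], [0.1, 0.9]],   # state 0
        [[0.6, 0.4], [0.4, 0.6]],   # state 1
    ])
    P_s = np.array([0.25, 0.75])    # an arbitrary state pmf P_{S_r}

    # W_zeta(y|x) = sum_s W(y|x, s) * P_{S_r}(s)
    W_zeta = np.einsum('s,sxy->xy', P_s, W)
    print(W_zeta)                   # each row is a conditional pmf over y for one x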

Theorem 3: The capacity for randomized codes of the AVC is $c = \max_P I(P, \zeta)$.

Proof of Theorem 3: See paper "The Capacities of Certain Channel Classes Under Random Coding" referenced below for full proof.

See also

References