Coding theory approaches to nucleic acid design

DNA code construction refers to the application of coding theory to the design of nucleic acid systems for DNA-based computation.

Introduction

In living cells, DNA occurs as a double helix in which one DNA strand is hybridized to its complementary strand through a series of hydrogen bonds. For the purposes of this entry, we focus only on oligonucleotides. DNA computing involves allowing synthetic oligonucleotide strands to hybridize in such a way as to perform computation; it therefore requires that the strands self-assemble so that hybridization occurs in a manner compatible with the goals of the computation.

The field of DNA computing was established in Leonard M. Adleman's seminal paper.[1] His work is significant for a number of reasons:

  • It shows how the highly parallel nature of computation performed by DNA can be used to solve problems that are difficult or almost impossible to solve by traditional methods.
  • It is an example of computation at the molecular level, along the lines of nanocomputing, which is a potentially major advantage in the information density of storage media, far beyond what semiconductor technology currently offers.
  • It demonstrates unique aspects of DNA as a data structure.

This capability for massively parallel computation can be exploited to attack computational problems at enormous scale, and has been proposed for applications such as cell-based computational systems for cancer diagnostics and treatment and ultra-high-density storage media.[2]

The selection of codewords (sequences of DNA oligonucleotides) is a major hurdle in itself because of secondary structure formation, also known as self-hybridization, in which a DNA strand folds back onto itself during hybridization and is thereby rendered useless for further computation. The Nussinov-Jacobson algorithm[3] is used to predict secondary structures and to identify design criteria that reduce the possibility of secondary structure formation in a codeword. In essence, this analysis shows how the presence of a cyclic structure in a DNA code reduces the complexity of testing the codewords for secondary structures.
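The Nussinov-Jacobson recursion itself is not reproduced here, but the following Python sketch of a Nussinov-style dynamic program (the function name, minimum loop length, and test strings are illustrative assumptions, not taken from the source) shows the kind of computation used to screen a candidate codeword for its tendency to self-hybridize: it maximizes the number of Watson-Crick pairs a strand can form with itself.

```python
def max_self_pairs(seq, min_loop=3):
    """Nussinov-style dynamic program: maximum number of Watson-Crick pairs
    a single strand can form with itself (a rough proxy for how prone a
    candidate codeword is to secondary structure / self-hybridization)."""
    wc = {('A', 'T'), ('T', 'A'), ('C', 'G'), ('G', 'C')}
    n = len(seq)
    best = [[0] * n for _ in range(n)]            # best[i][j]: max pairs within seq[i..j]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            score = best[i][j - 1]                # position j left unpaired
            for k in range(i, j - min_loop):      # pair position j with position k
                if (seq[k], seq[j]) in wc:
                    left = best[i][k - 1] if k > i else 0
                    score = max(score, left + best[k + 1][j - 1] + 1)
            best[i][j] = score
    return best[0][n - 1] if n else 0

# A strand with self-complementary stretches folds back on itself readily:
print(max_self_pairs("ACGTACGTACGT"))   # several pairs are possible
print(max_self_pairs("AAAAAACCCCCC"))   # 0: no Watson-Crick pairs possible
```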

Novel constructions of such codes include the use of cyclic reversible extended generalized Hadamard matrices and a binary mapping approach. Before describing these constructions, we revisit some fundamental terminology. The motivation for the theorems presented in this article is that they agree with the Nussinov-Jacobson analysis: the existence of a cyclic structure helps to reduce complexity and thus to prevent secondary structure formation. In other words, codes obtained from these constructions satisfy some or all of the design requirements for DNA oligonucleotides at the time of hybridization (the core of the DNA computing process) and hence do not suffer from problems of self-hybridization.

Definitions

A DNA code is simply a set of sequences over the alphabet $\mathcal{Q} = \{A, T, C, G\}$.

Each purine base is the Watson-Crick complement of a unique pyrimidine base (and vice versa): adenine and thymine form a complementary pair, as do guanine and cytosine. This pairing can be written as $\bar{A} = T$, $\bar{T} = A$, $\bar{C} = G$, $\bar{G} = C$.

Such pairing is chemically very stable and strong. However, pairing of mismatching bases does occur at times due to biological mutations.

Most of the focus in DNA coding has been on constructing large sets of DNA codewords with prescribed minimum-distance properties. For this purpose, we first lay down the required groundwork.

Let $q = q_1 q_2 \cdots q_n$ be a word of length $n$ over the alphabet $\mathcal{Q}$. For $1 \le i \le j \le n$, we use the notation $q[i,j]$ to denote the subsequence $q_i q_{i+1} \cdots q_j$. The sequence obtained by reversing $q$ is denoted $q^{R}$. The Watson-Crick complement, or reverse-complement, of $q$ is defined as $q^{RC} = \bar{q}_n \bar{q}_{n-1} \cdots \bar{q}_1$, where $\bar{q}_i$ denotes the Watson-Crick complement of the base $q_i$.

For any pair of length-$n$ words $p$ and $q$ over $\mathcal{Q}$, the Hamming distance $d_H(p,q)$ is the number of positions $i$ at which $p_i \ne q_i$. Further, the reverse-Hamming distance is defined as $d_H^{R}(p,q) = d_H(p, q^{R})$. Similarly, the reverse-complement Hamming distance is $d_H^{RC}(p,q) = d_H(p, q^{RC})$ (RC stands for reverse-complement).

Another important code design consideration linked to the process of oligonucleotide hybridization pertains to the GC-content of the sequences in a DNA code. The GC-content $w_{GC}(q)$ of a DNA sequence $q = q_1 q_2 \cdots q_n$ is defined as the number of indices $i$ such that $q_i \in \{G, C\}$. A DNA code in which all codewords have the same GC-content $w$ is called a constant GC-content code.
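To make the definitions above concrete, here is a small Python sketch (function names and the example words are my own, purely illustrative) implementing the reverse-complement, the three distance measures and the GC-content:

```python
COMPLEMENT = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}   # Watson-Crick pairing

def reverse(q):
    """q^R: the sequence read backwards."""
    return q[::-1]

def reverse_complement(q):
    """q^RC: complement every base, then reverse."""
    return ''.join(COMPLEMENT[b] for b in reversed(q))

def d_h(p, q):
    """Hamming distance between two words of equal length."""
    return sum(a != b for a, b in zip(p, q))

def d_h_reverse(p, q):
    """Reverse-Hamming distance d_H^R(p, q) = d_H(p, q^R)."""
    return d_h(p, reverse(q))

def d_h_reverse_complement(p, q):
    """Reverse-complement Hamming distance d_H^RC(p, q) = d_H(p, q^RC)."""
    return d_h(p, reverse_complement(q))

def gc_content(q):
    """Number of positions holding G or C."""
    return sum(b in 'GC' for b in q)

p, q = "ACGTAC", "TTGACG"
print(d_h(p, q), d_h_reverse(p, q), d_h_reverse_complement(p, q), gc_content(p))
```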

A generalized Hadamard matrix $H \equiv H(n, \mathbb{C}_m)$ is an $n \times n$ square matrix with entries taken from the set of $m$th roots of unity, $\mathbb{C}_m = \{e^{2\pi i l/m} : l = 0, \ldots, m-1\}$, that satisfies $HH^{*} = nI$. Here $I$ denotes the identity matrix of order $n$, while $*$ stands for conjugate transposition. We will only concern ourselves with the case $m = p$ for some prime $p$. A necessary condition for the existence of a generalized Hadamard matrix $H(n, \mathbb{C}_p)$ is that $p \mid n$. The exponent matrix $E(n, \mathbb{Z}_p)$ of $H(n, \mathbb{C}_p)$ is the $n \times n$ matrix with entries in $\mathbb{Z}_p = \{0, 1, 2, \ldots, p-1\}$ obtained by replacing each entry $e^{2\pi i l/p}$ of $H(n, \mathbb{C}_p)$ by its exponent $l$.

The elements of the Hadamard exponent matrix lie in the Galois field GF(p), and its row vectors constitute the codewords of what shall be called a generalized Hadamard code.

By definition, a generalized Hadamard matrix $H$ in its standard form has only 1s in its first row and column. The $(n-1) \times (n-1)$ square matrix formed by the remaining entries of $H$ is called the core of $H$, and the corresponding submatrix of the exponent matrix $E$ is called the core construction. Thus, by omitting the all-zero first column of the exponent matrix, one obtains cyclic generalized Hadamard codes whose codewords are the row vectors of the punctured matrix.

Also, the rows of such an exponent matrix satisfy the following two properties: (i) in each of the nonzero rows of the exponent matrix, each element of $\mathbb{Z}_p$ appears a constant number, $n/p$, of times; and (ii) the Hamming distance between any two rows is $n(p-1)/p$.[4]
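As an illustration that is not taken from the source, the Fourier-type exponent matrix $E_{jk} = jk \bmod p$ yields one standard generalized Hadamard matrix; the numpy sketch below builds it for the assumed small case $p = n = 3$ and checks $HH^{*} = nI$ together with the two row properties just stated:

```python
import numpy as np

p = n = 3                                   # small illustrative case (assumed)
E = np.fromfunction(lambda j, k: (j * k) % p, (n, n), dtype=int)   # exponent matrix
H = np.exp(2j * np.pi * E / p)              # generalized Hadamard matrix H(n, C_p)

# H H* = n I (up to floating-point error)
assert np.allclose(H @ H.conj().T, n * np.eye(n))

# (i) each nonzero row of E contains every element of Z_p exactly n/p times
for row in E[1:]:
    counts = np.bincount(row, minlength=p)
    assert all(c == n // p for c in counts)

# (ii) the Hamming distance between any two rows of E is n(p-1)/p
for a in range(n):
    for b in range(a + 1, n):
        assert np.sum(E[a] != E[b]) == n * (p - 1) // p

print(E)
```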

Property U

Let $C_p = \{1, x, x^2, \ldots, x^{p-1}\}$ be the cyclic group generated by $x$, where $x = e^{2\pi i/p}$ is a complex primitive $p$th root of unity and $p > 2$ is a fixed prime. Further, let $A = (x^{a_i})$ and $B = (x^{b_i})$ denote arbitrary vectors over $\mathbb{C}_p$ of length $N = pt$, where $t$ is a positive integer. Define the collection of differences between exponents as $Q = \{a_i - b_i \bmod p : i = 1, 2, \ldots, N\}$, and let $n_q$ denote the multiplicity with which the element $q$ of GF$(p)$ appears in $Q$.[4]

The collection $Q$ is said to satisfy Property U if and only if each element $q$ of GF$(p)$ appears in $Q$ exactly $t$ times, i.e. $n_q = t$ for $q = 0, 1, \ldots, p-1$.

The following lemma is of fundamental importance in constructing generalized Hadamard codes.

Lemma (orthogonality of vectors over $C_p$). For a fixed prime $p$, arbitrary vectors $A, B$ of length $N = pt$, whose elements are from $C_p$, are orthogonal if the collection $Q$ satisfies Property U, where $Q$ is the collection of differences mod $p$ between the Hadamard exponents associated with $A$ and $B$.
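A quick numerical sanity check of the lemma (an illustrative sketch with exponent vectors chosen by hand) computes $Q$, tests Property U, and confirms that the corresponding vectors over $C_p$ are orthogonal:

```python
import cmath
from collections import Counter

def satisfies_property_U(a_exp, b_exp, p):
    """Check Property U for the multiset Q of exponent differences mod p."""
    t = len(a_exp) // p
    Q = Counter((a - b) % p for a, b in zip(a_exp, b_exp))
    return all(Q[q] == t for q in range(p))

def inner_product(a_exp, b_exp, p):
    """Inner product of the vectors (x^{a_i}) and (x^{b_i}), x = e^{2*pi*i/p}."""
    x = cmath.exp(2j * cmath.pi / p)
    return sum(x ** a * (x ** b).conjugate() for a, b in zip(a_exp, b_exp))

# Example with p = 3, t = 2 (N = 6): the exponent differences cover Z_3 evenly.
p, a_exp, b_exp = 3, [0, 1, 2, 0, 1, 2], [0, 0, 0, 1, 1, 1]
assert satisfies_property_U(a_exp, b_exp, p)
assert abs(inner_product(a_exp, b_exp, p)) < 1e-9   # orthogonal, as the lemma predicts
```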

M-sequences

Let $V$ be an arbitrary vector of length $N$ whose elements lie in the finite field GF$(p)$, where $p$ is a prime. Let the elements of $V$ constitute the first period of an infinite sequence $a(V)$ that is periodic with period $N$. If $N$ is the least period of $a(V)$, the sequence is called an M-sequence, that is, a maximal sequence of least period $N$ obtained by cyclically permuting its $N$ elements. If, whenever the elements of $V$ are permuted arbitrarily to yield $V^{*}$, the sequence $a(V^{*})$ is again an M-sequence, then $a(V)$ is called M-invariant. The theorems that follow present conditions that ensure M-invariance. In conjunction with a certain uniformity property of polynomial coefficients, these conditions yield a simple method by which complex Hadamard matrices with cyclic core can be constructed.
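A brute-force illustration of these definitions (my own sketch, feasible only for very short vectors) computes the least period of the periodic extension of $V$ and tests M-invariance by trying every rearrangement:

```python
from itertools import permutations

def least_period(v):
    """Least period of the infinite sequence obtained by repeating v."""
    n = len(v)
    return next(d for d in range(1, n + 1)
                if all(v[i] == v[(i + d) % n] for i in range(n)))

def is_m_invariant(v):
    """True if every rearrangement of v yields a sequence of least period len(v)."""
    return all(least_period(p) == len(v) for p in set(permutations(v)))

print(least_period((0, 1, 2, 0, 1, 2)))   # 3: the repeated block is shorter than the vector
print(is_m_invariant((0, 0, 1, 2)))       # True: no rearrangement has a smaller period
```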

The goal here is to find a cyclic matrix $E = E_c$ whose elements lie in the Galois field GF$(p)$ and whose dimension is $N = p^n - 1$. The rows of $E$ will be the nonzero codewords of a linear cyclic code $K$ if and only if there is a polynomial $g(x)$ with coefficients in GF$(p)$ that is a proper divisor of $x^N - 1$ and that generates $K$. In order to have $N$ nonzero codewords, $g(x)$ must be of degree $N - n$. Further, in order to generate a cyclic Hadamard core, the vector of coefficients of $g(x)$ must have period $N$ under the cyclic shift operation, and the vector difference of two arbitrary rows of $E$ (augmented with a zero) must satisfy the uniformity condition of Butson,[5] previously referred to as Property U. One necessary condition for $N$-periodicity is that $x^N - 1 = g(x)h(x)$, where $h(x)$ is monic irreducible over GF$(p)$.[6] The approach here is to replace this last requirement with the condition that the coefficients of the vector $[0, g(x)]$ be uniformly distributed over GF$(p)$, i.e. that each residue $0, 1, \ldots, p-1$ appear the same number of times (Property U). A proof that this heuristic approach always produces a cyclic core is given below.

Examples of code construction

Code construction using complex Hadamard matrices

Construction algorithm

Consider a monic irreducible polynomial $h(x)$ over GF$(p)$ of degree $n$ having a suitable companion $g(x)$ of degree $N - n$ such that $g(x)h(x) = x^N - 1$ and such that the vector $[0, g(x)]$ satisfies Property U. Finding such a companion requires only a simple computer algorithm for long division over GF$(p)$. Since $h(x) \mid x^N - 1$, the ideal generated by $g(x) \bmod (x^N - 1)$ is a cyclic code $K$. Moreover, Property U guarantees that the nonzero codewords form a cyclic matrix, each row of period $N$ under cyclic permutation, which serves as a cyclic core for the Hadamard matrix $H(p, p^n)$. As an example, a cyclic core for $H(3, 9)$ results from the companions $h(x) = x^2 + x + 2$ and $g(x) = x^6 + 2x^5 + 2x^4 + 2x^2 + x + 1$. The coefficients of $g$ indicate that $\{0, 1, 6\}$ is the relative difference set mod 8.
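The long-division and Property U steps are easy to mechanize. The following Python sketch (helper names are my own; the example data are exactly the $h(x)$ and $g(x)$ quoted above) recovers $g(x)$ from $h(x) = x^2 + x + 2$ over GF(3), checks Property U for $[0, g]$, builds the $8 \times 8$ cyclic core, and numerically confirms that the resulting $9 \times 9$ matrix satisfies $HH^{*} = 9I$:

```python
import numpy as np
from collections import Counter

def poly_divmod(num, den, p):
    """Long division of polynomials over GF(p); coefficients in ascending order."""
    num = list(num)
    deg_n, deg_d = len(num) - 1, len(den) - 1
    inv_lead = pow(den[-1], p - 2, p)            # inverse of the leading coefficient
    quot = [0] * (deg_n - deg_d + 1)
    for k in range(deg_n - deg_d, -1, -1):
        coeff = (num[deg_d + k] * inv_lead) % p
        quot[k] = coeff
        for j in range(deg_d + 1):
            num[j + k] = (num[j + k] - coeff * den[j]) % p
    return quot, num[:deg_d]

p, n = 3, 2
N = p**n - 1                                     # 8
h = [2, 1, 1]                                    # h(x) = x^2 + x + 2
x_N_minus_1 = [(-1) % p] + [0] * (N - 1) + [1]   # x^N - 1
g, rem = poly_divmod(x_N_minus_1, h, p)
assert all(r == 0 for r in rem)                  # h(x) divides x^N - 1
print(g)                                         # [1, 1, 2, 0, 2, 2, 1]: g(x) = x^6+2x^5+2x^4+2x^2+x+1

row = (g + [0] * N)[:N]                          # pad g's coefficients to length N
assert set(Counter([0] + row).values()) == {(N + 1) // p}   # [0, g] satisfies Property U

core = [row[i:] + row[:i] for i in range(N)]     # the N cyclic shifts form the cyclic core
E = np.zeros((N + 1, N + 1), dtype=int)          # full exponent matrix: zero first row/column
E[1:, 1:] = np.array(core)
H = np.exp(2j * np.pi * E / p)
assert np.allclose(H @ H.conj().T, (N + 1) * np.eye(N + 1))   # H H* = 9 I
```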

Theorem

Let $p$ be a prime and $N + 1 = p^n$, and let $g(x)$ be a monic polynomial of degree $N - n$ whose extended vector of coefficients $C = [c_0, c_1, \ldots, c_{N-1}]$ has elements in GF$(p)$. Suppose the following conditions hold:

  1. the vector $C = [c_0, c_1, \ldots, c_{N-1}]$ satisfies Property U, and
  2. $g(x)h(x) = x^N - 1$, where $h(x)$ is a monic irreducible polynomial of degree $n$.

Then there exists a $p$-ary linear cyclic code $\bar{K}$ of block length $N$ such that the augmented code $K = [0, \bar{K}]$ is the exponent matrix of the Hadamard matrix $H(p, p^n) = x^{K}$, with $x = e^{2\pi i/p}$, where the core of $H$ is a cyclic matrix.

Proof:

First note that $g(x)$ is monic, has degree $N - n$, and divides $x^N - 1$. We need to show that the matrix $E_c$ whose rows are the nonzero codewords constitutes a cyclic core for some complex Hadamard matrix $H$.

Given that $C$ satisfies Property U, all of the nonzero residues of GF$(p)$ appear in $C$. By cyclically permuting the elements of $C$, we obtain the desired exponent matrix $E_c$, in which every codeword is obtained by cyclically permuting the first codeword. (This is because the sequence obtained by cyclically permuting $C$ is M-invariant.)

We also see that augmenting each codeword of $E_c$ with a leading zero element produces a vector that satisfies Property U. Moreover, since the code is linear, the mod-$p$ vector difference of two arbitrary codewords is also a codeword and thus also satisfies Property U. Therefore, the row vectors of the augmented code $K$ form a Hadamard exponent matrix, and $x^{K}$ is the standard form of some complex Hadamard matrix $H$.

Thus, from the above property, we see that the core of $E$ is a circulant matrix consisting of all $N = p^k - 1$ cyclic shifts of its first row. Such a core is called a cyclic core, wherein each element of $\mathbb{Z}_p$ appears in each row of $E$ exactly $(N+1)/p = p^{k-1}$ times, and the Hamming distance between any two rows is exactly $(N+1)(p-1)/p = (p-1)p^{k-1}$. The $N$ rows of the core form a constant-composition code, one consisting of the $N$ cyclic shifts of some word of length $N$ over $\mathbb{Z}_p$, and the Hamming distance between any two of its codewords is $(p-1)p^{k-1}$.

The following can be inferred from the theorem as explained above. (For more detailed reading, the reader is referred to the paper by Heng and Cooke.[4]) Let $N = p^k - 1$ for $p$ prime and $k \in \mathbb{Z}^{+}$. Let $g(x) = c_0 + c_1x + c_2x^2 + \cdots + c_{N-k}x^{N-k}$ be a monic polynomial over $\mathbb{Z}_p$ of degree $N - k$ such that $g(x)h(x) = x^N - 1$ over $\mathbb{Z}_p$, for some monic irreducible polynomial $h(x) \in \mathbb{Z}_p[x]$ of degree $k$. Suppose that the vector $(c_0, c_1, \ldots, c_{N-k}, c_{N-k+1}, \ldots, c_{N-1})$, with $c_i = 0$ for $N - k < i < N$, has the property that it contains each element of $\mathbb{Z}_p$ the same number of times. Then the $N$ cyclic shifts of the vector $g = (c_0, c_1, \ldots, c_{N-1})$ form the core of the exponent matrix of some Hadamard matrix.

DNA codes with constant GC-content can readily be constructed from constant-composition codes (a constant-composition code over a $k$-ary alphabet has the property that the number of occurrences of each of the $k$ symbols within a codeword is the same for every codeword) over $\mathbb{Z}_p$ by mapping the symbols of $\mathbb{Z}_p$ to the symbols of the DNA alphabet $\mathcal{Q} = \{A, T, C, G\}$. For example, using the cyclic constant-composition code of length $3^k - 1$ over $\mathbb{Z}_3$ guaranteed by the theorem proved above, together with the mapping that takes 0 to $A$, 1 to $T$ and 2 to $G$, we obtain a DNA code $\mathcal{D}$ with $3^k - 1$ codewords of length $3^k - 1$ and GC-content $3^{k-1}$. Clearly $d_H(\mathcal{D}) = 2 \cdot 3^{k-1}$, and in fact, since $\bar{G} = C$ and no codeword in $\mathcal{D}$ contains the symbol $C$, we also have $d_H^{RC}(\mathcal{D}) \ge 3^{k-1}$. This is summarized in the following corollary.[4]

Corollary

For any $k \in \mathbb{Z}^{+}$, there exist DNA codes $\mathcal{D}$ with $3^k - 1$ codewords of length $3^k - 1$, constant GC-content $3^{k-1}$, $d_H^{RC}(\mathcal{D}) \ge 3^{k-1}$, and in which every codeword is a cyclic shift of a fixed generator codeword $g$.
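As a spot check of the corollary for $k = 2$ (a sketch of my own that reuses the length-8 generator obtained in the construction sketch above), the following verifies the constant GC-content $3^{k-1} = 3$, the pairwise Hamming distance $2 \cdot 3^{k-1} = 6$, and the bound $d_H^{RC} \ge 3^{k-1} = 3$:

```python
from itertools import combinations

# Generator row over Z_3 (coefficients of g(x) padded to length 8, from the H(3,9) sketch above).
gen = [1, 1, 2, 0, 2, 2, 1, 0]
N = len(gen)

shifts = [gen[i:] + gen[:i] for i in range(N)]          # cyclic constant-composition code
to_dna = {0: 'A', 1: 'T', 2: 'G'}                       # the mapping 0->A, 1->T, 2->G
dna_code = [''.join(to_dna[s] for s in row) for row in shifts]

comp = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}
def rc(q):                                              # Watson-Crick reverse-complement
    return ''.join(comp[b] for b in reversed(q))
def d_h(x, y):
    return sum(a != b for a, b in zip(x, y))

assert all(w.count('G') + w.count('C') == 3 for w in dna_code)            # GC-content 3^(k-1)
assert all(d_h(x, y) == 6 for x, y in combinations(dna_code, 2))          # d_H = 2*3^(k-1)
assert all(d_h(x, rc(y)) >= 3 for x in dna_code for y in dna_code)        # d_H^RC >= 3^(k-1)
print(dna_code)
```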

Each of the following vectors generates a cyclic core of a Hadamard matrix $H(p, p^n)$ (where $N + 1 = p^n$ and $n = 3$ in this example):[4]

$g^{(1)} = (22201221202001110211210200)$;

$g^{(2)} = (20212210222001012112011100)$,

where $g(x) = a_0 + a_1x + \cdots + a_nx^n$.

Thus, we see how DNA codes can be obtained from such generators by mapping 0, 1, 2 onto A, T, G. All such mappings yield codes with essentially the same parameters; however, the actual choice of mapping has a strong influence on the secondary structure of the codewords. For example, one codeword was obtained from $g^{(1)}$ via the mapping $0 \mapsto A$, $1 \mapsto T$, $2 \mapsto G$, while a second codeword was obtained from the same generator via the mapping $0 \mapsto G$, $1 \mapsto T$, $2 \mapsto A$.

Code construction via a binary mapping

Perhaps a simpler approach to designing DNA codewords is to use a binary mapping and view the design problem as one of constructing binary codes, i.e. map the DNA alphabet $\mathcal{Q}$ onto the set of length-2 binary words as follows: $A \mapsto 00$, $T \mapsto 01$, $C \mapsto 10$, $G \mapsto 11$.

As the mapping shows, the first bit of each 2-bit image determines which complementary pair (A/T or C/G) the corresponding base belongs to.

Let $q$ be a DNA sequence. The sequence $b(q)$ obtained by applying the mapping above to $q$ is called the binary image of $q$.

Now, let $b(q) = b_0 b_1 b_2 \cdots b_{2n-1}$.

Let the subsequence $e(q) = b_0 b_2 \cdots b_{2n-2}$ be called the even subsequence of $b(q)$, and $o(q) = b_1 b_3 b_5 \cdots b_{2n-1}$ the odd subsequence of $b(q)$.

For example, if $q = ACGTCC$, then $b(q) = 001011011010$.

Then $e(q) = 011011$ and $o(q) = 001100$.

For a DNA code $\mathcal{C}$, define its even component as $\mathcal{E}(\mathcal{C}) = \{e(x) : x \in \mathcal{C}\}$ and its odd component as $\mathcal{O}(\mathcal{C}) = \{o(x) : x \in \mathcal{C}\}$.

With this choice of binary mapping, the GC-content of a DNA sequence $q$ equals the Hamming weight of $e(q)$.

Hence, a DNA code $\mathcal{C}$ is a constant GC-content code if and only if its even component $\mathcal{E}(\mathcal{C})$ is a constant-weight code.
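The following short sketch (function names are illustrative) computes the binary image and its even and odd subsequences for the example above and confirms that the GC-content equals the Hamming weight of the even subsequence:

```python
DNA_TO_BITS = {'A': '00', 'T': '01', 'C': '10', 'G': '11'}

def binary_image(q):            # b(q)
    return ''.join(DNA_TO_BITS[base] for base in q)

def even_subseq(b):             # e(q): bits b_0 b_2 ...
    return b[0::2]

def odd_subseq(b):              # o(q): bits b_1 b_3 ...
    return b[1::2]

q = "ACGTCC"
b = binary_image(q)
assert b == "001011011010"
assert even_subseq(b) == "011011" and odd_subseq(b) == "001100"

# GC-content equals the Hamming weight of the even subsequence.
gc = sum(base in "GC" for base in q)
assert gc == even_subseq(b).count('1') == 4
print(b, even_subseq(b), odd_subseq(b), gc)
```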

Let $\mathcal{B}$ be a binary code consisting of $M$ codewords of length $n$ and minimum distance $d_{\min}$, such that $c \in \mathcal{B}$ implies $\bar{c} \in \mathcal{B}$.

For $w > 0$, consider the constant-weight subcode $\mathcal{B}_w = \{u \in \mathcal{B} : w_H(u) = w\}$, where $w_H(\cdot)$ denotes Hamming weight. Choose $w > 0$ such that $n \ge 2w + d_{\min}/2$, and consider a DNA code $\mathcal{C}_w$ with the following choice for its even and odd components:

$\mathcal{E}_w = \{a\bar{b} : a, b \in \mathcal{B}_w\}, \qquad \mathcal{O} = \{ab^{RC} : a, b \in \mathcal{B},\ a <_{\mathrm{lex}} b\},$ where $a\bar{b}$ denotes the concatenation of $a$ with the complement of $b$, and $ab^{RC}$ the concatenation of $a$ with the reverse-complement of $b$.

Here $<_{\mathrm{lex}}$ denotes lexicographic ordering. The condition $a <_{\mathrm{lex}} b$ in the definition of $\mathcal{O}$ ensures that if $ab^{RC} \in \mathcal{O}$, then $ba^{RC} \notin \mathcal{O}$, so that distinct codewords in $\mathcal{O}$ cannot be reverse-complements of each other.

The code $\mathcal{E}_w$ has $|\mathcal{B}_w|^2$ codewords of length $2n$ and constant weight $n$.

Furthermore, $d_H(\mathcal{E}_w) \ge d_{\min}$ and $d_H^{R}(\mathcal{E}_w) \ge d_{\min}$ (this is because $\mathcal{B}_w$ is a subset of the codewords of $\mathcal{B}$).

Also, $d_H(a\bar{b}, d^{RC}c^{R}) = d_H(a, d^{RC}) + d_H(\bar{b}, c^{R}) = d_H(a, d^{RC}) + d_H(c, b^{RC})$.

Note that $b$ and $d$ both have weight $w$. This implies that $b^{RC}$ and $d^{RC}$ have weight $n - w$.

Due to the weight constraint on $w$, we must have, for all $a, b, c, d \in \mathcal{B}_w$, $d_H(a\bar{b}, d^{RC}c^{R}) \ge 2\lceil d_{\min}/2 \rceil \ge d_{\min}$.

Thus, the code $\mathcal{O}$ has $M(M-1)/2$ codewords of length $2n$.

From this, we see that $d_H(\mathcal{O}) \ge d_{\min}$ (because the component codewords of $\mathcal{O}$ are taken from $\mathcal{B}$).

Similarly, $d_H^{RC}(\mathcal{O}) \ge d_{\min}$.

Therefore, the DNA code

$\mathcal{C} = \bigcup_{w = d_{\min}}^{w_{\max}} \mathcal{C}_w,$

with $w_{\max} = (n - d_{\min}/2)/2$, has $\tfrac{1}{2}M(M-1)\sum_{w = d_{\min}}^{w_{\max}} |\mathcal{B}_w|^{2}$ codewords of length $2n$, and satisfies $d_H(\mathcal{C}) \ge d_{\min}$ and $d_H^{RC}(\mathcal{C}) \ge d_{\min}$.
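The whole construction can be mechanized. The sketch below is an illustration under assumptions of my own: it takes $\mathcal{B}$ to be the even-weight binary code of length $n = 6$ (closed under complementation, $d_{\min} = 2$), builds $\mathcal{B}_w$, $\mathcal{E}_w$ and $\mathcal{O}$ for the single admissible weight $w = 2$, assembles DNA codewords by interleaving an even-component word with an odd-component word and inverting the binary mapping, and spot-checks the GC-content and distance claims on random samples:

```python
import random
from itertools import product

n = 6
# B: the even-weight binary code of length n (closed under complementation, d_min = 2).
B = [tuple((x >> i) & 1 for i in range(n)) for x in range(2**n)
     if bin(x).count('1') % 2 == 0]
M, d_min = len(B), 2
w = 2                                             # the only weight here with n >= 2w + d_min/2
B_w = [c for c in B if sum(c) == w]

comp = lambda c: tuple(1 - b for b in c)               # binary complement
rc_bin = lambda c: tuple(1 - b for b in reversed(c))   # binary reverse-complement

E_w = [a + comp(b) for a, b in product(B_w, repeat=2)]            # even components
O = [a + rc_bin(b) for a, b in product(B, repeat=2) if a < b]     # odd components (a <_lex b)

BITS_TO_BASE = {(0, 0): 'A', (0, 1): 'T', (1, 0): 'C', (1, 1): 'G'}
def assemble(ev, od):
    """Interleave an even and an odd component and invert the binary mapping."""
    return ''.join(BITS_TO_BASE[(e, o)] for e, o in zip(ev, od))

code = [assemble(ev, od) for ev, od in product(E_w, O)]
assert len(code) == len(B_w)**2 * M * (M - 1) // 2                # codeword count
assert all(sum(b in 'GC' for b in x) == n for x in code)          # constant GC-content n

comp_dna = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}
rc_dna = lambda q: ''.join(comp_dna[b] for b in reversed(q))
d_h = lambda x, y: sum(a != b for a, b in zip(x, y))

for _ in range(2000):                                             # random spot checks
    x, y = random.sample(code, 2)
    assert d_h(x, y) >= d_min and d_h(x, rc_dna(y)) >= d_min
```

With this modest choice of $\mathcal{B}$ the distance guarantee is only $d_{\min} = 2$; binary codes with larger minimum distance that are closed under complementation yield correspondingly stronger DNA codes.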

Given the examples above, one may ask what the future potential of DNA-based computers could be.

Despite its enormous potential, DNA computing is unlikely to be adopted for home or office computers, because the flexibility, speed, and cost of today's silicon-based devices remain decisively in their favor.[2]

However, such a method could be used in situations where it is the only available option and where the accuracy associated with the DNA hybridization mechanism is required, that is, in applications demanding that operations be performed with a high degree of reliability.

Currently, there are several software packages, such as the Vienna package,[7] that can predict secondary structure formation in single-stranded DNA (i.e. oligonucleotides) or RNA sequences.

References

{{Reflist}}