Hadamard product (matrices)

The Hadamard product operates on identically shaped matrices and produces a third matrix of the same dimensions.

In mathematics, the Hadamard product (also known as the element-wise product, entrywise product[1] or Schur product[2]) is a binary operation that takes two matrices of the same dimensions and returns a matrix of the multiplied corresponding elements. This operation can be thought of as a "naive matrix multiplication" and is different from the matrix product. It is attributed to, and named after, either French mathematician Jacques Hadamard or German mathematician Issai Schur.

The Hadamard product is associative and distributive. Unlike the matrix product, it is also commutative.[3]

Definition

For two matrices $A$ and $B$ of the same dimension $m \times n$, the Hadamard product $A \odot B$ (sometimes $A \circ B$[4][5][6]) is a matrix of the same dimension as the operands, with elements given by[3]

$(A \odot B)_{ij} = (A)_{ij}(B)_{ij}.$

For matrices of different dimensions ($m \times n$ and $p \times q$, where $m \neq p$ or $n \neq q$), the Hadamard product is undefined.

For example, the Hadamard product of two arbitrary 2 × 3 matrices is:

$\begin{bmatrix} 2 & 3 & 1 \\ 0 & 8 & 2 \end{bmatrix} \odot \begin{bmatrix} 3 & 1 & 4 \\ 7 & 9 & 5 \end{bmatrix} = \begin{bmatrix} 2\times 3 & 3\times 1 & 1\times 4 \\ 0\times 7 & 8\times 9 & 2\times 5 \end{bmatrix} = \begin{bmatrix} 6 & 3 & 4 \\ 0 & 72 & 10 \end{bmatrix}$
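
The same computation can be reproduced in NumPy (whose * operator is element-wise, as discussed under "In programming languages" below); this is only an illustrative sketch:

    import numpy as np

    A = np.array([[2, 3, 1],
                  [0, 8, 2]])
    B = np.array([[3, 1, 4],
                  [7, 9, 5]])

    # Element-wise (Hadamard) product: each entry is the product of
    # the corresponding entries of A and B.
    H = A * B          # equivalently: np.multiply(A, B)
    print(H)
    # [[ 6  3  4]
    #  [ 0 72 10]]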

Properties

  • The Hadamard product is commutative (when working with a commutative ring), associative and distributive over addition. That is, if A, B, and C are matrices of the same size and k is a scalar:
    $A \odot B = B \odot A,$
    $A \odot (B \odot C) = (A \odot B) \odot C,$
    $A \odot (B + C) = A \odot B + A \odot C,$
    $(kA) \odot B = A \odot (kB) = k(A \odot B),$
    $A \odot 0 = 0 \odot A = 0.$
  • The identity matrix under Hadamard multiplication of two $m \times n$ matrices is the $m \times n$ matrix of ones, in which every element is equal to 1. This is different from the identity matrix under regular matrix multiplication, where only the elements of the main diagonal are equal to 1. Furthermore, a matrix has an inverse under Hadamard multiplication if and only if all of its elements are invertible, or equivalently over a field, if and only if none of its elements are equal to zero.[7]
  • For vectors $\mathbf{x}$ and $\mathbf{y}$, and corresponding diagonal matrices $D_\mathbf{x}$ and $D_\mathbf{y}$ with these vectors as their main diagonals, the following identity holds:[1]
    $\mathbf{x}^{*}(A \odot B)\mathbf{y} = \operatorname{tr}\!\left(D_\mathbf{x}^{*} A D_\mathbf{y} B^{\mathsf T}\right),$
    where $\mathbf{x}^{*}$ denotes the conjugate transpose of $\mathbf{x}$. In particular, using vectors of ones, this shows that the sum of all elements in the Hadamard product is the trace of $AB^{\mathsf T}$, where superscript T denotes the matrix transpose, that is,
    $\operatorname{tr}\!\left(AB^{\mathsf T}\right) = \mathbf{1}^{\mathsf T}(A \odot B)\mathbf{1}.$
    A related result for square $A$ and $B$ is that the row-sums of their Hadamard product are the diagonal elements of $AB^{\mathsf T}$, and the column-sums are the diagonal elements of $B^{\mathsf T}A$:[8]
    $\sum_j (A \odot B)_{ij} = \left(AB^{\mathsf T}\right)_{ii}, \qquad \sum_i (A \odot B)_{ij} = \left(B^{\mathsf T}A\right)_{jj}.$
    Similarly,
    $(\mathbf{y}\mathbf{x}^{*}) \odot A = D_\mathbf{y} A D_\mathbf{x}^{*}.$
    Furthermore, a Hadamard matrix-vector product can be expressed as
    $(A \odot B)\mathbf{y} = \operatorname{diag}\!\left(A D_\mathbf{y} B^{\mathsf T}\right),$
    where $\operatorname{diag}(M)$ is the vector formed from the diagonal of the matrix $M$. (A numerical spot-check of several of these identities is sketched after this list.)
  • The Hadamard product is a principal submatrix of the Kronecker product.[9][10][11]
  • The Hadamard product satisfies the rank inequality $\operatorname{rank}(A \odot B) \le \operatorname{rank}(A)\,\operatorname{rank}(B)$.
  • If A and B are positive-definite matrices, then the following inequality involving the Hadamard product holds:[12]
    $\prod_{i=k}^{n} \lambda_i(A \odot B) \ge \prod_{i=k}^{n} \lambda_i(AB), \qquad k = 1, \dots, n,$
    where $\lambda_i(A)$ is the $i$-th largest eigenvalue of A.
  • If D and E are diagonal matrices, then[13]
    $D(A \odot B)E = (DAE) \odot B = (DA) \odot (BE) = (AE) \odot (DB) = A \odot (DBE).$
  • The Hadamard product of two vectors $\mathbf{a}$ and $\mathbf{b}$ is the same as matrix multiplication of the corresponding diagonal matrix of one vector by the other vector:
    $\mathbf{a} \odot \mathbf{b} = D_\mathbf{a}\mathbf{b} = D_\mathbf{b}\mathbf{a}.$
  • The vector-to-diagonal-matrix operator $\operatorname{diag}$ may be expressed using the Hadamard product as
    $\operatorname{diag}(\mathbf{a}) = (\mathbf{a}\mathbf{1}^{\mathsf T}) \odot I,$
    where $\mathbf{1}$ is a constant vector with elements 1 and $I$ is the identity matrix.
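
The identities above can be spot-checked numerically. The following NumPy sketch verifies a few of them on small random matrices; the sizes, random seed and tolerances are arbitrary choices made for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    Dx, Dy = np.diag(x), np.diag(y)

    # x* (A ⊙ B) y == tr(Dx* A Dy B^T)
    lhs = x.conj() @ (A * B) @ y
    rhs = np.trace(Dx.conj().T @ A @ Dy @ B.T)
    assert np.isclose(lhs, rhs)

    # Sum of all elements of A ⊙ B equals tr(A B^T).
    assert np.isclose((A * B).sum(), np.trace(A @ B.T))

    # Row sums of A ⊙ B are the diagonal elements of A B^T.
    assert np.allclose((A * B).sum(axis=1), np.diag(A @ B.T))

    # a ⊙ b == D_a b == D_b a
    a, b = rng.standard_normal(n), rng.standard_normal(n)
    assert np.allclose(a * b, np.diag(a) @ b)
    assert np.allclose(a * b, np.diag(b) @ a)

    # diag(a) == (a 1^T) ⊙ I
    ones = np.ones(n)
    assert np.allclose(np.diag(a), np.outer(a, ones) * np.eye(n))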

The mixed-product property

The Hadamard product obeys certain relationships with other matrix product operators.

  • If $\otimes$ denotes the Kronecker product, and A has the same dimensions as C and B the same dimensions as D, then
    $(A \otimes B) \odot (C \otimes D) = (A \odot C) \otimes (B \odot D),$
    as illustrated by the sketch below.
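
A quick numerical spot-check of this mixed-product identity, sketched in NumPy with arbitrary small random matrices:

    import numpy as np

    rng = np.random.default_rng(1)
    # A and C share dimensions, as do B and D.
    A, C = rng.standard_normal((2, 3)), rng.standard_normal((2, 3))
    B, D = rng.standard_normal((4, 2)), rng.standard_normal((4, 2))

    lhs = np.kron(A, B) * np.kron(C, D)   # (A ⊗ B) ⊙ (C ⊗ D)
    rhs = np.kron(A * C, B * D)           # (A ⊙ C) ⊗ (B ⊙ D)
    assert np.allclose(lhs, rhs)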

Schur product theorem

The Hadamard product of two positive-semidefinite matrices is positive-semidefinite.[3][8] This is known as the Schur product theorem,[7] after Issai Schur. For two positive-semidefinite matrices A and B, it is also known that the determinant of their Hadamard product is greater than or equal to the product of their respective determinants:[8]
$\det(A \odot B) \ge \det(A)\,\det(B).$
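
Both statements can be checked numerically. The sketch below generates positive-semidefinite matrices as G Gᵀ (an arbitrary construction chosen for illustration) and tests the eigenvalue signs and the determinant inequality with NumPy:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5

    def random_psd(n):
        # G @ G.T is always positive semidefinite.
        G = rng.standard_normal((n, n))
        return G @ G.T

    A, B = random_psd(n), random_psd(n)
    H = A * B                                   # Hadamard product

    # Schur product theorem: all eigenvalues of A ⊙ B are >= 0.
    assert np.all(np.linalg.eigvalsh(H) >= -1e-10)

    # det(A ⊙ B) >= det(A) det(B)
    assert np.linalg.det(H) >= np.linalg.det(A) * np.linalg.det(B) - 1e-10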

Analogous operations

Other Hadamard operations are also seen in the mathematical literature,[15] namely the Hadamard root and Hadamard power (which are in effect the same thing because of fractional exponents), defined for a matrix such that:

For $B = A^{\circ 2}$:   $B_{ij} = A_{ij}^{2},$

and for $B = A^{\circ\frac{1}{2}}$:   $B_{ij} = A_{ij}^{\frac{1}{2}}.$

The Hadamard inverse reads:[15]   $B = A^{\circ -1}, \qquad B_{ij} = A_{ij}^{-1}.$

A Hadamard division is defined as:[16][17]

$C = A \oslash B, \qquad C_{ij} = \frac{A_{ij}}{B_{ij}}.$
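
In element-wise array notation these operations are immediate. A minimal NumPy sketch, with an arbitrary example matrix:

    import numpy as np

    A = np.array([[1.0, 4.0],
                  [9.0, 16.0]])
    B = np.array([[2.0, 4.0],
                  [8.0, 16.0]])

    A_pow2 = A ** 2          # Hadamard power:    entries A_ij ** 2
    A_root = np.sqrt(A)      # Hadamard root:     entries A_ij ** 0.5
    A_inv = 1.0 / A          # Hadamard inverse:  entries A_ij ** -1
    C = A / B                # Hadamard division: entries A_ij / B_ij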

In programming languages

Most scientific or numerical programming languages include the Hadamard product, under various names.

In MATLAB, the Hadamard product is expressed as "dot multiply": a .* b, or the function call: times(a, b).[18] It also has analogous dot operators, for example a .^ b and a ./ b.[19] Because of this mechanism, it is possible to reserve * and ^ for matrix multiplication and matrix powers, respectively.

The programming language Julia has syntax similar to MATLAB's, where Hadamard multiplication is called broadcast multiplication and is also denoted with a .* b; other operators are defined element-wise analogously, for example Hadamard powers use a .^ b.[20] But unlike MATLAB, in Julia this "dot" syntax is generalized with a generic broadcasting operator ., which can apply any function element-wise. This includes both binary operators (such as the aforementioned multiplication and exponentiation, as well as any other binary operator such as the Kronecker product) and unary operators such as ! and √. Thus, any function in prefix notation f can be applied as f.(x).[21]

Python does not have built-in array support, leading to inconsistent and conflicting notations. The NumPy numerical library interprets a*b or np.multiply(a, b) as the Hadamard product, and uses a@b or np.matmul(a, b) for the matrix product. With the SymPy symbolic library, multiplication of Matrix objects as either a*b or a@b will produce the matrix product; the Hadamard product can be obtained with the method call a.multiply_elementwise(b).[22] Some Python packages include support for Hadamard powers using methods like np.power(a, b), or the Pandas method a.pow(b).
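
A short sketch contrasting the two conventions described above, using NumPy arrays and SymPy Matrix objects:

    import numpy as np
    import sympy as sp

    a = np.array([[1, 2], [3, 4]])
    b = np.array([[5, 6], [7, 8]])
    print(a * b)   # Hadamard product: [[ 5 12] [21 32]]
    print(a @ b)   # matrix product:   [[19 22] [43 50]]

    A = sp.Matrix([[1, 2], [3, 4]])
    B = sp.Matrix([[5, 6], [7, 8]])
    print(A * B)                        # matrix product for SymPy Matrix objects
    print(A.multiply_elementwise(B))    # Hadamard product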

In C++, the Eigen library provides a cwiseProduct member function for the Matrix class (a.cwiseProduct(b)), while the Armadillo library uses the operator % to make compact expressions (a % b; a * b is a matrix product).

In GAUSS and HP Prime, the operation is known as array multiplication.

In Fortran, R, APL, J and Wolfram Language (Mathematica), the multiplication operator * or × applies the Hadamard product, whereas the matrix product is written using matmul, %*%, +.×, +/ .* and ., respectively. The R package matrixcalc introduces the function hadamard.prod() for the Hadamard product of numeric matrices or vectors.[23]

Applications

The Hadamard product appears in lossy compression algorithms such as JPEG. The decoding step involves an entry-for-entry product, in other words the Hadamard product.[citation needed]

In image processing, the Hadamard operator can be used for enhancing, suppressing or masking image regions. One matrix represents the original image, the other acts as a weight or masking matrix.
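
As an illustration, the sketch below applies a binary mask to a toy "image" with NumPy; the array contents are made up for the example:

    import numpy as np

    # A toy 4x4 grayscale "image" and a binary mask selecting its left half.
    image = np.arange(16, dtype=float).reshape(4, 4)
    mask = np.zeros((4, 4))
    mask[:, :2] = 1.0

    masked = image * mask   # Hadamard product: pixels outside the mask become 0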

It is used in the machine learning literature, for example to describe the architecture of recurrent neural networks such as GRUs or LSTMs.[24]

It is also used to study the statistical properties of random vectors and matrices.[25][26]

The penetrating face product

The penetrating face product of matrices

According to the definition of V. Slyusar, the penetrating face product of the $p \times g$ matrix $A$ and the n-dimensional matrix $B$ (n > 1) with $p \times g$ blocks ($B = [B_n]$) is a matrix of the same size as $B$, of the form:[27]
$A [\odot] B = \left[\, A \odot B_1 \quad A \odot B_2 \quad \cdots \quad A \odot B_n \,\right].$

Example

If
$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}, \qquad B = \left[\, B_1 \; B_2 \; B_3 \,\right] = \left[\begin{array}{ccc|ccc|ccc} 1 & 4 & 7 & 2 & 8 & 14 & 3 & 12 & 21 \\ 8 & 20 & 5 & 10 & 25 & 40 & 12 & 30 & 6 \\ 2 & 8 & 3 & 2 & 4 & 2 & 7 & 3 & 9 \end{array}\right]$

then

$A [\odot] B = \left[\begin{array}{ccc|ccc|ccc} 1 & 8 & 21 & 2 & 16 & 42 & 3 & 24 & 63 \\ 32 & 100 & 30 & 40 & 125 & 240 & 48 & 150 & 36 \\ 14 & 64 & 27 & 14 & 32 & 18 & 49 & 24 & 81 \end{array}\right].$
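
This example can be reproduced with the following NumPy sketch, which assumes B is stored as a list of equally sized blocks; the helper name penetrating_face_product is ours, not a library function:

    import numpy as np

    def penetrating_face_product(A, blocks):
        # Hadamard-multiply A into every block of B = [B_1 B_2 ... B_n]
        # and place the results side by side.
        return np.hstack([A * Bk for Bk in blocks])

    A = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
    B1 = np.array([[1, 4, 7], [8, 20, 5], [2, 8, 3]])
    B2 = np.array([[2, 8, 14], [10, 25, 40], [2, 4, 2]])
    B3 = np.array([[3, 12, 21], [12, 30, 6], [7, 3, 9]])

    print(penetrating_face_product(A, [B1, B2, B3]))
    # [[  1   8  21   2  16  42   3  24  63]
    #  [ 32 100  30  40 125 240  48 150  36]
    #  [ 14  64  27  14  32  18  49  24  81]]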

Main properties

$A [\odot] B = B [\odot] A;$[27]
$M \bullet M = M [\odot] \left(M \otimes \mathbf{1}^{\mathsf T}\right),$

where $\bullet$ denotes the face-splitting product of matrices and $\otimes$ the Kronecker product,

$\mathbf{c} \bullet M = \mathbf{c} [\odot] M,$ where $\mathbf{c}$ is a vector. A numerical spot-check of these properties is sketched below.
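
Both properties can be checked numerically. The sketch below reuses the hypothetical penetrating_face_product helper from the previous example and adds a face-splitting (row-wise Kronecker) product helper, also ours, purely for the test:

    import numpy as np

    def penetrating_face_product(A, blocks):
        return np.hstack([A * Bk for Bk in blocks])

    def face_splitting_product(A, B):
        # Row-wise Kronecker product: row i of the result is kron(A_i, B_i).
        return np.vstack([np.kron(A[i], B[i]) for i in range(A.shape[0])])

    rng = np.random.default_rng(3)
    p, g = 3, 4
    M = rng.standard_normal((p, g))
    c = rng.standard_normal((p, 1))

    # M • M == M [⊙] (M ⊗ 1^T), viewing M ⊗ 1^T as g blocks of size p x g.
    ones = np.ones((1, g))
    K = np.kron(M, ones)                          # p x g^2
    blocks = [K[:, j * g:(j + 1) * g] for j in range(g)]
    assert np.allclose(face_splitting_product(M, M),
                       penetrating_face_product(M, blocks))

    # c • M == c [⊙] M, viewing M as g blocks of size p x 1 (its columns).
    cols = [M[:, [j]] for j in range(g)]
    assert np.allclose(face_splitting_product(c, M),
                       penetrating_face_product(c, cols))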

Applications

The penetrating face product is used in the tensor-matrix theory of digital antenna arrays.[27] This operation can also be used in artificial neural network models, specifically convolutional layers.[28]

See also

References

