Resultant

In mathematics, the resultant of two polynomials is a polynomial expression of their coefficients that is equal to zero if and only if the polynomials have a common root (possibly in a field extension), or, equivalently, a common factor (over their field of coefficients). In some older texts, the resultant is also called the eliminant.[1]

The resultant is widely used in number theory, either directly or through the discriminant, which is essentially the resultant of a polynomial and its derivative. The resultant of two polynomials with rational or polynomial coefficients may be computed efficiently on a computer. It is a basic tool of computer algebra, and is a built-in function of most computer algebra systems. It is used, among other applications, for cylindrical algebraic decomposition, integration of rational functions and drawing of curves defined by a bivariate polynomial equation.

The resultant of n homogeneous polynomials in n variables (also called multivariate resultant, or Macaulay's resultant to distinguish it from the usual resultant) is a generalization, introduced by Macaulay, of the usual resultant.[2] It is, with Gröbner bases, one of the main tools of elimination theory.

Notation

The resultant of two univariate polynomials A and B is commonly denoted res(A,B) or Res(A,B).

In many applications of the resultant, the polynomials depend on several indeterminates and may be considered as univariate polynomials in one of their indeterminates, with polynomials in the other indeterminates as coefficients. In this case, the indeterminate that is selected for defining and computing the resultant is indicated as a subscript: $\operatorname{res}_x(A,B)$ or $\operatorname{Res}_x(A,B)$.

The degrees of the polynomials are used in the definition of the resultant. However, a polynomial may also be considered as a polynomial of higher degree whose leading coefficients are zero. If such a higher degree is used for the resultant, it is usually indicated as a subscript or a superscript, such as $\operatorname{res}_{d,e}(A,B)$ or $\operatorname{res}_x^{d,e}(A,B)$.

Definition

The resultant of two univariate polynomials over a field or over a commutative ring is commonly defined as the determinant of their Sylvester matrix. More precisely, let \[A=a_0x^d+a_1x^{d-1}+\cdots+a_d\qquad\text{and}\qquad B=b_0x^e+b_1x^{e-1}+\cdots+b_e\] be nonzero polynomials of degrees d and e respectively. Let us denote by $\mathcal{P}_i$ the vector space (or free module if the coefficients belong to a commutative ring) of dimension i whose elements are the polynomials of degree strictly less than i. The map \[\varphi:\mathcal{P}_e\times\mathcal{P}_d\to\mathcal{P}_{d+e}\] such that \[\varphi(P,Q)=AP+BQ\] is a linear map between two spaces of the same dimension. Over the basis of the powers of x (listed in descending order), this map is represented by a square matrix of dimension $d+e$, which is called the Sylvester matrix of A and B (for many authors and in the article Sylvester matrix, the Sylvester matrix is defined as the transpose of this matrix; this convention is not used here, as it breaks the usual convention for writing the matrix of a linear map).

The resultant of A and B is thus the determinant \[\begin{vmatrix}
a_0 & 0 & \cdots & 0 & b_0 & 0 & \cdots & 0\\
a_1 & a_0 & \cdots & 0 & b_1 & b_0 & \cdots & 0\\
a_2 & a_1 & \ddots & 0 & b_2 & b_1 & \ddots & 0\\
\vdots & \vdots & \ddots & a_0 & \vdots & \vdots & \ddots & b_0\\
a_d & a_{d-1} & \cdots & \vdots & b_e & b_{e-1} & \cdots & \vdots\\
0 & a_d & \ddots & \vdots & 0 & b_e & \ddots & \vdots\\
\vdots & \vdots & \ddots & a_{d-1} & \vdots & \vdots & \ddots & b_{e-1}\\
0 & 0 & \cdots & a_d & 0 & 0 & \cdots & b_e
\end{vmatrix},\] which has e columns of $a_i$'s and d columns of $b_j$'s (the fact that the first column of $a$'s and the first column of $b$'s are drawn with the same length, that is, as if $d=e$, is here only for simplifying the display of the determinant). For instance, taking $d=3$ and $e=2$ we get \[\begin{vmatrix}
a_0 & 0 & b_0 & 0 & 0\\
a_1 & a_0 & b_1 & b_0 & 0\\
a_2 & a_1 & b_2 & b_1 & b_0\\
a_3 & a_2 & 0 & b_2 & b_1\\
0 & a_3 & 0 & 0 & b_2
\end{vmatrix}.\]
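
The construction can be illustrated with a short computational sketch (added here for illustration; it is not part of the original article). It uses the SymPy library; the helper function sylvester below is ad hoc, and the check relies on the fact that SymPy's built-in resultant agrees with the determinant of this matrix.

```python
# Illustrative sketch (not from the article): build the Sylvester matrix for
# d = 3, e = 2 as displayed above, and compare its determinant with the
# resultant computed by SymPy.  The helper "sylvester" is ad hoc.
from sympy import symbols, Matrix, resultant, expand

x, a0, a1, a2, a3, b0, b1, b2 = symbols('x a0 a1 a2 a3 b0 b1 b2')
A = a0*x**3 + a1*x**2 + a2*x + a3          # degree d = 3
B = b0*x**2 + b1*x + b2                    # degree e = 2

def sylvester(coeffs_A, coeffs_B):
    """(d+e) x (d+e) matrix with e columns of A-coefficients and d columns of B-coefficients."""
    d, e = len(coeffs_A) - 1, len(coeffs_B) - 1
    M = [[0] * (d + e) for _ in range(d + e)]
    for j in range(e):                      # columns built from the coefficients of A
        for i, c in enumerate(coeffs_A):
            M[i + j][j] = c
    for j in range(d):                      # columns built from the coefficients of B
        for i, c in enumerate(coeffs_B):
            M[i + j][e + j] = c
    return Matrix(M)

S = sylvester([a0, a1, a2, a3], [b0, b1, b2])
print(expand(S.det() - resultant(A, B, x)))   # prints 0: both expressions coincide
```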

If the coefficients of the polynomials belong to an integral domain, then \[\operatorname{res}(A,B)=a_0^e\,b_0^d\prod_{\substack{1\le i\le d\\ 1\le j\le e}}(\lambda_i-\mu_j)=a_0^e\prod_{i=1}^d B(\lambda_i)=(-1)^{de}\,b_0^d\prod_{j=1}^e A(\mu_j),\] where $\lambda_1,\ldots,\lambda_d$ and $\mu_1,\ldots,\mu_e$ are respectively the roots, counted with their multiplicities, of A and B in any algebraically closed field containing the integral domain. This is a straightforward consequence of the characterizing properties of the resultant that appear below. In the common case of integer coefficients, the algebraically closed field is generally chosen as the field of complex numbers.
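
For example (an illustration added here, not part of the original text), take $A=x^2-1$, with roots $\lambda_{1,2}=\pm1$, and $B=x^2-4$, with roots $\mu_{1,2}=\pm2$, so that $a_0=b_0=1$ and $d=e=2$. Then

\[
\operatorname{res}(A,B)=\prod_{i,j}(\lambda_i-\mu_j)=(1-2)(1+2)(-1-2)(-1+2)=9,
\qquad
a_0^e\prod_{i=1}^{2}B(\lambda_i)=B(1)\,B(-1)=(-3)(-3)=9,
\]

so the different expressions agree, as they must.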

Properties

In this section and its subsections, A and B are two polynomials in x of respective degrees d and e, and their resultant is denoted res(A,B).

Characterizing properties

The following properties hold for the resultant of two polynomials with coefficients in a commutative ring R. If R is a field or, more generally, an integral domain, the resultant is the unique function of the coefficients of the two polynomials that satisfies these properties.

Zeros

  • The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common divisor of positive degree.
  • The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common root in an algebraically closed field containing the coefficients.
  • There exists a polynomial P of degree less than e and a polynomial Q of degree less than d such that $\operatorname{res}(A,B)=AP+BQ$. This is a generalization of Bézout's identity to polynomials over an arbitrary commutative ring. In other words, the resultant of two polynomials belongs to the ideal generated by these polynomials (a worked example is given below).
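
As a small worked illustration (added here; it is not part of the original text), take $A=x^2+1$ (so $d=2$) and $B=x+1$ (so $e=1$). The roots of A are $\pm i$, so $\operatorname{res}(A,B)=B(i)\,B(-i)=(1+i)(1-i)=2$, and indeed

\[
2=(x^2+1)\cdot 1+(x+1)(1-x),
\]

with $P=1$ of degree less than $e=1$ and $Q=1-x$ of degree less than $d=2$.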

Invariance by ring homomorphisms

Let A and B be two polynomials of respective degrees d and e with coefficients in a commutative ring R, and $\varphi:R\to S$ a ring homomorphism of R into another commutative ring S. Applying $\varphi$ to the coefficients of a polynomial extends $\varphi$ to a homomorphism of polynomial rings $R[x]\to S[x]$, which is also denoted $\varphi$. With this notation, we have:

  • If $\varphi$ preserves the degrees of A and B (that is, if $\deg(\varphi(A))=d$ and $\deg(\varphi(B))=e$), then $\varphi(\operatorname{res}(A,B))=\operatorname{res}(\varphi(A),\varphi(B))$.
  • If $\deg(\varphi(A))<d$ and $\deg(\varphi(B))<e$, then $\varphi(\operatorname{res}(A,B))=0$.
  • If $\deg(\varphi(A))=d$ and $\deg(\varphi(B))=f<e$, and the leading coefficient of A is $a_0$, then $\varphi(\operatorname{res}(A,B))=\varphi(a_0)^{\,e-f}\operatorname{res}(\varphi(A),\varphi(B))$.
  • If $\deg(\varphi(A))=f<d$ and $\deg(\varphi(B))=e$, and the leading coefficient of B is $b_0$, then $\varphi(\operatorname{res}(A,B))=(-1)^{e(d-f)}\varphi(b_0)^{\,d-f}\operatorname{res}(\varphi(A),\varphi(B))$.

These properties are easily deduced from the definition of the resultant as a determinant. They are mainly used in two situations. For computing a resultant of polynomials with integer coefficients, it is generally faster to compute it modulo several primes and to retrieve the desired resultant with the Chinese remainder theorem. When R is a polynomial ring in other indeterminates, and S is the ring obtained by specializing to numerical values some or all indeterminates of R, these properties may be restated as follows: if the degrees are preserved by the specialization, the resultant of the specialization of two polynomials is the specialization of the resultant. This property is fundamental, for example, for cylindrical algebraic decomposition.
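
The following SymPy sketch (added here for illustration; it is not part of the original article) checks the first and third properties on an arbitrarily chosen example.

```python
# Specializing the parameter y commutes with taking the resultant in x when the
# degrees are preserved; when deg(B) drops, a power of the leading coefficient
# of A appears, exactly as stated in the list above.
from sympy import symbols, resultant

x, y = symbols('x y')
A = 2*x**2 + 1            # degree d = 2 in x, leading coefficient a0 = 2
B = y*x**2 + x + 2        # degree e = 2 in x, but only degree f = 1 when y = 0

r = resultant(A, B, x)    # the resultant before specialization, a polynomial in y
print(r.subs(y, 3) == resultant(A.subs(y, 3), B.subs(y, 3), x))  # True: degrees preserved
print(r.subs(y, 0), resultant(A.subs(y, 0), B.subs(y, 0), x))    # 18 and 9: they differ by a0**(e-f) = 2
```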

Invariance under change of variable

  • For every element c of the coefficient ring, one has $\operatorname{res}\bigl(A(x+c),\,B(x+c)\bigr)=\operatorname{res}(A,B)$.
  • For every invertible element c of the coefficient ring, one has $\operatorname{res}\bigl(A(cx),\,B(cx)\bigr)=c^{\,de}\operatorname{res}(A,B)$.
  • If $A(0)\neq 0$ and $B(0)\neq 0$, and $A_r(x)=x^dA(1/x)$ and $B_r(x)=x^eB(1/x)$ denote the reversed polynomials, then $\operatorname{res}(A_r,B_r)=(-1)^{de}\operatorname{res}(A,B)$.

This means that the property of the resultant being zero is invariant under linear and projective changes of the variable.

Invariance under change of polynomials

  • $\operatorname{res}(A,B)=(-1)^{de}\operatorname{res}(B,A)$.

Let C be a polynomial of degree f, so that the degree of $A-CB$ can be deduced from the degrees of the given polynomials, except when A and CB have the same degree, that is, when $d=e+f$. Then:

  • If either B is monic or $d>e+f$, one has $\operatorname{res}(B,\,A-CB)=\operatorname{res}(B,A)$.
  • If $d<e+f$, one has $\operatorname{res}(B,\,A-CB)=b_0^{\,e+f-d}\operatorname{res}(B,A)$.

These properties imply that in the Euclidean algorithm for polynomials, and all its variants (pseudo-remainder sequences), the resultant of two successive remainders (or pseudo-remainders) differs from the resultant of the initial polynomials by a factor which is easy to compute. Conversely, this allows one to deduce the resultant of the initial polynomials from the value of the last remainder or pseudo-remainder. This is the starting idea of the subresultant pseudo-remainder sequence algorithm, which uses the above formulae for getting subresultant polynomials as pseudo-remainders, and the resultant as the last nonzero pseudo-remainder (provided that the resultant is not zero). This algorithm works for polynomials over the integers or, more generally, over an integral domain, without any division other than exact divisions (that is, without involving fractions). It involves $O(de)$ arithmetic operations, while the computation of the determinant of the Sylvester matrix with standard algorithms requires $O\bigl((d+e)^3\bigr)$ arithmetic operations.
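
The following sketch (added here for illustration; it is not part of the original article) implements this idea over the rational numbers, using the swap rule and the reduction formula of the previous subsection; the function name res_euclid is ad hoc, and SymPy is used only for polynomial division.

```python
# Resultant over QQ by repeated Euclidean division, using
#   res(A, B) = (-1)**(d*e) * lc(B)**(d - deg R) * res(B, R),   R = A mod B.
from sympy import symbols, Poly, QQ, resultant

x = symbols('x')

def res_euclid(A, B):
    A, B = Poly(A, x, domain=QQ), Poly(B, x, domain=QQ)
    d, e = A.degree(), B.degree()
    if e == 0:                       # B is a nonzero constant b0: res(A, b0) = b0**d
        return B.LC() ** d
    R = A.rem(B)                     # R = A - C*B with deg R < e
    if R.is_zero:                    # A and B have a common factor of positive degree
        return 0
    return (-1) ** (d * e) * B.LC() ** (d - R.degree()) * res_euclid(B, R)

A = 2*x**3 - 3*x + 5
B = x**2 + x - 2
print(res_euclid(A, B), resultant(A, B, x))   # both print -20
```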

Generic properties

In this section, we consider two polynomials \[A=a_0x^d+a_1x^{d-1}+\cdots+a_d\qquad\text{and}\qquad B=b_0x^e+b_1x^{e-1}+\cdots+b_e\] whose $d+e+2$ coefficients are distinct indeterminates. Let \[R=\mathbb{Z}[a_0,\ldots,a_d,b_0,\ldots,b_e]\] be the polynomial ring over the integers defined by these indeterminates. The resultant $\operatorname{res}(A,B)$ is often called the generic resultant for the degrees d and e. It has the following properties.

Homogeneity

The generic resultant for the degrees d and e is homogeneous in various ways. More precisely:

  • It is homogeneous of degree e in $a_0,\ldots,a_d$.
  • It is homogeneous of degree d in $b_0,\ldots,b_e$.
  • If every $a_i$ and every $b_i$ is given the weight i, it is isobaric of weight de.

Elimination property

Let $I=\langle A,B\rangle$ be the ideal generated by two polynomials A and B in a polynomial ring $R[x]$, where $R=k[y_1,\ldots,y_n]$ is itself a polynomial ring over a field. If at least one of A and B is monic in x, then:

  • The resultant $\operatorname{res}_x(A,B)$ belongs to $I\cap R$.
  • The ideals $I\cap R$ and $R\,\operatorname{res}_x(A,B)$ have exactly the same zeros; that is, they define the same algebraic set.

The first assertion is a basic property of the resultant (it follows from the identity $\operatorname{res}(A,B)=AP+BQ$ above). The second one can be proved as follows.

As at least one of A and B is monic, a tuple $(\beta_1,\ldots,\beta_n)$ is a zero of $\operatorname{res}_x(A,B)$ if and only if there exists $\alpha$ such that $(\beta_1,\ldots,\beta_n,\alpha)$ is a common zero of A and B. Such a common zero is also a zero of all elements of $I\cap R$. Conversely, if $(\beta_1,\ldots,\beta_n)$ is a common zero of the elements of $I\cap R$, it is a zero of the resultant, and there exists $\alpha$ such that $(\beta_1,\ldots,\beta_n,\alpha)$ is a common zero of A and B. So $I\cap R$ and $R\,\operatorname{res}_x(A,B)$ have exactly the same zeros.

Computation

Theoretically, the resultant could be computed by using the formula expressing it as a product of differences of roots. However, as the roots may generally not be computed exactly, such an algorithm would be inefficient and numerically unstable. As the resultant is a symmetric function of the roots of each polynomial, it could also be computed by using the fundamental theorem of symmetric polynomials, but this would be highly inefficient.

As the resultant is the determinant of the Sylvester matrix (and of the Bézout matrix), it may be computed by using any algorithm for computing determinants. This needs $O\bigl((d+e)^3\bigr)$ arithmetic operations. As algorithms with a better complexity are known (see below), this method is not used in practice.

It follows from the invariance of the resultant under changes of polynomials (see above) that the computation of a resultant is strongly related to the Euclidean algorithm for polynomials. This shows that the computation of the resultant of two polynomials of degrees d and e may be done in $O(de)$ arithmetic operations in the field of coefficients.

However, when the coefficients are integers, rational numbers or polynomials, these arithmetic operations entail a number of GCD computations of coefficients which is of the same order, and this makes the algorithm inefficient. The subresultant pseudo-remainder sequences were introduced to solve this problem and avoid any fraction and any GCD computation of coefficients. A more efficient algorithm is obtained by using the good behavior of the resultant under a ring homomorphism on the coefficients: to compute a resultant of two polynomials with integer coefficients, one computes their resultants modulo sufficiently many prime numbers and then reconstructs the result with the Chinese remainder theorem.
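
A self-contained sketch of the modular approach (added here for illustration; it is not part of the original article): the resultant of two integer polynomials is computed modulo a few primes by a Euclidean-type algorithm over $\mathbb{Z}/p$, then reconstructed with the Chinese remainder theorem. The primes and the example polynomials are arbitrary; a real implementation would choose the number of primes from a bound on the resultant, such as Hadamard's bound on the Sylvester determinant.

```python
# Polynomials are lists of integer coefficients, highest degree first.
# The primes must not divide the leading coefficients, so that degrees are preserved.

def resultant_mod_p(A, B, p):
    """Resultant of A and B over Z/p, by the Euclidean-type recurrence
       res(A, B) = (-1)**(d*e) * lc(B)**(d - deg R) * res(B, R),  R = A mod B."""
    A, B = [a % p for a in A], [b % p for b in B]
    acc = 1
    while True:
        d, e = len(A) - 1, len(B) - 1
        if e == 0:                              # B is a nonzero constant b0
            return acc * pow(B[0], d, p) % p
        R = A[:]                                # remainder of A by B over Z/p
        while len(R) - 1 >= e:
            if R[0] == 0:
                R.pop(0); continue
            q = R[0] * pow(B[0], -1, p) % p
            for i in range(e + 1):
                R[i] = (R[i] - q * B[i]) % p
            R.pop(0)
        while len(R) > 1 and R[0] == 0:         # strip leading zeros
            R.pop(0)
        if R == [0]:
            return 0                            # common factor of positive degree
        g = len(R) - 1
        acc = acc * (-1) ** (d * e) * pow(B[0], d - g, p) % p
        A, B = B, R

def crt(r1, m1, r2, m2):
    """Combine r mod m1 and r mod m2 (m1, m2 coprime) into r mod m1*m2."""
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return (r1 + m1 * t) % (m1 * m2), m1 * m2

A = [3, 0, -2, 1, -7]                           # 3x^4 - 2x^2 + x - 7
B = [2, 1, 0, -5]                               # 2x^3 + x^2 - 5
r, m = 0, 1
for p in (10007, 10009, 10037):                 # more than enough primes for this small example
    r, m = crt(r, m, resultant_mod_p(A, B, p), p)
if r > m // 2:                                  # symmetric lift: the resultant may be negative
    r -= m
print(r)                                        # the resultant of A and B over the integers
```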

The use of fast multiplication of integers and polynomials allows algorithms for resultants and greatest common divisors that have a better time complexity, which is of the order of the complexity of the multiplication, multiplied by the logarithm of the size of the input, that is, $\log\bigl(s(d+e)\bigr)$, where s is an upper bound on the number of digits of the coefficients of the input polynomials.

Application to polynomial systems

Resultants were introduced for solving systems of polynomial equations and provide the oldest proof that there exist algorithms for solving such systems. These are primarily intended for systems of two equations in two unknowns, but also allow solving general systems.

Case of two equations in two unknowns

Consider the system of two polynomial equations \[P(x,y)=0,\qquad Q(x,y)=0,\] where P and Q are polynomials of respective total degrees d and e. Then $R=\operatorname{res}_y^{d,e}(P,Q)$ is a polynomial in x, which is generically of degree de (by the homogeneity properties of the generic resultant). A value $\alpha$ of x is a root of R if and only if either there exists $\beta$ in an algebraically closed field containing the coefficients such that $P(\alpha,\beta)=Q(\alpha,\beta)=0$, or $\deg(P(\alpha,y))<d$ and $\deg(Q(\alpha,y))<e$ (in this case, one says that P and Q have a common root at infinity for $x=\alpha$).

Therefore, solutions to the system are obtained by computing the roots of R, and for each root $\alpha$, computing the common root(s) of $P(\alpha,y)$, $Q(\alpha,y)$, and $\operatorname{res}_x(P,Q)$.
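
A short SymPy illustration (added here; it is not part of the original article), with two arbitrarily chosen curves:

```python
# Solve P = Q = 0 by eliminating y with a resultant, then recover y for each x.
from sympy import symbols, resultant, roots, gcd

x, y = symbols('x y')
P = x**2 + y**2 - 5               # a circle
Q = x*y - 2                       # a hyperbola

R = resultant(P, Q, y)            # a polynomial in x: x**4 - 5*x**2 + 4
for a in roots(R, x):             # its roots 1, -1, 2, -2 are the x-coordinates
    # the common roots in y of P(a, y) and Q(a, y) are the roots of their gcd
    print(a, roots(gcd(P.subs(x, a), Q.subs(x, a)), y))
```

This prints, for each x-coordinate, the corresponding value of y with its multiplicity; the four solutions are $(1,2)$, $(-1,-2)$, $(2,1)$ and $(-2,-1)$, in accordance with Bézout's theorem for two conics with no common point at infinity.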

Bézout's theorem results from the bound $\deg\bigl(\operatorname{res}_y(P,Q)\bigr)\le de$, the product of the degrees of P and Q. In fact, after a linear change of variables, one may suppose that, for each root $\alpha$ of the resultant, there is exactly one value of y such that $(\alpha,y)$ is a common zero of P and Q. This shows that the number of common zeros is at most the degree of the resultant, that is, at most the product of the degrees of P and Q. With some technicalities, this proof may be extended to show that, counting multiplicities and zeros at infinity, the number of zeros is exactly the product of the degrees.

General case

At first glance, it seems that resultants may be applied to a general polynomial system of equations \[P_1(x_1,\ldots,x_n)=0,\;\ldots,\;P_k(x_1,\ldots,x_n)=0\] by computing the resultants of every pair $(P_i,P_j)$ with respect to $x_n$ for eliminating one unknown, and repeating the process until getting univariate polynomials. Unfortunately, this introduces many spurious solutions, which are difficult to remove.

A method, introduced at the end of the 19th century, works as follows: introduce $k-1$ new indeterminates $U_2,\ldots,U_k$ and compute \[\operatorname{res}_{x_n}\bigl(P_1,\;U_2P_2+\cdots+U_kP_k\bigr).\] This is a polynomial in $U_2,\ldots,U_k$ whose coefficients are polynomials in $x_1,\ldots,x_{n-1}$, which have the property that $(\alpha_1,\ldots,\alpha_{n-1})$ is a common zero of these polynomial coefficients if and only if the univariate polynomials $P_i(\alpha_1,\ldots,\alpha_{n-1},x_n)$ have a common zero, possibly at infinity. This process may be iterated until finding univariate polynomials.

To obtain a correct algorithm, two complements have to be added to the method. Firstly, at each step, a linear change of variables may be needed so that the degrees of the polynomials in the last variable equal their total degrees. Secondly, if, at any step, the resultant is zero, this means that the polynomials have a common factor and that the solutions split into two components: one where the common factor is zero, and the other, which is obtained by factoring out this common factor before continuing.

This algorithm is very complicated and has a huge time complexity. Therefore, its interest is mainly historical.

Other applications

Number theory

The discriminant of a polynomial, which is a fundamental tool in number theory, is \[\frac{(-1)^{n(n-1)/2}}{a_0}\operatorname{res}_x\bigl(f(x),f'(x)\bigr),\] where $a_0$ is the leading coefficient of $f(x)$ and n its degree.
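
For instance (a worked illustration added here), for $f(x)=ax^2+bx+c$ with roots $\lambda_1,\lambda_2$ one has $f'(x)=2ax+b$, $n=2$ and $a_0=a$, and, using $\lambda_1+\lambda_2=-b/a$ and $\lambda_1\lambda_2=c/a$,

\[
\operatorname{res}_x(f,f')=a\,f'(\lambda_1)\,f'(\lambda_2)=a\bigl(4a^2\lambda_1\lambda_2+2ab(\lambda_1+\lambda_2)+b^2\bigr)=a\,(4ac-b^2),
\]

so that the discriminant is $\frac{(-1)^{2\cdot1/2}}{a}\,a\,(4ac-b^2)=b^2-4ac$, as expected.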

If $\alpha$ and $\beta$ are algebraic numbers such that $P(\alpha)=Q(\beta)=0$, then $\gamma=\alpha+\beta$ is a root of the resultant $\operatorname{res}_x\bigl(P(x),\,Q(z-x)\bigr)$, and $\tau=\alpha\beta$ is a root of $\operatorname{res}_x\bigl(P(x),\,x^nQ(z/x)\bigr)$, where n is the degree of $Q(y)$. Combined with the fact that $1/\beta$ is a root of $y^nQ(1/y)=0$, this shows that the set of algebraic numbers is a field.
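
For example (an added illustration), the minimal polynomial of $\sqrt2+\sqrt3$ can be obtained in this way:

```python
# The resultant eliminates x from P(x) = 0 and Q(z - x) = 0, leaving a
# polynomial in z that vanishes at every sum of a root of P and a root of Q.
from sympy import symbols, resultant

x, z = symbols('x z')
P = x**2 - 2                               # sqrt(2) is a root
Q = z**2 - 3                               # sqrt(3) is a root (as a polynomial in z)
print(resultant(P, Q.subs(z, z - x), x))   # z**4 - 10*z**2 + 1, which vanishes at sqrt(2) + sqrt(3)
```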

Let $K(\alpha)$ be an algebraic field extension generated by an element $\alpha$ which has $P(x)$ as minimal polynomial. Every element $\beta$ of $K(\alpha)$ may be written as $\beta=Q(\alpha)$, where Q is a polynomial. Then $\beta$ is a root of $\operatorname{res}_x\bigl(P(x),\,z-Q(x)\bigr)$, and this resultant is a power of the minimal polynomial of $\beta$.

Algebraic geometry

Given two plane algebraic curves defined as the zeros of the polynomials $P(x,y)$ and $Q(x,y)$, the resultant allows the computation of their intersection. More precisely, the roots of $\operatorname{res}_y(P,Q)$ are the x-coordinates of the intersection points and of the common vertical asymptotes, and the roots of $\operatorname{res}_x(P,Q)$ are the y-coordinates of the intersection points and of the common horizontal asymptotes.

A rational plane curve may be defined by a parametric equation \[x=\frac{P(t)}{R(t)},\qquad y=\frac{Q(t)}{R(t)},\] where P, Q and R are polynomials. An implicit equation of the curve is given by \[\operatorname{res}_t\bigl(xR-P,\;yR-Q\bigr).\] The degree of this curve is the highest degree of P, Q and R, which is equal to the total degree of the resultant.
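
For example (an added SymPy illustration), the rational parametrization of the unit circle gives back its implicit equation:

```python
# Implicitization of x = (1 - t**2)/(1 + t**2), y = 2t/(1 + t**2) by eliminating t.
from sympy import symbols, resultant, factor

x, y, t = symbols('x y t')
P, Q, R = 1 - t**2, 2*t, 1 + t**2
print(factor(resultant(x*R - P, y*R - Q, t)))   # 4*(x**2 + y**2 - 1): the unit circle, up to a constant factor
```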

Symbolic integration

In symbolic integration, for computing the antiderivative of a rational fraction, one uses partial fraction decomposition for decomposing the integral into a "rational part", which is a sum of rational fractions whose antiderivatives are rational fractions, and a "logarithmic part" which is a sum of rational fractions of the form \[\frac{P(x)}{Q(x)},\] where Q is a square-free polynomial and P is a polynomial of lower degree than Q. The antiderivative of such a function necessarily involves logarithms, and generally algebraic numbers (the roots of Q). In fact, the antiderivative is \[\int\frac{P(x)}{Q(x)}\,dx=\sum_{Q(\alpha)=0}\frac{P(\alpha)}{Q'(\alpha)}\log(x-\alpha),\] where the sum runs over all complex roots of Q.

The number of algebraic numbers involved in this expression is generally equal to the degree of Q, but it frequently occurs that an expression with fewer algebraic numbers may be computed. The Lazard–Rioboo–Trager method produces an expression in which the number of algebraic numbers is minimal, without any computation with algebraic numbers.

Let \[S_1(r)\,S_2(r)^2\cdots S_k(r)^k=\operatorname{res}_x\bigl(r\,Q'(x)-P(x),\;Q(x)\bigr)\] be the square-free factorization of the resultant which appears on the right-hand side. Trager proved that the antiderivative is \[\int\frac{P(x)}{Q(x)}\,dx=\sum_{i=1}^k\;\sum_{S_i(\alpha)=0}\alpha\log\bigl(T_i(\alpha,x)\bigr),\] where the internal sums run over the roots of the $S_i$ (if $S_i=1$, the sum is empty and thus zero), and $T_i(r,x)$ is a polynomial of degree i in x. The Lazard–Rioboo contribution is the proof that $T_i(r,x)$ is the subresultant of degree i of $r\,Q'(x)-P(x)$ and $Q(x)$. It is thus obtained for free if the resultant is computed by the subresultant pseudo-remainder sequence.
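
As a small worked illustration (added here, not part of the original text), take $P=1$ and $Q=x^2-2$, so that $r\,Q'(x)-P(x)=2rx-1$. Then

\[
\operatorname{res}_x\bigl(2rx-1,\;x^2-2\bigr)=(2r)^2\Bigl(\frac{1}{4r^2}-2\Bigr)=1-8r^2,
\]

which is square-free, so $k=1$ and $S_1(r)=1-8r^2$. For a root $r$ of $S_1$, the polynomials $2rx-1$ and $x^2-2$ have the common root $x=4r$, so $T_1(r,x)=x-4r$, and

\[
\int\frac{dx}{x^2-2}=\sum_{1-8r^2=0}r\,\log(x-4r)=\frac{1}{2\sqrt2}\,\log\bigl(x-\sqrt2\bigr)-\frac{1}{2\sqrt2}\,\log\bigl(x+\sqrt2\bigr).
\]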

Computer algebra

All preceding applications, and many others, show that the resultant is a fundamental tool in computer algebra. In fact most computer algebra systems include an efficient implementation of the computation of resultants.

Homogeneous resultant

The resultant is also defined for two homogeneous polynomials in two indeterminates. Given two homogeneous polynomials $P(x,y)$ and $Q(x,y)$ of respective total degrees p and q, their homogeneous resultant is the determinant of the matrix over the monomial basis of the linear map \[(A,B)\mapsto AP+BQ,\] where A runs over the bivariate homogeneous polynomials of degree $q-1$, and B runs over the homogeneous polynomials of degree $p-1$. In other words, the homogeneous resultant of P and Q is the resultant of $P(x,1)$ and $Q(x,1)$ when they are considered as polynomials of degrees p and q (their degree in x may be lower than their total degree): \[\operatorname{Res}\bigl(P(x,y),Q(x,y)\bigr)=\operatorname{res}_{p,q}\bigl(P(x,1),Q(x,1)\bigr).\] (The capitalization of "Res" is used here for distinguishing the two resultants, although there is no standard rule for the capitalization of the abbreviation.)
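
For instance (an added example), for $P=x^2-y^2$ (of degree $p=2$) and $Q=x-2y$ (of degree $q=1$),

\[
\operatorname{Res}(P,Q)=\operatorname{res}_{2,1}\bigl(P(x,1),Q(x,1)\bigr)=\operatorname{res}\bigl(x^2-1,\;x-2\bigr)=(1-2)(-1-2)=3\neq0,
\]

in accordance with the fact that the two forms have no common nonzero zero: their only common zero is $(0,0)$, which is not a point of the projective line.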

The homogeneous resultant has essentially the same properties as the usual resultant, with essentially two differences: instead of polynomial roots, one considers zeros in the projective line, and, since a homogeneous polynomial is considered together with its degree, that degree cannot change under a ring homomorphism. That is:

  • The homogeneous resultant of P and Q is zero if and only if P and Q have a common nonzero zero in an algebraically closed field containing the coefficients; equivalently, if and only if they have a common zero in the projective line.
  • If $\varphi$ is a homomorphism of the ring of coefficients into another commutative ring, then $\varphi\bigl(\operatorname{Res}(P,Q)\bigr)=\operatorname{Res}\bigl(\varphi(P),\varphi(Q)\bigr)$, without any condition on the degrees.

Any property of the usual resultant may similarly be extended to the homogeneous resultant, and the resulting property is either very similar to or simpler than the corresponding property of the usual resultant.

Macaulay's resultant

Macaulay's resultant, named after Francis Sowerby Macaulay, also called the multivariate resultant, or the multipolynomial resultant,[3] is a generalization of the homogeneous resultant to n homogeneous polynomials in n indeterminates. Macaulay's resultant is a polynomial in the coefficients of these n homogeneous polynomials that vanishes if and only if the polynomials have a common non-zero solution in an algebraically closed field containing the coefficients, or, equivalently, if the n hypersurfaces defined by the polynomials have a common zero in the $(n-1)$-dimensional projective space. The multivariate resultant is, with Gröbner bases, one of the main tools of effective elimination theory (elimination theory on computers).

Like the homogeneous resultant, Macaulay's resultant may be defined with determinants, and thus behaves well under ring homomorphisms. However, it cannot be defined by a single determinant, and it is therefore easier to define it first for generic polynomials.

Resultant of generic homogeneous polynomials

A homogeneous polynomial of degree d in n variables may have up to \[\binom{n+d-1}{n-1}=\frac{(n+d-1)!}{(n-1)!\,d!}\] coefficients; it is said to be generic if these coefficients are distinct indeterminates.

Let $P_1,\ldots,P_n$ be n generic homogeneous polynomials in n indeterminates, of respective degrees $d_1,\ldots,d_n$. Together, they involve \[\sum_{i=1}^n\binom{n+d_i-1}{n-1}\] indeterminate coefficients. Let C be the polynomial ring over the integers in all these indeterminate coefficients. The polynomials $P_1,\ldots,P_n$ thus belong to $C[x_1,\ldots,x_n]$, and their resultant (still to be defined) belongs to C.

The Macaulay degree is the integer $D=d_1+\cdots+d_n-n+1$, which is fundamental in Macaulay's theory. For defining the resultant, one considers the Macaulay matrix, which is the matrix over the monomial basis of the C-linear map \[(Q_1,\ldots,Q_n)\mapsto Q_1P_1+\cdots+Q_nP_n,\] in which each $Q_i$ runs over the homogeneous polynomials of degree $D-d_i$, and the codomain is the C-module of the homogeneous polynomials of degree D.

If $n=2$, the Macaulay matrix is the Sylvester matrix and is a square matrix, but this is no longer true for $n>2$. Thus, instead of considering the determinant, one considers all the maximal minors, that is, the determinants of the square submatrices that have as many rows as the Macaulay matrix. Macaulay proved that the C-ideal generated by these maximal minors is a principal ideal, which is generated by the greatest common divisor of these minors. As one is working with polynomials with integer coefficients, this greatest common divisor is defined up to its sign. The generic Macaulay resultant is the greatest common divisor which becomes 1 when, for each i, zero is substituted for all coefficients of $P_i$, except the coefficient of $x_i^{d_i}$, for which one is substituted.

Properties of the generic Macaulay resultant

  • The generic Macaulay resultant is an irreducible polynomial.
  • It is homogeneous of degree $B/d_i$ in the coefficients of $P_i$, where $B=d_1\cdots d_n$ is the Bézout bound.
  • The product of the resultant and any monomial of degree D in $x_1,\ldots,x_n$ belongs to the ideal of $C[x_1,\ldots,x_n]$ generated by $P_1,\ldots,P_n$.

Resultant of polynomials over a field

From now on, we consider that the homogeneous polynomials $P_1,\ldots,P_n$ of degrees $d_1,\ldots,d_n$ have their coefficients in a field k, that is, that they belong to $k[x_1,\ldots,x_n]$. Their resultant is defined as the element of k obtained by replacing, in the generic resultant, the indeterminate coefficients by the actual coefficients of the $P_i$.

The main property of the resultant is that it is zero if and only if $P_1,\ldots,P_n$ have a nonzero common zero in an algebraically closed extension of k.

The "only if" part of this theorem results from the last property of the preceding paragraph, and is an effective version of Projective Nullstellensatz: If the resultant is nonzero, then x1,,xnDP1,,Pn, where D=d1++dnn+1 is the Macaulay degree, and x1,,xn is the maximal homogeneous ideal. This implies that P1,,Pn have no other common zero than the unique common zero, Template:Math, of x1,,xn.

Computability

As the computation of a resultant may be reduced to computing determinants and polynomial greatest common divisors, there are algorithms for computing resultants in a finite number of steps.

However, the generic resultant is a polynomial of very high degree (exponential in n) depending on a huge number of indeterminates. It follows that, except for very small n and very small degrees of the input polynomials, the generic resultant is, in practice, impossible to compute, even with modern computers. Moreover, the number of monomials of the generic resultant is so high that, even if it were computable, the result could not be stored on available memory devices, even for rather small values of n and of the degrees of the input polynomials.

Therefore, computing the resultant makes sense only for polynomials whose coefficients belong to a field or are polynomials in few indeterminates over a field.

In the case of input polynomials with coefficients in a field, the exact value of the resultant is rarely important; only its equality (or not) to zero matters. As the resultant is zero if and only if the rank of the Macaulay matrix is lower than the number of its rows, this equality to zero may be tested by applying Gaussian elimination to the Macaulay matrix. This provides a computational complexity $d^{O(n)}$, where d is the maximum degree of the input polynomials.

Another case where the computation of the resultant may provide useful information is when the coefficients of the input polynomials are polynomials in a small number of indeterminates, often called parameters. In this case, the resultant, if not zero, defines a hypersurface in the parameter space. A point belongs to this hypersurface if and only if there are values of $x_1,\ldots,x_n$ which, together with the coordinates of the point, are a zero of the input polynomials. In other words, the resultant is the result of the "elimination" of $x_1,\ldots,x_n$ from the input polynomials.

U-resultant

Macaulay's resultant provides a method, called "U-resultant" by Macaulay, for solving systems of polynomial equations.

Given $n-1$ homogeneous polynomials $P_1,\ldots,P_{n-1}$, of degrees $d_1,\ldots,d_{n-1}$, in n indeterminates $x_1,\ldots,x_n$, over a field k, their U-resultant is the resultant of the n polynomials $P_1,\ldots,P_{n-1},P_n$, where \[P_n=u_1x_1+\cdots+u_nx_n\] is the generic linear form whose coefficients are new indeterminates $u_1,\ldots,u_n$. The notation $u_i$ or $U_i$ for these generic coefficients is traditional, and is the origin of the term U-resultant.

The U-resultant is a homogeneous polynomial in $k[u_1,\ldots,u_n]$. It is zero if and only if the common zeros of $P_1,\ldots,P_{n-1}$ form a projective algebraic set of positive dimension (that is, there are infinitely many projective zeros over an algebraically closed extension of k). If the U-resultant is not zero, its degree is the Bézout bound $d_1\cdots d_{n-1}$. The U-resultant factorizes over an algebraically closed extension of k into a product of linear forms. If $\alpha_1u_1+\cdots+\alpha_nu_n$ is such a linear factor, then $\alpha_1,\ldots,\alpha_n$ are the homogeneous coordinates of a common zero of $P_1,\ldots,P_{n-1}$. Moreover, every common zero may be obtained from one of these linear factors, and the multiplicity as a factor is equal to the intersection multiplicity of the $P_i$ at this zero. In other words, the U-resultant provides a completely explicit version of Bézout's theorem.

Extension to more polynomials and computation

The U-resultant as defined by Macaulay requires the number of homogeneous polynomials in the system of equations to be $n-1$, where n is the number of indeterminates. In 1981, Daniel Lazard extended the notion to the case where the number of polynomials may differ from $n-1$, and the resulting computation can be performed via a specialized Gaussian elimination procedure followed by symbolic determinant computation.

Let $P_1,\ldots,P_k$ be homogeneous polynomials in $x_1,\ldots,x_n$, of degrees $d_1,\ldots,d_k$, over a field. Without loss of generality, one may suppose that $d_1\ge d_2\ge\cdots\ge d_k$. Setting $d_i=1$ for $i>k$, the Macaulay bound is $D=d_1+\cdots+d_n-n+1$.

Let $u_1,\ldots,u_n$ be new indeterminates and define $P_{k+1}=u_1x_1+\cdots+u_nx_n$. In this case, the Macaulay matrix is defined to be the matrix, over the basis of the monomials in $x_1,\ldots,x_n$, of the linear map \[(Q_1,\ldots,Q_{k+1})\mapsto P_1Q_1+\cdots+P_{k+1}Q_{k+1},\] where, for each i, $Q_i$ runs over the linear space consisting of zero and the homogeneous polynomials of degree $D-d_i$.

Reducing the Macaulay matrix by a variant of Gaussian elimination, one obtains a square matrix of linear forms in $u_1,\ldots,u_n$. The determinant of this matrix is the U-resultant. As with the original U-resultant, it is zero if and only if $P_1,\ldots,P_k$ have infinitely many common projective zeros (that is, if the projective algebraic set defined by $P_1,\ldots,P_k$ has infinitely many points over an algebraic closure of the ground field). Again as with the original U-resultant, when this U-resultant is not zero, it factorizes into linear factors over any algebraically closed extension of the ground field. The coefficients of these linear factors are the homogeneous coordinates of the common zeros of $P_1,\ldots,P_k$, and the multiplicity of a common zero equals the multiplicity of the corresponding linear factor.

The number of rows of the Macaulay matrix is less than $(ed)^n$, where e is the usual mathematical constant and d is the arithmetic mean of the degrees of the $P_i$. It follows that all solutions of a system of polynomial equations with a finite number of projective zeros can be determined in time $d^{O(n)}$. Although this bound is large, it is nearly optimal in the following sense: if all input degrees are equal, then the time complexity of the procedure is polynomial in the expected number of solutions (Bézout's theorem). This computation may be practically viable when n, k and d are not large.
