Testwiki:Reference desk/Archives/Mathematics/2016 December 20

Revision as of 07:14, 27 December 2016 by imported>Scsbot (edited by robot: archiving December 20)


{| width="100%"
|-
! colspan="3" align="center" | Mathematics desk
|-
! width="20%" align="left" | < December 19
! width="25%" align="center" | << Nov | December | Jan >>
! width="20%" align="right" | Current desk >
|}

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 20

Power of a sparse matrix

Let $A$ be an $n \times n$ matrix, with entries $A_{0,0}=2$, $A_{0,1}=-1$, $A_{0,n-1}=1$, $A_{i>0,\,i-1}=1$, and zero elsewhere. I wish to compute the $(n-1)$st (last) row of $A^k$, and I don't care about the other rows. Is there a fast way (faster than naive matrix multiplication or exponentiation by squaring) to compute this? 24.255.17.182 (talk) 23:05, 20 December 2016 (UTC)
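For concreteness, the setup can be sketched in Python with exact integer arithmetic. This is a minimal sketch of the baseline (naive repeated multiplication); the entries $A_{0,0}=2$, $A_{0,1}=-1$, $A_{0,n-1}=1$ and the subdiagonal of ones follow the question, with the signs being my best reading of it, and the function names are mine:

```python
# Build the sparse matrix from the question (n >= 3) and compute the
# last row of A^k by naive repeated matrix multiplication.

def build_A(n):
    A = [[0] * n for _ in range(n)]
    A[0][0], A[0][1], A[0][n - 1] = 2, -1, 1   # first-row entries
    for i in range(1, n):
        A[i][i - 1] = 1                        # subdiagonal of ones
    return A

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def last_row_of_power(n, k):
    A = build_A(n)
    P = A
    for _ in range(k - 1):
        P = mat_mul(P, A)
    return P[n - 1]
```

Python's arbitrary-precision integers keep the arithmetic exact as the entries grow.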

Often it is convenient to diagonalize (or put in Jordan form), then raise to powers, then un-diagonalize. I have not considered whether this is nice for your example. --JBL (talk) 23:53, 20 December 2016 (UTC)
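The diagonalize-then-power idea can be sketched numerically (NumPy assumed; this is floating point, so for the integer matrix in the question the result is only approximate, and it needs $A$ to be diagonalizable):

```python
import numpy as np

# Diagonalize-then-power: if A = V D V^{-1} with D diagonal, then
# A^k = V D^k V^{-1}, and D^k is just elementwise powers of the
# eigenvalues.  Result may be complex with tiny imaginary parts.
def power_via_eig(A, k):
    w, V = np.linalg.eig(A)            # eigenvalues w, eigenvectors in columns
    Dk = np.diag(w ** k)               # D^k
    return V @ Dk @ np.linalg.inv(V)
```

A quick check against `np.linalg.matrix_power` on a small instance confirms the two agree up to rounding.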
I don't think so; I can show that the characteristic polynomial is $f_m(\lambda) = (-1)^m\left(-\lambda^{m+1} + 2\lambda^m - \lambda^{m-1} + 1\right)$, which doesn't factor into anything nice in general. 24.255.17.182 (talk) 00:52, 21 December 2016 (UTC)
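As a sanity check on this polynomial (and on my reading of the signs of the matrix entries), one can verify that $f_m$ annihilates $A$, as the Cayley–Hamilton theorem requires of the characteristic polynomial. A pure-Python sketch with exact integers; this is a necessary condition, not a proof that $f_m$ is the characteristic polynomial:

```python
# Cayley-Hamilton check: if f_m(x) = (-1)^m (-x^{m+1} + 2 x^m - x^{m-1} + 1)
# is the characteristic polynomial of the (m+1)x(m+1) matrix A, then
# f_m(A) must be the zero matrix.  The overall sign (-1)^m does not
# affect the check, so it is dropped below.

def build_A(n):
    A = [[0] * n for _ in range(n)]
    A[0][0], A[0][1], A[0][n - 1] = 2, -1, 1   # entries from the question
    for i in range(1, n):
        A[i][i - 1] = 1                        # subdiagonal of ones
    return A

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def annihilates(m):
    n = m + 1
    A = build_A(n)
    P = [[[int(i == j) for j in range(n)] for i in range(n)]]  # P[t] = A^t
    for _ in range(m + 1):
        P.append(mat_mul(P[-1], A))
    # -A^{m+1} + 2 A^m - A^{m-1} + I, entrywise
    f = [[-P[m + 1][i][j] + 2 * P[m][i][j] - P[m - 1][i][j] + P[0][i][j]
          for j in range(n)] for i in range(n)]
    return all(x == 0 for row in f for x in row)
```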
  • You might try working out by hand a few simple examples (small n, and several small values of k) and form a hypothesis about the desired row. If the hypothesis is simple enough, it just might be easy to prove by induction. Loraof (talk) 00:19, 21 December 2016 (UTC)
You may already know this, but depending on $n$ and $k$, iterated multiplication could be faster than exponentiation by squaring. Right-multiplying a row vector by $A$ requires only a left shift, two additions, and a cyclic permutation which can be handled as a renumbering of entries. Repeating this $k$ times takes $O(k^2)$ time and $O(nk)$ space (assuming $k>n$, ignoring factors of $\log n$, and considering that the integers grow exponentially with $k$). Exponentiation by squaring with FFT integer multiplication and $O(n^c)$ matrix multiplication would take $O(n^c k \log k)$ time and $O(n^2 k)$ space, I think. Iterated multiplication also has a smaller constant factor, not only because of the simple algorithm but also because of better locality of reference. -- BenRG (talk) 20:18, 23 December 2016 (UTC)
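The per-step update BenRG describes can be sketched as follows (Python, exact big integers; the function names are mine, and the entry signs are my best reading of the question). With those entries, a row vector $v$ maps to $vA = (2v_0+v_1,\ -v_0+v_2,\ v_3,\ \dots,\ v_{n-1},\ v_0)$, i.e. a cyclic left shift with two corrected entries:

```python
# One iterated-multiplication step for the A in the question (n >= 3):
#   v A = (2*v0 + v1, -v0 + v2, v3, ..., v_{n-1}, v0)
# -- a cyclic left shift plus a binary left shift (2*v0 = v0 << 1) and
# two additions.  Python ints are arbitrary-precision, so the
# arithmetic stays exact as the entries grow.

def step(v):
    v0 = v[0]
    out = v[1:] + [v0]       # cyclic left shift (in practice just renumbering)
    out[0] += v0 << 1        # (vA)_0 = 2*v0 + v1
    out[1] -= v0             # (vA)_1 = -v0 + v2
    return out

def last_row_fast(n, k):
    # e_{n-1} A^k is the last row of A^k, so start from the last basis vector.
    v = [0] * n
    v[-1] = 1
    for _ in range(k):
        v = step(v)
    return v
```

For small cases this agrees with naively computing $A^k$ and reading off the last row.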