July 20, 2011 / cdsmith

On fibonacci numbers as an example of asymptotic complexity

Most people have seen the elegant recursive definition of fibonacci numbers.  In Haskell:

fib 0 = 1
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

and the fact that this performs horridly for higher values of n is well-known.  But there are a couple of cute aspects of this example that are less well known, so I thought I’d just take a moment to point them out.

First of all, what is the asymptotic complexity of the earlier implementation?  It’s exponential, sure, but let’s take a closer look at the analysis.  Let’s define a function fibtime to approximate the number of time steps needed to calculate the result of fib.  Since it’s straightforward recursion where each individual function application performs constant work, and since we’re assuming arithmetic can be done in constant time (this isn’t actually true!  See comments later…), we can measure fibtime by the number of function applications needed to get the answer; all of the other work goes into the constant factor.  So it takes only one function application to evaluate the base cases; but for the inductive case, we need to evaluate two smaller instances of the problem as well:

fibtime 0 = 1
fibtime 1 = 1
fibtime n = fibtime (n-1) + fibtime (n-2) + 1

Look familiar?  Well, almost… there’s an extra “+1” in there.  Solve the recurrence relation, though, and that comes out in the constant factor, too… so the time complexity of computing fib n is O(fib n).  Cute, huh?
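
In fact, this recurrence has an exact solution: an easy induction shows that

\mathrm{fibtime}(n) = 2 \cdot \mathrm{fib}(n) - 1

The base cases give 1 = 2 \cdot 1 - 1, and the inductive case gives (2 \cdot \mathrm{fib}(n-1) - 1) + (2 \cdot \mathrm{fib}(n-2) - 1) + 1 = 2 \cdot \mathrm{fib}(n) - 1.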

Okay, so we want something faster.  The normal answer is to convert to tail recursion (or write the obvious imperative algorithm; they amount to the same thing):

Edit 3: Thanks to Raphael in the comments for the comments that led to writing this code in a clearer style, and unifying it with the later version.

fib = fst . fibpair
    where fibpair 0 = (0,1)
          fibpair n = (b,a+b) where (a,b) = fibpair (n-1)
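
Strictly speaking, fibpair still builds its result on the way back out of the recursion; the version that literally matches the imperative loop carries the pair in accumulator arguments.  Here is a minimal sketch of that variant (the names fibiter and go are my own, not from the original code):

-- Explicitly tail recursive: go carries the two accumulators,
-- playing the same role as the pair in fibpair.
fibiter n = go n 0 1
    where go 0 a _ = a
          go k a b = go (k - 1) b (a + b)

Since go is tail recursive, it compiles to a loop; in practice you’d also want strictness annotations (seq or bang patterns) on the accumulators to avoid building up thunks.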

The asymptotic analysis here is easy, and the running time is O(n).  Equivalently, Haskell allows a nifty definition using lazy lists:

fib n = fibs !! n
    where fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

While this looks substantially different, it turns out to be the same algorithm written in a more declarative style.  So great, we’ve gone from exponential to linear.  Most people stop there.

But can we do better?  Yes, it turns out we can!  A key observation here is that

\begin{pmatrix} b-a & a \\ a & b \end{pmatrix} \times \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} (a+b)-b & b \\ b & a+b \end{pmatrix}

In other words, you can encode the state (the a and b from the tail recursive version above) into a matrix, such that making progress involves just multiplying by a constant matrix.  But matrix multiplication is expensive, and even multiplication of constant 2×2 matrices takes some time, so this would still be linear with a somewhat worse constant factor… right?

Wrong!  The key point is that matrix multiplication is associative, and that means that we can rearrange the multiplications to avoid duplicating work.   How do I compute the 16th power of a matrix?  The best way is not to multiply the matrix by itself 16 times, but rather to compute the 8th power, and then square that.  If the power I want is odd, then I will just compute the power one below, and multiply by the original matrix once, but crucially, this can only happen once before I have an even power again!  So overall, computing the nth power of a matrix can be done in O(log n) time.
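
To make that concrete, here is a minimal sketch of exponentiation by squaring on 2×2 matrices, with the matrix [[a,b],[c,d]] represented as a plain 4-tuple (the names mmul, mpow, and fibmat are my own):

-- Multiply two 2x2 matrices, each represented as (a,b,c,d).
mmul (a,b,c,d) (e,f,g,h) = (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

-- Raise a 2x2 matrix to the nth power in O(log n) multiplications.
mpow _ 0 = (1,0,0,1)    -- the identity matrix
mpow m n | even n    = let h = mpow m (n `quot` 2) in mmul h h
         | otherwise = mmul m (mpow m (n - 1))

-- fib n is the top-right entry of the nth power of [[0,1],[1,1]].
fibmat n = let (_, f, _, _) = mpow (0,1,1,1) n in f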

We don’t really want to represent matrices explicitly, so we instead just encode the first matrix above as the ordered pair (a,b), as we did earlier.  (Note that in this encoding, the identity matrix, which is the zeroth power of any matrix, is just (0,1), the same base case as earlier… in fact, we’re just adding one special case to the earlier code.)  Doing some basic algebra, the result turns out to look like this:

fib = fst . fibpair
  where
    fibpair 0 = (0,1)
    fibpair n
      | even n    = let (a,b) = fibpair (n `quot` 2) in (2*a*b - a^2, a^2 + b^2)
      | otherwise = let (a,b) = fibpair (n - 1)      in (b, a+b)

So, ever wanted to know the ten millionth fibonacci number?  No problem!  In case you were curious: the answer is 2,089,877 decimal digits long.
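
That digit count is easy to reproduce in GHCi with the doubling version above, and it agrees with the estimate that fib n has roughly n \cdot \log_{10} \varphi \approx 0.20899\,n decimal digits:

ghci> length (show (fib (10^7)))
2089877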

Wait a second here… at this point, you should be questioning whether our code really runs in logarithmic time!  Dear reader, I’ve been lying to you.  The length of the output from the fibonacci function is, by itself, already linear in n!  So it’s outright impossible to produce that output in better than O(n) time.  So was this all a waste?  No, not really.  We made a simplifying assumption up front that all arithmetic can be done in constant time.  For reasonably sized inputs, we make that assumption all the time, and it’s useful in practice.  However, it does fail when we hit very large values that exceed normal fixed-size data types.  This algorithm doesn’t really run in logarithmic time… but in the exact same sense, neither was the previous version really linear.

Okay, we’ve got that down.  So… can we do better?

Not really, if we need precise answers.  The fibonacci numbers actually have a closed-form solution involving the golden ratio, but it too requires computing nth powers – this time of some exact representation of the algebraic reals if we don’t want to get rounding error – so it’s not going to be asymptotically better, and is likely to have much worse constant factors.  That said, if all you want is an approximation of results well within the range of a double precision floating point number, you can get that using the closed form, in constant time!

approxfib n = round (phi ** x / sqrt 5)
    where x   = fromIntegral (n+1)
          phi = (1 + sqrt 5) / 2

Edit: Thanks to mathnerd for pointing out that the psi part of Binet’s formula is always less than 1/2, so just rounding the answer is enough without actually subtracting it.

Using double precision floating point, this is exact up to the 70th fibonacci number (approximately 300 trillion), so it does pretty well for small numbers!  The rounding error is very low (as in, less than a billionth of a percent) up through the 1473rd number; but then we hit the upper end of the range of double precision floating point numbers, and things go bad.  If you want an exact answer or arbitrarily large numbers, it won’t do the job; but for reasonable ranges, it gives a decent approximation.

Edit 2: It’s also interesting to note that the closed form solution, known as Binet’s formula, can be derived from the matrix approach mentioned above.  The approach is to decompose the constant matrix from earlier into a diagonal matrix of eigenvalues via a spectral decomposition.  Then its powers can be computed by powers of the diagonal matrix of eigenvalues, which then leads to Binet’s formula.  Conversely, if you do decide to look for an efficient exact representation for computing Binet’s formula with arbitrary size numbers, you end up deciding to work in the field \mathbb{Q}(\varphi), which is an extension field of dimension 2 over the rationals.  If you then derive multiplication formulas for that field, they look very familiar: you end up doing exactly the same math on pairs of numbers that we did for the matrix solution.  Thanks to brandonpelfrey, poulson, and fredrihj from Reddit for the conversation leading to that observation.
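
To illustrate that last point, here is a minimal sketch of that pair arithmetic, where (a,b) stands for a + b\varphi and multiplication uses \varphi^2 = \varphi + 1 (the names qmul, qpow, and fibq are my own):

-- (a,b) represents a + b*phi; multiply using phi^2 = phi + 1.
qmul (a,b) (c,d) = (a*c + b*d, a*d + b*c + b*d)

-- Exponentiation by squaring, exactly as with the matrices.
qpow _ 0 = (1,0)    -- the element 1
qpow x n | even n    = let h = qpow x (n `quot` 2) in qmul h h
         | otherwise = qmul x (qpow x (n - 1))

-- phi^n = fib (n-1) + fib n * phi, so the second component is fib n.
fibq n = snd (qpow (0,1) n)

Note that qmul is doing exactly the same bookkeeping on pairs as the matrix multiplication above, which is the observation from the Reddit conversation.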

And there you have it, more than you ever wanted to know about computing fibonacci numbers.

12 Comments

  1. Sjoerd Visscher / Jul 20 2011 12:08 pm

    Lovely stuff. The pow function is implemented as (^) for instances of Num, so fib can also be implemented as:

    data F = F Integer Integer deriving (Eq, Show)

    instance Num F where
        fromInteger n = F n 0
        F a b * F c d = F (a * c + b * d) (b * c + (a - b) * d)

    fib n = f where F _ f = F 1 1 ^ n

  2. Chaddaï Fouché / Jul 20 2011 4:27 pm

    Very nice summary, simple, easy to follow and pretty complete on the big picture. You often see small parts of this lying around but rarely is it all compiled in an interesting and short text.

  3. עמיר / Jul 20 2011 6:47 pm

    Actually, the O(1) solution can also be found through matrix exponentiation. And your “approxfib” uses the exact formula for Fibonacci numbers, so the “round” is only applied to floats without a fractional part. You can drop the psi**x part and the function will always return the same result.

    The trick with matrices is that the matrix [0 1 ; 1 1] is a product of 3 matrices M * D * N, where M*N=1 and D is diagonal. So its n’th power is (M*D*N)^n = M * D^n * N, and the power of a diagonal matrix is just raising the diagonal elements to the relevant power. Computing this with the actual numbers gives the formula of your “approxfib”.

    • cdsmith / Jul 20 2011 8:16 pm

      Yeah, a few people have been going back and forth about this on Reddit. If you look at the eigendecomposition of that matrix, you basically get Binet’s formula. And if you look for efficient ways to compute Binet’s formula quickly and exactly, you end up working in the field Q(phi), and Sjoerd’s Num instance from above. Okay, he’s working in Q(-phi)… basically the same thing! And that, in turn, is basically equivalent to our matrix reasoning, once you note that Q(phi) embeds into a subalgebra of M_2(Q). So you can go around in a full circle there.

  4. Mathnerd314 / Jul 20 2011 6:59 pm

    You don’t need the psi. (see wikipedia)

    • cdsmith / Jul 20 2011 8:17 pm

      Ah, thanks. I didn’t think about that, but yeah, you’re right.

  5. Anonymous / Jul 21 2011 8:05 am

    The third lecture of the “Introduction to Algorithms” course on MIT OpenCourseware goes over the complexity of the matrix multiplication approach and Binet’s Formula:

    http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/video-lectures/lecture-3-divide-and-conquer-strassen-fibonacci-polynomial-multiplication

    @ עמיר : Slick… but you appear to be assuming that exponentiation is an O(1) operation. But for n :: Integer in Haskell isn’t it more like O(ln(n))?

    • cdsmith / Jul 21 2011 8:46 am

      I’m only assuming constant-time exponentiation in the diversion about floating point numbers, where that is accurate (but limited precision). If you look at the part where we calculate powers of matrices (or rather their ordered pair representations), you’ll see that we do use logarithmic time.

      On the other hand, we are assuming that addition and multiplication are constant time, while they should be (relative to the value of their operands) something more like O(log n) and O(log n * log (log n) * log (log (log n))), respectively. Since the sizes of the operands are dominated by computations near the result, which is exponential in the input, this means the naive approach is O(n^2), while the approach via exponentiation is something like O(n * log n * log (log n)). I didn’t get into that in the article because the complexity of arbitrary precision multiplication is rather fuzzy.

  6. raphael / Jul 22 2011 8:29 am

    an elegant solution with O(n) and constant space complexity:

    fibtwo 0 = (0,1)
    fibtwo (n+1) = (b,a+b) where (a,b) = fibtwo n

    • cdsmith / Jul 22 2011 8:47 am

      Eww, n+k patterns. :) That said, that does look rather nice as a way of stating it. It’s also much closer to the powers form! The optimization to get to logarithmic time is just adding another case to fibtwo for even n.

  7. Marius / Sep 3 2011 12:54 am

    In your first function “fib 0 = 1” is wrong.
    Furthermore, your adaptation of Raphael’s function is in pointfree style, therefore you have to remove the parameter n from fib:

    fib = fst . fibpair
        where fibpair 0 = (0,1)
              fibpair n = (b,a+b) where (a,b) = fibpair (n-1)

Trackbacks

  1. » links for 2011-07-25 (Dhananjay Nene)
