Making Fibonacci faster [duplicate]


Question


I was required to write a simple implementation of Fibonacci's algorithm and then to make it faster.

Here is my initial implementation

import java.util.Scanner;

public class Fibonacci {

    public static long getFibonacciOf(long n) {
        if (n == 0) {
            return 0;
        } else if (n == 1) {
            return 1;
        } else {
            return getFibonacciOf(n-2) + getFibonacciOf(n-1);
        }
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner (System.in);
        while (true) {
            System.out.println("Enter n :");
            long n = scanner.nextLong();
            if (n >= 0) {
                long beginTime = System.currentTimeMillis();
                long fibo = getFibonacciOf(n);
                long endTime = System.currentTimeMillis();

                long delta = endTime - beginTime;

                System.out.println("F(" + n + ") = " + fibo + " ... computed     in " + delta + " milliseconds");
            } else {
                break;

            }
        }

    }

}

As you can see, I am using System.currentTimeMillis() to get a simple measure of the time elapsed while computing Fibonacci.

This implementation rapidly becomes exponentially slow, as you can see in the following picture

So I had a simple optimisation idea: put previously computed values in a HashMap and, instead of recomputing them each time, simply take them back from the HashMap if they exist. If they don't exist, compute them and put them in the HashMap.

Here is the new version of the code

import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class FasterFibonacci {

    private static Map<Long, Long> previousValuesHolder;
    static {
        previousValuesHolder = new HashMap<Long, Long>();
        previousValuesHolder.put(Long.valueOf(0), Long.valueOf(0));
        previousValuesHolder.put(Long.valueOf(1), Long.valueOf(1));
    }
    public static long getFibonacciOf(long n) {
        if (n == 0) {
            return 0;
        } else if (n == 1) {
            return 1;
        } else {
            if (previousValuesHolder.containsKey(Long.valueOf(n))) {
                return previousValuesHolder.get(n);
            } else {
                long newValue = getFibonacciOf(n - 2) + getFibonacciOf(n - 1);
                previousValuesHolder.put(Long.valueOf(n), Long.valueOf(newValue));
                return newValue;
            }

        }
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner (System.in);
        while (true) {
            System.out.println("Enter n :");
            long n = scanner.nextLong();
            if (n >= 0) {
                long beginTime = System.currentTimeMillis();
                long fibo = getFibonacciOf(n);
                long endTime = System.currentTimeMillis();

                long delta = endTime - beginTime;

                System.out.println("F(" + n + ") = " + fibo + " ... computed     in " + delta + " milliseconds");
            } else {
                break;

            }
        }

    }
}

This change makes the computation extremely fast. It computes all the values from 2 to 103 in no time at all, and I get a long overflow at F(104) (it gives me F(104) = -7076989329685730859, which is wrong). I find it so fast that I wonder whether there is any mistake in my code (thank you for checking, and let me know please). Please take a look at the second picture:

Is my faster Fibonacci implementation correct? It seems so to me, because it gets the same values as the first version, but the first version was too slow to compute bigger values such as F(75). What other ways can I use to make it faster, or is there a better way to make it faster? Also, how can I compute Fibonacci for greater values (such as 150 or 200) without getting a long overflow? Though it seems fast, I would like to push it to the limits. I remember Mr Abrash saying 'The best optimiser is between your two ears', so I believe it can still be improved. Thank you for helping.

[Edit note:] Though that question addresses one of the main points in my question, you can see from the above that I have additional issues.


Answer 1:


Dynamic programming

Idea: instead of recomputing the same values multiple times, you store each computed value and reuse it as you go along.

f(n) = f(n-1) + f(n-2), with f(0) = 0 and f(1) = 1. So at the point when you have calculated f(n-1), you can easily calculate f(n) if you have stored f(n-1) and f(n-2).

Let's take an array of bignums first, A[0..200], initialized to -1.

Pseudocode

fib(n)
{
    if (A[n] != -1) return A[n];
    A[0] = 0;
    A[1] = 1;
    for i = 2 to n
        A[i] = A[i-1] + A[i-2];
    return A[n];
}

This runs in O(n) time. Check it out yourself.

This technique is also called memoization.
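A minimal Java sketch of the pseudocode above, assuming BigInteger for the bignums and a plain array as the cache (the class and method names here are mine, not from the original answer):

import java.math.BigInteger;

public class MemoFibonacci {
    // Cache of already computed values; null stands in for the -1 above.
    private static final BigInteger[] A = new BigInteger[201];

    public static BigInteger fib(int n) {
        if (A[n] != null) return A[n];      // reuse a previously stored value
        A[0] = BigInteger.ZERO;
        A[1] = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            A[i] = A[i - 1].add(A[i - 2]);  // f(i) = f(i-1) + f(i-2)
        }
        return A[n];
    }
}

Calling fib(104) with this sketch returns the exact value with no overflow, because BigInteger has arbitrary precision.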

The IDEA

Dynamic programming (usually referred to as DP) is a very powerful technique for solving a particular class of problems. It demands a very elegant formulation of the approach and simple thinking, and the coding part is very easy. The idea is very simple: if you have solved a problem for a given input, then save the result for future reference, so as to avoid solving the same problem again. In short: 'remember your past'.

If the given problem can be broken up into smaller sub-problems, and these smaller subproblems can in turn be divided into still smaller ones, and in this process you observe some overlapping subproblems, then it's a big hint for DP. Also, the optimal solutions to the subproblems contribute to the optimal solution of the given problem (referred to as the Optimal Substructure Property).

There are two ways of doing this.

1.) Top-Down : Start solving the given problem by breaking it down. If you see that the problem has been solved already, then just return the saved answer. If it has not been solved, solve it and save the answer. This is usually easy to think of and very intuitive. This is referred to as Memoization. (I have used this idea).

2.) Bottom-Up : Analyze the problem and see the order in which the sub-problems are solved and start solving from the trivial subproblem, up towards the given problem. In this process, it is guaranteed that the subproblems are solved before solving the problem. This is referred to as Dynamic Programming. (MinecraftShamrock used this idea)


There's more!

(Other ways to do this)

Look, our quest for a better solution doesn't end here. Here is a different approach.

If you know how to solve recurrence relations, then you can find a closed-form solution to this relation

f(n)=f(n-1)+f(n-2) given f(0)=0,f(1)=1

Solving it, you arrive at the formula

f(n)= (1/sqrt(5))((1+sqrt(5))/2)^n - (1/sqrt(5))((1-sqrt(5))/2)^n

which can be written in more compact form

f(n)=floor((((1+sqrt(5))/2)^n) /sqrt(5) + 1/2)
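As a quick illustration (my own addition, not the answerer's code), the closed form can be evaluated directly with double arithmetic; because floating-point precision runs out, this is only exact up to roughly F(70):

// Binet's formula with doubles; exact only for small n (roughly n <= 70).
static long fibClosedForm(int n) {
    double sqrt5 = Math.sqrt(5);
    double phi = (1 + sqrt5) / 2;           // golden ratio
    return (long) Math.floor(Math.pow(phi, n) / sqrt5 + 0.5);
}

For exact results at larger n you would need exact integer arithmetic, which is where the exponentiation idea below comes in.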

Complexity

You can compute the n-th power of a number in O(log n) operations; you have to learn exponentiation by squaring.

EDIT: It is good to point out that this doesn't necessarily mean that the Fibonacci number can be found in O(log n). Actually the number of digits we need to calculate grows linearly, so the arithmetic operations themselves are not constant time. The wording above might wrongly suggest that the n-th Fibonacci number can be calculated in O(log n) time. [Bakurui and MinecraftShamrock commented on this.]




Answer 2:


If you need to compute the n-th Fibonacci number very frequently, I suggest using amalsom's answer.

But if you want to compute a very big fibonacci number, you will run out of memory because you are storing all smaller fibonacci numbers. The following pseudocode only keeps the last two fibonacci numbers in memory, i.e. it requires much less memory:

fibonacci(n) {
    if n = 0: return 0;
    if n = 1: return 1;
    a = 0;
    b = 1;
    for i from 2 to n: {
        sum = a + b;
        a = b;
        b = sum;
    }
    return b;
}
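A direct Java translation of that pseudocode might look like this; I've used BigInteger (my choice, not part of the original pseudocode) so that very big Fibonacci numbers don't overflow:

import java.math.BigInteger;

static BigInteger fibonacci(int n) {
    if (n == 0) return BigInteger.ZERO;
    if (n == 1) return BigInteger.ONE;
    BigInteger a = BigInteger.ZERO;   // F(i-2)
    BigInteger b = BigInteger.ONE;    // F(i-1)
    for (int i = 2; i <= n; i++) {
        BigInteger sum = a.add(b);    // F(i)
        a = b;
        b = sum;
    }
    return b;
}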

Analysis
This can compute very high fibonacci numbers with quite low memory consumption: We have O(n) time as the loop repeats n-1 times. The space complexity is interesting as well: The nth fibonacci number has a length of O(n), which can easily be shown:
F(n) <= 2 * F(n-1)
Which means that the nth fibonacci number is at most twice as big as its predecessor. Doubling a number in binary is equivalent to a single left shift, which increases the number of necessary bits by one. So representing the nth fibonacci number takes at most O(n) space. We have at most three successive fibonacci numbers in memory, which makes O(n) + O(n-1) + O(n-2) = O(n) total space consumption. In contrast to this, the memoization algorithm always keeps the first n fibonacci numbers in memory, which makes O(n) + O(n-1) + O(n-2) + ... + O(1) = O(n^2) space consumption.

So which way should one use?
The only reason to keep all lower fibonacci numbers in memory is if you need fibonacci numbers very frequently. It is a question of balancing time with memory consumption.




Answer 3:


Get away from the Fibonacci recursion and use the identities

(F(2n), F(2n-1)) = (F(n)^2 + 2 F(n) F(n-1), F(n)^2+F(n-1)^2)
(F(2n+1), F(2n)) = (F(n+1)^2+F(n)^2, 2 F(n+1) F(n) - F(n)^2)

This allows you to compute (F(m+1), F(m)) in terms of (F(k+1), F(k)) for k half the size of m. Written iteratively, with some bit shifting for the division by 2, this should give you the theoretical O(log n) speed of exponentiation by squaring while staying entirely within integer arithmetic. (Well, O(log n) arithmetic operations. Since you will be working with numbers of roughly n bits, it won't be O(log n) time once you are forced to switch to a large-integer library. Around F(47) you already overflow a 32-bit int, which only goes up to 2^31 - 1.)

(Apologies for not remembering Java well enough to implement this in Java; anyone who wants to is free to edit it in.)
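Here is a minimal recursive sketch of those identities in Java, using BigInteger; it is my own illustration rather than the answerer's code, and the names are made up:

import java.math.BigInteger;

// Returns {F(k), F(k-1)} for k >= 1, halving k at each step.
static BigInteger[] fibPair(long k) {
    if (k == 1) return new BigInteger[]{BigInteger.ONE, BigInteger.ZERO}; // {F(1), F(0)}
    BigInteger[] half = fibPair(k / 2);   // {F(n), F(n-1)} with n = k / 2
    BigInteger fn = half[0], fn1 = half[1];
    // F(2n)   = F(n)^2 + 2 F(n) F(n-1)
    BigInteger f2n = fn.multiply(fn).add(fn.multiply(fn1).shiftLeft(1));
    // F(2n-1) = F(n)^2 + F(n-1)^2
    BigInteger f2nm1 = fn.multiply(fn).add(fn1.multiply(fn1));
    if (k % 2 == 0) {
        return new BigInteger[]{f2n, f2nm1};          // k = 2n
    } else {
        return new BigInteger[]{f2n.add(f2nm1), f2n}; // k = 2n+1, since F(2n+1) = F(2n) + F(2n-1)
    }
}

static BigInteger fib(long n) {
    return n == 0 ? BigInteger.ZERO : fibPair(n)[0];
}

The recursion halves k each time, so there are O(log n) levels, each doing a constant number of big-integer multiplications.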




Answer 4:


  • Fibonacci(0) = 0
  • Fibonacci(1) = 1
  • Fibonacci(n) = Fibonacci(n - 1) + Fibonacci(n - 2), when n >= 2

Usually there are two ways to calculate a Fibonacci number:

  1. Recursion:

    public long getFibonacci(long n) {
      if(n <= 1) {
        return n;
      } else {
        return getFibonacci(n - 1) + getFibonacci(n - 2);
      }
    }
    

    This way is intuitive and easy to understand, but because it does not reuse calculated Fibonacci numbers, the time complexity is about O(2^n). On the other hand, it does not store calculated results, so it saves a lot of space; the space complexity is O(1).

  2. Dynamic Programming:

    public long getFibonacci(long n) {
      if(n <= 1) {
        return n;  // guard so that n = 0 does not index past the array
      }
      long[] f = new long[(int)(n + 1)];
      f[0] = 0;
      f[1] = 1;
      for(int i=2;i<=n;i++) {
        f[i] = f[i - 1] + f[i - 2];
      }
      return f[(int)n];
    }
    

    This memoization way calculates Fibonacci numbers and reuses them when calculating the next one. The time complexity is pretty good, O(n), while the space complexity is also O(n). Let's investigate whether the space complexity can be optimized... Since f(i) only requires f(i - 1) and f(i - 2), there is no need to store all the calculated Fibonacci numbers.

    The more efficient implementation is:

    public long getFibonacci(long n) {
      if(n <= 1) {
        return n;
      }
      long x = 0, y = 1;
      long ans = 0; // initialized so the definite-assignment check passes; overwritten in the loop
      for(int i=2;i<=n;i++) {
        ans = x + y;
        x = y;
        y = ans;
      }
      return ans;
    }
    

    With time complexity O(n), and space complexity O(1).

Added: Since Fibonacci numbers increase amazingly fast, a long can only hold the Fibonacci numbers up to F(92). In Java, we can use BigInteger to store larger Fibonacci numbers.
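For example, a BigInteger variant of the iterative loop above might look like this (a sketch of my own, not part of the original answer):

import java.math.BigInteger;

public BigInteger getFibonacci(int n) {
    if (n <= 1) {
        return BigInteger.valueOf(n);
    }
    BigInteger x = BigInteger.ZERO;   // F(i-2)
    BigInteger y = BigInteger.ONE;    // F(i-1)
    BigInteger ans = BigInteger.ONE;
    for (int i = 2; i <= n; i++) {
        ans = x.add(y);               // F(i)
        x = y;
        y = ans;
    }
    return ans;                       // exact F(n), no overflow
}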




Answer 5:


Precompute a large number of fib(n) results, and store them as a lookup table inside your algorithm. Bam, free "speed"

Now if you need to compute fib(101) and you already have fibs 0 to 100 stored, this is just like trying to compute fib(1).

Chances are this isn't what this homework is looking for, but it's a completely legit strategy and basically the idea of caching extracted further away from running the algorithm. If you know you're likely to be computing the first 100 fibs often and you need to do it really really fast, there's nothing faster than O(1). So compute those values entirely out of band and store them so they can be looked up later.

Of course, cache values as you compute them too :) Duplicated computation is waste.
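A minimal sketch of that idea (my illustration; the table size and names are arbitrary): precompute the first 93 values once, since F(92) is the largest Fibonacci number that fits in a long, then answer each query with a single array lookup.

public class FibLookup {
    // Precomputed once at class-loading time; F(0)..F(92) are the only
    // Fibonacci numbers that fit in a signed 64-bit long.
    private static final long[] TABLE = new long[93];
    static {
        TABLE[0] = 0;
        TABLE[1] = 1;
        for (int i = 2; i < TABLE.length; i++) {
            TABLE[i] = TABLE[i - 1] + TABLE[i - 2];
        }
    }

    // O(1) per query.
    public static long fib(int n) {
        return TABLE[n];
    }
}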




Answer 6:


Here is a snippet of code with an iterative approach instead of recursion.

Output example:

Enter n: 5
F(5) = 5 ... computed in 1 milliseconds
Enter n: 50
F(50) = 12586269025 ... computed in 0 milliseconds
Enter n: 500
F(500) = ...4125 ... computed in 2 milliseconds
Enter n: 500
F(500) = ...4125 ... computed in 0 milliseconds
Enter n: 500000
F(500000) = ...453125 ... computed in 5,718 milliseconds
Enter n: 500000
F(500000) = ...453125 ... computed in 0 milliseconds

Some parts of the results are omitted with ... for readability.

Code snippet:

import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class CachedFibonacci {
    private static Map<BigDecimal, BigDecimal> previousValuesHolder;
    static {
        previousValuesHolder = new HashMap<>();
        previousValuesHolder.put(BigDecimal.ZERO, BigDecimal.ZERO);
        previousValuesHolder.put(BigDecimal.ONE, BigDecimal.ONE);
    }

    public static BigDecimal getFibonacciOf(long number) {
        if (0 == number) {
            return BigDecimal.ZERO;
        } else if (1 == number) {
            return BigDecimal.ONE;
        } else {
            if (previousValuesHolder.containsKey(BigDecimal.valueOf(number))) {
                return previousValuesHolder.get(BigDecimal.valueOf(number));
            } else {
                BigDecimal olderValue = BigDecimal.ONE,
                        oldValue = BigDecimal.ONE,
                        newValue = BigDecimal.ONE;

                for (int i = 3; i <= number; i++) {
                    newValue = oldValue.add(olderValue);
                    olderValue = oldValue;
                    oldValue = newValue;
                }
                previousValuesHolder.put(BigDecimal.valueOf(number), newValue);
                return newValue;
            }
        }
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.print("Enter n: ");
            long inputNumber = scanner.nextLong();
            if (inputNumber >= 0) {
                long beginTime = System.currentTimeMillis();
                BigDecimal fibo = getFibonacciOf(inputNumber);
                long endTime = System.currentTimeMillis();
                long delta = endTime - beginTime;

                System.out.printf("F(%d) = %.0f ... computed in %,d milliseconds\n", inputNumber, fibo, delta);
            } else {
                System.err.println("You must enter number > 0");
                System.out.println("try, enter number again, please:");
                break;
            }
        }
    }
}

This approach runs much faster than the recursive version.

In such a situation, the iterative solution tends to be a bit faster, because each recursive method call takes a certain amount of processor time. In principle, it is possible for a smart compiler to avoid recursive method calls if they follow simple patterns, but most compilers don’t do that. From that point of view, an iterative solution is preferable.




Answer 7:


Having followed a similar approach some time ago, I've just realized there's another optimization you can make.

If you know two large consecutive answers, you can use this as a starting point. For example, if you know F(100) and F(101), then calculating F(104) is approximately as difficult (*) as calculating F(4) based on F(0) and F(1).

Calculating iteratively upwards is as efficient, calculation-wise, as doing the same with cached recursion, but it uses less memory.

Having done some sums, I have also realized that, for any given z < n:

F(n) = F(z) * F(n-z+1) + F(z-1) * F(n-z)

If n is odd, and you choose z = (n+1)/2, then this is reduced to

F(n) = F(z)^2 + F(z-1)^2

(for example, F(7) = F(4)^2 + F(3)^2 = 9 + 4 = 13).

It seems to me that you should be able to use this, by a method I have yet to find, to compute F(n) in a number of operations equal to:

the number of bits in n doublings (as per the above) plus the number of 1 bits in n additions; in the case of 104, this would be (7 bits, three '1' bits) = 14 multiplications (squarings) and 10 additions.

(*) assuming adding two numbers takes the same time, irrespective of the size of the two numbers.




Answer 8:


Here's a way of provably doing it in O(log n) (as the loop runs log n times):

/* 
 * Fast doubling method
 * F(2n) = F(n) * (2*F(n+1) - F(n)).
 * F(2n+1) = F(n+1)^2 + F(n)^2.
 * Adapted from:
 *    https://www.nayuki.io/page/fast-fibonacci-algorithms
 */
private static long getFibonacci(int n) {
    long a = 0;
    long b = 1;
    for (int i = 31 - Integer.numberOfLeadingZeros(n); i >= 0; i--) {
        long d = a * ((b<<1) - a);
        long e = (a*a) + (b*b);
        a = d;
        b = e;
        if (((n >>> i) & 1) != 0) {
            long c = a+b;
            a = b;
            b = c;
        }
    }
    return a;
}

I am assuming here (as is conventional) that one multiply / add / whatever operation is constant time irrespective of number of bits, i.e. that a fixed-length data type will be used.

This page explains several methods of which this is the fastest. I simply translated it away from using BigInteger for readability. Here's the BigInteger version:

/* 
 * Fast doubling method.
 * F(2n) = F(n) * (2*F(n+1) - F(n)).
 * F(2n+1) = F(n+1)^2 + F(n)^2.
 * Adapted from:
 *    http://www.nayuki.io/page/fast-fibonacci-algorithms
 */
private static BigInteger getFibonacci(int n) {
    BigInteger a = BigInteger.ZERO;
    BigInteger b = BigInteger.ONE;
    for (int i = 31 - Integer.numberOfLeadingZeros(n); i >= 0; i--) {
        BigInteger d = a.multiply(b.shiftLeft(1).subtract(a));
        BigInteger e = a.multiply(a).add(b.multiply(b));
        a = d;
        b = e;
        if (((n >>> i) & 1) != 0) {
            BigInteger c = a.add(b);
            a = b;
            b = c;
        }
    }
    return a;
}


Source: https://stackoverflow.com/questions/29317414/making-fibonacci-faster
