Calculating 1^X + 2^X + … + N^X mod 1000000007


Question


Is there any algorithm to calculate (1^x + 2^x + 3^x + ... + n^x) mod 1000000007?
Note: a^b is the b-th power of a.

The constraints are 1 <= n <= 10^16, 1 <= x <= 1000. So the value of N is very large.

I can only come up with an O(m log m) solution, where m = 1000000007. That is far too slow, because the time limit is 2 seconds.

Do you have any efficient algorithm?

There was a comment that it might be a duplicate of this question, but it is definitely different.


Answer 1:


You can sum up the series

1**X + 2**X + ... + N**X

with the help of Faulhaber's formula and you'll get a polynomial of degree X + 1 to compute for arbitrary N.

If you don't want to compute Bernoulli numbers, you can find the polynomial by solving X + 2 linear equations (for N = 1, N = 2, N = 3, ..., N = X + 2), which is a slower but easier-to-implement method.

Let's have an example for X = 2. In this case we have a polynomial of degree X + 1 = 3:

    A*N**3 + B*N**2 + C*N + D

The linear equations are

      A +    B +   C + D = 1              =  1 
    A*8 +  B*4 + C*2 + D = 1 + 4          =  5
   A*27 +  B*9 + C*3 + D = 1 + 4 + 9      = 14
   A*64 + B*16 + C*4 + D = 1 + 4 + 9 + 16 = 30 

Having solved the equations we'll get

  A = 1/3
  B = 1/2
  C = 1/6
  D =   0 

The final formula is

  1**2 + 2**2 + ... + N**2 == N**3 / 3 + N**2 / 2 + N / 6

Now, all you have to do is to put an arbitrarily large N into the formula. Note that the algorithm doesn't depend on N at all: computing the coefficients via Bernoulli numbers takes O(X**2) operations, while the linear-equations route costs O(X**3) with straightforward Gaussian elimination.
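
The answer above leaves the arithmetic over the rationals; since the result is only needed mod 1000000007 (a prime), one way to carry this out is to solve the same X + 2 equations directly over GF(1000000007), where every division becomes a modular inverse. A rough Python sketch of that idea (the function name power_sum is mine, not from the answer):

    MOD = 1000000007

    def power_sum(n, x):
        """Sum of i**x for i = 1..n, mod MOD, by interpolating the degree-(x+1) polynomial."""
        k = x + 2                                    # unknown coefficients a_0 .. a_(x+1)
        rows, rhs = [], 0
        for t in range(1, k + 1):                    # one equation for each of N = 1 .. x+2
            rhs = (rhs + pow(t, x, MOD)) % MOD
            rows.append([pow(t, j, MOD) for j in range(k)] + [rhs])
        for col in range(k):                         # Gauss-Jordan elimination over GF(MOD)
            piv = next(r for r in range(col, k) if rows[r][col])
            rows[col], rows[piv] = rows[piv], rows[col]
            inv = pow(rows[col][col], MOD - 2, MOD)  # division = multiplication by inverse
            rows[col] = [v * inv % MOD for v in rows[col]]
            for r in range(k):
                if r != col and rows[r][col]:
                    f = rows[r][col]
                    rows[r] = [(a - f * b) % MOD for a, b in zip(rows[r], rows[col])]
        coeffs = [rows[j][k] for j in range(k)]      # a_0 .. a_(x+1)
        return sum(c * pow(n % MOD, j, MOD) for j, c in enumerate(coeffs)) % MOD

    # power_sum(10, 2) == 385; the elimination is cubic in x and independent of n.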




Answer 2:


There are a few ways of speeding up modular exponentiation. From here on, I will use ** to denote "exponentiate" and % to denote "modulus".

First, a few observations. It is always the case that (a * b) % m is ((a % m) * (b % m)) % m. It is also always the case that a ** n is the same as (a ** floor(n / 2)) * (a ** (n - floor(n / 2))). This means that for an exponent <= 1000, we can always complete the exponentiation in at most 20 multiplications (and 21 mods).

We can also skip quite a few calculations, since (a ** b) % m is the same as ((a % m) ** b) % m, and if m is significantly lower than n, we simply have multiple repeating sums with a "tail" of a partial repeat.
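
Python's built-in pow(a, b, m) already implements exactly this square-and-multiply idea, but as a rough illustration of the observations above, a hand-rolled version might look like:

    def pow_mod(a, n, m):
        """Compute a ** n (mod m) with O(log n) multiplications (square-and-multiply)."""
        result = 1
        a %= m                        # (a ** b) % m == ((a % m) ** b) % m
        while n:
            if n & 1:                 # odd exponent: peel off one factor of a
                result = result * a % m
            a = a * a % m             # a ** n == (a * a) ** (n // 2) * a ** (n % 2)
            n >>= 1
        return result

    # pow_mod(123456789, 1000, 1000000007) == pow(123456789, 1000, 1000000007)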




Answer 3:


I think Vatine’s answer is probably the way to go, but I already typed this up and it may be useful, for this or for someone else’s similar problem.

I don’t have time this morning for a detailed answer, but consider this. 1^2 + 2^2 + 3^2 + ... + n^2 would take O(n) steps to compute directly. However, it’s equivalent to (n(n+1)(2n+1)/6), which can be computed in O(1) time. A similar equivalence exists for any higher integral power x.
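
For instance, a minimal sketch of that x = 2 closed form taken mod 1000000007: the division by 6 becomes multiplication by the modular inverse of 6, which exists because the modulus is prime (the function name is mine):

    MOD = 1000000007

    def sum_of_squares(n):
        """1**2 + 2**2 + ... + n**2 == n(n+1)(2n+1)/6, evaluated mod a prime in O(1)."""
        inv6 = pow(6, MOD - 2, MOD)   # modular inverse of 6 via Fermat's little theorem
        n %= MOD
        return n * (n + 1) % MOD * (2 * n + 1) % MOD * inv6 % MOD

    # sum_of_squares(10) == 385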

There may be a general solution to such problems; I don’t know of one, and Wolfram Alpha doesn’t seem to know of one either. However, in general the equivalent expression is of degree x+1, and can be worked out by computing some sample values and solving a set of linear equations for the coefficients.

However, this would be difficult for large x, such as 1000 as in your problem, and probably could not be done within 2 seconds.

Perhaps someone who knows more math than I do can turn this into a workable solution?

Edit: Whoops, I see Fabian Pijcke had already posted a useful link about Faulhaber's formula before I posted.




Answer 4:


If you want something easy to implement and fast, try this:

Function Sum(x: Number, n: Integer) -> Number
  P := PolySum(:x, n)
  return P(x)
End

Function PolySum(x: Variable, n: Integer) -> Polynomial
  C := Sum-Coefficients(n)
  P := 0
  For i from 1 to n + 1
    P += C[i] * x^i
  End
  return P
End

Function Sum-Coefficients(n: Integer) -> Vector of Rationals
  A := Create-Matrix(n)
  R := Reduced-Row-Echelon-Form(A)
  return last column of R
End

Function Create-Matrix(n: Integer) -> Matrix of Integers
  A := New (n + 1) x (n + 2) Matrix of Integers
  Fill A with 0s
  Fill first row of A with 1s

  For i from 2 to n + 1
    For j from i to n + 1
      A[i, j] := A[i-1, j] * (j - i + 2)
    End
    A[i, n+2] := A[i, n]
  End
  A[n+1, n+2] := A[n, n+2]
  return A
End
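
The above is pseudocode; a rough, unoptimized Python transcription (using fractions.Fraction for exact rational arithmetic; the function names mirror the pseudocode, but the code itself is my own sketch rather than the repository's implementation) might look like:

    from fractions import Fraction

    def create_matrix(n):
        """Build the (n + 1) x (n + 2) augmented matrix described above (0-based indexing)."""
        a = [[Fraction(0)] * (n + 2) for _ in range(n + 1)]
        a[0] = [Fraction(1)] * (n + 2)               # first row: all 1s
        for i in range(1, n + 1):
            for j in range(i, n + 1):
                a[i][j] = a[i - 1][j] * (j - i + 2)  # (j+1)_i = (j+1)_(i-1) * (j - i + 2)
            a[i][n + 1] = a[i][n - 1]                # right-hand side of this equation
        a[n][n + 1] = a[n - 1][n + 1]                # last right-hand side comes from the row above
        return a

    def sum_coefficients(n):
        """Reduce the (already upper-triangular) matrix and return its last column."""
        a = create_matrix(n)
        for i in range(n, -1, -1):
            a[i] = [v / a[i][i] for v in a[i]]       # normalize the pivot row
            for r in range(i):
                f = a[r][i]
                a[r] = [u - f * v for u, v in zip(a[r], a[i])]
        return [row[n + 1] for row in a]             # [a_1, ..., a_(n+1)]

    def poly_sum(x, n):
        """Sum of i**n for i = 1..x, as an exact rational (an integer in lowest terms)."""
        return sum(c * Fraction(x) ** (k + 1) for k, c in enumerate(sum_coefficients(n)))

    # poly_sum(10, 2) == 385, matching Q(x) = x/6 + x**2/2 + x**3/3 derived below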

Explanation

Our goal is to obtain a polynomial Q such that Q(x) = sum i^n for i from 1 to x. Knowing that Q(x) = Q(x - 1) + x^n => Q(x) - Q(x - 1) = x^n, we can then make a system of equations like so:

d^0/dx^0( Q(x) - Q(x - 1) ) = d^0/dx^0( x^n )
d^1/dx^1( Q(x) - Q(x - 1) ) = d^1/dx^1( x^n )
d^2/dx^2( Q(x) - Q(x - 1) ) = d^2/dx^2( x^n )
                           ...
d^n/dx^n( Q(x) - Q(x - 1) ) = d^n/dx^n( x^n )

Assuming that Q(x) = a_1*x + a_2*x^2 + ... + a_(n+1)*x^(n+1), we will then have n+1 linear equations with unknowns a_1, ..., a_(n+1), and it turns out the coefficient c_j multiplying the unknown a_j in equation i follows the pattern (where (k)_p = k! / (k - p)!):

  • if j < i, c_j = 0
  • otherwise, c_j = (j)_(i - 1)

and the independent (right-hand-side) value of the ith equation is (n)_(i - 1). Explaining why gets a bit messy, but you can check the proof here.

The above algorithm is equivalent to solving this system of linear equations. Plenty of implementations and further explanations can be found in https://github.com/fcard/PolySum. The main drawback of this algorithm is that it consumes a lot of memory: even my most memory-efficient version uses almost 1 GB for n = 3000. But it's faster than both SymPy and Mathematica, so I assume it's okay. Compare to Schultz's method, which uses an alternate set of equations.

Examples

It's easy to apply this method by hand for small n. The matrix for n=1 is

| (1)_0   (2)_0   (1)_0 |     |   1       1       1   |
|   0     (2)_1   (1)_1 |  =  |   0       2       1   |

Applying a Gauss-Jordan elimination we then obtain

|   1       0      1/2  |
|   0       1      1/2  |

Result = {a1 = 1/2, a2 = 1/2} => Q(x) = x/2 + (x^2)/2

Note that the matrix is always already in row echelon form; we just need to reduce it.

For n=2:

| (1)_0   (2)_0   (3)_0   (2)_0 |     |   1       1       1       1   |
|   0     (2)_1   (3)_1   (2)_1 |  =  |   0       2       3       2   |
|   0       0     (3)_2   (2)_2 |     |   0       0       6       2   |

Applying a Gauss-Jordan elimination we then obtain

|   1       1       0       2/3 |      |   1       0       0       1/6 |
|   0       2       0         1 |  =>  |   0       1       0       1/2 |
|   0       0       1       1/3 |      |   0       0       1       1/3 |

Result = {a1 = 1/6, a2 = 1/2, a3 = 1/3} => Q(x) = x/6 + (x^2)/2 + (x^3)/3

The key to the algorithm's speed is that it doesn't calculate a factorial for every element of the matrix. Instead, it uses the fact that (k)_p = (k)_(p-1) * (k - (p - 1)), therefore A[i,j] = (j)_(i-1) = (j)_(i-2) * (j - (i - 2)) = A[i-1, j] * (j - (i - 2)), so each row is computed from the previous one.



Source: https://stackoverflow.com/questions/41695283/calculating-1x-2x-nx-mod-1000000007
