using SciPy to integrate a function that returns a matrix or array

野趣味 2020-12-16 15:07

I have a symbolic array that can be expressed as:

from sympy import lambdify, Matrix, symbols

x = symbols('x')
g_sympy = Matrix([[   x,  2*x,  3*x,  4*x,  5*x,  6*x,  7*x,  8*x,  9*x, 10*x],
                  [x**2, x**3, x**4, x**5, x**6, x**7, x**8, x**9, x**10, x**11]])
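The `lambdify` import already points at the usual numeric route; as a sketch (with a smaller hypothetical matrix of the same flavour), the symbolic matrix can be turned into a fast NumPy callable like this:

```python
import numpy as np
from sympy import Matrix, lambdify, symbols

x = symbols('x')
# small stand-in for the matrix above
g_sympy = Matrix([[x, 2*x, 3*x], [x**2, x**3, x**4]])

# with the 'numpy' backend, lambdify returns a function that
# evaluates every matrix entry using NumPy operations
g_num = lambdify(x, g_sympy, modules='numpy')

print(g_num(2.0))  # the whole matrix evaluated at x = 2
```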


        
5 Answers
  •  不知归路
    2020-12-16 15:43

    In the real case the integral does not have an exact solution. Do you mean singularities? Could you be more precise on that, as well as on the size of the matrix you wish to integrate? I have to admit that sympy is dreadfully slow at some things (not sure if integration is one of them, but I prefer to stay away from sympy and stick to a numpy solution). Do you want a more elegant solution, by doing it with a matrix, or a faster one?

    -note: apparently I can't add a comment to your post to ask about this, so I had to post it as an answer; maybe that is because I don't have enough reputation?-

    edit: something like this?

        import numpy
        from scipy.integrate import trapz
        # matrix-valued integrand: on an array, g returns shape (2, 3, len(x))
        g = lambda x: numpy.array([[x, 2*x, 3*x], [x**2, x**3, x**4]])
        xv = numpy.linspace(0, 100, 200)
        # pass xv so trapz uses the actual spacing (the default assumes dx=1)
        print(trapz(g(xv), xv))
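As a sanity check (my own addition, using `trapezoid`, the current name of `trapz` in SciPy, and passing the x samples so the spacing is right), the trapezoid result agrees with the exact entry-wise antiderivatives:

```python
import numpy as np
from scipy.integrate import trapezoid  # trapz was renamed in newer SciPy

g = lambda x: np.array([[x, 2*x, 3*x], [x**2, x**3, x**4]])
xv = np.linspace(0, 100, 2001)

# integrate every matrix entry along the last axis, using xv as sample positions
approx = trapezoid(g(xv), xv)

# exact integrals of each entry over [0, 100]
exact = np.array([[100**2/2, 100**2, 3*100**2/2],
                  [100**3/3, 100**4/4, 100**5/5]])

print(np.max(np.abs(approx/exact - 1)))  # small relative error
```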
    

    Having seen that you want to integrate things like sum(a*sin(bx+c)^n * cos(dx+e)^m), for different coefficients a, b, c, d, e, m, n, I suggest doing all of those analytically (there should be a formula for that, since you can just rewrite the sines as complex exponentials).

    Another thing I noted when checking those functions a bit more closely is that sin(a*x+pi/2), sin(a*x+pi) and the like can be rewritten as cos or sin in a way that removes the pi/2 or pi. I also see, just by looking at the first element in your matrix of functions, that:

    a*sin(bx+c)^2+d*cos(bx+c)^2 = a*(sin^2+cos^2)+(d-a)*cos(bx+c)^2 = a+(d-a)*cos(bx+c)^2 
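A quick numeric spot check of that identity (my own sketch, with arbitrary coefficients):

```python
import numpy as np

a, b, c, d = 1.5, 2.0, 0.3, -0.7
x = np.linspace(0, 10, 101)

# left side: a*sin^2(bx+c) + d*cos^2(bx+c)
lhs = a*np.sin(b*x + c)**2 + d*np.cos(b*x + c)**2
# right side after using sin^2 + cos^2 = 1
rhs = a + (d - a)*np.cos(b*x + c)**2

print(np.max(np.abs(lhs - rhs)))  # agreement to machine precision
```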
    

    which also simplifies the calculations. If you had the formulas in a form that didn't involve a massive text file, I'd check what the most general formula you need to integrate is, but I guess it is something like a*sin^n(bx+c)*cos^m(dx+e), with m and n being 0, 1 or 2, and those can be simplified into something that can be integrated analytically. So if you work out the most general analytical form you have, you can easily build something like

    # use an array: plain nested lists cannot be subtracted
    f = lambda x: numpy.array([[s1(x), s2(x)], [s3(x), s4(x)]])
    res = f(x2) - f(x1)
    

    where s1(x) etc. are just the analytically integrated versions of your functions.
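One way to get those s1(x), s2(x), ... without doing the algebra by hand is to let sympy do the symbolic integration once, then lambdify the antiderivatives so every later evaluation is pure NumPy (a sketch with made-up entries, not the asker's actual matrix):

```python
import numpy as np
from sympy import cos, integrate, lambdify, sin, symbols

x = symbols('x')

# hypothetical matrix of integrands, same flavour as the question
entries = [[sin(2*x + 1)**2, cos(x)**2],
           [sin(x)*cos(x),   sin(3*x)]]

# symbolic antiderivative of every entry, computed once up front
F = [[integrate(e, x) for e in row] for row in entries]
F_num = lambdify(x, F, modules='numpy')

def integral(x1, x2):
    # definite integral entry by entry: F(x2) - F(x1)
    return np.array(F_num(x2)) - np.array(F_num(x1))

print(integral(0.0, 1.0))
```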

    (I am not really planning to go through your entire code to see what the rest does, but is it just integrating those functions in the txt file from a to b? Or is there somewhere a step like squaring each function, or anything else that might rule out doing it analytically?)

    this should simplify your integrals, I guess?

    first integral and: second one

    hmm, that second link doesn't work, but you get the idea from the first one, I guess

    edit, since you do not want analytical solutions: the improvement then still comes from getting rid of sympy:

    from sympy import sin as SIN
    from numpy import sin as SIN2
    from scipy.integrate import trapz
    import time
    import numpy as np

    def integrand(rlin):
        # numpy version of the question's matrix: rows k*x (k=1..10)
        # stacked on rows x**k (k=2..11)
        first = np.arange(1, 11)[:, None]
        second = np.arange(2, 12)[:, None]
        return np.vstack((rlin*first, np.power(rlin, second)))

    def simpson2(func, a, b, num):
        # composite Simpson's rule; num must be even
        a = float(a)
        b = float(b)
        h = (b - a)/num
        p1 = a + h*np.arange(1, num, 2)      # odd interior nodes, weight 4
        p2 = a + h*np.arange(2, num - 1, 2)  # even interior nodes, weight 2
        points = np.hstack((p1, p2, a, b))
        mult = np.hstack((np.repeat(4, p1.shape[0]), np.repeat(2, p2.shape[0]), 1, 1))
        # use func here (the original hard-coded integrand and ignored func)
        return np.dot(func(points), mult)*h/3

    A = np.linspace(0, 100., 200)

    # sympy's sin does not broadcast over arrays, so evaluate it element-wise
    B = lambda x: np.array([float(SIN(xi)) for xi in x])
    C = lambda x: SIN2(x)

    t0 = time.time()
    D = simpson2(B, 0, 100., 200)
    print(time.time() - t0)
    t1 = time.time()
    E = trapz(C(A), A)
    print(time.time() - t1)
    t2 = time.time()
    F = simpson2(C, 0, 100., 200)
    print(time.time() - t2)
    

    results in:

    0.000764131546021 sec for the faster method (Simpson), but using sympy

    7.58171081543e-05 sec for my slower method (trapz), which uses numpy

    0.000519037246704 sec for the faster method (Simpson), using numpy


    conclusion: use numpy, ditch sympy. (My "slower" numpy method is actually faster in this case because, in this example, I only tried it on a single sin function instead of on an ndarray of them; but the point of ditching sympy still stands when you compare the time of the numpy version of the faster method to that of the sympy version.)
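For completeness (my own addition, not part of the original answer): newer SciPy also ships scipy.integrate.quad_vec, which applies adaptive quadrature to an array-valued integrand directly, so the whole matrix from the question can be integrated in one call:

```python
import numpy as np
from scipy.integrate import quad_vec

def g(x):
    # matrix-valued integrand, first few entries of the question's matrix
    return np.array([[x, 2*x, 3*x], [x**2, x**3, x**4]])

# one adaptive-quadrature call returns the integral of every entry
result, error = quad_vec(g, 0.0, 100.0)
print(result.shape)  # one integral per matrix entry
```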
