Odd SciPy ODE Integration error


I think that for the parameters you've chosen you're running into problems with stiffness: numerical instability forces the solver's step size down to very small values even in regions where the slope of the solution curve is actually quite shallow. The Fortran solver lsoda, which scipy.integrate.odeint wraps, tries to switch adaptively between methods suited to 'stiff' and 'non-stiff' systems, but in this case it seems to be failing to switch to the stiff methods.
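
You can check what lsoda is actually doing by passing full_output=True to odeint; the returned dictionary records which method was used and what step sizes were taken. A minimal, self-contained sketch, assuming the same SIR right-hand side and parameter values as the code further down (the variable names here are illustrative):

import numpy as np
import scipy.integrate as spi

# Same SIR-with-demography right-hand side as below, in odeint's f(y, t) form
beta, gamma, mu = 0.5, 1/10., 1/25550.

def eq_system(PopIn, t):
    S, I, R = PopIn
    N = S + I + R
    return [-beta*S*I/N - mu*S + mu*N,
            beta*S*I/N - gamma*I - mu*I,
            gamma*I - mu*R]

t_interval = np.arange(1., 15000., 1.)
SIR, info = spi.odeint(eq_system, (99., 1., 0.), t_interval,
                       mxstep=5000000, full_output=True)

# info['mused'] is 1 where lsoda used the non-stiff (Adams) method and 2 where
# it switched to the stiff (BDF) method; info['hu'] holds the step sizes taken.
print(np.unique(info['mused']))
print(info['hu'].min(), info['hu'].max())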

Very crudely, you can just massively increase the maximum number of internal steps the solver is allowed to take (the mxstep argument) and it will get there in the end:

SIR = spi.odeint(eq_system, PopIn, t_interval, mxstep=5000000)
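
If your SciPy is 1.0 or newer, the same crude fix (and the stiff-method choice discussed below) can also be expressed with scipy.integrate.solve_ivp; this is only a sketch under that version assumption, and note that solve_ivp expects f(t, y) rather than odeint's f(y, t):

import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, mu = 0.5, 1/10., 1/25550.

def eq_system(t, y):
    S, I, R = y
    N = S + I + R
    return [-beta*S*I/N - mu*S + mu*N,
            beta*S*I/N - gamma*I - mu*I,
            gamma*I - mu*R]

# method='LSODA' uses the same solver as odeint; method='BDF' forces a stiff method
sol = solve_ivp(eq_system, (1., 15000.), [99., 1., 0.],
                method='LSODA', t_eval=np.arange(1., 15000., 1.))
S, I, R = sol.y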

A better option is to use the object-oriented ODE solver scipy.integrate.ode, which allows you to explicitly choose whether to use stiff or non-stiff methods:

import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as spi

def run():
    #Parameter Values
    S0 = 99.
    I0 = 1.
    R0 = 0.
    PopIn= (S0, I0, R0)
    beta= 0.50     
    gamma=1/10.  
    mu = 1/25550.
    t_end = 15000.
    t_start = 1.
    t_step = 1.
    t_interval = np.arange(t_start, t_end, t_step)

    #Solving the differential equation. Solves over t for initial conditions PopIn
    def eq_system(t,PopIn):
        '''Defining SIR System of Equations'''
        #Creating an array of equations
        Eqs= np.zeros((3))
        Eqs[0]= -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
        Eqs[1]= (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
        Eqs[2]= gamma*PopIn[1] - mu*PopIn[2]
        return Eqs

    ode =  spi.ode(eq_system)

    # BDF method suited to stiff systems of ODEs
    ode.set_integrator('vode',nsteps=500,method='bdf')
    ode.set_initial_value(PopIn,t_start)

    ts = []
    ys = []

    while ode.successful() and ode.t < t_end:
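        # Advance the solver by one step and record the current time and state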
        ode.integrate(ode.t + t_step)
        ts.append(ode.t)
        ys.append(ode.y)

    t = np.vstack(ts)
    s,i,r = np.vstack(ys).T

    fig, ax = plt.subplots(1, 1)
    ax.plot(t,s,label='Susceptible')
    ax.plot(t,i,label='Infected')
    ax.plot(t,r,label='Recovered')
    ax.set_xlim(t_start,t_end)
    ax.set_ylim(0,100)
    ax.set_xlabel('Time')
    ax.set_ylabel('Percent')
    ax.legend(loc=0,fancybox=True)

    return t,s,i,r,fig,ax

Output: (plot of the Susceptible, Infected, and Recovered populations over time, as produced by the code above)

The infected population PopIn[1] decays to zero. Apparently, ordinary numerical imprecision leads to PopIn[1] becoming slightly negative (approximately -3.549e-12) near t = 322.9. Eventually the solution blows up near t = 7818.093, with PopIn[0] heading toward +infinity and PopIn[1] toward -infinity.
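
If you want to locate that sign change programmatically, the event mechanism in scipy.integrate.solve_ivp (SciPy 1.0+) can report where the infected compartment crosses zero. This is only an illustrative sketch under that assumption, using the f(t, y) form of the system:

import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, mu = 0.5, 1/10., 1/25550.

def eq_system(t, y):
    S, I, R = y
    N = S + I + R
    return [-beta*S*I/N - mu*S + mu*N,
            beta*S*I/N - gamma*I - mu*I,
            gamma*I - mu*R]

def infected_crosses_zero(t, y):
    return y[1]                       # root whenever I passes through zero

infected_crosses_zero.direction = -1  # only catch downward crossings
infected_crosses_zero.terminal = False

sol = solve_ivp(eq_system, (1., 15000.), [99., 1., 0.],
                events=infected_crosses_zero)
# May be empty if this solver/tolerance combination never drives I below zero
print(sol.t_events[0])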

Edit: I removed my earlier suggestion for a "quick fix". It was a questionable hack.
