ode

Change a constant in ODE calculations under particular conditions with a flag

泄露秘密 submitted on 2019-11-29 17:00:40
I have an ODE for calculating how acidity changes. Everything is working just fine, only I would like to change a constant whenever the acidity reaches a critical point. It is supposed to simulate some kind of irreversible effect. My constants come from a structure (c) that I load once in the ODE function.

    [Time, Results] = ode15s(@(x, c) f1(x, c), [0 c.length], x0, options);

The main problem here is not telling Matlab to change the constant, but remembering whether it has already happened once during the simulation, so that Matlab takes the irreversibly changed constant rather than the original one.
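
A sketch of the same "irreversible switch" idea, illustrated here in Python with scipy's solve_ivp rather than MATLAB (ode15s has an analogous 'Events' option): integrate until the critical acidity is crossed, then restart from that state with the changed constant, so the change persists for the rest of the simulation. The model, constants, and threshold below are all placeholders, not from the original post.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Placeholder model: acidity A relaxes toward a target at rate k,
    # and k changes irreversibly once A falls through a critical value.
    A_crit, k_normal, k_damaged = 3.0, 0.8, 0.2

    def rhs(t, A, k):
        return k * (2.5 - A)                      # illustrative dynamics only

    def hit_crit(t, A, k):
        return A[0] - A_crit                      # zero when the critical acidity is reached
    hit_crit.terminal = True                      # stop the first integration at the crossing
    hit_crit.direction = -1                       # trigger only when A falls through A_crit

    sol1 = solve_ivp(rhs, (0, 50), [5.0], args=(k_normal,), events=hit_crit)
    if sol1.t_events[0].size:                     # the critical point was reached
        t_hit = sol1.t_events[0][0]
        sol2 = solve_ivp(rhs, (t_hit, 50), sol1.y[:, -1], args=(k_damaged,))

The analogous structure in MATLAB would be an Events function that stops ode15s at the critical acidity, followed by a second ode15s call with the modified constant; this avoids relying on a flag flipped inside the right-hand side, which the solver may evaluate at rejected trial points.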

solving two dimension-differential equations in python with scipy

a 夏天 submitted on 2019-11-29 16:57:03
I am a newbie to Python. I have a simple differential system, which consists of two variables and two differential equations, with initial conditions x0=1, y0=2:

    dx/dt = 6*y
    dy/dt = (2*t - 3*x)/(4*y)

Now I am trying to solve these two differential equations and I chose odeint. Here is my code:

    import matplotlib.pyplot as pl
    import numpy as np
    from scipy.integrate import odeint

    def func(z, b):
        x, y = z
        return [6*y, (b - 3*x)/(4*y)]

    z0 = [1, 2]
    t = np.linspace(0, 10, 11)
    b = 2*t
    xx = odeint(func, z0, b)
    pl.figure(1)
    pl.plot(t, xx[:, 0])
    pl.legend()
    pl.show()

But the result is incorrect and there is an error message:
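
For what it's worth, a minimal sketch of how this call is usually written (assuming the intended system is dx/dt = 6y, dy/dt = (2t - 3x)/(4y)): the third argument to odeint should be the time grid t itself, and 2*t is computed inside the right-hand side, rather than passing the array b = 2*t to odeint as if it were the time grid.

    import numpy as np
    import matplotlib.pyplot as pl
    from scipy.integrate import odeint

    def func(z, t):
        x, y = z
        # t is the scalar time value odeint passes in; use 2*t directly here
        return [6.0 * y, (2.0 * t - 3.0 * x) / (4.0 * y)]

    z0 = [1.0, 2.0]
    t = np.linspace(0, 10, 101)
    xx = odeint(func, z0, t)          # integrate over t, not over b = 2*t

    pl.plot(t, xx[:, 0], label='x(t)')
    pl.plot(t, xx[:, 1], label='y(t)')
    pl.legend()
    pl.show()

Note that the integration can still fail partway if y(t) approaches zero, since the right-hand side divides by y; that is a property of this system rather than of the call itself.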

Using scipy.integrate.complex_ode instead of scipy.integrate.ode

只愿长相守 submitted on 2019-11-29 11:27:30
I am trying to use the complex_ode method instead of the ode method in scipy.integrate. The help page for complex_ode does not provide an example, so I may have done something wrong. This code works properly with scipy.integrate.ode:

    from scipy.integrate import ode

    y0, t0 = [1.0j, 2.0], 0

    def f(t, y, arg1):
        return [1j*arg1*y[0] + y[1], -arg1*y[1]**2]

    def jac(t, y, arg1):
        return [[1j*arg1, 1], [0, -arg1*2*y[1]]]

    r = ode(f, jac).set_integrator('zvode', method='bdf', with_jacobian=True)
    r.set_initial_value(y0, t0).set_f_params(2.0).set_jac_params(2.0)
    t1 = 10
    dt = 1
    while r.successful() and r.t < t1:
        r.integrate(r.t + dt)
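
By comparison, a minimal sketch of the same system under complex_ode (with the parameter arg1 folded into f here rather than passed via set_f_params): complex_ode internally rewrites the complex system as a real one of twice the size, so it is paired with a real-valued integrator such as dopri5 instead of zvode.

    from scipy.integrate import complex_ode

    def f(t, y):
        arg1 = 2.0                                   # parameter folded in for simplicity
        return [1j*arg1*y[0] + y[1], -arg1*y[1]**2]

    r = complex_ode(f)                               # wraps the complex system as a real one
    r.set_integrator('dopri5')                       # real-valued integrator; no 'zvode' here
    r.set_initial_value([1.0j, 2.0], 0)
    t1, dt = 10, 1
    while r.successful() and r.t < t1:
        print(r.t + dt, r.integrate(r.t + dt))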

Replacing negative values in a model (system of ODEs) with zero

耗尽温柔 submitted on 2019-11-29 11:24:29
I'm currently working on solving a system of ordinary differential equations using deSolve, and was wondering if there's any way of preventing differential variable values from going below zero. I've seen a few other posts about setting negative values to zero in a vector, data frame, etc., but since this is a biological model (and it doesn't make sense for a T cell count to go negative), I need to stop it from happening in the first place so these values don't skew the results, rather than just replacing the negatives in the final output. My standard approach is to transform the state variables to an
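
Assuming the transform being alluded to is a log transform, a sketch of the idea (illustrated here in Python with scipy rather than deSolve, for a toy decay model): integrating u = log(N) instead of N keeps N = exp(u) positive by construction, and the same change of variables carries over directly to deSolve.

    import numpy as np
    from scipy.integrate import odeint

    # Toy model: dN/dt = -k*N + s, where N is a cell count that must stay non-negative.
    k, s = 0.5, 0.01

    def rhs_log(u, t):
        N = np.exp(u)              # back-transform; exp(u) is positive by construction
        dNdt = -k * N + s
        return dNdt / N            # du/dt = (dN/dt)/N by the chain rule

    t = np.linspace(0, 20, 201)
    u0 = np.log(100.0)             # start from N0 = 100 cells
    N = np.exp(odeint(rhs_log, u0, t)[:, 0])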

Optimize constants in differential equations in Python

别等时光非礼了梦想. submitted on 2019-11-29 00:43:59
Okay, so how would I approach writing code to optimize the constants a and b in a differential equation like dy/dt = a*y^2 + b, using curve_fit? I would be using odeint to solve the ODE and then curve_fit to optimize a and b. If you could please provide input on this situation I would greatly appreciate it!

You might be better served by looking at ODEs with Sympy. Scipy/Numpy are fundamentally numerical packages and aren't really set up to do algebraic/symbolic operations. You definitely can do this:

    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import curve_fit
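
A hedged sketch of that odeint-plus-curve_fit combination (the model dy/dt = a*y^2 + b is from the post, but the initial value and the synthetic data below are illustrative): wrap the ODE solve in a function of (t, a, b) and hand that function to curve_fit.

    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import curve_fit

    def solve(t, a, b, y0=1.0):
        # Integrate dy/dt = a*y**2 + b from y(0) = y0 on the grid t
        def rhs(y, t):
            return a * y**2 + b
        return odeint(rhs, y0, t)[:, 0]

    # Synthetic "measurements" generated with a = -0.5, b = 1.0 plus noise
    t_data = np.linspace(0, 5, 50)
    y_data = solve(t_data, -0.5, 1.0) + 0.01 * np.random.randn(t_data.size)

    (a_fit, b_fit), cov = curve_fit(solve, t_data, y_data, p0=(-1.0, 0.5))
    print(a_fit, b_fit)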

Runge-Kutta code not converging with builtin method

Deadly submitted on 2019-11-28 14:28:39
I am trying to implement the Runge-Kutta method to solve a Lotka-Volterra system, but the code (below) is not working properly. I followed the recommendations I found in other topics on StackOverflow, but the results do not converge to those of the built-in Runge-Kutta methods, like the rk4 method available in Pylab, for example. Could someone help me?

    import matplotlib.pyplot as plt
    import numpy as np
    from pylab import *

    def meurk4(f, x0, t):
        n = len(t)
        x = np.array([x0] * n)
        for i in range(n - 1):
            h = t[i+1] - t[i]
            k1 = h * f(x[i], t[i])
            k2 = h * f(x[i] + 0.5 * h * k1, t[i] + 0.5 *
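
The usual culprit in code like this is that k1 = h*f(...) already carries the factor h, so the intermediate stages must use x[i] + 0.5*k1 rather than x[i] + 0.5*h*k1 (otherwise h enters twice and the scheme no longer matches the built-in solvers). A sketch of the corrected scheme; the Lotka-Volterra coefficients below are illustrative, not taken from the post:

    import numpy as np

    def rk4(f, x0, t):
        # Classic RK4; the stages use 0.5*k, not 0.5*h*k, because k already contains h.
        n = len(t)
        x = np.zeros((n, len(x0)))
        x[0] = x0
        for i in range(n - 1):
            h = t[i+1] - t[i]
            k1 = h * f(x[i], t[i])
            k2 = h * f(x[i] + 0.5 * k1, t[i] + 0.5 * h)
            k3 = h * f(x[i] + 0.5 * k2, t[i] + 0.5 * h)
            k4 = h * f(x[i] + k3, t[i] + h)
            x[i+1] = x[i] + (k1 + 2*k2 + 2*k3 + k4) / 6.0
        return x

    # Lotka-Volterra example with illustrative coefficients
    def lotka_volterra(x, t):
        prey, pred = x
        return np.array([1.0*prey - 0.1*prey*pred, -1.5*pred + 0.075*prey*pred])

    t = np.linspace(0, 15, 1501)
    sol = rk4(lotka_volterra, np.array([10.0, 5.0]), t)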

ODE Runge Kutta MATLAB error

China☆狼群 submitted on 2019-11-28 12:02:30
Question: So I'm trying to create a Runge-Kutta function and this is my code:

    function [t,U] = RK(f, n, eta, interv)
        h = (interv(2)-interv(1))/n;
        t = interv(1):h:interv(2);
        v(1) = eta(1);
        w(1) = eta(2);
        for i=1:n
            k1 = f([v(i),w(i)]);
            k2 = f([v(i),w(i)]+h*k1/2); %f(t(i)+h/2, u(:,i)+h*k1/2);
            k3 = f([v(i),w(i)]+h*k2/2);
            k4 = f([v(i),w(i)]+h*k3);
            v(i+1) = v(i) + h*(k1(1)+2*k2(1)+2*k3(1)+k4(1))/6;
            w(i+1) = w(i) + h*(k1(2)+2*k2(2)+2*k3(2)+k4(2))/6;
        end
        U = [v;w];
    end

where U is a matrix of 2 rows and n+1

Does scipy.integrate.ode.set_solout work?

被刻印的时光 ゝ submitted on 2019-11-28 11:27:19
The scipy.integrate.ode interface to integration routines provides a method, set_solout, for stopping the integration if a constraint is violated at any step. However, I cannot get this method to work, even in the simplest examples. Here's one attempt:

    import numpy as np
    from scipy.integrate import ode

    def f(t, y):
        """Exponential decay."""
        return -y

    def solout(t, y):
        if y[0] < 0.5:
            return -1
        else:
            return 0

    y_initial = 1
    t_initial = 0
    r = ode(f).set_integrator('dopri5')  # Integrator that supports solout
    r.set_initial_value(y_initial, t_initial)
    r.set_solout(solout)
    # Integrate until t = 5, but
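
For what it's worth, in the scipy versions this question dates from, the fix usually reported was simply the order of the calls: set_solout only took effect if it was registered before set_initial_value. A sketch with the calls reordered:

    import numpy as np
    from scipy.integrate import ode

    def f(t, y):
        return -y                        # exponential decay

    def solout(t, y):
        return -1 if y[0] < 0.5 else 0   # returning -1 asks dopri5 to stop

    r = ode(f).set_integrator('dopri5')
    r.set_solout(solout)                 # registered *before* set_initial_value
    r.set_initial_value(1.0, 0.0)
    r.integrate(5.0)
    print(r.t, r.y)                      # stops near the point where y drops below 0.5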

Using adaptive step sizes with scipy.integrate.ode

ぐ巨炮叔叔 submitted on 2019-11-28 05:53:47
The (brief) documentation for scipy.integrate.ode says that two methods (dopri5 and dop853) have step-size control and dense output. Looking at the examples and the code itself, I can only see a very simple way to get output from an integrator: it looks like you just step the integrator forward by some fixed dt, get the function value(s) at that time, and repeat. My problem has quite variable timescales, so I'd like to get the values at whatever time steps the solver needs to evaluate to achieve the required tolerances. That is, early on things are changing slowly, so the output time
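
One way to recover the solver-chosen steps with dopri5 is a solout callback, which dopri5 and dop853 call once per accepted internal step; collecting (t, y) there gives the adaptive grid directly. A sketch, with a placeholder right-hand side and tolerances:

    import numpy as np
    from scipy.integrate import ode

    def f(t, y):
        return -0.5 * y                  # placeholder right-hand side

    ts, ys = [], []
    def record(t, y):
        ts.append(t)                     # called once per accepted internal step
        ys.append(y.copy())

    r = ode(f).set_integrator('dopri5', rtol=1e-8, atol=1e-10)
    r.set_solout(record)                 # register before set_initial_value
    r.set_initial_value([1.0], 0.0)
    r.integrate(10.0)

    t_steps = np.array(ts)
    print(np.diff(t_steps))              # the step sizes the integrator actually chose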