numeric::odesolve
Numerical solution of an ordinary differential equation
MuPAD® notebooks will be removed in a future release. Use MATLAB® live scripts instead.
MATLAB live scripts support most MuPAD functionality, though there are some differences. For more information, see Convert MuPAD Notebooks to MATLAB Live Scripts.
numeric::odesolve(f, t_0..t, Y_0, <method>, <RelativeError = rtol>, <AbsoluteError = atol>, <Stepsize = h>, <MaxStepsize = h_max>, <Alldata = n>, <Symbolic>)
numeric::odesolve(t_0..t, f, Y_0, <method>, <RelativeError = rtol>, <AbsoluteError = atol>, <Stepsize = h>, <MaxStepsize = h_max>, <Alldata = n>, <Symbolic>)
numeric::odesolve(f, t_0..t, Y_0) returns a numerical approximation of the solution Y(t) of the first order differential equation (dynamical system) Y'(t) = f(t, Y(t)), Y(t_0) = Y_0, with t_0, t ∈ ℝ and Y_0 ∈ ℝ^n or ℂ^n.
numeric::odesolve is a general purpose solver able to deal with initial value problems of various kinds of ordinary differential equations. Equations of order p, i.e., y^(p)(t) = g(t, y, y', …, y^(p-1)), can be solved by numeric::odesolve after reformulation to dynamical system form. This can always be achieved by writing the equation as a first order system for the vector Y = (y, y', …, y^(p-1)). See Example 4.
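To illustrate this reduction outside of MuPAD, here is a minimal plain-Python sketch (the helper names `F` and `euler` are made up for this illustration): the second order equation y'' = -y, y(0) = 1, y'(0) = 0 is rewritten as a first order system for the vector Y = [y, y'] and integrated with explicit Euler steps.

```python
import math

# Sketch (plain Python, not MuPAD): y'' = -y rewritten as the first order
# system Y' = F(t, Y) for the vector Y = [y, y'].
def F(t, Y):
    y, yp = Y
    return [yp, -y]          # [y', y''] with y'' = -y

def euler(F, t0, t1, Y0, n):
    """Integrate Y' = F(t, Y) with n explicit Euler steps (order 1)."""
    h = (t1 - t0) / n
    t, Y = t0, list(Y0)
    for _ in range(n):
        f = F(t, Y)
        Y = [y + h * fi for y, fi in zip(Y, f)]
        t += h
    return Y

Y = euler(F, 0.0, 1.0, [1.0, 0.0], 10000)
# the exact solution is y(t) = cos(t), y'(t) = -sin(t)
assert abs(Y[0] - math.cos(1.0)) < 1e-3
assert abs(Y[1] + math.sin(1.0)) < 1e-3
```

The same reduction pattern (first component holds y, further components hold the derivatives) is what numeric::odesolve expects in the vector field procedure f.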
The following single-step Runge-Kutta-type methods are implemented:
EULER1 (order 1)
RKF43 (order 3)
xRKF43 (order 3)
RKF34 (order 4)
xRKF34 (order 4)
RK4 (order 4)
RKF54a (order 4)
RKF54b (order 4)
DOPRI54 (order 4)
xDOPRI54 (order 4)
CK54 (order 4)
xRKF54a (order 4)
xRKF54b (order 4)
xCK54 (order 4)
RKF45a (order 5)
RKF45b (order 5)
DOPRI45 (order 5)
CK45 (order 5)
xRKF45a (order 5)
xRKF45b (order 5)
xDOPRI45 (order 5)
xCK45 (order 5)
DOPRI65 (order 5)
xDOPRI65 (order 5)
DOPRI56 (order 6)
xDOPRI56 (order 6)
BUTCHER6 (order 6)
RKF87 (order 7)
xRKF87 (order 7)
DOPRI87 (order 7)
xDOPRI87 (order 7)
RKF78 (order 8)
xRKF78 (order 8)
DOPRI78 (order 8)
xDOPRI78 (order 8)
GAUSS(s), GAUSS = s (order 2s)
For the Gauss methods, GAUSS(s) is equivalent to GAUSS = s. The positive integer s indicates the number of stages. The order of the s-stage Gauss method is 2s.
The utility function numeric::ode2vectorfield
may
be used to produce the input parameters f, t_{0}, Y_{0} from
a set of differential expressions representing the ODE. See Example 1.
The input data t_0, t, and Y_0 must not contain symbolic objects that cannot be converted to floating-point values via float. Numerical expressions such as π, sqrt(2), etc. are accepted.
The vector field f defining the dynamical system must be represented by a procedure with two input parameters: the scalar time t and the vector Y. numeric::odesolve internally calls this function with a real floating-point value t and a list Y of floating-point values. It has to return the vector f(t, Y) either as a list or as a one-dimensional array. The output of f may contain numerical expressions such as π, etc. However, all values must be convertible to real or complex floating-point numbers by float.
Autonomous systems, where f(t, Y) does not depend on t, must also be represented by a procedure with two arguments t and Y.
Scalar functions Y must also be represented by a list or an array with one element. For instance, the input data for the scalar initial value problem y'(t) = t*sin(y(t)), y(0) = 1 may be of the form
f := proc(t, Y)      /* Y is a 1-dimensional vector   */
local y;             /* represented by a list with    */
begin                /* one element: Y = [y].         */
  y := Y[1];
  [t*sin(y)]         /* the output is a list with one element */
end_proc:
Y0 := [1]:           /* the initial value */
The numerical precision is controlled by the global variable DIGITS: an adaptive control of the step size keeps local relative discretization errors below rtol = 10^(-DIGITS), unless a different tolerance is specified via the option RelativeError = rtol.
For small values of the solution vector Y, the absolute discretization error can be bounded by the threshold atol specified via the option AbsoluteError = atol.
If AbsoluteError is not specified, only relative discretization errors are controlled and kept below rtol.
The error control may be switched off by specifying a fixed Stepsize = h.
Only local errors are controlled by the adaptive mechanism. No control of the global error is provided!
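This distinction matters in practice. The following plain-Python sketch (not MuPAD; the variable names are illustrative) shows, for y' = y with y(0) = 1, that the tiny local error of each Euler step accumulates into a much larger global error at t = 1:

```python
import math

# Sketch (plain Python, not MuPAD): each Euler step y_{n+1} = y_n*(1 + h)
# has local error y_n*(e^h - 1 - h) ~ y_n*h^2/2, yet these small local
# errors accumulate into a much larger global error.
h, y = 1.0e-3, 1.0
max_local = 0.0
for _ in range(1000):               # integrate y' = y from t = 0 to t = 1
    max_local = max(max_local, y * (math.exp(h) - 1.0 - h))
    y *= 1.0 + h
global_error = math.e - y           # exact value is y(1) = e

assert global_error > 100 * max_local   # global error far exceeds any local one
```

Keeping every local error below a tolerance therefore does not guarantee that the final result meets that tolerance.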
With Y := t -> numeric::odesolve(f, t_0..t, Y_0), the numerical solution can be represented by a MuPAD® function: the call Y(t) will start the numerical integration from t_0 to t.
A more sophisticated form of this function may be generated via Y := numeric::odesolve2(f, t_0, Y_0). This equips Y with a remember mechanism that uses previously computed values to speed up the computation. See Example 2.
For systems of the special form Y'(t) = f(t, Y) Y with a matrix-valued function f(t, Y), there is a special solver numeric::odesolveGeometric, which preserves geometric features of the system more faithfully than numeric::odesolve.
The function is sensitive to the environment variable DIGITS
,
which determines the numerical working precision.
We compute the numerical solution y(10) of the initial value problem y'(t) = t*sin(y(t)), y(0) = 2:
f := proc(t, Y) begin [t*sin(Y[1])] end_proc: numeric::odesolve(f, 0..10, [2])
Alternatively, the utility function numeric::ode2vectorfield
can be used
to generate the input parameters in a more intuitive way:
[f, t0, Y0] := [numeric::ode2vectorfield({y'(t) = t*sin(y(t)), y(0) = 2}, [y(t)])]
numeric::odesolve(f, t0..10, Y0)
delete f, t0, Y0:
We consider the initial value problem y'(t) = y(t), y(0) = 1. The numerical solution may be represented by the function
Y := t -> numeric::odesolve((t, Y) -> Y, 0..t, [1]):
Calling Y(t)
starts the numerical integration:
Y(5), Y(0), Y(1), Y(PI)
delete Y:
We compute the numerical solution Y(π) = [x(π), y(π)] of the system
x'(t) = x(t) + y(t), y'(t) = x(t) - y(t), x(0) = 1, y(0) = i.
f := (t, Y) -> [Y[1] + Y[2], Y[1] - Y[2]]: Y0 := [1, I]: numeric::odesolve(f, 0..PI, Y0)
The solution of a linear dynamical system Y' = A Y with a constant matrix A is Y(t) = e^(t A) Y(0). The solution of the system above can also be computed by:
t := PI: tA := array(1..2, 1..2, [[t, t], [t, -t]]): numeric::expMatrix(tA, Y0)
delete f, Y0, t, tA:
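The identity Y(t) = e^(tA) Y(0) can also be checked outside MuPAD. The following plain-Python sketch (the helper names `matvec`, `expm_times`, and `rk4` are made up for this illustration) compares a truncated Taylor series for exp(πA) with a classical Runge-Kutta integration of the same linear system:

```python
import math

# Sketch (plain Python, not MuPAD): for a linear system Y' = A Y with a
# constant matrix A, the solution is Y(t) = exp(t A) Y(0).  We check this
# for A = [[1, 1], [1, -1]], Y(0) = [1, i].
A = [[1.0, 1.0], [1.0, -1.0]]
Y0 = [1.0 + 0.0j, 1.0j]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def expm_times(A, t, v, terms=60):
    """exp(t*A) applied to v via the Taylor series sum_k (t*A)^k v / k!."""
    result, term = list(v), list(v)
    for k in range(1, terms):
        term = [t / k * x for x in matvec(A, term)]
        result = [r + x for r, x in zip(result, term)]
    return result

def rk4(f, t0, t1, Y0, n):
    """Classical 4th order Runge-Kutta integration with n constant steps."""
    h, t, Y = (t1 - t0)/n, t0, list(Y0)
    for _ in range(n):
        k1 = f(t, Y)
        k2 = f(t + h/2, [y + h/2*k for y, k in zip(Y, k1)])
        k3 = f(t + h/2, [y + h/2*k for y, k in zip(Y, k2)])
        k4 = f(t + h,   [y + h*k   for y, k in zip(Y, k3)])
        Y = [y + h/6*(a + 2*b + 2*c + d)
             for y, a, b, c, d in zip(Y, k1, k2, k3, k4)]
        t += h
    return Y

exact = expm_times(A, math.pi, Y0)
numeric = rk4(lambda t, Y: matvec(A, Y), 0.0, math.pi, Y0, 2000)
assert all(abs(a - b) < 1e-6 for a, b in zip(exact, numeric))
```

Both routes agree to high accuracy, which is exactly the relation that numeric::expMatrix exploits above.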
We compute the numerical solution y(1) of the differential equation y''(t) = y(t)^2 with initial conditions y(0) = 0, y'(0) = 1. The second order equation is converted to a first order system for the vector Y = (y, y'):
Y_1' = Y_2, Y_2' = Y_1^2.
f := proc(t, Y) begin [Y[2], Y[1]^2] end_proc: Y0 := [0, 1]: numeric::odesolve(f, 0..1, Y0)
delete f, Y0:
We demonstrate how numerical data can be obtained on a user-defined time mesh t[i]. The initial value problem is y'(t) = sin(t) - y(t), y(0) = 1; the sample points are t_0 = 0.0, t_1 = 0.1, …, t_100 = 10.0. First, we define the differential equation and the initial condition:
f := (t, Y) -> [sin(t) - Y[1]]: Y[0] := [1]:
We define the time mesh:
for i from 0 to 100 do t[i] := i/10 end_for:
The equation is integrated iteratively from t[i-1] to t[i] with a working precision of 4 significant decimal places:
DIGITS := 4: for i from 1 to 100 do Y[i] := numeric::odesolve(f, t[i-1]..t[i], Y[i-1]) end_for:
The following mesh data are produced:
[t[i], Y[i]] $ i = 0..100
These data can be displayed by a list plot:
plotpoints := [[t[i], op(Y[i])] $ i = 0..100]: plot(plot::PointList2d(plotpoints, PointColor = RGB::Black)):
The same plot can be obtained directly via plot::Ode2d
:
plot(plot::Ode2d([t[i] $ i = 0..100], f, Y[0], [(t, Y) -> [t, Y[1]], Style = Points, Color = RGB::Black]))
delete f, t, DIGITS, Y, plotpoints:
We compute the numerical solution y(1) of y'(t) = y(t), y(0) = 1 by the classical 4th order Runge-Kutta method RK4.
By internal local extrapolation, its effective order is 5:
f := (t, Y) -> Y: DIGITS := 13: numeric::odesolve(f, 0..1, [1], RK4)
Next, we use local extrapolation xRKF78 of the 8th order submethod of the Runge-Kutta-Fehlberg pair RKF78.
This scheme has effective order 9:
numeric::odesolve(f, 0..1, [1], xRKF78)
Both methods yield the same result because of the internal adaptive
error control. However, due to its higher order, the method xRKF78
is
faster.
delete f, DIGITS:
We consider the stiff ODE y'(t) = 10^4*(cos(t) - y(t)), y(0) = 1.
The default method DOPRI78 is explicit and not very efficient in solving very stiff problems:
f := (t, Y) -> [10^4*(cos(t) - Y[1])]: t0 := time(): numeric::odesolve(f, 0..1, [1]), (time() - t0)*msec
We use the implicit A-stable method GAUSS(6).
For this stiff problem, it is more efficient than the default method DOPRI78:
t0 := time(): numeric::odesolve(f, 0..1, [1], GAUSS(6)), (time() - t0)*msec
delete t0:
We consider the initial value problem y'(t) = -10^20*y(t)*(1 - cos(y(t))), y(0) = 1. We note that the numerical evaluation of the right hand side of the equation suffers from cancellation effects when y is small.
f := (t, Y) -> [-10^20*Y[1]*(1 - cos(Y[1]))]: Y0 := [1]:
We first attempt to compute y(1) with a working precision of 6 digits using the default setting RelativeError = 10^(-DIGITS) = 10^(-6):
DIGITS := 6: numeric::odesolve(f, 0..1, Y0)
Due to numerical roundoff in the internal steps, no digit of this result is correct. Next, we use a working precision of 20 significant decimal places to eliminate roundoff effects:
DIGITS := 20: numeric::odesolve(f, 0..1, Y0, RelativeError = 10^(-6))
Since relative local discretization errors are of the magnitude 10^(-6), not all displayed digits are trustworthy. We finally use a working precision of 20 digits and constrain the local relative discretization errors by the tolerance 10^(-10):
numeric::odesolve(f, 0..1, Y0, RelativeError = 10^(-10))
delete f, Y0, DIGITS:
We compute the numerical solution y(1) of y'(t) = y(t), y(0) = 1 with various methods and various constant step sizes. We compare the result with the exact solution y(t) = e^t.
f := (t, Y) -> Y: Y0 := [1]:
We first use the Euler method of order 1 with two different step sizes:
Y := numeric::odesolve(f, 0..1, Y0, EULER1, Stepsize = 0.1): Y, globalerror = float(exp(1)) - Y[1]
Decreasing the step size by a factor of 10 should reduce the global error by a factor of 10. Indeed:
Y := numeric::odesolve(f, 0..1, Y0, EULER1, Stepsize = 0.01): Y, globalerror = float(exp(1)) - Y[1]
Next, we use the classical Runge-Kutta method of order 4 with two different step sizes:
Y := numeric::odesolve(f, 0..1, Y0, RK4, Stepsize = 0.1): Y, globalerror = float(exp(1)) - Y[1]
Decreasing the step size by a factor of 10 in a 4th order scheme should reduce the global error by a factor of 10^{4}. Indeed:
Y := numeric::odesolve(f, 0..1, Y0, RK4, Stepsize = 0.01): Y, globalerror = float(exp(1)) - Y[1]
delete f, Y0, Y:
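The convergence orders observed above can be reproduced outside MuPAD. In this plain-Python sketch (the helper names are made up for the illustration), dividing the step size by 10 shrinks the global error by roughly a factor 10 for Euler (order 1) and roughly 10^4 for the classical RK4 (order 4):

```python
import math

# Sketch (plain Python, not MuPAD): for a method of order p, dividing the
# step size by 10 reduces the global error by about a factor 10^p.
def integrate(step, h):
    t, y = 0.0, 1.0
    while t < 1.0 - 1e-12:
        y = step(t, y, h)
        t += h
    return y

f = lambda t, y: y                       # y' = y, exact value y(1) = e

def euler_step(t, y, h):                 # order 1
    return y + h * f(t, y)

def rk4_step(t, y, h):                   # classical order 4
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h,   y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

err = lambda step, h: abs(math.e - integrate(step, h))

assert 8 < err(euler_step, 0.1) / err(euler_step, 0.01) < 12      # ~10^1
assert 5e3 < err(rk4_step, 0.1) / err(rk4_step, 0.01) < 2e4       # ~10^4
```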
We integrate y'(t) = y(t)^2, y(0) = 1 over the interval t ∈ [0, 0.99] with a working precision of 4 digits. The exact solution is y(t) = 1/(1 - t). Note the singularity at t = 1.
DIGITS := 4: f := (t, Y) -> [Y[1]^2]: Y0 := [1]:
The option Alldata
, equivalent to Alldata
= 1
, yields all mesh data generated during the internal
adaptive process:
numeric::odesolve(f, 0..0.99, Y0, Alldata)
With Alldata = 2
, only each second point
is returned:
numeric::odesolve(f, 0..0.99, Y0, Alldata = 2)
One can control the time mesh using the option Stepsize
= h
:
numeric::odesolve(f, 0..0.99, Y0, Stepsize=0.1, Alldata = 1)
However, with the option Stepsize = h
, no
automatic error control is provided by numeric::odesolve
.
Note the poor approximation y(t)
= 94.3 for t =
0.99 (the exact value is y(0.99)
= 100.0). The next computation with smaller step
size yields better results:
numeric::odesolve(f, 0..0.99, Y0, Stepsize = 0.01, Alldata = 10)
Example 5 demonstrates how accurate numerical data on a user-defined time mesh can be generated using the automatic error control of numeric::odesolve.
delete DIGITS, f, Y0:
The second order equation y''(t) = sin(y(t)) is written as the dynamical system y' = z, z' = sin(y) for the vector Y = [y, z]. A single symbolic step
of the Euler method is computed:
f := proc(t, Y) begin [Y[2], sin(Y[1])] end_proc: numeric::odesolve(f, t0..t0+h, [y0, z0], EULER1, Symbolic)
delete f:

f: A procedure representing the vector field

t_0: A numerical real value for the initial time

t: A numerical real value (the “time”)

Y_0: A list or one-dimensional array of numerical values representing the initial condition

method: One of the Runge-Kutta schemes listed below

Name of the Runge-Kutta scheme. See Example 6. For details on these schemes, see the Algorithms section. 

GAUSS(s), GAUSS = s

Name of a Gauss scheme, specified as GAUSS(s) or GAUSS = s with a positive integer s (the number of stages). The methods GAUSS(s) are implicit A-stable schemes. The time steps are rather costly to compute. The Gauss methods are useful for integrating stiff ODEs. For non-stiff ODEs, there is usually no need to change the default method DOPRI78.

Further, the Gauss methods are symplectic methods. When used with constant step size (Stepsize = h), they preserve quadratic invariants of the dynamical system.

See Example 7. 

RelativeError = rtol, AbsoluteError = atol

Options, specified as equations with positive real numerical values rtol and atol. The internal control mechanism estimates the local discretization error of a Runge-Kutta step and adjusts the step size adaptively to keep this error below the specified tolerances when accepting a solution vector Y. Roughly speaking, the relative error is controlled when the solution Y is sufficiently large. For very small solution values Y, absolute discretization errors are kept below the threshold atol.

The error control may be switched off by specifying a fixed Stepsize = h.

The default setting, rtol = 10^(-DIGITS), ensures that the local discretization errors are of the same order of magnitude as numerical roundoff. Usually there is no need to use these options to change this setting. However, occasionally the numerical evaluation of the Runge-Kutta steps may be ill-conditioned, or step sizes are so small that the time parameter cannot be incremented by the step size within the working precision. In such a case these options may be used to bound the local discretization errors while using a higher working precision given by DIGITS.

Only positive real numerical values are accepted.

Note: The global error of the result returned by numeric::odesolve is not controlled by these tolerances; only local errors are.

See Example 8. 

Option, specified as an equation with a positive real value h. Switches off the internal error control and uses a Runge-Kutta iteration with constant step size h.

By default, the step size is chosen adaptively.

A final step with a smaller step size is used to match the end t of the integration interval if t - t_0 is not an integer multiple of h.

Note: With this option, there is no automatic error control! Depending on the problem and on the order of the method, the result may be a poor numerical approximation of the exact solution. There is usually no need to invoke this option. However, occasionally the built-in adaptive error control mechanism may fail when integrating close to a singularity. In such a case this option may be used to customize a control mechanism for the global error by using different step sizes and investigating the convergence of the corresponding results. Cf. Example 9. 

Option, specified as an equation with a positive real value h_max. Restricts the adaptive step sizes to values not larger than h_max.

By default, the adaptively chosen step sizes are not restricted. If a larger step size is predicted by the internal error control, h_max is used instead. 

Option, specified as an equation with an integer n. Makes numeric::odesolve return the mesh data generated during the integration. With this option, numeric::odesolve returns a list [[t_0, Y_0], [t_1, Y_1], …] of the internally generated mesh points t_i and the corresponding solution values Y_i ≈ Y(t_i).

The integer n controls the number of returned mesh points: with n = 1, every internal mesh point is returned; with n = 2, only every second point is returned, etc. Alldata without an equation is equivalent to Alldata = 1.

The output list may be useful to inspect the internal numerical process. Also, further graphical processing of the mesh data may be useful. Cf. Example 10. 

Makes numeric::odesolve perform a single step of the Runge-Kutta scheme symbolically, without the usual conversion of the input data to floating-point values.

The call numeric::odesolve(f, t_0..t, Y_0, method, Symbolic) returns the result of one step of the chosen method with step size t - t_0.

This option may be useful if the specified numerical method applied to a given differential equation is to be investigated symbolically. Cf. Example 11. 
The solution vector Y(t) is returned as a list or as a one-dimensional array of floating-point values. The type of the result vector coincides with the type of the input vector Y_0.
With the option Alldata
, a list of mesh data
is returned.
J.C. Butcher: “The Numerical Analysis of Ordinary Differential Equations”, Wiley, Chichester (1987).
E. Hairer, S.P. Norsett and G. Wanner: “Solving Ordinary Differential Equations I”, Springer, Berlin (1993).
All methods presently implemented are adaptive versions of Runge-Kutta-type single-step schemes.
The methods RKF43, RKF34, RKF54a, RKF54b, RKF45a, RKF45b, RKF87, RKF78, DOPRI54, DOPRI45, DOPRI65, DOPRI56, DOPRI87, DOPRI78, CK54, and CK45 are embedded pairs of Runge-Kutta-Fehlberg, Dormand-Prince, and Cash-Karp type, respectively. Estimates of the local discretization error are obtained in the usual way by comparing the results of the two submethods of the pair. The names indicate the orders of the subprocesses. For instance, RKF34 and RKF43 denote the same embedded Runge-Kutta-Fehlberg pair with orders 3 and 4. In RKF34, the result of the fourth order submethod is accepted, whereas RKF43 advances using the result of the third order submethod. In both cases the discretization error of the lower order subprocess is estimated and controlled.
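The embedded-pair idea can be sketched in plain Python (not MuPAD; a simple Euler/Heun pair stands in for the RKF/DOPRI/CK pairs here): both submethods share the same stages, and their difference estimates the local error of the lower order result:

```python
import math

# Sketch (plain Python, not MuPAD) of an embedded pair: explicit Euler
# (order 1) and Heun (order 2) share the stage k1; the difference of
# their results estimates the local error of the lower order method.
def embedded_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_low  = y + h * k1                 # Euler, order 1
    y_high = y + h * (k1 + k2) / 2      # Heun, order 2
    return y_high, abs(y_high - y_low)  # advance + error estimate

f = lambda t, y: y                      # y' = y, exact flow y -> y * e^h
y_new, est = embedded_step(f, 0.0, 1.0, 0.1)
true_local_error_low = math.exp(0.1) - 1.1   # error of the Euler result
assert abs(est - true_local_error_low) < 0.2 * true_local_error_low
```

Advancing with `y_high` while controlling the estimated error of `y_low` corresponds to the RKF34-style convention; advancing with the lower order result corresponds to RKF43.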
For the single methods EULER1 (the first order Euler scheme), RK4 (the classical fourth order Runge-Kutta scheme), and BUTCHER6 (a Runge-Kutta scheme of order 6), the relative local error is controlled by comparing steps with different step sizes. The effective order of these methods is increased by one through local extrapolation.
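A plain-Python sketch (illustrative, not MuPAD code) of this step-doubling extrapolation for explicit Euler: comparing one step of size h with two steps of size h/2 both estimates the local error and, via Richardson extrapolation, raises the effective order by one.

```python
import math

# Sketch (plain Python, not MuPAD): the local error of Euler is ~ C*h^2,
# so 2*y_half - y_big cancels the leading error term (effective order 2).
def euler(f, t, y, h):
    return y + h * f(t, y)

f = lambda t, y: y                       # y' = y, exact flow y -> y * e^h
t, y, h = 0.0, 1.0, 0.1

y_big   = euler(f, t, y, h)                            # one step of size h
y_half  = euler(f, t + h/2, euler(f, t, y, h/2), h/2)  # two steps of h/2
y_extra = 2 * y_half - y_big                           # extrapolated value

exact = math.exp(h)
assert abs(y_extra - exact) < abs(y_big - exact) / 10
```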
Local extrapolation is also available for the submethods of the embedded pairs. For instance, the method xRKF78 uses extrapolation of the 8th order subprocess of RKF78, yielding a method of effective order 9. The 7th order subprocess is ignored. The cheap error estimate based on the embedded pair is not used, implying some loss of efficiency when comparing RKF78 (order 8) and xRKF78 (effective order 9).
The call numeric::butcher(method) returns the Butcher data of the methods used in numeric::odesolve. Here method is one of EULER1, RKF43, RK4, RKF34, RKF54a, RKF54b, DOPRI54, CK54, RKF45a, RKF45b, DOPRI45, CK45, DOPRI65, DOPRI56, BUTCHER6, RKF87, DOPRI87, RKF78, or DOPRI78.
Only local errors are controlled by the adaptive mechanism. No control of the global error is provided!
The run time of the numerical integration with a method of order p grows like 10^(DIGITS/p) when DIGITS is increased. Computations with high precision goals are very expensive! High order methods such as the default method DOPRI78 should be used.
Presently, only single-step methods of Runge-Kutta type are implemented. Stiff problems cannot be handled efficiently with explicit methods such as the default method DOPRI78. For stiff problems, one may use one of the implicit A-stable methods GAUSS(s). See Example 7.
For problems of the special type Y'(t) = f(t, Y) Y with a matrix-valued function f(t, Y), there is a specialized (“geometric”) integration routine numeric::odesolveGeometric. Generally, numeric::odesolve is faster than the geometric integrator. However, numeric::odesolveGeometric preserves certain invariants of the system more faithfully.