NUMERICAL METHODS
FOR SOLVING SYSTEMS OF DIFFERENTIAL EQUATIONS
This topic explores the methods for solving first-order differential equations (DE) with initial conditions:

y^{'} = f(x, y)    (1)

y(x_{0}) = y_{0}    (2)
The numerical methods for solving DE are applied to systems of first-order DE, which requires that equations of higher order be reduced to systems of first-order equations. For example, the second-order DE:

y^{''} = g(x, y, y^{'})

is presented as:

y^{'} = z
z^{'} = g(x, y, z)

where z is a new function of x equal to y^{'}. This is a system of two equations for the functions y(x) and z(x), whose solution gives y and its derivative at the same time.
The mechanism of numerical solution of equations (1) and (2) reduces to the following: equation (1) defines a family of curves in the Oxy plane and gives the value of the derivative at any point of a curve as a function of x and y. Equation (2) singles out the concrete curve of this family which passes through the given point. The solution of problem (1)–(2) is a function of x; to find its values, concrete values are assigned to x and the corresponding values of y are computed.
In the most general terms the numerical solution proceeds as follows:
1. The DE assigns the slope of the curve at an arbitrary point as a function of x and y. From the formulation of the problem only one point is known, namely the point with coordinates (x_{0}, y_{0}).
2. At this point we calculate the slope of the curve for x = x_{0} and y = y_{0}.
3. A step of advance h is determined (or assigned in advance), and the coordinates advance by it along the tangent line.
4. At the new point we determine the abscissa x_{1} = x_{0} + h and the ordinate y_{1} = y_{0} + hf(x_{0}, y_{0}).
5. The next steps repeat the procedure with step h.
The method thus described is also known as Euler's method. It is characterized by a certain instability of the solution, expressed in the possibility of obtaining a result with inadmissible error. This problem can be avoided by taking into account the curvature of the exact solution. The methods for doing so divide into one-step and multistep methods.
The one-step methods are based on using information about the curve at a single point and do not make iterations. The multistep methods can find the next point with less computation, but they require iterations to achieve better accuracy. Most of these methods are called predictor–corrector methods. It is characteristic of the multistep methods that some difficulties occur in organizing the iterations; their main advantage, however, is that estimates of the accumulated error are obtained along with the results.
It is convenient to start presenting the method from the assumption that we have already obtained an approximation of y(x) in the interval x_{0} ≤ x ≤ x_{m}. Taylor's formula for the function y(x) at the point x = x_{m} is:

y(x) = y_{m} + (x − x_{m})y_{m}^{(1)} + (x − x_{m})^{2}/2! · y_{m}^{(2)} + ... + (x − x_{m})^{p}/p! · y_{m}^{(p)} + (x − x_{m})^{p+1}/(p+1)! · y^{(p+1)}(ξ)    (3)

where y_{m}^{(j)} is the j-th derivative of y(x) at the point x = x_{m} and ξ is in the interval (x_{m}, x).
By substituting x in (3) with x_{m} + h we obtain an approximate value of y(x) at the point x_{m+1} = x_{m} + h:

y_{m+1} = y_{m} + hy_{m}^{(1)} + h^{2}/2! · y_{m}^{(2)} + ... + h^{p}/p! · y_{m}^{(p)}

It is necessary that the consecutive derivatives of the function y be computed. From equation (1) we obtain:

y_{m}^{(1)} = f(x_{m}, y_{m})
Differentiating with respect to x:

y^{(2)} = f_{x}^{'} + f_{y}^{'} · y^{'} = f_{x}^{'} + f_{y}^{'} · f

where the partial derivatives with respect to x and y are designated f^{'} with the relevant subscript. As an example we shall discuss the case j = 2. We obtain:

y_{m+1} = y_{m} + hf(x_{m}, y_{m}) + h^{2}/2! · [f_{x}^{'}(x_{m}, y_{m}) + f_{y}^{'}(x_{m}, y_{m}) · f(x_{m}, y_{m})] + h^{3}/3! · y^{(3)}(ξ)
Neglecting the last term and computing y_{m+1}:

y_{m+1} = y_{m} + hf(x_{m}, y_{m}) + h^{2}/2! · [f_{x}^{'}(x_{m}, y_{m}) + f_{y}^{'}(x_{m}, y_{m}) · f(x_{m}, y_{m})]

The truncation error is:

E = h^{3}/3! · y^{(3)}(ξ)

If the third derivative does not change much, we write:

E ≈ Kh^{3}

where K is a constant.
As noted above, the method is discussed allowing a certain arbitrary approximation. The values are found in the following way: m = 0 is assigned, by which y_{1} is computed; this determines the approximate value of the solution at the point x = x_{0} + h; then m = 1 is assigned and, with the obtained values y_{1} and x_{1} = x_{0} + h, y_{2} is computed; in the same way are determined y_{3}, y_{4}, ..., y_{m}, y_{m+1}, ... The truncation error accumulates at each step.
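The stepping procedure for the case j = 2 can be sketched in Python (illustrative; MATLAB is the project's tool), assuming the hypothetical equation y^{'} = f(x, y) = x + y, whose partial derivatives are known analytically: f_{x}^{'} = 1, f_{y}^{'} = 1.

```python
# Second-order Taylor scheme (the case j = 2 above) for the assumed
# illustrative equation y' = f(x, y) = x + y.
def f(x, y):
    return x + y

def d2y(x, y):
    return 1.0 + f(x, y)   # y'' = f'_x + f'_y * f = 1 + f

def taylor2_solve(x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        # y_{m+1} = y_m + h*y'_m + h^2/2 * y''_m; truncation error ~ K*h^3
        y = y + h * f(x, y) + h ** 2 / 2.0 * d2y(x, y)
        x = x + h
    return y
```

For y(0) = 1 the exact solution is y = 2e^{x} − x − 1, so integrating to x = 1 with h = 0.01 reproduces 2e − 2 with an error of order h^{2}.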
Runge–Kutta methods

From a practical viewpoint, the Runge–Kutta methods have the following distinguishing properties:
1. They are one-step: to determine y_{m+1} it is necessary to have information only about the previous point (x_{m}, y_{m}).
2. They agree with Taylor's formula up to terms of order h^{p}, where p is the order of the method.
3. Only computation of the function f(x, y) is needed, and not of its derivatives.
Euler's method is rarely used, but knowing it is a starting point toward further examination of the methods of this class. The graphical presentation is shown in figure 1.

figure 1 Geometric presentation of Euler's method
There is a known point with coordinates (x_{m}, y_{m}) lying on the wanted curve. A straight line L_{1} with slope y_{m}^{'} = f(x_{m}, y_{m}) is drawn through this point. We can assume that y_{m+1} equals the ordinate at the crossing point of L_{1} and the straight line x = x_{m+1} = x_{m} + h. The equation of the straight line L_{1} is:

y = y_{m} + (x − x_{m})y_{m}^{'}

and since

y_{m}^{'} = f(x_{m}, y_{m})

then

y_{m+1} = y_{m} + hf(x_{m}, y_{m})

The error at x = x_{m+1} is designated on the figure by e. The last formula assigns Euler's method. Characteristic of it are the big truncation error and instability in some cases, i.e. a small rounding error increases with the increase of x.
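The formula y_{m+1} = y_{m} + hf(x_{m}, y_{m}) can be sketched directly in Python (for illustration), tried here on the assumed test problem y^{'} = y, y(0) = 1, whose exact solution is e^{x}:

```python
# Minimal sketch of Euler's method: y_{m+1} = y_m + h * f(x_m, y_m).
def euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)   # advance along the tangent line L1
        x = x + h
    return y
```

With h = 0.001 and 1000 steps, euler(lambda x, y: y, 0, 1, 0.001, 1000) approximates e with an error of order h, illustrating the slow first-order convergence noted above.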
The modified Euler's method is based on finding the average value of the slopes of the tangent lines at the points (x_{m}, y_{m}) and (x_{m} + h, y_{m} + hy_{m}^{'}). Graphically the method is presented in figure 2.

figure 2 Geometric presentation of the modified Euler's method
The discussed method can be described by the following algorithm:
1. From the point with coordinates (x_{m}, y_{m}), the point (x_{m} + h, y_{m} + hy_{m}^{'}) lying on the straight line L_{1} is found.
2. The slope of the curve at this point is computed by finding the value of f(x_{m} + h, y_{m} + hy_{m}^{'}).
3. Through this computation the straight line L_{2} is found.
4. As the average value of the two slopes, the dashed straight line L is determined.
5. The crossing point of this straight line with the vertical straight line x = x_{m+1} = x_{m} + h is the wanted point (x_{m+1}, y_{m+1}).

The slope of the straight line L is:

1/2 · [f(x_{m}, y_{m}) + f(x_{m} + h, y_{m} + hy_{m}^{'})]

where:

y_{m}^{'} = f(x_{m}, y_{m})

The equation for L is assigned as:

y_{m+1} = y_{m} + h/2 · [f(x_{m}, y_{m}) + f(x_{m} + h, y_{m} + hf(x_{m}, y_{m}))]

The last equations express the modified Euler's method.
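The five steps above collapse into a short Python sketch (illustrative only), again tried on the assumed problem y^{'} = y, y(0) = 1:

```python
# Sketch of the modified Euler method: the slope of line L is the average
# of the slope at (x_m, y_m) and the slope at the Euler point on L1.
def modified_euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        s1 = f(x, y)                   # slope of L1 at (x_m, y_m)
        s2 = f(x + h, y + h * s1)      # slope of L2 at the Euler point
        y = y + h * (s1 + s2) / 2.0    # advance along the averaged line L
        x = x + h
    return y
```

With h = 0.01 the error at x = 1 is of order h^{2}, markedly better than plain Euler at the cost of one extra evaluation of f per step.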
Through generalization of the methods exposed here, Runge–Kutta methods of third and fourth order can be deduced. The classical fourth-order method is assigned through the equations:

y_{m+1} = y_{m} + h/6 · (k_{1} + 2k_{2} + 2k_{3} + k_{4})

where:

k_{1} = f(x_{m}, y_{m})
k_{2} = f(x_{m} + h/2, y_{m} + hk_{1}/2)
k_{3} = f(x_{m} + h/2, y_{m} + hk_{2}/2)
k_{4} = f(x_{m} + h, y_{m} + hk_{3})

The truncation error is:

E = Kh^{5}

where K is a constant.
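The classical fourth-order equations translate directly into a Python sketch (illustrative; the hedged test problem is again y^{'} = y, y(0) = 1):

```python
# Sketch of the classical fourth-order Runge-Kutta method, written out
# term by term from the equations above.
def rk4(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2.0, y + h * k1 / 2.0)
        k3 = f(x + h / 2.0, y + h * k2 / 2.0)
        k4 = f(x + h, y + h * k3)
        y = y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        x = x + h
    return y
```

Even with the coarse step h = 0.1, ten steps reproduce e to better than 1e-4, consistent with the Kh^{5} local truncation error.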
Adams–Bashforth's and Adams–Moulton's methods

Both methods are based on approximating f(x, y) with an interpolation polynomial through previously computed points. The fourth-order formulas are:

Adams–Bashforth:
y_{m+1} = y_{m} + h/24 · (55f_{m} − 59f_{m−1} + 37f_{m−2} − 9f_{m−3})

Adams–Moulton:
y_{m+1} = y_{m} + h/24 · (9f_{m+1} + 19f_{m} − 5f_{m−1} + f_{m−2})

where f_{m} = f(x_{m}, y_{m}). The Adams–Bashforth formula is of explicit type, while the Adams–Moulton formula is implicit and is usually applied as corrector. The methods need an accumulated set of solution values at past moments of time so that they can be started.
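A predictor–corrector pairing of the two fourth-order formulas can be sketched in Python (illustrative); as the text notes, the method is not self-starting, so the needed history is assumed to come from a few Runge–Kutta steps:

```python
# Sketch of a fourth-order Adams-Bashforth-Moulton predictor-corrector.
def abm4(f, x0, y0, h, n):
    xs, ys = [x0], [y0]
    # start-up: three RK4 steps supply the accumulated past values
    for _ in range(3):
        x, y = xs[-1], ys[-1]
        k1 = f(x, y)
        k2 = f(x + h / 2.0, y + h * k1 / 2.0)
        k3 = f(x + h / 2.0, y + h * k2 / 2.0)
        k4 = f(x + h, y + h * k3)
        ys.append(y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)
        xs.append(x + h)
    fs = [f(x, y) for x, y in zip(xs, ys)]
    for m in range(3, n):
        # Adams-Bashforth predictor (explicit)
        yp = ys[m] + h / 24.0 * (55 * fs[m] - 59 * fs[m - 1]
                                 + 37 * fs[m - 2] - 9 * fs[m - 3])
        # Adams-Moulton corrector (implicit, one iteration)
        fp = f(xs[m] + h, yp)
        yc = ys[m] + h / 24.0 * (9 * fp + 19 * fs[m]
                                 - 5 * fs[m - 1] + fs[m - 2])
        xs.append(xs[m] + h)
        ys.append(yc)
        fs.append(f(xs[m + 1], yc))
    return ys[n]
```

After the start-up, each step needs only two new evaluations of f, which is the saving over Runge–Kutta mentioned above.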
Gear's method
The Runge–Kutta methods are one-step and self-starting, which makes them convenient as predictors or for accumulating a set of initial solutions before some multistep method of integration is applied. With them it is hard to predict the size of the step, because of which it is hard to change the order of integration in the course of solving the differential equations. When solving systems of DE these methods significantly increase the computational load. With the Adams–Bashforth, Adams–Moulton and Gear methods it is comparatively easy to determine the size of the step; therefore there is the possibility of easily changing the order of integration in the course of the computing process. When solving systems of DE these methods are convenient to work with because they do not increase the computational load enormously.
In this project, "Mathematical modeling of electrotechnological processes", the systems of differential equations are presented in normalized form:

d[x(t)]/dt = [f([x(t)], t)]

where [x(t)] is a column matrix of the unknown functions and [f([x(t)], t)] is a column matrix containing the nonlinear functions of the unknowns. In MATLAB, functions for numerically solving systems of differential equations are implemented in the files ode15s.m, ode45.m, ode23.m, ode23s.m, ode113.m. All the results suggested in discussing the different models and regulation are obtained through their application.
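The role such a call plays can be sketched in plain Python (MATLAB's ode45 is adaptive; a fixed-step classical Runge–Kutta scheme is used here for simplicity, on an assumed linear system x_{1}^{'} = x_{2}, x_{2}^{'} = −x_{1}):

```python
# Fixed-step RK4 integration of a normalized system d[x]/dt = f([x], t),
# analogous in role (not in algorithm) to a MATLAB ode45 call.
def solve_system(f, x0, t0, t1, n):
    h = (t1 - t0) / n
    t, x = t0, list(x0)
    for _ in range(n):
        k1 = f(x, t)
        k2 = f([xi + h * ki / 2.0 for xi, ki in zip(x, k1)], t + h / 2.0)
        k3 = f([xi + h * ki / 2.0 for xi, ki in zip(x, k2)], t + h / 2.0)
        k4 = f([xi + h * ki for xi, ki in zip(x, k3)], t + h)
        x = [xi + h * (a + 2.0 * b + 2.0 * c + d) / 6.0
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
        t += h
    return x

# Illustrative linear system: x1' = x2, x2' = -x1 (exact: cos t, -sin t)
def f(x, t):
    return [x[1], -x[0]]
```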
The tasks which the project sets can be realized not only through the given functions, after forming the systems of differential equations, but also through the functions for finding eigenvectors of matrices, matrix exponentials, and the state-variable method.

Eigenvalues and eigenvectors are at the basis of a number of numerical algorithms for computing matrix exponentials, solving systems of DE, determining the solutions of systems of DE in the time or frequency domain, and others. This makes the task one of the most important in computational mathematics.
The eigenvalues λ_{i} and the eigenvectors [x_{i}] of the square matrix [A] are determined by the equation:

[A][x_{i}] = λ_{i}[x_{i}]

In MATLAB the basic functions are:

eig(A) – returns as a result a vector with the eigenvalues of the square matrix [A];

[V,D] = eig(A) – produces a diagonal matrix [D] containing the eigenvalues of the matrix [A] and a matrix [V] whose columns contain the components of the eigenvectors of the matrix [A].

The matrix exponential:

e^{[A]t} = [I] + [A]t + [A]^{2}t^{2}/2! + ...

(where [I] is the identity matrix) is a special type of matrix which contains in the place of each of its elements a group of exponential functions. The function for computing the matrix exponential is expm(A), which uses the Padé approximation.
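The defining relation [A][x_{i}] = λ_{i}[x_{i}], which eig(A) solves numerically, can be checked by hand in pure Python on an assumed small symmetric matrix:

```python
# Hand-worked check of [A][x] = lambda*[x] for the illustrative matrix
# A = [[2, 1], [1, 2]], whose eigenpairs are lambda_1 = 1 with x_1 = [1, -1]
# and lambda_2 = 3 with x_2 = [1, 1].
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2.0, 1.0], [1.0, 2.0]]
pairs = [(1.0, [1.0, -1.0]), (3.0, [1.0, 1.0])]
```

Multiplying each eigenvector by [A] reproduces the same vector scaled by its eigenvalue, which is exactly what the columns of [V] and the diagonal of [D] encode in MATLAB.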
The formed equations are solved by Cauchy's formula:

[x(t)] = e^{[A]t}[x(0)] + ∫_{0}^{t} e^{[A](t−τ)}[B][f(τ)] dτ

[y(t)] = [C][x(t)] + [D][f(t)]

where:
[x(t)] – the state variables;
[f(t)] – the input influences;
[y(t)] – includes all external quantities which are not state variables;
[A], [B], [C], [D] – matrices containing the parameters of the circuit elements.
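For the zero-input case, Cauchy's formula reduces to [x(t)] = e^{[A]t}[x(0)], which can be sketched in pure Python by summing the power series for e^{[A]t} (MATLAB's expm uses the more robust Padé approximation; the series suffices for illustration on the assumed matrix [A] = [[0, 1], [−1, 0]]):

```python
# Matrix exponential by its power series: e^{At} = I + At + (At)^2/2! + ...
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expm_series(A, t, terms=30):
    n = len(A)
    # j = 0 term of the series: the identity matrix
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for j in range(1, terms):
        # term_j = term_{j-1} * (A t / j), i.e. (A t)^j / j!
        term = mat_mul(term, [[a * t / j for a in row] for row in A])
        result = [[r + s for r, s in zip(rr, tr)]
                  for rr, tr in zip(result, term)]
    return result
```

For this [A], e^{[A]t} is the rotation matrix [[cos t, sin t], [−sin t, cos t]], so multiplying it by [x(0)] solves the zero-input state equation exactly.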