NUMERICAL METHODS FOR SOLVING SYSTEMS OF DIFFERENTIAL EQUATIONS

 

 

This topic explores methods for the numerical solution of first-order differential equations (DE) with initial conditions:

(1)    y' = f(x, y)

(2)    y(x0) = y0

The methods for the numerical solution of DE are applied to systems of first-order DE, which requires that equations of higher order be reduced to systems of first-order equations. For example, the second-order DE:

y'' = f(x, y, y')

is presented as:

y' = z
z' = f(x, y, z)

where z is a new function of x, defined by the equation y' = z. This is a system of two equations for the functions y(x) and z(x), whose solution gives y and its derivative at the same time.
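This reduction translates directly into a few lines of Python. The concrete right-hand side below (y'' = -y) is only an assumed example chosen for illustration; any f(x, y, y') works the same way:

```python
# Reduce y'' = f(x, y, y') to a first-order system by introducing z = y'.

def f(x, y, z):
    # f(x, y, y') of the original second-order equation (assumed example: y'' = -y)
    return -y

def system(x, state):
    # state = (y, z); returns (y', z') = (z, f(x, y, z))
    y, z = state
    return [z, f(x, y, z)]
```

Any numerical method for first-order systems can now be applied componentwise to `system`.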

The mechanism of the numerical solution of equations (1) and (2) reduces to the following: equation (1) does not define a single curve on the Oxy plane; it gives the value of the derivative at any point of a curve as a function of x and y. Equation (2) selects from the family of curves satisfying (1) the concrete curve which passes through the given point. The solution of problem (1)–(2) is a function of x; to find its values, concrete values are substituted for x and the corresponding values of y are computed.

In the most general terms, the numerical solution proceeds as follows:

-       The DE gives the slope of the curve at an arbitrary point as a function of x and y. From the formulation of the problem only one point is known, the point with coordinates (x0, y0).

-       At this point the slope of the curve is computed from x = x0 and y = y0.

-       A step of advance h is determined (or assigned in advance), and the coordinates advance by h along the tangent line.

-       At the new point the abscissa is x1 = x0 + h and the ordinate is y1.

-       The next points are determined in the same way, step by step with step h.

 

The method thus described is also known as Euler's method. It is characterized by a certain instability of the solution, expressed in the possibility of obtaining a result with an inadmissibly large error. This problem can be avoided by taking into account the curvature of the exact solution. The methods for doing so are divided into one-step and multi-step.

The one-step methods use information about the curve at one point only and do not perform iterations. Taylor's method is one of these, and it is rather difficult to use. For the purposes of a practical realization of the numerical methods for solving DE, the Runge-Kutta methods are discussed in more detail here. This group of methods is direct – they do not use iterations, which makes them appear more economical, but in reality they require numerous evaluations of the function. A basic disadvantage is that the resulting error is hard to estimate.

The multi-step methods can find the next point with less computation, but they require iterations to achieve better accuracy. Most of these methods are called predictor–corrector methods. It is characteristic of the multi-step methods that some difficulties occur in organizing the iterations; their main advantage, however, is that estimates of the error are obtained along with the results.

 

Taylor’s method

 

Taylor's method theoretically gives the solution of an arbitrary DE, but it presents little interest in computing practice. Its importance lies in the fact that it serves as a basis for comparison and evaluation of the various practically convenient methods.

It is convenient to start presenting the method from the assumption that we have already obtained an approximation of y(x) in the interval x0 ≤ x ≤ xm. Taylor's formula for the function y(x) at the point x = xm is:

(3)    y(x) = ym + ym'·(x − xm) + ym''·(x − xm)^2/2! + … + ym^(j)·(x − xm)^j/j! + y^(j+1)(ξ)·(x − xm)^(j+1)/(j+1)!

where ym^(j) is the j-th derivative of y(x) at the point x = xm and ξ is in the interval (xm, x).

 


 

By substituting x in (3) with xm + h we obtain an approximate value of y(x) at the point xm+1 = xm + h:

ym+1 ≈ ym + ym'·h + ym''·h^2/2! + … + ym^(j)·h^j/j!
It is necessary that the consecutive derivatives of the function y be computed. From equation (1) we obtain:

ym' = f(xm, ym)

Differentiating (1) with respect to x:

y'' = fx + fy·y' = fx + fy·f

where the partial derivatives with respect to x and y are denoted by f with the corresponding subscript:

fx = ∂f/∂x,   fy = ∂f/∂y

As an example we shall discuss the case j = 2. We obtain:

y(xm + h) = ym + h·f(xm, ym) + (h^2/2)·[fx(xm, ym) + fy(xm, ym)·f(xm, ym)] + (h^3/3!)·y'''(ξ)

Ignoring the last term, we compute ym+1:

ym+1 = ym + h·f(xm, ym) + (h^2/2)·[fx(xm, ym) + fy(xm, ym)·f(xm, ym)]

The truncation error is:

e = (h^3/3!)·y'''(ξ),   xm < ξ < xm + h

If the third derivative does not change much, we can write:

e ≈ K·h^3

where K is a constant.

As noted above, the method admits a certain truncation error at each step. The values are found in the following way: m = 0 is set and y1, the approximate value of the solution at the point x = x0 + h, is computed; then m = 1 is set and, with the obtained value y1 and x1 = x0 + h, y2 is computed; in the same way y3, y4, …, ym, ym+1, … are determined. A truncation error accumulates at each step.
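The stepping procedure for j = 2 can be sketched as follows; the test problem y' = y (so fx = 0, fy = 1) is an assumption chosen so that the result can be checked against the exact solution e^x:

```python
def taylor2_step(f, fx, fy, x, y, h):
    # one step of Taylor's method of order 2:
    # y_{m+1} = y_m + h*f + (h^2/2)*(fx + fy*f)
    fv = f(x, y)
    return y + h * fv + 0.5 * h * h * (fx(x, y) + fy(x, y) * fv)

# assumed test problem: y' = y, y(0) = 1, exact solution e^x
f  = lambda x, y: y
fx = lambda x, y: 0.0
fy = lambda x, y: 1.0

x, y, h = 0.0, 1.0, 0.1
for _ in range(10):          # advance from x = 0 to x = 1
    y = taylor2_step(f, fx, fy, x, y, h)
    x += h
```

After ten steps y approximates e = 2.71828… with an error of order h^2 per step, as the truncation-error formula predicts.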

From a practical viewpoint, Taylor's method is connected with some difficulties in finding fx and fy. Further difficulties occur when computing ym''', which equals:

ym''' = fxx + 2·fxy·f + fyy·f^2 + fy·(fx + fy·f)

Runge-Kutta’s methods

 

 

Runge-Kutta’s methods have the following distinguishing properties:

1. They are one-step: to determine ym+1, information only about the previous point (xm, ym) is necessary.

2. They agree with the Taylor expansion up to the term in h^p, where the power p differs between the methods and is called the order of the method.

3. Only evaluation of the function f(x, y) is needed, not of its derivatives.

Euler's method is rarely used, but knowing it is a starting point for examining the other methods of this class. The graphic presentation is shown in Figure 1.

 


 

Figure 1. Geometric presentation of Euler's method

 

There is a known point with coordinates (xm, ym) lying on the sought curve. Through this point a straight line L1 is drawn with slope:

ym' = f(xm, ym)

We can assume that ym+1 equals the ordinate of the crossing point of L1 and the straight line x = xm+1 = xm + h. The equation of the straight line L1 is:

y = ym + ym'·(x − xm)

and since

ym' = f(xm, ym)   and   xm+1 − xm = h

then

ym+1 = ym + h·f(xm, ym)

The error at x = xm+1 is denoted in the figure by e.

The last formula defines Euler's method. Characteristic of it are the large truncation error and, in some cases, instability, i.e. a small rounding error grows as x increases.
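The formula ym+1 = ym + h·f(xm, ym) can be sketched as follows; the test problem y' = y, y(0) = 1 is an assumed example whose exact value at x = 1 is e:

```python
def euler(f, x0, y0, h, n):
    # Euler's method: y_{m+1} = y_m + h*f(x_m, y_m)
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# assumed test problem: y' = y, y(0) = 1
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
# ys[-1] is (1.1)**10 = 2.5937..., versus the exact e = 2.7183...,
# illustrating the large truncation error of the method
```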

The modified Euler's method is based on finding the average of the slopes of the tangent lines at the points (xm, ym) and (xm + h, ym + h·ym'). Graphically the method is presented in Figure 2.

 


Figure 2. Geometric presentation of the modified Euler's method

 

The discussed method can be described by the following algorithm:

1. Using the slope ym' = f(xm, ym), the point with coordinates (xm + h, ym + h·ym') lying on the straight line L1 is found.

2. The slope of the curve through this point is computed by finding the value of f there.

3. From this computation the straight line L2 is obtained.

4. The dashed straight line L is determined, with slope equal to the average of the two slopes.

5. The crossing point of this straight line with the vertical straight line x = xm+1 = xm + h is the sought point (xm+1, ym+1).

 

The slope of the straight line L is:

Φ = (1/2)·[f(xm, ym) + f(xm + h, ym + h·ym')]

where:

ym' = f(xm, ym)

The equation for L is:

ym+1 = ym + (h/2)·[f(xm, ym) + f(xm + h, ym + h·f(xm, ym))]

The last equations express the modified Euler’s method.
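The modified Euler step can be sketched in code, again with the assumed test problem y' = y so that the result is checkable:

```python
def modified_euler_step(f, x, y, h):
    # average of the slopes at (x_m, y_m) and (x_m + h, y_m + h*y_m')
    s1 = f(x, y)                 # slope of L1
    s2 = f(x + h, y + h * s1)    # slope of L2
    return y + 0.5 * h * (s1 + s2)

# assumed test problem: y' = y, y(0) = 1
g = lambda x, y: y
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = modified_euler_step(g, x, y, h)
    x += h
# each step multiplies y by (1 + h + h^2/2), so y = 1.105**10 = 2.7141...,
# noticeably closer to e than plain Euler's 2.5937...
```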

Through generalization of the methods presented here, Runge-Kutta methods of third and fourth order can be deduced. The classical fourth-order method is given by the equations:

ym+1 = ym + (h/6)·(k1 + 2·k2 + 2·k3 + k4)

where:

k1 = f(xm, ym)
k2 = f(xm + h/2, ym + h·k1/2)
k3 = f(xm + h/2, ym + h·k2/2)
k4 = f(xm + h, ym + h·k3)

The truncation error is:

e = K·h^5

where K is a constant.
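The four stage equations translate directly into code; the test problem y' = y is again an assumed example:

```python
def rk4_step(f, x, y, h):
    # classical fourth-order Runge-Kutta step
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# assumed test problem: y' = y, y(0) = 1; at x = 1 the exact value is e
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda x, y: y, x, y, h)
    x += h
```

With the same step h = 0.1 the error at x = 1 drops to the order of 10^-6, consistent with the h^5 local truncation error.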

 

Adams – Bashforth's and Adams – Moulton's methods

 

Both methods are based on Newton's backward interpolation formula. The methods of first, second and third order look like this (with fm = f(xm, ym)):

Adams – Bashforth:

ym+1 = ym + h·fm
ym+1 = ym + (h/2)·(3·fm − fm−1)
ym+1 = ym + (h/12)·(23·fm − 16·fm−1 + 5·fm−2)

Adams – Moulton:

ym+1 = ym + h·fm+1
ym+1 = ym + (h/2)·(fm+1 + fm)
ym+1 = ym + (h/12)·(5·fm+1 + 8·fm − fm−1)
The Adams–Bashforth methods are of explicit type and the Adams–Moulton methods of implicit type; both need accumulated solution values from past moments of time in order to be started.
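A sketch of a third-order predictor-corrector run built from the formulas above; the test problem y' = y and the starting values (taken exact here, in practice supplied by a one-step method) are assumptions:

```python
import math

def ab3_predict(y, fm, fm1, fm2, h):
    # Adams-Bashforth 3: y_{m+1} = y_m + h/12*(23 f_m - 16 f_{m-1} + 5 f_{m-2})
    return y + h / 12.0 * (23 * fm - 16 * fm1 + 5 * fm2)

def am3_correct(y, fp, fm, fm1, h):
    # Adams-Moulton 3: y_{m+1} = y_m + h/12*(5 f_{m+1} + 8 f_m - f_{m-1})
    return y + h / 12.0 * (5 * fp + 8 * fm - fm1)

f, h = (lambda x, y: y), 0.1
xs = [0.0, 0.1, 0.2]
ys = [math.exp(v) for v in xs]          # starting values, taken exact here

for m in range(2, 10):                  # advance to x = 1.0
    fm, fm1, fm2 = f(xs[m], ys[m]), f(xs[m-1], ys[m-1]), f(xs[m-2], ys[m-2])
    yp = ab3_predict(ys[m], fm, fm1, fm2, h)               # predictor
    yc = am3_correct(ys[m], f(xs[m] + h, yp), fm, fm1, h)  # one correction
    xs.append(xs[m] + h)
    ys.append(yc)
```

Each step needs only one new evaluation of f for the prediction and one for the correction, which is the economy the text attributes to multi-step methods.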

 

Gear's method

 


 

The Runge-Kutta methods are one-step and self-starting, which makes them convenient as predictors or for accumulating a set of initial solutions before some multi-step integration method is applied. With them it is hard to predict the size of the step, and therefore hard to change the order of integration in the course of solving the differential equations. When solving systems of DE these methods significantly increase the computational load. With the Adams–Bashforth, Adams–Moulton and Gear methods it is comparatively easy to determine the size of the step; therefore the order of integration can easily be changed in the course of the computing process. When solving systems of DE these methods are convenient to work with because they do not increase the computational load enormously.

In this project, "the mathematical modeling of electro-technological processes", the systems of differential equations are presented in a normalized form:

[x'(t)] = [f([x(t)], t)]

where [x(t)] is a column matrix of the unknown functions and [f([x(t)], t)] is a column matrix containing the non-linear functions of the unknowns. MATLAB implements functions for the numerical solution of systems of differential equations in the files ode15s.m, ode45.m, ode23.m, ode23s.m, ode113.m. All the results presented in the discussion of the different models and of the regulation are obtained through their application.
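A close Python analogue of these MATLAB solvers is scipy.integrate.solve_ivp (MATLAB's ode45 corresponds to method='RK45', ode23 to 'RK23'). The harmonic-oscillator system below is an assumed example written in the normalized form x1' = x2, x2' = −x1:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # normalized form: the column of derivatives as a function of x and t
    return [x[1], -x[0]]

# integrate over one full period with an adaptive Runge-Kutta pair
sol = solve_ivp(rhs, (0.0, 2 * np.pi), [1.0, 0.0], method='RK45',
                rtol=1e-8, atol=1e-10)
# after one period the state returns to the initial vector [1, 0]
```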

The tasks which the project sets can be realized not only through the given functions, after forming the systems of differential equations, but also through the functions for finding eigenvectors of matrices, matrix exponentials, and the state-variable method.

The eigenvalues and the eigenvectors are at the basis of a number of numerical algorithms for computing matrix exponentials, for solving systems of DE, and for determining the solutions of systems of DE in the time or frequency domain, among others. This makes the task one of the most important in computational mathematics. The eigenvalues λi and the eigenvectors [xi] of the square matrix [A] are determined by the following equation:

[A]·[xi] = λi·[xi]

In MATLAB the basic functions are:

eig(A) – returns as a result a vector with the eigenvalues of the square matrix [A]

[V,D] = eig(A) – produces a diagonal matrix [D] containing the eigenvalues of the matrix [A] and a matrix [V] whose columns contain the components of the eigenvectors of [A]. The matrix exponential:

e^([A]·t)

is a special type of matrix which contains, in place of each of its elements, a group of exponential functions. The function for computing the matrix exponential is expm(A), which uses the Padé approximation.
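For readers working outside MATLAB, the Python equivalents are numpy.linalg.eig and scipy.linalg.expm (the latter also Padé-based); the matrix [A] below is an assumed example with eigenvalues −1 and −2:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # assumed example matrix

w, V = np.linalg.eig(A)           # w ~ eig(A); columns of V are the eigenvectors
Phi = expm(A)                     # the matrix exponential e^[A] (at t = 1)

# consistency check: e^[A] = [V] e^[D] [V]^-1 for a diagonalizable [A]
Phi_from_eig = V @ np.diag(np.exp(w)) @ np.linalg.inv(V)
```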

The formed equations are solved by Cauchy's formula:

[x(t)] = e^([A]·t)·[x(0)] + ∫(0..t) e^([A]·(t − τ))·[B]·[f(τ)] dτ

[y(t)] = [C]·[x(t)] + [D]·[f(t)]

where:

[x(t)] – the state variables;

[f(t)] – the input influences;

[y(t)] – all external quantities which are not state variables;

[A], [B], [C], [D] – matrices containing the parameters of the circuit elements.
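For a constant input [f(t)] = f0, the integral in Cauchy's formula has the closed form [A]^-1·(e^([A]·t) − I)·[B]·f0 (valid when [A] is invertible), which gives a compact sketch; the matrices below are assumed example values, not taken from the project:

```python
import numpy as np
from scipy.linalg import expm

# assumed example state-space matrices
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def step_response(t, x0, f0):
    # Cauchy's formula for a constant input f(t) = f0:
    # x(t) = e^(A t) x(0) + A^-1 (e^(A t) - I) B f0
    eAt = expm(A * t)
    x = eAt @ x0 + np.linalg.inv(A) @ (eAt - np.eye(A.shape[0])) @ B @ f0
    y = C @ x + D @ f0
    return x, y
```

Since the assumed [A] has eigenvalues −1 and −2, the state settles to the steady value −[A]^-1·[B]·f0 for large t.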