Neural Comput & Applic (2017) 28:171–178
DOI 10.1007/s00521-015-2046-1
ORIGINAL ARTICLE
An efficient algorithm based on artificial neural networks
and particle swarm optimization for solution of nonlinear
Troesch’s problem
Neha Yadav1 • Anupam Yadav2 • Manoj Kumar3 • Joong Hoon Kim1
Received: 6 May 2015 / Accepted: 17 August 2015 / Published online: 1 September 2015
© The Natural Computing Applications Forum 2015
Abstract In this article, a simple and efficient approach
for the approximate solution of a nonlinear differential
equation known as Troesch’s problem is proposed. In this
article, a mathematical model of the Troesch’s problem is
described which arises in confinement of plasma column
by radiation pressure. An artificial neural network (ANN)
technique with gradient descent and particle swarm optimization is used to obtain the numerical solution of the
Troesch’s problem. This method overcomes the difficulty
arising in the solution of Troesch’s problem in the literature
for eigenvalues of higher magnitude. The results obtained
by the ANN method have been compared with the analytical solutions as well as with some other existing
numerical techniques. It is observed that our results are
more accurate, and the solution is provided on a continuous finite interval, unlike the other numerical techniques.
The main advantage of the proposed approach is that once
the network is trained, it allows evaluating the solution at
any required number of points for higher magnitudes of the eigenvalue with less computing time and memory.

Keywords Artificial neural network technique · Backpropagation algorithm · Plasma column · Particle swarm optimization

Joong Hoon Kim (corresponding author)
[email protected]
Neha Yadav
[email protected]
Anupam Yadav
[email protected]
Manoj Kumar
[email protected]

1 School of Civil, Environmental and Architectural Engineering, Korea University, 136-713 Seoul, South Korea
2 Department of Sciences and Humanities, National Institute of Technology, Srinagar, Garhwal 246174, Uttarakhand, India
3 Department of Mathematics, Motilal Nehru National Institute of Technology, Allahabad 211004, U.P., India
1 Introduction
Differential equations are often used to model problems in
science and engineering that involve the change in one
variable with respect to another. Obtaining the solution of a linear differential equation is not a difficult task nowadays, but the solution of nonlinear differential equations remains an open and challenging problem. Nonlinear phenomena appear in different branches of engineering such as control systems, fluid dynamics, aerodynamics and electronic engineering. These nonlinear behaviors are represented by nonlinear differential equations, only a few of which can be solved analytically, so the need for approximate solutions of nonlinear differential equations arises. Various numerical algorithms, e.g., the Runge–Kutta, Adams–Bashforth, finite difference and differential transform methods, exist for calculating numerical solutions with small round-off error and good stability. However, all these methods either require discretization of the domain into a set of points or require converting some nonlinear aspect of the problem into a linear one. To get a better approximate solution for a problem, one has to construct an appropriate mesh, and constructing an appropriate mesh is sometimes a tedious task, especially for complex boundaries.
In this paper, we consider a nonlinear boundary value
problem which arises in the investigation of the confinement of a plasma column by radiation pressure known as
Troesch’s problem, which was first described and solved
by Weibel [1]. Later, an analytical solution of this problem was obtained in terms of the Jacobi elliptic function by Roberts and Shipmann in [2]. It has become a widely used test problem,
and a number of algorithms have been used to obtain its
approximate numerical solution. Scott in [3] used an invariant imbedding method; a modified decomposition technique was used by Khuri [4] to obtain a numerical solution of Troesch's
problem. Feng et al. [5] presented a modified homotopy
perturbation technique, and Chang et al. [6] proposed a
new technique based on one-dimensional differential
transform of nonlinear functions for Troesch’s problem. A
new method based on variable transformation is presented
by Chang [7] to solve nonlinear Troesch’s problem. Other
numerical techniques such as shooting method by Chang
[8], sinc-Galerkin method by Zarebnia and Sajjadian [9],
homotopy perturbation method by Vazquez-Leal et al. [10]
and B-spline approach by Khuri and Sayfy [11] are also
presented in the literature for the approximate solution of
Troesch’s problem. It has been presented in the literature
that some existing numerical methods such as Adomian
decomposition method by Khuri [4], variational iteration
method by Chang [7] and modified homotopy perturbation
method by Chang and Chang [6] fail to solve the Troesch's problem for λ > 1. Although some methods, such as the differential transform method, are able to solve the problem for eigenvalues of higher magnitude, more terms are required for series convergence to provide a more accurate solution, which increases the computational work significantly.
Due to the complexity of generating meshes, mesh-free methods have been developed, and considerable effort has been devoted to their development in recent years. The main aim of these methods is to remove the difficulty arising in grid discretization. As noted by Shirvany et al. [12], constructing a neural network that approximates a set of given differential equations has many advantages over the other existing methods. First of all, the solution obtained via an ANN is continuous over the whole domain of consideration, while the other methods provide the solution only at discrete points, and the solution between these points must be obtained by some interpolation technique. If we increase the number of sampling points or the dimension of the problem, the computational complexity of the ANN remains acceptable; in standard numerical methods, on the other hand, the computational complexity increases quickly as the number of sampling points grows. Also, the solution search proceeds without coordinate transformation, so the solution values can be computed rapidly.
The aim of this article is to present a more generalized approach for the solution of the nonlinear Troesch's problem for eigenvalues of higher magnitude. To fulfill this aim, we apply the ANN method for its solution. A gradient descent optimization technique is used to optimize the network
parameters in the ANN to solve Troesch’s problem when
the eigenvalues are relatively small, i.e., 0 < λ ≤ 1; however, for large eigenvalues, the gradient descent optimization technique in the ANN fails to provide the solution, because the algorithm is derivative based and the stiffness ratio near x = 1 increases as λ increases. This situation demands a better optimization algorithm. Recently, Kennedy and Mendes [13] proposed a new technique for nonlinear optimization, called particle swarm optimization (PSO), which is designed on a simple concept of swarm intelligence and requires less computational time in comparison with other classical optimization techniques. The authors showed that PSO can train feed-forward neural networks with a performance similar to the backpropagation method. Also, several researchers have adopted PSO for feed-forward neural network learning [13–21].
Since PSO is a non-gradient-based technique, we use it to optimize the network parameters in the ANN for the solution of Troesch's problem for eigenvalues of high magnitude. The main advantage of the ANN
method based on PSO learning algorithm is that it provides
the continuous solution for the Troesch’s problem over the
entire domain for eigenvalues of higher magnitude which
overcomes the difficulties arising in the other numerical
techniques in the literature for higher eigenvalues. Performance of the ANN method is tested by calculating the
numerical solutions of the problem for different cases, and
comparison has been presented with analytical and other
numerical results that are available in the literature.
The rest of this article is organized as follows: Mathematical model of Troesch’s problem is presented in Sect. 2.
Approximation technique based on ANN for Troesch’s
problem is presented in Sect. 3. In Sect. 4, drawbacks of some conventional methods as well as of the gradient descent optimization technique for optimizing ANN parameters are analyzed, and the particle swarm optimization technique is also presented. Implementation of the ANN technique on Troesch's problem is presented in Sect. 5 for some cases of the problem, and the simulated results are compared with the analytical and numerical solutions in that section. Finally, in Sect. 6, a conclusion is given to summarize the results.
2 Mathematical model of Troesch’s problem
Troesch's problem was discussed by Weibel [1] and arises in the confinement of a plasma column by radiation pressure. Later, Troesch [22] analyzed the problem and solved it numerically. The mathematical model of Troesch's problem can be given as a two-point boundary value problem defined as:
$$y'' = \lambda \sinh(\lambda y), \qquad 0 \le x \le 1 \qquad (1)$$

together with the boundary conditions

$$y(0) = 0, \qquad y(1) = 1 \qquad (2)$$
The closed-form solution of this problem was given by Roberts and Shipmann [2] in terms of the Jacobian elliptic function as:

$$y(x) = \frac{2}{\lambda}\,\sinh^{-1}\!\left[\frac{y'(0)}{2}\,\mathrm{sc}\!\left(\lambda x \,\Big|\, 1 - \frac{y'(0)^2}{4}\right)\right] \qquad (3)$$

where $y'(0) = 2\sqrt{1-m}$, with m being the solution of the following transcendental equation:

$$\frac{\sinh(\lambda/2)}{\sqrt{1-m}} = \mathrm{sc}(\lambda \,|\, m) \qquad (4)$$

and the Jacobian elliptic function is $\mathrm{sc}(\lambda \,|\, m) = \sin\phi/\cos\phi$, where $\phi$, m and $\lambda$ are connected by the integral $\lambda = \int_0^{\phi} \frac{d\theta}{\sqrt{1 - m\sin^2\theta}}$.
It has been shown by Roberts et al. and Lagaris et al. [2, 23] that y(x) has a singularity located at:

$$x_s = \frac{1}{\lambda}\,\ln\!\left(\frac{8}{2\sqrt{1-m}}\right) = \frac{1}{\lambda}\,\ln\!\left(\frac{8}{y'(0)}\right) \qquad (5)$$

From Eq. (5), it can be seen that the singularity lies within the integration range if $y'(0) > 8e^{-\lambda}$. Hence, Troesch's problem becomes very difficult to solve by some existing numerical methods, and this difficulty increases as the value of λ increases.
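For concreteness, the closed-form solution (3)–(4) can be evaluated numerically. The following Python sketch (our illustration, not the paper's code) solves the transcendental equation (4) for m with a bracketing root finder and then evaluates Eq. (3) via SciPy's Jacobi elliptic functions; the simple bracket below is adequate for the moderate λ of Tables 2 and 3, while for large λ the root m approaches 1 and a tighter bracket is needed.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import ellipj


def sc(u, m):
    """Jacobi elliptic function sc(u | m) = sn/cn."""
    sn, cn, _, _ = ellipj(u, m)
    return sn / cn


def troesch_closed_form(lam, x):
    # Solve the transcendental equation (4) for m by bracketing in (0, 1);
    # works for moderate lam (e.g., lam <= 1), where sc(lam, m) has no pole.
    g = lambda m: np.sinh(lam / 2.0) / np.sqrt(1.0 - m) - sc(lam, m)
    m = brentq(g, 1e-12, 1.0 - 1e-12)
    yp0 = 2.0 * np.sqrt(1.0 - m)          # y'(0) = 2*sqrt(1 - m)
    # Eq. (3); note 1 - y'(0)^2 / 4 = m, so the modulus of sc is simply m.
    return (2.0 / lam) * np.arcsinh(0.5 * yp0 * sc(lam * x, m))


x = np.linspace(0.0, 1.0, 11)
print(troesch_closed_form(0.5, x))        # compare the exact column of Table 2
```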
3 ANN approximation of Troesch’s problem
ANN has function approximation capabilities and can be
used to solve initial or boundary value problems approximately by constructing a trial solution that exactly satisfies the boundary conditions. The constructed trial solution is an approximation to the solution of the boundary value problem for some optimized values of the parameters, e.g., the weights and biases. Thus, the problem of finding the approximate solution over some collocation points turns into a minimization problem over the network parameters [24–28]. Hence, we construct a trial solution for Troesch's problem given in Eqs. (1) and (2) using the ANN as:

$$y_T(x, \vec{w}) = x + x(x-1)\,N(x, \vec{w}) \qquad (6)$$

where $\vec{w}$ represents the adjustable neural network parameters, i.e., the weights and biases. The trial solution $y_T(x, \vec{w})$ given in Eq. (6) represents an approximate solution of Troesch's problem for some optimized values of the unknown parameters. Thus, the problem of finding the approximate solution of Eq. (1) over some collocation points in the domain [0, 1] is equivalent to calculating the function $y_T(x, \vec{w})$ that satisfies the constrained minimization problem. Hence, the sum of squares due to error can be written in the following form:

$$E(\vec{w}) = \sum_i \left\{ y_T''(x_i) - f\big(x_i, y_T(x_i)\big) \right\}^2 \qquad (7)$$

where

$$y_T'(x, \vec{w}) = 1 + (2x-1)\,N(x, \vec{w}) + (x^2 - x)\,N'(x, \vec{w}) \qquad (8)$$

$$y_T''(x, \vec{w}) = 2N(x, \vec{w}) + (4x-2)\,N'(x, \vec{w}) + (x^2 - x)\,N''(x, \vec{w}) \qquad (9)$$
The neural network is trained to minimize the error function constructed in Eq. (7). The residual is computed at every entry x by substituting the trial function $y_T(x, \vec{w})$ into Eq. (1). To train the network parameters, i.e., to minimize the error in Eq. (7), the gradient descent optimization technique has been used.
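The following Python sketch makes Eqs. (6)–(9) concrete for a single-hidden-layer tanh network; the parameter layout (input weights a, biases b, output weights v) and the hidden-layer size are our own illustrative assumptions, not a prescription from the paper.

```python
import numpy as np


def net(x, a, b, v):
    """N(x, w), dN/dx and d2N/dx2 for a one-hidden-layer tanh network.

    x: (n,) collocation points; a, b, v: (H,) parameters (assumed layout).
    """
    z = np.outer(x, a) + b                        # (n, H) pre-activations
    t = np.tanh(z)
    N = t @ v                                     # N(x, w)
    N1 = (1.0 - t**2) @ (v * a)                   # dN/dx
    N2 = (-2.0 * t * (1.0 - t**2)) @ (v * a**2)   # d2N/dx2
    return N, N1, N2


def error(w, x, lam, H=10):
    """Sum-of-squares error of Eq. (7) for the trial solution of Eq. (6)."""
    a, b, v = w[:H], w[H:2 * H], w[2 * H:]
    N, N1, N2 = net(x, a, b, v)
    yT = x + x * (x - 1.0) * N                               # Eq. (6)
    yT2 = 2.0 * N + (4.0 * x - 2.0) * N1 + (x**2 - x) * N2   # Eq. (9)
    return np.sum((yT2 - lam * np.sinh(lam * yT))**2)        # Eq. (7)
```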
4 Analysis of the method
In this section, we discuss in detail the problems that arise in the solution of Troesch's problem by traditional methods. Further, we explain why the artificial neural network method with gradient descent optimization fails for relatively large eigenvalues. Finally, we propose an ANN method using the particle swarm optimization technique to optimize the ANN parameters, which makes the ANN work for large eigenvalues.
4.1 Conventional methods
We can rewrite Troesch's problem given in Eqs. (1)–(2) in the following form of a system of differential equations, as given by Khuri and Sayfy [11]:

$$\begin{cases} y' = s \\ s' = \lambda \sinh(\lambda y) \\ y(0) = 0, \quad y(1) = 1 \end{cases} \qquad \text{where } 0 \le x \le 1 \qquad (10)$$

The Jacobian matrix of the above system can be given by

$$J(y, s) = \begin{pmatrix} 0 & 1 \\ \lambda^2 \cosh(\lambda x) & 0 \end{pmatrix} \qquad (11)$$

Thus, the eigenvalues of the Jacobian matrix are $\lambda^* = \pm\lambda\sqrt{\cosh(\lambda x)}$, and at the endpoints of the interval, the eigenvalues are:

$$\lambda^*(0) = \pm\lambda, \qquad \lambda^*(1) = \pm\lambda\sqrt{\cosh(\lambda)} \qquad (12)$$

Here, it can be seen that for higher values of λ the eigenvalues λ* become extremely large. The Jacobian matrix J given by Eq. (11) is normal if it satisfies the equation $\sum_i |\lambda_i^*|^2 = \sum_{i,j} |a_{ij}|^2$, where $\lambda_i^*$ are the eigenvalues of a given matrix A with entries $a_{ij}$. Therefore,

$$2\lambda^2 \cosh(\lambda x) = 1 + \lambda^4 \cosh^2(\lambda x) \qquad (13)$$

or,

$$\lambda^2 \cosh(\lambda x) = 1 \qquad (14)$$

Equation (14) holds only for small values of λ and is not satisfied for larger values of λ. So, we can linearize Eq. (1) for small values of λ, i.e., 0 < λ ≪ 1, as:

$$y'' = \lambda^2 y \qquad (15)$$

Thus, y'' ≈ 0, i.e., the solution represents a straight line, which is a simple case of no particular importance. For this reason, conventional methods, such as finite differences, are convenient only for small eigenvalues and are not applicable for large eigenvalues.
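A quick numerical check (ours, not the paper's) of Eq. (12) shows how fast the endpoint eigenvalue grows, and hence why the system becomes stiff:

```python
import numpy as np

# Endpoint eigenvalue lambda*(1) = lambda * sqrt(cosh(lambda)) from Eq. (12)
for lam in (0.5, 1.0, 5.0, 10.0):
    print(f"lambda = {lam:5.1f}  ->  lambda*(1) = {lam * np.sqrt(np.cosh(lam)):10.2f}")
# By lambda = 10 the endpoint eigenvalue is already about 1.0e3.
```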
4.2 ANN method with gradient descent optimization
technique
The ANN technique for solving Troesch's problem using gradient descent optimization works for relatively small values of λ, i.e., for 0 < λ ≤ 1; the pseudocode for the algorithm is presented in Table 1. It can also be seen that this technique provides better results than the other conventional numerical methods in terms of accuracy, as given in Tables 2 and 3. However, for large eigenvalues (say λ > 1) this technique is not acceptable, since a small change in the value of λ creates a large change in the sinh term, and the parameters are not updated properly by gradient descent optimization because it involves the derivative term. To overcome this difficulty and handle larger eigenvalues, we propose in this article the particle swarm optimization technique to update the ANN parameters and show that, also for large eigenvalues, the ANN technique gives better results than the other conventional numerical methods in terms of accuracy, as described in Table 4.
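As a sketch of the small-λ training step (Table 1 below gives the PSO variant), one can minimize the error of Eq. (7) with a gradient-based optimizer; here SciPy's BFGS with numerical gradients stands in for the paper's gradient descent, reusing the hypothetical error() from the Sect. 3 sketch.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical training run for lambda = 0.5; BFGS stands in for the
# paper's gradient descent, and error() is the Sect. 3 sketch.
H = 10
rng = np.random.default_rng(1)
w0 = rng.normal(scale=0.5, size=3 * H)
x = np.linspace(0.0, 1.0, 50)
res = minimize(lambda w: error(w, x, lam=0.5, H=H), w0, method="BFGS")
print(res.fun)   # final sum-of-squares residual of Eq. (7)
```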
Table 1 Pseudo-code for PSO

Step 1 (Initialization)
  Randomly initialize all the particles (X_1^t, X_2^t, ..., X_ps^t) of swarm size ps in the search range [X_min, X_max]
  Initialize the velocities (V_1^t, V_2^t, ..., V_ps^t) in the range [V_min, V_max]
  Set t = 0 (iteration counter)
  Calculate the fitness values (fit_1^t, fit_2^t, ..., fit_ps^t) of X
  Set Pbest = (Pbest_1^t, Pbest_2^t, ..., Pbest_ps^t) to X_i^t
  Set Gbest to the particle with the best fitness value

Step 2 (Reproduction and updating)
  while stopping criterion is not satisfied do
    for i = 1 to ps do
      V_i^{t+1} = c1 V_i^t + c2 (X_i^t − Gbest^t) + c3 (X_i^t − Pbest_i^t)
      X_i^{t+1} = X_i^t + V_i^{t+1}
      Pbest_i^{t+1} = Pbest_i^t
      Gbest^{t+1} = Gbest^t
      Evaluate fitness(X_i^{t+1})
      if fitness(X_i^{t+1}) is better than fitness(Pbest_i^t) then
        update Pbest_i^{t+1}
      end if
      if fitness(Pbest_i^{t+1}) is better than fitness(Gbest^{t+1}) then
        update Gbest^{t+1}
      end if
    end for
  end while
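A compact Python rendering of Table 1 is sketched below. Note that we use the conventional attraction form (Gbest − X, Pbest − X) in the velocity update, whereas Eq. (17) prints the differences the other way; the random factors r2, r3 and the default ranges are our assumptions, while the coefficient defaults follow the control parameters listed in Sect. 5.

```python
import numpy as np


def pso(fitness, dim, ps=30, iters=500,
        x_range=(-5.0, 5.0), v_range=(-1.0, 1.0),
        c1=1.0 / (2.0 * np.log(2.0)),          # inertia weight (Sect. 5)
        c2=0.5 + np.log(2.0), c3=0.5 + np.log(2.0)):
    """Minimize `fitness` with the PSO scheme of Table 1 (sketch)."""
    rng = np.random.default_rng(0)
    X = rng.uniform(x_range[0], x_range[1], size=(ps, dim))
    V = rng.uniform(v_range[0], v_range[1], size=(ps, dim))
    fit = np.apply_along_axis(fitness, 1, X)
    Pbest, pfit = X.copy(), fit.copy()
    g = pfit.argmin()
    Gbest, gfit = Pbest[g].copy(), pfit[g]
    for _ in range(iters):
        r2, r3 = rng.random((ps, 1)), rng.random((ps, 1))
        V = c1 * V + c2 * r2 * (Gbest - X) + c3 * r3 * (Pbest - X)
        X = X + V
        fit = np.apply_along_axis(fitness, 1, X)
        better = fit < pfit                     # update personal bests
        Pbest[better], pfit[better] = X[better], fit[better]
        g = pfit.argmin()
        if pfit[g] < gfit:                      # update global best
            Gbest, gfit = Pbest[g].copy(), pfit[g]
    return Gbest, gfit


w_opt, e = pso(lambda w: np.sum(w**2), dim=30)  # smoke test on a quadratic
```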
Table 2 Solution of Troesch's problem for λ = 0.5

x     Exact solution   Solution by    Abs. error,      Abs. error,           Abs. error,     Abs. error,
                       ANN method     Laplace method   perturbation method   spline method   ANN method
0.1   0.0951769020     0.095196311    7.7 × 10^-4      8.2 × 10^-4           7.7 × 10^-4     1.9 × 10^-4
0.2   0.1906338691     0.191035515    1.5 × 10^-3      1.6 × 10^-3           1.5 × 10^-3     4.0 × 10^-4
0.3   0.2866534030     0.287674299    2.1 × 10^-3      2.3 × 10^-3           2.1 × 10^-3     1.0 × 10^-3
0.4   0.3835229288     0.3852715      2.7 × 10^-3      2.9 × 10^-3           2.7 × 10^-3     1.7 × 10^-3
0.5   0.4815373854     0.483987547    3.0 × 10^-3      3.2 × 10^-3           3.0 × 10^-3     2.4 × 10^-3
0.6   0.5810019749     0.58398387     3.1 × 10^-3      3.4 × 10^-3           3.1 × 10^-3     2.9 × 10^-3
0.7   0.6822351326     0.685422276    3.0 × 10^-3      3.2 × 10^-3           3.0 × 10^-3     3.1 × 10^-3
0.8   0.7855717867     0.788464295    2.4 × 10^-3      2.7 × 10^-3           2.4 × 10^-3     2.8 × 10^-3
0.9   0.8913669875     0.89327053     1.5 × 10^-3      1.6 × 10^-3           1.5 × 10^-3     1.2 × 10^-4
Table 3 Solution of Troesch's problem for λ = 1

x     Exact solution   Solution by    Abs. error,      Abs. error,           Abs. error,     Abs. error,
                       ANN method     Laplace method   perturbation method   spline method   ANN method
0.1   0.0817970        0.0816330      2.9 × 10^-3      3.6 × 10^-3           2.8 × 10^-3     1.6 × 10^-4
0.2   0.1645309        0.1642021      5.9 × 10^-3      7.1 × 10^-3           5.6 × 10^-3     3.2 × 10^-4
0.3   0.2491674        0.2542334      8.2 × 10^-3      1.0 × 10^-2           8.2 × 10^-3     5.0 × 10^-3
0.4   0.3367322        0.3437576      1.0 × 10^-2      1.3 × 10^-2           1.0 × 10^-2     7.0 × 10^-3
0.5   0.4283472        0.4374208      1.2 × 10^-2      1.6 × 10^-2           1.2 × 10^-2     9.0 × 10^-3
0.6   0.5252740        0.5362026      1.3 × 10^-2      1.7 × 10^-2           1.3 × 10^-2     1.0 × 10^-2
0.7   0.6289711        0.6410254      1.3 × 10^-2      1.7 × 10^-2           1.3 × 10^-2     1.2 × 10^-2
0.8   0.7411684        0.7527489      1.1 × 10^-2      1.5 × 10^-2           1.1 × 10^-2     1.1 × 10^-2
0.9   0.8639700        0.8721660      7.4 × 10^-3      9.7 × 10^-3           7.4 × 10^-3     8.1 × 10^-3
Table 4 Numerical solution of Troesch's problem for λ = 5

x     Fortran code TWPBVP [8]   B-spline method [8]   ANN method
0.0   0.00000000                0.00000000            0.00000000
0.2   0.01075342                0.01002027            0.01711550
0.4   0.03320051                0.03099793            0.04114780
0.8   0.25821664                0.24170496            0.24961022
0.9   0.45506034                0.42461830            0.45979906
1.0   1.00000000                1.00000000            1.00000000
4.3 ANN method with particle swarm optimization
(PSO) technique
Owing to the importance of soft computing techniques and the synergy of ANN and PSO, both techniques are popular for solving optimization problems, especially problems where conventional methods are not able to locate the global optimum. PSO is a non-gradient-based probabilistic search method inspired by the social behavior of fish schools and bird swarms [13–18]. Gradient-based algorithms are often used to optimize network parameters, as the computational cost of non-gradient-based algorithms is comparatively high. In the case of PSO, for optimizing the weight parameters we define the mean sum of squares as a fitness evaluation function, as given below:
$$F_j = \frac{1}{p_1}\sum_{i=1}^{p_1} \big(f^*(x_i, y(x_i), y', y'', \lambda)\big)^2 + \frac{1}{p_2}\sum_{i=1}^{p_2} \big(B\,f^*(x_i, y(x_i), y', y'', \lambda)\big)^2, \qquad j = 1, 2, 3, \ldots \qquad (16)$$
where j is the flight number, p1 is the number of time steps, p2 is the number of initial or boundary conditions, f* is the algebraic sum of the neural network representation of the differential equation that constitutes the given ordinary differential equation, and B is the operator defining the initial or boundary conditions. Our target is to minimize Fj using PSO, since it is well suited to finding the global optimum over a huge space of input data.
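As an illustration, Eq. (16) can be coded for Troesch's problem as follows. This sketch reuses the hypothetical net() from the Sect. 3 sketch, takes the raw network output as y(x) and penalizes the two boundary conditions through the operator B; one could instead use the trial solution (6), in which case the boundary term vanishes identically.

```python
import numpy as np

# Fitness of Eq. (16) for Troesch's problem (sketch): mean squared ODE
# residual over p1 collocation points plus mean squared boundary defect.
def fitness_Fj(w, lam, H=10, p1=50):
    x = np.linspace(0.0, 1.0, p1)
    a, b, v = w[:H], w[H:2 * H], w[2 * H:]
    y, y1, y2 = net(x, a, b, v)                # raw network output as y(x)
    res = y2 - lam * np.sinh(lam * y)          # f*: residual of Eq. (1)
    bcs = np.array([y[0] - 0.0, y[-1] - 1.0])  # B: conditions of Eq. (2)
    return np.mean(res**2) + np.mean(bcs**2)   # Eq. (16)
```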
In PSO, the problem space is constructed by random generation of particles or swarms (Jordehi [17]). The fitness of a particle is defined by the function f : R^n → R, and to update the initial position of a particle, three moves can be made: toward its own direction, toward the globally best particle and toward its personal best position. Instead of moving along a single path, it is better to choose a path which incorporates all three of these influences. Mathematically, suppose the position of the ith particle in the D-dimensional search space is X_i^t = (x_i1^t, x_i2^t, ..., x_iD^t) with velocity V_i^t = (v_i1^t, v_i2^t, ..., v_iD^t) at any moment t, where i runs from 1 to the swarm size ps. Let Pbest_i^t and Gbest^t be the latest best position of the particle and the global best position at moment t. From the theory of particle swarm optimization, the change in the position and velocity of each particle is governed by the following two equations:

$$V_i^{t+1} = c_1 V_i^t + c_2 \big(X_i^t - \text{Gbest}^t\big) + c_3 \big(X_i^t - \text{Pbest}_i^t\big) \qquad (17)$$

$$X_i^{t+1} = X_i^t + V_i^{t+1} \qquad (18)$$

The exhaustive procedure of PSO is described in Table 1.

5 Numerical simulation

Fig. 1 MAE in the solution for each combination of grid size n and number of hidden nodes H while solving Troesch's problem
Fig. 2 Absolute error in the ANN approximation for λ = 0.5
Fig. 3 Absolute error in the ANN approximation for λ = 1.0
In this section, we use the ANN approximation given in Eq. (6) to solve Troesch's problem for different values of the parameter λ. It has already been shown in the literature that a combination of ANNs produces a smaller generalization error than an individual network [28]. The combination of neural networks consists of different numbers of neurons in the hidden layer, different numbers of training points in the domain and different starting weights for network training.

To illustrate the ANN technique using the gradient descent algorithm for solving the nonlinear Troesch's problem for 0 ≤ λ ≤ 1, we considered a three-layered neural network with all combinations of H = 10, 20, 25, 30, 40 (hidden nodes) and N = 10, 20, 30, 50, 100 (training points), with 30 different sets of starting weights. We choose the lowest mean absolute error (MAE) in the differential equation among all the runs with different starting weights to represent each combination. Figure 1 shows the MAE in the solution for each combination of parameters, hidden nodes and grid points. Out of all 5 × 5 × 30 = 750 runs, the best-performing ANN had a MAE in the solution of 1.26 × 10^-4, obtained for the combination n = 50 and H = 40.
Thus, we choose the best ANN representative, n = 50 and H = 40, for further computation of the solution of the nonlinear Troesch's problem defined in Eq. (1). In Tables 2 and 3, the numerical solutions obtained by the ANN for λ = 0.5 and λ = 1, respectively, are compared with the exact solution given by Eq. (3) and with other numerical methods, namely the Laplace decomposition method [4], the perturbation method [5] and the spline method [8]; the absolute errors corresponding to these methods are also presented.

Figures 2 and 3 show the absolute errors in the solution obtained using the ANN with the gradient descent algorithm relative to the exact solution given in Eq. (3), which shows that the method is highly accurate.
As mentioned in the previous section, the ANN method using the gradient descent optimization technique fails to obtain an acceptable approximation for the case λ > 1. Hence, as an alternative, PSO is used in the ANN to optimize the ANN parameters for λ > 1. An initial population of 100 particles has been taken, divided into 10 subswarms of 10 particles each. The PSO is used to perform the global search optimization to update the weights and biases of the constructed neural network. The following control parameters are used in PSO:

Dimension = 30; Fitness = MSE; Inertia weight = 1 / (2 log(2)); Self-confidence constants = 0.5 + log(2).
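Combining the sketches above, a hypothetical end-to-end run for λ = 5 with these control parameters could look like the following; the choice H = 10 (so that dimension 30 equals the 3H adjustable parameters) is our assumption.

```python
# Hypothetical end-to-end run combining the earlier pso() and fitness_Fj()
# sketches; dimension 30 corresponds to H = 10, i.e., 3*H parameters.
w_opt, err = pso(lambda w: fitness_Fj(w, lam=5.0, H=10),
                 dim=30, ps=100, iters=2000)
print(err)   # final MSE fitness of Eq. (16)
```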
In Table 4, the numerical solution obtained by the ANN using PSO for λ = 5 is compared with the numerical approximation of the exact solution given by a FORTRAN code called TWPBVP and with the B-spline method [8].

Tables 2, 3 and 4 show that the solution via the ANN for Troesch's problem is more accurate than that of the other techniques inside the domain of consideration. We applied the ANN with the PSO technique to solve Troesch's problem for a wide range of values of λ. Figure 4 shows the solutions of Troesch's problem for λ = 0.5, 1.0, 2.0, 4.0 and 5.0.

Fig. 4 ANN solutions of Troesch's problem using PSO for different values of λ

It is worth mentioning that, for any eigenvalue, the ANN technique based on the PSO learning algorithm is easy to apply and yields a reasonable approximation to the solution with little computing time and memory.
6 Conclusion
The closed-form solution of Troesch's problem is given in terms of the Jacobi elliptic function and has a singularity that makes the problem difficult to solve analytically, a difficulty that increases as the value of λ increases. The proposed approach based on PSO and the ANN removes the difficulty arising in the solution of Troesch's problem for higher eigenvalues. Particle swarm optimization is a global search algorithm that overcomes the difficulty of locating the global optimum in solving the optimization problem, so its applicability is essential for optimizing the network parameters in neural networks. The approximate numerical solution obtained using the ANN method maintains good accuracy compared with the exact solution and other numerical methods.
Acknowledgments This work was supported by National Research
Foundation of Korea (NRF) Grant funded by the Korean government
(MSIP) (NRF-2013R1A2A1A01013886) and the Brain Korea 21
(BK-21) fellowship from the Ministry of Education of Korea.
References
1. Weibel ES (1958) Confinement of a plasma column by radiation
pressure. In: Landshoff RKM (ed) The plasma in a magnetic field.
Stanford University Press, Stanford, pp 60–76
2. Roberts SM, Shipmann J (1976) On the closed form solution of
Troesch’s problem. J Comput Phys 21(3):291–304
3. Scott MR (1975) On the conversion of boundary value problems
into stable initial value problems via several invariant imbedding
algorithms. In: Aziz AK (ed) Numerical solutions of boundary
value problems for ordinary differential equations. Academic
Press, New York, pp 89–146
4. Khuri SA (2003) A numerical algorithm for solving the Troesch’s
problem. Int J Comput Math 80(4):493–498
5. Feng X, Mei L, He G (2007) An efficient algorithm for solving
Troesch’s problem. Appl Math Comput 189(1):500–507
6. Chang SH, Chang IL (2008) A new algorithm for calculating the
one dimensional differential transform of non linear functions.
Appl Math Comput 195(2):799–808
7. Chang SH (2010) A variational iteration method for solving
Troesch’s problem. J Comput Appl Math 234(10):3043–3047
8. Chang SH (2010) Numerical solution of Troesch’s problem by
simple shooting method. Appl Math Comput 216(11):3303–3306
9. Zarebnia M, Sajjadian M (2012) The sinc-Galerkin method for
solving Troesch’s problem. Math Comput Model 56(9–10):
218–228
10. Vazquez-Leal H, Khan Y, Fernandez-Anaya G et al (2012) A
general solution for Troesch’s problem. Math Probl Eng, Article
ID 208375
11. Khuri SA, Sayfy A (2011) Troesch’s problem: a B-spline collocation approach. Math Comput Model 54(9–10):1907–1918
12. Shirvany Y, Hayati M, Moradian R (2008) Numerical solution of
the nonlinear Schrodinger equation by feedforward neural networks. Commun Nonlinear Sci Numer Simul 13(10):2132–2145
13. Kennedy J, Mendes R (1995) Particle swarm optimization. Proc
IEEE Int Conf Neural Netw 4:1942–1948
14. Mendes R, Cortez P, Rocha M, Neves J (2002) Particle swarms
for feed forward neural network training. Proc Int Jt Conf Neural
Netw 2:1895–1899
15. Yadav A, Deep K (2013) Shrinking hypersphere based trajectory
of particles in PSO. Appl Math Comput 220(1):246–267
16. Khan JA, Zahoor RMA, Qureshi IM (2009) Swarm intelligence
for the problems of non-linear ordinary differential equations and
its application to well known Wessinger’s equation. Eur J Sci Res
34(4):514–525
17. Jordehi AR (2014) Particle swarm optimization for dynamic
optimization problems: a review. Neural Comput Appl 25(7–8):
1507–1516
18. Jhang JR, Zhang J, Lok TM, Lyu MR (2007) A hybrid particle
swarm optimization-back propagation algorithm for feed forward
neural network training. Appl Math Comput 185(2):1026–1037
19. Tsoulos IG, Gavrilis D, Glavas E (2009) Solving differential
equations with constructed neural networks. Neurocomputing
72(10–12):2385–2391
20. Clerc M, Kennedy J (2002) The particle swarm-explosion, stability and convergence in a multidimensional complex space.
IEEE Trans Evol Comput 6(1):58–73
21. Pedersen MEH, Chipperfield AJ (2010) Simplifying particle
swarm optimization. Appl Soft Comput 10(2):618–628
22. Troesch BA (1976) A simple approach to a sensitive two-point
boundary value problem. J Comput Phys 21(3):279–290
23. Lagaris IE, Likas A (1998) Artificial neural networks for solving
ordinary and partial differential equations. IEEE Trans Neural
Networks 9(5):987–1000
24. Malek A, Shekari Beidokhti R (2006) Numerical solution for
high order differential equations using a hybrid neural network—
optimization method. Appl Math Comput 183(1):260–271
25. McFall KS, Mahan JR (2009) Artificial neural network method
for solution of boundary value problems with exact satisfaction of
arbitrary boundary conditions. IEEE Trans Neural Networks
20(8):1221–1233
26. Kumar M, Yadav N (2011) Multilayer perceptrons and radial
basis function neural network methods for the solution of differential equations: a survey. Comput Math Appl 62(10):
3796–3811
27. Lagaris IE, Likas A, Papageorgiou DG (2000) Neural network
methods for boundary value problems with irregular boundaries.
IEEE Trans Neural Netw 11(5):1041–1049
28. Mcfall KS (2013) Automated design parameter selection for
neural networks solving coupled partial differential equations
with discontinuities. J Frankl Inst 350(2):300–317