THE GRADIENT PROJECTION ALGORITHM WITH ADAPTIVE MUTATION STEP LENGTH FOR NON-PROBABILISTIC RELIABILITY INDEX

Original scientific paper

The gradient projection algorithm, when applied to optimal design, suffers from two problems: the difficulty of selecting the step-size parameter and premature convergence when the search enters a local region. To address these problems, an adaptive step-size strategy and an adaptive mutation step-size mechanism were established, introduced into the gradient projection algorithm, and used to control the iteration step length. Examples of computing the non-probabilistic reliability index show that the method quickly and accurately calculates the index even when the model has many variables and a complex limit state function. Compared with the plain gradient projection algorithm, the proposed algorithm is not sensitive to the position of the initial point; it balances local refinement against global search ability, converges quickly, and achieves high precision. It is therefore an efficient and practical optimization algorithm.


Introduction
Reliability problems derive from the uncertainties that exist in engineering. A probabilistic model cannot be defined when the statistical properties of the parameters are unknown, and a suitable membership function is difficult to determine when adequate data are lacking. Since the ranges or bounds of the parameters are easy to determine, the non-probabilistic reliability method based on interval analysis is an ideal choice [1].
Over-estimation caused by interval expansion is an important problem in interval analysis [2]. Its root cause is the basic hypothesis of interval arithmetic: all parameters participating in a computation are mutually independent, and their values vary independently within their interval ranges without mutual influence. In fact, this assumption often does not hold. Two classes of approaches solve the interval-expansion overestimation problem well: exact methods and approximate methods. Exact methods mainly include: (1) Transformation into an optimization problem. The interval computation is recast as a traditional programming problem; under the assumption that all interval variables are completely independent, a corresponding pure mathematical model is established, and the exact range of the function value can generally be obtained, although some of the transformed optimization problems are computationally complex. (2) The interval combination method. According to the monotonicity theorem for real-valued functions, the upper and lower bounds of a function can be obtained from the function values at the endpoints of the independent variables [3]. If there are multiple independent variables, the accurate interval range of the function value is obtained by combining the upper and lower bounds of each independent variable.
However, if the limit state function is complicated and the variables are numerous, the computational cost of an exact solution becomes excessive or even prohibitive. A computer algorithm that rapidly and accurately solves non-probabilistic reliability problems with complex multivariable functions is therefore of clear value. On account of its explicit physical meaning and rapid iteration [4], the gradient projection method has been applied to solving the reliability indexes of interval models [5]. Li Shijun [6] proved its feasibility and effectiveness with practical examples. However, that study found that when the gradient projection method is used alone, the calculated most probable failure point depends on the selection of the initial point, and it is difficult to guarantee the accuracy of the iterative convergence point. By combining gradient projection with a space dimension reduction method, the search can be terminated efficiently at the actual target point; however, the dangerous point found in this way is necessarily confined to the angular points of the multi-dimensional hypercube box model composed of the standardized interval variables. Several articles [7÷9] have shown that this is only a special circumstance, not a universal one.
In this paper, an adaptive step-size strategy and a mutation step-size mechanism are introduced into the gradient projection algorithm. During iteration, the algorithm automatically chooses the step strategy according to the actual situation of the solving process. When the search enters a locally optimal area and the iteration shows premature convergence, the algorithm adaptively mutates the step length and helps the next search escape the local area. In this way it effectively avoids premature convergence and improves the global optimization capability of the algorithm. Furthermore, examples of solving non-probabilistic reliability indexes show that the algorithm integrates the advantages of the gradient projection algorithm and of adaptive algorithms: it balances local refinement against global search ability, and it solves high-dimensional, complex non-probabilistic reliability optimization problems with high precision.

Interval reliability index
Assume x = (x_1, x_2, ..., x_n) is a vector of interval variables that affect the structural function, with value range x ∈ x^I = [x^l, x^u]. Then, for the function M = g(x) = g(x_1, x_2, ..., x_n), the structural non-probabilistic reliability index η can be expressed as

η = M^c / M^r,

where M^c and M^r are respectively the centre and the radius of the interval M.
Introduce the standardized interval vector δ with components

δ_i = (x_i − x_i^c) / x_i^r,  i = 1, 2, ..., n,

where x_i^c and x_i^r are respectively the centre and the radius of the interval x_i. Replacing x with δ, the performance function is converted into the equivalent function G(δ) of the standardized interval vector δ. At the same time, the expansion space of δ is obtained:

C_δ^∞ = {δ = (δ_1, δ_2, ..., δ_n) : δ_i ∈ (−∞, +∞), i = 1, 2, ..., n}.

Based on the infinity norm, the interval reliability index η is defined as the shortest distance, measured by the infinity norm ||·||_∞, from the origin to the failure surface in the expansion space C_δ^∞ of the standardized vector δ:

η = min ||δ||_∞ = min max_i |δ_i|,

under the constraint

G(δ) = G(δ_1, δ_2, ..., δ_n) = 0.

It is apparent that solving for the reliability index measured by the infinity norm is essentially an optimization problem with an equality constraint.
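For a linear limit state function the index defined above can be evaluated in closed form, since M^c and M^r follow directly from interval arithmetic. The sketch below (function and variable names are ours, not from the paper) computes η = M^c / M^r for M = Σ a_i x_i + b:

```python
def interval_eta_linear(a, x_c, x_r, b=0.0):
    """Non-probabilistic reliability index eta = M^c / M^r for the linear
    limit state M = sum(a_i * x_i) + b, with each x_i in the interval
    [x_c[i] - x_r[i], x_c[i] + x_r[i]].  Illustrative helper only."""
    m_c = sum(ai * ci for ai, ci in zip(a, x_c)) + b   # centre of interval M
    m_r = sum(abs(ai) * ri for ai, ri in zip(a, x_r))  # radius of interval M
    return m_c / m_r

# x1 in [4, 6], x2 in [1, 3]; M = x1 - x2 has centre 3 and radius 2
eta = interval_eta_linear([1.0, -1.0], [5.0, 2.0], [1.0, 1.0])  # -> 1.5
```

For non-linear limit states no such closed form exists, which is exactly why the constrained-optimization formulation above is needed.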

Background introduction of gradient projection algorithm

3.1 Gradient algorithm and its improvement
In 1847, Cauchy put forward the gradient method [9], a minimization algorithm that takes the negative gradient direction as the search direction, along which the objective function value can be expected to fall at the fastest rate. For unconstrained optimization the method has several advantages: it is simple, requires little computation and storage, places low demands on the initial point, and has good global convergence. The gradient method reflects a local property of the objective function. Seen from the local angle, the descent direction is truly the fastest descent direction of the objective function value. Viewed globally, however, when dealing with complicated non-linear optimization problems the gradient method has only Q-linear convergence: the optimization procedure approaches the minimum point in a "zigzag pattern", so the search takes many detours even when the distance to the minimum point is not large, which greatly slows the convergence rate.
To improve its performance, the gradient algorithm has seen a variety of improvements. Barzilai and Borwein [10] proposed the two-point step-size gradient method, also called the BB gradient method. It achieves local R-linear convergence for general functions, and R-superlinear convergence from an arbitrary initial point for two-dimensional convex quadratic objective functions. The method greatly improves the convergence speed, but it cannot ensure that the objective function value decreases at every iteration. Dai [11] proved that the cross-step (AS) gradient algorithm integrates both advantages. It avoids the saw-tooth phenomenon; furthermore, for positive semi-definite systems and an arbitrary initial point, it has two-step Q-superlinear convergence for all two-dimensional functions and R-linear convergence for functions of arbitrary dimension.
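The BB step described above uses only the last displacement s = x_{k+1} − x_k and gradient change y = g_{k+1} − g_k, taking α = sᵀs / sᵀy. A minimal sketch (our own, not from the paper) on a two-dimensional convex quadratic:

```python
def bb_gradient(grad, x0, iters=60):
    """Two-point (Barzilai-Borwein) gradient method, BB1 step
    alpha_k = s.s / s.y.  Pure-Python sketch for illustration."""
    x = list(x0)
    g = grad(x)
    alpha = 1e-3                                  # bootstrap with a tiny gradient step
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]     # displacement
        y = [a - b for a, b in zip(g_new, g)]     # gradient change
        sy = sum(a * b for a, b in zip(s, y))
        if abs(sy) < 1e-16:                       # converged: s and y vanish
            return x_new
        alpha = sum(a * a for a in s) / sy        # BB1 step length
        x, g = x_new, g_new
    return x

# minimize f(x) = 0.5*(x1^2 + 10*x2^2); gradient is (x1, 10*x2), minimum at 0
x = bb_gradient(lambda v: [v[0], 10.0 * v[1]], [1.0, 1.0])
```

Note the non-monotone behaviour mentioned in the text: individual iterations may increase f, yet the sequence converges quickly on this quadratic.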
In the 1950s, Hestenes and Stiefel proposed the linear conjugate gradient method (HS algorithm) [12]. Fletcher and Reeves [13] put forward the non-linear conjugate gradient method (FR algorithm) in the 1960s. Subsequently, a variety of conjugate gradient and hybrid conjugate gradient methods were proposed on the basis of the FR algorithm, such as the PRP, LS, CD and DY algorithms, and the modified PRP algorithm proposed by Yuan [14]. All these algorithms are conjugate in nature. By combining conjugacy with the gradient algorithm, constructing a set of conjugate directions from the gradients of known points and searching along those directions for the minimum point, the conjugate gradient method overcomes the slow convergence of the gradient method, attaining superlinear convergence and n-step termination on quadratic objectives. Generally speaking, the PRP, LS and HS methods have superior numerical performance but less ideal convergence theory; in contrast, the FR, CD and DY methods have good convergence theory but poorer numerical performance, so it is hard to balance sufficient descent and global convergence.
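The n-step termination property is easiest to see in the linear case. The sketch below (our own illustration) applies conjugate gradients to a 2x2 symmetric positive-definite system A x = b, using the Fletcher-Reeves ratio beta = ||r_new||^2 / ||r||^2 to update the conjugate direction; it terminates in at most n = 2 steps:

```python
def conjugate_gradient(A, b, x0, tol=1e-10):
    """Linear conjugate gradient for A x = b (A symmetric positive
    definite); beta computed as the Fletcher-Reeves residual ratio."""
    n = len(b)
    x = list(x0)
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    d = list(r)                                   # first direction: steepest descent
    rr = sum(ri * ri for ri in r)
    for _ in range(n):                            # at most n steps in exact arithmetic
        Ad = [sum(A[i][j] * d[j] for j in range(n)) for i in range(n)]
        alpha = rr / sum(di * adi for di, adi in zip(d, Ad))
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        rr_new = sum(ri * ri for ri in r)
        if rr_new < tol:
            break
        beta = rr_new / rr                        # Fletcher-Reeves ratio
        d = [ri + beta * di for ri, di in zip(r, d)]
        rr = rr_new
    return x

# solve [[4,1],[1,3]] x = [1,2]; exact solution is (1/11, 7/11)
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```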
For constrained problems, a search along the negative gradient direction may produce infeasible points. In 1960, Rosen put forward the gradient projection algorithm: when the iteration point lies inside the feasible region, the negative gradient of the objective function is taken as the search direction; when the iteration point lies on the boundary of the feasible region, the projection of the negative gradient onto that boundary is taken instead, which guarantees that the constructed direction is both feasible and descending.

Gradient projection algorithm
The gradient projection algorithm belongs to the class of line-search methods: each iteration requires both a search direction d_k and a step size α_k. Because the non-probabilistic reliability performance function is usually non-linear, the objective is f(δ) = ||δ||_∞ = max_i |δ_i|, i.e. the search seeks, on the standardized failure surface, the point whose largest absolute coordinate component is smallest. This value depends only on the point's coordinate components and has no analytical expression, so the negative gradient direction of the objective function cannot be written down directly, and the traditional gradient projection algorithm cannot be applied as-is to the solving of the non-probabilistic reliability index. Following the mathematical meaning of the gradient, a similar gradient vector of the objective function is constructed instead [6].
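The projection step itself is classical: for a single active constraint with normal ∇G, the Rosen projection matrix P = I − n nᵀ/(nᵀn) removes the component of a vector along the constraint normal. A minimal sketch (our own illustration, with hypothetical names) that projects a gradient and negates it to get a feasible descent direction:

```python
def project_onto_tangent(grad_f, grad_g):
    """Return d = -P * grad_f, where P = I - n n^T / (n^T n) and
    n = grad_g is the constraint normal.  The result is orthogonal to
    grad_g, i.e. tangent to the surface g = const, and points downhill."""
    nn = sum(gi * gi for gi in grad_g)
    coef = sum(a * b for a, b in zip(grad_f, grad_g)) / nn
    # subtract the normal component, then negate for descent
    return [-(a - coef * b) for a, b in zip(grad_f, grad_g)]

# gradient (1, 1, 0) projected against the normal (0, 0, 1):
# the normal component is zero, so the direction is just (-1, -1, 0)
d = project_onto_tangent([1.0, 1.0, 0.0], [0.0, 0.0, 1.0])
```

By construction d is orthogonal to the constraint normal, which is the tangency property discussed below.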
The direction d_k constructed in this way is orthogonal to the gradient of the constraint, i.e. the search moves along the tangent directions of the failure surface; at the same time it points in a descent direction of the objective function.
For general iterative search algorithms, a properly chosen initial point makes the iteration converge rapidly to the desired accuracy, but choosing it depends on familiarity with the research object and on engineering experience. For the algorithm studied here, the selection condition for the initial point is expected to be broad. The algorithm therefore relaxes the restrictions on the initial point: it only needs to fall on the limit state surface, i.e. to satisfy the constraint equation.

Adaptive step-size strategy and mutation step-size mechanism

3.3.1 Adaptive step-size strategy
McCormick and Tapia found that the precise step-size rule used by the gradient projection algorithm requires a large amount of computation, so it was rarely adopted [15]. Hager and Park believed that only for some difficult problems with simple constraints can the precise step-size rule make the iteration point jump out of a neighbourhood of a local minimum point and move toward the global minimum point [16].
In complex constrained optimization, where the precise step-size rule is not usually used, the gradient projection algorithm may show premature convergence after a local optimum is reached. Compared with other step-size strategies, the adaptive step-size strategy has better convergence performance: it adjusts the step in the light of the previous iterative information. Adopting the adaptive step-size strategy, we first obtain the maximum and the second maximum absolute values of the coordinate components on the failure surface in the previous iteration. When the difference between these two absolute values is large, a larger step is used automatically in the next iteration; otherwise a smaller step is adopted. The step length must be suited to the difference between the two absolute values in the previous iteration (denoted q^(k,1) for convenience) and to the difference between the reliability indexes of the previous two iterations (denoted q^(k,2)). The iteration step α_(k+1) of step k+1 is therefore designed as a non-linear function of μ, q^(k,1) and q^(k,2).
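The paper's exact non-linear rule (its formula (9)) is not reproduced in the text, so the sketch below is purely an illustrative stand-in with the qualitative behaviour described above: the step grows with the gap q1 between the two largest |coordinate| values and with the recent change q2 of the reliability index, and is capped by the coefficient mu.

```python
import math

def adaptive_step(mu, q1, q2):
    """Illustrative stand-in for the paper's non-linear step rule (9):
    a larger q1 (coordinate gap) or q2 (index change) yields a larger
    step, bounded above by mu.  Not the authors' actual formula."""
    return mu * math.tanh(abs(q1) + abs(q2))
```

With mu = 0.2 (the value used in example 1), a large gap gives a step near 0.2, while a nearly stalled search gets a much smaller step.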

Mutation step-size mechanism
In the early running stage of the algorithm, a large step helps the search jump out of local minimum points and complete the global optimization task; in the late stage, a small step helps reach local convergence quickly. This idea is effective, but by itself it cannot solve premature convergence in complex optimization problems. Whenever the search enters a local area, the step size is calculated from the step-size strategy in Fig. 2 through the "else" or "else if" branch; that formula does not contain the adaptive mutation step-size m, and the step size depends only on the step-size coefficient μ, q^(k,1) and q^(k,2).
When the search enters a local area and premature convergence appears, the mutation mechanism is brought in. The mutation step-size helps the subsequent search escape the local optimum and keep searching other areas of the solution space until the global optimal solution is found. The step size is then set to the mutation step-size m (formula (10)), where m is the adaptive mutation step length. It is computed as a non-linear function of the iteration number and of f(δ^(j+1)) and f(δ^(j)) (formula (11)), where f(δ^(j+1)) is the reliability index of the current iteration when premature convergence appears, and f(δ^(j)) is the reliability index of the previous iteration.
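Since formula (11) is not reproduced in the text, the following is only an illustrative stand-in capturing the stated inputs: the mutation step depends on the iteration number k and on the two consecutive reliability indexes, firing at full strength when the index has stagnated (the premature-convergence signal) and shrinking otherwise and as k grows.

```python
def mutation_step(k, f_curr, f_prev, base=0.075):
    """Illustrative stand-in for the paper's formula (11): return the
    full base mutation step when f has stagnated (|f_curr - f_prev|
    near zero), and a smaller step when the search is still making
    progress or k is large.  Not the authors' actual formula."""
    stagnation = 1.0 / (1e-12 + abs(f_curr - f_prev))  # large when stuck
    return base * min(1.0, stagnation / (k + 1))
```

The base value 0,075 matches the mutation step length used in example 1; the functional form is ours.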
Figure 2 Step-size strategy and algorithm flow diagram

When premature convergence appears during the search of the solution space, the non-linear adaptive mutation step length helps the search jump out of the local search area effectively and continue optimizing in other areas. On reaching the next area, the step size is again updated non-linearly according to formula (9), so that both local refinement and global search ability are taken into account. Through the interaction of the adaptive step-size strategy and the mutation step-size mechanism, this non-linear control of the step length of the gradient projection algorithm effectively coordinates the global and local optimization capacities of the algorithm.

Numerical calculation examples

4.1 Calculation example 1
Consider a quadratic non-linear performance function on the standardized space. The points D_1 (1,51712; 1,51712; 1,51712) and D_2 (3,03288; 3,03288; 3,03288) lie on its limit state surface, at angular points of the multi-dimensional hypercube box model composed of the standardized interval variables. Since 3,03288 > 1,51712, D_2 is rejected; according to the one-dimensional optimization algorithm of literature [17], the most probable failure point would be D_1 with η = 1,51712. In fact this is not the case. The point (2,475; 0,8; 1,5) was selected as the initial point on the surface G(δ) = 0, with parameter μ = 0,2. As shown in Tab. 1, the search proceeded only within the surface δ_2 = 0,8. In the third and fourth iterations the point was (1,3917; 0,8; 1,4071); local premature convergence arose, and had the search been cut off at this point one would have concluded η = 1,4071, a relative error of (1,4071 − 1,4048)/1,4048 = 0,1637 %. In the next iteration the mutation step-size mechanism was started. With the mutation step length set to 0,075, the subsequent search escaped the local optimum quickly, searched other areas of the solution space, and terminated at the point (1,4047; 0,8; 1,4048) in the fifth iteration. The global optimum was found, with reliability index η = 1,4048 and component coordinates δ_1 = 1,4047, δ_2 = 0,8, δ_3 = 1,4048; this point is not an angular point of the hypercube. Obviously, solving the non-probabilistic reliability index by dimensionality-reduced search has limitations, which is confirmed by the results in literature [6]; moreover, the iteration in literature [6] took 15 steps. Without the space dimension reduction method, when local premature convergence appeared the search in [6] could neither escape the local optimum automatically nor reach the global optimum, so the most probable failure point calculated by the algorithm of [6] depended on the selection of the initial point. For example, with (1,75; 0,1; 1,535) as the initial point, the final search point was (1,4069; 0,7014; 1,4069) and the reliability index was η = 1,4069; although close to the global optimum, this still contains an accuracy error. For example 1, Fig. 3 shows this paper's search process and results compared with literature [6].
Similarly, when the initial point was chosen at the other point, (1,75; 0,1; 1,535) from calculation example 1 of literature [6], this paper's algorithm iterated 49 steps and the search ended on the global optimal point (1,4046; 0,7989; 1,4048), as shown in Tab. 2, again with η = 1,4048. This agrees with the previous case, in which the initial point was set at (2,475; 0,8; 1,5), and demonstrates the excellent accuracy of the algorithm. It also illustrates that the algorithm is not sensitive to the initial point and has good robustness.
4.2 Calculation example 2

By launching the adaptive mutation step-size mechanism in the step following each of the 18 occurrences of local premature convergence, the search eventually ended at the 100th iterative search point (−1,1556; −1,1556; 1,1556). The global optimum was achieved, with reliability index η = 1,1556, equal to the result obtained in literature [6] by combining the gradient projection algorithm with dimension-reduced search.
Changing the starting position of the search many times, all searches ended at the global optimal point (−1,1556; −1,1556; 1,1556), further illustrating the robustness of this paper's algorithm. By contrast, when the gradient projection algorithm alone was used for calculation example 2 in literature [6], the search ended at the local premature convergence point (−1,1631; −1,1229; 1,1631), giving a reliability index η = 1,1631 with an error of 0,649 %. In literature [6] the subsequent search sets the absolute value of the first component equal to that of the third component, which reduces the problem to a two-dimensional search; that search ends at (−1,1556; −1,1556) and reaches the target point. The method is ingenious, but it forces the most dangerous point to lie at an angular point of the multi-dimensional hypercube box model composed of the standardized interval variables, and it is doubtful that all most dangerous points lie at such locations, so the method has no universal significance. Furthermore, the reason the search ends at non-target points is the absence of an adaptive mutation step-size mechanism, without which the gradient projection algorithm cannot effectively jump out of a local optimal area.

Conclusions
Because of its simplicity and clear mathematical meaning, the gradient projection algorithm is widely used in probabilistic reliability analysis. This paper established an adaptive step-size strategy and an adaptive mutation step-size mechanism. The strategy is a non-linear function of the step-size coefficient μ, the difference q^(k,1) between the maximum and second maximum absolute values of the coordinate components in the previous iteration, and the difference q^(k,2) between the reliability indexes of the previous two iterations. Similarly, the mechanism is a non-linear function of the iteration number k+1, the reliability index f(δ^(j+1)) of the current iteration when premature convergence appears, and the reliability index f(δ^(j)) of the previous iteration.
The strategy and mechanism automatically take a large step in the infancy of the gradient projection algorithm and a small step near the steady state. At the same time, the step size mutates automatically when the search falls into a state of local premature convergence, effectively helping the subsequent search seek the optimum in other areas.
Experiments showed that the proposed algorithm controls the iteration step size through the adaptive step-size strategy and the adaptive mutation step-size mechanism. Compared with the plain gradient projection algorithm, it has faster convergence and higher accuracy. Taking both local and global search capacities into consideration, it proves effective and practical for solving problems of the non-probabilistic reliability index.

 2 (
can also be spread at the iteration point k δ and the linear items are chosen, i.e. to transform non-linear constraints into linear constraints at the iteration points, then to substitute the spreading surface for limit state surface of performance function, project negative gradient direction into the surface by means of projection matrix, the search direction k d can be constructed.If linear search point (0),(k 1) δ + is not on limit state surface, quasi-Newton iteration must be used for dragging search point back to the surface 1
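The pull-back step just described can be sketched as a one-dimensional Newton correction along the constraint gradient: move against ∇g by g(x)/||∇g||² until g(x) returns to zero. A minimal illustration (our own names and example, not the paper's):

```python
def pull_back_to_surface(g, grad_g, x, tol=1e-10, iters=50):
    """Newton correction along grad g to restore g(x) = 0:
    x <- x - g(x) * grad_g(x) / ||grad_g(x)||^2, repeated until |g| < tol."""
    for _ in range(iters):
        gx = g(x)
        if abs(gx) < tol:
            break
        n = grad_g(x)
        nn = sum(ni * ni for ni in n)
        x = [xi - gx * ni / nn for xi, ni in zip(x, n)]
    return x

# pull the point (2, 0) back onto the unit circle g = x^2 + y^2 - 1 = 0
x = pull_back_to_surface(lambda v: v[0] ** 2 + v[1] ** 2 - 1.0,
                         lambda v: [2.0 * v[0], 2.0 * v[1]],
                         [2.0, 0.0])
```

The correction converges quadratically near the surface, which is why a few iterations per line-search point suffice in practice.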

Figure 1 The diagram of the gradient projection iterative algorithm

For the solving of non-probabilistic reliability indexes, the objective function is f(δ) = ||δ||_∞ = max_i |δ_i|.

The similar gradient vector has entries equal to ±1 located at the position where the coordinate component of the point δ^(k) has the maximum absolute value: +1 for a positive component and −1 for a negative one. Evaluating this vector at δ^(k) and projecting it onto the surface (or curve) on the basis of the gradient projection algorithm, we can determine the search direction d^(k). The search process of example 1, following the step-size strategy and algorithm flow, is shown in Tab. 1.

Figure 3 This paper's search process and results compared with literature [6]

The adaptive strategy adjusts the step in a non-monotone way, so the search step can not only decrease but also increase. The Armijo-Goldstein strategy, a common imprecise one-dimensional search strategy, obtains the iteration step by a one-dimensional search each time and ensures a satisfactory reduction of the objective function value; however, it requires differentiating the objective function at every iteration, whereas the non-probabilistic reliability objective function has no analytical gradient.

Table 1
Search iterative process of calculation example 1

Table 2
Iteration process after the initial point was changed


Table 3
Search and iterative process of example 2

