Multi-step discrete-time Zhang neural networks with application to time-varying nonlinear optimization

As a special kind of recurrent neural network, the Zhang neural network (ZNN) has been successfully applied to solving various time-variant problems. In this paper, we first propose a special two-step Zhang et al. discretization (ZeaD) formula and a general two-step ZeaD formula, whose truncation errors are ${O}(\tau^3)$ and ${O}(\tau^2)$, respectively, where $\tau>0$ denotes the sampling gap. We also propose a general five-step ZeaD formula with truncation error ${O}(\tau^5)$, and prove that the special and general two-step ZeaD formulas are convergent, whereas the general five-step ZeaD formula is not zero-stable and thus not convergent. Then, to solve the time-varying nonlinear optimization (TVNO) problem in real time, based on the Taylor series expansion and the two convergent two-step ZeaD formulas above, we discretize the continuous-time ZNN (CTZNN) model of TVNO proposed in the literature, and thus obtain a special two-step discrete-time ZNN (DTZNN) model and a general two-step DTZNN model, the latter of which contains a free parameter $a_1\in(-1/2,+\infty)$. Theoretical analyses indicate that the sequence generated by the first DTZNN model is not convergent, whereas for any $a_1\in(-1/2,+\infty)$ and any step-size $h\in(0,(2+4a_1)/(1+a_1))$, the error sequence generated by the second DTZNN model converges to zero in an $\mathcal{O}(\tau^2)$ manner, where $\mathcal{O}(\tau^2)$ denotes a vector with every entry being $O(\tau^2)$. Furthermore, we prove that for any fixed $a_1\in(-1/2,+\infty)$, the constant $(2+4a_1)/(1+a_1)$ is the tight upper bound of the step-size $h$ and the constant $(1+2a_1)/(1+a_1)$ is the optimal step-size. Finally, numerical results and comparisons are provided and analyzed to substantiate the efficacy of the proposed DTZNN models.
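For concreteness, a minimal sketch of the general two-step ZeaD formula, reconstructed so as to be consistent with the stated truncation order and parameter range rather than quoted from the paper, is
$$\dot{x}_k \approx \frac{(1+a_1)\,x_{k+1}-(1+2a_1)\,x_k+a_1\,x_{k-1}}{\tau},\qquad a_1\in\left(-\tfrac12,+\infty\right),$$
whose first characteristic polynomial factors as $(\zeta-1)\bigl(\zeta-\tfrac{a_1}{1+a_1}\bigr)$; the spurious root $a_1/(1+a_1)$ lies strictly inside the unit circle exactly when $a_1>-1/2$, matching the parameter range stated above. Under the same reading, the special two-step formula with truncation error $O(\tau^3)$ would be the central-difference rule $\dot{x}_k \approx (x_{k+1}-x_{k-1})/(2\tau)$, whose spurious root $-1$ sits on the unit circle, which is consistent with the non-convergence of the first DTZNN model.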

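As a rough illustration of how such a model could be implemented, the following Python sketch applies a two-step DTZNN update of the reconstructed form above to a scalar toy TVNO instance, $f(x,t)=\tfrac12(x-\sin t)^2$; the toy problem, the exact update rule, and all variable names are illustrative assumptions, not the paper's own code.

```python
import numpy as np

# Toy TVNO instance: f(x, t) = 0.5 * (x - sin t)^2, with time-varying
# minimizer x*(t) = sin t.  Gradient g = x - sin t, Hessian H = 1,
# partial derivative of the gradient w.r.t. time g_t = -cos t.
tau = 0.01                     # sampling gap
a1 = 1.0                       # free parameter, assumed in (-1/2, +inf)
h = (1 + 2 * a1) / (1 + a1)    # the optimal step-size claimed in the abstract

x_prev, x_curr = 0.0, 0.0      # x_{k-1}, x_k: a two-step method needs two starting values
steps = 1000
for k in range(1, steps):
    t_k = k * tau
    g = x_curr - np.sin(t_k)   # gradient at (x_k, t_k)
    g_t = -np.cos(t_k)         # time derivative of the gradient
    H = 1.0                    # Hessian (a scalar for this toy problem)
    # Reconstructed general two-step DTZNN update (an assumption, not the
    # paper's verbatim model): the ZeaD formula above solved for x_{k+1},
    # with the step-size h absorbing the sampling gap and the ZNN gain.
    x_next = ((1 + 2 * a1) * x_curr - a1 * x_prev) / (1 + a1) \
             - h * g / H - tau * g_t / ((1 + a1) * H)
    x_prev, x_curr = x_curr, x_next

print("tracking error at t = %.2f: %.2e" % (steps * tau, abs(x_curr - np.sin(steps * tau))))
```

In this reconstruction, taking $a_1=1$ at the step-size $h=(1+2a_1)/(1+a_1)$ makes both characteristic roots have modulus $\sqrt{a_1/(1+a_1)}\approx 0.71$, so the transient decays quickly and the residual tracking error scales like $O(\tau^2)$, in line with the abstract's claim.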