Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62376105, 12101208, and 61906072) and the Fundamental Research Funds for the Central Universities (Grant No. 2662022XXQD001).
Abstract: This paper proposes a novel multivalued recurrent neural network model driven by external inputs, along with two innovative learning algorithms. By incorporating a multivalued activation function, the proposed model can achieve multivalued many-to-one associative memory, and the newly developed algorithms enable effective storage of many-to-one patterns in the coefficient matrix while preserving the indispensability of inputs in many-to-one associative memory. The proposed learning algorithms address a critical limitation of existing models, which fail to guarantee completely erroneous outputs when part of the input is missing in many-to-one associative memory tasks. The methodology is rigorously derived through theoretical analysis, including comprehensive verification of both the existence and the global exponential stability of equilibrium points. Demonstrative examples are provided to show the effectiveness of the proposed theory.
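The abstract does not define the multivalued activation concretely. As a minimal sketch, assuming a staircase-type quantizer (a common choice for multivalued networks) and a hypothetical single-neuron recurrence `x <- f(w*x + u)` driven by an external input `u`, the idea that the input, rather than the initial state, selects the retrieved level can be illustrated as follows; the function names and parameter values are illustrative, not the paper's:

```python
def multivalued_activation(x, levels=(-1.0, -1/3, 1/3, 1.0)):
    """Quantize a real input to the nearest stored level (piecewise-constant,
    staircase-type multivalued activation; levels are illustrative)."""
    return min(levels, key=lambda s: abs(s - x))

def recurrent_step(x, w, u):
    """One update of a hypothetical single-neuron recurrent model driven by an
    external input u: x <- f(w*x + u). With |w| small, the external input
    (not the initial condition) determines the equilibrium level reached."""
    return multivalued_activation(w * x + u)
```

Iterating `recurrent_step` from different initial states with the same input converges to the same quantized level, while changing the input changes the retrieved level, which mirrors the input-driven (rather than initial-condition-driven) retrieval the abstract describes.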
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 61503338, 61573316, 61374152, and 11302195) and the Natural Science Foundation of Zhejiang Province, China (Grant No. LQ15F030005).
Abstract: In this paper, a novel design procedure is proposed for synthesizing high-capacity auto-associative memories based on complex-valued neural networks with real-imaginary-type activation functions and constant delays. Stability criteria dependent on the external inputs of the neural networks are derived. The designed networks retrieve the stored patterns through external inputs rather than initial conditions. The design can memorize the desired patterns with lower-dimensional networks than real-valued neural networks require, and eliminates the spurious equilibria of complex-valued neural networks. One numerical example is provided to show the effectiveness and superiority of the presented results.