Abstract: Deep learning technology has been widely applied in the finance industry, particularly in the study of stock price prediction. This paper focuses on the prediction accuracy and performance of long-term features and proposes a Wide & Deep Asymmetrical Bidirectional Legendre Memory Unit model that captures long-term dependencies in time series through the immediate backpropagation of bidirectional recurrent modules and Legendre polynomial memory units. The proposed model achieves superior stock trend prediction by combining the memorization and generalization capabilities of the Wide & Deep model. Experimental results on the daily trading dataset of the constituents of the CSI 300 index demonstrate that the proposed model outperforms several baseline models in medium- and long-term trend prediction.
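The core of a Legendre Memory Unit is a small linear state-space system whose state holds Legendre-polynomial coefficients summarising a sliding window of the input. The sketch below is a minimal, illustrative rendition of that memory update (following the standard LMU construction, not this paper's full Wide & Deep model); the function names, the Euler discretisation, and the parameter choices are assumptions for illustration.

```python
import numpy as np

def lmu_matrices(order):
    # Continuous-time (A, B) of the Legendre delay system: state m(t)
    # approximates the input over the last `theta` seconds in a Legendre basis.
    A = np.zeros((order, order))
    B = np.zeros((order, 1))
    for i in range(order):
        B[i, 0] = (2 * i + 1) * (-1) ** i
        for j in range(order):
            if i < j:
                A[i, j] = -(2 * i + 1)
            else:
                A[i, j] = (2 * i + 1) * (-1) ** (i - j + 1)
    return A, B

def lmu_memory(u, order=8, theta=1.0, dt=0.01):
    # Roll a 1-D signal through the memory; each m_t is the vector of
    # Legendre coefficients for the recent history of the input.
    A, B = lmu_matrices(order)
    m = np.zeros((order, 1))
    states = []
    for x in u:
        m = m + (dt / theta) * (A @ m + B * x)  # simple Euler step
        states.append(m.copy())
    return np.stack(states)
```

In a full LMU layer, these memory states would be fed (together with a hidden state) through learned nonlinear projections; only the fixed linear memory is sketched here.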
Abstract: A novel approach to the survivor memory unit of the Decision Feedback Sequence Estimator (DFSE) for the 1000BASE-T transceiver, based on a hybrid architecture combining the classical register-exchange and trace-back methods, is proposed. The proposed architecture is investigated with special emphasis on low power and small decoder latency: a dedicated register-exchange module is designed to provide tentative survivor symbols with zero latency, and a high-speed trace-back logic is presented to meet the tight latency budget specified for the 1000BASE-T transceiver. Furthermore, clock-gated register banks are constructed for power saving. VLSI implementation reveals that the proposed architecture provides about 40% savings in power consumption compared with the traditional register-exchange architecture.
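The two survivor-memory techniques the hybrid combines can be illustrated in software. In register-exchange, every state's full survivor sequence is copied and extended each step (zero read latency, heavy data movement); in trace-back, only per-step predecessor pointers are stored and the path is recovered by walking backwards (less switching activity, but read latency). This is a behavioural sketch with hypothetical function names, not the paper's VLSI design:

```python
def register_exchange_step(survivors, decisions_t):
    # decisions_t[s] = predecessor state of s at this step.
    # Every state's whole survivor list is copied and extended: zero latency
    # to read a survivor, but lots of register activity per step.
    return [survivors[pred] + [s] for s, pred in enumerate(decisions_t)]

def trace_back(decisions, start_state):
    # Only predecessor pointers are stored; the survivor path is recovered
    # by walking backwards from `start_state` through all steps.
    path = [start_state]
    s = start_state
    for t in range(len(decisions) - 1, -1, -1):
        s = decisions[t][s]
        path.append(s)
    path.reverse()
    return path
```

Both methods recover the same survivor path; the hardware trade-off is latency versus power, which motivates the paper's hybrid.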
Abstract: Three recent breakthroughs due to AI in the arts and sciences serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in the traditional integration methods. Here, methods (1) and (2) relied on the Long Short-Term Memory (LSTM) architecture, with method (3) relying on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which could be combined with an attention mechanism to address discontinuous solutions. Both LSTM and attention architectures, together with modern and generalized classic optimizers that include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced work such as shallow networks with infinite width. The review addresses not only experts: readers are assumed to be familiar with computational mechanics, but not with DL, whose concepts and applications are built up from the basics, aiming to bring first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
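The PINN idea the abstract mentions is to minimise the residual of the governing equation at collocation points, plus boundary/initial conditions. The toy sketch below applies that residual-minimisation recipe to u' = -u, u(0) = 1, but swaps the neural network for a polynomial trial basis so plain least squares replaces the gradient-descent training loop; all names and parameter choices here are illustrative assumptions, not the review's implementation.

```python
import numpy as np

def fit_ode_collocation(n_basis=6, n_points=50):
    # PINN-style loss: residual u'(t) + u(t) = 0 at collocation points,
    # plus one row enforcing the initial condition u(0) = 1.
    t = np.linspace(0.0, 1.0, n_points)
    # Trial function u(t) = sum_k c_k t^k, so u'(t) = sum_k k c_k t^(k-1).
    U = np.vander(t, n_basis, increasing=True)
    dU = np.zeros_like(U)
    for k in range(1, n_basis):
        dU[:, k] = k * t ** (k - 1)
    A = np.vstack([dU + U, U[:1]])            # residual rows + IC row
    b = np.concatenate([np.zeros(n_points), [1.0]])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)  # stands in for SGD training
    return t, U @ c
```

A genuine PINN would parameterise u(t) with a network and obtain u' by automatic differentiation; the structure of the loss is the same.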
Abstract: Memristor-based memory devices are a solution to the energy-efficiency bottleneck faced by von Neumann architectures through memory-compute fusion at the physical level [1]. Compute-in-memory (CIM) technology achieves high-efficiency computation through the deep integration of memory and computing units via memristor crossbar arrays [2-4]. Among these approaches, analogue compute-in-memory (ACIM) technology capitalizes on the non-volatile and tunable resistive properties of memristive devices.
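The operation an analogue memristor crossbar performs is a matrix-vector multiply in the physics itself: conductances store the matrix, input voltages drive the rows, and by Ohm's and Kirchhoff's laws each column current sums the products. A numerical sketch of that behaviour, with a hypothetical function name and an optional device-variation term as an assumption:

```python
import numpy as np

def crossbar_mvm(G, v, g_noise=0.0, rng=None):
    # G[i, j]: conductance of the device at row i, column j (stores the matrix).
    # v[i]: voltage applied to row i (the input vector).
    # Column current i_j = sum_i v_i * G[i, j], i.e. one dot product per column,
    # computed in a single analogue step rather than by sequential MACs.
    rng = rng or np.random.default_rng(0)
    G_eff = G * (1.0 + g_noise * rng.standard_normal(G.shape))  # device spread
    return v @ G_eff
```

With `g_noise=0` this is an exact matrix-vector product; a nonzero value mimics the conductance variation that analogue CIM designs must tolerate or calibrate out.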