Spiking Neural Networks (SNNs), inspired by the biological triggering mechanism of neurons, provide a novel solution for plant disease detection, offering enhanced performance and efficiency in contrast to Artificial Neural Networks (ANNs). Unlike conventional ANNs, which process static images without fully capturing the inherent temporal dynamics, our approach represents the first implementation of SNNs tailored explicitly for agricultural disease classification, integrating an encoding method that converts static RGB plant images into temporally encoded spike trains. Additionally, while Bernoulli trials and standard deep learning architectures such as Convolutional Neural Networks (CNNs) and Fully Connected Neural Networks (FCNNs) have been used extensively, our work is the first to integrate these trials within an SNN framework specifically for agricultural applications. This integration not only refines spike regulation and reduces computational overhead by 30% but also delivers superior accuracy (93.4%) in plant disease classification, marking a significant advancement in precision agriculture. Our approach transforms static plant-leaf images into time-dependent representations, leveraging the intrinsic temporal processing capabilities of SNNs; this makes them better suited to detecting disease activations in plants than conventional ANNs, which treat inputs as static entities. Unlike prior work, our hybrid encoding scheme dynamically adapts to pixel-intensity variations via a threshold, enabling robust feature extraction under diverse agricultural conditions. The dual-stage preprocessing customizes the SNN's behavior in two ways: the encoding threshold is derived from pixel distributions in diseased regions, and Bernoulli trials selectively reduce redundant spikes to ensure energy efficiency on low-power devices. We used a comprehensive dataset of 87,000 RGB images of plant leaves spanning 38 distinct classes of healthy and unhealthy leaves; the data were rigorously preprocessed, including stochastic rotation, horizontal flipping, resizing, and normalization, before training and evaluating three distinct neural network architectures: DeepSNN, SimpleCNN, and SimpleFCNN. By integrating Bernoulli trials to regulate spike generation, our method focuses on extracting the most relevant features while reducing computational overhead. The results demonstrate that DeepSNN outperforms the other models, achieving superior accuracy, efficient feature extraction, and robust spike management, thereby establishing the potential of SNNs for real-time, energy-efficient agricultural applications.
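The encoding idea can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact pipeline: the threshold, number of time steps, and keep probability below are placeholder values (the paper derives its threshold from pixel distributions in diseased regions).

```python
import numpy as np

def encode_spike_train(image, threshold=0.5, steps=10, keep_prob=0.7, rng=None):
    """Convert a normalized image (values in [0, 1]) into a binary spike train.

    Rate coding makes a pixel fire with probability proportional to its
    intensity at each time step; a threshold gate suppresses low-intensity
    pixels; Bernoulli trials then drop a fraction of the remaining spikes
    to reduce redundant activity.
    """
    rng = np.random.default_rng(rng)
    # Rate coding: firing probability proportional to pixel intensity.
    spikes = (rng.random((steps,) + image.shape) < image).astype(np.uint8)
    # Threshold gate: pixels at or below the threshold never fire.
    spikes *= (image > threshold).astype(np.uint8)
    # Bernoulli trials: keep each remaining spike with probability keep_prob.
    keep = (rng.random(spikes.shape) < keep_prob).astype(np.uint8)
    return spikes * keep

img = np.array([[0.9, 0.1], [0.6, 0.4]])
train = encode_spike_train(img, threshold=0.5, steps=20, keep_prob=0.8, rng=0)
print(train.shape)  # (20, 2, 2): 20 time steps over a 2x2 image
```

Pixels below the threshold (here 0.1 and 0.4) produce no spikes at all, which is one way redundant activity is avoided before the Bernoulli stage.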
The hot deformation behavior and microstructure evolution of industrial-grade American Iron and Steel Institute (AISI) M35 high-speed steel produced by electroslag remelting were investigated at different parameters. The results indicated that grain coarsening and M2C carbide decomposition occurred in the steel at 1150 °C for 5 min, and the network carbides were broken and deformed radially after hot deformation. A constitutive equation was determined from flow stress-strain curves corrected for the effects of friction and temperature, and a strain-compensated constitutive model was established. The dynamic recrystallization (DRX) characteristic values were calculated based on the Cingara-McQueen model, and the grain distribution under different conditions was observed and analyzed. Notably, the mechanisms by which carbides act on the DRX were illuminated. A functional relation between average grain size and the Z parameter showed that grain size increased with increasing temperature and decreasing strain rate. Optimal parameters for hot deformation were determined as 980-1005 °C with 0.01-0.015 s^(-1) and 1095-1110 °C with 0.01-0.037 s^(-1) at strains ranging from 0.05 to 0.8. Appropriately increasing the strain rate during the deformation process was suggested to obtain fine, uniformly distributed carbides. Besides, an industrial-grade forging deformation also verified the practicability of the above parameters.
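The grain-size trend above is usually expressed through the Zener-Hollomon parameter Z = strain rate x exp(Q/(RT)). A minimal sketch, where the activation energy Q is an assumed illustrative value (the fitted value for this M35 steel is not reproduced here):

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def zener_hollomon(strain_rate, T_celsius, Q=450e3):
    """Zener-Hollomon parameter Z = strain_rate * exp(Q / (R*T)).

    Q is an assumed deformation activation energy for illustration only.
    """
    T = T_celsius + 273.15
    return strain_rate * np.exp(Q / (R * T))

# Grain size grows with temperature and falls with strain rate, i.e. it
# decreases as Z increases: d = a * Z**(-m), with fitted constants a, m.
z_hot_slow = zener_hollomon(0.01, 1110)   # high temperature, low strain rate
z_cold_fast = zener_hollomon(0.037, 980)  # lower temperature, higher rate
print(z_hot_slow < z_cold_fast)  # → True: coarser grains at high T, low rate
```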
The finite element method is used to simulate the high-speed melt spinning process, based on the equation system proposed by Doufas et al. The calculation predicts a neck-like deformation, as well as the related profiles of velocity, diameter, temperature, chain orientation, and crystallinity in the fiber spinning process. Considering the combined effects of flow-induced crystallization, viscoelasticity, filament cooling, air drag, inertia, surface tension, and gravity, the simulated material flow behaviors are consistent with those observed for semi-crystalline polymers under various spinning conditions. The structural change of polymer coils in the necking region, described by the evolution of the conformation tensor, is also investigated. Based on the relaxation mechanism of macromolecules in the flow field, different types of morphology change of polymer chains before and in the neck are proposed, giving a complete picture of the structure evolution and crystallization of semi-crystalline polymers in the high-speed fiber spinning process.
Purpose - The purpose of this paper is to eliminate the fluctuations in train arrival and departure times caused by skewed distributions in interval operation times. These fluctuations arise from random origin and process factors during interval operations and can accumulate over multiple intervals. The aim is to enhance the robustness of high-speed rail station arrival and departure track utilization schemes.
Design/methodology/approach - To achieve this objective, the paper simulates actual train operations, incorporating the fluctuations in interval operation times into the utilization of arrival and departure tracks at the station. The Monte Carlo simulation method is adopted to solve this problem. This approach transforms a nonlinear model, which includes constraints from probability distribution functions and is difficult to solve directly, into a linear programming model that is easier to handle. The method then linearly weights two objectives to optimize the solution.
Findings - Through the application of Monte Carlo simulation, the study successfully converts the complex nonlinear model with probability-distribution-function constraints into a manageable linear programming model. By continuously adjusting the weighting coefficients of the linear objectives, the method is able to optimize the Pareto solution. Notably, this approach does not require extensive scene data to obtain a satisfactory Pareto solution set.
Originality/value - The paper contributes to the field by introducing a novel method for optimizing high-speed rail station arrival and departure track utilization in the presence of fluctuations in interval operation times. The use of Monte Carlo simulation to transform the problem into a tractable linear programming model represents a significant advancement. Furthermore, the method's ability to produce satisfactory Pareto solutions without relying on extensive data sets adds to its practical value and applicability in real-world scenarios.
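The Monte Carlo sampling and linear weighting of two objectives can be sketched as follows. The lognormal noise model, the 2-minute buffer, and the two objective definitions are illustrative assumptions, not the paper's model:

```python
import numpy as np

def simulate_arrivals(n_trials=10_000, n_intervals=5, seed=0):
    """Monte Carlo sampling of the deviation accumulated over consecutive
    intervals; lognormal draws stand in for the skewed interval-time noise."""
    rng = np.random.default_rng(seed)
    # Right-skewed per-interval deviation (minutes), centered near zero.
    dev = rng.lognormal(mean=0.0, sigma=0.5, size=(n_trials, n_intervals)) - 1.0
    return dev.sum(axis=1)  # accumulated deviation at the station

def weighted_objective(arrival_dev, w, buffer_min=2.0):
    """Linear weighting of two criteria estimated from the samples:
    mean accumulated deviation and probability of exceeding the buffer."""
    f1 = arrival_dev.mean()
    f2 = (arrival_dev > buffer_min).mean()
    return w * f1 + (1.0 - w) * f2

dev = simulate_arrivals()
# Sweeping the weight w traces out candidate weighted-sum solutions,
# mirroring the adjustment of weighting coefficients in the paper.
for w in (0.0, 0.5, 1.0):
    print(round(weighted_objective(dev, w), 3))
```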
The paper first discusses the shortcomings of the classical adjacent-frame difference. Secondly, based on image energy and high-order statistics (HOS) theory, background reconstruction constraints are set up. With the help of block-processing technology, the background is reconstructed quickly. Finally, background difference is used to detect motion regions instead of adjacent-frame difference. Tests on a DSP-based platform indicate that the background can be recovered losslessly in about one second, and that the detected moving regions are not influenced by moving-target speed. The algorithm has important uses both in theory and in applications.
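The final detection step, background difference, can be sketched as follows. The fixed threshold and the ready-made background array are stand-ins for the HOS-based reconstruction described above:

```python
import numpy as np

def detect_motion(frame, background, thresh=25):
    """Background difference: pixels far from the reconstructed background
    are flagged as motion. Unlike adjacent-frame difference, the result
    does not depend on how fast the target moves between frames."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

background = np.full((4, 4), 100, dtype=np.uint8)  # reconstructed background
frame = background.copy()
frame[1:3, 1:3] = 200          # a moving object brightens a 2x2 region
mask = detect_motion(frame, background)
print(mask.sum())  # → 4: exactly the four object pixels are flagged
```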
In order to study how welding parameters affect welding quality and droplet transfer, a synchronous acquisition and analysis system, based on a self-developed soft-switching inverter, is established to acquire and analyze the electrical signals and instantaneous images of droplet transfer simultaneously. On the one hand, welding current and voltage signals are acquired and analyzed by a self-developed dynamic wavelet analyzer. On the other hand, images captured by a high-speed camera are filtered and optimized. The results show that the instantaneous waveforms and statistical data of the electrical signals help to make an overall assessment of welding quality, and that the optimized high-speed images allow a visual and clear observation of the droplet transfer process. The combined analysis of waveforms and images supports further research on the droplet transfer mechanism and provides a basis for precise control of droplet transfer.
The delay-causing text data contain valuable information, such as the specific reason for the delay and the location and time of the disturbance, which can efficiently support the prediction of train delays and improve the efficiency of train operation control. Based on the train operation data and delay-causing data of the Wuhan-Guangzhou high-speed railway, relevant algorithms from the natural language processing field are used to process the delay-causing text data. The model integrates train operating-environment information and delay-causing text information to develop a cause-based train delay propagation prediction model. The Word2vec model is first used to vectorize the delay-causing text description after word segmentation. The mean model or the term frequency-inverse document frequency (TF-IDF)-weighted model is then used to generate the delay-causing sentence vector from the original word vectors. Afterward, the train operating-environment features and the delay-causing sentence vector are input into the extreme gradient boosting (XGBoost) regression algorithm to develop a delay propagation prediction model. In this work, 4 text feature processing methods and 8 regression algorithms are considered. The results demonstrate that the XGBoost regression algorithm has the highest prediction accuracy using the test features processed by the continuous bag-of-words and mean models. Compared with a prediction model that considers only the train operating-environment features, the prediction accuracy is significantly improved across multiple regression algorithms after integrating the delay-causing feature.
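The two sentence-vector schemes (mean model and TF-IDF-weighted model) can be sketched with toy vectors. Real vectors would come from the trained Word2vec model; the tokens, vector values, and IDF dictionary here are hypothetical:

```python
import numpy as np

# Toy 2-D word vectors standing in for Word2vec output on segmented
# delay-cause descriptions (illustrative tokens and values).
word_vec = {
    "signal":  np.array([0.2, 0.8]),
    "failure": np.array([0.9, 0.1]),
    "wind":    np.array([0.5, 0.5]),
}

def mean_sentence_vector(tokens):
    """Mean model: average the word vectors of in-vocabulary tokens."""
    vecs = [word_vec[t] for t in tokens if t in word_vec]
    return np.mean(vecs, axis=0)

def tfidf_sentence_vector(tokens, idf):
    """TF-IDF-weighted model: weight each word vector by tf * idf before
    averaging, so rare, informative cause words dominate the sentence vector."""
    counts = {t: tokens.count(t) for t in set(tokens) if t in word_vec}
    total = sum(counts.values())
    weights = {t: (c / total) * idf.get(t, 1.0) for t, c in counts.items()}
    num = sum(w * word_vec[t] for t, w in weights.items())
    return num / sum(weights.values())

sent = ["signal", "failure", "signal"]
print(mean_sentence_vector(sent))  # average of the three token vectors
```

The resulting sentence vector would then be concatenated with the operating-environment features before being fed to the regressor.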
In this paper, Spike-and-Slab Dirichlet Process (SS-DP) priors are introduced and discussed for non-parametric Bayesian modeling and inference, especially in the mixture-models context. Specifying a spike-and-slab base measure for DP priors combines the merits of Dirichlet process and spike-and-slab priors and serves as a flexible approach to Bayesian model selection and averaging. Computationally, Bayesian Expectation-Maximization (BEM) is utilized to obtain MAP estimates. Two simulated examples, in mixture modeling and time series analysis contexts, demonstrate the models and the computational methodology.
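The base-measure idea can be illustrated by drawing from a simple spike-and-slab prior; the mixing weight and slab scale below are arbitrary illustrative values:

```python
import numpy as np

def sample_spike_and_slab(n, pi=0.3, slab_sd=2.0, seed=0):
    """Draw n values from a spike-and-slab prior: with probability pi
    the 'spike' (a point mass at zero, giving exact sparsity), otherwise
    the diffuse Gaussian 'slab'."""
    rng = np.random.default_rng(seed)
    spike = rng.random(n) < pi
    slab = rng.normal(0.0, slab_sd, size=n)
    return np.where(spike, 0.0, slab)

draws = sample_spike_and_slab(10_000, pi=0.3)
# Roughly 30% of the draws sit exactly at zero, the spike component.
print(round(float((draws == 0.0).mean()), 2))
```

Using such a measure as the base distribution of a DP lets mixture components shrink exactly to zero, which is what enables model selection and averaging in the SS-DP setup.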
The fault frequency of a catenary is related to meteorological conditions. In this work, based on historical data, the catenary fault frequency and the weather-related fault rate are introduced to analyse the correlation between catenary faults and meteorological conditions, and further the effect of meteorological conditions on catenary operation. Moreover, machine learning is used for catenary fault prediction. Since each weak classifier, a single decision tree, can correctly classify only a small number of training samples, the AdaBoost algorithm is adopted to adjust the weights of misclassified samples and weak classifiers and to train multiple weak classifiers. Finally, the weak classifiers are combined to construct a strong classifier, from which the final prediction result is obtained. To validate the prediction method, an example is provided based on historical data from a railway bureau of China. The results show that the mapping relation between meteorological conditions and catenary faults can be established accurately by the AdaBoost algorithm, which can then accurately predict a catenary fault when the meteorological conditions are provided.
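The AdaBoost scheme described above can be sketched with one-dimensional threshold stumps standing in for the decision trees (the toy data and number of rounds are illustrative):

```python
import numpy as np

def adaboost_train(x, y, rounds=10):
    """AdaBoost with 1-D threshold stumps: misclassified samples get larger
    weights each round, and each weak classifier gets a vote alpha based on
    its weighted error. Labels y must be in {-1, +1}."""
    n = len(x)
    w = np.full(n, 1.0 / n)          # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for thr in x:                 # candidate thresholds at the samples
            for sign in (1, -1):
                pred = sign * np.where(x > thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, sign, pred)
        err, thr, sign, pred = best
        err = max(err, 1e-10)                      # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)      # classifier weight
        w *= np.exp(-alpha * y * pred)             # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, thr, sign))
    return ensemble

def adaboost_predict(ensemble, x):
    score = sum(a * s * np.where(x > t, 1, -1) for a, t, s in ensemble)
    return np.sign(score)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # toy weather feature
y = np.array([-1, -1, -1, 1, 1, 1])            # toy fault / no-fault labels
model = adaboost_train(x, y, rounds=5)
print(adaboost_predict(model, x))  # → [-1. -1. -1.  1.  1.  1.]
```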
Neuromorphic photonic systems offer significant advantages for parallel, high-speed, and low-power computing, among which spiking neural networks emerge as a powerful bio-inspired alternative. This study demonstrates, to our knowledge, a novel approach to all-optical spiking processing and reservoir computing using passive silicon microring resonators (MRRs). A key innovation is the demonstration of deterministic optical spiking and spectro-temporal coincidence detection without the need for pump-and-probe methods, simplifying the architecture and improving efficiency. By injecting excitatory optical signals at negative wavelength detuning relative to the MRR's cold resonances, the system delivers prompt, high-contrast optical spiking events, essential for effective chip-integrated photonic spiking neural networks. Building on this, a photonic spiking reservoir computer is implemented using a single silicon MRR. The system encodes input information through a novel spectro-temporal scheme and classifies the Iris flower dataset with 92% accuracy. This performance is achieved with just 48 reservoir virtual nodes, averaging only three spikes per flower sample, highlighting the system's efficiency and sparsity. These findings unlock novel neuromorphic photonic frameworks with MRRs, enabling sparse all-optical spiking processing and reservoir computing, and are particularly promising for future coupled-MRR structures and binary output weights in light-enabled edge computing and sensing applications.
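The virtual-node reservoir idea can be sketched in software. The tanh mask below is a crude stand-in for the microring's nonlinear response, and a toy regression task replaces the Iris classification; only the linear readout is trained, as in reservoir computing generally:

```python
import numpy as np

def reservoir_states(u, n_virtual=48, seed=0):
    """Time-multiplexed reservoir: each scalar input is spread over
    n_virtual node responses by a fixed random mask; tanh is a crude
    software stand-in for the resonator's nonlinear response."""
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1, 1, n_virtual)   # per-node input weight
    bias = rng.uniform(-1, 1, n_virtual)   # per-node offset
    return np.tanh(np.outer(u, mask) + bias)  # shape (samples, n_virtual)

def train_readout(states, targets):
    """Only the linear readout weights are trained (least squares)."""
    w, *_ = np.linalg.lstsq(states, targets, rcond=None)
    return w

u = np.linspace(-1, 1, 40)          # toy inputs replacing Iris features
states = reservoir_states(u)        # 48 virtual nodes per sample
w = train_readout(states, u**2)     # toy target instead of class labels
pred = states @ w
print(states.shape)                 # (40, 48)
```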
A mathematical model of the particle heating process in the reaction shaft of a flash smelting furnace was established, and the calculation was performed. The results indicate that radiation plays a significant role in the heat transfer process within the first 0.6 m of the upper part of the reaction shaft, whilst convection is dominant for particle heating in the area below 0.6 m. To accelerate particle ignition, it is necessary to enhance convection and thus speed up particle heating. A high-speed preheated oxygen jet technology was therefore suggested to replace natural gas combustion in the flash furnace, aiming to create a lateral disturbance in the gaseous phase around the particles, so as to achieve a slip velocity between the two phases and a high convective heat transfer coefficient. Numerical simulation was carried out for the cases with the high-speed oxygen jet and with the normal natural gas burners. The results show that with the high-speed jet technology, particles are heated more rapidly and ignited much earlier, especially within the radial range R = 0.3-0.6 m. As a result, a more efficient smelting process can be achieved under the same operational conditions.
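The competition between radiative and convective heating can be sketched with a lumped heat balance for a single particle. All property values below (particle size, density, heat-transfer coefficient, temperatures) are illustrative assumptions, not the paper's model inputs:

```python
import numpy as np

# Lumped heat balance for a spherical concentrate particle:
#   m * cp * dT/dt = h*A*(T_gas - T) + eps*sigma*A*(T_wall^4 - T^4)
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def heat_particle(T0=300.0, T_gas=1500.0, T_wall=1600.0,
                  d=50e-6, rho=4000.0, cp=600.0, h=500.0, eps=0.85,
                  dt=1e-4, t_end=0.05):
    """Explicit Euler integration of the particle temperature (K)."""
    A = np.pi * d**2               # surface area of the sphere
    m = rho * np.pi * d**3 / 6.0   # particle mass
    T = T0
    for _ in range(int(t_end / dt)):
        q_conv = h * A * (T_gas - T)                    # convection
        q_rad = eps * sigma * A * (T_wall**4 - T**4)    # radiation
        T += dt * (q_conv + q_rad) / (m * cp)
    return T

# A larger convective coefficient h (e.g. from the slip velocity created
# by the high-speed oxygen jet) heats the particle noticeably faster.
print(round(heat_particle(h=200.0)), round(heat_particle(h=800.0)))
```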
To accelerate supervised learning by the SpikeProp algorithm with the temporal coding paradigm in spiking neural networks (SNNs), three learning-rate adaptation methods used to speed up training in artificial neural networks (the heuristic rule, the delta-delta rule, and the delta-bar-delta rule) are used to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive-or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning-rate adaptation methods speed up the convergence of SNNs compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is used in combination with a momentum term, the two modifications balance each other in a beneficial way to accomplish rapid and steady convergence. Among the three learning-rate adaptation methods, the delta-bar-delta rule performs best: the delta-bar-delta method with momentum has the fastest convergence rate, the greatest training stability, and the highest network learning accuracy. The proposed algorithms are simple and efficient, and consequently valuable for practical applications of SNNs.
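One step of the delta-bar-delta rule can be sketched as follows; the hyperparameters kappa, phi, and theta are illustrative values, not those used in the paper:

```python
import numpy as np

def delta_bar_delta_step(eta, delta_bar, grad, kappa=0.01, phi=0.5, theta=0.7):
    """Delta-bar-delta rule: grow a per-weight learning rate additively when
    the current gradient agrees in sign with an exponential average of past
    gradients, and shrink it multiplicatively when the signs disagree."""
    agree = delta_bar * grad > 0
    disagree = delta_bar * grad < 0
    eta = np.where(agree, eta + kappa, eta)          # additive increase
    eta = np.where(disagree, eta * (1 - phi), eta)   # multiplicative decrease
    delta_bar = (1 - theta) * grad + theta * delta_bar  # update the average
    return eta, delta_bar

eta = np.array([0.1, 0.1])
delta_bar = np.array([0.5, -0.5])
grad = np.array([0.2, 0.3])  # first weight agrees in sign, second flips
eta, delta_bar = delta_bar_delta_step(eta, delta_bar, grad)
print(eta)  # → [0.11 0.05]
```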
Stereo-electroencephalography (SEEG) is the main investigation method for the pre-surgical evaluation of patients suffering from drug-resistant partial epilepsy. SEEG signals reflect two types of paroxysmal activity: ictal activity, and interictal activity or interictal spikes (IS). The relationship between IS and ictal activity is an essential and recurrent question in epileptology. In this paper, we present a distributed and parallel architecture for analyzing the spatial and temporal distribution of IS, based on a distributed and collaborative methodology. The proposed approach exploits the SEEG data using vector analysis of the corresponding signals within a multi-agent system. The objective is to present a new method to analyze and classify IS during the wakefulness (W), light sleep (LS), and deep sleep (DS) stages. Temporal and spatial relationships between IS and the seizure onset zone are compared during wakefulness, light sleep, and deep sleep. Results show that the spatial and temporal distributions for real data are not random but correlated.
High-speed flows have consistently presented significant challenges to experimental research due to their complex and unsteady characteristics. This study investigates the use of the megahertz-frequency particle image velocimetry (MHz-PIV) technique to enhance time resolution under high-speed flow conditions. In our experiments, five high-speed cameras were triggered in rapid succession to capture images of the same measurement area, achieving ultra-high-time-resolution particle image data. Through advanced image processing techniques, we corrected optical distortions and identified the common area among the captured images. A sliding-average algorithm, along with spectral analysis of the compressible turbulent flow field based on the velocity data, facilitated a comprehensive analysis. The results confirm the capability of MHz-PIV for high-frequency sampling, significantly reducing the reliance on individual camera performance. This approach offers a refined measurement method with superior spatiotemporal resolution for high-speed flow experiments.
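The sliding-average and spectral-analysis steps can be sketched on a synthetic velocity trace; the 1 MHz effective sampling rate and the 50 kHz fluctuation are illustrative, not measured values:

```python
import numpy as np

def sliding_average(u, window=5):
    """Sliding (moving) average over a velocity time series, smoothing
    the sample-to-sample noise before spectral analysis."""
    kernel = np.ones(window) / window
    return np.convolve(u, kernel, mode="valid")

def velocity_spectrum(u, fs):
    """One-sided amplitude spectrum of the fluctuating velocity."""
    fluct = u - u.mean()                       # remove the mean flow
    amp = np.abs(np.fft.rfft(fluct)) / len(u)
    freqs = np.fft.rfftfreq(len(u), d=1.0 / fs)
    return freqs, amp

fs = 1e6                                       # 1 MHz effective sampling rate
t = np.arange(2048) / fs
u = 300.0 + 5.0 * np.sin(2 * np.pi * 50e3 * t)  # mean flow + 50 kHz fluctuation
freqs, amp = velocity_spectrum(sliding_average(u), fs)
peak = freqs[np.argmax(amp)]
print(peak)  # close to the imposed 50 kHz component
```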
Funding (SNN plant-disease study): supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2021R1A6A1A03039493).
Funding (M35 high-speed steel study): supported by the Open Project of the State Key Laboratory of Advanced Metallurgy, University of Science and Technology Beijing (No. 41622030), and Danyang Coinch New Material Technology Co., Ltd.
Funding (melt-spinning simulation study): financially supported by the National Natural Science Foundation of China (Nos. 20204007, 50390090, 20490220, 10590355), the Doctoral Foundation of the National Education Committee of China (No. 20030248008), and the 863 Project of China (No. 2002AA336120).
Funding (welding droplet-transfer study): supported by the National Natural Science Foundation of China (No. 50875088) and the Natural Science Foundation of Guangdong Province, China (No. 07006479).
Funding (train delay propagation study): supported by the National Natural Science Foundation of China (Nos. 71871188 and U1834209) and a research and development project of China National Railway Group Co., Ltd. (No. P2020X016).
文摘The delay-causing text data contain valuable information such as the specific reasons for the delay,location and time of the disturbance,which can provide an efficient support for the prediction of train delays and improve the guidance of train control efficiency.Based on the train operation data and delay-causing data of the Wuhan-Guangzhou high-speed railway,the relevant algorithms in the natural language processing field are used to process the delay-causing text data.It also integrates the train operatingenvironment information and delay-causing text information so as to develop a cause-based train delay propagation prediction model.The Word2vec model is first used to vectorize the delay-causing text description after word segmentation.The mean model or the term frequency-inverse document frequency-weighted model is then used to generate the delay-causing sentence vector based on the original word vector.Afterward,the train operating-environment features and delay-causing sentence vector are input into the extreme gradient boosting(XGBoost)regression algorithm to develop a delay propagation prediction model.In this work,4 text feature processing methods and 8 regression algorithms are considered.The results demonstrate that the XGBoost regression algorithm has the highest prediction accuracy using the test features processed by the continuous bag of words and the mean models.Compared with the prediction model that only considers the train-operating-environment features,the results show that the prediction accuracy of the model is significantly improved with multi-ple regression algorithms after integrating the delay-causing feature.
Abstract: In this paper, Spike-and-Slab Dirichlet Process (SS-DP) priors are introduced and discussed for non-parametric Bayesian modeling and inference, especially in the mixture-model context. Specifying a spike-and-slab base measure for DP priors combines the merits of the Dirichlet process and spike-and-slab priors and serves as a flexible approach to Bayesian model selection and averaging. Computationally, Bayesian Expectation-Maximization (BEM) is utilized to obtain MAP estimates. Two simulated examples, in the mixture modeling and time series analysis contexts, demonstrate the models and the computational methodology.
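The spike-and-slab idea behind such a base measure can be illustrated with a minimal sampling sketch: each draw is an exact zero (the spike) with probability pi, and otherwise comes from a diffuse Gaussian (the slab). The parameter values below are illustrative only:

```python
import random

random.seed(0)

def sample_spike_and_slab(pi=0.5, slab_sd=1.0):
    """Draw one coefficient: 'spike' at zero with prob. pi, else Gaussian slab."""
    if random.random() < pi:
        return 0.0                          # spike: exact zero (excluded)
    return random.gauss(0.0, slab_sd)       # slab: diffuse prior (included)

draws = [sample_spike_and_slab(pi=0.7) for _ in range(1000)]
zero_frac = sum(1 for d in draws if d == 0.0) / len(draws)
```

Roughly 70% of the draws land exactly on zero, which is what gives the prior its built-in variable-selection behavior.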
Funding: Supported by the Scientific and Technological Research and Development Program of China Railway Corporation under Grant N2018G023 and by the Science and Technology Projects of Sichuan Province under Grant 2018RZ0075.
Abstract: The fault frequency of a catenary is related to meteorological conditions. In this work, based on historical data, the catenary fault frequency and the weather-related fault rate are introduced to analyse the correlation between catenary faults and meteorological conditions, and further the effect of meteorological conditions on catenary operation. Moreover, machine learning is used for catenary fault prediction. Since a weak classifier such as a single decision tree can correctly classify only a small number of training samples, the AdaBoost algorithm is adopted to adjust the weights of misclassified samples and weak classifiers and to train multiple weak classifiers. Finally, the weak classifiers are combined to construct a strong classifier, with which the final prediction result is obtained. To validate the prediction method, an example is provided based on historical data from a railway bureau of China. The results show that the mapping relation between meteorological conditions and catenary faults can be established accurately by the AdaBoost algorithm, which can accurately predict a catenary fault when the meteorological conditions are provided.
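A minimal pure-Python sketch of the AdaBoost loop described above, using threshold stumps as weak classifiers on a hypothetical 1-D toy set (the data, candidate thresholds, and number of rounds are illustrative, not the paper's):

```python
import math

# Toy 1-D training set: feature value -> label (+1 fault, -1 no fault); hypothetical.
X = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
y = [-1, -1, -1, +1, +1, +1]

def stump_predict(x, thresh):
    return 1 if x > thresh else -1

def adaboost(X, y, thresholds, rounds=3):
    n = len(X)
    w = [1.0 / n] * n                       # uniform sample weights
    ensemble = []                           # (alpha, threshold) pairs
    for _ in range(rounds):
        # Pick the stump with minimum weighted error.
        best = min(thresholds, key=lambda t: sum(
            wi for wi, xi, yi in zip(w, X, y) if stump_predict(xi, t) != yi))
        err = sum(wi for wi, xi, yi in zip(w, X, y)
                  if stump_predict(xi, best) != yi)
        err = max(err, 1e-10)               # avoid log of zero error
        alpha = 0.5 * math.log((1 - err) / err)   # classifier weight
        ensemble.append((alpha, best))
        # Re-weight: boost misclassified samples, then normalize.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, best))
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def strong_predict(x, ensemble):
    """Strong classifier: sign of the alpha-weighted vote of the stumps."""
    score = sum(a * stump_predict(x, t) for a, t in ensemble)
    return 1 if score >= 0 else -1

model = adaboost(X, y, thresholds=[1.0, 2.0, 3.0])
preds = [strong_predict(x, model) for x in X]
```

The re-weighting line is the core of the method: misclassified samples gain weight, so later weak classifiers concentrate on them, and the final vote combines the stumps by their alpha values.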
Funding: UK Research and Innovation (EP/V025198/1); HORIZON EUROPE European Innovation Council (Spike Pro, 101129904); European Research Council (788793).
Abstract: Neuromorphic photonic systems offer significant advantages for parallel, high-speed, and low-power computing, among which spiking neural networks emerge as a powerful bio-inspired alternative. This study demonstrates, to our knowledge, a novel approach to all-optical spiking processing and reservoir computing using passive silicon microring resonators (MRRs). A key innovation is the demonstration of deterministic optical spiking and spectro-temporal coincidence detection without the need for pump-and-probe methods, simplifying the architecture and improving efficiency. By leveraging the injection of excitatory optical signals at negative wavelength detuning relative to the MRR's cold resonances, the system delivers prompt and high-contrast optical spiking events, essential for effective chip-integrated photonic spiking neural networks. Building on this, a photonic spiking reservoir computer is implemented using a single silicon MRR. The system encodes input information through a novel spectro-temporal scheme and classifies the Iris flower dataset with 92% accuracy. This performance is achieved with just 48 reservoir virtual nodes, averaging only three spikes per flower sample, highlighting the system's efficiency and sparsity. These findings unlock novel neuromorphic photonic frameworks with MRRs, enabling sparse all-optical spiking processing and reservoir computing, particularly promising for future coupled-MRR structures and binary output weights in light-enabled edge computing and sensing applications.
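The time-multiplexed "virtual node" idea behind such reservoirs can be illustrated with a software analogue: one input sample is spread across 48 virtual nodes by a random input mask, and each node's state feeds the next through a saturating nonlinearity standing in for the device response. This is only a conceptual sketch; the mask, feedback strength, and tanh nonlinearity are assumptions, not the photonic implementation:

```python
import math
import random

random.seed(1)

N = 48                                              # virtual nodes, as above
mask = [random.uniform(-1.0, 1.0) for _ in range(N)]  # random input mask

def reservoir_states(u, alpha=0.7):
    """Unroll one input sample u over N time-multiplexed virtual nodes."""
    states = []
    prev = 0.0
    for m in mask:
        # Each virtual node sees the masked input plus feedback from the
        # previous node, passed through a saturating nonlinearity.
        prev = math.tanh(m * u + alpha * prev)
        states.append(prev)
    return states

states = reservoir_states(0.5)
```

In a real reservoir computer, the 48 states per sample would then be fed to a trained linear readout; only that readout layer is learned.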
Funding: Funded by Jinguan Copper of Tongling Non-ferrous Metals Group Co., Ltd.
Abstract: A mathematical model of the particle heating process in the reaction shaft of a flash smelting furnace was established and solved. The results indicate that radiation plays a significant role in the heat transfer to the particles within the first 0.6 m of the upper part of the reaction shaft, whilst convection dominates particle heating in the region below 0.6 m. To accelerate particle ignition, it is necessary to enhance convection and thus speed up particle heating. A high-speed preheated oxygen jet technology was therefore suggested to replace natural gas combustion in the flash furnace, aiming to create a lateral disturbance in the gas phase around the particles so as to achieve a slip velocity between the two phases and a high convective heat transfer coefficient. Numerical simulation was carried out for cases with the high-speed oxygen jet and with the normal natural gas burners. The results show that with the high-speed jet technology, particles are heated up more rapidly and ignited much earlier, especially within the radial range of R = 0.3−0.6 m. As a result, a more efficient smelting process can be achieved under the same operational conditions.
Funding: Supported by the National Natural Science Foundation of China (60904018, 61203040), the Natural Science Foundation of Fujian Province of China (2009J05147, 2011J01352), the Foundation for Distinguished Young Scholars of Higher Education of Fujian Province of China (JA10004), and the Science Research Foundation of Huaqiao University (09BS617).
Abstract: To accelerate supervised learning with the SpikeProp algorithm under the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods used to speed up training in artificial neural networks (the heuristic rule, the delta-delta rule, and the delta-bar-delta rule) are used to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods speed up the convergence of SNNs compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is used in combination with a momentum term, the two modifications balance each other in a beneficial way to achieve rapid and steady convergence. Among the three learning rate adaptation methods, the delta-bar-delta rule performs best: the delta-bar-delta method with momentum has the fastest convergence rate, the most stable training process, and the highest network learning accuracy. The proposed algorithms are simple and efficient, and consequently valuable for practical applications of SNNs.
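A minimal sketch of the delta-bar-delta rule on a 1-D quadratic loss L(w) = (w − 3)², assuming illustrative values for the increment kappa, the decay factor phi, and the averaging factor theta (none taken from the paper): the learning rate grows additively while the current gradient agrees in sign with an exponential average of past gradients, and shrinks multiplicatively on a sign flip:

```python
# Illustrative hyperparameters, not the paper's settings.
kappa, phi, theta = 0.01, 0.5, 0.7

w, lr = 0.0, 0.05       # parameter and its per-parameter learning rate
delta_bar = 0.0         # exponential average of past gradients
for _ in range(200):
    grad = 2.0 * (w - 3.0)            # dL/dw for L(w) = (w - 3)^2
    if grad * delta_bar > 0:          # same sign: speed up additively
        lr += kappa
    elif grad * delta_bar < 0:        # sign flip: slow down multiplicatively
        lr *= phi
    delta_bar = (1 - theta) * grad + theta * delta_bar
    w -= lr * grad                    # plain gradient step with adapted lr
```

The additive increase / multiplicative decrease asymmetry is what keeps the adapted rate stable: a single oscillation halves the rate, while recovery is gradual.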
Abstract: Stereo-electroencephalography (SEEG) is the main investigation method for the pre-surgical evaluation of patients suffering from drug-resistant partial epilepsy. SEEG signals reflect two types of paroxysmal activity: ictal activity and interictal activity, or interictal spikes (IS). The relationship between IS and ictal activity is an essential and recurrent question in epileptology. In this paper, we present a distributed and parallel architecture for the spatial and temporal distribution analysis of IS, based on a distributed and collaborative methodology. The proposed approach exploits the SEEG data using vector analysis of the corresponding signals within a multi-agent system. The objective is to present a new method to analyze and classify IS during the wakefulness (W), light sleep (LS), and deep sleep (DS) stages. Temporal and spatial relationships between IS and the seizure onset zone are compared during wakefulness, light sleep, and deep sleep. The results show that the spatial and temporal distributions for real data are not random but correlated.
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2020YFA0405700).
Abstract: High-speed flows have consistently presented significant challenges to experimental research due to their complex and unsteady characteristics. This study investigates the use of the megahertz-frequency particle image velocimetry (MHz-PIV) technique to enhance time resolution under high-speed flow conditions. In our experiments, five high-speed cameras were triggered in rapid succession to capture images of the same measurement area, achieving ultra-high time resolution particle image data. Through advanced image processing techniques, we corrected optical distortions and identified the common areas among the captured images. A sliding-average algorithm, along with spectral analysis of the compressible turbulent flow field based on the velocity data, facilitated a comprehensive analysis. The results confirm the capability of MHz-PIV for high-frequency sampling, significantly reducing reliance on individual camera performance. This approach offers a refined measurement method with superior spatiotemporal resolution for high-speed flow experiments.
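The sliding-average step can be sketched as a simple moving average over a velocity time series; the sample values and window size below are illustrative, not measured data:

```python
# Hypothetical velocity samples (e.g., m/s at one interrogation point).
velocity = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0, 13.0]
window = 3  # illustrative window size

# Each output point is the mean of `window` consecutive samples, which
# suppresses high-frequency noise before spectral analysis.
smoothed = [
    sum(velocity[i:i + window]) / window
    for i in range(len(velocity) - window + 1)
]
```

The window length trades noise suppression against the temporal resolution that the multi-camera acquisition was designed to provide, so it is kept short.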