Funding: Supported by the Jiangsu Social Science Foundation (No. 20GLD008), the Science and Technology Projects of the Jiangsu Provincial Department of Communications (No. 2020Y14), and the Joint Fund for Civil Aviation Research (No. U1933202).
Abstract: To directly construct the mapping between multiple state parameters and remaining useful life (RUL), and to reduce the interference of random error on prediction accuracy, this paper proposes an aeroengine RUL prediction model based on principal component analysis (PCA) and a one-dimensional convolutional neural network (1D-CNN). First, multiple state parameters covering a large number of aeroengine operating cycles are collected and passed through PCA for dimensionality reduction, and the principal components are extracted for subsequent time-series prediction. Second, the 1D-CNN model is constructed to directly learn the mapping between the principal components and RUL; multiple convolution and pooling operations perform deep feature extraction, so that end-to-end RUL prediction of the aeroengine is realized. Experimental results show that PCA extracts the most effective principal components from the multiple state parameters, and that the 1D-CNN directly maps the long time series of state parameters to RUL, improving both the efficiency and the accuracy of RUL prediction. Compared with traditional models, the proposed method also achieves lower prediction error and better robustness.
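As a rough illustration of the PCA-plus-1D-CNN pipeline described above, the sketch below (scikit-learn and PyTorch) reduces multivariate sensor cycles to principal components and regresses RUL with stacked 1D convolution and pooling layers. The component count, window length, and layer sizes are assumptions for illustration, not values from the paper.

```python
# Hedged sketch of a PCA + 1D-CNN RUL pipeline; all sizes are illustrative.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

n_cycles, n_sensors, n_components, window = 2000, 21, 8, 30

# Simulated multi-sensor cycle data: (cycles, sensors)
raw = np.random.randn(n_cycles, n_sensors).astype(np.float32)

# Step 1: PCA reduces the correlated sensor channels to principal components.
pcs = PCA(n_components=n_components).fit_transform(raw)        # (cycles, components)

# Step 2: slide a fixed-length window over the component time series; each
# window maps end-to-end to a single RUL value.
windows = np.stack([pcs[i:i + window].T for i in range(n_cycles - window)])
x = torch.from_numpy(windows.astype(np.float32))               # (N, components, window)

class RulCnn1d(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.head(self.features(x)).squeeze(-1)         # predicted RUL per window

model = RulCnn1d(n_components)
print(model(x[:4]).shape)                                      # torch.Size([4])
```

Training would simply minimise a regression loss (e.g., MSE) between the network output and the labeled RUL of each window.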
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52272433 and 11874110), the Jiangsu Provincial Key R&D Program (Grant No. BE2021084), and the Technical Support Special Project of the State Administration for Market Regulation (Grant No. 2022YJ11).
Abstract: Ultrasonic guided waves are an attractive monitoring technique for large-scale structures but are vulnerable to changes in environmental and operational conditions (EOCs), which are inevitable in the normal inspection of civil and mechanical structures. This paper therefore presents a robust guided-wave-based method for damage detection and localization under complex environmental conditions, built on singular value decomposition (SVD)-based feature extraction and a one-dimensional convolutional neural network (1D-CNN). After SVD-based feature extraction, a temporal robust damage index (TRDI) is obtained and the effect of EOCs is largely removed. Hence, even for signals with a very wide temperature-varying range and low signal-to-noise ratios (SNRs), the final damage detection and localization accuracy remains 100%. Verification is conducted on two experimental datasets: the first consists of guided-wave signals collected from a thin aluminum plate with artificial noise, and the second is a publicly available dataset of guided-wave signals acquired on a composite plate at temperatures ranging from 20 °C to 60 °C. It is demonstrated that the proposed method can detect and localize damage accurately and rapidly, showing great potential for application under complex and unknown EOCs.
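The paper's exact TRDI construction is not reproduced in this listing, so the sketch below shows only a generic SVD-based pattern that serves the same purpose: project a measurement onto the dominant subspace spanned by baseline signals (which absorbs EOC variation) and use the residual energy as a damage-sensitive index. The signals, subspace rank, and index definition are all illustrative assumptions.

```python
# Minimal SVD-based feature-extraction sketch for guided-wave monitoring
# (not the paper's TRDI definition).
import numpy as np

rng = np.random.default_rng(0)
n_baseline, n_samples, rank = 40, 1024, 5
t = np.linspace(0.0, 1.0, n_samples)
wave_packet = np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 0.3) ** 2) / 0.01)

# Baseline signals: the same wave packet under EOC-driven amplitude scaling plus noise.
baselines = (rng.uniform(0.8, 1.2, (n_baseline, 1)) * wave_packet
             + 0.05 * rng.standard_normal((n_baseline, n_samples)))

# Right-singular vectors of the baseline matrix span the EOC-dominated variation.
_, _, vt = np.linalg.svd(baselines, full_matrices=False)
subspace = vt[:rank]                                   # (rank, n_samples)

def damage_index(signal):
    """Relative residual energy after removing the baseline (EOC) subspace."""
    residual = signal - subspace.T @ (subspace @ signal)
    return float(np.linalg.norm(residual) / np.linalg.norm(signal))

healthy = 1.1 * wave_packet + 0.05 * rng.standard_normal(n_samples)
damaged = healthy + 0.3 * np.sin(2 * np.pi * 120 * t) * np.exp(-((t - 0.6) ** 2) / 0.01)
print(damage_index(healthy), damage_index(damaged))    # the damaged index is noticeably larger
```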
Funding: Supported by the Key Research and Development Program of Jiangsu Province under Grant BE2022059-3, by CTBC Bank through the Industry-Academia Cooperation Project, and by the Ministry of Science and Technology of Taiwan through Grants MOST-108-2218-E-002-055, MOST-109-2223-E-009-002-MY3, MOST-109-2218-E-009-025, and MOST-109-2218-E-002-015.
Abstract: Micro-expression (ME) recognition is a complex task that requires advanced techniques to extract informative features from facial expressions. Numerous deep neural networks (DNNs) with convolutional structures have been proposed; however, shallow convolutional neural networks often outperform deeper models in mitigating overfitting, particularly on small datasets. Still, many of these methods rely on a single feature for recognition, limiting their ability to extract highly effective features. To address this limitation, this paper introduces an Improved Dual-stream Shallow Convolutional Neural Network based on an Extreme Gradient Boosting algorithm (IDSSCNN-XgBoost) for ME recognition. The proposed method uses a dual-stream architecture in which motion vectors (temporal features) are extracted with TV-L1 optical flow and subtle changes (spatial features) are amplified via Eulerian Video Magnification (EVM). These features are processed by the IDSSCNN, with an attention mechanism applied to refine the extracted features; the outputs are then fused, concatenated, and classified using the XgBoost algorithm. This approach improves recognition accuracy by leveraging the strengths of both temporal and spatial information, supported by the robust classification power of XgBoost. The method is evaluated on three publicly available ME databases: the Chinese Academy of Sciences Micro-expression Database (CASME II), the Spontaneous Micro-expression Database (SMIC-HS), and the Spontaneous Actions and Micro-Movements (SAMM) database. Experimental results indicate that the proposed model achieves outstanding results compared with recent models, with accuracies of 79.01%, 69.22%, and 68.99% on CASME II, SMIC-HS, and SAMM, and F1-scores of 75.47%, 68.91%, and 63.84%, respectively. The proposed method also has the advantages of operational efficiency and lower computational time.
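To make the dual-stream-plus-XgBoost idea concrete, the sketch below feeds a flow-based branch and a magnification-based branch through two shallow CNN streams, concatenates their descriptors, and classifies them with XGBoost. The stream depths, input sizes, and class count are assumptions, the attention module is omitted, and the TV-L1/EVM preprocessing is represented only by random tensors.

```python
# Hedged sketch of dual-stream CNN features fused and classified by XGBoost.
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

class ShallowStream(nn.Module):
    """A deliberately shallow CNN branch (one conv block + pooling)."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(4),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),         # -> 16-dim descriptor
        )

    def forward(self, x):
        return self.net(x)

flow_stream = ShallowStream(in_ch=2)    # TV-L1 optical flow (u, v) -> temporal features
evm_stream = ShallowStream(in_ch=3)     # motion-magnified RGB frame -> spatial features

n, classes = 64, 3
flow = torch.randn(n, 2, 64, 64)
evm = torch.randn(n, 3, 64, 64)
labels = np.random.randint(0, classes, n)

with torch.no_grad():
    fused = torch.cat([flow_stream(flow), evm_stream(evm)], dim=1).numpy()  # (n, 32)

# XGBoost performs the final classification on the fused CNN descriptors.
clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="mlogloss")
clf.fit(fused, labels)
print(clf.predict(fused[:5]))
```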
Funding: Partially supported by MICIU MCIN/AEI/10.13039/501100011033, Spain, with grants PID2020-118265GB-C42, -C44, and PRTR-C17.I01; by Generalitat Valenciana, Spain, with grants CIPROM/2022/54, ASFAE/2022/031, and CIAPOS/2021/114; by the EU NextGenerationEU and ESF funds; and by the National Science Centre (NCN), Poland (grant No. 2020/39/D/ST2/00466).
Abstract: Pulse pile-up is a problem in nuclear spectroscopy and nuclear reaction studies that occurs when two pulses overlap and distort each other, degrading the quality of energy and timing information. Different methods, both digital and analogue, have been used for pile-up rejection, but some pile-up events contain pulses of interest and need to be reconstructed. This paper proposes a new method for reconstructing pile-up events acquired with a neutron detector array (NEDA) using a one-dimensional convolutional autoencoder (1D-CAE). The datasets for training and testing the 1D-CAE are created from data acquired with the NEDA. The new pile-up signal reconstruction method is evaluated in terms of how similar the reconstructed signals are to the original ones. Furthermore, it is analysed by comparing the charge-comparison-based neutron-gamma discrimination obtained from the original and reconstructed signals.
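A 1D convolutional autoencoder of the kind named above can be sketched in a few lines: a strided convolutional encoder compresses the detector trace and a transposed-convolution decoder reconstructs it. The depth, channel counts, and trace length below are assumptions and do not reproduce the NEDA 1D-CAE.

```python
# Minimal 1D convolutional autoencoder sketch for pulse reconstruction.
import torch
import torch.nn as nn

class ConvAutoencoder1D(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder compresses the detector trace into a short latent sequence.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        # Decoder mirrors the encoder to recover a clean pulse trace.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=7, stride=2, padding=3, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder1D()
piled_up = torch.randn(8, 1, 256)        # batch of piled-up traces (placeholder data)
target = torch.randn(8, 1, 256)          # corresponding reference pulses (training labels)
loss = nn.functional.mse_loss(model(piled_up), target)
loss.backward()
print(model(piled_up).shape)             # torch.Size([8, 1, 256])
```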
Funding: Funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Research Funding Program, Grant No. FRP-1443-15.
Abstract: The analysis of Android malware shows that this threat is constantly increasing and poses a real risk to mobile devices, since traditional approaches such as signature-based detection are no longer effective against its continuously advancing sophistication. Efficient and flexible malware detection tools are therefore needed. This work examines the possibility of employing deep CNNs to detect Android malware by transforming network traffic into image representations. The dataset used in this study is CIC-AndMal2017, which contains 20,000 instances of network traffic across five distinct malware categories: Trojan, Adware, Ransomware, Spyware, and Worm. The network traffic features are converted to image format and fed to a CNN framework that includes the pre-trained VGG16 model. The approach yielded high performance, with an accuracy of 99.1%, precision of 98.2%, recall of 99.5%, and an F1 score of 98.7%. Subsequent improvements to the classification model through changes within the VGG19 framework raised the classification rate to 99.25%. These results show that CNNs are a very effective way to classify Android malware, providing greater accuracy than conventional techniques. The success of this approach also demonstrates the applicability of deep learning to mobile security, and points toward future work on real-time detection systems and further deep learning techniques to counter the growing number of emerging threats.
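The sketch below illustrates the traffic-to-image plus transfer-learning pattern: a hypothetical helper `flow_features_to_image` tiles a flow-feature vector into a three-channel image, and a pre-trained torchvision VGG16 backbone is reused with its final layer replaced for the five malware classes. The image size, conversion rule, and fine-tuning choices are assumptions, not the paper's exact pipeline.

```python
# Hedged sketch of traffic-image classification with a pre-trained VGG16 backbone.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def flow_features_to_image(features: np.ndarray, size: int = 224) -> torch.Tensor:
    """Tile a 1-D vector of network-flow features into a 3-channel image tensor."""
    padded = np.resize(features.astype(np.float32), size * size)
    img = torch.from_numpy(padded).reshape(1, size, size)
    return img.repeat(3, 1, 1)                       # replicate across RGB channels

num_classes = 5                                       # Trojan, Adware, Ransomware, Spyware, Worm
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in backbone.features.parameters():              # freeze the convolutional features
    p.requires_grad = False
backbone.classifier[6] = nn.Linear(4096, num_classes)  # replace the final FC layer

batch = torch.stack([flow_features_to_image(np.random.rand(80)) for _ in range(4)])
logits = backbone(batch)
print(logits.shape)                                   # torch.Size([4, 5])
```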
Funding: Science and Technology Support Plan Project of the Tianjin Municipal Science and Technology Commission (No. 15ZCZDNC00130).
Abstract: Image-based individual dairy cattle recognition has gained much attention recently. To further improve the accuracy of individual dairy cattle recognition, this paper proposes an algorithm based on a deep convolutional neural network (DCNN), which enables automatic feature extraction and classification that outperforms traditional hand-crafted features. Through multi-group comparison experiments covering different numbers of network layers, different convolution kernel sizes, and different feature dimensions in the fully connected layer, we demonstrate that the proposed method is suitable for dairy cattle classification. The experimental results show that its accuracy is significantly higher than that of two traditional image processing algorithms: the scale-invariant feature transform (SIFT) algorithm and the bag-of-features (BOF) model.
Funding: This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2018R1A2B6007333) and by a 2018 Research Grant from Kangwon National University.
Abstract: In this study, we examined the efficacy of a deep convolutional neural network (DCNN) in recognizing concrete surface images and predicting the compressive strength of concrete. A digital single-lens reflex (DSLR) camera and a microscope were used simultaneously to obtain the concrete surface images used as input data for the DCNN. Thereafter, training, validation, and testing of the DCNNs were performed on the DSLR camera and microscope image data. The analysis indicated that the DCNN trained on DSLR image data achieved relatively higher accuracy, which was attributed to the relatively wider range of the DSLR camera, beneficial for extracting a larger number of features; the DSLR camera also produced more realistic images than the microscope. Thus, when the compressive strength of concrete was evaluated using the DCNN with a DSLR camera, time and cost were reduced while practicality increased. Furthermore, an indirect comparison of the accuracy of the DCNN with that of existing non-destructive methods for evaluating concrete strength supported the reliability of the DCNN-derived strength predictions. Finally, the DCNN used for concrete strength evaluation in this study can be further extended to detect and evaluate various deteriorative factors that affect the durability of structures, such as salt damage, carbonation, sulfation, corrosion, and freezing-thawing.
Abstract: Audiovisual speech recognition is an emerging research topic. Lipreading is the recognition of what someone is saying using visual information, primarily lip movements. In this study, we created a custom dataset for Indian English and organized the work into three main parts: (1) audio recognition, (2) visual feature extraction, and (3) combined audio and visual recognition. Audio features were extracted using mel-frequency cepstral coefficients, and classification was performed with a one-dimensional convolutional neural network. Visual features were extracted using Dlib, and visual speech was classified with a long short-term memory (LSTM) recurrent neural network. Finally, integration was performed using a deep convolutional network. Audio speech in Indian English was recognized with training and testing accuracies of 93.67% and 91.53%, respectively, after 200 epochs. For visual speech recognition on the Indian English dataset, the training accuracy was 77.48% and the test accuracy was 76.19% after 60 epochs. After integration, the training and testing accuracies of audiovisual speech recognition on the Indian English dataset were 94.67% and 91.75%, respectively.
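The fusion idea can be sketched as two branches, an audio branch (1D-CNN over MFCC frames) and a visual branch (LSTM over per-frame lip-landmark vectors), joined by fully connected layers. The MFCC extraction (e.g., via librosa) and Dlib landmark tracking are omitted, and all dimensions and the vocabulary size are assumptions rather than the study's configuration.

```python
# Illustrative audio-visual fusion sketch (dimensions are placeholders).
import torch
import torch.nn as nn

class AudioVisualNet(nn.Module):
    def __init__(self, n_mfcc=13, lip_dim=40, n_words=50):
        super().__init__()
        self.audio = nn.Sequential(                          # input: (B, n_mfcc, T_audio)
            nn.Conv1d(n_mfcc, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.visual = nn.LSTM(lip_dim, 32, batch_first=True)  # input: (B, T_video, lip_dim)
        self.fusion = nn.Sequential(nn.Linear(32 + 32, 64), nn.ReLU(), nn.Linear(64, n_words))

    def forward(self, mfcc, lips):
        a = self.audio(mfcc)                                  # (B, 32) audio descriptor
        _, (h, _) = self.visual(lips)                         # final LSTM hidden state
        v = h[-1]                                             # (B, 32) visual descriptor
        return self.fusion(torch.cat([a, v], dim=1))          # word logits

net = AudioVisualNet()
logits = net(torch.randn(2, 13, 100), torch.randn(2, 25, 40))
print(logits.shape)                                           # torch.Size([2, 50])
```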
Funding: Support provided by the National Key Research and Development Program of China under Grant 2019YFB2004300 and by the National Natural Science Foundation of China under Grants 51975065 and 51805051.
Abstract: Integrated with sensors, processors, and radio-frequency (RF) communication modules, intelligent bearings can achieve autonomous perception and autonomous decision-making, guaranteeing safety and reliability during use. However, because of the resource limitations of the end device, the processors in an intelligent bearing cannot carry the computational load of deep learning models such as convolutional neural networks (CNNs), which involve a great number of multiplication operations. To minimize the computational cost of a conventional CNN, and building on the idea of AdderNet, this paper proposes a 1-D adder neural network with a wide first-layer kernel (WAddNN) suitable for bearing fault diagnosis. The proposed method uses the l1-norm distance between filters and input features as the output response, making the whole network almost free of multiplication. The model takes the original signal as input, uses a wide kernel in the first adder layer to extract features and suppress high-frequency noise, and then uses two layers of small kernels for nonlinear mapping. Experimental comparison with CNN models of the same structure shows that WAddNN achieves similar accuracy with a significantly reduced computational cost. The proposed model provides a new fault diagnosis method for intelligent bearings with limited resources.
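The core operation, replacing multiply-accumulate convolution with an l1-distance response, can be sketched as below. The unfold-based implementation, kernel widths, and strides are illustrative assumptions in the spirit of AdderNet, not the paper's WAddNN code.

```python
# Hedged sketch of a 1-D "adder" layer: the response is the negative l1 distance
# between each filter and each input patch, so the layer needs no filter-feature
# multiplications.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdderConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_ch, in_ch * kernel_size))
        self.kernel_size, self.stride, self.padding = kernel_size, stride, padding

    def forward(self, x):                                        # x: (B, C_in, L)
        x = F.pad(x, (self.padding, self.padding))
        patches = x.unfold(2, self.kernel_size, self.stride)     # (B, C_in, L_out, K)
        patches = patches.permute(0, 2, 1, 3).flatten(2)         # (B, L_out, C_in*K)
        # Negative l1 distance between every patch and every filter.
        dist = (patches.unsqueeze(2) - self.weight.unsqueeze(0).unsqueeze(0)).abs().sum(-1)
        return -dist.permute(0, 2, 1)                            # (B, C_out, L_out)

# A wide first-layer kernel (to suppress high-frequency noise) followed by a
# small-kernel adder layer, echoing the structure the abstract describes.
first = AdderConv1d(1, 16, kernel_size=64, stride=8, padding=28)
second = AdderConv1d(16, 32, kernel_size=3, stride=1, padding=1)
signal = torch.randn(4, 1, 2048)                                 # raw vibration segments
print(second(first(signal)).shape)                               # torch.Size([4, 32, 256])
```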
Funding: Supported by the National Natural Science Foundation of China through the Project of Research of Flexible and Adaptive Arc-Suppression Method for Single-Phase Grounding Fault in Distribution Networks (No. 51677030).
Abstract: Effective features are essential for fault diagnosis. Because of the faint characteristics of a single line-to-ground (SLG) fault, fault line detection is a challenge in resonant grounding distribution systems. This paper proposes a novel fault line detection method using waveform fusion and a one-dimensional convolutional neural network (1-D CNN). After an SLG fault occurs, the first-half waves of the zero-sequence currents are collected and superimposed on each other to achieve waveform fusion. The compelling feature of the fused waveforms is extracted by the 1-D CNN to determine whether the source of a fused waveform contains the fault line, and the 1-D CNN output is then used to update a counter that identifies the fault line. Given the lack of fault data in existing distribution systems, the proposed method needs only a small quantity of data for model training and fault line detection. In addition, the method is fault-tolerant: even if a few samples are misjudged, the fault line can still be detected correctly from the full set of 1-D CNN outputs. Experimental results verify that the proposed method works effectively under various fault conditions.
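One plausible reading of the fusion-and-counter scheme is sketched below: zero-sequence first-half waves from subsets of feeders are superimposed, a small 1-D CNN judges whether each fused waveform involves the faulty line, and per-line counters accumulate the votes. The fusion groups, network size, and voting rule are assumptions and the network here is untrained, so this only illustrates the data flow.

```python
# Rough sketch of waveform fusion + 1-D CNN classification + counter voting.
import torch
import torch.nn as nn
from itertools import combinations

class FusedWaveClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, 9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten(), nn.Linear(16, 2),                # "contains fault line": yes / no
        )

    def forward(self, x):
        return self.net(x)

n_lines, length = 6, 200
zero_seq = torch.randn(n_lines, length)                    # first-half waves per feeder
model = FusedWaveClassifier()
counters = torch.zeros(n_lines, dtype=torch.long)

with torch.no_grad():
    for group in combinations(range(n_lines), 2):          # pairwise waveform fusion
        fused = zero_seq[list(group)].sum(dim=0).view(1, 1, -1)
        if model(fused).argmax(dim=1).item() == 1:          # CNN says this group holds the fault
            counters[list(group)] += 1

print("suspected fault line:", int(counters.argmax()))      # line with the most votes
```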
Funding: Supported by the National Research Foundation of Korea, funded by the Korean Government through the Ministry of Science and ICT, under Grant NRF-2020R1F1A1060659, and in part by the 2020 Faculty Research Fund of Sejong University.
Abstract: Emotion recognition from speech data is an active and emerging area of research that plays an important role in numerous applications, such as robotics, virtual reality, behavior assessment, and emergency call centers. Researchers have recently developed many techniques in this field to improve accuracy using several deep learning approaches, but the recognition rate is still not convincing. Our main aim is to develop a new technique that increases the recognition rate at a reasonable computational cost. In this paper, we propose a one-dimensional dilated convolutional neural network (1D-DCNN) for speech emotion recognition (SER) that utilizes hierarchical feature learning blocks (HFLBs) with a bidirectional gated recurrent unit (BiGRU). A one-dimensional CNN first enhances the speech signals using spectral analysis and extracts hidden patterns, which are fed into a stack of one-dimensional dilated blocks called HFLBs. Each HFLB contains one dilated convolution layer (DCL), one batch normalization (BN) layer, and one leaky ReLU layer, extracting emotional features through a hierarchical correlation strategy. The learned emotional features are then fed into a BiGRU to adjust the global weights and capture temporal cues, and the final state of the deep BiGRU is passed to a softmax classifier to produce emotion probabilities. The proposed model was evaluated on three benchmark datasets, IEMOCAP, EMO-DB, and RAVDESS, achieving 72.75%, 91.14%, and 78.01% accuracy, respectively.
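The block structure described above (dilated convolution + batch normalization + leaky ReLU, stacked and followed by a BiGRU and a softmax classifier) can be sketched as follows. Block count, channel widths, dilation rates, and the number of emotion classes are assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of stacked dilated-convolution blocks feeding a BiGRU classifier.
import torch
import torch.nn as nn

class HFLB(nn.Module):
    """Dilated Conv1d + BatchNorm + LeakyReLU, as sketched from the description."""
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, dilation=dilation, padding=dilation),
            nn.BatchNorm1d(out_ch),
            nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return self.block(x)

class SerNet(nn.Module):
    def __init__(self, n_emotions=4):
        super().__init__()
        self.front = nn.Conv1d(1, 16, kernel_size=7, padding=3)          # raw-signal front end
        self.hflbs = nn.Sequential(HFLB(16, 32, 1), HFLB(32, 32, 2), HFLB(32, 32, 4))
        self.bigru = nn.GRU(32, 64, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * 64, n_emotions)

    def forward(self, x):                                                # x: (B, 1, T)
        feats = self.hflbs(self.front(x)).permute(0, 2, 1)               # (B, T, 32)
        _, h = self.bigru(feats)                                         # h: (2, B, 64)
        h = torch.cat([h[0], h[1]], dim=1)                               # both directions
        return torch.softmax(self.classifier(h), dim=1)                  # emotion probabilities

net = SerNet()
print(net(torch.randn(2, 1, 4000)).shape)                                # torch.Size([2, 4])
```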
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2019YFB2205102) and the National Natural Science Foundation of China (Grant Nos. 61974164, 62074166, 61804181, 62004219, 62004220, and 62104256).
Abstract: Memristor-based neuromorphic computing shows great potential for high-speed, high-throughput signal processing applications such as electroencephalogram (EEG) signal processing. Nonetheless, the size of one-transistor one-resistor (1T1R) memristor arrays is limited by device non-idealities, which prevents the hardware implementation of large and complex networks. In this work, we propose the depthwise separable convolution and bidirectional gated recurrent unit (DSC-BiGRU) network, a lightweight and highly robust hybrid neural network based on 1T1R arrays that efficiently processes EEG signals in the temporal, frequency, and spatial domains by hybridizing DSC and BiGRU blocks. The network size is reduced and the network robustness is improved while the classification accuracy is maintained. In simulation, the measured non-idealities of the 1T1R array are introduced into the network through statistical analysis. Compared with traditional convolutional networks, the network parameters are reduced by 95% and the classification accuracy is improved by 21% at a 95% array yield rate and a 5% tolerable error. This work demonstrates that lightweight and highly robust networks based on memristor arrays hold great promise for applications that rely on low power consumption and high efficiency.
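In software terms, the DSC-plus-BiGRU combination reduces parameters by splitting each convolution into a depthwise and a pointwise step before the recurrent stage, as in the sketch below. EEG channel count, window length, and class count are assumptions, and the memristor-array (1T1R) mapping and non-ideality modeling are not represented here.

```python
# Illustrative depthwise-separable convolution + bidirectional GRU block for EEG windows.
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise conv (groups = channels) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class DscBiGru(nn.Module):
    def __init__(self, eeg_channels=22, n_classes=4):
        super().__init__()
        self.dsc = nn.Sequential(
            DepthwiseSeparableConv1d(eeg_channels, 32, 7), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.bigru = nn.GRU(32, 32, batch_first=True, bidirectional=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                               # x: (B, channels, time)
        feats = self.dsc(x).permute(0, 2, 1)            # (B, time', 32)
        _, h = self.bigru(feats)
        return self.head(torch.cat([h[0], h[1]], dim=1))

model = DscBiGru()
print(model(torch.randn(8, 22, 512)).shape)              # torch.Size([8, 4])
```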
Abstract: To reduce the risk of non-performing loans and the associated losses, and to improve loan approval efficiency, it is necessary to establish an intelligent loan risk and approval prediction system. A hybrid deep learning model combining a 1DCNN-attention network with enhanced preprocessing techniques is proposed for loan approval prediction. The proposed model consists of enhanced data preprocessing and a stack of hybrid modules. First, the enhanced preprocessing combines standardization, SMOTE oversampling, feature construction, recursive feature elimination (RFE), information value (IV) screening, and principal component analysis (PCA), which not only eliminates the effects of data jitter and class imbalance but also removes redundant features while improving their representation. Subsequently, a hybrid module that combines a 1DCNN with an attention mechanism is proposed to extract local and global spatio-temporal features. Finally, comprehensive experiments validate that the proposed model surpasses state-of-the-art baseline models across various performance metrics, including accuracy, precision, recall, F1 score, and AUC. The proposed model helps to automate the loan approval process and provides scientific guidance to financial institutions for loan risk control.
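A scaled-down sketch of this pipeline is shown below: standardization, SMOTE oversampling, and PCA on toy tabular loan data, followed by a 1D-CNN with a simple attention layer over feature positions. All dimensions, the attention form, and the omission of feature construction, RFE, and IV screening are simplifying assumptions, not the paper's 1DCNN-attention architecture.

```python
# Hedged sketch: preprocessing (standardize, SMOTE, PCA) + 1D-CNN with attention.
import numpy as np
import torch
import torch.nn as nn
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from imblearn.over_sampling import SMOTE

# Toy imbalanced loan data: 300 applicants, 24 raw features, 30 defaults.
X = np.random.randn(300, 24)
y = np.concatenate([np.zeros(270, dtype=int), np.ones(30, dtype=int)])

X = StandardScaler().fit_transform(X)
X, y = SMOTE(random_state=0).fit_resample(X, y)           # rebalance the classes
X = PCA(n_components=16).fit_transform(X)                 # drop redundant directions

class Cnn1dAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 16, 3, padding=1), nn.ReLU())
        self.attn = nn.Linear(16, 1)                       # score per feature position
        self.out = nn.Linear(16, 2)

    def forward(self, x):                                  # x: (B, n_features)
        h = self.conv(x.unsqueeze(1))                      # (B, 16, n_features)
        scores = torch.softmax(self.attn(h.transpose(1, 2)), dim=1)   # (B, n_features, 1)
        context = (h.transpose(1, 2) * scores).sum(dim=1)              # weighted sum -> (B, 16)
        return self.out(context)                           # approve / reject logits

model = Cnn1dAttention()
logits = model(torch.tensor(X[:8], dtype=torch.float32))
print(logits.shape)                                        # torch.Size([8, 2])
```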
Funding: Supported by the National Natural Science Foundation of China [grant numbers 42101404 and 42107498] and the National Key Research and Development Program of China [grant number 2020YFC1807501].
Abstract: Growing demand for seafood and reduced fishery harvests have driven intensive marine aquaculture farming in coastal regions, which may cause severe coastal water problems without adequate environmental management. Effective mapping of mariculture areas is essential for the protection of coastal environments. However, because of their limited spatial extent and complex structures, it is still challenging for traditional methods to accurately extract mariculture areas from medium-spatial-resolution (MSR) images. To solve this problem, we propose the full-resolution cascade convolutional neural network (FRCNet), which maintains effective features over the whole training process, to identify mariculture areas from MSR images. Specifically, FRCNet uses a sequential full-resolution neural network as the first-level subnetwork and gradually aggregates higher-level subnetworks in a cascaded way. Meanwhile, we apply a repeated fusion strategy so that features receive information from different subnetworks simultaneously, leading to rich and representative features. As a result, FRCNet can effectively recognize different kinds of mariculture areas from MSR images. Results show that FRCNet outperforms other classical and recently proposed methods. The developed method can provide valuable datasets for large-scale, intelligent modeling of marine aquaculture management and coastal zone planning.
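To convey the full-resolution-plus-cascade idea in miniature, the sketch below keeps one branch at full resolution while a lower-resolution branch is added, and the two exchange features once before a per-pixel classification head. The branch widths, depths, single fusion step, and segmentation head are loose assumptions and do not reproduce the published FRCNet.

```python
# Simplified two-branch sketch of cascade subnetworks with repeated fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(in_ch, out_ch, stride=1):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU())

class TinyFusionNet(nn.Module):
    def __init__(self, in_ch=4, n_classes=3):               # e.g. 4-band MSR imagery
        super().__init__()
        self.full0 = conv_bn_relu(in_ch, 16)                 # full-resolution subnetwork
        self.down0 = conv_bn_relu(16, 32, stride=2)          # higher-level (1/2-resolution) subnetwork
        self.full1 = conv_bn_relu(16, 16)
        self.down1 = conv_bn_relu(32, 32)
        self.to_full = nn.Conv2d(32, 16, 1)                  # fuse low resolution -> full resolution
        self.to_down = nn.Conv2d(16, 32, 1)                  # fuse full resolution -> low resolution
        self.head = nn.Conv2d(16, n_classes, 1)              # per-pixel mariculture classes

    def forward(self, x):
        f = self.full0(x)
        d = self.down0(f)
        f, d = self.full1(f), self.down1(d)
        # Repeated fusion: each branch receives information from the other;
        # a deeper cascade would add further subnetworks and fusion rounds.
        f = f + F.interpolate(self.to_full(d), size=f.shape[-2:],
                              mode="bilinear", align_corners=False)
        d = d + self.to_down(F.avg_pool2d(f, kernel_size=2))
        return self.head(f)                                  # logits at full resolution

net = TinyFusionNet()
print(net(torch.randn(1, 4, 64, 64)).shape)                  # torch.Size([1, 3, 64, 64])
```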