Abstract: Action recognition is an important topic in computer vision. Recently, deep learning technologies have been applied successfully to many recognition problems, including those involving video data. However, most existing deep learning based recognition frameworks are not optimized for actions in surveillance videos. In this paper, we propose a novel method for recognizing different types of actions in outdoor surveillance videos. The proposed method first introduces motion compensation to improve the detection of human targets. It then uses three different types of deep models, taking single images and image sequences as inputs, to recognize the different types of actions. Finally, the predictions from the different models are fused with a linear model. Experimental results show that the proposed method works well on real surveillance videos.
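A minimal sketch of the linear late-fusion step mentioned in this abstract is given below. It assumes the three deep models have already produced per-class score vectors for each clip and fits the fusion weights by least squares on held-out data; the function names and data shapes are hypothetical, not taken from the paper.

```python
import numpy as np

def fit_fusion_weights(probs_per_model, labels, n_classes):
    """Least-squares fit of linear fusion weights on validation data.

    probs_per_model: list of (n_samples, n_classes) score arrays,
                     one array per deep model.
    labels:          (n_samples,) integer class labels.
    """
    # Stack the model scores side by side: (n_samples, n_models * n_classes)
    X = np.hstack(probs_per_model)
    # One-hot targets for the linear model
    Y = np.eye(n_classes)[labels]
    # Solve X @ W ~= Y in the least-squares sense
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def fuse_predictions(probs_per_model, W):
    """Apply the learned linear fusion to new samples."""
    X = np.hstack(probs_per_model)
    return np.argmax(X @ W, axis=1)
```

At test time, `fuse_predictions` stacks the per-model scores for a clip and picks the class with the highest fused score.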
Abstract: With the rapid growth in the complexity and functionality of modern electronic systems, creating precise behavioral models of nonlinear circuits has become an attractive topic. Deep neural networks (DNNs) have been recognized as a powerful tool for nonlinear system modeling. To characterize the behavior of nonlinear circuits, a DNN-based modeling approach is proposed in this paper. The procedure is illustrated by modeling a power amplifier (PA), a typical nonlinear circuit in electronic systems. The PA model is constructed as a feedforward neural network with three hidden layers, and the Multisim circuit simulator is used to generate the raw training data. Training and validation are carried out in the TensorFlow deep learning framework. Compared with the commonly used polynomial model, the proposed DNN model converges faster and improves the mean squared error by 13 dB. The results demonstrate that the proposed DNN model can accurately capture the input-output characteristics of nonlinear circuits on both the training and validation data sets.
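The following TensorFlow/Keras sketch illustrates the kind of model described above: a feedforward network with three hidden layers trained on input-output samples of a nonlinear circuit, with a mean-squared-error loss. The layer widths, activations, optimizer settings, and the synthetic stand-in data are illustrative assumptions, not the configuration reported in the paper.

```python
import numpy as np
import tensorflow as tf

# Stand-in raw data: input drive levels x and "measured" PA outputs y,
# playing the role of two columns exported from a circuit simulator.
x = np.linspace(-1.0, 1.0, 2000).reshape(-1, 1).astype("float32")
y = np.tanh(2.5 * x) + 0.05 * x**3            # assumed stand-in nonlinearity

# Feedforward network with three hidden layers.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),                  # linear output for regression
])
model.compile(optimizer="adam", loss="mse")    # MSE matches the paper's metric

# Hold out part of the data for validation and report the final validation MSE.
hist = model.fit(x, y, validation_split=0.2, epochs=200, batch_size=64, verbose=0)
print("final validation MSE:", hist.history["val_loss"][-1])
```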
Abstract: A novel scalable model of the substrate components for deep n-well (DNW) RF MOSFETs with different numbers of fingers is presented for the first time. The test structure developed in [1] is employed to directly access the characteristics of the substrate and extract the individual substrate components. A methodology is developed to extract the parameters of the substrate network directly from measured data. Using the measured two-port data of a set of nMOSFETs with different numbers of fingers, with the DNW in grounded and floating configurations, respectively, the parameters of the scalable substrate model are obtained. The method and the substrate model are further verified and validated by matching the measured and simulated output admittances, with excellent agreement up to 40 GHz achieved for common-source configurations.
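As a much simplified illustration of direct parameter extraction from measured data, the sketch below fits a single parallel R-C substrate branch to complex admittance samples versus frequency. The paper's scalable model contains more components, so this is only meant to show the extraction idea, and the component values used in the self-check are made up.

```python
import numpy as np

def extract_rc(freq_hz, y_sub):
    """Directly extract a parallel R-C branch from measured admittance.

    freq_hz: (n,) measurement frequencies in Hz.
    y_sub:   (n,) complex substrate admittance at those frequencies.
    For Y = 1/R + j*omega*C, the real part gives R and the slope of the
    imaginary part versus omega gives C.
    """
    omega = 2.0 * np.pi * freq_hz
    r_sub = 1.0 / np.mean(np.real(y_sub))
    # Least-squares slope of Im(Y) versus omega (line through the origin)
    c_sub = np.sum(omega * np.imag(y_sub)) / np.sum(omega**2)
    return r_sub, c_sub

# Self-check on synthetic data with assumed values R = 300 ohm, C = 40 fF
f = np.linspace(1e9, 40e9, 40)
y = 1 / 300.0 + 1j * 2 * np.pi * f * 40e-15
print(extract_rc(f, y))   # ~ (300.0, 4e-14)
```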
Abstract: Deep learning, especially through convolutional neural networks (CNNs) such as the U-Net 3D model, has revolutionized fault identification from seismic data, representing a significant leap over traditional methods. Our review traces the evolution of CNNs, emphasizing the adaptation and capabilities of the U-Net 3D model in automating seismic fault delineation with unprecedented accuracy. We find: 1) The transition from basic neural networks to sophisticated CNNs has enabled remarkable advances in image recognition that are directly applicable to the analysis of seismic data. The U-Net 3D model, with its innovative architecture, exemplifies this progress by providing detailed and accurate fault detection with reduced manual interpretation bias. 2) The U-Net 3D model has demonstrated its superiority over traditional fault identification methods in several key areas: it improves interpretation accuracy, increases operational efficiency, and reduces the subjectivity of manual methods. 3) Despite these achievements, challenges remain, including the need for effective data preprocessing, the acquisition of high-quality annotated datasets, and model generalization across different geological conditions. Future research should therefore focus on developing more advanced network architectures and innovative training strategies to further refine fault identification performance. Our findings confirm the transformative potential of deep learning, particularly CNNs such as the U-Net 3D model, in the geosciences, and advocate for its broader integration to revolutionize geological exploration and seismic analysis.
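For readers unfamiliar with the architecture, the following Keras sketch shows the general U-Net 3D idea applied to a seismic volume: an encoder-decoder with skip connections that outputs a per-voxel fault probability. The depth, filter counts, and input size are illustrative assumptions and far smaller than a production fault-detection network.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3x3 convolutions, as in typical U-Net stages
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    return x

def small_unet3d(input_shape=(64, 64, 64, 1)):
    inputs = tf.keras.Input(shape=input_shape)

    # Encoder: progressively downsample the seismic volume
    c1 = conv_block(inputs, 16)
    p1 = layers.MaxPooling3D(2)(c1)
    c2 = conv_block(p1, 32)
    p2 = layers.MaxPooling3D(2)(c2)

    # Bottleneck
    b = conv_block(p2, 64)

    # Decoder with skip connections back to the encoder features
    u2 = layers.UpSampling3D(2)(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 32)
    u1 = layers.UpSampling3D(2)(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 16)

    # Per-voxel fault probability
    outputs = layers.Conv3D(1, 1, activation="sigmoid")(c4)
    return tf.keras.Model(inputs, outputs)

model = small_unet3d()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```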
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. U1564201, 61573171, 61403172, 51305167), the China Postdoctoral Science Foundation (Grant Nos. 2015T80511, 2014M561592), the Jiangsu Provincial Natural Science Foundation of China (Grant No. BK20140555), the Six Talent Peaks Project of Jiangsu Province, China (Grant Nos. 2015-JXQC-012, 2014-DZXX-040), the Jiangsu Postdoctoral Science Foundation, China (Grant No. 1402097C), and the Jiangsu University Scientific Research Foundation for Senior Professionals, China (Grant No. 14JDG028).
Abstract: Traditional vehicle detection algorithms use traversal-search based vehicle candidate generation and hand-crafted features to train classifiers for vehicle candidate verification. These methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small set of vehicle candidate areas. The vehicle candidate sub-images are then fed into the deep sparse convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a correct detection rate of 94.81% and a false detection rate of 0.78% on existing datasets and on real road images captured by our group, outperforming existing state-of-the-art algorithms. More importantly, the highly discriminative multi-scale features generated by the deep sparse convolution network have broad application prospects for target recognition in the field of intelligent vehicles.
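A small sketch of the final verification stage described above is shown below: a linear SVM trained on feature vectors extracted from candidate windows. The random placeholder features and their dimensionality are assumptions standing in for the multi-scale features of the deep sparse convolution network.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder features: one 512-D vector per candidate window, with a
# binary label (1 = vehicle, 0 = background).  In the real pipeline these
# would come from saliency-guided candidate windows passed through the
# deep sparse convolution network.
features = rng.normal(size=(1000, 512))
labels = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

# Linear SVM verifier on top of the deep features
clf = LinearSVC(C=1.0, max_iter=5000)
clf.fit(X_train, y_train)
print("candidate verification accuracy:", clf.score(X_test, y_test))
```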
Funding: This work was supported by the National Key Research and Development Program of China (2018YFC2001302), the National Natural Science Foundation of China (91520202), the Chinese Academy of Sciences Scientific Equipment Development Project (YJKYYQ20170050), the Beijing Municipal Science and Technology Commission (Z181100008918010), the Youth Innovation Promotion Association of the Chinese Academy of Sciences, and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB32040200).
Abstract: Brain encoding and decoding via functional magnetic resonance imaging (fMRI) are two important aspects of visual perception neuroscience. Although previous researchers have made significant advances in brain encoding and decoding models, existing methods still require improvement through advanced machine learning techniques. For example, traditional methods usually build the encoding and decoding models separately and are prone to overfitting on small datasets. In fact, effectively unifying the encoding and decoding procedures may allow for more accurate predictions. In this paper, we first review the existing encoding and decoding methods and discuss the potential advantages of a “bidirectional” modeling strategy. Next, we show that there are correspondences between deep neural networks and the human visual streams in terms of both architecture and computational rules. Furthermore, deep generative models, e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs), have produced promising results in studies on brain encoding and decoding. Finally, we propose that the dual learning method, which was originally designed for machine translation tasks, could help to improve the performance of encoding and decoding models by leveraging large-scale unpaired data.
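As a toy illustration of the separately trained encoding and decoding models that this paper argues can be improved, the sketch below fits a ridge-regression encoder (stimulus features to voxel responses) and decoder (voxel responses to stimulus features) on synthetic data. All data shapes and the choice of ridge regression are assumptions for illustration, not the methods reviewed in the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Assumed toy data: 200 stimuli, 100-D stimulus features, 500 voxels.
stim_feats = rng.normal(size=(200, 100))
true_w = rng.normal(size=(100, 500))
voxels = stim_feats @ true_w + 0.1 * rng.normal(size=(200, 500))

# Encoding model: predict voxel responses from stimulus features.
encoder = Ridge(alpha=1.0).fit(stim_feats[:150], voxels[:150])
# Decoding model: predict stimulus features from voxel responses.
decoder = Ridge(alpha=1.0).fit(voxels[:150], stim_feats[:150])

print("encoding R^2:", encoder.score(stim_feats[150:], voxels[150:]))
print("decoding R^2:", decoder.score(voxels[150:], stim_feats[150:]))
```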
Funding: Supported by the National Key Research and Development Program of China (No. 2016YFC1402003), the National Natural Science Foundation of China (No. 41671436), and the Innovation Project of LREIS (No. O88RAA01YA).
Abstract: Sanduao is an important sea-breeding bay in Fujian, South China, and holds a high economic status in aquaculture. Quickly and accurately obtaining information on the distribution, quantity, and extent of aquaculture areas is important for breeding area planning, production value estimation, ecological surveys, and storm surge prevention. However, as the aquaculture area expands, the seawater background becomes increasingly complex and the spectral characteristics differ dramatically, making it difficult to delineate the aquaculture area. In this study, we introduced a deep-learning Richer Convolutional Features (RCF) network model to extract the aquaculture area from a high-resolution GF-2 remote-sensing satellite image. We then used the density of aquaculture as an assessment index to assess the vulnerability of aquaculture areas in Sanduao. The results demonstrate that this method does not require separating land and water in advance, and good extraction can be achieved in areas with more sediment and waves, with an extraction accuracy above 93%, making it suitable for large-scale aquaculture area extraction. The vulnerability assessment results indicate that the density of aquaculture in the eastern part of Sanduao is considerably high, reaching a higher vulnerability level than the other parts.
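The density-based assessment can be illustrated with a generic sketch: given a binary aquaculture extraction mask, compute the fraction of aquaculture pixels in each grid cell and bin it into vulnerability levels. The cell size and thresholds below are made-up values, not the index definition used in the study.

```python
import numpy as np

def aquaculture_density(mask, cell_size=100):
    """Fraction of aquaculture pixels per square grid cell.

    mask:      2-D binary array (1 = extracted aquaculture pixel).
    cell_size: grid cell edge length in pixels.
    """
    h, w = mask.shape
    rows, cols = h // cell_size, w // cell_size
    trimmed = mask[:rows * cell_size, :cols * cell_size]
    blocks = trimmed.reshape(rows, cell_size, cols, cell_size)
    return blocks.mean(axis=(1, 3))

def vulnerability_level(density, thresholds=(0.1, 0.3, 0.5)):
    """Map density to ordinal vulnerability levels 0..3 (thresholds assumed)."""
    return np.digitize(density, thresholds)

# Random stand-in extraction mask, in place of the RCF network output
mask = (np.random.default_rng(0).random((1000, 1200)) > 0.7).astype(np.uint8)
dens = aquaculture_density(mask)
print(vulnerability_level(dens))
```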
Abstract: This paper proposes a technique for synthesizing pixel-based photo-realistic talking face animation using two-step synthesis with HMMs and DNNs. We introduce facial expression parameters as an intermediate representation that corresponds well with both the input contexts and the output pixel data of the face images. The sequences of facial expression parameters are modeled using context-dependent HMMs with static and dynamic features. The mapping from the expression parameters to the target pixel images is trained using DNNs. We examine the amount of training data required for the HMMs and DNNs, and compare the performance of the proposed technique with a conventional PCA-based technique through objective and subjective evaluation experiments.
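A rough sketch of the second synthesis step, the DNN mapping from expression parameters to pixel images, is given below using Keras. The parameter dimensionality, image size, network shape, and random stand-in data are all assumptions for illustration, not the setup used in the paper.

```python
import numpy as np
import tensorflow as tf

# Assumed toy dimensions: 20 facial expression parameters per frame,
# 64x64 grayscale output images (flattened to 4096 pixels).
n_frames, n_params, img_dim = 500, 20, 64 * 64
rng = np.random.default_rng(0)
params = rng.normal(size=(n_frames, n_params)).astype("float32")
pixels = rng.random(size=(n_frames, img_dim)).astype("float32")

# DNN mapping from expression parameters to pixel values.
mapper = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_params,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(img_dim, activation="sigmoid"),  # pixel intensities in [0, 1]
])
mapper.compile(optimizer="adam", loss="mse")
mapper.fit(params, pixels, epochs=10, batch_size=32, verbose=0)

# At synthesis time, HMM-generated parameter trajectories would be fed in
# frame by frame to produce the talking-face image sequence.
frame = mapper.predict(params[:1]).reshape(64, 64)
```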