The shear wave (S-wave) velocity is a critical rock elastic parameter in shale reservoirs, especially for evaluating shale fracability. To effectively supplement the S-wave velocity where no measured data are available, this paper proposes a physics- and data-driven method for S-wave velocity prediction in shale reservoirs based on the class activation mapping (CAM) technique combined with a physically constrained two-dimensional convolutional neural network (2D-CNN). High-sensitivity log curves related to S-wave velocity are selected as inputs through a data sensitivity analysis. Then, we establish a petrophysical model of complex multi-mineral components based on the petrophysical properties of porous media and the Biot-Gassmann equation. This model helps reduce the dispersion effect and constrains the 2D-CNN. In the deep-learning stage, the 2D-CNN model is optimized using the Adam optimizer, and class activation maps (CAMs) are obtained by replacing the fully connected layer with a global average pooling (GAP) layer, yielding explainable results. The model is then applied to wells A, B1, and B2 in the southern Songliao Basin, China, and compared with the unconstrained model and the petrophysical model. The results show higher prediction accuracy and generalization ability, as evidenced by correlation coefficients and relative errors of 0.98 and 2.14%, 0.97 and 2.35%, and 0.96 and 2.89% in the three test wells, respectively. Finally, we define the C-factor as a means of evaluating how strongly CAMs focus in regression problems. When the results of the petrophysical model are added to the 2D feature maps, the C-factor values increase significantly, indicating that the focus of the 2D-CNN can be substantially enhanced by incorporating the petrophysical model, thereby imposing physical constraints on the 2D-CNN. In addition, we establish a SHAP model, and the results of the petrophysical model have the highest average SHAP values across the three test wells, which further supports the importance of the constraints.
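The CAM construction mentioned above (replacing the fully connected layer with GAP) has a simple closed form: the map is the weighted sum of the final convolutional layer's feature maps, using the weights of the output unit that follows GAP. A minimal sketch of that computation (not the authors' code; the array shapes and function name are illustrative):

```python
import numpy as np

def class_activation_map(feature_maps, gap_weights):
    """CAM via GAP: CAM(h, w) = sum_c w_c * F_c(h, w), where F_c are the
    final conv layer's feature maps and w_c are the weights connecting
    the GAP outputs to one output unit.

    feature_maps : (C, H, W) activations of the final conv layer
    gap_weights  : (C,) weights for a single output unit
    """
    # Weighted sum over the channel axis
    cam = np.tensordot(gap_weights, feature_maps, axes=([0], [0]))
    # Normalize to [0, 1] for visualization
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: 3 channels of 4x4 feature maps
rng = np.random.default_rng(0)
F = rng.random((3, 4, 4))
w = np.array([0.5, -0.2, 0.7])
cam = class_activation_map(F, w)
```

Because the weighted sum commutes with spatial averaging, this map highlights the input regions (here, log-curve patches in the 2D feature maps) that drive the network's output.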
Automated detection of Motor Imagery (MI) tasks is extremely useful for controlling prosthetic arms and legs during the rehabilitation of stroke patients. MI tasks can be predicted from Electroencephalogram (EEG) signals recorded by placing electrodes on the scalp of subjects; however, accurate prediction remains a challenge due to noise incurred during EEG recording, the difficulty of extracting a feature vector with high interclass variance, and the demands of accurate classification. The proposed method consists of preprocessing, feature extraction, and classification. First, EEG signals are denoised using a bandpass filter followed by Independent Component Analysis (ICA). Multiple channels are combined to form a single surrogate channel. The Short-Time Fourier Transform (STFT) is then applied to convert time-domain EEG signals into the frequency domain. Handcrafted and automated features are extracted from the EEG signals and concatenated to form a single feature vector. We propose a customized two-dimensional Convolutional Neural Network (CNN) for automated feature extraction with high interclass variance. Feature selection is performed using Particle Swarm Optimization (PSO) to obtain optimal features. The final feature vector is passed to three different classifiers: Support Vector Machine (SVM), Random Forest (RF), and Long Short-Term Memory (LSTM). The final decision is made using Model-Agnostic Meta-Learning (MAML). The proposed method has been tested on two datasets, PhysioNet and BCI Competition IV-2a, and achieved better results in terms of accuracy and F1 score than existing state-of-the-art methods, reaching an accuracy and F1 score of 96% on the PhysioNet dataset and 95.5% on BCI Competition IV dataset 2a. We also present SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM) explainability techniques to enhance model interpretability in a clinical setting.
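The STFT step in the pipeline above converts the denoised single-channel EEG into a time-frequency image that the 2D-CNN can consume. A minimal windowed-FFT sketch of that transform (illustrative only; frame length, hop size, and the 10 Hz test tone are assumptions, not the paper's settings):

```python
import numpy as np

def stft(signal, frame_len=64, hop=32):
    """Naive STFT: slide a Hann window over the signal and take the real
    FFT of each frame, producing a (n_frames, n_bins) time-frequency map."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)  # n_bins = frame_len // 2 + 1

# Toy mu-rhythm-like 10 Hz tone, 1 s at 256 Hz sampling rate
fs = 256
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 10 * t)
spec = np.abs(stft(x))  # magnitude spectrogram fed to the CNN in this scheme
```

With these settings the frequency resolution is fs/frame_len = 4 Hz, so the 10 Hz tone shows up as a peak around bins 2-3 in every frame, which is the kind of band-limited structure MI classifiers exploit.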
To overcome the computational burden of processing three-dimensional (3D) medical scans and the lack of spatial information in two-dimensional (2D) medical scans, a novel segmentation method was proposed that integrates the segmentation results of three densely connected 2D convolutional neural networks (2D-CNNs). To combine low-level and high-level features, we added densely connected blocks to the network structure so that low-level features are not lost as the network deepens during learning. Further, to resolve the blurred boundary of the glioma edema area, we superimposed and fused the T2-weighted fluid-attenuated inversion recovery (FLAIR) modality image and the T2-weighted (T2) modality image to enhance the edema region. For the training loss, we improved the cross-entropy loss function to effectively avoid network over-fitting. On the Multimodal Brain Tumor Image Segmentation Challenge (BraTS) datasets, our method achieves Dice similarity coefficient values of 0.84, 0.82, and 0.83 on the BraTS2018 training set; 0.82, 0.85, and 0.83 on the BraTS2018 validation set; and 0.81, 0.78, and 0.83 on the BraTS2013 testing set for whole tumors, tumor cores, and enhancing cores, respectively. Experimental results showed that the proposed method achieves promising accuracy with fast processing, demonstrating good potential for clinical medicine.
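The scores reported above are Dice similarity coefficients, DSC = 2|A ∩ B| / (|A| + |B|), computed between the predicted and reference masks. As a reminder of the metric (not the authors' improved training loss), a minimal sketch on binary masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2|A ∩ B| / (|A| + |B|). eps guards against empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Two 4x4 toy masks of 8 pixels each, overlapping on 4 pixels
a = np.zeros((4, 4), dtype=int); a[:2, :] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, :] = 1
dsc = dice_coefficient(a, b)  # 2*4 / (8 + 8) = 0.5
```

A differentiable "soft" variant of the same ratio (summing probabilities instead of counting booleans) is a common choice when Dice is optimized directly during training.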
Convolutional neural networks, which have achieved outstanding performance in image recognition, have been extensively applied to action recognition. The mainstream approaches to video understanding can be categorized into two-dimensional and three-dimensional convolutional neural networks. Although three-dimensional convolutional filters can learn the temporal correlation between different frames by extracting the features of multiple frames simultaneously, they entail an explosive number of parameters and a high computational cost. Methods based on two-dimensional convolutional neural networks use fewer parameters; they often incorporate optical flow to compensate for their inability to learn temporal relationships. However, calculating the corresponding optical flow incurs additional computational cost and necessitates another model to learn the optical-flow features. We propose an action recognition framework based on a two-dimensional convolutional neural network, and therefore need to compensate for its lack of temporal modeling. To expand the temporal receptive field, we propose a multi-scale temporal shift module, which is combined with a temporal feature difference extraction module to extract the difference between the features of different frames. Finally, the model is compressed to make it more compact. We evaluated our method on two major action recognition benchmarks, the HMDB51 and UCF-101 datasets. Before compression, the proposed method achieved an accuracy of 72.83% on HMDB51 and 96.25% on UCF-101. After compression, the accuracy remained strong at 72.19% and 95.57%, respectively. The final model is more compact than those of most related works.
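The multi-scale module above builds on the basic temporal shift operation: a fraction of channels is shifted forward or backward along the time axis so each frame's features mix with its neighbours', enlarging the temporal receptive field at zero extra parameter cost. A minimal single-scale sketch (illustrative only; the shift fraction and tensor layout are assumptions, not the paper's exact design):

```python
import numpy as np

def temporal_shift(x, shift_div=4):
    """Shift 1/shift_div of the channels one step backward in time and
    another 1/shift_div one step forward; leave the rest untouched.

    x : (T, C, H, W) per-frame feature maps
    """
    t, c, h, w = x.shape
    fold = c // shift_div
    out = np.zeros_like(x)                       # vacated slots are zero-padded
    out[:-1, :fold] = x[1:, :fold]               # group 1: pull features from t+1
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]  # group 2: pull features from t-1
    out[:, 2 * fold:] = x[:, 2 * fold:]          # remaining channels: unshifted
    return out

# 2 frames, 8 channels, 1x1 spatial extent, filled with distinct values
x = np.arange(2 * 8, dtype=float).reshape(2, 8, 1, 1)
y = temporal_shift(x)
```

After the shift, an ordinary 2D convolution over `y` sees information from three consecutive frames at once; differencing `y` against `x` is one way to expose the inter-frame feature changes that a temporal-difference module extracts.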
Funding: supported by the National Natural Science Foundation of China (Nos. 42374150, 42374152) and the Natural Science Foundation of Shandong Province (ZR2020MD050).
Funding: supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2601).
Funding: the National Natural Science Foundation of China (No. 81830052), the Shanghai Natural Science Foundation of China (No. 20ZR1438300), and the Shanghai Science and Technology Support Project (No. 18441900500).