With the advancement of Vehicle-to-Everything (V2X) technology, efficient resource allocation in dynamic vehicular networks has become a critical challenge for achieving optimal performance. Existing methods suffer from high computational complexity and decision latency under high-density traffic and heterogeneous network conditions. To address these challenges, this study presents an innovative framework that combines Graph Neural Networks (GNNs) with a Double Deep Q-Network (DDQN), utilizing dynamic graph structures and reinforcement learning. An adaptive neighbor sampling mechanism is introduced to dynamically select the most relevant neighbors based on interference levels and network topology, thereby improving decision accuracy and efficiency. Meanwhile, the framework models communication links as nodes and interference relationships as edges, effectively capturing the direct impact of interference on resource allocation while reducing computational complexity and preserving critical interaction information. Employing an aggregation mechanism based on the Graph Attention Network (GAT), it dynamically adjusts the neighbor sampling scope and performs attention-weighted aggregation based on node importance, ensuring more efficient and adaptive resource management. This design ensures reliable Vehicle-to-Vehicle (V2V) communication while maintaining high Vehicle-to-Infrastructure (V2I) throughput. The framework retains the global feature learning capabilities of GNNs and supports distributed network deployment, allowing vehicles to extract low-dimensional graph embeddings from local observations for real-time resource decisions. Experimental results demonstrate that the proposed method significantly reduces computational overhead, mitigates latency, and improves resource utilization efficiency in vehicular networks under complex traffic scenarios. This research not only provides a novel solution to resource allocation challenges in V2X networks but also advances the application of DDQN in intelligent transportation systems, offering substantial theoretical significance and practical value.
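The attention-weighted neighbor aggregation that the GAT-based framework above relies on can be sketched as a generic single-head GAT-style computation in NumPy. This is an illustration of the standard GAT operation, not the paper's implementation; the feature sizes, the LeakyReLU slope of 0.2, and all variable names are assumptions.

```python
import numpy as np

def gat_aggregate(h_self, h_neighbors, W, a):
    """Single-head GAT-style aggregation: score each neighbor against the
    center node, softmax the scores, then sum neighbor features weighted
    by attention. h_self: (F,), h_neighbors: (N, F), W: (F, Fp), a: (2*Fp,)."""
    z_self = h_self @ W                       # projected center feature (Fp,)
    z_nb = h_neighbors @ W                    # projected neighbor features (N, Fp)
    # unnormalized attention logits e_ij = LeakyReLU(a . [z_i || z_j])
    cat = np.concatenate([np.tile(z_self, (len(z_nb), 1)), z_nb], axis=1)
    e = cat @ a
    e = np.where(e > 0, e, 0.2 * e)           # LeakyReLU with slope 0.2
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                      # softmax over the neighborhood
    return alpha @ z_nb                       # attention-weighted sum

rng = np.random.default_rng(0)
h_i = rng.normal(size=4)                      # center-node feature
h_nb = rng.normal(size=(3, 4))                # three sampled neighbors
W = rng.normal(size=(4, 4))
a = rng.normal(size=8)
out = gat_aggregate(h_i, h_nb, W, a)
```

In the paper's setting the "nodes" here would be V2V communication links and the neighborhood would be chosen by the adaptive interference-based sampler.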
To overcome the computational burden of processing three-dimensional (3D) medical scans and the lack of spatial information in two-dimensional (2D) medical scans, a novel segmentation method was proposed that integrates the segmentation results of three densely connected 2D convolutional neural networks (2D-CNNs). In order to combine the low-level features and high-level features, we added densely connected blocks in the network structure design so that the low-level features will not be missed as the network layers increase during the learning process. Further, in order to resolve the problem of the blurred boundary of the glioma edema area, we superimposed and fused the T2-weighted fluid-attenuated inversion recovery (FLAIR) modal image and the T2-weighted (T2) modal image to enhance the edema section. For the loss function of network training, we improved the cross-entropy loss function to effectively avoid network over-fitting. On the Multimodal Brain Tumor Image Segmentation Challenge (BraTS) datasets, our method achieves dice similarity coefficient values of 0.84, 0.82, and 0.83 on the BraTS2018 training set; 0.82, 0.85, and 0.83 on the BraTS2018 validation set; and 0.81, 0.78, and 0.83 on the BraTS2013 testing set for whole tumors, tumor cores, and enhancing cores, respectively. Experimental results showed that the proposed method achieved promising accuracy and fast processing, demonstrating good potential for clinical medicine.
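The dice similarity coefficient used to report the segmentation results above can be computed from two binary masks as follows; `dice_coefficient` is a hypothetical helper name and the toy masks are illustrative.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|). Returns a value in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# toy 2x3 predicted and ground-truth masks
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
dsc = dice_coefficient(a, b)   # 2*2/(3+3), about 0.667
```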
Even though many advances have been achieved in the recognition of handwritten characters, researchers still face difficulties with the handwritten character recognition problem, especially with the advent of new datasets like the Extended Modified National Institute of Standards and Technology dataset (EMNIST). The EMNIST dataset represents a challenge for both machine-learning and deep-learning techniques due to inter-class similarity and intra-class variability. Inter-class similarity exists because of the similarity between the shapes of certain characters in the dataset. The presence of intra-class variability is mainly due to different shapes written by different writers for the same character. In this research, we have optimized a deep residual network to achieve higher accuracy than the published state-of-the-art results. This approach is mainly based on the prebuilt deep residual network model ResNet18, whose architecture has been enhanced by using the optimal number of residual blocks and the optimal size of the receptive field of the first convolutional filter, the replacement of the first max-pooling filter by an average-pooling filter, and the addition of a drop-out layer before the fully connected layer. A distinctive modification has been introduced by replacing the final addition layer with a depth concatenation layer, which resulted in a novel deep architecture with higher accuracy than the pure residual architecture. Moreover, the dataset images' sizes have been adjusted to optimize their visibility in the network. Finally, by tuning the training hyperparameters and using rotation and shear augmentations, the proposed model outperformed the state-of-the-art models by achieving average accuracies of 95.91% and 90.90% for the Letters and Balanced dataset sections, respectively. Furthermore, the average accuracies were improved to 95.9% and 91.06% for the Letters and Balanced sections, respectively, by using a group of 5 instances of the trained models and averaging the output class probabilities.
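The closing ensemble step, averaging the output class probabilities of several trained model instances and taking the argmax, can be sketched as follows; the probability vectors and helper name are illustrative, not the paper's values.

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average the per-class probability vectors produced by several
    trained model instances and take the argmax as the ensemble decision."""
    mean_probs = np.mean(prob_list, axis=0)
    return mean_probs, int(np.argmax(mean_probs))

# three hypothetical model outputs over 4 classes
p1 = np.array([0.6, 0.2, 0.1, 0.1])
p2 = np.array([0.3, 0.4, 0.2, 0.1])
p3 = np.array([0.5, 0.1, 0.3, 0.1])
probs, label = ensemble_predict([p1, p2, p3])   # class 0 wins on average
```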
To reduce CO₂ emissions in response to global climate change, shale reservoirs could be ideal candidates for long-term carbon geo-sequestration involving multi-scale transport processes. However, most current CO₂ sequestration models do not adequately consider multiple transport mechanisms. Moreover, the evaluation of CO₂ storage processes usually involves laborious and time-consuming numerical simulations unsuitable for practical prediction and decision-making. In this paper, an integrated model involving gas diffusion, adsorption, dissolution, slip flow, and Darcy flow is proposed to accurately characterize CO₂ storage in depleted shale reservoirs, supporting the establishment of a training database. On this basis, a hybrid physics-informed data-driven neural network (HPDNN) is developed as a deep learning surrogate for prediction and inversion. By incorporating multiple sources of scientific knowledge, the HPDNN can be configured with limited simulation resources, significantly accelerating the forward and inversion processes. Furthermore, the HPDNN can more intelligently predict injection performance, precisely perform reservoir parameter inversion, and reasonably evaluate the CO₂ storage capacity under complicated scenarios. The validation and test results demonstrate that the HPDNN can ensure high accuracy and strong robustness across an extensive applicability range when dealing with field data with multiple noise sources. This study has tremendous potential to replace traditional modeling tools for predicting and making decisions about CO₂ storage projects in depleted shale reservoirs.
Key blocks are the main cause of structural failure in discontinuous rock slopes, and automated identification of these block types is critical for evaluating stability conditions. This paper presents a classification framework to categorize rock blocks based on the principles of block theory. A deep convolutional neural network (CNN) procedure was utilized to analyze a total of 1240 high-resolution images from 130 slope masses at the South Pars Special Zone, Assalouyeh, Southwest Iran. Based on Goodman's theory, a recognition system has been implemented to classify three types of rock blocks, namely key blocks, trapped blocks, and stable blocks. The proposed prediction model has been validated with the loss function, root mean square error (RMSE), and mean square error (MSE). As a justification of the model, the support vector machine (SVM), random forest (RF), Gaussian naïve Bayes (GNB), multilayer perceptron (MLP), Bernoulli naïve Bayes (BNB), and decision tree (DT) classifiers have been used to evaluate the accuracy, precision, recall, F1-score, and confusion matrix. The accuracy and precision of the proposed model are 0.95 and 0.93, respectively, in comparison with SVM (accuracy = 0.85, precision = 0.85), RF (accuracy = 0.71, precision = 0.71), GNB (accuracy = 0.75, precision = 0.65), MLP (accuracy = 0.88, precision = 0.9), BNB (accuracy = 0.75, precision = 0.69), and DT (accuracy = 0.85, precision = 0.76). In addition, the proposed model reduced the loss function to less than 0.3 and the RMSE and MSE to less than 0.2, demonstrating a low error rate during processing.
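The accuracy, precision, recall, and F1 comparisons above all derive from the confusion matrix. A short helper shows the standard definitions; the 3-class matrix below (key / trapped / stable blocks) is made up for illustration and is not the paper's data.

```python
import numpy as np

def classification_metrics(cm):
    """Per-class precision/recall/F1 and overall accuracy from a confusion
    matrix whose rows are true classes and columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                                   # correct predictions per class
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12) # TP / predicted-positive
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)    # TP / actual-positive
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

# hypothetical counts for key / trapped / stable blocks
cm = [[40, 3, 2],
      [4, 35, 1],
      [1, 2, 37]]
acc, prec, rec, f1 = classification_metrics(cm)   # acc = 112/125 = 0.896
```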
In recent years, deep convolutional neural networks have exhibited excellent performance in computer vision and have had a far-reaching impact. Traditional plant taxonomic identification requires high expertise, which is time-consuming. Most nature reserves have problems such as incomplete species surveys, inaccurate taxonomic identification, and untimely updating of status data. Simple and accurate recognition of plant images can be achieved by applying convolutional neural network technology and exploring the best network model. Taking 24 typical desert plant species that are widely distributed in the nature reserves of Xinjiang Uygur Autonomous Region, China, as the research objects, this study established an image database and selected the optimal network model for the image recognition of desert plant species, to provide decision support for fine management in the nature reserves in Xinjiang, such as species investigation and monitoring, by using deep learning. Since desert plant species were not included in public datasets, the images used in this study were mainly obtained through field shooting and downloaded from the Plant Photo Bank of China (PPBC). After sorting and statistical analysis, a total of 2331 plant images were collected (2071 images from field collection and 260 images from the PPBC), covering 24 plant species belonging to 14 families and 22 genera. A large number of numerical experiments were also carried out to compare a series of 37 convolutional neural network models with good performance, from different perspectives, to find the optimal network model most suitable for the image recognition of desert plant species in Xinjiang. The results revealed 24 models with a recognition accuracy greater than 70.000%. Among them, RegNetX_8GF performs the best, with accuracy, precision, recall, and F1 (the harmonic mean of the precision and recall values) of 78.33%, 77.65%, 69.55%, and 71.26%, respectively. Considering hardware requirements and inference time, MobileNetV2 achieves the best balance among accuracy, the number of parameters, and the number of floating-point operations: its parameter count is 1/16 that of RegNetX_8GF, and its floating-point operation count is 1/24. Our findings can facilitate efficient decision-making for the management of species survey, cataloging, inspection, and monitoring in the nature reserves in Xinjiang, providing a scientific basis for the protection and utilization of natural plant resources.
Residual neural network (ResNet) is a powerful neural network architecture that has proven excellent at extracting the spatial and channel-wise information of images. ResNet employs a residual learning strategy that maps inputs directly to outputs, making it less difficult to optimize. In this paper, we incorporate differential information into the original residual block to improve the representative ability of the ResNet, allowing the modified network to capture more complex and abstract features. The proposed DFNet preserves the features after each convolutional operation in the residual block, and combines the feature maps of different levels of abstraction through the differential information. To verify the effectiveness of DFNet on image recognition, we select six distinct classification datasets. The experimental results show that our proposed DFNet has better performance and generalization ability than other state-of-the-art variants of ResNet in terms of classification accuracy and other statistical analyses.
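A minimal sketch of the residual learning strategy, plus one plausible reading of how differential information between intermediate feature maps could be fed forward, follows. `conv_like`, `residual_block`, and `differential_block` are illustrative names; the differential variant is a hypothetical example of the general idea, not DFNet's actual design.

```python
import numpy as np

def conv_like(x, w):
    """Stand-in for a convolutional layer: a fixed linear map plus ReLU."""
    return np.maximum(x @ w, 0.0)

def residual_block(x, w1, w2):
    """Plain residual learning: output x + F(x), so the layers only need
    to learn the residual F rather than the full input-output mapping."""
    return x + conv_like(conv_like(x, w1), w2)

def differential_block(x, w1, w2):
    """Hypothetical 'differential' variant: also feed forward the
    difference between the two intermediate feature maps, combining
    features from different levels of abstraction."""
    f1 = conv_like(x, w1)
    f2 = conv_like(f1, w2)
    return x + f2 + (f2 - f1)

rng = np.random.default_rng(1)
x = rng.normal(size=(2, 8))          # a batch of 2 feature vectors
w1 = rng.normal(size=(8, 8)) * 0.1
w2 = rng.normal(size=(8, 8)) * 0.1
y = residual_block(x, w1, w2)
yd = differential_block(x, w1, w2)
```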
Quality of Service (QoS) in 6G application scenarios is an important issue given the premise of massive data transmission. Edge caching based on the fog computing network is considered a potential solution to effectively reduce the content fetch delay for latency-sensitive services of Internet of Things (IoT) devices. In time-varying scenarios, machine learning techniques can further reduce the content fetch delay by optimizing the caching decisions. In this paper, to minimize the content fetch delay and ensure the QoS of the network, a Device-to-Device (D2D) assisted fog computing network architecture is introduced, which supports federated learning and QoS-aware caching decisions based on time-varying user preferences. To relieve network congestion and the risk of user privacy leakage, federated learning is enabled in the D2D-assisted fog computing network. However, it has been observed that federated learning yields suboptimal results under the non-independent and identically distributed (Non-IID) data of local users. To address this issue, a distributed cluster-based user preference estimation algorithm is proposed to optimize the content caching placement, improving the cache hit rate, the content fetch delay, and the convergence rate; the clustering effectively mitigates the impact of Non-IID data sets. The simulation results show that the proposed algorithm provides a considerable performance improvement with better learning results compared with existing algorithms.
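The federated learning component can be illustrated with the standard FedAvg aggregation rule, in which the server averages client parameters weighted by local data size. This is a generic sketch, not the paper's cluster-based preference estimation algorithm; all names are illustrative.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: aggregate client model parameters weighted
    by each client's local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# two hypothetical clients with different amounts of local data
w_a = np.array([1.0, 2.0])
w_b = np.array([3.0, 4.0])
global_w = fed_avg([w_a, w_b], [10, 30])   # = 0.25*w_a + 0.75*w_b
```

Non-IID local data makes this plain average drift from the true global optimum, which is what motivates clustering clients with similar preferences before aggregating.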
Two-dimensional materials with active sites are expected to replace platinum as large-scale hydrogen production catalysts. However, the rapid discovery of excellent two-dimensional hydrogen evolution reaction catalysts is seriously hindered by the long experiment cycle and the huge cost of high-throughput calculations of adsorption energies. Considering that traditional regression models cannot consider all the potential sites on the surface of catalysts, we use a deep learning method with crystal graph convolutional neural networks to accelerate the discovery of high-performance two-dimensional hydrogen evolution reaction catalysts from a two-dimensional materials database, with a prediction accuracy as high as 95.2%. The proposed method considers all active sites, screens out 38 high-performance catalysts from 6,531 two-dimensional materials, predicts their adsorption energies at different active sites, and determines the potentially strongest adsorption sites. The accuracy of the proposed screening strategy is at the density-functional-theory level, but it is estimated to save 10.19 years of computation compared with high-throughput screening, demonstrating the capability of the crystal graph convolutional neural network deep learning method for efficiently discovering high-performance new structures over a wide catalytic materials space.
Deep neural networks are now widely used in medical image segmentation for their superior performance and freedom from manual feature extraction. U-Net has been the baseline model since the very beginning due to its symmetrical U-structure, which supports better feature extraction and fusion and suits small datasets. To enhance the segmentation performance of U-Net, cascaded U-Net places two U-Nets in succession to segment targets from coarse to fine. However, the plain cascaded U-Net has too few connections between the two networks, so the contextual information learned by the former U-Net cannot be fully used by the latter one. In this article, we devise a novel Inner Cascaded U-Net and Inner Cascaded U²-Net as improvements to the plain cascaded U-Net for medical image segmentation. The proposed Inner Cascaded U-Net adds inner nested connections between the two U-Nets to share more contextual information. To further boost segmentation performance, we propose Inner Cascaded U²-Net, which applies residual U-blocks to capture more global contextual information from different scales. The proposed models can be trained from scratch in an end-to-end fashion and have been evaluated on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2013 and ISBI Liver Tumor Segmentation Challenge (LiTS) datasets in comparison to the related U-Net, cascaded U-Net, U-Net++, U²-Net, and state-of-the-art methods. Our experiments demonstrate that the proposed Inner Cascaded U-Net and Inner Cascaded U²-Net achieve better segmentation performance in terms of dice similarity coefficient and Hausdorff distance, as well as finer outline segmentation.
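The Hausdorff distance reported alongside the dice coefficient measures the worst-case boundary disagreement between two point sets (e.g. predicted and ground-truth contour points). A NumPy sketch with an illustrative toy example:

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between point sets A (N, d) and B (M, d):
    the larger of the two directed distances max_a min_b ||a - b||."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (N, M) pairwise
    return max(d.min(axis=1).max(),   # directed A -> B
               d.min(axis=0).max())   # directed B -> A

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [4.0, 0.0]])
hd = hausdorff_distance(A, B)   # the point (4, 0) in B is 3 away from A
```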
In this paper, the L_(2,∞) normalization of the weight matrices is used to enhance the robustness and accuracy of a deep neural network (DNN) with ReLU activation functions. It is shown that the L_(2,∞) normalization leads to large dihedral angles between two adjacent faces of the DNN function graph and hence smoother DNN functions, which reduces over-fitting of the DNN. A global measure is proposed for the robustness of a classification DNN: the average radius of the maximal robust spheres centered at the training samples. A lower bound for the robustness measure in terms of the L_(2,∞) norm is given. Furthermore, an upper bound for the Rademacher complexity of DNNs with L_(2,∞) normalization is given. Finally, an algorithm is given to train DNNs with the L_(2,∞) normalization, and numerical experimental results show that the L_(2,∞) normalization is effective in improving both robustness and accuracy.
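A small sketch of the L_(2,∞) norm and a projection-style normalization, assuming the common convention that the norm is the largest row-wise L2 norm of the matrix (the paper's exact convention and training procedure may differ):

```python
import numpy as np

def l2_inf_norm(W):
    """L_(2,inf) norm of a matrix: the largest L2 norm among its rows."""
    return np.linalg.norm(W, axis=1).max()

def l2_inf_normalize(W):
    """Rescale so the L_(2,inf) norm is at most 1: rows whose L2 norm
    exceeds 1 are shrunk onto the unit sphere, shorter rows are kept."""
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / np.maximum(row_norms, 1.0)

W = np.array([[3.0, 4.0],    # row norm 5, gets shrunk
              [0.3, 0.4]])   # row norm 0.5, unchanged
Wn = l2_inf_normalize(W)
```

In training, such a step would typically be applied to each weight matrix after every gradient update.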
Deep learning has been improving constantly in recent years, and a significant number of researchers have devoted themselves to research on defect detection algorithms. Detection and recognition of small and complex targets is still an unsolved problem. This research presents an improved defect detection model for detecting small and complex defect targets on steel surfaces. During steel strip production, mechanical forces and environmental factors cause surface defects of the steel strip. The detection of such defects is therefore key to the production of high-quality products; moreover, surface defects of the steel strip cause great economic losses to the high-tech industry. So far, few studies have explored methods of identifying these defects, and most of the currently available algorithms are not sufficiently effective. Therefore, this study presents an improved real-time metallic surface defect detection model based on You Only Look Once (YOLOv5), specially designed for small networks. For the smaller features of the target, the conventional part is replaced with a depthwise convolution and a channel shuffle mechanism. Then, assigning weights to the Feature Pyramid Network (FPN) output features and fusing them increases feature propagation and the network's characterization ability. The experimental results reveal that the improved model outperforms comparable models in terms of accuracy and detection time. The precision of the proposed model, measured by mAP@0.5, is 77.5% on the Northeastern University dataset (NEU-DET) and 70.18% on the GC10-DET dataset.
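The channel shuffle mechanism mentioned above is, in its standard ShuffleNet form, a reshape-transpose-reshape on the channel axis, so that information mixes across the groups of a grouped or depthwise convolution. A NumPy sketch, assuming NCHW layout (the paper's exact placement of the operation is not specified here):

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle on an (N, C, H, W) tensor:
    view channels as (groups, C//groups), transpose, flatten back."""
    n, c, h, w = x.shape
    assert c % groups == 0
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

x = np.arange(6).reshape(1, 6, 1, 1)   # channels labeled 0..5
y = channel_shuffle(x, groups=2)        # channel order becomes 0,3,1,4,2,5
```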
The rapid spread of Coronavirus (COVID-19) poses a serious threat to people around the globe, and its continual diagnosis has become a critical challenge for the healthcare sector. The drastic increase in COVID-19 cases has made it necessary to identify the people who are most likely to be infected. Testing kits for COVID-19 have not been available in sufficient quantity, and many countries have been hit hard by the disruption, so an automatic diagnosis system for early detection of COVID-19 is needed. According to clinical research, most COVID-19 cases are likely to develop a lung infection, and chest X-ray and computed tomography (CT) scan images can identify such preliminary signs of COVID-19. CT-scan and X-ray images can therefore support detection at an early stage, and they have proved helpful to radiologists and medical practitioners. To help flatten the curve of sufferers, a quick and highly responsive automatic system based on Artificial Intelligence (AI) is required. The proposed intelligent decision support system for COVID-19, empowered with deep learning (ID2S-COVID19-DL), employs deep learning (DL) based convolutional neural network (CNN) approaches for effective and accurate detection of coronavirus from X-ray and CT-scan images. The primary experimental results show a maximum training accuracy of about 98.11 percent and a validation accuracy of approximately 95.5 percent, while sensitivity and specificity are 98.03 percent and 98.20 percent for training, and 94.38 percent and 97.06 percent for validation, respectively. The suggested deep learning based CNN model achieves performance comparable with medical experts; it can enhance the working productivity of radiologists, enable rapid detection of COVID-19, and contribute to overcoming the current pandemic.
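The sensitivity and specificity figures quoted above follow from the standard confusion-matrix definitions for a binary (COVID-19 / non-COVID) classifier; the counts below are made up for illustration, not the study's data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN): fraction of actual positives detected.
    Specificity = TN/(TN+FP): fraction of actual negatives correctly rejected."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts for a chest X-ray test set
sens, spec = sensitivity_specificity(tp=98, fn=2, tn=97, fp=3)
# sens = 0.98, spec = 0.97
```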
基金Project ZR2023MF111 supported by Shandong Provincial Natural Science Foundation。
文摘With the advancement of Vehicle-to-Everything(V2X)technology,efficient resource allocation in dynamic vehicular networks has become a critical challenge for achieving optimal performance.Existing methods suffer from high computational complexity and decision latency under high-density traffic and heterogeneous network conditions.To address these challenges,this study presents an innovative framework that combines Graph Neural Networks(GNNs)with a Double Deep Q-Network(DDQN),utilizing dynamic graph structures and reinforcement learning.An adaptive neighbor sampling mechanism is introduced to dynamically select the most relevant neighbors based on interference levels and network topology,thereby improving decision accuracy and efficiency.Meanwhile,the framework models communication links as nodes and interference relationships as edges,effectively capturing the direct impact of interference on resource allocation while reducing computational complexity and preserving critical interaction information.Employing an aggregation mechanism based on the Graph Attention Network(GAT),it dynamically adjusts the neighbor sampling scope and performs attention-weighted aggregation based on node importance,ensuring more efficient and adaptive resource management.This design ensures reliable Vehicle-to-Vehicle(V2V)communication while maintaining high Vehicle-to-Infrastructure(V2I)throughput.The framework retains the global feature learning capabilities of GNNs and supports distributed network deployment,allowing vehicles to extract low-dimensional graph embeddings from local observations for real-time resource decisions.Experimental results demonstrate that the proposed method significantly reduces computational overhead,mitigates latency,and improves resource utilization efficiency in vehicular networks under complex traffic scenarios.This research not only provides a novel solution to resource allocation challenges in V2X networks but also advances the application of DDQN in 
intelligent transportation systems,offering substantial theoretical significance and practical value.
基金the National Natural Science Foundation of China(No.81830052)the Shanghai Natural Science Foundation of China(No.20ZR1438300)the Shanghai Science and Technology Support Project(No.18441900500),China。
文摘To overcome the computational burden of processing three-dimensional(3 D)medical scans and the lack of spatial information in two-dimensional(2 D)medical scans,a novel segmentation method was proposed that integrates the segmentation results of three densely connected 2 D convolutional neural networks(2 D-CNNs).In order to combine the lowlevel features and high-level features,we added densely connected blocks in the network structure design so that the low-level features will not be missed as the network layer increases during the learning process.Further,in order to resolve the problems of the blurred boundary of the glioma edema area,we superimposed and fused the T2-weighted fluid-attenuated inversion recovery(FLAIR)modal image and the T2-weighted(T2)modal image to enhance the edema section.For the loss function of network training,we improved the cross-entropy loss function to effectively avoid network over-fitting.On the Multimodal Brain Tumor Image Segmentation Challenge(BraTS)datasets,our method achieves dice similarity coefficient values of 0.84,0.82,and 0.83 on the BraTS2018 training;0.82,0.85,and 0.83 on the BraTS2018 validation;and 0.81,0.78,and 0.83 on the BraTS2013 testing in terms of whole tumors,tumor cores,and enhancing cores,respectively.Experimental results showed that the proposed method achieved promising accuracy and fast processing,demonstrating good potential for clinical medicine.
文摘Even though much advancements have been achieved with regards to the recognition of handwritten characters,researchers still face difficulties with the handwritten character recognition problem,especially with the advent of new datasets like the Extended Modified National Institute of Standards and Technology dataset(EMNIST).The EMNIST dataset represents a challenge for both machine-learning and deep-learning techniques due to inter-class similarity and intra-class variability.Inter-class similarity exists because of the similarity between the shapes of certain characters in the dataset.The presence of intra-class variability is mainly due to different shapes written by different writers for the same character.In this research,we have optimized a deep residual network to achieve higher accuracy vs.the published state-of-the-art results.This approach is mainly based on the prebuilt deep residual network model ResNet18,whose architecture has been enhanced by using the optimal number of residual blocks and the optimal size of the receptive field of the first convolutional filter,the replacement of the first max-pooling filter by an average pooling filter,and the addition of a drop-out layer before the fully connected layer.A distinctive modification has been introduced by replacing the final addition layer with a depth concatenation layer,which resulted in a novel deep architecture having higher accuracy vs.the pure residual architecture.Moreover,the dataset images’sizes have been adjusted to optimize their visibility in the network.Finally,by tuning the training hyperparameters and using rotation and shear augmentations,the proposed model outperformed the state-of-the-art models by achieving average accuracies of 95.91%and 90.90%for the Letters and Balanced dataset sections,respectively.Furthermore,the average accuracies were improved to 95.9%and 91.06%for the Letters and Balanced sections,respectively,by using a group of 5 instances of the trained models and 
averaging the output class probabilities.
基金This work is funded by National Natural Science Foundation of China(Nos.42202292,42141011)the Program for Jilin University(JLU)Science and Technology Innovative Research Team(No.2019TD-35).The authors would also like to thank the reviewers and editors whose critical comments are very helpful in preparing this article.
文摘To reduce CO_(2) emissions in response to global climate change,shale reservoirs could be ideal candidates for long-term carbon geo-sequestration involving multi-scale transport processes.However,most current CO_(2) sequestration models do not adequately consider multiple transport mechanisms.Moreover,the evaluation of CO_(2) storage processes usually involves laborious and time-consuming numerical simulations unsuitable for practical prediction and decision-making.In this paper,an integrated model involving gas diffusion,adsorption,dissolution,slip flow,and Darcy flow is proposed to accurately characterize CO_(2) storage in depleted shale reservoirs,supporting the establishment of a training database.On this basis,a hybrid physics-informed data-driven neural network(HPDNN)is developed as a deep learning surrogate for prediction and inversion.By incorporating multiple sources of scientific knowledge,the HPDNN can be configured with limited simulation resources,significantly accelerating the forward and inversion processes.Furthermore,the HPDNN can more intelligently predict injection performance,precisely perform reservoir parameter inversion,and reasonably evaluate the CO_(2) storage capacity under complicated scenarios.The validation and test results demonstrate that the HPDNN can ensure high accuracy and strong robustness across an extensive applicability range when dealing with field data with multiple noise sources.This study has tremendous potential to replace traditional modeling tools for predicting and making decisions about CO_(2) storage projects in depleted shale reservoirs.
Funding: Support provided by the National Natural Science Foundation of China (Grant No. 42077235) and the National Key Research and Development Program of China (Grant No. 2018YFC1505104).
Abstract: Key blocks are the main cause of structural failure in discontinuous rock slopes, and automated identification of these block types is critical for evaluating stability conditions. This paper presents a classification framework to categorize rock blocks based on the principles of block theory. A deep convolutional neural network (CNN) procedure was utilized to analyze a total of 1240 high-resolution images from 130 slope masses at the South Pars Special Zone, Assalouyeh, Southwest Iran. Based on Goodman's theory, a recognition system has been implemented to classify three types of rock blocks: key blocks, trapped blocks, and stable blocks. The proposed prediction model has been validated with the loss function, root mean square error (RMSE), and mean square error (MSE). To benchmark the model, support vector machine (SVM), random forest (RF), Gaussian naïve Bayes (GNB), multilayer perceptron (MLP), Bernoulli naïve Bayes (BNB), and decision tree (DT) classifiers were used to evaluate accuracy, precision, recall, F1-score, and the confusion matrix. The accuracy and precision of the proposed model are 0.95 and 0.93, respectively, compared with SVM (accuracy = 0.85, precision = 0.85), RF (accuracy = 0.71, precision = 0.71), GNB (accuracy = 0.75, precision = 0.65), MLP (accuracy = 0.88, precision = 0.9), BNB (accuracy = 0.75, precision = 0.69), and DT (accuracy = 0.85, precision = 0.76). In addition, the proposed model reduced the loss function to less than 0.3 and the RMSE and MSE to less than 0.2, demonstrating a low error rate during processing.
Funding: Supported by the West Light Foundation of the Chinese Academy of Sciences (2019-XBQNXZ-A-007) and the National Natural Science Foundation of China (12071458, 71731009).
Abstract: In recent years, deep convolutional neural networks have exhibited excellent performance in computer vision and have had a far-reaching impact. Traditional plant taxonomic identification requires high expertise and is time-consuming. Most nature reserves have problems such as incomplete species surveys, inaccurate taxonomic identification, and untimely updating of status data. Simple and accurate recognition of plant images can be achieved by applying convolutional neural network technology and exploring the best network model. Taking 24 typical desert plant species that are widely distributed in the nature reserves of Xinjiang Uygur Autonomous Region of China as the research objects, this study established an image database and selected the optimal network model for the image recognition of desert plant species, to provide decision support through deep learning for fine management in the nature reserves in Xinjiang, such as species investigation and monitoring. Since desert plant species are not included in public datasets, the images used in this study were mainly obtained through field shooting and downloaded from the Plant Photo Bank of China (PPBC). After sorting and statistical analysis, a total of 2331 plant images were collected (2071 from field collection and 260 from the PPBC), covering 24 plant species belonging to 14 families and 22 genera. A large number of numerical experiments were carried out to compare 37 well-performing convolutional neural network models from different perspectives, to find the optimal network model most suitable for the image recognition of desert plant species in Xinjiang. The results revealed 24 models with a recognition Accuracy greater than 70.000%. Among them, Residual Network X_8GF (RegNetX_8GF) performed the best, with Accuracy, Precision, Recall, and F1 (the harmonic mean of the Precision and Recall values) of 78.33%, 77.65%, 69.55%, and 71.26%, respectively. Considering hardware requirements and inference time, Mobile Network V2 (MobileNetV2) achieves the best balance among Accuracy, the number of parameters, and the number of floating-point operations: its parameter count is 1/16 that of RegNetX_8GF, and its floating-point operation count is 1/24. Our findings can facilitate efficient decision-making for species survey, cataloging, inspection, and monitoring in the nature reserves in Xinjiang, providing a scientific basis for the protection and utilization of natural plant resources.
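The F1 value used to rank the models above is the harmonic mean of Precision and Recall. Note that when F1 is macro-averaged per class, as is typical in multi-class evaluation, it can differ slightly from the harmonic mean of the already-averaged Precision and Recall figures, which is likely why the reported F1 is not exactly recoverable from the averaged values. A minimal sketch of the formula:

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of Precision and Recall
    return 2 * precision * recall / (precision + recall)

# Balanced P and R give F1 equal to both
print(round(f1_score(0.8, 0.8), 4))        # 0.8
# Imbalance pulls F1 toward the smaller of the two
print(round(f1_score(0.7765, 0.6955), 4))  # 0.7338
```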
基金supported by the Japan Society for the Promotion of Science(JSPS)KAKENHI under Grant JP22H03643Japan Science and Technology Agency(JST)Support for Pioneering Research Initiated by the Next Generation(SPRING)under Grant JPMJSP2145JST through the Establishment of University Fellowships towards the Creation of Science Technology Innovation under Grant JPMJFS2115.
Abstract: The residual neural network (ResNet) is a powerful architecture that has proven excellent at extracting spatial and channel-wise information from images. ResNet employs a residual learning strategy that maps inputs directly to outputs, making it easier to optimize. In this paper, we incorporate differential information into the original residual block to improve the representative ability of the ResNet, allowing the modified network to capture more complex and abstract features. The proposed DFNet preserves the features after each convolutional operation in the residual block and combines feature maps at different levels of abstraction through the differential information. To verify the effectiveness of DFNet on image recognition, we selected six distinct classification datasets. The experimental results show that the proposed DFNet has better performance and generalization ability than other state-of-the-art variants of ResNet in terms of classification accuracy and other statistical analyses.
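One plausible reading of the differential mechanism, forming differences between feature maps at adjacent abstraction levels and injecting them alongside the identity shortcut, can be sketched in NumPy. The block structure and the weights `a1`, `a2` are hypothetical; the exact DFNet formulation may differ:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def differential_block(x, conv1, conv2, a1=0.5, a2=0.5):
    h1 = relu(conv1(x))  # first intermediate feature map, preserved
    h2 = conv2(h1)       # second feature map
    d1 = h1 - x          # differential info: level 1 vs input
    d2 = h2 - h1         # differential info: level 2 vs level 1
    # identity shortcut + learned path + weighted differential terms
    return relu(x + h2 + a1 * d1 + a2 * d2)

x = np.random.randn(4, 8, 8)
conv1 = lambda t: 0.9 * t  # stand-ins for the block's convolutions
conv2 = lambda t: 1.1 * t
print(differential_block(x, conv1, conv2).shape)  # (4, 8, 8)
```

Setting `a1 = a2 = 0` recovers the plain residual block, so the differential terms are a strict extension of the original design.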
基金supported by the National Natural Science Foundation of China(NSFC)(61831002)the European Union Horizon 2020 research and innovation programme under the Marie Skodowska-Curie grant agreement No 734798Innovation Project of the Common Key Technology of Chongqing Science and Technology Industry(Grant no.cstc2018jcyjAX0383).
Abstract: Quality of Service (QoS) in the 6G application scenario is an important issue given the premise of massive data transmission. Edge caching based on the fog computing network is considered a potential solution to effectively reduce the content fetch delay for latency-sensitive services of Internet of Things (IoT) devices. In time-varying scenarios, machine learning techniques can further reduce the content fetch delay by optimizing the caching decisions. In this paper, to minimize the content fetch delay and ensure the QoS of the network, a Device-to-Device (D2D) assisted fog computing network architecture is introduced, which supports federated learning and QoS-aware caching decisions based on time-varying user preferences. To relieve network congestion and reduce the risk of user privacy leakage, federated learning is enabled in the D2D-assisted fog computing network. However, federated learning yields suboptimal results under the non-independent and identically distributed (Non-IID) nature of local users' data. To address this issue, a distributed cluster-based user preference estimation algorithm is proposed to optimize content caching placement, improve the cache hit rate, reduce the content fetch delay, and accelerate convergence, effectively mitigating the impact of Non-IID data through clustering. The simulation results show that the proposed algorithm provides a considerable performance improvement, with better learning results than the existing algorithms.
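The cluster-based grouping step can be illustrated with a small k-means over users' content-preference vectors, so that federated aggregation happens within roughly IID clusters. This is a hypothetical sketch with deterministic initialization and toy data; the paper's distributed estimation algorithm is more elaborate:

```python
import numpy as np

def cluster_users(prefs, k, iters=20):
    """Group users by preference vector so that model aggregation can
    be done per cluster, mitigating the effect of Non-IID local data."""
    centers = prefs[:k].copy()  # simple deterministic initialization
    for _ in range(iters):
        # squared Euclidean distance from every user to every center
        dists = ((prefs[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = prefs[labels == j].mean(axis=0)
    return labels, centers

# Two clearly separated preference profiles (hypothetical data):
# even-indexed users favor content type A, odd-indexed favor type B
prefs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1],
                  [0.1, 0.9], [0.95, 0.05], [0.05, 0.95]])
labels, centers = cluster_users(prefs, k=2)
print(labels)  # users 0, 2, 4 share one cluster; 1, 3, 5 the other
```

Each cluster's averaged preference vector can then drive the caching placement for the users it contains.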
Funding: The authors are grateful for the financial support provided by the National Key Laboratory of Science and Technology on Micro/Nano Fabrication of China, the National Natural Science Foundation of China (No. 21901157), the SJTU Global Strategic Partnership Fund (2020 SJTU-HUJI), and the National Key R&D Program of China (2021YFC2100100).
Abstract: Two-dimensional materials with active sites are expected to replace platinum as large-scale hydrogen production catalysts. However, the rapid discovery of excellent two-dimensional hydrogen evolution reaction catalysts is seriously hindered by long experiment cycles and the huge cost of high-throughput calculations of adsorption energies. Considering that traditional regression models cannot account for all the potential sites on a catalyst's surface, we use a deep-learning method with crystal graph convolutional neural networks to accelerate the discovery of high-performance two-dimensional hydrogen evolution reaction catalysts from a two-dimensional materials database, with a prediction accuracy as high as 95.2%. The proposed method considers all active sites, screens out 38 high-performance catalysts from 6,531 two-dimensional materials, predicts their adsorption energies at different active sites, and determines the potential strongest adsorption sites. The prediction accuracy of the proposed screening strategy is at the density-functional-theory level, but the prediction is 10.19 years faster than high-throughput screening, demonstrating the capability of the crystal graph convolutional neural network deep-learning method for efficiently discovering high-performance new structures over a wide catalytic materials space.
Funding: Supported in part by the National Natural Science Foundation of China (No. 62172299), in part by the Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0100), and in part by the Fundamental Research Funds for the Central Universities of China.
Abstract: Deep neural networks are now widely used in medical image segmentation for their superior performance and freedom from manual feature extraction. U-Net has been the baseline model from the very beginning, owing to a symmetrical U-structure that enables better feature extraction and fusion and suits small datasets. To enhance the segmentation performance of U-Net, the cascaded U-Net places two U-Nets in succession to segment targets from coarse to fine. However, the plain cascaded U-Net has too few connections between the two networks, so the contextual information learned by the former U-Net cannot be fully used by the latter. In this article, we devise the novel Inner Cascaded U-Net and Inner Cascaded U²-Net as improvements on the plain cascaded U-Net for medical image segmentation. The proposed Inner Cascaded U-Net adds inner nested connections between the two U-Nets to share more contextual information. To further boost segmentation performance, we propose the Inner Cascaded U²-Net, which applies residual U-blocks to capture more global contextual information at different scales. The proposed models can be trained from scratch in an end-to-end fashion and have been evaluated on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2013 and ISBI Liver Tumor Segmentation Challenge (LiTS) datasets against the related U-Net, cascaded U-Net, U-Net++, U²-Net, and state-of-the-art methods. Our experiments demonstrate that the proposed Inner Cascaded U-Net and Inner Cascaded U²-Net achieve better segmentation performance in terms of Dice similarity coefficient and Hausdorff distance, and produce finer outline segmentation.
Funding: Partially supported by NKRDP under Grant No. 2018YFA0704705 and the National Natural Science Foundation of China under Grant No. 12288201.
Abstract: In this paper, L_(2,∞) normalization of the weight matrices is used to enhance the robustness and accuracy of deep neural networks (DNNs) with ReLU activation functions. It is shown that L_(2,∞) normalization leads to large dihedral angles between adjacent faces of the DNN function graph, and hence to smoother DNN functions, which reduces over-fitting. A global measure is proposed for the robustness of a classification DNN: the average radius of the maximal robust spheres centered at the training samples. A lower bound for this robustness measure in terms of the L_(2,∞) norm is given. Finally, an upper bound for the Rademacher complexity of DNNs with L_(2,∞) normalization is given. An algorithm is presented to train DNNs with L_(2,∞) normalization, and numerical experimental results show that L_(2,∞) normalization is effective in improving both robustness and accuracy.
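The L_(2,∞) norm of a weight matrix is the largest L2 norm among its rows. One simple normalization scheme scales the matrix so this norm equals 1 (a minimal NumPy sketch; the paper's training algorithm applies the normalization during optimization rather than as a one-off rescaling):

```python
import numpy as np

def l2inf_norm(W):
    # L_(2,inf) norm: the maximum L2 norm over the rows of W
    return np.linalg.norm(W, axis=1).max()

def l2inf_normalize(W):
    # Scale W so that its L_(2,inf) norm is exactly 1
    return W / l2inf_norm(W)

W = np.array([[3.0, 4.0],   # row norm 5.0  -> determines the norm
              [1.0, 0.0]])  # row norm 1.0
print(l2inf_norm(W))                   # 5.0
print(l2inf_norm(l2inf_normalize(W)))  # 1.0
```

Bounding every row's L2 norm bounds how much any single neuron's pre-activation can grow, which is the mechanism behind the smoothness and robustness bounds in the paper.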
Abstract: Deep learning has been constantly improving in recent years, and a significant number of researchers have devoted themselves to research on defect detection algorithms. Detection and recognition of small and complex targets remains an unsolved problem. This research presents an improved defect detection model for detecting small and complex defect targets on steel surfaces. During steel strip production, mechanical forces and environmental factors cause surface defects of the steel strip; detecting such defects is therefore key to producing high-quality products, as these defects cause great economic losses to the high-tech industry. So far, few studies have explored methods of identifying these defects, and most of the currently available algorithms are not sufficiently effective. This study therefore presents an improved real-time metallic surface defect detection model based on You Only Look Once (YOLOv5), specially designed for small networks. To capture the smaller features of the target, the conventional convolution is replaced with a depthwise convolution and a channel shuffle mechanism. Then, assigning weights to the Feature Pyramid Network (FPN) output features and fusing them increases feature propagation and the network's characterization ability. The experimental results reveal that the improved model outperforms comparable models in both accuracy and detection time. The precision of the proposed model, measured by mAP@0.5, is 77.5% on the Northeastern University dataset (NEU-DET) and 70.18% on the GC10-DET dataset.
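The channel shuffle mechanism paired with depthwise convolution is typically the ShuffleNet-style operation: after a grouped or depthwise convolution, channels are interleaved across groups so information can flow between them at low cost. A NumPy sketch (hypothetical channels-first layout without a batch dimension):

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle for a (channels, H, W) tensor:
    split channels into `groups`, then interleave them so subsequent
    grouped convolutions see channels from every group."""
    c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)
             .reshape(c, h, w))

# 6 channels of 2x2; with 2 groups the channel order 0..5
# becomes 0, 3, 1, 4, 2, 5
x = np.arange(6 * 2 * 2).reshape(6, 2, 2)
print(channel_shuffle(x, 2)[:, 0, 0])  # [ 0 12  4 16  8 20]
```

Shuffling with the complementary group count (here 3) restores the original channel order, so the operation is an exact, parameter-free permutation.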
Funding: Data and Artificial Intelligence Scientific Chair at Umm Al-Qura University.
Abstract: The rapid spread of coronavirus disease (COVID-19) poses a major threat to people around the globe, and its ongoing diagnosis has become a critical challenge for the healthcare sector. The drastic increase in COVID-19 cases makes it necessary to identify people who are most likely to be infected. Testing kits have not been available in sufficient quantity, and many countries have been hit hard by the disruption, so the need of the hour is an automatic diagnosis system for the early detection of COVID-19. Clinical research indicates that most COVID-19 cases develop a lung infection, and chest X-ray and computed tomography (CT) scan images can identify such early manifestations of the disease, proving helpful to radiologists and medical practitioners. To flatten the curve of sufferers, a quick and highly responsive automatic system based on Artificial Intelligence (AI) is needed. The proposed Intelligent Decision Support System for COVID-19 empowered with deep learning (ID2S-COVID19-DL) applies deep-learning-based convolutional neural network (CNN) approaches for effective and accurate detection of coronavirus from X-ray and CT-scan images. The primary experimental results show a maximum training accuracy of about 98.11 percent and a validation accuracy of approximately 95.5 percent, while sensitivity and specificity are 98.03 percent and 98.20 percent for training, and 94.38 percent and 97.06 percent for validation, respectively. The proposed deep-learning-based CNN model performs comparably to medical experts and can help enhance the productivity of radiologists, support rapid detection of COVID-19, and contribute to overcoming the current pandemic.
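The sensitivity and specificity figures quoted above come straight from the confusion matrix. A minimal sketch with hypothetical counts (not the study's actual case numbers):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity (true-positive rate), specificity (true-negative
    rate), and accuracy, the kinds of statistical parameters reported
    for the ID2S-COVID19-DL model."""
    sensitivity = tp / (tp + fn)   # infected patients correctly flagged
    specificity = tn / (tn + fp)   # healthy patients correctly cleared
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration
print(diagnostic_metrics(tp=90, fn=10, tn=95, fp=5))  # (0.9, 0.95, 0.925)
```

High specificity matters here because a false positive sends a healthy patient into isolation and further testing, while sensitivity controls how many infections are missed.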