Landslide susceptibility mapping (LSM) plays a crucial role in assessing geological risks. Current LSM techniques face a significant challenge in achieving accurate results because of the uncertainties associated with regional-scale geotechnical parameters. To explore rainfall-induced LSM, this study proposes a hybrid model that combines a physically based probabilistic model (PPM) with a convolutional neural network (CNN). The PPM effectively captures the spatial distribution of landslides by incorporating the probability of failure (POF), which accounts for the slope-stability mechanism under rainfall conditions and characterizes the variation of POF caused by parameter uncertainties. The CNN serves as a binary classifier that captures the spatial and channel correlations between landslide conditioning factors and the probability of landslide occurrence. An OpenCV image-enhancement technique was used to extract non-landslide points based on the POF of landslides. The proposed model comprehensively considers physical mechanics when selecting non-landslide samples, effectively filtering out samples that violate physical principles and reducing the risk of overfitting. The results indicate that the proposed PPM-CNN hybrid model achieves higher prediction accuracy, with an area under the curve (AUC) of 0.85 on the landslide case of the Niangniangba area of Gansu Province, China, compared with the individual CNN model (AUC = 0.61) and the PPM (AUC = 0.74). The model can also accommodate statistical correlation and non-normal probability distributions of the model parameters. These results offer practical guidance for future research on rainfall-induced LSM at the regional scale.
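As a concrete illustration of the physically constrained negative sampling described above, here is a minimal Python sketch that draws non-landslide points only from cells whose POF is low. The raster, mask, threshold, and function name are illustrative stand-ins, not the paper's implementation (which additionally applies OpenCV image enhancement).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs: a precomputed probability-of-failure raster from the
# PPM and a boolean mask of mapped landslide cells (both assumed given).
pof = rng.random((500, 500))          # stand-in for the PPM output
landslide_mask = pof > 0.95           # stand-in for the landslide inventory

def sample_non_landslides(pof, landslide_mask, threshold=0.2, n=1000):
    """Draw non-landslide cells only where the physically based POF is low,
    so negative samples respect the slope-stability mechanism."""
    candidates = np.argwhere((pof < threshold) & ~landslide_mask)
    idx = rng.choice(len(candidates), size=min(n, len(candidates)), replace=False)
    return candidates[idx]            # (row, col) indices of negative samples

negatives = sample_non_landslides(pof, landslide_mask)
print(negatives.shape)
```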
This study introduces an innovative “Big Model” strategy to enhance bridge structural health monitoring (SHM) using a convolutional neural network (CNN), time-frequency analysis, and finite element analysis. Leveraging ensemble methods, collaborative learning, and distributed computing, the approach effectively manages the complexity and scale of large-scale bridge data. The CNN employs transfer learning, fine-tuning, and continuous monitoring to optimize models for adaptive and accurate structural health assessments, focusing on extracting meaningful features through time-frequency analysis. By integrating finite element analysis, time-frequency analysis, and CNNs, the strategy provides a comprehensive understanding of bridge health. Utilizing diverse sensor data, sophisticated feature extraction, and an advanced CNN architecture, the model is optimized through rigorous preprocessing and hyperparameter tuning. This approach significantly enhances the ability to make accurate predictions, monitor structural health, and support proactive maintenance practices, thereby ensuring the safety and longevity of critical infrastructure.
3D sparse convolution has emerged as a pivotal technique for efficient voxel-based perception in autonomous systems, enabling selective feature extraction from non-empty voxels while suppressing computational waste. Despite its theoretical efficiency advantages, practical implementations face under-explored limitations: the fixed geometric patterns of conventional sparse convolutional kernels inevitably process non-contributory positions during sliding-window operations, particularly in regions with uneven point-cloud density. To address this, we propose Hierarchical Shape Pruning for 3D Sparse Convolution (HSP-S), which dynamically eliminates redundant kernel stripes through layer-adaptive thresholding. Unlike static soft-pruning methods, HSP-S maintains trainable sparsity patterns by progressively adjusting pruning thresholds during optimization, enlarging the original parameter search space while removing redundant operations. Extensive experiments validate the effectiveness of HSP-S across major autonomous-driving benchmarks. On KITTI's 3D object detection task, our method removes 93.47% of redundant kernel computations while maintaining comparable accuracy (a 1.56% mAP drop). Remarkably, on the more complex nuScenes benchmark, HSP-S achieves a simultaneous computation reduction (21.94% sparsity) and accuracy gains (improvements of 1.02% mAP (mean average precision) and 0.47% NDS (nuScenes detection score)), demonstrating its scalability to diverse perception scenarios. This work establishes the first learnable shape-pruning framework that simultaneously enhances computational efficiency and preserves detection accuracy in 3D perception systems.
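The layer-adaptive thresholding idea can be sketched on a dense 3-D convolution (a true implementation would target a sparse-convolution library such as spconv): kernel positions whose aggregated weight norm falls below a per-layer threshold are masked out, and the threshold is raised progressively during training. Class and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn

class ShapePrunedConv3d(nn.Module):
    """Hypothetical sketch of layer-adaptive shape pruning on a dense 3-D
    convolution; kernel positions with small aggregated weight norms are
    masked, and the threshold grows during optimization."""
    def __init__(self, cin, cout, k=3, init_thresh=0.0):
        super().__init__()
        self.conv = nn.Conv3d(cin, cout, k, padding=k // 2)
        self.thresh = init_thresh

    def step_threshold(self, delta=1e-4):
        self.thresh += delta          # illustrative progressive schedule

    def forward(self, x):
        w = self.conv.weight                           # (cout, cin, k, k, k)
        stripe_norm = w.pow(2).sum(dim=(0, 1)).sqrt()  # per-position norm, (k, k, k)
        mask = (stripe_norm >= self.thresh).to(w.dtype)
        return nn.functional.conv3d(x, w * mask, self.conv.bias,
                                    padding=self.conv.padding)

layer = ShapePrunedConv3d(4, 8)
print(layer(torch.randn(1, 4, 16, 16, 16)).shape)  # torch.Size([1, 8, 16, 16, 16])
```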
The effective and timely diagnosis and treatment of ocular diseases are key to the rapid recovery of patients. Today, the mass disease that demands attention in this context is cataract. Although deep learning has significantly advanced the analysis of ocular-disease images, a probabilistic model is needed to generate distributions over potential outcomes and thus support decisions involving uncertainty quantification. Therefore, this study implements a Bayesian convolutional neural network (BCNN) model for predicting cataracts by assigning probability values to the predictions. It prepares both a convolutional neural network (CNN) and a BCNN model. The proposed BCNN model is CNN-based, with reparameterization applied in the first and last layers of the CNN. This study then trains both models on a dataset of cataract images filtered from the ocular-disease fundus images on Kaggle. The deep CNN model attains an accuracy of 95%, while the BCNN model attains an accuracy of 93.75% together with uncertainty estimates for cataract and normal eye conditions. Compared with other methods, the proposed work proves a promising solution for cataract prediction with uncertainty estimation.
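A minimal sketch of the reparameterization trick that makes a convolutional layer Bayesian: weights are drawn as w = μ + σ·ε on each forward pass, and repeated stochastic passes yield an uncertainty estimate. The layer below is illustrative, not the paper's exact architecture (which reparameterizes only the first and last layers of a full CNN).

```python
import torch
import torch.nn as nn

class BayesConv2d(nn.Module):
    """Sketch of a reparameterized convolution for a BCNN: weights are sampled
    as w = mu + sigma * eps every forward pass (priors and KL terms omitted;
    shapes are illustrative)."""
    def __init__(self, cin, cout, k=3):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(cout, cin, k, k))
        self.rho = nn.Parameter(torch.full((cout, cin, k, k), -4.0))
        nn.init.kaiming_normal_(self.mu)

    def forward(self, x):
        sigma = torch.nn.functional.softplus(self.rho)  # keep sigma positive
        w = self.mu + sigma * torch.randn_like(sigma)   # reparameterization trick
        return torch.nn.functional.conv2d(x, w, padding=1)

# Predictive uncertainty: repeat stochastic forward passes and inspect spread.
layer = BayesConv2d(3, 8)
x = torch.randn(2, 3, 32, 32)
samples = torch.stack([layer(x) for _ in range(20)])
print(samples.mean(0).shape, samples.std(0).mean().item())
```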
In the burgeoning field of anomaly detection within attributed networks, traditional methodologies often encounter the intricacies of network complexity, particularly in capturing nonlinearity and sparsity. This study introduces an innovative approach that synergizes the strengths of graph convolutional networks with advanced deep residual learning and a unique residual-based attention mechanism, thereby creating a more nuanced and efficient method for anomaly detection in complex networks. The heart of our model lies in the integration of graph convolutional networks that capture complex structural relationships within the network data. This is further bolstered by deep residual learning, which is employed to model intricate nonlinear connections directly from input data. A pivotal innovation in our approach is the incorporation of a residual-based attention mechanism. This mechanism dynamically adjusts the importance of nodes based on their residual information, thereby significantly enhancing the sensitivity of the model to subtle anomalies. Furthermore, we introduce a novel hypersphere mapping technique in the latent space to distinctly separate normal and anomalous data. This mapping is the key to our model's ability to pinpoint anomalies with greater precision. An extensive experimental setup was used to validate the efficacy of the proposed model. Using attributed social network datasets, we demonstrate that our model not only competes with but also surpasses existing state-of-the-art methods in anomaly detection. The results show the exceptional capability of our model to handle the multifaceted nature of real-world networks.
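The hypersphere mapping is described only at a high level; one plausible realization is a Deep-SVDD-style objective that pulls normal embeddings toward a fixed center and scores anomalies by their distance to it. The sketch below is a generic stand-in, with illustrative names and shapes.

```python
import torch

def hypersphere_loss(z, center):
    """Deep-SVDD-style objective: pull normal latent embeddings toward a
    fixed center; anomaly score is the squared distance to that center."""
    return ((z - center) ** 2).sum(dim=1).mean()

z = torch.randn(64, 32, requires_grad=True)       # latent embeddings (toy)
center = torch.zeros(32)                          # hypersphere center
loss = hypersphere_loss(z, center)
loss.backward()                                   # gradients flow to the encoder
scores = ((z.detach() - center) ** 2).sum(dim=1)  # per-node anomaly scores
print(loss.item(), scores.shape)
```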
Accurate traffic flow prediction has a profound impact on modern traffic management. Traffic flow has complex spatial-temporal correlations and periodicity, which poses difficulties for precise prediction. To address this problem, a Multi-head Self-attention and Spatial-Temporal Graph Convolutional Network (MSSTGCN) for multiscale traffic flow prediction is proposed. Firstly, to capture hidden periodicity, the traffic flow is divided into three kinds of periods: hourly, daily, and weekly data. Secondly, a graph attention residual layer is constructed to learn the global spatial features across regions, while local spatial-temporal dependence is captured by a T-GCN module. Thirdly, a transformer layer is introduced to learn the long-term dependence in time, and a position embedding mechanism labels position information for all traffic sequences. Thus, the multi-head self-attention mechanism can recognize the sequence order and allocate weights to different time nodes. Experimental results on four real-world datasets show that MSSTGCN performs better than the baseline methods and can be successfully adapted to traffic prediction tasks.
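A toy sketch of the hourly/daily/weekly period split described above, assuming a 1-D flow series sampled every five minutes; the segment lengths, alignment, and function name are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def split_periods(flow, steps_per_hour=12, horizon=12):
    """Split a 1-D traffic series into recent (last hour), daily, and weekly
    segments aligned with the prediction window flow[t : t + horizon]."""
    day = 24 * steps_per_hour
    week = 7 * day
    t = len(flow)
    recent = flow[t - steps_per_hour:]           # last hour of observations
    daily = flow[t - day: t - day + horizon]     # same slot yesterday
    weekly = flow[t - week: t - week + horizon]  # same slot last week
    return recent, daily, weekly

flow = np.sin(np.linspace(0, 60, 7 * 24 * 12 + 100))  # synthetic series
r, d, w = split_periods(flow)
print(r.shape, d.shape, w.shape)  # (12,) (12,) (12,)
```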
Developing an accurate and efficient comprehensive water-quality prediction model and its assessment method is crucial for the prevention and control of water pollution. Deep learning (DL), one of the most promising technologies today, plays a crucial role in the effective assessment of water-body health, which is essential for water-resource management. This study builds models on both the original dataset and a dataset augmented with generative adversarial networks (GAN), and integrates optimization algorithms (OA) with convolutional neural networks (CNN) to propose a comprehensive model-evaluation method aimed at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Then, six new models were developed on these datasets using particle swarm optimization (PSO), a genetic algorithm (GA), and simulated annealing (SA) combined with CNN to simulate and forecast the concentrations of three water pollutants: chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP). Finally, seven model-evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal model for each pollutant. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN prediction. Compared with existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water-body prediction and assessment, offering new insights and methods for water-pollution prevention and control.
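A minimal particle-swarm sketch of the OA-CNN coupling described above: PSO searches CNN hyperparameters against a black-box fitness function. The quadratic `fitness` below is a stand-in for training and validating a CNN, and all names and bounds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(params):
    """Stand-in for a CNN's validation error under these hyperparameters
    (learning rate, filter count); the real objective would train and
    evaluate the network on the water-quality data."""
    lr, filters = params
    return (np.log10(lr) + 3.0) ** 2 + (filters - 32.0) ** 2 / 100.0

# Minimal particle swarm over (learning rate, number of filters).
n, iters = 20, 50
lo, hi = np.array([1e-5, 8.0]), np.array([1e-1, 64.0])
pos = np.column_stack([10 ** rng.uniform(-5, -1, n), rng.uniform(8, 64, n)])
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()]
for _ in range(iters):
    r1, r2 = rng.random((2, n, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)   # keep particles in a valid range
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]
print("best (learning rate, filters):", gbest)
```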
The analysis of Android malware shows that this threat is constantly increasing and poses a real risk to mobile devices, since traditional approaches such as signature-based detection are no longer effective against its continuously advancing sophistication. Resolving this problem requires efficient and flexible malware detection tools. This work examines the use of deep CNNs to detect Android malware by transforming network traffic into image representations. The dataset used in this study is CIC-AndMal2017, which contains 20,000 instances of network traffic across five distinct malware categories: Trojan, Adware, Ransomware, Spyware, and Worm. The network-traffic features are converted to image format for deep learning within a CNN framework, including the pre-trained VGG16 model. Our approach yielded high performance, with an accuracy of 99.1%, precision of 98.2%, recall of 99.5%, and an F1 score of 98.7%. Subsequent improvements to the classification model based on the VGG19 architecture raised the classification rate to 99.25%. The results make clear that CNNs are a highly effective way to classify Android malware, providing greater accuracy than conventional techniques. The success of this approach also shows the applicability of deep learning to mobile security, along with directions for future work on real-time detection systems and deeper learning techniques to counter the growing number of emerging threats.
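A toy sketch of the traffic-to-image conversion step, assuming each flow is already summarized as a fixed-length feature vector; the min-max scaling and 16×16 layout are illustrative choices, not the paper's exact mapping.

```python
import numpy as np

def flows_to_images(features, side=16):
    """Convert per-flow network-traffic feature vectors into grayscale images
    for a CNN: min-max scale each feature to [0, 255], then pad each vector
    into a side x side grid."""
    features = np.asarray(features, dtype=np.float64)
    lo, hi = features.min(axis=0), features.max(axis=0)
    scaled = (features - lo) / np.where(hi > lo, hi - lo, 1.0) * 255.0
    padded = np.zeros((len(features), side * side))
    padded[:, : features.shape[1]] = scaled
    return padded.reshape(-1, side, side).astype(np.uint8)

X = np.random.rand(100, 80)   # 100 flows, 80 traffic features each (toy data)
images = flows_to_images(X)
print(images.shape)           # (100, 16, 16), ready as CNN input
```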
The telecommunications industry is becoming increasingly aware of potential subscriber churn as a result of the growing popularity of smartphones in the mobile Internet era, the rapid development of telecommunications services, the implementation of the number-portability policy, and the intensifying competition among operators. At the same time, users' consumption preferences and choices are evolving. Because retaining existing customers is far less expensive than acquiring new ones, accurate churn prediction models are essential. However, conventional and learning-based algorithms can only go so deep into a single subscriber's data: they cannot account for changes in a subscriber's subscription and ignore the coupling and correlation between features. Additionally, current churn prediction models carry a high computational burden, fuzzy weight distributions, and significant resource costs. Existing prediction algorithms involving network models primarily consider the private information shared between users through text and pictures, ignoring the reference value supplied by other users on the same package. This work proposes a user churn prediction model based on a Graph Attention Convolutional Neural Network (GAT-CNN) to address these issues. The main contributions of this paper are as follows. First, we present a three-tiered hierarchical cloud-edge cooperative framework that increases the volume of user-feature input by means of two aggregations at the device, edge, and cloud layers. Second, we extend the use of users' own data by introducing self-attention and graph convolution models to track the relative changes of both users and packages simultaneously. Third, we build an integrated offline-online system for churn prediction based on the strengths of the two models, and we experimentally validate the efficacy of cloud-edge collaborative training and inference. In summary, the proposed GAT-CNN churn prediction model can effectively address the drawbacks of conventional algorithms and offer telecom operators crucial decision support in developing subscriber-retention strategies and cutting operational expenses.
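The graph-attention building block referred to above can be sketched as a single-head GAT-style layer (a generic formulation, not the paper's GAT-CNN): each node attends only to its neighbors, with learned attention coefficients. The toy graph and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class GraphAttention(nn.Module):
    """Single-head GAT-style layer: attention scores are computed for
    neighbor pairs only, then used to aggregate transformed features."""
    def __init__(self, fin, fout):
        super().__init__()
        self.W = nn.Linear(fin, fout, bias=False)
        self.a = nn.Linear(2 * fout, 1, bias=False)

    def forward(self, x, adj):
        h = self.W(x)                                   # (N, fout)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = torch.nn.functional.leaky_relu(self.a(pairs).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))      # neighbors only
        alpha = torch.softmax(e, dim=-1)
        return alpha @ h

# Toy 4-node path graph with self-loops.
adj = torch.eye(4) + torch.diag(torch.ones(3), 1) + torch.diag(torch.ones(3), -1)
layer = GraphAttention(8, 16)
print(layer(torch.randn(4, 8), adj).shape)  # torch.Size([4, 16])
```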
Deep learning, especially through convolutional neural networks (CNNs) such as the U-Net 3D model, has revolutionized fault identification from seismic data, representing a significant leap over traditional methods. Our review traces the evolution of CNNs, emphasizing the adaptation and capabilities of the U-Net 3D model in automating seismic-fault delineation with unprecedented accuracy. We find: 1) The transition from basic neural networks to sophisticated CNNs has enabled remarkable advances in image recognition, which are directly applicable to analyzing seismic data. The U-Net 3D model, with its innovative architecture, exemplifies this progress by providing detailed and accurate fault detection with reduced manual interpretation bias. 2) The U-Net 3D model has demonstrated its superiority over traditional fault-identification methods in several key areas: it has enhanced interpretation accuracy, increased operational efficiency, and reduced the subjectivity of manual methods. 3) Despite these achievements, challenges remain, including effective data preprocessing, acquisition of high-quality annotated datasets, and model generalization across different geological conditions. Future research should therefore focus on developing more sophisticated network architectures and innovative training strategies to further refine fault-identification performance. Our findings confirm the transformative potential of deep learning, particularly of CNNs such as the U-Net 3D model, in the geosciences, advocating its broader integration to revolutionize geological exploration and seismic analysis.
Since chemical processes are highly nonlinear and multiscale, it is vital to deeply mine the multiscale coupling relationships embedded in the massive process data for the prediction and anomaly tracing of crucial process parameters and production indicators. While the integrated method of adaptive signal decomposition combined with time-series models can effectively predict process variables, it has limitations in capturing the high-frequency detail of the operating state when applied to complex chemical processes. In light of this, a novel Multiscale Multi-radius Multi-step Convolutional Neural Network (Msrt Net) is proposed for mining spatiotemporal multiscale information. First, industrial data from the fluid catalytic cracking (FCC) process are decomposed using complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) to extract the multi-energy-scale information of the feature subset. Then, convolution kernels with varying stride and padding structures are established to decouple the long-period operating information encapsulated within the multi-energy-scale data. Finally, a reconciliation network is trained to reconstruct the multiscale prediction results and produce the final output. Msrt Net is first assessed for its capability to untangle the spatiotemporal multiscale relationships among variables in the Tennessee Eastman Process (TEP). Its performance is then evaluated in predicting product yield for a 2.80×10^(6) t/a FCC unit, taking diesel and gasoline yields as examples. In conclusion, Msrt Net can decouple and effectively extract spatiotemporal multiscale information from chemical process data, achieving a reduction of approximately 30% in prediction error compared with other time-series models. Furthermore, its robustness and transferability underscore its promising potential for broader applications.
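The multi-radius idea, convolution kernels with varying size, stride, and padding decoupling information at different scales, can be sketched as parallel 1-D branches over decomposed signals (e.g., CEEMDAN modes). The branch settings below are illustrative, not Msrt Net's exact configuration.

```python
import torch
import torch.nn as nn

class MultiRadiusBlock(nn.Module):
    """Parallel 1-D convolution branches with different kernel sizes and
    paddings over decomposed process signals; outputs are concatenated."""
    def __init__(self, cin, cout):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(cin, cout, kernel_size=k, stride=1, padding=k // 2)
            for k in (3, 7, 15)          # small to large receptive "radius"
        ])

    def forward(self, x):                # x: (batch, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)

block = MultiRadiusBlock(4, 8)           # e.g., 4 CEEMDAN mode signals
print(block(torch.randn(2, 4, 128)).shape)  # torch.Size([2, 24, 128])
```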
To efficiently predict the mechanical parameters of granular soil from its random microstructure, this study proposes a novel approach combining numerical simulation and machine learning. Initially, 3500 simulations of one-dimensional compression tests on coarse-grained sand using the three-dimensional (3D) discrete element method (DEM) were conducted to construct a database. In this process, the positions of the particles were randomly altered so that the particle assemblages changed. Interestingly, besides confirming the influence of the particle-size-distribution parameters, the stress-strain curves differed when the particle positions varied despite an identical gradation-size statistic. Subsequently, the obtained data were partitioned into training, validation, and testing datasets at a 7:2:1 ratio. To convert the DEM model into a multi-dimensional matrix that computers can recognize, the 3D DEM models were first sliced to extract multi-layer two-dimensional (2D) cross-sectional data. Redundant information was then eliminated via gray processing, and the data were stacked to form a new 3D matrix representing the granular soil's fabric. A 3D convolutional neural network (CNN) was then developed in Python with the PyTorch framework to establish the relationship between the constrained modulus obtained from DEM simulations and the soil's fabric. The mean squared error (MSE) function was used to assess the loss value during training. When the learning rate (LR) fell within the range of 10^(-5) to 10^(-1) and the batch size (BS) was 4, 8, 16, 32, or 64, the loss value stabilized after 100 training epochs on the training and validation datasets. For BS = 32 and LR = 10^(-3), the loss reached a minimum. On the testing set, a comparison of the constrained modulus predicted by the 3D CNN with the modulus simulated via DEM reveals a minimum mean absolute percentage error (MAPE) of 4.43% under the optimized condition, demonstrating the accuracy of this approach. Thus, by combining DEM and CNNs, the variation of a soil's mechanical characteristics with its random fabric can be efficiently evaluated by directly tracking the particle assemblages.
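A minimal PyTorch sketch of the described setup: a small 3-D CNN maps a voxelized fabric matrix to a scalar constrained modulus and is trained with MSE loss at LR = 10^(-3) and BS = 32. The layer sizes and synthetic tensors are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FabricCNN(nn.Module):
    """Toy 3-D CNN regressor from a stacked voxel fabric to one scalar."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
            nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, x):
        return self.net(x)

model = FabricCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # LR = 10^(-3)
loss_fn = nn.MSELoss()
x = torch.randn(32, 1, 32, 32, 32)   # batch of 32 voxelized fabrics (toy)
y = torch.randn(32, 1)               # DEM-simulated constrained moduli (toy)
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(loss.item())
```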
The motivation for this study is that the quality of deepfakes is constantly improving, which creates a need for new detection methods. The proposed customized convolutional neural network (CNN) method involves extracting structured data from video frames using facial-landmark detection, which is then used as input to the CNN. The customized CNN is a data-augmentation-based model that generates ‘fake data’ or ‘fake images’ for training. This study was carried out using Python and its libraries. We used 242 films from the dataset gathered by the Deep Fake Detection Challenge, of which 199 were fake and the remaining 53 were real; ten seconds were allotted for each video. In all, 318 videos were used, 199 of which were fake and 119 of which were real. Our proposed method achieved a testing accuracy of 91.47%, a loss of 0.342, and an AUC score of 0.92, outperforming two alternative approaches, CNN and MLP-CNN. Furthermore, our method achieved greater accuracy than contemporary models such as XceptionNet, Meso-4, EfficientNet-B0, MesoInception-4, VGG-16, and DST-Net. The novelty of this investigation is the development of a new CNN learning model that can accurately detect deepfake face photos.
System design and optimization problems require large-scale chemical kinetic models. Pure kinetic models of naphtha pyrolysis need to solve a complete set of stiff ODEs and are therefore too computationally expensive. On the other hand, artificial neural networks that completely neglect the topology of the reaction networks often generalize poorly. In this paper, a framework is proposed for learning local representations from large-scale chemical reaction networks. First, the features of naphtha pyrolysis reactions are extracted by applying complex-network characterization methods. The selected features are then used as inputs to convolutional architectures. Different CNN models are established and compared to optimize the network structure. After the pre-training and fine-tuning steps, the final CNN model reduces the computational cost of the previous kinetic model by over 300 times and predicts the yields of the main products with an average error of less than 3%. The obtained results demonstrate the high efficiency of the proposed framework.
Sanduao is an important sea-breeding bay in Fujian, South China, and holds a high economic status in aquaculture. Quickly and accurately obtaining information on the distribution, quantity, and extent of aquaculture areas is important for breeding-area planning, production-value estimation, ecological surveys, and storm-surge prevention. However, as the aquaculture area expands, the seawater background becomes increasingly complex and spectral characteristics differ dramatically, making it difficult to delineate the aquaculture area. In this study, we used a high-resolution GF-2 remote-sensing satellite image and introduced a deep-learning Richer Convolutional Features (RCF) network model to extract the aquaculture area. We then used the density of aquaculture as an assessment index to assess the vulnerability of aquaculture areas in Sanduao. The results demonstrate that this method does not require separating land and water in advance, and good extraction can be achieved in areas with more sediment and waves, with an extraction accuracy above 93%, which is suitable for large-scale aquaculture-area extraction. The vulnerability assessment indicates that the density of aquaculture in the eastern part of Sanduao is considerably high, reaching a higher vulnerability level than other parts.
The perfectly matched layer (PML) was first introduced by Berenger as an absorbing boundary condition for electromagnetic wave propagation. In this article, a method is developed to extend the PML to simulating seismic wave propagation in fluid-saturated porous media. This non-physical boundary is used at the computational edge of a Forsyte polynomial convolutional differentiator (FPCD) algorithm as an absorbing boundary condition to truncate unbounded media. The incorporation of the PML into Biot's equations is given. Numerical results show that the PML absorbing boundary condition attenuates the outgoing waves effectively and eliminates the reflections adequately.
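For reference, the standard complex coordinate-stretching form on which PML implementations rest is sketched below in generic notation; this is not necessarily the paper's exact formulation for Biot's equations.

```latex
% Inside the layer, each frequency-domain spatial derivative is stretched:
\frac{\partial}{\partial x}\;\longrightarrow\;\frac{1}{s_x(x)}\frac{\partial}{\partial x},
\qquad s_x(x) = 1 + \frac{d(x)}{\mathrm{i}\omega},
% with a damping profile that vanishes at the interior interface, commonly
d(x) = \frac{3 v_p}{2L}\,\ln\!\frac{1}{R}\left(\frac{x}{L}\right)^{2},
```

where x is the depth into the layer, L the layer thickness, v_p the P-wave velocity, and R the target theoretical reflection coefficient.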
In recent years, there has been growing interest in graph convolutional networks (GCN). However, existing GCNs and their variants are predominantly based on simple graph or hypergraph structures, which restricts their ability to handle complex data correlations in practical applications. These limitations stem from the difficulty of establishing multiple hierarchies and acquiring adaptive weights for each of them. To address this issue, this paper introduces the recent concept of complex hypergraphs and constructs a versatile high-order multi-level data-correlation model. The model is realized by establishing a three-tier structure of complexes-hypergraphs-vertices. Specifically, we start by establishing hyperedge clusters on a foundational network, utilizing a second-order hypergraph structure to depict potential correlations. For this second-order structure, truncation methods are used to assess and generate a three-layer composite structure. During the construction of the composite structure, an adaptive learning strategy is implemented to merge correlations across different levels. We evaluate this model on several popular datasets and compare it with recent state-of-the-art methods. The comprehensive assessment results demonstrate that the proposed model surpasses the existing methods, particularly in modeling implicit data correlations (the node-classification accuracies on the five public datasets Cora, Citeseer, Pubmed, Github Web ML, and Facebook are 86.1±0.33, 79.2±0.35, 83.1±0.46, 83.8±0.23, and 80.1±0.37, respectively). This indicates that our approach possesses advantages in handling datasets with implicit multi-level structures.
Recommendation information systems (RIS) are pivotal in helping users swiftly locate desired content from the vast amount of information available on the Internet. Graph convolution network (GCN) algorithms have been employed to implement RIS efficiently. However, the GCN algorithm faces limitations in performance enhancement owing to the embedding value-vanishing problem that occurs during the learning process. To address this issue, we propose a weighted-forwarding method using the GCN (WF-GCN) algorithm. The proposed method multiplies the embedding results by different weights for each hop layer during graph learning. By applying the WF-GCN algorithm, which adjusts weights for each hop layer before forwarding to the next, nodes with many neighbors obtain higher embedding values. This approach facilitates the learning of more hop layers within the GCN framework. The efficacy of WF-GCN was demonstrated through its application to various datasets. On the MovieLens dataset, implementing WF-GCN in LightGCN yielded significant performance improvements, with recall and NDCG increasing by up to +163.64% and +132.04%, respectively. Similarly, on the Last.FM dataset, LightGCN enhanced with WF-GCN showed substantial improvements, with recall and NDCG rising by up to +174.40% and +169.95%, respectively. Furthermore, applying WF-GCN to Self-supervised Graph Learning (SGL) and Simple Graph Contrastive Learning (SimGCL) also demonstrated notable enhancements in both recall and NDCG across these datasets.
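A minimal sketch of the weighted-forwarding idea on LightGCN-style propagation: each hop's embedding is scaled by a hop-specific weight before the hop outputs are combined, counteracting vanishing embedding values in deeper hops. The hop weights and toy graph below are illustrative.

```python
import torch

def weighted_forwarding(x, adj_norm, hop_weights):
    """Propagate embeddings hop by hop and scale each hop's result by its
    own weight before averaging the hop outputs."""
    h, out = x, []
    for w in hop_weights:
        h = adj_norm @ h           # one hop of LightGCN-style propagation
        out.append(w * h)          # reweight before forwarding/combining
    return torch.stack(out).mean(0)

# Toy symmetric-normalized adjacency for a 4-node path graph with self-loops.
A = torch.eye(4) + torch.diag(torch.ones(3), 1) + torch.diag(torch.ones(3), -1)
d = A.sum(1)
adj_norm = A / torch.sqrt(d.unsqueeze(0) * d.unsqueeze(1))
emb = weighted_forwarding(torch.randn(4, 16), adj_norm, hop_weights=[1.0, 1.5, 2.0])
print(emb.shape)  # torch.Size([4, 16])
```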
Traditional vehicle detection algorithms use traverse-search-based vehicle-candidate generation and hand-crafted classifier training for candidate verification. Such methods generally suffer from long processing times and low detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual-saliency calculation is first used to generate a small vehicle-candidate area. The vehicle-candidate sub-images are then fed into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct detection rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, outperforming existing state-of-the-art algorithms. More importantly, the deep sparse convolution network generates highly discriminative multiscale features, which have broad application prospects for target recognition in the field of intelligent vehicles.
Every day, websites and personal archives create more and more photos, and the size of these archives is immense. The ease of use of these huge digital image collections contributes to their popularity. However, not all of these collections provide relevant indexing information, which makes it difficult to find the data a user is interested in. To determine the significance of the data, it is therefore important to identify image contents in an informative manner. Image annotation is one of the most challenging problems in multimedia research and computer vision. Hence, in this paper, an Adaptive Convolutional Deep Learning Model (ACDLM) is developed for automatic image annotation. Initially, the databases are collected from open-source systems and consist of some labelled images (for the training phase) and some unlabeled images (Corel 5K, MSRC v2). The images are then sent to preprocessing steps such as colour-space quantization and texture-colour class mapping. The preprocessed images are passed to a segmentation approach for efficient labelling using J-image segmentation (JSEG). The final step is automatic annotation using the ACDLM, a combination of a Convolutional Neural Network (CNN) and the Honey Badger Algorithm (HBA); based on the proposed classifier, the unlabeled images are labelled. The proposed methodology is implemented in MATLAB, and performance is evaluated with metrics such as accuracy, precision, recall, and F1-measure.
基金funding support from the National Natural Science Foundation of China(Grant Nos.U22A20594,52079045)Hong-Zhi Cui acknowledges the financial support of the China Scholarship Council(Grant No.CSC:202206710014)for his research at Universitat Politecnica de Catalunya,Barcelona.
文摘Landslide susceptibility mapping(LSM)plays a crucial role in assessing geological risks.The current LSM techniques face a significant challenge in achieving accurate results due to uncertainties associated with regional-scale geotechnical parameters.To explore rainfall-induced LSM,this study proposes a hybrid model that combines the physically-based probabilistic model(PPM)with convolutional neural network(CNN).The PPM is capable of effectively capturing the spatial distribution of landslides by incorporating the probability of failure(POF)considering the slope stability mechanism under rainfall conditions.This significantly characterizes the variation of POF caused by parameter uncertainties.CNN was used as a binary classifier to capture the spatial and channel correlation between landslide conditioning factors and the probability of landslide occurrence.OpenCV image enhancement technique was utilized to extract non-landslide points based on the POF of landslides.The proposed model comprehensively considers physical mechanics when selecting non-landslide samples,effectively filtering out samples that do not adhere to physical principles and reduce the risk of overfitting.The results indicate that the proposed PPM-CNN hybrid model presents a higher prediction accuracy,with an area under the curve(AUC)value of 0.85 based on the landslide case of the Niangniangba area of Gansu Province,China compared with the individual CNN model(AUC=0.61)and the PPM(AUC=0.74).This model can also consider the statistical correlation and non-normal probability distributions of model parameters.These results offer practical guidance for future research on rainfall-induced LSM at the regional scale.
文摘This study introduces an innovative“Big Model”strategy to enhance Bridge Structural Health Monitoring(SHM)using a Convolutional Neural Network(CNN),time-frequency analysis,and fine element analysis.Leveraging ensemble methods,collaborative learning,and distributed computing,the approach effectively manages the complexity and scale of large-scale bridge data.The CNN employs transfer learning,fine-tuning,and continuous monitoring to optimize models for adaptive and accurate structural health assessments,focusing on extracting meaningful features through time-frequency analysis.By integrating Finite Element Analysis,time-frequency analysis,and CNNs,the strategy provides a comprehensive understanding of bridge health.Utilizing diverse sensor data,sophisticated feature extraction,and advanced CNN architecture,the model is optimized through rigorous preprocessing and hyperparameter tuning.This approach significantly enhances the ability to make accurate predictions,monitor structural health,and support proactive maintenance practices,thereby ensuring the safety and longevity of critical infrastructure.
文摘3D sparse convolution has emerged as a pivotal technique for efficient voxel-based perception in autonomous systems,enabling selective feature extraction from non-empty voxels while suppressing computational waste.Despite its theoretical efficiency advantages,practical implementations face under-explored limitations:the fixed geometric patterns of conventional sparse convolutional kernels inevitably process non-contributory positions during sliding-window operations,particularly in regions with uneven point cloud density.To address this,we propose Hierarchical Shape Pruning for 3D Sparse Convolution(HSP-S),which dynamically eliminates redundant kernel stripes through layer-adaptive thresholding.Unlike static soft pruning methods,HSP-S maintains trainable sparsity patterns by progressively adjusting pruning thresholds during optimization,enlarging original parameter search space while removing redundant operations.Extensive experiments validate effectiveness of HSP-S acrossmajor autonomous driving benchmarks.On KITTI’s 3D object detection task,our method reduces 93.47%redundant kernel computations whilemaintaining comparable accuracy(1.56%mAP drop).Remarkably,on themore complexNuScenes benchmark,HSP-S achieves simultaneous computation reduction(21.94%sparsity)and accuracy gains(1.02%mAP(mean Average Precision)and 0.47%NDS(nuScenes detection score)improvement),demonstrating its scalability to diverse perception scenarios.This work establishes the first learnable shape pruning framework that simultaneously enhances computational efficiency and preserves detection accuracy in 3D perception systems.
基金Saudi Arabia for funding this work through Small Research Group Project under Grant Number RGP.1/316/45.
文摘The effective and timely diagnosis and treatment of ocular diseases are key to the rapid recovery of patients.Today,the mass disease that needs attention in this context is cataracts.Although deep learning has significantly advanced the analysis of ocular disease images,there is a need for a probabilistic model to generate the distributions of potential outcomes and thusmake decisions related to uncertainty quantification.Therefore,this study implements a Bayesian Convolutional Neural Networks(BCNN)model for predicting cataracts by assigning probability values to the predictions.It prepares convolutional neural network(CNN)and BCNN models.The proposed BCNN model is CNN-based in which reparameterization is in the first and last layers of the CNN model.This study then trains them on a dataset of cataract images filtered from the ocular disease fundus images fromKaggle.The deep CNN model has an accuracy of 95%,while the BCNN model has an accuracy of 93.75% along with information on uncertainty estimation of cataracts and normal eye conditions.When compared with other methods,the proposed work reveals that it can be a promising solution for cataract prediction with uncertainty estimation.
文摘In the burgeoning field of anomaly detection within attributed networks,traditional methodologies often encounter the intricacies of network complexity,particularly in capturing nonlinearity and sparsity.This study introduces an innovative approach that synergizes the strengths of graph convolutional networks with advanced deep residual learning and a unique residual-based attention mechanism,thereby creating a more nuanced and efficient method for anomaly detection in complex networks.The heart of our model lies in the integration of graph convolutional networks that capture complex structural relationships within the network data.This is further bolstered by deep residual learning,which is employed to model intricate nonlinear connections directly from input data.A pivotal innovation in our approach is the incorporation of a residual-based attention mech-anism.This mechanism dynamically adjusts the importance of nodes based on their residual information,thereby significantly enhancing the sensitivity of the model to subtle anomalies.Furthermore,we introduce a novel hypersphere mapping technique in the latent space to distinctly separate normal and anomalous data.This mapping is the key to our model’s ability to pinpoint anomalies with greater precision.An extensive experimental setup was used to validate the efficacy of the proposed model.Using attributed social network datasets,we demonstrate that our model not only competes with but also surpasses existing state-of-the-art methods in anomaly detection.The results show the exceptional capability of our model to handle the multifaceted nature of real-world networks.
基金supported by the National Natural Science Foundation of China(Grant Nos.62472149,62376089,62202147)Hubei Provincial Science and Technology Plan Project(2023BCB04100).
文摘Accurate traffic flow prediction has a profound impact on modern traffic management. Traffic flow has complex spatial-temporal correlations and periodicity, which poses difficulties for precise prediction. To address this problem, a Multi-head Self-attention and Spatial-Temporal Graph Convolutional Network (MSSTGCN) for multiscale traffic flow prediction is proposed. Firstly, to capture the hidden traffic periodicity of traffic flow, traffic flow is divided into three kinds of periods, including hourly, daily, and weekly data. Secondly, a graph attention residual layer is constructed to learn the global spatial features across regions. Local spatial-temporal dependence is captured by using a T-GCN module. Thirdly, a transformer layer is introduced to learn the long-term dependence in time. A position embedding mechanism is introduced to label position information for all traffic sequences. Thus, this multi-head self-attention mechanism can recognize the sequence order and allocate weights for different time nodes. Experimental results on four real-world datasets show that the MSSTGCN performs better than the baseline methods and can be successfully adapted to traffic prediction tasks.
基金Supported by Natural Science Basic Research Plan in Shaanxi Province of China(Program No.2022JM-396)the Strategic Priority Research Program of the Chinese Academy of Sciences,Grant No.XDA23040101+4 种基金Shaanxi Province Key Research and Development Projects(Program No.2023-YBSF-437)Xi'an Shiyou University Graduate Student Innovation Fund Program(Program No.YCX2412041)State Key Laboratory of Air Traffic Management System and Technology(SKLATM202001)Tianjin Education Commission Research Program Project(2020KJ028)Fundamental Research Funds for the Central Universities(3122019132)。
文摘Developing an accurate and efficient comprehensive water quality prediction model and its assessment method is crucial for the prevention and control of water pollution.Deep learning(DL),as one of the most promising technologies today,plays a crucial role in the effective assessment of water body health,which is essential for water resource management.This study models using both the original dataset and a dataset augmented with Generative Adversarial Networks(GAN).It integrates optimization algorithms(OA)with Convolutional Neural Networks(CNN)to propose a comprehensive water quality model evaluation method aiming at identifying the optimal models for different pollutants.Specifically,after preprocessing the spectral dataset,data augmentation was conducted to obtain two datasets.Then,six new models were developed on these datasets using particle swarm optimization(PSO),genetic algorithm(GA),and simulated annealing(SA)combined with CNN to simulate and forecast the concentrations of three water pollutants:Chemical Oxygen Demand(COD),Total Nitrogen(TN),and Total Phosphorus(TP).Finally,seven model evaluation methods,including uncertainty analysis,were used to evaluate the constructed models and select the optimal models for the three pollutants.The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations,while the GGACNN model excelled in TN concentration prediction.Compared to existing technologies,the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment,offering new insights and methods for water pollution prevention and control.
基金funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University,through the Research Funding Program,Grant No.(FRP-1443-15).
文摘The analysis of Android malware shows that this threat is constantly increasing and is a real threat to mobile devices since traditional approaches,such as signature-based detection,are no longer effective due to the continuously advancing level of sophistication.To resolve this problem,efficient and flexible malware detection tools are needed.This work examines the possibility of employing deep CNNs to detect Android malware by transforming network traffic into image data representations.Moreover,the dataset used in this study is the CIC-AndMal2017,which contains 20,000 instances of network traffic across five distinct malware categories:a.Trojan,b.Adware,c.Ransomware,d.Spyware,e.Worm.These network traffic features are then converted to image formats for deep learning,which is applied in a CNN framework,including the VGG16 pre-trained model.In addition,our approach yielded high performance,yielding an accuracy of 0.92,accuracy of 99.1%,precision of 98.2%,recall of 99.5%,and F1 score of 98.7%.Subsequent improvements to the classification model through changes within the VGG19 framework improved the classification rate to 99.25%.Through the results obtained,it is clear that CNNs are a very effective way to classify Android malware,providing greater accuracy than conventional techniques.The success of this approach also shows the applicability of deep learning in mobile security along with the direction for the future advancement of the real-time detection system and other deeper learning techniques to counter the increasing number of threats emerging in the future.
基金supported by National Key R&D Program of China(No.2022YFB3104500)Natural Science Foundation of Jiangsu Province(No.BK20222013)Scientific Research Foundation of Nanjing Institute of Technology(No.3534113223036)。
文摘The telecommunications industry is becoming increasingly aware of potential subscriber churn as a result of the growing popularity of smartphones in the mobile Internet era,the quick development of telecommunications services,the implementation of the number portability policy,and the intensifying competition among operators.At the same time,users'consumption preferences and choices are evolving.Excellent churn prediction models must be created in order to accurately predict the churn tendency,since keeping existing customers is far less expensive than acquiring new ones.But conventional or learning-based algorithms can only go so far into a single subscriber's data;they cannot take into consideration changes in a subscriber's subscription and ignore the coupling and correlation between various features.Additionally,the current churn prediction models have a high computational burden,a fuzzy weight distribution,and significant resource economic costs.The prediction algorithms involving network models currently in use primarily take into account the private information shared between users with text and pictures,ignoring the reference value supplied by other users with the same package.This work suggests a user churn prediction model based on Graph Attention Convolutional Neural Network(GAT-CNN)to address the aforementioned issues.The main contributions of this paper are as follows:Firstly,we present a three-tiered hierarchical cloud-edge cooperative framework that increases the volume of user feature input by means of two aggregations at the device,edge,and cloud layers.Second,we extend the use of users'own data by introducing self-attention and graph convolution models to track the relative changes of both users and packages simultaneously.Lastly,we build an integrated offline-online system for churn prediction based on the strengths of the two models,and we experimentally validate the efficacy of cloudside collaborative training and inference.In summary,the churn prediction model based on Graph Attention Convolutional Neural Network presented in this paper can effectively address the drawbacks of conventional algorithms and offer telecom operators crucial decision support in developing subscriber retention strategies and cutting operational expenses.
文摘Deep learning, especially through convolutional neural networks (CNN) such as the U-Net 3D model, has revolutionized fault identification from seismic data, representing a significant leap over traditional methods. Our review traces the evolution of CNN, emphasizing the adaptation and capabilities of the U-Net 3D model in automating seismic fault delineation with unprecedented accuracy. We find: 1) The transition from basic neural networks to sophisticated CNN has enabled remarkable advancements in image recognition, which are directly applicable to analyzing seismic data. The U-Net 3D model, with its innovative architecture, exemplifies this progress by providing a method for detailed and accurate fault detection with reduced manual interpretation bias. 2) The U-Net 3D model has demonstrated its superiority over traditional fault identification methods in several key areas: it has enhanced interpretation accuracy, increased operational efficiency, and reduced the subjectivity of manual methods. 3) Despite these achievements, challenges such as the need for effective data preprocessing, acquisition of high-quality annotated datasets, and achieving model generalization across different geological conditions remain. Future research should therefore focus on developing more complex network architectures and innovative training strategies to refine fault identification performance further. Our findings confirm the transformative potential of deep learning, particularly CNN like the U-Net 3D model, in geosciences, advocating for its broader integration to revolutionize geological exploration and seismic analysis.
文摘Since chemical processes are highly non-linear and multiscale,it is vital to deeply mine the multiscale coupling relationships embedded in the massive process data for the prediction and anomaly tracing of crucial process parameters and production indicators.While the integrated method of adaptive signal decomposition combined with time series models could effectively predict process variables,it does have limitations in capturing the high-frequency detail of the operation state when applied to complex chemical processes.In light of this,a novel Multiscale Multi-radius Multi-step Convolutional Neural Network(Msrt Net)is proposed for mining spatiotemporal multiscale information.First,the industrial data from the Fluid Catalytic Cracking(FCC)process decomposition using Complete Ensemble Empirical Mode Decomposition with Adaptive Noise(CEEMDAN)extract the multi-energy scale information of the feature subset.Then,convolution kernels with varying stride and padding structures are established to decouple the long-period operation process information encapsulated within the multi-energy scale data.Finally,a reconciliation network is trained to reconstruct the multiscale prediction results and obtain the final output.Msrt Net is initially assessed for its capability to untangle the spatiotemporal multiscale relationships among variables in the Tennessee Eastman Process(TEP).Subsequently,the performance of Msrt Net is evaluated in predicting product yield for a 2.80×10^(6) t/a FCC unit,taking diesel and gasoline yield as examples.In conclusion,Msrt Net can decouple and effectively extract spatiotemporal multiscale information from chemical process data and achieve a approximately reduction of 30%in prediction error compared to other time-series models.Furthermore,its robustness and transferability underscore its promising potential for broader applications.
基金supported by the National Key R&D Program of China (Grant No.2022YFC3003401)the National Natural Science Foundation of China (Grant Nos.42041006 and 42377137).
文摘To efficiently predict the mechanical parameters of granular soil based on its random micro-structure,this study proposed a novel approach combining numerical simulation and machine learning algorithms.Initially,3500 simulations of one-dimensional compression tests on coarse-grained sand using the three-dimensional(3D)discrete element method(DEM)were conducted to construct a database.In this process,the positions of the particles were randomly altered,and the particle assemblages changed.Interestingly,besides confirming the influence of particle size distribution parameters,the stress-strain curves differed despite an identical gradation size statistic when the particle position varied.Subsequently,the obtained data were partitioned into training,validation,and testing datasets at a 7:2:1 ratio.To convert the DEM model into a multi-dimensional matrix that computers can recognize,the 3D DEM models were first sliced to extract multi-layer two-dimensional(2D)cross-sectional data.Redundant information was then eliminated via gray processing,and the data were stacked to form a new 3D matrix representing the granular soil’s fabric.Subsequently,utilizing the Python language and Pytorch framework,a 3D convolutional neural networks(CNNs)model was developed to establish the relationship between the constrained modulus obtained from DEM simulations and the soil’s fabric.The mean squared error(MSE)function was utilized to assess the loss value during the training process.When the learning rate(LR)fell within the range of 10-5e10-1,and the batch sizes(BSs)were 4,8,16,32,and 64,the loss value stabilized after 100 training epochs in the training and validation dataset.For BS?32 and LR?10-3,the loss reached a minimum.In the testing set,a comparative evaluation of the predicted constrained modulus from the 3D CNNs versus the simulated modulus obtained via DEM reveals a minimum mean absolute percentage error(MAPE)of 4.43%under the optimized condition,demonstrating the accuracy of this approach.Thus,by combining DEM and CNNs,the variation of soil’s mechanical characteristics related to its random fabric would be efficiently evaluated by directly tracking the particle assemblages.
基金Science and Technology Funds from the Liaoning Education Department(Serial Number:LJKZ0104).
文摘The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection,which is then used as input to the CNN.The customized Convolutional Neural Network method is the date augmented-based CNN model to generate‘fake data’or‘fake images’.This study was carried out using Python and its libraries.We used 242 films from the dataset gathered by the Deep Fake Detection Challenge,of which 199 were made up and the remaining 53 were real.Ten seconds were allotted for each video.There were 318 videos used in all,199 of which were fake and 119 of which were real.Our proposedmethod achieved a testing accuracy of 91.47%,loss of 0.342,and AUC score of 0.92,outperforming two alternative approaches,CNN and MLP-CNN.Furthermore,our method succeeded in greater accuracy than contemporary models such as XceptionNet,Meso-4,EfficientNet-BO,MesoInception-4,VGG-16,and DST-Net.The novelty of this investigation is the development of a new Convolutional Neural Network(CNN)learning model that can accurately detect deep fake face photos.
Funding: Supported by the National Natural Science Foundation of China (U1462206).
Abstract: System design and optimization problems require large-scale chemical kinetic models. Pure kinetic models of naphtha pyrolysis need to solve a complete set of stiff ODEs and are therefore too computationally expensive. On the other hand, artificial neural networks that completely neglect the topology of the reaction networks often generalize poorly. In this paper, a framework is proposed for learning local representations from large-scale chemical reaction networks. First, the features of naphtha pyrolysis reactions are extracted by applying complex network characterization methods. The selected features are then used as inputs to convolutional architectures. Different CNN models are established and compared to optimize the neural network structure. After the pre-training and fine-tuning steps, the final CNN model reduces the computational cost of the previous kinetic model by over 300 times and predicts the yields of the main products with an average error of less than 3%. The obtained results demonstrate the high efficiency of the proposed framework.
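A minimal sketch of the surrogate idea follows, assuming per-species features from complex-network characterization as input and the yields of a handful of main products as output; the dimensions and layers are illustrative, not the paper's tuned architecture.

```python
import torch
import torch.nn as nn

# Assumed dimensions: 16 network-derived features per species, 64 species,
# and yields of 5 main products (all placeholders).
N_FEATURES, N_SPECIES, N_PRODUCTS = 16, 64, 5

class YieldCNN(nn.Module):
    """CNN surrogate: network features in, product yields out,
    replacing a stiff ODE solve with one forward pass."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_FEATURES, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, N_PRODUCTS),
        )

    def forward(self, x):  # x: (batch, N_FEATURES, N_SPECIES)
        return self.net(x)

model = YieldCNN()
features = torch.rand(4, N_FEATURES, N_SPECIES)  # per-species network features
yields = model(features)                         # predicted main-product yields
```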
Funding: Supported by the National Key Research and Development Program of China (No. 2016YFC1402003), the National Natural Science Foundation of China (No. 41671436), and the Innovation Project of LREIS (No. O88RAA01YA).
Abstract: Sanduao is an important sea-breeding bay in Fujian, South China, and holds a high economic status in aquaculture. Quickly and accurately obtaining information on the distribution, quantity, and extent of aquaculture areas is important for breeding-area planning, production value estimation, ecological surveys, and storm surge prevention. However, as the aquaculture area expands, the seawater background becomes increasingly complex and the spectral characteristics differ dramatically, making it difficult to delineate the aquaculture area. In this study, we applied a deep-learning Richer Convolutional Features (RCF) network model to a high-resolution GF-2 remote-sensing satellite image to extract the aquaculture area. We then used the density of aquaculture as an assessment index to evaluate the vulnerability of aquaculture areas in Sanduao. The results demonstrate that this method does not require prior land-water separation and achieves good extraction even in areas with heavy sediment and waves, with an extraction accuracy >93%, making it suitable for large-scale aquaculture area extraction. The vulnerability assessment indicates that the density of aquaculture in the eastern part of Sanduao is considerably high, reaching a higher vulnerability level than other parts.
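The density index itself is straightforward to illustrate. The sketch below computes a per-cell coverage fraction from a binary extraction mask; the cell size and grid scheme are assumptions rather than the paper's exact vulnerability assessment.

```python
import numpy as np

def aquaculture_density(mask: np.ndarray, cell: int = 256) -> np.ndarray:
    """Coverage fraction of aquaculture pixels (mask value 1) per grid cell.
    Cell size is an illustrative assumption."""
    rows, cols = mask.shape[0] // cell, mask.shape[1] // cell
    density = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = mask[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            density[i, j] = block.mean()  # fraction of the cell covered
    return density

mask = (np.random.rand(1024, 1024) > 0.7).astype(np.uint8)  # stand-in mask
print(aquaculture_density(mask))  # higher values flag more vulnerable cells
```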
Funding: Supported by the National Natural Science Foundation of China (No. 40804008).
Abstract: The perfectly matched layer (PML) was first introduced by Berenger as an absorbing boundary condition for electromagnetic wave propagation. In this article, a method is developed to extend the PML to simulating seismic wave propagation in a fluid-saturated porous medium. This non-physical boundary is used at the computational edge of a Forsyte polynomial convolutional differentiator (FPCD) algorithm as an absorbing boundary condition to truncate unbounded media. The incorporation of the PML into Biot's equations is given. Numerical results show that the PML absorbing boundary condition attenuates the outgoing waves effectively and eliminates reflections adequately.
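For intuition, a minimal 1-D sketch of PML-style absorption follows: a quadratic damping ramp attenuates the field inside the boundary strips at every time step. The coefficients are illustrative only, and the Biot-equation and FPCD specifics are not reproduced here.

```python
import numpy as np

# Inside an absorbing strip of width `npml` cells, the field is attenuated
# each step by exp(-d(x) * dt), with a quadratic ramp d(x) = d0 * (x / L)^2.
nx, npml, d0, dt = 400, 40, 1500.0, 1e-3
d = np.zeros(nx)
ramp = (np.arange(npml) / npml) ** 2
d[:npml] = d0 * ramp[::-1]  # left absorbing strip (strongest at the edge)
d[-npml:] = d0 * ramp       # right absorbing strip
damping = np.exp(-d * dt)   # per-step multiplicative attenuation factors

field = np.random.rand(nx)  # stand-in wavefield snapshot
field *= damping            # applied after each finite-difference update
```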
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 12275179 and 11875042) and the Natural Science Foundation of Shanghai Municipality, China (Grant No. 21ZR1443900).
Abstract: In recent years, there has been growing interest in graph convolutional networks (GCN). However, existing GCNs and their variants are predominantly based on simple graph or hypergraph structures, which restricts their ability to handle the complex data correlations found in practical applications. These limitations stem from the difficulty of establishing multiple hierarchies and acquiring adaptive weights for each of them. To address this issue, this paper introduces the recent concept of complex hypergraphs and constructs a versatile high-order, multi-level data correlation model. The model is realized through a three-tier structure of complexes-hypergraphs-vertices. Specifically, we start by establishing hyperedge clusters on a foundational network, utilizing a second-order hypergraph structure to depict potential correlations. For this second-order structure, truncation methods are used to assess and generate a three-layer composite structure. During the construction of the composite structure, an adaptive learning strategy is implemented to merge correlations across the different levels. We evaluate this model on several popular datasets and compare it with recent state-of-the-art methods. The comprehensive assessment shows that the proposed model surpasses the existing methods, particularly in modeling implicit data correlations: the node classification accuracies on the five public datasets Cora, Citeseer, Pubmed, Github Web ML, and Facebook are 86.1±0.33, 79.2±0.35, 83.1±0.46, 83.8±0.23, and 80.1±0.37, respectively. This indicates that our approach possesses advantages in handling datasets with implicit multi-level structures.
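The adaptive merging of levels can be sketched as learnable softmax weights over per-tier feature matrices, as below; the actual complex-hypergraph propagation operators are replaced by random placeholders, so this shows only the weighting step.

```python
import torch
import torch.nn as nn

class LevelFusion(nn.Module):
    """Learnable softmax weights merge feature matrices from the three tiers
    (vertices, hyperedges, complexes); propagation itself is assumed done."""
    def __init__(self, n_levels: int = 3, dim: int = 16):
        super().__init__()
        self.level_logits = nn.Parameter(torch.zeros(n_levels))
        self.proj = nn.Linear(dim, dim)

    def forward(self, level_feats):  # list of (N, dim) tensors, one per tier
        w = torch.softmax(self.level_logits, dim=0)
        fused = sum(wi * f for wi, f in zip(w, level_feats))
        return torch.relu(self.proj(fused))

n_nodes, dim = 100, 16
tier_features = [torch.rand(n_nodes, dim) for _ in range(3)]  # placeholders
out = LevelFusion(dim=dim)(tier_features)
```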
Funding: This work was supported by the Kyonggi University Research Grant 2022.
Abstract: Recommendation Information Systems (RIS) are pivotal in helping users swiftly locate desired content amid the vast amount of information available on the Internet. Graph Convolution Network (GCN) algorithms have been employed to implement RIS efficiently. However, the GCN algorithm faces limits to further performance gains owing to the embedding value-vanishing problem that occurs during the learning process. To address this issue, we propose a Weighted Forwarding method using the GCN (WF-GCN) algorithm. The proposed method multiplies the embedding results by different weights for each hop layer during graph learning. By applying the WF-GCN algorithm, which adjusts the weights for each hop layer before forwarding to the next, nodes with many neighbors achieve higher embedding values. This approach facilitates learning with more hop layers within the GCN framework. The efficacy of WF-GCN was demonstrated through its application to various datasets. On the MovieLens dataset, implementing WF-GCN in LightGCN resulted in significant performance improvements, with recall and NDCG increasing by up to +163.64% and +132.04%, respectively. Similarly, on the Last.FM dataset, LightGCN enhanced with WF-GCN showed substantial improvements, with recall and NDCG rising by up to +174.40% and +169.95%, respectively. Furthermore, applying WF-GCN to Self-supervised Graph Learning (SGL) and Simple Graph Contrastive Learning (SimGCL) also demonstrated notable enhancements in both recall and NDCG across these datasets.
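The weighted-forwarding step is concrete enough to sketch: in a LightGCN-style propagation loop, the embedding passed to the next hop is scaled by a per-hop weight before propagation. The adjacency matrix and the weight values below are assumptions, not the paper's tuned settings.

```python
import torch

def wf_gcn(adj_norm: torch.Tensor, emb0: torch.Tensor, hop_weights):
    """LightGCN-style propagation with per-hop weights applied before
    forwarding, so deeper hop layers remain trainable."""
    emb, layer_outputs = emb0, [emb0]
    for w in hop_weights:
        emb = w * (adj_norm @ emb)  # weight applied before the next hop
        layer_outputs.append(emb)
    return torch.stack(layer_outputs).mean(dim=0)  # LightGCN-style readout

n, d = 50, 8
adj_norm = torch.rand(n, n)
adj_norm = adj_norm / adj_norm.sum(dim=1, keepdim=True)  # row-normalized stand-in
final_emb = wf_gcn(adj_norm, torch.rand(n, d), hop_weights=[1.0, 0.8, 0.6])
```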
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. U1564201, 61573171, 61403172, 51305167), the China Postdoctoral Science Foundation (Grant Nos. 2015T80511, 2014M561592), the Jiangsu Provincial Natural Science Foundation of China (Grant No. BK20140555), the Six Talent Peaks Project of Jiangsu Province, China (Grant Nos. 2015-JXQC-012, 2014-DZXX-040), the Jiangsu Postdoctoral Science Foundation, China (Grant No. 1402097C), and the Jiangsu University Scientific Research Foundation for Senior Professionals, China (Grant No. 14JDG028).
Abstract: Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted-feature-based classifier training for candidate verification. These methods generally suffer from high processing times and low detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small set of vehicle candidate areas. The candidate sub-images are then fed into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, outperforming existing state-of-the-art algorithms. More importantly, the deep sparse convolution network generates highly discriminative multi-scale features, which have broad application prospects for target recognition in the field of intelligent vehicles.
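As a stand-in for the saliency stage (the abstract does not name the exact saliency model), the sketch below uses the classical spectral-residual saliency map and thresholds it into candidate regions for the downstream deep-feature and SVM verification; the threshold and smoothing kernel are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
    """Spectral-residual saliency: subtract the locally averaged log
    amplitude from the log spectrum, then invert with the original phase."""
    spectrum = np.fft.fft2(gray)
    log_amp = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)
    kernel = np.ones((3, 3)) / 9.0
    residual = log_amp - convolve(log_amp, kernel, mode="nearest")
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

gray = np.random.rand(120, 160)                          # stand-in road image
candidate_mask = spectral_residual_saliency(gray) > 0.5  # candidate regions
```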
Abstract: Every day, websites and personal archives create more and more photos, and the size of these archives is immeasurable. The ease of use of these huge digital image collections contributes to their popularity. However, not all of these collections provide relevant indexing information, and as a result it is difficult to find the data a user may be interested in. Therefore, to determine the significance of the data, it is important to identify the contents in an informative manner. Image annotation is one of the most challenging problems in multimedia research and computer vision. Hence, in this paper, an Adaptive Convolutional Deep Learning Model (ACDLM) is developed for automatic image annotation. Initially, the databases are collected from open-source systems and consist of labelled images (for the training phase) and unlabelled images {Corel 5K, MSRC v2}. The images then undergo pre-processing steps such as colour-space quantization and texture colour class mapping. The pre-processed images are segmented for efficient labelling using J-image segmentation (JSEG). The final step is automatic annotation using the ACDLM, which combines a Convolutional Neural Network (CNN) and the Honey Badger Algorithm (HBA); based on the proposed classifier, the unlabelled images are labelled. The proposed methodology is implemented in MATLAB, and its performance is evaluated using metrics such as accuracy, precision, recall, and F1-measure.
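The HBA tuning stage can be sketched only loosely, since the abstract gives no algorithmic details; below, a plain random search stands in for the metaheuristic, and train_and_score is a hypothetical placeholder for training the CNN on the labelled subset and returning a validation score.

```python
import random

def train_and_score(lr: float, batch_size: int) -> float:
    """Hypothetical stand-in: train the annotation CNN with these
    hyperparameters and return a validation score (e.g., F1-measure)."""
    return random.random()  # placeholder objective value

# Random search standing in for the Honey Badger Algorithm: sample
# candidate hyperparameter configurations and keep the best-scoring one.
candidates = [
    {"lr": 10 ** random.uniform(-5, -2), "batch_size": random.choice([16, 32, 64])}
    for _ in range(20)
]
best = max(candidates, key=lambda cfg: train_and_score(**cfg))
print("selected hyperparameters:", best)
```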