Owing to the constraints of depth sensing technology, images acquired by depth cameras are inevitably mixed with various noises. For depth maps presented in gray values, this research proposes a novel denoising model, termed graph-based transform (GBT) and dual graph Laplacian regularization (DGLR), abbreviated DGLR-GBT. This model specifically aims to remove Gaussian white noise by capitalizing on the nonlocal self-similarity (NSS) and the piecewise smoothness properties intrinsic to depth maps. Within the group sparse coding (GSC) framework, a combination of GBT and DGLR is implemented. Firstly, within each group, the graph is constructed using estimates of the true values of the averaged blocks instead of the observations. Secondly, the graph Laplacian regularization terms are constructed based on the rows and columns of similar-block groups, respectively. Lastly, the solution is obtained efficiently by combining the alternating direction method of multipliers (ADMM) with a weighted thresholding method in the GBT domain.
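The transform-and-threshold step can be illustrated on a deliberately tiny example. The sketch below is not the paper's model: it uses a two-pixel "block", an invented edge weight, and an invented threshold, only to show the GBT mechanics of projecting onto Laplacian eigenvectors, hard-thresholding, and inverting.

```python
import math

# Toy sketch of the graph-based transform (GBT) idea: project a block
# onto the eigenvectors of its graph Laplacian, threshold the
# high-frequency coefficient, and transform back. The two-pixel block,
# edge weight, and threshold are illustrative, not from the paper.

w = 0.8                          # assumed edge weight between two pixels
L = [[w, -w], [-w, w]]           # Laplacian of a single-edge graph

# For this 2-node graph the eigenvectors are known in closed form:
# (1, 1)/sqrt(2) (eigenvalue 0) and (1, -1)/sqrt(2) (eigenvalue 2w).
u0 = (1 / math.sqrt(2), 1 / math.sqrt(2))
u1 = (1 / math.sqrt(2), -1 / math.sqrt(2))

def gbt_denoise(x, tau):
    # Forward GBT: inner products with the Laplacian eigenvectors.
    c0 = u0[0] * x[0] + u0[1] * x[1]
    c1 = u1[0] * x[0] + u1[1] * x[1]
    # Hard-threshold the high-frequency coefficient, where Gaussian
    # noise concentrates for piecewise-smooth depth signals.
    if abs(c1) < tau:
        c1 = 0.0
    # Inverse GBT: recombine the (possibly zeroed) coefficients.
    return (c0 * u0[0] + c1 * u1[0], c0 * u0[1] + c1 * u1[1])

print(gbt_denoise((10.2, 9.8), tau=1.0))   # flat patch: noise removed
print(gbt_denoise((0.0, 10.0), tau=1.0))   # depth edge: preserved
```

A large coefficient (a genuine depth edge) survives the threshold, while a small one (noise on a flat patch) is zeroed, which is why the method suits piecewise-smooth depth maps.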
Maximizing network lifetime is regarded as the primary issue in Mobile Ad-hoc Networks (MANETs). In geographic-routing-based models, packet transmission is more appropriate in dense circumstances. Applying a heuristic model directly is not appropriate for offering an effectual solution, as the problem becomes NP-hard; therefore, investigators concentrate on meta-heuristic approaches. Dragonfly Optimization (DFO) is an effective meta-heuristic approach that resolves such problems by providing optimal solutions. However, DFO tends to converge slowly and requires considerable computational time as the network size expands. Thus, DFO is adaptively improved as Adaptive Dragonfly Optimization (ADFO) to fit this model and re-formulated using graph-based m-connection establishment (G-mCE) to overcome DFO's computational time and convergence problems, considerably enhancing its performance. In G-mCE, a Connectivity Zone (CZ) is chosen between source and destination, within whose connected regions optimality is sought, and ADFO is used for effective route establishment within the CZ instead of the complete network. To exploit the complementary features of ADFO and G-mCE, a hybridization DFO-(G-mCE) is proposed for dense circumstances, reducing energy consumption and delay to enhance network lifetime. The simulation was performed in the MATLAB environment.
In the field of autonomous robots, achieving complete precision is challenging, underscoring the need for human intervention, particularly in ensuring safety. Human Autonomy Teaming (HAT) is crucial for promoting safe and efficient human-robot collaboration in dynamic indoor environments. This paper introduces a framework designed to address these precision gaps, enhancing safety and robotic interactions within such settings. Central to our approach is a hybrid graph system that integrates the Generalized Voronoi Diagram (GVD) with spatio-temporal graphs, effectively combining human feedback, environmental factors, and key waypoints. An integral component of this system is the improved Node Selection Algorithm (iNSA), which utilizes the revised Grey Wolf Optimization (rGWO) for better adaptability and performance. Furthermore, an obstacle tracking model is employed to provide predictive data, enhancing the efficiency of the system. Human insights play a critical role, from supplying initial environmental data and determining key waypoints to intervening during unexpected challenges or dynamic environmental changes. Extensive simulation and comparison tests confirm the reliability and effectiveness of our proposed model, highlighting its unique advantages in the domain of HAT. This comprehensive approach ensures that the system remains robust and responsive to the complexities of real-world applications.
In this study, we used the multi-resolution graph-based clustering (MRGC) method for determining the electrofacies (EF) and lithofacies (LF) from well log data obtained from the intraplatform bank gas fields located in the Amu Darya Basin. The MRGC could automatically determine the optimal number of clusters without prior knowledge about the structure or cluster numbers of the analyzed data set and allowed the users to control the level of detail actually needed to define the EF. Based on the LF identification and successful EF calibration using core data, an MRGC EF partition model including five clusters and a quantitative LF interpretation chart were constructed. The EF clusters 1 to 5 were interpreted as lagoon, anhydrite flat, interbank, low-energy bank, and high-energy bank, and the coincidence rate in the cored interval could reach 85%. We concluded that the MRGC could be accurately applied to predict the LF in non-cored but logged wells. Therefore, continuous EF clusters were partitioned, the corresponding LF were interpreted, and the distribution and petrophysical characteristics of different LF were analyzed in the framework of sequence stratigraphy.
A better understanding of the relationship between the structure and functions of urban and suburban spaces is one of the avenues of research still open for geographical information science. The research presented in this paper develops several graph-based metrics whose objective is to characterize local and global structural properties that reflect the way the overall building layout can be cross-related to that of the road layout. Such structural properties are modeled as an aggregation of parcels, buildings, and road networks. We introduce several computational measures (Ratio Minimum Distance, Minimum Ratio Minimum Distance, and Metric Compactness) that evaluate the capability of a given road to be connected with the whole road network. These measures reveal emerging sub-network structures and point out differences between less-connective and more-connective parts of the network. Based on these local and global properties derived from the topological and graph-based representation, and on building density metrics, this paper proposes an analysis of road and building layouts at different levels of granularity. The metrics developed are applied to a case study in which the derived properties reveal coherent as well as incoherent neighborhoods, illustrating the potential of the approach and the way buildings and roads can be relatively connected in a given urban environment. Overall, by integrating the parcel and building layouts, this approach complements previous related works that mainly retain the configurational structure of the urban network, as well as morphological studies whose focus is generally limited to the analysis of the building layout.
Simultaneous localization and mapping (SLAM) is widely used in many robot applications to acquire a map of the unknown environment and the robot's location. Graph-based SLAM has been demonstrated to be effective in large-scale scenarios, and it intuitively formulates SLAM as a pose graph. However, because of the high data overlap rate, traditional graph-based SLAM is not efficient in some respects, such as real-time performance and memory usage. To reduce the data overlap rate, a graph-based SLAM with a distributed submap strategy (DSS) is presented. In its front-end, submap-based scan matching is processed and loop-closing detection is conducted. In its back-end, the pose graph is updated for global optimization and submap merging. A series of experiments demonstrates that graph-based SLAM with DSS reduces the data overlap rate by 51.79%, runtime by 39.70%, and memory usage by 24.60%. Its advantages over other low-overlap-rate methods are also demonstrated in runtime, memory usage, accuracy, and robustness.
The number of botnet malware attacks on Internet devices has grown at an equivalent rate to the number of devices connected to the Internet. Bot detection using machine learning (ML) with flow-based features has been extensively studied in the literature. Existing flow-based detection methods involve significant computational overhead and do not completely capture the network communication patterns that might reveal other features of malicious hosts. Recently, graph-based bot detection methods using ML have gained attention as a way to overcome these limitations, as graphs provide a real representation of network communications. The purpose of this study is to build a botnet malware detection system utilizing centrality measures for graph-based botnet detection and ML. We propose BotSward, a graph-based bot detection system based on ML. We apply efficient centrality measures, namely Closeness Centrality (CC), Degree Centrality (DC), and PageRank (PR), and compare them with others used in the state-of-the-art. The efficiency of the proposed method is verified on the publicly available Czech Technical University 13 dataset (CTU-13). The CTU-13 dataset contains 13 real botnet traffic scenarios that are connected to a command-and-control (C&C) channel and cause malicious actions such as phishing, distributed denial-of-service (DDoS) attacks, spam attacks, etc. BotSward is robust to zero-day attacks, suitable for large-scale datasets, and intended to produce better accuracy than state-of-the-art techniques. The proposed BotSward solution achieved 99% accuracy in botnet attack detection with a false positive rate as low as 0.0001%.
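The three centrality features named in the abstract are standard graph measures and can be sketched compactly. The toy graph below is invented for illustration; a real system such as the one described would build the graph from captured network flows.

```python
from collections import deque

# Hedged sketch of the three centrality features (degree, closeness,
# PageRank) on a toy communication graph. Host names and edges are
# invented; a real pipeline would derive them from network traffic.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]
nodes = sorted({v for e in edges for v in e})
adj = {v: set() for v in nodes}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
n = len(nodes)

# Normalised degree centrality: fraction of other hosts a host contacts.
degree = {v: len(adj[v]) / (n - 1) for v in nodes}

def closeness(v):
    # BFS shortest-path distances from v; closeness = (n-1) / sum(dist).
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return (n - 1) / sum(dist[u] for u in nodes if u != v)

def pagerank(d=0.85, iters=50):
    # Plain power iteration on the undirected graph.
    pr = {v: 1 / n for v in nodes}
    for _ in range(iters):
        pr = {v: (1 - d) / n + d * sum(pr[u] / len(adj[u]) for u in adj[v])
              for v in nodes}
    return pr

print(degree, closeness("A"), pagerank())   # hub "A" scores highest on all three
```

A C&C server typically behaves like hub "A" here, contacting many bots, which is why high centrality scores are useful detection features.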
Active learning in semi-supervised classification involves introducing additional labels for unlabelled data to improve the accuracy of the underlying classifier. A challenge is to identify which points to label to best improve performance while limiting the number of new labels. "Model Change" active learning quantifies the change incurred in the classifier by introducing the additional label(s). We pair this idea with graph-based semi-supervised learning (SSL) methods that use the spectrum of the graph Laplacian matrix, which can be truncated to avoid prohibitively large computational and storage costs. We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution. We show a variety of multiclass examples that illustrate improved performance over the prior state of the art.
Many cutting-edge methods are now possible in real-time commercial settings and are growing in popularity on cloud platforms. By incorporating new, cutting-edge technologies to a larger extent without using more infrastructure, the information technology platform is anticipating a completely new level of development. This research paper proposes the following concepts: 1) a reliable authentication method; 2) optimised data replication; and 3) graph-based data encryption and packing colouring in Redundant Array of Independent Disks (RAID) storage. At the data centre, data is encrypted using crypto keys called key streams. These keys are produced using the packing colouring method on the jump graph of the web graph. To achieve space efficiency, the replication is carried out on optimised servers employing packing colours. More connections are thought to provide better authentication. This study provides an innovative architecture with robust security, enhanced authentication, and low cost.
Precisely estimating the state of health (SOH) of lithium-ion batteries is essential for battery management systems (BMS), as it plays a key role in ensuring the safe and reliable operation of battery systems. However, current SOH estimation methods often overlook the valuable temperature information that can effectively characterize battery aging during capacity degradation. Additionally, the Elman neural network, which is commonly employed for SOH estimation, exhibits several drawbacks, including slow training speed, a tendency to become trapped in local minima, and the initialization of weights and thresholds with pseudo-random numbers, leading to unstable model performance. To address these issues and achieve precise and effective SOH detection, this study proposes a method for estimating the SOH of lithium-ion batteries based on differential thermal voltammetry (DTV) and an SSA-Elman neural network. Firstly, two health features (HFs) considering temperature factors and battery voltage are extracted from the differential thermal voltammetry curves and incremental capacity curves. Next, the Sparrow Search Algorithm (SSA) is employed to optimize the initial weights and thresholds of the Elman neural network, forming the SSA-Elman model. To validate the performance, various neural networks, including the proposed SSA-Elman network, are tested on the Oxford battery aging dataset. The experimental results demonstrate that the method achieves superior accuracy and robustness, with a mean absolute error (MAE) below 0.9% and a root mean square error (RMSE) below 1.4%.
Complex network models are frequently employed for simulating and studying diverse real-world complex systems. Among these models, scale-free networks typically exhibit greater fragility to malicious attacks. Consequently, enhancing the robustness of scale-free networks has become a pressing issue. To address this problem, this paper proposes a Multi-Granularity Integration Algorithm (MGIA), which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged, ensuring network connectivity, and avoiding the generation of multiple edges. The algorithm generates a multi-granularity structure from the initial network to be optimized, then uses different optimization strategies to optimize the networks at the various granular layers of this structure, and finally realizes information exchange between different granular layers, thereby further enhancing the optimization effect. We propose new network refresh, crossover, and mutation operators to ensure that the optimized network satisfies the given constraints. Meanwhile, we propose new network similarity and dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm. In the experiments, the MGIA enhances the robustness of the scale-free network by 67.6%, approximately 17.2% higher than the optimization effects achieved by eight existing complex network robustness optimization algorithms.
Accurate short-term wind power forecasting plays a crucial role in maintaining the safety and economic efficiency of smart grids. Although numerous studies have employed various methods to forecast wind power, there remains a research gap in leveraging swarm intelligence algorithms to optimize the hyperparameters of the Transformer model for wind power prediction. To improve the accuracy of short-term wind power forecasts, this paper proposes a hybrid approach named STL-IAOA-iTransformer, which is based on seasonal and trend decomposition using LOESS (STL) and an iTransformer model optimized by an improved arithmetic optimization algorithm (IAOA). First, to fully extract the power data features, STL is used to decompose the original data into components with less redundant information. The extracted components, as well as the weather data, are then input into iTransformer for short-term wind power forecasting. The final predicted short-term wind power curve is obtained by combining the predicted components. To improve model accuracy, IAOA is employed to optimize the hyperparameters of iTransformer. The proposed approach is validated using real generation data from different seasons and different power stations in Northwest China, and ablation experiments have been conducted. Furthermore, to validate the superiority of the proposed approach under different wind characteristics, real power generation data from Southwest China are utilized for experiments. Comparative results against six state-of-the-art prediction models show that the proposed model fits the true generation series well and achieves high prediction accuracy.
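The decompose-forecast-recombine idea can be shown with a minimal stand-in for the STL step. This is not LOESS itself: a centred moving average and per-phase means play the roles of the trend and seasonal extractors, and the synthetic series and period are invented.

```python
# Illustrative decomposition in the same spirit as the STL step above
# (not LOESS itself): split a series into trend, seasonal, and residual
# components that can be forecast separately and recombined. The
# synthetic series and its period are invented for the sketch.
period = 4
series = [10 + 0.5 * t + [0.0, 2.0, -1.0, -1.0][t % period]
          for t in range(20)]

def trend_at(t):
    # Centred moving average over roughly one period.
    lo, hi = max(0, t - period // 2), min(len(series), t + period // 2 + 1)
    return sum(series[lo:hi]) / (hi - lo)

trend = [trend_at(t) for t in range(len(series))]
detrended = [x - m for x, m in zip(series, trend)]

# Seasonal component: mean detrended value at each phase of the cycle.
seasonal = [sum(detrended[p::period]) / len(detrended[p::period])
            for p in range(period)]
residual = [detrended[t] - seasonal[t % period] for t in range(len(series))]

# Adding the three components back reproduces the original series,
# which is what lets each component be predicted separately.
recon = [trend[t] + seasonal[t % period] + residual[t]
         for t in range(len(series))]
print(max(abs(a - b) for a, b in zip(recon, series)))   # ~ 0.0
```

Each component carries less redundant information than the raw series, which is the motivation the abstract gives for feeding them to iTransformer separately.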
In disaster relief operations, multiple UAVs can be used to search for trapped people. In recent years, many researchers have proposed machine learning-based, sampling-based, and heuristic algorithms to solve the multi-UAV path planning problem. Among these, the Dung Beetle Optimization (DBO) algorithm has been widely applied due to its diverse search patterns. However, the update strategies for the rolling and thieving dung beetles of the DBO algorithm are overly simplistic, potentially leading to an inability to fully explore the search space and a tendency to converge to local optima, thereby not guaranteeing discovery of the optimal path. To address these issues, we propose an improved DBO algorithm guided by the Landmark Operator (LODBO). Specifically, we first use tent mapping in the population initialization strategy, which enables the algorithm to generate initial solutions with enhanced diversity within the search space. Second, we expand the search range of the rolling dung beetle by using the landmark factor. Finally, by using an adaptive factor that changes with the number of iterations, we improve the global search ability of the thieving dung beetle, making it more likely to escape from local optima. To verify the effectiveness of the proposed method, extensive simulation experiments are conducted; the results show that the LODBO algorithm obtains the optimal path in the shortest time compared with the Genetic Algorithm (GA), the Grey Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), and the original DBO algorithm on the disaster search-and-rescue task set.
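Tent-map initialization is a standard chaotic-seeding trick and is simple to sketch. The bounds, population size, and seed below are illustrative, not the paper's settings.

```python
# Sketch of tent-map population initialisation: iterating the chaotic
# tent map yields values that spread over (0, 1) without clustering,
# giving more diverse initial candidate solutions than naive seeding.
# Bounds, sizes, and the seed x0 are illustrative.
def tent_map_population(pop_size, dim, lower, upper, x0=0.37):
    pop, x = [], x0
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            # Tent map on (0, 1): a stretch-and-fold chaotic iteration.
            x = 2 * x if x < 0.5 else 2 * (1 - x)
            ind.append(lower + x * (upper - lower))   # scale to bounds
        pop.append(ind)
    return pop

pop = tent_map_population(pop_size=5, dim=3, lower=-10.0, upper=10.0)
print(pop[0])   # every coordinate lies inside the given bounds
```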
In this paper, we prove that Euclid's algorithm, Bezout's equation, and the division algorithm are equivalent to each other. Our result shows that Euclid had preliminarily established the theory of divisibility and the greatest common divisor. We further provide several suggestions for teaching.
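The three notions sit together naturally in the extended Euclidean algorithm, which is a compact computational witness to the equivalence the paper discusses (the code is a standard textbook construction, not taken from the paper):

```python
# The extended Euclidean algorithm turns repeated application of the
# division algorithm (a = q*b + r) into both gcd(a, b) and the Bezout
# coefficients x, y satisfying a*x + b*y = gcd(a, b).
def extended_gcd(a, b):
    if b == 0:
        return a, 1, 0                  # gcd(a, 0) = a = a*1 + 0*0
    g, x, y = extended_gcd(b, a % b)    # one division step
    return g, y, x - (a // b) * y       # back-substitute the coefficients

g, x, y = extended_gcd(240, 46)
print(g, x, y)   # g = 2, and 240*x + 46*y == 2
```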
Previous studies have shown that deep learning is very effective in detecting known attacks. However, when facing unknown attacks, models such as Deep Neural Networks (DNN) combined with Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN) combined with LSTM, and so on are built by simple stacking, which suffers from feature loss, low efficiency, and low accuracy. Therefore, this paper proposes an autonomous detection model for Distributed Denial of Service attacks, Multi-Scale Convolutional Neural Network-Bidirectional Gated Recurrent Units-Single Headed Attention (MSCNN-BiGRU-SHA), based on a Multi-strategy Integrated Zebra Optimization Algorithm (MI-ZOA). The model undergoes training and testing with the CICDDoS2019 dataset, and its performance is evaluated on a new GINKS2023 dataset. The hyperparameters for Conv_filter and GRU_unit are optimized using the MI-ZOA. The experimental results show that the test accuracy of the proposed MI-ZOA-based MSCNN-BiGRU-SHA model is as high as 0.9971 on the CICDDoS2019 dataset. The evaluation accuracy on the new GINKS2023 dataset created in this paper is 0.9386. Compared to the MSCNN-BiGRU-SHA model based on the original Zebra Optimization Algorithm (ZOA), detection accuracy on the GINKS2023 dataset improved by 5.81%, precision increased by 1.35%, recall improved by 9%, and the F1 score increased by 5.55%. Compared to MSCNN-BiGRU-SHA models developed using Grid Search, Random Search, and Bayesian Optimization, the model optimized with the MI-ZOA exhibits better performance in terms of accuracy, precision, recall, and F1 score.
Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability to diverse soil conditions. However, accurately predicting their undrained bearing capacity in layered soils remains a complex challenge. This study presents a novel application of five ensemble machine learning (ML) algorithms: random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and categorical boosting (CatBoost), to predict the undrained bearing capacity factor (Nc) of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis (FELA). The input dataset consists of 1188 numerical simulations using the Tresca failure criterion, with varying geometrical and soil parameters. The FELA was performed in OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies. The ML models were trained on 70% of the dataset and tested on the remaining 30%. Their performance was evaluated using six statistical metrics: coefficient of determination (R²), mean absolute error (MAE), root mean squared error (RMSE), index of scatter (IOS), RMSE-to-standard deviation ratio (RSR), and variance explained factor (VAF). The results indicate that all the models achieved high accuracy, with R² values exceeding 97.6% and RMSE values below 0.02. Among them, AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets, demonstrating superior generalizability and robustness. The proposed ML framework offers an efficient, accurate, and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils. This approach can aid in reducing computational costs while improving reliability in the early stages of foundation design.
To improve the efficiency and accuracy of path planning for fan inspection tasks in thermal power plants, this paper proposes an intelligent inspection robot path planning scheme based on an improved A* algorithm. The inspection robot utilizes multiple sensors to monitor key parameters of the fans, such as vibration, noise, and bearing temperature, and uploads the data to the monitoring center. The robot's inspection path employs the improved A* algorithm, incorporating obstacle penalty terms, path reconstruction, and smoothing optimization techniques, thereby achieving optimal path planning for the inspection robot in complex environments. Simulation results demonstrate that the improved A* algorithm significantly outperforms the traditional A* algorithm in terms of total path distance, smoothness, and detour rate, effectively improving the execution efficiency of inspection tasks.
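The obstacle-penalty idea can be sketched as a small grid A* whose step cost is inflated near obstacles, so planned paths keep clearance. The grid, unit step cost, and penalty value below are illustrative assumptions, not the paper's parameters.

```python
import heapq

# Hedged sketch of an A* variant with an obstacle penalty term: moving
# into a cell adjacent to an obstacle costs extra, steering the path
# away from walls. Grid and penalty value are illustrative.
def a_star(grid, start, goal, penalty=2.0):
    rows, cols = len(grid), len(grid[0])

    def near_obstacle(r, c):
        return any(0 <= r + dr < rows and 0 <= c + dc < cols
                   and grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1))

    def h(r, c):
        return abs(r - goal[0]) + abs(c - goal[1])   # Manhattan heuristic

    best = {start: 0.0}
    frontier = [(h(*start), 0.0, start, [start])]
    while frontier:
        _, g, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g + 1.0 + (penalty if near_obstacle(nr, nc) else 0.0)
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(
                        frontier,
                        (ng + h(nr, nc), ng, (nr, nc), path + [(nr, nc)]))
    return None   # goal unreachable

grid = [[0, 0, 0, 0],   # 0 = free, 1 = obstacle
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 3)))
```

Since the penalty only adds cost, the Manhattan heuristic remains admissible and the search still returns the cheapest path under the penalized cost.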
BACKGROUND: Esophageal squamous cell carcinoma is a major histological subtype of esophageal cancer. Many molecular genetic changes are associated with its occurrence. Raman spectroscopy has become a new method for the early diagnosis of tumors because it can reflect the structures of substances and their changes at the molecular level. AIM: To detect alterations in Raman spectral information across different stages of esophageal neoplasia. METHODS: Different grades of esophageal lesions were collected, and a total of 360 groups of Raman spectrum data were acquired. A 1D-transformer network model was proposed to handle the task of classifying the spectral data of esophageal squamous cell carcinoma. In addition, a deep learning model was applied to visualize the Raman spectral data and interpret their molecular characteristics. RESULTS: A comparison among Raman spectral data with different pathological grades and a visual analysis revealed that the Raman peaks with significant differences were concentrated mainly at 1095 cm^(-1) (DNA, symmetric PO stretching vibration), 1132 cm^(-1) (cytochrome c), 1171 cm^(-1) (acetoacetate), 1216 cm^(-1) (amide III), and 1315 cm^(-1) (glycerol). A comparison among the training results of different models revealed that the 1D-transformer network performed best, achieving 93.30% accuracy, 96.65% specificity, 93.30% sensitivity, and a 93.17% F1 score. CONCLUSION: Raman spectroscopy revealed significantly different waveforms for the different stages of esophageal neoplasia. The combination of Raman spectroscopy and deep learning methods could significantly improve the accuracy of classification.
In order to improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. This model first uses mathematical methods to describe the relationships between cloud-based web services and the constraints on system resources. Then, a light-induced plant growth simulation algorithm is established. The performance of the algorithm was compared across several plant types, and the best plant model was selected as the setting for the system. Experimental results show that when the number of tested cloud-based web services reaches 2048, the model is 2.14 times faster than PSO, 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
A new method based on the iterative adaptive algorithm (IAA) and blocking matrix preprocessing (BMP) is proposed to suppress multi-mainlobe interference. The algorithm precisely estimates the spatial spectrum and the directions of arrival (DOA) of interferences to overcome the drawbacks associated with conventional adaptive beamforming (ABF) methods. The mainlobe interferences are identified by calculating the correlation coefficients between direction steering vectors (SVs) and are rejected by the BMP pretreatment. IAA is subsequently employed to reconstruct a sidelobe interference-plus-noise covariance matrix for preferable ABF and residual interference suppression. Simulation results demonstrate the superiority of the proposed method over conventional methods based on BMP and eigen-projection matrix preprocessing (EMP) under both uncorrelated and coherent circumstances.
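The mainlobe-identification step rests on a simple quantity: the normalised correlation between steering vectors, which is large for directions close to the look direction and small in the sidelobe region. The sketch below assumes a 16-element half-wavelength uniform linear array and invented angles; it illustrates the quantity, not the paper's full algorithm.

```python
import cmath
import math

# Steering vector of a uniform linear array (ULA) with element spacing
# d (in wavelengths) for a source at angle theta_deg off broadside.
def steering_vector(theta_deg, n=16, d=0.5):
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(-2j * math.pi * d * k * s) for k in range(n)]

def correlation(a, b):
    # |a^H b| / (||a|| * ||b||): the normalised correlation coefficient
    # used to decide whether an interference falls in the mainlobe.
    num = abs(sum(x.conjugate() * y for x, y in zip(a, b)))
    den = (math.sqrt(sum(abs(x) ** 2 for x in a))
           * math.sqrt(sum(abs(y) ** 2 for y in b)))
    return num / den

look = steering_vector(0.0)
print(correlation(look, steering_vector(2.0)))    # near mainlobe: high
print(correlation(look, steering_vector(40.0)))   # sidelobe region: low
```

Thresholding this coefficient separates mainlobe from sidelobe interferences, after which only the former are handled by the blocking-matrix pretreatment.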
Funding: National Natural Science Foundation of China (No. 62372100).
Funding: Supported by the Mississippi Space Grant Consortium under a NASA EPSCoR RID grant.
Abstract: In the field of autonomous robots, achieving complete precision is challenging, underscoring the need for human intervention, particularly in ensuring safety. Human Autonomy Teaming (HAT) is crucial for promoting safe and efficient human-robot collaboration in dynamic indoor environments. This paper introduces a framework designed to address these precision gaps, enhancing safety and robotic interactions within such settings. Central to our approach is a hybrid graph system that integrates the Generalized Voronoi Diagram (GVD) with spatio-temporal graphs, effectively combining human feedback, environmental factors, and key waypoints. An integral component of this system is the improved Node Selection Algorithm (iNSA), which utilizes the revised Grey Wolf Optimization (rGWO) for better adaptability and performance. Furthermore, an obstacle tracking model is employed to provide predictive data, enhancing the efficiency of the system. Human insights play a critical role, from supplying initial environmental data and determining key waypoints to intervening during unexpected challenges or dynamic environmental changes. Extensive simulation and comparison tests confirm the reliability and effectiveness of the proposed model, highlighting its unique advantages in the domain of HAT. This comprehensive approach ensures that the system remains robust and responsive to the complexities of real-world applications.
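The revisions that make up rGWO are not specified in the abstract; the sketch below implements the standard Grey Wolf Optimization update it presumably builds on, minimizing a toy quadratic as a stand-in for a node-selection cost. The population size, iteration count, bounds, and objective are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
cost = lambda x: float(np.sum(x ** 2))      # toy objective (stand-in for node cost)

dim, n_wolves, iters = 2, 10, 50
wolves = rng.uniform(-5, 5, (n_wolves, dim))
gbest = min(wolves, key=cost).copy()        # best solution seen so far
best0 = cost(gbest)                         # initial best, kept for comparison

for t in range(iters):
    a = 2 - 2 * t / iters                   # control parameter decays 2 -> 0
    order = np.argsort([cost(w) for w in wolves])
    alpha, beta, delta = wolves[order[:3]].copy()   # three best wolves lead
    for i in range(n_wolves):
        x_new = np.zeros(dim)
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(dim) - a # exploration/exploitation coefficient
            C = 2 * rng.random(dim)
            D = np.abs(C * leader - wolves[i])
            x_new += leader - A * D         # step toward each of the three leaders
        wolves[i] = x_new / 3               # average of the three attracted moves
    cand = min(wolves, key=cost)
    if cost(cand) < cost(gbest):
        gbest = cand.copy()
```

A "revised" variant would typically alter how `a`, `A`, or the leader averaging is computed; the loop structure stays the same.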
Funding: Supported by the National Science and Technology Major Project of China (No. 2011ZX05029-003) and the CNPC Science Research and Technology Development Project, China (No. 2013D-0904).
Abstract: In this study, we used the multi-resolution graph-based clustering (MRGC) method to determine the electrofacies (EF) and lithofacies (LF) from well log data obtained from the intraplatform bank gas fields located in the Amu Darya Basin. MRGC can automatically determine the optimal number of clusters without prior knowledge about the structure or cluster numbers of the analyzed data set, and it allows users to control the level of detail actually needed to define the EF. Based on the LF identification and successful EF calibration using core data, an MRGC EF partition model comprising five clusters and a quantitative LF interpretation chart were constructed. The EF clusters 1 to 5 were interpreted as lagoon, anhydrite flat, interbank, low-energy bank, and high-energy bank, and the coincidence rate in the cored interval reached 85%. We concluded that MRGC can be accurately applied to predict the LF in non-cored but logged wells. Therefore, continuous EF clusters were partitioned, the corresponding LF were interpreted, and the distribution and petrophysical characteristics of the different LF were analyzed in the framework of sequence stratigraphy.
Abstract: A better understanding of the relationship between the structure and functions of urban and suburban spaces is one of the avenues of research still open for geographical information science. The research presented in this paper develops several graph-based metrics whose objective is to characterize local and global structural properties that reflect how the overall building layout can be cross-related to that of the road layout. Such structural properties are modeled as an aggregation of parcels, buildings, and road networks. We introduce several computational measures (Ratio Minimum Distance, Minimum Ratio Minimum Distance, and Metric Compactness) that respectively evaluate the capability of a given road to be connected with the whole road network. These measures reveal emerging sub-network structures and point out differences between less-connective and more-connective parts of the network. Based on these local and global properties derived from the topological and graph-based representation, and on building density metrics, this paper proposes an analysis of road and building layouts at different levels of granularity. The metrics developed are applied to a case study in which the derived properties reveal coherent as well as incoherent neighborhoods, illustrating the potential of the approach and the way buildings and roads can be relatively connected in a given urban environment. Overall, by integrating the parcel and building layouts, this approach complements previous related works that mainly retain the configurational structure of the urban network, as well as morphological studies whose focus is generally limited to the analysis of the building layout.
Funding: Supported by the Project Fund for Key Discipline of the Shanghai Municipal Education Commission (No. J50104) and the Major State Basic Research Development Program of China (No. 2017YFB0403500).
Abstract: Simultaneous localization and mapping (SLAM) is widely used in many robot applications to acquire a map of the unknown environment and the robot's location. Graph-based SLAM has been demonstrated to be effective in large-scale scenarios, and it intuitively formulates SLAM as a pose graph. However, because of the high data overlap rate, traditional graph-based SLAM is not efficient in some respects, such as real-time performance and memory usage. To reduce the data overlap rate, a graph-based SLAM with a distributed submap strategy (DSS) is presented. In its front-end, submap-based scan matching is processed and loop-closure detection is conducted. In its back-end, the pose graph is updated for global optimization and submap merging. A series of experiments demonstrates that graph-based SLAM with DSS reduces the data overlap rate by 51.79%, runtime by 39.70%, and memory usage by 24.60%. Its advantages over other low-overlap-rate methods in runtime, memory usage, accuracy, and robustness are also demonstrated.
Abstract: The number of botnet malware attacks on Internet devices has grown at a rate equivalent to the number of Internet devices connected to the Internet. Bot detection using machine learning (ML) with flow-based features has been extensively studied in the literature. Existing flow-based detection methods involve significant computational overhead and do not completely capture the network communication patterns that might reveal other features of malicious hosts. Recently, graph-based bot detection methods using ML have gained attention to overcome these limitations, as graphs provide a real representation of network communications. The purpose of this study is to build a botnet malware detection system utilizing centrality measures for graph-based botnet detection and ML. We propose BotSward, a graph-based bot detection system based on ML. We apply efficient centrality measures, namely Closeness Centrality (CC), Degree Centrality (DC), and PageRank (PR), and compare them with others used in the state of the art. The efficiency of the proposed method is verified on the publicly available Czech Technical University 13 dataset (CTU-13). The CTU-13 dataset contains 13 real botnet traffic scenarios connected to a command-and-control (C&C) channel that cause malicious actions such as phishing, distributed denial-of-service (DDoS) attacks, and spam attacks. BotSward is robust to zero-day attacks, suitable for large-scale datasets, and intended to produce better accuracy than state-of-the-art techniques. The proposed BotSward solution achieved 99% accuracy in botnet attack detection with a false positive rate as low as 0.0001%.
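The three centrality measures named above are simple to compute on a communication graph. The sketch below uses a toy adjacency list, not the CTU-13 pipeline, and implements degree centrality, closeness centrality via BFS, and PageRank by fixed-point iteration:

```python
from collections import deque

# Toy undirected communication graph: node 0 is a hub, the kind of
# high-centrality host a bot C&C server might resemble.
G = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5], 5: [4]}

def degree_centrality(g):
    """Fraction of other nodes each node touches directly."""
    return {v: len(nbrs) / (len(g) - 1) for v, nbrs in g.items()}

def closeness_centrality(g, v):
    """(n-1) divided by the sum of BFS distances from v to all other nodes."""
    dist, q = {v: 0}, deque([v])
    while q:
        u = q.popleft()
        for w in g[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return (len(g) - 1) / sum(dist[u] for u in g if u != v)

def pagerank(g, d=0.85, iters=100):
    """Power iteration on the undirected graph; mass splits over neighbours."""
    pr = {v: 1 / len(g) for v in g}
    for _ in range(iters):
        pr = {v: (1 - d) / len(g)
                 + d * sum(pr[u] / len(g[u]) for u in g if v in g[u])
              for v in g}
    return pr

dc = degree_centrality(G)
pr = pagerank(G)
```

Each measure ranks the hub node 0 highest, which is the intuition behind using centrality as a bot-detection feature.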
Funding: Supported by the DOD National Defense Science and Engineering Graduate (NDSEG) Research Fellowship and by the NGA under Contract No. HM04762110003.
Abstract: Active learning in semi-supervised classification involves introducing additional labels for unlabelled data to improve the accuracy of the underlying classifier. A challenge is to identify which points to label to best improve performance while limiting the number of new labels. "Model Change" active learning quantifies the change incurred in the classifier by introducing the additional label(s). We pair this idea with graph-based semi-supervised learning (SSL) methods that use the spectrum of the graph Laplacian matrix, which can be truncated to avoid prohibitively large computational and storage costs. We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution. We show a variety of multiclass examples that illustrate improved performance over the prior state of the art.
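The truncated-spectrum idea can be sketched in a few lines: keep only the first k eigenvectors of the graph Laplacian and fit the labelled nodes in that reduced basis. The toy two-cluster graph, edge weights, and plain least-squares fit below are illustrative assumptions, not the paper's Model Change acquisition machinery:

```python
import numpy as np

# Toy similarity graph: two triangles of three nodes, weakly bridged.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1                     # weak bridge between the clusters

L = np.diag(W.sum(1)) - W                   # graph Laplacian
vals, vecs = np.linalg.eigh(L)
k = 2
Uk = vecs[:, :k]                            # truncated spectrum: first k eigenvectors

# One labelled node per cluster; fit coefficients on the labelled rows only.
labelled = [0, 5]
y = np.array([-1.0, 1.0])
c, *_ = np.linalg.lstsq(Uk[labelled], y, rcond=None)
pred = np.sign(Uk @ c)                      # propagate labels to unlabelled nodes
```

Because the second eigenvector (the Fiedler vector) separates the two clusters, the fit on just two labels classifies all six nodes correctly, and the k-dimensional basis keeps storage and computation small regardless of graph size.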
Abstract: Many cutting-edge methods are now possible in real-time commercial settings and are growing in popularity on cloud platforms. By incorporating new, cutting-edge technologies to a larger extent without using more infrastructure, the information technology platform is anticipating a completely new level of development. The following concepts are proposed in this research paper: 1) a reliable authentication method; 2) optimised data replication; and 3) graph-based data encryption and packing colouring in Redundant Array of Independent Disks (RAID) storage. At the data centre, data is encrypted using crypto keys called key streams. These keys are produced using the packing colouring method on the jump graph of the web graph. To achieve space efficiency, replication is carried out on optimised servers employing packing colours. More connections are expected to provide better authentication. This study provides an innovative architecture with robust security, enhanced authentication, and low cost.
Funding: Supported by the National Natural Science Foundation of China (NSFC) under Grant No. 51677058.
Abstract: Precisely estimating the state of health (SOH) of lithium-ion batteries is essential for battery management systems (BMS), as it plays a key role in ensuring the safe and reliable operation of battery systems. However, current SOH estimation methods often overlook the valuable temperature information that can effectively characterize battery aging during capacity degradation. Additionally, the Elman neural network, which is commonly employed for SOH estimation, exhibits several drawbacks, including slow training speed, a tendency to become trapped in local minima, and the initialization of weights and thresholds with pseudo-random numbers, leading to unstable model performance. To address these issues, this study proposes a method for estimating the SOH of lithium-ion batteries based on differential thermal voltammetry (DTV) and an SSA-Elman neural network. First, two health features (HFs) considering temperature factors and battery voltage are extracted from the differential thermal voltammetry curves and incremental capacity curves. Next, the Sparrow Search Algorithm (SSA) is employed to optimize the initial weights and thresholds of the Elman neural network, forming the SSA-Elman neural network model. To validate the performance, various neural networks, including the proposed SSA-Elman network, are tested using the Oxford battery aging dataset. The experimental results demonstrate that the method developed in this study achieves superior accuracy and robustness, with a mean absolute error (MAE) of less than 0.9% and a root mean square error (RMSE) below 1.4%.
Funding: National Natural Science Foundation of China (11971211, 12171388).
Abstract: Complex network models are frequently employed for simulating and studying diverse real-world complex systems. Among these models, scale-free networks are typically more fragile to malicious attacks. Consequently, enhancing the robustness of scale-free networks has become a pressing issue. To address this problem, this paper proposes a Multi-Granularity Integration Algorithm (MGIA), which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged, ensuring network connectivity, and avoiding the generation of multiple edges. The algorithm generates a multi-granularity structure from the initial network to be optimized, then uses different optimization strategies for the networks at the various granular layers of this structure, and finally realizes information exchange between the granular layers, further enhancing the optimization effect. We propose new network refresh, crossover, and mutation operators to ensure that the optimized network satisfies the given constraints. Meanwhile, we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators. In the experiments, the MGIA enhances the robustness of the scale-free network by 67.6%, approximately 17.2% higher than the optimization effects achieved by eight existing complex-network robustness optimization algorithms.
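Rewiring a network while keeping every node's degree unchanged is typically done with double-edge swaps. The sketch below is a generic degree-preserving swap on a toy edge list, not the MGIA operators themselves; it demonstrates the invariant the paper's constraints impose (fixed degrees, no self-loops, no multi-edges):

```python
import random

random.seed(1)

def double_edge_swap(edges, tries=100):
    """Pick edges (a,b),(c,d) and rewire to (a,d),(c,b) when legal."""
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    for _ in range(tries):
        i, j = random.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue                        # swap would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue                        # swap would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

def degrees(edges):
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

before = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)]
after = double_edge_swap(before)
```

A robustness optimizer can score each candidate rewiring (e.g., by attack simulations) and keep only swaps that improve the score, since every swap preserves the degree sequence by construction.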
Funding: Supported by the Yunnan Provincial Basic Research Project (202401AT070344, 202301AT070443), the National Natural Science Foundation of China (62263014, 52207105), the Yunnan Lancang-Mekong International Electric Power Technology Joint Laboratory (202203AP140001), and the Major Science and Technology Projects in Yunnan Province (202402AG050006).
Abstract: Accurate short-term wind power forecasting plays a crucial role in maintaining the safety and economic efficiency of smart grids. Although numerous studies have employed various methods to forecast wind power, there remains a research gap in leveraging swarm intelligence algorithms to optimize the hyperparameters of the Transformer model for wind power prediction. To improve the accuracy of short-term wind power forecasts, this paper proposes a hybrid approach named STL-IAOA-iTransformer, based on seasonal and trend decomposition using LOESS (STL) and an iTransformer model optimized by an improved arithmetic optimization algorithm (IAOA). First, to fully extract the power data features, STL decomposes the original data into components with less redundant information. The extracted components, together with the weather data, are then input into iTransformer for short-term wind power forecasting. The final predicted short-term wind power curve is obtained by combining the predicted components. To improve model accuracy, IAOA is employed to optimize the hyperparameters of iTransformer. The proposed approach is validated using real generation data from different seasons and different power stations in Northwest China, and ablation experiments have been conducted. Furthermore, to validate the superiority of the proposed approach under different wind characteristics, real power generation data from Southwest China are utilized for experiments. Comparative results against six state-of-the-art prediction models show that the proposed model fits the true generation series well and achieves high prediction accuracy.
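STL proper uses LOESS smoothing, but a simpler classical decomposition conveys the core idea: split a series into trend, seasonal, and residual components that can be forecast separately and recombined. The synthetic hourly "wind power" series, period, and moving-average trend below are illustrative assumptions, not the paper's data or its iTransformer stage:

```python
import numpy as np

# Synthetic series: linear trend + daily seasonality + noise, hourly for 2 weeks.
rng = np.random.default_rng(0)
period = 24
t = np.arange(24 * 14)
series = 0.01 * t + np.sin(2 * np.pi * t / period) + 0.1 * rng.standard_normal(t.size)

# Trend: moving average over one full seasonal period.
kernel = np.ones(period) / period
trend = np.convolve(series, kernel, mode="same")

# Seasonal component: mean detrended value per hour-of-day, forced zero-mean.
detrended = series - trend
seasonal = np.array([detrended[i::period].mean() for i in range(period)])
seasonal -= seasonal.mean()
seasonal_full = np.tile(seasonal, t.size // period)

# Residual: whatever trend and seasonality do not explain.
residual = series - trend - seasonal_full
```

In the paper's pipeline the analogue of each component (plus weather data) is forecast by the optimized iTransformer, and the component forecasts are summed to form the final power curve.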
Funding: Supported by the National Natural Science Foundation of China (No. 62373027).
Abstract: In disaster relief operations, multiple UAVs can be used to search for trapped people. In recent years, many researchers have proposed machine learning-based algorithms, sampling-based algorithms, and heuristic algorithms to solve the problem of multi-UAV path planning. Among these, the Dung Beetle Optimization (DBO) algorithm has been widely applied due to its diverse search patterns. However, the update strategies for the rolling and thieving dung beetles of the DBO algorithm are overly simplistic, potentially leading to an inability to fully explore the search space and a tendency to converge to local optima, so discovery of the optimal path is not guaranteed. To address these issues, we propose an improved DBO algorithm guided by the Landmark Operator (LODBO). Specifically, we first use tent mapping in the population initialization strategy, which enables the algorithm to generate initial solutions with enhanced diversity within the search space. Second, we expand the search range of the rolling dung beetle by using the landmark factor. Finally, by using an adaptive factor that changes with the number of iterations, we improve the global search ability of the thieving dung beetle, making it more likely to escape from local optima. To verify the effectiveness of the proposed method, extensive simulation experiments were conducted; the results show that the LODBO algorithm obtains the optimal path in the shortest time compared with the Genetic Algorithm (GA), the Grey Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), and the original DBO algorithm on the disaster search-and-rescue task set.
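The tent-map initialization mentioned above can be sketched directly: iterate the chaotic tent map once per agent and scale the values into the search bounds. The parameter `mu`, the bounds, and the population shape are illustrative assumptions:

```python
import numpy as np

def tent_map_population(n_agents, dim, low, high, mu=1.99, seed=0):
    """Initialise agent positions with a tent-map chaotic sequence.

    The tent map x -> mu*x (x < 0.5) or mu*(1-x) (x >= 0.5) keeps values
    in [0, 1) while spreading them more evenly than plain random draws,
    which is why chaotic maps are popular for metaheuristic initialisation.
    """
    rng = np.random.default_rng(seed)
    x = rng.random(dim)                     # one chaotic trajectory per dimension
    pop = np.empty((n_agents, dim))
    for i in range(n_agents):
        x = np.where(x < 0.5, mu * x, mu * (1 - x))   # one tent-map iteration
        pop[i] = low + (high - low) * x     # scale into the search bounds
    return pop

pop = tent_map_population(30, 3, low=-10.0, high=10.0)
```

The resulting `pop` array can seed any population-based optimizer (DBO, GA, GWO, ...) in place of uniform random initialisation.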
Funding: Supported by the Natural Science Foundation of Chongqing (General Program, No. CSTB2022NSCQ-MSX0884) and the Discipline Teaching Special Project of Yangtze Normal University (csxkjx14).
Abstract: In this paper, we prove that Euclid's algorithm, Bezout's equation, and the division algorithm are equivalent to each other. Our result shows that Euclid had preliminarily established the theory of divisibility and the greatest common divisor. We further provide several suggestions for teaching.
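The equivalence discussed here is concretely visible in the extended Euclidean algorithm, which runs Euclid's algorithm (repeated application of the division algorithm) and recovers the Bezout coefficients on the way back up:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g (Bezout's equation)."""
    if b == 0:
        return a, 1, 0                      # gcd(a, 0) = a = a*1 + 0*0
    # Division algorithm step: a = (a // b) * b + (a % b).
    g, x, y = extended_gcd(b, a % b)
    # Back-substitute to express g in terms of a and b.
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(240, 46)
```

Each recursive call is one division-algorithm step, and unwinding the recursion turns the chain of remainders into the coefficients of Bezout's equation, which is the computational content of the equivalence.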
Funding: Supported by the Science and Technology Innovation Program for Postgraduate Students in IDP Subsidized by Fundamental Research Funds for the Central Universities (Project No. ZY20240335), the Research Project of the Key Technology of Malicious Code Detection Based on Data Mining in APT Attack (Project No. 2022IT173), and the Research Project of the Big Data Sensitive Information Supervision Technology Based on Convolutional Neural Network (Project No. 2022011033).
Abstract: Previous studies have shown that deep learning is very effective in detecting known attacks. However, when facing unknown attacks, models such as Deep Neural Networks (DNN) combined with Long Short-Term Memory (LSTM), or Convolutional Neural Networks (CNN) combined with LSTM, are built by simple stacking, which suffers from feature loss, low efficiency, and low accuracy. Therefore, this paper proposes an autonomous detection model for Distributed Denial of Service attacks, the Multi-Scale Convolutional Neural Network-Bidirectional Gated Recurrent Units-Single Headed Attention (MSCNN-BiGRU-SHA) model, based on a Multi-strategy Integrated Zebra Optimization Algorithm (MI-ZOA). The model is trained and tested with the CICDDoS2019 dataset, and its performance is evaluated on a new GINKS2023 dataset. The hyperparameters for Conv_filter and GRU_unit are optimized using the MI-ZOA. The experimental results show that the test accuracy of the MSCNN-BiGRU-SHA model based on the MI-ZOA proposed in this paper is as high as 0.9971 on the CICDDoS2019 dataset. The evaluation accuracy on the new GINKS2023 dataset created in this paper is 0.9386. Compared to the MSCNN-BiGRU-SHA model based on the Zebra Optimization Algorithm (ZOA), detection accuracy on the GINKS2023 dataset improved by 5.81%, precision increased by 1.35%, recall improved by 9%, and the F1 score increased by 5.55%. Compared to the MSCNN-BiGRU-SHA models developed using Grid Search, Random Search, and Bayesian Optimization, the model optimized with the MI-ZOA exhibits better performance in terms of accuracy, precision, recall, and F1 score.
Abstract: Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability to diverse soil conditions. However, accurately predicting their undrained bearing capacity in layered soils remains a complex challenge. This study presents a novel application of five ensemble machine learning (ML) algorithms, random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and categorical boosting (CatBoost), to predict the undrained bearing capacity factor (Nc) of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis (FELA). The input dataset consists of 1188 numerical simulations using the Tresca failure criterion, with varying geometrical and soil parameters. The FELA was performed via OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies. The ML models were trained on 70% of the dataset and tested on the remaining 30%. Their performance was evaluated using six statistical metrics: coefficient of determination (R²), mean absolute error (MAE), root mean squared error (RMSE), index of scatter (IOS), RMSE-to-standard-deviation ratio (RSR), and variance explained factor (VAF). The results indicate that all the models achieved high accuracy, with R² values exceeding 97.6% and RMSE values below 0.02. Among them, AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets, demonstrating superior generalizability and robustness. The proposed ML framework offers an efficient, accurate, and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils. This approach can reduce computational costs while improving reliability in the early stages of foundation design.
Abstract: To improve the efficiency and accuracy of path planning for fan inspection tasks in thermal power plants, this paper proposes an intelligent inspection robot path planning scheme based on an improved A* algorithm. The inspection robot utilizes multiple sensors to monitor key parameters of the fans, such as vibration, noise, and bearing temperature, and uploads the data to the monitoring center. The robot's inspection path employs the improved A* algorithm, incorporating obstacle penalty terms, path reconstruction, and smoothing optimization techniques, thereby achieving optimal path planning for the inspection robot in complex environments. Simulation results demonstrate that the improved A* algorithm significantly outperforms the traditional A* algorithm in terms of total path distance, smoothness, and detour rate, effectively improving the execution efficiency of inspection tasks.
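A minimal A* on a 4-connected grid with a Manhattan heuristic illustrates the baseline the improved algorithm starts from; the obstacle penalty terms, path reconstruction, and smoothing are the paper's additions and are not shown here. The grid, start, and goal are illustrative assumptions:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; cells with 1 are obstacles, moves cost 1."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    open_heap = [(h(start), 0, start, [start])]              # (f, g, node, path)
    seen = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None                              # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))           # must detour around the wall
```

An "obstacle penalty term" variant would typically add a cost surcharge for cells near obstacles inside the `g + 1` step cost, biasing the search toward safer, smoother corridors.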
Funding: Supported by the Beijing Hospitals Authority Youth Programme, No. QML20200505.
Abstract: BACKGROUND: Esophageal squamous cell carcinoma is a major histological subtype of esophageal cancer. Many molecular genetic changes are associated with its occurrence. Raman spectroscopy has become a new method for the early diagnosis of tumors because it can reflect the structures of substances and their changes at the molecular level. AIM: To detect alterations in Raman spectral information across different stages of esophageal neoplasia. METHODS: Esophageal lesions of different grades were collected, and a total of 360 groups of Raman spectrum data were acquired. A 1D-transformer network model was proposed to handle the task of classifying the spectral data of esophageal squamous cell carcinoma. In addition, a deep learning model was applied to visualize the Raman spectral data and interpret their molecular characteristics. RESULTS: A comparison among Raman spectral data of different pathological grades and a visual analysis revealed that the significantly different Raman peaks were concentrated mainly at 1095 cm^(-1) (DNA, symmetric PO stretching vibration), 1132 cm^(-1) (cytochrome c), 1171 cm^(-1) (acetoacetate), 1216 cm^(-1) (amide III), and 1315 cm^(-1) (glycerol). A comparison among the training results of different models revealed that the 1D-transformer network performed best, achieving 93.30% accuracy, 96.65% specificity, 93.30% sensitivity, and a 93.17% F1 score. CONCLUSION: Raman spectroscopy revealed significantly different waveforms for the different stages of esophageal neoplasia. Combining Raman spectroscopy with deep learning methods could significantly improve classification accuracy.
Funding: Supported by the Shanxi Province Higher Education Science and Technology Innovation Fund Project (2022-676) and the Shanxi Soft Science Program Research Fund Project (2016041008-6).
Abstract: To improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. The model first uses mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm is established. The performance of the algorithm was compared across several plant types, and the best plant model was selected as the setting for the system. Experimental results show that when the number of tested cloud-based web services reaches 2048, the model is 2.14 times faster than particle swarm optimization (PSO), 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Funding: The National Natural Science Foundation of China (No. U19B2031).
Abstract: A new method based on the iterative adaptive algorithm (IAA) and blocking matrix preprocessing (BMP) is proposed for the suppression of multi-mainlobe interference. The algorithm precisely estimates the spatial spectrum and the directions of arrival (DOA) of interferences to overcome the drawbacks of conventional adaptive beamforming (ABF) methods. Mainlobe interferences are identified by calculating the correlation coefficients between direction steering vectors (SVs) and are rejected by the BMP pretreatment. IAA is then employed to reconstruct a sidelobe interference-plus-noise covariance matrix for improved ABF and residual interference suppression. Simulation results demonstrate the superiority of the proposed method over standard methods based on BMP and eigen-projection matrix preprocessing (EMP) under both uncorrelated and coherent conditions.