For large sparse saddle point problems, Pan and Li recently proposed in [H. K. Pan, W. Li, Math. Numer. Sinica, 2009, 31(3): 231-242] a corrected Uzawa algorithm based on a nonlinear Uzawa algorithm with two nonlinear approximate inverses, and gave a detailed convergence analysis. In this paper, we focus on the convergence analysis of this corrected Uzawa algorithm: some inaccuracies in [H. K. Pan, W. Li, Math. Numer. Sinica, 2009, 31(3): 231-242] are pointed out, and a corrected convergence theorem is presented. A special case of this modified Uzawa algorithm is also discussed.
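For orientation, the saddle point system in question has the block form [[A, B], [B^T, 0]][x; y] = [f; g]. A minimal sketch of the classical Uzawa iteration is given below; Pan and Li's corrected algorithm replaces the exact inner solve with two nonlinear approximate inverses, which this sketch does not attempt to reproduce.

    import numpy as np

    def uzawa(A, B, f, g, tau=1.0, tol=1e-8, max_iter=500):
        """Classical Uzawa iteration for [[A, B], [B.T, 0]] [x; y] = [f; g].

        tau is a relaxation parameter; convergence requires it to be small
        relative to the spectrum of B.T @ inv(A) @ B.
        """
        x, y = np.zeros(B.shape[0]), np.zeros(B.shape[1])
        for _ in range(max_iter):
            x = np.linalg.solve(A, f - B @ y)  # exact inner solve
            r = B.T @ x - g                    # residual of the constraint block
            y = y + tau * r                    # gradient step on the multiplier
            if np.linalg.norm(r) < tol:
                break
        return x, y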
There were many contradictory evaluation criteria for selecting the next hop in delay-disruption tolerant networks (DTN). To solve this problem, an attribute hierarchical model was proposed, in which the predefined criteria were summarized as static identity attributes, forwarding desire attributes, and delivery capability attributes (IDC). Based on this model, a novel multi-attribute congestion aware routing (MACAR) scheme with uncertain information for next-hop selection was presented, adopting a decision theory to aggregate attributes with a belief structure and computing partial ordering relations. The simulation results show that MACAR achieves a higher successful delivery rate and lower average delay, and effectively alleviates congestion.
Cloud detection and classification form a basis in weather analysis. The split window algorithm (SWA) is one of the simple and mature algorithms used to detect and classify water and ice clouds in the atmosphere using satellite data. The recent availability of Himawari-8 data has considerably strengthened the possibility of better cloud classification owing to its enhanced multi-band configuration as well as its high temporal resolution. In SWA, cloud classification is attained by considering the spatial distributions of the brightness temperature (BT) and brightness temperature difference (BTD) of thermal infrared bands. In this study, we compare unsupervised classification results of SWA using the band pair of bands 13 and 15 (SWA13-15, 10 and 12 μm bands) versus that of bands 15 and 16 (SWA15-16, 12 and 13 μm bands) over the Japan area. Different threshold values of BT and BTD are chosen in winter and summer seasons to categorize cloud regions into nine different types. The accuracy of classification is verified using cloud-top height information derived from the data of the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO). For this purpose, six different paths of the space-borne lidar are selected in both summer and winter seasons, on the condition that the time span of overpass falls within the time range between 01:00 and 05:00 UTC, which corresponds to local time around noon. The result of verification indicates that the classification based on SWA13-15 can detect more cloud types than that based on SWA15-16 in both summer and winter seasons, though the latter combination is useful for delineating cumulonimbus underneath dense cirrus.
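The paper's seasonal BT/BTD thresholds and nine class labels are not restated in this abstract, so the sketch below only illustrates the generic split-window decision structure; the threshold constants and class names here are placeholders, not the authors' values.

    import numpy as np

    BT_COLD, BTD_THICK = 250.0, 1.5  # placeholder thresholds in kelvin

    def classify_swa(bt13, btd):
        """Toy split-window classifier on BT (band 13) and BTD (band 13 - band 15)."""
        label = np.full(bt13.shape, "clear / low cloud", dtype=object)
        label[(bt13 < BT_COLD) & (btd < BTD_THICK)] = "cumulonimbus"  # cold and opaque
        label[(bt13 < BT_COLD) & (btd >= BTD_THICK)] = "cirrus"       # cold, semi-transparent
        return label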
The process of ranking scientific publications in dynamic citation networks plays a crucial role in a variety of applications. Despite the availability of a number of ranking algorithms, most of them use common popularity metrics such as the citation count, h-index, and Impact Factor (IF). These metrics cause a problem of bias in favor of older publications that have had enough time to collect as many citations as possible. This paper focuses on solving this bias problem by proposing a new ranking algorithm based on PageRank (PR), one of the most widely used page ranking algorithms. The developed algorithm considers a newly suggested metric called the Citation Average rate of Change (CAC). Time information such as the publication date and the times at which citations occur is used along with citation data to calculate the new metric. The proposed ranking algorithm was tested on a dataset of scientific papers in the field of medical physics published in the Dimensions database from 2005 to 2017. The experimental results show that the proposed ranking algorithm outperforms the PageRank algorithm in ranking scientific publications: 26 papers instead of only 14 were ranked among the top 100 papers of this dataset. In addition, there were no radical changes or unreasonable jumps in the ranking process; the correlation between the results of the proposed ranking method and the original PageRank algorithm was 92% based on the Spearman correlation coefficient.
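The abstract does not give the CAC formula, so the sketch below encodes one natural reading, the mean year-over-year change in a paper's citation count, and shows the usual way such a score is injected into PageRank, as a personalized teleport vector; both choices are assumptions rather than the paper's exact construction.

    import numpy as np

    def cac(yearly_citations):
        """Citation Average rate of Change: mean yearly change in citations."""
        c = np.asarray(yearly_citations, dtype=float)
        return float(np.diff(c).mean()) if len(c) > 1 else 0.0

    def weighted_pagerank(adj, cac_scores, d=0.85, iters=100):
        """PageRank whose teleport distribution favors high-CAC papers.
        adj[i, j] = 1 if paper i cites paper j."""
        out = adj.sum(axis=1, keepdims=True)
        out[out == 0] = 1
        M = (adj / out).T                      # column-stochastic transitions
        w = cac_scores / cac_scores.sum()      # CAC-based teleport vector
        r = np.full(adj.shape[0], 1.0 / adj.shape[0])
        for _ in range(iters):
            r = d * M @ r + (1 - d) * w
        return r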
To solve the problem of time-aware test case prioritization, a hybrid algorithm composed of integer linear programming and the genetic algorithm (ILP-GA) is proposed. First, the test case suite which can maximize the number of covered program entities and satisfy the time constraint is selected by integer linear programming. Secondly, each individual is encoded according to the cover matrices of entities, the coverage rate of program entities is used as the fitness function, and the genetic algorithm is used to prioritize the selected test cases. Five typical open source projects are selected as benchmark programs. Branch and method are selected as program entities, and the time constraint percentages are 25% and 75%. The experimental results show that ILP-GA converges faster and more stably than ILP-additional and ILP-total in most cases, which contributes to detecting software defects as early as possible and reduces software testing costs.
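As a sketch of the GA half only: individuals are permutations of the ILP-selected test cases, and the fitness below rewards orderings that cover entities early (the paper's exact fitness definition may differ; the cover-matrix handling and names are illustrative).

    import random

    def fitness(order, cover):
        """cover[t] is the set of program entities covered by test case t;
        reward orderings whose prefixes cover many entities early."""
        all_entities = set().union(*cover.values())
        covered, score = set(), 0.0
        for t in order:
            covered |= cover[t]
            score += len(covered)
        return score / (len(order) * len(all_entities))

    def ga_prioritize(tests, cover, pop_size=30, gens=100):
        pop = [random.sample(tests, len(tests)) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda o: fitness(o, cover), reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            for _ in range(pop_size - len(survivors)):
                child = random.choice(survivors)[:]
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]  # swap mutation keeps a permutation
                children.append(child)
            pop = survivors + children
        return pop[0]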
We implemented a 3-3-1 algorithm in order to provide safe and simple self-titration in patients who newly initiated BOT as well as those who were already on BOT, and evaluated its utility in a clinical setting. A total of 46 patients, 21 in the newly-initiated group and 25 in the existing BOT group, performed dose adjustment using the 3-3-1 algorithm. HbA1c was significantly improved 4 weeks after initiation, from 8.5% ± 1.2% at baseline to 7.3% ± 0.7% at the final evaluation (p < 0.01 vs. baseline). The average daily insulin dose increased throughout the study period, from 10.1 ± 6.7 units at baseline to 14.6 ± 8.9 units at the final evaluation. Weight did not change significantly throughout the study (p = 0.12). The incidence of hypoglycemia was 0.8/month during the insulin dose self-adjustment period and 0.4/month during the follow-up period. The 3-3-1 algorithm using insulin glargine provided a safe and simple dose adjustment and demonstrated its utility in patients who were newly introduced to insulin treatment as well as those already on BOT.
Task scheduling in highly elastic and dynamic processing environments such as cloud computing has become one of the most discussed problems among researchers. Task scheduling algorithms are responsible for allocating tasks among the computing resources for their execution, and an inefficient task scheduling algorithm results in under- or over-utilization of the resources, which in turn leads to degradation of the services. Therefore, in the proposed work, load balancing is considered an important criterion for task scheduling in a cloud computing environment, as it can help in reducing the overhead in the critical decision-oriented process. In this paper, we propose an adaptive genetic algorithm-based load balancing (GALB)-aware task scheduling technique that not only results in better utilization of resources but also helps in optimizing the values of key performance indicators such as makespan, performance improvement ratio, and degree of imbalance. The concept of adaptive crossover and mutation is used in this work, which results in better adaptation for the fittest individuals of the current generation and prevents them from being eliminated. The CloudSim simulator has been used to carry out the simulations, and the obtained results establish that the proposed GALB algorithm performs better for all the key indicators and outperforms the peer algorithms taken into consideration.
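The adaptive crossover/mutation idea matches the classic Srinivas-Patnaik scheme, sketched below with assumed constants: probabilities shrink toward zero for above-average individuals, so the fittest survive unchanged, while poor individuals keep high rates.

    def adaptive_rates(f, f_avg, f_max, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
        """Adaptive crossover (pc) and mutation (pm) probabilities for a
        larger-is-better fitness f; rates fall to 0 as f approaches f_max."""
        if f >= f_avg and f_max > f_avg:
            scale = (f_max - f) / (f_max - f_avg)
            return k1 * scale, k2 * scale
        return k3, k4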
To minimize battery consumption for portable devices, the prescheduling policy of battery-aware scheduling was improved by optimizing slack distribution. A battery-aware compound task scheduling (BACTS) algorithm considering various aspects, including task deadline, current, and execution time, was proposed and evaluated against the previously prevailing earliest deadline first (EDF) algorithm. The results indicate that the proposed BACTS algorithm manages to find a feasible schedule (if one exists) in battery-aware task scheduling, even for disorganized connected task graphs beyond the solving ability of EDF. Its schedule achieves better performance with lower charge consumption after prescheduling, and also lower or equal optimum charge consumption after voltage scaling.
In this paper, an adaptive subcarrier allocation scheme with reconfiguration of operating parameters for Cognitive Radio Networks (CRN) is presented. A QoS-conscious spectrum decision framework is proposed, where spectrum bands are determined by considering the application requirements as well as the dynamic nature of the spectrum bands. A novel subcarrier allocation algorithm is developed to fulfill different performance objectives as a solution to the subcarrier allocation and power allocation problem for Cognitive Radio (CR) users in CRNs. It employs operating frequency parameter modification using a Proportional Resource Algorithm and a Genetic Algorithm (GA). The multi-objective optimization problem with equality and inequality constraints is considered. Moreover, a dynamic subcarrier allocation scheme is developed based on the GA to decide on the spectrum bands adaptively, dependent on the time-varying CR network capacity. The proposed algorithm aims to achieve the maximum data rate for each subcarrier, maximize the overall network throughput, and maximize the number of satisfied users under bandwidth constraints while guaranteeing Quality of Service (QoS) requirements from a dynamic spectrum management (DSM) perspective. Moreover, it determines the best available channel.
The demand for cloud computing has increased manifold in the recent past. More specifically, on-demand computing has seen a rapid rise as organizations rely mostly on cloud service providers for their day-to-day computing needs. The cloud service provider fulfills different user requirements using virtualization, where a single physical machine can host multiple virtual machines. Each virtual machine potentially represents a different user environment, such as operating system, programming environment, and applications. However, these cloud services use a large amount of electrical energy and produce greenhouse gases. To reduce the electricity cost and greenhouse gases, energy-efficient algorithms must be designed. One specific area where energy-efficient algorithms are required is virtual machine consolidation. With virtual machine consolidation, the objective is to utilize the minimum possible number of hosts to accommodate the required virtual machines, keeping in mind the service level agreement requirements. This research work formulates virtual machine migration as an online problem and develops optimal offline and online algorithms for the single-host virtual machine migration problem under a service level agreement constraint for an over-utilized host. The online algorithm is analyzed using a competitive analysis approach. In addition, an experimental analysis of the proposed algorithm on real-world data is conducted to showcase its improved performance against the benchmark algorithms. Our proposed online algorithm consumed 25% less energy and performed 43% fewer migrations than the benchmark algorithms.
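The paper's optimal online policy is not reproduced here; the sketch below shows only the generic shape such policies take, a ski-rental style threshold rule that migrates once the accumulated SLA penalty on the over-utilized host reaches the one-time migration cost, which is the standard route to a small constant competitive ratio.

    def online_migrate(overload_costs, migration_cost):
        """Migrate once accumulated overload cost reaches the migration cost.
        This classic rule pays at most twice the offline optimum."""
        accumulated = 0.0
        for t, c in enumerate(overload_costs):  # c: SLA penalty in period t
            accumulated += c
            if accumulated >= migration_cost:
                return t                        # migrate now
        return None                             # never worth migrating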
Precisely estimating the state of health (SOH) of lithium-ion batteries is essential for battery management systems (BMS), as it plays a key role in ensuring the safe and reliable operation of battery systems. However, current SOH estimation methods often overlook the valuable temperature information that can effectively characterize battery aging during capacity degradation. Additionally, the Elman neural network, which is commonly employed for SOH estimation, exhibits several drawbacks, including slow training speed, a tendency to become trapped in local minima, and the initialization of weights and thresholds with pseudo-random numbers, leading to unstable model performance. To address these issues, this study proposes a method for estimating the SOH of lithium-ion batteries based on differential thermal voltammetry (DTV) and an SSA-Elman neural network. Firstly, two health features (HFs) considering temperature factors and battery voltage are extracted from the differential thermal voltammetry curves and incremental capacity curves. Next, the Sparrow Search Algorithm (SSA) is employed to optimize the initial weights and thresholds of the Elman neural network, forming the SSA-Elman neural network model. To validate the performance, various neural networks, including the proposed SSA-Elman network, are tested using the Oxford battery aging dataset. The experimental results demonstrate that the method developed in this study achieves superior accuracy and robustness, with a mean absolute error (MAE) of less than 0.9% and a root mean square error (RMSE) below 1.4%.
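A sketch of the wrapper idea only: the SSA searches over the Elman network's initial weight/threshold vector with validation RMSE as the fitness. The builder and trainer callables are hypothetical stand-ins, and the sparrow producer/scrounger/scout update rules are reduced to a placeholder step.

    import numpy as np

    def rmse_fitness(w0, build_net, train, X_val, y_val):
        """Validation RMSE of a network initialized from weight vector w0.
        build_net and train are user-supplied (hypothetical) callables."""
        net = build_net(w0)
        train(net)
        return float(np.sqrt(np.mean((net.predict(X_val) - y_val) ** 2)))

    def ssa_optimize(fit, dim, n_sparrows=20, iters=50):
        pop = np.random.uniform(-1.0, 1.0, (n_sparrows, dim))
        best = min(pop, key=fit)
        for _ in range(iters):
            # Placeholder for the SSA producer/scrounger/scout updates:
            pop = best + 0.1 * np.random.randn(n_sparrows, dim)
            cand = min(pop, key=fit)
            if fit(cand) < fit(best):
                best = cand
        return best  # initial weights/thresholds for the Elman network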
Complex network models are frequently employed for simulating and studying diverse real-world complex systems. Among these models, scale-free networks typically exhibit greater fragility to malicious attacks. Consequently, enhancing the robustness of scale-free networks has become a pressing issue. To address this problem, this paper proposes a Multi-Granularity Integration Algorithm (MGIA), which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged, ensuring network connectivity, and avoiding the generation of multiple edges. The algorithm generates a multi-granularity structure from the initial network to be optimized, then uses different optimization strategies to optimize the networks at the various granular layers of this structure, and finally realizes information exchange between the granular layers, thereby further enhancing the optimization effect. We propose new network refresh, crossover, and mutation operators to ensure that the optimized network satisfies the given constraints. Meanwhile, we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm. In the experiments, the MGIA enhances the robustness of the scale-free network by 67.6%. This improvement is approximately 17.2% higher than the optimization effects achieved by eight currently existing complex network robustness optimization algorithms.
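The constraint set (fixed node degrees, connectivity, no multiple edges) is exactly what a degree-preserving double-edge swap maintains, so a minimal single-layer baseline can be written directly; the sketch below hill-climbs the standard Schneider et al. robustness measure R and makes no attempt at the MGIA's multi-granularity machinery.

    import networkx as nx

    def robustness_R(G):
        """Mean largest-component fraction while removing nodes in
        decreasing-degree order (Schneider et al.)."""
        H, n, total = G.copy(), G.number_of_nodes(), 0.0
        for _ in range(n - 1):
            v = max(H.degree, key=lambda kv: kv[1])[0]
            H.remove_node(v)
            total += max(len(c) for c in nx.connected_components(H)) / n
        return total / n

    def improve_robustness(G, steps=1000):
        best = robustness_R(G)
        for _ in range(steps):
            H = G.copy()
            nx.double_edge_swap(H, nswap=1, max_tries=100)  # degrees unchanged
            if nx.is_connected(H):                          # keep connectivity
                r = robustness_R(H)
                if r > best:
                    G, best = H, r
        return G, best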
In a cloud manufacturing environment with abundant functionally equivalent cloud services, users naturally desire the highest-quality service(s). Thus, a comprehensive measurement of quality of service (QoS) is needed. Optimizing the plethora of cloud services has thus become a top priority. Cloud service optimization is negatively affected by untrusted QoS data, which are inevitably provided by some users. To resolve these problems, this paper proposes a QoS-aware cloud service optimization model and establishes QoS-information awareness and quantification mechanisms. Untrusted data are assessed by an information correction method. Weights for the evaluation indicators, mined from historical data with a variable-precision rough set, provide a comprehensive performance ranking of service quality. The manufacturing cloud service optimization algorithm thus provides a quantitative reference for service selection. In experimental simulations, this method recommended the optimal services that met users' needs and effectively reduced the impact of dishonest users on the selection results.
Accurate short-term wind power forecasting plays a crucial role in maintaining the safety and economic efficiency of smart grids. Although numerous studies have employed various methods to forecast wind power, there remains a research gap in leveraging swarm intelligence algorithms to optimize the hyperparameters of the Transformer model for wind power prediction. To improve the accuracy of short-term wind power forecasts, this paper proposes a hybrid short-term wind power forecast approach named STL-IAOA-iTransformer, which is based on seasonal and trend decomposition using LOESS (STL) and an iTransformer model optimized by an improved arithmetic optimization algorithm (IAOA). First, to fully extract the power data features, STL is used to decompose the original data into components with less redundant information. The extracted components, as well as the weather data, are then input into iTransformer for short-term wind power forecasting. The final predicted short-term wind power curve is obtained by combining the predicted components. To improve model accuracy, IAOA is employed to optimize the hyperparameters of iTransformer. The proposed approach is validated using real generation data from different seasons and different power stations in Northwest China, and ablation experiments have been conducted. Furthermore, to validate the superiority of the proposed approach under different wind characteristics, real power generation data from Southwest China are utilized for experiments. The comparative results against six other state-of-the-art prediction models show that the proposed model fits the true generation series well and achieves high prediction accuracy.
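A minimal sketch of the decompose-forecast-recombine pattern using the STL implementation in statsmodels; the iTransformer and IAOA stages are replaced by a stand-in per-component forecaster, so this shows the pipeline shape only.

    import numpy as np
    from statsmodels.tsa.seasonal import STL

    def forecast_power(series, period=24, horizon=24, forecast_fn=None):
        """Decompose with STL, forecast each component, sum the forecasts."""
        res = STL(series, period=period).fit()
        if forecast_fn is None:  # stand-in for the tuned iTransformer
            forecast_fn = lambda x: np.repeat(x[-period:].mean(), horizon)
        components = (res.trend, res.seasonal, res.resid)
        return sum(forecast_fn(np.asarray(c)) for c in components)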
The performance of central processing units (CPUs) can be enhanced by integrating multiple cores into a single chip. CPU performance can be further improved by allocating tasks using an intelligent strategy: if small tasks wait or execute for a long time, the CPU consumes more power, so the amount of power consumed by CPUs can be reduced without increasing the frequency. Lines are used to connect cores, which are organized together to form a network called a network-on-chip (NOC). NOCs are mainly used in the design of processors. However, their performance can still be enhanced by reducing power consumption. The main problem lies with task scheduling, which fully utilizes the network. Here, we propose a novel random-fit algorithm for NOCs based on power-aware optimization. In this algorithm, tasks belonging to the same application are mapped to neighborhoods of that application, whereas tasks belonging to different applications are mapped to the processor cores on the basis of a series of steps. This scheduling process is performed at run time. Experimental results show that the proposed random-fit algorithm reduces the amount of power consumed and increases system performance through effective scheduling.
Floorplanning is a prominent area in Very Large-Scale Integrated (VLSI) circuit design automation, because it influences the performance, size, yield, and reliability of VLSI chips. It is the process of estimating the positions and shapes of the modules. A high packing density, small feature size, and high clock frequency cause the Integrated Circuit (IC) to dissipate a large amount of heat. So, in this paper, a methodology is presented to distribute the temperature of the modules on the layout while simultaneously optimizing the total area and wirelength by using a hybrid Particle Swarm Optimization-Harmony Search (HPSOHS) algorithm. This hybrid algorithm employs a diversification technique (PSO) to obtain the global optimum, an intensification strategy (HS) to achieve the best solution at the local level, and a Modified Corner List (MCL) algorithm for floorplan representation. A thermal modelling tool called HotSpot is integrated with the proposed algorithm to obtain the temperature at the block level. The proposed algorithm is illustrated using Microelectronics Centre of North Carolina (MCNC) benchmark circuits. The results obtained are compared with solutions derived from other stochastic algorithms, and the proposed algorithm provides better solutions.
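Both ingredients of the hybrid are standard; below is a minimal sketch of the canonical PSO update with a comment marking where an HS-style local improvisation around the global best would slot in. The floorplan encoding and the MCL representation are not reproduced.

    import numpy as np

    def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
        """One canonical PSO update; x, v, pbest have shape (n_particles, dim)."""
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        # An HS-style intensification would now improvise around gbest,
        # pitch-adjusting a few coordinates and keeping any improvement.
        return x, v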
In disaster relief operations, multiple UAVs can be used to search for trapped people. In recent years, many researchers have proposed machine learning-based algorithms, sampling-based algorithms, and heuristic algorithms to solve the problem of multi-UAV path planning. Among these, the Dung Beetle Optimization (DBO) algorithm has been widely applied due to its diverse search patterns. However, the update strategies for the rolling and thieving dung beetles of the DBO algorithm are overly simplistic, potentially leading to an inability to fully explore the search space and a tendency to converge to local optima, thereby not guaranteeing the discovery of the optimal path. To address these issues, we propose an improved DBO algorithm guided by the Landmark Operator (LODBO). Specifically, we first use tent mapping as the population initialization strategy, which enables the algorithm to generate initial solutions with enhanced diversity within the search space. Second, we expand the search range of the rolling dung beetle by using the landmark factor. Finally, by using an adaptive factor that changes with the number of iterations, we improve the global search ability of the stealing dung beetle, making it more likely to escape from local optima. To verify the effectiveness of the proposed method, extensive simulation experiments are conducted, and the results show that the LODBO algorithm obtains the optimal path in the shortest time compared with the Genetic Algorithm (GA), the Grey Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), and the original DBO algorithm on the disaster search and rescue task set.
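Two of the three LODBO ingredients are easy to show in isolation; the sketch below gives a standard tent-map population initialization and a linearly decaying adaptive factor. The landmark operator itself is specific to the paper and omitted, and the constants are assumptions.

    import numpy as np

    def tent_map_init(n, dim, lo, hi, mu=2.0, warmup=10):
        """Iterate the chaotic tent map to spread initial solutions over [lo, hi]."""
        x = np.random.rand(n, dim)
        for _ in range(warmup):
            x = np.where(x < 0.5, mu * x, mu * (1.0 - x))
        return lo + x * (hi - lo)

    def adaptive_factor(t, t_max, start=0.9, end=0.1):
        """Decays with the iteration count: explore early, exploit late."""
        return start - (start - end) * t / t_max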
In this paper, we prove that Euclid's algorithm, Bezout's equation, and the Division algorithm are equivalent to one another. Our result shows that Euclid had already preliminarily established the theory of divisibility and the greatest common divisor. We further provide several suggestions for teaching.
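The equivalence is easy to make concrete in code: the extended Euclidean algorithm computes gcd(a, b) and the Bezout coefficients simultaneously, each recursive step being exactly one application of the division algorithm.

    def extended_gcd(a, b):
        """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
        if b == 0:
            return a, 1, 0
        q, r = divmod(a, b)           # one step of the division algorithm
        g, x, y = extended_gcd(b, r)
        return g, y, x - q * y        # back-substitute the Bezout identity

    # Example: gcd(240, 46) = 2 and 240*(-9) + 46*47 = 2.
    assert extended_gcd(240, 46) == (2, -9, 47)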
Previous studies have shown that deep learning is very effective in detecting known attacks. However, when facing unknown attacks, models such as Deep Neural Networks (DNN) combined with Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN) combined with LSTM, and so on are built by simple stacking, which suffers from feature loss, low efficiency, and low accuracy. Therefore, this paper proposes an autonomous detection model for Distributed Denial of Service attacks, Multi-Scale Convolutional Neural Network-Bidirectional Gated Recurrent Units-Single Headed Attention (MSCNN-BiGRU-SHA), which is based on a Multi-strategy Integrated Zebra Optimization Algorithm (MI-ZOA). The model undergoes training and testing with the CICDDoS2019 dataset, and its performance is evaluated on a new GINKS2023 dataset. The hyperparameters for Conv_filter and GRU_unit are optimized using the MI-ZOA. The experimental results show that the test accuracy of the MSCNN-BiGRU-SHA model based on the MI-ZOA proposed in this paper is as high as 0.9971 on the CICDDoS2019 dataset. The evaluation accuracy on the new GINKS2023 dataset created in this paper is 0.9386. Compared to the MSCNN-BiGRU-SHA model based on the original Zebra Optimization Algorithm (ZOA), the detection accuracy on the GINKS2023 dataset has improved by 5.81%, precision has increased by 1.35%, recall has improved by 9%, and the F1 score has increased by 5.55%. Compared to the MSCNN-BiGRU-SHA models developed using Grid Search, Random Search, and Bayesian Optimization, the MSCNN-BiGRU-SHA model optimized with the MI-ZOA exhibits better performance in terms of accuracy, precision, recall, and F1 score.
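The zebra update rules are not reproduced here; the sketch below shows only the wrapper role that MI-ZOA plays, a population search over the two tuned hyperparameters with validation accuracy as the fitness. The search ranges and the evaluate callable are assumptions.

    import random

    SPACE = {"conv_filter": range(16, 129), "gru_unit": range(32, 257)}

    def tune(evaluate, pop_size=10, iters=20):
        """evaluate(params) -> validation accuracy; it stands in for training
        the MSCNN-BiGRU-SHA model with the given hyperparameters."""
        pop = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(pop_size)]
        best = max(pop, key=evaluate)
        for _ in range(iters):
            # Placeholder for the zebra foraging/defense moves: jitter around best.
            pop = [{k: min(max(best[k] + random.randint(-8, 8), min(v)), max(v))
                    for k, v in SPACE.items()} for _ in range(pop_size)]
            cand = max(pop, key=evaluate)
            if evaluate(cand) > evaluate(best):
                best = cand
        return best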
Funding [corrected Uzawa algorithm paper]: Supported by the National Natural Science Foundation of China (11201422) and the Natural Science Foundation of Zhejiang Province (Y6110639, LQ12A01017).
Funding [MACAR routing paper]: Project 60973127 supported by the National Natural Science Foundation of China; Project 09JJ3123 supported by the Natural Science Foundation of Hunan Province, China.
Funding: Supported by the National Natural Science Foundation of China (61502405, 61300039), the Provincial Science Foundation of Hunan Province (14JJ3130), the Fujian Educational Bureau (JA15368), and the Xiamen University of Technology (YKJ13024R, XYK201437).
Funding [ILP-GA test case prioritization paper]: The Natural Science Foundation of the Education Ministry of Shaanxi Province (No. 15JK1672), the Industrial Research Project of Shaanxi Province (No. 2017GY-092), and the Special Fund for Key Discipline Construction of General Institutions of Higher Education in Shaanxi Province.
Funding [BACTS scheduling paper]: Supported by the National High Technology Research and Development Program of China (863 Program) (2002AA1Z1490) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (20040486049).
Funding [SSA-Elman SOH estimation paper]: Supported by the National Natural Science Foundation of China (NSFC) under Grant No. 51677058.
Funding [MGIA network robustness paper]: National Natural Science Foundation of China (11971211, 12171388).
Funding [QoS-aware cloud service optimization paper]: Supported by the National Natural Science Foundation of China (Grant No. 61602413, Jianwei Zheng, https://www.nsfc.gov.cn) and the Natural Science Foundation of Zhejiang Province (Grant No. LY15E050007, Wenlong Ma, http://zjnsf.kjt.zj.gov.cn/portal/index.html).
Funding [STL-IAOA-iTransformer wind power forecast paper]: Supported by the Yunnan Provincial Basic Research Project (202401AT070344, 202301AT070443), the National Natural Science Foundation of China (62263014, 52207105), the Yunnan Lancang-Mekong International Electric Power Technology Joint Laboratory (202203AP140001), and Major Science and Technology Projects in Yunnan Province (202402AG050006).
Funding [LODBO multi-UAV path planning paper]: Supported by the National Natural Science Foundation of China (No. 62373027).
Funding [Euclid's algorithm equivalence paper]: Supported by the Natural Science Foundation of Chongqing (General Program, No. CSTB2022NSCQ-MSX0884) and the Discipline Teaching Special Project of Yangtze Normal University (csxkjx14).
Funding [MSCNN-BiGRU-SHA DDoS detection paper]: Supported by the Science and Technology Innovation Program for Postgraduate Students in IDP Subsidized by Fundamental Research Funds for the Central Universities (Project No. ZY20240335), the Research Project of the Key Technology of Malicious Code Detection Based on Data Mining in APT Attack (Project No. 2022IT173), and the Research Project of the Big Data Sensitive Information Supervision Technology Based on Convolutional Neural Network (Project No. 2022011033).