Data center industries have been facing huge energy challenges due to escalating power consumption and associated carbon emissions. In the context of carbon neutrality, the integration of data centers with renewable energy has become a prevailing trend. To advance renewable energy integration in data centers, it is imperative to thoroughly explore data centers' operational flexibility. Computing workloads and refrigeration systems are recognized as two promising flexible resources for power regulation within data center micro-grids. This paper identifies and categorizes delay-tolerant computing workloads into three types (long-running non-interruptible, long-running interruptible, and short-running) and develops mathematical time-shifting models for each. Additionally, this paper examines the thermal dynamics of the computer room and derives a time-varying temperature model coupled to refrigeration power. Building on these models, this paper proposes a two-stage, multi-time-scale optimization scheduling framework that jointly coordinates computing workload time-shifting in day-ahead scheduling and refrigeration power control in intra-day dispatch to mitigate renewable variability. A case study demonstrates that the framework effectively enhances renewable energy utilization, improves the operational economy of the data center microgrid, and mitigates the impact of renewable power uncertainty. The results highlight the potential of coordinated computing workload and thermal system flexibility to support greener, more cost-effective data center operation.
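As a toy illustration of the time-shifting idea (not the paper's actual models, which cover three workload classes and a coupled thermal model), the sketch below greedily places delay-tolerant jobs at the hour in each job's feasibility window with the most remaining renewable surplus. The job list and surplus profile are hypothetical.

```python
# Illustrative day-ahead shifting of delay-tolerant jobs toward hours with
# renewable surplus. Job deadlines, power draws, and the surplus profile are
# invented; the paper's full time-shifting models are not reproduced here.
def schedule_shiftable(jobs, surplus):
    """jobs: list of (power_kw, earliest_hour, latest_hour);
    surplus: renewable surplus per hour (kW).
    Greedily place each job at the feasible hour with the most unused surplus."""
    remaining = list(surplus)
    plan = []
    for power, lo, hi in jobs:
        best = max(range(lo, hi + 1), key=lambda h: remaining[h])
        remaining[best] -= power
        plan.append(best)
    return plan

surplus = [0, 0, 5, 9, 9, 3, 0, 0]          # e.g. a midday solar surplus (kW)
jobs = [(4, 0, 7), (4, 0, 7), (2, 5, 7)]    # (power, earliest, latest)
print(schedule_shiftable(jobs, surplus))    # → [3, 4, 5]
```

The greedy rule stands in for the day-ahead optimization stage; the actual framework would co-optimize this with intra-day refrigeration control.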
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES that combines text-based, vector-based, and embedding-based similarity measures to improve essay scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting the top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R² of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R² to 88.95%. In Experiment 4, an optimal data-efficiency training approach was introduced, in which the training data portion increased from 5% to 50%. The study found that using just 10% of the data achieved near-peak performance, with an R² of 85.49%, emphasizing an effective trade-off between performance and computational cost. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
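The similarity-feature idea can be sketched in miniature: compute an embedding-based cosine similarity between each essay and a reference answer, then fit a one-feature least-squares line from similarity to score. The embeddings and scores below are hypothetical, and the paper's actual pipeline combines many such features in a Random Forest rather than a single linear fit.

```python
import math

def cosine(u, v):
    # embedding-based similarity feature: cosine of the angle between vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def fit_line(xs, ys):
    # ordinary least squares for a single feature: score ≈ a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical essay embeddings scored against a reference-answer embedding.
reference = [1.0, 0.0, 1.0]
essays = [[1.0, 0.1, 0.9], [0.2, 1.0, 0.1], [0.9, 0.0, 1.1]]
scores = [4.5, 1.0, 4.8]                     # human-assigned scores
sims = [cosine(e, reference) for e in essays]
a, b = fit_line(sims, scores)
print(round(a * 1.0 + b, 2))                 # predicted score for a perfect match
```

In the full system this single feature would sit alongside text-based and vector-based similarities as inputs to the RF regressor.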
Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing opposing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm called Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach utilizes multi-objective optimization techniques to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to guarantee optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
To improve traffic scheduling capability in operator data center networks, an analysis, prediction, and online scheduling mechanism (APOS) is designed that considers both the network structure and the network traffic in the operator data center. The Fibonacci tree optimization (FTO) algorithm is embedded into the analysis-prediction and online scheduling stages, and an FTO traffic scheduling strategy is proposed. By exploiting FTO's global-optimum and multi-modal optimization advantages, the optimal traffic scheduling solution and many suboptimal solutions can be obtained. The experimental results show that the FTO traffic scheduling strategy can schedule traffic in data center networks reasonably and effectively improve load balancing in the operator data center network.
Wireless sensor network deployment optimization is a classic NP-hard problem and a popular topic in academic research. However, current research on wireless sensor network deployment uses overly simplistic models, and there is a significant gap between research results and actual wireless sensor networks. Some scholars have therefore modeled data fusion networks to make them more suitable for practical applications. This paper explores the deployment problem of a stochastic data fusion wireless sensor network (SDFWSN), a model that reflects the randomness of environmental monitoring and uses the data fusion techniques widely applied in actual sensor networks for information collection. The SDFWSN deployment problem is modeled as a multi-objective optimization problem, with the network life cycle, spatiotemporal coverage, detection rate, and false alarm rate of the SDFWSN used as optimization objectives for deploying the network nodes. This paper proposes an enhanced multi-objective dwarf mongoose optimization algorithm (EMODMOA) to solve the SDFWSN deployment problem. First, to overcome shortcomings of the DMOA algorithm, such as slow convergence and a tendency to become trapped in local optima, an encircling and hunting strategy is introduced into the original algorithm, yielding the EDMOA algorithm. The EDMOA algorithm is then extended to the EMODMOA algorithm by selecting reference points using the K-Nearest Neighbor (KNN) algorithm. To verify its effectiveness, EMODMOA was tested on the CEC 2020 benchmarks and achieved good results. For the SDFWSN deployment problem, the algorithm was compared with the Non-dominated Sorting Genetic Algorithm II (NSGA-II), Multiple Objective Particle Swarm Optimization (MOPSO), the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), and the Multi-Objective Grey Wolf Optimizer (MOGWO). Comparison and analysis of the performance evaluation metrics and the optimization results of the objective functions show that the proposed algorithm outperforms the others in the SDFWSN deployment results. To further demonstrate its superiority, simulations of diverse test cases were also performed and produced good results.
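The non-domination test at the heart of any multi-objective comparison of candidate deployments can be sketched as follows. The objective values below are hypothetical, and both objectives are taken as minimized (e.g. 1/lifetime and 1-coverage):

```python
def dominates(u, v):
    """u dominates v if u is no worse in every objective (minimization)
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    # keep only points that no other point dominates
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical objective pairs for candidate node deployments.
cands = [(0.2, 0.7), (0.4, 0.4), (0.3, 0.3), (0.6, 0.6), (0.1, 0.9)]
print(pareto_front(cands))   # → [(0.2, 0.7), (0.3, 0.3), (0.1, 0.9)]
```

All of the compared algorithms (NSGA-II, MOPSO, MOEA/D, MOGWO, EMODMOA) ultimately report fronts of mutually non-dominated solutions like this.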
In the realm of subsurface flow simulations, deep-learning-based surrogate models have emerged as a promising alternative to traditional simulation methods, especially for addressing complex optimization problems. However, a significant challenge lies in the numerous high-fidelity training simulations needed to construct these deep-learning models, which limits their application to field-scale problems. To overcome this limitation, we introduce a training procedure that leverages transfer learning with multi-fidelity training data to construct surrogate models efficiently. The procedure begins with pre-training of the surrogate model using a relatively large amount of data that can be generated efficiently from upscaled coarse-scale models. Subsequently, the model parameters are fine-tuned with a much smaller set of high-fidelity simulation data. For the cases considered in this study, this method leads to about a 75% reduction in total computational cost compared with the traditional training approach, without any sacrifice of prediction accuracy. In addition, a dedicated well-control embedding model is introduced into the traditional U-Net architecture to improve the surrogate model's prediction accuracy, which is shown to be particularly effective when dealing with large-scale reservoir models under time-varying well-control parameters. Comprehensive results and analyses are presented for the prediction of well rates and the pressure and saturation states of a 3D synthetic reservoir system. Finally, the proposed procedure is applied to a field-scale production optimization problem. The trained surrogate model provides excellent generalization capabilities during the optimization process, in which the final optimized net present value is much higher than those in the training data ranges.
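The multi-fidelity idea, pre-training on plentiful coarse data and then fine-tuning on a few high-fidelity samples, can be illustrated with a deliberately tiny linear surrogate trained by plain gradient descent. The actual work uses a U-Net with a well-control embedding; everything below is synthetic.

```python
def fit_gd(w, b, data, lr, steps):
    # plain gradient descent on mean squared error for y ≈ w*x + b
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            err = w * x + b - y
            gw += 2 * err * x / len(data)
            gb += 2 * err / len(data)
        w -= lr * gw
        b -= lr * gb
    return w, b

def mse(w, b, data):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

coarse = [(x / 10, 2.0 * (x / 10) + 0.5) for x in range(50)]   # cheap, biased model
fine = [(x, 2.2 * x + 0.1) for x in (0.5, 2.0, 4.0)]           # few "high-fidelity" samples
w, b = fit_gd(0.0, 0.0, coarse, lr=0.02, steps=2000)           # pre-train on coarse data
before = mse(w, b, fine)
w, b = fit_gd(w, b, fine, lr=0.02, steps=2000)                 # fine-tune on fine data
after = mse(w, b, fine)
print(before > after)   # → True
```

Pre-training lands the parameters near the truth, so the small high-fidelity set only needs to correct the coarse model's bias, which is the mechanism the paper exploits to cut simulation cost.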
Miniature air quality sensors are widely used in urban grid-based monitoring due to their deployment flexibility and low cost. However, the raw data collected by these devices often suffer from low accuracy caused by environmental interference and sensor drift, highlighting the need for effective calibration methods to improve data reliability. This study proposes a data correction method based on Bayesian Optimization Support Vector Regression (BO-SVR), which combines the nonlinear modeling capability of Support Vector Regression (SVR) with the efficient global hyperparameter search of Bayesian Optimization. By introducing the cross-validation loss as the optimization objective and using Gaussian process modeling with an Expected Improvement acquisition strategy, the approach automatically determines optimal hyperparameters for accurate pollutant concentration prediction. Experiments on real-world micro-sensor datasets demonstrate that BO-SVR outperforms traditional SVR, grid-search SVR, and random forest (RF) models across multiple pollutants, including PM2.5, PM10, CO, NO2, SO2, and O3. The proposed method achieves lower prediction residuals, higher fitting accuracy, and better generalization, offering an efficient and practical solution for enhancing the quality of micro-sensor air monitoring data.
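The Expected Improvement acquisition mentioned above has a closed form given the Gaussian-process posterior mean and standard deviation at a candidate hyperparameter point. A minimal sketch for loss minimization follows; the numbers are illustrative, not from the paper.

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu, sigma, best):
    """EI for minimization, given the GP posterior mean/std at a candidate
    hyperparameter point and the best (lowest) CV loss observed so far."""
    if sigma <= 0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    return (best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

best = 0.30   # lowest cross-validation loss found so far
# A more uncertain candidate earns a larger acquisition value:
print(expected_improvement(0.28, 0.05, best) > expected_improvement(0.28, 0.01, best))  # → True
```

BO-SVR would evaluate this acquisition over the SVR hyperparameter space and run the next cross-validation at its maximizer.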
Developing an accurate and efficient comprehensive water quality prediction model, together with an assessment method for it, is crucial for the prevention and control of water pollution. Deep learning (DL), one of the most promising technologies today, plays a crucial role in the effective assessment of water body health, which is essential for water resource management. This study builds models using both the original dataset and a dataset augmented with Generative Adversarial Networks (GAN). It integrates optimization algorithms (OA) with Convolutional Neural Networks (CNN) to propose a comprehensive water quality model evaluation method aimed at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Six new models were then developed on these datasets using particle swarm optimization (PSO), genetic algorithm (GA), and simulated annealing (SA) combined with CNN to simulate and forecast the concentrations of three water pollutants: Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Finally, seven model evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal models for the three pollutants. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN concentration prediction. Compared with existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment, offering new insights and methods for water pollution prevention and control.
In aerodynamic optimization, global optimization methods such as genetic algorithms are preferred in many cases because of their ability to reach the global optimum. However, for complex problems that require a large number of design variables, the computational cost becomes prohibitive, and thus new global optimization strategies are required. To address this need, a data dimensionality reduction method is combined with global optimization methods to form a new global optimization system, aiming to improve the efficiency of conventional global optimization. The new system applies Proper Orthogonal Decomposition (POD) to reduce the dimensionality of the design space while maintaining the generality of the original design space. In addition, an acceleration approach for sample calculation in surrogate modeling is applied to reduce computational time while providing sufficient accuracy. Optimizations of the transonic airfoil RAE2822 and the transonic wing ONERA M6 are performed to demonstrate the effectiveness of the proposed system. In the two cases, the number of design variables is reduced from 20 to 10 and from 42 to 20, respectively. The new design optimization system converges faster, taking one third of the time of the traditional optimization to converge to a better design, thus significantly reducing the overall optimization time and improving the efficiency of conventional global design optimization methods.
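The core of POD is extracting dominant modes from mean-centred snapshots. Below is a dependency-free sketch that uses power iteration on the covariance matrix in place of the SVD used in practice; the snapshot values are hypothetical stand-ins for design-variable vectors.

```python
def leading_pod_mode(snapshots, iters=200):
    """Leading POD mode of mean-centred snapshots via power iteration on the
    covariance matrix (a stand-in for the SVD used in real POD codes)."""
    n = len(snapshots[0])
    mean = [sum(s[i] for s in snapshots) / len(snapshots) for i in range(n)]
    centred = [[s[i] - mean[i] for i in range(n)] for s in snapshots]
    # covariance C = X^T X (up to a constant factor)
    cov = [[sum(r[i] * r[j] for r in centred) for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Hypothetical 3-parameter shape perturbations that mostly co-vary:
snaps = [[0.1, 0.11, 0.09], [0.4, 0.41, 0.39], [0.8, 0.79, 0.81], [0.2, 0.2, 0.2]]
mode = leading_pod_mode(snaps)
print([round(abs(c), 2) for c in mode])
```

Because the snapshots vary almost entirely along one direction, a single mode captures nearly all the variance, which is exactly how POD lets a 20- or 42-variable design space shrink to 10 or 20 coordinates.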
A successful data-driven mechanical property prediction model is at the core of the optimal design of the hot rolling process for hot-rolled strips. However, the original industrial data, which are usually unbalanced, are inevitably mixed with fluctuating and abnormal values. Models established on such data without preprocessing can produce misleading results and cannot be used for optimal hot rolling process design. Thus, a method for processing industrial data on C-Mn steel was proposed based on data analysis. A Bayesian neural network was employed to establish reliable mechanical property prediction models for the optimal design of the hot rolling process. Using a multi-objective optimization algorithm and considering the individual requirements of customers and the constraints of the equipment, the optimal design of the hot rolling process was successfully applied to the rolling process design for Q345B steel, with the 0.017% Nb and 0.046% Ti content removed. The optimal process design results were in good agreement with the industrial trial results, verifying the effectiveness of the optimal design of the hot rolling process.
In the present study, the performance of an optimized aerodynamic shape is further improved by a second-step optimization using design knowledge discovered by a data mining technique based on Proper Orthogonal Decomposition (POD). Data generated in the first-step optimization using evolutionary algorithms are saved as the source data, from which the superior data, with improved objectives and maintained constraints, are chosen. Only the geometry components of the superior data are extracted and used to construct the POD snapshots. The geometry characteristics of the superior data, as illustrated by the POD bases, constitute the design knowledge, with which the second-step optimization can be rapidly achieved. The optimization methods are demonstrated by redesigning a transonic compressor rotor blade, NASA Rotor 37, to maximize the peak adiabatic efficiency while maintaining the total pressure ratio and mass flow rate. First, the blade is redesigned using a particle swarm optimization method, and the adiabatic efficiency is increased by 1.29%. Then, the second-step optimization is performed using the design knowledge, and a further 0.25% gain in adiabatic efficiency is obtained. The results are presented and discussed in detail, demonstrating that the geometry variations significantly change the pattern and strength of the shock wave in the blade passage. The former reduces the separation loss, while the latter reduces the shock loss, and both favor an increase in adiabatic efficiency.
The application and development of wide-area measurement systems (WAMS) have enabled many applications and led to several requirements based on dynamic measurement data. Such data are transmitted as a big-data information flow. To ensure effective transmission of wide-frequency electrical information by the communication protocol of a WAMS, this study performs real-time traffic monitoring and analysis of the data network of a power information system and establishes corresponding network optimization strategies to solve existing transmission problems. The traffic analysis results obtained with the current real-time dynamic monitoring system are used to design an optimization strategy covering three progressive levels: the underlying communication protocol, the source data, and the transmission process. Optimization of the system structure and scheduling optimization of the data information are validated as feasible and practical via tests.
Particle Swarm Optimization (PSO) has been utilized as a useful tool for solving intricate optimization problems in various fields. This paper provides an update on PSO, reviewing its recent developments and applications, and presents arguments for its efficacy in solving optimization problems compared with other algorithms. Covering six strategic areas, namely Data Mining, Machine Learning, Engineering Design, Energy Systems, Healthcare, and Robotics, the study demonstrates the versatility and effectiveness of PSO. Experimental results are used to show the strengths and weaknesses of PSO, and performance results are tabulated for ease of comparison. The results underline PSO's efficiency in providing optimal solutions, but also show that some aspects need improvement through hybridization with other algorithms or tuning of the method's parameters. The review of the advantages and limitations of PSO is intended to give academics and practitioners a well-rounded view of how to employ the method most effectively and to encourage optimized designs of PSO for solving theoretical and practical problems in the future.
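A minimal global-best PSO, the canonical form that the surveyed variants build on, fits in a few lines. The inertia and acceleration coefficients below are common textbook values, not tuned for any particular problem.

```python
import random

def pso(f, dim, n=20, iters=200, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO minimizing f over the box [lo, hi]^dim."""
    rng = random.Random(1)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]          # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

sphere = lambda x: sum(c * c for c in x)
best, val = pso(sphere, dim=3)
print(round(val, 6))
```

The variants reviewed in the paper mostly adjust the inertia schedule, the topology through which `gbest` is shared, or hybridize this update with other operators.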
The cooling process of iron ore pellets in a circular cooler has great impact on pellet quality and systematic energy exploitation. However, the multiple variables and non-visualized nature of this gray system are unfavorable to efficient production. Thus, the cooling process of iron ore pellets was optimized using a mathematical model and data mining techniques. A mathematical model was established and validated with steady-state production data, and the results show that the calculated values coincide very well with the measured values. Based on the proposed model, the effects of important process parameters on the gas-pellet temperature profiles within the circular cooler were analyzed to better understand the entire cooling process. Two data mining techniques, association rule induction and clustering, were also applied to the steady-state production data to obtain expert operating rules and optimized targets. Finally, an optimized control strategy for the circular cooler was proposed and an operation guidance system was developed. The system realizes visualization of the thermal process at steady state and provides operation guidance to optimize the circular cooler.
In the big data environment, enterprises must constantly assimilate big data knowledge and private knowledge through multiple knowledge transfers to maintain their competitive advantage. The optimal timing of knowledge transfer is one of the most important factors in improving knowledge transfer efficiency. Based on an analysis of the complex characteristics of knowledge transfer in the big data environment, multiple knowledge transfers can be divided into two categories: the simultaneous transfer of various types of knowledge, and multiple knowledge transfers at different points in time. Taking into consideration influential factors such as the knowledge type, knowledge structure, knowledge absorptive capacity, knowledge update rate, discount rate, market share, profit contribution of each type of knowledge, transfer costs, and product life cycle, time optimization models of multiple knowledge transfers in the big data environment are presented that maximize the total discounted expected profits (DEPs) of an enterprise. Simulation experiments have been performed to verify the validity of the models, which can help enterprises determine the optimal timing of multiple knowledge transfers in the big data environment.
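The timing question can be posed numerically in miniature: scan candidate transfer times and pick the one maximizing discounted profit. The profit function below is an invented illustration (linearly growing absorptive capacity, a fixed transfer cost, constant monthly returns), not the paper's model.

```python
def discounted_profit(t, horizon=24, rate=0.01):
    """Illustrative DEP of transferring a knowledge package in month t:
    transferring earlier earns returns for longer, but knowledge transferred
    too early is absorbed poorly; the transfer itself has a fixed cost."""
    absorb = min(1.0, t / 6.0)          # absorptive capacity grows over time
    cost = 10.0
    profit = 0.0
    for m in range(t, horizon):
        profit += absorb * 3.0 / ((1 + rate) ** m)   # discounted monthly return
    return profit - cost / ((1 + rate) ** t)         # discounted transfer cost

best_t = max(range(1, 24), key=discounted_profit)
print(best_t)   # → 6
```

The interior optimum reflects the trade-off the paper's models formalize: transfer too early and absorption is poor, too late and the remaining product life cycle is short.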
A novel binary particle swarm optimization algorithm for frequent itemset mining from high-dimensional datasets (BPSO-HD) is proposed, incorporating two improvements. First, dimensionality reduction of the initial particles was designed to ensure reasonable initial fitness; second, dynamic dimensionality cutting of the dataset was built in to decrease the search space. On four high-dimensional datasets, BPSO-HD was compared with Apriori to test its reliability, and with ordinary BPSO and the quantum swarm evolutionary (QSE) algorithm to demonstrate its advantages. The experiments show that the results given by BPSO-HD are reliable and better than those generated by BPSO and QSE.
Based on traditional numerical simulation and optimization algorithms, combined with the layered injection and production "hard data" monitored in real time by automatic control technology, a systematic approach for detailed water injection design using data-driven algorithms is proposed. First, data assimilation technology is used to match geological model parameters under the constraint of observed well dynamics; the flow relationships between injectors and producers in the block are calculated based on an automatic identification method for layered injection-production flow relationships; and a multi-layer, multi-direction production splitting technique is used to calculate the liquid and oil production of producers in different layers and directions and to obtain quantified indexes of the water injection effect. Then, machine learning algorithms are applied to evaluate the effectiveness of water injection in different layers of wells and to adjust the water injection direction. Finally, the particle swarm algorithm is used to optimize the detailed water injection plan and to make production predictions. This method and procedure make full use of the automation and intelligence of data-driven and machine learning algorithms. The method was used to match the data of a complex faulted reservoir in eastern China, achieving a fitting level of 85%. The cumulative oil production in the example block for the 12 months after optimization is 8.2% higher than before. This method can help design detailed water injection programs for mature oilfields.
To increase fault diagnosis efficiency and enable fault data mining, decision tables containing numerical attributes must be discretized for further calculation. The discernibility matrix-based reduction method depends on whether the numerical attributes can be properly discretized. A discretization algorithm based on particle swarm optimization (PSO) is therefore proposed, with hybrid weights adopted in the particle evolution process. Comparative calculations for certain equipment were completed to demonstrate the effectiveness of the proposed algorithm. The results indicate that the proposed algorithm performs better than other popular algorithms such as the class-attribute interdependence maximization (CAIM) discretization method and the entropy-based discretization method.
We present an electrical grid optimization method for economic benefit. After simplifying an IEEE feeder diagram, we build a compact smart grid system including a photovoltaic-inverter system, a shunt capacitor, an on-load tap changer (OLTC), and transmission lines. System power factor (PF) regulation and reactive power dispatching are indispensable for improving power quality. Our control method uses predictive weather and load data to decide whether to engage or trip the shunt capacitor, or to inject reactive power via the photovoltaic-inverter system, ultimately keeping the system PF in a good range. From the perspective of economics, an economic model serves as the decision maker in our predictive data control method. A capacitor-only control strategy, a common photovoltaic (PV) regulation method, is treated as the baseline case. Simulations with GridLAB-D on profiled loads and residential loads have been carried out. Comparison of the baseline control strategy with our predictive data control method shows the appreciable economic benefit of our method.
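The capacitor's role in PF regulation follows the standard power-factor-correction relation Qc = P * (tan(phi1) - tan(phi2)); the load values below are illustrative, and this is only the sizing formula, not the paper's full predictive dispatch model.

```python
import math

def capacitor_kvar(p_kw, pf_now, pf_target):
    """Reactive power (kvar) a shunt capacitor must supply to raise a load's
    power factor from pf_now to pf_target (standard PF-correction formula)."""
    q_now = p_kw * math.tan(math.acos(pf_now))        # reactive demand today
    q_target = p_kw * math.tan(math.acos(pf_target))  # reactive demand at target PF
    return q_now - q_target

# e.g. a 100 kW load corrected from PF 0.80 to 0.95:
print(round(capacitor_kvar(100.0, 0.80, 0.95), 1))   # → 42.1
```

In the predictive scheme, forecast P for each interval feeds a calculation like this to decide between switching the capacitor and dispatching inverter reactive power.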
Funding for the data center flexibility study: Science and Technology Standard Project of Guangdong Electric Power Design Institute (ER11301W, ER11811W).
Funding for the Arabic AES study: Deanship of Graduate Studies and Scientific Research at Jouf University, grant No. DGSSR-2024-02-01264.
文摘Automated essay scoring(AES)systems have gained significant importance in educational settings,offering a scalable,efficient,and objective method for evaluating student essays.However,developing AES systems for Arabic poses distinct challenges due to the language’s complex morphology,diglossia,and the scarcity of annotated datasets.This paper presents a hybrid approach to Arabic AES by combining text-based,vector-based,and embeddingbased similarity measures to improve essay scoring accuracy while minimizing the training data required.Using a large Arabic essay dataset categorized into thematic groups,the study conducted four experiments to evaluate the impact of feature selection,data size,and model performance.Experiment 1 established a baseline using a non-machine learning approach,selecting top-N correlated features to predict essay scores.The subsequent experiments employed 5-fold cross-validation.Experiment 2 showed that combining embedding-based,text-based,and vector-based features in a Random Forest(RF)model achieved an R2 of 88.92%and an accuracy of 83.3%within a 0.5-point tolerance.Experiment 3 further refined the feature selection process,demonstrating that 19 correlated features yielded optimal results,improving R2 to 88.95%.In Experiment 4,an optimal data efficiency training approach was introduced,where training data portions increased from 5%to 50%.The study found that using just 10%of the data achieved near-peak performance,with an R2 of 85.49%,emphasizing an effective trade-off between performance and computational costs.These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems,especially in low-resource environments,addressing linguistic challenges while ensuring efficient data usage.
Abstract: Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing opposing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm called Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach utilizes multi-objective optimization techniques to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to guarantee optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
Funding: Supported by the National Natural Science Foundation of China (No. 62163036).
Abstract: To improve the traffic scheduling capability in operator data center networks, an analysis prediction and online scheduling mechanism (APOS) is designed, considering both the network structure and the network traffic in the operator data center. The Fibonacci tree optimization algorithm (FTO) is embedded into both the analysis-prediction and online-scheduling stages, and an FTO traffic scheduling strategy is proposed. By exploiting FTO's global-optimum and multi-modal optimization advantages, the optimal traffic scheduling solution and many suboptimal solutions can be obtained. The experimental results show that the FTO traffic scheduling strategy can schedule traffic in data center networks reasonably and effectively improve load balancing in the operator data center network.
基金supported by the National Natural Science Foundation of China under Grant Nos.U21A20464,62066005Innovation Project of Guangxi Graduate Education under Grant No.YCSW2024313.
Abstract: Wireless sensor network deployment optimization is a classic NP-hard problem and a popular topic in academic research. However, current research on wireless sensor network deployment problems uses overly simplistic models, and there is a significant gap between the research results and actual wireless sensor networks. Some scholars have now modeled data fusion networks to make them more suitable for practical applications. This paper explores the deployment problem of a stochastic data fusion wireless sensor network (SDFWSN), a model that reflects the randomness of environmental monitoring and uses the data fusion techniques widely employed in actual sensor networks for information collection. The deployment problem of SDFWSN is modeled as a multi-objective optimization problem. The network life cycle, spatiotemporal coverage, detection rate, and false alarm rate of the SDFWSN are used as optimization objectives to optimize the deployment of network nodes. This paper proposes an enhanced multi-objective mongoose optimization algorithm (EMODMOA) to solve the deployment problem of SDFWSN. First, to overcome shortcomings of the DMOA algorithm, such as its slow convergence and tendency to get stuck in local optima, an encircling and hunting strategy is introduced into the original algorithm, yielding the EDMOA algorithm. The EDMOA algorithm is then extended to the EMODMOA algorithm by selecting reference points using the K-Nearest Neighbor (KNN) algorithm. To verify its effectiveness, the EMODMOA algorithm was tested on the CEC 2020 benchmark suite and achieved good results. In the SDFWSN deployment problem, the algorithm was compared with the Non-dominated Sorting Genetic Algorithm II (NSGA-II), Multiple Objective Particle Swarm Optimization (MOPSO), the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), and the Multi-Objective Grey Wolf Optimizer (MOGWO). Comparison and analysis of the performance evaluation metrics and objective-function optimization results of the multi-objective algorithms show that EMODMOA outperforms the other algorithms in the SDFWSN deployment results. To further demonstrate the superiority of the algorithm, simulations of diverse test cases were also performed, and good results were obtained.
Funding: Supported by the National Natural Science Foundation of China (No. 52204065, No. ZX20230398) and by a grant from the Human Resources Development Program (No. 20216110100070) of the Korea Institute of Energy Technology Evaluation and Planning (KETEP).
Abstract: In the realm of subsurface flow simulations, deep-learning-based surrogate models have emerged as a promising alternative to traditional simulation methods, especially in addressing complex optimization problems. However, a significant challenge lies in the necessity of numerous high-fidelity training simulations to construct these deep-learning models, which limits their application to field-scale problems. To overcome this limitation, we introduce a training procedure that leverages transfer learning with multi-fidelity training data to construct surrogate models efficiently. The procedure begins with pre-training of the surrogate model using a relatively large amount of data that can be efficiently generated from upscaled coarse-scale models. Subsequently, the model parameters are fine-tuned with a much smaller set of high-fidelity simulation data. For the cases considered in this study, this method leads to about a 75% reduction in total computational cost, in comparison with the traditional training approach, without any sacrifice of prediction accuracy. In addition, a dedicated well-control embedding model is introduced to the traditional U-Net architecture to improve the surrogate model's prediction accuracy, which is shown to be particularly effective when dealing with large-scale reservoir models under time-varying well-control parameters. Comprehensive results and analyses are presented for the prediction of well rates, pressure, and saturation states of a 3D synthetic reservoir system. Finally, the proposed procedure is applied to a field-scale production optimization problem. The trained surrogate model is shown to provide excellent generalization capabilities during the optimization process, in which the final optimized net present value is much higher than those within the training data ranges.
Abstract: Miniature air quality sensors are widely used in urban grid-based monitoring due to their flexibility in deployment and low cost. However, the raw data collected by these devices often suffer from low accuracy caused by environmental interference and sensor drift, highlighting the need for effective calibration methods to improve data reliability. This study proposes a data correction method based on Bayesian Optimization Support Vector Regression (BO-SVR), which combines the nonlinear modeling capability of Support Vector Regression (SVR) with the efficient global hyperparameter search of Bayesian Optimization. By introducing cross-validation loss as the optimization objective and using Gaussian process modeling with an Expected Improvement acquisition strategy, the approach automatically determines optimal hyperparameters for accurate pollutant concentration prediction. Experiments on real-world micro-sensor datasets demonstrate that BO-SVR outperforms traditional SVR, grid-search SVR, and random forest (RF) models across multiple pollutants, including PM₂.₅, PM₁₀, CO, NO₂, SO₂, and O₃. The proposed method achieves lower prediction residuals, higher fitting accuracy, and better generalization, offering an efficient and practical solution for enhancing the quality of micro-sensor air monitoring data.
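The Expected Improvement (EI) acquisition strategy referenced in the abstract can be sketched in closed form. Given a Gaussian-process posterior mean and standard deviation at a candidate hyperparameter point, EI scores how much improvement over the best value observed so far is expected. The numbers below are hypothetical, and the full BO-SVR loop (GP fitting, SVR training) is omitted:

```python
import math

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """EI for maximization, with exploration margin xi.
    mu, sigma: GP posterior mean and std at the candidate point.
    f_best: best objective value observed so far."""
    if sigma <= 0.0:
        return max(mu - f_best - xi, 0.0)
    z = (mu - f_best - xi) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))      # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (mu - f_best - xi) * cdf + sigma * pdf

# A candidate with a promising mean and high uncertainty scores higher
# than one that is only marginally above the incumbent with near certainty.
ei_uncertain = expected_improvement(mu=0.80, sigma=0.10, f_best=0.78)
ei_certain = expected_improvement(mu=0.79, sigma=0.001, f_best=0.78)
```

This trade-off between exploiting a high posterior mean and exploring high-variance regions is what lets Bayesian optimization find good SVR hyperparameters with few evaluations.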
Funding: Supported by the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2022JM-396), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA23040101), Shaanxi Province Key Research and Development Projects (Program No. 2023-YBSF-437), the Xi'an Shiyou University Graduate Student Innovation Fund Program (Program No. YCX2412041), the State Key Laboratory of Air Traffic Management System and Technology (SKLATM202001), the Tianjin Education Commission Research Program Project (2020KJ028), and the Fundamental Research Funds for the Central Universities (3122019132).
Abstract: Developing an accurate and efficient comprehensive water quality prediction model and its assessment method is crucial for the prevention and control of water pollution. Deep learning (DL), as one of the most promising technologies today, plays a crucial role in the effective assessment of water body health, which is essential for water resource management. This study builds models using both the original dataset and a dataset augmented with Generative Adversarial Networks (GAN). It integrates optimization algorithms (OA) with Convolutional Neural Networks (CNN) to propose a comprehensive water quality model evaluation method aimed at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Then, six new models were developed on these datasets using particle swarm optimization (PSO), genetic algorithm (GA), and simulated annealing (SA) combined with CNN to simulate and forecast the concentrations of three water pollutants: Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Finally, seven model evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal models for the three pollutants. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN concentration prediction. Compared to existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment, offering new insights and methods for water pollution prevention and control.
Funding: Supported by the National Natural Science Foundation of China (No. 11502211).
Abstract: In aerodynamic optimization, global optimization methods such as genetic algorithms are preferred in many cases because of their advantage in reaching the global optimum. However, for complex problems in which a large number of design variables are needed, the computational cost becomes prohibitive, and thus novel global optimization strategies are required. To address this need, a data dimensionality reduction method is combined with global optimization methods, forming a new global optimization system that aims to improve the efficiency of conventional global optimization. The new optimization system applies Proper Orthogonal Decomposition (POD) to reduce the dimensionality of the design space while maintaining the generality of the original design space. Besides, an acceleration approach for sample calculation in surrogate modeling is applied to reduce the computational time while providing sufficient accuracy. Optimizations of the transonic airfoil RAE2822 and the transonic wing ONERA M6 are performed to demonstrate the effectiveness of the proposed system. In both cases, the number of design variables is reduced from 20 to 10 and from 42 to 20, respectively. The new design optimization system converges faster, taking one third of the total time of traditional optimization to converge to a better design, thus significantly reducing the overall optimization time and improving the efficiency of the conventional global design optimization method.
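The POD-based dimensionality reduction of a design space can be sketched with an SVD: collect snapshots of design-variable vectors, subtract the mean, and keep the leading left singular vectors as a reduced basis. The snapshot matrix below is random illustrative data, not airfoil geometry:

```python
import numpy as np

# Minimal POD sketch: 8 design snapshots, each with 20 original variables.
rng = np.random.default_rng(0)
snapshots = rng.normal(size=(20, 8))     # columns are design vectors
mean = snapshots.mean(axis=1, keepdims=True)

# POD modes are the left singular vectors of the centered snapshot matrix.
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

k = 4                                    # retain 4 modes: 20 -> 4 variables
basis = U[:, :k]                         # orthonormal reduced design basis

# A new design is now parameterized by k mode coefficients instead of 20
# raw variables, which is the dimensionality reduction the abstract describes.
coeffs = rng.normal(size=k)
new_design = mean[:, 0] + basis @ coeffs
```

The optimizer then searches over the k coefficients, and every candidate is mapped back to the full design space through the basis.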
Abstract: A successful mechanical property data-driven prediction model is the core of the optimal design of the hot rolling process for hot-rolled strips. However, the original industrial data, usually unbalanced, are inevitably mixed with fluctuant and abnormal values. Models established on such data without processing can produce misleading results, which cannot be used for the optimal design of the hot rolling process. Thus, a method of industrial data processing for C-Mn steel was proposed based on data analysis. A Bayesian neural network was employed to establish reliable mechanical property prediction models for the optimal design of the hot rolling process. By using a multi-objective optimization algorithm and considering the individual requirements of customers and the constraints of the equipment, the optimal design of the hot rolling process was successfully applied to the rolling process design for Q345B steel with 0.017% Nb and 0.046% Ti content removed. The optimal process design results were in good agreement with the industrial trial results, which verifies the effectiveness of the optimal design of the hot rolling process.
Funding: Supported by the National Natural Science Foundation of China (Nos. 51676003, 51206003, and 11702305).
Abstract: The performance of an optimized aerodynamic shape is further improved by a second-step optimization using the design knowledge discovered by a data mining technique based on Proper Orthogonal Decomposition (POD) in the present study. Data generated in the first-step optimization by using evolutionary algorithms are saved as the source data, among which the superior data with improved objectives and maintained constraints are chosen. Only the geometry components of the superior data are picked out and used for constructing the snapshots of POD. The geometry characteristics of the superior data illustrated by the POD bases constitute the design knowledge, by which the second-step optimization can be rapidly achieved. The optimization methods are demonstrated by redesigning a transonic compressor rotor blade, NASA Rotor 37, to maximize the peak adiabatic efficiency while maintaining the total pressure ratio and mass flow rate. Firstly, the blade is redesigned by using a particle swarm optimization method, and the adiabatic efficiency is increased by 1.29%. Then, the second-step optimization is performed by using the design knowledge, and a 0.25% gain in adiabatic efficiency is obtained. The results are presented and addressed in detail, demonstrating that geometry variations significantly change the pattern and strength of the shock wave in the blade passage. The former reduces the separation loss, while the latter reduces the shock loss, and both favor an increase of the adiabatic efficiency.
Abstract: The application and development of a wide-area measurement system (WAMS) has enabled many applications and led to several requirements based on dynamic measurement data. Such data are transmitted as a big-data information flow. To ensure effective transmission of wide-frequency electrical information by the communication protocol of a WAMS, this study performs real-time traffic monitoring and analysis of the data network of a power information system, and establishes corresponding network optimization strategies to solve existing transmission problems. This study utilizes the traffic analysis results obtained using the current real-time dynamic monitoring system to design an optimization strategy, covering optimization at three progressive levels: the underlying communication protocol, the source data, and the transmission process. Optimization of the system structure and scheduling optimization of data information are validated to be feasible and practical via tests.
Abstract: Particle Swarm Optimization (PSO) has been utilized as a useful tool for solving intricate optimization problems in various applications across different fields. This paper provides an update on PSO, reviewing its recent developments and applications, and presents arguments for its efficacy in resolving optimization problems in comparison with other algorithms. Covering six strategic areas, which include Data Mining, Machine Learning, Engineering Design, Energy Systems, Healthcare, and Robotics, the study demonstrates the versatility and effectiveness of PSO. Experimental results are used to show the strong and weak aspects of PSO, and performance results are included in tables for ease of comparison. The results stress PSO's efficiency in providing optimal solutions but also show that there are aspects that need to be improved through hybridization with other algorithms or tuning of the method's parameters. The review of the advantages and limitations of PSO is intended to provide academics and practitioners with a well-rounded view of how to employ the tool most effectively and to encourage optimized designs of PSO for solving theoretical and practical problems in the future.
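The PSO update rule the review discusses can be shown in a minimal self-contained optimizer: each particle moves under inertia plus stochastic attraction toward its personal best and the swarm's global best. The coefficients below are common textbook defaults, not values prescribed by any reviewed paper:

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over R^dim with a basic global-best PSO."""
    rnd = random.Random(42)  # seeded for reproducibility
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive (personal best) + social (global best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the sphere function; the optimum is 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

The hybridization and parameter-tuning directions the review highlights typically modify exactly these three velocity terms (adaptive inertia, different attractors, or topology-restricted global bests).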
Funding: Sponsored by the National Natural Science Foundation of China (51174253).
Abstract: The cooling process of iron ore pellets in a circular cooler has great impacts on pellet quality and systematic energy exploitation. However, the multi-variable and non-visualized nature of this gray system is unfavorable to efficient production. Thus, the cooling process of iron ore pellets was optimized using a mathematical model and data mining techniques. A mathematical model was established and validated with steady-state production data, and the results show that the calculated values coincide very well with the measured values. Based on the proposed model, the effects of important process parameters on gas-pellet temperature profiles within the circular cooler were analyzed to better understand the entire cooling process. Two data mining techniques, Association Rules Induction and Clustering, were also applied to the steady-state production data to obtain expert operating rules and optimized targets. Finally, an optimized control strategy for the circular cooler was proposed and an operation guidance system was developed. The system can realize the visualization of the thermal process at steady state and provide operation guidance to optimize the circular cooler.
基金supported by the National Natural Science Foundation ofChina (Grant No. 71704016,71331008, 71402010)the Natural Science Foundation of HunanProvince (Grant No. 2017JJ2267)+1 种基金the Educational Economy and Financial Research Base ofHunan Province (Grant No. 13JCJA2)the Project of China Scholarship Council forOverseas Studies (201508430121, 201208430233).
Abstract: In the big data environment, enterprises must constantly assimilate big data knowledge and private knowledge through multiple knowledge transfers to maintain their competitive advantage. The optimal time of knowledge transfer is one of the most important aspects of improving knowledge transfer efficiency. Based on the analysis of the complex characteristics of knowledge transfer in the big data environment, multiple knowledge transfers can be divided into two categories. One is the simultaneous transfer of various types of knowledge, and the other is multiple knowledge transfers at different time points. Taking into consideration influential factors such as the knowledge type, knowledge structure, knowledge absorptive capacity, knowledge update rate, discount rate, market share, profit contributions of each type of knowledge, transfer costs, product life cycle, and so on, time optimization models of multiple knowledge transfers in the big data environment are presented by maximizing the total discounted expected profits (DEPs) of an enterprise. Simulation experiments have been performed to verify the validity of the models, and the models can help enterprises determine the optimal time of multiple knowledge transfers in the big data environment.
Abstract: A novel binary particle swarm optimization for frequent itemset mining from high-dimensional datasets (BPSO-HD) was proposed, in which two improvements were incorporated. Firstly, dimensionality reduction of the initial particles was designed to ensure reasonable initial fitness; then, dynamic dimensionality cutting of the dataset was built to decrease the search space. Based on four high-dimensional datasets, BPSO-HD was compared with Apriori to test its reliability, and was compared with the ordinary BPSO and quantum swarm evolutionary (QSE) algorithms to prove its advantages. The experiments show that the results given by BPSO-HD are reliable and better than the results generated by BPSO and QSE.
Funding: Supported by the Key Program of PetroChina Exploration & Production Company (Grant Nos. kt2017-17-01-1 and kt2017-17-06-1) and the Consulting Project of the Chinese Academy of Engineering (Grant No. 2019-XZ-17).
Abstract: Based on traditional numerical simulation and optimization algorithms, in combination with the layered injection and production "hard data" monitored in real time by automatic control technology, a systematic approach for detailed water injection design using data-driven algorithms is proposed. First, data assimilation technology is used to match geological model parameters under the constraint of observed well dynamics; the flow relationships between injectors and producers in the block are calculated based on an automatic identification method for layered injection-production flow relationships; and a multi-layer, multi-direction production splitting technique is used to calculate the liquid and oil production of producers in different layers and directions and obtain quantified indexes of water injection effect. Then, machine learning algorithms are applied to evaluate the effectiveness of water injection in different layers of wells and to perform water injection direction adjustment. Finally, the particle swarm algorithm is used to optimize the detailed water injection plan and make production predictions. This method and procedure make full use of the automation and intelligence of data-driven and machine learning algorithms. The method was used to match the data of a complex faulted reservoir in eastern China, achieving a fitting level of 85%. The cumulative oil production in the example block for 12 months after optimization is 8.2% higher than before. This method can help design detailed water injection programs for mature oilfields.
Funding: Supported by the National Natural Science Foundation of China (No. 51775090) and the General Program of the Civil Aviation Flight University of China (No. J2015-39).
Abstract: In order to increase fault diagnosis efficiency and enable fault data mining, the decision table containing numerical attributes must be discretized for further calculations. The discernibility matrix-based reduction method depends on whether the numerical attributes can be properly discretized or not. So a discretization algorithm based on particle swarm optimization (PSO) is proposed. Moreover, hybrid weights are adopted in the process of particle evolution. Comparative calculations for certain equipment are completed to demonstrate the effectiveness of the proposed algorithm. The results indicate that the proposed algorithm has better performance than other popular algorithms such as the class-attribute interdependence maximization (CAIM) discretization method and the entropy-based discretization method.
Abstract: We present an electrical grid optimization method for economic benefit. After simplifying an IEEE feeder diagram, we build a compact smart grid system including a photovoltaic-inverter system, a shunt capacitor, an on-load tap changer (OLTC), and transmission lines. System power factor (PF) regulation and reactive power dispatching are indispensable to improve power quality. Our control method uses predictive weather and load data to decide on engaging or tripping the shunt capacitor, or on reactive power injection by the photovoltaic-inverter system, ultimately to keep the system PF in a good range. From the perspective of economics, an economic model serves as the decision maker in our predictive data control method. A capacitor-only control strategy is a common photovoltaic (PV) regulation method and is treated as the baseline case. Simulations with GridLAB-D on profiled loads and residential loads have been carried out. The comparison between the baseline control strategy and our predictive data control method shows the appreciable economic benefit of our method.