Heuristic optimization algorithms have been widely used in solving complex optimization problems in various fields such as engineering, economics, and computer science. These algorithms are designed to find high-quality solutions efficiently by balancing exploration of the search space and exploitation of promising solutions. While heuristic optimization algorithms vary in their specific details, they often exhibit common patterns that are essential to their effectiveness. This paper aims to analyze and explore common patterns in heuristic optimization algorithms. Through a comprehensive review of the literature, we identify the patterns that are commonly observed in these algorithms, including initialization, local search, diversity maintenance, adaptation, and stochasticity. For each pattern, we describe the motivation behind it, its implementation, and its impact on the search process. To demonstrate the utility of our analysis, we identify these patterns in multiple heuristic optimization algorithms. For each case study, we analyze how the patterns are implemented in the algorithm and how they contribute to its performance. Through these case studies, we show how our analysis can be used to understand the behavior of heuristic optimization algorithms and guide the design of new algorithms. Our analysis reveals that patterns in heuristic optimization algorithms are essential to their effectiveness. By understanding and incorporating these patterns into the design of new algorithms, researchers can develop more efficient and effective optimization algorithms.
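As an illustration of how these recurring patterns fit together, the following is a minimal, hypothetical sketch of a generic population-based heuristic; the toy objective, parameter values, and the specific adaptation and diversity rules are assumptions for exposition, not a reconstruction of any particular algorithm surveyed in the paper.

```python
import random

def sphere(x):
    """Toy objective to minimize (an assumed stand-in for a real problem)."""
    return sum(v * v for v in x)

def generic_heuristic(dim=5, pop_size=20, iters=200, seed=0):
    rng = random.Random(seed)

    # Pattern 1: initialization -- random solutions spread over the search space.
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    step = 1.0                      # Pattern 4: adaptation -- step size shrinks over time.
    best = min(pop, key=sphere)

    for _ in range(iters):
        new_pop = []
        for sol in pop:
            # Patterns 2 and 5: stochastic local search -- random perturbation around each solution.
            cand = [v + rng.gauss(0, step) for v in sol]
            new_pop.append(cand if sphere(cand) < sphere(sol) else sol)

        # Pattern 3: diversity maintenance -- replace the two worst solutions with fresh random ones.
        new_pop.sort(key=sphere)
        for i in range(pop_size - 2, pop_size):
            new_pop[i] = [rng.uniform(-5, 5) for _ in range(dim)]

        pop = new_pop
        best = min(pop + [best], key=sphere)
        step *= 0.99                # adaptation: gradually shift from exploration to exploitation

    return best, sphere(best)

if __name__ == "__main__":
    best, value = generic_heuristic()
    print("best value:", value)
```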
Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing opposing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm called Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach uses multi-objective optimization techniques to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to maintain optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
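The scalarized cost below is a minimal, hypothetical sketch of how replica placement can trade off latency, storage, and energy; the DataCenter fields, the weights, and the greedy top-k selection are illustrative assumptions and do not reproduce DMGO itself.

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    latency_ms: float      # average access latency seen by clients
    storage_cost: float    # relative cost per replica stored
    energy_cost: float     # relative energy consumed per replica

def weighted_cost(dc: DataCenter, w_lat=0.5, w_store=0.3, w_energy=0.2):
    """Scalarized multi-objective cost for hosting one replica (weights are assumptions)."""
    return w_lat * dc.latency_ms + w_store * dc.storage_cost + w_energy * dc.energy_cost

def place_replicas(centers, k):
    """Greedily pick the k data centers with the lowest combined cost."""
    ranked = sorted(centers, key=weighted_cost)
    return [dc.name for dc in ranked[:k]]

if __name__ == "__main__":
    centers = [
        DataCenter("dc-east", latency_ms=20, storage_cost=1.0, energy_cost=0.8),
        DataCenter("dc-west", latency_ms=35, storage_cost=0.7, energy_cost=0.6),
        DataCenter("dc-eu",   latency_ms=50, storage_cost=0.5, energy_cost=0.9),
    ]
    print(place_replicas(centers, k=2))
```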
Let p be a prime. For any finite p-group G, the deep transfers T_{H,G'}: H/H' → G'/G'' from the maximal subgroups H of index (G:H) = p in G to the derived subgroup G' are introduced as an innovative tool for identifying G uniquely by means of the family of kernels ϰ_d(G) = (ker(T_{H,G'}))_{(G:H)=p}. For all finite 3-groups G of coclass cc(G) = 1, the family ϰ_d(G) is determined explicitly. The results are applied to the Galois groups G = Gal(F_3^(∞)/F) of the Hilbert 3-class towers of all real quadratic fields F = Q(√d) with fundamental discriminants d > 1, 3-class group Cl_3(F) ≅ C_3 × C_3, and total 3-principalization in each of their four unramified cyclic cubic extensions E/F. A systematic statistical evaluation is given for the complete range 1 < d < 10^7, and a few exceptional cases are pointed out for 1 < d < 10^8.
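For readability, the deep transfer maps and the kernel family described above can be typeset as follows (a restatement of the abstract's notation; the symbol chosen for the kernel family is an assumption made here for illustration):

```latex
\[
  T_{H,G'}\colon H/H' \longrightarrow G'/G'', \qquad (G:H) = p,
\]
\[
  \varkappa_d(G) \;=\; \bigl(\ker(T_{H,G'})\bigr)_{(G:H)=p},
\]
% where $H$ ranges over the maximal subgroups of the finite $p$-group $G$,
% $G'$ denotes the derived subgroup of $G$, and $G''$ the derived subgroup of $G'$.
```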
Non-technical losses (NTL) of electric power are a serious problem for electric distribution companies. How this problem is addressed determines the cost, stability, reliability, and quality of the supplied electricity. The widespread use of advanced metering infrastructure (AMI) and the Smart Grid allows all participants in the distribution grid to store and track electricity consumption. In this research, a machine learning model is developed that analyzes and predicts the probability of NTL for each consumer of the distribution grid based on daily electricity consumption readings. This model is an ensemble meta-algorithm (stacking) that generalizes random forest, LightGBM, and a homogeneous ensemble of artificial neural networks. The superior accuracy of the proposed meta-algorithm over the base classifiers is experimentally confirmed on the test sample. Owing to its good accuracy (ROC-AUC = 0.88), the model can be used as a methodological basis for a decision support system whose purpose is to form a sample of suspected NTL sources. Such a sample will allow the top management of electric distribution companies to increase the efficiency of field inspections, making them targeted and accurate, which should contribute to the fight against NTL and the sustainable development of the electric power industry.
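A minimal sketch of the kind of stacking ensemble described above, using scikit-learn and LightGBM; the synthetic data, hyperparameters, and the logistic-regression meta-learner are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from lightgbm import LGBMClassifier

# Toy data: rows are consumers, columns are daily consumption readings,
# labels mark suspected non-technical losses (all values are synthetic).
rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=5.0, size=(500, 30))
y = rng.integers(0, 2, size=500)

# Homogeneous ensemble of small neural networks (soft voting over different seeds).
ann_ensemble = VotingClassifier(
    estimators=[(f"mlp{i}", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                          random_state=i)) for i in range(3)],
    voting="soft",
)

# Stacking meta-algorithm over random forest, LightGBM, and the ANN ensemble.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lgbm", LGBMClassifier(n_estimators=200, random_state=0)),
        ("ann", ann_ensemble),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=3,
)

stack.fit(X, y)
print("predicted NTL probabilities:", stack.predict_proba(X[:5])[:, 1])
```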
Two new regularization algorithms for solving the first-kind Volterra integral equation, which describes the pressure-rate deconvolution problem in well test data interpretation, are developed in this paper. The main features of the problem are the strongly nonuniform scale of the solution and large errors (up to 15%) in the input data. In both algorithms, the solution is represented as a decomposition over special basis functions that satisfy given a priori information on the solution; this idea allows us to significantly improve the quality of the approximate solution and to simplify the minimization problem. The theoretical details of the algorithms, as well as the results of numerical experiments demonstrating the robustness of the algorithms, are presented.
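The following is a minimal, hypothetical numpy sketch of solving a discretized first-kind Volterra equation with Tikhonov regularization; the kernel, the regularization operator, and the noise level are illustrative assumptions, and the sketch does not use the special basis functions developed in the paper.

```python
import numpy as np

# Discretize the first-kind Volterra equation  int_0^t K(t, s) x(s) ds = y(t)
# on a uniform grid with the rectangle rule; the lower-triangular A reflects causality.
n, T = 100, 1.0
t = np.linspace(0.0, T, n)
h = t[1] - t[0]
K = np.exp(-np.abs(t[:, None] - t[None, :]))      # assumed smooth kernel K(t, s)
A = np.tril(K) * h

x_true = np.sin(2 * np.pi * t) + 0.5              # assumed "true" solution
rng = np.random.default_rng(1)
y = A @ x_true
y_noisy = y + 0.15 * np.abs(y).max() * rng.standard_normal(n)   # roughly 15% noise

# Tikhonov regularization with a first-difference operator L to penalize roughness:
#   min ||A x - y||^2 + lam * ||L x||^2   =>   (A^T A + lam * L^T L) x = A^T y
L = np.eye(n) - np.eye(n, k=-1)
lam = 1e-2
x_reg = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ y_noisy)

print("relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```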
In this paper, a binary gravitational search algorithm (BGSA) is applied to solve the problem of optimal allotment of DG sets and shunt capacitors in radial distribution systems. The problem is formulated as a nonlinear constrained single-objective optimization problem in which the total line loss (TLL) and the total voltage deviation (TVD) are minimized separately by optimal placement of DG units and shunt capacitors, subject to constraints that include limits on voltage and on the sizes of the installed capacitors and DG. The BGSA is applied to the balanced IEEE 10-bus distribution network and the results are compared with conventional binary particle swarm optimization.
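For orientation, here is a compact, generic binary gravitational search sketch on a toy bit-string objective; the mass, force, and velocity updates and the tanh transfer function follow the common BGSA formulation, and the toy objective is only a stand-in for the TLL/TVD evaluation, which would require a power-flow model not shown here.

```python
import numpy as np

def bgsa(fitness, n_bits, n_agents=20, iters=100, g0=100.0, seed=0):
    """Generic binary gravitational search (minimization) on a toy bit-string problem."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_agents, n_bits)).astype(float)   # binary positions
    V = np.zeros((n_agents, n_bits))
    best_x, best_f = None, np.inf

    for t in range(iters):
        f = np.array([fitness(x) for x in X])
        if f.min() < best_f:
            best_f, best_x = f.min(), X[f.argmin()].copy()

        # Masses: better (smaller) fitness -> larger mass.
        worst, best = f.max(), f.min()
        m = (worst - f) / (worst - best + 1e-12)
        M = m / (m.sum() + 1e-12)

        G = g0 * np.exp(-20.0 * t / iters)        # gravitational "constant" decays over time
        acc = np.zeros_like(X)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                diff = X[j] - X[i]
                dist = np.linalg.norm(diff) + 1e-12
                acc[i] += rng.random() * G * M[j] * diff / dist

        # Velocity update and a tanh transfer function to flip bits probabilistically.
        V = rng.random(X.shape) * V + acc
        flip_prob = np.abs(np.tanh(V))
        flips = rng.random(X.shape) < flip_prob
        X = np.where(flips, 1.0 - X, X)

    return best_x, best_f

if __name__ == "__main__":
    # Toy objective standing in for the TLL/TVD evaluation (an assumption for illustration):
    target = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1], dtype=float)
    best_x, best_f = bgsa(lambda x: np.sum(np.abs(x - target)), n_bits=10)
    print(best_x, best_f)
```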
A new method is proposed to predict the fabric shearing property with least squares support vector machines (LS-SVM). A genetic algorithm is used to select the parameters of the LS-SVM models as a means of improving the LS-SVM prediction. After normalization, the sampling data are fed into the model to obtain the prediction result. The simulation results show that the prediction model gives better forecasting accuracy and generalization ability than the BP neural network and the linear regression method.
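A minimal sketch of the idea of tuning LS-SVM hyperparameters with a genetic algorithm; the RBF kernel, the regularization and width parameters, the tiny mutation-only GA, and the synthetic data are all assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """Least squares SVM regression: solve the linear KKT system for (b, alpha)."""
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]              # b, alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma):
    return rbf(X_new, X_train, sigma) @ alpha + b

def ga_tune(X_tr, y_tr, X_val, y_val, pop=20, gens=30, seed=0):
    """Tiny mutation-only GA over (log10 gamma, log10 sigma), minimizing validation MSE."""
    rng = np.random.default_rng(seed)
    genes = rng.uniform([-2, -2], [4, 2], size=(pop, 2))

    def mse(g):
        gamma, sigma = 10.0 ** g
        b, alpha = lssvm_fit(X_tr, y_tr, gamma, sigma)
        pred = lssvm_predict(X_tr, b, alpha, X_val, sigma)
        return float(((pred - y_val) ** 2).mean())

    for _ in range(gens):
        scores = np.array([mse(g) for g in genes])
        parents = genes[np.argsort(scores)[: pop // 2]]           # truncation selection
        children = parents + rng.normal(0, 0.2, parents.shape)    # Gaussian mutation
        genes = np.vstack([parents, children])
    scores = np.array([mse(g) for g in genes])
    return 10.0 ** genes[scores.argmin()]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(80, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)
    gamma, sigma = ga_tune(X[:60], y[:60], X[60:], y[60:])
    print("tuned gamma, sigma:", gamma, sigma)
```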
This study presents a novel hybrid topology optimization and mold design framework that integrates process fitting, runner system optimization, and structural analysis to significantly enhance the performance of injection-molded parts. At its core, the framework employs a greedy algorithm that generates runner systems based on adjacency and shortest path principles, leading to improvements in both mechanical strength and material efficiency. The design optimization is validated through a series of rigorous experimental tests, including three-point bending and torsion tests performed on key-socket frames, ensuring that the optimized designs meet practical performance requirements. A critical innovation of the framework is the development of the Adjacent Element Temperature-Driven Prestress Algorithm (AETDPA), which refines the prediction of mechanical failure and strength fitting. This algorithm has been shown to deliver mesh-independent accuracy, thereby enhancing the reliability of simulation results across various design iterations. The framework's adaptability is further demonstrated by its ability to adjust optimization methods based on the unique geometry of each part, thus accelerating the overall design process while ensuring structural integrity. In addition to its immediate applications in injection molding, the study explores the potential extension of this framework to metal additive manufacturing, opening new avenues for its use in advanced manufacturing technologies. Numerical simulations, including finite element analysis, support the experimental findings and confirm that the optimized designs provide a balanced combination of strength, durability, and efficiency. Furthermore, the integration challenges with existing injection molding practices are addressed, underscoring the framework's scalability and industrial relevance. Overall, this hybrid topology optimization framework offers a computationally efficient and robust solution for advanced manufacturing applications, promising significant improvements in design efficiency, cost-effectiveness, and product performance. Future work will focus on further enhancing algorithm robustness and exploring additional applications across diverse manufacturing processes.
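To make the "adjacency and shortest path" idea concrete, here is a minimal, hypothetical sketch of a greedy runner-routing step on a graph of candidate channel segments using networkx; the graph, edge weights, and gate names are illustrative assumptions, not the framework's actual data model.

```python
import networkx as nx

# Candidate runner segments as a weighted graph: nodes are junction points,
# edge weights approximate channel length (all values are assumptions).
G = nx.Graph()
edges = [("sprue", "a", 2.0), ("a", "b", 1.5), ("a", "c", 2.5),
         ("b", "gate1", 1.0), ("c", "gate2", 1.2), ("b", "c", 0.8)]
G.add_weighted_edges_from(edges)

gates = ["gate1", "gate2"]
runner = set()

# Greedy step: connect the sprue to each gate through the shortest available path;
# segments shared by several paths are stored only once in the runner set.
for gate in sorted(gates, key=lambda g: nx.shortest_path_length(G, "sprue", g, weight="weight")):
    path = nx.shortest_path(G, "sprue", gate, weight="weight")
    runner.update(zip(path[:-1], path[1:]))

total_length = sum(G[u][v]["weight"] for u, v in runner)
print("runner segments:", runner, "total length:", total_length)
```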
A contradiction has always existed in the variable step size least mean square (LMS) algorithm between fast convergence speed and small steady-state error. Therefore, a new algorithm based on a combination of logarithmic and sign (symbolic) functions with the step size factor is proposed. It establishes a new update rule in which the step size factor depends on both the previous step size factor and the error signal. The work makes an analysis from three aspects: theoretical analysis, theoretical verification, and specific experiments. The experimental results show that the proposed algorithm is superior to other variable step size algorithms in convergence speed and steady-state error.
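As a sketch of the general idea of an error-driven variable step size, the following toy LMS filter uses an assumed logarithmic rule mu(n) = beta * log(1 + alpha * |e(n)|); the rule, its parameters, and the system-identification setup are illustrative assumptions, not the exact update proposed in the paper.

```python
import numpy as np

def vss_lms(x, d, taps=8, alpha=5.0, beta=0.02, mu_max=0.1):
    """LMS with an assumed logarithmic, error-driven variable step size (a sketch)."""
    w = np.zeros(taps)
    e_hist = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1: n + 1][::-1]          # x[n], x[n-1], ..., x[n-taps+1]
        e = d[n] - w @ u                          # instantaneous error
        mu = min(beta * np.log(1.0 + alpha * abs(e)), mu_max)   # larger error -> larger step
        w = w + mu * e * u                        # standard LMS coefficient update
        e_hist[n] = e
    return w, e_hist

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h_true = np.array([0.8, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])   # assumed unknown system
    x = rng.standard_normal(5000)
    d = np.convolve(x, h_true)[: len(x)] + 0.01 * rng.standard_normal(len(x))
    w, _ = vss_lms(x, d)
    print("identified taps:", np.round(w, 3))
```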
Numerous cryptographic algorithms (ElGamal, Rabin, RSA, NTRU, etc.) require multiple computations of modular multiplicative inverses. This paper describes and validates a new algorithm, called the Enhanced Euclid Algorithm, for the modular multiplicative inverse (MMI). Analysis of the proposed algorithm shows that it is more efficient than the Extended Euclid Algorithm (XEA). In addition, if an MMI does not exist, then it is not necessary to use the backtracking procedure in the proposed algorithm; this case requires fewer operations at every step (divisions, multiplications, additions, assignments, and push operations on the stack) than the XEA. Overall, the XEA uses more multiplications, additions, and assignments, and twice as many variables, as the proposed algorithm.
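For reference, here is the standard extended Euclid computation of a modular multiplicative inverse, i.e., the XEA baseline that the paper compares against; the Enhanced Euclid Algorithm itself is not reproduced here.

```python
def xea_mod_inverse(a: int, m: int) -> int | None:
    """Return a^{-1} mod m via the iterative extended Euclid algorithm, or None if it does not exist."""
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:              # gcd(a, m) != 1 -> no inverse exists
        return None
    return old_s % m

if __name__ == "__main__":
    print(xea_mod_inverse(17, 3120))   # 2753, since 17 * 2753 = 1 (mod 3120)
    print(xea_mod_inverse(6, 9))       # None: gcd(6, 9) = 3
```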
The word "spatial" fundamentally relates to human existence, evolution, and activity in terrestrial and even celestial spaces. After reviewing the spatial features of many areas, the paper describes the basics of a high-level model and technology called Spatial Grasp for dealing with large distributed systems, which can provide spatial vision, awareness, management, control, and even consciousness. The technology description includes its key Spatial Grasp Language (SGL), the self-evolution of recursive SGL scenarios, and the implementation of an SGL interpreter that converts distributed networked systems into powerful spatial engines. Examples of typical spatial scenarios in SGL include finding the shortest path tree and the shortest path between network nodes, collecting proper information throughout the whole world, elimination of multiple targets by intelligent teams of chasers, and withstanding cyber attacks in distributed networked systems. The paper also compares the Spatial Grasp model with traditional algorithms, arguing that the former is universal for any spatial system, while the latter are just tools for concrete applications.
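On the traditional-algorithm side of that comparison, the shortest path tree example named above can be computed with the classical Dijkstra procedure; the toy network below is an assumption used purely for illustration.

```python
import networkx as nx

# Toy network (an assumption): weighted edges between nodes of a distributed system.
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1), ("A", "C", 4), ("B", "C", 2),
                           ("B", "D", 5), ("C", "D", 1)])

# Classical (non-SGL) computation of the shortest path tree rooted at node "A".
pred, dist = nx.dijkstra_predecessor_and_distance(G, "A")
spt_edges = [(p[0], v) for v, p in pred.items() if p]   # one predecessor per reached node
print("shortest path tree edges:", spt_edges)
print("distances from A:", dist)

# Single shortest path between two nodes.
print("A -> D:", nx.shortest_path(G, "A", "D", weight="weight"))
```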
In this paper we consider a parallel algorithm that detects the maximizer of a unimodal function f(x) computable at every point on the unbounded interval (0, ∞). The algorithm consists of two modes: scanning and detecting. Search diagrams are introduced as a way to describe parallel searching algorithms on unbounded intervals. Dynamic programming equations, combined with a series of linear programming problems, describe relations between results for every pair of successive evaluations of the function f in parallel. Properties of optimal search strategies are derived from these equations. The worst-case complexity analysis shows that, if the maximizer is located on an a priori unknown interval (n-1, n], then it can be detected after c_p(n) = ⌈2 log_{⌈p/2⌉+1}(n+1)⌉ - 1 parallel evaluations of f(x), where p is the number of processors.
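The worst-case bound can be evaluated numerically as follows; note that the placement of the ceilings is an assumption inferred from the abstract's partially garbled notation.

```python
import math

def c_p(n: int, p: int) -> int:
    """Worst-case parallel evaluations, c_p(n) = ceil(2 * log_{ceil(p/2)+1}(n+1)) - 1 (reconstructed form)."""
    base = math.ceil(p / 2) + 1
    return math.ceil(2 * math.log(n + 1, base)) - 1

# Example: with p = 4 processors the logarithm base is 3, so c_4(80) = ceil(2 * log_3(81)) - 1 = 7.
print(c_p(80, 4))
```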
This work proposes a novel approach for multi-type optimal placement of flexible AC transmission system (FACTS) devices so as to optimize a multi-objective voltage stability problem. The study discusses a way of locating and setting thyristor controlled series capacitors (TCSC) and static var compensators (SVC) using the multi-objective optimization approach named strength Pareto multi-objective evolutionary algorithm (SPMOEA). Maximization of the static voltage stability margin (SVSM) and minimization of the real power losses (RPL) and the load voltage deviation (LVD) are taken as the three objective functions when optimally locating multi-type FACTS devices. The performance and effectiveness of the proposed approach have been validated by simulation results on the IEEE 30-bus and IEEE 118-bus test systems. The proposed approach is compared with the non-dominated sorting particle swarm optimization (NSPSO) algorithm. This comparison confirms the usefulness of the proposed multi-objective technique and makes it promising for solving combinatorial problems of FACTS device location and setting in large-scale power systems.
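A small sketch of the Pareto-dominance bookkeeping that such multi-objective evolutionary methods rely on; the three objectives mirror the SVSM/RPL/LVD setup (SVSM negated so that everything is minimized), and the candidate values are made up for illustration.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives to be minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto front of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Candidates as (negated SVSM, RPL, LVD); SVSM is maximized, so it is negated (values are assumptions).
candidates = [(-0.40, 5.2, 0.08), (-0.35, 4.1, 0.06), (-0.45, 6.0, 0.09), (-0.30, 5.5, 0.12)]
print(non_dominated(candidates))   # the last candidate is dominated and drops out
```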
A non-orthogonal multiple access (NOMA) power allocation scheme based on the sparrow search algorithm (SSA) is proposed in this work. Specifically, a logarithmic utility function is utilized to address the potential fairness issue that may arise from a maximum sum-rate objective function, and the optical power constraints are set considering the non-negativity of the transmit signal, the requirement of human eye safety, and all users' quality of service (QoS). Then, the SSA is utilized to solve this optimization problem. Moreover, to demonstrate the superiority of the proposed strategy, it is compared with the fixed power allocation (FPA) and the gain ratio power allocation (GRPA) schemes. Results show that regardless of the number of users considered, the sum-rate achieved by SSA consistently outperforms that of the FPA and GRPA schemes. Specifically, compared to the FPA and GRPA schemes, the sum-rate obtained by SSA is increased by 40.45% and 53.44%, respectively, when the number of users is 7. The proposed SSA also has better performance in terms of user fairness. This work will benefit the design and development of NOMA-visible light communication (VLC) systems.
The Cross-domain Heuristic Search Challenge (CHeSC) is a competition focused on creating efficient search algorithms adaptable to diverse problem domains. Selection hyper-heuristics are a class of algorithms that dynamically choose heuristics during the search process. Numerous selection hyper-heuristics have different implementation strategies. However, comparisons between them are lacking in the literature, and previous works have not highlighted the beneficial and detrimental implementation methods of different components. The question is how to effectively employ them to produce an efficient search heuristic. Furthermore, the algorithms that competed in the inaugural CHeSC have not been collectively reviewed. This work conducts a review analysis of the top twenty competitors from this competition to identify effective and ineffective strategies influencing algorithmic performance. A summary of the main characteristics and classification of the algorithms is presented. The analysis underlines efficient and inefficient methods in eight key components, including search points, search phases, heuristic selection, move acceptance, feedback, Tabu mechanism, restart mechanism, and low-level heuristic parameter control. This review analyzes the components referencing the competition's final leaderboard and discusses future research directions for these components. The effective approaches, identified as having the highest quality index, are mixed search point, iterated search phases, relay hybridization selection, threshold acceptance, mixed learning, Tabu heuristics, stochastic restart, and dynamic parameters. Findings are also compared with recent trends in hyper-heuristics. This work enhances the understanding of selection hyper-heuristics, offering valuable insights for researchers and practitioners aiming to develop effective search algorithms for diverse problem domains.
AIM: To examine the practice pattern in Kaiser Permanente Southern California (KPSC), i.e., gastroenterology (GI)/surgery referrals and endoscopic ultrasound (EUS), for pancreatic cystic neoplasms (PCNs) after the region-wide dissemination of the PCN management algorithm. METHODS: Retrospective review was performed; patients with a PCN diagnosis given between April 2012 and April 2015 (18 mo before and after the publication of the algorithm) in KPSC (an integrated health system with 15 hospitals and 202 medical offices in Southern California) were identified. RESULTS: 2558 patients (1157 pre- and 1401 post-algorithm) received a new diagnosis of PCN in the study period. There was no difference in the mean cyst size (pre- 19.1 mm vs post- 18.5 mm, P = 0.119). A smaller percentage of PCNs resulted in EUS after the implementation of the algorithm (pre- 45.5% vs post- 34.8%, P < 0.001). A smaller proportion of patients were referred for GI (pre- 65.2% vs post- 53.3%, P < 0.001) and surgery consultations (pre- 24.8% vs post- 16%, P < 0.001) for PCN after the implementation. There was no significant change in operations for PCNs. The cost of diagnostic care was reduced after the implementation by 24%, 18%, and 36% for EUS, GI, and surgery consultations, respectively, with a total cost saving of 24%. CONCLUSION: In the current healthcare climate, there is an increased need to optimize resource utilization. Dissemination of an algorithm for PCN management in an integrated health system resulted in fewer EUS and GI/surgery referrals, likely by aiding the physicians ordering imaging studies in decision making for the management of PCNs. This translated to cost savings of 24%, 18%, and 36% for EUS, GI, and surgical consultations, respectively, with a total diagnostic cost saving of 24%.
In biology, signal transduction refers to a process by which a cell converts one kind of signal or stimulus into another. It involves ordered sequences of biochemical reactions inside the cell. These cascades of reactions are carried out by enzymes and activated by second messengers. Signal transduction pathways are complex in nature. Each pathway is responsible for tuning one or more biological functions in the intracellular environment, and more than one pathway may interact to carry out a single biological function. Such behavior makes these pathways difficult to understand. Hence, for the sake of simplicity, they need to be partitioned into smaller modules and then analyzed. We took the VEGF signaling pathway, which is responsible for angiogenesis, for this kind of modularized study. Modules were obtained by applying the algorithm of Nayak and De (Nayak and De, 2007) for different complexity values. These sets of modules were compared among themselves to obtain the best set of modules for an optimal complexity value. The best set of modules was compared with four different partitioning algorithms, namely Farhat's (Farhat, 1998), Greedy (Chartrand and Oellermann, 1993), Kernighan-Lin's (Kernighan and Lin, 1970), and Newman's community finding algorithm (Newman, 2006). These comparisons enabled us to decide which of the aforementioned algorithms was the best one to create partitions from the human VEGF signaling pathway. The optimal complexity value, at which the best set of modules was obtained, was used to derive modules from different species for a comparative study. Comparison among these modules sheds light on the trend of development of the VEGF signaling pathway across these species.
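As a sketch of the kind of graph partitioning used in the comparison, the following applies two of the named algorithm families, Kernighan-Lin bisection and a greedy modularity-based community search, to a toy graph via networkx; the graph is an assumption standing in for the pathway data, not the actual VEGF network.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection, greedy_modularity_communities

# Toy interaction-style graph (an assumption standing in for a pathway network).
G = nx.karate_club_graph()

# Kernighan-Lin: split the node set into two balanced parts with few crossing edges.
part_a, part_b = kernighan_lin_bisection(G, seed=0)
print("KL partition sizes:", len(part_a), len(part_b))

# Greedy modularity: find communities (modules) of variable size.
communities = greedy_modularity_communities(G)
print("modularity-based modules:", [sorted(c) for c in communities])
```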
The rapid expansion of Internet of Things (IoT) networks has introduced challenges in network management, primarily in maintaining energy efficiency and robust connectivity across an increasing array of devices. This paper introduces the Adaptive Blended Marine Predators Algorithm (AB-MPA), a novel optimization technique designed to enhance Quality of Service (QoS) in IoT systems by dynamically optimizing network configurations for improved energy efficiency and stability. Our results show significant improvements in network performance metrics such as energy consumption, throughput, and operational stability, indicating that AB-MPA effectively addresses the pressing needs of modern IoT environments. Nodes are initiated with 100 J of stored energy, and energy is consumed at 0.01 J per square meter in each node to emphasize energy-efficient networks. The algorithm also extends the network lifetime to 7000 cycles for up to 200 nodes, with a maximum Packet Delivery Ratio (PDR) of 99% and a robust network throughput of up to 1800 kbps in more compact node configurations. This study proposes a viable solution to a critical problem and opens avenues for further research into scalable network management for diverse applications.
The rapid advancement of 6G communication technologies and generative artificial intelligence (AI) is catalyzing a new wave of innovation at the intersection of networking and intelligent computing. On the one hand, 6G envisions a hyper-connected environment that supports ubiquitous intelligence through ultra-low latency, high throughput, massive device connectivity, and integrated sensing and communication. On the other hand, generative AI, powered by large foundation models, has emerged as a powerful paradigm capable of creating.
This study investigates how artificial intelligence (AI) algorithms enable mainstream media to achieve precise emotional matching and improve communication efficiency through reconstructed communication logic. As digital intelligence technology rapidly evolves, mainstream media organizations are increasingly leveraging AI-driven empathy algorithms to enhance audience engagement and optimize content delivery. This research employs a mixed-methods approach, combining quantitative analysis of algorithmic performance metrics with qualitative examination of media communication patterns. Through a systematic review of 150 academic papers and analysis of data from 12 major media platforms, this study reveals that algorithmic empathy systems can improve emotional resonance by 34.7% and increase audience engagement by 28.3% compared to traditional communication methods. The findings demonstrate that AI algorithms reconstruct media communication logic through three primary pathways: emotional pattern recognition, personalized content curation, and real-time sentiment adaptation. However, the study also identifies significant challenges including algorithmic bias, emotional authenticity concerns, and ethical implications of automated empathy. The research contributes to understanding how mainstream media can leverage AI technology to build high-quality empathetic communication while maintaining journalistic integrity and social responsibility.