The financial health of leading enterprises has a significant impact on the sustainable development of the global economy. Most data-driven financial health forecasts are based on the direct use of small-scale machine learning. In this study, we proposed the idea of optimization coupling learning to improve these machine learning models in financial health forecasting. It not only revealed the lagging, immediate, and continuous impacts of various indicators in different fiscal years, but also had the same low computational cost and complexity as known small-scale machine learning models. We used our optimization coupling learning to investigate 3424 leading enterprises in China and revealed the inner triggering mechanisms and differences of enterprises’ financial health status from individual behavior to the macro level. (Funding: the European Commission Horizon 2020 Framework Program No. 861584 and the Taishan Distinguished Professor Fund No. 20190910.)
This paper proposes an autonomous maneuver decision method using transfer learning pigeon-inspired optimization (TLPIO) for unmanned combat aerial vehicles (UCAVs) in dogfight engagements. Firstly, a nonlinear F-16 aircraft model and automatic control system are constructed on a MATLAB/Simulink platform. Secondly, a 3-degrees-of-freedom (3-DOF) aircraft model is used as a maneuvering command generator, and an expanded elemental maneuver library is designed so that the aircraft state reachable set can be obtained. Then, the game matrix is composed with the air combat situation evaluation function calculated according to the angle and range threats. Finally, and this is a key point, the objective function to be optimized is designed using the game mixed strategy, and the optimal mixed strategy is obtained by TLPIO. Notably, the proposed TLPIO does not initialize the population randomly, but adopts a transfer learning method based on Kullback-Leibler (KL) divergence to initialize the population, which improves the search accuracy of the optimization algorithm. Besides, the convergence and time complexity of TLPIO are discussed. Comparative analysis with other classical optimization algorithms highlights the advantage of TLPIO. In the simulation of air combat, three initial scenarios are set, namely opposite, offensive, and defensive conditions. The effectiveness of the proposed autonomous maneuver decision method is verified by the simulation results. (Funding: the Science and Technology Innovation 2030 Key Project of "New Generation Artificial Intelligence" (2018AAA0100803) and the National Natural Science Foundation of China (U20B2071, 91948204, T2121003, U1913602).)
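The KL-based initialization is only described at a high level above. As a rough, hypothetical Python sketch of the idea (the function names and the candidate-generation scheme are assumptions, not the paper's formulas): fit a Gaussian to solutions of a related source task, score candidate initial distributions by their KL divergence to it, and sample the starting population from the closest candidate instead of uniformly at random.

```python
import numpy as np

def kl_gaussian(m0, S0, m1, S1):
    """Closed-form KL divergence N(m0, S0) || N(m1, S1)."""
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def transfer_init(source_solutions, n_pigeons, bounds, n_candidates=20, seed=None):
    """Seed the swarm from the candidate Gaussian closest (in KL) to the
    distribution fitted on source-task solutions, instead of uniform random."""
    rng = np.random.default_rng(seed)
    m_src = source_solutions.mean(axis=0)
    S_src = np.cov(source_solutions.T) + 1e-6 * np.eye(source_solutions.shape[1])
    best, best_kl = None, np.inf
    for _ in range(n_candidates):
        m = m_src + rng.normal(0.0, 0.1, m_src.shape)   # perturbed mean
        S = rng.uniform(0.5, 2.0) * S_src               # rescaled covariance
        kl = kl_gaussian(m, S, m_src, S_src)
        if kl < best_kl:
            best, best_kl = (m, S), kl
    pop = rng.multivariate_normal(*best, size=n_pigeons)
    return np.clip(pop, bounds[0], bounds[1])           # respect the search box

# Toy usage: 2-D search space, source solutions clustered near (1, -1).
src = np.random.default_rng(0).normal([1.0, -1.0], 0.2, size=(50, 2))
print(transfer_init(src, n_pigeons=5, bounds=(-5.0, 5.0), seed=0).round(2))
```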
To maximize the power density of the electric propulsion motor in aerospace applications, this paper proposes a novel Dynamic Neighborhood Genetic Learning Particle Swarm Optimization (DNGL-PSO) for motor design, which can deal with the issues of insufficient population diversity and non-global optimal solutions. The DNGL-PSO framework is composed of a dynamic neighborhood module and a particle update module. To improve the population diversity, the dynamic neighborhood strategy is first proposed, which combines a local neighborhood exemplar generation mechanism and a shuffling mechanism. The local neighborhood exemplar generation mechanism enlarges the search range of the algorithm in the solution space, thus obtaining high-quality exemplars. Meanwhile, when the global optimal solution cannot update its fitness value, the shuffling mechanism is triggered to dynamically change the local neighborhood members. The roulette wheel selection operator is introduced into the shuffling mechanism to ensure that particles with larger fitness values are selected with a higher probability and remain in the local neighborhood. Then, a global-learning-based particle update approach is proposed, which achieves a good balance between expanding the search range in the early stage and accelerating local convergence in the later stage. Finally, the optimization design of the electric propulsion motor is conducted to verify the effectiveness of the proposed DNGL-PSO. The simulation results show that the proposed DNGL-PSO has excellent adaptability, optimization efficiency, and global optimization capability, while the optimized electric propulsion motor has a high power density of 5.207 kW/kg with an efficiency of 96.12%. (Funding: the National Natural Science Foundation of China (No. 52177028), the Aeronautical Science Foundation of China (No. 201907051002), the Fundamental Research Funds for the Central Universities, China (No. YWF21BJJ522), and the Major Program of the National Natural Science Foundation of China (No. 51890882).)
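The roulette wheel operator mentioned above is standard fitness-proportionate selection; a minimal numpy sketch for a maximization problem with non-negative fitness values:

```python
import numpy as np

def roulette_wheel_select(fitness, n_select, seed=None):
    """Fitness-proportionate selection: particles with larger fitness values
    are more likely to be kept in the local neighborhood (maximization,
    non-negative fitness assumed)."""
    rng = np.random.default_rng(seed)
    p = np.asarray(fitness, dtype=float)
    p = p / p.sum()                                  # normalize to probabilities
    return rng.choice(len(p), size=n_select, replace=False, p=p)

print(roulette_wheel_select([4.0, 1.0, 9.0, 2.0, 6.0], n_select=3, seed=42))
```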
Cryogenic ground support equipment (CGSE) is an important part of a famous particle physics experiment, AMS-02. In this paper, a design method that optimizes the PID parameters of the CGSE control system via the particle swarm optimization (PSO) algorithm is presented. Firstly, an improved version of the original PSO, cooperative random learning particle swarm optimization (CRPSO), is put forward to enhance the performance of the conventional PSO. Secondly, the way of finding the PID coefficients using this algorithm is studied. Finally, the experimental results and practical works demonstrate that the CRPSO-PID controller achieves a good performance. (Funding: the National Basic Research Program (973) of China (No. 2004CB720703).)
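CRPSO itself is not reproduced here, but a plain global-best PSO tuning PID gains against a simulated step response conveys the overall workflow; the first-order plant and the integral-of-absolute-error cost below are illustrative assumptions, not the CGSE dynamics.

```python
import numpy as np

def step_cost(gains, dt=0.01, T=5.0):
    """Integral of absolute error for a PID loop around a first-order plant
    dy/dt = (u - y) / tau tracking a unit step (illustrative plant)."""
    kp, ki, kd = gains
    tau, y, integ, prev_e, cost = 0.5, 0.0, 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        if not np.isfinite(y):
            return 1e9                       # penalize unstable gain sets
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (u - y) / tau
        cost += abs(e) * dt
    return cost

def pso(cost_fn, n=30, iters=80, lo=0.0, hi=10.0, seed=1):
    """Plain global-best PSO over (Kp, Ki, Kd); CRPSO's cooperative random
    learning mechanism is not reproduced here."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, 3))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([cost_fn(p) for p in x])
    g = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 3))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost_fn(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()]
    return g, pcost.min()

gains, iae = pso(step_cost)
print("Kp, Ki, Kd =", gains.round(2), " IAE =", round(iae, 4))
```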
Currently, the improvement in AI is mainly related to deep learning techniques that are employed for the classification, identification, and quantification of patterns in clinical images. Deep learning models show more remarkable performance than traditional methods for medical image processing tasks such as skin cancer, colorectal cancer, brain tumour, cardiac disease, breast cancer (BrC), and a few more. Manual diagnosis of medical issues always requires an expert and is also expensive. Therefore, developing computer diagnosis techniques based on deep learning is essential. Breast cancer is the most frequently diagnosed cancer in females, with a rapidly growing percentage. It is estimated that patients with BrC will rise to 70% in the next 20 years. If diagnosed at a later stage, the survival rate of patients with BrC is low. Hence, early detection is essential, increasing the survival rate to 50%. A new framework for BrC classification is presented that utilises deep learning and feature optimization. The significant steps of the presented framework include (i) hybrid contrast enhancement of acquired images, (ii) data augmentation to facilitate better learning of the Convolutional Neural Network (CNN) model, (iii) a pre-trained ResNet-101 model utilised and modified according to the selected dataset classes, (iv) deep transfer learning based model training for feature extraction, (v) the fusion of features using the proposed highly corrected function-controlled canonical correlation analysis approach, and (vi) optimal feature selection using the modified Satin Bowerbird Optimization controlled Newton-Raphson algorithm, with final classification by 10 machine learning classifiers. The experiments of the proposed framework were carried out using the most critical and publicly available dataset, CBIS-DDSM, and obtained a best accuracy of 94.5% along with improved computation time. The comparison depicts that the presented method surpasses the current state-of-the-art approaches. (Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number PNURSP2023R410, Riyadh, Saudi Arabia; also supported by MRC, UK (MC_PC_17171); Royal Society, UK (RP202G0230); BHF, UK (AA/18/3/34220); Hope Foundation for Cancer Research, UK (RM60G0680); GCRF, UK (P202PF11); Sino-UK Industrial Fund, UK (RP202G0289); LIAS, UK (P202ED10, P202RE969); Data Science Enhancement Fund, UK (P202RE237); Fight for Sight, UK (24NN201); Sino-UK Education Fund, UK (OP202006); and BBSRC, UK (RM32G0178B8).)
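Step (iii), modifying a pre-trained ResNet-101 for the selected dataset classes, typically reduces to swapping the final fully connected layer; a minimal PyTorch sketch (assuming torchvision >= 0.13; the two-class head is an assumption):

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-101 and freeze the backbone.
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False

# Replace the 1000-class ImageNet head with one matching the BrC dataset
# (the two-class benign/malignant setup here is an assumption).
model.fc = nn.Linear(model.fc.in_features, 2)
```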
With the increasing complexity and scale of hyperscale data centers, the requirement for intelligent, real-time power delivery has never been more critical to ensure uptime, energy efficiency, and sustainability. Traditional power-delivery techniques are typically static, reactive (CPU and workload scaling is applied to performance events that occur after a request has been submitted, and can thus be classified as a reactive response), and manually operated, and they cannot cope with the dynamic nature of the workloads, the distributed architectures, and the non-uniform energy sources in today’s data centers. In this paper, we elaborate on how artificial intelligence (AI) is revolutionizing power distribution in hyperscale data centers, making predictive load forecasting, real-time fault detection, and autonomous power optimization possible. We explain how machine learning (ML) and reinforcement learning (RL) based models have been introduced into power delivery networks (PDNs) for load balancing in three-phase systems, overprovisioning reduction, and energy flow optimization from the grid to the rack. The paper considers the architectural pieces of AI-led systems, such as data ingestion pipelines, anomaly detection frameworks, and control algorithms that manage power switching, cooling synchronization, and grid/microgrid interaction. Practical use cases show the value of these systems for PUE, infrastructure reliability, and environmental footprint. Key implementation challenges, including data quality, legacy system integration, and AI decision-making governance, are also discussed. Finally, the paper speculates on the future of autonomous data center power infrastructure, where AI becomes not only an assistive resource to the operator but takes end-to-end control over infrastructure behavior, from procuring energy to phase balancing to predicting maintenance. Integrating technology innovation with operational sustainability, AI-powered power distribution is emerging as a core competence for the Smart Digital Power Facility of the Future.
For training present Neural Network (NN) models, the standard technique is to utilize decaying Learning Rates (LR). While the majority of these techniques commence with a large LR, they decay it multiple times over time. Decaying has been proven to enhance generalization as well as optimization. Other parameters, such as the network’s size, the number of hidden layers, drop-outs to avoid overfitting, batch size, and so on, are chosen solely based on heuristics. This work proposes the Adaptive Teaching Learning Based (ATLB) Heuristic to identify the optimal hyperparameters for diverse networks. Here we consider three Deep Neural Network architectures for classification: Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Bidirectional Long Short-Term Memory (BiLSTM). The evaluation of the proposed ATLB is done with various learning rate schedulers: Cyclical Learning Rate (CLR), Hyperbolic Tangent Decay (HTD), and Toggle between Hyperbolic Tangent Decay and Triangular mode with Restarts (T-HTR). Experimental results show performance improvement on the 20Newsgroup, Reuters Newswire, and IMDB datasets.
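Of the schedulers listed, the triangular Cyclical Learning Rate has a simple closed form (Smith, 2017); a sketch with placeholder hyperparameters:

```python
import math

def triangular_clr(iteration, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Triangular CLR (Smith, 2017): the LR ramps linearly from base_lr to
    max_lr and back once every 2 * step_size iterations."""
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

for it in (0, 1000, 2000, 3000, 4000):
    print(it, round(triangular_clr(it), 5))
```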
Low-rank matrix recovery is an important problem extensively studied in the machine learning, data mining, and computer vision communities. A novel method is proposed for low-rank matrix recovery, targeting higher recovery accuracy and a stronger theoretical guarantee. Specifically, the proposed method is based on a nonconvex optimization model in which the low-rank matrix is recovered from the noisy observation. To solve the model, an effective algorithm is derived by minimizing over the variables alternately. It is proved theoretically that this algorithm has a stronger theoretical guarantee than the existing work. In natural image denoising experiments, the proposed method achieves lower recovery error than the two compared methods. The proposed low-rank matrix recovery method is also applied to solve two real-world problems, i.e., removing noise from verification codes and removing watermarks from images, in which the images recovered by the proposed method are less noisy than those of the two compared methods. (Funding: Projects 61173122 and 61262032 supported by the National Natural Science Foundation of China; Projects 11JJ3067 and 12JJ2038 supported by the Natural Science Foundation of Hunan Province, China.)
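The paper's specific nonconvex model is not reproduced here; as a generic instance of "minimizing over the variables alternately", the sketch below recovers a low-rank matrix from a noisy observation by alternating closed-form ridge updates of the factors U and V in M ≈ U V^T.

```python
import numpy as np

def altmin_recover(M, rank, lam=0.1, iters=50, seed=0):
    """Alternately solve for U and V in
    min ||M - U V^T||_F^2 + lam (||U||_F^2 + ||V||_F^2);
    each subproblem is a ridge regression with a closed-form solution."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.normal(size=(m, rank))
    V = rng.normal(size=(n, rank))
    I = lam * np.eye(rank)
    for _ in range(iters):
        U = M @ V @ np.linalg.inv(V.T @ V + I)
        V = M.T @ U @ np.linalg.inv(U.T @ U + I)
    return U @ V.T

# Toy test: rank-2 ground truth plus Gaussian noise.
rng = np.random.default_rng(1)
L = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 30))
X = altmin_recover(L + 0.1 * rng.normal(size=L.shape), rank=2)
print("relative error:", np.linalg.norm(X - L) / np.linalg.norm(L))
```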
In the Internet of Things (IoT), large amounts of data are processed and communicated through different network technologies. Wireless Body Area Networks (WBAN) play a pivotal role in the health care domain with an integration of IoT and Artificial Intelligence (AI). The amalgamation of the above-mentioned tools has reached a new peak in terms of the diagnosis and treatment process, especially in the pandemic period. But real challenges such as low latency, energy consumption, and high throughput still remain on the dark side of the research. This paper proposes a novel optimized cognitive-learning-based BAN model based on Fog-IoT technology as a real-time health monitoring system with increased network lifetime. Energy- and latency-aware features of the BAN have been extracted and used to train the proposed fog-based learning algorithm to achieve a low-energy-consumption, low-latency scheduling algorithm. To test the proposed network, a Fog-IoT-BAN test bed has been developed with battery-driven MICOTT boards interfaced with the health care sensors using MicroPython programming. Extensive experimentation is carried out using the above test beds, and various parameters such as accuracy, precision, recall, F1-score, and specificity have been calculated, along with QoS (quality of service) parameters such as latency, energy, and throughput. To prove the superiority of the proposed framework, its performance has been compared with other state-of-the-art classical learning frameworks and other existing Fog-BAN networks such as WORN, DARE, and L-No-DEAF networks. The results prove that the proposed framework outperforms the other classical learning models in terms of accuracy, False Alarm Rate (FAR), energy efficiency, and latency.
Millimeter wave communication works in the 30–300 GHz frequency range and can obtain a very high bandwidth, which greatly improves the transmission rate of the communication system; it has become one of the key technologies of fifth-generation (5G) systems. The smaller wavelength of the millimeter wave makes it possible to assemble a large number of antennas in a small aperture. The resulting array gain can compensate for the path loss of the millimeter wave. Utilizing this feature, the millimeter wave massive multiple-input multiple-output (MIMO) system uses a large antenna array at the base station. It enables the transmission of multiple data streams, giving the system a higher data transmission rate. In the millimeter wave massive MIMO system, precoding technology uses the channel state information to adjust the transmission strategy at the transmitting end, and the receiving end performs equalization, so that users can better obtain the antenna multiplexing gain and improve the system capacity. This paper proposes an efficient algorithm based on machine learning (ML) for effective system performance in mmWave massive MIMO systems. The main idea is to optimize the adaptive connection structure to maximize the received signal power of each user and correlate the RF chains and base station antennas. Simulation results show that the proposed algorithm effectively improves the system performance in terms of spectral efficiency and complexity as compared with existing algorithms. (Funding: Taif University Researchers Supporting Project Number TURSP-2020/260, Taif University, Taif, Saudi Arabia.)
A chest radiology scan can significantly aid the early diagnosis and management of COVID-19, since the virus attacks the lungs. Chest X-ray (CXR) gained much interest after the COVID-19 outbreak thanks to its rapid imaging time, widespread availability, low cost, and portability. In radiological investigations, computer-aided diagnostic tools are implemented to reduce intra- and inter-observer variability. Using recently industrialized Artificial Intelligence (AI) algorithms and radiological techniques to diagnose and classify disease is advantageous. The current study develops an automatic identification and classification model for CXR images using Gaussian Filtering based Optimized Synergic Deep Learning with the Remora Optimization Algorithm (GF-OSDL-ROA). This method comprises preprocessing and optimization-based classification. The data is preprocessed using Gaussian filtering (GF) to remove any extraneous noise from the image’s edges. Then, the OSDL model is applied to classify the CXRs under different severity levels based on CXR data. The learning rate of OSDL is optimized with the help of ROA for COVID-19 diagnosis, showing the novelty of the work. The OSDL model applied in this study was validated using the COVID-19 dataset. The experiments were conducted on the proposed OSDL model, which achieved a classification accuracy of 99.83%, while a standard Convolutional Neural Network achieved a lower classification accuracy of 98.14%.
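The GF preprocessing step corresponds to a standard Gaussian blur; with SciPy it is essentially a one-liner (the sigma value is a placeholder):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

cxr = np.random.default_rng(0).random((256, 256))  # stand-in for a CXR image
denoised = gaussian_filter(cxr, sigma=1.0)         # suppress high-frequency noise
```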
The exponential growth of the Internet of Things (IoT) and 5G networks has resulted in a massive number of users, and the role of cognitive radio has become pivotal in handling this crowding. In this scenario, cognitive radio techniques such as spectrum sensing, spectrum sharing, and dynamic spectrum access will become essential components in wireless IoT communication. IoT devices must learn adaptively from the environment and extract spectrum knowledge and inferred spectrum knowledge by appropriately changing communication parameters such as the modulation index, frequency bands, coding rate, etc., to accommodate the above characteristics. Implementing the above learning methods on an embedded chip leads to high latency, high power consumption, and more chip area utilisation. To overcome the problems mentioned above, we present DEEP HOLE Radio systems, an intelligent system enabling spectrum knowledge extraction from unprocessed samples by optimized deep learning models directly from the Radio Frequency (RF) environment. DEEP HOLE Radio provides (i) an optimized deep learning framework with a good trade-off between latency, power, and utilization, and (ii) a complete hardware-software architecture where the SoCs are coupled with radio transceivers for maximum performance. The experimentation has been carried out using GNU Radio software interfaced with Zynq-7000 devices mounted on ESP8266 radio transceivers with inbuilt omnidirectional antennas. The whole spectrum knowledge has been extracted using GNU Radio. These extracted features are used to train the proposed optimized deep learning models, which run in parallel on the Zynq-7000 SoC, consuming less area, power, and latency. The proposed framework has been evaluated and compared with existing frameworks such as RFLearn, Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Deep Neural Networks (DNN). The outcome shows that the proposed framework outperforms the existing frameworks regarding area, power, and time. Moreover, the experimental results show that the proposed framework decreases the delay, power, and area by 15%, 20%, and 25%, respectively, with respect to the existing RFLearn and other hardware-constrained frameworks.
Lung cancer is among the most frequent cancers in the world, with over one million deaths per year. Classification is required for lung cancer diagnosis and therapy to be effective, accurate, and reliable. Gene expression microarrays have made it possible to find genetic biomarkers for cancer diagnosis and prediction in a high-throughput manner. Machine Learning (ML) has been widely used to diagnose and classify lung cancer, where the performance of ML methods is evaluated to identify the appropriate technique. Identifying and selecting gene expression patterns can help in lung cancer diagnosis and classification. Normally, microarrays include several genes and may cause confusion or false prediction. Therefore, the Arithmetic Optimization Algorithm (AOA) is used to identify the optimal gene subset and reduce the number of selected genes, which allows the classifiers to yield the best performance for lung cancer classification. In addition, we propose a modified version of AOA that can work effectively on high-dimensional datasets. In the modified AOA, the features are ranked by their weights, which are used to initialize the AOA population. The exploitation process of AOA is then enhanced by developing a local search algorithm based on two neighborhood strategies. Finally, the efficiency of the proposed methods was evaluated on gene expression datasets related to lung cancer using stratified 4-fold cross-validation. The method’s efficacy in selecting the optimal gene subset is underscored by its ability to keep the proportion of selected features between 10% and 25%. Moreover, the approach significantly enhances lung cancer prediction accuracy. For instance, Lung_Harvard1 achieved an accuracy of 97.5%, the Lung_Harvard2 and Lung_Michigan datasets both achieved 100%, Lung_Adenocarcinoma obtained an accuracy of 88.2%, and Lung_Ontario achieved an accuracy of 87.5%. In conclusion, the results indicate the potential promise of the proposed modified AOA approach in classifying microarray cancer data. (Funding: the Deanship of Scientific Research at Imam Abdulrahman Bin Faisal University, Grant Number 2019-416-ASCS.)
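The weight-based initialization of the modified AOA is only sketched in the abstract; one plausible reading (the names and the probability mapping are assumptions) ranks features by a relevance score and biases the initial binary population toward top-ranked genes:

```python
import numpy as np

def ranked_init(weights, pop_size, seed=None):
    """Binary feature-selection population in which a feature's inclusion
    probability grows with its weight-based rank."""
    rng = np.random.default_rng(seed)
    ranks = np.argsort(np.argsort(weights))          # 0 = worst, n-1 = best
    p = 0.1 + 0.8 * ranks / (len(weights) - 1)       # map ranks to [0.1, 0.9]
    return (rng.random((pop_size, len(weights))) < p).astype(int)

weights = np.array([0.05, 0.8, 0.3, 0.9, 0.1])       # e.g., per-gene relevance
print(ranked_init(weights, pop_size=4, seed=0))
```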
The development of artificial intelligence for science has led to the emergence of learning-based research paradigms, necessitating a compelling reevaluation of the design of multi-objective optimization (MOO) methods. The new generation of MOO methods should be rooted in automated learning rather than manual design. In this paper, we introduce a new automatic learning paradigm for optimizing MOO problems and propose a multi-gradient learning to optimize (ML2O) method, which automatically learns a generator (or mapping) from multiple gradients to update directions. As a learning-based method, ML2O acquires knowledge of local landscapes by leveraging information from the current step and incorporates global experience extracted from historical iteration trajectory data. By introducing a new guarding mechanism, we propose a guarded multi-gradient learning to optimize (GML2O) method and prove that the iterative sequence generated by GML2O converges to a Pareto stationary point. The experimental results demonstrate that our learned optimizer outperforms hand-designed competitors on training a multi-task learning neural network. (Funding: the Major Program of the National Natural Science Foundation of China (Grant Nos. 11991020 and 11991024), the National Natural Science Foundation of China (Grant Nos. 11971084 and 12171060), the National Natural Science Foundation of China and Hong Kong Research Grants Council Joint Research Program (Grant No. 12261160365), the Team Project of Innovation Leading Talent in Chongqing (Grant No. CQYC20210309536), the Natural Science Foundation of Chongqing, China (Grant No. CSTB2024NSCQLZX0140), the Major Project of the Science and Technology Research Program of the Chongqing Education Commission of China (Grant No. KJZD-M202300504), and the Foundation of Chongqing Normal University (Grant Nos. 22XLB005 and 22XLB006).)
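For context on what the learned generator replaces: the classical hand-designed multi-gradient direction for two objectives is the minimum-norm element of the convex hull of the gradients (Désidéri's MGDA), which has a closed form. The sketch below is that baseline, not ML2O itself:

```python
import numpy as np

def mgda_direction(g1, g2):
    """Minimum-norm convex combination d = a*g1 + (1 - a)*g2 of two gradients.
    The optimum of min_a ||a*g1 + (1-a)*g2||^2 over [0, 1] is
    a* = clip((g2 - g1) . g2 / ||g1 - g2||^2, 0, 1); -d is a common descent
    direction for both objectives whenever d != 0."""
    diff = g1 - g2
    denom = diff @ diff
    a = 0.5 if denom == 0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return a * g1 + (1 - a) * g2

g1, g2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
d = mgda_direction(g1, g2)
print(d, d @ g1, d @ g2)  # non-negative inner product with both gradients
```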
Surface coating is a critical procedure in maintenance engineering. Ceramic coating of wear areas is among the best practices and substantially enhances the Mean Time Between Failure (MTBF). EN24 is a commercial grade alloy used for various industrial applications like sleeves, nuts, bolts, shafts, etc. EN24 has comparatively low corrosion resistance, and ceramic coating of the wear- and corrosion-prone areas of such parts is a widely followed practice that greatly reduces frequent failures. The coating quality mainly depends on the coating thickness, surface roughness, and coating hardness, which finally decide the operability. This paper describes an experimental investigation to effectively optimize the Atmospheric Plasma Spray process input parameters of Al2O3-40% TiO2 coatings to get the best quality of coating on an EN24 alloy steel substrate. The experiments are conducted with an Orthogonal Array (OA) design of experiments (DoE). In the current experiment, critical input parameters are considered, some of the vital output parameters are monitored accordingly, and separate mathematical models are generated using regression analysis. The Analytic Hierarchy Process (AHP) method is used to generate weights for the individual objective functions, and based on that, a combined objective function is formed. An advanced optimization method, the Teaching-Learning-Based Optimization algorithm (TLBO), is applied to the combined objective function to optimize the values of the input parameters and obtain the best output parameters. Confirmation tests are also conducted, and their output results are compared with predicted values obtained through the mathematical models. The dominating effects of the Al2O3-40% TiO2 spray parameters on the output parameters: surface roughness, coating thickness, and coating hardness, are discussed in detail. It is concluded that input parameter variation directly affects the characteristics of the output parameters, and any number of input as well as output parameters can be easily optimized using the current approach.
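The AHP weighting step is standard: the objective weights are the normalized principal eigenvector of a reciprocal pairwise comparison matrix. A numpy sketch with illustrative judgments over roughness, thickness, and hardness (the comparison values are placeholders):

```python
import numpy as np

def ahp_weights(pairwise):
    """Objective weights = normalized principal eigenvector of the reciprocal
    pairwise comparison matrix (Saaty's AHP)."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(principal)
    return w / w.sum()

# Illustrative judgments: roughness vs. thickness vs. hardness.
A = np.array([[1.0, 3.0, 2.0],
              [1 / 3, 1.0, 1 / 2],
              [1 / 2, 2.0, 1.0]])
print(ahp_weights(A).round(3))  # weights for the combined objective function
```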
SS304 is a commercial grade stainless steel used for various engineering applications like shafts, guides, jigs, fixtures, etc. Ceramic coating of the wear areas of such parts is a regular practice that significantly enhances the Mean Time Between Failure (MTBF). The final coating quality depends mainly on the coating thickness, surface roughness, and hardness, which ultimately decide the coating life. This paper presents an experimental study to effectively optimize the Atmospheric Plasma Spray (APS) process input parameters of Al2O3-40% TiO2 ceramic coatings to get the best quality of coating on a commercial SS304 substrate. The experiments are conducted with a three-level L18 Orthogonal Array (OA) Design of Experiments (DoE). The critical input parameters considered are: spray nozzle distance, substrate rotating speed, current of the arc, carrier gas flow, and coating powder flow rate. The surface roughness, coating thickness, and hardness are considered as the output parameters. Mathematical models are generated using regression analysis for the individual output parameters. The Analytic Hierarchy Process (AHP) method is applied to generate weights for the individual objective functions, and a combined objective function is generated. An advanced optimization method, the Teaching-Learning-Based Optimization algorithm (TLBO), is applied to the combined objective function to optimize the values of the input parameters, and confirmation tests are conducted based on that. The significant effects of the spray parameters on surface roughness, coating thickness, and coating hardness are studied in detail.
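TLBO itself is essentially parameter-free beyond population size and iteration count; a compact sketch of its teacher and learner phases minimizing a generic objective f (the sphere function stands in for the AHP-combined coating objective):

```python
import numpy as np

def tlbo(f, dim, lo, hi, pop=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    F = np.array([f(x) for x in X])
    for _ in range(iters):
        # Teacher phase: move everyone toward the best solution (the teacher).
        teacher, mean = X[F.argmin()], X.mean(axis=0)
        TF = rng.integers(1, 3)                       # teaching factor in {1, 2}
        Xn = np.clip(X + rng.random((pop, dim)) * (teacher - TF * mean), lo, hi)
        Fn = np.array([f(x) for x in Xn])
        imp = Fn < F
        X[imp], F[imp] = Xn[imp], Fn[imp]
        # Learner phase: each learner moves relative to a random peer.
        for i in range(pop):
            j = rng.integers(pop)
            step = (X[j] - X[i]) if F[j] < F[i] else (X[i] - X[j])
            xn = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            fn = f(xn)
            if fn < F[i]:
                X[i], F[i] = xn, fn
    return X[F.argmin()], F.min()

best, val = tlbo(lambda x: np.sum(x**2), dim=5, lo=-5.0, hi=5.0)
print(best.round(3), val)
```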
Purpose: This study aimed to enhance the prediction of container dwell time, a crucial factor for optimizing port operations, resource allocation, and supply chain efficiency. Determining an optimal learning rate for training Artificial Neural Networks (ANNs) has remained a challenging task due to the diverse sizes, complexity, and types of data involved. Design/Method/Approach: This research used the RandomizedSearchCV algorithm, a random search approach, to bridge this knowledge gap. The algorithm was applied to container dwell time data from the TOS system of the Port of Tema, which included 307,594 container records from 2014 to 2022. Findings: The RandomizedSearchCV method outperformed standard training methods both in terms of reducing training time and improving prediction accuracy, highlighting the significant role of the constant learning rate as a hyperparameter. Research Limitations and Implications: Although the study provides promising outcomes, the results are limited to the data extracted from the Port of Tema and may differ in other contexts. Further research is needed to generalize these findings across various port systems. Originality/Value: This research underscores the potential of RandomizedSearchCV as a valuable tool for optimizing ANN training for container dwell time prediction. It also accentuates the significance of automated learning rate selection, offering novel insights into the optimization of container dwell time prediction, with implications for improving port efficiency and supply chain operations.
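The port's actual feature set and network are not given in the abstract; a minimal scikit-learn sketch of RandomizedSearchCV over the constant learning rate, with synthetic regression data standing in for the dwell-time records:

```python
import numpy as np
from scipy.stats import loguniform
from sklearn.datasets import make_regression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for (container features -> dwell time in days).
X, y = make_regression(n_samples=2000, n_features=10, noise=5.0, random_state=0)

search = RandomizedSearchCV(
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0),
    param_distributions={"learning_rate_init": loguniform(1e-4, 1e-1)},
    n_iter=15, cv=3, scoring="neg_mean_squared_error", random_state=0,
)
search.fit(X, y)
print("best constant learning rate:", search.best_params_["learning_rate_init"])
```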
In the stability framework of model predictive control (MPC), the size of the stabilizable set (also known as the region of attraction) depends on the terminal constraint region. This article investigates the optimization of the terminal region for predictive control of a class of systems with multiplicative uncertainty, aiming to expand the region of attraction in MPC. By utilizing a coordinate transformation, we initially develop a structured design for the terminal ingredients while considering uncertainties in the parameters. Subsequently, we propose novel methods to convert the original nonlinear problem into a linear matrix inequality (LMI) problem with minimal conservatism in the formulation. We propose an iterative learning optimization approach to compute the polytopic terminal region, and its incremental volume is theoretically proven. The effectiveness of the proposed approaches is demonstrated using a benchmark academic example and vehicle lateral dynamics. Through real-time simulation experiments, we demonstrate that the proposed approach can enlarge the domain of attraction as well as reduce the computational complexity of robust MPC systems under parameter uncertainty.
Floating photovoltaic systems provide better land use and higher energy output through water cooling effects, but accurate power forecasting remains challenging due to complex environmental factors and measurement errors. This study presents an improved teaching-learning-based optimization algorithm with an extreme learning machine for floating photovoltaic power forecasting. The method uses an adaptive teaching factor that adjusts the balance between exploration and exploitation during optimization, replacing fixed teaching factors with continuous, iteration-based adjustment. The research evaluated the approach using comprehensive real data from a floating photovoltaic installation at Universiti Malaysia Pahang Al-Sultan Abdullah, Malaysia. The proposed method achieved superior forecasting accuracy compared to benchmark algorithms, including standard teaching-learning-based optimization with extreme learning machine, manta ray foraging optimization with extreme learning machine, moth flame optimization with extreme learning machine, ant colony optimization with extreme learning machine, and salp swarm algorithm with extreme learning machine. The improved teaching-learning-based optimization approach demonstrated a root mean squared error of 7.81 kW and a coefficient of determination of 0.9386, outperforming all comparison methods with statistically significant improvements. The algorithm showed faster convergence, enhanced stability, and superior computational efficiency while maintaining accuracy suitable for real-time grid integration applications. Phase current measurements were identified as the most important predictors for floating photovoltaic power forecasting. The system achieved high prediction accuracy, with most forecasts falling within the acceptable error tolerance, making the proposed approach a reliable solution for floating photovoltaic power forecasting that supports grid integration and renewable energy deployment. The methodology addresses the unique characteristics of aquatic solar installations while providing practical implementation viability for operational floating photovoltaic systems.
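Two of the moving parts are easy to make concrete: the extreme learning machine fits only its output weights, in closed form, and the adaptive teaching factor can vary continuously with the iteration count instead of being the usual random integer in {1, 2}. Both pieces below are schematic readings of the abstract, not the authors' exact formulas:

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, ridge=1e-3, seed=0):
    """Extreme learning machine: random hidden layer, ridge-solved output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                                   # random feature map
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def adaptive_teaching_factor(it, max_it, tf_min=1.0, tf_max=2.0):
    """Continuous, iteration-based teaching factor: one plausible schedule
    (explore early, exploit late), not the paper's exact formula."""
    return tf_min + (tf_max - tf_min) * it / max_it

# Toy usage: fit y = sin(x) and print the TF schedule endpoints.
x = np.linspace(-3, 3, 200)[:, None]
W, b, beta = elm_fit(x, np.sin(x).ravel())
print("train MSE:", np.mean((elm_predict(x, W, b, beta) - np.sin(x).ravel())**2))
print(adaptive_teaching_factor(0, 100), adaptive_teaching_factor(100, 100))
```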
基金supported by the European Commission Horizon 2020 Framework Program No.861584the Taishan Distinguished Professor Fund No.20190910.
文摘The financial health of leading enterprises has a significant impact on the sustainable development of the global economy.Most data-driven financial health forecasts are based on the direct use of small-scale machine learning.In this study,we proposed the idea of optimization coupling learning to improve these machine learning models in financial health forecasting.It not only revealed lagging,immediate,continuous impacts of various indicators in different fiscal year,but also had the same low computational cost and complexity as known small-scale machine learning models.We used our optimization coupling learning to investigate 3424 leading enterprises in China and revealed inner triggering mechanisms and differences of enterprises’financial health status from individual behavior to macro level.
基金the Science and Technology Innovation 2030-Key Project of“New Generation Artificial Intelligence”(2018AAA0100803)the National Natural Science Foundation of China(U20B2071,91948204,T2121003,U1913602)。
文摘This paper proposes an autonomous maneuver decision method using transfer learning pigeon-inspired optimization(TLPIO)for unmanned combat aerial vehicles(UCAVs)in dogfight engagements.Firstly,a nonlinear F-16 aircraft model and automatic control system are constructed by a MATLAB/Simulink platform.Secondly,a 3-degrees-of-freedom(3-DOF)aircraft model is used as a maneuvering command generator,and the expanded elemental maneuver library is designed,so that the aircraft state reachable set can be obtained.Then,the game matrix is composed with the air combat situation evaluation function calculated according to the angle and range threats.Finally,a key point is that the objective function to be optimized is designed using the game mixed strategy,and the optimal mixed strategy is obtained by TLPIO.Significantly,the proposed TLPIO does not initialize the population randomly,but adopts the transfer learning method based on Kullback-Leibler(KL)divergence to initialize the population,which improves the search accuracy of the optimization algorithm.Besides,the convergence and time complexity of TLPIO are discussed.Comparison analysis with other classical optimization algorithms highlights the advantage of TLPIO.In the simulation of air combat,three initial scenarios are set,namely,opposite,offensive and defensive conditions.The effectiveness performance of the proposed autonomous maneuver decision method is verified by simulation results.
基金supported by the National Natural Science Foundation of China(No.:52177028)Aeronautical Science Foundation of China(No.201907051002)+1 种基金the Fundamental Research Funds for the Central Universities,China(No.YWF21BJJ522)the Major Program of the National Natural Science Foundation of China(No.51890882).
文摘To maximize the power density of the electric propulsion motor in aerospace application,this paper proposes a novel Dynamic Neighborhood Genetic Learning Particle Swarm Optimization(DNGL-PSO)for the motor design,which can deal with the insufficient population diversity and non-global optimal solution issues.The DNGL-PSO framework is composed of the dynamic neighborhood module and the particle update module.To improve the population diversity,the dynamic neighborhood strategy is first proposed,which combines the local neighborhood exemplar generation mechanism and the shuffling mechanism.The local neighborhood exemplar generation mechanism enlarges the search range of the algorithm in the solution space,thus obtaining highquality exemplars.Meanwhile,when the global optimal solution cannot update its fitness value,the shuffling mechanism module is triggered to dynamically change the local neighborhood members.The roulette wheel selection operator is introduced into the shuffling mechanism to ensure that particles with larger fitness value are selected with a higher probability and remain in the local neighborhood.Then,the global learning based particle update approach is proposed,which can achieve a good balance between the expansion of the search range in the early stage and the acceleration of local convergence in the later stage.Finally,the optimization design of the electric propulsion motor is conducted to verify the effectiveness of the proposed DNGL-PSO.The simulation results show that the proposed DNGL-PSO has excellent adaptability,optimization efficiency and global optimization capability,while the optimized electric propulsion motor has a high power density of 5.207 kW/kg with the efficiency of 96.12%.
基金the National Basic Research Program (973) of China (No. 2004CB720703)
文摘Cryogenic ground support equipment (CGSE) is an important part of a famous particle physics experiment - AMS-02. In this paper a design method which optimizes PID parameters of CGSE control system via the particle swarm optimization (PSO) algorithm is presented. Firstly, an improved version of the original PSO, cooperative random learning particle swarm optimization (CRPSO), is put forward to enhance the performance of the conventional PSO. Secondly, the way of finding PID coefficient will be studied by using this algorithm. Finally, the experimental results and practical works demonstrate that the CRPSO-PID controller achieves a good performance.
基金Supporting Project number(PNURSP2023R410)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.supported by MRC,UK(MC_PC_17171)+9 种基金Royal Society,UK(RP202G0230)BHF,UK(AA/18/3/34220)Hope Foundation for Cancer Research,UK(RM60G0680)GCRF,UK(P202PF11)Sino‐UK Industrial Fund,UK(RP202G0289)LIAS,UK(P202ED10,P202RE969)Data Science Enhancement Fund,UK(P202RE237)Fight for Sight,UK(24NN201)Sino‐UK Education Fund,UK(OP202006)BBSRC,UK(RM32G0178B8).The funding of this work was provided by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2023R410),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Currently,the improvement in AI is mainly related to deep learning techniques that are employed for the classification,identification,and quantification of patterns in clinical images.The deep learning models show more remarkable performance than the traditional methods for medical image processing tasks,such as skin cancer,colorectal cancer,brain tumour,cardiac disease,Breast cancer(BrC),and a few more.The manual diagnosis of medical issues always requires an expert and is also expensive.Therefore,developing some computer diagnosis techniques based on deep learning is essential.Breast cancer is the most frequently diagnosed cancer in females with a rapidly growing percentage.It is estimated that patients with BrC will rise to 70%in the next 20 years.If diagnosed at a later stage,the survival rate of patients with BrC is shallow.Hence,early detection is essential,increasing the survival rate to 50%.A new framework for BrC classification is presented that utilises deep learning and feature optimization.The significant steps of the presented framework include(i)hybrid contrast enhancement of acquired images,(ii)data augmentation to facilitate better learning of the Convolutional Neural Network(CNN)model,(iii)a pre‐trained ResNet‐101 model is utilised and modified according to selected dataset classes,(iv)deep transfer learning based model training for feature extraction,(v)the fusion of features using the proposed highly corrected function‐controlled canonical correlation analysis approach,and(vi)optimal feature selection using the modified Satin Bowerbird Optimization controlled Newton Raphson algorithm that finally classified using 10 machine learning classifiers.The experiments of the proposed framework have been carried out using the most critical and publicly available dataset,such as CBISDDSM,and obtained the best accuracy of 94.5%along with improved computation time.The comparison depicts that the presented method surpasses the current state‐ofthe‐art approaches.
文摘With the increasing complexity and scale of hyperscale data centers,the requirement for intelligent,real-time power delivery has never been more critical to ensure uptime,energy efficiency,and sustainability.Those techniques are typically static,reactive(since CPU and workload scaling is applied to performance events that occur after a request has been submitted,and is thus can be classified as a reactive response.),and require manual operation,and cannot cope with the dynamic nature of the workloads,the distributed architectures as well as the non-uniform energy sources in today’s data centers.In this paper,we elaborate on how artificial intelligence(AI)is revolutionizing power distribution in hyperscale data centers,making predictive load forecasting,real-time fault detection,and autonomous power optimization possible.We explain how ML(machine learning)and RL(reinforcement learning)-based models have been introduced in PDN(power delivery networks)for load balancing in three-phase systems,overprovisioning reduction,and energy flow optimization from the grid to the rack.The paper considers the architectural pieces of the AI-led systems,such as data ingestion pipelines,anomaly detection frameworks,and control algorithms to manage the power switching,cooling synchronization,and grid/microgrid interaction.Practical use cases show the value of these systems on PUE,infrastructure reliability,and environmental footprint.Key implementation challenges,including data quality,legacy systemintegration,and AI decision-making governance,are also discussed.Last,the paper speculates on the future of autonomous DC power infrastructure where AI becomes not only an assistive resource to the operator but really takes control over infrastructure behavior end-to-end,from procuring energy,to phase balancing,to predicting maintenance.Integrating technology innovation with operational sustainability,AI-powered power distribution is emerging as a core competence for the Smart Digital Power Facility of the Future.
文摘For training the present Neural Network(NN)models,the standard technique is to utilize decaying Learning Rates(LR).While the majority of these techniques commence with a large LR,they will decay multiple times over time.Decaying has been proved to enhance generalization as well as optimization.Other parameters,such as the network’s size,the number of hidden layers,drop-outs to avoid overfitting,batch size,and so on,are solely based on heuristics.This work has proposed Adaptive Teaching Learning Based(ATLB)Heuristic to identify the optimal hyperparameters for diverse networks.Here we consider three architec-tures Recurrent Neural Networks(RNN),Long Short Term Memory(LSTM),Bidirectional Long Short Term Memory(BiLSTM)of Deep Neural Networks for classification.The evaluation of the proposed ATLB is done through the various learning rate schedulers Cyclical Learning Rate(CLR),Hyperbolic Tangent Decay(HTD),and Toggle between Hyperbolic Tangent Decay and Triangular mode with Restarts(T-HTR)techniques.Experimental results have shown the performance improvement on the 20Newsgroup,Reuters Newswire and IMDB dataset.
基金Projects(61173122,61262032) supported by the National Natural Science Foundation of ChinaProjects(11JJ3067,12JJ2038) supported by the Natural Science Foundation of Hunan Province,China
文摘Low-rank matrix recovery is an important problem extensively studied in machine learning, data mining and computer vision communities. A novel method is proposed for low-rank matrix recovery, targeting at higher recovery accuracy and stronger theoretical guarantee. Specifically, the proposed method is based on a nonconvex optimization model, by solving the low-rank matrix which can be recovered from the noisy observation. To solve the model, an effective algorithm is derived by minimizing over the variables alternately. It is proved theoretically that this algorithm has stronger theoretical guarantee than the existing work. In natural image denoising experiments, the proposed method achieves lower recovery error than the two compared methods. The proposed low-rank matrix recovery method is also applied to solve two real-world problems, i.e., removing noise from verification code and removing watermark from images, in which the images recovered by the proposed method are less noisy than those of the two compared methods.
文摘In Internet of Things (IoT), large amount of data are processed andcommunicated through different network technologies. Wireless Body Area Networks (WBAN) plays pivotal role in the health care domain with an integration ofIoT and Artificial Intelligence (AI). The amalgamation of above mentioned toolshas taken the new peak in terms of diagnosis and treatment process especially inthe pandemic period. But the real challenges such as low latency, energy consumption high throughput still remains in the dark side of the research. This paperproposes a novel optimized cognitive learning based BAN model based on FogIoT technology as a real-time health monitoring systems with the increased network-life time. Energy and latency aware features of BAN have been extractedand used to train the proposed fog based learning algorithm to achieve low energyconsumption and low-latency scheduling algorithm. To test the proposed network,Fog-IoT-BAN test bed has been developed with the battery driven MICOTTboards interfaced with the health care sensors using Micro Python programming.The extensive experimentation is carried out using the above test beds and variousparameters such as accuracy, precision, recall, F1score and specificity has beencalculated along with QoS (quality of service) parameters such as latency, energyand throughput. To prove the superiority of the proposed framework, the performance of the proposed learning based framework has been compared with theother state-of-art classical learning frameworks and other existing Fog-BAN networks such as WORN, DARE, L-No-DEAF networks. Results proves the proposed framework has outperformed the other classical learning models in termsof accuracy and high False Alarm Rate (FAR), energy efficiency and latency.
基金Taif University Researchers Supporting Project Number(TURSP-2020/260),Taif University,Taif,Saudi Arabia.
文摘Millimeter wave communication works in the 30–300 GHz frequency range,and can obtain a very high bandwidth,which greatly improves the transmission rate of the communication system and becomes one of the key technologies of fifth-generation(5G).The smaller wavelength of the millimeter wave makes it possible to assemble a large number of antennas in a small aperture.The resulting array gain can compensate for the path loss of the millimeter wave.Utilizing this feature,the millimeter wave massive multiple-input multiple-output(MIMO)system uses a large antenna array at the base station.It enables the transmission of multiple data streams,making the system have a higher data transmission rate.In the millimeter wave massive MIMO system,the precoding technology uses the state information of the channel to adjust the transmission strategy at the transmitting end,and the receiving end performs equalization,so that users can better obtain the antenna multiplexing gain and improve the system capacity.This paper proposes an efficient algorithm based on machine learning(ML)for effective system performance in mmwave massive MIMO systems.The main idea is to optimize the adaptive connection structure to maximize the received signal power of each user and correlate the RF chain and base station antenna.Simulation results show that,the proposed algorithm effectively improved the system performance in terms of spectral efficiency and complexity as compared with existing algorithms.
文摘A chest radiology scan can significantly aid the early diagnosis and management of COVID-19 since the virus attacks the lungs.Chest X-ray(CXR)gained much interest after the COVID-19 outbreak thanks to its rapid imaging time,widespread availability,low cost,and portability.In radiological investigations,computer-aided diagnostic tools are implemented to reduce intra-and inter-observer variability.Using lately industrialized Artificial Intelligence(AI)algorithms and radiological techniques to diagnose and classify disease is advantageous.The current study develops an automatic identification and classification model for CXR pictures using Gaussian Fil-tering based Optimized Synergic Deep Learning using Remora Optimization Algorithm(GF-OSDL-ROA).This method is inclusive of preprocessing and classification based on optimization.The data is preprocessed using Gaussian filtering(GF)to remove any extraneous noise from the image’s edges.Then,the OSDL model is applied to classify the CXRs under different severity levels based on CXR data.The learning rate of OSDL is optimized with the help of ROA for COVID-19 diagnosis showing the novelty of the work.OSDL model,applied in this study,was validated using the COVID-19 dataset.The experiments were conducted upon the proposed OSDL model,which achieved a classification accuracy of 99.83%,while the current Convolutional Neural Network achieved less classification accuracy,i.e.,98.14%.
文摘The exponential growth of Internet of Things(IoT)and 5G networks has resulted in maximum users,and the role of cognitive radio has become pivotal in handling the crowded users.In this scenario,cognitive radio techniques such as spectrum sensing,spectrum sharing and dynamic spectrum access will become essential components in Wireless IoT communication.IoT devices must learn adaptively to the environment and extract the spectrum knowledge and inferred spectrum knowledge by appropriately changing communication parameters such as modulation index,frequency bands,coding rate etc.,to accommodate the above characteristics.Implementing the above learning methods on the embedded chip leads to high latency,high power consumption and more chip area utilisation.To overcome the problems mentioned above,we present DEEP HOLE Radio sys-tems,the intelligent system enabling the spectrum knowledge extraction from the unprocessed samples by the optimized deep learning models directly from the Radio Frequency(RF)environment.DEEP HOLE Radio provides(i)an opti-mized deep learning framework with a good trade-off between latency,power and utilization.(ii)Complete Hardware-Software architecture where the SoC’s coupled with radio transceivers for maximum performance.The experimentation has been carried out using GNURADIO software interfaced with Zynq-7000 devices mounting on ESP8266 radio transceivers with inbuilt Omni direc-tional antennas.The whole spectrum of knowledge has been extracted using GNU radio.These extracted features are used to train the proposed optimized deep learning models,which run parallel on Zynq-SoC 7000,consuming less area,power,latency and less utilization area.The proposed framework has been evaluated and compared with the existing frameworks such as RFLearn,Long Term Short Memory(LSTM),Convolutional Neural Networks(CNN)and Deep Neural Networks(DNN).The outcome shows that the proposed framework has outperformed the existing framework regarding the area,power and time.More-over,the experimental results show that the proposed framework decreases the delay,power and area by 15%,20%25%concerning the existing RFlearn and other hardware constraint frameworks.
基金supported by the Deanship of Scientific Research,at Imam Abdulrahman Bin Faisal University.Grant Number:2019-416-ASCS.
文摘Lung cancer is among the most frequent cancers in the world,with over one million deaths per year.Classification is required for lung cancer diagnosis and therapy to be effective,accurate,and reliable.Gene expression microarrays have made it possible to find genetic biomarkers for cancer diagnosis and prediction in a high-throughput manner.Machine Learning(ML)has been widely used to diagnose and classify lung cancer where the performance of ML methods is evaluated to identify the appropriate technique.Identifying and selecting the gene expression patterns can help in lung cancer diagnoses and classification.Normally,microarrays include several genes and may cause confusion or false prediction.Therefore,the Arithmetic Optimization Algorithm(AOA)is used to identify the optimal gene subset to reduce the number of selected genes.Which can allow the classifiers to yield the best performance for lung cancer classification.In addition,we proposed a modified version of AOA which can work effectively on the high dimensional dataset.In the modified AOA,the features are ranked by their weights and are used to initialize the AOA population.The exploitation process of AOA is then enhanced by developing a local search algorithm based on two neighborhood strategies.Finally,the efficiency of the proposed methods was evaluated on gene expression datasets related to Lung cancer using stratified 4-fold cross-validation.The method’s efficacy in selecting the optimal gene subset is underscored by its ability to maintain feature proportions between 10%to 25%.Moreover,the approach significantly enhances lung cancer prediction accuracy.For instance,Lung_Harvard1 achieved an accuracy of 97.5%,Lung_Harvard2 and Lung_Michigan datasets both achieved 100%,Lung_Adenocarcinoma obtained an accuracy of 88.2%,and Lung_Ontario achieved an accuracy of 87.5%.In conclusion,the results indicate the potential promise of the proposed modified AOA approach in classifying microarray cancer data.
基金supported by the Major Program of National Natural Science Foundation of China(Grant Nos.11991020 and 11991024)National Natural Science Foundation of China(Grant Nos.11971084and 12171060)+4 种基金National Natural Science Foundation of China and Hong Kong Research Grants Council Joint Research Program(Grant No.12261160365)the Team Project of Innovation Leading Talent in Chongqing(Grant No.CQYC20210309536)the Natural Science Foundation of Chongqing of China(Grant No.CSTB2024NSCQLZX0140)the Major Project of Science and Technology Research Rrogram of Chongqing Education Commission of China(Grant No.KJZD-M202300504)the Foundation of Chongqing Normal University(Grant Nos.22XLB005 and 22XLB006)。
文摘The development of artificial intelligence for science has led to the emergence of learning-based research paradigms,necessitating a compelling reevaluation of the design of multi-objective optimization(MOO)methods.The new generation MOO methods should be rooted in automated learning rather than manual design.In this paper,we introduce a new automatic learning paradigm for optimizing MOO problems,and propose a multi-gradient learning to optimize(ML2O)method,which automatically learns a generator(or mappings)from multiple gradients to update directions.As a learning-based method,ML2O acquires knowledge of local landscapes by leveraging information from the current step and incorporates global experience extracted from historical iteration trajectory data.By introducing a new guarding mechanism,we propose a guarded multi-gradient learning to optimize(GML2O)method,and prove that the iterative sequence generated by GML2O converges to a Pareto stationary point.The experimental results demonstrate that our learned optimizer outperforms hand-designed competitors on training the multi-task learning neural network.
Abstract: Surface coating is a critical procedure in maintenance engineering. Ceramic coating of wear areas is a best practice that substantially enhances the Mean Time Between Failure (MTBF). EN24 is a commercial-grade alloy used for various industrial applications such as sleeves, nuts, bolts, and shafts. EN24 has comparatively low corrosion resistance, and ceramic coating of the wear- and corrosion-prone areas of such parts is a widely followed practice that greatly reduces frequent failures. The coating quality depends mainly on the coating thickness, surface roughness, and coating hardness, which ultimately determine operability. This paper describes an experimental investigation to effectively optimize the Atmospheric Plasma Spray process input parameters of Al₂O₃-40% TiO₂ coatings to obtain the best coating quality on an EN24 alloy steel substrate. The experiments are conducted with an Orthogonal Array (OA) design of experiments (DoE). In the current experiment, critical input parameters are considered, vital output parameters are monitored accordingly, and separate mathematical models are generated using regression analysis. The Analytic Hierarchy Process (AHP) method is used to generate weights for the individual objective functions, and based on these, a combined objective function is formed. An advanced optimization method, the Teaching-Learning-Based Optimization algorithm (TLBO), is applied to the combined objective function to optimize the values of the input parameters. Confirmation tests are also conducted, and their results are compared with the values predicted by the mathematical models. The dominating effects of the Al₂O₃-40% TiO₂ spray parameters on the output parameters (surface roughness, coating thickness, and coating hardness) are discussed in detail. It is concluded that variation in the input parameters directly affects the output characteristics, and that any number of input and output parameters can be optimized using the current approach.
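TLBO itself is a standard, parameter-light population method; the following is a minimal generic sketch of its teacher and learner phases for minimizing a scalar objective, with the bounds, population size, and iteration budget as placeholders rather than the paper's settings.

```python
import numpy as np

def tlbo(f, lb, ub, pop_size=30, iters=100, rng=None):
    """Generic Teaching-Learning-Based Optimization sketch (minimization)."""
    rng = np.random.default_rng(rng)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = rng.uniform(lb, ub, size=(pop_size, lb.size))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        # Teacher phase: pull the class toward the current best learner.
        teacher = X[fit.argmin()]
        tf = rng.integers(1, 3)  # teaching factor in {1, 2}
        X_new = np.clip(X + rng.random(X.shape) * (teacher - tf * X.mean(0)), lb, ub)
        fit_new = np.apply_along_axis(f, 1, X_new)
        better = fit_new < fit
        X[better], fit[better] = X_new[better], fit_new[better]
        # Learner phase: each learner interacts with a random partner.
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            step = (X[i] - X[j]) if fit[i] < fit[j] else (X[j] - X[i])
            cand = np.clip(X[i] + rng.random(lb.size) * step, lb, ub)
            fc = f(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc
    return X[fit.argmin()], fit.min()
```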
Abstract: SS304 is a commercial-grade stainless steel used for various engineering applications such as shafts, guides, jigs, and fixtures. Ceramic coating of the wear areas of such parts is a regular practice that significantly enhances the Mean Time Between Failure (MTBF). The final coating quality depends mainly on the coating thickness, surface roughness, and hardness, which ultimately determine the coating life. This paper presents an experimental study to effectively optimize the Atmospheric Plasma Spray (APS) process input parameters of Al₂O₃-40% TiO₂ ceramic coatings to obtain the best coating quality on a commercial SS304 substrate. The experiments are conducted with a three-level L₁₈ Orthogonal Array (OA) Design of Experiments (DoE). The critical input parameters considered are spray nozzle distance, substrate rotating speed, arc current, carrier gas flow, and coating powder flow rate; surface roughness, coating thickness, and hardness are the output parameters. Mathematical models are generated using regression analysis for the individual output parameters. The Analytic Hierarchy Process (AHP) method is applied to generate weights for the individual objective functions, and a combined objective function is formed. An advanced optimization method, the Teaching-Learning-Based Optimization algorithm (TLBO), is applied to the combined objective function to optimize the values of the input parameters, and confirmation tests are conducted on that basis. The significant effects of the spray parameters on surface roughness, coating thickness, and coating hardness are studied in detail.
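As a small illustration of the AHP step, the weights can be taken from the principal eigenvector of a pairwise-comparison matrix and then used in a weighted combined objective; the comparison values and the sign convention below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical AHP pairwise comparisons for (roughness, thickness, hardness).
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])

# The principal eigenvector of A gives the criteria weights.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, vals.real.argmax()])
w = w / w.sum()  # normalized positive weights summing to 1

def combined_objective(roughness, thickness, hardness):
    """Weighted sum: roughness minimized, thickness and hardness maximized
    (the signs are our assumption about the formulation)."""
    return w[0] * roughness - w[1] * thickness - w[2] * hardness
```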
Abstract: Purpose: This study aimed to enhance the prediction of container dwell time, a crucial factor for optimizing port operations, resource allocation, and supply chain efficiency. Determining an optimal learning rate for training Artificial Neural Networks (ANNs) remains challenging due to the diverse sizes, complexity, and types of data involved. Design/Method/Approach: This research used the RandomizedSearchCV algorithm, a random search approach, to bridge this knowledge gap. The algorithm was applied to container dwell time data from the TOS system of the Port of Tema, comprising 307,594 container records from 2014 to 2022. Findings: The RandomizedSearchCV method outperformed standard training methods in both reducing training time and improving prediction accuracy, highlighting the significant role of the constant learning rate as a hyperparameter. Research Limitations and Implications: Although the study provides promising outcomes, the results are limited to the data extracted from the Port of Tema and may differ in other contexts. Further research is needed to generalize these findings across various port systems. Originality/Value: This research underscores the potential of RandomizedSearchCV as a valuable tool for optimizing ANN training in container dwell time prediction. It also accentuates the significance of automated learning rate selection, offering novel insights into the optimization of container dwell time prediction, with implications for improving port efficiency and supply chain operations.
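RandomizedSearchCV is the scikit-learn estimator of the same name; a minimal sketch of using it to sample constant learning rates for an MLP looks like the following, with the network size, the search space, and the synthetic data standing in for the study's actual configuration.

```python
import numpy as np
from scipy.stats import loguniform
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPRegressor

# Placeholder data standing in for the dwell-time features and targets.
rng = np.random.default_rng(0)
X = rng.random((500, 8))
y = rng.random(500)

search = RandomizedSearchCV(
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500),
    param_distributions={"learning_rate_init": loguniform(1e-4, 1e-1)},
    n_iter=20,  # number of randomly sampled learning rates
    cv=3,
    scoring="neg_mean_squared_error",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)  # the constant learning rate that scored best
```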
Funding: Supported by the Fundamental Research Funds for the Central Universities (Grant No. JUSRP202501133).
Abstract: In the stability framework of model predictive control (MPC), the size of the stabilizable set (also known as the region of attraction) depends on the terminal constraint region. This article investigates the optimization of the terminal region for predictive control of a class of systems with multiplicative uncertainty, with the goal of enlarging the region of attraction in MPC. By utilizing a coordinate transformation, we first develop a structured design for the terminal ingredients while accounting for parameter uncertainties. Subsequently, we propose novel methods to convert the original nonlinear problem into a linear matrix inequality (LMI) problem with minimal conservatism in the formulation. We propose an iterative learning optimization approach to compute the polytopic terminal region and theoretically prove that its volume increases monotonically. The effectiveness of the proposed approaches is demonstrated on a benchmark academic example and on vehicle lateral dynamics. Through real-time simulation experiments, we show that the proposed approach can enlarge the domain of attraction while reducing the computational complexity of robust MPC systems under parameter uncertainty.
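As background on the LMI machinery (not the paper's specific construction), an invariant ellipsoidal set for a polytopic uncertain system can be computed as a convex problem; the sketch below uses cvxpy with placeholder vertex matrices and a unit-box state constraint.

```python
import cvxpy as cp
import numpy as np

# Placeholder closed-loop vertices of a polytopic uncertain system x+ = A(k) x.
A1 = np.array([[0.9, 0.2], [0.0, 0.8]])
A2 = np.array([[0.8, 0.3], [-0.1, 0.9]])

n = 2
Q = cp.Variable((n, n), symmetric=True)  # ellipsoid {x : x' Q^{-1} x <= 1}
cons = [Q >> 1e-6 * np.eye(n)]
for A in (A1, A2):
    # Invariance at each vertex, A' Q^{-1} A <= Q^{-1}, via Schur complement.
    cons.append(cp.bmat([[Q, Q @ A.T], [A @ Q, Q]]) >> 0)
cons += [Q[i, i] <= 1 for i in range(n)]  # keep the set inside |x_i| <= 1

# Maximize the ellipsoid volume (log det Q is concave, so this is convex).
cp.Problem(cp.Maximize(cp.log_det(Q)), cons).solve()
print(Q.value)
```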
Funding: Supported by the Ministry of Higher Education Malaysia (MOHE) under the Fundamental Research Grant Scheme (FRGS/1/2022/ICT04/UMP/02/1).
Abstract: Floating photovoltaic systems provide better land use and higher energy output through water cooling effects, but accurate power forecasting remains challenging due to complex environmental factors and measurement errors. This study presents an improved teaching-learning-based optimization algorithm with an extreme learning machine for floating photovoltaic power forecasting. The method uses an adaptive teaching factor that adjusts the balance between exploration and exploitation during optimization, replacing fixed teaching factors with continuous, iteration-based adjustment. The approach was evaluated on comprehensive real data from a floating photovoltaic installation at Universiti Malaysia Pahang Al-Sultan Abdullah, Malaysia. The proposed method achieved superior forecasting accuracy compared to benchmark algorithms, including standard teaching-learning-based optimization, manta ray foraging optimization, moth flame optimization, ant colony optimization, and the salp swarm algorithm, each combined with an extreme learning machine. The improved teaching-learning-based optimization approach achieved a root mean squared error of 7.81 kW and a coefficient of determination of 0.9386, outperforming all comparison methods with statistically significant improvements. The algorithm showed faster convergence, enhanced stability, and superior computational efficiency while maintaining accuracy suitable for real-time grid integration applications. Phase current measurements were identified as the most important predictors for floating photovoltaic power forecasting. The system achieved high prediction accuracy, with most forecasts falling within acceptable error tolerance, making the proposed approach a reliable solution for floating photovoltaic power forecasting that supports grid integration and renewable energy deployment. The methodology addresses the unique characteristics of aquatic solar installations while remaining practical to implement for operational floating photovoltaic systems.
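The abstract reports results rather than formulas, so the following is only a guess at the two ingredients: an iteration-dependent teaching factor that shifts TLBO from exploration toward exploitation, and a basic extreme learning machine whose output weights are obtained analytically by pseudo-inverse. The schedule and all names are our assumptions.

```python
import numpy as np

def adaptive_tf(it, max_it, tf_min=1.0, tf_max=2.0):
    """Continuous teaching factor that grows with the iteration count,
    shifting TLBO from exploration toward exploitation (assumed schedule)."""
    return tf_min + (tf_max - tf_min) * it / max_it

def elm_fit(X, y, n_hidden=50, rng=None):
    """Basic extreme learning machine: random hidden layer, analytic output."""
    rng = np.random.default_rng(rng)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```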