Funding: Taif University Researchers Supporting Project Number (TURSP-2020/260), Taif University, Taif, Saudi Arabia.
Abstract: Millimeter wave communication operates in the 30–300 GHz frequency range, where very wide bandwidths are available; this greatly improves the transmission rate of a communication system and has made millimeter wave one of the key technologies of fifth-generation (5G) networks. The short wavelength of the millimeter wave makes it possible to pack a large number of antennas into a small aperture, and the resulting array gain can compensate for the severe path loss at millimeter-wave frequencies. Exploiting this property, a millimeter wave massive multiple-input multiple-output (MIMO) system uses a large antenna array at the base station, enabling the transmission of multiple data streams and hence a higher data rate. In a millimeter wave massive MIMO system, precoding uses channel state information to adjust the transmission strategy at the transmitter, while the receiver performs equalization, so that users can better obtain the multiplexing gain of the antenna array and the system capacity is improved. This paper proposes an efficient algorithm based on machine learning (ML) for effective system performance in mmWave massive MIMO systems. The main idea is to optimize an adaptive connection structure that maximizes the received signal power of each user and associates RF chains with base station antennas. Simulation results show that the proposed algorithm effectively improves system performance in terms of spectral efficiency and complexity compared with existing algorithms.
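To make the adaptive-connection idea concrete, the sketch below greedily assigns base-station antennas to RF chains so that each user's collected channel power is large. It is a minimal Python illustration, assuming a known channel matrix H and one RF chain per user; the greedy quota rule is our own stand-in, not the paper's exact algorithm.

```python
import numpy as np

def greedy_antenna_assignment(H, n_rf):
    """Assign each base-station antenna to one RF chain (user) so that the
    channel power collected per user is large.  H has shape (n_users, n_antennas);
    this sketch assumes n_rf equals the number of users (one stream per user)."""
    n_users, n_ant = H.shape
    assert n_rf == n_users, "one RF chain per user assumed in this sketch"
    gains = np.abs(H) ** 2                      # power contribution of each antenna to each user
    assignment = -np.ones(n_ant, dtype=int)     # antenna -> RF chain / user index
    quota = n_ant // n_users                    # keep per-user antenna counts balanced
    counts = np.zeros(n_users, dtype=int)
    order = np.argsort(-gains.max(axis=0))      # strongest antennas first
    for a in order:
        for u in np.argsort(-gains[:, a]):      # best user for this antenna first
            if counts[u] < quota:
                assignment[a] = u
                counts[u] += 1
                break
    received_power = np.array([gains[u, assignment == u].sum() for u in range(n_users)])
    return assignment, received_power

# Toy usage: 4 users, 4 RF chains, 64 base-station antennas.
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 64)) + 1j * rng.standard_normal((4, 64))) / np.sqrt(2)
assignment, power = greedy_antenna_assignment(H, n_rf=4)
print(assignment[:8], power)
```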
Funding: Supported by the Deanship of Scientific Research at Imam Abdulrahman Bin Faisal University, Grant Number 2019-416-ASCS.
Abstract: Lung cancer is among the most frequent cancers in the world, with over one million deaths per year. Classification is required for lung cancer diagnosis and therapy to be effective, accurate, and reliable. Gene expression microarrays have made it possible to find genetic biomarkers for cancer diagnosis and prediction in a high-throughput manner. Machine Learning (ML) has been widely used to diagnose and classify lung cancer, where the performance of ML methods is evaluated to identify the most appropriate technique. Identifying and selecting informative gene expression patterns can help in lung cancer diagnosis and classification. Microarrays normally include a very large number of genes, which may cause confusion or false predictions. Therefore, the Arithmetic Optimization Algorithm (AOA) is used to identify the optimal gene subset and reduce the number of selected genes, which allows the classifiers to yield the best performance for lung cancer classification. In addition, we propose a modified version of AOA that works effectively on high-dimensional datasets. In the modified AOA, the features are ranked by their weights, and this ranking is used to initialize the AOA population. The exploitation process of AOA is then enhanced by a local search algorithm based on two neighborhood strategies. Finally, the efficiency of the proposed methods was evaluated on gene expression datasets related to lung cancer using stratified 4-fold cross-validation. The method's efficacy in selecting the optimal gene subset is underscored by its ability to keep the proportion of selected features between 10% and 25%. Moreover, the approach significantly enhances lung cancer prediction accuracy: Lung_Harvard1 achieved an accuracy of 97.5%, Lung_Harvard2 and Lung_Michigan both achieved 100%, Lung_Adenocarcinoma obtained 88.2%, and Lung_Ontario achieved 87.5%. In conclusion, the results indicate the promise of the proposed modified AOA approach for classifying microarray cancer data.
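A minimal sketch of the weight-based initialization described above: genes are ranked by a filter score (ANOVA F-score is used here as a stand-in for the paper's weights), and the binary AOA population is sampled so that highly ranked genes are more likely to be selected. The 5%–50% probability range is an assumption chosen for illustration.

```python
import numpy as np
from sklearn.feature_selection import f_classif

def weighted_init_population(X, y, pop_size, rng=None):
    """Initialize a binary AOA population biased toward highly ranked genes.
    Genes are ranked by ANOVA F-score (a stand-in for the paper's weights);
    a gene's selection probability grows with its rank."""
    rng = rng or np.random.default_rng()
    scores, _ = f_classif(X, y)                 # filter score per gene
    scores = np.nan_to_num(scores)              # guard against zero-variance genes
    ranks = scores.argsort().argsort()          # 0 = worst, n-1 = best
    prob = 0.05 + 0.45 * ranks / ranks.max()    # keep roughly 5%-50% of genes
    return (rng.random((pop_size, X.shape[1])) < prob).astype(int)

# Toy usage with random "expression" data: 40 samples, 500 genes, 2 classes.
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 500))
y = rng.integers(0, 2, size=40)
pop = weighted_init_population(X, y, pop_size=20, rng=rng)
print(pop.shape, pop.mean())                    # average selection rate
```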
Funding: Supported by the Action Programme for Cultivation of Young and Middle-aged Teachers in Universities in Anhui Province (YQYB2023030), China; the Supporting Programme for Outstanding Young Talents in Colleges and Universities of Anhui Provincial Department of Education (gxyq2022068), China; the Huainan Normal University Scientific Research Project (2023XJZD016), China; and the Key Projects of Huainan Normal University (2024XJZD012), China.
Abstract: An accurate assessment of the state of health (SOH) is the cornerstone for guaranteeing the long-term stable operation of electrical equipment. However, the noise carried by data collected during cyclic aging poses a severe challenge to the accuracy of SOH estimation and the generalization ability of the model. To this end, this paper proposes a novel SOH estimation model for lithium-ion batteries that incorporates advanced signal-processing techniques and optimized machine-learning strategies. The model employs the whale optimization algorithm (WOA) to seek the optimal parameter combination (K, α) for the variational mode decomposition (VMD) method, ensuring that the signals are accurately decomposed into modes representing the SOH of the batteries. Then, the strong local feature extraction capability of a convolutional neural network (CNN) is used to obtain the critical features of each mode. Finally, a support vector machine (SVM) is selected as the final SOH regressor because of its generalization ability and efficient performance on small datasets. The proposed method was validated on two classes of publicly available lithium-ion battery aging datasets covering different temperatures, discharge rates, and depths of discharge. The results show that the WOA-VMD-based data processing technique effectively resolves the interference of cyclic aging noise with SOH estimation, and the CNN-SVM machine learning method significantly improves the accuracy of SOH estimation. Compared with traditional techniques, the fused algorithm achieves significant improvements in suppressing data noise, improving SOH estimation accuracy, and enhancing generalization ability.
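The sketch below shows how a stripped-down WOA can search for the VMD parameter pair (K, α). The decomposition-quality fitness is left as a placeholder bowl-shaped function, since the actual criterion (for example, envelope entropy of the VMD modes) is not given in the abstract; the spiral-update branch of the full WOA is also omitted for brevity.

```python
import numpy as np

def woa_search(fitness, bounds, n_whales=10, n_iter=30, seed=0):
    """Minimal whale optimization algorithm (encircling/shrinking moves only),
    used here to pick the VMD parameters (K, alpha) that minimize `fitness`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pos = lo + rng.random((n_whales, len(lo))) * (hi - lo)
    best = min(pos, key=fitness).copy()
    best_fit = fitness(best)
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                     # linearly decreasing coefficient
        for i in range(n_whales):
            A = 2 * a * rng.random(len(lo)) - a
            C = 2 * rng.random(len(lo))
            if np.all(np.abs(A) < 1):              # exploitation: encircle the best whale
                pos[i] = best - A * np.abs(C * best - pos[i])
            else:                                  # exploration: follow a random whale
                rand = pos[rng.integers(n_whales)]
                pos[i] = rand - A * np.abs(C * rand - pos[i])
            pos[i] = np.clip(pos[i], lo, hi)
            f = fitness(pos[i])
            if f < best_fit:
                best, best_fit = pos[i].copy(), f
    return best

def vmd_quality(params):
    """Placeholder fitness: in the paper's pipeline this would run VMD with
    K = round(params[0]) modes and penalty alpha = params[1] and score the
    decomposition; here it is a dummy bowl with its minimum at K=6, alpha=2000."""
    K, alpha = round(params[0]), params[1]
    return (K - 6) ** 2 + ((alpha - 2000) / 500) ** 2

best = woa_search(vmd_quality, bounds=[(2, 10), (100, 4000)])
print("chosen K ~", round(best[0]), ", alpha ~", round(best[1]))
```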
Funding: Projects (61573144, 61773165, 61673175, 61174040) supported by the National Natural Science Foundation of China; Project (222201717006) supported by the Fundamental Research Funds for the Central Universities, China.
Abstract: The hybrid flow shop scheduling problem with unrelated parallel machines is a typical NP-hard combinatorial optimization problem that arises widely in the chemical, manufacturing, and pharmaceutical industries. In this work, a novel mathematical model for the hybrid flow shop scheduling problem with unrelated parallel machines (HFSPUPM) was proposed. Additionally, an effective hybrid estimation of distribution algorithm (EDA) was proposed to solve the HFSPUPM, taking advantage of the features of the mathematical model. In the optimization algorithm, a new individual representation method was adopted: the EDA structure was used for global search, while the teaching-learning-based optimization (TLBO) strategy was used for local search. Based on the structure of the HFSPUPM, this work presents a series of discrete operations. Simulation results show the effectiveness of the proposed hybrid algorithm compared with other algorithms.
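For the global-search part, the sketch below shows one EDA iteration on a permutation encoding: a job-by-position frequency model is estimated from elite schedules and new job sequences are sampled from it. The position-frequency model is a common EDA choice used for illustration, not the exact probabilistic model or representation of the HFSPUPM paper.

```python
import numpy as np

def eda_step(elite_perms, n_jobs, n_samples, smoothing=0.1, rng=None):
    """One EDA iteration for a permutation problem: estimate a job-by-position
    probability matrix from elite sequences, then sample new sequences from it."""
    rng = rng or np.random.default_rng()
    prob = np.full((n_jobs, n_jobs), smoothing)            # prob[job, position]
    for perm in elite_perms:
        for pos, job in enumerate(perm):
            prob[job, pos] += 1.0
    prob /= prob.sum(axis=0, keepdims=True)
    samples = []
    for _ in range(n_samples):
        remaining = list(range(n_jobs))
        perm = []
        for pos in range(n_jobs):
            p = prob[remaining, pos]
            job = rng.choice(remaining, p=p / p.sum())     # sample among unplaced jobs
            remaining.remove(job)
            perm.append(int(job))
        samples.append(perm)
    return samples

# Toy usage: pretend the two elite schedules below had the best makespans.
elite = [[0, 2, 1, 3, 4], [0, 1, 2, 3, 4]]
new_population = eda_step(elite, n_jobs=5, n_samples=4, rng=np.random.default_rng(2))
print(new_population)
```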
Funding: Projects (61173122, 61262032) supported by the National Natural Science Foundation of China; Projects (11JJ3067, 12JJ2038) supported by the Natural Science Foundation of Hunan Province, China.
Abstract: Low-rank matrix recovery is an important problem that has been extensively studied in the machine learning, data mining, and computer vision communities. A novel method is proposed for low-rank matrix recovery, targeting higher recovery accuracy and a stronger theoretical guarantee. Specifically, the proposed method is based on a nonconvex optimization model in which the low-rank matrix is recovered from the noisy observation. To solve the model, an effective algorithm is derived by minimizing over the variables alternately. It is proved theoretically that this algorithm has a stronger guarantee than existing work. In natural image denoising experiments, the proposed method achieves lower recovery error than the two compared methods. The proposed low-rank matrix recovery method is also applied to two real-world problems, namely removing noise from verification codes and removing watermarks from images, in which the images recovered by the proposed method are less noisy than those of the two compared methods.
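Since the specific nonconvex model is not spelled out in the abstract, the sketch below illustrates the alternating-minimization idea on the generic regularized factorization min over U, V of ||Y - U V^T||_F^2 + lambda (||U||_F^2 + ||V||_F^2), where each sub-problem has a closed-form least-squares update. This is a baseline illustration, not the paper's algorithm.

```python
import numpy as np

def altmin_lowrank(Y, rank, n_iter=50, lam=1e-2, seed=0):
    """Alternating least squares for min ||Y - U V^T||_F^2 + lam(||U||^2 + ||V||^2):
    with one factor fixed, the other has a closed-form ridge-regression update."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = lam * np.eye(rank)
    for _ in range(n_iter):
        U = Y @ V @ np.linalg.inv(V.T @ V + I)     # update U with V fixed
        V = Y.T @ U @ np.linalg.inv(U.T @ U + I)   # update V with U fixed
    return U @ V.T

# Toy usage: recover a rank-3 matrix from a noisy observation.
rng = np.random.default_rng(1)
L = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
Y = L + 0.1 * rng.standard_normal(L.shape)
L_hat = altmin_lowrank(Y, rank=3)
print("relative error:", np.linalg.norm(L_hat - L) / np.linalg.norm(L))
```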
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R343), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and the Deanship of Scientific Research at Northern Border University, Arar, Kingdom of Saudi Arabia, through project number "NBU-FFR-2024-1092-02".
Abstract: Phishing attacks present a persistent and evolving threat in the cybersecurity landscape, necessitating the development of more sophisticated detection methods. Traditional machine learning approaches to phishing detection have relied heavily on feature engineering and have often fallen short in adapting to the dynamically changing patterns of phishing Uniform Resource Locators (URLs). Addressing these challenges, we introduce a framework that integrates the sequential data processing strengths of a Recurrent Neural Network (RNN) with the hyperparameter optimization prowess of the Whale Optimization Algorithm (WOA). Our model capitalizes on an extensive Kaggle dataset featuring over 11,000 URLs, each delineated by 30 attributes. The WOA's hyperparameter optimization enhances the RNN's performance, as evidenced by a meticulous validation process. The results, encapsulated in precision, recall, and F1-score metrics, surpass baseline models, achieving an overall accuracy of 92%. This study not only demonstrates the RNN's proficiency in learning complex patterns but also underscores the WOA's effectiveness in refining machine learning models for the critical task of phishing detection.
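A minimal sketch of the RNN side of the framework, assuming TensorFlow/Keras and treating the 30 URL attributes as a length-30 sequence of scalars; the layer size, dropout, and learning rate are placeholders for the hyperparameters that WOA would tune, and the random data stands in for the Kaggle features.

```python
import numpy as np
import tensorflow as tf

def build_rnn(units=32, dropout=0.2, lr=1e-3):
    """Small RNN classifier over the 30 URL attributes treated as a sequence.
    `units`, `dropout`, and `lr` are the kind of hyperparameters tuned by WOA."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(30, 1)),
        tf.keras.layers.SimpleRNN(units),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Toy usage with random data standing in for the URL feature dataset.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 30, 1)).astype("float32")
y = rng.integers(0, 2, size=256).astype("float32")
model = build_rnn(units=16)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))          # [loss, accuracy]
```

A WOA loop like the one sketched earlier for the VMD parameters could wrap build_rnn in the same way, treating (units, dropout, lr) as the search position and validation accuracy as the fitness.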
Abstract: Learning to optimize (L2O) stands at the intersection of traditional optimization and machine learning, utilizing the capabilities of machine learning to enhance conventional optimization techniques. As real-world optimization problems frequently share common structures, L2O provides a tool to exploit these structures for better or faster solutions. This tutorial dives deep into L2O techniques, introducing how to accelerate optimization algorithms, promptly estimate the solutions, or even reshape the optimization problem itself, making it more adaptive to real-world applications. By considering the prerequisites for successful applications of L2O and the structure of the optimization problems at hand, this tutorial provides a comprehensive guide for practitioners and researchers alike.
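A toy instance of the L2O idea: instead of hand-tuning a step size, we "learn" one that minimizes the average loss after a fixed number of unrolled gradient steps on a family of random quadratics, and check that it transfers to unseen problems from the same family. The quadratic family and the grid search are illustrative assumptions; real L2O work typically learns much richer update rules.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_quadratic(dim=10):
    """Random convex quadratic f(x) = 0.5 x^T A x - b^T x from a fixed family."""
    M = rng.standard_normal((dim, dim))
    A = M @ M.T / dim + np.eye(dim)
    b = rng.standard_normal(dim)
    return A, b

def loss_after_k_steps(alpha, problems, k=10):
    """Average objective after k unrolled gradient steps with a shared step size
    alpha (the 'learned' quantity in this toy L2O setup)."""
    total = 0.0
    for A, b in problems:
        x = np.zeros(len(b))
        for _ in range(k):
            x = x - alpha * (A @ x - b)           # gradient step on the quadratic
        total += 0.5 * x @ A @ x - b @ x
    return total / len(problems)

train = [random_quadratic() for _ in range(20)]   # problems sharing a common structure
test = [random_quadratic() for _ in range(20)]

alphas = np.linspace(0.01, 0.6, 60)
learned = alphas[np.argmin([loss_after_k_steps(a, train) for a in alphas])]
print("learned step size:", round(learned, 3))
print("test loss, learned vs. default 0.1:",
      loss_after_k_steps(learned, test), loss_after_k_steps(0.1, test))
```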
Abstract: Workload balancing in cloud computing is not yet a solved problem, particularly for Infrastructure as a Service (IaaS) in the cloud network. A server or host accessing the cloud should be neither underloaded nor overloaded, since either condition may lead to a system crash. To resolve these problems, an efficient task scheduling algorithm is required to distribute tasks over all feasible resources, which is termed load balancing. The load balancing approach ensures that all Virtual Machines (VMs) are utilized appropriately, so it is highly desirable to develop a load-balancing model for the cloud environment based on machine learning and optimization strategies. Here, computing and networking data are analyzed to observe traffic and performance patterns. The acquired data feeds a machine learning decision that selects the right server by predicting its performance with an Optimal Kernel-based Extreme Learning Machine (OK-ELM), whose parameters are tuned by the developed hybrid Population Size-based Mud Ring Tunicate Swarm Algorithm (PS-MRTSA). Further, effective scheduling is performed to resolve load balancing issues by employing the developed MR-TSA model. The developed approach effectively handles multi-objective constraints such as response time, resource cost, and energy consumption. Thus, the recommended load balancing model secures a higher performance rate than traditional approaches across several experimental analyses.
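The details of OK-ELM and PS-MRTSA are specific to the paper, so the sketch below shows a plain kernel extreme learning machine (closed-form solution beta = (I/C + K)^-1 T with an RBF kernel) predicting a server performance score from load features; C and gamma are exactly the kind of parameters a metaheuristic such as PS-MRTSA would tune. The feature and target definitions are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelELM:
    """Plain kernel extreme learning machine: beta = (I/C + K)^-1 T,
    prediction = K(x, X_train) beta."""
    def __init__(self, C=10.0, gamma=0.5):
        self.C, self.gamma = C, gamma

    def fit(self, X, T):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

# Toy usage: predict a server "performance score" from synthetic load features.
rng = np.random.default_rng(0)
X = rng.random((200, 4))                      # e.g., CPU, memory, bandwidth, queue length
t = 1.0 - X.mean(axis=1) + 0.05 * rng.standard_normal(200)
model = KernelELM(C=100.0, gamma=1.0).fit(X[:150], t[:150])
pred = model.predict(X[150:])
print("RMSE:", np.sqrt(np.mean((pred - t[150:]) ** 2)))
```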
Funding: None received from any funding agency in the public, commercial, or not-for-profit sectors.
Abstract: Online banking fraud occurs whenever a criminal can seize accounts and transfer funds from an individual's online bank account. Successfully preventing this requires detecting as many fraudsters as possible without producing too many false alarms. This is a challenge for machine learning owing to the extremely imbalanced data and the complexity of fraud. In addition, classical machine learning methods must be extended to minimize expected financial losses. Finally, fraud can only be combated systematically and economically if the risks and costs in payment channels are known. We define three models that overcome these challenges: machine learning-based fraud detection, economic optimization of machine learning results, and a risk model that predicts the risk of fraud while considering countermeasures. The models were tested on real data. Our machine learning model alone reduces the expected and unexpected losses in the three aggregated payment channels by 15% compared to a benchmark consisting of static if-then rules. Optimizing the machine learning model economically further reduces the expected losses by 52%. These results hold at a low false positive rate of 0.4%. Thus, the risk framework of the three models is viable from a business and risk perspective.
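A toy version of the economic optimization step: given fraud scores, transaction amounts, and a per-alert review cost, choose the alert threshold that minimizes expected financial loss rather than maximizing a purely statistical metric. The cost figures, fraud rate, and synthetic score distributions are assumptions for illustration only.

```python
import numpy as np

def best_threshold(scores, is_fraud, amounts, review_cost=5.0):
    """Pick the alert threshold that minimizes expected financial loss:
    undetected fraud costs its full amount, and every alert costs one review.
    This loss model is illustrative, not the paper's economic optimization."""
    candidates = np.linspace(0.0, 1.0, 101)
    losses = []
    for thr in candidates:
        alert = scores >= thr
        missed_fraud = amounts[is_fraud & ~alert].sum()    # fraud that slips through
        review = review_cost * alert.sum()                  # analyst cost of all alerts
        losses.append(missed_fraud + review)
    i = int(np.argmin(losses))
    return candidates[i], losses[i]

# Toy usage: highly imbalanced synthetic transactions with a noisy fraud score.
rng = np.random.default_rng(0)
n = 20_000
is_fraud = rng.random(n) < 0.002                            # ~0.2% fraud rate
amounts = rng.exponential(300.0, size=n)
scores = np.where(is_fraud, rng.beta(5, 2, n), rng.beta(2, 8, n))
thr, loss = best_threshold(scores, is_fraud, amounts)
print(f"threshold {thr:.2f}, expected loss {loss:,.0f}")
```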