Journal Articles
1,584 articles found
1. Graded density impactor design via machine learning and numerical simulation: Achieve controllable stress and strain rate (Cited by: 1)
Authors: Yahui Huang, Ruizhi Zhang, Shuaixiong Liu, Jian Peng, Yong Liu, Han Chen, Jian Zhang, Guoqiang Luo, Qiang Shen. Defence Technology (防务技术), 2025, Issue 9, pp. 262-273.
The graded density impactor (GDI) dynamic loading technique is crucial for acquiring the dynamic physical property parameters of materials used in weapons. The accuracy and timeliness of GDI structural design are key to achieving controllable stress-strain rate loading. In this study, we have, for the first time, combined one-dimensional fluid computational software with machine learning methods. We first elucidated the mechanisms by which GDI structures control stress and strain rates. Subsequently, we constructed a machine learning model to create a structure-property response surface. The results show that altering the loading velocity and interlayer thickness has a pronounced regulatory effect on stress and strain rates. In contrast, the impedance distribution index and target thickness have less significant effects on stress regulation, although there is a matching relationship between target thickness and interlayer thickness. Compared with traditional design methods, the machine learning approach offers a 10^(4)-10^(5) times increase in efficiency and the potential to achieve a global optimum, holding promise for guiding the design of GDI.
Keywords: Machine learning; Numerical simulation; Graded density impactor; Controllable stress-strain rate loading; Response surface methodology
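The efficiency gain described in this abstract comes from replacing expensive simulation runs with a cheap fitted response surface that can be searched exhaustively. The sketch below illustrates that generic idea only: the "simulation", its quadratic form, and all constants are invented stand-ins, not the paper's 1D hydrocode or design variables.

```python
# Fit a quadratic response surface on a few expensive "simulation" runs,
# then search the cheap surrogate globally for a target stress.
import numpy as np

def expensive_simulation(v, t):
    # Hypothetical stand-in for a hydrocode run (velocity v, thickness t).
    return 1.2 * v**2 - 3.0 * t + 4.0 + 0.5 * v * t

# Design of experiments: a coarse grid of simulator runs.
samples = [(v, t) for v in (1.0, 2.0, 3.0) for t in (0.5, 1.0, 1.5)]
X = np.array([[1.0, v, t, v * v, t * t, v * t] for v, t in samples])
y = np.array([expensive_simulation(v, t) for v, t in samples])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit

def surrogate(v, t):
    return float(np.array([1.0, v, t, v * v, t * t, v * t]) @ coef)

# Exhaustive search over the surrogate: ~10^4 evaluations cost almost
# nothing, whereas 10^4 simulator runs would be prohibitive.
target = 6.0
vs = np.linspace(1.0, 3.0, 101)
ts = np.linspace(0.5, 1.5, 101)
best = min(((v, t) for v in vs for t in ts),
           key=lambda vt: abs(surrogate(*vt) - target))
```

Because the surrogate is evaluated instead of the simulator, a dense global sweep of candidate designs becomes affordable, which is the source of the quoted 10^(4)-10^(5) speedup.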
2. A deep-learning-based MAC for integrating channel access, rate adaptation, and channel switch
Authors: Jiantao Xin, Wei Xu, Bin Cao, Taotao Wang, Shengli Zhang. Digital Communications and Networks, 2025, Issue 4, pp. 1041-1053.
With increasing density and heterogeneity in unlicensed wireless networks, traditional MAC protocols, such as Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) in Wi-Fi networks, are experiencing performance degradation. This is manifested in increased collisions and extended backoff times, leading to diminished spectrum efficiency and protocol coordination. Addressing these issues, this paper proposes a deep-learning-based MAC paradigm, dubbed DL-MAC, which leverages spectrum data readily available from energy detection modules in wireless devices to achieve the MAC functionalities of channel access, rate adaptation, and channel switch. First, we utilize DL-MAC to realize a joint design of channel access and rate adaptation. Subsequently, we integrate the capability of channel switching into DL-MAC, enhancing its functionality from single-channel to multi-channel operations. Specifically, the DL-MAC protocol incorporates a Deep Neural Network (DNN) for channel selection and a Recurrent Neural Network (RNN) for the joint design of channel access and rate adaptation. We conducted real-world data collection within the 2.4 GHz frequency band to validate the effectiveness of DL-MAC. Experimental results demonstrate that DL-MAC exhibits significantly superior performance compared to traditional algorithms in both single- and multi-channel environments, and also outperforms single-function designs. Additionally, the performance of DL-MAC remains robust, unaffected by channel switch overheads within the evaluation range.
Keywords: Deep learning; Channel access; Rate adaptation; Channel switch
3. Prediction of temperature and strain rate dependent flow behaviors for AA6061-T4 sheet using phenomenology and machine learning-based approaches
Authors: Zhi-hao Wang, D. Guines, Jia-shuo Qi, Xing-rong Chu, L. Leotoing. Transactions of Nonferrous Metals Society of China, 2025, Issue 11, pp. 3617-3637.
The plastic flow behaviors of AA6061-T4 sheets at different temperatures (21-300 °C) and strain rates (0.002-4 s^(-1)) were studied. Significant nonlinear effects of temperature and strain rate on flow behaviors were revealed, as well as underlying micromechanical factors. Phenomenology and machine learning-based constitutive models were developed. Both models were formulated in the framework of a temperature-dependent linear combination regulated by a transition function to capture the evolution of strain-hardening behavior with increasing temperature. Novel mathematical functions for describing temperature and strain rate sensitivities were formulated for the phenomenological constitutive model. The threshold temperature related to microstructure evolution was considered in the modeling. A data-enrichment strategy based on extrapolating experimental data via classical strain hardening laws was adopted to improve neural network training. An efficient inverse identification strategy, focusing solely on the transition function, was proposed to enhance the prediction accuracy of post-necking deformation by both constitutive models.
Keywords: AA6061-T4 sheet; Thermo-visco-plasticity; Constitutive model; Machine learning; Strain rate and temperature effects
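The "temperature-dependent linear combination regulated by a transition function" mentioned in this abstract can be sketched generically: two hardening branches are blended by a weight that switches around a threshold temperature. The functional forms and constants below are invented for illustration and are not the paper's actual model.

```python
# Two hardening branches blended by a sigmoid transition function of
# temperature. All forms and constants here are illustrative only.
import math

def w(T, T_threshold=150.0, k=0.05):
    """Transition weight: ~1 well below the threshold, ~0 well above."""
    return 1.0 / (1.0 + math.exp(k * (T - T_threshold)))

def sigma_cold(strain):   # strong strain hardening (Hollomon-like)
    return 300.0 * (0.002 + strain) ** 0.2

def sigma_hot(strain):    # near-saturation response at high temperature
    return 120.0 * (1.0 - math.exp(-30.0 * strain)) + 60.0

def flow_stress(strain, T):
    a = w(T)
    return a * sigma_cold(strain) + (1.0 - a) * sigma_hot(strain)
```

The appeal of this structure is that each branch can be calibrated separately, after which only the transition function needs inverse identification, matching the strategy the abstract describes.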
4. Dynamic Economic Scheduling with Self-Adaptive Uncertainty in Distribution Network Based on Deep Reinforcement Learning (Cited by: 3)
Authors: Guanfu Wang, Yudie Sun, Jinling Li, Yu Jiang, Chunhui Li, Huanan Yu, He Wang, Shiqiang Li. Energy Engineering (EI), 2024, Issue 6, pp. 1671-1695.
Traditional optimal scheduling methods are limited to accurate physical models and parameter settings, which are difficult to adapt to the uncertainty of source and load, and there are problems such as the inability to make dynamic decisions continuously. This paper proposes a dynamic economic scheduling method for distribution networks based on deep reinforcement learning. Firstly, the economic scheduling model of the new energy distribution network is established considering the action characteristics of micro-gas turbines, the dynamic scheduling model based on deep reinforcement learning is constructed for the new energy distribution network system with a high proportion of new energy, and the Markov decision process of the model is defined. Secondly, for the changing characteristics of source-load uncertainty, agents are trained interactively with the distribution network in a data-driven manner. Then, through the proximal policy optimization algorithm, agents adaptively learn the scheduling strategy and realize the dynamic scheduling decision of the new energy distribution network system. Finally, the feasibility and superiority of the proposed method are verified by an improved IEEE 33-node simulation system.
Keywords: Self-adaptive; Uncertainty of sources and load; Deep reinforcement learning; Dynamic economic scheduling
5. An A^(*)-preguided ant colony path planning algorithm incorporating Q-learning (Cited by: 1)
Authors: Yin Xiaotian, Yang Liying, Liu Gan, He Yuqing. Transducer and Microsystem Technologies (传感器与微系统), 2025, Issue 8, pp. 143-147, 153.
To address the traditional ant colony optimization (ACO) algorithm's tendency to fall into local optima, slow convergence, and insufficient obstacle-avoidance capability in path planning for complex environments, an A^(*)-preguided ant colony path planning algorithm incorporating Q-learning and a hierarchical pheromone mechanism, called QHACO, is proposed. First, the A^(*) algorithm pre-allocates global pheromone, guiding initial paths to quickly approach the optimal solution. Second, a global-local two-layer pheromone cooperation model is built, in which the global layer preserves historical elite-path experience while the local layer responds to environmental changes in real time. Finally, a Q-learning directional reward function is introduced to optimize the decision process, applying reinforced guidance signals at path turning points and obstacle edges. Experiments show that on a 25×24 map of medium complexity, QHACO shortens the optimal path by 22.7% and improves convergence speed by 98.7% compared with traditional ACO; in a 50×50 high-density obstacle environment, the optimal path length is improved by 16.9% and the number of iterations is reduced by 95.1%. Compared with traditional ACO, QHACO achieves significant improvements in optimality, convergence speed, and obstacle avoidance, showing strong environmental adaptability.
Keywords: Ant colony optimization algorithm; Path planning; Local optimum; Convergence speed; Q-learning; Hierarchical pheromone; A^(*) algorithm
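The A^(*) pre-guidance step described above can be sketched in isolation: run A^(*) once on a grid and deposit extra initial pheromone along its path so the ant colony starts near a good solution. The grid, constants, and deposit rule below are illustrative, not the QHACO paper's actual parameters.

```python
# A* on a small grid map, then pheromone pre-allocation along its path.
import heapq

GRID = ["S..#.",
        ".#.#.",
        ".#...",
        "...#G"]
R, C = len(GRID), len(GRID[0])

def find(ch):
    return next((r, c) for r in range(R) for c in range(C) if GRID[r][c] == ch)

def astar(start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier, came, cost = [(h(start), start)], {start: None}, {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < R and 0 <= nc < C and GRID[nr][nc] != '#':
                g = cost[cur] + 1
                if (nr, nc) not in cost or g < cost[(nr, nc)]:
                    cost[(nr, nc)] = g
                    came[(nr, nc)] = cur
                    heapq.heappush(frontier, (g + h((nr, nc)), (nr, nc)))
    path, cur = [], goal
    while cur is not None:
        path.append(cur)
        cur = came[cur]
    return path[::-1]

# Uniform base pheromone, boosted along the A* path before ants start.
tau0, boost = 1.0, 5.0
pheromone = {(r, c): tau0 for r in range(R) for c in range(C) if GRID[r][c] != '#'}
path = astar(find('S'), find('G'))
for cell in path:
    pheromone[cell] += boost
```

Ants sampling moves proportionally to pheromone would then favor this corridor from the first iteration, which is the mechanism behind the reported convergence speedup.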
6. Machine learning applications in healthcare clinical practice and research (Cited by: 1)
Authors: Nikolaos-Achilleas Arkoudis, Stavros P. Papadakos. World Journal of Clinical Cases (SCIE), 2025, Issue 1, pp. 16-21.
Machine learning (ML) is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis, thus creating machines that can complete tasks otherwise requiring human intelligence. Among its various applications, it has proven groundbreaking in healthcare as well, both in clinical practice and research. In this editorial, we succinctly introduce ML applications and present a study, featured in the latest issue of the World Journal of Clinical Cases. The authors of this study conducted an analysis using both multiple linear regression (MLR) and ML methods to investigate the significant factors that may impact the estimated glomerular filtration rate in healthy women with and without non-alcoholic fatty liver disease (NAFLD). Their results implicated age as the most important determining factor in both groups, followed by lactic dehydrogenase, uric acid, forced expiratory volume in one second, and albumin. In addition, for the NAFLD(-) group, the 5th and 6th most important impact factors were thyroid-stimulating hormone and systolic blood pressure, as compared to plasma calcium and body fat for the NAFLD(+) group. However, the study's distinctive contribution lies in its adoption of ML methodologies, showcasing their superiority over traditional statistical approaches (herein MLR), thereby highlighting the potential of ML to represent an invaluable advanced adjunct tool in clinical practice and research.
Keywords: Machine learning; Artificial intelligence; Clinical practice; Research; Glomerular filtration rate; Non-alcoholic fatty liver disease; Medicine
7. Application of machine learning in predicting the rate-dependent compressive strength of rocks (Cited by: 14)
Authors: Mingdong Wei, Wenzhao Meng, Feng Dai, Wei Wu. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2022, Issue 5, pp. 1356-1365.
Accurate prediction of compressive strength of rocks relies on the rate-dependent behaviors of rocks, and correlation among the geometrical, physical, and mechanical properties of rocks. However, these properties may not be easy to control in laboratory experiments, particularly in dynamic compression experiments. By training three machine learning models based on the support vector machine (SVM), backpropagation neural network (BPNN), and random forest (RF) algorithms, we isolated different input parameters, such as static compressive strength, P-wave velocity, specimen dimension, grain size, bulk density, and strain rate, to identify their importance in the strength prediction. Our results demonstrated that the RF algorithm shows a better performance than the other two algorithms. The strain rate is a key input parameter influencing the performance of these models, while the others (e.g. static compressive strength and P-wave velocity) are less important as their roles can be compensated by alternative parameters. The results also revealed that the effect of specimen dimension on the rock strength can be overshadowed at high strain rates, while the effect on the dynamic increase factor (i.e. the ratio of dynamic to static compressive strength) becomes significant. The dynamic increase factors for different specimen dimensions bifurcate when the strain rate reaches a relatively high value, a clue to improve our understanding of the transitional behaviors of rocks from low to high strain rates.
Keywords: Machine learning; Rock dynamics; Compressive strength; Strain rate
8. Enhanced Reconfigurable Intelligent Surface Assisted mmWave Communication: A Federated Learning (Cited by: 7)
Authors: Lixin Li, Donghui Ma, Huan Ren, Dawei Wang, Xiao Tang, Wei Liang, Tong Bai. China Communications (SCIE, CSCD), 2020, Issue 10, pp. 115-128.
Reconfigurable intelligent surface (RIS) has been proposed as a potential solution to improve the coverage and spectrum efficiency for future wireless communication. However, the privacy of users' data is often ignored in previous works, such as the user's location information during channel estimation. In this paper, we propose a privacy-preserving design paradigm combining federated learning (FL) with RIS in the mmWave communication system. Based on FL, the local models are trained and encrypted using the private data managed on each local device. Following this, a global model is generated by aggregating them at the central server. The optimal model is trained for establishing the mapping function between channel state information (CSI) and the RIS's configuration matrix in order to maximize the achievable rate of the received signal. Simulation results demonstrate that the proposed scheme can effectively approach the theoretical value generated by centralized machine learning (ML), while protecting users' privacy.
Keywords: Reconfigurable intelligent surface; Privacy; Federated learning; Achievable rate
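The train-locally-then-aggregate loop this abstract relies on is the standard federated averaging pattern: only model parameters leave each client. The sketch below shows that generic pattern on a toy linear model; it is not the paper's CSI-to-configuration network, and the data, learning rate, and round counts are invented.

```python
# Schematic FedAvg-style loop: each client trains on its private data
# and the server averages only the resulting parameters.
import random

random.seed(0)
true_w, true_b = 2.0, -1.0

def make_client_data(n=50):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    return [(x, true_w * x + true_b + random.gauss(0, 0.05)) for x in xs]

clients = [make_client_data() for _ in range(4)]  # private, never shared

def local_train(data, w, b, lr=0.1, epochs=20):
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x   # gradient of squared error w.r.t. w
            b -= lr * err       # gradient w.r.t. b
    return w, b

w, b = 0.0, 0.0
for _round in range(5):
    updates = [local_train(data, w, b) for data in clients]
    w = sum(u[0] for u in updates) / len(updates)  # server-side averaging
    b = sum(u[1] for u in updates) / len(updates)
```

In the paper's setting the shared parameters would additionally be encrypted before aggregation; here the point is only that raw data never crosses the client boundary.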
9. Prediction model for corrosion rate of low-alloy steels under atmospheric conditions using machine learning algorithms (Cited by: 7)
Authors: Jingou Kuang, Zhilin Long. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 337-350.
This work constructed a machine learning (ML) model to predict the atmospheric corrosion rate of low-alloy steels (LAS). The material properties of LAS, environmental factors, and exposure time were used as the input, while the corrosion rate as the output. Six different ML algorithms were used to construct the proposed model. Through optimization and filtering, the eXtreme gradient boosting (XGBoost) model exhibited good corrosion rate prediction accuracy. The features of material properties were then transformed into atomic and physical features using the proposed property transformation approach, and the dominant descriptors that affected the corrosion rate were filtered using the recursive feature elimination (RFE) as well as XGBoost methods. The established ML models exhibited better prediction performance and generalization ability via property transformation descriptors. In addition, the SHapley Additive exPlanations (SHAP) method was applied to analyze the relationship between the descriptors and corrosion rate. The results showed that the property transformation model could effectively help with analyzing the corrosion behavior, thereby significantly improving the generalization ability of corrosion rate prediction models.
Keywords: Machine learning; Low-alloy steel; Atmospheric corrosion prediction; Corrosion rate; Feature fusion
10. Fast Learning in Spiking Neural Networks by Learning Rate Adaptation (Cited by: 2)
Authors: Fang Huijuan, Luo Jiliang, Wang Fei. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2012, Issue 6, pp. 1219-1224.
For accelerating the supervised learning by the SpikeProp algorithm with the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (heuristic rule, delta-delta rule, and delta-bar-delta rule), which are used to speed up training in artificial neural networks, are used to develop the training algorithms for feedforward SNN. The performance of these algorithms is investigated by four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods are able to speed up convergence of SNN compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is used in combination with the momentum term, the two modifications will balance each other in a beneficial way to accomplish rapid and steady convergence. Among the three learning rate adaptation methods, the delta-bar-delta rule performs the best. The delta-bar-delta method with momentum has the fastest convergence rate, the greatest stability of training process, and the maximum accuracy of network learning. The proposed algorithms in this paper are simple and efficient, and consequently valuable for practical applications of SNN.
Keywords: Spiking neural networks; Learning algorithm; Learning rate adaptation; Tennessee Eastman process
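The delta-bar-delta rule named in this abstract is easy to state outside the SNN context: each parameter keeps its own learning rate, which grows additively when successive gradients agree in sign and shrinks multiplicatively when they disagree. The sketch below applies it to a toy quadratic loss; the constants are illustrative textbook-style values, not the paper's settings.

```python
# Delta-bar-delta learning-rate adaptation on a toy quadratic objective.
def delta_bar_delta(grad_fn, x, lr0=0.01, kappa=0.005, phi=0.5,
                    theta=0.7, steps=200):
    n = len(x)
    lr = [lr0] * n     # per-parameter learning rates
    bar = [0.0] * n    # exponentially averaged past gradients
    for _ in range(steps):
        g = grad_fn(x)
        for i in range(n):
            if g[i] * bar[i] > 0:      # consistent direction: speed up
                lr[i] += kappa
            elif g[i] * bar[i] < 0:    # oscillation: slow down
                lr[i] *= (1.0 - phi)
            bar[i] = (1.0 - theta) * g[i] + theta * bar[i]
            x[i] -= lr[i] * g[i]
    return x

# Ill-conditioned quadratic: f(x) = 0.5*(x0^2 + 100*x1^2).
grad = lambda x: [x[0], 100.0 * x[1]]
sol = delta_bar_delta(grad, [5.0, 5.0])
```

The additive increase / multiplicative decrease asymmetry is the key design choice: learning rates climb cautiously on smooth directions but collapse quickly the moment a direction starts oscillating.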
11. Development and Validation of Machine Learning Models for Lung Cancer Risk Prediction in High-Risk Population: A Retrospective Cohort Study (Cited by: 1)
Authors: Yu Su, Haoran Zhan, Shangyao Li, Yitong Lu, Ruhuan Ma, Hai Fang, Tingting Xu, Yu Tian. Biomedical and Environmental Sciences, 2025, Issue 4, pp. 501-505.
Lung cancer, the leading cause of cancer deaths worldwide and in China, has a 19.7% five-year survival rate due to terminal-stage diagnosis [1-3]. Although low-dose computed tomography (CT) screening can reduce mortality, high false positive rates can create economic and psychological burdens.
Keywords: Lung cancer; Retrospective cohort study; Lung cancer risk prediction; Low-dose computed tomography; High-risk population; Mortality; Machine learning; False positive rates
12. A performance-based hybrid deep learning model for predicting TBM advance rate using Attention-ResNet-LSTM (Cited by: 2)
Authors: Sihao Yu, Zixin Zhang, Shuaifeng Wang, Xin Huang, Qinghua Lei. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, Issue 1, pp. 65-80.
The technology of tunnel boring machines (TBMs) has been widely applied for underground construction worldwide; however, how to ensure the TBM tunneling process is safe and efficient remains a major concern. Advance rate is a key parameter of TBM operation and reflects the TBM-ground interaction, for which a reliable prediction helps optimize the TBM performance. Here, we develop a hybrid neural network model, called Attention-ResNet-LSTM, for accurate prediction of the TBM advance rate. A database including geological properties and TBM operational parameters from the Yangtze River Natural Gas Pipeline Project is used to train and test this deep learning model. The evolutionary polynomial regression method is adopted to aid the selection of input parameters. The results of numerical experiments show that our Attention-ResNet-LSTM model outperforms other commonly-used intelligent models with a lower root mean square error and a lower mean absolute percentage error. Further, parametric analyses are conducted to explore the effects of the sequence length of historical data and the model architecture on the prediction accuracy. A correlation analysis between the input and output parameters is also implemented to provide guidance for adjusting relevant TBM operational parameters. The performance of our hybrid intelligent model is demonstrated in a case study of TBM tunneling through complex ground with variable strata. Finally, data collected from the Baimang River Tunnel Project in Shenzhen, China are used to further test the generalization of our model. The results indicate that, compared to the conventional ResNet-LSTM model, our model has a better predictive capability for scenarios with unknown datasets due to its self-adaptive characteristic.
Keywords: Tunnel boring machine (TBM); Advance rate; Deep learning; Attention-ResNet-LSTM; Evolutionary polynomial regression
13. Recent innovation in benchmark rates (BMR): evidence from influential factors on Turkish Lira Overnight Reference Interest Rate with machine learning algorithms (Cited by: 2)
Authors: Ömer Depren, Mustafa Tevfik Kartal, Serpil Kılıç Depren. Financial Innovation, 2021, Issue 1, pp. 942-961.
Some countries have announced national benchmark rates, while others have been working on the recent trend in which the London Interbank Offered Rate will be retired at the end of 2021. Considering that Turkey announced the Turkish Lira Overnight Reference Interest Rate (TLREF), this study examines the determinants of TLREF. In this context, three global determinants, five country-level macroeconomic determinants, and the COVID-19 pandemic are considered, using daily data between December 28, 2018, and December 31, 2020, by performing machine learning algorithms and Ordinary Least Squares. The empirical results show that (1) the most significant determinant is the amount of securities bought by Central Banks; (2) country-level macroeconomic factors have a higher impact whereas global factors are less important, and the pandemic does not have a significant effect; (3) Random Forest is the most accurate prediction model. Taking action by considering the study's findings can help support economic growth by achieving low-level benchmark rates.
Keywords: Benchmark rate; Determinants; Machine learning algorithms; Turkey
14. Prediction of corrosion rate for friction stir processed WE43 alloy by combining PSO-based virtual sample generation and machine learning (Cited by: 2)
Authors: Annayath Maqbool, Abdul Khalad, Noor Zaman Khan. Journal of Magnesium and Alloys (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 1518-1528.
The corrosion rate is a crucial factor that impacts the longevity of materials in different applications. After undergoing friction stir processing (FSP), the refined grain structure leads to a notable decrease in corrosion rate. However, a better understanding of the correlation between the FSP process parameters and the corrosion rate is still lacking. The current study used machine learning to establish the relationship between the corrosion rate and FSP process parameters (rotational speed, traverse speed, and shoulder diameter) for WE43 alloy. The Taguchi L27 design of experiments was used for the experimental analysis. In addition, synthetic data was generated using particle swarm optimization for virtual sample generation (VSG). The application of VSG has led to an increase in the prediction accuracy of machine learning models. A sensitivity analysis was performed using Shapley Additive Explanations to determine the key factors affecting the corrosion rate. The shoulder diameter had a significant impact in comparison to the traverse speed. A graphical user interface (GUI) has been created to predict the corrosion rate using the identified factors. This study focuses on the WE43 alloy, but its findings can also be used to predict the corrosion rate of other magnesium alloys.
Keywords: Corrosion rate; Friction stir processing; Virtual sample generation; Particle swarm optimization; Machine learning; Graphical user interface
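The particle swarm optimization machinery that PSO-based virtual sample generation builds on can be sketched in a few lines. The version below minimizes a toy 2-D objective; swarm size, inertia, and acceleration coefficients are generic textbook values, not the paper's settings, and the objective stands in for whatever criterion a VSG scheme would optimize.

```python
# Bare-bones particle swarm optimization on a convex 2-D objective.
import random

random.seed(1)

def objective(p):
    x, y = p
    return (x - 3.0) ** 2 + (y + 1.0) ** 2   # minimum at (3, -1)

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5), random.uniform(-5, 5)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # per-particle best
    gbest = min(pbest, key=objective)[:]           # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
```

Each particle is pulled toward both its own best position and the swarm's best, with inertia `w` balancing exploration against convergence.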
15. Machine learning-based comparison of factors influencing estimated glomerular filtration rate in Chinese women with or without nonalcoholic fatty liver (Cited by: 2)
Authors: I-Chien Chen, Lin-Ju Chou, Shih-Chen Huang, Ta-Wei Chu, Shang-Sen Lee. World Journal of Clinical Cases (SCIE), 2024, Issue 15, pp. 2506-2521.
BACKGROUND: The prevalence of non-alcoholic fatty liver disease (NAFLD) has increased recently. Subjects with NAFLD are known to have a higher chance of renal function impairment. Many past studies used traditional multiple linear regression (MLR) to identify risk factors for decreased estimated glomerular filtration rate (eGFR). However, medical research is increasingly relying on emerging machine learning (Mach-L) methods. The present study enrolled healthy women to identify factors affecting eGFR in subjects with and without NAFLD (NAFLD+, NAFLD-) and to rank their importance. AIM: To use three different Mach-L methods to identify key impact factors for eGFR in healthy women with and without NAFLD. METHODS: A total of 65,535 healthy female study participants were enrolled from the Taiwan MJ cohort, accounting for 32 independent variables including demographic, biochemistry, and lifestyle parameters, while eGFR was used as the dependent variable. Aside from MLR, three Mach-L methods were applied, including stochastic gradient boosting, eXtreme gradient boosting, and elastic net. Errors of estimation were used to define method accuracy, where a smaller degree of error indicated better model performance. RESULTS: Income, albumin, eGFR, high-density lipoprotein cholesterol, phosphorus, forced expiratory volume in one second (FEV1), and sleep time were all lower in the NAFLD+ group, while other factors were all significantly higher except for smoking area. Mach-L had lower estimation errors, thus outperforming MLR. In Model 1, age, uric acid (UA), FEV1, plasma calcium level (Ca), plasma albumin level (Alb), and total bilirubin were the most important factors in the NAFLD+ group, as opposed to age, UA, FEV1, Alb, lactic dehydrogenase (LDH), and Ca for the NAFLD- group. Given that the importance percentage of age was much higher than that of the 2nd most important factor, we built Model 2 by removing age. CONCLUSION: The eGFR was lower in the NAFLD+ group compared to the NAFLD- group, with age being the most important impact factor in both groups of healthy Chinese women, followed by LDH, UA, FEV1, and Alb. However, for the NAFLD- group, TSH and SBP were the 5th and 6th most important factors, as opposed to Ca and BF in the NAFLD+ group.
Keywords: Non-alcoholic fatty liver; Estimated glomerular filtration rate; Machine learning; Chinese women
16. Federated Learning Model for Auto Insurance Rate Setting Based on Tweedie Distribution (Cited by: 1)
Authors: Tao Yin, Changgen Peng, Weijie Tan, Dequan Xu, Hanlin Tang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 1, pp. 827-843.
In the assessment of car insurance claims, the claim rate for car insurance presents a highly skewed probability distribution, which is typically modeled using the Tweedie distribution. The traditional approach to obtaining the Tweedie regression model involves training on a centralized dataset; when the data is provided by multiple parties, training a privacy-preserving Tweedie regression model without exchanging raw data becomes a challenge. To address this issue, this study introduces a novel vertical federated learning-based Tweedie regression algorithm for multi-party auto insurance rate setting in data silos. The algorithm can keep sensitive data locally and uses privacy-preserving techniques to achieve intersection operations between the two parties holding the data. After determining which entities are shared, the participants train the model locally using the shared entity data to obtain the intermediate parameters of the local generalized linear model. Homomorphic encryption algorithms are introduced to interact with and update the model intermediate parameters to collaboratively complete the joint training of the car insurance rate-setting model. Performance tests on two publicly available datasets show that the proposed federated Tweedie regression algorithm can effectively generate Tweedie regression models that leverage the value of data from both parties without exchanging data. The assessment results of the scheme approach those of the Tweedie regression model learned from centralized data, and outperform the Tweedie regression model learned independently by a single party.
Keywords: Rate setting; Tweedie distribution; Generalized linear models; Federated learning; Homomorphic encryption
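The Tweedie GLM underlying this paper handles claim data that is mostly zeros with occasional large losses. As a minimal sketch, the snippet below fits an intercept-only Tweedie model (log link, power parameter between 1 and 2) by gradient descent on the Tweedie deviance. The toy data and constants are made up; real rate setting would use exposure-weighted claims, covariates, and (here) encrypted parameter exchange.

```python
# Intercept-only Tweedie GLM (log link, 1 < p < 2) fitted by gradient
# descent on the Tweedie deviance.
import math

def tweedie_deviance(y, mu, p=1.5):
    # Unit deviance for the compound Poisson-gamma case, 1 < p < 2.
    return 2.0 * (y ** (2 - p) / ((1 - p) * (2 - p))
                  - y * mu ** (1 - p) / (1 - p)
                  + mu ** (2 - p) / (2 - p))

# Skewed toy claim amounts: many zeros plus a few large losses.
claims = [0.0] * 8 + [120.0, 380.0]

def fit_intercept(y, p=1.5, lr=0.05, steps=2000):
    beta = 0.0                       # log-link intercept, mu = exp(beta)
    for _ in range(steps):
        mu = math.exp(beta)
        # d(deviance)/d(beta), averaged over observations.
        grad = sum(2.0 * mu ** (1 - p) * (mu - yi) for yi in y) / len(y)
        beta -= lr * grad
    return math.exp(beta)

mu_hat = fit_intercept(claims)
```

For an intercept-only model the deviance is minimized at the sample mean, which gives a quick correctness check; adding covariates turns `beta` into a vector and the update into the usual GLM gradient.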
17. Choice of discount rate in reinforcement learning with long-delay rewards (Cited by: 1)
Authors: Lin Xiangyang, Xing Qinghua, Liu Fuxian. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2022, Issue 2, pp. 381-392.
In the world, most of the successes are results of long-term efforts. The reward of success is extremely high, but before that, a long-term investment process is required. People who are "myopic" only value short-term rewards and are unwilling to make early-stage investments, so they hardly get the ultimate success and the corresponding high rewards. Similarly, for a reinforcement learning (RL) model with long-delay rewards, the discount rate determines the strength of the agent's "farsightedness". In order to enable the trained agent to make a chain of correct choices and succeed finally, the feasible region of the discount rate is first obtained through mathematical derivation in this paper. It satisfies the "farsightedness" requirement of the agent. Afterwards, in order to avoid the complicated problem of solving implicit equations in the process of choosing feasible solutions, a simple method is explored and verified by theoretical demonstration and mathematical experiments. Then, a series of RL experiments are designed and implemented to verify the validity of the theory. Finally, the model is extended from the finite process to the infinite process. The validity of the extended model is verified by theories and experiments. The whole research not only reveals the significance of the discount rate, but also provides a theoretical basis as well as a practical method for the choice of discount rate in future research.
Keywords: Reinforcement learning (RL); Discount rate; Long-delay reward; Q-learning; Treasure-detecting model; Feasible solution
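The feasible-region idea in this abstract can be illustrated with the simplest possible long-delay choice: take a small reward r now, or walk n extra steps for a large reward R. Under discounting, the delayed option wins only when gamma^n * R > r, so the feasible region is gamma > (r/R)^(1/n). The numbers below are illustrative, not the paper's treasure-detecting model.

```python
# How the discount rate gates "farsightedness" for a delayed reward.
def prefers_delayed(gamma, r_immediate, r_delayed, n_delay):
    """True if the discounted delayed reward beats the immediate one."""
    return gamma ** n_delay * r_delayed > r_immediate

r, R, n = 10.0, 100.0, 20
# Boundary of the feasible region: the smallest sufficiently
# "farsighted" discount rate.
gamma_min = (r / R) ** (1.0 / n)
```

An agent trained with gamma below `gamma_min` will rationally grab the short-term reward, no matter how long it is trained; the paper's contribution is deriving such feasibility bounds for chains of decisions rather than a single fork.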
18. Learning Rates of Kernel-Based Robust Classification (Cited by: 1)
Authors: Shuhua Wang, Baohuai Sheng. Acta Mathematica Scientia (SCIE, CSCD), 2022, Issue 3, pp. 1173-1190.
This paper considers a robust kernel regularized classification algorithm with a non-convex loss function, which is proposed to alleviate the performance deterioration caused by outliers. A comparison relationship between the excess misclassification error and the excess generalization error is provided; from this, along with convex analysis theory, a kind of learning rate is derived. The results show that the performance of the classifier is affected by the outliers, and the extent of impact can be controlled by choosing the homotopy parameters properly.
Keywords: support vector machine; robust classification; quasiconvex loss function; learning rate; right-sided directional derivative
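As an illustration of why a non-convex loss tames outliers, consider the ramp (truncated hinge) loss, a standard non-convex robust loss: it caps the penalty any single mislabeled point can contribute to the empirical risk. This is a generic sketch; the paper's specific quasiconvex loss and homotopy parameters are not reproduced here.

```python
def hinge(margin):
    """Convex hinge loss: grows without bound as the margin violation grows."""
    return max(0.0, 1.0 - margin)

def ramp(margin, s=-1.0):
    """Ramp (truncated hinge) loss: hinge capped at 1 - s, so one gross
    outlier cannot dominate the empirical risk."""
    return min(hinge(margin), 1.0 - s)

# A gross outlier with margin -10:
print(hinge(-10.0))  # → 11.0, grows linearly with the violation
print(ramp(-10.0))   # → 2.0, capped no matter how extreme the outlier is
```

The cap is what bounds an outlier's influence on the learned classifier, at the cost of convexity, which is why the analysis above needs tools beyond standard convex theory.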
Rate distortion optimization for adaptive gradient quantization in federated learning (Cited: 2)
19
Authors: Guojun Chen, Kaixuan Xie, Wenqiang Luo, Yinfei Xu, Lun Xin, Tiecheng Song, Jing Hu. 《Digital Communications and Networks》 (CSCD), 2024, No. 6, pp. 1813-1825
Federated Learning (FL) is an emerging machine learning framework designed to preserve privacy. However, the continuous updating of model parameters over uplink channels with limited throughput incurs heavy communication overhead, a major challenge for FL. To address this issue, we propose an adaptive gradient quantization approach that enhances communication efficiency. Aiming to minimize the total communication cost, we exploit both the correlation of gradients between local clients and the correlation of gradients between communication rounds, that is, in the space and time dimensions respectively. The compression strategy is based on rate-distortion theory, which allows us to find an optimal quantization strategy for the gradients. To further reduce the computational complexity, we introduce a Kalman filter into the proposed approach. Finally, numerical results demonstrate the effectiveness and robustness of the proposed rate-distortion-optimized adaptive gradient quantization in significantly reducing communication costs compared with other quantization methods.
Keywords: federated learning; communication efficiency; adaptive quantization; rate distortion
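The rate-distortion trade-off such an approach optimizes can be sketched with a plain uniform scalar quantizer (an illustrative stand-in; the paper's adaptive, Kalman-assisted scheme is more elaborate): distortion falls as more bits are spent per gradient entry, and the design problem is allocating those bits.

```python
def quantize(grad, bits):
    """Uniform scalar quantization of a gradient vector to 2**bits levels."""
    lo, hi = min(grad), max(grad)
    levels = 2 ** bits - 1
    step = (hi - lo) / levels if levels else 1.0
    return [lo + round((g - lo) / step) * step for g in grad]

def mse(a, b):
    """Mean squared error, the distortion measure."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

grad = [0.013 * i - 0.4 for i in range(64)]  # stand-in for a local gradient
for bits in (2, 4, 8):
    print(bits, mse(grad, quantize(grad, bits)))  # distortion drops as rate grows
```

A rate-distortion-optimized scheme would choose `bits` per client and per round based on gradient statistics instead of fixing it, spending rate where gradients vary most.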
Machine Learning-based USD/PKR Exchange Rate Forecasting Using Sentiment Analysis of Twitter Data (Cited: 1)
20
Authors: Samreen Naeem, Wali Khan Mashwani, Aqib Ali, M. Irfan Uddin, Marwan Mahmoud, Farrukh Jamal, Christophe Chesneau. 《Computers, Materials & Continua》 (SCIE, EI), 2021, No. 6, pp. 3451-3461
This study proposes a machine learning approach to forecasting currency exchange rates by applying sentiment analysis to messages on Twitter (tweets). A dataset of exchange rates between the United States Dollar (USD) and the Pakistani Rupee (PKR) was formed by collecting information from a forex website, together with tweets from the business community in Pakistan containing finance-related words. The raw dataset was preprocessed with natural language processing techniques. Response-variable labeling was then applied to the standardized dataset, with two classes: "1" indicating an increase in the exchange rate and "−1" indicating a decrease. To better represent the dataset, linear discriminant analysis and principal component analysis were used to visualize the data in three-dimensional vector space. Clusters obtained with a sampling approach were then used for data optimization. Five machine learning classifiers, namely the simple logistic classifier, random forest, bagging, naïve Bayes, and the support vector machine, were applied to the optimized dataset. The simple logistic classifier yielded the highest accuracy, 82.14%, for USD/PKR exchange rate forecasting.
Keywords: machine learning; exchange rate; sentiment analysis; linear discriminant analysis; principal component analysis; simple logistic
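The labeling scheme and a simple logistic classifier can be sketched as follows. The data below is synthetic; in the paper's setting the features would come from tweet sentiment, and the labels from day-over-day exchange-rate moves.

```python
import math

def label_moves(rates):
    """Label each day +1 if the exchange rate rose from the previous day,
    -1 otherwise, mirroring the response-variable scheme."""
    return [1 if b > a else -1 for a, b in zip(rates, rates[1:])]

def train_logistic(X, y, lr=0.5, epochs=500):
    """Plain logistic regression on {-1, +1} labels via SGD on the
    logistic loss log(1 + exp(-y * w.x))."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = yi * sum(wj * xj for wj, xj in zip(w, xi))
            g = -yi / (1.0 + math.exp(z))  # d/d(w.x) of the logistic loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w

rates = [154.2, 154.9, 154.5, 155.1, 155.8, 155.3]
print(label_moves(rates))  # → [1, -1, 1, 1, -1]

# Toy (bias, sentiment-score) features -> direction of the next move
X = [(1.0, 0.8), (1.0, -0.5), (1.0, 0.6), (1.0, -0.9)]
y = [1, -1, 1, -1]
w = train_logistic(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) > 0 else -1 for xi in X]
print(preds)  # recovers the labels on this separable toy set
```

In practice one would use a library implementation and held-out evaluation rather than training accuracy; the sketch only shows how the ±1 labels and a "simple logistic" model fit together.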