Journal Articles
4 articles found
Towards robust neural networks via a global and monotonically decreasing robustness training strategy (Cited by: 1)
1
Authors: Zhen LIANG, Taoran WU, Wanwei LIU, Bai XUE, Wenjing YANG, Ji WANG, Zhengbin PANG. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2023, Issue 10, pp. 1375-1389 (15 pages)
Robustness of deep neural networks (DNNs) has raised great concern in the academic and industrial communities, especially in safety-critical domains. Instead of verifying whether the robustness property holds in a given neural network, this paper focuses on training neural networks that are robust with respect to given perturbations. State-of-the-art training methods, interval bound propagation (IBP) and CROWN-IBP, perform well under small perturbations, but their performance declines significantly under large perturbations, a phenomenon termed "drawdown risk" in this paper. Specifically, drawdown risk refers to the failure of IBP-family training methods to deliver the robust neural networks expected in larger perturbation cases, as they do in smaller perturbation cases. To alleviate this unexpected drawdown risk, we propose a global and monotonically decreasing robustness training strategy that takes multiple perturbations into account during each training epoch (global robustness training) and combines the corresponding robustness losses with monotonically decreasing weights (monotonically decreasing robustness training). Experiments demonstrate that the presented strategy maintains performance on small perturbations while greatly alleviating the drawdown risk on large perturbations. Notably, our training method also achieves higher model accuracy than the original training methods, indicating that the presented strategy gives more balanced consideration to robustness and accuracy.
Keywords: robust neural networks; training method; drawdown risk; global robustness training; monotonically decreasing robustness
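The weighted-loss combination described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the geometric decay schedule, the normalization, and the function name are all assumptions; the only property taken from the abstract is that losses for larger perturbations receive monotonically smaller weights.

```python
def combined_robust_loss(robust_losses, decay=0.5):
    """Combine per-perturbation robustness losses into one training loss.

    `robust_losses` is ordered from smallest to largest perturbation,
    so later entries (larger perturbations) get geometrically smaller,
    i.e. monotonically decreasing, weights. The geometric scheme is an
    illustrative assumption, not the paper's exact weighting rule.
    """
    weights = [decay ** i for i in range(len(robust_losses))]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize to sum to 1
    return sum(w * l for w, l in zip(weights, robust_losses))
```

In a training loop, each entry of `robust_losses` would be the IBP-style robustness loss computed for one perturbation radius in the same epoch, so every radius contributes to every gradient step.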
A robust optimization method for label noisy datasets based on adaptive threshold: Adaptive-k
2
Authors: Enes DEDEOGLU, Himmet Toprak KESGIN, Mehmet Fatih AMASYALI. Frontiers of Computer Science (SCIE, EI, CSCD), 2024, Issue 4, pp. 49-60 (12 pages)
Using all samples in the optimization process does not produce robust results on datasets with label noise, because the gradients computed from the losses of the noisy samples push the optimization in the wrong direction. In this paper, we recommend using only the samples whose loss is below a threshold determined during optimization, instead of all samples in the mini-batch. Our proposed method, Adaptive-k, aims to exclude label-noise samples from the optimization process and thereby make it robust. On noisy datasets, we found that a threshold-based approach such as Adaptive-k produces better results than using all samples or a fixed number of low-loss samples in the mini-batch. On the basis of our theoretical analysis and experimental results, we show that Adaptive-k comes closest to the performance of the Oracle, in which noisy samples are entirely removed from the dataset. Adaptive-k is a simple but effective method: it requires no prior knowledge of the dataset's noise ratio, requires no additional model training, and does not significantly increase training time. Our experiments also show that Adaptive-k is compatible with different optimizers such as SGD, SGDM, and Adam. The code for Adaptive-k is available on GitHub.
Keywords: robust optimization; label noise; noisy labels; deep learning; noisy datasets; noise ratio estimation; robust training
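The mini-batch filtering idea in the abstract can be sketched as below. This is a hedged sketch, not the authors' released code: the function names are invented here, and the running-mean threshold update is only an illustrative stand-in for whatever adaptive rule the paper actually uses; the core step taken from the abstract is that only samples with loss below the current threshold contribute to the gradient.

```python
def adaptive_k_select(losses, threshold):
    """Indices of mini-batch samples kept for the gradient step.

    Samples whose loss exceeds the threshold (likely noisy labels)
    are excluded from the optimization step.
    """
    return [i for i, loss in enumerate(losses) if loss < threshold]


def update_threshold(threshold, batch_losses, momentum=0.9):
    """Illustrative adaptive update: exponential moving average of the
    batch mean loss. The paper's exact rule may differ."""
    batch_mean = sum(batch_losses) / len(batch_losses)
    return momentum * threshold + (1 - momentum) * batch_mean
```

In use, each iteration would compute per-sample losses, call `adaptive_k_select` to pick the low-loss subset, backpropagate only through that subset, and then refresh the threshold with `update_threshold`.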
Robust energy-efficient train speed profile optimization in a scenario-based position-time-speed network (Cited by: 5)
3
Authors: Yu CHENG, Jiateng YIN, Lixing YANG. Frontiers of Engineering Management, 2021, Issue 4, pp. 595-614 (20 pages)
Train speed profile optimization is an efficient approach to reducing energy consumption in urban rail transit systems. Unlike most existing studies, which assume deterministic parameters as model inputs, this paper proposes a robust energy-efficient train speed profile optimization approach that accounts for the uncertainty of train modeling parameters. Specifically, we first construct a scenario-based position-time-speed (PTS) network by treating resistance parameters as discrete scenario-based random variables. Then, a percentile reliability model is proposed to generate a robust train speed profile, under which the scenario-based energy consumption stays below the model objective value at a given confidence level. To solve the model efficiently, we present several algorithms that eliminate infeasible nodes and arcs in the PTS network, and we propose a reformulation strategy that transforms the original model into an equivalent linear programming model. Lastly, on the basis of field test data collected on the Beijing Metro Yizhuang Line, a series of experiments is conducted to verify the effectiveness of the model and to analyze how parameter uncertainties influence the generated train speed profile.
Keywords: robust train speed profile; percentile reliability model; scenario-based position-time-speed network; mixed-integer programming
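The percentile-reliability objective described in the abstract, evaluated for one candidate speed profile, can be sketched as follows. This is an assumption-laden illustration, not the paper's model: given the energy consumption of the profile under each discrete resistance scenario and the scenario probabilities, it returns the smallest objective value z such that energy stays below z with probability at least the confidence level alpha.

```python
def percentile_objective(scenario_energies, probs, alpha=0.9):
    """Smallest z with P(energy <= z) >= alpha over discrete scenarios.

    `scenario_energies[i]` is the profile's energy consumption under
    scenario i, and `probs[i]` its probability (probs sum to 1).
    """
    cum = 0.0
    # Accumulate probability mass in order of increasing energy.
    for energy, p in sorted(zip(scenario_energies, probs)):
        cum += p
        if cum >= alpha - 1e-12:  # tolerance for float round-off
            return energy
    return max(scenario_energies)  # fallback: worst-case scenario
```

A robust optimizer in the spirit of the paper would search over feasible speed profiles in the PTS network for the one minimizing this percentile value, rather than the expected energy.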
Bridging Data Gaps in Healthcare:A Scoping Review of Transfer Learning in Structured Data Analysis
4
Authors: Siqi Li, Xin Li, Kunyu Yu, Qiming Wu, Di Miao, Mingcheng Zhu, Mengying Yan, Yuhe Ke, Danny D'Agostino, Yilin Ning, Ziwen Wang, Yuqing Shang, Molei Liu, Chuan Hong, Nan Liu. Health Data Science, 2025, Issue 1, pp. 39-53 (15 pages)
Background: Clinical and biomedical research in low-resource settings often faces substantial challenges because constructing effective models requires high-quality data with sufficient sample sizes. These constraints hinder robust model training and prompt researchers to seek methods for leveraging existing knowledge from related studies to support new research efforts. Transfer learning (TL), a machine learning technique, emerges as a powerful solution by utilizing knowledge from pretrained models to enhance the performance of new models, offering promise across various healthcare domains. Despite its conceptual origins in the 1990s, the application of TL in medical research has remained limited, especially beyond image analysis. This review aims to analyze TL applications, highlight overlooked techniques, and suggest improvements for future healthcare research. Methods: Following the PRISMA-ScR guidelines, we searched the SCOPUS, MEDLINE, Web of Science, Embase, and CINAHL databases for published articles that employed TL with structured clinical or biomedical data. Results: We screened 5,080 papers, of which 86 met the inclusion criteria. Among these, only 2% (2 of 86) utilized external studies, and 5% (4 of 86) addressed scenarios involving multi-site collaborations with privacy constraints. Conclusions: To achieve actionable TL with structured medical data while addressing regional disparities, inequality, and privacy constraints in healthcare research, we advocate the careful identification of appropriate source data and models, the selection of suitable TL frameworks, and the validation of TL models against proper baselines.
Keywords: biomedical research; structured data analysis; transfer learning; healthcare research; machine learning; robust model training