The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches have led to the use of artificial intelligence (AI) techniques, such as machine learning (ML) and deep learning (DL), to build more efficient and reliable intrusion detection systems (IDSs). However, the advent of larger IDS datasets has negatively impacted the performance and computational complexity of AI-based IDSs. Many researchers have used data preprocessing techniques such as feature selection and normalization to overcome these issues. While most of these researchers reported the success of such preprocessing at a shallow level, very few studies have examined its effects on a wider scale. Furthermore, the performance of an IDS model depends not only on the preprocessing techniques used but also on the dataset and the ML/DL algorithm, a dependence that most existing studies give little emphasis. Thus, this study provides an in-depth analysis of the effects of feature selection and normalization on IDS models built using three IDS datasets (NSL-KDD, UNSW-NB15, and CSE–CIC–IDS2018) and various AI algorithms. A wrapper-based approach, which tends to give superior performance, was used for feature selection, and min-max normalization was used for normalization. Numerous IDS models were implemented using the full and feature-selected copies of the datasets, with and without normalization. The models were evaluated using popular IDS evaluation metrics, and intra- and inter-model comparisons were performed between models and against state-of-the-art works. Random forest (RF) models performed best on the NSL-KDD and UNSW-NB15 datasets, with accuracies of 99.86% and 96.01%, respectively, whereas an artificial neural network (ANN) achieved the best accuracy, 95.43%, on the CSE–CIC–IDS2018 dataset. The RF models also achieved excellent performance compared with recent works. The results show that normalization and feature selection positively affect IDS modeling. Furthermore, while feature selection benefits simpler algorithms (such as RF), normalization is more useful for complex algorithms like ANNs and deep neural networks (DNNs), and algorithms such as Naive Bayes are unsuitable for IDS modeling. The study also found that the UNSW-NB15 and CSE–CIC–IDS2018 datasets are more complex and more suitable for building and evaluating modern-day IDSs than the NSL-KDD dataset. Our findings suggest that prioritizing robust algorithms like RF, alongside complex models such as ANN and DNN, can significantly enhance IDS performance. These insights provide valuable guidance for managers to develop more effective security measures by focusing on high detection rates and low false-alert rates.
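The min-max normalization used in this study is the standard rescaling of each feature to a fixed range. A minimal sketch (function and example values are illustrative, not taken from the paper):

```python
import numpy as np

def min_max_normalize(X, feature_range=(0.0, 1.0)):
    """Scale each feature (column) of X to the given range.

    Constant columns are mapped to the range minimum to avoid
    division by zero.
    """
    X = np.asarray(X, dtype=float)
    lo, hi = feature_range
    col_min = X.min(axis=0)
    col_max = X.max(axis=0)
    span = np.where(col_max > col_min, col_max - col_min, 1.0)
    scaled = (X - col_min) / span
    return lo + scaled * (hi - lo)

# Two features on very different scales end up on the same [0, 1] scale:
# rows map to 0, 0.5, and 1 in each column.
X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])
X_norm = min_max_normalize(X)
```

Rescaling like this matters most for distance- and gradient-based learners (e.g. ANN/DNN), which is consistent with the study's finding that normalization helps complex algorithms more than tree ensembles.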
On-device artificial intelligence (AI) accelerators capable of not only inference but also training of neural network models are in increasing demand in industrial AI, where frequent retraining is crucial because of frequent production changes. Batch normalization (BN) is fundamental to training convolutional neural networks (CNNs), but its implementation in compact accelerator chips remains challenging because of its computational complexity, particularly in calculating statistical parameters and gradients across mini-batches. Existing accelerator architectures either compromise CNN training accuracy through approximations or require substantial computational resources, limiting their practical deployment. We present a hardware-optimized BN accelerator that maintains training accuracy while significantly reducing computational overhead through three novel techniques: (1) resource sharing for efficient resource utilization across the forward and backward passes, (2) interleaved buffering for reduced dynamic random-access memory (DRAM) access latencies, and (3) zero-skipping for minimal gradient computation. Implemented on a VCU118 Field Programmable Gate Array (FPGA) at 100 MHz and validated using You Only Look Once version 2-tiny (YOLOv2-tiny) on the PASCAL Visual Object Classes (VOC) dataset, our normalization accelerator achieves a 72% reduction in processing time and 83% lower power consumption compared with a software normalization implementation on a 2.4 GHz Intel central processing unit (CPU), while maintaining accuracy (0.51% mean average precision (mAP) drop at 32-bit floating point (FP32), 1.35% at 16-bit brain floating point (bfloat16)). When integrated into a neural processing unit (NPU), the design demonstrates 63% and 97% performance improvements over AMD CPU and Reduced Instruction Set Computing V (RISC-V) implementations, respectively. These results confirm that efficient hardware implementation of standard batch normalization is achievable without sacrificing accuracy, enabling practical, high-accuracy, and power-saving on-device CNN training with significantly reduced computational and power requirements.
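For reference, the statistics and gradients the accelerator must compute are those of standard batch normalization. A minimal NumPy sketch of the forward and backward passes over an (N, C) mini-batch (the hardware techniques themselves, i.e. resource sharing, interleaved buffering, and zero-skipping, are not modeled here):

```python
import numpy as np

def bn_forward(x, gamma, beta, eps=1e-5):
    """Batch-norm forward pass. x: (N, C); statistics are per channel."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta
    cache = (x_hat, gamma, var, eps)
    return out, cache

def bn_backward(dout, cache):
    """Standard batch-norm backward pass from the upstream gradient dout."""
    x_hat, gamma, var, eps = cache
    N = dout.shape[0]
    dgamma = np.sum(dout * x_hat, axis=0)
    dbeta = np.sum(dout, axis=0)
    dx_hat = dout * gamma
    inv_std = 1.0 / np.sqrt(var + eps)
    # Zero-skipping in hardware exploits the fact that entries of dout
    # equal to zero contribute nothing to these mini-batch reductions.
    dx = (inv_std / N) * (N * dx_hat
                          - dx_hat.sum(axis=0)
                          - x_hat * np.sum(dx_hat * x_hat, axis=0))
    return dx, dgamma, dbeta
```

The mini-batch reductions (means, variances, and the two gradient sums) are exactly the quantities that make BN expensive on compact chips, which is why the accelerator shares resources between the two passes.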
In recent decades, brain tumors have emerged as a serious neurological disorder that often leads to death. Hence, brain tumor segmentation (BTS) is significant for enabling the visualization, classification, and delineation of tumor regions in magnetic resonance imaging (MRI). However, BTS remains challenging because of noise, non-uniform object texture, diverse image content, and clustered objects. To address these challenges, a novel model is implemented in this research. The key objective is to improve segmentation accuracy and generalization in BTS by incorporating Switchable Normalization into Faster R-CNN, which effectively captures fine-grained tumor features to enhance segmentation precision. MRI images are first acquired from three online datasets: Dataset 1, Brain Tumor Segmentation (BraTS) 2018; Dataset 2, BraTS 2019; and Dataset 3, BraTS 2020. The Switchable Normalization-based Faster Regions with Convolutional Neural Networks (SNFRC) model is then proposed for improved BTS in MRI images. In the proposed model, Switchable Normalization is integrated into the conventional architecture, enhancing generalization and reducing overfitting to unseen image data, which is essential given the typically limited size of available datasets. The network depth is increased to obtain discriminative semantic features that improve segmentation performance. Specifically, Switchable Normalization captures diverse feature representations from the brain images, while Faster R-CNN enables end-to-end training and effective region-proposal generation, with training stability enhanced by Switchable Normalization, to perform effective segmentation of MRI images. In the experiments, the proposed model attains segmentation accuracies of 99.41%, 98.12%, and 96.71% on Datasets 1, 2, and 3, respectively, outperforming conventional deep learning models used for BTS.
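Switchable Normalization normalizes activations with a learned, softmax-weighted blend of instance-, layer-, and batch-wise statistics, so the network can choose the normalizer per layer. A simplified NumPy sketch of the idea (shapes and names are illustrative; the paper's Faster R-CNN integration is not shown):

```python
import numpy as np

def softmax(w):
    e = np.exp(w - w.max())
    return e / e.sum()

def switchable_norm(x, w_mean, w_var, gamma=1.0, beta=0.0, eps=1e-5):
    """x: (N, C, H, W). Blend IN/LN/BN statistics with learned weights."""
    # Instance-norm statistics: per sample, per channel
    mu_in = x.mean(axis=(2, 3), keepdims=True)
    var_in = x.var(axis=(2, 3), keepdims=True)
    # Layer-norm statistics: per sample, across channels and space
    mu_ln = x.mean(axis=(1, 2, 3), keepdims=True)
    var_ln = x.var(axis=(1, 2, 3), keepdims=True)
    # Batch-norm statistics: per channel, across the mini-batch
    mu_bn = x.mean(axis=(0, 2, 3), keepdims=True)
    var_bn = x.var(axis=(0, 2, 3), keepdims=True)
    wm, wv = softmax(w_mean), softmax(w_var)   # weights over (IN, LN, BN)
    mu = wm[0] * mu_in + wm[1] * mu_ln + wm[2] * mu_bn
    var = wv[0] * var_in + wv[1] * var_ln + wv[2] * var_bn
    return gamma * (x - mu) / np.sqrt(var + eps) + beta
```

Because the blend weights are learned, the layer can fall back to whichever statistics generalize best, which is the property the SNFRC model relies on for small, heterogeneous MRI datasets.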
Renormalization group (RG) analysis has been proposed to eliminate secular terms in perturbation solutions of differential equations and thus expand their domain of validity. Here we extend the method to treat periodic orbits, or limit cycles. Interesting normal forms can be derived through a generalization of the concept of 'resonance', which yields nontrivial analytic approximations. Compared with traditional techniques such as multiple-scale methods, the current scheme proceeds in a very straightforward and simple way, delivering not only the period and the amplitude but also the transient path to the limit cycle. The method is demonstrated on several examples, including the Duffing oscillator, the van der Pol equation, and the Lorenz equations. The obtained solutions match well with numerical results and with those derived by traditional analytic methods.
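As a concrete illustration of how RG analysis yields both the limit-cycle amplitude and the transient, consider the van der Pol oscillator. This is the standard textbook computation, sketched here for orientation; it is not taken verbatim from the paper:

```latex
% Van der Pol oscillator, 0 < \epsilon \ll 1:
%   \ddot{x} + x = \epsilon\,(1 - x^{2})\,\dot{x}.
% A naive expansion x = x_0 + \epsilon x_1 + \dots with
% x_0 = R\cos(t + \phi) produces a secular term \propto t in x_1.
% Absorbing it into R and \phi at an arbitrary renormalization time
% and requiring the solution to be independent of that time gives
% the RG flow equations
\frac{dR}{dt} = \frac{\epsilon}{2}\,R\left(1 - \frac{R^{2}}{4}\right),
\qquad
\frac{d\phi}{dt} = \mathcal{O}(\epsilon^{2}),
% whose closed-form solution
R(t) = \frac{2}{\sqrt{1 + \left(4/R_{0}^{2} - 1\right)e^{-\epsilon t}}}
% flows to the limit-cycle amplitude R = 2, describes the transient
% approach to it from any R_0 > 0, and gives the period
% T = 2\pi + \mathcal{O}(\epsilon^{2}).
```

The flow equation captures exactly what the abstract emphasizes: one calculation delivers the amplitude, the period correction, and the transient path, without the bookkeeping of multiple time scales.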
In speaker verification, the inconsistent score distributions output by different target speaker models make it difficult to set a system-wide verification threshold. This paper determines the verification threshold that minimizes the detection cost function (DCF) through score normalization. After analyzing two existing score-normalization methods, Z-normalization and T-normalization, a combined method, TZ-normalization, is proposed that retains the advantages of both, and a dynamic threshold-adjustment method is derived from it, effectively improving system performance and the robustness of threshold selection. Experiments on NIST (cellular telephone speech) evaluation corpora from past years demonstrate the effectiveness of the method.
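The score-normalization steps can be sketched as follows. This is a simplified illustration; in particular, the exact way the paper combines the two methods into TZ-normalization is an assumption here, taken as Z-norm followed by T-norm over a Z-normalized cohort:

```python
import numpy as np

def z_norm(score, impostor_scores):
    """Z-norm: shift/scale a trial score by impostor-score statistics
    estimated offline for the target speaker model."""
    m, s = np.mean(impostor_scores), np.std(impostor_scores)
    return (score - m) / s

def t_norm(score, cohort_scores):
    """T-norm: shift/scale by statistics of the same test utterance
    scored against a cohort of non-target models at test time."""
    m, s = np.mean(cohort_scores), np.std(cohort_scores)
    return (score - m) / s

def tz_norm(score, impostor_scores, cohort_raw_scores):
    """TZ-norm (illustrative combination): Z-normalize the trial score
    and the cohort scores, then apply T-norm in the Z-normalized space."""
    z = z_norm(score, impostor_scores)
    z_cohort = [z_norm(s, impostor_scores) for s in cohort_raw_scores]
    return t_norm(z, z_cohort)
```

After either mapping, scores from different target models live on a comparable scale, which is what makes a single DCF-minimizing threshold feasible.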
Time-series forecasting has important applications in many real-world scenarios, including energy management, traffic flow, and meteorological analysis. However, distribution shift and long-term dependency in time-series data still limit the performance of traditional methods and existing deep learning models in long-horizon forecasting. To this end, we propose a novel model named D-LINet (Dual-Normalization and Linear Integration Network). The model combines the distribution-normalization capability of the Dish-TS (Distribution Shift in Time Series Forecasting) framework with the efficiency of linear mappings, and adopts a dual-normalization, dual-linear-layer design that effectively mitigates distribution shift between the input and output spaces and strengthens the capture of periodic and trend features. The forecasting performance of D-LINet is comprehensively evaluated on several real-world datasets. The results show that, for both short- and long-horizon forecasting, the mean squared error and mean absolute error of D-LINet are significantly better than those of mainstream models such as Transformer, Informer, Autoformer, and DLinear. In addition, the experiments examine the impact of the input-window length and of introducing prior knowledge on forecasting performance, providing important guidance for subsequent model optimization. This study offers a new approach to the problem of complex distribution shift and helps improve the accuracy and robustness of time-series forecasting.
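The dual-normalization idea, i.e. normalizing the lookback window before a linear map and de-normalizing the forecast in the output space, can be sketched as follows. This is a deliberately simplified, Dish-TS-flavored illustration: here the output-space statistics are approximated by the input-window statistics, whereas Dish-TS learns separate input and output distribution estimates, and all names are illustrative:

```python
import numpy as np

class DualNormLinear:
    """Minimal dual-normalization linear forecaster (illustrative)."""

    def __init__(self, lookback, horizon, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.01, size=(horizon, lookback))
        self.b = np.zeros(horizon)

    def forward(self, x):
        # Normalize the lookback window by its own statistics
        mu, sigma = x.mean(), x.std() + 1e-8
        z = (x - mu) / sigma
        # Linear mapping from lookback window to forecast horizon
        y = self.W @ z + self.b
        # De-normalize in the output space (approximated by input stats)
        return y * sigma + mu

model = DualNormLinear(lookback=24, horizon=8)
x = np.sin(np.linspace(0.0, 4.0 * np.pi, 24)) + 5.0
forecast = model.forward(x)
```

A useful consequence of the design is shift-robustness: adding a constant level to the input shifts the forecast by the same constant, so a level change between training and test data no longer has to be absorbed by the linear weights.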
Funding (on-device BN accelerator study): supported by the National Research Foundation of Korea (NRF) grant for RLRC funded by the Korea government (MSIT) (No. 2022R1A5A8026986, RLRC); by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01304, Development of Self-Learnable Mobile Recursive Neural Network Processor Technology); by the MSIT (Ministry of Science and ICT), Republic of Korea, under the Grand Information Technology Research Center support program (IITP-2024-2020-0-01462, Grand-ICT), supervised by the IITP; by the Korea Technology and Information Promotion Agency for SMEs (TIPA); and by the Korean government (Ministry of SMEs and Startups) Smart Manufacturing Innovation R&D program (RS-2024-00434259).
Funding (brain tumor segmentation study): supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2022R1A2C2012243).