Funding: This research was funded by the National Natural Science Fund of China [grant number 41701415], the Science Fund Project of Wuhan Institute of Technology [grant number K201724], and the Science and Technology Development Funds Project of the Department of Transportation of Hubei Province [grant number 201900001].
Abstract: Radiometric normalization, as an essential step in multi-source and multi-temporal data processing, has received critical attention. Relative Radiometric Normalization (RRN) methods have primarily been used to eliminate radiometric inconsistency. The radiometric transforming relation between the subject image and the reference image is an essential aspect of RRN. Aiming at accurate modeling of this relation, a learning-based nonlinear regression method, Support Vector Regression (SVR), is used to fit the complicated radiometric transforming relation for coarse-resolution-data-referenced RRN. To evaluate the effectiveness of the proposed method, a series of experiments is performed, including two synthetic-data experiments and one real-data experiment, and the proposed method is compared with methods that use linear regression, an Artificial Neural Network (ANN), or a Random Forest (RF) for radiometric transforming relation modeling. The results show that the proposed method fits the radiometric transforming relation well and can enhance RRN performance.
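The core idea of the abstract above, fitting a nonlinear band-to-band mapping with SVR, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic reflectance model, kernel choice, and hyperparameters (`C`, `epsilon`) are assumptions for demonstration only.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic "subject" band reflectances and a nonlinear "reference" response
# (a power-law distortion plus noise stands in for real sensor differences).
subject = rng.uniform(0.0, 1.0, size=(500, 1))
reference = 0.8 * subject[:, 0] ** 1.3 + 0.05 + rng.normal(0.0, 0.01, 500)

# An RBF-kernel SVR fits the nonlinear radiometric transforming relation.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(subject, reference)

# Applying the fitted relation maps the subject image onto the reference scale.
normalized = model.predict(subject)
rmse = float(np.sqrt(np.mean((normalized - reference) ** 2)))
```

A linear regression fitted to the same data would leave systematic residuals at the dark and bright ends of the range, which is the gap the nonlinear model closes.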
Abstract: The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches have led to the use of artificial intelligence (AI) techniques, such as machine learning (ML) and deep learning (DL), to build more efficient and reliable intrusion detection systems (IDSs). However, the advent of larger IDS datasets has negatively impacted the performance and computational complexity of AI-based IDSs. Many researchers have used data preprocessing techniques such as feature selection and normalization to overcome these issues. While most reported the success of these preprocessing techniques at a shallow level, very few studies have examined their effects on a wider scale. Furthermore, the performance of an IDS model depends not only on the preprocessing techniques but also on the dataset and the ML/DL algorithm used, which most existing studies give little emphasis. Thus, this study provides an in-depth analysis of the effects of feature selection and normalization on IDS models built from three IDS datasets (NSL-KDD, UNSW-NB15, and CSE-CIC-IDS2018) and various AI algorithms. A wrapper-based approach, which tends to give superior performance, and min-max normalization were used for feature selection and normalization, respectively. Numerous IDS models were implemented using the full and feature-selected copies of the datasets, with and without normalization. The models were evaluated using popular IDS evaluation metrics, and intra- and inter-model comparisons were performed between models and against state-of-the-art works. Random forest (RF) models performed best on the NSL-KDD and UNSW-NB15 datasets, with accuracies of 99.86% and 96.01%, respectively, whereas an artificial neural network (ANN) achieved the best accuracy, 95.43%, on the CSE-CIC-IDS2018 dataset. The RF models also achieved excellent performance compared with recent works. The results show that normalization and feature selection positively affect IDS modeling. Furthermore, while feature selection benefits simpler algorithms (such as RF), normalization is more useful for complex algorithms such as ANNs and deep neural networks (DNNs), and algorithms such as Naive Bayes are unsuitable for IDS modeling. The study also found that the UNSW-NB15 and CSE-CIC-IDS2018 datasets are more complex and more suitable for building and evaluating modern-day IDSs than the NSL-KDD dataset. Our findings suggest that prioritizing robust algorithms such as RF, alongside complex models such as ANN and DNN, can significantly enhance IDS performance. These insights provide valuable guidance for developing more effective security measures focused on high detection rates and low false-alert rates.
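The min-max normalization used in the study above can be sketched in a few lines. This is a generic sketch, not the paper's code; the feature dimensions and the train/test split are invented for illustration. The one detail worth showing is that the scaling statistics must come from the training split only, so no test-set information leaks into preprocessing.

```python
import numpy as np

def min_max_normalize(train, test):
    """Scale each feature to [0, 1] using statistics from the training
    split only, then apply the same transform to the test split."""
    lo = train.min(axis=0)
    span = train.max(axis=0) - lo
    span[span == 0] = 1.0  # guard constant features against division by zero
    return (train - lo) / span, (test - lo) / span

rng = np.random.default_rng(1)
X_train = rng.normal(50.0, 10.0, size=(200, 5))  # e.g. raw flow features
X_test = rng.normal(50.0, 10.0, size=(50, 5))
X_train_n, X_test_n = min_max_normalize(X_train, X_test)
```

Test-split values may fall slightly outside [0, 1], which is expected: they are scaled by the training range, not clipped.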
Funding: Supported by the National Research Foundation of Korea (NRF) grant for RLRC funded by the Korea government (MSIT) (No. 2022R1A5A8026986, RLRC); by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01304, Development of Self-Learnable Mobile Recursive Neural Network Processor Technology); by the MSIT (Ministry of Science and ICT), Republic of Korea, under the Grand Information Technology Research Center support program (IITP-2024-2020-0-01462, Grand-ICT), supervised by the IITP; by the Korea Technology and Information Promotion Agency for SMEs (TIPA); and by the Korean government (Ministry of SMEs and Startups) Smart Manufacturing Innovation R&D program (RS-2024-00434259).
Abstract: On-device Artificial Intelligence (AI) accelerators capable of not only inference but also training neural network models are in increasing demand in industrial AI, where frequent retraining is crucial due to frequent production changes. Batch normalization (BN) is fundamental to training convolutional neural networks (CNNs), but its implementation in compact accelerator chips remains challenging due to computational complexity, particularly in calculating statistical parameters and gradients across mini-batches. Existing accelerator architectures either compromise the training accuracy of CNNs through approximations or require substantial computational resources, limiting their practical deployment. We present a hardware-optimized BN accelerator that maintains training accuracy while significantly reducing computational overhead through three novel techniques: (1) resource sharing for efficient resource utilization across the forward and backward passes, (2) interleaved buffering for reduced dynamic random-access memory (DRAM) access latencies, and (3) zero-skipping for minimal gradient computation. Implemented on a VCU118 Field Programmable Gate Array (FPGA) at 100 MHz and validated using You Only Look Once version 2-tiny (YOLOv2-tiny) on the PASCAL Visual Object Classes (VOC) dataset, our normalization accelerator achieves a 72% reduction in processing time and 83% lower power consumption compared with a software normalization implementation on a 2.4 GHz Intel Central Processing Unit (CPU), while maintaining accuracy (0.51% mean Average Precision (mAP) drop at 32-bit floating point (FP32), 1.35% at 16-bit brain floating point (bfloat16)). When integrated into a neural processing unit (NPU), the design demonstrates 63% and 97% performance improvements over AMD CPU and Reduced Instruction Set Computing-V (RISC-V) implementations, respectively. These results confirm that efficient hardware implementation of standard batch normalization is achievable without sacrificing accuracy, enabling practical, high-accuracy, power-saving on-device CNN training with significantly reduced computational and power requirements.
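The mini-batch statistics that make BN costly in hardware are easy to state in software. The sketch below shows the standard BN forward pass for an (N, C, H, W) mini-batch; the per-channel mean and variance reductions over the batch and spatial dimensions are exactly the quantities an accelerator must compute. Shapes and parameter values are illustrative only.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Standard batch normalization for an (N, C, H, W) mini-batch:
    reduce mean and variance per channel over batch and spatial dims,
    normalize, then apply the learned scale (gamma) and shift (beta)."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)

rng = np.random.default_rng(0)
x = rng.normal(3.0, 2.0, size=(8, 4, 16, 16))
y = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
# With gamma = 1 and beta = 0, each output channel has
# approximately zero mean and unit variance.
```

The backward pass needs the same reductions again for the gradients, which is why the abstract's resource-sharing between forward and backward passes pays off in hardware.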
Abstract: Renormalization group analysis has been proposed to eliminate secular terms in perturbation solutions of differential equations and thus expand their domain of validity. Here we extend the method to treat periodic orbits and limit cycles. Interesting normal forms can be derived through a generalization of the concept of 'resonance', which yields nontrivial analytic approximations. Compared with traditional techniques such as multi-scale methods, the current scheme proceeds in a very straightforward and simple way, delivering not only the period and the amplitude but also the transient path to the limit cycle. The method is demonstrated with several examples, including the Duffing oscillator, the van der Pol equation, and the Lorenz equations. The solutions obtained match well with numerical results and with those derived by traditional analytic methods.
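One prediction such perturbative/RG analyses make for the van der Pol equation x'' - mu(1 - x^2)x' + x = 0 at small mu is that the limit-cycle amplitude approaches 2 regardless of the initial condition. The sketch below is not the paper's method; it is only a plain RK4 integration (assumed step size and duration) that checks this amplitude numerically.

```python
def van_der_pol_amplitude(mu=0.1, x0=0.5, v0=0.0, dt=0.01, steps=120_000):
    """Integrate x'' - mu*(1 - x^2)*x' + x = 0 with classical RK4 and
    return the peak |x| on the second half of the run, after the
    transient has decayed onto the limit cycle."""
    def f(x, v):
        return v, mu * (1.0 - x * x) * v - x

    x, v = x0, v0
    amp = 0.0
    for i in range(steps):
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = f(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        if i > steps // 2:  # record amplitude only on the settled orbit
            amp = max(amp, abs(x))
    return amp

amplitude = van_der_pol_amplitude()
```

Starting from a small initial displacement (here 0.5), the orbit spirals out to the limit cycle, so the recorded amplitude should be close to the predicted value of 2.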
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2022R1A2C2012243).
Abstract: In recent decades, brain tumors have emerged as a serious neurological disorder that often leads to death. Hence, Brain Tumor Segmentation (BTS) is significant for enabling the visualization, classification, and delineation of tumor regions in Magnetic Resonance Imaging (MRI). However, BTS remains challenging because of noise, non-uniform object texture, diverse image content, and clustered objects. To address these challenges, a novel model is implemented in this research. The key objective is to improve segmentation accuracy and generalization in BTS by incorporating Switchable Normalization into Faster R-CNN, which effectively captures fine-grained tumor features to enhance segmentation precision. MRI images are acquired from three online datasets: Dataset 1, Brain Tumor Segmentation (BraTS) 2018; Dataset 2, BraTS 2019; and Dataset 3, BraTS 2020. The Switchable Normalization-based Faster Regions with Convolutional Neural Networks (SNFRC) model is then proposed for improved BTS in MRI images. In the proposed model, Switchable Normalization is integrated into the conventional architecture, enhancing generalization capability and reducing overfitting to unseen image data, which is essential given the typically limited size of available datasets. The network depth is increased to obtain discriminative semantic features that improve segmentation performance. Specifically, Switchable Normalization captures diverse feature representations from the brain images, while the Faster R-CNN model provides end-to-end training and effective region proposal generation, with training stability enhanced by Switchable Normalization, to perform effective segmentation in MRI images. In the experiments, the proposed model attains segmentation accuracies of 99.41%, 98.12%, and 96.71% on Datasets 1, 2, and 3, respectively, outperforming conventional deep learning models used for BTS.
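Switchable Normalization, as used in the abstract above, computes instance-norm (IN), layer-norm (LN), and batch-norm (BN) statistics for the same tensor and blends them with learned softmax weights. The following is a forward-pass sketch of that blending only (no learned affine parameters, no training loop), with invented shapes and equal initial weights; it is not the SNFRC model itself.

```python
import numpy as np

def softmax(w):
    e = np.exp(w - w.max())
    return e / e.sum()

def switchable_norm(x, w_mean, w_var, eps=1e-5):
    """Switchable Normalization forward pass (sketch): blend the means and
    variances of IN, LN, and BN statistics for an (N, C, H, W) tensor
    using softmax-normalized weights, then normalize with the blend."""
    stats = [
        (x.mean(axis=(2, 3), keepdims=True), x.var(axis=(2, 3), keepdims=True)),      # IN
        (x.mean(axis=(1, 2, 3), keepdims=True), x.var(axis=(1, 2, 3), keepdims=True)),# LN
        (x.mean(axis=(0, 2, 3), keepdims=True), x.var(axis=(0, 2, 3), keepdims=True)),# BN
    ]
    pm, pv = softmax(w_mean), softmax(w_var)
    mean = sum(p * m for p, (m, _) in zip(pm, stats))
    var = sum(p * v for p, (_, v) in zip(pv, stats))
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3, 8, 8))
# Zero logits give equal 1/3 weight to IN, LN, and BN statistics.
y = switchable_norm(x, w_mean=np.zeros(3), w_var=np.zeros(3))
```

Because the weights are learned per layer, each layer can lean toward whichever normalizer suits its feature statistics, which is the property the abstract credits for better generalization on small datasets.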
Abstract: In speaker verification, the inconsistent score distributions produced by different target speaker models make it difficult to set a system-wide verification threshold. This paper determines the verification threshold that minimizes the system's Detection Cost Function (DCF) through score normalization. Building on an analysis of two existing score-normalization methods, Z-normalization and T-normalization, a combined method, TZ-normalization, is proposed that unites the advantages of both, and a dynamic threshold-adjustment method is derived from it, effectively improving system performance and the robustness of threshold selection. Experiments on NIST (cellular telephone speech) evaluation corpora from several years demonstrate the effectiveness of the method.
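The score-normalization chain described above can be sketched as follows. This is a generic illustration with synthetic scores, not the paper's system: Z-norm uses impostor-score statistics of the target model estimated offline, T-norm uses cohort-model statistics of the test utterance at test time, and the combined TZ variant shown here (Z first, then T over Z-normalized cohort scores) is one plausible reading of the combination.

```python
import numpy as np

def z_norm(score, z_mean, z_std):
    """Z-normalization: shift/scale by the target model's impostor-score
    statistics, estimated offline from pseudo-impostor utterances."""
    return (score - z_mean) / z_std

def t_norm(score, cohort_scores):
    """T-normalization: shift/scale by the statistics of the same test
    utterance scored against a cohort of background speaker models."""
    return (score - np.mean(cohort_scores)) / np.std(cohort_scores)

def tz_norm(score, z_mean, z_std, cohort_scores_z):
    """TZ-normalization (sketch): Z-normalize the raw score, then
    T-normalize against already-Z-normalized cohort scores, so a single
    global DCF threshold applies across all target models."""
    return t_norm(z_norm(score, z_mean, z_std), cohort_scores_z)

rng = np.random.default_rng(0)
impostor = rng.normal(1.5, 0.4, 1000)            # raw impostor scores, one model
z_mean, z_std = impostor.mean(), impostor.std()
cohort = z_norm(rng.normal(1.5, 0.4, 50), z_mean, z_std)
s = tz_norm(2.9, z_mean, z_std, cohort)          # a genuine-trial raw score
```

After normalization, impostor scores for every target model sit near zero mean and unit variance, which is what makes a single fixed threshold meaningful system-wide.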
Abstract: Objective: To examine changes in the gut microbiota of hypertensive rats and explore the role of the normal flora in the development of salt-induced hypertension. Methods: Male SD rats were fed an 8% high-salt diet to establish a hypertension model. Real-time PCR was used to detect changes in microbiota composition, and plasma levels of the inflammatory factors IL-1β, IL-6, and TNF-α were measured. Results: Compared with the control group (96.00 ± 5.74 mmHg), blood pressure in the salt-sensitive group (122.79 ± 6.37 mmHg) was significantly elevated, while the salt-resistant group showed no obvious change. Body weight in the experimental groups (172.00 ± 15.58 g, 164.25 ± 16.11 g) was significantly lower than in the control group (377.63 ± 32.47 g). Compared with the control group, the microbiota composition of the experimental groups was proportionally inverted: counts of Bifidobacterium (6.19 ± 0.47, 7.52 ± 0.47 vs 8.59 ± 0.42), Lactobacillus (6.77 ± 0.23, 7.09 ± 0.28 vs 7.60 ± 0.26), and Bacteroides (8.98 ± 0.45, 8.46 ± 0.47 vs 9.99 ± 0.73) decreased, while Enterobacteriaceae (7.93 ± 0.20, 7.78 ± 0.29 vs 7.28 ± 0.27) increased. Compared with the salt-resistant group, Bifidobacterium and Lactobacillus decreased more markedly in the salt-sensitive group, while Bacteroides counts were higher. Plasma IL-1β, IL-6, and TNF-α levels in both experimental groups were significantly higher than in the control group (P < 0.05). Conclusion: The gut microbiota composition of salt-induced hypertensive rats is altered, with Bifidobacterium, Lactobacillus, and Bacteroides significantly reduced in the salt-sensitive group, suggesting that the gut microbiota may be involved in the course of salt-induced hypertension.