The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches have led to the use of artificial intelligence (AI) techniques, such as machine learning (ML) and deep learning (DL), to build more efficient and reliable intrusion detection systems (IDSs). However, the advent of larger IDS datasets has negatively impacted the performance and computational complexity of AI-based IDSs. Many researchers have used data preprocessing techniques such as feature selection and normalization to overcome such issues. While most of these researchers reported the success of these preprocessing techniques at a shallow level, very few studies have examined their effects on a wider scale. Furthermore, the performance of an IDS model depends not only on the preprocessing techniques used but also on the dataset and the ML/DL algorithm, a dependency that most existing studies give little emphasis. Thus, this study provides an in-depth analysis of the effects of feature selection and normalization on IDS models built using three IDS datasets (NSL-KDD, UNSW-NB15, and CSE–CIC–IDS2018) and various AI algorithms. A wrapper-based approach, which tends to give superior performance, was used for feature selection, and min-max normalization was used for scaling. Numerous IDS models were implemented using the full and feature-selected copies of the datasets, with and without normalization. The models were evaluated using popular evaluation metrics in IDS modeling, and intra- and inter-model comparisons were performed between the models and with state-of-the-art works. Random forest (RF) models performed best on the NSL-KDD and UNSW-NB15 datasets, with accuracies of 99.86% and 96.01%, respectively, whereas an artificial neural network (ANN) achieved the best accuracy of 95.43% on the CSE–CIC–IDS2018 dataset. The RF models also achieved excellent performance compared to recent works. The results show that normalization and feature selection positively affect IDS modeling. Furthermore, while feature selection benefits simpler algorithms (such as RF), normalization is more useful for complex algorithms like ANNs and deep neural networks (DNNs), and algorithms such as Naive Bayes are unsuitable for IDS modeling. The study also found that the UNSW-NB15 and CSE–CIC–IDS2018 datasets are more complex and more suitable for building and evaluating modern-day IDSs than the NSL-KDD dataset. Our findings suggest that prioritizing robust algorithms like RF, alongside complex models such as ANN and DNN, can significantly enhance IDS performance. These insights provide valuable guidance for managers to develop more effective security measures by focusing on high detection rates and low false-alert rates.
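As a minimal sketch of the kind of preprocessing pipeline described above, the Python snippet below combines min-max normalization with a wrapper-style feature selector (scikit-learn's SequentialFeatureSelector wrapped around a random forest, used here as an assumed stand-in for the study's exact wrapper). The CSV path, label column name, and number of selected features are hypothetical placeholders, not details taken from the paper.

```python
# Sketch only: min-max normalization + wrapper-based feature selection with a
# random forest, loosely following the preprocessing described in the abstract.
# The file name and "label" column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("nsl_kdd_preprocessed.csv")           # hypothetical, numeric features
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# Min-max normalization: x' = (x - min) / (max - min), fitted on training data only.
scaler = MinMaxScaler().fit(X_tr)
X_tr_n, X_te_n = scaler.transform(X_tr), scaler.transform(X_te)

# Wrapper-based selection: greedily keep the subset that maximizes RF cross-validated score.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = SequentialFeatureSelector(rf, n_features_to_select=20, direction="forward", cv=3)
selector.fit(X_tr_n, y_tr)

rf.fit(selector.transform(X_tr_n), y_tr)
print("accuracy:", accuracy_score(y_te, rf.predict(selector.transform(X_te_n))))
```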
On-device Artificial Intelligence (AI) accelerators capable of not only inference but also training neural network models are in increasing demand in the industrial AI field, where frequent retraining is crucial due to frequent production changes. Batch normalization (BN) is fundamental to training convolutional neural networks (CNNs), but its implementation in compact accelerator chips remains challenging due to computational complexity, particularly in calculating statistical parameters and gradients across mini-batches. Existing accelerator architectures either compromise the training accuracy of CNNs through approximations or require substantial computational resources, limiting their practical deployment. We present a hardware-optimized BN accelerator that maintains training accuracy while significantly reducing computational overhead through three novel techniques: (1) resource sharing for efficient resource utilization across forward and backward passes, (2) interleaved buffering for reduced dynamic random-access memory (DRAM) access latencies, and (3) zero-skipping for minimal gradient computation. Implemented on a VCU118 Field Programmable Gate Array (FPGA) at 100 MHz and validated using You Only Look Once version 2-tiny (YOLOv2-tiny) on the PASCAL Visual Object Classes (VOC) dataset, our normalization accelerator achieves a 72% reduction in processing time and 83% lower power consumption compared to a software normalization implementation on a 2.4 GHz Intel Central Processing Unit (CPU), while maintaining accuracy (0.51% mean Average Precision (mAP) drop at 32-bit floating point (FP32), 1.35% at 16-bit brain floating point (bfloat16)). When integrated into a neural processing unit (NPU), the design demonstrates 63% and 97% performance improvements over AMD CPU and Reduced Instruction Set Computing-V (RISC-V) implementations, respectively. These results confirm that efficient hardware implementation of standard batch normalization is achievable without sacrificing accuracy, enabling efficient, high-accuracy, and power-saving on-device CNN training with significantly reduced computational and power requirements.
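To make concrete the per-mini-batch statistics and gradients that such an accelerator must produce, here is a minimal NumPy sketch of the standard BN forward and backward passes over NCHW feature maps. It illustrates the arithmetic only and makes no claim about the paper's hardware dataflow, resource sharing, buffering, or zero-skipping.

```python
# Minimal NumPy sketch of batch normalization training arithmetic (per-channel
# statistics over N, H, W). Illustration only; not the accelerator's design.
import numpy as np

def bn_forward(x, gamma, beta, eps=1e-5):
    # x: (N, C, H, W); gamma, beta: (1, C, 1, 1)
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta, (x_hat, var, gamma, eps)

def bn_backward(dy, cache):
    x_hat, var, gamma, eps = cache
    dgamma = (dy * x_hat).sum(axis=(0, 2, 3), keepdims=True)
    dbeta = dy.sum(axis=(0, 2, 3), keepdims=True)
    dx_hat = dy * gamma
    # Standard BN input gradient, written with per-channel mean reductions.
    dx = (dx_hat
          - dx_hat.mean(axis=(0, 2, 3), keepdims=True)
          - x_hat * (dx_hat * x_hat).mean(axis=(0, 2, 3), keepdims=True)) / np.sqrt(var + eps)
    return dx, dgamma, dbeta

x = np.random.randn(8, 16, 4, 4).astype(np.float32)
gamma = np.ones((1, 16, 1, 1), dtype=np.float32)
beta = np.zeros((1, 16, 1, 1), dtype=np.float32)
y, cache = bn_forward(x, gamma, beta)
dx, dgamma, dbeta = bn_backward(np.ones_like(y), cache)
print(y.shape, dx.shape)
```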
Renormalization group analysis has been proposed to eliminate secular terms in perturbation solutions of differential equations and thus expand the domain of their validity. Here we extend the method to treat periodic orbits or limit cycles. Interesting normal forms can be derived through a generalization of the concept of 'resonance', which offers nontrivial analytic approximations. Compared with traditional techniques such as multi-scale methods, the current scheme proceeds in a very straightforward and simple way, delivering not only the period and the amplitude but also the transient path to limit cycles. The method is demonstrated with several examples including the Duffing oscillator, the van der Pol equation, and the Lorenz equation. The obtained solutions match well with numerical results and with those derived by traditional analytic methods.
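As a standard worked example of the kind of result such an analysis yields (a textbook first-order renormalization-group result for the weakly nonlinear van der Pol oscillator, not a reproduction of the paper's own derivation), the amplitude equation and its solution capture both the limit-cycle radius and the transient approach to it:

```latex
% First-order RG/amplitude-equation result for the van der Pol oscillator,
% assuming weak nonlinearity 0 < \epsilon \ll 1 (illustrative only).
\begin{align}
  &\ddot{x} + x = \epsilon\,(1 - x^{2})\,\dot{x},
  \qquad x(t) \approx R(t)\,\cos\!\bigl(t + \phi(t)\bigr), \\
  &\frac{dR}{dt} = \frac{\epsilon}{2}\,R\!\left(1 - \frac{R^{2}}{4}\right),
  \qquad \frac{d\phi}{dt} = O(\epsilon^{2}), \\
  &R(t) = \frac{2}{\sqrt{1 + \bigl(4/R_{0}^{2} - 1\bigr)\,e^{-\epsilon t}}}
  \;\xrightarrow[t\to\infty]{}\; 2 .
\end{align}
```

At this order the amplitude flows to the limit-cycle radius 2 while the phase, and hence the period of roughly 2π, is unchanged, which is exactly the "amplitude plus transient path" information mentioned above.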
In recent decades, brain tumors have emerged as a serious neurological disorder that often leads to death. Hence, Brain Tumor Segmentation (BTS) is significant for enabling the visualization, classification, and delineation of tumor regions in Magnetic Resonance Imaging (MRI). However, BTS remains a challenging task because of noise, non-uniform object texture, diverse image content, and clustered objects. To address these challenges, a novel model is implemented in this research. The key objective is to improve segmentation accuracy and generalization in BTS by incorporating Switchable Normalization into Faster R-CNN, which effectively captures fine-grained tumor features to enhance segmentation precision. MRI images are initially acquired from three online datasets: Brain Tumor Segmentation (BraTS) 2018 (Dataset 1), BraTS 2019 (Dataset 2), and BraTS 2020 (Dataset 3). Subsequently, the Switchable Normalization-based Faster Regions with Convolutional Neural Networks (SNFRC) model is proposed for improved BTS in MRI images. In the proposed model, Switchable Normalization is integrated into the conventional architecture, enhancing generalization capability and reducing overfitting to unseen image data, which is essential given the typically limited size of available datasets. The network depth is increased to obtain discriminative semantic features that improve segmentation performance. Specifically, Switchable Normalization captures diverse feature representations from the brain images. The Faster R-CNN model enables end-to-end training and effective region proposal generation, with enhanced training stability provided by Switchable Normalization, to perform effective segmentation in MRI images. In the experimental results, the proposed model attains segmentation accuracies of 99.41%, 98.12%, and 96.71% on Datasets 1, 2, and 3, respectively, outperforming conventional deep learning models used for BTS.
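For readers unfamiliar with Switchable Normalization, the sketch below is a simplified, illustrative PyTorch re-implementation of the idea: the layer learns softmax importance weights over instance-, layer-, and batch-norm statistics. It is not the paper's SNFRC code, and the exact way the original formulation derives and shares statistics may differ from this simplification.

```python
# Simplified, illustrative Switchable Normalization layer: softmax-weighted
# blend of IN/LN/BN means and variances, then an affine transform.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchableNorm2d(nn.Module):
    def __init__(self, num_channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.mean_w = nn.Parameter(torch.ones(3))   # weights over {IN, LN, BN} means
        self.var_w = nn.Parameter(torch.ones(3))    # weights over {IN, LN, BN} variances

    def forward(self, x):                           # x: (N, C, H, W)
        mu_in = x.mean((2, 3), keepdim=True);    var_in = x.var((2, 3), keepdim=True, unbiased=False)
        mu_ln = x.mean((1, 2, 3), keepdim=True); var_ln = x.var((1, 2, 3), keepdim=True, unbiased=False)
        mu_bn = x.mean((0, 2, 3), keepdim=True); var_bn = x.var((0, 2, 3), keepdim=True, unbiased=False)
        mw, vw = F.softmax(self.mean_w, 0), F.softmax(self.var_w, 0)
        mu = mw[0] * mu_in + mw[1] * mu_ln + mw[2] * mu_bn
        var = vw[0] * var_in + vw[1] * var_ln + vw[2] * var_bn
        return self.weight * (x - mu) / torch.sqrt(var + self.eps) + self.bias

x = torch.randn(2, 8, 32, 32)
print(SwitchableNorm2d(8)(x).shape)                 # torch.Size([2, 8, 32, 32])
```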
In speaker verification, the inconsistent score distributions produced by different target speaker models make it difficult to set a system-wide verification threshold. This paper adopts a score-normalization approach to determine the verification threshold that minimizes the system's detection cost function (DCF). Building on an analysis of two existing score-normalization methods, Z-normalization and T-normalization, a combined normalization method, TZ-normalization, is proposed that merges the advantages of both, and a dynamic threshold-adjustment method is derived from it, effectively improving system performance and the robustness of threshold selection. Experiments on NIST (cellular telephone speech) evaluation corpora from previous years demonstrate the effectiveness of the method.
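To illustrate what these normalizations compute, the sketch below shows Z-norm (impostor scores against the target model, estimated offline), T-norm (the test utterance scored against a cohort of impostor models), and one plausible cascaded combination. The cascade shown is an assumption for illustration and is not claimed to match the paper's exact TZ-normalization formula; all scores below are random placeholders.

```python
# Illustrative score normalization for speaker verification (not the paper's exact method).
import numpy as np

def z_norm(score, impostor_scores_vs_target):
    mu, sigma = np.mean(impostor_scores_vs_target), np.std(impostor_scores_vs_target)
    return (score - mu) / (sigma + 1e-12)

def t_norm(score, test_scores_vs_cohort):
    mu, sigma = np.mean(test_scores_vs_cohort), np.std(test_scores_vs_cohort)
    return (score - mu) / (sigma + 1e-12)

def tz_norm(score, test_scores_vs_cohort, znorm_stats_of_cohort, impostor_scores_vs_target):
    # Z-normalize the target score and each cohort score with its own model's impostor
    # statistics, then T-normalize against the Z-normalized cohort (one possible cascade).
    s_z = z_norm(score, impostor_scores_vs_target)
    cohort_z = [(s - m) / (sd + 1e-12) for s, (m, sd) in zip(test_scores_vs_cohort, znorm_stats_of_cohort)]
    return t_norm(s_z, cohort_z)

# Toy usage with random numbers standing in for verification scores.
rng = np.random.default_rng(0)
print(tz_norm(1.7, rng.normal(0, 1, 50), [(0.0, 1.0)] * 50, rng.normal(-0.5, 1, 200)))
```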
A novel approach is proposed to detect the normal vector to the product surface in real time for the robotic precision drilling system used in aircraft component assembly, and an auto-normalization algorithm is presented based on the detection system. First, the deviation between the normal vector and the spindle axis is measured by four laser displacement sensors installed at the head of the multi-function end effector. Then, the robot target attitude is solved inversely according to the auto-normalization algorithm. Finally, the robot is adjusted to the target attitude via pitch and yaw rotations about the tool center point, and the spindle axis is simultaneously corrected to align with the normal vector. To test and verify the auto-normalization algorithm, an experimental platform was established in which a laser tracker was introduced for accurate measurement. The results show that the deviations between the corrected spindle axis and the normal vector are all reduced to less than 0.5°, with a mean value of 0.32°. This demonstrates that the detection method and the auto-normalization algorithm are feasible and reliable.
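A minimal sketch of the underlying geometry is given below: from a few measured surface points (such as those obtained from the four laser displacement sensors), a least-squares plane fit yields the surface normal, and its angular deviation from the spindle axis follows from a dot product. The sensor layout and readings are hypothetical and the paper's own attitude-solving step is not reproduced here.

```python
# Illustrative sketch: surface normal from measured surface points via SVD plane
# fitting, plus the angular deviation from the spindle axis. Points are hypothetical.
import numpy as np

def surface_normal(points):
    # points: (k, 3) surface points; the right singular vector with the smallest
    # singular value of the centered cloud is the plane normal.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n / np.linalg.norm(n)

def deviation_deg(normal, spindle_axis):
    c = abs(np.dot(normal, spindle_axis) / np.linalg.norm(spindle_axis))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Hypothetical surface points reconstructed from four sensor readings (metres).
pts = np.array([[0.05, 0.00, 0.251], [0.00, 0.05, 0.249],
                [-0.05, 0.00, 0.247], [0.00, -0.05, 0.252]])
n = surface_normal(pts)
print(n, deviation_deg(n, np.array([0.0, 0.0, 1.0])))
```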
To evaluate radiometric normalization techniques, two image normalization algorithms for absolute radiometric correction of Landsat imagery were quantitatively compared in this paper: the Illumination Correction Model proposed by Markham and Irish, and the Illumination and Atmospheric Correction Model developed by the Remote Sensing and GIS Laboratory of Utah State University. Relative noise, correlation coefficient, and slope value were used as the criteria for the evaluation and comparison; these were derived from pseudo-invariant features identified in multitemporal Landsat image pairs of the Xiamen (厦门) and Fuzhou (福州) areas, both located in eastern Fujian (福建) Province, China. Compared with the unnormalized images, the radiometric differences between the normalized multitemporal images were significantly reduced when the seasons of the multitemporal images differed. However, there was no significant difference between the normalized and unnormalized images under similar seasonal conditions. Furthermore, the correction results of the two algorithms are similar when the images are relatively clear with a uniform atmospheric condition. Therefore, radiometric normalization procedures should be carried out if the multitemporal images have a significant seasonal difference.
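The sketch below illustrates the evaluation criteria named above: for pixel values of pseudo-invariant features (PIFs) sampled from two dates of a co-registered image pair, it computes the correlation coefficient, the regression slope, and one simple relative-noise measure. The PIF arrays are placeholders, and the relative-noise definition is an assumption for illustration rather than the paper's exact formula.

```python
# Illustrative PIF-based comparison criteria for radiometric normalization.
import numpy as np

def pif_criteria(band_date1, band_date2):
    x, y = np.asarray(band_date1, float), np.asarray(band_date2, float)
    r = np.corrcoef(x, y)[0, 1]                       # correlation coefficient
    slope, intercept = np.polyfit(x, y, 1)            # y = slope * x + intercept
    residuals = y - (slope * x + intercept)
    relative_noise = residuals.std() / y.mean()       # one simple noise measure (assumed)
    return r, slope, relative_noise

rng = np.random.default_rng(1)
pif1 = rng.uniform(20, 180, 300)                      # hypothetical DN values, date 1
pif2 = 0.95 * pif1 + 4 + rng.normal(0, 2, 300)        # hypothetical DN values, date 2
print(pif_criteria(pif1, pif2))
```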
To address copyright protection, authentication, and integrity of digital images on the Internet, a binary watermark embedding algorithm based on the DCT transform, image moment normalization, and m-sequences is proposed, which realizes the embedding and extraction of binary watermark images. The pseudo-randomness and anti-interference properties of m-sequences give the watermark good imperceptibility and robustness, while moment normalization resists various geometric attacks. Experiments show that the algorithm offers good robustness, practicality, and ease of operation.
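The following sketch conveys the general idea of embedding one watermark bit into mid-band block-DCT coefficients modulated by a pseudo-random ±1 sequence. The block size, coefficient positions, strength alpha, and the simple LFSR generator are arbitrary illustrative choices; the moment-normalization step and the paper's actual embedding rule are not reproduced.

```python
# Illustrative spread-spectrum-style bit embedding in 8x8 block DCT coefficients.
import numpy as np
from scipy.fftpack import dct, idct

def lfsr_sequence(length, taps=(7, 6), state=0b1010101):
    # Simple 7-bit LFSR producing a +/-1 pseudo-random (m-like) sequence.
    seq, reg = [], state
    for _ in range(length):
        bit = 0
        for t in taps:
            bit ^= (reg >> (t - 1)) & 1
        reg = ((reg << 1) | bit) & 0x7F
        seq.append(1 if bit else -1)
    return np.array(seq)

def embed_bit(block, bit, pn, alpha=2.0):
    # Add the PN pattern, sign-flipped by the watermark bit, to mid-band coefficients.
    coeffs = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
    idx = [(2, 3), (3, 2), (3, 3), (2, 4), (4, 2), (4, 4), (3, 4), (4, 3)]
    for k, (i, j) in enumerate(idx):
        coeffs[i, j] += alpha * (1 if bit else -1) * pn[k]
    return idct(idct(coeffs, axis=1, norm="ortho"), axis=0, norm="ortho")

block = np.random.default_rng(0).uniform(0, 255, (8, 8))
watermarked = embed_bit(block, 1, lfsr_sequence(8))
print(np.abs(watermarked - block).mean())
```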
This study successfully deals with the inhomogeneous dimension problem of the load separation assumption, which is the theoretical basis of the normalization method. According to the dimensionless load separation principle, the normalization method has been improved by introducing a forcible blunting correction. With the improved normalization method, the J-resistance curves of five different metallic materials for CT and SEB specimens are estimated. The forcible blunting correction of the initial crack size plays an important role in the J-resistance curve estimation and is closely related to the strain hardening level of the material. A higher level of strain hardening leads to a greater difference in JQ determined by different slopes of the blunting line. If the blunting line coefficient recommended by ASTM E1820-11 is used in the improved normalization method, it leads to greater fracture resistance than that obtained with the blunting line coefficient recommended by ISO 12135-2002. Therefore, the influence of the blunting line on the determination of JQ must be taken into full account in the fracture toughness assessment of metallic materials.
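For reference, the standard load-separation assumption and the usual ASTM E1820-style normalized-load form on which the normalization method rests can be written as below. The blunting-line slopes shown are commonly cited values for the two standards, given here only as an orientation; the exact functional fits, coefficients, and the paper's forcible blunting correction should be taken from the paper and the standards themselves.

```latex
% Load separation and normalized load (textbook forms; coefficients indicative only).
\begin{align}
  P &= G\!\left(\frac{a}{W}\right)\, H\!\left(\frac{v_{\mathrm{pl}}}{W}\right), \\
  P_{N} &= \frac{P}{\,W B \left(\dfrac{W - a_{b}}{W}\right)^{\eta_{\mathrm{pl}}}},
  \qquad v'_{\mathrm{pl}} = \frac{v_{\mathrm{pl}}}{W}, \\
  J_{\text{blunting}} &\approx 2\,\sigma_{Y}\,\Delta a \ \ (\text{ASTM E1820}),
  \qquad
  J_{\text{blunting}} \approx 3.75\,R_{m}\,\Delta a \ \ (\text{ISO 12135}),
\end{align}
```

where G depends only on crack geometry, H only on plastic displacement, σ_Y is the flow stress, and R_m the tensile strength; the difference between the two blunting-line slopes is precisely what drives the JQ differences discussed above.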
Background: Expression levels for genes of interest must be normalized with an appropriate reference, or housekeeping, gene to make accurate comparisons of quantitative real-time PCR results. The purpose of this study was to identify the most stable housekeeping genes in porcine articular cartilage subjected to a mechanical injury from a panel of 10 candidate genes. Results: Ten candidate housekeeping genes were evaluated in three different treatment groups of mechanically impacted porcine articular cartilage. The genes evaluated were: beta actin, beta-2-microglobulin, glyceraldehyde-3-phosphate dehydrogenase, hydroxymethylbilane synthase, hypoxanthine phosphoribosyl transferase, peptidylprolyl isomerase A (cyclophilin A), ribosomal protein L4, succinate dehydrogenase flavoprotein subunit A, TATA box binding protein, and tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activation protein, zeta polypeptide. The stability of the genes was measured using the geNorm, BestKeeper, and NormFinder software. The four most stable genes measured via geNorm were (most to least stable) succinate dehydrogenase flavoprotein subunit A, peptidylprolyl isomerase A, glyceraldehyde-3-phosphate dehydrogenase, and beta actin; the four most stable genes measured via BestKeeper were glyceraldehyde-3-phosphate dehydrogenase, peptidylprolyl isomerase A, beta actin, and succinate dehydrogenase flavoprotein subunit A; and the four most stable genes measured via NormFinder were peptidylprolyl isomerase A, succinate dehydrogenase flavoprotein subunit A, glyceraldehyde-3-phosphate dehydrogenase, and beta actin. Conclusions: BestKeeper, geNorm, and NormFinder all generated similar results for the most stable genes in porcine articular cartilage. The use of these appropriate reference genes will facilitate accurate gene expression studies of porcine articular cartilage and suggests appropriate housekeeping genes for articular cartilage studies in other species.
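To illustrate what a stability ranking of this kind measures, the sketch below computes the geNorm-style M value: for each candidate gene, the mean standard deviation of its log2 expression ratios against every other candidate across samples, with lower M indicating a more stable reference gene. The expression matrix here is random placeholder data, and this is a simplified re-implementation rather than the geNorm software itself.

```python
# Illustrative geNorm-style stability measure M (lower = more stable reference gene).
import numpy as np

def genorm_m(expr):
    # expr: (n_samples, n_genes) matrix of relative expression quantities (> 0).
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.zeros(n_genes)
    for j in range(n_genes):
        sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        m[j] = np.mean(sds)
    return m

rng = np.random.default_rng(0)
expr = rng.lognormal(mean=2.0, sigma=0.3, size=(12, 5))   # 12 samples, 5 candidate genes
print(np.round(genorm_m(expr), 3))
```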