Micro-expression (ME) recognition is a complex task that requires advanced techniques to extract informative features from facial expressions. Numerous deep neural networks (DNNs) with convolutional structures have been proposed. However, shallow convolutional neural networks often mitigate overfitting better than deeper models, particularly on small datasets. Still, many of these methods rely on a single feature for recognition, limiting their ability to extract highly effective features. To address this limitation, this paper introduces an Improved Dual-stream Shallow Convolutional Neural Network based on an Extreme Gradient Boosting algorithm (IDSSCNN-XgBoost) for ME recognition. The proposed method uses a dual-stream architecture in which motion vectors (temporal features) are extracted with TV-L1 optical flow and subtle changes (spatial features) are amplified via Eulerian Video Magnification (EVM). These features are processed by the IDSSCNN, with an attention mechanism applied to refine the extracted features. The outputs are then fused, concatenated, and classified using the XgBoost algorithm. This approach improves recognition accuracy by leveraging the strengths of both temporal and spatial information, supported by the robust classification power of XgBoost. The method is evaluated on three publicly available ME databases: the Chinese Academy of Sciences Micro-expression Database (CASME II), the Spontaneous Micro-Expression Database (SMIC-HS), and Spontaneous Actions and Micro-Movements (SAMM). Experimental results indicate that the proposed model achieves outstanding results compared with recent models: accuracy reaches 79.01%, 69.22%, and 68.99% on CASME II, SMIC-HS, and SAMM, with F1-scores of 75.47%, 68.91%, and 63.84%, respectively. The method also offers operational efficiency and low computation time.
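The dual-stream fusion step described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the two streams are stood in by pre-extracted feature vectors, the attention mechanism is a simple softmax channel weighting, the XgBoost stage is represented only by the fused vector it would receive, and all shapes and function names are hypothetical.

```python
import numpy as np

def channel_attention(features):
    """Softmax weighting over channels: a simple stand-in for the
    attention mechanism that refines each stream's features."""
    weights = np.exp(features - features.max())
    weights /= weights.sum()
    return features * weights

def fuse_streams(temporal_feat, spatial_feat):
    """Apply attention to each stream, then concatenate the refined
    features into one vector for the downstream XgBoost classifier."""
    t = channel_attention(temporal_feat)   # TV-L1 optical-flow stream
    s = channel_attention(spatial_feat)    # EVM-magnified spatial stream
    return np.concatenate([t, s])

rng = np.random.default_rng(0)
temporal = rng.standard_normal(128)  # hypothetical CNN embedding sizes
spatial = rng.standard_normal(128)
fused = fuse_streams(temporal, spatial)
print(fused.shape)  # (256,)
```

In the paper's pipeline the fused vector would then be passed to an XGBoost classifier (e.g. a multi-class booster); it is omitted here to keep the sketch dependency-free.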
When multiple feature modalities are fused, noise accumulates, and the cascade structures commonly used to reduce inter-modal differences do not fully exploit cross-modal feature information. To address this, a cross-modal Dual-stream Alternating Interaction Network (DAINet) is designed. First, a Dual-stream Alternating Enhancement (DAE) module fuses modality features through an interactive dual-branch structure; by learning the mapping between modalities, bidirectional infrared-visible-infrared (IR-VIS-IR) and visible-infrared-visible (VIS-IR-VIS) feedback achieves cross-suppression of inter-modal noise. Second, a Cross-Modal Feature Interaction (CMFI) module with residual connections fuses low-level and high-level features both within and between the infrared and visible modalities, reducing inter-modal differences and fully exploiting cross-modal feature information. Finally, experiments on a self-built infrared-visible multimodal typhoon dataset and the public RGB-NIR multimodal scene dataset verify the effectiveness of the DAE and CMFI modules. The results show that, compared with simple cascade fusion, the proposed DAINet-based feature fusion improves overall classification accuracy on the self-built typhoon dataset by 6.61 and 3.93 percentage points for the infrared and visible modalities, respectively, and raises the G-mean by 6.24 and 2.48 percentage points, demonstrating the method's generality on class-imbalanced classification tasks; on the RGB-NIR dataset, overall classification accuracy under the two test modalities improves by 13.47 and 13.90 percentage points. Moreover, compared with IFCNN (general Image Fusion framework based on Convolutional Neural Network) and DenseFuse on both datasets, the proposed method improves overall classification accuracy under the two test modalities of the self-built typhoon dataset by 9.82 and 6.02, and by 17.38 and 1.68 percentage points, respectively.
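The alternating-enhancement idea can be illustrated roughly as below: two modality feature maps cross-modulate each other in both directions (IR→VIS→IR and VIS→IR→VIS) through a sigmoid gate, so each modality's response suppresses noise in the other, and a residual connection stands in for the CMFI-style low/high-level fusion. The gating form, the residual form, and all names and shapes are illustrative assumptions, not DAINet's actual layers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def alternating_enhance(ir, vis, rounds=2):
    """Bidirectional feedback: each modality is re-weighted by a gate
    computed from the other, alternating IR->VIS->IR and VIS->IR->VIS."""
    for _ in range(rounds):
        vis = vis * sigmoid(ir)   # IR response gates (suppresses) VIS noise
        ir = ir * sigmoid(vis)    # enhanced VIS feeds back to gate IR
    return ir, vis

def cross_modal_interaction(low, high):
    """Residual fusion in the spirit of CMFI: low-level detail is
    added back onto the high-level feature map."""
    return high + low

rng = np.random.default_rng(1)
ir = rng.standard_normal((8, 8))   # hypothetical IR feature map
vis = rng.standard_normal((8, 8))  # hypothetical visible feature map
ir_e, vis_e = alternating_enhance(ir, vis)
fused = cross_modal_interaction(ir_e, vis_e)
print(fused.shape)  # (8, 8)
```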
To address the difficulty of recognizing complex network-traffic patterns and the insufficient real-time response capability in campus network intrusion detection, a Depthwise Separable Convolution Network (DSCN) model based on a dual-stream pyramid enhancement strategy is proposed to optimize campus network security intrusion detection and response. The model's dual-stream structure fuses multi-scale information from low-resolution and high-resolution paths, significantly improving its ability to recognize complex traffic patterns, while depthwise separable convolution (DSC) decomposes the convolution operation to reduce the model's computational complexity and thus improve the system's real-time responsiveness. The model is validated on the KDD Cup 1999, CICIDS 2017, CICIDS 2021, and UNSW-NB15 datasets; the results show excellent performance on detection rate, false-alarm rate, response time, and other metrics, providing an efficient technical solution for campus network security protection.
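The computational saving that DSC provides can be made concrete with a small sketch: a k x k depthwise convolution per input channel followed by a 1x1 pointwise convolution mixing channels, compared against the parameter count of a standard convolution. This is a minimal "valid"-mode NumPy illustration with assumed shapes, not the paper's model.

```python
import numpy as np

def depthwise_separable_conv(x, depth_kernels, point_kernels):
    """Depthwise separable convolution: one k x k filter per input
    channel (depthwise), then 1x1 filters mixing channels (pointwise)."""
    c_in, h, w = x.shape
    k = depth_kernels.shape[-1]
    oh, ow = h - k + 1, w - k + 1
    # depthwise: each channel convolved with its own kernel ("valid" mode)
    dw = np.zeros((c_in, oh, ow))
    for c in range(c_in):
        for i in range(oh):
            for j in range(ow):
                dw[c, i, j] = np.sum(x[c, i:i+k, j:j+k] * depth_kernels[c])
    # pointwise: 1x1 convolution mixes the depthwise outputs across channels
    return np.tensordot(point_kernels, dw, axes=([1], [0]))

c_in, c_out, k = 8, 16, 3
x = np.ones((c_in, 10, 10))
dk = np.ones((c_in, k, k))        # depthwise kernels, one per channel
pk = np.ones((c_out, c_in))       # pointwise 1x1 kernels
y = depthwise_separable_conv(x, dk, pk)
print(y.shape)  # (16, 8, 8)

# parameter comparison against a standard convolution
standard = c_out * c_in * k * k       # 1152 weights
separable = c_in * k * k + c_out * c_in  # 200 weights
```

For these toy sizes the separable form uses 200 weights versus 1152 for the standard convolution, which is the source of the reduced computational complexity the abstract refers to.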
Funding: supported by the Key Research and Development Program of Jiangsu Province under Grant BE2022059-3, by CTBC Bank through the Industry-Academia Cooperation Project, and by the Ministry of Science and Technology of Taiwan through Grants MOST-108-2218-E-002-055, MOST-109-2223-E-009-002-MY3, MOST-109-2218-E-009-025, and MOST-109-2218-E-002-015.