Abstract: Computer-aided liver tumor segmentation can reduce physicians' workload and improve the success rate of surgery, and therefore has important value for clinical diagnosis and treatment. To obtain accurate automatic liver tumor segmentation results, this paper proposes a segmentation method based on a cascaded separable and dilated residual U-Net (CSDResU-Net), building on the U-Net module that has recently emerged in medical image segmentation. CSDResU-Net adopts a cascaded design to address the data imbalance caused by the small proportion of tumor pixels in the whole image. By integrating residual units, depth-wise separable convolution, and dilated convolution into the segmentation network, it enlarges the receptive field of the convolution kernels and rapidly extracts more discriminative liver tumor features, thereby improving segmentation accuracy. Experimental results on the liver tumor segmentation database of the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society show that CSDResU-Net improves the Dice coefficient by 1.3% over the baseline method, and that the dilation rate has a considerable impact on segmentation performance.
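As an illustrative sketch (not the authors' code), the Dice coefficient reported above can be computed for a pair of binary masks as follows; the masks here are tiny made-up examples.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

score = dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0])
print(round(score, 4))  # 2*1 / (2 + 1) -> 0.6667
```

A 1.3% gain in this metric, as the abstract reports, corresponds directly to increased overlap between predicted and ground-truth tumor regions.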
Abstract: Among the most evident clinical manifestations of dementia, or the Behavioral and Psychological Symptoms of Dementia (BPSD), are a lack of emotional expression, an increased frequency of negative emotions, and emotional instability. Monitoring emotions to track the reduction of BPSD is considered effective and is widely used in the field of non-pharmacological therapy. This article verifies whether an image-recognition artificial intelligence (AI) system can correctly reflect the emotional expressions of elderly people with dementia, using a questionnaire survey of three professional elderly-care nursing staff. ANOVA (sig. = 0.50) is used to confirm that the judgments given by the nursing staff show no obvious bias, and then Kendall's test (0.722**) and Spearman's test (0.863**) are used to verify that the ratings of the emotion-recognition system and the nursing staff are consistent. This supports the usability of the tool, which can be expected to find further application in research on emotion detection for elderly people with BPSD.
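As an illustrative sketch only (not the study's analysis), the rank-agreement tests named above can be run with SciPy; all rating values below are made up for demonstration.

```python
# Hypothetical 1-5 emotion ratings for ten observations, one list from an
# AI system and one from a caregiver. The tests measure rank agreement.
from scipy.stats import kendalltau, spearmanr

ai_scores    = [3, 1, 4, 2, 5, 2, 4, 1, 3, 5]  # hypothetical AI ratings
staff_scores = [3, 2, 4, 2, 5, 1, 4, 1, 3, 4]  # hypothetical caregiver ratings

tau, tau_p = kendalltau(ai_scores, staff_scores)
rho, rho_p = spearmanr(ai_scores, staff_scores)
print(f"Kendall tau = {tau:.3f} (p = {tau_p:.4f})")
print(f"Spearman rho = {rho:.3f} (p = {rho_p:.4f})")
```

High positive values of both statistics, as reported in the abstract (0.722 and 0.863), indicate that the two raters order the observations similarly.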
Funding: Supported by the Fundamental Research Funds for Central Universities (No. 2024JCCXJD01).
Abstract: The distinctive fault characteristics of battery energy storage stations (BESSs) significantly affect the reliability of conventional protection methods for transmission lines. In this paper, three-dimensional (3D) data scattergrams are constructed using the current data from both sides of the transmission line and their sum. Following a comprehensive analysis of the varying characteristics of the 3D data scattergrams under different conditions, a protection method based on 3D data scattergram image classification is developed. Depth-wise separable convolution is used to ensure a lightweight convolutional neural network (CNN) structure without compromising performance. In addition, a Bayesian hyperparameter optimization algorithm is used to perform the hyperparameter search and simplify the training process. Compared with artificial neural networks and conventional CNNs, the depth-wise separable convolution based CNN (DPCNN) achieves higher recognition accuracy. The protection method based on 3D data scattergram image classification using the DPCNN can accurately separate internal faults from other disturbances and identify fault phases under different operating states and fault conditions. The proposed protection method also shows strong tolerance to current transformer (CT) saturation and CT measurement errors.
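A minimal sketch of why depth-wise separable convolution yields the lightweight CNN described above: it factors a standard convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel-mixing step, which sharply cuts the parameter count. The channel and kernel sizes below are arbitrary examples, not the DPCNN's actual configuration.

```python
def standard_conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise (c_in filters of k x k) plus pointwise (1x1) parameters."""
    return c_in * k * k + c_in * c_out

std = standard_conv_params(64, 128, 3)        # 64 * 128 * 9  = 73728
sep = depthwise_separable_params(64, 128, 3)  # 576 + 8192    = 8768
print(std, sep, f"ratio = {sep / std:.3f}")
```

The roughly 8x reduction at this layer size is why the factorization preserves a light structure without necessarily hurting accuracy.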
Funding: Supported by the Hunan Provincial Natural Science Foundation of China (2021JJ50074); the Scientific Research Fund of Hunan Provincial Education Department (19B082); the Science and Technology Development Center of the Ministry of Education, New Generation Information Technology Innovation Project (2018A02020); the Science Foundation of Hengyang Normal University (19QD12); the Science and Technology Plan Project of Hunan Province (2016TP1020); the Subject Group Construction Project of Hengyang Normal University (18XKQ02); the Application-Oriented Special Disciplines, Double First-Class University Project of Hunan Province (Xiangjiaotong [2018] 469); the Hunan Province Special Funds of Central Government for Guiding Local Science and Technology Development (2018CT5001); and the First-Class Undergraduate Major in Hunan Province, Internet of Things Major (Xiangjiaotong [2020] 248, No. 288).
Abstract: The accurate and automatic segmentation of retinal vessels from fundus images is critical for the early diagnosis and prevention of many eye diseases, such as diabetic retinopathy (DR). Existing retinal vessel segmentation approaches based on convolutional neural networks (CNNs) have achieved remarkable effectiveness. Here, we present a retinal vessel segmentation model with low complexity and high performance based on U-Net, one of the most popular architectures. Motivated by the effectiveness of depth-wise separable convolution, we introduce it to replace the standard convolutional layers, reducing the complexity of the proposed model by decreasing the number of parameters and calculations it requires. To ensure performance while lowering redundant parameters, we integrate the pre-trained MobileNet V2 into the encoder. Then, a feature fusion residual module (FFRM) is designed to combine complementary strengths by enhancing the fusion between adjacent levels, which alleviates the extraneous clutter introduced by direct fusion. Finally, we provide detailed comparisons between the proposed SepFE and U-Net on three mainstream retinal image datasets (DRIVE, STARE, and CHASEDB1). The results show that SepFE has only 3% of the parameters and 8% of the FLOPs of U-Net while achieving better segmentation performance. The superiority of SepFE is further demonstrated through comparisons with other advanced methods.
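To make the depthwise-then-pointwise factorization concrete, here is a small NumPy sketch of a depth-wise separable convolution forward pass (illustrative only; real models use framework layers, and the identity kernels below are chosen just so the output is easy to check).

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """x: (C, H, W); dw_kernels: (C, k, k), one filter per input channel;
    pw_weights: (C_out, C), the 1x1 pointwise channel-mixing matrix."""
    C, H, W = x.shape
    k = dw_kernels.shape[1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    # depthwise stage: each channel is filtered independently
    dw = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * dw_kernels[c])
    # pointwise stage: 1x1 convolution mixes channels at each pixel
    return np.tensordot(pw_weights, dw, axes=([1], [0]))

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
identity_dw = np.zeros((2, 3, 3))
identity_dw[:, 1, 1] = 1.0                       # center-tap = identity filter
out = depthwise_separable_conv(x, identity_dw, np.eye(2))
print(np.allclose(out, x))  # identity kernels reproduce the input -> True
```

Note that the depthwise stage never mixes channels; all cross-channel interaction is deferred to the cheap pointwise step, which is where the parameter savings come from.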
Abstract: Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on MobileNetV1 is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, resulting in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that processes the input feature map with a parallel multi-resolution branch architecture to extract multi-scale feature information from the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
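The "expanded field of view" claim for the DDSC layer rests on simple arithmetic: dilation inserts gaps between kernel taps, so a kernel spans a wider region without adding parameters. A small sketch (generic, not tied to the paper's exact layer sizes):

```python
def effective_kernel_size(k, dilation):
    """Spatial span covered by a k-tap kernel with the given dilation rate."""
    return k + (k - 1) * (dilation - 1)

for d in (1, 2, 4):
    span = effective_kernel_size(3, d)
    print(f"3-tap kernel, dilation {d}: covers a {span}-pixel span")
```

A 3x3 kernel with dilation 4 thus sees a 9x9 region at the cost of only nine weights, which is how the DDSC layer incorporates larger context cheaply.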
Funding: Partially supported by the Henan Provincial Key Research and Promotion Projects (Grant No. 242102211012) and the Ministry of Education in China Project of Humanities and Social Sciences (Grant No. 24YJCZH261).
Abstract: Introduction: Accurate prediction of protocadherin 8 (PCDH8) gene expression status from whole-slide images (WSIs) is critical for thyroid cancer diagnosis and prognosis, as PCDH8 overexpression is associated with tumor aggressiveness and poor outcomes. Existing methods for PCDH8 detection are often costly, time-consuming, or require specialized expertise. To address these limitations, we developed a novel depth-wise separable residual neural network (DSRNet) for noninvasive PCDH8 status prediction directly from WSIs. Materials and methods: We collected 403 thyroid cancer WSIs from The Cancer Genome Atlas (TCGA), with PCDH8 expression status classified as high or low based on median expression values. Each WSI was divided into 512 x 512 pixel tiles, with the top 100 non-white tiles selected per slide. DSRNet integrates depth-wise separable convolutions, residual connections, and a deformable convolutional pyramid pooling module to efficiently capture multiscale and long-range features in gigapixel WSIs. The model was trained using tenfold cross-validation. Results: DSRNet achieved state-of-the-art performance with 92.76% accuracy, 91.92% precision, 92.69% recall, and 0.93 area under the curve on the thyroid cancer dataset (TCGA-THCA), significantly outperforming leading convolutional neural networks and Transformer models. Ablation studies confirmed the contributions of each component, and attention visualization showed that DSRNet focuses on biologically relevant regions. The model also generalized well to a breast cancer dataset (TCGA-BRCA), achieving 89.13% accuracy. Conclusions: We developed DSRNet, a deep learning-based model for predicting PCDH8 status directly from routine hematoxylin and eosin-stained pathological images. DSRNet combines the efficiency of convolutional operations with enhanced long-range dependency modeling, providing a noninvasive, accurate, and interpretable tool for auxiliary thyroid cancer diagnosis and prognosis. The results demonstrate its strong potential for clinical translation, though further multicenter validation is warranted.
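The "top non-white tiles" preprocessing step can be sketched as follows. This is an assumed implementation, not the authors' pipeline: the whiteness threshold, the tissue score, and the tiny demo array are all illustrative choices.

```python
import numpy as np

def select_tiles(slide, tile=512, top_k=100, white_thresh=220):
    """Split an RGB slide array (H, W, 3) into tile x tile patches and keep
    the top_k patches with the largest fraction of non-white (tissue) pixels,
    scored as pixels whose mean intensity falls below white_thresh."""
    H, W, _ = slide.shape
    scored = []
    for y in range(0, H - tile + 1, tile):
        for x in range(0, W - tile + 1, tile):
            patch = slide[y:y + tile, x:x + tile]
            tissue_frac = (patch.mean(axis=2) < white_thresh).mean()
            scored.append((tissue_frac, (y, x)))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [pos for _, pos in scored[:top_k]]

demo = np.full((4, 4, 3), 255, dtype=np.uint8)  # tiny all-white "slide"
demo[2:, 2:] = 30                               # one dark, tissue-like corner
picked = select_tiles(demo, tile=2, top_k=1)
print(picked)  # -> [(2, 2)], the dark quadrant wins
```

Filtering out white background this way keeps the downstream network's 100 tiles per slide focused on regions that actually contain tissue.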
Abstract: To improve speech separation, visual signals can be used as auxiliary information in addition to the mixed speech signal. This multimodal modeling approach, which fuses visual and audio signals, has been shown to improve speech separation performance effectively and opens new possibilities for the task. To better capture long-range dependencies in the visual and audio features and to strengthen the network's understanding of the input context, this paper proposes a time-domain audio-visual speech separation model based on one-dimensional dilated convolution and the Transformer. The traditional frequency-domain audio-visual separation approach is moved into the time domain, avoiding the information loss and phase-reconstruction problems introduced by time-frequency transforms. The proposed architecture consists of four modules: a visual feature extraction network that extracts lip-embedding features from video frames; an audio encoder that converts the mixed speech into a feature representation; a multimodal separation network, composed mainly of an audio subnetwork, a video subnetwork, and a Transformer network, that performs separation using the visual and audio features; and an audio decoder that reconstructs the separated features into clean speech. Experiments use a two-speaker mixture dataset generated from the LRS2 corpus. The results show that the proposed network reaches 14.0 dB scale-invariant signal-to-noise ratio improvement (SISNRi) and 14.3 dB signal-to-distortion ratio improvement (SDRi), a clear gain over audio-only separation models and generic audio-visual fusion models.
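The long-range modeling power of stacked one-dimensional dilated convolutions comes from exponentially growing dilation rates. A generic sketch of the receptive-field arithmetic (the kernel size and dilation schedule below are common Conv-TasNet-style choices, not this paper's exact configuration):

```python
def stacked_dilated_rf(kernel, dilations):
    """Receptive field, in samples, of a stack of 1-D dilated convolutions
    applied in sequence: each layer adds (kernel - 1) * dilation samples."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

# Eight layers with dilations 1, 2, 4, ..., 128 and 3-tap kernels:
rf = stacked_dilated_rf(3, [2 ** i for i in range(8)])
print(rf)  # -> 511 samples covered by only eight thin layers
```

This is why a modest stack of dilated layers can span hundreds of waveform samples, letting the time-domain model exploit long context without pooling or time-frequency transforms.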