Deep learning attention mechanisms have achieved remarkable progress in computer vision, but they still face limitations when handling images with ambiguous boundaries and uncertain feature representations. Conventional attention modules such as SE-Net, CBAM, ECA-Net, and CA adopt a deterministic paradigm, assigning fixed scalar weights to features without modeling ambiguity or confidence. To overcome these limitations, this paper proposes the Fuzzy Attention Network Layer (FANL), which integrates intuitionistic fuzzy set theory with convolutional neural networks to explicitly represent feature uncertainty through membership (μ), non-membership (ν), and hesitation (π) degrees. FANL consists of four core modules: (1) feature dimensionality reduction via global pooling, (2) fuzzy modeling using learnable clustering centers, (3) adaptive attention generation through weighted fusion of the fuzzy components, and (4) feature refinement through residual connections. A cross-layer guidance mechanism is further introduced to enhance hierarchical feature propagation, allowing high-level semantic features to incorporate fine-grained texture information from shallow layers. Comprehensive experiments on three benchmark datasets (PathMNIST-30000, full PathMNIST, and BloodMNIST) demonstrate the effectiveness and generalizability of FANL. The model achieves 84.41 ± 0.56% accuracy, a 1.69% improvement over the baseline CNN, while maintaining lightweight computational complexity. Ablation studies show that removing any component causes a 1.7%–2.0% performance drop, validating the synergistic contribution of each module. Furthermore, FANL provides superior uncertainty calibration (ECE = 0.0452) and interpretable selective prediction under uncertainty. Overall, FANL presents an efficient and uncertainty-aware attention framework that improves both accuracy and reliability, offering a promising direction for robust visual recognition under ambiguous or noisy conditions.
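The four-module pipeline described above can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the paper's implementation: scalar cluster centers, the Gaussian-style membership function, the heuristic for ν, and the fusion weights `w` are all assumptions introduced here; in FANL these quantities are learnable.

```python
import numpy as np

def fanl_attention(x, centers, w=(0.5, 0.3, 0.2)):
    """Sketch of a FANL-style channel attention step (illustrative only).

    x       : (C, H, W) feature map
    centers : (K,) cluster centers (assumed scalar per center here)
    w       : fusion weights for (mu, 1 - nu, pi) -- hypothetical values
    """
    # (1) Feature dimensionality reduction: global average pooling,
    #     one descriptor per channel.
    z = x.mean(axis=(1, 2))                      # shape (C,)

    # (2) Fuzzy modeling: membership from distance to the nearest center,
    #     with mu + nu <= 1 so the hesitation degree pi is non-negative.
    d = np.abs(z[:, None] - centers[None, :])    # (C, K) distances
    mu = np.exp(-d.min(axis=1))                  # membership degree in (0, 1]
    nu = 0.8 * (1.0 - mu)                        # non-membership (heuristic)
    pi = 1.0 - mu - nu                           # hesitation degree

    # (3) Adaptive attention: weighted fusion of the three fuzzy
    #     components, squashed to (0, 1).
    a = w[0] * mu + w[1] * (1.0 - nu) + w[2] * pi
    a = 1.0 / (1.0 + np.exp(-a))                 # sigmoid

    # (4) Feature refinement: residual connection around the
    #     channel-wise reweighting.
    return x + x * a[:, None, None]
```

With uniform input features sitting exactly on a cluster center, μ = 1 and ν = π = 0, so every channel receives the same attention weight and the residual simply amplifies the input.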