Journal Literature
155,097 articles found
1. Creep characteristics and sealing performance analysis of FEP gaskets in the inflation valve of an attitude and orbit control engine
Authors: 刘丽静, 郑娆, 李双喜, 刘登宇. 《工程力学》 (Engineering Mechanics), PKU Core, 2025, No. 6, pp. 234-242 (9 pages)
The sealing gasket of the inflation valve of an attitude and orbit control engine is prone to creep under prolonged high-pressure operation; the resulting seal failure causes valve leakage and seriously compromises the safe and reliable operation of the engine. To address the creep failure of fluorinated ethylene propylene (FEP) gaskets, mechanical and compressive creep tests were performed, and the creep coefficients A, m, and n of the FEP material were obtained by curve fitting in Origin. A time-hardening creep model of the sealing structure was established, and the finite element method was used to analyze the effects of time, operating pressure, structural parameters, and material on the creep and sealing performance of the FEP gasket. The simulation of the static compressive creep test of the FEP material was experimentally validated, with a simulation-to-experiment error below 10%. The results show that the decelerating (primary) creep stage of the FEP material lasts about 5 h, with the creep rate decreasing over time. The service time of the FEP gasket was determined to be 571.67 h and the maximum working pressure 54.32 MPa. Reducing the fillet radius or increasing the boss height both reduce the equivalent Mises stress and thus the probability of crack initiation, but also lower the contact stress.
Keywords: creep, sealing performance, service time, FEP gasket, age-hardening model
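The time-hardening creep model mentioned in this abstract is commonly written as ε_cr = A·σ^n·t^m. As a minimal numeric sketch (the coefficients A, n, m and the stress value below are illustrative placeholders, not the paper's fitted FEP values), the decelerating primary-creep stage for m < 1 can be checked directly:

```python
# Time-hardening creep law: eps(t) = A * sigma**n * t**m.
# A, n, m and sigma are assumed placeholder values, NOT the paper's fit.
A, n, m = 1e-6, 1.5, 0.4
sigma = 30.0  # constant operating stress, MPa (assumed)

def creep_strain(t):
    """Creep strain at time t (hours) under constant stress sigma."""
    return A * sigma ** n * t ** m

def creep_rate(t, dt=1e-3):
    """Numerical creep strain rate d(eps)/dt via central difference."""
    return (creep_strain(t + dt) - creep_strain(t - dt)) / (2 * dt)

# With m < 1 the rate decays with time, matching the observed
# decelerating (primary) creep stage.
rates = [creep_rate(t) for t in (1.0, 5.0, 50.0)]
assert rates[0] > rates[1] > rates[2]
```

In a fit such as the Origin curve fitting described above, taking logarithms makes the model linear in log A, n, and m, so ordinary least squares can recover the three coefficients.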
2. YOLO-BFEPS: an efficient attention-enhanced cross-scale YOLOv10 fire detection model (Cited: 1)
Authors: 高均益, 张伟, 李泽麟. 《计算机科学》 (Computer Science), PKU Core, 2025, Supplement 1, pp. 412-420 (9 pages)
To address the insufficient feature extraction and excessive model complexity of traditional fire detection models in complex scenes, which cause warning delays and reduced recognition accuracy, a new fire detection model deployable on terminal devices, YOLO-BFEPS (YOLO Bi-directional Fusion with Enhanced Partial Self-Attention), is proposed based on an improved YOLOv10, achieving fast and accurate detection of both smoke and fire. First, the PSA module is improved to strengthen spatial semantic feature extraction, resolving the information loss and increased computational complexity caused by channel dimensionality reduction when modeling cross-channel relationships, thereby improving detection accuracy; the improved module is denoted E-PSA (Enhanced Partial Self-Attention). Second, based on BiFPN, a bidirectional cross-connection scheme for feature layers is proposed for scale fusion; the neck of YOLOv10 is redesigned, and fusion of information from low-level feature layers is innovatively added, greatly reducing model parameters and computational complexity while maintaining accuracy. The Faster Block structure is introduced to replace the Bottleneck structure of the C2f module, achieving a lightweight design; the result is called C2f-Faster. Finally, experiments on multiple datasets validate the effectiveness of the proposed model: with 35.5% fewer parameters and 17.6% lower computational complexity, it improves precision and mAP@0.5 by 5.9% and 1.4%, respectively.
Keywords: efficient attention, multi-scale features, weighted bidirectional feature pyramid, fire detection, YOLOv10, lightweight, computer vision, deep learning
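The weighted bidirectional fusion performed by BiFPN-style necks, which this abstract builds on, can be sketched as "fast normalized fusion": out = Σ wᵢ·fᵢ / (ε + Σ wᵢ) with learnable wᵢ ≥ 0. The scalar stand-ins below are illustrative, not YOLO-BFEPS internals:

```python
# Fast normalized weighted fusion at one BiFPN-style node:
#   out = sum(w_i * f_i) / (eps + sum(w_i)), with learnable w_i >= 0.
# Scalars stand in for feature maps; in a real neck these are tensors.
def fused_feature(features, weights, eps=1e-4):
    weights = [max(w, 0.0) for w in weights]  # clamp keeps weights non-negative
    total = sum(weights) + eps
    return sum(w * f for w, f in zip(weights, features)) / total

# A node fusing a same-level feature with an upsampled one:
out = fused_feature([0.8, 0.2], [1.0, 1.0])
# With equal weights the result is (up to eps) the simple mean, 0.5.
```

The normalization keeps the fused output bounded by the inputs without the cost of a softmax, which is why this form is favored in lightweight detectors.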
3. Correction: A Lightweight Approach for Skin Lesion Detection through Optimal Features Fusion
Authors: Khadija Manzoor, Fiaz Majeed, Ansar Siddique, Talha Meraj, Hafiz Tayyab Rauf, Mohammed A. El-Meligy, Mohamed Sharaf, Abd Elatty E. Abd Elgawad. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, p. 1459 (1 page)
In the article "A Lightweight Approach for Skin Lesion Detection through Optimal Features Fusion" by Khadija Manzoor, Fiaz Majeed, Ansar Siddique, Talha Meraj, Hafiz Tayyab Rauf, Mohammed A. El-Meligy, Mohamed Sharaf, and Abd Elatty E. Abd Elgawad, Computers, Materials & Continua, 2022, Vol. 70, No. 1, pp. 1617-1630, DOI: 10.32604/cmc.2022.018621, URL: https://www.techscience.com/cmc/v70n1/44361, there was an error regarding the affiliation of the author Hafiz Tayyab Rauf. Instead of "Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, UK", the affiliation should be "Independent Researcher, Bradford, BD80HS, UK".
Keywords: FUSION, SKIN, FEATURE
4. Retrospective analysis of pathological types and imaging features in pancreatic cancer: A comprehensive study
Authors: Yang-Gang Luo, Mei Wu, Hong-Guang Chen. World Journal of Gastrointestinal Oncology (SCIE), 2025, No. 1, pp. 121-129 (9 pages)
BACKGROUND: Pancreatic cancer remains one of the most lethal malignancies worldwide, with a poor prognosis often attributed to late diagnosis. Understanding the correlation between pathological type and imaging features is crucial for early detection and appropriate treatment planning. AIM: To retrospectively analyze the relationship between different pathological types of pancreatic cancer and their corresponding imaging features. METHODS: We retrospectively analyzed the data of 500 patients diagnosed with pancreatic cancer between January 2010 and December 2020 at our institution. Pathological types were determined by histopathological examination of the surgical specimens or biopsy samples. The imaging features were assessed using computed tomography, magnetic resonance imaging, and endoscopic ultrasound. Statistical analyses were performed to identify significant associations between pathological types and specific imaging characteristics. RESULTS: There were 320 (64%) cases of pancreatic ductal adenocarcinoma, 75 (15%) of intraductal papillary mucinous neoplasms, 50 (10%) of neuroendocrine tumors, and 55 (11%) of other rare types. Distinct imaging features were identified in each pathological type. Pancreatic ductal adenocarcinoma typically presents as a hypodense mass with poorly defined borders on computed tomography, whereas intraductal papillary mucinous neoplasms present as characteristic cystic lesions with mural nodules. Neuroendocrine tumors often appear as hypervascular lesions in contrast-enhanced imaging. Statistical analysis revealed significant correlations between specific imaging features and pathological types (P < 0.001). CONCLUSION: This study demonstrated a strong association between the pathological types of pancreatic cancer and imaging features. These findings can enhance the accuracy of noninvasive diagnosis and guide personalized treatment approaches.
Keywords: Pancreatic cancer, Pathological types, Imaging features, Retrospective analysis, Diagnostic accuracy
5. New Features and New Challenges of U.S.-Europe Relations Under Trump 2.0 (Cited: 1)
Authors: Zhao Huaipu. Contemporary World, 2025, No. 3, pp. 47-52 (6 pages)
During Donald Trump's first term, the "Trump Shock" brought world politics into an era of uncertainties and pulled the transatlantic alliance down to its lowest point in history. The Trump 2.0 tsunami brewed by the 2024 presidential election of the United States has plunged U.S.-Europe relations into more gloomy waters, ushering in a more complex and turbulent period of adjustment.
Keywords: new features, turbulent period, Trump, U.S.-Europe relations, presidential election, new challenges, uncertainties, transatlantic alliance
6. BDMFuse: Multi-scale network fusion for infrared and visible images based on base and detail features
Authors: SI Hai-Ping, ZHAO Wen-Rui, LI Ting-Ting, LI Fei-Tao, Fernando Bacao, SUN Chang-Xia, LI Yan-Ling. 《红外与毫米波学报》 (Journal of Infrared and Millimeter Waves), PKU Core, 2025, No. 2, pp. 289-298 (10 pages)
The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible images. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which are used to extract low-frequency and high-frequency information from the image. This extraction may leave some information uncaptured, so a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
Keywords: infrared image, visible image, image fusion, encoder-decoder, multi-scale features
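The base/detail decomposition learned by the encoder above has a classical analogue: a smoothing filter yields the low-frequency "base" and the residual yields the high-frequency "detail". The sketch below is that analogy on a 1-D signal, not the paper's learned network:

```python
# Moving-average base + residual detail, a classical stand-in for the
# base/detail encoders described above. A 1-D list stands in for an image row.
def decompose(signal, k=3):
    half = k // 2
    base = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        base.append(sum(window) / len(window))  # local low-frequency average
    detail = [s - b for s, b in zip(signal, base)]  # high-frequency residual
    return base, detail

sig = [1.0, 1.0, 8.0, 1.0, 1.0]
base, detail = decompose(sig)
# base + detail reconstructs the input exactly, so no information is lost
assert all(abs(b + d - s) < 1e-9 for b, d, s in zip(base, detail, sig))
```

The compensation encoder in BDMFuse exists precisely because a learned split, unlike this exact residual, can drop information.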
7. FeP nanocrystals catalytically activate Li₂S for long-life lithium-sulfur batteries
Authors: 陈飞, 翟飞飞, 宋昊鑫, 吕盼. 《应用化学》 (Chinese Journal of Applied Chemistry), PKU Core, 2025, No. 5, pp. 668-674 (7 pages)
A self-supporting, flexible porous catalytic host with a core-shell structure was prepared by simple and readily scalable high-temperature in-situ pyrolysis of a polymer, and used to load active sulfur at high mass loading. The FeP nanocrystals introduced into the shell accelerate ion transport within the sulfur electrode while serving as active sites that efficiently catalyze the activation and conversion of Li₂S, accelerating the Li₂S ↔ S₈ conversion reaction in Li-S batteries and ensuring high reversibility and long-term cycling stability. The prepared high-sulfur-loading cathode exhibits a high discharge specific capacity (1306.2 mA·h/g) and excellent long-term cycling stability (2 C → 0.1 C, 95.8% capacity recovery). Post-cycling cell disassembly analysis verified the performance degradation mechanism of the assembled lithium-sulfur batteries under the experimental conditions.
Keywords: lithium-sulfur battery, iron phosphide nanocrystals, core-shell structure, lithium sulfide activation, porous host
8. Block-gram: Mining knowledgeable features for efficiently smart contract vulnerability detection
Authors: Xueshuo Xie, Haolong Wang, Zhaolong Jian, Yaozheng Fang, Zichun Wang, Tao Li. Digital Communications and Networks, 2025, No. 1, pp. 1-12 (12 pages)
Smart contracts are widely used on the blockchain to implement complex transactions, such as decentralized applications on Ethereum. Effective vulnerability detection of large-scale smart contracts is critical, as attacks on smart contracts often cause huge economic losses. Since it is difficult to repair and update smart contracts, it is necessary to find vulnerabilities before they are deployed. However, code analysis, which requires traversing paths, and learning methods, which require many features to be trained, are too time-consuming to detect large-scale on-chain contracts. Learning-based methods obtain detection models from a feature space, in contrast to code analysis methods such as symbolic execution. But the existing features lack interpretability of the detection results and the training model; even worse, the large-scale feature space also hurts the efficiency of detection. This paper focuses on improving detection efficiency by reducing the dimension of the features, combined with expert knowledge. A feature extraction model, Block-gram, is proposed to form low-dimensional knowledge-based features from bytecode. First, the metadata is separated and the runtime code is converted into a sequence of opcodes, which are divided into segments based on certain instructions (jumps, etc.). Then, scalable Block-gram features, including 4-dimensional block features and 8-dimensional attribute features, are mined for learning-based model training. Finally, feature contributions are calculated from SHAP values to measure the relationship between our features and the results of the detection model. In addition, six types of vulnerability labels are made on a dataset containing 33,885 contracts, and these knowledge-based features are evaluated using seven state-of-the-art learning algorithms, which show that the average detection latency speeds up 25× to 650× compared with features extracted by N-gram, while also enhancing the interpretability of the detection model.
Keywords: Smart contract, Bytecode & opcode, Knowledgeable features, Vulnerability detection, Feature contribution
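The segmentation step this abstract describes, splitting the runtime opcode sequence at jump-like instructions and mining low-dimensional per-block features, can be sketched as follows. The opcode set and the four features chosen here are illustrative, not the paper's exact Block-gram definition:

```python
# Split an EVM-style opcode sequence into blocks at control-flow
# instructions, then compute a small per-block feature vector.
SPLIT_OPS = {"JUMP", "JUMPI", "STOP", "RETURN", "REVERT"}

def split_blocks(opcodes):
    blocks, current = [], []
    for op in opcodes:
        current.append(op)
        if op in SPLIT_OPS:  # a jump/terminator closes the current block
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    return blocks

def block_features(block):
    # Illustrative 4-dimensional block feature vector:
    return [
        len(block),                                  # block length
        sum(op.startswith("PUSH") for op in block),  # stack pushes
        sum(op == "CALL" for op in block),           # external calls
        int(block[-1] in SPLIT_OPS),                 # explicitly terminated?
    ]

ops = ["PUSH1", "PUSH1", "ADD", "JUMPI", "CALL", "STOP"]
feats = [block_features(b) for b in split_blocks(ops)]
# feats -> [[4, 2, 0, 1], [2, 0, 1, 1]]
```

Because each block contributes a fixed-size vector, the feature space stays small regardless of contract size, which is the efficiency argument made above against N-gram features.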
9. BAHGRF^(3): Human gait recognition in the indoor environment using deep learning features fusion assisted framework and posterior probability moth flame optimisation
Authors: Muhammad Abrar Ahmad Khan, Muhammad Attique Khan, Ateeq Ur Rehman, Ahmed Ibrahim Alzahrani, Nasser Alalwan, Deepak Gupta, Saima Ahmed Rahin, Yudong Zhang. CAAI Transactions on Intelligence Technology, 2025, No. 2, pp. 387-401 (15 pages)
Biometric characteristics have played a vital role in security for the last few years. Human gait classification in video sequences is an important biometric attribute and is used for security purposes. A new framework for human gait classification in video sequences using deep learning (DL) fusion and posterior probability-based moth flame optimization (MFO) is proposed. In the first step, the video frames are resized and fine-tuned by two pre-trained lightweight DL models, EfficientNetB0 and MobileNetV2, selected for their top-5 accuracy and small number of parameters. Later, both models are trained through deep transfer learning, and the extracted deep features are fused using a voting scheme. In the last step, the authors develop a posterior probability-based MFO feature selection algorithm to select the best features. The selected features are classified using several supervised learning methods. The publicly available CASIA-B dataset was employed for the experimental process. On this dataset, the authors selected six angles (0°, 18°, 90°, 108°, 162°, and 180°) and obtained average accuracies of 96.9%, 95.7%, 86.8%, 90.0%, 95.1%, and 99.7%, respectively. Results demonstrate comparable improvement in accuracy and significantly lower computational time than recent state-of-the-art techniques.
Keywords: deep learning, feature fusion, feature optimization, gait classification, indoor environment, machine learning
10. Hyperspectral Image Super-Resolution Based on Spatial-Spectral-Frequency Multidimensional Features
Authors: Sifan Zheng, Tao Zhang, Haibing Yin, Hao Hu, Jian Jiang, Chenggang Yan. Journal of Beijing Institute of Technology, 2025, No. 1, pp. 28-41 (14 pages)
Due to the limitations of existing imaging hardware, obtaining high-resolution hyperspectral images is challenging. Hyperspectral image super-resolution (HSI SR) has been a very attractive research topic in computer vision, attracting the attention of many researchers. However, most HSI SR methods focus on the tradeoff between spatial resolution and spectral information, and cannot guarantee the efficient extraction of image information. In this paper, a multidimensional features network (MFNet) for HSI SR is proposed, which simultaneously learns and fuses the spatial, spectral, and frequency multidimensional features of HSI. Spatial features contain rich local details, spectral features contain the information and correlation between spectral bands, and frequency features reflect the global information of the image and can be used to obtain the global context of HSI. The fusion of the three features can better guide image super-resolution, yielding higher-quality high-resolution hyperspectral images. In MFNet, we use the frequency feature extraction module (FFEM) to extract the frequency feature. On this basis, a multidimensional features extraction module (MFEM) is designed to learn and fuse multidimensional features. In addition, experimental results on two public datasets demonstrate that MFNet achieves state-of-the-art performance.
Keywords: deep neural network, hyperspectral image, spatial feature, spectral information, frequency feature
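The "frequency feature" idea above, reading global image context out of the Fourier domain, can be illustrated on a 1-D signal with a naive DFT. The FFEM itself is a learned module; this sketch only shows why low-frequency bins carry global information:

```python
import cmath

# Naive DFT magnitudes: bin 0 holds the global (DC) content, higher bins
# hold progressively finer detail. An FFT would be used in practice.
def dft_magnitudes(x):
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N)))
            for k in range(N)]

flat = [1.0] * 8            # a constant "image row"
mags = dft_magnitudes(flat)
# All energy sits in bin 0: the signal is pure global structure.
assert abs(mags[0] - 8.0) < 1e-9
assert all(m < 1e-6 for m in mags[1:])
```

For a 2-D image the same reasoning applies per frequency pair, which is how a frequency branch complements local spatial convolutions.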
11. HybridLSTM: An Innovative Method for Road Scene Categorization Employing Hybrid Features
Authors: Sanjay P. Pande, Sarika Khandelwal, Ganesh K. Yenurkar, Rakhi D. Wajgi, Vincent O. Nyangaresi, Pratik R. Hajare, Poonam T. Agarkar. Computers, Materials & Continua, 2025, No. 9, pp. 5937-5975 (39 pages)
Recognizing road scene context from a single image remains a critical challenge for intelligent autonomous driving systems, particularly in dynamic and unstructured environments. While recent advancements in deep learning have significantly enhanced road scene classification, simultaneously achieving high accuracy, computational efficiency, and adaptability across diverse conditions continues to be difficult. To address these challenges, this study proposes HybridLSTM, a novel and efficient framework that integrates deep learning-based, object-based, and handcrafted feature extraction methods within a unified architecture. HybridLSTM is designed to classify four distinct road scene categories, crosswalk (CW), highway (HW), overpass/tunnel (OP/T), and parking (P), by leveraging multiple publicly available datasets, including Places-365, BDD100K, LabelMe, and KITTI, thereby promoting domain generalization. The framework fuses object-level features extracted using YOLOv5 and VGG19, scene-level global representations obtained from a modified VGG19, and fine-grained texture features captured through eight handcrafted descriptors. This hybrid feature fusion enables the model to capture both semantic context and low-level visual cues, which are critical for robust scene understanding. To model spatial arrangements and latent sequential dependencies present even in static imagery, the combined features are processed through a Long Short-Term Memory (LSTM) network, allowing the extraction of discriminative patterns across heterogeneous feature spaces. Extensive experiments conducted on 2725 annotated road scene images, with an 80:20 training-to-testing split, validate the effectiveness of the proposed model. HybridLSTM achieves a classification accuracy of 96.3%, a precision of 95.8%, a recall of 96.1%, and an F1-score of 96.0%, outperforming several existing state-of-the-art methods. These results demonstrate the robustness, scalability, and generalization capability of HybridLSTM across varying environments and scene complexities. Moreover, the framework is optimized to balance classification performance with computational efficiency, making it highly suitable for real-time deployment in embedded autonomous driving systems. Future work will focus on extending the model to multi-class detection within a single frame and optimizing it further for edge-device deployments to reduce computational overhead in practical applications.
Keywords: HybridLSTM, autonomous vehicles, road scene classification, critical requirement, global features, handcrafted features
12. Efficient Reconstruction of Spatial Features for Remote Sensing Image-Text Retrieval
Authors: ZHANG Weihang, CHEN Jialiang, ZHANG Wenkai, LI Xinming, GAO Xin, SUN Xian. Transactions of Nanjing University of Aeronautics and Astronautics, 2025, No. 1, pp. 101-111 (11 pages)
Remote sensing cross-modal image-text retrieval (RSCIR) can flexibly and subjectively retrieve remote sensing images using query text, and has recently received increasing attention from researchers. However, with the growing number of visual-language pre-training model parameters, direct transfer learning consumes a substantial amount of computational and storage resources. Moreover, recently proposed parameter-efficient transfer learning methods mainly focus on the reconstruction of channel features, ignoring the spatial features that are vital for modeling key entity relationships. To address these issues, we design an efficient transfer learning framework for RSCIR based on spatial feature efficient reconstruction (SPER). A concise and efficient spatial adapter is introduced to enhance the extraction of spatial relationships. The spatial adapter spatially reconstructs the features in the backbone with few parameters while incorporating prior information from the channel dimension. We conduct quantitative and qualitative experiments on two commonly used RSCIR datasets. Compared with traditional methods, our approach achieves an improvement of 3%-11% in the sumR metric. Compared with methods fine-tuning all parameters, our proposed method trains less than 1% of the parameters while maintaining about 96% of the overall performance.
Keywords: remote sensing cross-modal image-text retrieval (RSCIR), spatial features, channel features, contrastive learning, parameter-efficient transfer learning
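A residual bottleneck adapter is one common shape for the kind of parameter-efficient module this abstract describes. The sketch below is a generic adapter under that assumption, not the SPER spatial adapter itself; dimensions and values are illustrative:

```python
# Residual bottleneck adapter: out = x + W_up @ relu(W_down @ x).
# Only W_down/W_up are trained; the frozen backbone feature x passes through.
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def relu(v):
    return [max(0.0, x) for x in v]

def adapter(x, W_down, W_up):
    h = relu(matvec(W_down, x))               # project to a small bottleneck
    delta = matvec(W_up, h)                   # project back to feature size
    return [a + b for a, b in zip(x, delta)]  # residual connection

x = [1.0, 2.0, 3.0, 4.0]               # a backbone feature (dim 4)
W_down = [[0.1] * 4, [0.2] * 4]        # 4 -> 2 bottleneck
W_up = [[0.0, 0.0] for _ in range(4)]  # zero-init: adapter starts as identity
assert adapter(x, W_down, W_up) == x
```

Zero-initializing the up-projection is a common trick so training starts from the frozen backbone's behavior; only the adapter's few parameters are updated, matching the under-1% training budget reported above.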
13. Detection and analysis of Spartina alterniflora in Chongming East Beach using Sentinel-2 imagery and image texture features
Authors: Xinyu Mei, Zhongbiao Chen, Runxia Sun, Yijun He. Acta Oceanologica Sinica, 2025, No. 2, pp. 80-90 (11 pages)
Spartina alterniflora is now listed among the world's 100 most dangerous invasive species, severely affecting the ecological balance of coastal wetlands. Remote sensing technologies based on deep learning enable large-scale monitoring of Spartina alterniflora, but they require large datasets and have poor interpretability. A new method is proposed to detect Spartina alterniflora from Sentinel-2 imagery. First, to capture the high canopy cover and dense community characteristics of Spartina alterniflora, multi-dimensional shallow features are extracted from the imagery. Second, to detect different objects from satellite imagery, index features are extracted, and the statistical features of the Gray-Level Co-occurrence Matrix (GLCM) are derived using principal component analysis. Then, ensemble learning methods, including random forest, extreme gradient boosting, and light gradient boosting machine models, are employed for image classification. Meanwhile, Recursive Feature Elimination with Cross-Validation (RFECV) is used to select the best feature subset. Finally, to enhance the interpretability of the models, the best features are utilized to classify multi-temporal images, and SHapley Additive exPlanations (SHAP) is combined with these classifications to explain the model prediction process. The method is validated using Sentinel-2 imagery and previous observations of Spartina alterniflora on Chongming Island. It is found that the model combining image texture features such as GLCM variance can significantly improve the detection accuracy of Spartina alterniflora by about 8% compared with the model without image texture features. Through multiple model comparisons and feature selection via RFECV, the selected model and eight features demonstrated good classification accuracy when applied to data from different time periods, proving that feature reduction can effectively enhance model generalization. Additionally, visualizing model decisions using SHAP revealed that the image texture feature component_1_GLCMVariance is particularly important for identifying each land cover type.
Keywords: texture features, Recursive Feature Elimination with Cross-Validation (RFECV), SHapley Additive exPlanations (SHAP), Sentinel-2 time-series imagery, multi-model comparison
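The GLCM statistics used above (and again in the lung cancer study later in this list) can be computed directly. This pure-Python sketch builds a horizontal-offset GLCM on a toy image and evaluates the contrast statistic; the image and level count are illustrative:

```python
# Gray-level co-occurrence matrix for a horizontal offset of one pixel,
# normalized to co-occurrence probabilities.
def glcm(image, levels):
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):  # horizontally adjacent pixel pairs
            m[a][b] += 1
    total = sum(sum(r) for r in m) or 1
    return [[v / total for v in r] for r in m]

def glcm_contrast(p):
    # Contrast weights each pair probability by its squared gray-level gap.
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

img = [[0, 0, 1],
       [1, 2, 2],
       [0, 1, 2]]
contrast = glcm_contrast(glcm(img, levels=3))  # 2/3 for this toy image
```

Variance, entropy, and inverse difference moment, also mentioned in these abstracts, are further sums over the same normalized matrix.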
14. A Novelty Framework in Image-Captioning with Visual Attention-Based Refined Visual Features
Authors: Alaa Thobhani, Beiji Zou, Xiaoyan Kui, Amr Abdussalam, Muhammad Asim, Mohammed ELAffendi, Sajid Shah. Computers, Materials & Continua, 2025, No. 3, pp. 3943-3964 (22 pages)
Image captioning, the task of generating descriptive sentences for images, has advanced significantly with the integration of semantic information. However, traditional models still rely on static visual features that do not evolve with the changing linguistic context, which can hinder the ability to form meaningful connections between the image and the generated captions. This limitation often leads to captions that are less accurate or descriptive. In this paper, we propose a novel approach to enhance image captioning by introducing dynamic interactions where visual features continuously adapt to the evolving linguistic context. Our model strengthens the alignment between visual and linguistic elements, resulting in more coherent and contextually appropriate captions. Specifically, we introduce two innovative modules: the Visual Weighting Module (VWM) and the Enhanced Features Attention Module (EFAM). The VWM adjusts visual features using partial attention, enabling dynamic reweighting of the visual inputs, while the EFAM further refines these features to improve their relevance to the generated caption. By continuously adjusting visual features in response to the linguistic context, our model bridges the gap between static visual features and dynamic language generation. We demonstrate the effectiveness of our approach through experiments on the MS-COCO dataset, where our method outperforms state-of-the-art techniques in terms of caption quality and contextual relevance. Our results show that dynamic visual-linguistic alignment significantly enhances image captioning performance.
Keywords: image captioning, visual attention, deep learning, visual features
15. Environmental Features of Heavy Precipitation under Favorable Synoptic Patterns: A Lesson from the 2021 Henan Extreme Precipitation Event
Authors: Nan LV, Zhongxi LIN, Ji NIE, Zhiyong MENG, Ping LU. Advances in Atmospheric Sciences, 2025, No. 9, pp. 1863-1875 (13 pages)
In July 2021, a catastrophic extreme precipitation (EP) event occurred in Henan Province, China, resulting in considerable human and economic losses. The synoptic pattern during this event was distinctive, characterized by the presence of two typhoons and substantial water transport into Henan. However, a favorable synoptic pattern alone does not guarantee the occurrence of heavy precipitation in Henan. This study investigates the key environmental features critical for EP under synoptic patterns similar to the 2021 Henan extreme event. It is found that cold clouds are better aggregated on EP days, accompanied by beneficial environmental features such as enhanced moisture conditions, stronger updrafts, and greater atmospheric instability. The temporal evolution of these environmental features shows a leading signal of one to three days. These results suggest the importance of combining the synoptic pattern and environmental features in the forecasting of heavy precipitation events.
Keywords: extreme precipitation, synoptic pattern, environmental features
16. A lung cancer early-warning risk model based on facial diagnosis image features
Authors: Yulin SHI, Shuyi ZHANG, Jiayi LIU, Wenlian CHEN, Lingshuang LIU, Ling XU, Jiatuo XU. Digital Chinese Medicine, 2025, No. 3, pp. 351-362 (12 pages)
Objective: To explore the feasibility of constructing a lung cancer early-warning risk model based on facial image features, providing novel insights into the early screening of lung cancer. Methods: This study included patients with pulmonary nodules diagnosed at the Physical Examination Center of Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine from November 1, 2019 to December 31, 2024, as well as patients with lung cancer diagnosed in the Oncology Departments of Yueyang Hospital of Integrated Traditional Chinese and Western Medicine and Longhua Hospital during the same period. The facial image information of patients with pulmonary nodules and lung cancer was collected using the TFDA-1 tongue and facial diagnosis instrument, and facial diagnosis features were extracted from it by deep learning technology. Statistical analysis was conducted on the objective facial diagnosis characteristics of the two groups of participants to explore the differences in their facial image characteristics, and least absolute shrinkage and selection operator (LASSO) regression was used to screen the characteristic variables. Based on the screened feature variables, four machine learning methods, random forest, logistic regression, support vector machine (SVM), and gradient boosting decision tree (GBDT), were used to establish lung cancer classification models independently. Model performance was evaluated by indicators such as sensitivity, specificity, F1 score, precision, accuracy, the area under the receiver operating characteristic (ROC) curve (AUC), and the area under the precision-recall curve (AP). Results: A total of 1275 patients with pulmonary nodules and 1623 patients with lung cancer were included in this study. After propensity score matching (PSM) to adjust for gender and age, 535 patients were finally included in each of the pulmonary nodule and lung cancer groups. There were significant differences in multiple color space metrics (such as R, G, B, V, L, a, b, Cr, H, Y, and Cb) and texture metrics [such as gray-level co-occurrence matrix (GLCM)-contrast (CON) and GLCM-inverse difference moment (IDM)] between the two groups (P < 0.05). To construct a classification model, LASSO regression was used to select 63 key features from the initial 136 facial features. Based on this feature set, the SVM model demonstrated the best performance after 10-fold stratified cross-validation, achieving an average AUC of 0.8729 and an average accuracy of 0.7990 on the internal test set. Further validation on an independent test set confirmed the model's robust performance (AUC = 0.8233, accuracy = 0.7290), indicating good generalization ability. Feature importance analysis demonstrated that color space indicators and the whole-face/lip Cr components (including color-B-0, wholecolor-Cr, and lipcolor-Cr) were the core factors in the model's classification decisions, while texture indicators [GLCM-angular second moment (ASM)_2, GLCM-IDM_1, GLCM-CON_1, GLCM-entropy (ENT)_2] played an important auxiliary role. Conclusion: The facial image features of patients with lung cancer and pulmonary nodules show significant differences in color and texture characteristics in multiple areas. The various models constructed based on facial image features all demonstrate good performance, indicating that facial image features can serve as potential biomarkers for lung cancer risk prediction and providing a non-invasive, feasible new approach for early lung cancer screening.
Keywords: Inspection, Facial features, Lung cancer, Early-warning risk, Machine learning
17. Correlation of pathological types and imaging features in pancreatic cancer
Authors: Qiu-Long Wang, Xiao-Jun Yang. World Journal of Gastrointestinal Oncology, 2025, No. 8, pp. 420-424 (5 pages)
The study by Luo et al published in the World Journal of Gastrointestinal Oncology presents a thorough and scientific methodology. Pancreatic cancer is the most challenging malignancy in the digestive system, exhibiting one of the highest mortality rates associated with cancer globally. The delayed onset of symptoms and diagnosis often results in metastasis or local progression of the cancer, thereby constraining treatment options and outcomes. For these patients, prompt tumour identification and treatment strategising are crucial. The present objective of pancreatic cancer research is to examine the correlation between various pathological types and imaging data to facilitate therapeutic decision-making. This study aims to clarify the correlation between diverse pathological markers and imaging in pancreatic cancer patients, with prospective longitudinal studies potentially providing novel insights into the diagnosis and treatment of pancreatic cancer.
Keywords: Pancreatic cancer, Pathological types, Imaging features, Association, Noninvasive tests
18. Research on the estimation of wheat AGB at the entire growth stage based on improved convolutional features
Authors: Tao Liu, Jianliang Wang, Jiayi Wang, Yuanyuan Zhao, Hui Wang, Weijun Zhang, Zhaosheng Yao, Shengping Liu, Xiaochun Zhong, Chengming Sun. Journal of Integrative Agriculture, 2025, No. 4, pp. 1403-1423 (21 pages)
The wheat above-ground biomass (AGB) is an important index of vegetation life activity, which is of great significance for wheat growth monitoring and yield prediction. Traditional biomass estimation methods include sample surveys and harvesting statistics. Although these methods have high estimation accuracy, they are time-consuming, destructive, and difficult to apply for large-scale biomass monitoring. The main objective of this study is to optimize traditional remote sensing methods to estimate the wheat AGB based on improved convolutional features (CFs). Low-cost unmanned aerial vehicles (UAVs) were used as the main data acquisition equipment. This study acquired RGB camera (RGB) and multi-spectral (MS) image data of the wheat population canopy for two wheat varieties at five key growth stages. Then, field measurements were conducted to obtain the actual wheat biomass data for validation. Based on the remote sensing indices (RSIs), structural features (SFs), and CFs, this study proposed a new feature named AUR-50 (multi-source combination based on convolutional feature optimization) to estimate the wheat AGB. The results show that AUR-50 could estimate the wheat AGB more accurately than RSIs and SFs, and the average R^(2) exceeded 0.77. In the overwintering period, AUR-50_(MS) (multi-source combination with convolutional feature optimization using multispectral imagery) had the highest estimation accuracy (R^(2) of 0.88). In addition, AUR-50 reduced the effect of vegetation index saturation on the biomass estimation accuracy by adding CFs, where the highest R^(2) was 0.69 at the flowering stage. The results of this study provide an effective method to evaluate the AGB in wheat with high throughput and a research reference for the phenotypic parameters of other crops.
Keywords: wheat; above-ground biomass; UAV; entire growth stage; convolutional feature
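The R^(2) values reported in the abstract above are the coefficient of determination between measured and estimated AGB. As a worked illustration (the biomass numbers below are made up for the example, not the study's data):

```python
def r_squared(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. UAV-estimated AGB values (t/ha), illustration only.
measured = [4.2, 6.8, 9.1, 11.5]
estimated = [4.5, 6.5, 9.4, 11.0]
print(round(r_squared(measured, estimated), 3))  # -> 0.982
```

A model that merely predicted the mean of the measured values would score R^(2) = 0; the saturation effect mentioned in the abstract lowers R^(2) because estimates stop tracking further biomass increases.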
Face recognition algorithm using collaborative sparse representation based on CNN features
19
Authors: ZHAO Shilin, XU Chengjun, LIU Changrong. Journal of Measurement Science and Instrumentation, 2025, No. 1, pp. 85-95 (11 pages)
Considering that the accuracy of traditional sparse representation models is not high under the influence of multiple complex environmental factors, this study focuses on improving feature extraction and model construction. Firstly, the convolutional neural network (CNN) features of the face are extracted by a trained deep learning network. Next, steady-state and dynamic classifiers for face recognition are constructed based on the CNN features and Haar features, respectively. Two-stage sparse representation is introduced in constructing the steady-state classifier, and feature templates with high reliability are dynamically selected as alternative templates from the sparse representation template dictionary built from the CNN features. Finally, the face recognition result is obtained by combining the classification results of the steady-state classifier and the dynamic classifier. On this basis, the feature weights of the steady-state classifier template are adjusted in real time and the dictionary set is dynamically updated to reduce the probability of irrelevant features entering the dictionary set. The average recognition accuracy of this method is 94.45% on the CMU PIE face database and 96.58% on the AR face database, a significant improvement over traditional face recognition methods.
Keywords: sparse representation; deep learning; face recognition; dictionary update; feature extraction
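The core idea behind sparse-representation classification, as used by the steady-state classifier above, is to assign a test sample to the class whose templates reconstruct it with the smallest residual. The toy sketch below uses a closed-form single-coefficient fit per template instead of the paper's two-stage L1-regularized sparse coding; the 2-D feature vectors and class labels are hypothetical.

```python
def class_residual(y, template):
    # Best single-coefficient fit alpha * template ~ y (closed form),
    # then the reconstruction residual norm. A toy stand-in for the
    # sparse-coding step, which normally solves an L1-regularized
    # problem over the whole template dictionary at once.
    alpha = sum(t * v for t, v in zip(template, y)) / sum(t * t for t in template)
    return sum((v - alpha * t) ** 2 for v, t in zip(y, template)) ** 0.5

def classify(y, dictionary):
    # dictionary maps class label -> list of feature templates; the class
    # whose best template reconstructs y with the smallest residual wins.
    return min(dictionary,
               key=lambda c: min(class_residual(y, t) for t in dictionary[c]))

# Hypothetical 2-D feature vectors, illustration only.
templates = {"person_A": [[1.0, 0.0]], "person_B": [[0.0, 1.0]]}
print(classify([0.9, 0.1], templates))  # closest to person_A's template
```

The dictionary-update step described in the abstract would correspond to adding or removing templates from the lists in `templates` as reliable features are identified at run time.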
Salient Features Guided Augmentation for Enhanced Deep Learning Classification in Hematoxylin and Eosin Images
20
Authors: Tengyue Li, Shuangli Song, Jiaming Zhou, Simon Fong, Geyue Li, Qun Song, Sabah Mohammed, Weiwei Lin, Juntao Gao. Computers, Materials & Continua, 2025, No. 7, pp. 1711-1730 (20 pages)
Hematoxylin and Eosin (H&E) images, widely used in digital pathology, often pose challenges due to their limited color richness, hindering the differentiation of subtle cell features crucial for accurate classification. Enhancing the visibility of these elusive cell features helps train robust deep-learning models. However, the selection and application of image processing techniques for such enhancement have not been systematically explored in the research community. To address this challenge, we introduce Salient Features Guided Augmentation (SFGA), an approach that strategically integrates machine learning and image processing. SFGA utilizes machine learning algorithms to identify crucial features within cell images, subsequently mapping these features to appropriate image processing techniques to enhance training images. By emphasizing salient features and aligning them with corresponding image processing methods, SFGA is designed to enhance the discriminating power of deep learning models in cell classification tasks. Our research undertakes a series of experiments, each exploring the performance of different datasets and data enhancement techniques in classifying cell types, highlighting the significance of data quality and enhancement in mitigating overfitting and distinguishing cell characteristics. Specifically, SFGA focuses on identifying tumor cells in tissue for extranodal extension detection, with the SFGA-enhanced dataset showing notable advantages in accuracy. We conducted a preliminary study of five experiments, among which the accuracy of the pleomorphism experiment improved significantly from 50.81% to 95.15%. The accuracy of the other four experiments also increased, with improvements ranging from 3 to 43 percentage points. Our preliminary study shows the potential to enhance the diagnostic accuracy of deep learning models and proposes a systematic approach that could improve cancer diagnosis, contributing a first step toward using SFGA in medical image enhancement.
Keywords: image processing; feature extraction; deep learning; machine learning; data augmentation
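The feature-to-technique mapping at the heart of SFGA can be pictured as a dispatch table from a detected image weakness to an enhancement operation. The sketch below is a deliberately simplified, hand-coded illustration; in the paper the mapping is driven by machine-learned salient features, and the weakness labels and functions here are hypothetical.

```python
def contrast_stretch(pixels):
    # Linearly rescale gray levels to the full 0-255 range.
    lo, hi = min(pixels), max(pixels)
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

def gamma_correct(pixels, gamma=0.5):
    # Brighten dark regions; gamma < 1 lifts low intensities.
    return [round(255 * (p / 255) ** gamma) for p in pixels]

# Hypothetical mapping from a detected salient weakness to an enhancement.
# SFGA learns which technique suits which feature; this table is hard-coded
# purely for illustration.
ENHANCEMENTS = {
    "low_contrast": contrast_stretch,
    "under_exposed": gamma_correct,
}

def enhance(pixels, detected_weakness):
    return ENHANCEMENTS[detected_weakness](pixels)

print(enhance([50, 100, 150], "low_contrast"))  # -> [0, 128, 255]
```

Each training image would pass through only the enhancements matched to its detected weaknesses, rather than a fixed augmentation pipeline applied uniformly.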