Journal Articles
155,020 articles found
1. An Expert System to Detect Political Arabic Articles Orientation Using CatBoost Classifier Boosted by Multi-Level Features
Authors: Saad M. Darwish, Abdul Rahman M. Sabri, Dhafar Hamed Abd, Adel A. Elzoghabi. Computer Systems Science & Engineering, 2024, No. 6, pp. 1595-1624 (30 pages)
The number of blogs and other forms of opinionated online content has increased dramatically in recent years. Many fields, including academia and national security, place an emphasis on automated political article orientation detection. Political articles (especially in the Arab world) are different from other articles due to their subjectivity, in which the author's beliefs and political affiliation might have a significant influence on a political article. With categories representing the main political ideologies, this problem may be thought of as a subset of text categorization (classification). In general, the performance of machine learning models for text classification is sensitive to hyperparameter settings. Furthermore, the feature vector used to represent a document must capture, to some extent, the complex semantics of natural language. To this end, this paper presents an intelligent system to detect political Arabic article orientation that adapts the categorical boosting (CatBoost) method combined with a multi-level feature concept. Extracting features at multiple levels can enhance the model's ability to discriminate between different classes or patterns. Each level may capture different aspects of the input data, contributing to a more comprehensive representation. CatBoost, a robust and efficient gradient-boosting algorithm, is utilized to effectively learn and predict the complex relationships between these features and the political orientation labels associated with the articles. A dataset of political Arabic texts collected from diverse sources, including postings and articles, is used to assess the suggested technique. Conservative, reform, and revolutionary are the three subcategories of these opinions. The results of this study demonstrate that, compared to other frequently used machine learning models for text classification, the CatBoost method using multi-level features performs better, with an accuracy of 98.14%.
Keywords: Political articles orientation detection; CatBoost classifier; multi-level features; context-based classification; social networks; machine learning; stylometric features
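The multi-level feature idea in this abstract (word-level, character-level, and stylometric signals feeding one classifier) can be sketched in plain Python. This is an illustrative toy extractor, not the paper's actual feature set, and the CatBoost training stage is omitted:

```python
import re
from collections import Counter

def multi_level_features(text):
    """Toy multi-level feature extractor (hypothetical illustration):
    combines word-level, character-level, and stylometric features."""
    words = re.findall(r"\w+", text.lower())
    # Level 1: word-level -- counts of the basic lexical units.
    word_counts = Counter(words)
    # Level 2: character-level -- bigrams capture sub-word patterns.
    char_bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
    # Level 3: stylometric -- document-level style statistics.
    stylometric = {
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "num_sentences": max(len(re.split(r"[.!?]+", text.strip())) - 1, 1),
        "type_token_ratio": len(word_counts) / max(len(words), 1),
    }
    return {"words": word_counts, "chars": char_bigrams, "style": stylometric}

feats = multi_level_features("Reform now. The reform movement grows.")
```

In a full pipeline, the three levels would be concatenated into one vector per document before being handed to a gradient-boosting classifier.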
2. Multi-relation spatiotemporal graph residual network model with multi-level feature attention: A novel approach for landslide displacement prediction
Authors: Ziqian Wang, Xiangwei Fang, Wengang Zhang, Xuanming Ding, Luqi Wang, Chao Chen. Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 7, pp. 4211-4226 (16 pages)
Accurate prediction of landslide displacement is crucial for effective early warning of landslide disasters. While most existing prediction methods focus on time-series forecasting for individual monitoring points, there is limited research on the spatiotemporal characteristics of landslide deformation. This paper proposes a novel Multi-Relation Spatiotemporal Graph Residual Network with Multi-Level Feature Attention (MFA-MRSTGRN) that effectively improves the prediction performance of landslide displacement through spatiotemporal fusion. The model integrates internal seepage factors, as data feature enhancements, with external triggering factors, allowing accurate capture of the complex spatiotemporal characteristics of landslide displacement and the construction of a multi-source heterogeneous dataset. The MFA-MRSTGRN model incorporates dynamic graph theory and four key modules: multi-level feature attention, temporal-residual decomposition, spatial multi-relational graph convolution, and spatiotemporal fusion prediction. This comprehensive approach enables efficient analysis of multi-source heterogeneous datasets, facilitating adaptive exploration of the evolving multi-relational, multi-dimensional spatiotemporal complexities in landslides. When applying this model to predict the displacement of the Liangshuijing landslide, we demonstrate that the MFA-MRSTGRN model surpasses traditional models such as random forest (RF), long short-term memory (LSTM), and spatial-temporal graph convolutional network (ST-GCN) models on various evaluation metrics, including mean absolute error (MAE = 1.27 mm), root mean square error (RMSE = 1.49 mm), mean absolute percentage error (MAPE = 0.026), and R-squared (R² = 0.88). Furthermore, feature ablation experiments indicate that incorporating internal seepage factors improves the predictive performance of landslide displacement models. This research provides an advanced and reliable method for landslide displacement prediction.
Keywords: Landslide displacement prediction; Spatiotemporal fusion; Dynamic graph; Data feature enhancement; Multi-level feature attention
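The four evaluation metrics quoted in this abstract (MAE, RMSE, MAPE, R²) are standard and can be computed directly. The displacement series below are invented illustration values, not the Liangshuijing monitoring data:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MAE, RMSE, MAPE, and R^2 for a prediction series."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mape = sum(abs(e) / abs(t) for e, t in zip(errors, y_true)) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)             # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total sum of squares
    r2 = 1 - ss_res / ss_tot
    return mae, rmse, mape, r2

mae, rmse, mape, r2 = regression_metrics([10.0, 12.0, 14.0, 16.0],
                                         [10.5, 11.5, 14.5, 15.5])
```

Note that MAPE here is reported as a fraction (as in the abstract's 0.026), not a percentage.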
3. Correction: A Lightweight Approach for Skin Lesion Detection through Optimal Features Fusion
Authors: Khadija Manzoor, Fiaz Majeed, Ansar Siddique, Talha Meraj, Hafiz Tayyab Rauf, Mohammed A. El-Meligy, Mohamed Sharaf, Abd Elatty E. Abd Elgawad. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, p. 1459 (1 page)
In the article "A Lightweight Approach for Skin Lesion Detection through Optimal Features Fusion" by Khadija Manzoor, Fiaz Majeed, Ansar Siddique, Talha Meraj, Hafiz Tayyab Rauf, Mohammed A. El-Meligy, Mohamed Sharaf, and Abd Elatty E. Abd Elgawad, Computers, Materials & Continua, 2022, Vol. 70, No. 1, pp. 1617-1630, DOI: 10.32604/cmc.2022.018621, URL: https://www.techscience.com/cmc/v70n1/44361, there was an error regarding the affiliation of the author Hafiz Tayyab Rauf. Instead of "Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, UK", the affiliation should be "Independent Researcher, Bradford, BD80HS, UK".
Keywords: Fusion; Skin; Feature
4. Retrospective analysis of pathological types and imaging features in pancreatic cancer: A comprehensive study
Authors: Yang-Gang Luo, Mei Wu, Hong-Guang Chen. World Journal of Gastrointestinal Oncology (SCIE), 2025, No. 1, pp. 121-129 (9 pages)
BACKGROUND: Pancreatic cancer remains one of the most lethal malignancies worldwide, with a poor prognosis often attributed to late diagnosis. Understanding the correlation between pathological type and imaging features is crucial for early detection and appropriate treatment planning. AIM: To retrospectively analyze the relationship between different pathological types of pancreatic cancer and their corresponding imaging features. METHODS: We retrospectively analyzed the data of 500 patients diagnosed with pancreatic cancer between January 2010 and December 2020 at our institution. Pathological types were determined by histopathological examination of the surgical specimens or biopsy samples. The imaging features were assessed using computed tomography, magnetic resonance imaging, and endoscopic ultrasound. Statistical analyses were performed to identify significant associations between pathological types and specific imaging characteristics. RESULTS: There were 320 (64%) cases of pancreatic ductal adenocarcinoma, 75 (15%) of intraductal papillary mucinous neoplasms, 50 (10%) of neuroendocrine tumors, and 55 (11%) of other rare types. Distinct imaging features were identified in each pathological type. Pancreatic ductal adenocarcinoma typically presents as a hypodense mass with poorly defined borders on computed tomography, whereas intraductal papillary mucinous neoplasms present as characteristic cystic lesions with mural nodules. Neuroendocrine tumors often appear as hypervascular lesions on contrast-enhanced imaging. Statistical analysis revealed significant correlations between specific imaging features and pathological types (P < 0.001). CONCLUSION: This study demonstrated a strong association between the pathological types of pancreatic cancer and imaging features. These findings can enhance the accuracy of noninvasive diagnosis and guide personalized treatment approaches.
Keywords: Pancreatic cancer; Pathological types; Imaging features; Retrospective analysis; Diagnostic accuracy
5. New Features and New Challenges of U.S.-Europe Relations Under Trump 2.0 (Cited: 1)
Author: Zhao Huaipu. Contemporary World, 2025, No. 3, pp. 47-52 (6 pages)
During Donald Trump's first term, the "Trump Shock" brought world politics into an era of uncertainties and pulled the transatlantic alliance down to its lowest point in history. The Trump 2.0 tsunami brewed by the 2024 presidential election of the United States has plunged U.S.-Europe relations into more gloomy waters, ushering in a more complex and turbulent period of adjustment.
Keywords: new features; turbulent period; Trump; U.S.-Europe relations; presidential election; new challenges; uncertainties; transatlantic alliance
6. BDMFuse: Multi-scale network fusion for infrared and visible images based on base and detail features
Authors: SI Hai-Ping, ZHAO Wen-Rui, LI Ting-Ting, LI Fei-Tao, Fernando Bacao, SUN Chang-Xia, LI Yan-Ling. Journal of Infrared and Millimeter Waves (PKU Core), 2025, No. 2, pp. 289-298 (10 pages)
The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible images. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which are used to extract low-frequency and high-frequency information from the image. This extraction may leave some information uncaptured, so a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
Keywords: infrared image; visible image; image fusion; encoder-decoder; multi-scale features
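The base/detail split described in this abstract (low-frequency structure versus high-frequency detail) can be illustrated on a 1-D signal: a moving average plays the role of the base branch, the residual plays the role of the detail branch, and a simple fusion rule combines two sources. This is only a minimal sketch of the decomposition idea, not the paper's learned encoder:

```python
def base_detail_split(signal, radius=1):
    """Moving-average 'base' (low-frequency) plus residual 'detail'
    (high-frequency); by construction base + detail == signal."""
    n = len(signal)
    base = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        base.append(sum(signal[lo:hi]) / (hi - lo))
    detail = [s - b for s, b in zip(signal, base)]
    return base, detail

def fuse(sig_a, sig_b):
    """Average the bases (shared structure); keep the larger-magnitude
    detail (salient edges); recombine into one fused signal."""
    base_a, det_a = base_detail_split(sig_a)
    base_b, det_b = base_detail_split(sig_b)
    return [(ba + bb) / 2 + max(da, db, key=abs)
            for ba, bb, da, db in zip(base_a, base_b, det_a, det_b)]

# A "hot spot" source (infrared-like) fused with a flat source (visible-like).
fused = fuse([0.0, 0.0, 10.0, 0.0, 0.0], [2.0, 2.0, 2.0, 2.0, 2.0])
```

The max-by-magnitude detail rule is what preserves the salient peak of the first source in the fused output.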
7. Block-gram: Mining knowledgeable features for efficiently smart contract vulnerability detection
Authors: Xueshuo Xie, Haolong Wang, Zhaolong Jian, Yaozheng Fang, Zichun Wang, Tao Li. Digital Communications and Networks, 2025, No. 1, pp. 1-12 (12 pages)
Smart contracts are widely used on the blockchain to implement complex transactions, such as decentralized applications on Ethereum. Effective vulnerability detection for large-scale smart contracts is critical, as attacks on smart contracts often cause huge economic losses. Since it is difficult to repair and update smart contracts, vulnerabilities must be found before deployment. However, code analysis, which requires traversing paths, and learning methods, which require many features to be trained, are too time-consuming to detect large-scale on-chain contracts. Learning-based methods obtain detection models from a feature space, in contrast to code-analysis methods such as symbolic execution. But existing features lack interpretability of the detection results and the trained model; worse, the large-scale feature space also hurts detection efficiency. This paper focuses on improving detection efficiency by reducing the dimensionality of the features, combined with expert knowledge. A feature extraction model, Block-gram, is proposed to form low-dimensional knowledge-based features from bytecode. First, the metadata is separated and the runtime code is converted into a sequence of opcodes, which is divided into segments at certain instructions (jumps, etc.). Then, scalable Block-gram features, including 4-dimensional block features and 8-dimensional attribute features, are mined for training the learning-based model. Finally, feature contributions are calculated from SHAP values to measure the relationship between our features and the results of the detection model. In addition, six types of vulnerability labels are assigned on a dataset containing 33,885 contracts, and these knowledge-based features are evaluated using seven state-of-the-art learning algorithms, which show that the average detection latency speeds up 25× to 650× compared with features extracted by N-gram, while also enhancing the interpretability of the detection model.
Keywords: Smart contract; Bytecode & opcode; Knowledgeable features; Vulnerability detection; Feature contribution
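The segmentation step this abstract describes (dividing an opcode sequence into blocks at jump-like instructions) can be sketched directly. The opcode names are standard EVM mnemonics, but the example sequence and the choice of terminator set are illustrative assumptions, not the paper's exact rules:

```python
# Control-flow instructions that close a block (an assumed terminator set).
BLOCK_ENDERS = {"JUMP", "JUMPI", "STOP", "RETURN", "REVERT"}

def split_blocks(opcodes):
    """Split a flat opcode sequence into basic-block-like segments."""
    blocks, current = [], []
    for op in opcodes:
        current.append(op)
        if op in BLOCK_ENDERS:      # a terminator closes the current block
            blocks.append(current)
            current = []
    if current:                     # trailing instructions with no terminator
        blocks.append(current)
    return blocks

ops = ["PUSH1", "PUSH1", "ADD", "JUMPI", "CALLVALUE", "ISZERO", "JUMP", "STOP"]
blocks = split_blocks(ops)
# Simple per-block attributes, e.g. block count and block lengths,
# would then feed the low-dimensional Block-gram feature vector.
lengths = [len(b) for b in blocks]
```

Compared with raw N-grams over the whole opcode stream, features computed per block stay low-dimensional regardless of contract size.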
8. BAHGRF³: Human gait recognition in the indoor environment using deep learning features fusion assisted framework and posterior probability moth flame optimisation
Authors: Muhammad Abrar Ahmad Khan, Muhammad Attique Khan, Ateeq Ur Rehman, Ahmed Ibrahim Alzahrani, Nasser Alalwan, Deepak Gupta, Saima Ahmed Rahin, Yudong Zhang. CAAI Transactions on Intelligence Technology, 2025, No. 2, pp. 387-401 (15 pages)
Biometric characteristics have played a vital role in security for the last few years. Human gait classification in video sequences is an important biometric attribute used for security purposes. A new framework for human gait classification in video sequences using deep learning (DL) fusion and posterior probability-based moth flame optimization (MFO) is proposed. In the first step, the video frames are resized and fine-tuned by two pre-trained lightweight DL models, EfficientNetB0 and MobileNetV2. Both models are selected based on their top-5 accuracy and small number of parameters. Later, both models are trained through deep transfer learning, and the extracted deep features are fused using a voting scheme. In the last step, the authors develop a posterior probability-based MFO feature selection algorithm to select the best features. The selected features are classified using several supervised learning methods. The publicly available CASIA-B dataset has been employed for the experimental process. On this dataset, the authors selected six angles (0°, 18°, 90°, 108°, 162°, and 180°) and obtained average accuracies of 96.9%, 95.7%, 86.8%, 90.0%, 95.1%, and 99.7%, respectively. Results demonstrate a comparable improvement in accuracy and a significantly reduced computational time against recent state-of-the-art techniques.
Keywords: deep learning; feature fusion; feature optimization; gait classification; indoor environment; machine learning
9. Hyperspectral Image Super-Resolution Based on Spatial-Spectral-Frequency Multidimensional Features
Authors: Sifan Zheng, Tao Zhang, Haibing Yin, Hao Hu, Jian Jiang, Chenggang Yan. Journal of Beijing Institute of Technology, 2025, No. 1, pp. 28-41 (14 pages)
Due to the limitations of existing imaging hardware, obtaining high-resolution hyperspectral images is challenging. Hyperspectral image super-resolution (HSI SR) has been a very attractive research topic in computer vision, attracting the attention of many researchers. However, most HSI SR methods focus on the tradeoff between spatial resolution and spectral information, and cannot guarantee the efficient extraction of image information. In this paper, a multidimensional features network (MFNet) for HSI SR is proposed, which simultaneously learns and fuses the spatial, spectral, and frequency multidimensional features of HSI. Spatial features contain rich local details, spectral features contain the information and correlation between spectral bands, and frequency features reflect the global information of the image and can be used to obtain the global context of HSI. The fusion of the three features can better guide image super-resolution, yielding higher-quality high-resolution hyperspectral images. In MFNet, we use a frequency feature extraction module (FFEM) to extract the frequency feature. On this basis, a multidimensional features extraction module (MFEM) is designed to learn and fuse multidimensional features. In addition, experimental results on two public datasets demonstrate that MFNet achieves state-of-the-art performance.
Keywords: deep neural network; hyperspectral image; spatial feature; spectral information; frequency feature
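The claim that frequency features "reflect the global information of the image" can be made concrete with a discrete Fourier transform: every frequency bin depends on every sample. The 1-D naive DFT below is an illustrative sketch only; the paper's FFEM operates on 2-D image feature maps:

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive O(n^2) DFT: each output bin sums over all input samples,
    which is exactly why frequency features are global."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A pure cosine concentrates its energy in two symmetric frequency bins.
n = 8
signal = [math.cos(2 * math.pi * 2 * t / n) for t in range(n)]
mags = dft_magnitudes(signal)
```

For this frequency-2 cosine, bins 2 and 6 each carry magnitude n/2 = 4 while the other bins are essentially zero, so the global periodic structure shows up in just two coefficients.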
10. HybridLSTM: An Innovative Method for Road Scene Categorization Employing Hybrid Features
Authors: Sanjay P. Pande, Sarika Khandelwal, Ganesh K. Yenurkar, Rakhi D. Wajgi, Vincent O. Nyangaresi, Pratik R. Hajare, Poonam T. Agarkar. Computers, Materials & Continua, 2025, No. 9, pp. 5937-5975 (39 pages)
Recognizing road scene context from a single image remains a critical challenge for intelligent autonomous driving systems, particularly in dynamic and unstructured environments. While recent advancements in deep learning have significantly enhanced road scene classification, simultaneously achieving high accuracy, computational efficiency, and adaptability across diverse conditions continues to be difficult. To address these challenges, this study proposes HybridLSTM, a novel and efficient framework that integrates deep learning-based, object-based, and handcrafted feature extraction methods within a unified architecture. HybridLSTM is designed to classify four distinct road scene categories, namely crosswalk (CW), highway (HW), overpass/tunnel (OP/T), and parking (P), by leveraging multiple publicly available datasets, including Places-365, BDD100K, LabelMe, and KITTI, thereby promoting domain generalization. The framework fuses object-level features extracted using YOLOv5 and VGG19, scene-level global representations obtained from a modified VGG19, and fine-grained texture features captured through eight handcrafted descriptors. This hybrid feature fusion enables the model to capture both semantic context and low-level visual cues, which are critical for robust scene understanding. To model spatial arrangements and latent sequential dependencies present even in static imagery, the combined features are processed through a Long Short-Term Memory (LSTM) network, allowing the extraction of discriminative patterns across heterogeneous feature spaces. Extensive experiments conducted on 2725 annotated road scene images, with an 80:20 training-to-testing split, validate the effectiveness of the proposed model. HybridLSTM achieves a classification accuracy of 96.3%, a precision of 95.8%, a recall of 96.1%, and an F1-score of 96.0%, outperforming several existing state-of-the-art methods. These results demonstrate the robustness, scalability, and generalization capability of HybridLSTM across varying environments and scene complexities. Moreover, the framework is optimized to balance classification performance with computational efficiency, making it highly suitable for real-time deployment in embedded autonomous driving systems. Future work will focus on extending the model to multi-class detection within a single frame and optimizing it further for edge-device deployment to reduce computational overhead in practical applications.
Keywords: HybridLSTM; autonomous vehicles; road scene classification; critical requirement; global features; handcrafted features
11. Efficient Reconstruction of Spatial Features for Remote Sensing Image-Text Retrieval
Authors: ZHANG Weihang, CHEN Jialiang, ZHANG Wenkai, LI Xinming, GAO Xin, SUN Xian. Transactions of Nanjing University of Aeronautics and Astronautics, 2025, No. 1, pp. 101-111 (11 pages)
Remote sensing cross-modal image-text retrieval (RSCIR) can flexibly and subjectively retrieve remote sensing images using query text, and has recently received increasing attention from researchers. However, with the increasing volume of visual-language pre-training model parameters, direct transfer learning consumes a substantial amount of computational and storage resources. Moreover, recently proposed parameter-efficient transfer learning methods mainly focus on the reconstruction of channel features, ignoring the spatial features that are vital for modeling key entity relationships. To address these issues, we design an efficient transfer learning framework for RSCIR based on spatial feature efficient reconstruction (SPER). A concise and efficient spatial adapter is introduced to enhance the extraction of spatial relationships. The spatial adapter is able to spatially reconstruct the features in the backbone with few parameters while incorporating prior information from the channel dimension. We conduct quantitative and qualitative experiments on two commonly used RSCIR datasets. Compared with traditional methods, our approach achieves an improvement of 3%-11% in the sumR metric. Compared with methods fine-tuning all parameters, our proposed method trains less than 1% of the parameters while maintaining about 96% of the overall performance.
Keywords: remote sensing cross-modal image-text retrieval (RSCIR); spatial features; channel features; contrastive learning; parameter-efficient transfer learning
12. Detection and analysis of Spartina alterniflora in Chongming East Beach using Sentinel-2 imagery and image texture features
Authors: Xinyu Mei, Zhongbiao Chen, Runxia Sun, Yijun He. Acta Oceanologica Sinica, 2025, No. 2, pp. 80-90 (11 pages)
Spartina alterniflora is now listed among the world's 100 most dangerous invasive species, severely affecting the ecological balance of coastal wetlands. Remote sensing technologies based on deep learning enable large-scale monitoring of Spartina alterniflora, but they require large datasets and have poor interpretability. A new method is proposed to detect Spartina alterniflora from Sentinel-2 imagery. Firstly, to capture the high canopy cover and dense community characteristics of Spartina alterniflora, multi-dimensional shallow features are extracted from the imagery. Secondly, to detect different objects from satellite imagery, index features are extracted, and the statistical features of the Gray-Level Co-occurrence Matrix (GLCM) are derived using principal component analysis. Then, ensemble learning methods, including random forest, extreme gradient boosting, and light gradient boosting machine models, are employed for image classification. Meanwhile, Recursive Feature Elimination with Cross-Validation (RFECV) is used to select the best feature subset. Finally, to enhance the interpretability of the models, the best features are used to classify multi-temporal images, and SHapley Additive exPlanations (SHAP) is combined with these classifications to explain the model prediction process. The method is validated using Sentinel-2 imagery and previous observations of Spartina alterniflora on Chongming Island. It is found that the model combining image texture features such as GLCM covariance can significantly improve the detection accuracy of Spartina alterniflora, by about 8% compared with the model without image texture features. Through multiple model comparisons and feature selection via RFECV, the selected model and eight features demonstrated good classification accuracy when applied to data from different time periods, proving that feature reduction can effectively enhance model generalization. Additionally, visualizing model decisions using SHAP revealed that the image texture feature component_1_GLCMVariance is particularly important for identifying each land cover type.
Keywords: texture features; Recursive Feature Elimination with Cross-Validation (RFECV); SHapley Additive exPlanations (SHAP); Sentinel-2 time-series imagery; multi-model comparison
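The GLCM statistics this abstract relies on can be computed from first principles: count co-occurring gray-level pairs at a fixed offset, normalize to probabilities, then derive statistics such as contrast and variance. The 4x4 "image" below is invented purely for illustration:

```python
from collections import Counter

def glcm_features(img):
    """Horizontal-offset (0, 1) gray-level co-occurrence matrix plus
    two of its standard statistics: contrast and variance."""
    pairs = Counter()
    for row in img:
        for a, b in zip(row, row[1:]):        # right-hand neighbours
            pairs[(a, b)] += 1
    total = sum(pairs.values())
    glcm = {k: v / total for k, v in pairs.items()}  # probabilities
    contrast = sum(p * (i - j) ** 2 for (i, j), p in glcm.items())
    mean_i = sum(p * i for (i, _), p in glcm.items())
    variance = sum(p * (i - mean_i) ** 2 for (i, _), p in glcm.items())
    return glcm, contrast, variance

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
glcm, contrast, variance = glcm_features(img)
```

On this blocky image most co-occurring pairs are equal-valued, so contrast stays low; a noisier texture would raise it.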
13. A Novelty Framework in Image-Captioning with Visual Attention-Based Refined Visual Features
Authors: Alaa Thobhani, Beiji Zou, Xiaoyan Kui, Amr Abdussalam, Muhammad Asim, Mohammed ELAffendi, Sajid Shah. Computers, Materials & Continua, 2025, No. 3, pp. 3943-3964 (22 pages)
Image captioning, the task of generating descriptive sentences for images, has advanced significantly with the integration of semantic information. However, traditional models still rely on static visual features that do not evolve with the changing linguistic context, which can hinder the ability to form meaningful connections between the image and the generated captions. This limitation often leads to captions that are less accurate or descriptive. In this paper, we propose a novel approach to enhance image captioning by introducing dynamic interactions where visual features continuously adapt to the evolving linguistic context. Our model strengthens the alignment between visual and linguistic elements, resulting in more coherent and contextually appropriate captions. Specifically, we introduce two innovative modules: the Visual Weighting Module (VWM) and the Enhanced Features Attention Module (EFAM). The VWM adjusts visual features using partial attention, enabling dynamic reweighting of the visual inputs, while the EFAM further refines these features to improve their relevance to the generated caption. By continuously adjusting visual features in response to the linguistic context, our model bridges the gap between static visual features and dynamic language generation. We demonstrate the effectiveness of our approach through experiments on the MS-COCO dataset, where our method outperforms state-of-the-art techniques in terms of caption quality and contextual relevance. Our results show that dynamic visual-linguistic alignment significantly enhances image captioning performance.
Keywords: Image-captioning; visual attention; deep learning; visual features
14. Environmental Features of Heavy Precipitation under Favorable Synoptic Patterns: A Lesson from the 2021 Henan Extreme Precipitation Event
Authors: Nan LV, Zhongxi LIN, Ji NIE, Zhiyong MENG, Ping LU. Advances in Atmospheric Sciences, 2025, No. 9, pp. 1863-1875 (13 pages)
In July 2021, a catastrophic extreme precipitation (EP) event occurred in Henan Province, China, resulting in considerable human and economic losses. The synoptic pattern during this event was distinctive, characterized by the presence of two typhoons and substantial water transport into Henan. However, a favorable synoptic pattern alone does not guarantee the occurrence of heavy precipitation in Henan. This study investigates the key environmental features critical for EP under synoptic patterns similar to the 2021 Henan extreme event. It is found that cold clouds are better aggregated on EP days, accompanied by beneficial environmental features such as enhanced moisture conditions, stronger updrafts, and greater atmospheric instability. The temporal evolution of these environmental features shows a leading signal of one to three days. These results suggest the importance of combining the synoptic pattern and environmental features in the forecasting of heavy precipitation events.
Keywords: extreme precipitation; synoptic pattern; environmental features
15. A lung cancer early-warning risk model based on facial diagnosis image features
Authors: Yulin SHI, Shuyi ZHANG, Jiayi LIU, Wenlian CHEN, Lingshuang LIU, Ling XU, Jiatuo XU. Digital Chinese Medicine, 2025, No. 3, pp. 351-362 (12 pages)
Objective: To explore the feasibility of constructing a lung cancer early-warning risk model based on facial image features, providing novel insights into the early screening of lung cancer. Methods: This study included patients with pulmonary nodules diagnosed at the Physical Examination Center of Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine from November 1, 2019 to December 31, 2024, as well as patients with lung cancer diagnosed in the Oncology Departments of Yueyang Hospital of Integrated Traditional Chinese and Western Medicine and Longhua Hospital during the same period. The facial image information of patients with pulmonary nodules and lung cancer was collected using the TFDA-1 tongue and facial diagnosis instrument, and facial diagnosis features were extracted from it by deep learning technology. Statistical analysis was conducted on the objective facial diagnosis characteristics of the two groups of participants to explore the differences in their facial image characteristics, and least absolute shrinkage and selection operator (LASSO) regression was used to screen the characteristic variables. Based on the screened feature variables, four machine learning methods, random forest, logistic regression, support vector machine (SVM), and gradient boosting decision tree (GBDT), were used to establish lung cancer classification models independently. Model performance was evaluated by indicators such as sensitivity, specificity, F1 score, precision, accuracy, the area under the receiver operating characteristic (ROC) curve (AUC), and the area under the precision-recall curve (AP). Results: A total of 1275 patients with pulmonary nodules and 1623 patients with lung cancer were included in this study. After propensity score matching (PSM) to adjust for gender and age, 535 patients were finally included in each of the pulmonary nodule and lung cancer groups. There were significant differences in multiple color space metrics (such as R, G, B, V, L, a, b, Cr, H, Y, and Cb) and texture metrics [such as gray-level co-occurrence matrix (GLCM)-contrast (CON) and GLCM-inverse difference moment (IDM)] between the two groups (P < 0.05). To construct a classification model, LASSO regression was used to select 63 key features from the initial 136 facial features. Based on this feature set, the SVM model demonstrated the best performance after 10-fold stratified cross-validation. The model achieved an average AUC of 0.8729 and an average accuracy of 0.7990 on the internal test set. Further validation on an independent test set confirmed the model's robust performance (AUC = 0.8233, accuracy = 0.7290), indicating its good generalization ability. Feature importance analysis demonstrated that color space indicators and the whole/lip Cr components (including color-B-0, wholecolor-Cr, and lipcolor-Cr) were the core factors in the model's classification decisions, while texture indicators [GLCM-angular second moment (ASM)_2, GLCM-IDM_1, GLCM-CON_1, GLCM-entropy (ENT)_2] played an important auxiliary role. Conclusion: The facial image features of patients with lung cancer and pulmonary nodules show significant differences in color and texture characteristics across multiple areas. The models constructed based on facial image features all demonstrate good performance, indicating that facial image features can serve as potential biomarkers for lung cancer risk prediction and provide a non-invasive, feasible new approach for early lung cancer screening.
Keywords: INSPECTION; Facial features; Lung cancer; Early-warning risk; Machine learning
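The screening-then-classification pipeline this abstract describes (LASSO feature selection followed by an SVM evaluated with stratified 10-fold cross-validation) can be sketched as follows. This is an illustrative sketch on synthetic data, not the authors' code; only the feature count (136), fold count, and model choice are taken from the abstract.

```python
# Minimal sketch of LASSO screening + SVM with stratified 10-fold CV.
# All data here are randomly generated stand-ins for facial features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoCV
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for 136 color/texture facial features over two groups
X, y = make_classification(n_samples=400, n_features=136, n_informative=20,
                           random_state=0)

# Step 1: LASSO screening -- keep features with nonzero coefficients
# (regressing the 0/1 label is a common screening heuristic)
lasso = LassoCV(cv=5, random_state=0).fit(StandardScaler().fit_transform(X), y)
selected = np.flatnonzero(lasso.coef_)

# Step 2: SVM on the screened features, scored by ROC AUC per fold
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
aucs = cross_val_score(svm, X[:, selected], y, cv=cv, scoring="roc_auc")
print(f"{len(selected)} features kept, mean AUC = {aucs.mean():.3f}")
```

On real data, the LASSO step would be fit inside each fold to avoid selection leakage; it is done once here only to keep the sketch short.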
Correlation of pathological types and imaging features in pancreatic cancer
16
Authors: Qiu-Long Wang, Xiao-Jun Yang. World Journal of Gastrointestinal Oncology, 2025, No. 8, pp. 420-424 (5 pages)
The study by Luo et al published in the World Journal of Gastrointestinal Oncology presents a thorough and scientific methodology. Pancreatic cancer is the most challenging malignancy in the digestive system, exhibiting one of the highest cancer-associated mortality rates globally. The delayed onset of symptoms and diagnosis often results in metastasis or local progression of the cancer, thereby constraining treatment options and outcomes. For these patients, prompt tumour identification and treatment strategising are crucial. The present objective of pancreatic cancer research is to examine the correlation between various pathological types and imaging data to facilitate therapeutic decision-making. This study aims to clarify the correlation between diverse pathological markers and imaging in pancreatic cancer patients, with prospective longitudinal studies potentially providing novel insights into the diagnosis and treatment of pancreatic cancer.
Keywords: Pancreatic cancer; Pathological types; Imaging features; ASSOCIATION; Noninvasive tests
Research on the estimation of wheat AGB at the entire growth stage based on improved convolutional features
17
Authors: Tao Liu, Jianliang Wang, Jiayi Wang, Yuanyuan Zhao, Hui Wang, Weijun Zhang, Zhaosheng Yao, Shengping Liu, Xiaochun Zhong, Chengming Sun. Journal of Integrative Agriculture, 2025, No. 4, pp. 1403-1423 (21 pages)
The wheat above-ground biomass (AGB) is an important index of vegetation life activity and is of great significance for wheat growth monitoring and yield prediction. Traditional biomass estimation methods include sample surveys and harvesting statistics. Although these methods have high estimation accuracy, they are time-consuming, destructive, and difficult to apply to biomass monitoring at a large scale. The main objective of this study is to optimize traditional remote sensing methods to estimate the wheat AGB based on improved convolutional features (CFs). Low-cost unmanned aerial vehicles (UAV) were used as the main data acquisition equipment. This study acquired RGB image data and multi-spectral (MS) image data of the wheat population canopy for two wheat varieties and five key growth stages. Then, field measurements were conducted to obtain the actual wheat biomass data for validation. Based on the remote sensing indices (RSIs), structural features (SFs), and CFs, this study proposed a new feature named AUR-50 (multi-source combination based on convolutional feature optimization) to estimate the wheat AGB. The results show that AUR-50 could estimate the wheat AGB more accurately than RSIs and SFs, with an average R^2 exceeding 0.77. In the overwintering period, AUR-50_MS (multi-source combination with convolutional feature optimization using multispectral imagery) had the highest estimation accuracy (R^2 of 0.88). In addition, AUR-50 reduced the effect of vegetation index saturation on biomass estimation accuracy by adding CFs, where the highest R^2 was 0.69 at the flowering stage. The results of this study provide an effective method to evaluate the AGB in wheat with high throughput and a research reference for the phenotypic parameters of other crops.
Keywords: WHEAT; above-ground biomass; UAV; entire growth stage; convolutional feature
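The multi-source idea behind AUR-50 (combining spectral indices with convolutional features before regression) can be illustrated generically. This sketch is an assumption-laden simplification on synthetic data: the feature dimensions, the random forest regressor, and the variable names are illustrative, not the paper's AUR-50 pipeline.

```python
# Generic sketch: concatenate remote sensing indices (RSIs) with pooled
# CNN convolutional features and regress biomass (AGB). Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(7)
n = 300
rsi = rng.normal(size=(n, 5))    # stand-in vegetation indices (e.g., NDVI-like)
cnn = rng.normal(size=(n, 32))   # stand-in pooled convolutional features
# Simulated AGB depends on both feature sources plus noise
agb = 2.0 * rsi[:, 0] + cnn[:, :4].sum(axis=1) + rng.normal(0, 0.5, n)

X = np.hstack([rsi, cnn])        # multi-source feature combination
X_tr, X_te, y_tr, y_te = train_test_split(X, agb, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))
print(f"R^2 = {r2:.2f}")
```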
Face recognition algorithm using collaborative sparse representation based on CNN features
18
Authors: ZHAO Shilin, XU Chengjun, LIU Changrong. Journal of Measurement Science and Instrumentation, 2025, No. 1, pp. 85-95 (11 pages)
Considering that the accuracy of traditional sparse representation models is not high under the influence of multiple complex environmental factors, this study focuses on improving feature extraction and model construction. Firstly, the convolutional neural network (CNN) features of the face are extracted by a trained deep learning network. Next, steady-state and dynamic classifiers for face recognition are constructed based on the CNN features and Haar features, respectively; two-stage sparse representation is introduced in constructing the steady-state classifier, and feature templates with high reliability are dynamically selected as alternative templates from the sparse representation template dictionary built from the CNN features. Finally, the face recognition result is given jointly by the classification results of the steady-state and dynamic classifiers. On this basis, the feature weights of the steady-state classifier template are adjusted in real time and the dictionary set is dynamically updated to reduce the probability of irrelevant features entering the dictionary set. The average recognition accuracy of this method is 94.45% on the CMU PIE face database and 96.58% on the AR face database, a significant improvement over traditional face recognition methods.
Keywords: sparse representation; deep learning; face recognition; dictionary update; feature extraction
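The core sparse representation classification (SRC) step can be shown in a few lines: a test sample is coded as a sparse combination of dictionary atoms (training feature vectors), and it is assigned to the class whose atoms give the smallest reconstruction residual. This is a generic SRC sketch on toy data, not the paper's two-stage CNN/Haar construction.

```python
# Toy SRC sketch: two classes, 20 dictionary atoms each, feature dim 64.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(1)
D0 = rng.normal(0, 1, (64, 20))   # class-0 atoms (zero-mean features)
D1 = rng.normal(3, 1, (64, 20))   # class-1 atoms (shifted features)
D = np.hstack([D0, D1])           # dictionary: columns are atoms
labels = np.array([0] * 20 + [1] * 20)
x = rng.normal(3, 1, 64)          # test sample drawn near class 1

# Sparse coding: solve x ~ D @ code with an l1 penalty
coder = Lasso(alpha=0.1, fit_intercept=False, max_iter=5000).fit(D, x)
code = coder.coef_

# Classify by class-wise reconstruction residual
residuals = []
for c in (0, 1):
    mask = labels == c
    residuals.append(np.linalg.norm(x - D[:, mask] @ code[mask]))
pred = int(np.argmin(residuals))
print("predicted class:", pred)
```

The dictionary-update step described in the abstract would then replace low-weight atoms over time; that bookkeeping is omitted here.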
Salient Features Guided Augmentation for Enhanced Deep Learning Classification in Hematoxylin and Eosin Images
19
Authors: Tengyue Li, Shuangli Song, Jiaming Zhou, Simon Fong, Geyue Li, Qun Song, Sabah Mohammed, Weiwei Lin, Juntao Gao. Computers, Materials & Continua, 2025, No. 7, pp. 1711-1730 (20 pages)
Hematoxylin and Eosin (H&E) images, widely used in digital pathology, often pose challenges due to their limited color richness, hindering the differentiation of subtle cell features crucial for accurate classification. Enhancing the visibility of these elusive cell features helps train robust deep-learning models. However, the selection and application of image processing techniques for such enhancement have not been systematically explored in the research community. To address this challenge, we introduce Salient Features Guided Augmentation (SFGA), an approach that strategically integrates machine learning and image processing. SFGA utilizes machine learning algorithms to identify crucial features within cell images, subsequently mapping these features to appropriate image processing techniques to enhance training images. By emphasizing salient features and aligning them with corresponding image processing methods, SFGA is designed to enhance the discriminating power of deep learning models in cell classification tasks. Our research undertakes a series of experiments, each exploring the performance of different datasets and data enhancement techniques in classifying cell types, highlighting the significance of data quality and enhancement in mitigating overfitting and distinguishing cell characteristics. Specifically, SFGA focuses on identifying tumor cells from tissue for extranodal extension detection, with the SFGA-enhanced dataset showing notable advantages in accuracy. We conducted a preliminary study of five experiments, among which the accuracy of the pleomorphism experiment improved significantly from 50.81% to 95.15%. The accuracy of the other four experiments also increased, with improvements ranging from 3 to 43 percentage points. Our preliminary study shows the possibility of enhancing the diagnostic accuracy of deep learning models and proposes a systematic approach that could enhance cancer diagnosis, contributing a first step toward using SFGA in medical image enhancement.
Keywords: Image processing; feature extraction; deep learning; machine learning; data augmentation
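The feature-to-technique mapping SFGA describes can be illustrated with one concrete pairing. The sketch below is a heavily simplified assumption, not the paper's method: it detects a single salient property (low contrast, via pixel standard deviation) and maps it to one enhancement (histogram equalization), using NumPy only.

```python
# Toy "salient feature -> enhancement" mapping: equalize only low-contrast images.
import numpy as np

def hist_equalize(img):
    """Histogram-equalize a uint8 grayscale image using NumPy only."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

def augment_if_low_contrast(img, std_threshold=30.0):
    """Apply equalization only when the detected contrast is low."""
    return hist_equalize(img) if img.std() < std_threshold else img

rng = np.random.RandomState(0)
# Simulated low-contrast patch: intensities squeezed into [100, 140)
low_contrast = rng.randint(100, 140, size=(64, 64)).astype(np.uint8)
enhanced = augment_if_low_contrast(low_contrast)
print("std before/after:", low_contrast.std(), enhanced.std())
```

In the full approach, a learned model (not a fixed threshold) would choose among many enhancement operators per image.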
Risk assessment of type I gastric neuroendocrine tumors based on endoscopic and clinical features of autoimmune gastritis
20
Authors: Yan-Mei Li, Wen-Juan Guo, Chao Deng, Jie Luo, Yan-Fen Shi, Dan Zhu, Qi-Lu Wei, Ming-Gang Zhang, Shi-Yu Du, Huang-Ying Tan. World Journal of Gastroenterology, 2025, No. 41, pp. 86-96 (11 pages)
BACKGROUND Autoimmune gastritis (AIG) is frequently associated with one or more comorbid conditions, among which type I gastric neuroendocrine tumors (gNETs) warrant significant clinical concern. However, risk factors for the development of gNETs in AIG populations remain poorly defined. AIM To characterize the clinical and endoscopic profiles of AIG and identify potential risk factors for gNETs development. METHODS In this single-center cross-sectional study carried out at a tertiary hospital, 303 patients with AIG over an 8-year period were retrospectively categorized into gNETs (n=116) and non-gNETs (n=187) groups. Endoscopic and clinical parameters were analyzed. Endoscopic features were systematically reevaluated according to the 2023 Japanese diagnostic criteria for AIG. Feature selection was performed using the Boruta algorithm, and model discriminative ability was evaluated via receiver operating characteristic curve analysis. RESULTS Among the 303 patients with AIG, 116 had gNETs and 187 did not. Compared with the non-gNETs group, patients in the gNETs group were younger (54.3 years vs 60.6 years, P<0.001), had a higher rate of vitamin B12 deficiency (77.2% vs 55.8%, P<0.001), lower pepsinogen I (4.3 ng/mL vs 7.4 ng/mL, P<0.001) and pepsinogen I/II ratios (0.7 vs 1.1, P<0.001), and a lower prior Helicobacter pylori infection rate (3.4% vs 21.4%, P<0.001). Endoscopically, the gNETs group showed a lower incidence of oxyntic mucosal remnants, hyperplastic polyps, and patchy antral redness. The predictive model incorporating age, prior Helicobacter pylori infection, vitamin B12 level, gastric hyperplastic polyps, and patchy antral redness showed an area under the curve of 0.830. CONCLUSION Patients with AIG or gNETs exhibit specific clinical and endoscopic features. The predictive model demonstrated favorable discriminative ability and may facilitate risk stratification of gNETs in patients with AIG.
Keywords: Autoimmune gastritis; Gastric neuroendocrine tumors; ENDOSCOPY; Clinical features; Risk factor
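A risk model of the kind this abstract reports (a handful of clinical/endoscopic predictors scored by ROC AUC) can be sketched as follows. Everything below is simulated: the variable names, effect directions, and prevalences only loosely mirror the reported findings, and the data are not the study's cohort.

```python
# Hedged sketch: logistic risk model over simulated clinical predictors,
# evaluated by AUC on a held-out split. Not the study's model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(42)
n = 303  # cohort size borrowed from the abstract; values are simulated
age = rng.normal(58, 10, n)
b12_deficit = rng.binomial(1, 0.63, n)
prior_hp = rng.binomial(1, 0.14, n)       # prior H. pylori infection
polyps = rng.binomial(1, 0.3, n)          # gastric hyperplastic polyps
antral_redness = rng.binomial(1, 0.3, n)  # patchy antral redness

# Simulated outcome follows the reported directions of effect
logit = (-0.06 * (age - 58) + 1.2 * b12_deficit - 2.0 * prior_hp
         - 1.0 * polyps - 1.0 * antral_redness)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age, b12_deficit, prior_hp, polyps, antral_redness])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC = {auc:.3f}")
```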