The number of blogs and other forms of opinionated online content has increased dramatically in recent years. Many fields, including academia and national security, place an emphasis on automated detection of political article orientation. Political articles (especially in the Arab world) differ from other articles in their subjectivity: the author’s beliefs and political affiliation can strongly influence a political article. With categories representing the main political ideologies, this problem can be viewed as a subset of text categorization (classification). In general, the performance of machine learning models for text classification is sensitive to hyperparameter settings. Furthermore, the feature vector used to represent a document must capture, to some extent, the complex semantics of natural language. To this end, this paper presents an intelligent system for detecting the orientation of political Arabic articles that adapts the categorical boosting (CatBoost) method combined with a multi-level feature concept. Extracting features at multiple levels can enhance the model’s ability to discriminate between classes or patterns, since each level may capture different aspects of the input data and contribute to a more comprehensive representation. CatBoost, a robust and efficient gradient-boosting algorithm, is used to learn and predict the complex relationships between these features and the political orientation labels associated with the articles. A dataset of political Arabic texts collected from diverse sources, including postings and articles, is used to assess the proposed technique. The opinions fall into three subcategories: conservative, reform, and revolutionary. The results of this study show that, compared with other frequently used machine learning models for text classification, the CatBoost method with multi-level features performs better, achieving an accuracy of 98.14%.
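The multi-level feature idea in the abstract above can be illustrated with a minimal sketch: word-level and character-level n-gram counts are concatenated into one document vector before being handed to a classifier such as CatBoost. All names here (`word_features`, `char_ngram_features`, `multi_level_vector`) are illustrative, not from the paper, and the system’s actual feature levels are not specified in the abstract.

```python
from collections import Counter

def word_features(text):
    """Word-level features: bag-of-words counts."""
    return Counter(text.split())

def char_ngram_features(text, n=3):
    """Character-level features: n-gram counts capture sub-word patterns."""
    s = text.replace(" ", "_")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def multi_level_vector(text, vocab):
    """Concatenate both feature levels into one vector over a fixed vocabulary."""
    merged = word_features(text) + char_ngram_features(text)
    return [merged.get(tok, 0) for tok in vocab]

# toy usage: build a shared vocabulary, then vectorize each document
docs = ["reform now", "conservative view"]
vocab = sorted(set().union(
    *[(word_features(d) + char_ngram_features(d)).keys() for d in docs]))
vecs = [multi_level_vector(d, vocab) for d in docs]
```

The resulting fixed-length vectors could then be fed to any gradient-boosting classifier; hyperparameter sensitivity, as the abstract notes, would still have to be tuned per dataset.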
Accurate prediction of landslide displacement is crucial for effective early warning of landslide disasters. While most existing prediction methods focus on time-series forecasting for individual monitoring points, there is limited research on the spatiotemporal characteristics of landslide deformation. This paper proposes a novel Multi-Relation Spatiotemporal Graph Residual Network with Multi-Level Feature Attention (MFA-MRSTGRN) that effectively improves the prediction of landslide displacement through spatiotemporal fusion. The model integrates internal seepage factors, as data feature enhancements, with external triggering factors, allowing accurate capture of the complex spatiotemporal characteristics of landslide displacement and the construction of a multi-source heterogeneous dataset. The MFA-MRSTGRN model incorporates dynamic graph theory and four key modules: multi-level feature attention, temporal-residual decomposition, spatial multi-relational graph convolution, and spatiotemporal fusion prediction. This comprehensive approach enables efficient analysis of multi-source heterogeneous datasets, facilitating adaptive exploration of the evolving multi-relational, multi-dimensional spatiotemporal complexities of landslides. When applied to predict the displacement of the Liangshuijing landslide, the MFA-MRSTGRN model surpasses traditional models such as random forest (RF), long short-term memory (LSTM), and spatial-temporal graph convolutional network (ST-GCN) models on various evaluation metrics, including mean absolute error (MAE = 1.27 mm), root mean square error (RMSE = 1.49 mm), mean absolute percentage error (MAPE = 0.026), and R-squared (R² = 0.88). Furthermore, feature ablation experiments indicate that incorporating internal seepage factors improves the predictive performance of landslide displacement models. This research provides an advanced and reliable method for landslide displacement prediction.
In the article “A Lightweight Approach for Skin Lesion Detection through Optimal Features Fusion” by Khadija Manzoor, Fiaz Majeed, Ansar Siddique, Talha Meraj, Hafiz Tayyab Rauf, Mohammed A. El-Meligy, Mohamed Sharaf, and Abd Elatty E. Abd Elgawad, Computers, Materials & Continua, 2022, Vol. 70, No. 1, pp. 1617–1630, DOI: 10.32604/cmc.2022.018621, URL: https://www.techscience.com/cmc/v70n1/44361, there was an error in the affiliation of the author Hafiz Tayyab Rauf. Instead of “Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, UK”, the affiliation should be “Independent Researcher, Bradford, BD80HS, UK”.
BACKGROUND Pancreatic cancer remains one of the most lethal malignancies worldwide, with a poor prognosis often attributed to late diagnosis. Understanding the correlation between pathological type and imaging features is crucial for early detection and appropriate treatment planning. AIM To retrospectively analyze the relationship between different pathological types of pancreatic cancer and their corresponding imaging features. METHODS We retrospectively analyzed the data of 500 patients diagnosed with pancreatic cancer between January 2010 and December 2020 at our institution. Pathological types were determined by histopathological examination of surgical specimens or biopsy samples. Imaging features were assessed using computed tomography, magnetic resonance imaging, and endoscopic ultrasound. Statistical analyses were performed to identify significant associations between pathological types and specific imaging characteristics. RESULTS There were 320 (64%) cases of pancreatic ductal adenocarcinoma, 75 (15%) of intraductal papillary mucinous neoplasms, 50 (10%) of neuroendocrine tumors, and 55 (11%) of other rare types. Distinct imaging features were identified for each pathological type. Pancreatic ductal adenocarcinoma typically presents as a hypodense mass with poorly defined borders on computed tomography, whereas intraductal papillary mucinous neoplasms present as characteristic cystic lesions with mural nodules. Neuroendocrine tumors often appear as hypervascular lesions on contrast-enhanced imaging. Statistical analysis revealed significant correlations between specific imaging features and pathological types (P < 0.001). CONCLUSION This study demonstrated a strong association between the pathological types of pancreatic cancer and their imaging features. These findings can enhance the accuracy of noninvasive diagnosis and guide personalized treatment approaches.
During Donald Trump’s first term, the “Trump Shock” brought world politics into an era of uncertainty and pulled the transatlantic alliance down to its lowest point in history. The Trump 2.0 tsunami brewed by the 2024 presidential election of the United States has plunged U.S.-Europe relations into gloomier waters, ushering in a more complex and turbulent period of adjustment.
The fusion of infrared and visible images should emphasize the salient targets of the infrared image while preserving the textural details of the visible image. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which extract low-frequency and high-frequency information from the image. Because this extraction may miss some information, a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
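The base/detail split described above can be illustrated with the classic low-pass/residual decomposition: a smoothing filter yields the low-frequency "base", and subtracting it from the input leaves the high-frequency "detail". This is only an analogy for the learned base and detail encoders, shown here on a 1-D signal with an assumed box filter.

```python
def box_filter(signal, radius=1):
    """Low-pass filter: local mean over a (2*radius+1) window, clamped at edges."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def base_detail_split(signal, radius=1):
    """Base-encoder analogue: smooth content; detail is the residual."""
    base = box_filter(signal, radius)
    detail = [x - b for x, b in zip(signal, base)]
    return base, detail

sig = [1.0, 1.0, 5.0, 1.0, 1.0]   # a "salient target" spike on a flat background
base, detail = base_detail_split(sig)
# base + detail reconstructs the signal exactly; a learned decoder
# instead recombines both streams plus the compensation features
```

The spike survives almost entirely in `detail`, which is why a separate high-frequency branch helps preserve salient targets during fusion.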
Smart contracts are widely used on the blockchain to implement complex transactions, such as decentralized applications on Ethereum. Effective vulnerability detection for large-scale smart contracts is critical, as attacks on smart contracts often cause huge economic losses. Since smart contracts are difficult to repair and update, vulnerabilities must be found before deployment. However, code analysis, which requires traversing paths, and learning methods, which require many features for training, are too time-consuming to detect large-scale on-chain contracts. Unlike code analysis methods such as symbolic execution, learning-based methods obtain detection models from a feature space. But existing features lack interpretability for the detection results and the training model; worse, a large-scale feature space also hurts detection efficiency. This paper focuses on improving detection efficiency by reducing the dimension of the features, combined with expert knowledge. A feature extraction model, Block-gram, is proposed to form low-dimensional knowledge-based features from bytecode. First, the metadata is separated and the runtime code is converted into a sequence of opcodes, which is divided into segments at certain instructions (jumps, etc.). Then, scalable Block-gram features, including 4-dimensional block features and 8-dimensional attribute features, are mined for training the learning-based model. Finally, feature contributions are calculated from SHAP values to measure the relationship between the features and the results of the detection model. In addition, six types of vulnerability labels are assigned on a dataset containing 33,885 contracts, and the knowledge-based features are evaluated using seven state-of-the-art learning algorithms, which show that the average detection latency speeds up by 25× to 650× compared with features extracted by N-gram, while also enhancing the interpretability of the detection model.
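The first Block-gram step, splitting runtime opcodes into segments at control-transfer instructions and summarizing each segment with a few scalars, can be sketched as below. The boundary opcode set and the particular 4-dimensional feature (block count, mean and max block length, jump ratio) are assumptions for illustration; the paper’s exact block and attribute features are not given in the abstract.

```python
# assumed segment boundaries: EVM control-transfer / terminating opcodes
JUMP_OPS = {"JUMP", "JUMPI", "STOP", "RETURN", "REVERT"}

def segment_blocks(opcodes):
    """Split a runtime opcode sequence into basic-block-like segments,
    ending a segment after each control-transfer instruction."""
    blocks, cur = [], []
    for op in opcodes:
        cur.append(op)
        if op in JUMP_OPS:
            blocks.append(cur)
            cur = []
    if cur:
        blocks.append(cur)
    return blocks

def block_features(blocks):
    """A toy 4-dimensional block feature: count, mean length, max length, jump ratio."""
    lengths = [len(b) for b in blocks]
    n_ops = sum(lengths)
    n_jumps = sum(1 for b in blocks for op in b if op in JUMP_OPS)
    return [len(blocks), n_ops / len(blocks), max(lengths), n_jumps / n_ops]

ops = ["PUSH1", "PUSH1", "JUMPI", "DUP1", "CALL", "STOP"]
feats = block_features(segment_blocks(ops))
```

A vector this small is what makes the approach fast: the learner sees a handful of structural scalars per contract instead of a large N-gram space.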
Biometric characteristics have played a vital role in security for the last few years. Human gait classification in video sequences is an important biometric attribute used for security purposes. A new framework for human gait classification in video sequences is proposed, using deep learning (DL) fusion and posterior probability-based moth flame optimization (MFO). In the first step, the video frames are resized and fine-tuned by two pre-trained lightweight DL models, EfficientNetB0 and MobileNetV2. Both models are selected based on their top-5 accuracy and small number of parameters. Both models are then trained through deep transfer learning, and the extracted deep features are fused using a voting scheme. In the last step, the authors develop a posterior probability-based MFO feature selection algorithm to select the best features. The selected features are classified using several supervised learning methods. The publicly available CASIA-B dataset was employed for the experimental process. On this dataset, the authors selected six angles, 0°, 18°, 90°, 108°, 162°, and 180°, and obtained average accuracies of 96.9%, 95.7%, 86.8%, 90.0%, 95.1%, and 99.7%, respectively. The results demonstrate comparable improvement in accuracy and significantly reduced computational time relative to recent state-of-the-art techniques.
Due to the limitations of existing imaging hardware, obtaining high-resolution hyperspectral images is challenging. Hyperspectral image super-resolution (HSI SR) has been a very attractive research topic in computer vision, attracting the attention of many researchers. However, most HSI SR methods focus on the tradeoff between spatial resolution and spectral information, and cannot guarantee efficient extraction of image information. In this paper, a multidimensional features network (MFNet) for HSI SR is proposed, which simultaneously learns and fuses the spatial, spectral, and frequency multidimensional features of HSI. Spatial features contain rich local details, spectral features contain the information in and correlation between spectral bands, and frequency features reflect the global information of the image and can be used to obtain the global context of HSI. The fusion of the three features can better guide image super-resolution and thus yield higher-quality high-resolution hyperspectral images. In MFNet, a frequency feature extraction module (FFEM) is used to extract the frequency features. On this basis, a multidimensional features extraction module (MFEM) is designed to learn and fuse multidimensional features. In addition, experimental results on two public datasets demonstrate that MFNet achieves state-of-the-art performance.
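The claim that frequency features are *global* can be illustrated with a plain discrete Fourier transform: each coefficient aggregates every sample of the input, so even one magnitude summarizes the whole signal. This toy 1-D DFT is only an analogy for the FFEM, whose actual design is not described in the abstract.

```python
import cmath

def dft_magnitudes(signal):
    """Magnitudes of the discrete Fourier transform. Every coefficient
    sums over all samples, making each one a global feature of the input."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# a constant signal concentrates all energy in the zero-frequency coefficient
mags = dft_magnitudes([1.0, 1.0, 1.0, 1.0])
```

In practice a 2-D FFT per band would play this role, but the locality contrast with spatial convolutions is the same.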
Recognizing road scene context from a single image remains a critical challenge for intelligent autonomous driving systems, particularly in dynamic and unstructured environments. While recent advancements in deep learning have significantly enhanced road scene classification, simultaneously achieving high accuracy, computational efficiency, and adaptability across diverse conditions continues to be difficult. To address these challenges, this study proposes HybridLSTM, a novel and efficient framework that integrates deep learning-based, object-based, and handcrafted feature extraction methods within a unified architecture. HybridLSTM is designed to classify four distinct road scene categories, crosswalk (CW), highway (HW), overpass/tunnel (OP/T), and parking (P), by leveraging multiple publicly available datasets, including Places-365, BDD100K, LabelMe, and KITTI, thereby promoting domain generalization. The framework fuses object-level features extracted using YOLOv5 and VGG19, scene-level global representations obtained from a modified VGG19, and fine-grained texture features captured through eight handcrafted descriptors. This hybrid feature fusion enables the model to capture both semantic context and low-level visual cues, which are critical for robust scene understanding. To model spatial arrangements and latent sequential dependencies present even in static imagery, the combined features are processed through a Long Short-Term Memory (LSTM) network, allowing the extraction of discriminative patterns across heterogeneous feature spaces. Extensive experiments conducted on 2725 annotated road scene images, with an 80:20 training-to-testing split, validate the effectiveness of the proposed model. HybridLSTM achieves a classification accuracy of 96.3%, a precision of 95.8%, a recall of 96.1%, and an F1-score of 96.0%, outperforming several existing state-of-the-art methods. These results demonstrate the robustness, scalability, and generalization capability of HybridLSTM across varying environments and scene complexities. Moreover, the framework is optimized to balance classification performance with computational efficiency, making it highly suitable for real-time deployment in embedded autonomous driving systems. Future work will focus on extending the model to multi-class detection within a single frame and on further optimization for edge-device deployment to reduce computational overhead in practical applications.
Remote sensing cross-modal image-text retrieval (RSCIR) can flexibly and subjectively retrieve remote sensing images from query text, and has recently received increasing attention from researchers. However, with the increasing volume of visual-language pre-training model parameters, direct transfer learning consumes a substantial amount of computational and storage resources. Moreover, recently proposed parameter-efficient transfer learning methods mainly focus on the reconstruction of channel features, ignoring the spatial features that are vital for modeling key entity relationships. To address these issues, we design an efficient transfer learning framework for RSCIR based on spatial feature efficient reconstruction (SPER). A concise and efficient spatial adapter is introduced to enhance the extraction of spatial relationships. The spatial adapter can spatially reconstruct the features in the backbone with few parameters while incorporating prior information from the channel dimension. We conduct quantitative and qualitative experiments on two commonly used RSCIR datasets. Compared with traditional methods, our approach achieves an improvement of 3%–11% in the sumR metric. Compared with methods that fine-tune all parameters, our proposed method trains less than 1% of the parameters while maintaining an overall performance of about 96%.
Spartina alterniflora is now listed among the world’s 100 most dangerous invasive species, severely affecting the ecological balance of coastal wetlands. Remote sensing technologies based on deep learning enable large-scale monitoring of Spartina alterniflora, but they require large datasets and have poor interpretability. A new method is proposed to detect Spartina alterniflora from Sentinel-2 imagery. Firstly, to capture the high canopy cover and dense community characteristics of Spartina alterniflora, multi-dimensional shallow features are extracted from the imagery. Secondly, to detect different objects in satellite imagery, index features are extracted, and the statistical features of the Gray-Level Co-occurrence Matrix (GLCM) are derived using principal component analysis. Then, ensemble learning methods, including random forest, extreme gradient boosting, and light gradient boosting machine models, are employed for image classification, while Recursive Feature Elimination with Cross-Validation (RFECV) is used to select the best feature subset. Finally, to enhance the interpretability of the models, the best features are used to classify multi-temporal images, and SHapley Additive exPlanations (SHAP) is combined with these classifications to explain the model prediction process. The method is validated using Sentinel-2 images and previous observations of Spartina alterniflora on Chongming Island. The model combining image texture features such as GLCM covariance improves the detection accuracy of Spartina alterniflora by about 8% compared with the model without image texture features. Through multiple model comparisons and feature selection via RFECV, the selected model and eight features demonstrated good classification accuracy when applied to data from different time periods, proving that feature reduction can effectively enhance model generalization. Additionally, visualizing model decisions with SHAP revealed that the image texture feature component_1_GLCMVariance is particularly important for identifying each land cover type.
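The GLCM statistics mentioned above are straightforward to compute. A minimal sketch for one pixel offset, together with the contrast statistic, follows; variance and the other GLCM statistics are analogous sums over the same normalized co-occurrence matrix.

```python
def glcm(img, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one offset (dx, dy),
    normalized so entries are co-occurrence probabilities."""
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[img[y][x]][img[ny][nx]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def glcm_contrast(p):
    """Contrast statistic: weights each pair by its squared gray-level gap."""
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

# tiny 2-level image; horizontal neighbor offset
img = [[0, 0, 1],
       [1, 1, 0]]
P = glcm(img, levels=2)
c = glcm_contrast(P)
```

Derived per principal component of the image, as in the paper, the same statistics become the texture features fed to the ensemble classifiers.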
Image captioning, the task of generating descriptive sentences for images, has advanced significantly with the integration of semantic information. However, traditional models still rely on static visual features that do not evolve with the changing linguistic context, which can hinder the ability to form meaningful connections between the image and the generated captions. This limitation often leads to captions that are less accurate or descriptive. In this paper, we propose a novel approach to enhance image captioning by introducing dynamic interactions in which visual features continuously adapt to the evolving linguistic context. Our model strengthens the alignment between visual and linguistic elements, resulting in more coherent and contextually appropriate captions. Specifically, we introduce two innovative modules: the Visual Weighting Module (VWM) and the Enhanced Features Attention Module (EFAM). The VWM adjusts visual features using partial attention, enabling dynamic reweighting of the visual inputs, while the EFAM further refines these features to improve their relevance to the generated caption. By continuously adjusting visual features in response to the linguistic context, our model bridges the gap between static visual features and dynamic language generation. We demonstrate the effectiveness of our approach through experiments on the MS-COCO dataset, where our method outperforms state-of-the-art techniques in terms of caption quality and contextual relevance. Our results show that dynamic visual-linguistic alignment significantly enhances image captioning performance.
In July 2021, a catastrophic extreme precipitation (EP) event occurred in Henan Province, China, resulting in considerable human and economic losses. The synoptic pattern during this event was distinctive, characterized by the presence of two typhoons and substantial water transport into Henan. However, a favorable synoptic pattern alone does not guarantee the occurrence of heavy precipitation in Henan. This study investigates the key environmental features critical for EP under synoptic patterns similar to the 2021 Henan extreme event. It is found that cold clouds are better aggregated on EP days, accompanied by beneficial environmental features such as enhanced moisture conditions, stronger updrafts, and greater atmospheric instability. The temporal evolution of these environmental features shows a leading signal of one to three days. These results suggest the importance of combining the synoptic pattern and environmental features in the forecasting of heavy precipitation events.
Objective To explore the feasibility of constructing a lung cancer early-warning risk model based on facial image features, providing novel insights into the early screening of lung cancer. Methods This study included patients with pulmonary nodules diagnosed at the Physical Examination Center of Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine from November 1, 2019 to December 31, 2024, as well as patients with lung cancer diagnosed in the Oncology Departments of Yueyang Hospital of Integrated Traditional Chinese and Western Medicine and Longhua Hospital during the same period. Facial image information of patients with pulmonary nodules and lung cancer was collected using the TFDA-1 tongue and facial diagnosis instrument, and facial diagnosis features were extracted from it by deep learning technology. Statistical analysis was conducted on the objective facial diagnosis characteristics of the two groups of participants to explore the differences in their facial image characteristics, and least absolute shrinkage and selection operator (LASSO) regression was used to screen the characteristic variables. Based on the screened feature variables, four machine learning methods, random forest, logistic regression, support vector machine (SVM), and gradient boosting decision tree (GBDT), were used to establish lung cancer classification models independently. Model performance was evaluated by indicators such as sensitivity, specificity, F1 score, precision, accuracy, the area under the receiver operating characteristic (ROC) curve (AUC), and the area under the precision-recall curve (AP). Results A total of 1275 patients with pulmonary nodules and 1623 patients with lung cancer were included in this study. After propensity score matching (PSM) to adjust for gender and age, 535 patients were finally included in each of the pulmonary nodule and lung cancer groups. There were significant differences in multiple color space metrics (such as R, G, B, V, L, a, b, Cr, H, Y, and Cb) and texture metrics [such as gray-level co-occurrence matrix (GLCM)-contrast (CON) and GLCM-inverse difference moment (IDM)] between the two groups (P < 0.05). To construct a classification model, LASSO regression was used to select 63 key features from the initial 136 facial features. Based on this feature set, the SVM model demonstrated the best performance after 10-fold stratified cross-validation, achieving an average AUC of 0.8729 and an average accuracy of 0.7990 on the internal test set. Further validation on an independent test set confirmed the model’s robust performance (AUC = 0.8233, accuracy = 0.7290), indicating good generalization ability. Feature importance analysis demonstrated that color space indicators and the whole/lip Cr components (including color-B-0, wholecolor-Cr, and lipcolor-Cr) were the core factors in the model’s classification decisions, while texture indicators [GLCM-angular second moment (ASM)_2, GLCM-IDM_1, GLCM-CON_1, GLCM-entropy (ENT)_2] played an important auxiliary role. Conclusion The facial image features of patients with lung cancer and pulmonary nodules show significant differences in color and texture characteristics in multiple areas. The models constructed on facial image features all demonstrate good performance, indicating that facial image features can serve as potential biomarkers for lung cancer risk prediction and provide a non-invasive and feasible new approach to early lung cancer screening.
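The evaluation indicators listed above (sensitivity, specificity, precision, accuracy, F1) all derive from the binary confusion matrix. A minimal sketch, with illustrative counts rather than the study’s data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)              # recall: cancer cases caught
    specificity = tn / (tn + fp)              # nodule cases correctly cleared
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy, "f1": f1}

# illustrative counts only, not taken from the study
m = classification_metrics(tp=8, fp=2, tn=7, fn=3)
```

AUC and AP, by contrast, integrate these quantities over every decision threshold, which is why they are reported alongside the single-threshold metrics.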
The study by Luo et al published in the World Journal of Gastrointestinal Oncology presents a thorough and scientific methodology. Pancreatic cancer is the most challenging malignancy of the digestive system, exhibiting one of the highest cancer-associated mortality rates globally. The delayed onset of symptoms and diagnosis often results in metastasis or local progression of the cancer, thereby constraining treatment options and outcomes. For these patients, prompt tumour identification and treatment strategising are crucial. The present objective of pancreatic cancer research is to examine the correlation between various pathological types and imaging data to facilitate therapeutic decision-making. This study aims to clarify the correlation between diverse pathological markers and imaging in pancreatic cancer patients, with prospective longitudinal studies potentially providing novel insights into the diagnosis and treatment of pancreatic cancer.
The wheat above-ground biomass (AGB) is an important index of vegetation vitality and is of great significance for wheat growth monitoring and yield prediction. Traditional biomass estimation methods include sample surveys and harvesting statistics. Although these methods have high estimation accuracy, they are time-consuming, destructive, and difficult to implement for monitoring biomass at a large scale. The main objective of this study is to optimize traditional remote sensing methods for estimating wheat AGB based on improved convolutional features (CFs). Low-cost unmanned aerial vehicles (UAVs) were used as the main data acquisition equipment. This study acquired RGB camera image data and multi-spectral (MS) image data of the wheat population canopy for two wheat varieties and five key growth stages. Field measurements were then conducted to obtain actual wheat biomass data for validation. Based on remote sensing indices (RSIs), structural features (SFs), and CFs, this study proposes a new feature named AUR-50 (a multi-source combination based on convolutional feature optimization) to estimate wheat AGB. The results show that AUR-50 estimates wheat AGB more accurately than RSIs and SFs, with an average R² exceeding 0.77. In the overwintering period, AUR-50_MS (the multi-source combination with convolutional feature optimization using multispectral imagery) had the highest estimation accuracy (R² of 0.88). In addition, AUR-50 reduced the effect of vegetation index saturation on biomass estimation accuracy by adding CFs, with the highest R² of 0.69 at the flowering stage. The results of this study provide an effective high-throughput method to evaluate wheat AGB and a research reference for the phenotypic parameters of other crops.
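The R² values reported above are the coefficient of determination; a minimal implementation for checking model estimates against field-measured AGB:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 minus residual variance over total variance."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# measured AGB vs. model estimates (illustrative numbers, not the study's data)
r2 = r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```

A perfect model gives R² = 1; values such as the 0.77–0.88 reported above mean the features explain most, but not all, of the biomass variance.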
Considering that the accuracy of traditional sparse representation models is not high under the influence of multiple complex environmental factors, this study focuses on improving feature extraction and model construction. Firstly, convolutional neural network (CNN) features of the face are extracted by a trained deep learning network. Next, steady-state and dynamic classifiers for face recognition are constructed based on the CNN features and Haar features, respectively. Two-stage sparse representation is introduced when constructing the steady-state classifier, and feature templates with high reliability are dynamically selected as alternative templates from the sparse representation template dictionary constructed from the CNN features. Finally, the face recognition result is given by combining the classification results of the steady-state and dynamic classifiers. On this basis, the feature weights of the steady-state classifier template are adjusted in real time and the dictionary set is dynamically updated to reduce the probability of irrelevant features entering the dictionary set. The average recognition accuracy of this method is 94.45% on the CMU PIE face database and 96.58% on the AR face database, a significant improvement over traditional face recognition methods.
Hematoxylin and Eosin (H&E) images, popularly used in digital pathology, often pose challenges due to their limited color richness, hindering the differentiation of subtle cell features crucial for accurate classification. Enhancing the visibility of these elusive cell features helps train robust deep-learning models. However, the selection and application of image processing techniques for such enhancement have not been systematically explored in the research community. To address this challenge, we introduce Salient Features Guided Augmentation (SFGA), an approach that strategically integrates machine learning and image processing. SFGA utilizes machine learning algorithms to identify crucial features within cell images, subsequently mapping these features to appropriate image processing techniques to enhance training images. By emphasizing salient features and aligning them with corresponding image processing methods, SFGA is designed to enhance the discriminating power of deep learning models in cell classification tasks. Our research undertakes a series of experiments, each exploring the performance of different datasets and data enhancement techniques in classifying cell types, highlighting the significance of data quality and enhancement in mitigating overfitting and distinguishing cell characteristics. Specifically, SFGA focuses on identifying tumor cells from tissue for extranodal extension detection, with the SFGA-enhanced dataset showing notable advantages in accuracy. We conducted a preliminary study of five experiments, among which the accuracy of the pleomorphism experiment improved significantly from 50.81% to 95.15%. The accuracy of the other four experiments also increased, with improvements ranging from 3 to 43 percentage points. Our preliminary study shows the potential to enhance the diagnostic accuracy of deep learning models and proposes a systematic approach that could enhance cancer diagnosis, contributing a first step toward using SFGA in medical image enhancement.
BACKGROUND Autoimmune gastritis (AIG) is frequently associated with one or more comorbid conditions, among which type I gastric neuroendocrine tumors (gNETs) warrant significant clinical concern. However, risk factors for the development of gNETs in AIG populations remain poorly defined. AIM To characterize the clinical and endoscopic profiles of AIG and identify potential risk factors for gNETs development. METHODS In this single-center cross-sectional study carried out at a tertiary hospital, 303 patients with AIG over an 8-year period were retrospectively categorized into gNETs (n = 116) and non-gNETs (n = 187) groups. Endoscopic and clinical parameters were analyzed. Endoscopic features were systematically reevaluated according to the 2023 Japanese diagnostic criteria for AIG. Feature selection was performed using the Boruta algorithm, and the model's discriminative ability was evaluated via receiver operating characteristic curve analysis. RESULTS Among the 303 patients with AIG, 116 had gNETs and 187 did not. Compared with the non-gNETs group, patients in the gNETs group were younger (54.3 years vs 60.6 years, P < 0.001), had a higher rate of vitamin B12 deficiency (77.2% vs 55.8%, P < 0.001), lower pepsinogen I (4.3 ng/mL vs 7.4 ng/mL, P < 0.001) and pepsinogen I/II ratios (0.7 vs 1.1, P < 0.001), and a lower prior Helicobacter pylori infection rate (3.4% vs 21.4%, P < 0.001). Endoscopically, the gNETs group showed a lower incidence of oxyntic mucosal remnants, hyperplastic polyps, and patchy antral redness. The predictive model incorporating age, prior Helicobacter pylori infection, vitamin B12 level, gastric hyperplastic polyps, and patchy antral redness showed an area under the curve of 0.830. CONCLUSION Patients with AIG or gNETs exhibit specific clinical and endoscopic features. The predictive model demonstrated favorable discriminative ability and may facilitate risk stratification of gNETs in patients with AIG.
Abstract: The number of blogs and other forms of opinionated online content has increased dramatically in recent years. Many fields, including academia and national security, place an emphasis on automated political article orientation detection. Political articles (especially in the Arab world) are different from other articles due to their subjectivity, in which the author's beliefs and political affiliation might have a significant influence on a political article. With categories representing the main political ideologies, this problem may be thought of as a subset of text categorization (classification). In general, the performance of machine learning models for text classification is sensitive to hyperparameter settings. Furthermore, the feature vector used to represent a document must capture, to some extent, the complex semantics of natural language. To this end, this paper presents an intelligent system to detect political Arabic article orientation that adopts the categorical boosting (CatBoost) method combined with a multi-level feature concept. Extracting features at multiple levels can enhance the model's ability to discriminate between different classes or patterns. Each level may capture different aspects of the input data, contributing to a more comprehensive representation. CatBoost, a robust and efficient gradient-boosting algorithm, is utilized to effectively learn and predict the complex relationships between these features and the political orientation labels associated with the articles. A dataset of political Arabic texts collected from diverse sources, including postings and articles, is used to assess the suggested technique. Conservative, reform, and revolutionary are the three subcategories of these opinions. The results of this study demonstrate that, compared to other frequently used machine learning models for text classification, the CatBoost method using multi-level features performs better, with an accuracy of 98.14%.
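The multi-level feature idea above can be sketched by concatenating word-level and character-level TF-IDF representations before boosting. The snippet below is an illustrative stand-in only: scikit-learn's GradientBoostingClassifier replaces CatBoost, and the tiny English corpus and its three labels are invented for demonstration, not the paper's Arabic dataset.

```python
# Multi-level text features (word + character n-grams) feeding a boosting model.
# GradientBoostingClassifier is a stand-in for CatBoost; the corpus is a toy example.
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier

docs = ["reform the tax law now", "restore the old order",
        "rise up against the regime", "gradual reform of courts",
        "tradition must be preserved", "overthrow the system"]
labels = ["reform", "conservative", "revolutionary",
          "reform", "conservative", "revolutionary"]

features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word")),                          # word level
    ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),   # sub-word level
])
model = Pipeline([("features", features),
                  ("clf", GradientBoostingClassifier(n_estimators=50, random_state=0))])
model.fit(docs, labels)
```

Swapping in the real CatBoost classifier would only change the last pipeline stage; the multi-level feature union is the part the abstract emphasizes.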
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52308340), the Chongqing Talent Innovation and Entrepreneurship Demonstration Team Project (Grant No. cstc2024ycjh-bgzxm0012), and the Science and Technology Projects supported by China Coal Technology and Engineering Chongqing Design and Research Institute (Group) Co., Ltd. (Grant No. H20230317).
Abstract: Accurate prediction of landslide displacement is crucial for effective early warning of landslide disasters. While most existing prediction methods focus on time-series forecasting for individual monitoring points, there is limited research on the spatiotemporal characteristics of landslide deformation. This paper proposes a novel Multi-Relation Spatiotemporal Graph Residual Network with Multi-Level Feature Attention (MFA-MRSTGRN) that effectively improves the prediction performance of landslide displacement through spatiotemporal fusion. The model integrates internal seepage factors, as data feature enhancements, with external triggering factors, allowing for accurate capture of the complex spatiotemporal characteristics of landslide displacement and the construction of a multi-source heterogeneous dataset. The MFA-MRSTGRN model incorporates dynamic graph theory and four key modules: multi-level feature attention, temporal-residual decomposition, spatial multi-relational graph convolution, and spatiotemporal fusion prediction. This comprehensive approach enables efficient analysis of multi-source heterogeneous datasets, facilitating adaptive exploration of the evolving multi-relational, multi-dimensional spatiotemporal complexities in landslides. When applying this model to predict the displacement of the Liangshuijing landslide, we demonstrate that the MFA-MRSTGRN model surpasses traditional models, such as random forest (RF), long short-term memory (LSTM), and spatial-temporal graph convolutional network (ST-GCN) models, in terms of various evaluation metrics, including mean absolute error (MAE = 1.27 mm), root mean square error (RMSE = 1.49 mm), mean absolute percentage error (MAPE = 0.026), and R-squared (R² = 0.88). Furthermore, feature ablation experiments indicate that incorporating internal seepage factors improves the predictive performance of landslide displacement models. This research provides an advanced and reliable method for landslide displacement prediction.
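The four evaluation metrics reported for MFA-MRSTGRN (MAE, RMSE, MAPE, R²) can be computed in a few lines of NumPy; the displacement values below are made-up numbers for illustration, not the Liangshuijing data.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, RMSE, MAPE and R², as used to score displacement predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err / y_true))            # assumes no zero displacements
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # variance around the mean
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, mape, r2

# Hypothetical measured vs predicted displacements (mm)
mae, rmse, mape, r2 = regression_metrics([10.0, 12.0, 14.0, 16.0],
                                         [10.5, 11.5, 14.5, 15.5])
```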
Abstract: In the article "A Lightweight Approach for Skin Lesion Detection through Optimal Features Fusion" by Khadija Manzoor, Fiaz Majeed, Ansar Siddique, Talha Meraj, Hafiz Tayyab Rauf, Mohammed A. El-Meligy, Mohamed Sharaf, and Abd Elatty E. Abd Elgawad (Computers, Materials & Continua, 2022, Vol. 70, No. 1, pp. 1617–1630, DOI: 10.32604/cmc.2022.018621, URL: https://www.techscience.com/cmc/v70n1/44361), there was an error regarding the affiliation of the author Hafiz Tayyab Rauf. Instead of "Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, UK", the affiliation should be "Independent Researcher, Bradford, BD80HS, UK".
Abstract: BACKGROUND Pancreatic cancer remains one of the most lethal malignancies worldwide, with a poor prognosis often attributed to late diagnosis. Understanding the correlation between pathological type and imaging features is crucial for early detection and appropriate treatment planning. AIM To retrospectively analyze the relationship between different pathological types of pancreatic cancer and their corresponding imaging features. METHODS We retrospectively analyzed the data of 500 patients diagnosed with pancreatic cancer between January 2010 and December 2020 at our institution. Pathological types were determined by histopathological examination of surgical specimens or biopsy samples. The imaging features were assessed using computed tomography, magnetic resonance imaging, and endoscopic ultrasound. Statistical analyses were performed to identify significant associations between pathological types and specific imaging characteristics. RESULTS There were 320 (64%) cases of pancreatic ductal adenocarcinoma, 75 (15%) of intraductal papillary mucinous neoplasms, 50 (10%) of neuroendocrine tumors, and 55 (11%) of other rare types. Distinct imaging features were identified in each pathological type. Pancreatic ductal adenocarcinoma typically presents as a hypodense mass with poorly defined borders on computed tomography, whereas intraductal papillary mucinous neoplasms present as characteristic cystic lesions with mural nodules. Neuroendocrine tumors often appear as hypervascular lesions in contrast-enhanced imaging. Statistical analysis revealed significant correlations between specific imaging features and pathological types (P < 0.001). CONCLUSION This study demonstrated a strong association between the pathological types of pancreatic cancer and imaging features. These findings can enhance the accuracy of noninvasive diagnosis and guide personalized treatment approaches.
Abstract: During Donald Trump's first term, the "Trump Shock" brought world politics into an era of uncertainty and pulled the transatlantic alliance down to its lowest point in history. The Trump 2.0 tsunami brewed by the 2024 United States presidential election has plunged U.S.-Europe relations into gloomier waters, ushering in a more complex and turbulent period of adjustment.
Funding: Supported by the Henan Province Key Research and Development Project (231111211300), the Central Government of Henan Province Guides Local Science and Technology Development Funds (Z20231811005), the Henan Province Key Research and Development Project (231111110100), the Henan Provincial Outstanding Foreign Scientist Studio (GZS2024006), and the Henan Provincial Joint Fund for Scientific and Technological Research and Development Plan (Application and Overcoming Technical Barriers) (242103810028).
Abstract: The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible image. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which are used to extract low-frequency and high-frequency information from the image. This extraction may leave some information uncaptured, so a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
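The base/detail split that the base and detail encoders learn can be mimicked, at a toy level, by a box-filter decomposition: the smoothed image is the low-frequency base, and the residual is the high-frequency detail. This is an illustrative analogy under simple assumptions, not the paper's learned encoders.

```python
import numpy as np

def base_detail_split(img, k=5):
    """Split an image into a low-frequency base (k-by-k box filter with edge
    padding) and a high-frequency detail residual; base + detail == img."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    base = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            base[i, j] = padded[i:i + k, j:j + k].mean()
    detail = img - base          # residual high-frequency structure
    return base, detail

img = np.arange(36, dtype=float).reshape(6, 6)   # toy "image"
base, detail = base_detail_split(img)
recon = base + detail                            # decomposition is exactly invertible
```

The invertibility of the split is what lets a decoder recombine the two streams without loss, which is the role the compensation encoder plays for information the two main encoders miss.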
Funding: Partially supported by the National Natural Science Foundation of China (62272248), the Open Project Fund of the State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences (CARCHA202108, CARCH201905), the Natural Science Foundation of Tianjin (20JCZDJC00610), and Zhejiang Lab (2021KF0AB04).
Abstract: Smart contracts are widely used on the blockchain to implement complex transactions, such as decentralized applications on Ethereum. Effective vulnerability detection for large-scale smart contracts is critical, as attacks on smart contracts often cause huge economic losses. Since it is difficult to repair and update smart contracts, vulnerabilities must be found before deployment. However, code analysis, which requires traversing paths, and learning methods, which require many features for training, are too time-consuming to detect large-scale on-chain contracts. Learning-based methods obtain detection models from a feature space, in contrast to code analysis methods such as symbolic execution. But existing features lack interpretability of the detection results and the training model; worse, the large-scale feature space also affects detection efficiency. This paper focuses on improving detection efficiency by reducing the dimension of the features, combined with expert knowledge. A feature extraction model, Block-gram, is proposed to form low-dimensional knowledge-based features from bytecode. First, the metadata is separated and the runtime code is converted into a sequence of opcodes, which are divided into segments based on certain instructions (jumps, etc.). Then, scalable Block-gram features, including 4-dimensional block features and 8-dimensional attribute features, are mined for training the learning-based model. Finally, feature contributions are calculated from SHAP values to measure the relationship between our features and the detection model's results. In addition, six types of vulnerability labels are assigned on a dataset containing 33,885 contracts, and these knowledge-based features are evaluated using seven state-of-the-art learning algorithms, which show that the average detection latency speeds up 25× to 650× compared with features extracted by N-gram, while also enhancing the interpretability of the detection model.
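The segmentation step described above (cutting the opcode sequence at jump-like instructions, then computing per-segment features) can be sketched as follows; the opcode names and the three per-block features here are illustrative choices, not the paper's exact 4-dimensional block and 8-dimensional attribute feature set.

```python
# Toy Block-gram-style segmentation of an EVM-like opcode sequence.
# Blocks end at control-flow instructions; simple per-block features follow.
TERMINATORS = {"JUMP", "JUMPI", "STOP", "RETURN", "REVERT"}

def split_blocks(opcodes):
    """Cut the opcode stream into basic blocks at terminator instructions."""
    blocks, cur = [], []
    for op in opcodes:
        cur.append(op)
        if op in TERMINATORS:
            blocks.append(cur)
            cur = []
    if cur:                      # trailing block without a terminator
        blocks.append(cur)
    return blocks

def block_features(block):
    """Illustrative low-dimensional features for one basic block."""
    return {"length": len(block),
            "ends_in_jump": block[-1] in {"JUMP", "JUMPI"},
            "distinct_ops": len(set(block))}

ops = ["PUSH1", "PUSH1", "ADD", "JUMPI", "CALLVALUE", "DUP1", "REVERT", "SLOAD"]
blocks = split_blocks(ops)
feats = [block_features(b) for b in blocks]
```

Because each contract is summarized by a handful of such per-block statistics rather than a full n-gram vocabulary, the downstream model trains and predicts on a far smaller feature space, which is the source of the reported latency gains.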
Funding: King Saud University, Grant/Award Number: RSP2024R157.
Abstract: Biometric characteristics have played a vital role in security over the last few years. Human gait classification in video sequences is an important biometric attribute and is used for security purposes. A new framework for human gait classification in video sequences using deep learning (DL) fusion and posterior probability-based moth flame optimization (MFO) is proposed. In the first step, the video frames are resized and fine-tuned by two pre-trained lightweight DL models, EfficientNetB0 and MobileNetV2. Both models are selected based on top-5 accuracy and a small number of parameters. Later, both models are trained through deep transfer learning, and the extracted deep features are fused using a voting scheme. In the last step, the authors develop a posterior probability-based MFO feature selection algorithm to select the best features. The selected features are classified using several supervised learning methods. The publicly available CASIA-B dataset was employed for the experimental process. On this dataset, the authors selected six angles, 0°, 18°, 90°, 108°, 162°, and 180°, and obtained average accuracies of 96.9%, 95.7%, 86.8%, 90.0%, 95.1%, and 99.7%, respectively. Results demonstrate comparable improvement in accuracy and significantly lower computational time compared with recent state-of-the-art techniques.
基金supported by the Fundamental Research Funds for the Provincial Universities of Zhejiang (No.GK249909299001-036)National Key Research and Development Program of China (No. 2023YFB4502803)Zhejiang Provincial Natural Science Foundation of China (No.LDT23F01014F01)。
Abstract: Due to the limitations of existing imaging hardware, obtaining high-resolution hyperspectral images is challenging. Hyperspectral image super-resolution (HSI SR) has been a very attractive research topic in computer vision, attracting the attention of many researchers. However, most HSI SR methods focus on the tradeoff between spatial resolution and spectral information, and cannot guarantee the efficient extraction of image information. In this paper, a multidimensional features network (MFNet) for HSI SR is proposed, which simultaneously learns and fuses the spatial, spectral, and frequency multidimensional features of HSI. Spatial features contain rich local details, spectral features contain the information and correlation between spectral bands, and frequency features reflect the global information of the image and can be used to obtain the global context of HSI. The fusion of the three features can better guide image super-resolution and yield higher-quality high-resolution hyperspectral images. In MFNet, we use a frequency feature extraction module (FFEM) to extract the frequency feature. On this basis, a multidimensional features extraction module (MFEM) is designed to learn and fuse multidimensional features. In addition, experimental results on two public datasets demonstrate that MFNet achieves state-of-the-art performance.
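A minimal stand-in for frequency feature extraction in the spirit of the FFEM is a 2-D FFT magnitude map, which captures global, periodic image structure. The module name is the paper's; this plain hand-crafted sketch and the stripe-pattern test image are assumptions for illustration only.

```python
import numpy as np

def frequency_feature(img):
    """Global frequency representation of an image: centred 2-D FFT magnitude
    with log compression of the dynamic range."""
    spec = np.fft.fftshift(np.fft.fft2(img))   # move zero frequency to the centre
    return np.log1p(np.abs(spec))

img = np.zeros((8, 8))
img[::2, :] = 1.0                 # horizontal stripes: a single strong frequency
mag = frequency_feature(img)
```

For this stripe pattern the spectrum has energy only at the DC bin and at the vertical Nyquist frequency, illustrating how a frequency map exposes global periodic structure that local spatial features miss.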
Abstract: Recognizing road scene context from a single image remains a critical challenge for intelligent autonomous driving systems, particularly in dynamic and unstructured environments. While recent advancements in deep learning have significantly enhanced road scene classification, simultaneously achieving high accuracy, computational efficiency, and adaptability across diverse conditions continues to be difficult. To address these challenges, this study proposes HybridLSTM, a novel and efficient framework that integrates deep learning-based, object-based, and handcrafted feature extraction methods within a unified architecture. HybridLSTM is designed to classify four distinct road scene categories, crosswalk (CW), highway (HW), overpass/tunnel (OP/T), and parking (P), by leveraging multiple publicly available datasets, including Places-365, BDD100K, LabelMe, and KITTI, thereby promoting domain generalization. The framework fuses object-level features extracted using YOLOv5 and VGG19, scene-level global representations obtained from a modified VGG19, and fine-grained texture features captured through eight handcrafted descriptors. This hybrid feature fusion enables the model to capture both semantic context and low-level visual cues, which are critical for robust scene understanding. To model spatial arrangements and latent sequential dependencies present even in static imagery, the combined features are processed through a Long Short-Term Memory (LSTM) network, allowing the extraction of discriminative patterns across heterogeneous feature spaces. Extensive experiments conducted on 2725 annotated road scene images, with an 80:20 training-to-testing split, validate the effectiveness of the proposed model. HybridLSTM achieves a classification accuracy of 96.3%, a precision of 95.8%, a recall of 96.1%, and an F1-score of 96.0%, outperforming several existing state-of-the-art methods. These results demonstrate the robustness, scalability, and generalization capability of HybridLSTM across varying environments and scene complexities. Moreover, the framework is optimized to balance classification performance with computational efficiency, making it highly suitable for real-time deployment in embedded autonomous driving systems. Future work will focus on extending the model to multi-class detection within a single frame and optimizing it further for edge-device deployment to reduce computational overhead in practical applications.
基金supported by the National Key R&D Program of China(No.2022ZD0118402)。
Abstract: Remote sensing cross-modal image-text retrieval (RSCIR) can flexibly and subjectively retrieve remote sensing images using query text, and has recently received growing attention from researchers. However, with the increasing volume of visual-language pre-training model parameters, direct transfer learning consumes a substantial amount of computational and storage resources. Moreover, recently proposed parameter-efficient transfer learning methods mainly focus on the reconstruction of channel features, ignoring the spatial features that are vital for modeling key entity relationships. To address these issues, we design an efficient transfer learning framework for RSCIR based on spatial feature efficient reconstruction (SPER). A concise and efficient spatial adapter is introduced to enhance the extraction of spatial relationships. The spatial adapter is able to spatially reconstruct the features in the backbone with few parameters while incorporating prior information from the channel dimension. We conduct quantitative and qualitative experiments on two commonly used RSCIR datasets. Compared with traditional methods, our approach achieves an improvement of 3%-11% in the sumR metric. Compared with methods fine-tuning all parameters, our proposed method trains less than 1% of the parameters while maintaining an overall performance of about 96%.
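Why an adapter can train under 1% of a backbone's parameters is visible in a back-of-the-envelope count: a bottleneck adapter adds only two small projection matrices per layer. All sizes below (hidden width 768, bottleneck 16, 24 layers, a 150M-parameter backbone) are hypothetical round numbers, not the SPER paper's actual configuration.

```python
# Parameter budget of a bottleneck adapter vs. full fine-tuning (hypothetical sizes).
def adapter_params(d_model, bottleneck):
    # down-projection (W + b) plus up-projection (W + b)
    return d_model * bottleneck + bottleneck + bottleneck * d_model + d_model

backbone_params = 150_000_000               # assumed pre-trained backbone size
per_layer = adapter_params(d_model=768, bottleneck=16)
total_adapter = per_layer * 24              # one adapter per transformer layer
fraction = total_adapter / backbone_params  # share of parameters actually trained
```

Under these assumptions the adapters amount to roughly 0.4% of the backbone, which is consistent in spirit with the "less than 1% of the parameters" figure in the abstract.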
Funding: The National Key Research and Development Program of China under contract No. 2023YFC3008204, and the National Natural Science Foundation of China under contract Nos. 41977302 and 42476217.
Abstract: Spartina alterniflora is now listed among the world's 100 most dangerous invasive species, severely affecting the ecological balance of coastal wetlands. Remote sensing technologies based on deep learning enable large-scale monitoring of Spartina alterniflora, but they require large datasets and have poor interpretability. A new method is proposed to detect Spartina alterniflora from Sentinel-2 imagery. Firstly, to capture the high canopy cover and dense community characteristics of Spartina alterniflora, multi-dimensional shallow features are extracted from the imagery. Secondly, to detect different objects from satellite imagery, index features are extracted, and the statistical features of the Gray-Level Co-occurrence Matrix (GLCM) are derived using principal component analysis. Then, ensemble learning methods, including random forest, extreme gradient boosting, and light gradient boosting machine models, are employed for image classification. Meanwhile, Recursive Feature Elimination with Cross-Validation (RFECV) is used to select the best feature subset. Finally, to enhance the interpretability of the models, the best features are utilized to classify multi-temporal images, and SHapley Additive exPlanations (SHAP) is combined with these classifications to explain the model prediction process. The method is validated using Sentinel-2 imagery and previous observations of Spartina alterniflora on Chongming Island; it is found that the model combining image texture features such as GLCM covariance can improve the detection accuracy of Spartina alterniflora by about 8% compared with the model without image texture features. Through multiple model comparisons and feature selection via RFECV, the selected model and eight features demonstrated good classification accuracy when applied to data from different time periods, proving that feature reduction can effectively enhance model generalization. Additionally, visualizing model decisions using SHAP revealed that the image texture feature component_1_GLCMVariance is particularly important for identifying each land cover type.
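The RFECV step described above can be reproduced in miniature with scikit-learn. A random forest stands in for the ensemble models compared in the paper, and the dataset is synthetic rather than Sentinel-2 features.

```python
# Recursive Feature Elimination with Cross-Validation on synthetic data.
# RandomForestClassifier is one of the ensemble learners named in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

X, y = make_classification(n_samples=200, n_features=12, n_informative=4,
                           n_redundant=2, random_state=0)
selector = RFECV(RandomForestClassifier(n_estimators=50, random_state=0),
                 step=1, cv=3)        # drop one feature per round, 3-fold CV
selector.fit(X, y)
n_selected = selector.n_features_     # size of the best-scoring feature subset
```

`selector.support_` then gives the boolean mask of retained features, which is how a study like this one would arrive at its final reduced feature set.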
基金supported by the National Natural Science Foundation of China(Nos.U22A2034,62177047)High Caliber Foreign Experts Introduction Plan funded by MOST,and Central South University Research Programme of Advanced Interdisciplinary Studies(No.2023QYJC020).
Abstract: Image captioning, the task of generating descriptive sentences for images, has advanced significantly with the integration of semantic information. However, traditional models still rely on static visual features that do not evolve with the changing linguistic context, which can hinder the ability to form meaningful connections between the image and the generated captions. This limitation often leads to captions that are less accurate or descriptive. In this paper, we propose a novel approach to enhance image captioning by introducing dynamic interactions in which visual features continuously adapt to the evolving linguistic context. Our model strengthens the alignment between visual and linguistic elements, resulting in more coherent and contextually appropriate captions. Specifically, we introduce two innovative modules: the Visual Weighting Module (VWM) and the Enhanced Features Attention Module (EFAM). The VWM adjusts visual features using partial attention, enabling dynamic reweighting of the visual inputs, while the EFAM further refines these features to improve their relevance to the generated caption. By continuously adjusting visual features in response to the linguistic context, our model bridges the gap between static visual features and dynamic language generation. We demonstrate the effectiveness of our approach through experiments on the MS-COCO dataset, where our method outperforms state-of-the-art techniques in terms of caption quality and contextual relevance. Our results show that dynamic visual-linguistic alignment significantly enhances image captioning performance.
基金supported by the National Key Research and Development Pro-gram of China(Grant No.2022YFC3003902)the National Natu-ral Science Foundation of China(Grant Nos.42075146 and 42275006).
Abstract: In July 2021, a catastrophic extreme precipitation (EP) event occurred in Henan Province, China, resulting in considerable human and economic losses. The synoptic pattern during this event is distinctive, characterized by the presence of two typhoons and substantial water transport into Henan. However, a favorable synoptic pattern alone does not guarantee the occurrence of heavy precipitation in Henan. This study investigates the key environmental features critical for EP under synoptic patterns similar to the 2021 Henan extreme event. It is found that cold clouds are better aggregated on EP days, accompanied by beneficial environmental features such as enhanced moisture conditions, stronger updrafts, and greater atmospheric instability. The temporal evolution of these environmental features shows a leading signal of one to three days. These results suggest the importance of combining the synoptic pattern and environmental features in the forecasting of heavy precipitation events.
Funding: National Natural Science Foundation of China (82305090), Shanghai Municipal Health Commission (20234Y0168), and National Key Research and Development Program of China (2017YFC1703301).
Abstract: Objective To explore the feasibility of constructing a lung cancer early-warning risk model based on facial image features, providing novel insights into the early screening of lung cancer. Methods This study included patients with pulmonary nodules diagnosed at the Physical Examination Center of Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine from November 1, 2019 to December 31, 2024, as well as patients with lung cancer diagnosed in the Oncology Departments of Yueyang Hospital of Integrated Traditional Chinese and Western Medicine and Longhua Hospital during the same period. The facial image information of patients with pulmonary nodules and lung cancer was collected using the TFDA-1 tongue and facial diagnosis instrument, and facial diagnosis features were extracted from it by deep learning technology. Statistical analysis was conducted on the objective facial diagnosis characteristics of the two groups of participants to explore the differences in their facial image characteristics, and least absolute shrinkage and selection operator (LASSO) regression was used to screen the characteristic variables. Based on the screened feature variables, four machine learning methods, random forest, logistic regression, support vector machine (SVM), and gradient boosting decision tree (GBDT), were used to establish lung cancer classification models independently. Meanwhile, model performance was evaluated by indicators such as sensitivity, specificity, F1 score, precision, accuracy, the area under the receiver operating characteristic (ROC) curve (AUC), and the area under the precision-recall curve (AP). Results A total of 1275 patients with pulmonary nodules and 1623 patients with lung cancer were included in this study. After propensity score matching (PSM) to adjust for gender and age, 535 patients were finally included in the pulmonary nodule group and the lung cancer group, respectively. There were significant differences in multiple color space metrics (such as R, G, B, V, L, a, b, Cr, H, Y, and Cb) and texture metrics [such as gray-level co-occurrence matrix (GLCM)-contrast (CON) and GLCM-inverse difference moment (IDM)] between the two groups of individuals with pulmonary nodules and lung cancer (P < 0.05). To construct a classification model, LASSO regression was used to select 63 key features from the initial 136 facial features. Based on this feature set, the SVM model demonstrated the best performance after 10-fold stratified cross-validation. The model achieved an average AUC of 0.8729 and an average accuracy of 0.7990 on the internal test set. Further validation on an independent test set confirmed the model's robust performance (AUC = 0.8233, accuracy = 0.7290), indicating its good generalization ability. Feature importance analysis demonstrated that color space indicators and the whole/lip Cr components (including color-B-0, wholecolor-Cr, and lipcolor-Cr) were the core factors in the model's classification decisions, while texture indicators [GLCM-angular second moment (ASM)_2, GLCM-IDM_1, GLCM-CON_1, GLCM-entropy (ENT)_2] played an important auxiliary role. Conclusion The facial image features of patients with lung cancer and pulmonary nodules show significant differences in color and texture characteristics in multiple areas. The various models constructed based on facial image features all demonstrate good performance, indicating that facial image features can serve as potential biomarkers for lung cancer risk prediction, providing a non-invasive and feasible new approach for early lung cancer screening.
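The screen-then-classify pipeline above (LASSO feature selection followed by an SVM under 10-fold stratified cross-validation) can be sketched with scikit-learn. In this sketch, L1-penalised logistic regression plays the LASSO role for a classification target, and the data are synthetic, not facial image features.

```python
# LASSO-style feature screening inside a cross-validated SVM pipeline.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           random_state=0)
pipe = make_pipeline(
    StandardScaler(),
    # L1 penalty zeroes out uninformative coefficients, mimicking LASSO screening
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    SVC(kernel="rbf"),
)
scores = cross_val_score(pipe, X, y,
                         cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
mean_acc = scores.mean()
```

Keeping the selection step inside the pipeline ensures features are re-screened on each training fold, avoiding the information leak that selecting once on the full dataset would introduce.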
Funding: Supported by the National Health Commission's Key Laboratory of Gastrointestinal Tumor Diagnosis and Treatment for the Year 2022, the National Health Commission's Master's and Doctoral/Postdoctoral Fund Project (No. NHCDP2022001), and the Gansu Provincial People's Hospital Doctoral Supervisor Training Project (No. 22GSSYA-3).
Abstract: The study by Luo et al published in the World Journal of Gastrointestinal Oncology presents a thorough and scientific methodology. Pancreatic cancer is the most challenging malignancy of the digestive system, exhibiting one of the highest cancer-associated mortality rates globally. The delayed onset of symptoms and diagnosis often results in metastasis or local progression of the cancer, thereby constraining treatment options and outcomes. For these patients, prompt tumour identification and treatment strategising are crucial. The present objective of pancreatic cancer research is to examine the correlation between various pathological types and imaging data to facilitate therapeutic decision-making. This study aims to clarify the correlation between diverse pathological markers and imaging in pancreatic cancer patients, with prospective longitudinal studies potentially providing novel insights into the diagnosis and treatment of pancreatic cancer.
Fund: Supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province, China (SJCX23_1973); the National Natural Science Foundation of China (32172110, 32071945); the Key Research and Development Program (Modern Agriculture) of Jiangsu Province, China (BE2022342-2, BE2020319); the Anhui Province Crop Intelligent Planting and Processing Technology Engineering Research Center Open Project, China (ZHKF04); the National Key Research and Development Program of China (2023YFD2300201, 2023YFD1202200); the Special Funds for Scientific and Technological Innovation of Jiangsu Province, China (BE2022425); the Priority Academic Program Development of Jiangsu Higher Education Institutions, China (PAPD); the Central Public-interest Scientific Institution Basal Research Fund, China (JBYW-AII-2023-08); the Science and Technology Innovation Project of the Chinese Academy of Agricultural Sciences (CAAS-CS-202201); and the Special Fund for Independent Innovation of Agriculture Science and Technology in Jiangsu Province, China (CX(22)3112).
Abstract: The wheat above-ground biomass (AGB) is an important index of vegetation vitality and is of great significance for wheat growth monitoring and yield prediction. Traditional biomass estimation methods include sample surveys and harvesting statistics; although highly accurate, they are time-consuming, destructive, and difficult to apply to large-scale monitoring. The main objective of this study is to improve traditional remote sensing methods for estimating wheat AGB using improved convolutional features (CFs). Low-cost unmanned aerial vehicles (UAVs) served as the main data acquisition platform. This study acquired RGB camera (RGB) and multi-spectral (MS) image data of the wheat population canopy for two wheat varieties at five key growth stages, and field measurements were conducted to obtain actual wheat biomass data for validation. Based on remote sensing indices (RSIs), structural features (SFs), and CFs, this study proposed a new feature set named AUR-50 (a multi-source combination based on convolutional feature optimization) to estimate wheat AGB. The results show that AUR-50 estimates wheat AGB more accurately than RSIs or SFs alone, with an average R² exceeding 0.77. In the overwintering period, AUR-50_MS (the multi-source combination with convolutional feature optimization using multispectral imagery) had the highest estimation accuracy (R² of 0.88). In addition, by adding CFs, AUR-50 reduced the effect of vegetation index saturation on biomass estimation accuracy, with the highest R² at the flowering stage reaching 0.69. This study provides an effective, high-throughput method to evaluate wheat AGB and a research reference for the phenotypic parameters of other crops.
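The R² values reported above are the standard coefficient of determination between measured and estimated biomass. A minimal sketch of that validation step, using hypothetical AGB values rather than the study's data:

```python
def r_squared(measured, predicted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_y = sum(measured) / len(measured)
    ss_res = sum((y - f) ** 2 for y, f in zip(measured, predicted))
    ss_tot = sum((y - mean_y) ** 2 for y in measured)
    return 1 - ss_res / ss_tot

# Hypothetical wheat AGB values (t/ha): field measurements vs. model estimates.
measured  = [2.1, 3.4, 4.0, 5.2, 6.1]
predicted = [2.3, 3.1, 4.2, 5.0, 6.4]
print(round(r_squared(measured, predicted), 3))
```

An R² near 1 means the estimates track the field measurements closely; saturation of a vegetation index shows up as R² dropping at high-biomass stages, which is the effect the CFs are said to mitigate.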
Fund: Supported by the Natural Science Foundation of Gansu Province (Nos. 22JR5RA217, 22JR5RA216); the Lanzhou Science and Technology Program (No. 2022-2-111); the Lanzhou University of Arts and Sciences School Innovation Fund Project (No. XJ2022000103); and the Lanzhou College of Arts and Sciences 2023 Talent Cultivation Quality Improvement Project (No. 2023-ZL-jxzz-03).
Abstract: Considering that traditional sparse representation models have low accuracy under the influence of multiple complex environmental factors, this study focuses on improving feature extraction and model construction. First, convolutional neural network (CNN) features of the face are extracted by a trained deep learning network. Next, steady-state and dynamic classifiers for face recognition are constructed from the CNN features and Haar features, respectively. Two-stage sparse representation is introduced when constructing the steady-state classifier, and feature templates with high reliability are dynamically selected as alternative templates from a sparse representation template dictionary built from the CNN features. Finally, the face recognition result is given by combining the classification results of the steady-state and dynamic classifiers. On this basis, the feature weights of the steady-state classifier templates are adjusted in real time and the dictionary set is dynamically updated to reduce the probability of irrelevant features entering it. The average recognition accuracy of this method is 94.45% on the CMU PIE face database and 96.58% on the AR face database, a significant improvement over traditional face recognition methods.
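The core idea behind sparse-representation classification is to assign a probe face to the class whose dictionary templates reconstruct it with the smallest residual. The sketch below is a drastic simplification, assuming one template per class and a single least-squares coefficient instead of the two-stage sparse coding and dynamic dictionary updates described above; the identity labels and feature vectors are hypothetical.

```python
def best_alpha(x, template):
    # Least-squares coefficient for one template: alpha = <x, t> / <t, t>.
    num = sum(xi * ti for xi, ti in zip(x, template))
    den = sum(ti * ti for ti in template)
    return num / den if den else 0.0

def residual(x, template, alpha):
    # ||x - alpha * t||_2: reconstruction error of x by this class's template.
    return sum((xi - alpha * ti) ** 2 for xi, ti in zip(x, template)) ** 0.5

def classify(x, dictionary):
    """Assign x to the class whose template reconstructs it with least residual."""
    scores = {label: residual(x, t, best_alpha(x, t))
              for label, t in dictionary.items()}
    return min(scores, key=scores.get)

# Hypothetical 4-dimensional feature templates for two identities.
templates = {
    "person_A": [1.0, 0.1, 0.0, 0.9],
    "person_B": [0.0, 1.0, 0.8, 0.1],
}
probe = [0.9, 0.2, 0.1, 0.8]
print(classify(probe, templates))  # -> person_A
```

The full method replaces the single template per class with a dictionary of many templates and a sparsity constraint on the coefficients, so that only a few reliable templates contribute to each reconstruction.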
Fund: Supported by grants from the North China University of Technology Research Start-Up Fund (11005136024XN147-14 and 110051360024XN151-97); the Guangzhou Development Zone Science and Technology Project (2023GH02); the National Key R&D Program of China (2021YFE0201100 and 2022YFA1103401 to Juntao Gao); the National Natural Science Foundation of China (981890991 to Juntao Gao); the Beijing Municipal Natural Science Foundation (Z200021 to Juntao Gao); the CAS Interdisciplinary Innovation Team (JCTD-2020-04 to Juntao Gao); Macao FDCT grant 0032/2022/A; and MYRG2022-00271-FST.
Abstract: Hematoxylin and eosin (H&E) images, widely used in digital pathology, often pose challenges due to their limited color richness, which hinders the differentiation of subtle cell features crucial for accurate classification. Enhancing the visibility of these elusive cell features helps train robust deep-learning models; however, the selection and application of image processing techniques for such enhancement have not been systematically explored by the research community. To address this challenge, we introduce Salient Features Guided Augmentation (SFGA), an approach that strategically integrates machine learning and image processing. SFGA uses machine learning algorithms to identify crucial features within cell images and then maps these features to appropriate image processing techniques to enhance the training images. By emphasizing salient features and aligning them with corresponding image processing methods, SFGA is designed to enhance the discriminating power of deep learning models in cell classification tasks. Our research undertakes a series of experiments, each exploring the performance of different datasets and data enhancement techniques in classifying cell types, highlighting the significance of data quality and enhancement in mitigating overfitting and distinguishing cell characteristics. Specifically, SFGA focuses on identifying tumor cells in tissue for extranodal extension detection, with the SFGA-enhanced dataset showing notable accuracy advantages. In a preliminary study of five experiments, the accuracy of the pleomorphism experiment improved markedly from 50.81% to 95.15%; the other four experiments also improved, by 3 to 43 percentage points. This preliminary study shows the potential to enhance the diagnostic accuracy of deep learning models and proposes a systematic approach that could improve cancer diagnosis, as a first step toward using SFGA in medical image enhancement.
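The abstract's central mechanism is a mapping from identified salient features to enhancement operations. The sketch below illustrates that dispatch pattern on a flat list of pixel intensities; the feature names, the two enhancement operations, and the top-k selection rule are all hypothetical stand-ins, since the paper's actual feature-to-technique mapping is not specified here.

```python
# Hypothetical enhancement ops on a flat list of pixel intensities (0-255).
def stretch_contrast(pixels):
    """Linearly rescale intensities to span the full 0-255 range."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return list(pixels)
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

def gamma_correct(pixels, gamma=0.5):
    """Brighten dark regions with a power-law transform."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]

# Map salient-feature names (as a feature ranker might emit them) to ops.
FEATURE_TO_OP = {
    "low_contrast": stretch_contrast,
    "dark_nuclei": gamma_correct,
}

def augment(pixels, salient_features, top_k=1):
    """Apply the ops mapped from the top-k ranked salient features, in order."""
    out = list(pixels)
    for name in salient_features[:top_k]:
        op = FEATURE_TO_OP.get(name)
        if op:
            out = op(out)
    return out

print(augment([50, 60, 70, 80], ["low_contrast"]))  # [0, 85, 170, 255]
```

The value of the dispatch-table design is that adding a new salient feature only requires registering one more (name, operation) pair, without touching the training loop.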
Abstract: BACKGROUND Autoimmune gastritis (AIG) is frequently associated with one or more comorbid conditions, among which type I gastric neuroendocrine tumors (gNETs) warrant significant clinical concern. However, risk factors for the development of gNETs in AIG populations remain poorly defined. AIM To characterize the clinical and endoscopic profiles of AIG and identify potential risk factors for gNETs development. METHODS In this single-center cross-sectional study at a tertiary hospital, 303 patients with AIG seen over an 8-year period were retrospectively categorized into gNETs (n=116) and non-gNETs (n=187) groups. Endoscopic and clinical parameters were analyzed, and endoscopic features were systematically re-evaluated according to the 2023 Japanese diagnostic criteria for AIG. Feature selection was performed using the Boruta algorithm, and the model's discriminative ability was evaluated via receiver operating characteristic (ROC) curve analysis. RESULTS Among the 303 patients with AIG, 116 had gNETs and 187 did not. Compared with the non-gNETs group, patients in the gNETs group were younger (54.3 years vs 60.6 years, P<0.001), had a higher rate of vitamin B12 deficiency (77.2% vs 55.8%, P<0.001), lower pepsinogen I (4.3 ng/mL vs 7.4 ng/mL, P<0.001), lower pepsinogen I/II ratios (0.7 vs 1.1, P<0.001), and a lower rate of prior Helicobacter pylori infection (3.4% vs 21.4%, P<0.001). Endoscopically, the gNETs group showed a lower incidence of oxyntic mucosal remnants, hyperplastic polyps, and patchy antral redness. A predictive model incorporating age, prior Helicobacter pylori infection, vitamin B12 level, gastric hyperplastic polyps, and patchy antral redness achieved an area under the curve of 0.830. CONCLUSION Patients with AIG and gNETs exhibit distinct clinical and endoscopic features. The predictive model demonstrated favorable discriminative ability and may facilitate risk stratification for gNETs in patients with AIG.
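The area under the ROC curve reported for the predictive model (0.830) has a simple probabilistic reading: it is the chance that a randomly chosen gNETs patient receives a higher risk score than a randomly chosen non-gNETs patient. A minimal sketch of that pairwise (Mann-Whitney) computation, using hypothetical risk scores rather than the study's data:

```python
def roc_auc(labels, scores):
    """AUC via the pairwise formulation: the probability that a randomly
    chosen positive case outscores a randomly chosen negative case
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model risk scores (label 1 = gNETs, 0 = non-gNETs).
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.90, 0.75, 0.40, 0.55, 0.30, 0.20, 0.10]
print(round(roc_auc(labels, scores), 3))  # 0.917
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why a value of 0.830 is described as favorable discriminative ability.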