Journal Articles
30,572 articles found
1. A Convolutional Neural Network-Based Deep Support Vector Machine for Parkinson's Disease Detection with Small-Scale and Imbalanced Datasets
Authors: Kwok Tai Chui, Varsha Arya, Brij B. Gupta, Miguel Torres-Ruiz, Razaz Waheeb Attar. Computers, Materials & Continua, 2026, No. 1, pp. 1410–1432.
Abstract: Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature, and deep learning algorithms are believed to further enhance performance; this is challenging, however, given the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) that automates feature extraction with a CNN and extends the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the bias of classification towards the majority class (healthy candidates in this setting), and an improved generative adversarial network (IGAN) generates additional training data to enhance the model's performance. In the evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. Performance is compared from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN, which improves sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and of the CNN-DSVM, which improves sensitivity by 1.24%–57.4% and specificity by 1.04%–163% while reducing biased detection towards the majority class. Ablation experiments confirm the contribution of the individual components, and two future research directions are suggested.
Keywords: Convolutional neural network; data generation; deep support vector machine; feature extraction; generative artificial intelligence; imbalanced dataset; medical diagnosis; Parkinson's disease; small-scale dataset
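The abstract does not specify the customized kernel, so as an illustration only: one generic way to damp the majority class's influence in a kernel machine is to scale an RBF Gram matrix by inverse-class-frequency sample weights. All data, weights, and function names below are hypothetical, not the paper's method.

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Standard RBF kernel: exp(-gamma * ||x - z||^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq)

def class_weights(labels):
    """Inverse-frequency weights: minority-class samples count more."""
    n = len(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    return [n / (len(counts) * counts[y]) for y in labels]

def weighted_gram(X, labels, gamma=0.5):
    """Gram matrix with rows/columns scaled by class weights, so
    majority-class pairs contribute less to the decision function."""
    w = class_weights(labels)
    return [[w[i] * w[j] * rbf_kernel(X[i], X[j], gamma)
             for j in range(len(X))] for i in range(len(X))]

# toy imbalanced set: 3 "healthy" samples, 1 "PD" sample
X = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [1.0, 1.0]]
y = [0, 0, 0, 1]
K = weighted_gram(X, y)
```

The inverse-frequency scheme gives each class equal total weight regardless of its sample count, which is one common remedy for biased classification towards the majority class.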
2. A solution framework for the experimental data shortage problem of lithium-ion batteries: Generative adversarial network-based data augmentation for battery state estimation [Cited by 1]
Authors: Jinghua Sun, Ankun Gu, Josef Kainz. Journal of Energy Chemistry, 2025, No. 4, pp. 476–497.
Abstract: To address the widespread data shortage problem in battery research, this paper proposes a generative adversarial network model that combines deep convolutional networks, the Wasserstein distance, and a gradient penalty to achieve data augmentation. To lower the threshold for implementing the method, transfer learning is further introduced, forming the W-DC-GAN-GP-TL framework. The framework is evaluated on three different publicly available datasets to judge the quality of the generated data. Through visual comparisons and two visualization methods, the probability density function (PDF) and principal component analysis (PCA), it is demonstrated that the generated data are hard to distinguish from real data. The use of generated data for training a battery state model via transfer learning is further evaluated: Bi-GRU-based and Transformer-based methods are implemented on two separate datasets for estimating state of health (SOH) and state of charge (SOC), respectively. The results indicate satisfactory performance in different scenarios. In the data replacement scenario, where real data are removed and replaced with generated data, the state estimator accuracy decreases only slightly; in the data enhancement scenario, the estimator accuracy is further improved. After applying the proposed framework, the estimation errors for SOH and SOC are as low as 0.69% and 0.58% root mean square error (RMSE), respectively. The framework thus provides a reliable method for enriching battery measurement data and generalizes to a variety of time-series data.
Keywords: Lithium-ion battery; generative adversarial network; data augmentation; state of health; state of charge; data shortage
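The Wasserstein-distance-plus-gradient-penalty construction mentioned in the abstract regularizes the critic so its input gradient stays near unit norm. A minimal sketch of just the penalty term, assuming a toy linear critic f(x) = w·x whose input gradient is analytically w (this is not the paper's model, only an illustration of the penalty's shape):

```python
import math

def linear_critic_grad(w):
    """For f(x) = w . x, the gradient of f with respect to its input is w."""
    return w

def gradient_penalty(w, lam=10.0):
    """WGAN-GP style penalty: lam * (||grad_x f|| - 1)^2. In a real GAN it is
    evaluated at points interpolated between real and generated samples; for
    a linear critic the input gradient is constant, so no sample is needed."""
    g = linear_critic_grad(w)
    norm = math.sqrt(sum(v * v for v in g))
    return lam * (norm - 1.0) ** 2

print(gradient_penalty([1.0, 0.0]))  # unit-norm gradient: penalty 0.0
print(gradient_penalty([3.0, 4.0]))  # gradient norm 5: penalty 10 * 16 = 160.0
```

The penalty is zero exactly when the critic is 1-Lipschitz along the sampled direction, which is what makes the Wasserstein estimate well behaved.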
3. DSP-free coherent receivers in frequency-synchronous optical networks for next-generation data center interconnects [Cited by 1]
Authors: Lei Liu, Feng Liu, Cheng Peng, Bo Xue, William Shieh. Advanced Photonics Nexus, 2025, No. 3, pp. 141–148.
Abstract: Propelled by the rise of artificial intelligence, cloud services, and data center applications, next-generation low-power, local-oscillator-less, digital signal processing (DSP)-free, short-reach coherent optical communication has become an increasingly prominent research area in recent years. Here, we demonstrate DSP-free coherent optical transmission using analog signal processing in a frequency-synchronous optical network (FSON) architecture, which supports polarization multiplexing and higher-order modulation formats. The FSON architecture allows the numerous laser sources of optical transceivers within a data center to be quasi-synchronized by means of a tree-distributed homology architecture. In conjunction with our proposed pilot-tone-assisted Costas loop for an analog coherent receiver, we achieve a record dual-polarization 224-Gb/s 16-QAM 5-km mismatch transmission with reset-free carrier phase recovery in the optical domain. The proposed DSP-free analog coherent detection system based on the FSON is a promising solution for next-generation low-power, high-capacity coherent data center interconnects.
Keywords: Digital signal processing-free; data center interconnect; frequency synchronous optical network; analog signal processing
4. A Generative Model-Based Network Framework for Ecological Data Reconstruction
Authors: Shuqiao Liu, Zhao Zhang, Hongyan Zhou, Xuebo Chen. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 929–948.
Abstract: This study examines the effectiveness of artificial intelligence techniques in generating high-quality environmental data for species introduction site selection systems. Combining Strengths, Weaknesses, Opportunities, Threats (SWOT) analysis data with a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN), a network framework model (SAE-GAN) is proposed for environmental data reconstruction. The model combines the two generative models to generate features conditioned on categorical data embeddings obtained after the SWOT analysis. It can generate features that resemble real feature distributions and add sample factors to track individual sample data more accurately, and the reconstructed data retain more semantic information for feature generation. The model was applied to species in Southern California, USA, using SWOT analysis data for training. Experiments show that the model can integrate data from more comprehensive analyses than traditional methods and generate high-quality reconstructed data from them, effectively addressing insufficient data collection in development environments. The model is further validated by the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) classification assessment commonly used in the environmental data domain. This study provides a reliable and rich source of training data for species introduction site selection systems and contributes to ecological and sustainable development.
Keywords: Convolutional Neural Network (CNN); VAE; GAN; TOPSIS; data reconstruction
5. Experiments on image data augmentation techniques for geological rock type classification with convolutional neural networks [Cited by 2]
Authors: Afshin Tatar, Manouchehr Haghighi, Abbas Zeinijahromi. Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 1, pp. 106–125.
Abstract: The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. The study focuses primarily on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentations like Equalize significantly enhance the model's classification capabilities, achieving F1-scores of 0.9869 for igneous, 0.9884 for metamorphic, and 0.9929 for sedimentary rocks, improvements over the original baseline; the weighted average F1-score across all classes and techniques is 0.9886. Conversely, methods like Distort decrease accuracy, with F1-scores of 0.949 for igneous, 0.954 for metamorphic, and 0.9416 for sedimentary rocks, below the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates the adoption of DL methods in this domain for automation and improved results. The findings can benefit various fields, including remote sensing, mineral exploration, and environmental monitoring, by enhancing the accuracy of geological image analysis for both scientific research and industrial applications.
Keywords: Deep learning (DL); image analysis; image data augmentation; convolutional neural networks (CNNs); geological image analysis; rock classification; rock thin section (RTS) images
6. Data Gathering Based on Hybrid Energy Efficient Clustering Algorithm and DCRNN Model in Wireless Sensor Network
Authors: Li Cuiran, Liu Shuqi, Xie Jianli, Liu Li. China Communications, 2025, No. 3, pp. 115–131.
Abstract: To solve the problems of short network lifetime and high data transmission delay in data gathering for wireless sensor networks (WSNs) caused by uneven energy consumption among nodes, a hybrid energy-efficient clustering routing algorithm based on the firefly and pigeon-inspired algorithms (FF-PIA) is proposed to optimize the data transmission path. After the optimal number of cluster-head (CH) nodes is obtained, the result is used to produce the initial population of the FF-PIA algorithm. A Lévy flight mechanism and adaptive inertia weighting are employed in the algorithm's iterations to balance global and local search, and a Gaussian perturbation strategy updates the optimal solution so that the algorithm can escape local optima. For WSN data gathering, a one-dimensional signal reconstruction model is developed from dilated convolution and residual neural networks (DCRNN). Experiments on the National Oceanic and Atmospheric Administration (NOAA) dataset show that the DCRNN-driven data reconstruction algorithm improves both reconstruction accuracy and reconstruction time. Co-simulation of FF-PIA clustering routing and the DCRNN reveals that the proposed algorithm effectively extends the network lifetime and reduces data transmission delay.
Keywords: Clustering; data gathering; DCRNN model; network lifetime; wireless sensor network
7. A Modified Deep Residual-Convolutional Neural Network for Accurate Imputation of Missing Data
Authors: Firdaus Firdaus, Siti Nurmaini, Anggun Islami, Annisa Darmawahyuni, Ade Iriani Sapitri, Muhammad Naufal Rachmatullah, Bambang Tutuko, Akhiar Wista Arum, Muhammad Irfan Karim, Yultrien Yultrien, Ramadhana Noor Salassa Wandya. Computers, Materials & Continua, 2025, No. 2, pp. 3419–3441.
Abstract: Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. This study introduces a novel data imputation method based on a modified convolutional neural network: a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. The approach improves substantially on existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. The model was evaluated on publicly available datasets, including the Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV) critical-care patient data and the Beijing Multi-Site Air Quality dataset. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its accuracy and robustness, against 0.00075 for a Low Light-Convolutional Neural Network (LL-CNN) and 0.00073 for U-Net, improvements of approximately 92% and 91%, respectively. These results show that DRes-CNN-based imputation outperforms current state-of-the-art models and establish it as a reliable solution for addressing missing data.
Keywords: Data imputation; missing data; deep learning; deep residual convolutional neural network
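The improvement percentages quoted in the abstract follow directly from the reported RMSE values; a quick check:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def relative_improvement(baseline, proposed):
    """Fractional reduction in error relative to a baseline."""
    return (baseline - proposed) / baseline

# RMSE values reported in the abstract
dres_cnn, ll_cnn, u_net = 0.00006, 0.00075, 0.00073
print(f"{relative_improvement(ll_cnn, dres_cnn) * 100:.1f}")  # 92.0 (% over LL-CNN)
print(f"{relative_improvement(u_net, dres_cnn) * 100:.1f}")   # 91.8 (% over U-Net)
```

These round to the 92% and 91% improvements stated in the abstract.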
8. Enhancing Healthcare Data Privacy in Cloud IoT Networks Using Anomaly Detection and Optimization with Explainable AI (ExAI)
Authors: Jitendra Kumar Samriya, Virendra Singh, Gourav Bathla, Meena Malik, Varsha Arya, Wadee Alhalabi, Brij B. Gupta. Computers, Materials & Continua, 2025, No. 8, pp. 3893–3910.
Abstract: The integration of the Internet of Things (IoT) into healthcare systems improves patient care, boosts operational efficiency, and contributes to cost-effective healthcare delivery. However, overcoming several associated challenges, such as data security, interoperability, and ethical concerns, is crucial to realizing the full potential of IoT in healthcare. Real-time anomaly detection plays a key role in protecting patient data and maintaining device integrity amid the additional security risks posed by interconnected systems. In this context, this paper presents a novel method for healthcare data privacy analysis based on identifying anomalies in cloud-based IoT networks, optimized using explainable artificial intelligence. For anomaly detection, a Radial Boltzmann Gaussian Temporal Fuzzy Network (RBGTFN) performs the privacy analysis of healthcare data, and Remora Colony Swarm Optimization is then used to optimize the network. An experimental study evaluates the model's performance in identifying anomalies across a variety of healthcare data, measuring accuracy, precision, latency, Quality of Service (QoS), and scalability. The proposed model obtained a remarkable 98% detection accuracy, 95% precision, 93% latency, 89% QoS, and 96% scalability.
Keywords: Healthcare data privacy analysis; anomaly detection; cloud IoT network; explainable artificial intelligence; temporal fuzzy network
9. Optimal Secure Control of Networked Control Systems Under False Data Injection Attacks: A Multi-Stage Attack-Defense Game Approach
Authors: Dajun Du, Yi Zhang, Baoyue Xu, Minrui Fei. IEEE/CAA Journal of Automatica Sinica, 2025, No. 4, pp. 821–823.
Abstract: Dear Editor, the attacker covertly intrudes into networked control systems (NCSs) by dynamically changing the false data injection attack (FDIA) strategy, while the defender tries its best to resist attacks by designing a defense strategy based on identifying the attack strategy, thereby maintaining stable operation of the NCSs. To solve this attack-defense game problem, this letter investigates optimal secure control of NCSs under FDIAs. First, to capture the alterations of energy caused by false data, a novel attack-defense game model is constructed that considers the changes in energy caused by the actions of the defender and attacker in the forward and feedback channels.
Keywords: Networked control systems (NCSs); false data injection attacks (FDIAs); optimal secure control; attack-defense game; defense strategy
10. Enhanced Multi-Object Dwarf Mongoose Algorithm for Optimization Stochastic Data Fusion Wireless Sensor Network Deployment
Authors: Shumin Li, Qifang Luo, Yongquan Zhou. Computer Modeling in Engineering & Sciences, 2025, No. 2, pp. 1955–1994.
Abstract: Wireless sensor network deployment optimization is a classic NP-hard problem and a popular topic in academic research. However, current research on deployment problems uses overly simplistic models, and there is a significant gap between research results and actual wireless sensor networks. Some scholars have therefore modeled data fusion networks to make them more suitable for practical applications. This paper explores the deployment of a stochastic data fusion wireless sensor network (SDFWSN), a model that reflects the randomness of environmental monitoring and uses the data fusion techniques widely applied in real sensor networks for information collection. The deployment of SDFWSN is modeled as a multi-objective optimization problem, with network lifetime, spatiotemporal coverage, detection rate, and false alarm rate as the objectives for optimizing node placement, and an enhanced multi-objective dwarf mongoose optimization algorithm (EMODMOA) is proposed to solve it. First, to overcome shortcomings of the DMOA algorithm, such as slow convergence and a tendency to become trapped in local optima, an encircling and hunting strategy is introduced into the original algorithm, yielding the EDMOA algorithm; this is extended to the multi-objective EMODMOA by selecting reference points with the K-Nearest Neighbor (KNN) algorithm. The EMODMOA was tested on the CEC 2020 benchmark and achieved good results. On the SDFWSN deployment problem, it was compared with the Non-dominated Sorting Genetic Algorithm II (NSGA-II), Multiple Objective Particle Swarm Optimization (MOPSO), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), and Multi-Objective Grey Wolf Optimizer (MOGWO). Comparative analysis of the performance evaluation metrics and the optimized objective functions shows that the algorithm outperforms the others on the SDFWSN deployment results, and simulations of diverse test cases further demonstrate its superiority.
Keywords: Stochastic data fusion wireless sensor networks; network deployment; spatiotemporal coverage; dwarf mongoose optimization algorithm; multi-objective optimization
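The multi-objective comparison above rests on Pareto dominance: a deployment is kept only if no other deployment is at least as good in every objective and strictly better in one. A minimal sketch of extracting a non-dominated front for two minimization objectives (the candidate points are hypothetical):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (energy, delay) objective pairs for candidate deployments
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5), (3.0, 5.0)]
print(pareto_front(candidates))
```

Algorithms such as NSGA-II and the proposed EMODMOA build on exactly this dominance test, layering selection and diversity mechanisms on top of it.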
11. A Hierarchical-Based Sequential Caching Scheme in Named Data Networking
Authors: Zhang Junmin, Jin Jihuan, Hou Rui, Dong Mianxiong, Kaoru Ota, Zeng Deze. China Communications, 2025, No. 5, pp. 48–60.
Abstract: Named data networking (NDN) is an idealized deployment of information-centric networking (ICN) that has attracted attention from scientists and scholars worldwide. A distributed in-network caching scheme can efficiently realize load balancing; however, such ubiquitous caching may cause problems including duplicate caching and low data diversity, reducing the caching efficiency of NDN routers. To mitigate these problems and improve caching efficiency, this paper proposes a hierarchical-based sequential caching (HSC) scheme. The NDN routers along the data transmission path are divided into levels, and data with different request frequencies are cached at distinct router levels. The aim is to cache data with high request frequencies in the router closest to the content requester, increasing the response probability of nearby data, improving caching efficiency, shortening response time, and reducing cache redundancy. Simulation results show that the scheme effectively improves the cache hit rate (CHR) while reducing the average request delay (ARD) and average route hops (ARH).
Keywords: Hierarchical router; named data networking; sequential caching
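The level-by-frequency placement described above can be sketched as a frequency-ranked assignment: hottest content at the router level nearest the requester, colder content further upstream. The content names, counts, and partitioning rule below are hypothetical; the paper's actual level-assignment policy may differ.

```python
def assign_cache_levels(request_counts, path_levels):
    """Hierarchical sequential caching sketch: rank content by request
    frequency and place the hottest items at level 0 (closest to the
    requester), progressively colder items at higher levels."""
    ranked = sorted(request_counts, key=request_counts.get, reverse=True)
    per_level = max(1, len(ranked) // path_levels)
    placement = {}
    for i, name in enumerate(ranked):
        placement[name] = min(i // per_level, path_levels - 1)
    return placement

counts = {"/video/a": 90, "/doc/b": 40, "/img/c": 15, "/log/d": 2}
print(assign_cache_levels(counts, path_levels=2))
```

Keeping each item at exactly one level along the path is what removes the duplicate-caching and low-diversity problems of ubiquitous caching.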
12. A symmetric difference data enhancement physics-informed neural network for the solving of discrete nonlinear lattice equations
Authors: Jian-Chen Zhou, Xiao-Yong Wen, Ming-Juan Guo. Communications in Theoretical Physics, 2025, No. 6, pp. 21–29.
Abstract: In this paper, we propose a symmetric difference data enhancement physics-informed neural network (SDE-PINN) to study soliton solutions of discrete nonlinear lattice equations (NLEs). By considering known and unknown symmetric points, numerical simulations are conducted for one-soliton and two-soliton solutions of a discrete KdV equation, as well as a one-soliton solution of a discrete Toda lattice equation. Compared with the existing discrete deep learning approach, the numerical results show that within the specified spatiotemporal domain the prediction accuracy of SDE-PINN is excellent for both interior and extrapolation prediction, with a significant reduction in training time. The proposed data enhancement technique and symmetric structure provide a new perspective for deep learning approaches to discrete NLEs, and SDE-PINN can also be applied to solve continuous nonlinear equations and other discrete NLEs numerically.
Keywords: Symmetric difference data enhancement; physics-informed neural network; symmetric point; soliton solutions; discrete nonlinear lattice equations
13. Application of Dual-Polarization Radar Data Assimilation Via a Deep UNet Network Model
Authors: XIA Xin, YIN Peng-shuai, WAN Qi-lin, GAO Yan, WANG Hong, FENG Jia-li, MA Yu-long, JIN Yu-chao, SUN Jian, SUN Shu-yue, ZENG Qing-feng. Journal of Tropical Meteorology, 2025, No. 6, pp. 591–602.
Abstract: The assimilation of dual-polarization (dual-pol) radar data plays a crucial role in enhancing the simulation of hydrometeors and improving short-term precipitation forecasts in numerical weather prediction (NWP) models. However, existing dual-pol radar data assimilation (DA) methods are limited in computational efficiency and data utilization. In this study, a new dual-pol radar DA approach is developed that uses a UNet-based model to retrieve mixing-ratio information for four hydrometeor species from dual-pol radar data. Validation indicates that the distributions of the retrieved hydrometeor mixing ratios align well with the labeled data, with a reasonable range of root mean square errors (RMSEs). On this basis, the hydrometeor analysis increments retrieved by the UNet-based model are incorporated into the model integration through the incremental analysis update (IAU) scheme, establishing a complete dual-pol radar DA framework for the CMA-MESO model. To evaluate the scheme, comparative simulation experiments were conducted for Typhoon Lekima (2019). Verification shows that the hydrometeor DA scheme generally improves threat scores (TSs) for 3-hour accumulated precipitation in medium- and heavy-rainfall events, and the 24-hour accumulated rainfall TSs for the medium-, heavy-, and extreme-precipitation categories in the DA experiment are all superior to those of the control experiment. The DA method also yields better predictions of the spatial distribution of extreme-rainfall events. These results demonstrate that the proposed approach effectively enhances the precipitation forecasting capability of numerical weather models.
Keywords: Dual-polarization radar; data assimilation; UNet network; incremental analysis update; tropical cyclone
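The incremental analysis update (IAU) scheme mentioned above applies an analysis increment as a constant forcing spread evenly over the assimilation window, rather than inserting it in a single step that can shock the model. A toy sketch (the increment value, window length, and function names are hypothetical):

```python
def iau_forcing(increment, window_steps):
    """Split an analysis increment into equal per-step forcings that sum
    to the full increment over the assimilation window."""
    return [increment / window_steps] * window_steps

def integrate(state, tendencies, forcings):
    """Toy model integration: add the model tendency plus IAU forcing
    at each time step."""
    for t, f in zip(tendencies, forcings):
        state += t + f
    return state

# hypothetical hydrometeor mixing-ratio increment (g/kg) over a 4-step window
forc = iau_forcing(0.8, 4)
final = integrate(10.0, [0.0, 0.0, 0.0, 0.0], forc)
print(final)  # full increment absorbed gradually, ending near 10.8
```

The gradual insertion is what suppresses the spurious gravity-wave noise a one-shot update would excite.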
14. Dynamic Collaborative Data Download in Heterogeneous Satellite Networks
Authors: Wu Qi, Li Xintong, Zhu Lidong. China Communications, 2025, No. 2, pp. 26–46.
Abstract: The low-earth-orbit (LEO) satellite network has become a critical component of the satellite-terrestrial integrated network (STIN) due to its superior signal quality and minimal communication latency. However, the highly dynamic nature of LEO satellites leads to limited and rapidly varying contact time between them and Earth stations (ESs), making it difficult to download massive communication and remote sensing data within the limited time window. To address this challenge in heterogeneous satellite networks where geostationary-earth-orbit (GEO) and LEO satellites coexist, this paper proposes a dynamic collaborative inter-satellite data download strategy that optimizes the long-term weighted energy consumption and data downloads within the constraints of on-board power, backlog stability, and time-varying contact. Specifically, Lyapunov optimization theory transforms the long-term stochastic optimization problem, subject to time-varying contact-time and on-board power constraints, into multiple deterministic single-time-slot problems, from which online distributed algorithms are developed that let each satellite independently obtain transmit power allocation and data processing decisions in closed form. Simulation results demonstrate the superiority of the proposed scheme over benchmarks, achieving asymptotic optimality of the weighted energy consumption and data downloads while maintaining stability of the on-board backlog.
Keywords: Backlog stability; data download; heterogeneous satellite networks; Lyapunov optimization; power allocation
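The Lyapunov transformation described above reduces the long-term stochastic problem to a per-slot trade-off between energy cost and queue (backlog) drain. A drift-plus-penalty sketch with a hypothetical discrete power set and toy rate curve (not the paper's closed-form solution):

```python
import math

def drift_plus_penalty_power(queue_backlog, power_options, rate_fn, v_weight):
    """Per-slot drift-plus-penalty decision: choose the transmit power
    minimizing V * energy_cost - Q * service_rate, so a large backlog Q
    pushes the choice toward higher power."""
    return min(power_options,
               key=lambda p: v_weight * p - queue_backlog * rate_fn(p))

def rate(p):
    """Toy log-capacity curve (hypothetical channel gain of 4)."""
    return math.log2(1.0 + 4.0 * p)

options = [0.0, 0.5, 1.0, 2.0]
print(drift_plus_penalty_power(1.0, options, rate, v_weight=10.0))   # small backlog -> 0.0
print(drift_plus_penalty_power(50.0, options, rate, v_weight=10.0))  # large backlog -> 2.0
```

Because each satellite only needs its own backlog and channel state to evaluate this objective, the decision can be made online and in a distributed fashion, as the abstract claims.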
Unveiling core acupoints in acupuncture treatment for primary depressive disorder:integrating data mining and network acupuncture-based analysis
15
作者 Siyu LIU Xinnan LUOa Jiayun XIE +2 位作者 Miqun ZHOU Xiaona HU Shuang SONG 《Digital Chinese Medicine》 2025年第4期504-516,共13页
Objective To identify core acupoint patterns and elucidate the molecular mechanisms of acupuncture for primary depressive disorder(PDD)through data mining and network analysis.Methods A comprehensive literature search... Objective To identify core acupoint patterns and elucidate the molecular mechanisms of acupuncture for primary depressive disorder(PDD)through data mining and network analysis.Methods A comprehensive literature search was conducted across PubMed,Embase,Ovid Technologies(OVID),Web of Science,Cochrane Library,China National Knowledge Infrastructure(CNKI),China National Knowledge Infrastructure Database(VIP),Wanfang Data,and SinoMed Database from database foundation to January 31,2025,for clinical studies on acupuncture treatment of PDD.Descriptive statistics,high-frequency acupoint analysis,degree and betweenness centrality evaluation,and core acupoint prescription mining identified predominant therapeutic combinations for PDD.Network acupuncture was used to predict therapeutic target for the core acupoint prescription.Subsequent protein-protein interaction(PPI)network and molecular complex detection(MCODE)analyses were conducted to identify the key targets and functional modules.Gene Ontology(GO)and Kyoto Encyclopedia of Genes and Genomes(KEGG)analyses explored the underlying biological mechanisms of the core acupoint prescription in treating PDD.Results A total of 57 acupoint prescriptions underwent systematic analysis.The core therapeutic combinations comprised Baihui(GV20),Yintang(GV29),Neiguan(PC6),Hegu(LI4),and Shenmen(HT7).Network acupuncture analysis identified 88 potential therapeutic targets(79 overlapping with PDD),while PPI network analysis revealed central regulatory nodes,including interleukin(IL)-6,IL-1β,tumor necrosis factor(TNF)-α,toll-like receptor 4(TLR4),IL-10,brain-derived neurotrophic factor(BDNF),transforming growth factor(TGF)-β1,C-XC motif chemokine ligand 10(CXCL10),mitogen-activated protein kinase 3(MAPK3),and nitric oxide 
synthase 1(NOS1).MCODE-based modular analysis further elucidated three functionally coherent clusters:inflammation-homeostasis(score=6.571),plasticity-neurotransmission(score=3.143),and oxidative stress(score=3.000).GO and KEGG analyses demonstrated significant enrichment of the MAPK,phosphoinositide 3-kinase/protein kinase B(PI3K/Akt),and hypoxia-inducible factor(HIF)-1 signaling pathways.These mechanistic insights suggested that the antidepressant effects mediated through mechanisms of neuroinflammatory regulation,neuroplasticity restoration,and immune-oxidative stress homeostasis.Conclusion This study reveals that acupuncture alleviates depression through a multi-level mechanism,primarily involving the neuroinflammation suppression,neuroplasticity enhancement,and oxidative stress regulation.These findings systematically clarify the underlying mechanisms of acupuncture’s antidepressant effects and identify novel therapeutic targets for further mechanistic research. 展开更多
Keywords: Acupuncture; Primary depressive disorder (PDD); Data mining; Network acupuncture; Association analysis
Big Texture Dataset Synthesized Based on Gradient and Convolution Kernels Using Pre-Trained Deep Neural Networks
16
Authors: Farhan A. Alenizi, Faten Khalid Karim, Alaa R. Al-Shamasneh, Mohammad Hossein Shakoor 《Computer Modeling in Engineering & Sciences》 2025, Issue 8, pp. 1793-1829 (37 pages)
Deep neural networks provide accurate results for most applications; however, they need a big dataset to train properly, and providing one is a significant challenge in many applications. Image augmentation refers to techniques that increase the amount of image data; common operations include changes in illumination, rotation, contrast, size, and viewing angle. Recently, Generative Adversarial Networks (GANs) have been employed for image generation, but like image augmentation methods, GAN approaches can only generate images similar to the originals, so they too cannot generate new classes of data. Texture images present more challenges than general images, and generating textures is more complex than creating other types of images. This study proposes a gradient-based deep neural network method that generates new classes of texture. Using different kernels from pre-trained deep networks, new texture classes can be generated rapidly. After generating new textures for each class, the number of textures is increased through image augmentation; during this process, several techniques are proposed to automatically remove incomplete and similar textures. The proposed method is around 4 to 10 times faster than some well-known generative networks, and the quality of the generated textures surpasses that of those networks, as well as some GANs and parametric models, on certain image quality metrics. It can provide a big texture dataset to train deep networks. A new big texture dataset, called BigTex, was created artificially using the proposed method. The dataset is approximately 2 GB in size and comprises 30,000 textures, each 150×150 pixels, organized into 600 classes; it has been uploaded to Kaggle and Google Drive. Compared with other texture datasets, the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
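The pipeline the abstract describes (seed gradient image → convolution with kernels harvested from a pre-trained network → normalisation into a new texture class) can be sketched as follows. The random 3×3 kernels here merely stand in for real pre-trained CNN filters, and the tiny 16×16 size is for illustration only.

```python
import random

def make_gradient(size=16):
    """Seed image: a horizontal linear gradient in [0, 1]."""
    return [[x / (size - 1) for x in range(size)] for _ in range(size)]

def convolve2d(img, kernel):
    """Minimal 'valid' 2-D convolution, no external dependencies."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += img[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out

def synthesize_texture(kernel, size=16):
    """One new texture class: the gradient filtered by one kernel,
    rescaled back into [0, 1]."""
    tex = convolve2d(make_gradient(size), kernel)
    lo = min(min(r) for r in tex)
    hi = max(max(r) for r in tex)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in tex]

rng = random.Random(0)
# Stand-ins for kernels taken from a pre-trained CNN's first conv layer.
kernels = [[[rng.gauss(0, 1) for _ in range(3)] for _ in range(3)]
           for _ in range(4)]
textures = [synthesize_texture(k) for k in kernels]
print(len(textures), len(textures[0]), len(textures[0][0]))
```

Each distinct kernel yields a distinct filtered pattern, which is the sense in which new "classes" are generated; the paper then augments and filters these automatically.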
Keywords: Big texture dataset; Data generation; Pre-trained deep neural network
Adjustable random linear network coding (ARLNC): A solution for data transmission in dynamic IoT computational environments
17
Authors: Raffi Dilanchian, Ali Bohlooli, Kamal Jamshidi 《Digital Communications and Networks》 2025, Issue 2, pp. 574-586 (13 pages)
In mobile computing environments, most IoT devices connected to networks experience variable error rates and possess limited bandwidth. The conventional method of retransmitting information lost during transmission, commonly used in data transmission protocols, increases transmission delay and consumes excessive bandwidth. To overcome this issue, forward error correction techniques such as Random Linear Network Coding (RLNC) can be used in data transmission. The primary limitation of RLNC-based methodologies is that they sustain a fixed coding ratio throughout transmission, leading to notable bandwidth usage and transmission delay under dynamic network conditions. Therefore, this study proposes a new block-based RLNC strategy, Adjustable RLNC (ARLNC), which dynamically adjusts the coding ratio and transmission window at runtime based on the network error rate estimated from receiver feedback. The calculations are performed over a Galois field of order 256. ARLNC's performance was assessed under various error models, including Gilbert-Elliott, exponential, and constant-rate models, and compared with standard RLNC. The results show that dynamically adjusting the coding ratio and transmission window size based on network conditions significantly enhances network throughput and reduces total transmission delay in most scenarios. Compared with conventional fixed-ratio RLNC, the presented approach achieves a 73% decrease in transmission delay and a fourfold increase in throughput. However, in dynamic computational environments, ARLNC generally incurs higher computational costs than standard RLNC, while excelling in high-performance networks.
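The two core ideas above, coding packets as random linear combinations over GF(256) and scaling redundancy with the estimated loss rate, can be sketched in a few lines. The GF(256) multiply uses the common 0x11B reduction polynomial; the `coded_count` rule is a simplified illustration of loss-adaptive redundancy, not ARLNC's actual formula.

```python
import random

def gf_mul(a, b):
    """GF(256) multiplication with the 0x11B reduction polynomial."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return p

def encode(source_packets, n_coded, rng):
    """Each coded packet = random GF(256) linear combination of the
    sources; the coefficient vector travels with the payload."""
    k, size = len(source_packets), len(source_packets[0])
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randrange(256) for _ in range(k)]
        payload = bytearray(size)
        for c, pkt in zip(coeffs, source_packets):
            for i, byte in enumerate(pkt):
                payload[i] ^= gf_mul(c, byte)  # XOR is GF(256) addition
        coded.append((coeffs, bytes(payload)))
    return coded

def coded_count(k, est_loss):
    """Simplified loss-adaptive redundancy: send more coded packets as
    the receiver-reported loss estimate rises."""
    return max(k, round(k / (1.0 - est_loss)) + 1)

rng = random.Random(42)
source = [bytes(rng.randrange(256) for _ in range(8)) for _ in range(4)]
n = coded_count(len(source), est_loss=0.2)  # 4 / 0.8 = 5, plus one spare
coded = encode(source, n, rng)
print(n, len(coded))
```

A receiver recovers the 4 sources from any 4 linearly independent coded packets via Gaussian elimination over GF(256), which is why extra coded packets replace retransmissions.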
Keywords: Random linear network coding; Adjustable redundancy; Galois field; Internet of Things; Data transfer
Data Aggregation Point Placement and Subnetwork Optimization for Smart Grids
18
Authors: Tien-Wen Sung, Wei Li, Chao-Yang Lee, Yuzhen Chen, Qingjun Fang 《Computers, Materials & Continua》 2025, Issue 4, pp. 407-434 (28 pages)
To transmit customer power data collected by smart meters (SMs) to utility companies, the data must first be transmitted to each SM's corresponding data aggregation point (DAP). The number of DAPs installed and their installation locations greatly impact the whole network. Traditional DAP placement algorithms require the number of DAPs to be set in advance, but determining the best number is difficult, which reduces overall network performance. Moreover, an excessive gap between the loads of different DAPs also degrades network quality. To address these problems, this paper proposes a DAP placement algorithm, APSSA, based on an improved affinity propagation (AP) algorithm and the sparrow search algorithm (SSA). APSSA selects an appropriate number of DAPs and their installation locations according to the number of SMs and their distribution in different environments, and adds an allocation mechanism to the SSA to optimize the subnetworks. APSSA was evaluated in three different areas and compared with other DAP placement algorithms. The experimental results validate that the proposed method reduces network cost, shortens the average transmission distance, and narrows the load gap.
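The two placement objectives the abstract names, average SM-to-DAP transmission distance and load gap, can be made concrete with a small evaluation function. The toy coordinates and candidate placements below are invented; the AP + SSA search itself is omitted, as this only shows how candidate placements are scored.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def evaluate_placement(sms, daps):
    """Assign each smart meter to its nearest DAP, then report the two
    objectives from the abstract: average distance and load gap."""
    loads = [0] * len(daps)
    total = 0.0
    for sm in sms:
        j = min(range(len(daps)), key=lambda i: dist(sm, daps[i]))
        loads[j] += 1
        total += dist(sm, daps[j])
    avg_distance = total / len(sms)
    load_gap = max(loads) - min(loads)
    return avg_distance, load_gap

# Toy scenario: two clusters of smart meters, two candidate placements.
sms = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
good = [(0.33, 0.33), (10.33, 10.33)]   # one DAP per cluster
bad = [(0, 0), (0.5, 0.5)]              # both DAPs crowd one cluster

d_good, gap_good = evaluate_placement(sms, good)
d_bad, gap_bad = evaluate_placement(sms, bad)
print(d_good < d_bad, gap_good)
```

A search algorithm like APSSA explores candidate DAP sets and keeps those that jointly minimise cost, average distance, and load gap; this function is what such a search would call on each candidate.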
Keywords: Smart grid; Data aggregation point placement; Network cost; Average transmission distance; Load gap
DPZTN: Data-Plane-Based Access Control Zero-Trust Network
19
Authors: Jingfu Yan, Huachun Zhou, Weilin Wang 《Computer Systems Science & Engineering》 2025, Issue 1, pp. 499-531 (33 pages)
The 6G network architecture introduces the paradigm of Trust + Security, representing a shift in network protection strategies from external defense mechanisms to endogenous security enforcement. While zero-trust networks (ZTNs) have demonstrated significant advancements in constructing trust-centric frameworks, most existing ZTN implementations lack comprehensive integration of security deployment and traffic monitoring capabilities. Furthermore, current ZTN designs generally do not support dynamic assessment of user reputation. To address these limitations, this study proposes a Data-Plane-based Zero-Trust Network (DPZTN). The DPZTN framework extends traditional ZTN models by incorporating security mechanisms directly into the data plane. Additionally, blockchain infrastructure is used to enable decentralized identity authentication and distributed access control. A pivotal element within the proposed framework is the Zero-Trust Network Element (ZTNE), which executes access control policies and performs real-time inspection of user traffic. To enable dynamic and fine-grained evaluation of user trustworthiness, this study introduces a Bayesian-based Behavior Evaluation Algorithm (BBEA). BBEA provides a framework for continuous user behavior analysis, supporting adaptive privilege management and behavior-informed access control. Experimental results demonstrate that ZTNE combined with BBEA can effectively respond to both individual and mixed attack types by promptly adjusting user behavior scores and dynamically modifying access privileges based on initial privilege levels. Under conditions supporting up to 10,000 concurrent users, the control system maintains approximately 65% CPU usage and less than 60% memory usage, with both average user authentication latency and access control latency close to 1 s.
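The continuous, behavior-driven scoring idea behind BBEA can be sketched with the textbook Beta-Bernoulli update: each observed action is benign or suspicious, the trust score is the posterior mean, and privileges are adjusted when the score crosses a threshold. The class, counts, and cutoff below are illustrative assumptions, not the paper's actual algorithm.

```python
class BehaviorScore:
    """Beta-Bernoulli sketch of a BBEA-style reputation score."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # pseudo-count of benign actions (prior)
        self.beta = beta    # pseudo-count of suspicious actions (prior)

    def observe(self, benign):
        """Bayesian update: increment the matching pseudo-count."""
        if benign:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self):
        """Posterior mean of the Beta(alpha, beta) distribution."""
        return self.alpha / (self.alpha + self.beta)

    def privilege(self, cutoff=0.5):
        """Adaptive privilege: demote once trust falls below the cutoff."""
        return "full" if self.trust >= cutoff else "restricted"

user = BehaviorScore()
for action in [True, True, True, False, False, False, False]:
    user.observe(action)
print(round(user.trust, 2), user.privilege())
```

Because the score updates after every observation, a burst of suspicious actions pulls trust down promptly, which is the mechanism by which access privileges are "dynamically modified" in the abstract.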
Keywords: Zero-trust network; Data plane; Bayesian-based behavior evaluation; Blockchain-based access control; Security functions
Optimization of convolutional neural networks for predicting water pollutants using spectral data in the middle and lower reaches of the Yangtze River Basin, China
20
Authors: ZHANG Guohao, LI Song, WANG Cailing, WANG Hongwei, YU Tao, DAI Xiaoxu 《Journal of Mountain Science》 2025, Issue 8, pp. 2851-2869 (19 pages)
Developing an accurate and efficient comprehensive water quality prediction model, together with an assessment method for it, is crucial for the prevention and control of water pollution. Deep learning (DL), as one of the most promising technologies today, plays a crucial role in the effective assessment of water body health, which is essential for water resource management. This study builds models using both the original dataset and a dataset augmented with Generative Adversarial Networks (GANs), and integrates optimization algorithms (OAs) with Convolutional Neural Networks (CNNs) to propose a comprehensive water quality model evaluation method aimed at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Six new models were then developed on these datasets by combining particle swarm optimization (PSO), the genetic algorithm (GA), and simulated annealing (SA) with CNNs to simulate and forecast the concentrations of three water pollutants: Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Finally, seven model evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal model for each pollutant. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN concentration prediction. Compared with existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment, offering new insights and methods for water pollution prevention and control.
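The PSO half of the PSO + CNN coupling described above can be sketched independently of any network: particles fly through the hyperparameter space, attracted to their own best position and the swarm's global best. Here a known quadratic with minimum at (2, -1) stands in for a CNN's validation loss; all constants (inertia 0.7, attraction 1.5, swarm size) are conventional defaults, not the paper's settings.

```python
import random

def pso(loss, dim, n_particles=20, iters=60, seed=1):
    """Minimal particle swarm optimisation (global-best topology)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Stand-in for a CNN's validation loss over two hyperparameters.
loss = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
best, val = pso(loss, dim=2)
print(round(best[0]), round(best[1]))
```

In a GPSCNN-style setup, `loss` would instead train a small CNN with the candidate hyperparameters and return its validation error, which makes each evaluation expensive but the surrounding loop identical; GA and SA variants swap in different update rules for the same objective.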
Keywords: Water pollutants; Convolutional neural networks; Data augmentation; Optimization algorithms; Model evaluation methods; Deep learning