Journal Articles
454 articles found
1. HCL Net: Deep Learning for Accurate Classification of Honeycombing Lung and Ground Glass Opacity in CT Images
Authors: Hairul Aysa Abdul Halim Sithiq, Liyana Shuib, Muneer Ahmad, Chermaine Deepa Antony. Computers, Materials & Continua, 2026, No. 1, pp. 999-1023.
Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance in other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
Keywords: deep learning; honeycombing lung; ground glass opacity; ResNet50V2; multiclass classification
2. A survey of edge computing-based designs for IoT security (Cited by: 15)
Authors: Kewei Sha, T. Andrew Yang, Wei Wei, Sadegh Davari. Digital Communications and Networks (SCIE), 2020, No. 2, pp. 195-202.
Pervasive IoT applications enable us to perceive, analyze, control, and optimize the traditional physical systems. Recently, security breaches in many IoT applications have indicated that IoT applications may put the physical systems at risk. Severe resource constraints and insufficient security design are two major causes of many security problems in IoT applications. As an extension of the cloud, the emerging edge computing with rich resources provides us a new venue to design and deploy novel security solutions for IoT applications. Although there are some research efforts in this area, edge-based security designs for IoT applications are still in their infancy. This paper aims to present a comprehensive survey of existing IoT security solutions at the edge layer as well as to inspire more edge-based IoT security designs. We first present an edge-centric IoT architecture. Then, we extensively review the edge-based IoT security research efforts in the context of security architecture designs, firewalls, intrusion detection systems, authentication and authorization protocols, and privacy-preserving mechanisms. Finally, we propose our insight into future research directions and open research issues.
Keywords: edge computing; Internet of Things (IoT); security; architecture; secure protocols; firewall; intrusion detection; authentication; authorization; privacy
3. A Survey of Mobile Cloud Computing (Cited by: 7)
Authors: Xiaopeng Fan, Jiannong Cao, Haixia Mao. ZTE Communications, 2011, No. 1, pp. 4-8.
Mobile Cloud Computing (MCC) is emerging as one of the most important branches of cloud computing. In this paper, MCC is defined as cloud computing extended by mobility, and a new ad-hoc infrastructure based on mobile devices. It provides mobile users with data storage and processing services on a cloud computing platform. Because mobile cloud computing is still in its infancy, we aim to clarify confusion that has arisen from different views. Existing works are reviewed, and an overview of recent advances in mobile cloud computing is provided. We investigate representative infrastructures of mobile cloud computing and analyze key components. Moreover, emerging MCC models and services are discussed, and challenging issues are identified that will need to be addressed in future work.
Keywords: mobile cloud computing; cloud computing
4. Cloud Computing and Big Data: A Review of Current Service Models and Hardware Perspectives (Cited by: 1)
Authors: Richard Branch, Heather Tjeerdsma, Cody Wilson, Richard Hurley, Sabine McConnell. Journal of Software Engineering and Applications, 2014, No. 8, pp. 686-693.
Big Data applications are pervading more and more aspects of our life, encompassing commercial and scientific uses at increasing rates as we move towards exascale analytics. Examples of Big Data applications include storing and accessing user data in commercial clouds, mining of social data, and analysis of large-scale simulations and experiments such as the Large Hadron Collider. An increasing number of such data-intensive applications and services are relying on clouds in order to process and manage the enormous amounts of data required for continuous operation. It can be difficult to decide which of the many options for cloud processing is suitable for a given application; the aim of this paper is therefore to provide an interested user with an overview of the most important concepts of cloud computing as it relates to processing of Big Data.
Keywords: Big Data; cloud computing; cloud storage; Software as a Service; NoSQL; architectures
5. Is it Visual? The Importance of a Problem Solving Module within a Computing Course
Authors: Kariyawasam K. A., Turner S. J., Hill G. J. 计算机教育 (Computer Education), 2012, No. 10, pp. 5-7.
This paper looks at students' views of the usefulness of a problem solving and programming module in the first year of a 3-year undergraduate program. The School of Science and Technology, University of Northampton, UK has been investigating, over the last seven years, the teaching of problem solving, including looking at whether a more visual approach has any benefits (the visual programming includes both 2D and graphical user interfaces). Whilst the authors have discussed the subject of problem solving and programming in the past [1], this paper considers the students' perspective from research collected/collated by a student researcher under a new initiative within the University. All students interviewed either had completed the module within the two years of the survey or were completing the problem-solving module in their first year.
Keywords: problem solving; visual; student experience; programming
6. A Review of Human Vulnerabilities in Cyber Security: Challenges and Solutions for Microfinance Institutions
Authors: Evaline Waweru, Simon Maina Karume, Alex Kibet. Journal of Information Security, 2025, No. 1, pp. 114-130.
This review examines human vulnerabilities in cybersecurity within Microfinance Institutions (MFIs), analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak internal protocols, the study identifies key vulnerabilities exacerbating cyber threats to MFIs. A literature review using databases like IEEE Xplore and Google Scholar focused on studies from 2019 to 2023 addressing human factors in cybersecurity specific to MFIs. Analysis of 57 studies reveals that phishing and insider threats are predominant, with a 20% annual increase in phishing attempts. Employee susceptibility to these attacks is heightened by insufficient training, with entry-level employees showing the highest vulnerability rates. Further, only 35% of MFIs offer regular cybersecurity training, significantly impacting incident reduction. This paper recommends enhanced training frequency, robust internal controls, and a cybersecurity-aware culture to mitigate human-induced cyber risks in MFIs.
Keywords: human vulnerabilities; cybersecurity; microfinance institutions; cyber threats; cybersecurity awareness; risk mitigation
7. A Feasibility Study of Renewable Energy Generation from Palm Oil Waste in Malaysia
Authors: Mujahid Tabassum, Md. Bazlul Mobin Siddique, Hadi Nabipour Afrouzi, Saad Bin Abdul Kashem. Energy Engineering, 2025, No. 9, pp. 3433-3457.
Malaysia, as one of the highest producers of palm oil globally and one of the largest exporters, has a huge potential to use palm oil waste to generate electricity, since an abundance of waste is produced during the palm oil extraction process. In this paper, we first examine and compare the use of palm oil waste as biomass for electricity generation in different countries with reference to Malaysia. Some rural areas with limited accessibility, like those in Sabah and Sarawak, require a cheap and reliable source of electricity, and palm oil waste possesses the potential to be that source. Therefore, this research examines the cost-effectiveness comparison between electricity generated from palm oil waste and standalone diesel electric generation in Marudi, Sarawak, Malaysia. This research aims to investigate the potential electricity generation using palm oil waste and the feasibility of implementing the technology in rural areas. To implement and analyze the feasibility, a case study has been carried out in a rural area in Sarawak, Malaysia. The findings show the electricity cost calculation for small towns like Long Lama, Long Miri, and Long Atip, with ten nearby schools, and suggest that using empty fruit bunches (EFB) from palm oil waste is cheaper and reduces greenhouse gas emissions. The study also points out the need to conduct further research on power systems, such as energy storage and microgrids, to better understand the future of power systems. By collecting data through questionnaires and surveys, an analysis has been carried out to determine the approximate cost and quantity of palm oil waste needed to generate cheaper renewable energy. We conclude that electricity generation from palm oil waste is cost-effective and beneficial. The infrastructure can be a microgrid connected to the main grid.
Keywords: electricity generation; energy sustainability; palm oil waste management; rural areas; energy source
8. ANNDRA-IoT: A Deep Learning Approach for Optimal Resource Allocation in Internet of Things Environments
Authors: Abdullah M. Alqahtani, Kamran Ahmad Awan, Abdulaziz Almaleh, Osama Aletri. Computer Modeling in Engineering & Sciences, 2025, No. 3, pp. 3155-3179.
Efficient resource management within Internet of Things (IoT) environments remains a pressing challenge due to the increasing number of devices and their diverse functionalities. This study introduces a neural network-based model that uses Long Short-Term Memory (LSTM) to optimize resource allocation under dynamically changing conditions. Designed to monitor the workload on individual IoT nodes, the model incorporates long-term data dependencies, enabling adaptive resource distribution in real time. The training process utilizes Min-Max normalization and grid search for hyperparameter tuning, ensuring high resource utilization and consistent performance. The simulation results demonstrate the effectiveness of the proposed method, outperforming state-of-the-art approaches, including Dynamic and Efficient Enhanced Load-Balancing (DEELB), Optimized Scheduling and Collaborative Active Resource-management (OSCAR), Convolutional Neural Network with Monarch Butterfly Optimization (CNN-MBO), and Autonomic Workload Prediction and Resource Allocation for Fog (AWPR-FOG). For example, in scenarios with low system utilization, the model achieved a resource utilization efficiency of 95% while maintaining a latency of just 15 ms, significantly exceeding the performance of comparative methods.
Keywords: Internet of Things; resource optimization; deep learning; optimal resource allocation; neural network; efficiency
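The training pipeline above uses Min-Max normalization before fitting the LSTM; a minimal sketch of that scaling step (the per-node load values here are invented for illustration):

```python
import numpy as np

def min_max_normalize(x, feature_range=(0.0, 1.0)):
    """Scale each column of x into feature_range, a common preprocessing
    step before training an LSTM on workload time series."""
    x = np.asarray(x, dtype=float)
    lo, hi = feature_range
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    span = np.where(x_max > x_min, x_max - x_min, 1.0)  # avoid divide-by-zero
    return lo + (x - x_min) / span * (hi - lo)

loads = np.array([[20.0], [50.0], [80.0]])  # hypothetical CPU loads per node
print(min_max_normalize(loads).ravel().tolist())  # → [0.0, 0.5, 1.0]
```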
9. Privacy Preserving Federated Anomaly Detection in IoT Edge Computing Using Bayesian Game Reinforcement Learning
Authors: Fatima Asiri, Wajdan Al Malwi, Fahad Masood, Mohammed S. Alshehri, Tamara Zhukabayeva, Syed Aziz Shah, Jawad Ahmad. Computers, Materials & Continua, 2025, No. 8, pp. 3943-3960.
Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes. The rapid proliferation of IoT devices poses real-time data processing and security challenges. EC has become a transformative paradigm for addressing these challenges, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework combined with Bayesian game theory (BGT) and double deep Q-learning (DDQL). The proposed framework integrates BGT to model attacker and defender interactions for dynamic threat-level adaptation and resource availability. It also models a strategic layout between attackers and defenders that takes uncertainty into account. DDQL is incorporated to optimize decision-making and aids in learning optimal defense policies at the edge, thereby ensuring policy and decision optimization. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data collection has been performed from various sensors in a real-time EC-IoT network to identify irregularities that occurred due to different attacks. The results reveal that the proposed model achieves a high detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL to strengthen anomaly detection in EC-IoT networks.
Keywords: IoT; edge computing; smart homes; anomaly detection; Bayesian game theory; reinforcement learning
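The DDQL component rests on the double Q-learning target, which decouples action selection (online network) from action evaluation (target network); a minimal numpy sketch, with Q-values invented for illustration:

```python
import numpy as np

def double_q_target(q_online, q_target, reward, gamma=0.99):
    """Double DQN target: the online network picks the next action,
    the target network evaluates it, reducing overestimation bias."""
    best_action = int(np.argmax(q_online))         # selection by online net
    return reward + gamma * q_target[best_action]  # evaluation by target net

q_online = np.array([1.0, 3.0, 2.0])  # hypothetical Q-values for next state
q_target = np.array([0.5, 2.0, 4.0])
print(double_q_target(q_online, q_target, reward=1.0))  # 1 + 0.99*2.0 = 2.98
```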
10. A Paradigm of Temporal-Weather-Aware Transition Pattern for POI Recommendation
Authors: Junyang Chen, Jingcai Guo, Huan Wang, Zhihui Lai, Qin Zhang, Kaishun Wu, Liang-Jie Zhang. CAAI Transactions on Intelligence Technology, 2025, No. 6, pp. 1675-1687.
Point of interest (POI) recommendation analyses user preferences through historical check-in data. However, existing POI recommendation methods often overlook the influence of weather information and face the challenge of sparse historical data for individual users. To address these issues, this paper proposes a new paradigm, namely the temporal-weather-aware transition pattern for POI recommendation (TWTransNet). This paradigm is designed to capture user transition patterns under different times and weather conditions. Additionally, we introduce the construction of a user-POI interaction graph to alleviate the problem of sparse historical data for individual users. Furthermore, when predicting user interests by aggregating graph information, some POIs may not be suitable for visitation under current weather conditions. To account for this, we propose an attention mechanism to filter POI neighbours when aggregating information from the graph, considering the impact of weather and time. Empirical results on two real-world datasets demonstrate the superior performance of our proposed method, showing a substantial improvement of 6.91%-23.31% in terms of prediction accuracy.
Keywords: data mining; decision making; multimedia
11. Resilient task offloading in integrated satellite-terrestrial networks with mobility-induced variability
Authors: Kongyang Chen, Guomin Liang, Hongfa Zhang, Waixi Liu, Jiaxing Shen. Digital Communications and Networks, 2025, No. 6, pp. 1961-1972.
Low Earth Orbit (LEO) satellites have gained significant attention for their low-latency communication and computing capabilities but face challenges due to high mobility and limited resources. Existing studies integrate edge computing with LEO satellite networks to optimize task offloading; however, they often overlook the impact of frequent topology changes, unstable transmission links, and intermittent satellite visibility, leading to task execution failures and increased latency. To address these issues, this paper proposes a dynamic integrated space-ground computing framework that optimizes task offloading under LEO satellite mobility constraints. We design an adaptive task migration strategy through inter-satellite links when target satellites become inaccessible. To enhance data transmission reliability, we introduce a communication stability constraint based on the transmission bit error rate (BER). Additionally, we develop a genetic algorithm (GA)-based task scheduling method that dynamically allocates computing resources while minimizing latency and energy consumption. Our approach jointly considers satellite computing capacity, link stability, and task execution reliability to achieve efficient task offloading. Experimental results demonstrate that the proposed method significantly improves task execution success rates, reduces system overhead, and enhances overall computational efficiency in LEO satellite networks.
Keywords: LEO satellites; task offloading; edge computing; communication reliability
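The abstract names a GA-based task scheduler but gives no implementation detail; a toy genetic algorithm for a makespan-style assignment objective might look like the sketch below. Everything here (6 tasks, 3 satellites, the per-satellite processing times, truncation selection) is an invented illustration, not the paper's method:

```python
import random

random.seed(42)

# Hypothetical setup: assign 6 tasks to 3 satellites; satellite s processes
# any task in PROC[s] time units; cost = makespan (largest satellite load).
PROC = [2.0, 3.0, 5.0]
N_TASKS, N_SATS = 6, 3

def cost(assign):
    loads = [0.0] * N_SATS
    for sat in assign:
        loads[sat] += PROC[sat]
    return max(loads)

def evolve(pop_size=30, generations=40, mut_rate=0.2):
    pop = [[random.randrange(N_SATS) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_TASKS)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:         # point mutation
                child[random.randrange(N_TASKS)] = random.randrange(N_SATS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print(cost(best))
```

Since the top half of each generation survives unchanged, the best schedule found never worsens across generations.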
12. A lightweight physics-conditioned diffusion multi-model for medical image reconstruction
Authors: Raja Vavekanand, Ganesh Kumar, Shakhlokhon Kurbanova. Biomedical Engineering Communications, 2026, No. 2, pp. 50-59.
Background: Medical imaging advancements are constrained by fundamental trade-offs between acquisition speed, radiation dose, and image quality, forcing clinicians to work with noisy, incomplete data. Existing reconstruction methods either compromise on accuracy with iterative algorithms or suffer from limited generalizability with task-specific deep learning approaches. Methods: We present LDM-PIR, a lightweight physics-conditioned diffusion multi-model for medical image reconstruction that addresses key challenges in magnetic resonance imaging (MRI), CT, and low-photon imaging. Unlike traditional iterative methods, which are computationally expensive, or task-specific deep learning approaches lacking generalizability, LDM-PIR integrates three innovations: a physics-conditioned diffusion framework that embeds acquisition operators (Fourier/Radon transforms) and noise models directly into the reconstruction process; a multi-model architecture that unifies denoising, inpainting, and super-resolution via shared weight conditioning; and a lightweight design (2.1M parameters) enabling rapid inference (0.8 s/image on GPU). Through self-supervised fine-tuning with measurement-consistency losses, LDM-PIR adapts to new imaging modalities using fewer annotated samples. Results: LDM-PIR achieves state-of-the-art performance on fastMRI (peak signal-to-noise ratio (PSNR): 34.04 for single-coil / 31.50 for multi-coil) and the Lung Image Database Consortium and Image Database Resource Initiative (28.83 PSNR under Poisson noise). Clinical evaluations demonstrate superior preservation of anatomical structures, with SSIM improvements of 8.8% for single-coil and 4.36% for multi-coil MRI over uDPIR. Conclusion: LDM-PIR offers a flexible, efficient, and scalable solution for medical image reconstruction, addressing the challenges of noise, undersampling, and modality generalization. The model's lightweight design allows for rapid inference, while its self-supervised fine-tuning capability minimizes reliance on large annotated datasets, making it suitable for real-world clinical applications.
Keywords: medical image reconstruction; physics-conditioned diffusion; multi-task learning; self-supervised fine-tuning; multimodal fusion; lightweight neural networks
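The results above are quoted in PSNR; for reference, a sketch of that metric (the toy image values are invented):

```python
import numpy as np

def psnr(reference, reconstruction, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE),
    the reconstruction-fidelity metric quoted in the abstract."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
rec = ref + 0.01  # uniform 1% error, so MSE = 1e-4
print(round(psnr(ref, rec), 1))  # → 40.0
```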
13. MewCDNet: A Wavelet-Based Multi-Scale Interaction Network for Efficient Remote Sensing Building Change Detection
Authors: Jia Liu, Hao Chen, Hang Gu, Yushan Pan, Haoran Chen, Erlin Tian, Min Huang, Zuhe Li. Computers, Materials & Continua, 2026, No. 1, pp. 687-710.
Accurate and efficient detection of building changes in remote sensing imagery is crucial for urban planning, disaster emergency response, and resource management. However, existing methods face challenges such as spectral similarity between buildings and backgrounds, sensor variations, and insufficient computational efficiency. To address these challenges, this paper proposes a novel Multi-scale Efficient Wavelet-based Change Detection Network (MewCDNet), which integrates the advantages of Convolutional Neural Networks and Transformers, balances computational costs, and achieves high-performance building change detection. The network employs EfficientNet-B4 as the backbone for hierarchical feature extraction, integrates multi-level feature maps through a multi-scale fusion strategy, and incorporates two key modules: Cross-temporal Difference Detection (CTDD) and Cross-scale Wavelet Refinement (CSWR). CTDD adopts a dual-branch architecture that combines pixel-wise differencing with semantic-aware Euclidean distance weighting to enhance the distinction between true changes and background noise. CSWR integrates the Haar-based Discrete Wavelet Transform with multi-head cross-attention mechanisms, enabling cross-scale feature fusion while significantly improving edge localization and suppressing spurious changes. Extensive experiments on four benchmark datasets demonstrate MewCDNet's superiority over comparison methods, achieving F1 scores of 91.54% on LEVIR, 93.70% on WHUCD, and 64.96% on S2Looking for building change detection. Furthermore, MewCDNet exhibits optimal performance on the multi-class SYSU dataset (F1: 82.71%), highlighting its exceptional generalization capability.
Keywords: remote sensing; change detection; deep learning; wavelet transform; multi-scale
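CSWR builds on the Haar-based Discrete Wavelet Transform; a minimal one-level 2-D Haar decomposition might look like the sketch below (this uses the averaging convention; orthonormal variants scale by 1/sqrt(2) instead of 1/2):

```python
import numpy as np

def haar_dwt2_level1(x):
    """One level of the 2-D Haar DWT: split an even-sized image into an
    approximation band (LL) and three detail bands (LH, HL, HH)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row-pair average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row-pair difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
ll, lh, hl, hh = haar_dwt2_level1(img)
print(ll.tolist())  # → [[2.5, 4.5], [10.5, 12.5]]
```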
14. Graph Attention Networks for Skin Lesion Classification with CNN-Driven Node Features
Authors: Ghadah Naif Alwakid, Samabia Tehsin, Mamoona Humayun, Asad Farooq, Ibrahim Alrashdi, Amjad Alsirhani. Computers, Materials & Continua, 2026, No. 1, pp. 1964-1984.
Skin diseases affect millions worldwide. Early detection is key to preventing disfigurement, lifelong disability, or death. Dermoscopic images acquired in primary-care settings show high intra-class visual similarity and severe class imbalance, and occasional imaging artifacts can create ambiguity for state-of-the-art convolutional neural networks (CNNs). We frame skin lesion recognition as graph-based reasoning and, to ensure fair evaluation and avoid data leakage, adopt a strict lesion-level partitioning strategy. Each image is first over-segmented using SLIC (Simple Linear Iterative Clustering) to produce perceptually homogeneous superpixels. These superpixels form the nodes of a region-adjacency graph whose edges encode spatial continuity. Node attributes are 1280-dimensional embeddings extracted with a lightweight yet expressive EfficientNet-B0 backbone, providing strong representational power at modest computational cost. The resulting graphs are processed by a five-layer Graph Attention Network (GAT) that learns to weight inter-node relationships dynamically and aggregates multi-hop context before classifying lesions into seven classes with a log-softmax output. Extensive experiments on the DermaMNIST benchmark show the proposed pipeline achieves 88.35% accuracy and 98.04% AUC, outperforming contemporary CNNs, AutoML approaches, and alternative graph neural networks. An ablation study indicates EfficientNet-B0 produces superior node descriptors compared with ResNet-18 and DenseNet, and that roughly five GAT layers strike a good balance between too-shallow and too-deep configurations while avoiding oversmoothing. The method requires no data augmentation or external metadata, making it a drop-in upgrade for clinical computer-aided diagnosis systems.
Keywords: graph neural network; image classification; DermaMNIST dataset; graph representation
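The GAT layers described above weight neighbouring superpixels via learned attention; a minimal single-head sketch of the standard GAT attention rule (the toy one-hot features, identity weights, and zero attention vector are illustrative assumptions, chosen so the weights come out uniform):

```python
import numpy as np

def gat_attention(h, adj, W, a):
    """Single-head GAT layer: e_ij = LeakyReLU(a^T [W h_i || W h_j]),
    softmax over each node's neighbours, then attention-weighted sum."""
    z = h @ W                                       # projected node features
    n = z.shape[0]
    e = np.full((n, n), -np.inf)                    # -inf masks non-neighbours
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                s = a @ np.concatenate([z[i], z[j]])
                e[i, j] = s if s > 0 else 0.2 * s   # LeakyReLU, slope 0.2
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)       # row-wise softmax
    return alpha @ z

h = np.eye(3)                       # toy one-hot node features
adj = np.ones((3, 3), dtype=bool)   # fully connected, incl. self-loops
W = np.eye(3)
a = np.zeros(6)                     # zero attention vector → uniform weights
out = gat_attention(h, adj, W, a)
print(out[0])  # uniform average of the three nodes: [1/3, 1/3, 1/3]
```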
15. Dietary and metabolomic determinants of relapse in ulcerative colitis patients: A pilot prospective cohort study (Cited by: 12)
Authors: Ammar Hassanzadeh Keshteli, Floris F van den Brand, Karen L Madsen, Rupasri Mandal, Rosica Valcheva, Karen I Kroeker, Beomsoo Han, Rhonda C Bell, Janis Cole, Thomas Hoevers, David S Wishart, Richard N Fedorak, Levinus A Dieleman. World Journal of Gastroenterology (SCIE, CAS), 2017, No. 21, pp. 3890-3899.
AIM: To identify demographic, clinical, metabolomic, and lifestyle-related predictors of relapse in adult ulcerative colitis (UC) patients. METHODS: In this prospective pilot study, UC patients in clinical remission were recruited and followed up at 12 mo to assess whether a clinical relapse had occurred. At baseline, information on demographic and clinical parameters was collected. Serum and urine samples were collected for metabolomic assays using combined direct infusion/liquid chromatography tandem mass spectrometry and nuclear magnetic resonance spectroscopy. Stool samples were also collected to measure fecal calprotectin (FCP). Dietary assessment was performed using a validated self-administered food frequency questionnaire. RESULTS: Twenty patients were included (mean age: 42.7 ± 14.8 years, females: 55%). Seven patients (35%) experienced a clinical relapse during the follow-up period. While 6 patients (66.7%) with normal body weight developed a clinical relapse, 1 UC patient (9.1%) who was overweight/obese relapsed during the follow-up (P = 0.02). At baseline, poultry intake was significantly higher in patients who were still in remission during follow-up (0.9 oz vs 0.2 oz, P = 0.002). Five patients (71.4%) with FCP > 150 μg/g and 2 patients (15.4%) with normal FCP (≤ 150 μg/g) at baseline relapsed during the follow-up (P = 0.02). Interestingly, baseline urinary and serum metabolomic profiling of UC patients with or without clinical relapse within 12 mo showed a significant difference. The most important metabolites responsible for this discrimination were trans-aconitate, cystine, and acetamide in urine, and 3-hydroxybutyrate, acetoacetate, and acetone in serum. CONCLUSION: A combination of baseline dietary intake, fecal calprotectin, and metabolomic factors is associated with the risk of UC clinical relapse within 12 mo.
Keywords: ulcerative colitis; relapse; metabolomics; diet; fecal calprotectin
16. Neurocognitive Graphs of First-Episode Schizophrenia and Major Depression Based on Cognitive Features (Cited by: 8)
Authors: Sugai Liang, Roberto Vega, Xiangzhen Kong, Wei Deng, Qiang Wang, Xiaohong Ma, Mingli Li, Xun Hu, Andrew J. Greenshaw, Russell Greiner, Tao Li. Neuroscience Bulletin (SCIE, CAS, CSCD), 2018, No. 2, pp. 312-320.
Neurocognitive deficits are frequently observed in patients with schizophrenia and major depressive disorder (MDD). The relations between cognitive features may be represented by neurocognitive graphs based on cognitive features, modeled as Gaussian Markov random fields. However, it is unclear whether it is possible to differentiate between phenotypic patterns associated with the differential diagnosis of schizophrenia and depression using this neurocognitive graph approach. In this study, we enrolled 215 first-episode patients with schizophrenia (FES), 125 with MDD, and 237 demographically matched healthy controls (HCs). The cognitive performance of all participants was evaluated using a battery of neurocognitive tests. The graphical LASSO model was trained with a one-vs-one scenario to learn the conditionally independent structure of the neurocognitive features of each group. Participants in the holdout dataset were classified into different groups with the highest likelihood. A partial correlation matrix was transformed from the graphical model to further explore the neurocognitive graph for each group. The classification approach identified the diagnostic class for individuals with an average accuracy of 73.41% for FES vs HC, 67.07% for MDD vs HC, and 59.48% for FES vs MDD. Both of the neurocognitive graphs for FES and MDD had more connections and higher node centrality than those for HC. The neurocognitive graph for FES was less sparse and had more connections than that for MDD. Thus, neurocognitive graphs based on cognitive features are promising for describing endophenotypes that may discriminate schizophrenia from depression.
Keywords: schizophrenia; major depressive disorder; neurocognition; neurocognitive graph; graphical LASSO
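The abstract mentions transforming the graphical model into a partial correlation matrix; that transformation from a precision (inverse covariance) matrix, such as one estimated by the graphical LASSO, is standard: rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj). A sketch with a hypothetical 3x3 precision matrix:

```python
import numpy as np

def partial_correlations(precision):
    """Convert a precision matrix Theta into partial correlations:
    rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)."""
    d = np.sqrt(np.diag(precision))
    rho = -precision / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)  # self-correlation is 1 by convention
    return rho

theta = np.array([[2.0, -0.8, 0.0],
                  [-0.8, 2.0, -0.8],
                  [0.0, -0.8, 2.0]])  # hypothetical sparse precision matrix
rho = partial_correlations(theta)
print(round(rho[0, 1], 2))  # -(-0.8)/sqrt(2*2) = 0.4
```

A zero off-diagonal entry in the precision matrix (here theta[0, 2]) maps to a zero partial correlation, i.e. a missing edge in the neurocognitive graph.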
BAS-ADAM:An ADAM Based Approach to Improve the Performance of Beetle Antennae Search Optimizer 被引量:34
17
作者 Ameer Hamza Khan Xinwei Cao +2 位作者 Shuai Li Vasilios N.Katsikis Liefa Liao 《IEEE/CAA Journal of Automatica Sinica》 EI CSCD 2020年第2期461-471,共11页
In this paper, we propose enhancements to the Beetle Antennae Search (BAS) algorithm, called BAS-ADAM, to smoothen the convergence behavior and avoid trapping in local minima for highly non-convex objective functions. We achieve this by adaptively adjusting the step-size in each iteration using the adaptive moment estimation (ADAM) update rule. The proposed algorithm also increases the convergence rate in narrow valleys. A key feature of the ADAM update rule is its ability to adjust the step-size for each dimension separately instead of using a single shared step-size. Since ADAM is traditionally used with gradient-based optimization algorithms, we first propose a gradient estimation model that does not require differentiating the objective function. As a result, the algorithm demonstrates excellent performance and a fast convergence rate in searching for the optima of non-convex functions. The efficiency of the proposed algorithm was tested on three different benchmark problems, including the training of a high-dimensional neural network. Its performance is compared with the particle swarm optimizer (PSO) and the original BAS algorithm.
Keywords: adaptive moment estimation (ADAM); beetle antennae search (BAS); gradient estimation; metaheuristic optimization; nature-inspired algorithms; neural network
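The combination the abstract describes, a derivative-free gradient estimate fed into ADAM's per-dimension step-size rule, can be sketched as follows. This is a hedged illustration, not the authors' implementation: a central finite-difference estimate stands in for the paper's antennae-based gradient model, and all hyperparameter values are assumptions.

```python
import numpy as np

def adam_fd(f, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, h=1e-4, iters=500):
    """Gradient-free ADAM sketch: estimate the gradient numerically
    (the paper uses an antennae-based estimate instead), then apply the
    standard ADAM update, which scales the step per dimension."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)   # first-moment (mean) estimate
    v = np.zeros_like(x)   # second-moment (uncentred variance) estimate
    for t in range(1, iters + 1):
        g = np.zeros_like(x)
        for i in range(x.size):          # central finite differences
            e = np.zeros_like(x); e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)     # bias correction
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-dimension step
    return x

# Narrow-valley test function (Rosenbrock); minimum at (1, 1).
rosen = lambda p: (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2
print(adam_fd(rosen, [-1.0, 1.0], iters=2000))
```

The per-dimension scaling by the second-moment estimate is what helps in narrow valleys: steep directions get damped while shallow directions keep making progress.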
Joint Algorithm of Message Fragmentation and No-Wait Scheduling for Time-Sensitive Networks (Cited by 7)
18
Authors: Xi Jin, Changqing Xia, Nan Guan, Peng Zeng 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD, 2021, Issue 2, pp. 478-490 (13 pages)
Time-sensitive networks (TSNs) support not only traditional best-effort communications but also deterministic communications, which send each packet at a deterministic time so that the data transmissions of networked control systems can be precisely scheduled to guarantee hard real-time constraints. No-wait scheduling is well suited to such TSNs: it generates the schedules of deterministic communications with minimal network resources, so that all remaining resources can be used to improve the throughput of best-effort communications. However, inappropriate message fragmentation reduces the real-time performance of no-wait scheduling algorithms. Therefore, in this paper, joint algorithms of message fragmentation and no-wait scheduling are proposed. First, a specification for the joint problem based on optimization modulo theories is proposed so that off-the-shelf solvers can be used to find optimal solutions. Second, to improve the scalability of the algorithm, the worst-case delay of messages is analyzed, and, based on this analysis, a heuristic algorithm is proposed to construct low-delay schedules. Finally, extensive test cases are conducted to evaluate the proposed algorithms. The evaluation results indicate that, compared to existing algorithms, the proposed joint algorithm improves schedulability by up to 50%.
Keywords: message fragmentation; networked control system; real-time scheduling; time-sensitive network
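The interplay between fragmentation and no-wait scheduling can be illustrated with a toy greedy scheduler: a fragment, once injected, crosses its path hop by hop with no queueing, so the scheduler must pick an injection time that keeps every traversed link conflict-free. This is a simplified sketch, not the paper's optimization-modulo-theories formulation or its heuristic; the fixed per-byte transmission time, store-and-forward link model, and integer injection times are all assumptions.

```python
from itertools import count

def fragment(size, max_frag):
    """Split a message into fragments no larger than max_frag bytes."""
    return [max_frag] * (size // max_frag) + ([size % max_frag] if size % max_frag else [])

def no_wait_schedule(messages, tx_per_byte=1):
    """Greedy no-wait scheduling sketch. messages: (name, size, path, max_frag),
    where path is the ordered list of links the message traverses."""
    busy = {}       # link -> list of (start, end) reservations
    schedule = []   # (message name, fragment size, injection time)
    for name, size, path, max_frag in messages:
        for frag in fragment(size, max_frag):
            dur = frag * tx_per_byte
            for t in count():  # earliest injection time with no link conflict
                slots = [(link, t + i * dur, t + (i + 1) * dur)
                         for i, link in enumerate(path)]
                if all(s >= e0 or e <= s0
                       for link, s, e in slots
                       for s0, e0 in busy.get(link, [])):
                    for link, s, e in slots:
                        busy.setdefault(link, []).append((s, e))
                    schedule.append((name, frag, t))
                    break
    return schedule

print(no_wait_schedule([("m1", 200, ["AB", "BC"], 100), ("m2", 100, ["AB"], 100)]))
```

Even this toy version shows why fragmentation matters: smaller fragments free shared links sooner for other flows, at the cost of more injection decisions, which is the trade-off the joint algorithm optimizes.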
BIFURCATIONS OF TRAVELLING WAVE SOLUTIONS FOR THE GENERALIZED DODD-BULLOUGH-MIKHAILOV EQUATION (Cited by 7)
19
Authors: Tang Shengqiang, Huang Wentao 《Applied Mathematics (A Journal of Chinese Universities)》 SCIE CSCD, 2007, Issue 1, pp. 21-28 (8 pages)
In this paper, the generalized Dodd-Bullough-Mikhailov equation is studied. The existence of periodic wave and unbounded wave solutions is proved using methods from the bifurcation theory of dynamical systems. Under different parametric conditions, various sufficient conditions guaranteeing the existence of the above solutions are given. Some exact explicit parametric representations of these travelling wave solutions are obtained.
Keywords: unbounded travelling wave solution; periodic travelling wave solution; the generalized Dodd-Bullough-Mikhailov equation
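For context, a commonly cited form of the generalized Dodd-Bullough-Mikhailov equation carries two parameters, and the travelling-wave ansatz reduces it to a planar system amenable to the phase-plane (bifurcation) analysis the abstract describes. The specific form and sign conventions in the paper may differ; the equation below is a standard reference form, not a quotation of the paper.

```latex
% A commonly cited form of the generalized Dodd-Bullough-Mikhailov equation:
u_{xt} + p\,e^{u} + q\,e^{-2u} = 0 .
% Travelling-wave ansatz u(x,t) = \phi(\xi), \ \xi = x - ct, gives
% u_{xt} = -c\,\phi''(\xi), hence the ODE
-c\,\phi'' + p\,e^{\phi} + q\,e^{-2\phi} = 0 ,
% equivalently the planar system studied by phase-plane methods:
\phi' = y, \qquad y' = \frac{1}{c}\left(p\,e^{\phi} + q\,e^{-2\phi}\right).
```

Periodic orbits of this planar system correspond to periodic travelling waves, while orbits escaping to infinity correspond to the unbounded solutions mentioned in the abstract.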
Improvement of the Bayesian neural network to study the photoneutron yield cross sections (Cited by 6)
20
Authors: Yong-Yi Li, Fan Zhang, Jun Su 《Nuclear Science and Techniques》 SCIE EI CAS CSCD, 2022, Issue 11, pp. 1-9 (9 pages)
This work is an attempt to improve the Bayesian neural network (BNN) for studying photoneutron yield cross sections as a function of the charge number Z, mass number A, and incident energy ε. The BNN was improved in three respects: numerical parameters, the input layer, and the network structure. First, the numerical parameters, including the number of hidden layers, the number of hidden nodes, and the activation function, were selected by minimizing the deviations between the predictions and the data. It was found that a BNN with three hidden layers, 10 hidden nodes, and the sigmoid activation function gave the smallest deviations. Second, based on known physics, such as the isospin dependence and the shape effect, the optimal ground-state properties were selected as input neurons. Third, a Lorentzian function was applied to map the hidden nodes to the output cross sections, and the empirical formula for the Lorentzian parameters was applied to link some of the input nodes to the output cross sections. The last two aspects were found to improve the predictions and avoid overfitting, especially for axially deformed nuclei.
Keywords: Bayesian neural network; photoneutron cross sections; giant dipole resonance
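The Lorentzian output mapping mentioned above follows the line shape commonly used for giant-dipole-resonance photoabsorption cross sections. A minimal sketch, with parameter names and the sample values being illustrative assumptions rather than the paper's fitted values:

```python
import numpy as np

def lorentzian(E, sigma_r, E_r, Gamma):
    """Lorentzian line shape commonly used for GDR cross sections:
    peaks at the resonance energy E_r with width Gamma, where it
    attains the peak cross section sigma_r."""
    return sigma_r * (E * Gamma) ** 2 / ((E ** 2 - E_r ** 2) ** 2 + (E * Gamma) ** 2)

# Illustrative values (not from the paper): peak 300 mb at 15 MeV, width 5 MeV.
energies = np.array([10.0, 15.0, 20.0])
print(lorentzian(energies, 300.0, 15.0, 5.0))
```

In the network described by the abstract, quantities like sigma_r, E_r, and Gamma would be produced from the hidden nodes (and partly from empirical formulas on the inputs), so the output inherits the physically motivated resonance shape instead of being a free-form regression.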