Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces the Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance on other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
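The abstract does not detail HCL Net's architecture, but the residual-block idea it builds on can be sketched in a few lines of numpy. Everything below (dimensions, the two-layer block, random weights) is illustrative, not taken from the paper:

```python
# Hypothetical illustration of the residual-block idea behind HCL Net:
# a block's output is F(x) + x, so extra blocks can be stacked without
# degrading the identity mapping. Weights here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = ReLU(x @ w1) @ w2 + x  (identity shortcut)."""
    return relu(x @ w1) @ w2 + x

d = 8
x = rng.normal(size=(1, d))
w1 = rng.normal(scale=0.1, size=(d, d))
w2 = rng.normal(scale=0.1, size=(d, d))

y = residual_block(x, w1, w2)
# With all-zero weights the block reduces exactly to the identity.
z = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))
assert np.allclose(z, x)
```

The assertion shows why adding residual blocks is comparatively safe: a block whose weights contribute nothing reduces to the identity, so extra depth cannot destroy what earlier layers learned.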
Pervasive IoT applications enable us to perceive, analyze, control, and optimize traditional physical systems. Recently, security breaches in many IoT applications have indicated that IoT applications may put the physical systems at risk. Severe resource constraints and insufficient security design are two major causes of many security problems in IoT applications. As an extension of the cloud, the emerging edge computing paradigm, with its rich resources, provides a new venue to design and deploy novel security solutions for IoT applications. Although there are some research efforts in this area, edge-based security designs for IoT applications are still in their infancy. This paper aims to present a comprehensive survey of existing IoT security solutions at the edge layer and to inspire more edge-based IoT security designs. We first present an edge-centric IoT architecture. Then, we extensively review edge-based IoT security research efforts in the context of security architecture designs, firewalls, intrusion detection systems, authentication and authorization protocols, and privacy-preserving mechanisms. Finally, we offer our insights into future research directions and open research issues.
Mobile Cloud Computing (MCC) is emerging as one of the most important branches of cloud computing. In this paper, MCC is defined as cloud computing extended by mobility, and a new ad-hoc infrastructure based on mobile devices. It provides mobile users with data storage and processing services on a cloud computing platform. Because mobile cloud computing is still in its infancy, we aim to clarify the confusion that has arisen from differing views. Existing works are reviewed, and an overview of recent advances in mobile cloud computing is provided. We investigate representative infrastructures of mobile cloud computing and analyze key components. Moreover, emerging MCC models and services are discussed, and challenging issues are identified that will need to be addressed in future work.
Big Data applications are pervading more and more aspects of our life, encompassing commercial and scientific uses at increasing rates as we move towards exascale analytics. Examples of Big Data applications include storing and accessing user data in commercial clouds, mining of social data, and analysis of large-scale simulations and experiments such as the Large Hadron Collider. An increasing number of such data-intensive applications and services are relying on clouds in order to process and manage the enormous amounts of data required for continuous operation. It can be difficult to decide which of the many options for cloud processing is suitable for a given application; the aim of this paper is therefore to provide an interested user with an overview of the most important concepts of cloud computing as it relates to the processing of Big Data.
This paper looks at students' views of the usefulness of a problem-solving and programming module in the first year of a three-year undergraduate program. The School of Science and Technology, University of Northampton, UK has been investigating the teaching of problem solving over the last seven years, including whether a more visual approach has any benefits (the visual programming includes both 2-D and graphical user interfaces). Whilst the authors have discussed problem solving and programming in the past [1], this paper considers the students' perspective, drawing on research collected and collated by a student researcher under a new initiative within the University. All students interviewed had either completed the module within the two years of the survey or were completing the problem-solving module in their first year.
This review examines human vulnerabilities in cybersecurity within Microfinance Institutions (MFIs), analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak internal protocols, the study identifies key vulnerabilities exacerbating cyber threats to MFIs. A literature review using databases such as IEEE Xplore and Google Scholar focused on studies from 2019 to 2023 addressing human factors in cybersecurity specific to MFIs. Analysis of 57 studies reveals that phishing and insider threats are predominant, with a 20% annual increase in phishing attempts. Employee susceptibility to these attacks is heightened by insufficient training, with entry-level employees showing the highest vulnerability rates. Further, only 35% of MFIs offer regular cybersecurity training, significantly impacting incident reduction. This paper recommends enhanced training frequency, robust internal controls, and a cybersecurity-aware culture to mitigate human-induced cyber risks in MFIs.
Malaysia, as one of the highest producers of palm oil globally and one of the largest exporters, has huge potential to use palm oil waste to generate electricity, since an abundance of waste is produced during the palm oil extraction process. In this paper, we first examine and compare the use of palm oil waste as biomass for electricity generation in different countries with reference to Malaysia. Some rural areas with limited accessibility, like those in Sabah and Sarawak, require a cheap and reliable source of electricity, and palm oil waste possesses the potential to be that source. Therefore, this research examines the cost-effectiveness of electricity generated from palm oil waste compared with standalone diesel generation in Marudi, Sarawak, Malaysia. This research aims to investigate the potential for electricity generation using palm oil waste and the feasibility of implementing the technology in rural areas. To implement and analyze the feasibility, a case study was carried out in a rural area in Sarawak, Malaysia. The findings show the electricity cost calculations for small towns like Long Lama, Long Miri, and Long Atip, with ten nearby schools, and suggest that using empty fruit bunches (EFB) from palm oil waste is cheaper and reduces greenhouse gas emissions. The study also points out the need for further research on power systems, such as energy storage and microgrids, to better understand the future of power systems. By collecting data through questionnaires and surveys, an analysis was carried out to determine the approximate cost and quantity of palm oil waste needed to generate cheaper renewable energy. We conclude that electricity generation from palm oil waste is cost-effective and beneficial; the infrastructure can be a microgrid connected to the main grid.
Efficient resource management within Internet of Things (IoT) environments remains a pressing challenge due to the increasing number of devices and their diverse functionalities. This study introduces a neural network-based model that uses Long Short-Term Memory (LSTM) to optimize resource allocation under dynamically changing conditions. Designed to monitor the workload on individual IoT nodes, the model incorporates long-term data dependencies, enabling adaptive resource distribution in real time. The training process utilizes Min-Max normalization and grid search for hyperparameter tuning, ensuring high resource utilization and consistent performance. The simulation results demonstrate the effectiveness of the proposed method, outperforming state-of-the-art approaches, including Dynamic and Efficient Enhanced Load-Balancing (DEELB), Optimized Scheduling and Collaborative Active Resource-management (OSCAR), Convolutional Neural Network with Monarch Butterfly Optimization (CNN-MBO), and Autonomic Workload Prediction and Resource Allocation for Fog (AWPR-FOG). For example, in scenarios with low system utilization, the model achieved a resource utilization efficiency of 95% while maintaining a latency of just 15 ms, significantly exceeding the performance of comparative methods.
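As a rough sketch of the two training steps the abstract names, here is Min-Max normalization of a workload series and a toy grid search. The hyperparameter grid and the scoring function are hypothetical stand-ins for the paper's LSTM validation loss:

```python
# Sketch of the preprocessing/tuning steps named in the abstract:
# Min-Max scaling of a workload series and a toy grid search over
# hypothetical hyperparameters (the scoring function is a stand-in,
# not the paper's LSTM).
import itertools
import numpy as np

def min_max_scale(x):
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo), (lo, hi)

def min_max_invert(x_scaled, lo, hi):
    return x_scaled * (hi - lo) + lo

workload = np.array([12.0, 30.0, 18.0, 44.0, 25.0])  # e.g. CPU load per node
scaled, (lo, hi) = min_max_scale(workload)
assert scaled.min() == 0.0 and scaled.max() == 1.0

# Toy grid search: pick the (units, lr) pair minimizing a placeholder loss.
grid = {"units": [32, 64], "lr": [1e-3, 1e-2]}

def placeholder_loss(units, lr):  # stands in for a validation loss
    return abs(units - 64) + abs(lr - 1e-3)

best = min(itertools.product(grid["units"], grid["lr"]),
           key=lambda p: placeholder_loss(*p))
print(best)  # (64, 0.001)
```

Min-Max scaling is invertible, so predictions made in the [0, 1] space can be mapped back to raw workload units with `min_max_invert`.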
Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes. The rapid proliferation of IoT devices poses real-time data processing and security challenges. EC has become a transformative paradigm for addressing these challenges, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework combined with Bayesian game theory (BGT) and double deep Q-learning (DDQL). The proposed framework integrates BGT to model attacker-defender interactions for dynamic adaptation to threat levels and resource availability; it models a strategic game between attackers and defenders that takes uncertainty into account. DDQL is incorporated to optimize decision-making and aids in learning optimal defense policies at the edge. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data were collected from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves a high detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
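The DDQL component can be illustrated with its tabular ancestor, double Q-learning. The states, actions, and reward below are hypothetical placeholders for the edge-defense setting, not the paper's deep model:

```python
# Minimal tabular stand-in for the double Q-learning idea the framework
# uses: action selection comes from one value table, evaluation from the
# other, which reduces the overestimation bias of plain Q-learning.
# States/actions here are hypothetical (e.g. "normal"/"attack" traffic).
states, actions = ["normal", "attack"], ["allow", "block"]
qa = {(s, a): 0.0 for s in states for a in actions}
qb = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma = 0.5, 0.9

def double_q_update(s, a, r, s2):
    # Choose the greedy action from qa, evaluate it with qb (the "double" trick).
    a_star = max(actions, key=lambda x: qa[(s2, x)])
    qa[(s, a)] += alpha * (r + gamma * qb[(s2, a_star)] - qa[(s, a)])

# Defender blocks during an attack: positive reward.
double_q_update("attack", "block", 1.0, "normal")
assert qa[("attack", "block")] == 0.5  # 0 + 0.5 * (1 + 0.9*0 - 0)
```

In full double Q-learning the roles of the two tables are swapped at random on each step; DDQL replaces the tables with neural networks (an online and a target network) while keeping the same select-with-one, evaluate-with-the-other structure.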
Point of interest (POI) recommendation analyses user preferences through historical check-in data. However, existing POI recommendation methods often overlook the influence of weather information and face the challenge of sparse historical data for individual users. To address these issues, this paper proposes a new paradigm, namely the temporal-weather-aware transition pattern for POI recommendation (TWTransNet). This paradigm is designed to capture user transition patterns under different times and weather conditions. Additionally, we introduce the construction of a user-POI interaction graph to alleviate the problem of sparse historical data for individual users. Furthermore, when predicting user interests by aggregating graph information, some POIs may not be suitable for visitation under current weather conditions. To account for this, we propose an attention mechanism that filters POI neighbours when aggregating information from the graph, considering the impact of weather and time. Empirical results on two real-world datasets demonstrate the superior performance of our proposed method, showing a substantial improvement of 6.91%-23.31% in prediction accuracy.
Low Earth Orbit (LEO) satellites have gained significant attention for their low-latency communication and computing capabilities but face challenges due to high mobility and limited resources. Existing studies integrate edge computing with LEO satellite networks to optimize task offloading; however, they often overlook the impact of frequent topology changes, unstable transmission links, and intermittent satellite visibility, leading to task execution failures and increased latency. To address these issues, this paper proposes a dynamic integrated space-ground computing framework that optimizes task offloading under LEO satellite mobility constraints. We design an adaptive task migration strategy through inter-satellite links for when target satellites become inaccessible. To enhance data transmission reliability, we introduce a communication stability constraint based on the transmission bit error rate (BER). Additionally, we develop a genetic algorithm (GA)-based task scheduling method that dynamically allocates computing resources while minimizing latency and energy consumption. Our approach jointly considers satellite computing capacity, link stability, and task execution reliability to achieve efficient task offloading. Experimental results demonstrate that the proposed method significantly improves task execution success rates, reduces system overhead, and enhances overall computational efficiency in LEO satellite networks.
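A minimal sketch of GA-based task scheduling, assuming a toy latency matrix in place of the paper's joint latency/energy objective:

```python
# Toy sketch of GA-based task-to-satellite assignment: chromosomes are
# assignment vectors, fitness penalizes total latency. Latency numbers
# are hypothetical placeholders, not values from the paper.
import random

random.seed(42)
latency = [[5, 9], [7, 3], [4, 8]]  # latency[task][satellite], in ms

def fitness(chrom):  # lower total latency = fitter
    return -sum(latency[t][s] for t, s in enumerate(chrom))

def evolve(pop, generations=30):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:2]
        child = [random.choice(p) for p in zip(*parents)]  # uniform crossover
        i = random.randrange(len(child))                   # point mutation
        child[i] = random.randrange(2)
        pop[-1] = child                                    # replace the worst
    return max(pop, key=fitness)

# The true optimum here is [0, 1, 0] (total 12 ms); the GA usually finds it.
pop = [[random.randrange(2) for _ in range(3)] for _ in range(6)]
best = evolve(pop)
print(best, -fitness(best))
```

A real scheduler would encode satellite capacity and link-stability constraints as penalty terms in the fitness function rather than the bare latency sum used here.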
Background: Medical imaging advancements are constrained by fundamental trade-offs between acquisition speed, radiation dose, and image quality, forcing clinicians to work with noisy, incomplete data. Existing reconstruction methods either compromise on accuracy with iterative algorithms or suffer from limited generalizability with task-specific deep learning approaches. Methods: We present LDM-PIR, a lightweight physics-conditioned diffusion multi-model for medical image reconstruction that addresses key challenges in magnetic resonance imaging (MRI), CT, and low-photon imaging. Unlike traditional iterative methods, which are computationally expensive, or task-specific deep learning approaches, which lack generalizability, LDM-PIR integrates three innovations: a physics-conditioned diffusion framework that embeds acquisition operators (Fourier/Radon transforms) and noise models directly into the reconstruction process; a multi-model architecture that unifies denoising, inpainting, and super-resolution via shared weight conditioning; and a lightweight design (2.1M parameters) enabling rapid inference (0.8 s/image on GPU). Through self-supervised fine-tuning with measurement consistency losses, LDM-PIR adapts to new imaging modalities using fewer annotated samples. Results: LDM-PIR achieves state-of-the-art performance on fastMRI (peak signal-to-noise ratio (PSNR): 34.04 for single-coil/31.50 for multi-coil) and on the Lung Image Database Consortium and Image Database Resource Initiative dataset (28.83 PSNR under Poisson noise). Clinical evaluations demonstrate superior preservation of anatomical structures, with SSIM improvements of 8.8% for single-coil and 4.36% for multi-coil MRI over uDPIR. Conclusion: LDM-PIR offers a flexible, efficient, and scalable solution for medical image reconstruction, addressing the challenges of noise, undersampling, and modality generalization. The model's lightweight design allows for rapid inference, while its self-supervised fine-tuning capability minimizes reliance on large annotated datasets, making it suitable for real-world clinical applications.
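For reference, the PSNR figures quoted above follow the standard definition, which is short enough to state in code (images assumed scaled to [0, 1]):

```python
# PSNR, the metric reported above: a minimal numpy implementation
# over images with intensities scaled to [0, 1].
import numpy as np

def psnr(reference, reconstruction, max_val=1.0):
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((4, 4))
rec = np.full((4, 4), 0.1)                 # uniform 0.1 error -> MSE = 0.01
assert abs(psnr(ref, rec) - 20.0) < 1e-6   # 10 * log10(1 / 0.01) = 20 dB
```

Each 3 dB of PSNR roughly halves the mean squared error, which is why gaps of 2-3 dB between reconstruction methods are considered substantial.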
Accurate and efficient detection of building changes in remote sensing imagery is crucial for urban planning, disaster emergency response, and resource management. However, existing methods face challenges such as spectral similarity between buildings and backgrounds, sensor variations, and insufficient computational efficiency. To address these challenges, this paper proposes a novel Multi-scale Efficient Wavelet-based Change Detection Network (MewCDNet), which integrates the advantages of Convolutional Neural Networks and Transformers, balances computational costs, and achieves high-performance building change detection. The network employs EfficientNet-B4 as the backbone for hierarchical feature extraction, integrates multi-level feature maps through a multi-scale fusion strategy, and incorporates two key modules: Cross-temporal Difference Detection (CTDD) and Cross-scale Wavelet Refinement (CSWR). CTDD adopts a dual-branch architecture that combines pixel-wise differencing with semantic-aware Euclidean distance weighting to enhance the distinction between true changes and background noise. CSWR integrates the Haar-based Discrete Wavelet Transform with multi-head cross-attention mechanisms, enabling cross-scale feature fusion while significantly improving edge localization and suppressing spurious changes. Extensive experiments on four benchmark datasets demonstrate MewCDNet's superiority over comparison methods, achieving F1 scores of 91.54% on LEVIR, 93.70% on WHUCD, and 64.96% on S2Looking for building change detection. Furthermore, MewCDNet exhibits optimal performance on the multi-class SYSU dataset (F1: 82.71%), highlighting its exceptional generalization capability.
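The Haar-based DWT inside CSWR can be illustrated in one dimension; the 2-D version used on feature maps applies the same split along rows and then columns. This is a generic Haar transform, not the paper's implementation:

```python
# One level of the Haar discrete wavelet transform: split a signal into
# approximation (low-pass) and detail (high-pass) coefficients, then
# reconstruct it exactly with the inverse transform.
import numpy as np

def haar_dwt_1d(x):
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt_1d(approx, detail):
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    return np.stack([even, odd], axis=1).ravel()

x = np.array([4.0, 2.0, 5.0, 5.0])
a, d = haar_dwt_1d(x)
assert np.allclose(haar_idwt_1d(a, d), x)  # perfect reconstruction
```

The detail coefficients respond to local intensity jumps, which is why wavelet branches are a natural fit for sharpening building edges in change maps.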
Skin diseases affect millions worldwide. Early detection is key to preventing disfigurement, lifelong disability, or death. Dermoscopic images acquired in primary-care settings show high intra-class visual similarity and severe class imbalance, and occasional imaging artifacts can create ambiguity for state-of-the-art convolutional neural networks (CNNs). We frame skin lesion recognition as graph-based reasoning and, to ensure fair evaluation and avoid data leakage, adopt a strict lesion-level partitioning strategy. Each image is first over-segmented using SLIC (Simple Linear Iterative Clustering) to produce perceptually homogeneous superpixels. These superpixels form the nodes of a region-adjacency graph whose edges encode spatial continuity. Node attributes are 1280-dimensional embeddings extracted with a lightweight yet expressive EfficientNet-B0 backbone, providing strong representational power at modest computational cost. The resulting graphs are processed by a five-layer Graph Attention Network (GAT) that learns to weight inter-node relationships dynamically and aggregates multi-hop context before classifying lesions into seven classes with a log-softmax output. Extensive experiments on the DermaMNIST benchmark show the proposed pipeline achieves 88.35% accuracy and 98.04% AUC, outperforming contemporary CNNs, AutoML approaches, and alternative graph neural networks. An ablation study indicates EfficientNet-B0 produces superior node descriptors compared with ResNet-18 and DenseNet, and that roughly five GAT layers strike a good balance between being too shallow and too deep while avoiding oversmoothing. The method requires no data augmentation or external metadata, making it a drop-in upgrade for clinical computer-aided diagnosis systems.
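A single-head sketch of the graph-attention aggregation described above, with toy dimensions in place of the 1280-d EfficientNet-B0 embeddings and a random adjacency standing in for the region-adjacency graph:

```python
# Single-head GAT layer in numpy: attention logits from a shared linear
# map plus LeakyReLU, softmax restricted to each node's neighbours, then
# weighted feature aggregation. Sizes are toy placeholders.
import numpy as np

rng = np.random.default_rng(1)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(h, adj, w, a):
    z = h @ w                                  # (n, d') projected features
    n = z.shape[0]
    logits = np.full((n, n), -np.inf)          # non-neighbours get zero weight
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                logits[i, j] = leaky_relu(a @ np.concatenate([z[i], z[j]]))
    att = np.exp(logits - logits.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)  # softmax over neighbours
    return att @ z

n, d_in, d_out = 4, 3, 2
h = rng.normal(size=(n, d_in))                              # node (superpixel) features
adj = np.eye(n, dtype=bool) | (rng.random((n, n)) < 0.5)    # keep self-loops
adj = adj | adj.T                                           # undirected graph

w = rng.normal(size=(d_in, d_out))
a = rng.normal(size=2 * d_out)
out = gat_layer(h, adj, w, a)
assert out.shape == (n, d_out)
```

Because the softmax is computed per node over its neighbourhood only, each superpixel learns how much to listen to each adjacent region, which is the mechanism the five stacked layers use to spread context across the lesion.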
AIM To identify demographic, clinical, metabolomic, and lifestyle-related predictors of relapse in adult ulcerative colitis (UC) patients. METHODS In this prospective pilot study, UC patients in clinical remission were recruited and followed up at 12 mo to assess whether a clinical relapse had occurred. At baseline, information on demographic and clinical parameters was collected. Serum and urine samples were collected for metabolomic assays using combined direct infusion/liquid chromatography tandem mass spectrometry and nuclear magnetic resonance spectroscopy. Stool samples were also collected to measure fecal calprotectin (FCP). Dietary assessment was performed using a validated self-administered food frequency questionnaire. RESULTS Twenty patients were included (mean age: 42.7 ± 14.8 years; females: 55%). Seven patients (35%) experienced a clinical relapse during the follow-up period. While 6 patients (66.7%) with normal body weight developed a clinical relapse, only 1 overweight/obese UC patient (9.1%) relapsed during the follow-up (P = 0.02). At baseline, poultry intake was significantly higher in patients who were still in remission during follow-up (0.9 oz vs 0.2 oz, P = 0.002). Five patients (71.4%) with FCP > 150 μg/g and 2 patients (15.4%) with normal FCP (≤ 150 μg/g) at baseline relapsed during the follow-up (P = 0.02). Interestingly, baseline urinary and serum metabolomic profiles of UC patients with or without clinical relapse within 12 mo differed significantly. The metabolites most responsible for this discrimination were trans-aconitate, cystine, and acetamide in urine, and 3-hydroxybutyrate, acetoacetate, and acetone in serum. CONCLUSION A combination of baseline dietary intake, fecal calprotectin, and metabolomic factors is associated with the risk of UC clinical relapse within 12 mo.
Neurocognitive deficits are frequently observed in patients with schizophrenia and major depressive disorder (MDD). The relations between cognitive features may be represented by neurocognitive graphs based on cognitive features, modeled as Gaussian Markov random fields. However, it is unclear whether it is possible to differentiate between phenotypic patterns associated with the differential diagnosis of schizophrenia and depression using this neurocognitive graph approach. In this study, we enrolled 215 first-episode patients with schizophrenia (FES), 125 with MDD, and 237 demographically-matched healthy controls (HCs). The cognitive performance of all participants was evaluated using a battery of neurocognitive tests. The graphical LASSO model was trained in a one-vs-one scenario to learn the conditional independence structure of the neurocognitive features of each group. Participants in the holdout dataset were classified into the group with the highest likelihood. A partial correlation matrix was derived from the graphical model to further explore the neurocognitive graph for each group. The classification approach identified the diagnostic class for individuals with an average accuracy of 73.41% for FES vs HC, 67.07% for MDD vs HC, and 59.48% for FES vs MDD. Both the neurocognitive graphs for FES and MDD had more connections and higher node centrality than those for HC. The neurocognitive graph for FES was less sparse and had more connections than that for MDD. Thus, neurocognitive graphs based on cognitive features are promising for describing endophenotypes that may discriminate schizophrenia from depression.
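The partial-correlation transform mentioned above has a closed form: given a precision matrix Theta (such as the graphical LASSO estimate), the partial correlation between features i and j is -Theta_ij / sqrt(Theta_ii * Theta_jj). A toy numpy version, with an invented 3-feature precision matrix:

```python
# Transform a precision (inverse covariance) matrix into a partial
# correlation matrix: pcorr_ij = -theta_ij / sqrt(theta_ii * theta_jj).
# The 3x3 matrix below is a made-up illustration, not study data.
import numpy as np

def partial_correlation(theta):
    d = np.sqrt(np.diag(theta))
    pcorr = -theta / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

theta = np.array([[ 2.0, -0.8,  0.0],
                  [-0.8,  2.0, -0.5],
                  [ 0.0, -0.5,  1.0]])
p = partial_correlation(theta)
assert abs(p[0, 1] - 0.4) < 1e-12   # -(-0.8) / sqrt(2 * 2)
assert p[0, 2] == 0.0               # zero entry => conditional independence
```

This is why graphical-LASSO sparsity is interpretable as a graph: a zero in the precision matrix means two cognitive features are conditionally independent given all the others, i.e. no edge between their nodes.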
In this paper, we propose enhancements to the Beetle Antennae Search (BAS) algorithm, called BAS-ADAIVL, to smooth the convergence behavior and avoid trapping in local minima for highly non-convex objective functions. We achieve this by adaptively adjusting the step-size in each iteration using the adaptive moment estimation (ADAM) update rule. The proposed algorithm also increases the convergence rate in narrow valleys. A key feature of the ADAM update rule is the ability to adjust the step-size for each dimension separately instead of using the same step-size for all. Since ADAM is traditionally used with gradient-based optimization algorithms, we first propose a gradient estimation model that does not require differentiating the objective function. As a result, the algorithm demonstrates excellent performance and a fast convergence rate in searching for the optimum of non-convex functions. The efficiency of the proposed algorithm was tested on three different benchmark problems, including the training of a high-dimensional neural network. The performance is compared with the particle swarm optimizer (PSO) and the original BAS algorithm.
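A minimal numpy sketch of the combination described here: a finite-difference gradient estimate along a random antenna direction, fed into the ADAM update so each dimension gets its own adaptive step. The objective and hyperparameters are generic defaults, not the paper's settings:

```python
# Gradient-free search with ADAM-adapted steps: two antenna probes give
# a directional-derivative estimate, ADAM turns it into per-dimension
# step sizes. Hyperparameters are the usual ADAM defaults.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):  # benchmark objective with minimum at the origin
    return float(np.sum(x ** 2))

def bas_adam(f, x, steps=200, d=0.1, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        b = rng.normal(size=x.shape)
        b /= np.linalg.norm(b)                            # antenna direction
        g = (f(x + d * b) - f(x - d * b)) / (2 * d) * b   # gradient estimate
        m = b1 * m + (1 - b1) * g                         # first moment
        v = b2 * v + (1 - b2) * g ** 2                    # second moment
        m_hat = m / (1 - b1 ** t)
        v_hat = v / (1 - b2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)       # per-dimension step
    return x

x0 = np.array([2.0, -1.5])
x_best = bas_adam(sphere, x0)
assert sphere(x_best) < sphere(x0)  # the objective improved
```

The second-moment normalization is what gives each coordinate its own effective step-size: dimensions with consistently large estimated gradients get damped, flat dimensions get amplified.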
Time-sensitive networks (TSNs) support not only traditional best-effort communications but also deterministic communications, which send each packet at a deterministic time so that the data transmissions of networked control systems can be precisely scheduled to guarantee hard real-time constraints. No-wait scheduling is suitable for such TSNs and generates the schedules of deterministic communications with minimal network resources, so that all of the remaining resources can be used to improve the throughput of best-effort communications. However, due to inappropriate message fragmentation, the real-time performance of no-wait scheduling algorithms is reduced. Therefore, in this paper, joint algorithms for message fragmentation and no-wait scheduling are proposed. First, a specification for the joint problem based on optimization modulo theories is proposed so that off-the-shelf solvers can be used to find optimal solutions. Second, to improve the scalability of our algorithm, the worst-case delay of messages is analyzed, and then, based on this analysis, a heuristic algorithm is proposed to construct low-delay schedules. Finally, we conduct extensive experiments to evaluate our proposed algorithms. The evaluation results indicate that, compared to existing algorithms, the proposed joint algorithm improves schedulability by up to 50%.
In this paper, the generalized Dodd-Bullough-Mikhailov equation is studied. The existence of periodic wave and unbounded wave solutions is proved by using the method of bifurcation theory of dynamical systems. Under different parametric conditions, various sufficient conditions to guarantee the existence of the above solutions are given. Some exact explicit parametric representations of the above travelling wave solutions are obtained.
This work is an attempt to improve the Bayesian neural network (BNN) for studying photoneutron yield cross sections as a function of the charge number Z, mass number A, and incident energy ε. The BNN was improved in terms of three aspects: numerical parameters, input layer, and network structure. First, by minimizing the deviations between the predictions and data, the numerical parameters, including the number of hidden layers, the number of hidden nodes, and the activation function, were selected. It was found that the BNN with three hidden layers, 10 hidden nodes, and the sigmoid activation function provided the smallest deviations. Second, based on known knowledge, such as the isospin dependence and shape effect, the optimal ground-state properties were selected as input neurons. Third, the Lorentzian function was applied to map the hidden nodes to the output cross sections, and the empirical formula for the Lorentzian parameters was applied to link some of the input nodes to the output cross sections. It was found that the last two aspects improved the predictions and avoided overfitting, especially for axially deformed nuclei.
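The Lorentzian mapping referred to above is, in the standard giant-dipole-resonance form, σ(ε) = σ₀ ε²Γ² / ((ε² - E₀²)² + ε²Γ²); the parameter values below are illustrative rather than fitted:

```python
# Standard giant-dipole-resonance Lorentzian used to shape photoneutron
# cross sections. sigma0, e0, gamma below are illustrative placeholders.
import numpy as np

def lorentzian(e, sigma0, e0, gamma):
    num = (e * gamma) ** 2
    den = (e ** 2 - e0 ** 2) ** 2 + (e * gamma) ** 2
    return sigma0 * num / den

e = np.linspace(5.0, 30.0, 251)  # photon energy grid (MeV)
sigma = lorentzian(e, sigma0=300.0, e0=15.0, gamma=5.0)  # mb, MeV, MeV

peak = e[np.argmax(sigma)]
assert abs(peak - 15.0) < 0.2            # peak sits at the resonance energy
assert abs(sigma.max() - 300.0) < 1e-6   # peak height equals sigma0
```

Because the curve is fully determined by (σ₀, E₀, Γ), letting the network output these three parameters instead of raw cross-section values builds the known resonance shape into the model, which is the kind of physics constraint the paper credits with reducing overfitting.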
Funding: This research has been supported by the National Science Foundation (under grant #1723596) and the National Security Agency (under grant #H98230-17-1-0355).
Abstract: Pervasive IoT applications enable us to perceive, analyze, control, and optimize traditional physical systems. Recently, security breaches in many IoT applications have indicated that IoT applications may put the physical systems at risk. Severe resource constraints and insufficient security design are two major causes of many security problems in IoT applications. As an extension of the cloud, emerging edge computing with rich resources provides us a new venue to design and deploy novel security solutions for IoT applications. Although there are some research efforts in this area, edge-based security designs for IoT applications are still in their infancy. This paper aims to present a comprehensive survey of existing IoT security solutions at the edge layer as well as to inspire more edge-based IoT security designs. We first present an edge-centric IoT architecture. Then, we extensively review the edge-based IoT security research efforts in the context of security architecture designs, firewalls, intrusion detection systems, authentication and authorization protocols, and privacy-preserving mechanisms. Finally, we propose our insight into future research directions and open research issues.
Funding: Supported by Hong Kong RGC under GRF grant PolyU5106/10E, Nokia Research Lab (Beijing) under grant H-ZG19, the National S&T Major Project of China under No. 2009ZX03006-001, and the Guangdong S&T Major Project under No. 2009A080207002.
Abstract: Mobile Cloud Computing (MCC) is emerging as one of the most important branches of cloud computing. In this paper, MCC is defined as cloud computing extended by mobility and a new ad-hoc infrastructure based on mobile devices. It provides mobile users with data storage and processing services on a cloud computing platform. Because mobile cloud computing is still in its infancy, we aim to clarify confusion that has arisen from different views. Existing works are reviewed, and an overview of recent advances in mobile cloud computing is provided. We investigate representative infrastructures of mobile cloud computing and analyze key components. Moreover, emerging MCC models and services are discussed, and challenging issues are identified that will need to be addressed in future work.
Abstract: Big Data applications are pervading more and more aspects of our life, encompassing commercial and scientific uses at increasing rates as we move towards exascale analytics. Examples of Big Data applications include storing and accessing user data in commercial clouds, mining of social data, and analysis of large-scale simulations and experiments such as the Large Hadron Collider. An increasing number of such data-intensive applications and services are relying on clouds in order to process and manage the enormous amounts of data required for continuous operation. It can be difficult to decide which of the many options for cloud processing is suitable for a given application; the aim of this paper is therefore to provide an interested user with an overview of the most important concepts of cloud computing as it relates to processing of Big Data.
Abstract: This paper looks at students' views of the usefulness of a problem-solving and programming module in the first year of a three-year undergraduate programme. The School of Science and Technology, University of Northampton, UK has been investigating the teaching of problem solving over the last seven years, including whether a more visual approach has any benefits (the visual programming includes both 2D and graphical user interfaces). Whilst the authors have discussed problem solving and programming in the past [1], this paper considers the students' perspective, drawing on research collected/collated by a student researcher under a new initiative within the University. All students interviewed had either completed the module within the two years of the survey or were completing the problem-solving module in their first year.
Abstract: This review examines human vulnerabilities in cybersecurity within Microfinance Institutions, analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak internal protocols, the study identifies key vulnerabilities exacerbating cyber threats to MFIs. A literature review using databases like IEEE Xplore and Google Scholar focused on studies from 2019 to 2023 addressing human factors in cybersecurity specific to MFIs. Analysis of 57 studies reveals that phishing and insider threats are predominant, with a 20% annual increase in phishing attempts. Employee susceptibility to these attacks is heightened by insufficient training, with entry-level employees showing the highest vulnerability rates. Further, only 35% of MFIs offer regular cybersecurity training, significantly impacting incident reduction. This paper recommends enhanced training frequency, robust internal controls, and a cybersecurity-aware culture to mitigate human-induced cyber risks in MFIs.
Abstract: Malaysia, as one of the highest producers of palm oil globally and one of the largest exporters, has huge potential to use palm oil waste to generate electricity, since an abundance of waste is produced during the palm oil extraction process. In this paper, we first examine and compare the use of palm oil waste as biomass for electricity generation in different countries with reference to Malaysia. Some rural areas with limited accessibility, like those in Sabah and Sarawak, require a cheap and reliable source of electricity, and palm oil waste possesses the potential to be that source. Therefore, this research examines the cost-effectiveness comparison between electricity generated from palm oil waste and standalone diesel electric generation in Marudi, Sarawak, Malaysia. This research aims to investigate the potential for electricity generation using palm oil waste and the feasibility of implementing the technology in rural areas. To implement and analyze the feasibility, a case study has been carried out in a rural area in Sarawak, Malaysia. The findings show the electricity cost calculations for small towns like Long Lama, Long Miri, and Long Atip, with ten nearby schools, and suggest that using empty fruit bunches (EFB) from palm oil waste is cheaper and reduces greenhouse gas emissions. The study also points out the need for further research on power systems, such as energy storage and microgrids, to better understand the future of power systems. By collecting data through questionnaires and surveys, an analysis has been carried out to determine the approximate cost and quantity of palm oil waste needed to generate cheaper renewable energy. We conclude that electricity generation from palm oil waste is cost-effective and beneficial; the infrastructure can be a microgrid connected to the main grid.
Funding: This research received funding from the Deanship of Graduate Studies and Scientific Research, Jazan University, Saudi Arabia, through Project Number ISP-2024.
Abstract: Efficient resource management within Internet of Things (IoT) environments remains a pressing challenge due to the increasing number of devices and their diverse functionalities. This study introduces a neural network-based model that uses Long Short-Term Memory (LSTM) to optimize resource allocation under dynamically changing conditions. Designed to monitor the workload on individual IoT nodes, the model incorporates long-term data dependencies, enabling adaptive resource distribution in real time. The training process utilizes Min-Max normalization and grid search for hyperparameter tuning, ensuring high resource utilization and consistent performance. The simulation results demonstrate the effectiveness of the proposed method, outperforming state-of-the-art approaches, including Dynamic and Efficient Enhanced Load-Balancing (DEELB), Optimized Scheduling and Collaborative Active Resource-management (OSCAR), Convolutional Neural Network with Monarch Butterfly Optimization (CNN-MBO), and Autonomic Workload Prediction and Resource Allocation for Fog (AWPR-FOG). For example, in scenarios with low system utilization, the model achieved a resource utilization efficiency of 95% while maintaining a latency of just 15 ms, significantly exceeding the performance of comparative methods.
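The Min-Max normalization step mentioned in the abstract above is straightforward to state precisely. A small sketch (the all-zeros convention for a constant input is our assumption, not something specified in the abstract):

```python
def min_max_normalize(values):
    """Rescale a workload series into [0, 1] before feeding it to the
    LSTM; a constant series is mapped to all zeros by convention."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

Scaling inputs to a fixed range like this keeps the LSTM's gate activations in their sensitive region and makes hyperparameters found by grid search transferable across nodes with different absolute workload levels.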
Funding: The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through the Large Group Project under grant number RGP2/337/46. The research team thanks the Deanship of Graduate Studies and Scientific Research at Najran University for supporting the research project through the Nama'a program, with the project code NU/GP/SERC/13/352-4.
Abstract: Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes. The rapid proliferation of IoT devices poses real-time data processing and security challenges. EC has become a transformative paradigm for addressing these challenges, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework combined with Bayesian game theory (BGT) and double deep Q-learning (DDQL). The proposed framework integrates BGT to model attacker-defender interactions for dynamic adaptation to threat levels and resource availability; it also models a strategic layout between attackers and defenders that takes uncertainty into account. DDQL is incorporated to optimize decision-making and aids in learning optimal defense policies at the edge, thereby ensuring policy and decision optimization. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data collection has been performed from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves a high detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
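The double deep Q-learning (DDQL) component referenced above rests on a standard decoupling: the online network selects the next action, while the target network evaluates it. A sketch of that target computation (function and argument names are illustrative, not from the paper):

```python
def ddql_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double-Q target: take the argmax action from the online network's
    estimates but its value from the target network's estimates, which
    curbs the overestimation bias of plain deep Q-learning."""
    if done:
        return reward  # terminal transition: no bootstrapped future value
    a_star = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[a_star]
```

In the framework described, the "actions" would be defense policies at the edge, with rewards shaped by the Bayesian game model of attacker-defender interaction.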
Funding: Supported by the Stable Support Project of Shenzhen (20231120161634002), the Shenzhen Science and Technology Programme (JCYJ20240813141417023), the Natural Science Foundation of Guangdong Province of China (2025A1515010233), the Guangdong Provincial Department of Education (2024KTSCX060), the Tencent 'Rhinoceros Birds' Scientific Research Foundation for Young Teachers of Shenzhen University, the Open Project of the State Key Laboratory for Novel Software Technology of Nanjing University (KFKT2025B22), the Hong Kong RGC General Research Fund (No. 152211/23E and 15216424/24E), the PolyU Internal Fund (No. P0043932, P0048988), and the NVIDIA AI Technology Centre.
Abstract: Point of interest (POI) recommendation analyses user preferences through historical check-in data. However, existing POI recommendation methods often overlook the influence of weather information and face the challenge of sparse historical data for individual users. To address these issues, this paper proposes a new paradigm, namely the temporal-weather-aware transition pattern for POI recommendation (TWTransNet). This paradigm is designed to capture user transition patterns under different times and weather conditions. Additionally, we introduce the construction of a user-POI interaction graph to alleviate the problem of sparse historical data for individual users. Furthermore, when predicting user interests by aggregating graph information, some POIs may not be suitable for visitation under current weather conditions. To account for this, we propose an attention mechanism that filters POI neighbours when aggregating information from the graph, considering the impact of weather and time. Empirical results on two real-world datasets demonstrate the superior performance of our proposed method, showing a substantial improvement of 6.91%-23.31% in terms of prediction accuracy.
Funding: Supported by the Guangdong Basic and Applied Basic Research Project (No. 2025A1515012874), the Foundation of Yunnan Key Laboratory of Service Computing (No. YNSC24115), the Research Project of Pazhou Lab for Excellent Young Scholars (No. PZL2021KF0024), the Guangdong Undergraduate Teaching Quality and Teaching Reform Project, the University Research Project of Guangzhou Education Bureau (No. 2024312189), the Guangzhou Basic and Applied Basic Research Project (No. SL2024A03J00397), the National Natural Science Foundation of China (No. 62272113), and the Guangzhou Basic Research Program (No. 2024A03J0398).
Abstract: Low Earth Orbit (LEO) satellites have gained significant attention for their low-latency communication and computing capabilities but face challenges due to high mobility and limited resources. Existing studies integrate edge computing with LEO satellite networks to optimize task offloading; however, they often overlook the impact of frequent topology changes, unstable transmission links, and intermittent satellite visibility, leading to task execution failures and increased latency. To address these issues, this paper proposes a dynamic integrated space-ground computing framework that optimizes task offloading under LEO satellite mobility constraints. We design an adaptive task migration strategy through inter-satellite links for when target satellites become inaccessible. To enhance data transmission reliability, we introduce a communication stability constraint based on the transmission bit error rate (BER). Additionally, we develop a genetic algorithm (GA)-based task scheduling method that dynamically allocates computing resources while minimizing latency and energy consumption. Our approach jointly considers satellite computing capacity, link stability, and task execution reliability to achieve efficient task offloading. Experimental results demonstrate that the proposed method significantly improves task execution success rates, reduces system overhead, and enhances overall computational efficiency in LEO satellite networks.
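As one building block of a GA-based task scheduler like the one described above, offloading decisions can be encoded as a task-to-satellite assignment vector and recombined by single-point crossover. A sketch under that encoding assumption (the encoding itself is our illustration, not stated in the abstract):

```python
import random

def crossover(parent_a, parent_b, rng):
    """Single-point crossover over task-to-satellite assignment vectors:
    the child copies parent_a up to a random cut point and parent_b
    after it, mixing two candidate offloading plans."""
    point = rng.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

rng = random.Random(0)
# Two candidate plans: assign all four tasks to satellite 0 vs. satellite 1.
child = crossover([0, 0, 0, 0], [1, 1, 1, 1], rng)
```

A full scheduler would score each child with a fitness combining latency, energy, and the BER-based link stability constraint, then keep the best candidates across generations.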
Abstract: Background: Medical imaging advancements are constrained by fundamental trade-offs between acquisition speed, radiation dose, and image quality, forcing clinicians to work with noisy, incomplete data. Existing reconstruction methods either compromise on accuracy with iterative algorithms or suffer from limited generalizability with task-specific deep learning approaches. Methods: We present LDM-PIR, a lightweight physics-conditioned diffusion multi-model for medical image reconstruction that addresses key challenges in magnetic resonance imaging (MRI), CT, and low-photon imaging. Unlike traditional iterative methods, which are computationally expensive, or task-specific deep learning approaches, which lack generalizability, LDM-PIR integrates three innovations: a physics-conditioned diffusion framework that embeds acquisition operators (Fourier/Radon transforms) and noise models directly into the reconstruction process; a multi-model architecture that unifies denoising, inpainting, and super-resolution via shared weight conditioning; and a lightweight design (2.1M parameters) enabling rapid inference (0.8 s/image on GPU). Through self-supervised fine-tuning with measurement consistency losses, the model adapts to new imaging modalities using fewer annotated samples. Results: LDM-PIR achieves state-of-the-art performance on fastMRI (peak signal-to-noise ratio (PSNR): 34.04 for single-coil/31.50 for multi-coil) and on the Lung Image Database Consortium and Image Database Resource Initiative dataset (28.83 PSNR under Poisson noise). Clinical evaluations demonstrate superior preservation of anatomical structures, with SSIM improvements of 8.8% for single-coil and 4.36% for multi-coil MRI over uDPIR. Conclusion: LDM-PIR offers a flexible, efficient, and scalable solution for medical image reconstruction, addressing the challenges of noise, undersampling, and modality generalization. The model's lightweight design allows for rapid inference, while its self-supervised fine-tuning capability minimizes reliance on large annotated datasets, making it suitable for real-world clinical applications.
Funding: Supported by the Henan Province Key R&D Project under Grant 241111210400, the Henan Provincial Science and Technology Research Project under Grants 252102211047, 252102211062, 252102211055, and 232102210069, the Jiangsu Provincial Scheme Double Initiative Plan JSS-CBS20230474, the XJTLU RDF-21-02-008, the Science and Technology Innovation Project of Zhengzhou University of Light Industry under Grant 23XNKJTD0205, and the Higher Education Teaching Reform Research and Practice Project of Henan Province under Grant 2024SJGLX0126.
Abstract: Accurate and efficient detection of building changes in remote sensing imagery is crucial for urban planning, disaster emergency response, and resource management. However, existing methods face challenges such as spectral similarity between buildings and backgrounds, sensor variations, and insufficient computational efficiency. To address these challenges, this paper proposes a novel Multi-scale Efficient Wavelet-based Change Detection Network (MewCDNet), which integrates the advantages of Convolutional Neural Networks and Transformers, balances computational costs, and achieves high-performance building change detection. The network employs EfficientNet-B4 as the backbone for hierarchical feature extraction, integrates multi-level feature maps through a multi-scale fusion strategy, and incorporates two key modules: Cross-temporal Difference Detection (CTDD) and Cross-scale Wavelet Refinement (CSWR). CTDD adopts a dual-branch architecture that combines pixel-wise differencing with semantic-aware Euclidean distance weighting to enhance the distinction between true changes and background noise. CSWR integrates the Haar-based Discrete Wavelet Transform with multi-head cross-attention mechanisms, enabling cross-scale feature fusion while significantly improving edge localization and suppressing spurious changes. Extensive experiments on four benchmark datasets demonstrate MewCDNet's superiority over comparison methods, achieving F1 scores of 91.54% on LEVIR, 93.70% on WHUCD, and 64.96% on S2Looking for building change detection. Furthermore, MewCDNet exhibits optimal performance on the multi-class SYSU dataset (F1: 82.71%), highlighting its exceptional generalization capability.
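The Haar-based Discrete Wavelet Transform used in the CSWR module decomposes a signal into pairwise averages (the low-frequency approximation band) and pairwise differences (the high-frequency detail band, where edges show up). A one-level, one-dimensional sketch:

```python
import math

def haar_dwt_1d(signal):
    """One level of the Haar DWT on an even-length sequence: scaled
    pairwise sums give the approximation band, scaled pairwise
    differences give the detail band that carries edge information."""
    assert len(signal) % 2 == 0
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range((0), len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

approx, detail = haar_dwt_1d([4.0, 2.0, 5.0, 5.0])
```

In 2D imagery the same filters are applied along rows and columns, which is why wavelet refinement helps edge localization: changed building boundaries concentrate energy in the detail bands.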
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. DGSSR-2025-02-01296.
Abstract: Skin diseases affect millions worldwide. Early detection is key to preventing disfigurement, lifelong disability, or death. Dermoscopic images acquired in primary-care settings show high intra-class visual similarity and severe class imbalance, and occasional imaging artifacts can create ambiguity for state-of-the-art convolutional neural networks (CNNs). We frame skin lesion recognition as graph-based reasoning and, to ensure fair evaluation and avoid data leakage, adopt a strict lesion-level partitioning strategy. Each image is first over-segmented using SLIC (Simple Linear Iterative Clustering) to produce perceptually homogeneous superpixels. These superpixels form the nodes of a region-adjacency graph whose edges encode spatial continuity. Node attributes are 1280-dimensional embeddings extracted with a lightweight yet expressive EfficientNet-B0 backbone, providing strong representational power at modest computational cost. The resulting graphs are processed by a five-layer Graph Attention Network (GAT) that learns to weight inter-node relationships dynamically and aggregates multi-hop context before classifying lesions into seven classes with a log-softmax output. Extensive experiments on the DermaMNIST benchmark show the proposed pipeline achieves 88.35% accuracy and 98.04% AUC, outperforming contemporary CNNs, AutoML approaches, and alternative graph neural networks. An ablation study indicates that EfficientNet-B0 produces superior node descriptors compared with ResNet-18 and DenseNet, and that roughly five GAT layers strike a good balance between being too shallow and too deep while avoiding oversmoothing. The method requires no data augmentation or external metadata, making it a drop-in upgrade for clinical computer-aided diagnosis systems.
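The graph attention layer at the core of the pipeline above aggregates neighbour features with softmax-normalized weights. A single-head sketch (raw attention scores are taken as given here, whereas a full GAT derives them from learned projections of the node embeddings):

```python
import math

def attention_aggregate(neighbor_feats, scores):
    """Softmax the raw attention scores, then take the attention-weighted
    sum of neighbour feature vectors (one GAT head, no learned weights)."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]           # attention coefficients, sum to 1
    dim = len(neighbor_feats[0])
    out = [sum(a * h[d] for a, h in zip(alphas, neighbor_feats)) for d in range(dim)]
    return out, alphas

# Two superpixel neighbours with equal scores contribute equally.
out, alphas = attention_aggregate([[1.0, 0.0], [3.0, 0.0]], [0.0, 0.0])
```

Stacking several such layers (five in the paper) lets each superpixel attend to multi-hop context across the region-adjacency graph before classification.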
Funding: Supported by Alberta Innovates-Bio Solutions and a graduate studentship from Alberta Innovates-Health Solutions (to Keshteli AH).
Abstract: AIM: To identify demographic, clinical, metabolomic, and lifestyle-related predictors of relapse in adult ulcerative colitis (UC) patients. METHODS: In this prospective pilot study, UC patients in clinical remission were recruited and followed up at 12 mo to assess whether a clinical relapse occurred. At baseline, information on demographic and clinical parameters was collected. Serum and urine samples were collected for metabolomic assays using combined direct infusion/liquid chromatography tandem mass spectrometry and nuclear magnetic resonance spectroscopy. Stool samples were also collected to measure fecal calprotectin (FCP). Dietary assessment was performed using a validated self-administered food frequency questionnaire. RESULTS: Twenty patients were included (mean age: 42.7 ± 14.8 years; females: 55%). Seven patients (35%) experienced a clinical relapse during the follow-up period. While 6 patients (66.7%) with normal body weight developed a clinical relapse, only 1 UC patient (9.1%) who was overweight/obese relapsed during the follow-up (P = 0.02). At baseline, poultry intake was significantly higher in patients who were still in remission during follow-up (0.9 oz vs 0.2 oz, P = 0.002). Five patients (71.4%) with FCP > 150 μg/g and 2 patients (15.4%) with normal FCP (≤ 150 μg/g) at baseline relapsed during the follow-up (P = 0.02). Interestingly, baseline urinary and serum metabolomic profiles of UC patients with or without clinical relapse within 12 mo showed a significant difference. The most important metabolites responsible for this discrimination were trans-aconitate, cystine, and acetamide in urine, and 3-hydroxybutyrate, acetoacetate, and acetone in serum. CONCLUSION: A combination of baseline dietary intake, fecal calprotectin, and metabolomic factors is associated with risk of UC clinical relapse within 12 mo.
Funding: Funded by National Natural Science Foundation of China Key Projects (81130024, 91332205, and 81630030), the National Key Technology R&D Program of the Ministry of Science and Technology of China (2016YFC0904300), the National Natural Science Foundation of China/Research Grants Council of Hong Kong Joint Research Scheme (8141101084), the Natural Science Foundation of China (8157051859), the Sichuan Science & Technology Department (2015JY0173), the Canadian Institutes of Health Research, Alberta Innovates: Centre for Machine Learning, and the Canadian Depression Research & Intervention Network.
Abstract: Neurocognitive deficits are frequently observed in patients with schizophrenia and major depressive disorder (MDD). The relations between cognitive features may be represented by neurocognitive graphs based on cognitive features, modeled as Gaussian Markov random fields. However, it is unclear whether it is possible to differentiate between phenotypic patterns associated with the differential diagnosis of schizophrenia and depression using this neurocognitive graph approach. In this study, we enrolled 215 first-episode patients with schizophrenia (FES), 125 with MDD, and 237 demographically matched healthy controls (HCs). The cognitive performance of all participants was evaluated using a battery of neurocognitive tests. The graphical LASSO model was trained with a one-vs-one scenario to learn the conditionally independent structure of the neurocognitive features of each group. Participants in the holdout dataset were classified into the group with the highest likelihood. A partial correlation matrix was transformed from the graphical model to further explore the neurocognitive graph for each group. The classification approach identified the diagnostic class for individuals with an average accuracy of 73.41% for FES vs HC, 67.07% for MDD vs HC, and 59.48% for FES vs MDD. Both the neurocognitive graphs for FES and MDD had more connections and higher node centrality than those for HC. The neurocognitive graph for FES was less sparse and had more connections than that for MDD. Thus, neurocognitive graphs based on cognitive features are promising for describing endophenotypes that may discriminate schizophrenia from depression.
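The step from the fitted graphical model to the partial correlation matrix mentioned above is a standard transform of the precision (inverse covariance) matrix Θ: ρ_ij = −Θ_ij / √(Θ_ii Θ_jj). A sketch:

```python
import math

def partial_correlations(precision):
    """Turn a precision matrix (e.g. a graphical LASSO estimate) into
    partial correlations. Zeros in the precision matrix stay zeros,
    i.e. conditionally independent feature pairs get no graph edge."""
    n = len(precision)
    pc = [[1.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                pc[i][j] = -precision[i][j] / math.sqrt(precision[i][i] * precision[j][j])
    return pc

pc = partial_correlations([[2.0, -1.0], [-1.0, 2.0]])
```

This is why graphical LASSO sparsity translates directly into graph structure: each nonzero off-diagonal entry becomes an edge weighted by the partial correlation between two cognitive features, conditioned on all the others.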
Abstract: In this paper, we propose enhancements to the Beetle Antennae Search (BAS) algorithm, called BAS-ADAM, to smoothen the convergence behavior and avoid trapping in local minima for a highly non-convex objective function. We achieve this by adaptively adjusting the step size in each iteration using the adaptive moment estimation (ADAM) update rule. The proposed algorithm also increases the convergence rate in a narrow valley. A key feature of the ADAM update rule is the ability to adjust the step size for each dimension separately instead of using the same step size for all dimensions. Since ADAM is traditionally used with gradient-based optimization algorithms, we first propose a gradient estimation model that does not require differentiating the objective function. Resultantly, the algorithm demonstrates excellent performance and a fast convergence rate in searching for the optimum of non-convex functions. The efficiency of the proposed algorithm was tested on three different benchmark problems, including the training of a high-dimensional neural network. The performance is compared with the particle swarm optimizer (PSO) and the original BAS algorithm.
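The ADAM update rule borrowed here maintains bias-corrected first and second moment estimates of the gradient and scales each dimension's step separately. A sketch of one update, applied to an externally supplied gradient estimate, consistent with the abstract's point that the objective is never differentiated directly (default values are illustrative):

```python
import math

def adam_step(grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM iteration: update moment estimates m and v, bias-correct
    them, and return a per-dimension step plus the new moments."""
    m = [b1 * mi + (1 - b1) * g for mi, g in zip(m, grad)]
    v = [b2 * vi + (1 - b2) * g * g for vi, g in zip(v, grad)]
    step = []
    for mi, vi in zip(m, v):
        m_hat = mi / (1 - b1 ** t)       # bias-corrected first moment
        v_hat = vi / (1 - b2 ** t)       # bias-corrected second moment
        step.append(lr * m_hat / (math.sqrt(v_hat) + eps))
    return step, m, v

# On the first step, both dimensions move by roughly lr regardless of
# gradient magnitude (1.0 vs -4.0): the per-dimension normalization at work.
step, m, v = adam_step([1.0, -4.0], [0.0, 0.0], [0.0, 0.0], t=1)
```

This per-dimension normalization is what smooths progress through narrow valleys, where a single global step size would either overshoot the steep direction or crawl along the shallow one.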
Funding: Partially supported by the National Key Research and Development Program of China (2018YFB1700200), the National Natural Science Foundation of China (61972389, 61903356, 61803368, U1908212), the Youth Innovation Promotion Association of the Chinese Academy of Sciences, the National Science and Technology Major Project (2017ZX02101007-004), the Liaoning Provincial Natural Science Foundation of China (2020-MS-034, 2019-YQ-09), and the China Postdoctoral Science Foundation (2019M661156).
Abstract: Time-sensitive networks (TSNs) support not only traditional best-effort communications but also deterministic communications, which send each packet at a deterministic time so that the data transmissions of networked control systems can be precisely scheduled to guarantee hard real-time constraints. No-wait scheduling is suitable for such TSNs and generates the schedules of deterministic communications with minimal network resources, so that all of the remaining resources can be used to improve the throughput of best-effort communications. However, due to inappropriate message fragmentation, the real-time performance of no-wait scheduling algorithms is reduced. Therefore, in this paper, joint algorithms for message fragmentation and no-wait scheduling are proposed. First, a specification for the joint problem based on optimization modulo theories is proposed so that off-the-shelf solvers can be used to find optimal solutions. Second, to improve the scalability of our algorithm, the worst-case delay of messages is analyzed, and then, based on this analysis, a heuristic algorithm is proposed to construct low-delay schedules. Finally, we conduct extensive test cases to evaluate our proposed algorithms. The evaluation results indicate that, compared to existing algorithms, the proposed joint algorithm improves schedulability by up to 50%.
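The defining property of no-wait scheduling is that a frame's hops are placed back to back, with no queuing between links. A toy sketch of computing per-hop transmission offsets under that constraint (the per-hop delay values and names are illustrative, not from the paper):

```python
def no_wait_offsets(hop_delays, start=0):
    """Schedule each hop of a message to start exactly when the previous
    hop finishes, so the frame never waits in a queue along its path."""
    offsets, t = [], start
    for d in hop_delays:
        offsets.append(t)
        t += d
    return offsets, t   # per-hop start times and end-to-end finish time

# A 3-hop path with link delays 2, 3, 1, released at time 10.
offsets, finish = no_wait_offsets([2, 3, 1], start=10)
```

The hard part the paper addresses is choosing release times and fragment sizes for many such messages so their back-to-back windows never collide on a shared link, which is what the optimization-modulo-theories formulation encodes.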
Funding: Supported by the NNSF of China (60464001) and the Guangxi Science Foundation (0575092).
Abstract: In this paper, the generalized Dodd-Bullough-Mikhailov equation is studied. The existence of periodic wave and unbounded wave solutions is proved by using the method of bifurcation theory of dynamical systems. Under different parametric conditions, various sufficient conditions to guarantee the existence of the above solutions are given. Some exact explicit parametric representations of the above travelling wave solutions are obtained.
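For concreteness, the travelling wave reduction behind such a bifurcation analysis can be sketched as follows, taking the standard form of the Dodd-Bullough-Mikhailov equation (the paper's generalized form may carry extra coefficients):

```latex
% Dodd-Bullough-Mikhailov equation (standard form)
u_{xt} + e^{u} + e^{-2u} = 0.
% Travelling wave ansatz u(x,t) = \varphi(\xi), \ \xi = x - ct,
% gives u_{xt} = -c\,\varphi''(\xi), hence
-c\,\varphi'' + e^{\varphi} + e^{-2\varphi} = 0,
\qquad \varphi' = y, \quad y' = \frac{e^{\varphi} + e^{-2\varphi}}{c}.
```

Phase portraits of this planar system under different parameter conditions are what yield the periodic and unbounded solutions the abstract refers to.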
Funding: Supported by the National Natural Science Foundation of China (Nos. 11905018 and 11875328).
Abstract: This work is an attempt to improve the Bayesian neural network (BNN) for studying photoneutron yield cross sections as a function of the charge number Z, mass number A, and incident energy ε. The BNN was improved in terms of three aspects: numerical parameters, input layer, and network structure. First, by minimizing the deviations between the predictions and data, the numerical parameters, including the hidden layer number, hidden node number, and activation function, were selected. It was found that the BNN with three hidden layers, 10 hidden nodes, and a sigmoid activation function provided the smallest deviations. Second, based on known knowledge, such as the isospin dependence and shape effect, the optimal ground-state properties were selected as input neurons. Third, the Lorentzian function was applied to map the hidden nodes to the output cross sections, and the empirical formula for the Lorentzian parameters was applied to link some of the input nodes to the output cross sections. It was found that the last two aspects improved the predictions and avoided overfitting, especially for axially deformed nuclei.
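The Lorentzian mapping referred to above is typically the giant-dipole-resonance line shape σ(ε) = σ_r (εΓ_r)² / ((ε² − E_r²)² + (εΓ_r)²). A sketch (the parameter values below are illustrative, not taken from the paper):

```python
def lorentzian(eps, sigma_r, e_r, gamma_r):
    """Lorentzian line shape of the kind used to map hidden nodes to
    photoneutron cross sections: peaks at eps = e_r with height sigma_r,
    and gamma_r sets the resonance width."""
    num = sigma_r * (eps * gamma_r) ** 2
    den = (eps ** 2 - e_r ** 2) ** 2 + (eps * gamma_r) ** 2
    return num / den

# Illustrative parameters: peak cross section 300, resonance energy 15,
# width 5 (units arbitrary). Evaluated at the resonance energy.
peak = lorentzian(15.0, 300.0, 15.0, 5.0)
```

Building this physically motivated functional form into the output layer constrains the network to resonance-shaped predictions, which is how the approach curbs overfitting away from the measured energy range.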