Artificial intelligence (AI) is reshaping financial systems and services, as intelligent AI agents increasingly form the foundation of autonomous, goal-driven systems capable of reasoning, learning, and action. This review synthesizes recent research and developments in the application of AI agents across core financial domains. Specifically, it covers the deployment of agent-based AI in algorithmic trading, fraud detection, credit risk assessment, robo-advisory, and regulatory compliance (RegTech). The review focuses on advanced agent-based methodologies, including reinforcement learning, multi-agent systems, and autonomous decision-making frameworks, particularly those leveraging large language models (LLMs), contrasting these with traditional AI or purely statistical models. Our primary goals are to consolidate current knowledge, identify significant trends and architectural approaches, review the practical efficiency and impact of current applications, and delineate key challenges and promising future research directions. The increasing sophistication of AI agents offers unprecedented opportunities for innovation in finance, yet presents complex technical, ethical, and regulatory challenges that demand careful consideration and proactive strategies. This review aims to provide a comprehensive understanding of this rapidly evolving landscape, highlighting the role of agent-based AI in the ongoing transformation of the financial industry, and is intended to serve financial institutions, regulators, investors, analysts, researchers, and other key stakeholders in the financial ecosystem.
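To make the agent-based trading loop concrete, the sketch below is a purely illustrative tabular Q-learning trader on a synthetic price series; the state discretization, reward, and hyperparameters are assumptions for exposition and are not drawn from the review.

```python
import numpy as np

# Illustrative sketch only: a tabular Q-learning trading agent on a
# discretized price-trend state. All design choices here are assumptions.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 1000)) + 100  # synthetic price path

n_states, actions = 3, [-1, 0, 1]  # trend down/flat/up; short/hold/long
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def state(t):
    d = prices[t] - prices[t - 1]
    return 0 if d < -0.5 else (2 if d > 0.5 else 1)

for t in range(1, len(prices) - 1):
    s = state(t)
    a = int(rng.integers(len(actions))) if rng.random() < eps else int(Q[s].argmax())
    reward = actions[a] * (prices[t + 1] - prices[t])  # P&L of the held position
    s_next = state(t + 1)
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

print("Learned action per trend state:", [actions[int(i)] for i in Q.argmax(axis=1)])
```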
Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance in other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
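Since HCL Net itself is described only at a high level, the following sketch shows the kind of ResNet50V2-style pre-activation residual block the abstract refers to, assembled into a minimal three-class classifier; all layer sizes and the grayscale input shape are placeholder assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def preact_residual_block(x, filters, stride=1):
    """Generic ResNet50V2-style pre-activation bottleneck block (a sketch;
    HCL Net's exact block layout and tuning are not reproduced here)."""
    y = layers.BatchNormalization()(x)
    y = layers.ReLU()(y)
    shortcut = x
    if stride != 1 or x.shape[-1] != 4 * filters:
        shortcut = layers.Conv2D(4 * filters, 1, strides=stride)(y)
    y = layers.Conv2D(filters, 1, use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, strides=stride, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(4 * filters, 1)(y)
    return layers.Add()([y, shortcut])

inputs = tf.keras.Input(shape=(224, 224, 1))        # grayscale CT slice
x = layers.Conv2D(64, 7, strides=2, padding="same")(inputs)
x = preact_residual_block(x, 64)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(3, activation="softmax")(x)  # normal / HCL / GGO
model = tf.keras.Model(inputs, outputs)
```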
The rapid growth in available network bandwidth has directly contributed to an exponential increase in mobile data traffic, creating significant challenges for network energy consumption. With the extraordinary growth of mobile communications, data traffic has dramatically expanded, leading to massive grid power consumption and high operating expenditure (OPEX). However, the majority of current network designs struggle to efficiently manage massive amounts of data using little power, which degrades energy-efficiency performance. Therefore, an efficient mechanism is needed to reduce power consumption when processing large amounts of data in network data centers. Utilizing renewable energy sources to power the Cloud Radio Access Network (C-RAN) greatly reduces the need to purchase energy from the utility grid. In this paper, we propose a bandwidth-aware, hybrid energy-powered C-RAN that enhances throughput and energy efficiency (EE) by lowering grid usage. This paper examines the energy efficiency, spectral efficiency (SE), and average on-grid energy consumption, addressing the major challenges posed by the temporal and spatial nature of traffic and renewable energy generation across various network setups. To assess the effectiveness of the suggested network under varying transmission bandwidth, a comprehensive simulation has been conducted. The numerical findings support the efficacy of the suggested approach.
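The metrics named here can be sketched under textbook assumptions (Shannon-capacity throughput, EE in bits per joule, on-grid draw as site power minus renewable supply); the paper's exact system model may differ.

```python
import numpy as np

# A minimal sketch of EE, SE, and on-grid consumption under standard
# definitions; all numeric inputs below are illustrative assumptions.
def shannon_throughput(bandwidth_hz, snr_linear):
    return bandwidth_hz * np.log2(1.0 + snr_linear)          # bit/s

def metrics(bandwidth_hz, snr_linear, total_power_w, renewable_w):
    r = shannon_throughput(bandwidth_hz, snr_linear)
    ee = r / total_power_w                                   # bit/Joule
    se = r / bandwidth_hz                                    # bit/s/Hz
    on_grid_w = max(0.0, total_power_w - renewable_w)        # grid draw after renewables
    return ee, se, on_grid_w

# Example: 20 MHz carrier, 15 dB SNR, 1.2 kW site power, 800 W renewable supply.
ee, se, grid = metrics(20e6, 10 ** (15 / 10), 1200.0, 800.0)
print(f"EE={ee:.1f} bit/J, SE={se:.2f} bit/s/Hz, on-grid={grid:.0f} W")
```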
Osteoarthritis (OA) is a degenerative joint disease with significant clinical and societal impact. Traditional diagnostic methods, including subjective clinical assessments and imaging techniques such as X-rays and MRIs, are often limited in their ability to detect early-stage OA or capture subtle joint changes. These limitations result in delayed diagnoses and inconsistent outcomes. Additionally, the analysis of omics data is challenged by the complexity and high dimensionality of biological datasets, making it difficult to identify key molecular mechanisms and biomarkers. Recent advancements in artificial intelligence (AI) offer transformative potential to address these challenges. This review systematically explores the integration of AI into OA research, focusing on applications such as AI-driven early screening and risk prediction from electronic health records (EHR), automated grading and morphological analysis of imaging data, and biomarker discovery through multi-omics integration. By consolidating progress across clinical, imaging, and omics domains, this review provides a comprehensive perspective on how AI is reshaping OA research. The findings have the potential to drive innovations in personalized medicine and targeted interventions, addressing longstanding challenges in OA diagnosis and management.
Results of a study of the statistical reasoning that six high school teachers developed in a computer environment are presented in this article. A sequence of three activities supported by the software Fathom was presented to the teachers in a course to investigate the reasoning that teachers develop about data analysis, particularly about the concept of distribution, which involves important concepts such as averages, variability, and graphical representations. The activities were designed so that the teachers first analyzed quantitative variables separately, and later analyzed a qualitative variable versus a quantitative variable, with the objective of comparing distributions using concepts such as averages, variability, shape, and outliers. The instructions in each activity directed the teachers to use all the resources of the software necessary to carry out the complete analysis and to respond to certain questions intended to capture the types of representations they used to answer. The results indicate that, despite the abundance of representations provided by the software, teachers focus on the calculation of averages to describe and compare distributions, rather than on important properties of the data such as variability, shape, and outliers. Many teachers were able to build interesting graphs reflecting important properties of the data but could not use them to support data analysis. Hence, it is necessary to extend teachers' understanding of data analysis so they can take advantage of the cognitive potential that computer tools offer.
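Fathom is an interactive tool, so as a stand-in the following pandas sketch reproduces the contrast the activities aimed at: two synthetic groups with equal means but very different variability, where comparing averages alone would mislead.

```python
import numpy as np
import pandas as pd

# Synthetic data: same mean, different spread; describing groups by
# variability, shape, and outlier fences, not just averages.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["A", "B"], 100),
    "score": np.concatenate([rng.normal(70, 5, 100), rng.normal(70, 15, 100)]),
})

summary = df.groupby("group")["score"].agg(["mean", "std", "skew"])
q1 = df.groupby("group")["score"].quantile(0.25)
q3 = df.groupby("group")["score"].quantile(0.75)
print(summary)                                   # near-equal means, unequal spread
print("IQR:", (q3 - q1).round(2).to_dict())      # 1.5*IQR fences flag outliers
```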
Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles, as well as radiological imaging and histopathological slides, makes this approach increasingly important because, when examined separately, these data sources offer only a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, such as (1) radiology, where automated report generation and lesion detection are facilitated by image-text integration; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where molecular subtypes and latent biomarkers are revealed through cross-modal learning. We provide an overview of representative research, methodological advancements, and clinical consequences for each domain. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of maintaining significant cross-modal correlations during fusion. These problems impede interpretability, which is crucial for clinical trust and use, in addition to performance and generalizability. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of cooperation for federated multimodal learning. Our goal is to provide researchers and clinicians with a concise overview of the field's present state, enduring constraints, and exciting directions for further research through this review.
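As a minimal illustration of the fusion paradigm the review surveys, the sketch below joins a CNN imaging branch and a dense omics branch by concatenation; the shapes, layer sizes, and binary-diagnosis head are placeholder assumptions, not any cited model.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Intermediate (feature-level) fusion: one branch per modality, joined
# by concatenation before a shared classification head.
img_in = tf.keras.Input(shape=(128, 128, 3), name="image")
x = layers.Conv2D(16, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

omics_in = tf.keras.Input(shape=(500,), name="omics")  # e.g., an expression profile
y = layers.Dense(64, activation="relu")(omics_in)

fused = layers.Concatenate()([x, y])                   # cross-modal fusion point
z = layers.Dense(32, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid", name="diagnosis")(z)

model = tf.keras.Model([img_in, omics_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```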
Over the past decade, artificial intelligence (AI) has evolved at an unprecedented pace, transforming technology, industry, and society. From diagnosing diseases with remarkable accuracy to powering self-driving cars and revolutionizing personalized learning, AI is reshaping our world in ways once thought impossible. Spanning machine learning, deep learning, natural language processing, robotics, and generative systems such as ChatGPT, AI continues to push the boundaries of innovation. As AI continues to advance, it is vital to have a platform that not only disseminates cutting-edge research innovations but also fosters broad discussions on its societal impact, ethical considerations, and interdisciplinary applications. With this vision in mind, we proudly introduce Artificial Intelligence Science and Engineering (AISE), a journal dedicated to nurturing the next wave of AI innovation and engineering applications. Our mission is to provide a premier outlet where researchers can share high-quality, impactful studies and collaborate to advance AI across academia, industry, and beyond.
Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically, a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including the Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV), which contain critical care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared it with the Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively, representing improvements of approximately 92% over LL-CNN and 91% over U-Net. These results show that the DRes-CNN-based imputation method outperforms current state-of-the-art models and establish DRes-CNN as a reliable solution for addressing missing data.
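The evaluation protocol implied here can be sketched as follows: mask known entries, impute, and score RMSE only on the masked positions. A simple column-mean imputer stands in for DRes-CNN, whose architecture is not reproduced.

```python
import numpy as np

# Sketch of masked-entry RMSE evaluation for imputation methods.
rng = np.random.default_rng(42)
X_true = rng.normal(size=(200, 8))
mask = rng.random(X_true.shape) < 0.2          # 20% simulated missingness
X_obs = np.where(mask, np.nan, X_true)

col_means = np.nanmean(X_obs, axis=0)          # baseline stand-in imputer
X_imp = np.where(mask, col_means, X_obs)

rmse = np.sqrt(np.mean((X_imp[mask] - X_true[mask]) ** 2))
print(f"baseline RMSE on held-out entries: {rmse:.5f}")
```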
Hepatitis is an infection that affects the liver through contaminated foods or blood transfusions, and it has many types, ranging from mild to serious. Hepatitis is diagnosed through many blood tests and factors; Artificial Intelligence (AI) techniques have played an important role in early diagnosis and help physicians make decisions. This study evaluated the performance of Machine Learning (ML) algorithms on the hepatitis dataset. The dataset contains missing values, which were processed, and outliers were removed. The dataset was balanced using the Synthetic Minority Over-sampling Technique (SMOTE). The features of the dataset were processed in two ways: first, the Recursive Feature Elimination (RFE) algorithm was applied to rank the contribution of each feature to the diagnosis of hepatitis, followed by selection of important features using the t-distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA) algorithms. Second, the SelectKBest function was applied to score each attribute, again followed by the t-SNE and PCA algorithms. Finally, the classification algorithms K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Artificial Neural Network (ANN), Decision Tree (DT), and Random Forest (RF) were fed the dataset after the features had been processed with the different methods (RFE with t-SNE and PCA, and SelectKBest with t-SNE and PCA). All algorithms yielded promising results for diagnosing the hepatitis dataset. The RF with the RFE and PCA methods achieved accuracy, precision, recall, and AUC of 97.18%, 96.72%, 97.29%, and 94.2%, respectively, during the training phase. During the testing phase, it reached accuracy, precision, recall, and AUC of 96.31%, 95.23%, 97.11%, and 92.67%, respectively.
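The best-performing route described (SMOTE, then feature selection, then PCA, then RF) maps naturally onto an imblearn/scikit-learn pipeline; the sketch below uses synthetic data in place of the hepatitis dataset, and its estimator choices and hyperparameters are assumptions.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for the hepatitis dataset.
X, y = make_classification(n_samples=400, n_features=19, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),                                   # rebalance classes
    ("rfe", RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)),
    ("pca", PCA(n_components=5)),                                       # compact projection
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
pipe.fit(X_tr, y_tr)
print(classification_report(y_te, pipe.predict(X_te)))
```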
Background: In recent years, the demand for interactive photorealistic three-dimensional (3D) environments has increased in various fields, including architecture, engineering, and entertainment. However, achieving a balance between the quality and efficiency of high-performance 3D applications and virtual reality (VR) remains challenging. Methods: This study addresses this issue by revisiting and extending view interpolation for image-based rendering (IBR), which enables the exploration of spacious open environments in 3D and VR. We introduce multimorphing, a novel rendering method based on a spatial data structure of 2D image patches, called the image graph. Using this approach, novel views can be rendered with up to six degrees of freedom using only a sparse set of views. The rendering process requires neither 3D reconstruction of the geometry nor per-pixel depth information; all relevant data for the output are extracted from the local morphing cells of the image graph. The detection of parallax image regions during preprocessing reduces rendering artifacts by extrapolating image patches from adjacent cells in real time. In addition, a GPU-based solution is presented to resolve exposure inconsistencies within a dataset, enabling seamless transitions of brightness when moving between areas with varying light intensities. Results: Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high, "VR-compatible" frame rates, even on mid-range and legacy hardware. While achieving adequate visual quality even for sparse datasets, it outperforms other IBR and current neural rendering approaches. Conclusions: Using the correspondence-based decomposition of input images into morphing cells of 2D image patches, multidimensional image morphing provides high-performance novel-view generation, supporting open 3D and VR environments. Nevertheless, the handling of morphing artifacts in the parallax image regions remains a topic for future research.
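As a heavily simplified stand-in for one morphing cell, the sketch below linearly interpolates the position and appearance of a corresponding patch pair between two source views; the actual image graph, parallax handling, and six-degree-of-freedom support are far richer than this.

```python
import numpy as np

# Toy morph of a single corresponding patch between two registered views:
# geometry and appearance are blended jointly at parameter t in [0, 1].
def morph_patch(patch_a, patch_b, pos_a, pos_b, t):
    pos = (1 - t) * np.asarray(pos_a) + t * np.asarray(pos_b)   # geometric blend
    appearance = (1 - t) * patch_a + t * patch_b                # photometric blend
    return pos, appearance

patch_a = np.random.rand(16, 16, 3)   # placeholder patch from view A
patch_b = np.random.rand(16, 16, 3)   # corresponding patch from view B
pos, patch = morph_patch(patch_a, patch_b, (40, 60), (52, 60), t=0.25)
print("novel-view patch anchor:", pos)
```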
Multiple Sclerosis (MS) poses significant health risks. Patients may face neurodegeneration, mobility issues, cognitive decline, and a reduced quality of life. Manual diagnosis by neurologists is prone to limitations, making automated classification using Artificial Intelligence (AI) techniques crucial for early detection and for preventing the progression of MS to advanced stages. This study developed hybrid systems integrating XGBoost (eXtreme Gradient Boosting) with multi-CNN (Convolutional Neural Network) features, based on the Ant Colony Optimization (ACO) and Maximum Entropy Score-based Selection (MESbS) algorithms, for early classification of MRI (Magnetic Resonance Imaging) images in multi-class and binary-class MS datasets. All hybrid systems started by enhancing MRI images using a fusion of a Gaussian filter and Contrast-Limited Adaptive Histogram Equalization (CLAHE). Then, the Gradient Vector Flow (GVF) algorithm was applied to select white matter (regions of interest) within the brain and segment it from the surrounding brain structures. These regions of interest were processed by CNN models (ResNet101, DenseNet201, and MobileNet) to extract deep feature maps, which were then combined into fused feature vectors of multi-CNN model combinations (ResNet101-DenseNet201, DenseNet201-MobileNet, ResNet101-MobileNet, and ResNet101-DenseNet201-MobileNet). The multi-CNN features underwent dimensionality reduction using the ACO and MESbS algorithms to discard unimportant features and retain important ones. The XGBoost classifier employed the resulting feature vectors for classification. All developed hybrid systems displayed promising outcomes. For multi-class classification, the XGBoost model using ResNet101-DenseNet201-MobileNet features selected by ACO attained 99.4% accuracy, 99.45% precision, and 99.75% specificity, surpassing prior studies (93.76% accuracy). It reached 99.6% accuracy, 99.65% precision, and 99.55% specificity in binary-class classification. These results demonstrate the effectiveness of multi-CNN fusion with feature selection in improving MS classification accuracy.
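The fusion-and-classify stage can be sketched as below: pooled feature vectors from several backbones are concatenated, pruned by a selection mask, and passed to XGBoost. Random arrays stand in for real CNN features, and a simple variance-based mask stands in for the ACO/MESbS selectors.

```python
import numpy as np
from xgboost import XGBClassifier

# Placeholder feature vectors; in the described system these come from
# pooled ResNet101 / DenseNet201 / MobileNet feature maps.
rng = np.random.default_rng(0)
n = 300
f_resnet = rng.normal(size=(n, 2048))
f_dense = rng.normal(size=(n, 1920))
f_mobile = rng.normal(size=(n, 1024))
y = rng.integers(0, 2, n)

fused = np.hstack([f_resnet, f_dense, f_mobile])   # multi-CNN feature fusion
keep = np.argsort(fused.var(axis=0))[-512:]        # stand-in feature selection
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(fused[:, keep], y)
print("train accuracy:", clf.score(fused[:, keep], y))
```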
This review article provides a comprehensive analysis of the latest advancements and persistent challenges in Software-Defined Wide Area Networks (SD-WANs), with a particular emphasis on the multi-objective Controller Placement Problem (CPP). As SD-WAN technology continues to gain prominence for its capacity to offer flexible and efficient network management, the task of optimally placing controllers, which are responsible for orchestrating and managing network traffic, remains a critical yet complex challenge. This review delves into recent innovations in multi-objective controller placement strategies, including clustering techniques, heuristic-based approaches, and the integration of machine learning and deep learning models. Each methodology is critically evaluated in terms of its ability to minimize network latency, enhance fault tolerance, and improve overall network performance. Furthermore, this paper discusses the inherent limitations and challenges associated with these techniques, providing a critical evaluation of their current utility and outlining potential avenues for future research. By offering a thorough overview of state-of-the-art approaches to multi-objective controller placement in SD-WANs, this review aims to inform ongoing advancements and highlight emerging research opportunities in this evolving field.
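As a toy single-objective baseline for the CPP, the following farthest-point heuristic places k controllers to limit worst-case switch-to-controller distance (a latency proxy); real multi-objective formulations add fault-tolerance, load, and inter-controller-latency terms.

```python
import numpy as np

# Greedy farthest-point placement of k controllers among switch sites.
rng = np.random.default_rng(3)
switches = rng.random((40, 2)) * 100   # switch coordinates as a topology proxy
k = 3

placed = [int(rng.integers(len(switches)))]
while len(placed) < k:
    # Distance of each switch to its nearest already-placed controller.
    d = np.min(np.linalg.norm(switches[:, None] - switches[placed][None], axis=2), axis=1)
    placed.append(int(d.argmax()))     # place next controller at the worst-served switch

dist = np.linalg.norm(switches[:, None] - switches[placed][None], axis=2)
worst = float(np.max(np.min(dist, axis=1)))
print("controllers at switches", placed, "worst-case latency proxy:", round(worst, 1))
```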
Sediment quality in global estuaries has been reported by assessing the degree of anthropogenic input and the corresponding ecological risks. This research aimed to categorize marine pollution at the mouth of the Chanthaburi River, on the Eastern Gulf of Thailand, by examining the interactions among heavy metals (Pb, Cd, Cu, Zn) and microplastics (MPs) in surface marine sediments. Marine pollution severity was classified using the Geo-accumulation Index (Igeo), Sediment Enrichment Factor (SEF), and Pollution Load Index (PLI). The spatial distribution of pollutants and geostatistical covariance were examined via a Geographic Information System (GIS) and Principal Component Analysis (PCA). The average concentrations determined in sediment samples were as follows: Pb, 0.369 ± 0.022 ppm; Cd, 0.0042 ± 0.0004 ppm; Cu, 5.424 ± 0.007 ppm; Zn, 33.756 ± 0.182 ppm; and microplastics, 1.36 ± 0.06 particles/g. All metal levels were below the WASV, CCV, and TRV reference thresholds. Igeo and SEF indicated that Zn was moderately accumulated with minor enrichment, while the other metals were unpolluted. PCA explained 90.85% of the variance, mainly reflecting Zn accumulation in downstream sites. We also found only a weak correlation between heavy metals and MPs, which may be caused by distinct sources, physicochemical properties, and potential biological synergistic effects that remain unclear. A key originality of this study lies in the integration of GIS-based spatial interpolation with the PLI data to visualize and distinguish site-specific accumulation zones. The study did not assess biological uptake or biomarkers, limiting insight into actual bioavailability and toxicity to marine species. These findings provide spatially explicit evidence for targeted estuarine management and highlight the need for future studies on bioavailability and ecological risks.
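The three indices follow conventional definitions, sketched below in worked form; the background and reference values used are placeholders, not the study's reference data.

```python
import numpy as np

def igeo(c_sample, c_background):
    """Geo-accumulation Index: Igeo = log2(Cn / (1.5 * Bn)).
    Mueller scale: Igeo <= 0 unpolluted; 1 < Igeo <= 2 moderately polluted."""
    return np.log2(c_sample / (1.5 * c_background))

def enrichment_factor(c_sample, ref_sample, c_background, ref_background):
    """Enrichment relative to a reference element:
    EF = (Cn / Cref)_sample / (Cn / Cref)_background."""
    return (c_sample / ref_sample) / (c_background / ref_background)

def pli(contamination_factors):
    """Pollution Load Index: geometric mean of CF = Cn / Bn."""
    cf = np.asarray(contamination_factors, dtype=float)
    return cf.prod() ** (1.0 / cf.size)

# Example with the reported mean Zn concentration and assumed backgrounds.
print(f"Igeo(Zn) = {igeo(33.756, 95.0):.2f}")
print(f"EF(Zn)   = {enrichment_factor(33.756, 30000.0, 95.0, 47000.0):.2f}")  # Al-normalized, assumed
print(f"PLI      = {pli([0.40, 0.10, 0.50, 0.36]):.2f}")
```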
Together, the heart and lung sounds comprise the thoracic cavity sound, which provides informative details that reflect patient conditions, particularly for heart failure (HF) patients. However, due to the limitations of human hearing, only a limited amount of information can be auscultated from thoracic cavity sounds. With the aid of artificial intelligence and machine learning, these features can be analyzed to aid in the care of HF patients. Machine learning of thoracic cavity sound data involves sound data pre-processing by denoising, resampling, segmentation, and normalization. Afterwards, the most crucial step is feature extraction and selection, where relevant features are selected to train the model. The next step is classification and model performance evaluation. This review summarizes the currently available studies that utilized different machine learning models, different feature extraction and selection methods, and different classifiers to generate the desired output. Most studies have analyzed the heart sound component of thoracic cavity sound to distinguish between normal subjects and HF patients. Additionally, some studies have aimed to classify HF patients based on thoracic cavity sounds in their entirety, while others have focused on risk stratification and prognostic evaluation of HF patients using thoracic cavity sounds. Overall, the results from these studies demonstrate a promisingly high level of accuracy. Therefore, future prospective studies should incorporate these machine learning models to expedite their integration into daily clinical practice for managing HF patients.
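The pre-processing and feature steps listed here can be sketched with librosa on a synthetic waveform standing in for a thoracic recording; MFCC statistics are one common feature choice, and individual studies differ.

```python
import numpy as np
import librosa

# Synthetic low-frequency tone plus noise standing in for a heart-sound recording.
sr_in, sr_out = 8000, 2000                     # heart sounds live well below 1 kHz
t = np.linspace(0, 5, 5 * sr_in, endpoint=False)
wave = 0.6 * np.sin(2 * np.pi * 40 * t) + 0.05 * np.random.randn(t.size)

wave = librosa.resample(wave, orig_sr=sr_in, target_sr=sr_out)   # resampling
wave = wave / np.max(np.abs(wave))                               # normalization
frames = librosa.util.frame(wave, frame_length=sr_out, hop_length=sr_out // 2)  # segmentation

mfcc = librosa.feature.mfcc(y=wave, sr=sr_out, n_mfcc=13)        # feature extraction
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]) # per-recording vector
print("segments:", frames.shape[1], "feature vector length:", features.size)
```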
Parkinson's disease (PD) is a progressive neurodegenerative disorder characterized by tremors, rigidity, and decreased movement. PD poses risks to individuals' lives and independence. Early detection of PD is essential because it allows timely intervention, which can slow disease progression and improve outcomes. Manual diagnosis of PD is problematic because it is difficult to capture the subtle patterns and changes that help diagnose PD. In addition, subjectivity and the shortage of doctors relative to the number of patients constitute an obstacle to early diagnosis. Artificial intelligence (AI) techniques, especially deep and automated learning models, provide promising solutions to address the deficiencies of manual diagnosis. This study develops robust systems for PD diagnosis by analyzing handwritten spiral and wave graphical images. Handwritten graphic images from the PD dataset are enhanced using two overlapping filters, an average filter and a Laplacian filter, to improve image quality and highlight essential features. The enhanced images are segmented to isolate regions of interest (ROIs) from the rest of the image using a gradient vector flow (GVF) algorithm, which ensures that features are extracted only from relevant regions. The segmented ROIs are fed into convolutional neural network (CNN) models, namely DenseNet169, MobileNet, and VGG16, to extract fine and deep feature maps that capture complex patterns and representations relevant to PD diagnosis. The fine and deep feature maps extracted from the individual CNN models are combined into fused feature vectors for the DenseNet169-MobileNet, MobileNet-VGG16, DenseNet169-VGG16, and DenseNet169-MobileNet-VGG16 models. This fusion technique aims to combine complementary and robust features from several models, which improves the extracted features. Two feature selection algorithms are applied to remove redundancy and weak correlations within the combined feature set: Ant Colony Optimization (ACO) and Maximum Entropy Score-based Selection (MESbS). These algorithms identify and retain the most strongly correlated features while eliminating redundant and weakly correlated ones, optimizing the features to improve system performance. The fused and enhanced feature vectors are fed into two powerful classifiers, XGBoost and random forest (RF), for accurate classification and differentiation between individuals with PD and healthy controls. The proposed hybrid systems show superior performance; the RF classifier, using the combined features from the DenseNet169-MobileNet-VGG16 models with the ACO feature selection method, achieved outstanding results: an area under the curve (AUC) of 99%, sensitivity of 99.6%, accuracy of 99.3%, precision of 99.35%, and specificity of 99.65%.
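The described enhancement step might look like the following OpenCV sketch: an average filter to suppress noise, then a Laplacian edge response combined by unsharp-style subtraction; the kernel sizes, weight, and file name are assumptions.

```python
import cv2
import numpy as np

# Average-then-Laplacian enhancement of a handwriting image (a sketch of
# the described step, with assumed parameters).
img = cv2.imread("spiral_drawing.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
if img is None:
    img = (np.random.rand(256, 256) * 255).astype(np.uint8)    # fallback demo input

smoothed = cv2.blur(img, (5, 5))                       # average (mean) filter
lap = cv2.Laplacian(smoothed, cv2.CV_64F, ksize=3)     # edge response
enhanced = np.clip(smoothed.astype(np.float64) - 0.7 * lap, 0, 255).astype(np.uint8)

cv2.imwrite("enhanced.png", enhanced)                  # sharpened strokes stand out
```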
As mobile edge computing continues to develop, the demand for resource-intensive applications is steadily increasing, placing a significant strain on edge nodes. These nodes are typically subject to various constraints, such as limited processing capability, scarce energy sources, and erratic availability. Accordingly, these problems require an effective task allocation algorithm to optimize resources while maintaining high system performance and dependability in dynamic environments. This paper proposes an improved Particle Swarm Optimization technique, known as IPSO, for multi-objective optimization in edge computing to overcome these issues. The IPSO algorithm seeks a trade-off between two important objectives: minimizing energy consumption and reducing task execution time. Through global-best-position mutation and dynamic adjustment of the inertia weight, the proposed optimization algorithm can effectively distribute tasks among edge nodes, reducing both task execution time and energy consumption. In comparative assessments of IPSO against benchmark methods such as Energy-aware Double-fitness Particle Swarm Optimization (EADPSO) and ICBA, IPSO provides better results than these algorithms. For the maximum task size, compared with the benchmark methods, IPSO reduces execution time by 17.1% and energy consumption by 31.58%. These results support the conclusion that IPSO is an efficient and scalable technique for task allocation in edge environments, providing peak efficiency while handling scarce resources and variable workloads.
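A compact sketch of the two IPSO ingredients named here, a linearly decreasing inertia weight and occasional mutation of the global best, is given below on a placeholder sphere objective standing in for the delay/energy cost model.

```python
import numpy as np

# PSO with dynamic inertia weight and global-best mutation (illustrative
# parameters; the sphere function stands in for the real cost model).
rng = np.random.default_rng(7)
dim, n_particles, iters = 10, 30, 200
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
cost = lambda x: np.sum(x * x, axis=-1)          # placeholder objective

pbest, pbest_val = pos.copy(), cost(pos)
g_val = float(pbest_val.min())
g = pbest[pbest_val.argmin()].copy()

for it in range(iters):
    w = 0.9 - 0.5 * it / iters                   # dynamic inertia weight
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (g - pos)
    pos += vel
    vals = cost(pos)
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    if pbest_val.min() < g_val:
        g_val = float(pbest_val.min())
        g = pbest[pbest_val.argmin()].copy()
    if rng.random() < 0.1:                       # global-best mutation
        trial = g + rng.normal(0, 0.1, dim)
        if cost(trial) < g_val:
            g, g_val = trial, float(cost(trial))

print("best cost:", g_val)
```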
In this work, we consider an Unmanned Aerial Vehicle (UAV)-aided covert edge computing architecture, where multiple sensors are scattered at certain distances on the ground. Each sensor can perform several computation tasks. In emergency scenarios, the computational capabilities of sensors are often limited, as seen in vehicular networks or Internet of Things (IoT) networks. The UAV can be utilized to undertake parts of the computation tasks, i.e., edge computing. While various studies have advanced the performance of UAV-based edge computing systems, the security of wireless transmission in future 6G networks is becoming increasingly crucial due to its inherent broadcast nature, yet it has not received adequate attention. In this paper, we improve the covert performance of a UAV-aided edge computing system in which parts of the computation tasks of multiple ground sensors are offloaded to the UAV while a warden, Willie, attempts to detect the transmissions. The transmit power of the sensors, their offloading proportions, and the hovering height of the UAV all affect the system's covert performance, and we propose a deep reinforcement learning framework to jointly optimize them. The proposed algorithm minimizes the system's average task processing delay while guaranteeing, under the covertness constraint, that the sensors' transmissions are not detected by Willie. Extensive simulations verify the effectiveness of the proposed algorithm in decreasing the average task processing delay in comparison with other algorithms.
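A minimal partial-offloading delay model of the kind such systems optimize is sketched below (an assumed standard formulation, not the paper's exact one): a fraction rho of each task is uploaded and computed on the UAV while the remainder runs locally.

```python
import numpy as np

# Delay of one task under partial offloading; local and offloaded branches
# proceed in parallel, so the task finishes when the slower branch does.
def task_delay(rho, bits, cycles, rate_bps, f_sensor_hz, f_uav_hz):
    t_local = (1 - rho) * cycles / f_sensor_hz      # local computing
    t_upload = rho * bits / rate_bps                # covert offload link
    t_uav = rho * cycles / f_uav_hz                 # edge computing at the UAV
    return max(t_local, t_upload + t_uav)

# Sweep the offloading proportion for one sensor's task (assumed parameters).
for rho in (0.0, 0.25, 0.5, 0.75, 1.0):
    d = task_delay(rho, bits=2e6, cycles=1e9, rate_bps=5e6,
                   f_sensor_hz=0.5e9, f_uav_hz=4e9)
    print(f"rho={rho:.2f} -> delay {d:.3f} s")
```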
This century's rapid urbanization has disrupted urban governance, sustainability, and resource management. The Internet of Things (IoT) and 5G have the potential to transform smart cities through real-time data processing, enhanced connectivity, and sustainable urban design. This study investigates the potential of 5G connectivity combined with the IoT's hierarchical framework to enhance public service provision, mitigate environmental effects, and optimize urban resource management. The article argues that these technologies can enhance urban operations by tackling scalability, interoperability, and security issues. The research employs case studies from Singapore and Barcelona. It also analyzes AI-driven security systems, 6G networks, and the contributions of IoT and 5G to the advancement of a circular economy. The paper contends that the growth of smart cities necessitates robust policy frameworks to guarantee equitable access, data protection, and ethical considerations. This study integrates prior research with practical experience to address data-informed municipal governance and urban innovation, emphasizing the importance of policy in fostering inclusive and sustainable urban futures.
Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
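The audio pipeline can be sketched end to end: a clip is converted to a log-scaled Mel-spectrogram and passed through a small CNN that outputs a fire probability. The architecture below is a generic stand-in, not the paper's exact configuration.

```python
import numpy as np
import librosa
import tensorflow as tf

# Placeholder 4-second clip; real inputs would come from IoT sound sensors.
sr = 22050
audio = np.random.randn(sr * 4).astype(np.float32)
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
mel_db = librosa.power_to_db(mel, ref=np.max)            # log-scaled spectrogram
x = mel_db[np.newaxis, ..., np.newaxis]                  # (batch, mels, frames, 1)

inp = tf.keras.Input(shape=mel_db.shape + (1,))
h = tf.keras.layers.Conv2D(16, 3, activation="relu")(inp)
h = tf.keras.layers.MaxPooling2D()(h)
h = tf.keras.layers.Conv2D(32, 3, activation="relu")(h)
h = tf.keras.layers.GlobalAveragePooling2D()(h)
out = tf.keras.layers.Dense(1, activation="sigmoid")(h)  # P(fire)
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

print(model.predict(x, verbose=0))                       # untrained demo forward pass
```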
Deep neural networks have achieved excellent classification results on several computer vision benchmarks. This has led to the popularity of machine learning as a service, where trained algorithms are hosted on the cloud and inference can be obtained on real-world data. In most applications, it is important to compress the vision data due to the enormous bandwidth and memory requirements. Video codecs exploit spatial and temporal correlations to achieve high compression ratios, but they are computationally expensive. This work computes the motion fields between consecutive frames to facilitate the efficient classification of videos. However, contrary to the normal practice of reconstructing the full-resolution frames through motion compensation, this work proposes to infer the class label directly from the block-based computed motion fields. Motion fields are a richer and more complex representation of motion vectors, where each motion vector carries magnitude and direction information. This approach has two advantages: the cost of motion compensation and video decoding is avoided, and the dimensions of the input signal are greatly reduced, permitting a shallower classification network. The neural network can be trained using motion vectors in two ways: as complex representations or as magnitude-direction pairs. The proposed work trains a convolutional neural network on the direction and magnitude tensors of the motion fields. Our experimental results show 20× faster convergence during training, reduced overfitting, and accelerated inference on a hand gesture recognition dataset compared to full-resolution and downsampled frames. We validate the proposed methodology on the HGds dataset, achieving a testing accuracy of 99.21%, on the HMDB51 dataset, achieving 82.54% accuracy, and on the UCF101 dataset, achieving 97.13% accuracy, outperforming state-of-the-art methods in computational efficiency.
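The input construction is straightforward to sketch: per-block motion vectors are converted into magnitude and direction channels, yielding a tensor far smaller than decoded frames; the block grid and frame counts below are illustrative.

```python
import numpy as np

# Build the magnitude-direction input tensor from block-based motion vectors
# (random values stand in for vectors parsed from a compressed bitstream).
h_blocks, w_blocks, n_frames = 30, 40, 16
mv = np.random.randn(n_frames, h_blocks, w_blocks, 2)    # (dx, dy) per block

magnitude = np.linalg.norm(mv, axis=-1)                  # speed of motion per block
direction = np.arctan2(mv[..., 1], mv[..., 0])           # angle in radians

x = np.stack([magnitude, direction], axis=-1)            # shallow-CNN input tensor
print("input tensor shape:", x.shape)                    # (16, 30, 40, 2)
# vs. decoded frames of, e.g., (16, 480, 640, 3): hundreds of times fewer values
```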
基金supported by the Ministry of Education and Science of the Republic of North Macedonia through the project“Utilizing AI and National Large Language Models to Advance Macedonian Language Capabilties”。
文摘Artificial intelligence(AI)is reshaping financial systems and services,as intelligent AI agents increasingly form the foundation of autonomous,goal-driven systems capable of reasoning,learning,and action.This review synthesizes recent research and developments in the application of AI agents across core financial domains.Specifically,it covers the deployment of agent-based AI in algorithmic trading,fraud detection,credit risk assessment,roboadvisory,and regulatory compliance(RegTech).The review focuses on advanced agent-based methodologies,including reinforcement learning,multi-agent systems,and autonomous decision-making frameworks,particularly those leveraging large language models(LLMs),contrasting these with traditional AI or purely statistical models.Our primary goals are to consolidate current knowledge,identify significant trends and architectural approaches,review the practical efficiency and impact of current applications,and delineate key challenges and promising future research directions.The increasing sophistication of AI agents offers unprecedented opportunities for innovation in finance,yet presents complex technical,ethical,and regulatory challenges that demand careful consideration and proactive strategies.This review aims to provide a comprehensive understanding of this rapidly evolving landscape,highlighting the role of agent-based AI in the ongoing transformation of the financial industry,and is intended to serve financial institutions,regulators,investors,analysts,researchers,and other key stakeholders in the financial ecosystem.
文摘Honeycombing Lung(HCL)is a chronic lung condition marked by advanced fibrosis,resulting in enlarged air spaces with thick fibrotic walls,which are visible on Computed Tomography(CT)scans.Differentiating between normal lung tissue,honeycombing lungs,and Ground Glass Opacity(GGO)in CT images is often challenging for radiologists and may lead to misinterpretations.Although earlier studies have proposed models to detect and classify HCL,many faced limitations such as high computational demands,lower accuracy,and difficulty distinguishing between HCL and GGO.CT images are highly effective for lung classification due to their high resolution,3D visualization,and sensitivity to tissue density variations.This study introduces Honeycombing Lungs Network(HCL Net),a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches.HCL Net incorporates additional residual blocks,refined preprocessing techniques,and selective parameter tuning to improve classification performance.The dataset,sourced from the University Malaya Medical Centre(UMMC)and verified by expert radiologists,consists of CT images of normal,honeycombing,and GGO lungs.Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%.It also recorded strong performance in other metrics,achieving 93%precision,100%sensitivity,89%specificity,and an AUC-ROC score of 97%.Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net.The model significantly reduces misclassification,particularly between honeycombing and GGO lungs,enhancing diagnostic precision and reliability in lung image analysis.
文摘The rapid growth in available network bandwidth has directly contributed to an exponential increase in mobile data traffic,creating significant challenges for network energy consumption.Also,with the extraordinary growth of mobile communications,the data traffic has dramatically expanded,which has led to massive grid power consumption and incurred high operating expenditure(OPEX).However,the majority of current network designs struggle to efficientlymanage a massive amount of data using little power,which degrades energy efficiency performance.Thereby,it is necessary to have an efficient mechanism to reduce power consumption when processing large amounts of data in network data centers.Utilizing renewable energy sources to power the Cloud Radio Access Network(C-RAN)greatly reduces the need to purchase energy from the utility grid.In this paper,we propose a bandwidth-aware hybrid energypowered C-RAN that focuses on throughput and energy efficiency(EE)by lowering grid usage,aiming to enhance the EE.This paper examines the energy efficiency,spectral efficiency(SE),and average on-grid energy consumption,dealing with the major challenges of the temporal and spatial nature of traffic and renewable energy generation across various network setups.To assess the effectiveness of the suggested network by changing the transmission bandwidth,a comprehensive simulation has been conducted.The numerical findings support the efficacy of the suggested approach.
基金supported by the National Natural Science Foundation of China(82302757)Shenzhen Science and Technology Program(JCY20240813145204006,SGDX20201103095600002,JCYJ20220818103417037,KJZD20230923115200002)+1 种基金Shenzhen Key Laboratory of Digital Surgical Printing Project(ZDSYS201707311542415)Shenzhen Development and Reform Program(XMHT20220106001).
文摘Osteoarthritis(OA)is a degenerative joint disease with significant clinical and societal impact.Traditional diagnostic methods,including subjective clinical assessments and imaging techniques such as X-rays and MRIs,are often limited in their ability to detect early-stage OA or capture subtle joint changes.These limitations result in delayed diagnoses and inconsistent outcomes.Additionally,the analysis of omics data is challenged by the complexity and high dimensionality of biological datasets,making it difficult to identify key molecular mechanisms and biomarkers.Recent advancements in artificial intelligence(AI)offer transformative potential to address these challenges.This review systematically explores the integration of AI into OA research,focusing on applications such as AI-driven early screening and risk prediction from electronic health records(EHR),automated grading and morphological analysis of imaging data,and biomarker discovery through multi-omics integration.By consolidating progress across clinical,imaging,and omics domains,this review provides a comprehensive perspective on how AI is reshaping OA research.The findings have the potential to drive innovations in personalized medicine and targeted interventions,addressing longstanding challenges in OA diagnosis and management.
文摘Results of a research about statistical reasoning that six high school teachers developed in a computer environment are presented in this article. A sequence of three activities with the support of software Fathom was presented to the teachers in a course to investigate about the reasoning that teachers develop about the data analysis, particularly about the distribution concept, that involves important concepts such as averages, variability and graphics representations. The design of the activities was planned so that the teachers analyzed quantitative variables separately first, and later made an analysis of a qualitative variable versus a quantitative variable with the objective of establishing comparisons between distributions and use concepts as averages, variability, shape and outliers. The instructions in each activity indicated to the teachers to use all the resources of the software that were necessary to make the complete analysis and respond to certain questions that pretended to capture the type of representations they used to answer. The results indicate that despite the abundance of representations provided by the software, teachers focu,; on the calculation of averages to describe and compare distributions, rather than on the important properties of data such as variability, :shape and outliers. Many teachers were able to build interesting graphs reflecting important properties of the data, but cannot use them 1:o support data analysis. Hence, it is necessary to extend the teachers' understanding on data analysis so they can take advantage of the cognitive potential that computer tools to offer.
文摘Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics,advancing precision medicine by enabling integration and learning from diverse data sources.The exponential growth of high-dimensional healthcare data,encompassing genomic,transcriptomic,and other omics profiles,as well as radiological imaging and histopathological slides,makes this approach increasingly important because,when examined separately,these data sources only offer a fragmented picture of intricate disease processes.Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling,more robust disease characterization,and improved treatment decision-making.This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis.We classify and examine important application domains,such as(1)radiology,where automated report generation and lesion detection are facilitated by image-text integration;(2)histopathology,where fusion models improve tumor classification and grading;and(3)multi-omics,where molecular subtypes and latent biomarkers are revealed through cross-modal learning.We provide an overview of representative research,methodological advancements,and clinical consequences for each domain.Additionally,we critically analyzed the fundamental issues preventing wider adoption,including computational complexity(particularly in training scalable,multi-branch networks),data heterogeneity(resulting from modality-specific noise,resolution variations,and inconsistent annotations),and the challenge of maintaining significant cross-modal correlations during fusion.These problems impede interpretability,which is crucial for clinical trust and use,in addition to performance and generalizability.Lastly,we outline important areas for future research,including the development of standardized protocols for harmonizing data,the creation of lightweight and interpretable fusion architectures,the integration of real-time clinical decision support systems,and the promotion of cooperation for federated multimodal learning.Our goal is to provide researchers and clinicians with a concise overview of the field’s present state,enduring constraints,and exciting directions for further research through this review.
文摘Over the past decade,artificial intelligence(AI)has evolved at an unprecedented pace,transforming technology,industry,and society.From diagnosing diseases with remarkable accuracy to powering self-driving cars and revolutionizing personalized learning,AI is reshaping our world in ways once thought impossible.Spanning fields such as machine learning,deep learning,natural language processing,robotics,and ChatGPT,AI continues to push the boundaries of innovation.As AI continues to advance,it is vital to have a platform that not only disseminates cutting-edge research innovations but also fosters broad discussions on its societal impact,ethical considerations,and interdisciplinary applications.With this vision in mind,we proudly introduce Artificial Intelligence Science and Engineering(AISE)-a journal dedicated to nurturing the next wave of AI innovation and engineering applications.Our mission is to provide a premier outlet where researchers can share high-quality,impactful studies and collaborate to advance AI across academia,industry,and beyond.
基金supported by the Intelligent System Research Group(ISysRG)supported by Universitas Sriwijaya funded by the Competitive Research 2024.
文摘Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically, a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV), which contain critical care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared with Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively. This represented an improvement of approximately 92% over LL-CNN and 91% over U-Net. The results showed that this DRes-CNN-based imputation method outperforms current state-of-the-art models. These results established DRes-CNN as a reliable solution for addressing missing data.
基金funded by Scientific Research Deanship at University of Ha’il,Saudi Arabia,through project number GR-24009.
文摘Hepatitis is an infection that affects the liver through contaminated foods or blood transfusions,and it has many types,from normal to serious.Hepatitis is diagnosed through many blood tests and factors;Artificial Intelligence(AI)techniques have played an important role in early diagnosis and help physicians make decisions.This study evaluated the performance of Machine Learning(ML)algorithms on the hepatitis data set.The dataset contains missing values that have been processed and outliers removed.The dataset was counterbalanced by the Synthetic Minority Over-sampling Technique(SMOTE).The features of the data set were processed in two ways:first,the application of the Recursive Feature Elimination(RFE)algorithm to arrange the percentage of contribution of each feature to the diagnosis of hepatitis,then selection of important features using the t-distributed Stochastic Neighbor Embedding(t-SNE)and Principal Component Analysis(PCA)algorithms.Second,the SelectKBest function was applied to give scores for each attribute,followed by the t-SNE and PCA algorithms.Finally,the classification algorithms K-Nearest Neighbors(KNN),Support Vector Machine(SVM),Artificial Neural Network(ANN),Decision Tree(DT),and Random Forest(RF)were fed by the dataset after processing the features in different methods are RFE with t-SNE and PCA and SelectKBest with t-SNE and PCA).All algorithms yielded promising results for diagnosing hepatitis data sets.The RF with RFE and PCA methods achieved accuracy,Precision,Recall,and AUC of 97.18%,96.72%,97.29%,and 94.2%,respectively,during the training phase.During the testing phase,it reached accuracy,Precision,Recall,and AUC by 96.31%,95.23%,97.11%,and 92.67%,respectively.
基金Supported by the Bavarian Academic Forum(BayWISS),as a part of the joint academic partnership digitalization program.
文摘Background In recent years,the demand for interactive photorealistic three-dimensional(3D)environments has increased in various fields,including architecture,engineering,and entertainment.However,achieving a balance between the quality and efficiency of high-performance 3D applications and virtual reality(VR)remains challenging.Methods This study addresses this issue by revisiting and extending view interpolation for image-based rendering(IBR),which enables the exploration of spacious open environments in 3D and VR.Therefore,we introduce multimorphing,a novel rendering method based on the spatial data structure of 2D image patches,called the image graph.Using this approach,novel views can be rendered with up to six degrees of freedom using only a sparse set of views.The rendering process does not require 3D reconstruction of the geometry or per-pixel depth information,and all relevant data for the output are extracted from the local morphing cells of the image graph.The detection of parallax image regions during preprocessing reduces rendering artifacts by extrapolating image patches from adjacent cells in real-time.In addition,a GPU-based solution was presented to resolve exposure inconsistencies within a dataset,enabling seamless transitions of brightness when moving between areas with varying light intensities.Results Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high"VR-compatible"frame rates,even on mid-range and legacy hardware,respectively.While achieving adequate visual quality even for sparse datasets,it outperforms other IBR and current neural rendering approaches.Conclusions Using the correspondence-based decomposition of input images into morphing cells of 2D image patches,multidimensional image morphing provides high-performance novel view generation,supporting open 3D and VR environments.Nevertheless,the handling of morphing artifacts in the parallax image regions remains a topic for future research.
文摘Multiple Sclerosis(MS)poses significant health risks.Patients may face neurodegeneration,mobility issues,cognitive decline,and a reduced quality of life.Manual diagnosis by neurologists is prone to limitations,making AI-based classification crucial for early detection.Therefore,automated classification using Artificial Intelligence(AI)techniques has a crucial role in addressing the limitations of manual classification and preventing the development of MS to advanced stages.This study developed hybrid systems integrating XGBoost(eXtreme Gradient Boosting)with multi-CNN(Convolutional Neural Networks)features based on Ant Colony Optimization(ACO)and Maximum Entropy Score-based Selection(MESbS)algorithms for early classification of MRI(Magnetic Resonance Imaging)images in a multi-class and binary-class MS dataset.All hybrid systems started by enhancing MRI images using the fusion processes of a Gaussian filter and Contrast-Limited Adaptive Histogram Equalization(CLAHE).Then,the Gradient Vector Flow(GVF)algorithm was applied to select white matter(regions of interest)within the brain and segment them from the surrounding brain structures.These regions of interest were processed by CNN models(ResNet101,DenseNet201,and MobileNet)to extract deep feature maps,which were then combined into fused feature vectors of multi-CNN model combinations(ResNet101-DenseNet201,DenseNet201-MobileNet,ResNet101-MobileNet,and ResNet101-DenseNet201-MobileNet).The multi-CNN features underwent dimensionality reduction using ACO and MESbS algorithms to remove unimportant features and retain important features.The XGBoost classifier employed the resultant feature vectors for classification.All developed hybrid systems displayed promising outcomes.For multiclass classification,the XGBoost model using ResNet101-DenseNet201-MobileNet features selected by ACO attained 99.4%accuracy,99.45%precision,and 99.75%specificity,surpassing prior studies(93.76%accuracy).It reached 99.6%accuracy,99.65%precision,and 99.55%specificity in binary-class classification.These results demonstrate the effectiveness of multi-CNN fusion with feature selection in improving MS classification accuracy.
Abstract: This review article provides a comprehensive analysis of the latest advancements and persistent challenges in Software-Defined Wide Area Networks (SD-WANs), with particular emphasis on the multi-objective Controller Placement Problem (CPP). As SD-WAN technology continues to gain prominence for its capacity to offer flexible and efficient network management, the task of optimally placing controllers (responsible for orchestrating and managing network traffic) remains a critical yet complex challenge. This review delves into recent innovations in multi-objective controller placement strategies, including clustering techniques, heuristic-based approaches, and the integration of machine learning and deep learning models. Each methodology is critically evaluated in terms of its ability to minimize network latency, enhance fault tolerance, and improve overall network performance. Furthermore, this paper discusses the inherent limitations and challenges of these techniques, providing a critical evaluation of their current utility and outlining potential avenues for future research. By offering a thorough overview of state-of-the-art approaches to multi-objective controller placement in SD-WANs, this review aims to inform ongoing advancements and highlight emerging research opportunities in this evolving field.
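As a toy illustration of the clustering family of CPP strategies surveyed here, the sketch below places k controllers with k-means over synthetic switch coordinates and reports the worst-case switch-to-controller distance as a latency proxy; real formulations add load, resilience, and inter-controller objectives.

```python
# Toy clustering-based controller placement: k-means over hypothetical
# switch coordinates, worst-case distance as a crude latency proxy.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
switches = rng.uniform(0, 100, size=(60, 2))   # synthetic WAN topology
for k in (3, 5, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=1).fit(switches)
    d = np.linalg.norm(switches - km.cluster_centers_[km.labels_], axis=1)
    print(f"k={k}: worst-case latency proxy = {d.max():.1f}")
```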
Funding: Supported by the Research Fund of Rambhai Barni Rajabhat University (RBRU, Contract No. 2221/2023), Thailand.
Abstract: Sediment quality in estuaries worldwide is assessed by gauging the degree of anthropogenic input and the corresponding ecological risks. This research aimed to quantify marine pollution at the mouth of the Chanthaburi River, on the Eastern Gulf of Thailand, by examining the interactions between heavy metals (Pb, Cd, Cu, Zn) and microplastics (MPs) in surface marine sediments. Marine pollution severity was classified using the Geo-accumulation Index (Igeo), the Sediment Enrichment Factor (SEF), and the Pollution Load Index (PLI). The spatial distribution of pollutants and geostatistical covariance were examined via a Geographic Information System (GIS) and Principal Component Analysis (PCA). The average concentrations in the sediment samples were: Pb, 0.369 ± 0.022 ppm; Cd, 0.0042 ± 0.0004 ppm; Cu, 5.424 ± 0.007 ppm; Zn, 33.756 ± 0.182 ppm; and microplastics, 1.36 ± 0.06 particles/g. All metal levels were below the WASV, CCV, and TRV reference thresholds. Igeo and SEF indicated that Zn was moderately accumulated with minor enrichment, while the other metals were unpolluted. PCA explained 90.85% of the variance, mainly reflecting Zn accumulation at downstream sites. We also found only a weak correlation between heavy metals and MPs, which may stem from distinct sources, differing physicochemical properties, and potential biological synergistic effects that remain unclear. A key originality of this study lies in integrating GIS-based spatial interpolation with the PLI data to visualize and distinguish site-specific accumulation zones. The study did not assess biological uptake or biomarkers, limiting insight into actual bioavailability and toxicity to marine species. These findings provide spatially explicit evidence for targeted estuarine management and highlight the need for future studies on bioavailability and ecological risks.
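For reference, the two headline indices have compact standard definitions, sketched below in Python; the background concentrations are approximate average-shale values used purely for illustration, not the baselines adopted in the study.

```python
# Standard definitions (not the study's values):
#   Igeo = log2(C / (1.5 * B)),  PLI = geometric mean of CF, with CF = C / B,
# where C is the measured concentration and B a background concentration.
import math

def igeo(c, background):
    return math.log2(c / (1.5 * background))

def pli(concs, backgrounds):
    cfs = [c / b for c, b in zip(concs, backgrounds)]
    return math.prod(cfs) ** (1 / len(cfs))

# Approximate average-shale backgrounds (Pb, Cd, Cu, Zn), illustrative only.
bg = [20.0, 0.3, 45.0, 95.0]
print(igeo(33.756, 95.0))                          # Zn example
print(pli([0.369, 0.0042, 5.424, 33.756], bg))     # multi-metal PLI
```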
Abstract: Together, heart and lung sounds comprise the thoracic cavity sound, which carries informative details that reflect patient conditions, particularly in heart failure (HF) patients. However, owing to the limits of human hearing, only a limited amount of information can be auscultated from thoracic cavity sounds. With the aid of artificial intelligence and machine learning, these features can be analyzed to support the care of HF patients. Machine learning on thoracic cavity sound data involves pre-processing the sound by denoising, resampling, segmentation, and normalization. The most crucial step that follows is feature extraction and selection, where relevant features are chosen to train the model, followed by classification and evaluation of model performance. This review summarizes the currently available studies, which employ different machine learning models, feature extraction and selection methods, and classifiers to generate the desired output. Most studies have analyzed the heart sound component of the thoracic cavity sound to distinguish between normal subjects and HF patients. Some studies have aimed to classify HF patients based on thoracic cavity sounds in their entirety, while others have focused on risk stratification and prognostic evaluation of HF patients using these sounds. Overall, the results from these studies demonstrate a promisingly high level of accuracy. Future prospective studies should therefore incorporate these machine learning models to expedite their integration into daily clinical practice for managing HF patients.
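The pre-processing and feature-extraction stage the review describes can be sketched in a few lines; the sampling rate, the synthetic signal, and the MFCC summary below are illustrative choices, not a prescription from any single study.

```python
# Sketch of the normalize-then-extract pipeline with librosa; a synthetic
# tone stands in for a recorded thoracic cavity sound.
import numpy as np
import librosa

sr = 2000                                        # illustrative sampling rate
t = np.linspace(0, 5, 5 * sr, endpoint=False)
y = np.sin(2 * np.pi * 40 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
y = (y / np.max(np.abs(y))).astype(np.float32)   # normalization step
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_mels=40)
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(features.shape)                            # (26,) summary feature vector
```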
Abstract: Parkinson's disease (PD) is a progressive neurodegenerative disorder characterized by tremors, rigidity, and decreased movement, and it threatens individuals' lives and independence. Early detection of PD is essential because it allows timely intervention, which can slow disease progression and improve outcomes. Manual diagnosis of PD is problematic because the subtle patterns and changes that indicate PD are difficult to capture; subjectivity and the shortage of doctors relative to the number of patients further hinder early diagnosis. Artificial intelligence (AI) techniques, especially deep and automated learning models, provide promising solutions to these deficiencies. This study develops robust systems for PD diagnosis by analyzing handwritten spiral and wave graphical images. The handwritten graphic images of the PD dataset are enhanced using two overlapping filters, an average filter and a Laplacian filter, to improve image quality and highlight essential features. The enhanced images are segmented to isolate regions of interest (ROIs) using a gradient vector flow (GVF) algorithm, which ensures that features are extracted only from relevant regions. The segmented ROIs are fed into convolutional neural network (CNN) models, namely DenseNet169, MobileNet, and VGG16, to extract fine and deep feature maps that capture complex patterns and representations relevant to PD diagnosis. The feature maps extracted from the individual CNN models are combined into fused feature vectors for the DenseNet169-MobileNet, MobileNet-VGG16, DenseNet169-VGG16, and DenseNet169-MobileNet-VGG16 models. This fusion combines complementary and robust features from several models, enriching the extracted representation. Two feature selection algorithms, Ant Colony Optimization (ACO) and Maximum Entropy Score-based Selection (MESbS), are applied to remove redundancy and weak correlations within the combined feature set: they identify and retain the most strongly correlated features while eliminating redundant and weakly correlated ones, optimizing the features to improve system performance. The fused and refined feature vectors are fed into two powerful classifiers, XGBoost and random forest (RF), for accurate classification and differentiation between individuals with PD and healthy controls. The proposed hybrid systems show superior performance; the RF classifier using the combined features of the DenseNet169-MobileNet-VGG16 models with ACO feature selection achieved outstanding results: an area under the curve (AUC) of 99%, sensitivity of 99.6%, accuracy of 99.3%, precision of 99.35%, and specificity of 99.65%.
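A minimal sketch of the two-filter enhancement step is given below, assuming OpenCV; the kernel size, the Laplacian weight, and the file names are illustrative stand-ins for the paper's tuned values.

```python
# Sketch of the enhancement step: an average (box) filter suppresses noise,
# and a Laplacian re-sharpens stroke edges. Parameters are illustrative.
import cv2
import numpy as np

# "spiral_drawing.png" is a hypothetical input; a synthetic stroke stands in
# when the file is absent so the sketch stays runnable.
img = cv2.imread("spiral_drawing.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    img = np.full((256, 256), 255, np.uint8)
    cv2.circle(img, (128, 128), 60, 0, 2)        # stand-in pen stroke

smoothed = cv2.blur(img, (3, 3))                 # average filter: denoise
lap = cv2.Laplacian(smoothed, cv2.CV_64F)        # Laplacian: edge response
enhanced = cv2.convertScaleAbs(smoothed.astype(np.float64) - 0.5 * lap)
cv2.imwrite("spiral_enhanced.png", enhanced)
```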
Funding: Supported by the University Putra Malaysia and the Ministry of Higher Education Malaysia under grant number FRGS/1/2023/ICT11/UPM/02/3.
Abstract: As mobile edge computing continues to develop, demand for resource-intensive applications is steadily increasing, placing significant strain on edge nodes. These nodes are typically subject to various constraints, with limited processing capability, constrained energy supply, and erratic availability among the most common. These problems call for an effective task allocation algorithm that optimizes resources while sustaining high system performance and dependability in dynamic environments. This paper proposes an improved Particle Swarm Optimization technique, IPSO, for multi-objective optimization in edge computing to overcome these issues. The IPSO algorithm trades off two important objectives: minimizing energy consumption and reducing task execution time. Through mutation of the global optimal position and dynamic adjustment of the inertia weight, the proposed algorithm distributes tasks effectively among edge nodes, reducing both task execution time and energy consumption. In comparative assessments against benchmark methods such as Energy-aware Double-fitness Particle Swarm Optimization (EADPSO) and ICBA, IPSO delivers better results. For the maximum task size, IPSO reduces execution time by 17.1% and energy consumption by 31.58% relative to the benchmarks. These results support the conclusion that IPSO is an efficient and scalable technique for task allocation in edge environments, providing peak efficiency while handling scarce resources and variable workloads.
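The two IPSO ingredients named above, a dynamically adjusted inertia weight and mutation of the global best, can be sketched as follows; the toy fitness function is a placeholder for the paper's delay-and-energy model.

```python
# Minimal IPSO sketch: linearly decaying inertia weight plus occasional
# mutation of the global best. Toy weighted time/energy fitness.
import numpy as np

rng = np.random.default_rng(42)
dim, n_particles, iters = 8, 30, 200

def fitness(x):                                # placeholder objective
    time = np.sum(x ** 2)
    energy = np.sum(np.abs(x - 0.5))
    return 0.6 * time + 0.4 * energy

pos = rng.uniform(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for t in range(iters):
    w = 0.9 - 0.5 * t / iters                  # dynamic inertia weight
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
    if rng.random() < 0.1:                     # global-best mutation
        trial = np.clip(gbest + rng.normal(0, 0.05, dim), 0, 1)
        if fitness(trial) < pbest_f.min():
            gbest = trial

print("best fitness:", fitness(gbest))
```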
Funding: Co-supported by the National Natural Science Foundation of China (No. 62271093), the Natural Science Foundation of Chongqing, China (No. CSTB2023NSCQ-LZX0108), and the Chongqing Graduate Research Innovation Project, China (No. CYS23093).
Abstract: In this work, we consider an Unmanned Aerial Vehicle (UAV)-aided covert edge computing architecture in which multiple sensors are scattered at some distance from each other on the ground. Each sensor can execute several computation tasks, but in emergency scenarios the computational capabilities of sensors are often limited, as seen in vehicular networks or Internet of Things (IoT) networks. The UAV can be utilized to take over part of the computation, i.e., edge computing. While various studies have advanced the performance of UAV-based edge computing systems, the security of wireless transmission in future 6G networks is becoming increasingly crucial due to its inherent broadcast nature, yet it has not received adequate attention. In this paper, we improve the covert performance of a UAV-aided edge computing system. Parts of the computation tasks of multiple ground sensors are offloaded to the UAV, while a warden (Willie) in the vicinity attempts to detect the transmissions. Because the transmit power of the sensors, their offloading proportions, and the hovering height of the UAV all affect the covert performance of the system, we propose a deep reinforcement learning framework to optimize them jointly. The proposed algorithm minimizes the average task processing delay of the system while guaranteeing, under the covertness constraint, that the sensors' transmissions are not detected by Willie. Extensive simulations verify the effectiveness of the proposed algorithm in decreasing the average task processing delay compared with other algorithms.
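One way to picture the joint optimization is as a single reinforcement-learning step over the three decision variables; the delay and detection models below are crude placeholders, since the paper derives them from its covert-communication analysis.

```python
# Sketch of encoding the joint decision variables and the covertness
# constraint as an RL step. All models here are toy placeholders.
import numpy as np

def step(action):
    power, offload, height = action              # the abstract's variables
    local_delay = (1 - offload) * 2.0            # toy local-processing model
    uav_delay = offload * 1.0 + 0.01 * height    # toy offloading model
    detect_prob = power / (1.0 + 0.05 * height)  # proxy for Willie's detector
    delay = max(local_delay, uav_delay)
    penalty = 10.0 if detect_prob > 0.1 else 0.0  # covertness constraint
    return -(delay + penalty)                    # reward: covert, low delay

print(step(np.array([0.05, 0.6, 80.0])))
```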
Abstract: This century's rapid urbanization has disrupted urban governance, sustainability, and resource management. The Internet of Things (IoT) and 5G have the potential to transform smart cities through real-time data processing, enhanced connectivity, and sustainable urban design. This study investigates how 5G connectivity combined with the IoT's hierarchical framework can enhance public service provision, mitigate environmental effects, and optimize urban resource management, arguing that these technologies can improve urban operations by tackling scalability, interoperability, and security issues. The research draws on case studies from Singapore and Barcelona. It also analyzes AI-driven security systems, 6G networks, and the contributions of IoT and 5G to the advancement of a circular economy, and it argues that the growth of smart cities necessitates robust policy frameworks to guarantee equitable access, data protection, and ethical considerations. By integrating prior research with practical experience, the study addresses data-informed municipal governance and urban innovation, emphasizing the importance of policy in fostering inclusive and sustainable urban futures.
Funding: Funded by the Directorate of Research and Community Service, Directorate General of Research and Development, Ministry of Higher Education, Science and Technology, in accordance with the Implementation Contract for the Operational Assistance Program for State Universities, Research Program Number 109/C3/DT.05.00/PL/2025.
Abstract: Sudden wildfires cause significant ecological damage worldwide. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model based on audio classification and machine learning. We developed an audio pipeline using real-world environmental sound recordings: sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally affected by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for capturing complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4 ± 1.6% accuracy, 91.2 ± 1.8% F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared with traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
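The Mel-spectrogram front end can be sketched in a few lines of Python with librosa; the synthetic clip and parameter values below are illustrative, standing in for a hypothetical field recording.

```python
# Sketch of the Mel-spectrogram front end: the resulting 2-D array is what
# the CNN would classify as fire / no-fire. Synthetic stand-in audio.
import numpy as np
import librosa

sr = 16000
y = np.random.default_rng(0).normal(size=5 * sr).astype(np.float32)  # 5-s clip
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64, fmax=8000)
log_mel = librosa.power_to_db(mel, ref=np.max)   # CNN input, shape (64, T)
print(log_mel.shape)
```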
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number PNURSP2025R896.
Abstract: Deep neural networks have achieved excellent classification results on several computer vision benchmarks. This has fueled the popularity of machine learning as a service, where trained algorithms are hosted in the cloud and inference can be obtained on real-world data. In most applications, the vision data must be compressed owing to enormous bandwidth and memory requirements. Video codecs exploit spatial and temporal correlations to achieve high compression ratios, but they are computationally expensive. This work computes the motion fields between consecutive frames to facilitate efficient video classification. Contrary to the normal practice of reconstructing full-resolution frames through motion compensation, however, it proposes to infer the class label directly from the block-based computed motion fields. Motion fields are a richer and more expressive representation than raw motion vectors, as each motion vector carries both magnitude and direction information. This approach has two advantages: the cost of motion compensation and video decoding is avoided, and the dimensionality of the input signal is greatly reduced, allowing a shallower classification network. The neural network can be trained on motion vectors in two ways: as complex-valued representations or as magnitude-direction pairs. The proposed work trains a convolutional neural network on the direction and magnitude tensors of the motion fields. Our experimental results show 20× faster convergence during training, reduced overfitting, and accelerated inference on a hand gesture recognition dataset compared with full-resolution and downsampled frames. We validate the proposed methodology on the HGds dataset, achieving a testing accuracy of 99.21%, on the HMDB51 dataset, achieving 82.54% accuracy, and on the UCF101 dataset, achieving 97.13% accuracy, outperforming state-of-the-art methods in computational efficiency.
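Constructing the magnitude-direction input tensor from a block-based motion field is straightforward; in the sketch below a random field stands in for decoder-extracted motion vectors.

```python
# Sketch of building the magnitude/direction tensor pair the abstract
# describes, ready for a shallow CNN. Random stand-in motion field.
import numpy as np

mv = np.random.default_rng(0).normal(size=(30, 40, 2))  # (blocks_y, blocks_x, dx/dy)
magnitude = np.hypot(mv[..., 0], mv[..., 1])
direction = np.arctan2(mv[..., 1], mv[..., 0])          # radians in [-pi, pi]
x = np.stack([magnitude, direction], axis=-1)           # CNN input (30, 40, 2)
print(x.shape)
```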