Artificial intelligence (AI) is reshaping financial systems and services, as intelligent AI agents increasingly form the foundation of autonomous, goal-driven systems capable of reasoning, learning, and action. This review synthesizes recent research and developments in the application of AI agents across core financial domains. Specifically, it covers the deployment of agent-based AI in algorithmic trading, fraud detection, credit risk assessment, robo-advisory, and regulatory compliance (RegTech). The review focuses on advanced agent-based methodologies, including reinforcement learning, multi-agent systems, and autonomous decision-making frameworks, particularly those leveraging large language models (LLMs), contrasting these with traditional AI or purely statistical models. Our primary goals are to consolidate current knowledge, identify significant trends and architectural approaches, review the practical efficiency and impact of current applications, and delineate key challenges and promising future research directions. The increasing sophistication of AI agents offers unprecedented opportunities for innovation in finance, yet presents complex technical, ethical, and regulatory challenges that demand careful consideration and proactive strategies. This review aims to provide a comprehensive understanding of this rapidly evolving landscape, highlighting the role of agent-based AI in the ongoing transformation of the financial industry, and is intended to serve financial institutions, regulators, investors, analysts, researchers, and other key stakeholders in the financial ecosystem.
Classification of electroencephalogram (EEG) signals for humans can be achieved via artificial intelligence (AI) techniques. In particular, the EEG signals associated with epileptic seizures can be detected to distinguish between epileptic and non-epileptic regions. From this perspective, an automated AI technique with a digital processing method can be used to improve these signals. This paper proposes two classifiers, long short-term memory (LSTM) and support vector machine (SVM), for the classification of seizure and non-seizure EEG signals. These classifiers are applied to a public dataset from the University of Bonn, which consists of two classes: seizure and non-seizure. In addition, a fast Walsh-Hadamard transform (FWHT) technique is implemented to analyze the EEG signals within the recurrence space of the brain; the Hadamard coefficients of the EEG signals are thus obtained via the FWHT. The FWHT also contributes to an efficient separation of seizure EEG recordings from non-seizure EEG recordings. A k-fold cross-validation technique is applied to validate the performance of the proposed classifiers. The LSTM classifier provides the best performance, with a testing accuracy of 99.00%. The training and testing loss rates for the LSTM are 0.0029 and 0.0602, respectively, while its weighted average precision, recall, and F1-score are all 99.00%. The accuracy, sensitivity, and specificity of the SVM classifier reached 91%, 93.52%, and 91.3%, respectively. The computational time consumed for training the LSTM and SVM is 2000 and 2500 s, respectively. The results show that the LSTM classifier outperforms the SVM in the classification of EEG signals, and both proposed classifiers provide high classification accuracy compared to previously published classifiers.
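As a rough illustration of the pipeline this abstract describes, the sketch below pairs a hand-rolled fast Walsh-Hadamard transform with a k-fold cross-validated SVM. The "EEG" segments are invented synthetic stand-ins (a sinusoidal component marks the "seizure" class); the Bonn recordings, the LSTM model, and the reported figures are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fwht(x):
    """Fast Walsh-Hadamard transform of a length-2^k signal (unnormalized)."""
    x = np.asarray(x, dtype=float).copy()
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, h * 2):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b        # butterfly: sums ...
            x[i + h:i + 2 * h] = a - b  # ... and differences
        h *= 2
    return x

rng = np.random.default_rng(0)
n_epochs, n_samples = 200, 128          # 128 = 2^7 samples per toy segment
labels = rng.integers(0, 2, n_epochs)   # 0 = non-seizure, 1 = seizure (synthetic)
t = np.arange(n_samples)
signals = rng.normal(0, 1, (n_epochs, n_samples))
# toy "seizure" segments carry an extra high-amplitude rhythmic component
signals[labels == 1] += 3.0 * np.sin(2 * np.pi * 5 * t / n_samples)

features = np.abs(np.array([fwht(s) for s in signals]))  # Hadamard coefficients
scores = cross_val_score(SVC(kernel="linear"), features, labels, cv=5)
print(round(scores.mean(), 3))
```

On this toy data the rhythmic component concentrates its energy in a few Walsh coefficients, so even a linear SVM separates the classes well.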
This report deals with some characteristics of the electric power system of Bulgaria. Emphasis is put on the benefits of connecting small photovoltaic plants in the tourist areas of the country, with the town of Pomorie examined as an example. Data on the quality of the consumed electric energy and the price for a four-member family are presented. The amount of solar radiation for the town of Pomorie is audited through PVGIS (Photovoltaic Geographical Information System). The types of photovoltaic panels offered on the market by manufacturers are discussed in terms of power efficiency. A model is developed for creating a photovoltaic system on the roof of a house inhabited by several families, and the cost of the electricity generated by the proposed system is calculated. Finally, the cost of the electricity supplied by the electricity provider EVN (Energie Vernünftig Nutzen) in the town of Pomorie is compared to that obtained using the proposed PV system.
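The kind of cost calculation the report describes can be sketched as a back-of-the-envelope annual-yield and simple-payback estimate. All numbers below (specific yield, tariff, installation cost) are illustrative assumptions, not values taken from the report:

```python
# Toy rooftop PV economics: annual energy, annual savings, simple payback.
kwp = 10.0                  # installed peak power, kW (assumed)
specific_yield = 1450.0     # kWh per kWp per year, a PVGIS-style figure (assumed)
tariff = 0.12               # grid electricity price, EUR/kWh (assumed)
capex = 1100.0 * kwp        # installation cost at 1100 EUR/kWp (assumed)

annual_energy = kwp * specific_yield       # kWh generated per year
annual_savings = annual_energy * tariff    # EUR avoided per year
payback_years = capex / annual_savings     # simple (undiscounted) payback
print(round(annual_energy), round(payback_years, 1))
```

A fuller comparison would discount future savings and account for panel degradation, but the simple payback is the figure most such feasibility studies lead with.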
Ohrid trout (Salmo letnica) is an endemic fish species found in Lake Ohrid in the Former Yugoslav Republic of Macedonia (FYROM). The growth of Ohrid trout was examined in a controlled environment for a certain period, after which the fish were released into the lake to strengthen the natural population. The external features of the fish were measured regularly during the cultivation period in the laboratory to monitor their growth. A data-mining-based computational model can provide a fast, accurate, reliable, automatic, and improved growth-monitoring procedure and classification of Ohrid trout. With this motivation, a combined approach of principal component analysis (PCA) and support vector machine (SVM) has been implemented for the visual discrimination and quantitative classification of experimentally and naturally bred Ohrid trout and their growth stages. The PCA results in better discrimination of the breeding categories of Ohrid trout at different development phases, while a maximum classification accuracy of 98.33% was achieved using the combination of PCA and SVM. The classification performance of this combination has been compared to combinations of PCA with other classification methods (multilayer perceptron, naive Bayes, random committee, decision stump, random forest, and random tree). In addition, the classification accuracy of a multilayer perceptron using the original features has been studied.
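The PCA-then-SVM combination used above is a standard scikit-learn pipeline. The sketch below runs it on synthetic stand-in data (the morphometric measurements of the trout are not public here), with cross-validation in place of the paper's evaluation protocol:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for morphometric features (lengths, widths, weights, ...)
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=1)

# PCA for discrimination / dimensionality reduction, then an RBF SVM classifier
model = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print(round(scores.mean(), 3))
```

Wrapping PCA and the SVM in one pipeline keeps the projection fitted only on each training fold, avoiding leakage into the validation folds.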
This study presents an epsilon-mu-near-zero (EMNZ) nanostructured metamaterial absorber (NMMA) for visible-regime applications. The resonator and dielectric layers are made of tungsten (W) and fused quartz, and the working band is expanded by changing the resonator layer's design. Due to perfect impedance matching with plasmonic resonance characteristics, the proposed NMMA structure achieves excellent absorption of 99.99% at 571 THz, 99.50% at 488.26 THz, and 99.32% at 598 THz. The absorption mechanism is demonstrated through impedance theory, electric field, and power loss density distributions. The geometric parameters are explored and analyzed to show the structure's performance, and a near-field pattern is used to explain the absorption mechanism at the resonance frequency point. The numerical analysis shows that the proposed structure exhibits more than 80% absorbability between 550 and 900 THz. The Computer Simulation Technology (CST Microwave Studio 2019) software is used to design the proposed structure, and the CST simulation data are cross-validated against HFSS with the help of the finite element method (FEM). The proposed NMMA structure also exhibits glucose concentration sensing capability as an application. The proposed broadband absorber may therefore have potential applications in THz sensing, imaging (MRI, thermal, color), solar energy harvesting, light modulators, and optoelectronic devices.
Gliomas are the most aggressive brain tumors, caused by the abnormal growth of brain tissues. The life expectancy of patients diagnosed with gliomas decreases exponentially, and most gliomas are diagnosed in later stages, resulting in imminent death: on average, patients do not survive 14 months after diagnosis. The only way to minimize the impact of this disease is through early diagnosis. Magnetic Resonance Imaging (MRI) scans, because of their better tissue contrast, are most frequently used to assess brain tissues. The manual classification of MRI scans takes a considerable amount of time, and dealing with MRI scans manually is also cumbersome, which affects classification accuracy. To address this problem, researchers have developed automatic and semi-automatic methods that help automate the brain tumor classification task. Although many techniques have been devised, existing methods still struggle to characterize the enhancing region, because its low variance gives poor contrast in MRI scans. In this study, we propose a novel deep-learning-based method consisting of a series of steps, namely data pre-processing, patch extraction, patch pre-processing, and a deep learning model with tuned hyper-parameters, to classify all types of gliomas with a focus on the enhancing region. Our trained model achieved better results for all glioma classes, including the enhancing region. The improved performance of our technique can be attributed to several factors. First, the non-local means filter in the pre-processing step improved image detail while removing irrelevant noise. Second, the architecture we employ can capture the non-linearity of all classes, including the enhancing region. Overall, the segmentation scores achieved on the Dice Similarity Coefficient (DSC) metric for the normal, necrosis, edema, enhancing, and non-enhancing tumor classes are 0.95, 0.97, 0.91, 0.93, and 0.95, respectively.
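The DSC metric reported above has a simple definition, 2|A ∩ B| / (|A| + |B|) for a predicted and a ground-truth binary mask of one tumor class. A minimal sketch on invented 8×8 toy masks:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# toy masks: the predicted lesion partially overlaps the ground-truth lesion
truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1   # 16 ground-truth pixels
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 3:7] = 1    # 16 predicted pixels, 9 of them overlapping
print(round(dice_coefficient(pred, truth), 3))  # → 0.562
```

The small `eps` term guards against division by zero when both masks are empty; per-class DSC scores like those in the abstract are obtained by computing this once per tumor label.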
The identification and classification of collective people's activities are gaining momentum as significant themes in machine learning, with many potential applications emerging. Representation of collective human behavior is especially crucial in applications such as assessing security conditions and preventing crowd congestion. This paper investigates the capability of deep neural network (DNN) algorithms within a carefully engineered pipeline for crowd analysis. The pipeline includes three principal stages that cover the main challenges of crowd analysis. First, individuals are detected using the You Only Look Once (YOLO) model, and a Kalman filter performs multiple-human tracking. Second, the density map and crowd count of a given location are generated from the bounding boxes produced by the human detector. Finally, to classify crowds as normal or abnormal, individual activities are identified with pose estimation. The proposed system successfully builds an effective collective representation of the crowd from the individuals and detects significant changes in crowd activity. Experimental results on the MOT20 and SDHA datasets demonstrate that the proposed system is robust and efficient. The framework achieves improved recognition and detection of people, with a mean average precision of 99.0% and a real-time speed of 0.6 ms non-maximum suppression (NMS) per image for the SDHA dataset, and 95.3% mean average precision for MOT20 with 1.5 ms NMS per image.
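The tracking stage of the pipeline above rests on the standard Kalman predict/update cycle over detected box centers. The sketch below is a generic constant-velocity filter on invented 2-D detections, not the paper's exact tracker; the process and measurement noise levels are assumptions:

```python
import numpy as np

# Constant-velocity Kalman filter over 2-D box centers: state = [x, y, vx, vy]
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # only position is observed
Q = 0.01 * np.eye(4)                                 # process noise (assumed)
R = 1.0 * np.eye(2)                                  # measurement noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

x, P = np.zeros(4), 10.0 * np.eye(4)   # vague prior on the track
for z in [np.array([1.0, 1.0]), np.array([2.0, 2.1]), np.array([3.1, 3.0])]:
    x, P = predict(x, P)
    x, P = update(x, P, z)
print(np.round(x[:2], 2))              # smoothed position near the last detection
```

In a full multi-object tracker, one such filter runs per person, with detections assigned to filters by a data-association step (e.g., nearest-neighbor or Hungarian matching) each frame.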
The precise diagnosis of Alzheimer's disease is critical for patient treatment, especially at the early stage, because awareness of the severity and progression risks lets patients take preventative actions before irreversible brain damage occurs. It is possible to gain a holistic view of Alzheimer's disease staging by combining multiple data modalities, known as image fusion. This paper proposes the early detection of Alzheimer's disease using different modalities of brain images. First, preprocessing was performed on the data; data augmentation techniques were used to handle overfitting, and the skull was removed to improve classification. In the second phase, two fusion stages are used: pixel level (early fusion) and feature level (late fusion). We fused magnetic resonance imaging (MRI) and positron emission tomography (PET) images using early fusion (Laplacian Re-Decomposition) and late fusion (Canonical Correlation Analysis). The proposed system uses MRI and PET to take advantage of each: MRI provides images with excellent spatial resolution and structural information for specific organs, while PET images provide functional information and the metabolism of particular tissues, which helps clinicians detect diseases and tumor progression at an early stage. Third, the features of the fused images are extracted using a convolutional neural network; in the case of late fusion, the features are extracted first and then fused. Finally, the proposed system applies XGBoost to classify Alzheimer's disease. The system's performance was evaluated using accuracy, specificity, and sensitivity. All medical data were retrieved in the 2D format of 256×256 pixels. The classifiers were optimized to achieve the final results: for the decision tree, the maximum tree depth was 2; the best number of trees for the random forest was 60; and for the support vector machine, the maximum depth was 4 and the kernel gamma was 0.01. The system achieved an accuracy of 98.06%, specificity of 94.32%, and sensitivity of 97.02% with early fusion. With late fusion, accuracy was 99.22%, specificity was 96.54%, and sensitivity was 99.54%.
Given imperfect channel state information (CSI) and considering the interference from the primary transmitter, an underlay cognitive multi-source multi-destination relay network is proposed. A closed-form exact outage probability and an asymptotic outage probability are derived for the secondary system of the network. The results show that the outage probability is influenced by the numbers of sources and destinations, the CSI imperfection, and the interference from the primary transmitter, while the diversity order is independent of the CSI imperfection and the primary interference and is equal to the minimum of the source and destination numbers. Moreover, extensive simulations are conducted with different system parameters to verify the theoretical analysis.
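The headline result, a diversity order equal to the minimum of the source and destination numbers, can be illustrated with a Monte Carlo sketch. The model below is a deliberately simplified selection-based dual-hop link over Rayleigh fading (no CSI error or primary interference terms), not the paper's exact closed form:

```python
import numpy as np

def outage_prob(m_src, n_dst, snr_db, rate=1.0, trials=200_000, seed=0):
    """Monte Carlo outage of a toy selection-based dual-hop relay link:
    the best of m_src source->relay channels and the best of n_dst
    relay->destination channels are selected, and the end-to-end SNR is
    limited by the weaker of the two selected hops."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    g_sr = rng.exponential(1.0, (trials, m_src)).max(axis=1)  # Rayleigh -> exp
    g_rd = rng.exponential(1.0, (trials, n_dst)).max(axis=1)
    gamma = snr * np.minimum(g_sr, g_rd)
    return np.mean(np.log2(1 + gamma) < rate)  # outage: capacity below target

# outage falls much faster with more selectable nodes: diversity ~ min(m, n)
print(outage_prob(1, 1, 10), outage_prob(3, 3, 10))
```

In this toy model the outage probability scales roughly as θ^min(m,n) for small SNR thresholds θ, matching the min-of-the-two-counts diversity behavior stated in the abstract.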
Manual investigation of chest radiography (CXR) images by physicians is crucial for effective decision-making in COVID-19 diagnosis. However, the high demand during the pandemic necessitates auxiliary help through image analysis and machine learning techniques. This study presents a multi-threshold-based segmentation technique to probe high-pixel-intensity regions in CXR images of various pathologies, including normal cases. Texture information is extracted using gray-level co-occurrence matrix (GLCM)-based features, while vessel-like features are obtained using Frangi, Sato, and Meijering filters. Machine learning models employing Decision Tree (DT) and Random Forest (RF) approaches are designed to categorize CXR images into common lung infections, lung opacity (LO), COVID-19, and viral pneumonia (VP). The results demonstrate that the fusion of texture and vessel-based features provides an effective ML model for aiding diagnosis. Model validation using performance measures, including an accuracy of approximately 91.8% with an RF-based classifier, supports the usefulness of the feature set and classifier model in categorizing the four pathologies. Furthermore, the study investigates the importance of the devised features in identifying the underlying pathology and incorporates histogram-based analysis. This analysis reveals varying natural pixel distributions in CXR images belonging to the normal, COVID-19, LO, and VP groups, motivating the incorporation of additional features such as the mean, standard deviation, skewness, and percentiles of the filtered images. Notably, the study achieves a considerable improvement in distinguishing COVID-19 from LO, with a true positive rate of 97%, further substantiating the effectiveness of the implemented methodology.
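GLCM texture features of the kind used above can be computed in a few lines. The sketch below is a minimal NumPy stand-in for scikit-image's `graycomatrix`/`graycoprops` (single offset, no symmetry or multiple angles), applied to invented toy images rather than CXR data:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Tiny gray-level co-occurrence matrix plus three classic texture
    descriptors (contrast, homogeneity, energy) for one pixel offset."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1   # count co-occurring pairs
    glcm /= glcm.sum()                              # normalize to probabilities
    i_idx, j_idx = np.indices(glcm.shape)
    contrast = ((i_idx - j_idx) ** 2 * glcm).sum()
    homogeneity = (glcm / (1.0 + np.abs(i_idx - j_idx))).sum()
    energy = (glcm ** 2).sum()
    return contrast, homogeneity, energy

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 255, 32), (32, 1))     # gradient: low contrast
noisy = rng.integers(0, 256, (32, 32)).astype(float)   # noise: high contrast
print(glcm_features(smooth)[0] < glcm_features(noisy)[0])  # → True
```

Feature vectors built this way (optionally over several offsets and angles) are what feed the DT/RF classifiers in pipelines like the one described.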
Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles as well as radiological imaging and histopathological slides, makes this approach increasingly important, because when examined separately these data sources offer only a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, such as (1) radiology, where automated report generation and lesion detection are facilitated by image-text integration; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where molecular subtypes and latent biomarkers are revealed through cross-modal learning. We provide an overview of representative research, methodological advancements, and clinical consequences for each domain. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of maintaining significant cross-modal correlations during fusion. These problems impede interpretability, which is crucial for clinical trust and use, in addition to performance and generalizability. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of cooperation through federated multimodal learning. Our goal is to provide researchers and clinicians with a concise overview of the field's present state, enduring constraints, and promising directions for further research.
Hurricane Ida ferociously affected many southeastern and eastern parts of the United States, making it one of the strongest hurricanes in recent years. An advanced forecast and warning tool was used to track the path of the ex-hurricane Ida as it left New Orleans on its way toward the northeast, accurately predicting significant supercell development above New York City on September 01, 2021. This advanced method accurately detected the area with the highest possible level of convective instability with a 24-h lead time, reaching Level 5, the extreme level in the system's categorical outlook legend, which implied a very high probability of a local-scale hazard occurring above NYC. Cloud model output fields (updrafts and downdrafts, wind shear, near-surface convergence, and the vertical component of relative vorticity) show the rapid development of a strong supercell storm with rotating updrafts and a mesocyclone. The characteristic hook-shaped echo signature visible in the reflectivity patterns indicates a highly precipitable (HP) supercell with the possibility of tornado initiation. Open boundary conditions represent a good basis for simulating a tornado that evolved from a supercell storm, initialized with data obtained from a real-time simulation in the period when the bow echo and tornado-like signature occurred. The modeled results agree well with the observations.
Graph-based image classification has emerged as a powerful alternative to traditional convolutional approaches, leveraging the relational structure between image regions to improve accuracy. This paper presents an enhanced graph-based image classification framework that integrates convolutional neural network (CNN) features with graph convolutional network (GCN) learning, leveraging superpixel-based image representations. The proposed framework begins by segmenting input images into meaningful superpixels, reducing computational complexity while preserving essential spatial structures. A pre-trained CNN backbone extracts both global and local features from these superpixels, capturing critical texture and shape information. These features are structured into a graph, and the framework presents a graph classification model that learns and propagates relationships between nodes, improving global contextual understanding. By combining the strengths of CNN-based feature extraction and graph-based relational learning, the method achieves higher accuracy, faster training, and greater robustness in image classification tasks. Experimental evaluations on four agricultural datasets demonstrate the proposed model's superior performance, achieving accuracy rates of 96.57%, 99.63%, 95.19%, and 90.00% on the Tomato Leaf Disease, Dragon Fruit, Tomato Ripeness, and Dragon Fruit and Leaf datasets, respectively. The model consistently outperforms conventional CNN (89.27%–94.23% accuracy), ViT (89.45%–99.77% accuracy), VGG16 (93.97%–99.52% accuracy), and ResNet50 (86.67%–99.26% accuracy) methods across all datasets, with particularly significant improvements on challenging datasets such as Tomato Ripeness (95.19% vs. 86.67%–94.44%) and Dragon Fruit and Leaf (90.00% vs. 82.22%–83.97%). The compact superpixel representation and efficient feature propagation mechanism further accelerate learning compared to traditional CNN and graph-based approaches.
Forecasting energy demand is essential for optimizing energy generation and effectively predicting power system needs. Recently, many researchers have developed various models on tabular datasets to enhance the effectiveness of demand prediction, including neural networks, machine learning, deep learning, and advanced architectures such as CNNs and LSTMs. However, research on CNN models has struggled to provide reliable outcomes due to insufficient dataset sizes, repeated investigations, and inappropriate baseline selection. To address these challenges, we propose a Tabular-data-based Lightweight Convolutional Neural Network (TLCNN) model for predicting energy demand. It frames the problem as a regression task that effectively captures complex data trends for accurate forecasting. The BanE-16 dataset is preprocessed using normalization techniques for categorical and numerical data before training the model. The proposed approach dynamically selects relevant features through a two-dimensional convolutional structure that improves adaptability. The model's performance is evaluated using MSE, MAE, and accuracy metrics. Experimental results show that TLCNN achieves a 10.89% lower MSE than traditional ML algorithms, demonstrating superior predictive capability. Additionally, TLCNN's lightweight structure enhances generalization while reducing computational costs, making it suitable for real-world energy forecasting tasks. This study contributes to energy informatics by introducing an optimized deep-learning framework that improves demand prediction while ensuring robustness and adaptability for tabular data.
Deep neural networks have achieved excellent classification results on several computer vision benchmarks. This has led to the popularity of machine learning as a service, where trained algorithms are hosted on the cloud and inference can be obtained on real-world data. In most applications, it is important to compress the vision data due to the enormous bandwidth and memory requirements. Video codecs exploit spatial and temporal correlations to achieve high compression ratios, but they are computationally expensive. This work computes the motion fields between consecutive frames to facilitate the efficient classification of videos. However, contrary to the normal practice of reconstructing full-resolution frames through motion compensation, this work proposes to infer the class label directly from the block-based computed motion fields. Motion fields are a richer and more complex representation than raw motion vectors, where each motion vector carries magnitude and direction information. This approach has two advantages: the cost of motion compensation and video decoding is avoided, and the dimensionality of the input signal is greatly reduced, allowing a shallower classification network. The neural network can be trained on motion vectors in two ways: as complex-valued representations or as magnitude-direction pairs. The proposed work trains a convolutional neural network on the direction and magnitude tensors of the motion fields. Our experimental results show 20× faster convergence during training, reduced overfitting, and accelerated inference on a hand gesture recognition dataset compared to full-resolution and downsampled frames. We validate the proposed methodology on the HGds dataset, achieving a testing accuracy of 99.21%, on the HMDB51 dataset, achieving 82.54% accuracy, and on the UCF101 dataset, achieving 97.13% accuracy, outperforming state-of-the-art methods in computational efficiency.
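The conversion from block-based motion vectors to the magnitude and direction tensors mentioned above is a small, self-contained step. The sketch below assumes 16×16 macroblocks (so the maps are H/16 × W/16); the toy motion field is invented:

```python
import numpy as np

def motion_tensors(mvx, mvy):
    """Convert the horizontal and vertical motion-vector component maps of
    one frame into stacked magnitude and direction tensors."""
    magnitude = np.hypot(mvx, mvy)            # per-block motion strength
    direction = np.arctan2(mvy, mvx)          # per-block angle in (-pi, pi]
    return np.stack([magnitude, direction])   # shape (2, H/16, W/16)

# toy 4x4-block field: uniform rightward motion plus one block also moving up
mvx = np.ones((4, 4))
mvy = np.zeros((4, 4))
mvy[1, 2] = -1.0
tensors = motion_tensors(mvx, mvy)
print(tensors.shape)  # → (2, 4, 4)
```

The resulting two-channel tensor is tiny compared to a decoded RGB frame, which is what allows the shallower classification network the abstract describes.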
Supervised fault diagnosis typically assumes that all types of machinery failure are known. In practice, however, unknown types of defect, i.e., novelties, may occur, and their detection is a challenging task. In this paper, a novel fault diagnostic method is developed for both diagnostics and the detection of novelties. To this end, a sparse-autoencoder-based multi-head Deep Neural Network (DNN) is presented to jointly learn a shared encoding representation for both unsupervised reconstruction and supervised classification of the monitoring data. The detection of novelties is based on the reconstruction error. Moreover, the computational burden is reduced by directly training the multi-head DNN with the rectified linear unit activation function, instead of performing the pre-training and fine-tuning phases required by classical DNNs. The method is applied to a benchmark bearing case study and to experimental data acquired from a delta 3D printer. The results show that its performance is satisfactory both in the detection of novelties and in fault diagnosis, outperforming other state-of-the-art methods. This research thus proposes a fault diagnostics method that can not only diagnose known types of defect but also detect unknown ones.
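The reconstruction-error principle behind the novelty detector above can be shown with a linear stand-in: here a PCA projection plays the role of the paper's sparse autoencoder, and the "known-fault" and "novel" data are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# "known" monitoring data live near a low-dimensional subspace; novelties do not
basis = rng.normal(size=(3, 20))
known = rng.normal(size=(500, 3)) @ basis + 0.05 * rng.normal(size=(500, 20))
novel = rng.normal(size=(50, 20))               # off-subspace samples

enc = PCA(n_components=3).fit(known)            # linear "encoder" stand-in

def recon_error(x):
    # distance between a sample and its encode-then-decode reconstruction
    return np.linalg.norm(x - enc.inverse_transform(enc.transform(x)), axis=1)

threshold = np.percentile(recon_error(known), 99)  # calibrated on known data
flags = recon_error(novel) > threshold             # flag high-error samples
print(round(flags.mean(), 2))                      # fraction flagged as novel
```

A trained autoencoder replaces the PCA in the real method, but the decision rule is the same: samples the model cannot reconstruct well are declared novel.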
Broadband-response metamaterial absorbers (MMAs) remain a challenge among researchers. A new nanostructured zero-index metamaterial (ZIM) absorber is presented in this study, constructed with a hexagonal resonator for optical-region applications. The design consists of resonator and dielectric layers made of tungsten and fused quartz. The proposed absorber exhibits average absorption of more than 0.8972 (89.72%) within the visible wavelength range of 450–600 nm and nearly perfect absorption of 0.99 (99%) at 461.61 nm. Based on computational analysis, the proposed absorber can be characterized as a ZIM. The developed ZIM absorber demonstrates plasmonic resonance characteristics and a perfect impedance match. For incidence angles in the range of 0°–90° in both TE and TM modes, the maximum absorbance remains above 0.8972 (~89.72%), and angular stability up to 45° makes it suitable for solar-cell applications such as exploiting solar energy. The proposed structure is designed and simulated using the Computer Simulation Technology (CST) Microwave Studio tools. The finite integration technique (FIT)-based simulator CST and the finite element method (FEM)-based simulator HFSS are used to cross-validate the numerical data of the proposed ZIM absorber. The proposed MMA design is appropriate for substantial absorption, wide-angle stability, invisibility layers, magnetic resonance imaging (MRI), color imaging, and thermal imaging applications.
In image processing, one of the most important steps is image segmentation. Objects in remote sensing images often have to be detected in order to perform the next steps in image processing. Remote sensing images usually have large sizes and various spatial resolutions, which makes detecting objects in them very complicated. In this paper, we develop a model to detect objects in remote sensing images based on the combination of picture fuzzy clustering and the MapReduce method (denoted as MPFC). First, picture fuzzy clustering is applied to segment the input images. Then, MapReduce is used to reduce the runtime while guaranteeing quality. To convert data for MapReduce processing, two new procedures are introduced: Map_PFC and Reduce_PFC. The formal representation and details of these two procedures are presented in this paper. Experiments on satellite image and remote sensing image datasets are given to evaluate the proposed model. Validity indices and time consumption are used to compare the proposed model to the picture fuzzy clustering model. The values of the validity indices show that picture fuzzy clustering integrated with MapReduce gets better segmentation quality than picture fuzzy clustering alone. Moreover, on the two selected image datasets, the runtime of the MPFC model is much less than that of picture fuzzy clustering.
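The map/reduce split used by Map_PFC and Reduce_PFC can be illustrated on a simpler relative: the sketch below runs ordinary fuzzy c-means (a stand-in for the paper's picture fuzzy clustering) with the per-chunk membership computation as the "map" step and the aggregation of partial centroid sums as the "reduce" step. The data and chunking are toy choices:

```python
import numpy as np
from functools import reduce

def map_fcm(chunk, centers, m=2.0):
    """Map step: fuzzy memberships for one data chunk, plus the partial
    weighted sums needed to update the cluster centers."""
    d = np.linalg.norm(chunk[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    u = d ** (-2 / (m - 1)) / np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True)
    w = u ** m
    return w.T @ chunk, w.sum(axis=0)      # (partial numerators, denominators)

def reduce_fcm(a, b):
    """Reduce step: aggregate partial sums from two map outputs."""
    return a[0] + b[0], a[1] + b[1]

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
centers = data[[0, 100]].copy()            # one seed point from each toy blob
for _ in range(10):                        # a few FCM iterations
    partials = [map_fcm(c, centers) for c in np.array_split(data, 4)]  # mappers
    num, den = reduce(reduce_fcm, partials)                            # reducer
    centers = num / den[:, None]
print(np.round(np.sort(centers[:, 0]), 1))  # centers near 0 and 3
```

Because the centroid update is a sum over samples, it decomposes cleanly across chunks, which is exactly the property that lets MapReduce cut the runtime on large images without changing the result.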
Non-Orthogonal Multiple Access(NOMA)has already proven to be an effective multiple access scheme for5th Generation(5G)wireless networks.It provides improved performance in terms of system throughput,spectral efficienc...Non-Orthogonal Multiple Access(NOMA)has already proven to be an effective multiple access scheme for5th Generation(5G)wireless networks.It provides improved performance in terms of system throughput,spectral efficiency,fairness,and energy efficiency(EE).However,in conventional NOMA networks,performance degradation still exists because of the stochastic behavior of wireless channels.To combat this challenge,the concept of Intelligent Reflecting Surface(IRS)has risen to prominence as a low-cost intelligent solution for Beyond 5G(B5G)networks.In this paper,a modeling primer based on the integration of these two cutting-edge technologies,i.e.,IRS and NOMA,for B5G wireless networks is presented.An in-depth comparative analysis of IRS-assisted Power Domain(PD)-NOMA networks is provided through 3-fold investigations.First,a primer is presented on the system architecture of IRS-enabled multiple-configuration PD-NOMA systems,and parallels are drawn with conventional network configurations,i.e.,conventional NOMA,Orthogonal Multiple Access(OMA),and IRS-assisted OMA networks.Followed by this,a comparative analysis of these network configurations is showcased in terms of significant performance metrics,namely,individual users'achievable rate,sum rate,ergodic rate,EE,and outage probability.Moreover,for multi-antenna IRS-enabled NOMA networks,we exploit the active Beamforming(BF)technique by employing a greedy algorithm using a state-of-the-art branch-reduceand-bound(BRB)method.The optimality of the BRB algorithm is presented by comparing it with benchmark BF techniques,i.e.,minimum-mean-square-error,zero-forcing-BF,and maximum-ratio-transmission.Furthermore,we present an outlook on future envisioned NOMA networks,aided by IRSs,i.e.,with a variety of potential 
applications for 6G wireless networks.This work presents a generic performance assessment toolkit for wireless networks,focusing on IRS-assisted NOMA networks.This comparative analysis provides a solid foundation for the development of future IRS-enabled,energy-efficient wireless communication systems.展开更多
Model accuracy and runtime are two key issues for flood warnings in rivers. Traditional hydrodynamic models, which have a rigorous physical mechanism for flood routing, have been widely adopted for water level prediction in river, lake, and urban areas. However, these models require various types of data, in-depth domain knowledge, experience with modeling, and intensive computational time, which hinders short-term or real-time prediction. In this paper, we propose a new framework based on machine learning methods to alleviate these limitations. We develop a wide range of machine learning models, such as linear regression (LR), support vector regression (SVR), random forest regression (RFR), multilayer perceptron regression (MLPR), and light gradient boosting machine regression (LGBMR), to predict the hourly water level at the Le Thuy and Kien Giang stations of the Kien Giang river based on data collected in 2010, 2012, and 2020. Four evaluation metrics, namely R^(2), Nash-Sutcliffe efficiency, mean absolute error, and root mean square error, are employed to examine the reliability of the proposed models. The results show that the LR model outperforms the SVR, RFR, MLPR, and LGBMR models.
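The Nash-Sutcliffe efficiency and error metrics used to rank these models are standard formulas; the pure-Python sketch below illustrates them (function names and sample data are illustrative, not taken from the Kien Giang records):

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE over variance of observations about their mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def rmse(obs, sim):
    """Root mean square error."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):
    """Mean absolute error."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

# Hypothetical hourly water levels (m): observed vs. predicted.
obs = [1.2, 1.5, 1.9, 2.4, 2.0]
sim = [1.1, 1.6, 2.0, 2.3, 2.1]
print(round(nse(obs, sim), 3), round(rmse(obs, sim), 3), round(mae(obs, sim), 3))
```

An NSE close to 1 indicates the model explains almost all the variance of the observed series; RMSE and MAE are in the units of the predicted water level.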
Funding: Supported by the Ministry of Education and Science of the Republic of North Macedonia through the project "Utilizing AI and National Large Language Models to Advance Macedonian Language Capabilities".
Abstract: Artificial intelligence (AI) is reshaping financial systems and services, as intelligent AI agents increasingly form the foundation of autonomous, goal-driven systems capable of reasoning, learning, and action. This review synthesizes recent research and developments in the application of AI agents across core financial domains. Specifically, it covers the deployment of agent-based AI in algorithmic trading, fraud detection, credit risk assessment, robo-advisory, and regulatory compliance (RegTech). The review focuses on advanced agent-based methodologies, including reinforcement learning, multi-agent systems, and autonomous decision-making frameworks, particularly those leveraging large language models (LLMs), contrasting these with traditional AI or purely statistical models. Our primary goals are to consolidate current knowledge, identify significant trends and architectural approaches, review the practical efficiency and impact of current applications, and delineate key challenges and promising future research directions. The increasing sophistication of AI agents offers unprecedented opportunities for innovation in finance, yet presents complex technical, ethical, and regulatory challenges that demand careful consideration and proactive strategies. This review aims to provide a comprehensive understanding of this rapidly evolving landscape, highlighting the role of agent-based AI in the ongoing transformation of the financial industry, and is intended to serve financial institutions, regulators, investors, analysts, researchers, and other key stakeholders in the financial ecosystem.
Funding: The authors thank Taif University, Taif, Saudi Arabia, for supporting this work through the Taif University Researchers Supporting Project TURSP 2020/34.
Abstract: Classification of electroencephalogram (EEG) signals for humans can be achieved via artificial intelligence (AI) techniques. In particular, EEG signals associated with epileptic seizures can be detected to distinguish between epileptic and non-epileptic regions. From this perspective, an automated AI technique combined with digital signal processing can be used to analyze these signals. This paper proposes two classifiers, long short-term memory (LSTM) and support vector machine (SVM), for the classification of seizure and non-seizure EEG signals. These classifiers are applied to a public dataset from the University of Bonn, which consists of two classes: seizure and non-seizure. In addition, a fast Walsh-Hadamard transform (FWHT) technique is implemented to analyze the EEG signals within the recurrence space of the brain, yielding the Hadamard coefficients of the EEG signals. The FWHT also contributes to an efficient discrimination of seizure EEG recordings from non-seizure ones. A k-fold cross-validation technique is applied to validate the performance of the proposed classifiers. The LSTM classifier provides the best performance, with a testing accuracy of 99.00%. The training and testing loss rates for the LSTM are 0.0029 and 0.0602, respectively, while its weighted average precision, recall, and F1-score are all 99.00%. The SVM classifier reaches an accuracy of 91%, a sensitivity of 93.52%, and a specificity of 91.3%. The computational times consumed for training the LSTM and SVM are 2000 s and 2500 s, respectively. The results show that the LSTM classifier outperforms the SVM in the classification of EEG signals. Overall, the proposed classifiers provide high classification accuracy compared to previously published classifiers.
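The fast Walsh-Hadamard transform named in this abstract is a standard O(n log n) butterfly algorithm over signals whose length is a power of two; the pure-Python sketch below is a generic implementation, not the paper's:

```python
def fwht(signal):
    """Iterative fast Walsh-Hadamard transform (signal length must be a power of 2)."""
    a = list(signal)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y  # butterfly: sum and difference
        h *= 2
    return a

# A constant signal concentrates all energy in the first (DC) Hadamard coefficient:
print(fwht([1, 1, 1, 1]))  # [4, 0, 0, 0]
```

Applying `fwht` twice recovers the input scaled by its length, since the Hadamard matrix is its own inverse up to a factor of n; the coefficients would serve as features for the LSTM/SVM classifiers.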
Abstract: This report deals with some characteristics of the electric power system of Bulgaria. Emphasis is put on the benefits of connecting small photovoltaic plants in the tourist areas of the country, with the town of Pomorie examined as an example. Data on the quality of the consumed electric energy and its price for a four-member family are presented. The amount of solar radiation for the town of Pomorie is audited through PVGIS (Photovoltaic Geographical Information System). The types of photovoltaic panels offered on the market by manufacturers are discussed in terms of power efficiency. A model is developed for creating a photovoltaic system on the roof of a house inhabited by several families, and the cost of the electricity generated by the proposed system is calculated. This cost is compared with that of the electricity supplied in the town of Pomorie by the provider EVN (Energie Vernünftig Nutzen).
Funding: Supported by the Startup Foundation for Introducing Talent of NUIST, Nanjing, China (Project No. 2243141701103).
Abstract: Ohrid trout (Salmo letnica) is an endemic fish species found in Lake Ohrid in the Former Yugoslav Republic of Macedonia (FYROM). The growth of Ohrid trout was examined in a controlled environment for a certain period, after which the fish were released into the lake to grow the natural population. The external features of the fish were measured regularly during the cultivation period in the laboratory to monitor their growth. A computational model based on data mining methods can provide a fast, accurate, reliable, automatic, and improved growth-monitoring and classification procedure for Ohrid trout. With this motivation, a combined approach of principal component analysis (PCA) and support vector machine (SVM) has been implemented for the visual discrimination and quantitative classification of experimentally and naturally bred Ohrid trout and their growth stages. The PCA results in better discrimination of the breeding categories of Ohrid trout at different development phases, while a maximum classification accuracy of 98.33% was achieved using the combination of PCA and SVM. The classification performance of this combination has been compared to combinations of PCA with other classification methods (multilayer perceptron, naive Bayes, random committee, decision stump, random forest, and random tree). In addition, the classification accuracy of a multilayer perceptron using the original features has been studied.
Funding: This work is supported by the Universiti Kebangsaan Malaysia research grant GGPM 2020-005.
Abstract: This study presents an Epsilon Mu near-zero (EMNZ) nanostructured metamaterial absorber (NMMA) for visible-regime applications. The resonator and dielectric layers are made of tungsten (W) and fused quartz, and the working band is expanded by changing the resonator layer's design. Due to perfect impedance matching with plasmonic resonance characteristics, the proposed NMMA structure achieves excellent absorption of 99.99% at 571 THz, 99.50% at 488.26 THz, and 99.32% at 598 THz. The absorption mechanism is demonstrated through the theory of impedance as well as electric field and power loss density distributions. The geometric parameters are explored and analyzed to show the structure's performance, and a near-field pattern is used to explain the absorption mechanism at the resonance frequency point. Numerical analysis shows that the proposed structure exhibits more than 80% absorptivity between 550 and 900 THz. The Computer Simulation Technology (CST Microwave Studio 2019) software is used to design the proposed structure, and the CST results are cross-validated against HFSS simulations based on the finite element method (FEM). The proposed NMMA structure also exhibits glucose-concentration sensing capability as an application. The proposed broadband absorber may therefore have potential applications in THz sensing, imaging (MRI, thermal, color), solar energy harvesting, light modulators, and optoelectronic devices.
Abstract: Gliomas are the most aggressive brain tumors, caused by the abnormal growth of brain tissues. The life expectancy of patients diagnosed with gliomas decreases exponentially; most gliomas are diagnosed in later stages, resulting in imminent death, and on average patients do not survive 14 months after diagnosis. The only way to minimize the impact of this disease is through early diagnosis. Magnetic Resonance Imaging (MRI) scans, because of their better tissue contrast, are most frequently used to assess brain tissues. Manual classification of MRI scans takes a considerable amount of time and is cumbersome, which affects classification accuracy. To address this problem, researchers have developed automatic and semi-automatic methods that help automate the brain tumor classification task. Although many techniques have been devised, existing methods still struggle to characterize the enhancing region, because its low variance gives poor contrast in MRI scans. In this study, we propose a novel deep-learning-based method consisting of a series of steps, namely data pre-processing, patch extraction, patch pre-processing, and a deep learning model with tuned hyper-parameters, to classify all types of gliomas with a focus on the enhancing region. Our trained model achieved better results for all glioma classes, including the enhancing region. The improved performance of our technique can be attributed to several factors. First, the non-local means filter in the pre-processing step improved image detail while removing irrelevant noise. Second, the architecture we employ can capture the non-linearity of all classes, including the enhancing region. Overall, the segmentation scores achieved on the Dice Similarity Coefficient (DSC) metric for the normal, necrosis, edema, enhancing, and non-enhancing tumor classes are 0.95, 0.97, 0.91, 0.93, and 0.95, respectively.
Abstract: The identification and classification of collective human activities are gaining momentum as significant themes in machine learning, with many potential applications emerging. Representation of collective human behavior is especially crucial in applications such as assessing security conditions and preventing crowd congestion. This paper investigates the capability of deep neural network (DNN) algorithms within a carefully engineered pipeline for crowd analysis. The pipeline includes three principal stages that cover the main challenges of crowd analysis. First, individuals are detected using the You Only Look Once (YOLO) model, with a Kalman filter for multiple-human tracking. Second, the density map and crowd count of a given location are generated from the bounding boxes produced by the human detector. Finally, to classify crowds as normal or abnormal, individual activities are identified with pose estimation. The proposed system builds an effective collective representation of the crowd from its individuals and detects significant changes in crowd activities. Experimental results on the MOT20 and SDHA datasets demonstrate that the proposed system is robust and efficient. The framework achieves improved recognition and detection of people, with a mean average precision of 99.0% and a real-time speed of 0.6 ms non-maximum suppression (NMS) per image on the SDHA dataset, and a mean average precision of 95.3% with 1.5 ms NMS per image on MOT20.
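The per-image NMS timings quoted above refer to non-maximum suppression over the detector's candidate boxes; the greedy IoU-based sketch below shows the standard form of that step (box format, scores, and threshold are illustrative assumptions, not the paper's settings):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep highest-scoring boxes, drop any box overlapping a kept one above thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[k]) <= thresh for k in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the second box overlaps the first heavily and is suppressed
```

The surviving boxes would then feed the Kalman-filter tracker and the density-map stage.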
Funding: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ICT Creative Consilience Program (IITP-2021-2020-0-01821) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C1011198).
Abstract: The precise diagnosis of Alzheimer's disease is critical for patient treatment, especially at the early stage, because awareness of the severity and progression risks lets patients take preventative actions before irreversible brain damage occurs. It is possible to gain a holistic view of Alzheimer's disease staging by combining multiple data modalities, known as image fusion. This study proposes the early detection of Alzheimer's disease using different modalities of brain images. First, the data are preprocessed: data augmentation techniques are used to handle overfitting, and the skull is removed to improve classification. In the second phase, two fusion stages are used: pixel level (early fusion) and feature level (late fusion). We fused magnetic resonance imaging (MRI) and positron emission tomography (PET) images using early fusion (Laplacian re-decomposition) and late fusion (canonical correlation analysis). The proposed system uses MRI and PET to take advantage of each: MRI provides images with excellent spatial resolution and structural information for specific organs, while PET images provide functional information and the metabolism of particular tissues, helping clinicians detect disease and tumor progression at an early stage. Third, features of the fused images are extracted using a convolutional neural network; in the case of late fusion, the features are extracted first and then fused. Finally, the proposed system applies XGBoost (XGB) to classify Alzheimer's disease. The system's performance was evaluated using accuracy, specificity, and sensitivity. All medical data were retrieved in 2D format at 256×256 pixels. The classifiers were optimized to achieve the final results: for the decision tree, the maximum tree depth was 2; the best number of trees for the random forest was 60; and for the support vector machine, the maximum depth was 4 and the kernel gamma was 0.01. The system achieved an accuracy of 98.06%, a specificity of 94.32%, and a sensitivity of 97.02% with early fusion, and an accuracy of 99.22%, a specificity of 96.54%, and a sensitivity of 99.54% with late fusion.
Funding: Supported by the National Natural Science Foundation of China (Nos. 61301170, 61571340), the Fundamental Research Funds for the Central Universities (No. JB150109), and the 111 Project (No. B08038).
Abstract: Given imperfect channel state information (CSI) and considering interference from the primary transmitter, an underlay cognitive multi-source multi-destination relay network is proposed. A closed-form exact outage probability and an asymptotic outage probability are derived for the secondary system of the network. The results show that the outage probability is influenced by the numbers of sources and destinations, the CSI imperfection, and the interference from the primary transmitter, while the diversity order is independent of the CSI imperfection and the primary-transmitter interference and equals the minimum of the numbers of sources and destinations. Moreover, extensive simulations are conducted with different system parameters to verify the theoretical analysis.
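The diversity-order claim can be summarized by the standard high-SNR outage scaling; the symbols below are generic illustrations of that relationship, not the paper's exact derivation:

```latex
% Asymptotic outage of the secondary system at average SNR \bar{\gamma}:
P_{\mathrm{out}}(\bar{\gamma}) \;\approx\; \frac{C}{\bar{\gamma}^{\,d}},
\qquad
d \;=\; -\lim_{\bar{\gamma}\to\infty}
\frac{\log P_{\mathrm{out}}(\bar{\gamma})}{\log \bar{\gamma}}
\;=\; \min(M, N),
```

where M and N denote the numbers of sources and destinations, and the constant C absorbs the CSI imperfection and the primary-transmitter interference: these shift the outage curve but not its slope, which is exactly why the diversity order is independent of them.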
Abstract: Manual investigation of chest radiography (CXR) images by physicians is crucial for effective decision-making in COVID-19 diagnosis. However, the high demand during the pandemic necessitates auxiliary help through image analysis and machine learning techniques. This study presents a multi-threshold-based segmentation technique to probe high-pixel-intensity regions in CXR images of various pathologies, including normal cases. Texture information is extracted using gray-level co-occurrence matrix (GLCM) based features, while vessel-like features are obtained using Frangi, Sato, and Meijering filters. Machine learning models employing Decision Tree (DT) and Random Forest (RF) approaches are designed to categorize CXR images into common lung infections, lung opacity (LO), COVID-19, and viral pneumonia (VP). The results demonstrate that the fusion of texture- and vessel-based features provides an effective ML model for aiding diagnosis. Model validation using performance measures, including an accuracy of approximately 91.8% with an RF-based classifier, supports the usefulness of the feature set and classifier model in categorizing the four pathologies. Furthermore, the study investigates the importance of the devised features in identifying the underlying pathology and incorporates histogram-based analysis. This analysis reveals varying natural pixel distributions in CXR images belonging to the normal, COVID-19, LO, and VP groups, motivating the incorporation of additional features such as the mean, standard deviation, skewness, and percentiles of the filtered images. Notably, the study achieves a considerable improvement in distinguishing COVID-19 from LO, with a true positive rate of 97%, further substantiating the effectiveness of the methodology.
Abstract: Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles as well as radiological imaging and histopathological slides, makes this approach increasingly important, because, when examined separately, these data sources offer only a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, such as (1) radiology, where automated report generation and lesion detection are facilitated by image-text integration; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where molecular subtypes and latent biomarkers are revealed through cross-modal learning. We provide an overview of representative research, methodological advancements, and clinical consequences for each domain. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of maintaining significant cross-modal correlations during fusion. These problems impede interpretability, which is crucial for clinical trust and use, in addition to performance and generalizability. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of cooperation for federated multimodal learning. Our goal is to provide researchers and clinicians with a concise overview of the field's present state, enduring constraints, and promising directions for further research.
Abstract: Hurricane Ida ferociously affected many southeastern and eastern parts of the United States, making it one of the strongest hurricanes in recent years. An advanced forecast and warning tool was used to track the path of the ex-hurricane Ida as it left New Orleans on its way northeast, accurately predicting significant supercell development above New York City on September 01, 2021. This advanced method accurately detected the area with the highest possible level of convective instability with a 24-h lead time, reaching Level 5 of the system's categorical outlooks legend; such an extreme level implies a very high probability of the local-scale hazard occurring above NYC. Cloud-model output fields (updrafts and downdrafts, wind shear, near-surface convergence, and the vertical component of relative vorticity) show the rapid development of a strong supercell storm with rotating updrafts and a mesocyclone. The characteristic hook-shaped echo signature visible in the reflectivity patterns indicates a highly precipitable (HP) supercell with the possibility of tornado initiation. Open boundary conditions represent a good basis for simulating a tornado that evolved from a supercell storm, initialized with data obtained from a real-time simulation in the period when the bow echo and tornado-like signature occurred. The modeled results agree well with the observations.
Abstract: Graph-based image classification has emerged as a powerful alternative to traditional convolutional approaches, leveraging the relational structure between image regions to improve accuracy. This paper presents an enhanced graph-based image classification framework that integrates convolutional neural network (CNN) features with graph convolutional network (GCN) learning, leveraging superpixel-based image representations. The framework first segments input images into significant superpixels, reducing computational complexity while preserving essential spatial structures. A pre-trained CNN backbone extracts both global and local features from these superpixels, capturing critical texture and shape information. These features are structured into a graph, and a graph classification model learns and propagates relationships between nodes, improving global contextual understanding. By combining the strengths of CNN-based feature extraction and graph-based relational learning, the method achieves higher accuracy, faster training, and greater robustness in image classification tasks. Experimental evaluations on four agricultural datasets demonstrate the proposed model's superior performance, with accuracy rates of 96.57%, 99.63%, 95.19%, and 90.00% on the Tomato Leaf Disease, Dragon Fruit, Tomato Ripeness, and Dragon Fruit and Leaf datasets, respectively. The model consistently outperforms conventional CNN (89.27%–94.23% accuracy), ViT (89.45%–99.77% accuracy), VGG16 (93.97%–99.52% accuracy), and ResNet50 (86.67%–99.26% accuracy) methods across all datasets, with particularly significant improvements on challenging datasets such as Tomato Ripeness (95.19% vs. 86.67%–94.44%) and Dragon Fruit and Leaf (90.00% vs. 82.22%–83.97%). The compact superpixel representation and efficient feature-propagation mechanism further accelerate learning compared to traditional CNN and graph-based approaches.
Abstract: Forecasting energy demand is essential for optimizing energy generation and effectively predicting power system needs. Recently, many researchers have developed various models on tabular datasets to enhance the effectiveness of demand prediction, including neural networks, machine learning, deep learning, and advanced architectures such as CNNs and LSTMs. However, research on CNN models has struggled to provide reliable outcomes due to insufficient dataset sizes, repeated investigations, and inappropriate baseline selection. To address these challenges, we propose a Tabular-data-based Lightweight Convolutional Neural Network (TLCNN) model for predicting energy demand. It frames the problem as a regression task that effectively captures complex data trends for accurate forecasting. The BanE-16 dataset is preprocessed using normalization techniques for categorical and numerical data before training the model. The proposed approach dynamically selects relevant features through a two-dimensional convolutional structure that improves adaptability. The model's performance is evaluated using MSE, MAE, and accuracy metrics. Experimental results show that TLCNN achieves a 10.89% lower MSE than traditional ML algorithms, demonstrating superior predictive capability. Additionally, TLCNN's lightweight structure enhances generalization while reducing computational costs, making it suitable for real-world energy forecasting tasks. This study contributes to energy informatics by introducing an optimized deep-learning framework that improves demand prediction while ensuring robustness and adaptability for tabular data.
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R896).
Abstract: Deep neural networks have achieved excellent classification results on several computer vision benchmarks. This has led to the popularity of machine learning as a service, where trained algorithms are hosted on the cloud and inference can be obtained on real-world data. In most applications, it is important to compress the vision data due to the enormous bandwidth and memory requirements. Video codecs exploit spatial and temporal correlations to achieve high compression ratios, but they are computationally expensive. This work computes the motion fields between consecutive frames to facilitate the efficient classification of videos. However, contrary to the normal practice of reconstructing full-resolution frames through motion compensation, this work proposes to infer the class label directly from the block-based computed motion fields. Motion fields are a richer and more complex representation than raw motion vectors, where each motion vector carries magnitude and direction information. This approach has two advantages: the cost of motion compensation and video decoding is avoided, and the dimensions of the input signal are greatly reduced, which allows a shallower classification network. The neural network can be trained on motion vectors in two ways: as complex representations or as magnitude-direction pairs. The proposed work trains a convolutional neural network on the direction and magnitude tensors of the motion fields. Our experimental results show 20× faster convergence during training, reduced overfitting, and accelerated inference on a hand gesture recognition dataset compared to full-resolution and downsampled frames. We validate the proposed methodology on the HGds dataset, achieving a testing accuracy of 99.21%, on the HMDB51 dataset, achieving 82.54% accuracy, and on the UCF101 dataset, achieving 97.13% accuracy, outperforming state-of-the-art methods in computational efficiency.
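Converting block motion vectors into the magnitude-direction pairs described above is a small pointwise transformation; a hedged pure-Python sketch (function name and sample field are illustrative, not from the paper):

```python
import math

def to_mag_dir(motion_field):
    """Convert block motion vectors (dx, dy) into (magnitude, direction) pairs.
    Magnitude via hypot; direction as the angle in radians from atan2."""
    return [(math.hypot(dx, dy), math.atan2(dy, dx)) for dx, dy in motion_field]

# Hypothetical motion vectors for three blocks of a frame pair:
field = [(3, 4), (0, -2), (-1, 0)]
for mag, ang in to_mag_dir(field):
    print(round(mag, 2), round(ang, 2))
```

In the paper's setting these pairs would be stacked into two tensors (one per channel) matching the block grid, forming the low-dimensional CNN input.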
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52005103, 71801046, 51775112, 51975121), the Guangdong Province Basic and Applied Basic Research Foundation of China (Grant No. 2019B1515120095), the Intelligent Manufacturing PHM Innovation Team Program (Grant Nos. 2018KCXTD029, TDYB2019010), and the MoST International Cooperation Program (6-14).
Abstract: Supervised fault diagnosis typically assumes that all types of machinery failure are known. In practice, however, unknown types of defect, i.e., novelties, may occur, and their detection is a challenging task. In this paper, a novel fault diagnostic method is developed for both diagnostics and the detection of novelties. To this end, a sparse autoencoder-based multi-head Deep Neural Network (DNN) is presented to jointly learn a shared encoding representation for both unsupervised reconstruction and supervised classification of the monitoring data. The detection of novelties is based on the reconstruction error. Moreover, the computational burden is reduced by directly training the multi-head DNN with the rectified linear unit activation function, instead of performing the pre-training and fine-tuning phases required for classical DNNs. The method is applied to a benchmark bearing case study and to experimental data acquired from a delta 3D printer. The results show that its performance is satisfactory in both novelty detection and fault diagnosis, outperforming other state-of-the-art methods. This research thus proposes a fault diagnostic method that can not only diagnose known types of defect but also detect unknown ones.
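Reconstruction-error-based novelty detection typically compares the autoencoder's error on a new sample against statistics collected on healthy training data; the sketch below shows one common decision rule (the mean-plus-k-sigma threshold is an illustrative choice, not necessarily the one used in the paper):

```python
def novelty_threshold(train_errors, k=3.0):
    """Set the novelty threshold at mean + k * std of reconstruction errors
    measured on known (non-novel) training data."""
    n = len(train_errors)
    mean = sum(train_errors) / n
    var = sum((e - mean) ** 2 for e in train_errors) / n
    return mean + k * var ** 0.5

def is_novelty(error, threshold):
    """A sample whose reconstruction error exceeds the threshold is flagged as a novelty."""
    return error > threshold

# Hypothetical reconstruction errors on healthy monitoring data:
train = [0.10, 0.12, 0.11, 0.09, 0.13]
t = novelty_threshold(train)
print(is_novelty(0.95, t), is_novelty(0.11, t))
```

Samples below the threshold would be passed on to the classification head for ordinary fault diagnosis; samples above it are reported as unknown defect types.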
Funding: This work is supported by the Universiti Kebangsaan Malaysia research grant GUP-2020-074.
Abstract: A broadband-response metamaterial absorber (MMA) remains a challenge among researchers. A new nanostructured zero-index metamaterial (ZIM) absorber is presented in this study, constructed with a hexagonal resonator for optical-region applications. The design consists of resonator and dielectric layers made of tungsten and fused quartz. The proposed absorber exhibits an average absorption of more than 0.8972 (89.72%) within the visible wavelength range of 450–600 nm and nearly perfect absorption of 0.99 (99%) at 461.61 nm. Based on computational analysis, the proposed absorber can be characterized as a ZIM. The developed ZIM absorber demonstrates plasmonic resonance characteristics and a perfect impedance match. Over incidence angles of 0°–90° in both TE and TM modes, the maximum absorbance exceeds 0.8972 (~89.72%), and angular stability up to 45° makes the design suitable for solar-cell applications such as solar energy harvesting. The proposed structure is designed and simulated with the Computer Simulation Technology (CST) microwave tools. The finite integration technique (FIT) based simulator CST and the finite element method (FEM) based simulator HFSS are used to cross-validate the numerical data of the proposed ZIM absorber. The proposed MMA design is appropriate for substantial absorption, wide-angle stability, absolute invisible layers, magnetic resonance imaging (MRI), color imaging, and thermal imaging applications.
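The "perfect impedance match" interpretation rests on the standard absorber relations; the forms below are the generic textbook expressions, not reproduced from the paper:

```latex
% Absorptivity from scattering parameters; A -> 1 when both reflection
% and transmission vanish:
A(\omega) \;=\; 1 - \lvert S_{11}(\omega)\rvert^{2} - \lvert S_{21}(\omega)\rvert^{2},
\qquad
z(\omega) \;=\; \sqrt{\frac{\mu_{\mathrm{eff}}(\omega)}{\varepsilon_{\mathrm{eff}}(\omega)}},
```

where reflection is suppressed (S11 → 0) when the normalized effective impedance z matches free space (z → 1); with transmission through the metal-backed structure negligible, near-unity absorption follows.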
Funding: Funded by the Thuyloi University Foundation for Science and Technology under Grant Number TLU.STF.19-02.
Abstract: In image processing, one of the most important steps is image segmentation. The objects in remote sensing images often have to be detected before the next steps in image processing can be performed. Remote sensing images usually have large sizes and various spatial resolutions, so detecting objects in them is complicated. In this paper, we develop a model to detect objects in remote sensing images based on the combination of picture fuzzy clustering and the MapReduce method (denoted as MPFC). First, picture fuzzy clustering is applied to segment the input images. Then, MapReduce is used to reduce the runtime while preserving segmentation quality. To convert the data for MapReduce processing, two new procedures are introduced, Map_PFC and Reduce_PFC, whose formal representation and details are presented in this paper. Experiments on satellite and remote sensing image datasets are given to evaluate the proposed model. Validity indices and runtime are used to compare the proposed model with the plain picture fuzzy clustering model. The validity indices show that picture fuzzy clustering integrated with MapReduce yields better segmentation quality than picture fuzzy clustering alone. Moreover, on the two selected image datasets, the runtime of the MPFC model is much lower than that of picture fuzzy clustering.
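The map/reduce split used by Map_PFC and Reduce_PFC can be sketched generically: each map call emits per-cluster partial sums for a single pixel, and the reduce step aggregates them into updated cluster centroids, so pixels can be processed independently and in parallel. The sketch below uses plain fuzzy c-means memberships on a toy 1-D grey-level image rather than full picture fuzzy clustering (which also carries neutrality and refusal degrees); all function names and values are illustrative assumptions, not the paper's procedures.

```python
from collections import defaultdict

def fcm_map(pixel, centroids, m=2.0):
    """Map step: emit (cluster, partial sums) pairs for one pixel."""
    d = [abs(pixel - c) + 1e-9 for c in centroids]   # distances, kept nonzero
    out = []
    for j in range(len(centroids)):
        # Standard fuzzy c-means membership of this pixel in cluster j.
        u = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
        out.append((j, (u ** m * pixel, u ** m)))
    return out

def fcm_reduce(emitted):
    """Reduce step: aggregate partial sums into updated centroids."""
    num, den = defaultdict(float), defaultdict(float)
    for j, (weighted_pixel, weight) in emitted:
        num[j] += weighted_pixel
        den[j] += weight
    return [num[j] / den[j] for j in sorted(num)]

# One distributed iteration over a toy 1-D "image" of grey levels.
pixels = [0.05, 0.1, 0.12, 0.8, 0.85, 0.9]
centroids = [0.0, 1.0]
emitted = [kv for p in pixels for kv in fcm_map(p, centroids)]
centroids = fcm_reduce(emitted)
```

After one iteration the two centroids move toward the dark and bright pixel groups; in a real MapReduce deployment the `emitted` list would be shuffled by key across workers instead of built in one process.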
Funding: Supported by the Higher Education Commission (HEC) of Pakistan through its National Research Program for Universities (NRPU) [Ref. No. 20-14560/NRPU/R&D/HEC/2021]; support provided by HEC.
Abstract: Non-Orthogonal Multiple Access (NOMA) has already proven to be an effective multiple access scheme for 5th Generation (5G) wireless networks. It provides improved performance in terms of system throughput, spectral efficiency, fairness, and energy efficiency (EE). However, in conventional NOMA networks, performance degradation still exists because of the stochastic behavior of wireless channels. To combat this challenge, the concept of the Intelligent Reflecting Surface (IRS) has risen to prominence as a low-cost intelligent solution for Beyond 5G (B5G) networks. In this paper, a modeling primer based on the integration of these two cutting-edge technologies, i.e., IRS and NOMA, for B5G wireless networks is presented. An in-depth comparative analysis of IRS-assisted Power Domain (PD)-NOMA networks is provided through a three-fold investigation. First, a primer is presented on the system architecture of IRS-enabled multiple-configuration PD-NOMA systems, and parallels are drawn with conventional network configurations, i.e., conventional NOMA, Orthogonal Multiple Access (OMA), and IRS-assisted OMA networks. This is followed by a comparative analysis of these network configurations in terms of significant performance metrics, namely, individual users' achievable rate, sum rate, ergodic rate, EE, and outage probability. Moreover, for multi-antenna IRS-enabled NOMA networks, we exploit the active Beamforming (BF) technique by employing a greedy algorithm using a state-of-the-art branch-reduce-and-bound (BRB) method. The optimality of the BRB algorithm is demonstrated by comparing it with benchmark BF techniques, i.e., minimum mean square error, zero-forcing BF, and maximum ratio transmission. Furthermore, we present an outlook on future envisioned NOMA networks aided by IRSs, with a variety of potential applications for 6G wireless networks. This work presents a generic performance assessment toolkit for wireless networks, focusing on IRS-assisted NOMA networks, and the comparative analysis provides a solid foundation for the development of future IRS-enabled, energy-efficient wireless communication systems.
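The achievable-rate metrics compared in such studies follow from the standard power-domain NOMA decoding model: the far (weak) user decodes its own signal while treating the near user's signal as interference, and the near user first removes the far user's signal via successive interference cancellation (SIC). A minimal two-user downlink sketch with hypothetical power and channel values (no IRS, unit noise power):

```python
import math

# Hypothetical two-user downlink PD-NOMA link, noise power N0 = 1.
P = 10.0                   # total transmit power
g_far, g_near = 0.2, 1.0   # channel gains |h|^2 (far user is weaker)
a_far, a_near = 0.8, 0.2   # power allocation: more power to the weak user

# Far user: decodes its signal, near user's signal acts as interference.
R_far = math.log2(1 + a_far * P * g_far / (a_near * P * g_far + 1.0))
# Near user: cancels the far user's (stronger) signal, then decodes its own.
R_near = math.log2(1 + a_near * P * g_near / 1.0)
sum_rate = R_far + R_near

# OMA baseline: each user gets an orthogonal half-time slot at full power.
R_oma = 0.5 * (math.log2(1 + P * g_far) + math.log2(1 + P * g_near))
```

For these values the NOMA sum rate exceeds the OMA baseline, the qualitative gap the comparative analysis quantifies; an IRS would enter this model by reshaping the effective gains `g_far` and `g_near` through its reflection phases.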
Funding: Scientific Research and Technology Development Project.
Abstract: Model accuracy and runtime are two key issues for flood warnings in rivers. Traditional hydrodynamic models, which have a rigorous physical basis for flood routing, have been widely adopted for water level prediction in rivers, lakes, and urban areas. However, these models require various types of data, in-depth domain knowledge, modeling experience, and intensive computational time, which hinders short-term or real-time prediction. In this paper, we propose a new framework based on machine learning methods to alleviate these limitations. We develop a wide range of machine learning models, namely linear regression (LR), support vector regression (SVR), random forest regression (RFR), multilayer perceptron regression (MLPR), and light gradient boosting machine regression (LGBMR), to predict the hourly water level at the Le Thuy and Kien Giang stations of the Kien Giang river based on data collected in 2010, 2012, and 2020. Four evaluation metrics, namely R^(2), Nash-Sutcliffe efficiency, mean absolute error, and root mean square error, are employed to examine the reliability of the proposed models. The results show that the LR model outperforms the SVR, RFR, MLPR, and LGBMR models.
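A minimal version of this model-comparison loop can be sketched with scikit-learn. The data here are synthetic (a lagged, noisy rainfall-to-level relation standing in for the Kien Giang gauge records), MLPR and LGBMR are omitted for brevity, and on a held-out split the Nash-Sutcliffe efficiency coincides with the usual R² definition, so one score covers both metrics.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

rng = np.random.default_rng(42)

# Synthetic stand-in for hourly records: water level as a lagged, noisy
# linear function of upstream rainfall.
n = 600
rain = rng.gamma(2.0, 2.0, size=n)
level = 1.5 + 0.3 * rain + 0.2 * np.roll(rain, 1) + rng.normal(0, 0.1, n)
X = np.column_stack([rain, np.roll(rain, 1)])
X_tr, X_te, y_tr, y_te = X[:480], X[480:], level[:480], level[480:]

models = {
    "LR": LinearRegression(),
    "SVR": SVR(),
    "RFR": RandomForestRegressor(n_estimators=100, random_state=0),
}
scores = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    scores[name] = {
        "R2/NSE": r2_score(y_te, pred),          # NSE == R^2 on the test split
        "MAE": mean_absolute_error(y_te, pred),
        "RMSE": mean_squared_error(y_te, pred) ** 0.5,
    }
```

Because the synthetic relation is linear, LR recovers it almost exactly, loosely mirroring the paper's finding that LR outperformed the more flexible models on this prediction task.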