Manual investigation of chest radiography(CXR)images by physicians is crucial for effective decision-making in COVID-19 diagnosis.However,the high demand during the pandemic necessitates auxiliary help through image a...Manual investigation of chest radiography(CXR)images by physicians is crucial for effective decision-making in COVID-19 diagnosis.However,the high demand during the pandemic necessitates auxiliary help through image analysis and machine learning techniques.This study presents a multi-threshold-based segmentation technique to probe high pixel intensity regions in CXR images of various pathologies,including normal cases.Texture information is extracted using gray co-occurrence matrix(GLCM)-based features,while vessel-like features are obtained using Frangi,Sato,and Meijering filters.Machine learning models employing Decision Tree(DT)and RandomForest(RF)approaches are designed to categorize CXR images into common lung infections,lung opacity(LO),COVID-19,and viral pneumonia(VP).The results demonstrate that the fusion of texture and vesselbased features provides an effective ML model for aiding diagnosis.The ML model validation using performance measures,including an accuracy of approximately 91.8%with an RF-based classifier,supports the usefulness of the feature set and classifier model in categorizing the four different pathologies.Furthermore,the study investigates the importance of the devised features in identifying the underlying pathology and incorporates histogrambased analysis.This analysis reveals varying natural pixel distributions in CXR images belonging to the normal,COVID-19,LO,and VP groups,motivating the incorporation of additional features such as mean,standard deviation,skewness,and percentile based on the filtered images.Notably,the study achieves a considerable improvement in categorizing COVID-19 from LO,with a true positive rate of 97%,further substantiating the effectiveness of the methodology implemented.展开更多
Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics,advancing precision medicine by enabling integration and learning from diverse data sources.The exponential growth of high-dim...Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics,advancing precision medicine by enabling integration and learning from diverse data sources.The exponential growth of high-dimensional healthcare data,encompassing genomic,transcriptomic,and other omics profiles,as well as radiological imaging and histopathological slides,makes this approach increasingly important because,when examined separately,these data sources only offer a fragmented picture of intricate disease processes.Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling,more robust disease characterization,and improved treatment decision-making.This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis.We classify and examine important application domains,such as(1)radiology,where automated report generation and lesion detection are facilitated by image-text integration;(2)histopathology,where fusion models improve tumor classification and grading;and(3)multi-omics,where molecular subtypes and latent biomarkers are revealed through cross-modal learning.We provide an overview of representative research,methodological advancements,and clinical consequences for each domain.Additionally,we critically analyzed the fundamental issues preventing wider adoption,including computational complexity(particularly in training scalable,multi-branch networks),data heterogeneity(resulting from modality-specific noise,resolution variations,and inconsistent annotations),and the challenge of maintaining significant cross-modal correlations during fusion.These problems impede interpretability,which is crucial for clinical trust and use,in addition to performance and generalizability.Lastly,we outline important areas for future research,including the development of standardized protocols for harmonizing data,the creation of lightweight and interpretable fusion architectures,the integration of real-time clinical decision support systems,and the promotion of cooperation for federated multimodal learning.Our goal is to provide researchers and clinicians with a concise overview of the field’s present state,enduring constraints,and exciting directions for further research through this review.展开更多
Classification of electroencephalogram(EEG)signals for humans can be achieved via artificial intelligence(AI)techniques.Especially,the EEG signals associated with seizure epilepsy can be detected to distinguish betwee...Classification of electroencephalogram(EEG)signals for humans can be achieved via artificial intelligence(AI)techniques.Especially,the EEG signals associated with seizure epilepsy can be detected to distinguish between epileptic and non-epileptic regions.From this perspective,an automated AI technique with a digital processing method can be used to improve these signals.This paper proposes two classifiers:long short-term memory(LSTM)and support vector machine(SVM)for the classification of seizure and non-seizure EEG signals.These classifiers are applied to a public dataset,namely the University of Bonn,which consists of 2 classes–seizure and non-seizure.In addition,a fast Walsh-Hadamard Transform(FWHT)technique is implemented to analyze the EEG signals within the recurrence space of the brain.Thus,Hadamard coefficients of the EEG signals are obtained via the FWHT.Moreover,the FWHT is contributed to generate an efficient derivation of seizure EEG recordings from non-seizure EEG recordings.Also,a k-fold cross-validation technique is applied to validate the performance of the proposed classifiers.The LSTM classifier provides the best performance,with a testing accuracy of 99.00%.The training and testing loss rates for the LSTM are 0.0029 and 0.0602,respectively,while the weighted average precision,recall,and F1-score for the LSTM are 99.00%.The results of the SVM classifier in terms of accuracy,sensitivity,and specificity reached 91%,93.52%,and 91.3%,respectively.The computational time consumed for the training of the LSTM and SVM is 2000 and 2500 s,respectively.The results show that the LSTM classifier provides better performance than SVM in the classification of EEG signals.Eventually,the proposed classifiers provide high classification accuracy compared to previously published classifiers.展开更多
This report deals with some characteristics of the electric power system of Bulgaria. Emphasis is put on the benefits of joining the small photovoltaic plants in the tourist areas of the country. As an example of that...This report deals with some characteristics of the electric power system of Bulgaria. Emphasis is put on the benefits of joining the small photovoltaic plants in the tourist areas of the country. As an example of that the town of Pomorie is examined. Data on the quality of the consumed electric energy and the price per a four-member family are presented. The amount of the solar radiation for the town of Pomorie is audited through the PVGIS (Photovoltaic Geographical Information System). Discussed are the types of photovoltaic panels offered on the market by manufacturers in terms of the received power efficiency. Developed is a model of creating a photovoltaic system on the roof of a house, inhabited by several families. Calculations are made on the cost of the electricity generated by the proposed system. Compared is the cost of the electricity supplied by the electricity provider EVN (Energie Verntinftig Nutzen) in the town of Pomorie to the one that will be obtained using the proposed PV-system.展开更多
Ohrid trout(Salamo letnica)is an endemic species of fish found in Lake Ohrid in the Former Yugoslav Republic of Macedonia(FYROM).The growth of Ohrid trout was examined in a controlled environment for a certain period,...Ohrid trout(Salamo letnica)is an endemic species of fish found in Lake Ohrid in the Former Yugoslav Republic of Macedonia(FYROM).The growth of Ohrid trout was examined in a controlled environment for a certain period,thereafter released into the lake to grow their natural population.The external features of the fish were measured regularly during the cultivation period in the laboratory to monitor their growth.The data mining methods-based computational model can be used for fast,accurate,reliable,automatic,and improved growth monitoring procedures and classification of Ohrid trout.With this motivation,a combined approach of principal component analysis(PCA)and support vectormachine(SVM)has been implemented for the visual discrimination and quantitative classification of Ohrid trout of the experimental and natural breeding and their growth stages.The PCA results in better discrimination of breeding categories of Ohrid trout at different development phases while the maximum classification accuracy of 98.33% was achieved using the combination of PCA and SVM.The classification performance of the combination of PCA and SVM has been compared to combinations of PCA and other classification methods(multilayer perceptron,naive Bayes,randomcommittee,decision stump,random forest,and random tree).Besides,the classification accuracy of multilayer perceptron using the original features has been studied.展开更多
This study presents an Epsilon Mu near-zero(EMNZ)nanostructured metamaterial absorber(NMMA)for visible regime applications.The resonator and dielectric layers are made of tungsten(W)and quartz(fused),where the working...This study presents an Epsilon Mu near-zero(EMNZ)nanostructured metamaterial absorber(NMMA)for visible regime applications.The resonator and dielectric layers are made of tungsten(W)and quartz(fused),where the working band is expanded by changing the resonator layer’s design.Due to perfect impedance matching with plasmonic resonance characteristics,the proposed NMMA structure is achieved an excellent absorption of 99.99%at 571 THz,99.50%at 488.26 THz,and 99.32%at 598 THz frequencies.The absorption mechanism is demonstrated by the theory of impedance,electric field,and power loss density distributions,respectively.The geometric parameters are explored and analyzed to show the structure’s performance,and a near-field pattern is used to explain the absorption mechanism at the resonance frequency point.The numerical analysis method describes that the proposed structure exhibited more than 80%absorbability between 550 and 900 THz.The Computer Simulation Technology(CST Microwave Studio 2019)software is used to design the proposed structure.Furthermore,CSTHFSS interference is validated by the simulation data with the help of the finite element method(FEM).The proposed NMMA structure is also exhibits glucose concentration sensing capability as applications.So the proposed broadband absorber may have a potential application in THz sensing,imaging(MRI,thermal,color),solar energy harvesting,light modulators,and optoelectronic devices.展开更多
Gliomas are the most aggressive brain tumors caused by the abnormal growth of brain tissues.The life expectancy of patients diagnosed with gliomas decreases exponentially.Most gliomas are diagnosed in later stages,res...Gliomas are the most aggressive brain tumors caused by the abnormal growth of brain tissues.The life expectancy of patients diagnosed with gliomas decreases exponentially.Most gliomas are diagnosed in later stages,resulting in imminent death.On average,patients do not survive 14 months after diagnosis.The only way to minimize the impact of this inevitable disease is through early diagnosis.The Magnetic Resonance Imaging(MRI)scans,because of their better tissue contrast,are most frequently used to assess the brain tissues.The manual classification of MRI scans takes a reasonable amount of time to classify brain tumors.Besides this,dealing with MRI scans manually is also cumbersome,thus affects the classification accuracy.To eradicate this problem,researchers have come up with automatic and semiautomatic methods that help in the automation of brain tumor classification task.Although,many techniques have been devised to address this issue,the existing methods still struggle to characterize the enhancing region.This is because of low variance in enhancing region which give poor contrast in MRI scans.In this study,we propose a novel deep learning based method consisting of a series of steps,namely:data pre-processing,patch extraction,patch pre-processing,and a deep learning model with tuned hyper-parameters to classify all types of gliomas with a focus on enhancing region.Our trained model achieved better results for all glioma classes including the enhancing region.The improved performance of our technique can be attributed to several factors.Firstly,the non-local mean filter in the pre-processing step,improved the image detail while removing irrelevant noise.Secondly,the architecture we employ can capture the non-linearity of all classes including the enhancing region.Overall,the segmentation scores achieved on the Dice Similarity Coefficient(DSC)metric for normal,necrosis,edema,enhancing and non-enhancing tumor classes are 0.95,0.97,0.91,0.93,0.95;respectively.展开更多
The identification and classification of collective people’s activities are gaining momentum as significant themes in machine learning,with many potential applications emerging.The need for representation of collecti...The identification and classification of collective people’s activities are gaining momentum as significant themes in machine learning,with many potential applications emerging.The need for representation of collective human behavior is especially crucial in applications such as assessing security conditions and preventing crowd congestion.This paper investigates the capability of deep neural network(DNN)algorithms to achieve our carefully engineered pipeline for crowd analysis.It includes three principal stages that cover crowd analysis challenges.First,individual’s detection is represented using the You Only Look Once(YOLO)model for human detection and Kalman filter for multiple human tracking;Second,the density map and crowd counting of a certain location are generated using bounding boxes from a human detector;and Finally,in order to classify normal or abnormal crowds,individual activities are identified with pose estimation.The proposed system successfully achieves designing an effective collective representation of the crowd given the individuals in addition to introducing a significant change of crowd in terms of activities change.Experimental results onMOT20 and SDHA datasets demonstrate that the proposed system is robust and efficient.The framework achieves an improved performance of recognition and detection peoplewith a mean average precision of 99.0%,a real-time speed of 0.6ms non-maximumsuppression(NMS)per image for the SDHAdataset,and 95.3%mean average precision for MOT20 with 1.5ms NMS per image.展开更多
The precise diagnosis of Alzheimer’s disease is critical for patient treatment,especially at the early stage,because awareness of the severity and progression risks lets patients take preventative actions before irre...The precise diagnosis of Alzheimer’s disease is critical for patient treatment,especially at the early stage,because awareness of the severity and progression risks lets patients take preventative actions before irreversible brain damage occurs.It is possible to gain a holistic view of Alzheimer’s disease staging by combining multiple data modalities,known as image fusion.In this paper,the study proposes the early detection of Alzheimer’s disease using different modalities of Alzheimer’s disease brain images.First,the preprocessing was performed on the data.Then,the data augmentation techniques are used to handle overfitting.Also,the skull is removed to lead to good classification.In the second phase,two fusion stages are used:pixel level(early fusion)and feature level(late fusion).We fused magnetic resonance imaging and positron emission tomography images using early fusion(Laplacian Re-Decomposition)and late fusion(Canonical Correlation Analysis).The proposed system used magnetic resonance imaging and positron emission tomography to take advantage of each.Magnetic resonance imaging system’s primary benefits are providing images with excellent spatial resolution and structural information for specific organs.Positron emission tomography images can provide functional information and the metabolisms of particular tissues.This characteristic helps clinicians detect diseases and tumor progression at an early stage.Third,the feature extraction of fused images is extracted using a convolutional neural network.In the case of late fusion,the features are extracted first and then fused.Finally,the proposed system performs XGB to classify Alzheimer’s disease.The system’s performance was evaluated using accuracy,specificity,and sensitivity.All medical data were retrieved in the 2D format of 256×256 pixels.The classifiers were optimized to achieve the final results:for the decision tree,the maximum depth of a tree was 2.The best number of trees for the random forest was 60;for the support vector machine,the maximum depth was 4,and the kernel gamma was 0.01.The system achieved an accuracy of 98.06%,specificity of 94.32%,and sensitivity of 97.02%in the case of early fusion.Also,if the system achieved late fusion,accuracy was 99.22%,specificity was 96.54%,and sensitivity was 99.54%.展开更多
Given imperfect channel state information(CSI)and considering the interference from the primary transmitter,an underlay cognitive multisource multidestination relay network is proposed.A closed-form exact outage proba...Given imperfect channel state information(CSI)and considering the interference from the primary transmitter,an underlay cognitive multisource multidestination relay network is proposed.A closed-form exact outage probability and asymptotic outage probability are derived for the secondary system of the network.The results show that the outage probability is influenced by the source and destination number,the CSI imperfection as well as the interference from the primary transmitter,while the diversity order is independent of the CSI imperfection and the interference from the primary transmitter,yet it is equal to the minimum of the source and destination number.Moreover,extensive simulations are conducted with different system parameters to verify the theoretical analysis.展开更多
Model accuracy and runtime are two key issues for flood warnings in rivers.Traditional hydrodynamic models,which have a rigorous physical mechanism for flood routine,have been widely adopted for water level prediction...Model accuracy and runtime are two key issues for flood warnings in rivers.Traditional hydrodynamic models,which have a rigorous physical mechanism for flood routine,have been widely adopted for water level prediction in river,lake,and urban areas.However,these models require various types of data,in-depth domain knowledge,experience with modeling,and intensive computational time,which hinders short-term or real-time prediction.In this paper,we propose a new framework based on machine learning methods to alleviate the aforementioned limitation.We develop a wide range of machine learning models such as linear regression(LR),support vector regression(SVR),random forest regression(RFR),multilayer perceptron regression(MLPR),and light gradient boosting machine regression(LGBMR)to predict the hourly water level at Le Thuy and Kien Giang stations of the Kien Giang river based on collected data of 2010,2012,and 2020.Four evaluation metrics,that is,R^(2),Nash-Sutcliffe efficiency,mean absolute error,and root mean square error,are employed to examine the reliability of the proposed models.The results show that the LR model outperforms the SVR,RFR,MLPR,and LGBMR models.展开更多
Global climate change,along with the rapid increase of the population,has put significant pressure on water security.A water reservoir is an effective solution for adjusting and ensuring water supply.In particular,the...Global climate change,along with the rapid increase of the population,has put significant pressure on water security.A water reservoir is an effective solution for adjusting and ensuring water supply.In particular,the reservoir water level is an essential physical indicator for the reservoirs.Forecasting the reservoir water level effectively assists the managers in making decisions and plans related to reservoir management policies.In recent years,deep learning models have been widely applied to solve forecasting problems.In this study,we propose a novel hybrid deep learning model namely the YOLOv9_ConvLSTM that integrates YOLOv9,ConvLSTM,and linear interpolation to predict reservoir water levels.It utilizes data from Sentinel-2 satellite images,generated from visible spectrum bands(Red-Blue-Green)to reconstruct true-color reservoir images.Adam is used as the optimization algorithm with the loss function being MSE(Mean Squared Error)to evaluate the model’s error during training.We implemented and validated the proposed model using Sentinel-2 satellite imagery for the An Khe reservoir in Vietnam.To assess its performance,we also conducted comparative experiments with other related models,including SegNet_ConvLSTM and UNet_ConvLSTM,on the same dataset.The model performances were validated using k-fold cross-validation and ANOVA analysis.The experimental results demonstrate that the YOLOv9_ConvLSTM model outperforms the compared models.It has been seen that the proposed approach serves as a valuable tool for reservoir water level forecasting using satellite imagery that contributes to effective water resource management.展开更多
Supervised fault diagnosis typically assumes that all the types of machinery failures are known.However,in practice unknown types of defect,i.e.,novelties,may occur,whose detection is a challenging task.In this paper,...Supervised fault diagnosis typically assumes that all the types of machinery failures are known.However,in practice unknown types of defect,i.e.,novelties,may occur,whose detection is a challenging task.In this paper,a novel fault diagnostic method is developed for both diagnostics and detection of novelties.To this end,a sparse autoencoder-based multi-head Deep Neural Network(DNN)is presented to jointly learn a shared encoding representation for both unsupervised reconstruction and supervised classification of the monitoring data.The detection of novelties is based on the reconstruction error.Moreover,the computational burden is reduced by directly training the multi-head DNN with rectified linear unit activation function,instead of performing the pre-training and fine-tuning phases required for classical DNNs.The addressed method is applied to a benchmark bearing case study and to experimental data acquired from a delta 3D printer.The results show that its performance is satisfactory both in detection of novelties and fault diagnosis,outperforming other state-of-the-art methods.This research proposes a novel fault diagnostics method which can not only diagnose the known type of defect,but also detect unknown types of defects.展开更多
Broadband response metamaterial absorber(MMA)remains a challenge among researchers.A nanostructured new zero-indexed metamaterial(ZIM)absorber is presented in this study,constructed with a hexagonal shape resonator fo...Broadband response metamaterial absorber(MMA)remains a challenge among researchers.A nanostructured new zero-indexed metamaterial(ZIM)absorber is presented in this study,constructed with a hexagonal shape resonator for optical region applications.The design consists of a resonator and dielectric layers made with tungsten and quartz(Fused).The proposed absorbent exhibits average absorption of more than 0.8972(89.72%)within the visible wavelength of 450–600 nm and nearly perfect absorption of 0.99(99%)at 461.61 nm.Based on computational analysis,the proposed absorber can be characterized as ZIM.The developments of ZIM absorbers have demonstrated plasmonic resonance characteristics and a perfect impedance match.The incidence obliquity in typically the range of 0◦–90◦both in TE and TM mode with maximum absorbance is more than 0.8972(∼89.72%),and up to 45◦angular stability is suitable for solar cell applications,like exploiting solar energy.The proposed structure prototype is designed and simulated by studying microwave technology numerical computer simulation(CST)tools.The finite integration technique(FIT)based simulator CST and finite element method(FEM)based simulator HFSS also helps validate the numerical data of the proposed ZIM absorber.The proposed MMA design is appropriate for substantial absorption,wide-angle stability,absolute invisible layers,magnetic resonance imaging(MRI),color images,and thermal imaging applications.展开更多
In image processing, one of the most important steps is image segmentation. The objects in remote sensing images often have to be detected in order toperform next steps in image processing. Remote sensing images usual...In image processing, one of the most important steps is image segmentation. The objects in remote sensing images often have to be detected in order toperform next steps in image processing. Remote sensing images usually havelarge size and various spatial resolutions. Thus, detecting objects in remote sensing images is very complicated. In this paper, we develop a model to detectobjects in remote sensing images based on the combination of picture fuzzy clustering and MapReduce method (denoted as MPFC). Firstly, picture fuzzy clustering is applied to segment the input images. Then, MapReduce is used to reducethe runtime with the guarantee of quality. To convert data for MapReduce processing, two new procedures are introduced, including Map_PFC and Reduce_PFC.The formal representation and details of two these procedures are presented in thispaper. The experiments on satellite image and remote sensing image datasets aregiven to evaluate proposed model. Validity indices and time consuming are usedto compare proposed model to picture fuzzy clustering model. The values ofvalidity indices show that picture fuzzy clustering integrated to MapReduce getsbetter quality of segmentation than using picture fuzzy clustering only. Moreover,on two selected image datasets, the run time of MPFC model is much less thanthat of picture fuzzy clustering.展开更多
The popularity of mobile devices with sensors is captivating the attention of researchers to modern techniques,such as the internet of things(IoT)and mobile crowdsensing(MCS).The core concept behind MCS is to use the ...The popularity of mobile devices with sensors is captivating the attention of researchers to modern techniques,such as the internet of things(IoT)and mobile crowdsensing(MCS).The core concept behind MCS is to use the power of mobile sensors to accomplish a difficult task collaboratively,with each mobile user completing much simpler micro-tasks.This paper discusses the task assignment problem in mobile crowdsensing,which is dependent on sensing time and path planning with the constraints of participant travel distance budgets and sensing time intervals.The goal is to minimize aggregate sensing time for mobile users,which reduces energy consumption to encourage more participants to engage in sensing activities and maximize total task quality.This paper introduces a two-phase task assignment framework called location time-based algorithm(LTBA).LTBA is a framework that enhances task assignment in MCS,whereas assigning tasks requires overlapping time intervals between tasks and mobile users’tasks and the location of tasks and mobile users’paths.The process of assigning the nearest task to the mobile user’s current path depends on the ant colony optimization algorithm(ACO)and Euclidean distance.LTBA combines two algorithms:(1)greedy online allocation algorithm and(2)bio-inspired traveldistance-balance-based algorithm(B-DBA).The greedy algorithm was sensing time interval-based and worked on reducing the overall sensing time of the mobile user.B-DBA was location-based and worked on maximizing total task quality.The results demonstrate that the average task quality is 0.8158,0.7093,and 0.7733 for LTBA,B-DBA,and greedy,respectively.The sensing time was reduced to 644,1782,and 685 time units for LTBA,B-DBA,and greedy,respectively.Combining the algorithms improves task assignment in MCS for both total task quality and sensing time.The results demonstrate that combining the two algorithms in LTBA is the best performance for total task quality and total sensing time,and the greedy algorithm follows it then B-DBA.展开更多
The cloud service level agreement(SLA)manage the relationship between service providers and consumers in cloud computing.SLA is an integral and critical part of modern era IT vendors and communication contracts.Due to...The cloud service level agreement(SLA)manage the relationship between service providers and consumers in cloud computing.SLA is an integral and critical part of modern era IT vendors and communication contracts.Due to low cost and flexibility more and more consumers delegate their tasks to cloud providers,the SLA emerges as a key aspect between the consumers and providers.Continuous monitoring of Quality of Service(QoS)attributes is required to implement SLAs because of the complex nature of cloud communication.Many other factors,such as user reliability,satisfaction,and penalty on violations are also taken into account.Currently,there is no such policy of cloud SLA monitoring to minimize SLA violations.In this work,we have proposed a cloud SLA monitoring policy by dividing a monitoring session into two parts,for critical and non-critical parameters.The critical and non-critical parameters will be decided on the interest of the consumer during SLA negotiation.This will help to shape a new comprehensive SLA based Proactive Resource Allocation Approach(RPAA)which will monitor SLA at runtime,analyze the SLA parameters and try to find the possibility of SLA violations.We also have implemented an adaptive system for allocating cloud IT resources based on SLA violations and detection.We have defined two main components of SLA-PRAA i.e.,(a)Handler and(b)Accounting and Billing Manager.We have also described the function of both components through algorithms.The experimental results validate the performance of our proposed method in comparison with state-of-the-art cloud SLA policies.展开更多
This work presents a dual band epsilon negative(ENG)metamaterial with a bilateral coupled split ring resonator(SRR)for use in C and X band wireless communication systems.The traditional split-ring resonator(SRR)has be...This work presents a dual band epsilon negative(ENG)metamaterial with a bilateral coupled split ring resonator(SRR)for use in C and X band wireless communication systems.The traditional split-ring resonator(SRR)has been amended with this engineered structure.The proposed metamaterial unit cell is realized on the 1.6 mm thick FR-4 printed media with a dimension of 10×10 mm2.The resonating patch built with a square split outer ring.Two interlinked inner rings are coupled vertically to the outer ring to extend its electrical length as well as to tune the resonance frequency.Numerical simulation is performed using CST studio suite 2019 to design and performance analysis.The transmission coefficient(S21)of the proposed unit cell and array configuration exhibits two resonances at 6.7 and 10.5 GHz with wide bandwidth extended from 4.86 to 8.06 GHz and 10.1 to 11.2 GHz,respectively.Negative permittivity is noted at frequencies between 6.76–9.5 GHz and 10.5–12 GHz,consecutively,with near-zero refractive index and permeability.The optimal EMR value depicts the compactness of the proposed structure.The 1×2,2×2 and 4×4 arrays are analyzed that shows similar response compared to the unit cell.Measured results of the 2×2 array shows the close similarity of S21 response as compared to simulation.The observed properties of the proposed unit cell ascertain its suitability for wireless communications by enhancing the gain and directivity of the antenna system.展开更多
Fusing medical images is a topic of interest in processing medical images.This is achieved to through fusing information from multimodality images for the purpose of increasing the clinical diagnosis accuracy.This fus...Fusing medical images is a topic of interest in processing medical images.This is achieved to through fusing information from multimodality images for the purpose of increasing the clinical diagnosis accuracy.This fusion aims to improve the image quality and preserve the specific features.The methods of medical image fusion generally use knowledge in many differentfields such as clinical medicine,computer vision,digital imaging,machine learning,pattern recognition to fuse different medical images.There are two main approaches in fusing image,including spatial domain approach and transform domain approachs.This paper proposes a new algorithm to fusion multimodal images.This algorithm is based on Entropy optimization and the Sobel operator.Wavelet transform is used to split the input images into components over the low and high frequency domains.Then,two fusion rules are used for obtaining the fusing images.Thefirst rule,based on the Sobel operator,is used for high frequency components.The second rule,based on Entropy optimization by using Particle Swarm Optimization(PSO)algorithm,is used for low frequency components.Proposed algorithm is implemented on the images related to central nervous system diseases.The experimental results of the paper show that the proposed algorithm is better than some recent methods in term of brightness level,the contrast,the entropy,the gradient and visual informationfidelity for fusion(VIFF),Feature Mutual Information(FMI)indices.展开更多
This paper studies several performance metrics of a wireless-powered decode-and-forward(DF) relay network with imperfect channel state information(CSI). In particular, based on the time switching(TS) protocol, the ene...This paper studies several performance metrics of a wireless-powered decode-and-forward(DF) relay network with imperfect channel state information(CSI). In particular, based on the time switching(TS) protocol, the energy-constrained relay harvesting energy from a power beacon(PB), and uses that harvested energy to forward the source information to destination. The closedform expression of the outage probability is firstly derived over Rayleigh fading channels. Then, the asymptotic analysis, throughput as well as the symbol error probability(SEP) are derived based on the expression of the outage probability. Next, both transmission power of the source and the power beacon are optimized through the throughput optimization. Finally, simulations are conducted to corroborate our theoretical analysis, and to reveal the impact of the transmission power of the source and PB as well as the imperfect CSI on the system performance.展开更多
文摘Manual investigation of chest radiography(CXR)images by physicians is crucial for effective decision-making in COVID-19 diagnosis.However,the high demand during the pandemic necessitates auxiliary help through image analysis and machine learning techniques.This study presents a multi-threshold-based segmentation technique to probe high pixel intensity regions in CXR images of various pathologies,including normal cases.Texture information is extracted using gray co-occurrence matrix(GLCM)-based features,while vessel-like features are obtained using Frangi,Sato,and Meijering filters.Machine learning models employing Decision Tree(DT)and RandomForest(RF)approaches are designed to categorize CXR images into common lung infections,lung opacity(LO),COVID-19,and viral pneumonia(VP).The results demonstrate that the fusion of texture and vesselbased features provides an effective ML model for aiding diagnosis.The ML model validation using performance measures,including an accuracy of approximately 91.8%with an RF-based classifier,supports the usefulness of the feature set and classifier model in categorizing the four different pathologies.Furthermore,the study investigates the importance of the devised features in identifying the underlying pathology and incorporates histogrambased analysis.This analysis reveals varying natural pixel distributions in CXR images belonging to the normal,COVID-19,LO,and VP groups,motivating the incorporation of additional features such as mean,standard deviation,skewness,and percentile based on the filtered images.Notably,the study achieves a considerable improvement in categorizing COVID-19 from LO,with a true positive rate of 97%,further substantiating the effectiveness of the methodology implemented.
文摘Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics,advancing precision medicine by enabling integration and learning from diverse data sources.The exponential growth of high-dimensional healthcare data,encompassing genomic,transcriptomic,and other omics profiles,as well as radiological imaging and histopathological slides,makes this approach increasingly important because,when examined separately,these data sources only offer a fragmented picture of intricate disease processes.Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling,more robust disease characterization,and improved treatment decision-making.This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis.We classify and examine important application domains,such as(1)radiology,where automated report generation and lesion detection are facilitated by image-text integration;(2)histopathology,where fusion models improve tumor classification and grading;and(3)multi-omics,where molecular subtypes and latent biomarkers are revealed through cross-modal learning.We provide an overview of representative research,methodological advancements,and clinical consequences for each domain.Additionally,we critically analyzed the fundamental issues preventing wider adoption,including computational complexity(particularly in training scalable,multi-branch networks),data heterogeneity(resulting from modality-specific noise,resolution variations,and inconsistent annotations),and the challenge of maintaining significant cross-modal correlations during fusion.These problems impede interpretability,which is crucial for clinical trust and use,in addition to performance and generalizability.Lastly,we outline important areas for future research,including the development of standardized protocols for harmonizing data,the creation of lightweight and interpretable fusion architectures,the integration of real-time clinical decision support systems,and the promotion of cooperation for federated multimodal learning.Our goal is to provide researchers and clinicians with a concise overview of the field’s present state,enduring constraints,and exciting directions for further research through this review.
基金The authors would like to thank the support of the Taif University Researchers Supporting Project TURSP 2020/34,Taif University,Taif Saudi Arabia for supporting this work.
文摘Classification of electroencephalogram(EEG)signals for humans can be achieved via artificial intelligence(AI)techniques.Especially,the EEG signals associated with seizure epilepsy can be detected to distinguish between epileptic and non-epileptic regions.From this perspective,an automated AI technique with a digital processing method can be used to improve these signals.This paper proposes two classifiers:long short-term memory(LSTM)and support vector machine(SVM)for the classification of seizure and non-seizure EEG signals.These classifiers are applied to a public dataset,namely the University of Bonn,which consists of 2 classes–seizure and non-seizure.In addition,a fast Walsh-Hadamard Transform(FWHT)technique is implemented to analyze the EEG signals within the recurrence space of the brain.Thus,Hadamard coefficients of the EEG signals are obtained via the FWHT.Moreover,the FWHT is contributed to generate an efficient derivation of seizure EEG recordings from non-seizure EEG recordings.Also,a k-fold cross-validation technique is applied to validate the performance of the proposed classifiers.The LSTM classifier provides the best performance,with a testing accuracy of 99.00%.The training and testing loss rates for the LSTM are 0.0029 and 0.0602,respectively,while the weighted average precision,recall,and F1-score for the LSTM are 99.00%.The results of the SVM classifier in terms of accuracy,sensitivity,and specificity reached 91%,93.52%,and 91.3%,respectively.The computational time consumed for the training of the LSTM and SVM is 2000 and 2500 s,respectively.The results show that the LSTM classifier provides better performance than SVM in the classification of EEG signals.Eventually,the proposed classifiers provide high classification accuracy compared to previously published classifiers.
文摘This report deals with some characteristics of the electric power system of Bulgaria. Emphasis is put on the benefits of joining the small photovoltaic plants in the tourist areas of the country. As an example of that the town of Pomorie is examined. Data on the quality of the consumed electric energy and the price per a four-member family are presented. The amount of the solar radiation for the town of Pomorie is audited through the PVGIS (Photovoltaic Geographical Information System). Discussed are the types of photovoltaic panels offered on the market by manufacturers in terms of the received power efficiency. Developed is a model of creating a photovoltaic system on the roof of a house, inhabited by several families. Calculations are made on the cost of the electricity generated by the proposed system. Compared is the cost of the electricity supplied by the electricity provider EVN (Energie Verntinftig Nutzen) in the town of Pomorie to the one that will be obtained using the proposed PV-system.
基金supported by the startup foundation for introducing talent of NUIST,Nanjing,China(Project No.2243141701103).
文摘Ohrid trout(Salamo letnica)is an endemic species of fish found in Lake Ohrid in the Former Yugoslav Republic of Macedonia(FYROM).The growth of Ohrid trout was examined in a controlled environment for a certain period,thereafter released into the lake to grow their natural population.The external features of the fish were measured regularly during the cultivation period in the laboratory to monitor their growth.The data mining methods-based computational model can be used for fast,accurate,reliable,automatic,and improved growth monitoring procedures and classification of Ohrid trout.With this motivation,a combined approach of principal component analysis(PCA)and support vectormachine(SVM)has been implemented for the visual discrimination and quantitative classification of Ohrid trout of the experimental and natural breeding and their growth stages.The PCA results in better discrimination of breeding categories of Ohrid trout at different development phases while the maximum classification accuracy of 98.33% was achieved using the combination of PCA and SVM.The classification performance of the combination of PCA and SVM has been compared to combinations of PCA and other classification methods(multilayer perceptron,naive Bayes,randomcommittee,decision stump,random forest,and random tree).Besides,the classification accuracy of multilayer perceptron using the original features has been studied.
基金This work is supported by the Universiti Kebangsaan Malaysia research grant GGPM 2020-005.
文摘This study presents an Epsilon Mu near-zero(EMNZ)nanostructured metamaterial absorber(NMMA)for visible regime applications.The resonator and dielectric layers are made of tungsten(W)and quartz(fused),where the working band is expanded by changing the resonator layer’s design.Due to perfect impedance matching with plasmonic resonance characteristics,the proposed NMMA structure is achieved an excellent absorption of 99.99%at 571 THz,99.50%at 488.26 THz,and 99.32%at 598 THz frequencies.The absorption mechanism is demonstrated by the theory of impedance,electric field,and power loss density distributions,respectively.The geometric parameters are explored and analyzed to show the structure’s performance,and a near-field pattern is used to explain the absorption mechanism at the resonance frequency point.The numerical analysis method describes that the proposed structure exhibited more than 80%absorbability between 550 and 900 THz.The Computer Simulation Technology(CST Microwave Studio 2019)software is used to design the proposed structure.Furthermore,CSTHFSS interference is validated by the simulation data with the help of the finite element method(FEM).The proposed NMMA structure is also exhibits glucose concentration sensing capability as applications.So the proposed broadband absorber may have a potential application in THz sensing,imaging(MRI,thermal,color),solar energy harvesting,light modulators,and optoelectronic devices.
文摘Gliomas are the most aggressive brain tumors caused by the abnormal growth of brain tissues.The life expectancy of patients diagnosed with gliomas decreases exponentially.Most gliomas are diagnosed in later stages,resulting in imminent death.On average,patients do not survive 14 months after diagnosis.The only way to minimize the impact of this inevitable disease is through early diagnosis.The Magnetic Resonance Imaging(MRI)scans,because of their better tissue contrast,are most frequently used to assess the brain tissues.The manual classification of MRI scans takes a reasonable amount of time to classify brain tumors.Besides this,dealing with MRI scans manually is also cumbersome,thus affects the classification accuracy.To eradicate this problem,researchers have come up with automatic and semiautomatic methods that help in the automation of brain tumor classification task.Although,many techniques have been devised to address this issue,the existing methods still struggle to characterize the enhancing region.This is because of low variance in enhancing region which give poor contrast in MRI scans.In this study,we propose a novel deep learning based method consisting of a series of steps,namely:data pre-processing,patch extraction,patch pre-processing,and a deep learning model with tuned hyper-parameters to classify all types of gliomas with a focus on enhancing region.Our trained model achieved better results for all glioma classes including the enhancing region.The improved performance of our technique can be attributed to several factors.Firstly,the non-local mean filter in the pre-processing step,improved the image detail while removing irrelevant noise.Secondly,the architecture we employ can capture the non-linearity of all classes including the enhancing region.Overall,the segmentation scores achieved on the Dice Similarity Coefficient(DSC)metric for normal,necrosis,edema,enhancing and non-enhancing tumor classes are 0.95,0.97,0.91,0.93,0.95;respectively.
文摘The identification and classification of collective people’s activities are gaining momentum as significant themes in machine learning,with many potential applications emerging.The need for representation of collective human behavior is especially crucial in applications such as assessing security conditions and preventing crowd congestion.This paper investigates the capability of deep neural network(DNN)algorithms to achieve our carefully engineered pipeline for crowd analysis.It includes three principal stages that cover crowd analysis challenges.First,individual’s detection is represented using the You Only Look Once(YOLO)model for human detection and Kalman filter for multiple human tracking;Second,the density map and crowd counting of a certain location are generated using bounding boxes from a human detector;and Finally,in order to classify normal or abnormal crowds,individual activities are identified with pose estimation.The proposed system successfully achieves designing an effective collective representation of the crowd given the individuals in addition to introducing a significant change of crowd in terms of activities change.Experimental results onMOT20 and SDHA datasets demonstrate that the proposed system is robust and efficient.The framework achieves an improved performance of recognition and detection peoplewith a mean average precision of 99.0%,a real-time speed of 0.6ms non-maximumsuppression(NMS)per image for the SDHAdataset,and 95.3%mean average precision for MOT20 with 1.5ms NMS per image.
基金This research was supported by the MSIT(Ministry of Science and ICT),Korea,under the ICT Creative Consilience Program(IITP-2021-2020-0-01821)supervised by the IITP(Institute for Information&communications Technology Planning&evaluation)the National Research Foundation of Korea(NRF)grant funded by the Korea government(MSIT)(No.2021R1A2C1011198).
文摘The precise diagnosis of Alzheimer’s disease is critical for patient treatment,especially at the early stage,because awareness of the severity and progression risks lets patients take preventative actions before irreversible brain damage occurs.It is possible to gain a holistic view of Alzheimer’s disease staging by combining multiple data modalities,known as image fusion.In this paper,the study proposes the early detection of Alzheimer’s disease using different modalities of Alzheimer’s disease brain images.First,the preprocessing was performed on the data.Then,the data augmentation techniques are used to handle overfitting.Also,the skull is removed to lead to good classification.In the second phase,two fusion stages are used:pixel level(early fusion)and feature level(late fusion).We fused magnetic resonance imaging and positron emission tomography images using early fusion(Laplacian Re-Decomposition)and late fusion(Canonical Correlation Analysis).The proposed system used magnetic resonance imaging and positron emission tomography to take advantage of each.Magnetic resonance imaging system’s primary benefits are providing images with excellent spatial resolution and structural information for specific organs.Positron emission tomography images can provide functional information and the metabolisms of particular tissues.This characteristic helps clinicians detect diseases and tumor progression at an early stage.Third,the feature extraction of fused images is extracted using a convolutional neural network.In the case of late fusion,the features are extracted first and then fused.Finally,the proposed system performs XGB to classify Alzheimer’s disease.The system’s performance was evaluated using accuracy,specificity,and sensitivity.All medical data were retrieved in the 2D format of 256×256 pixels.The classifiers were optimized to achieve the final results:for the decision tree,the maximum depth of a tree was 2.The best number of trees for the random forest was 60;for the support vector machine,the maximum depth was 4,and the kernel gamma was 0.01.The system achieved an accuracy of 98.06%,specificity of 94.32%,and sensitivity of 97.02%in the case of early fusion.Also,if the system achieved late fusion,accuracy was 99.22%,specificity was 96.54%,and sensitivity was 99.54%.
基金Supported by the National Natural Science Foundation of China(No.61301170,61571340)the Fundamental Research Funds for the Central Universities(No.JB150109)the 111 Project(No.B08038)
文摘Given imperfect channel state information(CSI)and considering the interference from the primary transmitter,an underlay cognitive multisource multidestination relay network is proposed.A closed-form exact outage probability and asymptotic outage probability are derived for the secondary system of the network.The results show that the outage probability is influenced by the source and destination number,the CSI imperfection as well as the interference from the primary transmitter,while the diversity order is independent of the CSI imperfection and the interference from the primary transmitter,yet it is equal to the minimum of the source and destination number.Moreover,extensive simulations are conducted with different system parameters to verify the theoretical analysis.
基金Scientific Research and Technology Development Project。
文摘Model accuracy and runtime are two key issues for flood warnings in rivers.Traditional hydrodynamic models,which have a rigorous physical mechanism for flood routine,have been widely adopted for water level prediction in river,lake,and urban areas.However,these models require various types of data,in-depth domain knowledge,experience with modeling,and intensive computational time,which hinders short-term or real-time prediction.In this paper,we propose a new framework based on machine learning methods to alleviate the aforementioned limitation.We develop a wide range of machine learning models such as linear regression(LR),support vector regression(SVR),random forest regression(RFR),multilayer perceptron regression(MLPR),and light gradient boosting machine regression(LGBMR)to predict the hourly water level at Le Thuy and Kien Giang stations of the Kien Giang river based on collected data of 2010,2012,and 2020.Four evaluation metrics,that is,R^(2),Nash-Sutcliffe efficiency,mean absolute error,and root mean square error,are employed to examine the reliability of the proposed models.The results show that the LR model outperforms the SVR,RFR,MLPR,and LGBMR models.
基金funded by International School,Vietnam National University,Hanoi(VNU-IS)under project number CS.2023-10.
文摘Global climate change,along with the rapid increase of the population,has put significant pressure on water security.A water reservoir is an effective solution for adjusting and ensuring water supply.In particular,the reservoir water level is an essential physical indicator for the reservoirs.Forecasting the reservoir water level effectively assists the managers in making decisions and plans related to reservoir management policies.In recent years,deep learning models have been widely applied to solve forecasting problems.In this study,we propose a novel hybrid deep learning model namely the YOLOv9_ConvLSTM that integrates YOLOv9,ConvLSTM,and linear interpolation to predict reservoir water levels.It utilizes data from Sentinel-2 satellite images,generated from visible spectrum bands(Red-Blue-Green)to reconstruct true-color reservoir images.Adam is used as the optimization algorithm with the loss function being MSE(Mean Squared Error)to evaluate the model’s error during training.We implemented and validated the proposed model using Sentinel-2 satellite imagery for the An Khe reservoir in Vietnam.To assess its performance,we also conducted comparative experiments with other related models,including SegNet_ConvLSTM and UNet_ConvLSTM,on the same dataset.The model performances were validated using k-fold cross-validation and ANOVA analysis.The experimental results demonstrate that the YOLOv9_ConvLSTM model outperforms the compared models.It has been seen that the proposed approach serves as a valuable tool for reservoir water level forecasting using satellite imagery that contributes to effective water resource management.
基金Supported by National Natural Science Foundation of China(Grant Nos.52005103,71801046,51775112,51975121)Guangdong Province Basic and Applied Basic Research Foundation of China(Grant No.2019B1515120095)+1 种基金Intelligent Manufacturing PHM Innovation Team Program(Grant Nos.2018KCXTD029,TDYB2019010)MoST International Cooperation Program(6-14).
文摘Supervised fault diagnosis typically assumes that all the types of machinery failures are known.However,in practice unknown types of defect,i.e.,novelties,may occur,whose detection is a challenging task.In this paper,a novel fault diagnostic method is developed for both diagnostics and detection of novelties.To this end,a sparse autoencoder-based multi-head Deep Neural Network(DNN)is presented to jointly learn a shared encoding representation for both unsupervised reconstruction and supervised classification of the monitoring data.The detection of novelties is based on the reconstruction error.Moreover,the computational burden is reduced by directly training the multi-head DNN with rectified linear unit activation function,instead of performing the pre-training and fine-tuning phases required for classical DNNs.The addressed method is applied to a benchmark bearing case study and to experimental data acquired from a delta 3D printer.The results show that its performance is satisfactory both in detection of novelties and fault diagnosis,outperforming other state-of-the-art methods.This research proposes a novel fault diagnostics method which can not only diagnose the known type of defect,but also detect unknown types of defects.
基金This work is supported by the Universiti Kebangsaan Malaysia research grant GUP-2020-074.
文摘Broadband response metamaterial absorber(MMA)remains a challenge among researchers.A nanostructured new zero-indexed metamaterial(ZIM)absorber is presented in this study,constructed with a hexagonal shape resonator for optical region applications.The design consists of a resonator and dielectric layers made with tungsten and quartz(Fused).The proposed absorbent exhibits average absorption of more than 0.8972(89.72%)within the visible wavelength of 450–600 nm and nearly perfect absorption of 0.99(99%)at 461.61 nm.Based on computational analysis,the proposed absorber can be characterized as ZIM.The developments of ZIM absorbers have demonstrated plasmonic resonance characteristics and a perfect impedance match.The incidence obliquity in typically the range of 0◦–90◦both in TE and TM mode with maximum absorbance is more than 0.8972(∼89.72%),and up to 45◦angular stability is suitable for solar cell applications,like exploiting solar energy.The proposed structure prototype is designed and simulated by studying microwave technology numerical computer simulation(CST)tools.The finite integration technique(FIT)based simulator CST and finite element method(FEM)based simulator HFSS also helps validate the numerical data of the proposed ZIM absorber.The proposed MMA design is appropriate for substantial absorption,wide-angle stability,absolute invisible layers,magnetic resonance imaging(MRI),color images,and thermal imaging applications.
基金funded by Thuyloi University Foundation for Science and Technologyunder Grant Number TLU.STF.19-02.
文摘In image processing, one of the most important steps is image segmentation. The objects in remote sensing images often have to be detected in order toperform next steps in image processing. Remote sensing images usually havelarge size and various spatial resolutions. Thus, detecting objects in remote sensing images is very complicated. In this paper, we develop a model to detectobjects in remote sensing images based on the combination of picture fuzzy clustering and MapReduce method (denoted as MPFC). Firstly, picture fuzzy clustering is applied to segment the input images. Then, MapReduce is used to reducethe runtime with the guarantee of quality. To convert data for MapReduce processing, two new procedures are introduced, including Map_PFC and Reduce_PFC.The formal representation and details of two these procedures are presented in thispaper. The experiments on satellite image and remote sensing image datasets aregiven to evaluate proposed model. Validity indices and time consuming are usedto compare proposed model to picture fuzzy clustering model. The values ofvalidity indices show that picture fuzzy clustering integrated to MapReduce getsbetter quality of segmentation than using picture fuzzy clustering only. Moreover,on two selected image datasets, the run time of MPFC model is much less thanthat of picture fuzzy clustering.
文摘The popularity of mobile devices with sensors is captivating the attention of researchers to modern techniques,such as the internet of things(IoT)and mobile crowdsensing(MCS).The core concept behind MCS is to use the power of mobile sensors to accomplish a difficult task collaboratively,with each mobile user completing much simpler micro-tasks.This paper discusses the task assignment problem in mobile crowdsensing,which is dependent on sensing time and path planning with the constraints of participant travel distance budgets and sensing time intervals.The goal is to minimize aggregate sensing time for mobile users,which reduces energy consumption to encourage more participants to engage in sensing activities and maximize total task quality.This paper introduces a two-phase task assignment framework called location time-based algorithm(LTBA).LTBA is a framework that enhances task assignment in MCS,whereas assigning tasks requires overlapping time intervals between tasks and mobile users’tasks and the location of tasks and mobile users’paths.The process of assigning the nearest task to the mobile user’s current path depends on the ant colony optimization algorithm(ACO)and Euclidean distance.LTBA combines two algorithms:(1)greedy online allocation algorithm and(2)bio-inspired traveldistance-balance-based algorithm(B-DBA).The greedy algorithm was sensing time interval-based and worked on reducing the overall sensing time of the mobile user.B-DBA was location-based and worked on maximizing total task quality.The results demonstrate that the average task quality is 0.8158,0.7093,and 0.7733 for LTBA,B-DBA,and greedy,respectively.The sensing time was reduced to 644,1782,and 685 time units for LTBA,B-DBA,and greedy,respectively.Combining the algorithms improves task assignment in MCS for both total task quality and sensing time.The results demonstrate that combining the two algorithms in LTBA is the best performance for total task quality and total sensing time,and the greedy algorithm follows it then B-DBA.
文摘The cloud service level agreement(SLA)manage the relationship between service providers and consumers in cloud computing.SLA is an integral and critical part of modern era IT vendors and communication contracts.Due to low cost and flexibility more and more consumers delegate their tasks to cloud providers,the SLA emerges as a key aspect between the consumers and providers.Continuous monitoring of Quality of Service(QoS)attributes is required to implement SLAs because of the complex nature of cloud communication.Many other factors,such as user reliability,satisfaction,and penalty on violations are also taken into account.Currently,there is no such policy of cloud SLA monitoring to minimize SLA violations.In this work,we have proposed a cloud SLA monitoring policy by dividing a monitoring session into two parts,for critical and non-critical parameters.The critical and non-critical parameters will be decided on the interest of the consumer during SLA negotiation.This will help to shape a new comprehensive SLA based Proactive Resource Allocation Approach(RPAA)which will monitor SLA at runtime,analyze the SLA parameters and try to find the possibility of SLA violations.We also have implemented an adaptive system for allocating cloud IT resources based on SLA violations and detection.We have defined two main components of SLA-PRAA i.e.,(a)Handler and(b)Accounting and Billing Manager.We have also described the function of both components through algorithms.The experimental results validate the performance of our proposed method in comparison with state-of-the-art cloud SLA policies.
基金This work is supported by the Universiti Kebangsaan Malaysia Research grant code GUP-2020-074This research work is also supported by Bangabandhu Science and Technology Fellowship Trust,Ministry of Science and Technology,Bangladesh.
文摘This work presents a dual band epsilon negative(ENG)metamaterial with a bilateral coupled split ring resonator(SRR)for use in C and X band wireless communication systems.The traditional split-ring resonator(SRR)has been amended with this engineered structure.The proposed metamaterial unit cell is realized on the 1.6 mm thick FR-4 printed media with a dimension of 10×10 mm2.The resonating patch built with a square split outer ring.Two interlinked inner rings are coupled vertically to the outer ring to extend its electrical length as well as to tune the resonance frequency.Numerical simulation is performed using CST studio suite 2019 to design and performance analysis.The transmission coefficient(S21)of the proposed unit cell and array configuration exhibits two resonances at 6.7 and 10.5 GHz with wide bandwidth extended from 4.86 to 8.06 GHz and 10.1 to 11.2 GHz,respectively.Negative permittivity is noted at frequencies between 6.76–9.5 GHz and 10.5–12 GHz,consecutively,with near-zero refractive index and permeability.The optimal EMR value depicts the compactness of the proposed structure.The 1×2,2×2 and 4×4 arrays are analyzed that shows similar response compared to the unit cell.Measured results of the 2×2 array shows the close similarity of S21 response as compared to simulation.The observed properties of the proposed unit cell ascertain its suitability for wireless communications by enhancing the gain and directivity of the antenna system.
文摘Fusing medical images is a topic of interest in processing medical images.This is achieved to through fusing information from multimodality images for the purpose of increasing the clinical diagnosis accuracy.This fusion aims to improve the image quality and preserve the specific features.The methods of medical image fusion generally use knowledge in many differentfields such as clinical medicine,computer vision,digital imaging,machine learning,pattern recognition to fuse different medical images.There are two main approaches in fusing image,including spatial domain approach and transform domain approachs.This paper proposes a new algorithm to fusion multimodal images.This algorithm is based on Entropy optimization and the Sobel operator.Wavelet transform is used to split the input images into components over the low and high frequency domains.Then,two fusion rules are used for obtaining the fusing images.Thefirst rule,based on the Sobel operator,is used for high frequency components.The second rule,based on Entropy optimization by using Particle Swarm Optimization(PSO)algorithm,is used for low frequency components.Proposed algorithm is implemented on the images related to central nervous system diseases.The experimental results of the paper show that the proposed algorithm is better than some recent methods in term of brightness level,the contrast,the entropy,the gradient and visual informationfidelity for fusion(VIFF),Feature Mutual Information(FMI)indices.
基金support by the National Natural Science Foundation of China (nos. 61571340, 61301170)the Fundamental Research Funds for the Central Universities of China under Grant JB150109the 111 Project under Grant B08038
文摘This paper studies several performance metrics of a wireless-powered decode-and-forward(DF) relay network with imperfect channel state information(CSI). In particular, based on the time switching(TS) protocol, the energy-constrained relay harvesting energy from a power beacon(PB), and uses that harvested energy to forward the source information to destination. The closedform expression of the outage probability is firstly derived over Rayleigh fading channels. Then, the asymptotic analysis, throughput as well as the symbol error probability(SEP) are derived based on the expression of the outage probability. Next, both transmission power of the source and the power beacon are optimized through the throughput optimization. Finally, simulations are conducted to corroborate our theoretical analysis, and to reveal the impact of the transmission power of the source and PB as well as the imperfect CSI on the system performance.