In recent years, fog computing has become an important environment for dealing with the Internet of Things (IoT). Fog computing was developed to handle large-scale big data by scheduling tasks via cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes in cloud computing. With the large amount of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to save energy consumption across nodes in fog computing when users execute tasks through the least-cost paths. Task scheduling is developed using a modified artificial ecosystem optimization (AEO) combined with swarm operators from the Salp Swarm Algorithm (SSA), in order to competitively optimize their capabilities during the exploitation phase of the optimal search process. The proposed strategy, the Enhancement Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), attempts to find the most suitable solution to the multi-objective task scheduling optimization problem that combines cost and energy. A knapsack formulation is also added to improve both cost and energy in the iFogSim implementation. A comparison was made between the proposed strategy and other strategies in terms of time, cost, energy, and productivity. Experimental results showed that the proposed strategy improved energy consumption, cost, and time over other algorithms. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
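The multi-objective scoring that the abstract describes can be sketched as a weighted sum of schedule cost and energy. This is a hypothetical illustration, not the paper's exact formulation: the node parameters (`node_speed`, `node_cost`, `node_power`) and the equal weights are illustrative assumptions.

```python
# Hypothetical sketch of a multi-objective fitness for ranking task
# schedules: a weighted sum of execution cost and energy consumption.
# All node parameters and weights below are illustrative assumptions.

def schedule_fitness(assignment, task_len, node_cost, node_power, node_speed,
                     w_cost=0.5, w_energy=0.5):
    """Score a task-to-node assignment; lower is better."""
    total_cost = 0.0
    total_energy = 0.0
    for task, node in enumerate(assignment):
        exec_time = task_len[task] / node_speed[node]   # seconds
        total_cost += exec_time * node_cost[node]       # monetary units
        total_energy += exec_time * node_power[node]    # joules
    return w_cost * total_cost + w_energy * total_energy

# Example: two fog nodes, three tasks.
task_len   = [100.0, 200.0, 150.0]   # million instructions
node_speed = [50.0, 100.0]           # MIPS
node_cost  = [0.1, 0.3]              # cost per second
node_power = [20.0, 45.0]            # watts

better = schedule_fitness([0, 1, 0], task_len, node_cost, node_power, node_speed)
worse  = schedule_fitness([1, 1, 1], task_len, node_cost, node_power, node_speed)
```

An optimizer such as the hybrid AEO-SSA would search over the `assignment` vector to minimize this score.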
Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles, as well as radiological imaging and histopathological slides, makes this approach increasingly important because, when examined separately, these data sources offer only a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, such as (1) radiology, where automated report generation and lesion detection are facilitated by image-text integration; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where molecular subtypes and latent biomarkers are revealed through cross-modal learning. We provide an overview of representative research, methodological advancements, and clinical consequences for each domain. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of maintaining significant cross-modal correlations during fusion. These problems impede interpretability, which is crucial for clinical trust and use, in addition to performance and generalizability. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of cooperation for federated multimodal learning. Our goal is to provide researchers and clinicians with a concise overview of the field's present state, enduring constraints, and exciting directions for further research through this review.
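The fusion step at the heart of these approaches can be illustrated, in its simplest "late fusion" form, by concatenating per-modality embeddings before a shared classifier head. This is a minimal sketch under assumed shapes, not a model from the review:

```python
# Minimal, hypothetical sketch of late fusion: per-modality feature
# vectors (e.g., an imaging branch and an omics branch) are concatenated
# and passed through a toy linear classifier head. Shapes and weights
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

image_features = rng.normal(size=(4, 128))   # batch of 4 imaging embeddings
omics_features = rng.normal(size=(4, 64))    # matching omics embeddings

fused = np.concatenate([image_features, omics_features], axis=1)  # (4, 192)

W = rng.normal(size=(192, 2)) * 0.01         # toy 2-class linear head
logits = fused @ W
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
```

Intermediate (attention-based) and early fusion variants differ mainly in where this combination happens in the network.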
Several optimization methods, such as Particle Swarm Optimization (PSO) and Genetic Algorithm (GA), are used to select the most suitable Static Synchronous Compensator (STATCOM) technology for the optimal operation of the power system, as well as to determine its optimal location and size to minimize power losses. An IEEE 14-bus system, integrating three wind turbines based on Squirrel Cage Induction Generators (SCIGs), is used to test the applicability of the proposed algorithms. The results demonstrate that these algorithms are capable of selecting the most appropriate technology while optimally sizing and locating the STATCOM to reduce power losses in the network. Specifically, the optimized STATCOM allocation using PSO achieves a 7.44% reduction in total active power loss compared to the optimized allocation using the GA. Furthermore, the voltage magnitudes at buses 4, 9, and 10, which initially exceeded the upper voltage limit, were reduced and brought within acceptable ranges, thereby improving the system's overall voltage profile. Consequently, the optimal allocation of the STATCOM significantly enhances the efficiency and performance of the power network.
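The PSO loop used for sizing and siting can be sketched as follows. This is a generic, hedged illustration: the fitness is a stand-in loss surface, not an actual power-flow calculation, and the inertia/acceleration coefficients are common textbook defaults, not the paper's settings.

```python
# Toy sketch of a PSO loop for STATCOM sizing/siting: particles encode a
# candidate decision vector and the fitness is a stand-in loss surface,
# not a power-flow solver. Coefficients are illustrative assumptions.
import random

def pso_minimize(fitness, dim, n_particles=20, iters=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Stand-in for total active power loss as a function of the decision vector.
loss = lambda x: sum(v * v for v in x)
best = pso_minimize(loss, dim=2)
```

In the actual study, evaluating `fitness` would mean running a load-flow analysis of the IEEE 14-bus network for each candidate STATCOM placement.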
This research proposes an improved Puma optimization algorithm (IPuma) as a novel dynamic reconfiguration tool for a photovoltaic (PV) array connected in total-cross-tied (TCT) form. The proposed algorithm utilizes the Newton-Raphson search rule (NRSR) to boost the exploration process, especially in search spaces with many local regions, and boosts exploitation with adaptive parameters alternating with the random parameters of the original Puma. The effectiveness of the introduced IPuma is confirmed through comprehensive evaluations on the CEC'20 benchmark problems. It shows superior performance compared to both established and modern metaheuristic algorithms in terms of effectively navigating the search space and achieving convergence towards near-optimal regions. The findings indicate that the IPuma algorithm demonstrates considerable statistical promise and surpasses the performance of competing algorithms. In addition, the proposed IPuma is utilized to reconfigure a 9×9 PV array that operates under different shade patterns, such as lower triangular (LT), long wide (LW), and short wide (SW). The method is compared with the traditional TCT and Sudoku configurations, as well as with other programmed approaches such as the Whale Optimization Algorithm (WOA), Grey Wolf Optimizer (GWO), Harris Hawks Optimization (HHO), Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Biogeography-Based Optimization (BBO), Sine Cosine Algorithm (SCA), Equilibrium Optimizer (EO), and the original Puma. In addition, the metrics of mismatch power loss, maximum efficiency improvement, efficiency improvement ratio, and peak-to-mean ratio are calculated to assess the effectiveness of the approach. The proposed IPuma improved the generated power by 36.72%, 28.03%, and 40.97% for SW, LW, and LT, respectively, outperforming the TCT configuration. In addition, it achieved the best maximum efficiency improvement among the algorithms considered, with 26.86%, 21.89%, and 29.07% for the examined patterns. The results highlight the superiority and competence of the proposed approach in both convergence rate and stability, as well as its applicability to dynamically reconfigure the PV system and enhance its harvested energy.
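The Newton-Raphson idea behind the NRSR can be illustrated with a single finite-difference Newton step on a one-dimensional objective. This is a hedged sketch of the general principle, not the paper's exact update rule; the step size `h` and the objective are assumptions.

```python
# Hypothetical sketch of a Newton-Raphson-style search step: a candidate
# is nudged using finite-difference first and second derivatives of the
# objective. Step sizes and the objective are illustrative assumptions.

def nrsr_step(f, x, h=1e-4):
    """One Newton-Raphson step on a 1-D objective f around x."""
    f1 = (f(x + h) - f(x - h)) / (2 * h)            # central first derivative
    f2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2    # central second derivative
    if abs(f2) < 1e-12:                             # avoid division by zero
        return x
    return x - f1 / f2

# Example on a convex objective with minimum at x = 3.
f = lambda x: (x - 3.0) ** 2
x = 0.0
for _ in range(5):
    x = nrsr_step(f, x)
```

In a metaheuristic, steps like this are interleaved with the population-based moves so that candidates are pulled quickly toward nearby optima.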
Accurate parameter extraction of photovoltaic (PV) models plays a critical role in enabling precise performance prediction, optimal system sizing, and effective operational control under diverse environmental conditions. While a wide range of metaheuristic optimisation techniques have been applied to this problem, many existing methods are hindered by slow convergence rates, susceptibility to premature stagnation, and reduced accuracy when applied to complex multi-diode PV configurations. These limitations can lead to suboptimal modelling, reducing the efficiency of PV system design and operation. In this work, we propose an enhanced hybrid optimisation approach, the modified Spider Wasp Optimization (mSWO) with Opposition-Based Learning algorithm, which integrates the exploration and exploitation capabilities of the Spider Wasp Optimization (SWO) metaheuristic with the diversity-enhancing mechanism of Opposition-Based Learning (OBL). The hybridisation is designed to dynamically expand the search space coverage, avoid premature convergence, and improve both convergence speed and precision in high-dimensional optimisation tasks. The mSWO algorithm is applied to three well-established PV configurations: the single diode model (SDM), the double diode model (DDM), and the triple diode model (TDM). Real experimental current-voltage (I-V) datasets from a commercial PV module under standard test conditions (STC) are used for evaluation. Comparative analysis is conducted against eighteen advanced metaheuristic algorithms, including BSDE, RLGBO, GWOCS, MFO, EO, TSA, and SCA. Performance metrics include minimum, mean, and maximum root mean square error (RMSE), standard deviation (SD), and convergence behaviour over 30 independent runs. The results reveal that mSWO consistently delivers superior accuracy and robustness across all PV models, achieving the lowest RMSE values of 0.000986022 (SDM), 0.000982884 (DDM), and 0.000982529 (TDM), with minimal SD values, indicating remarkable repeatability. Convergence analyses further show that mSWO reaches optimal solutions more rapidly and with fewer oscillations than all competing methods, with the performance gap widening as model complexity increases. These findings demonstrate that mSWO provides a scalable, computationally efficient, and highly reliable framework for PV parameter extraction. Its adaptability to models of growing complexity suggests strong potential for broader applications in renewable energy systems, including performance monitoring, fault detection, and intelligent control, thereby contributing to the optimisation of next-generation solar energy solutions.
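The OBL mechanism that mSWO borrows has a standard form: for a candidate with components in [lo, hi], the opposite point is lo + hi − x, and the better of the pair is kept. The bounds and toy objective below are illustrative assumptions, not the PV-model search space.

```python
# Minimal sketch of Opposition-Based Learning (OBL): for each candidate
# component x in [lo, hi], the opposite is lo + hi - x, and the better
# of the pair (by fitness) is kept. Bounds and fitness are illustrative.

def opposite(candidate, lo, hi):
    return [l + h - x for x, l, h in zip(candidate, lo, hi)]

def obl_select(candidate, lo, hi, fitness):
    """Keep the better of a candidate and its opposite (lower is better)."""
    opp = opposite(candidate, lo, hi)
    return candidate if fitness(candidate) <= fitness(opp) else opp

lo, hi = [0.0, 0.0], [10.0, 10.0]
fitness = lambda x: (x[0] - 8.0) ** 2 + (x[1] - 9.0) ** 2  # toy objective

kept = obl_select([1.0, 2.0], lo, hi, fitness)  # opposite [9.0, 8.0] wins here
```

Evaluating opposites doubles the coverage of the search space at the cost of one extra fitness call per candidate, which is why OBL is a popular diversity booster for metaheuristics.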
The generation of high-quality 3D models from single 2D images remains challenging in terms of accuracy and completeness. Deep learning has emerged as a promising solution, offering new avenues for improvement. However, building models from scratch is computationally expensive and requires large datasets. This paper presents a transfer-learning-based approach for category-specific 3D reconstruction from a single 2D image. The core idea is to fine-tune a pre-trained model on specific object categories using new, unseen data, resulting in specialized versions of the model that are better adapted to reconstruct particular objects. The proposed approach utilizes a three-phase pipeline comprising image acquisition, 3D reconstruction, and refinement. After ensuring the quality of the input image, a ResNet50 model is used for object recognition, directing the image to the corresponding category-specific model to generate a voxel-based representation. The voxel-based 3D model is then refined by transforming it into a detailed triangular mesh representation using the Marching Cubes algorithm and Laplacian smoothing. An experimental study, using the Pix2Vox model and the Pascal3D dataset, has been conducted to evaluate and validate the effectiveness of the proposed approach. Results demonstrate that category-specific fine-tuning of Pix2Vox significantly outperforms both the original model and a general model fine-tuned on all object categories, with substantial gains in Intersection over Union (IoU) scores. Visual assessments confirm improvements in geometric detail and surface realism. These findings indicate that combining transfer learning with the category-specific fine-tuning and refinement strategy of our approach leads to better-quality 3D model generation.
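The IoU metric used to score voxel reconstructions can be sketched directly: both occupancy grids are binarized at a threshold and IoU is the ratio of overlapping occupied voxels to their union. The tiny 2×2×2 grids below are illustrative.

```python
# Sketch of voxel-level Intersection over Union (IoU): both grids are
# binarized at a threshold; IoU = |intersection| / |union| of occupied
# voxels. The tiny grids are illustrative.
import numpy as np

def voxel_iou(pred, target, threshold=0.5):
    p = pred >= threshold
    t = target >= threshold
    intersection = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return intersection / union if union else 1.0

pred   = np.array([[[0.9, 0.1], [0.8, 0.2]], [[0.7, 0.0], [0.1, 0.6]]])
target = np.array([[[1.0, 0.0], [1.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]])
score = voxel_iou(pred, target)
```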
Diabetic retinopathy (DR) is a disease with an increasing prevalence and a major cause of blindness among the working-age population. The possibility of severe vision loss can be extensively reduced by timely diagnosis and treatment. Automated screening for DR has been identified as an effective method for early DR detection, which can decrease the workload associated with manual grading as well as save diagnosis costs and time. Several studies have been carried out to develop automated detection and classification models for DR. This paper presents a new IoT- and cloud-based deep learning model for healthcare diagnosis of diabetic retinopathy. The proposed model incorporates different processes, namely data collection, preprocessing, segmentation, feature extraction, and classification. At first, the IoT-based data collection process takes place, where the patient wears a head-mounted camera that captures the retinal fundus image and sends it to a cloud server. Then, the contrast level of the input DR image is increased in the preprocessing stage using the Contrast Limited Adaptive Histogram Equalization (CLAHE) model. Next, the preprocessed image is segmented using the Adaptive Spatial Kernel distance measure-based Fuzzy C-Means clustering (ASKFCM) model. Afterwards, a deep Convolutional Neural Network (CNN)-based Inception v4 model is applied as a feature extractor, and the resulting feature vectors undergo classification with the Gaussian Naive Bayes (GNB) model. The proposed model was tested using the benchmark MESSIDOR DR image dataset, and the obtained results showcased superior performance of the proposed model over the other models compared in the study.
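The contrast-enhancement step can be illustrated with plain global histogram equalization, which is the core operation CLAHE extends (CLAHE additionally equalizes tile-by-tile with a clip limit, omitted here for brevity). The 2×2 image is an illustrative toy.

```python
# A minimal stand-in for the contrast-enhancement step: global histogram
# equalization on an 8-bit image. CLAHE extends this with tile-wise
# equalization and clipping, which is omitted here. The image is a toy.
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for a uint8 image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # first nonzero CDF value
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]

img = np.array([[52, 55], [61, 59]], dtype=np.uint8)
out = hist_equalize(img)   # gray levels spread across the full [0, 255] range
```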
In the present digital era, an exponential increase in Internet of Things (IoT) devices poses several design issues for businesses concerning security and privacy. Earlier studies indicate that blockchain technology is a significant solution to the data security challenges that exist in IoT. In this view, this paper presents a new privacy-preserving Secure Ant Colony Optimization with Multi Kernel Support Vector Machine (ACOMKSVM) with an Elliptic Curve Cryptosystem (ECC) for secure and reliable IoT data sharing. The scheme uses blockchain to ensure the protection and integrity of data while enabling secure ACOMKSVM training over partial views of IoT data collected from various data providers. Then, ECC is used to provide effective and accurate privacy protection for the ACOMKSVM secure learning process. In this study, the authors deployed the blockchain technique to create a secure and reliable data exchange platform across multiple data providers, where IoT data is encrypted and recorded in a distributed ledger. The security analysis showed that the scheme ensures the confidentiality of critical data from each data provider and protects the parameters of the ACOMKSVM model for data analysts. To examine the performance of the proposed method, it was tested against two benchmark datasets, the Breast Cancer Wisconsin Dataset (BCWD) and the Heart Disease Dataset (HDD), from the UCI machine learning repository. The simulation outcomes indicated that the ACOMKSVM model outperformed all the compared methods under several aspects.
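The multi-kernel idea behind MKSVM can be sketched as a convex combination of base kernels. This is a generic illustration, not the paper's kernel set: the RBF/polynomial choice, the mixing weight `w`, and all parameters are assumptions.

```python
# Hedged sketch of the multi-kernel idea behind MKSVM: the effective
# kernel is a convex combination of base kernels (here RBF and
# polynomial). All kernel parameters and the weight are illustrative.
import math

def rbf_kernel(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def poly_kernel(x, y, degree=2, c=1.0):
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def multi_kernel(x, y, w=0.6):
    """Convex combination of RBF and polynomial kernels."""
    return w * rbf_kernel(x, y) + (1 - w) * poly_kernel(x, y)

k_same = multi_kernel([1.0, 2.0], [1.0, 2.0])
k_far  = multi_kernel([1.0, 2.0], [5.0, 6.0])
```

In practice the mixing weights are themselves learned (here, by the ant colony optimizer), which is what distinguishes a multi-kernel SVM from a fixed-kernel one.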
Image segmentation is vital when analyzing medical images, especially magnetic resonance (MR) images of the brain. Recently, several image segmentation techniques based on multilevel thresholding have been proposed for medical image segmentation; however, the algorithms become trapped in local minima and have low convergence speeds, particularly as the number of threshold levels increases. Consequently, in this paper, we develop a new multilevel thresholding image segmentation technique based on the jellyfish search algorithm (JSA), an optimizer. We modify the JSA to prevent descents into local minima, and we accelerate convergence toward optimal solutions. The improvement is achieved by applying two novel strategies: ranking-based updating and an adaptive method. Ranking-based updating is used to replace undesirable solutions with other solutions generated by a novel updating scheme that improves the qualities of the removed solutions. We develop a new adaptive strategy to exploit the ability of the JSA to find a best-so-far solution; we allow a small amount of exploration to avoid descents into local minima. The two strategies are integrated with the JSA to produce an improved JSA (IJSA) that optimally thresholds brain MR images. To compare the performances of the IJSA and JSA, seven brain MR images were segmented at threshold levels of 3, 4, 5, 6, 7, 8, 10, 15, 20, 25, and 30. The IJSA was compared with several other recent image segmentation algorithms, including the improved and standard marine predator algorithms, the modified salp and standard salp swarm algorithms, the equilibrium optimizer, and the standard JSA, in terms of fitness, the Structural Similarity Index Metric (SSIM), the peak signal-to-noise ratio (PSNR), the standard deviation (SD), and the Feature Similarity Index Metric (FSIM). The experimental outcomes and the Wilcoxon rank-sum test demonstrate the superiority of the proposed algorithm in terms of the FSIM, the PSNR, the objective values, and the SD; in terms of the SSIM, the IJSA was competitive with the others.
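The end product of multilevel thresholding, and the PSNR metric used to judge it, can be sketched together: pixels are quantized into bands delimited by the thresholds the optimizer found, and PSNR measures fidelity to the original. The thresholds and toy image are illustrative assumptions.

```python
# Illustrative sketch of applying a set of thresholds found by an
# optimizer: pixels map to the mean gray level of their band, and PSNR
# measures fidelity to the original. Thresholds and image are toys.
import numpy as np

def apply_thresholds(img, thresholds):
    """Quantize an image into bands delimited by sorted thresholds."""
    bounds = [0] + sorted(thresholds) + [256]
    out = np.zeros_like(img)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img < hi)
        if mask.any():
            out[mask] = int(img[mask].mean())
    return out

def psnr(original, segmented):
    mse = np.mean((original.astype(float) - segmented.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

img = np.array([[10, 20], [200, 220]], dtype=np.uint8)
seg = apply_thresholds(img, [128])
quality = psnr(img, seg)
```

The optimizer's job is to choose the threshold vector that maximizes a fitness (e.g., between-class variance or entropy); metrics like PSNR then compare the quantized result to the source image.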
Artificial intelligence (AI) is expanding its roots in medical diagnostics. Various acute and chronic diseases can be identified accurately at an initial level by using AI methods, preventing the progression of health complications. Kidney diseases have a high impact on global health, and medical practitioners suggest that diagnosis at earlier stages is one of the foremost approaches to avert chronic kidney disease and renal failure. High blood pressure, diabetes mellitus, and glomerulonephritis are the root causes of kidney disease. Therefore, the present study proposes a set of multiple techniques for the simulation, modeling, and optimization of intelligent kidney disease prediction (SMOIKD) based on computational intelligence approaches. Initially, seven parameters were used for the fuzzy logic system (FLS), and then twenty-five different attributes of the kidney dataset were used for the artificial neural network (ANN) and deep extreme machine learning (DEML). The expert system was developed with the assistance of medical experts. For quick and accurate evaluation of the proposed system, Matlab version 2019 was used. The proposed SMOIKD-FLS-ANN-DEML expert system showed 94.16% accuracy. Hence, this study concluded that the SMOIKD-FLS-ANN-DEML system is effective for accurately diagnosing kidney disease at initial levels.
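One building block of any fuzzy logic system like the FLS above is a membership function that grades a crisp input into a fuzzy set. The triangular form below is a standard choice; the breakpoints are illustrative assumptions for demonstration, not clinical thresholds from the study.

```python
# Illustrative sketch of a fuzzy logic building block: a triangular
# membership function grading, e.g., systolic blood pressure into a
# "high" fuzzy set. Breakpoints are assumptions, not clinical values.

def tri_membership(x, a, b, c):
    """Triangular membership: 0 at a, peaks at 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which a systolic reading belongs to the fuzzy set "high".
high_bp = lambda mmHg: tri_membership(mmHg, 120.0, 160.0, 200.0)

grade = high_bp(140.0)   # halfway up the rising edge
```

An FLS combines such graded inputs through IF-THEN rules and defuzzifies the result into a risk score.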
With the massive success of deep networks, there have been significant efforts to analyze cancer diseases, especially skin cancer. For this purpose, this work investigates the capability of deep networks in diagnosing a variety of dermoscopic lesion images. This paper aims to develop and fine-tune a deep learning architecture to diagnose different skin cancer grades based on dermatoscopic images. Fine-tuning is a powerful method to obtain enhanced classification results from a customized pre-trained network. Regularization, batch normalization, and hyperparameter optimization are performed for fine-tuning the proposed deep network. The proposed fine-tuned ResNet50 model successfully classified the 7 respective classes of dermoscopic lesions using the publicly available HAM10000 dataset. The developed deep model was compared against two powerful models, i.e., InceptionV3 and VGG16, using the Dice similarity coefficient (DSC) and the area under the curve (AUC). The evaluation results show that the proposed model achieved higher results than some recent and robust models.
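The Dice similarity coefficient used in the evaluation has a standard definition, 2|A∩B| / (|A| + |B|), which can be sketched directly over binary label masks (the toy masks below are illustrative):

```python
# Minimal sketch of the Dice similarity coefficient (DSC):
# 2|A∩B| / (|A| + |B|) over binary masks. The toy masks are illustrative.
import numpy as np

def dice(a, b):
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

pred   = np.array([1, 1, 0, 1, 0])
target = np.array([1, 0, 0, 1, 1])
score = dice(pred, target)
```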
Finding clusters based on density represents a significant class of clustering algorithms. These methods can discover clusters of various shapes and sizes. The most studied algorithm in this class is Density-Based Spatial Clustering of Applications with Noise (DBSCAN). It identifies clusters by grouping densely connected objects into one group and discarding the noise objects. It requires two input parameters: epsilon (a fixed neighborhood radius) and MinPts (the lowest number of objects within epsilon). However, it cannot handle clusters of various densities, since it uses a global value for epsilon. This article proposes an adaptation of the DBSCAN method so it can discover clusters of varied densities while reducing the required number of input parameters to only one. The only user input in the proposed method is MinPts; epsilon, on the other hand, is computed automatically based on statistical information of the dataset. The proposed method finds the core distance for each object in the dataset, takes the average of these distances as the first value of epsilon, and finds the clusters satisfying this density level. The remaining unclustered objects are then clustered using a new value of epsilon that equals the average core distance of the unclustered objects. This process continues until all objects have been clustered or the remaining unclustered objects are fewer than 0.006 of the dataset's size. Benchmark datasets were used to evaluate the effectiveness of the proposed method, which produced promising results. Practical experiments demonstrate the outstanding ability of the proposed method to detect clusters of different densities even if there is no separation between them. The accuracy of the method ranges from 92% to 100% for the experimented datasets.
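The automatic epsilon rule described above can be sketched concretely: the core distance of each point (its distance to the MinPts-th nearest neighbor) is computed, and epsilon is the average core distance over the points still unclustered. The tiny one-dimensional dataset is illustrative.

```python
# Sketch of the adaptive epsilon rule: epsilon is the average core
# distance (distance to the MinPts-th nearest neighbor) of the points
# still unclustered. The tiny 1-D dataset is illustrative.
import numpy as np

def core_distances(points, min_pts):
    """Distance from each point to its min_pts-th nearest neighbor (1-D data)."""
    diffs = np.abs(points[:, None] - points[None, :])   # pairwise distance matrix
    sorted_d = np.sort(diffs, axis=1)                   # column 0 is self-distance 0
    return sorted_d[:, min_pts]

points = np.array([0.0, 1.0, 2.0, 10.0, 11.0, 12.0])   # two density regions
cd = core_distances(points, min_pts=2)
epsilon = cd.mean()                                     # first adaptive epsilon
```

In the full method, points reachable at this density level are clustered first; the rule is then re-applied to whatever remains, so sparser regions get a larger epsilon on later passes.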
Detecting COVID-19 cases as early as possible became a critical issue that must be addressed to avoid the pandemic's additional spread and to provide the appropriate treatment to affected patients early. This study aimed to develop a COVID-19 diagnosis and prediction (AIMDP) model that could identify patients with COVID-19 and distinguish it from other viral pneumonia signs detected in chest computed tomography (CT) scans. The proposed system uses convolutional neural networks (CNNs) as a deep learning technology to process hundreds of CT chest scan images and speed up COVID-19 case prediction to facilitate its containment. We employed the whale optimization algorithm (WOA) to select the most relevant patient signs. A set of experiments validated AIMDP's performance, demonstrating its superiority in terms of the area under the receiver operating characteristic curve (AUC-ROC), positive predictive value (PPV), negative predictive rate (NPR), and negative predictive value (NPV). AIMDP was applied to a dataset of hundreds of real data records and CT images, and it was found to achieve 96% AUC for diagnosing COVID-19 and 98% overall accuracy. The results showed the promising performance of AIMDP for diagnosing COVID-19 when compared to other recent diagnosing and predicting models.
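The AUC-ROC figure reported above has a direct probabilistic reading: it is the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. That can be sketched without any plotting (the scores below are toy values, not model outputs):

```python
# Sketch of AUC-ROC computed as the probability that a random positive
# outscores a random negative (ties count half). Scores are toy values.

def auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

positives = [0.9, 0.8, 0.4]   # toy scores for COVID-positive scans
negatives = [0.5, 0.3, 0.2]   # toy scores for other viral pneumonia
score = auc(positives, negatives)
```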
The main aim of this paper is to propose a new memory-dependent derivative (MDD) theory, which is called three-temperature nonlinear generalized anisotropic micropolar-thermoelasticity. The system of governing equations of the problems associated with the proposed theory is extremely difficult or impossible to solve analytically due to the nonlinearity, MDD diffusion, multi-variable nature, multi-stage processing, and anisotropic properties of the considered material. Therefore, we propose a novel boundary element method (BEM) formulation for modeling and simulation of such a system. The computational performance of the proposed technique has been investigated. The numerical results illustrate the effects of time delays and kernel functions on the nonlinear three-temperature and nonlinear displacement components. The numerical results also demonstrate the validity, efficiency, and accuracy of the proposed methodology. The findings and solutions of this study contribute to the further development of industrial applications and devices that typically include micropolar-thermoelastic materials.
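For context, the memory-dependent derivative is commonly defined as a kernel-weighted average of the ordinary derivative over a sliding window of length ω (the time delay); the paper's exact kernel choices may differ, but the standard first-order form is:

```latex
D_{\omega} f(t) \;=\; \frac{1}{\omega} \int_{t-\omega}^{t} K(t-\xi)\, f'(\xi)\, d\xi
```

Here ω > 0 is the time delay and K(t−ξ) is the kernel function, chosen freely to weight the memory effect; commonly studied kernels include K = 1, K = 1 − (t−ξ)/ω, and K = [1 − (t−ξ)/ω]². This is what makes the "effects of time delays and kernel functions" reported in the numerical results meaningful tuning knobs.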
Owing to technological developments, medical image analysis has received considerable attention for the rapid detection and classification of diseases. The brain is an essential organ in humans. Brain tumors cause loss of memory, vision, and name. In 2020, approximately 18,020 deaths occurred due to brain tumors. These cases can be minimized if a brain tumor is diagnosed at a very early stage. Computer vision researchers have introduced several techniques for brain tumor detection and classification. However, owing to many factors, this is still a challenging task. These challenges relate to the tumor size, the shape of the tumor, the location of the tumor, and the selection of important features, among others. In this study, we propose a framework for multimodal brain tumor classification using an ensemble of optimal deep learning features. In the proposed framework, a database is initially normalized in the form of high-grade glioma (HGG) and low-grade glioma (LGG) patients, and then two pre-trained deep learning models (ResNet50 and DenseNet201) are chosen. The deep learning models were modified and trained using transfer learning. Subsequently, an enhanced ant colony optimization algorithm is proposed for the best feature selection from both deep models. The selected features are fused using a serial-based approach and classified using a cubic support vector machine. The experimental process was conducted on the BraTS2019 dataset and achieved accuracies of 87.8% and 84.6% for HGG and LGG, respectively. A comparison with several classification methods shows the significance of our proposed technique.
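The serial-based fusion step amounts to concatenating the selected per-sample feature vectors from the two backbones into one descriptor before classification. The dimensions below are illustrative assumptions, not the actual selected feature counts.

```python
# Sketch of serial-based feature fusion: the feature vectors selected
# from the two backbones are concatenated end to end into a single
# descriptor per sample. Dimensions are illustrative assumptions.
import numpy as np

def serial_fusion(features_a, features_b):
    """Concatenate per-sample feature vectors from two networks."""
    return np.concatenate([features_a, features_b], axis=1)

resnet_feats   = np.ones((3, 5))    # 3 samples, 5 selected ResNet50 features
densenet_feats = np.zeros((3, 4))   # 4 selected DenseNet201 features

fused = serial_fusion(resnet_feats, densenet_feats)
```

The fused matrix is then what the cubic SVM is trained on, one row per patient scan.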
Plant diseases are a major threat to food security, and due to a lack of key infrastructure in many regions of the world, quick identification is still challenging. Harvest losses owing to illnesses are a severe problem for both large farming structures and rural communities, motivating our mission. Because of the large range of diseases, identifying and classifying diseases with the human eye is not only time-consuming and labor-intensive, but also prone to mistakes, with a high error rate. Deep-learning-enabled breakthroughs in computer vision have cleared the road for smartphone-assisted plant disease diagnosis. The proposed work describes a deep learning approach for detecting plant disease. We propose a deep learning model strategy for detecting plant disease and classifying plant leaf diseases. In our research, we focused on detecting plant diseases in five crops (wheat, cotton, grape, corn, and cucumbers) divided into 25 different classes. For this task, we used a public image database of healthy and diseased plant leaves acquired under realistic conditions. A deep convolutional neural model, AlexNet, with Particle Swarm Optimization was trained for this task. We found that the tested deep learning networks achieve an accuracy of 98.83%, specificity of 98.56%, sensitivity of 98.78%, precision of 98.67%, and F-score of 98.47%, demonstrating the feasibility of this approach.
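The five reported metrics all derive from the confusion matrix, which can be sketched for the binary case (the counts below are illustrative, not the study's results):

```python
# Sketch of the reported evaluation metrics computed from a binary
# confusion matrix (tp, fp, tn, fn). The counts are illustrative.

def metrics(tp, fp, tn, fn):
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)            # a.k.a. recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f_score     = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f_score

acc, sens, spec, prec, f1 = metrics(tp=90, fp=10, tn=85, fn=15)
```

For the 25-class setting in the paper, these are computed per class (one-vs-rest) and averaged.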
In this paper, a discrete Lotka-Volterra predator-prey model is proposed that considers mixed functional responses of Holling types I and III. The equilibrium points of the model are obtained, and their stability is tested. The dynamical behavior of this model is studied according to the change of the control parameters. We find that the complex dynamical behavior extends from a stable state to chaotic attractors. Finally, the analytical results are clarified by some numerical simulations.
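For reference, the two functional responses being mixed have the standard Holling forms (a and b here are generic rate constants; the paper's exact parameterization may differ):

```latex
f_{\mathrm{I}}(x) = a x,
\qquad
f_{\mathrm{III}}(x) = \frac{a x^{2}}{1 + b x^{2}}
```

Type I grows linearly with prey density x, while type III is sigmoidal: predation is weak at low prey density and saturates at high density, which is what makes the mixed model's dynamics richer than the classical linear case.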
Computer vision is one of the significant trends in computer science. It plays a vital role in many applications, especially in the medical field. Early detection and segmentation of different tumors is a big challenge in the medical world. The proposed framework uses ultrasound images from Kaggle, applying five diverse models to denoise the images, using the best possible noise-free image as input to the U-Net model for segmentation of the tumor, and then using a Convolutional Neural Network (CNN) model to classify whether the tumor is benign, malignant, or normal. The main challenge faced by the framework in segmentation is speckle noise. It is a multiplicative and detrimental issue in breast ultrasound imaging; because of this noise, the image resolution and contrast are reduced, which affects the diagnostic value of this imaging modality. As a result, speckle noise reduction is vital for the segmentation process. The framework uses five models, namely Generative Adversarial Denoising Network (DGAN-Net), Denoising U-Shaped Net (D-U-NET), Batch Renormalization U-Net (Br-UNET), Generative Adversarial Network (GAN), and Nonlocal Neutrosophic Wiener Filtering (NLNWF), to reduce the speckle noise in the breast ultrasound images, then chooses the best image according to the peak signal-to-noise ratio (PSNR) for each level of speckle noise. The five methods were compared with classical filters such as Bilateral, Frost, Kuan, and Lee, and they proved their efficiency according to PSNR at different noise levels. At speckle-noise levels (0.1, 0.25, 0.5, 0.75), the five models achieved PSNR values of (33.354, 29.415, 27.218, 24.115), (31.424, 28.353, 27.246, 24.244), (32.243, 28.42, 27.744, 24.893), (31.234, 28.212, 26.983, 23.234), and (33.013, 29.491, 28.556, 25.011) for DGAN-Net, Br-U-NET, D-U-NET, GAN, and NLNWF, respectively. According to the PSNR value and speckle-noise level, the best image was passed for segmentation using U-Net and classification using CNN to detect the tumor type. The experiments proved the quality of U-Net and CNN in segmentation and classification, respectively, since they achieved 95.11 and 95.13 in segmentation and 95.55 and 95.67 in classification as Dice score and accuracy, respectively.
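Since the denoising comparison above is driven entirely by PSNR, a definition-level sketch may help; this assumes 8-bit images with a peak value of 255 (the paper does not state its peak value, so that is an assumption here).

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR means less residual noise, which is why the framework can simply pick, per noise level, the denoised image with the largest PSNR.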
This paper focuses on the unsupervised detection of the Higgs boson particle using the most informative features and variables which characterize the "Higgs machine learning challenge 2014" data set. This unsupervised detection proceeds in four steps: (1) selection of the most informative features from the considered data; (2) definition of the number of clusters based on the elbow criterion, where the experimental results showed that the optimal number of clusters that group the considered data in an unsupervised manner is 2; (3) proposition of a new approach for hybridization of both hard and fuzzy clustering tuned with Ant Lion Optimization (ALO); (4) comparison with some existing metaheuristic optimizations such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). By employing a multi-angle analysis based on the cluster validation indices, the confusion matrix, the efficiency and purity rates, the average cost variation, the computational time, and the Sammon mapping visualization, the results highlight the effectiveness of the improved Gustafson-Kessel algorithm optimized with ALO (ALOGK) to validate the proposed approach. Even if the paper gives a complete clustering analysis, its novel contribution concerns only Steps (1) and (3) above. The first contribution lies in the method used for Step (1) to select the most informative features and variables. We used the t-statistic technique to rank them. Afterwards, a feature mapping is applied using a Self-Organizing Map (SOM) to identify the level of correlation between them. Then, Particle Swarm Optimization (PSO), a metaheuristic optimization technique, is used to reduce the data set dimension. The second contribution of this work concerns the third step, where each of the clustering algorithms, K-means (KM), Global K-means (GlobalKM), Partitioning Around Medoids (PAM), Fuzzy C-means (FCM), Gustafson-Kessel (GK), and Gath-Geva (GG), is optimized and tuned with ALO.
Determining the optimum location of facilities is critical in many fields, particularly in healthcare. This study proposes the application of a suitable location model for field hospitals during the novel coronavirus 2019 (COVID-19) pandemic. The model used is the most appropriate among the three most common location models utilized to solve healthcare problems (the set covering model, the maximal covering model, and the P-median model). The proposed nonlinear binary constrained model is a slight modification of the maximal covering model with a set of nonlinear constraints. The model is used to determine the optimum location of field hospitals for COVID-19 risk reduction. The designed mathematical model and the solution method are used to deploy field hospitals in eight governorates in Upper Egypt. In this case study, a discrete binary gaining-sharing knowledge-based optimization (DBGSK) algorithm is proposed. The DBGSK algorithm is based on how humans acquire and share knowledge throughout their lives. It mainly depends on two binary stages, junior and senior, which enable DBGSK to explore and exploit the search space efficiently and effectively, so it can solve problems in binary space.
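The maximal covering model mentioned above can be illustrated with a tiny exhaustive baseline: choose p candidate sites so that the total demand of the covered points is maximized. This is a generic textbook sketch with hypothetical data, not the paper's nonlinear model or its DBGSK solver.

```python
from itertools import combinations

def maximal_covering(cover_sets, demand, p):
    """Brute-force maximal covering location problem.
    cover_sets[j]: set of demand points that candidate site j covers.
    demand[i]: weight of demand point i.
    Returns the best p-site combination and its covered demand."""
    best_sites, best_value = None, -1
    for sites in combinations(range(len(cover_sets)), p):
        covered = set().union(*(cover_sets[j] for j in sites))
        value = sum(demand[i] for i in covered)
        if value > best_value:
            best_sites, best_value = sites, value
    return best_sites, best_value
```

Exhaustive enumeration is only feasible for toy instances; for realistic instances with nonlinear constraints, metaheuristics such as DBGSK search the binary space instead.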
Funding: supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2503).
Abstract: In recent years, fog computing has become an important environment for dealing with the Internet of Things. Fog computing was developed to handle large-scale big data by scheduling tasks via cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes in cloud computing. With the large amount of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to save energy consumption across nodes in fog computing when users execute tasks through the least-cost paths. Task scheduling is developed using a modified artificial ecosystem optimization (AEO) combined with swarm operators of the Salp Swarm Algorithm (SSA), in order to competitively optimize their capabilities during the exploitation phase of the optimal search process. The proposed strategy, the Enhanced Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), attempts to find the most suitable solution to the optimization problem that combines cost and energy for multi-objective task scheduling. A knapsack (backpack) formulation is also added to improve both cost and energy in the iFogSim implementation. A comparison was made between the proposed strategy and other strategies in terms of time, cost, energy, and productivity. Experimental results showed that the proposed strategy improved energy consumption, cost, and time over other algorithms. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
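The "backpack" (knapsack) formulation mentioned above is, in its textbook 0/1 form, solvable by dynamic programming; the following generic sketch is illustrative only and is not the paper's fog-computing cost/energy formulation.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming.
    Returns the maximum total value achievable within the weight capacity."""
    dp = [0] * (capacity + 1)          # dp[c] = best value using capacity c
    for v, w in zip(values, weights):
        # iterate capacity downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

In a scheduling context, one could read "value" as the benefit of assigning a task to a node and "weight" as its cost or energy budget, which is presumably the spirit of the paper's use of the problem.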
Abstract: Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles, as well as radiological imaging and histopathological slides, makes this approach increasingly important because, when examined separately, these data sources offer only a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, such as (1) radiology, where automated report generation and lesion detection are facilitated by image-text integration; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where molecular subtypes and latent biomarkers are revealed through cross-modal learning. We provide an overview of representative research, methodological advancements, and clinical consequences for each domain. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of maintaining significant cross-modal correlations during fusion. These problems impede interpretability, which is crucial for clinical trust and use, in addition to performance and generalizability. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of cooperation for federated multimodal learning. Our goal is to provide researchers and clinicians with a concise overview of the field's present state, enduring constraints, and exciting directions for further research through this review.
Abstract: Several optimization methods, such as Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), are used to select the most suitable Static Synchronous Compensator (STATCOM) technology for the optimal operation of the power system, as well as to determine its optimal location and size to minimize power losses. An IEEE 14-bus system, integrating three wind turbines based on Squirrel Cage Induction Generators (SCIGs), is used to test the applicability of the proposed algorithms. The results demonstrate that these algorithms are capable of selecting the most appropriate technology while optimally sizing and locating the STATCOM to reduce power losses in the network. Specifically, the optimized STATCOM allocation using PSO achieves a 7.44% reduction in total active power loss compared to the optimized allocation using the GA. Furthermore, the voltage magnitudes at buses 4, 9, and 10, which had initially exceeded the upper voltage limit, were reduced and brought within acceptable ranges, thereby improving the system's overall voltage profile. Consequently, the optimal allocation of the STATCOM significantly enhances the efficiency and performance of the power network.
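For readers unfamiliar with the PSO algorithm used above, here is a minimal generic implementation minimizing a function over a box. This is a textbook sketch with assumed inertia and acceleration coefficients, not the STATCOM-allocation code.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimization minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the STATCOM setting, f would be the network's power-loss objective evaluated by a load-flow solver, and the decision vector would encode the device's location and size.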
Funding: funded by the Deanship of Scientific Research and Libraries, Princess Nourah bint Abdulrahman University, through the Program of Research Project Funding After Publication, grant No. (RPFAP-82-1445).
Abstract: This research proposes an improved Puma optimization algorithm (IPuma) as a novel dynamic reconfiguration tool for a photovoltaic (PV) array connected in total-cross-tied (TCT) form. The proposed algorithm utilizes the Newton-Raphson search rule (NRSR) to boost the exploration process, especially in search spaces with many local regions, and boosts exploitation with adaptive parameters alternating with the random parameters of the original Puma. The effectiveness of the introduced IPuma is confirmed through comprehensive evaluations on the CEC'20 benchmark problems. It shows superior performance compared to both established and modern metaheuristic algorithms in effectively navigating the search space and achieving convergence towards near-optimal regions. The findings indicate that the IPuma algorithm demonstrates considerable statistical promise and surpasses the performance of competing algorithms. In addition, the proposed IPuma is utilized to reconfigure a 9×9 PV array that operates under different shade patterns, such as lower triangular (LT), long wide (LW), and short wide (SW). The method is compared with the traditional TCT and Sudoku configurations, as well as other programmed approaches such as the Whale Optimization Algorithm (WOA), Grey Wolf Optimizer (GWO), Harris Hawks Optimization (HHO), Particle Swarm Optimization (PSO), the gravitational search algorithm (GSA), biogeography-based optimization (BBO), the sine cosine algorithm (SCA), the equilibrium optimizer (EO), and the original Puma. The metrics of mismatch power loss, maximum efficiency improvement, efficiency improvement ratio, and peak-to-mean ratio are calculated to assess the effectiveness of the proposed approach. The proposed IPuma improved the generated power by 36.72%, 28.03%, and 40.97% for SW, LW, and LT, respectively, outperforming the TCT configuration. In addition, it achieved the best maximum efficiency improvement among the algorithms considered, with 26.86%, 21.89%, and 29.07% for the examined patterns. The results highlight the superiority and competence of the proposed approach in both convergence rate and stability, as well as its applicability to dynamically reconfigure the PV system and enhance its harvested energy.
Funding: funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R442), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Accurate parameter extraction of photovoltaic (PV) models plays a critical role in enabling precise performance prediction, optimal system sizing, and effective operational control under diverse environmental conditions. While a wide range of metaheuristic optimisation techniques have been applied to this problem, many existing methods are hindered by slow convergence rates, susceptibility to premature stagnation, and reduced accuracy when applied to complex multi-diode PV configurations. These limitations can lead to suboptimal modelling, reducing the efficiency of PV system design and operation. In this work, we propose an enhanced hybrid optimisation approach, the modified Spider Wasp Optimization (mSWO) with Opposition-Based Learning algorithm, which integrates the exploration and exploitation capabilities of the Spider Wasp Optimization (SWO) metaheuristic with the diversity-enhancing mechanism of Opposition-Based Learning (OBL). The hybridisation is designed to dynamically expand the search space coverage, avoid premature convergence, and improve both convergence speed and precision in high-dimensional optimisation tasks. The mSWO algorithm is applied to three well-established PV configurations: the single diode model (SDM), the double diode model (DDM), and the triple diode model (TDM). Real experimental current-voltage (I-V) datasets from a commercial PV module under standard test conditions (STC) are used for evaluation. Comparative analysis is conducted against eighteen advanced metaheuristic algorithms, including BSDE, RLGBO, GWOCS, MFO, EO, TSA, and SCA. Performance metrics include minimum, mean, and maximum root mean square error (RMSE), standard deviation (SD), and convergence behaviour over 30 independent runs. The results reveal that mSWO consistently delivers superior accuracy and robustness across all PV models, achieving the lowest RMSE values of 0.000986022 (SDM), 0.000982884 (DDM), and 0.000982529 (TDM), with minimal SD values, indicating remarkable repeatability. Convergence analyses further show that mSWO reaches optimal solutions more rapidly and with fewer oscillations than all competing methods, with the performance gap widening as model complexity increases. These findings demonstrate that mSWO provides a scalable, computationally efficient, and highly reliable framework for PV parameter extraction. Its adaptability to models of growing complexity suggests strong potential for broader applications in renewable energy systems, including performance monitoring, fault detection, and intelligent control, thereby contributing to the optimisation of next-generation solar energy solutions.
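The Opposition-Based Learning mechanism described above has a simple core: for a candidate x in [lo, hi], also evaluate its opposite lo + hi − x and keep the better of the two. The following generic initialization sketch illustrates that idea only; it is an assumption-level illustration, not the mSWO code.

```python
import random

def opposite(x, lo, hi):
    """Opposition-based learning: the opposite point of x within [lo, hi]."""
    return [lo + hi - xi for xi in x]

def obl_init(f, dim, lo, hi, n, seed=0):
    """Initialize a population; for each random candidate, evaluate it and its
    opposite, and keep whichever scores better (a common OBL scheme)."""
    rng = random.Random(seed)
    pop = []
    for _ in range(n):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        xo = opposite(x, lo, hi)
        pop.append(x if f(x) <= f(xo) else xo)
    return pop
```

Evaluating opposites doubles the initial spread of samples across the search box, which is the diversity-enhancing effect the hybridisation exploits.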
Funding: funded by the Research, Development, and Innovation Authority (RDIA), Kingdom of Saudi Arabia, under supervision of the Energy, Industry, and Advanced Technologies Research Center, Taibah University, Madinah, Saudi Arabia, with grant number (12979-iau-2023-TAU-R-3-1-EI-).
Abstract: The generation of high-quality 3D models from single 2D images remains challenging in terms of accuracy and completeness. Deep learning has emerged as a promising solution, offering new avenues for improvement. However, building models from scratch is computationally expensive and requires large datasets. This paper presents a transfer-learning-based approach for category-specific 3D reconstruction from a single 2D image. The core idea is to fine-tune a pre-trained model on specific object categories using new, unseen data, resulting in specialized versions of the model that are better adapted to reconstruct particular objects. The proposed approach utilizes a three-phase pipeline comprising image acquisition, 3D reconstruction, and refinement. After ensuring the quality of the input image, a ResNet50 model is used for object recognition, directing the image to the corresponding category-specific model to generate a voxel-based representation. The voxel-based 3D model is then refined by transforming it into a detailed triangular mesh representation using the Marching Cubes algorithm and Laplacian smoothing. An experimental study, using the Pix2Vox model and the Pascal3D dataset, has been conducted to evaluate and validate the effectiveness of the proposed approach. Results demonstrate that category-specific fine-tuning of Pix2Vox significantly outperforms both the original model and a general model fine-tuned on all object categories, with substantial gains in Intersection over Union (IoU) scores. Visual assessments confirm improvements in geometric detail and surface realism. These findings indicate that combining transfer learning with the category-specific fine-tuning and refinement strategy of our approach leads to better-quality 3D model generation.
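Laplacian smoothing, used above to refine the extracted mesh, can be sketched generically: each vertex moves a fraction of the way toward the centroid of its neighbors. This is a simplified uniform-weight version on an explicit neighbor list, not the paper's implementation.

```python
def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """Uniform Laplacian smoothing: move each vertex toward the centroid of its
    neighbors by factor lam, repeated for a fixed number of iterations.
    vertices: list of coordinate tuples; neighbors[i]: indices adjacent to i."""
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            nbrs = neighbors[i]
            if not nbrs:
                new.append(v[:])        # isolated vertex stays put
                continue
            centroid = [sum(verts[j][k] for j in nbrs) / len(nbrs)
                        for k in range(len(v))]
            new.append([v[k] + lam * (centroid[k] - v[k]) for k in range(len(v))])
        verts = new
    return verts
```

Applied to a Marching Cubes output, this damps the staircase artifacts of the voxel grid at the cost of some shrinkage, which is why only a few iterations are typically used.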
Funding: RUSA-Phase 2.0 grant sanctioned vide Letter No. F.24-51/2014-U, Policy (TN Multi-Gen), Dept. of Edn., Govt. of India, Dt. 09.10.2018.
Abstract: Diabetic retinopathy (DR) is a disease with an increasing prevalence and the major cause of blindness among the working-age population. The possibility of severe vision loss can be greatly reduced by timely diagnosis and treatment. Automated screening for DR has been identified as an effective method for early DR detection, which can decrease the workload associated with manual grading as well as save diagnosis costs and time. Several studies have been carried out to develop automated detection and classification models for DR. This paper presents a new IoT- and cloud-based deep learning approach for healthcare diagnosis of diabetic retinopathy. The proposed model incorporates different processes, namely data collection, preprocessing, segmentation, feature extraction, and classification. At first, the IoT-based data collection takes place, where the patient wears a head-mounted camera to capture the retinal fundus image and send it to a cloud server. Then, the contrast level of the input DR image is increased in the preprocessing stage using the Contrast Limited Adaptive Histogram Equalization (CLAHE) model. Next, the preprocessed image is segmented using the Adaptive Spatial Kernel distance measure-based Fuzzy C-Means clustering (ASKFCM) model. Afterwards, a deep Convolutional Neural Network (CNN)-based Inception v4 model is applied as a feature extractor, and the resulting feature vectors undergo classification with the Gaussian Naive Bayes (GNB) model. The proposed model was tested using the benchmark MESSIDOR DR image dataset, and the obtained results showcased the superior performance of the proposed model over the other models compared in the study.
Abstract: In the present digital era, an exponential increase in Internet of Things (IoT) devices poses several design issues for businesses concerning security and privacy. Earlier studies indicate that blockchain technology is a significant solution to the data security challenges that exist in IoT. In this view, this paper presents a new privacy-preserving Secure Ant Colony Optimization with Multi Kernel Support Vector Machine (ACOMKSVM) with an Elliptic Curve cryptosystem (ECC) for secure and reliable IoT data sharing. This scheme uses blockchain to ensure the protection and integrity of data while providing the technology to create secure ACOMKSVM training algorithms over partial views of IoT data collected from various data providers. Then, ECC is used to create effective and accurate privacy that protects the ACOMKSVM secure learning process. In this study, the authors deployed a blockchain technique to create a secure and reliable data exchange platform across multiple data providers, where IoT data is encrypted and recorded in a distributed ledger. The security analysis showed that the scheme ensures confidentiality of critical data from each data provider and protects the parameters of the ACOMKSVM model for data analysts. To examine the performance of the proposed method, it was tested against two benchmark datasets, the Breast Cancer Wisconsin Data Set (BCWD) and the Heart Disease Data Set (HDD) from the UCI machine learning repository. The simulation outcome indicated that the ACOMKSVM model outperformed all the compared methods under several aspects.
Funding: This research was supported by the Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.
Abstract: Image segmentation is vital when analyzing medical images, especially magnetic resonance (MR) images of the brain. Recently, several image segmentation techniques based on multilevel thresholding have been proposed for medical image segmentation; however, the algorithms become trapped in local minima and have low convergence speeds, particularly as the number of threshold levels increases. Consequently, in this paper, we develop a new multilevel thresholding image segmentation technique based on the jellyfish search algorithm (JSA) (an optimizer). We modify the JSA to prevent descents into local minima and to accelerate convergence toward optimal solutions. The improvement is achieved by applying two novel strategies: ranking-based updating and an adaptive method. Ranking-based updating is used to replace undesirable solutions with other solutions generated by a novel updating scheme that improves the qualities of the removed solutions. We develop a new adaptive strategy to exploit the ability of the JSA to find a best-so-far solution; we allow a small amount of exploration to avoid descents into local minima. The two strategies are integrated with the JSA to produce an improved JSA (IJSA) that optimally thresholds brain MR images. To compare the performance of the IJSA and JSA, seven brain MR images were segmented at threshold levels of 3, 4, 5, 6, 7, 8, 10, 15, 20, 25, and 30. IJSA was compared with several other recent image segmentation algorithms, including the improved and standard marine predator algorithms, the modified salp and standard salp swarm algorithms, the equilibrium optimizer, and the standard JSA in terms of fitness, the Structural Similarity Index Metric (SSIM), the peak signal-to-noise ratio (PSNR), the standard deviation (SD), and the Feature Similarity Index Metric (FSIM). The experimental outcomes and the Wilcoxon rank-sum test demonstrate the superiority of the proposed algorithm in terms of the FSIM, the PSNR, the objective values, and the SD; in terms of the SSIM, IJSA was competitive with the others.
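A common fitness function for multilevel thresholding like that optimized above is the Otsu-style between-class variance computed from the image histogram; the optimizer searches for the threshold vector that maximizes it. This is the textbook formulation, offered as an illustration since the abstract does not state the paper's exact objective.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu-style multilevel-thresholding fitness: sum over classes of
    w_k * (mu_k - mu_total)^2, where each class is a histogram slice
    delimited by the sorted thresholds."""
    p = hist / hist.sum()                       # normalized histogram
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()               # global mean gray level
    edges = [0] + sorted(thresholds) + [len(hist)]
    var = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        w = p[a:b].sum()                        # class probability
        if w > 0:
            mu = (p[a:b] * levels[a:b]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var
```

A metaheuristic such as IJSA would treat `thresholds` as the candidate solution and this value as the fitness to maximize, which becomes expensive and multi-modal as the number of levels grows.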
Abstract: Artificial intelligence (AI) is expanding its roots in medical diagnostics. Various acute and chronic diseases can be identified accurately at an initial level by using AI methods, preventing the progression of health complications. Kidney diseases have a high impact on global health, and medical practitioners suggest that diagnosis at earlier stages is one of the foremost approaches to avert chronic kidney disease and renal failure. High blood pressure, diabetes mellitus, and glomerulonephritis are the root causes of kidney disease. Therefore, the present study proposes a set of multiple techniques for simulation, modeling, and optimization of intelligent kidney disease prediction (SMOIKD), which is based on computational intelligence approaches. Initially, seven parameters were used for the fuzzy logic system (FLS), and then twenty-five different attributes of the kidney dataset were used for the artificial neural network (ANN) and deep extreme machine learning (DEML). The expert system was proposed with the assistance of medical experts. For quick and accurate evaluation of the proposed system, Matlab version 2019 was used. The proposed SMOIKD-FLS-ANN-DEML expert system showed 94.16% accuracy. Hence, this study concludes that the SMOIKD-FLS-ANN-DEML system is effective for accurately diagnosing kidney disease at initial levels.
Abstract: With the massive success of deep networks, there have been significant efforts to analyze cancer diseases, especially skin cancer. For this purpose, this work investigates the capability of deep networks in diagnosing a variety of dermoscopic lesion images. This paper aims to develop and fine-tune a deep learning architecture to diagnose different skin cancer grades based on dermatoscopic images. Fine-tuning is a powerful method to obtain enhanced classification results from a customized pre-trained network. Regularization, batch normalization, and hyperparameter optimization are performed for fine-tuning the proposed deep network. The proposed fine-tuned ResNet50 model successfully classified 7 respective classes of dermoscopic lesions using the publicly available HAM10000 dataset. The developed deep model was compared against two powerful models, i.e., InceptionV3 and VGG16, using the Dice similarity coefficient (DSC) and the area under the curve (AUC). The evaluation results show that the proposed model achieved higher results than some recent and robust models.
Funding: The author extends his appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through project number (IFPSAU-2021/01/17758).
Abstract: Finding clusters based on density represents a significant class of clustering algorithms. These methods can discover clusters of various shapes and sizes. The most studied algorithm in this class is Density-Based Spatial Clustering of Applications with Noise (DBSCAN). It identifies clusters by grouping the densely connected objects into one group and discarding the noise objects. It requires two input parameters: epsilon (a fixed neighborhood radius) and MinPts (the lowest number of objects in epsilon). However, it cannot handle clusters of various densities since it uses a global value for epsilon. This article proposes an adaptation of the DBSCAN method so it can discover clusters of varied densities besides reducing the required number of input parameters to only one. The only user input in the proposed method is MinPts; epsilon, on the other hand, is computed automatically based on statistical information of the dataset. The proposed method finds the core distance for each object in the dataset, takes the average of these distances as the first value of epsilon, and finds the clusters satisfying this density level. The remaining unclustered objects are then clustered using a new value of epsilon that equals the average core distance of the unclustered objects. This process continues until all objects have been clustered or the remaining unclustered objects number less than 0.006 of the dataset's size. Benchmark datasets were used to evaluate the effectiveness of the proposed method, which produced promising results. Practical experiments demonstrate the outstanding ability of the proposed method to detect clusters of different densities even when there is no separation between them. The accuracy of the method ranges from 92% to 100% for the experimented datasets.
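The epsilon schedule described above (average core distance of the still-unclustered objects, repeated until fewer than 0.006 of the objects remain) can be sketched as follows. The DBSCAN runs themselves are omitted, and the rule used here for which points leave the pool at each level (core distance ≤ epsilon) is a simplifying assumption for illustration.

```python
import numpy as np

def core_distances(X, min_pts):
    """Distance from each point to its min_pts-th nearest neighbor."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    d.sort(axis=1)
    return d[:, min_pts]          # column 0 is the point itself (distance 0)

def adaptive_epsilons(X, min_pts, min_frac=0.006):
    """Sketch of the adaptive epsilon schedule: repeatedly take the average
    core distance of the remaining points as the next density level."""
    cd = core_distances(X, min_pts)
    remaining = np.ones(len(X), dtype=bool)
    eps_levels = []
    while remaining.sum() > min_frac * len(X):
        eps = cd[remaining].mean()
        eps_levels.append(eps)
        newly = remaining & (cd <= eps)   # assumed: these cluster at this level
        if not newly.any():
            break
        remaining &= ~newly
    return eps_levels
```

Each successive epsilon is larger than the last (it averages only the sparser leftovers), so the method sweeps from dense to sparse density levels with MinPts as the sole user parameter.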
Abstract: Detecting COVID-19 cases as early as possible became a critical issue that must be addressed to avoid additional spread of the pandemic and to provide the appropriate treatment to affected patients early. This study aimed to develop a COVID-19 diagnosis and prediction (AIMDP) model that could identify patients with COVID-19 and distinguish it from other viral pneumonia signs detected in chest computed tomography (CT) scans. The proposed system uses convolutional neural networks (CNNs) as a deep learning technology to process hundreds of CT chest scan images and speeds up COVID-19 case prediction to facilitate its containment. We employed the whale optimization algorithm (WOA) to select the most relevant patient signs. A set of experiments validated AIMDP performance. They demonstrated the superiority of AIMDP in terms of the area under the receiver operating characteristic curve (AUC-ROC), positive predictive value (PPV), negative predictive rate (NPR), and negative predictive value (NPV). AIMDP was applied to a dataset of hundreds of real data and CT images, and it was found to achieve 96% AUC for diagnosing COVID-19 and 98% overall accuracy. The results showed the promising performance of AIMDP for diagnosing COVID-19 when compared to other recent diagnosis and prediction models.
Abstract: The main aim of this paper is to propose a new memory-dependent derivative (MDD) theory called three-temperature nonlinear generalized anisotropic micropolar-thermoelasticity. The system of governing equations of the problems associated with the proposed theory is extremely difficult or impossible to solve analytically due to nonlinearity, MDD diffusion, its multi-variable nature, multi-stage processing, and the anisotropic properties of the considered material. Therefore, we propose a novel boundary element method (BEM) formulation for modeling and simulation of such a system. The computational performance of the proposed technique has been investigated. The numerical results illustrate the effects of time delays and kernel functions on the nonlinear three-temperature and nonlinear displacement components. The numerical results also demonstrate the validity, efficiency, and accuracy of the proposed methodology. The findings and solutions of this study contribute to the further development of industrial applications and devices that typically include micropolar-thermoelastic materials.
Funding: This study was supported by grants from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare (HI18C1216), the National Research Foundation of Korea (NRF-2020R1I1A1A01074256), and the Soonchunhyang University Research Fund.
Abstract: Owing to technological developments, medical image analysis has received considerable attention in the rapid detection and classification of diseases. The brain is an essential organ in humans. Brain tumors cause loss of memory, vision, and naming ability. In 2020, approximately 18,020 deaths occurred due to brain tumors. These cases can be minimized if a brain tumor is diagnosed at a very early stage. Computer vision researchers have introduced several techniques for brain tumor detection and classification. However, owing to many factors, this is still a challenging task. These challenges relate to the tumor size, the shape of the tumor, the location of the tumor, and the selection of important features, among others. In this study, we propose a framework for multimodal brain tumor classification using an ensemble of optimal deep learning features. In the proposed framework, a database is first normalized in the form of high-grade glioma (HGG) and low-grade glioma (LGG) patients, and then two pre-trained deep learning models (ResNet50 and DenseNet201) are chosen. The deep learning models were modified and trained using transfer learning. Subsequently, an enhanced ant colony optimization algorithm is proposed for best feature selection from both deep models. The selected features are fused using a serial-based approach and classified using a cubic support vector machine. The experimental process was conducted on the BraTS2019 dataset and achieved accuracies of 87.8% and 84.6% for HGG and LGG, respectively. A comparison with several classification methods shows the significance of our proposed technique.
Abstract: Plant diseases are a major impediment to food security, and due to a lack of key infrastructure in many regions of the world, quick identification is still challenging. Harvest losses owing to illness are a severe problem for both large farming structures and rural communities, motivating our mission. Because of the large range of diseases, identifying and classifying diseases with human eyes is not only time-consuming and labor-intensive, but also prone to mistakes, with a high error rate. Deep-learning-enabled breakthroughs in computer vision have cleared the road for smartphone-assisted plant disease diagnosis. The proposed work describes a deep learning approach for plant disease detection. We therefore propose a deep learning model strategy for detecting plant disease and classifying plant leaf diseases. In our research, we focused on detecting plant diseases in five crops divided into 25 different classes (wheat, cotton, grape, corn, and cucumbers). For this task, we used a public image database of healthy and diseased plant leaves acquired under realistic conditions. A deep convolutional neural model, AlexNet, with Particle Swarm Optimization was trained for this task; we found that the tested deep learning network achieves an accuracy of 98.83%, specificity of 98.56%, sensitivity of 98.78%, precision of 98.67%, and F-score of 98.47%, demonstrating the feasibility of this approach.
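The metrics reported above (accuracy, specificity, sensitivity, precision, F-score) all follow from confusion-matrix counts; a definition-level sketch for the binary case, shown here only to make the reported figures precise:

```python
def metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts:
    true/false positives (tp, fp) and true/false negatives (tn, fn)."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f_score     = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f_score
```

For the 25-class setting in the paper, these would typically be computed per class (one-vs-rest) and then averaged.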
Funding: The authors thank the Deanship of Scientific Research at King Khalid University for funding this work through the Big Research Group Project under grant number R.G.P2/16/40.
Abstract: In this paper, a discrete Lotka-Volterra predator-prey model is proposed that considers mixed functional responses of Holling types I and III. The equilibrium points of the model are obtained, and their stability is tested. The dynamical behavior of the model is studied as the control parameters change. We find that the dynamics range from a stable state to chaotic attractors. Finally, the analytical results are illustrated by numerical simulations.
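One plausible form of such a model can be sketched as a forward-Euler discretization with Holling type I (linear) predation on the prey and a Holling type III (sigmoidal) numerical response for the predator. The exact equations and parameter values in the paper may differ; those below are illustrative only.

```python
def step(x, y, h=0.05, r=2.0, a=1.0, b=3.0, c=1.0, d=0.5):
    """One step of a discrete predator-prey map with mixed Holling responses.

    x: prey density, y: predator density, h: discretization step.
    """
    # prey: logistic growth minus Holling type I predation (linear in x)
    x_next = x + h * x * (r * (1 - x) - a * y)
    # predator: Holling type III response b*x^2/(1 + c*x^2) minus mortality d
    y_next = y + h * y * (b * x * x / (1 + c * x * x) - d)
    return x_next, y_next

# Iterate the map from an interior initial condition
x, y = 0.5, 0.5
trajectory = []
for _ in range(100):
    x, y = step(x, y)
    trajectory.append((x, y))
```

Varying control parameters such as `r` or `h` in a sketch like this is how one typically explores the transition from a stable equilibrium to periodic orbits and chaos, e.g. via bifurcation diagrams.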
Abstract: Computer vision is one of the significant trends in computer science. It plays a vital role in many applications, especially in the medical field. Early detection and segmentation of tumors is a major challenge in medicine. The proposed framework uses ultrasound images from Kaggle, applies five different models to denoise the images, uses the best noise-free image as input to a U-Net model for tumor segmentation, and then uses a convolutional neural network (CNN) to classify the tumor as benign, malignant, or normal. The main challenge in the segmentation stage is speckle noise, a multiplicative artifact in breast ultrasound imaging that reduces image resolution and contrast and thereby the diagnostic value of the modality. As a result, speckle noise reduction is vital for the segmentation process. The framework uses five models, the Generative Adversarial Denoising Network (DGAN-Net), Denoising U-Shaped Net (D-U-NET), Batch Renormalization U-Net (Br-U-NET), Generative Adversarial Network (GAN), and Nonlocal Neutrosophic Wiener Filtering (NLNWF), to reduce speckle noise in the breast ultrasound images, and then chooses the best image according to the peak signal-to-noise ratio (PSNR) at each speckle-noise level. The five methods were compared with classical filters such as the Bilateral, Frost, Kuan, and Lee filters and proved their efficiency in terms of PSNR at different noise levels. At speckle-noise levels (0.1, 0.25, 0.5, 0.75), the five models achieved PSNR values of (33.354, 29.415, 27.218, 24.115) for DGAN, (31.424, 28.353, 27.246, 24.244) for Br-U-NET, (32.243, 28.42, 27.744, 24.893) for D-U-NET, (31.234, 28.212, 26.983, 23.234) for GAN, and (33.013, 29.491, 28.556, 25.011) for NLNWF, respectively. According to the PSNR value at each speckle-noise level, the best image was passed on for segmentation using U-Net and classification using CNN to detect the tumor type. The experiments proved the quality of U-Net and CNN in segmentation and classification, respectively, since they achieved 95.11 and 95.13 in segmentation and 95.55 and 95.67 in classification, as Dice score and accuracy, respectively.
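The two metrics used above, PSNR for ranking the denoised images and the Dice score for evaluating segmentation, have standard definitions that can be sketched directly. This sketch assumes 8-bit pixel values (maximum 255) and flattens images/masks to plain lists for simplicity.

```python
import math

def psnr(original, denoised, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images."""
    mse = sum((a - b) ** 2 for a, b in zip(original, denoised)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def dice(mask_a, mask_b):
    """Dice score between two binary segmentation masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    return 2.0 * inter / (sum(map(bool, mask_a)) + sum(map(bool, mask_b)))

# Tiny worked example (hypothetical 4-pixel images/masks)
clean = [100, 120, 130, 140]
noisy = [102, 118, 131, 139]
print(round(psnr(clean, noisy), 2))  # 44.15 dB: MSE = 2.5
print(dice([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1/(2+1)
```

A higher PSNR means the denoised image is closer to the reference, which is exactly the criterion the framework uses to pick which denoiser's output feeds the U-Net.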
Abstract: This paper focuses on the unsupervised detection of the Higgs boson particle using the most informative features and variables that characterize the "Higgs machine learning challenge 2014" data set. The unsupervised detection proceeds in four steps: (1) selection of the most informative features from the considered data; (2) determination of the number of clusters based on the elbow criterion, where the experiments showed that the optimal number of clusters for grouping the data in an unsupervised manner is 2; (3) a new approach that hybridizes hard and fuzzy clustering tuned with Ant Lion Optimization (ALO); and (4) comparison with existing metaheuristic optimizations such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). A multi-angle analysis based on cluster validation indices, the confusion matrix, efficiency and purity rates, average cost variation, computational time, and Sammon mapping visualization highlights the effectiveness of the improved Gustafson-Kessel algorithm optimized with ALO (ALOGK) and validates the proposed approach. Although the paper gives a complete clustering analysis, its novel contributions concern only Steps (1) and (3). The first contribution lies in the method used in Step (1) to select the most informative features and variables: the t-statistic technique ranks them, a feature mapping with a Self-Organizing Map (SOM) then identifies the level of correlation between them, and Particle Swarm Optimization (PSO), a metaheuristic optimization technique, reduces the data set dimension. The second contribution concerns the third step, in which each of the clustering algorithms K-means (KM), Global K-means (GlobalKM), Partitioning Around Medoids (PAM), Fuzzy C-means (FCM), Gustafson-Kessel (GK), and Gath-Geva (GG) is optimized and tuned with ALO.
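The t-statistic ranking in Step (1) can be sketched as follows: for each feature, compare its values across two labeled groups (e.g., signal vs. background events) with a Welch-style t-statistic, and rank features by its magnitude. The data below is toy data, not the Higgs challenge set, and the exact t-statistic variant in the paper may differ.

```python
import math

def t_statistic(group_a, group_b):
    """Welch-style t-statistic between two samples of one feature."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def rank_features(features_a, features_b):
    """Rank feature indices by |t|, most informative first."""
    scores = [abs(t_statistic(a, b)) for a, b in zip(features_a, features_b)]
    return sorted(range(len(scores)), key=lambda i: -scores[i])

# Toy data: feature 0 separates the two groups well; feature 1 barely does
sig = [[5.0, 5.1, 4.9, 5.2], [1.0, 2.0, 0.5, 1.5]]
bkg = [[1.0, 1.2, 0.8, 1.1], [1.1, 1.9, 0.7, 1.4]]
print(rank_features(sig, bkg))  # feature 0 ranked first
```

Features with large |t| discriminate the two classes well, so keeping only the top-ranked ones shrinks the input before the SOM correlation mapping and PSO dimensionality reduction.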
基金funded by Deanship of Scientific Research,King Saud University,through the Vice Deanship of Scientific Research.
Abstract: Determining the optimal location of facilities is critical in many fields, particularly in healthcare. This study applies a suitable location model for field hospitals during the novel coronavirus 2019 (COVID-19) pandemic. The model used is the most appropriate of the three location models most commonly applied to healthcare problems (the set covering model, the maximal covering model, and the P-median model). The proposed nonlinear binary constrained model is a slight modification of the maximal covering model with a set of nonlinear constraints, and it is used to determine the optimal locations of field hospitals for COVID-19 risk reduction. The mathematical model and solution method are used to deploy field hospitals in eight governorates in Upper Egypt. For this case study, a discrete binary gaining-sharing knowledge-based optimization (DBGSK) algorithm is proposed. DBGSK is based on how humans acquire and share knowledge throughout their lives and mainly depends on two binary stages, junior and senior. These two stages enable DBGSK to explore and exploit the search space efficiently and effectively, allowing it to solve problems in binary space.
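The classical maximal covering problem underlying the model above can be illustrated on a toy instance: pick p sites so that the total demand of covered nodes is maximal. This exhaustive sketch only works for tiny instances; the paper's nonlinear binary variant is instead solved with the DBGSK metaheuristic. The demand values and coverage sets below are hypothetical.

```python
from itertools import combinations

def max_cover(demand, covers, p):
    """Exhaustive maximal covering: choose p sites maximizing covered demand.

    demand: weight of each demand node; covers[j]: set of nodes site j covers.
    """
    best_sites, best_val = None, -1
    for sites in combinations(range(len(covers)), p):
        covered = set().union(*(covers[j] for j in sites))
        val = sum(demand[i] for i in covered)
        if val > best_val:
            best_sites, best_val = sites, val
    return best_sites, best_val

# 5 demand nodes, 4 candidate field-hospital sites (hypothetical data)
demand = [10, 20, 15, 5, 30]
covers = [{0, 1}, {1, 2}, {2, 3}, {3, 4}]
sites, covered = max_cover(demand, covers, p=2)
print(sites, covered)  # sites (1, 3) cover nodes {1,2,3,4}: demand 70
```

With realistic instance sizes the number of p-subsets explodes combinatorially, which is why binary metaheuristics such as DBGSK are used to search the 0/1 site-selection space instead.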