Abstract: Detecting sitting posture abnormalities in wheelchair users enables early identification of changes in their functional status. To date, this detection has relied on in-person observation by medical specialists. However, given the difficulty of carrying out continuous monitoring in person, the development of an intelligent anomaly detection system is proposed. Unlike previous work that relies on supervised techniques, this study uses unsupervised techniques because of the advantages they offer, including the absence of prior data labeling and the ability to detect anomalies not previously contemplated. The methodology is individualized and consists of two phases: characterizing the normal sitting pattern and identifying abnormal samples. Different unsupervised techniques are compared to determine which are most suitable for postural diagnosis. The results indicate, among other findings, that dimensionality reduction improves performance, that the normality characterization phase is necessary for enhancing the system's learning capability, and that an individualized model helps capture the particularities of the various pathologies present among subjects.
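As a minimal sketch of the two-phase, individualized idea described above (the sensor features, the PCA plus one-class SVM choice, and the thresholds are assumptions, since the abstract does not name the specific unsupervised techniques compared), the workflow might look like this in scikit-learn:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

# Hypothetical pressure-sensor readings: rows = samples, columns = seat sensors.
rng = np.random.default_rng(0)
normal_sitting = rng.normal(loc=1.0, scale=0.1, size=(500, 16))   # phase 1 data: normal pattern
new_samples = np.vstack([
    rng.normal(loc=1.0, scale=0.1, size=(5, 16)),                 # posture like the normal one
    rng.normal(loc=1.6, scale=0.1, size=(5, 16)),                 # leaning / abnormal posture
])

# Phase 1: characterize the user's normal sitting pattern (one model per subject).
detector = make_pipeline(StandardScaler(), PCA(n_components=4), OneClassSVM(nu=0.05))
detector.fit(normal_sitting)

# Phase 2: flag abnormal samples (-1 = anomaly, +1 = consistent with normal sitting).
print(detector.predict(new_samples))
```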
Abstract: Imbalanced multiclass datasets pose challenges for machine learning algorithms. They often contain minority classes that are important for accurate predictions, but when the data is sparsely distributed and overlaps with data points from other classes, it introduces noise. As a result, existing resampling methods may fail to preserve the original data patterns, further degrading data quality and reducing model performance. This paper introduces Neighbor Displacement-based Enhanced Synthetic Oversampling (NDESO), a hybrid method that integrates a data displacement strategy with a resampling technique to achieve data balance. It begins by computing the average distance of noisy data points to their neighbors and adjusting their positions toward the center before applying random oversampling. Extensive evaluations compare 14 alternatives on nine classifiers across synthetic and 20 real-world datasets with varying imbalance ratios. The evaluation was structured into two test groups. First, the effects of k-neighbor variations and distance metrics are evaluated, followed by a comparison of resampled data distributions against alternatives and the selection of the most suitable oversampling technique for data balancing. Second, the overall performance of the NDESO algorithm was assessed, focusing on G-mean and statistical significance. The results demonstrate that the method is robust to a wide range of parameter variations, achieves an average G-mean score of 0.90, which is among the highest, and attains the lowest mean rank of 2.88, indicating statistically significant improvements over existing approaches. This advantage underscores its potential for effectively handling data imbalance in practical scenarios.
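As a hedged sketch of the displacement-then-oversample idea described above (the noise test, neighborhood size, and displacement step below are assumptions, not the authors' exact NDESO algorithm):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def displace_then_oversample(X, y, minority_class, k=5, step=0.5, rng=None):
    """Pull 'noisy' minority points toward their class centroid, then randomly oversample."""
    if rng is None:
        rng = np.random.default_rng(0)
    X, y = X.copy(), y.copy()
    idx = np.where(y == minority_class)[0]
    centroid = X[idx].mean(axis=0)

    # A minority point is treated as noisy when most of its k neighbours belong
    # to other classes; such points are moved part of the way toward the centroid.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, neigh = nn.kneighbors(X[idx])
    for row, i in zip(neigh[:, 1:], idx):               # drop the point itself
        if np.mean(y[row] != minority_class) > 0.5:
            X[i] = X[i] + step * (centroid - X[i])

    # Random oversampling: duplicate minority samples up to the majority count.
    deficit = np.bincount(y).max() - len(idx)
    extra = rng.choice(idx, size=deficit, replace=True)
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

# Toy imbalanced data: 200 majority (class 0) vs 20 minority (class 1) points.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(1, 1, (20, 2))])
y = np.array([0] * 200 + [1] * 20)
Xb, yb = displace_then_oversample(X, y, minority_class=1)
print(np.bincount(yb))   # classes are now balanced
```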
Funding: Saudi Arabia, for funding this work through the Small Research Group Project under Grant Number RGP.1/316/45.
Abstract: The effective and timely diagnosis and treatment of ocular diseases are key to the rapid recovery of patients. Today, the widespread disease that needs attention in this context is cataract. Although deep learning has significantly advanced the analysis of ocular disease images, there is a need for a probabilistic model that generates distributions over potential outcomes and thus supports decision-making with uncertainty quantification. Therefore, this study implements a Bayesian Convolutional Neural Network (BCNN) model for predicting cataracts by assigning probability values to the predictions. Both a convolutional neural network (CNN) and a BCNN model are prepared; the proposed BCNN model is CNN-based, with reparameterization applied in the first and last layers of the CNN. The models are trained on a dataset of cataract images filtered from the ocular disease fundus images on Kaggle. The deep CNN model achieves an accuracy of 95%, while the BCNN model achieves an accuracy of 93.75% and additionally provides uncertainty estimates for cataract and normal eye conditions. Compared with other methods, the proposed work proves to be a promising solution for cataract prediction with uncertainty estimation.
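The abstract describes a BCNN built by reparameterizing layers of a CNN. Purely as an illustration (the paper does not name a framework, and the layer sizes, prior scale, and number of Monte Carlo passes below are assumptions), here is a minimal PyTorch sketch of a convolutional layer whose weights are sampled with the reparameterization trick, with repeated stochastic forward passes serving as an uncertainty estimate:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesConv2d(nn.Module):
    """Convolution whose weights are sampled via the reparameterization trick."""
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        shape = (out_ch, in_ch, kernel_size, kernel_size)
        self.weight_mu = nn.Parameter(torch.zeros(shape))
        self.weight_rho = nn.Parameter(torch.full(shape, -5.0))  # softplus(-5) gives a small std
        nn.init.kaiming_normal_(self.weight_mu)
        self.padding = padding

    def forward(self, x):
        sigma = F.softplus(self.weight_rho)          # ensure a positive standard deviation
        eps = torch.randn_like(sigma)                # noise for the reparameterization trick
        weight = self.weight_mu + sigma * eps        # sampled weights
        return F.conv2d(x, weight, padding=self.padding)

# Multiple stochastic forward passes give a predictive distribution whose spread
# can serve as an uncertainty estimate for each image.
model = nn.Sequential(
    BayesConv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2),
)
x = torch.randn(4, 3, 224, 224)                      # a dummy batch standing in for fundus images
probs = torch.stack([model(x).softmax(dim=1) for _ in range(20)])
print(probs.mean(0), probs.std(0))                   # mean prediction and its uncertainty
```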
Abstract: In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model (LLM) LLaMA and the BERT NLP model, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expression in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
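As a rough sketch of the fine-tuning step described above (BERT side only; the LLaMA, domain-adaptation, and few-shot setups are not shown, and the toy reviews, binary label scheme, checkpoint name, and hyperparameters are placeholders), a Hugging Face Transformers run might look like this:

```python
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Toy data standing in for an airline-review corpus: 1 = positive, 0 = negative.
texts = ["The crew was wonderful and the flight was on time.",
         "Lost my luggage and nobody helped."]
labels = [1, 0]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

class ReviewDataset(Dataset):
    """Wraps tokenized reviews and labels for the Trainer."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-airline", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ReviewDataset(texts, labels),
)
trainer.train()
```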
Abstract: In the current paper, we present a study of the spatial distribution of luminous blue variables (LBVs) and various LBV candidates (cLBVs) with respect to OB associations in the galaxy M33. The identification of blue star groups was based on the LGGS data and was carried out by two clustering algorithms with initial parameters determined during simulations of random stellar fields. We have found that the distribution of distances to the nearest OB association obtained for the LBV/cLBV sample is close to that for massive stars with Minit > 20 M⊙ and Wolf-Rayet stars. This result is in good agreement with the standard assumption that LBVs represent an intermediate stage in the evolution of the most massive stars. However, some objects from the LBV/cLBV sample, particularly Fe II-emission stars, demonstrated severe isolation compared to other massive stars, which, together with certain features of their spectra, implicitly indicates that the nature of these objects and other LBVs/cLBVs may differ radically.
Abstract: BACKGROUND: Kidney and liver transplantation are two sub-specialized medical disciplines, with transplant professionals spending decades in training. While artificial intelligence-based (AI-based) tools could potentially assist in everyday clinical practice, comparative assessment of their effectiveness in clinical decision-making remains limited. AIM: To compare the use of ChatGPT and GPT-4 as potential tools in AI-assisted clinical practice in these challenging disciplines. METHODS: In total, 400 different questions tested the knowledge and decision-making capacity of ChatGPT and GPT-4 in various renal and liver transplantation concepts. Specifically, 294 multiple-choice questions were derived from open-access sources, 63 questions from published open-access case reports, and 43 from unpublished cases of patients treated at our department. The evaluation covered a plethora of topics, including clinical predictors, treatment options, and diagnostic criteria, among others. RESULTS: ChatGPT correctly answered 50.3% of the 294 multiple-choice questions, while GPT-4 demonstrated a higher performance, answering 70.7% of the questions (P < 0.001). Regarding the 63 questions from published cases, ChatGPT achieved an agreement rate of 50.79% and partial agreement of 17.46%, while GPT-4 demonstrated an agreement rate of 80.95% and partial agreement of 9.52% (P = 0.01). Regarding the 43 questions from unpublished cases, ChatGPT demonstrated an agreement rate of 53.49% and partial agreement of 23.26%, while GPT-4 demonstrated an agreement rate of 72.09% and partial agreement of 6.98% (P = 0.004). When factoring by the nature of the task for all cases, GPT-4 demonstrated outstanding performance, providing a differential diagnosis that included the final diagnosis in 90% of the cases (P = 0.008) and successfully predicting the prognosis of the patient in 100% of related questions (P < 0.001). CONCLUSION: GPT-4 consistently provided more accurate and reliable clinical recommendations, with higher percentages of full agreement in both renal and liver transplantation, compared with ChatGPT. Our findings support the potential utility of AI models like ChatGPT and GPT-4 in AI-assisted clinical practice as sources of accurate, individualized medical information and as facilitators of decision-making. The progression and refinement of such AI-based tools could reshape the future of clinical practice, making their early adoption and adaptation by physicians a necessity.
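The abstract reports that GPT-4's 70.7% versus ChatGPT's 50.3% on the 294 multiple-choice questions is significant at P < 0.001. The paper's exact statistical procedure is not stated here, so purely as an illustration, a two-by-two contingency test on the implied counts reproduces a P value far below 0.001:

```python
from scipy.stats import chi2_contingency

n = 294
correct_chatgpt = round(0.503 * n)   # ~148 of 294 questions answered correctly
correct_gpt4 = round(0.707 * n)      # ~208 of 294 questions answered correctly

table = [
    [correct_chatgpt, n - correct_chatgpt],
    [correct_gpt4, n - correct_gpt4],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4g}")  # p is well below 0.001 for these counts
```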
Funding: The Italian Space Agency (Agenzia Spaziale Italiana, ASI), in the framework of the Research Day "Giornate della Ricerca Spaziale" initiative, through the contract ASI N. 2023-4-U.0.
Abstract: All-inorganic perovskites based on cesium-lead-bromine (Cs-Pb-Br) have been a prominent research focus in optoelectronics in recent years. The optimisation and tunability of their macroscopic properties exploit their conformational flexibility, which gives rise to various crystal structures. Varying synthesis parameters can yield distinct crystal structures from Cs, Pb, and Br precursors, and manually exploring the relationship between these synthesis parameters and the resulting crystal structure is both labour-intensive and time-consuming. Machine learning (ML) can rapidly uncover insights and drive discoveries in chemical synthesis with the support of data, significantly reducing both the cost and the development cycle of materials. Here, we gathered synthesis parameters from the published literature (220 synthesis runs) and implemented eight distinct ML models, including eXtreme Gradient Boosting (XGB), Decision Tree (DT), Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes (NB), Logistic Regression (LR), Gradient Boosting (GB), and K-Nearest (KN), to classify and predict Cs-Pb-Br crystal structures from given synthesis parameters. Validation accuracy, precision, F1 score, recall, and average area under the curve (AUC) are employed to evaluate these ML models. The XGB model exhibited the best performance, achieving a validation accuracy of 0.841. The trained XGB model was subsequently utilised to predict the structure for 10 experimental runs using a randomised set of parameters, achieving a testing accuracy of 0.8. The results indicate that the Cs/Pb molar ratio, reaction time, and the concentration of organic compounds (ligands) play crucial roles in synthesising the various crystal structures of Cs-Pb-Br. This study demonstrates a significant decrease in the effort required for experimental procedures and builds a foundational basis for predicting crystal structures from synthesis parameters.
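As an illustration of the model-comparison workflow described above (the feature columns and target classes below are placeholders; XGB, available through the xgboost package's XGBClassifier, follows the same fit/predict interface as the scikit-learn models shown), a sketch could look like this:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical feature matrix, e.g. Cs/Pb molar ratio, reaction time, temperature,
# and ligand concentration; y encodes the crystal-structure class of each run.
rng = np.random.default_rng(0)
X = rng.random((220, 4))
y = rng.integers(0, 3, 220)

models = {
    "DT": DecisionTreeClassifier(),
    "SVM": SVC(),
    "RF": RandomForestClassifier(),
    "NB": GaussianNB(),
    "LR": LogisticRegression(max_iter=1000),
    "GB": GradientBoostingClassifier(),
    "KN": KNeighborsClassifier(),
}
for name, clf in models.items():
    score = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```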
Funding: This work was partially supported by the National Key Research and Development Program of China under Grant No. 2018AAA0100400, the Natural Science Foundation of Shandong Province under Grants Nos. ZR2020MF131 and ZR2021ZD19, and the Science and Technology Program of Qingdao under Grant No. 21-1-4-ny-19-nsh.
Abstract: UAV marine monitoring plays an essential role in marine environmental protection because of its flexibility, convenience, low cost, and easy maintenance. In marine environmental monitoring, the similarity between objects such as oil spills and the sea surface, or Spartina alterniflora and algae, is high, and general segmentation algorithms perform poorly, which brings new challenges to the segmentation of UAV marine images. Panoptic segmentation performs object detection and semantic segmentation at the same time, which can address the polymorphism of objects in UAV ocean images. Currently, there are few studies on UAV marine image recognition with panoptic segmentation, and there are no publicly available panoptic segmentation datasets for UAV images. In this work, we collect and annotate UAV images to form a panoptic segmentation UAV dataset named UAV-OUC-SEG and propose a panoptic segmentation method named PanopticUAV. First, to deal with the large intraclass variability in scale, deformable convolution and the CBAM attention mechanism are employed in the backbone to obtain more accurate features. Second, because of the complexity and diversity of marine images, boundary masks extracted from the ground truth with the Laplacian operator are merged into the feature maps to improve boundary segmentation precision. Experiments demonstrate the advantages of PanopticUAV over most other advanced approaches on the UAV-OUC-SEG dataset.
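As a minimal sketch of the boundary-mask idea mentioned above (how the masks are filtered and fused with the feature maps in PanopticUAV is not specified here, so only the Laplacian extraction step is shown):

```python
import numpy as np
from scipy.ndimage import laplace

def boundary_mask(label_map: np.ndarray) -> np.ndarray:
    """Return a binary mask that is 1 wherever class labels change."""
    # The Laplacian of a piecewise-constant label map is nonzero only at
    # class boundaries, so thresholding it yields a boundary mask.
    response = laplace(label_map.astype(np.float32))
    return (np.abs(response) > 0).astype(np.uint8)

# Toy ground truth: a 6x6 map with a square object of class 1 on background 0.
gt = np.zeros((6, 6), dtype=np.int32)
gt[2:5, 2:5] = 1
print(boundary_mask(gt))
```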
Abstract: Airplanes are a social necessity for the movement of people, goods, and more. They are generally safe modes of transportation; however, incidents and accidents occasionally occur. To prevent aviation accidents, it is necessary to develop a machine-learning model that detects and predicts abnormal commercial flights using automatic dependent surveillance-broadcast data. This study combined data-quality detection, anomaly detection, and abnormality-classification-model development. The research methodology involved the following stages: problem statement, data selection and labeling, prediction-model development, deployment, and testing. The data labeling process was based on the rules framed by the International Civil Aviation Organization for commercial, jet-engine flights and validated by expert commercial pilots. The results showed that the best prediction model, quadratic discriminant analysis, was 93% accurate, indicating a "good fit". Moreover, the model's area-under-the-curve results for abnormal and normal detection were 0.97 and 0.96, respectively, further confirming its "good fit".
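As a hedged sketch of the final classification step (the flight features below are synthetic placeholders, not the ADS-B-derived quantities used in the study), quadratic discriminant analysis with accuracy and AUC reporting can be set up in scikit-learn as follows:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical flight features (e.g. altitude rate, ground speed, heading change)
# with labels 1 = abnormal and 0 = normal flight segment.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
qda = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, qda.predict(X_te)))
print("ROC AUC:", roc_auc_score(y_te, qda.predict_proba(X_te)[:, 1]))
```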
Funding: This research was funded by Prince Sattam bin Abdulaziz University (Project Number PSAU/2023/01/25387).
Abstract: The research aims to improve the performance of image recognition methods based on a description in the form of a set of keypoint descriptors. The main focus is on increasing the speed of establishing the relevance of object and etalon descriptions while maintaining the required level of classification efficiency. The class to be recognized is represented by an infinite set of images obtained from the etalon by applying arbitrary geometric transformations. It is proposed to reduce the descriptions in the etalon database by selecting the most significant descriptor components according to an information-content criterion. The informativeness of an etalon descriptor is estimated by the difference of the closest distances to its own and to other descriptions. The developed method determines the relevance of the full description of the recognized object to the reduced descriptions of the etalons. Several practical models of the classifier, with different options for establishing the correspondence between object descriptors and etalons, are considered. The results of experimental modeling of the proposed methods on a database of museum jewelry images are presented. The test sample is formed as a set of images from the etalon database and from outside the database, with geometric transformations of scale and rotation applied in the field of view. The practical problem of determining the threshold for the number of votes on which a classification decision is based has been researched. Modeling has revealed the practical possibility of a tenfold reduction of the descriptions with full preservation of classification accuracy; reducing the descriptions by twenty times leads to slightly decreased accuracy. The speed of the analysis increases in proportion to the degree of reduction. The use of reduction by the informativeness criterion confirmed the possibility of obtaining the most significant subset of features for classification, which guarantees a decent level of accuracy.
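As one possible reading of the informativeness criterion above (scoring each etalon descriptor by the difference between its nearest distance to other descriptions and its nearest distance to its own description, then keeping the top fraction), the following numpy sketch is illustrative only; the distance metric, the exact definition of "own" distances, and the retained fraction are assumptions:

```python
import numpy as np

def reduce_etalon(descriptors, own_rest, others, keep_fraction=0.1):
    """Keep the most informative descriptors of one etalon.

    Informativeness is scored as the nearest distance to descriptors of other
    etalons minus the nearest distance to the etalon's own remaining descriptors
    (larger = more distinctive).
    """
    d_others = np.linalg.norm(descriptors[:, None, :] - others[None, :, :], axis=2).min(axis=1)
    d_own = np.linalg.norm(descriptors[:, None, :] - own_rest[None, :, :], axis=2).min(axis=1)
    score = d_others - d_own
    k = max(1, int(keep_fraction * len(descriptors)))
    return descriptors[np.argsort(score)[::-1][:k]]

# Toy 8-dimensional descriptors standing in for real keypoint descriptors.
rng = np.random.default_rng(1)
etalon = rng.normal(size=(50, 8))
rest_of_etalon = rng.normal(size=(200, 8))
other_etalons = rng.normal(size=(500, 8)) + 2.0
print(reduce_etalon(etalon, rest_of_etalon, other_etalons).shape)  # (5, 8)
```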
Funding: The National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2020R1A2C1A01011131), and the Energy Cloud R&D Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT (2019M3F2A1073164).
Abstract: Base isolators used in buildings provide both good acceleration reduction and structural vibration control. Base isolators may lose their damping capacity over time due to environmental or dynamic effects. This deterioration requires the determination of maintenance and repair needs and is important for long-term isolator life. In this study, an artificial intelligence prediction model has been developed to determine the damage and maintenance-repair requirements of isolators resulting from environmental effects and dynamic factors over time. With the developed model, the damping capacity of the isolator is estimated and compared with the capacity of the isolator as originally installed, so that any decrease in the damping property can be identified. For this purpose, a data set was created by collecting the responses of base-isolated single-degree-of-freedom (SDOF) structures with different stiffnesses, damping ratios, and natural periods under far-fault earthquakes. The data are divided into five damping classes varying between 10% and 50%. A machine learning model was trained on these damping classes using the structures' responses to random seismic vibrations. From the isolator behavior under randomly selected earthquakes, the recorded ground motion and the structural acceleration were examined, and the decrease in damping capacity was estimated on a class basis. The performance loss of the isolators, separated according to their damping properties, was determined, and the reductions to be taken into account were established by class. In the developed prediction model, various supervised machine learning classification algorithms were compared, and the algorithm providing the highest precision was selected. The results show that the damping of the isolator is predicted successfully by the machine learning method at a level exceeding 96%, making it an effective approach for deciding whether there has been a decrease in damping capacity.
Funding: The Researchers Supporting Project Number (RSP2023R 102), King Saud University, Riyadh, Saudi Arabia.
Abstract: Recently, nano-systems based on molecular communications via diffusion (MCvD) have been implemented in a variety of nanomedical applications, most notably in targeted drug delivery system (TDDS) scenarios. Furthermore, because MCvD is unreliable and subject to molecular noise and inter-symbol interference (ISI), cooperative nano-relays can provide the reliability needed for drug delivery to targeted diseased cells, especially when the separation distance between the nano-transmitter and nano-receiver increases. In this work, we propose an approach for optimizing the performance of the nano-system using cooperative molecular communications with a nano-relay scheme, while accounting for blood flow effects in terms of drift velocity. The fractions of the molecular drug that should be allocated to the nano-transmitter and the positioning of the nano-relay are computed via a collaborative optimization problem solved by the Modified Central Force Optimization (MCFO) algorithm. Unlike previous work, the probability of bit error is expressed in a closed-form expression and used as an objective function to determine the optimal velocity of the drug molecules and the detection threshold at the nano-receiver. The simulation results show that the probability of bit error can be dramatically reduced by optimizing the drift velocity, detection threshold, location of the nano-relay in the proposed nano-system, and the molecular drug budget.
Funding: Co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH-CREATE-INNOVATE (project code: T1EDK-04429).
Abstract: A philosophy for the design of novel, lightweight, multi-layered armor, referred to as the Composite Armor Philosophy (CAP), which can adapt to the passive protection of light-, medium-, and heavy-armored vehicles, is presented in this study. CAP can serve as a guiding principle to assist designers in comprehending the distinct roles fulfilled by each component. The CAP proposal comprises four functional layers, organized in a suggested hierarchy of materials. Particularly notable is the inclusion of a ceramic-composite principle, representing an advanced and innovative solution in the field of armor design. This paper showcases real-world defense industry applications, offering case studies that demonstrate the effectiveness of this advanced approach. CAP represents a significant milestone in the history of passive protection, marking an evolutionary leap in the field. This philosophical approach provides designers with a powerful toolset with which to enhance the protection capabilities of military vehicles, making them more resilient and better equipped to meet the challenges of modern warfare.
Abstract: Nowadays, inspired by the great success of Transformers in natural language processing, many applications of Vision Transformers (ViTs) have been investigated in the field of medical image analysis, including breast ultrasound (BUS) image segmentation and classification. In this paper, we propose an efficient multi-task framework to segment and classify tumors in BUS images using a hybrid convolutional neural network (CNN)-ViT architecture and a Multi-Layer Perceptron (MLP)-Mixer. The proposed method uses a two-encoder architecture with an EfficientNetV2 backbone and an adapted ViT encoder to extract tumor regions in BUS images. The self-attention (SA) mechanism in the Transformer encoder captures a wide range of high-level and complex features, while the EfficientNetV2 encoder preserves local information in the image. To fuse the extracted features, a Channel Attention Fusion (CAF) module is introduced. The CAF module selectively emphasizes important features from both encoders, improving the integration of high-level and local information. The resulting feature maps are reconstructed into segmentation maps using a decoder. Our method then classifies the segmented tumor regions into benign and malignant using a simple and efficient classifier based on the MLP-Mixer, applied for the first time, to the best of our knowledge, to the task of lesion classification in BUS images. Experimental results show that our framework outperforms recent works, producing 83.42% in terms of Dice coefficient for segmentation and 86% accuracy for classification.
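As a generic illustration of the MLP-Mixer building block mentioned above (the token and channel dimensions, depth, and pooling head are assumptions, not the authors' classifier), a PyTorch sketch:

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """One MLP-Mixer block: a token-mixing MLP followed by a channel-mixing MLP."""
    def __init__(self, num_tokens, dim, token_hidden=64, channel_hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, token_hidden), nn.GELU(), nn.Linear(token_hidden, num_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(), nn.Linear(channel_hidden, dim))

    def forward(self, x):                        # x: (batch, tokens, dim)
        y = self.norm1(x).transpose(1, 2)        # mix information across tokens
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))  # mix information across channels
        return x

# A tiny benign/malignant classifier head over 49 patch tokens of width 128.
tokens, dim = 49, 128
classifier = nn.Sequential(MixerBlock(tokens, dim), nn.LayerNorm(dim))
feats = torch.randn(2, tokens, dim)              # features pooled from the segmented region
logits = nn.Linear(dim, 2)(classifier(feats).mean(dim=1))
print(logits.shape)                              # torch.Size([2, 2])
```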
Abstract: The value of system assimilation is to improve working relationships between tutors and learners while increasing workflow efficiency among tertiary institutions at low operational cost. E-skills can be described as electronic education development that assists ICT professionals in reaching their future career goals and aims to help users boost their ICT skills. In an expanding society, this is a crucial issue to take into account. Researchers have turned their attention to this topic because of its significance and contribution to the empowerment of graduates in digital education. Many scholars have proposed methods for integrating e-skills into society with impressive results, but the rising rate of graduate unemployment in South Africa is gradually becoming a major concern. As part of this study's effort to address this issue, a model based on Activity Theory (AT) and e-skills will be developed in our tertiary institution to equip graduates with skills that increase their employability and provide more individualized work opportunities. Using the Statistical Package for the Social Sciences (SPSS) and Cronbach's Alpha for validity and reliability testing, the study will create an experimental evaluation of the approach taken to measure e-skills in tertiary institutions to empower graduates in South Africa. The study established that system development and e-skills models for tertiary institutions are growing gradually, especially in South African institutions, empowering graduates with profitable employability and with experience that improves work operations in industry. In conclusion, system development and e-skills are demanding but important for empowering graduate employability and determining competency in the professional workforce.
Funding: Co-financed by the European Union Horizon Europe Research and Innovation Programme under Grant Agreements No. 101058174 and No. 101091895.
Abstract: Since its inception in 2009, Bitcoin has become and currently remains the most successful and widely used cryptocurrency. It introduced blockchain technology, which allows transactions that transfer funds between users to take place online in an immutable manner. No real-world identities are needed or stored in the blockchain; at the same time, all transactions are publicly available and auditable, making Bitcoin a pseudo-anonymous ledger of transactions. The volume of transactions broadcast on a daily basis is considerably large. We propose a set of features that can be extracted from transaction data and apply a data processing pipeline that ultimately clusters transactions with a k-means clustering algorithm according to their properties. Based on these properties, we then characterize the clusters and the transactions they include. Our work differs from previous studies mainly in that it applies an unsupervised learning method to cluster transactions instead of addresses. Using the novel features we introduce, our work assigns transactions to multiple clusters, whereas previous studies attempt only binary classification. The results indicate that most transactions fall into a cluster that can be described as common user transactions. Other clusters include transactions made by online exchanges and lending services, those related to mining activities, and smaller clusters, one of which contains possibly illicit or fraudulent transactions. We evaluated our results against an online database of addresses that belong to known actors, such as online exchanges, and found that our results generally agree with it, which enhances the validity of our methods.
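As a minimal sketch of the clustering stage described above (the four features below are hypothetical stand-ins for the paper's transaction features, and the number of clusters is an assumption), the scaling plus k-means step can be written as:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-transaction features, e.g. number of inputs, number of outputs,
# total value moved, and fee paid.
rng = np.random.default_rng(7)
X = np.column_stack([
    rng.integers(1, 20, 5000),            # inputs
    rng.integers(1, 20, 5000),            # outputs
    rng.lognormal(0.0, 2.0, 5000),        # value (BTC)
    rng.lognormal(-9.0, 1.0, 5000),       # fee (BTC)
])

X_scaled = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_scaled)

# Cluster sizes and per-cluster feature means help characterize each cluster,
# e.g. a large "common user" cluster versus small high-fan-out exchange clusters.
for c in range(kmeans.n_clusters):
    members = X[kmeans.labels_ == c]
    print(c, len(members), members.mean(axis=0).round(3))
```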
Abstract: Correction: Vis. Comput. Ind. Biomed. Art 7, 2 (2024), https://doi.org/10.1186/s42492-024-00155-w. Following publication of the original article [1], the authors reported that the wrong version of the abstract and keywords had been mistakenly inserted into the article.
Abstract: Novel inorganic pigments based on manganese lazulite compositions were obtained by aqueous solution precipitation and subsequent heating. The obtained samples were evaluated by X-ray diffraction (XRD), infrared spectroscopy, UV-visible reflectance spectra, and L*a*b* values. In addition, changes after exposure to acid and base solutions and the coloring power relative to titanium dioxide or zinc oxide were evaluated. In the XRD patterns, peaks attributable to lazulite were observed, although in a mixed phase state. The samples were generally light red, becoming orange at 500 ℃ and purple at 700 ℃. The samples were sensitive to acid and base solutions, which darkened their color. The coloring power of the samples in this study was close to that of zinc oxide.
Abstract: Small and Medium-sized Enterprises (SMEs) are considered the backbone of the global economy, yet they often face cyber threats that endanger their financial stability and operational continuity. This work offers a proactive cybersecurity approach to safeguard SMEs against these threats. To mitigate the associated risks, we propose a comprehensive framework of practical and scalable cybersecurity measures and protocols designed specifically for SMEs. These measures encompass a spectrum of solutions, from technological fortifications to employee training initiatives and regulatory compliance strategies, in an effort to cultivate resilience and awareness among SMEs. Additionally, we introduce a specially designed Java-based questionnaire software tool that provides an initial framework for essential cybersecurity measures and their evaluation in SMEs. This tool covers crucial topics such as social engineering and phishing attempts, anti-malware and ransomware defense mechanisms, secure data management and backup strategies, and methods for preventing insider threats. By incorporating globally recognized frameworks and standards such as ISO/IEC 27001 and the NIST guidelines, the questionnaire offers a roadmap for establishing and enhancing cybersecurity measures.
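As a rough illustration of how such a self-assessment questionnaire might aggregate answers into a readiness score, the sketch below (written in Python for consistency with the other examples here; the authors' tool itself is Java-based) uses hypothetical topics and weights that are not taken from the paper.

```python
# Hypothetical self-assessment topics and weights (not the authors' actual questionnaire)
TOPICS = {
    "phishing_awareness": 0.3,
    "antimalware_and_ransomware": 0.3,
    "backup_and_data_management": 0.2,
    "insider_threat_controls": 0.2,
}

def readiness_score(answers):
    """Aggregate per-topic answers (0-5 scale) into a weighted score in [0, 100]."""
    total = sum(TOPICS[topic] * (score / 5) for topic, score in answers.items())
    return round(100 * total, 1)

print(readiness_score({
    "phishing_awareness": 4,
    "antimalware_and_ransomware": 3,
    "backup_and_data_management": 5,
    "insider_threat_controls": 2,
}))
```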
Funding: Partially funded by the Programa Nacional de Becas y Crédito Educativo of Peru and the Universitat de València, Spain.
Abstract: Self-Explaining Autonomous Systems (SEAS) have emerged as a strategic frontier within Artificial Intelligence (AI), responding to growing demands for transparency and interpretability in autonomous decision-making. This study presents a comprehensive bibliometric analysis of SEAS research published between 2020 and February 2025, drawing on 1380 documents indexed in Scopus. The analysis applies co-citation mapping, keyword co-occurrence, and author collaboration networks using VOSviewer, MASHA, and Python to examine scientific production, intellectual structure, and global collaboration patterns. The results indicate a sustained annual growth rate of 41.38%, with an h-index of 57 and an average of 21.97 citations per document. A normalized citation rate was computed to address temporal bias, enabling balanced evaluation across publication cohorts. Thematic analysis reveals four consolidated research fronts: interpretability in machine learning, explainability in deep neural networks, transparency in generative models, and optimization strategies in autonomous control. Author co-citation analysis identifies four distinct research communities, and keyword evolution shows growing interdisciplinary links with medicine, cybersecurity, and industrial automation. At the geographical level, the United States leads in scientific output and citation impact, while countries such as India and China show high productivity with varied influence. However, international collaboration remains limited at 7.39%, reflecting a fragmented research landscape. As discussed in this study, SEAS research is expanding rapidly yet remains epistemologically dispersed, with uneven integration of ethical and human-centered perspectives. This work offers a structured, data-driven perspective on SEAS development, highlights key contributors and thematic trends, and outlines critical directions for advancing responsible and transparent autonomous systems.
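The normalized citation rate mentioned above corrects for the fact that older papers have had more time to accumulate citations. The exact normalization used in the study is not given here, so the sketch below shows one common variant, dividing each document's citations by the mean of its publication-year cohort; the figures are invented.

```python
import pandas as pd

# Hypothetical bibliometric records: publication year and raw citation counts
docs = pd.DataFrame({
    "year":      [2020, 2020, 2021, 2022, 2023, 2023, 2024],
    "citations": [120,   40,   55,   18,    9,    3,    1],
})

# One simple normalization: divide each document's citations by the mean citations
# of its publication-year cohort, so older papers are not automatically favoured.
docs["norm_citations"] = docs["citations"] / docs.groupby("year")["citations"].transform("mean")
print(docs)
```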
Funding: FEDER/Ministry of Science and Innovation-State Research Agency/Project PID2020-112667RB-I00, funded by MCIN/AEI/10.13039/501100011033; the Basque Government, IT1726-22; the predoctoral contracts PRE_2022_2_0022 and EP_2023_1_0015 of the Basque Government; partially supported by the Italian MIUR, PRIN 2020 Project "COMMON-WEARS", N. 2020HCWWLP, CUP: H23C22000230005; and co-funding from Next Generation EU, in the context of the National Recovery and Resilience Plan, through the Italian MUR, PRIN 2022 Project "COCOWEARS" (A framework for COntinuum COmputing WEARable Systems), N. 2022T2XNJE, CUP: H53D23003640006.
Abstract: Detecting abnormalities in the sitting posture of wheelchair users enables early identification of changes in their functional status. To date, this detection has relied on in-person observation by medical specialists. However, given the difficulty health specialists face in carrying out continuous monitoring, the development of an intelligent anomaly detection system is proposed. Unlike previous work that relies on supervised techniques, this work proposes using unsupervised techniques because of the advantages they offer, including the absence of prior data labeling and the ability to detect anomalies not previously contemplated. An individualized methodology consisting of two phases is developed: characterizing the normal sitting pattern and identifying abnormal samples. Different unsupervised techniques are compared to determine which are most suitable for postural diagnosis. Among other findings, it can be concluded that the use of dimensionality reduction techniques leads to improved results, and that the normality characterization phase is necessary for enhancing the system's learning capabilities. Additionally, employing an individualized model helps capture the particularities of the various pathologies present among subjects.
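To illustrate the two-phase idea (characterize normal sitting, then flag abnormal samples), the sketch below combines dimensionality reduction with an unsupervised novelty detector; the sensor layout, PCA dimensionality, and choice of detector are all assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)

# Hypothetical pressure-sensor readings: rows = samples, columns = sensor cells
normal_sitting = rng.normal(loc=0.0, scale=1.0, size=(500, 32))   # normal posture only
new_samples    = rng.normal(loc=0.0, scale=1.0, size=(20, 32))
new_samples[:5] += 4.0                                            # simulated abnormal postures

# Phase 1: characterize the normal sitting pattern (fit the reduction on normal data only)
pca = PCA(n_components=5).fit(normal_sitting)
normal_low = pca.transform(normal_sitting)

# Phase 2: flag abnormal samples with an unsupervised detector trained on normal data
detector = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(normal_low)
labels = detector.predict(pca.transform(new_samples))   # +1 = normal, -1 = abnormal
print("abnormal samples:", int((labels == -1).sum()))
```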
Abstract: Imbalanced multiclass datasets pose challenges for machine learning algorithms. They often contain minority classes that are important for accurate predictions, but when the data are sparsely distributed and overlap with data points from other classes, noise is introduced. As a result, existing resampling methods may fail to preserve the original data patterns, further degrading data quality and reducing model performance. This paper introduces Neighbor Displacement-based Enhanced Synthetic Oversampling (NDESO), a hybrid method that integrates a data displacement strategy with a resampling technique to achieve data balance. It begins by computing the average distance of noisy data points to their neighbors and adjusting their positions toward the center before applying random oversampling. Extensive evaluations compare 14 alternatives on nine classifiers across synthetic and 20 real-world datasets with varying imbalance ratios. The evaluation was structured into two test groups. First, the effects of k-neighbor variations and distance metrics were evaluated, the resampled data distributions were compared against alternatives, and the most suitable oversampling technique for data balancing was determined. Second, the overall performance of the NDESO algorithm was assessed, focusing on G-mean and statistical significance. The results demonstrate that the method is robust to a wide range of variations in these parameters, and its overall performance achieves an average G-mean score of 0.90, among the highest, together with the lowest mean rank of 2.88, indicating statistically significant improvements over existing approaches. This advantage underscores its potential for effectively handling data imbalance in practical scenarios.
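As a hedged sketch of the displacement-then-oversample idea described above (not the authors' NDESO implementation), the function below pulls minority points with an unusually large average neighbor distance toward their class centroid and then randomly oversamples the adjusted class; the noise threshold and pull factor are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def displace_and_oversample(X_min, target_size, k=5, seed=0):
    """Sketch: pull 'noisy' minority points toward the class centroid, then oversample.

    X_min: (n_samples, n_features) array of one minority class. The noise threshold
    (mean + 1 std of the average neighbor distance) and the 0.5 pull factor are
    illustrative assumptions, not the NDESO parameters.
    """
    rng = np.random.default_rng(seed)

    # Average distance of each point to its k nearest neighbors within the class
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    dists, _ = nn.kneighbors(X_min)
    avg_dist = dists[:, 1:].mean(axis=1)          # drop the zero self-distance

    # Treat points far from their neighbors as noisy and pull them toward the centroid
    noisy = avg_dist > avg_dist.mean() + avg_dist.std()
    centroid = X_min.mean(axis=0)
    X_adj = X_min.copy()
    X_adj[noisy] += 0.5 * (centroid - X_adj[noisy])

    # Random oversampling: duplicate adjusted points until the class reaches target_size
    extra = rng.choice(len(X_adj), size=target_size - len(X_adj), replace=True)
    return np.vstack([X_adj, X_adj[extra]])

# Balance a hypothetical 30-sample minority class up to 100 samples
X_minority = np.random.default_rng(1).normal(size=(30, 4))
print(displace_and_oversample(X_minority, target_size=100).shape)  # (100, 4)
```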