Customer churn is the rate at which customers discontinue doing business with a company over a given time period.It is an essential measure for businesses to monitor high churn rates,as they often indicate underlying ...Customer churn is the rate at which customers discontinue doing business with a company over a given time period.It is an essential measure for businesses to monitor high churn rates,as they often indicate underlying issues with services,products,or customer experience,resulting in considerable income loss.Prediction of customer churn is a crucial task aimed at retaining customers and maintaining revenue growth.Traditional machine learning(ML)models often struggle to capture complex temporal dependencies in client behavior data.To address this,an optimized deep learning(DL)approach using a Regularized Bidirectional Long Short-Term Memory(RBiLSTM)model is proposed to mitigate overfitting and improve generalization error.The model integrates dropout,L2-regularization,and early stopping to enhance predictive accuracy while preventing over-reliance on specific patterns.Moreover,this study investigates the effect of optimization techniques on boosting the training efficiency of the developed model.Experimental results on a recent public customer churn dataset demonstrate that the trained model outperforms the traditional ML models and some other DL models,such as Long Short-Term Memory(LSTM)and Deep Neural Network(DNN),in churn prediction performance and stability.The proposed approach achieves 96.1%accuracy,compared with LSTM and DNN,which attain 94.5%and 94.1%accuracy,respectively.These results confirm that the proposed approach can be used as a valuable tool for businesses to identify at-risk consumers proactively and implement targeted retention strategies.展开更多
Automated grading of dandruff severity is a clinically significant but challenging task due to the inherent ordinal nature of severity levels and the high prevalence of label noise from subjective expert annotations.S...Automated grading of dandruff severity is a clinically significant but challenging task due to the inherent ordinal nature of severity levels and the high prevalence of label noise from subjective expert annotations.Standard classification methods fail to address these dual challenges,limiting their real-world performance.In this paper,a novel,three-phase training framework is proposed that learns a robust ordinal classifier directly from noisy labels.The approach synergistically combines a rank-based ordinal regression backbone with a cooperative,semi-supervised learning strategy to dynamically partition the data into clean and noisy subsets.A hybrid training objective is then employed,applying a supervised ordinal loss to the clean set.The noisy set is simultaneously trained using a dualobjective that combines a semi-supervised ordinal loss with a parallel,label-agnostic contrastive loss.This design allows themodel to learn fromthe entire noisy subset while using contrastive learning to mitigate the risk of error propagation frompotentially corrupt supervision.Extensive experiments on a new,large-scale,multi-site clinical dataset validate our approach.Themethod achieves state-of-the-art performance with 80.71%accuracy and a 76.86%F1-score,significantly outperforming existing approaches,including a 2.26%improvement over the strongest baseline method.This work provides not only a robust solution for a practical medical imaging problem but also a generalizable framework for other tasks plagued by noisy ordinal labels.展开更多
As urban landscapes evolve and vehicular volumes soar,traditional traffic monitoring systems struggle to scale,often failing under the complexities of dense,dynamic,and occluded environments.This paper introduces a no...As urban landscapes evolve and vehicular volumes soar,traditional traffic monitoring systems struggle to scale,often failing under the complexities of dense,dynamic,and occluded environments.This paper introduces a novel,unified deep learning framework for vehicle detection,tracking,counting,and classification in aerial imagery designed explicitly for modern smart city infrastructure demands.Our approach begins with adaptive histogram equalization to optimize aerial image clarity,followed by a cutting-edge scene parsing technique using Mask2Former,enabling robust segmentation even in visually congested settings.Vehicle detection leverages the latest YOLOv11 architecture,delivering superior accuracy in aerial contexts by addressing occlusion,scale variance,and fine-grained object differentiation.We incorporate the highly efficient ByteTrack algorithm for tracking,enabling seamless identity preservation across frames.Vehicle counting is achieved through an unsupervised DBSCAN-based method,ensuring adaptability to varying traffic densities.We further introduce a hybrid feature extraction module combining Convolutional Neural Networks(CNNs)with Zernike Moments,capturing both deep semantic and geometric signatures of vehicles.The final classification is powered by NASNet,a neural architecture search-optimized model,ensuring high accuracy across diverse vehicle types and orientations.Extensive evaluations of the VAID benchmark dataset demonstrate the system’s outstanding performance,achieving 96%detection,94%tracking,and 96.4%classification accuracy.On the UAVDT dataset,the system attains 95%detection,93%tracking,and 95%classification accuracy,confirming its robustness across diverse aerial traffic scenarios.These results establish new benchmarks in aerial traffic analysis and validate the framework’s scalability,making it a powerful and adaptable solution for next-generation intelligent transportation systems and urban surveillance.展开更多
This paper presents a robust finite-time visual servo control strategy for the tracking problem of omni-directional mobile manipulators(OMMs)subject to mismatched disturbances.First,the nonlinear kinematic model of vi...This paper presents a robust finite-time visual servo control strategy for the tracking problem of omni-directional mobile manipulators(OMMs)subject to mismatched disturbances.First,the nonlinear kinematic model of visual servoing for OMMs with mismatched disturbances is explicitly presented to solve the whole-body inverse kinematic problem.Second,a sliding mode observer augmented with an integral terminal sliding mode controller is proposed to handle these uncertainties and ensure that the system converges to a small region around the equilibrium point.The boundary layer technique is employed to mitigate the chattering phenomenon.Furthermore,a strict finite-time Lyapunov stability analysis is conducted.An experimental comparison between the proposed algorithm and a traditional position-based visual servo controller is carried out,and the results demonstrate the superiority of the proposed control algorithm.展开更多
BACKGROUND Kidney and liver transplantation are two sub-specialized medical disciplines,with transplant professionals spending decades in training.While artificial intelligencebased(AI-based)tools could potentially as...BACKGROUND Kidney and liver transplantation are two sub-specialized medical disciplines,with transplant professionals spending decades in training.While artificial intelligencebased(AI-based)tools could potentially assist in everyday clinical practice,comparative assessment of their effectiveness in clinical decision-making remains limited.AIM To compare the use of ChatGPT and GPT-4 as potential tools in AI-assisted clinical practice in these challenging disciplines.METHODS In total,400 different questions tested ChatGPT’s/GPT-4 knowledge and decision-making capacity in various renal and liver transplantation concepts.Specifically,294 multiple-choice questions were derived from open-access sources,63 questions were derived from published open-access case reports,and 43 from unpublished cases of patients treated at our department.The evaluation covered a plethora of topics,including clinical predictors,treatment options,and diagnostic criteria,among others.RESULTS ChatGPT correctly answered 50.3%of the 294 multiple-choice questions,while GPT-4 demonstrated a higher performance,answering 70.7%of questions(P<0.001).Regarding the 63 questions from published cases,ChatGPT achieved an agreement rate of 50.79%and partial agreement of 17.46%,while GPT-4 demonstrated an agreement rate of 80.95%and partial agreement of 9.52%(P=0.01).Regarding the 43 questions from unpublished cases,ChatGPT demonstrated an agreement rate of 53.49%and partial agreement of 23.26%,while GPT-4 demonstrated an agreement rate of 72.09%and partial agreement of 6.98%(P=0.004).When factoring by the nature of the task for all cases,notably,GPT-4 demonstrated outstanding performance,providing a differential diagnosis that included the final diagnosis in 90%of the cases(P=0.008),and successfully predicting the prognosis of the patient in 100%of related questions(P<0.001).CONCLUSION GPT-4 consistently provided more accurate and reliable clinical recommendations with higher percentages of full agreements both in renal and liver transplantation compared with ChatGPT.Our findings support the potential utility of AI models like ChatGPT and GPT-4 in AI-assisted clinical practice as sources of accurate,individualized medical information and facilitating decision-making.The progression and refinement of such AI-based tools could reshape the future of clinical practice,making their early adoption and adaptation by physicians a necessity.展开更多
Intelligent Traffic Management(ITM)has progressively developed into a critical component of modern transportation networks,significantly enhancing traffic flow and reducing congestion in urban environments.This resear...Intelligent Traffic Management(ITM)has progressively developed into a critical component of modern transportation networks,significantly enhancing traffic flow and reducing congestion in urban environments.This research proposes an enhanced framework that leverages Deep Q-Learning(DQL),Game Theory(GT),and Stochastic Optimization(SO)to tackle the complex dynamics in transportation networks.The DQL component utilizes the distribution of traffic conditions for epsilon-greedy policy formulation and action and choice reward calculation,ensuring resilient decision-making.GT models the interaction between vehicles and intersections through probabilistic distributions of various features to enhance performance.Results demonstrate that the proposed framework is a scalable solution for dynamic optimization in transportation networks.展开更多
Self-Explaining Autonomous Systems(SEAS)have emerged as a strategic frontier within Artificial Intelligence(AI),responding to growing demands for transparency and interpretability in autonomous decisionmaking.This stu...Self-Explaining Autonomous Systems(SEAS)have emerged as a strategic frontier within Artificial Intelligence(AI),responding to growing demands for transparency and interpretability in autonomous decisionmaking.This study presents a comprehensive bibliometric analysis of SEAS research published between 2020 and February 2025,drawing upon 1380 documents indexed in Scopus.The analysis applies co-citation mapping,keyword co-occurrence,and author collaboration networks using VOSviewer,MASHA,and Python to examine scientific production,intellectual structure,and global collaboration patterns.The results indicate a sustained annual growth rate of 41.38%,with an h-index of 57 and an average of 21.97 citations per document.A normalized citation rate was computed to address temporal bias,enabling balanced evaluation across publication cohorts.Thematic analysis reveals four consolidated research fronts:interpretability in machine learning,explainability in deep neural networks,transparency in generative models,and optimization strategies in autonomous control.Author co-citation analysis identifies four distinct research communities,and keyword evolution shows growing interdisciplinary links with medicine,cybersecurity,and industrial automation.The United States leads in scientific output and citation impact at the geographical level,while countries like India and China show high productivity with varied influence.However,international collaboration remains limited at 7.39%,reflecting a fragmented research landscape.As discussed in this study,SEAS research is expanding rapidly yet remains epistemologically dispersed,with uneven integration of ethical and human-centered perspectives.This work offers a structured and data-driven perspective on SEAS development,highlights key contributors and thematic trends,and outlines critical directions for advancing responsible and transparent autonomous systems.展开更多
In the evolving landscape of secure communication,steganography has become increasingly vital to secure the transmission of secret data through an insecure public network.Several steganographic algorithms have been pr...In the evolving landscape of secure communication,steganography has become increasingly vital to secure the transmission of secret data through an insecure public network.Several steganographic algorithms have been proposed using digital images with a common objective of balancing a trade-off between the payload size and the quality of the stego image.In the existing steganographic works,a remarkable distortion of the stego image persists when the payload size is increased,making several existing works impractical to the current world of vast data.This paper introduces FuzzyStego,a novel approach designed to enhance the stego image’s quality by minimizing the effect of the payload size on the stego image’s quality.In line with the limitations of traditional methods like Pixel Value Differencing(PVD),Transform Domain Techniques,and Least Significant Bit(LSB)insertion,such as image quality degradation,vulnerability to processing attacks,and restricted capacity,FuzzyStego utilizes fuzzy logic to categorize pixels into intensity levels:Low(L),Medium-Low(ML),Medium(M),Medium-High(MH),and High(H).This classification enables adaptive data embedding,minimizing detectability by adjusting the hidden bit count according to the intensity levels.Experimental results show that FuzzyStego achieves an average Peak Signal-to-Noise Ratio(PSNR)of 58.638 decibels(dB)and a Structural Similarity Index Measure(SSIM)of almost 1.00,demonstrating its promising capability to preserve image quality while embedding data effectively.展开更多
The effective and timely diagnosis and treatment of ocular diseases are key to the rapid recovery of patients.Today,the mass disease that needs attention in this context is cataracts.Although deep learning has signifi...The effective and timely diagnosis and treatment of ocular diseases are key to the rapid recovery of patients.Today,the mass disease that needs attention in this context is cataracts.Although deep learning has significantly advanced the analysis of ocular disease images,there is a need for a probabilistic model to generate the distributions of potential outcomes and thusmake decisions related to uncertainty quantification.Therefore,this study implements a Bayesian Convolutional Neural Networks(BCNN)model for predicting cataracts by assigning probability values to the predictions.It prepares convolutional neural network(CNN)and BCNN models.The proposed BCNN model is CNN-based in which reparameterization is in the first and last layers of the CNN model.This study then trains them on a dataset of cataract images filtered from the ocular disease fundus images fromKaggle.The deep CNN model has an accuracy of 95%,while the BCNN model has an accuracy of 93.75% along with information on uncertainty estimation of cataracts and normal eye conditions.When compared with other methods,the proposed work reveals that it can be a promising solution for cataract prediction with uncertainty estimation.展开更多
The balancing market in the energy sector plays a critical role in physically and financially balancing the supply and demand.Modeling dynamics in the balancing market can provide valuable insights and prognosis for p...The balancing market in the energy sector plays a critical role in physically and financially balancing the supply and demand.Modeling dynamics in the balancing market can provide valuable insights and prognosis for power grid stability and secure energy supply.While complex machine learning models can achieve high accuracy,their“blackbox”nature severely limits the model interpretability.In this paper,we explore the trade-off between model accuracy and interpretability for the energy balancing market.Particularly,we take the example of forecasting manual frequency restoration reserve(mFRR)activation price in the balancing market using real market data from different energy price zones.We explore the interpretability of mFRR forecasting using two models:extreme gradient boosting(XGBoost)machine and explainable boosting machine(EBM).We also integrate the two models,and we benchmark all the models against a baseline naive model.Our results show that EBM provides forecasting accuracy comparable to XGBoost while yielding a considerable level of interpretability.Our analysis also underscores the challenge of accurately predicting the mFRR price for the instances when the activation price deviates significantly from the spot price.Importantly,EBM's interpretability features reveal insights into non-linear mFRR price drivers and regional market dynamics.Our study demonstrates that EBM is a viable and valuable interpretable alternative to complex black-box AI models in the forecast for the balancing market.展开更多
Small and Medium-sized Enterprises (SMEs) are considered the backbone of global economy, but they often face cyberthreats which threaten their financial stability and operational continuity. This work aims to offer a ...Small and Medium-sized Enterprises (SMEs) are considered the backbone of global economy, but they often face cyberthreats which threaten their financial stability and operational continuity. This work aims to offer a proactive cybersecurity approach to safeguard SMEs against these threats. Furthermore, to mitigate these risks, we propose a comprehensive framework of practical and scalable cybersecurity measurements/protocols specifically for SMEs. These measures encompass a spectrum of solutions, from technological fortifications to employee training initiatives and regulatory compliance strategies, in an effort to cultivate resilience and awareness among SMEs. Additionally, we introduce a specially designed a Java-based questionnaire software tool in order to provide an initial framework for essential cybersecurity measures and evaluation for SMEs. This tool covers crucial topics such as social engineering and phishing attempts, implementing antimalware and ransomware defense mechanisms, secure data management and backup strategies and methods for preventing insider threats. By incorporating globally recognized frameworks and standards like ISO/IEC 27001 and NIST guidelines, this questionnaire offers a roadmap for establishing and enhancing cybersecurity measures.展开更多
Detecting sitting posture abnormalities in wheelchair users enables early identification of changes in their functional status.To date,this detection has relied on in-person observation by medical specialists.However,...Detecting sitting posture abnormalities in wheelchair users enables early identification of changes in their functional status.To date,this detection has relied on in-person observation by medical specialists.However,given the challenges faced by health specialists to carry out continuous monitoring,the development of an intelligent anomaly detection system is proposed.Unlike other authors,where they use supervised techniques,this work proposes using unsupervised techniques due to the advantages they offer.These advantages include the lack of prior labeling of data,and the detection of anomalies previously not contemplated,among others.In the present work,an individualized methodology consisting of two phases is developed:characterizing the normal sitting pattern and determining abnormal samples.An analysis has been carried out between different unsupervised techniques to study which ones are more suitable for postural diagnosis.It can be concluded,among other aspects,that the utilization of dimensionality reduction techniques leads to improved results.Moreover,the normality characterization phase is deemed necessary for enhancing the system’s learning capabilities.Additionally,employing an individualized approach to the model aids in capturing the particularities of the various pathologies present among subjects.展开更多
Imbalanced multiclass datasets pose challenges for machine learning algorithms.They often contain minority classes that are important for accurate predictions.However,when the data is sparsely distributed and overlaps...Imbalanced multiclass datasets pose challenges for machine learning algorithms.They often contain minority classes that are important for accurate predictions.However,when the data is sparsely distributed and overlaps with data points fromother classes,it introduces noise.As a result,existing resamplingmethods may fail to preserve the original data patterns,further disrupting data quality and reducingmodel performance.This paper introduces Neighbor Displacement-based Enhanced Synthetic Oversampling(NDESO),a hybridmethod that integrates a data displacement strategy with a resampling technique to achieve data balance.It begins by computing the average distance of noisy data points to their neighbors and adjusting their positions toward the center before applying random oversampling.Extensive evaluations compare 14 alternatives on nine classifiers across synthetic and 20 real-world datasetswith varying imbalance ratios.This evaluation was structured into two distinct test groups.First,the effects of k-neighbor variations and distance metrics are evaluated,followed by a comparison of resampled data distributions against alternatives,and finally,determining the most suitable oversampling technique for data balancing.Second,the overall performance of the NDESO algorithm was assessed,focusing on G-mean and statistical significance.The results demonstrate that our method is robust to a wide range of variations in these parameters and the overall performance achieves an average G-mean score of 0.90,which is among the highest.Additionally,it attains the lowest mean rank of 2.88,indicating statistically significant improvements over existing approaches.This advantage underscores its potential for effectively handling data imbalance in practical scenarios.展开更多
In the current paper,we present a study of the spatial distribution of luminous blue variables(LBVs)and various LBV candidates(c LBVs)with respect to OB associations in the galaxy M33.The identification of blue star g...In the current paper,we present a study of the spatial distribution of luminous blue variables(LBVs)and various LBV candidates(c LBVs)with respect to OB associations in the galaxy M33.The identification of blue star groups was based on the LGGS data and was carried out by two clustering algorithms with initial parameters determined during simulations of random stellar fields.We have found that the distribution of distances to the nearest OB association obtained for the LBV/c LBV sample is close to that for massive stars with Minit>20 M⊙and WolfRayet stars.This result is in good agreement with the standard assumption that LBVs represent an intermediate stage in the evolution of the most massive stars.However,some objects from the LBV/cLBV sample,particularly Fe II-emission stars,demonstrated severe isolation compared to other massive stars,which,together with certain features of their spectra,implicitly indicates that the nature of these objects and other LBVs/cLBVs may differ radically.展开更多
In the rapidly evolving landscape of natural language processing(NLP)and sentiment analysis,improving the accuracy and efficiency of sentiment classification models is crucial.This paper investigates the performance o...In the rapidly evolving landscape of natural language processing(NLP)and sentiment analysis,improving the accuracy and efficiency of sentiment classification models is crucial.This paper investigates the performance of two advanced models,the Large Language Model(LLM)LLaMA model and NLP BERT model,in the context of airline review sentiment analysis.Through fine-tuning,domain adaptation,and the application of few-shot learning,the study addresses the subtleties of sentiment expressions in airline-related text data.Employing predictive modeling and comparative analysis,the research evaluates the effectiveness of Large Language Model Meta AI(LLaMA)and Bidirectional Encoder Representations from Transformers(BERT)in capturing sentiment intricacies.Fine-tuning,including domain adaptation,enhances the models'performance in sentiment classification tasks.Additionally,the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis.By conducting experiments on a diverse airline review dataset,the research quantifies the impact of fine-tuning,domain adaptation,and few-shot learning on model performance,providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content(UGC).This research contributes to refining sentiment analysis models,ultimately fostering improved customer satisfaction in the airline industry.展开更多
Humans achieve cognitive development through continuous interaction with their environment,enhancing both perception and behavior.However,current robots lack the capacity for human-like action and evolution,posing a b...Humans achieve cognitive development through continuous interaction with their environment,enhancing both perception and behavior.However,current robots lack the capacity for human-like action and evolution,posing a bottleneck to improving robotic intelligence.Existing research predominantly models robots as one-way,static mappings from observations to actions,neglecting the dynamic processes of perception and behavior.This paper introduces a novel approach to robot cognitive learning by considering physical properties.We propose a theoretical framework wherein a robot is conceptualized as a three-body physical system comprising a perception-body(P-body),a cognition-body(C-body),and a behavior-body(B-body).Each body engages in physical dynamics and operates within a closed-loop interaction.Significantly,three crucial interactions connect these bodies.The C-body relies on the Pbody's extracted states and reciprocally offers long-term rewards,optimizing the P-body's perception policy.In addition,the C-body directs the B-body's actions through sub-goals,and subsequent P-body-derived states facilitate the C-body's cognition dynamics learning.At last,the B-body would follow the sub-goal generated by the C-body and perform actions conditioned on the perceptive state from the P-body,which leads to the next interactive step.These interactions foster the joint evolution of each body,culminating in optimal design.To validate our approach,we employ a navigation task using a four-legged robot,D'Kitty,equipped with a movable global camera.Navigational prowess demands intricate coordination of sensing,planning,and D'Kitty's motion.Leveraging our framework yields superior task performance compared with conventional methodologies.In conclusion,this paper establishes a paradigm shift in robot cognitive learning by integrating physical interactions across the P-body,C-body,and B-body,while considering physical properties.Our framework's successful application to a navigation task underscores its efficacy in enhancing robotic intelligence.展开更多
The rapid growth of the automotive industry has raised significant concerns about the security of connected vehicles and their integrated supply chains,which are increasingly vulnerable to advanced cyber threats.Tradi...The rapid growth of the automotive industry has raised significant concerns about the security of connected vehicles and their integrated supply chains,which are increasingly vulnerable to advanced cyber threats.Traditional authentication methods have proven insufficient,exposing systems to risks such as Sybil,Denial of Service(DoS),and Eclipse attacks.This study critically examines the limitations of current security protocols,focusing on authentication and data exchange vulnerabilities,and explores blockchain technology as a potential solution.Blockchain’s decentralized and cryptographically secure framework can significantly enhance Vehicle-to-Vehicle(V2V)communication,ensure data integrity,and enable transparent,immutable transactions within the supply chain.Additionally,blockchain strengthens authentication,secures digital identities,and improves data sharing,reducing the risk of unauthorized access and data breaches.Our contribution lies in the proposal to integrate Artificial Intelligence(AI)with blockchain technology to further improve security by refining cryptographic methods,automating key management,and bolstering anomaly detection.Despite challenges related to computational complexity,latency,scalability,and regulatory concerns,the combination of blockchain,AI offers the transformative potential to enhance the security,transparency,and efficiency of connected vehicle systems and their supply chains.展开更多
A common method for monitoring seawater quality involves collecting samples periodically and analyzing them in a laboratory.This method presents several challenges such as transportation of samples,limited access to t...A common method for monitoring seawater quality involves collecting samples periodically and analyzing them in a laboratory.This method presents several challenges such as transportation of samples,limited access to testing areas,high costs,and non-instantaneous tests.In this paper,a new Wireless Sensor Network(WSN)based seawater quality monitoring(SQM)system is designed and constructed to observe the seawater parameters that are indicative of marine pollution such as pH,electrical conductivity,temperature,and turbidity,along with geospatial data in real-time.It consists of one master node and several portable sensor nodes that are deployed at different locations on the sea surface.The IEEE 802.15.4 communication standard is utilized between master node and sensor nodes using star topology,while GSM/GPRS is used to connect the master node to a remote server.Collected data from the sensor nodes can be instantly viewed on data grids,graphics,and a map via both a developed web application and a hybrid mobile application.Additionally,the data can be filtered by different parameters and downloaded in spreadsheet format for integration with geographical information systems.After calibrating the sensors,experimental tests were conducted off the coast of Antalya Kucuk Calticak Bay over two separate periods totaling 14 d with only a 2%data loss.Furthermore,a verification test was performed for the sensors,where R-squared values ranged between 0.7 and 1.0,indicating a high correlation between sensor node data and standard instrument data.展开更多
Data compression plays a vital role in datamanagement and information theory by reducing redundancy.However,it lacks built-in security features such as secret keys or password-based access control,leaving sensitive da...Data compression plays a vital role in datamanagement and information theory by reducing redundancy.However,it lacks built-in security features such as secret keys or password-based access control,leaving sensitive data vulnerable to unauthorized access and misuse.With the exponential growth of digital data,robust security measures are essential.Data encryption,a widely used approach,ensures data confidentiality by making it unreadable and unalterable through secret key control.Despite their individual benefits,both require significant computational resources.Additionally,performing them separately for the same data increases complexity and processing time.Recognizing the need for integrated approaches that balance compression ratios and security levels,this research proposes an integrated data compression and encryption algorithm,named IDCE,for enhanced security and efficiency.Thealgorithmoperates on 128-bit block sizes and a 256-bit secret key length.It combines Huffman coding for compression and a Tent map for encryption.Additionally,an iterative Arnold cat map further enhances cryptographic confusion properties.Experimental analysis validates the effectiveness of the proposed algorithm,showcasing competitive performance in terms of compression ratio,security,and overall efficiency when compared to prior algorithms in the field.展开更多
Robots are increasingly expected to replace humans in many repetitive and high-precision tasks,of which surface scanning is a typical example.However,it is usually difficult for a robot to independently deal with a su...Robots are increasingly expected to replace humans in many repetitive and high-precision tasks,of which surface scanning is a typical example.However,it is usually difficult for a robot to independently deal with a surface scanning task with uncertainties in,for example the irregular surface shapes and surface properties.Moreover,it usually requires surface modelling with additional sensors,which might be time-consuming and costly.A human-robot collaboration-based approach that allows a human user and a robot to assist each other in scanning uncertain surfaces with uniform properties,such as scanning human skin in ultrasound examination is proposed.In this approach,teleoperation is used to obtain the operator's intent while allowing the operator to operate remotely.After external force perception and friction estimation,the orientation of the robot endeffector can be autonomously adjusted to keep as perpendicular to the surface as possible.Force control enables the robotic manipulator to maintain a constant contact force with the surface.And hybrid force/motion control ensures that force,position,and pose can be regulated without interfering with each other while reducing the operator's workload.The proposed method is validated using the Elite robot to perform a mock Bultrasound scanning experiment.展开更多
文摘Customer churn is the rate at which customers discontinue doing business with a company over a given time period.It is an essential measure for businesses to monitor high churn rates,as they often indicate underlying issues with services,products,or customer experience,resulting in considerable income loss.Prediction of customer churn is a crucial task aimed at retaining customers and maintaining revenue growth.Traditional machine learning(ML)models often struggle to capture complex temporal dependencies in client behavior data.To address this,an optimized deep learning(DL)approach using a Regularized Bidirectional Long Short-Term Memory(RBiLSTM)model is proposed to mitigate overfitting and improve generalization error.The model integrates dropout,L2-regularization,and early stopping to enhance predictive accuracy while preventing over-reliance on specific patterns.Moreover,this study investigates the effect of optimization techniques on boosting the training efficiency of the developed model.Experimental results on a recent public customer churn dataset demonstrate that the trained model outperforms the traditional ML models and some other DL models,such as Long Short-Term Memory(LSTM)and Deep Neural Network(DNN),in churn prediction performance and stability.The proposed approach achieves 96.1%accuracy,compared with LSTM and DNN,which attain 94.5%and 94.1%accuracy,respectively.These results confirm that the proposed approach can be used as a valuable tool for businesses to identify at-risk consumers proactively and implement targeted retention strategies.
文摘Automated grading of dandruff severity is a clinically significant but challenging task due to the inherent ordinal nature of severity levels and the high prevalence of label noise from subjective expert annotations.Standard classification methods fail to address these dual challenges,limiting their real-world performance.In this paper,a novel,three-phase training framework is proposed that learns a robust ordinal classifier directly from noisy labels.The approach synergistically combines a rank-based ordinal regression backbone with a cooperative,semi-supervised learning strategy to dynamically partition the data into clean and noisy subsets.A hybrid training objective is then employed,applying a supervised ordinal loss to the clean set.The noisy set is simultaneously trained using a dualobjective that combines a semi-supervised ordinal loss with a parallel,label-agnostic contrastive loss.This design allows themodel to learn fromthe entire noisy subset while using contrastive learning to mitigate the risk of error propagation frompotentially corrupt supervision.Extensive experiments on a new,large-scale,multi-site clinical dataset validate our approach.Themethod achieves state-of-the-art performance with 80.71%accuracy and a 76.86%F1-score,significantly outperforming existing approaches,including a 2.26%improvement over the strongest baseline method.This work provides not only a robust solution for a practical medical imaging problem but also a generalizable framework for other tasks plagued by noisy ordinal labels.
基金funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB BremenThe authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through Large Group Project under grant number(RGP2/367/46)+1 种基金This research is supported and funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R410)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘As urban landscapes evolve and vehicular volumes soar,traditional traffic monitoring systems struggle to scale,often failing under the complexities of dense,dynamic,and occluded environments.This paper introduces a novel,unified deep learning framework for vehicle detection,tracking,counting,and classification in aerial imagery designed explicitly for modern smart city infrastructure demands.Our approach begins with adaptive histogram equalization to optimize aerial image clarity,followed by a cutting-edge scene parsing technique using Mask2Former,enabling robust segmentation even in visually congested settings.Vehicle detection leverages the latest YOLOv11 architecture,delivering superior accuracy in aerial contexts by addressing occlusion,scale variance,and fine-grained object differentiation.We incorporate the highly efficient ByteTrack algorithm for tracking,enabling seamless identity preservation across frames.Vehicle counting is achieved through an unsupervised DBSCAN-based method,ensuring adaptability to varying traffic densities.We further introduce a hybrid feature extraction module combining Convolutional Neural Networks(CNNs)with Zernike Moments,capturing both deep semantic and geometric signatures of vehicles.The final classification is powered by NASNet,a neural architecture search-optimized model,ensuring high accuracy across diverse vehicle types and orientations.Extensive evaluations of the VAID benchmark dataset demonstrate the system’s outstanding performance,achieving 96%detection,94%tracking,and 96.4%classification accuracy.On the UAVDT dataset,the system attains 95%detection,93%tracking,and 95%classification accuracy,confirming its robustness across diverse aerial traffic scenarios.These results establish new benchmarks in aerial traffic analysis and validate the framework’s scalability,making it a powerful and adaptable solution for next-generation intelligent transportation systems and urban surveillance.
基金supported by the Artificial Intelligence Innovation and Development Special Fund of Shanghai(No.2019RGZN01041)the National Natural Science Foundation of China(No.92048205).
文摘This paper presents a robust finite-time visual servo control strategy for the tracking problem of omni-directional mobile manipulators(OMMs)subject to mismatched disturbances.First,the nonlinear kinematic model of visual servoing for OMMs with mismatched disturbances is explicitly presented to solve the whole-body inverse kinematic problem.Second,a sliding mode observer augmented with an integral terminal sliding mode controller is proposed to handle these uncertainties and ensure that the system converges to a small region around the equilibrium point.The boundary layer technique is employed to mitigate the chattering phenomenon.Furthermore,a strict finite-time Lyapunov stability analysis is conducted.An experimental comparison between the proposed algorithm and a traditional position-based visual servo controller is carried out,and the results demonstrate the superiority of the proposed control algorithm.
文摘BACKGROUND Kidney and liver transplantation are two sub-specialized medical disciplines,with transplant professionals spending decades in training.While artificial intelligencebased(AI-based)tools could potentially assist in everyday clinical practice,comparative assessment of their effectiveness in clinical decision-making remains limited.AIM To compare the use of ChatGPT and GPT-4 as potential tools in AI-assisted clinical practice in these challenging disciplines.METHODS In total,400 different questions tested ChatGPT’s/GPT-4 knowledge and decision-making capacity in various renal and liver transplantation concepts.Specifically,294 multiple-choice questions were derived from open-access sources,63 questions were derived from published open-access case reports,and 43 from unpublished cases of patients treated at our department.The evaluation covered a plethora of topics,including clinical predictors,treatment options,and diagnostic criteria,among others.RESULTS ChatGPT correctly answered 50.3%of the 294 multiple-choice questions,while GPT-4 demonstrated a higher performance,answering 70.7%of questions(P<0.001).Regarding the 63 questions from published cases,ChatGPT achieved an agreement rate of 50.79%and partial agreement of 17.46%,while GPT-4 demonstrated an agreement rate of 80.95%and partial agreement of 9.52%(P=0.01).Regarding the 43 questions from unpublished cases,ChatGPT demonstrated an agreement rate of 53.49%and partial agreement of 23.26%,while GPT-4 demonstrated an agreement rate of 72.09%and partial agreement of 6.98%(P=0.004).When factoring by the nature of the task for all cases,notably,GPT-4 demonstrated outstanding performance,providing a differential diagnosis that included the final diagnosis in 90%of the cases(P=0.008),and successfully predicting the prognosis of the patient in 100%of related questions(P<0.001).CONCLUSION GPT-4 consistently provided more accurate and reliable clinical recommendations with higher percentages of full agreements both in renal and liver transplantation compared with ChatGPT.Our findings support the potential utility of AI models like ChatGPT and GPT-4 in AI-assisted clinical practice as sources of accurate,individualized medical information and facilitating decision-making.The progression and refinement of such AI-based tools could reshape the future of clinical practice,making their early adoption and adaptation by physicians a necessity.
基金the Deanship of Scientific Research at King Khalid University for funding this research through the Large Group Research Project under grant number RGP2/324/46.
文摘Intelligent Traffic Management(ITM)has progressively developed into a critical component of modern transportation networks,significantly enhancing traffic flow and reducing congestion in urban environments.This research proposes an enhanced framework that leverages Deep Q-Learning(DQL),Game Theory(GT),and Stochastic Optimization(SO)to tackle the complex dynamics in transportation networks.The DQL component utilizes the distribution of traffic conditions for epsilon-greedy policy formulation and action and choice reward calculation,ensuring resilient decision-making.GT models the interaction between vehicles and intersections through probabilistic distributions of various features to enhance performance.Results demonstrate that the proposed framework is a scalable solution for dynamic optimization in transportation networks.
基金partially funded by the Programa Nacional de Becas y Crédito Educativo of Peru and the Universitat de València,Spain.
文摘Self-Explaining Autonomous Systems(SEAS)have emerged as a strategic frontier within Artificial Intelligence(AI),responding to growing demands for transparency and interpretability in autonomous decisionmaking.This study presents a comprehensive bibliometric analysis of SEAS research published between 2020 and February 2025,drawing upon 1380 documents indexed in Scopus.The analysis applies co-citation mapping,keyword co-occurrence,and author collaboration networks using VOSviewer,MASHA,and Python to examine scientific production,intellectual structure,and global collaboration patterns.The results indicate a sustained annual growth rate of 41.38%,with an h-index of 57 and an average of 21.97 citations per document.A normalized citation rate was computed to address temporal bias,enabling balanced evaluation across publication cohorts.Thematic analysis reveals four consolidated research fronts:interpretability in machine learning,explainability in deep neural networks,transparency in generative models,and optimization strategies in autonomous control.Author co-citation analysis identifies four distinct research communities,and keyword evolution shows growing interdisciplinary links with medicine,cybersecurity,and industrial automation.The United States leads in scientific output and citation impact at the geographical level,while countries like India and China show high productivity with varied influence.However,international collaboration remains limited at 7.39%,reflecting a fragmented research landscape.As discussed in this study,SEAS research is expanding rapidly yet remains epistemologically dispersed,with uneven integration of ethical and human-centered perspectives.This work offers a structured and data-driven perspective on SEAS development,highlights key contributors and thematic trends,and outlines critical directions for advancing responsible and transparent autonomous systems.
文摘In the evolving landscape of secure communication,steganography has become increasingly vital to secure the transmission of secret data through an insecure public network.Several steganographic algorithms have been proposed using digital images with a common objective of balancing a trade-off between the payload size and the quality of the stego image.In the existing steganographic works,a remarkable distortion of the stego image persists when the payload size is increased,making several existing works impractical to the current world of vast data.This paper introduces FuzzyStego,a novel approach designed to enhance the stego image’s quality by minimizing the effect of the payload size on the stego image’s quality.In line with the limitations of traditional methods like Pixel Value Differencing(PVD),Transform Domain Techniques,and Least Significant Bit(LSB)insertion,such as image quality degradation,vulnerability to processing attacks,and restricted capacity,FuzzyStego utilizes fuzzy logic to categorize pixels into intensity levels:Low(L),Medium-Low(ML),Medium(M),Medium-High(MH),and High(H).This classification enables adaptive data embedding,minimizing detectability by adjusting the hidden bit count according to the intensity levels.Experimental results show that FuzzyStego achieves an average Peak Signal-to-Noise Ratio(PSNR)of 58.638 decibels(dB)and a Structural Similarity Index Measure(SSIM)of almost 1.00,demonstrating its promising capability to preserve image quality while embedding data effectively.
基金Saudi Arabia for funding this work through Small Research Group Project under Grant Number RGP.1/316/45.
文摘The effective and timely diagnosis and treatment of ocular diseases are key to the rapid recovery of patients.Today,the mass disease that needs attention in this context is cataracts.Although deep learning has significantly advanced the analysis of ocular disease images,there is a need for a probabilistic model to generate the distributions of potential outcomes and thusmake decisions related to uncertainty quantification.Therefore,this study implements a Bayesian Convolutional Neural Networks(BCNN)model for predicting cataracts by assigning probability values to the predictions.It prepares convolutional neural network(CNN)and BCNN models.The proposed BCNN model is CNN-based in which reparameterization is in the first and last layers of the CNN model.This study then trains them on a dataset of cataract images filtered from the ocular disease fundus images fromKaggle.The deep CNN model has an accuracy of 95%,while the BCNN model has an accuracy of 93.75% along with information on uncertainty estimation of cataracts and normal eye conditions.When compared with other methods,the proposed work reveals that it can be a promising solution for cataract prediction with uncertainty estimation.
基金PriTEM project funded by UiO:Energy Convergence Environments
文摘The balancing market in the energy sector plays a critical role in physically and financially balancing the supply and demand.Modeling dynamics in the balancing market can provide valuable insights and prognosis for power grid stability and secure energy supply.While complex machine learning models can achieve high accuracy,their“blackbox”nature severely limits the model interpretability.In this paper,we explore the trade-off between model accuracy and interpretability for the energy balancing market.Particularly,we take the example of forecasting manual frequency restoration reserve(mFRR)activation price in the balancing market using real market data from different energy price zones.We explore the interpretability of mFRR forecasting using two models:extreme gradient boosting(XGBoost)machine and explainable boosting machine(EBM).We also integrate the two models,and we benchmark all the models against a baseline naive model.Our results show that EBM provides forecasting accuracy comparable to XGBoost while yielding a considerable level of interpretability.Our analysis also underscores the challenge of accurately predicting the mFRR price for the instances when the activation price deviates significantly from the spot price.Importantly,EBM's interpretability features reveal insights into non-linear mFRR price drivers and regional market dynamics.Our study demonstrates that EBM is a viable and valuable interpretable alternative to complex black-box AI models in the forecast for the balancing market.
文摘Small and Medium-sized Enterprises (SMEs) are considered the backbone of global economy, but they often face cyberthreats which threaten their financial stability and operational continuity. This work aims to offer a proactive cybersecurity approach to safeguard SMEs against these threats. Furthermore, to mitigate these risks, we propose a comprehensive framework of practical and scalable cybersecurity measurements/protocols specifically for SMEs. These measures encompass a spectrum of solutions, from technological fortifications to employee training initiatives and regulatory compliance strategies, in an effort to cultivate resilience and awareness among SMEs. Additionally, we introduce a specially designed a Java-based questionnaire software tool in order to provide an initial framework for essential cybersecurity measures and evaluation for SMEs. This tool covers crucial topics such as social engineering and phishing attempts, implementing antimalware and ransomware defense mechanisms, secure data management and backup strategies and methods for preventing insider threats. By incorporating globally recognized frameworks and standards like ISO/IEC 27001 and NIST guidelines, this questionnaire offers a roadmap for establishing and enhancing cybersecurity measures.
基金FEDER/Ministry of Science and Innovation-State Research Agency/Project PID2020-112667RB-I00 funded by MCIN/AEI/10.13039/501100011033the Basque Government,IT1726-22+2 种基金by the predoctoral contracts PRE_2022_2_0022 and EP_2023_1_0015 of the Basque Governmentpartially supported by the Italian MIUR,PRIN 2020 Project“COMMON-WEARS”,N.2020HCWWLP,CUP:H23C22000230005co-funding from Next Generation EU,in the context of the National Recovery and Resilience Plan,through the Italian MUR,PRIN 2022 Project”COCOWEARS”(A framework for COntinuum COmputing WEARable Systems),N.2022T2XNJE,CUP:H53D23003640006.
文摘Detecting sitting posture abnormalities in wheelchair users enables early identification of changes in their functional status.To date,this detection has relied on in-person observation by medical specialists.However,given the challenges faced by health specialists to carry out continuous monitoring,the development of an intelligent anomaly detection system is proposed.Unlike other authors,where they use supervised techniques,this work proposes using unsupervised techniques due to the advantages they offer.These advantages include the lack of prior labeling of data,and the detection of anomalies previously not contemplated,among others.In the present work,an individualized methodology consisting of two phases is developed:characterizing the normal sitting pattern and determining abnormal samples.An analysis has been carried out between different unsupervised techniques to study which ones are more suitable for postural diagnosis.It can be concluded,among other aspects,that the utilization of dimensionality reduction techniques leads to improved results.Moreover,the normality characterization phase is deemed necessary for enhancing the system’s learning capabilities.Additionally,employing an individualized approach to the model aids in capturing the particularities of the various pathologies present among subjects.
文摘Imbalanced multiclass datasets pose challenges for machine learning algorithms.They often contain minority classes that are important for accurate predictions.However,when the data is sparsely distributed and overlaps with data points fromother classes,it introduces noise.As a result,existing resamplingmethods may fail to preserve the original data patterns,further disrupting data quality and reducingmodel performance.This paper introduces Neighbor Displacement-based Enhanced Synthetic Oversampling(NDESO),a hybridmethod that integrates a data displacement strategy with a resampling technique to achieve data balance.It begins by computing the average distance of noisy data points to their neighbors and adjusting their positions toward the center before applying random oversampling.Extensive evaluations compare 14 alternatives on nine classifiers across synthetic and 20 real-world datasetswith varying imbalance ratios.This evaluation was structured into two distinct test groups.First,the effects of k-neighbor variations and distance metrics are evaluated,followed by a comparison of resampled data distributions against alternatives,and finally,determining the most suitable oversampling technique for data balancing.Second,the overall performance of the NDESO algorithm was assessed,focusing on G-mean and statistical significance.The results demonstrate that our method is robust to a wide range of variations in these parameters and the overall performance achieves an average G-mean score of 0.90,which is among the highest.Additionally,it attains the lowest mean rank of 2.88,indicating statistically significant improvements over existing approaches.This advantage underscores its potential for effectively handling data imbalance in practical scenarios.
文摘In the current paper,we present a study of the spatial distribution of luminous blue variables(LBVs)and various LBV candidates(c LBVs)with respect to OB associations in the galaxy M33.The identification of blue star groups was based on the LGGS data and was carried out by two clustering algorithms with initial parameters determined during simulations of random stellar fields.We have found that the distribution of distances to the nearest OB association obtained for the LBV/c LBV sample is close to that for massive stars with Minit>20 M⊙and WolfRayet stars.This result is in good agreement with the standard assumption that LBVs represent an intermediate stage in the evolution of the most massive stars.However,some objects from the LBV/cLBV sample,particularly Fe II-emission stars,demonstrated severe isolation compared to other massive stars,which,together with certain features of their spectra,implicitly indicates that the nature of these objects and other LBVs/cLBVs may differ radically.
文摘In the rapidly evolving landscape of natural language processing(NLP)and sentiment analysis,improving the accuracy and efficiency of sentiment classification models is crucial.This paper investigates the performance of two advanced models,the Large Language Model(LLM)LLaMA model and NLP BERT model,in the context of airline review sentiment analysis.Through fine-tuning,domain adaptation,and the application of few-shot learning,the study addresses the subtleties of sentiment expressions in airline-related text data.Employing predictive modeling and comparative analysis,the research evaluates the effectiveness of Large Language Model Meta AI(LLaMA)and Bidirectional Encoder Representations from Transformers(BERT)in capturing sentiment intricacies.Fine-tuning,including domain adaptation,enhances the models'performance in sentiment classification tasks.Additionally,the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis.By conducting experiments on a diverse airline review dataset,the research quantifies the impact of fine-tuning,domain adaptation,and few-shot learning on model performance,providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content(UGC).This research contributes to refining sentiment analysis models,ultimately fostering improved customer satisfaction in the airline industry.
基金jointly funded by the National Science and Technology Major Project of the Ministry of Science and Technology of China(2018AAA0102900)the"New Generation Artificial Intelligence"Key Field Research and Development Plan of Guangdong Province(2021B0101410002)。
文摘Humans achieve cognitive development through continuous interaction with their environment,enhancing both perception and behavior.However,current robots lack the capacity for human-like action and evolution,posing a bottleneck to improving robotic intelligence.Existing research predominantly models robots as one-way,static mappings from observations to actions,neglecting the dynamic processes of perception and behavior.This paper introduces a novel approach to robot cognitive learning by considering physical properties.We propose a theoretical framework wherein a robot is conceptualized as a three-body physical system comprising a perception-body(P-body),a cognition-body(C-body),and a behavior-body(B-body).Each body engages in physical dynamics and operates within a closed-loop interaction.Significantly,three crucial interactions connect these bodies.The C-body relies on the Pbody's extracted states and reciprocally offers long-term rewards,optimizing the P-body's perception policy.In addition,the C-body directs the B-body's actions through sub-goals,and subsequent P-body-derived states facilitate the C-body's cognition dynamics learning.At last,the B-body would follow the sub-goal generated by the C-body and perform actions conditioned on the perceptive state from the P-body,which leads to the next interactive step.These interactions foster the joint evolution of each body,culminating in optimal design.To validate our approach,we employ a navigation task using a four-legged robot,D'Kitty,equipped with a movable global camera.Navigational prowess demands intricate coordination of sensing,planning,and D'Kitty's motion.Leveraging our framework yields superior task performance compared with conventional methodologies.In conclusion,this paper establishes a paradigm shift in robot cognitive learning by integrating physical interactions across the P-body,C-body,and B-body,while considering physical properties.Our framework's successful application to a navigation task underscores its efficacy in enhancing robotic intelligence.
文摘The rapid growth of the automotive industry has raised significant concerns about the security of connected vehicles and their integrated supply chains,which are increasingly vulnerable to advanced cyber threats.Traditional authentication methods have proven insufficient,exposing systems to risks such as Sybil,Denial of Service(DoS),and Eclipse attacks.This study critically examines the limitations of current security protocols,focusing on authentication and data exchange vulnerabilities,and explores blockchain technology as a potential solution.Blockchain’s decentralized and cryptographically secure framework can significantly enhance Vehicle-to-Vehicle(V2V)communication,ensure data integrity,and enable transparent,immutable transactions within the supply chain.Additionally,blockchain strengthens authentication,secures digital identities,and improves data sharing,reducing the risk of unauthorized access and data breaches.Our contribution lies in the proposal to integrate Artificial Intelligence(AI)with blockchain technology to further improve security by refining cryptographic methods,automating key management,and bolstering anomaly detection.Despite challenges related to computational complexity,latency,scalability,and regulatory concerns,the combination of blockchain,AI offers the transformative potential to enhance the security,transparency,and efficiency of connected vehicle systems and their supply chains.
基金The Scientific Research Projects Coordination Unit of Akdeniz University(Türkiye)under contract No.FBA-2022-5542.
文摘A common method for monitoring seawater quality involves collecting samples periodically and analyzing them in a laboratory.This method presents several challenges such as transportation of samples,limited access to testing areas,high costs,and non-instantaneous tests.In this paper,a new Wireless Sensor Network(WSN)based seawater quality monitoring(SQM)system is designed and constructed to observe the seawater parameters that are indicative of marine pollution such as pH,electrical conductivity,temperature,and turbidity,along with geospatial data in real-time.It consists of one master node and several portable sensor nodes that are deployed at different locations on the sea surface.The IEEE 802.15.4 communication standard is utilized between master node and sensor nodes using star topology,while GSM/GPRS is used to connect the master node to a remote server.Collected data from the sensor nodes can be instantly viewed on data grids,graphics,and a map via both a developed web application and a hybrid mobile application.Additionally,the data can be filtered by different parameters and downloaded in spreadsheet format for integration with geographical information systems.After calibrating the sensors,experimental tests were conducted off the coast of Antalya Kucuk Calticak Bay over two separate periods totaling 14 d with only a 2%data loss.Furthermore,a verification test was performed for the sensors,where R-squared values ranged between 0.7 and 1.0,indicating a high correlation between sensor node data and standard instrument data.
基金the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support(QU-APC-2025).
文摘Data compression plays a vital role in datamanagement and information theory by reducing redundancy.However,it lacks built-in security features such as secret keys or password-based access control,leaving sensitive data vulnerable to unauthorized access and misuse.With the exponential growth of digital data,robust security measures are essential.Data encryption,a widely used approach,ensures data confidentiality by making it unreadable and unalterable through secret key control.Despite their individual benefits,both require significant computational resources.Additionally,performing them separately for the same data increases complexity and processing time.Recognizing the need for integrated approaches that balance compression ratios and security levels,this research proposes an integrated data compression and encryption algorithm,named IDCE,for enhanced security and efficiency.Thealgorithmoperates on 128-bit block sizes and a 256-bit secret key length.It combines Huffman coding for compression and a Tent map for encryption.Additionally,an iterative Arnold cat map further enhances cryptographic confusion properties.Experimental analysis validates the effectiveness of the proposed algorithm,showcasing competitive performance in terms of compression ratio,security,and overall efficiency when compared to prior algorithms in the field.
基金Engineering and Physical Sciences Research Council(EPSRC),Grant/Award Number:EP/S001913。
文摘Robots are increasingly expected to replace humans in many repetitive and high-precision tasks,of which surface scanning is a typical example.However,it is usually difficult for a robot to independently deal with a surface scanning task with uncertainties in,for example the irregular surface shapes and surface properties.Moreover,it usually requires surface modelling with additional sensors,which might be time-consuming and costly.A human-robot collaboration-based approach that allows a human user and a robot to assist each other in scanning uncertain surfaces with uniform properties,such as scanning human skin in ultrasound examination is proposed.In this approach,teleoperation is used to obtain the operator's intent while allowing the operator to operate remotely.After external force perception and friction estimation,the orientation of the robot endeffector can be autonomously adjusted to keep as perpendicular to the surface as possible.Force control enables the robotic manipulator to maintain a constant contact force with the surface.And hybrid force/motion control ensures that force,position,and pose can be regulated without interfering with each other while reducing the operator's workload.The proposed method is validated using the Elite robot to perform a mock Bultrasound scanning experiment.