This study presents an energy-efficient Internet of Things (IoT)-based wireless sensor network (WSN) framework for autonomous data validation in remote environmental monitoring. We address two critical challenges in WSNs: ensuring data reliability and optimizing energy consumption. Our novel approach integrates an artificial neural network (ANN)-based multi-fault detection algorithm with an energy-efficient IoT-WSN architecture. The proposed ANN model is designed to simultaneously detect multiple fault types, including spike faults, stuck-at faults, outliers, and out-of-range faults. We collected sensor data at 5-minute intervals over three months, using temperature and humidity sensors. The ANN was trained on 70% of the 26,280 data points per sensor, with 15% each for validation and testing. Our framework demonstrated a 97.1% improvement in fault detection accuracy (measured by F1 score) compared to existing methods, including rule-based, moving-average, and statistical outlier detection approaches. The energy efficiency of the system was evaluated through 24-h power consumption tests, showing significant savings over traditional WSN architectures. Key contributions include a multi-fault detection ANN model balancing accuracy and computational efficiency, an energy-optimized IoT-WSN architecture for remote deployments, and a comprehensive performance evaluation framework. While our approach offers improvements in both data validation and energy efficiency, we acknowledge limitations such as potential scalability issues and the need for further real-world testing. This research advances the field of remote environmental monitoring by providing a robust, energy-efficient solution for ensuring data reliability in challenging deployment scenarios. Future work will explore more advanced machine learning techniques and extended field testing to further validate and improve the system’s performance.
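The sampling and split figures above are easy to verify; a minimal arithmetic sketch (our variable names, not code from the paper):

```python
# Sanity-check of the sampling and split figures quoted in the abstract:
# readings every 5 minutes for roughly three months, split 70/15/15.
SAMPLES_PER_DAY = 24 * 60 // 5     # 288 readings per sensor per day
TOTAL_POINTS = 26_280              # data points per sensor, as reported

days_covered = TOTAL_POINTS / SAMPLES_PER_DAY     # 91.25 days, about 3 months
train = TOTAL_POINTS * 70 // 100                  # 18,396 training points
validation = testing = TOTAL_POINTS * 15 // 100   # 3,942 points each

print(days_covered, train, validation, testing)
```

The three partitions sum exactly back to 26,280, which is consistent with the reported split.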
The migration of tasks aided by machine learning (ML) predictions in dynamic power management (DPM) is a system-level design technique that is used to reduce energy by enhancing the overall performance of the processor. In this paper, we address the issue of high system-level energy dissipation during the execution of parallel workloads with common deadlines by introducing a machine-learning-based framework that includes task migration using energy-efficient earliest-deadline-first scheduling (EA-EDF). ML-based EA-EDF enhances the overall throughput and optimizes energy to avoid delay and performance degradation in a multiprocessor system. The proposed system model allocates processors to the ready task set in such a way that their deadlines are guaranteed. A full task-migration policy is also integrated to ensure proper task mapping, ensuring inter-process linkage among arrived tasks with the same deadlines. The execution of a task can halt on one CPU and be rescheduled on a different processor to avoid delay and meet the deadline. Our approach demonstrates the potential of machine-learning-based schedulability analysis: it enables a comparison between different ML models and shows a promising reduction in energy compared with other ML-aware task migration techniques for SoCs, such as Multi-Layer Feed-Forward Neural Networks (MLFNN) based on convolutional neural networks (CNN), Random Forest (RF), and deep learning (DL) algorithms. Simulations are conducted using the super-pipelined microarchitecture of the Advanced Micro Devices (AMD) XScale PXA270, with 32-Kbyte instruction (I-cache) and 32-Kbyte data (D-cache) caches per core, at utilization factors (u_i) of 12%, 31%, and 50%. The proposed approach consumes 5.3% less energy when almost half of the CPU is loaded and, on a lower workload, consumes 1.04% less energy. Cumulatively, the proposed design gives significant improvements by reducing energy dissipation on three clock rates: by 4.41%, by 5.4% at 624 MHz, and by 5.9% for applications operating at the 416 and 312 MHz standard operating frequencies.
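The scheduling backbone here is EDF; its classical single-processor schedulability condition can be sketched directly (an illustrative textbook bound, not the paper's EA-EDF framework):

```python
# Minimal sketch of the classical EDF schedulability condition: on a single
# processor, a task set is EDF-schedulable when total utilization <= 1.
def edf_schedulable(utilizations):
    """utilizations: per-task ratios of execution time to period/deadline."""
    return sum(utilizations) <= 1.0

# The utilization factors evaluated in the paper's simulations.
for u in (0.12, 0.31, 0.50):
    print(u, edf_schedulable([u]))   # each is far below the bound of 1.0
```

Even all three workloads combined (0.93) stay under the bound, which is why migration decisions rather than admission are the interesting part of the problem.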
Authentication is the most crucial aspect of security and a predominant measure employed in cybersecurity. Cloud computing provides a shared electronic device resource for users via the internet, and the authentication techniques used must protect data from attacks. Previous approaches failed to resolve the challenge of making passwords secure, memorable, usable, and time-saving. Graphical Passwords (GP) are still not widely utilized in practice because consumers suffer from multiple login stages. This paper proposes an Indexed Choice-Based Graphical Password (ICGP) scheme for improving the authentication part. ICGP consists of two stages: registration and authentication. At the registration stage, the user registers his/her data: a user name and a number called the Index Number (IN), and chooses an image from a grid of images. After completing the registration, ICGP gives the user a random unique number (UNo) to serve as a user ID. At the authentication stage, the user chooses a different image from the grid based on the random appearance of the registered image's dimensions on the grid plus the registered Index Number. The ICGP password is a combination of three factors: the user's name, the UNo, and any image. According to the experiments, the proposed ICGP has achieved great improvements when compared to prior methods. ICGP has increased the number of possible passwords from 9.77e+6 to 3.74e+30, the password space from 1.20e+34 to 1.37e+84, and decreased the password entropy from 7.16e−7 to 8.26e−30.
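The jump in password space comes from combining independent factors; a toy calculation with hypothetical factor sizes (ours, not the paper's figures) shows the multiplicative effect:

```python
# Illustrative combinatorics behind multi-factor graphical passwords (toy
# numbers of ours, not the paper's derivation): independent factors multiply.
grid_images = 25            # hypothetical 5x5 image grid
index_numbers = 100         # hypothetical range of the Index Number (IN)
uno_values = 10**6          # hypothetical range of the random unique number

single_factor_space = grid_images
combined_space = grid_images * index_numbers * uno_values

print(single_factor_space, combined_space)   # 25 2500000000
```

An attacker who must guess all three factors faces the product of the spaces, which is the mechanism behind the orders-of-magnitude growth reported above.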
This review article provides a comprehensive analysis of the latest advancements and persistent challenges in Software-Defined Wide Area Networks (SD-WANs), with a particular emphasis on the multi-objective Controller Placement Problem (CPP). As SD-WAN technology continues to gain prominence for its capacity to offer flexible and efficient network management, the task of optimally placing controllers, which are responsible for orchestrating and managing network traffic, remains a critical yet complex challenge. This review delves into recent innovations in multi-objective controller placement strategies, including clustering techniques, heuristic-based approaches, and the integration of machine learning and deep learning models. Each methodology is critically evaluated in terms of its ability to minimize network latency, enhance fault tolerance, and improve overall network performance. Furthermore, this paper discusses the inherent limitations and challenges associated with these techniques, providing a critical evaluation of their current utility and outlining potential avenues for future research. By offering a thorough overview of state-of-the-art approaches to multi-objective controller placement in SD-WANs, this review aims to inform ongoing advancements and highlight emerging research opportunities in this evolving field.
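To make the CPP concrete, here is a tiny greedy-placement sketch over a toy latency matrix (our illustration of the heuristic family the review covers, not any specific surveyed method):

```python
# Greedy controller placement over a toy latency matrix.
# latency[s][c]: latency from switch s to candidate controller site c.
latency = [
    [1, 4, 7],
    [4, 1, 6],
    [7, 6, 1],
    [3, 2, 5],
]

def avg_latency(chosen):
    # each switch connects to its nearest chosen controller
    return sum(min(row[c] for c in chosen) for row in latency) / len(latency)

def greedy_placement(k):
    chosen = []
    candidates = range(len(latency[0]))
    for _ in range(k):
        best = min((c for c in candidates if c not in chosen),
                   key=lambda c: avg_latency(chosen + [c]))
        chosen.append(best)
    return chosen

print(greedy_placement(2), avg_latency(greedy_placement(2)))   # [1, 2] 2.0
```

Real CPP formulations add the other objectives the review discusses (fault tolerance, load balance), which is what makes the problem multi-objective.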
Vehicle overtaking poses significant risks and leads to injuries and losses on Malaysia’s roads. In most scenarios, insufficient and untimely information available to drivers for assessing road conditions and their surrounding environment is the primary factor that causes these incidents. To address these issues, a comprehensive system is required to provide real-time assistance to drivers. Building upon our previous research on a LoRa-based lane-change decision-aid system, this study proposes an enhanced Vehicle Overtaking System (VOS). This system utilizes long-range (LoRa) communication for reliable real-time data exchange between vehicles (V2V) and with the cloud (V2C). By providing drivers with critical information, including surrounding vehicle movements, through visual and audible warnings, the VOS aims to support vehicle overtaking decisions by calculating the safe distance between vehicles as per the American Association of State Highway and Transportation Officials (AASHTO) guidelines. This study also examines the performance of LoRa communication strength and data transmission at various distances using a cloud monitoring tool or dashboard.
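As a rough illustration of the safe-distance idea, a generic kinematic sketch using the 2.5-s perception-reaction time and 3.4-m/s² deceleration commonly used in AASHTO stopping-sight-distance design (a simplification of ours, not the paper's exact VOS computation):

```python
# Generic kinematic safe-distance sketch: distance covered during driver
# reaction plus braking distance. Parameter defaults follow common AASHTO
# stopping-sight-distance design values; this is not the VOS formula itself.
def safe_distance_m(speed_kmh, reaction_s=2.5, decel_ms2=3.4):
    v = speed_kmh / 3.6                       # convert km/h to m/s
    return v * reaction_s + v * v / (2 * decel_ms2)

print(round(safe_distance_m(90), 1))   # rough safe gap at 90 km/h, in metres
```

The quadratic braking term is why safe gaps grow much faster than speed, which is the core message a driver-warning system has to convey.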
Plant diseases pose a significant challenge to global agricultural productivity, necessitating efficient and precise diagnostic systems for early intervention and mitigation. In this study, we propose a novel hybrid framework that integrates EfficientNet-B8, a Vision Transformer (ViT), and Knowledge Graph Fusion (KGF) to enhance plant disease classification across 38 distinct disease categories. The proposed framework leverages deep learning and semantic enrichment to improve classification accuracy and interpretability. EfficientNet-B8, a convolutional neural network (CNN) with optimized depth and width scaling, captures fine-grained spatial details in high-resolution plant images, aiding in the detection of subtle disease symptoms. In parallel, ViT, a transformer-based architecture, effectively models long-range dependencies and global structural patterns within the images, ensuring robust disease pattern recognition. Furthermore, KGF incorporates domain-specific metadata, such as crop type, environmental conditions, and disease relationships, to provide contextual intelligence and improve classification accuracy. The proposed model was rigorously evaluated on a large-scale dataset containing diverse plant disease images, achieving outstanding performance with 99.7% training accuracy and 99.3% testing accuracy. The precision and F1-score were consistently high across all disease classes, demonstrating the framework’s ability to minimize false positives and false negatives. Compared to conventional deep learning approaches, this hybrid method offers a more comprehensive and interpretable solution by integrating self-attention mechanisms and domain knowledge. Beyond its superior classification performance, this model opens avenues for optimizing metadata dependency and reducing computational complexity, making it more feasible for real-world deployment in resource-constrained agricultural settings. The proposed framework represents an advancement in precision agriculture, providing scalable, intelligent disease diagnosis that enhances crop protection and food security.
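The CNN-plus-ViT combination is, at its simplest, a fusion of two branches' outputs; a minimal late-fusion sketch with hypothetical probabilities (our illustration, not the paper's EfficientNet-B8 + ViT + KGF pipeline):

```python
# Late fusion of two classifier branches: average their per-class probability
# vectors, then predict the argmax of the fused distribution.
def fuse(probs_cnn, probs_vit, w=0.5):
    return [w * a + (1 - w) * b for a, b in zip(probs_cnn, probs_vit)]

cnn_out = [0.70, 0.20, 0.10]   # hypothetical per-class probabilities
vit_out = [0.40, 0.50, 0.10]

fused = fuse(cnn_out, vit_out)
prediction = max(range(len(fused)), key=fused.__getitem__)
print(prediction)   # class 0 wins after fusion
```

The paper's framework goes further by letting the knowledge graph re-weight such scores with crop and environment metadata, but the fusion skeleton is the same.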
Data security is crucial for improving the confidentiality, integrity, and authenticity of image content. Maintaining these security factors poses significant challenges, particularly in the healthcare, business, and social media sectors, where information security and personal privacy are paramount. Cryptography offers a solution to these challenges. This paper proposes an innovative hybrid image encryption algorithm capable of encrypting several types of images. The technique merges the Tiny Encryption Algorithm (TEA) and the Rivest-Shamir-Adleman (RSA) algorithm, called TEA-RSA. The performance of this algorithm is promising in terms of cost and complexity; an encryption time below 10 ms was recorded. Correlation coefficient analysis implies a notable decrease in pixel correlation after encryption, making the algorithm effective at disguising pixel relationships via obfuscation. Moreover, our technique achieved the highest Normalized Pixel Cross-Correlation (NPCC), a Number of Pixel Change Rate (NPCR) value consistently over 99%, and a Unified Average Changing Intensity (UACI) value of around 33.86, making it insensitive to statistical attacks by causing massive alteration of pixel values and intensities. These results demonstrate the algorithm's resistance to unauthorized access attempts. Notably, the integrity of images is well preserved throughout the encryption and decryption stages, as the low decryption times indicate. These results collectively indicate that the algorithm is effective in ensuring secure and efficient image encryption while maintaining the overall integrity and quality of the encrypted images. The proposed hybrid approach has been investigated against cryptanalysis such as ciphertext-only attacks, known-plaintext attacks, chosen-plaintext attacks, and chosen-ciphertext attacks. Moreover, the proposed approach achieves good resistance against cropping and differential attacks.
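NPCR and UACI are standard differential metrics with simple definitions; a sketch for two equally sized 8-bit images given as flat pixel lists (our toy data):

```python
# NPCR: percentage of pixel positions whose values differ between two images.
# UACI: mean absolute intensity change, normalized by the 8-bit maximum (255).
def npcr(img1, img2):
    return 100.0 * sum(a != b for a, b in zip(img1, img2)) / len(img1)

def uaci(img1, img2):
    return 100.0 * sum(abs(a - b) for a, b in zip(img1, img2)) / (255 * len(img1))

plain  = [0, 128, 255, 64]
cipher = [255, 0, 1, 64]
print(npcr(plain, cipher), round(uaci(plain, cipher), 2))   # 75.0 62.45
```

For a strong cipher on real images, NPCR should approach 100% and UACI should sit near 33.46%, which is why the reported values (>99% and ~33.86) indicate good diffusion.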
The rapid growth of the automotive industry has raised significant concerns about the security of connected vehicles and their integrated supply chains, which are increasingly vulnerable to advanced cyber threats. Traditional authentication methods have proven insufficient, exposing systems to risks such as Sybil, Denial of Service (DoS), and Eclipse attacks. This study critically examines the limitations of current security protocols, focusing on authentication and data exchange vulnerabilities, and explores blockchain technology as a potential solution. Blockchain’s decentralized and cryptographically secure framework can significantly enhance Vehicle-to-Vehicle (V2V) communication, ensure data integrity, and enable transparent, immutable transactions within the supply chain. Additionally, blockchain strengthens authentication, secures digital identities, and improves data sharing, reducing the risk of unauthorized access and data breaches. Our contribution lies in the proposal to integrate Artificial Intelligence (AI) with blockchain technology to further improve security by refining cryptographic methods, automating key management, and bolstering anomaly detection. Despite challenges related to computational complexity, latency, scalability, and regulatory concerns, the combination of blockchain and AI offers transformative potential to enhance the security, transparency, and efficiency of connected vehicle systems and their supply chains.
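The immutability property the argument relies on comes from hash chaining; a minimal sketch (our illustration, not a full blockchain):

```python
# Each record commits to the previous record's hash, so tampering with any
# record changes every later hash in the chain.
import hashlib

def chain(records):
    prev, hashes = "0" * 64, []
    for r in records:
        prev = hashlib.sha256((prev + r).encode()).hexdigest()
        hashes.append(prev)
    return hashes

a = chain(["v2v:msg1", "supply:part42", "v2c:telemetry"])
b = chain(["v2v:msg1", "supply:TAMPERED", "v2c:telemetry"])
print(a[0] == b[0], a[1] == b[1], a[2] == b[2])   # True False False
```

In a real deployment the chain is replicated across nodes with a consensus protocol, which is what makes the detected tampering also unrepairable by a single party.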
Attribute-based Encryption (ABE) enhances the confidentiality of Electronic Health Records (EHR), also known as Personal Health Records (PHR), by binding access rights not to individual identities but to user attribute sets such as roles, specialties, or certifications. This data-centric cryptographic paradigm enables highly fine-grained, policy-driven access control, minimizing the need for identity management and supporting scalable multi-user scenarios. This paper presents a comprehensive and critical survey of ABE schemes developed specifically for EHR/PHR systems over the past decade. It explores the evolution of these schemes, analyzing their design principles, strengths, limitations, and the level of granularity they offer in access control. The review also evaluates the security guarantees, efficiency, and practical applicability of these schemes in real-world healthcare environments. Furthermore, the paper outlines the current state of ABE as a mechanism for safeguarding EHR data and managing user access, while also identifying the key challenges that remain. Open issues such as scalability, revocation mechanisms, policy updates, and interoperability are discussed in detail, providing valuable insights for researchers and practitioners aiming to advance the secure management of health information systems.
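At the policy level, the core idea of ABE can be sketched as an attribute-set check against a policy tree (our illustration; real ABE enforces this cryptographically at decryption time rather than with a runtime check):

```python
# Access is granted when a user's attribute set satisfies the ciphertext's
# policy, expressed here as a nested ("and"|"or", children...) tree.
def satisfies(attrs, policy):
    op, *children = policy
    results = (c in attrs if isinstance(c, str) else satisfies(attrs, c)
               for c in children)
    return all(results) if op == "and" else any(results)

policy = ("and", "doctor", ("or", "cardiology", "emergency"))
print(satisfies({"doctor", "cardiology"}, policy),   # True
      satisfies({"nurse", "cardiology"}, policy))    # False
```

This is the granularity the survey refers to: the record owner encodes the tree once, and no identity list ever needs to be maintained.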
Precisely forecasting the performance of Deep Learning (DL) models, particularly in critical areas such as Uniform Resource Locator (URL)-based threat detection, aids in improving systems developed for difficult tasks. In cybersecurity, recognizing harmful URLs is vital to lowering risks associated with phishing, malware, and other online attacks. Since it directly affects a model's capacity to differentiate between benign and harmful URLs, finding the optimal mix of hyperparameters in DL models is a significant difficulty. Two commonly used architectures for sequential and spatial data processing, Long Short-Term Memory (LSTM)/Gated Recurrent Unit (GRU) and Convolutional Neural Network (CNN)/Long Short-Term Memory (LSTM) models, are targeted in this study for higher predictive capacity by modifying crucial hyperparameters such as learning rate, batch size, and dropout rate using cloud capability. The research finds the best settings for the models by testing 50 dropout rates (between 0.1 and 0.5) with different learning rates and batch sizes. Performance was measured in terms of accuracy, precision, recall, F1-score, and errors such as Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE). In our results, CNN/LSTM often performed better than LSTM/GRU, with up to a 10% better F1-score and a much lower MAPE when the learning rate was 0.001 and the dropout rate was 0.2. These results show the value of fine-tuning hyperparameters to increase model performance and reduce errors. Scoring higher on many of the parameters, the CNN/LSTM architecture emerged as the more trustworthy one. The study also discusses the importance of DL in enhancing URL attack detection mechanisms to provide increased accuracy and precision for real-world cybersecurity.
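The error metrics listed above have compact definitions; a plain-Python sketch on a toy prediction vector (ours):

```python
# MAE, MSE, RMSE and MAPE for paired true/predicted values.
import math

def mae(y, p):  return sum(abs(a - b) for a, b in zip(y, p)) / len(y)
def mse(y, p):  return sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)
def rmse(y, p): return math.sqrt(mse(y, p))
def mape(y, p): return 100 * sum(abs((a - b) / a) for a, b in zip(y, p)) / len(y)

y_true = [1.0, 2.0, 4.0]
y_pred = [1.5, 2.0, 3.0]
print(mae(y_true, y_pred), round(rmse(y_true, y_pred), 3),
      round(mape(y_true, y_pred), 2))
```

MAPE scales each error by the true value, which is why it is the natural companion metric when comparing models across differently scaled targets.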
In recent years, automation has become a key focus in software development as organizations seek to improve efficiency and reduce time-to-market. The integration of artificial intelligence (AI) tools, particularly those using natural language processing (NLP) like ChatGPT, has opened new possibilities for automating various stages of the development lifecycle. The primary objective of this study is to evaluate the effectiveness of ChatGPT in automating various phases of software development. An AI tool was developed using the OpenAI Application Programming Interface (API), incorporating two key functionalities: 1) generating user stories based on case or process inputs, and 2) estimating the effort required to execute each user story. Additionally, ChatGPT was employed to generate application code. The AI tool was tested in three case studies, each explored under two different development strategies: a semi-automated process utilizing the AI tools and a traditional manual approach. The results demonstrated a significant reduction in total development time, ranging from 40% to 51%. However, it was observed that the generated content could be inaccurate and incomplete, necessitating review and debugging before being applied to projects. In conclusion, given the increasing shift towards automation in software engineering, further research is critical to enhance the efficiency and reliability of AI tools, particularly those that leverage NLP technologies.
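A minimal sketch of how such a tool might assemble its two prompts (hypothetical templates of ours; the study's actual prompts and API calls are not given in the abstract):

```python
# Hypothetical prompt builders for the tool's two functionalities: user-story
# generation and effort estimation. The strings would be sent to the model API.
def user_story_prompt(process_description):
    return ("Given the following business process, write user stories in the "
            "form 'As a <role>, I want <goal> so that <benefit>':\n"
            + process_description)

def effort_prompt(user_story):
    return "Estimate the effort in story points for this user story:\n" + user_story

p = user_story_prompt("Customer checks out a shopping cart")
print(p.splitlines()[-1])   # Customer checks out a shopping cart
```

Keeping the templates separate from the API plumbing is what makes the generated stories and estimates auditable, which matters given the review-and-debug caveat above.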
Employee turnover presents considerable challenges for organizations, leading to increased recruitment costs and disruptions in ongoing operations. High voluntary attrition rates can result in substantial financial losses, making it essential for Human Resource (HR) departments to prioritize turnover reduction. In this context, Artificial Intelligence (AI) has emerged as a vital tool in strengthening business strategies and people management. This paper incorporates two new representative features, introducing three types of feature engineering to enhance the analysis of employee turnover in the IBM HR Analytics dataset. Key Machine Learning (ML) techniques were subsequently employed in this work, such as Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Extreme Gradient Boosting (XGBoost), and especially Categorical Boosting (CatBoost), a gradient boosting algorithm optimized for categorical data, to analyze employee turnover. Adopting this unique feature engineering process enables CatBoost to enhance model accuracy and robustness while effectively analyzing complex patterns within employee data. Experimental results demonstrate the effectiveness of our proposed methodology, achieving the highest accuracy of 90.14% and an F1-score of 0.88 on the IBM dataset. To assess the capability of our detection system, we also used an extended dataset, achieving an optimal accuracy of 98.10% and an F1-score of 0.98. These results strongly indicate the efficiency of our proposed methodology and highlight the impact of feature engineering on predictive performance. Moreover, by pinpointing the top ten factors influencing attrition, including “Monthly Income”, “Over Time”, “Total Satisfaction”, and others, this research equips HR departments with insights to implement targeted retention strategies, such as enhancing compensation or job satisfaction, to retain key talent before they consider leaving.
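The abstract names a composite “Total Satisfaction” feature without giving its formula; one plausible construction averages the satisfaction-type columns that exist in the IBM HR Analytics dataset (the averaging itself is our guess, not the paper's definition):

```python
# Hypothetical composite feature: average of the dataset's satisfaction-style
# ratings (these four column names do appear in the IBM HR Analytics dataset).
def total_satisfaction(row):
    keys = ("JobSatisfaction", "EnvironmentSatisfaction",
            "RelationshipSatisfaction", "WorkLifeBalance")
    return sum(row[k] for k in keys) / len(keys)

employee = {"JobSatisfaction": 3, "EnvironmentSatisfaction": 4,
            "RelationshipSatisfaction": 2, "WorkLifeBalance": 3}
print(total_satisfaction(employee))   # 3.0
```

Collapsing several correlated ratings into one representative feature is the general pattern behind "representative features": it reduces dimensionality while keeping the signal a tree ensemble can split on.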
The 6G network architecture introduces the paradigm of Trust + Security, representing a shift in network protection strategies from external defense mechanisms to endogenous security enforcement. While zero-trust networks (ZTNs) have demonstrated significant advancements in constructing trust-centric frameworks, most existing ZTN implementations lack comprehensive integration of security deployment and traffic monitoring capabilities. Furthermore, current ZTN designs generally do not facilitate dynamic assessment of user reputation. To address these limitations, this study proposes a Data-plane-based Zero Trust Network (DPZTN). The DPZTN framework extends traditional ZTN models by incorporating security mechanisms directly into the data plane. Additionally, blockchain infrastructure is used to enable decentralized identity authentication and distributed access control. A pivotal element within the proposed framework is the Zero-Trust Network Element (ZTNE), which executes access control policies and performs real-time user traffic inspection. To enable dynamic and fine-grained evaluation of user trustworthiness, this study introduces a Bayesian-based Behavior Evaluation Algorithm (BBEA). BBEA provides a framework for continuous user behavior analysis, supporting adaptive privilege management and behavior-informed access control. Experimental results demonstrate that the ZTNE, combined with BBEA, can effectively respond to both individual and mixed attack types by promptly adjusting user behavior scores and dynamically modifying access privileges based on initial privilege levels. Under conditions supporting up to 10,000 concurrent users, the control system maintains approximately 65% CPU usage and less than 60% memory usage, with average user authentication latency around 1 s and access control latency close to 1 s.
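A Bayesian behavior score can be sketched as the mean of a Beta distribution updated with observed good and bad actions (our illustration of the general idea, not the paper's BBEA):

```python
# Beta-distribution reputation: start from an uninformative prior and update
# the trust score with counts of benign and malicious actions.
def beta_trust(good, bad, prior_a=1, prior_b=1):
    return (prior_a + good) / (prior_a + good + prior_b + bad)

print(beta_trust(0, 0),      # 0.5: no evidence yet
      beta_trust(8, 0),      # rises with sustained benign behavior
      beta_trust(8, 12))     # drops after a burst of malicious actions
```

The appeal for zero-trust access control is that the score moves continuously with evidence, so privileges can be adjusted promptly instead of waiting for a binary verdict.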
Accurate non-line-of-sight (NLOS) identification in ultra-wideband (UWB) location-based services is critical for applications like drone communication and autonomous navigation. However, current methods using binary classification (LOS/NLOS) oversimplify real-world complexities, with limited generalisation and adaptability to varying indoor environments, thereby reducing positioning accuracy. This study proposes an extreme gradient boosting (XGBoost) model to identify multi-class NLOS conditions. We optimise the model using grid search and genetic algorithms. Initially, the grid-search approach is used to identify the most favourable values for the integer hyperparameters. To achieve an optimised model configuration, the genetic algorithm is then employed to fine-tune the floating-point hyperparameters. The model evaluations utilise a wide-ranging dataset of real-world measurements obtained with a Qorvo DW1000 UWB device, covering various indoor scenarios. Experimental results show that the proposed XGBoost model achieved the highest overall accuracy of 99.47%, precision of 99%, recall of 99%, and an F-score of 99% on an open-source dataset. Additionally, on a local dataset, the model achieved the highest performance, with an accuracy of 96%, precision of 96%, recall of 97%, and an F-score of 97%. In contrast to current machine learning methods in the literature, the proposed model enhances classification accuracy and effectively addresses NLOS/LOS identification as a multi-class propagation-channel problem. This approach provides a robust solution with generalisation and adaptability across various dataset types and environments for more reliable and accurate indoor positioning technologies.
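The two-stage tuning strategy, grid search over the integer hyperparameters followed by an evolutionary refinement of the floats, can be sketched with a toy objective (ours; the paper tunes a real XGBoost model and uses a genetic algorithm rather than the simple sweep below):

```python
# Toy stand-in for cross-validated accuracy, peaked at (6, 200, 0.1); a real
# pipeline would evaluate an XGBoost model here instead.
def score(max_depth, n_estimators, learning_rate):
    return -((max_depth - 6) ** 2 + ((n_estimators - 200) / 50) ** 2
             + ((learning_rate - 0.1) * 10) ** 2)

# Stage 1: grid search over integer hyperparameters at a fixed learning rate.
grid = [(d, n) for d in (3, 6, 9) for n in (100, 200, 300)]
best_d, best_n = max(grid, key=lambda g: score(*g, 0.3))

# Stage 2: refine the float hyperparameter around the stage-1 optimum
# (a plain sweep standing in for the paper's genetic algorithm).
best_lr = max((0.01 * i for i in range(1, 31)),
              key=lambda lr: score(best_d, best_n, lr))
print(best_d, best_n, round(best_lr, 2))   # 6 200 0.1
```

Splitting the search this way keeps the combinatorial grid small while letting the continuous stage exploit the smoother float landscape.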
Diagnosing dental disorders from routine photographs can significantly reduce chair-side workload and expand access to care. However, most AI-based image analysis systems suffer from limited interpretability and are trained on class-imbalanced datasets. In this study, we developed a balanced, transformer-based pipeline to detect three common dental disorders (tooth discoloration, calculus, and hypodontia) from standard color images. After applying a color-standardized preprocessing pipeline and performing stratified data splitting, the proposed vision transformer model was fine-tuned and subsequently evaluated using standard classification benchmarks. The model achieved an impressive accuracy of 98.94%, with precision, recall, and F1 scores all greater than or equal to 98% for the three classes. To ensure interpretability, three complementary saliency methods (attention roll-out, layer-wise relevance propagation, and LIME) verified that predictions rely on clinically meaningful cues such as stained enamel, supragingival deposits, and edentulous gaps. The proposed method addresses class imbalance through dataset balancing, enhances interpretability using multiple explanation methods, and demonstrates the effectiveness of transformers over CNNs in dental imaging. This method offers a transparent, real-time screening tool suitable for both clinical and tele-dentistry frameworks, providing accessible, clarity-guided care pathways.
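The stratified-splitting step can be sketched in a few lines (our illustration with hypothetical class counts):

```python
# Stratified split: take the same fraction of each class for the test set,
# so class proportions are preserved in both partitions.
from collections import defaultdict

def stratified_split(labels, test_frac=0.2):
    by_class, test = defaultdict(list), []
    for i, y in enumerate(labels):
        by_class[y].append(i)
    for idxs in by_class.values():
        test.extend(idxs[: int(len(idxs) * test_frac)])
    test_set = set(test)
    train = [i for i in range(len(labels)) if i not in test_set]
    return train, test

labels = ["discoloration"] * 10 + ["calculus"] * 5 + ["hypodontia"] * 5
train, test = stratified_split(labels)
print(len(train), len(test))   # 16 4
```

In practice the per-class indices would be shuffled first; the point is that rare classes like hypodontia are guaranteed representation in both splits, which a purely random split cannot promise.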
Synthetic Aperture Radar (SAR) has become one of the most effective tools in ship detection. However, due to significant background interference, small targets, and challenges related to target scattering intensity in SAR images, current ship target detection faces serious issues of missed detections and false positives, and the network structures are overly complex. To address these issues, this paper proposes a lightweight model based on YOLOv8, named OD-YOLOv8. Firstly, we adopt a simplified neural network architecture, VanillaNet, to replace the backbone network, significantly reducing the number of parameters and computational complexity while ensuring accuracy. Secondly, we introduce a dynamic, multi-dimensional attention mechanism by designing the ODC2f module with ODConv to replace the original C2f module, and we use GSConv to replace two down-sampling convolutions to reduce the number of parameters. Then, to alleviate missed detections and false positives for small targets, we discard one of the original large-target detection layers and add a detection layer specifically for small targets. Finally, based on a dynamic non-monotonic focusing mechanism, we employ the Wise-IoU (Intersection over Union) loss function to significantly improve detection accuracy. Experimental results on the HRSID dataset show that, compared to the original YOLOv8, OD-YOLOv8 improves mAP@0.5 and mAP@0.5–0.95 by 2.7% and 3.5%, respectively, while reducing the number of parameters and GFLOPs by 72.9% and 4.9%, respectively. Moreover, the model also performs exceptionally well on the SSDD dataset, with AP and AP50 increasing by 1.7% and 0.4%, respectively. OD-YOLOv8 achieves an excellent balance between lightweight design and accuracy, making it highly valuable for end-to-end industrial deployment.
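Wise-IoU builds a dynamic focusing weight on top of the plain IoU between predicted and ground-truth boxes; the base quantity is simple to compute (our sketch covers only the IoU itself, not the Wise-IoU weighting):

```python
# Plain IoU for axis-aligned boxes given as (x1, y1, x2, y2).
def iou(a, b):
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))   # intersection width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))   # intersection height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # 1/7 overlap
```

Loss variants such as Wise-IoU reweight (1 − IoU) per box so that low-quality anchors do not dominate the gradient, which is especially helpful for the small ship targets described above.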
Globally, liver cancer ranks as the sixth most frequent malignancy. The importance of early detection is undeniable, as liver cancer is the fifth most common cancer in men and the ninth most common in women. Recent advances in imaging, biomarker discovery, and genetic profiling have greatly enhanced the ability to diagnose liver cancer. Early identification is vital since liver cancer is often asymptomatic, making diagnosis difficult. Imaging techniques such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and ultrasonography can be used to identify liver cancer once a sample of liver tissue is taken. In recent research, reliable detection of liver cancer with minimal computational complexity and time has remained a serious difficulty. This paper employs the DenseNet model to enhance the detection of liver nodules with tumors by segmenting them using UNet and VGG with Fastai (UVF) in CT images. DenseNet is distinguished by its dense interconnections between layers. These dense connections facilitate the propagation of gradients and the flow of information throughout the network, thereby enhancing the efficacy and performance of training. DenseNet's architecture combines dense blocks, bottleneck layers, and transition layers, allowing it to strike a balance between expressiveness and computational efficiency. Finally, the 3D liver nodule models were created using a ray-casting volume rendering approach. Compared with other state-of-the-art deep neural networks, the proposed method is well suited for clinical applications that assist doctors in diagnosing liver cancer. The approach was tested on the 3Dircadb dataset. According to experiments, UVF segmentation on the 3Dircadb dataset is 97.9% accurate. The study finds that DenseNet and UVF segment liver cancer better than prior methods. The system provides automated 3D visualization of liver cancer tumors.
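The dense-connectivity idea described above can be sketched in a few lines: each layer receives the channel-wise concatenation of every earlier feature map, so channels grow by a fixed "growth rate" per layer. This is a minimal NumPy stand-in for illustration only, not the paper's trained network; layer count, growth rate, and the 1x1-projection stand-in are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, num_layers=3, growth_rate=4):
    """Toy dense block: each layer sees the concatenation of all
    earlier feature maps along the channel axis (DenseNet-style)."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)        # dense connection
        w = rng.standard_normal((inp.shape[-1], growth_rate)) * 0.1
        out = np.maximum(inp @ w, 0.0)                 # 1x1-conv + ReLU stand-in
        features.append(out)
    return np.concatenate(features, axis=-1)

x = rng.standard_normal((8, 8, 16))                    # H x W x C feature map
y = dense_block(x)
print(y.shape)  # channels grow by growth_rate per layer: 16 + 3*4 = 28
```

Because later layers reuse earlier features directly, gradients have short paths to every layer, which is the training benefit the abstract refers to.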
Many existing watermarking approaches aim to provide a Robust Reversible Data Hiding (RRDH) method. However, most of these approaches degrade under geometric and non-geometric attacks. This paper presents a novel RRDH approach using Polar Harmonic Fourier Moments (PHFMs) and linear interpolation. The primary objective is to enhance the robustness of the embedded watermark and improve the imperceptibility of the watermarked image. The proposed method leverages the high-fidelity and anti-geometric-transformation properties of PHFMs. The image is transformed into the frequency domain, after which compensation data is embedded using a two-dimensional RDH scheme. Linear interpolation modification is applied to reduce the modifications caused by the embedded data, minimize complexity, and preserve imperceptibility. As a result, both the robustness and reliability of the embedded data are effectively recovered. Experimental results demonstrate that our approach achieves superior visual quality and strong resistance to geometric transformation attacks. Extensive calculations show that the proposed RRDH method outperforms existing methods. The imperceptibility metrics achieved include a Peak Signal-to-Noise Ratio (PSNR) of 52 dB and a Structural Similarity Index Measure (SSIM) of 0.9990, reflecting high fidelity and minimal degradation in the watermarked image. Additionally, robustness measurements indicate a PSNR of 43 dB, along with reduced computational complexity.
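The PSNR figures quoted above are computed from the mean squared error between the cover and watermarked images. A minimal sketch of that metric (standard definition, not code from the paper; the toy images are assumptions):

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit images."""
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    if mse == 0:
        return float("inf")       # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.integers(-2, 3, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 1))  # small perturbation -> PSNR well above 40 dB
```

A watermarked image at 52 dB PSNR, as reported, is visually indistinguishable from the cover image for most content.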
A Distributed Denial-of-Service (DDoS) attack poses a significant challenge in the digital age, disrupting online services with operational and financial consequences. Detecting such attacks requires innovative and effective solutions. The primary challenge lies in selecting the best among several DDoS detection models. This study presents a framework that combines several DDoS detection models with Multiple-Criteria Decision-Making (MCDM) techniques to compare and select the most effective ones. The framework integrates a decision matrix, obtained by training several models on the CiC-DDoS2019 dataset, with the Fuzzy Weighted Zero Inconsistency Criterion (FWZIC) and Multi-Attribute Boundary Approximation Area Comparison (MABAC) methodologies. FWZIC assigns weights to the evaluation criteria, while MABAC compares detection models based on the weighted criteria. The results indicate that the FWZIC approach assigns weights to criteria reliably, with time complexity receiving the highest weight (0.2585) and F1 score the lowest (0.14644). Among the models evaluated using the MABAC approach, the Support Vector Machine (SVM) ranked first with a score of 0.0444, making it the most suitable for this work. In contrast, Naive Bayes (NB) ranked lowest with a score of 0.0018. Objective validation and sensitivity analysis confirmed the reliability of the framework. This study provides a practical approach and insights for cybersecurity practitioners and researchers evaluating DDoS detection models.
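The MABAC step can be illustrated with the textbook procedure: min-max normalise each criterion, weight it, compute the geometric-mean "border approximation area", and score each alternative by its total distance from that border. The decision matrix, weights, and criteria below are invented for illustration, not the paper's data:

```python
import numpy as np

def mabac_scores(X, weights, benefit):
    """Minimal MABAC: rank alternatives (rows) on criteria (columns).
    benefit[j] is True for criteria where larger is better."""
    X = X.astype(float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    # Min-max normalisation; cost criteria are flipped
    N = np.where(benefit, (X - lo) / (hi - lo), (X - hi) / (lo - hi))
    V = weights * (N + 1.0)                  # weighted normalised matrix
    G = V.prod(axis=0) ** (1.0 / len(X))     # border approximation area
    return (V - G).sum(axis=1)               # total distance from the border

# Hypothetical matrix: 3 detectors x 2 criteria (F1: higher better, time: lower better)
X = np.array([[0.95, 12.0],
              [0.90,  5.0],
              [0.80, 30.0]])
w = np.array([0.4, 0.6])
scores = mabac_scores(X, w, benefit=np.array([True, False]))
print(scores.argmax())  # index of the top-ranked model
```

Alternatives with positive scores lie above the border (preferred); negative scores lie below it, matching the near-zero score the abstract reports for the lowest-ranked model.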
In the broader field of mechanical technology, and particularly in the context of self-driving vehicles, cameras and Light Detection and Ranging (LiDAR) sensors provide complementary modalities that hold significant potential for sensor fusion. However, directly merging multi-sensor data through point projection often results in information loss due to quantization, and managing the differing data formats from multiple sensors remains a persistent challenge. To address these issues, we propose a new fusion method that leverages continuous convolution, point-pooling, and a learned Multilayer Perceptron (MLP) to achieve superior detection performance. Our approach integrates the segmentation mask with raw LiDAR points rather than relying on projected points, effectively avoiding quantization loss. Additionally, when retrieving corresponding semantic information from images through point cloud projection, we employ linear interpolation and upsample the image feature maps to mitigate quantization loss. We employ nearest-neighbor search and continuous convolution to seamlessly fuse data from different formats. Moreover, we integrate pooling and aggregation operations, which serve as conceptual extensions of convolution and are specifically designed to reconcile the inherent disparities among these data representations. Our detection network operates in two stages: in the first stage, preliminary proposals and segmentation features are generated; in the second stage, we refine the fusion results together with the segmentation mask to yield the final prediction. Notably, in our approach the image network is used solely to provide semantic information, serving to enhance the point cloud features. Extensive experiments on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset demonstrate the effectiveness of our approach, which achieves both high precision and robust performance in 3D object detection tasks.
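The interpolation step described above, fetching image semantics for projected LiDAR points at continuous (non-integer) pixel coordinates, amounts to bilinear sampling of a feature map. A minimal single-channel sketch (the toy feature map and coordinates are assumptions, not the paper's network):

```python
import numpy as np

def bilinear_sample(feat, xs, ys):
    """Sample a 2D feature map at continuous (x, y) pixel positions,
    blending the four surrounding pixels by their fractional weights."""
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = xs - x0, ys - y0
    return (feat[y0, x0] * (1 - wx) * (1 - wy)
            + feat[y0, x1] * wx * (1 - wy)
            + feat[y1, x0] * (1 - wx) * wy
            + feat[y1, x1] * wx * wy)

feat = np.arange(16, dtype=float).reshape(4, 4)   # toy single-channel map
print(bilinear_sample(feat, np.array([1.5]), np.array([2.0])))  # -> [9.5]
```

Because the projected point's sub-pixel position is preserved instead of being rounded to the nearest cell, the quantization loss the abstract mentions is avoided.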
Funding: Supported by King Saud University through Researchers Supporting Project number (RSPD2024R1006), King Saud University, Riyadh, Saudi Arabia.
Abstract: This study presents an energy-efficient Internet of Things (IoT)-based wireless sensor network (WSN) framework for autonomous data validation in remote environmental monitoring. We address two critical challenges in WSNs: ensuring data reliability and optimizing energy consumption. Our novel approach integrates an artificial neural network (ANN)-based multi-fault detection algorithm with an energy-efficient IoT-WSN architecture. The proposed ANN model is designed to simultaneously detect multiple fault types, including spike faults, stuck-at faults, outliers, and out-of-range faults. We collected sensor data at 5-minute intervals over three months, using temperature and humidity sensors. The ANN was trained on 70% of the 26,280 data points per sensor, with 15% each for validation and testing. Our framework demonstrated a 97.1% improvement in fault detection accuracy (measured by F1 score) compared to existing methods, including rule-based, moving average, and statistical outlier detection approaches. The energy efficiency of the system was evaluated through 24-hour power consumption tests, showing significant savings over traditional WSN architectures. Key contributions include a multi-fault detection ANN model balancing accuracy and computational efficiency, an energy-optimized IoT-WSN architecture for remote deployments, and a comprehensive performance evaluation framework. While our approach offers improvements in both data validation and energy efficiency, we acknowledge limitations such as potential scalability issues and the need for further real-world testing. This research advances the field of remote environmental monitoring by providing a robust, energy-efficient solution for ensuring data reliability in challenging deployment scenarios. Future work will explore more advanced machine learning techniques and extended field testing to further validate and improve the system's performance.
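The fault taxonomy above (out-of-range, spike, stuck-at) can be made concrete with a rule-based baseline of the kind the ANN is compared against. This is an illustrative sketch with assumed thresholds, not the paper's trained model or its actual rule set:

```python
def detect_faults(readings, lo=-40.0, hi=85.0, spike=10.0, stuck_run=5):
    """Flag out-of-range, spike, and stuck-at faults in a reading stream.
    Thresholds (sensor range, spike jump, stuck-run length) are illustrative."""
    flags, run = [], 1
    for i, v in enumerate(readings):
        fault = None
        if not (lo <= v <= hi):
            fault = "out_of_range"                       # outside sensor range
        elif i > 0 and abs(v - readings[i - 1]) > spike:
            fault = "spike"                              # sudden jump
        if i > 0 and v == readings[i - 1]:
            run += 1
            if run >= stuck_run:
                fault = "stuck_at"                       # value frozen too long
        else:
            run = 1
        flags.append(fault)
    return flags

data = [21.0, 21.2, 95.0, 21.3, 21.3, 21.3, 21.3, 21.3]
print(detect_faults(data))
```

Such hand-tuned rules need one threshold per fault type per sensor, which is exactly the brittleness a single multi-fault ANN is meant to avoid.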
Abstract: Task migration aided by machine learning (ML) predictions in dynamic power management (DPM) is a system-level design technique used to reduce energy by enhancing the overall performance of the processor. In this paper, we address the issue of high system-level energy dissipation during the execution of parallel workloads with common deadlines by introducing a machine learning-based framework that includes task migration using energy-efficient earliest deadline first scheduling (EA-EDF). ML-based EA-EDF enhances overall throughput and optimizes energy to avoid delay and performance degradation in a multiprocessor system. The proposed system model allocates processors to the ready task set in such a way that their deadlines are guaranteed. A full task migration policy is also integrated to ensure proper task mapping and inter-process linkage among arrived tasks with the same deadlines. The execution of a task can halt on one CPU and be rescheduled on a different processor to avoid delay and meet the deadline. Our approach shows promising potential for machine-learning-based schedulability analysis, enables comparison between different ML models, and shows a promising reduction in energy compared with other ML-aware task migration techniques for SoCs, such as Multi-Layer Feed-Forward Neural Networks (MLFNN), convolutional neural networks (CNN), Random Forest (RF), and deep learning (DL) algorithms. Simulations are conducted on the super-pipelined microarchitecture of the AMD XScale PXA270, with per-core 32-KB instruction and 32-KB data caches, at utilization factors (u_i) of 12%, 31%, and 50%. The proposed approach consumes 5.3% less energy when almost half of the CPU is utilized and 1.04% less energy under a lower workload. Cumulatively, the proposed design yields significant improvements by reducing energy dissipation across three clock rates: by 4.41% overall, by 5.4% at 624 MHz, and by 5.9% for applications operating at the standard 416 and 312 MHz operating frequencies.
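The utilization factors quoted above plug into the classic EDF schedulability condition: a periodic task set is schedulable on one processor under EDF iff total utilization sum(C_i/T_i) does not exceed 1. A one-line sketch (the workloads are illustrative, not the paper's task sets):

```python
def edf_schedulable(tasks):
    """Classic EDF test for periodic tasks on one processor:
    total utilisation sum(C_i / T_i) must not exceed 1."""
    return sum(c / t for c, t in tasks) <= 1.0

# (execution time, period) pairs -- illustrative workloads
light = [(1, 10), (2, 20), (3, 30)]   # U = 0.1 + 0.1 + 0.1 = 0.3
heavy = [(6, 10), (5, 10)]            # U = 1.1, infeasible on one CPU
print(edf_schedulable(light), edf_schedulable(heavy))  # True False
```

An energy-aware EDF variant such as EA-EDF keeps this feasibility guarantee while choosing, among feasible mappings, the one predicted to dissipate the least energy.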
Funding: Supporting Project number (RSP2024R444), King Saud University, Riyadh, Saudi Arabia.
Abstract: Authentication is the most crucial aspect of security and a predominant measure employed in cybersecurity. Cloud computing provides shared electronic device resources for users via the internet, and the authentication techniques used must protect data from attacks. Previous approaches failed to resolve the challenge of making passwords secure, memorable, usable, and time-saving. Graphical Passwords (GP) are still not widely utilized in practice because consumers suffer through multiple login stages. This paper proposes an Indexed Choice-Based Graphical Password (ICGP) scheme for improving the authentication stage. ICGP consists of two stages: registration and authentication. At the registration stage, the user registers his/her data: a user name and a number called the Index Number (IN), and chooses an image from a grid of images. After completing the registration, ICGP gives the user a random unique number (UNo) to serve as a user ID. At the authentication stage, the user chooses a different image from the grid based on the random appearance of the registered image's dimensions on the grid plus the registered Index Number. An ICGP password is a combination of three factors: the user's name, the UNo, and any image. According to the experiments, the proposed ICGP achieves great improvements compared to prior methods. ICGP increases the number of possible passwords from 9.77e+6 to 3.74e+30, increases the password space from 1.20e+34 to 1.37e+84, and decreases the password entropy from 7.16e−7 to 8.26e−30.
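Because an ICGP password combines independent factors, the count of possible passwords is the product of each factor's possibilities. The sketch below shows that multiplicative structure with wholly assumed parameters (name length, alphabet size, grid size, ID digits); the abstract does not specify these values:

```python
import math

def password_space(name_chars, alphabet, grid_images, uno_digits):
    """Illustrative count of composite passwords built from three
    independent factors: user name x random user ID x chosen grid image.
    All parameters are assumptions, not the paper's configuration."""
    return (alphabet ** name_chars) * (10 ** uno_digits) * grid_images

space = password_space(name_chars=8, alphabet=62, grid_images=25, uno_digits=6)
print(f"{space:.2e} possible passwords")
print(round(math.log2(space), 1), "bits")
```

Each extra independent factor multiplies the search space, which is how a scheme can jump many orders of magnitude, as in the reported move from 9.77e+6 to 3.74e+30.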
Abstract: This review article provides a comprehensive analysis of the latest advancements and persistent challenges in Software-Defined Wide Area Networks (SD-WANs), with a particular emphasis on the multi-objective Controller Placement Problem (CPP). As SD-WAN technology continues to gain prominence for its capacity to offer flexible and efficient network management, the task of optimally placing controllers, which are responsible for orchestrating and managing network traffic, remains a critical yet complex challenge. This review delves into recent innovations in multi-objective controller placement strategies, including clustering techniques, heuristic-based approaches, and the integration of machine learning and deep learning models. Each methodology is critically evaluated in terms of its ability to minimize network latency, enhance fault tolerance, and improve overall network performance. Furthermore, this paper discusses the inherent limitations and challenges associated with these techniques, providing a critical evaluation of their current utility and outlining potential avenues for future research. By offering a thorough overview of state-of-the-art approaches to multi-objective controller placement in SD-WANs, this review aims to inform ongoing advancements and highlight emerging research opportunities in this evolving field.
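For small topologies, the latency-minimising flavour of the CPP can be solved exactly by scoring every k-subset of nodes as a candidate controller set. This brute-force sketch is for intuition only (the latency matrix is invented); the clustering and heuristic methods the review surveys exist precisely because this search explodes combinatorially on real networks:

```python
from itertools import combinations

def avg_latency(dist, controllers):
    """Mean node-to-nearest-controller latency for one placement."""
    return sum(min(row[c] for c in controllers) for row in dist) / len(dist)

def best_placement(dist, k):
    """Exhaustive search over k-controller placements (fine for tiny nets)."""
    nodes = range(len(dist))
    return min(combinations(nodes, k), key=lambda cs: avg_latency(dist, cs))

# Toy symmetric 4-node latency matrix (ms): two well-separated clusters
dist = [[0, 2, 9, 9],
        [2, 0, 8, 9],
        [9, 8, 0, 3],
        [9, 9, 3, 0]]
print(best_placement(dist, 2))  # one controller per cluster
```

Multi-objective variants replace `avg_latency` with a vector of objectives (latency, fault tolerance, load balance) and a Pareto or weighted-sum comparison.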
Abstract: Vehicle overtaking poses significant risks and leads to injuries and losses on Malaysia's roads. In most scenarios, insufficient and untimely information available to drivers for assessing road conditions and their surrounding environment is the primary factor behind these incidents. To address these issues, a comprehensive system is required to provide real-time assistance to drivers. Building upon our previous research on a LoRa-based lane-change decision-aid system, this study proposes an enhanced Vehicle Overtaking System (VOS). This system utilizes long-range (LoRa) communication for reliable real-time data exchange between vehicles (V2V) and the cloud (V2C). By providing drivers with critical information, including surrounding vehicle movements, through visual and audible warnings, the VOS supports vehicle overtaking decisions by calculating the safe distance between vehicles following the American Association of State Highway and Transportation Officials (AASHTO) guidelines. This study also examines LoRa communication strength and data transmission performance at various distances using a cloud monitoring tool, or dashboard.
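A representative AASHTO calculation of the kind a VOS could use is the stopping sight distance: brake-reaction distance plus braking distance. The sketch below uses the AASHTO metric formulation with its default 2.5 s reaction time and 3.4 m/s^2 deceleration; whether the paper's VOS uses exactly this formula is an assumption:

```python
def stopping_sight_distance(speed_kmh, reaction_s=2.5, decel=3.4):
    """AASHTO stopping sight distance (metric units):
    brake-reaction distance 0.278 * V * t plus braking distance 0.039 * V^2 / a,
    with V in km/h, t in s, a in m/s^2 (AASHTO default values shown)."""
    reaction = 0.278 * speed_kmh * reaction_s
    braking = 0.039 * speed_kmh ** 2 / decel
    return reaction + braking

d = stopping_sight_distance(100)
print(round(d))  # ~184 m at 100 km/h; AASHTO design tables round up to 185 m
```

A V2V system would compare this distance (for the prevailing speed) against the live gap to oncoming or leading vehicles before advising an overtake.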
Abstract: Plant diseases pose a significant challenge to global agricultural productivity, necessitating efficient and precise diagnostic systems for early intervention and mitigation. In this study, we propose a novel hybrid framework that integrates EfficientNet-B8, a Vision Transformer (ViT), and Knowledge Graph Fusion (KGF) to enhance plant disease classification across 38 distinct disease categories. The proposed framework leverages deep learning and semantic enrichment to improve classification accuracy and interpretability. EfficientNet-B8, a convolutional neural network (CNN) with optimized depth and width scaling, captures fine-grained spatial details in high-resolution plant images, aiding in the detection of subtle disease symptoms. In parallel, ViT, a transformer-based architecture, effectively models long-range dependencies and global structural patterns within the images, ensuring robust disease pattern recognition. Furthermore, KGF incorporates domain-specific metadata, such as crop type, environmental conditions, and disease relationships, to provide contextual intelligence and improve classification accuracy. The proposed model was rigorously evaluated on a large-scale dataset containing diverse plant disease images, achieving outstanding performance with 99.7% training accuracy and 99.3% testing accuracy. The precision and F1-score were consistently high across all disease classes, demonstrating the framework's ability to minimize false positives and false negatives. Compared to conventional deep learning approaches, this hybrid method offers a more comprehensive and interpretable solution by integrating self-attention mechanisms and domain knowledge. Beyond its superior classification performance, this model opens avenues for optimizing metadata dependency and reducing computational complexity, making it more feasible for real-world deployment in resource-constrained agricultural settings. The proposed framework represents an advancement in precision agriculture, providing scalable, intelligent disease diagnosis that enhances crop protection and food security.
Abstract: Data security is crucial for improving the confidentiality, integrity, and authenticity of image content. Maintaining these security factors poses significant challenges, particularly in the healthcare, business, and social media sectors, where information security and personal privacy are paramount. Cryptography offers a solution to these challenges. This paper proposes an innovative hybrid image encryption algorithm capable of encrypting several types of images. The technique merges the Tiny Encryption Algorithm (TEA) and Rivest-Shamir-Adleman (RSA) algorithms, and is called TEA-RSA. The performance of this algorithm is promising in terms of cost and complexity: an encryption time below 10 ms was recorded. Correlation coefficient analysis shows a notable decrease in pixel correlation after encryption, making the scheme effective at disguising pixel relationships. Moreover, our technique achieved a high Normalized Pixel Cross-Correlation (NPCC), a Number of Pixel Change Rate (NPCR) consistently over 99%, and a Unified Average Changing Intensity (UACI) of around 33.86, making it insensitive to statistical attacks through massive alteration of pixel values and intensities. These results demonstrate the algorithm's resistance to hacking attempts seeking unauthorized access. It is important to note that image integrity is well preserved throughout both the encryption and decryption stages, as the low decryption times indicate. These results collectively indicate that the algorithm ensures secure and efficient image encryption while maintaining the overall integrity and quality of the encrypted images. The proposed hybrid approach has been investigated against cryptanalysis such as ciphertext-only attacks, known-plaintext attacks, chosen-plaintext attacks, and chosen-ciphertext attacks. Moreover, the proposed approach performs well against cropping and differential attacks.
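The NPCR and UACI figures above come from standard definitions: NPCR is the percentage of pixel positions that differ between two ciphertexts, and UACI is the mean absolute intensity change relative to 255. A sketch using two unrelated random images as stand-in ciphertexts (illustrative data, not the paper's):

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: share of differing pixels (percent).
    UACI: mean absolute intensity change relative to 255 (percent)."""
    c1, c2 = c1.astype(float), c2.astype(float)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)
    return npcr, uaci

rng = np.random.default_rng(7)
a = rng.integers(0, 256, size=(128, 128))   # stand-in ciphertext 1
b = rng.integers(0, 256, size=(128, 128))   # stand-in ciphertext 2
npcr, uaci = npcr_uaci(a, b)
print(round(npcr, 2), round(uaci, 2))
```

For ideal 8-bit ciphertexts the expected values are about 99.61% NPCR and 33.46% UACI, which is why the paper's "over 99%" and "around 33.86" readings indicate strong diffusion.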
Abstract: The rapid growth of the automotive industry has raised significant concerns about the security of connected vehicles and their integrated supply chains, which are increasingly vulnerable to advanced cyber threats. Traditional authentication methods have proven insufficient, exposing systems to risks such as Sybil, Denial of Service (DoS), and Eclipse attacks. This study critically examines the limitations of current security protocols, focusing on authentication and data exchange vulnerabilities, and explores blockchain technology as a potential solution. Blockchain's decentralized and cryptographically secure framework can significantly enhance Vehicle-to-Vehicle (V2V) communication, ensure data integrity, and enable transparent, immutable transactions within the supply chain. Additionally, blockchain strengthens authentication, secures digital identities, and improves data sharing, reducing the risk of unauthorized access and data breaches. Our contribution lies in the proposal to integrate Artificial Intelligence (AI) with blockchain technology to further improve security by refining cryptographic methods, automating key management, and bolstering anomaly detection. Despite challenges related to computational complexity, latency, scalability, and regulatory concerns, the combination of blockchain and AI offers transformative potential to enhance the security, transparency, and efficiency of connected vehicle systems and their supply chains.
Abstract: Attribute-Based Encryption (ABE) enhances the confidentiality of Electronic Health Records (EHR), also known as Personal Health Records (PHR), by binding access rights not to individual identities but to user attribute sets such as roles, specialties, or certifications. This data-centric cryptographic paradigm enables highly fine-grained, policy-driven access control, minimizing the need for identity management and supporting scalable multi-user scenarios. This paper presents a comprehensive and critical survey of ABE schemes developed specifically for EHR/PHR systems over the past decade. It explores the evolution of these schemes, analyzing their design principles, strengths, limitations, and the level of granularity they offer in access control. The review also evaluates the security guarantees, efficiency, and practical applicability of these schemes in real-world healthcare environments. Furthermore, the paper outlines the current state of ABE as a mechanism for safeguarding EHR data and managing user access, while also identifying the key challenges that remain. Open issues such as scalability, revocation mechanisms, policy updates, and interoperability are discussed in detail, providing valuable insights for researchers and practitioners aiming to advance the secure management of health information systems.
Abstract: Precisely forecasting the performance of Deep Learning (DL) models, particularly in critical areas such as Uniform Resource Locator (URL)-based threat detection, aids in improving systems developed for difficult tasks. In cybersecurity, recognizing harmful URLs is vital to lowering the risks associated with phishing, malware, and other web-based attacks. Since it directly affects a model's capacity to differentiate between benign and harmful URLs, finding the optimal mix of hyperparameters in DL models is a significant difficulty. Two commonly used architectures for sequential and spatial data processing, Long Short-Term Memory (LSTM)/Gated Recurrent Unit (GRU) and Convolutional Neural Network (CNN)/Long Short-Term Memory (LSTM) models, are targeted in this study for higher predictive capacity by tuning crucial hyperparameters such as learning rate, batch size, and dropout rate using cloud capability. The research finds the best settings for the models by testing 50 dropout rates (between 0.1 and 0.5) with different learning rates and batch sizes. Performance was measured in terms of accuracy, precision, recall, F1-score, and errors such as Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE). In our results, CNN/LSTM performed better than LSTM/GRU in most cases, with up to 10% better F1-score and much lower MAPE when the learning rate was 0.001 and the dropout rate was 0.2. These results show the value of fine-tuning hyperparameters to increase model performance and reduce errors. Scoring higher on many of the metrics, the CNN/LSTM architecture emerged as the more reliable one. The study also discusses the importance of DL in enhancing URL attack detection mechanisms, providing increased accuracy and precision for real-world cybersecurity.
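The error metrics listed above have standard definitions, shown here on a tiny hand-checked example (the data is illustrative, not the study's):

```python
import math

def regression_errors(y_true, y_pred):
    """MAE, MSE, RMSE, and MAPE, as used to compare the tuned models."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errs) / n
    mse = sum(e * e for e in errs) / n
    mape = 100.0 * sum(abs(e / t) for e, t in zip(errs, y_true)) / n
    return mae, mse, math.sqrt(mse), mape

mae, mse, rmse, mape = regression_errors([1.0, 2.0, 4.0], [1.0, 1.5, 5.0])
print(mae, mse, round(rmse, 3), round(mape, 2))  # 0.5 ... 16.67
```

Note that MAPE divides by the true value, so it is undefined when a target is zero; classification-style targets are usually evaluated with the accuracy/precision/recall/F1 family instead, which is why the study reports both groups.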
Abstract: In recent years, automation has become a key focus in software development as organizations seek to improve efficiency and reduce time-to-market. The integration of artificial intelligence (AI) tools, particularly those using natural language processing (NLP) like ChatGPT, has opened new possibilities for automating various stages of the development lifecycle. The primary objective of this study is to evaluate the effectiveness of ChatGPT in automating various phases of software development. An AI tool was developed using the OpenAI Application Programming Interface (API), incorporating two key functionalities: 1) generating user stories based on case or process inputs, and 2) estimating the effort required to execute each user story. Additionally, ChatGPT was employed to generate application code. The AI tool was tested in three case studies, each explored under two different development strategies: a semi-automated process utilizing the AI tools and a traditional manual approach. The results demonstrated a significant reduction in total development time, ranging from 40% to 51%. However, it was observed that the generated content could be inaccurate and incomplete, necessitating review and debugging before being applied to projects. In conclusion, given the increasing shift towards automation in software engineering, further research is critical to enhance the efficiency and reliability of AI tools, particularly those that leverage natural language processing (NLP) technologies.
Funding: Supported by the Innovative Human Resource Development for Local Intellectualization program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (IITP-2024-00156287, 50%); by the IITP Artificial Intelligence Convergence Innovation Human Resources Development grant (IITP-2023-RS-2023-00256629, 25%) funded by the Korea government (MSIT); and by the Korea Internet & Security Agency (KISA) Information Security College Support Project (25%).
Abstract: Employee turnover presents considerable challenges for organizations, leading to increased recruitment costs and disruptions in ongoing operations. High voluntary attrition rates can result in substantial financial losses, making it essential for Human Resource (HR) departments to prioritize turnover reduction. In this context, Artificial Intelligence (AI) has emerged as a vital tool for strengthening business strategies and people management. This paper incorporates two new representative features, introducing three types of feature engineering to enhance the analysis of employee turnover in the IBM HR Analytics dataset. Key Machine Learning (ML) techniques were subsequently employed, such as Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Extreme Gradient Boosting (XGBoost), and especially Categorical Boosting (CatBoost), a gradient boosting algorithm optimized for categorical data, to analyze employee turnover. Adopting the unique feature engineering process enables CatBoost to enhance model accuracy and robustness while effectively analyzing complex patterns within employee data. Experimental results demonstrate the effectiveness of our proposed methodology, achieving the highest accuracy of 90.14% and an F1-score of 0.88 on the IBM dataset. To assess the capability of our detection system, we also used an extended dataset, achieving an optimal accuracy of 98.10% and an F1-score of 0.98. These results strongly indicate the efficiency of our proposed methodology and highlight the impact of feature engineering on predictive performance. Moreover, by pinpointing the top ten factors influencing attrition, including "Monthly Income", "Over Time", "Total Satisfaction", and others, this research equips HR departments with insights to implement targeted retention strategies, such as enhancing compensation or job satisfaction, to retain key talent before they consider leaving.
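A "Total Satisfaction" feature of the kind named above is typically engineered by aggregating the dataset's separate satisfaction-style columns into one composite signal. The sketch below is a hypothetical version: the column names mirror the public IBM HR Analytics dataset, but the exact aggregation the paper uses is an assumption:

```python
def add_total_satisfaction(row):
    """Hypothetical engineered feature: average the satisfaction-style
    columns into one 'TotalSatisfaction' value (1-4 scale)."""
    cols = ["JobSatisfaction", "EnvironmentSatisfaction",
            "RelationshipSatisfaction", "WorkLifeBalance"]
    row = dict(row)                                   # don't mutate the input
    row["TotalSatisfaction"] = sum(row[c] for c in cols) / len(cols)
    return row

employee = {"JobSatisfaction": 4, "EnvironmentSatisfaction": 2,
            "RelationshipSatisfaction": 3, "WorkLifeBalance": 3,
            "MonthlyIncome": 5200, "OverTime": 1}
print(add_total_satisfaction(employee)["TotalSatisfaction"])  # -> 3.0
```

Collapsing correlated columns into one representative feature both reduces dimensionality and gives tree-based models such as CatBoost a single, stronger split candidate.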
Funding: Funded by the Basic Research Operating Expenses Postgraduate Innovation Programme (Grant No. W24YJS00010, received by J. Yan), the National Key R&D Program of China (Grant No. 2018YFA0701604, received by H. Zhou), and the National Natural Science Foundation of China (NSFC) (Grant No. 62341102, received by H. Zhou).
Abstract: The 6G network architecture introduces the paradigm of Trust + Security, representing a shift in network protection strategies from external defense mechanisms to endogenous security enforcement. While zero-trust networks (ZTNs) have demonstrated significant advancements in constructing trust-centric frameworks, most existing ZTN implementations lack comprehensive integration of security deployment and traffic monitoring capabilities. Furthermore, current ZTN designs generally do not facilitate dynamic assessment of user reputation. To address these limitations, this study proposes a Data-Plane-based Zero Trust Network (DPZTN). The DPZTN framework extends traditional ZTN models by incorporating security mechanisms directly into the data plane. Additionally, blockchain infrastructure is used to enable decentralized identity authentication and distributed access control. A pivotal element within the proposed framework is the Zero-Trust Network Element (ZTNE), which executes access control policies and performs real-time user traffic inspection. To enable dynamic and fine-grained evaluation of user trustworthiness, this study introduces a Bayesian-based Behavior Evaluation Algorithm (BBEA). BBEA provides a framework for continuous user behavior analysis, supporting adaptive privilege management and behavior-informed access control. Experimental results demonstrate that ZTNE, combined with BBEA, can effectively respond to both individual and mixed attack types by promptly adjusting user behavior scores and dynamically modifying access privileges based on initial privilege levels. Under conditions supporting up to 10,000 concurrent users, the control system maintains approximately 65% CPU usage and less than 60% memory usage, with average user authentication latency of around 1 s and access control latency close to 1 s.
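A common Bayesian formulation for this kind of behaviour score is a Beta-Bernoulli update: each benign or flagged action updates a Beta posterior, whose mean becomes the trust score. The abstract does not give BBEA's exact formulation, so the sketch below is a generic stand-in, not the paper's algorithm:

```python
def update_reputation(alpha, beta, observations):
    """Beta-Bernoulli trust score: each benign action (1) or flagged
    action (0) updates the Beta(alpha, beta) posterior; the posterior
    mean alpha / (alpha + beta) is the user's current score."""
    for ok in observations:
        alpha, beta = alpha + ok, beta + (1 - ok)
    return alpha / (alpha + beta)

# Start from a uniform prior Beta(1, 1); mostly benign behaviour
score = update_reputation(1, 1, [1, 1, 1, 0, 1, 1])
print(round(score, 3))  # -> 0.75
```

Because the posterior carries its history in (alpha, beta), the score adapts continuously, dropping quickly under a burst of flagged actions, which matches the "promptly adjusting user behavior scores" behaviour the experiments report.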
Abstract: Accurate non-line-of-sight (NLOS) identification in ultra-wideband (UWB) location-based services is critical for applications like drone communication and autonomous navigation. However, current methods using binary classification (LOS/NLOS) oversimplify real-world complexities, with limited generalisation and adaptability to varying indoor environments, thereby reducing positioning accuracy. This study proposes an extreme gradient boosting (XGBoost) model to identify multi-class NLOS conditions. We optimise the model using grid search and genetic algorithms. Initially, the grid search approach is used to identify the most favourable values for the integer hyperparameters. To achieve an optimised model configuration, the genetic algorithm is then employed to fine-tune the floating-point hyperparameters. The model evaluations utilise a wide-ranging dataset of real-world measurements obtained with a Qorvo DW1000 UWB device, covering various indoor scenarios. Experimental results show that our proposed XGBoost model achieved the highest overall accuracy of 99.47%, precision of 99%, recall of 99%, and an F-score of 99% on an open-source dataset. Additionally, on a local dataset, the model achieved an accuracy of 96%, precision of 96%, recall of 97%, and an F-score of 97%. In contrast to current machine learning methods in the literature, the suggested model enhances classification accuracy and effectively addresses NLOS/LOS identification as a multi-class propagation-channel problem. This approach provides a robust solution with generalisation and adaptability across various dataset types and environments for more reliable and accurate indoor positioning technologies.
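The genetic-algorithm stage for floating-point hyperparameters can be illustrated with a toy GA over one parameter. The objective below is a stand-in (a quadratic with a known optimum), not an actual XGBoost validation run, and the GA operators are generic choices rather than the paper's configuration:

```python
import random

def ga_minimise(fitness, lo, hi, pop=20, gens=30, seed=42):
    """Toy genetic algorithm for one floating-point hyperparameter:
    tournament selection, blend crossover, Gaussian mutation."""
    rnd = random.Random(seed)
    popn = [rnd.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a = min(rnd.sample(popn, 3), key=fitness)   # tournament pick 1
            b = min(rnd.sample(popn, 3), key=fitness)   # tournament pick 2
            child = (a + b) / 2 + rnd.gauss(0, 0.05 * (hi - lo))
            nxt.append(min(max(child, lo), hi))         # clamp to bounds
        popn = nxt
    return min(popn, key=fitness)

# Stand-in objective: pretend validation error is minimised at 0.1
best = ga_minimise(lambda x: (x - 0.1) ** 2, lo=0.001, hi=0.5)
print(round(best, 3))  # converges near 0.1
```

In the paper's pipeline the fitness function would instead train/evaluate XGBoost with the candidate value, which is why the cheap integer grid search is run first and the expensive GA only refines the continuous parameters.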
Abstract: Diagnosing dental disorders from routine photographs can significantly reduce chair-side workload and expand access to care. However, most AI-based image analysis systems suffer from limited interpretability and are trained on class-imbalanced datasets. In this study, we developed a balanced, transformer-based pipeline to detect three common dental disorders from standard color images: tooth discoloration, calculus, and hypodontia. After applying a color-standardized preprocessing pipeline and performing stratified data splitting, the proposed vision transformer model was fine-tuned and subsequently evaluated using standard classification benchmarks. The model achieved an accuracy of 98.94%, with precision, recall, and F1-scores all greater than or equal to 98% for the three classes. To ensure interpretability, three complementary saliency methods (attention roll-out, layer-wise relevance propagation, and LIME) verified that predictions rely on clinically meaningful cues such as stained enamel, supragingival deposits, and edentulous gaps. The proposed method addresses class imbalance through dataset balancing, enhances interpretability using multiple explanation methods, and demonstrates the effectiveness of transformers over CNNs in dental imaging. It offers a transparent, real-time screening tool suitable for both clinical and tele-dentistry frameworks, providing accessible, clarity-guided care pathways.
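Of the three saliency methods named, attention roll-out is the simplest to sketch: average the heads in each layer, add the identity for the residual connection, renormalise rows, and multiply the layer matrices in order. The random attention tensors below are placeholders for a real ViT's attention maps:

```python
import numpy as np

def attention_rollout(attentions):
    """Attention roll-out: per layer, average heads, add the residual
    identity, renormalise rows to a distribution, then chain layers."""
    result = np.eye(attentions[0].shape[-1])
    for layer in attentions:                    # layer: (heads, tokens, tokens)
        a = layer.mean(axis=0) + np.eye(layer.shape[-1])
        a /= a.sum(axis=-1, keepdims=True)      # rows sum to 1
        result = a @ result
    return result

rng = np.random.default_rng(3)
atts = [rng.random((4, 6, 6)) for _ in range(3)]   # 3 layers, 4 heads, 6 tokens
roll = attention_rollout(atts)
print(np.allclose(roll.sum(axis=-1), 1.0))  # rows remain a distribution -> True
```

Reading off the row for the class token gives a per-patch relevance map, which is what gets overlaid on the photograph to check that the model looks at stained enamel or deposits rather than background.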
Funding: Supported by the Open Research Fund Program of the State Key Laboratory of Maritime Technology and Safety in 2024, the National Natural Science Foundation of China (Grant No. 52331012), and the Natural Science Foundation of Shanghai (Grant No. 21ZR1426500).
Abstract: Synthetic Aperture Radar (SAR) has become one of the most effective tools for ship detection. However, due to significant background interference, small targets, and challenges related to target scattering intensity in SAR images, current ship target detection faces serious issues of missed detections and false positives, and the network structures used are overly complex. To address this, this paper proposes a lightweight model based on YOLOv8, named OD-YOLOv8. Firstly, we adopt a simplified neural network architecture, VanillaNet, to replace the backbone network, significantly reducing the number of parameters and computational complexity while preserving accuracy. Secondly, we introduce a dynamic, multi-dimensional attention mechanism by designing the ODC2f module with ODConv to replace the original C2f module, and use GSConv to replace two down-sampling convolutions to further reduce the number of parameters. Then, to alleviate missed detections and false positives for small targets, we discard one of the original large-target detection layers and add a detection layer specifically for small targets. Finally, based on a dynamic non-monotonic focusing mechanism, we employ the Wise-IoU (Intersection over Union) loss function to significantly improve detection accuracy. Experimental results on the HRSID dataset show that, compared to the original YOLOv8, OD-YOLOv8 improves mAP@0.5 and mAP@0.5–0.95 by 2.7% and 3.5%, respectively, while reducing the number of parameters and GFLOPs by 72.9% and 4.9%, respectively. Moreover, the model also performs exceptionally well on the SSDD dataset, with AP and AP50 increasing by 1.7% and 0.4%, respectively. OD-YOLOv8 achieves an excellent balance between lightweight design and accuracy, making it highly valuable for end-to-end industrial deployment.
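The quantity underlying both the mAP metrics and the Wise-IoU loss above is plain Intersection over Union between two boxes. A minimal sketch (standard definition; Wise-IoU's dynamic focusing weight is not reproduced here):

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned (x1, y1, x2, y2) boxes,
    the base quantity that Wise-IoU reweights with its focusing mechanism."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlap 1 over union 7 -> ~0.143
```

mAP@0.5 counts a detection as correct when its IoU with a ground-truth box exceeds 0.5, while mAP@0.5–0.95 averages that criterion over IoU thresholds from 0.5 to 0.95.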
Abstract: Globally, liver cancer ranks as the sixth most frequent malignancy. Early detection is critical, as liver cancer is the fifth most common cancer in men and the ninth most common in women. Recent advances in imaging, biomarker discovery, and genetic profiling have greatly enhanced the ability to diagnose liver cancer. Early identification is vital, since liver cancer is often asymptomatic, making diagnosis difficult. Imaging techniques such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and ultrasonography can be used to identify liver cancer once a sample of liver tissue has been taken. In recent research, reliable detection of liver cancer with minimal computational complexity and time has remained a serious challenge. This paper employs the DenseNet model to enhance the detection of tumorous liver nodules by segmenting them with UNet and VGG using Fastai (UVF) in CT images. DenseNet is distinguished by its dense interconnections between layers; these dense connections facilitate gradient propagation and information flow throughout the network, improving training efficiency and performance. DenseNet's architecture combines dense blocks, bottleneck layers, and transition layers, allowing it to strike a balance between expressiveness and computational efficiency. Finally, 3D liver-nodule models were created using a ray-casting volume-rendering approach. Compared with other state-of-the-art deep neural networks, the method is suitable for clinical applications that assist doctors in diagnosing liver cancer. The proposed approach was tested on the 3Dircadb dataset, where UVF segmentation achieved 97.9% accuracy. According to the study, DenseNet with UVF segments liver cancer better than prior methods, and the system provides automated 3D visualization of liver tumors.
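DenseNet's dense connections mean that each layer receives the concatenated feature maps of all preceding layers, so channel counts grow linearly with the growth rate; this is why transition layers are needed to compress channels between blocks. A small sketch of that channel bookkeeping (the 64/32 values in the usage example are illustrative, not taken from the paper):

```python
def dense_block_channels(in_channels, growth_rate, num_layers):
    """Channel count entering each layer of a dense block: every layer
    concatenates its growth_rate new feature maps onto all earlier ones."""
    channels = [in_channels]
    for _ in range(num_layers):
        channels.append(channels[-1] + growth_rate)
    return channels
```

For example, a 4-layer block starting from 64 input channels with growth rate 32 ends at 192 channels, so feature reuse comes at the cost of steadily widening tensors between transition layers.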
Abstract: Many existing watermarking approaches aim to provide a Robust Reversible Data Hiding (RRDH) method. However, most of these approaches degrade under geometric and non-geometric attacks. This paper presents a novel RRDH approach using Polar Harmonic Fourier Moments (PHFMs) and linear interpolation. The primary objective is to enhance the robustness of the embedded watermark and improve the imperceptibility of the watermarked image. The proposed method leverages the high fidelity and resistance to geometric transformations of PHFMs. The image is transformed into the frequency domain, after which compensation data is embedded using a two-dimensional RDH scheme. A linear-interpolation modification is applied to reduce the distortion caused by the embedded data, minimize complexity, and preserve imperceptibility. As a result, both the robustness and the reliability of the embedded data are effectively preserved. Experimental results demonstrate that the approach achieves superior visual quality and strong resistance to geometric transformation attacks, and extensive evaluation shows that it outperforms existing RRDH methods. The imperceptibility metrics achieved include a Peak Signal-to-Noise Ratio (PSNR) of 52 dB and a Structural Similarity Index Measure (SSIM) of 0.9990, reflecting high fidelity and minimal degradation in the watermarked image. Robustness measurements indicate a PSNR of 43 dB, along with reduced computational complexity.
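The PSNR figures quoted above follow the standard definition over mean squared error between the original and watermarked pixels. A minimal sketch for 8-bit pixel sequences (illustrative only, not the authors' evaluation code):

```python
import math

def psnr(original, modified, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-length pixel lists."""
    mse = sum((o - m) ** 2 for o, m in zip(original, modified)) / len(original)
    if mse == 0:
        return float("inf")  # identical signals: no distortion
    return 10.0 * math.log10(max_val ** 2 / mse)
```

A PSNR of 52 dB corresponds to an MSE of roughly 0.4 on 8-bit images, i.e. modifications well below one gray level on average, which is why the watermark is visually imperceptible.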
Abstract: A Distributed Denial-of-Service (DDoS) attack poses a significant challenge in the digital age, disrupting online services with operational and financial consequences. Detecting such attacks requires innovative and effective solutions, and the primary challenge lies in selecting the best among several DDoS detection models. This study presents a framework that combines several DDoS detection models with Multiple-Criteria Decision-Making (MCDM) techniques to compare them and select the most effective one. The framework integrates a decision matrix, built by training several models on the CIC-DDoS2019 dataset, with the Fuzzy Weighted Zero Inconsistency Criterion (FWZIC) and Multi-Attributive Border Approximation Area Comparison (MABAC) methodologies. FWZIC assigns weights to the evaluation criteria, while MABAC compares the detection models on the weighted criteria. The results indicate that FWZIC assigns weights reliably, with time complexity receiving the highest weight (0.2585) and F1 score the lowest (0.14644). Among the models evaluated with MABAC, the Support Vector Machine (SVM) ranked first with a score of 0.0444, making it the most suitable for this task, while Naive Bayes (NB) ranked lowest with a score of 0.0018. Objective validation and sensitivity analysis confirmed the reliability of the framework. This study provides a practical approach and insights for cybersecurity practitioners and researchers evaluating DDoS detection models.
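MABAC scores each alternative by its distance from a border approximation area, computed as the column-wise geometric mean of the weighted, normalized decision matrix. A plain-Python sketch of that ranking step (the FWZIC weighting procedure is not reproduced, and the matrix, weights, and criteria in the usage example are illustrative, not the paper's data):

```python
import math

def mabac_scores(matrix, weights, benefit):
    """MABAC: score alternatives (rows) against weighted criteria (columns).
    benefit[j] is True if higher is better for criterion j, False for cost criteria."""
    m, n = len(matrix), len(matrix[0])
    cols = list(zip(*matrix))
    # 1. min-max normalization per criterion, inverted for cost criteria
    r = [[0.0] * n for _ in range(m)]
    for j in range(n):
        lo, hi = min(cols[j]), max(cols[j])
        span = (hi - lo) or 1.0
        for i in range(m):
            x = (matrix[i][j] - lo) / span
            r[i][j] = x if benefit[j] else 1.0 - x
    # 2. weighted matrix: v = w * (r + 1)
    v = [[weights[j] * (r[i][j] + 1.0) for j in range(n)] for i in range(m)]
    # 3. border approximation area: geometric mean of each column
    g = [math.prod(v[i][j] for i in range(m)) ** (1.0 / m) for j in range(n)]
    # 4. score = sum of signed distances to the border; higher is better
    return [sum(v[i][j] - g[j] for j in range(n)) for i in range(m)]
```

For instance, with two models scored on accuracy (benefit) and runtime (cost), `mabac_scores([[0.9, 10.0], [0.8, 5.0]], [0.6, 0.4], [True, False])` ranks the first model above the second because the higher accuracy weight outweighs its slower runtime.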
Abstract: In the broader field of mechanical technology, and particularly in the context of self-driving vehicles, cameras and Light Detection and Ranging (LiDAR) sensors provide complementary modalities with significant potential for sensor fusion. However, directly merging multi-sensor data through point projection often causes information loss due to quantization, and managing the differing data formats of multiple sensors remains a persistent challenge. To address these issues, we propose a new fusion method that leverages continuous convolution, point pooling, and a learned Multilayer Perceptron (MLP) to achieve superior detection performance. Our approach integrates the segmentation mask with raw LiDAR points rather than relying on projected points, effectively avoiding quantization loss. Additionally, when retrieving the corresponding semantic information from images through point-cloud projection, we employ linear interpolation and upsample the image feature maps to further mitigate quantization loss. We use nearest-neighbor search and continuous convolution to fuse data from different formats seamlessly. Moreover, we integrate pooling and aggregation operations, conceptual extensions of convolution designed to reconcile the inherent disparities among these data representations. The detection network operates in two stages: the first generates preliminary proposals and segmentation features; the second refines the fusion results together with the segmentation mask to yield the final prediction. Notably, the image network is used solely to provide semantic information that enhances the point-cloud features. Extensive experiments on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset demonstrate the effectiveness of our approach, which achieves both high precision and robust performance on 3D object detection tasks.
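Sampling image features at the continuous coordinates where LiDAR points project, rather than at rounded pixel indices, is what avoids quantization loss; bilinear interpolation over the four surrounding pixels is the usual mechanism. A minimal sketch over a single-channel 2D feature map (illustrative; the paper's feature maps are multi-channel):

```python
import math

def bilinear_sample(feature_map, x, y):
    """Sample a 2D feature map (list of rows) at continuous coordinates (x, y)
    by blending the four surrounding grid values, clamped at the borders."""
    h, w = len(feature_map), len(feature_map[0])
    x0, y0 = math.floor(x), math.floor(y)
    # clamp neighbor indices to the valid grid
    x0c, x1c = max(0, min(x0, w - 1)), max(0, min(x0 + 1, w - 1))
    y0c, y1c = max(0, min(y0, h - 1)), max(0, min(y0 + 1, h - 1))
    wx, wy = x - x0, y - y0  # fractional offsets inside the cell
    top = (1 - wx) * feature_map[y0c][x0c] + wx * feature_map[y0c][x1c]
    bot = (1 - wx) * feature_map[y1c][x0c] + wx * feature_map[y1c][x1c]
    return (1 - wy) * top + wy * bot
```

Upsampling the feature maps before sampling, as the abstract describes, shrinks each interpolation cell and so further reduces the error introduced by the blend.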