Providing safe, high-quality food is crucial for every household and of great significance to the growth of any society. It is a complex process spanning every stage of food production, from seed to harvest, storage, preparation, and consumption. This paper seeks to clarify the roles of artificial intelligence, machine learning (ML), deep learning (DL), and computer vision (CV) in ensuring food safety and quality. CV is particularly valuable today because it improves the quality of food processing and benefits both firms and researchers; at the current stage of production, image processing and computer vision are incorporated into all facets of food production. In this field, DL and ML are applied to identify food type as well as food quality. Comparing the data and results of the surveyed approaches reveals notable similarities among them. The findings of this study will therefore help scholars seeking a suitable approach for assessing food quality, indicate which food products have already been studied, and point readers to papers inviting further research. DL has also been integrated effectively into identifying the quality and safety of foods on the market. This paper describes the current practices and concerns of ML and DL and probable trends for their future development.
The rapid digitalization of urban infrastructure has made smart cities increasingly vulnerable to sophisticated cyber threats. In the evolving landscape of cybersecurity, the efficacy of Intrusion Detection Systems (IDS) is increasingly measured by technical performance, operational usability, and adaptability. This study introduces and rigorously evaluates a Human-Computer Interaction (HCI)-Integrated IDS, built with a Convolutional Neural Network (CNN), a CNN-Long Short-Term Memory (LSTM) network, and a Random Forest (RF), against both a Baseline Machine Learning (ML) and a Traditional IDS model, through an extensive experimental framework covering many performance metrics, including detection latency, accuracy, alert prioritization, classification errors, system throughput, usability, ROC-AUC, precision-recall, confusion matrix analysis, and statistical accuracy measures. Our findings consistently demonstrate the superiority of the HCI-Integrated approach on three major datasets (CICIDS 2017, KDD Cup 1999, and UNSW-NB15). Experimental results indicate that the HCI-Integrated model outperforms its counterparts, achieving an AUC-ROC of 0.99, a precision of 0.93, and a recall of 0.96, while maintaining the lowest false positive rate (0.03) and the fastest detection time (~1.5 s). These findings validate the efficacy of incorporating HCI to enhance anomaly detection capabilities, improve responsiveness, and reduce alert fatigue in critical smart city applications. The model achieves markedly lower detection times, higher accuracy across all threat categories, reduced false positive and false negative rates, and enhanced system throughput under concurrent load. The HCI-Integrated IDS also excels in alert contextualization and prioritization, offering more actionable insights while minimizing analyst fatigue. Usability feedback underscores increased analyst confidence and operational clarity, reinforcing the importance of user-centered design. These results collectively position the HCI-Integrated IDS as a highly effective, scalable, and human-aligned solution for modern threat detection environments.
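As a sketch of how the headline numbers above are computed, the following derives precision, recall, and false positive rate from a binary confusion matrix. The labels and predictions are toy values, not the paper's data:

```python
# Illustrative IDS evaluation metrics from a binary confusion matrix
# (1 = intrusion, 0 = benign). Toy inputs, not the paper's experiments.

def confusion_counts(y_true, y_pred):
    """Return (tp, fp, tn, fn) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def ids_metrics(y_true, y_pred):
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0   # detection rate
    fpr = fp / (fp + tn) if fp + tn else 0.0      # false-alarm rate
    return {"precision": precision, "recall": recall, "fpr": fpr}

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
print(ids_metrics(y_true, y_pred))
# {'precision': 0.75, 'recall': 0.75, 'fpr': 0.25}
```

A low FPR matters most for alert fatigue: every false positive is an alert an analyst must triage.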
Blockchain Technology (BT) has emerged as a transformative solution for improving the efficacy, security, and transparency of supply chain intelligence. Traditional Supply Chain Management (SCM) systems frequently suffer from data silos, a lack of real-time visibility, fraudulent activity, and inefficiencies in tracking and traceability. Blockchain's decentralized and immutable ledger offers a solid foundation for addressing these issues; it facilitates trust, security, and real-time data sharing among all parties involved. Through an examination of critical technologies, methodologies, and applications, this paper delves deeply into computer-modeling-based blockchain frameworks for supply chain intelligence. The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field. As part of this process, we surveyed the research on blockchain-based supply chain models, smart contracts, Decentralized Applications (DApps), and their connections to other cutting-edge innovations such as Artificial Intelligence (AI) and the Internet of Things (IoT). To quantify blockchain's performance, the study introduces analytical models for efficiency improvement, security enhancement, and scalability, enabling computational assessment and simulation of supply chain scenarios. These models provide a structured approach to predicting system performance under varying parameters. According to the results, BT increases efficiency by automating transactions with smart contracts, strengthens security through cryptographic techniques, and improves supply chain transparency by providing immutable records. Regulatory concerns, interoperability challenges, and scalability limits all work against broad adoption. To fully automate and intelligently integrate blockchain with AI and the IoT, additional research is needed to address blockchain's current limitations and realize its potential for supply chain intelligence.
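The immutability claim rests on hash-chaining: each ledger entry embeds the hash of its predecessor, so tampering with any record invalidates every later link. A minimal Python sketch of the idea, not tied to any particular blockchain platform:

```python
# Toy hash-chained ledger for supply chain events: tampering with any
# block's payload breaks verification of the whole chain.
import hashlib
import json

def make_block(prev_hash, payload):
    """Create a ledger entry chained to its predecessor by hash."""
    body = {"prev": prev_hash, "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain):
    """Recompute every hash and check each prev-link."""
    for i, block in enumerate(chain):
        body = {"prev": block["prev"], "payload": block["payload"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, {"event": "shipment created", "sku": "A-1"})]
chain.append(make_block(chain[-1]["hash"], {"event": "customs cleared"}))
print(verify_chain(chain))           # True
chain[0]["payload"]["sku"] = "B-2"   # tamper with an early record
print(verify_chain(chain))           # False
```

Real platforms add consensus and smart-contract execution on top; this sketch shows only the tamper-evidence property the abstract relies on.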
Cyber-Physical Systems (CPS) integrate computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, the Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions for developing comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the digital transformation era.
This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We meticulously develop the TIHTBXII's mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the bias and RMSE of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests. This application highlights how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number (ASN) while effectively managing consumer and producer risks. Empirical validation using real-world datasets, including "Active Repair Duration," "Groundwater Contaminant Measurements," and "Dominica COVID-19 Mortality," further demonstrates the TIHTBXII's superior fit compared to existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, leading to more informed and robust decision-making.
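For orientation, the baseline Burr XII distribution that the TIHT family extends has CDF F(x) = 1 - (1 + x^c)^(-k). The sketch below implements its PDF, CDF, and a negative log-likelihood suitable for MLE; the heavy-tail extension's exact form follows the paper and is not reproduced here:

```python
# Baseline Burr XII building blocks (the TIHTBXII model adds an extra
# heavy-tail parameter on top of this family, per the paper).
import math

def burr12_cdf(x, c, k):
    return 1.0 - (1.0 + x**c) ** (-k)

def burr12_pdf(x, c, k):
    return c * k * x**(c - 1) * (1.0 + x**c) ** (-k - 1)

def neg_log_lik(params, data):
    """Objective to minimize for MLE of (c, k) on positive data."""
    c, k = params
    if c <= 0 or k <= 0:
        return float("inf")
    return -sum(math.log(burr12_pdf(x, c, k)) for x in data)

# sanity checks: CDF starts at 0, and pdf matches the CDF's derivative
print(round(burr12_cdf(0.0, 2, 3), 6))  # 0.0
h = 1e-6
x0 = 1.3
num_deriv = (burr12_cdf(x0 + h, 2, 3) - burr12_cdf(x0 - h, 2, 3)) / (2 * h)
print(abs(num_deriv - burr12_pdf(x0, 2, 3)) < 1e-5)  # True
```

Minimizing `neg_log_lik` with any numerical optimizer yields the MLE; the paper additionally compares MPS, LS, and WLS estimators.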
The Bat algorithm, a metaheuristic optimization technique inspired by the foraging behaviour of bats, has been employed to tackle optimization problems. Known for its ease of implementation, parameter tunability, and strong global search capabilities, the algorithm finds application across diverse optimization problem domains. However, in the face of increasingly complex optimization challenges, the Bat algorithm encounters certain limitations, such as slow convergence and sensitivity to initial solutions. To tackle these challenges, the present study incorporates a range of optimization components into the Bat algorithm, thereby proposing a variant called PKEBA. A projection screening strategy is implemented to mitigate sensitivity to initial solutions, thereby enhancing the quality of the initial solution set. A kinetic adaptation strategy reforms exploration patterns, while an elite communication strategy enhances group interaction, helping the algorithm escape local optima. Subsequently, the effectiveness of the proposed PKEBA is rigorously evaluated. Testing encompasses 30 benchmark functions from IEEE CEC2014, featuring ablation experiments and comparative assessments against classical algorithms and their variants. Moreover, real-world engineering problems are employed as further validation. The results conclusively demonstrate that PKEBA exhibits superior convergence and precision compared to existing algorithms.
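For reference, a minimal standard Bat algorithm can be sketched as below; this is the baseline that PKEBA augments, and the projection screening, kinetic adaptation, and elite communication strategies are not reproduced here:

```python
# Minimal standard Bat algorithm on the sphere function (illustrative
# baseline only; hyperparameters are conventional defaults, not tuned).
import math
import random

def sphere(x):
    return sum(v * v for v in x)

def bat_algorithm(f, dim=5, n_bats=20, iters=200, fmin=0.0, fmax=2.0,
                  alpha=0.9, gamma=0.9, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats          # loudness A_i
    rate = [0.5] * n_bats          # pulse emission rate r_i
    best = min(pos, key=f)[:]
    for t in range(iters):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            vel[i] = [v + (p - b) * freq for v, p, b in zip(vel[i], pos[i], best)]
            cand = [p + v for p, v in zip(pos[i], vel[i])]
            if rng.random() > rate[i]:
                # local random walk around the current best bat
                cand = [b + 0.01 * rng.gauss(0, 1) * loud[i] for b in best]
            if rng.random() < loud[i] and f(cand) < f(pos[i]):
                pos[i] = cand
                loud[i] *= alpha
                rate[i] = 0.5 * (1 - math.exp(-gamma * t))
            if f(pos[i]) < f(best):
                best = pos[i][:]
    return best, f(best)

best, val = bat_algorithm(sphere)
print(val)
```

The sensitivity to the random initial population visible here is exactly what PKEBA's projection screening strategy targets.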
When performing English-to-Tamil Neural Machine Translation (NMT), end users face several challenges due to Tamil's rich morphology, free word order, and limited annotated corpora. Although available transformer-based models offer strong baselines, they compromise syntactic awareness and the detection and management of offensive content in cluttered, noisy, and informal text. In this paper, we present POSDEP-Offense-Trans, a multi-task NMT framework that combines Part-of-Speech (POS) tagging and Dependency Parsing (DEP) with a robust offensive language classification module. Our architecture enriches the Transformer encoder with syntax-aware embeddings and syntax-guided attention mechanisms. It incorporates a structure-aware contrastive loss that reinforces syntactic consistency and deploys auxiliary classification heads for POS tagging, dependency parsing, and multi-class offensive content detection. The offensive language classifier operates at both sentence and token levels, guided by syntactic features and formal finite automata rules that model offensive language structures: hate speech, profanity, sarcasm, and threats. Using this architecture, we construct a syntactically enriched, socially annotated corpus. Experimental results show improvements in translation quality, with a BLEU score of 33.5, UAS/LAS parsing accuracies of 92.4% and 90%, and a 4.5% F1-score gain in offensive content detection compared with baseline POS+DEP+Offense models. The proposed model also achieved 92.3% in offensive content neutralization, as confirmed by ablation studies. This comprehensive English-Tamil NMT model unifies syntactic modelling and ethical filtering, laying the groundwork for applications in social media moderation, hate speech mitigation, and policy-compliant multilingual content generation.
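The finite-automata idea can be illustrated with a toy DFA that flags token sequences of the form optional intensifier, then profanity, then target. The word lists below are hypothetical placeholders, not the paper's English or Tamil lexicon:

```python
# Toy DFA for one offensive-language pattern: INTENSIFIER? PROFANITY TARGET.
# Word lists are illustrative stand-ins only.
INTENSIFIER = {"very", "totally"}
PROFANITY = {"stupid", "idiot"}
TARGET = {"you", "them"}

def dfa_flags(tokens):
    """States: 0 start, 1 after intensifier, 2 after profanity; accept on target."""
    state = 0
    for tok in tokens:
        if state in (0, 1) and tok in PROFANITY:
            state = 2
        elif state == 0 and tok in INTENSIFIER:
            state = 1
        elif state == 2 and tok in TARGET:
            return True          # accepting state reached
        else:
            state = 0            # reset on a non-matching token
    return False

print(dfa_flags("you are totally stupid you".split()))  # True
print(dfa_flags("have a nice day".split()))             # False
```

In the paper such automata supply rule-based guidance to the learned token-level classifier rather than replacing it.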
Federated Learning (FL) has become a leading decentralized solution that enables multiple clients to train a model collaboratively without directly sharing raw data, making it suitable for privacy-sensitive applications such as healthcare, finance, and smart systems. As the field continues to evolve, the research landscape has become more complex and fragmented, covering different system designs, training methods, and privacy techniques. This survey is organized around three core challenges: how data is distributed, how models are synchronized, and how to defend against attacks. It provides a structured and up-to-date review of FL research from 2023 to 2025, offering a unified taxonomy that categorizes works by data distribution (Horizontal FL, Vertical FL, Federated Transfer Learning, and Personalized FL), training synchronization (synchronous and asynchronous FL), optimization strategies, and threat models (data leakage and poisoning attacks). In particular, we summarize the latest contributions in Vertical FL frameworks for secure multi-party learning, communication-efficient Horizontal FL, and domain-adaptive Federated Transfer Learning. Furthermore, we examine synchronization techniques addressing system heterogeneity, including straggler mitigation in synchronous FL and staleness management in asynchronous FL. The survey covers security threats in FL, such as gradient inversion, membership inference, and poisoning attacks, as well as defense strategies including privacy-preserving aggregation and anomaly detection. The paper concludes by outlining unresolved issues and highlighting challenges in personalized models, scalability, and real-world adoption.
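The core Horizontal FL training loop can be sketched in a few lines of FedAvg-style weighted parameter averaging; this is a common baseline pattern, not any specific surveyed method:

```python
# Minimal FedAvg sketch: clients take local gradient steps on their own
# data; the server averages parameters weighted by client data size.
# Scalar linear model y ≈ w*x for illustration.

def local_step(w, data, lr=0.1):
    """One least-squares gradient step on a single client's private data."""
    g = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * g

def fedavg_round(w_global, client_data):
    """One communication round: local training, then weighted averaging."""
    updates, sizes = [], []
    for data in client_data:
        updates.append(local_step(w_global, data))
        sizes.append(len(data))
    total = sum(sizes)
    return sum(w * n for w, n in zip(updates, sizes)) / total

# two clients whose data both follow y = 3x; raw data never leaves a client
clients = [[(1.0, 3.0), (2.0, 6.0)], [(0.5, 1.5)]]
w = 0.0
for _ in range(100):
    w = fedavg_round(w, clients)
print(round(w, 2))  # 3.0
```

Only the scalar parameter crosses the network each round, which is the privacy property the survey's taxonomy builds on (and which gradient inversion attacks try to undermine).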
Cloud services, favored by many enterprises for their high flexibility and ease of operation, are widely used for data storage and processing. However, the high latency and transmission overheads of the cloud architecture make it difficult to respond quickly to the demands of IoT applications and local computation. To make up for these deficiencies of the cloud, fog computing has emerged to play a critical role in IoT applications. It decentralizes computing power to lower-level nodes close to data sources, achieving low latency and distributed processing. With data being frequently exchanged and shared between multiple nodes, authorizing data securely and efficiently while protecting user privacy becomes a challenge. To address it, proxy re-encryption (PRE) schemes provide a feasible way to let an intermediary proxy node re-encrypt ciphertext designated for different authorized data requesters without compromising any plaintext information. Since the proxy is viewed as a semi-trusted party, care must be taken to prevent malicious behavior and reduce the risk of data leakage when implementing PRE schemes. This paper proposes a new fog-assisted identity-based PRE scheme supporting anonymous key generation, equality testing, and user revocation to fulfill various IoT application requirements. Specifically, in a traditional identity-based public key architecture, the key escrow problem and the need for a secure channel are major security concerns; we employ an anonymous key generation technique to solve both. The equality test functionality further enables a cloud server to check whether two candidate trapdoors contain an identical keyword. In particular, the proposed scheme realizes fine-grained user-level authorization while maintaining strong key confidentiality. To revoke an invalid user identity, we add a revocation list to the system flows to restrict access privileges without additional computation cost. For security, we show that our system meets the notions of IND-PrID-CCA and OW-ID-CCA under the Decisional Bilinear Diffie-Hellman (DBDH) assumption.
Cloud data sharing is an important issue in modern times. To maintain the privacy and confidentiality of data stored in the cloud, encryption is an inevitable step before uploading the data. However, the centralized management and transmission latency of the cloud make it difficult to support real-time processing and distributed access structures. As a result, fog computing and the Internet of Things (IoT) have emerged as crucial application areas. Fog-assisted proxy re-encryption is a commonly adopted technique for sharing cloud ciphertexts. It allows a semi-trusted proxy to transform a data owner's ciphertext into another re-encrypted ciphertext intended for a data requester, without compromising any information about the original ciphertext. Yet the user revocation and cloud ciphertext renewal problems still lack effective and secure mechanisms. Motivated by this, we propose a revocable conditional proxy re-encryption scheme offering ciphertext evolution (R-CPRE-CE). In particular, a periodically updated time key is used to revoke a user's access privileges, while an access condition prevents a malicious proxy from re-encrypting unauthorized ciphertext. We also demonstrate that our scheme is provably secure under the notion of indistinguishability against adaptively chosen identity and chosen ciphertext attacks in the random oracle model. Performance analysis shows that our scheme reduces the computation time for a complete data access cycle, from the initial query to the final decryption, by approximately 47.05% compared to related schemes.
Industrial Cyber-Physical Systems (ICPSs) play a vital role in modern industries by providing an intelligent foundation for automated operations. With the increasing integration of information-driven processes, ensuring the security of ICPSs has become a critical challenge. These systems are highly vulnerable to attacks such as denial-of-service (DoS), eclipse, and Sybil attacks, which can significantly disrupt industrial operations. This work proposes an effective protection strategy using an Artificial Intelligence (AI)-enabled Smart Contract (SC) framework combined with the Heterogeneous Barzilai-Borwein Support Vector (HBBSV) method for industrial CPS environments. The approach reduces run time and minimizes the probability of attacks. First, ICPSs are secured through a comprehensive exchange of production plant condition-monitoring information using SCs integrated within a blockchain (BC) network. The SC executes the HBBSV strategy to verify the security consensus. The Barzilai-Borwein support-vector algorithm computes the probability of abnormal attack occurrences to ensure that components operate within acceptable production line conditions. When a component remains within these conditions, no security breach occurs; if a component violates the condition boundaries, a security lapse is detected and the component is isolated. The HBBSV method thus strengthens protection against DoS, eclipse, and Sybil attacks. Experimental results demonstrate that the proposed HBBSV approach significantly improves security by enhancing authentication accuracy while reducing run time and authentication time compared to existing techniques.
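The Barzilai-Borwein step size at the heart of the HBBSV optimizer is alpha_k = (s^T s)/(s^T y), with s = x_k - x_{k-1} and y = g_k - g_{k-1}. The sketch below applies it to plain gradient descent on a toy quadratic, not the paper's SVM objective:

```python
# Barzilai-Borwein step-size rule on a toy quadratic
# f(x) = (x1 - 1)^2 + 10*(x2 + 2)^2, minimized at (1, -2).

def grad(x):
    return [2 * (x[0] - 1), 20 * (x[1] + 2)]

def bb_descent(x, iters=30):
    g = grad(x)
    x_new = [xi - 0.01 * gi for xi, gi in zip(x, g)]  # small bootstrap step
    for _ in range(iters):
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]          # x_k - x_{k-1}
        y = [a - b for a, b in zip(g_new, g)]          # g_k - g_{k-1}
        sy = sum(a * b for a, b in zip(s, y))
        alpha = sum(a * a for a in s) / sy if sy != 0 else 1e-4
        x, g = x_new, g_new
        x_new = [xi - alpha * gi for xi, gi in zip(x_new, g_new)]
    return x_new

x = bb_descent([5.0, 5.0])
print([round(v, 3) for v in x])
```

Unlike a fixed learning rate, the BB step adapts to local curvature for free (no line search), which is the source of the run-time reduction the method claims.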
The surge in smishing attacks underscores the urgent need for robust, real-time detection systems powered by advanced deep learning models. This paper introduces PhishNet, a novel ensemble learning framework that integrates transformer-based models (RoBERTa) and large language models (LLMs) (GPT-OSS 120B, LLaMA 3.3 70B, and Qwen3 32B) to significantly enhance smishing detection performance. To mitigate class imbalance, we apply synthetic data augmentation using T5 and leverage various text preprocessing techniques. Our system employs a dual-layer voting mechanism: weighted majority voting among the LLMs, followed by a final ensemble vote to classify messages as ham, spam, or smishing. Experimental results show an average accuracy improvement from 96% to 98.5% compared to the best standalone transformer, and from 93% to 98.5% compared to the LLMs across datasets. Furthermore, we present a real-time, user-friendly application that operationalizes our detection model for practical use. PhishNet demonstrates superior scalability, usability, and detection accuracy, filling critical gaps in current smishing detection methodologies.
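The dual-layer voting mechanism can be sketched as follows; the voter weights here are illustrative placeholders, not the tuned values from the paper:

```python
# Sketch of dual-layer weighted voting: LLM judges vote first, then the
# LLM consensus is combined with the transformer classifier's prediction.
from collections import Counter

def weighted_vote(predictions, weights):
    """Return the label with the largest total weight."""
    tally = Counter()
    for label, w in zip(predictions, weights):
        tally[label] += w
    return tally.most_common(1)[0][0]

# layer 1: three LLMs vote on a message (weights are assumptions)
llm_preds = ["smishing", "smishing", "spam"]
llm_label = weighted_vote(llm_preds, weights=[0.4, 0.35, 0.25])

# layer 2: final ensemble of the LLM consensus and the RoBERTa prediction
roberta_label = "smishing"
final = weighted_vote([llm_label, roberta_label], weights=[0.5, 0.5])
print(final)  # smishing
```

Weighting lets a stronger judge outvote two weaker ones, which is why the ensemble can exceed the accuracy of its best standalone member.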
Artificial intelligence (AI) is reshaping financial systems and services, as intelligent AI agents increasingly form the foundation of autonomous, goal-driven systems capable of reasoning, learning, and action. This review synthesizes recent research and developments in the application of AI agents across core financial domains. Specifically, it covers the deployment of agent-based AI in algorithmic trading, fraud detection, credit risk assessment, robo-advisory, and regulatory compliance (RegTech). The review focuses on advanced agent-based methodologies, including reinforcement learning, multi-agent systems, and autonomous decision-making frameworks, particularly those leveraging large language models (LLMs), contrasting these with traditional AI or purely statistical models. Our primary goals are to consolidate current knowledge, identify significant trends and architectural approaches, review the practical efficiency and impact of current applications, and delineate key challenges and promising future research directions. The increasing sophistication of AI agents offers unprecedented opportunities for innovation in finance, yet presents complex technical, ethical, and regulatory challenges that demand careful consideration and proactive strategies. This review aims to provide a comprehensive understanding of this rapidly evolving landscape, highlighting the role of agent-based AI in the ongoing transformation of the financial industry; it is intended to serve financial institutions, regulators, investors, analysts, researchers, and other key stakeholders in the financial ecosystem.
Rice is one of the most important staple crops globally. Rice plant diseases can severely reduce crop yields and, in extreme cases, lead to total production loss. Early diagnosis enables timely intervention, mitigates disease severity, supports effective treatment strategies, and reduces reliance on excessive pesticide use. Traditional machine learning approaches have been applied to automated rice disease diagnosis; however, these methods depend heavily on manual image preprocessing and handcrafted feature extraction, which are labor-intensive, time-consuming, and often require domain expertise. Recently, end-to-end deep learning (DL) models have been introduced for this task, but they often lack robustness and generalizability across diverse datasets. To address these limitations, we propose a novel end-to-end training framework for convolutional neural network (CNN) and attention-based model ensembles (E2ETCA). The framework integrates features from two state-of-the-art (SOTA) CNN models, Inception V3 and DenseNet-201, and an attention-based vision transformer (ViT) model. The fused features are passed through an additional fully connected layer with softmax activation for final classification. The entire pipeline is trained end-to-end, enhancing its suitability for real-world deployment. Furthermore, we extract and analyze the learned features using a support vector machine (SVM), a traditional machine learning classifier, to provide comparative insights. We evaluate the proposed E2ETCA framework on three publicly available datasets, the Mendeley Rice Leaf Disease Image Samples dataset, the Kaggle Rice Diseases Image dataset, and the Bangladesh Rice Research Institute dataset, as well as a combined version of all three. Using standard evaluation metrics (accuracy, precision, recall, and F1-score), our framework demonstrates superior performance compared to existing SOTA methods in rice disease diagnosis, with potential applicability to other agricultural disease detection tasks.
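A shape-level sketch of the fusion step, concatenating pooled features from the two CNN branches and the ViT branch before a fully connected softmax head. The dimensions are the usual backbone output sizes and are assumptions here, as are the random stand-in features:

```python
# Shape-level sketch of E2ETCA-style feature fusion with a softmax head.
# Random vectors stand in for backbone outputs; dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
f_inception = rng.standard_normal(2048)   # Inception V3 pooled features
f_densenet = rng.standard_normal(1920)    # DenseNet-201 pooled features
f_vit = rng.standard_normal(768)          # ViT [CLS] embedding

fused = np.concatenate([f_inception, f_densenet, f_vit])  # (4736,)

n_classes = 4                             # e.g. rice disease classes
W = rng.standard_normal((n_classes, fused.size)) * 0.01   # FC layer weights
b = np.zeros(n_classes)

logits = W @ fused + b
probs = np.exp(logits - logits.max())     # numerically stable softmax
probs /= probs.sum()
print(fused.shape, round(float(probs.sum()), 6))  # (4736,) 1.0
```

In the actual framework the backbones, the fusion layer, and this head are trained jointly end-to-end rather than with frozen features.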
Camouflaged Object Detection (COD) aims to identify objects that share highly similar patterns, such as texture, intensity, and color, with their surrounding environment. Due to their intrinsic resemblance to the background, camouflaged objects often exhibit vague boundaries and varying scales, making it challenging to accurately locate targets and delineate their indistinct edges. To address this, we propose a novel camouflaged object detection network called the Edge-Guided and Multi-scale Fusion Network (EGMFNet), which leverages edge-guided multi-scale integration for enhanced performance. The model incorporates two innovative components: a Multi-scale Fusion Module (MSFM) and an Edge-Guided Attention Module (EGA). These designs exploit multi-scale features to uncover subtle cues between candidate objects and the background while emphasizing camouflaged object boundaries. Moreover, recognizing the rich contextual information in fused features, we introduce a Dual-Branch Global Context Module (DGCM) to refine features using extensive global context, thereby generating more informative representations. Experimental results on four benchmark datasets demonstrate that EGMFNet outperforms state-of-the-art methods across five evaluation metrics. Specifically, on COD10K, our EGMFNet-P improves F_β by 4.8 points and reduces mean absolute error (MAE) by 0.006 compared with ZoomNeXt; on NC4K, it achieves a 3.6-point increase in F_β; on CAMO and CHAMELEON, it obtains 4.5-point increases in F_β. These consistent gains substantiate the superiority and robustness of EGMFNet.
In scenarios where ground-based cloud computing infrastructure is unavailable, unmanned aerial vehicles (UAVs) can act as mobile edge computing (MEC) servers that provide on-demand computation services for ground terminals. To address the challenge of jointly optimizing task scheduling and UAV trajectories under limited resources and high UAV mobility, this paper presents PER-MATD3, a multi-agent deep reinforcement learning algorithm that incorporates prioritized experience replay (PER) into the Centralized Training with Decentralized Execution (CTDE) framework. Specifically, PER-MATD3 enables each agent to learn a decentralized policy using only local observations during execution, while leveraging a shared replay buffer with prioritized sampling and a centralized critic during training to accelerate convergence and improve sample efficiency. Simulation results show that PER-MATD3 reduces average task latency by up to 23%, improves energy efficiency by 21%, and enhances service coverage compared to state-of-the-art baselines, demonstrating its effectiveness and practicality in scenarios without terrestrial networks.
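The prioritized experience replay component can be illustrated with a minimal proportional-priority buffer; this is a simplified list-based version without the sum-tree and importance-sampling weights used in practice:

```python
# Minimal proportional PER buffer: transitions are sampled with
# probability proportional to |TD-error|^alpha.
import random

class PERBuffer:
    def __init__(self, capacity, alpha=0.6, seed=0):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prio = [], []
        self.rng = random.Random(seed)

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:   # evict oldest when full
            self.data.pop(0)
            self.prio.pop(0)
        self.data.append(transition)
        self.prio.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, k):
        """Draw k transitions, biased toward high TD-error."""
        return self.rng.choices(self.data, weights=self.prio, k=k)

buf = PERBuffer(capacity=100)
buf.add(("s0", "a0", 0.0, "s1"), td_error=0.01)  # nearly learned
buf.add(("s1", "a1", 1.0, "s2"), td_error=5.0)   # surprising transition
batch = buf.sample(10)
# the high-error transition should dominate the sampled batch
print(sum(1 for t in batch if t[0] == "s1") > 5)
```

Replaying surprising transitions more often is what yields the sample-efficiency gain the abstract reports; production implementations use a sum-tree so sampling stays O(log n).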
Brain tumors require precise segmentation for diagnosis and treatment planning due to their complex morphology and heterogeneous characteristics. While MRI-based automatic brain tumor segmentation reduces the burden on medical staff and provides quantitative information, existing methodologies and recent models still struggle to accurately capture and classify the fine boundaries and diverse morphologies of tumors. To address these challenges and maximize segmentation performance, this research introduces a novel SwinUNETR-based model that integrates a new decoder block, the Hierarchical Channel-wise Attention Decoder (HCAD), into a powerful SwinUNETR encoder. The HCAD decoder block utilizes hierarchical features and channel-specific attention mechanisms to fuse information at different scales transmitted from the encoder and to preserve spatial details throughout the reconstruction phase. Rigorous evaluations on the recent BraTS GLI datasets demonstrate that the proposed SwinHCAD model achieves superior segmentation accuracy on both the Dice score and HD95 metrics across all tumor subregions (WT, TC, and ET) compared to baseline models. In particular, ablation studies clarify the rationale and contribution of the model design by verifying the effectiveness of the proposed HCAD decoder block. The results of this study are expected to contribute greatly to the efficiency of clinical diagnosis and treatment planning by increasing the precision of automated brain tumor segmentation.
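The Dice score used above is Dice = 2|P ∩ G| / (|P| + |G|) for a predicted mask P and ground-truth mask G; a toy computation on 2D binary masks (real evaluation runs per tumor subregion on 3D volumes):

```python
# Dice score on toy binary masks (2D here; BraTS evaluation is 3D).
import numpy as np

def dice(pred, truth, eps=1e-7):
    """2*|P∩G| / (|P|+|G|); eps guards the empty-mask case."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

truth = np.zeros((8, 8), dtype=np.uint8)
truth[2:6, 2:6] = 1            # 16-pixel "tumor" region
pred = np.zeros((8, 8), dtype=np.uint8)
pred[3:7, 2:6] = 1             # prediction shifted down one row

print(round(float(dice(pred, truth)), 4))  # 0.75
```

HD95, the complementary metric, instead measures the 95th-percentile boundary distance, so together the two capture both overlap and edge quality.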
Container transportation is pivotal in global trade due to its efficiency, safety, and cost-effectiveness. However, structural defects, particularly in grapple slots, can result in cargo damage, financial loss, and elevated safety risks, including container drops during lifting operations. Timely and accurate inspection before and after transit is therefore essential. Traditional inspection methods rely heavily on manual observation of internal and external surfaces, which is time-consuming, resource-intensive, and prone to subjective errors. Container roofs pose additional challenges due to limited visibility, while grapple slots are especially vulnerable to wear from frequent use. This study proposes a two-stage automated detection framework targeting defects in container roof grapple slots. In the first stage, YOLOv7 is employed to localize grapple slot regions with high precision. In the second stage, ResNet50 classifies the extracted slots as either intact or defective. The results from both stages are integrated into a human-machine interface for real-time visualization and user verification. Experimental evaluations demonstrate that YOLOv7 achieves a 99% detection rate at 100 frames per second (FPS), while ResNet50 attains 87% classification accuracy at 34 FPS. Compared with several state-of-the-art methods, the proposed system offers significant improvements in speed, reliability, and usability, enabling efficient defect identification and visual reconfirmation via the interface.
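The two-stage flow described above (localize slot regions, then classify each crop) can be sketched generically. In the sketch below, `detector` and `classifier` are placeholder callables standing in for the paper's YOLOv7 and ResNet50 models; they are not real model APIs, and the frame is represented as a plain nested list for illustration only.

```python
def inspect_container_roof(frame, detector, classifier):
    """Two-stage inspection sketch: `detector` returns grapple-slot
    bounding boxes (x1, y1, x2, y2), and `classifier` labels each
    cropped region as "intact" or "defective". Both callables are
    hypothetical stand-ins for the trained models."""
    results = []
    for (x1, y1, x2, y2) in detector(frame):
        # crop the slot region out of the frame (rows y1:y2, cols x1:x2)
        crop = [row[x1:x2] for row in frame[y1:y2]]
        label = classifier(crop)
        results.append({"box": (x1, y1, x2, y2), "label": label})
    return results
```

In the actual system, the detector stage runs at 100 FPS and the classifier at 34 FPS, so the crops from one detection pass would typically be batched before classification.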
High-dimensional data causes difficulties in machine learning due to high time consumption and large memory requirements. In particular, in a multi-label environment, complexity grows with the number of labels. Moreover, an optimization problem that fully considers all dependencies between features and labels is difficult to solve. In this study, we propose a novel regression-based multi-label feature selection method that integrates mutual information to better exploit the underlying data structure. By incorporating mutual information into the regression formulation, the model captures not only linear relationships but also complex non-linear dependencies. The proposed objective function simultaneously considers three types of relationships: (1) feature redundancy, (2) feature-label relevance, and (3) inter-label dependency. These three quantities are computed using mutual information, allowing the formulation to capture nonlinear dependencies among variables. These relationships are key factors in multi-label feature selection, and our method expresses them within a unified formulation, enabling efficient optimization while accounting for all of them simultaneously. To efficiently solve the proposed optimization problem under non-negativity constraints, we develop a gradient-based optimization algorithm with fast convergence. Experimental results on seven multi-label datasets show that the proposed method outperforms existing multi-label feature selection techniques.
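As a concrete illustration of the mutual-information quantity that the three relationship terms build on, here is a minimal empirical estimator for two discrete variables. This is a generic textbook estimator, not the paper's exact formulation.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in bits for two equal-length
    sequences of discrete values, using plug-in probability estimates
    from joint and marginal counts."""
    n = len(xs)
    px = Counter(xs)            # marginal counts of X
    py = Counter(ys)            # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi
```

Feature-label relevance would be I(feature; label), feature redundancy I(feature_i; feature_j), and inter-label dependency I(label_i; label_j), each computed with an estimator of this kind.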
With the increase in internet-connected devices and the dependency on online services, the threat of Distributed Denial of Service (DDoS) attacks has become a significant concern in cybersecurity. This research introduces a novel decentralized method called Federated Random Forest Bidirectional Long Short-Term Memory (FRF-BiLSTM) for detecting DDoS attacks, utilizing Bidirectional Long Short-Term Memory networks (BiLSTMs) to analyze sequences in both forward and backward directions. The proposed system follows a multi-step process, beginning with the collection of datasets from different edge devices and network nodes. Recursive feature elimination (RFE) with random forest is used to select features from the CICDDoS2019 dataset, on which a BiLSTM model is trained on local nodes. Local models are trained until convergence or stability criteria are met, while simultaneously sharing updates globally for collaborative learning. A centralised server evaluates real-time traffic using the global BiLSTM model, which triggers alerts for potential DDoS attacks. Furthermore, blockchain technology is employed to secure model updates and provide an immutable audit trail, ensuring trust and accountability among network nodes. To verify its effectiveness, experiments were conducted using the CICDoS2017, NSL-KDD, and CICIDS benchmark datasets alongside other existing models. The results show that the proposed model achieves a mean accuracy of 97.1% with an average training delay of 88.7 s and a testing delay of 21.4 s. The model demonstrates scalability and the best detection performance in large-scale attack scenarios.
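The global-update step, in which local models share updates for collaborative learning, is commonly realized by FedAvg-style aggregation: the server averages client parameters weighted by local dataset size. The sketch below operates on flat parameter lists for clarity; the paper's exact aggregation rule may differ.

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation sketch: each client's parameter vector
    contributes in proportion to its local dataset size. `client_weights`
    is a list of equal-length flat parameter lists; `client_sizes` gives
    the number of local samples per client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_w = [0.0] * n_params
    for w, size in zip(client_weights, client_sizes):
        for j in range(n_params):
            global_w[j] += (size / total) * w[j]
    return global_w
```

In a real deployment each round would redistribute `global_w` to the clients, and (as the abstract describes) the update messages themselves could be hashed onto a blockchain for an immutable audit trail.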
Abstract: Providing safe and quality food is crucial for every household and is of great significance to the growth of any society. Food production is a complex process that spans every stage from seed to harvest, storage, preparation, and consumption. This paper seeks to demystify the role of artificial intelligence, machine learning (ML), deep learning (DL), and computer vision (CV) in ensuring food safety and quality. These technologies are well suited to such problems and provide strong assurance of food safety. CV is particularly valuable today because it improves food processing quality and benefits both firms and researchers; accordingly, image processing and computer vision are now incorporated into all facets of food production. In this field, DL and ML are applied to identify the type of food as well as its quality. Comparing data- and result-oriented perspectives, we found similarities across the various approaches. As a result, the findings of this study will be helpful for scholars seeking a suitable approach to assess food quality. The paper indicates which food products have been discussed by other scholars and points readers to related work for further research. DL has also been effectively applied to identifying the quality and safety of foods in the market. This paper describes current practices and concerns in ML and DL, and probable trends for their future development.
Funding: Funded and supported by the Ongoing Research Funding program (ORF-2025-314), King Saud University, Riyadh, Saudi Arabia.
Abstract: The rapid digitalization of urban infrastructure has made smart cities increasingly vulnerable to sophisticated cyber threats. In the evolving landscape of cybersecurity, the efficacy of Intrusion Detection Systems (IDS) is increasingly measured by technical performance, operational usability, and adaptability. This study introduces and rigorously evaluates a Human-Computer Interaction (HCI)-Integrated IDS, built with a Convolutional Neural Network (CNN), a CNN-Long Short-Term Memory (LSTM) model, and a Random Forest (RF), against both a Baseline Machine Learning (ML) model and a Traditional IDS model, through an extensive experimental framework encompassing a broad set of performance metrics, including detection latency, accuracy, alert prioritization, classification errors, system throughput, usability, ROC-AUC, precision-recall, confusion matrix analysis, and statistical accuracy measures. Our findings consistently demonstrate the superiority of the HCI-Integrated approach across three major datasets (CICIDS 2017, KDD Cup 1999, and UNSW-NB15). Experimental results indicate that the HCI-Integrated model outperforms its counterparts, achieving an AUC-ROC of 0.99, a precision of 0.93, and a recall of 0.96, while maintaining the lowest false positive rate (0.03) and the fastest detection time (~1.5 s). These findings validate the efficacy of incorporating HCI to enhance anomaly detection capabilities, improve responsiveness, and reduce alert fatigue in critical smart city applications. The model achieves markedly lower detection times, higher accuracy across all threat categories, reduced false positive and false negative rates, and enhanced system throughput under concurrent load conditions. The HCI-Integrated IDS excels in alert contextualization and prioritization, offering more actionable insights while minimizing analyst fatigue. Usability feedback underscores increased analyst confidence and operational clarity, reinforcing the importance of user-centered design. These results collectively position the HCI-Integrated IDS as a highly effective, scalable, and human-aligned solution for modern threat detection environments.
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2025R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Blockchain Technology (BT) has emerged as a transformative solution for improving the efficacy, security, and transparency of supply chain intelligence. Traditional Supply Chain Management (SCM) systems frequently suffer from problems such as data silos, a lack of real-time visibility, fraudulent activities, and inefficiencies in tracking and traceability. Blockchain's decentralized and irreversible ledger offers a solid foundation for dealing with these issues; it facilitates trust, security, and real-time data sharing among all parties involved. Through an examination of critical technologies, methodologies, and applications, this paper delves deeply into a computer-modeling-based blockchain framework for supply chain intelligence. The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field. As part of the process, we surveyed the research on blockchain-based supply chain models, smart contracts, Decentralized Applications (DApps), and how they connect to other cutting-edge innovations such as Artificial Intelligence (AI) and the Internet of Things (IoT). To quantify blockchain's performance, the study introduces analytical models for efficiency improvement, security enhancement, and scalability, enabling computational assessment and simulation of supply chain scenarios. These models provide a structured approach to predicting system performance under varying parameters. According to the results, BT increases efficiency by automating transactions using smart contracts, strengthens security through cryptographic techniques, and improves supply chain transparency by providing immutable records. Regulatory concerns, interoperability challenges, and scalability all work against broad adoption. To fully automate and intelligently integrate blockchain with AI and the IoT, additional research is needed to address blockchain's current limitations and realize its potential for supply chain intelligence.
Abstract: Cyber-Physical Systems (CPS) represent an integration of computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, the Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions to develop comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the digital transformation era.
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-DDRSP2501).
Abstract: This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We meticulously develop the TIHTBXII's mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the bias and RMSE of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests. This application highlights how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number (ASN) while effectively managing consumer and producer risks. Empirical validation using real-world datasets, including "Active Repair Duration," "Groundwater Contaminant Measurements," and "Dominica COVID-19 Mortality," further demonstrates the TIHTBXII's superior fit compared to existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, leading to more informed and robust decision-making.
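For orientation, the base two-parameter Burr XII distribution that this family extends has a simple closed form, shown below. The TIHTBXII model adds heavy-tail structure on top of this; its exact parameterization is defined in the paper and is not reproduced here.

```python
def burr12_cdf(x, c, k):
    """CDF of the base two-parameter Burr XII distribution,
    F(x) = 1 - (1 + x^c)^(-k), for x > 0 and shape parameters c, k > 0."""
    return 1.0 - (1.0 + x ** c) ** (-k)

def burr12_pdf(x, c, k):
    """PDF of the base Burr XII distribution,
    f(x) = c*k*x^(c-1) * (1 + x^c)^(-(k+1)), the derivative of the CDF."""
    return c * k * x ** (c - 1) * (1.0 + x ** c) ** (-(k + 1))
```

The tail behaves like x^(-c*k), which is what makes Burr XII-type models attractive for the skewed, heavy-tailed reliability data the abstract describes.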
Funding: Partially supported by MRC (MC_PC_17171), Royal Society (RP202G0230), BHF (AA/18/3/34220), Hope Foundation for Cancer Research (RM60G0680), GCRF (20P2PF11), Sino-UK Industrial Fund (RP202G0289), LIAS (20P2ED10, 20P2RE969), Data Science Enhancement Fund (20P2RE237), Fight for Sight (24NN201), Sino-UK Education Fund (OP202006), and BBSRC (RM32G0178B8).
Abstract: The Bat algorithm, a metaheuristic optimization technique inspired by the foraging behaviour of bats, has been employed to tackle optimization problems. Known for its ease of implementation, parameter tunability, and strong global search capabilities, the algorithm finds application across diverse optimization problem domains. However, in the face of increasingly complex optimization challenges, the Bat algorithm encounters certain limitations, such as slow convergence and sensitivity to initial solutions. To tackle these challenges, the present study incorporates a range of optimization components into the Bat algorithm, proposing a variant called PKEBA. A projection screening strategy mitigates sensitivity to initial solutions, enhancing the quality of the initial solution set. A kinetic adaptation strategy reforms exploration patterns, while an elite communication strategy enhances group interaction to keep the algorithm from becoming trapped in local optima. The effectiveness of the proposed PKEBA is then rigorously evaluated. Testing encompasses 30 benchmark functions from IEEE CEC2014, featuring ablation experiments and comparative assessments against classical algorithms and their variants. Moreover, real-world engineering problems are employed for further validation. The results conclusively demonstrate that PKEBA exhibits superior convergence and precision compared to existing algorithms.
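For reference, here is a minimal sketch of the standard Bat algorithm that PKEBA builds on: frequency-tuned velocity updates pull bats toward the current best solution, and a pulse-rate-gated local random walk refines it. All parameter values are illustrative defaults, and PKEBA's three added strategies (projection screening, kinetic adaptation, elite communication) are deliberately not reproduced.

```python
import random

def bat_algorithm(obj, dim, n_bats=20, iters=200, fmin=0.0, fmax=2.0,
                  lb=-5.0, ub=5.0, seed=0):
    """Minimize `obj` over [lb, ub]^dim with a basic Bat algorithm.
    Fixed loudness and pulse rate are used for simplicity."""
    rng = random.Random(seed)
    x = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    fit = [obj(xi) for xi in x]
    best_i = min(range(n_bats), key=lambda i: fit[i])
    best_x, best_f = list(x[best_i]), fit[best_i]
    loudness, pulse = 0.9, 0.5
    for _ in range(iters):
        for i in range(n_bats):
            # frequency-tuned velocity update toward the global best
            freq = fmin + (fmax - fmin) * rng.random()
            cand = []
            for d in range(dim):
                v[i][d] += (x[i][d] - best_x[d]) * freq
                cand.append(min(ub, max(lb, x[i][d] + v[i][d])))
            if rng.random() > pulse:
                # local random walk around the current best solution
                cand = [min(ub, max(lb, best_x[d] + 0.01 * rng.gauss(0, 1)))
                        for d in range(dim)]
            f_cand = obj(cand)
            if f_cand < fit[i] and rng.random() < loudness:
                x[i], fit[i] = cand, f_cand
            if f_cand < best_f:
                best_x, best_f = list(cand), f_cand
    return best_x, best_f
```

The slow convergence and initial-solution sensitivity the abstract mentions are visible in this baseline: the random uniform initialization and the fixed-scale local walk are exactly the components PKEBA's strategies replace.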
Abstract: When performing English-to-Tamil Neural Machine Translation (NMT), end users face several challenges due to Tamil's rich morphology, free word order, and limited annotated corpora. Although available transformer-based models offer strong baselines, they compromise syntactic awareness and the detection and management of offensive content in cluttered, noisy, and informal text. In this paper, we present POSDEP-Offense-Trans, a multi-task NMT framework that combines Part-of-Speech (POS) tagging and Dependency Parsing (DEP) methods with a robust offensive language classification module. Our architecture enriches the Transformer encoder with syntax-aware embeddings and provides syntax-guided attention mechanisms. The architecture incorporates a structure-aware contrastive loss that reinforces syntactic consistency and deploys auxiliary classification heads for POS tagging, dependency parsing, and multi-class offensive detection. The offensive language classifier operates at both sentence and token levels and is guided by syntactic features and formal finite automata rules that model offensive language structures: hate speech, profanity, sarcasm, and threats. Using this architecture, we construct a syntactically enriched, socially annotated corpus. Experimental results show improvements in translation quality, with a BLEU score of 33.5, UAS/LAS parsing accuracies of 92.4% and 90%, and a 4.5% F1-score gain in offensive content detection compared with baseline POS+DEP+Offense models. The proposed model also achieved 92.3% in offensive content neutralization, as confirmed by ablation studies. This comprehensive English-Tamil NMT model unifies syntactic modelling and ethical filtering, laying the groundwork for applications in social media moderation, hate speech mitigation, and policy-compliant multilingual content generation.
Abstract: Federated Learning (FL) has become a leading decentralized solution that enables multiple clients to train a model in a collaborative environment without directly sharing raw data, making it suitable for privacy-sensitive applications such as healthcare, finance, and smart systems. As the field continues to evolve, the research landscape has become more complex and scattered, covering different system designs, training methods, and privacy techniques. This survey is organized around three core challenges: how the data is distributed, how models are synchronized, and how to defend against attacks. It provides a structured and up-to-date review of FL research from 2023 to 2025, offering a unified taxonomy that categorizes works by data distribution (Horizontal FL, Vertical FL, Federated Transfer Learning, and Personalized FL), training synchronization (synchronous and asynchronous FL), optimization strategies, and threat models (data leakage and poisoning attacks). In particular, we summarize the latest contributions in Vertical FL frameworks for secure multi-party learning, communication-efficient Horizontal FL, and domain-adaptive Federated Transfer Learning. Furthermore, we examine synchronization techniques addressing system heterogeneity, including straggler mitigation in synchronous FL and staleness management in asynchronous FL. The survey covers security threats in FL, such as gradient inversion, membership inference, and poisoning attacks, as well as their defense strategies, which include privacy-preserving aggregation and anomaly detection. The paper concludes by outlining unresolved issues and highlighting challenges in handling personalized models, scalability, and real-world adoption.
Funding: Supported in part by the National Science and Technology Council of Taiwan under contract numbers NSTC 114-2221-E-019-055-MY2 and NSTC 114-2221-E-019-069.
Abstract: Cloud services, favored by many enterprises for their high flexibility and ease of operation, are widely used for data storage and processing. However, the high latency and transmission overheads of the cloud architecture make it difficult to respond quickly to the demands of IoT applications and local computation. To make up for these deficiencies of the cloud, fog computing has emerged to play a critical role in IoT applications. It decentralizes computing power to lower-level nodes close to data sources, achieving low latency and distributed processing. With data being frequently exchanged and shared between multiple nodes, it becomes a challenge to authorize data securely and efficiently while protecting user privacy. To address this challenge, proxy re-encryption (PRE) schemes provide a feasible way to allow an intermediary proxy node to re-encrypt ciphertext designated for different authorized data requesters without compromising any plaintext information. Since the proxy is viewed as a semi-trusted party, measures should be taken to prevent malicious behavior and reduce the risk of data leakage when implementing PRE schemes. This paper proposes a new fog-assisted identity-based PRE scheme supporting anonymous key generation, equality testing, and user revocation to fulfill various IoT application requirements. Specifically, in a traditional identity-based public key architecture, the key escrow problem and the necessity of a secure channel are major security concerns; we utilize an anonymous key generation technique to solve these problems. The equality test functionality further enables a cloud server to inspect whether two candidate trapdoors contain an identical keyword. In particular, the proposed scheme realizes fine-grained user-level authorization while maintaining strong key confidentiality. To revoke an invalid user identity, we add a revocation list to the system flows to restrict access privileges without additional computation cost. Regarding security, our system is shown to meet the security notions of IND-PrID-CCA and OW-ID-CCA under the Decisional Bilinear Diffie-Hellman (DBDH) assumption.
Funding: Supported in part by the National Science and Technology Council of the Republic of China under contract numbers NSTC 114-2221-E-019-055-MY2 and NSTC 114-2221-E-019-069.
Abstract: Cloud data sharing is an important issue in modern times. To maintain the privacy and confidentiality of data stored in the cloud, encryption is an inevitable step before uploading data. However, the centralized management and transmission latency of the cloud make it difficult to support real-time processing and distributed access structures. As a result, fog computing and the Internet of Things (IoT) have emerged as crucial applications. Fog-assisted proxy re-encryption is a commonly adopted technique for sharing cloud ciphertexts. It allows a semi-trusted proxy to transform a data owner's ciphertext into another re-encrypted ciphertext intended for a data requester, without compromising any information about the original ciphertext. Yet, the user revocation and cloud ciphertext renewal problems still lack effective and secure mechanisms. Motivated by this, we propose a revocable conditional proxy re-encryption scheme offering ciphertext evolution (R-CPRE-CE). In particular, a periodically updated time key is used to revoke a user's access privileges, while an access condition prevents a malicious proxy from re-encrypting unauthorized ciphertext. We also demonstrate that our scheme is provably secure under the notion of indistinguishability against adaptively chosen identity and chosen ciphertext attacks in the random oracle model. Performance analysis shows that our scheme reduces the computation time for a complete data access cycle, from the initial query to the final decryption, by approximately 47.05% compared to related schemes.
Abstract: Industrial Cyber-Physical Systems (ICPSs) play a vital role in modern industries by providing an intellectual foundation for automated operations. With the increasing integration of information-driven processes, ensuring the security of ICPSs has become a critical challenge. These systems are highly vulnerable to attacks such as denial-of-service (DoS), eclipse, and Sybil attacks, which can significantly disrupt industrial operations. This work proposes an effective protection strategy using an Artificial Intelligence (AI)-enabled Smart Contract (SC) framework combined with the Heterogeneous Barzilai-Borwein Support Vector (HBBSV) method for industrial CPS environments. The approach reduces run time and minimizes the probability of attacks. Initially, secured ICPSs are achieved through a comprehensive exchange of views on production plant strategies for condition monitoring, using SCs and blockchain (BC) integrated within a BC network. The SC executes the HBBSV strategy to verify the security consensus. The Barzilai-Borwein support-vectorized algorithm computes abnormal attack occurrence probabilities to ensure that components operate within acceptable production line conditions. When a component remains within these conditions, no security breach occurs. Conversely, if a component does not satisfy the condition boundaries, a security lapse is detected and those components are isolated. The HBBSV method thus strengthens protection against DoS, eclipse, and Sybil attacks. Experimental results demonstrate that the proposed HBBSV approach significantly improves security by enhancing authentication accuracy while reducing run time and authentication time compared to existing techniques.
Funding: Funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under Grant No. (GPIP:1074-612-2024).
Abstract: The surge in smishing attacks underscores the urgent need for robust, real-time detection systems powered by advanced deep learning models. This paper introduces PhishNet, a novel ensemble learning framework that integrates transformer-based models (RoBERTa) and large language models (LLMs) (GPT-OSS 120B, LLaMA 3.3 70B, and Qwen3 32B) to significantly enhance smishing detection performance. To mitigate class imbalance, we apply synthetic data augmentation using T5 and leverage various text preprocessing techniques. Our system employs a dual-layer voting mechanism: weighted majority voting among the LLMs and a final ensemble vote to classify messages as ham, spam, or smishing. Experimental results show an average accuracy improvement from 96% to 98.5% compared to the best standalone transformer, and from 93% to 98.5% compared to the LLMs across datasets. Furthermore, we present a real-time, user-friendly application to operationalize our detection model for practical use. PhishNet demonstrates superior scalability, usability, and detection accuracy, filling critical gaps in current smishing detection methodologies.
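The first voting layer described above can be sketched as plain weighted majority voting over the three message classes. The weights and label set below are illustrative only; the paper's dual-layer scheme adds a final ensemble vote on top of this step.

```python
from collections import defaultdict

def weighted_majority_vote(predictions, weights,
                           labels=("ham", "spam", "smishing")):
    """Weighted majority voting sketch: each model's predicted label
    accumulates that model's weight, and the label with the highest
    total score wins. `predictions` and `weights` are parallel lists,
    one entry per model."""
    scores = defaultdict(float)
    for pred, w in zip(predictions, weights):
        scores[pred] += w
    return max(labels, key=lambda lbl: scores[lbl])
```

In a two-layer setup, this function would first combine the LLM votes into one prediction, which then enters a final vote alongside the transformer-based model's prediction.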
Funding: Supported by the Ministry of Education and Science of the Republic of North Macedonia through the project "Utilizing AI and National Large Language Models to Advance Macedonian Language Capabilities".
Abstract: Artificial intelligence (AI) is reshaping financial systems and services, as intelligent AI agents increasingly form the foundation of autonomous, goal-driven systems capable of reasoning, learning, and action. This review synthesizes recent research and developments in the application of AI agents across core financial domains. Specifically, it covers the deployment of agent-based AI in algorithmic trading, fraud detection, credit risk assessment, robo-advisory, and regulatory compliance (RegTech). The review focuses on advanced agent-based methodologies, including reinforcement learning, multi-agent systems, and autonomous decision-making frameworks, particularly those leveraging large language models (LLMs), contrasting these with traditional AI or purely statistical models. Our primary goals are to consolidate current knowledge, identify significant trends and architectural approaches, review the practical efficiency and impact of current applications, and delineate key challenges and promising future research directions. The increasing sophistication of AI agents offers unprecedented opportunities for innovation in finance, yet presents complex technical, ethical, and regulatory challenges that demand careful consideration and proactive strategies. This review aims to provide a comprehensive understanding of this rapidly evolving landscape, highlighting the role of agent-based AI in the ongoing transformation of the financial industry, and is intended to serve financial institutions, regulators, investors, analysts, researchers, and other key stakeholders in the financial ecosystem.
Funding: The authors thank Begum Rokeya University, Rangpur, and the United Arab Emirates University, UAE, for partially supporting this work.
Abstract: Rice is one of the most important staple crops globally. Rice plant diseases can severely reduce crop yields and, in extreme cases, lead to total production loss. Early diagnosis enables timely intervention, mitigates disease severity, supports effective treatment strategies, and reduces reliance on excessive pesticide use. Traditional machine learning approaches have been applied to automated rice disease diagnosis; however, these methods depend heavily on manual image preprocessing and handcrafted feature extraction, which are labor-intensive and time-consuming and often require domain expertise. Recently, end-to-end deep learning (DL) models have been introduced for this task, but they often lack robustness and generalizability across diverse datasets. To address these limitations, we propose a novel end-to-end training framework for convolutional neural network (CNN) and attention-based model ensembles (E2ETCA). This framework integrates features from two state-of-the-art (SOTA) CNN models, Inception V3 and DenseNet-201, and an attention-based vision transformer (ViT) model. The fused features are passed through an additional fully connected layer with softmax activation for final classification. The entire process is trained end-to-end, enhancing its suitability for real-world deployment. Furthermore, we extract and analyze the learned features using a support vector machine (SVM), a traditional machine learning classifier, to provide comparative insights. We evaluate the proposed E2ETCA framework on three publicly available datasets, the Mendeley Rice Leaf Disease Image Samples dataset, the Kaggle Rice Diseases Image dataset, and the Bangladesh Rice Research Institute dataset, as well as a combined version of all three. Using standard evaluation metrics (accuracy, precision, recall, and F1-score), our framework demonstrates superior performance compared to existing SOTA methods in rice disease diagnosis, with potential applicability to other agricultural disease detection tasks.
Funding: Financially supported by the Chongqing University of Technology Graduate Innovation Foundation (Grant No. gzlcx20253267).
Abstract: Camouflaged Object Detection (COD) aims to identify objects that share highly similar patterns, such as texture, intensity, and color, with their surrounding environment. Due to their intrinsic resemblance to the background, camouflaged objects often exhibit vague boundaries and varying scales, making it challenging to accurately locate targets and delineate their indistinct edges. To address this, we propose a novel camouflaged object detection network called the Edge-Guided and Multi-scale Fusion Network (EGMFNet), which leverages edge-guided multi-scale integration for enhanced performance. The model incorporates two innovative components: a Multi-scale Fusion Module (MSFM) and an Edge-Guided Attention Module (EGA). These designs exploit multi-scale features to uncover subtle cues between candidate objects and the background while emphasizing camouflaged object boundaries. Moreover, recognizing the rich contextual information in fused features, we introduce a Dual-Branch Global Context Module (DGCM) to refine features using extensive global context, thereby generating more informative representations. Experimental results on four benchmark datasets demonstrate that EGMFNet outperforms state-of-the-art methods across five evaluation metrics. Specifically, on COD10K, our EGMFNet-P improves F_β by 4.8 points and reduces mean absolute error (MAE) by 0.006 compared with ZoomNeXt; on NC4K, it achieves a 3.6-point increase in F_β; and on CAMO and CHAMELEON, it obtains 4.5-point increases in F_β. These consistent gains substantiate the superiority and robustness of EGMFNet.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61701100.
Abstract: In scenarios where ground-based cloud computing infrastructure is unavailable, unmanned aerial vehicles (UAVs) can act as mobile edge computing (MEC) servers that provide on-demand computation services for ground terminals. To address the challenge of jointly optimizing task scheduling and UAV trajectory under limited resources and high UAV mobility, this paper presents PER-MATD3, a multi-agent deep reinforcement learning algorithm that incorporates prioritized experience replay (PER) into the Centralized Training with Decentralized Execution (CTDE) framework. Specifically, PER-MATD3 enables each agent to learn a decentralized policy using only local observations during execution, while leveraging a shared replay buffer with prioritized sampling and a centralized critic during training to accelerate convergence and improve sample efficiency. Simulation results show that, compared with state-of-the-art baselines, PER-MATD3 reduces average task latency by up to 23%, improves energy efficiency by 21%, and enhances service coverage, demonstrating its effectiveness and practicality in scenarios without terrestrial networks.
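The prioritized-sampling mechanism at the core of PER can be sketched independently of the MATD3 machinery: transitions are drawn with probability proportional to priority^alpha and re-weighted by importance-sampling factors. The buffer contents and the alpha/beta values below are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

class PERBuffer:
    """Minimal prioritized replay buffer (proportional variant)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:       # drop oldest when full
            self.data.pop(0); self.prios.pop(0)
        self.data.append(transition)
        self.prios.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size, beta=0.4):
        p = np.asarray(self.prios); p = p / p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=p)
        # Importance-sampling weights correct the non-uniform sampling bias,
        # normalized by the maximum so weights stay in (0, 1].
        w = (len(self.data) * p[idx]) ** (-beta)
        w = w / w.max()
        return [self.data[i] for i in idx], w, idx

buf = PERBuffer(capacity=100)
for _ in range(50):
    buf.add(("s", "a", "r", "s_next"), td_error=rng.normal())
batch, weights, idx = buf.sample(8)
print(len(batch), weights.max())  # 8 1.0
```

In PER-MATD3 this buffer would be shared across agents during centralized training, with priorities refreshed from the critics' TD errors after each update.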
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the Metaverse Support Program to Nurture the Best Talents (IITP-2024-RS-2023-00254529), funded by the Korea government (MSIT).
Abstract: Brain tumors require precise segmentation for diagnosis and treatment planning due to their complex morphology and heterogeneous characteristics. While MRI-based automatic brain tumor segmentation reduces the burden on medical staff and provides quantitative information, existing methodologies and recent models still struggle to accurately capture and classify the fine boundaries and diverse morphologies of tumors. To address these challenges and maximize segmentation performance, this research introduces a novel SwinUNETR-based model that integrates a new decoder block, the Hierarchical Channel-wise Attention Decoder (HCAD), into a powerful SwinUNETR encoder. The HCAD decoder block utilizes hierarchical features and channel-specific attention mechanisms to fuse information at the different scales transmitted from the encoder and to preserve spatial details throughout the reconstruction phase. Rigorous evaluations on the recent BraTS GLI datasets demonstrate that the proposed SwinHCAD model achieves superior segmentation accuracy on both the Dice score and HD95 metrics across all tumor subregions (WT, TC, and ET) compared with baseline models. Ablation studies further verify the effectiveness of the proposed HCAD decoder block and clarify the rationale behind the model design. The results of this study are expected to contribute substantially to enhancing the efficiency of clinical diagnosis and treatment planning by increasing the precision of automated brain tumor segmentation.
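The channel-wise attention principle behind a decoder block like HCAD can be shown in a few lines: global average pooling produces a per-channel descriptor, and a small gating network rescales each channel. This is a generic squeeze-and-excitation-style sketch under assumed sizes, not the paper's exact block.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(3)
feat = rng.normal(size=(16, 8, 8))          # (channels, H, W) toy feature map

# Squeeze: global average pooling gives one descriptor per channel.
squeeze = feat.mean(axis=(1, 2))            # shape (16,)

# Excite: a two-layer gating function (sizes are illustrative assumptions).
W1 = rng.normal(scale=0.1, size=(16, 4))    # channel reduction
W2 = rng.normal(scale=0.1, size=(4, 16))    # channel expansion
gate = sigmoid(np.maximum(squeeze @ W1, 0) @ W2)   # ReLU then sigmoid, in (0, 1)

# Recalibrate: each channel is scaled by its learned attention weight.
recalibrated = feat * gate[:, None, None]

print(recalibrated.shape)  # (16, 8, 8)
```

In a hierarchical decoder this recalibration would be applied at every scale before the upsampled features are merged with the corresponding encoder skip connection.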
Abstract: Container transportation is pivotal in global trade due to its efficiency, safety, and cost-effectiveness. However, structural defects, particularly in grapple slots, can result in cargo damage, financial loss, and elevated safety risks, including container drops during lifting operations. Timely and accurate inspection before and after transit is therefore essential. Traditional inspection methods rely heavily on manual observation of internal and external surfaces, which is time-consuming, resource-intensive, and prone to subjective errors. Container roofs pose additional challenges due to limited visibility, while grapple slots are especially vulnerable to wear from frequent use. This study proposes a two-stage automated detection framework targeting defects in container roof grapple slots. In the first stage, YOLOv7 is employed to localize grapple slot regions with high precision. In the second stage, ResNet50 classifies the extracted slots as either intact or defective. The results from both stages are integrated into a human-machine interface for real-time visualization and user verification. Experimental evaluations demonstrate that YOLOv7 achieves a 99% detection rate at 100 frames per second (FPS), while ResNet50 attains 87% classification accuracy at 34 FPS. Compared with state-of-the-art approaches, the proposed system offers significant improvements in speed, reliability, and usability, enabling efficient defect identification and visual reconfirmation via the interface.
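The two-stage flow, detector proposes slot regions, classifier labels each crop, reduces to a simple pipeline. In this hedged sketch both models are replaced with stubs; the (x1, y1, x2, y2) box format, the score threshold, and the toy classifier rule are all assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def detect_slots(image):
    # Stub for the stage-1 detector (YOLOv7 in the paper):
    # returns candidate boxes as ((x1, y1, x2, y2), confidence).
    return [((10, 10, 50, 40), 0.98), ((60, 20, 100, 50), 0.40)]

def classify_crop(crop):
    # Stub for the stage-2 classifier (ResNet50 in the paper):
    # returns a ("intact" | "defective") label with a confidence.
    return ("defective", 0.87) if crop.mean() > 0 else ("intact", 0.91)

def inspect(image, det_thresh=0.5):
    results = []
    for (x1, y1, x2, y2), score in detect_slots(image):
        if score < det_thresh:          # discard low-confidence detections
            continue
        crop = image[y1:y2, x1:x2]      # extract the slot region
        label, prob = classify_crop(crop)
        results.append({"box": (x1, y1, x2, y2), "label": label, "prob": prob})
    return results

image = np.abs(rng.normal(size=(120, 160)))  # toy roof image
report = inspect(image)
print(len(report), report[0]["label"])       # 1 defective
```

The `report` list is exactly the kind of structured output that the described human-machine interface would render for visual reconfirmation.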
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (RS-2020-NR049579).
Abstract: High-dimensional data causes difficulties in machine learning due to high time consumption and large memory requirements. In a multi-label setting in particular, complexity grows further with the number of labels. Moreover, an optimization problem that fully considers all dependencies between features and labels is difficult to solve. In this study, we propose a novel regression-based multi-label feature selection method that integrates mutual information to better exploit the underlying data structure. By incorporating mutual information into the regression formulation, the model captures not only linear relationships but also complex non-linear dependencies. The proposed objective function simultaneously considers three types of relationships: (1) feature redundancy, (2) feature-label relevance, and (3) inter-label dependency. These three quantities, all computed using mutual information, are key factors in multi-label feature selection, and our method expresses them within a unified formulation, enabling efficient optimization while accounting for all of them simultaneously. To efficiently solve the proposed optimization problem under non-negativity constraints, we develop a gradient-based optimization algorithm with fast convergence. Experimental results on seven multi-label datasets show that the proposed method outperforms existing multi-label feature selection techniques.
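The three mutual-information quantities the abstract names all reduce to MI between pairs of variables. The sketch below estimates one of them, feature-label relevance I(f_i; y_k), with a plug-in estimator over discrete values; the data, binning, and estimator are illustrative, not the paper's formulation.

```python
import numpy as np

def mutual_info(x, y):
    # Plug-in MI estimate (in nats) for two discrete 1-D arrays.
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

rng = np.random.default_rng(5)
X = rng.integers(0, 3, size=(200, 4))   # 4 discrete features
# Label 0 is a deterministic function of feature 0; label 1 is pure noise.
Y = np.stack([X[:, 0] % 2, rng.integers(0, 2, 200)], axis=1)

# Feature-label relevance matrix I(f_i; y_k); redundancy I(f_i; f_j) and
# label dependency I(y_k; y_l) would be built the same way.
relevance = np.array([[mutual_info(X[:, i], Y[:, k])
                       for k in range(Y.shape[1])]
                      for i in range(X.shape[1])])

print(relevance.shape)                    # (4, 2)
print(relevance[0, 0] > relevance[1, 0])  # feature 0 determines label 0
```

In the proposed method these MI matrices would enter the regression objective as fixed coefficients, so the optimization itself remains gradient-based.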
Funding: Supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2025S1A5A2A01005171), and by the BK21 program at Chungbuk National University (2025).
Abstract: With the increase in internet-connected devices and the dependency on online services, the threat of Distributed Denial of Service (DDoS) attacks has become a significant concern in cybersecurity. This research introduces a novel decentralized method called Federated Random Forest Bidirectional Long Short-Term Memory (FRF-BiLSTM) for detecting DDoS attacks, utilizing Bidirectional Long Short-Term Memory networks (BiLSTMs) to analyze traffic sequences in both forward and backward directions. The proposed system follows a multi-step process, beginning with the collection of datasets from different edge devices and network nodes. Recursive feature elimination (RFE) with random forest is used to select features from the CICDDoS2019 dataset, on which a BiLSTM model is trained at local nodes. Local models are trained until convergence or stability criteria are met, while simultaneously sharing updates globally for collaborative learning. A centralized server evaluates real-time traffic using the global BiLSTM model, which triggers alerts for potential DDoS attacks. Furthermore, blockchain technology is employed to secure model updates and provide an immutable audit trail, thereby ensuring trust and accountability among network nodes. To verify its effectiveness, experiments were conducted on the CICDoS2017, NSL-KDD, and CICIDS benchmark datasets alongside other existing models. The results show that the proposed model achieves a mean accuracy of 97.1% with an average training delay of 88.7 s and a testing delay of 21.4 s, and demonstrates scalability and the best detection performance in large-scale attack scenarios.
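The recursive feature elimination step can be sketched without heavyweight dependencies. The paper uses random-forest importances; in this hedged stand-in, absolute correlation with the label plays the role of the importance score, and the synthetic data, feature counts, and scoring rule are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 300, 6
X = rng.normal(size=(n, d))
# Synthetic binary label driven by features 0 and 3 only.
y = (2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=n) > 0).astype(float)

def importance(X, y):
    # Stand-in for random-forest importances: |corr(feature, label)|.
    return np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

def rfe(X, y, n_keep):
    """Recursively drop the least important feature until n_keep remain."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        scores = importance(X[:, remaining], y)
        remaining.pop(int(np.argmin(scores)))   # eliminate weakest feature
    return remaining

selected = rfe(X, y, n_keep=2)
print(sorted(selected))  # the two informative features survive
```

In the described system, a selection like this would run once on the training data, after which the BiLSTM models at the local nodes train only on the surviving feature columns.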