Providing safe, high-quality food is crucial for every household and is of central importance to the growth of any society. It is a complex process that deals with every stage of food production, from seed to harvest, storage, preparation, and consumption. This paper seeks to clarify the importance of artificial intelligence, machine learning (ML), deep learning (DL), and computer vision (CV) in ensuring food safety and quality. These technologies are well suited to such problems and give strong assurance of food safety. CV is especially valuable today because it improves the quality of food processing and benefits both firms and researchers; accordingly, image processing and computer vision are now incorporated into all facets of food production. In this field, DL and ML are applied to identify both the type and the quality of food. Comparing the data- and results-oriented perspectives of the surveyed work reveals clear similarities across approaches. The findings of this study will therefore be helpful for scholars looking for a suitable approach to assessing food quality: the survey indicates which food products other scholars have studied, points readers to papers that invite further research, and shows how DL is applied to identifying the quality and safety of foods on the market. The paper closes by describing current practices and concerns in ML and DL and probable trends for their future development.
The rapid digitalization of urban infrastructure has made smart cities increasingly vulnerable to sophisticated cyber threats. In the evolving landscape of cybersecurity, the efficacy of Intrusion Detection Systems (IDS) is increasingly measured by technical performance, operational usability, and adaptability. This study introduces and rigorously evaluates a Human-Computer Interaction (HCI)-Integrated IDS built on a Convolutional Neural Network (CNN), a CNN-Long Short-Term Memory (LSTM) network, and a Random Forest (RF), comparing it against both a Baseline Machine Learning (ML) model and a Traditional IDS model through an extensive experimental framework covering many performance metrics, including detection latency, accuracy, alert prioritization, classification errors, system throughput, usability, ROC-AUC, precision-recall, confusion-matrix analysis, and statistical accuracy measures. Our findings consistently demonstrate the superiority of the HCI-Integrated approach on three major datasets (CICIDS 2017, KDD Cup 1999, and UNSW-NB15). Experimental results indicate that the HCI-Integrated model outperforms its counterparts, achieving an AUC-ROC of 0.99, a precision of 0.93, and a recall of 0.96, while maintaining the lowest false positive rate (0.03) and the fastest detection time (~1.5 s). These findings validate the efficacy of incorporating HCI to enhance anomaly detection capabilities, improve responsiveness, and reduce alert fatigue in critical smart city applications. The model achieves markedly lower detection times, higher accuracy across all threat categories, reduced false positive and false negative rates, and enhanced system throughput under concurrent load. The HCI-Integrated IDS excels in alert contextualization and prioritization, offering more actionable insights while minimizing analyst fatigue. Usability feedback underscores increased analyst confidence and operational clarity, reinforcing the importance of user-centered design. These results collectively position the HCI-Integrated IDS as a highly effective, scalable, and human-aligned solution for modern threat detection environments.
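To make the reported figures concrete, a minimal sketch of how such metrics are computed for a binary intrusion detector is shown below; the labels, scores, and 0.5 threshold are illustrative assumptions, not the study's data, and only standard scikit-learn calls are used.

```python
# Illustrative sketch (not the authors' code): computing the metrics reported
# in the abstract -- ROC-AUC, precision, recall, and false positive rate --
# for a binary intrusion detector (1 = attack) with scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score, precision_score, recall_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                    # ground-truth labels (assumed)
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.2, 0.7, 0.3, 0.95])  # model scores (assumed)
y_pred = (y_score >= 0.5).astype(int)                           # thresholded decisions

auc = roc_auc_score(y_true, y_score)
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fpr = fp / (fp + tn)                                            # false positive rate
print(f"AUC={auc:.2f} P={precision:.2f} R={recall:.2f} FPR={fpr:.2f}")
```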
Heart disease encompasses a multiplicity of medical conditions that affect the structure, blood vessels, and general operation of the heart. Numerous researchers have made progress in detecting and predicting early heart disease, but more remains to be accomplished. The diagnostic accuracy of many current studies is inadequate because they attempt to predict heart disease using traditional approaches. By fusing data from several regions of the country, we aim to increase the accuracy of heart disease prediction. We adopt a statistical approach that exploits feature interactions to reveal intricate patterns in the data that no single feature can adequately capture. We processed the data using techniques including feature scaling, outlier detection and replacement, and null- and missing-value imputation to improve data quality. The proposed feature engineering method then uses the correlation test for numerical features and the chi-square test for categorical features to construct feature interactions. To reduce dimensionality, we subsequently applied PCA retaining 95% of the variance. To identify patients with heart disease, hyperparameter-tuned machine learning algorithms such as RF, XGBoost, Gradient Boosting, LightGBM, CatBoost, SVM, and MLP are used, along with ensemble models. Their overall prediction performance ranges from 88% to 92%. To attain state-of-the-art results, we then used a 1D CNN model, which significantly enhanced prediction with an accuracy of 96.36%, precision of 96.45%, recall of 96.36%, specificity of 99.51%, and F1 score of 96.34%. Without feature interaction, the RF model produces the best results among all classifiers, with accuracy of 90.21%, precision of 90.40%, recall of 90.86%, specificity of 90.91%, and F1 score of 90.63%. With the proposed feature engineering, our 1D CNN model is thus roughly 7% better than its counterpart without it. This illustrates how interaction-focused feature analysis can produce precise and useful insights for heart disease diagnosis.
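A minimal sketch of two of the preprocessing steps named above, chi-square screening of features and PCA retaining 95% of the variance, is given below; the feature matrix, labels, and scaling choice are placeholder assumptions, not the study's data.

```python
# Illustrative sketch (assumptions, not the authors' code): chi-square
# screening and PCA keeping 95% of the variance, with scikit-learn.
import numpy as np
from sklearn.feature_selection import chi2
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

X = np.random.rand(200, 12)          # placeholder feature matrix
y = np.random.randint(0, 2, 200)     # placeholder heart-disease labels

# Chi-square requires non-negative inputs, so scale to [0, 1] first.
X_scaled = MinMaxScaler().fit_transform(X)
chi2_stats, p_values = chi2(X_scaled, y)

# PCA with n_components=0.95 keeps enough components for 95% variance.
X_reduced = PCA(n_components=0.95).fit_transform(X_scaled)
print(X_reduced.shape, p_values.round(3))
```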
Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems. However, it must also adhere to critical quantum constraints, notably the no-cloning theorem, which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography, secure communication, and error correction. While existing quantum circuit representations implicitly honor such constraints, they lack formal mechanisms for early-stage verification in software design. Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software. This paper presents a formal metamodeling framework using UML-style notation and the Object Constraint Language (OCL) to systematically capture and enforce the no-cloning theorem within quantum software models. The proposed metamodel formalizes key quantum concepts, such as entanglement and teleportation, and encodes enforceable invariants that reflect core quantum mechanical laws. The framework's effectiveness is validated by analyzing two critical edge cases, conditional copying with CNOT gates and quantum teleportation, through instance-model evaluations. These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken for violations of the no-cloning theorem but are proven compliant under formal analysis; they thus serve as constructive validations of the metamodel's expressiveness and correctness in representing operations that appear to challenge the theorem yet, upon rigorous analysis, comply with it. The approach supports early detection of conceptual design errors, promoting correctness prior to implementation. The framework's extensibility is also demonstrated by modeling projective measurement, further reinforcing its applicability to broader quantum software engineering tasks. By integrating the rigor of metamodeling with fundamental quantum mechanical principles, this work provides a structured, model-driven approach that enables traditional software engineers to address quantum computing challenges. It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable, error-resilient quantum software systems.
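The CNOT edge case can be checked numerically. The sketch below (our illustration, not the paper's framework) shows that CNOT copies a computational-basis control qubit into a |0> target yet fails on a superposition, which is exactly why conditional copying does not violate the no-cloning theorem.

```python
# Illustrative numpy check (not from the paper): CNOT "copies" basis states
# but cannot clone a superposition.
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def try_to_clone(psi):
    """Apply CNOT to psi (x) |0> and compare with the true clone psi (x) psi."""
    state_in = np.kron(psi, np.array([1, 0], dtype=complex))  # psi tensor |0>
    state_out = CNOT @ state_in
    clone = np.kron(psi, psi)
    return np.allclose(state_out, clone)

basis0 = np.array([1, 0], dtype=complex)              # |0>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|0> + |1>) / sqrt(2)
print(try_to_clone(basis0))  # True: basis states are copied
print(try_to_clone(plus))    # False: CNOT yields a Bell state, not a clone
```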
Blockchain Technology (BT) has emerged as a transformative solution for improving the efficacy, security, and transparency of supply chain intelligence. Traditional Supply Chain Management (SCM) systems frequently suffer from data silos, a lack of real-time visibility, fraudulent activities, and inefficiencies in tracking and traceability. Blockchain's decentralized and immutable ledger offers a solid foundation for dealing with these issues; it facilitates trust, security, and real-time data sharing among all parties involved. Through an examination of critical technologies, methodologies, and applications, this paper takes a deep look at a computer-modeling-based blockchain framework for supply chain intelligence. The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field. As part of this process, we reviewed the research on blockchain-based supply chain models, smart contracts, Decentralized Applications (DApps), and their connections to other cutting-edge innovations such as Artificial Intelligence (AI) and the Internet of Things (IoT). To quantify blockchain's performance, the study introduces analytical models for efficiency improvement, security enhancement, and scalability, enabling computational assessment and simulation of supply chain scenarios. These models provide a structured approach to predicting system performance under varying parameters. According to the results, BT increases efficiency by automating transactions with smart contracts, increases security through cryptographic techniques, and improves supply chain transparency by providing immutable records. Regulatory concerns, interoperability challenges, and scalability all work against broad adoption. To fully automate and intelligently integrate blockchain with AI and the IoT, additional research is needed to address blockchain's current limitations and realize its potential for supply chain intelligence.
Cyber-Physical Systems (CPS) represent an integration of computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, the Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions toward comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the digital transformation era.
This study presents an advanced method for post-mortem person identification based on the segmentation of skeletal structures from chest X-ray images. The proposed approach employs the Attention U-Net architecture, enhanced with gated attention mechanisms, to refine segmentation by emphasizing spatially relevant anatomical features while suppressing irrelevant details. By isolating skeletal structures, which remain stable over time compared with soft tissues, the method leverages bones as reliable biometric markers for identity verification. The model integrates custom-designed encoder and decoder blocks with attention gates, achieving high segmentation precision. To evaluate the impact of architectural choices, we conducted an ablation study comparing Attention U-Net with and without attention mechanisms, alongside an analysis of data-augmentation effects. Training and evaluation were performed on a curated chest X-ray dataset, with segmentation performance measured using the Dice score, precision, and loss functions, achieving over 98% precision and a 94% Dice score. The extracted bone structures were further processed to derive unique biometric patterns, enabling robust and privacy-preserving person identification. Our findings highlight the effectiveness of attention mechanisms in improving segmentation accuracy and underscore the potential of chest-bone-based biometrics in forensic and medical imaging. This work paves the way for integrating artificial intelligence into real-world forensic workflows, offering a non-invasive and reliable solution for post-mortem identification.
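For readers unfamiliar with gated attention, a minimal PyTorch sketch of the standard additive attention gate used in Attention U-Net-style models is shown below; the channel sizes and the same-resolution simplification are assumptions, and this is not the authors' implementation.

```python
# Illustrative sketch: an additive attention gate in the Attention U-Net
# style. The gating signal g reweights skip-connection features x so the
# decoder focuses on spatially relevant anatomy.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, g_channels, x_channels, inter_channels):
        super().__init__()
        self.w_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, g, x):
        # Additive attention: alpha = sigmoid(psi(relu(W_g g + W_x x)))
        alpha = self.sigmoid(self.psi(self.relu(self.w_g(g) + self.w_x(x))))
        return x * alpha  # suppress irrelevant regions of the skip features

gate = AttentionGate(64, 64, 32)
g = torch.randn(1, 64, 32, 32)   # decoder gating signal (assumed same size)
x = torch.randn(1, 64, 32, 32)   # encoder skip features
print(gate(g, x).shape)          # torch.Size([1, 64, 32, 32])
```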
Hepatitis is an infection that affects the liver, transmitted through contaminated food or blood transfusions, and it has many types ranging from mild to serious. Hepatitis is diagnosed through many blood tests and factors; Artificial Intelligence (AI) techniques have played an important role in early diagnosis and in helping physicians make decisions. This study evaluated the performance of Machine Learning (ML) algorithms on the hepatitis dataset. The dataset contains missing values, which were processed, and outliers were removed. The dataset was rebalanced with the Synthetic Minority Over-sampling Technique (SMOTE). The features were processed in two ways: first, the Recursive Feature Elimination (RFE) algorithm was applied to rank each feature's contribution to the diagnosis of hepatitis, followed by selection of important features with the t-distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA) algorithms. Second, the SelectKBest function was applied to score each attribute, again followed by the t-SNE and PCA algorithms. Finally, the classification algorithms K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Artificial Neural Network (ANN), Decision Tree (DT), and Random Forest (RF) were fed the dataset after its features had been processed with the different methods (RFE with t-SNE and PCA, and SelectKBest with t-SNE and PCA). All algorithms yielded promising results on the hepatitis dataset. The RF with the RFE and PCA methods achieved accuracy, precision, recall, and AUC of 97.18%, 96.72%, 97.29%, and 94.2%, respectively, during the training phase. During the testing phase, it reached accuracy, precision, recall, and AUC of 96.31%, 95.23%, 97.11%, and 92.67%, respectively.
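A minimal sketch of one of the described pipelines, SMOTE rebalancing followed by RFE feature selection, PCA, and an RF classifier, is shown below; the data shapes, feature counts, and component counts are illustrative assumptions, not the study's settings.

```python
# Illustrative sketch (not the study's code): SMOTE + RFE + PCA feeding a
# Random Forest, mirroring the best-performing combination in the abstract.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import RFE
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(150, 19)          # placeholder hepatitis features
y = np.random.randint(0, 2, 150)     # placeholder labels (imbalanced in practice)

X_bal, y_bal = SMOTE(random_state=42).fit_resample(X, y)

rfe = RFE(RandomForestClassifier(random_state=42), n_features_to_select=10)
X_sel = rfe.fit_transform(X_bal, y_bal)   # keep the 10 top-ranked features

X_pca = PCA(n_components=5).fit_transform(X_sel)
clf = RandomForestClassifier(random_state=42).fit(X_pca, y_bal)
print(clf.score(X_pca, y_bal))
```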
This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We develop the TIHTBXII's mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, which are crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the bias and RMSE of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests. This application highlights how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number (ASN) while effectively managing consumer and producer risks. Empirical validation on real-world datasets, including “Active Repair Duration,” “Groundwater Contaminant Measurements,” and “Dominica COVID-19 Mortality,” further demonstrates the TIHTBXII's superior fit compared with existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, leading to more informed and robust decision-making.
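For reference, the baseline two-parameter Burr XII distribution that the proposed model extends has the standard form below; the extra heavy-tail shape parameter introduced by the type-I heavy-tailed family is defined in the paper itself and is not reproduced here.

```latex
% Standard (two-parameter) Burr XII distribution, x > 0, c > 0, k > 0:
F(x) = 1 - \left(1 + x^{c}\right)^{-k}, \qquad
f(x) = c\,k\,x^{c-1}\left(1 + x^{c}\right)^{-k-1}
```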
The Bat algorithm, a metaheuristic optimization technique inspired by the foraging behaviour of bats, has been employed to tackle optimization problems. Known for its ease of implementation, parameter tunability, and strong global search capabilities, the algorithm finds application across diverse optimization problem domains. However, in the face of increasingly complex optimization challenges, the Bat algorithm encounters certain limitations, such as slow convergence and sensitivity to initial solutions. To tackle these challenges, the present study incorporates a range of optimization components into the Bat algorithm, proposing a variant called PKEBA. A projection screening strategy mitigates sensitivity to initial solutions, enhancing the quality of the initial solution set. A kinetic adaptation strategy reforms exploration patterns, while an elite communication strategy enhances group interaction, keeping the algorithm out of local optima. The effectiveness of the proposed PKEBA is then rigorously evaluated on 30 benchmark functions from IEEE CEC2014, with ablation experiments and comparative assessments against classical algorithms and their variants. Moreover, real-world engineering problems are employed for further validation. The results conclusively demonstrate that PKEBA exhibits superior convergence and precision compared with existing algorithms.
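As background, a minimal sketch of the canonical Bat algorithm update rules (Yang's frequency, velocity, and position equations) that PKEBA builds on is given below; the paper's projection screening, kinetic adaptation, and elite communication strategies are not reproduced, and the bounds and objective are toy assumptions.

```python
# Illustrative sketch: one step of the canonical Bat algorithm (Yang, 2010),
# the baseline that PKEBA extends.
import numpy as np

def bat_step(x, v, x_best, f_min=0.0, f_max=2.0):
    """One frequency/velocity/position update for a population of bats."""
    n, dim = x.shape
    beta = np.random.rand(n, 1)
    freq = f_min + (f_max - f_min) * beta   # f_i = f_min + (f_max - f_min) * beta
    v_new = v + (x - x_best) * freq         # v_i(t) = v_i(t-1) + (x_i - x*) f_i
    return x + v_new, v_new                 # x_i(t) = x_i(t-1) + v_i(t)

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, (20, 10))            # 20 bats in a 10-D search space (assumed)
v = np.zeros_like(x)
sphere = lambda z: np.sum(z**2, axis=1)     # toy objective
x_best = x[np.argmin(sphere(x))]
x, v = bat_step(x, v, x_best)
```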
This research investigates the application of digital images in military contexts by utilizing analytical equations to augment human visual capabilities. A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images. Furthermore, the collected images undergo processing using histogram gradients and a flexible threshold value that may be adjusted for specific situations. This makes it possible to reduce overlap in collective picture characteristics by substituting grey-scale photos with colorized factors. The proposed method offers more robust feature representations by imposing a limiting factor that reduces overall scattering values, achieved by visualizing a graphical function. Moreover, to derive valuable insights from a series of photos, both separation and inversion processes are conducted, and comparison results are analyzed across four different scenarios. The comparative analysis shows that the proposed method reduces the time and space costs to 1 s and 3%, respectively, whereas the existing strategy exhibits higher complexities of 3 s and 9.1%.
A significant number and range of challenges besetting sustainability can be traced to the actions and interactions of multiple autonomous agents (people, mostly) and the entities they create (e.g., institutions, policies, social networks) in the corresponding social-environmental systems (SES). To address these challenges, we need to understand the decisions made and actions taken by agents and the outcomes of those actions, including the feedbacks on the corresponding agents and environment. The science of complex adaptive systems (CAS) has significant potential to handle such challenges. We address the advantages of CAS science for sustainability by identifying the key elements and challenges in sustainability science, the generic features of CAS, and the key advances and challenges in modeling CAS. Artificial intelligence and data science combined with agent-based modeling promise to improve understanding of agents' behaviors, detect SES structures, and formulate SES mechanisms.
The integration of machine learning (ML) technology with Internet of Things (IoT) systems is producing essential changes in healthcare operations. Healthcare personnel can track patients around the clock thanks to healthcare IoT (H-IoT) technology, which also provides proactive statistical findings and precise medical diagnoses that enhance healthcare performance. This study examines how ML might support IoT-based healthcare systems, namely in the areas of prognostic systems, disease detection, patient tracking, and healthcare operations control. The study looks at the benefits and drawbacks of several machine learning techniques for H-IoT applications. It also examines fundamental problems, such as data security and cyberthreats, as well as the high processing demands these systems face. Alongside this, the paper discusses the advantages of all the constituent technologies, including machine learning, deep learning, and the Internet of Things, as well as the significant difficulties that arise when integrating them into healthcare forecasting.
Cloud services, favored by many enterprises for their high flexibility and easy operation, are widely used for data storage and processing. However, the high latency and transmission overheads of the cloud architecture make it difficult to respond quickly to the demands of IoT applications and local computation. To make up for these deficiencies, fog computing has come to play a critical role in IoT applications. It decentralizes computing power to lower-level nodes close to data sources, achieving low latency and distributed processing. With data frequently exchanged and shared between multiple nodes, it becomes a challenge to authorize data securely and efficiently while protecting user privacy. To address this challenge, proxy re-encryption (PRE) schemes provide a feasible way to allow an intermediary proxy node to re-encrypt ciphertext designated for different authorized data requesters without compromising any plaintext information. Since the proxy is viewed as a semi-trusted party, precautions should be taken to prevent malicious behaviors and reduce the risk of data leakage when implementing PRE schemes. This paper proposes a new fog-assisted identity-based PRE scheme supporting anonymous key generation, equality test, and user revocation to fulfill various IoT application requirements. Specifically, in a traditional identity-based public key architecture, the key escrow problem and the necessity of a secure channel are major security concerns; we utilize an anonymous key generation technique to solve them. The equality test functionality further enables a cloud server to inspect whether two candidate trapdoors contain an identical keyword. In particular, the proposed scheme realizes fine-grained user-level authorization while maintaining strong key confidentiality. To revoke an invalid user identity, we add a revocation list to the system flows, restricting access privileges without additional computation cost. For security, we show that our system meets the security notions of IND-PrID-CCA and OW-ID-CCA under the Decisional Bilinear Diffie-Hellman (DBDH) assumption.
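For context, the DBDH assumption invoked in the security proof is stated below in its conventional form (standard notation, not copied from the paper):

```latex
% DBDH assumption (standard form): for a bilinear pairing e : G x G -> G_T,
% a generator g of G, and random a, b, c, z in Z_p, the two tuples
(g,\ g^{a},\ g^{b},\ g^{c},\ e(g,g)^{abc})
\quad \text{and} \quad
(g,\ g^{a},\ g^{b},\ g^{c},\ e(g,g)^{z})
% cannot be distinguished by any efficient adversary with
% non-negligible advantage.
```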
When performing English-to-Tamil Neural Machine Translation (NMT), end users face several challenges due to Tamil's rich morphology, free word order, and limited annotated corpora. Although available transformer-based models offer strong baselines, they compromise on syntactic awareness and on the detection and management of offensive content in cluttered, noisy, and informal text. In this paper, we present POSDEP-Offense-Trans, a multi-task NMT framework that combines Part-of-Speech (POS) tagging and Dependency Parsing (DEP) with a robust offensive-language classification module. Our architecture enriches the Transformer encoder with syntax-aware embeddings and provides syntax-guided attention mechanisms. The architecture incorporates a structure-aware contrastive loss that reinforces syntactic consistency and deploys auxiliary classification heads for POS tagging, dependency parsing, and multi-class offensive detection. The offensive-language classifier operates at both sentence and token levels, guided by syntactic features and formal finite-automata rules that model offensive language structures: hate speech, profanity, sarcasm, and threats. Using this architecture, we construct a syntactically enriched, socially annotated corpus. Experimental results show improvements in translation quality, with a BLEU score of 33.5, UAS/LAS parsing accuracies of 92.4% and 90%, and a 4.5% F1-score gain in offensive-content detection compared with baseline POS+DEP+Offense models. The proposed model also achieved 92.3% in offensive-content neutralization, as confirmed by ablation studies. This comprehensive English-Tamil NMT model thus unifies syntactic modelling and ethical filtering, laying the groundwork for applications in social media moderation, hate speech mitigation, and policy-compliant multilingual content generation.
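A minimal sketch of how such a multi-task objective is typically assembled is shown below; the loss weights and the exact combination are assumptions for illustration, not the paper's formulation.

```python
# Illustrative sketch (assumed form, not the paper's implementation): a
# weighted multi-task objective combining the translation loss with the
# auxiliary POS, dependency, offensive-detection, and contrastive terms
# described in the abstract. All weights below are assumptions.
import torch

def multitask_loss(l_translate, l_pos, l_dep, l_off, l_contrastive,
                   w_pos=0.3, w_dep=0.3, w_off=0.2, w_con=0.2):
    return (l_translate + w_pos * l_pos + w_dep * l_dep
            + w_off * l_off + w_con * l_contrastive)

# Example with dummy per-head losses:
loss = multitask_loss(torch.tensor(2.1), torch.tensor(0.8),
                      torch.tensor(0.9), torch.tensor(0.5), torch.tensor(0.4))
print(loss)  # scalar objective to backpropagate through the shared encoder
```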
Cloud data sharing is an important issue in modern times. To maintain the privacy and confidentiality of data stored in the cloud, encryption is an inevitable step before uploading the data. However, the centralized management and transmission latency of the cloud make it difficult to support real-time processing and distributed access structures. As a result, fog computing and the Internet of Things (IoT) have emerged as crucial applications. Fog-assisted proxy re-encryption is a commonly adopted technique for sharing cloud ciphertexts. It allows a semi-trusted proxy to transform a data owner's ciphertext into another re-encrypted ciphertext intended for a data requester, without compromising any information about the original ciphertext. Yet the user revocation and cloud ciphertext renewal problems still lack effective and secure mechanisms. Motivated by this, we propose a revocable conditional proxy re-encryption scheme offering ciphertext evolution (R-CPRE-CE). In particular, a periodically updated time key is used to revoke a user's access privileges, while an access condition prevents a malicious proxy from re-encrypting unauthorized ciphertext. We also demonstrate that our scheme is provably secure under the notion of indistinguishability against adaptively chosen identity and chosen ciphertext attacks in the random oracle model. Performance analysis shows that our scheme reduces the computation time for a complete data access cycle, from initial query to final decryption, by approximately 47.05% compared with related schemes.
This survey presents a comprehensive examination of sensor fusion research spanning four decades, tracing the methodological evolution, application domains, and alignment with classical hierarchical models. Within this long-term trajectory, foundational approaches such as probabilistic inference, early neural networks, rule-based methods, and feature-level fusion established the principles of uncertainty handling and multi-sensor integration in the 1990s. The fusion methods of the 2000s marked the consolidation of these ideas through advanced Kalman and particle filtering, Bayesian-Dempster-Shafer hybrids, distributed consensus algorithms, and machine learning ensembles for more robust and domain-specific implementations. From 2011 to 2020, the widespread adoption of deep learning transformed the field, driving major breakthroughs in the autonomous vehicles domain. A key contribution of this work is the assessment of contemporary methods against the JDL model, revealing gaps at the higher levels, especially in situation and impact assessment: contemporary methods offer only limited implementation of higher-level fusion. The survey also reviews benchmark multi-sensor datasets, noting their role in advancing the field while identifying major shortcomings such as the lack of domain diversity and hierarchical coverage. By synthesizing developments across decades and paradigms, this survey provides both a historical narrative and a forward-looking perspective. It highlights unresolved challenges in transparency, scalability, robustness, and trustworthiness, while identifying emerging paradigms such as neuromorphic fusion and explainable AI as promising directions. This paves the way forward for advancing sensor fusion towards transparent and adaptive next-generation autonomous systems.
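As a concrete example of the probabilistic, low-level fusion the survey traces back to its foundations, a textbook scalar Kalman update fusing two noisy sensors is sketched below; the readings and noise variances are invented for illustration.

```python
# Illustrative sketch (textbook material, not from the survey): a scalar
# Kalman-filter update fusing two noisy readings of the same quantity.
def kalman_update(mean, var, measurement, meas_var):
    """Fuse a Gaussian state estimate with a Gaussian measurement."""
    k = var / (var + meas_var)             # Kalman gain
    return mean + k * (measurement - mean), (1 - k) * var

mean, var = 0.0, 1e6                       # vague prior
for z, r in [(10.2, 0.5), (9.8, 0.8)]:     # two sensors, different noise levels
    mean, var = kalman_update(mean, var, z, r)
print(round(mean, 2), round(var, 3))       # fused estimate and its variance
```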
Camouflaged Object Detection (COD) aims to identify objects that share highly similar patterns, such as texture, intensity, and color, with their surrounding environment. Due to their intrinsic resemblance to the background, camouflaged objects often exhibit vague boundaries and varying scales, making it challenging to accurately locate targets and delineate their indistinct edges. To address this, we propose a novel camouflaged object detection network called the Edge-Guided and Multi-scale Fusion Network (EGMFNet), which leverages edge-guided multi-scale integration for enhanced performance. The model incorporates two innovative components: a Multi-scale Fusion Module (MSFM) and an Edge-Guided Attention Module (EGA). These designs exploit multi-scale features to uncover subtle cues between candidate objects and the background while emphasizing camouflaged object boundaries. Moreover, recognizing the rich contextual information in fused features, we introduce a Dual-Branch Global Context Module (DGCM) to refine features using extensive global context, thereby generating more informative representations. Experimental results on four benchmark datasets demonstrate that EGMFNet outperforms state-of-the-art methods across five evaluation metrics. Specifically, on COD10K, our EGMFNet-P improves F_β by 4.8 points and reduces mean absolute error (MAE) by 0.006 compared with ZoomNeXt; on NC4K, it achieves a 3.6-point increase in F_β; and on CAMO and CHAMELEON, it obtains 4.5-point increases in F_β. These consistent gains substantiate the superiority and robustness of EGMFNet.
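For reference, the two headline metrics carry their standard definitions from the salient/camouflaged object detection literature, stated below; taking β² = 0.3, the conventional choice in this literature, is our assumption about the paper's setting. Here P is the predicted map and G the ground truth over a W×H image.

```latex
F_{\beta} = \frac{(1+\beta^{2})\,\mathrm{Precision}\times\mathrm{Recall}}
                 {\beta^{2}\,\mathrm{Precision} + \mathrm{Recall}},
\qquad
\mathrm{MAE} = \frac{1}{W H}\sum_{x=1}^{W}\sum_{y=1}^{H}
               \left| P(x,y) - G(x,y) \right|
```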
Modern industrial environments require uninterrupted machinery operation to maintain productivity standards while ensuring safety and minimizing costs. Conventional maintenance methods, such as reactive maintenance (i.e., run to failure) or time-based preventive maintenance (i.e., scheduled servicing), prove ineffective for complex systems with many Internet of Things (IoT) devices and sensors because they fall short in detecting faults at the early stages when detection is most crucial. This paper presents a predictive maintenance framework based on a hybrid deep learning model that integrates the capabilities of Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs). The framework integrates spatial feature extraction and temporal sequence modeling to accurately classify the health state of industrial equipment into three categories: Normal, Require Maintenance, and Failed. The framework uses a modular pipeline that includes IoT-enabled data collection along with secure transmission methods to manage cloud storage and provide real-time fault classification. The FD004 subset of the NASA C-MAPSS dataset, containing multivariate sensor readings from aircraft engines, serves as the training and evaluation data for the model. Experimental results show that the LSTM-CNN model outperforms baseline models such as LSTM-SVM and LSTM-RNN, achieving an overall average accuracy of 86.66%, precision of 86.00%, recall of 86.33%, and F1-score of 86.33%. In contrast to previous LSTM-CNN-based predictive maintenance models, which either provide binary classification or rely on synthetically balanced data, our work provides a three-class maintenance state (Normal, Require Maintenance, and Failed) with threshold-based labeling that preserves the true nature of the degradation. In addition, our work provides an IoT-to-cloud modular architecture for deployment, with Computerized Maintenance Management System (CMMS) integration, making the proposed solution not only technically sound but also practical and innovative. The solution achieves real-world industrial deployment readiness through its reliable performance and scalable system design.
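A minimal Keras sketch of one plausible Conv1D + LSTM layout for the three-class health-state task is shown below; the window length, channel count, and layer sizes are assumptions, not the paper's exact network.

```python
# Illustrative sketch (an assumed layout, not the paper's exact network):
# a hybrid Conv1D + LSTM classifier mapping windows of multivariate sensor
# readings (e.g., C-MAPSS-style channels) to the three health states.
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, CHANNELS, CLASSES = 30, 24, 3   # window length, sensors, states (assumed)

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, CHANNELS)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),  # spatial features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                       # temporal dynamics
    layers.Dense(32, activation="relu"),
    layers.Dense(CLASSES, activation="softmax"),           # Normal / Require Maintenance / Failed
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```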
文摘Providing safe and quality food is crucial for every household and is of extreme significance in the growth of any society.It is a complex procedure that deals with all issues focusing on the development of food processing from seed to harvest,storage,preparation,and consumption.This current paper seeks to demystify the importance of artificial intelligence,machine learning(ML),deep learning(DL),and computer vision(CV)in ensuring food safety and quality.By stressing the importance of these technologies,the audience will feel reassured and confident in their potential.These are very handy for such problems,giving assurance over food safety.CV is incredibly noble in today's generation because it improves food processing quality and positively impacts firms and researchers.Thus,at the present production stage,rich in image processing and computer visioning is incorporated into all facets of food production.In this field,DL and ML are implemented to identify the type of food in addition to quality.Concerning data and result-oriented perceptions,one has found similarities regarding various approaches.As a result,the findings of this study will be helpful for scholars looking for a proper approach to identify the quality of food offered.It helps to indicate which food products have been discussed by other scholars and lets the reader know papers by other scholars inclined to research further.Also,DL is accurately integrated with identifying the quality and safety of foods in the market.This paper describes the current practices and concerns of ML,DL,and probable trends for its future development.
基金funded and supported by the Ongoing Research Funding program(ORF-2025-314),King Saud University,Riyadh,Saudi Arabia.
文摘The rapid digitalization of urban infrastructure has made smart cities increasingly vulnerable to sophisticated cyber threats.In the evolving landscape of cybersecurity,the efficacy of Intrusion Detection Systems(IDS)is increasingly measured by technical performance,operational usability,and adaptability.This study introduces and rigorously evaluates a Human-Computer Interaction(HCI)-Integrated IDS with the utilization of Convolutional Neural Network(CNN),CNN-Long Short Term Memory(LSTM),and Random Forest(RF)against both a Baseline Machine Learning(ML)and a Traditional IDS model,through an extensive experimental framework encompassing many performance metrics,including detection latency,accuracy,alert prioritization,classification errors,system throughput,usability,ROC-AUC,precision-recall,confusion matrix analysis,and statistical accuracy measures.Our findings consistently demonstrate the superiority of the HCI-Integrated approach utilizing three major datasets(CICIDS 2017,KDD Cup 1999,and UNSW-NB15).Experimental results indicate that the HCI-Integrated model outperforms its counterparts,achieving an AUC-ROC of 0.99,a precision of 0.93,and a recall of 0.96,while maintaining the lowest false positive rate(0.03)and the fastest detection time(~1.5 s).These findings validate the efficacy of incorporating HCI to enhance anomaly detection capabilities,improve responsiveness,and reduce alert fatigue in critical smart city applications.It achieves markedly lower detection times,higher accuracy across all threat categories,reduced false positive and false negative rates,and enhanced system throughput under concurrent load conditions.The HCIIntegrated IDS excels in alert contextualization and prioritization,offering more actionable insights while minimizing analyst fatigue.Usability feedback underscores increased analyst confidence and operational clarity,reinforcing the importance of user-centered design.These results collectively position the HCI-Integrated IDS as a highly effective,scalable,and human-aligned solution for modern threat detection environments.
基金supported by the Competitive Research Fund of the University of Aizu,Japan(Grant No.P-13).
文摘Heart disease includes a multiplicity of medical conditions that affect the structure,blood vessels,and general operation of the heart.Numerous researchers have made progress in correcting and predicting early heart disease,but more remains to be accomplished.The diagnostic accuracy of many current studies is inadequate due to the attempt to predict patients with heart disease using traditional approaches.By using data fusion from several regions of the country,we intend to increase the accuracy of heart disease prediction.A statistical approach that promotes insights triggered by feature interactions to reveal the intricate pattern in the data,which cannot be adequately captured by a single feature.We processed the data using techniques including feature scaling,outlier detection and replacement,null and missing value imputation,and more to improve the data quality.Furthermore,the proposed feature engineering method uses the correlation test for numerical features and the chi-square test for categorical features to interact with the feature.To reduce the dimensionality,we subsequently used PCA with 95%variation.To identify patients with heart disease,hyperparameter-based machine learning algorithms like RF,XGBoost,Gradient Boosting,LightGBM,CatBoost,SVM,and MLP are utilized,along with ensemble models.The model’s overall prediction performance ranges from 88%to 92%.In order to attain cutting-edge results,we then used a 1D CNN model,which significantly enhanced the prediction with an accuracy score of 96.36%,precision of 96.45%,recall of 96.36%,specificity score of 99.51%and F1 score of 96.34%.The RF model produces the best results among all the classifiers in the evaluation matrix without feature interaction,with accuracy of 90.21%,precision of 90.40%,recall of 90.86%,specificity of 90.91%,and F1 score of 90.63%.Our proposed 1D CNN model is 7%superior to the one without feature engineering when compared to the suggested approach.This illustrates how interaction-focused feature analysis can produce precise and useful insights for heart disease diagnosis.
文摘Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems.However,it must also adhere to critical quantum constraints,notably the no-cloning theorem,which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography,secure communication,and error correction.While existing quantum circuit representations implicitly honor such constraints,they lack formal mechanisms for early-stage verification in software design.Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software.This paper presents a formal metamodeling framework using UML-style notation and and Object Constraint Language(OCL)to systematically capture and enforce the no-cloning theorem within quantum software models.The proposed metamodel formalizes key quantum concepts—such as entanglement and teleportation—and encodes enforceable invariants that reflect core quantum mechanical laws.The framework’s effectiveness is validated by analyzing two critical edge cases—conditional copying with CNOT gates and quantum teleportation—through instance model evaluations.These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken as violations of the no-cloning theorem but are proven compliant under formal analysis.Thus,these serve as constructive validations that demonstrate the metamodel’s expressiveness and correctness in representing operations that may appear to challenge the no-cloning theorem but,upon rigorous analysis,are shown to comply with it.The approach supports early detection of conceptual design errors,promoting correctness prior to implementation.The framework’s extensibility is also demonstrated by modeling projective measurement,further reinforcing its applicability to broader quantum software engineering tasks.By integrating the rigor of metamodeling with fundamental quantum mechanical principles,this work provides a structured,model-driven approach that enables traditional software engineers to address quantum computing challenges.It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable,error-resilient quantum software systems.
基金supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2025R97)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia。
文摘Blockchain Technology(BT)has emerged as a transformative solution for improving the efficacy,security,and transparency of supply chain intelligence.Traditional Supply Chain Management(SCM)systems frequently have problems such as data silos,a lack of visibility in real time,fraudulent activities,and inefficiencies in tracking and traceability.Blockchain’s decentralized and irreversible ledger offers a solid foundation for dealing with these issues;it facilitates trust,security,and the sharing of data in real-time among all parties involved.Through an examination of critical technologies,methodology,and applications,this paper delves deeply into computer modeling based-blockchain framework within supply chain intelligence.The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field.As part of the process,we delved through the research on blockchain-based supply chain models,smart contracts,Decentralized Applications(DApps),and how they connect to other cutting-edge innovations like Artificial Intelligence(AI)and the Internet of Things(IoT).To quantify blockchain’s performance,the study introduces analytical models for efficiency improvement,security enhancement,and scalability,enabling computational assessment and simulation of supply chain scenarios.These models provide a structured approach to predicting system performance under varying parameters.According to the results,BT increases efficiency by automating transactions using smart contracts,increases security by using cryptographic techniques,and improves transparency in the supply chain by providing immutable records.Regulatory concerns,challenges with interoperability,and scalability all work against broad adoption.To fully automate and intelligently integrate blockchain with AI and the IoT,additional research is needed to address blockchain’s current limitations and realize its potential for supply chain intelligence.
文摘Cyber-Physical Systems(CPS)represent an integration of computational and physical elements,revolutionizing industries by enabling real-time monitoring,control,and optimization.A complementary technology,Digital Twin(DT),acts as a virtual replica of physical assets or processes,facilitating better decision making through simulations and predictive analytics.CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains.This survey explores their synergy,highlighting how DT enriches CPS with dynamic modeling,realtime data integration,and advanced simulation capabilities.The layered architecture of DTs within CPS is examined,showcasing the enabling technologies and tools vital for seamless integration.The study addresses key challenges in CPS modeling,such as concurrency and communication,and underscores the importance of DT in overcoming these obstacles.Applications in various sectors are analyzed,including smart manufacturing,healthcare,and urban planning,emphasizing the transformative potential of CPS-DT integration.In addition,the review identifies gaps in existing methodologies and proposes future research directions to develop comprehensive,scalable,and secure CPSDT systems.By synthesizing insights fromthe current literature and presenting a taxonomy of CPS and DT,this survey serves as a foundational reference for academics and practitioners.The findings stress the need for unified frameworks that align CPS and DT with emerging technologies,fostering innovation and efficiency in the digital transformation era.
基金funded by Umm Al-Qura University,Saudi Arabia under grant number:25UQU4300346GSSR08.
文摘This study presents an advanced method for post-mortem person identification using the segmentation of skeletal structures from chest X-ray images.The proposed approach employs the Attention U-Net architecture,enhanced with gated attention mechanisms,to refine segmentation by emphasizing spatially relevant anatomical features while suppressing irrelevant details.By isolating skeletal structures which remain stable over time compared to soft tissues,this method leverages bones as reliable biometric markers for identity verification.The model integrates custom-designed encoder and decoder blocks with attention gates,achieving high segmentation precision.To evaluate the impact of architectural choices,we conducted an ablation study comparing Attention U-Net with and without attentionmechanisms,alongside an analysis of data augmentation effects.Training and evaluation were performed on a curated chest X-ray dataset,with segmentation performance measured using Dice score,precision,and loss functions,achieving over 98% precision and 94% Dice score.The extracted bone structures were further processed to derive unique biometric patterns,enabling robust and privacy-preserving person identification.Our findings highlight the effectiveness of attentionmechanisms in improving segmentation accuracy and underscore the potential of chest bonebased biometrics in forensic and medical imaging.This work paves the way for integrating artificial intelligence into real-world forensic workflows,offering a non-invasive and reliable solution for post-mortem identification.
基金funded by Scientific Research Deanship at University of Ha’il,Saudi Arabia,through project number GR-24009.
文摘Hepatitis is an infection that affects the liver through contaminated foods or blood transfusions,and it has many types,from normal to serious.Hepatitis is diagnosed through many blood tests and factors;Artificial Intelligence(AI)techniques have played an important role in early diagnosis and help physicians make decisions.This study evaluated the performance of Machine Learning(ML)algorithms on the hepatitis data set.The dataset contains missing values that have been processed and outliers removed.The dataset was counterbalanced by the Synthetic Minority Over-sampling Technique(SMOTE).The features of the data set were processed in two ways:first,the application of the Recursive Feature Elimination(RFE)algorithm to arrange the percentage of contribution of each feature to the diagnosis of hepatitis,then selection of important features using the t-distributed Stochastic Neighbor Embedding(t-SNE)and Principal Component Analysis(PCA)algorithms.Second,the SelectKBest function was applied to give scores for each attribute,followed by the t-SNE and PCA algorithms.Finally,the classification algorithms K-Nearest Neighbors(KNN),Support Vector Machine(SVM),Artificial Neural Network(ANN),Decision Tree(DT),and Random Forest(RF)were fed by the dataset after processing the features in different methods are RFE with t-SNE and PCA and SelectKBest with t-SNE and PCA).All algorithms yielded promising results for diagnosing hepatitis data sets.The RF with RFE and PCA methods achieved accuracy,Precision,Recall,and AUC of 97.18%,96.72%,97.29%,and 94.2%,respectively,during the training phase.During the testing phase,it reached accuracy,Precision,Recall,and AUC by 96.31%,95.23%,97.11%,and 92.67%,respectively.
基金supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University(IMSIU)(Grant Number IMSIU-DDRSP2501).
文摘This study introduces the type-I heavy-tailed Burr XII(TIHTBXII)distribution,a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness,heavy tails,and diverse hazard behaviors.We meticulously develop the TIHTBXII’s mathematical foundations,including its probability density function(PDF),cumulative distribution function(CDF),and essential statistical properties,crucial for theoretical understanding and practical application.A comprehensive Monte Carlo simulation evaluates four parameter estimation methods:maximum likelihood(MLE),maximum product spacing(MPS),least squares(LS),and weighted least squares(WLS).The simulation results consistently show that as sample sizes increase,the Bias and RMSE of all estimators decrease,with WLS and LS often demonstrating superior and more stable performance.Beyond theoretical development,we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan(GASP)for truncated life tests.This application highlights how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number(ASN)while effectively managing consumer and producer risks.Empirical validation using real-world datasets,including“Active Repair Duration,”“Groundwater Contaminant Measurements,”and“Dominica COVID-19 Mortality,”further demonstrates the TIHTBXII’s superior fit compared to existing models.Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment,leading to more informed and robust decision-making.
基金partially supported by MRC(MC_PC_17171)Royal Society(RP202G0230)+8 种基金BHF(AA/18/3/34220)Hope Foundation for Cancer Research(RM60G0680)GCRF(20P2PF11)Sino-UK Industrial Fund(RP202G0289)LIAS(20P2ED10,20P2RE969)Data Science Enhancement Fund(20P2RE237)Fight for Sight(24NN201)Sino-UK Education Fund(OP202006)BBSRC(RM32G0178B8).
文摘The Bat algorithm,a metaheuristic optimization technique inspired by the foraging behaviour of bats,has been employed to tackle optimization problems.Known for its ease of implementation,parameter tunability,and strong global search capabilities,this algorithm finds application across diverse optimization problem domains.However,in the face of increasingly complex optimization challenges,the Bat algorithm encounters certain limitations,such as slow convergence and sensitivity to initial solutions.In order to tackle these challenges,the present study incorporates a range of optimization compo-nents into the Bat algorithm,thereby proposing a variant called PKEBA.A projection screening strategy is implemented to mitigate its sensitivity to initial solutions,thereby enhancing the quality of the initial solution set.A kinetic adaptation strategy reforms exploration patterns,while an elite communication strategy enhances group interaction,to avoid algorithm from local optima.Subsequently,the effectiveness of the proposed PKEBA is rigorously evaluated.Testing encompasses 30 benchmark functions from IEEE CEC2014,featuring ablation experiments and comparative assessments against classical algorithms and their variants.Moreover,real-world engineering problems are employed as further validation.The results conclusively demonstrate that PKEBA ex-hibits superior convergence and precision compared to existing algorithms.
基金financially supported by Ongoing Research Funding Program(ORF-2025-846),King Saud University,Riyadh,Saudi Arabia.
文摘This research investigates the application of digital images in military contexts by utilizing analytical equations to augment human visual capabilities.A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images.Furthermore,the collected images undergo processing using histogram gradients and a flexible threshold value that may be adjusted in specific situations.Thus,it is possible to reduce the occurrence of overlapping circumstances in collective picture characteristics by substituting grey-scale photos with colorized factors.The proposed method offers additional robust feature representations by imposing a limiting factor to reduce overall scattering values.This is achieved by visualizing a graphical function.Moreover,to derive valuable insights from a series of photos,both the separation and in-version processes are conducted.This involves analyzing comparison results across four different scenarios.The results of the comparative analysis show that the proposed method effectively reduces the difficulties associated with time and space to 1 s and 3%,respectively.In contrast,the existing strategy exhibits higher complexities of 3 s and 9.1%,respectively.
Funding: The National Science Foundation funded this research under the Dynamics of Coupled Natural and Human Systems program (Grants No. DEB-1212183 and BCS-1826839), with additional support from San Diego State University and Auburn University.
Abstract: A significant number and range of challenges besetting sustainability can be traced to the actions and interactions of multiple autonomous agents (people, mostly) and the entities they create (e.g., institutions, policies, social networks) in the corresponding social-environmental systems (SES). To address these challenges, we need to understand the decisions made and actions taken by agents, and the outcomes of those actions, including the feedbacks on the corresponding agents and environment. The science of complex adaptive systems (CAS) has significant potential to handle such challenges. We address the advantages of CAS science for sustainability by identifying the key elements and challenges in sustainability science, the generic features of CAS, and the key advances and challenges in modeling CAS. Artificial intelligence and data science combined with agent-based modeling promise to improve understanding of agents' behaviors, detect SES structures, and formulate SES mechanisms.
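As a toy illustration of the agent-environment feedback loops described above (purely hypothetical dynamics, not a model from the paper), consider agents harvesting a shared, regrowing resource:

```python
# Toy agent-based sketch of an SES feedback loop. Assumptions: agents harvest
# a shared resource, regrowth feeds back on everyone, and scarcity triggers
# behavioral adaptation. All dynamics are hypothetical.
import random

random.seed(1)
resource = 100.0
agents = [{"greed": random.uniform(0.01, 0.05), "wealth": 0.0}
          for _ in range(20)]

for step in range(50):
    for a in agents:
        take = a["greed"] * resource          # decision: harvest a share
        a["wealth"] += take
        resource -= take
    resource *= 1.05                          # environment: regrowth feedback
    for a in agents:                          # adaptation under scarcity
        if resource < 50:
            a["greed"] *= 0.9
print(f"resource={resource:.1f}, mean wealth="
      f"{sum(a['wealth'] for a in agents)/len(agents):.1f}")
```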
Abstract: The integration of machine learning (ML) technology with Internet of Things (IoT) systems is producing essential changes in healthcare operations. Healthcare personnel can track patients around the clock thanks to healthcare IoT (H-IoT) technology, which also provides proactive statistical findings and precise medical diagnoses that enhance healthcare performance. This study examines how ML might support IoT-based healthcare systems, namely in the areas of prognostic systems, disease detection, patient tracking, and healthcare operations control. The study looks at the benefits and drawbacks of several machine learning techniques for H-IoT applications. It also examines fundamental problems, such as data security and cyberthreats, as well as the high processing demands these systems face. Alongside this, the paper discusses the advantages of the constituent technologies, including machine learning, deep learning, and the Internet of Things, as well as the significant difficulties that arise when integrating them into healthcare forecasting.
基金supported in part by the National Science and Technology Council of Taiwan under the contract numbers NSTC 114-2221-E-019-055-MY2 and NSTC 114-2221-E-019-069.
Abstract: Cloud services, favored by many enterprises for their high flexibility and ease of operation, are widely used for data storage and processing. However, the high latency and transmission overheads of the cloud architecture make it difficult to respond quickly to the demands of IoT applications and local computation. To make up for these deficiencies, fog computing has come to play a critical role in IoT applications. It decentralizes computing power to lower nodes close to data sources, achieving low latency and distributed processing. With data frequently exchanged and shared between multiple nodes, it becomes a challenge to authorize data securely and efficiently while protecting user privacy. To address this challenge, proxy re-encryption (PRE) schemes provide a feasible way to allow an intermediary proxy node to re-encrypt ciphertext designated for different authorized data requesters without compromising any plaintext information. Since the proxy is viewed as a semi-trusted party, precautions should be taken to prevent malicious behavior and reduce the risk of data leakage when implementing PRE schemes. This paper proposes a new fog-assisted identity-based PRE scheme supporting anonymous key generation, equality test, and user revocation to fulfill various IoT application requirements. Specifically, in a traditional identity-based public key architecture, the key escrow problem and the necessity of a secure channel are major security concerns; we utilize an anonymous key generation technique to solve these problems. The equality test functionality further enables a cloud server to inspect whether two candidate trapdoors contain an identical keyword. In particular, the proposed scheme realizes fine-grained user-level authorization while maintaining strong key confidentiality. To revoke an invalid user identity, we add a revocation list to the system flows to restrict access privileges without additional computation cost. It is shown that our system meets the security notions of IND-PrID-CCA and OW-ID-CCA under the Decisional Bilinear Diffie-Hellman (DBDH) assumption.
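The scheme itself is identity-based and pairing-based, which is beyond a short sketch, but the core proxy re-encryption idea can be made concrete with the classic ElGamal-based construction of Blaze, Bleumer, and Strauss (toy group parameters only; a production system would use a cryptographic library and a large group):

```python
# Toy BBS98 ElGamal-based proxy re-encryption, illustrating only the
# re-encryption idea; the paper's scheme is identity-based with pairings,
# equality test, and revocation, none of which appear here.
# Toy parameters: p = 2q + 1 safe prime, g generates the order-q subgroup.
import random

p, q, g = 23, 11, 4
random.seed(7)

a = random.randrange(1, q)            # Alice's secret key
b = random.randrange(1, q)            # Bob's secret key
rk = (b * pow(a, -1, q)) % q          # re-encryption key b/a mod q

def encrypt(m, sk):                   # ciphertext (m*g^k, g^(sk*k))
    k = random.randrange(1, q)
    return (m * pow(g, k, p)) % p, pow(g, sk * k, p)

def re_encrypt(ct):                   # proxy raises c2 to b/a, learns nothing
    c1, c2 = ct
    return c1, pow(c2, rk, p)

def decrypt(ct, sk):                  # strip g^k via the inverse exponent
    c1, c2 = ct
    return (c1 * pow(pow(c2, pow(sk, -1, q), p), -1, p)) % p

m = 9
ct_alice = encrypt(m, a)
ct_bob = re_encrypt(ct_alice)
assert decrypt(ct_alice, a) == decrypt(ct_bob, b) == m
print("recovered:", decrypt(ct_bob, b))
```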
Abstract: When performing English-to-Tamil Neural Machine Translation (NMT), end users face several challenges due to Tamil's rich morphology, free word order, and limited annotated corpora. Although available transformer-based models offer strong baselines, they compromise syntactic awareness and the detection and management of offensive content in cluttered, noisy, and informal text. In this paper, we present POSDEP-Offense-Trans, a multi-task NMT framework that combines Part-of-Speech (POS) and Dependency Parsing (DEP) methods with a robust offensive language classification module. Our architecture enriches the Transformer encoder with syntax-aware embeddings and syntax-guided attention mechanisms. It incorporates a structure-aware contrastive loss that reinforces syntactic consistency and deploys auxiliary classification heads for POS tagging, dependency parsing, and multi-class offensive detection. The offensive-language classifier operates at both sentence and token levels, guided by syntactic features and formal finite automata rules that model offensive language structures: hate speech, profanity, sarcasm, and threats. Using this architecture, we construct a syntactically enriched, socially annotated corpus. Experimental results show improvements in translation quality, with a BLEU score of 33.5, UAS/LAS parsing accuracies of 92.4% and 90%, and a 4.5% F1-score gain in offensive content detection compared with baseline POS+DEP+Offense models. The proposed model also achieved 92.3% in offensive content neutralization, as confirmed by ablation studies. This comprehensive English-Tamil NMT model unifies syntactic modelling and ethical filtering, laying the groundwork for applications in social media moderation, hate speech mitigation, and policy-compliant multilingual content generation.
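A hedged PyTorch sketch of the multi-task skeleton follows: one shared encoder feeding a token-level POS head and a sentence-level offense head with a weighted joint loss. The dependency head and syntax-guided attention are elided, and the vocabulary size, label counts, and loss weight are illustrative assumptions, not the paper's configuration.

```python
# Multi-task skeleton: shared Transformer encoder, token-level POS head,
# sentence-level offense head. All sizes and the loss weight are illustrative.
import torch
import torch.nn as nn

class MultiTaskEncoder(nn.Module):
    def __init__(self, vocab=8000, d_model=256, n_pos=17, n_offense=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.pos_head = nn.Linear(d_model, n_pos)        # token-level POS tags
        self.off_head = nn.Linear(d_model, n_offense)    # sentence-level class

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))             # (B, T, d_model)
        return self.pos_head(h), self.off_head(h.mean(dim=1))

model = MultiTaskEncoder()
tokens = torch.randint(0, 8000, (4, 12))                 # dummy batch
pos_logits, off_logits = model(tokens)
pos_gold = torch.randint(0, 17, (4, 12))
off_gold = torch.randint(0, 5, (4,))
loss = (nn.functional.cross_entropy(pos_logits.flatten(0, 1), pos_gold.flatten())
        + 0.5 * nn.functional.cross_entropy(off_logits, off_gold))
loss.backward()
print(float(loss))
```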
Funding: Supported in part by the National Science and Technology Council of the Republic of China under contract numbers NSTC 114-2221-E-019-055-MY2 and NSTC 114-2221-E-019-069.
Abstract: Cloud data sharing is an important issue in modern times. To maintain the privacy and confidentiality of data stored in the cloud, encryption is an inevitable step before uploading data. However, the centralized management and transmission latency of the cloud make it difficult to support real-time processing and distributed access structures. As a result, fog computing and the Internet of Things (IoT) have emerged as crucial applications. Fog-assisted proxy re-encryption is a commonly adopted technique for sharing cloud ciphertexts. It allows a semi-trusted proxy to transform a data owner's ciphertext into another re-encrypted ciphertext intended for a data requester, without compromising any information about the original ciphertext. Yet, the user revocation and cloud ciphertext renewal problems still lack effective and secure mechanisms. Motivated by this, we propose a revocable conditional proxy re-encryption scheme offering ciphertext evolution (R-CPRE-CE). In particular, a periodically updated time key revokes the user's access privileges, while an access condition prevents a malicious proxy from re-encrypting unauthorized ciphertext. We also demonstrate that our scheme is provably secure under the notion of indistinguishability against adaptively chosen identity and chosen ciphertext attacks in the random oracle model. Performance analysis shows that our scheme reduces the computation time for a complete data access cycle, from initial query to final decryption, by approximately 47.05% compared to related schemes.
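The pairing-based R-CPRE-CE construction cannot be condensed into a few lines, but the ciphertext-evolution idea, refreshing stored ciphertexts under each epoch's time key so that revoked users' stale keys no longer decrypt, can be illustrated with symmetric key rotation from the cryptography package (an analogy only, not the paper's scheme):

```python
# Epoch-based ciphertext evolution, illustrated with symmetric key rotation.
# A user revoked before the new epoch never receives epoch2_key, so their
# access lapses once the stored ciphertext is rotated.
from cryptography.fernet import Fernet, MultiFernet

epoch1_key = Fernet(Fernet.generate_key())
token = epoch1_key.encrypt(b"sensor reading 42")   # stored cloud ciphertext

# New epoch: generate a fresh time key and evolve the ciphertext.
epoch2_key = Fernet(Fernet.generate_key())
rotator = MultiFernet([epoch2_key, epoch1_key])    # first key re-encrypts
evolved = rotator.rotate(token)

assert epoch2_key.decrypt(evolved) == b"sensor reading 42"
print("ciphertext evolved to the new epoch key")
```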
Abstract: This survey presents a comprehensive examination of sensor fusion research spanning four decades, tracing the methodological evolution, application domains, and alignment with classical hierarchical models. Building on this long-term trajectory, foundational approaches such as probabilistic inference, early neural networks, rule-based methods, and feature-level fusion established the principles of uncertainty handling and multi-sensor integration in the 1990s. The fusion methods of the 2000s consolidated these ideas through advanced Kalman and particle filtering, Bayesian–Dempster–Shafer hybrids, distributed consensus algorithms, and machine learning ensembles for more robust, domain-specific implementations. From 2011 to 2020, the widespread adoption of deep learning transformed the field, driving major breakthroughs in the autonomous vehicles domain. A key contribution of this work is the assessment of contemporary methods against the JDL model, revealing gaps at the higher levels, especially situation and impact assessment, where contemporary methods offer only limited implementations of higher-level fusion. The survey also reviews benchmark multi-sensor datasets, noting their role in advancing the field while identifying major shortcomings such as the lack of domain diversity and hierarchical coverage. By synthesizing developments across decades and paradigms, this survey provides both a historical narrative and a forward-looking perspective. It highlights unresolved challenges in transparency, scalability, robustness, and trustworthiness, while identifying emerging paradigms such as neuromorphic fusion and explainable AI as promising directions, paving the way for transparent and adaptive next-generation autonomous systems.
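As a concrete reminder of the 2000s-era workhorse the survey discusses, here is a minimal one-dimensional Kalman-filter sketch fusing two noisy sensors by inverse-variance weighting; the noise variances and static state model are illustrative:

```python
# One-dimensional Kalman filter fusing two noisy sensors of the same scalar
# state via inverse-variance weighting. All variances are illustrative.
import numpy as np

rng = np.random.default_rng(3)
true_state, n_steps = 10.0, 50
x_est, p_est = 0.0, 1.0            # state estimate and its variance
q_process = 1e-4                   # process noise variance
r1, r2 = 0.5, 1.5                  # per-sensor measurement noise variances

for _ in range(n_steps):
    # Predict (static state model: x_k = x_{k-1} + process noise).
    p_est += q_process
    # Fuse the two measurements by inverse-variance weighting.
    z1 = true_state + rng.normal(0, r1 ** 0.5)
    z2 = true_state + rng.normal(0, r2 ** 0.5)
    r_fused = 1.0 / (1.0 / r1 + 1.0 / r2)
    z = r_fused * (z1 / r1 + z2 / r2)
    # Update with the fused pseudo-measurement.
    k_gain = p_est / (p_est + r_fused)
    x_est += k_gain * (z - x_est)
    p_est *= (1 - k_gain)

print(f"fused estimate {x_est:.3f} vs truth {true_state}")
```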
Funding: Financially supported by the Chongqing University of Technology Graduate Innovation Foundation (Grant No. gzlcx20253267).
Abstract: Camouflaged Object Detection (COD) aims to identify objects that share highly similar patterns, such as texture, intensity, and color, with their surrounding environment. Due to their intrinsic resemblance to the background, camouflaged objects often exhibit vague boundaries and varying scales, making it challenging to accurately locate targets and delineate their indistinct edges. To address this, we propose a novel camouflaged object detection network called the Edge-Guided and Multi-scale Fusion Network (EGMFNet), which leverages edge-guided multi-scale integration for enhanced performance. The model incorporates two innovative components: a Multi-scale Fusion Module (MSFM) and an Edge-Guided Attention Module (EGA). These designs exploit multi-scale features to uncover subtle cues between candidate objects and the background while emphasizing camouflaged object boundaries. Moreover, recognizing the rich contextual information in fused features, we introduce a Dual-Branch Global Context Module (DGCM) to refine features using extensive global context, thereby generating more informative representations. Experimental results on four benchmark datasets demonstrate that EGMFNet outperforms state-of-the-art methods across five evaluation metrics. Specifically, on COD10K, our EGMFNet-P improves F_β by 4.8 points and reduces mean absolute error (MAE) by 0.006 compared with ZoomNeXt; on NC4K, it achieves a 3.6-point increase in F_β; and on CAMO and CHAMELEON, it obtains 4.5-point increases in F_β. These consistent gains substantiate the superiority and robustness of EGMFNet.
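The exact EGA design is given in the paper; the following hedged PyTorch sketch shows one plausible reading of edge-guided attention, a predicted boundary map gating the features it came from, with channel sizes and the gating form as illustrative assumptions:

```python
# Hedged sketch of an edge-guided attention block in the spirit of EGA; the
# real EGMFNet module details are in the paper, so this gating form and the
# channel count are assumptions.
import torch
import torch.nn as nn

class EdgeGuidedAttention(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.edge_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat):
        edge_logits = self.edge_head(feat)          # predicted boundary map
        attn = torch.sigmoid(edge_logits)           # (B, 1, H, W) gate
        gated = feat * (1 + attn)                   # emphasize boundary pixels
        return self.refine(gated), edge_logits      # logits feed an edge loss

feat = torch.randn(2, 64, 44, 44)
out, edges = EdgeGuidedAttention()(feat)
print(out.shape, edges.shape)
```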
Abstract: Modern industrial environments require uninterrupted machinery operation to maintain productivity standards while ensuring safety and minimizing costs. Conventional maintenance methods, such as reactive maintenance (i.e., run to failure) or time-based preventive maintenance (i.e., scheduled servicing), prove ineffective for complex systems with many Internet of Things (IoT) devices and sensors because they fall short in detecting faults at the early stages, when detection is most crucial. This paper presents a predictive maintenance framework based on a hybrid deep learning model that integrates the capabilities of Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs). The framework integrates spatial feature extraction and temporal sequence modeling to accurately classify the health state of industrial equipment into three categories: Normal, Require Maintenance, and Failed. It uses a modular pipeline that includes IoT-enabled data collection along with secure transmission methods to manage cloud storage and provide real-time fault classification. The FD004 subset of the NASA C-MAPSS dataset, containing multivariate sensor readings from aircraft engines, serves as the training and evaluation data. Experimental results show that the LSTM-CNN model outperforms baseline models such as LSTM-SVM and LSTM-RNN, achieving an overall average accuracy of 86.66%, precision of 86.00%, recall of 86.33%, and F1-score of 86.33%. In contrast to previous LSTM-CNN-based predictive maintenance models that either provide binary classification or rely on synthetically balanced data, our work provides a three-class maintenance state (Normal, Require Maintenance, and Failed) along with threshold-based labeling that preserves the true nature of the degradation. In addition, it provides a modular IoT-to-cloud architecture for deployment, with Computerized Maintenance Management System (CMMS) integration, making the proposed solution not only technically sound but also practical and innovative. Reliable performance and a scalable system design make the solution ready for real-world industrial deployment.
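A hedged PyTorch sketch of the hybrid pipeline follows: Conv1d layers extract local patterns across the C-MAPSS sensor channels, an LSTM models the temporal sequence, and a linear head emits the three health states; the window length, channel widths, and layer sizes are illustrative, not the paper's exact configuration.

```python
# Hybrid CNN-LSTM classifier over multivariate sensor windows. Sizes are
# illustrative; C-MAPSS provides 21 sensor channels per engine cycle.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_sensors=21, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, 50, batch_first=True)
        self.head = nn.Linear(50, n_classes)   # Normal / Require Maint. / Failed

    def forward(self, x):                      # x: (B, T, n_sensors)
        z = self.cnn(x.transpose(1, 2))        # (B, 64, T) spatial features
        out, _ = self.lstm(z.transpose(1, 2))  # (B, T, 50) temporal modeling
        return self.head(out[:, -1])           # classify from the last step

model = CNNLSTM()
windows = torch.randn(8, 30, 21)               # 8 windows of 30 cycles each
print(model(windows).softmax(dim=-1).shape)    # (8, 3) class probabilities
```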