The rapid digitalization of urban infrastructure has made smart cities increasingly vulnerable to sophisticated cyber threats.In the evolving landscape of cybersecurity,the efficacy of Intrusion Detection Systems(IDS)...The rapid digitalization of urban infrastructure has made smart cities increasingly vulnerable to sophisticated cyber threats.In the evolving landscape of cybersecurity,the efficacy of Intrusion Detection Systems(IDS)is increasingly measured by technical performance,operational usability,and adaptability.This study introduces and rigorously evaluates a Human-Computer Interaction(HCI)-Integrated IDS with the utilization of Convolutional Neural Network(CNN),CNN-Long Short Term Memory(LSTM),and Random Forest(RF)against both a Baseline Machine Learning(ML)and a Traditional IDS model,through an extensive experimental framework encompassing many performance metrics,including detection latency,accuracy,alert prioritization,classification errors,system throughput,usability,ROC-AUC,precision-recall,confusion matrix analysis,and statistical accuracy measures.Our findings consistently demonstrate the superiority of the HCI-Integrated approach utilizing three major datasets(CICIDS 2017,KDD Cup 1999,and UNSW-NB15).Experimental results indicate that the HCI-Integrated model outperforms its counterparts,achieving an AUC-ROC of 0.99,a precision of 0.93,and a recall of 0.96,while maintaining the lowest false positive rate(0.03)and the fastest detection time(~1.5 s).These findings validate the efficacy of incorporating HCI to enhance anomaly detection capabilities,improve responsiveness,and reduce alert fatigue in critical smart city applications.It achieves markedly lower detection times,higher accuracy across all threat categories,reduced false positive and false negative rates,and enhanced system throughput under concurrent load conditions.The HCIIntegrated IDS excels in alert contextualization and prioritization,offering more actionable insights while minimizing analyst fatigue.Usability feedback underscores increased analyst confidence and operational clarity,reinforcing the importance of user-centered design.These results collectively position the HCI-Integrated IDS as a highly effective,scalable,and human-aligned solution for modern threat detection environments.展开更多
Federated Learning(FL)has become a leading decentralized solution that enables multiple clients to train a model in a collaborative environment without directly sharing raw data,making it suitable for privacy-sensitiv...Federated Learning(FL)has become a leading decentralized solution that enables multiple clients to train a model in a collaborative environment without directly sharing raw data,making it suitable for privacy-sensitive applications such as healthcare,finance,and smart systems.As the field continues to evolve,the research field has become more complex and scattered,covering different system designs,training methods,and privacy techniques.This survey is organized around the three core challenges:how the data is distributed,how models are synchronized,and how to defend against attacks.It provides a structured and up-to-date review of FL research from 2023 to 2025,offering a unified taxonomy that categorizes works by data distribution(Horizontal FL,Vertical FL,Federated Transfer Learning,and Personalized FL),training synchronization(synchronous and asynchronous FL),optimization strategies,and threat models(data leakage and poisoning attacks).In particular,we summarize the latest contributions in Vertical FL frameworks for secure multi-party learning,communication-efficient Horizontal FL,and domain-adaptive Federated Transfer Learning.Furthermore,we examine synchronization techniques addressing system heterogeneity,including straggler mitigation in synchronous FL and staleness management in asynchronous FL.The survey covers security threats in FL,such as gradient inversion,membership inference,and poisoning attacks,as well as their defense strategies that include privacy-preserving aggregation and anomaly detection.The paper concludes by outlining unresolved issues and highlighting challenges in handling personalized models,scalability,and real-world adoption.展开更多
Rice is one of the most important staple crops globally.Rice plant diseases can severely reduce crop yields and,in extreme cases,lead to total production loss.Early diagnosis enables timely intervention,mitigates dise...Rice is one of the most important staple crops globally.Rice plant diseases can severely reduce crop yields and,in extreme cases,lead to total production loss.Early diagnosis enables timely intervention,mitigates disease severity,supports effective treatment strategies,and reduces reliance on excessive pesticide use.Traditional machine learning approaches have been applied for automated rice disease diagnosis;however,these methods depend heavily on manual image preprocessing and handcrafted feature extraction,which are labor-intensive and time-consuming and often require domain expertise.Recently,end-to-end deep learning(DL) models have been introduced for this task,but they often lack robustness and generalizability across diverse datasets.To address these limitations,we propose a novel end-toend training framework for convolutional neural network(CNN) and attention-based model ensembles(E2ETCA).This framework integrates features from two state-of-the-art(SOTA) CNN models,Inception V3 and DenseNet-201,and an attention-based vision transformer(ViT) model.The fused features are passed through an additional fully connected layer with softmax activation for final classification.The entire process is trained end-to-end,enhancing its suitability for realworld deployment.Furthermore,we extract and analyze the learned features using a support vector machine(SVM),a traditional machine learning classifier,to provide comparative insights.We evaluate the proposed E2ETCA framework on three publicly available datasets,the Mendeley Rice Leaf Disease Image Samples dataset,the Kaggle Rice Diseases Image dataset,the Bangladesh Rice Research Institute dataset,and a combined version of all three.Using standard evaluation metrics(accuracy,precision,recall,and F1-score),our framework demonstrates superior performance compared to existing SOTA methods in rice disease diagnosis,with potential applicability to other agricultural disease detection tasks.展开更多
The rapid growth in available network bandwidth has directly contributed to an exponential increase in mobile data traffic,creating significant challenges for network energy consumption.Also,with the extraordinary gro...The rapid growth in available network bandwidth has directly contributed to an exponential increase in mobile data traffic,creating significant challenges for network energy consumption.Also,with the extraordinary growth of mobile communications,the data traffic has dramatically expanded,which has led to massive grid power consumption and incurred high operating expenditure(OPEX).However,the majority of current network designs struggle to efficientlymanage a massive amount of data using little power,which degrades energy efficiency performance.Thereby,it is necessary to have an efficient mechanism to reduce power consumption when processing large amounts of data in network data centers.Utilizing renewable energy sources to power the Cloud Radio Access Network(C-RAN)greatly reduces the need to purchase energy from the utility grid.In this paper,we propose a bandwidth-aware hybrid energypowered C-RAN that focuses on throughput and energy efficiency(EE)by lowering grid usage,aiming to enhance the EE.This paper examines the energy efficiency,spectral efficiency(SE),and average on-grid energy consumption,dealing with the major challenges of the temporal and spatial nature of traffic and renewable energy generation across various network setups.To assess the effectiveness of the suggested network by changing the transmission bandwidth,a comprehensive simulation has been conducted.The numerical findings support the efficacy of the suggested approach.展开更多
In modern construction,Lightweight Aggregate Concrete(LWAC)has been recognized as a vital material of concern because of its unique properties,such as reduced density and improved thermal insulation.Despite the extens...In modern construction,Lightweight Aggregate Concrete(LWAC)has been recognized as a vital material of concern because of its unique properties,such as reduced density and improved thermal insulation.Despite the extensive knowledge regarding its macroscopic properties,there is a wide knowledge gap in understanding the influence of microscale parameters like aggregate porosity and volume ratio on the mechanical response of LWAC.This study aims to bridge this knowledge gap,spurred by the need to enhance the predictability and applicability of LWAC in various construction environments.With the help of advanced numerical methods,including the finite element method and a random circular aggregate model,this study critically evaluates the role played by these microscale factors.We found that an increase in the aggregate porosity from 23.5%to 48.5%leads to a drastic change of weakness from the bonding interface to the aggregate,reducing compressive strength by up to 24.2%and tensile strength by 27.8%.Similarly,the increase in the volume ratio of lightweight aggregate from 25%to 40%leads to a reduction in compressive strength by 13.0%and tensile strength by 9.23%.These results highlight the imperative role of microscale properties on the mechanical properties of LWAC.By supplying precise quantitative details on the effect of porosity and aggregate volume ratio,this research makes significant contributions to construction materials science by providing useful recommendations for the creation and optimization of LWAC with improved performance and sustainability in construction.展开更多
Control signaling is mandatory for the operation and management of all types of communication networks,including the Third Generation Partnership Project(3GPP)mobile broadband networks.However,they consume important a...Control signaling is mandatory for the operation and management of all types of communication networks,including the Third Generation Partnership Project(3GPP)mobile broadband networks.However,they consume important and scarce network resources such as bandwidth and processing power.There have been several reports of these control signaling turning into signaling storms halting network operations and causing the respective Telecom companies big financial losses.This paper draws its motivation from such real network disaster incidents attributed to signaling storms.In this paper,we present a thorough survey of the causes,of the signaling storm problems in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures.We provide relevant analytical models to help quantify the effect of the potential causes and benefits of their corresponding solutions.Another important contribution of this paper is the comparison of the possible causes and solutions/countermeasures,concerning their effect on several important network aspects such as architecture,additional signaling,fidelity,etc.,in the form of a table.This paper presents an update and an extension of our earlier conference publication.To our knowledge,no similar survey study exists on the subject.展开更多
This work aims to implement expert and collaborative group recommendation services through an analysis of expertise and network relations NTIS. First of all, expertise database has been constructed by extracting keywo...This work aims to implement expert and collaborative group recommendation services through an analysis of expertise and network relations NTIS. First of all, expertise database has been constructed by extracting keywords after indexing national R&D information in Korea (human resources, project and outcome) and applying expertise calculation algorithm. In consideration of the characteristics of national R&D information, weight values have been selected. Then, expertise points were calculated by applying weighted values. In addition, joint research and collaborative relations were implemented in a knowledge map format through network analysis using national R&D information.展开更多
Advanced artificial intelligence technologies such as ChatGPT and other large language models(LLMs)have significantly impacted fields such as education and research in recent years.ChatGPT benefits students and educat...Advanced artificial intelligence technologies such as ChatGPT and other large language models(LLMs)have significantly impacted fields such as education and research in recent years.ChatGPT benefits students and educators by providing personalized feedback,facilitating interactive learning,and introducing innovative teaching methods.While many researchers have studied ChatGPT across various subject domains,few analyses have focused on the engineering domain,particularly in addressing the risks of academic dishonesty and potential declines in critical thinking skills.To address this gap,this study explores both the opportunities and limitations of ChatGPT in engineering contexts through a two-part analysis.First,we conducted experiments with ChatGPT to assess its effectiveness in tasks such as code generation,error checking,and solution optimization.Second,we surveyed 125 users,predominantly engineering students,to analyze ChatGPTs role in academic support.Our findings reveal that 93.60%of respondents use ChatGPT for quick academic answers,particularly among early-stage university students,and that 84.00%find it helpful for sourcing research materials.The study also highlights ChatGPT’s strengths in programming assistance,with 84.80%of users utilizing it for debugging and 86.40%for solving coding problems.However,limitations persist,with many users reporting inaccuracies in mathematical solutions and occasional false citations.Furthermore,the reliance on the free version by 96%of users underscores its accessibility but also suggests limitations in resource availability.This work provides key insights into ChatGPT’s strengths and limitations,establishing a framework for responsible AI use in education.Highlighting areas for improvement marks a milestone in understanding and optimizing AI’s role in academia for sustainable future use.展开更多
The blockchain trilemma—balancing decentralization,security,and scalability—remains a critical challenge in distributed ledger technology.Despite significant advancements,achieving all three attributes simultaneousl...The blockchain trilemma—balancing decentralization,security,and scalability—remains a critical challenge in distributed ledger technology.Despite significant advancements,achieving all three attributes simultaneously continues to elude most blockchain systems,often forcing trade-offs that limit their real-world applicability.This review paper synthesizes current research efforts aimed at resolving the trilemma,focusing on innovative consensus mechanisms,sharding techniques,layer-2 protocols,and hybrid architectural models.We critically analyze recent breakthroughs,including Directed Acyclic Graph(DAG)-based structures,cross-chain interoperability frameworks,and zero-knowledge proof(ZKP)enhancements,which aimto reconcile scalability with robust security and decentralization.Furthermore,we evaluate the trade-offs inherent in these approaches,highlighting their practical implications for enterprise adoption,decentralized finance(DeFi),and Web3 ecosystems.By mapping the evolving landscape of solutions,this review identifies gaps in currentmethodologies and proposes future research directions,such as adaptive consensus algorithms and artificial intelligence-driven(AI-driven)governance models.Our analysis underscores that while no universal solution exists,interdisciplinary innovations are progressively narrowing the trilemma’s constraints,paving the way for next-generation blockchain infrastructures.展开更多
Efficient resource provisioning,allocation,and computation offloading are critical to realizing lowlatency,scalable,and energy-efficient applications in cloud,fog,and edge computing.Despite its importance,integrating ...Efficient resource provisioning,allocation,and computation offloading are critical to realizing lowlatency,scalable,and energy-efficient applications in cloud,fog,and edge computing.Despite its importance,integrating Software Defined Networks(SDN)for enhancing resource orchestration,task scheduling,and traffic management remains a relatively underexplored area with significant innovation potential.This paper provides a comprehensive review of existing mechanisms,categorizing resource provisioning approaches into static,dynamic,and user-centric models,while examining applications across domains such as IoT,healthcare,and autonomous systems.The survey highlights challenges such as scalability,interoperability,and security in managing dynamic and heterogeneous infrastructures.This exclusive research evaluates how SDN enables adaptive policy-based handling of distributed resources through advanced orchestration processes.Furthermore,proposes future directions,including AI-driven optimization techniques and hybrid orchestrationmodels.By addressing these emerging opportunities,thiswork serves as a foundational reference for advancing resource management strategies in next-generation cloud,fog,and edge computing ecosystems.This survey concludes that SDN-enabled computing environments find essential guidance in addressing upcoming management opportunities.展开更多
This review paper explores advanced methods to prompt Large LanguageModels(LLMs)into generating objectionable or unintended behaviors through adversarial prompt injection attacks.We examine a series of novel projects ...This review paper explores advanced methods to prompt Large LanguageModels(LLMs)into generating objectionable or unintended behaviors through adversarial prompt injection attacks.We examine a series of novel projects like HOUYI,Robustly Aligned LLM(RA-LLM),StruQ,and Virtual Prompt Injection that compel LLMs to produce affirmative responses to harmful queries.Several new benchmarks,such as PromptBench,AdvBench,AttackEval,INJECAGENT,and Robustness Suite,have been created to evaluate the performance and resilience of LLMs against these adversarial attacks.Results show significant success rates in misleading models like Vicuna-7B,LLaMA-2-7B-Chat,GPT-3.5,and GPT-4.The review highlights limitations in existing defense mechanisms and proposes future directions for enhancing LLM alignment and safety protocols,including the concept of LLM SELF DEFENSE.Our study emphasizes the need for improved robustness in LLMs,which will potentially shape the future of Artificial Intelligence(AI)driven applications and security protocols.Understanding the vulnerabilities of LLMs is crucial for developing effective defenses against adversarial prompt injection attacks.This paper proposes a systemic classification framework that discusses various types of prompt injection attacks and defenses.We also go through a broad spectrum of stateof-the-art attack methods(such as HouYi and Virtual Prompt Injection)alongside advanced defense mechanisms(like RA-LLM,StruQ,and LLM Self-Defense),providing critical insights into vulnerabilities and robustness.We also integrate and compare results from multiple recent benchmarks,including PromptBench,INJECENT,and BIPIA.展开更多
Stress is mental tension caused by difficult situations,often experienced by hospital workers and IT professionals who work long hours.It is essential to detect the stress in shift workers to improve their health.Howe...Stress is mental tension caused by difficult situations,often experienced by hospital workers and IT professionals who work long hours.It is essential to detect the stress in shift workers to improve their health.However,existing models measure stress with physiological signals such as PPG,EDA,and blink data,which could not identify the stress level accurately.Additionally,the works face challenges with limited data,inefficient spatial relationships,security issues with health data,and long-range temporal dependencies.In this paper,we have developed a federated learning-based stress detection system for IT and hospital workers,integrating physiological and behavioral indicators for accurate stress detection.Furthermore,the study introduces a hybrid deep learning classifier called ResTFTNet to capture spatial features and complex temporal relationships to detect stress effectively.The proposed work involves two localmodels and a globalmodel,to develop a federated learning framework to enhance stress detection.Thedatasets are pre-processed using the bandpass filter noise removal technique and normalization.The Recursive Feature Elimination feature selection method improves themodel performance.FL aggregates thesemodels using FedAvg to ensure privacy by keeping data localized.After evaluating ResTFTNet with existing models,including Convolution Neural Network,Long-Short-Term-Memory,and Support VectorMachine,the proposed model shows exceptional performance with an accuracy of 99.3%.This work provides an accurate and privacy-preserving method for detecting stress in hospital and IT staff.展开更多
A complete examination of Large Language Models’strengths,problems,and applications is needed due to their rising use across disciplines.Current studies frequently focus on single-use situations and lack a comprehens...A complete examination of Large Language Models’strengths,problems,and applications is needed due to their rising use across disciplines.Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance,strengths,and weaknesses.This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies.In this research,50 studies on 25+LLMs,including GPT-3,GPT-4,Claude 3.5,DeepKet,and hybrid multimodal frameworks like ContextDET and GeoRSCLIP,are thoroughly reviewed.We propose LLM application taxonomy by grouping techniques by task focus—healthcare,chemistry,sentiment analysis,agent-based simulations,and multimodal integration.Advanced methods like parameter-efficient tuning(LoRA),quantumenhanced embeddings(DeepKet),retrieval-augmented generation(RAG),and safety-focused models(GalaxyGPT)are evaluated for dataset requirements,computational efficiency,and performance measures.Frameworks for ethical issues,data limited hallucinations,and KDGI-enhanced fine-tuning like Woodpecker’s post-remedy corrections are highlighted.The investigation’s scope,mad,and methods are described,but the primary results are not.The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings performbetter for context-heavy applications.In medical text normalization,ChatGPT-4 outperforms previous models,while two multimodal frameworks,GeoRSCLIP,increase remote sensing.Parameter-efficient tuning technologies like LoRA have minimal computing cost and similar performance,demonstrating the necessity for adaptive models in multiple domains.To discover the optimum domain-specific models,explain domain-specific fine-tuning,and present quantum andmultimodal LLMs to address scalability and cross-domain issues.The framework helps academics and practitioners identify,adapt,and innovate LLMs for different purposes.This work advances the field of efficient,interpretable,and ethical LLM application research.展开更多
This study presents a systematic review of applications of artificial intelligence(abbreviated as AI)and blockchain in supply chain provenance traceability and legal forensics cover five sectors:integrated circuits(ab...This study presents a systematic review of applications of artificial intelligence(abbreviated as AI)and blockchain in supply chain provenance traceability and legal forensics cover five sectors:integrated circuits(abbreviated as ICs),pharmaceuticals,electric vehicles(abbreviated as EVs),drones(abbreviated as UAVs),and robotics—in response to rising trade tensions and geopolitical conflicts,which have heightened concerns over product origin fraud and information security.While previous literature often focuses on single-industry contexts or isolated technologies,this reviewcomprehensively surveys these sectors and categorizes 116 peer-reviewed studies by application domain,technical architecture,and functional objective.Special attention is given to traceability control mechanisms,data integrity,and the use of forensic technologies to detect origin fraud.The study further evaluates real-world implementations,including blockchain-enabled drug tracking systems,EV battery raw material traceability,and UAV authentication frameworks,demonstrating the practical value of these technologies.By identifying technological challenges and policy implications,this research provides a comprehensive foundation for future academic inquiry,industrial adoption,and regulatory development aimed at enhancing transparency,resilience,and trust in global supply chains.展开更多
Blockchain Technology(BT)has emerged as a transformative solution for improving the efficacy,security,and transparency of supply chain intelligence.Traditional Supply Chain Management(SCM)systems frequently have probl...Blockchain Technology(BT)has emerged as a transformative solution for improving the efficacy,security,and transparency of supply chain intelligence.Traditional Supply Chain Management(SCM)systems frequently have problems such as data silos,a lack of visibility in real time,fraudulent activities,and inefficiencies in tracking and traceability.Blockchain’s decentralized and irreversible ledger offers a solid foundation for dealing with these issues;it facilitates trust,security,and the sharing of data in real-time among all parties involved.Through an examination of critical technologies,methodology,and applications,this paper delves deeply into computer modeling based-blockchain framework within supply chain intelligence.The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field.As part of the process,we delved through the research on blockchain-based supply chain models,smart contracts,Decentralized Applications(DApps),and how they connect to other cutting-edge innovations like Artificial Intelligence(AI)and the Internet of Things(IoT).To quantify blockchain’s performance,the study introduces analytical models for efficiency improvement,security enhancement,and scalability,enabling computational assessment and simulation of supply chain scenarios.These models provide a structured approach to predicting system performance under varying parameters.According to the results,BT increases efficiency by automating transactions using smart contracts,increases security by using cryptographic techniques,and improves transparency in the supply chain by providing immutable records.Regulatory concerns,challenges with interoperability,and scalability all work against broad adoption.To fully automate and intelligently integrate blockchain with AI and the IoT,additional research is needed to address blockchain’s current limitations and realize its potential for supply chain intelligence.展开更多
The COVID-19 pandemic,which was declared by the WHO,had created a global health crisis and disrupted people’s daily lives.A large number of people were affected by the COVID-19 pandemic.Therefore,a diagnostic model n...The COVID-19 pandemic,which was declared by the WHO,had created a global health crisis and disrupted people’s daily lives.A large number of people were affected by the COVID-19 pandemic.Therefore,a diagnostic model needs to be generated which can effectively classify the COVID and non-COVID cases.In this work,our aim is to develop a diagnostic model based on deep features using effectiveness of Chest X-ray(CXR)in distinguishing COVID from non-COVID cases.The proposed diagnostic framework utilizes CXR to diagnose COVID-19 and includes Grad-CAM visualizations for a visual interpretation of predicted images.The model’s performance was evaluated using various metrics,including accuracy,precision,recall,F1-score,and Gmean.Several machine learning models,such as random forest,dense neural network,SVM,twin SVM,extreme learning machine,random vector functional link,and kernel ridge regression,were selected to diagnose COVID-19 cases.Transfer learning was used to extract deep features.For feature extraction many CNN-based models such as Inception V3,MobileNet,ResNet50,VGG16 and Xception models are used.It was evident from the experiments that ResNet50 architecture outperformed all other CNN architectures based on AUC.The TWSVM classifier achieved the highest AUC score of 0.98 based on the ResNet50 feature vector.展开更多
Heart disease includes a multiplicity of medical conditions that affect the structure,blood vessels,and general operation of the heart.Numerous researchers have made progress in correcting and predicting early heart d...Heart disease includes a multiplicity of medical conditions that affect the structure,blood vessels,and general operation of the heart.Numerous researchers have made progress in correcting and predicting early heart disease,but more remains to be accomplished.The diagnostic accuracy of many current studies is inadequate due to the attempt to predict patients with heart disease using traditional approaches.By using data fusion from several regions of the country,we intend to increase the accuracy of heart disease prediction.A statistical approach that promotes insights triggered by feature interactions to reveal the intricate pattern in the data,which cannot be adequately captured by a single feature.We processed the data using techniques including feature scaling,outlier detection and replacement,null and missing value imputation,and more to improve the data quality.Furthermore,the proposed feature engineering method uses the correlation test for numerical features and the chi-square test for categorical features to interact with the feature.To reduce the dimensionality,we subsequently used PCA with 95%variation.To identify patients with heart disease,hyperparameter-based machine learning algorithms like RF,XGBoost,Gradient Boosting,LightGBM,CatBoost,SVM,and MLP are utilized,along with ensemble models.The model’s overall prediction performance ranges from 88%to 92%.In order to attain cutting-edge results,we then used a 1D CNN model,which significantly enhanced the prediction with an accuracy score of 96.36%,precision of 96.45%,recall of 96.36%,specificity score of 99.51%and F1 score of 96.34%.The RF model produces the best results among all the classifiers in the evaluation matrix without feature interaction,with accuracy of 90.21%,precision of 90.40%,recall of 90.86%,specificity of 90.91%,and F1 score of 90.63%.Our proposed 1D CNN model is 7%superior to the one without feature engineering when compared to the suggested approach.This illustrates how interaction-focused feature analysis can produce precise and useful insights for heart disease diagnosis.展开更多
Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems.However,it must also adhere to critical quantum constraints...Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems.However,it must also adhere to critical quantum constraints,notably the no-cloning theorem,which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography,secure communication,and error correction.While existing quantum circuit representations implicitly honor such constraints,they lack formal mechanisms for early-stage verification in software design.Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software.This paper presents a formal metamodeling framework using UML-style notation and and Object Constraint Language(OCL)to systematically capture and enforce the no-cloning theorem within quantum software models.The proposed metamodel formalizes key quantum concepts—such as entanglement and teleportation—and encodes enforceable invariants that reflect core quantum mechanical laws.The framework’s effectiveness is validated by analyzing two critical edge cases—conditional copying with CNOT gates and quantum teleportation—through instance model evaluations.These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken as violations of the no-cloning theorem but are proven compliant under formal analysis.Thus,these serve as constructive validations that demonstrate the metamodel’s expressiveness and correctness in representing operations that may appear to challenge the no-cloning theorem but,upon rigorous analysis,are shown to comply with it.The approach supports early detection of conceptual design errors,promoting correctness prior to implementation.The framework’s extensibility is also demonstrated by modeling projective measurement,further reinforcing its applicability to broader quantum software engineering tasks.By integrating the rigor of metamodeling with fundamental quantum mechanical principles,this work provides a structured,model-driven approach that enables traditional software engineers to address quantum computing challenges.It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable,error-resilient quantum software systems.展开更多
The growing spectrum of Generative Adversarial Network (GAN) applications in medical imaging, cyber security, data augmentation, and the field of remote sensing tasks necessitate a sharp spike in the criticality of re...The growing spectrum of Generative Adversarial Network (GAN) applications in medical imaging, cyber security, data augmentation, and the field of remote sensing tasks necessitate a sharp spike in the criticality of review of Generative Adversarial Networks. Earlier reviews that targeted reviewing certain architecture of the GAN or emphasizing a specific application-oriented area have done so in a narrow spirit and lacked the systematic comparative analysis of the models’ performance metrics. Numerous reviews do not apply standardized frameworks, showing gaps in the efficiency evaluation of GANs, training stability, and suitability for specific tasks. In this work, a systemic review of GAN models using the PRISMA framework is developed in detail to fill the gap by structurally evaluating GAN architectures. A wide variety of GAN models have been discussed in this review, starting from the basic Conditional GAN, Wasserstein GAN, and Deep Convolutional GAN, and have gone down to many specialized models, such as EVAGAN, FCGAN, and SIF-GAN, for different applications across various domains like fault diagnosis, network security, medical imaging, and image segmentation. The PRISMA methodology systematically filters relevant studies by inclusion and exclusion criteria to ensure transparency and replicability in the review process. Hence, all models are assessed relative to specific performance metrics such as accuracy, stability, and computational efficiency. There are multiple benefits to using the PRISMA approach in this setup. Not only does this help in finding optimal models suitable for various applications, but it also provides an explicit framework for comparing GAN performance. In addition to this, diverse types of GAN are included to ensure a comprehensive view of the state-of-the-art techniques. This work is essential not only in terms of its result but also because it guides the direction of future research by pinpointing which types of applications require some GAN architectures, works to improve specific task model selection, and points out areas for further research on the development and application of GANs.展开更多
Sentiment Analysis,a significant domain within Natural Language Processing(NLP),focuses on extracting and interpreting subjective information-such as emotions,opinions,and attitudes-from textual data.With the increasi...Sentiment Analysis,a significant domain within Natural Language Processing(NLP),focuses on extracting and interpreting subjective information-such as emotions,opinions,and attitudes-from textual data.With the increasing volume of user-generated content on social media and digital platforms,sentiment analysis has become essential for deriving actionable insights across various sectors.This study presents a systematic literature review of sentiment analysis methodologies,encompassing traditional machine learning algorithms,lexicon-based approaches,and recent advancements in deep learning techniques.The review follows a structured protocol comprising three phases:planning,execution,and analysis/reporting.During the execution phase,67 peer-reviewed articles were initially retrieved,with 25 meeting predefined inclusion and exclusion criteria.The analysis phase involved a detailed examination of each study’s methodology,experimental setup,and key contributions.Among the deep learning models evaluated,Long Short-Term Memory(LSTM)networks were identified as the most frequently adopted architecture for sentiment classification tasks.This review highlights current trends,technical challenges,and emerging opportunities in the field,providing valuable guidance for future research and development in applications such as market analysis,public health monitoring,financial forecasting,and crisis management.展开更多
基金funded and supported by the Ongoing Research Funding program(ORF-2025-314),King Saud University,Riyadh,Saudi Arabia.
文摘The rapid digitalization of urban infrastructure has made smart cities increasingly vulnerable to sophisticated cyber threats.In the evolving landscape of cybersecurity,the efficacy of Intrusion Detection Systems(IDS)is increasingly measured by technical performance,operational usability,and adaptability.This study introduces and rigorously evaluates a Human-Computer Interaction(HCI)-Integrated IDS with the utilization of Convolutional Neural Network(CNN),CNN-Long Short Term Memory(LSTM),and Random Forest(RF)against both a Baseline Machine Learning(ML)and a Traditional IDS model,through an extensive experimental framework encompassing many performance metrics,including detection latency,accuracy,alert prioritization,classification errors,system throughput,usability,ROC-AUC,precision-recall,confusion matrix analysis,and statistical accuracy measures.Our findings consistently demonstrate the superiority of the HCI-Integrated approach utilizing three major datasets(CICIDS 2017,KDD Cup 1999,and UNSW-NB15).Experimental results indicate that the HCI-Integrated model outperforms its counterparts,achieving an AUC-ROC of 0.99,a precision of 0.93,and a recall of 0.96,while maintaining the lowest false positive rate(0.03)and the fastest detection time(~1.5 s).These findings validate the efficacy of incorporating HCI to enhance anomaly detection capabilities,improve responsiveness,and reduce alert fatigue in critical smart city applications.It achieves markedly lower detection times,higher accuracy across all threat categories,reduced false positive and false negative rates,and enhanced system throughput under concurrent load conditions.The HCIIntegrated IDS excels in alert contextualization and prioritization,offering more actionable insights while minimizing analyst fatigue.Usability feedback underscores increased analyst confidence and operational clarity,reinforcing the importance of user-centered design.These results collectively position the HCI-Integrated IDS as a highly effective,scalable,and human-aligned solution for modern threat detection environments.
文摘Federated Learning(FL)has become a leading decentralized solution that enables multiple clients to train a model in a collaborative environment without directly sharing raw data,making it suitable for privacy-sensitive applications such as healthcare,finance,and smart systems.As the field continues to evolve,the research field has become more complex and scattered,covering different system designs,training methods,and privacy techniques.This survey is organized around the three core challenges:how the data is distributed,how models are synchronized,and how to defend against attacks.It provides a structured and up-to-date review of FL research from 2023 to 2025,offering a unified taxonomy that categorizes works by data distribution(Horizontal FL,Vertical FL,Federated Transfer Learning,and Personalized FL),training synchronization(synchronous and asynchronous FL),optimization strategies,and threat models(data leakage and poisoning attacks).In particular,we summarize the latest contributions in Vertical FL frameworks for secure multi-party learning,communication-efficient Horizontal FL,and domain-adaptive Federated Transfer Learning.Furthermore,we examine synchronization techniques addressing system heterogeneity,including straggler mitigation in synchronous FL and staleness management in asynchronous FL.The survey covers security threats in FL,such as gradient inversion,membership inference,and poisoning attacks,as well as their defense strategies that include privacy-preserving aggregation and anomaly detection.The paper concludes by outlining unresolved issues and highlighting challenges in handling personalized models,scalability,and real-world adoption.
基金the Begum Rokeya University,Rangpur,and the United Arab Emirates University,UAE for partially supporting this work。
文摘Rice is one of the most important staple crops globally.Rice plant diseases can severely reduce crop yields and,in extreme cases,lead to total production loss.Early diagnosis enables timely intervention,mitigates disease severity,supports effective treatment strategies,and reduces reliance on excessive pesticide use.Traditional machine learning approaches have been applied for automated rice disease diagnosis;however,these methods depend heavily on manual image preprocessing and handcrafted feature extraction,which are labor-intensive and time-consuming and often require domain expertise.Recently,end-to-end deep learning(DL) models have been introduced for this task,but they often lack robustness and generalizability across diverse datasets.To address these limitations,we propose a novel end-toend training framework for convolutional neural network(CNN) and attention-based model ensembles(E2ETCA).This framework integrates features from two state-of-the-art(SOTA) CNN models,Inception V3 and DenseNet-201,and an attention-based vision transformer(ViT) model.The fused features are passed through an additional fully connected layer with softmax activation for final classification.The entire process is trained end-to-end,enhancing its suitability for realworld deployment.Furthermore,we extract and analyze the learned features using a support vector machine(SVM),a traditional machine learning classifier,to provide comparative insights.We evaluate the proposed E2ETCA framework on three publicly available datasets,the Mendeley Rice Leaf Disease Image Samples dataset,the Kaggle Rice Diseases Image dataset,the Bangladesh Rice Research Institute dataset,and a combined version of all three.Using standard evaluation metrics(accuracy,precision,recall,and F1-score),our framework demonstrates superior performance compared to existing SOTA methods in rice disease diagnosis,with potential applicability to other agricultural disease detection tasks.
文摘The rapid growth in available network bandwidth has directly contributed to an exponential increase in mobile data traffic,creating significant challenges for network energy consumption.Also,with the extraordinary growth of mobile communications,the data traffic has dramatically expanded,which has led to massive grid power consumption and incurred high operating expenditure(OPEX).However,the majority of current network designs struggle to efficientlymanage a massive amount of data using little power,which degrades energy efficiency performance.Thereby,it is necessary to have an efficient mechanism to reduce power consumption when processing large amounts of data in network data centers.Utilizing renewable energy sources to power the Cloud Radio Access Network(C-RAN)greatly reduces the need to purchase energy from the utility grid.In this paper,we propose a bandwidth-aware hybrid energypowered C-RAN that focuses on throughput and energy efficiency(EE)by lowering grid usage,aiming to enhance the EE.This paper examines the energy efficiency,spectral efficiency(SE),and average on-grid energy consumption,dealing with the major challenges of the temporal and spatial nature of traffic and renewable energy generation across various network setups.To assess the effectiveness of the suggested network by changing the transmission bandwidth,a comprehensive simulation has been conducted.The numerical findings support the efficacy of the suggested approach.
基金supported by National Science Foundation of China(10972015,11172015)the Beijing Natural Science Foundation(8162008).
文摘In modern construction,Lightweight Aggregate Concrete(LWAC)has been recognized as a vital material of concern because of its unique properties,such as reduced density and improved thermal insulation.Despite the extensive knowledge regarding its macroscopic properties,there is a wide knowledge gap in understanding the influence of microscale parameters like aggregate porosity and volume ratio on the mechanical response of LWAC.This study aims to bridge this knowledge gap,spurred by the need to enhance the predictability and applicability of LWAC in various construction environments.With the help of advanced numerical methods,including the finite element method and a random circular aggregate model,this study critically evaluates the role played by these microscale factors.We found that an increase in the aggregate porosity from 23.5%to 48.5%leads to a drastic change of weakness from the bonding interface to the aggregate,reducing compressive strength by up to 24.2%and tensile strength by 27.8%.Similarly,the increase in the volume ratio of lightweight aggregate from 25%to 40%leads to a reduction in compressive strength by 13.0%and tensile strength by 9.23%.These results highlight the imperative role of microscale properties on the mechanical properties of LWAC.By supplying precise quantitative details on the effect of porosity and aggregate volume ratio,this research makes significant contributions to construction materials science by providing useful recommendations for the creation and optimization of LWAC with improved performance and sustainability in construction.
基金the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support(QU-APC-2024-9/1).
文摘Control signaling is mandatory for the operation and management of all types of communication networks,including the Third Generation Partnership Project(3GPP)mobile broadband networks.However,they consume important and scarce network resources such as bandwidth and processing power.There have been several reports of these control signaling turning into signaling storms halting network operations and causing the respective Telecom companies big financial losses.This paper draws its motivation from such real network disaster incidents attributed to signaling storms.In this paper,we present a thorough survey of the causes,of the signaling storm problems in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures.We provide relevant analytical models to help quantify the effect of the potential causes and benefits of their corresponding solutions.Another important contribution of this paper is the comparison of the possible causes and solutions/countermeasures,concerning their effect on several important network aspects such as architecture,additional signaling,fidelity,etc.,in the form of a table.This paper presents an update and an extension of our earlier conference publication.To our knowledge,no similar survey study exists on the subject.
基金Project(N-12-NM-LU01-C01) supported by Construction of NTIS (National Science & Technology Information Service) Program Funded by the National Science & Technology Commission (NSTC), Korea
文摘This work aims to implement expert and collaborative group recommendation services through an analysis of expertise and network relations NTIS. First of all, expertise database has been constructed by extracting keywords after indexing national R&D information in Korea (human resources, project and outcome) and applying expertise calculation algorithm. In consideration of the characteristics of national R&D information, weight values have been selected. Then, expertise points were calculated by applying weighted values. In addition, joint research and collaborative relations were implemented in a knowledge map format through network analysis using national R&D information.
基金supported by Competitive Research by the University of Aizu.
文摘Advanced artificial intelligence technologies such as ChatGPT and other large language models(LLMs)have significantly impacted fields such as education and research in recent years.ChatGPT benefits students and educators by providing personalized feedback,facilitating interactive learning,and introducing innovative teaching methods.While many researchers have studied ChatGPT across various subject domains,few analyses have focused on the engineering domain,particularly in addressing the risks of academic dishonesty and potential declines in critical thinking skills.To address this gap,this study explores both the opportunities and limitations of ChatGPT in engineering contexts through a two-part analysis.First,we conducted experiments with ChatGPT to assess its effectiveness in tasks such as code generation,error checking,and solution optimization.Second,we surveyed 125 users,predominantly engineering students,to analyze ChatGPTs role in academic support.Our findings reveal that 93.60%of respondents use ChatGPT for quick academic answers,particularly among early-stage university students,and that 84.00%find it helpful for sourcing research materials.The study also highlights ChatGPT’s strengths in programming assistance,with 84.80%of users utilizing it for debugging and 86.40%for solving coding problems.However,limitations persist,with many users reporting inaccuracies in mathematical solutions and occasional false citations.Furthermore,the reliance on the free version by 96%of users underscores its accessibility but also suggests limitations in resource availability.This work provides key insights into ChatGPT’s strengths and limitations,establishing a framework for responsible AI use in education.Highlighting areas for improvement marks a milestone in understanding and optimizing AI’s role in academia for sustainable future use.
文摘The blockchain trilemma—balancing decentralization,security,and scalability—remains a critical challenge in distributed ledger technology.Despite significant advancements,achieving all three attributes simultaneously continues to elude most blockchain systems,often forcing trade-offs that limit their real-world applicability.This review paper synthesizes current research efforts aimed at resolving the trilemma,focusing on innovative consensus mechanisms,sharding techniques,layer-2 protocols,and hybrid architectural models.We critically analyze recent breakthroughs,including Directed Acyclic Graph(DAG)-based structures,cross-chain interoperability frameworks,and zero-knowledge proof(ZKP)enhancements,which aimto reconcile scalability with robust security and decentralization.Furthermore,we evaluate the trade-offs inherent in these approaches,highlighting their practical implications for enterprise adoption,decentralized finance(DeFi),and Web3 ecosystems.By mapping the evolving landscape of solutions,this review identifies gaps in currentmethodologies and proposes future research directions,such as adaptive consensus algorithms and artificial intelligence-driven(AI-driven)governance models.Our analysis underscores that while no universal solution exists,interdisciplinary innovations are progressively narrowing the trilemma’s constraints,paving the way for next-generation blockchain infrastructures.
文摘Efficient resource provisioning,allocation,and computation offloading are critical to realizing lowlatency,scalable,and energy-efficient applications in cloud,fog,and edge computing.Despite its importance,integrating Software Defined Networks(SDN)for enhancing resource orchestration,task scheduling,and traffic management remains a relatively underexplored area with significant innovation potential.This paper provides a comprehensive review of existing mechanisms,categorizing resource provisioning approaches into static,dynamic,and user-centric models,while examining applications across domains such as IoT,healthcare,and autonomous systems.The survey highlights challenges such as scalability,interoperability,and security in managing dynamic and heterogeneous infrastructures.This exclusive research evaluates how SDN enables adaptive policy-based handling of distributed resources through advanced orchestration processes.Furthermore,proposes future directions,including AI-driven optimization techniques and hybrid orchestrationmodels.By addressing these emerging opportunities,thiswork serves as a foundational reference for advancing resource management strategies in next-generation cloud,fog,and edge computing ecosystems.This survey concludes that SDN-enabled computing environments find essential guidance in addressing upcoming management opportunities.
文摘This review paper explores advanced methods to prompt Large LanguageModels(LLMs)into generating objectionable or unintended behaviors through adversarial prompt injection attacks.We examine a series of novel projects like HOUYI,Robustly Aligned LLM(RA-LLM),StruQ,and Virtual Prompt Injection that compel LLMs to produce affirmative responses to harmful queries.Several new benchmarks,such as PromptBench,AdvBench,AttackEval,INJECAGENT,and Robustness Suite,have been created to evaluate the performance and resilience of LLMs against these adversarial attacks.Results show significant success rates in misleading models like Vicuna-7B,LLaMA-2-7B-Chat,GPT-3.5,and GPT-4.The review highlights limitations in existing defense mechanisms and proposes future directions for enhancing LLM alignment and safety protocols,including the concept of LLM SELF DEFENSE.Our study emphasizes the need for improved robustness in LLMs,which will potentially shape the future of Artificial Intelligence(AI)driven applications and security protocols.Understanding the vulnerabilities of LLMs is crucial for developing effective defenses against adversarial prompt injection attacks.This paper proposes a systemic classification framework that discusses various types of prompt injection attacks and defenses.We also go through a broad spectrum of stateof-the-art attack methods(such as HouYi and Virtual Prompt Injection)alongside advanced defense mechanisms(like RA-LLM,StruQ,and LLM Self-Defense),providing critical insights into vulnerabilities and robustness.We also integrate and compare results from multiple recent benchmarks,including PromptBench,INJECENT,and BIPIA.
文摘Stress is mental tension caused by difficult situations,often experienced by hospital workers and IT professionals who work long hours.It is essential to detect the stress in shift workers to improve their health.However,existing models measure stress with physiological signals such as PPG,EDA,and blink data,which could not identify the stress level accurately.Additionally,the works face challenges with limited data,inefficient spatial relationships,security issues with health data,and long-range temporal dependencies.In this paper,we have developed a federated learning-based stress detection system for IT and hospital workers,integrating physiological and behavioral indicators for accurate stress detection.Furthermore,the study introduces a hybrid deep learning classifier called ResTFTNet to capture spatial features and complex temporal relationships to detect stress effectively.The proposed work involves two localmodels and a globalmodel,to develop a federated learning framework to enhance stress detection.Thedatasets are pre-processed using the bandpass filter noise removal technique and normalization.The Recursive Feature Elimination feature selection method improves themodel performance.FL aggregates thesemodels using FedAvg to ensure privacy by keeping data localized.After evaluating ResTFTNet with existing models,including Convolution Neural Network,Long-Short-Term-Memory,and Support VectorMachine,the proposed model shows exceptional performance with an accuracy of 99.3%.This work provides an accurate and privacy-preserving method for detecting stress in hospital and IT staff.
文摘A complete examination of Large Language Models’strengths,problems,and applications is needed due to their rising use across disciplines.Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance,strengths,and weaknesses.This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies.In this research,50 studies on 25+LLMs,including GPT-3,GPT-4,Claude 3.5,DeepKet,and hybrid multimodal frameworks like ContextDET and GeoRSCLIP,are thoroughly reviewed.We propose LLM application taxonomy by grouping techniques by task focus—healthcare,chemistry,sentiment analysis,agent-based simulations,and multimodal integration.Advanced methods like parameter-efficient tuning(LoRA),quantumenhanced embeddings(DeepKet),retrieval-augmented generation(RAG),and safety-focused models(GalaxyGPT)are evaluated for dataset requirements,computational efficiency,and performance measures.Frameworks for ethical issues,data limited hallucinations,and KDGI-enhanced fine-tuning like Woodpecker’s post-remedy corrections are highlighted.The investigation’s scope,mad,and methods are described,but the primary results are not.The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings performbetter for context-heavy applications.In medical text normalization,ChatGPT-4 outperforms previous models,while two multimodal frameworks,GeoRSCLIP,increase remote sensing.Parameter-efficient tuning technologies like LoRA have minimal computing cost and similar performance,demonstrating the necessity for adaptive models in multiple domains.To discover the optimum domain-specific models,explain domain-specific fine-tuning,and present quantum andmultimodal LLMs to address scalability and cross-domain issues.The framework helps academics and practitioners identify,adapt,and innovate LLMs for different purposes.This work advances the field of efficient,interpretable,and ethical LLM application research.
文摘This study presents a systematic review of applications of artificial intelligence(abbreviated as AI)and blockchain in supply chain provenance traceability and legal forensics cover five sectors:integrated circuits(abbreviated as ICs),pharmaceuticals,electric vehicles(abbreviated as EVs),drones(abbreviated as UAVs),and robotics—in response to rising trade tensions and geopolitical conflicts,which have heightened concerns over product origin fraud and information security.While previous literature often focuses on single-industry contexts or isolated technologies,this reviewcomprehensively surveys these sectors and categorizes 116 peer-reviewed studies by application domain,technical architecture,and functional objective.Special attention is given to traceability control mechanisms,data integrity,and the use of forensic technologies to detect origin fraud.The study further evaluates real-world implementations,including blockchain-enabled drug tracking systems,EV battery raw material traceability,and UAV authentication frameworks,demonstrating the practical value of these technologies.By identifying technological challenges and policy implications,this research provides a comprehensive foundation for future academic inquiry,industrial adoption,and regulatory development aimed at enhancing transparency,resilience,and trust in global supply chains.
基金supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2025R97)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia。
文摘Blockchain Technology(BT)has emerged as a transformative solution for improving the efficacy,security,and transparency of supply chain intelligence.Traditional Supply Chain Management(SCM)systems frequently have problems such as data silos,a lack of visibility in real time,fraudulent activities,and inefficiencies in tracking and traceability.Blockchain’s decentralized and irreversible ledger offers a solid foundation for dealing with these issues;it facilitates trust,security,and the sharing of data in real-time among all parties involved.Through an examination of critical technologies,methodology,and applications,this paper delves deeply into computer modeling based-blockchain framework within supply chain intelligence.The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field.As part of the process,we delved through the research on blockchain-based supply chain models,smart contracts,Decentralized Applications(DApps),and how they connect to other cutting-edge innovations like Artificial Intelligence(AI)and the Internet of Things(IoT).To quantify blockchain’s performance,the study introduces analytical models for efficiency improvement,security enhancement,and scalability,enabling computational assessment and simulation of supply chain scenarios.These models provide a structured approach to predicting system performance under varying parameters.According to the results,BT increases efficiency by automating transactions using smart contracts,increases security by using cryptographic techniques,and improves transparency in the supply chain by providing immutable records.Regulatory concerns,challenges with interoperability,and scalability all work against broad adoption.To fully automate and intelligently integrate blockchain with AI and the IoT,additional research is needed to address blockchain’s current limitations and realize its potential for supply chain intelligence.
文摘The COVID-19 pandemic,which was declared by the WHO,had created a global health crisis and disrupted people’s daily lives.A large number of people were affected by the COVID-19 pandemic.Therefore,a diagnostic model needs to be generated which can effectively classify the COVID and non-COVID cases.In this work,our aim is to develop a diagnostic model based on deep features using effectiveness of Chest X-ray(CXR)in distinguishing COVID from non-COVID cases.The proposed diagnostic framework utilizes CXR to diagnose COVID-19 and includes Grad-CAM visualizations for a visual interpretation of predicted images.The model’s performance was evaluated using various metrics,including accuracy,precision,recall,F1-score,and Gmean.Several machine learning models,such as random forest,dense neural network,SVM,twin SVM,extreme learning machine,random vector functional link,and kernel ridge regression,were selected to diagnose COVID-19 cases.Transfer learning was used to extract deep features.For feature extraction many CNN-based models such as Inception V3,MobileNet,ResNet50,VGG16 and Xception models are used.It was evident from the experiments that ResNet50 architecture outperformed all other CNN architectures based on AUC.The TWSVM classifier achieved the highest AUC score of 0.98 based on the ResNet50 feature vector.
基金supported by the Competitive Research Fund of the University of Aizu,Japan(Grant No.P-13).
文摘Heart disease includes a multiplicity of medical conditions that affect the structure,blood vessels,and general operation of the heart.Numerous researchers have made progress in correcting and predicting early heart disease,but more remains to be accomplished.The diagnostic accuracy of many current studies is inadequate due to the attempt to predict patients with heart disease using traditional approaches.By using data fusion from several regions of the country,we intend to increase the accuracy of heart disease prediction.A statistical approach that promotes insights triggered by feature interactions to reveal the intricate pattern in the data,which cannot be adequately captured by a single feature.We processed the data using techniques including feature scaling,outlier detection and replacement,null and missing value imputation,and more to improve the data quality.Furthermore,the proposed feature engineering method uses the correlation test for numerical features and the chi-square test for categorical features to interact with the feature.To reduce the dimensionality,we subsequently used PCA with 95%variation.To identify patients with heart disease,hyperparameter-based machine learning algorithms like RF,XGBoost,Gradient Boosting,LightGBM,CatBoost,SVM,and MLP are utilized,along with ensemble models.The model’s overall prediction performance ranges from 88%to 92%.In order to attain cutting-edge results,we then used a 1D CNN model,which significantly enhanced the prediction with an accuracy score of 96.36%,precision of 96.45%,recall of 96.36%,specificity score of 99.51%and F1 score of 96.34%.The RF model produces the best results among all the classifiers in the evaluation matrix without feature interaction,with accuracy of 90.21%,precision of 90.40%,recall of 90.86%,specificity of 90.91%,and F1 score of 90.63%.Our proposed 1D CNN model is 7%superior to the one without feature engineering when compared to the suggested approach.This illustrates how interaction-focused feature analysis can produce precise and useful insights for heart disease diagnosis.
文摘Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems.However,it must also adhere to critical quantum constraints,notably the no-cloning theorem,which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography,secure communication,and error correction.While existing quantum circuit representations implicitly honor such constraints,they lack formal mechanisms for early-stage verification in software design.Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software.This paper presents a formal metamodeling framework using UML-style notation and and Object Constraint Language(OCL)to systematically capture and enforce the no-cloning theorem within quantum software models.The proposed metamodel formalizes key quantum concepts—such as entanglement and teleportation—and encodes enforceable invariants that reflect core quantum mechanical laws.The framework’s effectiveness is validated by analyzing two critical edge cases—conditional copying with CNOT gates and quantum teleportation—through instance model evaluations.These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken as violations of the no-cloning theorem but are proven compliant under formal analysis.Thus,these serve as constructive validations that demonstrate the metamodel’s expressiveness and correctness in representing operations that may appear to challenge the no-cloning theorem but,upon rigorous analysis,are shown to comply with it.The approach supports early detection of conceptual design errors,promoting correctness prior to implementation.The framework’s extensibility is also demonstrated by modeling projective measurement,further reinforcing its applicability to broader quantum software engineering tasks.By integrating the rigor of metamodeling with fundamental quantum mechanical principles,this work provides a structured,model-driven approach that enables traditional software engineers to address quantum computing challenges.It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable,error-resilient quantum software systems.
文摘The growing spectrum of Generative Adversarial Network (GAN) applications in medical imaging, cyber security, data augmentation, and the field of remote sensing tasks necessitate a sharp spike in the criticality of review of Generative Adversarial Networks. Earlier reviews that targeted reviewing certain architecture of the GAN or emphasizing a specific application-oriented area have done so in a narrow spirit and lacked the systematic comparative analysis of the models’ performance metrics. Numerous reviews do not apply standardized frameworks, showing gaps in the efficiency evaluation of GANs, training stability, and suitability for specific tasks. In this work, a systemic review of GAN models using the PRISMA framework is developed in detail to fill the gap by structurally evaluating GAN architectures. A wide variety of GAN models have been discussed in this review, starting from the basic Conditional GAN, Wasserstein GAN, and Deep Convolutional GAN, and have gone down to many specialized models, such as EVAGAN, FCGAN, and SIF-GAN, for different applications across various domains like fault diagnosis, network security, medical imaging, and image segmentation. The PRISMA methodology systematically filters relevant studies by inclusion and exclusion criteria to ensure transparency and replicability in the review process. Hence, all models are assessed relative to specific performance metrics such as accuracy, stability, and computational efficiency. There are multiple benefits to using the PRISMA approach in this setup. Not only does this help in finding optimal models suitable for various applications, but it also provides an explicit framework for comparing GAN performance. In addition to this, diverse types of GAN are included to ensure a comprehensive view of the state-of-the-art techniques. This work is essential not only in terms of its result but also because it guides the direction of future research by pinpointing which types of applications require some GAN architectures, works to improve specific task model selection, and points out areas for further research on the development and application of GANs.
基金supported by the“Technology Commercialization Collaboration Platform Construction”project of the Innopolis Foundation(Project Number:2710033536)the Competitive Research Fund of The University of Aizu,Japan.
文摘Sentiment Analysis,a significant domain within Natural Language Processing(NLP),focuses on extracting and interpreting subjective information-such as emotions,opinions,and attitudes-from textual data.With the increasing volume of user-generated content on social media and digital platforms,sentiment analysis has become essential for deriving actionable insights across various sectors.This study presents a systematic literature review of sentiment analysis methodologies,encompassing traditional machine learning algorithms,lexicon-based approaches,and recent advancements in deep learning techniques.The review follows a structured protocol comprising three phases:planning,execution,and analysis/reporting.During the execution phase,67 peer-reviewed articles were initially retrieved,with 25 meeting predefined inclusion and exclusion criteria.The analysis phase involved a detailed examination of each study’s methodology,experimental setup,and key contributions.Among the deep learning models evaluated,Long Short-Term Memory(LSTM)networks were identified as the most frequently adopted architecture for sentiment classification tasks.This review highlights current trends,technical challenges,and emerging opportunities in the field,providing valuable guidance for future research and development in applications such as market analysis,public health monitoring,financial forecasting,and crisis management.