Control signaling is mandatory for the operation and management of all types of communication networks, including Third Generation Partnership Project (3GPP) mobile broadband networks. However, it consumes important and scarce network resources such as bandwidth and processing power. There have been several reports of such control signaling turning into signaling storms, halting network operations and causing the affected telecom companies large financial losses. This paper draws its motivation from such real network disaster incidents attributed to signaling storms. We present a thorough survey of the causes of signaling storm problems in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures. We provide relevant analytical models to help quantify the effect of the potential causes and the benefits of their corresponding solutions. Another important contribution of this paper is a tabular comparison of the possible causes and solutions/countermeasures with respect to their effect on several important network aspects, such as architecture, additional signaling, and fidelity. This paper presents an update and an extension of our earlier conference publication. To our knowledge, no similar survey study exists on the subject.
Funding: the Deanship of Graduate Studies and Scientific Research at Qassim University (QU-APC-2024-9/1).
Advanced artificial intelligence technologies such as ChatGPT and other large language models (LLMs) have significantly impacted fields such as education and research in recent years. ChatGPT benefits students and educators by providing personalized feedback, facilitating interactive learning, and introducing innovative teaching methods. While many researchers have studied ChatGPT across various subject domains, few analyses have focused on the engineering domain, particularly in addressing the risks of academic dishonesty and potential declines in critical thinking skills. To address this gap, this study explores both the opportunities and limitations of ChatGPT in engineering contexts through a two-part analysis. First, we conducted experiments with ChatGPT to assess its effectiveness in tasks such as code generation, error checking, and solution optimization. Second, we surveyed 125 users, predominantly engineering students, to analyze ChatGPT's role in academic support. Our findings reveal that 93.60% of respondents use ChatGPT for quick academic answers, particularly among early-stage university students, and that 84.00% find it helpful for sourcing research materials. The study also highlights ChatGPT's strengths in programming assistance, with 84.80% of users utilizing it for debugging and 86.40% for solving coding problems. However, limitations persist, with many users reporting inaccuracies in mathematical solutions and occasional false citations. Furthermore, the reliance on the free version by 96% of users underscores its accessibility but also suggests limitations in resource availability. This work provides key insights into ChatGPT's strengths and limitations, establishing a framework for responsible AI use in education. Highlighting areas for improvement marks a milestone in understanding and optimizing AI's role in academia for sustainable future use.
Funding: Competitive Research by the University of Aizu.
The blockchain trilemma—balancing decentralization, security, and scalability—remains a critical challenge in distributed ledger technology. Despite significant advancements, achieving all three attributes simultaneously continues to elude most blockchain systems, often forcing trade-offs that limit their real-world applicability. This review paper synthesizes current research efforts aimed at resolving the trilemma, focusing on innovative consensus mechanisms, sharding techniques, layer-2 protocols, and hybrid architectural models. We critically analyze recent breakthroughs, including Directed Acyclic Graph (DAG)-based structures, cross-chain interoperability frameworks, and zero-knowledge proof (ZKP) enhancements, which aim to reconcile scalability with robust security and decentralization. Furthermore, we evaluate the trade-offs inherent in these approaches, highlighting their practical implications for enterprise adoption, decentralized finance (DeFi), and Web3 ecosystems. By mapping the evolving landscape of solutions, this review identifies gaps in current methodologies and proposes future research directions, such as adaptive consensus algorithms and artificial intelligence-driven (AI-driven) governance models. Our analysis underscores that while no universal solution exists, interdisciplinary innovations are progressively narrowing the trilemma's constraints, paving the way for next-generation blockchain infrastructures.
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) for enhancing resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and it offers essential guidance for addressing upcoming management challenges in SDN-enabled computing environments.
A complete examination of Large Language Models' strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies on 25+ LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks like ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose an LLM application taxonomy by grouping techniques by task focus—healthcare, chemistry, sentiment analysis, agent-based simulations, and multimodal integration. Advanced methods like parameter-efficient tuning (LoRA), quantum-enhanced embeddings (DeepKet), retrieval-augmented generation (RAG), and safety-focused models (GalaxyGPT) are evaluated for dataset requirements, computational efficiency, and performance measures. Frameworks for ethical issues, data-limited hallucinations, and KDGI-enhanced fine-tuning like Woodpecker's post-remedy corrections are highlighted. Each investigation's scope, aims, and methods are described, though primary results are not reproduced in full. The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings perform better for context-heavy applications. In medical text normalization, ChatGPT-4 outperforms previous models, while multimodal frameworks such as GeoRSCLIP improve remote sensing performance. Parameter-efficient tuning technologies like LoRA incur minimal computing cost with similar performance, demonstrating the necessity for adaptive models in multiple domains. The review seeks to identify the optimal domain-specific models, explain domain-specific fine-tuning, and present quantum and multimodal LLMs that address scalability and cross-domain issues. The framework helps academics and practitioners identify, adapt, and innovate LLMs for different purposes. This work advances the field of efficient, interpretable, and ethical LLM application research.
Blockchain Technology (BT) has emerged as a transformative solution for improving the efficacy, security, and transparency of supply chain intelligence. Traditional Supply Chain Management (SCM) systems frequently suffer from data silos, a lack of real-time visibility, fraudulent activities, and inefficiencies in tracking and traceability. Blockchain's decentralized and immutable ledger offers a solid foundation for dealing with these issues; it facilitates trust, security, and real-time data sharing among all parties involved. Through an examination of critical technologies, methodologies, and applications, this paper delves deeply into computer-modeling-based blockchain frameworks for supply chain intelligence. The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field. As part of this process, we examined research on blockchain-based supply chain models, smart contracts, Decentralized Applications (DApps), and how they connect to other cutting-edge innovations such as Artificial Intelligence (AI) and the Internet of Things (IoT). To quantify blockchain's performance, the study introduces analytical models for efficiency improvement, security enhancement, and scalability, enabling computational assessment and simulation of supply chain scenarios. These models provide a structured approach to predicting system performance under varying parameters. According to the results, BT increases efficiency by automating transactions using smart contracts, increases security by using cryptographic techniques, and improves transparency in the supply chain by providing immutable records. Regulatory concerns, interoperability challenges, and scalability all work against broad adoption. To fully automate and intelligently integrate blockchain with AI and the IoT, additional research is needed to address blockchain's current limitations and realize its potential for supply chain intelligence.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project (PNURSP2025R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
The COVID-19 pandemic, declared by the WHO, created a global health crisis and disrupted people's daily lives. A large number of people were affected, so a diagnostic model is needed that can effectively classify COVID and non-COVID cases. In this work, our aim is to develop a diagnostic model based on deep features, leveraging the effectiveness of chest X-ray (CXR) imaging in distinguishing COVID from non-COVID cases. The proposed diagnostic framework utilizes CXR to diagnose COVID-19 and includes Grad-CAM visualizations for a visual interpretation of predicted images. The model's performance was evaluated using various metrics, including accuracy, precision, recall, F1-score, and G-mean. Several machine learning models, such as random forest, dense neural network, SVM, twin SVM, extreme learning machine, random vector functional link, and kernel ridge regression, were selected to diagnose COVID-19 cases. Transfer learning was used to extract deep features from several CNN-based models: InceptionV3, MobileNet, ResNet50, VGG16, and Xception. It was evident from the experiments that the ResNet50 architecture outperformed all other CNN architectures based on AUC. The TWSVM classifier achieved the highest AUC score of 0.98 using the ResNet50 feature vector.
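A minimal sketch of the deep-feature pipeline this abstract describes: a pretrained ResNet50 backbone extracts features and a classical classifier labels them. A standard RBF SVM stands in here for the paper's twin SVM (which has no scikit-learn implementation), and the image data is a placeholder rather than the study's CXR dataset.

```python
# Deep features from a pretrained ResNet50, classified with an SVM.
# Placeholder data; the paper's exact preprocessing is not reproduced here.
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Pretrained backbone with global average pooling -> 2048-d feature vectors.
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

# Placeholder data: replace with real CXR images and COVID/non-COVID labels.
images = np.random.rand(32, 224, 224, 3) * 255.0
labels = np.random.randint(0, 2, size=32)

features = extract_features(images)
X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.25, random_state=0, stratify=labels
)

clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```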
Heart disease includes a multiplicity of medical conditions that affect the structure, blood vessels, and general operation of the heart. Numerous researchers have made progress in detecting and predicting early heart disease, but more remains to be accomplished. The diagnostic accuracy of many current studies is inadequate because they attempt to predict heart disease using traditional approaches. By fusing data from several regions of the country, we intend to increase the accuracy of heart disease prediction. We adopt a statistical approach that promotes insights triggered by feature interactions, revealing intricate patterns in the data that cannot be adequately captured by any single feature. We processed the data using techniques including feature scaling, outlier detection and replacement, and null and missing value imputation to improve data quality. Furthermore, the proposed feature engineering method uses the correlation test for numerical features and the chi-square test for categorical features to construct feature interactions. To reduce dimensionality, we subsequently applied PCA retaining 95% of the variance. To identify patients with heart disease, hyperparameter-tuned machine learning algorithms such as RF, XGBoost, Gradient Boosting, LightGBM, CatBoost, SVM, and MLP are utilized, along with ensemble models. The models' overall prediction performance ranges from 88% to 92%. To attain cutting-edge results, we then used a 1D CNN model, which significantly enhanced the prediction with an accuracy of 96.36%, precision of 96.45%, recall of 96.36%, specificity of 99.51%, and F1 score of 96.34%. Among all classifiers evaluated without feature interaction, the RF model produces the best results, with an accuracy of 90.21%, precision of 90.40%, recall of 90.86%, specificity of 90.91%, and F1 score of 90.63%. Our proposed 1D CNN model with feature engineering is about 7% more accurate than its counterpart without it. This illustrates how interaction-focused feature analysis can produce precise and useful insights for heart disease diagnosis.
Funding: Competitive Research Fund of The University of Aizu, Japan (Grant No. P-13).
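A hedged sketch of the preprocessing steps the abstract outlines: chi-square scoring for categorical features, correlation for numerical ones, then PCA retaining 95% of the variance. The file name and column names are illustrative placeholders, not the paper's actual schema.

```python
# Chi-square test for categorical features, correlation for numerical ones,
# then PCA keeping 95% of variance, as sketched in the abstract.
import pandas as pd
from sklearn.feature_selection import chi2
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("heart.csv")  # hypothetical fused dataset
y = df["target"]
cat_cols = ["cp", "thal"]      # placeholder categorical feature names
num_cols = [c for c in df.columns if c not in cat_cols + ["target"]]

# Chi-square test for categorical features (requires non-negative values).
chi_scores, p_values = chi2(df[cat_cols], y)
print(dict(zip(cat_cols, p_values)))

# Correlation test for numerical features.
print(df[num_cols].corrwith(y))

# PCA retaining 95% of the variance.
X_scaled = StandardScaler().fit_transform(df[num_cols + cat_cols])
X_reduced = PCA(n_components=0.95).fit_transform(X_scaled)
print("reduced dims:", X_reduced.shape[1])
```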
Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems. However, it must also adhere to critical quantum constraints, notably the no-cloning theorem, which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography, secure communication, and error correction. While existing quantum circuit representations implicitly honor such constraints, they lack formal mechanisms for early-stage verification in software design. Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software. This paper presents a formal metamodeling framework using UML-style notation and the Object Constraint Language (OCL) to systematically capture and enforce the no-cloning theorem within quantum software models. The proposed metamodel formalizes key quantum concepts—such as entanglement and teleportation—and encodes enforceable invariants that reflect core quantum mechanical laws. The framework's effectiveness is validated by analyzing two critical edge cases—conditional copying with CNOT gates and quantum teleportation—through instance model evaluations. These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken for violations of the no-cloning theorem but are proven compliant under formal analysis. They thus serve as constructive validations of the metamodel's expressiveness and correctness in representing operations that may appear to challenge the no-cloning theorem but, upon rigorous analysis, are shown to comply with it. The approach supports early detection of conceptual design errors, promoting correctness prior to implementation. The framework's extensibility is also demonstrated by modeling projective measurement, further reinforcing its applicability to broader quantum software engineering tasks. By integrating the rigor of metamodeling with fundamental quantum mechanical principles, this work provides a structured, model-driven approach that enables traditional software engineers to address quantum computing challenges. It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable, error-resilient quantum software systems.
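For reference, the theorem the metamodel enforces admits a compact standard derivation. The following is the textbook linearity/unitarity argument, not the paper's OCL formulation:

```latex
% Textbook statement of the no-cloning theorem (not the paper's OCL invariant).
% Assume a unitary cloner U with
%   U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle
% for all states |\psi\rangle. For two states |\psi\rangle, |\phi\rangle,
% unitarity (U^\dagger U = I) gives
\[
\langle\psi|\phi\rangle
  = \bigl(\langle\psi| \otimes \langle 0|\bigr)\, U^{\dagger} U \,\bigl(|\phi\rangle \otimes |0\rangle\bigr)
  = \langle\psi|\phi\rangle^{2},
\]
% so \langle\psi|\phi\rangle \in \{0, 1\}: only identical or mutually orthogonal
% states can be copied, and no universal cloner exists.
```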
The growing spectrum of Generative Adversarial Network (GAN) applications in medical imaging, cyber security, data augmentation, and remote sensing tasks makes a systematic review of GANs increasingly critical. Earlier reviews that targeted a particular GAN architecture or emphasized a specific application area did so in a narrow spirit and lacked systematic comparative analysis of the models' performance metrics. Numerous reviews do not apply standardized frameworks, leaving gaps in the evaluation of GAN efficiency, training stability, and suitability for specific tasks. In this work, a systematic review of GAN models using the PRISMA framework is developed in detail to fill this gap by structurally evaluating GAN architectures. A wide variety of GAN models are discussed, ranging from the basic Conditional GAN, Wasserstein GAN, and Deep Convolutional GAN to specialized models such as EVAGAN, FCGAN, and SIF-GAN, covering applications across domains like fault diagnosis, network security, medical imaging, and image segmentation. The PRISMA methodology systematically filters relevant studies by inclusion and exclusion criteria to ensure transparency and replicability in the review process. All models are then assessed against specific performance metrics such as accuracy, stability, and computational efficiency. There are multiple benefits to using the PRISMA approach in this setting: it helps identify optimal models for various applications and provides an explicit framework for comparing GAN performance. In addition, diverse types of GAN are included to ensure a comprehensive view of state-of-the-art techniques. This work is essential not only for its results but also because it guides future research by pinpointing which applications require particular GAN architectures, improving task-specific model selection, and identifying areas for further research on the development and application of GANs.
Sentiment Analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information (such as emotions, opinions, and attitudes) from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study's methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
Funding: the “Technology Commercialization Collaboration Platform Construction” project of the Innopolis Foundation (Project Number: 2710033536) and the Competitive Research Fund of The University of Aizu, Japan.
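A minimal Keras sketch of the LSTM classifier the review identifies as the most common architecture for sentiment classification. Vocabulary size, sequence length, and embedding width are illustrative choices, not values from the survey.

```python
# Minimal LSTM sentiment classifier: embedding -> LSTM -> sigmoid.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

VOCAB, MAXLEN = 10_000, 100
model = Sequential([
    Embedding(VOCAB, 64),           # token ids -> dense vectors
    LSTM(64),                       # sequence -> fixed-size sentence vector
    Dense(1, activation="sigmoid"), # positive/negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data: integer-encoded token sequences with binary labels.
X = np.random.randint(1, VOCAB, size=(256, MAXLEN))
y = np.random.randint(0, 2, size=(256, 1))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```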
The rapid expansion of Internet of Things (IoT) networks has introduced challenges in network management, primarily in maintaining energy efficiency and robust connectivity across an increasing array of devices. This paper introduces the Adaptive Blended Marine Predators Algorithm (AB-MPA), a novel optimization technique designed to enhance Quality of Service (QoS) in IoT systems by dynamically optimizing network configurations for improved energy efficiency and stability. Our results show significant improvements in network performance metrics such as energy consumption, throughput, and operational stability, indicating that AB-MPA effectively addresses the pressing needs of modern IoT environments. Nodes start with 100 J of stored energy, and each node consumes energy at 0.01 J per square meter, emphasizing energy-efficient operation. The algorithm extends network lifetime to 7000 cycles for up to 200 nodes, with a maximum Packet Delivery Ratio (PDR) of 99% and a robust network throughput of up to 1800 kbps in more compact node configurations. This study proposes a viable solution to a critical problem and opens avenues for further research into scalable network management for diverse applications.
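A toy sketch of the stated energy model: each node starts with 100 J and spends 0.01 J per square meter. Interpreting the cost as per square meter covered per transmission is an assumption, and the coverage value is invented for illustration; AB-MPA itself is not reproduced here.

```python
# Node energy model: 100 J initial energy, 0.01 J per square meter covered.
INITIAL_ENERGY_J = 100.0
COST_PER_M2_J = 0.01

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.energy = INITIAL_ENERGY_J

    def transmit(self, coverage_m2):
        """Deduct energy for covering coverage_m2; return False once depleted."""
        cost = COST_PER_M2_J * coverage_m2
        if self.energy < cost:
            return False
        self.energy -= cost
        return True

# A node covering 50 m^2 per cycle survives 100 / (0.01 * 50) = 200 cycles.
node = Node(0)
cycles = 0
while node.transmit(coverage_m2=50.0):
    cycles += 1
print("lifetime cycles:", cycles)  # -> 200
```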
Cyber-Physical Systems (CPS) represent an integration of computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions to develop comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the digital transformation era.
The friendship paradox states that individuals are likely to have fewer friends than their friends do, on average. Despite its wide existence and appealing applications in real social networks, the mathematical understanding of the friendship paradox is very limited. Only a few works provide theoretical evidence of single-step and multi-step friendship paradoxes, given that the neighbors of interest are one-hop and multi-hop away from the target node. However, they consider non-evolving networks, as opposed to the topology of real social networks, which constantly grow over time. We are thus motivated to present a first look into the friendship paradox in evolving networks, where newly added nodes preferentially attach themselves to those with higher degrees. Our analytical verification of both single-step and multi-step friendship paradoxes in evolving networks, along with comparison to the non-evolving counterparts, discloses that "friendship paradox is even more paradoxical in evolving networks," primarily from three aspects: 1) we demonstrate a strengthened effect of the single-step friendship paradox in evolving networks, with a larger probability (more than 0.8) of a random node's neighbors having a higher average degree than the random node itself; 2) we unravel the higher effectiveness of the multi-step friendship paradox in seeking influential nodes in evolving networks, as the rate of reaching the max-degree node can be improved by a factor of at least Θ(t^(2/3)), with t being the network size; 3) we empirically verify our findings through both synthetic and real datasets, which show strong agreement with our analysis and consolidate the reasonability of the evolving model for real social networks.
Funding: NSF China (No. 61960206002, 62020106005, 42050105, 62061146002) and the Shanghai Pilot Program for Basic Research–Shanghai Jiao Tong University.
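An empirical check of the single-step friendship paradox on a preferential-attachment network, in the spirit of the paper's claim that more than 0.8 of nodes have neighbors with a higher average degree. networkx's Barabási–Albert generator stands in for the paper's exact evolving model.

```python
# Fraction of nodes whose neighbors' mean degree exceeds their own degree
# in a preferential-attachment (Barabasi-Albert) graph.
import networkx as nx

G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

in_paradox = sum(
    1
    for v in G
    if G.degree(v) < sum(G.degree(u) for u in G[v]) / G.degree(v)
)
print("fraction in paradox:", in_paradox / G.number_of_nodes())
```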
Machine learning models can predict material properties quickly and accurately at a low computational cost. This study generated novel hybridized nanocomposites with unsaturated polyester resin as the matrix and Areca fruit husk fiber (AFHF), tamarind fruit fiber (TFF), and nano-sized coconut shell powder (NCSP) as reinforcements. Determining the optimal proportion of raw materials in this composite to achieve maximum mechanical properties is challenging, and this task was accomplished with the help of ML techniques. The tensile strength of the hybridized nanocomposite increased by 134.06% compared to the neat unsaturated polyester resin at a 10:5:2 wt.% AFHF:TFF:NCSP ratio. The stiffness and impact behavior of the hybridized nanocomposites followed similar trends. Scanning electron microscopy showed homogeneous reinforcement and nanofiller distribution in the matrix. However, the hybridized nanocomposite with a 20:5:0 wt.% AFHF:TFF:NCSP combination ratio had the highest strain at break, 5.98%. The effectiveness of recurrent neural networks and recurrent neural networks with Levenberg's algorithm was assessed using R2, mean absolute errors, and minimum squared errors. The tensile and impact strength of the hybridized nanocomposites were well predicted by the recurrent neural network with Levenberg's algorithm using 2 and 3 hidden layers of 80 neurons each, respectively. Recurrent neural network models with 4 hidden layers of 60 neurons and with 2 hidden layers of 100 neurons predicted the hybridized nanocomposites' Young's modulus and elongation at break, respectively, with maximum R2 values. The mean absolute errors and minimum squared errors were evaluated to ensure the reliability of the machine learning algorithms. The models optimize the hybridized nanocomposites' mechanical properties, saving time and money during experimental characterization.
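An illustrative regression sketch for the property-prediction task described above, mapping AFHF:TFF:NCSP weight ratios to tensile strength. The data rows are invented placeholders, and a scikit-learn MLP stands in for the study's recurrent networks with Levenberg's algorithm (a MATLAB-style training scheme not available in scikit-learn).

```python
# Composition ratios -> tensile strength, with a small neural regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical samples: [AFHF wt.%, TFF wt.%, NCSP wt.%] -> strength (MPa).
X = np.array([[10, 5, 2], [20, 5, 0], [15, 5, 1], [10, 10, 2], [20, 10, 1]], dtype=float)
y = np.array([42.0, 35.0, 39.0, 37.0, 33.0])  # invented target values

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(80, 80), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[12, 5, 2]]))  # predicted strength for a new mix
```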
Accurate capacity and State of Charge (SOC) estimation are crucial for ensuring the safety and longevity of lithium-ion batteries in electric vehicles. This study examines ten machine learning architectures, including Deep Belief Network (DBN), Bidirectional Recurrent Neural Network (BiDirRNN), Gated Recurrent Unit (GRU), and others, using the NASA B0005 dataset of 591,458 instances. Results indicate that the DBN excels in capacity estimation, achieving orders-of-magnitude lower error values and explaining over 99.97% of the predicted variable's variance. When computational efficiency is paramount, the Deep Neural Network (DNN) offers a strong alternative, delivering near-competitive accuracy with significantly reduced prediction times. The GRU achieves the best overall performance for SOC estimation, attaining an R^2 of 0.9999, while the BiDirRNN provides a marginally lower error at a slightly higher computational cost. In contrast, Convolutional Neural Networks (CNN) and Radial Basis Function Networks (RBFN) exhibit relatively high error rates, making them less viable for real-world battery management. Analyses of error distributions reveal that the top-performing models cluster most predictions within tight bounds, limiting the risk of overcharging or deep discharging. These findings highlight the trade-off between accuracy and computational overhead, offering valuable guidance for battery management system (BMS) designers seeking optimal performance under constrained resources. Future work may further explore advanced data augmentation and domain adaptation techniques to enhance these models' robustness in diverse operating conditions.
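A hedged Keras sketch of a GRU-based SOC estimator like the best performer reported above. The window length and feature set (voltage, current, temperature) are common choices in SOC work, not the study's exact configuration, and the data shown is a placeholder for windows cut from the NASA B0005 cycles.

```python
# GRU regressor mapping sensor windows to SOC in [0, 1].
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import GRU, Dense

WINDOW, FEATURES = 30, 3  # 30 time steps of (voltage, current, temperature)
model = Sequential([
    GRU(64, input_shape=(WINDOW, FEATURES)),
    Dense(1, activation="sigmoid"),  # SOC as a fraction in [0, 1]
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Placeholder sequences; replace with real windows from battery cycling data.
X = np.random.rand(512, WINDOW, FEATURES)
y = np.random.rand(512, 1)
model.fit(X, y, epochs=1, batch_size=64, verbose=0)
```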
The emergence of different computing methods, such as cloud-, fog-, and edge-based Internet of Things (IoT) systems, has provided the opportunity to develop intelligent systems for disease detection. Compared to other machine learning models, deep learning models have gained more attention from the research community, as they have shown better results with large volumes of data than shallow learning. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluated different machine learning and deep learning algorithms, as well as their hybrid and optimized variants, for IoT-based disease detection, using the most recent papers on IoT-based disease detection systems that incorporate computing approaches such as cloud, edge, and fog. The analysis focused on IoT deep learning architectures suitable for disease detection. It also identifies the different factors that require the attention of researchers to develop better IoT disease detection systems. This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.
A significant number and range of challenges besetting sustainability can be traced to the actions and interactions of multiple autonomous agents (people, mostly) and the entities they create (e.g., institutions, policies, social networks) in the corresponding social-environmental systems (SES). To address these challenges, we need to understand the decisions made and actions taken by agents and the outcomes of their actions, including the feedbacks on the corresponding agents and environment. The science of complex adaptive systems (CAS science) has significant potential to handle such challenges. We address the advantages of CAS science for sustainability by identifying the key elements and challenges in sustainability science, the generic features of CAS, and the key advances and challenges in modeling CAS. Artificial intelligence and data science combined with agent-based modeling promise to improve understanding of agents' behaviors, detect SES structures, and formulate SES mechanisms.
Background: Pneumoconioses, a group of occupational lung diseases caused by inhalation of mineral dust, pose significant health risks to affected individuals. Accurate assessment of profusion (extent of lung involvement) in chest radiographs is essential for screening, diagnosis, and monitoring of the diseases, along with epidemiological classification. This study explores an automated classification system combining U-Net-based segmentation for lung field delineation and DenseNet121 with ImageNet-based transfer learning for profusion classification. Methods: Lung field segmentation using U-Net achieved precise delineation, ensuring accurate region-of-interest definition. Transfer learning with DenseNet121 leveraged pre-trained knowledge from ImageNet, minimizing the need for extensive training. The model was fine-tuned with International Labour Organization (ILO)-2022 version standard chest radiographs and evaluated on a diverse dataset of ILO-2000 version standardized radiographs. Results: The U-Net-based segmentation demonstrated robust performance (accuracy 94% and Dice coefficient 90%), facilitating subsequent profusion classification. The DenseNet121-based transfer learning model exhibited high accuracy (95%), precision (92%), and recall (94%) for classifying four profusion levels on the test ILO 2000/2011D dataset. The final evaluation on ILO-2000 radiographs highlighted its generalization capability. Conclusion: The proposed system offers clinical promise, aiding radiologists, pulmonologists, general physicians, and occupational health specialists in pneumoconioses screening, diagnosis, monitoring, and epidemiological classification. To the best of our knowledge, this is the first work on automated classification of profusion in chest radiographs of pneumoconioses based on the recently published ILO-2022 standard. Future research should focus on further refinement and real-world validation. This approach exemplifies the potential of deep learning for enhancing the accuracy and efficiency of pneumoconioses assessment, benefiting industrial workers, patients, and healthcare providers.
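For clarity, the Dice coefficient used to score the U-Net segmentation above (reported at 90%) has a simple generic definition, independent of the paper's implementation:

```python
# Dice coefficient for binary segmentation masks: 2|A n B| / (|A| + |B|).
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """pred, target: arrays of 0/1 mask values of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2 / (3 + 3) = 0.666...
```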
The convergence of large language models (LLMs) and virtual reality (VR) technologies has led to significant breakthroughs across multiple domains, particularly in healthcare and medicine. Owing to its immersive and interactive capabilities, VR technology has demonstrated exceptional utility in surgical simulation, rehabilitation, physical therapy, mental health, and psychological treatment. By creating highly realistic and precisely controlled environments, VR not only enhances the efficiency of medical training but also enables personalized therapeutic approaches for patients. The convergence of LLMs and VR extends the potential of both technologies. LLM-empowered VR can transform medical education through interactive learning platforms and address complex healthcare challenges using comprehensive solutions. This convergence enhances the quality of training, decision-making, and patient engagement, paving the way for innovative healthcare delivery. This study aims to comprehensively review the current applications, research advancements, and challenges associated with these two technologies in healthcare and medicine. The rapid evolution of these technologies is driving the healthcare industry toward greater intelligence and precision, establishing them as critical forces in the transformation of modern medicine.
基金the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support(QU-APC-2024-9/1).
文摘Control signaling is mandatory for the operation and management of all types of communication networks,including the Third Generation Partnership Project(3GPP)mobile broadband networks.However,they consume important and scarce network resources such as bandwidth and processing power.There have been several reports of these control signaling turning into signaling storms halting network operations and causing the respective Telecom companies big financial losses.This paper draws its motivation from such real network disaster incidents attributed to signaling storms.In this paper,we present a thorough survey of the causes,of the signaling storm problems in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures.We provide relevant analytical models to help quantify the effect of the potential causes and benefits of their corresponding solutions.Another important contribution of this paper is the comparison of the possible causes and solutions/countermeasures,concerning their effect on several important network aspects such as architecture,additional signaling,fidelity,etc.,in the form of a table.This paper presents an update and an extension of our earlier conference publication.To our knowledge,no similar survey study exists on the subject.
基金supported by Competitive Research by the University of Aizu.
文摘Advanced artificial intelligence technologies such as ChatGPT and other large language models(LLMs)have significantly impacted fields such as education and research in recent years.ChatGPT benefits students and educators by providing personalized feedback,facilitating interactive learning,and introducing innovative teaching methods.While many researchers have studied ChatGPT across various subject domains,few analyses have focused on the engineering domain,particularly in addressing the risks of academic dishonesty and potential declines in critical thinking skills.To address this gap,this study explores both the opportunities and limitations of ChatGPT in engineering contexts through a two-part analysis.First,we conducted experiments with ChatGPT to assess its effectiveness in tasks such as code generation,error checking,and solution optimization.Second,we surveyed 125 users,predominantly engineering students,to analyze ChatGPTs role in academic support.Our findings reveal that 93.60%of respondents use ChatGPT for quick academic answers,particularly among early-stage university students,and that 84.00%find it helpful for sourcing research materials.The study also highlights ChatGPT’s strengths in programming assistance,with 84.80%of users utilizing it for debugging and 86.40%for solving coding problems.However,limitations persist,with many users reporting inaccuracies in mathematical solutions and occasional false citations.Furthermore,the reliance on the free version by 96%of users underscores its accessibility but also suggests limitations in resource availability.This work provides key insights into ChatGPT’s strengths and limitations,establishing a framework for responsible AI use in education.Highlighting areas for improvement marks a milestone in understanding and optimizing AI’s role in academia for sustainable future use.
文摘The blockchain trilemma—balancing decentralization,security,and scalability—remains a critical challenge in distributed ledger technology.Despite significant advancements,achieving all three attributes simultaneously continues to elude most blockchain systems,often forcing trade-offs that limit their real-world applicability.This review paper synthesizes current research efforts aimed at resolving the trilemma,focusing on innovative consensus mechanisms,sharding techniques,layer-2 protocols,and hybrid architectural models.We critically analyze recent breakthroughs,including Directed Acyclic Graph(DAG)-based structures,cross-chain interoperability frameworks,and zero-knowledge proof(ZKP)enhancements,which aimto reconcile scalability with robust security and decentralization.Furthermore,we evaluate the trade-offs inherent in these approaches,highlighting their practical implications for enterprise adoption,decentralized finance(DeFi),and Web3 ecosystems.By mapping the evolving landscape of solutions,this review identifies gaps in currentmethodologies and proposes future research directions,such as adaptive consensus algorithms and artificial intelligence-driven(AI-driven)governance models.Our analysis underscores that while no universal solution exists,interdisciplinary innovations are progressively narrowing the trilemma’s constraints,paving the way for next-generation blockchain infrastructures.
文摘Efficient resource provisioning,allocation,and computation offloading are critical to realizing lowlatency,scalable,and energy-efficient applications in cloud,fog,and edge computing.Despite its importance,integrating Software Defined Networks(SDN)for enhancing resource orchestration,task scheduling,and traffic management remains a relatively underexplored area with significant innovation potential.This paper provides a comprehensive review of existing mechanisms,categorizing resource provisioning approaches into static,dynamic,and user-centric models,while examining applications across domains such as IoT,healthcare,and autonomous systems.The survey highlights challenges such as scalability,interoperability,and security in managing dynamic and heterogeneous infrastructures.This exclusive research evaluates how SDN enables adaptive policy-based handling of distributed resources through advanced orchestration processes.Furthermore,proposes future directions,including AI-driven optimization techniques and hybrid orchestrationmodels.By addressing these emerging opportunities,thiswork serves as a foundational reference for advancing resource management strategies in next-generation cloud,fog,and edge computing ecosystems.This survey concludes that SDN-enabled computing environments find essential guidance in addressing upcoming management opportunities.
文摘A complete examination of Large Language Models’strengths,problems,and applications is needed due to their rising use across disciplines.Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance,strengths,and weaknesses.This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies.In this research,50 studies on 25+LLMs,including GPT-3,GPT-4,Claude 3.5,DeepKet,and hybrid multimodal frameworks like ContextDET and GeoRSCLIP,are thoroughly reviewed.We propose LLM application taxonomy by grouping techniques by task focus—healthcare,chemistry,sentiment analysis,agent-based simulations,and multimodal integration.Advanced methods like parameter-efficient tuning(LoRA),quantumenhanced embeddings(DeepKet),retrieval-augmented generation(RAG),and safety-focused models(GalaxyGPT)are evaluated for dataset requirements,computational efficiency,and performance measures.Frameworks for ethical issues,data limited hallucinations,and KDGI-enhanced fine-tuning like Woodpecker’s post-remedy corrections are highlighted.The investigation’s scope,mad,and methods are described,but the primary results are not.The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings performbetter for context-heavy applications.In medical text normalization,ChatGPT-4 outperforms previous models,while two multimodal frameworks,GeoRSCLIP,increase remote sensing.Parameter-efficient tuning technologies like LoRA have minimal computing cost and similar performance,demonstrating the necessity for adaptive models in multiple domains.To discover the optimum domain-specific models,explain domain-specific fine-tuning,and present quantum andmultimodal LLMs to address scalability and cross-domain issues.The framework helps academics and practitioners identify,adapt,and innovate LLMs for different purposes.This work advances the field of efficient,interpretable,and ethical LLM application research.
基金supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2025R97)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia。
文摘Blockchain Technology(BT)has emerged as a transformative solution for improving the efficacy,security,and transparency of supply chain intelligence.Traditional Supply Chain Management(SCM)systems frequently have problems such as data silos,a lack of visibility in real time,fraudulent activities,and inefficiencies in tracking and traceability.Blockchain’s decentralized and irreversible ledger offers a solid foundation for dealing with these issues;it facilitates trust,security,and the sharing of data in real-time among all parties involved.Through an examination of critical technologies,methodology,and applications,this paper delves deeply into computer modeling based-blockchain framework within supply chain intelligence.The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field.As part of the process,we delved through the research on blockchain-based supply chain models,smart contracts,Decentralized Applications(DApps),and how they connect to other cutting-edge innovations like Artificial Intelligence(AI)and the Internet of Things(IoT).To quantify blockchain’s performance,the study introduces analytical models for efficiency improvement,security enhancement,and scalability,enabling computational assessment and simulation of supply chain scenarios.These models provide a structured approach to predicting system performance under varying parameters.According to the results,BT increases efficiency by automating transactions using smart contracts,increases security by using cryptographic techniques,and improves transparency in the supply chain by providing immutable records.Regulatory concerns,challenges with interoperability,and scalability all work against broad adoption.To fully automate and intelligently integrate blockchain with AI and the IoT,additional research is needed to address blockchain’s current limitations and realize its potential for supply chain intelligence.
文摘The COVID-19 pandemic,which was declared by the WHO,had created a global health crisis and disrupted people’s daily lives.A large number of people were affected by the COVID-19 pandemic.Therefore,a diagnostic model needs to be generated which can effectively classify the COVID and non-COVID cases.In this work,our aim is to develop a diagnostic model based on deep features using effectiveness of Chest X-ray(CXR)in distinguishing COVID from non-COVID cases.The proposed diagnostic framework utilizes CXR to diagnose COVID-19 and includes Grad-CAM visualizations for a visual interpretation of predicted images.The model’s performance was evaluated using various metrics,including accuracy,precision,recall,F1-score,and Gmean.Several machine learning models,such as random forest,dense neural network,SVM,twin SVM,extreme learning machine,random vector functional link,and kernel ridge regression,were selected to diagnose COVID-19 cases.Transfer learning was used to extract deep features.For feature extraction many CNN-based models such as Inception V3,MobileNet,ResNet50,VGG16 and Xception models are used.It was evident from the experiments that ResNet50 architecture outperformed all other CNN architectures based on AUC.The TWSVM classifier achieved the highest AUC score of 0.98 based on the ResNet50 feature vector.
基金supported by the Competitive Research Fund of the University of Aizu,Japan(Grant No.P-13).
文摘Heart disease includes a multiplicity of medical conditions that affect the structure,blood vessels,and general operation of the heart.Numerous researchers have made progress in correcting and predicting early heart disease,but more remains to be accomplished.The diagnostic accuracy of many current studies is inadequate due to the attempt to predict patients with heart disease using traditional approaches.By using data fusion from several regions of the country,we intend to increase the accuracy of heart disease prediction.A statistical approach that promotes insights triggered by feature interactions to reveal the intricate pattern in the data,which cannot be adequately captured by a single feature.We processed the data using techniques including feature scaling,outlier detection and replacement,null and missing value imputation,and more to improve the data quality.Furthermore,the proposed feature engineering method uses the correlation test for numerical features and the chi-square test for categorical features to interact with the feature.To reduce the dimensionality,we subsequently used PCA with 95%variation.To identify patients with heart disease,hyperparameter-based machine learning algorithms like RF,XGBoost,Gradient Boosting,LightGBM,CatBoost,SVM,and MLP are utilized,along with ensemble models.The model’s overall prediction performance ranges from 88%to 92%.In order to attain cutting-edge results,we then used a 1D CNN model,which significantly enhanced the prediction with an accuracy score of 96.36%,precision of 96.45%,recall of 96.36%,specificity score of 99.51%and F1 score of 96.34%.The RF model produces the best results among all the classifiers in the evaluation matrix without feature interaction,with accuracy of 90.21%,precision of 90.40%,recall of 90.86%,specificity of 90.91%,and F1 score of 90.63%.Our proposed 1D CNN model is 7%superior to the one without feature engineering when compared to the suggested approach.This illustrates how interaction-focused feature analysis can produce precise and useful insights for heart disease diagnosis.
文摘Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems.However,it must also adhere to critical quantum constraints,notably the no-cloning theorem,which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography,secure communication,and error correction.While existing quantum circuit representations implicitly honor such constraints,they lack formal mechanisms for early-stage verification in software design.Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software.This paper presents a formal metamodeling framework using UML-style notation and and Object Constraint Language(OCL)to systematically capture and enforce the no-cloning theorem within quantum software models.The proposed metamodel formalizes key quantum concepts—such as entanglement and teleportation—and encodes enforceable invariants that reflect core quantum mechanical laws.The framework’s effectiveness is validated by analyzing two critical edge cases—conditional copying with CNOT gates and quantum teleportation—through instance model evaluations.These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken as violations of the no-cloning theorem but are proven compliant under formal analysis.Thus,these serve as constructive validations that demonstrate the metamodel’s expressiveness and correctness in representing operations that may appear to challenge the no-cloning theorem but,upon rigorous analysis,are shown to comply with it.The approach supports early detection of conceptual design errors,promoting correctness prior to implementation.The framework’s extensibility is also demonstrated by modeling projective measurement,further reinforcing its applicability to broader quantum software engineering tasks.By integrating the rigor of metamodeling with fundamental quantum mechanical principles,this work provides a structured,model-driven approach that enables traditional software engineers to address quantum computing challenges.It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable,error-resilient quantum software systems.
文摘The growing spectrum of Generative Adversarial Network (GAN) applications in medical imaging, cyber security, data augmentation, and the field of remote sensing tasks necessitate a sharp spike in the criticality of review of Generative Adversarial Networks. Earlier reviews that targeted reviewing certain architecture of the GAN or emphasizing a specific application-oriented area have done so in a narrow spirit and lacked the systematic comparative analysis of the models’ performance metrics. Numerous reviews do not apply standardized frameworks, showing gaps in the efficiency evaluation of GANs, training stability, and suitability for specific tasks. In this work, a systemic review of GAN models using the PRISMA framework is developed in detail to fill the gap by structurally evaluating GAN architectures. A wide variety of GAN models have been discussed in this review, starting from the basic Conditional GAN, Wasserstein GAN, and Deep Convolutional GAN, and have gone down to many specialized models, such as EVAGAN, FCGAN, and SIF-GAN, for different applications across various domains like fault diagnosis, network security, medical imaging, and image segmentation. The PRISMA methodology systematically filters relevant studies by inclusion and exclusion criteria to ensure transparency and replicability in the review process. Hence, all models are assessed relative to specific performance metrics such as accuracy, stability, and computational efficiency. There are multiple benefits to using the PRISMA approach in this setup. Not only does this help in finding optimal models suitable for various applications, but it also provides an explicit framework for comparing GAN performance. In addition to this, diverse types of GAN are included to ensure a comprehensive view of the state-of-the-art techniques. This work is essential not only in terms of its result but also because it guides the direction of future research by pinpointing which types of applications require some GAN architectures, works to improve specific task model selection, and points out areas for further research on the development and application of GANs.
基金supported by the“Technology Commercialization Collaboration Platform Construction”project of the Innopolis Foundation(Project Number:2710033536)the Competitive Research Fund of The University of Aizu,Japan.
文摘Sentiment Analysis,a significant domain within Natural Language Processing(NLP),focuses on extracting and interpreting subjective information-such as emotions,opinions,and attitudes-from textual data.With the increasing volume of user-generated content on social media and digital platforms,sentiment analysis has become essential for deriving actionable insights across various sectors.This study presents a systematic literature review of sentiment analysis methodologies,encompassing traditional machine learning algorithms,lexicon-based approaches,and recent advancements in deep learning techniques.The review follows a structured protocol comprising three phases:planning,execution,and analysis/reporting.During the execution phase,67 peer-reviewed articles were initially retrieved,with 25 meeting predefined inclusion and exclusion criteria.The analysis phase involved a detailed examination of each study’s methodology,experimental setup,and key contributions.Among the deep learning models evaluated,Long Short-Term Memory(LSTM)networks were identified as the most frequently adopted architecture for sentiment classification tasks.This review highlights current trends,technical challenges,and emerging opportunities in the field,providing valuable guidance for future research and development in applications such as market analysis,public health monitoring,financial forecasting,and crisis management.
Abstract: The rapid expansion of Internet of Things (IoT) networks has introduced challenges in network management, primarily in maintaining energy efficiency and robust connectivity across an increasing array of devices. This paper introduces the Adaptive Blended Marine Predators Algorithm (AB-MPA), a novel optimization technique designed to enhance Quality of Service (QoS) in IoT systems by dynamically optimizing network configurations for improved energy efficiency and stability. Our results show significant improvements in network performance metrics such as energy consumption, throughput, and operational stability, indicating that AB-MPA effectively addresses the pressing needs of modern IoT environments. To emphasize energy-efficient operation, each node is initialized with 100 J of stored energy and consumes 0.01 J per square meter. The algorithm also extends network lifetime to as many as 7000 cycles for up to 200 nodes, with a maximum Packet Delivery Ratio (PDR) of 99% and a robust network throughput of up to 1800 kbps in more compact node configurations. This study proposes a viable solution to a critical problem and opens avenues for further research into scalable network management for diverse applications.
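The abstract only partially specifies the energy model, so the sketch below is a toy simulation under stated assumptions: each node starts with the quoted 100 J budget, and the 0.01 J per square meter cost is read as 0.01·d² joules for a transmission over distance d to an assumed central sink. Node placement and the round structure are also assumptions, not AB-MPA itself.

```python
# Toy simulation of the quoted per-node energy budget (illustrative only).
import random

random.seed(42)
NUM_NODES, AREA, E_INIT, E_PER_M2 = 200, 100.0, 100.0, 0.01

nodes = [{"x": random.uniform(0, AREA), "y": random.uniform(0, AREA),
          "energy": E_INIT} for _ in range(NUM_NODES)]
sink = (AREA / 2, AREA / 2)                       # assumed central sink

def tx_cost(node):
    d2 = (node["x"] - sink[0]) ** 2 + (node["y"] - sink[1]) ** 2
    return E_PER_M2 * d2                          # 0.01 J per square meter of d^2

rounds = 0
while sum(n["energy"] > 0 for n in nodes) > NUM_NODES // 2:   # run until half the nodes die
    for n in nodes:
        if n["energy"] > 0:
            n["energy"] -= tx_cost(n)             # one transmission to the sink per round
    rounds += 1

print(f"half the network depleted after {rounds} rounds")
```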
Abstract: Cyber-Physical Systems (CPS) integrate computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, the Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions for developing comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the digital transformation era.
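As a minimal illustration of the CPS-DT coupling described above, the sketch below mirrors a simulated physical asset into a twin object that ingests sensor readings in real time and runs a simple predictive check; the pump, thresholds, and update loop are invented for illustration and do not come from the survey.

```python
# Minimal digital twin sketch (illustrative only): the twin mirrors a
# physical pump's state and extrapolates when it will overheat.
import random

class PhysicalPump:
    """Stand-in for the physical asset: temperature drifts upward under load."""
    def __init__(self):
        self.temp_c = 40.0
    def read_sensor(self):
        self.temp_c += random.uniform(0.0, 1.5)   # load-driven heating plus noise
        return self.temp_c

class PumpTwin:
    """Virtual replica: keeps synced state and runs simple analytics on it."""
    def __init__(self, limit_c=80.0):
        self.history, self.limit_c = [], limit_c
    def sync(self, reading):
        self.history.append(reading)              # real-time data integration
    def steps_to_limit(self):
        if len(self.history) < 2:
            return None
        rate = (self.history[-1] - self.history[0]) / (len(self.history) - 1)
        remaining = self.limit_c - self.history[-1]
        return None if rate <= 0 else remaining / rate   # linear extrapolation

pump, twin = PhysicalPump(), PumpTwin()
for step in range(20):
    twin.sync(pump.read_sensor())                 # sensor -> twin feedback loop
eta = twin.steps_to_limit()
print(f"twin estimates ~{eta:.1f} steps until the 80 C limit" if eta else "no trend yet")
```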
Funding: Supported by NSF China (Nos. 61960206002, 62020106005, 42050105, and 62061146002) and the Shanghai Pilot Program for Basic Research – Shanghai Jiao Tong University.
Abstract: The friendship paradox states that individuals are likely to have fewer friends than their friends do, on average. Despite its wide occurrence and appealing applications in real social networks, the mathematical understanding of the friendship paradox is very limited. Only a few works provide theoretical evidence of the single-step and multi-step friendship paradoxes, in which the neighbors of interest are one hop and multiple hops away from the target node, respectively. However, they consider non-evolving networks, in contrast to the topology of real social networks, which grow constantly over time. We are thus motivated to present a first look at the friendship paradox in evolving networks, where newly added nodes preferentially attach themselves to those with higher degrees. Our analytical verification of both the single-step and multi-step friendship paradoxes in evolving networks, along with a comparison to the non-evolving counterparts, discloses that "friendship paradox is even more paradoxical in evolving networks", primarily in three respects: 1) we demonstrate a strengthened effect of the single-step friendship paradox in evolving networks, with a larger probability (more than 0.8) that a random node's neighbors have a higher average degree than the random node itself; 2) we unravel a higher effectiveness of the multi-step friendship paradox in seeking influential nodes in evolving networks, as the rate of reaching the maximum-degree node can be improved by a factor of at least Θ(t^(2/3)), with t being the network size; 3) we empirically verify our findings on both synthetic and real datasets, whose results agree closely and support the suitability of the evolving model for real social networks.
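The single-step paradox is easy to check empirically. The sketch below grows a preferential-attachment (Barabási-Albert) graph with networkx, standing in for the evolving-network model above, and measures how often a random node's neighbors have a higher average degree; the graph size and attachment parameter are illustrative assumptions.

```python
# Empirical check of the single-step friendship paradox on an evolving
# (preferential-attachment) network; parameters are illustrative.
import networkx as nx

G = nx.barabasi_albert_graph(n=10000, m=3, seed=1)   # new nodes attach to high-degree nodes

paradox_count = 0
for v in G.nodes:
    neigh = list(G.neighbors(v))
    if neigh:
        mean_neigh_deg = sum(G.degree(u) for u in neigh) / len(neigh)
        if mean_neigh_deg > G.degree(v):
            paradox_count += 1                       # v's friends have more friends on average

print(f"fraction of nodes in the paradox: {paradox_count / G.number_of_nodes():.3f}")
# On preferential-attachment graphs this fraction typically lands well above 0.8,
# consistent with the strengthened effect reported above.
```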
Abstract: Machine learning models can predict material properties quickly and accurately at a low computational cost. This study produced novel hybridized nanocomposites with unsaturated polyester resin as the matrix and Areca fruit husk fiber (AFHF), tamarind fruit fiber (TFF), and nano-sized coconut shell powder (NCSP) as reinforcements. Determining the proportion of raw materials that maximizes the mechanical properties of this composite is challenging, and this task was accomplished with the help of ML techniques. At a 10:5:2 wt.% AFHF:TFF:NCSP ratio, the tensile strength of the hybridized nanocomposite was 134.06% higher than that of the neat unsaturated polyester resin, and the stiffness and impact behavior of the hybridized nanocomposites followed a similar trend. Scanning electron microscopy showed a homogeneous distribution of reinforcement and nanofiller in the matrix. However, the hybridized nanocomposite with a 20:5:0 wt.% AFHF:TFF:NCSP ratio had the highest strain at break, 5.98%. The effectiveness of recurrent neural networks and of recurrent neural networks with Levenberg's algorithm was assessed using R², mean absolute errors, and mean squared errors. The tensile and impact strengths of the hybridized nanocomposites were well predicted by the recurrent neural network with Levenberg's algorithm using 2 and 3 hidden layers of 80 neurons each, respectively. A recurrent neural network with 4 hidden layers of 60 neurons predicted the Young's modulus, and one with 2 hidden layers of 100 neurons predicted the elongation at break, with maximum R² values. The mean absolute errors and mean squared errors were evaluated to ensure the reliability of the machine learning algorithms. These models optimize the mechanical properties of hybridized nanocomposites, saving time and money during experimental characterization.
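To illustrate the composition-to-property mapping learned above, here is a minimal regression sketch in scikit-learn: a small neural network maps AFHF:TFF:NCSP weight ratios to tensile strength. The training data are synthetic stand-ins generated from an invented response surface, and the plain feedforward MLP is a simplification of the paper's recurrent/Levenberg models.

```python
# Minimal composition -> property regression sketch (illustrative only).
# Inputs are AFHF:TFF:NCSP wt.% ratios; targets are made-up tensile strengths.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.uniform([5, 0, 0], [25, 10, 4], size=(200, 3))      # synthetic wt.% combinations
y = (20 + 1.5 * X[:, 0] + 0.8 * X[:, 1] + 3.0 * X[:, 2]
     - 0.04 * X[:, 0] ** 2 + rng.normal(0, 1.0, 200))       # invented response surface

model = MLPRegressor(hidden_layer_sizes=(80, 80), max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])                                  # train/test split by slicing

pred = model.predict(X[150:])
print(f"R2:  {r2_score(y[150:], pred):.3f}")
print(f"MAE: {mean_absolute_error(y[150:], pred):.3f}")
print(f"predicted strength at 10:5:2 wt.%: {model.predict([[10, 5, 2]])[0]:.1f} (arbitrary units)")
```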
Abstract: Accurate capacity and State of Charge (SOC) estimation are crucial for ensuring the safety and longevity of lithium-ion batteries in electric vehicles. This study examines ten machine learning architectures, including the Deep Belief Network (DBN), Bidirectional Recurrent Neural Network (BiDirRNN), Gated Recurrent Unit (GRU), and others, using the NASA B0005 dataset of 591,458 instances. Results indicate that the DBN excels in capacity estimation, achieving orders-of-magnitude lower error values and explaining over 99.97% of the predicted variable's variance. When computational efficiency is paramount, the Deep Neural Network (DNN) offers a strong alternative, delivering near-competitive accuracy with significantly reduced prediction times. The GRU achieves the best overall performance for SOC estimation, attaining an R² of 0.9999, while the BiDirRNN provides marginally lower error at a somewhat higher computational cost. In contrast, Convolutional Neural Networks (CNN) and Radial Basis Function Networks (RBFN) exhibit relatively high error rates, making them less viable for real-world battery management. Analyses of the error distributions reveal that the top-performing models cluster most predictions within tight bounds, limiting the risk of overcharging or deep discharging. These findings highlight the trade-off between accuracy and computational overhead, offering valuable guidance for battery management system (BMS) designers seeking optimal performance under constrained resources. Future work may explore advanced data augmentation and domain adaptation techniques to enhance these models' robustness under diverse operating conditions.
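For concreteness, the sketch below shows the general shape of a GRU-based SOC estimator of the kind ranked best above, in PyTorch: a window of voltage/current/temperature readings maps to a single SOC value. The layer sizes, window length, and random batch are assumptions; the study's exact configuration is not given in the abstract.

```python
# GRU-based SOC estimator sketch (illustrative only): a sliding window of
# (voltage, current, temperature) samples -> one state-of-charge estimate.
import torch
import torch.nn as nn

class GRUSocEstimator(nn.Module):
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())  # SOC in [0, 1]

    def forward(self, window):                  # window: (batch, time, features)
        _, h_n = self.gru(window)               # h_n: (1, batch, hidden)
        return self.head(h_n.squeeze(0))        # (batch, 1) SOC estimate

model = GRUSocEstimator()
batch = torch.randn(8, 50, 3)                   # 8 dummy windows of 50 time steps
target = torch.rand(8, 1)                       # dummy SOC labels in [0, 1]
loss = nn.MSELoss()(model(batch), target)
loss.backward()
print(f"toy MSE: {loss.item():.4f}")
```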
Abstract: The emergence of different computing paradigms, such as cloud-, fog-, and edge-based Internet of Things (IoT) systems, has provided the opportunity to develop intelligent systems for disease detection. Compared to other machine learning models, deep learning models have gained more attention from the research community, as they produce better results on large volumes of data than shallow learning. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluates machine learning and deep learning algorithms, along with their hybrid and optimized variants, for IoT-based disease detection, drawing on the most recent papers on IoT-based disease detection systems that involve computing approaches such as cloud, edge, and fog. The analysis focuses on IoT deep learning architectures suitable for disease detection and identifies the factors that require researchers' attention in order to build better IoT disease detection systems. This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.
Funding: The National Science Foundation funded this research under the Dynamics of Coupled Natural and Human Systems program (Grant Nos. DEB-1212183 and BCS-1826839), with support from San Diego State University and Auburn University.
Abstract: A significant number and range of challenges besetting sustainability can be traced to the actions and interactions of multiple autonomous agents (mostly people) and the entities they create (e.g., institutions, policies, social networks) in the corresponding social-environmental systems (SES). To address these challenges, we need to understand the decisions made and actions taken by agents and the outcomes of their actions, including the feedbacks on the corresponding agents and environment. The science of complex adaptive systems (CAS science) has significant potential to handle such challenges. We examine the advantages of CAS science for sustainability by identifying the key elements and challenges in sustainability science, the generic features of CAS, and the key advances and challenges in modeling CAS. Artificial intelligence and data science combined with agent-based modeling promise to improve understanding of agents' behaviors, detect SES structures, and formulate SES mechanisms.
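As a minimal example of the agent-based modeling advocated above, the sketch below simulates autonomous agents harvesting a shared, regrowing resource, with a simple feedback from resource scarcity to agent behavior; the rules and parameters are invented for illustration and are not from the paper.

```python
# Tiny agent-based SES sketch (illustrative only): agents harvest a shared
# regrowing resource, and scarcity feeds back on how much they take.
import random

random.seed(7)
resource = 1000.0            # shared environmental stock
agents = [{"greed": random.uniform(0.5, 2.0), "wealth": 0.0} for _ in range(50)]

for year in range(30):
    scarcity = max(resource / 1000.0, 0.0)        # 1.0 = abundant, 0.0 = depleted
    for a in agents:
        take = min(a["greed"] * scarcity, resource)   # agents harvest less when stock is low
        resource -= take
        a["wealth"] += take
    resource = min(resource * 1.05, 1200.0)       # regrowth with a carrying capacity

print(f"stock after 30 years: {resource:.0f}")
print(f"mean agent wealth:    {sum(a['wealth'] for a in agents) / len(agents):.1f}")
```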
Abstract: Background: Pneumoconioses, a group of occupational lung diseases caused by the inhalation of mineral dust, pose significant health risks to affected individuals. Accurate assessment of profusion (the extent of lung involvement) in chest radiographs is essential for screening, diagnosis, and monitoring of the diseases, along with epidemiological classification. This study explores an automated classification system combining U-Net-based segmentation for lung field delineation with DenseNet121 and ImageNet-based transfer learning for profusion classification. Methods: Lung field segmentation using U-Net achieved precise delineation, ensuring an accurate region-of-interest definition. Transfer learning with DenseNet121 leveraged pre-trained knowledge from ImageNet, minimizing the need for extensive training. The model was fine-tuned with International Labour Organization (ILO)-2022 version standard chest radiographs and evaluated on a diverse dataset of ILO-2000 version standardized radiographs. Results: The U-Net-based segmentation demonstrated robust performance (accuracy 94% and Dice coefficient 90%), facilitating subsequent profusion classification. The DenseNet121-based transfer learning model exhibited high accuracy (95%), precision (92%), and recall (94%) in classifying four profusion levels on the test ILO 2000/2011D dataset. The final evaluation on ILO-2000 radiographs highlighted its generalization capability. Conclusion: The proposed system offers clinical promise, aiding radiologists, pulmonologists, general physicians, and occupational health specialists in pneumoconioses screening, diagnosis, monitoring, and epidemiological classification. To the best of our knowledge, this is the first work on the automated classification of profusion in chest radiographs of pneumoconioses based on the recently published ILO-2022 standard. Future research should focus on further refinement and real-world validation. This approach exemplifies the potential of deep learning to enhance the accuracy and efficiency of pneumoconioses assessment, benefiting industrial workers, patients, and healthcare providers.
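The transfer-learning step described above follows a standard pattern, sketched below with torchvision: load DenseNet121 with ImageNet weights, freeze the feature extractor, and replace the classifier with a four-way head for the profusion levels. The freezing choice, head shape, and dummy input are assumptions; the authors' fine-tuning details are not specified in the abstract.

```python
# DenseNet121 transfer-learning sketch for 4 profusion levels (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                       # freeze the ImageNet feature extractor

model.classifier = nn.Linear(model.classifier.in_features, 4)  # new 4-class head

# Only the new head is trainable; fine-tune it on segmented lung-field crops.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
dummy = torch.randn(2, 3, 224, 224)               # stand-in for preprocessed radiograph crops
logits = model(dummy)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 3]))
loss.backward()
optimizer.step()
print(f"logits shape: {tuple(logits.shape)}")     # (2, 4)
```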
Funding: Supported by the Noncommunicable Chronic Diseases – National Science and Technology Major Project (2024ZD0523200) and the National Natural Science Foundation of China (62301330, 62101346).
Abstract: The convergence of large language models (LLMs) and virtual reality (VR) technologies has led to significant breakthroughs across multiple domains, particularly in healthcare and medicine. Owing to its immersive and interactive capabilities, VR technology has demonstrated exceptional utility in surgical simulation, rehabilitation, physical therapy, mental health, and psychological treatment. By creating highly realistic and precisely controlled environments, VR not only enhances the efficiency of medical training but also enables personalized therapeutic approaches for patients. The convergence of LLMs and VR extends the potential of both technologies: LLM-empowered VR can transform medical education through interactive learning platforms and address complex healthcare challenges with comprehensive solutions. This convergence enhances the quality of training, decision-making, and patient engagement, paving the way for innovative healthcare delivery. This study comprehensively reviews the current applications, research advancements, and challenges associated with these two technologies in healthcare and medicine. The rapid evolution of these technologies is driving the healthcare industry toward greater intelligence and precision, establishing them as critical forces in the transformation of modern medicine.