Blockchain Technology (BT) has emerged as a transformative solution for improving the efficacy, security, and transparency of supply chain intelligence. Traditional Supply Chain Management (SCM) systems frequently have problems such as data silos, a lack of real-time visibility, fraudulent activities, and inefficiencies in tracking and traceability. Blockchain's decentralized and irreversible ledger offers a solid foundation for dealing with these issues; it facilitates trust, security, and real-time data sharing among all parties involved. Through an examination of critical technologies, methodologies, and applications, this paper delves deeply into computer modeling-based blockchain frameworks within supply chain intelligence. The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field. As part of the process, we reviewed the research on blockchain-based supply chain models, smart contracts, Decentralized Applications (DApps), and how they connect to other cutting-edge innovations such as Artificial Intelligence (AI) and the Internet of Things (IoT). To quantify blockchain's performance, the study introduces analytical models for efficiency improvement, security enhancement, and scalability, enabling computational assessment and simulation of supply chain scenarios. These models provide a structured approach to predicting system performance under varying parameters. According to the results, BT increases efficiency by automating transactions using smart contracts, increases security by using cryptographic techniques, and improves transparency in the supply chain by providing immutable records. Regulatory concerns, challenges with interoperability, and scalability all work against broad adoption. To fully automate and intelligently integrate blockchain with AI and the IoT, additional research is needed to address blockchain's current limitations and realize its potential for supply chain intelligence.
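The immutability that this abstract attributes to blockchain comes from hash-linking every record to its predecessor. A minimal, hypothetical Python sketch (a toy ledger, not the framework proposed in the paper or any production system) illustrates why tampering with an earlier supply-chain record is immediately detectable:

```python
import hashlib
import json

def block_hash(prev_hash: str, record: dict) -> str:
    """Deterministically hash a block's contents (sorted keys for stability)."""
    payload = json.dumps({"prev_hash": prev_hash, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    """Append a supply-chain record, linking it to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record,
                  "hash": block_hash(prev, record)})

def verify(chain: list) -> bool:
    """Recompute every link; any tampering with history breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block_hash(block["prev_hash"], block["record"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"item": "pallet-17", "event": "shipped"})
append_block(chain, {"item": "pallet-17", "event": "received"})
assert verify(chain)
chain[0]["record"]["event"] = "lost"   # tamper with an earlier record
assert not verify(chain)               # detected on re-verification
```

In a real deployment the chain is replicated across parties and extended by consensus, which is what removes the need for a trusted central record-keeper.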
Cyber-Physical Systems (CPS) represent an integration of computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions to develop comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the digital transformation era.
This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We meticulously develop the TIHTBXII's mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the bias and RMSE of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests. This application highlights how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number (ASN) while effectively managing consumer and producer risks. Empirical validation using real-world datasets, including "Active Repair Duration," "Groundwater Contaminant Measurements," and "Dominica COVID-19 Mortality," further demonstrates the TIHTBXII's superior fit compared to existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, leading to more informed and robust decision-making.
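For readers unfamiliar with the base distribution, the standard two-parameter Burr XII (the starting point that the TIHTBXII generalizes; the paper's type-I heavy-tailed modification itself is not reproduced here) has closed-form PDF, CDF, and quantile function, which is exactly what makes it convenient for simulation and for the truncated-life-test quantiles used in acceptance sampling:

```python
import math

def burr12_pdf(x: float, c: float, k: float) -> float:
    """Burr XII density: f(x) = c*k*x^(c-1) * (1 + x^c)^(-(k+1)), for x > 0."""
    if x <= 0:
        return 0.0
    return c * k * x ** (c - 1) * (1.0 + x ** c) ** (-(k + 1))

def burr12_cdf(x: float, c: float, k: float) -> float:
    """Burr XII distribution function: F(x) = 1 - (1 + x^c)^(-k)."""
    if x <= 0:
        return 0.0
    return 1.0 - (1.0 + x ** c) ** (-k)

def burr12_ppf(u: float, c: float, k: float) -> float:
    """Quantile function, inverted in closed form:
    F^{-1}(u) = ((1 - u)^(-1/k) - 1)^(1/c)."""
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

# sanity check: a numeric derivative of the CDF matches the PDF
x, c, k, h = 1.3, 2.0, 3.0, 1e-6
num = (burr12_cdf(x + h, c, k) - burr12_cdf(x - h, c, k)) / (2 * h)
assert abs(num - burr12_pdf(x, c, k)) < 1e-5
```

The closed-form quantile function is why inverse-transform sampling and life-test design with this family need no numerical root finding.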
The Bat algorithm, a metaheuristic optimization technique inspired by the foraging behaviour of bats, has been employed to tackle optimization problems. Known for its ease of implementation, parameter tunability, and strong global search capabilities, this algorithm finds application across diverse optimization problem domains. However, in the face of increasingly complex optimization challenges, the Bat algorithm encounters certain limitations, such as slow convergence and sensitivity to initial solutions. To tackle these challenges, the present study incorporates a range of optimization components into the Bat algorithm, thereby proposing a variant called PKEBA. A projection screening strategy is implemented to mitigate its sensitivity to initial solutions, thereby enhancing the quality of the initial solution set. A kinetic adaptation strategy reforms exploration patterns, while an elite communication strategy enhances group interaction, keeping the algorithm from stagnating in local optima. Subsequently, the effectiveness of the proposed PKEBA is rigorously evaluated. Testing encompasses 30 benchmark functions from IEEE CEC2014, featuring ablation experiments and comparative assessments against classical algorithms and their variants. Moreover, real-world engineering problems are employed as further validation. The results conclusively demonstrate that PKEBA exhibits superior convergence and precision compared to existing algorithms.
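For context, the canonical Bat algorithm that PKEBA extends fits in a few dozen lines. The sketch below follows the standard Yang (2010) formulation for minimization; population size, loudness decay α, and pulse-rate growth γ are illustrative defaults, not the paper's settings, and none of PKEBA's three strategies are included:

```python
import math
import random

def bat_algorithm(obj, dim, n_bats=20, iters=200, lo=-5.0, hi=5.0,
                  fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9, seed=0):
    """Canonical Bat algorithm: frequency-tuned velocity updates toward the
    global best, a loudness-gated acceptance rule, and a local random walk
    around the best bat triggered by the pulse emission rate."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats            # loudness A_i (decays on improvement)
    rate = [0.5] * n_bats            # pulse emission rate r_i (grows over time)
    fit = [obj(xi) for xi in x]
    b = min(range(n_bats), key=lambda i: fit[i])
    best_x, best_f = x[b][:], fit[b]

    for t in range(1, iters + 1):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            for d in range(dim):
                v[i][d] += (x[i][d] - best_x[d]) * freq
            cand = [min(hi, max(lo, x[i][d] + v[i][d])) for d in range(dim)]
            if rng.random() > rate[i]:          # local walk around the best bat
                avg_loud = sum(loud) / n_bats
                cand = [min(hi, max(lo, best_x[d] + avg_loud * rng.uniform(-1, 1)))
                        for d in range(dim)]
            f_cand = obj(cand)
            if f_cand < fit[i] and rng.random() < loud[i]:
                x[i], fit[i] = cand, f_cand
                loud[i] *= alpha
                rate[i] = 0.5 * (1.0 - math.exp(-gamma * t))
            if f_cand < best_f:
                best_x, best_f = cand[:], f_cand
    return best_x, best_f

sphere = lambda p: sum(c * c for c in p)
_, val = bat_algorithm(sphere, dim=2)   # converges toward the 0 minimum
```

The sensitivity to initialization that PKEBA's projection screening targets is visible here: the search is pulled toward whichever bat starts best.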
This work aims to implement expert and collaborative group recommendation services through an analysis of expertise and network relations in NTIS. First, an expertise database was constructed by indexing national R&D information in Korea (human resources, projects, and outcomes), extracting keywords, and applying an expertise calculation algorithm. Weight values were selected in consideration of the characteristics of national R&D information, and expertise scores were then calculated by applying them. In addition, joint research and collaborative relations were presented in a knowledge map format through network analysis of national R&D information.
Presently, precision agriculture processes like plant disease detection, crop yield prediction, species recognition, weed detection, and irrigation can be accomplished by the use of computer vision (CV) approaches. Weeds play a vital role in influencing crop productivity, and the wastage and pollution of farmland's natural atmosphere caused by full-coverage chemical herbicide spraying are increasing. Since the proper identification of weeds among crops helps to reduce herbicide usage and improve productivity, this study presents a novel computer vision and deep learning based weed detection and classification (CVDL-WDC) model for precision agriculture. The proposed CVDL-WDC technique intends to properly discriminate between plants and weeds. It involves two processes, namely multiscale Faster RCNN based object detection and optimal extreme learning machine (ELM) based weed classification. The parameters of the ELM model are optimally adjusted by the use of the farmland fertility optimization (FFO) algorithm. A comprehensive simulation analysis of the CVDL-WDC technique against a benchmark dataset reported enhanced outcomes over recent approaches in terms of several measures.
Depression is a serious medical condition and a leading cause of disability worldwide. Current depression diagnostics and assessment have significant limitations due to the heterogeneity of clinical presentations, a lack of objective assessments, and assessments that rely on patients' perceptions, memory, and recall. Digital phenotyping (DP), especially assessment conducted using mobile health technologies, has the potential to greatly improve the accuracy of depression diagnostics by generating objectively measurable endophenotypes. DP includes two primary sources of digital data generated using ecological momentary assessments (EMA), assessments conducted in real time in subjects' natural environment. These comprise active EMA, data that require active input by the subject, and passive EMA (passive sensing), data passively and automatically collected from subjects' personal digital devices. The raw data are then analyzed using machine learning algorithms to identify behavioral patterns that correlate with patients' clinical status. Preliminary investigations have also shown that linguistic and behavioral clues from social media data and data extracted from electronic medical records can be used to predict depression status. These other sources of data and recent advances in telepsychiatry can further enhance DP of depressed patients. The success of DP endeavors depends on critical contributions from both the psychiatric and engineering disciplines. The current review integrates important perspectives from both disciplines and discusses parameters for successful interdisciplinary collaborations. A clinically relevant model for incorporating DP in the clinical setting is presented. This model, based on investigations conducted by our group, delineates the development of a depression prediction system and its integration in the clinical setting to enhance depression diagnostics and inform clinical decision making. Benefits, challenges, and opportunities pertaining to the clinical integration of DP for depression diagnostics are discussed from interdisciplinary perspectives.
In this work, we propose a new, fully automated system for multiclass skin lesion localization and classification using deep learning. The main challenge is to address the problem of imbalanced data classes found in the HAM10000, ISBI2018, and ISBI2019 datasets. Initially, we consider a pretrained deep neural network model, DarkNet19, and fine-tune the parameters of the third convolutional layer to generate the image gradients. All the visualized images are fused using a high-frequency approach along with a multilayered feed-forward neural network (HFaFFNN). The resultant image is further enhanced by employing a log-opening based activation function to generate a localized binary image. Later, two pretrained deep models, DarkNet-53 and NasNet-mobile, are employed and fine-tuned according to the selected datasets. The concept of transfer learning is then exploited to train both models, where the input feed is the generated localized lesion images. In the subsequent step, the extracted features are fused using the parallel max entropy correlation (PMEC) technique. To avoid overfitting and to select the most discriminant feature information, we implement a hybrid optimization algorithm called the entropy-kurtosis controlled whale optimization (EKWO) algorithm. The selected features are finally passed to the softmax classifier for the final classification. Three datasets are used in the experimental process, namely HAM10000, ISBI2018, and ISBI2019, achieving accuracies of 95.8%, 97.1%, and 85.35%, respectively.
Since its launch in 2011, the Materials Genome Initiative (MGI) has drawn the attention of researchers from academia, government, and industry worldwide. As one of the three tools of the MGI, the use of materials data has, for the first time, emerged as an extremely significant approach in materials discovery. Data science has been applied in different disciplines as an interdisciplinary field to extract knowledge from data. The concept of materials data science has been utilized to demonstrate its application in materials science. To explore its potential as an active research branch in the big data era, a three-tier system has been put forward to define the infrastructure for the classification, curation, and knowledge extraction of materials data.
This paper investigates the application of machine learning to develop a response model for cardiovascular problems using AdaBoost combined with outlier detection methodologies, namely Z-Score incorporated with Grey Wolf Optimization (GWO), and Interquartile Range (IQR) coupled with Ant Colony Optimization (ACO). Using a performance index, it is shown that, compared with Z-Score and GWO with AdaBoost, the IQR and ACO combination with AdaBoost is less accurate (89.0% vs. 86.0%) and less discriminative (Area Under the Curve (AUC) score of 93.0% vs. 91.0%). The Z-Score and GWO method also outperformed the others in terms of precision, scoring 89.0%, and its recall was found to be satisfactory at 90.0%. Thus, the paper reveals the specific benefits and drawbacks associated with different outlier detection and feature selection techniques, which are important to consider in further improving diagnostics in cardiovascular health. Collectively, these findings can enhance knowledge of heart disease prediction and patient treatment using enhanced and innovative machine learning (ML) techniques. This work lays the groundwork for more precise diagnosis models by highlighting the benefits of combining multiple optimization methodologies. Future studies should focus on maximizing patient outcomes and model efficacy through research on these combinations.
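Of the two outlier detection methodologies compared, the IQR rule is the simpler to state: points outside Tukey's fences [Q1 − 1.5·IQR, Q3 + 1.5·IQR] are flagged. A standard-library sketch with illustrative data (the paper's coupling with ACO feature selection is not shown):

```python
from statistics import quantiles

def iqr_bounds(data, k=1.5):
    """Tukey's fences: [Q1 - k*IQR, Q3 + k*IQR], with IQR = Q3 - Q1."""
    q1, _, q3 = quantiles(data, n=4, method="inclusive")
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def remove_outliers(data, k=1.5):
    """Keep only the points inside the fences."""
    lo, hi = iqr_bounds(data, k)
    return [x for x in data if lo <= x <= hi]

# hypothetical systolic blood-pressure readings with two implausible values
readings = [120, 125, 118, 122, 119, 121, 310, 117, 123, 45]
clean = remove_outliers(readings)   # 310 and 45 fall outside the fences
```

Because the fences are built from quartiles rather than the mean and standard deviation, the rule is robust to the very outliers it is trying to find, which is one reason it is a common alternative to the Z-Score rule.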
A Brain Tumor (BT) is created by an uncontrollable rise of anomalous cells in brain tissue, and tumors are of two types: malignant and benign. A benign BT does not affect the neighbouring healthy and normal tissue; however, a malignant tumor can affect the adjacent brain tissues, which may result in death. Initial recognition of a BT is highly significant to protecting the patient's life. Generally, a BT can be identified through the magnetic resonance imaging (MRI) scanning technique. But radiotherapists cannot achieve effective tumor segmentation in MRI images because of the position and irregular shape of the tumor in the brain. Recently, machine learning (ML) has prevailed against standard image processing techniques, and several studies denote the superiority of ML over those techniques. Therefore, this study develops a novel brain tumor detection and classification model using metaheuristic optimization with machine learning (BTDC-MOML). To accomplish brain tumor detection effectively, a Computer-Aided Design (CAD) model using ML techniques is proposed in this research manuscript. Initially, input image preprocessing is performed using Gabor filtering (GF) based noise removal, contrast enhancement, and skull stripping. Next, mayfly optimization with Kapur's thresholding based segmentation takes place. For feature extraction purposes, local diagonal extreme patterns (LDEP) are exploited. At last, the Extreme Gradient Boosting (XGBoost) model is used for the BT classification process. Learning accuracy and validation accuracy are analyzed to determine the efficiency of the proposed work. The experimental validation of the proposed model demonstrates its promising performance over other existing methods.
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration is dependent on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multimodal datasets, with six prior models that achieved good action classification performance, including I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value compared with the other approaches. Moreover, the multimodal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multimodal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Traditional data-driven fault diagnosis methods depend on expert experience to manually extract effective fault features from signals, which has certain limitations. Conversely, deep learning techniques have gained prominence as a central focus of research in the field of fault diagnosis owing to their strong fault feature extraction ability and end-to-end fault diagnosis efficiency. Recently, exploiting the respective advantages of the convolutional neural network (CNN) and the Transformer in local and global feature extraction, research on combining the two has demonstrated promise in the field of fault diagnosis. However, the cross-channel convolution mechanism in the CNN and the self-attention calculations in the Transformer contribute to excessive complexity in the cooperative model. This complexity results in high computational costs and limited industrial applicability. To tackle these challenges, this paper proposes a lightweight CNN-Transformer named SEFormer for rotating machinery fault diagnosis. First, a separable multiscale depthwise convolution block is designed to extract and integrate multiscale feature information from different channel dimensions of vibration signals. Then, an efficient self-attention block is developed to capture critical fine-grained features of the signal from a global perspective. Finally, experimental results on a planetary gearbox dataset and a motor roller bearing dataset prove that the proposed framework balances robustness, generalization, and lightweight design compared with recent state-of-the-art fault diagnosis models based on the CNN and Transformer. This study presents a feasible strategy for developing a lightweight rotating machinery fault diagnosis framework aimed at economical deployment.
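The parameter savings that motivate a separable depthwise convolution block can be seen from a simple count: a standard convolution mixes channels and positions in one step, whereas a depthwise convolution followed by a 1×1 pointwise convolution factorizes the two. A small sketch (bias terms omitted; the layer sizes are illustrative, not SEFormer's actual configuration):

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard 1-D convolution: every output channel filters
    every input channel across the k-tap kernel window (bias omitted)."""
    return c_in * c_out * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise separable factorization: one k-tap filter per input channel,
    then a 1x1 pointwise convolution to mix channels (bias omitted)."""
    return c_in * k + c_in * c_out

std = conv_params(64, 64, 9)                   # 36864 weights
sep = depthwise_separable_params(64, 64, 9)    # 4672 weights
print(f"parameter reduction: {std / sep:.1f}x")
```

For this 64-channel, 9-tap example the factorized form uses roughly 7.9 times fewer weights, which is the kind of saving that makes such blocks attractive for the economical deployment the paper targets.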
Computer vision (CV) was developed for computers and other systems to act or make recommendations based on visual inputs, such as digital photos, movies, and other media. Deep learning (DL) methods are more successful than other traditional machine learning (ML) methods in CV. DL techniques can produce state-of-the-art results for difficult CV problems like picture categorization, object detection, and face recognition. In this review, a structured discussion on the history, methods, and applications of DL methods for CV problems is presented. The sector-wise presentation of applications in this paper may be particularly useful for researchers in niche fields who have limited or introductory knowledge of DL methods and CV. This review will provide readers with context and examples of how these techniques can be applied to specific areas. A curated list of popular datasets and a brief description of each are also included for the benefit of readers.
Mortar pumpability is essential in the construction industry, yet estimating it manually requires much labor and often causes material waste. This paper proposes an effective method that combines a 3-dimensional convolutional neural network (3D CNN) with a 2-dimensional convolutional long short-term memory network (ConvLSTM2D) to automatically classify mortar pumpability. Experimental results on a dataset organized by collecting the corresponding mortar image sequences show that the proposed model achieves an accuracy rate of 100% with a fast convergence speed. This work demonstrates the feasibility of using computer vision and deep learning for mortar pumpability classification.
This paper illustrates some exploration and innovation in software engineering education for Very Small Entities (VSEs) under the background of China's "double first-class" initiative and the new engineering discipline, including academic strategy, curriculum system, ability training, teaching methods, and project practice. Based on the actual situation and characteristics of Hunan University, this paper focuses on undergraduate education practice, so that students can adapt to software engineering development in VSEs with the ISO/IEC 29110 series of standards and guides.
Automated segmentation of white matter (WM) and gray matter (GM) is a very important task for detecting multiple diseases. This paper proposes a simple method for WM and GM extraction from magnetic resonance imaging (MRI) of the brain. The proposed method, based on binarization, wavelet decomposition, and convex hull computation, produces very effective results both on visual inspection and quantitatively. It was tested on three different types of brain MRI (transverse, sagittal, and coronal), and the experimental validation indicates accurate detection and segmentation of the structures or regions of interest.
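The binarization step in such a pipeline is typically driven by an automatic threshold. The abstract does not name its thresholding rule, so as a generic illustration, here is Otsu's classic method, which picks the gray level maximizing between-class variance, in pure Python on a toy bimodal "image":

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the threshold t that maximizes the between-class
    variance w_bg * w_fg * (mean_bg - mean_fg)^2 over the intensity histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, w_bg, best_t, best_var = 0.0, 0, 0, -1.0
    for t in range(levels):
        w_bg += hist[t]               # background = intensities <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# toy "MRI slice": dark background around 30, bright tissue around 200
pixels = [30, 32, 28, 31, 29, 33, 198, 202, 200, 199, 201, 203]
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]   # foreground mask
```

On a real slice the same histogram scan runs over all voxel intensities; the resulting mask is then what wavelet and convex-hull post-processing would refine.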
The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These challenges include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy within the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. Recognizing Medical-Related Human Activities (MRHA) is pivotal for healthcare systems, particularly for identifying actions critical to patient well-being. However, challenges such as high computational demands, low accuracy, and limited adaptability persist in Human Motion Recognition (HMR). While some studies have integrated HMR with IoT for real-time healthcare applications, limited research has focused on recognizing MRHA as essential for effective patient monitoring. This study proposes a novel HMR method tailored for MRHA detection, leveraging multi-stage deep learning techniques integrated with IoT. The approach employs EfficientNet to extract optimized spatial features from skeleton frame sequences using seven Mobile Inverted Bottleneck Convolution (MBConv) blocks, followed by Convolutional Long Short-Term Memory (ConvLSTM) to capture spatio-temporal patterns. A classification module with global average pooling, a fully connected layer, and a dropout layer generates the final predictions. The model is evaluated on the NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA such as sneezing, falling, walking, and sitting. It achieves 94.85% accuracy for cross-subject evaluations and 96.45% for cross-view evaluations on NTU RGB+D 120, along with 89.22% accuracy on HMDB51. Additionally, the system integrates IoT capabilities using a Raspberry Pi and a GSM module, delivering real-time alerts via Twilio's SMS service to caregivers and patients. This scalable and efficient solution bridges the gap between HMR and IoT, advancing patient monitoring, improving healthcare outcomes, and reducing costs.
The Internet of Things (IoT) has orchestrated various domains in numerous applications, contributing significantly to the growth of the smart world, even in regions with low literacy rates, and boosting socio-economic development. The IoT revolution is advancing across industries, but harsh geometric environments, including open-pit mines, pose unique challenges for reliable communication. The advent of IoT in the mining industry has significantly improved communication for critical operations through the use of Radio Frequency (RF) protocols such as Bluetooth, Wi-Fi, GSM/GPRS, Narrow Band (NB)-IoT, SigFox, ZigBee, and Long Range Wide Area Network (LoRaWAN). This study addresses the optimization of network implementations by comparing two leading license-free IoT-based RF protocols, ZigBee and LoRaWAN, providing valuable insights into optimizing wireless communication and paving the way for a more connected and productive future in the mining industry. Intensive field tests are conducted in various opencast mines to investigate coverage potential and signal attenuation. ZigBee is tested in the Tadicherla open-cast coal mine in India. Similarly, LoRaWAN field tests are conducted at one of the Associated Cement Companies (ACC) limestone mines in Bargarh, India, covering both Indoor-to-Outdoor (I2O) and Outdoor-to-Outdoor (O2O) environments. A robust framework of path-loss models (Free space, Egli, Okumura-Hata, Cost231-Hata, and Ericsson), combined with key performance metrics, is employed to evaluate the patterns of signal attenuation. Extensive field testing and careful data analysis revealed that the Egli model is the most consistent path-loss model for the ZigBee protocol in an I2O environment, with a coefficient of determination (R²) of 0.907 and balanced error metrics: Normalized Root Mean Square Error (NRMSE) of 0.030, Mean Square Error (MSE) of 4.950, Mean Absolute Percentage Error (MAPE) of 0.249, and Scatter Index (SI) of 2.723. In the O2O scenario, the Ericsson model showed superior performance, with the highest R² value of 0.959, supported by strong correlation metrics: NRMSE of 0.026, MSE of 8.685, MAPE of 0.685, Mean Absolute Deviation (MAD) of 20.839, and SI of 2.194. For the LoRaWAN protocol, the Cost231-Hata model achieved the highest R² value of 0.921 in the I2O scenario, complemented by the lowest error metrics: NRMSE of 0.018, MSE of 1.324, MAPE of 0.217, MAD of 9.218, and SI of 1.238. In the O2O environment, the Okumura-Hata model achieved the highest R² value of 0.978, indicating a strong fit, with metrics NRMSE of 0.047, MSE of 27.807, MAPE of 27.494, MAD of 37.287, and SI of 3.927. This advancement in reliable communication networks promises to transform the opencast landscape into a well-connected environment despite severe signal attenuation. These results support decision-making for mining needs and ensure reliable communications even in the face of formidable obstacles.
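Two of the candidate path-loss models have compact closed forms. Below is a hedged sketch of the free-space model and the standard urban Okumura-Hata formula using the published constants (note that Okumura-Hata is nominally specified for roughly 150-1500 MHz, so applying it at ZigBee's 2.4 GHz, as the study does, is an empirical extrapolation), together with the R² statistic used to rank the models:

```python
import math

def free_space_loss(d_km: float, f_mhz: float) -> float:
    """Free-space path loss in dB: 32.44 + 20*log10(d_km) + 20*log10(f_MHz)."""
    return 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

def okumura_hata_urban(d_km: float, f_mhz: float,
                       h_base: float = 30.0, h_mobile: float = 1.5) -> float:
    """Okumura-Hata urban model (small/medium city mobile-antenna correction);
    antenna heights here are illustrative defaults, not the study's setup."""
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_base)
            - a_hm + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km))

def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

loss = free_space_loss(1.0, 900.0)   # about 91.5 dB at 1 km, 900 MHz
```

Fitting each model's predictions against measured attenuation and comparing R² (alongside NRMSE, MSE, MAPE, MAD, and SI) is what lets the study declare a best model per protocol and per environment.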
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2025R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Blockchain Technology (BT) has emerged as a transformative solution for improving the efficacy, security, and transparency of supply chain intelligence. Traditional Supply Chain Management (SCM) systems frequently suffer from problems such as data silos, a lack of real-time visibility, fraudulent activity, and inefficiencies in tracking and traceability. Blockchain's decentralized and immutable ledger offers a solid foundation for addressing these issues; it facilitates trust, security, and real-time data sharing among all parties involved. Through an examination of critical technologies, methodologies, and applications, this paper delves deeply into computer-modeling-based blockchain frameworks for supply chain intelligence. The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field. As part of the process, we reviewed the research on blockchain-based supply chain models, smart contracts, Decentralized Applications (DApps), and how they connect to other cutting-edge innovations such as Artificial Intelligence (AI) and the Internet of Things (IoT). To quantify blockchain's performance, the study introduces analytical models for efficiency improvement, security enhancement, and scalability, enabling computational assessment and simulation of supply chain scenarios. These models provide a structured approach to predicting system performance under varying parameters. According to the results, BT increases efficiency by automating transactions using smart contracts, increases security through cryptographic techniques, and improves supply chain transparency by providing immutable records. Regulatory concerns, interoperability challenges, and scalability all work against broad adoption. To fully automate and intelligently integrate blockchain with AI and the IoT, additional research is needed to address blockchain's current limitations and realize its potential for supply chain intelligence.
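The immutability that makes blockchain records tamper-evident comes from hash-chaining: each block embeds the hash of its predecessor, so editing any past entry invalidates every subsequent hash. The sketch below is a minimal, illustrative supply chain ledger in Python; it is not the paper's framework, and all record fields and function names are hypothetical.

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    # Hash the block's canonical JSON form (which includes the previous
    # block's hash), so any retroactive edit breaks every later block.
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "record": record, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def verify_chain(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
append_block(ledger, {"sku": "A-100", "event": "shipped"})
append_block(ledger, {"sku": "A-100", "event": "received"})
assert verify_chain(ledger)

ledger[0]["record"]["event"] = "lost"   # tampering is detected
assert not verify_chain(ledger)
```

A production system would add consensus, digital signatures, and smart-contract logic on top of this chaining primitive; the sketch only demonstrates why records, once appended, cannot be silently altered.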
Abstract: Cyber-Physical Systems (CPS) represent an integration of computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, the Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions for developing comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the era of digital transformation.
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-DDRSP2501).
Abstract: This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We meticulously develop the TIHTBXII's mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the bias and RMSE of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests. This application highlights how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number (ASN) while effectively managing consumer and producer risks. Empirical validation using real-world datasets, including "Active Repair Duration," "Groundwater Contaminant Measurements," and "Dominica COVID-19 Mortality," further demonstrates the TIHTBXII's superior fit compared to existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, leading to more informed and robust decision-making.
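The bias/RMSE behaviour the simulation reports (estimation error shrinking as the sample size grows) can be reproduced with a generic Monte Carlo loop. The sketch below uses an exponential distribution and its MLE (the reciprocal of the sample mean) as a simple stand-in, since the TIHTBXII density is not given here; the rate, replication count, and seed are illustrative.

```python
import random
import statistics

def mc_bias_rmse(n: int, reps: int = 2000, rate: float = 2.0, seed: int = 7):
    # Draw `reps` samples of size n from Exp(rate), estimate the rate by
    # maximum likelihood (1 / sample mean), and report the empirical
    # bias and RMSE of the estimator.
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        sample = [rng.expovariate(rate) for _ in range(n)]
        estimates.append(1.0 / statistics.fmean(sample))
    bias = statistics.fmean(estimates) - rate
    rmse = statistics.fmean([(e - rate) ** 2 for e in estimates]) ** 0.5
    return bias, rmse

# Both bias and RMSE should shrink as the sample size grows.
for n in (20, 80, 320):
    bias, rmse = mc_bias_rmse(n)
    print(f"n={n:4d}  bias={bias:+.4f}  rmse={rmse:.4f}")
```

The same loop structure applies to MPS, LS, or WLS estimators: only the line that turns a sample into an estimate changes.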
Funding: Partially supported by MRC (MC_PC_17171), Royal Society (RP202G0230), BHF (AA/18/3/34220), Hope Foundation for Cancer Research (RM60G0680), GCRF (20P2PF11), Sino-UK Industrial Fund (RP202G0289), LIAS (20P2ED10, 20P2RE969), Data Science Enhancement Fund (20P2RE237), Fight for Sight (24NN201), Sino-UK Education Fund (OP202006), and BBSRC (RM32G0178B8).
Abstract: The Bat algorithm, a metaheuristic optimization technique inspired by the foraging behaviour of bats, has been employed to tackle optimization problems. Known for its ease of implementation, parameter tunability, and strong global search capabilities, the algorithm finds application across diverse optimization problem domains. However, in the face of increasingly complex optimization challenges, the Bat algorithm encounters certain limitations, such as slow convergence and sensitivity to initial solutions. To tackle these challenges, the present study incorporates a range of optimization components into the Bat algorithm, thereby proposing a variant called PKEBA. A projection screening strategy is implemented to mitigate sensitivity to initial solutions, thereby enhancing the quality of the initial solution set. A kinetic adaptation strategy reforms exploration patterns, while an elite communication strategy enhances group interaction, keeping the algorithm from stagnating in local optima. Subsequently, the effectiveness of the proposed PKEBA is rigorously evaluated. Testing encompasses 30 benchmark functions from IEEE CEC2014, featuring ablation experiments and comparative assessments against classical algorithms and their variants. Moreover, real-world engineering problems are employed as further validation. The results conclusively demonstrate that PKEBA exhibits superior convergence and precision compared to existing algorithms.
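For readers unfamiliar with the baseline being improved, the standard Bat algorithm assigns each bat a position, velocity, frequency, loudness A, and pulse-emission rate r; velocities are pulled toward the current best, occasional local random walks exploit the best solution, and A and r adapt on acceptance. The sketch below is Yang's original scheme on a sphere test function, not PKEBA; all parameter values are illustrative.

```python
import math
import random

def bat_algorithm(obj, dim, n_bats=20, iters=200, seed=1,
                  fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9, bound=5.0):
    # Basic Bat algorithm: frequency-tuned velocities, local walks around
    # the best bat, and loudness/pulse-rate adaptation on acceptance.
    rng = random.Random(seed)
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats
    rate = [0.5] * n_bats
    fit = [obj(p) for p in pos]
    b = min(range(n_bats), key=lambda i: fit[i])
    best_pos, best_fit = pos[b][:], fit[b]
    for t in range(iters):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()
            vel[i] = [v + (x - g) * f for v, x, g in zip(vel[i], pos[i], best_pos)]
            cand = [max(-bound, min(bound, x + v)) for x, v in zip(pos[i], vel[i])]
            if rng.random() > rate[i]:
                # local random walk around the current best solution
                avg_loud = sum(loud) / n_bats
                cand = [max(-bound, min(bound, g + 0.1 * avg_loud * rng.gauss(0, 1)))
                        for g in best_pos]
            cf = obj(cand)
            if cf <= fit[i] and rng.random() < loud[i]:
                pos[i], fit[i] = cand, cf
                loud[i] *= alpha                       # quieter when improving
                rate[i] = 0.5 * (1 - math.exp(-gamma * (t + 1)))
            if cf < best_fit:
                best_pos, best_fit = cand[:], cf
    return best_pos, best_fit

sphere = lambda x: sum(v * v for v in x)
_, val = bat_algorithm(sphere, dim=5)
```

PKEBA's contributions sit on top of this loop: projection screening replaces the uniform initialization, kinetic adaptation modifies the velocity update, and elite communication changes how bats share the best position.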
Funding: Project (N-12-NM-LU01-C01) supported by the Construction of NTIS (National Science & Technology Information Service) Program, funded by the National Science & Technology Commission (NSTC), Korea.
Abstract: This work aims to implement expert and collaborative group recommendation services through an analysis of expertise and network relations in NTIS. First, an expertise database was constructed by extracting keywords after indexing national R&D information in Korea (human resources, projects, and outcomes) and applying an expertise calculation algorithm. Weight values were selected in consideration of the characteristics of national R&D information, and expertise scores were then calculated by applying the weighted values. In addition, joint research and collaborative relations were presented in a knowledge-map format through network analysis of national R&D information.
Abstract: Presently, precision agriculture tasks such as plant disease detection, crop yield prediction, species recognition, weed detection, and irrigation can be accomplished through computer vision (CV) approaches. Weeds play a vital role in influencing crop productivity, and full-coverage chemical herbicide spraying increases waste and pollutes the farmland's natural environment. Since the proper identification of weeds among crops helps to reduce herbicide usage and improve productivity, this study presents a novel computer vision and deep learning based weed detection and classification (CVDL-WDC) model for precision agriculture. The proposed CVDL-WDC technique intends to properly discriminate between plants and weeds. It involves two processes, namely multiscale Faster RCNN based object detection and optimal extreme learning machine (ELM) based weed classification. The parameters of the ELM model are optimally adjusted using the farmland fertility optimization (FFO) algorithm. A comprehensive simulation analysis of the CVDL-WDC technique on a benchmark dataset reported enhanced outcomes over recent approaches in terms of several measures.
Abstract: Depression is a serious medical condition and a leading cause of disability worldwide. Current depression diagnostics and assessment have significant limitations due to the heterogeneity of clinical presentations, a lack of objective assessments, and assessments that rely on patients' perceptions, memory, and recall. Digital phenotyping (DP), especially assessment conducted using mobile health technologies, has the potential to greatly improve the accuracy of depression diagnostics by generating objectively measurable endophenotypes. DP includes two primary sources of digital data generated using ecological momentary assessments (EMA), assessments conducted in real time in subjects' natural environment. This includes active EMA, data that require active input by the subject, and passive EMA or passive sensing, data passively and automatically collected from subjects' personal digital devices. The raw data are then analyzed using machine learning algorithms to identify behavioral patterns that correlate with patients' clinical status. Preliminary investigations have also shown that linguistic and behavioral clues from social media data, and data extracted from electronic medical records, can be used to predict depression status. These other sources of data and recent advances in telepsychiatry can further enhance DP of depressed patients. The success of DP endeavors depends on critical contributions from both the psychiatric and engineering disciplines. The current review integrates important perspectives from both disciplines and discusses parameters for successful interdisciplinary collaboration. A clinically relevant model for incorporating DP in the clinical setting is presented. This model, based on investigations conducted by our group, delineates the development of a depression prediction system and its integration in the clinical setting to enhance depression diagnostics and inform the clinical decision-making process. Benefits, challenges, and opportunities pertaining to the clinical integration of DP for depression diagnostics are discussed from interdisciplinary perspectives.
Funding: Supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korean Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.
Abstract: In this work, we propose a new, fully automated system for multiclass skin lesion localization and classification using deep learning. The main challenge is to address the problem of imbalanced data classes found in the HAM10000, ISBI2018, and ISBI2019 datasets. Initially, we consider a pretrained deep neural network model, DarkNet19, and fine-tune the parameters of the third convolutional layer to generate the image gradients. All the visualized images are fused using a high-frequency approach along with a multilayered feed-forward neural network (HFaFFNN). The resultant image is further enhanced by employing a log-opening based activation function to generate a localized binary image. Later, two pretrained deep models, DarkNet-53 and NasNet-mobile, are employed and fine-tuned according to the selected datasets. The concept of transfer learning is then explored to train both models, where the input feed is the generated localized lesion images. In the subsequent step, the extracted features are fused using the parallel max entropy correlation (PMEC) technique. To avoid overfitting and to select the most discriminant feature information, we implement a hybrid optimization algorithm called the entropy-kurtosis controlled whale optimization (EKWO) algorithm. The selected features are finally passed to the softmax classifier for the final classification. Three datasets are used for the experimental process, HAM10000, ISBI2018, and ISBI2019, achieving accuracies of 95.8%, 97.1%, and 85.35%, respectively.
Funding: Project supported by the National Key R&D Program of China (Grant No. 2016YFB0700503), the National High Technology Research and Development Program of China (Grant No. 2015AA03420), the Beijing Municipal Science and Technology Project, China (Grant No. D161100002416001), the National Natural Science Foundation of China (Grant No. 51172018), and Kennametal Inc.
文摘Since its launch in 2011, the Materials Genome Initiative(MGI) has drawn the attention of researchers from academia,government, and industry worldwide. As one of the three tools of the MGI, the use of materials data, for the first time, has emerged as an extremely significant approach in materials discovery. Data science has been applied in different disciplines as an interdisciplinary field to extract knowledge from data. The concept of materials data science has been utilized to demonstrate its application in materials science. To explore its potential as an active research branch in the big data era, a three-tier system has been put forward to define the infrastructure for the classification, curation and knowledge extraction of materials data.
Abstract: This paper investigates the application of machine learning to develop a response model for cardiovascular problems, using AdaBoost combined with two outlier detection methodologies: Z-Score incorporated with Grey Wolf Optimization (GWO), and Interquartile Range (IQR) coupled with Ant Colony Optimization (ACO). Using a performance index, it is shown that, compared with Z-Score and GWO with AdaBoost, IQR and ACO with AdaBoost are less accurate (86.0% vs. 89.0%) and less discriminative (Area Under the Curve (AUC) score of 91.0% vs. 93.0%). The Z-Score and GWO methods also outperformed the others in terms of precision, scoring 89.0%, and their recall was satisfactory at 90.0%. The paper thus reveals specific benefits and drawbacks associated with different outlier detection and feature selection techniques, which are important to consider in further improving cardiovascular diagnostics. Collectively, these findings can enhance knowledge of heart disease prediction and patient treatment through enhanced and innovative machine learning (ML) techniques. This work lays the groundwork for more precise diagnostic models by highlighting the benefits of combining multiple optimization methodologies. Future studies should focus on maximizing patient outcomes and model efficacy through research on these combinations.
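Of the two outlier-detection methods compared, the IQR rule is the simpler to illustrate: a point is flagged when it falls outside [Q1 − 1.5·IQR, Q3 + 1.5·IQR]. The sketch below shows only this filtering step; the data and variable names are invented, and the paper's pipeline additionally couples the rule with ACO feature selection and AdaBoost classification.

```python
import statistics

def iqr_outlier_mask(values, k: float = 1.5):
    # Flag points outside [Q1 - k*IQR, Q3 + k*IQR] as outliers.
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [not (lo <= v <= hi) for v in values]

# Hypothetical resting heart rates; 190 is a sensor artifact.
heart_rates = [72, 75, 71, 74, 73, 70, 76, 190, 74, 72]
mask = iqr_outlier_mask(heart_rates)
cleaned = [v for v, bad in zip(heart_rates, mask) if not bad]
```

In the study's setting, such cleaning happens before model training so the boosted ensemble is not skewed by measurement artifacts.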
Abstract: A Brain Tumor (BT) is created by an uncontrollable rise of anomalous cells in brain tissue, and such tumors are of two types: malignant and benign. A benign BT does not affect the neighbouring healthy and normal tissue; however, a malignant tumor can affect the adjacent brain tissues, which may result in death. Early recognition of BT is therefore highly significant to protecting the patient's life. Generally, a BT can be identified through the magnetic resonance imaging (MRI) scanning technique, but radiologists struggle to segment tumors effectively in MRI images because of the position and irregular shape of the tumor in the brain. Several studies denote the superiority of machine learning (ML) techniques over standard image processing techniques. Therefore, this study develops a novel brain tumor detection and classification model using metaheuristic optimization with machine learning (BTDC-MOML). To accomplish brain tumor detection effectively, a Computer-Aided Design (CAD) model using an ML technique is proposed in this research manuscript. Initially, input image pre-processing is performed using Gabor filtering (GF) based noise removal, contrast enhancement, and skull stripping. Next, mayfly optimization with Kapur's thresholding based segmentation takes place. For feature extraction purposes, local diagonal extreme patterns (LDEP) are exploited. Finally, the Extreme Gradient Boosting (XGBoost) model is used for the BT classification process. Accuracy analysis is performed in terms of learning accuracy, and validation accuracy is measured to determine the efficiency of the proposed work. The experimental validation of the proposed model demonstrates its promising performance over other existing methods.
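Kapur's thresholding, used in the segmentation step above, selects the gray level that maximizes the summed Shannon entropy of the two classes the threshold induces. Below is a minimal histogram-based sketch; the toy histogram is invented, and the paper pairs this criterion with mayfly optimization rather than the exhaustive search shown here.

```python
import math

def kapur_threshold(hist):
    # Kapur's criterion: choose the threshold t maximizing the sum of the
    # Shannon entropies of the background (<= t) and foreground (> t)
    # gray-level distributions.
    total = sum(hist)
    probs = [h / total for h in hist]
    best_t, best_score = 0, float("-inf")
    for t in range(len(hist) - 1):
        w0 = sum(probs[: t + 1])
        w1 = 1.0 - w0
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(p / w0 * math.log(p / w0) for p in probs[: t + 1] if p > 0)
        h1 = -sum(p / w1 * math.log(p / w1) for p in probs[t + 1:] if p > 0)
        if h0 + h1 > best_score:
            best_score, best_t = h0 + h1, t
    return best_t

# Toy bimodal histogram: dark background peak, valley, bright tumor peak.
hist = [0, 30, 80, 40, 5, 0, 0, 4, 60, 90, 20, 0]
t = kapur_threshold(hist)
```

On a bimodal histogram like this one, the maximizing threshold lands in the valley between the two peaks, which is exactly the behaviour the segmentation stage relies on.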
基金supported by the Ministry of Science and Technology of China,No.2020AAA0109605(to XL)Meizhou Major Scientific and Technological Innovation PlatformsProjects of Guangdong Provincial Science & Technology Plan Projects,No.2019A0102005(to HW).
Abstract: Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we propose a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multimodal datasets, with six prior models that achieved good action classification performance: I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value compared with the other approaches. Moreover, the multimodal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multimodal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Funding: Supported by the National Natural Science Foundation of China (No. 52277055).
Abstract: Traditional data-driven fault diagnosis methods depend on expert experience to manually extract effective fault features from signals, which has certain limitations. Conversely, deep learning techniques have gained prominence as a central focus of fault diagnosis research, owing to their strong fault feature extraction ability and end-to-end diagnosis efficiency. Recently, by exploiting the respective advantages of convolutional neural networks (CNN) and Transformers in local and global feature extraction, research on combining the two has demonstrated promise in the field of fault diagnosis. However, the cross-channel convolution mechanism in CNNs and the self-attention calculations in Transformers contribute to excessive complexity in the cooperative model. This complexity results in high computational costs and limited industrial applicability. To tackle these challenges, this paper proposes a lightweight CNN-Transformer named SEFormer for rotating machinery fault diagnosis. First, a separable multiscale depthwise convolution block is designed to extract and integrate multiscale feature information from different channel dimensions of vibration signals. Then, an efficient self-attention block is developed to capture critical fine-grained features of the signal from a global perspective. Finally, experimental results on a planetary gearbox dataset and a motor roller bearing dataset prove that the proposed framework balances robustness, generalization, and light weight compared with recent state-of-the-art fault diagnosis models based on CNNs and Transformers. This study presents a feasible strategy for developing a lightweight rotating machinery fault diagnosis framework aimed at economical deployment.
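The parameter savings behind depthwise separable convolution, the core of such lightweight designs, are easy to quantify: a standard convolution needs C_in · C_out · k weights, whereas a depthwise filter per channel plus a pointwise (1 × 1) mixing layer needs C_in · k + C_in · C_out. A small 1-D sketch follows; the channel counts and kernel size are illustrative and not taken from the paper.

```python
def depthwise_separable_params(c_in: int, c_out: int, k: int):
    # Standard conv: every output channel mixes every input channel with
    # its own k-tap kernel. Separable: one k-tap filter per input channel,
    # then a 1x1 pointwise layer to mix channels.
    standard = c_in * c_out * k
    separable = c_in * k + c_in * c_out
    return standard, separable

def depthwise_conv1d(x, kernels):
    # x: one list of samples per channel; kernels: one filter per channel.
    # Each channel is convolved independently (valid padding, stride 1).
    out = []
    for ch, w in zip(x, kernels):
        k = len(w)
        out.append([sum(w[j] * ch[i + j] for j in range(k))
                    for i in range(len(ch) - k + 1)])
    return out

std, sep = depthwise_separable_params(c_in=64, c_out=128, k=9)
print(std, sep, round(std / sep, 1))   # 73728 8768 8.4

print(depthwise_conv1d([[1, 2, 3, 4]], [[1, 1]]))   # [[3, 5, 7]]
```

The roughly 8× reduction in weights (and multiply-accumulates) at these illustrative sizes is what makes separable blocks attractive for economical deployment on vibration signals.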
Funding: Supported by Project SP2023/074, Application of Machine and Process Control Advanced Methods, funded by the Ministry of Education, Youth and Sports, Czech Republic.
Abstract: Computer vision (CV) was developed so that computers and other systems can act or make recommendations based on visual inputs, such as digital photos, videos, and other media. Deep learning (DL) methods are more successful than other traditional machine learning (ML) methods in CV. DL techniques can produce state-of-the-art results for difficult CV problems such as image classification, object detection, and face recognition. In this review, a structured discussion of the history, methods, and applications of DL methods for CV problems is presented. The sector-wise presentation of applications in this paper may be particularly useful for researchers in niche fields who have limited or introductory knowledge of DL methods and CV. This review will provide readers with context and examples of how these techniques can be applied to specific areas. A curated list of popular datasets and a brief description of each are also included for the benefit of readers.
Funding: Supported by the Key Project of the National Natural Science Foundation of China-Civil Aviation Joint Fund under Grant No. U2033212.
Abstract: Mortar pumpability is essential in the construction industry; estimating it manually requires much labor and often causes material waste. This paper proposes an effective method combining a 3-dimensional convolutional neural network (3D CNN) with a 2-dimensional convolutional long short-term memory network (ConvLSTM2D) to automatically classify mortar pumpability. Experimental results show that the proposed model achieves an accuracy rate of 100% with fast convergence, based on a dataset organized by collecting the corresponding mortar image sequences. This work demonstrates the feasibility of using computer vision and deep learning for mortar pumpability classification.
Funding: Supported by the Natural Science Foundation of Hunan Province, China (No. 2016JJ2057) and the Science Foundation of the Hunan Provincial Education Department, China (No. 15C0546).
文摘This paper illustrates some exploration and innovation of software engineering education for VSEs under the background of Chinese "double first-class" new situation and new engineering subject, including academic strategy, curriculum system, ability training, teaching methods, project practice, and so on. Based on the actual situations and characteristics of Hunan University, this paper focuses on some undergraduate education practice, so that students can adapt software engineering development in VSEs with ISO/IEC 29110 series of standards and guides.
Abstract: Automated segmentation of white matter (WM) and gray matter (GM) is a very important task for detecting multiple diseases. This paper proposes a simple method for WM and GM extraction from magnetic resonance imaging (MRI) of the brain. The proposed method, based on binarization, wavelet decomposition, and convex hull, produces very effective results both on visual inspection and quantitatively. It was tested on three different types of brain MRI (transverse, sagittal, and coronal), and the experimental validation indicates accurate detection and segmentation of the structures of interest in particular regions of brain MRI.
Funding: Supported by the National Key R&D Program of China under Grant No. 2022YFB3103500, the National Natural Science Foundation of China under Grants No. 62402087 and No. 62020106013, the Sichuan Science and Technology Program under Grant No. 2023ZYD0142, the Chengdu Science and Technology Program under Grant No. 2023-XT00-00002-GX, the Fundamental Research Funds for Chinese Central Universities under Grants No. ZYGX2020ZB027 and No. Y030232063003002, and the Postdoctoral Innovation Talents Support Program under Grant No. BX20230060.
Abstract: The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy within the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
Funding: Funded by the ICT Division of the Ministry of Posts, Telecommunications, and Information Technology of Bangladesh under Grant Number 56.00.0000.052.33.005.21-7 (Tracking No. 22FS15306), with support from the University of Rajshahi.
Abstract: The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. Recognizing Medical-Related Human Activities (MRHA) is pivotal for healthcare systems, particularly for identifying actions critical to patient well-being. However, challenges such as high computational demands, low accuracy, and limited adaptability persist in Human Motion Recognition (HMR). While some studies have integrated HMR with IoT for real-time healthcare applications, limited research has focused on recognizing MRHA as essential for effective patient monitoring. This study proposes a novel HMR method tailored for MRHA detection, leveraging multi-stage deep learning techniques integrated with IoT. The approach employs EfficientNet to extract optimized spatial features from skeleton frame sequences using seven Mobile Inverted Bottleneck Convolution (MBConv) blocks, followed by Convolutional Long Short-Term Memory (ConvLSTM) to capture spatio-temporal patterns. A classification module with global average pooling, a fully connected layer, and a dropout layer generates the final predictions. The model is evaluated on the NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA such as sneezing, falling, walking, and sitting. It achieves 94.85% accuracy for cross-subject evaluations and 96.45% for cross-view evaluations on NTU RGB+D 120, along with 89.22% accuracy on HMDB51. Additionally, the system integrates IoT capabilities using a Raspberry Pi and a GSM module, delivering real-time alerts via Twilio's SMS service to caregivers and patients. This scalable and efficient solution bridges the gap between HMR and IoT, advancing patient monitoring, improving healthcare outcomes, and reducing costs.
Abstract: The Internet of Things (IoT) has orchestrated various domains in numerous applications, contributing significantly to the growth of the smart world, even in regions with low literacy rates, and boosting socio-economic development. This study provides valuable insights into optimizing wireless communication, paving the way for a more connected and productive future in the mining industry. The IoT revolution is advancing across industries, but harsh geometric environments, including open-pit mines, pose unique challenges for reliable communication. The advent of IoT in the mining industry has significantly improved communication for critical operations through the use of Radio Frequency (RF) protocols such as Bluetooth, Wi-Fi, GSM/GPRS, Narrow Band (NB)-IoT, SigFox, ZigBee, and Long Range Wide Area Network (LoRaWAN). This study addresses the optimization of network implementations by comparing two leading IoT-based RF protocols, ZigBee and LoRaWAN. Intensive field tests were conducted in various opencast mines to investigate coverage potential and signal attenuation. ZigBee was tested in the Tadicherla open-cast coal mine in India. Similarly, LoRaWAN field tests were conducted at an Associated Cement Companies (ACC) limestone mine in Bargarh, India, covering both Indoor-to-Outdoor (I2O) and Outdoor-to-Outdoor (O2O) environments. A robust framework of path-loss models (the Free Space, Egli, Okumura-Hata, Cost231-Hata, and Ericsson models), combined with key performance metrics, was employed to evaluate the patterns of signal attenuation. Extensive field testing and careful data analysis revealed that the Egli model is the most consistent path-loss model for the ZigBee protocol in an I2O environment, with a coefficient of determination (R²) of 0.907 and balanced error metrics: Normalized Root Mean Square Error (NRMSE) of 0.030, Mean Square Error (MSE) of 4.950, Mean Absolute Percentage Error (MAPE) of 0.249, and Scatter Index (SI) of 2.723. In the O2O scenario, the Ericsson model showed superior performance, with the highest R² value of 0.959, supported by strong correlation metrics: NRMSE of 0.026, MSE of 8.685, MAPE of 0.685, Mean Absolute Deviation (MAD) of 20.839, and SI of 2.194. For the LoRaWAN protocol, the Cost-231 model achieved the highest R² value of 0.921 in the I2O scenario, complemented by the lowest error metrics: NRMSE of 0.018, MSE of 1.324, MAPE of 0.217, MAD of 9.218, and SI of 1.238. In the O2O environment, the Okumura-Hata model achieved the highest R² value of 0.978, indicating a strong fit, with NRMSE of 0.047, MSE of 27.807, MAPE of 27.494, MAD of 37.287, and SI of 3.927. This advancement in reliable communication promises to bring well-networked operations to the opencast landscape despite severe signal attenuation. These results support decision-making for mining needs and ensure reliable communications even in the face of formidable obstacles.
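The path-loss comparison above rests on a handful of standard formulas and goodness-of-fit metrics. A minimal sketch of the free-space model and the fit metrics is given below; note that NRMSE and SI conventions vary between papers (normalization by range vs. mean, fraction vs. percent), so the exact definitions here are assumptions rather than the study's own, and the sample measurements are invented for illustration.

```python
import math
import statistics

def free_space_path_loss_db(d_km: float, f_mhz: float) -> float:
    # Free-space model: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

def fit_metrics(measured, predicted):
    # Goodness-of-fit metrics commonly used to compare path-loss models.
    n = len(measured)
    mean_m = statistics.fmean(measured)
    mse = sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n
    rmse = mse ** 0.5
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    return {
        "R2": 1 - (mse * n) / ss_tot,
        "MSE": mse,
        "NRMSE": rmse / (max(measured) - min(measured)),  # range-normalized
        "MAPE": 100 / n * sum(abs((m - p) / m) for m, p in zip(measured, predicted)),
        "MAD": sum(abs(m - p) for m, p in zip(measured, predicted)) / n,
        "SI": 100 * rmse / mean_m,                        # scatter index (%)
    }

# Illustrative measured losses (dB) at three distances, 868 MHz carrier.
measured = [92.0, 101.5, 107.2]
distances = [0.2, 0.5, 1.0]   # km
predicted = [free_space_path_loss_db(d, 868.0) for d in distances]
print(fit_metrics(measured, predicted))
```

Swapping in the Egli, Okumura-Hata, Cost231-Hata, or Ericsson predictions in place of `predicted` and comparing the resulting metric dictionaries is exactly the model-selection exercise the study performs for each protocol and environment.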