Deep-time Earth research plays a pivotal role in deciphering the rates, patterns, and mechanisms of Earth's evolutionary processes throughout geological history, providing essential scientific foundations for climate prediction, natural resource exploration, and sustainable planetary stewardship. To advance Deep-time Earth research in the era of big data and artificial intelligence, the International Union of Geological Sciences initiated the "Deep-time Digital Earth International Big Science Program" (DDE) in 2019. At the core of this ambitious program lies the development of geoscience knowledge graphs, serving as a transformative knowledge infrastructure that enables the integration, sharing, mining, and analysis of heterogeneous geoscience big data. The DDE knowledge graph initiative has made significant strides in three critical dimensions: (1) establishing a unified knowledge structure across geoscience disciplines that ensures consistent representation of geological entities and their interrelationships through standardized ontologies and semantic frameworks; (2) developing a robust and scalable software infrastructure capable of supporting both expert-driven and machine-assisted knowledge engineering for large-scale graph construction and management; (3) implementing a comprehensive three-tiered architecture encompassing basic, discipline-specific, and application-oriented knowledge graphs, spanning approximately 20 geoscience disciplines. Through its open knowledge framework and international collaborative network, this initiative has fostered multinational research collaborations, establishing a robust foundation for next-generation geoscience research while propelling the discipline toward FAIR (Findable, Accessible, Interoperable, Reusable) data practices in deep-time Earth systems research.
Brain-computer interfaces (BCIs) represent an emerging technology that facilitates direct communication between the brain and external devices. In recent years, numerous review articles have explored various aspects of BCIs, including their fundamental principles, technical advancements, and applications in specific domains. However, these reviews often focus on signal processing, hardware development, or limited applications such as motor rehabilitation or communication. This paper aims to offer a comprehensive review of recent electroencephalogram (EEG)-based BCI applications in the medical field across 8 critical areas, encompassing rehabilitation, daily communication, epilepsy, cerebral resuscitation, sleep, neurodegenerative diseases, anesthesiology, and emotion recognition. Moreover, the current challenges and future trends of BCIs are also discussed, including personal privacy and ethical concerns, network security vulnerabilities, safety issues, and biocompatibility.
Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modifications to the input data. In this survey, we focus on (1) adversarial attack algorithms to generate adversarial examples, (2) adversarial defense techniques to secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including the theoretical explanations, trade-off issues, and benign attacks in adversarial examples. Additionally, we draw a brief comparison between recently published surveys on adversarial examples, and identify future directions for the research of adversarial examples, such as the generalization of methods and the understanding of transferability, that might be solutions to the open problems in this field.
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration is dependent on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multi-modal datasets, with six prior models that achieved good action classification performance, including I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value compared with the other approaches. Moreover, the multi-modal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches led to the use of artificial intelligence (AI) techniques (such as machine learning (ML) and deep learning (DL)) to build more efficient and reliable intrusion detection systems (IDSs). However, the advent of larger IDS datasets has negatively impacted the performance and computational complexity of AI-based IDSs. Many researchers have used data preprocessing techniques such as feature selection and normalization to overcome such issues. While most of these researchers reported the success of these preprocessing techniques at a shallow level, very few studies have examined their effects on a wider scale. Furthermore, the performance of an IDS model is subject not only to the utilized preprocessing techniques but also to the dataset and the ML/DL algorithm used, which most of the existing studies place little emphasis on. Thus, this study provides an in-depth analysis of feature selection and normalization effects on IDS models built using three IDS datasets, NSL-KDD, UNSW-NB15, and CSE–CIC–IDS2018, and various AI algorithms. A wrapper-based approach, which tends to give superior performance, and min-max normalization were used for feature selection and normalization, respectively. Numerous IDS models were implemented using the full and feature-selected copies of the datasets, with and without normalization. The models were evaluated using popular evaluation metrics in IDS modeling, and intra- and inter-model comparisons were performed, both between the models and against state-of-the-art works. Random forest (RF) models performed better on the NSL-KDD and UNSW-NB15 datasets, with accuracies of 99.86% and 96.01%, respectively, whereas an artificial neural network (ANN) achieved the best accuracy of 95.43% on the CSE–CIC–IDS2018 dataset. The RF models also achieved excellent performance compared to recent works. The results show that normalization and feature selection positively affect IDS modeling. Furthermore, while feature selection benefits simpler algorithms (such as RF), normalization is more useful for complex algorithms like ANNs and deep neural networks (DNNs), and algorithms such as Naive Bayes are unsuitable for IDS modeling. The study also found that the UNSW-NB15 and CSE–CIC–IDS2018 datasets are more complex and more suitable for building and evaluating modern-day IDSs than the NSL-KDD dataset. Our findings suggest that prioritizing robust algorithms like RF, alongside complex models such as ANN and DNN, can significantly enhance IDS performance. These insights provide valuable guidance for managers to develop more effective security measures by focusing on high detection rates and low false alert rates.
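As a side note on the preprocessing step discussed above, min-max normalization rescales each feature into a fixed range so that no single large-valued feature dominates training. A minimal pure-Python sketch of the general technique (not the study's exact pipeline; the toy "durations" feature is invented for illustration):

```python
def min_max_normalize(column, new_min=0.0, new_max=1.0):
    """Scale a list of numeric feature values into [new_min, new_max]."""
    lo, hi = min(column), max(column)
    if hi == lo:  # constant feature: map every value to new_min
        return [new_min] * len(column)
    scale = (new_max - new_min) / (hi - lo)
    return [new_min + (x - lo) * scale for x in column]

# Toy flow-duration feature, loosely in the spirit of an IDS dataset column
durations = [2.0, 5.0, 3.0, 10.0]
print(min_max_normalize(durations))  # [0.0, 0.375, 0.125, 1.0]
```

In a real pipeline the (min, max) pair is computed on the training split only and reused for the test split, to avoid leaking test statistics into training.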
The Internet of Things (IoT) has orchestrated various domains in numerous applications, contributing significantly to the growth of the smart world, even in regions with low literacy rates, boosting socio-economic development. This study provides valuable insights into optimizing wireless communication, paving the way for a more connected and productive future in the mining industry. The IoT revolution is advancing across industries, but harsh geometric environments, including open-pit mines, pose unique challenges for reliable communication. The advent of IoT in the mining industry has significantly improved communication for critical operations through the use of Radio Frequency (RF) protocols such as Bluetooth, Wi-Fi, GSM/GPRS, Narrow Band (NB)-IoT, SigFox, ZigBee, and Long Range Wireless Area Network (LoRaWAN). This study addresses the optimization of network implementations by comparing two leading free-spreading IoT-based RF protocols, ZigBee and LoRaWAN. Intensive field tests were conducted in various opencast mines to investigate coverage potential and signal attenuation. ZigBee was tested in the Tadicherla open-cast coal mine in India. Similarly, LoRaWAN field tests were conducted at one of the Associated Cement Companies (ACC) limestone mines in Bargarh, India, covering both Indoor-to-Outdoor (I2O) and Outdoor-to-Outdoor (O2O) environments. A robust framework of path-loss models, namely the Free-space, Egli, Okumura-Hata, Cost231-Hata, and Ericsson models, combined with key performance metrics, was employed to evaluate the patterns of signal attenuation. Extensive field testing and careful data analysis revealed that the Egli model is the most consistent path-loss model for the ZigBee protocol in an I2O environment, with a coefficient of determination (R²) of 0.907 and balanced error metrics: Normalized Root Mean Square Error (NRMSE) of 0.030, Mean Square Error (MSE) of 4.950, Mean Absolute Percentage Error (MAPE) of 0.249, and Scatter Index (SI) of 2.723. In the O2O scenario, the Ericsson model showed superior performance, with the highest R² value of 0.959, supported by strong correlation metrics: NRMSE of 0.026, MSE of 8.685, MAPE of 0.685, Mean Absolute Deviation (MAD) of 20.839, and SI of 2.194. For the LoRaWAN protocol, the Cost231-Hata model achieved the highest R² value of 0.921 in the I2O scenario, complemented by the lowest error metrics: NRMSE of 0.018, MSE of 1.324, MAPE of 0.217, MAD of 9.218, and SI of 1.238. In the O2O environment, the Okumura-Hata model achieved the highest R² value of 0.978, indicating a strong fit, with metrics NRMSE of 0.047, MSE of 27.807, MAPE of 27.494, MAD of 37.287, and SI of 3.927. This advancement in reliable communication networks promises to transform the opencast landscape into a well-connected environment despite severe signal attenuation. These results support decision-making for mining needs and ensure reliable communications even in the face of formidable obstacles.
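For context on the path-loss modeling above, the simplest member of the model family has a standard closed form: free-space path loss FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.44. A short sketch computing it for a 2.4 GHz ZigBee-band link, together with the RMSE underlying the fit metrics (the distances are illustrative, not the paper's measurements):

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (standard Friis-derived formula,
    distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def rmse(measured, predicted):
    """Root mean square error between measured and model-predicted losses."""
    n = len(measured)
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)

# Path loss at a few illustrative distances for a 2400 MHz link
for d in (0.1, 0.5, 1.0):
    print(f"{d:4.1f} km -> {fspl_db(d, 2400.0):6.2f} dB")
```

The empirical models compared in the study (Egli, Okumura-Hata, Cost231-Hata, Ericsson) add terrain and antenna-height correction terms on top of this free-space baseline; their exact coefficients are not reproduced here.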
Electroencephalography (EEG) is a non-invasive measurement method for brain activity. Due to its safety, high resolution, and hypersensitivity to dynamic changes in brain neural signals, EEG has aroused much interest in scientific research and medical fields. This article reviews the types of EEG signals, multiple EEG signal analysis methods, and the application of relevant methods in the neuroscience field and for diagnosing neurological diseases. First, 3 types of EEG signals, including time-invariant EEG, accurate event-related EEG, and random event-related EEG, are introduced. Second, 5 main directions for the methods of EEG analysis, including power spectrum analysis, time-frequency analysis, connectivity analysis, source localization methods, and machine learning methods, are described in the main section, along with different sub-methods and effect evaluations for solving the same problem. Finally, the application scenarios of different EEG analysis methods are emphasized, and the advantages and disadvantages of similar methods are distinguished. This article is expected to assist researchers in selecting suitable EEG analysis methods based on their research objectives, provide references for subsequent research, and summarize current issues and prospects for the future.
Real-time prediction and precise control of sinter quality are pivotal for energy saving, cost reduction, quality improvement, and efficiency enhancement in the ironmaking process. To advance the accuracy and comprehensiveness of sinter quality prediction, an intelligent flare monitoring system for sintering machine tails, built on a hybrid neural network integrating a convolutional neural network with long short-term memory (CNN-LSTM) networks, was proposed. The system utilized a high-temperature thermal imager for image acquisition at the sintering machine tail and employed a zone-triggered method to accurately capture dynamic feature images under challenging conditions of high temperature, high dust, and occlusion. The feature images were then segmented through a triple-iteration multi-thresholding approach based on the maximum between-class variance method to minimize detail loss during segmentation. Leveraging the advantages of CNN and LSTM networks in capturing spatial and temporal information, a comprehensive model for sinter quality prediction was constructed, with inputs including the proportion of the combustion layer, porosity rate, temperature distribution, and image features obtained from the convolutional neural network, and outputs comprising quality indicators such as the underburning index, uniformity index, and FeO content of the sinter. The accuracy is notably increased, achieving a 95.8% hit rate within an error margin of ±1.0. After the system was applied, the average qualified rate of FeO content increased from 87.24% to 89.99%, an improvement of 2.75 percentage points. The average monthly solid fuel consumption was reduced from 49.75 to 46.44 kg/t, a 6.65% reduction, underscoring significant energy-saving and cost-reduction effects.
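The segmentation step above builds on the maximum between-class variance (Otsu) criterion. A single-threshold pure-Python sketch of that criterion over a grey-level histogram is shown below; the paper's triple-iteration multi-threshold extension is not reproduced, and the histogram used in the example is invented:

```python
def otsu_threshold(hist):
    """Single Otsu threshold over a grey-level histogram (list of counts).
    Returns the level t maximizing the between-class variance
    w0 * w1 * (mu0 - mu1)^2 between background (<= t) and foreground (> t)."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0        # background pixel count
    sum0 = 0.0    # background weighted grey-level sum
    for t, h in enumerate(hist):
        w0 += h
        if w0 == 0:
            continue            # no background pixels yet
        w1 = total - w0
        if w1 == 0:
            break               # no foreground pixels left
        sum0 += t * h
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A toy bimodal histogram over 10 grey levels: peaks near levels 2 and 7
print(otsu_threshold([0, 5, 10, 5, 0, 0, 5, 10, 5, 0]))  # 3
```

Iterating this procedure within each resulting class yields multiple thresholds, which is the general idea behind multi-thresholding variants like the one the abstract describes.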
A two-stage deep learning algorithm for the detection and recognition of can bottom spray codes and numbers is proposed to address the problems of small character areas and fast production line speeds in can bottom spray code number recognition. In the code number detection stage, a Differentiable Binarization Network is used as the backbone network, combined with an Attention and Dilation Convolutions Path Aggregation Network feature fusion structure to enhance the model's detection performance. For text recognition, using the Scene Visual Text Recognition code number recognition network for end-to-end training can alleviate code recognition errors caused by image color distortion due to variations in lighting and background noise. In addition, model pruning and quantization are used to reduce the number of model parameters to meet deployment requirements in resource-constrained environments. A comparative experiment was conducted using a dataset of can bottom spray code numbers collected on-site, and a transfer experiment was conducted using a dataset of packaging box production dates. The experimental results show that the proposed algorithm can effectively locate the codes of cans at different positions on the roller conveyor and can accurately identify the code numbers at high production line speeds. The Hmean value of code number detection is 97.32%, and the accuracy of code number recognition is 98.21%, verifying that the proposed algorithm achieves high accuracy in code number detection and recognition.
Rapid advancement in science and technology has seen computer network technology being upgraded constantly, and computer technology, in particular, has been applied more and more extensively, which has brought convenience to people’s lives. The number of people using the internet around the globe has also increased significantly, exerting a profound influence on artificial intelligence. Further, the constant upgrading and development of artificial intelligence has led to the continuous innovation and improvement of computer technology. Countries around the world have also registered an increase in investment, paying more attention to artificial intelligence. Through an analysis of the current development situation and the existing applications of artificial intelligence, this paper explicates the role of artificial intelligence in the face of the unceasing expansion of computer network technology.
The wireless signals emitted by base stations serve as a vital link connecting people in today's society and have been occupying an increasingly important role in real life. The development of the Internet of Things (IoT) relies on the support of base stations, which provide a solid foundation for achieving a more intelligent way of living. In a specific area, achieving higher signal coverage with fewer base stations has become an urgent problem. Therefore, this article focuses on the effective coverage area of base station signals and proposes a novel Evolutionary Particle Swarm Optimization (EPSO) algorithm based on collective prediction, referred to herein as ECPPSO. A new strategy called neighbor-based evolution prediction (NEP) is introduced to address the issue of premature convergence often encountered by PSO. ECPPSO also employs a strengthening evolution (SE) strategy to enhance the algorithm's global search capability and efficiency, ensuring enhanced robustness and a faster convergence speed when solving complex optimization problems. To better adapt to the actual communication needs of base stations, this article conducts simulation experiments with varying numbers of base stations. The experimental results demonstrate that, with 50 or more base stations, ECPPSO consistently achieves the best coverage rate, exceeding 95% and peaking at 99.4400% when the number of base stations reaches 80. These results validate the optimization capability of the ECPPSO algorithm, proving its feasibility and effectiveness. Further ablative experiments and comparisons with other algorithms highlight the advantages of ECPPSO.
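As background for the abstract above, a minimal standard particle swarm optimizer illustrates the baseline that variants like ECPPSO extend. This is a generic textbook sketch, not the paper's algorithm: the NEP and SE strategies are not reproduced, and all parameter values (inertia weight, acceleration coefficients, bounds) are conventional defaults:

```python
import random

def pso(fitness, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimal standard PSO minimizer (illustrative baseline, not ECPPSO)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
print(best_val)
```

Plain PSO of this form is prone to the premature convergence the abstract mentions; strategies such as NEP and SE are aimed precisely at escaping such stagnation.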
Objective: Early prediction of response before neoadjuvant chemotherapy (NAC) is crucial for personalized treatment plans for locally advanced breast cancer patients. We aim to develop a multi-task model using multiscale whole slide image (WSI) features to predict the response to breast cancer NAC more finely. Methods: This work collected 1,670 whole slide images for the training and validation sets, internal testing sets, external testing sets, and prospective testing sets of the weakly-supervised deep learning-based multi-task model (DLMM) in predicting treatment response and pathological complete response (pCR) to NAC. Our approach models two-by-two feature interactions across scales by employing concatenated fusion of single-scale feature representations, and controls the expressiveness of each representation via a gating-based attention mechanism. Results: In the retrospective analysis, DLMM exhibited excellent predictive performance for the prediction of treatment response, with areas under the receiver operating characteristic curve (AUCs) of 0.869 [95% confidence interval (95% CI): 0.806−0.933] in the internal testing set and 0.841 (95% CI: 0.814−0.867) in the external testing sets. For the pCR prediction task, DLMM reached AUCs of 0.865 (95% CI: 0.763−0.964) in the internal testing set and 0.821 (95% CI: 0.763−0.878) in the pooled external testing set. In the prospective testing study, DLMM also demonstrated favorable predictive performance, with AUCs of 0.829 (95% CI: 0.754−0.903) and 0.821 (95% CI: 0.692−0.949) in treatment response and pCR prediction, respectively. DLMM significantly outperformed the baseline models in all testing sets (P<0.05). Heatmaps were employed to interpret the decision-making basis of the model. Furthermore, it was discovered during biological basis exploration that high DLMM scores were associated with immune-related pathways and cells in the microenvironment. Conclusions: The DLMM represents a valuable tool that aids clinicians in selecting personalized treatment strategies for breast cancer patients.
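The AUC values reported above can be computed without any ML framework via the rank-sum (Mann-Whitney) formulation: the AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A small sketch with invented labels and scores:

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U formulation: the fraction of
    (positive, negative) pairs where the positive outranks the negative,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy responder (1) / non-responder (0) labels with model scores
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(roc_auc(labels, scores))  # 8 of 9 pairs correctly ranked ~= 0.889
```

The bootstrap confidence intervals quoted in the abstract are typically obtained by recomputing this statistic over resampled test sets.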
Investigation and analysis of the current status of farmers' livelihood capital, and promptly discovering and solving problems in farmers' livelihood development, are of great practical significance for optimizing farmers' livelihood strategies and enhancing the sustainability of farmers' livelihoods. Based on the framework of sustainable livelihood analysis and taking Yangshan County as an example, this paper uses field surveys, questionnaires, and interviews to summarize and analyze the current status, characteristics, and main problems of local farmers' livelihood capitals on the basis of data from 628 farmer samples. It proposes countermeasures for the future development of farmers' livelihoods. Implementing these strategies will help improve the livelihood capital structure of farmers and enhance their capability for sustainable development.
The Internet of Medical Things (IoMT) is transforming healthcare by enabling real-time data collection, analysis, and personalized treatment through interconnected devices such as sensors and wearables. The integration of Digital Twins (DTs), the virtual replicas of physical components and processes, has also been found to be a game changer for the ever-evolving IoMT. However, these advancements in the healthcare domain come with significant cybersecurity challenges, exposing it to malicious attacks and several security threats. Intrusion Detection Systems (IDSs) serve as a critical defense mechanism, yet traditional IDS approaches often struggle with the complexity and scale of IoMT networks. With this context, this paper follows a systematic approach to analyze the existing literature and highlight the current trends and challenges related to IDS in the IoMT domain. We leveraged techniques like bibliographic and keyword analysis to collect 832 research works published from 2007 to 2025, aligned with the theme "Digital Twins and IDS in IoMT." It was found that by simulating device behaviours and network interactions in IoMT, DTs not only provide a proactive platform for early threat detection, but also offer a scalable and adaptive approach to mitigating evolving security threats in IoMT. Overall, this review provides a closer look into the role of IDS and DT in securing IoMT systems and sheds light on possible research directions for developers and the research community.
Dear Editor, This letter investigates predefined-time optimization problems (OPs) of multi-agent systems (MASs), where the agents of the MASs are subject to inequality constraints and the team objective function accounts for impulse effects. Firstly, to address the inequality constraints, the penalty method is introduced. Then, a novel optimization strategy is developed, which only requires that the team objective function be strongly convex.
Purpose: Generally, scientific comparison has been done with the help of the overall impact of scholars. Although it is easy to compare scholars in this way, how can we assess the scientific impact of scholars who have different research careers? Obviously, scholars may gain a higher impact if they have more research experience or have spent more time in research (in terms of career years), so we cannot directly compare two scholars whose research careers differ. Many bibliometric indicators address the time-span of scholars. In this vein, the h-index sequence and the EM/EM'-index sequences have been introduced for the assessment and comparison of the scientific impact of scholars. The h-index sequence, EM-index sequence, and EM'-index sequence consider the yearly impact of scholars, and comparison is done by the index value along with their component values. However, these time-series indicators fail to give a comparative analysis between senior and junior scholars if there is a huge difference in the two scholars' research careers. Design/methodology/approach: We propose a cumulative index calculation method to appraise the scientific impact of scholars up to a given career age and test it on the data of 89 scholars. Findings: The proposed mechanism, implemented and tested on the publication data of 89 scholars, provides a clear distinction between the scientific impact of two scholars. It also helps in predicting future prominent scholars based on their research impact. Research limitations: This study adopts a simplistic approach by assigning equal credit to all authors, regardless of their individual contributions. Further, the potential impact of career breaks on research productivity is not taken into account. These assumptions may limit the generalizability of our findings. Practical implications: The proposed method can be used by institutions to compare their scholars' impact. Funding agencies can also use it for similar purposes. Originality/value: This research adds to the existing literature by introducing a novel methodology for comparing the scientific impact of scholars. The outcomes of this research have notable implications for the development of more precise and unbiased research assessment frameworks, enabling a more equitable evaluation of scholarly contributions.
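The h-index sequence idea discussed above can be sketched directly: compute the classic h-index over the cumulative pool of papers at each career year, so two scholars can be compared at the same career age. The citation counts below are invented for illustration, and the EM/EM'-index variants are not reproduced:

```python
def h_index(citations):
    """Classic h-index: the largest h such that at least h papers
    each have at least h citations."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cited, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def h_index_sequence(papers_by_year):
    """Cumulative h-index per career year: papers_by_year is a list of
    lists, one list of per-paper citation counts per career year."""
    seq, pool = [], []
    for yearly in papers_by_year:
        pool.extend(yearly)
        seq.append(h_index(pool))
    return seq

# Citation counts of papers published in each of 4 career years (made up)
career = [[10, 3], [6, 1], [8, 8, 2], [4]]
print(h_index_sequence(career))  # [2, 3, 4, 4]
```

Comparing two such sequences at equal career ages is what lets a junior scholar's trajectory be judged against the early years of a senior scholar's, which is the motivation behind the cumulative-index method the abstract proposes.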
Advancements in deep learning have considerably enhanced techniques for Rapid Entire Body Assessment (REBA) pose estimation by leveraging progress in three-dimensional human modeling. This survey provides an extensive overview of recent advancements, particularly emphasizing monocular image-based methodologies and their incorporation into ergonomic risk assessment frameworks. By reviewing literature from 2016 to 2024, this study offers a current and comprehensive analysis of techniques, existing challenges, and emerging trends in three-dimensional human pose estimation. In contrast to traditional reviews organized by learning paradigms, this survey examines how three-dimensional pose estimation is effectively utilized within musculoskeletal disorder (MSD) assessments, focusing on essential advancements, comparative analyses, and ergonomic implications. We extend existing image-based classification schemes by examining state-of-the-art two-dimensional models that enhance monocular three-dimensional prediction accuracy, and analyze skeleton representations by evaluating joint connectivity and spatial configuration, offering insights into how structural variability influences model robustness. A core contribution of this work is the identification of a critical research gap: the limited exploration of estimating REBA scores directly from single RGB images using monocular three-dimensional pose estimation. Most existing studies depend on depth sensors or sequential inputs, limiting applicability in real-time and resource-constrained environments. Our review emphasizes this gap and proposes future research directions to develop accurate, lightweight, and generalizable models suitable for practical deployment. This survey is a valuable resource for researchers and practitioners in computer vision, ergonomics, and related disciplines, offering a structured understanding of current methodologies and guidance for future innovation in three-dimensional human pose estimation for REBA-based ergonomic risk assessment.
With the birth of Software-Defined Networking (SDN), the integration of SDN and traditional architectures has become the development trend of computer networks. Network intrusion detection faces challenges in dealing with complex attacks in SDN environments. Thus, to address network security issues from the viewpoint of Artificial Intelligence (AI), this paper introduces the Crayfish Optimization Algorithm (COA) to the field of intrusion detection for both SDN and traditional network architectures, and, based on the characteristics of the original COA, proposes an Improved Crayfish Optimization Algorithm (ICOA) that integrates strategies of elite reverse learning, Levy flight, a crowding factor, and parameter modification. The ICOA is then utilized for AI-integrated feature selection in intrusion detection for both SDN and traditional network architectures, to reduce the dimensionality of the data and improve the performance of network intrusion detection. Finally, performance evaluation is carried out by testing not only the NSL-KDD and UNSW-NB15 datasets for traditional networks but also the InSDN dataset for SDN-based networks. Experimental results show that ICOA improves accuracy by 0.532% and 2.928%, respectively, compared with GWO and COA in traditional networks. In SDN networks, the accuracy of ICOA is 0.25% and 0.3% higher than that of COA and PSO. These findings collectively indicate that AI-integrated feature selection based on the proposed ICOA can promote network intrusion detection for both SDN and traditional architectures.
Abstract: The rapid advancement of artificial intelligence technology is driving transformative changes in medical diagnosis, treatment, and management systems through large-scale deep learning models, a process that brings both groundbreaking opportunities and multifaceted challenges. This study focuses on the medical and healthcare applications of large-scale deep learning architectures, conducting a comprehensive survey to categorize and analyze their diverse uses. The survey results reveal that current applications of large models in healthcare encompass medical data management, healthcare services, medical devices, and preventive medicine, among others. Concurrently, large models demonstrate significant advantages in the medical domain, especially in high-precision diagnosis and prediction, data analysis and knowledge discovery, and enhancing operational efficiency. Nevertheless, we identify several challenges that need urgent attention, including improving the interpretability of large models, strengthening privacy protection, and addressing issues related to handling incomplete data. This research is dedicated to systematically elucidating the deep collaborative mechanisms between artificial intelligence and the healthcare field, providing theoretical references and practical guidance for both academia and industry.
Funding: Strategic Priority Research Program of the Chinese Academy of Sciences, No. XDB0740000; National Key Research and Development Program of China, No. 2022YFB3904200 and No. 2022YFF0711601; Key Project of Innovation LREIS, No. PI009; National Natural Science Foundation of China, No. 42471503.
Abstract: Deep-time Earth research plays a pivotal role in deciphering the rates, patterns, and mechanisms of Earth's evolutionary processes throughout geological history, providing essential scientific foundations for climate prediction, natural resource exploration, and sustainable planetary stewardship. To advance Deep-time Earth research in the era of big data and artificial intelligence, the International Union of Geological Sciences initiated the "Deep-time Digital Earth International Big Science Program" (DDE) in 2019. At the core of this ambitious program lies the development of geoscience knowledge graphs, serving as a transformative knowledge infrastructure that enables the integration, sharing, mining, and analysis of heterogeneous geoscience big data. The DDE knowledge graph initiative has made significant strides in three critical dimensions: (1) establishing a unified knowledge structure across geoscience disciplines that ensures consistent representation of geological entities and their interrelationships through standardized ontologies and semantic frameworks; (2) developing a robust and scalable software infrastructure capable of supporting both expert-driven and machine-assisted knowledge engineering for large-scale graph construction and management; (3) implementing a comprehensive three-tiered architecture encompassing basic, discipline-specific, and application-oriented knowledge graphs, spanning approximately 20 geoscience disciplines. Through its open knowledge framework and international collaborative network, this initiative has fostered multinational research collaborations, establishing a robust foundation for next-generation geoscience research while propelling the discipline toward FAIR (Findable, Accessible, Interoperable, Reusable) data practices in deep-time Earth systems research.
Funding: Supported by the National Key R&D Program of China (2021YFF1200602); the National Science Fund for Excellent Overseas Scholars (0401260011); the National Defense Science and Technology Innovation Fund of the Chinese Academy of Sciences (c02022088); the Tianjin Science and Technology Program (20JCZDJC00810); the National Natural Science Foundation of China (82202798); and the Shanghai Sailing Program (22YF1404200).
Abstract: Brain-computer interfaces (BCIs) represent an emerging technology that facilitates direct communication between the brain and external devices. In recent years, numerous review articles have explored various aspects of BCIs, including their fundamental principles, technical advancements, and applications in specific domains. However, these reviews often focus on signal processing, hardware development, or limited applications such as motor rehabilitation or communication. This paper aims to offer a comprehensive review of recent electroencephalogram (EEG)-based BCI applications in the medical field across 8 critical areas: rehabilitation, daily communication, epilepsy, cerebral resuscitation, sleep, neurodegenerative diseases, anesthesiology, and emotion recognition. Moreover, the current challenges and future trends of BCIs are also discussed, including personal privacy and ethical concerns, network security vulnerabilities, safety issues, and biocompatibility.
Funding: Supported by the National Natural Science Foundation of China (U1903214, 62372339, 62371350, 61876135); the Ministry of Education Industry-University Cooperative Education Project (202102246004, 220800006041043, 202002142012); and the Fundamental Research Funds for the Central Universities (2042023kf1033).
Abstract: Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modifications to the input data. In this survey, we focus on (1) adversarial attack algorithms to generate adversarial examples, (2) adversarial defense techniques to secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including theoretical explanations, trade-off issues, and benign attacks. Additionally, we draw a brief comparison between recently published surveys on adversarial examples and identify future directions for adversarial-example research, such as the generalization of methods and the understanding of transferability, which might be solutions to the open problems in this field.
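To make the "imperceptible modification" idea concrete, here is a minimal sketch of the fast-gradient-sign approach on a toy two-feature logistic classifier. This is a generic illustration, not a model or attack from the survey; the weights and input values are invented:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One fast-gradient-sign step: move x by eps in the sign of the
    input-gradient of the cross-entropy loss of a logistic classifier."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # d(loss)/dx_i = (p - y) * w_i for logistic regression
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

# Toy classifier and a correctly classified input (label y = 1)
w, b = [2.0, -1.0], 0.0
x = [1.0, 0.5]          # score = 1.5, so the model predicts class 1
x_adv = fgsm_perturb(x, 1, w, b, eps=0.9)
# The perturbation pushes the score down: roughly [0.1, 1.4], score -1.2
```

The same sign-of-gradient step, applied to pixels of an image with a small eps, is what produces visually indistinguishable adversarial images against a DNN.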
Funding: Supported by the Ministry of Science and Technology of China, No. 2020AAA0109605 (to XL), and the Meizhou Major Scientific and Technological Innovation Platforms Projects of the Guangdong Provincial Science & Technology Plan, No. 2019A0102005 (to HW).
Abstract: Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multimodal datasets, with six prior models that achieved good action classification performance, including I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had a higher clinical value compared with the other approaches. Moreover, the multimodal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multimodal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Abstract: The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches led to using artificial intelligence (AI) techniques (such as machine learning (ML) and deep learning (DL)) to build more efficient and reliable intrusion detection systems (IDSs). However, the advent of larger IDS datasets has negatively impacted the performance and computational complexity of AI-based IDSs. Many researchers have used data preprocessing techniques such as feature selection and normalization to overcome such issues. While most of these researchers reported the success of these preprocessing techniques at a shallow level, very few studies have examined their effects on a wider scale. Furthermore, the performance of an IDS model is subject not only to the utilized preprocessing techniques but also to the dataset and the ML/DL algorithm used, which most existing studies place little emphasis on. Thus, this study provides an in-depth analysis of feature selection and normalization effects on IDS models built using three IDS datasets, NSL-KDD, UNSW-NB15, and CSE-CIC-IDS2018, and various AI algorithms. A wrapper-based approach, which tends to give superior performance, and min-max normalization were used for feature selection and normalization, respectively. Numerous IDS models were implemented using the full and feature-selected copies of the datasets, with and without normalization. The models were evaluated using popular evaluation metrics in IDS modeling; intra- and inter-model comparisons were performed between models and with state-of-the-art works. Random forest (RF) models performed better on the NSL-KDD and UNSW-NB15 datasets, with accuracies of 99.86% and 96.01%, respectively, whereas an artificial neural network (ANN) achieved the best accuracy of 95.43% on the CSE-CIC-IDS2018 dataset. The RF models also achieved excellent performance compared to recent works. The results show that normalization and feature selection positively affect IDS modeling. Furthermore, while feature selection benefits simpler algorithms (such as RF), normalization is more useful for complex algorithms like ANNs and deep neural networks (DNNs), and algorithms such as Naive Bayes are unsuitable for IDS modeling. The study also found that the UNSW-NB15 and CSE-CIC-IDS2018 datasets are more complex and more suitable for building and evaluating modern-day IDSs than the NSL-KDD dataset. Our findings suggest that prioritizing robust algorithms like RF, alongside complex models such as ANN and DNN, can significantly enhance IDS performance. These insights provide valuable guidance for managers to develop more effective security measures by focusing on high detection rates and low false alert rates.
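The min-max normalization step the study applies can be sketched in a few lines. This is a generic illustration rather than the authors' code, and the feature values are invented:

```python
def min_max_normalize(column):
    """Scale a numeric feature column to [0, 1]; constant columns map to 0."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0] * len(column)
    return [(v - lo) / (hi - lo) for v in column]

# e.g. a hypothetical connection-duration feature from an IDS dataset
normalized = min_max_normalize([2, 4, 6, 10])
print(normalized)  # [0.0, 0.25, 0.5, 1.0]
```

Applying this per feature puts all columns on the same scale, which is why it helps scale-sensitive learners such as ANNs and DNNs more than tree ensembles like RF.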
Abstract: The Internet of Things (IoT) has orchestrated various domains in numerous applications, contributing significantly to the growth of the smart world, even in regions with low literacy rates, boosting socio-economic development. This study provides valuable insights into optimizing wireless communication, paving the way for a more connected and productive future in the mining industry. The IoT revolution is advancing across industries, but harsh geometric environments, including open-pit mines, pose unique challenges for reliable communication. The advent of IoT in the mining industry has significantly improved communication for critical operations through the use of Radio Frequency (RF) protocols such as Bluetooth, Wi-Fi, GSM/GPRS, Narrow Band (NB)-IoT, SigFox, ZigBee, and Long Range Wireless Area Network (LoRaWAN). This study addresses the optimization of network implementations by comparing two leading free-spreading IoT-based RF protocols, ZigBee and LoRaWAN. Intensive field tests were conducted in various opencast mines to investigate coverage potential and signal attenuation. ZigBee was tested in the Tadicherla open-cast coal mine in India. Similarly, LoRaWAN field tests were conducted at one of the associated cement companies (ACC) in the limestone mine in Bargarh, India, covering both Indoor-to-Outdoor (I2O) and Outdoor-to-Outdoor (O2O) environments. A robust framework of path-loss models (Free space, Egli, Okumura-Hata, Cost231-Hata, and Ericsson), combined with key performance metrics, was employed to evaluate the patterns of signal attenuation. Extensive field testing and careful data analysis revealed that the Egli model is the most consistent path-loss model for the ZigBee protocol in an I2O environment, with a coefficient of determination (R²) of 0.907 and balanced error metrics: Normalized Root Mean Square Error (NRMSE) of 0.030, Mean Square Error (MSE) of 4.950, Mean Absolute Percentage Error (MAPE) of 0.249, and Scatter Index (SI) of 2.723. In the O2O scenario, the Ericsson model showed superior performance, with the highest R² value of 0.959, supported by strong correlation metrics: NRMSE of 0.026, MSE of 8.685, MAPE of 0.685, Mean Absolute Deviation (MAD) of 20.839, and SI of 2.194. For the LoRaWAN protocol, the Cost-231 model achieved the highest R² value of 0.921 in the I2O scenario, complemented by the lowest metrics: NRMSE of 0.018, MSE of 1.324, MAPE of 0.217, MAD of 9.218, and SI of 1.238. In the O2O environment, the Okumura-Hata model achieved the highest R² value of 0.978, indicating a strong fit, with metrics NRMSE of 0.047, MSE of 27.807, MAPE of 27.494, MAD of 37.287, and SI of 3.927. These advancements in reliable communication networks promise to transform the opencast landscape despite severe signal attenuation. The results support decision-making for mining needs and ensure reliable communications even in the face of formidable obstacles.
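The simplest of the path-loss models named above, the free-space model, has a closed form and is often the baseline the empirical models are compared against. The 868 MHz / 1 km figures below are illustrative, not measurements from the study:

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Free-space path loss in dB (Friis formula in its
    engineering form): 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# A hypothetical LoRaWAN link at 868 MHz over 1 km of clear path
loss = free_space_path_loss_db(1.0, 868.0)  # about 91.2 dB
```

Empirical models such as Egli or Okumura-Hata add terms for terrain, antenna heights, and clutter on top of this idealized baseline, which is why they fit mine environments better.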
Funding: Supported by the STI2030 Major Projects (2021ZD0204300) and the National Natural Science Foundation of China (61803003, 62003228).
Abstract: Electroencephalography (EEG) is a non-invasive measurement method for brain activity. Due to its safety, high resolution, and hypersensitivity to dynamic changes in brain neural signals, EEG has aroused much interest in scientific research and medical fields. This article reviews the types of EEG signals, multiple EEG signal analysis methods, and the application of relevant methods in the neuroscience field and for diagnosing neurological diseases. First, 3 types of EEG signals, including time-invariant EEG, accurate event-related EEG, and random event-related EEG, are introduced. Second, 5 main directions for the methods of EEG analysis, including power spectrum analysis, time-frequency analysis, connectivity analysis, source localization methods, and machine learning methods, are described in the main section, along with different sub-methods and effect evaluations for solving the same problem. Finally, the application scenarios of different EEG analysis methods are emphasized, and the advantages and disadvantages of similar methods are distinguished. This article is expected to assist researchers in selecting suitable EEG analysis methods based on their research objectives, provide references for subsequent research, and summarize current issues and prospects for the future.
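Power spectrum analysis, the first method family listed, reduces to estimating signal power per frequency bin. A naive pure-Python DFT periodogram makes the idea concrete; the synthetic 10 Hz sinusoid standing in for alpha-band EEG is invented for illustration (real pipelines would use an FFT routine and windowing):

```python
import math

def power_spectrum(signal, fs):
    """Naive DFT periodogram: (frequency, power) pairs up to Nyquist."""
    n = len(signal)
    spectrum = []
    for k in range(n // 2 + 1):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spectrum.append((k * fs / n, (re * re + im * im) / n))
    return spectrum

# 1 s of a 10 Hz sinusoid sampled at 100 Hz (alpha-band stand-in)
fs = 100
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
peak_freq = max(power_spectrum(sig, fs), key=lambda fp: fp[1])[0]
# the spectral peak lands at 10 Hz
```

Band power (e.g. summing bins in 8-13 Hz) is then a common feature fed into the machine learning methods the review also covers.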
Funding: Funded by the Open Project Program of the Anhui Province Key Laboratory of Metallurgical Engineering and Resources Recycling (Anhui University of Technology) (No. SKF21-06) and the Research Fund for Young Teachers of Anhui University of Technology in 2020 (No. QZ202001).
Abstract: Real-time prediction and precise control of sinter quality are pivotal for energy saving, cost reduction, quality improvement, and efficiency enhancement in the ironmaking process. To advance the accuracy and comprehensiveness of sinter quality prediction, an intelligent flare monitoring system for sintering machine tails was proposed, built on a hybrid neural network integrating a convolutional neural network with long short-term memory (CNN-LSTM) networks. The system utilized a high-temperature thermal imager for image acquisition at the sintering machine tail and employed a zone-triggered method to accurately capture dynamic feature images under challenging conditions of high temperature, high dust, and occlusion. The feature images were then segmented through a triple-iteration multi-thresholding approach based on the maximum between-class variance method to minimize detail loss during the segmentation process. Leveraging the advantages of CNN and LSTM networks in capturing spatial and temporal information, a comprehensive model for sinter quality prediction was constructed, with inputs including the proportion of the combustion layer, porosity rate, temperature distribution, and image features obtained from the convolutional neural network, and outputs comprising quality indicators such as the underburning index, uniformity index, and FeO content of the sinter. The accuracy is notably increased, achieving a 95.8% hit rate within an error margin of ±1.0. After the system was applied, the average qualified rate of FeO content increased from 87.24% to 89.99%, an improvement of 2.75 percentage points. The average monthly solid fuel consumption was reduced from 49.75 to 46.44 kg/t, a 6.65% reduction, underscoring significant energy saving and cost reduction effects.
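The maximum between-class variance criterion used for segmentation is Otsu's method: pick the threshold that maximizes the weighted variance between the two resulting pixel classes. A minimal histogram-based sketch follows; the 8-level histogram is invented to mimic a dark background versus a bright flame region, and the paper's triple-iteration variant would re-apply this step within each class:

```python
def otsu_threshold(hist):
    """Return the gray level maximizing between-class variance
    for a histogram (index = gray level, value = pixel count)."""
    total = sum(hist)
    weighted_total = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]                      # pixels at or below t
        cum += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = cum / w0, (weighted_total - cum) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal 8-level histogram: dark background vs. bright flame pixels
t = otsu_threshold([10, 30, 10, 0, 0, 8, 25, 7])
```

For this histogram the split falls at gray level 2, between the dark and bright modes.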
Abstract: A two-stage algorithm based on deep learning for the detection and recognition of can bottom spray codes and numbers is proposed to address the problems of small character areas and fast production line speeds in can bottom spray code number recognition. In the coding number detection stage, Differentiable Binarization Network is used as the backbone network, combined with the Attention and Dilation Convolutions Path Aggregation Network feature fusion structure to enhance the model's detection effect. For text recognition, using the Scene Visual Text Recognition coding number recognition network for end-to-end training can alleviate the problem of coding recognition errors caused by image color distortion due to variations in lighting and background noise. In addition, model pruning and quantization are used to reduce the number of model parameters to meet deployment requirements in resource-constrained environments. A comparative experiment was conducted using a dataset of can bottom spray code numbers collected on-site, and a transfer experiment was conducted using a dataset of packaging box production dates. The experimental results show that the proposed algorithm can effectively locate the coding of cans at different positions on the roller conveyor and can accurately identify the coding numbers at high production line speeds. The Hmean value of the coding number detection is 97.32%, and the accuracy of the coding number recognition is 98.21%. This verifies that the proposed algorithm has high accuracy in coding number detection and recognition.
Abstract: Rapid advancement in science and technology has seen computer network technology upgraded constantly, and computer technology in particular has been applied more and more extensively, bringing convenience to people's lives. The number of people using the internet around the globe has also increased significantly, exerting a profound influence on artificial intelligence. Further, the constant upgrading and development of artificial intelligence has led to the continuous innovation and improvement of computer technology. Countries around the world have also registered an increase in investment, paying more attention to artificial intelligence. Through an analysis of the current development situation and the existing applications of artificial intelligence, this paper explicates the role of artificial intelligence amid the unceasing expansion of computer network technology.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62272418, 62102058); the Basic Public Welfare Research Program of Zhejiang Province (No. LGG18E050011); the Major Open Project of the Key Laboratory for Advanced Design and Intelligent Computing of the Ministry of Education under Grant ADIC2023ZD001; and the National Undergraduate Training Program on Innovation and Entrepreneurship (No. 202410345054).
Abstract: The wireless signals emitted by base stations serve as a vital link connecting people in today's society and occupy an increasingly important role in real life. The development of the Internet of Things (IoT) relies on the support of base stations, which provide a solid foundation for achieving a more intelligent way of living. In a specific area, achieving higher signal coverage with fewer base stations has become an urgent problem. Therefore, this article focuses on the effective coverage area of base station signals and proposes a novel Evolutionary Particle Swarm Optimization (EPSO) algorithm based on collective prediction, referred to herein as ECPPSO. A new strategy called neighbor-based evolution prediction (NEP) addresses the issue of premature convergence often encountered by PSO. ECPPSO also employs a strengthening evolution (SE) strategy to enhance the algorithm's global search capability and efficiency, ensuring enhanced robustness and a faster convergence speed when solving complex optimization problems. To better adapt to the actual communication needs of base stations, this article conducts simulation experiments with varying numbers of base stations. The experimental results demonstrate that, with 50 or more base stations, ECPPSO consistently achieves the best coverage rate, exceeding 95% and peaking at 99.44% when the number of base stations reaches 80. These results validate the optimization capability of the ECPPSO algorithm, proving its feasibility and effectiveness. Further ablative experiments and comparisons with other algorithms highlight the advantages of ECPPSO.
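For reference, the global-best PSO that ECPPSO builds on can be sketched in a few dozen lines. This is the generic baseline algorithm, not the paper's ECPPSO (it has neither the NEP nor the SE strategy); the coefficients and the sphere test function are illustrative:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, seed=42):
    """Minimal global-best PSO: velocity mixes inertia, a pull toward the
    particle's personal best, and a pull toward the swarm's global best."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function as a stand-in objective; a coverage objective would
# score candidate base-station placements instead.
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=2)
```

Premature convergence happens when all particles collapse onto an early `gbest`; the NEP and SE strategies described above are designed to counter exactly that failure mode.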
Funding: Supported by the National Natural Science Foundation of China (No. 82371933); the National Natural Science Foundation of Shandong Province of China (No. ZR2021MH120); the Taishan Scholars Project (No. tsqn202211378); and the Shandong Provincial Natural Science Foundation for Excellent Young Scholars (No. ZR2024YQ075).
Abstract: Objective: Early prediction of response before neoadjuvant chemotherapy (NAC) is crucial for personalized treatment plans for locally advanced breast cancer patients. We aim to develop a multi-task model using multiscale whole slide image (WSI) features to predict the response to breast cancer NAC more finely. Methods: This work collected 1,670 whole slide images for the training and validation sets, internal testing sets, external testing sets, and prospective testing sets of the weakly-supervised deep learning-based multi-task model (DLMM) in predicting treatment response and pCR to NAC. Our approach models two-by-two feature interactions across scales by employing concatenated fusion of single-scale feature representations and controls the expressiveness of each representation via a gating-based attention mechanism. Results: In the retrospective analysis, DLMM exhibited excellent predictive performance for the prediction of treatment response, with areas under the receiver operating characteristic curve (AUCs) of 0.869 [95% confidence interval (95% CI): 0.806-0.933] in the internal testing set and 0.841 (95% CI: 0.814-0.867) in the external testing sets. For the pCR prediction task, DLMM reached AUCs of 0.865 (95% CI: 0.763-0.964) in internal testing and 0.821 (95% CI: 0.763-0.878) in the pooled external testing set. In the prospective testing study, DLMM also demonstrated favorable predictive performance, with AUCs of 0.829 (95% CI: 0.754-0.903) and 0.821 (95% CI: 0.692-0.949) in treatment response and pCR prediction, respectively. DLMM significantly outperformed the baseline models in all testing sets (P<0.05). Heatmaps were employed to interpret the decision-making basis of the model. Furthermore, it was discovered that high DLMM scores were associated with immune-related pathways and cells in the microenvironment during biological basis exploration. Conclusions: The DLMM represents a valuable tool that aids clinicians in selecting personalized treatment strategies for breast cancer patients.
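The AUC values reported above follow the standard Mann-Whitney interpretation: the probability that a randomly chosen positive case (e.g. pCR) receives a higher model score than a randomly chosen negative case. A direct computation, with invented labels and scores, looks like:

```python
def auc(labels, scores):
    """AUC as the probability that a random positive outranks a random
    negative (ties count half): the Wilcoxon/Mann-Whitney formulation."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores: label 1 = responder, 0 = non-responder
value = auc([1, 1, 0, 0, 1], [0.9, 0.7, 0.4, 0.6, 0.3])
```

This pairwise form is O(n²) but makes the probabilistic meaning explicit; production code typically uses a rank-based O(n log n) equivalent.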
Funding: Supported by the Guangdong Province Philosophy and Social Science Planning Project (GD24CGL18 & GD23CGL02).
Abstract: Investigation and analysis of the current status of farmers' livelihood capital, and promptly discovering and solving problems in farmers' livelihood development, are of great practical significance for optimizing farmers' livelihood strategies and enhancing their capability for sustainable livelihood development. Based on the sustainable livelihood analysis framework and taking Yangshan County as an example, this paper uses field surveys, questionnaires, and interviews to summarize and analyze the current status, characteristics, and main problems of local farmers' livelihood capitals, drawing on data from 628 farmer samples. It proposes countermeasures for the future development of farmers' livelihoods. Implementing these strategies will help improve farmers' livelihood capital structure and enhance their capability for sustainable development.
Funding: This research is conducted as part of the project titled "Digital Twin-based Intrusion Detection System Using Federated Learning for IoMT" (2024-2027), supported by C3iHub, IIT Kanpur, India, under Sanction Order No. IHUB-NTIHAC/2024/01/3.
Abstract: The Internet of Medical Things (IoMT) is transforming healthcare by enabling real-time data collection, analysis, and personalized treatment through interconnected devices such as sensors and wearables. The integration of Digital Twins (DTs), virtual replicas of physical components and processes, has also been found to be a game changer for the ever-evolving IoMT. However, these advancements in the healthcare domain come with significant cybersecurity challenges, exposing it to malicious attacks and several security threats. Intrusion Detection Systems (IDSs) serve as a critical defense mechanism, yet traditional IDS approaches often struggle with the complexity and scale of IoMT networks. In this context, this paper follows a systematic approach to analyze the existing literature and highlight the current trends and challenges related to IDSs in the IoMT domain. We leveraged techniques such as bibliographic and keyword analysis to collect 832 research works published from 2007 to 2025, aligned with the theme "Digital Twins and IDS in IoMT." It was found that by simulating device behaviours and network interactions in IoMT, DTs not only provide a proactive platform for early threat detection but also offer a scalable and adaptive approach to mitigating evolving security threats. Overall, this review provides a closer look into the role of IDSs and DTs in securing IoMT systems and sheds light on possible research directions for developers and the research community.
Funding: Supported in part by the National Natural Science Foundation of China (62276119); the Natural Science Foundation of Jiangsu Province (BK20241764); and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX22_2860).
Abstract: Dear Editor, this letter investigates predefined-time optimization problems (OPs) of multi-agent systems (MASs), where the agents are subject to inequality constraints and the team objective function accounts for impulse effects. Firstly, to address the inequality constraints, the penalty method is introduced. Then, a novel optimization strategy is developed, which only requires that the team objective function be strongly convex.
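The penalty method mentioned in the letter replaces an inequality-constrained OP with an unconstrained one by adding a term that punishes constraint violation. A minimal numerical sketch on a toy one-dimensional problem (not the MAS setting, and with invented step sizes) is:

```python
def penalized(f, constraints, mu):
    """Quadratic exterior penalty: add mu * sum(max(0, g(x))^2)
    for constraints written as g(x) <= 0."""
    return lambda x: f(x) + mu * sum(max(0.0, g(x)) ** 2 for g in constraints)

def gradient_descent(fun, x0, lr=0.002, steps=5000, h=1e-6):
    """Plain gradient descent with a central-difference gradient."""
    x = x0
    for _ in range(steps):
        grad = (fun(x + h) - fun(x - h)) / (2 * h)
        x -= lr * grad
    return x

# minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0;
# the constrained optimum is x = 1
obj = penalized(lambda x: x * x, [lambda x: 1 - x], mu=100.0)
x_star = gradient_descent(obj, x0=5.0)
# x_star approaches the constrained optimum x = 1 as mu grows
```

The exterior penalty converges from the infeasible side (here x_star sits slightly below 1), and tightening mu drives the solution toward exact feasibility, which is the trade-off the penalty parameter controls.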
Abstract: Purpose: Scientific comparison is generally done with the help of the overall impact of scholars. Although it is easy to compare scholars this way, how can we assess the scientific impact of scholars who have different research careers? Obviously, scholars may gain a high impact if they have more research experience or have spent more time on research (in terms of career years), so we cannot directly compare two scholars who have different research careers. Many bibliometric indicators address the time span of scholars. In this vein, the h-index sequence and the EM/EM'-index sequences have been introduced for the assessment and comparison of the scientific impact of scholars. The h-index sequence, EM-index sequence, and EM'-index sequence consider the yearly impact of scholars, and comparison is done by the index value along with their component values. These time-series indicators fail to give a comparative analysis between senior and junior scholars if there is a huge difference in the two scholars' research careers. Design/methodology/approach: We propose a cumulative index calculation method to appraise the scientific impact of scholars up to a given career age and test it with data on 89 scholars. Findings: The proposed mechanism is implemented and tested on 89 scholars' publication data, providing a clear differentiation between the scientific impact of two scholars. This also helps in predicting future prominent scholars based on their research impact. Research limitations: This study adopts a simplistic approach by assigning equal credit to all authors, regardless of their individual contributions. Further, the potential impact of career breaks on research productivity is not taken into account. These assumptions may limit the generalizability of our findings. Practical implications: The proposed method can be used by institutions to compare their scholars' impact. Funding agencies can also use it for similar purposes. Originality/value: This research adds to the existing literature by introducing a novel methodology for comparing the scientific impact of scholars. The outcomes have notable implications for the development of more precise and unbiased research assessment frameworks, enabling a more equitable evaluation of scholarly contributions.
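The h-index sequence underlying the comparison can be sketched directly: compute the h-index over the cumulative publication pool at each career year. This is a generic illustration of the sequence idea, not the authors' cumulative index; the citation counts are invented:

```python
def h_index(citations):
    """Largest h such that the scholar has h papers with >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

def h_index_sequence(citations_by_year):
    """Cumulative h-index at each career year, one value per year."""
    seq, pool = [], []
    for year_cites in citations_by_year:
        pool.extend(year_cites)
        seq.append(h_index(pool))
    return seq

# papers' citation counts grouped by publication year (hypothetical scholar)
seq = h_index_sequence([[10, 4], [6, 1], [8, 0, 3]])
print(seq)  # [2, 3, 4]
```

Comparing such sequences year-by-year lets a junior scholar's year-3 value be matched against a senior scholar's year-3 value rather than against the senior's whole-career total.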
Abstract: Advancements in deep learning have considerably enhanced techniques for Rapid Entire Body Assessment (REBA) pose estimation by leveraging progress in three-dimensional human modeling. This survey provides an extensive overview of recent advancements, particularly emphasizing monocular image-based methodologies and their incorporation into ergonomic risk assessment frameworks. By reviewing literature from 2016 to 2024, this study offers a current and comprehensive analysis of techniques, existing challenges, and emerging trends in three-dimensional human pose estimation. In contrast to traditional reviews organized by learning paradigms, this survey examines how three-dimensional pose estimation is effectively utilized within musculoskeletal disorder (MSD) assessments, focusing on essential advancements, comparative analyses, and ergonomic implications. We extend existing image-based classification schemes by examining state-of-the-art two-dimensional models that enhance monocular three-dimensional prediction accuracy, and we analyze skeleton representations by evaluating joint connectivity and spatial configuration, offering insights into how structural variability influences model robustness. A core contribution of this work is the identification of a critical research gap: the limited exploration of estimating REBA scores directly from single RGB images using monocular three-dimensional pose estimation. Most existing studies depend on depth sensors or sequential inputs, limiting applicability in real-time and resource-constrained environments. Our review emphasizes this gap and proposes future research directions to develop accurate, lightweight, and generalizable models suitable for practical deployment. This survey is a valuable resource for researchers and practitioners in computer vision, ergonomics, and related disciplines, offering a structured understanding of current methodologies and guidance for future innovation in three-dimensional human pose estimation for REBA-based ergonomic risk assessment.
Funding: Supported by the National Natural Science Foundation of China under Grant 61602162 and the Hubei Provincial Science and Technology Plan Project under Grant 2023BCB041.
Abstract: With the birth of Software-Defined Networking (SDN), the integration of SDN and traditional architectures has become the development trend of computer networks. Network intrusion detection faces challenges in dealing with complex attacks in SDN environments. To address these network security issues from the viewpoint of Artificial Intelligence (AI), this paper introduces the Crayfish Optimization Algorithm (COA) to the field of intrusion detection for both SDN and traditional network architectures. Building on the characteristics of the original COA, an Improved Crayfish Optimization Algorithm (ICOA) is proposed by integrating strategies of elite reverse learning, Levy flight, a crowding factor, and parameter modification. The ICOA is then utilized for AI-integrated feature selection in intrusion detection for both SDN and traditional network architectures, reducing the dimensionality of the data and improving the performance of network intrusion detection. Finally, performance evaluation is carried out not only on the NSL-KDD and UNSW-NB15 datasets for traditional networks but also on the InSDN dataset for SDN-based networks. Experimental results show that ICOA improves accuracy by 0.532% and 2.928% compared with GWO and COA, respectively, in traditional networks. In SDN networks, the accuracy of ICOA is 0.25% and 0.3% higher than that of COA and PSO. These findings collectively indicate that AI-integrated feature selection based on the proposed ICOA can promote network intrusion detection for both SDN and traditional architectures.
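The abstract names Levy flight as one of the strategies folded into ICOA but gives no implementation details. As a hypothetical illustration only (not the authors' code), a Levy-flight perturbation of a binary feature-selection mask, using Mantegna's algorithm to draw heavy-tailed steps, might be sketched as follows; the function names and the flip-probability scaling are assumptions for demonstration.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a heavy-tailed Levy-flight step via Mantegna's algorithm."""
    if rng is None:
        rng = np.random.default_rng()
    # Standard deviation of the numerator Gaussian (Mantegna, 1994)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def perturb_mask(mask, step_scale=0.1, rng=None):
    """Flip feature-selection bits with probabilities driven by a Levy step.

    Large (rare) Levy steps flip many bits, giving occasional long jumps
    in the search space; small steps make local refinements.
    """
    if rng is None:
        rng = np.random.default_rng()
    step = levy_step(len(mask), rng=rng)
    flip_prob = np.clip(np.abs(step_scale * step), 0.0, 1.0)
    flip = rng.random(len(mask)) < flip_prob
    out = mask.copy()
    out[flip] = 1 - out[flip]  # toggle selected/unselected features
    return out
```

In a metaheuristic feature selector, each candidate solution is such a 0/1 mask over the dataset's features; a fitness function (e.g., classifier accuracy minus a penalty on the number of selected features) then scores each perturbed mask.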
Funding: Liaoning Provincial Social Science Planning Fund, "Research on the Educational Intelligent Evaluation System Based on the CIPP Model and Artificial Intelligence under the Background of New Engineering" (L22BTJ005).
Abstract: This study explores the feasibility of constructing an intelligent educational evaluation system based on the CIPP model and artificial intelligence technology in the context of new engineering disciplines. By integrating the CIPP model with AI technology, a novel intelligent educational evaluation system was designed. Through experimental validation and case studies, the system demonstrated significant effectiveness in improving teaching quality, facilitating personalized student development, and optimizing educational resource allocation. Additionally, the study predicts potential changes this system could bring to the education industry and proposes relevant policy recommendations. Although the current research has limitations, with future technological advancements this system is expected to provide stronger support for innovations in engineering education models.
Funding: Funded by the National Natural Science Foundation of China (Grant No. 62272236) and the Natural Science Foundation of Jiangsu Province (Grant No. BK20201136).
Abstract: The rapid advancement of artificial intelligence technology is driving transformative changes in medical diagnosis, treatment, and management systems through large-scale deep learning models, a process that brings both groundbreaking opportunities and multifaceted challenges. This study focuses on the medical and healthcare applications of large-scale deep learning architectures, conducting a comprehensive survey to categorize and analyze their diverse uses. The survey results reveal that current applications of large models in healthcare encompass medical data management, healthcare services, medical devices, and preventive medicine, among others. Concurrently, large models demonstrate significant advantages in the medical domain, especially in high-precision diagnosis and prediction, data analysis and knowledge discovery, and enhancing operational efficiency. Nevertheless, we identify several challenges that need urgent attention, including improving the interpretability of large models, strengthening privacy protection, and addressing issues related to handling incomplete data. This research is dedicated to systematically elucidating the deep collaborative mechanisms between artificial intelligence and the healthcare field, providing theoretical references and practical guidance for both academia and industry.