Modern industrial environments require uninterrupted machinery operation to maintain productivity standards while ensuring safety and minimizing costs. Conventional maintenance methods, such as reactive maintenance (i.e., run to failure) or time-based preventive maintenance (i.e., scheduled servicing), prove ineffective for complex systems with many Internet of Things (IoT) devices and sensors because they fall short in detecting faults at the early stages, when detection is most crucial. This paper presents a predictive maintenance framework based on a hybrid deep learning model that integrates the capabilities of Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs). The framework combines spatial feature extraction and temporal sequence modeling to accurately classify the health state of industrial equipment into three categories: Normal, Require Maintenance, and Failed. It uses a modular pipeline that includes IoT-enabled data collection along with secure transmission methods to manage cloud storage and provide real-time fault classification. The FD004 subset of the NASA C-MAPSS dataset, containing multivariate sensor readings from aircraft engines, serves as the training and evaluation data for the model. Experimental results show that the LSTM-CNN model outperforms baseline models such as LSTM-SVM and LSTM-RNN, achieving an overall average accuracy of 86.66%, precision of 86.00%, recall of 86.33%, and F1-score of 86.33%. In contrast to previous LSTM-CNN-based predictive maintenance models that either provide binary classification or rely on synthetically balanced data, our work provides a three-class maintenance state (Normal, Require Maintenance, and Failed) along with threshold-based labeling that retains the true nature of the degradation. In addition, our work provides an IoT-to-cloud modular architecture for deployment and offers Computerized Maintenance Management System (CMMS) integration, making the proposed solution not only technically sound but also practical and innovative. The solution achieves real-world industrial deployment readiness through its reliable performance and scalable system design.
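The hybrid architecture described above can be sketched as follows. This is a minimal illustrative PyTorch model, not the authors' implementation: the layer sizes, the 21-sensor input (typical of C-MAPSS), and the 30-step window are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Illustrative hybrid CNN-LSTM health-state classifier.
    All layer sizes are assumptions, not taken from the paper."""
    def __init__(self, n_sensors=21, n_classes=3):
        super().__init__()
        # 1-D convolution extracts local (spatial) features across the sensor window
        self.conv = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM models the temporal degradation sequence
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)  # Normal / Require Maintenance / Failed

    def forward(self, x):                      # x: (batch, time, sensors)
        z = self.conv(x.transpose(1, 2))       # -> (batch, 32, time/2)
        out, _ = self.lstm(z.transpose(1, 2))  # -> (batch, time/2, 64)
        return self.head(out[:, -1])           # class logits from the last step

logits = CNNLSTMClassifier()(torch.randn(4, 30, 21))
print(logits.shape)  # torch.Size([4, 3])
```

The last LSTM hidden state summarizes the whole window, so one forward pass yields one health-state prediction per equipment window.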
As urbanization continues to accelerate, the challenges associated with managing transportation in metropolitan areas become increasingly complex. The surge in population density contributes to traffic congestion, impacting travel experiences and posing safety risks. Smart urban transportation management emerges as a strategic solution, conceptualized here as a multidimensional big data problem. The success of this strategy hinges on the effective collection of information from diverse, extensive, and heterogeneous data sources, necessitating the implementation of full-stack Information and Communication Technology (ICT) solutions. The main idea of this work is to investigate the current technologies of Intelligent Transportation Systems (ITS) and enhance the safety of urban transportation systems. Machine learning models, trained on historical data, can predict traffic congestion, allowing for the implementation of preventive measures. Deep learning architectures, with their ability to handle complex data representations, further refine traffic predictions, contributing to more accurate and dynamic transportation management. The background of this research underscores the challenges posed by traffic congestion in metropolitan areas and emphasizes the need for advanced technological solutions. By integrating GPS and GIS technologies with machine learning algorithms, this work supports the development of intelligent transportation systems that not only address current challenges but also pave the way for future advancements in urban transportation management.
Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model to maximize modularity, with the key partitioning constraints on parallel restoration taken into account. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model is set by adopting a relative deviation normalization scheme to reduce mutual interference between the reward and penalty terms. A soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. Then, the deep Q-network method is applied to solve the partitioning MDP model and generate partitioning schemes, with two experience replay buffers employed to speed up training. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of partitioning training.
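The partitioning objective named above, modularity, can be computed directly from any candidate partition. A small self-contained sketch of Newman modularity; the two-island example graph in the usage line is hypothetical, not from the paper:

```python
from collections import defaultdict

def modularity(edges, labels):
    """Newman modularity Q of a partition: the fraction of edges inside each
    island minus the fraction expected under a random degree-preserving graph.
    `edges` is a list of undirected (u, v) pairs; `labels[node]` is its island."""
    m = len(edges)
    inside = defaultdict(int)   # edges with both endpoints in the same island
    degsum = defaultdict(int)   # total degree accumulated per island
    for u, v in edges:
        degsum[labels[u]] += 1
        degsum[labels[v]] += 1
        if labels[u] == labels[v]:
            inside[labels[u]] += 1
    return sum(inside[c] / m - (degsum[c] / (2 * m)) ** 2 for c in degsum)

# Hypothetical example: two triangles joined by one tie line, split at the tie.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
labels = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, labels), 4))  # 0.3571
```

In the MDP formulation, each bus-to-island assignment changes `labels`, and the agent's reward tracks the resulting change in Q subject to the restoration constraints.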
This systematic review aims to comprehensively examine and compare deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Excluded were non-open-access publications, books, and non-English articles. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned over 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited due to lack of validation, interpretability concerns, and real-world deployment barriers.
The pressure-preserving controller is the key component of deep in situ pressure-preserving coring (IPP-Coring). With increasing drilling depth, the environmental temperature and pressure increase accordingly. However, due to the strength and sealing limitations of pressure-preserving controllers, the coring pressure is generally lower than 70 MPa. Establishing a high-temperature and ultrahigh-pressure test system is therefore highly important for improving the strength and sealing performance of pressure-preserving controllers. This paper introduces a high-temperature and ultrahigh-pressure test system for deep IPP-Coring controller performance analysis. The device includes six parts: an auxiliary air source system, a pressurization system, a temperature control system, a hydraulic system, a data acquisition and electrical control system, and an ultrahigh-pressure vessel. The test system can reconstruct a 150 ℃, 200 MPa in situ environment and can simulate and test the movement state of the corer and the stability of the pressure-preserving action trigger of the controller during deep IPP-Coring. To verify the performance of this test system, saddle-shaped pressure-preserving controllers made of four different materials were subjected to pressure tests under normal-temperature and high-temperature conditions. The results showed that the ultimate pressure-bearing capability of the pressure-preserving controller varied greatly between normal-temperature and high-temperature conditions. The pressure-preserving ability and sealing performance of the controller decreased significantly at high temperature, and the controller exhibited markedly different sealing failure characteristics depending on the material. This study is important for progressing the extraction and evaluation of deep reservoir resources.
Pressure-preserved coring technologies are critical for deep-earth resource exploration but are constrained by the inability to achieve multidirectional coring, restricting the exploration range while escalating costs and environmental impacts. We developed a multidirectional pressure-preserved coring system based on magnetic control for deep-earth environments down to 5000 m. The system integrates a magnetically controlled method and key pressure-preserved components to ensure precise self-triggering and self-sealing, and it is supported by geometric control equations for optimizing structural stability. The structure was verified and optimized through theoretical and numerical calculations to meet the design objectives. To clarify the self-triggering mechanism in complex environments, a dynamic interference model was established, verifying stability during multidirectional coring. A prototype was fabricated, and functional tests confirmed that it met its design objectives. In a 300-meter-deep inclined test well, 10 coring operations were completed with a 100% pressure-preserved success rate, confirming the accuracy of the dynamic interference model analysis. Field trials in a 1970-meter-deep inclined petroleum well, representative of complex environments, demonstrated an in-situ pressure preservation efficiency of 92.18% at 22 MPa. This system innovatively expands the application scope of pressure-preserved coring, providing technical support for efficient and sustainable deep resource exploration and mining.
Extracting typical operational scenarios is essential for making flexible dispatch decisions in a new power system. A novel deep time series aggregation scheme (DTSA) is proposed to generate typical operational scenarios from the large amount of historical operational snapshot data. Specifically, DTSA analyses the intrinsic mechanisms of switching between different scheduling operational scenarios to represent typical operational scenarios mathematically. A Gramian angular summation field-based operational scenario image encoder was designed to convert operational scenario sequences into high-dimensional spaces. This enables DTSA to fully capture the spatiotemporal characteristics of new power systems using deep feature iterative aggregation models. The encoder also facilitates the generation of typical operational scenarios that conform to historical data distributions while ensuring the integrity of grid operational snapshots. Case studies demonstrate that the proposed method extracted fine-grained dispatch schemes for the new power system and outperformed the latest high-dimensional feature-screening methods. In addition, experiments with different new energy access ratios were conducted to verify the robustness of the proposed method. DTSA enables dispatchers to master the operational experience of the power system in advance and to respond actively to dynamic changes in operational scenarios under high penetration of new energy.
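The image-encoding step named above, the Gramian angular summation field (GASF), can be sketched in a few lines of NumPy. This is the generic GASF transform, assuming min-max rescaling to [-1, 1]; the paper's exact preprocessing of operational snapshots may differ.

```python
import numpy as np

def gasf(series):
    """Encode a 1-D operational sequence as a Gramian angular summation field:
    rescale to [-1, 1], map each value to a polar angle, then form the matrix
    of cosines of pairwise angle sums. The result is a symmetric image."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # min-max rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))            # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])        # pairwise angular sums

img = gasf([0, 1, 2, 3])
print(img.shape)  # (4, 4)
```

Each snapshot sequence becomes a 2-D image, which is what lets the downstream aggregation model apply image-style deep feature extraction to grid time series.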
The rapid advancement of Industry 4.0 has revolutionized manufacturing, shifting production from centralized control to decentralized, intelligent systems. Smart factories are now expected to achieve high adaptability and resource efficiency, particularly in mass customization scenarios where production schedules must accommodate dynamic and personalized demands. To address the challenges of dynamic task allocation, uncertainty, and real-time decision-making, this paper proposes Pathfinder, a deep reinforcement learning-based scheduling framework. Pathfinder models scheduling data through three key matrices: execution time (the time required for a job to complete), completion time (the actual time at which a job is finished), and efficiency (the performance of executing a single job). By leveraging neural networks, Pathfinder extracts essential features from these matrices, enabling intelligent decision-making in dynamic production environments. Unlike traditional approaches with fixed scheduling rules, Pathfinder dynamically selects from ten diverse scheduling rules, optimizing decisions based on real-time environmental conditions. To further enhance scheduling efficiency, a specialized reward function is designed to support dynamic task allocation and real-time adjustments. This function helps Pathfinder continuously refine its scheduling strategy, improving machine utilization and minimizing job completion times. Through reinforcement learning, Pathfinder adapts to evolving production demands, ensuring robust performance in real-world applications. Experimental results demonstrate that Pathfinder outperforms traditional scheduling approaches, offering improved coordination and efficiency in smart factories. By integrating deep reinforcement learning, adaptable scheduling strategies, and an innovative reward function, Pathfinder provides an effective solution to the growing challenges of multi-robot job scheduling in mass customization environments.
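The rule-selection idea can be illustrated as follows: the agent holds a learned value per dispatching rule and applies the chosen rule to the pending job queue. The two rules, the job fields, and the Q-values below are illustrative stand-ins for Pathfinder's ten rules and its trained network, not the paper's actual components.

```python
import random

# Two stand-ins for the ten dispatching rules, written as job selectors:
RULES = {
    "SPT":  lambda jobs: min(jobs, key=lambda j: j["exec_time"]),  # shortest processing time
    "FIFO": lambda jobs: min(jobs, key=lambda j: j["arrival"]),    # first in, first out
}

def select_job(jobs, q_values, epsilon=0.1):
    """Pick a dispatching rule epsilon-greedily from per-rule Q-values
    (here a plain dict standing in for a Q-network), then apply it."""
    names = list(RULES)
    if random.random() < epsilon:
        rule = random.choice(names)          # explore
    else:
        rule = max(names, key=lambda n: q_values[n])  # exploit learned values
    return RULES[rule](jobs)

jobs = [{"exec_time": 5, "arrival": 0}, {"exec_time": 2, "arrival": 1}]
job = select_job(jobs, {"SPT": 1.0, "FIFO": 0.0}, epsilon=0.0)
print(job["exec_time"])  # 2  (SPT wins, so the shortest job is dispatched)
```

Because the action space is a small set of rules rather than raw job assignments, the agent generalizes across shop configurations while still reacting to real-time state.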
To address the problem that the bit error rate (BER) of an asymmetrically clipped optical orthogonal frequency division multiplexing (ACO-OFDM) space optical communication system is significantly affected by different turbulence intensities, a deep learning technique is applied to polar code decoding in the ACO-OFDM space optical communication system. The system realizes polar code decoding and signal demodulation without frequency conduction, with superior performance and robustness compared with a traditional decoder. Simulations under different turbulence intensities and different mapping orders show that a convolutional neural network (CNN) decoder trained under weak, medium, and strong turbulence atmospheric channels achieves a BER improvement of about two orders of magnitude (10²) over the conventional decoder at 4-quadrature amplitude modulation (4QAM), and the BERs for both 16QAM and 64QAM are intermediate relative to those of the conventional decoder.
This study proposes an automatic control system for Autonomous Underwater Vehicle (AUV) docking, utilizing a digital twin (DT) environment based on the HoloOcean platform, which integrates six-degree-of-freedom (6-DOF) motion equations and hydrodynamic coefficients to create a realistic simulation. Although conventional model-based and visual servoing approaches often struggle in dynamic underwater environments due to limited adaptability and extensive parameter tuning requirements, deep reinforcement learning (DRL) offers a promising alternative. In the positioning stage, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is employed for synchronized depth and heading control; it offers stable training, reduced overestimation bias, and superior handling of continuous control compared with other DRL methods. During the searching stage, zig-zag heading motion combined with a state-of-the-art object detection algorithm facilitates docking station localization. For the docking stage, this study proposes an innovative image-based DDPG (I-DDPG), enhanced and trained in a Unity-MATLAB simulation environment, to achieve visual target tracking. Furthermore, integrating a DT environment enables efficient and safe policy training, reduces dependence on costly real-world tests, and improves sim-to-real transfer performance. Both simulation and real-world experiments were conducted, demonstrating the effectiveness of the system in improving AUV control strategies and supporting the transition from simulation to real-world operations in underwater environments. The results highlight the scalability and robustness of the proposed system, as evidenced by the TD3 controller achieving 25% less oscillation than an adaptive fuzzy controller when reaching the target depth, thereby demonstrating superior stability, accuracy, and potential for broader and more complex autonomous underwater tasks.
Music recommendation systems are essential due to the vast amount of music available on streaming platforms, which can overwhelm users trying to find new tracks that match their preferences. These systems analyze users' emotional responses, listening habits, and personal preferences to provide personalized suggestions. A significant challenge they face is the "cold start" problem, where new users have no past interactions to guide recommendations. To improve the user experience, these systems aim to recommend music effectively even to such users by considering their listening behavior and music popularity. This paper introduces a novel music recommendation system that combines order clustering and a convolutional neural network, utilizing user comments and rankings as input. Initially, the system organizes users into clusters based on semantic similarity; their rating similarities then serve as input to the convolutional neural network, which predicts ratings for music the users have not yet reviewed. Additionally, the system analyzes user listening behavior and music popularity, and popularity also helps address cold-start users. Finally, the proposed method recommends unreviewed music based on predicted high rankings and popularity, taking each user's listening habits into account. It first selects popular unreviewed music that the model predicts to have the highest ratings for each user; among these, the most popular tracks are prioritized, defined by metrics such as frequency of listening across users, and the number of recommended tracks is aligned with each user's typical listening rate. The experimental findings demonstrate that the new method outperformed other classification techniques and prior recommendation systems, yielding mean absolute error (MAE) and root mean square error (RMSE) rates of approximately 0.0017, a hit rate of 82.45%, an average normalized discounted cumulative gain (nDCG) of 82.3%, and a prediction accuracy for new ratings of 99.388%.
Roaming in 5G networks enables seamless global mobility but also introduces significant security risks due to legacy protocol dependencies, uneven Security Edge Protection Proxy (SEPP) deployment, and the dynamic nature of inter-Public Land Mobile Network (inter-PLMN) signaling. Traditional rule-based defenses are inadequate for protecting cloud-native 5G core networks, particularly as roaming expands into enterprise and Internet of Things (IoT) domains. This work addresses these challenges by designing a scalable 5G Standalone testbed, generating the first intrusion detection dataset specifically tailored to roaming threats, and proposing a deep learning-based intrusion detection framework for cloud-native environments. Six deep learning models, including a Multilayer Perceptron (MLP), a one-dimensional Convolutional Neural Network (1D CNN), an Autoencoder (AE), a Recurrent Neural Network (RNN), a Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM), were evaluated on the dataset using both weighted and balanced metrics to account for strong class imbalance. While all models achieved over 99% accuracy, recurrent architectures such as GRU and LSTM outperformed the others in balanced accuracy and macro-level evaluation, demonstrating superior effectiveness in detecting rare but high-impact attacks. These results confirm the importance of sequence-aware Artificial Intelligence (AI) models for securing roaming scenarios, where transient and context-dependent threats are common. The proposed framework provides a foundation for intelligent, adaptive intrusion detection in 5G and offers a path toward resilient security in Beyond 5G and 6G networks.
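For intuition on why the recurrent models handle sequential signaling traffic well, a single GRU cell update can be written out in NumPy. This is the standard GRU recurrence, not the paper's trained model; the weight shapes are illustrative (zeros here, learned in practice).

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU cell update: gates decide how much of the previous hidden
    state (accumulated signaling context) to keep versus overwrite."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)            # update gate: keep vs. refresh
    r = sigmoid(Wr @ x + Ur @ h)            # reset gate: how much history to read
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state from current message
    return (1 - z) * h + z * h_tilde

# With all-zero weights the gates sit at 0.5, so the state simply halves:
h = np.array([1.0, 2.0])
x = np.array([0.3, -0.7])
W = [np.zeros((2, 2))] * 6
print(gru_step(x, h, *W))  # [0.5 1. ]
```

Because `h` carries context across inter-PLMN message sequences, a GRU-based detector can flag attacks whose individual messages look benign, which is exactly where the recurrent models beat the feed-forward baselines on balanced metrics.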
Visual question answering (VQA) is a multimodal task involving a deep understanding of the image scene and the question's meaning, and capturing the relevant correlations between both modalities to infer the appropriate answer. In this paper, we propose a VQA system intended to answer yes/no questions about real-world images in Arabic. To support a robust VQA system, we work in two directions: (1) using deep neural networks, namely ResNet-152 and Gated Recurrent Units (GRU), to semantically represent the given image and question in a fine-grained manner; and (2) studying the role of the multimodal bilinear pooling fusion technique in the trade-off between model complexity and overall model performance. Some fusion techniques can significantly increase model complexity, which seriously limits their applicability to VQA models. So far, there is no evidence of how efficient these multimodal bilinear pooling fusion techniques are for VQA systems dedicated to yes/no questions. Hence, a comparative analysis is conducted between eight bilinear pooling fusion techniques in terms of their ability to reduce model complexity and improve model performance for this class of VQA systems. Experiments indicate that these multimodal bilinear pooling fusion techniques improved the VQA model's performance, reaching a best performance of 89.25%. Further, experiments have shown that the number of answers in the developed VQA system is a critical factor that affects the effectiveness of these multimodal bilinear pooling techniques in achieving their main objective of reducing model complexity. The Multimodal Local Perception Bilinear Pooling (MLPB) technique showed the best balance between model complexity and performance for VQA systems designed to answer yes/no questions.
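One well-known member of the bilinear pooling family compared here, multimodal factorized bilinear (MFB) pooling, can be sketched in NumPy to show why such techniques keep complexity low: the full bilinear interaction is replaced by a low-rank factorization. The random projections below stand in for learned weights and all dimensions are illustrative, so this shows only the data flow, not any model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mfb_fuse(img_feat, txt_feat, out_dim=8, factor_k=4):
    """Low-rank bilinear fusion of an image vector and a question vector:
    project both modalities, take an element-wise product (the bilinear
    interaction), sum-pool over the k rank factors, then normalize."""
    U = rng.standard_normal((out_dim * factor_k, img_feat.size))  # stand-in for learned W_img
    V = rng.standard_normal((out_dim * factor_k, txt_feat.size))  # stand-in for learned W_txt
    joint = (U @ img_feat) * (V @ txt_feat)           # element-wise bilinear interaction
    pooled = joint.reshape(out_dim, factor_k).sum(1)  # sum-pool over rank-k factors
    pooled = np.sign(pooled) * np.sqrt(np.abs(pooled))      # power normalization
    return pooled / (np.linalg.norm(pooled) + 1e-12)        # l2 normalization

fused = mfb_fuse(np.ones(16), np.ones(16))
print(fused.shape)  # (8,)
```

A full bilinear map between two 16-dim inputs and an 8-dim output would need 16 × 16 × 8 parameters; the factorized form needs only 2 × (16 × 32), which is the complexity saving the comparative analysis measures.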
Objective: To systematically evaluate prediction models for postoperative deep vein thrombosis (DVT) in elderly hip fracture patients and assess their methodological quality and predictive performance. Methods: Following PRISMA guidelines, we searched eight databases (PubMed, Embase, Cochrane Library, Web of Science, CINAHL, CNKI, Wanfang, VIP) from inception to May 2025. Studies developing or validating DVT prediction models in elderly hip fracture patients were included. Two reviewers independently screened studies, extracted data, and assessed risk of bias and applicability using the PROBAST tool. Results: Eleven studies were included, all conducted in China between 2021 and 2025. Sample sizes ranged from 101 to 504 patients (total n=3,286). Models incorporated 3 to 9 predictors, with D-dimer, age, and time from injury to surgery being the most common. All 11 studies (100%) were rated as having a high risk of bias, primarily due to small sample sizes, lack of validation, and inadequate missing-data handling. Applicability concerns were low in 8 studies (72.7%). AUC values ranged from 0.648 to 0.967, with 10 studies (90.9%) reporting AUC>0.7. Meta-analysis identified time from injury to surgery (OR=4.63, 95% CI: 2.58-6.68), age (OR=1.99), D-dimer (OR=1.51), and Caprini score (OR=1.75) as significant predictors. Conclusion: Current DVT prediction models for elderly hip fracture patients demonstrate acceptable discrimination but are limited by high risk of bias and lack of external validation. Prospective, multicenter studies with rigorous validation are needed to develop clinically applicable models.
It is fundamental and useful to investigate how deep learning forecasting models (DLMs) perform compared with operational oceanography forecast systems (OFSs). However, few studies have intercompared their performances using an identical reference. In this study, three physically reasonable DLMs are implemented for forecasting the sea surface temperature (SST), sea level anomaly (SLA), and sea surface velocity in the South China Sea. The DLMs are validated against both the testing dataset and the "OceanPredict" Class 4 dataset. Results show that the DLMs' RMSEs against the latter increase by 44%, 245%, 302%, and 109% for SST, SLA, current speed, and current direction, respectively, compared with those against the former. Therefore, different references significantly influence the validation, and it is necessary to use an identical and independent reference to intercompare DLMs and OFSs. Against the Class 4 dataset, the DLMs present significantly better performance for SLA than the OFSs, and slightly better performance for the other variables. The error patterns of the DLMs and OFSs show a high degree of similarity, which is reasonable from the viewpoint of predictability and facilitates further applications of the DLMs. For extreme events, the DLMs and OFSs both present large but similar forecast errors for SLA and current speed, while the DLMs are likely to give larger errors for SST and current direction. This study provides an evaluation of the forecast skills of commonly used DLMs and an example of how to intercompare different DLMs objectively.
In deep oil reservoir development, enhanced oil recovery (EOR) techniques encounter significant challenges under high-temperature and high-salinity conditions. Traditional profile-control agents often fail to maintain stable blocking under such extreme conditions and exhibit poor resistance to high temperature and high salinity. This study develops a functionalized nanographite system (the MEGO system) with superior high-temperature dispersibility and thermo-salinity-responsive capability through polyether amine (PEA) grafting and noncovalent interactions with disodium naphthalene sulfonate (DNS) molecules. The grafted PEA and DNS provide steric hindrance and electrostatic repulsion, enhancing thermal and salinity resistance. After ten days of aggregation, the MEGO system forms stable particle aggregates (55.51-61.80 μm) that are suitable for deep reservoir migration and profile control. Both experiments and simulations reveal that particle size variations are synergistically controlled by temperature and salt ions (Na⁺, Ca²⁺, and Mg²⁺). Compared with monovalent ions, divalent ions promote nanographite aggregation more strongly through double-layer compression and bridging effects. In core displacement experiments, the MEGO system demonstrated superior performance in reservoirs with permeabilities ranging from 21.6 to 103 mD. The aggregates formed within the pore throats significantly enhanced flow resistance, expanded the swept volume, and increased the overall oil recovery to 56.01%. This research indicates that the MEGO system holds excellent potential for EOR in deep oil reservoirs.
The exponential growth of over-the-top (OTT) entertainment has fueled a surge in content consumption across diverse formats, especially in regional Indian languages. With the Indian film industry producing over 1500 films annually in more than 20 languages, personalized recommendations are essential to surface relevant content. To overcome the limitations of traditional recommender systems, such as static latent vectors, poor handling of cold-start scenarios, and the absence of uncertainty modeling, we propose a deep Collaborative Neural Generative Embedding (C-NGE) model. C-NGE dynamically learns user and item representations by integrating rating information and metadata features in a unified neural framework. It uses metadata as sampled noise and applies the reparameterization trick to better capture latent patterns and to support predictions for new users or items without retraining. We evaluate C-NGE on the Indian Regional Movies (IRM) dataset, along with MovieLens 100K and 1M. Results show that our model consistently outperforms several existing methods, and its extensibility allows incorporating additional signals such as user reviews and multimodal data to enhance recommendation quality.
Although 6G networks combined with artificial intelligence present revolutionary prospects for healthcare delivery, resource management in dense medical device networks remains a fundamental issue. Reliable communication directly affects patient outcomes in these settings; nonetheless, current resource allocation techniques struggle with complicated interference patterns and the differing service needs of AI-native healthcare systems. In dense installations where conventional approaches fail, this paper tackles the challenge of combining network efficiency with medical care priority. Thus, we offer a Dueling Deep Q-Network (DDQN)-based resource allocation approach for AI-native healthcare systems in 6G dense networks. First, we create a point-line graph coloring-based interference model to capture the unique characteristics of medical device communications; unlike traditional graph-based models, it correctly depicts the overlapping coverage areas common in hospital environments. Building on this foundation, we propose a DDQN approach to optimal resource allocation over multiple medical services that combines advantage estimation with healthcare-aware state evaluation; by separating healthcare state assessment from advantage estimation, the design allows the system to prioritize medical needs while distributing resources. Experimental findings show that the proposed DDQN outperforms state-of-the-art techniques in dense healthcare installations, with 14.6% greater network throughput and 13.7% better resource use. The solution is particularly strong at maintaining service quality under critical conditions, with 5.5% greater QoS satisfaction for emergency services and 8.2% quicker recovery from interruptions.
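The dueling aggregation at the heart of a DDQN can be sketched independently of this paper (a generic illustration, not the authors' network; in practice the state value and advantage vector come from two neural-network heads):

```python
def dueling_q_values(state_value, advantages):
    """Combine a scalar state value V(s) with per-action advantages A(s, a)
    using the mean-centred dueling aggregation:

        Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')

    Centring the advantages removes the identifiability problem between
    the V and A streams.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + a - mean_adv for a in advantages]

# Example: a healthcare-aware state value of 2.0 and advantages for
# three candidate resource blocks (illustrative numbers).
q = dueling_q_values(2.0, [1.0, 0.0, -1.0])  # mean advantage is 0.0
```

The separation lets the agent learn how valuable a state is (e.g., an emergency-service queue) independently of which resource block is marginally best in that state.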
This study introduces a Transformer-based multimodal fusion framework for simulating multiphase flow and heat transfer in carbon dioxide (CO₂)-water enhanced geothermal systems (EGS). The model integrates geological parameters, thermal gradients, and control schedules to enable fast and accurate prediction of complex reservoir dynamics. The main contributions are: (i) development of a workflow that couples physics-based reservoir simulation with a Transformer neural network architecture, (ii) design of physics-guided loss functions to enforce conservation of mass and energy, (iii) application of the surrogate model to closed-loop optimization using a differential evolution (DE) algorithm, and (iv) incorporation of economic performance metrics, such as net present value (NPV), into decision support. The proposed framework achieves a root mean square error (RMSE) of 3-5%, a mean absolute error (MAE) below 4%, and coefficients of determination greater than 0.95 across multiple prediction targets, including production rates, pressure distributions, and temperature fields. When compared with recurrent neural network (RNN) baselines such as gated recurrent units (GRU) and long short-term memory (LSTM) networks, as well as a physics-informed reduced-order model, the Transformer-based approach demonstrates superior accuracy and computational efficiency. Optimization experiments further show a 15-20% improvement in NPV, highlighting the framework's potential for real-time forecasting, optimization, and decision-making in geothermal reservoir engineering.
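The physics-guided loss in contribution (ii) follows a common pattern: a data-misfit term plus a penalty on the residual of a conservation law. A minimal sketch, with hypothetical names and a single mass-balance residual standing in for the paper's unspecified mass- and energy-conservation terms:

```python
def physics_guided_loss(pred, target, mass_in, mass_out, lam=1.0):
    """Data-misfit MSE plus a penalty on the mass-balance residual.

    `lam` weights the physics term against the data term; the residual
    (mass_in - mass_out) should vanish when mass is conserved, so any
    imbalance in the surrogate's prediction is penalised quadratically.
    """
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    residual = mass_in - mass_out
    return mse + lam * residual ** 2
```

An energy-conservation penalty would be added the same way, as a second weighted residual term.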
Deep learning-based object detection has revolutionized various fields, including agriculture. This paper presents a systematic review, based on the PRISMA 2020 approach, of object detection techniques in agriculture, exploring the evolution of different methods and applications over the past three years and highlighting the shift from conventional computer vision to deep learning-based methodologies owing to their enhanced real-time efficacy. The review emphasizes the integration of advanced models, such as You Only Look Once (YOLO) v9 and v10, EfficientDet, Transformer-based models, and hybrid frameworks that improve precision, accuracy, and scalability for crop monitoring and disease detection. The review also covers benchmark datasets and evaluation metrics. It addresses limitations, such as domain adaptation challenges, dataset heterogeneity, and occlusion, while offering insights into prospective research avenues, such as multimodal learning, explainable AI, and federated learning. Furthermore, the main aim of this paper is to serve as a thorough resource guide for scientists, researchers, and stakeholders implementing deep learning-based object detection methods for the development of intelligent, robust, and sustainable agricultural systems.
Abstract: Modern industrial environments require uninterrupted machinery operation to maintain productivity standards while ensuring safety and minimizing costs. Conventional maintenance methods, such as reactive maintenance (i.e., run to failure) or time-based preventive maintenance (i.e., scheduled servicing), prove ineffective for complex systems with many Internet of Things (IoT) devices and sensors because they fall short in detecting faults at the early stages, when detection is most crucial. This paper presents a predictive maintenance framework based on a hybrid deep learning model that integrates the capabilities of Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs). The framework combines spatial feature extraction and temporal sequence modeling to accurately classify the health state of industrial equipment into three categories: Normal, Require Maintenance, and Failed. The framework uses a modular pipeline that includes IoT-enabled data collection along with secure transmission methods to manage cloud storage and provide real-time fault classification. The FD004 subset of the NASA C-MAPSS dataset, containing multivariate sensor readings from aircraft engines, serves as the training and evaluation data for the model. Experimental results show that the LSTM-CNN model outperforms baseline models such as LSTM-SVM and LSTM-RNN, achieving an overall average accuracy of 86.66%, precision of 86.00%, recall of 86.33%, and F1-score of 86.33%. In contrast to previous LSTM-CNN-based predictive maintenance models that either provide a binary classification or rely on synthetically balanced data, our paper provides a three-class maintenance state (i.e., Normal, Require Maintenance, and Failed) along with threshold-based labeling that retains the true nature of the degradation. In addition, our work provides an IoT-to-cloud-based modular architecture for deployment. It offers Computerized Maintenance Management System (CMMS) integration, making our proposed solution not only technically sound but also practical and innovative. The solution achieves real-world industrial deployment readiness through its reliable performance and scalable system design.
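The threshold-based labeling described above maps a remaining-useful-life (RUL) estimate to the three health states. A minimal sketch (the threshold values here are illustrative assumptions, not the paper's calibrated cut-offs):

```python
def label_health_state(rul, maintain_threshold=50, failure_threshold=10):
    """Map a remaining-useful-life value (in cycles) to one of the three
    classes used in the framework. Thresholds are hypothetical defaults:
    engines at or below `failure_threshold` cycles are labelled Failed,
    those at or below `maintain_threshold` need maintenance, and the
    rest are Normal. Because real degradation data has few failures,
    this labelling preserves the natural class imbalance.
    """
    if rul <= failure_threshold:
        return "Failed"
    if rul <= maintain_threshold:
        return "Require Maintenance"
    return "Normal"
```

Applied to each engine cycle of the C-MAPSS trajectories, this yields the three-class targets without synthetic rebalancing.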
Abstract: As urbanization continues to accelerate, the challenges associated with managing transportation in metropolitan areas become increasingly complex. The surge in population density contributes to traffic congestion, impacting travel experiences and posing safety risks. Smart urban transportation management emerges as a strategic solution, conceptualized here as a multidimensional big data problem. The success of this strategy hinges on the effective collection of information from diverse, extensive, and heterogeneous data sources, necessitating the implementation of full-stack Information and Communication Technology (ICT) solutions. The main idea of the work is to investigate the current technologies of Intelligent Transportation Systems (ITS) and enhance the safety of urban transportation systems. Machine learning models, trained on historical data, can predict traffic congestion, allowing for the implementation of preventive measures. Deep learning architectures, with their ability to handle complex data representations, further refine traffic predictions, contributing to more accurate and dynamic transportation management. The background of this research underscores the challenges posed by traffic congestion in metropolitan areas and emphasizes the need for advanced technological solutions. By integrating GPS and GIS technologies with machine learning algorithms, this work focuses on the development of intelligent transportation systems that not only address current challenges but also pave the way for future advancements in urban transportation management.
Funding: Funded by the Beijing Engineering Research Center of Electric Rail Transportation.
Abstract: Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model that maximizes modularity, with the corresponding key partitioning constraints on parallel restoration taken into account. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model is set by adopting a relative deviation normalization scheme to reduce mutual interference between the reward and penalty terms, and a soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. Then, the deep Q-network method is applied to solve the partitioning MDP model and generate partitioning schemes, with two experience replay buffers employed to speed up training. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of the partitioning training.
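The modularity objective maximized by the partitioning MDP is the standard Newman measure: for each partition, the fraction of intra-partition edges minus the fraction expected under random rewiring. A small generic reference implementation (not the paper's code):

```python
def modularity(edges, community):
    """Newman modularity Q = sum_c [ e_c / m - (d_c / (2m))^2 ].

    edges:     list of undirected (u, v) pairs (the grid's branch list)
    community: dict mapping each bus/node to its partition label
    e_c is the number of intra-partition edges, d_c the total degree
    of nodes in partition c, and m the total number of edges.
    """
    m = len(edges)
    degree, intra = {}, {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
        if community[u] == community[v]:
            intra[community[u]] = intra.get(community[u], 0) + 1
    q = 0.0
    for c in set(community.values()):
        d_c = sum(d for node, d in degree.items() if community[node] == c)
        q += intra.get(c, 0) / m - (d_c / (2 * m)) ** 2
    return q
```

In the RL formulation, this quantity (suitably normalized against the restoration-constraint penalties) would drive the episode reward.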
Abstract: This systematic review comprehensively examines and compares deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Excluded were non-open-access publications, books, and non-English articles. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned over 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated on only a few benchmark datasets, raising concerns about generalizability and dataset bias, and few addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited due to lack of validation, interpretability concerns, and real-world deployment barriers.
Funding: Funding support from the National Natural Science Foundation of China (Grant Nos. 52225403, 51827901, and 52304146).
Abstract: The pressure-preserving controller is the key component of deep in situ pressure-preserving coring (IPP-Coring). With increasing drilling depth, the environmental temperature and pressure increase accordingly. However, due to the strength and sealing problems of pressure-preserving controllers, the coring pressure is generally lower than 70 MPa. Establishing a high-temperature and ultrahigh-pressure test system is therefore highly important for improving the strength and sealing performance of pressure-preserving controllers. This paper introduces a high-temperature and ultrahigh-pressure test system for deep IPP-Coring controller performance analysis. The device includes six parts: an auxiliary air source system, a pressurization system, a temperature control system, a hydraulic system, a data acquisition and electrical control system, and an ultrahigh-pressure vessel. The test system can reconstruct a 150 ℃ and 200 MPa in situ environment and can simulate and test the movement state of the corer and the stability of the pressure-preserving action trigger of the pressure-preserving controller during deep IPP-Coring. To verify the performance of this test system, saddle-shaped pressure-preserving controllers made of four different materials were subjected to pressure tests under normal-temperature and high-temperature conditions. The results showed that the ultimate pressure-bearing capability of the pressure-preserving controller varied greatly between normal-temperature and high-temperature conditions. The pressure-preserving ability and sealing performance decreased significantly at high temperature, and the controllers exhibited markedly different sealing failure characteristics due to material differences. This study is important for advancing the extraction and evaluation of deep reservoir resources.
Funding: Supported by the National Key Research and Development Program of China (No. 2023YFF0615401), the Joint Funds of the National Natural Science Foundation of China (No. U24A2087), the Research Fund of the State Key Laboratory of Geomechanics and Geotechnical Engineering, Institute of Rock and Soil Mechanics, Chinese Academy of Sciences (No. SKLGME022009), and the National Natural Science Foundation of China (No. 42477191).
Abstract: Pressure-preserved coring technologies are critical for deep-earth resource exploration but are constrained by the inability to achieve multidirectional coring, which restricts the exploration range while escalating costs and environmental impacts. We developed a multidirectional pressure-preserved coring system based on magnetic control for deep-earth environments up to 5000 m. The system integrates a magnetically controlled method and key pressure-preserved components to ensure precise self-triggering and self-sealing, supported by geometric control equations for optimizing structural stability. The structure was verified and optimized through theoretical and numerical calculations to meet the design objectives. To clarify the self-triggering mechanism in complex environments, a dynamic interference model was established, verifying stability during multidirectional coring. A prototype was fabricated, and functional tests confirmed that it met its design objectives. In a 300-meter-deep inclined test well, 10 coring operations were completed with a 100% pressure-preserved success rate, confirming the accuracy of the dynamic interference model analysis. Field trials in a 1970-meter-deep inclined petroleum well, representative of complex environments, demonstrated an in-situ pressure preservation efficiency of 92.18% at 22 MPa. This system innovatively expands the application scope of pressure-preserved coring, providing technical support for efficient and sustainable deep resource exploration and mining.
Funding: The Key R&D Project of Jilin Province, Grant/Award Number: 20230201067GX.
Abstract: Extracting typical operational scenarios is essential for making flexible decisions in the dispatch of a new power system. A novel deep time series aggregation scheme (DTSAs) is proposed to generate typical operational scenarios from the large amount of historical operational snapshot data. Specifically, DTSAs analyses the intrinsic mechanisms of switching between different scheduling operational scenarios to mathematically represent typical operational scenarios. A Gramian angular summation field-based operational scenario image encoder was designed to convert operational scenario sequences into high-dimensional spaces. This enables DTSAs to fully capture the spatiotemporal characteristics of new power systems using deep feature iterative aggregation models. The encoder also facilitates the generation of typical operational scenarios that conform to historical data distributions while ensuring the integrity of grid operational snapshots. Case studies demonstrate that the proposed method extracts fine-grained power system dispatch schemes and outperforms the latest high-dimensional feature-screening methods. In addition, experiments with different new-energy access ratios were conducted to verify the robustness of the proposed method. DTSAs enables dispatchers to master the operational experience of the power system in advance and to respond actively to dynamic changes in operational scenarios under a high access rate of new energy.
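The Gramian angular summation field (GASF) encoder referenced above rescales a time series to [-1, 1], maps each value to an angle, and forms an image of pairwise angle sums. A compact generic sketch (assumes a non-constant series; this is the textbook transform, not the paper's full encoder):

```python
import math

def gasf(series):
    """Gramian angular summation field of a 1-D series.

    1) Min-max rescale to [-1, 1].
    2) Map each value x_i to an angle phi_i = arccos(x_i).
    3) Build the image G[i][j] = cos(phi_i + phi_j).

    The resulting matrix preserves temporal ordering along its
    diagonal, which is what lets image-based deep models exploit
    temporal structure.
    """
    lo, hi = min(series), max(series)
    x = [2 * (v - lo) / (hi - lo) - 1 for v in series]
    phi = [math.acos(v) for v in x]
    return [[math.cos(a + b) for b in phi] for a in phi]
```

Each operational-scenario sequence becomes one such image before being fed to the deep feature aggregation model.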
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62372110 and the Fujian Provincial Natural Science Foundation under Grants 2023J02008 and 2024H0009.
Abstract: The rapid advancement of Industry 4.0 has revolutionized manufacturing, shifting production from centralized control to decentralized, intelligent systems. Smart factories are now expected to achieve high adaptability and resource efficiency, particularly in mass customization scenarios where production schedules must accommodate dynamic and personalized demands. To address the challenges of dynamic task allocation, uncertainty, and real-time decision-making, this paper proposes Pathfinder, a deep reinforcement learning-based scheduling framework. Pathfinder models scheduling data through three key matrices: execution time (the time required for a job to complete), completion time (the actual time at which a job is finished), and efficiency (the performance of executing a single job). By leveraging neural networks, Pathfinder extracts essential features from these matrices, enabling intelligent decision-making in dynamic production environments. Unlike traditional approaches with fixed scheduling rules, Pathfinder dynamically selects from ten diverse scheduling rules, optimizing decisions based on real-time environmental conditions. To further enhance scheduling efficiency, a specialized reward function is designed to support dynamic task allocation and real-time adjustments. This function helps Pathfinder continuously refine its scheduling strategy, improving machine utilization and minimizing job completion times. Through reinforcement learning, Pathfinder adapts to evolving production demands, ensuring robust performance in real-world applications. Experimental results demonstrate that Pathfinder outperforms traditional scheduling approaches, offering improved coordination and efficiency in smart factories. By integrating deep reinforcement learning, adaptable scheduling strategies, and an innovative reward function, Pathfinder provides an effective solution to the growing challenges of multi-robot job scheduling in mass customization environments.
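Pathfinder's rule-selection idea, choosing among fixed dispatching rules rather than emitting schedules directly, can be illustrated with two classic rules. The job fields and the two rules below are illustrative assumptions; the paper's ten rules are not enumerated in the abstract:

```python
def spt(queue):
    """Shortest-processing-time rule: run the quickest job first."""
    return min(queue, key=lambda job: job["exec_time"])

def fifo(queue):
    """First-in-first-out rule: run the earliest-arrived job first."""
    return min(queue, key=lambda job: job["arrival"])

# The framework keeps a fixed menu of rules; the DRL policy's action
# is simply an index into this menu, chosen from state features.
RULES = [spt, fifo]

def dispatch(queue, rule_index):
    """Apply the rule selected by the agent to the current job queue."""
    return RULES[rule_index](queue)
```

Because the action space is a small discrete menu, the agent can switch rules per decision point as shop-floor conditions change, which is what makes the approach adaptive.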
Funding: Supported by the National Natural Science Foundation of China (No. 12104141).
Abstract: Aiming at the problem that the bit error rate (BER) of asymmetrically clipped optical orthogonal frequency division multiplexing (ACO-OFDM) space optical communication systems is significantly affected by different turbulence intensities, deep learning is applied to polar code decoding in an ACO-OFDM space optical communication system. The system realizes polar code decoding and signal demodulation without frequency conduction, with superior performance and robustness compared with a traditional decoder. Simulations under different turbulence intensities as well as different mapping orders show that the convolutional neural network (CNN) decoder, trained under weak, medium, and strong turbulence atmospheric channels, achieves a performance improvement of about two orders of magnitude over the conventional decoder under 4-ary quadrature amplitude modulation (4QAM), and the BERs for both 16QAM and 64QAM fall between those of the conventional decoder.
Funding: Supported by the National Science and Technology Council, Taiwan [Grant NSTC 111-2628-E-006-005-MY3]; supported by the Ocean Affairs Council, Taiwan; and sponsored in part by the Higher Education Sprout Project, Ministry of Education, to the Headquarters of University Advancement at National Cheng Kung University (NCKU).
Abstract: This study proposes an automatic control system for Autonomous Underwater Vehicle (AUV) docking, utilizing a digital twin (DT) environment based on the HoloOcean platform, which integrates six-degree-of-freedom (6-DOF) motion equations and hydrodynamic coefficients to create a realistic simulation. Although conventional model-based and visual servoing approaches often struggle in dynamic underwater environments due to limited adaptability and extensive parameter tuning requirements, deep reinforcement learning (DRL) offers a promising alternative. In the positioning stage, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is employed for synchronized depth and heading control, offering stable training, reduced overestimation bias, and superior handling of continuous control compared to other DRL methods. During the searching stage, zig-zag heading motion combined with a state-of-the-art object detection algorithm facilitates docking station localization. For the docking stage, this study proposes an innovative image-based DDPG (I-DDPG), enhanced and trained in a Unity-MATLAB simulation environment, to achieve visual target tracking. Furthermore, integrating a DT environment enables efficient and safe policy training, reduces dependence on costly real-world tests, and improves sim-to-real transfer performance. Both simulation and real-world experiments were conducted, demonstrating the effectiveness of the system in improving AUV control strategies and supporting the transition from simulation to real-world operations in underwater environments. The results highlight the scalability and robustness of the proposed system, as evidenced by the TD3 controller achieving 25% less oscillation than an adaptive fuzzy controller when reaching the target depth, demonstrating superior stability, accuracy, and potential for broader and more complex autonomous underwater tasks.
基金funded by the National Nature Sciences Foundation of China with Grant No.42250410321。
Abstract: Music recommendation systems are essential due to the vast amount of music available on streaming platforms, which can overwhelm users trying to find new tracks that match their preferences. These systems analyze users' emotional responses, listening habits, and personal preferences to provide personalized suggestions. A significant challenge they face is the "cold start" problem, where new users have no past interactions to guide recommendations. To improve user experience, these systems aim to effectively recommend music even to such users by considering their listening behavior and music popularity. This paper introduces a novel music recommendation system that combines order clustering and a convolutional neural network, utilizing user comments and rankings as input. Initially, the system organizes users into clusters based on semantic similarity, and then uses their rating similarities as input for the convolutional neural network, which predicts ratings for music the users have not reviewed. Additionally, the system analyzes user music listening behaviour and music popularity; popularity also helps to address cold-start users. Finally, the proposed method recommends unreviewed music based on predicted high rankings and popularity, taking each user's music listening habits into account: it first selects popular unreviewed music that the model predicts to have the highest ratings for each user; among these, the most popular tracks are prioritized, as defined by metrics such as frequency of listening across users; and the number of recommended tracks is aligned with each user's typical listening rate. The experimental findings demonstrate that the new method outperformed other classification techniques and prior recommendation systems, yielding a mean absolute error (MAE) and root mean square error (RMSE) of approximately 0.0017, a hit rate of 82.45%, an average normalized discounted cumulative gain (nDCG) of 82.3%, and a prediction accuracy on new ratings of 99.388%.
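The MAE and RMSE figures reported above are the standard rating-prediction error measures. For reference, the generic formulas (not the paper's evaluation code):

```python
import math

def mae_rmse(actual, predicted):
    """Mean absolute error and root mean square error between the
    observed ratings and the model's predicted ratings.
    RMSE >= MAE always holds, with equality when all absolute
    errors are identical, so RMSE penalises outlier errors more.
    """
    errors = [a - p for a, p in zip(actual, predicted)]
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return mae, rmse
```

Values near 0.0017 on a typical 1-5 rating scale, as reported, mean the predicted ratings are almost indistinguishable from the held-out ones.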
Funding: Supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2024-00441484, Development of Open Roaming Technology for Private 5G Network).
Abstract: Roaming in 5G networks enables seamless global mobility but also introduces significant security risks due to legacy protocol dependencies, uneven Security Edge Protection Proxy (SEPP) deployment, and the dynamic nature of inter-Public Land Mobile Network (inter-PLMN) signaling. Traditional rule-based defenses are inadequate for protecting cloud-native 5G core networks, particularly as roaming expands into enterprise and Internet of Things (IoT) domains. This work addresses these challenges by designing a scalable 5G Standalone testbed, generating the first intrusion detection dataset specifically tailored to roaming threats, and proposing a deep learning-based intrusion detection framework for cloud-native environments. Six deep learning models, including Multilayer Perceptron (MLP), one-dimensional Convolutional Neural Network (1D CNN), Autoencoder (AE), Recurrent Neural Network (RNN), Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM), were evaluated on the dataset using both weighted and balanced metrics to account for strong class imbalance. While all models achieved over 99% accuracy, recurrent architectures such as GRU and LSTM outperformed the others in balanced accuracy and macro-level evaluation, demonstrating superior effectiveness in detecting rare but high-impact attacks. These results confirm the importance of sequence-aware Artificial Intelligence (AI) models for securing roaming scenarios, where transient and context-dependent threats are common. The proposed framework provides a foundation for intelligent, adaptive intrusion detection in 5G and offers a path toward resilient security in Beyond 5G and 6G networks.
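The gap between weighted and balanced metrics noted above is easy to see in code: plain accuracy rewards a model that ignores rare attack classes, while balanced accuracy, the mean of per-class recalls, does not. A generic illustration (not the paper's evaluation pipeline):

```python
def accuracy(y_true, y_pred):
    """Fraction of all samples classified correctly (class-weighted
    implicitly by how often each class occurs)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls, so a rare attack class counts as
    much as the dominant benign class."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# A degenerate detector that labels everything "benign" on a 9:1 set:
y_true = ["benign"] * 9 + ["attack"]
y_pred = ["benign"] * 10
```

Here accuracy is 0.9 while balanced accuracy is only 0.5, which is why the abstract's >99% accuracy figures must be read alongside the balanced scores.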
Abstract: Visual question answering (VQA) is a multimodal task, involving a deep understanding of the image scene and the question's meaning and capturing the relevant correlations between both modalities to infer the appropriate answer. In this paper, we propose a VQA system intended to answer yes/no questions about real-world images, in Arabic. To support a robust VQA system, we work in two directions: (1) using deep neural networks, namely ResNet-152 and Gated Recurrent Units (GRU), to semantically represent the given image and question in a fine-grained manner; (2) studying the role of the utilized multimodal bilinear pooling fusion technique in the trade-off between model complexity and overall model performance. Some fusion techniques can significantly increase model complexity, which seriously limits their applicability to VQA models. So far, there is no evidence of how efficient these multimodal bilinear pooling fusion techniques are for VQA systems dedicated to yes/no questions. Hence, a comparative analysis is conducted between eight bilinear pooling fusion techniques in terms of their ability to reduce model complexity and improve model performance for this class of VQA systems. Experiments indicate that these multimodal bilinear pooling fusion techniques improved the VQA model's performance, reaching a best performance of 89.25%. Further, experiments have shown that the number of answers in the developed VQA system is a critical factor that affects the effectiveness of these multimodal bilinear pooling techniques in achieving their main objective of reducing model complexity. The Multimodal Local Perception Bilinear Pooling (MLPB) technique showed the best balance between model complexity and performance for VQA systems designed to answer yes/no questions.
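The bilinear pooling family compared in this paper shares one core trick: instead of the full outer product of the image and question features (which scales as d_x x d_y), each modality is projected to a common rank-k space and fused with an elementwise product. A minimal low-rank sketch (generic; MLPB and the other seven variants add their own structure on top, and the projection matrices U, V would be learned):

```python
def low_rank_bilinear_pool(x, y, U, V):
    """Fuse two modality vectors with low-rank bilinear pooling:

        z_k = (U x)_k * (V y)_k

    U is k x len(x), V is k x len(y). The Hadamard product of the two
    k-dimensional projections approximates the full bilinear form at a
    fraction of its parameter count, which is the complexity saving
    the paper's comparison is about.
    """
    px = [sum(U[k][i] * x[i] for i in range(len(x))) for k in range(len(U))]
    py = [sum(V[k][j] * y[j] for j in range(len(y))) for k in range(len(V))]
    return [a * b for a, b in zip(px, py)]
```

The fused vector z then feeds a small classifier head that outputs the yes/no answer.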
Abstract: Objective: To systematically evaluate prediction models for postoperative deep vein thrombosis (DVT) in elderly hip fracture patients and assess their methodological quality and predictive performance. Methods: Following PRISMA guidelines, we searched eight databases (PubMed, Embase, Cochrane Library, Web of Science, CINAHL, CNKI, Wanfang, VIP) from inception to May 2025. Studies developing or validating DVT prediction models in elderly hip fracture patients were included. Two reviewers independently screened studies, extracted data, and assessed risk of bias and applicability using the PROBAST tool. Results: Eleven studies were included, all conducted in China between 2021 and 2025. Sample sizes ranged from 101 to 504 patients (total n = 3,286). Models incorporated 3 to 9 predictors, with D-dimer, age, and time from injury to surgery being the most common. All 11 studies (100%) were rated as having a high risk of bias, primarily due to small sample sizes, lack of validation, and inadequate missing data handling. Applicability concerns were low in 8 studies (72.7%). AUC values ranged from 0.648 to 0.967, with 10 studies (90.9%) reporting AUC > 0.7. Meta-analysis identified time from injury to surgery (OR = 4.63, 95% CI: 2.58-6.68), age (OR = 1.99), D-dimer (OR = 1.51), and Caprini score (OR = 1.75) as significant predictors. Conclusion: Current DVT prediction models for elderly hip fracture patients demonstrate acceptable discrimination but are limited by high risk of bias and lack of external validation. Prospective, multicenter studies with rigorous validation are needed to develop clinically applicable models.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42375062 and 42275158), the National Key Scientific and Technological Infrastructure project "Earth System Science Numerical Simulator Facility" (EarthLab), and the Natural Science Foundation of Gansu Province (Grant No. 22JR5RF1080).
Abstract: It is fundamental and useful to investigate how deep learning forecasting models (DLMs) perform compared to operational oceanography forecast systems (OFSs). However, few studies have intercompared their performances using an identical reference. In this study, three physically reasonable DLMs are implemented for forecasting the sea surface temperature (SST), sea level anomaly (SLA), and sea surface velocity in the South China Sea. The DLMs are validated against both the testing dataset and the "OceanPredict" Class 4 dataset. Results show that the DLMs' RMSEs against the latter increase by 44%, 245%, 302%, and 109% for SST, SLA, current speed, and current direction, respectively, compared to those against the former. Therefore, different references have significant influences on the validation, and it is necessary to use an identical and independent reference to intercompare the DLMs and OFSs. Against the Class 4 dataset, the DLMs present significantly better performance for SLA than the OFSs, and slightly better performance for the other variables. The error patterns of the DLMs and OFSs show a high degree of similarity, which is reasonable from the viewpoint of predictability and facilitates further applications of the DLMs. For extreme events, the DLMs and OFSs both present large but similar forecast errors for SLA and current speed, while the DLMs are likely to give larger errors for SST and current direction. This study provides an evaluation of the forecast skills of commonly used DLMs and provides an example of objectively intercomparing different DLMs.
Funding: Supported by the General Program of the National Natural Science Foundation of China (52074335), the National Key Research and Development Program of China (2022YFE0129900 and 2019YFA0708700), the Fundamental Research Funds for the Central Universities (23CX07003A), and the Special Funding Program for the Operational Expenses of National Research Institutions (SKLDOG2024-ZYRC-01).
Abstract: In deep oil reservoir development, enhanced oil recovery (EOR) techniques encounter significant challenges under high-temperature and high-salinity conditions. Traditional profile-control agents often fail to maintain stable blocking under extreme conditions and exhibit poor resistance to high temperature and high salinity. This study develops a functionalized nanographite system (the MEGO system) with superior high-temperature dispersibility and thermo-salinity-responsive capability through polyether amine (PEA) grafting and noncovalent interactions with disodium naphthalene sulfonate (DNS) molecules. The grafted PEA and DNS provide steric hindrance and electrostatic repulsion, enhancing thermal and salinity resistance. After ten days of aggregation, the MEGO system forms stable particle aggregates (55.51–61.80 μm) that are suitable for deep reservoir migration and profile control. Both experiments and simulations reveal that particle size variations are synergistically controlled by temperature and salt ions (Na⁺, Ca²⁺, and Mg²⁺). Compared with monovalent ions, divalent ions promote nanographite aggregation more strongly through double-layer compression and bridging effects. In core displacement experiments, the MEGO system demonstrated superior performance in reservoirs with permeabilities ranging from 21.6 to 103 mD. The aggregates formed within the pore throats significantly enhanced flow resistance, expanded the swept volume, and increased the overall oil recovery to 56.01%. This research indicates that the MEGO system holds excellent potential for EOR in deep oil reservoirs.
Abstract: The exponential growth of over-the-top (OTT) entertainment has fueled a surge in content consumption across diverse formats, especially in regional Indian languages. With the Indian film industry producing over 1,500 films annually in more than 20 languages, personalized recommendations are essential to surface relevant content. To overcome the limitations of traditional recommender systems, such as static latent vectors, poor handling of cold-start scenarios, and the absence of uncertainty modeling, we propose a deep Collaborative Neural Generative Embedding (C-NGE) model. C-NGE dynamically learns user and item representations by integrating rating information and metadata features in a unified neural framework. It uses metadata as sampled noise and applies the reparameterization trick to better capture latent patterns and to support predictions for new users or items without retraining. We evaluate C-NGE on the Indian Regional Movies (IRM) dataset, along with MovieLens 100K and 1M. Results show that our model consistently outperforms several existing methods, and its extensibility allows incorporating additional signals, such as user reviews and multimodal data, to enhance recommendation quality.
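The reparameterization trick mentioned above can be sketched in a few lines: the stochastic sample is rewritten as a deterministic function of the distribution parameters plus external noise, so gradients can flow through the learned parameters. The toy values below are illustrative, not the model's actual embeddings:

```python
import math
import random

def reparameterize(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1).

    The randomness lives entirely in eps, so z remains differentiable
    with respect to the learned parameters mu and log_var.
    """
    eps = rng.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

rng = random.Random(42)
# With log_var = 0 (sigma = 1), samples scatter around mu = 1.5.
samples = [reparameterize(1.5, 0.0, rng) for _ in range(20000)]
sample_mean = sum(samples) / len(samples)
```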
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62202247.
Abstract: Although 6G networks combined with artificial intelligence present revolutionary prospects for healthcare delivery, resource management in dense medical device networks remains a fundamental issue. Reliable communication directly affects patient outcomes in these settings; nonetheless, current resource allocation techniques struggle with the complicated interference patterns and diverse service needs of AI-native healthcare systems. This paper tackles the challenge of balancing network efficiency with medical care priority in dense deployments where conventional approaches fail. To this end, we offer a Dueling Deep Q-Network (DDQN)-based resource allocation approach for AI-native healthcare systems in 6G dense networks. First, we create a point-line graph coloring-based interference model to capture the unique characteristics of medical device communications; unlike traditional graph-based models, it correctly depicts the overlapping coverage areas common in hospital environments. Building on this foundation, we propose a DDQN approach that optimizes resource allocation across multiple medical services by combining advantage estimation with healthcare-aware state evaluation: by separating healthcare state assessment from advantage estimation, the design allows the system to prioritize medical needs while distributing resources. Experimental findings show that the proposed DDQN outperforms state-of-the-art techniques in dense healthcare deployments, with 14.6% greater network throughput and 13.7% better resource utilization. The solution is particularly strong in maintaining service quality under critical conditions, with 5.5% greater QoS satisfaction for emergency services and 8.2% quicker recovery from interruptions.
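The separation of state evaluation and advantage estimation in a dueling architecture reduces, at the output layer, to recombining the two streams with a mean-subtracted advantage. A minimal sketch (the state and action values are hypothetical, not from the paper):

```python
def dueling_q_values(state_value, advantages):
    """Combine the dueling streams into Q-values:
    Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a').
    Subtracting the mean advantage keeps the decomposition identifiable.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + a - mean_adv for a in advantages]

# Hypothetical healthcare state: one V(s) estimate plus per-channel
# advantages, where index 0 might represent an emergency-service block.
q = dueling_q_values(state_value=2.0, advantages=[0.5, -0.5, 0.0])
best_action = q.index(max(q))  # the greedy policy picks index 0
```

Because V(s) is shared across all actions, the network can learn how valuable a state is independently of which resource block is chosen, which is the property exploited here for healthcare-aware state assessment.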
Abstract: This study introduces a Transformer-based multimodal fusion framework for simulating multiphase flow and heat transfer in carbon dioxide (CO₂)–water enhanced geothermal systems (EGS). The model integrates geological parameters, thermal gradients, and control schedules to enable fast and accurate prediction of complex reservoir dynamics. The main contributions are: (i) development of a workflow that couples physics-based reservoir simulation with a Transformer neural network architecture, (ii) design of physics-guided loss functions to enforce conservation of mass and energy, (iii) application of the surrogate model to closed-loop optimization using a differential evolution (DE) algorithm, and (iv) incorporation of economic performance metrics, such as net present value (NPV), into decision support. The proposed framework achieves a root mean square error (RMSE) of 3–5%, a mean absolute error (MAE) below 4%, and coefficients of determination greater than 0.95 across multiple prediction targets, including production rates, pressure distributions, and temperature fields. Compared with recurrent neural network (RNN) baselines such as gated recurrent units (GRUs) and long short-term memory (LSTM) networks, as well as a physics-informed reduced-order model, the Transformer-based approach demonstrates superior accuracy and computational efficiency. Optimization experiments further show a 15–20% improvement in NPV, highlighting the framework's potential for real-time forecasting, optimization, and decision-making in geothermal reservoir engineering.
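A physics-guided loss of the kind described in contribution (ii) typically adds a penalty on a conservation residual to the ordinary data misfit. A minimal mass-balance sketch (the function, weight, and numbers are illustrative assumptions, not the paper's actual loss):

```python
def physics_guided_loss(pred, target, inflow, outflow, accumulation, lam=0.1):
    """MSE data misfit plus a penalty on the mass-balance residual.

    For a physically consistent prediction, inflow minus outflow should
    equal the accumulated mass, so the residual is driven toward zero.
    """
    data_misfit = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    residual = inflow - outflow - accumulation
    return data_misfit + lam * residual ** 2

# A prediction that matches the data and conserves mass incurs zero loss;
# violating the balance adds lam * residual^2 even with a perfect data fit.
consistent = physics_guided_loss([1.0, 2.0], [1.0, 2.0], 5.0, 3.0, 2.0)
violating  = physics_guided_loss([1.0, 2.0], [1.0, 2.0], 5.0, 3.0, 1.0)
```

In practice the residual would be evaluated on the network's predicted fields rather than on scalars, and an analogous term would be added for energy conservation.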
Abstract: Deep learning-based object detection has revolutionized various fields, including agriculture. This paper presents a systematic review, based on the PRISMA 2020 approach, of object detection techniques in agriculture, exploring the evolution of methods and applications over the past three years and highlighting the shift from conventional computer vision to deep learning-based methodologies owing to their enhanced real-time efficacy. The review emphasizes the integration of advanced models, such as You Only Look Once (YOLO) v9 and v10, EfficientDet, Transformer-based models, and hybrid frameworks, that improve precision, accuracy, and scalability for crop monitoring and disease detection. It also highlights benchmark datasets and evaluation metrics, and addresses limitations, such as domain adaptation challenges, dataset heterogeneity, and occlusion, while offering insights into prospective research avenues, including multimodal learning, explainable AI, and federated learning. The main aim of this paper is to serve as a thorough resource for scientists, researchers, and stakeholders implementing deep learning-based object detection methods in the development of intelligent, robust, and sustainable agricultural systems.