This paper addresses the performance degradation issue in a fast radio burst search pipeline based on deep learning. This issue is caused by the class imbalance of the radio frequency interference samples in the training dataset, and one solution is applied to improve the distribution of the training data by augmenting minority class samples using a deep convolutional generative adversarial network. Experimental results demonstrate that retraining the deep learning model with the newly generated dataset leads to a new fast radio burst classifier, which effectively reduces false positives caused by periodic wide-band impulsive radio frequency interference, thereby enhancing the performance of the search pipeline.
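The rebalancing step described above can be sketched in miniature. In the paper the augmenter is a trained DCGAN; here a hypothetical `augment` callback stands in for it, so only the oversampling logic itself is illustrated:

```python
import random

def rebalance(dataset, augment, seed=0):
    """Oversample minority classes until all labels are equally represented.

    `dataset` is a list of (sample, label) pairs; `augment` stands in for the
    trained DCGAN generator that would synthesize new minority-class samples.
    """
    rng = random.Random(seed)
    by_label = {}
    for x, y in dataset:
        by_label.setdefault(y, []).append(x)
    target = max(len(v) for v in by_label.values())
    balanced = list(dataset)
    for y, xs in by_label.items():
        while len(xs) < target:
            xs.append(augment(rng.choice(xs)))   # synthesize a new minority sample
            balanced.append((xs[-1], y))
    return balanced

# Toy imbalanced set: four RFI samples vs. one FRB sample.
data = [(i, "rfi") for i in range(4)] + [(100, "frb")]
balanced = rebalance(data, augment=lambda x: x + 1)
counts = {y: sum(1 for _, lbl in balanced if lbl == y) for y in ("rfi", "frb")}
print(counts)  # {'rfi': 4, 'frb': 4}
```

The class ratio after rebalancing is 1:1, which is the property the retraining step relies on.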
This study explores a novel educational model of generative AI-empowered interdisciplinary project-based learning (PBL). By analyzing the current applications of generative AI technology in information technology curricula, it elucidates its advantages and operational mechanisms in interdisciplinary PBL. Combining case studies and empirical research, the investigation proposes implementation pathways and strategies for the generative AI-enhanced interdisciplinary PBL model, detailing specific applications across three phases: project preparation, implementation, and evaluation. The research demonstrates that generative AI-enabled interdisciplinary project-based learning can effectively enhance students' learning motivation, interdisciplinary thinking capabilities, and innovative competencies, providing new conceptual frameworks and practical approaches for educational model innovation.
Network architectures assisted by Generative Artificial Intelligence (GAI) are envisioned as foundational elements of the sixth-generation (6G) communication system. To deliver ubiquitous intelligent services and meet diverse service requirements, the 6G network architecture should offer personalized services to various mobile devices. Federated learning (FL) with personalized local training, as a privacy-preserving machine learning (ML) approach, can be applied to address these challenges. In this paper, we propose a meta-learning-based personalized FL (PFL) method that improves both communication and computation efficiency by utilizing over-the-air computation. Its "pretraining-and-fine-tuning" principle makes it particularly suitable for enabling edge nodes to access personalized GAI services while preserving local privacy. Experimental results demonstrate the efficacy of the proposed algorithm, which outperforms the baselines and notably achieves enhanced communication efficiency without compromising accuracy.
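The "pretraining-and-fine-tuning" principle can be illustrated with a deliberately tiny stand-in: a shared one-parameter linear model is pretrained on pooled client data, then personalized with a few local gradient steps. The clients, slopes, and learning rate below are all hypothetical, and real PFL would exchange model updates over the air rather than raw data:

```python
import numpy as np

def sgd_steps(w, X, y, lr=0.5, steps=50):
    """Gradient descent on mean squared error for a one-parameter model y = w*x."""
    for _ in range(steps):
        grad = 2.0 * np.mean((w * X - y) * X)
        w = w - lr * grad
    return w

# Two hypothetical clients whose local tasks differ (true slopes 1.0 and 3.0).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 200)
clients = [(X, 1.0 * X), (X, 3.0 * X)]

# Pretraining: one shared model on the pooled data (least-squares slope is 2.0).
w_shared = sgd_steps(0.0,
                     np.concatenate([c[0] for c in clients]),
                     np.concatenate([c[1] for c in clients]))

# Fine-tuning: each client adapts the shared model with a few local steps.
w_personal = [sgd_steps(w_shared, Xc, yc, steps=10) for Xc, yc in clients]
print(w_shared, w_personal)
```

The shared model lands between the clients' optima, while ten local steps are enough to personalize it, which is the efficiency argument behind pretrain-then-fine-tune.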
Robot calligraphy visually reflects the motion capability of robotic manipulators. While traditional research mainly focuses on image generation and the writing of simple calligraphic strokes or characters, this article presents a generative adversarial network (GAN)-based motion learning method for robotic calligraphy synthesis (Gan2CS) that can enhance the efficiency of writing complex calligraphy words and reproducing classic calligraphy works. The key technologies in the proposed approach include: (1) adopting the GAN to learn the motion parameters from the robot writing operation; (2) converting the learnt motion data into the style font and realising the transition from static calligraphy images to dynamic writing demonstration; (3) reproducing high-precision calligraphy works by synthesising the writing motion data hierarchically. In this study, the motion trajectories of sample calligraphy images are first extracted and converted into the robot module. The robot performs the writing with motion planning, and the writing motion parameters of calligraphy strokes are learnt with GANs. The motion data of basic strokes is then synthesised based on the hierarchical process of 'stroke-radical-part-character', and the robot rewrites the synthesised characters, whose similarity with the original calligraphy characters is evaluated. Regular calligraphy characters were tested in the experiments for method validation, and the results confirmed that the robot can accomplish robotic calligraphy synthesis of writing motion data with GANs.
As energy demands continue to rise in modern society, the development of high-performance lithium-ion batteries (LIBs) has become crucial. However, traditional research methods in materials science face challenges such as lengthy timelines and complex processes. In recent years, the integration of machine learning (ML) in LIB materials, including electrolytes, solid-state electrolytes, and electrodes, has yielded remarkable achievements. This comprehensive review explores the latest applications of ML in predicting LIB material performance, covering the core principles and recent advancements in three key inverse material design strategies: high-throughput virtual screening, global optimization, and generative models. These strategies have played a pivotal role in fostering LIB material innovations. Meanwhile, the paper briefly discusses the challenges associated with applying ML to materials research and offers insights and directions for future research.
Efficiently tracking and imaging moving targets of interest is crucial across various applications, from autonomous systems to surveillance. However, persistent challenges remain in various fields, including environmental intricacies, limitations in perceptual technologies, and privacy considerations. We present a teacher-student learning model, the generative adversarial network (GAN)-guided diffractive neural network (DNN), which performs visual tracking and imaging of a moving target of interest. The GAN, as a teacher model, enables efficient acquisition of the skill of differentiating the specific target of interest in the domains of visual tracking and imaging. The DNN-based student model learns to master this differentiation skill from the GAN. The process of obtaining a GAN-guided DNN starts with capturing moving objects effectively using an event camera with high temporal resolution and low latency. The generative power of the GAN is then utilized to generate data with position-tracking capability for the moving target of interest, which subsequently serves as labels for training the DNN. The DNN learns to image the target during training while retaining the target's positional information. Our experimental demonstration highlights the efficacy of the GAN-guided DNN in visual tracking and imaging of moving targets of interest. We expect that the GAN-guided DNN can significantly enhance autonomous systems and surveillance.
The Internet of Things (IoT) is integral to modern infrastructure, enabling connectivity among a wide range of devices from home automation to industrial control systems. With the exponential increase in data generated by these interconnected devices, robust anomaly detection mechanisms are essential. Anomaly detection in this dynamic environment necessitates methods that can accurately distinguish between normal and anomalous behavior by learning intricate patterns. This paper presents a novel approach utilizing generative adversarial networks (GANs) for anomaly detection in IoT systems. However, optimizing GANs involves tuning hyper-parameters such as learning rate, batch size, and optimization algorithms, which can be challenging due to the non-convex nature of GAN loss functions. To address this, we propose a five-dimensional grey wolf optimizer (5DGWO) to optimize GAN hyper-parameters. The 5DGWO introduces two new types of wolves: gamma (γ) for improved exploitation and convergence, and theta (θ) for enhanced exploration and escaping local minima. The proposed system framework comprises four key stages: 1) preprocessing, 2) generative model training, 3) autoencoder (AE) training, and 4) predictive model training. The generative models are utilized to assist the AE training, and the final predictive models (including convolutional neural network (CNN), deep belief network (DBN), recurrent neural network (RNN), random forest (RF), and extreme gradient boosting (XGBoost)) are trained using the generated data and AE-encoded features. We evaluated the system on three benchmark datasets: NSL-KDD, UNSW-NB15, and IoT-23. Experiments conducted on these diverse IoT datasets show that our method outperforms existing anomaly detection strategies and significantly reduces false positives. The 5DGWO-GAN-CNNAE exhibits superior performance in various metrics, including accuracy, recall, precision, root mean square error (RMSE), and convergence trend. The proposed 5DGWO-GAN-CNNAE achieved the lowest RMSE values across the NSL-KDD, UNSW-NB15, and IoT-23 datasets, with values of 0.24, 1.10, and 0.09, respectively. Additionally, it attained the highest accuracy, ranging from 94% to 100%. These results suggest a promising direction for future IoT security frameworks, offering a scalable and efficient solution to safeguard against evolving cyber threats.
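For orientation, the classic grey wolf optimizer that 5DGWO extends can be sketched as below. This implements only the standard three-leader (alpha/beta/delta) update on a toy sphere function, not the authors' gamma and theta wolves, and all parameter choices are illustrative:

```python
import random

def gwo(f, dim, n_wolves=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Grey wolf optimizer: the pack encircles the three best wolves
    (alpha, beta, delta) while coefficient `a` decays linearly from 2 to 0."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)                       # best wolves lead the pack
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1.0 - t / iters)              # exploration shrinks over time
        for i in range(3, n_wolves):
            new = []
            for d in range(dim):
                step = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2.0 * rng.random() - 1.0)
                    C = 2.0 * rng.random()
                    step += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                new.append(min(hi, max(lo, step / 3.0)))
            wolves[i] = new
    return min(wolves, key=f)

# Minimize the sphere function; the global optimum is the origin.
best = gwo(lambda x: sum(v * v for v in x), dim=3)
print(best)
```

In the hyper-parameter setting of the paper, `f` would evaluate a GAN training run and the wolf positions would encode learning rate, batch size, and so on.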
Introduction: Deep learning (DL), as one of the most transformative technologies in artificial intelligence (AI), is undergoing a pivotal transition from laboratory research to industrial deployment. Advancing at an unprecedented pace, DL is transcending theoretical and application boundaries to penetrate emerging real-world scenarios such as industrial automation, urban management, and health monitoring, thereby driving a new wave of intelligent transformation. In August 2023, Goldman Sachs estimated that global AI investment will reach US$200 billion by 2025 [1]. However, the increasing complexity and dynamic nature of application scenarios expose critical challenges in traditional deep learning, including data heterogeneity, insufficient model generalization, computational resource constraints, and privacy-security trade-offs. The next generation of deep learning methodologies needs to achieve breakthroughs in multimodal fusion, lightweight design, interpretability enhancement, and cross-disciplinary collaborative optimization in order to develop more efficient, robust, and practically valuable intelligent systems.
Inferring phylogenetic trees from molecular sequences is a cornerstone of evolutionary biology. Many standard phylogenetic methods (such as maximum likelihood [ML]) rely on explicit models of sequence evolution and thus often suffer from model misspecification or inadequacy. Emerging deep learning (DL) techniques offer a powerful alternative. Deep learning employs multi-layered artificial neural networks to progressively transform input data into more abstract and complex representations. DL methods can autonomously uncover meaningful patterns from data, thereby bypassing potential biases introduced by predefined features (Franklin, 2005; Murphy, 2012). Recent efforts have aimed to apply deep neural networks (DNNs) to phylogenetics, with a growing number of applications in tree reconstruction (Suvorov et al., 2020; Zou et al., 2020; Nesterenko et al., 2022; Smith and Hahn, 2023; Wang et al., 2023), substitution model selection (Abadi et al., 2020; Burgstaller-Muehlbacher et al., 2023), and diversification rate inference (Voznica et al., 2022; Lajaaiti et al., 2023; Lambert et al., 2023). In phylogenetic tree reconstruction, PhyDL (Zou et al., 2020) and Tree_learning (Suvorov et al., 2020) are two notable DNN-based programs designed to infer unrooted quartet trees directly from alignments of four amino acid (AA) and DNA sequences, respectively.
This study systematically reviews the applications of generative artificial intelligence (GAI) in breast cancer research, focusing on its role in diagnosis and therapeutic development. While GAI has gained significant attention across various domains, its utility in breast cancer research has yet to be comprehensively reviewed. This study aims to fill that gap by synthesizing existing research into a unified document. A comprehensive search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, resulting in the retrieval of 3827 articles, of which 31 were deemed eligible for analysis. The included studies were categorized based on key criteria, such as application types, geographical distribution, contributing organizations, leading journals, publishers, and temporal trends. Keyword co-occurrence mapping and subject profiling further highlighted the major research themes in this field. The findings reveal that GAI models have been applied to improve breast cancer diagnosis, treatment planning, and outcome predictions. Geographical and network analyses showed that most contributions come from a few leading institutions, with limited global collaboration. The review also identifies key challenges in implementing GAI in clinical practice, such as data availability, ethical concerns, and model validation. Despite these challenges, the study highlights GAI's potential to enhance breast cancer research, particularly in generating synthetic data, improving diagnostic accuracy, and personalizing treatment approaches. This review serves as a valuable resource for researchers and stakeholders, providing insights into current research trends, major contributors, and collaborative networks in GAI-based breast cancer studies. By offering a holistic overview, it aims to support future research directions and encourage broader adoption of GAI technologies in healthcare. Additionally, the study emphasizes the importance of overcoming implementation barriers to fully realize GAI's potential in transforming breast cancer management.
This study addresses the pressing challenge of generating realistic strong ground motion data for simulating earthquakes, a crucial component in pre-earthquake risk assessments and post-earthquake disaster evaluations, particularly suited for regions with limited seismic data. Herein, we report a generative adversarial network (GAN) framework capable of simulating strong ground motions under various environmental conditions using only a small set of real earthquake records. The constructed GAN model generates ground motions based on continuous physical variables such as source distance, site conditions, and magnitude, effectively capturing the complexity and diversity of ground motions under different scenarios. This capability allows the proposed model to approximate real seismic data, making it applicable to a wide range of engineering purposes. Using the Shandong Pingyuan earthquake as an example, a specialized dataset was constructed based on regional real ground motion records. The response spectrum at target locations was obtained through inverse distance-weighted interpolation of actual response spectra, followed by continuous wavelet transform to derive the ground motion time histories at these locations. Through iterative parameter adjustments, the constructed GAN model learned the probability distribution of strong-motion data for this event. The trained model generated three-component ground-motion time histories with clear P-wave and S-wave characteristics, accurately reflecting the non-stationary nature of seismic records. Statistical comparisons between synthetic and real response spectra, waveform envelopes, and peak ground acceleration show a high degree of similarity, underscoring the effectiveness of the model in replicating both the statistical and physical characteristics of real ground motions. These findings validate the feasibility of GANs for generating realistic earthquake data in data-scarce regions, providing a reliable approach for enriching regional ground motion databases. Additionally, the results suggest that GAN-based networks are a powerful tool for building predictive models in seismic hazard analysis.
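The inverse distance-weighted interpolation used above to obtain target-site response spectra follows the standard form w_i = 1/d_i^p; the station coordinates and spectral amplitudes below are hypothetical:

```python
import math

def idw(points, values, target, power=2.0):
    """Inverse distance weighting: station i gets weight 1 / d_i**power,
    and an exact hit on a station simply returns that station's value."""
    weights = []
    for p, v in zip(points, values):
        d = math.dist(p, target)
        if d == 0.0:
            return v
        weights.append(1.0 / d ** power)
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Three hypothetical stations and their spectral amplitudes at one period.
stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
amps = [0.30, 0.10, 0.10]
print(idw(stations, amps, (0.2, 0.2)))  # dominated by the nearest station
```

The interpolated amplitude always lies between the station values and is pulled toward the nearest station, which is why the method is a reasonable choice for spatially smooth response spectra.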
The exponential growth of over-the-top (OTT) entertainment has fueled a surge in content consumption across diverse formats, especially in regional Indian languages. With the Indian film industry producing over 1500 films annually in more than 20 languages, personalized recommendations are essential to highlight relevant content. To overcome the limitations of traditional recommender systems, such as static latent vectors, poor handling of cold-start scenarios, and the absence of uncertainty modeling, we propose a deep Collaborative Neural Generative Embedding (C-NGE) model. C-NGE dynamically learns user and item representations by integrating rating information and metadata features in a unified neural framework. It uses metadata as sampled noise and applies the reparameterization trick to better capture latent patterns and support predictions for new users or items without retraining. We evaluate C-NGE on the Indian Regional Movies (IRM) dataset, along with MovieLens 100K and 1M. Results show that our model consistently outperforms several existing methods, and its extensibility allows for incorporating additional signals such as user reviews and multimodal data to enhance recommendation quality.
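The reparameterization trick mentioned above rewrites a Gaussian sample as z = μ + σ·ε with ε ~ N(0, 1), keeping the randomness outside the learned parameters so gradients can flow through μ and log σ². A minimal numerical check (not the C-NGE architecture itself):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Draw z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, 1),
    so the sampling noise stays outside the learned parameters mu, log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(42)
mu = np.array([0.0, 2.0])
log_var = np.array([0.0, np.log(0.25)])  # sigma = 1.0 and 0.5
z = np.stack([reparameterize(mu, log_var, rng) for _ in range(20000)])
print(z.mean(axis=0), z.std(axis=0))  # close to [0, 2] and [1, 0.5]
```

The empirical mean and standard deviation recover μ and σ, confirming the transformed samples follow the intended distribution.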
The inversion of ocean subsurface temperature and salinity (TS) is a hot topic and challenging problem in the oceanic sciences. In this study, a new method for the inversion of underwater TS in the South China Sea is proposed based on an improved generative adversarial network (GAN). The proposed model can derive the underwater TS from sea surface data (specifically, sea surface temperature and sea surface height anomalies) with an eddy-resolving horizontal resolution of (1/12)°. For comparison, a robust statistics-based model, the Modular Ocean Data Assimilation System (MODAS), is also used to invert the subsurface TS in this study. Results show that the root-mean-square errors (RMSEs) of the TS inversions from the GAN-based model are significantly smaller than those from MODAS, especially in the thermocline of the South China Sea, where the RMSE of temperature can be reduced by up to 21.7% and the subsurface salinity RMSE is smaller than 0.32. In particular, the inversion results obtained using the proposed model are more accurate in both seasonal-scale and synoptic-scale analyses. Firstly, the GAN-based model is more effective for the seasonal-scale extraction and diagnosis of the subsurface stratification, especially in the Luzon Strait and coastal shelf sea areas, where stronger nonlinearities arising from the Kuroshio intrusion or complex coastal processes dominate the ocean subsurface dynamics. Secondly, the vertical heat pump and cold suction effects in the ocean's upper layers induced by the passage of a typhoon can be reflected more reasonably in the synoptic-scale analysis with the proposed model. Furthermore, the underwater 3D structure of mesoscale eddies can be skillfully captured by the proposed AIGAN (Attention and Inception GAN), which can extract more refined eddy patterns with stronger recognition capability than the statistics-based MODAS. The present study can be extended to further explore the subsurface characteristics of the internal variability in the South China Sea.
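A relative RMSE reduction such as the reported 21.7% is computed as 100·(RMSE_baseline − RMSE_model)/RMSE_baseline. The depth profiles below are hypothetical stand-ins, so the printed reduction is illustrative only:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predicted and observed profiles."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Hypothetical thermocline temperatures (degrees C) at five depths.
obs      = [28.0, 26.5, 24.0, 20.0, 16.0]
baseline = [28.6, 25.4, 25.2, 21.3, 14.9]   # stand-in for a MODAS-like inversion
gan_inv  = [28.3, 26.1, 24.6, 20.6, 15.5]   # stand-in for the GAN-based inversion

r_base, r_gan = rmse(baseline, obs), rmse(gan_inv, obs)
reduction = 100.0 * (r_base - r_gan) / r_base
print(round(r_base, 3), round(r_gan, 3), round(reduction, 1))
```

With these toy numbers the GAN-like profile halves the error; the paper's 21.7% figure is the same statistic computed over real thermocline inversions.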
The generation of high-quality, realistic faces has emerged as a key field of research in computer vision. This paper proposes a robust approach that combines a Super-Resolution Generative Adversarial Network (SRGAN) with a Pyramid Attention Module (PAM) to enhance the quality of deep face generation. The SRGAN framework is designed to improve the resolution of generated images, addressing common challenges such as blurriness and a lack of intricate details. The Pyramid Attention Module further complements the process by focusing on multi-scale feature extraction, enabling the network to capture finer details and complex facial features more effectively. The proposed method was trained and evaluated over 100 epochs on the CelebA dataset, demonstrating consistent improvements in image quality and a marked decrease in generator and discriminator losses, reflecting the model's capacity to learn and synthesize high-quality images effectively, given adequate computational resources. Experimental outcomes demonstrate that the SRGAN model with the PAM module outperforms the alternatives, yielding an aggregate discriminator loss of 0.055 for real images, 0.043 for fake images, and a generator loss of 10.58 after training for 100 epochs. The model yielded a structural similarity index measure of 0.923, outperforming the other models considered in the current study.
In this paper, a data-driven topology optimization (TO) method is proposed for the efficient design of three-dimensional heat transfer structures. The presented method is composed of four parts. Firstly, the three-dimensional heat transfer topology optimization (HTTO) dataset, composed of both design parameters and the corresponding HTTO configurations, is established by the solid isotropic material with penalization (SIMP) method. Secondly, a high-performance surrogate model, named ResUNet-assisted generative adversarial nets (ResUNet-GAN), is developed by combining ResUNet and generative adversarial nets (GAN). Thirdly, the same-resolution (SR) ResUNet-GAN is deployed to design three-dimensional heat transfer configurations by feeding in design parameters. Finally, the finite element mesh of the optimized configuration is refined by the cross-resolution (CR) ResUNet-GAN to obtain near-optimal three-dimensional heat transfer configurations. Compared with conventional TO methods, the proposed method has two outstanding advantages: (1) the developed surrogate model establishes an end-to-end mapping from the design parameters to the three-dimensional configuration without any need for optimization iterations or finite element analysis; (2) the SR ResUNet-GAN and the CR ResUNet-GAN can be employed individually or in combination, according to the needs of the heat transfer structures. The data-driven method provides an efficient design framework for three-dimensional practical engineering problems.
Tropical cyclones (TCs) are complex and powerful weather systems, and accurately forecasting their path, structure, and intensity remains a critical focus and challenge in meteorological research. In this paper, we propose an Attention Spatio-Temporal predictive Generative Adversarial Network (AST-GAN) model for predicting the temporal and spatial distribution of TCs. The model forecasts the spatial distribution of TC wind speeds for the next 15 hours at 3-hour intervals, emphasizing the cyclone's center, high wind-speed areas, and its asymmetric structure. To effectively capture spatiotemporal feature transfer at different time steps, we employ a channel attention mechanism for feature selection, enhancing model performance and reducing parameter redundancy. We utilized High-Resolution Weather Research and Forecasting (HWRF) data to train our model, allowing it to assimilate a wide range of TC motion patterns. The model is versatile and can be applied to various complex scenarios, such as multiple TCs moving simultaneously or TCs approaching landfall. Our proposed model demonstrates superior forecasting performance, achieving a root-mean-square error (RMSE) of 0.71 m s^(-1) for overall wind speed and 2.74 m s^(-1) for maximum wind speed when benchmarked against ground truth data from HWRF. Furthermore, the model underwent optimization and independent testing using ERA5 reanalysis data, showcasing its stability and scalability. After fine-tuning on the ERA5 dataset, the model achieved an RMSE of 1.33 m s^(-1) for wind speed and 1.75 m s^(-1) for maximum wind speed. The AST-GAN model outperforms other state-of-the-art models in RMSE on both the HWRF and ERA5 datasets, maintaining its superior performance and demonstrating its effectiveness for the spatiotemporal prediction of TCs.
An intelligent diagnosis method based on self-adaptive Wasserstein dual generative adversarial networks and feature fusion is proposed to address problems such as insufficient sample size and incomplete fault feature extraction, which are commonly faced by rolling bearings and lead to low diagnostic accuracy. Initially, dual models of the Wasserstein deep convolutional generative adversarial network incorporating gradient penalty (1D-2DWDCGAN) are constructed to augment the original dataset. A self-adaptive loss threshold control training strategy is introduced, establishing a self-adaptive balancing mechanism for stable model training. Subsequently, a diagnostic model based on multidimensional feature fusion is designed, wherein complex features from various dimensions are extracted, merging the original signal waveform features, structured features, and time-frequency features into a deep composite feature representation that encompasses multiple dimensions and scales; thus, efficient and accurate small-sample fault diagnosis is facilitated. Finally, experiments on the Case Western Reserve University bearing fault dataset and on the fault simulation experimental platform dataset of this research group show that this method effectively supplements the dataset and remarkably improves diagnostic accuracy. The diagnostic accuracy after data augmentation reached 99.94% and 99.87% in the two experimental environments, respectively. In addition, a robustness analysis of the diagnostic accuracy of the proposed method under different noise backgrounds verifies its good generalization performance.
Reconfigurable Intelligent Surface (RIS) is regarded as a cutting-edge technology for the development of future wireless communication networks, offering improved spectral efficiency and reduced energy consumption. This paper proposes an architecture that combines RIS with Generalized Spatial Modulation (GSM) and then presents a Multi-Residual Deep Neural Network (MR-DNN) scheme, where the active antennas and their transmitted constellation symbols are detected by sub-DNNs in the detection block. Simulation results demonstrate that the proposed MR-DNN detection algorithm performs considerably better than the traditional Zero-Forcing (ZF) and Minimum Mean Squared Error (MMSE) detection algorithms in terms of Bit Error Rate (BER). Moreover, the MR-DNN detection algorithm has lower time complexity than the traditional detection algorithms.
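The two baseline detectors can be stated compactly: ZF applies the channel pseudo-inverse outright, while MMSE regularizes the inversion by the noise variance. This sketch uses a hypothetical fixed 4x4 channel and BPSK symbols, not the RIS-GSM setup of the paper:

```python
import numpy as np

# Hypothetical well-conditioned 4x4 MIMO channel and BPSK symbol vector.
H = np.array([[2.0, 0.3, 0.1, 0.0],
              [0.2, 1.8, 0.2, 0.1],
              [0.0, 0.4, 2.2, 0.3],
              [0.1, 0.0, 0.2, 1.9]])
x = np.array([1.0, -1.0, -1.0, 1.0])
noise = 0.05 * np.array([1.0, -1.0, 1.0, -1.0])  # fixed small perturbation
y = H @ x + noise

# Zero-Forcing: apply the channel pseudo-inverse, then slice to the nearest symbol.
x_zf = np.sign(np.linalg.pinv(H) @ y)

# MMSE: add the noise variance to the inversion, trading bias for noise robustness.
noise_var = 0.01
x_mmse = np.sign(np.linalg.inv(H.T @ H + noise_var * np.eye(4)) @ H.T @ y)

print(x_zf, x_mmse)  # both recover the transmitted symbols here
```

On an ill-conditioned channel the ZF inverse amplifies noise where MMSE does not, which is the gap the learned MR-DNN detector aims to close at lower runtime cost.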
Efficient and accurate prediction of ocean surface latent heat fluxes is essential for understanding and modeling climate dynamics. Conventional estimation methods have low resolution and lack accuracy. The transformer model, with its self-attention mechanism, effectively captures long-range dependencies. However, due to the non-linearity and uncertainty of physical processes, the transformer model encounters the problem of error accumulation, leading to a degradation of accuracy over time. To solve this problem, we combine the Data Assimilation (DA) technique with the transformer model and continuously correct the model state to bring it closer to the actual observations. In this paper, we propose a deep learning model called TransNetDA, which integrates transformer, convolutional neural network, and DA methods. By combining data-driven and DA methods for spatiotemporal prediction, TransNetDA effectively extracts multi-scale spatial features and significantly improves prediction accuracy. The experimental results indicate that the TransNetDA method surpasses traditional techniques in terms of root mean square error and R² metrics, showcasing its superior performance in predicting latent heat fluxes at the ocean surface.
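The DA correction applied at each step has, in its simplest nudging form, the analysis equation x_a = x_f + K(y − x_f), where K is a gain between 0 and 1. All numbers below are hypothetical:

```python
def assimilate(forecast, observation, gain):
    """Analysis step of a simple DA (nudging) scheme: x_a = x_f + K * (y - x_f)."""
    return forecast + gain * (observation - forecast)

truth = 100.0        # hypothetical true latent heat flux (W m^-2)
forecast = 112.0     # model forecast that has drifted high
observation = 101.5  # noisy observation of the truth
analysis = assimilate(forecast, observation, gain=0.6)
print(analysis)  # 105.7, pulled back toward the observation
```

Repeating this correction at every step is what prevents the forecast error from accumulating, which is the role DA plays inside TransNetDA.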
The rapid shift to online education has introduced significant challenges to maintaining academic integrity in remote assessments, as traditional proctoring methods fall short in preventing cheating. The increase in cheating during online exams highlights the need for efficient, adaptable detection models to uphold academic credibility. This paper presents a comprehensive analysis of various deep learning models for cheating detection in online proctoring systems, evaluating their accuracy, efficiency, and adaptability. We benchmark several advanced architectures, including EfficientNet, MobileNetV2, and ResNet variants, among others, using two specialized datasets (OEP and OP) tailored for online proctoring contexts. Our findings reveal that EfficientNetB1 and YOLOv5 achieve top performance on the OP dataset, with EfficientNetB1 attaining a peak accuracy of 94.59% and YOLOv5 reaching a mean average precision (mAP@0.5) of 98.3%. For the OEP dataset, ResNet50-CBAM, YOLOv5, and EfficientNetB0 stand out, with ResNet50-CBAM achieving an accuracy of 93.61% and EfficientNetB0 showing robust detection performance with balanced accuracy and computational efficiency. These results underscore the importance of selecting models that balance accuracy and efficiency, supporting scalable, effective cheating detection in online assessments.
Funding: Supported by the Chinese Academy of Sciences "Light of West China" Program (2022-XBQNXZ-015), the National Natural Science Foundation of China (11903071), and the Operation, Maintenance and Upgrading Fund for Astronomical Telescopes and Facility Instruments, budgeted from the Ministry of Finance of China and administered by the Chinese Academy of Sciences.
Abstract: This paper addresses a performance degradation issue in a deep learning based fast radio burst search pipeline. The issue is caused by class imbalance among the radio frequency interference samples in the training dataset, and one solution is applied to improve the distribution of the training data by augmenting minority-class samples using a deep convolutional generative adversarial network. Experimental results demonstrate that retraining the deep learning model on the newly generated dataset yields a new fast radio burst classifier, which effectively reduces false positives caused by periodic wide-band impulsive radio frequency interference, thereby enhancing the performance of the search pipeline.
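The rebalancing step described in this abstract can be sketched independently of the GAN itself: given a trained generator, synthesize just enough minority-class samples to even out the class counts. The `generate` callable and the array shapes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from collections import Counter

def balance_with_generator(X, y, generate, minority_label):
    """Augment the minority class until its count matches the majority.
    `generate(n)` stands in for a trained DCGAN generator that returns
    n synthetic minority-class samples."""
    counts = Counter(y)
    deficit = max(counts.values()) - counts[minority_label]
    if deficit <= 0:
        return X, y  # already balanced
    X_syn = generate(deficit)
    X_out = np.vstack([X, X_syn])
    y_out = np.concatenate([y, np.full(deficit, minority_label)])
    return X_out, y_out
```

In the paper's setting, `generate` would be the DCGAN generator trained on the scarce interference class; any sampler with the same interface fits this sketch.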
Abstract: This study explores a novel educational model of generative AI-empowered interdisciplinary project-based learning (PBL). By analyzing current applications of generative AI technology in information technology curricula, it elucidates its advantages and operational mechanisms in interdisciplinary PBL. Combining case studies and empirical research, the investigation proposes implementation pathways and strategies for the generative AI-enhanced interdisciplinary PBL model, detailing specific applications across three phases: project preparation, implementation, and evaluation. The research demonstrates that generative AI-enabled interdisciplinary project-based learning can effectively enhance students' learning motivation, interdisciplinary thinking capabilities, and innovative competencies, providing new conceptual frameworks and practical approaches for educational model innovation.
Funding: Supported in part by the National Key R&D Program of China under Grant 2024YFE0200700, and in part by the National Natural Science Foundation of China under Grant 62201504.
Abstract: Network architectures assisted by Generative Artificial Intelligence (GAI) are envisioned as foundational elements of the sixth-generation (6G) communication system. To deliver ubiquitous intelligent services and meet diverse service requirements, the 6G network architecture should offer personalized services to various mobile devices. Federated learning (FL) with personalized local training, as a privacy-preserving machine learning (ML) approach, can be applied to address these challenges. In this paper, we propose a meta-learning-based personalized FL (PFL) method that improves both communication and computation efficiency by utilizing over-the-air computations. Its "pretraining-and-fine-tuning" principle makes it particularly suitable for enabling edge nodes to access personalized GAI services while preserving local privacy. Experimental results demonstrate the superior performance and efficacy of the proposed algorithm, and notably indicate enhanced communication efficiency without compromising accuracy.
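The "pretraining-and-fine-tuning" principle can be illustrated with a toy federated setup: a shared model is pretrained on averaged client gradients, then each client personalizes it with a few local steps. The linear model, learning rates, and synthetic data below are illustrative assumptions; the paper's method additionally uses meta-learning and over-the-air aggregation, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two clients with different underlying relationships (data heterogeneity).
clients = []
for w_true in (np.array([1.0, 2.0]), np.array([3.0, -1.0])):
    X = rng.normal(size=(200, 2))
    clients.append((X, X @ w_true))

def grad(w, X, y):
    # Gradient of mean squared error for a linear model.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Pretraining: average client gradients (FedAvg-style) into a global model.
w_global = np.zeros(2)
for _ in range(300):
    g = np.mean([grad(w_global, X, y) for X, y in clients], axis=0)
    w_global -= 0.05 * g

# Personalization: each client takes a few local steps from the global model.
def personalize(w, X, y, steps=25, lr=0.05):
    w = w.copy()
    for _ in range(steps):
        w -= lr * grad(w, X, y)
    return w
```

Because the clients' data disagree, the global model settles on a compromise; a handful of local fine-tuning steps then recovers a per-client model with lower local error.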
Funding: National Key Research and Development Program of China, Grant/Award Numbers: 2021YFB2501301, 2019YFB1600704; The Science and Technology Development Fund, Grant/Award Numbers: 0068/2020/AGJ, SKL-IOTSC(UM)-2021-2023; GDST, Grant/Award Numbers: 2020B1212030003, MYRG2022-00192-FST.
Abstract: Robot calligraphy visually reflects the motion capability of robotic manipulators. While traditional research mainly focuses on image generation and the writing of simple calligraphic strokes or characters, this article presents a generative adversarial network (GAN)-based motion learning method for robotic calligraphy synthesis (Gan2CS) that can enhance the efficiency of writing complex calligraphy words and reproducing classic calligraphy works. The key technologies in the proposed approach include: (1) adopting the GAN to learn the motion parameters from the robot writing operation; (2) converting the learnt motion data into the style font and realising the transition from static calligraphy images to dynamic writing demonstration; (3) reproducing high-precision calligraphy works by synthesising the writing motion data hierarchically. In this study, the motion trajectories of sample calligraphy images are first extracted and converted into the robot module. The robot performs the writing with motion planning, and the writing motion parameters of calligraphy strokes are learnt with GANs. The motion data of basic strokes is then synthesised following the hierarchical process of 'stroke-radical-part-character', and the robot re-writes the synthesised characters, whose similarity with the original calligraphy characters is evaluated. Regular calligraphy characters were tested in the experiments, and the results validate that the robot can realise robotic calligraphy synthesis of writing motion data with GANs.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 22225801, W2441009, and 22408228).
Abstract: As energy demands continue to rise in modern society, the development of high-performance lithium-ion batteries (LIBs) has become crucial. However, traditional materials-science research methods face challenges such as lengthy timelines and complex processes. In recent years, the integration of machine learning (ML) into LIB materials research, including electrolytes, solid-state electrolytes, and electrodes, has yielded remarkable achievements. This comprehensive review explores the latest applications of ML in predicting LIB material performance, covering the core principles and recent advancements of three key inverse material design strategies: high-throughput virtual screening, global optimization, and generative models. These strategies have played a pivotal role in fostering LIB material innovations. The paper also briefly discusses the challenges associated with applying ML to materials research and offers insights and directions for future research.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62422509 and 62405188), the Shanghai Natural Science Foundation (Grant No. 23ZR1443700), the Shuguang Program of the Shanghai Education Development Foundation and Shanghai Municipal Education Commission (Grant No. 23SG41), the Young Elite Scientist Sponsorship Program by CAST (Grant No. 20220042), the Science and Technology Commission of Shanghai Municipality (Grant No. 21DZ1100500), the Shanghai Municipal Science and Technology Major Project, and the Shanghai Frontiers Science Center Program (2021-2025 No. 20).
Abstract: Efficiently tracking and imaging moving targets of interest is crucial across various applications, from autonomous systems to surveillance. However, persistent challenges remain in various fields, including environmental intricacies, limitations in perceptual technologies, and privacy considerations. We present a teacher-student learning model, the generative adversarial network (GAN)-guided diffractive neural network (DNN), which performs visual tracking and imaging of a moving target of interest. The GAN, as a teacher model, enables efficient acquisition of the skill to differentiate the specific target of interest in visual tracking and imaging; the DNN-based student model learns this differentiation skill from the GAN. The process of obtaining a GAN-guided DNN starts with capturing moving objects effectively using an event camera with high temporal resolution and low latency. The generative power of the GAN is then utilized to generate data with position-tracking capability for the moving target of interest, which subsequently serves as labels for training the DNN. The DNN learns to image the target during training while retaining the target's positional information. Our experimental demonstration highlights the efficacy of the GAN-guided DNN in visual tracking and imaging of moving targets of interest. We expect the GAN-guided DNN to significantly enhance autonomous systems and surveillance.
Funding: The work described in this paper has been developed within the project PRESECREL (PID2021-124502OB-C43).
Abstract: The Internet of Things (IoT) is integral to modern infrastructure, enabling connectivity among a wide range of devices, from home automation to industrial control systems. With the exponential increase in data generated by these interconnected devices, robust anomaly detection mechanisms are essential. Anomaly detection in this dynamic environment requires methods that can accurately distinguish between normal and anomalous behavior by learning intricate patterns. This paper presents a novel approach utilizing generative adversarial networks (GANs) for anomaly detection in IoT systems. However, optimizing GANs involves tuning hyper-parameters such as learning rate, batch size, and optimization algorithm, which is challenging due to the non-convex nature of GAN loss functions. To address this, we propose a five-dimensional gray wolf optimizer (5DGWO) to optimize GAN hyper-parameters. The 5DGWO introduces two new types of wolves: gamma (γ) for improved exploitation and convergence, and theta (θ) for enhanced exploration and escaping local minima. The proposed system framework comprises four key stages: 1) preprocessing, 2) generative model training, 3) autoencoder (AE) training, and 4) predictive model training. The generative models are utilized to assist the AE training, and the final predictive models (including convolutional neural network (CNN), deep belief network (DBN), recurrent neural network (RNN), random forest (RF), and extreme gradient boosting (XGBoost)) are trained using the generated data and AE-encoded features. We evaluated the system on three benchmark datasets: NSL-KDD, UNSW-NB15, and IoT-23. Experiments conducted on these diverse IoT datasets show that our method outperforms existing anomaly detection strategies and significantly reduces false positives. The 5DGWO-GAN-CNNAE exhibits superior performance on various metrics, including accuracy, recall, precision, root mean square error (RMSE), and convergence trend. The proposed 5DGWO-GAN-CNNAE achieved the lowest RMSE values across the NSL-KDD, UNSW-NB15, and IoT-23 datasets, with values of 0.24, 1.10, and 0.09, respectively. Additionally, it attained the highest accuracy, ranging from 94% to 100%. These results suggest a promising direction for future IoT security frameworks, offering a scalable and efficient solution to safeguard against evolving cyber threats.
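For readers unfamiliar with GWO, the baseline algorithm that 5DGWO extends can be sketched as follows: a pack of candidate solutions is driven toward the three best wolves (alpha, beta, delta). The gamma and theta wolves introduced by the paper are not reproduced here, and the sphere objective is only a stand-in for an actual GAN hyper-parameter search.

```python
import numpy as np

def gwo(f, dim, lo, hi, n_wolves=30, iters=200, seed=0):
    """Baseline grey wolf optimizer (alpha/beta/delta leaders only)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    fit = np.array([f(x) for x in X])
    i = int(np.argmin(fit))
    best_x, best_f = X[i].copy(), float(fit[i])
    for it in range(iters):
        leaders = X[np.argsort(fit)[:3]]       # alpha, beta, delta
        a = 2.0 * (1.0 - it / iters)           # decreases linearly 2 -> 0
        moved = np.zeros_like(X)
        for L in leaders:
            A = 2.0 * a * rng.random(X.shape) - a  # exploration vs. exploitation
            C = 2.0 * rng.random(X.shape)
            D = np.abs(C * L - X)                  # distance to the leader
            moved += L - A * D
        X = np.clip(moved / 3.0, lo, hi)           # average pull of the leaders
        fit = np.array([f(x) for x in X])
        i = int(np.argmin(fit))
        if fit[i] < best_f:                        # keep the best-so-far solution
            best_x, best_f = X[i].copy(), float(fit[i])
    return best_x, best_f
```

In a hyper-parameter search, `f` would train a small GAN with the candidate settings and return a validation loss; here a quadratic bowl suffices to show convergence.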
Funding: Supported in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2024A1515012485; in part by the Shenzhen Fundamental Research Program under Grant JCYJ20220810112354002; in part by the Shenzhen Science and Technology Program under Grant KJZD20230923114111021; in part by the Fund for Academic Innovation Teams and Research Platform of South-Central Minzu University under Grant XTZ24003 and Grant PTZ24001; in part by the Knowledge Innovation Program of Wuhan-Basic Research through Project 2023010201010151; in part by the Research Start-up Funds of South-Central Minzu University under Grant YZZ18006; and in part by the Spring Sunshine Program of the Ministry of Education of the People's Republic of China under Grant HZKY20220331.
Abstract: Introduction. Deep learning (DL), as one of the most transformative technologies in artificial intelligence (AI), is undergoing a pivotal transition from laboratory research to industrial deployment. Advancing at an unprecedented pace, DL is transcending theoretical and application boundaries to penetrate emerging real-world scenarios such as industrial automation, urban management, and health monitoring, thereby driving a new wave of intelligent transformation. In August 2023, Goldman Sachs estimated that global AI investment will reach US$200 billion by 2025 [1]. However, the increasing complexity and dynamic nature of application scenarios expose critical challenges in traditional deep learning, including data heterogeneity, insufficient model generalization, computational resource constraints, and privacy-security trade-offs. The next generation of deep learning methodologies needs to achieve breakthroughs in multimodal fusion, lightweight design, interpretability enhancement, and cross-disciplinary collaborative optimization in order to develop more efficient, robust, and practically valuable intelligent systems.
Funding: Supported by the National Key R&D Program of China (2022YFD1401600) and the National Science Foundation for Distinguished Young Scholars of Zhejiang Province, China (LR23C140001), and supported by the Key Area Research and Development Program of Guangdong Province, China (2018B020205003 and 2020B0202090001).
Abstract: Inferring phylogenetic trees from molecular sequences is a cornerstone of evolutionary biology. Many standard phylogenetic methods (such as maximum likelihood [ML]) rely on explicit models of sequence evolution and thus often suffer from model misspecification or inadequacy. Emerging deep learning (DL) techniques offer a powerful alternative. Deep learning employs multi-layered artificial neural networks to progressively transform input data into more abstract and complex representations. DL methods can autonomously uncover meaningful patterns from data, thereby bypassing potential biases introduced by predefined features (Franklin, 2005; Murphy, 2012). Recent efforts have aimed to apply deep neural networks (DNNs) to phylogenetics, with a growing number of applications in tree reconstruction (Suvorov et al., 2020; Zou et al., 2020; Nesterenko et al., 2022; Smith and Hahn, 2023; Wang et al., 2023), substitution model selection (Abadi et al., 2020; Burgstaller-Muehlbacher et al., 2023), and diversification rate inference (Voznica et al., 2022; Lajaaiti et al., 2023; Lambert et al., 2023). In phylogenetic tree reconstruction, PhyDL (Zou et al., 2020) and Tree_learning (Suvorov et al., 2020) are two notable DNN-based programs designed to infer unrooted quartet trees directly from alignments of four amino acid (AA) and DNA sequences, respectively.
Funding: Financial support from the Fundamental Research Grant Scheme (FRGS) under grant number FRGS/1/2024/ICT02/TARUMT/02/1 from the Ministry of Higher Education Malaysia, and funded in part by an internal grant from the Tunku Abdul Rahman University of Management and Technology (TAR UMT) with grant number UC/I/G2024-00129.
Abstract: This study systematically reviews the applications of generative artificial intelligence (GAI) in breast cancer research, focusing on its role in diagnosis and therapeutic development. While GAI has gained significant attention across various domains, its utility in breast cancer research has yet to be comprehensively reviewed. This study aims to fill that gap by synthesizing existing research into a unified document. A comprehensive search was conducted following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, resulting in the retrieval of 3827 articles, of which 31 were deemed eligible for analysis. The included studies were categorized based on key criteria, such as application types, geographical distribution, contributing organizations, leading journals, publishers, and temporal trends. Keyword co-occurrence mapping and subject profiling further highlighted the major research themes in this field. The findings reveal that GAI models have been applied to improve breast cancer diagnosis, treatment planning, and outcome prediction. Geographical and network analyses showed that most contributions come from a few leading institutions, with limited global collaboration. The review also identifies key challenges in implementing GAI in clinical practice, such as data availability, ethical concerns, and model validation. Despite these challenges, the study highlights GAI's potential to enhance breast cancer research, particularly in generating synthetic data, improving diagnostic accuracy, and personalizing treatment approaches. This review serves as a valuable resource for researchers and stakeholders, providing insights into current research trends, major contributors, and collaborative networks in GAI-based breast cancer studies. By offering a holistic overview, it aims to support future research directions and encourage broader adoption of GAI technologies in healthcare. Additionally, the study emphasizes the importance of overcoming implementation barriers to fully realize GAI's potential in transforming breast cancer management.
Funding: Funded by the National Key Research and Development Program (2022YFC3003502).
Abstract: This study addresses the pressing challenge of generating realistic strong ground motion data for simulating earthquakes, a crucial component in pre-earthquake risk assessments and post-earthquake disaster evaluations, particularly suited for regions with limited seismic data. Herein, we report a generative adversarial network (GAN) framework capable of simulating strong ground motions under various environmental conditions using only a small set of real earthquake records. The constructed GAN model generates ground motions based on continuous physical variables such as source distance, site conditions, and magnitude, effectively capturing the complexity and diversity of ground motions under different scenarios. This capability allows the proposed model to approximate real seismic data, making it applicable to a wide range of engineering purposes. Using the Shandong Pingyuan earthquake as an example, a specialized dataset was constructed based on regional real ground motion records. The response spectrum at target locations was obtained through inverse distance-weighted interpolation of actual response spectra, followed by continuous wavelet transform to derive the ground motion time histories at these locations. Through iterative parameter adjustments, the constructed GAN model learned the probability distribution of strong-motion data for this event. The trained model generated three-component ground-motion time histories with clear P-wave and S-wave characteristics, accurately reflecting the non-stationary nature of seismic records. Statistical comparisons between synthetic and real response spectra, waveform envelopes, and peak ground acceleration show a high degree of similarity, underscoring the effectiveness of the model in replicating both the statistical and physical characteristics of real ground motions. These findings validate the feasibility of GANs for generating realistic earthquake data in data-scarce regions, providing a reliable approach for enriching regional ground motion databases. Additionally, the results suggest that GAN-based networks are a powerful tool for building predictive models in seismic hazard analysis.
Abstract: The exponential growth of over-the-top (OTT) entertainment has fueled a surge in content consumption across diverse formats, especially in regional Indian languages. With the Indian film industry producing over 1500 films annually in more than 20 languages, personalized recommendations are essential to highlight relevant content. To overcome the limitations of traditional recommender systems, such as static latent vectors, poor handling of cold-start scenarios, and the absence of uncertainty modeling, we propose a deep Collaborative Neural Generative Embedding (C-NGE) model. C-NGE dynamically learns user and item representations by integrating rating information and metadata features in a unified neural framework. It uses metadata as sampled noise and applies the reparameterization trick to better capture latent patterns and to support predictions for new users or items without retraining. We evaluate C-NGE on the Indian Regional Movies (IRM) dataset, along with MovieLens 100K and 1M. Results show that our model consistently outperforms several existing methods, and its extensibility allows incorporating additional signals such as user reviews and multimodal data to enhance recommendation quality.
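The reparameterization trick mentioned above moves the randomness into an auxiliary noise variable so that gradients can flow through the distribution parameters. A minimal numpy sketch (the embedding dimensions and noise source here are assumptions, not C-NGE's actual architecture):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, 1).
    Because mu and log_var enter only through deterministic arithmetic,
    gradients with respect to them pass straight through the sample."""
    eps = rng.normal(size=np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps
```

In a C-NGE-style model this would be applied to metadata-conditioned user/item embeddings inside an autodiff framework; numpy is used here only to show the arithmetic.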
Funding: Supported by the National Research and Development Program of China (Grant No. 2021YFC2803003) and the National Natural Science Foundation of China (Grant No. 42375143).
Abstract: The inversion of ocean subsurface temperature and salinity (TS) is a hot topic and challenging problem in the ocean sciences. In this study, a new method for the inversion of underwater TS in the South China Sea is proposed based on an improved generative adversarial network (GAN). The proposed model can derive the underwater TS from sea surface data (specifically, sea surface temperature and sea surface height anomalies) with an eddy-resolving horizontal resolution of (1/12)°. For comparison, a robust statistics-based model, the Modular Ocean Data Assimilation System (MODAS), is also used to invert the subsurface TS in this study. Results show that the root-mean-square errors (RMSEs) of the TS inversions from the GAN-based model are significantly smaller than those from MODAS, especially in the thermocline of the South China Sea, where the RMSE of temperature can be reduced by up to 21.7% and the subsurface salinity RMSE is smaller than 0.32. In particular, the inversion results obtained using the proposed model are more accurate in both the seasonal-scale and the synoptic-scale analyses. Firstly, the GAN-based model is more effective for the seasonal-scale extraction and diagnosis of the subsurface stratification, especially in the Luzon Strait and coastal shelf sea areas, in which stronger nonlinearities arising from the Kuroshio intrusion or complex coastal processes dominate the ocean subsurface dynamics. Secondly, the vertical heat pump and cold suction effects in the ocean's upper layers induced by the passage of a typhoon can be reflected more reasonably in the synoptic-scale analysis with the proposed model. Furthermore, the underwater 3D structure of mesoscale eddies can be skillfully captured by AIGAN (Attention and Inception GAN), which extracts more refined eddy patterns with stronger recognition capability compared with the statistics-based MODAS. The present study can be extended to further explore the subsurface characteristics of internal variability in the South China Sea.
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2018R1A5A7059549).
Abstract: The generation of high-quality, realistic faces has emerged as a key field of research in computer vision. This paper proposes a robust approach that combines a Super-Resolution Generative Adversarial Network (SRGAN) with a Pyramid Attention Module (PAM) to enhance the quality of deep face generation. The SRGAN framework is designed to improve the resolution of generated images, addressing common challenges such as blurriness and a lack of intricate details. The Pyramid Attention Module complements the process by focusing on multi-scale feature extraction, enabling the network to capture finer details and complex facial features more effectively. The proposed method was trained and evaluated over 100 epochs on the CelebA dataset, demonstrating consistent improvements in image quality and a marked decrease in generator and discriminator losses, reflecting the model's capacity to learn and synthesize high-quality images effectively given adequate computational resources. Experimental outcomes demonstrate that the SRGAN model with the PAM module outperforms the other models considered in this study, yielding an aggregate discriminator loss of 0.055 for real images and 0.043 for fake images, a generator loss of 10.58 after training for 100 epochs, and a structural similarity index measure of 0.923.
Funding: Supported by the National Natural Science Foundation of China (12472113, 11872080) and the Natural Science Foundation of Beijing, China (3192005).
Abstract: In this paper, a data-driven topology optimization (TO) method is proposed for the efficient design of three-dimensional heat transfer structures. The presented method is composed of four parts. Firstly, a three-dimensional heat transfer topology optimization (HTTO) dataset, composed of design parameters and the corresponding HTTO configurations, is established by the solid isotropic material with penalization (SIMP) method. Secondly, a high-performance surrogate model, named ResUNet-assisted generative adversarial nets (ResUNet-GAN), is developed by combining ResUNet and generative adversarial nets (GAN). Thirdly, the same-resolution (SR) ResUNet-GAN is deployed to design three-dimensional heat transfer configurations by feeding in design parameters. Finally, the finite element mesh of the optimized configuration is refined by the cross-resolution (CR) ResUNet-GAN to obtain near-optimal three-dimensional heat transfer configurations. Compared with conventional TO methods, the proposed method has two outstanding advantages: (1) the developed surrogate model establishes an end-to-end mapping from the design parameters to the three-dimensional configuration without any need for optimization iterations or finite element analysis; (2) the SR ResUNet-GAN and the CR ResUNet-GAN can be employed individually or in combination, according to the needs of the heat transfer structures. The data-driven method provides an efficient design framework for three-dimensional practical engineering problems.
Funding: Supported by the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) (No. SML2021SP201), the National Natural Science Foundation of China (Grant Nos. 42306200 and 42306216), the National Key Research and Development Program of China (Grant No. 2023YFC3008100), the Innovation Group Project of the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) (Grant No. 311021004), and the Oceanic Interdisciplinary Program of Shanghai Jiao Tong University (Project No. SL2021ZD203).
Abstract: Tropical cyclones (TCs) are complex and powerful weather systems, and accurately forecasting their path, structure, and intensity remains a critical focus and challenge in meteorological research. In this paper, we propose an Attention Spatio-Temporal predictive Generative Adversarial Network (AST-GAN) model for predicting the temporal and spatial distribution of TCs. The model forecasts the spatial distribution of TC wind speeds for the next 15 hours at 3-hour intervals, emphasizing the cyclone's center, high wind-speed areas, and its asymmetric structure. To effectively capture spatiotemporal feature transfer at different time steps, we employ a channel attention mechanism for feature selection, enhancing model performance and reducing parameter redundancy. We utilized High-Resolution Weather Research and Forecasting (HWRF) data to train our model, allowing it to assimilate a wide range of TC motion patterns. The model is versatile and can be applied to various complex scenarios, such as multiple TCs moving simultaneously or TCs approaching landfall. Our proposed model demonstrates superior forecasting performance, achieving a root-mean-square error (RMSE) of 0.71 m s^(-1) for overall wind speed and 2.74 m s^(-1) for maximum wind speed when benchmarked against ground truth data from HWRF. Furthermore, the model underwent optimization and independent testing using ERA5 reanalysis data, showcasing its stability and scalability. After fine-tuning on the ERA5 dataset, the model achieved an RMSE of 1.33 m s^(-1) for wind speed and 1.75 m s^(-1) for maximum wind speed. The AST-GAN model outperforms other state-of-the-art models in RMSE on both the HWRF and ERA5 datasets, maintaining its superior performance and demonstrating its effectiveness for the spatiotemporal prediction of TCs.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12272259 and 52005148).
Abstract: An intelligent diagnosis method based on self-adaptive Wasserstein dual generative adversarial networks and feature fusion is proposed to address problems such as insufficient sample size and incomplete fault feature extraction, which are commonly faced by rolling bearings and lead to low diagnostic accuracy. Initially, dual models of the Wasserstein deep convolutional generative adversarial network incorporating gradient penalty (1D-2DWDCGAN) are constructed to augment the original dataset. A self-adaptive loss-threshold control training strategy is introduced, establishing a self-adaptive balancing mechanism for stable model training. Subsequently, a diagnostic model based on multidimensional feature fusion is designed, wherein complex features from various dimensions are extracted, merging the original signal waveform features, structured features, and time-frequency features into a deep composite feature representation spanning multiple dimensions and scales; thus, efficient and accurate small-sample fault diagnosis is facilitated. Finally, experiments on the bearing fault dataset of Case Western Reserve University and on the fault simulation experimental platform dataset of this research group show that this method effectively supplements the dataset and remarkably improves diagnostic accuracy. The diagnostic accuracy after data augmentation reached 99.94% and 99.87% in the two experimental environments, respectively. In addition, robustness analysis of the diagnostic accuracy under different noise backgrounds verifies the method's good generalization performance.
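For reference, the gradient-penalty objective underlying a WDCGAN-GP critic is the standard WGAN-GP loss of Gulrajani et al.; the paper's self-adaptive threshold strategy modifies the training schedule, not this objective:

$$
L_D \;=\; \mathbb{E}_{\tilde{x}\sim P_g}\!\left[D(\tilde{x})\right]
\;-\; \mathbb{E}_{x\sim P_r}\!\left[D(x)\right]
\;+\; \lambda\, \mathbb{E}_{\hat{x}\sim P_{\hat{x}}}\!\left[\left(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\right)^{2}\right]
$$

where $P_r$ and $P_g$ are the real and generated distributions, $\hat{x}$ is sampled uniformly along straight lines between real and generated samples, and $\lambda$ weights the penalty that keeps the critic approximately 1-Lipschitz.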
Funding: Supported in part by the Shenzhen Basic Research Program under Grants JCYJ20220531103008018, 20231120142345001, and 20231127144045001; the Guangdong Basic Research Program under Grant 2024ZDZX1016; and the Natural Science Foundation of China under Grant U20A20156.
Abstract: Reconfigurable Intelligent Surface (RIS) is regarded as a cutting-edge technology for the development of future wireless communication networks with improved spectral efficiency and reduced energy consumption. This paper proposes an architecture combining RIS with Generalized Spatial Modulation (GSM) and then presents a Multi-Residual Deep Neural Network (MR-DNN) scheme, where the active antennas and their transmitted constellation symbols are detected by sub-DNNs in the detection block. Simulation results demonstrate that the proposed MR-DNN detection algorithm performs considerably better than the traditional Zero-Forcing (ZF) and Minimum Mean Squared Error (MMSE) detection algorithms in terms of Bit Error Rate (BER). Moreover, the MR-DNN detection algorithm has lower time complexity than the traditional detection algorithms.
Funding: The National Natural Science Foundation of China under contract Nos. 42176011 and 61931025, and the Fundamental Research Funds for the Central Universities of China under contract No. 24CX03001A.
Abstract: Efficient and accurate prediction of ocean surface latent heat fluxes is essential for understanding and modeling climate dynamics. Conventional estimation methods have low resolution and lack accuracy. The transformer model, with its self-attention mechanism, effectively captures long-range dependencies. However, due to the non-linearity and uncertainty of physical processes, the transformer model encounters the problem of error accumulation, leading to a degradation of accuracy over time. To solve this problem, we combine the Data Assimilation (DA) technique with the transformer model and continuously modify the model state to bring it closer to the actual observations. In this paper, we propose a deep learning model called TransNetDA, which integrates transformer, convolutional neural network, and DA methods. By combining data-driven and DA methods for spatiotemporal prediction, TransNetDA effectively extracts multi-scale spatial features and significantly improves prediction accuracy. The experimental results indicate that the TransNetDA method surpasses traditional techniques in terms of root mean square error and R² metrics, showcasing its superior performance in predicting latent heat fluxes at the ocean surface.
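The assimilation idea of "continuously modifying the model state toward observations" can be illustrated with a scalar nudging update. The constant gain, toy dynamics, and noise level below are assumptions for illustration only; TransNetDA applies the correction to the state of a learned spatiotemporal model rather than a scalar.

```python
import numpy as np

def assimilate(forecast, observation, gain=0.4):
    """Pull the forecast toward the observation; `gain` plays the role
    of a fixed, scalar Kalman gain."""
    return forecast + gain * (observation - forecast)

rng = np.random.default_rng(1)
x_true, x_free, x_da = 1.0, 1.0, 1.0
err_free, err_da = [], []
for _ in range(50):
    x_true = 0.9 * x_true + 0.5   # "true" dynamics
    x_free = 0.8 * x_free + 0.5   # imperfect model, free-running forecast
    x_da = 0.8 * x_da + 0.5       # same imperfect model ...
    obs = x_true + rng.normal(scale=0.05)
    x_da = assimilate(x_da, obs)  # ... corrected toward noisy observations
    err_free.append(x_free - x_true)
    err_da.append(x_da - x_true)

rmse_free = float(np.sqrt(np.mean(np.square(err_free))))
rmse_da = float(np.sqrt(np.mean(np.square(err_da))))
```

The free-running model drifts toward the wrong equilibrium while the assimilated run stays near the truth, which is the same error-accumulation problem DA is used to suppress in the paper.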
Funding: Funded by the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R752), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The rapid shift to online education has introduced significant challenges to maintaining academic integrity in remote assessments, as traditional proctoring methods fall short in preventing cheating. The increase in cheating during online exams highlights the need for efficient, adaptable detection models to uphold academic credibility. This paper presents a comprehensive analysis of various deep learning models for cheating detection in online proctoring systems, evaluating their accuracy, efficiency, and adaptability. We benchmark several advanced architectures, including EfficientNet, MobileNetV2, ResNet variants, and more, using two specialized datasets (OEP and OP) tailored for online proctoring contexts. Our findings reveal that EfficientNetB1 and YOLOv5 achieve top performance on the OP dataset, with EfficientNetB1 attaining a peak accuracy of 94.59% and YOLOv5 reaching a mean average precision (mAP@0.5) of 98.3%. For the OEP dataset, ResNet50-CBAM, YOLOv5, and EfficientNetB0 stand out, with ResNet50-CBAM achieving an accuracy of 93.61% and EfficientNetB0 showing robust detection performance with balanced accuracy and computational efficiency. These results underscore the importance of selecting models that balance accuracy and efficiency, supporting scalable, effective cheating detection in online assessments.