Large Language Models (LLMs) are increasingly applied in the field of code translation. However, existing evaluation methodologies suffer from two major limitations: (1) the high overlap between test data and pretraining corpora, which introduces significant bias in performance evaluation; and (2) mainstream metrics focus primarily on surface-level accuracy, failing to uncover the underlying factors that constrain model capabilities. To address these issues, this paper presents TCode (Translation-Oriented Code Evaluation benchmark), a complexity-controllable, contamination-free benchmark dataset for code translation, alongside a dedicated static feature sensitivity evaluation framework. The dataset is carefully designed to control complexity along multiple dimensions, including syntactic nesting and expression intricacy, enabling both broad coverage and fine-grained differentiation of sample difficulty. This design supports precise evaluation of model capabilities across a wide spectrum of translation challenges. The proposed evaluation framework introduces a correlation-driven analysis mechanism based on static program features, enabling predictive modeling of translation success from two perspectives: Code Form Complexity (e.g., code length and character density) and Semantic Modeling Complexity (e.g., syntactic depth, control-flow nesting, and type system complexity). Empirical evaluations across representative LLMs, including Qwen2.5-72B and Llama3.3-70B, demonstrate that even state-of-the-art models achieve over 80% compilation success on simple samples, but their accuracy drops sharply below 40% on complex cases. Further correlation analysis indicates that Semantic Modeling Complexity alone accounts for up to 60% of the variance in translation success, with static program features exhibiting nonlinear threshold effects that mark clear capability boundaries. This study departs from the traditional accuracy-centric evaluation paradigm and, for the first time, systematically characterizes the capabilities of large language models in translation tasks through the lens of program static features. The findings provide actionable insights for model refinement and training strategy development.
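To make the two feature families concrete, here is a minimal sketch of the kind of static features involved, computed for Python source with the standard `ast` module. The exact feature set and target languages of TCode are not specified here, so these feature names and definitions are illustrative assumptions only.

```python
import ast

def static_features(source: str) -> dict:
    """Illustrative static features of the two kinds the paper groups into
    Code Form Complexity (surface-level size) and Semantic Modeling
    Complexity (structural depth). Hypothetical feature set, not TCode's."""
    tree = ast.parse(source)

    def depth(node, d=0):
        # deepest path in the abstract syntax tree
        return max([depth(c, d + 1) for c in ast.iter_child_nodes(node)], default=d)

    lines = [line for line in source.splitlines() if line.strip()]
    return {
        # Code Form Complexity: size and character density
        "loc": len(lines),
        "chars_per_line": sum(map(len, lines)) / max(len(lines), 1),
        # Semantic Modeling Complexity: syntactic depth and a crude
        # proxy for control-flow nesting (number of loop statements)
        "syntactic_depth": depth(tree),
        "loop_count": sum(isinstance(n, (ast.For, ast.While)) for n in ast.walk(tree)),
    }

features = static_features("for i in range(3):\n    for j in range(3):\n        print(i * j)\n")
```

Correlating such features with per-sample translation success is what the framework's predictive analysis would then do.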
Artificial Intelligence (AI) continues to expand exponentially, particularly with the emergence of Generative Pre-trained Transformers (GPTs) based on the transformer architecture, which have revolutionized data processing and enabled significant improvements in various applications. This study investigates the detection of security vulnerabilities in source code using a range of Large Language Models (LLMs). Our primary objective is to evaluate their effectiveness relative to Static Application Security Testing (SAST) by applying techniques such as prompt personas, structured outputs, and zero-shot prompting. We select eight LLMs (CodeLlama 7B, DeepSeek Coder 7B, Gemini 1.5 Flash, Gemini 2.0 Flash, Mistral 7B Instruct, Phi-3 Mini 128K Instruct, Qwen 2.5 Coder, StarCoder 27B) and compare and combine them with the SAST tool Find Security Bugs. The evaluation uses a selected dataset containing known vulnerabilities, and the results provide insights for different scenarios according to software criticality (business critical, non-critical, minimum effort, best effort). In detail, the main objectives of this study are to investigate whether large language models outperform traditional static analysis tools, whether combining LLMs with SAST tools leads to an improvement, and whether local models running on an ordinary computer can produce reliable results. Summarizing the most important conclusions: although results improve with LLM size, for business-critical software the best results were obtained by SAST analysis alone. This differs in the "Non-Critical," "Best Effort," and "Minimum Effort" scenarios, where the combination of an LLM (Gemini) with SAST obtained better results.
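As an illustration of how the named prompting techniques combine, the sketch below assembles a zero-shot detection prompt with an optional persona line and a structured-output instruction. The actual prompts, model wrappers, and output schema used in the study are not public, so this wording is an assumption.

```python
def build_prompt(code: str, persona: bool = True, structured: bool = True) -> str:
    """Assemble a zero-shot vulnerability-detection prompt. The three toggles
    mirror the techniques named in the study: a persona, a structured-output
    instruction, and no in-context examples (zero-shot). Wording is hypothetical."""
    parts = []
    if persona:
        # prompt persona: frame the model as a security specialist
        parts.append("You are a senior application security auditor.")
    parts.append("Review the following code for security vulnerabilities.")
    if structured:
        # structured output: request a fixed JSON schema so results can be parsed
        parts.append('Respond only with JSON: {"vulnerable": <bool>, "cwe": <str>, "reason": <str>}.')
    parts.append("--- code ---\n" + code + "\n--- end ---")
    return "\n".join(parts)

prompt = build_prompt('stmt = "SELECT * FROM users WHERE id=" + user_input')
```

The returned string would then be sent to each candidate LLM, and the parsed JSON verdicts merged with Find Security Bugs findings in the combined scenarios.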
The article entitled "OptoGPT: A foundation model for inverse design in optical multilayer thin film structures", doi: 10.29026/oea.2024.240062, published in Vol. 7, No. 7, 2024 of Opto-Electronic Advances, has attracted attention from many researchers. As a result, the authors received many requests about the possibility of sharing the code, model, and dataset from that work. To meet the needs of the research community, the authors have decided to make the code, model, and datasets of OptoGPT public, enabling broader utilization and further development of enhanced models.
For computer science majors in higher education institutions, programming courses are among the most important professional foundation courses. Proficiency in independent programming is of great help both to the study of subsequent courses and to students' personal development. In the teaching of programming courses, online judge systems are often used to improve students' programming level. However, traditional online judge systems offer no guidance, and inexperienced students often find it difficult to locate and correct errors in their code by themselves. We propose an online judge system that integrates a large language model for error correction, helping students find errors and improve their programming skills.
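The verdict logic of such a judge can be sketched in a few lines; the LLM-based correction layer proposed above would be invoked on a failing verdict, indicated here only by comments. Python-only execution and the five-second limit are simplifying assumptions.

```python
import subprocess
import sys

def judge(source: str, stdin: str, expected: str) -> str:
    """Run a Python submission against one test case and return a verdict:
    AC (accepted), WA (wrong answer), RE (runtime error), TLE (time limit)."""
    try:
        run = subprocess.run(
            [sys.executable, "-c", source],
            input=stdin, capture_output=True, text=True, timeout=5,
        )
    except subprocess.TimeoutExpired:
        return "TLE"
    if run.returncode != 0:
        return "RE"  # the proposed system would hand stderr to the LLM here
    if run.stdout.strip() != expected.strip():
        return "WA"  # ...and the failing input/expected/actual triple here
    return "AC"

print(judge("print(int(input()) * 2)", "21", "42"))  # AC
```

On a "WA" or "RE" verdict, the error-correction model would receive the submission plus the failure details and return a hint rather than a bare rejection.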
This article reviews the failure modes that may occur in pressure vessels operating under high-temperature creep conditions. Drawing on current engineering design practice, it identifies a technical bottleneck in China's present pressure vessel standards system: the lack of a method for determining the allowable compressive stress under high-temperature creep conditions. Against this background, it introduces ASME Code Case 3029, briefly covering its scope of application, development history, origin, and engineering significance. Using an actual structure from an engineering design project as an example, it walks through the procedure for applying the method and the points requiring attention. Finally, in light of the practical needs of pressure vessel engineering design, it offers an outlook on the future formulation and revision of China's standards system.
This study introduces a lightweight deep learning model and a novel synthetic dataset designed to restore damaged one-dimensional (1D) barcodes and Quick Response (QR) codes, addressing critical challenges in logistics operations. The proposed solution leverages an efficient Pix2Pix-based framework, a type of conditional Generative Adversarial Network (GAN) optimized for image-to-image translation tasks, enabling the recovery of degraded barcodes and QR codes with minimal computational overhead. A core contribution of this work is the development of a synthetic dataset that simulates realistic damage scenarios frequently encountered in logistics environments, such as low contrast, misalignment, physical wear, and environmental interference. By training on this diverse and realistic dataset, the model demonstrates exceptional performance in restoring readability and decoding accuracy. The lightweight architecture, featuring a U-Net-based encoder-decoder with separable convolutions, ensures computational efficiency, making the approach suitable for real-time deployment on the embedded and resource-constrained devices commonly used in logistics systems. Experimental results reveal significant improvements: QR code decoding ratios increased from 14% to 99% on training data and from 15% to 68% on validation data, while 1D barcode decoding ratios improved from 7% to 73% on training data and from 9% to 44% on validation data. By providing a robust, resource-efficient solution for restoring damaged barcodes and QR codes, this study offers practical advancements for enhancing the reliability of automated scanning systems in logistics operations, particularly under challenging conditions.
Digital content such as games, extended reality (XR), and movies is widely and easily distributed over wireless networks. As a result, unauthorized access, copyright infringement by third parties or eavesdroppers, and cyberattacks over these networks have become pressing concerns, and protecting copyrighted content against illegal distribution in wireless communications has garnered significant attention. The Intelligent Reflecting Surface (IRS) is regarded as a promising technology for future wireless and mobile networks due to its ability to reconfigure the radio propagation environment. This study investigates the security performance of an uplink Non-Orthogonal Multiple Access (NOMA) system integrated with an IRS and employing Fountain Codes (FCs). Specifically, two users at different distances send signals to the base station. A relay receives the signal from the nearby user and forwards it to the base station, while the IRS reflects the signal from the distant user to the relay, which then forwards it to the base station. A malevolent eavesdropper intercepts both user and relay transmissions. We derive mathematical expressions for the Outage Probability (OP), throughput, diversity order, and Interception Probability (IP), offering quantitative insights into system security and performance. Additionally, OP and IP are analyzed using a Deep Neural Network (DNN) model. Monte Carlo simulations confirm the theoretical results and provide a deeper understanding of the security performance of the IRS-assisted NOMA system.
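The OP analysis can be illustrated on its basic building block, a single Rayleigh-fading link; the full system chains several such links through the relay and the IRS, and its derivations are not reproduced here. The SNR and rate values below are arbitrary.

```python
import math
import random

def outage_probability(snr_db: float, rate: float, trials: int = 200_000) -> float:
    """Monte Carlo estimate of the outage probability of one Rayleigh-fading
    link: an outage occurs when the instantaneous capacity
    log2(1 + SNR * |h|^2) falls below the target rate."""
    snr = 10 ** (snr_db / 10)
    rng = random.Random(0)  # fixed seed for a reproducible estimate
    outages = 0
    for _ in range(trials):
        h2 = rng.expovariate(1.0)  # |h|^2 is exponential under Rayleigh fading
        if math.log2(1 + snr * h2) < rate:
            outages += 1
    return outages / trials

est = outage_probability(snr_db=10, rate=1.0)
# Closed form for this single link: P_out = 1 - exp(-(2**rate - 1) / snr)
exact = 1 - math.exp(-(2 ** 1.0 - 1) / 10 ** (10 / 10))
```

Agreement between the simulated and closed-form values is exactly the kind of cross-check the paper performs between its Monte Carlo simulations and theoretical expressions.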
Rural domestic sewage treatment is critical for environmental protection. This study defines the spatial pattern of villages from the perspective of rural sewage treatment and develops an integrated decision-making system to propose a sewage treatment mode and scheme suited to local conditions. Considering village spatial layout and terrain factors, a decision tree model for residential density and terrain type was constructed with accuracies of 76.47% and 96.00%, respectively. Combined with binary classification probability unit regression, an appropriate sewage treatment mode for each village was determined with 87.00% accuracy. The Analytic Hierarchy Process (AHP), combined with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) model, formed the basis for optimal treatment process selection under different emission standards. Verification was conducted in 542 villages across three counties of the Inner Mongolia Autonomous Region, focusing on the standard effluent effect (0.3773), low investment cost (0.3196), and high-standard effluent effect (0.5115) to determine the best treatment process for the same emission standard under different needs. The annual environmental and carbon emission benefits of sewage treatment in these villages were also estimated. The model matches village density, geographic features, and social development level, and provides scientific support and a theoretical basis for rural sewage treatment decision-making.
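The AHP-TOPSIS step can be sketched as plain TOPSIS over a decision matrix whose column weights would come from AHP: normalize, weight, and rank alternatives by relative closeness to the ideal solution. The criteria and numbers below are invented for illustration, not taken from the study.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) on criteria (columns) by relative closeness
    to the ideal solution. `benefit[j]` marks whether criterion j is
    larger-is-better (True) or smaller-is-better (False)."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the (AHP-derived) weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # ideal and anti-ideal alternatives per criterion direction
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness in [0, 1]
    return scores

# Three hypothetical treatment processes scored on (effluent quality, cost):
# quality is a benefit criterion, cost is not.
scores = topsis([[0.9, 120], [0.7, 60], [0.8, 90]], [0.6, 0.4], [True, False])
```

With these toy numbers the cheap, moderate-quality second process ranks first; changing the weights, as the study does per scenario (low cost vs. high-standard effluent), shifts the ranking.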
Two-dimensional (2D) materials based on boron and carbon have attracted wide attention due to their unique properties. BC compounds have rich active sites and diverse chemical coordination, showing great potential in optoelectronic applications. However, owing to the limitations of computational and experimental conditions, predicting new 2D BC monolayer materials remains a challenging task. To address this, we utilized the Crystal Diffusion Variational Autoencoder (CDVAE) and the pre-trained Materials Graph Neural Network with 3-Body Interactions (M3GNet) model to generate novel, stable BCP materials. Each crystal structure was treated as a high-dimensional vector: the encoder extracted lattice information and element coordinates, mapping the high-dimensional data into a low-dimensional latent space, and the decoder reconstructed the latent representation back into the original data space. Additionally, our attribute predictor network combined the advantages of dilated convolutions and residual connections, effectively increasing the model's receptive field and learning capacity while maintaining a relatively low parameter count and computational complexity. By progressively increasing the dilation rate, the model captures features at different scales. We trained the diffusion model on a DFT dataset of about 1600 BCP monolayer materials and used the pre-trained M3GNet model to screen the best candidate structures. Finally, DFT calculations confirmed the stability of the candidate structures. The results show that combining a generative deep learning model with an attribute prediction model can accelerate the discovery of new 2D materials and provides an effective route to the inverse design of new two-dimensional materials.
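The receptive-field benefit of progressively increasing the dilation rate reduces to simple arithmetic: each convolution layer with kernel size k and dilation d adds (k-1)*d to the field. A sketch with made-up layer sizes (the paper's actual layer configuration is not reproduced here):

```python
def receptive_field(layers):
    """Receptive field of a 1-D stack of convolutions, each given as
    (kernel_size, dilation): rf = 1 + sum over layers of (k - 1) * d."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d  # each layer widens the field by (k-1)*dilation
    return rf

# Same three 3-wide kernels: plain stack vs. dilation rates 1, 2, 4
plain = receptive_field([(3, 1), (3, 1), (3, 1)])     # 7
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])   # 15
```

Doubling the dilation per layer roughly doubles the field at zero extra parameter cost, which is why the attribute predictor can see multiple scales while staying small.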
Mobile communications reach into every aspect of daily life, necessitating high-efficiency data transmission and support for diverse data types and communication scenarios. Polar codes have emerged as a promising solution due to their outstanding error-correction performance and low complexity. Unequal error protection (UEP) provides nonuniform error safeguarding for distinct data segments, striking a fine balance between error resilience and resource allocation and ultimately enhancing system performance and efficiency. In this paper, we propose a novel class of UEP rateless polar codes. The codes are designed via matrix extension of polar codes, with mapping and duplication operations crafted to achieve the UEP property while preserving the overall performance of conventional polar codes. Superior UEP performance is attained without significant modifications to conventional polar codes, making the scheme readily compatible with existing polar codes. A theoretical analysis of block error rate and throughput efficiency is conducted; to the best of our knowledge, this work provides the first theoretical performance analysis of UEP rateless polar codes. Simulation results show that the proposed codes significantly outperform existing polar coding schemes in both block error rate and throughput efficiency.
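For reference, the conventional polar-code core that the proposed scheme builds on is the recursive transform x = u F^(⊗n) over GF(2) with kernel F = [[1,0],[1,1]]; the matrix-extension, mapping, and duplication operations of the paper are not reproduced here.

```python
def polar_transform(u):
    """Encode bit vector u (length a power of 2) with the recursive polar
    transform: split u into halves (u1, u2) and return
    transform(u1 XOR u2) concatenated with transform(u2).
    Equivalent to multiplying u by the n-fold Kronecker power of
    F = [[1, 0], [1, 1]] over GF(2)."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    left = polar_transform([u[i] ^ u[i + half] for i in range(half)])
    right = polar_transform(u[half:])
    return left + right

x = polar_transform([1, 0, 1, 1])  # [1, 1, 0, 1]
```

The UEP scheme keeps this transform intact and obtains its rateless, nonuniform protection by extending the generator matrix and mapping/duplicating information bits around it.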
We present a comprehensive description and benchmark evaluation of the global-regional chemical transport model called the Emission and Atmospheric Processes Integrated and Coupled Community (EPICC) model. The framework incorporates (1) grid configuration, (2) transport dynamics, (3) chemical mechanisms, (4) aerosol processes, (5) wet/dry deposition parameterizations, and (6) heterogeneous chemistry treatments associated with sulfate, nitrous acid (HONO) chemistry, and aerosol/cloud-photolysis interactions (APIs/CPIs). Openly shared with the atmospheric research community, the model facilitates integration of advanced physicochemical schemes to enhance simulation accuracy. Globally, the model provides realistic representations of ozone (O3) and aerosol optical depth. It also demonstrates robust performance in simulating regional concentrations of O3 and PM2.5 (and its components) in China, and it successfully captures vertical profiles of both global and regional O3. Notably, the model mitigates the frequently reported sulfate underestimation in highly industrialized regions of China, and it accurately captures two severe regional pollution episodes observed in eastern China (January/June 2021). Sensitivity experiments highlight the critical roles of the heterogeneous chemical mechanisms associated with sulfate, HONO chemistry, APIs, and CPIs in capturing PM2.5 and O3 concentrations in China. Improved sulfate mechanisms increase simulated winter sulfate concentrations by approximately 32.4% (2.8 μg/m3) when observations exceed 10 μg/m3. Enhanced HONO chemistry elevates winter O3 and PM2.5 by up to 20 and 10 μg/m3, respectively. Overall, CPIs dominate over APIs in improving O3 and PM2.5 simulations across China, while locally, APIs mitigate PM2.5 and O3 discrepancies in the Sichuan Basin. Seasonal cloud-chemistry coupling explains the weaker impact on PM2.5 in summer.
In task-oriented dialogue systems, intent, emotion, and actions are crucial elements of user activity, and analyzing the relationships among them to control and manage such systems is challenging. Previous work has primarily focused on recognizing user intent and emotion independently, making it difficult to track both aspects simultaneously in the dialogue tracking module and to effectively exploit user emotions in subsequent dialogue strategies. We propose a Multi-Head Encoder Shared Model (MESM) that dynamically integrates features from emotion and intent encoders through a feature fusioner. To address the scarcity of datasets containing both emotion and intent labels, we designed a multi-dataset learning approach that enables the model to generate dialogue summaries encompassing both user intent and emotion. Experiments on the MultiWoZ and MELD datasets demonstrate that our model effectively captures user intent and emotion, achieving highly competitive results in dialogue state tracking.
Model evaluation using benchmark datasets is an important way to measure the capability of large language models (LLMs) in specific domains, mainly by assessing their knowledge and reasoning abilities. To better assess LLM capability in the agricultural domain, we propose Agri-Eval, a benchmark for evaluating the agricultural knowledge and reasoning of LLMs. The Agri-Eval dataset covers seven major disciplines of the agricultural domain (crop science, horticulture, plant protection, animal husbandry, forest science, aquaculture science, and grass science) and contains a total of 2283 questions. Among domestic general-purpose LLMs, DeepSeek R1 performed best, with an accuracy of 75.49%. Among international general-purpose LLMs, Gemini 2.0 Pro Exp 0205 was the top performer, achieving an accuracy of 74.28%. Among agriculture-specific LLMs, Shennong V2.0 outperformed all domestic LLMs, and its accuracy on agricultural knowledge exceeded that of all existing general-purpose LLMs. Agri-Eval helps LLM developers comprehensively evaluate model capability in agriculture through a variety of tasks and tests, promoting the development of LLMs in this field.
In this paper, we establish and study a single-species logistic model with impulsive age-selective harvesting. First, we prove the ultimate boundedness of the solutions of the system. Then, we obtain conditions for the asymptotic stability of the trivial solution and of the positive periodic solution. Finally, numerical simulations are presented to validate our results. Our results show that age-selective harvesting is more conducive to sustainable population survival than non-age-selective harvesting.
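A non-age-structured caricature of such a model is logistic growth punctuated by a proportional harvest pulse every T units of time; the paper's model additionally restricts the harvest to certain age classes. When (1-h)e^(rT) > 1 the population settles onto a positive periodic solution, which a simple Euler simulation exhibits (all parameter values here are arbitrary):

```python
def logistic_impulsive(x0, r=1.0, K=1.0, h=0.4, T=1.0, pulses=40, steps=200):
    """Euler simulation of logistic growth dx/dt = r*x*(1 - x/K) with an
    impulsive proportional harvest x -> (1 - h)*x at t = T, 2T, 3T, ...
    Returns the post-harvest population after `pulses` periods."""
    x = x0
    dt = T / steps
    for _ in range(pulses):
        for _ in range(steps):
            x += dt * r * x * (1 - x / K)  # continuous logistic growth
        x *= (1 - h)                        # impulsive harvest at t = kT
    return x
```

With these parameters (1-h)e^(rT) = 0.6e > 1, so trajectories from different initial populations converge to the same positive periodic orbit instead of dying out, mirroring the stability condition for the positive periodic solution.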
The proliferation of high-dimensional data and the widespread use of complex models present central challenges in contemporary statistics and data science. Dimension reduction and model checking, two foundational pillars supporting scientific inference and data-driven decision-making, have evolved through the collective wisdom of generations of statisticians. This special issue, titled "Recent Developments in Dimension Reduction and Model Checking for regressions", not only showcases cutting-edge advances in the field but also pays academic homage to the groundbreaking and enduring contributions of Professor Lixing Zhu, a leading scholar whose work has profoundly shaped both areas.
Differential Code Bias (DCB) is the time delay between two different GNSS signals and is crucial for GNSS positioning. Previous studies have shown that it can be significantly affected by flex power operations on satellites. This study proposes a 15-minute short-term DCB estimation method to analyze the impact of flex power on DCB variations. The method jointly estimates satellite DCBs, receiver DCBs, and ionospheric parameters using over 300 MGEX stations. We examined three representative flex power events in 2024, achieving average internal RMS values of 0.042 ns and 0.0068 ns for the inter-frequency and intra-frequency scenarios, respectively. Results show that intra-frequency DCBs exhibit clear shifts synchronized with flex power state transitions while remaining stable within 0.20 ns during non-transition periods. No definitive impact on inter-frequency DCBs was observed at current estimation precision levels.
In their recent paper, Pereira et al. (2025) claim that validation is overlooked in the mapping and modelling of ecosystem services (ES). They state that "many studies lack critical evaluation of the results and no validation is provided" and that "the validation step is largely overlooked". This assertion may have been true several years ago, for example when Ochoa and Urbina-Cardona (2017) made a similar observation. However, there has been much work on ES model validation over the last decade.
Funding: supported by the Research and Construction of Experimental Teaching Aid Platform for Programming project, under the Teaching Reform Research Project of Shandong University.
Funding: supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) through the Industrial R&D Projects Grant Program (TEYDEB) under Project No. 3211077 (grant recipient: Metin Kahraman).
Funding: supported in part by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant 102.04-2021.57, and in part by the Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency, funded by the Ministry of Culture, Sports and Tourism in 2024 (Project Name: Global Talent Training Program for Copyright Management Technology in Game Contents, Project Number: RS-2024-00396709, Contribution Rate: 100%).
Abstract: Digital content such as games, extended reality (XR), and movies is widely and easily distributed over wireless networks. As a result, unauthorized access, copyright infringement by third parties or eavesdroppers, and cyberattacks over these networks have become pressing concerns. Therefore, protecting copyrighted content and preventing illegal distribution in wireless communications has garnered significant attention. The Intelligent Reflecting Surface (IRS) is regarded as a promising technology for future wireless and mobile networks due to its ability to reconfigure the radio propagation environment. This study investigates the security performance of an uplink Non-Orthogonal Multiple Access (NOMA) system integrated with an IRS and employing Fountain Codes (FCs). Specifically, two users send signals to the base station from different distances. A relay first receives the signal from the nearby user and then forwards it to the base station. The IRS receives the signal from the distant user and reflects it to the relay, which then sends the reflected signal to the base station. Furthermore, a malevolent eavesdropper intercepts both user and relay communications. We derive mathematical expressions for the Outage Probability (OP), throughput, diversity order, and Interception Probability (IP), offering quantitative insights for assessing system security and performance. Additionally, OP and IP are analyzed using a Deep Neural Network (DNN) model. Monte Carlo simulations, carried out to confirm the theoretical conclusions, provide a deeper understanding of the security performance of the IRS-assisted NOMA system in signal transmission.
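As a rough illustration of how Monte Carlo simulation verifies a closed-form outage probability, the sketch below estimates OP for a single Rayleigh-fading link. The rate, SNR, and channel model are simplifying assumptions for illustration, not the paper's full IRS-NOMA system:

```python
import math
import random

def outage_probability_mc(snr_db, rate, trials=200_000, seed=1):
    """Monte Carlo estimate of P(log2(1 + snr * |h|^2) < rate) for a
    Rayleigh-fading link, where |h|^2 is exponentially distributed."""
    rng = random.Random(seed)
    snr = 10.0 ** (snr_db / 10.0)
    threshold = (2.0 ** rate - 1.0) / snr
    outages = sum(rng.expovariate(1.0) < threshold for _ in range(trials))
    return outages / trials

def outage_probability_exact(snr_db, rate):
    """Closed form for the same link: 1 - exp(-(2^rate - 1) / snr)."""
    snr = 10.0 ** (snr_db / 10.0)
    return 1.0 - math.exp(-(2.0 ** rate - 1.0) / snr)
```

Agreement between the simulated and analytical values mirrors how the paper's Monte Carlo runs confirm its derived OP and IP expressions.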
Funding: Supported by the Central Government Guiding Local Science and Technology Development Fund Project (No. 2024SZY0343), the Joint Research Program for Ecological Conservation and High Quality Development of the Yellow River Basin (No. 2022-YRUC-01-050205), the Higher Education Scientific Research Project of Inner Mongolia Autonomous Region (No. NJZZ23078), the Inner Mongolia "Prairie Talents" Engineering Innovation and Entrepreneurship Talent Team project, the Major Projects of Erdos Science and Technology (No. 2022EEDSKJZDZX015), and the Innovation Team of the Inner Mongolia Academy of Science and Technology (No. CXTD2023-01-016).
Abstract: Rural domestic sewage treatment is critical for environmental protection. This study defines the spatial pattern of villages from the perspective of rural sewage treatment and develops an integrated decision-making system to propose sewage treatment modes and schemes suited to local conditions. By considering village spatial layout and terrain factors, a decision tree model of residential density and terrain type was constructed, with accuracies of 76.47% and 96.00%, respectively. Combined with binary classification probability unit regression, an appropriate sewage treatment mode for each village was determined with 87.00% accuracy. The Analytic Hierarchy Process (AHP), combined with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) model, formed the basis for optimal treatment process selection under different emission standards. Verification was conducted in 542 villages across three counties of the Inner Mongolia Autonomous Region, focusing on the standard effluent effect (0.3773), low investment cost (0.3196), and high-standard effluent effect (0.5115) to determine the best treatment process for the same emission standard under different needs. The annual environmental and carbon emission benefits of sewage treatment in these villages were also estimated. The model matches village density, geographic features, and social development level, and provides scientific support and a theoretical basis for rural sewage treatment decision-making.
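The TOPSIS step can be sketched compactly: alternatives are scored by their closeness to an ideal solution under criterion weights (which AHP would supply). The two-alternative decision matrix and equal weights below are hypothetical, not the paper's data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS closeness to the ideal solution.
    matrix: rows = alternatives, cols = criteria; benefit[j] is True
    if criterion j should be maximized (e.g. effluent quality) and
    False if minimized (e.g. investment cost)."""
    n_crit = len(matrix[0])
    # Vector-normalize each criterion column, then apply weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # Ideal and anti-ideal points per criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # Closeness coefficient: distance to anti-ideal over total distance.
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in v]
```

For example, a cheaper process with modest treatment quality can outrank an expensive high-quality one once the cost criterion carries enough weight.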
Funding: Supported by the National Natural Science Foundation of China (Nos. 61671362 and 62071366).
Abstract: Two-dimensional (2D) materials based on boron and carbon have attracted wide attention due to their unique properties. BC compounds have rich active sites and diverse chemical coordination, showing great potential in optoelectronic applications. However, due to limitations in computational and experimental conditions, predicting new 2D BC monolayer materials remains a challenging task. In this work, we utilized the Crystal Diffusion Variational Autoencoder (CDVAE) and the pre-trained Materials Graph Neural Network with 3-Body Interactions (M3GNet) model to generate novel and stable BCP materials. Each crystal structure was treated as a high-dimensional vector, where the encoder extracted lattice information and element coordinates, mapping the high-dimensional data into a low-dimensional latent space. The decoder then reconstructed the latent representation back into the original data space. Additionally, our designed attribute predictor network combined the advantages of dilated convolutions and residual connections, effectively increasing the model's receptive field and learning capacity while maintaining a relatively low parameter count and computational complexity. By progressively increasing the dilation rate, the model can capture features at different scales. We used a DFT dataset of about 1600 BCP monolayer materials to train the diffusion model, combined with the pre-trained M3GNet model to screen the best candidate structures. Finally, we used DFT calculations to confirm the stability of the candidate structures. The results show that combining a generative deep learning model with an attribute prediction model can accelerate the discovery of new 2D materials and provides an effective method for exploring the inverse design of new two-dimensional materials.
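The receptive-field growth from progressively increasing dilation rates can be checked with simple arithmetic; the kernel sizes and dilation schedule below are illustrative assumptions, not the predictor network's actual configuration:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of stacked 1-D convolutions with stride 1:
    each layer with kernel size k and dilation d adds (k - 1) * d."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Doubling the dilation rate (1, 2, 4) grows the receptive field much
# faster than stacking undilated layers, at the same parameter count.
dilated = receptive_field([3, 3, 3], [1, 2, 4])  # 15
plain = receptive_field([3, 3, 3], [1, 1, 1])    # 7
```

This is why dilated stacks capture multi-scale features without the cost of larger kernels or deeper networks.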
Funding: Supported by the National Natural Science Foundation of China (No. 62301008), the China Postdoctoral Science Foundation (No. 2022M720272), and the New Cornerstone Science Foundation through the XPLORER PRIZE.
Abstract: Mobile communications are reaching into every aspect of our daily life, necessitating high-efficiency data transmission and support for diverse data types and communication scenarios. Polar codes have emerged as a promising solution due to their outstanding error-correction performance and low complexity. Unequal error protection (UEP) provides nonuniform error safeguarding for distinct data segments, achieving a fine balance between error resilience and resource allocation that ultimately enhances system performance and efficiency. In this paper, we propose a novel class of UEP rateless polar codes. The codes are designed based on a matrix extension of polar codes, with mapping and duplication operations designed to achieve the UEP property while preserving the overall performance of conventional polar codes. Superior UEP performance is attained without significant modifications to conventional polar codes, making the scheme straightforward to integrate with existing polar codes. A theoretical analysis of the block error rate and throughput efficiency is conducted. To the best of our knowledge, this work provides the first theoretical performance analysis of UEP rateless polar codes. Simulation results show that the proposed codes significantly outperform existing polar coding schemes in both block error rate and throughput efficiency.
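For context, conventional polar encoding, which the proposed scheme extends via matrix extension, applies the transform G_N = F^(⊗n) with kernel F = [[1,0],[1,1]] over GF(2). A minimal recursive sketch in natural (non-bit-reversed) order:

```python
def polar_transform(u):
    """Apply the polar transform x = u * F^(⊗n) over GF(2), where
    F = [[1, 0], [1, 1]].  len(u) must be a power of two.  This is
    the conventional encoder, not the paper's UEP extension."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    # Butterfly stage: the top half combines with the bottom half
    # via XOR, then each half is transformed recursively.
    top = polar_transform([a ^ b for a, b in zip(u[:half], u[half:])])
    bot = polar_transform(u[half:])
    return top + bot
```

For N = 4 this reproduces x = (u0⊕u1⊕u2⊕u3, u1⊕u3, u2⊕u3, u3), the standard length-4 polar codeword.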
Funding: Supported by the National Key Scientific and Technological Infrastructure project "Earth System Science Numerical Simulator Facility" (EarthLab), the National Natural Science Foundation of China (Grant No. 92044302), and the National Key Research and Development Program of China (Grant No. 2022YFC3700703).
Abstract: We present a comprehensive description and benchmark evaluation of the global-regional chemical transport model called the Emission and Atmospheric Processes Integrated and Coupled Community (EPICC) model. The framework incorporates (1) grid configuration, (2) transport dynamics, (3) chemical mechanisms, (4) aerosol processes, (5) wet/dry deposition parameterizations, and (6) heterogeneous chemistry treatments associated with sulfate, nitrous acid (HONO) chemistry, and aerosol/cloud-photolysis interactions (APIs/CPIs). Openly shared with the atmospheric research community, the model facilitates the integration of advanced physicochemical schemes to enhance simulation accuracy. Globally, the model demonstrates realistic representations of ozone (O₃) and aerosol optical depth. The EPICC model generally demonstrates robust performance in simulating regional concentrations of O₃ and PM₂.₅ (and its components) in China. It successfully captures vertical profiles of both global and regional O₃. Notably, the model mitigates the frequently reported sulfate underestimation in highly industrialized regions of China, and it accurately captures two severe regional pollution episodes observed in eastern China (January/June 2021). Sensitivity experiments highlight the critical roles of the heterogeneous chemical mechanisms associated with sulfate, HONO chemistry, APIs, and CPIs in capturing PM₂.₅ and O₃ concentrations in China. The improved sulfate mechanisms increase simulated winter sulfate concentrations by approximately 32.4% (2.8 μg m⁻³) when observations exceed 10 μg m⁻³. Enhanced HONO elevates winter O₃ and PM₂.₅ by up to 20 and 10 μg m⁻³, respectively. Overall, CPIs dominate over APIs in improving O₃ and PM₂.₅ simulations across China. Locally, APIs mitigate PM₂.₅ and O₃ discrepancies in the Sichuan Basin. Seasonal cloud-chemistry coupling explains the weaker impact on PM₂.₅ in summer.
Funding: Funded by the Science and Technology Foundation of the Chongqing Education Commission (Grant No. KJQN202301153), the Scientific Research Foundation of Chongqing University of Technology (Grant No. 2021ZDZ025), and the Postgraduate Innovation Foundation of Chongqing University of Technology (Grant No. gzlcx20243524).
Abstract: In task-oriented dialogue systems, intent, emotion, and actions are crucial elements of user activity, and analyzing the relationships among these elements to control and manage such systems is a challenging task. Previous work has primarily focused on the independent recognition of user intent and emotion, making it difficult to track both aspects simultaneously in the dialogue tracking module and to effectively utilize user emotions in subsequent dialogue strategies. We propose a Multi-Head Encoder Shared Model (MESM) that dynamically integrates features from emotion and intent encoders through a feature fusioner. To address the scarcity of datasets containing both emotion and intent labels, we designed a multi-dataset learning approach that enables the model to generate dialogue summaries encompassing both user intent and emotion. Experiments conducted on the MultiWoZ and MELD datasets demonstrate that our model effectively captures user intent and emotion, achieving highly competitive results in dialogue state tracking tasks.
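One common way a feature fusioner can dynamically weight two encoders is a learned sigmoid gate. The sketch below is a hypothetical illustration with fixed weights, not MESM's actual fusion mechanism:

```python
import math

def fuse_features(intent_vec, emotion_vec, gate_weights, gate_bias=0.0):
    """Gated fusion of two encoder outputs: a scalar sigmoid gate g,
    computed from both feature vectors, interpolates between them.
    In a trained model gate_weights/gate_bias would be learned; here
    they are fixed, hypothetical parameters."""
    z = sum(w * x for w, x in zip(gate_weights, intent_vec + emotion_vec)) + gate_bias
    g = 1.0 / (1.0 + math.exp(-z))  # gate in (0, 1)
    return [g * a + (1.0 - g) * b for a, b in zip(intent_vec, emotion_vec)]
```

With zero gate weights the gate is 0.5 and the fusion reduces to a plain average; training would shift the gate toward whichever encoder is more informative for the current turn.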
Abstract: Model evaluation using benchmark datasets is an important way to measure the capability of large language models (LLMs) in specific domains, and it is mainly used to assess the knowledge and reasoning abilities of LLMs. To better assess the capability of LLMs in the agricultural domain, Agri-Eval is proposed as a benchmark for evaluating the knowledge and reasoning ability of LLMs in agriculture. The assessment dataset used in Agri-Eval covers seven major disciplines in the agricultural domain: crop science, horticulture, plant protection, animal husbandry, forest science, aquaculture science, and grass science, and contains a total of 2283 questions. Among domestic general-purpose LLMs, DeepSeek R1 performed best, with an accuracy rate of 75.49%. Among international general-purpose LLMs, Gemini 2.0 pro exp 0205 stood out as the top performer, achieving an accuracy rate of 74.28%. As a vertical-domain LLM for agriculture, Shennong V2.0 outperformed all domestic LLMs, and its accuracy on agricultural knowledge questions exceeded that of all existing general-purpose LLMs. The launch of Agri-Eval helps LLM developers comprehensively evaluate model capabilities in the field of agriculture through a variety of tasks and tests, promoting the development of LLMs for agriculture.
Funding: Supported by the National Natural Science Foundation of China (12261018) and the Universities Key Laboratory of Mathematical Modeling and Data Mining in Guizhou Province (2023013).
Abstract: In this paper, we establish and study a single-species logistic model with impulsive age-selective harvesting. First, we prove the ultimate boundedness of the solutions of the system. Then, we obtain conditions for the asymptotic stability of the trivial solution and the positive periodic solution. Finally, numerical simulations are presented to validate our results. Our results show that age-selective harvesting is more conducive to sustainable population survival than non-age-selective harvesting.
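The dynamics of such a model can be sketched numerically: between pulses the population follows the logistic equation (solved exactly below), and at each pulse a fraction of the population is removed. The proportional-harvest rule here is a simplified, non-age-structured stand-in for the paper's age-selective model:

```python
import math

def logistic_step(x, r, K, t):
    """Exact solution of dx/dt = r*x*(1 - x/K) over an interval of length t."""
    return K * x / (x + (K - x) * math.exp(-r * t))

def impulsive_harvest(x0, r, K, h, T, pulses):
    """Grow logistically for time T, then remove a fraction h at each
    pulse; returns the population after the given number of pulses."""
    x = x0
    for _ in range(pulses):
        x = logistic_step(x, r, K, T)
        x *= (1.0 - h)
    return x
```

With h = 0 the population approaches the carrying capacity K; with moderate h it settles onto a positive periodic orbit below K, which is the qualitative behavior the stability conditions characterize.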
Abstract: The proliferation of high-dimensional data and the widespread use of complex models present central challenges in contemporary statistics and data science. Dimension reduction and model checking, two foundational pillars supporting scientific inference and data-driven decision-making, have evolved through the collective wisdom of generations of statisticians. This special issue, titled "Recent Developments in Dimension Reduction and Model Checking for Regressions", not only showcases cutting-edge advances in the field but also pays academic homage to the groundbreaking and enduring contributions of Professor Lixing Zhu, a leading scholar whose work has profoundly shaped both areas.
Funding: Supported by funds from the Key Laboratory of Smart Earth (KF2023YB01-07), the Shanghai Collaborative Innovation Fund (XTCX-KJ-2024-17), and the National Natural Science Foundation of China (42388102, 62303311, and 62231010).
Abstract: Differential Code Bias (DCB) is the difference between the time delays of two different GNSS signals and is crucial for GNSS positioning. Previous studies have shown that it can be significantly affected by flex power operations on satellites. This study proposes a 15-minute short-term DCB estimation method to analyze the impact of flex power on DCB variations. The method jointly estimates satellite DCBs, receiver DCBs, and ionospheric parameters using over 300 MGEX stations. We examined three representative flex power events in 2024, achieving average internal RMS values of 0.042 ns and 0.0068 ns for inter-frequency and intra-frequency scenarios, respectively. The results show that intra-frequency DCBs exhibit clear shifts synchronized with flex power state transitions while remaining stable within 0.20 ns during non-transition periods. No definitive impact on inter-frequency DCBs was observed at current estimation precision levels.
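Separating satellite and receiver DCBs requires a datum constraint, commonly a zero-mean condition across satellites. The toy example below illustrates only that separation step under idealized, noise-free observations; the paper's method additionally estimates ionospheric parameters from real multi-station data:

```python
def estimate_dcbs(obs):
    """Separate satellite and receiver DCBs from pairwise observations
    obs[i][j] = sat_i + rcv_j, applying the zero-mean constraint on
    satellite DCBs (sum over satellites = 0) to remove the datum
    ambiguity.  A simplified, hypothetical stand-in for joint estimation."""
    n_sat, n_rcv = len(obs), len(obs[0])
    # Under the zero-mean constraint, each receiver DCB is the column mean.
    rcv = [sum(obs[i][j] for i in range(n_sat)) / n_sat for j in range(n_rcv)]
    # Each satellite DCB is its row mean minus the mean receiver DCB.
    sat = [sum(obs[i][j] for j in range(n_rcv)) / n_rcv - sum(rcv) / n_rcv
           for i in range(n_sat)]
    return sat, rcv
```

Because only the sum sat_i + rcv_j is observable, the zero-mean datum is what makes the individual biases well defined, which is also why flex power shifts show up as coherent jumps in the estimated short-term DCB series.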
Abstract: In their recent paper, Pereira et al. (2025) claim that validation is overlooked in the mapping and modelling of ecosystem services (ES). They state that "many studies lack critical evaluation of the results and no validation is provided" and that "the validation step is largely overlooked". This assertion may have been true several years ago, for example when Ochoa and Urbina-Cardona (2017) made a similar observation. However, there has been much work on ES model validation over the last decade.