This study demonstrates a novel integration of large language models, machine learning, and multicriteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. In general, the fully integrated framework leverages the strengths of these intelligent systems in a more systematic evaluation of large-scale decision problems. When applied in social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of this framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. Analysis revealed a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, like account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
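The k-means step of the framework can be illustrated with a minimal NumPy sketch. This is not the authors' pipeline: the toy 2-D points below stand in for embeddings of moderated posts, and the cluster count is arbitrary.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    # initialize centroids by sampling k distinct data points
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centroid as the mean of its assigned points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# two well-separated toy "post embedding" clusters (illustrative data only)
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5.0, 0.1, (20, 2))])
cents, labels = kmeans(X, k=2)
```

In the paper's setting, each row of `X` would be an LLM-derived representation of a post, and the eight content clusters would emerge from the assignment step.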
When a cracked hydrogel sample immersed in water is stretched, a swelling zone near the crack tip emerges. Within the swelling zone, water diffusion occurs and swells the hydrogel. Outside the swelling zone, water diffusion is negligible, and the material behaves like an incompressible elastomer. Since water diffusion is a time-dependent process, the size of the swelling zone changes with time. As time evolves, the swelling zone grows until it reaches the size of the hydrogel sample. There exists a competition between the size of the swelling zone and the size of the hydrogel sample, which results in the complex rate-dependent fracture behavior of hydrogels. In this article, the competition effect is studied theoretically and numerically. We find that the hydrogel undergoes three stages gradually: small-scale swelling, large-scale swelling, and equilibrium as the size of the swelling zone approaches the size of the hydrogel sample. In the stage of small-scale swelling, the first invariant of stretch at the notch tip, I_1^notch, increases with the decrease of the stretch rate. In the stage of large-scale swelling, I_1^notch increases first and then decreases with the decrease of stretch rate. In the stage of equilibrium, the effect of water diffusion is negligible, and I_1^notch is independent of stretch rate. This work reveals the connection between the stretch rate, the size of the swelling zone, and the crack tip quantity I_1^notch, which is used to establish the fracture criterion and predict rate-dependent fracture of hydrogels. In particular, the previous works on different trends of rate-dependent behavior of hydrogels can be unified in this work when both small-scale swelling and large-scale swelling are considered.
The rapid advancement of artificial intelligence technology is driving transformative changes in medical diagnosis, treatment, and management systems through large-scale deep learning models, a process that brings both groundbreaking opportunities and multifaceted challenges. This study focuses on the medical and healthcare applications of large-scale deep learning architectures, conducting a comprehensive survey to categorize and analyze their diverse uses. The survey results reveal that current applications of large models in healthcare encompass medical data management, healthcare services, medical devices, and preventive medicine, among others. Concurrently, large models demonstrate significant advantages in the medical domain, especially in high-precision diagnosis and prediction, data analysis and knowledge discovery, and enhancing operational efficiency. Nevertheless, we identify several challenges that need urgent attention, including improving the interpretability of large models, strengthening privacy protection, and addressing issues related to handling incomplete data. This research is dedicated to systematically elucidating the deep collaborative mechanisms between artificial intelligence and the healthcare field, providing theoretical references and practical guidance for both academia and industry.
The accuracy of full-scale aircraft static tests is greatly influenced by the aircraft attitude. This paper proposes an aircraft attitude optimization method based on the characteristics of the test. The aim is to address three typical problems of attitude control in full-scale aircraft static tests: (1) the coupling of rigid-body displacement and elastic deformation after large deformation, (2) the difficulty of characterizing the aircraft attitude by a measurable structure, and (3) the insufficient adaptability of the center-of-gravity reference to complex loading conditions. The methodology involves the establishment of two observation coordinate systems, a ground coordinate system and an airframe coordinate system, and two deformation states, before and after airframe deformation. A subsequent analysis of the parameter changes of these two states under different coordinate systems is then undertaken, with the objective of identifying the key parameters affecting the attitude control accuracy of large-deformation aircraft. Three optimization objective functions are established according to the test loading characteristics and the purpose of the test: (1) to minimize the loading angle error, (2) to minimize the loading additional load, and (3) to minimize the wing-root additional bending moment. The optimization results are obtained using the particle swarm optimization algorithm, with a typical static test load condition of a large passenger aircraft taken as an example. The analysis of the results demonstrates that by customizing a measurable structure of the aircraft as the observation point for the aircraft attitude, and by obtaining the translational and rotational control parameters of the observation point during the test based on the optimization objective function, the results are reasonable, and the scheme can be implemented and used to control the aircraft's attitude more accurately under complex force test conditions.
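The particle swarm optimization step can be sketched with a minimal textbook PSO in NumPy. This is a generic illustration on a toy objective, not the authors' attitude-optimization setup; the inertia and acceleration coefficients are standard default choices.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimizer: returns the best position found."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest = x.copy()                           # personal best positions
    pval = np.apply_along_axis(f, 1, x)        # personal best values
    g = pbest[pval.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g

# toy objective: shifted sphere with its minimum at (1, 2, 3)
target = np.array([1.0, 2.0, 3.0])
best = pso(lambda p: float(np.sum((p - target) ** 2)), dim=3)
```

In the test scenario described above, `f` would instead evaluate one of the three objective functions (loading angle error, additional load, or wing-root bending moment) for a candidate set of attitude control parameters.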
As optimization problems continue to grow in complexity, the need for effective metaheuristic algorithms becomes increasingly evident. However, the challenge lies in identifying the right parameters and strategies for these algorithms. In this paper, we introduce the adaptive multi-strategy Rabbit Algorithm (RA). RA is inspired by the social interactions of rabbits, incorporating elements such as exploration, exploitation, and adaptation to address optimization challenges. It employs three distinct subgroups, comprising male, female, and child rabbits, to execute a multi-strategy search. Key parameters, including the distance factor, balance factor, and learning factor, strike a balance between precision and computational efficiency. We offer practical recommendations for fine-tuning five essential RA parameters, making them versatile and independent. RA is capable of autonomously selecting adaptive parameter settings and mutation strategies, enabling it to successfully tackle a range of 17 CEC05 benchmark functions with dimensions scaling up to 5000. The results underscore RA's superior performance in large-scale optimization tasks, surpassing other state-of-the-art metaheuristics in convergence speed, computational precision, and scalability. Finally, RA has demonstrated its proficiency in solving complicated real-world engineering optimization problems by completing 10 problems from CEC2020.
Deep Underground Science and Engineering (DUSE) is pleased to present this special issue highlighting recent advancements in underground large-scale energy storage technologies. This issue comprises 19 articles: six from our special issue "Underground large-scale energy storage technologies in the context of carbon neutrality", 11 from regular submissions on related topics, and two from early regular submissions. These contributions include five review articles, one perspective article, and 13 research articles. The increased volume of this issue and later issues reflects DUSE's commitment to addressing the rapid growth in submissions and the current backlog of high-quality papers.
A major bottleneck in large-scale eigenfrequency topology optimization is the repeated solution of the generalized eigenvalue problem. This work presents an efficient graphics processing unit (GPU) solver for three-dimensional (3D) topology optimization that maximizes the fundamental eigenfrequency. The Successive Iteration of Analysis and Design (SIAD) framework is employed to avoid solving a full eigenproblem at every iteration. The sequential approximation of the eigenpairs is solved by the GPU-accelerated multigrid-preconditioned conjugate gradient (MGPCG) method to efficiently improve the eigenvectors along with the topological evolution. The cluster-mean approach is adopted to address the non-differentiability issue caused by repeated eigenfrequencies, and the corresponding sensitivity analysis method is provided. The parallelized gradient-based Zhang-Paulino-Ramos Jr. (ZPR) algorithm is employed to update the design variables. The effectiveness of the proposed solver is demonstrated through two large-scale numerical examples. The first demonstrates the accuracy, efficiency, and scalability of the solver by optimizing a 3D problem of 50.33 million elements in approximately 15.2 h over 300 iterations on a single NVIDIA Tesla V100 GPU. The second validates the effectiveness of the solver in the presence of repeated eigenfrequencies. Our findings also highlight that higher-resolution models produce distinct optimized structures with higher fundamental frequencies, underscoring the necessity of large-scale topology optimization.
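The generalized eigenvalue problem at the heart of eigenfrequency optimization, K φ = λ M φ, can be illustrated on a tiny dense system via Cholesky reduction in NumPy. This is only a conceptual sketch of the problem being solved repeatedly, not the paper's GPU multigrid method, and the 2-DOF spring-mass matrices are a made-up example.

```python
import numpy as np

def fundamental_eig(K, M):
    """Smallest eigenpair of K @ phi = lam * M @ phi via Cholesky reduction.
    With M = L L^T, A = L^{-1} K L^{-T} is symmetric and has the same
    eigenvalues; eigenvectors are back-transformed by phi = L^{-T} y."""
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    A = Linv @ K @ Linv.T
    lam, Q = np.linalg.eigh(A)          # eigenvalues in ascending order
    phi = Linv.T @ Q[:, 0]              # back-transform the first eigenvector
    return lam[0], phi

# toy 2-DOF spring-mass system: the eigenvalues are the squared frequencies
K = np.array([[2.0, -1.0], [-1.0, 2.0]])  # stiffness matrix
M = np.eye(2)                              # mass matrix
lam0, phi0 = fundamental_eig(K, M)
```

In the solver described above, K and M would be the sparse finite element stiffness and mass matrices with tens of millions of rows, which is why an iterative multigrid-preconditioned method replaces this dense factorization.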
0 INTRODUCTION
Due to the rapid population growth and the accelerated urbanization process, the contradiction between the demand for expanding ground space and the limited available land is becoming increasingly prominent. China has implemented and completed several large-scale land infilling and excavation projects (Figure 1), which have become the main way to increase land resources and expand construction land.
A new limited-memory symmetric rank-one algorithm is proposed. It combines a modified self-scaled symmetric rank-one (SSR1) update with limited-memory and nonmonotone line search techniques. In this algorithm, the descent search direction is generated by the inverse limited-memory SSR1 update, thus simplifying the computation. A numerical comparison of the algorithm with the well-known limited-memory BFGS algorithm is given. The comparison results indicate that the new algorithm can solve a class of large-scale unconstrained optimization problems.
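The core of any SR1 method is the symmetric rank-one secant update. The sketch below shows the plain (unscaled, full-memory) SR1 update with its standard skip safeguard; the paper's self-scaled, limited-memory variant modifies this formula, so treat the code as background, not the proposed algorithm.

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Plain symmetric rank-one update of a Hessian approximation B.
    s = x_{k+1} - x_k, y = grad_{k+1} - grad_k. The update is skipped
    when the denominator (y - B s)^T s is too small, the standard
    SR1 safeguard against an ill-defined update."""
    r = y - B @ s
    denom = float(r @ s)
    if abs(denom) <= tol * max(1.0, np.linalg.norm(r) * np.linalg.norm(s)):
        return B                     # skip: update not well defined
    return B + np.outer(r, r) / denom

# on a quadratic f(x) = 0.5 x^T A x the secant condition y = A s holds
# exactly, so SR1 updates along independent directions recover A
A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])):
    B = sr1_update(B, s, A @ s)
```

This finite-termination property on quadratics (exact Hessian recovery after linearly independent steps) is a classical motivation for SR1 over BFGS, which the abstract's numerical comparison builds on.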
With a grain yield accounting for 20% of the national total, northeast China is a strategic region for ensuring national grain security and also the most concentrated region of large grain farmers. Through a sampling survey of large grain farmers in 15 counties and cities of northeast China, with the aid of SPSS and AMOS software and using multiple regression analysis and structural equation modeling, this paper made a quantitative analysis of the influence of the subjective and objective factors of large grain farmers on their large-scale management. The results showed that the age structure, educational level, family operating capital, yield expectation, and protective farming awareness of large grain farmers are positive factors influencing their large-scale operation under agricultural subsidy policy. By comparison, the number of agricultural machinery and equipment owned by the family, the regional labor force, the expectation for future income, and the expectation for contractual scale are negative factors influencing large-scale operation of large grain farmers because of agricultural policies. When the future expectation, self conditions, family endowment, and operation conditions of large grain farmers increase by one unit, their large-scale operation motivation will increase by 0.692, 0.689, 0.487, and 0.363 units, respectively. Thus, increasing the future expectation and self conditions of large grain farmers is key to promoting large-scale operation of farmland.
The regional hydrological system is extremely complex because it is affected not only by physical factors but also by human dimensions, and hydrological models play a very important role in simulating this complex system. However, effective methods for analyzing model reliability and uncertainty have been lacking because of the system's complexity. The uncertainties in hydrological modeling come from four important sources: uncertainties in input data and parameters, uncertainties in model structure, uncertainties in the analysis method, and uncertainties in the initial and boundary conditions. This paper systematically reviews recent advances in uncertainty analysis approaches for large-scale complex hydrological models on the basis of these uncertainty sources. The shortcomings and insufficiencies of uncertainty analysis for complex hydrological models are also pointed out. A new uncertainty quantification platform, PSUADE, and its uncertainty quantification methods are then introduced; it promises to be a powerful tool and platform for uncertainty analysis of large-scale complex hydrological models. Finally, some future perspectives on uncertainty quantification are put forward.
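Parameter uncertainty, the first source listed above, is commonly propagated by Monte Carlo sampling, which is also one of the basic methods offered by platforms like PSUADE. The sketch below uses a deliberately trivial runoff-coefficient model with hypothetical numbers, purely to illustrate the sampling-and-summarizing pattern.

```python
import numpy as np

def propagate(model, param_samples):
    """Propagate parameter uncertainty through a model by Monte Carlo:
    evaluate the model at each sampled parameter set and summarize
    the spread of the resulting outputs."""
    out = np.array([model(p) for p in param_samples])
    return out.mean(), out.std(), np.percentile(out, [5, 95])

# toy "runoff coefficient" model: runoff = c * rainfall, with the
# coefficient c uncertain (hypothetical values, for illustration only)
rng = np.random.default_rng(0)
rainfall = 100.0                            # mm of rainfall
c_samples = rng.normal(0.4, 0.05, 10_000)   # uncertain runoff coefficient
mean, std, (p5, p95) = propagate(lambda c: c * rainfall, c_samples)
```

A real hydrological model would replace the lambda with a full simulation run per parameter set, which is what makes uncertainty analysis of large-scale models computationally demanding.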
Large language models (LLMs) have undergone significant expansion and have been increasingly integrated across various domains. Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions. However, for embodied tasks, where robots interact with complex environments, text-only LLMs often face challenges due to a lack of compatibility with robotic visual perception. This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks. Additionally, we propose a framework that utilizes multimodal GPT-4V to enhance embodied task planning through the combination of natural language instructions and robot visual perceptions. Our results, based on diverse datasets, indicate that GPT-4V effectively enhances robot performance in embodied tasks. This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights towards bridging the gap in human-robot-environment interaction.
Purpose: Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments, and promotion. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. Design/methodology/approach: This article assesses which ChatGPT inputs (full text without tables, figures, and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. Findings: The optimal input is the article title and abstract, with average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlating at 0.67 with human scores, the highest ever reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66) and 4o-mini (0.66). Research limitations: The data is a convenience sample of the work of a single author, it only includes one field, and the scores are self-evaluations. Practical implications: The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, whilst abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into human-scale scores, which is 31% more accurate than guessing. Originality/value: This is the first systematic comparison of the impact of different prompts, parameters, and inputs for ChatGPT research quality evaluations.
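The score-conversion step mentioned under "Practical implications" is ordinary least-squares regression of human scores on averaged model scores. The sketch below shows the idea with hypothetical score pairs (the paper's data is not public), using NumPy's `polyfit`.

```python
import numpy as np

def fit_score_map(model_scores, human_scores):
    """Least-squares line mapping averaged model scores onto the
    human scale: human ~ a * model + b. Returns a callable converter."""
    a, b = np.polyfit(model_scores, human_scores, deg=1)
    return lambda m: a * m + b

# hypothetical averaged ChatGPT scores and human quality scores,
# for illustration only
model = np.array([2.1, 2.8, 3.0, 3.6, 4.2])
human = np.array([1.0, 2.0, 2.0, 3.0, 4.0])
to_human = fit_score_map(model, human)
```

Because the fitted line passes through the point of means, converted scores are centered on the human scale even when the raw model scores sit in a different range, which is the practical benefit the abstract refers to.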
Recently, tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems. Despite growing attention and rapid advancements in this field, the existing literature remains fragmented and lacks systematic organization, posing barriers to entry for newcomers. This gap motivates us to conduct a comprehensive survey of existing works on tool learning with LLMs. In this survey, we focus on reviewing the existing literature from two primary aspects: (1) why tool learning is beneficial and (2) how tool learning is implemented, enabling a comprehensive understanding of tool learning with LLMs. We first explore the "why" by reviewing both the benefits of tool integration and the inherent benefits of the tool learning paradigm from six specific aspects. In terms of "how", we systematically review the literature according to a taxonomy of four key stages in the tool learning workflow: task planning, tool selection, tool calling, and response generation. Additionally, we provide a detailed summary of existing benchmarks and evaluation methods, categorizing them according to their relevance to different stages. Finally, we discuss current challenges and outline potential future directions, aiming to inspire both researchers and industrial developers to further explore this emerging and promising area.
Large Eddy Simulations (LES) in conjunction with the Flamelet Progress Variable (FPV) approach have been performed to investigate the flame and large-scale flow structures in the bluff-body stabilized non-premixed flames HM1 and HM3. The validity of the numerical methods is first verified by comparing the predicted velocity and composition fields with experimental measurements. Then the evolution of the flame and large-scale flow structures is analyzed as the flames approach blow-off. The analysis of instantaneous and statistical data indicates that there is a shift in the control mechanism in the recirculation zone between the two flames. In the recirculation zone, the HM1 flame is mainly controlled by the mixing effect, and ignition mainly occurs in the outer shear layer. In the HM3 flame, both the chemical reactions and mixing are important in the recirculation zone. The Proper Orthogonal Decomposition (POD) results show that the fluctuations in the outer shear layer are more intense in HM1, while the flow structures are more pronounced in the outer vortex structure in HM3, due to the different control mechanisms in the recirculation zone. The POD results further show that the flow structures in HM1 spread over a larger region in the intense mixing zone due to higher temperature and less extinction.
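The POD analysis referenced above is, computationally, an SVD of a mean-subtracted snapshot matrix whose columns are instantaneous flow fields. The sketch below demonstrates this on a synthetic rank-one "flow" with weak noise; the field itself is made up and is not the HM1/HM3 data.

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Proper Orthogonal Decomposition via SVD of the snapshot matrix
    (columns = snapshots, temporal mean removed). Returns the leading
    spatial modes and the energy fraction captured by each."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    energy = S**2 / np.sum(S**2)
    return U[:, :n_modes], energy[:n_modes]

# toy flow: one dominant oscillating spatial structure plus weak noise
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)                       # spatial grid
t = np.linspace(0, 1, 40)                               # snapshot times
field = np.outer(np.sin(x), np.cos(2 * np.pi * 5 * t))  # rank-1 signal
field += 0.01 * rng.standard_normal(field.shape)
modes, energy = pod_modes(field, n_modes=2)
```

In the LES study, ranking the modes by their energy fraction is what allows statements like "fluctuations in the outer shear layer are more intense in HM1": the dominant modes localize where the energetic coherent structures live.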
Gas holdups of large bubbles and small bubbles were measured by means of the dynamic gas disengagement approach in a pressurized bubble column with a diameter of 0.3 m and a height of 6.6 m. The effects of superficial gas velocity, liquid surface tension, liquid viscosity, and system pressure on the gas holdups of small bubbles and large bubbles were investigated. The holdup of large bubbles increases and the holdup of small bubbles decreases with an increase of liquid viscosity. Meanwhile, the holdup of large bubbles decreases with increasing system pressure. A correlation for the holdup of small bubbles was obtained from the experimental data.
The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These challenges include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy within the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
In recent years, Volunteered Geographic Information (VGI) has emerged as a crucial source of mapping data, contributed by users through crowdsourcing platforms such as OpenStreetMap. This paper presents a novel approach that integrates Large Language Models (LLMs) into a fully automated mapping workflow utilizing VGI data. The process leverages prompt engineering, which involves designing and optimizing input instructions to ensure the LLM produces the desired mapping outputs. By constructing precise and detailed prompts, LLM agents are able to accurately interpret mapping requirements and autonomously extract, analyze, and process VGI geospatial data. They dynamically interact with mapping tools to automate the entire mapping process, from data acquisition to map generation. This approach significantly streamlines the creation of high-quality mapping outputs, reducing the time and resources typically required for such tasks. Moreover, the system lowers the barrier for non-expert users, enabling them to generate accurate maps without extensive technical expertise. Through various case studies, we demonstrate the LLM application across different mapping scenarios, highlighting its potential to enhance the efficiency, accuracy, and accessibility of map production. The results suggest that LLM-powered mapping systems can not only optimize VGI data processing but also expand the usability of ubiquitous mapping across diverse fields, including urban planning and infrastructure development.
Software security poses substantial risks to our society because software has become part of our life. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two of the critical methods, which significantly benefit from the advancements in deep learning technologies. Due to the successful use of deep learning in software security, researchers have recently explored the potential of using large language models (LLMs) in this area. In this paper, we systematically review the results focusing on LLMs in software security. We analyze the topics of fuzzing, unit testing, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss the future directions of using LLMs in software security, including the future directions for the existing uses of LLMs and extensions from conventional deep learning research.
In ¹³⁷Cs isotope tracing studies, determining suitable ¹³⁷Cs reference inventory (CRI) plots and CRI values is a prerequisite. Owing to the heterogeneous spatial distribution of ¹³⁷Cs deposition on the ground and diverse, or even irregular, operations in sampling and testing procedures, CRI determination usually faces many difficulties and uncertainties. Additional difficulties arise in investigations of large-scale regions because of time constraints and measurement cost limitations. In this study, traditional CRI acquisition methods were summarized first, and then a new complex scheme was established, involving seven core steps and coupling model estimates with sample measurements. This CRI determination methodology was implemented in the central-eastern Inner Mongolia Plateau. The case study results showed that the CRI in the dark chestnut soil sub-region, located in the east and south of Xing'an City, was 2447 Bq·m⁻²; the CRI in the aeolian sandy soil sub-region, positioned in south-central Tongliao City and central Chifeng City, was 2430 Bq·m⁻²; the CRI in the sandy chernozem soil sub-region, situated in northwestern Chifeng City, was 2384 Bq·m⁻²; and the CRI in the chestnut soil sub-region, in southern Xilin Gol City, was 2368 Bq·m⁻². The newly proposed CRI determination scheme proved effective, and the determined CRI plots and CRI values were convincing. The methodology offers a framework for ¹³⁷Cs tracing studies in large-scale regions or along long-distance transects.
Funding: funded by the Office of the Vice-President for Research and Development of Cebu Technological University.
Funding: supported by the Postdoctoral Fellowship Program of CPSF (Grant No. GZB20240607) and the Postdoctoral Program of Shaanxi Province (Grant No. 25010103232).
Funding: Funded by the National Natural Science Foundation of China (Grant No. 62272236) and the Natural Science Foundation of Jiangsu Province (Grant No. BK20201136).
Abstract: The rapid advancement of artificial intelligence technology is driving transformative changes in medical diagnosis, treatment, and management systems through large-scale deep learning models, a process that brings both groundbreaking opportunities and multifaceted challenges. This study focuses on the medical and healthcare applications of large-scale deep learning architectures, conducting a comprehensive survey to categorize and analyze their diverse uses. The survey results reveal that current applications of large models in healthcare encompass medical data management, healthcare services, medical devices, and preventive medicine, among others. Concurrently, large models demonstrate significant advantages in the medical domain, especially in high-precision diagnosis and prediction, data analysis and knowledge discovery, and enhanced operational efficiency. Nevertheless, we identify several challenges that need urgent attention, including improving the interpretability of large models, strengthening privacy protection, and addressing issues related to handling incomplete data. This research is dedicated to systematically elucidating the deep collaborative mechanisms between artificial intelligence and the healthcare field, providing theoretical references and practical guidance for both academia and industry.
Funding: Supported in part by the National Specialized Research Project (No. XXZ3-XX21-3).
Abstract: The accuracy of full-scale aircraft static tests is greatly influenced by the aircraft attitude. This paper proposes an aircraft attitude optimization method based on the characteristics of the test. The aim is to address three typical problems of attitude control in full-scale aircraft static tests: (1) the coupling of rigid-body displacement and elastic deformation after large deformation, (2) the difficulty of characterizing the aircraft attitude by a measurable structure, and (3) the insufficient adaptability of the center-of-gravity reference to complex loading conditions. The methodology involves the establishment of two observation coordinate systems, a ground coordinate system and an airframe coordinate system, and two deformation states, before and after airframe deformation. A subsequent analysis of the parameter changes of these two states under different coordinate systems is then undertaken, with the objective of identifying the key parameters affecting the attitude-control accuracy of large-deformation aircraft. Three optimization objective functions are established according to the test loading characteristics and the purpose of the test: (1) to minimize the full-scale aircraft loading angle error, (2) to minimize the full-scale aircraft loading additional load, and (3) to minimize the full-scale aircraft loading wing-root additional bending moment. The optimization results are obtained using the particle swarm optimization algorithm, taking a typical full-scale static test load condition of a large passenger aircraft as an example. The analysis of the results demonstrates that, by designating the measurable structure of the aircraft as the observation point for the aircraft attitude, and by obtaining the translational and rotational control parameters of the observation point during the test based on the optimization objective functions, the results are reasonable, and the approach can be implemented to control the aircraft's attitude more accurately under complex force test conditions.
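The particle swarm optimization step can be sketched in a few lines. The sphere function below is a toy stand-in for the attitude objective functions named above, and the inertia/acceleration coefficients are common textbook defaults, not values from the paper.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal particle swarm optimization minimizing a generic objective f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest = x.copy()                           # personal best positions
    pval = np.apply_along_axis(f, 1, x)        # personal best values
    g = pbest[pval.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# toy stand-in objective (true optimum at the origin)
best, best_val = pso(lambda p: float(np.sum(p**2)), dim=3)
```

In the test described above, the decision variables would instead be the translational and rotational control parameters of the observation point, evaluated against one of the three loading objectives.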
Abstract: As optimization problems continue to grow in complexity, the need for effective metaheuristic algorithms becomes increasingly evident. However, the challenge lies in identifying the right parameters and strategies for these algorithms. In this paper, we introduce the adaptive multi-strategy Rabbit Algorithm (RA). RA is inspired by the social interactions of rabbits, incorporating elements such as exploration, exploitation, and adaptation to address optimization challenges. It employs three distinct subgroups, comprising male, female, and child rabbits, to execute a multi-strategy search. Key parameters, including a distance factor, a balance factor, and a learning factor, strike a balance between precision and computational efficiency. We offer practical recommendations for fine-tuning five essential RA parameters, making them versatile and independent. RA is capable of autonomously selecting adaptive parameter settings and mutation strategies, enabling it to successfully tackle a range of 17 CEC05 benchmark functions with dimensions scaling up to 5000. The results underscore RA's superior performance in large-scale optimization tasks, surpassing other state-of-the-art metaheuristics in convergence speed, computational precision, and scalability. Finally, RA has demonstrated its proficiency in solving complicated real-world engineering optimization problems by completing 10 problems from CEC2020.
Abstract: Deep Underground Science and Engineering (DUSE) is pleased to present this special issue highlighting recent advancements in underground large-scale energy storage technologies. This issue comprises 19 articles: six from our special issue "Underground large-scale energy storage technologies in the context of carbon neutrality", 11 from regular submissions on related topics, and two from early regular submissions. These contributions include five review articles, one perspective article, and 13 research articles. The increased volume of this issue and later issues reflects DUSE's commitment to addressing the rapid growth in submissions and the current backlog of high-quality papers.
Funding: Supported by the National Natural Science Foundation of China (Award No. 52105240).
Abstract: A major bottleneck in large-scale eigenfrequency topology optimization is the repeated solution of the generalized eigenvalue problem. This work presents an efficient graphics processing unit (GPU) solver for three-dimensional (3D) topology optimization that maximizes the fundamental eigenfrequency. The Successive Iteration of Analysis and Design (SIAD) framework is employed to avoid solving a full eigenproblem at every iteration. The sequential approximation of the eigenpairs is solved by the GPU-accelerated multigrid-preconditioned conjugate gradient (MGPCG) method to efficiently improve the eigenvectors along with the topological evolution. The cluster-mean approach is adopted to address the non-differentiability issue caused by repeated eigenfrequencies, and the corresponding sensitivity analysis method is provided. The parallelized gradient-based Zhang-Paulino-Ramos Jr. (ZPR) algorithm is employed to update the design variables. The effectiveness of the proposed solver is demonstrated through two large-scale numerical examples. The first example demonstrates the accuracy, efficiency, and scalability of the proposed solver by solving a 3D optimization problem of 50.33 million elements in approximately 15.2 h over 300 iterations on a single NVIDIA Tesla V100 GPU. The second example validates the effectiveness of the proposed solver in the presence of repeated eigenfrequencies. Our findings also highlight that higher-resolution models produce distinct optimized structures with higher fundamental frequencies, underscoring the necessity of large-scale topology optimization.
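The core computation, extracting the smallest eigenpair of the generalized problem K x = λ M x (λ being the squared fundamental eigenfrequency), can be illustrated with plain inverse power iteration; at scale, the paper's MGPCG solver plays the role of the direct solve below. The tiny stiffness/mass pair is a hypothetical stand-in, purely for illustration.

```python
import numpy as np

def smallest_eigenpair(K, M, iters=200, seed=0):
    """Inverse power iteration for the smallest eigenpair of K x = lam M x.

    Each step requires one linear solve with K, which large-scale solvers
    replace with an iterative (e.g., multigrid-preconditioned CG) solve."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=K.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(K, M @ x)     # amplify the lowest mode
        x /= np.linalg.norm(x)
    lam = (x @ K @ x) / (x @ M @ x)       # Rayleigh quotient estimate
    return lam, x

# tiny hypothetical stiffness/mass pair; smallest eigenvalue is 1.0
K = np.diag([1.0, 3.0, 5.0])
M = np.eye(3)
lam, mode = smallest_eigenpair(K, M)
```

In the SIAD setting, such eigenvector estimates are only incrementally improved between design updates, which is what avoids a full eigensolve at every optimization iteration.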
Funding: Funded by the Key Research and Development Program of Shaanxi Province (No. 2024SFYBXM-669) and the National Natural Science Foundation of China (No. 42271078).
Abstract: Due to rapid population growth and the accelerated urbanization process, the contradiction between the demand for expanding ground space and the limited available land is becoming increasingly prominent. China has implemented and completed several large-scale land infilling and excavation projects (Figure 1), which have become the main way to increase land resources and expand construction land.
Funding: The National Natural Science Foundation of China (10471062) and the Natural Science Foundation of Jiangsu Province (BK2006184).
Abstract: A new limited-memory symmetric rank-one algorithm is proposed. It combines a modified self-scaled symmetric rank-one (SSR1) update with limited-memory and nonmonotone line search techniques. In this algorithm, the descent search direction is generated by an inverse limited-memory SSR1 update, thus simplifying the computation. A numerical comparison of the algorithm with the well-known limited-memory BFGS algorithm is given. The comparison results indicate that the new algorithm can handle a class of large-scale unconstrained optimization problems.
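The basic symmetric rank-one update on which such methods build can be sketched as follows (this is the plain, unscaled inverse-Hessian form, not the paper's modified self-scaled limited-memory variant). The quadratic test problem is a hypothetical illustration of SR1's finite-termination property.

```python
import numpy as np

def sr1_inverse_update(H, s, y, eps=1e-8):
    """Plain symmetric rank-one (SR1) update of an inverse-Hessian estimate.

    s = x_{k+1} - x_k and y = g_{k+1} - g_k form a secant pair; the update
    is skipped when its denominator is tiny (the standard SR1 safeguard)."""
    r = s - H @ y
    denom = r @ y
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(y):
        return H
    return H + np.outer(r, r) / denom

# On a quadratic f(x) = 0.5 x^T A x, secant pairs satisfy y = A s, and n
# well-defined SR1 updates with independent steps recover A^{-1} exactly.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
H = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    H = sr1_inverse_update(H, s, A @ s)
# H now equals inv(A)
```

A limited-memory variant stores only the most recent (s, y) pairs and applies the updates implicitly, which is what keeps the per-iteration cost low for large-scale problems.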
Abstract: With its grain yield accounting for 20% of the national total, northeast China is a strategic region for ensuring national grain security and also the region where large grain farmers are most concentrated. Through a sampling survey of large grain farmers in 15 counties and cities of northeast China, with the aid of SPSS and AMOS software, and using multiple regression analysis and structural equation modeling, this paper makes a quantitative analysis of the influence of the subjective and objective factors of large grain farmers on their large-scale management. The results show that the age structure, educational level, family operating capital, yield expectation, and protective-farming awareness of large grain farmers are positive factors influencing their large-scale operation under agricultural subsidy policy. By comparison, the number of agricultural machinery and equipment owned by the family, the regional labor force, the expectation for future income, and the expectation for contractual scale are negative factors influencing the large-scale operation of large grain farmers under agricultural policies. When the future expectation, self conditions, family endowment, and operation conditions of large grain farmers increase by one unit, their large-scale operation motivation will increase by 0.692, 0.689, 0.487, and 0.363 units, respectively. Thus, increasing the future expectation and self conditions of large grain farmers is key to promoting large-scale operation of farmland.
Funding: National Key Basic Research Program of China, No. 2010CB428403; National Grand Science and Technology Special Project of Water Pollution Control and Improvement, No. 2009ZX07210-006.
Abstract: The regional hydrological system is extremely complex because it is affected not only by physical factors but also by human dimensions, and hydrological models play a very important role in simulating this complex system. However, there have been no effective methods for model reliability and uncertainty analysis, owing to the system's complexity and difficulty. The uncertainties in hydrological modeling come from four important aspects: uncertainties in input data and parameters, uncertainties in model structure, uncertainties in the analysis method, and the initial and boundary conditions. This paper systematically reviews recent advances in uncertainty analysis approaches for large-scale complex hydrological models on the basis of these uncertainty sources. The shortcomings and insufficiencies of uncertainty analysis for complex hydrological models are also pointed out. A new uncertainty quantification platform, PSUADE, and its uncertainty quantification methods are then introduced; it will be a powerful tool and platform for uncertainty analysis of large-scale complex hydrological models. Finally, some future perspectives on uncertainty quantification are put forward.
Funding: Supported by the National Natural Science Foundation of China (62376219 and 62006194), the Foundational Research Project in Specialized Discipline (Grant No. G2024WD0146), and the Faculty Construction Project (Grant No. 24GH0201148).
Abstract: Large language models (LLMs) have undergone significant expansion and have been increasingly integrated across various domains. Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions. However, for embodied tasks, where robots interact with complex environments, text-only LLMs often face challenges due to a lack of compatibility with robotic visual perception. This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks. Additionally, we propose a framework that utilizes the multimodal GPT-4V to enhance embodied task planning through the combination of natural language instructions and robot visual perceptions. Our results, based on diverse datasets, indicate that GPT-4V effectively enhances robot performance in embodied tasks. This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights towards bridging the gap in Human-Robot-Environment interaction.
Abstract: Purpose: Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments, and promotion. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. Design/methodology/approach: This article assesses which ChatGPT inputs (full text without tables, figures, and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. Findings: The optimal input is the article title and abstract, with average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlating at 0.67 with human scores, the highest ever reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66) and 4o-mini (0.66). Research limitations: The data is a convenience sample of the work of a single author, it only includes one field, and the scores are self-evaluations. Practical implications: The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, whilst abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into human-scale scores, which is 31% more accurate than guessing. Originality/value: This is the first systematic comparison of the impact of different prompts, parameters, and inputs for ChatGPT research quality evaluations.
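The final calibration step, converting model scores to the human scale with linear regression, reduces to a one-dimensional least-squares fit. The scores below are invented placeholder numbers, not data from the study.

```python
import numpy as np

# Hypothetical LLM quality scores and matching human scores (placeholders).
llm = np.array([2.1, 2.8, 3.0, 3.4, 3.9, 4.2])
human = np.array([1.0, 2.0, 2.0, 3.0, 3.0, 4.0])

slope, intercept = np.polyfit(llm, human, 1)    # least-squares line fit
calibrated = slope * llm + intercept            # scores on the human scale
```

Because least squares minimizes mean squared error over all affine maps, the calibrated scores can never fit the human scores worse than the raw model scores do.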
Funding: Funded by the National Key R&D Program of China (2023YFA1008704), the National Natural Science Foundation of China (Grant No. 62377044), the Beijing Key Laboratory of Big Data Management and Analysis Methods, the Major Innovation & Planning Interdisciplinary Platform for the "Double-First Class" Initiative, funds for building world-class universities (disciplines) of Renmin University of China, and PCC@RUC. The authors would like to extend their sincere gratitude to Yankai Lin for his constructive feedback throughout the development of this work.
Abstract: Recently, tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems. Despite growing attention and rapid advancements in this field, the existing literature remains fragmented and lacks systematic organization, posing barriers to entry for newcomers. This gap motivates us to conduct a comprehensive survey of existing work on tool learning with LLMs. In this survey, we focus on reviewing the existing literature from two primary aspects: (1) why tool learning is beneficial and (2) how tool learning is implemented, enabling a comprehensive understanding of tool learning with LLMs. We first explore the "why" by reviewing both the benefits of tool integration and the inherent benefits of the tool learning paradigm from six specific aspects. In terms of "how", we systematically review the literature according to a taxonomy of four key stages in the tool learning workflow: task planning, tool selection, tool calling, and response generation. Additionally, we provide a detailed summary of existing benchmarks and evaluation methods, categorizing them according to their relevance to different stages. Finally, we discuss current challenges and outline potential future directions, aiming to inspire both researchers and industrial developers to further explore this emerging and promising area.
Funding: Supported by the National Natural Science Foundation of China (Nos. 91441202 and 51476087).
Abstract: Large Eddy Simulations (LES) in conjunction with the Flamelet Progress Variable (FPV) approach have been performed to investigate the flame and large-scale flow structures in the bluff-body stabilized non-premixed flames HM1 and HM3. The validity of the numerical methods is first verified by comparing the predicted velocity and composition fields with experimental measurements. Then the evolution of the flame and large-scale flow structures is analyzed as the flames approach blow-off. The analysis of instantaneous and statistical data indicates that there is a shift in the control mechanism in the recirculation zone between the two flames. In the recirculation zone, the HM1 flame is mainly controlled by the mixing effect, and ignition mainly occurs in the outer shear layer. In the HM3 flame, both chemical reactions and mixing are important in the recirculation zone. The Proper Orthogonal Decomposition (POD) results show that the fluctuations in the outer shear layer are more intense in HM1, while the flow structures are more evident in the outer vortex structure in HM3, due to the different control mechanisms in the recirculation zone. It is further shown that the flow structures in HM1 spread more widely in the intense mixing zone due to higher temperature and less extinction.
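The POD analysis mentioned above can be sketched with the standard snapshot method: stack field snapshots as columns, subtract the temporal mean, and take a thin SVD. The data here are random stand-ins for LES velocity snapshots, purely for illustration.

```python
import numpy as np

# 40 snapshots of a 500-point field (hypothetical stand-in for LES data)
rng = np.random.default_rng(1)
snapshots = rng.normal(size=(500, 40))
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)  # remove mean flow

# Left singular vectors are the POD modes; squared singular values rank
# each mode's share of the fluctuation energy, in decreasing order.
modes, sing, _ = np.linalg.svd(fluct, full_matrices=False)
energy = sing**2 / np.sum(sing**2)
```

Comparing the leading modes and their energy fractions between cases is what supports statements such as "fluctuations in the outer shear layer are more intense in HM1".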
Funding: Supported by China Petroleum and Chemical Corporation (No. 200058).
Abstract: The gas holdups of large bubbles and small bubbles were measured by means of the dynamic gas disengagement approach in a pressurized bubble column with a diameter of 0.3 m and a height of 6.6 m. The effects of superficial gas velocity, liquid surface tension, liquid viscosity, and system pressure on the gas holdups of small bubbles and large bubbles were investigated. The holdup of large bubbles increases and the holdup of small bubbles decreases with an increase in liquid viscosity. Meanwhile, the holdup of large bubbles decreases with increasing system pressure. A correlation for the holdup of small bubbles was obtained from the experimental data.
Funding: Supported by the National Key R&D Program of China under Grant No. 2022YFB3103500; the National Natural Science Foundation of China under Grants No. 62402087 and No. 62020106013; the Sichuan Science and Technology Program under Grant No. 2023ZYD0142; the Chengdu Science and Technology Program under Grant No. 2023-XT00-00002-GX; the Fundamental Research Funds for Chinese Central Universities under Grants No. ZYGX2020ZB027 and No. Y030232063003002; and the Postdoctoral Innovation Talents Support Program under Grant No. BX20230060.
Abstract: The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works have often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy within the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
Funding: National Natural Science Foundation of China (No. 42371446); Natural Science Foundation of Hubei Province (No. 2024AFD412); Fundamental Research Funds for National Universities, China University of Geosciences (Wuhan) (No. 2024XLA17).
Abstract: In recent years, Volunteered Geographic Information (VGI) has emerged as a crucial source of mapping data, contributed by users through crowdsourcing platforms such as OpenStreetMap. This paper presents a novel approach that integrates Large Language Models (LLMs) into a fully automated mapping workflow utilizing VGI data. The process leverages prompt engineering, which involves designing and optimizing input instructions to ensure the LLM produces the desired mapping outputs. By constructing precise and detailed prompts, LLM agents are able to accurately interpret mapping requirements and autonomously extract, analyze, and process VGI geospatial data. They dynamically interact with mapping tools to automate the entire mapping process, from data acquisition to map generation. This approach significantly streamlines the creation of high-quality mapping outputs, reducing the time and resources typically required for such tasks. Moreover, the system lowers the barrier for non-expert users, enabling them to generate accurate maps without extensive technical expertise. Through various case studies, we demonstrate the LLM's application across different mapping scenarios, highlighting its potential to enhance the efficiency, accuracy, and accessibility of map production. The results suggest that LLM-powered mapping systems can not only optimize VGI data processing but also expand the usability of ubiquitous mapping across diverse fields, including urban planning and infrastructure development.
Abstract: Software security poses substantial risks to our society because software has become part of our life. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two of the critical methods, and both benefit significantly from advancements in deep learning technologies. Following the successful use of deep learning in software security, researchers have recently explored the potential of using large language models (LLMs) in this area. In this paper, we systematically review the results of work focusing on LLMs in software security. We analyze the topics of fuzzing, unit testing, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss future directions for using LLMs in software security, including directions for the existing uses of LLMs and extensions from conventional deep learning research.
Funding: National Key Basic Research Program of China (973 Program), No. 2010CB950904; National Natural Science Foundation of China, No. 40971223; Knowledge Innovation Project of CAS, No. KZCX2-EW-306.
Abstract: In ¹³⁷Cs tracing studies, it is a prerequisite to determine suitable ¹³⁷Cs reference inventory (CRI) plots and CRI values. Owing to the heterogeneous spatial distribution of ¹³⁷Cs deposition on the ground and diverse, or even irregular, operations in sampling and testing procedures, CRI determination usually faces many difficulties and uncertainties. Additional difficulties arise in investigations of large-scale regions because of time constraints and measurement cost limitations. In this study, traditional CRI acquisition methods are summarized first, and then a new composite scheme is established, involving seven core steps and coupling model estimates with sample measurements. This CRI determination methodology was implemented in the central-eastern Inner Mongolia Plateau. The case study results showed that the CRI in the dark chestnut soil sub-region, located in the east and south of Xing'an City, was 2447 Bq·m⁻²; the CRI in the aeolian sandy soil sub-region, positioned in south-central Tongliao City and central Chifeng City, was 2430 Bq·m⁻²; the CRI in the sandy chernozem soil sub-region, situated in northwestern Chifeng City, was 2384 Bq·m⁻²; and the CRI in the chestnut soil sub-region, in southern Xilin Gol City, was 2368 Bq·m⁻². The newly proposed CRI determination scheme proved effective, and the determined CRI plots and values are convincing. The methodology offers a framework for ¹³⁷Cs tracing studies in large-scale regions or long-distance transects.