Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the trainable parameters of a larger layer preserves fine-tuning accuracy better than shrinking those of a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
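The first finding above suggests allocating adapter capacity unevenly across layer types. As a rough illustration only, the sketch below gives the larger MLP projections a smaller LoRA rank than the self-attention projections, assuming LLaMA-style module names and the Hugging Face peft API; it is a hedged sketch, not the paper's exact configuration.

```python
# Sketch: per-module LoRA ranks via peft's rank_pattern, assuming LLaMA-style
# module names (q_proj, ..., down_proj); rank values are illustrative, not tuned.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                   # default rank, kept for the attention projections
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    # shrink only the MLP adapters; the abstract suggests these larger layers
    # tolerate rank reduction better than the self-attention layers
    rank_pattern={"gate_proj": 4, "up_proj": 4, "down_proj": 4},
    task_type="CAUSAL_LM",
)
# model = get_peft_model(base_model, lora_config)  # then fine-tune (or QLoRA-quantize) as usual
```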
Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows is underdeveloped. We introduce a novel approach centered on domain-specific LLM adaptation. Fine-tuning Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28,716 natural language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, enables direct translation from natural language descriptions to executable CFD setups. A multi-agent system orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, achieving 88.7% solution accuracy and an 82.6% first-attempt success rate. This significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while also requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model have been deposited at https://github.com/YYgroup/AutoCFD.
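To make the orchestration step concrete, the following is a minimal control-flow sketch of the verify-generate-run-correct loop described above. Every agent function is a stubbed placeholder standing in for an LLM or solver call; none of this is AutoCFD's actual API.

```python
# Conceptual sketch of the multi-agent loop: verify -> generate -> run -> correct.
# All functions below are placeholders (assumptions), not AutoCFD code.
from dataclasses import dataclass

MAX_CORRECTIONS = 5

@dataclass
class RunResult:
    succeeded: bool
    log: str

def verify_input(description: str) -> str:
    return description.strip()                           # placeholder input check

def generate_config(spec: str) -> str:
    return f"// OpenFOAM case generated for: {spec}"      # placeholder fine-tuned-LLM call

def run_simulation(config: str) -> RunResult:
    return RunResult(succeeded=True, log="")              # placeholder solver launch

def correct_errors(config: str, log: str) -> str:
    return config + f"\n// revised after error: {log}"    # placeholder LLM repair step

def run_case(description: str) -> str:
    config = generate_config(verify_input(description))
    for _ in range(MAX_CORRECTIONS):
        result = run_simulation(config)
        if result.succeeded:
            return config
        config = correct_errors(config, result.log)
    raise RuntimeError("case not fixed within the correction budget")

print(run_case("2D lid-driven cavity flow at Re = 1000"))
```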
A complete examination of Large Language Models' strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies on 25+ LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks like ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose an LLM application taxonomy by grouping techniques by task focus: healthcare, chemistry, sentiment analysis, agent-based simulations, and multimodal integration. Advanced methods like parameter-efficient tuning (LoRA), quantum-enhanced embeddings (DeepKet), retrieval-augmented generation (RAG), and safety-focused models (GalaxyGPT) are evaluated for dataset requirements, computational efficiency, and performance measures. Frameworks for ethical issues, data-limited hallucinations, and KDGI-enhanced fine-tuning such as Woodpecker's post-remedy corrections are highlighted. The investigation's scope, aims, and methods are described, but the primary results are not. The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings perform better for context-heavy applications. In medical text normalization, ChatGPT-4 outperforms previous models, while multimodal frameworks such as GeoRSCLIP improve remote sensing. Parameter-efficient tuning technologies like LoRA incur minimal computing cost with similar performance, demonstrating the necessity for adaptive models in multiple domains. The study aims to discover the optimum domain-specific models, explain domain-specific fine-tuning, and present quantum and multimodal LLMs to address scalability and cross-domain issues. The framework helps academics and practitioners identify, adapt, and innovate LLMs for different purposes. This work advances the field of efficient, interpretable, and ethical LLM application research.
In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model (LLM) LLaMA and the BERT NLP model, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
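For readers who want a concrete starting point for the fine-tuning route above, the sketch below fine-tunes a BERT classifier on a two-example toy batch of airline reviews with the Hugging Face transformers library; the checkpoint, three-class label scheme, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Minimal BERT fine-tuning sketch for airline-review sentiment (toy data).
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

reviews = ["The crew was friendly and boarding was fast.",
           "Flight delayed three hours with no updates."]
labels = torch.tensor([2, 0])            # assumed scheme: 0=negative, 1=neutral, 2=positive

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
batch = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                       # a few epochs are typical for small-scale domain adaptation
    optimizer.zero_grad()
    out = model(**batch, labels=labels)  # cross-entropy loss is returned on the output
    out.loss.backward()
    optimizer.step()
```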
Mo_(2)C is an excellent electrocatalyst for the hydrogen evolution reaction (HER). However, Mo_(2)C is a poor electrocatalyst for the oxygen evolution reaction (OER). Herein, two different elements, namely Co and Fe, are incorporated in Mo_(2)C, which therefore has a finely tuned electronic structure that is not achievable by incorporating either metal alone. Consequently, the resulting electrocatalyst Co_(0.8)Fe_(0.2)-Mo_(2)C-80 displayed excellent OER catalytic performance, which is evidenced by a low overpotential of 214.0 (and 246.5) mV to attain a current density of 10 (and 50) mA cm^(-2), an ultralow Tafel slope of 38.4 mV dec^(-1), and long-term stability in alkaline medium. Theoretical data demonstrate that Co_(0.8)Fe_(0.2)-Mo_(2)C-80 requires the lowest overpotential (1.00 V) for OER and that the Co centers are the active sites. The ultrahigh catalytic performance of the electrocatalyst is attributed to its excellent intrinsic catalytic activity, arising from the high Brunauer-Emmett-Teller specific surface area, large electrochemically active surface area, small Tafel slope, and low charge-transfer resistance.
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is contextual learning, which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs to detect such content, with approaches ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, it is found that the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language. However, the fine-tuned approach tends to produce many false positives.
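As an illustration of the few-shot route above, the snippet below assembles a classification prompt from a handful of labeled demonstrations; the instruction wording, examples, and labels are illustrative assumptions, not the prompts used in the study.

```python
# Sketch: building a few-shot prompt for sexism detection with an instruction-tuned LLM.
FEW_SHOT = [
    ("Women shouldn't be allowed to drive.", "sexist"),
    ("I really enjoyed the concert last night.", "not sexist"),
]

def build_prompt(text: str) -> str:
    lines = ["Classify the text as 'sexist' or 'not sexist'. Answer with the label only.", ""]
    for example, label in FEW_SHOT:      # demonstrations could also be retrieved, as in the study
        lines.append(f"Text: {example}\nLabel: {label}\n")
    lines.append(f"Text: {text}\nLabel:")
    return "\n".join(lines)

print(build_prompt("Girls are terrible at math."))
```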
The existing multi-objective wheel profile optimization methods mainly consist of three sub-modules: (1) wheel profile generation, (2) multi-body dynamics simulation, and (3) an optimization algorithm. For the first module, a comparably conservative rotary-scaling fine-tuning (RSFT) method, which introduces two design variables and an empirical formula, is proposed to fine-tune traditional wheel profiles for improved engineering applicability. For the second module, for the TRAXX locomotives serving on the Blankenburg–Rubeland line, an optimization function representing the relationship between the wheel profile and the wheel–rail wear number is established based on a Kriging surrogate model (KSM). For the third module, a method combining the regression capability of the KSM with the iterative computing power of particle swarm optimization (PSO) is proposed to quickly and reliably optimize wheel profiles. Finally, with the RSFT–KSM–PSO method, we propose two wear-resistant wheel profiles for the TRAXX locomotives serving on the Blankenburg–Rubeland line, namely S1002-S and S1002-M. The S1002-S profile reduces the total wear number by 30%, while the S1002-M profile makes the wear distribution more uniform through a proper sacrifice of the tread wear number, with the total wear number reduced by 21%. The quasi-static and hunting stability tests further demonstrate that the profile designed by the RSFT–KSM–PSO method is promising for practical engineering applications.
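To illustrate the third module, here is a bare-bones particle swarm optimization loop minimizing a stand-in surrogate objective; the quadratic objective and the PSO hyperparameters are illustrative assumptions, not the paper's Kriging wear model.

```python
# Minimal PSO over a placeholder surrogate objective (stand-in for the KSM wear prediction).
import numpy as np

rng = np.random.default_rng(0)

def surrogate_wear(x):                   # placeholder for the Kriging-predicted wear number
    return np.sum((x - 0.3) ** 2, axis=-1)

n_particles, n_dims, iters = 30, 2, 100  # two design variables, as in the RSFT parameterization
w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration coefficients (typical values)

x = rng.uniform(-1.0, 1.0, (n_particles, n_dims))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), surrogate_wear(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = surrogate_wear(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best design variables:", gbest.round(4))
```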
As the realm of enterprise-level conversational AI continues to evolve, it becomes evident that while generalized Large Language Models (LLMs) like GPT-3.5 bring remarkable capabilities, they also bring forth formidable challenges. These models, honed on vast and diverse datasets, have undoubtedly pushed the boundaries of natural language understanding and generation. However, they often stumble when faced with the intricate demands of nuanced enterprise applications. This research advocates for a strategic paradigm shift, urging enterprises to embrace a fine-tuning approach as a means to optimize conversational AI. While generalized LLMs are linguistic marvels, their inability to cater to the specific needs of businesses across various industries poses a critical challenge. This strategic shift involves empowering enterprises to seamlessly integrate their own datasets into LLMs, a process that extends beyond linguistic enhancement. The core concept of this approach centers on customization, enabling businesses to fine-tune the AI’s functionality to fit precisely within their unique business landscapes. By immersing the LLM in industry-specific documents, customer interaction records, internal reports, and regulatory guidelines, the AI transcends its generic capabilities to become a sophisticated conversational partner aligned with the intricacies of the enterprise’s domain. The transformative potential of this fine-tuning approach cannot be overstated. It enables a transition from a universal AI solution to a highly customizable tool. The AI evolves from being a linguistic powerhouse to a contextually aware, industry-savvy assistant. As a result, it not only responds with linguistic accuracy but also with depth, relevance, and resonance, significantly elevating user experiences and operational efficiency. In the subsequent sections, this paper delves into the intricacies of fine-tuning, exploring the multifaceted challenges and abundant opportunities it presents. It addresses the technical intricacies of data integration, ethical considerations surrounding data usage, and the broader implications for the future of enterprise AI. The journey embarked upon in this research holds the potential to redefine the role of conversational AI in enterprises, ushering in an era where AI becomes a dynamic, deeply relevant, and highly effective tool, empowering businesses to excel in an ever-evolving digital landscape.
To analyze the differences in the transport and distribution of different types of proppants and to address issues such as the short effective support length of proppants and poor placement in hydraulically intersecting fractures, this study considered the combined impact of geological-engineering factors on conductivity. Using reservoir production parameters and the discrete element method (DEM), multispherical proppants were constructed. Additionally, a 3D fracture model, based on the specified conditions of the L block, employed coupled computational fluid dynamics-discrete element method (CFD-DEM) simulations to quantitatively analyze the transport and placement patterns of multispherical proppants in intersecting fractures. Results indicate that turbulent kinetic energy is an intrinsic factor affecting proppant transport. Moreover, the placement efficiency and migration distance of the low-sphericity quartz sand constructed by the DEM in the main fracture are significantly reduced compared to spherical ceramic proppants, with a 27.7% decrease in the volume fraction at the fracture surface, subsequently lowering the placement concentration and damaging fracture conductivity. Compared to small-angle fractures, controlling artificial and natural fractures to expand at angles of 45° to 60° increases the effective support length by approximately 20.6%. During hydraulic fracturing of gas wells, the fracture support area and post-closure conductivity can be ensured by controlling the sphericity of proppants and adjusting the perforation direction to control the direction of artificial fractures.
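Since proppant sphericity is the control variable highlighted above, the snippet below evaluates the Wadell sphericity (surface area of the volume-equivalent sphere divided by the particle's actual surface area) for an ideal sphere and a rougher grain; the particle numbers are illustrative, not the paper's data.

```python
# Wadell sphericity: area of the volume-equivalent sphere / actual surface area.
import math

def wadell_sphericity(volume: float, surface_area: float) -> float:
    return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area

r = 0.3e-3                                            # 0.3 mm nominal particle radius (illustrative)
ideal = wadell_sphericity(4 / 3 * math.pi * r**3, 4 * math.pi * r**2)
rough = wadell_sphericity(4 / 3 * math.pi * r**3, 1.3 * 4 * math.pi * r**2)  # 30% extra surface area
print(round(ideal, 3), round(rough, 3))               # 1.0 for the sphere vs ~0.77 for the rougher grain
```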
Modal parameters can accurately characterize structural dynamic properties and assess the physical state of a structure. Therefore, it is particularly significant to identify the structural modal parameters from the monitoring data in a structural health monitoring (SHM) system, so as to provide a scientific basis for structural damage identification and dynamic model updating. In view of this, this paper reviews methods for identifying structural modal parameters under environmental excitation and briefly describes how to identify structural damage based on the derived modal parameters. The paper primarily introduces data-driven modal parameter recognition methods (e.g., time-domain, frequency-domain, and time-frequency-domain methods), briefly describes damage identification methods based on the variations of modal parameters (e.g., natural frequencies, mode shapes, and curvature mode shapes) and modal validation methods (e.g., the stability diagram and the Modal Assurance Criterion). The current status of artificial intelligence (AI) methods applied to modal parameter recognition and damage identification is further discussed. Based on the previous analysis, the main development trends of structural modal parameter recognition and damage identification methods are given to provide scientific references for the optimized design and functional upgrading of SHM systems.
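As one example of the modal validation tools mentioned above, the Modal Assurance Criterion (MAC) measures the consistency between two identified mode shapes; the sketch below is a standard textbook implementation with illustrative vectors.

```python
# MAC between two mode shapes: ~1 means the same physical mode, ~0 means independent modes.
import numpy as np

def mac(phi_i: np.ndarray, phi_j: np.ndarray) -> float:
    num = np.abs(phi_i.conj() @ phi_j) ** 2
    den = (phi_i.conj() @ phi_i).real * (phi_j.conj() @ phi_j).real
    return float(num / den)

phi_a = np.array([1.0, 0.8, 0.3, -0.2])
phi_b = np.array([0.98, 0.82, 0.28, -0.24])   # the same mode identified from another data set
print(round(mac(phi_a, phi_b), 3))            # close to 1.0
```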
This study investigated the physicochemical properties, enzyme activities, volatile flavor components, microbial communities, and sensory evaluation of high-temperature Daqu (HTD) during the maturation process, and a standard system was established for comprehensive quality evaluation of HTD. There were obvious changes in the physicochemical properties, enzyme activities, and volatile flavor components at different storage periods, which affected the sensory evaluation of HTD to a certain extent. The results of high-throughput sequencing revealed significant microbial diversity and showed that the bacterial community changed significantly more than the fungal community did. During the storage process, the dominant bacterial genera were Kroppenstedtia and Thermoascus. The correlation between dominant microorganisms and quality indicators highlighted their role in HTD quality. Lactococcus, Candida, Pichia, Paecilomyces, and protease activity played a crucial role in the formation of isovaleraldehyde. Acidic protease activity had the greatest impact on the microbial community. Moisture promoted isobutyric acid generation. Furthermore, a comprehensive quality evaluation standard system was established by the entropy weight method combined with multi-factor fuzzy mathematics. Consequently, this study provides innovative insights for comprehensive quality evaluation of HTD during storage and establishes a groundwork for scientific and rational storage of HTD and quality control of sauce-flavor Baijiu.
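As a sketch of how the entropy weight method can feed such a comprehensive score, the snippet below derives indicator weights from a small sample-by-indicator matrix; the 4x3 data matrix is illustrative, not measured HTD data.

```python
# Entropy weight method: more dispersed (higher-information) indicators receive larger weights.
import numpy as np

X = np.array([[0.62, 35.0, 1.10],
              [0.70, 40.0, 1.25],
              [0.66, 38.0, 1.05],
              [0.75, 42.0, 1.30]])              # samples x indicators (benefit-type, illustrative)

def entropy_weights(X: np.ndarray) -> np.ndarray:
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) + 1e-12  # min-max normalize
    P = Z / Z.sum(axis=0)                       # proportion of each sample under each indicator
    k = 1.0 / np.log(X.shape[0])
    e = -k * (P * np.log(P)).sum(axis=0)        # information entropy per indicator
    d = 1.0 - e                                 # degree of diversification
    return d / d.sum()

w = entropy_weights(X)
norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print("weights:", w.round(3), "comprehensive scores:", (norm @ w).round(3))
```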
Due to the heterogeneity of rock masses and the variability of in situ stress, the traditional linear inversion method cannot reproduce the in situ stress field with high accuracy. To address this challenge, nonlinear stress boundaries for a numerical model are determined through regression analysis of a series of nonlinear coefficient matrices, which are derived from the bubbling method. Considering the randomness and flexibility of the bubbling method, a parametric study is conducted to determine recommended ranges for these parameters, including the standard deviation (σb) of bubble radii, the non-uniform coefficient matrix number (λ) for nonlinear stress boundaries, and the number (m) and positions of in situ stress measurement points. A model case study provides a reference for the selection of these parameters. Additionally, when the nonlinear in situ stress inversion method is employed, stress distortion inevitably occurs near model boundaries, in line with Saint-Venant's principle. Two strategies are proposed accordingly: employing a systematic reduction of nonlinear coefficients to achieve high inversion accuracy while minimizing significant stress distortion, and excluding regions with severe stress distortion near the model edges while utilizing the central part of the model for subsequent simulations. These two strategies have been successfully implemented in the nonlinear in situ stress inversion of the Xincheng Gold Mine and have achieved higher inversion accuracy than the linear method. Specifically, the linear and nonlinear inversion methods yield root mean square errors (RMSE) of 4.15 and 3.2, and inversion relative errors (δAve) of 22.08% and 17.55%, respectively. Therefore, the nonlinear inversion method outperforms the traditional multiple linear regression method, even in the presence of a systematic reduction of the nonlinear stress boundaries.
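The two error measures quoted above are straightforward to reproduce once measured and inverted stresses are available at the monitoring points; the arrays below are illustrative placeholders, not the Xincheng Gold Mine data.

```python
# RMSE and mean relative error between measured and inverted in situ stress components.
import numpy as np

measured = np.array([18.2, 22.5, 30.1, 12.4, 25.3])   # MPa (illustrative)
inverted = np.array([17.0, 24.1, 27.8, 13.5, 26.9])   # MPa (illustrative)

rmse = np.sqrt(np.mean((inverted - measured) ** 2))
rel_err = np.mean(np.abs(inverted - measured) / np.abs(measured)) * 100.0
print(f"RMSE = {rmse:.2f} MPa, mean relative error = {rel_err:.2f}%")
```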
The separation-of-variable (SOV) methods, such as the improved SOV method, the variational SOV method, and the extended SOV method, have been proposed by the present authors and coworkers to obtain closed-form analytical solutions for the free vibration and eigenbuckling of rectangular plates and circular cylindrical shells. Taking the free vibration of rectangular thin plates as an example, this work presents the theoretical framework of the SOV methods in an instructive way, together with the bisection-based solution procedures for a group of nonlinear eigenvalue equations. Besides, the explicit equations of the nodal lines of the SOV methods are presented, and the relations between nodal line patterns and frequency orders are investigated. It is concluded that the highly accurate SOV methods have the same accuracy for all frequencies, that the mode shapes associated with repeated frequencies can also be precisely captured, and that the SOV methods do not suffer from the problem of missing roots.
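To show what the bisection-based solution of a nonlinear eigenvalue equation looks like in practice, the sketch below brackets the first root of a transcendental frequency equation; the clamped-free beam equation cos(x)cosh(x) + 1 = 0 is used as a standard stand-in, not the plate equations of the paper.

```python
# Bisection on a transcendental frequency equation (clamped-free beam stand-in).
import math

def f(x: float) -> float:
    return math.cos(x) * math.cosh(x) + 1.0

def bisect(a: float, b: float, tol: float = 1e-10) -> float:
    fa = f(a)                      # assumes f(a) and f(b) have opposite signs
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

print(round(bisect(1.0, 3.0), 6))  # first eigenvalue parameter, approximately 1.875104
```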
Soil improvement is one of the most important issues in geotechnical engineering practice. The wide application of traditional improvement techniques (cement/chemical materials) is limited because they damage the ecological environment and intensify carbon emissions. However, the use of microbially induced calcium carbonate precipitation (MICP) to obtain bio-cement is a novel technique with the potential to induce soil stability, providing a low-carbon, environment-friendly, and sustainable integrated solution for some geotechnical engineering problems in the environment. This paper presents a comprehensive review of the latest progress in soil improvement based on the MICP strategy. It systematically summarizes and overviews the mineralization mechanism, influencing factors, improved methods, engineering characteristics, and current field application status of MICP. Additionally, it explores the limitations and correspondingly proposes prospective applications of the MICP approach for soil improvement. This review indicates that the utilization of different environmental calcium-based wastes in MICP and the combination of other materials with MICP are conducive to meeting engineering and market demand. Furthermore, we recommend and encourage global collaborative study and practice with a view to commercializing the MICP technique in the future. The current review purports to provide insights for engineers and interdisciplinary researchers, and guidance for future engineering applications.
Bearings are indispensable key components in mechanical equipment, and their working state is directly related to the stability and safety of the whole machine. In recent years, the rapid development of artificial intelligence technology, and especially the breakthrough of deep learning, has provided new ideas for bearing fault diagnosis. Deep learning can automatically learn features from large amounts of data, has strong nonlinear modeling ability, and can effectively solve the problems of traditional methods. Aiming at the key problems in bearing fault diagnosis, this paper studies fault diagnosis methods based on deep learning, which not only provides a new solution for bearing fault diagnosis but also offers a reference for the application of deep learning in other mechanical fault diagnosis fields.
Objective: To improve the accuracy and professionalism of a question-answering (QA) model for traditional Chinese medicine (TCM) lung cancer by integrating large language models with structured knowledge graphs using the knowledge graph (KG)-to-text enhanced retrieval-augmented generation (KG2TRAG) method. Methods: The TCM lung cancer model (TCMLCM) was constructed by fine-tuning ChatGLM2-6B on the specialized datasets Tianchi TCM, HuangDi, and ShenNong-TCM-Dataset, as well as a TCM lung cancer KG. The KG2TRAG method was applied to enhance knowledge retrieval; it converts KG triples into natural language text via ChatGPT-aided linearization, leveraging large language models (LLMs) for context-aware reasoning. For a comprehensive comparison, MedicalGPT, HuatuoGPT, and BenTsao were selected as the baseline models. Performance was evaluated using bilingual evaluation understudy (BLEU), recall-oriented understudy for gisting evaluation (ROUGE), accuracy, and the domain-specific TCM-LCEval metrics, with validation from TCM oncology experts assessing answer accuracy, professionalism, and usability. Results: The TCMLCM model achieved the best performance across all metrics, including a BLEU score of 32.15%, ROUGE-L of 59.08%, and an accuracy rate of 79.68%. Notably, in the TCM-LCEval assessment specific to the field of TCM, its performance was 3%-12% higher than that of the baseline models. Expert evaluations highlighted superior performance in accuracy and professionalism. Conclusion: TCMLCM provides an innovative solution for TCM lung cancer QA, demonstrating the feasibility of integrating structured KGs with LLMs. This work advances intelligent TCM healthcare tools and lays a foundation for future AI-driven applications in traditional medicine.
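To give a flavor of the linearization step in the Methods, the snippet below turns knowledge-graph triples into sentences and splices them into a QA prompt; the triples, template, and prompt wording are illustrative assumptions, not the paper's ChatGPT-aided linearization.

```python
# Sketch: linearize KG triples into context text for retrieval-augmented QA.
triples = [
    ("lung cancer (TCM)", "common syndrome", "qi-yin deficiency"),      # illustrative triples
    ("qi-yin deficiency", "associated formula", "Shengmai San"),
]

def linearize(triples) -> str:
    return " ".join(f"{head} has {relation}: {tail}." for head, relation, tail in triples)

def build_prompt(question: str) -> str:
    context = linearize(triples)       # in practice the triples come from KG retrieval
    return f"Context: {context}\nQuestion: {question}\nAnswer using only the context above."

print(build_prompt("Which formula is associated with qi-yin deficiency in lung cancer?"))
```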
Ocean energy has progressively gained considerable interest due to its sufficient potential to meet the world's energy demand, and the blade is the core component in electricity generation from ocean currents. However, the widened hydraulic excitation frequency band may satisfy the blade resonance condition due to the time variation in the velocity and angle of attack of the ocean current, even resulting in blade fatigue and destructively interfering with grid stability. A key parameter that determines the resonance amplitude of the blade is the hydrodynamic damping ratio (HDR). However, the HDR is difficult to obtain due to the complex fluid-structure interaction (FSI). Therefore, a literature review was conducted on the hydrodynamic damping characteristics of blade-like structures. The experimental and simulation methods used to identify and quantify the HDR were described, placing emphasis on the experimental processes and simulation setups. Moreover, the accuracy and efficiency of different simulation methods were compared, and the modal work approach was recommended. The effects of key parameters, including flow velocity, angle of attack, gap, rotational speed, and cavitation, on the HDR were then summarized, and suggestions on operating conditions were presented from the perspective of increasing the HDR. Subsequently, considering multiple flow parameters, several theoretical derivations and semi-empirical prediction formulas for the HDR were introduced, and their accuracy and applicability were discussed. Based on the shortcomings of the existing research, directions for future research were finally identified. The current work offers a clear understanding of the HDR of blade-like structures, which could improve the evaluation accuracy of flow-induced vibration in the design stage.
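One common experimental route to the HDR reviewed above is a free-decay (ring-down) test; the sketch below converts the logarithmic decrement between successive response peaks into a damping ratio, using illustrative peak amplitudes.

```python
# Damping ratio from the logarithmic decrement of a free-decay response.
import math

def damping_ratio(x_n: float, x_np1: float) -> float:
    delta = math.log(x_n / x_np1)              # logarithmic decrement between successive peaks
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)

print(round(damping_ratio(1.00, 0.82), 4))     # roughly 3.2% of critical damping
```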
To quantify the seismic resilience of buildings, a method for evaluating functional loss from the component level to the overall building is proposed, and the dual-parameter seismic resilience assessment method based on post-earthquake loss and recovery time is improved. A three-level function tree model is established, which can consider the dynamic changes in the weight coefficients of different categories of components relative to their functional losses. Bayesian networks are utilized to quantify the impact of weather conditions, construction technology levels, and worker skill levels on component repair time. A method for determining the real-time functional recovery curve of buildings based on the component repair process is proposed. Taking a three-story teaching building as an example, the seismic resilience indices under basic earthquakes and rare earthquakes are calculated. The results show that the seismic resilience grade of the teaching building is comprehensively judged as Grade III, and its resilience grade is more significantly affected by post-earthquake loss. The proposed method can be used to predict the seismic resilience of buildings prior to earthquakes, identify weak components within buildings, and provide guidance for taking measures to enhance the seismic resilience of buildings.
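A minimal sketch of the roll-up from component losses to a building-level functional loss is shown below, using fixed weight coefficients for brevity; the component names, losses, and weights are illustrative assumptions, and the paper's function tree additionally lets the weights change with the losses themselves.

```python
# Weighted aggregation of component functional losses into a building-level loss (simplified).
component_loss = {"structural": 0.30, "nonstructural": 0.55, "contents": 0.20}   # illustrative
weights = {"structural": 0.50, "nonstructural": 0.35, "contents": 0.15}          # sum to 1

building_loss = sum(weights[c] * component_loss[c] for c in component_loss)
print(f"building functional loss = {building_loss:.3f}")   # 0.5*0.30 + 0.35*0.55 + 0.15*0.20
```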
The self-assembled nanoparticles (SAN) formed during the decoction process of traditional Chinese medicine (TCM) exhibit non-uniform particle sizes and a tendency for aggregation. Our group found that the pH-driven method can improve the self-assembly of Herpetospermum caudigerum Wall., and the resulting SAN exhibited a uniform particle size and good stability. In this paper, we analyzed the interactions between the main active compound, herpetrione (Her), and its main carrier, Herpetospermum caudigerum Wall. polysaccharide (HCWP), along with their self-assembly mechanisms under different pH values. The binding constants of Her and HCWP increase with rising pH, leading to the formation of Her-HCWP SAN with a smaller particle size, higher zeta potential, and improved thermal stability. While the contributions of hydrogen bonding and electrostatic attraction to the formation of Her-HCWP SAN increase with rising pH, the hydrophobic force consistently plays a dominant role. This study enhances our scientific understanding of the self-assembly of TCM as improved by the pH-driven method.
Funding: supported by the National Key R&D Program of China (No. 2021YFB0301200) and the National Natural Science Foundation of China (No. 62025208).
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 52306126, 22350710788, 12432010, 11988102, 92270203) and the Xplore Prize.
Funding: financial support from SERB-SURE under file number SUR/2022/003129; Jong Hyeok Park acknowledges the support of the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (RS-2023-00302697, RS-2023-00268523).
Funding: This work is part of the research projects LaTe4PoliticES (PID2022-138099OBI00), funded by MICIU/AEI/10.13039/501100011033 and the European Regional Development Fund (ERDF)-A Way of Making Europe, and LT-SWM (TED2021-131167B-I00), funded by MICIU/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR. Mr. Ronghao Pan is supported by the Programa Investigo grant, funded by the Region of Murcia, the Spanish Ministry of Labour and Social Economy, and the European Union-NextGenerationEU under the "Plan de Recuperación, Transformación y Resiliencia (PRTR)".
Funding: the Assets4Rail Project, which is funded by the Shift2Rail Joint Undertaking under the EU's H2020 program (Grant No. 826250), and the Open Research Fund of the State Key Laboratory of Traction Power of Southwest Jiaotong University (Grant No. TPL2011); part of the experiment data concerning the railway line is supported by the DynoTRAIN Project, funded by the European Commission (Grant No. 234079); the first author is also supported by the China Scholarship Council (Grant No. 201707000113).
Funding: funded by the Major Scientific and Technological Projects of CNOOC in the 14th Five-Year Plan (No. KJGG2022-0701) and the CNOOC Research Institute (No. 2020PFS-03).
Funding: supported by the Innovation Foundation of the Provincial Education Department of Gansu (2024B-005), the Gansu Province National Science Foundation (22YF7GA182), and the Fundamental Research Funds for the Central Universities (No. lzujbky2022-kb01).
Funding: funded by the National Key R&D Program of China (Grant No. 2022YFC2903904) and the National Natural Science Foundation of China (Grant Nos. 51904057 and U1906208).
Funding: supported by the National Natural Science Foundation of China (12172023).
Funding: funded by the National Natural Science Foundation of China (No. 41962016) and the Natural Science Foundation of Ningxia (Nos. 2023AAC02023, 2023A1218, and 2021AAC02006).
文摘Soil improvement is one of the most important issues in geotechnical engineering practice.The wide application of traditional improvement techniques(cement/chemical materials)are limited due to damage ecological en-vironment and intensify carbon emissions.However,the use of microbially induced calcium carbonate pre-cipitation(MICP)to obtain bio-cement is a novel technique with the potential to induce soil stability,providing a low-carbon,environment-friendly,and sustainable integrated solution for some geotechnical engineering pro-blems in the environment.This paper presents a comprehensive review of the latest progress in soil improvement based on the MICP strategy.It systematically summarizes and overviews the mineralization mechanism,influ-encing factors,improved methods,engineering characteristics,and current field application status of the MICP.Additionally,it also explores the limitations and correspondingly proposes prospective applications via the MICP approach for soil improvement.This review indicates that the utilization of different environmental calcium-based wastes in MICP and combination of materials and MICP are conducive to meeting engineering and market demand.Furthermore,we recommend and encourage global collaborative study and practice with a view to commercializing MICP technique in the future.The current review purports to provide insights for engineers and interdisciplinary researchers,and guidance for future engineering applications.
Abstract: Bearings are indispensable key components of mechanical equipment,and their working state is directly related to the stability and safety of the whole machine.In recent years,the rapid development of artificial intelligence,especially the breakthroughs in deep learning,has provided new ideas for bearing fault diagnosis.Deep learning can automatically learn features from large amounts of data,has a strong nonlinear modeling ability,and can effectively overcome the limitations of traditional methods.Addressing the key problems in bearing fault diagnosis,this paper studies fault diagnosis methods based on deep learning,which not only provides a new solution for bearing fault diagnosis but also serves as a reference for applying deep learning to other mechanical fault diagnosis fields.
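The abstract does not specify a network architecture; as a generic illustration of the kind of deep-learning pipeline discussed, a minimal 1D-CNN classifier for raw vibration segments might look as follows (all layer sizes, the class count, and the segment length are assumptions, not details from the paper).

```python
import torch
import torch.nn as nn

class BearingCNN(nn.Module):
    """Minimal 1D CNN that maps a raw vibration segment to a fault class."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):              # x: (batch, 1, segment_length)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = BearingCNN()
dummy = torch.randn(8, 1, 2048)        # eight hypothetical vibration segments
print(model(dummy).shape)              # torch.Size([8, 4])
```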
Funding: Postgraduate Research&Practice Innovation Program of Jiangsu Province(KYCX24_2145).
Abstract: Objective To improve the accuracy and professionalism of a question-answering(QA)model for traditional Chinese medicine(TCM)lung cancer by integrating large language models(LLMs)with structured knowledge graphs using the knowledge graph to text-enhanced retrieval-augmented generation(KG2TRAG)method.Methods The TCM lung cancer model(TCMLCM)was constructed by fine-tuning ChatGLM2-6B on the specialized datasets Tianchi TCM,HuangDi,and ShenNong-TCM-Dataset,as well as a TCM lung cancer knowledge graph(KG).The KG2TRAG method was applied to enhance knowledge retrieval:it converts KG triples into natural language text via ChatGPT-aided linearization,leveraging LLMs for context-aware reasoning.For a comprehensive comparison,MedicalGPT,HuatuoGPT,and BenTsao were selected as the baseline models.Performance was evaluated using bilingual evaluation understudy(BLEU),recall-oriented understudy for gisting evaluation(ROUGE),accuracy,and the domain-specific TCM-LCEval metrics,with validation from TCM oncology experts assessing answer accuracy,professionalism,and usability.Results The TCMLCM model achieved the best performance across all metrics,including a BLEU score of 32.15%,ROUGE-L of 59.08%,and an accuracy rate of 79.68%.Notably,in the TCM-LCEval assessment specific to the TCM domain,its performance was 3%-12%higher than that of the baseline models.Expert evaluations highlighted its superior accuracy and professionalism.Conclusion TCMLCM provides an innovative solution for TCM lung cancer QA and demonstrates the feasibility of integrating structured KGs with LLMs.This work advances intelligent TCM healthcare tools and lays a foundation for future AI-driven applications in traditional medicine.
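The exact KG2TRAG prompts and pipeline are not reproduced here; as a toy sketch of the triple-linearization step described above, KG triples can be turned into plain text and prepended to the question before it is passed to the LLM. The triples, template, and question below are hypothetical.

```python
# Hypothetical KG triples (subject, relation, object) about TCM lung cancer
triples = [
    ("Astragalus", "has_function", "tonifying qi"),
    ("Astragalus", "used_in", "lung cancer supportive therapy"),
]

def linearize(triples):
    """Turn KG triples into plain sentences that a retriever or LLM can consume."""
    return " ".join(f"{s} {r.replace('_', ' ')} {o}." for s, r, o in triples)

context = linearize(triples)
question = "Which herb tonifies qi in lung cancer supportive therapy?"
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
print(prompt)
```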
Funding: supported by the National Natural Science Foundation of China(Nos.52222904 and 52309117)and the China Postdoctoral Science Foundation(Nos.2022TQ0168 and 2023M731895).
Abstract: Ocean energy has progressively gained considerable interest owing to its potential to meet the world's energy demand,and the blade is the core component in electricity generation from ocean currents.However,the broadened hydraulic excitation frequency band caused by time variations in the velocity and angle of attack of the ocean current may coincide with the natural frequencies of the blade,triggering resonance that can lead to blade fatigue and compromise grid stability.A key parameter that determines the resonance amplitude of the blade is the hydrodynamic damping ratio(HDR).However,the HDR is difficult to obtain because of the complex fluid-structure interaction(FSI).Therefore,a literature review was conducted on the hydrodynamic damping characteristics of blade-like structures.The experimental and simulation methods used to identify and quantify the HDR were described,with emphasis on the experimental processes and simulation setups.Moreover,the accuracy and efficiency of different simulation methods were compared,and the modal work approach was recommended.The effects of key parameters,including flow velocity,angle of attack,gap,rotational speed,and cavitation,on the HDR were then summarized,and suggestions on operating conditions were presented from the perspective of increasing the HDR.Subsequently,considering multiple flow parameters,several theoretical derivations and semi-empirical prediction formulas for the HDR were introduced,and their accuracy and applicability were discussed.Based on the shortcomings of the existing research,directions for future research were finally identified.The current work offers a clear understanding of the HDR of blade-like structures,which could improve the evaluation accuracy of flow-induced vibration in the design stage.
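Purely as an illustration of one common way to identify a damping ratio experimentally (the logarithmic-decrement estimate from a free-decay signal, not the modal work approach recommended above), a minimal sketch is given below; the peak amplitudes are hypothetical.

```python
import numpy as np

def damping_ratio_from_decay(peaks):
    """Estimate the damping ratio from successive free-decay peak amplitudes
    using the logarithmic decrement: zeta = delta / sqrt(4*pi^2 + delta^2)."""
    peaks = np.asarray(peaks, dtype=float)
    delta = np.mean(np.log(peaks[:-1] / peaks[1:]))   # average log decrement
    return delta / np.sqrt(4.0 * np.pi**2 + delta**2)

# Hypothetical peak amplitudes of a decaying blade vibration signal
print(damping_ratio_from_decay([1.00, 0.78, 0.61, 0.48, 0.37]))  # ≈ 0.04
```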
Funding: The National Key Research and Development Program of China(No.2023YFC3805003).
Abstract: To quantify the seismic resilience of buildings,a method for evaluating functional loss from the component level to the overall building is proposed,and the dual-parameter seismic resilience assessment method based on post-earthquake loss and recovery time is improved.A three-level function tree model is established,which can consider the dynamic changes in the weight coefficients of different categories of components relative to their functional losses.Bayesian networks are utilized to quantify the impact of weather conditions,construction technology levels,and worker skill levels on component repair time.A method for determining the real-time functional recovery curve of a building based on the component repair process is proposed.Taking a three-story teaching building as an example,the seismic resilience indices under basic and rare earthquakes are calculated.The results show that the seismic resilience of the teaching building is comprehensively judged as Grade Ⅲ and that its resilience grade is more significantly affected by post-earthquake loss.The proposed method can be used to predict the seismic resilience of buildings prior to earthquakes,identify weak components within buildings,and provide guidance for measures to enhance seismic resilience.
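As a generic illustration of how a recovery curve translates into a resilience index (not the paper's dual-parameter formulation), a common choice is the normalized area under the functionality curve Q(t) over the recovery horizon; the time points and functionality values below are hypothetical.

```python
import numpy as np

def resilience_index(time, functionality):
    """Generic resilience index: normalized area under the functionality
    recovery curve Q(t) over the recovery horizon (trapezoidal rule)."""
    time = np.asarray(time, dtype=float)
    q = np.asarray(functionality, dtype=float)
    return np.trapz(q, time) / (time[-1] - time[0])

# Hypothetical recovery curve: functionality drops after the earthquake and
# is gradually restored as components are repaired
t = [0, 10, 30, 60, 90]           # days after the earthquake
q = [0.45, 0.55, 0.75, 0.9, 1.0]  # fraction of pre-earthquake functionality
print(f"resilience index ≈ {resilience_index(t, q):.2f}")
```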
Funding: supported by the National Natural Science Foundation of China(Nos.81873092 and 82174074).
Abstract: The self-assembled nanoparticles(SAN)formed during the decoction of traditional Chinese medicine(TCM)typically exhibit non-uniform particle sizes and a tendency to aggregate.Our group found that a pH-driven method can improve the self-assembly of Herpetospermum caudigerum Wall.,yielding SAN with uniform particle size and good stability.In this paper,we analyzed the interactions between the main active compound,herpetrione(Her),and its main carrier,Herpetospermum caudigerum Wall.polysaccharide(HCWP),along with their self-assembly mechanisms at different pH values.The binding constants of Her and HCWP increase with rising pH,leading to the formation of Her-HCWP SAN with a smaller particle size,higher zeta potential,and improved thermal stability.While the contributions of hydrogen bonding and electrostatic attraction to the formation of Her-HCWP SAN increase with rising pH,the hydrophobic force consistently plays a dominant role.This study enhances the scientific understanding of how the pH-driven method improves the self-assembly phenomenon in TCM.