Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the trainable parameters of a larger layer preserves fine-tuning accuracy better than doing so in a smaller layer. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
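As a rough illustration of the adapter-rank finding, the sketch below wraps frozen linear layers with plain PyTorch LoRA adapters and gives the (smaller) self-attention projection a larger rank than the (larger) MLP projection. The module sizes, ranks, and scaling are illustrative assumptions, not the configuration used in the study.

# Minimal LoRA adapter sketch (PyTorch). Module names, sizes, and ranks are
# illustrative, not the study's configuration.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # only the adapter is trained
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Hypothetical per-layer-type ranks: the finding above suggests the large MLP
# projection tolerates a smaller rank than the smaller self-attention projection.
attn_proj = LoRALinear(nn.Linear(1024, 1024), r=16)      # self-attention: keep rank higher
mlp_proj = LoRALinear(nn.Linear(1024, 4096), r=4)        # MLP: rank can be reduced
print(sum(p.numel() for p in mlp_proj.parameters() if p.requires_grad))  # trainable adapter size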
The existing multi-objective wheel profile optimization methods mainly consist of three sub-modules: (1) wheel profile generation, (2) multi-body dynamics simulation, and (3) an optimization algorithm. For the first module, a comparatively conservative rotary-scaling fine-tuning (RSFT) method, which introduces two design variables and an empirical formula, is proposed to fine-tune the traditional wheel profiles and improve their engineering applicability. For the second module, for the TRAXX locomotives serving on the Blankenburg–Rubeland line, an optimization function representing the relationship between the wheel profile and the wheel–rail wear number is established based on a Kriging surrogate model (KSM). For the third module, a method combining the regression capability of the KSM with the iterative computing power of particle swarm optimization (PSO) is proposed to quickly and reliably implement the task of optimizing wheel profiles. Finally, with the RSFT–KSM–PSO method, we propose two wear-resistant wheel profiles for the TRAXX locomotives serving on the Blankenburg–Rubeland line, namely S1002-S and S1002-M. The S1002-S profile reduces the total wear number by 30%, while the S1002-M profile makes the wear distribution more uniform through a proper sacrifice of the tread wear number, and its total wear number is reduced by 21%. The quasi-static and hunting stability tests further demonstrate that the profile designed by the RSFT–KSM–PSO method is promising for practical engineering applications.
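To make the KSM-PSO coupling concrete, here is a minimal particle swarm loop that minimizes a surrogate-predicted wear number. The surrogate_wear function is only a placeholder standing in for the Kriging prediction, and the two design variables, their bounds, and the PSO coefficients are assumptions.

# Minimal particle swarm optimization sketch over a surrogate objective.
# surrogate_wear() is a placeholder for the Kriging (KSM) wear-number prediction;
# the design-variable bounds and PSO coefficients are assumptions.
import numpy as np

def surrogate_wear(x):
    # placeholder surrogate: any smooth function of the two RSFT design variables
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.1) ** 2

rng = np.random.default_rng(0)
n_particles, n_iter, dim = 30, 100, 2
lb, ub = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

pos = rng.uniform(lb, ub, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([surrogate_wear(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                                 # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)
    val = np.array([surrogate_wear(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("optimized design variables:", gbest)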
Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows is underdeveloped. We introduce a novel approach centered on domain-specific LLM adaptation. Fine-tuning Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28,716 natural-language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, enables direct translation from natural language descriptions to executable CFD setups. A multi-agent system orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, achieving 88.7% solution accuracy and an 82.6% first-attempt success rate. This significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while also requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model have been deposited at https://github.com/YYgroup/AutoCFD.
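For orientation, the hypothetical record below shows what one natural-language-to-OpenFOAM training pair with a CoT field might look like. The field names and the controlDict fragment are illustrative assumptions, not the actual NL2FOAM schema.

# Hypothetical illustration of one natural-language -> OpenFOAM training record
# with a chain-of-thought field. Field names and the controlDict fragment are
# assumptions for illustration, not the real NL2FOAM format.
import json

record = {
    "instruction": "Simulate laminar incompressible flow in a 2D lid-driven cavity "
                   "for 0.5 s with a time step of 0.005 s.",
    "chain_of_thought": "Laminar incompressible flow -> icoFoam solver; "
                        "endTime 0.5; deltaT 0.005; write results every 0.1 s.",
    "output": {
        "system/controlDict": (
            "application     icoFoam;\n"
            "startTime       0;\n"
            "endTime         0.5;\n"
            "deltaT          0.005;\n"
            "writeControl    timeStep;\n"
            "writeInterval   20;\n"
        )
    },
}
print(json.dumps(record, indent=2))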
A complete examination of Large Language Models' strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies on 25+ LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks like ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose an LLM application taxonomy by grouping techniques by task focus: healthcare, chemistry, sentiment analysis, agent-based simulations, and multimodal integration. Advanced methods like parameter-efficient tuning (LoRA), quantum-enhanced embeddings (DeepKet), retrieval-augmented generation (RAG), and safety-focused models (GalaxyGPT) are evaluated for dataset requirements, computational efficiency, and performance measures. Frameworks for ethical issues, data-limited hallucinations, and KDGI-enhanced fine-tuning, such as Woodpecker's post-remedy corrections, are highlighted. The investigation's scope and methods are described, but the primary results are not. The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings perform better for context-heavy applications. In medical text normalization, ChatGPT-4 outperforms previous models, while multimodal frameworks such as GeoRSCLIP improve remote sensing. Parameter-efficient tuning technologies like LoRA have minimal computing cost and similar performance, demonstrating the necessity for adaptive models in multiple domains. The aims are to discover the optimum domain-specific models, explain domain-specific fine-tuning, and present quantum and multimodal LLMs to address scalability and cross-domain issues. The framework helps academics and practitioners identify, adapt, and innovate LLMs for different purposes. This work advances the field of efficient, interpretable, and ethical LLM application research.
In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model (LLM) LLaMA and the NLP BERT model, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
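For readers unfamiliar with the fine-tuning step, a minimal Hugging Face transformers sketch for training a BERT sentiment classifier is shown below. The review texts, label set, and hyperparameters are placeholders, not those used in the study.

# Minimal sketch of fine-tuning a BERT classifier on airline review sentiment.
# The dataset, label set, and hyperparameters are placeholders.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

texts = ["Great crew and on-time departure.", "Lost my luggage, terrible service."]
labels = [1, 0]                                            # 1 = positive, 0 = negative (assumed)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tok(texts, truncation=True, padding=True, return_tensors="pt")

class ReviewDataset(torch.utils.data.Dataset):
    def __init__(self, enc, labels):
        self.enc, self.labels = enc, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
args = TrainingArguments(output_dir="out", num_train_epochs=1, per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=ReviewDataset(enc, labels)).train()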
In this paper, we consider the maximal positive definite solution of the nonlinear matrix equation X + A^(*)X^(-α)A = Q. By using the idea of Algorithm 2.1 in ZHANG (2013), a new inversion-free method with a stepsize parameter is proposed to obtain the maximal positive definite solution of this equation for the case 0 < α ≤ 1. Based on this method, a new iterative algorithm is developed, and its convergence proof is given. Finally, two numerical examples are provided to show the effectiveness of the proposed method.
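To illustrate the equation itself, the sketch below runs the plain fixed-point iteration X_{k+1} = Q - A^(*)X_k^(-α)A starting from X_0 = Q. It requires a matrix fractional power at every step and is therefore not the inversion-free, stepsize-parameterized algorithm proposed in the paper; the matrix sizes and entries are arbitrary test data.

# Plain fixed-point iteration for X + A* X^(-alpha) A = Q with 0 < alpha <= 1.
# This illustrates the equation only; it is NOT the paper's inversion-free method.
import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(1)
n = 4
A = 0.1 * rng.standard_normal((n, n))
Q = np.eye(n)
alpha = 0.5

X = Q.copy()                                               # start from the upper bound X_0 = Q
for _ in range(100):
    X_new = Q - A.conj().T @ fractional_matrix_power(X, -alpha) @ A
    if np.linalg.norm(X_new - X, "fro") < 1e-12:
        break
    X = X_new

residual = X + A.conj().T @ fractional_matrix_power(X, -alpha) @ A - Q
print("residual norm:", np.linalg.norm(residual))          # ~0 at the maximal solution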
In this paper, a novel method for investigating the particle-crushing behavior of breeding particles in a fusion blanket is proposed. The fractal theory and the Weibull distribution are combined to establish a theoretical model, and its validity is verified using a simple impact test. A crushable discrete element method (DEM) framework is built based on the established theoretical model. A tensile strength, which accounts for the fractal theory, the size effect, and Weibull variation, is assigned to each generated particle. The assigned strength is then used for crush detection by comparing it with the particle's maximum tensile stress. Mass conservation is ensured by inserting a series of sub-particles whose total mass equals the mass lost. Based on the crushable DEM framework, a numerical simulation of the crushing behavior of a pebble bed with hollow cylindrical geometry under a uniaxial compression test was performed. The results show that the particles withstand the external load by contact and sliding at the beginning of the compression process, and they confirm that crushing can be considered an important mechanism for resisting the increasing external load. A relatively regular particle arrangement aids in resisting the load and reduces the occurrence of particle crushing. However, there is a limit to this gain in resistance: when the strain increases beyond this limit, the distribution of the crushing positions tends to become isotropic over the entire pebble bed. The theoretical model and the crushable DEM framework provide a new method for exploring the pebble bed in a fusion reactor while accounting for particle crushing.
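A minimal sketch of the strength-assignment step is given below, assuming the common size-effect form sigma_f = sigma_0 * (d/d0)^(-3/m) combined with inverse-CDF Weibull sampling. The functional form and all parameter values are generic assumptions and do not reproduce the paper's exact fractal-Weibull model.

# Minimal sketch: assign a Weibull-distributed crushing strength with a size effect
# to each DEM particle, then flag crushing where the induced stress exceeds it.
# The strength law and parameter values are assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(2)
m = 5.0                    # Weibull modulus (assumed)
sigma_0 = 60.0             # characteristic strength at reference diameter d0, MPa (assumed)
d0 = 1.0                   # reference particle diameter, mm (assumed)

def assign_strength(diameters_mm):
    """Sample a tensile strength for each particle: size effect times Weibull scatter."""
    u = rng.random(diameters_mm.shape)
    size_effect = (diameters_mm / d0) ** (-3.0 / m)
    weibull_scatter = (-np.log(1.0 - u)) ** (1.0 / m)      # inverse-CDF sampling
    return sigma_0 * size_effect * weibull_scatter

diameters = rng.uniform(0.5, 1.5, size=5)
strengths = assign_strength(diameters)
induced_stress = np.full(5, 40.0)                          # placeholder contact-induced tensile stresses
print("crushed:", induced_stress > strengths)              # True -> replace with mass-conserving sub-particles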
Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model to maximize the modularity. Corresponding key partitioning constraints on parallel restoration are considered. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model is set by adopting a relative deviation normalization scheme to reduce mutual interference between the reward and penalty in the reward function. The soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. Then, the deep Q network method is applied to solve the partitioning MDP model and generate partitioning schemes. Two experience replay buffers are employed to speed up the training process of the method. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of the partitioning training.
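The partitioning objective being maximized is network modularity. The sketch below computes it with networkx for a small placeholder graph; the IEEE 39-bus topology and the partition shown are illustrative assumptions only.

# Minimal sketch: modularity of a candidate partition, computed with networkx.
# The graph and partition are placeholders, not the IEEE 39-bus system.
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 1),                  # one tightly connected group
                  (4, 5), (5, 6), (6, 4),                  # another group
                  (3, 4)])                                  # a single tie line between them

partition = [{1, 2, 3}, {4, 5, 6}]
Q = modularity(G, partition)
print(f"modularity = {Q:.3f}")                              # higher Q -> better-separated islands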
The application of nitrogen fertilizers in agricultural fields can lead to the release of nitrogen-containing gases (NCGs), such as NO_(x), NH_(3) and N_(2)O, which can significantly impact the regional atmospheric environment and contribute to global climate change. However, considerable research gaps remain in the accurate measurement of NCG emissions from agricultural fields, hindering the development of effective emission reduction strategies. We improved an open-top dynamic chambers (OTDCs) system and evaluated its performance by comparing the measured and given fluxes of the NCGs. The results showed that the measured fluxes of NO, N_(2)O and NH_(3) were 1%, 2% and 7% lower than the given fluxes, respectively. For the determination of NH_(3) concentration, we employed a stripping coil-ion chromatograph (SC-IC) analytical technique, which demonstrated an absorption efficiency for atmospheric NH_(3) exceeding 96.1% across sampling durations of 6 to 60 min. In the summer maize season, we used the OTDCs system to measure the exchange fluxes of NO, NH_(3), and N_(2)O from the soil in the North China Plain. Substantial emissions of NO, NH_(3) and N_(2)O were recorded following fertilization, with peaks of 107, 309, and 1239 ng N/(m^(2)·s), respectively. Notably, significant NCG emissions were observed following sustained heavy rainfall one month after fertilization, with the NH_(3) peak in particular being 4.5 times higher than that observed immediately after fertilization. Our results demonstrate that the OTDCs system accurately reflects the emission characteristics of soil NCGs and meets the requirements for long-term, continuous flux observation.
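For context, a dynamic (flow-through) chamber flux is commonly obtained from the steady-state mass balance F = Q(C_out - C_in)/A. The worked example below uses illustrative numbers only and omits the OTDC system's calibration and correction terms.

# Steady-state dynamic-chamber flux balance: F = Q * (C_out - C_in) / A.
# Values are illustrative only, not measurements from the study.
def chamber_flux(c_out_ng_m3, c_in_ng_m3, flow_m3_s, area_m2):
    """Return the surface flux in ng N per m^2 per s."""
    return flow_m3_s * (c_out_ng_m3 - c_in_ng_m3) / area_m2

# Example: 15 L/min purge flow through a chamber covering 0.09 m^2 of soil.
flux = chamber_flux(c_out_ng_m3=5.0e4, c_in_ng_m3=1.0e4,
                    flow_m3_s=15.0 / 1000 / 60, area_m2=0.09)
print(f"{flux:.0f} ng N/(m^2*s)")   # ~111, the same order as the reported NO peak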
Marine thin plates are susceptible to welding deformation owing to their low structural stiffness. Therefore, the efficient and accurate prediction of welding deformation is essential for improving welding quality. The traditional thermal elastic-plastic finite element method (TEP-FEM) can accurately predict welding deformation. However, its efficiency is low because of the complex nonlinear transient computation, making it difficult to meet the needs of rapid engineering evaluation. To address this challenge, this study proposes an efficient prediction method for welding deformation in marine thin plate butt welds. The method is based on the coupled temperature gradient-thermal strain method (TG-TSM), which integrates inherent strain theory with a shell-element finite element model. The proposed method first extracts the distribution pattern and characteristic value of the welding-induced inherent strain through TEP-FEM analysis. This strain is then converted into an equivalent thermal load applied to the shell-element model for rapid computation. The proposed method, particularly the gradual temperature gradient-thermal strain method (GTG-TSM), achieved improved computational efficiency with consistent precision, and it required much less computation time than the traditional TEP-FEM. Thus, this study lays the foundation for future prediction of welding deformation in more complex marine thin plates.
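A minimal sketch of the conversion idea, assuming the standard inherent-strain relation eps_inherent = alpha_th * dT, is shown below. The numerical values are assumptions, and the paper's TG-TSM distribution pattern and characteristic values are not reproduced.

# Minimal sketch: map a welding-induced inherent strain to an equivalent
# temperature load for a shell-element model via eps_inherent = alpha_th * dT.
# All values are assumptions for illustration only.
alpha_th = 1.2e-5          # thermal expansion coefficient of the steel, 1/K (assumed)
eps_longitudinal = -2.0e-3 # characteristic inherent (plastic) strain from TEP-FEM (assumed)

dT_equivalent = eps_longitudinal / alpha_th
print(f"apply an equivalent temperature change of {dT_equivalent:.0f} K "
      "to the weld-line shell elements")                   # about -167 K for these values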
At present, there is a lack of unified standard methods for the determination of antimony content in groundwater in China. The precision and trueness of the related detection technologies have not yet been systematically and quantitatively evaluated, which limits the effective implementation of environmental monitoring. In response to this key technical gap, this study aimed to establish a standardized method for determining antimony in groundwater using hydride generation-atomic fluorescence spectrometry (HG-AFS). Ten laboratories participated in inter-laboratory collaborative tests, and the statistical analysis of the test data was carried out in strict accordance with the technical specifications of GB/T 6379.2-2004 and GB/T 6379.4-2006. The consistency and outliers of the data were tested by Mandel's h and k statistics, the Grubbs test and the Cochran test, and the outliers were removed to optimize the data, thereby significantly improving reliability and accuracy. Based on the optimized data, parameters such as the repeatability limit (r), the reproducibility limit (R), and the method bias (δ) were determined, and the trueness of the method was statistically evaluated. At the same time, precision-function relationships were established, and all results met the requirements. The results show that the lower the antimony content, the lower the repeatability limit (r) and reproducibility limit (R), indicating that the measurement error mainly originates from the detection limit of the method and the instrument sensitivity. Therefore, improving instrument sensitivity and reducing the detection limit are the keys to controlling the analytical error and improving precision. This study provides reliable data support and a solid technical foundation for the establishment and evaluation of standardized methods for the determination of antimony content in groundwater.
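The precision statistics referred to above follow the GB/T 6379 (ISO 5725) scheme, where the repeatability and reproducibility limits are r = 2.8 s_r and R = 2.8 s_R. The sketch below computes them for a balanced design; the antimony results used are made-up placeholder data.

# Minimal sketch of GB/T 6379 (ISO 5725) precision statistics for a balanced
# inter-laboratory test. The replicate data are placeholders, not study results.
import numpy as np

# replicate results (ug/L) from several labs at one concentration level (placeholder values)
labs = np.array([[1.02, 1.05, 1.01],
                 [0.98, 0.97, 1.00],
                 [1.07, 1.06, 1.08],
                 [1.00, 1.02, 0.99]])

p, n = labs.shape                                          # p labs, n replicates per lab
lab_means = labs.mean(axis=1)
s_r2 = labs.var(axis=1, ddof=1).mean()                     # within-lab (repeatability) variance
s_L2 = max(lab_means.var(ddof=1) - s_r2 / n, 0)            # between-lab variance component
s_R2 = s_r2 + s_L2                                         # reproducibility variance

r, R = 2.8 * np.sqrt(s_r2), 2.8 * np.sqrt(s_R2)
print(f"s_r={np.sqrt(s_r2):.4f}, s_R={np.sqrt(s_R2):.4f}, r={r:.4f}, R={R:.4f} ug/L")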
Mo_(2)C is an excellent electrocatalyst for the hydrogen evolution reaction (HER). However, Mo_(2)C is a poor electrocatalyst for the oxygen evolution reaction (OER). Herein, two different elements, namely Co and Fe, are incorporated into Mo_(2)C, which therefore has a finely tuned electronic structure that is not achievable by incorporating either metal alone. Consequently, the resulting electrocatalyst Co_(0.8)Fe_(0.2)-Mo_(2)C-80 displayed excellent OER catalytic performance, evidenced by a low overpotential of 214.0 (and 246.5) mV to attain a current density of 10 (and 50) mA cm^(-2), an ultralow Tafel slope of 38.4 mV dec^(-1), and long-term stability in alkaline medium. Theoretical data demonstrate that Co_(0.8)Fe_(0.2)-Mo_(2)C-80 requires the lowest overpotential (1.00 V) for OER and that the Co centers are the active sites. The ultrahigh catalytic performance of the electrocatalyst is attributed to its excellent intrinsic catalytic activity arising from the high Brunauer-Emmett-Teller specific surface area, large electrochemically active surface area, small Tafel slope, and low charge-transfer resistance.
This paper develops a wheel profile fine-tuning system (WPFTS) that comprehensively considers the influence of the wheel profile on wheel damage, vehicle stability, vehicle safety, and passenger comfort. WPFTS can recommend one or more optimized wheel profiles according to train operators' needs, e.g., reducing wheel wear, mitigating the development of wheel out-of-roundness (OOR), or improving the shape stability of the wheel profile. Specifically, WPFTS includes four modules: (I) a wheel profile generation module based on the rotary-scaling fine-tuning (RSFT) method; (II) a multi-objective generation module consisting of a rigid multi-body dynamics simulation (MBS) model, an analytical model, and a rigid-flexible MBS model, for generating 11 objectives related to wheel damage, vehicle stability, vehicle safety, and passenger comfort; (III) a weight assignment module consisting of an adaptive weight assignment strategy and a manual weight assignment strategy; and (IV) an optimization module based on radial basis functions (RBF) and particle swarm optimization (PSO). Finally, three cases are introduced to show how WPFTS recommends a wheel profile according to train operators' needs. Among them, a wheel profile with high shape stability, a wheel profile for mitigating the development of wheel OOR, and a wheel profile considering hunting stability and derailment safety are developed, respectively.
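To illustrate the surrogate step in module (IV), the sketch below fits a radial basis function model to sampled design-variable/objective pairs and queries it for candidate designs, as a PSO loop would. The sample data, dimensionality, and kernel choice are placeholders, not WPFTS internals.

# Minimal sketch of an RBF surrogate for the optimization module: fit on sampled
# (design variables -> objective) pairs, then query candidate designs.
# Sample data and dimensions are placeholders.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(40, 2))                   # sampled RSFT design variables
y = (X[:, 0] - 0.2) ** 2 + (X[:, 1] + 0.4) ** 2            # placeholder objective (e.g., a wear index)

surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

# A PSO of module (IV) would score candidate designs through the surrogate:
candidates = rng.uniform(-1.0, 1.0, size=(5, 2))
print(surrogate(candidates))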
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is contextual learning, which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs, ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, it is found that the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language. However, the fine-tuned approach tends to produce many false positives.
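A minimal sketch of how a few-shot prompt for this kind of labelling task can be assembled is given below. The demonstrations and label names are made up, and the retrieval step used in the study to choose demonstrations is not reproduced.

# Minimal sketch: build a few-shot prompt for sexism labelling.
# Demonstrations and labels are made-up placeholders, not the EDOS/HatEval prompts.
demonstrations = [
    ("Women belong in the kitchen, not in engineering.", "sexist"),
    ("Congratulations to the whole team on the release!", "not sexist"),
]
query = "She only got the job because of a diversity quota."

prompt_lines = ["Label each text as 'sexist' or 'not sexist'.", ""]
for text, label in demonstrations:
    prompt_lines += [f"Text: {text}", f"Label: {label}", ""]
prompt_lines += [f"Text: {query}", "Label:"]
prompt = "\n".join(prompt_lines)
print(prompt)   # this string would then be sent to the LLM for completion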
During cerebral cortex neurogenesis, two major types of progenitors generate a variety of morphologically and functionally diverse projection neurons destined for the different cortical layers in non-gyrified mice. Radial glia cells (RGCs) undergo mitosis in the cortical ventricular zone and exhibit an apical-basal cell polarity, whereas non-polar intermediate progenitor cells (IPCs) divide basally in the subventricular zone (Franco and Muller, 2013; Taverna et al., 2014).
As the realm of enterprise-level conversational AI continues to evolve, it becomes evident that while generalized Large Language Models (LLMs) like GPT-3.5 bring remarkable capabilities, they also bring forth formidable challenges. These models, honed on vast and diverse datasets, have undoubtedly pushed the boundaries of natural language understanding and generation. However, they often stumble when faced with the intricate demands of nuanced enterprise applications. This research advocates for a strategic paradigm shift, urging enterprises to embrace a fine-tuning approach as a means to optimize conversational AI. While generalized LLMs are linguistic marvels, their inability to cater to the specific needs of businesses across various industries poses a critical challenge. This strategic shift involves empowering enterprises to seamlessly integrate their own datasets into LLMs, a process that extends beyond linguistic enhancement. The core concept of this approach centers on customization, enabling businesses to fine-tune the AI's functionality to fit precisely within their unique business landscapes. By immersing the LLM in industry-specific documents, customer interaction records, internal reports, and regulatory guidelines, the AI transcends its generic capabilities to become a sophisticated conversational partner aligned with the intricacies of the enterprise's domain. The transformative potential of this fine-tuning approach cannot be overstated. It enables a transition from a universal AI solution to a highly customizable tool. The AI evolves from being a linguistic powerhouse to a contextually aware, industry-savvy assistant. As a result, it not only responds with linguistic accuracy but also with depth, relevance, and resonance, significantly elevating user experiences and operational efficiency. In the subsequent sections, this paper delves into the intricacies of fine-tuning, exploring the multifaceted challenges and abundant opportunities it presents. It addresses the technical intricacies of data integration, ethical considerations surrounding data usage, and the broader implications for the future of enterprise AI. The journey embarked upon in this research holds the potential to redefine the role of conversational AI in enterprises, ushering in an era where AI becomes a dynamic, deeply relevant, and highly effective tool, empowering businesses to excel in an ever-evolving digital landscape.
Background: Sperm DNA fragmentation (sDF) has been proved to be an important parameter for predicting in vitro the potential fertility of a semen sample. Colloid centrifugation could be a suitable technique to select those donkey sperm more resistant to DNA fragmentation after thawing. Previous studies have shown that, to elucidate the latent damage of the DNA molecule, sDF should be assessed dynamically, where the rate of fragmentation between treatments indicates how resistant the DNA is to iatrogenic damage. The rate of fragmentation is calculated using the slope of a linear regression equation. However, it has not been studied whether sDF dynamics fit this model. The objectives of this study were to evaluate the effect of different after-thawing centrifugation protocols on sperm DNA fragmentation and to elucidate the most accurate mathematical model (linear regression, exponential or polynomial) for DNA fragmentation over time in frozen-thawed donkey semen. Results: After submitting post-thaw semen samples to no centrifugation (UDC), sperm washing (SW) or single layer centrifugation (SLC) protocols, sDF values after 6 h of incubation were significantly lower in SLC samples than in SW or UDC. Coefficient of determination (R^(2)) values were significantly higher for a second-order polynomial model than for linear or exponential models. The highest values for acceleration of fragmentation (aSDF) were obtained for SW, followed by SLC and UDC. Conclusion: SLC after thawing seems to preserve DNA longevity longer in comparison to UDC and SW. Moreover, the fine-tuning of models has shown that sDF dynamics in frozen-thawed donkey semen fit a second-order polynomial model, which implies that the fragmentation rate is not constant and fragmentation acceleration must be taken into account to elucidate hidden damage in the DNA molecule.
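The model comparison can be illustrated with a short curve-fitting sketch: fit both a linear and a second-order polynomial to sDF measured over incubation time and compare their coefficients of determination. The time points and sDF values below are made-up placeholders, not the study's measurements.

# Minimal sketch: compare linear vs. second-order polynomial fits of sDF over time.
# The data are placeholders with deliberate curvature.
import numpy as np

t = np.array([0.0, 2.0, 4.0, 6.0])                         # incubation time, h
sdf = np.array([8.0, 10.5, 15.0, 22.0])                    # sDF, % (placeholder data)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

lin = np.polyval(np.polyfit(t, sdf, 1), t)                  # constant fragmentation rate
quad_coeffs = np.polyfit(t, sdf, 2)                         # leading coefficient reflects acceleration
quad = np.polyval(quad_coeffs, t)

print("R^2 linear   :", round(r_squared(sdf, lin), 4))
print("R^2 quadratic:", round(r_squared(sdf, quad), 4))
print("fragmentation acceleration (2*a2):", round(2 * quad_coeffs[0], 4))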
Modal parameters can accurately characterize structural dynamic properties and assess the physical state of a structure. Therefore, it is particularly significant to identify structural modal parameters from the monitoring data in a structural health monitoring (SHM) system, so as to provide a scientific basis for structural damage identification and dynamic model modification. In view of this, this paper reviews methods for identifying structural modal parameters under environmental excitation and briefly describes how to identify structural damage based on the derived modal parameters. The paper primarily introduces data-driven modal parameter recognition methods (e.g., time-domain, frequency-domain, and time-frequency-domain methods), briefly describes damage identification methods based on the variations of modal parameters (e.g., natural frequencies, modal shapes, and curvature modal shapes), and reviews modal validation methods (e.g., the Stability Diagram and the Modal Assurance Criterion). The current status of the application of artificial intelligence (AI) methods in modal parameter recognition and damage identification is further discussed. Based on the preceding analysis, the main development trends of structural modal parameter recognition and damage identification methods are given to provide scientific references for the optimized design and functional upgrading of SHM systems.
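As a small concrete example of the modal validation step, the Modal Assurance Criterion (MAC) compares two mode-shape vectors and approaches 1 when they are consistent. The mode shapes below are placeholders.

# Minimal sketch of the Modal Assurance Criterion used for modal validation:
# MAC(phi_i, phi_j) = |phi_i^H phi_j|^2 / ((phi_i^H phi_i)(phi_j^H phi_j)).
# The mode-shape vectors are placeholders.
import numpy as np

def mac(phi_i, phi_j):
    num = np.abs(np.vdot(phi_i, phi_j)) ** 2
    den = np.vdot(phi_i, phi_i).real * np.vdot(phi_j, phi_j).real
    return num / den

phi_identified = np.array([0.31, 0.59, 0.81, 0.95])         # identified mode shape (placeholder)
phi_model = np.array([0.30, 0.60, 0.80, 1.00])              # analytical mode shape (placeholder)
print(f"MAC = {mac(phi_identified, phi_model):.3f}")        # close to 1 -> the two shapes match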
To analyze the differences in the transport and distribution of different types of proppants and to address issues such as the short effective support length of the proppant and poor placement in hydraulically intersecting fractures, this study considered the combined impact of geological and engineering factors on conductivity. Using reservoir production parameters and the discrete element method, multispherical proppants were constructed. Additionally, a 3D fracture model, based on the specified conditions of the L block, employed coupled computational fluid dynamics-discrete element method (CFD-DEM) joint simulations to quantitatively analyze the transport and placement patterns of multispherical proppants in intersecting fractures. The results indicate that turbulent kinetic energy is an intrinsic factor affecting proppant transport. Moreover, the placement efficiency and migration distance of the low-sphericity quartz sand constructed by the DEM in the main fracture are significantly reduced compared with those of spherical ceramic proppants, with a 27.7% decrease in the volume fraction at the fracture surface, which in turn affects the placement concentration and damages fracture conductivity. Compared with small-angle fractures, controlling artificial and natural fractures to expand at angles of 45° to 60° increases the effective support length by approximately 20.6%. During hydraulic fracturing of gas wells, the fracture support area and post-closure conductivity can be ensured by controlling the sphericity of the proppants and adjusting the perforation direction to control the direction of the artificial fractures.
Chinese Vice Premier's visit to Africa continues to emphasize mutual cooperation, with a focus on agriculture. For many years, the Chinese Government has dispatched the minister of foreign affairs to Africa for the first official visit of a year. This year, however, that rule was broken when Hui Liangyu, Chinese Vice Premier, made the 14-day trip. On January 6-19, Hui paid official visits to Mauritius, Zambia, the Democratic Republic of Congo (DRC), Cameroon and Senegal, focusing on economic and agri-
基金supported by the National Key R&D Program of China(No.2021YFB0301200)National Natural Science Foundation of China(No.62025208).
文摘Large-scale Language Models(LLMs)have achieved significant breakthroughs in Natural Language Processing(NLP),driven by the pre-training and fine-tuning paradigm.While this approach allows models to specialize in specific tasks with reduced training costs,the substantial memory requirements during fine-tuning present a barrier to broader deployment.Parameter-Efficient Fine-Tuning(PEFT)techniques,such as Low-Rank Adaptation(LoRA),and parameter quantization methods have emerged as solutions to address these challenges by optimizing memory usage and computational efficiency.Among these,QLoRA,which combines PEFT and quantization,has demonstrated notable success in reducing memory footprints during fine-tuning,prompting the development of various QLoRA variants.Despite these advancements,the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored.This study presents a comprehensive analysis of these key variables,focusing on their influence across different layer types and depths within LLM architectures.Our investigation uncovers several critical findings:(1)Larger layers,such as MLP layers,can maintain performance despite reductions in adapter rank,while smaller layers,like self-attention layers,aremore sensitive to such changes;(2)The effectiveness of balancing factors depends more on specific values rather than layer type or depth;(3)In quantization-aware fine-tuning,larger layers can effectively utilize smaller adapters,whereas smaller layers struggle to do so.These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs.Moreover,for the same discount of trainable parameters,reducing the trainable parameters in a larger layer is more effective in preserving fine-tuning accuracy than in a smaller one.This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
基金the Assets4Rail Project which is funded by the Shift2Rail Joint Undertaking under the EU’s H2020 program(Grant No.826250)the Open Research Fund of State Key Laboratory of Traction Power of Southwest Jiaotong University(Grant No.TPL2011)+1 种基金part of the experiment data concerning the railway line is supported by the DynoTRAIN Project,funded by European Commission(Grant No.234079)The first author is also supported by the China Scholarship Council(Grant No.201707000113).
文摘The existing multi-objective wheel profile optimization methods mainly consist of three sub-modules:(1)wheel profile generation,(2)multi-body dynamics simulation,and(3)an optimization algorithm.For the first module,a comparably conservative rotary-scaling finetuning(RSFT)method,which introduces two design variables and an empirical formula,is proposed to fine-tune the traditional wheel profiles for improving their engineering applicability.For the second module,for the TRAXX locomotives serving on the Blankenburg–Rubeland line,an optimization function representing the relationship between the wheel profile and the wheel–rail wear number is established based on Kriging surrogate model(KSM).For the third module,a method combining the regression capability of KSM with the iterative computing power of particle swarm optimization(PSO)is proposed to quickly and reliably implement the task of optimizing wheel profiles.Finally,with the RSFT–KSM–PSO method,we propose two wear-resistant wheel profiles for the TRAXX locomotives serving on the Blankenburg–Rubeland line,namely S1002-S and S1002-M.The S1002-S profile minimizes the total wear number by 30%,while the S1002-M profile makes the wear distribution more uniform through a proper sacrifice of the tread wear number,and the total wear number is reduced by 21%.The quasi-static and hunting stability tests further demonstrate that the profile designed by the RSFT–KSM–PSO method is promising for practical engineering applications.
基金supported by the National Natural Science Foundation of China(Grant Nos.52306126,22350710788,12432010,11988102,92270203)the Xplore Prize.
文摘Configuring computational fluid dynamics(CFD)simulations typically demands extensive domain expertise,limiting broader access.Although large language models(LLMs)have advanced scientific computing,their use in automating CFD workflows is underdeveloped.We introduce a novel approach centered on domain-specific LLM adaptation.By fine-tuning Qwen2.5-7B-Instruct on NL2FOAM,our custom dataset of 28,716 natural language-to-OpenFOAM configuration pairs with chain-of-thought(CoT)annotations enables direct translation from natural language descriptions to executable CFD setups.A multi-agent system orchestrates the process,autonomously verifying inputs,generating configurations,running simulations,and correcting errors.Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance,achieving 88.7%solution accuracy and 82.6%first-attempt success rate.This significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct,DeepSeek-R1,and Llama3.3-70B-Instruct,while also requiring fewer correction iterations and maintaining high computational efficiency.The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows.Our code and fine-tuned model have been deposited at https://github.com/YYgroup/AutoCFD.
文摘A complete examination of Large Language Models’strengths,problems,and applications is needed due to their rising use across disciplines.Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance,strengths,and weaknesses.This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies.In this research,50 studies on 25+LLMs,including GPT-3,GPT-4,Claude 3.5,DeepKet,and hybrid multimodal frameworks like ContextDET and GeoRSCLIP,are thoroughly reviewed.We propose LLM application taxonomy by grouping techniques by task focus—healthcare,chemistry,sentiment analysis,agent-based simulations,and multimodal integration.Advanced methods like parameter-efficient tuning(LoRA),quantumenhanced embeddings(DeepKet),retrieval-augmented generation(RAG),and safety-focused models(GalaxyGPT)are evaluated for dataset requirements,computational efficiency,and performance measures.Frameworks for ethical issues,data limited hallucinations,and KDGI-enhanced fine-tuning like Woodpecker’s post-remedy corrections are highlighted.The investigation’s scope,mad,and methods are described,but the primary results are not.The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings performbetter for context-heavy applications.In medical text normalization,ChatGPT-4 outperforms previous models,while two multimodal frameworks,GeoRSCLIP,increase remote sensing.Parameter-efficient tuning technologies like LoRA have minimal computing cost and similar performance,demonstrating the necessity for adaptive models in multiple domains.To discover the optimum domain-specific models,explain domain-specific fine-tuning,and present quantum andmultimodal LLMs to address scalability and cross-domain issues.The framework helps academics and practitioners identify,adapt,and innovate LLMs for different purposes.This work advances the field of efficient,interpretable,and ethical LLM application research.
文摘In the rapidly evolving landscape of natural language processing(NLP)and sentiment analysis,improving the accuracy and efficiency of sentiment classification models is crucial.This paper investigates the performance of two advanced models,the Large Language Model(LLM)LLaMA model and NLP BERT model,in the context of airline review sentiment analysis.Through fine-tuning,domain adaptation,and the application of few-shot learning,the study addresses the subtleties of sentiment expressions in airline-related text data.Employing predictive modeling and comparative analysis,the research evaluates the effectiveness of Large Language Model Meta AI(LLaMA)and Bidirectional Encoder Representations from Transformers(BERT)in capturing sentiment intricacies.Fine-tuning,including domain adaptation,enhances the models'performance in sentiment classification tasks.Additionally,the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis.By conducting experiments on a diverse airline review dataset,the research quantifies the impact of fine-tuning,domain adaptation,and few-shot learning on model performance,providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content(UGC).This research contributes to refining sentiment analysis models,ultimately fostering improved customer satisfaction in the airline industry.
基金Supported in part by Natural Science Foundation of Guangxi(2023GXNSFAA026246)in part by the Central Government's Guide to Local Science and Technology Development Fund(GuikeZY23055044)in part by the National Natural Science Foundation of China(62363003)。
文摘In this paper,we consider the maximal positive definite solution of the nonlinear matrix equation.By using the idea of Algorithm 2.1 in ZHANG(2013),a new inversion-free method with a stepsize parameter is proposed to obtain the maximal positive definite solution of nonlinear matrix equation X+A^(*)X|^(-α)A=Q with the case 0<α≤1.Based on this method,a new iterative algorithm is developed,and its convergence proof is given.Finally,two numerical examples are provided to show the effectiveness of the proposed method.
基金supported by Anhui Provincial Natural Science Foundation(2408085QA030)Natural Science Research Project of Anhui Educational Committee,China(2022AH050825)+3 种基金Medical Special Cultivation Project of Anhui University of Science and Technology(YZ2023H2C008)the Excellent Research and Innovation Team of Anhui Province,China(2022AH010052)the Scientific Research Foundation for High-level Talents of Anhui University of Science and Technology,China(2021yjrc51)Collaborative Innovation Program of Hefei Science Center,CAS,China(2019HSC-CIP006).
文摘In this paper,a novel method for investigating the particle-crushing behavior of breeding particles in a fusion blanket is proposed.The fractal theory and Weibull distribution are combined to establish a theoretical model,and its validity was verified using a simple impact test.A crushable discrete element method(DEM)framework is built based on the previously established theoretical model.The tensile strength,which considers the fractal theory,size effect,and Weibull variation,was assigned to each generated particle.The assigned strength is then used for crush detection by comparing it with its maximum tensile stress.Mass conservation is ensured by inserting a series of sub-particles whose total mass was equal to the quality loss.Based on the crushable DEM framework,a numerical simulation of the crushing behavior of a pebble bed with hollow cylindrical geometry under a uniaxial compression test was performed.The results of this investigation showed that the particle withstands the external load by contact and sliding at the beginning of the compression process,and the results confirmed that crushing can be considered an important method of resisting the increasing external load.A relatively regular particle arrangement aids in resisting the load and reduces the occurrence of particle crushing.However,a limit exists to the promotion of resistance.When the strain increases beyond this limit,the distribution of the crushing position tends to be isotropic over the entire pebble bed.The theoretical model and crushable DEM framework provide a new method for exploring the pebble bed in a fusion reactor,considering particle crushing.
基金funded by the Beijing Engineering Research Center of Electric Rail Transportation.
文摘Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts.This paper proposes a novel partitioning method based on deep reinforcement learning.First,the partitioning decision process is formulated as a Markov decision process(MDP)model to maximize the modularity.Corresponding key partitioning constraints on parallel restoration are considered.Second,based on the partitioning objective and constraints,the reward function of the partitioning MDP model is set by adopting a relative deviation normalization scheme to reduce mutual interference between the reward and penalty in the reward function.The soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward.Then,the deep Q network method is applied to solve the partitioning MDP model and generate partitioning schemes.Two experience replay buffers are employed to speed up the training process of the method.Finally,case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints,thereby improving the parallelism and reliability of the restoration process.Moreover,simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of the partitioning training.
基金supported by the National Key Research and Develop-ment Program(No.2022YFC3701103)the National Natural Science Foundation of China(Nos.42130714 and 41931287).
文摘The application of nitrogen fertilizers in agricultural fields can lead to the release of nitrogen-containing gases(NCGs),such as NO_(x),NH_(3) and N_(2)O,which can significantly impact regional atmospheric environment and con-tribute to global climate change.However,there remain considerable research gaps in the accurate measurement of NCGs emissions from agricultural fields,hindering the development of effective emission reduction strategies.We improved an open-top dynamic chambers(OTDCs)system and evaluated the performance by comparing the measured and given fluxes of the NCGs.The results showed that the measured fluxes of NO,N_(2)O and NH_(3)were 1%,2%and 7%lower than the given fluxes,respectively.For the determination of NH_(3) concentration,we employed a stripping coil-ion chromatograph(SC-IC)analytical technique,which demonstrated an absorption efficiency for atmospheric NH_(3) exceeding 96.1%across sampling durations of 6 to 60 min.In the summer maize season,we utilized the OTDCs system to measure the exchange fluxes of NO,NH_(3),and N_(2)O from the soil in the North China Plain.Substantial emissions of NO,NH_(3) and N_(2)O were recorded following fertilization,with peaks of 107,309,1239 ng N/(m^(2)·s),respectively.Notably,significant NCGs emissions were observed following sus-tained heavy rainfall one month after fertilization,particularly with NH_(3) peak being 4.5 times higher than that observed immediately after fertilization.Our results demonstrate that the OTDCs system accurately reflects the emission characteristics of soil NCGs and meets the requirements for long-term and continuous flux observation.
基金Supported by the National Natural Science Foundation of China under Grant No.51975138the High-Tech Ship Scientific Research Project from the Ministry of Industry and Information Technology under Grant No.CJ05N20the National Defense Basic Research Project under Grant No.JCKY2023604C006.
文摘Marine thin plates are susceptible to welding deformation owing to their low structural stiffness.Therefore,the efficient and accurate prediction of welding deformation is essential for improving welding quality.The traditional thermal elastic-plastic finite element method(TEP-FEM)can accurately predict welding deformation.However,its efficiency is low because of the complex nonlinear transient computation,making it difficult to meet the needs of rapid engineering evaluation.To address this challenge,this study proposes an efficient prediction method for welding deformation in marine thin plate butt welds.This method is based on the coupled temperature gradient-thermal strain method(TG-TSM)that integrates inherent strain theory with a shell element finite element model.The proposed method first extracts the distribution pattern and characteristic value of welding-induced inherent strain through TEP-FEM analysis.This strain is then converted into the equivalent thermal load applied to the shell element model for rapid computation.The proposed method-particularly,the gradual temperature gradient-thermal strain method(GTG-TSM)-achieved improved computational efficiency and consistent precision.Furthermore,the proposed method required much less computation time than the traditional TEP-FEM.Thus,this study lays the foundation for future prediction of welding deformation in more complex marine thin plates.
基金supported by the National Natural Science Foundation of China(Project No.42307555).
文摘At present,there is currently a lack of unified standard methods for the determination of antimony content in groundwater in China.The precision and trueness of related detection technologies have not yet been systematically and quantitatively evaluated,which limits the effective implementation of environmental monitoring.In response to this key technical gap,this study aimed to establish a standardized method for determining antimony in groundwater using Hydride Generation–Atomic Fluorescence Spectrometry(HG-AFS).Ten laboratories participated in inter-laboratory collaborative tests,and the statistical analysis of the test data was carried out in strict accordance with the technical specifications of GB/T 6379.2—2004 and GB/T 6379.4—2006.The consistency and outliers of the data were tested by Mandel's h and k statistics,the Grubbs test and the Cochran test,and the outliers were removed to optimize the data,thereby significantly improving the reliability and accuracy.Based on the optimized data,parameters such as the repeatability limit(r),reproducibility limit(R),and method bias value(δ)were determined,and the trueness of the method was statistically evaluated.At the same time,precision-function relationships were established,and all results met the requirements.The results show that the lower the antimony content,the lower the repeatability limit(r)and reproducibility limit(R),indicating that the measurement error mainly originates from the detection limit of the method and instrument sensitivity.Therefore,improving the instrument sensitivity and reducing the detection limit are the keys to controlling the analytical error and improving precision.This study provides reliable data support and a solid technical foundation for the establishment and evaluation of standardized methods for the determination of antimony content in groundwater.
基金financial support from the SERB-SURE under file number of SUR/2022/003129Jong Hyeok Park acknowledges the support of the National Research Foundation of Korea (NRF)funded by the Ministry of Science and ICT (RS-2023-00302697,RS-2023-00268523).
文摘Mo_(2)C is an excellent electrocatalyst for hydrogen evolution reaction(HER).However,Mo_(2)C is a poor electrocatalyst for oxygen evolution reaction(OER).Herein,two different elements,namely Co and Fe,are incorporated in Mo_(2)C that,therefore,has a finely tuned electronic structure,which is not achievable by incorporation of any one of the metals.Consequently,the resulting electrocatalyst Co_(0.8)Fe_(0.2)-Mo_(2)C-80 displayed excellent OER catalytic performance,which is evidenced by a low overpotential of 214.0(and 246.5)mV to attain a current density of 10(and 50)mA cm^(-2),an ultralow Tafel slope of 38.4 mV dec^(-1),and longterm stability in alkaline medium.Theoretical data demonstrates that Co_(0.8)Fe_(0.2)-Mo_(2)C-80 requires the lowest overpotential(1.00 V)for OER and Co centers to be the active sites.The ultrahigh catalytic performance of the electrocatalyst is attributed to the excellent intrinsic catalytic activity due to high Brunauer-Emmett-Teller specific surface area,large electrochemically active surface area,small Tafel slope,and low chargetransfer resistance.
基金This work was supported by China Scholarship Council(Grant No.201707000113).
文摘This paper develops a wheel profile fine-tuning system(WPFTS)that comprehensively considers the influence of wheel profile on wheel damage,vehicle stability,vehicle safety,and passenger comfort.WPFTS can recommend one or more optimized wheel profiles according to train operators’needs,e.g.,reducing wheel wear,mitigating the development of wheel out-of-roundness(OOR),improving the shape stability of the wheel profile.Specifically,WPFTS includes four modules:(I)a wheel profile generation module based on the rotary-scaling finetuning(RSFT)method;(II)a multi-objective generation module consisting of a rigid multi-body dynamics simulation(MBS)model,an analytical model,and a rigid–flexible MBS model,for generating 11 objectives related to wheel damage,vehicle stability,vehicle safety,and passenger comfort;(III)a weight assignment module consisting of an adaptive weight assignment strategy and a manual weight assignment strategy;and(IV)an optimization module based on radial basis function(RBF)and particle swarm optimization(PSO).Finally,three cases are introduced to show how WPTFS recommends a wheel profile according to train operators’needs.Among them,a wheel profile with high shape stability,a wheel profile for mitigating the development of wheel OOR,and a wheel profile considering hunting stability and derailment safety are developed,respectively.
基金This work is part of the research projects LaTe4PoliticES(PID2022-138099OBI00)funded by MICIU/AEI/10.13039/501100011033the European Regional Development Fund(ERDF)-A Way of Making Europe and LT-SWM(TED2021-131167B-I00)funded by MICIU/AEI/10.13039/501100011033the European Union NextGenerationEU/PRTR.Mr.Ronghao Pan is supported by the Programa Investigo grant,funded by the Region of Murcia,the Spanish Ministry of Labour and Social Economy and the European Union-NextGenerationEU under the“Plan de Recuperación,Transformación y Resiliencia(PRTR).”。
文摘Large Language Models(LLMs)are increasingly demonstrating their ability to understand natural language and solve complex tasks,especially through text generation.One of the relevant capabilities is contextual learning,which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates.In recent years,the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior.In this study,we investigate the ability of different LLMs,ranging from zero-shot and few-shot learning to fine-tuning.Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval.Furthermore,it is found that the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach,scoring 86.811%on the Explainable Detection of Online Sexism(EDOS)test-set and 57.453%on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter(HatEval)test-set.Finally,it is confirmed that the evaluated models perform well in hate text detection,as they beat the best result in the HatEval task leaderboard.The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language.However,the fine-tuned approach tends to produce many false positives.
Abstract: During cerebral cortex neurogenesis, two major types of progenitors generate a variety of morphologically and functionally diverse projection neurons destined for the different cortical layers in non-gyrified mice. Radial glia cells (RGCs) undergo mitosis in the cortical ventricular zone and exhibit apical-basal cell polarity, whereas non-polar intermediate progenitor cells (IPCs) divide basally in the subventricular zone (Franco and Muller, 2013; Taverna et al., 2014).
Abstract: As the realm of enterprise-level conversational AI continues to evolve, it becomes evident that while generalized Large Language Models (LLMs) like GPT-3.5 bring remarkable capabilities, they also present formidable challenges. These models, honed on vast and diverse datasets, have undoubtedly pushed the boundaries of natural language understanding and generation. However, they often stumble when faced with the intricate demands of nuanced enterprise applications. This research advocates for a strategic paradigm shift, urging enterprises to embrace a fine-tuning approach as a means to optimize conversational AI. While generalized LLMs are linguistic marvels, their inability to cater to the specific needs of businesses across various industries poses a critical challenge. This strategic shift involves empowering enterprises to seamlessly integrate their own datasets into LLMs, a process that extends beyond linguistic enhancement. The core concept of this approach centers on customization, enabling businesses to fine-tune the AI’s functionality to fit precisely within their unique business landscapes. By immersing the LLM in industry-specific documents, customer interaction records, internal reports, and regulatory guidelines, the AI transcends its generic capabilities to become a sophisticated conversational partner aligned with the intricacies of the enterprise’s domain. This fine-tuning approach enables a transition from a universal AI solution to a highly customizable tool: the AI evolves from a linguistic powerhouse into a contextually aware, industry-savvy assistant. As a result, it responds not only with linguistic accuracy but also with depth, relevance, and resonance, significantly elevating user experiences and operational efficiency. In the subsequent sections, this paper delves into the intricacies of fine-tuning, exploring the multifaceted challenges and abundant opportunities it presents. It addresses the technical intricacies of data integration, ethical considerations surrounding data usage, and the broader implications for the future of enterprise AI. The journey embarked upon in this research holds the potential to redefine the role of conversational AI in enterprises, ushering in an era where AI becomes a dynamic, deeply relevant, and highly effective tool, empowering businesses to excel in an ever-evolving digital landscape.
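The following is a minimal sketch of the kind of domain-specific fine-tuning advocated above, using the Hugging Face Trainer API on a hypothetical file of enterprise documents; the base model name, file path, and hyperparameters are illustrative assumptions, not choices made in the paper.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Hypothetical corpus of enterprise documents, one text per line (path is illustrative).
dataset = load_dataset("text", data_files={"train": "enterprise_docs.txt"})

model_name = "gpt2"                      # stand-in for the enterprise's chosen base LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-enterprise-llm",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()                          # causal-LM fine-tuning on the domain corpus
```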
Funding: Partially supported by grants RZ2009-00006-00-00 (Instituto Nacional de Investigación y Tecnología Agraria y Alimentaria, Ministerio de Ciencia e Innovación, Spain) and AGL-2013-42726-R (Secretaría de Estado de Investigación, Desarrollo e Innovación, Ministerio de Economía y Competitividad, Spain); supported by a Ph.D. fellowship from the ceiA3 (Andalucía, Spain) with funding provided by Banco Santander through its Global Division, Santander Universidades; and funded by the Swedish Foundation for Equine Research, Stockholm, Sweden (H14-47-008).
Abstract: Background: Sperm DNA fragmentation (sDF) has been shown to be an important parameter for predicting, in vitro, the potential fertility of a semen sample. Colloid centrifugation could be a suitable technique to select those donkey sperm more resistant to DNA fragmentation after thawing. Previous studies have shown that, to elucidate the latent damage of the DNA molecule, sDF should be assessed dynamically, where the rate of fragmentation between treatments indicates how resistant the DNA is to iatrogenic damage. The rate of fragmentation is calculated using the slope of a linear regression equation; however, it has not been studied whether sDF dynamics fit this model. The objectives of this study were to evaluate the effect of different after-thawing centrifugation protocols on sperm DNA fragmentation and to elucidate the most accurate mathematical model (linear regression, exponential, or polynomial) for DNA fragmentation over time in frozen-thawed donkey semen. Results: After submitting post-thaw semen samples to no centrifugation (UDC), sperm washing (SW), or single layer centrifugation (SLC) protocols, sDF values after 6 h of incubation were significantly lower in SLC samples than in SW or UDC. Coefficient of determination (R²) values were significantly higher for a second-order polynomial model than for linear or exponential models. The highest values for acceleration of fragmentation (aSDF) were obtained for SW, followed by SLC and UDC. Conclusion: SLC after thawing seems to preserve DNA longevity longer than UDC and SW. Moreover, the fine-tuning of models has shown that sDF dynamics in frozen-thawed donkey semen fit a second-order polynomial model, which implies that the fragmentation rate is not constant and fragmentation acceleration must be taken into account to elucidate hidden damage in the DNA molecule.
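The model comparison described above amounts to fitting linear and second-order polynomial curves to sDF over incubation time and comparing their R² values; the sketch below does exactly that on illustrative data points (not measurements from the study).

```python
import numpy as np

# Illustrative sDF (%) measured over incubation time (h); values are assumed, not real data.
time_h = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
sdf = np.array([8.0, 9.1, 10.9, 13.5, 17.0, 21.4, 26.6])

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

for degree, label in [(1, "linear"), (2, "2nd-order polynomial")]:
    coeffs = np.polyfit(time_h, sdf, degree)
    fit = np.polyval(coeffs, time_h)
    print(f"{label}: R^2 = {r_squared(sdf, fit):.4f}")

# In the polynomial model sDF(t) = a*t^2 + b*t + c, the coefficient a reflects the
# acceleration of fragmentation (aSDF) rather than a constant fragmentation rate.
```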
Funding: Supported by the Innovation Foundation of the Provincial Education Department of Gansu (2024B-005), the Gansu Province National Science Foundation (22YF7GA182), and the Fundamental Research Funds for the Central Universities (No. lzujbky2022-kb01).
Abstract: Modal parameters can accurately characterize structural dynamic properties and assess the physical state of a structure. It is therefore particularly important to identify structural modal parameters from the monitoring data in a structural health monitoring (SHM) system, so as to provide a scientific basis for structural damage identification and dynamic model updating. In view of this, this paper reviews methods for identifying structural modal parameters under environmental excitation and briefly describes how to identify structural damage based on the derived modal parameters. The paper primarily introduces data-driven modal parameter identification methods (e.g., time-domain, frequency-domain, and time-frequency-domain methods), briefly describes damage identification methods based on variations in modal parameters (e.g., natural frequencies, modal shapes, and curvature modal shapes) and modal validation methods (e.g., the stability diagram and the modal assurance criterion), and further discusses the current status of artificial intelligence (AI) methods applied to modal parameter identification and damage identification. Based on this analysis, the main development trends of structural modal parameter identification and damage identification methods are given, to provide scientific references for the optimized design and functional upgrading of SHM systems.
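As a minimal sketch of the frequency-domain (peak-picking) step mentioned above, the snippet below estimates the power spectral density of an ambient-excited response and reads off candidate natural frequencies; the signal is synthetic (two assumed modes at 1.5 Hz and 4.2 Hz plus noise), for illustration only.

```python
import numpy as np
from scipy.signal import welch, find_peaks

fs = 100.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic ambient response with two modal components and measurement noise.
response = (np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 4.2 * t)
            + 0.3 * rng.standard_normal(t.size))

# Welch PSD estimate, then peak picking to locate candidate natural frequencies.
freqs, psd = welch(response, fs=fs, nperseg=4096)
peaks, _ = find_peaks(psd, height=0.1 * psd.max())
print("candidate natural frequencies (Hz):", np.round(freqs[peaks], 2))
```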
Funding: Funded by the Major Scientific and Technological Projects of CNOOC in the 14th Five-Year Plan (No. KJGG2022-0701) and the CNOOC Research Institute (No. 2020PFS-03).
Abstract: To analyze the differences in the transport and distribution of different types of proppants, and to address issues such as the short effective support length of proppants and their poor placement in hydraulically intersecting fractures, this study considered the combined impact of geological and engineering factors on conductivity. Using reservoir production parameters and the discrete element method (DEM), multispherical proppants were constructed. In addition, a 3D fracture model based on the specified conditions of the L block employed coupled computational fluid dynamics-discrete element method (CFD-DEM) simulations to quantitatively analyze the transport and placement patterns of multispherical proppants in intersecting fractures. Results indicate that turbulent kinetic energy is an intrinsic factor affecting proppant transport. Moreover, the placement efficiency and migration distance of the low-sphericity quartz sand constructed by the DEM in the main fracture are significantly reduced compared with spherical ceramic proppants, with a 27.7% decrease in the volume fraction at the fracture surface, which lowers the placement concentration and damages fracture conductivity. Compared with small-angle fractures, controlling artificial and natural fractures to intersect at angles of 45° to 60° increases the effective support length by approximately 20.6%. During hydraulic fracturing of gas wells, the fracture support area and post-closure conductivity can be ensured by controlling the sphericity of the proppants and adjusting the perforation direction to control the direction of the artificial fractures.
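To make the sphericity measure that separates the two proppant types concrete, the sketch below evaluates the standard definition ψ = π^(1/3)·(6V)^(2/3)/A (the surface area of a volume-equivalent sphere divided by the particle's actual surface area); the particle size and the extra surface area assumed for the quartz sand are illustrative values, not inputs from the study.

```python
import numpy as np

def sphericity(volume, surface_area):
    # psi = pi^(1/3) * (6V)^(2/3) / A; equals 1.0 for a perfect sphere.
    return np.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area

r = 0.3e-3                                   # 0.3 mm proppant radius (assumed)
v_sphere = 4 / 3 * np.pi * r ** 3
a_sphere = 4 * np.pi * r ** 2
print("spherical ceramic proppant:", round(sphericity(v_sphere, a_sphere), 3))      # -> 1.0

# A quartz-sand grain of equal volume but ~25% more surface area (assumed roughness).
print("low-sphericity quartz sand:", round(sphericity(v_sphere, 1.25 * a_sphere), 3))
```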
Abstract: The Chinese Vice Premier’s visit to Africa continues to emphasize mutual cooperation, with a focus on agriculture. For many years, the Chinese Government has dispatched the minister of foreign affairs to Africa for the first official visit of the year. This year, however, that rule was broken when Hui Liangyu, Chinese Vice Premier, made the 14-day trip. From January 6 to 19, Hui paid official visits to Mauritius, Zambia, the Democratic Republic of Congo (DRC), Cameroon and Senegal, focusing on economic and agri-