The large-particle CeO2 and Y2O3 were prepared using oxalic acid as the precipitant. The effects of operational parameters such as stirring velocity, precipitation temperature, feeding speed, aging time, precipitation method, and calcination temperature on the particle size and loose density of CeO2 were studied. Under suitable conditions, CeO2 with a median particle size of D50 ≥ 30 μm and a loose density of ≥ 2.0 g/mL, and Y2O3 with a median particle size of D50 ≥ 20 μm, were obtained. The technology has the advantages of a simple process, low equipment investment, ease of operation, and suitability for industrial-scale production.
Crystalline rare-earth (RE) carbonates with large particle size were prepared from the lixivium of weathered-crust elution-deposited rare-earth ores by precipitation with ammonium bicarbonate as the precipitant. Their chemical composition was studied using elemental and thermogravimetric analyses (TGA), and their structure and morphology were characterized using Fourier transform infrared (FTIR) spectroscopy, X-ray diffraction (XRD), and scanning electron microscopy (SEM). The results demonstrate that the crystalline rare-earth carbonate is a hydrated basic carbonate or oxycarbonate rather than a stable intermediate carbonate in the process of thermal decomposition. The particle size of the crystalline rare-earth carbonates is in the range of 50–200 μm. With an RE2O3 content of up to 95 wt%, the quality of the product is superior to the Chinese National Standard (GB/T 28882–2012).
Cerium dioxide (CeO2) has attracted much attention and has wide applications such as automotive exhaust catalysts, polishing materials for optical glasses, additives for advanced glasses, and cosmetic materials. The particle size and its distribution are key factors in the performance of these materials in functional applications. However, control of particle size is still a challenge in materials synthesis. Therefore, continuous precipitation of cerium oxalate (the precursor of ceria) was carried out at dif...
The large-particle cerium oxide was prepared using oxalic acid as the precipitation agent, and the effects of preparation conditions on the particle size of the product were discussed. The results showed that the particle size of cerium oxide could be controlled effectively by the temperature, acidity of the solution, aging time, etc. An optimized preparation process for large-particle cerium oxide was obtained, by which cerium oxide with sizes between 50 and 150 μm was prepared. Moreover, the cerium oxide particles were uniformly dispersed.
Rheological properties of large particulate-liquid model food systems were studied using the BMS (ball measuring system). The model food systems were composed of alginate gel particles (~10 mm) and a gelatinised starch solution with 1% w/w sodium chloride as the liquid phase. The effects of particle phase volume (Φ, 0–0.60), particle shape (cube, sphere, rod, and disc) and starch concentration (3% and 5% w/w) were investigated. The power law model was successfully applied to characterize the flow properties of each system, yielding the consistency K and the power law index n. At 3% w/w starch in the liquid phase, K increased and n decreased with increasing Φ for all particle shapes. The particle effect on viscosity was further analysed by means of the Krieger-Dougherty model, from which the maximum packing fraction Φm and the intrinsic viscosity [η] were obtained for each system. Φm depended on the particle shape, as expected. The [η] value also depended on particle shape and was largely in the order of 4.04 (cube), 3.28 (disc), 2.56 (sphere) and 2.32 (rod) at 3% w/w starch. [η] depended on starch concentration as well, being 1.1 at 5% w/w starch in the liquid phase with spherical particles. The present results show a successful application of the BMS to the rheological study of large particulate-liquid food systems at relatively small experimental scale (~0.5 L), and also that existing models for suspension rheology are applicable to such food systems to a great extent.
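The two models named above can be reproduced in a few lines. The sketch below (synthetic data, not the paper's measurements) fits the power-law parameters K and n by a log-log linear least-squares fit and evaluates the Krieger-Dougherty relative viscosity; the sphere-particle intrinsic viscosity [η] = 2.56 comes from the abstract, while Φm = 0.64 (random close packing of spheres) is an assumption, since the abstract does not report the fitted value.

```python
import numpy as np

def fit_power_law(shear_rate, shear_stress):
    """Return (K, n) for tau = K * gamma_dot**n via a log-log linear fit."""
    slope, intercept = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
    return np.exp(intercept), slope  # K from intercept, n from slope

def krieger_dougherty(phi, phi_max, intrinsic_visc):
    """Relative viscosity eta_r = (1 - phi/phi_max)**(-[eta]*phi_max)."""
    return (1.0 - phi / phi_max) ** (-intrinsic_visc * phi_max)

# Synthetic shear-thinning data generated from known K = 5, n = 0.4
gamma_dot = np.array([0.1, 1.0, 10.0, 100.0])
tau = 5.0 * gamma_dot ** 0.4
K, n = fit_power_law(gamma_dot, tau)
print(round(K, 2), round(n, 2))  # recovers 5.0 and 0.4

# Relative viscosity of a sphere suspension at phi = 0.30
# ([eta] = 2.56 from the abstract; phi_max = 0.64 is an assumed value)
print(round(krieger_dougherty(0.30, 0.64, 2.56), 2))
```

On exact power-law data the fit recovers K and n to machine precision; with real BMS measurements the same fit gives the least-squares estimates of the flow parameters.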
Large-size titanium alloy parts are widely used in aerospace. However, they are difficult to manufacture using mechanical cutting technology because of severe tool wear. Electrochemical jet machining is a promising technology for achieving high efficiency, because it has high machining flexibility and no machining tool wear. However, reports on the macro electrochemical jet machining of large-size titanium alloy parts are very scarce, because it is difficult to achieve effective constraint of the flow field in macro electrochemical jet machining. In addition, titanium alloy is very sensitive to fluctuation of the flow field, and a turbulent flow field would lead to serious stray corrosion. This paper reports a series of investigations of the electrochemical jet machining of titanium alloy parts. Based on flow analysis and experiments, the machining flow field was effectively constrained. A TB6 titanium alloy part with a perimeter of one meter was machined. The machined surface was smooth with no obvious machining defects, and the machining process was particularly stable with no obvious spark discharge. The research provides a reference for applying electrochemical jet machining technology to large-allowance material removal in the machining of large titanium alloy parts.
In this paper, we establish some strong laws of large numbers for non-independent random variables under the framework of sublinear expectations. One of our main results is for blockwise m-dependent random variables, and another is for sub-orthogonal random variables. Both extend the strong law of large numbers for independent random variables under sublinear expectations to the non-independent case.
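For context, the independent-case law that both results extend can be stated as follows (a standard Peng-type formulation, included here as background rather than quoted from the paper):

```latex
% Strong law of large numbers under a sublinear expectation \hat{\mathbb{E}},
% for an IID sequence (X_i) in Peng's sense:
\[
  \underline{\mu} \;\le\; \liminf_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} X_i
  \;\le\; \limsup_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} X_i \;\le\; \overline{\mu}
  \quad \text{quasi-surely},
\]
% where the upper and lower means are
% \overline{\mu} = \hat{\mathbb{E}}[X_1] and \underline{\mu} = -\hat{\mathbb{E}}[-X_1].
```

Under a classical (linear) expectation the two means coincide and this reduces to Kolmogorov's strong law; the paper's contribution is to obtain such bounds without independence, for blockwise m-dependent and sub-orthogonal sequences.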
Model evaluation using benchmark datasets is an important method for measuring the capability of large language models (LLMs) in specific domains, and it is mainly used to assess the knowledge and reasoning abilities of LLMs. To better assess the capability of LLMs in the agricultural domain, Agri-Eval was proposed as a benchmark for assessing the knowledge and reasoning ability of LLMs in agriculture. The assessment dataset used in Agri-Eval covered seven major disciplines in the agricultural domain: crop science, horticulture, plant protection, animal husbandry, forest science, aquaculture science, and grass science, and contained a total of 2283 questions. Among domestic general-purpose LLMs, DeepSeek R1 performed best with an accuracy rate of 75.49%. Among international general-purpose LLMs, Gemini 2.0 Pro Exp 0205 stood out as the top performer, achieving an accuracy rate of 74.28%. As a vertical-domain agricultural LLM, Shennong V2.0 outperformed all the LLMs in China, and its answer accuracy on agricultural knowledge exceeded that of all existing general-purpose LLMs. The launch of Agri-Eval helps LLM developers comprehensively evaluate model capability in the field of agriculture through a variety of tasks and tests, promoting the development of LLMs in the field of agriculture.
This study demonstrates a novel integration of large language models, machine learning, and multicriteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. The fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. When applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of this framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. Analysis revealed a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, like account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
Background: To assess the effectiveness of ChatGPT and Bard in the initial identification of articles for Otolaryngology-Head and Neck Surgery systematic literature reviews. Methods: Three PRISMA-based systematic reviews (Jabbour et al. 2017, Wong et al. 2018, and Wu et al. 2021) were replicated using ChatGPT v3.5 and Bard. Outputs (author, title, publication year, and journal) were compared to the original references and cross-referenced with medical databases for authenticity and recall. Results: Several themes emerged when comparing Bard and ChatGPT across the three reviews. Bard generated more outputs and had greater recall in Wong et al.'s review, with a broader date range in Jabbour et al.'s review. In Wu et al.'s review, ChatGPT-2 had higher recall and identified more authentic outputs than Bard-2. Conclusion: Large language models (LLMs) failed to fully replicate peer-reviewed methodologies, producing outputs with inaccuracies but identifying relevant, especially recent, articles missed by the original references. While human-led PRISMA-based reviews remain the gold standard, refining LLMs for literature reviews shows potential.
Magnesium hydride (MgH2), a promising high-capacity hydrogen storage material, is hindered by slow dehydrogenation kinetics. AI-driven catalyst discovery to address this is often hampered by the laborious extraction of data from unstructured literature. To overcome this, we introduce a transformative "LLM to Agent" framework that synergistically integrates Large Language Models (LLMs) for automated data curation with Machine Learning (ML) for predictive design. We automatically constructed a comprehensive database of 809 MgH2 catalysts (6555 data rows) with high fidelity and an ~40-fold acceleration over manual methods. The resulting ML models achieved high accuracy (average R^2 > 0.91) in predicting dehydrogenation temperature and activation energy, subsequently guiding a genetic algorithm (GA) in an exploratory inverse design that autonomously uncovered key design principles for high-performance catalysts. Encouragingly, a strong alignment was found between these AI-discovered principles and the design strategies of recently reported, state-of-the-art experimental systems, providing substantial evidence for the validity of our approach. The framework culminates in Cat-Advisor, a novel, domain-adapted multi-agent system. Cat-Advisor translates ML predictions and retrieval-augmented knowledge into actionable design guidance, demonstrating capabilities that surpass those of general-purpose LLMs in this specialized domain. This work delivers a practical AI toolkit for accelerated materials discovery and advances the emerging agent-based paradigm for designing next-generation energy technologies.
Accurate forecasting of tropical cyclone (TC) tracks and intensities is essential. Although the TianXing large weather model, a six-hourly forecasting model surpassing operational forecasts, exhibits superior performance, its TC forecasts still require enhancement. Prediction errors persist due to biases in the training data and smoothing effects in data-driven methods. To address this, we introduce CycloneBCNet, a deep-learning model designed to correct TianXing's TC forecast biases by leveraging spatial and temporal data. CycloneBCNet utilizes the SimVP (simpler yet better video prediction) framework with spatial attention to highlight cyclone core regions in forecast fields. It also incorporates TC trend information (center position, maximum wind speed, and minimum sea level pressure) via an LSTM (long short-term memory) module. These TC vectors are derived from post-processed TianXing forecasts. By fusing features from forecast fields and TC vectors, CycloneBCNet corrects biases across multiple lead times. At a 96-h lead time, the track error is reduced from 162.4 to 86.4 km, the wind speed error from 17.2 to 6.69 m s^(-1), and the pressure error from 22.2 to 9.36 hPa. Interpretability analysis shows that CycloneBCNet adjusts its attention across forecast lead times. Intensity corrections prioritize inner-core dynamics, particularly the eye and eyewall, while track corrections shift from lower-level variables and the cyclone's core to broader environmental factors and mid- to upper-level features as the forecast duration increases. These findings demonstrate that CycloneBCNet effectively captures key TC dynamics consistent with meteorological principles, including the dominance of near-surface conditions for intensity and the increasing influence of steering currents on track prediction.
Large Language Models (LLMs) are increasingly applied in the field of code translation. However, existing evaluation methodologies suffer from two major limitations: (1) the high overlap between test data and pretraining corpora, which introduces significant bias in performance evaluation; and (2) mainstream metrics focus primarily on surface-level accuracy, failing to uncover the underlying factors that constrain model capabilities. To address these issues, this paper presents TCode (Translation-Oriented Code Evaluation benchmark), a complexity-controllable, contamination-free benchmark dataset for code translation, alongside a dedicated static feature sensitivity evaluation framework. The dataset is carefully designed to control complexity along multiple dimensions, including syntactic nesting and expression intricacy, enabling both broad coverage and fine-grained differentiation of sample difficulty. This design supports precise evaluation of model capabilities across a wide spectrum of translation challenges. The proposed evaluation framework introduces a correlation-driven analysis mechanism based on static program features, enabling predictive modeling of translation success from two perspectives: Code Form Complexity (e.g., code length and character density) and Semantic Modeling Complexity (e.g., syntactic depth, control-flow nesting, and type system complexity). Empirical evaluations across representative LLMs, including Qwen2.5-72B and Llama3.3-70B, demonstrate that even state-of-the-art models achieve over 80% compilation success on simple samples, but their accuracy drops sharply below 40% on complex cases. Further correlation analysis indicates that Semantic Modeling Complexity alone is correlated with up to 60% of the variance in translation success, with static program features exhibiting nonlinear threshold effects that highlight clear capability boundaries. This study departs from the traditional accuracy-centric evaluation paradigm and, for the first time, systematically characterizes the capabilities of large language models in translation tasks through the lens of program static features. The findings provide actionable insights for model refinement and training strategy development.
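The correlation-driven analysis described above can be sketched on toy data. Everything below is illustrative: the feature set, the synthetic ground truth in which success falls with semantic complexity, and the resulting coefficients are assumptions for demonstration, not the paper's dataset or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Toy static features for n translation samples
syntactic_depth = rng.integers(1, 12, n)       # AST nesting depth
controlflow_nesting = rng.integers(0, 8, n)    # loop/branch nesting
code_length = rng.integers(10, 500, n)         # lines of code (form feature)

# Synthetic ground truth: success probability drops as a composite
# semantic-complexity score rises; code length plays no causal role here.
semantic_complexity = syntactic_depth + 1.5 * controlflow_nesting
success = ((semantic_complexity + rng.normal(0, 2, n)) < 12).astype(float)

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float((x * y).mean())

r_semantic = pearson(semantic_complexity, success)
r_length = pearson(code_length.astype(float), success)
print(f"semantic complexity vs success: r = {r_semantic:.2f}")
print(f"code length vs success:         r = {r_length:.2f}")
```

In this synthetic setup the semantic-complexity score shows a strong negative correlation with success while code length shows almost none, mirroring the kind of per-feature diagnosis the framework performs (the paper's actual analysis additionally models nonlinear threshold effects).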
Objective: To develop a clinical decision and prescription generation system (CDPGS) for diarrhea in traditional Chinese medicine (TCM), utilizing a specialized large language model (LLM), Qwen-TCM-Dia, to standardize diagnostic processes and prescription generation. Methods: Two primary datasets were constructed: an evaluation benchmark and a fine-tuning dataset consisting of fundamental diarrhea knowledge, medical records, and chain-of-thought (CoT) reasoning datasets. After an initial evaluation of 16 open-source LLMs across inference time, accuracy, and output quality, Qwen2.5 was selected as the base model due to its superior overall performance. We then employed a two-stage low-rank adaptation (LoRA) fine-tuning strategy, integrating continued pre-training on domain-specific knowledge with instruction fine-tuning using CoT-enriched medical records. This approach was designed to embed the clinical logic (symptoms → pathogenesis → therapeutic principles → prescriptions) into the model's reasoning capabilities. The resulting fine-tuned model, specialized for TCM diarrhea, was designated Qwen-TCM-Dia. Model performance was evaluated for disease diagnosis and syndrome type differentiation using accuracy, precision, recall, and F1-score. Furthermore, the quality of the generated prescriptions was compared with that of established open-source TCM LLMs. Results: Qwen-TCM-Dia achieved peak performance compared to both the base Qwen2.5 model and five other open-source TCM LLMs. It achieved 97.05% accuracy and a 91.48% F1-score in disease diagnosis, and 74.54% accuracy and a 74.21% F1-score in syndrome type differentiation. Compared with existing open-source TCM LLMs (BianCang, HuangDi, LingDan, TCMLLM-PR, and ZhongJing), Qwen-TCM-Dia exhibited higher fidelity in reconstructing the "symptoms → pathogenesis → therapeutic principles → prescriptions" logic chain. It provided complete prescriptions, whereas the other models often omitted dosages or generated mismatched prescriptions. Conclusion: By integrating continued pre-training, CoT reasoning, and a two-stage fine-tuning strategy, this study establishes a CDPGS for diarrhea in TCM. The results demonstrate the synergistic effect of strengthening domain representation through pre-training and activating logical reasoning via CoT. This research not only provides critical technical support for the standardized diagnosis and treatment of diarrhea but also offers a scalable paradigm for the digital inheritance of expert TCM experience and the intelligent transformation of TCM.
AIM: To investigate the clinical characteristics and treatment outcomes, including visual function and overall survival (OS), of patients with ocular adnexal diffuse large B-cell lymphoma (OA-DLBCL). METHODS: This retrospective cohort study enrolled 29 patients diagnosed with OA-DLBCL based on histopathological biopsy between 2006 and 2023. Patients were stratified into two subgroups: primary OA-DLBCL (no prior history of lymphoma) and secondary OA-DLBCL (history of DLBCL at non-ocular adnexal sites). OS was defined as the time interval from OA-DLBCL diagnosis to death from any cause. Survival analysis was performed using the Kaplan-Meier method, and prognostic factors affecting OS were identified using multivariate Cox proportional hazards regression with a stepwise selection approach. RESULTS: The cohort included 24 patients with primary OA-DLBCL (13 males, 11 females; mean age: 61.36±18.29y) and 5 patients with secondary OA-DLBCL (2 males, 3 females; mean age: 50.94±18.17y). Among the primary OA-DLBCL subgroup, 12 patients (50%) presented with advanced disease (Ann Arbor stage IIIE-IV), and 16 patients (66%) were classified as T4 disease according to the tumor-node-metastasis (TNM) staging system. The mean final visual acuity was 1.72±1.10 in the primary group and 0.90±1.18 in the secondary group. The 5-year OS rate for the entire cohort was 27.7%. Multivariate analysis identified five factors significantly associated with poor survival outcomes: epiphora [adjusted hazard ratio (aHR), 36.95], atherosclerotic cardiovascular disease (aHR, 10.08), human immunodeficiency virus (HIV) infection (aHR, 12.47), M1 stage (aHR, 6.99), and secondary OA-DLBCL (aHR, 6.03; all P<0.05). The median OS was 1.68y for primary OA-DLBCL and 1.12y for secondary OA-DLBCL. CONCLUSION: A substantial proportion of patients with primary OA-DLBCL present with advanced-stage disease at diagnosis. Epiphora, atherosclerotic cardiovascular disease, HIV infection, M1 stage, and secondary OA-DLBCL are independent prognostic factors for poor survival outcomes. These findings emphasize the urgent need for optimized therapeutic strategies and early screening protocols to improve the management of OA-DLBCL, particularly in developing countries.
War rehearsals have become increasingly important in national security due to the growing complexity of international affairs. However, traditional rehearsal methods, such as military chess simulations, are inefficient and inflexible, with particularly pronounced limitations in command and decision-making. The overwhelming volume of information and high decision complexity hinder the realization of autonomous and agile command and control. To address this challenge, an intelligent warfare simulation framework named Command-Agent is proposed, which deeply integrates large language models (LLMs) with digital twin battlefields. By constructing a highly realistic battlefield environment through real-time simulation and multi-source data fusion, the natural language interaction capabilities of LLMs are leveraged to lower the command threshold and to enable autonomous command through the Observe-Orient-Decide-Act (OODA) feedback loop. Within the Command-Agent framework, a multi-model collaborative architecture is further adopted to decouple the decision-generation and command-execution functions of LLMs. By combining specialized models such as DeepSeek-R1 and MCTool, the limitations of single-model capabilities are overcome. MCTool is a lightweight execution model fine-tuned for military function calling tasks. The framework also introduces a Vector Knowledge Base to mitigate hallucinations commonly exhibited by LLMs. Experimental results demonstrate that Command-Agent not only enables natural language-driven simulation and control but also deeply understands commander intent. Leveraging the multi-model collaborative architecture, in red-blue UAV confrontations involving 2 to 8 UAVs, the integrated score is improved by an average of 41.8% compared to the single-agent system (MCTool), accompanied by a 161.8% optimization in the battle loss ratio. Furthermore, compared with multi-agent systems lacking the knowledge base, the inclusion of the Vector Knowledge Base improves overall performance by a further 16.8%. In comparison with the general model (Qwen2.5-7B), the fine-tuned MCTool leads by 5% in execution efficiency. Therefore, the proposed Command-Agent introduces a novel perspective on military command systems and offers a feasible solution for intelligent battlefield decision-making.
The Yingxiu-Beichuan fault zone (YBFZ) has long been active and has experienced repeated large earthquakes. The physicochemical properties of the deep fault zone (>1000 m) are key to understanding the deformation mechanism of large earthquakes. This study uses rock magnetic, microstructural, and geochemical analyses of representative samples exposed in FZ1681 within the Wenchuan Earthquake Fault Scientific Drilling borehole 2 (WFSD-2) cores. Fault gouge and fault breccia have higher magnetic susceptibility values than the wall rocks, and they contain abundant paramagnetic minerals and small quantities of magnetite and monoclinic pyrrhotite. The magnetite and monoclinic pyrrhotite in the fault gouge were mainly formed by coseismic frictional heating, indicating that large earthquakes with frictional heating temperatures of ~500-900℃ once occurred in the YBFZ. The seismogenic and coseismic environment was reducing, with a relatively high sulfur content. The monoclinic pyrrhotite in the fault breccia was formed mainly by low-temperature hydrothermal fluid, indicating that the fault zone experienced reducing, low-temperature (<400℃) hydrothermal fluid with a relatively high sulfur content after the earthquake. The YBFZ, which experiences frequent large earthquakes, is a weakly oxidizing environment at different depths, but the effect of the low-temperature hydrothermal fluid is weaker at depth.
The giant impact hypothesis for the Moon's origin has had difficulty explaining the nearly identical isotopic compositions of Moon rocks and rocks from Earth's silicate mantle and crust. These similarities are instead more compatible with the Darwin-Wise hypothesis that the Moon arose by fission of a rapidly spinning Earth. To overcome problems with the fission model concerning structural stability and angular momentum conservation, some authors suggested that lunar fission was feasible on a more slowly rotating Earth if assisted by a nuclear explosion near the core-mantle boundary. In this light we consider the possible roles of the large low-velocity provinces (LLVPs). These long-lived structures have been implicated in diverse geophysical processes ranging from deep mantle plumes to continental breakup and mass extinction events. While the LLVPs have been seen as possible remnants of the giant impactor, we propose that one of them was the site of lunar ejection. Internal heating of the liquid core is suggested to have given rise to an equatorial belt just under the core-mantle boundary, analogous to the one recently detected by Ma and Tkalcic [Sci Adv 10(35): eadn5562, 2024]. Upwellings of heat and volatiles from this belt then generated two antipodal, equatorial bulges: the precursors of the Pacific and African LLVPs. Prior to the emergence of plate tectonics, core heat was mainly dissipated by networks of deep mantle plumes extending above the proto-LLVPs. These plume networks represent conduits of weakened mantle through which proto-lunar materials could later rise in a focused ejection. Continuing heat buildup in the core eventually triggered a cataclysmic explosion in the Pacific proto-LLVP, possibly analogous to a planetary-scale kimberlite eruption. This explosion launched LLVP and overlying mantle material into a low Earth orbit, where it coalesced to form the Moon. Some possible sources of additional energy to power the explosion are considered, including nuclear fission, bolide impacts, and a hypothetical gravitational decay process culminating in an 'A event'.
Recommendation systems are key to boosting user engagement, satisfaction, and retention, particularly on media platforms where personalized content is vital. Sequential recommendation systems learn from user-item interactions to predict future items of interest. However, many current methods rely on unique user and item IDs, limiting their ability to represent users and items effectively, especially in zero-shot learning scenarios where training data is scarce. With the rapid development of Large Language Models (LLMs), researchers are exploring their potential to enhance recommendation systems. However, there is a semantic gap between the linguistic semantics of LLMs and the collaborative semantics of recommendation systems, where items are typically indexed by IDs. Moreover, most research focuses on item representations, neglecting personalized user modeling. To address these issues, we propose a sequential recommendation framework using LLMs, called CIT-Rec, which integrates Collaborative semantics for user representation and Image and Text information for item representation to enhance Recommendations. Specifically, by aligning intuitive image information with text containing semantic features, we can represent items more accurately, improving item representation quality. We focus not only on item representations but also on user representations. To capture users' personalized preferences more precisely, we use traditional sequential recommendation models to train on users' historical interaction data, effectively capturing behavioral patterns. Finally, by combining LLMs and traditional sequential recommendation models, we allow the LLM to understand linguistic semantics while capturing collaborative semantics. Extensive evaluations on real-world datasets show that our model outperforms baseline methods, effectively combining user interaction history with item visual and textual modalities to provide personalized recommendations.
One of the main issues in designing optimum tapered cascades for uranium enrichment for annual fuel production in a power reactor is whether to employ large(fat)or small(thin)cascades.What will be the permissible and ...One of the main issues in designing optimum tapered cascades for uranium enrichment for annual fuel production in a power reactor is whether to employ large(fat)or small(thin)cascades.What will be the permissible and optimal ranges of the number of machines that can be used in a cascade?For the first time,the permissible and optimal ranges of the number of gas centrifuges that can be utilized in a cascade were investigated using two types of centrifuges,and the performance of small and large tapered cascades was discussed.The particle swarm optimization algorithm(PSO)has been used to optimize tapered cascades.The results show:(1)For the first centrifuge,41 cascades(91≤n≤4897)and for the second centrifuge,49 cascades(18≤n≤3839)with small and large sizes can be used in enrichment facilities,and the best cascade for them has 530(with 23 stages)and 39(with 7 stages)centrifuges,respectively.(2)For both centrifuges,when 600≤n(number of centrifuges=n),the large cascade performance changes are relatively insignificant.(3)For both types of gas centrifuges,the annual los s of separation power in enrichment facilities is approximately 1.25%-4.82%of the total separation work required.展开更多
Funding: National Key Basic Research Program (NKBRP 2004CCA03900) and the National Natural Science Foundation of China (50662002)
Abstract: Large-particle CeO2 and Y2O3 were prepared using oxalic acid as the precipitant. The effects of operational parameters such as stirring velocity, precipitation temperature, feeding speed, aging time, precipitation method, and calcination temperature on the particle size and loose density of CeO2 were studied. Under the selected conditions, CeO2 with a median particle size of D50 ≥ 30 μm and a loose density of ≥ 2.0 g/mL, as well as Y2O3 with a median particle size of D50 ≥ 20 μm, were prepared. This technology has the advantages of a simple process, low equipment investment, ease of operation, and suitability for industrial production.
Funding: This work was financially supported by the National Natural Science Foundation of China (Nos. 51964021 and 51774156), the Jiangxi Province Natural Science Foundation, China (No. 20181BAB206020), and China's National Key R&D Plan Project (No. 2019YFC0605000).
Abstract: Crystalline rare-earth (RE) carbonates with large particle size were prepared from the lixivium of weathered crust elution-deposited rare-earth ores by precipitation with ammonium bicarbonate as the precipitant. Their chemical composition was studied using elemental and thermogravimetric analyses (TGA), and their structure and morphology were characterized using Fourier transform infrared (FTIR) spectroscopy, X-ray diffraction (XRD), and scanning electron microscopy (SEM). The results demonstrate that the crystalline rare-earth carbonate is a hydrated basic carbonate or oxycarbonate and not a stable intermediate carbonate during thermal decomposition. The particle size of the crystalline rare-earth carbonates is in the range of 50–200 μm. With an RE2O3 content of up to 95 wt%, the quality of the product is superior to the Chinese National Standard (GB/T 28882–2012).
Funding: Supported by the National Natural Science Foundation of China (2056601, 50662002)
Abstract: Cerium dioxide (CeO2) has attracted much attention and has wide applications, such as in automotive exhaust catalysts, polishing materials for optical glasses, additives for advanced glasses, and cosmetic materials. The particle size and its distribution are key factors in the performance of the material in these functional applications. However, control of particle size remains a challenge in materials synthesis. Therefore, continuous precipitation of cerium oxalate (the precursor of ceria) was carried out at dif...
Funding: Project supported by the Inner Mongolia Science & Technology Innovation Leading Award Fund Project (20081717)
Abstract: Large-particle cerium oxide was prepared using oxalic acid as the precipitating agent. The effects of the preparation conditions on the particle size of cerium oxide were discussed. The results showed that the particle size could be controlled effectively by the temperature, the acidity of the solution, the aging time, etc. An optimized preparation process for large-particle cerium oxide was obtained, by which cerium oxide with particle sizes between 50 and 150 μm was prepared. Moreover, the cerium oxide particles were uniformly dispersed.
Abstract: The rheological properties of large particulate–liquid model food systems were studied using the BMS (ball measuring system). The model food systems were composed of alginate gel particles (~10 mm) and a gelatinised starch solution with 1% w/w sodium chloride as the liquid phase. The effects of particle phase volume (φ, 0–0.60), particle shape (cube, sphere, rod, and disc), and starch concentration (3% and 5% w/w) were investigated. The power-law model was successfully applied to characterize the flow properties of each system, and the consistency K and power-law index n were obtained. K increased and n decreased with increasing φ for all particle shapes at 3% w/w starch in the liquid phase. The particle effect on the viscosity was further analysed by means of the Krieger–Dougherty model, and the maximum packing fraction φm and the intrinsic viscosity [η] were obtained for each system. The φm depended on the particle shape, as expected. The [η] value depended on particle shape and was largely in the order of 4.04 (cube), 3.28 (disc), 2.56 (sphere), and 2.32 (rod) at 3% w/w starch. The [η] also depended on starch concentration and was 1.1 at 5% w/w starch in the liquid phase with spherical particles. The present results show a successful application of the BMS to the rheological study of large particulate–liquid food systems at relatively small experimental scale (~0.5 L), and also that existing models for suspension rheology are applicable to such food systems to a great extent.
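For reference, the two suspension-rheology relations invoked in this abstract take the following standard textbook forms (with τ the shear stress, γ̇ the shear rate, and η_s the viscosity of the liquid phase; these are the conventional forms, not necessarily the exact fitting equations used in the study):

```latex
\tau = K \dot{\gamma}^{\,n}
\qquad\text{and}\qquad
\frac{\eta}{\eta_s} = \left( 1 - \frac{\phi}{\phi_m} \right)^{-[\eta]\,\phi_m}
```

The first is the power-law (Ostwald–de Waele) model yielding K and n; the second is the Krieger–Dougherty model, whose fit yields the maximum packing fraction φm and the intrinsic viscosity [η] reported above.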
Funding: The National Natural Science Foundation of China (No. 52205468), the China Postdoctoral Science Foundation (Nos. 2022M710061 and 2023T160277), and the Natural Science Foundation of Jiangsu Province (No. BK20210755).
Abstract: Large titanium alloy parts are widely used in aerospace. However, they are difficult to manufacture by mechanical cutting because of severe tool wear. Electrochemical jet machining is a promising technology for achieving high efficiency, because it offers high machining flexibility and no tool wear. However, reports on the macro electrochemical jet machining of large titanium alloy parts are very scarce, because it is difficult to effectively constrain the flow field at that scale. In addition, titanium alloy is very sensitive to fluctuations of the flow field, and a turbulent flow field leads to serious stray corrosion. This paper reports a series of investigations of the electrochemical jet machining of titanium alloy parts. Based on flow analysis and experiments, the machining flow field was effectively constrained, and a TB6 titanium alloy part with a perimeter of one meter was machined. The machined surface was smooth with no obvious defects, and the machining process was particularly stable with no obvious spark discharge. This research provides a reference for applying electrochemical jet machining to large-allowance material removal in the machining of large titanium alloy parts.
Abstract: In this paper, we establish some strong laws of large numbers for non-independent random variables under the framework of sublinear expectations. One of our main results is for blockwise m-dependent random variables, and another is for sub-orthogonal random variables. Both extend the strong law of large numbers for independent random variables under sublinear expectations to the non-independent case.
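For context, a typical strong law under a sublinear expectation Ê, in the Peng/Chen style for i.i.d. random variables (a standard form from the literature, not the paper's exact theorem, which extends such statements to non-independent sequences), reads:

```latex
\nu\!\left( \underline{\mu} \le \liminf_{n\to\infty} \frac{S_n}{n}
      \le \limsup_{n\to\infty} \frac{S_n}{n} \le \overline{\mu} \right) = 1,
\qquad
S_n = \sum_{i=1}^{n} X_i,\quad
\underline{\mu} = -\hat{\mathbb{E}}[-X_1],\quad
\overline{\mu} = \hat{\mathbb{E}}[X_1],
```

where ν is the lower capacity induced by Ê; when the sublinear expectation reduces to a classical linear one, μ̲ = μ̄ and this recovers Kolmogorov's strong law.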
Abstract: Model evaluation using benchmark datasets is an important way to measure the capability of large language models (LLMs) in specific domains, mainly by assessing their knowledge and reasoning abilities. To better assess LLM capability in the agricultural domain, Agri-Eval was proposed as a benchmark for assessing the knowledge and reasoning ability of LLMs in agriculture. The Agri-Eval assessment dataset covers seven major disciplines in the agricultural domain: crop science, horticulture, plant protection, animal husbandry, forest science, aquaculture science, and grass science, with a total of 2283 questions. Among domestic general-purpose LLMs, DeepSeek R1 performed best with an accuracy of 75.49%. Among international general-purpose LLMs, Gemini 2.0 Pro Exp 0205 stood out as the top performer, achieving an accuracy of 74.28%. As a vertical LLM for agriculture, Shennong V2.0 outperformed all the LLMs in China, and its accuracy on agricultural knowledge exceeded that of all existing general-purpose LLMs. Agri-Eval helps LLM developers comprehensively evaluate model capability in the field of agriculture through a variety of tasks and tests, promoting the development of LLMs in the field.
Funding: Funded by the Office of the Vice-President for Research and Development of Cebu Technological University.
Abstract: This study demonstrates a novel integration of large language models, machine learning, and multi-criteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. The fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. Applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of the framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. The analysis revealed a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, such as account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
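The k-means step used for content characterization can be sketched as plain Lloyd's algorithm. The embeddings and cluster count below are hypothetical toy data, not the study's actual Facebook-post features:

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain Lloyd's algorithm: assign each point to its nearest centroid,
    then recompute centroids, until assignments stabilize."""
    # Naive deterministic init: evenly spaced samples (k-means++ is better).
    centroids = X[:: max(1, len(X) // k)][:k].astype(float)
    for _ in range(iters):
        # Distance of every point to every centroid, shape (n_points, k).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Hypothetical 2-D "post embeddings": two well-separated content groups.
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
              [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])
labels, centroids = kmeans(X, k=2)
print(labels)  # [0 0 0 1 1 1]
```

In practice one would run this on LLM-derived post embeddings with k = 8 to reproduce the eight content clusters described above.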
Abstract: Background: To assess the effectiveness of ChatGPT and Bard in the initial identification of articles for Otolaryngology-Head and Neck Surgery systematic literature reviews. Methods: Three PRISMA-based systematic reviews (Jabbour et al. 2017, Wong et al. 2018, and Wu et al. 2021) were replicated using ChatGPT v3.5 and Bard. Outputs (author, title, publication year, and journal) were compared to the original references and cross-referenced with medical databases for authenticity and recall. Results: Several themes emerged when comparing Bard and ChatGPT across the three reviews. Bard generated more outputs and had greater recall in Wong et al.'s review, with a broader date range in Jabbour et al.'s review. In Wu et al.'s review, ChatGPT-2 had higher recall and identified more authentic outputs than Bard-2. Conclusion: Large language models (LLMs) failed to fully replicate the peer-reviewed methodologies, producing outputs with inaccuracies but identifying relevant, especially recent, articles missed by the original references. While human-led PRISMA-based reviews remain the gold standard, refining LLMs for literature reviews shows potential.
基金supported by the Natural Science Foundation of Hebei Province(E2023502006)Fundamental Research Fund for the Central Universities(2025MS131).
Abstract: Magnesium hydride (MgH2), a promising high-capacity hydrogen storage material, is hindered by slow dehydrogenation kinetics. AI-driven catalyst discovery to address this is often hampered by the laborious extraction of data from unstructured literature. To overcome this, we introduce a transformative "LLM to Agent" framework that synergistically integrates Large Language Models (LLMs) for automated data curation with Machine Learning (ML) for predictive design. We automatically constructed a comprehensive database of 809 MgH2 catalysts (6555 data rows) with high fidelity and an ~40-fold acceleration over manual methods. The resulting ML models achieved high accuracy (average R² > 0.91) in predicting dehydrogenation temperature and activation energy, subsequently guiding a Genetic Algorithm (GA) in an exploratory inverse design that autonomously uncovered key design principles for high-performance catalysts. Encouragingly, a strong alignment was found between these AI-discovered principles and the design strategies of recently reported, state-of-the-art experimental systems, providing substantial evidence for the validity of our approach. The framework culminates in Cat-Advisor, a novel, domain-adapted multi-agent system. Cat-Advisor translates ML predictions and retrieval-augmented knowledge into actionable design guidance, demonstrating capabilities that surpass those of general-purpose LLMs in this specialized domain. This work delivers a practical AI toolkit for accelerated materials discovery and advances the emerging agent-based paradigm for designing next-generation energy technologies.
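The GA-driven inverse design described above can be sketched generically: a surrogate model scores candidate descriptor vectors, and a genetic algorithm searches for high-scoring candidates. The objective below is a hypothetical stand-in, not the paper's trained ML surrogate:

```python
import random

def genetic_search(fitness, n_genes, pop_size=30, gens=60, seed=1):
    """Tiny real-coded GA: keep the elite half, breed the rest by
    blend crossover between elites plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            children.append([w * x + (1 - w) * y + rng.gauss(0, 0.05)
                             for x, y in zip(a, b)])
        pop = elite + children  # elitism: best individuals always survive
    return max(pop, key=fitness)

# Hypothetical surrogate: "fitness" peaks when all descriptor genes are 0.5,
# standing in for an ML-predicted dehydrogenation-temperature objective.
best = genetic_search(lambda g: -sum((x - 0.5) ** 2 for x in g), n_genes=3)
print(best)
```

Swapping the toy lambda for the trained R² > 0.91 regressor would turn this loop into the kind of exploratory inverse design the abstract reports.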
Funding: Supported by the Meteorological Joint Funds of the National Natural Science Foundation of China (Grant No. U2142211), the National Natural Science Foundation of China (Grant Nos. 42075141, 42341202, and 62088101), the National Key Research and Development Program of China (Grant No. 2020YFA0608000), and the Shanghai Municipal Science and Technology Major Project (Grant No. 2021SHZDZX0100).
Abstract: Accurate forecasting of tropical cyclone (TC) tracks and intensities is essential. Although the TianXing large weather model, a six-hourly forecasting model surpassing operational forecasts, exhibits superior performance, its TC forecasts still require enhancement; prediction errors persist due to biases in the training data and smoothing effects in data-driven methods. To address this, we introduce CycloneBCNet, a deep-learning model designed to correct TianXing's TC forecast biases by leveraging spatial and temporal data. CycloneBCNet utilizes the SimVP (simpler yet better video prediction) framework with spatial attention to highlight cyclone core regions in the forecast fields. It also incorporates TC trend information (center position, maximum wind speed, and minimum sea-level pressure) via an LSTM (long short-term memory) module; these TC vectors are derived from post-processed TianXing forecasts. By fusing features from the forecast fields and the TC vectors, CycloneBCNet corrects biases across multiple lead times. At a 96-h lead time, the track error is reduced from 162.4 to 86.4 km, the wind speed error from 17.2 to 6.69 m s⁻¹, and the pressure error from 22.2 to 9.36 hPa. Interpretability analysis shows that CycloneBCNet adjusts its attention across forecast lead times: intensity corrections prioritize inner-core dynamics, particularly the eye and eyewall, while track corrections shift from lower-level variables and the cyclone's core to broader environmental factors and mid- to upper-level features as the forecast duration increases. These findings demonstrate that CycloneBCNet effectively captures key TC dynamics consistent with meteorological principles, including the dominance of near-surface conditions for intensity and the increasing influence of steering currents on track prediction.
Abstract: Large Language Models (LLMs) are increasingly applied in the field of code translation. However, existing evaluation methodologies suffer from two major limitations: (1) high overlap between test data and pretraining corpora, which introduces significant bias into performance evaluation; and (2) mainstream metrics focus primarily on surface-level accuracy, failing to uncover the underlying factors that constrain model capabilities. To address these issues, this paper presents TCode (Translation-Oriented Code Evaluation benchmark), a complexity-controllable, contamination-free benchmark dataset for code translation, alongside a dedicated static-feature sensitivity evaluation framework. The dataset is carefully designed to control complexity along multiple dimensions, including syntactic nesting and expression intricacy, enabling both broad coverage and fine-grained differentiation of sample difficulty. This design supports precise evaluation of model capabilities across a wide spectrum of translation challenges. The proposed evaluation framework introduces a correlation-driven analysis mechanism based on static program features, enabling predictive modeling of translation success from two perspectives: Code Form Complexity (e.g., code length and character density) and Semantic Modeling Complexity (e.g., syntactic depth, control-flow nesting, and type-system complexity). Empirical evaluations across representative LLMs, including Qwen2.5-72B and Llama3.3-70B, demonstrate that even state-of-the-art models achieve over 80% compilation success on simple samples, but their accuracy drops sharply below 40% on complex cases. Further correlation analysis indicates that Semantic Modeling Complexity alone is correlated with up to 60% of the variance in translation success, with static program features exhibiting nonlinear threshold effects that highlight clear capability boundaries. This study departs from the traditional accuracy-centric evaluation paradigm and, for the first time, systematically characterizes the capabilities of large language models in translation tasks through the lens of program static features. The findings provide actionable insights for model refinement and training strategy development.
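The correlation-driven analysis described above boils down to correlating a static program feature with a per-sample success indicator. A minimal sketch with hypothetical data (the feature values and outcomes below are invented for illustration, not from TCode):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-sample data: syntactic depth vs. translation success (1/0).
depth   = [1, 1, 2, 2, 3, 4, 5, 6, 7, 8]
success = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
r = pearson(depth, success)
print(round(r, 3))
```

A strongly negative r on such data mirrors the reported threshold effect: success is high below a depth cutoff and collapses above it.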
Funding: National Key Research and Development Program of China (2024YFC3505400), the Capital Clinical Project of the Beijing Municipal Science & Technology Commission (Z221100007422092), and the Capital's Funds for Health Improvement and Research (2024-1-2231).
Abstract: Objective: To develop a clinical decision and prescription generation system (CDPGS) for diarrhea in traditional Chinese medicine (TCM), utilizing a specialized large language model (LLM), Qwen-TCM-Dia, to standardize diagnostic processes and prescription generation. Methods: Two primary datasets were constructed: an evaluation benchmark and a fine-tuning dataset consisting of fundamental diarrhea knowledge, medical records, and chain-of-thought (CoT) reasoning data. After an initial evaluation of 16 open-source LLMs on inference time, accuracy, and output quality, Qwen2.5 was selected as the base model due to its superior overall performance. We then employed a two-stage low-rank adaptation (LoRA) fine-tuning strategy, integrating continued pre-training on domain-specific knowledge with instruction fine-tuning on CoT-enriched medical records. This approach was designed to embed the clinical logic (symptoms → pathogenesis → therapeutic principles → prescriptions) into the model's reasoning capabilities. The resulting fine-tuned model, specialized for TCM diarrhea, was designated Qwen-TCM-Dia. Model performance was evaluated for disease diagnosis and syndrome type differentiation using accuracy, precision, recall, and F1-score. Furthermore, the quality of the generated prescriptions was compared with that of established open-source TCM LLMs. Results: Qwen-TCM-Dia achieved peak performance compared with both the base Qwen2.5 model and five other open-source TCM LLMs. It achieved 97.05% accuracy and a 91.48% F1-score in disease diagnosis, and 74.54% accuracy and a 74.21% F1-score in syndrome type differentiation. Compared with existing open-source TCM LLMs (BianCang, HuangDi, LingDan, TCMLLM-PR, and ZhongJing), Qwen-TCM-Dia exhibited higher fidelity in reconstructing the "symptoms → pathogenesis → therapeutic principles → prescriptions" logic chain. It provided complete prescriptions, whereas the other models often omitted dosages or generated mismatched prescriptions. Conclusion: By integrating continued pre-training, CoT reasoning, and a two-stage fine-tuning strategy, this study establishes a CDPGS for diarrhea in TCM. The results demonstrate the synergistic effect of strengthening domain representation through pre-training and activating logical reasoning via CoT. This research not only provides critical technical support for the standardized diagnosis and treatment of diarrhea but also offers a scalable paradigm for the digital inheritance of expert TCM experience and the intelligent transformation of TCM.
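The accuracy/precision/recall/F1 metrics used in the evaluation above follow the standard confusion-matrix definitions. A minimal sketch on hypothetical labels (the toy data below is illustrative, not the study's):

```python
def prf1(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for a single positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical labels: 1 = "syndrome type correctly differentiated".
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
p, r, f = prf1(y_true, y_pred)
print(p, r, f)  # 0.75 0.75 0.75
```

For multi-class syndrome differentiation, the reported F1-scores would be macro- or micro-averages of per-class values computed this way.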
Funding: Supported by the Faculty of Medicine, Prince of Songkla University. Wainipitapong S has received grants from the Faculty of Medicine, Prince of Songkla University.
Abstract: Aim: To investigate the clinical characteristics and treatment outcomes, including visual function and overall survival (OS), of patients with ocular adnexal diffuse large B-cell lymphoma (OA-DLBCL). Methods: This retrospective cohort study enrolled 29 patients diagnosed with OA-DLBCL based on histopathological biopsy between 2006 and 2023. Patients were stratified into two subgroups: primary OA-DLBCL (no prior history of lymphoma) and secondary OA-DLBCL (history of DLBCL at non-ocular-adnexal sites). OS was defined as the time from OA-DLBCL diagnosis to death from any cause. Survival analysis was performed using the Kaplan–Meier method, and prognostic factors affecting OS were identified using multivariate Cox proportional hazards regression with a stepwise selection approach. Results: The cohort included 24 patients with primary OA-DLBCL (13 males, 11 females; mean age: 61.36 ± 18.29 y) and 5 patients with secondary OA-DLBCL (2 males, 3 females; mean age: 50.94 ± 18.17 y). Among the primary OA-DLBCL subgroup, 12 patients (50%) presented with advanced disease (Ann Arbor stage IIIE–IV), and 16 patients (66%) were classified as T4 disease according to the tumor-node-metastasis (TNM) staging system. The mean final visual acuity was 1.72 ± 1.10 in the primary group and 0.90 ± 1.18 in the secondary group. The 5-year OS rate for the entire cohort was 27.7%. Multivariate analysis identified five factors significantly associated with poor survival outcomes: epiphora [adjusted hazard ratio (aHR), 36.95], atherosclerotic cardiovascular disease (aHR, 10.08), human immunodeficiency virus (HIV) infection (aHR, 12.47), M1 stage (aHR, 6.99), and secondary OA-DLBCL (aHR, 6.03; all P < 0.05). The median OS was 1.68 y for primary OA-DLBCL and 1.12 y for secondary OA-DLBCL. Conclusion: A substantial proportion of patients with primary OA-DLBCL present with advanced-stage disease at diagnosis. Epiphora, atherosclerotic cardiovascular disease, HIV infection, M1 stage, and secondary OA-DLBCL are independent prognostic factors for poor survival outcomes. These findings emphasize the urgent need for optimized therapeutic strategies and early screening protocols to improve the management of OA-DLBCL, particularly in developing countries.
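The Kaplan–Meier method used for the survival analysis above has a compact product-limit form. A minimal sketch on hypothetical follow-up data (the times and censoring flags below are invented, not the study's patient data):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    times:  follow-up time for each patient
    events: 1 if death observed at that time, 0 if censored
    Returns (time, S(t)) pairs at each distinct observed event time."""
    s, curve = 1.0, []
    for t in sorted(set(t for t, e in zip(times, events) if e == 1)):
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        s *= 1 - deaths / at_risk      # multiply survival by per-interval factor
        curve.append((t, s))
    return curve

# Hypothetical follow-up times in years; 0 marks a censored observation.
times  = [1.0, 1.5, 2.0, 2.0, 3.0, 4.0]
events = [1,   0,   1,   1,   0,   1]
curve = kaplan_meier(times, events)
for t, s in curve:
    print(t, round(s, 3))
```

Censored patients (events = 0) leave the risk set without triggering a step down, which is how the estimator handles incomplete follow-up.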
Abstract: War rehearsals have become increasingly important in national security due to the growing complexity of international affairs. However, traditional rehearsal methods, such as military chess simulations, are inefficient and inflexible, with particularly pronounced limitations in command and decision-making: the overwhelming volume of information and high decision complexity hinder autonomous and agile command and control. To address this challenge, an intelligent warfare simulation framework named Command-Agent is proposed, which deeply integrates large language models (LLMs) with digital-twin battlefields. By constructing a highly realistic battlefield environment through real-time simulation and multi-source data fusion, the natural-language interaction capabilities of LLMs are leveraged to lower the command threshold and to enable autonomous command through the Observe-Orient-Decide-Act (OODA) feedback loop. Within the Command-Agent framework, a multi-model collaborative architecture is further adopted to decouple the decision-generation and command-execution functions of LLMs. By combining specialized models such as DeepSeek-R1 and MCTool, a lightweight execution model fine-tuned for military function-calling tasks, the limitations of single-model capabilities are overcome. The framework also introduces a vector knowledge base to mitigate the hallucinations commonly exhibited by LLMs. Experimental results demonstrate that Command-Agent not only enables natural-language-driven simulation and control but also deeply understands commander intent. Leveraging the multi-model collaborative architecture, during red-blue UAV confrontations involving 2 to 8 UAVs, the integrated score improved by an average of 41.8% compared to the single-agent system (MCTool), accompanied by a 161.8% optimization of the battle-loss ratio. Furthermore, compared with multi-agent systems lacking the knowledge base, the inclusion of the vector knowledge base further improves overall performance by 16.8%. Compared with the general model (Qwen2.5-7B), the fine-tuned MCTool leads by 5% in execution efficiency. The proposed Command-Agent therefore introduces a novel perspective on military command systems and offers a feasible solution for intelligent battlefield decision-making.
Funding: Supported by the Deep Earth Probe and Mineral Resources Exploration National Science and Technology Major Project (2024ZD1000500), the National Natural Science Foundation of China (42172262 and 42372266), the China Geological Survey (DD20240041), and the Fundamental Research Funds of the Institute of Geomechanics (DZLXJK202516).
Abstract: The Yingxiu-Beichuan fault zone (YBFZ) has long been active and has experienced repeated large earthquakes. The physicochemical properties of the deep fault zone (>1000 m) are key to understanding the deformation mechanism of large earthquakes. This study uses rock magnetic, microstructural, and geochemical analyses of representative samples exposed in FZ1681 within the Wenchuan Earthquake Fault Scientific Drilling borehole 2 (WFSD-2) cores. Fault gouge and fault breccia have higher magnetic susceptibility values than the wall rocks, and they contain abundant paramagnetic minerals and small quantities of magnetite and monoclinic pyrrhotite. The magnetite and monoclinic pyrrhotite in the fault gouge were mainly formed by coseismic frictional heating, indicating that large earthquakes with frictional heating temperatures of ~500-900 °C once occurred in the YBFZ. The seismogenic and coseismic environment was reducing, with a relatively high sulfur content. The monoclinic pyrrhotite in the fault breccia was formed mainly by low-temperature hydrothermal fluid, indicating that the fault zone experienced reducing, low-temperature (<400 °C) hydrothermal fluid with a relatively high sulfur content after the earthquake. The YBFZ, which experiences frequent large earthquakes, is a weakly oxidizing environment at different depths, but the effect of the low-temperature hydrothermal fluid is weaker at depth.
Abstract: The giant impact hypothesis for the Moon's origin has had difficulty explaining the nearly identical isotopic compositions of Moon rocks and rocks from Earth's silicate mantle and crust. These similarities are instead more compatible with the Darwin-Wise hypothesis that the Moon arose by fission of a rapidly spinning Earth. To overcome problems with the fission model concerning structural stability and angular momentum conservation, some authors suggested that lunar fission was feasible on a more slowly rotating Earth if assisted by a nuclear explosion near the core-mantle boundary. In this light, we consider the possible roles of the large low-velocity provinces (LLVPs). These long-lived structures have been implicated in diverse geophysical processes ranging from deep mantle plumes to continental breakup and mass extinction events. While the LLVPs have been seen as possible remnants of the giant impactor, we propose that one of them was the site of lunar ejection. Internal heating of the liquid core is suggested to have given rise to an equatorial belt just under the core-mantle boundary, analogous to the one recently detected by Ma and Tkalcic [Sci Adv 10(35): eadn5562, 2024]. Upwellings of heat and volatiles from this belt then generated two antipodal, equatorial bulges: the precursors of the Pacific and African LLVPs. Prior to the emergence of plate tectonics, core heat was mainly dissipated by networks of deep mantle plumes extending above the proto-LLVPs. These plume networks represent conduits of weakened mantle through which proto-lunar materials could later rise in a focused ejection. Continuing heat buildup in the core eventually triggered a cataclysmic explosion in the Pacific proto-LLVP, possibly analogous to a planetary-scale kimberlite eruption. This explosion launched LLVP and overlying mantle material into a low Earth orbit, where it coalesced to form the Moon. Some possible sources of additional energy to power the explosion are considered, including nuclear fission, bolide impacts, and a hypothetical gravitational decay process culminating in an 'A event'.
Funding: Supported by the National Key R&D Program of China (2022YFF0902703) and the State Administration for Market Regulation Science and Technology Plan Project (2024MK033).
Abstract: Recommendation systems are key to boosting user engagement, satisfaction, and retention, particularly on media platforms where personalized content is vital. Sequential recommendation systems learn from user-item interactions to predict future items of interest. However, many current methods rely on unique user and item IDs, limiting their ability to represent users and items effectively, especially in zero-shot learning scenarios where training data is scarce. With the rapid development of Large Language Models (LLMs), researchers are exploring their potential to enhance recommendation systems. However, there is a semantic gap between the linguistic semantics of LLMs and the collaborative semantics of recommendation systems, where items are typically indexed by IDs. Moreover, most research focuses on item representations, neglecting personalized user modeling. To address these issues, we propose a sequential recommendation framework using LLMs, called CIT-Rec, a model that integrates Collaborative semantics for user representation and Image and Text information for item representation to enhance Recommendations. Specifically, by aligning intuitive image information with text containing semantic features, we can more accurately represent items, improving item representation quality. We focus not only on item representations but also on user representations. To more precisely capture users' personalized preferences, we use traditional sequential recommendation models to train on users' historical interaction data, effectively capturing behavioral patterns. Finally, by combining LLMs and traditional sequential recommendation models, we allow the LLM to understand linguistic semantics while capturing collaborative semantics. Extensive evaluations on real-world datasets show that our model outperforms baseline methods, effectively combining user interaction history with item visual and textual modalities to provide personalized recommendations.
Abstract: One of the main issues in designing optimum tapered cascades for uranium enrichment for annual fuel production in a power reactor is whether to employ large (fat) or small (thin) cascades. What are the permissible and optimal ranges of the number of machines that can be used in a cascade? For the first time, the permissible and optimal ranges of the number of gas centrifuges that can be utilized in a cascade were investigated using two types of centrifuges, and the performance of small and large tapered cascades was discussed. The particle swarm optimization (PSO) algorithm was used to optimize the tapered cascades. The results show: (1) For the first centrifuge, 41 cascades (91 ≤ n ≤ 4897), and for the second centrifuge, 49 cascades (18 ≤ n ≤ 3839), with small and large sizes, can be used in enrichment facilities, and the best cascades have 530 (with 23 stages) and 39 (with 7 stages) centrifuges, respectively. (2) For both centrifuges, when n ≥ 600 (where n is the number of centrifuges), the performance changes of large cascades are relatively insignificant. (3) For both types of gas centrifuges, the annual loss of separation power in enrichment facilities is approximately 1.25%-4.82% of the total separation work required.
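The PSO algorithm mentioned above can be sketched generically as follows. This is a minimal, standard implementation on a toy one-dimensional objective; the paper's cascade-specific cost function and parameter choices are not reproduced here:

```python
import random

def pso(objective, dim, n_particles=20, iters=80, bounds=(-5.0, 5.0), seed=2):
    """Minimal particle swarm optimization: each particle tracks its personal
    best; the swarm shares a global best that steers every velocity update."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)
    w, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest

# Hypothetical stand-in objective: a convex "cascade cost" minimized at 3.0.
best = pso(lambda x: (x[0] - 3.0) ** 2, dim=1)
print(best)
```

For the cascade problem, the decision vector would instead encode stage feed flows and cut values, with the objective built from the separation-work model; PSO needs only the ability to evaluate that objective, not its gradient.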