Large language models (LLMs) have undergone significant expansion and have been increasingly integrated across various domains. Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions. However, for embodied tasks, where robots interact with complex environments, text-only LLMs often face challenges due to a lack of compatibility with robotic visual perception. This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks. Additionally, we propose a framework that utilizes multimodal GPT-4V to enhance embodied task planning through the combination of natural language instructions and robot visual perceptions. Our results, based on diverse datasets, indicate that GPT-4V effectively enhances robot performance in embodied tasks. This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights towards bridging the gap in Human-Robot-Environment interaction.
Background: Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various techniques available, crop models are the most effective and widely used tools for predicting yields. Results: This study compares statistical and machine learning models to assess their ability to predict cotton yield across the major producing districts of Karnataka, India, using a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. The artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2) for cotton. The model evaluation metrics, such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF), were also within acceptable limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts where yield was predicted. These differences highlight the differential interactions of weather factors in cotton yield formation in each district, underscoring the individual response to each weather factor under different soil and management conditions across the major cotton-growing districts of Karnataka. Conclusions: Compared with statistical models, machine learning models such as ANNs proved more efficient in forecasting cotton yield because of their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting under rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
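The evaluation metrics named above are standard; as a minimal illustration (not the authors' code, which is not given), the following Python sketch computes RMSE, nRMSE, and modelling efficiency (EF) for predicted versus observed yields. The sample arrays are hypothetical.

```python
import numpy as np

def yield_metrics(observed, predicted):
    """Compute RMSE, nRMSE (% of observed mean), and modelling efficiency (EF)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    nrmse = 100.0 * rmse / observed.mean()           # normalized by the observed mean
    ss_res = np.sum((observed - predicted) ** 2)     # residual sum of squares
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    ef = 1.0 - ss_res / ss_tot                       # Nash-Sutcliffe-style efficiency
    return rmse, nrmse, ef

# Hypothetical district-level yields (kg/ha), for illustration only
obs = [420, 510, 395, 610, 480]
pred = [440, 495, 410, 590, 470]
print(yield_metrics(obs, pred))
```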
We propose an integrated method of data-driven and mechanism models for well logging formation evaluation, focusing explicitly on predicting reservoir parameters such as porosity and water saturation. Accurately interpreting these parameters is crucial for effectively exploring and developing oil and gas. However, with the increasing complexity of geological conditions in this industry, there is a growing demand for improved accuracy in reservoir parameter prediction, leading to higher costs associated with manual interpretation. Conventional logging interpretation methods rely on empirical relationships between logging data and reservoir parameters; they suffer from low interpretation efficiency and strong subjectivity, and are suited only to idealized conditions. The application of artificial intelligence to the interpretation of logging data provides a new solution to the problems of traditional methods and is expected to improve the accuracy and efficiency of interpretation. If large, high-quality datasets exist, data-driven models can reveal relationships of arbitrary complexity. Nevertheless, constructing sufficiently large logging datasets with reliable labels remains challenging, making it difficult to apply data-driven models effectively to logging data interpretation. Furthermore, data-driven models often act as “black boxes” that neither explain their predictions nor ensure compliance with basic physical constraints. This paper proposes a machine learning method with strong physical constraints that integrates mechanism and data-driven models. Prior knowledge of logging data interpretation is embedded into the machine learning pipeline through the network structure, loss function, and optimization algorithm. We employ a Physically Informed Auto-Encoder (PIAE) to predict porosity and water saturation, which can be trained without labeled reservoir parameters using self-supervised learning techniques. This approach achieves automated interpretation and facilitates generalization across diverse datasets.
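The PIAE architecture itself is not detailed in the abstract; as a generic sketch of the idea of embedding a physical constraint into the loss function, the PyTorch fragment below penalizes deviations from a hypothetical mechanism relation alongside the reconstruction error. All names, dimensions, and the constraint form are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PhysicsConstrainedAE(nn.Module):
    """Toy auto-encoder: logs -> (porosity, water saturation) -> reconstructed logs."""
    def __init__(self, n_logs=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_logs, 16), nn.ReLU(),
                                     nn.Linear(16, 2), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(2, 16), nn.ReLU(),
                                     nn.Linear(16, n_logs))

    def forward(self, logs):
        params = self.encoder(logs)   # [porosity, water saturation], both in (0, 1)
        return params, self.decoder(params)

def physics_residual(params, logs):
    # Hypothetical mechanism constraint, e.g. a density-porosity mixing relation;
    # a real model would embed Archie's equation or a similar petrophysical law.
    porosity = params[:, 0]
    predicted_density_log = 2.65 * (1 - porosity) + 1.0 * porosity
    return (predicted_density_log - logs[:, 0]) ** 2

model = PhysicsConstrainedAE()
logs = torch.rand(32, 5)              # synthetic batch of normalized log curves
params, recon = model(logs)
loss = nn.functional.mse_loss(recon, logs) + 0.1 * physics_residual(params, logs).mean()
loss.backward()                       # self-supervised: no labeled porosity required
```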
With the emergence of general foundational models, such as the Chat Generative Pre-trained Transformer (ChatGPT), researchers have shown considerable interest in the potential applications of foundation models in the process industry. This paper provides a comprehensive overview of the challenges and opportunities presented by the use of foundation models in the process industry, including the frameworks, core applications, and future prospects. First, this paper proposes a framework for foundation models for the process industry. Second, it summarizes the key capabilities of industrial foundation models and their practical applications. Finally, it highlights future research directions and identifies unresolved open issues related to the use of foundation models in the process industry.
This paper presents a high-fidelity lumped-parameter (LP) thermal model (HF-LPTM) for permanent magnet synchronous machines (PMSMs) in electric vehicle (EV) applications, considering various cooling techniques, including frame forced-air/liquid cooling, oil-jet cooling for the end-winding, and rotor shaft cooling. To address the temperature misestimation in LP thermal modelling caused by the assumptions of concentrated loss input and uniform heat flow, the developed HF-LPTM introduces two compensation thermal resistances for the winding and PM components, which are analytically derived from the multi-dimensional heat transfer equations and are robust against different load and thermal conditions. As validated by the finite element analysis method and by experiments, conventional LPTMs exhibit significant winding temperature deviations, whereas the proposed HF-LPTM accurately predicts both the midpoint and average temperatures. The developed HF-LPTM is further used to assess the effectiveness of various cooling techniques under different scenarios, i.e., steady-state thermal states under the rated load condition and transient temperature profiles under city, freeway, and hybrid (city + freeway) driving cycles. The results indicate that no single cooling technique can maintain both winding and PM temperatures within safety limits. The combination of frame liquid cooling and oil-jet cooling for the end-winding can sufficiently mitigate PMSM thermal stress in EV applications.
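The exact resistance network of the HF-LPTM is not given in the abstract; purely as a minimal illustration of lumped-parameter thermal modelling in general, the sketch below integrates a two-node RC network (winding and frame) with explicit Euler time stepping. All parameter values are hypothetical.

```python
import numpy as np

# Two-node lumped-parameter thermal network: winding -> frame -> coolant.
# Hypothetical values, for illustration only.
C_w, C_f = 450.0, 1200.0      # thermal capacitances, J/K
R_wf, R_fc = 0.08, 0.05       # thermal resistances, K/W
P_loss = 900.0                # copper loss injected at the winding node, W
T_cool = 40.0                 # coolant temperature, degC

dt, t_end = 0.5, 1800.0       # time step and horizon, s
T_w, T_f = T_cool, T_cool     # start at coolant temperature
for _ in range(int(t_end / dt)):
    q_wf = (T_w - T_f) / R_wf             # heat flow winding -> frame
    q_fc = (T_f - T_cool) / R_fc          # heat flow frame -> coolant
    T_w += dt * (P_loss - q_wf) / C_w     # energy balance at each node
    T_f += dt * (q_wf - q_fc) / C_f

print(f"winding temp: {T_w:.1f} degC, frame temp: {T_f:.1f} degC")
```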
Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and we analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model performed significantly better than the empirical model, as indicated by the high correlation coefficient between its predictions and the RO observations (r = 0.87), compared with that of the empirical model (r = 0.53). This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally, and into predicting Es layer occurrences and characteristics in particular.
The three-dimensional (3D) geometry of a fault is a critical control on earthquake nucleation, dynamic rupture, stress triggering, and related seismic hazards. Therefore, a 3D model of an active fault can significantly improve our understanding of seismogenesis and our ability to evaluate seismic hazards. Utilising the SKUA GoCAD software, we constructed detailed seismic fault models for the 2021 M_(S)6.4 Yangbi earthquake in Yunnan, China, using two sets of relocated earthquake catalogs and focal mechanism solutions, following a convenient 3D fault modeling workflow. Our analysis revealed a NW-striking main fault with a high-angle SW dip, accompanied by two branch faults. Interpretation of one dataset revealed a single NNW-striking branch fault SW of the main fault, whereas the other dataset indicated four steep NNE-striking segments with a left-echelon pattern. Additionally, a third ENE-striking short fault was identified NE of the main fault. In combination with the spatial distribution of pre-existing faults, our 3D fault models indicate that the Yangbi earthquake reactivated pre-existing NW- and NE-striking fault directions rather than the surface-exposed Weixi-Qiaohou-Weishan Fault zone. The occurrence of the Yangbi earthquake demonstrates that the reactivation of pre-existing faults away from active fault zones, through either cascade or conjugate rupture modes, can cause unexpected moderate-to-large earthquakes and severe disasters, which warrants attention in regions with complex fault systems, such as southeast Xizang.
Fundamental physics often confronts complex symbolic problems with few guiding exemplars or established principles. While artificial intelligence (AI) offers promise, its typical need for vast training datasets hinders its use in these information-scarce frontiers. We introduce learning at criticality (LaC), a reinforcement learning scheme that tunes large language models (LLMs) to a sharp learning transition, addressing this information scarcity. At this transition, LLMs achieve peak generalization from minimal data, exemplified by 7-digit base-7 addition, a test of nontrivial arithmetic reasoning. To elucidate this peak, we analyze a minimal concept-network model designed to capture the essence of how LLMs might link tokens. Trained on a single exemplar, this model also undergoes a sharp learning transition. The transition exhibits hallmarks of a second-order phase transition, notably power-law distributed solution path lengths. At this critical point, the system maximizes a “critical thinking pattern” crucial for generalization, enabled by the underlying scale-free exploration. This suggests that LLMs reach peak performance by operating at criticality, where such explorative dynamics enable the extraction of underlying operational rules. We demonstrate LaC in quantum field theory: an 8B-parameter LLM, tuned to its critical point by LaC using a few exemplars of symbolic Matsubara sums, solves unseen, higher-order problems, significantly outperforming far larger models. LaC thus leverages critical phenomena, a physical principle, to empower AI for complex, data-sparse challenges in fundamental physics.
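As a concrete illustration of the benchmark task mentioned above (not of the LaC method itself), the following Python sketch generates and checks 7-digit base-7 addition problems of the kind used to probe arithmetic generalization.

```python
import random

def to_base7(n):
    """Render a non-negative integer as a base-7 digit string."""
    digits = []
    while True:
        n, r = divmod(n, 7)
        digits.append(str(r))
        if n == 0:
            return "".join(reversed(digits))

# A 7-digit base-7 number lies in [7**6, 7**7 - 1].
a = random.randrange(7**6, 7**7)
b = random.randrange(7**6, 7**7)
prompt = f"{to_base7(a)} + {to_base7(b)} ="   # e.g. "3456012 + 1203345 ="
answer = to_base7(a + b)                       # ground truth for scoring a model's output
print(prompt, answer)
```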
Accurate Global Horizontal Irradiance (GHI) forecasting has become vital for successfully integrating solar energy into the electrical grid, given the expanding demand for green power and the worldwide shift towards green energy resources. Particularly in light of aggressive GHG emission targets, accurate GHI forecasting is essential for developing, designing, and operationally managing solar energy systems. This research presents the core concepts of modelling and a performance analysis of various forecasting models, namely ARIMA (Autoregressive Integrated Moving Average), Elman NN (Elman Neural Network), RBFN (Radial Basis Function Neural Network), SVM (Support Vector Machine), LSTM (Long Short-Term Memory), Persistence, BPN (Back Propagation Neural Network), MLP (Multilayer Perceptron Neural Network), RF (Random Forest), and XGBoost (eXtreme Gradient Boosting), for multi-seasonal forecasting of GHI. Data from the India region were used to evaluate the models' performance and forecasting ability across five seasons: winter, spring, summer, monsoon, and autumn. Performance was quantified with the evaluation metrics Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and R-squared (R^(2)), implemented in Python. The experimental analysis showed that Random Forest and eXtreme Gradient Boosting produced the most accurate forecasts in all seasons and were the superior, competing models. For winter, XGBoost was the best forecasting model (MAE: 1.6325, RMSE: 4.8338, R^(2): 0.9998); for spring, XGBoost was again best (MAE: 2.599599, RMSE: 5.58539, R^(2): 0.999784); for summer, RF was best (MAE: 1.03843, RMSE: 2.116325, R^(2): 0.999967); for the monsoon, RF was best (MAE: 0.892385, RMSE: 2.417587, R^(2): 0.999942); and for autumn, RF was best (MAE: 0.810462, RMSE: 1.928215, R^(2): 0.999958). Given seasonal variations and computing constraints, these findings enable energy system operators to make informed choices about the most effective forecasting models.
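As a minimal sketch of the per-season evaluation loop described above (the paper's own code is not shown, and the data here are synthetic), the following Python fragment fits a Random Forest per season and reports MAE, RMSE, and R^(2) with scikit-learn.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
for season in ["winter", "spring", "summer", "monsoon", "autumn"]:
    # Synthetic stand-in for seasonal weather features and GHI targets.
    X = rng.random((500, 6))                 # e.g. temperature, humidity, cloud cover...
    y = 800 * X[:, 0] + 100 * X[:, 1] + rng.normal(0, 10, 500)  # hypothetical GHI, W/m^2
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    mae = mean_absolute_error(y_te, pred)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{season}: MAE={mae:.3f}, RMSE={rmse:.3f}, R2={r2_score(y_te, pred):.4f}")
```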
Accurately identifying building distribution in remote sensing images with complex background information is challenging. The emergence of diffusion models has prompted the innovative idea of employing the reverse denoising process to distill building distribution from these complex backgrounds. Building on this concept, we propose a novel framework, the building extraction diffusion model (BEDiff), which meticulously refines the extraction of building footprints from remote sensing images in a stepwise fashion. Our approach begins with the design of booster guidance, a mechanism that extracts structural and semantic features from remote sensing images to serve as priors, thereby providing targeted guidance for the diffusion process. Additionally, we introduce a cross-feature fusion module (CFM) that bridges the semantic gap between different types of features, facilitating the more effective integration of the attributes extracted by booster guidance into the diffusion process. Our proposed BEDiff marks the first application of diffusion models to the task of building extraction. Empirical evidence from extensive experiments on the Beijing building dataset demonstrates the superior performance of BEDiff, affirming its effectiveness and potential for enhancing the accuracy of building extraction in complex urban landscapes.
The concrete material model plays an important role in numerical predictions of the dynamic response of concrete subjected to projectile impact and charge explosion. Current concrete material models can be divided into two kinds: the hydro-elastoplastic-damage model with an independent equation of state, and the cap-elastoplastic-damage model with a continuous cap surface. The essential differences between the two kinds of models are vital for researchers choosing an appropriate concrete material model for their problems, yet existing studies reach contradictory conclusions. To resolve this issue, the constitutive theories of the two kinds of models are first reviewed. Then, the constitutive theories of the two kinds of models are comprehensively compared, and the main similarities and differences are clarified and demonstrated by single-element numerical examples. Finally, numerical predictions for projectile penetration and charge explosion experiments on concrete targets are compared to further support the conclusions drawn from the constitutive comparison. It is found that both kinds of models can be used to simulate the dynamic responses of concrete under projectile impact and blast loadings if the parameters needed in the material models are well calibrated, although some discrepancies between them may exist.
Sentiment analysis, a cornerstone of natural language processing, has witnessed remarkable advancements driven by deep learning models, which have demonstrated impressive accuracy in discerning sentiment from text across various domains. However, the deployment of such models in resource-constrained environments presents a unique set of challenges that require innovative solutions. Resource-constrained environments encompass scenarios where computing resources, memory, and energy availability are restricted. To empower sentiment analysis in resource-constrained environments, we address this crucial need by leveraging lightweight pre-trained models. These models, derived from popular architectures such as DistilBERT, MobileBERT, ALBERT, TinyBERT, ELECTRA, and SqueezeBERT, offer a promising solution to the resource limitations imposed by these environments. By distilling the knowledge from larger models into smaller ones and employing various optimization techniques, these lightweight models aim to strike a balance between performance and resource efficiency. This paper explores the performance of multiple lightweight pre-trained models on sentiment analysis tasks specific to such environments and provides insights into their viability for practical deployment.
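As a minimal illustration of running one of the lightweight models named above (the paper's own evaluation setup is not given), the following sketch uses the Hugging Face transformers pipeline with a DistilBERT sentiment checkpoint; the model choice and inputs are illustrative.

```python
from transformers import pipeline

# DistilBERT fine-tuned on SST-2: a common lightweight sentiment checkpoint.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

samples = [
    "The battery life on this device is fantastic.",
    "Setup was confusing and the app kept crashing.",
]
for text, result in zip(samples, classifier(samples)):
    print(f"{result['label']:>8} ({result['score']:.3f})  {text}")
```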
BACKGROUND: Colorectal polyps (CPs) are important precursor lesions of colorectal cancer, and endoscopic surgery remains the primary treatment option. However, the short-term recurrence rate after surgery is high, and the risk factors for recurrence remain unclear. AIM: To comprehensively explore the risk factors for short-term recurrence of CPs after endoscopic surgery and to develop a nomogram prediction model. METHODS: Overall, 362 patients who underwent endoscopic polypectomy between January 2022 and January 2024 at Nanjing Jiangbei Hospital were included. We screened basic demographic data, clinical and polyp characteristics, and surgery-related information for independent risk factors for CP recurrence using univariate and multivariate logistic regression analyses. The multivariate analysis results were used to construct a nomogram prediction model, internally validated using bootstrapping, with performance evaluated using the area under the curve (AUC), calibration curves, and decision curve analysis. RESULTS: CPs recurred in 166 (45.86%) of the 362 patients within 1 year after surgery. Multivariate logistic regression analysis showed that age (OR = 1.04, P = 0.002), alcohol consumption (OR = 2.07, P = 0.012), Helicobacter pylori infection (OR = 2.34, P < 0.001), polyp number > 2 (OR = 1.98, P = 0.005), sessile polyps (OR = 2.10, P = 0.006), and adenomatous pathological type (OR = 3.02, P < 0.001) were independent risk factors for post-surgery recurrence. The nomogram prediction model showed good discriminative ability (AUC = 0.73) and calibration, and decision curve analysis showed that the model offered good clinical benefit at risk probabilities > 20%. CONCLUSION: We identified multiple independent risk factors for short-term recurrence after endoscopic surgery. The nomogram prediction model showed a reasonable degree of discrimination, calibration, and potential clinical applicability.
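As a minimal sketch of the modelling step described above (the cohort itself is not available here, so the data are synthetic and the coefficients hypothetical), the fragment below fits a multivariable logistic regression and reports odds ratios and AUC with scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 362
# Synthetic stand-ins for the predictors named in the abstract.
X = np.column_stack([
    rng.normal(60, 12, n),        # age, years
    rng.integers(0, 2, n),        # alcohol consumption (0/1)
    rng.integers(0, 2, n),        # H. pylori infection (0/1)
    rng.integers(0, 2, n),        # polyp number > 2 (0/1)
    rng.integers(0, 2, n),        # sessile morphology (0/1)
    rng.integers(0, 2, n),        # adenomatous type (0/1)
])
logit = 0.04 * (X[:, 0] - 60) + 0.7 * X[:, 2] + 1.1 * X[:, 5] - 0.5  # hypothetical truth
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratios = np.exp(model.coef_[0])          # per-unit OR for each predictor
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("ORs:", np.round(odds_ratios, 2), "AUC:", round(auc, 2))
```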
Photodynamic therapy (PDT) is an emerging minimally invasive therapeutic modality that relies on the activation of a photosensitizing agent by light of a specific wavelength in the presence of molecular oxygen, leading to the generation of reactive oxygen species (ROS). This mechanism produces selective cytotoxic effects within pathological tissues and has demonstrated therapeutic potential across diverse disease contexts. However, its broader clinical application remains limited by photosensitizer selectivity, shallow light penetration, and the risk of off-target cytotoxicity. Recent advances in PDT have focused on the development of next-generation photosensitizers, the integration of nanotechnology for enhanced delivery and targeting, and the strategic combination of PDT with complementary therapeutic approaches. Experimental animal models play a crucial role in validating the efficacy and safety of PDT, optimizing its therapeutic parameters, and determining its mechanisms of action. This review provides a comprehensive overview of PDT applications in various disease models, including oncological, infectious, and nonconventional indications. Special emphasis is placed on the importance of large animal models in PDT research, such as rabbits, pigs, dogs, and non-human primates, which provide experimental platforms that more closely resemble human physiological and pathological states. The use of these models for understanding the mechanisms of PDT, optimizing therapeutic regimens, and evaluating clinical outcomes is also discussed. This review aims to inform future directions in PDT research and emphasizes the importance of selecting appropriate preclinical animal models to facilitate successful clinical translation.
This study compared the predictive performance and processing speed of an artificial neural network (ANN) and a hybrid numerical reservoir simulation and artificial neural network (NRS-ANN) model in estimating the oil production rate of the ZH86 reservoir block under waterflood recovery. The ANN models used the following historical input variables: reservoir pressure, reservoir pore volume containing hydrocarbons, reservoir pore volume containing water, and reservoir water injection rate. To create the NRS-ANN hybrid models, 314 data sets extracted from the NRS model, comprising the same four variables, were used. The output of the models was the historical oil production rate (HOPR, in m^(3) per day) recorded from the ZH86 reservoir block. Models were developed in MATLAB R2021a, with 25 models trained under three replicate conditions (2, 4, and 6), each for 1000 epochs. A comparative analysis indicated that, across all 25 models, the ANN outperformed the NRS-ANN in both processing speed and prediction performance. The ANN models achieved average R^(2) and MAE values of 0.8433 and 8.0964 m^(3)/day, respectively, while the NRS-ANN hybrid models achieved averages of 0.7828 and 8.2484 m^(3)/day. In addition, the ANN models achieved processing speeds of 49, 32, and 24 epochs/sec after 2, 4, and 6 replicates, respectively, whereas the NRS-ANN hybrid models achieved lower average speeds of 45, 23, and 20 epochs/sec. The optimal ANN model also outperformed the optimal NRS-ANN model in both speed and accuracy: it reached 336.44 epochs/sec, compared with 52.16 epochs/sec for the NRS-ANN hybrid optimal model, and achieved lower RMSE and MAE values of 7.9291 m^(3)/day and 5.3855 m^(3)/day on the validation dataset, compared with 13.6821 m^(3)/day and 9.2047 m^(3)/day for the NRS-ANN hybrid optimal model. The optimal ANN model also consistently achieved higher R^(2) values of 0.9472, 0.9284, and 0.9316 on the training, test, and validation datasets, whereas the optimal NRS-ANN hybrid yielded lower R^(2) values of 0.8030, 0.8622, and 0.7776. The study showed that ANN models are a more effective and reliable tool, balancing processing speed and accuracy in estimating the oil production rate of the ZH86 reservoir block under the waterflooding recovery method.
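The abstract's models were built in MATLAB; purely as an illustrative stand-in (synthetic data, hypothetical coefficients, not the authors' setup), the following Python sketch trains a small feed-forward network on four input variables to regress a production rate and reports R^(2) and MAE.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 314  # matching the number of data sets mentioned in the abstract
# Synthetic stand-ins: pressure, HC pore volume, water pore volume, injection rate.
X = rng.random((n, 4))
y = 120 * X[:, 0] + 60 * X[:, 1] - 40 * X[:, 2] + 30 * X[:, 3] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
ann.fit(X_tr, y_tr)
pred = ann.predict(X_te)
print(f"R2={r2_score(y_te, pred):.4f}, MAE={mean_absolute_error(y_te, pred):.4f} m^3/day")
```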
Workpiece rotational grinding is widely used in the ultra-precision machining of hard and brittle semiconductor materials, including single-crystal silicon, silicon carbide, and gallium arsenide. Surface roughness and subsurface damage depth (SDD) are crucial indicators for evaluating the surface quality of these materials after grinding. Existing prediction models lack general applicability and do not accurately account for the complex material behavior under grinding conditions. This paper introduces novel models for predicting both surface roughness and SDD in hard and brittle semiconductor materials. The surface roughness model uniquely incorporates the material's elastic recovery properties, revealing the significant impact of these properties on prediction accuracy. The SDD model is distinguished by its analysis of the interactions between abrasive grits and the workpiece, as well as the mechanisms governing stress-induced damage evolution. Both the surface roughness model and the SDD model establish a stable relationship with the grit depth of cut (GDC). Additionally, we have developed an analytical relationship between the GDC and the grinding process parameters. This, in turn, enables an analytical framework for predicting surface roughness and SDD from grinding process parameters, which previous models could not achieve. The models were validated through systematic experiments on three different semiconductor materials, demonstrating excellent agreement with experimental data, with prediction errors of 6.3% for surface roughness and 6.9% for SDD. Additionally, this study identifies variations in elastic recovery and material plasticity as critical factors influencing surface roughness and SDD across different materials. These findings significantly advance the accuracy of predictive models and broaden their applicability for grinding hard and brittle semiconductor materials.
In this study, we used an extensive sampling network established in central Romania to develop tree height and crown length models. Our analysis included more than 18,000 tree measurements from five different species. Instead of building univariate models for each response variable, we employed a multivariate approach using seemingly unrelated mixed-effects models. These models incorporated variables related to species mixture, tree and stand size, competition, and stand structure. With the inclusion of additional variables in the multivariate seemingly unrelated mixed-effects models, the accuracy of the height prediction models improved by over 10% for all species, whereas the improvement in the crown length models was considerably smaller. Our findings indicate that trees in mixed stands tend to have shorter heights but longer crowns than those in pure stands. We also observed that trees in homogeneous stand structures have shorter crown lengths than those in heterogeneous stands. By employing a multivariate mixed-effects modelling framework, we were able to perform cross-model random-effect predictions, leading to a significant increase in accuracy when both responses were used to calibrate the model. In contrast, the improvement in accuracy was marginal when only height was used for calibration. We demonstrate how multivariate mixed-effects models can be effectively used to develop multi-response allometric models that can be easily calibrated with a limited number of observations while simultaneously achieving better-aligned projections.
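The abstract does not give the model equations; as a hedged sketch of the general form of a bivariate seemingly unrelated mixed-effects model for height h and crown length c of tree j in stand i, one could write:

```latex
\begin{aligned}
h_{ij} &= f(\mathbf{x}_{ij};\,\boldsymbol{\beta}_h) + b_{h,i} + \varepsilon_{h,ij},\\
c_{ij} &= g(\mathbf{x}_{ij};\,\boldsymbol{\beta}_c) + b_{c,i} + \varepsilon_{c,ij},\\
(b_{h,i},\, b_{c,i})^{\top} &\sim \mathcal{N}(\mathbf{0},\, \boldsymbol{\Sigma}_b).
\end{aligned}
```

Here all symbols are illustrative: the cross-model covariance in Σ_b is what allows an observed height in a new stand to sharpen the crown length prediction, i.e., the cross-model random-effect calibration mentioned above.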
This paper provides a comparative sociological analysis of the application models for industrial robots in the automotive and electronics industries. The integration of robots in these two key sectors has been a significant milestone in the evolution of modern manufacturing, contributing to major shifts in production processes, labor markets, and organizational structures. Through a comprehensive review of literature and case studies, the paper identifies and contrasts the driving factors for robot adoption, the impact of automation on the workforce, and the sociocultural factors influencing these transitions. The automotive industry, characterized by high-volume production and cost-efficiency, and the electronics industry, known for precision and fast-paced production, present unique challenges and opportunities in robot integration. By examining these differences, the paper aims to offer insights into the broader social and economic implications of industrial robot deployment and its effect on industry dynamics and labor relations. The findings highlight not only the technological benefits but also the social challenges associated with automation in these industries.
Accurate prediction of nurse demand plays a crucial role in efficiently planning the healthcare workforce, ensuring appropriate staffing levels, and providing high-quality care to patients. The intricacy and variety of contemporary healthcare systems and a growing patient population call for advanced forecasting models. Factors such as technological advancements, novel treatment protocols, and the increasing prevalence of chronic illnesses have diminished the efficacy of traditional estimation approaches. Novel forecasting methodologies, including time-series analysis, machine learning, and simulation-based techniques, have been developed to tackle these challenges. Time-series analysis recognizes patterns in past data, whereas machine learning uses extensive datasets to uncover concealed trends. Simulation models are employed to assess diverse scenarios, assisting in proactive staffing adjustments. These techniques offer distinct advantages, such as the identification of seasonal patterns, the management of large datasets, and the ability to test various assumptions. By integrating these sophisticated models into workforce planning, organizations can optimize staffing, reduce financial waste, and elevate the standard of patient care. As the healthcare field progresses, the utilization of these predictive models will be pivotal for fostering adaptable and resilient workforce management.
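As a minimal, hypothetical sketch of the time-series idea mentioned above (no real staffing data are used), the following Python fragment compares a gradient-boosted model on lagged features against a seasonal-naive baseline for weekly nurse demand.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
weeks = np.arange(260)                                  # five hypothetical years, weekly
demand = 100 + 10 * np.sin(2 * np.pi * weeks / 52) + 0.05 * weeks + rng.normal(0, 3, 260)

# Lagged features: demand 1, 2, and 52 weeks ago (captures trend and yearly seasonality).
lags = [1, 2, 52]
X = np.column_stack([demand[52 - l:260 - l] for l in lags])
y = demand[52:]

split = len(y) - 26                                     # hold out the last half-year
model = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
naive = demand[split:split + 26]                        # seasonal-naive: same week last year
print(f"model MAE={mean_absolute_error(y[split:], pred):.2f}, "
      f"seasonal-naive MAE={mean_absolute_error(y[split:], naive):.2f}")
```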
Predictive maintenance often involves imbalanced multivariate time series datasets with scarce failure events, posing challenges for model training due to the high dimensionality of the data and the need for domain-specific preprocessing, which frequently leads to the development of large and complex models. Inspired by the success of Large Language Models (LLMs), transformer-based time series foundation models (TSFMs) have been developed. These models have been shown to reconstruct time series in a zero-shot manner and can capture the different patterns that effectively characterize time series. This paper proposes using TSFMs to generate embeddings of the input data space, making the data more interpretable for machine learning models. To evaluate the effectiveness of our approach, we trained three classical machine learning algorithms and one neural network on the embeddings generated by the TSFM called Moment to predict the remaining useful life of aircraft engines. We tested models trained with both the full training dataset and only 10% of the training samples. Our results show that training simple models, such as support vector regressors or neural networks, on embeddings generated by Moment not only accelerates the training process but also enhances performance in few-shot learning scenarios where data is scarce. This suggests a promising alternative to complex deep learning architectures, particularly in industrial contexts with limited labeled data.
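As a minimal sketch of the embed-then-regress pipeline (the exact Moment API call is not reproduced here; `embed_with_tsfm` is a hypothetical stand-in for whatever embedding interface the foundation model exposes, and the engine data are synthetic), the fragment below trains a support vector regressor on embeddings to predict remaining useful life, including the 10% few-shot variant mentioned above.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

def embed_with_tsfm(windows):
    """Hypothetical stand-in for a TSFM embedding call (e.g. Moment).
    Here we simply average over time; a real TSFM returns learned embeddings."""
    return windows.mean(axis=1)

rng = np.random.default_rng(4)
# Synthetic engine data: 600 windows x 128 time steps x 14 sensor channels.
windows = rng.random((600, 128, 14))
rul = rng.uniform(0, 300, 600)                 # remaining useful life, cycles

emb = embed_with_tsfm(windows)                 # shape (600, 14): one vector per window
X_tr, X_te, y_tr, y_te = train_test_split(emb, rul, test_size=0.2, random_state=0)

# Few-shot variant: keep only 10% of the training windows, as in the abstract.
few = rng.choice(len(X_tr), size=len(X_tr) // 10, replace=False)
svr = SVR(C=10.0).fit(X_tr[few], y_tr[few])
print(f"few-shot MAE: {mean_absolute_error(y_te, svr.predict(X_te)):.1f} cycles")
```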
基金supported by National Natural Science Foundation of China(62376219 and 62006194)Foundational Research Project in Specialized Discipline(Grant No.G2024WD0146)Faculty Construction Project(Grant No.24GH0201148).
文摘Large language models(LLMs)have undergone significant expansion and have been increasingly integrated across various domains.Notably,in the realm of robot task planning,LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions.However,for embodied tasks,where robots interact with complex environments,textonly LLMs often face challenges due to a lack of compatibility with robotic visual perception.This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks.Additionally,we propose a framework that utilizes multimodal GPT-4V to enhance embodied task planning through the combination of natural language instructions and robot visual perceptions.Our results,based on diverse datasets,indicate that GPT-4V effectively enhances robot performance in embodied tasks.This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights towards bridging the gap in Human-Robot-Environment interaction.
基金funded through India Meteorological Department,New Delhi,India under the Forecasting Agricultural output using Space,Agrometeorol ogy and Land based observations(FASAL)project and fund number:No.ASC/FASAL/KT-11/01/HQ-2010.
文摘Background Cotton is one of the most important commercial crops after food crops,especially in countries like India,where it’s grown extensively under rainfed conditions.Because of its usage in multiple industries,such as textile,medicine,and automobile industries,it has greater commercial importance.The crop’s performance is greatly influenced by prevailing weather dynamics.As climate changes,assessing how weather changes affect crop performance is essential.Among various techniques that are available,crop models are the most effective and widely used tools for predicting yields.Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across major producing districts of Karnataka,India,utilizing a long-term dataset spanning from 1990 to 2023 that includes yield and weather factors.The artificial neural networks(ANNs)performed superiorly with acceptable yield deviations ranging within±10%during both vegetative stage(F1)and mid stage(F2)for cotton.The model evaluation metrics such as root mean square error(RMSE),normalized root mean square error(nRMSE),and modelling efficiency(EF)were also within the acceptance limits in most districts.Furthermore,the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district.Specifically,the use of morning relative humidity as an individual parameter and its interaction with maximum and minimum tempera-ture had a major influence on cotton yield in most of the yield predicted districts.These differences highlighted the differential interactions of weather factors in each district for cotton yield formation,highlighting individual response of each weather factor under different soils and management conditions over the major cotton growing districts of Karnataka.Conclusions Compared with statistical models,machine learning models such as ANNs proved higher efficiency in forecasting the cotton yield due to their ability to consider the interactive effects of weather factors on yield forma-tion at different growth stages.This highlights the best suitability of ANNs for yield forecasting in rainfed conditions and for the study on relative impacts of weather factors on yield.Thus,the study aims to provide valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
基金supported by National Key Research and Development Program (2019YFA0708301)National Natural Science Foundation of China (51974337)+2 种基金the Strategic Cooperation Projects of CNPC and CUPB (ZLZX2020-03)Science and Technology Innovation Fund of CNPC (2021DQ02-0403)Open Fund of Petroleum Exploration and Development Research Institute of CNPC (2022-KFKT-09)
文摘We propose an integrated method of data-driven and mechanism models for well logging formation evaluation,explicitly focusing on predicting reservoir parameters,such as porosity and water saturation.Accurately interpreting these parameters is crucial for effectively exploring and developing oil and gas.However,with the increasing complexity of geological conditions in this industry,there is a growing demand for improved accuracy in reservoir parameter prediction,leading to higher costs associated with manual interpretation.The conventional logging interpretation methods rely on empirical relationships between logging data and reservoir parameters,which suffer from low interpretation efficiency,intense subjectivity,and suitability for ideal conditions.The application of artificial intelligence in the interpretation of logging data provides a new solution to the problems existing in traditional methods.It is expected to improve the accuracy and efficiency of the interpretation.If large and high-quality datasets exist,data-driven models can reveal relationships of arbitrary complexity.Nevertheless,constructing sufficiently large logging datasets with reliable labels remains challenging,making it difficult to apply data-driven models effectively in logging data interpretation.Furthermore,data-driven models often act as“black boxes”without explaining their predictions or ensuring compliance with primary physical constraints.This paper proposes a machine learning method with strong physical constraints by integrating mechanism and data-driven models.Prior knowledge of logging data interpretation is embedded into machine learning regarding network structure,loss function,and optimization algorithm.We employ the Physically Informed Auto-Encoder(PIAE)to predict porosity and water saturation,which can be trained without labeled reservoir parameters using self-supervised learning techniques.This approach effectively achieves automated interpretation and facilitates generalization across diverse datasets.
基金supported by the National Natural Science Foundation of China(62225302,623B2014,and 62173023).
文摘With the emergence of general foundational models,such as Chat Generative Pre-trained Transformer(ChatGPT),researchers have shown considerable interest in the potential applications of foundation models in the process industry.This paper provides a comprehensive overview of the challenges and opportunities presented by the use of foundation models in the process industry,including the frameworks,core applications,and future prospects.First,this paper proposes a framework for foundation models for the process industry.Second,it summarizes the key capabilities of industrial foundation models and their practical applications.Finally,it highlights future research directions and identifies unresolved open issues related to the use of foundation models in the process industry.
文摘This paper presents a high-fidelity lumpedparameter(LP)thermal model(HF-LPTM)for permanent magnet synchronous machines(PMSMs)in electric vehicle(EV)applications,where various cooling techniques are considered,including frame forced air/liquid cooling,oil jet cooling for endwinding,and rotor shaft cooling.To address the temperature misestimation in the LP thermal modelling due to assumptions of concentrated loss input and uniform heat flows,the developed HF-LPTM introduces two compensation thermal resistances for the winding and PM components,which are analytically derived from the multi-dimensional heat transfer equations and are robust against different load/thermal conditions.As validated by the finite element analysis method and experiments,the conventional LPTMs exhibit significant winding temperature deviations,while the proposed HF-LPTM can accurately predict both the midpoint and average temperatures.The developed HFLPTM is further used to assess the effectiveness of various cooling techniques under different scenarios,i.e.,steady-state thermal states under the rated load condition,and transient temperature profiles under city,freeway,and hybrid(city+freeway)driving cycles.Results indicate that no single cooling technique can maintain both winding and PM temperatures within safety limits.The combination of frame liquid cooling and oil jet cooling for end winding can sufficiently mitigate PMSM thermal stress in EV applications.
基金supported by the Project of Stable Support for Youth Team in Basic Research Field,CAS(grant No.YSBR-018)the National Natural Science Foundation of China(grant Nos.42188101,42130204)+4 种基金the B-type Strategic Priority Program of CAS(grant no.XDB41000000)the National Natural Science Foundation of China(NSFC)Distinguished Overseas Young Talents Program,Innovation Program for Quantum Science and Technology(2021ZD0300301)the Open Research Project of Large Research Infrastructures of CAS-“Study on the interaction between low/mid-latitude atmosphere and ionosphere based on the Chinese Meridian Project”.The project was supported also by the National Key Laboratory of Deep Space Exploration(Grant No.NKLDSE2023A002)the Open Fund of Anhui Provincial Key Laboratory of Intelligent Underground Detection(Grant No.APKLIUD23KF01)the China National Space Administration(CNSA)pre-research Project on Civil Aerospace Technologies No.D010305,D010301.
文摘Sporadic E(Es)layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km.Because they can significantly influence radio communications and navigation systems,accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems.In this study,we present Es predictions made by an empirical model and by a deep learning model,and analyze their differences comprehensively by comparing the model predictions to satellite RO measurements and ground-based ionosonde observations.The deep learning model exhibited significantly better performance,as indicated by its high coefficient of correlation(r=0.87)with RO observations and predictions,than did the empirical model(r=0.53).This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally,and into predicting Es layer occurrences and characteristics,in particular.
基金financial support from the National Key R&D Program of China (No. 2021YFC3000600)National Natural Science Foundation of China (No. 41872206)National Nonprofit Fundamental Research Grant of China, Institute of Geology, China, Earthquake Administration (No. IGCEA2010)
文摘The three-dimensional(3D)geometry of a fault is a critical control on earthquake nucleation,dynamic rupture,stress triggering,and related seismic hazards.Therefore,a 3D model of an active fault can significantly improve our understanding of seismogenesis and our ability to evaluate seismic hazards.Utilising the SKUA GoCAD software,we constructed detailed seismic fault models for the 2021 M_(S)6.4 Yangbi earthquake in Yunnan,China,using two sets of relocated earthquake catalogs and focal mechanism solutions following a convenient 3D fault modeling workflow.Our analysis revealed a NW-striking main fault with a high-angle SW dip,accompanied by two branch faults.Interpretation of one dataset revealed a single NNW-striking branch fault SW of the main fault,whereas the other dataset indicated four steep NNE-striking segments with a left-echelon pattern.Additionally,a third ENE-striking short fault was identified NE of the main fault.In combination with the spatial distribution of pre-existing faults,our 3D fault models indicate that the Yangbi earthquake reactivated pre-existing NW-and NE-striking fault directions rather than the surface-exposed Weixi-Qiaohou-Weishan Fault zone.The occurrence of the Yangbi earthquake demonstrates that the reactivation of pre-existing faults away from active fault zones,through either cascade or conjugate rupture modes,can cause unexpected moderate-large earthquakes and severe disasters,necessitating attention in regions like southeast Xizang,which have complex fault systems.
基金supported by the National Key Research and Development Program of China(Grant No.2024YFA1408604 for K.C.and X.C.)the National Natural Science Foundation of China(Grant Nos.12047503,12447103 for K.C.and X.C.,12325501 for P.Z.,and 12275263 for Y.D.and S.H.)+1 种基金the Innovation Program for Quantum Science and Technology(Grant No.2021ZD0301900 for Y.D.and S.H.)the Natural Science Foundation of Fujian Province of China(Grant No.2023J02032 for Y.D.and S.H.)。
文摘Fundamental physics often confronts complex symbolic problems with few guiding exemplars or established principles.While artificial intelligence(AI)offers promise,its typical need for vast datasets to learn from hinders its use in these information-scarce frontiers.We introduce learning at criticality(LaC),a reinforcement learning scheme that tunes large language models(LLMs)to a sharp learning transition,addressing this information scarcity.At this transition,LLMs achieve peak generalization from minimal data,exemplified by 7-digit base-7 addition-a test of nontrivial arithmetic reasoning.To elucidate this peak,we analyze a minimal concept-network model designed to capture the essence of how LLMs might link tokens.Trained on a single exemplar,this model also undergoes a sharp learning transition.This transition exhibits hallmarks of a second-order phase transition,notably power-law distributed solution path lengths.At this critical point,the system maximizes a“critical thinking pattern”crucial for generalization,enabled by the underlying scale-free exploration.This suggests LLMs reach peak performance by operating at criticality,where such explorative dynamics enable the extraction of underlying operational rules.We demonstrate LaC in quantum field theory:an 8B-parameter LLM,tuned to its critical point by LaC using a few exemplars of symbolic Matsubara sums,solves unseen,higher-order problems,significantly outperforming far larger models.LaC thus leverages critical phenomena,a physical principle,to empower AI for complex,data-sparse challenges in fundamental physics.
文摘Accurate Global Horizontal Irradiance(GHI)forecasting has become vital for successfully integrating solar energy into the electrical grid because of the expanding demand for green power and the worldwide shift favouring green energy resources.Particularly considering the implications of the aggressive GHG emission targets,accurate GHI forecasting has become vital for developing,designing,and operational managing solar energy systems.This research presented the core concepts of modelling and performance analysis of the application of various forecasting models such as ARIMA(Autoregressive Integrated Moving Average),Elaman NN(Elman Neural Network),RBFN(Radial Basis Function Neural Network),SVM(Support Vector Machine),LSTM(Long Short-Term Memory),Persistent,BPN(Back Propagation Neural Network),MLP(Multilayer Perceptron Neural Network),RF(Random Forest),and XGBoost(eXtreme Gradient Boosting)for assessing multi-seasonal forecasting of GHI.Used the India region data to evaluate the models’performance and forecasting ability.Research using forecasting models for seasonal Global Horizontal Irradiance(GHI)forecasting in winter,spring,summer,monsoon,and autumn.Substantiated performance effectiveness through evaluation metrics,such as Mean Absolute Error(MAE),Root Mean Squared Error(RMSE),and R-squared(R^(2)),coded using Python programming.The performance experimentation analysis inferred that the most accurate forecasts in all the seasons compared to the other forecasting models the Random Forest and eXtreme Gradient Boosting,are the superior and competing models that yield Winter season-based forecasting XGBoost is the best forecasting model with MAE:1.6325,RMSE:4.8338,and R^(2):0.9998.Spring season-based forecasting XGBoost is the best forecasting model with MAE:2.599599,RMSE:5.58539,and R^(2):0.999784.Summer season-based forecasting RF is the best forecasting model with MAE:1.03843,RMSE:2.116325,and R^(2):0.999967.Monsoon season-based forecasting RF is the best forecasting model with MAE:0.892385,RMSE:2.417587,and R^(2):0.999942.Autumn season-based forecasting RF is the best forecasting model with MAE:0.810462,RMSE:1.928215,and R^(2):0.999958.Based on seasonal variations and computing constraints,the findings enable energy system operators to make helpful recommendations for choosing the most effective forecasting models.
基金supported by the National Natural Science Foundation of China(Nos.61906168,62202429 and 62272267)the Zhejiang Provincial Natural Science Foundation of China(No.LY23F020023)the Construction of Hubei Provincial Key Laboratory for Intelligent Visual Monitoring of Hydropower Projects(No.2022SDSJ01)。
文摘Accurately identifying building distribution from remote sensing images with complex background information is challenging.The emergence of diffusion models has prompted the innovative idea of employing the reverse denoising process to distill building distribution from these complex backgrounds.Building on this concept,we propose a novel framework,building extraction diffusion model(BEDiff),which meticulously refines the extraction of building footprints from remote sensing images in a stepwise fashion.Our approach begins with the design of booster guidance,a mechanism that extracts structural and semantic features from remote sensing images to serve as priors,thereby providing targeted guidance for the diffusion process.Additionally,we introduce a cross-feature fusion module(CFM)that bridges the semantic gap between different types of features,facilitating the integration of the attributes extracted by booster guidance into the diffusion process more effectively.Our proposed BEDiff marks the first application of diffusion models to the task of building extraction.Empirical evidence from extensive experiments on the Beijing building dataset demonstrates the superior performance of BEDiff,affirming its effectiveness and potential for enhancing the accuracy of building extraction in complex urban landscapes.
基金supported by the National Natural Science Foundations of China (Grant Nos. 52178515, 52078133)
文摘Concrete material model plays an important role in numerical predictions of its dynamic responses subjected to projectile impact and charge explosion.Current concrete material models could be distinguished into two kinds,i.e.,the hydro-elastoplastic-damage model with independent equation of state and the cap-elastoplastic-damage model with continuous cap surface.The essential differences between the two kind models are vital for researchers to choose an appropriate kind of concrete material model for their concerned problems,while existing studies have contradictory conclusions.To resolve this issue,the constitutive theories of the two kinds of models are firstly overviewed.Then,the constitutive theories between the two kinds of models are comprehensively compared and the main similarities and differences are clarified,which are demonstrated by single element numerical examples.Finally,numerical predictions for projectile penetration and charge explosion experiments on concrete targets are compared to further demonstrate the conclusion made by constitutive comparison.It is found that both the two kind models could be used to simulate the dynamic responses of concrete under projectile impact and blast loadings,if the parameter needed in material models are well calibrated,although some discrepancies between them may exist.
文摘Sentiment analysis,a cornerstone of natural language processing,has witnessed remarkable advancements driven by deep learning models which demonstrated impressive accuracy in discerning sentiment from text across various domains.However,the deployment of such models in resource-constrained environments presents a unique set of challenges that require innovative solutions.Resource-constrained environments encompass scenarios where computing resources,memory,and energy availability are restricted.To empower sentiment analysis in resource-constrained environments,we address the crucial need by leveraging lightweight pre-trained models.These models,derived from popular architectures such as DistilBERT,MobileBERT,ALBERT,TinyBERT,ELECTRA,and SqueezeBERT,offer a promising solution to the resource limitations imposed by these environments.By distilling the knowledge from larger models into smaller ones and employing various optimization techniques,these lightweight models aim to strike a balance between performance and resource efficiency.This paper endeavors to explore the performance of multiple lightweight pre-trained models in sentiment analysis tasks specific to such environments and provide insights into their viability for practical deployment.
Abstract: BACKGROUND Colorectal polyps (CPs) are important precursor lesions of colorectal cancer, and endoscopic surgery remains the primary treatment option. However, the short-term recurrence rate after surgery is high, and the risk factors for recurrence remain unclear. AIM To comprehensively explore the risk factors for short-term recurrence of CPs after endoscopic surgery and to develop a nomogram prediction model. METHODS Overall, 362 patients who underwent endoscopic polypectomy between January 2022 and January 2024 at Nanjing Jiangbei Hospital were included. We screened basic demographic data, clinical and polyp characteristics, and surgery-related information, and identified independent risk factors for CP recurrence using univariate and multivariate logistic regression analyses. The multivariate analysis results were used to construct a nomogram prediction model, which was internally validated using bootstrapping, with performance evaluated using the area under the curve (AUC), calibration curves, and decision curve analysis. RESULTS CPs recurred in 166 (45.86%) of the 362 patients within 1 year after surgery. Multivariate logistic regression analysis showed that age (OR = 1.04, P = 0.002), alcohol consumption (OR = 2.07, P = 0.012), Helicobacter pylori infection (OR = 2.34, P < 0.001), polyp number > 2 (OR = 1.98, P = 0.005), sessile polyps (OR = 2.10, P = 0.006), and adenomatous pathological type (OR = 3.02, P < 0.001) were independent risk factors for post-surgery recurrence. The nomogram prediction model showed good discrimination (AUC = 0.73) and calibration, and decision curve analysis showed that the model offered clinical benefit at risk probabilities > 20%. CONCLUSION We identified multiple independent risk factors for short-term recurrence after endoscopic surgery. The nomogram prediction model showed a reasonable degree of discrimination and calibration and potential clinical applicability.
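To see how the reported odds ratios would feed a nomogram-style risk estimate, a minimal sketch is given below. The regression coefficients are ln(OR) from the abstract; the intercept is a hypothetical placeholder, since it is not reported, so the output is illustrative only.

```python
# Worked sketch: turning the reported odds ratios into a risk probability, as a
# nomogram does. Coefficients are ln(OR) from the abstract; the intercept is a
# hypothetical placeholder (not reported), so outputs are illustrative only.
import math

BETAS = {
    "age_per_year": math.log(1.04),
    "alcohol":      math.log(2.07),
    "h_pylori":     math.log(2.34),
    "polyps_gt_2":  math.log(1.98),
    "sessile":      math.log(2.10),
    "adenomatous":  math.log(3.02),
}
BETA0 = -4.0  # hypothetical intercept; in practice fitted from the cohort

def recurrence_probability(age, alcohol, h_pylori, polyps_gt_2, sessile, adenomatous):
    """Logistic model: probability of CP recurrence within one year."""
    logit = (BETA0
             + BETAS["age_per_year"] * age
             + BETAS["alcohol"] * alcohol
             + BETAS["h_pylori"] * h_pylori
             + BETAS["polyps_gt_2"] * polyps_gt_2
             + BETAS["sessile"] * sessile
             + BETAS["adenomatous"] * adenomatous)
    return 1.0 / (1.0 + math.exp(-logit))

# A 65-year-old drinker with H. pylori and a single sessile adenomatous polyp:
print(f"{recurrence_probability(65, 1, 1, 0, 1, 1):.2f}")
```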
Funding: Supported by the China Postdoctoral Science Foundation (2024M751098, 2024M761134), the Jilin Province Development and Reform Commission Program (ZKJCFGW2023015), the Wenzhou Science & Technology Bureau Basic Public Welfare Research Program (Y20240006), and the Jilin University Young Teachers and Students Cross-disciplinary Training Project (2023-JCXK-08).
Abstract: Photodynamic therapy (PDT) is an emerging minimally invasive therapeutic modality that relies on the activation of a photosensitizing agent by light of a specific wavelength in the presence of molecular oxygen, leading to the generation of reactive oxygen species (ROS). This mechanism produces selective cytotoxic effects within pathological tissues and has demonstrated therapeutic potential across diverse disease contexts. However, broader clinical application remains limited by photosensitizer selectivity, shallow light penetration, and the risk of off-target cytotoxicity. Recent advancements in PDT have focused on the development of next-generation photosensitizers, the integration of nanotechnology for enhanced delivery and targeting, and the strategic combination of PDT with complementary therapeutic approaches. Experimental animal models play a crucial role in validating the efficacy and safety of PDT, optimizing its therapeutic parameters, and determining its mechanisms of action. This review provides a comprehensive overview of PDT applications in various disease models, including oncological, infectious, and nonconventional indications. Special emphasis is placed on the importance of large animal models in PDT research, such as rabbits, pigs, dogs, and non-human primates, which provide experimental platforms that more closely resemble human physiological and pathological states. The use of these models for understanding the mechanisms of PDT, optimizing therapeutic regimens, and evaluating clinical outcomes is also discussed. This review aims to inform future directions in PDT research and emphasizes the importance of selecting appropriate preclinical animal models to facilitate successful clinical translation.
Funding: National Natural Science Foundation of China (Grant Nos. 41972326 and 51774258).
Abstract: This study compared the predictive performance and processing speed of an artificial neural network (ANN) and a hybrid of a numerical reservoir simulation and an artificial neural network (NRS-ANN) in estimating the oil production rate of the ZH86 reservoir block under waterflood recovery. The historical input variables (reservoir pressure, reservoir pore volume containing hydrocarbons, reservoir pore volume containing water, and reservoir water injection rate) were used as inputs for the ANN models. To create the NRS-ANN hybrid models, 314 data sets extracted from the NRS model, covering the same four variables, were used. The output of the models was the historical oil production rate (HOPR, in m³ per day) recorded from the ZH86 reservoir block. Models were developed using MATLAB R2021a, and 25 models were trained under three replicate conditions (2, 4, and 6 replicates), each for 1000 epochs. A comparative analysis indicated that, across all 25 models, the ANN outperformed the NRS-ANN in both processing speed and prediction performance. The ANN models achieved average R² and MAE values of 0.8433 and 8.0964 m³/day, respectively, while the NRS-ANN hybrid models achieved average R² and MAE values of 0.7828 and 8.2484 m³/day. In addition, the ANN models achieved processing speeds of 49, 32, and 24 epochs/sec after 2, 4, and 6 replicates, respectively, whereas the NRS-ANN hybrid models achieved lower average speeds of 45, 23, and 20 epochs/sec. The optimal ANN model likewise outperformed the optimal NRS-ANN model in both speed and accuracy, reaching 336.44 epochs/sec versus 52.16 epochs/sec for the hybrid. On the validation dataset, the optimal ANN model achieved lower RMSE and MAE values of 7.9291 m³/day and 5.3855 m³/day, compared with 13.6821 m³/day and 9.2047 m³/day for the optimal NRS-ANN hybrid. The optimal ANN model also consistently achieved higher R² values of 0.9472, 0.9284, and 0.9316 on the training, test, and validation datasets, whereas the optimal NRS-ANN hybrid yielded lower R² values of 0.8030, 0.8622, and 0.7776. Overall, the study showed that ANN models are a more effective and reliable tool, balancing processing speed and accuracy in estimating the oil production rate of the ZH86 reservoir block under the waterflooding recovery method.
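A minimal re-expression of the ANN setup follows, in scikit-learn rather than the paper's MATLAB workflow, with synthetic data standing in for the ZH86 block history, which is not public.

```python
# Minimal sketch of an ANN regressor with the four inputs and one output
# described above; synthetic data replaces the confidential ZH86 history.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(0)
# Inputs: pressure, HC pore volume, water pore volume, water injection rate.
X = rng.normal(size=(314, 4))
# Synthetic oil production rate (m3/day) with noise.
y = 50 + X @ np.array([8.0, 5.0, -3.0, 4.0]) + rng.normal(scale=2.0, size=314)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
pred = ann.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.3f}, MAE = {mean_absolute_error(y_te, pred):.3f} m3/day")
```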
Funding: Supported by the National Key Research and Development Program of China (2022YFB3605902) and the National Natural Science Foundation of China (52375411, 52293402).
Abstract: Workpiece rotational grinding is widely used in the ultra-precision machining of hard and brittle semiconductor materials, including single-crystal silicon, silicon carbide, and gallium arsenide. Surface roughness and subsurface damage depth (SDD) are crucial indicators for evaluating the surface quality of these materials after grinding. Existing prediction models lack general applicability and do not accurately account for the complex material behavior under grinding conditions. This paper introduces novel models for predicting both surface roughness and SDD in hard and brittle semiconductor materials. The surface roughness model uniquely incorporates the material's elastic recovery properties, revealing the significant impact of these properties on prediction accuracy. The SDD model is distinguished by its analysis of the interactions between abrasive grits and the workpiece, as well as the mechanisms governing stress-induced damage evolution. Both models establish a stable relationship with the grit depth of cut (GDC). Additionally, we have developed an analytical relationship between the GDC and the grinding process parameters. This, in turn, enables an analytical framework for predicting surface roughness and SDD directly from grinding process parameters, which previous models could not achieve. The models were validated through systematic experiments on three different semiconductor materials, demonstrating excellent agreement with the experimental data, with prediction errors of 6.3% for surface roughness and 6.9% for SDD. Additionally, this study identifies variations in elastic recovery and material plasticity as critical factors influencing surface roughness and SDD across different materials. These findings significantly advance the accuracy of predictive models and broaden their applicability for grinding hard and brittle semiconductor materials.
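The prediction chain the abstract describes can be summarized schematically; the functional forms below are illustrative assumptions meant to convey the structure, not the paper's actual expressions.

```latex
% Schematic chain: process parameters -> grit depth of cut -> surface metrics.
% Functional forms are illustrative placeholders, not the paper's equations.
\begin{aligned}
  h_g &= h_g\!\left(v_w,\ v_s,\ a_p,\ \text{wheel topography}\right)
      && \text{(grit depth of cut from process parameters)} \\
  R_a &= \phi\,(1 - \eta_e)\, h_g
      && \text{(roughness reduced by elastic recovery ratio } \eta_e\text{)} \\
  \mathrm{SDD} &\approx c\, h_g^{\,m}
      && \text{(stress-driven subsurface crack growth, } m \ge 1\text{)}
\end{aligned}
```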
Funding: Supported by the European Union and the Romanian Government through the Competitiveness Operational Programme 2014–2020, under the project "Increasing the economic competitiveness of the forestry sector and the quality of life through knowledge transfer, technology and CDI skills" (CRESFORLIFE), ID P 40 380/105506, subsidiary contract no. 17/2020; partially by the FORCLIMSOC Nucleu Programme (Contract 12N/2023), project PN 23090101, and the CresPerfInst project (Contract 34PFE/December 30, 2021), "Increasing the institutional capacity and performance of INCDS 'Marin Drǎcea' in RDI activities-CresPer". LM was financially supported by the Research Council of Finland's flagship ecosystem for Forest-Human-Machine Interplay – Building Resilience, Redefining Value Networks and Enabling Meaningful Experiences (UNITE) (decision number 357909).
Abstract: In this study, we used an extensive sampling network established in central Romania to develop tree height and crown length models. Our analysis included more than 18,000 tree measurements from five species. Instead of building univariate models for each response variable, we employed a multivariate approach using seemingly unrelated mixed-effects models. These models incorporated variables related to species mixture, tree and stand size, competition, and stand structure. With the inclusion of additional variables in the multivariate seemingly unrelated mixed-effects models, the accuracy of the height prediction models improved by over 10% for all species, whereas the improvement in the crown length models was considerably smaller. Our findings indicate that trees in mixed stands tend to have shorter heights but longer crowns than those in pure stands. We also observed that trees in homogeneous stand structures have shorter crowns than those in heterogeneous stands. By employing a multivariate mixed-effects modelling framework, we were able to perform cross-model random-effect predictions, leading to a significant increase in accuracy when both responses were used to calibrate the model; in contrast, the improvement was marginal when only height was used for calibration. We demonstrate how multivariate mixed-effects models can be used to develop multi-response allometric models that can be calibrated with a limited number of observations while simultaneously achieving better-aligned predictions.
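The cross-model calibration rests on the standard empirical best linear unbiased predictor (EBLUP) of the random effects; the statement below is textbook mixed-model theory, not an equation quoted from the paper.

```latex
% EBLUP of the random effects b from a calibration sample y, with fixed-effects
% design X, random-effects design Z, D = var(b) and R = var(residuals):
\hat{\mathbf{b}} = \mathbf{D}\mathbf{Z}^{\top}
  \left(\mathbf{Z}\mathbf{D}\mathbf{Z}^{\top} + \mathbf{R}\right)^{-1}
  \left(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}\right)
```

Stacking height and crown-length observations in y lets a height measurement inform the crown-length random effects through the cross-response covariances in D, which is consistent with the larger calibration gains observed when both responses are used.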
Abstract: This paper provides a comparative sociological analysis of the application models for industrial robots in the automotive and electronics industries. The integration of robots in these two key sectors has been a significant milestone in the evolution of modern manufacturing, contributing to major shifts in production processes, labor markets, and organizational structures. Through a comprehensive review of the literature and case studies, the paper identifies and contrasts the driving factors behind robot adoption, the impact of automation on the workforce, and the sociocultural factors influencing these transitions. The automotive industry, characterized by high-volume production and cost efficiency, and the electronics industry, known for precision and fast-paced production, present distinct challenges and opportunities in robot integration. By examining these differences, the paper offers insights into the broader social and economic implications of industrial robot deployment and its effect on industry dynamics and labor relations. The findings highlight not only the technological benefits but also the social challenges associated with automation in these industries.
Abstract: Accurate prediction of nurse demand plays a crucial role in planning the healthcare workforce efficiently, ensuring appropriate staffing levels, and providing high-quality patient care. The intricacy and variety of contemporary healthcare systems, together with a growing patient population, call for advanced forecasting models. Factors such as technological advancements, novel treatment protocols, and the increasing prevalence of chronic illness have diminished the efficacy of traditional estimation approaches. Novel forecasting methodologies, including time-series analysis, machine learning, and simulation-based techniques, have been developed to tackle these challenges. Time-series analysis recognizes patterns in past data, whereas machine learning uses extensive datasets to uncover concealed trends; simulation models are employed to assess diverse scenarios, assisting in proactive staffing adjustments. These techniques offer distinct advantages, such as the identification of seasonal patterns, the management of large datasets, and the ability to test various assumptions. By integrating these sophisticated models into workforce planning, organizations can optimize staffing, reduce financial waste, and elevate the standard of patient care. As the healthcare field progresses, the use of these predictive models will be pivotal for fostering adaptable and resilient workforce management.
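As a concrete instance of the time-series route mentioned above, here is a minimal Holt-Winters sketch on synthetic weekly demand; the data and parameters are illustrative, not drawn from any cited study.

```python
# Minimal sketch of the time-series route: Holt-Winters smoothing with annual
# seasonality on synthetic weekly nurse-demand data (illustrative values only).
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
t = np.arange(156)  # three years of weekly demand
demand = 120 + 0.1 * t + 15 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 4, 156)

model = ExponentialSmoothing(
    demand, trend="add", seasonal="add", seasonal_periods=52
).fit()
print(model.forecast(12).round(1))  # next quarter's weekly staffing forecast
```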
Funding: Funded by the Spanish Government and FEDER funds (AEI/FEDER, UE) under grant PID2021-124502OB-C42 (PRESECREL), and by the predoctoral program "Concepción Arenal del Programa de Personal Investigador en formación Predoctoral" funded by the Universidad de Cantabria and the Government of Cantabria (BOC 18-10-2021).
Abstract: Predictive maintenance often involves imbalanced multivariate time-series datasets with scarce failure events, posing challenges for model training due to the high dimensionality of the data and the need for domain-specific preprocessing, which frequently leads to large and complex models. Inspired by the success of large language models (LLMs), transformer-based foundation models for time series (TSFMs) have been developed. These models have been shown to reconstruct time series in a zero-shot manner and to capture the diverse patterns that effectively characterize them. This paper proposes using a TSFM to generate embeddings of the input data space, making the inputs more interpretable for machine learning models. To evaluate the effectiveness of our approach, we trained three classical machine learning algorithms and one neural network on embeddings generated by the TSFM called Moment to predict the remaining useful life of aircraft engines. We tested models trained with both the full training dataset and with only 10% of the training samples. Our results show that training simple models, such as support vector regressors or neural networks, on embeddings generated by Moment not only accelerates the training process but also improves performance in few-shot scenarios where data is scarce. This suggests a promising alternative to complex deep learning architectures, particularly in industrial contexts with limited labeled data.
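A minimal sketch of the embed-then-regress pipeline follows. Since the Moment API is not detailed in the abstract, the embedding step is stubbed with a random projection and labeled as such; the downstream SVR stage uses the real scikit-learn interface.

```python
# Sketch of the embed-then-regress pipeline. The `embed` function is a stub
# (a fixed random projection) standing in for TSFM embeddings; the SVR stage
# below uses the real scikit-learn API. All data here is synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

def embed(windows: np.ndarray, dim: int = 64) -> np.ndarray:
    """Placeholder for TSFM embeddings: project each window to `dim` features."""
    proj = np.random.default_rng(0).normal(size=(windows.shape[1], dim))
    return windows @ proj

rng = np.random.default_rng(2)
X_raw = rng.normal(size=(200, 512))   # 200 sensor windows of length 512
rul = rng.uniform(0, 300, size=200)   # synthetic remaining-useful-life labels

Z = embed(X_raw)                      # compact features for a simple model
few_shot = slice(0, 20)               # 10% of the samples: the few-shot regime
svr = SVR().fit(Z[few_shot], rul[few_shot])
print(round(mean_absolute_error(rul, svr.predict(Z)), 1))
```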