Amplitudes have been found to be a function of incident angle and offset. Hence, data used to test for amplitude variation with angle or offset must have its amplitudes preserved for all offsets rather than stacked. Amplitude Variation with Offset (AVO)/Amplitude Variation with Angle (AVA) analysis is necessary to account for the information carried in the offset/angle parameter (mode-converted S-wave and P-wave velocities). Since amplitudes are a function of the converted S- and P-waves, it is important to investigate the dependence of amplitudes on the elastic (P- and S-wave) parameters of the seismic data. By modelling these effects for different reservoir fluids via fluid substitution, the AVO geobody classes present along the well and in the entire seismic cube can be observed. AVO analysis was performed on one test well (Well_1) and 3D pre-stack angle gathers from the Tano Basin. The analysis involves creating a synthetic model to infer the effect of offset scaling techniques on amplitude responses in the Tano Basin, as compared with unscaled seismic data. Spectral balancing was performed to match the amplitude spectra of all angle stacks to that of the mid (26°) stack on the test lines. The process affected primarily the far (34° - 40°) stacks: the frequency content of these stacks increased slightly to match that of the near and mid stacks. In the offset scaling process, the root mean square (RMS) amplitude comparison between the synthetic and the seismic suggests that the amplitude of the far traces should be reduced relative to the nears by up to 16%. However, the exact scalar values depend on the time window considered. This suggests that the amplitude scaling with offset delivered from seismic processing is only approximately correct and needs to be checked against well synthetics and adjusted accordingly prior to use in AVO studies. The AVO attribute volumes generated were better at resolving anomalies on spectrally balanced and offset-scaled data than on data delivered from conventional processing. A typical class II AVO anomaly, indicating an oil-filled reservoir, is seen along the test well in the cross-plot analysis and the AVO attribute cube.
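The RMS-based offset scaling check described above can be sketched in a few lines; this is a toy illustration with synthetic traces, not the processing workflow used in the study (the 19% amplitude mismatch is invented so that the derived scalar mirrors the reported ~16% reduction):

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a windowed trace."""
    return math.sqrt(sum(a * a for a in samples) / len(samples))

def offset_scalar(seismic_far, synthetic_far):
    """Scalar that makes the far-offset seismic RMS match the well synthetic."""
    return rms(synthetic_far) / rms(seismic_far)

# Toy example: the far seismic traces are 19% "hot" relative to the well
# synthetic, so the derived scalar reduces them by roughly 16%.
synthetic = [math.sin(2 * math.pi * 30 * i / 250) for i in range(250)]
seismic = [1.19 * a for a in synthetic]
s = offset_scalar(seismic, synthetic)
print(round(s, 2))  # prints 0.84, i.e. a ~16% reduction of the far amplitudes
```

In practice the scalar would be recomputed for each time window, which is why the study stresses that the exact values are window-dependent.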
A model-based method that employs the extended Kalman filter (EKF) to estimate the state of charge (SOC) of a Li-ion battery was proposed. The underlying dynamic behavior of the cell pack was described by an equivalent circuit comprising two capacitors and three resistors. Measurements from two tests were used to compare the SOC estimated by the model-based EKF with the SOC calculated by coulomb counting. Results show that the proposed method performs a good estimation of the SOC of battery packs. Moreover, a corresponding battery management system (BMS), including software and hardware based on this method, was designed.
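The coulomb-counting baseline against which the EKF estimate is compared is simple current integration; a minimal sketch, with a hypothetical cell capacity and sampling interval:

```python
def coulomb_count(soc0, currents_a, dt_s, capacity_ah):
    """Update SOC by integrating measured current (positive = discharge).

    soc0        initial state of charge (0..1)
    currents_a  sampled pack current in amperes
    dt_s        sampling interval in seconds
    capacity_ah rated capacity in ampere-hours
    """
    capacity_as = capacity_ah * 3600.0  # convert to ampere-seconds
    soc = soc0
    history = []
    for i_a in currents_a:
        soc -= i_a * dt_s / capacity_as
        history.append(soc)
    return history

# Hypothetical 2.0 Ah cell discharged at a constant 2.0 A for 30 minutes:
# exactly half the capacity is removed, so SOC falls from 1.0 to 0.5.
trace = coulomb_count(1.0, [2.0] * 1800, 1.0, 2.0)
print(round(trace[-1], 3))  # 0.5
```

Coulomb counting drifts with current-sensor bias, which is precisely the weakness the EKF correction addresses.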
A special-input-signal identification method based on the auxiliary-model-based multi-innovation stochastic gradient algorithm for Hammerstein output-error systems was proposed. The special input signals were used to realize the identification and separation of the Hammerstein model. As a result, the identification of the dynamic linear part can be separated from the static nonlinear elements without any redundant adjustable parameters. The auxiliary-model-based multi-innovation stochastic gradient algorithm was applied to identify the serial-link parameters of the Hammerstein model. The algorithm can reduce the influence of noise and improve the identification accuracy by changing the innovation length. Simulation results show the efficiency of the proposed method.
Depth maps are used to synthesize virtual views in free-viewpoint television (FTV) systems. When depth maps are derived using existing depth estimation methods, depth distortions cause undesirable artifacts in the synthesized views. To solve this problem, a 3D video quality model based on depth maps (D-3DV) for virtual view synthesis and depth map coding in FTV applications is proposed. First, the relationships between distortions in the coded depth map and the rendered view are derived. Then, a precise 3DV quality model based on depth characteristics is developed for the synthesized virtual views. Finally, based on the D-3DV model, multilateral filtering is applied as a pre-processing filter to reduce rendering artifacts. Experimental results, evaluated by objective and subjective methods, indicate that the proposed D-3DV model can reduce the bit-rate of depth coding and achieve better rendering quality.
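The multilateral filter itself is not specified here, but its edge-preserving principle can be illustrated with a plain 1-D bilateral filter on a depth scanline (the radius and sigma values are arbitrary choices for the toy data, not the paper's parameters):

```python
import math

def bilateral_1d(depth, radius=2, sigma_s=1.0, sigma_r=10.0):
    """Edge-preserving smoothing of a 1-D depth scanline.

    Weights combine spatial closeness (sigma_s) and depth similarity
    (sigma_r), so smooth regions are denoised while depth edges survive.
    """
    out = []
    for i in range(len(depth)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(depth), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((depth[i] - depth[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * depth[j]
            den += w
        out.append(num / den)
    return out

# A noisy step edge: smoothing flattens the noise but keeps the edge sharp.
scan = [100, 102, 98, 101, 200, 199, 201, 198]
smoothed = bilateral_1d(scan)
print(smoothed[3] < 150 < smoothed[4])  # True: the depth edge is preserved
```

A multilateral filter extends this idea with additional weight terms (e.g. color similarity from the texture view), but the edge-preserving mechanism is the same.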
A photonuclear reaction transport model based on an isospin-dependent quantum molecular dynamics (IQMD) model in the intermediate energy region, named GiQMD in this study, is presented. The methodology for simulating the course of a photonuclear reaction within the IQMD framework is described and applied to study the photoabsorption cross section and π meson production. The simulation results are compared with available experimental data as well as with the Giessen Boltzmann-Uehling-Uhlenbeck model.
An elastic/viscoplastic constitutive equation describing the deformation law of metal materials is suggested, based on the no-yield-surface concept and the thermal activation theory of dislocations. The equation, which takes into account the effects of strain rate, strain history, strain-rate history, hardening, and temperature, has a strong physical basis. Comparison of the theoretical predictions with experimental results for the mechanical behaviour of Ti under uniaxial stress at room temperature shows good consistency.
As the speed of optical access networks soars with ever-increasing multiple services, the service-supporting ability of optical access networks suffers greatly from the shortage of service awareness. To solve this problem, a hierarchical-Bayesian-model-based service awareness mechanism is proposed for high-speed optical access networks. The approach builds a hierarchical Bayesian model according to the structure of typical optical access networks. The proposed scheme conducts simple service awareness in each optical network unit (ONU) and performs complex service awareness from the whole-system view in the optical line terminal (OLT). Simulation results show that the proposed scheme achieves better quality of service (QoS) in terms of packet loss rate and time delay.
Model-based control schemes use the inverse dynamics of the robot arm to produce the main torque component necessary for trajectory tracking. A model-based controller requires accurate knowledge of the model parameters, which is very difficult to obtain, especially if the manipulator is flexible. Therefore, a reduced-model-based controller has been developed, which requires only information about the space robot base velocity and the link parameters. The flexible link is modeled as an Euler-Bernoulli beam. To simplify the analysis, the Jacobian of the rigid manipulator is used. Bond graph modeling is used to model the dynamics of the system and to devise the control strategy. The scheme has been verified by simulation for a two-link flexible space manipulator.
This study first improved the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model to handle the singular information present in financial product sales data: an improved outlier detection method was used to locate the outliers, which were then processed by an iterative method. Second, to describe the peak and fat tail of the financial time series, as well as the leverage effect, the skewed-t Asymmetric Power Autoregressive Conditional Heteroskedasticity model, based on the Autoregressive Integrated Moving Average (ARIMA) model, was used to analyze the sales data. Empirical analysis showed that the model considering the skewed distribution is effective.
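The iterative outlier handling step can be illustrated as follows; the z-score rule and median replacement here are a generic stand-in for the paper's improved detection method, not its exact procedure:

```python
import statistics

def clean_outliers(series, z_thresh=2.5, max_iter=10):
    """Iteratively replace points more than z_thresh standard deviations
    from the mean with the series median, until none remain."""
    data = list(series)
    for _ in range(max_iter):
        mu = statistics.mean(data)
        sd = statistics.pstdev(data)
        med = statistics.median(data)
        flagged = [i for i, x in enumerate(data)
                   if sd > 0 and abs(x - mu) > z_thresh * sd]
        if not flagged:
            break
        for i in flagged:
            data[i] = med
    return data

# Sales-like series with one singular spike at index 5.
sales = [10, 11, 9, 10, 12, 500, 11, 10, 9, 12]
cleaned = clean_outliers(sales)
print(cleaned[5])  # the spike is replaced by the series median, 10.5
```

Iterating matters because a large outlier inflates the standard deviation and can mask smaller ones on the first pass.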
We use the secondary relative benefit model based on DEA to evaluate the performance of agricultural financial expenditure in Guizhou Province. The model can give due consideration to the production effectiveness determined by objective natural conditions and to the management effectiveness of all regions (as decision-making bodies) in the use of financial funds for supporting agriculture. In general, there is a north-south gradient difference in the performance of financial support for agriculture between regions in Guizhou Province. The drought in 2010 had a significant impact on the technical efficiency of the whole province; the performance score of each item in Liupanshui City and Southwest Guizhou is very low; and the technical efficiency and management efficiency in most regions need to be improved. To improve the performance of financial support for agriculture, we need to ensure the scale of input; provide appropriate preferential financial policies for agricultural infrastructure, especially the construction of rural water conservancy and the development and promotion of agricultural science and technology; and adopt special check and acceptance of supported projects to strengthen the management of funds for agriculture.
This paper describes a new approach to an intelligent model-based predictive control scheme for driving a complex system. The control scheme presented thoroughly addresses the main problem of linear model-based predictive control theory in dealing with severely nonlinear and time-variant systems. In fact, the theory can be improved into a suitable approach for handling such complex systems, provided they are first treated in line with the outcomes presented here. The control scheme is organized around a multi-fuzzy-based predictive control approach together with a multi-fuzzy-based predictive model approach, while an intelligent decision mechanism system (IDMS) is used to identify the best fuzzy-based predictive model and the corresponding fuzzy-based predictive controller at each instant of time. To demonstrate the validity of the proposed control scheme, the single-linear-model-based generalized predictive control scheme is used as a benchmark. Finally, the proposed control scheme clearly outperforms the benchmark in tracking performance.
Offline policy evaluation, that is, evaluating and selecting complex decision-making policies using only offline datasets, is important in reinforcement learning. At present, model-based offline policy evaluation (MBOPE) is widely welcomed because it is easy to implement and performs well. MBOPE directly approximates the unknown value of a given policy using the Monte Carlo method, given the estimated transition and reward functions of the environment. Usually, multiple models are trained, and then one of them is selected for use. However, a challenge remains in selecting an appropriate model from those trained. The authors first analyse the upper bound of the difference between the approximated value and the unknown true value. Theoretical results show that this difference is related to the trajectories generated by the given policy on the learnt model and to the prediction error of the transition and reward functions at the generated data points. Based on the theoretical results, a new criterion is proposed to tell which trained model is better suited to evaluating the given policy. Finally, the effectiveness of the proposed criterion is demonstrated on both benchmark and synthetic offline datasets.
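The core MBOPE computation, Monte Carlo rollouts of the policy through the learnt transition and reward functions, can be sketched as follows (the toy chain MDP, policy, and horizon are illustrative):

```python
import random

def mc_policy_value(policy, model, reward, s0, gamma=0.99,
                    horizon=50, n_rollouts=200, seed=0):
    """Approximate the value of `policy` from state s0 by averaging
    discounted returns of rollouts through the learnt model."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rollouts):
        s, ret, disc = s0, 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)
            ret += disc * reward(s, a)
            disc *= gamma
            s = model(s, a, rng)  # sample the learnt transition function
        total += ret
    return total / n_rollouts

# Toy chain MDP: states 0..4, the action moves right with 90% probability,
# and reward 1.0 is earned only in state 4.
policy = lambda s: 1
reward = lambda s, a: 1.0 if s == 4 else 0.0
def model(s, a, rng):
    step = a if rng.random() < 0.9 else -a
    return min(4, max(0, s + step))

v = mc_policy_value(policy, model, reward, 0)
print(v > 0)  # True: the always-right policy reaches the rewarding state
```

The paper's contribution concerns which learnt `model` to plug into this estimator; the rollout machinery itself is standard.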
The challenge of transitioning from temporary humanitarian settlements to more sustainable human settlements stems from a significant increase in the number of forcibly displaced people over recent decades, difficulties in providing social services that meet the required standards, and the prolongation of emergencies. Despite this challenging context, short-term considerations continue to guide the planning and management of these settlements rather than more integrated, longer-term perspectives, preventing viable, sustainable development. Over the years, the design of humanitarian settlements has not been adapted to local contexts and perspectives, nor to the dynamics of urbanization, population growth, and data. In addition, the current approach to temporary settlement harms the environment and can strain limited resources. Inefficient land use and ad hoc development models have compounded difficulties and generated new challenges. As a result, living conditions in settlements have deteriorated over the last few decades and continue to pose new challenges. The stakes are such that major shortcomings have emerged along the way, leading to disruption and budget overruns in a context marked by a steady decline in funding. Some attempts have been made to shift towards more sustainable approaches, but these have mainly focused on vague, sector-oriented themes, failing to consider systematic and integrated views. This study contributes to addressing these shortcomings by designing a model-driven solution that emphasizes an integrated system conceptualized as a system of systems. This paper proposes a new methodology for designing an integrated and sustainable human settlement model, based on Model-Based Systems Engineering and the Systems Modeling Language, to provide valuable insights toward sustainable solutions for displaced populations, in line with the United Nations 2030 Agenda for Sustainable Development.
This paper presents a reference methodology for process orchestration that accelerates the development of Large Language Model (LLM) applications by integrating knowledge bases, API access, and deep web retrieval. By incorporating structured knowledge, the methodology enhances LLMs' reasoning abilities, enabling more accurate and efficient handling of complex tasks. Integration with open APIs allows LLMs to access external services and real-time data, expanding their functionality and application range. Through real-world case studies, we demonstrate that this approach significantly improves the efficiency and adaptability of LLM-based applications, especially for time-sensitive tasks. Our methodology provides practical guidelines for developers to rapidly create robust and adaptable LLM applications capable of navigating dynamic information environments and performing effectively across diverse tasks.
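The orchestration pattern can be sketched as a dispatcher that gathers context from a knowledge base and API stubs before prompting the model; all of the source functions and the `llm` hook below are hypothetical stand-ins, not the paper's implementation:

```python
def retrieve_knowledge(query):
    # Stub: look up structured facts in a local knowledge base.
    kb = {"capital of france": "Paris is the capital of France."}
    return kb.get(query.lower())

def call_api(query):
    # Stub: fetch real-time data from an external service.
    if "weather" in query.lower():
        return "Live weather data for the requested city."
    return None

def orchestrate(query, llm=lambda prompt: f"LLM({prompt})"):
    """Gather context from the knowledge base and APIs, then build the
    augmented prompt that is handed to the language model."""
    context = [c for c in (retrieve_knowledge(query), call_api(query)) if c]
    prompt = query if not context else f"{query}\nContext: {' '.join(context)}"
    return llm(prompt)

answer = orchestrate("capital of France")
print("Paris" in answer)  # True: the KB context reached the prompt
```

In a real system each stub would be a retrieval pipeline (vector search, REST call, web crawl), but the routing-and-augmentation shape stays the same.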
The rise in construction activities within mountainous regions has significantly increased the frequency of rockfalls. Statistical models for rockfall hazard assessment often struggle to achieve high precision at a large scale. This limitation arises primarily from the scarcity of historical rockfall data and the inadequacy of conventional assessment indicators in capturing the physical and structural characteristics of rockfalls. This study proposes a physically based deterministic model designed to accurately quantify rockfall hazards at a large scale. The model accounts for multiple rockfall failure modes and incorporates the key physical and structural parameters of the rock mass. Rockfall hazard is defined as the product of three factors: the rockfall failure probability, the probability of reaching a specific position, and the corresponding impact intensity. The failure probability comprises the probabilities of formation and instability of rock blocks under different failure modes, modeled from the combination patterns of slope surfaces and rock discontinuities. The Monte Carlo method is employed to account for the randomness of mechanical and geometric parameters when quantifying instability probabilities. Additionally, the rock trajectories and impact energies simulated using the Flow-R software are combined with the rockfall failure probability to enable regional rockfall hazard zoning. A case study was conducted in Tiefeng, Chongqing, China, considering four types of rockfall failure modes. Hazard zoning results identified the steep and elevated terrain of the northern and southern anaclinal slopes as the areas of highest rockfall hazard. These findings align with observed conditions, providing detailed hazard zoning and validating the effectiveness and potential of the proposed model.
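The three-factor hazard definition, with instability probability quantified by Monte Carlo over a random friction angle, can be sketched as follows (all parameter values are illustrative, not those of the Tiefeng case study):

```python
import random

def instability_probability(slope_deg, friction_mean_deg, friction_sd_deg,
                            n=10000, seed=1):
    """Monte Carlo probability that a planar block slides: failure when the
    randomly drawn friction angle is below the discontinuity dip angle."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if rng.gauss(friction_mean_deg, friction_sd_deg) < slope_deg)
    return fails / n

def rockfall_hazard(p_failure, p_reach, intensity_kj, intensity_ref_kj=300.0):
    """Hazard as the product of failure probability, reach probability,
    and impact intensity normalized by a reference energy."""
    return p_failure * p_reach * min(1.0, intensity_kj / intensity_ref_kj)

p_fail = instability_probability(slope_deg=42.0, friction_mean_deg=35.0,
                                 friction_sd_deg=5.0)
h = rockfall_hazard(p_fail, p_reach=0.6, intensity_kj=150.0)
print(0.0 < h < 1.0)  # True: hazard is a bounded product of the three factors
```

In the study, `p_reach` and the impact energy come from Flow-R trajectory simulations rather than fixed inputs; the product structure is what carries over.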
The increasing global demand for sustainable agricultural practices and effective waste management has highlighted the potential of biochar as a multifaceted solution. This study evaluates the economic viability of sugarcane bagasse-based biochar in Brazil, focusing on its potential to enhance agricultural productivity and contribute to environmental sustainability. While existing literature predominantly explores the production, crop yield benefits, and carbon sequestration capabilities of biochar, there is a notable gap in comprehensive economic modeling and viability analysis for the region. This paper aims to fill this gap through a scenario-based economic modeling approach incorporating relevant economic models. The findings show that biochar implementation can be economically viable for medium and large sugarcane farms (20,000 - 50,000 hectares), given the availability of funding, breaking even in about 7.5 years with an internal rate of return of 18% on average. For small farms, biochar is viable only when the biochar is applied to the soil, which in all scenarios is the more profitable practice by a large margin. Sensitivity analyses found that biochar generally becomes economically feasible at biochar carbon credit prices above $120 USD/tCO2e and at sugarcane bagasse availability percentages above 60%. While the economic models are well grounded in the existing literature, production of biochar at the studied scales is not yet widespread, especially in Brazil, so uncertainties remain. Across the scenarios, land application was found to be the most viable, and large farms saw the best results, highlighting the importance of scale in biochar operations. Small and medium farms without land application were concluded to have no or questionable viability. Overall, sugarcane bagasse-based biochar can be economically viable, under the right circumstances, for agricultural and environmental advancement in Brazil.
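The break-even and internal-rate-of-return figures rest on standard discounted cash flow arithmetic; a minimal NPV/IRR sketch with hypothetical farm cash flows (not the study's data):

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return by bisection on the NPV sign change."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0:
            lo = mid  # NPV still positive: the root is at a higher rate
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical farm: $1M capital outlay, then $250k net revenue for 10 years.
flows = [-1_000_000] + [250_000] * 10
r = irr(flows)
print(round(r, 2))  # 0.21, i.e. roughly a 21% internal rate of return
```

A scenario analysis like the study's would rebuild `flows` per farm size and carbon credit price and compare the resulting IRRs against a hurdle rate.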
Proteolysis-targeting chimeras (PROTACs) represent a promising class of drugs that can target disease-causing proteins more effectively than traditional small molecule inhibitors can, potentially revolutionizing drug discovery and treatment strategies. However, the links between in vitro and in vivo data are poorly understood, hindering a comprehensive understanding of the absorption, distribution, metabolism, and excretion (ADME) of PROTACs. In this work, 14C-labeled vepdegestrant (ARV-471), currently in phase III clinical trials for breast cancer, was synthesized as a model PROTAC to characterize its preclinical ADME properties and simulate its clinical pharmacokinetics (PK) by establishing a physiologically based pharmacokinetic (PBPK) model. For in vitro-in vivo extrapolation (IVIVE), hepatocyte clearance correlated more closely with in vivo rat PK data than liver microsomal clearance did. PBPK models, initially developed and validated in rats, accurately simulate ARV-471's PK across fed and fasted states, with parameters within 1.75-fold of the observed values. Human models, informed by in vitro ADME data, closely mirrored post-oral-dose plasma profiles at 30 mg. Furthermore, no human-specific metabolites were identified in vitro, and the metabolite profile of rats overlapped that of humans. This work presents a roadmap for developing future PROTAC medications by elucidating the correlation between in vitro and in vivo characteristics.
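A full PBPK model is beyond a short sketch, but the compartmental arithmetic it builds on can be illustrated with a one-compartment oral-dose simulation; all rate constants and the volume of distribution below are hypothetical, not ARV-471 parameters:

```python
import math

def one_compartment_oral(dose_mg, ka, ke, vd_l, t_h):
    """Plasma concentration (mg/L) at time t_h hours after an oral dose,
    with first-order absorption (ka, 1/h), first-order elimination
    (ke, 1/h), and volume of distribution vd_l (L)."""
    return (dose_mg * ka / (vd_l * (ka - ke))
            * (math.exp(-ke * t_h) - math.exp(-ka * t_h)))

# Hypothetical 30 mg oral dose with ka = 1.0/h, ke = 0.1/h, Vd = 50 L,
# sampled hourly over one day.
conc = [one_compartment_oral(30, 1.0, 0.1, 50, t) for t in range(0, 25)]
tmax = conc.index(max(conc))
print(tmax)  # the concentration peaks a few hours after dosing
```

A PBPK model replaces this single compartment with physiologically parameterized organs (liver, gut, plasma, and so on) connected by blood flows, which is what lets in vitro clearance data feed the simulation.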
With the aid of a multi-agent modeling approach to complex systems, hierarchical simulation models of carrier-based aircraft catapult launch are developed. Ocean, carrier, aircraft, and atmosphere are treated as aggregation agents; detailed components such as the catapult, landing gears, and disturbances are considered meta-agents belonging to their aggregation agent. A two-layer model is thus formed: the aggregation-agent layer and the meta-agent layer. The information communication among all agents is described. Meta-agents within one aggregation agent communicate with each other directly by information sharing, but meta-agents belonging to different aggregation agents exchange their information through the aggregation layer first and then perceive it from the shared environment, that is, the aggregation agent. Thus, not only is the hierarchical model built, but the environment perceived by each agent is also specified. Meanwhile, the problem of balancing agent independence against the resource consumption of real-time communication within a multi-agent system (MAS) is resolved. Each agent involved in the catapult launch is depicted, considering the interaction between the disturbed atmospheric environment and multiple motion bodies, including the carrier, aircraft, and landing gears. The models of the reactive agents among them are derived based on tensors, and the perceived messages and inner frameworks of each agent are characterized. Finally, results of a simulation instance are given. Simulation and modeling of dynamic systems based on a multi-agent system help express physical concepts and logical hierarchy clearly and precisely. The system model can easily incorporate other kinds of agents to achieve a precise simulation of a more complex system. This modeling technique decomposes the complex integral dynamic equations of multiple bodies into parallel operations of single agents, and it is convenient to expand, maintain, and reuse the program code.
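The two-layer communication rule, direct sharing inside an aggregation agent and indirect exchange across aggregation agents, can be sketched as follows (the class names, the blackboard mechanism, and the launch-force message are illustrative, not the paper's implementation):

```python
class AggregationAgent:
    """Holds a shared blackboard that its meta-agents read and write."""
    def __init__(self, name):
        self.name = name
        self.blackboard = {}
        self.meta_agents = []

    def add(self, meta):
        meta.parent = self
        self.meta_agents.append(meta)

    def publish(self, key, value):
        # Cross-aggregation exchange goes through the aggregation layer.
        self.blackboard[key] = value

class MetaAgent:
    def __init__(self, name):
        self.name = name
        self.parent = None

    def share(self, key, value):
        # Meta-agents inside one aggregation agent share state directly.
        self.parent.blackboard[key] = value

    def perceive(self, key):
        return self.parent.blackboard.get(key)

# Carrier aggregation agent with a catapult meta-agent; the aircraft's
# landing-gear meta-agent perceives the launch force only indirectly,
# after the aggregation layer republishes it.
carrier, aircraft = AggregationAgent("carrier"), AggregationAgent("aircraft")
catapult, gear = MetaAgent("catapult"), MetaAgent("landing_gear")
carrier.add(catapult)
aircraft.add(gear)
catapult.share("launch_force_kN", 900)
aircraft.publish("launch_force_kN", carrier.blackboard["launch_force_kN"])
print(gear.perceive("launch_force_kN"))  # 900
```

Routing cross-aggregation traffic through the parent agents is what bounds the communication cost while keeping each meta-agent independent.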
In the context of power generation companies, vast amounts of specialized data and expert knowledge have been accumulated. However, challenges such as data silos and fragmented knowledge hinder the effective utilization of this information. This study proposes a novel framework for intelligent question-and-answer (Q&A) systems based on Retrieval-Augmented Generation (RAG) to address these issues. The system efficiently acquires domain-specific knowledge by leveraging external databases, including relational databases (RDBs) and graph databases, without additional fine-tuning of Large Language Models (LLMs). Crucially, the framework integrates a Dynamic Knowledge Base Updating Mechanism (DKBUM) and a Weighted Context-Aware Similarity (WCAS) method to enhance retrieval accuracy and mitigate inherent limitations of LLMs, such as hallucinations and lack of specialization. The proposed DKBUM dynamically adjusts knowledge weights within the database, ensuring that the most recent and relevant information is used, while WCAS refines the alignment between queries and knowledge items through enhanced context understanding. Experimental validation demonstrates that the system can generate timely, accurate, and context-sensitive responses, making it a robust solution for managing complex business logic in specialized industries.
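A weighted, context-aware similarity of the kind WCAS describes can be illustrated with weighted cosine similarity over term vectors; the weighting scheme below is a generic stand-in, not the paper's formulation:

```python
import math

def weighted_cosine(q, d, weights):
    """Cosine similarity between term-frequency dicts q and d, with each
    term scaled by a context weight (e.g. recency or domain importance)."""
    terms = set(q) | set(d)
    dot = sum(weights.get(t, 1.0) ** 2 * q.get(t, 0) * d.get(t, 0)
              for t in terms)
    nq = math.sqrt(sum((weights.get(t, 1.0) * v) ** 2 for t, v in q.items()))
    nd = math.sqrt(sum((weights.get(t, 1.0) * v) ** 2 for t, v in d.items()))
    return dot / (nq * nd) if nq and nd else 0.0

query = {"turbine": 1, "vibration": 1}
doc_a = {"turbine": 2, "vibration": 1, "report": 1}
doc_b = {"boiler": 2, "schedule": 1, "report": 1}
w = {"turbine": 3.0, "vibration": 2.0}  # domain terms weighted up
print(weighted_cosine(query, doc_a, w) > weighted_cosine(query, doc_b, w))
# True: the document sharing the weighted domain terms ranks higher
```

In a production RAG system the vectors would be dense embeddings and the weights would come from DKBUM-style freshness and relevance scores, but the ranking mechanics are the same.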
Funding: National Natural Science Foundation of China (No. 61374044); Shanghai Science and Technology Commission, China (Nos. 15510722100, 16111106300)
Funding: National Natural Science Foundation of China (Grant No. 60832003); Key Laboratory of Advanced Display and System Application (Shanghai University), Ministry of Education, China (Grant No. P200902); Key Project of the Science and Technology Commission of Shanghai Municipality (Grant No. 10510500500)
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 11421505 and 11220101005, the National Basic Research Program of China under Grant No. 2014CB845401, and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDB16.
Abstract: A photonuclear reaction transport model based on the isospin-dependent quantum molecular dynamics (IQMD) model in the intermediate energy region, named GiQMD in this study, is presented. The methodology for simulating the course of a photonuclear reaction within the IQMD framework is described and used to study the photoabsorption cross section and π meson production; the simulation results are compared with available experimental data as well as with the Giessen Boltzmann-Uehling-Uhlenbeck model.
Funding: Projects supported by the National Natural Science Foundation of China.
Abstract: An elastic-viscoplastic constitutive equation describing the deformation law of metallic materials is suggested, based on the no-yield-surface concept and the thermal activation theory of dislocations. The equation, which accounts for the effects of strain rate, strain history, strain-rate history, hardening, and temperature, has a strong physical basis. Comparison of the theoretical predictions with experimental results on the mechanical behaviour of Ti under uniaxial stress at room temperature shows good consistency.
Funding: Supported by the Science and Technology Project of State Grid Corporation of China: "Research on the Power-Grid-Services-Oriented 'IP + Optics' Coordination Choreography Technology".
Abstract: As the speed of optical access networks soars and multiple services keep increasing, the service-supporting ability of optical access networks suffers greatly from the shortage of service awareness. To solve this problem, a hierarchical-Bayesian-model-based service awareness mechanism is proposed for high-speed optical access networks. The approach builds a hierarchical Bayesian model that follows the structure of a typical optical access network. The proposed scheme performs simple service-awareness operations in each optical network unit (ONU) and complex service awareness, from the whole-system view, in the optical line terminal (OLT). Simulation results show that the proposed scheme achieves better quality of service (QoS) in terms of packet loss rate and time delay.
Abstract: Model-based control schemes use the inverse dynamics of the robot arm to produce the main torque component necessary for trajectory tracking. A model-based controller requires the model parameters to be known accurately, which is very difficult, especially if the manipulator is flexible. A reduced-model-based controller has therefore been developed, which requires only the space robot base velocity and the link parameters. The flexible link is modelled as an Euler-Bernoulli beam. To simplify the analysis, the Jacobian of the rigid manipulator is used. Bond graph modelling is used to model the dynamics of the system and to devise the control strategy. The scheme has been verified by simulation for a two-link flexible space manipulator.
Abstract: This study first improves the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model for the case where financial product sales data contain singular information: an improved outlier detection method locates the outliers, which are then processed by an iterative method. Second, to capture the peaks, fat tails, and leverage effect of the financial time series, the skewed-t Asymmetric Power Autoregressive Conditional Heteroskedasticity (APARCH) model, built on an Autoregressive Integrated Moving Average (ARIMA) model, is used to analyse the sales data. Empirical analysis shows that the model incorporating the skewed distribution is effective.
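The iterative outlier treatment described above can be sketched as follows; the robust z-score rule, threshold `k`, and median replacement are assumptions for illustration, since the abstract does not spell out the detection statistic:

```python
import numpy as np

def iterative_outlier_clean(x, k=3.5, max_iter=10):
    """Hypothetical sketch of iterative outlier treatment applied before
    fitting a GARCH-type model: flag points whose robust z-score (based on
    the median and MAD) exceeds k, replace them with the series median,
    and repeat until no outliers remain.
    """
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        med = np.median(x)
        mad = np.median(np.abs(x - med)) or 1e-12  # guard against zero MAD
        z = 0.6745 * (x - med) / mad               # robust z-score
        mask = np.abs(z) > k
        if not mask.any():
            break                                  # series is clean
        x[mask] = med                              # replace detected outliers
    return x
```

Re-running the detection after each replacement matters because a large spike inflates the scale estimate and can mask smaller outliers on the first pass.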
Funding: Supported by the Key Chongqing Municipal Humanities and Social Sciences Research Base Project at Southwest University (SWU0810026), the Fundamental Research Funds for the Central Universities (SWU1209457 and SWU0909510), the National Soft Science Fund (2007GXS3D094), the Youth Fund Project of Southwest Normal University (SWU07106), the Chongqing Soft Science Project (CSTC, 2009CE9016), the 2012 Shizhu Base Science and Technology Innovation Special Fund Projects (sz201208), and the National Social Science Fund Project (12ASH004, 12AGL008).
Abstract: We use a DEA-based secondary relative benefit model to evaluate the performance of agricultural financial expenditure in Guizhou Province; the model gives due consideration both to the production effectiveness determined by objective natural conditions and to the management effectiveness of each region (as a decision-making body) in the use of financial funds for supporting agriculture. In general, there is a north-south gradient in the performance of financial support for agriculture across the regions of Guizhou Province. The drought in 2010 had a significant impact on technical efficiency in the whole province; the performance scores of Liupanshui City and Southwest Guizhou are very low, and the technical efficiency and management efficiency of most regions need to be improved. To improve the performance of financial support for agriculture, the scale of input needs to be ensured; appropriate preferential financial policies should be provided for agricultural infrastructure, especially rural water conservancy construction and the development and promotion of agricultural science and technology; and special check-and-acceptance of supported projects should be adopted to strengthen the management of funds for agriculture.
Abstract: This paper describes a new intelligent model-based predictive control scheme for driving a complex system. The scheme thoroughly addresses the main limitation of linear model-based predictive control theory in dealing with severely nonlinear, time-variant systems; the theory can be improved into an approach suitable for such systems, provided they are treated in line with the outcomes presented. The control scheme is organized around a multi-fuzzy-based predictive control approach together with a multi-fuzzy-based predictive model approach, while an intelligent decision mechanism system (IDMS) identifies the best fuzzy-based predictive model and the corresponding fuzzy-based predictive controller at each instant of time. To demonstrate the validity of the proposed scheme, the single linear model-based generalized predictive control scheme is used as a benchmark; the proposed scheme easily outperforms it in tracking performance.
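The decision mechanism's core job, picking the best predictive model at each instant, can be sketched as a recent-error comparison; this minimal illustration is an assumption, since the abstract does not define the IDMS selection rule:

```python
import numpy as np

def select_model(models, history_u, history_y, window=5):
    """Hypothetical decision-mechanism sketch: pick the predictive model whose
    one-step predictions best matched the last `window` measured outputs.

    `models` is a list of callables y_hat = m(u) mapping input to predicted output.
    Returns the index of the model with the smallest recent mean squared error.
    """
    errors = []
    for m in models:
        preds = np.array([m(u) for u in history_u[-window:]])
        errors.append(np.mean((preds - np.array(history_y[-window:])) ** 2))
    return int(np.argmin(errors))
```

The selected index would then determine both the predictive model and its paired controller for the current instant, mirroring the model/controller pairing described in the abstract.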
Abstract: Offline policy evaluation, i.e. evaluating and selecting complex decision-making policies using only offline datasets, is important in reinforcement learning. Model-based offline policy evaluation (MBOPE) is widely welcomed because it is easy to implement and performs well. MBOPE directly approximates the unknown value of a given policy with the Monte Carlo method, given estimated transition and reward functions of the environment. Usually, multiple models are trained and one of them is selected for use; a remaining challenge is how to select an appropriate model from those trained. The authors first analyse the upper bound of the difference between the approximated value and the unknown true value. The theoretical results show that this difference is related to the trajectories generated by the given policy on the learnt model and to the prediction error of the transition and reward functions at the generated data points. Based on these results, a new criterion is proposed for deciding which trained model is better suited to evaluating the given policy. Finally, the effectiveness of the proposed criterion is demonstrated on both benchmark and synthetic offline datasets.
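The Monte Carlo approximation at the heart of MBOPE can be sketched as below; `model`, `reward_fn`, and `s0_sampler` are hypothetical stand-ins for the learnt transition function, learnt reward function, and initial-state distribution:

```python
import numpy as np

def mc_policy_value(policy, model, reward_fn, s0_sampler, gamma=0.99,
                    n_rollouts=100, horizon=50):
    """Sketch of model-based policy value estimation via Monte Carlo rollouts:
    roll the policy out on the learnt model and average the discounted returns.

    policy(s) -> action; model(s, a) -> next state; reward_fn(s, a) -> reward.
    """
    returns = []
    for _ in range(n_rollouts):
        s, g, discount = s0_sampler(), 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)
            g += discount * reward_fn(s, a)
            discount *= gamma
            s = model(s, a)
        returns.append(g)
    return float(np.mean(returns))   # Monte Carlo estimate of V(pi)
```

The theoretical result quoted in the abstract is visible here: the estimate depends entirely on the states this loop visits on the learnt model and on how accurate `model` and `reward_fn` are at those visited points.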
Abstract: The challenge of transitioning from temporary humanitarian settlements to more sustainable human settlements stems from a significant increase in the number of forcibly displaced people over recent decades, difficulties in providing social services that meet the required standards, and the prolongation of emergencies. Despite this challenging context, short-term considerations continue to guide settlement planning and management rather than more integrated, longer-term perspectives, preventing viable, sustainable development. Over the years, the design of humanitarian settlements has not been adapted to local contexts and perspectives, nor to the dynamics of urbanization, population growth, and data. In addition, the current approach to temporary settlement harms the environment and can strain limited resources. Inefficient land use and ad hoc development models have compounded difficulties and generated new challenges. As a result, living conditions in settlements have deteriorated over recent decades and continue to pose new challenges; major shortcomings have emerged along the way, leading to disruption and budget overruns in a context marked by a steady decline in funding. Attempts have been made to shift towards more sustainable approaches, but these have mainly focused on vague, sector-oriented themes and have failed to take a systematic, integrated view. This study contributes to addressing these shortcomings by designing a model-driven solution that emphasizes an integrated system conceptualized as a system of systems. The paper proposes a new methodology for designing an integrated and sustainable human settlement model, based on Model-Based Systems Engineering and a Systems Modeling Language, to provide valuable insights toward sustainable solutions for displaced populations, in line with the United Nations 2030 Agenda for Sustainable Development.
Abstract: This paper presents a reference methodology for process orchestration that accelerates the development of Large Language Model (LLM) applications by integrating knowledge bases, API access, and deep web retrieval. By incorporating structured knowledge, the methodology enhances LLMs' reasoning abilities, enabling more accurate and efficient handling of complex tasks. Integration with open APIs allows LLMs to access external services and real-time data, expanding their functionality and application range. Through real-world case studies, we demonstrate that this approach significantly improves the efficiency and adaptability of LLM-based applications, especially for time-sensitive tasks. The methodology provides practical guidelines for developers to rapidly create robust, adaptable LLM applications capable of navigating dynamic information environments and performing effectively across diverse tasks.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42172318 and 42377186) and the National Key R&D Program of China (Grant No. 2023YFC3007201).
Abstract: The rise in construction activities within mountainous regions has significantly increased the frequency of rockfalls. Statistical models for rockfall hazard assessment often struggle to achieve high precision at large scales, primarily because historical rockfall data are scarce and conventional assessment indicators fail to capture the physical and structural characteristics of rockfalls. This study proposes a physically based deterministic model designed to quantify rockfall hazard accurately at a large scale. The model accounts for multiple rockfall failure modes and incorporates the key physical and structural parameters of the rock mass. Rockfall hazard is defined as the product of three factors: the rockfall failure probability, the probability of reaching a specific position, and the corresponding impact intensity. The failure probability combines the probabilities of formation and instability of rock blocks under different failure modes, modelled from the combination patterns of slope surfaces and rock discontinuities. The Monte Carlo method is employed to account for the randomness of the mechanical and geometric parameters when quantifying instability probabilities. Rock trajectories and impact energies simulated with the Flow-R software are then combined with the failure probability to produce regional rockfall hazard zoning. A case study considering four rockfall failure modes was conducted in Tiefeng, Chongqing, China. The hazard zoning identified the steep, elevated terrain of the northern and southern anaclinal slopes as the areas of highest rockfall hazard. These findings align with observed conditions, providing detailed hazard zoning and validating the effectiveness and potential of the proposed model.
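The three-factor hazard definition and the Monte Carlo treatment of parameter randomness can be sketched as follows; the planar-sliding factor of safety and the parameter distributions are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def instability_probability(n=100_000, seed=0):
    """Monte Carlo sketch of block instability probability for one failure
    mode (frictional planar sliding), with assumed parameter distributions.

    Factor of safety: FS = tan(phi) / tan(theta); the block is counted as
    unstable when FS < 1.
    """
    rng = np.random.default_rng(seed)
    phi = np.radians(rng.normal(30.0, 3.0, n))    # friction angle (deg -> rad)
    theta = np.radians(rng.normal(32.0, 2.0, n))  # discontinuity dip
    fs = np.tan(phi) / np.tan(theta)
    return float(np.mean(fs < 1.0))               # fraction of unstable samples

def rockfall_hazard(p_failure, p_reach, intensity):
    """Hazard as the product of the three factors defined in the model."""
    return p_failure * p_reach * intensity
```

In the full model, `p_reach` and `intensity` would come from the trajectory and impact-energy simulation (Flow-R in the paper), while `p_failure` would combine formation and instability probabilities over all four failure modes.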
Abstract: The increasing global demand for sustainable agricultural practices and effective waste management has highlighted the potential of biochar as a multifaceted solution. This study evaluates the economic viability of sugarcane-bagasse-based biochar in Brazil, focusing on its potential to enhance agricultural productivity and contribute to environmental sustainability. Existing literature predominantly explores biochar production, crop-yield benefits, and carbon sequestration, leaving a notable gap in comprehensive economic modelling and viability analysis for the region. This paper fills that gap with a scenario-based economic modelling approach. The findings show that biochar implementation can be economically viable for medium and large sugarcane farms (20,000 - 50,000 hectares), given the availability of funding, breaking even in about 7.5 years with an internal rate of return of 18% on average. For small farms, biochar is only viable when applied to the soil, which in all scenarios is the more profitable practice by a large margin. Sensitivity analyses indicate that biochar generally becomes economically feasible at biochar carbon credit prices above $120 USD/tCO2e and at sugarcane bagasse availability above 60%. While the economic models are well grounded in the existing literature, biochar production at the studied scales is not yet widespread, especially in Brazil, so uncertainties remain. The land-application scenario was found to be the most viable, and large farms saw the best results, highlighting the importance of scale in biochar operations; small and medium farms without land application showed no or questionable viability. Overall, sugarcane-bagasse-based biochar can, under the right circumstances, be economically viable for agricultural and environmental advancement in Brazil.
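The viability metrics quoted above (breakeven period and internal rate of return) follow from standard discounted-cash-flow arithmetic, which can be sketched as below; the cash-flow figures in the usage example are made up for illustration and are not from the study:

```python
def npv(rate, cash_flows):
    """Net present value of annual cash flows (year-0 flow first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return via bisection; assumes a single sign change
    in the cash flows so that NPV is decreasing in the rate."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

def payback_years(cash_flows):
    """First year in which the cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None   # never breaks even within the horizon
```

For example, a hypothetical project with a 100-unit outlay followed by ten annual inflows of 20 units pays back in year 5 with an IRR of about 15%, which is how claims like "breaking even in about 7.5 years with an IRR of 18%" are computed from a scenario's cash flows.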
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 82373938, 82104275, and 82204585), the Key Technologies R&D Program of Guangdong Province, China (Grant No. 2023B1111030004), and the National Key R&D Program of China (Grant No. 2022YFF1202600).
Abstract: Proteolysis-targeting chimeras (PROTACs) represent a promising class of drugs that can target disease-causing proteins more effectively than traditional small-molecule inhibitors, potentially revolutionizing drug discovery and treatment strategies. However, the links between in vitro and in vivo data are poorly understood, hindering a comprehensive understanding of the absorption, distribution, metabolism, and excretion (ADME) of PROTACs. In this work, 14C-labelled vepdegestrant (ARV-471), currently in phase III clinical trials for breast cancer, was synthesized as a model PROTAC to characterize its preclinical ADME properties and to simulate its clinical pharmacokinetics (PK) by establishing a physiologically based pharmacokinetic (PBPK) model. For in vitro-in vivo extrapolation (IVIVE), hepatocyte clearance correlated more closely with in vivo rat PK data than liver microsomal clearance did. The PBPK models, initially developed and validated in rats, accurately simulate ARV-471's PK across fed and fasted states, with parameters within 1.75-fold of the observed values. Human models, informed by in vitro ADME data, closely mirrored plasma profiles after a 30 mg oral dose. Furthermore, no human-specific metabolites were identified in vitro, and the metabolic profile of rats overlapped that of humans. This work presents a roadmap for developing future PROTAC medications by elucidating the correlation between in vitro and in vivo characteristics.
Funding: Supported by the Aeronautical Science Foundation of China (2006ZA51004).
Abstract: With the aid of a multi-agent-based modelling approach to complex systems, hierarchical simulation models of carrier-based aircraft catapult launch are developed. The ocean, carrier, aircraft, and atmosphere are treated as aggregation agents, while detailed components such as the catapult, landing gears, and disturbances are considered meta-agents belonging to their aggregation agents. A two-layer model is thus formed: the aggregation-agent layer and the meta-agent layer. The information flow among the agents is described: meta-agents within one aggregation agent communicate with each other directly through information sharing, whereas meta-agents belonging to different aggregation agents exchange information through the aggregation layer first and then perceive it from the shared environment, i.e. the aggregation agent. In this way, not only is the hierarchical model built, but the environment perceived by each agent is also specified, and the problem of balancing agent independence against the resource consumption of real-time communication within a multi-agent system (MAS) is resolved. Each agent involved in the catapult launch is modelled, considering the interactions within the disturbed atmospheric environment and among the multiple moving bodies, including the carrier, aircraft, and landing gears. The reactive-agent models are derived using tensors, and the perceived messages and inner framework of each agent are characterized. Finally, results of a simulation instance are given. Modelling and simulating a dynamic system with a multi-agent system helps express physical concepts and logical hierarchy clearly and precisely, and the system model can easily incorporate other kinds of agents to achieve a precise simulation of a more complex system. This modelling technique decomposes the complex integral dynamic equations of multiple bodies into parallel operations of single agents, making the program code convenient to expand, maintain, and reuse.
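The two-layer communication pattern, direct sharing inside an aggregation agent but cross-aggregation perception routed through the aggregation layer, can be sketched with a simple blackboard scheme; the class and method names are assumptions for illustration:

```python
class AggregationAgent:
    """Aggregation-layer agent (e.g. carrier, aircraft) holding a shared
    board that exposes its meta-agents' state to other aggregations."""
    def __init__(self, name):
        self.name = name
        self.board = {}            # the shared environment other agents perceive
        self.members = []

    def add(self, meta):
        meta.parent = self
        self.members.append(meta)

    def publish(self, key, value):
        self.board[key] = value    # state pushed up from a meta-agent


class MetaAgent:
    """Component-level agent (e.g. catapult, landing gear) inside an aggregation."""
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.state = {}

    def share(self, key, value):
        self.state[key] = value
        self.parent.publish((self.name, key), value)   # expose via aggregation layer

    def perceive(self, other_aggregation, meta_name, key):
        # cross-aggregation perception reads the other aggregation's board,
        # never the foreign meta-agent directly
        return other_aggregation.board.get((meta_name, key))
```

Routing cross-aggregation reads through the boards is what keeps the meta-agents independent of each other while bounding the communication each one must handle, the trade-off the abstract highlights.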
Abstract: Power generation companies have accumulated vast amounts of specialized data and expert knowledge, but challenges such as data silos and fragmented knowledge hinder the effective use of this information. This study proposes a novel framework for intelligent question-and-answer (Q&A) systems based on Retrieval-Augmented Generation (RAG) to address these issues. The system efficiently acquires domain-specific knowledge by leveraging external databases, including relational databases (RDBs) and graph databases, without additional fine-tuning of Large Language Models (LLMs). Crucially, the framework integrates a Dynamic Knowledge Base Updating Mechanism (DKBUM) and a Weighted Context-Aware Similarity (WCAS) method to enhance retrieval accuracy and mitigate inherent limitations of LLMs, such as hallucination and lack of specialization. DKBUM dynamically adjusts knowledge weights within the database so that the most recent and relevant information is used, while WCAS refines the alignment between queries and knowledge items through enhanced context understanding. Experimental validation demonstrates that the system generates timely, accurate, and context-sensitive responses, making it a robust solution for managing complex business logic in specialized industries.
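A weighted similarity of the kind WCAS describes can be sketched as a per-dimension-weighted cosine score over embedding vectors; this is a hypothetical illustration, since the abstract does not give the actual WCAS formulation, and all names here are assumptions:

```python
import numpy as np

def weighted_similarity(query_vec, item_vec, weights):
    """Hypothetical WCAS-style score: cosine similarity computed under a
    per-dimension context weighting (larger weight = dimension matters more)."""
    w = np.sqrt(np.asarray(weights, dtype=float))
    q = np.asarray(query_vec, dtype=float) * w
    v = np.asarray(item_vec, dtype=float) * w
    denom = np.linalg.norm(q) * np.linalg.norm(v)
    return float(q @ v / denom) if denom else 0.0

def retrieve(query_vec, knowledge, weights, top_k=3):
    """Rank knowledge items (id -> embedding vector) by weighted similarity
    and return the top_k item ids, mimicking the retrieval step of a RAG loop."""
    scored = sorted(knowledge.items(),
                    key=lambda kv: weighted_similarity(query_vec, kv[1], weights),
                    reverse=True)
    return [item_id for item_id, _ in scored[:top_k]]
```

In a full RAG pipeline the returned item ids would be resolved to text passages and prepended to the LLM prompt; a DKBUM-style mechanism would periodically adjust `weights` (or per-item weights) as the knowledge base is updated.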