The excavation of deep tunnels crossing faults is highly prone to triggering rockburst disasters,which has become a significant engineering issue.In this study,taking the fault-slip rockbursts from a deep tunnel in so...The excavation of deep tunnels crossing faults is highly prone to triggering rockburst disasters,which has become a significant engineering issue.In this study,taking the fault-slip rockbursts from a deep tunnel in southwestern China as the engineering prototype,large-scale three-dimensional(3D)physical model tests were conducted on a 3D-printed complex geological model containing two faults.Based on the selfdeveloped 3D loading system and excavation device,the macroscopic failure of fault-slip rockbursts was simulated indoors.The stress,strain,and fracturing characteristics of the surrounding rock near the two faults were systematically evaluated during excavation and multistage loading.The test results effectively revealed the evolution and triggering mechanism of fault-slip rockbursts.After the excavation of a highstress tunnel,stress readjustment occurred.Owing to the presence of these two faults,stress continued to accumulate in the rock mass between them,leading to the accumulation of fractures.When the shear stress on a fault surface exceeded its shear strength,sudden fault slip and dislocation occurred,thus triggering rockbursts.Rockbursts occurred twice in the vault between the two faults,showing obvious intermittent characteristics.The rockburst pit was controlled by two faults.When the faults remained stable,tensile failure predominated in the surrounding rock.However,when the fault slip was triggered,shear failure in the surrounding rock increased.These findings provide valuable insights for enhancing the comprehension of fault-slip rockbursts.展开更多
Bedding slope is a typical heterogeneous slope consisting of different soil/rock layers and is likely to slide along the weakest interface.Conventional slope protection methods for bedding slopes,such as retaining wal...Bedding slope is a typical heterogeneous slope consisting of different soil/rock layers and is likely to slide along the weakest interface.Conventional slope protection methods for bedding slopes,such as retaining walls,stabilizing piles,and anchors,are time-consuming and labor-and energy-intensive.This study proposes an innovative polymer grout method to improve the bearing capacity and reduce the displacement of bedding slopes.A series of large-scale model tests were carried out to verify the effectiveness of polymer grout in protecting bedding slopes.Specifically,load-displacement relationships and failure patterns were analyzed for different testing slopes with various dosages of polymer.Results show the great potential of polymer grout in improving bearing capacity,reducing settlement,and protecting slopes from being crushed under shearing.The polymer-treated slopes remained structurally intact,while the untreated slope exhibited considerable damage when subjected to loads surpassing the bearing capacity.It is also found that polymer-cemented soils concentrate around the injection pipe,forming a fan-shaped sheet-like structure.This study proves the improvement of polymer grouting for bedding slope treatment and will contribute to the development of a fast method to protect bedding slopes from landslides.展开更多
Eye diagnosis is a method for inspecting systemic diseases and syndromes by observing the eyes.With the development of intelligent diagnosis in traditional Chinese medicine(TCM);artificial intelligence(AI)can improve ...Eye diagnosis is a method for inspecting systemic diseases and syndromes by observing the eyes.With the development of intelligent diagnosis in traditional Chinese medicine(TCM);artificial intelligence(AI)can improve the accuracy and efficiency of eye diagnosis.However;the research on intelligent eye diagnosis still faces many challenges;including the lack of standardized and precisely labeled data;multi-modal information analysis;and artificial in-telligence models for syndrome differentiation.The widespread application of AI models in medicine provides new insights and opportunities for the research of eye diagnosis intelli-gence.This study elaborates on the three key technologies of AI models in the intelligent ap-plication of TCM eye diagnosis;and explores the implications for the research of eye diagno-sis intelligence.First;a database concerning eye diagnosis was established based on self-su-pervised learning so as to solve the issues related to the lack of standardized and precisely la-beled data.Next;the cross-modal understanding and generation of deep neural network models to address the problem of lacking multi-modal information analysis.Last;the build-ing of data-driven models for eye diagnosis to tackle the issue of the absence of syndrome dif-ferentiation models.In summary;research on intelligent eye diagnosis has great potential to be applied the surge of AI model applications.展开更多
A Long Short-Term Memory(LSTM) Recurrent Neural Network(RNN) has driven tremendous improvements on an acoustic model based on Gaussian Mixture Model(GMM). However, these models based on a hybrid method require a force...A Long Short-Term Memory(LSTM) Recurrent Neural Network(RNN) has driven tremendous improvements on an acoustic model based on Gaussian Mixture Model(GMM). However, these models based on a hybrid method require a forced aligned Hidden Markov Model(HMM) state sequence obtained from the GMM-based acoustic model. Therefore, it requires a long computation time for training both the GMM-based acoustic model and a deep learning-based acoustic model. In order to solve this problem, an acoustic model using CTC algorithm is proposed. CTC algorithm does not require the GMM-based acoustic model because it does not use the forced aligned HMM state sequence. However, previous works on a LSTM RNN-based acoustic model using CTC used a small-scale training corpus. In this paper, the LSTM RNN-based acoustic model using CTC is trained on a large-scale training corpus and its performance is evaluated. The implemented acoustic model has a performance of 6.18% and 15.01% in terms of Word Error Rate(WER) for clean speech and noisy speech, respectively. This is similar to a performance of the acoustic model based on the hybrid method.展开更多
The streamflow over the Yellow River basin is simulated using the PRECIS (Providing REgional Climates for Impacts Studies) regional climate model driven by 15-year (1979-1993) ECMWF reanalysis data as the initial ...The streamflow over the Yellow River basin is simulated using the PRECIS (Providing REgional Climates for Impacts Studies) regional climate model driven by 15-year (1979-1993) ECMWF reanalysis data as the initial and lateral boundary conditions and an off-line large-scale routing model (LRM). The LRM uses physical catchment and river channel information and allows streamflow to be predicted for large continental rivers with a 1°×1° spatial resolution. The results show that the PRECIS model can reproduce the general southeast to northwest gradient distribution of the precipitation over the Yellow River basin, The PRECIS- LRM model combination has the capability to simulate the seasonal and annual streamflow over the Yellow River basin. The simulated streamflow is generally coincident with the naturalized streamflow both in timing and in magnitude.展开更多
Considering the large diameter effect of piles,the influence of different pile-soil analysis methods on the design of monopile foundations for offshore wind turbines has become an urgent problem to be solved.Three dif...Considering the large diameter effect of piles,the influence of different pile-soil analysis methods on the design of monopile foundations for offshore wind turbines has become an urgent problem to be solved.Three different pile-soil models were used to study a large 10 MW monopile wind turbine.By modeling the three models in the SACS software,this paper analyzed the motion response of the overall structure under the conditions of wind and waves.According to the given working conditions,this paper concludes that under the condition of independent wind,the average value of the tower top x-displacement of the rigid connection method is the smalle st,and the standard deviation is the smallest under the condition of independent wave.The results obtained by the p-y curve method are the most conservative.展开更多
This paper investigates the wireless communication with a novel architecture of antenna arrays,termed modular extremely large-scale array(XLarray),where array elements of an extremely large number/size are regularly m...This paper investigates the wireless communication with a novel architecture of antenna arrays,termed modular extremely large-scale array(XLarray),where array elements of an extremely large number/size are regularly mounted on a shared platform with both horizontally and vertically interlaced modules.Each module consists of a moderate/flexible number of array elements with the inter-element distance typically in the order of the signal wavelength,while different modules are separated by the relatively large inter-module distance for convenience of practical deployment.By accurately modelling the signal amplitudes and phases,as well as projected apertures across all modular elements,we analyse the near-field signal-to-noise ratio(SNR)performance for modular XL-array communications.Based on the non-uniform spherical wave(NUSW)modelling,the closed-form SNR expression is derived in terms of key system parameters,such as the overall modular array size,distances of adjacent modules along all dimensions,and the user's three-dimensional(3D)location.In addition,with the number of modules in different dimensions increasing infinitely,the asymptotic SNR scaling laws are revealed.Furthermore,we show that our proposed near-field modelling and performance analysis include the results for existing array architectures/modelling as special cases,e.g.,the collocated XL-array architecture,the uniform plane wave(UPW)based far-field modelling,and the modular extremely large-scale uniform linear array(XL-ULA)of onedimension.Extensive simulation results are presented to validate our findings.展开更多
The widespread utilisation of tunnel boring machines(TBMs)in underground construction engineering requires a detailed investigation of the cutter-rock interaction.In this paper,we conduct a series of largescale standi...The widespread utilisation of tunnel boring machines(TBMs)in underground construction engineering requires a detailed investigation of the cutter-rock interaction.In this paper,we conduct a series of largescale standing rotary cutting tests on granite in conjunction with high-fidelity numerical simulations based on a particle-type discrete element method(DEM)to explore the effects of key cutting parameters on the TBM cutter performance and the distribution of cutter-rock contact stresses.The assessment results of cutter performance obtained from the cutting tests and numerical simulations reveal similar dependencies on the key cutting parameters.More specifically,the normal and rolling forces exhibit a positive correlation with penetration but are slightly influenced by the cutting radius.In contrast,the side force decreases as the cutting radius increases.Additionally,the side force shows a positive relationship with the penetration for smaller cutting radii but tends to become negative as the cutting radius increases.The cutter's relative effectiveness in rock breaking is significantly impacted by the penetration but shows little dependency on the cutting radius.Consequently,an optimal penetration is identified,leading to a low boreability index and specific energy.A combined Hertz-Weibull function is developed to fit the cutter-rock contact stress distribution obtained in DEM simulations,whereby an improved CSM(Colorado School of Mines)model is proposed by replacing the original monotonic cutting force distribution with this combined Hertz-Weibull model.The proposed model outperforms the original CSM model as demonstrated by a comparison of the estimated cutting forces with those from the tests/simulations.The findings from this work that advance our understanding of TBM cutter performance have important implications for improving the efficiency and reliability of TBM tunnelling in granite.展开更多
Large-scale Language Models(LLMs)have achieved significant breakthroughs in Natural Language Processing(NLP),driven by the pre-training and fine-tuning paradigm.While this approach allows models to specialize in speci...Large-scale Language Models(LLMs)have achieved significant breakthroughs in Natural Language Processing(NLP),driven by the pre-training and fine-tuning paradigm.While this approach allows models to specialize in specific tasks with reduced training costs,the substantial memory requirements during fine-tuning present a barrier to broader deployment.Parameter-Efficient Fine-Tuning(PEFT)techniques,such as Low-Rank Adaptation(LoRA),and parameter quantization methods have emerged as solutions to address these challenges by optimizing memory usage and computational efficiency.Among these,QLoRA,which combines PEFT and quantization,has demonstrated notable success in reducing memory footprints during fine-tuning,prompting the development of various QLoRA variants.Despite these advancements,the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored.This study presents a comprehensive analysis of these key variables,focusing on their influence across different layer types and depths within LLM architectures.Our investigation uncovers several critical findings:(1)Larger layers,such as MLP layers,can maintain performance despite reductions in adapter rank,while smaller layers,like self-attention layers,aremore sensitive to such changes;(2)The effectiveness of balancing factors depends more on specific values rather than layer type or depth;(3)In quantization-aware fine-tuning,larger layers can effectively utilize smaller adapters,whereas smaller layers struggle to do so.These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs.Moreover,for the same discount of trainable parameters,reducing the trainable parameters in a larger layer is more effective in preserving fine-tuning accuracy than in a smaller one.This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.展开更多
This paper discusses the digital application and benefit analysis of building information model(BIM)technology in the large-scale comprehensive development project of the Guangxi headquarters base.The project covers a...This paper discusses the digital application and benefit analysis of building information model(BIM)technology in the large-scale comprehensive development project of the Guangxi headquarters base.The project covers a total area of 92,100 square meters,with a total construction area of 379,700 square meters,including a variety of architectural forms.Through three-dimensional modeling and simulation analysis,BIM technology significantly enhances the design quality and efficiency,shortens the design cycle by about 20%,and promotes the collaboration and integration of project management,improving the management efficiency by about 25%.During the construction phase,the collision detection and four-dimensional visual management functions of BIM technology have improved construction efficiency by about 15%and saved the cost by about 10%.In addition,BIM technology has promoted green building and sustainable development,achieved the dual improvement of technical and economic indicators and social and economic benefits,set an example for enterprises in digital transformation,and opened up new market businesses.展开更多
Model Order Reduction (MOR) plays more and more imp or tant role in complex system simulation, design and control recently. For example , for the large-size space structures, VLSI and MEMS (Micro-ElectroMechanical Sys...Model Order Reduction (MOR) plays more and more imp or tant role in complex system simulation, design and control recently. For example , for the large-size space structures, VLSI and MEMS (Micro-ElectroMechanical Systems) etc., in order to shorten the development cost, increase the system co ntrolling accuracy and reduce the complexity of controllers, the reduced order model must be constructed. Even in Virtual Reality (VR), the simulation and d isplay must be in real-time, the model order must be reduced too. The recent advances of MOR research are overviewed in the article. The MOR theor y and methods may be classified as Singular Value decomposition (SVD) based, the Krylov subspace based and others. The merits and demerits of the different meth ods are analyzed, and the existed problems are pointed out. Moreover, the applic ation’s fields are overviewed, and the potential applications are forecaste d. After the existed problems analyzed, the future work is described. There are som e problems in the traditional methods such as SVD and Krylov subspace, they are that it’s difficult to (1)guarantee the stability of the original system, (2) b e adaptive to nonlinear system, and (3) control the modeling accuracy. The f uture works may be solving the above problems on the foundation of the tradition al methods, and applying other methods such as wavelet or signal compression.展开更多
Since the beginning of the 21st century,advances in big data and artificial intelligence have driven a paradigm shift in the geosciences,moving the field from qualitative descriptions toward quantitative analysis,from...Since the beginning of the 21st century,advances in big data and artificial intelligence have driven a paradigm shift in the geosciences,moving the field from qualitative descriptions toward quantitative analysis,from observing phenomena to uncovering underlying mechanisms,from regional-scale investigations to global perspectives,and from experience-based inference toward data-and model-enabled intelligent prediction.AlphaEarth Foundations(AEF)is a next-generation geospatial intelligence platform that addresses these changes by introducing a unified 64-dimensional shared embedding space,enabling-for the first time-standardized representation and seamless integration of 12 distinct types of Earth observation data,including optical,radar,and lidar.This framework significantly improves data assimilation efficiency and resolves the persistent problem of“data silos”in geoscience research.AEF is helping redefine research methodologies and fostering breakthroughs,particularly in quantitative Earth system science.This paper systematically examines how AEF’s innovative architecture-featuring multi-source data fusion,high-dimensional feature representation learning,and a scalable computational framework-facilitates intelligent,precise,and realtime data-driven geoscientific research.Using case studies from resource and environmental applications,we demonstrate AEF’s broad potential and identify emerging innovation needs.Our findings show that AEF not only enhances the efficiency of solving traditional geoscientific problems but also stimulates novel research directions and methodological approaches.展开更多
A distributed model predictive control(MPC) scheme with one-step delay communication is proposed for on-line optimization and control of large-scale systems in this paper. Cooperation between subsystems is achieved by...A distributed model predictive control(MPC) scheme with one-step delay communication is proposed for on-line optimization and control of large-scale systems in this paper. Cooperation between subsystems is achieved by exchanging information with neighbor-to-neighbor communication and by optimizing the local problem with the improved performance index in the neighborhood. A distributed MPC algorithm with one-step delay communication is developed for the situation that there is a one-step delay in the information available from its neighbors when a subsystem solves the local optimization problem. The nominal stability is employed for the whole system under the distributed MPC algorithm without the inequality constraints. Finally, the case study of the reactor-storage-separator(RSS) system is illustrated to test the practicality of the presented control algorithm.展开更多
Student cognitive modeling is a fundamental task in the intelligence education field.It serves as the basis for various downstream applications,such as student profiling,personalized educational content recommendation...Student cognitive modeling is a fundamental task in the intelligence education field.It serves as the basis for various downstream applications,such as student profiling,personalized educational content recommendation,and adaptive testing.Cognitive Diagnosis(CD)and Knowledge Tracing(KT)are two mainstream categories for student cognitive modeling,which measure the cognitive ability from a limited time(e.g.,an exam)and the learning ability dynamics over a long period(e.g.,learning records from a year),respectively.Recent efforts have been dedicated to the development of open-source code libraries for student cognitive modeling.However,existing libraries often focus on a particular category and overlook the relationships between them.Additionally,these libraries lack sufficient modularization,which hinders reusability.To address these limitations,we have developed a unified PyTorch-based library EduStudio,which unifies CD and KT for student cognitive modeling.The design philosophy of EduStudio is from two folds.From a horizontal perspective,EduStudio employs the modularization that separates the main step pipeline of each algorithm.From a vertical perspective,we use templates with the inheritance style to implement each module.We also provide eco-services of EduStudio,such as the repository that collects resources about student cognitive modeling and the leaderboard that demonstrates comparison among models.Our open-source project is available at the website of edustudio.ai.展开更多
The identification of rock mass discontinuities is critical for rock mass characterization.While high-resolution digital outcrop models(DOMs)are widely used,current digital methods struggle to generalize across divers...The identification of rock mass discontinuities is critical for rock mass characterization.While high-resolution digital outcrop models(DOMs)are widely used,current digital methods struggle to generalize across diverse geological settings.Large-scale models(LSMs),with vast parameter spaces and extensive training datasets,excel in solving complex visual problems.This study explores the potential of using one such LSM,Segment anything model(SAM),to identify facet-type discontinuities across several outcrops via interactive prompting.The findings demonstrate that SAM effectively segments two-dimensional(2D)discontinuities,with its generalization capability validated on a dataset of 2426 identified discontinuities across 170 outcrops.The model achieves 0.78 mean IoU and 0.86 average precision using 11-point prompts.To extend to three dimensions(3D),a framework integrating SAM with Structure-from-Motion(SfM)was proposed.By utilizing the inherent but often overlooked relationship between image pixels and point clouds in SfM,the identification process was simplified and generalized across photogrammetric devices.Benchmark studies showed that the framework achieved 0.91 average precision,identifying 87 discontinuities in Dataset-3D.The results confirm its high precision and efficiency,making it a valuable tool for data annotation.The proposed method offers a practical solution for geological investigations.展开更多
As important infrastructure for airborne communication platforms,unmanned aerial vehicles(UAVs)are expected to become a key part of 6G wireless networks.Thus,modeling low-and medium-altitude propagation channels has a...As important infrastructure for airborne communication platforms,unmanned aerial vehicles(UAVs)are expected to become a key part of 6G wireless networks.Thus,modeling low-and medium-altitude propagation channels has attracted much attention.Air-to-ground(A2G)propagation channel models vary in different scenarios,requiring accurate models for designing and evaluating UAV communication links.Unlike terrestrial models,A2G channel models lack detailed investigation.Therefore,this paper provides an overview of existing A2G channel measurement campaigns,different types of A2G channel models for various environments,and future research directions for UAV airland channel modeling.This study focuses on the potential of millimeter-wave technology for UAV A2G channel modeling and highlights nonsuburban scenarios requiring consideration in future modeling efforts.展开更多
[Objective] The behavior of eating, drinking, defecating and peeing of 1 500 pigs in a large-scale microbial fermentation bed-equipped piggery was observed. We hoped to find some simple indicators that could reflect t...[Objective] The behavior of eating, drinking, defecating and peeing of 1 500 pigs in a large-scale microbial fermentation bed-equipped piggery was observed. We hoped to find some simple indicators that could reflect the health status of swinery and to provide experience for the swinery performance management in large-scale microbial fermentation bed-equipped piggery. [Method] The body weight (BW), daily BW gain, feed intake and other indicators of different-day-old pigs were recorded in details. Based on the recorded data, the models between BW, BW gain, average daily feed intake and feed/gain ratio and growth days (d) were established. In addition, the incidences of pox-like macula (dermatitis), diarrhea (gastrointestinal disease), cough (respiratory disease), stiff pig (malnutrition), conjunctivitis (eye disease) and foot inflection (trauma) among fattening pigs were also investigated. [Result] The BW range, average BW, daily BW gain, breeding days, daily feed intake range, average daily feed intake, staged feed intake, accumulated feed intake, feed/gain ratio and accumulated feed/gain ratio of different-day-old pigs were studied, respectively. Four dynamic models were established for the growth of pigs: (1) the BW (y)-age (x) mod- el: y=0.758 9x-19.883 (3=0.993 7); (2) the BW gain (y)-age (x) model: y=1.039 5x05051 (F=0.885 4); (3) the average daily feed intake (y)-age (x) model: y=0.023 5x-0.334 3 (F=0.991 7); (4) the feed/gain ratio (y)-age (x) model: y=0.022x+0.427 8 (P=0.988 5). Based on these models, the corresponding theoretical growth value of pigs at different growth stage could be predicted. The main diseases occurred among the swinery in the large-scale microbial fermentation bed piggery included pox-like macula (dermatitis), diarrhea (gastrointestinal disease), cough (respiratory disease), stiff pig (mal- nutrition), conjunctivitis (eye disease) and foot inflection (trauma). The deadly infec- tious diseases had been not found among the pigs. [Conclusion] When the actual BW, BW gain, average daily feed intake and feed/gain ratio were all lower than the theoretical values predicted by the models, the management should be enhanced. The average daily feed intake of 60 to 65-day-old pigs was lower than the theoretic value, indicating that the pigs could not adapt nicely to the fermentation bed at the very early stage. When the pigs grew up to 70 to 75 d old, the average daily feed intake was higher than the theoretical value, indicating that the pigs had adapted to the fermentation bed. In particularly, average daily feed intake of 75-day-old pigs was higher than the theoretical value by 21%. It was suggested the fermentation bed was conducive to the growth of pigs. Considering the occurrence of diseases among pigs, the overall incidence was relatively low. The incidence of each disease was all lower than 10% with little difficulty in treating. If the management of mattress was strength- ened, such as paying attention to feeding and keeping water clean, many diseases could heal by themselves.展开更多
As a result of rapid development in electronics and communication technology,large-scale unmanned aerial vehicles(UAVs)are harnessed for various promising applications in a coordinated manner.Although it poses numerou...As a result of rapid development in electronics and communication technology,large-scale unmanned aerial vehicles(UAVs)are harnessed for various promising applications in a coordinated manner.Although it poses numerous advantages,resource management among various domains in large-scale UAV communication networks is the key challenge to be solved urgently.Specifically,due to the inherent requirements and future development trend,distributed resource management is suitable.In this article,we investigate the resource management problem for large-scale UAV communication networks from game-theoretic perspective which are exactly coincident with the distributed and autonomous manner.By exploring the inherent features,the distinctive challenges are discussed.Then,we explore several gametheoretic models that not only combat the challenges but also have broad application prospects.We provide the basics of each game-theoretic model and discuss the potential applications for resource management in large-scale UAV communication networks.Specifically,mean-field game,graphical game,Stackelberg game,coalition game and potential game are included.After that,we propose two innovative case studies to highlight the feasibility of such novel game-theoretic models.Finally,we give some future research directions to shed light on future opportunities and applications.展开更多
The temperature control of the large-scale vertical quench furnace is very difficult due to its huge volume and complex thermal exchanges. To meet the technical requirement of the quenching process, a temperature cont...The temperature control of the large-scale vertical quench furnace is very difficult due to its huge volume and complex thermal exchanges. To meet the technical requirement of the quenching process, a temperature control system which integrates temperature calibration and temperature uniformity control is developed for the thermal treatment of aluminum alloy workpieces in the large-scale vertical quench furnace. To obtain the aluminum alloy workpiece temperature, an air heat transfer model is newly established to describe the temperature gradient distribution so that the immeasurable workpiece temperature can be calibrated from the available thermocouple temperature. To satisfy the uniformity control of the furnace temperature, a second order partial differential equation(PDE) is derived to describe the thermal dynamics inside the vertical quench furnace. Based on the PDE, a decoupling matrix is constructed to solve the coupling issue and decouple the heating process into multiple independent heating subsystems. Then, using the expert control rule to find a compromise of temperature rising time and overshoot during the quenching process. The developed temperature control system has been successfully applied to a 31 m large-scale vertical quench furnace, and the industrial running results show the significant improvement of the temperature uniformity, lower overshoot and shortened processing time.展开更多
Wind energy has been rapidly developed in China during the past decades and the installed capacity has been the largest in the world. In the future, utilization of wind power is still expected to carry out in China ma...Wind energy has been rapidly developed in China during the past decades and the installed capacity has been the largest in the world. In the future, utilization of wind power is still expected to carry out in China mainly with a large-scale centralized layout. Here, we examine the potential climatic impacts of large-scale windfarms associated with deployment scale in China using numerical experiments, in which four deployment scenarios were designed. These four scenarios represented relatively small- (484 GW), medium- (2165 GW) and large-scale (3490 GW and 5412 GW) installed wind power capacities, respectively. Results showed that turbulent kinetic energy, wind velocity, and air temperature varied consistently within those windfarms with the largest changes in turbine hub heights. Moreover, the above relatively large- scale windfarms could induce regional wanning with a maximum of above 0.8 °C in North China. This regional warming may be linked to an anomalous circulation pattern with a negative pressure anomaly center in Northeast China and a positive pressure anomaly center in the middle and lower reaches of the Yangtze-Huaihe River Basin.展开更多
基金funding support from the National Natural Science Foundation of China(Grant Nos.42177136 and 52309126).
文摘The excavation of deep tunnels crossing faults is highly prone to triggering rockburst disasters,which has become a significant engineering issue.In this study,taking the fault-slip rockbursts from a deep tunnel in southwestern China as the engineering prototype,large-scale three-dimensional(3D)physical model tests were conducted on a 3D-printed complex geological model containing two faults.Based on the selfdeveloped 3D loading system and excavation device,the macroscopic failure of fault-slip rockbursts was simulated indoors.The stress,strain,and fracturing characteristics of the surrounding rock near the two faults were systematically evaluated during excavation and multistage loading.The test results effectively revealed the evolution and triggering mechanism of fault-slip rockbursts.After the excavation of a highstress tunnel,stress readjustment occurred.Owing to the presence of these two faults,stress continued to accumulate in the rock mass between them,leading to the accumulation of fractures.When the shear stress on a fault surface exceeded its shear strength,sudden fault slip and dislocation occurred,thus triggering rockbursts.Rockbursts occurred twice in the vault between the two faults,showing obvious intermittent characteristics.The rockburst pit was controlled by two faults.When the faults remained stable,tensile failure predominated in the surrounding rock.However,when the fault slip was triggered,shear failure in the surrounding rock increased.These findings provide valuable insights for enhancing the comprehension of fault-slip rockbursts.
基金supported by the Fujian Science Foundation for Outstanding Youth(Grant No.2023J06039)the National Natural Science Foundation of China(Grant No.41977259 and No.U2005205)Fujian Province natural resources science and technology innovation project(Grant No.KY-090000-04-2022-019)。
文摘Bedding slope is a typical heterogeneous slope consisting of different soil/rock layers and is likely to slide along the weakest interface.Conventional slope protection methods for bedding slopes,such as retaining walls,stabilizing piles,and anchors,are time-consuming and labor-and energy-intensive.This study proposes an innovative polymer grout method to improve the bearing capacity and reduce the displacement of bedding slopes.A series of large-scale model tests were carried out to verify the effectiveness of polymer grout in protecting bedding slopes.Specifically,load-displacement relationships and failure patterns were analyzed for different testing slopes with various dosages of polymer.Results show the great potential of polymer grout in improving bearing capacity,reducing settlement,and protecting slopes from being crushed under shearing.The polymer-treated slopes remained structurally intact,while the untreated slope exhibited considerable damage when subjected to loads surpassing the bearing capacity.It is also found that polymer-cemented soils concentrate around the injection pipe,forming a fan-shaped sheet-like structure.This study proves the improvement of polymer grouting for bedding slope treatment and will contribute to the development of a fast method to protect bedding slopes from landslides.
基金National Natural Science Foundation of China(82274265 and 82274588)Hunan University of Traditional Chinese Medicine Research Unveiled Marshal Programs(2022XJJB003).
文摘Eye diagnosis is a method for inspecting systemic diseases and syndromes by observing the eyes.With the development of intelligent diagnosis in traditional Chinese medicine(TCM);artificial intelligence(AI)can improve the accuracy and efficiency of eye diagnosis.However;the research on intelligent eye diagnosis still faces many challenges;including the lack of standardized and precisely labeled data;multi-modal information analysis;and artificial in-telligence models for syndrome differentiation.The widespread application of AI models in medicine provides new insights and opportunities for the research of eye diagnosis intelli-gence.This study elaborates on the three key technologies of AI models in the intelligent ap-plication of TCM eye diagnosis;and explores the implications for the research of eye diagno-sis intelligence.First;a database concerning eye diagnosis was established based on self-su-pervised learning so as to solve the issues related to the lack of standardized and precisely la-beled data.Next;the cross-modal understanding and generation of deep neural network models to address the problem of lacking multi-modal information analysis.Last;the build-ing of data-driven models for eye diagnosis to tackle the issue of the absence of syndrome dif-ferentiation models.In summary;research on intelligent eye diagnosis has great potential to be applied the surge of AI model applications.
基金supported by the Ministry of Trade,Industry & Energy(MOTIE,Korea) under Industrial Technology Innovation Program (No.10063424,'development of distant speech recognition and multi-task dialog processing technologies for in-door conversational robots')
文摘A Long Short-Term Memory(LSTM) Recurrent Neural Network(RNN) has driven tremendous improvements on an acoustic model based on Gaussian Mixture Model(GMM). However, these models based on a hybrid method require a forced aligned Hidden Markov Model(HMM) state sequence obtained from the GMM-based acoustic model. Therefore, it requires a long computation time for training both the GMM-based acoustic model and a deep learning-based acoustic model. In order to solve this problem, an acoustic model using CTC algorithm is proposed. CTC algorithm does not require the GMM-based acoustic model because it does not use the forced aligned HMM state sequence. However, previous works on a LSTM RNN-based acoustic model using CTC used a small-scale training corpus. In this paper, the LSTM RNN-based acoustic model using CTC is trained on a large-scale training corpus and its performance is evaluated. The implemented acoustic model has a performance of 6.18% and 15.01% in terms of Word Error Rate(WER) for clean speech and noisy speech, respectively. This is similar to a performance of the acoustic model based on the hybrid method.
文摘The streamflow over the Yellow River basin is simulated using the PRECIS (Providing REgional Climates for Impacts Studies) regional climate model driven by 15-year (1979-1993) ECMWF reanalysis data as the initial and lateral boundary conditions and an off-line large-scale routing model (LRM). The LRM uses physical catchment and river channel information and allows streamflow to be predicted for large continental rivers with a 1°×1° spatial resolution. The results show that the PRECIS model can reproduce the general southeast to northwest gradient distribution of the precipitation over the Yellow River basin, The PRECIS- LRM model combination has the capability to simulate the seasonal and annual streamflow over the Yellow River basin. The simulated streamflow is generally coincident with the naturalized streamflow both in timing and in magnitude.
基金financially supported by the Open Research Fund of Hunan Provincial Key Laboratory of Key Technology on Hydropower Development (Grant No.PKLHD202003)the National Natural Science Foundation of China (Grant Nos.52071058 and 51939002)+1 种基金the National Natural Science Foundation of Liaoning Province (Grant No.2022-KF-18-01)Fundamental Research Funds for the Central University (Grant No.DUT20ZD219)。
文摘Considering the large diameter effect of piles,the influence of different pile-soil analysis methods on the design of monopile foundations for offshore wind turbines has become an urgent problem to be solved.Three different pile-soil models were used to study a large 10 MW monopile wind turbine.By modeling the three models in the SACS software,this paper analyzed the motion response of the overall structure under the conditions of wind and waves.According to the given working conditions,this paper concludes that under the condition of independent wind,the average value of the tower top x-displacement of the rigid connection method is the smalle st,and the standard deviation is the smallest under the condition of independent wave.The results obtained by the p-y curve method are the most conservative.
基金supported by the National Key R&D Program of China with Grant number 2019YFB1803400the National Natural Science Foundation of China under Grant number 62071114the Fundamental Research Funds for the Central Universities of China under grant numbers 3204002004A2 and 2242022k30005。
文摘This paper investigates the wireless communication with a novel architecture of antenna arrays,termed modular extremely large-scale array(XLarray),where array elements of an extremely large number/size are regularly mounted on a shared platform with both horizontally and vertically interlaced modules.Each module consists of a moderate/flexible number of array elements with the inter-element distance typically in the order of the signal wavelength,while different modules are separated by the relatively large inter-module distance for convenience of practical deployment.By accurately modelling the signal amplitudes and phases,as well as projected apertures across all modular elements,we analyse the near-field signal-to-noise ratio(SNR)performance for modular XL-array communications.Based on the non-uniform spherical wave(NUSW)modelling,the closed-form SNR expression is derived in terms of key system parameters,such as the overall modular array size,distances of adjacent modules along all dimensions,and the user's three-dimensional(3D)location.In addition,with the number of modules in different dimensions increasing infinitely,the asymptotic SNR scaling laws are revealed.Furthermore,we show that our proposed near-field modelling and performance analysis include the results for existing array architectures/modelling as special cases,e.g.,the collocated XL-array architecture,the uniform plane wave(UPW)based far-field modelling,and the modular extremely large-scale uniform linear array(XL-ULA)of onedimension.Extensive simulation results are presented to validate our findings.
基金supported by the National Natural Science Foundation of China(Grant Nos.52278407 and 52378407)the China Postdoctoral Science Foundation(Grant No.2023M732670)the support by the Postdoctoral Fellowship Program of China Postdoctoral Science Foundation.
文摘The widespread utilisation of tunnel boring machines(TBMs)in underground construction engineering requires a detailed investigation of the cutter-rock interaction.In this paper,we conduct a series of largescale standing rotary cutting tests on granite in conjunction with high-fidelity numerical simulations based on a particle-type discrete element method(DEM)to explore the effects of key cutting parameters on the TBM cutter performance and the distribution of cutter-rock contact stresses.The assessment results of cutter performance obtained from the cutting tests and numerical simulations reveal similar dependencies on the key cutting parameters.More specifically,the normal and rolling forces exhibit a positive correlation with penetration but are slightly influenced by the cutting radius.In contrast,the side force decreases as the cutting radius increases.Additionally,the side force shows a positive relationship with the penetration for smaller cutting radii but tends to become negative as the cutting radius increases.The cutter's relative effectiveness in rock breaking is significantly impacted by the penetration but shows little dependency on the cutting radius.Consequently,an optimal penetration is identified,leading to a low boreability index and specific energy.A combined Hertz-Weibull function is developed to fit the cutter-rock contact stress distribution obtained in DEM simulations,whereby an improved CSM(Colorado School of Mines)model is proposed by replacing the original monotonic cutting force distribution with this combined Hertz-Weibull model.The proposed model outperforms the original CSM model as demonstrated by a comparison of the estimated cutting forces with those from the tests/simulations.The findings from this work that advance our understanding of TBM cutter performance have important implications for improving the efficiency and reliability of TBM tunnelling in granite.
基金supported by the National Key R&D Program of China(No.2021YFB0301200)National Natural Science Foundation of China(No.62025208).
文摘Large-scale Language Models(LLMs)have achieved significant breakthroughs in Natural Language Processing(NLP),driven by the pre-training and fine-tuning paradigm.While this approach allows models to specialize in specific tasks with reduced training costs,the substantial memory requirements during fine-tuning present a barrier to broader deployment.Parameter-Efficient Fine-Tuning(PEFT)techniques,such as Low-Rank Adaptation(LoRA),and parameter quantization methods have emerged as solutions to address these challenges by optimizing memory usage and computational efficiency.Among these,QLoRA,which combines PEFT and quantization,has demonstrated notable success in reducing memory footprints during fine-tuning,prompting the development of various QLoRA variants.Despite these advancements,the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored.This study presents a comprehensive analysis of these key variables,focusing on their influence across different layer types and depths within LLM architectures.Our investigation uncovers several critical findings:(1)Larger layers,such as MLP layers,can maintain performance despite reductions in adapter rank,while smaller layers,like self-attention layers,aremore sensitive to such changes;(2)The effectiveness of balancing factors depends more on specific values rather than layer type or depth;(3)In quantization-aware fine-tuning,larger layers can effectively utilize smaller adapters,whereas smaller layers struggle to do so.These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs.Moreover,for the same discount of trainable parameters,reducing the trainable parameters in a larger layer is more effective in preserving fine-tuning accuracy than in a smaller one.This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
基金The 2023 Guangxi University Young and Middle-Aged Teachers’Scientific Research Basic Ability Improvement Project“Research on Seismic Performance of Prefabricated CFST Column-SRC Beam Composite Joints”(2023KY1204)The 2023 Guangxi Vocational Education Teaching Reform Research Project“Research and Practice on the Cultivation of Digital Talents in Prefabricated Buildings in the Context of Deepening the Integration of Industry and Education”(GXGZJG2023B052)The 2022 Guangxi Polytechnic of Construction School-Level Teaching Innovation Team Project“Prefabricated and Intelligent Teaching Innovation Team”(Gui Jian Yuan Ren[2022]No.15)。
文摘This paper discusses the digital application and benefit analysis of building information model(BIM)technology in the large-scale comprehensive development project of the Guangxi headquarters base.The project covers a total area of 92,100 square meters,with a total construction area of 379,700 square meters,including a variety of architectural forms.Through three-dimensional modeling and simulation analysis,BIM technology significantly enhances the design quality and efficiency,shortens the design cycle by about 20%,and promotes the collaboration and integration of project management,improving the management efficiency by about 25%.During the construction phase,the collision detection and four-dimensional visual management functions of BIM technology have improved construction efficiency by about 15%and saved the cost by about 10%.In addition,BIM technology has promoted green building and sustainable development,achieved the dual improvement of technical and economic indicators and social and economic benefits,set an example for enterprises in digital transformation,and opened up new market businesses.
文摘Model Order Reduction (MOR) plays more and more imp or tant role in complex system simulation, design and control recently. For example , for the large-size space structures, VLSI and MEMS (Micro-ElectroMechanical Systems) etc., in order to shorten the development cost, increase the system co ntrolling accuracy and reduce the complexity of controllers, the reduced order model must be constructed. Even in Virtual Reality (VR), the simulation and d isplay must be in real-time, the model order must be reduced too. The recent advances of MOR research are overviewed in the article. The MOR theor y and methods may be classified as Singular Value decomposition (SVD) based, the Krylov subspace based and others. The merits and demerits of the different meth ods are analyzed, and the existed problems are pointed out. Moreover, the applic ation’s fields are overviewed, and the potential applications are forecaste d. After the existed problems analyzed, the future work is described. There are som e problems in the traditional methods such as SVD and Krylov subspace, they are that it’s difficult to (1)guarantee the stability of the original system, (2) b e adaptive to nonlinear system, and (3) control the modeling accuracy. The f uture works may be solving the above problems on the foundation of the tradition al methods, and applying other methods such as wavelet or signal compression.
基金National Natural Science Foundation of China Key Project(No.42050103)Higher Education Disciplinary Innovation Program(No.B25052)+2 种基金the Guangdong Pearl River Talent Program Innovative and Entrepreneurial Team Project(No.2021ZT09H399)the Ministry of Education’s Frontiers Science Center for Deep-Time Digital Earth(DDE)(No.2652023001)Geological Survey Project of China Geological Survey(DD20240206201)。
文摘Since the beginning of the 21st century,advances in big data and artificial intelligence have driven a paradigm shift in the geosciences,moving the field from qualitative descriptions toward quantitative analysis,from observing phenomena to uncovering underlying mechanisms,from regional-scale investigations to global perspectives,and from experience-based inference toward data-and model-enabled intelligent prediction.AlphaEarth Foundations(AEF)is a next-generation geospatial intelligence platform that addresses these changes by introducing a unified 64-dimensional shared embedding space,enabling-for the first time-standardized representation and seamless integration of 12 distinct types of Earth observation data,including optical,radar,and lidar.This framework significantly improves data assimilation efficiency and resolves the persistent problem of“data silos”in geoscience research.AEF is helping redefine research methodologies and fostering breakthroughs,particularly in quantitative Earth system science.This paper systematically examines how AEF’s innovative architecture-featuring multi-source data fusion,high-dimensional feature representation learning,and a scalable computational framework-facilitates intelligent,precise,and realtime data-driven geoscientific research.Using case studies from resource and environmental applications,we demonstrate AEF’s broad potential and identify emerging innovation needs.Our findings show that AEF not only enhances the efficiency of solving traditional geoscientific problems but also stimulates novel research directions and methodological approaches.
基金the National Natural Science Foundation of China(No.61203110)the Shanghai Natural Science Foundation(No.13ZR1418900)the Innovation Programs of Shanghai Municipal Education Commission(Nos.12ZZ155 and 14YZ107)
文摘A distributed model predictive control(MPC) scheme with one-step delay communication is proposed for on-line optimization and control of large-scale systems in this paper. Cooperation between subsystems is achieved by exchanging information with neighbor-to-neighbor communication and by optimizing the local problem with the improved performance index in the neighborhood. A distributed MPC algorithm with one-step delay communication is developed for the situation that there is a one-step delay in the information available from its neighbors when a subsystem solves the local optimization problem. The nominal stability is employed for the whole system under the distributed MPC algorithm without the inequality constraints. Finally, the case study of the reactor-storage-separator(RSS) system is illustrated to test the practicality of the presented control algorithm.
基金supported in part by grants from the National Science and Technology Major Project,China(Grant No.2021ZD0111802)the National Natural Science Foundation of China(Grant Nos.72188101,62406096,and 62376086)the Fundamental Research Funds for the Central Universities,China(Grant No.JZ2024HGQB0093).
文摘Student cognitive modeling is a fundamental task in the intelligence education field.It serves as the basis for various downstream applications,such as student profiling,personalized educational content recommendation,and adaptive testing.Cognitive Diagnosis(CD)and Knowledge Tracing(KT)are two mainstream categories for student cognitive modeling,which measure the cognitive ability from a limited time(e.g.,an exam)and the learning ability dynamics over a long period(e.g.,learning records from a year),respectively.Recent efforts have been dedicated to the development of open-source code libraries for student cognitive modeling.However,existing libraries often focus on a particular category and overlook the relationships between them.Additionally,these libraries lack sufficient modularization,which hinders reusability.To address these limitations,we have developed a unified PyTorch-based library EduStudio,which unifies CD and KT for student cognitive modeling.The design philosophy of EduStudio is from two folds.From a horizontal perspective,EduStudio employs the modularization that separates the main step pipeline of each algorithm.From a vertical perspective,we use templates with the inheritance style to implement each module.We also provide eco-services of EduStudio,such as the repository that collects resources about student cognitive modeling and the leaderboard that demonstrates comparison among models.Our open-source project is available at the website of edustudio.ai.
基金support in dataset preparation.This study was funded by National Natural Science Foundation of China(Nos.42422704 and 52379109)Opening the fund of State Key Laboratory of Geohazard Prevention and Geoenvironment Protection(Chengdu University of Technology)(No.SKLGP2024K028)Science and Technology Research and Design Projects of China State Construction Engineering Corporation Ltd.(No.CSCEC-2024-Q-68).
文摘The identification of rock mass discontinuities is critical for rock mass characterization.While high-resolution digital outcrop models(DOMs)are widely used,current digital methods struggle to generalize across diverse geological settings.Large-scale models(LSMs),with vast parameter spaces and extensive training datasets,excel in solving complex visual problems.This study explores the potential of using one such LSM,Segment anything model(SAM),to identify facet-type discontinuities across several outcrops via interactive prompting.The findings demonstrate that SAM effectively segments two-dimensional(2D)discontinuities,with its generalization capability validated on a dataset of 2426 identified discontinuities across 170 outcrops.The model achieves 0.78 mean IoU and 0.86 average precision using 11-point prompts.To extend to three dimensions(3D),a framework integrating SAM with Structure-from-Motion(SfM)was proposed.By utilizing the inherent but often overlooked relationship between image pixels and point clouds in SfM,the identification process was simplified and generalized across photogrammetric devices.Benchmark studies showed that the framework achieved 0.91 average precision,identifying 87 discontinuities in Dataset-3D.The results confirm its high precision and efficiency,making it a valuable tool for data annotation.The proposed method offers a practical solution for geological investigations.
基金supported by the National Natural Science Foundation of China under Grant No.42176190Fundamental Research Funds for the Central Universities,CHD under Grant Nos.300102243401 and 300102244203Research Funds for the Interdisciplinary Projects,CHU under Grant Nos.300104240912 and 300104240922。
文摘As important infrastructure for airborne communication platforms,unmanned aerial vehicles(UAVs)are expected to become a key part of 6G wireless networks.Thus,modeling low-and medium-altitude propagation channels has attracted much attention.Air-to-ground(A2G)propagation channel models vary in different scenarios,requiring accurate models for designing and evaluating UAV communication links.Unlike terrestrial models,A2G channel models lack detailed investigation.Therefore,this paper provides an overview of existing A2G channel measurement campaigns,different types of A2G channel models for various environments,and future research directions for UAV airland channel modeling.This study focuses on the potential of millimeter-wave technology for UAV A2G channel modeling and highlights nonsuburban scenarios requiring consideration in future modeling efforts.
基金Supported by International Science and Technology Cooperation Project of China(2012DFA31120)Special Fund for Agro-scientific Research in the Public Interest(201303094)National Key Technology Research and Development Program(2012BAD14B15)~~
文摘[Objective] The behavior of eating, drinking, defecating and peeing of 1 500 pigs in a large-scale microbial fermentation bed-equipped piggery was observed. We hoped to find some simple indicators that could reflect the health status of swinery and to provide experience for the swinery performance management in large-scale microbial fermentation bed-equipped piggery. [Method] The body weight (BW), daily BW gain, feed intake and other indicators of different-day-old pigs were recorded in details. Based on the recorded data, the models between BW, BW gain, average daily feed intake and feed/gain ratio and growth days (d) were established. In addition, the incidences of pox-like macula (dermatitis), diarrhea (gastrointestinal disease), cough (respiratory disease), stiff pig (malnutrition), conjunctivitis (eye disease) and foot inflection (trauma) among fattening pigs were also investigated. [Result] The BW range, average BW, daily BW gain, breeding days, daily feed intake range, average daily feed intake, staged feed intake, accumulated feed intake, feed/gain ratio and accumulated feed/gain ratio of different-day-old pigs were studied, respectively. Four dynamic models were established for the growth of pigs: (1) the BW (y)-age (x) mod- el: y=0.758 9x-19.883 (3=0.993 7); (2) the BW gain (y)-age (x) model: y=1.039 5x05051 (F=0.885 4); (3) the average daily feed intake (y)-age (x) model: y=0.023 5x-0.334 3 (F=0.991 7); (4) the feed/gain ratio (y)-age (x) model: y=0.022x+0.427 8 (P=0.988 5). Based on these models, the corresponding theoretical growth value of pigs at different growth stage could be predicted. The main diseases occurred among the swinery in the large-scale microbial fermentation bed piggery included pox-like macula (dermatitis), diarrhea (gastrointestinal disease), cough (respiratory disease), stiff pig (mal- nutrition), conjunctivitis (eye disease) and foot inflection (trauma). The deadly infec- tious diseases had been not found among the pigs. [Conclusion] When the actual BW, BW gain, average daily feed intake and feed/gain ratio were all lower than the theoretical values predicted by the models, the management should be enhanced. The average daily feed intake of 60 to 65-day-old pigs was lower than the theoretic value, indicating that the pigs could not adapt nicely to the fermentation bed at the very early stage. When the pigs grew up to 70 to 75 d old, the average daily feed intake was higher than the theoretical value, indicating that the pigs had adapted to the fermentation bed. In particularly, average daily feed intake of 75-day-old pigs was higher than the theoretical value by 21%. It was suggested the fermentation bed was conducive to the growth of pigs. Considering the occurrence of diseases among pigs, the overall incidence was relatively low. The incidence of each disease was all lower than 10% with little difficulty in treating. If the management of mattress was strength- ened, such as paying attention to feeding and keeping water clean, many diseases could heal by themselves.
基金This work was supported by National Key R&D Program of China under Grant 2018YFB1800802in part by the National Natural Science Foundation of China under Grant No.61771488,No.61631020 and No.61827801+1 种基金in part by State Key Laboratory of Air Traffic Management System and Technology under Grant No.SKLATM201808in part by Postgraduate Research and Practice Innovation Program of Jiangsu Province under No.KYCX190188.
文摘As a result of rapid development in electronics and communication technology,large-scale unmanned aerial vehicles(UAVs)are harnessed for various promising applications in a coordinated manner.Although it poses numerous advantages,resource management among various domains in large-scale UAV communication networks is the key challenge to be solved urgently.Specifically,due to the inherent requirements and future development trend,distributed resource management is suitable.In this article,we investigate the resource management problem for large-scale UAV communication networks from game-theoretic perspective which are exactly coincident with the distributed and autonomous manner.By exploring the inherent features,the distinctive challenges are discussed.Then,we explore several gametheoretic models that not only combat the challenges but also have broad application prospects.We provide the basics of each game-theoretic model and discuss the potential applications for resource management in large-scale UAV communication networks.Specifically,mean-field game,graphical game,Stackelberg game,coalition game and potential game are included.After that,we propose two innovative case studies to highlight the feasibility of such novel game-theoretic models.Finally,we give some future research directions to shed light on future opportunities and applications.
基金Project(61174132)supported by the National Natural Science Foundation of ChinaProject(2015zzts047)supported by the Fundamental Research Funds for the Central Universities,ChinaProject(20130162110067)supported by the Research Fund for the Doctoral Program of Higher Education of China
文摘The temperature control of the large-scale vertical quench furnace is very difficult due to its huge volume and complex thermal exchanges. To meet the technical requirement of the quenching process, a temperature control system which integrates temperature calibration and temperature uniformity control is developed for the thermal treatment of aluminum alloy workpieces in the large-scale vertical quench furnace. To obtain the aluminum alloy workpiece temperature, an air heat transfer model is newly established to describe the temperature gradient distribution so that the immeasurable workpiece temperature can be calibrated from the available thermocouple temperature. To satisfy the uniformity control of the furnace temperature, a second order partial differential equation(PDE) is derived to describe the thermal dynamics inside the vertical quench furnace. Based on the PDE, a decoupling matrix is constructed to solve the coupling issue and decouple the heating process into multiple independent heating subsystems. Then, using the expert control rule to find a compromise of temperature rising time and overshoot during the quenching process. The developed temperature control system has been successfully applied to a 31 m large-scale vertical quench furnace, and the industrial running results show the significant improvement of the temperature uniformity, lower overshoot and shortened processing time.
基金s We acknowledged the financial support of the National Key Research and Development Program of China (2018YFB1502803), the National Natural Science Foundation of China (41475066), and Tsinghua University Initiative Sci entific Research Program (20131089357, 20131089356).
文摘Wind energy has been rapidly developed in China during the past decades and the installed capacity has been the largest in the world. In the future, utilization of wind power is still expected to carry out in China mainly with a large-scale centralized layout. Here, we examine the potential climatic impacts of large-scale windfarms associated with deployment scale in China using numerical experiments, in which four deployment scenarios were designed. These four scenarios represented relatively small- (484 GW), medium- (2165 GW) and large-scale (3490 GW and 5412 GW) installed wind power capacities, respectively. Results showed that turbulent kinetic energy, wind velocity, and air temperature varied consistently within those windfarms with the largest changes in turbine hub heights. Moreover, the above relatively large- scale windfarms could induce regional wanning with a maximum of above 0.8 °C in North China. This regional warming may be linked to an anomalous circulation pattern with a negative pressure anomaly center in Northeast China and a positive pressure anomaly center in the middle and lower reaches of the Yangtze-Huaihe River Basin.