Funding: The authors thank the EPSRC and Innovate UK for funding this research through the Cambridge Centre for Smart Infrastructure and Construction (CSIC) Innovation and Knowledge Centre (EPSRC grant reference number EP/L010917/1). We thank Professor Kenichi Soga (UC Berkeley) for providing valuable input to this research. We would also like to acknowledge the contribution of Angus Cameron from Environmental Scientifics Group.
Abstract: In this paper, we present an application of distributed fiber optic sensor (DFOS) technology to measure the strain of a continuous flight auger (CFA) test pile with a central reinforcement bar bundle during a static load test carried out in London. Being distributed in nature, DFOS provides much more information about pile performance than traditional point sensors, for example enabling the identification of cross-sectional irregularities and other anomalies. The strain profiles recorded by the DFOS along the depth of the pile were used to calculate pile deformation (contraction), shaft friction, and tip resistance under various loads. Based on this pile load test, a finite element (FE) analysis was performed using a one-dimensional nonlinear load-transfer model. Calibrated with the shaft friction and tip resistance derived from the monitored data, the FE model was able to simulate the pile and soil performance during the load test with good accuracy. The effects of the reinforcement cage and the central reinforcement bar bundle were investigated, and it was found that the addition of a reinforcement cage would reduce the pile settlement by up to 20%.
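The quantities the abstract derives from the DFOS data follow a standard reduction: axial force from strain (P = E·A·ε), unit shaft friction from the force drop between depths divided by the shaft area, and tip resistance as the residual force at the toe. A minimal NumPy sketch of this reduction, using hypothetical pile geometry, modulus, and strain values (none of these numbers are from the paper):

```python
import numpy as np

# Hypothetical pile properties (illustrative only, not from the paper).
E = 30e9                 # concrete Young's modulus, Pa
D = 0.6                  # pile diameter, m
A = np.pi * D**2 / 4     # cross-sectional area, m^2
C = np.pi * D            # shaft perimeter, m

# Illustrative DFOS strain profile: compressive strain vs. depth,
# decaying toward the toe as load is shed into the soil.
z = np.linspace(0.0, 20.0, 11)   # depth below pile head, m
eps = 1e-6 * np.array([100, 92, 84, 75, 65, 54, 43, 32, 21, 12, 5])

# Axial force from strain: P(z) = E * A * eps(z).
P = E * A * eps

# Unit shaft friction between adjacent depths:
# tau_i = (P_i - P_{i+1}) / (C * dz), i.e. force shed over shaft area.
dz = np.diff(z)
tau = -np.diff(P) / (C * dz)

# Tip resistance is the residual axial force at the pile toe.
Q_tip = P[-1]
```

In practice the DFOS strain would first be corrected for temperature and averaged over gauge lengths; this sketch only shows the mechanical reduction step.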
Funding: This work was supported by the Beijing Natural Science Foundation-Xiaomi Innovation Joint Fund (L243013) and the National Natural Science Foundation of China (62172392).
Abstract: In the field of hexapod robot control, the application of central pattern generators (CPGs) and deep reinforcement learning (DRL) is becoming increasingly common. Compared to traditional control methods that rely on dynamic models, both CPG-based and end-to-end DRL approaches significantly simplify the design of control models. However, relying solely on DRL also has drawbacks, such as slow convergence and low exploration efficiency. Moreover, although a CPG can produce rhythmic gaits, its control strategy is relatively fixed, limiting the robot's ability to adapt to complex terrain. To overcome these limitations, this study proposes a three-layer DRL control architecture. The high-level reinforcement learning controller learns the parameters of the middle-level CPG and the low-level mapping functions, while the middle- and low-level controllers coordinate joint movements within and between legs. By integrating the learning capability of DRL with the gait-generation characteristics of the CPG, this method significantly enhances the stability and adaptability of hexapod robots on complex terrain. Experimental results show that, compared to pure DRL approaches, the method significantly improves learning efficiency and control performance; on complex terrain, it also considerably enhances the robot's stability and adaptability compared to pure CPG control.
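The middle layer of such an architecture is typically built from limit-cycle oscillators whose parameters (amplitude, frequency, inter-leg phase) a high-level policy can tune. A minimal Hopf-oscillator sketch of this idea, an illustrative stand-in for the paper's actual CPG model (the parameter names and values are assumptions):

```python
import numpy as np

def hopf_step(x, y, mu=1.0, omega=2 * np.pi, dt=1e-3):
    """One explicit-Euler step of a Hopf oscillator.

    mu sets the limit-cycle radius (sqrt(mu)) and omega the angular
    frequency -- the kind of quantities a high-level DRL policy could
    output for the middle-level CPG.
    """
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dx * dt, y + dy * dt

# Integrate from a small perturbation: x converges onto a stable rhythmic
# oscillation with amplitude near sqrt(mu), which a low-level mapping
# function could translate into joint-angle commands.
x, y = 0.1, 0.0
trace = []
for _ in range(5000):  # 5 s at dt = 1 ms, i.e. five 1 Hz gait cycles
    x, y = hopf_step(x, y)
    trace.append(x)
```

A full gait generator would couple one such oscillator per leg with fixed phase offsets (e.g. alternating tripod); the coupling terms are omitted here for brevity.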