Funding: Agence Nationale de la Recherche (Grant No. ANR-20-CE02-0010).
Abstract: The vast diversity of morphologies, body sizes, and lifestyles of snakes represents an important source of information that can be used to derive bio-inspired robots through a biology-push and pull process. An understanding of the detailed kinematics of swimming snakes is a fundamental prerequisite to conceive and design bio-inspired aquatic snake robots. However, only limited information is available on the kinematics of swimming snakes. Fast and accurate methods are needed to fill this knowledge gap. In the present paper, three existing methods were compared to test their capacity to characterize the kinematics of swimming snakes: (1) marker tracking (Deftac), (2) markerless pose estimation (DeepLabCut), and (3) motion capture. (4) We also designed and tested an automatic video processing method. All methods provided different albeit complementary data sets; they also involved different technical issues in terms of experimental conditions, snake manipulation, or processing resources. Marker tracking provided accurate data that can be used to calibrate other methods. Motion capture posed technical difficulties but can provide limited 3D data. Markerless pose estimation required deep learning (thus time) but was efficient at extracting data under various experimental conditions. Finally, automatic video processing was particularly efficient at extracting a wide range of data useful for both biology and robotics but required a specific experimental setting.
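Whichever tracking method is used, the resulting coordinate time series must still be reduced to kinematic quantities. As a minimal illustration (not part of the paper's pipeline; the function name and the zero-crossing approach are assumptions), the sketch below estimates the amplitude and undulation frequency of one tracked body point from its lateral position per frame:

```python
import numpy as np

def undulation_stats(y, fps):
    """Estimate amplitude and frequency of one tracked point's lateral oscillation.

    y   : lateral position of the point in each frame (pixels)
    fps : video frame rate (frames per second)
    """
    y = np.asarray(y, float) - np.mean(y)         # centre on the midline
    amplitude = (y.max() - y.min()) / 2.0         # half the peak-to-peak excursion
    signs = np.signbit(y).astype(int)
    crossings = np.count_nonzero(np.diff(signs))  # midline crossings
    duration = len(y) / fps                       # recording length in seconds
    frequency = crossings / (2.0 * duration)      # two crossings per undulation cycle
    return amplitude, frequency
```

On a clean sinusoidal trace this recovers the wave parameters directly; on noisy tracking data the signal would need low-pass filtering before counting crossings.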
Funding: This work was financially supported by the National Major Science and Technology Project (Innovation 2030) of China (Grant No. 2021ZD0113701).
Abstract: The accurate identification of various postures in the daily life of piglets, which are directly reflected by their skeleton morphology, is necessary to study the behavioral characteristics of pigs. Accordingly, this study proposed a novel approach for the skeleton extraction and pose estimation of piglets. First, an improved Zhang-Suen (ZS) thinning algorithm based on morphology was used to establish the chain code mechanism of the burr and the redundant-information deletion templates to achieve single-pixel-width extraction of pig skeletons. Then, body nodes were extracted on the basis of the improved DeepLabCut (DLC) algorithm, and a part affinity field (PAF) was added to realize the connection of body nodes and, consequently, construct a database of pig behavior and postures. Finally, a support vector machine was used for pose matching to recognize the main behavior of piglets. In this study, 14,000 images of piglets with different types of behavior were used in posture recognition experiments. Results showed that the improved ZS-DLC-PAF algorithm achieved the best thinning rate compared with distance transformation, medial axis transformation, morphology refinement, and the traditional ZS algorithm. The node tracking accuracy reached 85.08%, and the pressure test could accurately detect up to 35 nodes on 5 pigs. The average accuracy of posture matching was 89.60%. This study realized not only the single-pixel extraction of piglets' skeletons but also the connection among the body nodes of individual sows and multiple piglets across different types of behavior. Furthermore, this study established a database of pig posture behavior, which provides a reference for studying animal behavior identification, classification, and anomaly detection.
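The thinning step builds on the Zhang-Suen algorithm. The sketch below implements the classic two-subiteration ZS pass on a binary image; it is the textbook algorithm, not the paper's improved variant with chain-code burr removal:

```python
import numpy as np

def zhang_suen_thin(img):
    """Thin a binary image (1 = foreground) to a near-single-pixel skeleton."""
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # Neighbours P2..P9, clockwise from the pixel above.
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)  # non-zero neighbours
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1 for i in range(8))  # 0->1 transitions
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    # Sub-iteration 1: P2*P4*P6 == 0 and P4*P6*P8 == 0
                    if step == 0 and p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                        to_delete.append((y, x))
                    # Sub-iteration 2: P2*P4*P8 == 0 and P2*P6*P8 == 0
                    elif step == 1 and p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                        to_delete.append((y, x))
            for y, x in to_delete:  # delete only after the full scan
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img
```

The deferred deletion per sub-iteration is what keeps the result connected; removing pixels during the scan would let the skeleton break apart.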
Funding: Supported by the National Major Science and Technology Project (Innovation 2030) of China (Grant No. 2021ZD0113701), the National Key Research and Development Program of China (Grant No. 2021YFD1300101), and the National Research Facility for Phenotypic and Genotypic Analysis of Model Animals (Beijing) (Grant No. 2016-000052-73-01-001202).
Abstract: Identifying and tracking the drinking behavior of pigs is of great significance for welfare feeding and piggery management. Research on pigs' drinking behavior not only needs to indicate whether the snout is in contact with the water fountain, but also to establish whether the pig is drinking water and for how long. To solve target loss and identification errors, a novel method for tracking the drinking behavior of pigs based on Lucas-Kanade pyramid optical flow (L-K OPT), kernelized correlation filters (KCF), and DeepLabCut (DLC) was proposed. First, the feature model of the drinking behavior of a sow was established by L-K OPT. In addition, the water flow vector was used to determine whether the animal drank water and to demonstrate the details of the movements. Then, on the basis of the improved KCF, a relocation model of the sow's snout was established to resolve the problem of tracking loss of the snout. Finally, the tracking model of piglets' drinking behavior was established by DLC to build the mapping association between the pig's snout and the drinking fountain. Using 200 episodes of drinking videos (30-60 s each) to verify the proposed method, the results show that: 1) according to the two key drinking indexes, the Down (−135°, −45°) direction feature and the V2 (>10 pixels) speed feature, the drinking time could be determined at the frame level, with an error within 30 frames; 2) the overlap precision (OP) was 95%, the center location error (CLE) was 3 pixels, and the speed was 300 fps, all superior to other traditional algorithms; 3) the optimal learning rate was 0.005, and the loss value was 0.0002. The method proposed in this study realized accurate and automatic monitoring of the drinking behavior of pigs, which could provide a reference for monitoring other animal behaviors.
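The two indexes in result 1) amount to a per-frame test on the flow vector. The sketch below applies them literally; the (dx, dy) input and the angle convention (image y grows downward, so straight down maps to −90°) are assumptions, not the paper's code:

```python
import math

# Thresholds from the abstract: "Down" direction is (-135 deg, -45 deg)
# and the speed must exceed 10 pixels/frame (the V2 criterion).
DOWN_RANGE = (-135.0, -45.0)
SPEED_MIN = 10.0

def is_drinking_frame(dx, dy):
    """Classify one frame from its mean water-flow vector (dx, dy), in pixels/frame."""
    speed = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(-dy, dx))  # down in the image -> -90 deg
    return DOWN_RANGE[0] < angle < DOWN_RANGE[1] and speed > SPEED_MIN

def drinking_frames(flow):
    """Return the indices of frames classified as drinking."""
    return [i for i, (dx, dy) in enumerate(flow) if is_drinking_frame(dx, dy)]
```

For example, a vector of (0, 15) (water flowing straight down at 15 pixels/frame) passes both tests, while (12, 0) fails the direction test and (0, 5) fails the speed test.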
Abstract: OpenPose (OP) and DeepLabCut (DLC) are applications that use deep learning to estimate posture, but there are few reports on the reliability, validity, and accuracy of their 2D lower limb joint motion analysis. This study compared OP and DLC estimates of lower extremity joint angles in standing movements with those of conventional software. A total of nine healthy men participated. The trial task was to stand up from a chair. The motion was recorded by a digital camera, and the joint angles of the hip and knee joints were calculated from the video using OP, DLC, and Kinovea. To confirm reliability and validity, the ICC was calculated using the Kinovea value as the validity criterion, along with the correlation coefficient between OP and DLC. In addition, the agreement between the data sets was evaluated with Bland-Altman plots. To evaluate accuracy, the root mean square error (RMSE) was calculated and compared for each joint. Although the correlation coefficients and ICC(2,1) showed almost perfect agreement, fixed and proportional errors were found for most joints. The RMSE was smaller for OP than for DLC. Compared with Kinovea, OP and DLC can estimate the joint angles of the hip and knee joints during the stand-up movement with an estimation error of less than 10°, but since they are affected by the resolution of the analysis video and other factors, they need to be validated in a variety of environments and with a variety of movements.
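The accuracy and agreement statistics used here, RMSE and the Bland-Altman bias with 95% limits of agreement, are straightforward to compute from two joint-angle series. A minimal sketch (generic formulas, not the study's analysis scripts):

```python
import numpy as np

def rmse(estimate, reference):
    """Root mean square error between two joint-angle series (degrees)."""
    e = np.asarray(estimate, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(e ** 2)))

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement series."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = float(diff.mean())
    half_width = 1.96 * float(diff.std(ddof=1))  # 1.96 * SD of the differences
    return bias, bias - half_width, bias + half_width
```

A non-zero bias with narrow limits corresponds to the fixed error the study reports; a bias that scales with the mean of the two measurements would indicate the proportional error.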