In computer vision, 3D object recognition is one of the most important tasks for many real-world applications. Three-dimensional convolutional neural networks (CNNs) have demonstrated their advantages in 3D object recognition. In this paper, we propose to use the principal curvature directions of 3D objects (from CAD models) to represent geometric features as inputs to a 3D CNN. Our framework, named CurveNet, learns perceptually relevant salient features and predicts object class labels. Curvature directions encode complex surface information of a 3D object, which helps our framework produce more precise and discriminative features for object recognition. Inspired by multitask learning, which shares features between related tasks, we treat pose classification as an auxiliary task that enables CurveNet to generalize better on object label classification. Experimental results show that curvature vectors outperform voxels as an input for 3D object classification. We further improved CurveNet's performance by combining two networks that take the curvature directions and the voxels of a 3D object as inputs, adopting a Cross-Stitch module to learn effective shared features across the multiple representations. We evaluated our methods on three publicly available datasets and achieved competitive performance on the 3D object recognition task.
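The Cross-Stitch fusion mentioned above can be sketched as a learned linear mixing of two streams' activations. This is a minimal illustration of the general cross-stitch idea, not CurveNet's exact module; the mixing values and vector sizes below are arbitrary assumptions.

```python
# Sketch of a cross-stitch unit: two streams (e.g. curvature and voxel
# features) are mixed element-wise through a learnable 2x2 matrix alpha.
# All concrete numbers here are illustrative, not trained parameters.

def cross_stitch(x_a, x_b, alpha):
    """Mix two stream activations element-wise.

    alpha is a 2x2 matrix of learnable scalars:
      out_a = alpha[0][0]*x_a + alpha[0][1]*x_b
      out_b = alpha[1][0]*x_a + alpha[1][1]*x_b
    """
    out_a = [alpha[0][0] * a + alpha[0][1] * b for a, b in zip(x_a, x_b)]
    out_b = [alpha[1][0] * a + alpha[1][1] * b for a, b in zip(x_a, x_b)]
    return out_a, out_b

# Near-identity mixing keeps the streams mostly separate; larger
# off-diagonal entries share more information between them.
fa, fb = cross_stitch([1.0, 2.0], [3.0, 4.0], [[0.9, 0.1], [0.1, 0.9]])
```

With alpha set to the identity matrix the unit reduces to two independent networks, so the amount of sharing is itself learned rather than hand-designed.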
3D object detection is one of the most challenging research tasks in computer vision. To remove the dependence on template information of 3D object proposals in 2.5D-information-based 3D object detection, we propose a 3D object detector based on the fusion of vanishing points and prior orientation, which estimates an accurate 3D proposal from 2.5D data and provides an excellent starting point for 3D object classification and localization. The algorithm first computes three mutually orthogonal vanishing points via the Euler-angle principle and projects them into the pixel coordinate system. Then, the top edge of the 2D proposal is sampled at a preset pitch to obtain the first vertex. Finally, the remaining seven vertices of the 3D proposal are computed from the linear relationships between the three vanishing points and the vertices, yielding the complete 3D proposal. Experimental results show that the proposed method improves the mean average precision score by 2.7% over the Amodal3Det method.
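The projection step described above can be sketched as follows: for a pinhole camera with intrinsics K and rotation R, the vanishing point of world axis i is the dehomogenized image point K·R[:, i]. The intrinsics and the yaw-only rotation used here are illustrative assumptions, not the paper's calibration.

```python
# Sketch: three orthogonal vanishing points in the pixel frame.
# Assumed toy intrinsics; the rotation is yaw-only for brevity, whereas
# the paper derives a full rotation from Euler angles.
import math

def euler_to_R(yaw):
    """Rotation about the vertical axis only (simplification)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def vanishing_points(K, R):
    vps = []
    for i in range(3):
        col = [R[r][i] for r in range(3)]               # world axis i in camera frame
        h = [sum(K[r][c] * col[c] for c in range(3)) for r in range(3)]
        if abs(h[2]) < 1e-12:
            vps.append(None)         # axis parallel to image plane: VP at infinity
        else:
            vps.append((h[0] / h[2], h[1] / h[2]))
    return vps

K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
vps = vanishing_points(K, euler_to_R(math.radians(30.0)))
```

With a yaw-only rotation the vertical axis stays parallel to the image plane, so its vanishing point lies at infinity, while the two horizontal axes project to finite pixel coordinates on the horizon line.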
To address various fisheries science problems around Japan, the Japan Fisheries Research and Education Agency (FRA) has developed an ocean forecast system by combining an ocean circulation model based on the Regional Ocean Modeling System (ROMS) with three-dimensional variational analysis schemes. This system, which is called FRA-ROMS, is a basic and essential tool for the systematic conduct of fisheries science. The main aim of FRA-ROMS is to realistically simulate mesoscale variations over the Kuroshio-Oyashio region. Here, in situ oceanographic and satellite data were assimilated into FRA-ROMS using a weekly time window. We first examined the reproducibility through comparison with several oceanographic datasets in an Eulerian reference frame. FRA-ROMS was able to reproduce representative features of mesoscale variations such as the position of the Kuroshio path, variability of the Kuroshio Extension, and southward intrusions of the Oyashio. Second, using a Lagrangian reference frame, we estimated position errors between ocean drifters and particles passively transported by simulated currents, because particle tracking is an essential technique in applications of reanalysis products to fisheries science. Finally, we summarize recent and ongoing fisheries studies that use FRA-ROMS and mention several new developments and enhancements that will be implemented in the near future.
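The Lagrangian comparison described above can be sketched as follows: advect a passive particle through a current field with a forward-Euler step and measure its separation from an observed drifter position. The constant "jet" and the drifter endpoint below are made-up stand-ins, not FRA-ROMS currents or real drifter data.

```python
# Toy particle-tracking sketch: forward-Euler advection through a
# prescribed velocity field, then the position error against a
# (hypothetical) drifter fix. Units: metres and seconds.
import math

def advect(pos, velocity_at, dt, steps):
    """x_{n+1} = x_n + u(x_n) * dt, iterated for `steps` steps."""
    x, y = pos
    for _ in range(steps):
        u, v = velocity_at(x, y)
        x, y = x + u * dt, y + v * dt
    return x, y

def position_error(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Constant eastward jet (a crude stand-in for a Kuroshio-like current).
jet = lambda x, y: (0.5, 0.0)                             # m/s
sim_end = advect((0.0, 0.0), jet, dt=3600.0, steps=24)    # one day of hourly steps
drifter_end = (45000.0, 2000.0)                           # assumed observed fix (m)
err = position_error(sim_end, drifter_end)
```

Real evaluations would interpolate the reanalysis velocity field in space and time and use a higher-order integrator, but the error metric itself is this simple separation distance.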
Currently, worldwide industries and communities are concerned with building, expanding, and exploring the assets and resources found in the oceans and seas. More precisely, for stock assessment, archaeology, and surveillance, several cameras are installed undersea to collect videos. However, these large videos require a lot of time and memory to process when extracting relevant information. Hence, an accurate and efficient automated system is greatly needed to replace this manual video-assessment procedure. From this perspective, we present a complete framework for video summarization and object detection in underwater videos. We employ a perceived motion energy (PME) method to first extract keyframes, followed by an object detection model, YOLOv3, to detect objects in the underwater videos. The blurriness and low contrast of underwater images are also addressed in the presented approach by applying an image enhancement method. Furthermore, the suggested framework for underwater video summarization and object detection is evaluated on the publicly available Brackish dataset. The proposed framework shows good performance and can ultimately assist marine researchers and scientists working in underwater archaeology, stock assessment, and surveillance.
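The keyframe stage can be sketched as a motion-energy filter in the spirit of PME: score each frame by its mean absolute difference from the previous frame and keep frames whose score exceeds a threshold. This is a simplified stand-in for the paper's PME formulation; the frames and threshold below are illustrative.

```python
# Simplified motion-energy keyframe selection (not the exact PME method):
# a frame becomes a keyframe when the scene changes strongly enough
# relative to its predecessor.

def motion_energy(prev, curr):
    """Mean absolute pixel difference between consecutive frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def select_keyframes(frames, threshold):
    keep = [0]                                    # always keep the first frame
    for i in range(1, len(frames)):
        if motion_energy(frames[i - 1], frames[i]) > threshold:
            keep.append(i)
    return keep

# Three tiny "frames" (flattened grayscale): only the last transition is large.
frames = [[10, 10, 10], [11, 10, 10], [90, 80, 70]]
keys = select_keyframes(frames, threshold=5.0)
```

Only the selected keyframes would then be passed to the detector, which is what makes the summarization step save processing time and memory on long underwater recordings.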
Vision-based technologies have been extensively applied to on-street parking space sensing, aiming to provide timely and accurate information for drivers and improve daily travel convenience. However, this task faces great challenges because partial visualization regularly occurs owing to occlusion by static or dynamic objects or a limited camera perspective. This paper presents an imagery-based framework that infers parking space status by generating the 3D bounding box of each vehicle. A specially designed convolutional neural network based on ResNet and a feature pyramid network is proposed to overcome the challenges of partial visualization and occlusion. It predicts 3D box candidates on multi-scale feature maps with five different 3D anchors, which are generated by clustering diverse scales of ground-truth boxes according to different vehicle templates in the source dataset. Subsequently, a vehicle distribution map is constructed jointly from the coordinates of the vehicle boxes and manually segmented parking spaces, where the normative degree of a parked vehicle is calculated as the intersection over union between the vehicle's box and the parking space edge. In space status inference, to further eliminate mutual vehicle interference, three adjacent spaces are combined into one unit, and a multinomial logistic regression model is trained to refine the status of the unit. Experiments on the KITTI benchmark and Shanghai road scenes show that the proposed method outperforms most monocular approaches in 3D box regression and achieves satisfactory accuracy in space status inference.
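The normative-degree computation can be sketched with axis-aligned boxes; this is a simplifying assumption for illustration, as the paper works with projected 3D vehicle boxes and segmented parking-space edges.

```python
# Intersection over union between a vehicle footprint and a parking space,
# both given as axis-aligned (x1, y1, x2, y2) rectangles for simplicity.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

# A vehicle parked partly outside its space scores a low normative degree.
vehicle = (1.0, 0.0, 3.0, 5.0)
space = (0.0, 0.0, 2.5, 5.0)
normative_degree = iou(vehicle, space)
```

A normative degree near 1 indicates a well-aligned parked vehicle, while low values flag vehicles straddling space boundaries, which is what the later multinomial regression step uses to disambiguate adjacent spaces.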
Holoscopic 3D imaging is a true 3D imaging system that mimics the fly's-eye technique to acquire a true 3D optical model of a real scene. To reconstruct the 3D image computationally, an efficient implementation of an Auto-Feature-Edge (AFE) descriptor algorithm is required that provides an individual feature detector for integrating 3D information to locate objects in the scene. The AFE descriptor plays a key role in simplifying the detection of both edge-based and region-based objects. The detector is based on a Multi-Quantize Adaptive Local Histogram Analysis (MQALHA) algorithm, which is distinctive for each Feature-Edge (FE) block, i.e., the large contrast changes (gradients) in an FE block are easier to localize. The novelty of this work lies in generating a noise-free 3D map (3DM) based on a correlation analysis of region contours, which automatically combines the available depth estimation technique with an edge-based feature shape recognition technique. The application area consists of two distinct domains that demonstrate the efficiency and robustness of the approach: (a) extracting a set of feature-edges for both the tracking and mapping processes of 3D depth-map estimation, and (b) separating and recognizing focus objects in the scene. Experimental results show that the proposed 3DM technique performs efficiently compared with state-of-the-art algorithms.
The use of pretrained backbones with fine-tuning has shown success in 2D vision and natural language processing tasks, with advantages over task-specific networks. In this paper, we introduce a pretrained 3D backbone, called Swin3D, for 3D indoor scene understanding. We designed a 3D Swin Transformer as our backbone network, which enables efficient self-attention on sparse voxels with linear memory complexity, making the backbone scalable to large models and datasets. We also introduce a generalized contextual relative positional embedding scheme to capture various irregularities of point signals for improved network performance. We pretrained a large Swin3D model on the synthetic Structured3D dataset, which is an order of magnitude larger than the ScanNet dataset. Our model pretrained on the synthetic dataset not only generalizes well to downstream segmentation and detection on real 3D point datasets but also outperforms state-of-the-art methods on downstream tasks, with +2.3 mIoU and +2.2 mIoU on S3DIS Area 5 and 6-fold semantic segmentation, respectively, +1.8 mIoU on ScanNet segmentation (val), +1.9 mAP@0.5 on ScanNet detection, and +8.1 mAP@0.5 on S3DIS detection. A series of extensive ablation studies further validates the scalability, generality, and superior performance of our approach.
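A simple way to see the role of relative positional embeddings is the non-contextual 1D bias table used by the original Swin Transformer, of which Swin3D's generalized contextual scheme is an extension: each attention logit receives a learned bias indexed by the query-key offset. The table values below are arbitrary illustrative numbers, not trained parameters.

```python
# 1D relative positional bias sketch: for n positions, offsets j - i range
# over [-(n-1), n-1], so a table of 2n - 1 learnable scalars covers every
# query-key pair. Swin3D generalizes this to contextual 3D point signals.

def relative_bias_matrix(n, table):
    """table[d + n - 1] holds the bias for relative offset d."""
    return [[table[(j - i) + n - 1] for j in range(n)] for i in range(n)]

n = 3
table = [0.1, 0.2, 0.0, -0.2, -0.1]   # biases for offsets -2 .. 2
bias = relative_bias_matrix(n, table)
# bias[i][j] depends only on j - i, so each diagonal is constant.
```

Because the bias depends only on relative offsets, the same table serves any window placement, which is what keeps the parameter count independent of the number of windows.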
We analyze the radio light curve of 3C 273 at 15 GHz from 1963 to 2006, taken from the literature, and find evidence of quasi-periodic activity. Applying the wavelet analysis method to these data, our results indicate that: (1) there is one main outburst period of P1 = 8.1 ± 0.1 years in 3C 273, in good agreement with Ozernoi's analysis in optical bands; and (2) based on the possible periods, we expect the next burst in October 2014.
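As a worked check of the prediction, adding one period to an assumed most-recent burst near the end of the analyzed light curve lands close to October 2014. The anchor epoch 2006.7 is an assumption for illustration, not a value taken from the paper.

```python
# Extrapolating the next outburst from the main period found by the
# wavelet analysis. The anchor epoch is a hypothetical value chosen near
# the end of the 1963-2006 light curve.

P1 = 8.1                       # main outburst period in years
last_burst = 2006.7            # assumed epoch of the most recent burst
next_burst = last_burst + P1   # close to 2014.8

month = int((next_burst % 1.0) * 12) + 1   # fractional year -> month number
```

A fractional year of 0.8 corresponds to the tenth month, i.e. October, consistent with the 2014 October prediction quoted above.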
Funding: This paper was partially supported by a project of the Shanghai Science and Technology Committee (18510760300), the Anhui Natural Science Foundation (1908085MF178), and the Anhui Excellent Young Talents Support Program (gxyqZD2019069).
Funding: Supported by the National Natural Science Foundation of China (61772328, 61802253, 61831018).
Funding: Supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1G1A1099559).
Funding: This work was supported in part by the National Natural Science Foundation of China (No. 51805312), in part by the Shanghai Sailing Program (No. 18YF1409400), in part by the Training and Funding Program for Young Teachers at Shanghai Colleges (No. ZZGCD15102), in part by a Scientific Research Project of Shanghai University of Engineering Science (No. 2016-19), and in part by the Shanghai University of Engineering Science Innovation Fund for Graduate Students (No. 18KY0613).
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 10821061, 10763002, and 10663002) and the National Basic Research Program of China (Grant No. 2009CB824800).