Qiaojun Feng      冯乔俊

I am a Ph.D. student in the Existential Robotics Laboratory, ECE Department and CRI, UC San Diego. My advisor is Prof. Nikolay Atanasov. I received my B.Eng. degree from the Department of Automation, Tsinghua University, in 2017.

CV  /  Google Scholar  /  GitHub  /  LinkedIn

E-mail: qjfeng (at) ucsd.edu


I work on robotics. My work focuses on environment perception and representation. My ultimate research goal is to develop intelligent agents that can explore, reconstruct, and understand the environment, and collaborate with humans.

Jul 2021
One paper (CORSAIR) accepted at IROS 2021!
Jun 2021
I started as a Software Engineer Intern at Nuro for the summer.
Apr 2021
Check our new preprint work on category-level object retrieval and alignment.
Mar 2021
One paper (Terrain Mapping) accepted at ICRA 2021! (JSOE News)
Jun 2020
Two papers (Object Alignment, Object SLAM) accepted at IROS 2020. (JSOE News)
[Older News]
[full list]
CORSAIR: Convolutional Object Retrieval and Symmetry-AIded Registration
Tianyu Zhao, Qiaojun Feng, Sai Jadhav, Nikolay Atanasov
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021
project page / arXiv

We develop an approach for fully Convolutional Object Retrieval and Symmetry-AIded Registration (CORSAIR). Our model extends the Fully Convolutional Geometric Features (FCGF) model to learn a global object-shape embedding in addition to local point-wise features from the point-cloud observations. The global feature is used to retrieve a similar object from a category database, and the local features are used for robust pose registration between the observed and the retrieved object. Our formulation also leverages symmetries present in the object shapes to obtain promising local-feature pairs from different symmetry classes for matching.
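The retrieval step can be illustrated with a minimal nearest-neighbor sketch in embedding space (the embeddings and database here are toy placeholders, not the CORSAIR model):

```python
import numpy as np

def retrieve_similar(query_emb, database_embs):
    """Return the index of the database object whose global embedding
    is closest to the query, by cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb)
    db = database_embs / np.linalg.norm(database_embs, axis=1, keepdims=True)
    return int(np.argmax(db @ q))

# toy example: 3 database objects with 4-D global embeddings
db = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0],
               [0.7, 0.7, 0.0, 0.0]])
query = np.array([0.9, 0.1, 0.0, 0.0])
print(retrieve_similar(query, db))  # -> 0 (most similar object)
```

In practice the database embeddings would be precomputed from category CAD models and the query embedding from the observed point cloud.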
Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using Joint 2D-3D Learning
Qiaojun Feng, Nikolay Atanasov
IEEE International Conference on Robotics and Automation (ICRA), 2021
project page / arXiv

This paper addresses outdoor terrain mapping using overhead images obtained from an unmanned aerial vehicle. We develop a joint 2D-3D learning approach to reconstruct local meshes at each camera keyframe, which can be assembled into a global environment model. Each local mesh is initialized from sparse depth measurements. We associate image features with the mesh vertices through camera projection and apply graph convolution to refine the mesh vertices based on joint 2D reprojected depth and 3D mesh supervision.
Fully Convolutional Geometric Features for Category-level Object Alignment
Qiaojun Feng, Nikolay Atanasov
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
project page / arXiv / doi

This paper focuses on pose registration of different object instances from the same category. Our approach transforms instances of the same category to a normalized canonical coordinate frame and uses metric learning to train fully convolutional geometric features. The resulting model is able to generate pairs of matching points between the instances, allowing category-level registration.
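Given matched point pairs between two instances, the pose can be recovered with the classical Kabsch (orthogonal Procrustes) least-squares solution. The sketch below is a generic illustration of that registration step, not the paper's implementation:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) aligning source points P
    to matched target points Q (both N x 3): R @ P_i + t ~= Q_i."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# toy check: recover a known rotation about z and a translation
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P = np.random.default_rng(0).normal(size=(10, 3))
Q = P @ R_true.T + t_true
R, t = kabsch(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With real detections the correspondences are noisy and contain outliers, so this closed-form solve is typically wrapped in a robust scheme such as RANSAC.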
OrcVIO: Object residual constrained Visual-Inertial Odometry
Mo Shan, Qiaojun Feng, Nikolay Atanasov
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
project page / arXiv / doi

This work presents OrcVIO, a visual-inertial odometry approach tightly coupled with tracking and optimization over structured object models. OrcVIO differentiates through semantic feature and bounding-box reprojection errors to perform batch optimization over the pose and shape of objects. The estimated object states aid in real-time incremental optimization over the IMU-camera states.
Localization and Mapping using Instance-specific Mesh Models
Qiaojun Feng, Yue Meng, Mo Shan, Nikolay Atanasov
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019
project page / arXiv / doi

This paper focuses on building semantic maps, containing object poses and shapes, using a monocular camera. Our contribution is an instance-specific mesh model of object shape that can be optimized online based on semantic information extracted from camera images. Multi-view constraints on the object shape are obtained by detecting objects and extracting category-specific keypoints and segmentation masks. We show that the errors between projections of the mesh model and the observed keypoints and masks can be differentiated in order to obtain accurate instance-specific object shapes.

Want to learn more about my Chinese name?

This webpage template was borrowed from Jon Barron.
(Message from Jon Barron) Feel free to steal this website's source code, just add a link back to my website. Do not scrape the HTML from the deployed instance of this website at http://jonbarron.info, as it includes analytics tags that you do not want on your own website — use the github code instead. If you'd like your new page linked to from here, submit a pull request adding yourself.