Di Feng

I am a second-year PhD researcher at Bosch Research in the Stuttgart area, Germany. My research centers on robust perception for autonomous driving. My supervisor is Prof. Dr. Klaus Dietmayer at Ulm University.

Prior to joining Bosch, I completed my master's degree with distinction at the Technical University of Munich, where I worked on tactile intelligence in humanoid robots under the supervision of Dr. Mohsen Kaboli and Prof. Dr. Gordon Cheng. I also worked as a research intern at the Institute of Robotics and Mechatronics (German Aerospace Center) and in the BMW autonomous driving team. I obtained my Bachelor's degree with honors from Tongji University.

Email  /  CV  /  Google Scholar  /  LinkedIn


I'm interested in machine learning in robotics, computer vision, autonomous driving, and tactile sensing. Much of my current research focuses on leveraging multi-modal sensors for robust, probabilistic object detection networks in autonomous driving. Keywords: multi-modal sensors, uncertainty estimation, deep learning, object detection, autonomous driving.

Academic Service

Reviewer for multiple IEEE conferences on robotics and autonomous driving, including ITSC, IV, and ICRA.

Supervising several talented master's students.


[Oct 10, 2019] I will give an invited talk on "Uncertainty Estimation in Deep Object Detectors" at the IROS 2019 workshop The Importance of Uncertainty in Deep Learning for Robotics, to be held on Nov 8, 2019 in Macau, China. Thanks to Dr. Niko Suenderhauf et al. for the invitation!

[Apr 15, 2019] Our survey paper Deep Multi-modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges was the most-read preprint on ResearchGate in Germany for one week, and in Bosch Research for eight weeks! We will regularly summarize new methods and update the paper.

Deep Multi-modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges
Di Feng, Christian Haase-Schuetz, Lars Rosenbaum, Heinz Hertlein, Claudius Glaeser, Fabian Timm, Werner Wiesbeck, Klaus Dietmayer
IEEE Transactions on Intelligent Transportation Systems, 2019 (minor revision)

Online interactive platform

Systematically summarizing methodologies and discussing challenges for deep multi-modal object detection and semantic segmentation in autonomous driving.

Can We Trust You? On Calibration of a Probabilistic Object Detector for Autonomous Driving
Di Feng, Lars Rosenbaum, Claudius Glaeser, Fabian Timm, Klaus Dietmayer
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Workshop, 2019


Identifying the uncertainty miscalibration problem in a state-of-the-art detector.

Proposing three practical methods to recalibrate uncertainties.

Leveraging Heteroscedastic Aleatoric Uncertainties for Robust Real-Time LiDAR 3D Object Detection
Di Feng, Lars Rosenbaum, Fabian Timm, Klaus Dietmayer
IEEE Intelligent Vehicles Symposium (IV), 2019 (Oral Presentation)


Boosting detection performance by modeling aleatoric uncertainties in an object detector.

Deep Active Learning for Efficient Training of a LiDAR 3D Object Detector
Di Feng, Xiao Wei, Lars Rosenbaum, Atsuto Maki, Klaus Dietmayer
IEEE Intelligent Vehicles Symposium (IV), 2019

Increasing training efficiency of a LiDAR detector by uncertainty estimation and active learning.

Towards Safe Autonomous Driving: Capture Uncertainty in the Deep Neural Network For Lidar 3D Vehicle Detection
Di Feng, Lars Rosenbaum, Klaus Dietmayer
IEEE International Conference on Intelligent Transportation Systems (ITSC), 2018


Modeling epistemic & aleatoric uncertainties in a LiDAR 3D object detector.

Showing that the two uncertainties capture very different information.

Tactile-based active object discrimination and target object search in an unknown workspace
Mohsen Kaboli, Kunpeng Yao, Di Feng, Gordon Cheng
Autonomous Robots, 2019


An autonomous robot explores unknown workspaces and recognizes objects purely based on the tactile information.

Active Prior Tactile Knowledge Transfer for Learning Tactual Properties of New Objects
Di Feng, Mohsen Kaboli, Gordon Cheng
MDPI Sensors, 2018

Enabling a robotic arm to actively transfer prior tactile knowledge when learning the physical properties of new objects via multi-modal artificial skin.

Active Tactile Transfer Learning for Object Discrimination in an Unstructured Environment using Multimodal Robotic Skin
Mohsen Kaboli, Di Feng, Gordon Cheng
International Journal of Humanoid Robotics, 2017


The robot actively learns the physical properties of new objects with only a few exploratory actions, or even a single one.

A Tactile-based Framework for Active Object Learning and Discrimination Using Multimodal Robotic Skin
Mohsen Kaboli, Di Feng, Kunpeng Yao, Pablo Lanillos, Gordon Cheng
IEEE Robotics and Automation Letters (RAL), 2017 (presented at IROS 2017)


A complete probabilistic tactile-based framework to enable robots to autonomously explore unknown workspaces and recognize objects based on their physical properties.

Website template courtesy of Jon Barron.