
Safe Reinforcement Learning for Pedestrian Collision Avoidance in Connected and Autonomous Vehicles
Ying He, Guangyuan Zou, Guang Zhou, Weike Pan and Zhong Ming

Pedestrian collision avoidance is one of the most fundamental problems in autonomous driving. Reinforcement learning (RL) offers a promising solution by learning to adapt to pedestrian behaviors. However, traditional RL methods are difficult to apply directly because they provide weak safety guarantees during both training and deployment. To address this issue, we propose three safe RL methods: 1) RL with a safety reward, 2) RL with constraints, and 3) RL with limited exploration. Our results show that the three proposed methods achieve a better trade-off between driving efficiency and the risk caused by unexpected pedestrian behaviors, and that each is applicable to pedestrian collision avoidance in different environments. If the safety reward can be well designed, standard RL performs well. If the safety reward cannot be well designed, RL with constraints is the better choice. If a strict safety guarantee is required during the RL training process, RL with limited exploration should be considered. In addition, we report several useful observations about parameter settings in these safe RL methods.
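The abstract does not give implementation details, but the "RL with limited exploration" idea is commonly realized by restricting action selection to a subset of actions certified as safe (action masking). The sketch below is a minimal, hypothetical illustration of that mechanism, not the authors' method; the discrete action set, the safety mask, and the brake-fallback index are all assumptions for the example.

```python
import numpy as np

# Hypothetical discrete action set: 0 = brake, 1 = keep speed, 2 = accelerate.
BRAKE = 0

def choose_safe_action(q_values: np.ndarray, safe_mask: np.ndarray) -> int:
    """Greedy action selection restricted to actions flagged as safe.

    q_values:  estimated action values from the learned policy
    safe_mask: boolean array, True where an action is considered safe
               (e.g., keeps the vehicle clear of a pedestrian's predicted path)
    """
    if not safe_mask.any():
        # No action is certified safe: fall back to emergency braking.
        return BRAKE
    # Exclude unsafe actions before taking the greedy argmax.
    masked = np.where(safe_mask, q_values, -np.inf)
    return int(np.argmax(masked))

# A pedestrian ahead makes accelerating (action 2) unsafe, so the agent
# picks the best remaining action even though accelerating has the
# highest raw value.
q = np.array([0.2, 0.5, 0.9])
mask = np.array([True, True, False])
print(choose_safe_action(q, mask))  # -> 1
```

Because unsafe actions are never selected, the constraint holds during exploration as well as at deployment, which is the distinguishing property of the limited-exploration family compared with reward-shaping or constrained-optimization approaches.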

Keywords: Safe reinforcement learning, pedestrian collision avoidance, autonomous driving
