Abstract
Reinforcement Learning (RL) with safety guarantees is critical for agents performing tasks in risky environments. Recent safe RL algorithms, developed based on the Constrained Markov Decision Process (CMDP), mostly treat the safety requirement as additional constraints while learning to maximize the return. However, they usually make unnecessary compromises on return for safety and learn only sub-optimal policies, due to their inability to differentiate between safe and unsafe high-reward state-actions. To address this, we propose Cost-sensitive Advantage Estimation (CSAE), which is simple to deploy in policy optimization and effectively guides agents away from unsafe state-actions by properly penalizing their advantage values. Moreover, for stronger safety guarantees, we develop the Worst-case Constrained Markov Decision Process (WCMDP), which augments CMDP by constraining the worst-case safety cost instead of the average one. With CSAE and WCMDP, we develop new safe RL algorithms with theoretical justification of their benefits for the safety and performance of the obtained policies. Extensive experiments clearly demonstrate the superiority of our algorithms in learning safer and better agents across multiple settings.
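To make the CSAE idea concrete, below is a minimal Python sketch of a cost-sensitive advantage estimator in the spirit described by the abstract: a GAE-style reward advantage is penalized by a cost-advantage term so that unsafe high-reward state-actions no longer look attractive. The function names (`gae`, `csae_advantages`), the penalty weight `kappa`, and the clipping of the cost advantage are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gae(deltas, gamma, lam):
    """Standard generalized advantage estimation over one trajectory."""
    adv = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

def csae_advantages(rewards, costs, values, cost_values,
                    gamma=0.99, lam=0.95, kappa=1.0):
    """Hypothetical cost-sensitive advantage (illustrative, not the
    paper's definition): the reward advantage minus a kappa-weighted
    cost advantage, so state-actions with high expected safety cost
    are penalized rather than preferred.

    `values` and `cost_values` are critic estimates of length T+1;
    `rewards` and `costs` are per-step signals of length T.
    """
    r_deltas = rewards + gamma * values[1:] - values[:-1]
    c_deltas = costs + gamma * cost_values[1:] - cost_values[:-1]
    adv_r = gae(r_deltas, gamma, lam)
    adv_c = gae(c_deltas, gamma, lam)
    # Penalize only where the cost advantage is positive (i.e. the
    # action is riskier than the baseline), leaving safe state-actions
    # untouched.
    return adv_r - kappa * np.maximum(adv_c, 0.0)
```

Used in place of the standard advantage in a policy-gradient update, such an estimator discourages exactly those high-reward actions whose expected cost exceeds the cost critic's baseline, which is the differentiation the abstract says plain CMDP methods lack.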
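The abstract's contrast between CMDP and WCMDP can be written out as follows. This is one common way to formalize an average versus worst-case cost constraint and is only a sketch of what WCMDP may look like; here $r$ and $c$ are the reward and safety cost, $\gamma$ the discount factor, $\tau$ a trajectory, and $d$ the cost budget.

```latex
% Standard CMDP: constrain the *expected* discounted safety cost.
\max_{\pi} \; \mathbb{E}_{\tau \sim \pi}\Big[\sum_{t} \gamma^{t} r(s_t, a_t)\Big]
\quad \text{s.t.} \quad
\mathbb{E}_{\tau \sim \pi}\Big[\sum_{t} \gamma^{t} c(s_t, a_t)\Big] \le d

% WCMDP (as described in the abstract): constrain the *worst-case* cost
% over trajectories the policy can generate, a strictly stronger requirement.
\max_{\pi} \; \mathbb{E}_{\tau \sim \pi}\Big[\sum_{t} \gamma^{t} r(s_t, a_t)\Big]
\quad \text{s.t.} \quad
\max_{\tau \in \operatorname{supp}(\pi)} \sum_{t} \gamma^{t} c(s_t, a_t) \le d
```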
| Original language | English |
| --- | --- |
| Title of host publication | ICLR 2021 Conference |
| State | Published - 2021 |
| Event | Ninth International Conference on Learning Representations, Virtual, 3 May 2021 → 7 May 2021 (Conference number: 9th), https://iclr.cc/Conferences/2021 |
Conference
| Conference | Ninth International Conference on Learning Representations |
| --- | --- |
| Abbreviated title | ICLR |
| Period | 3/05/21 → 7/05/21 |
| Internet address | https://iclr.cc/Conferences/2021 |