Fast response times are vital for emergency vehicles. They are usually improved by optimizing routes and employing intelligent traffic signal control. A recent paper suggests complementing these techniques with deep learning.
In a simulator, common vehicles follow a rule-based avoidance strategy, while emergency vehicles use a real-time, data-efficient tactical decision-making method based on reinforcement learning. The simulations showed that the avoidance strategy is highly beneficial at low speeds, whereas deep learning performs better in high-speed traffic flow. In that regime, emergency vehicles are encouraged to change lanes more often, avoiding potential deadlocks. The proposed combined decision-making method let emergency vehicles travel approximately 20% faster and helped prevent collisions. However, the reinforcement-learning-based policy produces less smooth trajectories.
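The combination described above can be sketched as a simple policy switch: below some speed threshold the deterministic avoidance rule decides, above it a learned Q-network chooses among lane-change actions. The threshold value, the action set, and the toy Q-network here are all illustrative assumptions, not details from the paper.

```python
# Hypothetical speed threshold (m/s) above which the learned policy takes over;
# the paper reports rule-based avoidance helps at low speed, DQN at high speed.
SPEED_THRESHOLD = 15.0

def rule_based_action(gap_ahead: float) -> str:
    """Deterministic avoidance: keep lane unless the gap ahead is unsafe."""
    return "keep_lane" if gap_ahead > 10.0 else "slow_down"

def dqn_action(state: list, q_net) -> str:
    """Greedy action from a Q-network: argmax over the action set."""
    actions = ["keep_lane", "change_left", "change_right"]
    q_values = q_net(state)
    return actions[max(range(len(actions)), key=lambda i: q_values[i])]

def combined_policy(ev_speed: float, gap_ahead: float, state, q_net) -> str:
    """Switch between the rule-based and learned policies by EV speed."""
    if ev_speed < SPEED_THRESHOLD:
        return rule_based_action(gap_ahead)
    return dqn_action(state, q_net)

# Stand-in Q-network that happens to favor a left lane change.
toy_q_net = lambda s: [0.1, 0.9, 0.2]

print(combined_policy(8.0, 20.0, [0.0], toy_q_net))   # low speed: rule-based
print(combined_policy(25.0, 20.0, [0.0], toy_q_net))  # high speed: learned policy
```

In practice the Q-network would be trained (e.g. with DQN) and the switch could be soft rather than a hard threshold; the sketch only shows the structure of the hybrid controller.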
An increased response time of emergency vehicles (EVs) can lead to immeasurable loss of property and life. On this account, tactical decision making for EVs' microscopic control remains an important problem to be improved. Our approach verifies that deep reinforcement learning can complement rule-based methods in terms of generalization. It shows that a deterministic avoidance strategy for common vehicles benefits EVs greatly at low speed; at high speed, however, DQN breaks the deadlock caused by reduced safe distances and makes EVs bolder in lane changing. In addition, a novel DQN method with a speed-adaptive compact state space (SC-DQN) is put forward to fit EVs' high-speed characteristics and generalize across various road topologies. All of the above is implemented in the SUMO simulator, where common vehicles are modeled rule-based while EVs are intelligently controlled.
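One plausible reading of a "speed-adaptive compact state space" is that the perception range encoded into the DQN state grows with the EV's speed, so the state stays small and independent of the specific road topology. The function below is a minimal sketch under that assumption; the look-ahead horizon, normalization constants, and padding scheme are invented for illustration and are not taken from the paper.

```python
def compact_state(ev_speed, neighbors, horizon_s=3.0, max_range=120.0):
    """Build a compact DQN state whose look-ahead range scales with EV speed.

    neighbors: list of (relative_position_m, relative_speed_mps) tuples for
    surrounding vehicles; only those within the speed-dependent look-ahead
    distance are encoded (assumed design, for illustration).
    """
    look_ahead = min(ev_speed * horizon_s, max_range)
    state = [ev_speed / 40.0]  # normalized ego speed (assumed 40 m/s cap)
    for rel_pos, rel_speed in neighbors:
        if abs(rel_pos) <= look_ahead:
            state.append(rel_pos / max_range)   # normalized relative position
            state.append(rel_speed / 40.0)      # normalized relative speed
    return state

# At 20 m/s only the near vehicle (30 m ahead) falls inside the 60 m range;
# at 40 m/s the range reaches 120 m and both vehicles are encoded.
print(compact_state(20.0, [(30.0, -5.0), (100.0, 2.0)]))
print(compact_state(40.0, [(30.0, -5.0), (100.0, 2.0)]))
```

A real DQN input would additionally be padded or masked to a fixed length; the sketch only conveys how speed adaptivity can keep the state compact.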