Abstract
Enabling mobile robots to navigate unpredictable, ever-changing environments while avoiding both static and moving obstacles is a central challenge in dynamic path planning. Advanced sensors have simplified this task by allowing robots to navigate autonomously, without human intervention. Optimal path planning in dynamic environments requires sophisticated algorithms that account for essential factors such as time, energy, and distance. These problems can be addressed with deep neural networks (DNNs) and reinforcement learning (RL), in which an artificial intelligence (AI) agent learns optimal behavioral strategies from reward signals through trial and error, much as humans do. This review examines how deep reinforcement learning (DRL) techniques can be combined with other path-planning methods to improve their efficiency and to address the problem of efficient navigation in unfamiliar environments with obstacles, focusing on approaches such as policy gradients, model-free and model-based learning, and actor-critic methods. We comprehensively survey the key concepts, challenges, and recent developments in DRL, with emphasis on its application to robotic navigation in complex scenarios.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright (c) 2024 Iraqi Journal of Intelligent Computing and Informatics (IJICI)