Fauzi Abdul Rohim, Sony Sumaryo, Syamsul Rizal, and Eki Ahmad Zaki Hamidi. Mobile Robot Navigation in Dynamic Environments Using Reinforcement Learning. In: 2022 8th International Conference on Wireless and Telematics (ICWT), 21-22 July 2022, Yogyakarta.
Text: Mobile Robot Navigation in Dynamic Environments Using Reinforcement Learning.pdf (2MB)
Abstract
The navigation system is one of the most important and crucial concerns in mobile robot research. Perception, cognition, action, human-robot interaction, and control are among the challenges that must be addressed, and every navigation system must handle these common design concerns to ensure that all tasks can be completed. The navigation system is built on learning techniques that provide the ability to reason in the face of environmental uncertainty. However, such a design is difficult to build due to a number of factors, including the inherent uncertainties of an unstructured environment, which demand higher design cost, more computational resources, and larger memory. Navigating an autonomous robot in an uncontrolled environment is difficult because it requires the cooperation of several subsystems. Mobile robots must be intelligent in order to navigate unfamiliar environments, which calls for environmental cognition, behavioral decision-making, and learning. The robot must then navigate around obstacles without colliding and arrive at a specified destination point. Combining two processes, environmental mapping and robot behaviors, yields behavior-based navigation; obstacle avoidance, wall following, corridor following, and target seeking are typical behaviors. If only one of the two processes is used, the system must be applied in two separate ways. When this approach is used, two major issues arise: (i) the combination of two simple behaviors to form a complex one, and (ii) the integration of more than two behaviors. Behavior induced by multiple concurrent goals can then be smoothly blended into a dynamic sequence of control actions. This study is concerned with the automatic navigation of a mobile robot from its starting point to its destination point and with solving several sub-problems associated with automatic navigation in an uncontrolled environment. Monte Carlo simulation is used to evaluate the algorithm's performance and to show under which conditions it performs better or worse. A reinforcement learning framework is used to obtain a mapping from positions to actions that optimizes the mobile robot's behavior. Reinforcement learning requires a large number of training samples, which makes it difficult to apply directly to real-world mobile robot navigation. To address this issue, the robot is first trained in a Gazebo simulation environment running on the Robot Operating System (ROS) middleware, and Q-Learning training is then applied to the mobile robot.
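The abstract names tabular Q-Learning as the learning method. The following is a minimal sketch of the one-step Q-Learning update for a discretized navigation task; the action set, reward handling, and hyperparameter values are illustrative assumptions and are not taken from the paper.

```python
import random
from collections import defaultdict

# Minimal tabular Q-Learning sketch for discretized robot navigation.
# The action set, hyperparameters, and state encoding below are assumptions
# for illustration, not the configuration used in the paper.

ACTIONS = ["forward", "left", "right"]   # assumed discrete action set
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # assumed learning rate, discount, exploration rate

Q = defaultdict(float)                   # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy action selection over the tabular Q-function."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-Learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In a Gazebo/ROS setup, `state` would typically be a discretized observation (e.g., binned laser-scan readings and relative goal position) and `reward` would be computed from collisions and progress toward the goal; those details vary by implementation.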
Item Type: Conference or Workshop Item (Paper)
Divisions: Fakultas Sains dan Teknologi > Program Studi Teknik Elektro
Depositing User: ST., MT. Eki Ahmad Zaki Hamidi
Date Deposited: 22 May 2023 02:53
Last Modified: 22 May 2023 02:53
URI: https://digilib.uinsgd.ac.id/id/eprint/67352