Enhancing Exploration in Reinforcement Learning through Multi-Step Actions
dc.contributor.advisor | Shrivastava, Anshumali | en_US |
dc.creator | Medini, Tharun | en_US |
dc.date.accessioned | 2020-12-10T17:37:28Z | en_US |
dc.date.available | 2020-12-10T17:37:28Z | en_US |
dc.date.created | 2020-12 | en_US |
dc.date.issued | 2020-12-03 | en_US |
dc.date.submitted | December 2020 | en_US |
dc.date.updated | 2020-12-10T17:37:28Z | en_US |
dc.description.abstract | The paradigm of Reinforcement Learning (RL) has been plagued by slow and uncertain training owing to poor exploration in existing techniques. This can mainly be attributed to the lack of training data available beforehand. Further, querying a neural network after every step is wasteful, as some states are conducive to multi-step actions. Since we train on data generated on the fly, it is hard to pre-identify action sequences that consistently yield high rewards. Prior research in RL has focused on designing algorithms that train multiple agents in parallel and accumulate information from these agents to train faster. Concurrently, research has also been done to dynamically identify action sequences suited to a specific input state. In this work, we provide insights into the necessity of, and training methods for, RL with multi-step action sequences used in conjunction with the main actions of an RL environment. We broadly discuss two approaches. The first is A4C (Anticipatory Asynchronous Advantage Actor-Critic), a method that extracts twice as many gradients from the same number of episodes and thereby achieves higher scores and converges faster. The second is an alternative to Imitation Learning that removes the need for expert state-action pairs. With as few as 20 expert action trajectories, we can identify the most frequent action pairs and append them to the novice's action space. We demonstrate the strength of our approaches by consistently and significantly outperforming the state-of-the-art GPU-enabled A3C (GA3C) on popular ATARI games. | en_US |
dc.format.mimetype | application/pdf | en_US |
dc.identifier.citation | Medini, Tharun. "Enhancing Exploration in Reinforcement Learning through Multi-Step Actions." (2020) Master’s Thesis, Rice University. https://hdl.handle.net/1911/109644. | en_US |
dc.identifier.uri | https://hdl.handle.net/1911/109644 | en_US |
dc.language.iso | eng | en_US |
dc.rights | Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder. | en_US |
dc.subject | Reinforcement Learning | en_US |
dc.subject | Imitation Learning | en_US |
dc.subject | Machine Learning | en_US |
dc.subject | ATARI | en_US |
dc.subject | DeepMind | en_US |
dc.subject | A3C | en_US |
dc.subject | GA3C | en_US |
dc.subject | Actor Critic | en_US |
dc.title | Enhancing Exploration in Reinforcement Learning through Multi-Step Actions | en_US |
dc.type | Thesis | en_US |
dc.type.material | Text | en_US |
thesis.degree.department | Electrical and Computer Engineering | en_US |
thesis.degree.discipline | Engineering | en_US |
thesis.degree.grantor | Rice University | en_US |
thesis.degree.level | Masters | en_US |
thesis.degree.name | Master of Science | en_US |
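The abstract above describes deriving multi-step actions from a handful of expert trajectories: the most frequent consecutive action pairs are identified and appended to the novice's action space. The following is a minimal illustrative sketch of that idea, not code from the thesis; the function names, the dummy trajectories, and the choice of top_k are assumptions made for the example.

    # Hedged sketch: extract frequent consecutive action pairs from expert
    # trajectories and append them to a novice agent's action space as
    # two-step (macro) actions. Names and data here are illustrative only.
    from collections import Counter
    from typing import List, Tuple

    def frequent_action_pairs(expert_trajectories: List[List[int]],
                              top_k: int = 5) -> List[Tuple[int, int]]:
        """Count consecutive action pairs across expert trajectories and
        return the top_k most frequent ones."""
        pair_counts = Counter()
        for actions in expert_trajectories:
            pair_counts.update(zip(actions, actions[1:]))
        return [pair for pair, _ in pair_counts.most_common(top_k)]

    def augmented_action_space(num_primitive_actions: int,
                               macro_actions: List[Tuple[int, int]]) -> List[Tuple[int, ...]]:
        """Novice's action space: all primitive actions plus the
        expert-derived two-step macro actions."""
        primitives = [(a,) for a in range(num_primitive_actions)]
        return primitives + [tuple(m) for m in macro_actions]

    # Example with 20 (here: dummy) expert trajectories for illustration.
    expert_trajs = [[0, 2, 2, 3, 2, 2, 1] for _ in range(20)]
    macros = frequent_action_pairs(expert_trajs, top_k=3)
    actions = augmented_action_space(num_primitive_actions=4, macro_actions=macros)
    print(macros)   # e.g. [(2, 2), (0, 2), (2, 3)]
    print(actions)  # primitive one-step actions followed by two-step macros

When a macro action is selected, the agent would execute its two primitive actions as a fixed sequence, so the policy network is queried once per macro rather than after every step, which matches the abstract's point about avoiding a wasteful query at each step.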