
Reinforcement Learning (RL)



A subset of machine learning called reinforcement learning (RL) enables an AI-driven system (also known as an agent) to learn through trial and error, receiving feedback on its mistakes. Positive or negative feedback is delivered as reward or punishment, and the agent's goal is to maximize its cumulative reward. RL offers artificial intelligence that resembles natural intelligence as closely as is currently achievable, learning from its own mistakes as it goes.


The one thing supervised learning and RL have in common is that both learn a mapping between inputs and outputs. The difference is that in supervised learning the feedback tells the agent the correct action it should have taken, whereas real life offers no such answer key.


To complete a task correctly, the agent must decide what actions to take on its own. RL's aims also differ from those of unsupervised learning: unsupervised learning looks for similarities or differences between data points, while RL looks for the action model that maximizes the agent's total cumulative reward. The agent solves the RL problem through its own actions and the feedback it receives from the environment, without a training dataset.


RL techniques take a more dynamic approach than conventional machine learning; examples include Monte Carlo methods, SARSA, and Q-learning. Three broad RL implementations exist (a toy sketch of the object each one learns follows this list):

  • Policy-based RL learns the policy directly, whether deterministic or stochastic, with the goal of maximizing cumulative reward.

  • Value-based RL estimates a value function and chooses the actions that maximize it.

  • Model-based RL builds a virtual model of a particular environment, and the agent learns to perform within that model's constraints.
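
As a rough illustration of the distinction, here is a toy sketch of the core object each flavour of RL learns: a policy, a value table, or a model of the environment. The states ("s0", "s1") and actions ("left", "right") are hypothetical, and none of this is a full algorithm.

```python
# Policy-based: the policy itself is learned -- here, a simple state -> action map.
policy = {"s0": "right", "s1": "left"}

# Value-based: a Q-table estimating the value of each (state, action) pair;
# the agent acts greedily with respect to these estimates.
q_table = {("s0", "right"): 1.2, ("s0", "left"): 0.3,
           ("s1", "right"): 0.1, ("s1", "left"): 0.9}

def greedy_action(state, actions=("left", "right")):
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))

# Model-based: a learned model of the environment mapping (state, action)
# to a predicted (next_state, reward); the agent can plan inside this model.
model = {("s0", "right"): ("s1", 1.0), ("s1", "left"): ("s0", 0.0)}

print(greedy_action("s0"))  # -> "right"
```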

How does RL function?


It is difficult to explain reinforcement learning thoroughly in a single article. The book Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto is a useful resource for building a solid foundation in the subject. Video games, with their built-in reward and punishment systems, are an ideal medium for explaining reinforcement learning, which is why vintage Atari games have served as a testing ground for RL algorithms.


Think of a video-game character as the agent, existing in a specific environment. The situations it runs into correspond to states. When the agent takes an action, it moves from one state to another, and after the transition it may receive something good or bad: a reward or a penalty. The strategy that determines which action the agent takes in its current state is known as the policy.
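
The whole interaction boils down to a simple loop: observe the state, act, collect the reward, repeat. Below is a minimal sketch of that loop, assuming the Gymnasium package and its CartPole environment; the random action choice is just a placeholder for a real policy.

```python
import gymnasium as gym  # assumes `pip install gymnasium`

# A minimal agent-environment loop: observe the state, pick an action,
# receive a reward, and move to the next state until the episode ends.
env = gym.make("CartPole-v1")
state, info = env.reset(seed=0)

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # placeholder "policy": act at random
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"Episode finished with cumulative reward {total_reward}")
env.close()
```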


To develop an optimal policy, the RL agent must balance exploiting what it already knows will earn reward against exploring new states; this is known as the exploration-exploitation trade-off. The goal is to maximize cumulative reward over the course of training, not to grab the largest immediate reward. Time matters too: the agent's reward depends on the whole sequence of states it passes through, not just its current one. Algorithms such as policy iteration help determine the best action to take in each state.
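
A common, simple way to manage that trade-off is an epsilon-greedy rule: explore with a small probability, exploit otherwise. A minimal sketch (the q_values list stands in for a hypothetical set of action-value estimates):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon pick a random action (explore),
    otherwise pick the action with the highest estimated value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=q_values.__getitem__)      # exploit

# Example: with these estimates the agent usually chooses action 1,
# but occasionally tries the others to keep gathering information.
print(epsilon_greedy([0.2, 1.5, 0.7]))
```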


Almost all RL problems are formalized as Markov decision processes (MDPs), and the environment in a reinforcement learning algorithm is usually stated as an MDP. The SARSA algorithm learns an action-value function for such an MDP and is a minor variation on the well-known Q-learning method; SARSA and Q-learning are the two most frequently used RL algorithms.
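
To make the difference concrete, here is a sketch of the two tabular update rules, assuming a dictionary Q indexed by (state, action) pairs, a learning rate alpha, and a discount factor gamma. Q-learning bootstraps from the best next action (off-policy), while SARSA bootstraps from the action the agent actually takes next (on-policy). The states and actions in the usage lines are hypothetical.

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    # Off-policy: target uses the value of the *best* next action.
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    # On-policy: target uses the value of the action the agent will actually take.
    target = r + gamma * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))

Q = {}
actions = ["left", "right"]
q_learning_update(Q, "s0", "right", 1.0, "s1", actions)
sarsa_update(Q, "s1", "left", 0.0, "s0", "right")
print(Q)
```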


Actor-Critic, a temporal-difference variant of policy-gradient techniques, is another widely used method; it resembles the REINFORCE algorithm with a baseline. One of the key components of many reinforcement learning algorithms is the Bellman equation, which refers to the dynamic programming equation associated with discrete-time optimization problems.
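
In its optimality form, the Bellman equation says that a state's value equals the best action's expected immediate reward plus the discounted value of the resulting next state. Here is a small sketch of that backup applied repeatedly (value iteration) to a made-up two-state MDP; all state and action names are hypothetical.

```python
# One sweep of value iteration: each state's value is replaced by the Bellman backup
#   V(s) = max_a  sum_{s'} P(s'|s,a) * (R(s,a,s') + gamma * V(s'))
def bellman_backup(V, transitions, gamma=0.9):
    """transitions[state][action] is a list of (prob, next_state, reward) tuples."""
    new_V = {}
    for s, actions in transitions.items():
        new_V[s] = max(
            sum(p * (r + gamma * V.get(s2, 0.0)) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
    return new_V

# Toy two-state MDP.
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)], "go": [(1.0, "s1", 1.0)]},
    "s1": {"stay": [(1.0, "s1", 0.5)]},
}
V = {"s0": 0.0, "s1": 0.0}
for _ in range(50):
    V = bellman_backup(V, transitions)
print(V)  # converges toward V(s1) = 5.0 and V(s0) = 5.5
```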


The Asynchronous Advantage Actor-Critic (A3C) algorithm is one of the more recent deep reinforcement learning algorithms. In contrast to other well-known deep RL algorithms such as the Deep Q-Network (DQN), which uses a single agent and a single environment, A3C employs several agents, each with its own network parameters and its own copy of the environment.


The agents interact with their environments asynchronously, learning from each encounter and contributing to the knowledge of a shared global network. The global network in turn gives the agents access to a wider variety of training data, so the whole system benefits, much as people learn from one another's experiences.
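
The skeleton of that asynchronous pattern can be sketched with ordinary threads: several workers, each holding a local copy of the parameters, push updates into one shared global parameter vector. This is a structure-only sketch; the random "gradient" is a placeholder, not a real actor-critic gradient, and the environments are omitted entirely.

```python
import threading
import numpy as np

global_params = np.zeros(4)        # stand-in for the global network's weights
lock = threading.Lock()

def worker(worker_id, steps=100, lr=0.01):
    local_params = global_params.copy()      # each worker keeps its own copy
    rng = np.random.default_rng(worker_id)
    for _ in range(steps):
        fake_gradient = rng.normal(size=4)   # placeholder for a policy/value gradient
        with lock:                           # apply the update to the shared network
            global_params[:] -= lr * fake_gradient
            local_params = global_params.copy()  # re-sync with the global network

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(global_params)
```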


Does RL require data?


RL generates its own data through trial and error as the agent interacts with its environment; unlike supervised or unsupervised machine learning, it does not start from a pre-collected dataset. A family of model-free RL techniques known as temporal difference (TD) learning uses bootstrapping: it learns from the current estimate of the value function. The term "temporal difference" refers to a learning strategy that uses the differences between predictions at successive time steps to drive learning. The prediction is revised at every time step, bringing it closer to the prediction of the same quantity at the following time step.
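
For illustration, a minimal TD(0) prediction update, assuming a dictionary V of state-value estimates, a learning rate alpha, and a discount factor gamma; the states and reward in the usage line are hypothetical.

```python
def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.99):
    # Nudge the current estimate toward the bootstrapped target r + gamma * V(s').
    target = reward + gamma * V.get(next_state, 0.0)
    V[state] = V.get(state, 0.0) + alpha * (target - V.get(state, 0.0))
    return V

V = {}
td0_update(V, "s0", 1.0, "s1")   # after one observed transition
print(V)
```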


TD learning combines ideas from dynamic programming and Monte Carlo methods and is frequently used to predict the total future reward. In TD, however, learning happens after every step, whereas Monte Carlo methods learn only at the end of an episode. Gerald Tesauro built the computer backgammon program TD-Gammon in 1992 at IBM's Thomas J. Watson Research Center. It used RL, and more specifically a non-linear version of the TD algorithm, to train a program to play backgammon at grandmaster level, and it was a crucial step in teaching computers to play challenging games.


Monte Carlo techniques are a broad class of algorithms that use repeated random sampling to produce numerical estimates of probabilities. The likelihood of an opponent's move in a game like chess, of a weather event occurring in the future, or of a car accident under specific circumstances can all be estimated with Monte Carlo methods.


Monte Carlo methods first appeared in particle physics and grew up alongside the first computers. They are named after the casino in Monte Carlo, Monaco. Monte Carlo simulations make it possible to incorporate risk into quantitative analysis and decision-making.


A wide range of industries use this technique, including finance, project management, manufacturing, engineering, research and development, insurance, transportation, and the environment. Monte Carlo methods are also a foundation for estimating the likelihood of outcomes in artificial intelligence problems that rely on simulation, in machine learning or robotics. The bootstrap method, a resampling strategy for estimating a quantity such as a model's accuracy from a small dataset, is itself based on Monte Carlo principles.
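
Both ideas fit in a few lines of code. Below is a small sketch: a Monte Carlo estimate of a simple probability (two dice summing to at least 10, exact value 6/36) and a bootstrap resampling of a tiny made-up dataset to gauge the spread of its sample mean.

```python
import random

# Monte Carlo: estimate a probability by repeated random sampling.
def monte_carlo_dice(trials=100_000):
    hits = sum(random.randint(1, 6) + random.randint(1, 6) >= 10 for _ in range(trials))
    return hits / trials

# Bootstrap: resample a small dataset with replacement many times and
# look at the spread of the resulting sample means.
def bootstrap_means(data, resamples=1_000):
    return [sum(random.choices(data, k=len(data))) / len(data) for _ in range(resamples)]

print(monte_carlo_dice())                     # close to 6/36 ≈ 0.167
means = bootstrap_means([2.1, 3.4, 2.8, 3.9, 2.5])
print(min(means), max(means))
```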


Applications of RL


DeepMind used RL to teach artificial intelligence to play challenging board games such as chess, Go, and shogi (Japanese chess). It was central to the creation of AlphaGo, the first program to defeat a professional human Go player, and it gave rise to the deep neural network agent AlphaZero, which in just four hours taught itself to play chess well enough to defeat the chess engine Stockfish.


AlphaZero's only components are a neural network and the Monte Carlo Tree Search method. Compare this with Deep Blue's brute-force processing power, which as far back as 1997, when it defeated then world chess champion Garry Kasparov, could consider 200 million potential chess positions per second. However, because the representations learned by deep neural networks such as AlphaZero's are opaque, our understanding of their decisions is limited. This problem is examined in the paper Acquisition of Chess Knowledge in AlphaZero.


In Conclusion


Unmanned spacecraft have been proposed for traveling through unfamiliar settings such as the Moon or Mars. A team of Greek scientists created an OpenAI Gym-compatible environment known as MarsExplorer and trained four deep reinforcement learning algorithms on it: A3C, Rainbow, PPO, and SAC, with PPO showing the best performance.


MarsExplorer is the first OpenAI Gym-compatible reinforcement learning framework designed for the exploration of uncharted terrain. Other applications of reinforcement learning include self-driving cars, stock price forecasting in trading and finance, and the diagnosis of rare diseases in medicine.
