This is the second post in our Simulation and Deep Reinforcement Learning (DRL) series. In our first post, we covered the benefits of simulations as training environments for DRL. Now, we’ll focus on how to make simulations + DRL work.
In the example below, we will train a Bonsai BRAIN using a Simulink model. The goal is to teach the BRAIN (an AI model built on the Bonsai Platform) how to tune a wind turbine and maximize its energy output by keeping it turned into the wind at the optimal angle.
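To make the objective concrete, here is a minimal sketch of the kind of reward signal involved. It is not the Simulink model or the Bonsai API: the function names and constants (`turbine_power`, `k`, `rated_power_kw`) are illustrative, and the only physical assumption is the standard rule of thumb that captured power scales with wind speed cubed and falls off with yaw misalignment roughly as cos³ of the error.

```python
import math

def turbine_power(wind_speed_mps, yaw_error_rad, rated_power_kw=1500.0):
    """Toy power model (illustrative constants, not the MathWorks model):
    available power scales with wind speed cubed, and yaw misalignment
    reduces capture roughly as cos^3 of the error angle."""
    k = 0.5  # simplified capture coefficient, chosen for illustration
    raw = k * wind_speed_mps ** 3 * math.cos(yaw_error_rad) ** 3
    return max(0.0, min(raw, rated_power_kw))  # clamp at rated power

def reward(wind_speed_mps, yaw_error_rad):
    # A natural DRL reward here is simply the power produced each step,
    # so the BRAIN is pushed toward keeping the yaw error near zero.
    return turbine_power(wind_speed_mps, yaw_error_rad)
```

A perfectly aligned turbine earns the highest reward at a given wind speed, and reward drops smoothly as the nacelle drifts off the wind direction, which gives the learning algorithm a clear gradient to follow.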
Simulink provides a great training environment for DRL because it allows third parties like Bonsai to integrate with and control simulation models from the outside. This ability is one of the basic requirements for a simulation platform to be suitable for deep reinforcement learning with Bonsai AI. More requirements can be found here.
This Simulink Wind Turbine model is provided by The MathWorks. For this scenario, it represents a simple control problem that can be solved by applying reinforcement learning.
First, we need to identify a control point within the model so Bonsai can take over its inputs and outputs. We do this by inserting a Bonsai block into the model, replacing the existing control block.
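The control-point idea can be sketched as a simple step loop: the agent reads the observations the replaced block used to receive and writes the action it used to produce. The class below is a hypothetical stand-in for the Simulink model, not the Bonsai SDK or the MathWorks turbine model; every name (`ToyYawEnv`, `step`, the yaw-rate limits and wind-drift dynamics) is an assumption made for illustration.

```python
import math
import random

class ToyYawEnv:
    """Minimal stand-in for the model's control point: the agent observes
    the wind direction and current yaw angle, and commands a yaw rate each
    step. Dynamics and limits are illustrative, not the Simulink model."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # Start with the nacelle at zero yaw and a random wind direction.
        self.wind_dir = self.rng.uniform(-math.pi, math.pi)
        self.yaw = 0.0
        return self._obs()

    def _obs(self):
        return {"wind_dir": self.wind_dir, "yaw": self.yaw}

    def step(self, yaw_rate):
        # Clamp the commanded yaw rate, advance the nacelle, let the wind
        # drift slowly, and reward well-aligned operation.
        yaw_rate = max(-0.1, min(0.1, yaw_rate))
        self.yaw += yaw_rate
        self.wind_dir += self.rng.uniform(-0.01, 0.01)
        error = self.wind_dir - self.yaw
        reward = math.cos(error) ** 3  # capture falls off with misalignment
        return self._obs(), reward

env = ToyYawEnv()
obs = env.reset()
# A simple proportional controller standing in for the policy a trained
# BRAIN would provide: steer the yaw toward the wind direction.
for _ in range(100):
    action = 0.5 * (obs["wind_dir"] - obs["yaw"])
    obs, r = env.step(action)
```

In the real setup, the loop body is driven by the Bonsai Platform through the inserted Bonsai block rather than by hand-written Python, but the contract is the same: observations out of the model, actions back in, reward per step.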
Once training is complete, you can use the trained Bonsai BRAIN to get predictions.
Simulators are a crucial tool for reinforcement learning. Enterprises can use simulation models that reflect real-world business processes or physical realities and optimize them with Bonsai’s reinforcement learning technology. Typically, no changes to the simulation model are needed. If you missed our first post on how simulations can be used for training, you can find it on our blog.
Bonsai can help you apply deep reinforcement learning technology and build intelligent control into your own industrial systems using Simulink as the training environment. If you are using Simulink and you want to try out Bonsai AI, join our beta program and get started here.