Can we use recorded data from an environment to train a physics engine?
Model generation is SO laborious. What if you could automate the process?
On Building and Using a winch to allow a real robot to train via reinforcement learning
When talking about PID controllers with people outside of control theory, I wanted something that lets me actually demo a PID controller, so I built this device. It's a perfect teaching aid.
There are a lot of RL posts and papers that discuss return normalization. This post discusses a problem with a common strategy, and my attempt to fix it.
URDF files let you describe the physical characteristics of your robot to a physics engine. That's great. The bummer is that the files tend to get long, and they are in XML (sigh).
Here is a small automation attempt using the Mustache template engine.
Weekend Side Project: Meena is a small wooden box containing a timer and a striker, intended to strike an instrument like a singing bowl or gong. Used for meditation.
Beaker is a two-wheeled balancing robot. Here we build a low-level controller that maintains desired wheel rotational velocity, and discuss tuning.
Transfer Learning is basically a brain transplant – taking a neural net that was trained for one task and applying it (or transferring it) to another task. Here I transfer from a virtual robot to a real one — just like in The Matrix!
Imagine you have a motor turning a gear which turns another gear. Unless those gears are *perfect* (pro tip: they aren’t), you are going to have something called backlash. Here is how to model it.
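As a taste of what the post covers, here is a minimal sketch of the classic deadband model of backlash (the function name and variable names are my own, not from the post): the driven gear only moves once the driving gear has taken up the free play between teeth, and inside that gap the driven gear simply holds its position.

```python
def backlash(theta_in, theta_out_prev, gap):
    """Deadband model of gear backlash.

    theta_in: angle of the driving gear (radians)
    theta_out_prev: last angle of the driven gear (radians)
    gap: total free play between gear teeth (radians)
    """
    half = gap / 2.0
    if theta_in - theta_out_prev > half:
        # Driving tooth has engaged the forward face: driven gear follows,
        # lagging by half the gap.
        return theta_in - half
    elif theta_in - theta_out_prev < -half:
        # Driving tooth has engaged the reverse face.
        return theta_in + half
    # Still inside the deadband: no tooth contact, driven gear doesn't move.
    return theta_out_prev
```

Sweep the input back and forth and you will see the hysteresis this creates: reversing direction costs a full `gap` of input motion before the output responds again.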