Wednesday, December 10, 2014

Self-Balancing Robot

In my last class for my Masters, I decided to build a self-balancing robot for the final project. What I liked about this project is that it brought together a few very relevant areas of embedded systems in one build. I'll try to write this post the way I wrote the presentation.

The class was 'Mixed Signal Embedded Systems', and revolved around the Cypress chip called the PSoC4 (programmable system-on-chip). The chip is interesting in that it has both an ARM Cortex-M0 processor AND some (emphasis on 'some') programmable logic (PLDs). It also has two embedded amplifiers and some other interesting HW components (all configurable through the ARM via registers). Anyway, this post is not about this chip, but feel free to read up on it. It is another example of the direction in which embedded systems are heading.

Motivations for this project

  • Interface to inertial measurement sensors (Gyros/Accelerometers)
  • Employ light-duty sensor fusion for robot pose estimation
  • Implement an embedded PID controller to maintain robot balance

The problem: The Inverted Pendulum

The goal of the inverted pendulum problem is to keep the pendulum from tipping over. Without intervention, the pendulum is naturally unstable and will eventually tip over. Controls must be used to counteract this tendency by applying a force in the horizontal direction.

Inertial Measurement Sensors

The MPU9150 was used for inertial measurement sensing in this project. The MPU9150 is:

  • a 3-axis accelerometer
  • a 3-axis gyro
  • a 3-axis compass
  • ...all integrated in a single IC

The benefit of having these components on a single chip is that the error caused by misaligned axes is greatly reduced and controlled by the manufacturing process. It also makes the package both smaller and cheaper.

The sensor data is accessed through the I2C protocol, so a microcontroller is needed to read it (I wrote libraries in both C and C++, which I will post to my github page and provide a link).
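As a rough illustration, reading the raw accelerometer values boils down to a burst read of six registers starting at ACCEL_XOUT_H (0x3B in the MPU9150 register map, device address 0x68). The i2c_read_bytes() helper below is just a placeholder for whatever I2C routine your platform provides:

    #include <stdint.h>

    #define MPU9150_ADDR  0x68  /* 7-bit I2C address (AD0 pin low) */
    #define ACCEL_XOUT_H  0x3B  /* first of six accel data registers */

    /* placeholder: platform-specific burst read of 'len' bytes
       starting at register 'reg' */
    extern void i2c_read_bytes(uint8_t addr, uint8_t reg,
                               uint8_t *buf, uint8_t len);

    void read_accel(int16_t *ax, int16_t *ay, int16_t *az)
    {
        uint8_t raw[6];
        i2c_read_bytes(MPU9150_ADDR, ACCEL_XOUT_H, raw, 6);
        /* each axis is a big-endian signed 16-bit value */
        *ax = (int16_t)((raw[0] << 8) | raw[1]);
        *ay = (int16_t)((raw[2] << 8) | raw[3]);
        *az = (int16_t)((raw[4] << 8) | raw[5]);
    }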


Accelerometers measure linear acceleration. The diagram above shows the 3-axis accelerometer placement inside the MPU9150. When an accelerometer is stationary, it measures a net acceleration equal to the gravitational acceleration of 9.8 m/s^2 (at least on Earth). The orientation of this sensor with respect to the gravitational force (let's call it earth's z-axis) can be estimated using trigonometry:
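In C this is a one-liner; for a tilt in the accelerometer's x-z plane, the usual form is an atan2 of the two measured components (a sketch, with the angle in radians):

    #include <math.h>

    /* tilt angle in the accelerometer's x-z plane, in radians;
       valid only when gravity is the sole acceleration */
    float accel_angle(float ax, float az)
    {
        return atan2f(ax, az);
    }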

This estimate is accurate only when no external forces are being applied. When external forces are applied, such as the robot moving or tipping, the orientation cannot be accurately estimated.


Gyros measure the rate of angular change. The above diagram shows the 3-axis gyro placement inside the MPU9150. Gyros are not susceptible to linear forces such as vibrations or movement; they are only sensitive to angular displacements. Gyros cannot be used to directly measure physical orientation, but their readings can be integrated over time.

dt is the control loop time.
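In code, the integration is just a running sum; a minimal sketch:

    /* integrate the y-axis gyro rate (deg/s) over one loop period
       dt (seconds) to update the angle estimate (degrees) */
    float integrate_gyro(float angle, float gyro_rate, float dt)
    {
        return angle + gyro_rate * dt;
    }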

Gyros suffer from a phenomenon known as drift, where they show a small rate of change even though no actual rate of change is occurring physically. When integrating, this error adds up over time, causing an inaccurate estimate of angle.

Also note from the two pictures of the axis alignment: the gyro and accelerometer share the same axes (the x-axis gyro, for example, is perpendicular to the accelerometer's y-z plane). So the angle computed in the accelerometer's x-z plane (as shown in the equation) corresponds to the rotation measured by the gyro's y-axis.

Estimating Robot Pose

In order to accurately estimate angle using gyros and accelerometers, one has to combine the sensor readings in a way that takes advantage of their strengths in order to make up for their weaknesses. This technique is known as sensor fusion. There are a few different pose estimation methods, each with its own strengths and weaknesses. For this project, one that is easily implemented on a low-power microcontroller, called complementary filtering, is used. The equation below shows how this is implemented:
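A sketch of the filter in C, combining the gyro integration and accelerometer angle from above:

    #define ALPHA 0.99f   /* filter weight */
    #define DT    0.01f   /* 10 ms control loop */

    /* complementary filter: blend the integrated gyro rate with
       the accelerometer's absolute (but noisy) angle estimate */
    float fuse(float angle, float gyro_rate, float accel_angle)
    {
        return ALPHA * (angle + gyro_rate * DT)
             + (1.0f - ALPHA) * accel_angle;
    }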

Alpha is used to weight the two angle estimates before they are combined. Since the accelerometer is more prone to adding noise to the system, a large alpha is chosen to give the gyro integration more weight in the equation, effectively applying a low-pass filter to the accelerometer. For this project, an alpha of 0.99 was used (along with a control loop time of 10 milliseconds).

PID Control System

Proportional-Integral-Derivative (PID) is a control loop feedback technique that uses the error between a set-point and the observed output of a system (often referred to as the plant). A PID controller must be tuned by varying the three gains associated with the control algorithm: Kp, Ki and Kd. Different choices for these constants shape the system's response in terms of response time, overshoot and oscillation.

Kp: The proportional term; depends on the present error only.
Ki: The integral term; depends on the accumulation of past errors.
Kd: The derivative term; a prediction of future errors.

The following is a pseudo-code example of implementing a PID algorithm in a control loop.
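Something along these lines, reusing DT from the filter sketch above, with get_angle(), set_motor() and wait_for_tick() standing in for the actual sensor-fusion, motor-driver and loop-timing calls:

    extern float get_angle(void);        /* fused angle estimate */
    extern void  set_motor(float cmd);   /* motor driver command */
    extern void  wait_for_tick(void);    /* fixed-period wait */

    float Kp, Ki, Kd;    /* tuned gains */
    float target;        /* set-point angle (upright) */

    void pid_loop(void)
    {
        float integral = 0.0f, prev_error = 0.0f;

        for (;;) {
            wait_for_tick();                     /* fixed 10 ms period */
            float error = target - get_angle();
            integral += error * DT;
            float derivative = (error - prev_error) / DT;
            prev_error = error;
            set_motor(Kp * error + Ki * integral + Kd * derivative);
        }
    }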

The output of the PID algorithm is the sum of the gain terms multiplied by their function of the error. The error is a simple difference between the target (in this case an angle) and the observed state (the angle estimated by the sensor fusion algorithm). It is important to keep the control loop rate consistent (so use an interrupt, or don't do extra processing).
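One simple way to enforce a consistent rate is a periodic timer interrupt that sets a flag for the main loop to wait on; a sketch (registering the ISR itself is platform-specific):

    volatile uint8_t tick = 0;

    /* 10 ms periodic timer interrupt handler */
    void timer_isr(void)
    {
        tick = 1;
    }

    void wait_for_tick(void)
    {
        while (!tick)
            ;        /* idle until the next 10 ms boundary */
        tick = 0;
    }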


I built my robot using parts from one of my other robots to save on cost. I built a cheap frame out of small wood squares, a dowel and hot glue (all bought at Michaels for less than $5):

Starting off with just a P controller (Kp > 0, Ki = Kd = 0), the system was nowhere near useful. The robot would overshoot and bang itself on the floor. I thought it would destroy itself before I would be able to stabilize it. Fortunately, after adding some derivative gain (Kd), the robot was able to keep itself upright, with some oscillation. When I added a disturbance (a little push), the robot would travel horizontally until it eventually fell over. After adding some integral gain (Ki), the robot seemed to stabilize itself very well. At this point it would re-stabilize quickly from a slight push, with some oscillation and a small amount of horizontal travel.

My first (and favorite) set of gains was: Kp = 8, Ki = 0.5, Kd = 10. These gains won't mean much to you, as they are fit to my robot/system, which depends on my control loop, motor response, motor torque, speed resolution, robot height...and it goes on. You would have to experiment with your own system to find a suitable set of gains.

What I can do is share my next set of gains as a comparison, to show the change in performance. I thought it would be good to get rid of the oscillation, so I increased the derivative gain (Kd) to 60. In order to keep my robot stable I had to increase the Kp and Ki gains slightly. This was good, in that my robot did not oscillate so much while keeping its balance in the undisturbed case. And when I did add a slight push, the robot corrected its angle almost instantly. However, the robot was forced to travel a much larger distance in the horizontal direction to maintain this response.

I learned that an added disturbance (like a push) is like adding energy to the system. In order to deal with the extra energy, the robot can either oscillate back and forth a few times while minimizing horizontal travel, or travel considerably farther in the horizontal direction just to maintain the angle I had programmed.

Anywho, here is a video of the robot in action:

I have a github site, so I'll post the code there. I have both a C and a C++ library for the MPU9150. The C library, however, is dependent upon the PSoC4, but you can strip what you need from it rather easily. I'll also include an application I wrote in Processing, which the robot used to communicate with my PC so I could understand what was going on a little better:

Link to the code:


Tuesday, July 29, 2014

Autonomous Robot with Zynq

In an earlier post, I talked up the Xilinx Zynq (an IC with both an FPGA and a microcontroller). In my Advanced Embedded System Design course, we had to build an autonomous robot that could navigate around with some form of intelligence and seek out certain objects to 'destroy'. Now, the term 'destroy' was really left up to the students to define; for our robot, a laser pointer was used to mark enemies. Identifying 'enemies' was a challenge, so we incorporated a camera so the robot could see and track on its own. Meet our robot (below):

Equipped with:

  • Zybo
  • OV7670 Camera
  • Arduino
  • Sabertooth Motor Controller
  • IR Proximity Sensors
  • LiPo Batteries (12V + 7V)
  • Pan/Tilt Servo motors
  • Laser

Custom logic was designed for the FPGA portion of the Zynq to maneuver the robot, control the pan/tilt bracket, capture frame data from the camera, and lastly, 'fire the laser'. The ARM portion of the Zynq was used as the algorithm prototyping environment (in C) to make use of the custom FPGA interfaces. The Arduino was used to configure the OV7670 over its I2C-like (SCCB) interface. The following diagram shows how all the components were interfaced.

The OV7670 was chosen for its parallel data interface and low cost ($20), allowing us to obtain an image capture rate of 30 fps. However, you get what you pay for in terms of ease of interfacing. I had to design custom logic in VHDL to perform frame captures and store them in block RAM on the FPGA. It's hard to debug what you can't see. Nonetheless, after days of toying around, the Zybo was finally able to see. In order to view what the Zybo saw, I wrote a quick program (using Processing) to transfer images from the Zynq to my laptop over a serial connection. Below is the process flow for debugging the images.

The custom FPGA logic which interfaced to the OV7670 was designed to either stream frames into the block RAM or take a single snapshot and leave it there. The FPGA had to interface/synchronize to the vsync, href, pixel-clock, and 8-bit data signals of the OV7670. It also needed to interface with the ARM processor and the FPGA-based block RAM. Below is the block model of the custom camera control as shown in Vivado.

We couldn't have done it without camera register settings provided by this source: Hamsterworks - OV7670
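Applying settings like those from the Arduino is mostly a matter of register writes; a sketch, assuming a hypothetical sccb_write() wrapper around the Arduino's I2C routines (the OV7670 responds at 7-bit address 0x21, and COM7 below is its reset/format control register):

    #include <stdint.h>

    #define OV7670_ADDR  0x21   /* 7-bit SCCB/I2C address */
    #define REG_COM7     0x12   /* control register; bit 7 = soft reset */

    /* placeholder for a two-byte I2C write: register, then value */
    extern void sccb_write(uint8_t addr, uint8_t reg, uint8_t val);

    void ov7670_init(void)
    {
        sccb_write(OV7670_ADDR, REG_COM7, 0x80);  /* soft reset */
        /* ...followed by the long table of register settings */
    }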

Without those settings, the images didn't turn out so well:

I'll post some videos of the robot in action soon...

Tuesday, April 29, 2014

Xilinx Zynq7000 and the Zybo

If you are into getting your hands on the latest embedded technology, the Zynq7000 is a great platform to get familiar with. Xilinx, a leader in FPGA design (with which I have no affiliation), partnered with ARM to create one of the first microprocessors with an on-chip FPGA (or is it an FPGA with an on-chip processor?).

This technology combines the power of using an FPGA to perform high-speed parallel tasks with a dual-core ARM processor with standard microcontroller peripherals (CAN, I2C, SPI, UART, etc.). I was lucky enough to be introduced to this device in my Advanced Embedded System Design course at Oakland University, and will continue working with it as long as I can continue to make sense of partitioning my embedded design projects into both 'some hardware' and 'some software'. I purchased the Zybo development board made by Digilent; however, there are others out there (the Zedboard, and the MicroZed by Avnet).

To paint a better picture of how this is so useful, imagine using a microcontroller to both interface to a camera and perform image processing in order to make a decision (maybe you are trying to track an object). In terms of cameras you are able to interface to, you are limited to those with slower serial interfaces (SPI, UART). If you really wanted to interface a microcontroller to a camera with a faster parallel interface, you would have to spend a little more money on a fast enough controller. But this doesn't leave your controller much time to do anything other than image processing and frame grabbing! On the flip side, if you were to use an FPGA alone, it becomes harder to implement your decision making, development takes longer, and debugging is harder.

Having an FPGA and microcontroller on the same IC solves this exact problem. Now you can develop your frame grabbing for your higher-speed camera and even do some image processing on the FPGA. In this way, the FPGA can be treated as a custom co-processor for the application running on the processor. And the FPGA-to-processor interface is seen by your application as either 'just another memory-mapped peripheral' or even an 'external interrupt'.
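From the software side, that looks something like the sketch below (the base address and register layout are purely illustrative; the real values come from the address assignments in your Vivado design):

    #include <stdint.h>

    /* hypothetical addresses for a custom camera core on the AXI bus */
    #define CAM_BASE    0x43C00000u
    #define CAM_CTRL    (*(volatile uint32_t *)(CAM_BASE + 0x00))
    #define CAM_STATUS  (*(volatile uint32_t *)(CAM_BASE + 0x04))

    void snapshot(void)
    {
        CAM_CTRL = 1;                  /* request a single frame capture */
        while ((CAM_STATUS & 1) == 0)
            ;                           /* poll the frame-done bit */
    }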

The example I used comes directly from my project, where I did use the FPGA to communicate with a faster camera (well, faster than a UART- or SPI-based one) and allow the processor to dictate what kind of operations to perform on the image, or even move the image into the processor's RAM for streaming to a remote PC via UART. This was for an Autonomous Robot project, which I will add posts on in the near future.

I recommend checking this technology out. It is great to see innovations such as this, because they have the power to take industries into new directions. If you are interested in getting yourself one, I suggest either the Zedboard or the Zybo.

Link to Digilent's Zybo