Tuesday, July 29, 2014

Autonomous Robot with Zynq

In an earlier post, I talked up the Xilinx Zynq (an IC with both an FPGA and a microcontroller). In my Advanced Embedded System Design course, we had to build an autonomous robot that could navigate around with some form of intelligence and seek out certain objects to 'destroy'. Now, the term 'destroy' was really left up to the students to define; for our robot, a laser pointer was used to mark enemies. Identifying 'enemies' was a challenge, so we incorporated a camera so the robot could see and track targets on its own. Meet our robot (below):


Equipped with:

  • Zybo
  • OV7670 Camera
  • Arduino
  • Sabertooth Motor Controller
  • IR Proximity Sensors
  • LiPo Batteries (12V + 7V)
  • Pan/Tilt Servo motors
  • Laser
Custom logic was designed into the FPGA portion of the Zynq to maneuver the robot, control the pan/tilt bracket, capture frame data from the camera, and lastly - 'fire the laser'. The ARM portion of the Zynq was used as the algorithm prototyping environment (in C) to make use of the custom FPGA interfaces. The Arduino was used to configure the OV7670 over its I2C-like (SCCB) interface. The following diagram shows how all the components were interfaced.
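For a sense of what that camera configuration looks like, here is a minimal Arduino-style sketch of SCCB register writes. The device address is the commonly documented one, and the register value shown is just a reset placeholder; the settings we actually used came from the source credited later in this post.

```cpp
// Minimal Arduino sketch for writing OV7670 configuration registers over
// its SCCB (I2C-like) interface using the Wire library.
#include <Wire.h>

const uint8_t OV7670_ADDR = 0x21;   // 7-bit SCCB address (0x42 write / 0x43 read)

// Write a single register; returns true if the transfer was ACKed.
bool ov7670_write(uint8_t reg, uint8_t val) {
  Wire.beginTransmission(OV7670_ADDR);
  Wire.write(reg);
  Wire.write(val);
  return Wire.endTransmission() == 0;
}

void setup() {
  Wire.begin();
  delay(100);
  ov7670_write(0x12, 0x80);   // COM7: reset all registers to their defaults
  delay(100);
  // ...the rest of the format/clock/color registers get written here...
}

void loop() {}
```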


The OV7670 was chosen for its parallel data interface and low cost ($20), which allowed us to obtain an image capture rate of 30fps. However, you get what you pay for in terms of ease of interfacing. I had to design custom logic in VHDL to perform frame captures and store them in block RAM on the FPGA. It's hard to debug what you can't see. Nonetheless, after days of toying around, the Zybo was finally able to see. In order to view what the Zybo saw, I wrote a quick program to transfer images from the Zynq to my laptop over a serial connection (using Processing). Below is the process flow for debugging the images.
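On the Zynq side, the dump amounts to something like the sketch below. The frame-buffer address, image dimensions, and the uart_send_byte() helper are stand-ins for whatever your block design and BSP actually provide, not the project's real values.

```cpp
// Rough bare-metal sketch of streaming a captured frame out over UART.
#include <stdint.h>

static volatile uint16_t *FRAME = (uint16_t *)0x40000000;  // hypothetical BRAM base
static const int WIDTH  = 320;                             // assumed QVGA capture
static const int HEIGHT = 240;

// Stand-in for the BSP's UART transmit routine; replace with the real call.
static void uart_send_byte(uint8_t b) {
  (void)b;
}

// Send every pixel, high byte first, so the PC-side viewer can rebuild the image.
void dump_frame_over_uart() {
  for (int i = 0; i < WIDTH * HEIGHT; i++) {
    uint16_t px = FRAME[i];          // e.g. an RGB565 pixel from block RAM
    uart_send_byte(px >> 8);
    uart_send_byte(px & 0xFF);
  }
}
```

The Processing program on the laptop side then just needs to collect that byte stream and draw it as an image.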


The custom FPGA logic that interfaced to the OV7670 was designed to either stream frames into the block RAM or take a single snapshot and leave it there. The FPGA had to interface/synchronize to the vsync, href, pixel-clock, and 8-bit data signals of the OV7670. It also needed to interface with the ARM processor and with the FPGA block RAM. Below is the block model of the custom camera control as shown in Vivado.
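Conceptually, the ARM side drives that block through a memory-mapped control register. The address and bit assignments below are invented for illustration (the real ones come out of the Vivado address editor), so treat this as a sketch rather than the actual register map.

```cpp
// Sketch of the stream/snapshot control from the ARM side.
#include <stdint.h>

static volatile uint32_t *CAM_CTRL = (uint32_t *)0x43C00000;  // hypothetical AXI base

static const uint32_t CAM_MODE_STREAM   = 0x1;  // continuously write frames to BRAM
static const uint32_t CAM_MODE_SNAPSHOT = 0x2;  // capture one frame, then hold it

void start_streaming() {
  *CAM_CTRL = CAM_MODE_STREAM;
}

void take_snapshot() {
  *CAM_CTRL = CAM_MODE_SNAPSHOT;        // request a single capture
  while (*CAM_CTRL & CAM_MODE_SNAPSHOT) {
    // busy-wait; in this sketch the logic clears the bit once the frame is in BRAM
  }
}
```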

We couldn't have done it without the camera register settings provided by this source: Hamsterworks - OV7670

Otherwise the images didn't turn out so well:

I'll post some videos of the robot in action soon...

Tuesday, April 29, 2014

Xilinx Zynq7000 and the Zybo

If you are into getting your hands on the latest embedded technology, the Zynq7000 is a great platform to get familiar with. Xilinx, a leader in FPGA design (with which I have no affiliation), partnered with ARM to create one of the first devices to pair a hard microprocessor with on-chip FPGA fabric (or is it an FPGA with an on-chip processor?).

Zynq-7000 SoC processing system (block diagram)

This technology combines the power of an FPGA for high-speed parallel tasks with a dual-core ARM processor and its standard microcontroller peripherals (CAN, I2C, SPI, UART, etc.). I was lucky enough to be introduced to this device in my Advanced Embedded System Design course at Oakland University, and will continue working with it as long as I can keep making sense of partitioning my embedded design projects into both 'some hardware' and 'some software'. I purchased the Zybo development board made by Digilent, however there are others out there (ZedBoard, MicroZed by Avnet).

To paint a better picture of why this is so useful, imagine using a microcontroller to both interface to a camera and perform image processing in order to make a decision (maybe you are trying to track an object). In terms of cameras you can interface with, you are left with those that have slower serial interfaces (SPI, UART). If you really wanted to connect a microcontroller to a camera with a faster parallel interface, you would have to spend a little more money on a fast enough controller. Even then, that doesn't leave your controller much time to do anything other than frame-grabbing and image processing! On the flip side, if you were to use an FPGA alone, the decision-making becomes harder to implement, development takes longer, and debugging is more difficult.

Having an FPGA and microcontroller on the same IC solves this exact problem. Now you can develop your frame-grabbing for your higher-speed camera, and even do some image processing, on the FPGA. In this way, the FPGA can be treated as a custom co-processor for the application running on the processor. And the FPGA-to-processor interface is seen by your application as either 'just another memory-mapped peripheral' or even an 'external interrupt'.
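To make that concrete, here is a rough sketch of the 'memory-mapped peripheral' view from the software side. The base address and register layout are invented for illustration; a real design takes them from the Vivado address editor.

```cpp
// The FPGA co-processor as the application sees it: a few registers in memory.
#include <stdint.h>

struct FpgaCoproc {
  volatile uint32_t control;   // write 1 to start the operation in the fabric
  volatile uint32_t status;    // bit 0 is set by the FPGA when the result is ready
  volatile uint32_t result;    // value computed in programmable logic
};

static FpgaCoproc *coproc = (FpgaCoproc *)0x43C10000;   // hypothetical AXI address

uint32_t run_fpga_operation() {
  coproc->control = 1;                  // kick off the hardware
  while ((coproc->status & 0x1) == 0) {
    // poll for completion; wiring the status up as an interrupt works too
  }
  return coproc->result;
}
```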

The example I used comes directly from my project, where I used the FPGA to communicate with a faster camera (well, faster than the UART- or SPI-based ones) and let the processor dictate what kind of operations to perform on the image, or even copy the image into the processor's RAM to be streamed to a remote PC via UART. This was for an Autonomous Robot project, which I will add posts about in the near future.

I recommend checking this technology out. It is great to see innovations such as this, because they have the power to take industries in new directions. If you are interested in getting one yourself, I suggest either the ZedBoard or the Zybo.

Link to Digilent's Zybo

Friday, April 20, 2012

Audio Localization

Just a little introduction on my interest in this topic:
I've always been intrigued by the way our mind is configured to interpret sound signals. To those of you with two working ears: Ever notice that when you hear a noise, you know which direction it came from?... I mean you just 'know'; you don't have to sit down, grab a pencil and notepad, and plot waveforms to triangulate the angle from which the sound likely originated. These calculations are done in the background of our minds. That's right, you and I (and even our pet cats) are pre-programmed to use these functions without having to 'think' about them. This way we can spend our main processor-time on more important tasks.

I wanted to experiment with methods that the brain uses for indicating the direction of sound. A little background on the two methods:
  • Interaural Level Difference: the difference in amplitude (loudness) of a sound as measured between two or more sensors
  • Interaural Time Difference: the difference in arrival time of a sound between two sensors
A few links on the topic:
http://www.ise.ncsu.edu/kay/msf/sound.htm
http://en.wikipedia.org/wiki/Sound_localization
For simplicity, I focused on the 1st method and implemented an 'object tracking' approach.

Components used:
  • Arduino Uno
  • 2 Phidget Sound Sensors
  • Continuous Servo Motor
To keep things simple, I chose the Phidget Sound Sensors because they output a 0-5 volt signal representing measured volume (as opposed to a raw signal from a microphone). This also allows a slower processor (such as the ATmega328) to be quick enough for the task. Below is a pic of the system (made for a class project).
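The control idea is simple: read both volume levels and rotate toward the louder side. Here is a bare-bones Arduino sketch of it; the pin numbers, deadband, and servo pulse widths are assumptions, not the exact values from the class project.

```cpp
// Interaural-level-difference tracking: turn the continuous servo toward
// whichever sound sensor reports a louder level.
#include <Servo.h>

Servo panServo;
const int LEFT_SENSOR  = A0;   // Phidget sound sensors output 0-5 V volume levels
const int RIGHT_SENSOR = A1;
const int DEADBAND     = 20;   // ADC counts of difference treated as "equal"

void setup() {
  panServo.attach(9);
  panServo.writeMicroseconds(1500);   // 1500 us = stop for a continuous servo
}

void loop() {
  int left  = analogRead(LEFT_SENSOR);
  int right = analogRead(RIGHT_SENSOR);
  int diff  = left - right;

  if (diff > DEADBAND) {
    panServo.writeMicroseconds(1550);   // louder on the left: turn that way
  } else if (diff < -DEADBAND) {
    panServo.writeMicroseconds(1450);   // louder on the right: turn the other way
  } else {
    panServo.writeMicroseconds(1500);   // roughly equal: hold position
  }
  delay(20);
}
```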

Here is a functional diagram of the system I drew up:

A high-level schematic

A picture of the project

And at last, a video! (May be loud.) We used the "Air Horn" phone app.

Video with improved code


Tuesday, May 3, 2011

Return of the RC car!!!

I've been very busy since the last post, but I finally got back into the groove with the RC car. I wanted this car to be completely modifiable, so the inside of the electronics control box = breadboards + velcro (which works pretty darn well). Here is a photo of the wiring.

Included in this box:
1. Boarduino (ATmega328) controller
2. 18v15 Pololu Motor Driver (rear motor drive)
3. Dual-Axis Compass Module - HMC6352
4. XBee Pro 60mW U.FL module: for wireless communication



I had to replace the stock steering motor/potentiometer setup with a servo motor to simplify the programming (I'm now able to eliminate the steering control-loop code). I also brought the ICSP header outside the box for programming (I had to wire up a switch to disconnect both the external power and the XBee's transmit line from the microcontroller in order to program it externally).

I'm working on the code for wireless control/feedback. Eventually I intend on adding a camera, ultrasonic rangefinder and GPS module.

Anyway, here is a short video of my car's first run with both steering and driving functional (given the small area, I just had it do circles in my kitchen).
video

Wednesday, August 4, 2010

I'm currently putting together a new project where I'm attempting to automate the control of an old RC car. I've ripped out the original guts except for the drive motor and steering motor. After numerous attempts at a DIY motor controller for the drive motor (and popping through transistors like popcorn), I've come to the conclusion that this motor needs a driver that can handle the spike currents without melting under pressure. Here is a list of all the parts that will be going into this project; later I will provide updates on my progress.

Parts:

1. Original RC chassis (including suspension and wheels/tires)
2. Original RC 7.2V battery packs (2)
3. Original rear-wheel drive motor (specs unknown, except that it draws 3.5A running and 15A at stall)
4. Original steering motor + original feedback potentiometer (specs also unknown)
5. Boarduino (ATmega328) controller
6. 18v15 Pololu Motor Driver (to properly drive the rear motor)
7. Dual-Axis Compass Module - HMC6352
8. XBee Pro 60mW U.FL modules + antenna (2): for wireless communication
9. A few 2N2907/2N2222 BJTs for driving the steering motor



Sunday, April 25, 2010

Light Follower - 2 Axis


Here it is - my rendition of a 2-axis light follower. I went ahead and clipped the IR cam from the Wiimote to cut down on weight. I put the camera, along with the required oscillator, in a project box (from RadioShack). I also included an LED on the box; it lights up when the camera sees light. A serial port is used to connect the box to the microcontroller.
Here are some Pictures:


Here is a video:

video

http://www.youtube.com/watch?v=y41o7KieRgw

Here is a link to the code:

Would you like me to build you a project box with a lovely Wiimote Camera? Let's talk!

Wednesday, March 17, 2010

Nintendo nunchucks as Orientation Sensors

My senior design project was to make a wirelessly controlled robotic arm that mimics human arm movements. The closest we got was shoulder-joint and elbow-joint movement with very high accuracy and low time delay.

I made a sensor system out of 2 Wii Nunchucks, an Arduino, and some external circuitry to switch between nunchuck sensor readings. Right before one of our presentation deadlines, a plastic gear from our arm chipped some of its teeth, so we weren't able to give motion demos. I whipped up a program using Processing (processing.org) to communicate with the Arduino and move a 3D simulation of an arm based on the sensor outputs.

I thought I'd share a few screenshots of the program. I'll have both the Arduino code, and processing code up soon.





Processing is a great environment for doing graphical manipulations. It can also compile code to executable files.

Wiimote light follower with servo

Everybody is familiar with the infamous Wiimote. When I look at it, I think about all the useful sensors/gadgets that this little $40 package (new) comes with. Recently I've been playing with the IR camera (it's really just a light-sensing camera with an IR filter). This particular camera is a standalone module that outputs coordinates of the 4 brightest "images", all via I2C communication.

I've only seen hacks with the Wiimote cam where the camera is desoldered/removed from the Wiimote. However, at $40 a pop, that seemed like a waste of a perfectly good Wiimote. Instead of removing the cam, I made only one small modification: drilling a very tiny hole near the camera and soldering a connection to its "Clock" pin (which needs a 24MHz sine wave to replace the internal oscillator). Once you have this done, all you need to do is plug a cord into the Wiimote peripheral port to use anything on its I2C bus.
Moving on, I attached the Wiimote to a homemade stand fixed onto a continuous-rotation servo motor (a servo without feedback). Add a little duct tape, and that servo isn't moving for at least an hour.


Using an IR camera library that had already been created (thanks to Hobley – http://www.stephenhobley.com/), I used an Arduino to receive points from the camera and follow the first object (light source) it notices.
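The follow logic boils down to keeping the brightest point centered. Here is a rough sketch of that loop; readBrightestBlobX() is a hypothetical stand-in for the library call (I'm not reproducing Hobley's API here), and the pin number, deadband, and servo pulse widths are guesses rather than the values from my code.

```cpp
// Rough sketch of the light-follow loop on a continuous-rotation servo.
#include <Servo.h>

Servo panServo;
const int CENTER_X = 512;      // middle of the camera's ~0-1023 horizontal range
const int DEADBAND = 40;       // how far off-center we tolerate before moving

// Hypothetical stand-in for the IR camera library: returns the first tracked
// point's X coordinate, or -1 when no light source is seen.
int readBrightestBlobX() {
  return -1;                   // replace with the real library call
}

void setup() {
  panServo.attach(9);
  panServo.writeMicroseconds(1500);   // continuous servo: 1500 us = stopped
}

void loop() {
  int x = readBrightestBlobX();
  if (x < 0) {
    panServo.writeMicroseconds(1500);   // nothing seen: stop
  } else if (x > CENTER_X + DEADBAND) {
    panServo.writeMicroseconds(1540);   // light is off to one side: rotate toward it
  } else if (x < CENTER_X - DEADBAND) {
    panServo.writeMicroseconds(1460);   // off to the other side: rotate back
  } else {
    panServo.writeMicroseconds(1500);   // roughly centered: hold still
  }
  delay(20);
}
```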

The code can be found here:

video
http://www.youtube.com/watch?v=Nj7UqjP-z6U

Monday, March 15, 2010

Two Wii Nunchucks with one Arduino

In the midst of a senior design project, we decided we wanted to use 2 Wii Nunchucks as accelerometers to measure the orientation of a human arm (1 for the upper arm and 1 for the forearm). As far as I2C communication goes, there is no way to use 2 nunchucks on the same bus without some sort of external circuitry (all nunchucks have the same slave address, leaving nothing to distinguish between the two when attempting to receive data).

I drew up a simple and cheap solution to interface two (or more) Wii Nunchucks on the same I2C bus. This is useful for projects that require multiple accelerometers at a cheap price.

Here is all you will need:

2 npn switching transistors (I used 2N3904)
2 current limiting resistors (I used 200 Ohm)



Just connect all nunchuck power (PWR), clock (SCL), and ground (GND) wires to the same corresponding spots on your microcontroller. Each nunchuck's data line then runs through its own transistor, and the microcontroller's SDA pin connects to the outputs of both transistors.


Programming notes:


In order to perform a read, all you have to do is set the pin of the corresponding transistor HIGH (5V in our case), write/read on the I2C bus, then set that pin LOW (0V) to disconnect that nunchuck from the bus. Also, during start-up you must initialize each nunchuck individually in order for both nunchucks to operate correctly.
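Here is a minimal Arduino sketch of that idea. The init/read sequence shown is the classic 0x52-address handshake commonly used with nunchucks at the time, and the select pins are arbitrary; treat it as a sketch to adapt to your wiring rather than the project's exact code.

```cpp
// Two nunchucks sharing one I2C bus, selected by switching their SDA lines
// through transistors.
#include <Wire.h>

const uint8_t NUNCHUCK_ADDR = 0x52;   // slave address shared by all nunchucks
const int SELECT_PIN[2] = {7, 8};     // base of each switching transistor

// Drive one transistor HIGH to connect that nunchuck's SDA; pass -1 to release both.
void selectNunchuck(int which) {
  digitalWrite(SELECT_PIN[0], which == 0 ? HIGH : LOW);
  digitalWrite(SELECT_PIN[1], which == 1 ? HIGH : LOW);
}

// Classic (encrypted-mode) initialization handshake.
void initNunchuck() {
  Wire.beginTransmission(NUNCHUCK_ADDR);
  Wire.write(0x40);
  Wire.write(0x00);
  Wire.endTransmission();
}

// Read the 6-byte report from whichever nunchuck is currently selected.
bool readNunchuck(uint8_t data[6]) {
  Wire.beginTransmission(NUNCHUCK_ADDR);
  Wire.write(0x00);                         // ask for the next report
  Wire.endTransmission();
  delay(1);
  Wire.requestFrom((int)NUNCHUCK_ADDR, 6);
  for (int i = 0; i < 6; i++) {
    if (!Wire.available()) return false;
    data[i] = (Wire.read() ^ 0x17) + 0x17;  // undo the stock "encryption"
  }
  return true;
}

void setup() {
  Serial.begin(9600);
  Wire.begin();
  pinMode(SELECT_PIN[0], OUTPUT);
  pinMode(SELECT_PIN[1], OUTPUT);
  for (int i = 0; i < 2; i++) {             // initialize each nunchuck individually
    selectNunchuck(i);
    initNunchuck();
  }
  selectNunchuck(-1);
}

void loop() {
  uint8_t report[6];
  for (int i = 0; i < 2; i++) {
    selectNunchuck(i);                      // connect this nunchuck to the bus
    if (readNunchuck(report)) {
      Serial.print("Nunchuck ");
      Serial.print(i);
      Serial.print(" accel X byte: ");
      Serial.println(report[2]);
    }
    selectNunchuck(-1);                     // both transistors LOW: release the bus
  }
  delay(50);
}
```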

Good luck,

B Dwyer (aka: johnnyonthespot)