Friday, April 20, 2012

Audio Localization

Just a little introduction on my interest in this topic:
I've always been intrigued by the way our minds are configured to interpret sound signals. To those of you with two working ears: ever notice that when you hear a noise, you know which direction it came from? I mean you just 'know'; you don't have to sit down, grab a pencil and notepad, and plot waveforms to triangulate the direction the sound came from. These calculations are done in the background of the mind. That's right, you and I (and even our pet cats) are pre-programmed to use these functions without having to 'think' about them. This way we can spend our main processor time on more important tasks.

I wanted to experiment with the methods the brain uses to determine the direction of a sound. A little background on the two methods:
  • Interaural Level Difference (ILD): the difference in amplitude (loudness) of a sound between two or more sensors
  • Interaural Time Difference (ITD): the difference in arrival time of a sound between two sensors (for example, with sensors about 0.2 m apart, a sound from directly off to one side arrives roughly 0.6 ms earlier at the nearer one)
A few links on the topic:
http://www.ise.ncsu.edu/kay/msf/sound.htm
http://en.wikipedia.org/wiki/Sound_localization
For simplicity, I focused on the first method (level difference) and implemented an 'object tracking' approach.
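
To make the level-difference idea concrete, here's a tiny C++ sketch (illustrative only; the function name and example voltages are made up, not from the project) that turns two sensor amplitudes into a direction:

    // A minimal ILD calculation in plain C++ (illustrative only).
    #include <cmath>
    #include <cstdio>

    // ILD in decibels: positive means the sound is louder on the left.
    double ildDb(double leftAmp, double rightAmp) {
        return 20.0 * std::log10(leftAmp / rightAmp);
    }

    int main() {
        double left = 2.4, right = 1.1;  // example sensor voltages
        double ild = ildDb(left, right);
        std::printf("ILD = %+.1f dB -> sound is %s\n",
                    ild, ild > 0 ? "to the left" : "to the right");
        return 0;
    }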

Components used:
  • Arduino Uno
  • 2 Phidget Sound Sensors
  • Continuous Servo Motor
To keep things simple, I chose the Phidget Sound Sensors because they output a 0-5 volt signal representing the measured volume (as opposed to a raw signal from a microphone). This also allows a slower processor (such as the ATmega328) to be quick enough for the task. Below is a pic of the system (made for a class project).
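
Before the diagrams, here's a rough sketch of the control loop in Arduino C++. This is a hypothetical version, not the actual project code; the pin assignments, pulse widths, and dead-band value are my own guesses. The idea: rotate toward whichever sensor reads louder, and stop when they roughly agree.

    // Hypothetical tracking loop: rotate toward the louder sensor (ILD).
    #include <Servo.h>

    const int LEFT_PIN  = A0;  // Phidget sound sensor, left
    const int RIGHT_PIN = A1;  // Phidget sound sensor, right
    const int DEADBAND  = 15;  // ignore small differences (ADC counts)

    Servo turret;              // continuous-rotation servo

    void setup() {
        turret.attach(9);
        turret.writeMicroseconds(1500);  // ~1500 us = stopped
    }

    void loop() {
        int left  = analogRead(LEFT_PIN);   // 0-1023, tracks volume
        int right = analogRead(RIGHT_PIN);
        int diff  = left - right;

        if (diff > DEADBAND) {
            turret.writeMicroseconds(1400); // louder on left: turn left
        } else if (diff < -DEADBAND) {
            turret.writeMicroseconds(1600); // louder on right: turn right
        } else {
            turret.writeMicroseconds(1500); // balanced: hold still
        }
        delay(20);
    }

The dead-band keeps the servo from twitching when both sensors read nearly the same level; once the sensors face the source, the readings balance out and the servo stops, which gives the 'object tracking' behavior.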

Here is a functional diagram of the system I drew up:

A high-level schematic

A picture of the project

And at last, a video! (May be loud.) We used an "Air-Horn" phone app.

Video with improved code