I've always been intrigued by the way our minds are configured to interpret sound. To those of you with two working ears: ever notice that when you hear a noise, you know which direction it came from? I mean you just 'know'; you don't have to sit down, grab a pencil and notepad, and plot waveforms to triangulate the angle from which the sound likely originated. These calculations are done in the background of our minds. That's right, you and I (and even our pet cats) are pre-programmed to use these functions without having to 'think' about it. This way we can save our main processor-time for more important tasks.
I wanted to experiment with the methods the brain uses to determine the direction of a sound. A little background on the two methods:
- Interaural Level Difference (ILD): the difference in amplitude (loudness) of a sound as measured at two or more sensors
- Interaural Time Difference (ITD): the difference in arrival time of the same signal at two sensors
http://www.ise.ncsu.edu/kay/msf/sound.htm
http://en.wikipedia.org/wiki/Sound_localization
Components used:
- Arduino Uno
- 2 Phidget Sound Sensors
- Continuous Servo Motor
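With those parts, the simplest control rule is the ILD one: compare the two sensor levels and turn the continuous servo toward the louder side. Here is a minimal sketch of that decision logic; the reading scale, the `deadband` threshold, and the function name are all hypothetical, not taken from my actual code:

```cpp
#include <cstdlib>

// Direction command for a continuous-rotation servo.
enum Turn { LEFT = -1, HOLD = 0, RIGHT = 1 };

// ILD control rule: turn toward the louder sensor. The deadband keeps
// the servo from hunting back and forth when the two levels are nearly
// equal (i.e., the platform is roughly facing the source).
Turn ild_turn(int left_level, int right_level, int deadband = 10) {
    int diff = left_level - right_level;
    if (abs(diff) <= deadband) return HOLD;  // close enough: stop turning
    return (diff > 0) ? LEFT : RIGHT;        // otherwise chase the loud side
}
```

On the Arduino side this would run inside `loop()`, reading both sensors each pass and mapping LEFT/HOLD/RIGHT to servo pulse widths.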
Here is a functional diagram of the system I drew up:
And at last, a video! (It may be loud.) We used an "Air-Horn" phone app to generate the test sound.