I've always been intrigued by the way our mind is configured to interpret sound signals. To those of you with two working ears: Ever notice that when you hear a noise, you know which direction it came from?... I mean you just 'know'; you don't have to sit down, grab a pencil and notepad, and plot waveforms to triangulate the angle from which the sound likely originated. These calculations are done in the background of our mind. That's right, you and I (and even our pet cats) are pre-programmed to use these functions without having to 'think' about them. This way we can use our main processor time for more important tasks.
I wanted to experiment with the methods the brain uses to determine the direction of a sound. A little background on the two methods:
- Interaural Level Difference (ILD): the difference in amplitude between two or more sensors
- Interaural Time Difference (ITD): the difference in arrival time of the same signal at two sensors
http://www.ise.ncsu.edu/kay/msf/sound.htm
http://en.wikipedia.org/wiki/Sound_localization
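The ITD cue can be sketched numerically: cross-correlating the two sensor signals gives the lag (in samples) between arrivals, and with a known microphone spacing d and speed of sound c, the bearing follows from θ = arcsin(c·Δt / d). This is a minimal illustration, not the code from my build; the spacing, sample rate, and sign convention here are all assumptions.

```python
import numpy as np

def estimate_itd_angle(left, right, fs, mic_spacing=0.15, c=343.0):
    """Estimate direction of arrival from two microphone signals.

    left, right : 1-D sample arrays from the two sensors
    fs          : sample rate in Hz
    mic_spacing : distance between sensors in metres (assumed value)
    c           : speed of sound in m/s
    """
    # Cross-correlate to find the lag (in samples) that best aligns the signals
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    delay = lag / fs  # interaural time difference, in seconds
    # Clamp to the physically possible range before taking arcsin
    x = np.clip(c * delay / mic_spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(x))

# Synthetic check: the same click arrives 5 samples later at the right sensor,
# so the source sits toward the left (negative angle under this convention)
fs = 44100
click = np.zeros(256)
click[50] = 1.0
left = click
right = np.roll(click, 5)
print(round(estimate_itd_angle(left, right, fs), 1))  # → -15.0
```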
Components used:
- Arduino Uno
- 2 Phidget Sound Sensors
- Continuous Servo Motor
Here is a functional diagram of the system I drew up:
And at last, a video! (May be loud.) We used an "Air-Horn" phone app as the sound source.
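The steering behaviour in the video can be sketched as a simple ILD comparator: whichever sensor reports the higher level, rotate the servo toward it until the levels balance. This is a hypothetical reconstruction of the control logic, not my actual firmware; the deadband value and return convention are assumptions.

```python
def steer_step(level_left, level_right, deadband=5):
    """One iteration of an ILD-based tracking loop.

    Returns -1 to rotate toward the left sensor, +1 toward the right,
    or 0 to hold when the levels are balanced (within a deadband, so
    the servo does not hunt back and forth on noise).
    """
    diff = level_right - level_left
    if diff > deadband:
        return 1    # right sensor louder: turn right
    if diff < -deadband:
        return -1   # left sensor louder: turn left
    return 0        # balanced: already facing the source

print(steer_step(80, 60))  # → -1 (left sensor louder, turn left)
print(steer_step(70, 72))  # → 0 (difference within deadband, hold)
```

In the real system this return value would drive the continuous servo each loop iteration until the two sensor readings converge.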
Hi Brian,
Nice work! I have been looking into this for a while, although I would like to find some code here :)) I have some questions. I'm not sure this will work for my application, in which I'm looking for a specific acoustic signature.
Did you try your setup at long ranges? How far away could it recognize the source, and how accurately? Did you make any improvements to it afterwards?
Thanks, and I greatly appreciate your feedback.
Wow... so good! You seem so smart and competent!
Please, could I get the source code? I'm really curious how you produced it.
For over 3 months I have tried to make it, but I always fail. T^T
Please help me!
If you'll allow me the source, please send it by e-mail. ^*^
My e-mail: ljf9307@naver.com
Thank you for your help.