Real Time Auralization Module for Electronic Travel Aid Devices for People with Visual Disability
Abstract
This paper presents a software module for real-time auralization that recreates the acoustic perception produced by a sound obstacle in virtual and real environments. The module inserts into any audio signal a three-dimensional positioning effect that allows the listener to determine the location of a sound source within the chosen test environment. This effect was achieved with a processing technique called segmented convolution and several functions contained in a database of head-related impulse responses (HRIRs). The module was tested in both a real environment and a virtual one. In the real test environment, the user carried a stereoscopic camera that acted as an obstacle detector, as well as a computer and headphones on which the module was installed and through which three-dimensional sound alerts were generated. In this way, the effects could be recorded, analyzed, discussed, and finally validated.
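Although the paper's own implementation is not reproduced here, the segmented-convolution technique the abstract names can be sketched as overlap-add FFT convolution of a mono signal with a left/right HRIR pair. The function name, parameters, and block size below are illustrative assumptions, not the module's actual API:

```python
import numpy as np

def auralize(mono, hrir_l, hrir_r, block=1024):
    """Apply a left/right HRIR pair to a mono signal via segmented
    (overlap-add) convolution, returning a stereo (N, 2) array."""
    m = len(hrir_l)
    # FFT size: next power of two >= block + m - 1 (linear convolution length)
    nfft = 1 << (block + m - 1).bit_length()
    HL = np.fft.rfft(hrir_l, nfft)          # filters are transformed once
    HR = np.fft.rfft(hrir_r, nfft)
    out = np.zeros((len(mono) + m - 1, 2))
    for start in range(0, len(mono), block):
        seg = mono[start:start + block]
        SEG = np.fft.rfft(seg, nfft)
        n = len(seg) + m - 1                 # valid length of this segment's result
        yl = np.fft.irfft(SEG * HL, nfft)[:n]
        yr = np.fft.irfft(SEG * HR, nfft)[:n]
        out[start:start + n, 0] += yl        # overlap-add the filter tails
        out[start:start + n, 1] += yr
    return out
```

Because each block is filtered independently, the HRIR pair can be swapped between blocks as the obstacle's estimated position changes, which is what makes the segmented form suitable for real-time use.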
References
P. Lee and D. Stewart, "Virtual reality (VR): A billion dollar niche, TMT Predictions 2016." [Online]. Available: https://www2.deloitte.com/global/en/pages/technology-media-and-telecommunications/articles/tmt-pred16-media-virtual-reality-billion-dollar-niche.html
M. Kleiner, B.-I. Dalenbäck, and P. Svensson, "Auralization – an overview," J. Audio Eng. Soc., vol. 41, p. 861, 1993.
M. Vorländer, Auralization: Fundamentals of Acoustics, Modelling, Simulation, Algorithms and Acoustic Virtual Reality. Springer, 2008.
J. F. Lucio Naranjo, R. Tenenbaum, L. A. Paz Arias, H. P. Morales Escobar, and I. J. Iniguez Jarrín, "3D sound applied to the design of assisted navigation devices for the visually impaired," Latin-American Journal of Computing, 2015.
I. Lengua, D. Larisa, G. Peris, and B. Defez, "Dispositivo de navegación para personas invidentes basado en la tecnología time of flight" [Navigation device for blind people based on time-of-flight technology]. [Online]. Available: http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=S0012-73532013000300004
R. Bencina, "Real-time audio programming 101: time waits for nothing." [Online]. Available: http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing
G. Alonso, L. Budde, and M. Zannier, "Síntesis de respuesta impulsiva de recintos a través del método de trazado de rayos" [Synthesis of room impulse responses via the ray-tracing method], UTN FRC - Depto. Ing. Electrónica, 2012.
F. Pishdadian, "Filters, Reverberation & Convolution," 2017. [Online]. Available: http://www.cs.northwestern.edu/~pardo/courses/eecs352/lectures/MPM16-topic9-Filtering.pdf
B. Gardner and K. Martin, "HRTF Measurements of a KEMAR Dummy-Head Microphone." [Online]. Available: http://sound.media.mit.edu/resources/KEMAR.html
Ref.?, p. 28.
T. Woźniak, "Implementing Binaural (HRTF) Panner Node with Web Audio API," April 2015. [Online]. Available: https://codeandsound.wordpress.com/2015/04/08/implementing-binaural-hrtf-panner-node-with-web-audio-api/
S. Spors, "Segmented Convolution — Digital Signal Processing 0.0 documentation." [Online]. Available: http://dsp-nbsphinx.readthedocs.io/en/nbsphinx-experiment/nonrecursivefilters/segmentedconvolution.html
S. Smith, “FFT Convolution.” [Online]. Available: http://www.dspguide.com/ch18/2.htm
G. Wersényi, "Effect of emulated head-tracking for reducing localization errors in virtual audio simulation," IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no. 2, pp. 247–252, 2009.
T. Möller and B. Trumbore, "Fast, Minimum Storage Ray/Triangle Intersection," in ACM SIGGRAPH 2005 Courses, ser. SIGGRAPH '05. New York, NY, USA: ACM, 2005. [Online]. Available: http://doi.acm.org/10.1145/1198555.1198746
Ref.?, p. 98.
The C++ Resources Network, "C++ reference: <atomic>." [Online]. Available: http://www.cplusplus.com/reference/atomic/
G. Held,Server Management. CRC Press, 2000.
F. Torres, J. Pomares, P. Gil, and S. Puente, Robots y Sistemas Sensoriales [Robots and Sensor Systems]. Prentice Hall, 2002.
P. Corke, Robotics, Vision and Control. Springer, 2013.
Point Cloud Library, "About - Point Cloud Library (PCL)." [Online]. Available: http://pointclouds.org/about/
B. Li, X. Zhang, J. P. Muñoz, J. Xiao, X. Rong, and Y. Tian, "Assisting blind people to avoid obstacles: A wearable obstacle stereo feedback system based on 3D detection," IEEE International Conference on Robotics and Biomimetics, 2016.
A. Garcia, "Towards a real-time 3D object recognition pipeline on mobile GPGPU computing platforms using low-cost RGB-D sensors," ser. CEUR Workshop Proceedings, 2015. [Online]. Available: https://doi.org/10.1017/CBO9781107415324.004
A. Nguyen and B. Le, "3D point cloud segmentation: A survey," ser. IEEE Conference on Robotics, Automation and Mechatronics, 2013. [Online]. Available: https://doi.org/10.1109/RAM.2013.6758588
V. R. Algazi, R. O. Duda, D. M. Thompson, and C. Avendano, "The CIPIC HRTF database." [Online]. Available: https://bit.ly/3JtsBrI
This article is published by LAJC under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This means that non-exclusive copyright is transferred to the National Polytechnic School. The author(s) consent to the Editorial Committee publishing the article in the issue that best suits the interests of this journal. Find out more in our Copyright Notice.
Disclaimer
In no event shall LAJC be liable for any direct, indirect, incidental, punitive, or consequential claims of copyright infringement related to articles that have been submitted for evaluation or published in any issue of this journal. Find out more in our Disclaimer Notice.