Basic Information
Biography
Mounya Elhilali, the Charles Renn Faculty Scholar and founder of the Department of Electrical and Computer Engineering’s Laboratory for Computational Audio Perception (LCAP), is recognized for advancing the understanding of how the human brain and machines process the complexities of sound.
Elhilali’s research bridges the gap between neuroscience and audio technologies by examining the computational and neural bases of sound and speech perception and of behavior in complex acoustic environments. Using mathematical signal processing models, behavioral testing (psychoacoustics), and neural recordings, she focuses on decoding how these processes guide human behavior, and on engineering more efficient machine parsing of complex soundscapes. Her work has applications spanning the medical, commercial, military, and robotics domains.
Recently, Elhilali has explored how attention to sound provides feedback to brain networks and changes how humans analyze and understand their acoustic surroundings. The brain operates as an adaptive system, constantly adjusting its processing to sift through the cacophony of sounds in our environment. By studying and modeling this adaptive behavior, Elhilali’s work offers novel theories for advancing intelligent audio technologies. Her multidisciplinary research is yielding insights into brain sciences, adaptive signal processing, audio technologies, and medical systems, including new diagnostic technologies that leverage body sounds to tackle public health problems, such as pneumonia, that affect millions worldwide.
Affiliated with Johns Hopkins’ Center for Language and Speech Processing, Elhilali is a senior member of the Institute of Electrical and Electronics Engineers (IEEE) and a member of the IEEE Signal Processing Society, the Acoustical Society of America (ASA), the Society for Neuroscience (SfN), the Association for Research in Otolaryngology (ARO), the International Speech Communication Association (ISCA), the Association for Women in Science (AWIS), and the American Society for Engineering Education (ASEE). The recipient of the Johns Hopkins University Catalyst Award (2017) and the Kenan Award for Innovative Projects in Undergraduate Education (2015), she has also won the prestigious Office of Naval Research Young Investigator Award and a National Science Foundation CAREER Award. She is a member of the IEEE Speech and Language Processing Technical Committee and the IEEE Research and Development Policy Committee, and serves on the technical committees of many conferences and on the editorial boards of publications including Neural Networks, PLOS Computational Biology, and Frontiers in Neuroscience.
Research Interests
Publications (166 total)
Open Mind (2024): 333-365
EURASIP Journal on Audio, Speech, and Music Processing, no. 1 (2024): 1-13
ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6265-6269 (2024)
2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) (2023): 1-5
Journal of the Acoustical Society of America, no. 3 Supplement (2023): A329-A329
Biomedical Signal Processing and Control (2023): 104852
CoRR (2023): 4573-4585
ACS Applied Bio Materials, no. 8 (2023): 3241-3256
ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2023): 1241-1245
ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2023): 1196-1200
Data Disclaimer
All data on this page is drawn from publicly available internet sources, partner publishers, and automated AI-based analysis. We make no promises or guarantees as to the validity, accuracy, correctness, reliability, completeness, or timeliness of the page data. For questions, contact us by email: report@aminer.cn