Research
Auditory Perception of Objects and Scenes
In the past, we have studied a variety of topics related to how listeners segregate sounds in complex situations, sometimes called the Cocktail Party Problem in the case of speech perception in noise (Bronkhorst, 2015). Our work has focused mostly on segregation of simpler tone patterns in auditory stream segregation tasks and of collections of environmental sounds in change deafness tasks (for our reviews, see Snyder et al., 2007; Snyder et al., 2012; Snyder & Elhilali, 2017).
More recently, we have found:
- During change detection, attention to objects and semantic knowledge play important roles in determining which sounds are noticeable in complex auditory scenes (Irsik et al., 2016; Vanden Bosch der Nederlanden et al., 2016).
- During stream segregation, bistable switching of perception can be explained by bottom-up processing in the auditory ventral stream and the interplay between inhibition and adaptation mechanisms (Higgins et al., 2020; Little et al., 2020). We discovered that the auditory sustained negative event-related brain potential (localized to anterior auditory cortex) reflects perceptual state and perceptual switching, and that a model with three stages of auditory processing was sufficient to explain this type of perception, with the Object Analysis stage especially well suited to emulating human perception. A minimal sketch of the inhibition-adaptation idea appears just below this list.
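To make the inhibition-adaptation idea concrete, here is a minimal sketch. This is our generic illustration, not the published Little et al. model: two units standing for the "one stream" and "two streams" percepts compete through mutual inhibition, and slow adaptation of the dominant unit eventually hands dominance to its rival, producing spontaneous switching. All parameter values are arbitrary choices for illustration.

```python
import numpy as np

# Minimal illustration of bistable switching from mutual inhibition plus slow
# adaptation (a generic sketch, not the published model). Unit 0 stands for
# the "one stream" percept, unit 1 for the "two streams" percept.

def simulate(T=60.0, dt=0.001, beta=1.1, g=1.0, tau_u=0.01, tau_a=3.0,
             noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    u = np.zeros((n, 2))                   # fast percept-unit activities
    a = np.zeros(2)                        # slow adaptation variables
    u[0] = [0.6, 0.4]
    f = lambda x: 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))  # gain function
    for i in range(1, n):
        # input drive minus cross-inhibition minus self-adaptation
        drive = 1.0 - beta * u[i - 1, ::-1] - g * a
        u[i] = (u[i - 1] + dt * (-u[i - 1] + f(drive)) / tau_u
                + noise * np.sqrt(dt) * rng.standard_normal(2))
        a += dt * (u[i] - a) / tau_a       # adaptation slowly tracks activity
    return u

u = simulate()
dominant = (u[:, 1] > u[:, 0]).astype(int)  # which percept currently dominates
print("perceptual switches in 60 s:", np.count_nonzero(np.diff(dominant)))
```

Because the dominant unit's adaptation grows while it suppresses its rival, dominance cannot last indefinitely; alternations emerge every few seconds, as in human listeners.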
Now we are focused on the following problems:
- Does auditory scene perception rely more on perception of objects or on the types of global scene properties that have been found to be important in vision (e.g., Greene & Oliva, 2009)? We have recorded and analyzed a large number of natural auditory scenes from various sites in the United States and are now collecting data to understand how they are perceived.
- Why is auditory long-term memory worse than visual long-term memory (Cohen et al., 2009)? We are collaborating with Colleen Parks and Melissa Gregg to understand this using a variety of behavioral and cognitive neuroscience methods.
Music Perception
We focus a lot on perception of musical rhythms, especially how we perceive beat and meter while listening to music. This topic has seen recent progress, especially in uncovering surprising beat perception abilities in non-human animals (Merchant et al., 2018), in using steady-state brain potentials to measure neural processing of musical beat (Nozaradan, 2014), and in using oscillator models to explain beat perception (Large et al., 2015).
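As a concrete, deliberately oversimplified illustration of the entrainment idea behind such models, the sketch below uses linear phase and period correction rather than a full nonlinear oscillator (so it is a simplified relative of the Large et al. models, not their code): an internal beat prediction is nudged toward each sound onset until its period and phase lock to the stimulus. The function name and learning rates are our own illustrative choices.

```python
import numpy as np

# A deliberately simple entrainment sketch (NOT the nonlinear oscillator
# models of Large et al., 2015): an internal beat prediction is nudged
# toward each sound onset by linear phase and period correction.

def entrain(onsets, period0=0.55, eta_phase=0.6, eta_period=0.1):
    """Adapt predicted beat times toward observed onset times (in seconds)."""
    period, next_beat = period0, onsets[0]
    beats = []
    for onset in onsets:
        # emit predicted beats until the prediction brackets this onset
        while next_beat + period / 2 < onset:
            beats.append(next_beat)
            next_beat += period
        err = onset - next_beat          # asynchrony of prediction vs. onset
        next_beat += eta_phase * err     # phase correction
        period += eta_period * err       # period correction
        beats.append(next_beat)
        next_beat += period
    return np.array(beats), period

onsets = np.arange(0.0, 10.0, 0.5)       # isochronous onsets 500 ms apart (120 BPM)
beats, period = entrain(onsets)
print(f"adapted period: {period:.3f} s (stimulus period: 0.500 s)")
```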
We have focused on developing new paradigms to study beat perception in children and adults, using behavioral and event-related brain potential methods:
- Surprisingly, we found that beat perception develops gradually in school-aged children, and meter perception (defined as perceiving two beat levels at the same time) is not present in children at all (Nave-Blodgett et al., 2020, in press).
- We found that steady-state brain potentials reflect perception of the beat, guided by musical context (Nave et al., in press). We are also leading a project in which numerous labs will replicate a prior study showing that steady-state brain potentials reflect perception during a beat imagery task (Nave et al., provisionally accepted). A sketch of the frequency-tagging logic behind these steady-state analyses follows this list.
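For readers unfamiliar with the steady-state approach, the toy sketch below shows the core frequency-tagging logic on synthetic data: the amplitude spectrum of the EEG is read out at the beat frequency and at a meter-related subharmonic (cf. Nozaradan, 2014). Real analyses involve preprocessing and averaging that we omit here, and all frequencies and amplitudes are arbitrary illustrations.

```python
import numpy as np

# Toy frequency-tagging analysis on synthetic data (real steady-state EEG
# analyses involve filtering, artifact rejection, and epoch averaging that
# are omitted here; frequencies and amplitudes are arbitrary illustrations).

fs = 250.0                               # sampling rate (Hz)
t = np.arange(0.0, 60.0, 1.0 / fs)       # one 60 s epoch

beat_hz = 2.4                            # beat frequency tag
meter_hz = beat_hz / 2.0                 # binary-meter subharmonic
rng = np.random.default_rng(1)
eeg = (0.8 * np.sin(2 * np.pi * beat_hz * t)      # "response" at the beat
       + 0.3 * np.sin(2 * np.pi * meter_hz * t)   # weaker response at the meter
       + rng.standard_normal(t.size))             # broadband noise

spectrum = np.abs(np.fft.rfft(eeg)) / t.size      # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

def amp_at(f_target):
    """Amplitude at the FFT bin nearest the target frequency."""
    return spectrum[np.argmin(np.abs(freqs - f_target))]

print(f"amplitude at beat ({beat_hz} Hz):  {amp_at(beat_hz):.3f}")
print(f"amplitude at meter ({meter_hz} Hz): {amp_at(meter_hz):.3f}")
```

A long epoch gives fine frequency resolution, so the tagged frequencies stand out from the noise floor; perception-related effects are then assessed by comparing these amplitudes across conditions.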
We are also studying:
- Perception of musical groove (the urge to move to music) and how it relates to musical aptitude, reward, and the motor system, using a variety of behavioral and cognitive neuroscience methods.
- How the experience of misophonia is reflected in behavioral and physiological responses. Misophonia is a condition in which repetitive sounds produced by other people cause surprisingly strong negative reactions, such as anger, disgust, and annoyance. For this project we are collaborating with the laboratories of Erin Hannon and Stephen Benning here at UNLV.