Integrating speech in time depends on temporal expectancies and attention

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Matthias Scharinger
  • Johanna Steinberg
  • Alessandro Tavano

External organisations

  • Philipps-Universität Marburg
  • Universität Leipzig
  • Max-Planck-Institut für empirische Ästhetik (MPIEA)

Details

Original language: English
Pages (from - to): 28-40
Number of pages: 13
Journal: Cortex
Volume: 93
Early online date: 19 May 2017
Publication status: Published - Aug 2017

Abstract

Sensory information that unfolds in time, such as in speech perception, relies on efficient chunking mechanisms to yield optimally sized units for further processing. Whether two successive acoustic events receive a one-unit or a two-unit interpretation seems to depend on the fit between their temporal extent and a stipulated temporal window of integration. However, there is ongoing debate on how flexible this temporal window of integration should be, especially for the processing of speech sounds. Furthermore, there is no direct evidence on whether attention modulates the temporal constraints on the integration window. For this reason, we examined how different word durations, which lead to different temporal separations of sound onsets, interact with attention. In an electroencephalography (EEG) study, participants actively and passively listened to words in which word-final consonants were occasionally omitted. Words either had a natural duration or were artificially prolonged to increase the separation of speech sound onsets. Omission responses to incomplete speech input, originating in left temporal cortex, decreased when the critical speech sound was separated from previous sounds by more than 250 msec, i.e., when the separation was larger than the stipulated temporal window of integration (125–150 msec). Attention, on the other hand, increased omission responses only for stimuli with natural durations. We complemented the event-related potential (ERP) analyses with a frequency-domain analysis at the stimulus presentation rate. Notably, power at the stimulation frequency showed the same duration and attention effects as the omission responses. We interpret these findings against the background of existing research on temporal integration windows and further suggest that they may be accounted for within the framework of predictive coding.

ASJC Scopus subject areas

Cite this

Integrating speech in time depends on temporal expectancies and attention. / Scharinger, Matthias; Steinberg, Johanna; Tavano, Alessandro.
In: Cortex, Vol. 93, 08.2017, p. 28-40.


Scharinger M, Steinberg J, Tavano A. Integrating speech in time depends on temporal expectancies and attention. Cortex. 2017 Aug;93:28-40. Epub 2017 May 19. doi: 10.1016/j.cortex.2017.05.001
Scharinger, Matthias ; Steinberg, Johanna ; Tavano, Alessandro. / Integrating speech in time depends on temporal expectancies and attention. In: Cortex. 2017 ; Vol. 93. pp. 28-40.
@article{ef5e7cf8c1984b82b5250d65dc866ad0,
title = "Integrating speech in time depends on temporal expectancies and attention",
abstract = "Sensory information that unfolds in time, such as in speech perception, relies on efficient chunking mechanisms to yield optimally sized units for further processing. Whether two successive acoustic events receive a one-unit or a two-unit interpretation seems to depend on the fit between their temporal extent and a stipulated temporal window of integration. However, there is ongoing debate on how flexible this temporal window of integration should be, especially for the processing of speech sounds. Furthermore, there is no direct evidence on whether attention modulates the temporal constraints on the integration window. For this reason, we examined how different word durations, which lead to different temporal separations of sound onsets, interact with attention. In an electroencephalography (EEG) study, participants actively and passively listened to words in which word-final consonants were occasionally omitted. Words either had a natural duration or were artificially prolonged to increase the separation of speech sound onsets. Omission responses to incomplete speech input, originating in left temporal cortex, decreased when the critical speech sound was separated from previous sounds by more than 250 msec, i.e., when the separation was larger than the stipulated temporal window of integration (125–150 msec). Attention, on the other hand, increased omission responses only for stimuli with natural durations. We complemented the event-related potential (ERP) analyses with a frequency-domain analysis at the stimulus presentation rate. Notably, power at the stimulation frequency showed the same duration and attention effects as the omission responses. We interpret these findings against the background of existing research on temporal integration windows and further suggest that they may be accounted for within the framework of predictive coding.",
keywords = "Mismatch negativity, Omission, Prediction, Speech, Temporal integration",
author = "Matthias Scharinger and Johanna Steinberg and Alessandro Tavano",
year = "2017",
month = aug,
doi = "10.1016/j.cortex.2017.05.001",
language = "English",
volume = "93",
pages = "28--40",
journal = "Cortex",
issn = "0010-9452",
publisher = "Masson SpA",

}


TY - JOUR

T1 - Integrating speech in time depends on temporal expectancies and attention

AU - Scharinger, Matthias

AU - Steinberg, Johanna

AU - Tavano, Alessandro

PY - 2017/8

Y1 - 2017/8

N2 - Sensory information that unfolds in time, such as in speech perception, relies on efficient chunking mechanisms to yield optimally sized units for further processing. Whether two successive acoustic events receive a one-unit or a two-unit interpretation seems to depend on the fit between their temporal extent and a stipulated temporal window of integration. However, there is ongoing debate on how flexible this temporal window of integration should be, especially for the processing of speech sounds. Furthermore, there is no direct evidence on whether attention modulates the temporal constraints on the integration window. For this reason, we examined how different word durations, which lead to different temporal separations of sound onsets, interact with attention. In an electroencephalography (EEG) study, participants actively and passively listened to words in which word-final consonants were occasionally omitted. Words either had a natural duration or were artificially prolonged to increase the separation of speech sound onsets. Omission responses to incomplete speech input, originating in left temporal cortex, decreased when the critical speech sound was separated from previous sounds by more than 250 msec, i.e., when the separation was larger than the stipulated temporal window of integration (125–150 msec). Attention, on the other hand, increased omission responses only for stimuli with natural durations. We complemented the event-related potential (ERP) analyses with a frequency-domain analysis at the stimulus presentation rate. Notably, power at the stimulation frequency showed the same duration and attention effects as the omission responses. We interpret these findings against the background of existing research on temporal integration windows and further suggest that they may be accounted for within the framework of predictive coding.

AB - Sensory information that unfolds in time, such as in speech perception, relies on efficient chunking mechanisms to yield optimally sized units for further processing. Whether two successive acoustic events receive a one-unit or a two-unit interpretation seems to depend on the fit between their temporal extent and a stipulated temporal window of integration. However, there is ongoing debate on how flexible this temporal window of integration should be, especially for the processing of speech sounds. Furthermore, there is no direct evidence on whether attention modulates the temporal constraints on the integration window. For this reason, we examined how different word durations, which lead to different temporal separations of sound onsets, interact with attention. In an electroencephalography (EEG) study, participants actively and passively listened to words in which word-final consonants were occasionally omitted. Words either had a natural duration or were artificially prolonged to increase the separation of speech sound onsets. Omission responses to incomplete speech input, originating in left temporal cortex, decreased when the critical speech sound was separated from previous sounds by more than 250 msec, i.e., when the separation was larger than the stipulated temporal window of integration (125–150 msec). Attention, on the other hand, increased omission responses only for stimuli with natural durations. We complemented the event-related potential (ERP) analyses with a frequency-domain analysis at the stimulus presentation rate. Notably, power at the stimulation frequency showed the same duration and attention effects as the omission responses. We interpret these findings against the background of existing research on temporal integration windows and further suggest that they may be accounted for within the framework of predictive coding.

KW - Mismatch negativity

KW - Omission

KW - Prediction

KW - Speech

KW - Temporal integration

UR - http://www.scopus.com/inward/record.url?scp=85020445758&partnerID=8YFLogxK

U2 - 10.1016/j.cortex.2017.05.001

DO - 10.1016/j.cortex.2017.05.001

M3 - Article

VL - 93

SP - 28

EP - 40

JO - Cortex

JF - Cortex

SN - 0010-9452

ER -