Temporal predictability modulates putative midbrain activity: evidence from human EEG

Predictable timing has been shown to modulate the neural processing of auditory stimuli at multiple stages and time scales, e.g. reducing the amplitude of the P50 and N1 potentials in the EEG. The modulatory effects of predictable timing include an enhancement of repetition suppression and of omission responses to tones whose identity can also be predicted. However, most of the previously reported modulations of evoked responses occur relatively late and have primarily been attributed to cortical processing. Can similar modulatory effects of predictable timing be observed at earlier, putatively subcortical stages?

A recent paper by Gorina-Careta et al. addresses this question by focusing on the human auditory frequency-following response (FFR) – a sustained EEG component serving as a proxy for the auditory brainstem response. The FFR signal is phase-locked to the periodic characteristics of the eliciting stimulus with a short delay (~15 ms), and has previously been shown to be sensitive to contextual factors. The authors recorded the FFR at the central electrode (Cz) in response to an auditory sequence consisting of a rapidly repeated syllable /wa/. The fundamental frequency (F0) of the /wa/ syllable – the frequency component determining the perceived pitch of the stimulus – was set to 100 Hz.
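As a rough illustration of how a steady-state FFR component can be quantified, the sketch below (synthetic data and assumed parameters throughout – this is not the authors' pipeline) averages epochs and reads out the spectral amplitude at F0 = 100 Hz over a steady-state window similar to the one analysed in the paper:

```python
import numpy as np

# Sketch (synthetic data, assumed parameters - not the authors' pipeline):
# estimating the FFR amplitude at F0 = 100 Hz from the trial-averaged
# response, using a 120 ms steady-state window so that 100 Hz falls
# exactly on an FFT bin.
fs = 1000                          # sampling rate (Hz), assumed
t = np.arange(300) / fs            # 300 ms epochs
rng = np.random.default_rng(0)

n_trials = 500
signal = 0.5 * np.sin(2 * np.pi * 100 * t)                # phase-locked 100 Hz component
epochs = signal + rng.normal(0, 5.0, (n_trials, t.size))  # buried in noise

evoked = epochs.mean(axis=0)       # averaging reveals the phase-locked part
window = evoked[65:185]            # ~65-185 ms steady-state window
spec = np.abs(np.fft.rfft(window)) / window.size
freqs = np.fft.rfftfreq(window.size, 1 / fs)

f0_amp = spec[np.argmin(np.abs(freqs - 100))]
noise_floor = np.median(spec[(freqs > 150) & (freqs < 400)])
print(f0_amp, noise_floor)         # the 100 Hz component stands well above the floor
```

With single-trial noise this large, only the trial average shows a clear 100 Hz peak – which is also why effects that emerge gradually over hundreds of repetitions require averaging within running windows of trials.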

The authors found that the 100 Hz component of the FFR signal was significantly reduced under temporally predictable stimulation (although the reported effect sizes were rather modest), suggesting that even very early auditory processing stages can be modulated by predictable timing. However, the modulatory effect of predictability appeared only after several hundred repetitions, indicating that the putative subcortical responses are shaped over the course of learning. A closer inspection of Figure 1B might suggest that the most prominent modulation of the 100 Hz FFR component occurs not in the analysed time window (65-180 ms, corresponding to the steady-state part of the FFR) but in a pre-stimulus time window (shown from -40 ms); however, this is likely due to a contamination of the baseline in the unpredictable condition by stimulus presentation.

The authors also analysed the extent to which timing predictability influences neural pitch strength, using a metric that quantified the magnitude of EEG phase-locking to the syllable pitch. While this metric was also modulated by timing predictability and sensitive to stimulus repetition, the pattern of results was opposite to that observed for the FFR amplitude. Neural pitch strength was higher under predictable timing, an effect which was especially prominent during the first 200 repetitions and disappeared after 500 repetitions. This interaction was due to a gradual reduction of neural phase-locking to the stimulus pitch over the course of learning in the predictable condition, with no learning-related differences in pitch strength in the unpredictable condition.

On a methodological note, it would be interesting to see whether the effects would be similar – or perhaps more robust and/or consistent over time – had the authors chosen a different F0 for the acoustic stimuli. In this paper, the F0 of 100 Hz coincides exactly with the second harmonic of line noise (50 Hz), one of the most prominent artefacts in EEG signals. Thus, especially in the predictable condition (in which the stimulus onset asynchrony was fixed at 366 ms), approximately every third stimulus might be presented at a nearly identical phase of the line-noise cycle, compounding any interference with line noise.
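A quick back-of-the-envelope calculation illustrates the concern: with a fixed SOA, stimulus onsets sample only a small, exactly repeating set of line-noise phases, so line-noise contributions need not average out across trials. (Illustrative arithmetic only; any actual interference would depend on the recording conditions.)

```python
from fractions import Fraction

# Back-of-the-envelope check: with a fixed 366 ms SOA, stimulus onsets visit
# only a small, exactly repeating set of phases of the 50 Hz line-noise
# cycle and of its 100 Hz second harmonic (which coincides with F0 here).
soa_ms = 366
n_phases = {}
for freq_hz in (50, 100):
    cycles_per_soa = Fraction(soa_ms * freq_hz, 1000)  # line cycles elapsed per SOA
    n_phases[freq_hz] = cycles_per_soa.denominator     # distinct onset phases visited
print(n_phases)  # {50: 10, 100: 5}
```

Because onsets lock to just a handful of line-noise phases (and three SOAs span 54.9 cycles of 50 Hz, i.e. nearly an integer number), line noise cannot be assumed to cancel across trials of the predictable condition.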

Nevertheless, these results suggest two complementary mechanisms of temporal predictability: an initially increased neural phase-locking to the physical stimulus, which disappears over the course of learning, and a gradual suppression of the neural response to the primary stimulus frequency (F0) at later stages of learning. The authors interpret the first result as a reflection of more reliable processing of complex acoustic inputs. Thus, by increasing the signal-to-noise ratio, temporal predictability might facilitate the extraction of characteristic input features and the formation of neural predictions, which in turn suppress the responses to the most predictable (i.e. highly repetitive) aspects of the stimuli. The latter finding might therefore reflect a gradual deployment of neural predictions formed under temporal predictability to lower (subcortical) stages of auditory processing.

Invasive work in the rodent auditory system shows that the modulation of neural responses to predictable (e.g. repetitive) stimuli occurs already at subcortical stages, including the midbrain. The present paper suggests that the temporal predictability of stimuli might also influence the short-latency neural responses associated with activity in the auditory brainstem. While invasive recordings might be necessary to establish an unequivocal link between the modulation of neural activity by temporal predictability and specific subcortical structures, these results offer further support for proposals that even the very early stages of sensory processing might be shaped by statistical regularities in the environment.

Ryszard Auksztulewicz, Oxford Centre for Human Brain Activity

Source article: Gorina-Careta N, Zarnowiec K, Costa-Faidella J, Escera C (2016) Timing predictability enhances regularity encoding in the human subcortical auditory pathway. Scientific Reports 6:37405. doi: 10.1038/srep37405.

Time-dependency in perceptual decision-making

Sequential sampling models are a class of decision models widely supported by empirical and modelling studies of perceptual decision-making. These models propose that noisy sensory information for each choice alternative is accumulated over time until a particular decision threshold is reached, which in turn triggers the response associated with that threshold (see Forstmann et al. (2016) for a nice review). Standard sequential sampling models like the drift diffusion model (DDM) assume that this decision process is context-dependent but time-invariant, meaning that both the rate at which evidence is accumulated and the decision threshold can vary across contexts but remain fixed over the course of a single decision. One drawback of this assumption is that the time required to make a choice increases with the ambiguity of the sensory evidence. This can lead to suboptimal behaviour in contexts that require subjects to strike a balance between response speed and accuracy (the speed-accuracy tradeoff), especially when the potential cost of continued deliberation increases with time. Now, however, a paper from Murphy and colleagues (2016) has provided convergent behavioural, electrophysiological and model-based evidence for a dynamic ‘urgency signal’ during perceptual decision-making, which strongly refutes the assumption of a time-invariant decision policy and suggests that human decision-makers may be considerably more flexible than previously thought.
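To make the time-invariant assumption concrete, here is a minimal simulation sketch of a standard DDM with fixed drift and threshold (all parameter values are illustrative, not fitted to any dataset):

```python
import numpy as np

# Minimal sketch of a standard, time-invariant DDM: drift rate and decision
# threshold are fixed within a trial. All parameter values are illustrative.
def simulate_ddm(drift, threshold, n_trials=1000, dt=0.001, noise_sd=1.0, seed=0):
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:                    # fixed bound, no urgency
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x > 0)                        # upper bound = correct choice
    return np.array(rts), np.array(correct)

rt_easy, acc_easy = simulate_ddm(drift=2.0, threshold=1.0)
rt_hard, acc_hard = simulate_ddm(drift=0.5, threshold=1.0)
# More ambiguous evidence -> slower and less accurate decisions
print(rt_hard.mean() > rt_easy.mean(), acc_easy.mean() > acc_hard.mean())  # True True
```

The fixed bound is exactly what makes deliberation drag on when evidence is weak – the cost that becomes punishing once a deadline is imposed.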

Perceptual decision-making tasks that prioritise accuracy over speed do not in principle invoke time-dependency. Even in tasks which should promote a dynamic speed-accuracy tradeoff, human decision-makers have been found to display an accuracy bias whereby choices are slower and more cautious than required, leading to lower task payoffs on average. Under such conditions, standard sequential sampling models fit the data well without the need for a time-dependent component in the decision policy. In a new twist on common experimental designs in the field, Murphy and colleagues (2016) applied an incentive scheme to a standard two-alternative motion discrimination task that imposed an especially heavy monetary penalty (10 times that of an incorrect decision) on failure to respond within a stipulated time (1.4 seconds). By contrast, the reward for a correct choice and the penalty for an incorrect one were of equal magnitude. Thus, failure to respond within the temporal deadline cost participants the equivalent of ten correct trials, whereas an incorrect choice cost just one. Such an incentive scheme reduces the accuracy bias and should elicit strong time-dependency, if human decision-makers are capable of it.

Murphy et al. first examined the empirical conditional accuracy functions relating accuracy to reaction time (RT), which provide a window onto variation in the amount of accumulated evidence that subjects required for decision commitment. Comparing performance with versus without the deadline, these functions suggested two phenomena: a ‘static’, time-invariant lowering of the required evidence, coupled with a gradual decrease in required evidence as time progressed within a single trial. Moreover, approximately zero evidence was required to commit to a decision around the time of the deadline, which resulted in chance performance for decisions made that late, but also in very few missed deadlines. The latter findings in particular are hallmarks of time-dependency in the decision process. Mechanistically, these empirical observations may arise from two distinct sources in the framework of a sequential sampling model: the decision threshold might collapse over time within a trial; or the threshold could remain fixed and some form of additional input (an urgency signal) might instead be added to the evidence accumulation process itself as the trial progresses. To distinguish between these possibilities, Murphy et al. examined brain activity (EEG) recorded during task performance. They found that electrophysiological signals in the mu frequency range (8-14 Hz), which are thought to reflect building decision-related motor preparation, exhibited both increased pre-trial baseline activity under speed pressure (corresponding to a static urgency effect) and a dynamic increase in activity over the course of a trial for both choice alternatives (reflecting a time-dependent urgency effect).
These observations were further supported by computational modelling showing that a version of the DDM that included an urgency signal with both static and time-dependent components, coupled with a fixed decision threshold, explained the behavioural data far better than the standard DDM without an urgency signal but with a condition-dependent, time-invariant decision threshold.
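A minimal sketch of the urgency account: an additive signal with static and ramping components is added to the accumulated evidence while the threshold stays fixed, which amounts to an effective collapsing evidence requirement. Parameters are illustrative, and the model is far simpler than the fitted DDM in the paper:

```python
import numpy as np

# Sketch: fixed threshold plus an additive urgency signal u(t) = u0 + k*t
# (static + time-dependent components). The evidence still required for
# commitment, threshold - u(t), collapses as the trial progresses.
# Parameters are illustrative, not the fitted values from Murphy et al.
def simulate_urgency_ddm(drift, threshold, u0, slope,
                         n_trials=1000, dt=0.001, noise_sd=1.0, seed=1):
    rng = np.random.default_rng(seed)
    rts = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) + u0 + slope * t < threshold:   # urgency added to evidence
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
    return np.array(rts)

rt_urgent = simulate_urgency_ddm(drift=0.5, threshold=1.0, u0=0.2, slope=0.6)
rt_fixed = simulate_urgency_ddm(drift=0.5, threshold=1.0, u0=0.0, slope=0.0)
# Urgency truncates slow decisions: responses are faster and RTs are bounded
# (commitment is forced once u(t) alone reaches the threshold, at ~1.33 s here)
print(rt_urgent.mean() < rt_fixed.mean(), rt_urgent.max() < 1.35)
```

Because the urgency ramp eventually forces commitment on essentially zero evidence, this simple mechanism reproduces both signatures in the conditional accuracy functions: chance performance for the latest decisions, yet very few missed deadlines.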

Equipped with these findings, Murphy et al. (2016) also explored whether time-dependent urgency was present under mild speed pressure (without any explicit penalty for missed deadlines) by reanalysing data from a different set of experiments. They found that a time-dependent decision policy seemed to be deployed, albeit less severely, even in these contexts. This suggests that the assumption of time-invariance may not hold even in standard perceptual decision-making tasks, and that time-dependency is an important factor that cannot be ignored in studies of decision-making.

How might the flexible urgency signal described above be generated in the brain? One appealing candidate mechanism that has already received some attention from computational neuroscientists is modulation of the ‘gain’, or responsivity, of the brain circuits thought to carry out neural evidence accumulation. Moreover, several studies have identified pupil diameter as a reliable non-invasive index of the activity of low-level neuromodulatory systems that boast diffuse cortical projections and are hypothesised to control global neural gain (see Aston-Jones & Cohen (2005) for a review). Using pupillometry, Murphy et al. (2016) found in a final study that tonic pupil diameter prior to trial onset was larger when subjects performed under the temporal deadline, reflecting the static urgency effect. In addition, phasic, trial-evoked pupil fluctuations revealed a time-dependent increase in pupil size as the deadline approached, suggesting that the time-dependent urgency effect might be achieved through global gain modulation. Formal modelling of the pupil time series showed that the input to the pupil system during decision formation is a ramping signal that increases monotonically with elapsed decision time under deadline. Lastly, simulations using a simple neural network model lent support to the hypothesis that global gain modulation is a plausible biophysical mechanism for generating static and time-dependent urgency in the brain.
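The gain idea can be sketched under simple assumptions: a ramping gain multiplies both the signal and the noise reaching the accumulator, which speeds commitment against a fixed threshold (and, because noise is amplified along with signal, degrades the accuracy of late decisions, consistent with the urgency account). This toy accumulator is not the network model used in the paper:

```python
import numpy as np

# Sketch of urgency via global gain modulation (illustrative parameters):
# a gain g(t) ramping up within the trial multiplies both the evidence and
# the noise reaching the accumulator, so the process moves faster against a
# fixed threshold.
def simulate_gain_ddm(drift, threshold, g0=1.0, ramp=0.0,
                      n_trials=1000, dt=0.001, noise_sd=1.0, seed=2):
    rng = np.random.default_rng(seed)
    rts = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            g = g0 + ramp * t                        # ramping global gain
            x += g * (drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal())
            t += dt
        rts.append(t)
    return np.array(rts)

rt_ramping = simulate_gain_ddm(drift=0.5, threshold=1.0, ramp=2.0)
rt_constant = simulate_gain_ddm(drift=0.5, threshold=1.0, ramp=0.0)
print(rt_ramping.mean() < rt_constant.mean())   # gain ramp speeds commitment
```

Note the design choice: unlike the additive urgency sketch, gain here scales the inputs multiplicatively, which is why a single diffuse neuromodulatory signal could implement it without any change to the threshold itself.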

The above results, though important for decision-making researchers in general, hold equal relevance for timing research. Performing a task such as the one used by Murphy et al. requires subjects to sample and accumulate sensory evidence while continually updating estimates of the time elapsed since trial onset, thus concurrently recruiting brain regions involved in both decision-making and time perception. The input to the neural system responsible for generating the urgency signal may thus originate from a network of brain regions involved in the estimation of elapsed time (e.g., the dorsomedial prefrontal cortex). Perceptual decision-making experiments usually assume temporal invariance of the decision policy at the single-trial level. The paper by Murphy et al. (2016) shows that this assumption can no longer be taken for granted. Given that distributed and varied brain regions contribute to human cognition in general, it is time that more studies integrate established theories from different domains (e.g., time perception and decision-making, as in the experiments above) to obtain better insights into the workings of the human brain.

Source article:

Murphy, P. R., Boonstra, E. & Nieuwenhuis, S. (2016). Global gain modulation generates time-dependent urgency during perceptual choice in humans. Nat. Commun. 7, 13526. doi: 10.1038/ncomms13526.

Articles cited:

Forstmann, B. U., Ratcliff, R. & Wagenmakers, E. J. (2016). Sequential sampling models in cognitive neuroscience: advantages, applications, and extensions. Annu. Rev. Psychol. 67, 641-666.

Aston-Jones, G. & Cohen, J. D. (2005). An integrative theory of locus coeruleus- norepinephrine function: adaptive gain and optimal performance. Annu. Rev. Neurosci. 28, 403-450.

Visual cortex responses reflect temporal structure of continuous quasi-rhythmic sensory stimulation

As interest in the mechanistic roles of neural oscillations and neural entrainment in perception and cognition increases, so does interest in the bounding conditions for entrainment. The degree of temporal regularity is an intuitive feature to consider – entrainment to a completely periodic stimulus is clear, but entrainment to a completely structureless stimulus is impossible by definition. Somewhere in between are behaviorally relevant stimuli, such as music and speech in the auditory modality or lip movements and gestures in the visual modality. A number of papers have used time-domain measures that allow for more dynamic measures of entrainment, such as cerebro-acoustic phase lag or mutual information. However, approaches making use of “steady-state evoked potentials” or “frequency-tagging” methods to measure entrainment typically transform long epochs of time-domain neural data to the frequency domain and use the height of peaks in the frequency spectrum to index the strength of entrainment at the corresponding rate. This approach effectively discards dynamics of entrainment and, as demonstrated in a new paper by Keitel, Thut, and Gross in NeuroImage, may lead to underestimations of entrainment strength when the stimulus rate varies over time.

The authors constructed well-controlled visual stimuli for which contrast fluctuated independently in the two hemifields within theta- (4–7 Hz), alpha- (8–13 Hz), or beta-band (14–20 Hz) ranges. At the same time, the frequency of modulation in each hemifield was itself modulated according to random, continuous modulation functions that were uncorrelated across hemifields, leading to “quasi-rhythmic” visual stimulation (and perhaps the first time “quasi-rhythmic” has been concretely, operationally defined!). An attention manipulation (“attend left” vs. “attend right”) allowed the authors to compare (using EEG) entrainment strength and strength of attentional modulation for quasi-rhythmic stimuli with modulations in each frequency band and to compare these data to fixed-frequency sinusoidal modulations in the alpha range (10 Hz on the left and 12 Hz on the right).

Although there are very many interesting findings reported in the paper (and I encourage anyone reading this blog to check out the paper!), I’d like to focus on an important methodological issue that the paper confronts as well as its implications. A critical feature of quasi-rhythmic stimuli in any modality is that the instantaneous frequency wanders around over time. For that reason, converting a whole time series of electrophysiological data to the frequency domain reduces the signal-to-noise ratio for any single frequency compared to fixed-frequency stimulation (and violates the stationarity assumption of the Fourier transform). The authors elegantly demonstrate this by analyzing EEG responses to quasi-rhythmic stimulation in two ways. First, they use an approach based on calculating the cross-coherence between short segments of narrow-band EEG (multi-taper method) and the corresponding segments of the stimulus. This technique leaves temporal dynamics intact, and demonstrates entrainment of frequency-band-specific neural activity to quasi-rhythmic stimuli. Analyzing the same EEG data using an approach that considers only power of short data segments and then averages those frequency-domain representations (if I’ve read that correctly; ignoring the fact that frequency may change over the time course of stimulation) failed to reveal entrainment, and instead looked like the power spectrum that might be expected during a resting-state measurement, regardless of the frequency range of the visual stimulation.
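A toy simulation (illustrative parameters, no claim to match the paper's analysis) shows why whole-epoch spectra underestimate entrainment to quasi-rhythmic signals: a frequency-modulated oscillation spreads its power across the band, so no single FFT bin shows a tall peak, even though the oscillation is present throughout:

```python
import numpy as np

# Toy demonstration (illustrative parameters): a theta-band oscillation whose
# frequency wanders (quasi-rhythmic) spreads its power across many FFT bins,
# so a whole-epoch spectrum shows no tall peak, whereas a fixed-frequency
# oscillation of identical amplitude concentrates power in a single bin.
fs, dur = 250.0, 60.0
t = np.arange(0, dur, 1 / fs)

# Instantaneous frequency wandering within the theta band (4-7 Hz)
inst_freq = 5.5 + 1.5 * np.sin(2 * np.pi * 0.1 * t)
quasi = np.sin(2 * np.pi * np.cumsum(inst_freq) / fs)
fixed = np.sin(2 * np.pi * 5.5 * t)          # same amplitude, fixed frequency

def theta_peak(x):
    """Largest spectral amplitude within 4-7 Hz for the whole epoch."""
    spec = np.abs(np.fft.rfft(x)) / x.size
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spec[(freqs >= 4) & (freqs <= 7)].max()

ratio = theta_peak(fixed) / theta_peak(quasi)
print(ratio)   # well above 1: the quasi-rhythmic peak is strongly diluted
```

The total band power of the two signals is essentially identical; only the whole-epoch spectral readout differs, which is precisely why short-segment, time-resolved measures like the cross-coherence approach recover entrainment that the static spectrum misses.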

This demonstration potentially reconciles conflicting results in the literature regarding strength of entrainment to perfectly regular versus quasi-rhythmic stimuli. Moreover, this finding highlights the importance of not ignoring the dynamic, nonstationary nature of behaviorally relevant stimuli and the neural activity that synchronizes to such stimuli. Approaches focusing on steady-state evoked potentials and frequency-tagging often convert long stretches of time-domain data to the frequency domain without considering dynamics – which may be a close enough approximation when the stimulus has a single frequency, but certainly doesn’t represent the way that brains work generally or how entrainment to quasi-rhythmic, behaviorally relevant stimuli works more specifically. In order to really understand neural “dynamics” and how they are related to perception and cognition, making use of analysis techniques that don’t obscure dynamics will be critical. I’m optimistic that demonstrations like the current one – that not all analysis approaches preserve the dynamic nature of entrainment to quasi-rhythmic stimuli and that this matters for interpretation – will allow us to better understand the roles of neural entrainment in perception and cognition in naturalistic situations.

–Molly Henry, University of Western Ontario

Temporal statistical regularity results in a bias of perceived timing

Statistical regularity in the stimulus leads to learned expectations that give rise to intrinsic biases affecting the processing of subsequent stimulus information. One unresolved question in timing research is how learned temporal regularity affects the perceived timing of subsequent stimuli. The prevailing hypothesis is that expectations should bias any deviant interval to be perceived as more similar to the regular intervals on which the expectations were learned. That is, expectations should affect stimuli presented earlier and later than expected symmetrically.

In a series of experiments, Di Luca and Rhodes (2016) presented subjects with a sequence of isochronous stimuli followed by a test stimulus of varying asynchrony, and subjects were required to report whether they perceived the test stimulus as isochronous or anisochronous relative to the initial sequence. They found that the expectations learned from the isochronous sequence give rise to a bias, termed the bias by expected timing (BET). Under such conditions, the minimum detectable anisochrony in the test stimulus should be greater than the BET itself: the BET counteracts the improved detectability of stimuli presented later than expected, i.e., stimuli following a long sequence that arrive later than expected are perceptually accelerated, which works against the detection of their asynchrony. However, the behavioural results show that a perceptual delay is present only at large anisochronies for stimuli presented earlier than expected. The effect of the BET on early stimuli is thus insufficient to counteract their improved detectability, leading to an asymmetric distribution of responses. To summarise, the BET accelerates the perceived timing of stimuli presented at or after the expected time point and delays the perceived timing of stimuli presented earlier than expected (Figure 1 below). Di Luca and Rhodes (2016) used a novel experimental paradigm to obtain a less biased measure of the BET in different conditions.

 


Figure 1: A: Example trial sequences in which participants judged the temporal order of the audiovisual pair presented at the end of a sequence. Top: an audio sequence with the final stimulus presented earlier than expected (negative anisochrony) and the light presented before the final audio stimulus (positive stimulus-onset asynchrony). Bottom: a visual sequence with the final stimulus presented later than expected (positive anisochrony) and the audio presented before the final visual stimulus (negative stimulus-onset asynchrony). B: Average point of subjective simultaneity (PSS), corresponding to the stimulus-onset asynchrony at which the audio and visual stimuli are perceived as simultaneous. The difference between the PSS values on the two curves indicates the BET. If there were no change in perceived timing across presented anisochronies, the PSS curves would be horizontal. Instead, the BET changes as a function of anisochrony: stimuli presented at -80 ms are perceptually delayed, while stimuli presented at 0 ms and +40 ms are perceptually accelerated (from Di Luca and Rhodes (2016)).

Different classes of models that explain how the brain deals with temporal regularities (interval-based models and entrainment models) predict a symmetric BET and cannot explain the asymmetry in performance seen above. Di Luca and Rhodes (2016) explain this counter-intuitive effect using a Bayesian model of perceived timing. A Bayesian framework typically requires two distributions: an a priori probability (the prior) and a likelihood function, which combine to yield the posterior probability distribution. In the experimental paradigms described in the paper, subjects could perform the task by comparing the perceived timing of the test stimuli to the expected timing (the prior). The probability of sensing a stimulus after it has occurred is given by the likelihood function and is modelled as a monophasic impulse response resulting from an exponential low-pass filter. As the isochronous sequence progresses within a trial, the initially flat prior is updated dynamically from the posterior distribution and becomes increasingly similar to the asymmetric likelihood function (Figure 2 below). The asymmetric prior represents the learned expectation and, when combined with the asymmetric likelihood, pushes the posterior away from the likelihood and towards the prior distribution. Perceived timing is given by the mean of the posterior probability distribution. As shown in Figure 2, a stimulus presented after or before the expected time leads to an acceleration or delay of perceived timing, respectively.
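The core of the model can be sketched numerically. In the toy version below (illustrative parameters, not the paper's fits), both the prior and the likelihood are shifted-exponential distributions; multiplying them and taking the posterior mean reproduces the qualitative pattern – early stimuli perceptually delayed, stimuli at or after the expected time accelerated:

```python
import numpy as np

# Toy version of the Bayesian account (illustrative parameters, not the
# paper's fits). Both the likelihood and the learned prior are modelled as
# shifted-exponential (monophasic impulse-response-like) distributions;
# perceived timing is read out as the posterior mean.
t = np.linspace(-300.0, 300.0, 6001)         # time (ms) relative to expectation

def exp_impulse(onset, tau=40.0):
    """Asymmetric distribution: zero before onset, exponential decay after."""
    y = np.where(t >= onset, np.exp(-(t - onset) / tau), 0.0)
    return y / y.sum()

prior = exp_impulse(onset=-20.0)             # learned expectation around 0 ms

def timing_bias(onset):
    """Posterior mean minus likelihood mean: + = perceived delay, - = acceleration."""
    like = exp_impulse(onset)
    post = like * prior
    post /= post.sum()
    return (t * post).sum() - (t * like).sum()

early, on_time, late = timing_bias(-80.0), timing_bias(0.0), timing_bias(40.0)
print(early > 0, on_time < 0, late < 0)   # early delayed; on-time and late accelerated
```

The asymmetry falls out naturally: because both distributions have hard left edges and long right tails, a stimulus earlier than expected is truncated against the prior's edge (pushing its posterior mean later), while later stimuli are pulled back toward the expected time.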

 


Figure 2: Bayesian model of perceived timing. A: Likelihood probability distributions for audio and visual stimuli presented at t = 0. B: The prior distribution for the next stimulus as the trial progresses. C: Posterior probability distributions obtained by integrating the prior and likelihood distributions for stimuli presented before, at, and later than the expected time point, respectively. Perceived timing is the mean of the posterior distribution. Due to the asymmetric shapes of the prior and likelihood distributions, the posterior is pushed towards the prior distribution, resulting in a bias of perceived timing (from Di Luca and Rhodes, 2016).

Now that we have a model that explains the data, the next step is to identify its neural correlates. It has been shown that temporal expectations lead to a desynchronization of alpha-band activity, whereby the neural response to stimuli is amplified at the expected time point (Rohenkohl and Nobre, 2011). A stimulus presented before the expected time point is thus not amplified (leading to a perceptual delay), whereas stimuli presented at the expected time point or later are accelerated. Other studies have shown that alpha-beta band activity mediates feedback projections in human visual processing (Michalareas et al., 2016), and in a predictive coding framework feedback connections subserve the signalling of predictions from higher to lower cortical areas in the processing hierarchy. Taken together, these studies suggest alpha-band activity as a strong candidate for the neurophysiological correlate of the BET.

Source article: Di Luca, M. & Rhodes, D. (2016). Optimal perceived timing: Integrating sensory information with dynamically updated expectations. Scientific Reports 6: 28563.

Articles cited:

Rohenkohl, G. & Nobre, A. C. (2011). Alpha oscillations related to anticipatory attention follow temporal expectations. J. Neurosci. 31, 14076-14084.

Michalareas, G., Vezoli, J., van Pelt, S., Schoffelen, J.-M., Kennedy, H. & Fries, P. (2016). Alpha-beta and gamma rhythms subserve feedback and feedforward influences among human visual cortical areas. Neuron, 89, 384-397.