Anyone accessing this blog probably doesn’t need to be convinced that the ability to predict the timing of upcoming, behaviorally relevant stimuli is important for our ability to perceive and interact with the world. Although I’m quite rhythm-centric, it’s obvious that there are multiple ways in which we can estimate when something important might occur. For example, when the occurrence of an event is inevitable within a specific time window, its probability of occurrence usually increases as a function of time according to what is referred to as a “hazard function” (think of the probability that a car will eventually break down as you keep driving it). However, it’s also possible to engineer distributions for which the probability of occurrence is centered on a particular time point with a small or large amount of variability. The question is then about the neural mechanisms on which this type of temporal predictability (where the event usually occurs after about 1 second, for example) is based.
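To make the hazard-function idea concrete, here is a minimal sketch (my own illustration, not from the paper): even when an event is a priori equally likely at each of several possible onsets, the conditional probability that it occurs *now*, given that it hasn't occurred yet, climbs toward 1 as time passes.

```python
import numpy as np

def hazard(pdf):
    """Hazard rate h[t] = f[t] / (1 - F[t-1]): the probability that the
    event occurs at time t, given that it has not occurred before t."""
    cdf = np.cumsum(pdf)
    survival = 1.0 - np.concatenate(([0.0], cdf[:-1]))  # P(not yet occurred)
    return pdf / survival

# Uniform event timing over 5 possible onsets: the hazard rises over time
# (0.2, 0.25, 1/3, 0.5, 1.0), even though each onset is equally likely a priori.
uniform = np.full(5, 0.2)
print(hazard(uniform))
```

This is exactly why reaction times tend to shrink at later foreperiods: the longer you've waited, the more certain you are that the target is imminent.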
A recent EEG study by Herbst and Obleser examined the behavioral and neural differences between more and less temporally predictable situations, where temporal predictability had to be learned implicitly by the participants. The task was a pitch categorization task, in which a single tone was presented on each trial, and participants indicated whether it was high or low. The trick was that the time interval between a “cue” that the trial had started and the “target” (to-be-categorized tone – which was importantly embedded in noise to make the task more difficult) was varied according to distributions that made the target more or less temporally predictable. I’ll focus here on their Experiment 2, in which short blocks were presented in randomized order where the target timing was strongly predictable, weakly predictable, or not predictable.
Behaviorally, classical foreperiod effects made it clear that the basic experimental design worked as planned – reaction times decreased with increasing foreperiods (the later the target, the faster the RT). However, the condition-specific behavioral effects (or lack thereof) call into question whether the elegant experimental design (involving completely implicit learning of temporal predictability) worked as well as would have been hoped. The size of the foreperiod effect was indeed larger for temporally predictable than for unpredictable conditions, but the critical interaction was decidedly nonsignificant. Given that I would rather have expected some benefit at the expected time in the predictable conditions, rather than just a steeper foreperiod effect, I leave it up to the reader to judge whether they are sufficiently convinced by the behavioral results.
However, some of the neural effects do seem to solidly indicate that implicitly learned temporal predictability was doing something. For example, P2-ish ERP magnitudes decreased with temporal predictability, and a later negative deflection increased in magnitude with temporal predictability. Maybe most interestingly, dynamic changes in alpha power seemed to anticipate the expected target onset – alpha power increased briefly after cue onset, then decreased below baseline, and seemed to rebound back to baseline levels in anticipation of target onset. This effect was more obvious for temporally predictable than for unpredictable conditions. [Of course, this raises the question why getting alpha back to baseline (to a zero value) would be good for performance.]
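A note on what that "zero value" means: time–frequency power is typically expressed relative to a pre-cue baseline window, often in decibels, so zero means "no change from baseline" rather than "no alpha at all". A generic sketch of that normalization (my own illustration; the baseline window and dB units are assumptions, not necessarily the authors' choices):

```python
import numpy as np

def baseline_db(power, times, baseline=(-0.5, 0.0)):
    """Express time-resolved power in dB relative to a pre-cue baseline,
    so 0 dB means 'at baseline level' and negative values mean suppression."""
    mask = (times >= baseline[0]) & (times < baseline[1])
    base = power[..., mask].mean(axis=-1, keepdims=True)
    return 10.0 * np.log10(power / base)

times = np.linspace(-0.5, 2.0, 251)
power = np.ones_like(times)   # power sitting at baseline level
power[times > 0.5] = 0.5      # post-cue alpha suppression (half the power)
db = baseline_db(power, times)
# baseline window sits at 0 dB; the suppressed window at ~ -3 dB
```

On this scale, the anticipatory "rebound" the authors describe is alpha power climbing back up toward 0 dB just before the expected target time.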
For what should have been the most interesting neural dependent measure, though, the results are confusing. The authors hypothesized (as I would have) that phase consistency across trials in low-frequency bands (esp. delta, ~0.5–4 Hz) would be higher prior to predictable than unpredictable targets. The reasoning is that temporal predictability allows low-frequency neural oscillations to be in the right phase at the right time for upcoming stimuli, which might be exactly why we perceive predictable things better than unpredictable things. This goes for paradigms using rhythmic stimuli to entrain low-frequency oscillations as well as for more classical foreperiod-style paradigms that vary the temporal predictability of a target in a more interval-based fashion. As it turns out, Herbst and Obleser observed exactly the opposite – delta phase consistency was reduced for predictable compared to unpredictable schemes (though this difference occurred just after the cue and wasn't necessarily present leading up to the target, when it would have been expected).
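For readers unfamiliar with the measure: phase consistency across trials is commonly quantified as inter-trial phase coherence (ITPC) – the length of the mean resultant vector of single-trial phases at each time point, ranging from 0 (phases uniformly scattered) to 1 (identical phases on every trial). A minimal sketch with simulated phases (a generic illustration of the measure, not the authors' pipeline):

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence: length of the mean resultant vector of
    single-trial phases (trials x times). 0 = uniform, 1 = perfectly aligned."""
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

rng = np.random.default_rng(0)
n_trials, n_times = 100, 200
aligned = rng.normal(0.0, 0.3, (n_trials, n_times))       # phases clustered near 0 rad
random_ = rng.uniform(-np.pi, np.pi, (n_trials, n_times))  # phases uniformly scattered
print(itpc(aligned).mean())  # high (near 1)
print(itpc(random_).mean())  # low (near 0)
```

The authors' prediction amounts to the "aligned" case before predictable targets and the "random" case before unpredictable ones; their data showed the reverse pattern shortly after the cue.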
With respect to delta phase, there are several possible explanations for the surprising result that delta phase was less concentrated in predictable situations. First, the authors took great care not to contaminate the pre-target time window with target-evoked responses. By removing the target-evoked ERPs before time–frequency transformation, they may have removed an artifact that was present in previous studies. Second, the authors took greater care than any paper I'm personally aware of to not just manipulate foreperiod, but to randomize inter-trial intervals in a way that wouldn't allow for entrainment to the low-frequency pace of the task itself. To my knowledge, no other study of the neural underpinnings of temporal preparation has taken such care to abolish an overall experimental pacing – fixed or merely jittered inter-trial intervals are the norm. Nonetheless, I still would not have expected opposite phase-consistency results.
In any case, I think the paradigm – in which temporal predictability had to be learned entirely implicitly – is remarkably clever and can be used in future work to truly understand the neural mechanisms underlying temporal predictability that is not entrainment-based (i.e., not rhythm-based). Given recent work moving in this direction, this study – which carefully removed rhythmicity (here, of the task itself) and eliminated evoked responses that could contaminate phase-concentration measures – should be used as an example of thoughtful experimental design.