Perceptual reorganisation in deaf participants: Can high-level auditory cortex become selective for visual timing?

A paper recently published in PNAS reports a fascinating example of task-specific perceptual reorganisation in deaf participants that raises interesting questions regarding the involvement of high-level auditory cortex in temporal processing.


The study found that a rhythmic sequence task involving visual stimuli (a flashing disc) evoked activity in a high-level auditory region in deaf participants. The region, called area Te3, showed stronger responses to temporally patterned sequences of visual flashes than to visual sequences consisting of isochronous stimulation. In participants with intact hearing, however, area Te3 showed rhythm-selective responses only to auditory sequences, confirming that this region is typically involved in auditory processing. The authors concluded that auditory sensory deprivation led to a reorganisation of the pathways serving high-level auditory cortex, a suggestion supported by a connectivity analysis showing increased connectivity between area Te3 and visual area MT/V5 in deaf participants.


Although this is a striking example of perceptual reorganisation, what is interesting is that the authors interpret the results as evidence of task-specific reorganisation of high-level cortex. The implication is that area Te3 is specialised for rhythmic processing in a modality-independent manner. To support their argument, the authors note similar evidence of modality-independent functional specialisation in blind participants, who show activation in visual cortex in response to auditory stimuli.


How could an auditory-selective region come to be visually selective in deaf participants? One answer may lie in the residual hearing reported by the deaf participants. A table in the supplementary materials indicates that all the participants used hearing aids (outside the study) and that most rated their speech perception as poor to moderate. This is interesting because listeners with impaired hearing rely more on visual temporal cues from the face to support speech intelligibility. The increased use of visual timing cues to aid auditory processing may have strengthened the structural pathways between higher auditory cortex and visual cortex.


If so, this raises questions about the degree to which area Te3 should be considered a task-specific region (i.e., modality independent and selective for timing tasks) or an auditory region typically involved in the temporal organisation of speech. The posterior STS is a multisensory region and contains many areas that are strongly selective for audiovisual speech perception. To identify the properties of area Te3, a more careful analysis of stimulus-specific and task-specific responses within individual participants would be needed before any definitive claim can be made about the functional properties of this region.

Sequence learning modulates neural responses and oscillatory coupling in human and monkey auditory cortex

Picking up on statistical regularities over time is an important prerequisite for language acquisition. For example, learning the transitional probabilities between syllables provides important scaffolding for segmenting the ongoing speech stream into component words – something that is not possible based on auditory information alone. A recent study by Kikuchi and colleagues examined the electrophysiological neural responses to confirmations and violations of an artificial grammar’s rules, but did so in an especially ambitious way – by comparing invasive recordings from human and monkey auditory cortex.

Both species were exposed to an artificial grammar (sequences of CVC nonsense words concatenated in rule-based ways) for 30 minutes, and then neural recordings were made during listening to sequences in which the context led to a specific nonsense word being consistent with or violating the grammatical structure. In response to all nonsense words, both species showed phase consistency in the theta frequency band (~4–8 Hz) as well as power modulations in the gamma band (>~50 Hz). In addition, significant phase–amplitude coupling was found between the theta and gamma bands in response to nonsense words. The more interesting question then, is what happens in response to confirmations versus violations of the artificial grammar rules?
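The theta–gamma phase–amplitude coupling reported here can be made concrete with a minimal sketch. This is not the authors' pipeline; it is a generic Tort-style modulation index on a synthetic signal, with all parameter values (filter bands, bin count, test frequencies) chosen for illustration:

```python
# Sketch of quantifying theta-gamma phase-amplitude coupling (PAC) with a
# Tort-style modulation index (MI). Synthetic data; bands are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter (second-order sections for stability)."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(50, 100), n_bins=18):
    """Bin the gamma amplitude envelope by theta phase; MI is the normalized
    KL divergence of that distribution from uniform (0 = no coupling)."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= edges[i]) & (phase < edges[i + 1])].mean()
                         for i in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    return np.sum(p * np.log(p * n_bins)) / np.log(n_bins)

# Synthetic check: gamma bursts locked to theta peaks vs. constant gamma.
fs = 1000
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 70 * t)
uncoupled = theta + 0.3 * np.sin(2 * np.pi * 70 * t)
assert modulation_index(coupled, fs) > modulation_index(uncoupled, fs)
```

The key design choice is that the gamma amplitude in the coupled signal waxes and wanes with the theta cycle, so its phase-binned amplitude distribution is far from uniform, while the uncoupled signal yields an MI near zero.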

In both species, phase–amplitude coupling was modulated by both confirmations and violations of the artificial grammar rules. Some neurons liked confirmations, some liked violations, and some liked both. In a classical statistical testing world, averaging over recording sites, this would very much be a null effect. Of course, that’s not how neural population coding goes, so we can imagine that looking at the activity pattern over the population of neurons may have provided more information about whether grammar was being respected, but this type of analysis was not performed. Instead, an analysis is presented which suggests that the latencies of the different neural effects in monkeys at least were different, such that phase–amplitude coupling effects and changes in single-unit activity occurred earlier in time than gamma-power modulations. Keep in mind that these are the latencies of the statistical effects, and not necessarily when the real action starts happening (just when the action crosses a significance threshold). There were no attempts to relate the effects to each other in a more fine-grained way, to learn for example whether single-trial phase–amplitude-coupling modulations might predict subsequent power modulations on the same trial.

So, I’ll ask, as the authors ask, what does it all mean? There were no species differences whatsoever, at least in as far as what the current techniques and measures could tell. What does that imply for the relationship between neural “oscillations” (here, theta–gamma coupling specifically) and speech segmentation / perception? That is, can a neural response that is conserved across species do something special for humans that it doesn’t do for other species that don’t use language in the way we typically think of language being used? I’d say, “sure”. For one, the study tested responses to learned statistical regularities in the transitions between complex sounds, something some species of non-human animals seem quite able to do (see also a recent demonstration that monkey auditory cortex neural activity synchronizes with the slow rhythms of speech). On top of that, to cite something Anne-Lise Giraud said at a “Neural Oscillations in Speech and Language Processing” workshop I just attended, one of the really appealing things about neural oscillations is exactly that they are evolutionarily conserved, but still DO seem to have been coopted to do something special for humans.

To sum up, despite my superficial grumpiness about the paper’s shortcomings, I do think the approach is 100% commendable, and one way forward for learning about speech and language processing. Species comparisons are hard, especially with invasive recordings even for humans(!). But having the opportunity to directly compare humans to other species and to use carefully matched stimuli, pipelines, and maybe even tasks has the potential to tell us a lot about the human capacity to learn and communicate via spoken language.

What Language You Speak Shapes Your Subjective Time

If the popular 2016 science fiction movie “Arrival”, in which linguist Dr. Louise Banks learns an alien language that enables her to understand and perceive time in a radically different way (past, present, and future existing simultaneously), fails to amaze you, then perhaps real experimental evidence in a similar vein will. A recent article by Prof. Emanuel Bylund and Prof. Panos Athanasopoulos, published in the Journal of Experimental Psychology: General, demonstrates an effect of language on time perception.

The linguistic relativity hypothesis, more popularly known as the “Sapir-Whorf hypothesis”[1], suggests that language affects thought and cognition (although see McWhorter[2], 2014 for an opposing view). Previous studies[3-5] by Prof. Lera Boroditsky and colleagues have shown how the concept of time is represented differently in different languages, but a strong experimental demonstration that language affects time perception was lacking.

Prof. Bylund and Prof. Athanasopoulos used a temporal reproduction task with three groups: i) Spanish speakers, ii) Swedish speakers, and iii) Spanish-Swedish bilinguals, to investigate the effect of language on time perception. They selected Spanish and Swedish speakers because time is represented and expressed differently in these two languages. Spanish speakers represent time in terms of volume and use metaphors like “much time”, whereas Swedish speakers represent time in terms of distance and use metaphors like “long time”.

For 40 Spanish and 40 Swedish speakers, they measured performance in the temporal reproduction task as a function of changes in non-temporal stimulus dimensions: a growing line (representing the distance metaphor) or the filling of a container (representing the volume metaphor). The duration of the stimulus and the irrelevant stimulus dimension (i.e., the length of the line or the fill level of the container) were manipulated orthogonally. The stimulus durations for the reproduction task ranged from 1000 ms to 5000 ms in steps of 500 ms, whereas the length of the growing line or the fill level of the container ranged from 100 to 500 pixels in steps of 50 pixels.
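To make "manipulated orthogonally" concrete, a fully crossed design pairs every duration with every spatial extent, so the two dimensions carry no information about each other. The level values below are from the paper; the trial-list construction itself is a hypothetical sketch:

```python
# Sketch of a fully crossed (orthogonal) design: every duration is paired
# with every line length / fill level, so the dimensions are uncorrelated.
import itertools
import numpy as np

durations = np.arange(1000, 5001, 500)  # ms, 9 levels
extents = np.arange(100, 501, 50)       # pixels, 9 levels

trials = list(itertools.product(durations, extents))  # 81 combinations
d, e = np.array(trials).T

print(len(trials))  # 81
# Correlation between duration and extent is exactly zero by construction,
# so neither dimension predicts the other.
print(abs(np.corrcoef(d, e)[0, 1]) < 1e-9)  # True
```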

Half of the Spanish and Swedish speakers performed the temporal reproduction task with the growing-line stimulus, while the other half performed it with the filling-container stimulus. At the beginning of every trial, the instruction to perform either the temporal reproduction task or the non-temporal (line or container) task was prompted with a word label and a symbol (e.g., an hourglass for the temporal task and a cross for the non-temporal task). For the Spanish group the word labels were ‘duración’ for the temporal task, ‘distancia’ for the line task, and ‘cantidad’ for the container task, whereas for the Swedish group they were ‘tid’, ‘avstånd’, and ‘mängd’, respectively.

When they split the data into extreme (1000 ms, 1500 ms, 4500 ms, 5000 ms) and medium (2000 ms to 4000 ms) categories, they found that, for the medium category, Spanish speakers’ temporal reproduction was influenced by the filling container but not by the growing line. Conversely, Swedish speakers’ temporal reproduction was influenced by the growing line but not by the filling container. Because Spanish speakers use a volume-based metaphor to represent time, a volume-based stimulus interfered with their temporal reproduction, whereas Swedish speakers use a distance-based metaphor, so a distance-based stimulus interfered with theirs.

Interestingly, when the same experiment was performed with a different group of 40 Spanish and 40 Swedish speakers without the word prompt (only symbols indicated which task to perform), no such effect was observed, suggesting that a linguistic cue is necessary for the effect to be tapped in the temporal reproduction task.

To establish that the above effect is language-related rather than a cultural bias, they ran the experiment with 74 Spanish-Swedish bilinguals, half of whom received prompts in Spanish and half in Swedish. As predicted, and as observed in Experiment 1, when the Spanish word prompt was used, participants’ temporal reproduction was influenced by the filling-container stimulus, whereas when the Swedish word prompt was used, it was influenced by the growing-line stimulus, establishing that language context influences time perception.

In conclusion, this study provides convincing evidence for an effect of language context on time perception and opens up a range of possibilities and questions whose exploration should lead to a better understanding of the relationship between language and time perception. In the future, it would be worthwhile to investigate this effect with other languages and with other temporal paradigms, such as temporal bisection and generalization. In addition, it would be interesting to investigate whether such linguistic cues really influence time perception or merely induce a response bias; such questions could be addressed with an ERP version of a similar experiment, measuring the CNV (contingent negative variation) component.

Although a change in time perception as drastic as that depicted in “Arrival” may not be feasible at the moment, some milder progress has been made in this direction with the introduction of “The Whorfian Time Warp”.


1. Whorf, B. L. (1956). Language, thought, and reality: Selected writings (J. B. Carroll, Ed.). Cambridge, MA: MIT Press.

2. McWhorter, J. (2014). The Language Hoax. Why the World Looks the Same in Any Language. New York: Oxford University Press.

3. Boroditsky, L. (2001). Does language shape thought? Mandarin and English speakers’ conception of time. Cognitive Psychology, 43, 1-22.

4. Boroditsky, L., Fuhrman, O., & McCormick, K. (2010). Do English and Mandarin speakers think about time differently? Cognition, 118, 123-129.

5. Casasanto, D., Boroditsky, L., Phillips, W., Greene, J., Goswami, S., Bocanegra-Thiel, S., & Gil, D. (2004). How deep are effects of language on thought? Time estimation in speakers of English, Indonesian, Greek, and Spanish. In K. Forbus, D. Gentner, & T. Regier (Eds.), Proceedings of the 26th Annual Conference of the Cognitive Science Society (pp. 186–191). Mahwah, NJ: Lawrence Erlbaum Associates.

Source article: Bylund, E., & Athanasopoulos, P. (2017, April 27). The Whorfian Time Warp: Representing Duration Through the Language Hourglass. Journal of Experimental Psychology: General. Advance online publication.

—Mukesh Makwana,
Doctoral student,
Centre of Behavioural and Cognitive Sciences (CBCS), India.

Temporal encoding in EEG-derived brain states

How our brain encodes time is still a mystery. Temporal information might be encoded in hippocampal time cells, in the activity of midbrain dopamine neurons, in the neural circuitry of the basal ganglia, or in some other neural dynamics. Investigating these temporal encoders usually requires invasive non-human or in vitro experimental approaches. However, a recent study published in Scientific Reports by Fernanda Dantas Bueno, Vanessa C. Morita, Raphael Y. de Camargo, Marcelo B. Reyes, Marcelo S. Caetano, and André M. Cravo showed that temporal information can also be extracted from non-invasive, human-EEG-derived brain states.

They used a unique and interesting temporal generalization task. Participants saw a target circle at the extreme left of the screen, vertically centered. At the beginning of each trial a beep was played (B1; 1000 Hz, 100 ms) and, simultaneously, the target circle started moving horizontally from the left to the right side of the screen at a speed of 90/sec. At the center of the screen there was an aiming sight (a white circle). The moving target circle took exactly 1.5 s to reach the center of the aiming sight. Participants were instructed to press a key when the target circle aligned with the aiming sight; this produced another beep (B2; 500 Hz, 100 ms) and a green disc as an indication of the key press. These trials were called regular trials, and they essentially helped participants learn the standard 1.5-s interval between the two beeps (B1 and B2). Intermixed with the regular trials were test trials, which differed in two respects. First, the trajectory of the target was occluded by a rectangular box, so participants only heard the B1 beep and did not see the moving target circle. Second, participants did not press the key to produce the B2 beep; instead, B2 was played automatically after a variable interval (0.8, 0.98, 1.22, 1.5, 1.85, 2.27, or 2.8 s) from B1. Participants reported whether the interval between B1 and B2 was shorter than, equal to, or longer than the standard 1.5 s learned during regular trials. Overall, each participant performed 350 regular trials and 350 test trials. While participants performed this task, their brain activity was recorded with 64-channel scalp EEG.

For the behavioral analysis, they fitted psychometric functions (cumulative normals) to the p(short) and p(long) responses, and calculated the point of subjective equality (PSE), just noticeable difference (JND), and Weber ratio (WR) separately for the proportions of short and long responses. They found that sensitivity was better (i.e., the JND was smaller) for short responses than for long responses, but when sensitivity was normalized by the actual interval there was no difference between the short and long response conditions. Thus, they demonstrated the scalar property of time perception.
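The psychometric pipeline described above can be sketched in a few lines. This is a hedged illustration, not the authors' exact code: a cumulative normal is fit to synthetic proportion-"long" responses at the study's comparison intervals, and the PSE, JND, and Weber ratio are read off the fitted parameters:

```python
# Sketch: fit a cumulative normal to proportion-"long" responses and
# derive PSE, JND, and Weber ratio. Synthetic observer, not real data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_psychometric(intervals, p_long):
    """Fit p_long = Phi((t - mu) / sigma); return PSE, JND, Weber ratio."""
    (mu, sigma), _ = curve_fit(lambda t, m, s: norm.cdf(t, m, s),
                               intervals, p_long,
                               p0=[np.mean(intervals), 0.3])
    pse = mu                      # 50% point of the fitted curve
    jnd = sigma * norm.ppf(0.75)  # half the 25%-75% spread
    return pse, jnd, jnd / pse    # Weber ratio = JND / PSE

# Synthetic observer with PSE = 1.5 s and sigma = 0.3 s, probed at the
# comparison intervals used in the study.
t = np.array([0.8, 0.98, 1.22, 1.5, 1.85, 2.27, 2.8])
p = norm.cdf(t, 1.5, 0.3)
pse, jnd, wr = fit_psychometric(t, p)
print(round(pse, 2), round(jnd, 2))  # recovers ~1.5 and ~0.2
```

Normalizing the JND by the interval (the Weber ratio) is exactly what lets one test the scalar property: if timing noise grows in proportion to the timed interval, the WR stays constant across conditions.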

In the electrophysiological analysis, they showed the classical CNV (contingent negative variation), which peaked at the standard duration (1.5 s). To investigate whether time-resolved EEG signals carry temporal information, they cleverly used a multivariate pattern analysis (MVPA) and multidimensional scaling (MDS) approach. According to state-dependent timing models, temporal information is encoded in the evolution of brain states; if this is true, then distinct spatiotemporal patterns of activity should produce different patterns of activation across the EEG sensors. To measure these activation patterns, they performed MVPA on the data for six intervals (0.8, 0.98, 1.22, 1.5, 1.85, and 2.27 seconds) using the Mahalanobis distance, and used MDS to represent the results in a two-dimensional plot. From these analyses they showed that the EEG-derived spatiotemporal dynamic pattern predicts participants’ responses for the uncertain intervals (short: 1.22 s; long: 1.85 s). Moreover, they showed that the rate of change of the state-space trajectory was higher for the shortest interval than for the longest, once again demonstrating the scalar property of time in brain states.
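The core of the MVPA/MDS approach can be illustrated with synthetic data. This sketch is an assumption-laden simplification (random patterns standing in for sensor data, a pooled covariance for the Mahalanobis metric, classical MDS by eigendecomposition), not the authors' analysis:

```python
# Sketch: pairwise Mahalanobis distances between per-interval mean sensor
# patterns, projected to 2-D with classical MDS. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sensors = 50, 64
intervals = [0.8, 0.98, 1.22, 1.5, 1.85, 2.27]

# Hypothetical data: each interval's sensor pattern drifts with the interval.
patterns = {iv: rng.normal(iv, 1.0, (n_trials, n_sensors)) for iv in intervals}

# Pooled covariance defines the Mahalanobis metric (ridge for stability).
all_data = np.vstack(list(patterns.values()))
cov_inv = np.linalg.inv(np.cov(all_data.T) + 1e-6 * np.eye(n_sensors))

means = np.array([patterns[iv].mean(axis=0) for iv in intervals])
n = len(intervals)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        diff = means[i] - means[j]
        D[i, j] = np.sqrt(diff @ cov_inv @ diff)

# Classical MDS: double-center the squared distances, take top-2 eigenvectors.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
coords = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))  # (6, 2) embedding
```

In this toy version, nearby intervals end up closer together in the 2-D embedding than distant ones, which is the qualitative signature one would look for in a state-dependent timing account.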

In conclusion, this is a very good study, demonstrating and encouraging the application of MVPA and MDS to human EEG-derived brain states, with implications for understanding temporal encoding.

Source article: Bueno, F. D., Morita, V. C., de Camargo, R. Y., Reyes, M. B., Caetano, M. S., & Cravo, A. M. (2017). Dynamic representation of time in brain states. Scientific Reports, 7.


—Mukesh Makwana,

Doctoral student,

Centre of Behavioural and Cognitive Sciences (CBCS), India.


Perceptual lags in the detection of postural perturbations

The vestibular system is perceptually slow compared to other sensory modalities. Consequently, vestibular stimulation needs to occur before stimuli in other sensory modalities in order to be perceived as simultaneous in tasks requiring multisensory integration. This could be a byproduct of the central nervous system relying on other sensory modalities to confirm sensory onset and prioritising physiological responses (as in reflexes) over conscious awareness.

In a recent paper in Neuroscience Letters, Lupo & Barnett-Cowan (2017) investigated whether perceptual lags also exist in response to temporally unpredictable postural perturbations (falls). If slow vestibular perception were restricted to direct vestibular stimulation and movements of the head, there would be no lead time for the perturbation relative to a control stimulus in another sensory modality; if not, the onset of a fall should be perceived with a delay relative to the control stimulus. In the study, temporal order judgments were used to examine the perceived timing of a fall by pairing temporally unpredictable postural perturbations with an auditory stimulus. Temporal order judgments at various stimulus onset asynchronies (SOAs) were used to determine the point of subjective simultaneity (PSS). Across subjects, the average PSS preceded the point of true simultaneity (a negative PSS), indicating that the onset of the fall was perceived with a delay.
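The PSS extraction can be sketched with hypothetical numbers (the data and the sign convention below are invented for illustration, not taken from the paper): find the SOA at which the two orders are reported equally often.

```python
# Minimal sketch of extracting the PSS from temporal order judgments:
# the SOA at which "perturbation first" is reported 50% of the time.
import numpy as np

# Hypothetical TOJ data. Convention (for illustration only): positive SOA
# means the perturbation was delivered before the tone by that many ms.
soa = np.array([-300, -200, -100, 0, 100, 200, 300])
p_perturb_first = np.array([0.04, 0.10, 0.25, 0.40, 0.68, 0.88, 0.96])

# p_perturb_first rises monotonically with SOA, so a linear interpolation
# of the 50% crossing gives the PSS.
pss = np.interp(0.5, p_perturb_first, soa)
print(round(pss, 1))  # 35.7 -- under this convention, the perturbation must
                      # lead the tone by ~36 ms to feel simultaneous,
                      # i.e. fall perception lags
```

A full analysis would fit a psychometric function rather than interpolate, but the logic is the same: the PSS is wherever that function crosses 50%.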

A major limitation of the study is that the postural perturbations were initiated manually by the experimenter, which gave rise to skewed SOA distributions. Using correlation and cross-validation analyses, the authors addressed this issue and showed that the findings are robust to it. Another limitation is that the perturbation stimulus and the level of the auditory stimulus were not standardised across subjects. This might introduce substantial inter-individual differences in the results, raising concerns about their generalisability.

To summarise, the onset of a fall is perceptually delayed in human subjects. This delay could arise from slow inertial perception or from slow vestibular perception. Future research needs to tease these factors apart and identify the mechanisms that produce such perceptual delays. By showing that perceptual delays exist in healthy young subjects, Lupo & Barnett-Cowan suggest that the underlying mechanisms might be impaired in people with balance problems, especially in the aging population. Their work guides future research aimed at developing effective interventions for people prone to falls.

Source article:

Lupo, J., & Barnett-Cowan, M. (2017). Perceived timing of a postural perturbation. Neuroscience Letters. 639, 167-172. doi: 10.1016/j.neulet.2016.12.055.

Time perception, mindfulness and attentional capacities in transcendental meditators and matched controls

Our perception of and memory for the passage of time depend on a lot of factors that are unrelated to the actual physical passage of time, as measured by a clock. The adages “time flies when you’re having fun” and “a watched pot never boils” summarize these effects: fill an interval with a lot of interesting stimuli, and time [prospectively] flies, but an empty interval occupied only by waiting will seem to last an eternity. As a generalization, explanations for these effects tend to focus on how much attention was on time itself. When exciting things are happening, you don’t pay much attention to time passing, and so time seems to fly, whereas when nothing’s happening, where else could you put your attention but on the passing of time?

Meditation can take many forms, but (here comes another generalization) one commonality among various practices is that they often promote awareness – whether of one’s surroundings, one’s own mental state, or one’s responses to external stimuli – and self-regulation. With respect to time perception, one consequence of meditation that is particularly interesting is mindfulness, that is, bringing one’s attention to the experiences occurring in the present moment. Intuitively, it makes sense that mindfulness might improve time perception, or in some cases, might lengthen perceived duration, as time won’t fly like it would if you were distracted from each present moment.

Transcendental meditation is a specific practice in which a calm, peaceful, and aware mental state is achieved via repetition of a mantra. Making an assumption that transcendental meditation practitioners would be more mindful than matched controls, Schötz, Otten, Wittmann, Schmidt, Kohls, and Meissner tested for a relation between mindfulness and time perception. Practitioners and controls were tested on mindfulness, impulsiveness, attention, time perspective, subjective experience of time, as well as time estimation, reproduction, and discrimination tasks. I’ve taken the liberty of plotting the important results, since the original manuscript didn’t contain any figures.

Meditators were significantly more mindful (present and accepting), scored higher on the “present fatalistic” dimension of the time perspective questionnaire (related to mindfulness, can be summarized by the statement “Because things always change, one cannot foresee the future”), and reported significantly less time pressure than matched controls. The groups were matched on other important things though, like attentional capacities, stress levels, and mental and physical activity.

There were also differences in terms of time perception, but some of the results were admittedly very confusing. Meditators were better at estimating an 80-s interval during which they were reading numbers, but weren’t better at producing a minute while reading numbers, or estimating a 40-s interval while not doing anything else. (The dashed lines in the figure are the target duration – if the bars hit those dashed lines, participants would be, on average, perfectly accurate.)

Meditators were also more precise at reproducing intervals in the milliseconds-to-seconds range (600 ms – 1400 ms and 8 s – 20 s), though the metric used to determine precision completely escaped me; it's also very strange, given the psychophysics of time perception, that precision would be on the order of 13% for intervals that were hundreds of milliseconds long but on the order of 2% for longer intervals (it's the latter that's more weird). But details aside, it seems practitioners were more accurate at interval reproduction.

Finally, auditory temporal discrimination thresholds were smaller for meditators.

So, what can we conclude from these data? Should you practice transcendental meditation to improve your time perception abilities? Maybe just your ability to estimate 80 s while reading numbers? That’s unclear. There certainly does seem to be something to the idea that practicing transcendental meditation might come with a somewhat more accurate time sense. Does this come from using a mantra as a metronome as the authors suggest? Seems unlikely that would be a useful strategy for reproducing e.g., 600 ms. Does it result from being more mindful, and aware of the present and the passage of time? That seems realistic to me. But, importantly, moving forward, and as scientific as well as personal interest in meditative practices seems to increase, it’s critical to use well-motivated and well-controlled designs to test the potential benefits as well as detriments of meditation.

– Source article: Schötz, Otten, Wittmann, Schmidt, Kohls, Meissner. Time perception, mindfulness and attentional capacities in transcendental meditators and matched controls. Personality and Individual Differences.


Let’s Dissociate the Neural Networks for Time Perception and Working Memory

At a fundamental level, time perception involves storing the temporal information of the present event and comparing it with past temporal memories of similar or other events. It is impossible to imagine time perception operating in the absence of working memory, and hence it has always been difficult to dissociate and study the two in a single paradigm.

A recent study published in Frontiers in Human Neuroscience by Sertaç Üstün, Emre Kale, and Metehan Çiçek introduced a novel paradigm to understand and dissociate the neural networks involved in time perception and working memory. Although all time perception tasks involve working memory, the main objective of this study was to compare brain activity when participants performed only a timing task, only a numerical working memory task, or both.

In this study, participants (N=15) performed four types of experimental tasks (a control task, a timing-only task, a working-memory-only task, and a dual task) while their brain activity was scanned using fMRI. Before each trial, participants were cued as to which task they should focus on and report.

In the control task, participants saw a box moving horizontally from the left side of the screen toward the right. The middle of its path was occluded by a wide black vertical bar. The black bar can be imagined as a tunnel and the box as a car: initially you see the car (box) moving from left to right; in the middle of the screen it passes through the tunnel (black bar), so you cannot see it; and after some time it reappears on the other side. Participants pressed a key when the box reappeared from behind the black bar. In the timing-only task, the authors very smartly changed the speed of the moving box while it was occluded, so the box sometimes reappeared after a short time (when the speed was increased) or after a long time (when the speed was decreased). Participants reported whether the speed had increased or decreased. In the working-memory-only task, they used a numerical task: the box contained 1, 2, 3, or 4 dots, and the number of dots could increase or decrease while the box was occluded. Participants reported whether the number of dots had increased or decreased. Lastly, in the dual-task condition, participants reported both the number of dots and the speed of the box.
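The timing manipulation reduces to simple kinematics: the occlusion duration is occluder width divided by speed, so a hidden speed change rescales the reappearance time. The numbers below are hypothetical, chosen only to illustrate the logic:

```python
# If the occluder is w pixels wide and the box enters at speed v (px/s),
# the box stays hidden for w / v seconds; a hidden speed change by
# factor k rescales that to w / (k * v).
def occlusion_time(width_px, speed_px_s, speed_factor=1.0):
    """Time (s) the box spends hidden behind the occluder."""
    return width_px / (speed_px_s * speed_factor)

baseline = occlusion_time(300, 200)        # constant speed
sped_up = occlusion_time(300, 200, 1.5)    # box reappears early
slowed = occlusion_time(300, 200, 0.75)    # box reappears late
print(baseline, sped_up, slowed)  # 1.5 1.0 2.0
```

This is what lets the paradigm probe timing implicitly: participants can only judge the hidden speed change by comparing the elapsed occlusion interval against an internally timed expectation.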

Behaviourally, they recorded only reaction times (RTs) and accuracy for the four experimental tasks. In general, participants were faster and more accurate in the control task than in the other, more demanding tasks. Comparing the accuracy of the timing-only task with that of the numerical working-memory-only task suggests that the timing task was relatively more difficult.

In terms of brain activation, they observed enhanced activity in right dorsolateral prefrontal and right intraparietal cortical networks, together with the anterior cingulate cortex (ACC), anterior insula, and basal ganglia (BG), when the timing task was contrasted with the control. While right-hemisphere dominance was observed for the timing task, left-hemisphere dominance emerged when the numerical working memory task was contrasted with the control: specifically, enhanced activation in left prefrontal cortex, ACC, left superior parietal cortex, BG, and cerebellum. Both time perception and working memory were associated with strong peristriate cortical activity. One more interesting observation was that the timing task deactivated the intraparietal sulcus (IPS) and posterior cingulate cortex (PCC), whereas the control, numerical memory, and dual (time-memory) tasks activated these regions.

They conclude that their results support a distributed neural network model of time perception and that the intraparietal and posterior cingulate areas might serve as an interface between memory and timing.

Although this study provides a good paradigm for studying questions about timing and memory, some points should be noted. First, the authors did not use an explicit psychophysical timing task, which could have provided further insight into the neural networks involved in maintaining a temporal versus a non-temporal working memory. Second, the box moved in only one direction (left to right); including a right-to-left condition would have controlled for this and spoken more directly to the hemispheric lateralisation observed for the timing and numerical working memory tasks. Top-to-bottom and bottom-to-top conditions, with a horizontal black bar as the occluder, could also be run.

Overall, this is a very interesting and cleverly designed study of the brain networks involved in timing and working memory; it should encourage the timing community to pursue these questions further and to examine the role of the intraparietal and posterior cingulate areas in the two processes.

Source article: Üstün, S., Kale, E. H., & Çiçek, M. (2017). Neural Networks for Time Perception and Working Memory. Frontiers in Human Neuroscience, 11 (83).

—Mukesh Makwana, Doctoral student, Centre of Behavioural and Cognitive Sciences (CBCS), India.

Causal evidence that intrinsic beta-frequency is relevant for enhanced signal propagation in the motor system as shown through rhythmic TMS

Evidence is accumulating that beta-band neural oscillations (~13–30 Hz) are related to temporal prediction in the context of auditory rhythm perception. Since beta oscillations are faster than any musical rhythms we would be interested in (those are more in the 1–5 Hz range), they can’t phase-lock to the temporal structure of auditory rhythms. Instead, fluctuations in beta power synchronize with auditory rhythms. For example, while listening to an isochronous tone sequence (think: a metronome), beta oscillations weaken, or desynchronize (or both), after each tone, but then get stronger, or resynchronize (or both), in anticipation of the next tone. This pattern scales with tempo: the faster the tone sequence goes, the faster beta power fluctuates. And if you take away the temporal structure by randomizing the inter-tone intervals, the patterned beta-power fluctuations go away. Beta power also differs for individual tones that are imagined as emphasized versus those that are not, suggesting a role in beat/meter perception. Beta oscillations are often linked to the motor system and become pathological in Parkinson’s disease, which is meaningful because Parkinson’s patients (in addition to having well-described motor problems) have trouble discriminating rhythms with a regular beat.

The motor system (including the basal ganglia) is thought to be important for rhythm and beat perception. Given the tight association between the motor system and beta-band neural oscillations, one interesting possibility is to interfere with beta oscillations using non-invasive brain stimulation in a way that would be predicted to disrupt (or enhance) rhythm and beat perception. Which brings me to a recent paper by Romei et al., which actually has nothing to do with rhythm perception (but potentially opens a lot of doors for those of us who are interested in the topic).

The authors first measured the individual peak beta frequency for each participant during finger tapping (this by itself is very cool, as relatively few papers investigate what individual differences in neural oscillator properties actually mean). Then, they applied rhythmic transcranial magnetic stimulation (rTMS) to left M1. The critical thing is that rTMS was applied at the individual peak frequency, or at higher and lower frequencies that still fell within the beta range (±3 Hz, ± 6 Hz). Simultaneously, both EEG (electroencephalography) and EMG (electromyography) were measured (the latter from the right hand).
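The first step – estimating an individual peak beta frequency – is conceptually simple: compute a power spectrum and take the frequency of maximal power within the beta band. A minimal Python sketch on synthetic data; the function name, sampling rate, and embedded 19 Hz component are illustrative assumptions, not details from the paper:

```python
import numpy as np
from scipy.signal import welch

def peak_beta_frequency(eeg, fs, band=(13.0, 30.0)):
    """Frequency of maximal spectral power within the beta band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)  # ~0.25 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(psd[mask])]

# Synthetic "EEG": 60 s of noise with an embedded 19 Hz rhythm.
fs = 500
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
eeg = rng.normal(size=t.size) + 0.8 * np.sin(2 * np.pi * 19 * t)
print(peak_beta_frequency(eeg, fs))  # recovers a peak at ~19 Hz
```

In real data the spectrum would be estimated from EEG recorded during tapping, and one would verify that the beta peak is well defined rather than blindly taking the argmax; Welch's method is only one reasonable estimator.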

Cortical beta oscillations measured by EEG were stronger (power) and more synchronized with the rTMS (phase locking) when the rTMS matched the individual peak beta frequency (less power and less synchronization for off-best-frequency rTMS, and even less for sham stimulation). I interpret this to mean that the individual peak beta frequency reflects the resonance frequency of a neural oscillator, which can be enhanced by even weak (sub-threshold) noninvasive brain stimulation. EMG data showed a similar pattern (weaker, yes, but I’m not the authors, so I’m free to interpret the p=.07 and p=.11 interaction effects [in the theoretically predicted pattern] as meaningful). That is, EMG power and phase locking were enhanced in particular when rTMS was applied to motor cortex at the individual peak frequency. The authors interpret this finding to mean that signal propagation from the central to the peripheral motor system is dependent on beta oscillations, and proceeds most efficiently at the individual peak frequency within the beta band. Finally, coupling between EEG and EMG (cortico-spinal coherence) was observed basically exclusively for the situation where rTMS was applied at the individual peak frequency.
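Cortico-spinal coherence itself is a standard spectral measure: the magnitude-squared cross-spectrum of two signals, normalized by their individual power spectra. A hedged sketch on synthetic data – the shared 19 Hz component (a stand-in for a beta rhythm) and all parameters are invented for illustration; real EEG–EMG coherence analysis involves far more preprocessing:

```python
import numpy as np
from scipy.signal import coherence

# Two signals sharing a 19 Hz component (stand-ins for EEG and EMG),
# each with independent noise.
fs = 500
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(3)
shared = np.sin(2 * np.pi * 19 * t)
eeg = shared + rng.normal(size=t.size)
emg = 0.5 * shared + rng.normal(size=t.size)

freqs, coh = coherence(eeg, emg, fs=fs, nperseg=2 * fs)
print(freqs[np.argmax(coh)])  # coherence peaks near the shared 19 Hz
```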

These results are great news (and a lesson) for those of us interested in the role of beta oscillations in rhythm and beat perception. We can use noninvasive brain stimulation techniques like rTMS to modify beta oscillations during listening to different types of rhythmic stimuli (which area we stimulate, and whether M1 is necessarily the right target for this type of question are issues that I’m not discussing here). And then we can start to ask questions about the causal role and dynamics of beta oscillations in rhythm and beat perception. The lesson here is that blindly applying a catch-all 20-Hz beta stimulation might lead to null effects, and it wouldn’t necessarily be fair to treat those null effects as evidence of absence. Instead, this paper demonstrates that it’s important to take into account individual differences in neural oscillator properties for our manipulations to work the way we’d like them to. (And I’d argue that this is a lesson that can be extended beyond this particular study or frequency band – the more we start to understand when and why these individual differences are important, the faster we’ll be able to make gains in understanding what neural oscillations in particular frequency bands are doing for us in what situations.)

–source article: Romei, Bauer, Brooks, Economides, Penney, Thut, Driver, & Bestmann. Causal evidence that intrinsic beta-frequency is relevant for enhanced signal propagation in the motor system as shown through rhythmic TMS. NeuroImage.

Can EEG distinguish between different types of temporal predictions?

When the next economic crisis occurs, will it be just another peak in a very, very slow oscillation? Or will it be triggered by specific circumstances and preceded by warning signs? Or perhaps we will expect a crisis to happen only because it’s been long enough since the last one? And most importantly, will the particular scenario of our prediction make any difference when it comes to the dynamics of an actual crisis and the recovery from it?

In the lab, neural and perceptual temporal predictions can similarly be induced by various experimental factors, including rhythms (periodic streams of stimuli), cues (contingencies between specific events and temporal intervals), and hazards (the contextual probability of an event occurring, given recent history). But are the neural mechanisms of these predictions different? A popular explanation of the first scenario – predictions based on rhythms – is that neural systems can entrain to external rhythms and amplify the processing of stimuli occurring at expected time points. Several measures of entrainment have been used in the past, with inter-trial coherence (ITC) being one of the most popular metrics. However, just like other forms of predictions, rhythmic predictions are also linked to enhanced processing of expected stimuli, as well as several other neural signatures, such as the contingent negative variation (CNV), a slow preparatory potential preceding the expected time point, or alpha-band modulation just before the onset of an expected target.
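For reference, inter-trial coherence is simply the length of the mean resultant vector of the per-trial phase angles at a given time–frequency point. A minimal numpy sketch with synthetic phases (purely illustrative):

```python
import numpy as np

def inter_trial_coherence(phases):
    """Length of the mean resultant vector of per-trial phases.
    0 = phases uniformly spread across trials; 1 = perfect alignment."""
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

rng = np.random.default_rng(1)
aligned = rng.normal(0.0, 0.2, size=1000)        # phases clustered near 0
uniform = rng.uniform(-np.pi, np.pi, size=1000)  # no phase preference
print(inter_trial_coherence(aligned))  # close to 1
print(inter_trial_coherence(uniform))  # close to 0
```

Note that a high ITC only says that phase repeats across trials – by itself it cannot distinguish an entrained ongoing oscillation from a stereotyped evoked response.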

In this paper, Assaf Breska and Leon Deouell show impressive similarities between rhythm-based and memory-based temporal predictions in terms of their underlying neural signatures, based on EEG data. In the rhythm-based paradigm, participants viewed a rhythmic stream of stimuli, followed by a cue and a target, both following the same rhythmic pattern as the preceding stream. In the memory-based paradigm, the rhythmicity of the stream was broken, such that only every second interval had a fixed duration and the remaining intervals were random. As a result, the interval between the cue and the target could be predicted from the most frequent preceding interval, but the stream as a whole was arguably too jittered to entrain a neural oscillation. Both conditions could be run in a faster (dominant interval lasting 700 ms) or slower (1300 ms) regime, and both also contained a subset of trials in which targets were presented at an unexpected (invalid) interval.

The authors analysed four prominent neural signatures of temporal predictability: two preceding an expected target (the CNV and alpha-band modulation), one around the time of target onset (delta-band phase coherence), and one following target presentation (the latency of the P300 component). Crucially, none of them showed significant differences between the two paradigms. In other words, rhythm-based and memory-based temporal expectations produced strikingly similar neural correlates of target anticipation and processing. There was, however, one exception: when a target was expected but omitted, the CNV bounced back to baseline immediately after the omission in the rhythmic paradigm, whereas in the memory-based paradigm it took almost 400 ms more for the signal to start returning to baseline. One can interpret this finding, as the authors do, in at least two ways. On the one hand, rhythm-based predictions are likely more precise, so the CNV can return to baseline as soon as the system “realises” that its expectation was violated. On the other hand, a fast return to baseline might reflect the more automatic nature of rhythmic predictions, as opposed to a more flexible allocation of resources in memory-based prediction, which might result in a prolonged state of readiness for the omitted (possibly delayed) stimulus.

As one reads through the results section of the paper, these analyses seem to suggest that the neural mechanisms underlying rhythmic and memory-based predictions are largely identical. Regarding the similarity of delta-band phase coherence between the two paradigms, one could even potentially conclude that there is just as much (or little) entrainment in rhythmic as in non-rhythmic temporal expectations. However, this is not the correct conclusion, as noted by the authors. What this paper does show is that the ITC is not a sensitive measure of entrainment. In other words, simply looking at low-frequency phase locking does not allow a differentiation between conditions in which one would expect a different level of low-frequency entrainment.

However, I wonder whether – based on their data – the authors could not have focused on this point a bit more, either showing why the metric is not sensitive or suggesting a better alternative. First of all, I missed a plot showing actual neural entrainment to the streams. Given that the paradigms included both faster (1.42 Hz) and slower (0.77 Hz) regimes, which were not harmonically related, one could quantify differences in entrainment to these specific frequencies between the two regimes. Second, we know that “significant entrainment” might be an artefact of rhythmic evoked potentials, and we also know what neural signatures to expect from data showing true low-frequency entrainment. In this case, we don’t know whether delta-band (here 0.5–3 Hz) phase estimates around target onsets were contaminated by ERPs evoked by the targets. For example, while the authors show that delta-band phase correlates with reaction times, a similar correlation might have been expected between ERP amplitude or latency and behaviour. Again, it would be nice to see whether different conditions show phase concentration (and possibly a link with behaviour) in slightly different frequency bands, as suggested by the authors’ oscillatory entrainment model. Finally, I was left wondering whether the difference in CNV resolution time between rhythm-based and memory-based predictions could be picked up by the ITC metric, and if so, whether future research should not indeed concentrate on “resonance” effects (i.e., the persistence of an oscillation after the interruption of external stimulation) as a cleaner metric of rhythmic entrainment.
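Quantifying entrainment at the two stimulation rates is straightforward in principle: compare spectral amplitude at 1/0.7 s ≈ 1.43 Hz and 1/1.3 s ≈ 0.77 Hz between the regimes. A toy sketch with an invented signal “entrained” only at the fast rate (all parameters are illustrative; a real analysis would also have to rule out evoked-response confounds):

```python
import numpy as np

def amplitude_at(signal, fs, f_target):
    """Amplitude of the DFT component nearest f_target (the recording
    length here holds an integer number of target cycles)."""
    spectrum = np.fft.rfft(signal) / signal.size
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    return 2 * np.abs(spectrum[np.argmin(np.abs(freqs - f_target))])

# 70 s at 250 Hz: exactly 100 cycles of the 0.7 s (fast-regime) period.
fs = 250
t = np.arange(0, 70, 1 / fs)
sig = (0.5 * np.sin(2 * np.pi * (1 / 0.7) * t)
       + np.random.default_rng(2).normal(size=t.size))
print(amplitude_at(sig, fs, 1 / 0.7))  # ~0.5: strong at the fast rate
print(amplitude_at(sig, fs, 1 / 1.3))  # near 0: nothing at the slow rate
```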

Nevertheless, the paper convincingly shows that a significant difference in low-frequency inter-trial coherence around stimulus onset between a purely rhythmic and a purely random stream does not constitute strong evidence for neural entrainment by external rhythms. And while the main conclusion here is methodological, the paper does raise the question of the extent to which different experimental manipulations of temporal prediction rely on qualitatively different neural mechanisms. While recent TMS work does suggest that different networks are involved in rhythm processing and other forms of temporal orienting, most measures – including perceptual sensitivity, fMRI neuroimaging, and our standard EEG measures – might not be sensitive to differences between types of temporal predictions.

Ryszard Auksztulewicz, Oxford Centre for Human Brain Activity 

Source article: Breska A, Deouell LY (2017) Neural mechanisms of rhythm-based temporal prediction: Delta phase-locking reflects temporal predictability but not rhythmic entrainment. PLOS Biol, February 10, doi: 10.1371/journal.pbio.2001665.

Does temporal binding involve a slow-down of the pacemaker?

Temporal binding is a phenomenon whereby the interval between an action and its outcome appears subjectively shorter than it really is. Much of the research into temporal binding has focused on whether the initial action must be self-generated, or whether any event perceived as “causal” or “intentional” is sufficient to compress the interval between the action and its corresponding effect. Temporal binding has clear relevance to timing and time perception. For example, self-initiated intervals are perceived as shorter than non-self-initiated intervals, in both duration judgment and duration reproduction. Despite this, temporal binding has most frequently been used as a measure of agency, with a larger effect (shorter perceived durations) taken as a proxy for higher perceived agency.

However, at least one study has explicitly associated temporal binding with the speed of a hypothetical biological pacemaker. Wenke and Haggard (2009) used an elegant paradigm to test whether the speed of the pacemaker is affected by (or even underlies) temporal binding. First, they used a standard temporal binding paradigm in which participants either actively pressed a button that produced a delayed tone, or “passively” had their finger forced to depress the button, also leading to a tone. In agreement with the canonical temporal binding phenomenon, the intervals in the active condition were perceived as significantly shorter than those in the passive condition. The critical innovation of the experiment was to nest a sensory discrimination procedure within the interval between the action and the tone: sequential cutaneous shocks delivered a short time apart, calibrated to each participant’s individual discrimination threshold.

The researchers found that participants’ ability to discriminate the two shocks was significantly impaired early in the interval (in the active condition), demonstrating that temporal sensitivity was lower when temporal binding occurred. The implication is that the rate of perceptual sampling was slower, and hence that any universal pacemaker driving this sampling was also slower. However, it remains an open question whether differences in time perception are actually associated with differences in the rate of perceptual sampling: some researchers argue that duration distortions result from retrospective memory processes, while others have shown that information processing is enhanced when time is dilated. Overall, the results of this study appear to support the idea that pacemaker slowing could occur during temporal binding.

However, a new paper by Fereday and Buehner counters the claim that pacemaker rate is altered in temporal binding. In their experimental design, they simply nested an additional stimulus within the action/outcome interval and asked participants to estimate the duration of that stimulus. Over a range of stimulus types and modalities, they showed that the perceived durations of these nested stimuli were unaffected, despite recreating the classic temporal binding effect. This suggests two alternative possibilities. First, temporal binding may result from a retrospective, post-hoc recalibration of the interval between the action and the outcome, which does not affect interceding events. Second, the timing of different stimuli may be governed by their own dedicated, independent pacemakers.

(An interesting extension to this study would be to observe whether temporal binding can occur during temporal binding, by nesting an action/outcome interval within an action/outcome interval. What about three nested action/outcome intervals? Presumably this mirrors the complex perception of causality in the real world: temporal binding all the way down!)

Time perception is integral to our notion of causality (and, by extension, learning and inference). Our perception of causality appears in turn to affect our experience of time: causally related events are estimated as being closer in time, even on the scale of months or years. Why should this be the case? If our perception of time is purely a function of the perceived causality in the world, what implications does this have? Given that research into temporal binding brings us closer to understanding the perception of both causality and time, as well as the bidirectional relationship between the two, this research agenda holds considerable value for understanding the fundamentals of cognition.

Source paper:

Fereday, R., & Buehner, M. J. (2017). Temporal Binding and Internal Clocks: No Evidence for General Pacemaker Slowing. Journal of Experimental Psychology: Human Perception and Performance.