Causal evidence that intrinsic beta-frequency is relevant for enhanced signal propagation in the motor system as shown through rhythmic TMS

Evidence is accumulating that beta-band neural oscillations (~13–30 Hz) are related to temporal prediction in the context of auditory rhythm perception. Since beta oscillations are faster than any musical rhythm we’d care about (rhythms live more in the 1–5 Hz range), they can’t phase-lock to the temporal structure of auditory rhythms. Instead, fluctuations in beta power synchronize with auditory rhythms. For example, while listening to an isochronous tone sequence (think: a metronome), beta oscillations weaken, or desynchronize (or both), after each tone, but then strengthen, or resynchronize (or both), in anticipation of the next tone. This pattern scales with tempo: the faster the tone sequence goes, the faster beta power fluctuates. And if you take away the temporal structure by randomizing the inter-tone intervals, the patterned beta-power fluctuations go away. Beta power also differs for individual tones that are imagined as emphasized versus those that are not, suggesting a role in beat/meter perception. Beta oscillations are often linked to the motor system, and they become pathological in Parkinson’s disease, which is meaningful because Parkinson’s patients (in addition to having well-described motor problems) have trouble discriminating rhythms with a regular beat.

The motor system (including the basal ganglia) is thought to be important for rhythm and beat perception. Given the tight association between the motor system and beta-band neural oscillations, one interesting possibility is to interfere with beta oscillations using non-invasive brain stimulation in a way that would be predicted to disrupt (or enhance) rhythm and beat perception. Which brings me to a recent paper by Romei et al., which actually has nothing to do with rhythm perception (but potentially opens a lot of doors for those of us who are interested in the topic).

The authors first measured the individual peak beta frequency for each participant during finger tapping (this by itself is very cool, as relatively few papers investigate what individual differences in neural oscillator properties actually mean). Then, they applied rhythmic transcranial magnetic stimulation (rTMS) to left M1. The critical thing is that rTMS was applied at the individual peak frequency, or at higher and lower frequencies that still fell within the beta range (±3 Hz, ±6 Hz). Simultaneously, both EEG (electroencephalography) and EMG (electromyography) were measured (the latter from the right hand).
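(For readers who want to try something similar: an individual peak beta frequency can be estimated from the power spectrum of motor-cortical EEG recorded during tapping. Here is a minimal sketch of that step; the channel choice, window length, and band edges are my assumptions, not the authors’ pipeline.)

```python
import numpy as np
from scipy.signal import welch

def peak_beta_frequency(eeg, fs, band=(13.0, 30.0)):
    """Return the frequency (Hz) with maximal power in the beta band.

    eeg : 1-D array from a motor channel (e.g., C3); fs : sampling rate in Hz.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2-s windows -> 0.5 Hz resolution
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band][np.argmax(psd[in_band])]

# Quick check on simulated data with a 19-Hz rhythm buried in noise:
fs = 1000
t = np.arange(0, 60, 1 / fs)
eeg = np.sin(2 * np.pi * 19 * t) + 2 * np.random.randn(t.size)
print(peak_beta_frequency(eeg, fs))  # ~19.0
```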

Cortical beta oscillations measured by EEG were stronger (power) and more synchronized with the rTMS (phase locking) when the rTMS matched the individual peak beta frequency (less power and less synchronization for off-best-frequency rTMS, and even less for sham stimulation). I interpret this to mean that the individual peak beta frequency reflects the resonance frequency of a neural oscillator, which can be enhanced by even weak (sub-threshold) noninvasive brain stimulation. EMG data showed a similar pattern (weaker, yes, but I’m not the authors, so I’m free to interpret the p=.07 and p=.11 interaction effects [in the theoretically predicted pattern] as meaningful). That is, EMG power and phase locking were enhanced in particular when rTMS was applied to motor cortex at the individual peak frequency. The authors interpret this finding to mean that signal propagation from the central to the peripheral motor system depends on beta oscillations, and proceeds most efficiently at the individual peak frequency within the beta band. Finally, coupling between EEG and EMG (cortico-spinal coherence) was observed essentially only when rTMS was applied at the individual peak frequency.
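(The phase-locking part is easy to make concrete. One standard way to quantify phase coupling between two signals – say, EEG and EMG – at the stimulation frequency is the phase-locking value; the sketch below is illustrative only, and the filter settings are my assumptions rather than the authors’ actual coherence analysis.)

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, f0, half_bw=2.0):
    """Phase-locking value between x and y in the band f0 +/- half_bw Hz."""
    b, a = butter(4, [(f0 - half_bw) / (fs / 2), (f0 + half_bw) / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))  # instantaneous phase of x
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))  # instantaneous phase of y
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))  # 1 = locked, ~0 = none

# e.g., plv(eeg_channel, emg_channel, fs=1000, f0=19) for a 19-Hz peak frequency
```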

These results are great news (and a lesson) for those of us interested in the role of beta oscillations in rhythm and beat perception. We can use noninvasive brain stimulation techniques like rTMS to modify beta oscillations while people listen to different types of rhythmic stimuli (which area we stimulate, and whether M1 is necessarily the right target for this type of question, are issues that I’m not discussing here). And then we can start to ask questions about the causal role and dynamics of beta oscillations in rhythm and beat perception. The lesson here is that blindly applying a catch-all 20-Hz beta stimulation might lead to null effects, and it wouldn’t necessarily be fair to treat those null effects as evidence of absence. Instead, this paper demonstrates that it’s important to take individual differences in neural oscillator properties into account for our manipulations to work the way we’d like them to. (And I’d argue that this is a lesson that can be extended beyond this particular study or frequency band – the more we understand about when and why these individual differences matter, the faster we’ll make gains in understanding what neural oscillations in particular frequency bands are doing for us in what situations.)

–source article: Romei, Bauer, Brooks, Economides, Penny, Thut, Driver, & Bestmann (2016). Causal evidence that intrinsic beta-frequency is relevant for enhanced signal propagation in the motor system as shown through rhythmic TMS. NeuroImage.

Can EEG distinguish between different types of temporal predictions?

When the next economic crisis occurs, will it be just another peak in a very, very slow oscillation? Or will it be triggered by specific circumstances and preceded by warning signs? Or perhaps we will expect a crisis to happen only because it’s been long enough since the last one? And most importantly, will the particular scenario of our prediction make any difference when it comes to the dynamics of an actual crisis and the recovery from it?

In the lab, neural and perceptual temporal predictions can similarly be induced by various experimental factors, including rhythms (periodic streams of stimuli), cues (contingencies between specific events and temporal intervals), and hazards (the contextual probability of an event occurring, given recent history). But are the neural mechanisms of these predictions different? A popular explanation of the first scenario – predictions based on rhythms – is that neural systems can entrain to external rhythms and amplify the processing of stimuli occurring at expected time points. Several measures of entrainment have been used in the past, with inter-trial coherence (ITC) being one of the most popular metrics. However, just like other forms of prediction, rhythmic predictions are also linked to enhanced processing of expected stimuli, as well as to several other neural signatures, such as the contingent negative variation (CNV), a slow preparatory potential preceding the expected time point, or alpha-band modulation just before the onset of an expected target.
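(For concreteness, ITC at a given time-frequency point is simply the length of the mean resultant vector of single-trial phases: 1 when trials are perfectly phase-aligned, near 0 when phases are random. A minimal sketch, assuming the single-trial phases – e.g., from wavelet convolution at a delta frequency – have already been extracted:)

```python
import numpy as np

def itc(phases):
    """Inter-trial coherence for an array of single-trial phases (radians)."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

print(itc(np.zeros(100)))                          # identical phases -> 1.0
print(itc(np.random.uniform(-np.pi, np.pi, 100)))  # random phases -> ~0.1
```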

In this paper, Assaf Breska and Leon Deouell show impressive similarities between rhythm-based and memory-based temporal predictions in terms of their underlying neural signatures, based on EEG data. In the rhythm-based paradigm, participants viewed a rhythmic stream of stimuli, followed by a cue and a target – both according to the same rhythmic pattern as the preceding stream. In the memory-based paradigm, the rhythmicity of the stream was broken, such that only every second interval had a fixed duration, and the remaining intervals were random. As a result, the interval between the cue and the target could be predicted based on the most frequent preceding interval, but the whole stream would arguably be too jittered to entrain a neural oscillation. Both conditions could be in a faster (dominant interval lasting 700 ms) or slower (1300 ms) regime, and both also contained a subset of trials in which targets were presented at an unexpected (invalid) interval.

The authors analysed four prominent neural signatures of temporal predictability: two preceding an expected target (the CNV and alpha-band modulation), one around the time of target onset (delta-band phase coherence), and one following target presentation (the latency of the P300 component). Crucially, none of them showed significant differences between the two paradigms. In other words, rhythm-based and memory-based temporal expectations produced strikingly similar neural correlates of target anticipation and processing. However, there was one exception: when a target was expected but omitted, in the rhythmic paradigm the CNV bounced back to baseline immediately after the omission, but in the memory-based paradigm it took almost 400 ms more for the signal to start returning to baseline. One can interpret this finding, as the authors do, in at least a couple of ways. On the one hand, rhythm-based predictions are likely more precise, so the CNV can return to baseline as soon as the system “realises” that its expectation was violated. On the other hand, a fast return to baseline might reflect the more automatic nature of rhythmic predictions, as opposed to a more flexible allocation of resources in memory-based prediction, which might result in a prolonged state of readiness for the omitted (possibly delayed) stimulus.

As one reads through the results section of the paper, these analyses seem to suggest that the neural mechanisms underlying rhythmic and memory-based predictions are largely identical. Regarding the similarity of delta-band phase coherence between the two paradigms, one could even potentially conclude that there is just as much (or little) entrainment in rhythmic as in non-rhythmic temporal expectations. However, this is not the correct conclusion, as noted by the authors. What this paper does show is that the ITC is not a sensitive measure of entrainment. In other words, simply looking at low-frequency phase locking does not allow a differentiation between conditions in which one would expect a different level of low-frequency entrainment.

However, I wonder whether – based on their data – the authors could not have focused on this point a bit more, either showing why the metric is not sensitive or suggesting a better alternative. First of all, I missed a plot showing actual neural entrainment to the streams. Given that the paradigms included both faster (1.42 Hz) and slower (0.77 Hz) regimes, which were not harmonically related, one could quantify differences in entrainment to these specific frequencies between the two regimes. Second, we know that “significant entrainment” might be an artefact of rhythmic evoked potentials, and we also know what neural signatures we might expect from data showing true low-frequency entrainment. In this case, we don’t know whether delta-band (here 0.5–3 Hz) phase estimates around target onsets were contaminated by ERPs evoked by the targets. For example, while the authors show that delta-band phase correlates with reaction times, a similar correlation might have been expected between ERP amplitude or latency and behaviour. Again, it would be nice to see whether different conditions show phase concentration (and possibly a link with behaviour) in slightly different frequency bands, as suggested by the authors’ oscillatory entrainment model. Finally, I was left wondering whether the difference in CNV resolution time between rhythm-based and memory-based predictions could be picked up by the ITC metric, and if so, whether future research should not indeed concentrate on “resonance” effects (i.e., the persistence of an oscillation after the interruption of external stimulation) as a cleaner metric of rhythmic entrainment.

Nevertheless, the paper convincingly shows that a significant difference in low-frequency inter-trial coherence around stimulus onset between a purely rhythmic and a purely random stream does not constitute strong evidence for neural entrainment by external rhythms. And while the main conclusion here is methodological, the paper does raise the question of the extent to which different experimental manipulations of temporal prediction rely on qualitatively different neural mechanisms. While recent TMS work does suggest that different networks are involved in rhythm processing and other forms of temporal orienting, most measures – including perceptual sensitivity, fMRI neuroimaging, and our standard EEG measures – might not be sensitive enough to distinguish between different types of temporal predictions.

Ryszard Auksztulewicz, Oxford Centre for Human Brain Activity 

Source article: Breska A, Deouell LY (2017) Neural mechanisms of rhythm-based temporal prediction: Delta phase-locking reflects temporal predictability but not rhythmic entrainment. PLOS Biol, February 10, doi: 10.1371/journal.pbio.2001665.

Does temporal binding involve a slow-down of the pacemaker?

Temporal binding is a phenomenon whereby the interval between an action and its outcome appears subjectively shorter than it really is. Much of the research into temporal binding has focused on whether the initial action must be self-generated, or whether any event perceived as “causal” or “intentional” is sufficient to compress the interval between the action and its corresponding effect. Temporal binding has clear relevance to timing and time perception. For example, self-initiated intervals are perceived as shorter than non-self-initiated intervals, in both duration judgment and duration reproduction. Despite this, temporal binding has most frequently been used as a measure of agency, with a larger effect (shorter perceived durations) being taken as a proxy for greater perceived agency.

However, at least one study has explicitly associated temporal binding with the speed of a hypothetical biological pacemaker. Wenke and Haggard (2009) used an elegant paradigm to test whether the speed of the pacemaker was affected by (or even underlies) temporal binding. Firstly, they used a standard temporal binding paradigm in which participants either actively pressed a button, which resulted in a delayed tone, or “passively” had their finger forced to depress the button, also leading to a tone. In agreement with the canonical temporal binding phenomenon, the intervals in the active condition were perceived as significantly shorter than those in the passive condition. The critical innovation of the experiment was to nest a sensory discrimination procedure in the interval between the action and the tone. This involved sequential cutaneous shocks delivered a short time apart, calibrated to participants’ individual discrimination thresholds.

The researchers found that participants’ ability to discriminate the two shocks was significantly impaired early in the interval (in the active condition), demonstrating that their temporal sensitivity was lower when temporal binding occurred. The implication is that the rate of perceptual sampling was slower, and that any universal pacemaker driving this sampling was therefore also slower. However, it’s an open question whether differences in time perception are actually associated with differences in the rate of perceptual sampling. Some researchers argue that duration distortions are a result of retrospective memory processes, while others have shown that information processing is enhanced when time is dilated. Overall, the results of this study appear to support the idea that pacemaker slowing could occur during temporal binding.

However, a new paper by Fereday and Buehner counters the claim that pacemaker rate is altered in temporal binding. In their experimental design, they simply nested an additional stimulus within the action/outcome interval and asked participants to estimate the duration of that stimulus. Over a range of stimulus types and modalities, they showed that the perceived durations of these nested stimuli were unaffected, even though the classic temporal binding effect was recreated. This suggests two alternative possibilities. Firstly, temporal binding may be the result of a retrospective, post-hoc recalibration of the interval between the action and outcome, one which does not affect intervening events. Secondly, the timing of different stimuli may be governed by their own dedicated and independent pacemakers.

(An interesting extension to this study would be to observe whether temporal binding can occur during temporal binding, by nesting an action/outcome interval within an action/outcome interval. What about three nested action/outcome intervals? Presumably this mirrors the complex perception of causality in the real world: temporal binding all the way down!)

Time perception is integral to our notion of causality (and by extension, learning and inference). Our perception of causality appears to also impact our experience of time: causally related events are estimated as being closer in time, even on the scale of months or years. Why should this be the case? If our perception of time is purely a function of the perceived causality in the world, what implications does this have? Given that research into temporal binding brings us closer to an understanding of the perception of both causality and time, as well as of the bidirectional relationship between the two, this research agenda holds considerable value for understanding the fundamentals of cognition.


Source paper:

Fereday, R., & Buehner, M. J. (2017). Temporal Binding and Internal Clocks: No Evidence for General Pacemaker Slowing. Journal of Experimental Psychology: Human Perception and Performance. http://doi.org/10.1037/xhp0000370

Causal evidence for the right TPJ in temporal attention

Attention involves selecting a subset of the environment to undergo more elaborate processing in the brain. To respond appropriately to events in the world, one must orient attention not only in space but also in time. As it turns out, the brain regions most clearly implicated in spatial attention – the parietal lobes – are also thought to be involved in temporal attention, particularly ventral parietal regions such as the temporo-parietal junction (TPJ).

A recent study reported in the Journal of Cognitive Neuroscience provides further causal evidence in support of the view that the right TPJ in particular is dominant for temporal processing. The study utilized a novel simultaneity judgement task in two experiments: one involving patients with lesions to the TPJ, and one involving healthy participants who had inhibitory TMS delivered to the TPJ. Participants were presented with 4 flashing discs (alternating uniformly between black and white) for 3 seconds, positioned at the corners of an invisible square. On each trial, one disc was randomly selected to flash in counterphase to the other 3 discs (i.e., the oddball disc was white when the other discs were black). Prior to target onset, either the left or the right pair of discs was cued, and participants were asked to judge whether the cued pair flashed synchronously.

A staircase procedure showed that healthy controls and patients with damage to the left TPJ could perform the simultaneity judgement at 80% accuracy when the flash rate for the array of items was approximately 9 Hz. In contrast, average flash thresholds for right TPJ patients were markedly worse, with the 80% threshold observed when the flash rate was approximately 4 Hz.

The follow-up experiment, involving transcranial magnetic stimulation (TMS) in healthy controls, showed a similar pattern of results. In this experiment, inhibitory 1 Hz TMS was applied for 20 minutes to the left TPJ, the right TPJ, or early visual cortex. Simultaneity thresholds after TMS were worse (compared to pre-stimulation thresholds) only when the right TPJ was inhibited. Thresholds did not differ from baseline after inhibition of the left TPJ, and inhibition of early visual cortex produced a slight improvement in flash thresholds.

By combining lesion and TMS methods, the results of the study provide convincing causal evidence that the TPJ is involved in temporal attention. Brain-imaging studies have previously reported TPJ activation during simultaneity tasks; however, imaging studies are correlational and cannot say anything about the causal role of the TPJ in these processes. Indeed, the inclusion of TMS is important, since many of the patients who participated in the study had very large lesions, whereas the effect of TMS is comparatively more focal.

However, the evidence that the right TPJ is dominant for temporal attention is somewhat ambiguous. It is difficult to tell from the data presented in the paper whether the extent of the brain damage observed in the left and right TPJ patients was the same. Moreover, the results of the TMS experiment did not provide strong evidence for the dominance of the right TPJ in temporal processing. Although inhibiting the right TPJ impaired temporal processing (compared to baseline), the magnitude of the impairment was not significantly different from the impairment observed for the left TPJ (though there was a trend toward significance). Strictly speaking, then, the results of the TMS study do not provide firm support for the claim of selectivity. Nevertheless, the present paper adds to a growing number of studies examining the neural correlates of time perception using techniques that infer causality (TMS, tDCS, etc.) in a field that is largely dominated by brain imaging techniques.


Bronson Harry

The MARCS Institute, Western Sydney University

Twitter | ResearchGate | Web

Entrainment to an auditory signal: Is attention involved?

Behaviorally relevant environmental stimuli are often characterized by some degree of temporal regularity. Dynamic attending theory provides a framework for explaining how perception of stimulus events is affected by the temporal context within which they occur. At its core, dynamic attending theory (as the name suggests) is a theory about attention – the key insight is that attention is not constant over time, but waxes and wanes with time’s passing, and can become coupled to the temporal structure of environmental stimuli. A wealth of empirical data supports this basic proposition, demonstrating that the detection and discrimination of, as well as response times to, target stimulus events differ based on how the target event relates to its temporal context. For example, the timing or pitch of an event is judged more accurately when that event happens “on time” with respect to a context rhythm, compared to when that same event happens at an unexpected time.

Extrapolating these psychophysical data, a new paper by Kunert and Jongman tests whether effects of temporal context might be observed for memory.

Very briefly, the paradigm involved listening to an 8-tone sequence in which one tone is higher than the others. Dutch pseudowords were presented at different phases of the 8-tone sequence (either at tone 4 or tone 8), and participants either gave a speeded lexical decision (Exp. 1a), were asked to remember the words for a later recall test (Exp. 2a), or both (Exp. 2b). Although participants’ lexical decisions were significantly faster for tone-8 words than for tone-4 words, memory for words presented at those positions did not differ.

There are a number of confusing aspects of this manuscript, but I’d like to focus on one. The authors ask the following:

“How does the brain react to rhythmic auditory input? Does it increase general attention at moments of rhythmic salience as predicted by the most-widely adopted interpretation of dynamic attending theory (DAT)?”;

…and they suggest that their results “raise[s] the question of what the “attentional energy” postulated by the DAT actually represents in terms of neurophysiology and/or psychology”.

The reason this is confusing to me is that there’s a growing neurophysiological literature on entrainment and its effects on perception. And a number of authors, including Mari Jones and Ed Large, for example, have proposed that neural oscillations are the correlates of “attentional energy” (see here for a shameless self-promotion of my own views on the matter). If this is true, then fluctuations in “attentional energy” are fluctuations in neuronal excitability. What this means is, I think, easiest to understand from the perspective of a single neuron, which is more likely to fire during a period of high excitability and less likely to fire during a period of low excitability. When fluctuations in neuronal excitability become entrained by a stimulus rhythm, high-excitability periods align with future events, improving neuronal responsiveness and perception of that event (or decreasing responsiveness between events, as the case may be).

It seems to me that what the authors are getting at is the question of how widely neural oscillations might be entrained by a modality-specific rhythm. That is, will an auditory rhythm entrain oscillations in brain regions responsible for pseudoword encoding? And this question has certainly not been definitively answered. There is evidence that neural oscillations in sensory cortices are entrained by rhythms presented in their favored and sometimes nonfavored modality (audition, vision, somatosensation), and there is evidence that auditory rhythms can entrain specific populations of tonotopically tuned cells. So I think we have some idea about how specific entrainment can be, but we know less well how general entrainment can be. Do auditory rhythms entrain neural oscillations in motor regions, as many of us interested in rhythm and beat perception believe? What about brain regions responsible for memory encoding? It’s really the answer to this question that determines whether entrainment effects on memory are going to be observable, and paradigms designed to answer it may be more informative when they are developed in the context of what we know about the brain. For example, entraining neural oscillations in relevant brain regions into the right phase relationship with noninvasive brain stimulation does improve memory encoding – the relevant question is why we would expect an auditory rhythm to do the same.

– source article: Kunert & Jongman. Entrainment to an auditory signal: Is attention involved? JEP: General.


Neural encoding of time: the striatum vs prefrontal cortex

The neural mechanisms of time encoding are still controversial. According to one prominent hypothesis, time is encoded in local network dynamics – see a previous blog post dedicated to this issue. However, similar mechanisms (“population clocks”) have been linked to multiple areas across the brain, including the striatum, prefrontal and parietal cortices, and the hippocampus. Does this variety of brain regions reflect a specialisation of each area to track time at, e.g., a different scale, or is time encoded in parallel in several regions?

To answer this question, Bakhurin et al. quantified and compared the degree of time encoding in two areas: the striatum and the orbitofrontal cortex (OFC). They acquired electrophysiological recordings in mice conditioned to receive a food reward (condensed milk) after a specific interval (2.5 s) following an olfactory cue. Activity in both regions was measured simultaneously in 6 animals; in 5 further animals, only activity in the striatum was recorded. Spike sorting was used to isolate activity in single neurons (pyramidal cells in the OFC and medium spiny neurons in the striatum). To quantify the degree to which each region tracks time, the authors used multivariate decoding – a multi-class support vector machine classifier, based on the firing rates of multiple units – to estimate the elapsed time from neural activity. Ideally, feeding the decoder data acquired, e.g., 1 s after the olfactory cue would result in a correct estimate that 1 s has elapsed since the cue. Using this technique, one can quantify whether neural activity in a given area is a better predictor of the actually elapsed time than neural activity in another area.
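(The decoding logic is straightforward to sketch, and a toy version makes the chance level explicit. This is not the authors’ code – the bin count, tuning model, and classifier settings are invented for illustration – but it is the same idea: label each post-cue time bin, train a multi-class linear SVM on population firing-rate vectors, and check whether cross-validated accuracy beats chance.)

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_bins, n_neurons = 50, 10, 60  # e.g., 10 bins spanning the 2.5-s interval

# Simulate neurons weakly tuned to a preferred elapsed-time bin:
pref = rng.integers(0, n_bins, n_neurons)
y = np.tile(np.arange(n_bins), n_trials)  # elapsed-time bin label for each sample
X = np.array([rng.poisson(2 + 5 * np.exp(-0.5 * ((pref - b) / 1.5) ** 2)) for b in y])

# Cross-validated accuracy well above chance (1/n_bins = 0.1) indicates time encoding:
print(cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())
```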

The results of this and several control analyses show that time can be decoded with higher fidelity from striatal activity than from prefrontal activity. This pattern of results – the striatum outperforming the OFC as a neural basis for decoding time – was robust and did not qualitatively change when using more or fewer neurons in each area; when selecting units in the dorsal or ventral striatum, or in the medial or lateral OFC; or when controlling for motor activity (animals licking in anticipation of the reward). These findings are interpreted by the authors in terms of the striatum providing a refined readout of upstream cortical activity. Thus, the striatum might outperform the OFC in encoding time per se. However, as the authors also note, neural activity in the OFC has a higher dimensionality than in the striatum (i.e., more principal components are needed to explain its variability). This might be due to the OFC encoding more task variables than the striatum, as suggested by the authors; however, it could also be explained by higher anatomical or physiological variability, or a lower signal-to-noise ratio, in the OFC. Thus, it would have been beneficial for the study to include a task variable – perhaps reward accumulation over several trials – for which prefrontal activity would plausibly yield better decoding than striatal activity.

While the study shows differences in decoding performance between the two regions, it rarely addresses the question of whether time-encoding mechanisms are qualitatively similar or distinct between the two regions. The one finding that does suggest differences in how time is encoded by the two regions shows that motor responses distort time encoding more in the striatum than in the OFC. Specifically, training the decoder on trials in which animals displayed licking behaviour early on (first tercile) or relatively late (third tercile) induced systematic biases when the decoder was tested on the remaining trials (second tercile). Thus, in the striatum, motor responses seem to warp time encoding in opposite directions: early motor responses sped up estimated time, while late motor responses induced delays in estimated time*. These effects were less pronounced in the OFC. In fact, early prefrontal activity seemed to be especially robust to any interference from motor responses.

Taken together, the paper shows that decoding elapsed time is overall more accurate based on striatal activity than on prefrontal activity – however, why this is the case remains an open question. On the other hand, striatal time-encoding activity might to some extent covary with motor-encoding activity. This co-dependency of time and motor encoding is weaker in the prefrontal cortex, suggesting intriguing qualitative dissociations between the neural mechanisms of time encoding in different regions. Previously, decoding based on different data modalities (MEG and fMRI) was used to find correlations and dissociations between decoding-enabling data features (e.g., early response latencies in the MEG and sensory regions in the fMRI). Perhaps future studies could use a similar approach to test whether time representations in one brain region generalise to another region, suggesting shared mechanisms, or whether time encoding is subserved by neural mechanisms unique to each region.

Ryszard Auksztulewicz, Oxford Centre for Human Brain Activity 

Source article: Bakhurin KI, Goudar V, Shobe JL, Claar LD, Buonomano DV, Masmanidis SC (2016) Differential encoding of time by prefrontal and striatal network dynamics. J Neurosci, December 15, 1789-16. doi: 10.1523/JNEUROSCI.1789-16.2016

* In my original post, based on the published article, the sentence stated the opposite: “early motor responses induce delays in estimated time, while late motor responses speed up estimated time”. However, the authors have asked me to correct this sentence according to their original intention, and have requested a correction in the journal article.

Image Contrast Influences Perceived Duration

The majority of studies in time perception use visual objects like faces (emotional, non-emotional), geometrical figures, scenes, numbers, etc. as stimuli. Although these complex images have been shown to influence time perception (addressing different questions), the role of the basic perceptual features (like contrast) that constitute these complex images is rarely studied. A recent study by Christopher Benton and Annabelle Redfern, published in Frontiers in Psychology, investigated the role of contrast in perceived duration.

They based their hypothesis on two lines of research: first, adaptation studies showing duration compression explained by an adaptation-related, stimulus-specific reduction in neural activity in early visual areas; and second, studies showing an increase in neural activity in early visual areas with increasing contrast. From these studies they hypothesized that if an adaptation-related decrease in neural activity in early visual areas is linked to a decrease in perceived duration, then a contrast-based increase in neural activity in early visual areas should lead to an increase in perceived duration.

To test this hypothesis, they used dynamic spatial noise patterns with three levels of contrast (0.1, 0.3, and 0.9) as stimuli. Each noise element in the pattern changed its luminance sinusoidally at a temporal frequency of 4 Hz. Two types of spatial filters (a circular Gaussian envelope or a circular aperture) were used to generate stimulus patches with either a gradient boundary or a sharp boundary. The sharp-boundary circular patch acted as a size-control stimulus, since with a gradient boundary the perceived size might also change with contrast, confounding the timing results.

To measure the effect of contrast on perceived duration, they used an adaptive match-to-standard procedure. In each trial, participants saw a standard stimulus (contrast 0.3) for 600 ms, followed by a match (test) stimulus (contrast 0.1 or 0.9) displayed for a duration between 125 ms and 3000 ms chosen by the adaptive procedure. Participants reported which of the two appeared longer in duration. Gradient-boundary and sharp-boundary stimuli were used in separate blocks. Results indicated that participants perceived the high-contrast stimuli as longer in duration than the low-contrast stimuli, irrespective of boundary type.
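(The adaptive logic can be sketched, very roughly, as a staircase converging on the duration at which the match appears equal to the standard; the paper’s actual procedure may differ, and the step size, limits, and simulated observer below are my assumptions.)

```python
import numpy as np

def staircase(judge, start_ms=1200.0, step_ms=100.0, n_trials=40):
    """judge(match_ms) -> True if the observer calls the match 'longer'.
    Returns the trial-by-trial match durations; the tail approximates the
    point of subjective equality (PSE) with the 600-ms standard."""
    match, history = start_ms, []
    for _ in range(n_trials):
        match += -step_ms if judge(match) else step_ms  # shorten after 'longer', else lengthen
        match = float(np.clip(match, 125.0, 3000.0))
        history.append(match)
    return history

# Simulated observer for whom the (high-contrast) standard feels like ~660 ms:
judge = lambda m: m + 30 * np.random.randn() > 660
print(np.mean(staircase(judge)[-10:]))  # hovers around ~660
```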

Based on previous studies linking contrast with perceived temporal frequency, one might argue that the above results were not due to contrast influencing duration directly, but rather to contrast influencing perceived temporal frequency, which in turn influenced duration. To control for this, they performed another experiment in which they first found the temporal frequency threshold for low and high contrast for each participant, and then used these individual-specific temporal frequencies to test the effect of contrast on perceived duration. In this experiment, even after controlling for temporal frequency change, results showed that perceived duration increased with contrast.

The authors suspected that this contrast-driven increase in perceived duration might not be due entirely to sustained neural activity, but could also be explained by assuming a fixed neural-activity threshold for detecting stimulus onset and offset. In that scenario, contrast-driven changes in perceived duration would result from differences in perceived onset and offset timing rather than from contrast-driven sustained activity in early visual areas. To investigate this, they designed a third experiment using the method of constant stimuli and tested the effect of contrast (0.1 and 0.9) on onset and offset perception. They found that contrast does influence the perceived onset and offset of a stimulus, but this could account for only around 20 ms of the difference between high- and low-contrast stimuli, which cannot fully explain the 60 ms difference found in both previous experiments.

Overall, the study demonstrates the influence of a low-level perceptual feature, contrast, on perceived duration; further studies with multiple standard durations and a larger N are needed to fully understand the role of contrast in time perception. As the authors point out, an 89% reduction in contrast led to only about a 10% reduction in perceived duration, which raises further interesting questions about the mechanisms underlying such effects.

In my opinion, more studies are needed not only with contrast but also with other low-level perceptual features like spatial frequency and luminance, and with curvature and texture, building up to complex images like faces. Then we could understand the role of specific components in altering time perception, which in the future would enable researchers to model and predict (only to a certain extent, as the influence of complex objects is based on associated meaning as well) the perceived duration of a complex image just by analyzing its lower-level components.

Source article: Benton, C. P., & Redfern, A. S. (2016). Perceived Duration Increases with Contrast, but Only a Little. Frontiers in Psychology, 7.

—-Mukesh Makwana, Doctoral student, (mukesh@cbcs.ac.in)
Centre of Behavioural and Cognitive Sciences (CBCS), University of Allahabad, India.

The phase of pre-stimulus alpha oscillations influences the visual perception of stimulus timing

Over the last decade or so, there’s been an absolute explosion of interest in neural oscillations and their role in perception. In particular, I and others are very interested in how neural phase, assumed to reflect fluctuations in neuronal excitability, affects perception on a moment-to-moment basis. A new paper by Alex Milton and Christopher Pleydell-Pearce uses EEG to examine the role of neural alpha phase (in the 8–13 Hz range) in the perception of timing – in this case, asynchrony versus simultaneity of visual onsets.

Participants were cued (validly or invalidly) to either the left or the right, and then two peripheral LEDs were illuminated with a stimulus onset asynchrony (SOA) chosen to keep asynchrony detection at threshold for each individual participant (sometimes the LEDs were simultaneous, but rarely, and only to estimate false-alarm rates). When the LEDs were illuminated during the trough of the alpha oscillation (measured over a handful of left posterior sensors), they were more likely to be correctly perceived as asynchronous; when they were illuminated during the peak of the oscillation, they were more likely to be incorrectly perceived as simultaneous. The results replicate and extend older work by Varela. And they provide a new source of trial-to-trial variability in asynchrony judgments, one that may also have individual-differences components due to, e.g., differences in individual alpha frequency.
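(The core analysis idea – sort trials by pre-stimulus alpha phase and compare performance across phase bins – can be sketched in a few lines. The bin count and the simulated phase effect below are illustrative assumptions, not the authors’ pipeline.)

```python
import numpy as np

rng = np.random.default_rng(0)

def accuracy_by_phase(phases, correct, n_bins=8):
    """Mean accuracy within each pre-stimulus phase bin (phases in [-pi, pi))."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.digitize(phases, edges) - 1
    return np.array([correct[idx == b].mean() for b in range(n_bins)])

# Simulate 2000 trials where detection is best at the alpha trough (phase near
# +/-pi) and worst at the peak (phase near 0):
phases = rng.uniform(-np.pi, np.pi, 2000)
correct = rng.random(2000) < (0.5 - 0.15 * np.cos(phases))
print(np.round(accuracy_by_phase(phases, correct), 2))  # dips in the middle (peak) bins
```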

I found myself wondering while reading what the explanation for their result was – specifically, was asynchrony more likely to be perceived because the individual events and their onsets were better perceived during more excitable phases of the neural oscillation [based on my current understanding of near-threshold detection/discrimination data]? –OR– are individual LED onsets that are “transmitted” during successive alpha cycles perceived as separate, but bound if they end up in the same alpha cycle together [a la Lisman & Idiart’s theory that individual items in working memory are “stored” in single cycles of a high-frequency gamma oscillation that are nested in a single cycle of a low-frequency theta oscillation, and consistent with Varela’s ideas]? The authors address this question in the Discussion, and assign the two possibilities to the two sides of the debate about whether perception and underlying “processing epochs” are continuous or discrete, respectively. They suggest that fluctuations in neuronal excitability leading to enhancement of the perception of the LEDs and their temporal relation would be an unlikely mechanism to strictly quantize sensory input (and suggest that their own results are more compatible with a continuous view of perception). On the other hand, assuming that the LEDs might be perceived as asynchronous when they ride along in separate alpha cycles is compatible with a “temporal framing” hypothesis; sensory information is gated into discrete “packets” (see for example VanRullen & Koch, 2003).

The arguments for continuity generally appeal to intuition; it’s of course true that our perception of the world flows from one moment to the next without sharp boundaries. But the most basic building block of brain function, a spike from a single neuron, is all or none – a neuron fires or it doesn’t. In between, there are psychophysical and neural data that can be interpreted as supporting both views. So the issue is far from solved. I dislike fractionation and dichotomies, especially in the context of brains, which seem quite hard to parcel up cleanly. So I’m a fan of the idea that both continuity and discreteness are not only present, but necessary for brain function and cognition (Fingelkurts & Fingelkurts, 2006). I don’t have time in a short blog post to talk about HOW perception and cognition might arise from a critical combination of continuous and discrete neural processes, but I highly recommend the cited paper as supplementary reading. The authors suggest directions for future research (such as examining symptoms of certain neuropsychological disorders) that may help us better understand the continuity/discreteness trade-off. And with a better understanding of how both discreteness and continuity might be essential for consciousness and cognition, results like those of Milton and Pleydell-Pearce might give us insight into neural mechanisms without our having to assume that either continuity or discreteness, but not both, must be true.

–Source article: Milton & Pleydell-Pearce. The phase of pre-stimulus alpha oscillations influences the visual perception of stimulus timing. NeuroImage.

Meditation, Sense of Agency and Time Perception

Whenever we perform an action or have a thought, we rarely wonder whether it belongs to us or to somebody else. But this seemingly trivial sense becomes very evident in patients with disorders of volition (e.g., schizophrenia, alien limb syndrome), who are sometimes unable to attribute agency for their own actions or thoughts.

Generally, when an individual performs a voluntary action (e.g., a key press) leading to an outcome (e.g., a tone), the perceived times of the action and its outcome are shifted towards each other. The shift in the perceived time of the action towards the outcome is called action binding, and the shift in the perceived time of the outcome towards the action is called outcome binding. The overall compression of subjective time between a voluntary action and its outcome is famously known as intentional binding and is mostly studied using the Libet clock paradigm. Many researchers believe that this compression of subjective time acts as a cue for the brain to distinguish self-generated from externally generated action-outcome pairs, and it is often used as an implicit measure of the sense of agency.

Meditation practices have a profound effect on both our physical and mental well-being. But whether meditation practices could also influence the sense of agency is unclear. A recent study published in Mindfulness by Lush, Parkinson and Dienes investigated the effect of mindfulness meditation on intentional binding. The meditator group comprised Buddhist mindfulness meditators (N=8) with, on average, around 14.6 years of meditation experience; the non-meditator group comprised age- and gender-matched controls (N=8) with no experience of mindfulness meditation.

To measure intentional binding, they used the standard Libet clock paradigm. Participants pressed a key at will, which produced an auditory tone (1000 Hz, 100 ms) after a delay of 250 ms. While performing this task, they fixated the center of a screen displaying a clock face with a dot (0.2°) revolving around it at a speed of 1 revolution per 2560 ms. Participants reported the time of the action (key press) or the outcome (tone) by indicating the position of the dot on the clock face at the moment they thought the event occurred. There were four blocks: (1) a contingent action block, (2) a contingent outcome block, (3) a baseline action block, and (4) a baseline outcome block. In both the contingent action and contingent outcome blocks, the participant’s action produced an auditory tone; in the contingent action block they were asked to report the perceived time of the action, whereas in the contingent outcome block they were asked to report the perceived time of the outcome. In the baseline action block, participants performed only the voluntary key press, with no tone, and reported the time of the action. In the baseline outcome block, they heard a tone at a random time between 2.5 and 7 s, without performing any action, and reported the time at which they heard it.

Mean judgement errors were calculated for each participant and each condition. Action binding and outcome binding were calculated by subtracting the judgement error in the appropriate baseline condition from that in the respective contingent condition. Overall intentional binding was calculated by subtracting the action binding from the outcome binding.
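(In code form, the binding arithmetic is simply the following; the variable names and millisecond values are illustrative, not the paper’s data.)

```python
def intentional_binding(contingent_action, baseline_action,
                        contingent_outcome, baseline_outcome):
    """All inputs are mean judgement errors (perceived minus actual time, ms)."""
    action_binding = contingent_action - baseline_action     # positive: action shifted later
    outcome_binding = contingent_outcome - baseline_outcome  # negative: outcome shifted earlier
    return outcome_binding - action_binding                  # overall intentional binding

# e.g., action judged 15 ms later and tone 60 ms earlier than their baselines:
print(intentional_binding(10, -5, -40, 20))  # -75, i.e., 75 ms of compression
```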

Data were analyzed using Bayesian statistics, which has some advantages over conventional null-hypothesis significance testing and is based on assessing the strength of evidence in favor of a particular hypothesis. The Bayes factor is used to assess this strength of evidence: a Bayes factor above 3 indicates substantial evidence in favor of the alternative hypothesis, and one below 1/3 indicates substantial evidence in favor of the null hypothesis. A Bayes factor between 3 and 1/3 indicates that the data are insensitive, i.e., unable to distinguish between the null and alternative hypotheses.
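(For intuition, here is a rough sketch of the kind of Bayes factor calculation Dienes advocates: the marginal likelihood of the observed effect under a half-normal H1 prior, divided by its likelihood under H0. The prior scale and the numbers are illustrative assumptions, not values from the paper.)

```python
import numpy as np
from scipy.stats import norm

def bayes_factor(obs_mean, obs_se, h1_scale):
    """BF10 for a half-normal H1 prior with scale h1_scale (Dienes-style)."""
    deltas = np.linspace(0, 5 * h1_scale, 2000)      # candidate true effects under H1
    prior = 2 * norm.pdf(deltas, 0, h1_scale)        # half-normal prior density
    likelihood = norm.pdf(obs_mean, deltas, obs_se)  # likelihood of the observed mean
    marginal_h1 = np.sum(likelihood * prior) * (deltas[1] - deltas[0])
    return marginal_h1 / norm.pdf(obs_mean, 0, obs_se)

# An observed 50-ms effect with SE 20 ms, H1 prior scaled to a 50-ms effect:
print(bayes_factor(50, 20, 50))  # ~11, i.e., substantial evidence for H1
```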

Results showed that meditators exhibited more intentional binding than non-meditators; specifically, outcome binding was greater in meditators than in non-meditators. The results suggest that mindfulness meditation is associated with an increased sense of agency. The authors explain that this could be due to mindfulness practitioners having greater meta-cognitive access to their intentions, and hence greater intentional binding.

Further evidence on different types of meditation practices and their effects on the sense of agency is needed to better understand the relationship between meditation, the sense of agency, and time perception. Nevertheless, the current study provides a good start in this area, and its results give hope that meditation practices could also be considered in treating disorders of volition.


—–Mukesh Makwana, Doctoral student, mukesh@cbcs.ac.in

Centre of Behavioural and Cognitive Sciences, India.


Source article: Lush, P., Parkinson, J., & Dienes, Z. (2016). Illusory temporal binding in meditators. Mindfulness, 7(6), 1416-1422.

Iterated reproduction task reveals rhythmic priors associated with exposure to music

According to Bayesian theories of cognition, perception involves the integration of noisy sensory information with probabilistic internal models. These internal models reflect the net sum of all of our prior experiences and assist in structuring perception in the presence of unreliable sensory input. The influence of these internal models – known as priors – can most clearly be observed in situations where sensory input is weak. In these cases, the prior makes a much larger contribution to perception, effectively biasing perception toward events that are more commonly encountered.
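(In the Gaussian case this integration has a simple closed form: the percept is a precision-weighted average of the sensory estimate and the prior mean, so the noisier the input, the more the percept is pulled toward the prior. A two-line illustration, with invented numbers:)

```python
def bayes_percept(sensory, var_sensory, prior, var_prior):
    """MAP estimate when both likelihood and prior are Gaussian."""
    w = (1 / var_sensory) / (1 / var_sensory + 1 / var_prior)  # sensory weight
    return w * sensory + (1 - w) * prior

print(bayes_percept(1.8, 0.1, 2.0, 0.1))  # reliable input: percept ~1.9
print(bayes_percept(1.8, 1.0, 2.0, 0.1))  # noisy input: pulled to ~1.98
```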

Musical practices are observed in all cultures, and each musical system emphasises different rhythmic signatures. Is it possible that exposure to music forms rhythmic priors that help structure our perception of auditory sequences, and if so, are these rhythmic priors influenced by culture?

To assess rhythmic priors, Nori Jacoby and Josh McDermott from MIT devised an iterated reproduction task wherein participants tapped in time with auditory sequences comprised of repeating three-interval rhythms (e.g., 3:2:1, 1:2:1). On each trial, the researchers surreptitiously replaced the auditory sequence with the rhythm produced by the participant on the previous trial. The idea behind this procedure is that if temporal priors help structure the perception of musical sequences, then the rhythms produced by participants over successive trials should gradually become biased toward those priors. Indeed, the authors showed that reproductions tended to drift from the initial sequence and then stabilise after only five trials.
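(A toy simulation shows why iterated reproduction homes in on the prior: if each reproduction is a noisy compromise between the heard rhythm and an internal prior, and each reproduction becomes the next trial’s stimulus, the sequence mostly reflects the prior within a handful of iterations. The prior, weight, and noise level below are illustrative assumptions, not the authors’ model.)

```python
import numpy as np

rng = np.random.default_rng(1)
prior = np.array([3.0, 2.0, 1.0]) / 6.0  # assumed internal prior: a 3:2:1 integer ratio
rhythm = rng.dirichlet([1, 1, 1])        # random initial interval proportions
w = 0.3                                  # pull toward the prior on each reproduction

for trial in range(5):
    rhythm = (1 - w) * rhythm + w * prior + rng.normal(0, 0.01, 3)
    rhythm = np.clip(rhythm, 0.05, None)
    rhythm /= rhythm.sum()               # proportions of the repeating cycle sum to 1
    print(trial + 1, np.round(rhythm, 3))  # drifts toward ~[0.5, 0.333, 0.167]
```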

However, to ensure that the task itself was not biased toward cultural norms, the initial rhythm was randomly generated. In western music, interval ratios are usually composed of integers. So, to prevent the task from being influenced by western music conventions, the initial trial was randomly selected from all possible interval ratios, including non-integer values.

Despite the rhythms being randomly generated, reproductions tended to converge toward sequences with integer ratios. Importantly, this effect was observed in a range of control experiments designed to rule out the role of motor demands. For example, the result was not specific to the effector, since an integer bias was found when participants provided a verbal response. Likewise, a ratio bias was observed when sequences were reproduced from memory, indicating that the effect was not due to the auditory-motor entrainment associated with synchronisation tasks.

Indeed, the effect of priors was also apparent in perceptual discrimination tasks. Participants were presented with sequences that varied along a continuum between 3:2:3 and 1:1:1 and performed a same-different judgement on pairs of sequences. Discrimination performance showed a pattern characteristic of categorical perception, with increased sensitivity for non-integer rhythms and decreased sensitivity for rhythms near integer ratios. The loss of perceptual sensitivity near integer patterns is indicative of a prior drawing the perception of patterns toward integer rhythms.

Crucially, the integer bias uncovered by the iterated reproduction task was influenced by exposure to music. In American participants, biases were observed only for ratios commonly found in western music. Likewise, a remote Amazonian population – the Tsimane – also showed a bias for integer ratios; however, in this case, biases were shown only for intervals found in Tsimane music. However, the effect of the priors appeared to reflect passive exposure to common rhythmic structures, as American musicians showed the same pattern of integer bias as Americans with no musical training.

Although American and Tsimane cultures differed in the profile of intervals associated with priors, both cultures showed preferences for integer ratios. The Tsimane are a remote population with almost no exposure to western culture, so it is unlikely that cultural transmission can explain their preference for integer ratios. This begs the question: how is it that both groups show priors for integer rhythms? Although iterated reproduction is often used in social science to explore the dynamics associated with the formation of shared practices, attitudes and beliefs, the authors stress that the task used here does not recapitulate the development of rhythmic preferences. Instead, they argue, the task only uncovers pre-existing internal preferences. How widespread such preferences are across different cultures, and why preferences for integer rhythms emerge, remains to be seen.


Bronson Harry

The MARCS Institute, Western Sydney University

Twitter | ResearchGate | Web