TRF2 Blog Post

After a very successful TRF1 in 2017, this year saw the second biennial Timing Research Forum, held from 15–17 October 2019 in Queretaro, Mexico. The conference largely followed the format of its predecessor, with the extremely exciting addition of a Moonshot session and keynotes from Mehrdad Jazayeri, Albert Tsao and Kia Nobre.

It would be difficult to comment on all the new ideas and findings, with so many exciting and inspiring talks and posters in one place, but some of the musings I took away relate to how long (and adventurous) a road still lies ahead of us in defining what we truly mean when we speak of timing and time perception. Specifically, several speakers proposed rethinking relative features of time, such as what regularity truly means for the brain, and whether previously demonstrated functional networks account for all presentations of time, as has been suggested (for example, beats and intervals), or whether different networks account for different aspects of time. The importance of rhythmic expectation, as indexed by improved performance on behavioural measures, was also touched upon, but again, attributing these behavioural effects to temporal expectation is a perilous stance that still requires further exploration (and may prove more challenging than we initially anticipate). With every meeting dedicated to exploring time perception, it becomes increasingly obvious that we don’t all mean the same thing when we speak of time. Investigating the perception of ‘rhythms’ or ‘beat’ is distinctly different from the perception of ‘intervals’, and even that differs from the processing of ‘durations’. Perhaps, however, these differences are what make the study of time endlessly exciting and compelling – there truly is so much more we are yet to discover (and eventually piece together).

The Moonshot session was a novel and particularly exciting addition at this year’s meeting. The basic premise for the discussion was this –

“For the next 5 years, all techniques, methods, and workforce are available to solve the question you think is essential to understand time in the brain. Which question would that be, and why?”. 

Speakers proposed several thought-provoking and progressive advances (the session was fully livestreamed, and if you didn’t get a chance to tune in, you can still catch up at https://www.pscp.tv/w/1djxXRwNmYyGZ). Suggestions included a focus on the ontology of timing and a particular need to develop a comprehensive taxonomy of time, since several accounts of interval timing actually describe pattern timing. Another was the immediate need to explore whether distinct neurochemical systems mediate distinct aspects of timing. More social approaches to the question were also raised: for example, why synchronous movement with others increases social affiliation (even as early as infancy), and, in a similar vein, the observation that unlike humans, other primates (monkeys) do not spontaneously perform periodic temporal prediction, highlighting that predictive motor entrainment is intrinsically rewarding to humans (and raising the question of why exactly species differ).

While the Moonshot allowed many exciting ideas to be brought to the forefront, I was especially excited by Molly Henry’s radical approach to the problem of delimiting the ‘now’, and what exactly this means. Molly described reports of an ‘expanded now’ from individuals on LSD and other hallucinogens, and proposed that if we truly were to aim for the moon, considering the use of such substances to explore the extent of the ‘now’ might be more fruitful and eye-opening (excuse the pun!) than we realise. Whilst this leaves much to think about (and in some cases, reconsider), there is something to be said for the feeling of each of us working, in our own creative ways, to piece this puzzle together.

In conclusion, it is no exaggeration to say TRF2 was a resounding success – presenting novel insights, but also reaffirming the need to keep questioning our own assumptions and to approach the enigma of timing creatively. Building on the foundations of its predecessor, TRF2 has paved the way for more exciting work to be presented at TRF3 in Groningen (The Netherlands). Thanks again to all the organisers and attendees. See you in 2021!

Aysha Motala is a postdoctoral fellow at the Brain and Mind Institute (Western University, Canada), working on cross-modal timing and neuroimaging of speech and rhythm processing.

(Twitter: @aysha_motala)

Meta-analysis of neuroimaging during passive music listening: Motor network contributions to timing perception

We often learn about what the brain is doing by observing what the body is doing when the brain is focused on a task. This is true for investigations into rhythmic timing perception. Many insights into timing have resulted from careful observation of sensorimotor synchronization with auditory rhythms. This draws from the work of Bruno Repp suggesting that perception of auditory rhythms relies on covert action—that synchronizing with a sequence is not so different from simply perceiving a sequence without moving along with it.

More specifically, in order to synchronize a finger-tap, or any other body movement, with an auditory stream, some timing prediction is necessary in order to perform all movement planning, effector assembly and execution in time with the auditory beat instead of several milliseconds too late. If we must plan for a synchronized movement in advance, and there is some automaticity to this planning when we listen to auditory rhythms, then it is reasonable to ask whether we also perform some degree of motor planning every time we perceive a rhythm even if we do not move any body part in time with it.

The evidence suggests we do use our motor systems, or at least that our motor systems are actively being used for some purpose while perceiving rhythms when we are not synchronizing. Brain images during rhythm perception experiments consistently show activation in areas of the brain that are known to be involved in movement of the body. These areas include primary motor cortex, premotor cortices, the basal ganglia, supplementary motor area, and cerebellum. Details about covert motor activity are still being investigated, but some theories suggest covert motor activity plays an essential role in rhythmic timing perception, a theory many music cognition researchers find intriguing.

But first, what does the neuroimaging literature actually say about which motor networks are active, and which rhythm perception tasks elicit this covert action? Each study uses musical stimuli that vary on a number of features and gives different instructions to subjects on how to attend to or experience the stimuli, and these differences induce varying emotional states, arousal, familiarity, attention and memory. However, across all this stimulus variability, motor networks still robustly present themselves as players in rhythm perception. Interestingly, the stimulus variability shows up less in whether we see covert action and more in which motor networks are covertly activated.

In a recent meta-analysis of neuroimaging studies on passive musical rhythm perception, Chelsea Gordon, Patrice Cobb and Ramesh Balasubramaniam (2018) asked which covert motor activations are most reliable and consistent across studies. They used Activation Likelihood Estimation (ALE; Turkeltaub et al., 2002), derived from peak activations in Talairach or MNI space, to compare coordinates across all PET and fMRI studies with passive music listening conditions in healthy human subjects. Their sample included 42 experiments that met the criteria for inclusion. As expected, the ALE meta-analysis revealed clear and consistent covert motor activations in several regions during passive music listening: bilateral premotor cortex, right primary motor cortex, and a region of left cerebellum. Premotor activations could not be consistently localized to either the dorsal or the ventral subregion; across studies they appeared dorsal, ventral, or both. Right primary motor activations might have been excitatory or inhibitory, and were stronger in studies that asked subjects to anticipate tapping to a beat in subsequent trials or to subvocalize humming. Most consistent across studies were the premotor and left cerebellum activations, supporting predictive theories of covert motor activity during passive music listening.
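
For readers unfamiliar with ALE, here is a minimal sketch of the idea only (not the GingerALE pipeline the authors would have used): each experiment's reported peaks are blurred with a Gaussian to form a modeled activation map, and maps are combined across experiments with a probabilistic union; real ALE then tests the result against a permutation null, which this toy omits. The coordinates below are invented purely for illustration.

```python
# Minimal sketch of the ALE idea: Gaussian-blurred peak foci per experiment,
# combined across experiments with a probabilistic union.
import numpy as np

GRID = np.stack(np.meshgrid(np.arange(-60, 61, 4),   # x (mm)
                            np.arange(-90, 91, 4),   # y (mm)
                            np.arange(-40, 71, 4),   # z (mm)
                            indexing="ij"), axis=-1)

def modeled_activation(foci_mm, fwhm=10.0):
    """Union of Gaussians centred on one experiment's peak coordinates."""
    sigma = fwhm / 2.355
    ma = np.zeros(GRID.shape[:3])
    for focus in foci_mm:
        d2 = ((GRID - np.asarray(focus)) ** 2).sum(axis=-1)
        ma = 1.0 - (1.0 - ma) * (1.0 - np.exp(-d2 / (2 * sigma ** 2)))
    return ma

# Hypothetical peak coordinates (MNI, mm) from three made-up studies
studies = [
    [(-52, -2, 44), (54, 0, 40)],        # bilateral premotor-like peaks
    [(50, -8, 46)],                      # right primary-motor-like peak
    [(-24, -60, -24), (-50, -4, 46)],    # left cerebellum + left premotor
]

# ALE value: probability that at least one experiment "activates" each voxel
ale = 1.0 - np.prod([1.0 - modeled_activation(f) for f in studies], axis=0)
peak = np.unravel_index(np.argmax(ale), ale.shape)
print("max ALE value:", round(float(ale.max()), 3), "at grid index", peak)
```

Convergent peaks across experiments reinforce each other under the union rule, which is why consistently reported regions (here, the premotor-like coordinates) dominate the ALE map.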

One surprising aspect of these results is that the ALE meta-analysis did not find consistent activation in SMA, pre-SMA or the basal ganglia. The authors suggest that basal-ganglia-thalamocortical circuits may be specifically involved in subjects with musical training, or only in tasks with specific instructions to attend to the rhythmic timing of the stimuli instead of to listen passively.

An important concern Gordon and colleagues raised in the discussion is how publication bias contributes to ALE results. As described by Acar et al. (2018), unpublished data deemed uninteresting can bias meta-analytic techniques (the file drawer problem), including the ALE measure. Gordon et al. attempted to account for the file drawer problem by contacting all authors of the analyzed manuscripts to ask for the full datasets from each study to use in their ALE analysis. However, many authors did not provide this data for unreported brain activations, limiting the number of contrasts that could be performed and leaving a possible influence of publication bias on the ALE results.

The ALE technique is a powerful tool for large-scale meta-analyses of neuroimaging studies, but, as with any meta-analysis of published results, it can be susceptible to the pitfalls of the file drawer problem. That being said, covert motor activity during passive music listening presents consistently across studies, even with considerable stimulus variability. This may support the idea that timing prediction uses premotor and cerebellar networks.

Jessica M. Ross (jross4@bidmc.harvard.edu)

Source:

Gordon, C. L., Cobb, P. R., & Balasubramaniam, R. (2018). Recruitment of the motor system during music listening: An ALE meta-analysis of fMRI data. PLoS ONE, 13(11), e0207213. https://doi.org/10.1371/journal.pone.0207213

http://chelseagordon.me/index.html

https://www.patricehazam.com/

https://www.rameshlab.com/

References:

Acar, F., Seurinck, R., Eickhoff, S. B., & Moerkerke, B. (2018). Assessing robustness against potential publication bias in Activation Likelihood Estimation (ALE) meta-analyses for fMRI. PLoS ONE, 13(11), e0208177. https://doi.org/10.1371/journal.pone.0208177

Turkeltaub, P.E., Eden, G.F., Jones, K.M., & Zeffiro, T.A. (2002). Meta-analysis of the functional neuroanatomy of single-word reading: Method and validation. Neuroimage, 16, 765–780. https://doi.org/10.1006/nimg.2002.1131

How humans compute estimates of sub-second visual duration

“Heron and colleagues sought to address the question of how humans compute estimates of sub-second visual duration. Historical attempts to answer this question have taken inspiration from the observation that different brain areas are functionally specialised for the processing of specific stimulus attributes such as spatial location. This led to the dominance of ‘dedicated’ models of duration perception: central, specialised mechanisms whose primary function is duration encoding.

Recently, these models have been challenged by the emergence of ‘distributed’ models which posit the localised encoding of duration alongside other, non-temporal stimulus features. This raises the possibility that some neurons might perform ‘double duty’ by (for example) encoding information about spatial location and temporal extent. However, given the potentially vast number of non-temporal stimulus features implicated, isolating those functionally tied to duration encoding represents a challenge.

Heron and colleagues attempted to quantify contributions to duration processing from three different strata within the visual processing hierarchy: monocular, depth-selective and depth-invariant. They began by isolating the duration information presented to left and right monocular channels. When this information induced duration aftereffects, strong aftereffects were also observed in the non-adapted eye. Nevertheless, a small but significant amount of adaptation did not show interocular transfer. Next, they used a novel class of stimuli to present durations defined by the presence or absence of retinal disparity information. These stimuli allowed the first demonstration of duration perception under conditions where stimuli are only visible to mechanisms that integrate spatial information from both eyes.

They found that robust duration aftereffects could be generated by viewing disparity-defined durations, revealing duration-selective mechanisms entirely independent of monocular processing. Importantly, these aftereffects showed only partial selectivity for the depth plane of the duration information. For example, adaptation to durations defined by crossed disparity information followed by testing with uncrossed disparity-defined stimuli produced aftereffects that were significantly greater than zero but significantly smaller than in conditions where adapting and test durations were defined by the same type of retinal disparity.

Heron and colleagues’ findings provide clear support for duration selectivity at multiple stages of the visual hierarchy. They suggest that duration processing may have similarities with the well-documented ‘serial cascade’ type of processing in the spatial domain. In this scenario, downstream duration encoding mechanisms apply cumulative adaptation to effects inherited from their upstream counterparts.”

— blog post by James Heron, University of Bradford

Source article:
Heron, J., Fulcher, C., Collins, H., Whitaker, D., & Roach, N. W. (2019). Adaptation reveals multi-stage coding of visual duration. Scientific Reports, 9, 3016.

Publisher’s link: https://www.nature.com/articles/s41598-018-37614-3

The effects of color on time perception – Blue stimuli are temporally overestimated

In a paper recently published in Scientific Reports, Sven Thönes, Christoph von Castell, Julia Iflinger, and Daniel Oberfeld investigated whether duration judgments depend on the color (hue) of stimuli to be judged.

As color is a basic feature of visual stimuli in lab experiments as well as in everyday environments, potential effects of hue on our perception of time are important to consider. In particular, the well-known effects of arousal on time perception suggest that arousing hues, such as red, should induce an overestimation of duration.

In a two-interval duration-discrimination task, the authors investigated whether participants indeed overestimate the duration of red stimuli in comparison to blue stimuli, while controlling for differences in brightness (individual adjustments by means of flicker photometry) and saturation (colorimetric adjustment in terms of the CIELAB color space). The mean duration of the stimuli was 500 ms. Moreover, the participants’ affective reactions (arousal, valence, dominance) towards the color stimuli were measured by means of the Self-Assessment Manikin scales.

Interestingly, the results showed a significant overestimation of the duration of blue compared to red stimuli, even though the red stimuli were rated as being more arousing. The estimated point of subjective equality showed that blue and red stimuli were perceived to be of equal duration when the blue stimulus was in fact 60 ms (12%) shorter than the red stimulus.

These surprising results (high arousal paired with temporal underestimation) call into question whether arousal is the main driving factor in the context of color and time perception. Moreover, the precision (variability) of duration judgments, i.e., the duration difference limen, did not differ between red and blue stimuli, also casting doubt on an explanation in terms of attentional processes. The authors propose that specific neurophysiological mechanisms of color processing might underlie the effect, which needs to be investigated in more detail in future studies.

Importantly, timing-related visual experiments need to take into account that the hue of the stimuli can affect time perception.

Source article:

Thönes, S., von Castell, C., Iflinger, J., & Oberfeld, D. (2018). Color and time perception: Evidence for temporal overestimation of blue stimuli. Scientific Reports, 8, 1688. doi: 10.1038/s41598-018-19892-z

— Dr. Sven Thönes (thoenes@ifado.de)

Linking sense of agency to perceived duration

Sense of agency (SoA) is an important feeling associated with voluntary actions, enabling one to experience oneself as controlling one’s actions and, through them, events in the external environment. Until now, only the distortion of the time interval between an action and its consequence (i.e., the intentional binding effect) had been associated with SoA, but a recent study by Shu Imaizumi and Tomohisa Asai, published in Consciousness and Cognition, shows that even the perceived duration of the consequence is linked with SoA.

To investigate this association, they measured the perceived duration of a visual display (a measure of subjective time) and ratings of the amount of control (an explicit measure of agency) as a function of the temporal contiguity between action and visual display, and of the identity of the visual display (the participant’s own hand vs. someone else’s hand). In each trial, participants performed a complex hand gesture depicted by an image on the screen. This hand gesture was recorded by an overhead camera and projected onto the screen after a variable delay. While participants performed this task, their hands were covered, so the only visual feedback of their action was the one they saw on the screen. Participants reported whether they perceived the duration of the displayed video feedback (3000 ms) as “short” or “long”. They also reported whether they felt that they controlled the displayed hand, by providing a binary response of “totally agree” or “totally disagree”.

Agency was manipulated in two ways: by changing the visual display (self vs. other) and by manipulating the action–consequence delay (50 ms, 250 ms, 500 ms, 1000 ms or 1500 ms). In half of the trials, participants saw a recording of their own hand (self condition), and in the other half they saw pre-recorded clips of another person’s hand performing a similar action (other condition). Based on prior studies of SoA, it was expected that seeing visual feedback of one’s own hand would elicit stronger SoA than seeing someone else’s hand. Similarly, one should experience stronger SoA for visual feedback displayed with a short delay (50 ms, 250 ms, or 500 ms) compared to a longer delay (1000 ms or 1500 ms). They hypothesized that if SoA influences perceived duration, then participants should report “long” judgments more often in conditions known to boost SoA.

Results revealed that when the visual feedback consisted of the participant’s own hand, they reported stronger SoA and perceived the duration as longer for short action–outcome delays (50 ms, 250 ms, or 500 ms), and this effect became weaker as the delay grew longer (1000 ms and 1500 ms). Furthermore, the effect was not observed when the display consisted of someone else’s hand, suggesting that SoA and perceived outcome duration might be linked. A similar experiment investigated the effect of the participant’s own hand projected from a first-person perspective (upright) vs. a second-person perspective (inverted). The authors expected that the inverted perspective would be treated as non-self and would not influence perceived duration, but surprisingly both the inverted and the upright perspective showed similar effects on perceived duration and agency, suggesting that, independent of orientation, visual information about one’s own hand is processed in a similar manner.

In conclusion, this study provides evidence that SoA also affects perceived duration: participants perceive the outcome duration to be longer when they feel a stronger SoA. However, the study leaves unclear the exact mechanism that would explain the observed temporal expansion associated with SoA. Moreover, only a single duration was used to evaluate changes in temporal perception. Another recent study, published in Scientific Reports by Makwana and Srinivasan, demonstrated a similar temporal expansion associated with intentional action, which was sensitive to temporal contiguity and to the source of action (intention-based vs. stimulus-based). They demonstrated the intention-induced temporal expansion using multiple durations and paradigms (temporal bisection and magnitude estimation). In addition, they investigated its underlying mechanism in terms of the internal clock (the most influential model of time perception), suggesting that switch dynamics, and not pacemaker speed, are involved in such temporal expansion. Overall, these studies suggest that intention and intentional action not only influence the time between the action and the outcome but may also influence other aspects of the outcome events, such as their duration, and more studies are required to fully understand in what ways our perception is distorted by intentional action.

 

Reference:

  1. Moore, J. W., & Obhi, S. S. (2012). Intentional binding and the sense of agency: a review. Consciousness and Cognition, 21(1), 546-561.
  2. Makwana, M., & Srinivasan, N. (2017). Intended outcome expands in time. Scientific Reports, 7(6305). doi: 10.1038/s41598-017-05803-1

 

Source article: Imaizumi, S., & Asai, T. (2017). My action lasts longer: Potential link between subjective time and agency during voluntary action. Consciousness and Cognition, 51, 243-257.

 

—Mukesh Makwana (mukesh@cbcs.ac.in),

Doctoral student,

Centre of Behavioural and Cognitive Sciences (CBCS), India.

Expectation, information processing, and subjective duration

A paper recently published in Attention, Perception, & Psychophysics tested an implementation of the temporal oddball illusion (according to which standard stimuli seem shorter than oddball stimuli of the same duration) in a novel context using a novel methodology (musical imagery reproduction). This paper is, to the authors’ knowledge, the first to test whether the temporal oddball illusion translates from single events to multiple-event sequences, and whether information processing influences this potential translation.

In two experiments, musical chord sequences of varying durations (3.5 s, 7 s, 11.9 s) did or did not contain auditory oddballs (sliding tones), and people listened to the sequences while engaged in either direct or indirect temporal processing. We manipulated information processing by independently varying the task (Experiment 1), the sequence event structure (Experiments 1 and 2), and the sequence familiarity (Experiment 2). The task was either verbal estimation (“What is the duration of this excerpt?”) or musical imagery reproduction (“Imagine that excerpt playing back in your head. Re-play it through your head the exact way you heard it play through the headphones, from start to finish. Press the green button to mark the start of the excerpt you’re imagining. Press the red button to mark the finish of the excerpt you’re imagining.”). The sequence event structure was either repeated (the mere repetition of a single chord), coherent (chord progressions that follow the rules of Western tonal harmony), or incoherent (the coherent sequences scrambled such that the chord progressions violated the rules of Western tonal harmony). The sequence familiarity was either familiar (presented during an exposure phase) or unfamiliar (not presented during the exposure phase). Completing a verbal estimation task and listening to coherent, repeated, or familiar sequences induces direct temporal processing. Completing a musical imagery reproduction task and listening to incoherent or unfamiliar sequences induces indirect temporal processing.

The main findings were that the sequences containing oddballs seemed shorter and longer than those not containing oddballs when people were engaged in direct and indirect temporal processing, respectively. These results support the dual-process contingency model of short interval time estimation, and can be explained using the notion of an information processing continuum (Zakay, 1993): as attention shifted from counting seconds (direct temporal processing) to listening to music (indirect temporal processing), for example, the effect of oddballs shifted from decreasing the number of seconds counted to increasing the amount of music remembered.

References:

Zakay, D. (1993). Relative and absolute duration judgments under prospective and retrospective paradigms. Attention, Perception, & Psychophysics, 54, 656–664. https://doi.org/10.3758/BF03211789

Source paper:

Simchy-Gross, R., & Margulis, E. H. (2017). Expectation, information processing, and subjective duration. Attention, Perception, & Psychophysics, 1-17. https://doi.org/10.3758/s13414-017-1432-4

 

Reprints are available at https://www.researchgate.net/profile/Rhimmon_Simchy-Gross

– Rhimmon Simchy-Gross

PhD student

Music Cognition Lab @ University of Arkansas

Dopamine encodes retrospective temporal information

A new study published in Cell Reports shows that midbrain dopamine neurons are sensitive to previously experienced time intervals, and that this is likely to be important for reward processing. Midbrain dopamine neurons are frequently discussed in terms of their roles in reward, motivation, and certain forms of learning. Within the time perception literature, however, we commonly associate dopamine with modulating the rate of the internal pacemaker. Naturally, these functions of dopamine are not exclusive, and this study makes important progress in integrating them.

Dopamine in reinforcement learning

While early research implicated dopamine as the principal neurotransmitter responsible for the hedonic nature of “liking” something, the contemporary view conceptualises dopaminergic activity as a reinforcement signal that facilitates learning, rather than directly causing pleasure. This is in part due to the classic finding that phasic dopamine activity in the mesolimbic pathway constitutes a reward prediction error (the difference between expected and received reward), commensurate with prescriptive models of reinforcement learning.
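
To make the prediction-error idea concrete, here is a toy temporal-difference (TD) sketch; the learning rate, discount factor and trial structure are invented for illustration and are not taken from the studies discussed here.

```python
# Toy temporal-difference (TD) sketch of a reward prediction error:
# delta = r + gamma * V(next state) - V(current state).
# A cue starts a 5-step delay ending in reward; the cue itself arrives
# unpredictably, so the pre-cue value is held at zero. All numbers are made up.
import numpy as np

n_steps, alpha, gamma = 5, 0.1, 0.95
V = np.zeros(n_steps + 1)                  # V[0] = cue state ... V[5] = terminal

for trial in range(500):
    for t in range(n_steps):
        r = 1.0 if t == n_steps - 1 else 0.0   # reward on the final step
        delta = r + gamma * V[t + 1] - V[t]    # prediction error
        V[t] += alpha * delta                  # value update

cue_onset_delta = gamma * V[0] - 0.0           # jump from unpredicted cue
reward_delta = 1.0 + gamma * 0.0 - V[n_steps - 1]
print(f"delta at cue onset ~ {cue_onset_delta:.2f}")   # large and positive
print(f"delta at reward    ~ {reward_delta:.2f}")      # close to zero
```

After learning, the error response has effectively transferred from the (now fully predicted) reward to the unpredicted cue, mirroring the response dynamics described next.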

During learning, dopamine responses gradually transfer to the earliest predictors of a reward, and after this associative pairing is established, response to the reward itself is reduced or absent. Importantly, this means that these response dynamics are fundamentally sensitive to the expected time of reward delivery.

Further to this, if rewards are delivered at different delays, the phasic responses of dopamine neurons to cues signalling these rewards depend on the duration of the delay (as well as on reward probability, magnitude and type). This decreased response to longer reward delays typifies the economic principle of temporal discounting: rewards are devalued as a function of the delay until their receipt. In reflecting the reduced value of delayed rewards, these neural responses demonstrate sensitivity to timing and appear to encode the intervals between cues and prospective (i.e. future) rewards.
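
The studies above do not commit to one discounting function; a hyperbolic form is the one most often fitted to behavioural data, sketched here with an arbitrary discount rate k purely to show how subjective value falls off with delay.

```python
# Hyperbolic temporal discounting: subjective value = amount / (1 + k * delay).
# The discount rate k below is arbitrary, chosen only for illustration.
def hyperbolic_value(amount, delay_s, k=0.1):
    return amount / (1.0 + k * delay_s)

for delay in (0, 2, 5, 10, 30):
    print(f"delay {delay:>2} s -> subjective value {hyperbolic_value(1.0, delay):.2f}")
```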

Dopamine and time perception

In addition to its associations with motivation and reward, dopamine has routinely been acknowledged, largely on pharmacological grounds, to play a significant role in time perception, in what some refer to as the ‘dopamine clock hypothesis’1. Two sets of evidence in particular highlight this.

Firstly, non-human animal studies have pharmacologically manipulated dopamine during time perception tasks. When given dopamine agonists (e.g. methamphetamine) during a peak interval procedure, rats’ response rates peak earlier, as if their internal pacemaker was accelerated. When given dopamine antagonists (e.g. haloperidol), peak responses are later, commensurate with a slowing of the pacemaker2.

Secondly, electrophysiological and optogenetic studies of neurons in the substantia nigra (which produces dopamine and projects to the striatum) have shown that optogenetic activation or suppression of these neurons results in later or earlier timed responses, respectively. These results respectively reflect a slower or faster internal pacemaker, which is the opposite pattern of results to that seen in the pharmacological studies.

The present study

From the background above, we can see that dopamine appears to be involved in both time perception and reward processing. However, dopamine neurons had previously only been shown to encode elapsing and future delays. The study by Fonzi et al. asked whether dopamine signals could also convey information about retrospective, past delays. For example, do dopamine responses to a reward cue encode how much time has already been invested in the pursuit of the reward?

The researchers developed a Pavlovian conditioning paradigm with two reward cues that provided identical information about an upcoming reward, but differed in terms of how much time had elapsed since the previous reward. One cue was only presented after a 15–25 s wait time (“short cue”), while the other was only presented after a 65–75 s wait time (“long cue”). The researchers trained rats with this design while simultaneously using fast-scan cyclic voltammetry to record dopamine concentration in the nucleus accumbens core. If the dopamine responses to the short and long cues did not differ, then it would seem that dopamine activity only encodes prospective information. On the other hand, if the dopamine response to the long cue was larger than that of the short cue, this could be said to reflect the sunk cost of time. Conversely, if the signal to the long cue was decreased relative to that of the short cue, this could be said to reflect the rate of reward3.

The results showed that, within this simple experimental design, dopamine responses to the long cue were decreased relative to the short cue, suggesting that dopamine in the nucleus accumbens encodes reward rate. An alternative possibility was that this differing dopamine response reflected differing expectations about the time of cue delivery – the response to the long cue could be decreased because, as time elapses, it becomes increasingly likely that the cue will be shown (i.e. a change in hazard rate). However, there was no relationship between the dopamine response and the time elapsed within each cue type. Furthermore, when another cohort of rats was trained with only a single cue for both short and long wait conditions, no differences were seen in the cue-evoked dopamine response for different wait times. Both of these results speak against the possibility that the dopamine response reflected the changing likelihood of reward delivery over time.
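
For readers unfamiliar with the hazard-rate argument, here is a tiny numerical sketch assuming, purely for illustration, that the cue is equally likely to arrive anywhere in its 15–25 s window: the momentary probability of the cue appearing, given that it has not yet appeared, rises steeply toward the end of the window.

```python
# Hazard rate for a cue uniformly distributed over 15-25 s (assumed here):
# h(t) = f(t) / (1 - F(t)) = 1 / (25 - t) for 15 <= t < 25.
import numpy as np

t = np.array([15.0, 18.0, 21.0, 24.0, 24.9])       # seconds into the wait
hazard = 1.0 / (25.0 - t)
for ti, hi in zip(t, hazard):
    print(f"t = {ti:4.1f} s  hazard = {hi:.2f} per s")
```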

Notably, the principal finding above relied on a single analysis: the relative difference between the short and long cues. The authors therefore performed a follow-up analysis to determine whether this retrospective temporal information could be encoded when the animals were not able to directly compare cues. To do this, they trained an independent cohort of rats with short trials and long trials in separate sessions. Even in this scenario, the short cue evoked a larger dopamine response than the long cue, which suggested that the encoding of retrospective delays was context-independent.

However, once these rats were exposed to both cues in a mixed session, the response to the short cue was increased. While for most of the above experiments there were no differences in behaviour between the two conditions, this increase in dopamine response to the short cue in this intermixed session was also accompanied by an increase in behavioural responding. This implies that (while elapsed wait times can be learnt independently) the dopaminergic encoding of retrospective delays is not entirely context-independent. It also shows that while there are not generally behavioural differences between the short and long cues, there appear to be changes in behaviour when there are also changes in dopamine response.

In a final analysis, the researchers also investigated the effect of the previous trial type, and the tonic dopamine signals over the waiting period. Firstly, for rats recently switched from the separate sessions to an intermixed session, they found that dopamine responses to short cues were significantly increased when the preceding trial was a long cue trial, compared to when the preceding trial was a short cue trial. Similarly, dopamine levels were increased during the waiting period after long cue trials, relative to short cue trials (but only up to 25 s, before the identity of the current trial was known). From around the point that the identity was known (25 s), conditioned responding decreased when the preceding trial was a long cue trial, relative to when it was a short cue trial. One possible implication is that a decrease in wait-time dopamine could promote increased anticipatory responding. This would be consistent with the electrophysiological and optogenetic evidence that reducing dopamine increases pacemaker rate (see above).

It is important to reiterate that the results in the previous two paragraphs only applied to the experiments where rats were moved from separate training on the short and long cues to an intermixed schedule. These results therefore represent peculiarities in how these animals learnt and adapted to their new context. Overall, the results of the first experiment are the most important here: phasic dopamine responses encode previous durations and appear to constitute a signal of previous reward rate.

This study compellingly demonstrates how even simple experimental designs can lead to novel and valuable findings. The fact that nucleus accumbens dopamine responses encode reward rate suggests a potential mechanism that could normalise value signals for future rewards, and provide contextual information such as the sunk cost of time.

If cue-evoked dopamine responses have to encode durations over a large range of timescales (potentially over 15 orders of magnitude) one interesting future avenue for research would be to describe the mapping between these dopamine responses and the duration of the delays preceding them, in order to precisely understand how durations are represented. More work needs to be done to comprehensively understand the functions of tonic and phasic dopamine and how they relate to perceived and experienced durations, but this study makes substantial progress toward this goal.


Source paper:

Fonzi, K. M., Lefner, M. J., Phillips, P. E. M., & Wanat, M. J. (2017). Dopamine encodes retrospective temporal information in a context-dependent manner. Cell Reports, 20(8), 1774. doi: 10.1016/j.celrep.2017.07.076


  1. It should be noted that much research into the neurobiology of reward and motivation typically focuses on the mesolimbic dopamine pathway. This is in contrast to time perception research, which is more often related to the nigrostriatal pathway (this is also commonly associated with movement). However, these pathways are not independent and the nigrostriatal pathway has also been shown to be critical for reward processing. ↩︎
  2. When both drugs were delivered simultaneously, rats’ peak responses are similar to that of a control condition. ↩︎
  3. Previous research has suggested that longer timescale tonic dopamine activity encodes reward rate. ↩︎

Intended outcome appears longer in time

We live in a complex and dynamic world where our actions sometimes yield the intended (desired) outcomes and sometimes unintended ones. But does our subjective time change as a function of the outcome being intended or unintended? To find the answer, read the recent article by Mukesh Makwana and Prof. Narayanan Srinivasan, published in Scientific Reports.

In a series of five experiments involving a temporal bisection task (Exps 1–4) and a magnitude estimation task (Exp 5), they investigated whether participants perceive the duration of an intended outcome differently from an unintended outcome, and if so, what the underlying mechanisms are.

They reasoned that when a participant intends an outcome, its representation gets activated, and this prior self-activated representation would lead to earlier awareness of the intended outcome compared to an unintended outcome, extending the temporal experience. A similar pre-activation account has recently been used to explain temporal expansion (Press et al., 2014).

To manipulate the intentional nature of the outcome, they used a simple color choice. In each trial, participants indicated which of two colors they wanted to see as a circle, by pressing the key allocated to that color. 250 ms after the intentional key press (Exp 1), a circle of either the intended color (50% of trials) or the unintended color (50% of trials) was presented, with its duration randomly drawn from nine levels (300 ms to 700 ms in steps of 50 ms). This was done to reduce or eliminate sensory-motor prediction between the key press and the color of the outcome circle, so that the effect of intention on the perceived duration of the outcome is not confounded with probability-based prediction. Irrespective of the intentional nature of the outcome, participants reported whether they perceived the duration of the outcome as closer to the short (300 ms) or the long (700 ms) anchor duration learnt in a training phase before the main experiment. Each participant’s data were sorted into two conditions: trials in which they got the intended outcome (intended condition) and trials in which they did not (unintended condition). Psychometric (Weibull) functions were fitted for these two conditions and bisection points were calculated. The bisection point, or point of subjective equality, indexes shifts in temporal perception: a lower bisection point in one condition indicates temporal expansion relative to a condition with a higher bisection point. Results of Exp 1 showed that participants perceived the duration of the intended outcome as longer than that of the unintended outcome.
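
As a rough illustration of this analysis pipeline, here is a minimal sketch that fits a psychometric function to fabricated proportion-“long” data and reads off the bisection point. The numbers are invented, and a cumulative Gaussian is used for simplicity where the authors fitted Weibull functions.

```python
# Sketch of a temporal bisection analysis: fit a psychometric function to the
# proportion of "long" responses at each probe duration and take the duration
# giving 50% "long" responses as the bisection point (PSE).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

durations = np.arange(300, 701, 50)                    # probe durations (ms)
p_long_intended   = np.array([.05, .10, .22, .40, .58, .74, .86, .94, .97])
p_long_unintended = np.array([.03, .07, .15, .30, .48, .66, .81, .91, .96])

def psychometric(x, pse, slope):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=pse, scale=slope)

for label, p in [("intended", p_long_intended), ("unintended", p_long_unintended)]:
    (pse, slope), _ = curve_fit(psychometric, durations, p, p0=[500, 80])
    print(f"{label:10s} bisection point ~ {pse:.0f} ms")

# A lower bisection point in the intended condition means less physical
# duration was needed to be judged "long", i.e. those outcomes felt longer.
```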

They also studied whether increasing the delay between the intentional action and its outcome affects the intention-induced temporal expansion observed in Exp 1. Two further experiments were therefore performed with increased delays between action and outcome: 500 ms in Exp 2 and 1000 ms in Exp 3. The stimuli, apparatus and procedure were otherwise identical to Exp 1, except that Exp 2 used yellow and blue circles instead of red and green. Results showed that the intention-induced temporal expansion was still observed at a 500 ms delay but vanished when the delay increased to 1000 ms, suggesting that the self-activated representation fades away by around 1000 ms after the intentional action.

Exp 4 was performed to establish that intentional activation of the representation is necessary for the above temporal expansion effect, and that mere priming or instruction-based action is not sufficient. In Exp 4, instead of intending and selecting which color circle they wanted to see, participants were shown a color word (RED or GREEN) on the screen in each trial and simply pressed the corresponding key. The rest of the procedure, stimuli and analysis were similar to Exp 1. Results showed no difference in duration perception between the word-congruent and word-incongruent conditions, underlining the importance of intention in the above effect.

Lastly, Exp 5 used a magnitude estimation paradigm to investigate whether intention affects time perception by increasing the pacemaker speed or by affecting the switch or gating component of the “internal clock model”, the most influential classical model of human timing behaviour. If a factor influences pacemaker speed, then the difference between the two conditions should grow as the magnitude of the actual duration increases, giving a typical “slope effect”. If, instead, the switch or gating component is affected, no slope effect is observed. Results showed no slope effect, indicating that intention might influence the switch or gating mechanism.
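
The logic of the slope effect can be shown with a toy calculation (the 10% pacemaker speed-up and 30 ms switch change below are arbitrary numbers, not estimates from the study): a pacemaker change scales perceived duration multiplicatively, so the between-condition difference grows with duration, whereas a switch change adds a roughly constant offset.

```python
# Toy illustration of the "slope effect" logic in pacemaker-accumulator terms.
durations = [300, 450, 600, 750]      # physical durations (ms)

pacemaker_gain = 1.10                 # hypothetical 10% faster pacemaker
switch_offset = 30                    # hypothetical 30 ms earlier switch closure

for d in durations:
    faster_clock = d * pacemaker_gain     # multiplicative: difference grows with d
    earlier_switch = d + switch_offset    # additive: difference stays constant
    print(f"{d} ms -> pacemaker diff {faster_clock - d:5.1f} ms, "
          f"switch diff {earlier_switch - d:5.1f} ms")
```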

In conclusion, the series of experiments in this study provides convincing evidence that intention affects temporal perception: participants perceive an intended outcome to be longer in duration than an unintended outcome. Moreover, this intention-induced temporal expansion depends on the temporal contiguity between the action and the outcome, vanishing at a 1000 ms action–outcome delay. Furthermore, in terms of the internal clock, the effect is most probably not due to an increase in pacemaker speed; the opening or closing of the switch seems the more probable mechanism. As humans are intentional agents and intentions form a critical part of daily life, more studies investigating the effects of intention on perception in general should be pursued.

 

Reference:

 

  1. Press, C., Berlot, E., Bird, G., Ivry, R., & Cook, R. (2014). Moving time: The influence of action on duration perception. Journal of Experimental Psychology: General, 143(5), 1787.

 

Source article: Makwana, M., & Srinivasan, N. (2017). Intended outcome expands in time. Scientific Reports, 7(6305). doi: 10.1038/s41598-017-05803-1

 

—Mukesh Makwana (mukesh@cbcs.ac.in),

Doctoral student,

Centre of Behavioural and Cognitive Sciences (CBCS), India.

 

Summer 2017 Conference Season

Finally back in London, ON, after a slightly extreme summer conference tour: The Neurosciences and Music IV in Boston; preceded by our homegrown satellite, Neural Entrainment and Rhythm Dynamics (NERD, credit Ed Large for name/acronym combo); cuttingEEG in Glasgow; and the Rhythm Perception and Production Workshop (RPPW) in Birmingham [had a little break in between those last two to drive around Scotland with my dad and brother]. All of it was extremely inspiring, but of course too much info for any one human to retain, so I’ll try to summarize what I felt were some of the highlights.

I’m of course extremely biased, but I loved every minute of NERD. It was a full-on day of fantastic 7-min talks on rhythm and entrainment punctuated by thought-provoking discussion periods. There were two major things I took away from NERD (I’ll only talk about one in any detail). First, we’re not all speaking the same language a lot of the time. That’s of course a problem that has been and will be around forever to some extent, and it’s also OK. For example, when I talk about “rhythm” or “beat”, that’s not quite the same thing someone else is talking about when they use the same words. A particularly sticky word at the moment is “entrainment”, which seems to mean a lot of things to a lot of different people, despite being very well defined in the math/physics domains. Even things like “beat salience” or “beat strength” are contested terms, making them hard things to study and talk about. The important thing, I think, is that we make sure we’re operationally defining terms in the papers we’re publishing and the talks we’re giving, so that even if we’re using terms differently, we can talk about the same phenomena. It sounds obvious, but this is done surprisingly infrequently, including by myself I’m sure. The second thing I took away, which I won’t discuss here and which very well may be the subject of a future blog, is that rhythmic/temporal expectation effects on behavior are harder to observe than one might think. More on that later.

There were a million interesting talks and posters at both NeuroMusic and RPPW, but here I’ll focus on timing-related issues. Even though I’ve heard various bits of the data before, I was struck (again) by the idea and accumulating evidence that synchrony is social. We need to be synchronized with each other to successfully navigate conversational turn-taking. Toddlers are more likely to exhibit pro-social helping behaviors towards adults that they have moved in synchrony with compared to someone they have moved out-of-sync with. Babies synchronize eye contact with a singer to the beat of the song the singer is producing.

A relatively new focus was on synchronization between brains. New ways to analyze electrophysiological data (using “intersubject synchronization” or “intersubject correlation”) allow us to assess interpersonal neural synchrony. Traditionally, measures of intersubject synchronization don’t necessarily focus on social situations, but they nonetheless show that individual brains are more synchronized with each other (i.e., respond more similarly to the same stimulus) when individuals are more engaged with whatever they are watching or listening to. We presented EEG data that we collected from 60 participants, 20 at a time, in slightly different social situations while they viewed/listened to a concert. But I’m not here to self-promote. One really interesting twist on this idea was to use noninvasive brain stimulation to force pairs of brains to be either in sync or out of sync with each other. Despite the situation not actually being social (the individuals making up each pair were not able to see or interact with each other, but did have auditory information about the other person’s behavior), pairs of participants synchronized their tapping better with each other when their brains were in sync than when their brains were out of sync. The moral of the story is that better neural synchronization leads to better behavioral synchronization, which could in turn lead to stronger affiliation in the social domain.

In general, rhythm and timing were very present topics at the conferences I attended this summer. And it seems like the more we know about how brains are actually involved in behavioral synchrony, the better we stand to understand how synchrony is involved in social situations. I look very much forward to seeing how this research evolves over the next few years, and to hopefully being a part of it myself.

Olfactory-Visual Sensory Integration Twists Time Perception

During everyday interactions, our senses are bombarded with different kinds of sensory information, which are processed by dedicated sensory systems operating at different temporal sampling scales to form a coherent percept. The question is whether information from one modality (say, olfaction) influences the temporal perception of a stimulus from another modality (say, vision). Although previous studies have investigated the effect of auditory stimuli on the temporal perception of visual stimuli [1, 2], evidence for an effect of olfactory stimuli on the temporal perception of visual stimuli was lacking. A recent study published in Cerebral Cortex by Prof. Wen Zhou and her lab members (Dr. Bin Zhou, Guo Feng, and Wei Chen) fills this gap and addresses whether odor influences visual temporal sampling and duration perception.

To study the effect of odor on visual temporal sampling, they used a two-alternative forced-choice chromatic critical flicker fusion (CFF) task with two isoluminant complementary-color images of a banana or an apple alternating at different frequencies (15, 20, 22.5 and 25 Hz in different blocks) for a duration of 400 ms (see figure 1 in the original paper). Each trial contained two 400 ms flickering intervals, each flanked by 200 ms masks and separated by a 600 ms blank. Only one of the two intervals contained the flickering fruit image (either banana or apple), and participants reported which interval contained it. Along with the visual stimuli, in Exp 1 (N=16) participants were also exposed to two different odor stimuli in separate blocks (banana-like: amyl acetate, 0.02% v/v in propylene glycol; apple-like: apple flavor, Givaudan). The idea was to check whether odor congruency influenced the temporal sampling (CFF threshold) for the flickering banana or apple images. Results revealed that participants’ object detection increased significantly when the odor and the image content matched, even though the task did not demand any explicit object discrimination or identification, suggesting that sensory congruency between olfactory and visual inputs boosted the corresponding object’s visibility around the CFF. A further analysis fitting psychometric functions for the two odor conditions, with frequency on the x-axis and accuracy on the y-axis, suggested that olfactory-visual congruency also facilitated visual temporal sampling.

To establish that the above congruency effect is specific to odor and not just semantic information (or context) provided by the odor, they performed two control experiments. In the first control experiment (Exp2A, N=16), participants performed exactly the same task as Exp1 but instead of actual odors, odorless purified water was used and was suggested to participants as diluted banana or apple odor. In the second control experiment (Exp2B, N=16), semantic textual labels, “banana odor” or “apple odor”, were displayed at the center of the screen. In both the control experiments, they did not observe the odor-visual congruency effect, suggesting that presence of odor is important for such sensory integration.

The next question concerned the neural correlates of the odor-visual congruency effect, and in particular at what level of visual processing the odor starts to modulate it. For this, they performed an EEG experiment (Exp 3, N=18) using the same stimuli as in Exp 1 but with a slightly modified task. In the modified task, only one flickering interval of 400 ms was presented, flanked by 100 ms red-green noise masks, and participants reported whether the object was present or absent in that trial. All objects (apple or banana images) were presented at a subliminal flicker frequency: for nine participants a frequency of 22.5 Hz was used, whereas for the other nine 25 Hz was used. Time-frequency analysis revealed that the maximum congruency-induced enhancement (i.e. the greatest normalized power difference) was observed at electrodes over right temporal regions around 150-300 ms post stimulus onset. The difference in this time window suggests that, under odor-visual congruency, odor starts influencing vision at the stage of object-level processing. Source localization also indicated activation of the right temporal region, which is known to be involved in object-level representations. Taken together, this evidence strongly suggests that odor influences the corresponding visual object at the stage of object-level processing.

From the above experiments, it was evident that odor-visual congruency modulates visual temporal sampling, so the next logical question was whether it also influences the perceived duration of visual stimuli. To answer this question, in Exp 4 (N=24) they used a two-alternative forced-choice (2AFC) duration comparison task, in which one image (either apple or banana) was the standard (500 ms) and the other image was the test (with durations of 300, 350, 400, 450, 500, 550, 600, 650 or 700 ms). Participants reported which of the two images appeared longer in duration. For half of the participants (N=12) the apple image was the standard and the banana image the comparison, whereas for the other half (N=12) the banana image served as the standard and the apple image as the comparison. Participants in both groups were exposed to banana-like or apple-like odor in separate blocks. The point of subjective equality (PSE) and the difference limen (DL) were measured for both odor conditions; the PSE is a measure of perceived duration, whereas the DL is a measure of temporal sensitivity. A two-way mixed ANOVA on the PSE values, with odor (banana-like, apple-like) as a within-subjects factor and comparison image (banana, apple) as a between-subjects factor, showed a significant interaction. Post hoc analysis with Bonferroni correction revealed that participants perceived the duration of the image to be longer when the image content and the odor were congruent than when they were incongruent. A similar analysis of the DL showed no significant main effects or interaction, suggesting that odor modulates only the perceived duration and not temporal sensitivity.
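
As a rough sketch of how PSE and DL are typically extracted in a 2AFC comparison task like this one, here is a minimal example; the response proportions are invented and a cumulative Gaussian fit is assumed.

```python
# Sketch of PSE and DL extraction from 2AFC duration-comparison data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

test_ms = np.array([300, 350, 400, 450, 500, 550, 600, 650, 700])
p_test_longer = np.array([.04, .09, .18, .33, .52, .70, .84, .93, .97])

def cgauss(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cgauss, test_ms, p_test_longer, p0=[500, 80])
pse = mu                                                       # 50% point
dl = (norm.ppf(.75, mu, sigma) - norm.ppf(.25, mu, sigma)) / 2  # half the 25-75% spread
print(f"PSE ~ {pse:.0f} ms, DL ~ {dl:.0f} ms")

# A shift in PSE between odor conditions indexes a change in perceived
# duration; the DL indexes temporal sensitivity (precision).
```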

Again, to confirm that the above congruency effect on perceived duration is due to the odor itself and not just the semantic information (or context) it provides, they performed two control experiments (Exp 5A and Exp 5B), analogous to Exp 2A and Exp 2B. In Exp 5A (N=24), instead of an odor, odorless purified water presented as diluted banana-like or apple-like odor was used, whereas in Exp 5B (N=24) textual labels (“banana odor” or “apple odor”) were presented on the screen. Neither the purified water nor the textual labels produced the odor-visual congruency effect on perceived duration seen in Exp 4, underscoring the importance of the odor itself in the odor-visual sensory integration that modulates visual temporal perception.

In conclusion, this study provides convincing evidence for an effect of odor on visual time perception, including temporal sampling and perceived duration. In future, it would be interesting to investigate this effect with other time perception paradigms, such as magnitude estimation, and to measure the slope effect, which might help determine whether odor influences the pacemaker speed or the switch/gating mechanism in the context of the “internal clock model”. Moreover, it would be interesting to investigate whether such an odor-visual congruency effect influences neural correlates of time perception such as the CNV (contingent negative variation) component.

References:

1. Romei, V., De Haas, B., Mok, R. M., & Driver, J. (2011). Auditory stimulus timing influences perceived duration of co-occurring visual stimuli. Frontiers in Psychology, 2.

2. Yuasa, K., & Yotsumoto, Y. (2015). Opposite distortions in interval timing perception for visual and auditory stimuli with temporal modulations. PLoS ONE, 10(8), e0135646.

Source article: Zhou, B., Feng, G., Chen, W., & Zhou, W. (2017). Olfaction Warps Visual Time Perception. Cerebral Cortex, 1-11.

—Mukesh Makwana (mukesh@cbcs.ac.in),
Doctoral student,
Centre of Behavioural and Cognitive Sciences (CBCS), India.