Timing & Time Perception Special Issue on “Temporal Illusions”

Hosted by Fuat Balcı & Argiro Vatakis 

Decades of research on interval timing have primarily focused on the psychophysical properties of this fundamental function, typically in consideration of veridical timing behavior. In a similar vein, generative models of interval timing mostly focus on the processing dynamics of the internal stopwatch in its default mode. Both approaches have largely overlooked the malleability of perceived time by exogenous factors such as stimulus intensity and endogenous factors such as physiological arousal. These very relations could help researchers better understand the representational constitution of subjective time and the processing dynamics of the internal stopwatch. This special issue aims to cover a wide range of empirical and theoretical work on the effects of different factors (e.g., stimulus features, physiological states, emotional states, drugs) on timing and time perception in humans and other animals.

Submission procedure:

1. Full paper submission by November 1st, 2019.

Instructions for submission: The submission website is located at http://www.editorialmanager.com/timebrill/. To ensure that all manuscripts are correctly identified for inclusion in the special issue, it is important to select “Special Issue: Temporal Illusions” when you reach the “Article Type” step in the submission process. For more details on the format that must be followed in preparing your manuscript, see here.


2. A standard peer review/revision process will be followed.

3. Final decisions are expected by May 15th, 2020.

PhD position in audio-visual synchrony perception at TU Eindhoven

4-year PhD position in audio-visual synchrony perception at TU Eindhoven, The Netherlands

Within the framework of the EU-funded Marie Skłodowska-Curie Initial Training Network VRACE (Virtual Reality Audio for Cyber Environments), a 4-year PhD position is available in the Human-Technology Interaction group of TU Eindhoven, The Netherlands, under the supervision of Armin Kohlrausch.

Applications are invited (and only possible) via the web portal of the TU/e. More detailed information on the position and details on how to apply can be found at: https://jobs.tue.nl/en/vacancy/phd-investigating-audiovisual-synchrony-perception-in-virtual-environments-513162.html

Important: According to the eligibility rules of the EU, specific mobility requirements apply to this position, which are described in detail on the job site. Please check before applying whether you fulfill these requirements. 

The application deadline is April 15, 2019; the position will preferably start as soon as possible after the end of the selection process.

Meta-analysis of neuroimaging during passive music listening: Motor network contributions to timing perception

We often learn about what the brain is doing by observing what the body is doing when the brain is focused on a task. This is true for investigations into rhythmic timing perception. Many insights into timing have resulted from careful observation of sensorimotor synchronization with auditory rhythms. This draws from the work of Bruno Repp suggesting that perception of auditory rhythms relies on covert action: that synchronizing with a sequence is not so different from simply perceiving a sequence without moving along with it.

More specifically, in order to synchronize a finger tap, or any other body movement, with an auditory stream, some timing prediction is necessary to complete movement planning, effector assembly, and execution in time with the auditory beat rather than several milliseconds too late. If we must plan a synchronized movement in advance, and there is some automaticity to this planning when we listen to auditory rhythms, then it is reasonable to ask whether we also perform some degree of motor planning every time we perceive a rhythm, even if we do not move any body part in time with it.
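The timing arithmetic behind this argument can be sketched in a few lines. This is a minimal illustration, not a model from the literature, and the interval and delay values below are illustrative assumptions: a purely reactive tapper starts its movement only after hearing each beat, so every tap lands one motor delay late, whereas a predictive tapper uses the learned inter-onset interval to launch each movement ahead of the upcoming beat.

```python
# Toy comparison of reactive vs. predictive tapping to a metronome.
# All numbers are hypothetical.

IOI = 0.5           # metronome inter-onset interval, in seconds (assumed)
MOTOR_DELAY = 0.15  # planning + execution delay, in seconds (assumed)
beats = [i * IOI for i in range(1, 9)]  # metronome onset times

# Reactive strategy: movement begins only when the beat is heard,
# so each tap arrives one motor delay after its beat.
reactive_taps = [b + MOTOR_DELAY for b in beats]
reactive_async = [t - b for t, b in zip(reactive_taps, beats)]   # +150 ms each

# Predictive strategy: movement is launched MOTOR_DELAY before the
# expected beat, so the tap coincides with the beat itself.
predictive_launch = [b - MOTOR_DELAY for b in beats]
predictive_taps = [launch + MOTOR_DELAY for launch in predictive_launch]
predictive_async = [t - b for t, b in zip(predictive_taps, beats)]  # ~0 ms each
```

The point of the sketch is simply that zero asynchrony is unreachable without an internal estimate of when the next beat will occur, which is why synchronization implies prediction.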

The evidence suggests that we do, or at least that our motor systems are actively engaged for some purpose while we perceive rhythms without synchronizing. Brain imaging during rhythm perception experiments consistently shows activation in areas of the brain known to be involved in movement of the body, including primary motor cortex, premotor cortices, the basal ganglia, the supplementary motor area, and the cerebellum. The details of this covert motor activity are still being investigated, but some theories propose that it plays an essential role in rhythmic timing perception, a possibility many music cognition researchers find intriguing.

But first, what does the neuroimaging literature actually say about which motor networks are active, and which rhythm perception tasks elicit this covert action? Studies use musical stimuli that vary on a number of features and give subjects different instructions on how to attend to or experience the stimuli, and these differences induce varying emotional states, arousal, familiarity, attention, and memory. Across all this variability, however, motor networks still robustly present themselves as players in rhythm perception. Interestingly, the stimulus variability shows up less in whether we see covert action and more in which motor networks are covertly activated.

In a recent meta-analysis of neuroimaging studies on passive musical rhythm perception, Chelsea Gordon, Patrice Cobb and Ramesh Balasubramaniam (2018) asked which covert motor activations are most reliable and consistent across studies. They used Activation Likelihood Estimation (ALE; Turkeltaub et al., 2002), derived from peak activations reported in Talairach or MNI space, to compare coordinates across all PET and fMRI studies with passive music listening conditions in healthy human subjects. Their sample included 42 experiments that met the inclusion criteria. As expected, the ALE meta-analysis revealed clear and consistent covert motor activations during passive music listening: in premotor cortex (bilaterally), right primary motor cortex, and a region of left cerebellum. Premotor activations could not be consistently localized to a dorsal or ventral subregion of premotor cortex; across studies they appeared dorsally, ventrally, or in both. Right primary motor activations might have been excitatory or inhibitory, and were stronger in studies that asked subjects to anticipate tapping to a beat in subsequent trials or to hum subvocally. The premotor and left cerebellum activations were the most consistent across studies, supporting predictive theories of covert motor activity during passive music listening.
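The core idea behind ALE can be sketched in a toy form. This is not the published implementation (later revisions of the algorithm scale the Gaussian kernel by each study's sample size and test scores against a null distribution); the grid size, kernel width, and peak coordinates below are arbitrary assumptions chosen only to show the mechanics: each reported peak is blurred into a 3D Gaussian "modeled activation" map, and per-experiment maps are combined voxel-wise as a probabilistic union, so voxels where experiments converge receive high scores.

```python
import numpy as np

GRID = 20    # voxels per axis of a toy brain volume (assumed)
SIGMA = 2.0  # Gaussian spread in voxels (assumed)

def ma_map(peaks):
    """Modeled activation map for one experiment's list of peak voxels."""
    ax = np.arange(GRID)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    ma = np.zeros((GRID, GRID, GRID))
    for (px, py, pz) in peaks:
        d2 = (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2
        g = np.exp(-d2 / (2 * SIGMA ** 2))
        ma = np.maximum(ma, g)  # nearby peaks within a study don't double-count
    return ma

def ale(ma_maps):
    """Voxel-wise union across experiments: ALE = 1 - prod(1 - MA_i)."""
    out = np.ones((GRID, GRID, GRID))
    for ma in ma_maps:
        out *= 1.0 - ma
    return 1.0 - out

# Two toy "experiments" reporting peaks near the same location:
exp1 = ma_map([(10, 10, 10)])
exp2 = ma_map([(11, 10, 10)])
scores = ale([exp1, exp2])
# Voxels where both experiments' peaks converge score higher than
# voxels near neither peak, which is the convergence ALE is built to detect.
```

This also makes the file-drawer concern discussed below concrete: peaks that were found but never reported simply never enter any modeled activation map, so the union can only reflect what was published.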

One surprising aspect of these results is that the ALE meta-analysis did not find consistent activation in the SMA, pre-SMA, or basal ganglia. The authors suggest that basal ganglia-thalamocortical circuits may be involved specifically in subjects with musical training, or only in tasks that explicitly instruct subjects to attend to the rhythmic timing of the stimuli rather than to listen passively.

An important concern Gordon and colleagues raise in their discussion is how publication bias contributes to ALE results. As Acar et al. (2018) also describe, unpublished data deemed uninteresting can bias meta-analytic techniques (the file drawer problem), including the ALE measure. Gordon et al. attempted to account for this by contacting the authors of the analyzed manuscripts to request the full dataset from each study for use in their ALE analysis. However, many authors did not provide data on unreported brain activations, limiting the number of exploratory contrasts that could be performed and leaving a possible influence of publication bias on the ALE results.

ALE is a powerful tool for large-scale meta-analyses of neuroimaging studies, but, as with any meta-analytic technique applied to published results, it can be susceptible to the pitfalls of the file drawer problem. That being said, covert motor activity during passive music listening presents consistently across studies, even with considerable stimulus variability, supporting the view that timing prediction draws on premotor and cerebellar networks.

Jessica M. Ross (jross4@bidmc.harvard.edu)


Gordon, C.L., Cobb, P.R., & Balasubramaniam, R. (2018). Recruitment of the motor system during music listening: An ALE meta-analysis of fMRI data. PLoS ONE, 13(11), e0207213. https://doi.org/10.1371/journal.pone.0207213

Acar, F., Seurinck, R., Eickhoff, S.B., & Moerkerke, B. (2018). Assessing robustness against potential publication bias in Activation Likelihood Estimation (ALE) meta-analyses for fMRI. PLoS ONE, 13(11), e0208177. https://doi.org/10.1371/journal.pone.0208177

Turkeltaub, P.E., Eden, G.F., Jones, K.M., & Zeffiro, T.A. (2002). Meta-analysis of the functional neuroanatomy of single-word reading: Method and validation. NeuroImage, 16, 765–780. https://doi.org/10.1006/nimg.2002.1131

How humans compute estimates of sub-second visual duration

“Heron and colleagues sought to address the question of how humans compute estimates of sub-second visual duration. Historical attempts to answer this question have taken inspiration from the observation that different brain areas are functionally specialised for the processing of specific stimulus attributes such as spatial location. This led to the dominance of ‘dedicated’ models of duration perception: central, specialised mechanisms whose primary function is duration encoding.

Recently, these models have been challenged by the emergence of ‘distributed’ models which posit the localised encoding of duration alongside other, non-temporal stimulus features. This raises the possibility that some neurons might perform ‘double duty’ by (for example) encoding information about spatial location and temporal extent. However, given the potentially vast number of non-temporal stimulus features implicated, isolating those functionally tied to duration encoding represents a challenge.

Heron and colleagues attempted to quantify contributions to duration processing from three different strata within the visual processing hierarchy: monocular, depth-selective, and depth-invariant. They began by isolating the duration information presented to the left and right monocular channels. When this information induced duration aftereffects, strong aftereffects were also observed in the non-adapted eye; nevertheless, a small but significant amount of adaptation did not show interocular transfer. Next, they used a novel class of stimuli to present durations defined by the presence or absence of retinal disparity information. These stimuli provide the first demonstration of duration perception under conditions where stimuli are visible only to mechanisms that integrate spatial information from both eyes.

They found that robust duration aftereffects could be generated by viewing disparity-defined durations, revealing duration-selective mechanisms entirely independent of monocular processing. Importantly, these aftereffects showed only partial selectivity for the depth plane of the duration information. For example, adaptation to durations defined by crossed disparity information followed by testing with uncrossed disparity-defined stimuli produced aftereffects that were significantly greater than zero but significantly smaller than those in conditions where adapting and test durations were defined by the same type of retinal disparity.

Heron and colleagues' findings provide clear support for duration selectivity at multiple stages of the visual hierarchy. They suggest that duration processing may share similarities with the well-documented 'serial cascade' type of processing in the spatial domain. In this scenario, downstream duration-encoding mechanisms apply cumulative adaptation to effects inherited from their upstream counterparts.”

Blog post by James Heron, University of Bradford

Source article:
Heron, J., Fulcher, C., Collins, H., Whitaker, D., & Roach, N.W. (2019). Adaptation reveals multi-stage coding of visual duration. Scientific Reports, 9, 3016.

Publisher’s link: https://www.nature.com/articles/s41598-018-37614-3

Research internship: Vocal learning and interactive communication in harbour seal pups

A 3-month Research Intern position is open for application at the Sealcentre Pieterburen, The Netherlands. The successful candidate will join the Seal Sounds team and actively perform bioacoustics research with newborn harbour seal pups (usually 1-3 weeks old) under the supervision of Dr. Andrea Ravignani (https://ravignani.wordpress.com). The project will investigate how harbour seal pups time their calls interactively with conspecifics and learn sounds from each other, as part of a larger Belgian/EU project on pinniped communication. The candidate will receive training in pinniped behaviour, bioacoustics, experimental design, and related skills. Apart from hands-on research with the pups, and depending on the candidate's interests, there will be opportunities to work on other research projects and to help as an assistant seal nurse in the daily care of the pups.

Qualifications: A Bachelor's or Master's degree in any of the following (or related) disciplines: Biology, Psychology, Zoology, Animal Behaviour, Marine Biology, Neuroscience, Psychobiology, Cognitive Science, Speech and Language Sciences, or Sound Engineering.

Necessary skills: Crucially, the candidate must (1) have a working command of English, (2) be enthusiastic about research in animal behavior and communication, and (3) be willing to work hard. Previous experience with animals, audio recordings and/or playback experiments is not needed but appreciated.

Location & accommodation: The Sealcentre (www.zeehondencentrum.nl) is located in Pieterburen, which is a small town in a natural area of the Netherlands. At any point in time, it hosts tens of international students, young volunteers, and veterinarians, all interested in seals. The university town of Groningen (200,000 inhabitants) is less than an hour away. If needed, cheap onsite accommodation with other students and volunteers can be arranged.

Financial matters: This is an unpaid internship.

Starting date: April 28th, 2019.
Application deadline: February 25th, 2019.

How to apply: Applicants should send a brief cover letter (max 500 words explaining their reasons to apply) and a CV, combined into a single PDF, to Andrea Ravignani (andrea.ravignani@gmail.com) with the subject “Research Internship Seal Sounds”.

Andrea Ravignani
Research Dept., Sealcentre Pieterburen
AI-Lab, Vrije Universiteit Brussel