Graduate School of Science, Kyoto University
Nonlinear Dynamics Group
Seminar of Nonlinear Dynamics Group



2019/01/22 15:00 (Faculty of Science Bldg. 5 #434)
speaker Keiko Ono (International Christian University, Social Science Research Institute)
title Incidence of natural population decrease among Japanese municipalities (1980-2016) and its contributing factors
abstract In many post-demographic-transition societies, natural population decrease, where deaths exceed births, is observed. Prior research indicates that such decrease is geographically heterogeneous and starts at the sub-national level. Japan's total population recorded its first decline in 2009; however, a significant number of municipalities had already experienced natural population decrease for a few decades before that. This talk will first focus on a preliminary analysis of longitudinal population data for the 1,741 Japanese municipalities. Next, it considers the effects of demographic factors identified in the literature as contributing to natural population decrease: population aging, the proportion of women of child-bearing age, and fertility.


2018/11/30 15:00 (Faculty of Science Bldg. 5 #412)
speaker Shinsuke Koyama (The Institute of Statistical Mathematics)
title Modeling event cascades using networks of additive count sequences
abstract We propose a statistical model for networks of event count sequences built on a cascade structure. We assume that each event triggers successor events, whose counts follow additive probability distributions; the ensemble of counts is given by their superposition. These assumptions allow the marginal distribution of count sequences and the conditional distribution of event cascades to take analytic forms. We present our model framework using Poisson and negative binomial distributions as the building blocks. Based on this formulation, we describe a statistical method for estimating the model parameters and event cascades from the observed count sequences.
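As a rough illustration of the cascade structure described in the abstract (the parameters and code are my own sketch, not the speaker's model): each event at time t spawns a Poisson number of successor events at t+1, and the observed count is the superposition of immigrant events and offspring. Because Poisson variables are additive, the superposed counts remain Poisson given the parent counts, which is what makes the marginal and conditional distributions tractable.

```python
# Minimal sketch (hypothetical parameters): a count sequence built from
# immigrant events plus Poisson offspring of the previous step's events.
import random

random.seed(0)

def poisson(lam):
    # Knuth's algorithm for sampling a Poisson random variable
    L, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_cascade(T=50, base_rate=2.0, branching=0.8):
    """y[t] = immigrant events + offspring of the y[t-1] events at t-1."""
    y = [poisson(base_rate)]
    for t in range(1, T):
        offspring = sum(poisson(branching) for _ in range(y[t - 1]))
        y.append(poisson(base_rate) + offspring)
    return y

counts = simulate_cascade()
```

With branching ratio 0.8 < 1 the cascade is subcritical, so the counts fluctuate around the stationary mean base_rate / (1 - branching).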

2018/10/23 13:30 (Faculty of Science Bldg. 5 #412)
speaker Dr. Massimiliano Tamborrino (Johannes Kepler University Linz)
title Statistical inference for multi-timescale adaptive threshold and neural mass models via Approximate Bayesian Computation
abstract In neuroscience it is of primary interest to decode or reconstruct an unobserved signal from partially observed information. From a mathematical point of view, this corresponds to estimating the model parameters of an unobserved coordinate based on discrete observations of one or more other coordinates. Quite often, due to the complexity of the models, the underlying likelihood is unknown or intractable, requiring new ad-hoc mathematical and statistical techniques to handle it. Here I focus on likelihood-free methods, in particular on the Approximate Bayesian Computation (ABC) method, and I illustrate it in the framework of stochastic modelling of single-neuron and neural-network dynamics.
First, I consider the multi-timescale adaptive threshold (MAT) model, a bivariate stochastic process that can be derived from the detailed Hodgkin-Huxley model, can accurately predict spike times, and incorporates the effects of slow K+ currents, which usually mediate adaptation [1]. When performing statistical inference on the underlying parameters of the threshold, four difficulties arise: neither of the two model components is directly observed; the process is not of hidden-Markov-model type; the underlying likelihood is unknown/intractable; and consecutive spikes are neither independent nor identically distributed. I show how to estimate the threshold parameters from extra-cellular recordings alone, i.e. when only the spike times are observed [2].
Second, I consider the stochastic version of the Jansen and Rit neural mass model (JRNMM) [3], a six-dimensional stochastic process describing the average electrical activity of a whole population of neurons that has been proposed to model and reproduce EEG data. We are interested in estimating two parameters, relevant for the description of $\alpha$-rhythmic and epileptic behaviour, from partial observations of the model. Indeed, from an experimental point of view, the process X(t) is only partially observed through the EEG-related process Y(t)=X_1(t)-X_2(t), t in [0,T], making the statistical inference more challenging. We introduce a Structure-Preserving ABC method that takes advantage of the dynamical and structural properties of the model, and validate it on both Monte Carlo simulated data and real EEG data [4].

References:
[1] R. Kobayashi, K. Kitano (2016). Impact of slow K+ currents on spike generation can be described by an adaptive threshold model. J. Comput. Neurosci. 40(3), 347-362.
[2] M. Tamborrino, A. Samson, U. Picchini. Approximate Bayesian Computation for the inference of non-renewal point processes arising from neuroscience, in preparation.
[3] M. Ableidinger, E. Buckwar, H. Hinterleitner (2017). A stochastic version of the Jansen and Rit neural mass model: analysis and numerics. J. Math. Neurosc. 7(8).
[4] E. Buckwar, M. Tamborrino, I. Tubikanec. Parameter inference through Structure-Preserving Approximate Bayesian Computation for stochastic Hamiltonian equations, in preparation.
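The rejection-ABC idea underlying this talk can be sketched on a toy problem (this is plain rejection ABC with an invented exponential model and summary statistic of my choosing, not the Structure-Preserving ABC of the talk): draw parameters from the prior, simulate data, and keep draws whose summary statistic lands within a tolerance of the observed one.

```python
# Rejection-ABC sketch (toy model, hypothetical prior and tolerance).
import random

random.seed(1)

def simulate(theta, n=200):
    # toy model: i.i.d. exponential "inter-spike intervals" with rate theta
    return [random.expovariate(theta) for _ in range(n)]

def summary(data):
    return sum(data) / len(data)  # sample mean as the summary statistic

observed = simulate(2.0)          # "data" generated with true rate 2.0
s_obs = summary(observed)

accepted = []
for _ in range(5000):
    theta = random.uniform(0.1, 5.0)  # uniform prior over the rate
    if abs(summary(simulate(theta)) - s_obs) < 0.05:
        accepted.append(theta)        # keep draws close to the observation

posterior_mean = sum(accepted) / len(accepted)
```

The accepted draws approximate the posterior; shrinking the tolerance trades acceptance rate for accuracy, which is why the choice of summary statistics (the "structure-preserving" part in the talk) matters so much.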

2018/10/15 13:30 (Faculty of Science Bldg. 5 #413)
speaker Dmytro Velychko (Philipps University of Marburg, Germany)
title Sensorimotor control with delayed feedback
abstract Information processing in the brain is subject to different temporal delays at every stage: processing the sensory information, estimating the state of the environment, computing the control signal, and executing it with the plant (the body). The cortical sensorimotor feedback loop usually takes at least 60-70 ms to start correcting for an error; if the feedback is only visual, it takes even longer. These inevitable temporal delays add complexity to the optimal feedback control framework, and the brain has to implement some computational mechanism to deal with them.
I will present two hypotheses: a fully Bayesian approach that propagates an error through a memory buffer, and learning of an approximate posterior feedback policy. These two hypotheses give different behavioural predictions about learning and achievable performance.
I will talk about the theoretical differences between these two hypotheses and a simple experiment I performed to tackle this question.

2018/07/23 10:30 (Faculty of Science Bldg. 5 #412)
speaker Jiyoung Kang (Yonsei University)
title Energy landscape analysis of the subcortical brain system at rest
abstract The question of how the human brain system at rest is configured, despite being in a transitory phase among multistable states, remains unresolved. To understand the organizational properties of the human brain at rest using resting-state fMRI, we constructed an energy landscape of the subcortical brain network, a critical centre modulating whole-brain states, and evaluated alterations in the energy landscape following perturbations of network parameters. The perturbation analysis reveals characteristics of the dynamic brain system at rest, such as the maximal number of attractors, unequal temporal occupations, and readiness for reconfiguration of the system. Changes in network parameters as small as those of individual nodes or edges cause a significant shift in the energy landscape of the brain system, providing an explanatory basis for brain functions that emerge through reconfiguration of the resting-state network, and for systems potentially responsible for brain diseases in which networks deviate from normal.
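Energy-landscape analyses of this kind typically fit a pairwise maximum-entropy (Ising) model and then enumerate local minima of the energy. A minimal sketch with random couplings (my own toy, not the fitted subcortical network) shows the mechanics: a state is an attractor if its energy is lower than that of every one-spin-flip neighbour.

```python
# Toy energy landscape (random couplings, assumption for illustration):
# enumerate all states of a small pairwise maximum-entropy model and
# count the local minima (attractors).
import itertools, random

random.seed(5)
N = 6
J = {(i, j): random.gauss(0, 1) for i in range(N) for j in range(i + 1, N)}
h = [random.gauss(0, 0.1) for _ in range(N)]

def energy(state):
    e = -sum(h[i] * state[i] for i in range(N))
    e -= sum(Jij * state[i] * state[j] for (i, j), Jij in J.items())
    return e

states = list(itertools.product([-1, 1], repeat=N))
E = {s: energy(s) for s in states}

def neighbours(s):
    # all states reachable by flipping exactly one node
    for i in range(N):
        yield s[:i] + (-s[i],) + s[i + 1:]

minima = [s for s in states if all(E[s] < E[n] for n in neighbours(s))]
```

Perturbation analysis as in the abstract would then re-run this enumeration after tweaking individual entries of J or h and compare the sets of minima.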

2018/03/14 15:00 (Ichikawa Bldg. 103 in Yoshida campus)
speaker Ido Kanter (Bar-Ilan University)
title New Types of Experiments Reveal that a Neuron Functions as Multiple Independent Threshold Units
abstract Neurons are the computational elements that compose the brain, and their fundamental principles of activity have been known for decades. According to the long-standing computational scheme, each neuron sums the incoming electrical signals via its dendrites, and when the membrane potential reaches a certain threshold the neuron typically generates a spike to its axon. We experimentally show that neurons act like independent anisotropic multiplex hubs, which relay and mute incoming signals according to their input directions. Theoretically, the observed information routing enriches the computational capabilities of neurons by allowing, for instance, equalization among different information routes in the network, as well as high-frequency transmission of complex time-dependent signals constructed via several parallel routes. Next, we present three types of experiments, using neuronal cultures, indicating that each neuron functions as a collection of independent threshold units. The neuron is anisotropically activated following the origin of the arriving signals to the membrane, via its dendritic trees. The first type of experiment demonstrates that a single neuron's spike waveform typically varies as a function of the stimulation location. The second type reveals that spatial summation is absent for extracellular stimulations from different directions. The third type indicates that spatial summation and subtraction are not achieved when combining intra- and extra-cellular stimulations, as well as for nonlocal time interference, where the precise timings of the stimulations are irrelevant. These results call for re-examining neuronal functionalities beyond the traditional framework, and for exploring the advanced computational capabilities and dynamical properties of such complex systems.

2018/01/11 10:00 (Faculty of Science Bldg. 5 #412)
speaker Alexey Medvedev (University of Namur)
title Predicting structure and dynamics of discussion threads in online boards using Hawkes processes
abstract Online social platforms provide a fruitful source of information about social interaction. Depending on the platform, various tree-like cascading patterns emerge as a consequence of such interaction. For example, on Twitter or Facebook people interact by resharing messages, which produces cascade trees of reshares; in email networks people forward messages to their peers, resulting in trees of email forwards; and in online boards like Digg or Reddit people interact by discussing particular posts, which leaves a trace of discussion trees. Two main questions arise: what is the shape of these cascades, and how can we predict the dynamics of their evolution?
The evolution of discussion threads is now gradually being understood. In [1,2] the authors studied the structural evolution of discussion trees in four large Internet boards and suggested a tree generation model based on a preferential attachment (PA) mechanism; however, the dynamical properties were left out of consideration. In [3] the authors introduced a purely theoretical model that aims to describe both the structural and the temporal evolution of discussions: a specific Lévy point process generates timings, and the PA discussion tree is then constructed by assigning each new node a subsequently generated timing. However, being a sort of mean-field model, it describes the evolution only on average, and thus has limited utility in practice.
We consider cascades given by discussion trees of posts on the online board Reddit. The dataset of Reddit discussion threads consists of all posts and comments submitted to Reddit from January 2008 to January 2015; in total it contains more than 150 million posts and around 1.4 billion comments. We propose a model of discussion tree generation based on self-exciting Hawkes processes, which represents both the tree structure and the temporal information. We use the dataset to show that, structurally, the trees resemble Galton-Watson trees with a root bias, and to distinguish the cases in which the dynamics of comment attraction can be well predicted using Hawkes processes.
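A Hawkes-based discussion tree of the kind described can be sketched as follows (all parameters are invented for illustration, not the paper's fitted values): the root post and every comment carry an exponentially decaying intensity for attracting replies, and reply times are drawn by thinning a dominating Poisson process.

```python
# Hawkes discussion-tree sketch (hypothetical rates and decay).
import math, random

random.seed(2)

def sample_children(t0, mu, decay, horizon):
    """Reply times of one node: inhomogeneous Poisson process with rate
    mu * exp(-decay * (t - t0)), sampled by thinning a rate-mu process."""
    times, t = [], t0
    while t < horizon:
        t += random.expovariate(mu)                 # dominating process
        if t < horizon and random.random() < math.exp(-decay * (t - t0)):
            times.append(t)                          # accepted reply time
    return times

def grow_tree(mu_root=5.0, mu_reply=0.4, decay=1.0, horizon=24.0):
    tree = {0: []}                  # node id -> list of child ids
    stack = [(0, 0.0, mu_root)]     # (node, creation time, intensity scale)
    next_id = 1
    while stack:
        node, t0, mu = stack.pop()
        for t in sample_children(t0, mu, decay, horizon):
            tree[node].append(next_id)
            tree[next_id] = []
            stack.append((next_id, t, mu_reply))
            next_id += 1
    return tree

tree = grow_tree()
n_nodes = len(tree)
```

The branching ratio mu_reply / decay = 0.4 keeps the cascade subcritical; a root intensity larger than the reply intensity produces the root bias mentioned in the abstract.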


2017/11/08 14:00 (Faculty of Science Bldg. 5 #412)
speaker Herut Uzan (Bar-Ilan University)
title Oscillations in sparse excitatory networks - the theory behind low firing rate
abstract I will present an analytical framework that allows the quantitative study of the statistical dynamical properties of networks of adaptive nodes with finite memory. This framework is used to examine the emergence of oscillations in networks with nodal response failures. The analytical results are in agreement with large-scale simulations and open the horizon for understanding the dynamics of networks composed of finite-memory nodes, as well as their different phases of activity.

2017/09/28 13:30 (Faculty of Science Bldg. 5 #412)
speaker Naotsugu Tsuchiya (School of Psychological Sciences, Monash University, Melbourne, Australia; Monash Institute of Cognitive and Clinical Neuroscience, Monash University, Australia)
title Characterizing causal interactions and their integrations in hierarchical systems
abstract Causal interactions among neurons, and their integration, are key to understanding how cognitive functions and conscious experience arise from biological brains. While a number of measures have been proposed, characterizing causal interactions and their integration in hierarchical systems remains difficult. We recently proposed a unified framework based on information geometry that provides a solution, along with a novel measure for this purpose called geometric integrated information [phiG] (Refs 1 and 2). In the talk, I will discuss how our framework relates this measure of integrated information to other, more familiar measures such as mutual information, Granger causality and transfer entropy. I will also introduce our recent applications of the measure to the analysis of neural data (Ref 3) to characterize hierarchical causal neural interactions, which I believe are the key to understanding the physical substrates of phenomenal consciousness.

1) Oizumi M, Tsuchiya N, Amari S (2016). "Unified framework for quantifying causality and integrated information in a dynamical system." PNAS.
2) Amari S, Tsuchiya N, Oizumi M (2017). "Geometry of Information Integration." arXiv. To appear in IGAIA IV.
3) Haun AM, Oizumi M, Kovach CK, Kawasaki H, Oya H, Howard MA, Adolphs R, Tsuchiya N (2017, accepted). "Conscious perception as integrated information patterns in human electrocorticography." eNeuro.

2017/04/07 15:00 (Faculty of Science Bldg. 5 #412)
speaker Shinsuke Koyama (The Institute of Statistical Mathematics)
title Projection smoothing for stochastic dynamical systems
abstract This study concerns inference on the state of nonlinear stochastic dynamical systems, conditioned on noisy measurements. We take a differential geometric approach to construct finite dimensional algorithms for solving the filtering and smoothing problems. In particular, we apply a projection method based on the Hellinger distance and the related Fisher metric, to derive a novel backward equation that the approximate probability density associated with the smoothing problem satisfies. Combining with the projection filter developed in Brigo et al. (1999), we complete a finite dimensional approximation of the forward (filtering) and backward (smoothing) algorithms, based on the projection method.

2017/03/23 14:00 (Faculty of Science Bldg. 5 #434)
speaker Marie Levakova (Academy of Sciences of the Czech Republic)
title Accuracy of stimulus detection from noisy responses of single neurons
abstract Although it is universally accepted that neurons communicate through series of action potentials (spikes), it remains unclear which features of neural responses carry the sensory information. A related question is how precisely the stimulus can be decoded from different response characteristics. The decoding accuracy is commonly analyzed using methods of statistical estimation theory. Namely, the Fisher information is frequently applied as an approximation of the inverse mean square estimation error, hence its value is supposed to reflect the ultimate decoding accuracy. Using the Fisher information and assuming the stochastic perfect integrate-and-fire model, we investigate the decoding accuracy when the stimulus is identified either from the first-spike latency (the time delay between the stimulus onset and the first subsequent spike) or from the number of spikes fired within a given time window. We analyze the impact of changing several parameters, especially the level of presynaptic spontaneous activity and the duration of the time window. Paradoxically, the optimal performance is achieved when the level of spontaneous activity is nonzero, which can be explained by the stabilizing influence of the spontaneous activity on the membrane potential before the stimulus onset. Another interesting phenomenon concerns the dependence of the Fisher information calculated for spike counts on the duration of the observation period, which can be highly nonmonotonic, implying that a longer observation period does not necessarily lead to better decoding.
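For spike-count decoding, the Fisher information has a simple closed form when counts are Poisson: J(s) = (dλ/ds)² T / λ(s) for a rate λ(s) observed over a window T. The sketch below uses an invented linear tuning curve (my assumption, much simpler than the integrate-and-fire model of the talk); note that for this stationary toy J grows linearly in T, whereas the talk's point is that in richer models the dependence on T can be nonmonotonic.

```python
# Fisher information of a Poisson spike count about a stimulus s
# (toy linear tuning curve; parameters are hypothetical).
import math

def rate(s, baseline=2.0, gain=5.0):
    return baseline + gain * s          # firing rate in spikes per second

def fisher_info(s, T, baseline=2.0, gain=5.0, ds=1e-5):
    lam = rate(s, baseline, gain)
    # central finite difference for d(lambda)/ds
    dlam = (rate(s + ds, baseline, gain) - rate(s - ds, baseline, gain)) / (2 * ds)
    return dlam ** 2 * T / lam          # J(s) for a count window of length T

J_short = fisher_info(s=1.0, T=0.1)
J_long = fisher_info(s=1.0, T=1.0)
```

Here J_long = 5² · 1 / 7 = 25/7, ten times J_short, reflecting the linear-in-T behaviour of this stationary toy.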


2016/10/18 15:00 (Faculty of Science Bldg. 5 #412)
speaker Dr. Christian Donner (Bernstein Center of Computational Neuroscience Berlin)
title Approaches for Statistical Modelling of Spike Train Dynamics
abstract Spike train data recorded simultaneously from many neurons in vivo allow assessment of coordinated network behavior during computation. However, analysis of these data requires methods that are sensitive to their dynamics, because the population activity changes due to external factors such as stimuli or internal brain states (e.g., attention). In my talk I will present two approaches that account for these dynamics:

1. Shimazaki et al (PLoS Comp. Bio., 2012) proposed a model which assumes that the observed multi-neuron spike trains are sampled from a joint distribution. This distribution is parameterized by a latent process accounting for dynamics of individual and correlated activity of the neurons. The latent process can be inferred as long as the number of recorded neurons is small (N<15). I will show in my talk how we incorporated approximation methods (namely pseudolikelihood, TAP and Bethe approximation) to solve the inference problem for networks up to 60 neurons. With this large-scale analysis, we can assess macroscopic quantities of the network such as probability of simultaneous silence, entropy and susceptibility in a time resolved manner.

2. The second approach assumes, instead of a continuously varying latent process, a jump process: at each time point the latent state may switch with a certain probability, with the next state drawn from a multinomial state distribution. Exact inference for this model is infeasible, and hence we used a variational approach to approximate the model parameters from the data.

I will show analysis results for spiking data recorded in vivo from the monkey V4 area, obtained by these two different methods.

2016/7/1 13:30 (Faculty of Science Bldg. 5 #412)
speaker Dr. Tomoki Kurikawa (RIKEN)
title A hippocampal-entorhinal microcircuit model for dynamic communication in a memory-based spatial navigation task
abstract Neural rhythms play an important role in communication between different cortical areas. Such communication is not constant in time but changes dynamically depending on the demands of memory consolidation, memory recall, and so forth. Gamma and theta rhythms in the hippocampus and entorhinal cortex (EC) are critical for memory-based navigation tasks. A recent study (Yamamoto et al., 2014, Neuron) has shown that the coherence of these rhythms changes dynamically during a T-maze task in which a rat is required to choose one of the arms depending on its previous choice. The authors found that neural activity in EC3 locked to the theta rhythm is selectively higher in a test trial than in a sample trial, and that gamma synchrony between the hippocampus and entorhinal cortex, which occurs only around a decision point, is necessary for the correct choice; blocking this connection reduces the success rate to chance level. This study implies that information about the previous choice is transported through gamma synchrony in a dynamic way. What mechanism underlies such dynamic and flexible communication, however, remains unclear. To understand this mechanism at the microcircuit level, in this study we model a local neural circuit including pyramidal neurons and several types of inhibitory neurons. In particular, we focus on the theta-locked activity in EC3. The model network has three sub-networks corresponding to CA1 and the entorhinal cortices (EC3 and EC5), with input from CA3 to CA1. We assume that working memory of the previous choice is stored in EC3 and/or EC5 neurons as persistent activity. With this model, we analyze the functional role of theta rhythms and the conditions on each type of neuron under which synchronization occurs at the appropriate timing. We found that persistent activity locked to the theta rhythm is triggered by nonlinear interactions of inputs from CA3 and EC3 to CA1 through the CA1-EC5-EC3 loop.
Because the CA3 input encodes spatial information, this theta rhythm is generated at specific locations on the maze. We will discuss dynamic routing in the hippocampus and the effect of the theta-locking behavior on it.

2016/6/10 14:00 (Faculty of Science Bldg. 5 #412)
speaker Dr. Sarah de Nigris (University of Namur, Belgium)
title Influence of network topology on the onset of long-range interaction
abstract In many systems of interacting agents, such as spins, masses, but also people, the interaction can be modelled through a potential whose decay gives precise information on the range of interaction. For instance, we experience every day the effect of a long-range force: gravity. In the statistical mechanics framework, long-range interactions lead to a very rich phenomenology of collective behaviours, like phase transitions and transient quasi-stationary states. Now, if we take a dynamical system evolving on a network, this clear distinction between short-range and long-range interaction is completely blurred: we no longer have a range of interaction whose width is given by the potential decay; in its place we have a mixture of short-range links and long-range ones. Moreover, on a network the metric distance can be less meaningful than the information distance. For instance, people are likely to have many friends in their physical neighborhood, but at the same time they can also have acquaintances who are physically far away. This example hints at the core question of this work: under what topological conditions does long-range order arise? It seems natural that the network structure can affect the global behaviour of the system, but what is the dominant topological feature steering it? Of course the answer strongly depends on the dynamical system: in this seminar I will focus on some results for the classical XY-rotor model, showing how, through control of some network features, we can indeed change the effective range of interaction and obtain a variety of dynamical behaviours.
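The setting can be made concrete with a small numerical toy (entirely my own sketch with invented parameters, and using Metropolis dynamics rather than the Hamiltonian dynamics of the XY-rotor studies): XY spins on a ring with a handful of random long-range shortcuts, with the usual magnetization-like order parameter m measuring global order.

```python
# XY model on a ring plus random shortcuts, Metropolis dynamics
# (hypothetical parameters; a topology-vs-order illustration only).
import cmath, math, random

random.seed(6)
N, beta = 100, 5.0   # low temperature so local order can develop

# ring edges plus a few random long-range shortcuts
edges = [(j, (j + 1) % N) for j in range(N)] + \
        [(random.randrange(N), random.randrange(N)) for _ in range(20)]
neigh = {j: [] for j in range(N)}
for a, b in edges:
    if a != b:
        neigh[a].append(b)
        neigh[b].append(a)

theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def local_energy(j, th):
    # energy of node j's couplings if its angle were th
    return -sum(math.cos(th - theta[k]) for k in neigh[j])

for _ in range(20000):  # Metropolis updates
    j = random.randrange(N)
    prop = theta[j] + random.gauss(0, 0.5)
    dE = local_energy(j, prop) - local_energy(j, theta[j])
    if dE < 0 or random.random() < math.exp(-beta * dE):
        theta[j] = prop

# order parameter m = |sum_j exp(i*theta_j)| / N, in [0, 1]
m = abs(sum(cmath.exp(1j * t) for t in theta)) / N
```

Varying the number of shortcuts while keeping N fixed is the kind of topological control the abstract refers to: more shortcuts shorten the information distance and push the system toward long-range-like collective behaviour.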

2016/6/7 13:30 (Faculty of Science Bldg. 5 #412)
speaker Dr. Luis E. C. Rocha (University of Namur, Belgium)
title Diffusion of Information and Epidemics on Dynamic Contact Networks
abstract Human contact networks are characterized by the structure of connections and by temporal patterns of node and link activity. Extensive research has been done to understand how structure constrains dynamic processes, such as opinion dynamics, random walks or epidemics, taking place on networks. On the other hand, the role of the timings of node and link activation remains little understood. In this talk, I will discuss some of our contributions to the field, in particular, results on a time-evolving network of sexual contacts and its role to regulate epidemics, recent results on the competition between time and structure to regulate diffusion processes and methods to model epidemics on human contact networks.

2016/4/4 13:30 (Faculty of Science Bldg. 5 #412)
speaker Shinsuke Koyama (The Institute of Statistical Mathematics)
title Approximate inference for stochastic reaction networks
abstract Reaction networks describe the evolution of networks of species. They are used for modeling phenomena in a wide range of disciplines; the species may represent molecules in chemical or biological systems, animal species in ecology, or information packets in telecommunication networks. The evolution of the network is modeled by a continuous-time Markov jump process, for which the probability distribution of the species x obeys a master equation. Assume that we have noisy and partial observations y_1, ..., y_n at times t_1, ..., t_n, where the probability distribution of the observation y_i is conditioned on x(t_i). Our goal is to compute the conditional distribution of x at time t_i, given the observations y_1, ..., y_i. (In the signal processing literature, this problem is called filtering.) The conditional distribution is not analytically tractable for the system we consider. Here, we use two approximation methods, the linear noise approximation (LNA) and the projection approximation (PA), to construct approximate filters, and compare them in terms of filtering performance. We demonstrate on simulated data that the approximate filter based on the PA outperforms that based on the LNA.
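The filtering problem described here can be illustrated with a generic bootstrap particle filter on a one-species birth-death network (my own sketch with invented rates; the talk compares LNA- and projection-based filters, not particle filters): simulate the jump process forward for each particle, weight by the observation likelihood, and resample.

```python
# Bootstrap particle filter for a birth-death jump process observed
# with Gaussian noise (hypothetical rates and noise level).
import random

random.seed(3)

def step(x, dt, birth=10.0, death=0.5):
    """Gillespie simulation of the birth-death process for duration dt."""
    t = 0.0
    while True:
        total = birth + death * x
        t += random.expovariate(total)       # time to next reaction
        if t > dt:
            return x
        x += 1 if random.random() < birth / total else -1

# simulate a hidden path and noisy observations y_i = x(t_i) + noise
true_x, obs = 20, []
for _ in range(10):
    true_x = step(true_x, dt=1.0)
    obs.append(true_x + random.gauss(0, 2.0))

# bootstrap filter: propagate, weight by the Gaussian likelihood, resample
particles = [20] * 500
for y in obs:
    particles = [step(x, 1.0) for x in particles]
    w = [pow(2.718281828, -((y - x) ** 2) / (2 * 4.0)) for x in particles]
    particles = random.choices(particles, weights=w, k=500)

posterior_mean = sum(particles) / len(particles)
```

The stationary mean of this toy process is birth/death = 20, so the filtered mean should track the hidden path around that level; the LNA and PA of the talk replace the particle ensemble with parametric density approximations.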


2015/11/9 15:00 (Faculty of Science Bldg. 5 #413)
speaker Professor Néstor Parga (Departamento de Física Teórica C-XI, Universidad Autónoma de Madrid)
title Neural dynamics of perceptual detection under temporal uncertainty
abstract During perceptual decisions the brain uses previous knowledge to transform noisy sensory evidence into the percepts on which decisions are based. Internal states are believed to reflect acquired experience that can be used to make the best sense of the sensory inputs. We explored the dynamic nature of these internal states by asking how previous information about the timing of sensory evidence is incorporated in the decision-making process [1]. We combined computational modeling with neurophysiological and behavioral data recorded while monkeys performed a somatosensory detection task [2]. We obtained neural correlates of false alarms showing that the subject's response criterion is modulated over the course of a trial. Analysis of premotor cortex activity shows that this modulation is represented by the dynamics of the population responses. A recurrent network model trained on the same task reproduces the experimental findings and demonstrates a novel neural mechanism for benefiting from temporal expectations in perceptual detection. Previous knowledge about the probability of stimulation over time can be intrinsically encoded in the neural population dynamics, allowing flexible control of the response criterion over time.

[1] Carnevale F, de Lafuente V, Romo R, Barak O, Parga N (2015). Neuron 86, 1067-1077.
[2] de Lafuente V, Romo R (2006). Proc. Natl. Acad. Sci. U.S.A. 103: 14266-14271.

2015/3/6 14:00 (Faculty of Science Bldg. 5 #413)
speaker Amir Goldental (Bar-Ilan University, Israel)
title Neuronal impedance mechanism implementing cooperative networks with low firing rates and microsecond precision
abstract Realizations of low firing rates in neural networks usually require globally balanced distributions of excitatory and inhibitory links, while the feasibility of temporal coding is limited by neuronal millisecond precision. We experimentally demonstrate that above a critical stimulation frequency, which varies among neurons, response failures emerge stochastically, such that the neuron functions as a low-pass filter, saturating the average inter-spike interval. Simulations and analytical work show that this intrinsic neuronal impedance mechanism, as opposed to the link distribution, leads to cooperation at the network level, such that firing rates are suppressed towards the lowest neuronal critical frequencies. We also show, in experiments, that neurons stimulated at such low frequencies exhibit microsecond precision of neuronal response timings.

2015/2/17 14:00 (Faculty of Science Bldg. 5 #413)
speaker Eliza-Olivia Lungu (Kyoto University)
title Why do workers change occupations?
abstract Occupational mobility has been extensively studied in the literature; scholars distinguish between intergenerational occupational mobility (movement within or between occupations, the change occurring from one generation to the next) and intragenerational occupational mobility (movement within or between occupations, the change occurring within an individual's lifetime). Here we focus on the second type of mobility and explore the occupational mobility patterns of the US labour force using network analysis. Yearly networks are constructed with occupations as nodes, while the links are weighted by the number of persons experiencing an occupation change in that year. The occupational mobility networks (OMN) are generated using two public databases: the Panel Study of Income Dynamics (PSID) main family data and the Current Population Survey (CPS) March file data. Considering that occupations are related to each other via transferable skills, we visualize paths of mobility and calculate network indicators in order to understand the patterns of connectivity between occupations.
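The construction of such a network reduces to counting transitions (the records and occupation names below are invented toy data, not PSID/CPS): occupations become nodes, and each directed edge is weighted by the number of workers who moved between that pair in a given year.

```python
# Building a weighted occupational mobility network from transition
# records (hypothetical toy data for illustration).
from collections import Counter

transitions = [  # (occupation_from, occupation_to), one record per mover
    ("nurse", "teacher"), ("nurse", "teacher"), ("teacher", "manager"),
    ("driver", "nurse"), ("teacher", "manager"), ("manager", "teacher"),
]

edges = Counter(transitions)      # weighted directed edges of the yearly OMN
out_strength = Counter()          # total outgoing moves per occupation
for (src, dst), w in edges.items():
    out_strength[src] += w

top_edge = edges.most_common(1)[0]
```

From `edges` one can then compute the network indicators mentioned in the abstract (strengths, motifs, paths), e.g. by loading the weighted edge list into a graph library.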

Keywords: occupational mobility network, intragenerational mobility, weighted networks, network motifs


2014/10/15 14:00 (Faculty of Science Bldg. 5 #115)
speaker Yuzhe Li (Laboratory of Bioimaging and Cell Signaling, Graduate School of Biostudies, Kyoto University)
title A model of Amygdala-mPFC interaction for resistance to extinction after partial reinforcement fear conditioning
abstract Animals have the ability to associate a conditioned stimulus (CS, e.g. a tone) with a paired emotional unconditioned stimulus (US, e.g. an electric shock). The conditioned fear memory can be extinguished when a number of CSs are presented without the US. Interestingly, the speed of such fear extinction depends on the statistics of the experienced fear conditioning: animals that experienced partial pairings between CS and US show larger resistance to extinction than animals that experienced continuous pairings. However, how the brain processes the statistics of fear conditioning is largely unclear. Here, we developed a neural circuit-based model that consists of subpopulations of neurons in the amygdala and the medial prefrontal cortex (mPFC), and addresses their interaction and the synaptic plasticity within these regions. In computer simulations, we reproduced the behavior of amygdala and mPFC activities as conditioned responses in both continuous and partial fear conditioning and extinction. On the basis of these results, we propose that the balance between the activities of the neuron subpopulations in the amygdala and mPFC encodes a surprise-like signal, which reflects the statistics of fear conditioning and provides a learning signal for synaptic plasticity in mPFC during extinction. Moreover, our model provides a prediction for therapeutic treatment to eliminate the resistant fear memory. Thus, our model sheds light on a neural circuit-level understanding of the large resistance to extinction.

2014/9/11 14:00 (Faculty of Science Bldg. 5 #413)
speaker Shinsuke Koyama (The Institute of Statistical Mathematics)
title On the spike train variability characterized by variance-to-mean power relationship
abstract We propose a statistical framework for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and mean of interspike interval are related in the form of a power function, the exponent of which characterizes the variability of spike trains. It is shown that a simple mechanism consisting of first-passage-time to a threshold for Ornstein-Uhlenbeck processes explains the power law with various exponents depending on the subthreshold dynamics. We also propose a statistical model of spike trains that exhibits the variance-to-mean power relationship, based on which a maximum likelihood method is developed for inferring the exponent from rate-modulated spike trains. The ability of the proposed method is demonstrated with simulated and experimental data of spike trains. Finally, we discuss possible implications for the power law in terms of characterizing the intrinsic variability of neuronal discharges.
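The variance-to-mean power relationship can be estimated in a few lines once interval groups with different means are available. The sketch below (synthetic gamma-distributed ISIs of my choosing, not the paper's Ornstein-Uhlenbeck model or its maximum likelihood estimator) recovers the exponent by least squares on log-log axes; for gamma ISIs with fixed shape, Var = Mean²/shape, so the true exponent is 2.

```python
# Estimating the exponent alpha in Var(ISI) ~ c * Mean(ISI)^alpha
# (synthetic gamma inter-spike intervals; illustration only).
import math, random

random.seed(4)

k = 4.0  # gamma shape: mean = k*scale, var = k*scale^2 = mean^2 / k
means, variances = [], []
for scale in [0.5, 1.0, 2.0, 4.0, 8.0]:
    isis = [random.gammavariate(k, scale) for _ in range(20000)]
    m = sum(isis) / len(isis)
    v = sum((x - m) ** 2 for x in isis) / len(isis)
    means.append(m)
    variances.append(v)

# least-squares slope of log(var) against log(mean) gives the exponent
lx = [math.log(m) for m in means]
ly = [math.log(v) for v in variances]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
alpha = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)
```

The paper's contribution is the harder version of this problem: inferring the exponent from rate-modulated spike trains, where the interval means themselves vary in time.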

2014/7/23 14:30 (Faculty of Science Bldg. 5 #115)
speaker Paul Poon (National Cheng Kung University)
title Machine discrimination of fast and slow FM sounds based on intracranial recordings from multi-contact electrodes in the human auditory cortex
abstract FM (frequency modulation) is an important building block of speech signals. Neurons in the human primary auditory cortex (Heschl's gyrus, HG) respond to FM sounds. However, it remains unclear which part of the auditory cortex contains sufficient information for discriminating perceptually different FM sounds, in particular on a single-trial basis. Here, we hypothesized that such information is contained in the event-related potential (ERP) of the HG. To test this, we analyzed ERPs recorded from a depth electrode (with 4 macro-contacts) placed along the long axis of the HG within the grey matter in 10 epileptic patients undergoing invasive investigation. During a 5-min session, conscious subjects listened passively to acoustic stimulation containing repeated random alternations of two FM tones (fast and slow FM, which can easily be discriminated by normal subjects). Over the stimulus duration of 250 ms, the tone frequency varied from 0.5 kHz to 2 kHz and back to 0.5 kHz according to two asymmetrical linear ramps. The modulation profiles of the two FM tones are time-reversed versions of each other, but the two FMs are otherwise identical in spectrum. They were designed to emerge, with no acoustic transients, at 3-sec jittered intervals from a background of random FM tone (carrier frequency 0.5 kHz to 0.25 kHz). ERPs to individual FM sounds were first preprocessed with an adaptive filter to isolate the ERP response from the background EEG. In the subsequent analysis, the 100 trials of isolated ERPs in each session were randomly divided into two groups: (a) 50 trials for training a classifier (support vector machine) to discriminate the two FMs on a single-trial basis, and (b) the other 50 trials for testing the performance of the classifier. The optimal features (strength of ERP, or RMS) were automatically selected by an inheritable bi-objective combinatorial genetic algorithm (IBCGA).
Results showed that the two FMs could be discriminated at high performance levels: (a) >99% in the within-subject cases, (b) >98% in the across-subject cases, and (c) >71% in the leave-one-subject-out cases. As expected from large individual variations, the optimal features clustered most clearly in time only for the across-subject cases, where common features were extracted collectively for all subjects. These optimal features fell within the duration of the linear FM (0-250 ms) for the 3 medial locations of the HG, where the ERP strength was also high. The features fell outside the stimulus duration (500-750 ms) at the most lateral location, where the ERP strength was low and the ERP peaks also showed longer delays. Other off-duration signals (250-500 ms) bore optimal features only when the adjacent-difference ERPs were analyzed, likely due to the suppression of on-duration ERPs. The results showed that ERPs from the HG contain sufficient information, both inside and outside the linear FM duration (for as long as 500 ms after the end of the stimulus), for satisfactory machine discrimination of the two FMs on a single-trial basis. Temporal differences in the optimal features across the 4 HG locations further support a functional subdivision between the medial and lateral parts. However, the HG as a whole, rather than only a part of it, appeared to be involved in the FM processing of the stimulus signal.
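The train/test protocol above (50 trials to train, 50 to test, RMS as the feature) can be sketched in miniature. The code below is a much simplified stand-in: synthetic "ERP" trials replace the recordings, and a one-dimensional RMS threshold replaces the SVM and the IBCGA feature selection; all signal parameters are invented for the illustration.

```python
import math, random

rng = random.Random(42)

def make_trial(klass):
    """Synthetic single-trial 'ERP': class 1 carries a stronger evoked
    deflection buried in noise (a stand-in for fast vs slow FM)."""
    amp = 1.0 if klass == 1 else 0.4
    return [amp * math.sin(2 * math.pi * t / 50) + rng.gauss(0, 0.5)
            for t in range(100)]

def rms(x):  # the strength-of-ERP feature described above
    return math.sqrt(sum(v * v for v in x) / len(x))

# 100 trials per session, randomly split 50/50 into train and test.
trials = [(make_trial(k), k) for k in (0, 1) for _ in range(50)]
rng.shuffle(trials)
train, test = trials[:50], trials[50:]

# "Training": place a threshold halfway between the class-mean RMS values.
m0 = [rms(x) for x, k in train if k == 0]
m1 = [rms(x) for x, k in train if k == 1]
thr = 0.5 * (sum(m0) / len(m0) + sum(m1) / len(m1))

correct = sum((rms(x) > thr) == (k == 1) for x, k in test)
accuracy = correct / len(test)
print(accuracy)
```

With a real SVM the decision boundary lives in the space of many selected RMS features rather than a single scalar, but the train/test separation and single-trial evaluation are the same.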


2013/12/5 14:00 (Faculty of Science Bldg. 5 #434)
speaker Ryota Kobayashi (National Institute of Informatics)
title Estimating input signals of a cortical neuron
abstract Neurons transmit information by transforming synaptic inputs into action potentials. Investigating the dynamics of the synaptic inputs is essential for understanding the computational mechanisms of the brain. We consider the problem of estimating the input signals from a single voltage trace of a neuron obtained by intracellular recordings. Previous methods are based on the assumption that the input signals are constant over time. However, it is natural to expect that neuronal activity in vivo is time-variable, reflecting variable external conditions. Here, we propose a Bayesian method to estimate time-varying input signals from a voltage trace of the Ornstein-Uhlenbeck neuronal model. The proposed method is then extended to more realistic models, i.e., Hodgkin-Huxley type models. I will also discuss possible applications to experimental data. This is joint work with Shigeru Shinomoto (Kyoto University), Yasuhiro Tsubo (Ritsumeikan University), and Petr Lansky (Academy of Sciences of the Czech Republic).
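To make the estimation problem concrete: in the Ornstein-Uhlenbeck neuronal model, dV = (-V/tau + I(t)) dt + sigma dW, and the task is to recover the slowly varying I(t) from the noisy voltage trace V alone. The sketch below uses a crude drift-inversion plus moving-average smoother, not the speaker's Bayesian method, and all parameter values are invented for the illustration.

```python
import math, random

rng = random.Random(7)
dt, tau, sigma = 0.001, 0.02, 0.1   # time step (s), membrane time constant, noise
n = 5000                            # 5 seconds of simulated trace

# True slowly varying input and the simulated OU voltage trace.
I_true = [1.0 + math.sin(2 * math.pi * k * dt / 2.0) for k in range(n)]
V = [0.0]
for k in range(n - 1):
    dV = (-V[-1] / tau + I_true[k]) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
    V.append(V[-1] + dV)

# Crude estimator: invert the drift term sample-by-sample, then smooth
# with a centered moving average (window of ~0.4 s).
raw = [(V[k + 1] - V[k]) / dt + V[k] / tau for k in range(n - 1)]
w = 200
I_hat = [sum(raw[max(0, k - w):k + w + 1]) / len(raw[max(0, k - w):k + w + 1])
         for k in range(n - 1)]

def corr(a, b):
    m = min(len(a), len(b))
    a, b = a[:m], b[:m]
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))

print(round(corr(I_true, I_hat), 2))
```

A proper Bayesian treatment replaces the ad hoc smoothing window with a prior on the input's smoothness and returns posterior uncertainty as well as a point estimate; the sketch only shows why the raw drift inversion must be regularized at all (its per-sample noise scales as sigma/sqrt(dt)).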

2013/11/28 13:30 (Faculty of Science Bldg. 5 #413)
speaker Ido Kanter (Bar-Ilan University)
title Changes in neuronal response latencies and their applicability for advanced neural computations
abstract Understanding the brain mechanisms that underlie firing synchrony is one of the great challenges of neuroscience. Many variants of population codes have been suggested, where a set of neurons in a population acts together to perform a specific computational task. There is much discussion over whether rate coding or temporal coding is used to represent perceptual entities in populations of neurons in the cortex. Recently, we have experimentally demonstrated a mechanism by which time-lags among neuronal spiking leap from several tens of milliseconds to nearly zero-lag synchrony. This mechanism also allows sudden leaps out of synchrony, hence forming short epochs of synchrony. Our results are based on an experimental procedure where conditioned stimulations were enforced on circuits of neurons embedded within a large-scale network of cortical cells in vitro, and are corroborated by simulations of neuronal populations. The underlying biological mechanism is the unavoidable increase of neuronal response latency to ongoing stimulations, where evoked spikes require temporal or spatial summation. It requires recurrent neuronal circuits, and synchrony appears even among neurons which do not share a common drive. Sub-threshold stimulations serve as a switch that momentarily closes or opens loops in the neuronal circuit, changing the entire circuit's loops, which determine the state of synchrony. These sudden leaps may be accompanied by jumps in the neuronal firing frequency, hence offering reliable, information-bearing indicators which may bridge the two principal neuronal coding paradigms. Based on the same underlying biological mechanism, changes of neuronal response latency to ongoing stimulations, we also recently proposed a new experimentally corroborated paradigm, named dynamic logic-gates, in which the truth tables of the brain's logic-gates depend on the history of their activity and the stimulation frequencies of their input neurons.

Discussion of Department of Physics I    2013/11/7 16:30 - 18:00 (Faculty of Science Bldg. 5 #525)
speaker Jun-nosuke Teramae (Osaka University)
title Origin and function of the fluctuations in the nervous system

2013/8/22 13:30 (Faculty of Science Bldg. 5 #413)
speaker Ryota Kobayashi (National Institute of Informatics)
title Inferring synaptic connections from multiple spike train data
abstract Significant correlations in neuronal activity are defined as functional connections between pairs of neurons. Characteristics of the functional connectivity have illustrated how neurons transmit information cooperatively in the brain. On the other hand, it is still unclear how the derived functional connectivity is related to underlying synaptic connectivity.
Here, we developed a coupled escape rate model (CERM) to infer synaptic connections from multiple spike train data (Kobayashi & Kitano, J. Comput. Neurosci., 2013). We applied this method, as well as the functional connectivity methods, i.e., transfer entropy and cross-correlation, to simulated multi-neuronal activities generated by a cortical network model consisting of thousands of biophysically detailed neurons (Kitano & Fukai, J. Comput. Neurosci., 2007). We also applied these methods to spike data generated by the cortical network model with different topologies of synaptic connectivity (regular, small-world, and random). Our results indicate that all the methods perform better for highly clustered (regular and small-world) networks than for random networks. Compared with the model-free methods, CERM performs better, especially in non-regular networks. Overall, the CERM method is the most suitable for inferring synaptic connections from multi-neuronal spike activities, although it involves a high computational cost. This is joint work with Katsunori Kitano (Ritsumeikan University).
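The simplest of the model-free baselines mentioned above, cross-correlation, can be sketched on a toy two-neuron simulation (not the Kitano-Fukai network model, and all parameters are invented): if neuron A drives neuron B with a fixed synaptic delay, the cross-correlogram of B's spikes relative to A's shows a peak at that delay.

```python
import random

rng = random.Random(3)
n = 200000                   # number of 1-ms bins (200 s of activity)
p_a, p_b = 0.02, 0.02        # baseline firing probabilities per bin
delay, p_syn = 3, 0.2        # synaptic delay (bins) and efficacy

# Neuron A: Poisson-like; neuron B: baseline plus excitation from A.
a = [1 if rng.random() < p_a else 0 for _ in range(n)]
b = []
for k in range(n):
    p = p_b + (p_syn if k >= delay and a[k - delay] else 0.0)
    b.append(1 if rng.random() < p else 0)

# Cross-correlogram: count B spikes at lags 1..10 bins after each A spike.
max_lag = 10
cc = [0] * (max_lag + 1)
for k in range(n - max_lag):
    if a[k]:
        for lag in range(1, max_lag + 1):
            cc[lag] += b[k + lag]

peak_lag = max(range(1, max_lag + 1), key=lambda lag: cc[lag])
print(peak_lag)   # the synaptic delay appears as the correlogram peak
```

The limitation motivating model-based methods like CERM is visible even here: a correlogram peak only certifies a statistical dependency, and common input or indirect paths can produce peaks without a direct synapse.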

2013/4/1 13:30 (Faculty of Science Bldg. 5 #401)
speaker Shinsuke Koyama (The Institute of Statistical Mathematics)
title Information gain on variable neuronal firing rate
abstract The question of how much information can theoretically be gained from a variable neuronal firing rate is investigated. For this purpose, we employ the Kullback-Leibler divergence (relative entropy) as a measure of information gain.

We first give a statistical interpretation of this information in terms of the detectability of rate variation: the lower bound of detectable rate variation, below which the temporal variation of the firing rate is undetectable with a Bayesian decoder, is entirely determined by this information. We derive a formula for the lower bound, which tells us how much information is necessary for the rate variation to be detected from spike trains. For instance, if one spike, on average, is expected to be observed within the characteristic timescale of the rate variation, the spike train must carry more than 0.36 bits of information per spike for the underlying rate variation to be detectable.

We show that the information depends not only on the variation of firing rates (i.e., the signal), but also significantly on the dispersion properties of neuronal firing described by the shape of the interspike interval (ISI) distribution (i.e., the noise properties). It is shown that, under a certain condition, the gamma distribution attains the theoretical lower bound of the information among all ISI distributions with a given coefficient of variation of ISIs.
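A per-spike information value of the kind quoted above can be computed numerically for a given rate profile. The sketch below evaluates the KL divergence per spike between an inhomogeneous Poisson process with rate lambda(t) and a homogeneous one with the same mean rate, D = (integral of lambda*log2(lambda/mean)) / (integral of lambda), over a whole modulation period so the linear terms cancel; this is one simple instance of such a measure, not necessarily the exact quantity of the talk, which also accounts for non-Poisson ISI dispersion.

```python
import math

def bits_per_spike(rate, mean_rate, T=1.0, n=100000):
    """KL divergence per spike (in bits) between an inhomogeneous
    Poisson process with rate(t) and a homogeneous Poisson process
    with the same mean rate, over one period T of the modulation."""
    dt = T / n
    num = den = 0.0
    for k in range(n):
        lam = rate(k * dt)
        if lam > 0:
            num += lam * math.log(lam / mean_rate, 2) * dt
        den += lam * dt
    return num / den

# Example: a rate modulated sinusoidally around 10 Hz with 100% depth.
mean_rate = 10.0
rate = lambda t: mean_rate * (1.0 + math.sin(2 * math.pi * t))
info = bits_per_spike(rate, mean_rate)
print(round(info, 3))   # (1 - ln 2)/ln 2 = about 0.44 bits per spike
```

In this Poisson example a fully modulated sinusoidal rate carries about 0.44 bits per spike, which sits above the 0.36-bit detectability bound quoted in the abstract; shallower modulation depths fall below it.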
