Note: contributions are sorted in alphabetical order; please refer to the program for sessions and order of appearance.

Quantifying dynamic stability and signal propagation: Lyapunov spectra of recurrent neural networks

R. Engelken, F. Wolf, L. F. Abbott
re2365@columbia.edu
Columbia University

Brains process information through the collective dynamics of large neural networks. Collective chaos has been suggested to underlie the complex ongoing dynamics observed in cerebral cortical circuits and to determine the impact and processing of incoming information streams. In dissipative systems, chaotic dynamics takes place on a subset of phase space of reduced dimensionality and is organized by a complex tangle of stable, neutral, and unstable manifolds. Key topological invariants of this phase-space structure, such as the attractor dimension and the Kolmogorov-Sinai entropy, have so far remained elusive.

Here we calculate the complete Lyapunov spectrum of recurrent neural networks. We show that chaos in these networks is extensive, with a size-invariant Lyapunov spectrum, and characterized by attractor dimensions much smaller than the number of phase space dimensions. We find that near the onset of chaos, for very intense chaos, and for discrete-time dynamics, random matrix theory provides analytical approximations to the full Lyapunov spectrum. We show that a generalized time-reversal symmetry of the network dynamics induces a point symmetry of the Lyapunov spectrum, reminiscent of the symplectic structure of chaotic Hamiltonian systems. Fluctuating input reduces both the entropy rate and the attractor dimension. For trained recurrent networks, we find that Lyapunov spectrum analysis provides a quantification of error propagation and of the stability achieved. Our methods apply to systems of arbitrary connectivity, and we describe a comprehensive set of controls for the accuracy and convergence of Lyapunov exponents.

Our results open a novel avenue for characterizing the complex dynamics of recurrent neural networks and the geometry of the corresponding chaotic attractors. They also highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks.
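
As a minimal, hedged sketch of the kind of computation described here, the standard Benettin/QR reorthonormalization procedure yields the full Lyapunov spectrum of a generic discrete-time random rate network; the model and all parameters below are illustrative, not the authors' exact setup:

```python
import numpy as np

def lyapunov_spectrum(J, h0, n_steps=2000, n_discard=200):
    """Full Lyapunov spectrum of the map h -> tanh(J h) via QR (Benettin) iteration."""
    N = len(h0)
    h = h0.copy()
    Q = np.eye(N)
    lyap_sum = np.zeros(N)
    count = 0
    for t in range(n_steps):
        h = np.tanh(J @ h)
        D = (1.0 - h**2)[:, None] * J          # Jacobian of h -> tanh(J h)
        Q, R = np.linalg.qr(D @ Q)             # reorthonormalize the tangent frame
        if t >= n_discard:                     # discard the transient before averaging
            lyap_sum += np.log(np.abs(np.diag(R)))
            count += 1
    return np.sort(lyap_sum / count)[::-1]

rng = np.random.default_rng(0)
N, g = 100, 2.0                                # gain g > 1: chaotic regime for this model
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
spectrum = lyapunov_spectrum(J, rng.standard_normal(N))
print(spectrum[:3])                            # a positive leading exponent signals chaos
```

Counting the positive exponents or applying the Kaplan-Yorke formula to `spectrum` then gives the attractor-dimension estimates discussed in the abstract.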

Posted Wed 07 Jul 2021 11:51:23 AM CEST by Rainer Engelken

From seeing double to chaotic itinerancy with a multifunctional reservoir computer

Andrew Flynn, Vassilios A. Tsachouridis, Andreas Amann
andrew_flynn@umail.ucc.ie
University College Cork

The ability of a neural network to perform more than one task is known as multifunctionality. In this talk we explore some of the phenomena that arise when translating multifunctionality from the biological world to an artificial setting using the reservoir computer (RC) approach to machine learning. As a paradigmatic example of multifunctionality, we examine the dynamics of the RC when it is trained to simultaneously imitate trajectories along two overlapping circular orbits rotating in opposite directions. By virtue of this experiment’s simplicity, we are able to place a greater emphasis on understanding the necessary conditions for multifunctionality to occur in an RC. We expose the intricate relationship between multifunctionality and the spectral radius of the RC’s internal connections, a parameter associated with the RC’s memory. Particular attention is given to the extreme case where the circular orbits completely overlap in space. A bifurcation analysis reveals that multifunctionality emerges through the evolution of ‘untrained attractors’ and is destroyed as the RC crosses the edge of chaos. As multifunctionality breaks down near this boundary, we observe chaotic itinerancy as the state of the RC wanders between attractor ruins.
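
A minimal echo-state-network sketch of the training task described above (one linear readout fitted to both counter-rotating circular orbits at once); the reservoir size, spectral radius, and ridge parameter are hypothetical, not the authors' values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two circular orbits traversed in opposite directions (the two tasks).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
orbit_cw  = np.stack([np.cos(t), -np.sin(t)], axis=1)
orbit_ccw = np.stack([np.cos(t),  np.sin(t)], axis=1)

# Random reservoir; rho is the spectral radius highlighted in the abstract.
N, rho = 300, 0.9
W = rng.standard_normal((N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
W_in = 0.5 * rng.standard_normal((N, 2))

def run(W, W_in, u_seq):
    """Drive the reservoir with an input sequence; return the state history."""
    r = np.zeros(W.shape[0])
    states = []
    for u in u_seq:
        r = np.tanh(W @ r + W_in @ u)
        states.append(r.copy())
    return np.array(states)

# Train ONE readout on BOTH tasks (teacher forcing + ridge regression).
X = np.vstack([run(W, W_in, orbit_cw), run(W, W_in, orbit_ccw)])
Y = np.vstack([np.roll(orbit_cw, -1, axis=0), np.roll(orbit_ccw, -1, axis=0)])
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)

err = np.mean((X @ W_out - Y) ** 2)
print(err)  # one-step training error across both orbits
```

The reservoir's internal memory is what lets a single readout disambiguate the two fully overlapping orbits: the driven states differ even where the input points coincide.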
Posted Wed 07 Jul 2021 11:51:23 AM CEST by Andrew Flynn

Effects of degree distributions in random networks of type-I neurons

Carlo Laing
c.r.laing@massey.ac.nz
Massey University, Auckland, New Zealand

We consider large networks of theta neurons and use the Ott/Antonsen ansatz to derive degree-based mean-field equations governing the expected dynamics of the networks. Assuming random connectivity, we investigate the effects of varying the widths of the in- and out-degree distributions on the dynamics of excitatory or inhibitory synaptically coupled networks, and of gap-junction-coupled networks. For synaptically coupled networks, the dynamics are independent of the out-degree distribution. Broadening the in-degree distribution destroys oscillations in inhibitory networks and decreases the range of bistability in excitatory networks. For gap-junction-coupled neurons, broadening the degree distribution shifts the parameter values at which collective oscillations set in. Many of these results are shown to also hold in networks of more realistic neurons.
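
For context, a minimal sketch of the Ott-Antonsen reduction for a single theta-neuron population with Lorentzian-distributed excitabilities (the standard Luke-Barreto-So mean field, here with simple instantaneous mean-field coupling rather than the degree-based equations of the abstract; all parameters are illustrative):

```python
import numpy as np

def oa_rhs(z, eta_bar, delta, I):
    """Ott-Antonsen mean field for theta neurons with Lorentzian(eta_bar, delta)
    excitabilities and common input I: dz/dt for the Kuramoto order parameter z."""
    return -0.5j * (z - 1) ** 2 + 0.5j * (eta_bar + I + 1j * delta) * (z + 1) ** 2

def simulate(eta_bar, delta, k, T=100.0, dt=1e-3):
    """Euler integration with instantaneous mean-field coupling I = k * r."""
    z = 0.1 + 0j
    for _ in range(int(T / dt)):
        # Population firing rate expressed through z (standard conformal relation).
        r = (1 / np.pi) * np.real((1 - np.conj(z)) / (1 + np.conj(z)))
        z = z + dt * oa_rhs(z, eta_bar, delta, k * r)
    return z

z_final = simulate(eta_bar=-0.5, delta=0.1, k=2.0)
print(abs(z_final))  # degree of synchrony in the population
```

The degree-based equations of the abstract couple many such mean-field variables, one per in/out-degree class, through the network's degree distribution.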

Posted Thu 22 Jul 2021 10:32:18 AM CEST by Carlo Laing

Fast linear response algorithm for differentiating stationary measures of chaos

Angxiu Ni
niangxiu@math.berkeley.edu
Beijing International Center for Mathematical Research, Peking University, Beijing 100871, P. R. China

We devise a new algorithm, called the fast linear response algorithm, for accurately differentiating SRB measures with respect to some parameters of the dynamical system, where SRB measures are fractal limiting stationary measures of chaotic systems.

The core of our algorithm is the first numerical treatment of the unstable divergence, a central object in the linear response theory for fractal attractors. We derive the first computable expansion formula of the unstable divergence, in which all terms are functions rather than distributions. We then give a ‘fast’ characterization of the expansion by renormalized second-order tangent equations, whose second derivative is taken in a modified shadowing direction computed by the non-intrusive shadowing algorithm.

The new characterization makes the algorithm efficient and robust: its main cost is solving $u$ (the number of unstable dimensions) first-order and second-order tangent equations, and it does not compute oblique projections. Moreover, the algorithm works for chaos on Riemannian manifolds with any $u$; its convergence to the true derivative is proved for uniformly hyperbolic systems. The algorithm is illustrated on an example that is difficult for previous methods. The procedure is easy to understand and implement.
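
The fast linear response algorithm itself is beyond an abstract-sized sketch, but the quantity being differentiated is easy to illustrate. Below is a naive ensemble finite-difference estimate of the parameter derivative of a stationary (SRB) average for a uniformly expanding circle map, a hypothetical toy where linear response is known to hold; this brute-force approach suffers from bias/variance trade-offs that the algorithm above is designed to avoid:

```python
import numpy as np

def srb_average(s, n_orbits=400, n_steps=3000, n_discard=500, seed=0):
    """Long-time average of phi(x) = cos(2*pi*x) over the stationary measure
    of the uniformly expanding circle map x -> 2x + s*sin(2*pi*x) (mod 1)."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_orbits)
    total, count = 0.0, 0
    for n in range(n_steps):
        x = (2 * x + s * np.sin(2 * np.pi * x)) % 1.0
        if n >= n_discard:                     # average only after the transient
            total += np.cos(2 * np.pi * x).sum()
            count += n_orbits
    return total / count

# Naive centred finite difference of the stationary average w.r.t. s.
ds = 1e-2
dJds = (srb_average(0.05 + ds) - srb_average(0.05 - ds)) / (2 * ds)
print(dJds)
```

For small `s` the map stays uniformly expanding, so the stationary average is differentiable and the finite difference converges, albeit slowly, to the linear response.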

Posted Wed 07 Jul 2021 11:51:23 AM CEST by Angxiu Ni

Bifurcation mechanisms behind solitary states in neural networks

L. Schülen, A. Gerdes, M. Wolfrum and A. Zakharova
l.schuelen@campus.tu-berlin.de
Institut für Theoretische Physik, Technische Universität Berlin

Networks of coupled identical oscillators are known to exhibit a variety of partial synchronization patterns. Among them are solitary states, in which all but a few oscillators are synchronized in phase. Solitary states have been studied for nonlocally coupled networks with [1] and without delay [2], and for multilayer networks [3, 4, 5]. In this work, we investigate the occurrence of such solitary states in globally coupled ensembles of FitzHugh-Nagumo oscillators. By restricting the system to a single solitary oscillator and applying the thermodynamic limit, the system can be reduced to a master-slave configuration in which the solitary node acts as a probe oscillator in the mean field of the synchronized bulk. With this, we are able to show that solitary states are generated via a fold bifurcation on a branch originating in the symmetry-breaking bifurcation of the homogeneous steady state. We show that these states undergo a period-doubling cascade leading to chaotic behavior of the solitary oscillator. Allowing more than one node to deviate from the synchronized bulk leads to intriguing patterns of non-identical behavior in the solitary cluster.
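
A minimal sketch of the setting (not the bifurcation analysis): a globally coupled FitzHugh-Nagumo ensemble with one deliberately perturbed node, integrated with a simple Euler scheme. All parameter values are illustrative placeholders, not those of the paper:

```python
import numpy as np

def fhn_network(N=50, a=0.5, eps=0.05, sigma=0.3, T=50.0, dt=1e-3, seed=2):
    """Globally coupled FitzHugh-Nagumo ensemble:
    eps * du/dt = u - u^3/3 - v + sigma * (mean(u) - u),  dv/dt = u + a."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-2.0, 2.0, N)
    v = rng.uniform(-1.0, 1.0, N)
    u[0] += 2.0                    # kick one candidate 'solitary' node off the bulk
    for _ in range(int(T / dt)):
        mean_u = u.mean()          # global (all-to-all) mean-field coupling
        du = (u - u**3 / 3 - v + sigma * (mean_u - u)) / eps
        dv = u + a
        u = u + dt * du
        v = v + dt * dv
    return u, v

u, v = fhn_network()
bulk_spread = np.std(u[1:])        # how tightly the bulk has synchronized
print(bulk_spread, abs(u[0] - u[1:].mean()))
```

In the reduction described above, the bulk is replaced by a single self-consistent mean-field oscillator and node 0 becomes a slave probe driven by it, which makes the fold and period-doubling structure tractable.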

[1] L. Schülen, S. Ghosh, A. D. Kachhvah, A. Zakharova, S. Jalan, Delay engineered solitary states in complex networks, Chaos, Solitons & Fractals 128, 290-296, 2019
[2] E. Rybalova, V. S. Anishchenko, G. I. Strelkova, A. Zakharova, Solitary states and solitary state chimera in neural networks, Chaos 29, 071106, 2019
[3] M. Mikhaylenko, L. Ramlow, S. Jalan, A. Zakharova, Weak multiplexing in neural networks: Switching between chimera and solitary states, Chaos 29, 023122, 2019
[4] L. Schülen, D. A. Janzen, E. S. Medeiros, A. Zakharova, Solitary states in multiplex neural networks: Onset and vulnerability, Chaos, Solitons & Fractals 145, 110670, 2021
[5] E. Rybalova, A. Zakharova, G. I. Strelkova, Interplay between solitary states and chimeras in multiplex neural networks, Chaos, Solitons & Fractals 148, 111011, 2021

Posted Wed 07 Jul 2021 11:51:23 AM CEST by Leonhard Schülen

Representing and characterizing complex dynamics by state-transition networks

B. Sándor, Zs. I. Lázár, B. Schneider and M. Ercsey-Ravasz
bulcsu.sandor@ubbcluj.ro
Department of Physics, Babes-Bolyai University, Cluj-Napoca, Romania

State-transition networks (STNs) allow for a powerful representation and visualization of the dynamics of various complex systems with discrete states. Dynamical systems with continuous time and phase space can also be mapped to STNs by properly sampling both space and time. Inspired by the Lyapunov exponents well known from chaos theory, here we introduce a novel network measure for STNs, together with an algorithm for converting multivariate time series to state-transition networks [1]. This novel measure reflects the dynamical behavior of the underlying dynamical system; furthermore, unlike traditional network measures, it may be used to predict upcoming crisis-type bifurcations as the control parameters change. We explore the measure's properties through analytical and numerical results based on a theoretical model and demonstrate its applicability on the STN counterparts of the Hénon map and the Lorenz system.
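
A minimal sketch of the time-series-to-STN conversion on the Hénon map: coarse-grain the attractor into grid cells, read off the symbol sequence, and count transitions. The transition entropy below is a simple stand-in network measure, not the Lyapunov-inspired measure defined in [1]:

```python
import numpy as np
from collections import Counter

# Hénon map trajectory (transient discarded).
a, b = 1.4, 0.3
x, y = 0.1, 0.1
pts = []
for n in range(20000):
    x, y = 1 - a * x**2 + b * y, x
    if n >= 100:
        pts.append((x, y))
pts = np.array(pts)

# Coarse-grain phase space into grid cells -> one symbol per sample.
n_bins = 20
ix = np.digitize(pts[:, 0], np.linspace(pts[:, 0].min(), pts[:, 0].max(), n_bins))
iy = np.digitize(pts[:, 1], np.linspace(pts[:, 1].min(), pts[:, 1].max(), n_bins))
symbols = ix * (n_bins + 2) + iy          # unique cell id per (ix, iy) pair

# Directed, weighted state-transition network stored as an edge -> count map.
edges = Counter(zip(symbols[:-1], symbols[1:]))
n_nodes = len(set(symbols))

# Example network measure: Shannon entropy of the transition distribution.
w = np.array(list(edges.values()), dtype=float)
p = w / w.sum()
transition_entropy = -(p * np.log(p)).sum()
print(n_nodes, len(edges), transition_entropy)
```

Sweeping a control parameter (here `a`) and tracking such a measure on the resulting STNs is the kind of procedure the abstract uses to anticipate crisis-type bifurcations.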

[1] Sándor, B., Schneider, B., Lázár, Z. I., Ercsey-Ravasz, M., Entropy, 23(1), 2021

Posted Wed 07 Jul 2021 11:51:23 AM CEST by Bulcsú Sándor

Reconstructing Network Structures from Partial Measurements

Melvyn Tyloo, Robin Delabays, Philippe Jacquod
melvyn.tyloo@gmail.com
Department of Quantum Matter Physics, University of Geneva

The dynamics of systems of interacting agents is determined by the structure of their coupling network. Knowledge of this network is therefore highly desirable, for instance to develop efficient control schemes, to accurately predict the dynamics, or to better understand inter-agent processes. In many important and interesting situations, however, the network structure is not known, and previous investigations have shown how it may be inferred from complete measurement time series, on each and every agent. These methods implicitly presuppose that, even though the network is not known, all its nodes are. A major shortcoming of these methods is that they cannot provide any reliable information, not even on partial network structures, as soon as some agents are unobservable. Here, we construct a novel method that determines network structures even when not all agents are measurable. We establish analytically and illustrate numerically that velocity signal correlators encode not only direct couplings, but also geodesic distances in the coupling network, within the subset of measurable agents. When dynamical data are accessible for all agents, our method is furthermore algorithmically more efficient than traditional ones, because it does not rely on matrix inversion.
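
A minimal numerical illustration of the idea in the simplest (fully measured, linearized) setting: noisy diffusive dynamics on a hypothetical ring-plus-chord network, where direct couplings show up in the velocity-signal correlator. The paper's actual contribution, handling unobserved agents and reading geodesic distances out of the correlators, goes well beyond this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ground-truth network: a ring of N agents plus one extra chord.
N = 10
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
A[0, 5] = A[5, 0] = 1.0
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian of the coupling network

# Noisy diffusively coupled dynamics x' = -L x + noise (Euler-Maruyama).
dt, n_steps = 0.05, 100_000
x = np.zeros(N)
X = np.empty((n_steps, N))
for t in range(n_steps):
    x = x - dt * (L @ x) + np.sqrt(dt) * rng.standard_normal(N)
    X[t] = x

# Velocity-signal correlator: diffusive coupling anti-correlates neighbors'
# velocities, so direct couplings appear as the most negative off-diagonal entries.
V = np.diff(X, axis=0) / dt
C = (V.T @ V) / len(V)

n_edges = int(A.sum()) // 2
pairs = sorted(((C[i, j], i, j) for i in range(N) for j in range(i + 1, N)))
recovered = sum(A[i, j] == 1.0 for _, i, j in pairs[:n_edges])
print(recovered, "of", n_edges, "edges recovered")
```

Note that the reconstruction only ranks correlator entries; no matrix inversion is needed, which reflects the efficiency claim at the end of the abstract.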