Mar. 15, 2004 Many complex systems are composed of a large number of similar units that are connected in a complicated manner. An important example is provided by neural networks, where nerve cells in the brain communicate by exchanging pulses via synaptic connections. Unlike atoms in a crystal, which are arranged on a regular, e.g. cubic, lattice, nerve cells in the brain grow synaptic connections in a highly specific but irregular fashion. A central question for such systems is how rapid coordination, e.g. synchronization, between the units of a complex network can be achieved. Three theoretical neurophysicists from the Max Planck Institute for Flow Research in Goettingen have now shed new light on this question for networks of pulse-coupled oscillators, simple models of neural networks in the brain (Physical Review Letters 92: 074101, 2004).
To analyze the impact of a network's structure on its function, the scientists use the theory of random matrices. Initiated by the work of Wigner on correlations of energy levels in atomic nuclei, random matrix theory has been extensively investigated since the 1950s. Its range of application has grown continuously since then and today includes phenomena as different as quantum mechanical aspects of chaos and price fluctuations on financial markets. Timme, Wolf, and Geisel have now demonstrated that the theory of random matrices can also be applied to the dynamical evolution of complex networks. This new approach allows the impact of a network's topology on its dynamics to be explored systematically and analytically. From the theory of random matrices the researchers derived mathematical expressions that precisely determine how fast neurons can coordinate their activity, i.e. how fast neural networks can synchronize. Using these random matrix theory expressions, the dependence on properties of single neurons as well as on the network topology can be accurately predicted.
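The core idea can be sketched in a few lines of Python (an illustration under simple assumptions, not the authors' actual model; the network size n and in-degree k below are arbitrary): the eigenvalues of a network's coupling matrix govern how quickly a small deviation from the synchronous state decays, and the second-largest eigenvalue modulus sets the synchronization time scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 10  # network size and connections per unit (illustrative values)

# Random directed network: each unit averages input from k randomly chosen
# others, so every row of the coupling matrix sums to 1.
A = np.zeros((n, n))
for i in range(n):
    inputs = rng.choice([j for j in range(n) if j != i], size=k, replace=False)
    A[i, inputs] = 1.0 / k

# The largest eigenvalue is 1 (the synchronous state itself); the
# second-largest modulus lambda_2 controls how fast deviations from
# synchrony shrink, giving a time scale tau ~ -1 / log|lambda_2|.
lam2 = sorted(np.abs(np.linalg.eigvals(A)), reverse=True)[1]
tau = -1.0 / np.log(lam2)
print(f"|lambda_2| = {lam2:.3f}  ->  synchronization time ~ {tau:.2f} steps")
```

Because the matrix is random, statements about lambda_2, and hence about synchronization speed, become statements of random matrix theory: they can be made analytically for whole ensembles of networks rather than one wiring diagram at a time.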
As might be expected, they found that the stronger the synaptic connections between the units are, the faster the neurons synchronize. Intriguingly, however, the new study revealed that there exists a speed limit to network synchronization: even for arbitrarily strong interactions, synchronization cannot be achieved faster than an upper limit. This speed limit is set by the complicated connectivity of the network and is absent if every unit is coupled to every other. The limit originates from the fact that even if only a single unit is brought out of complete synchrony, this information must spread to all units in the network before synchronization can be achieved again.
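The speed limit can be made concrete in a linearized toy model (a sketch, not the paper's analysis): suppose a perturbation of the synchronous state evolves by repeated multiplication with (1 - eps) * I + eps * A, where eps plays the role of coupling strength and A is a row-normalized connectivity matrix. Cranking eps up to its maximum cannot push the per-step convergence factor below the second eigenvalue modulus of A itself, which is fixed by the topology; for all-to-all coupling that modulus is 1/(n-1), so the limit effectively disappears.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 100, 8  # network size and in-degree (illustrative values)

# Sparse random network: each unit averages input from k random others.
A = np.zeros((n, n))
for i in range(n):
    inputs = rng.choice([j for j in range(n) if j != i], size=k, replace=False)
    A[i, inputs] = 1.0 / k

# All-to-all network for comparison: every unit averages over all others.
F = (np.ones((n, n)) - np.eye(n)) / (n - 1)

def convergence_factor(M):
    """Second-largest eigenvalue modulus: the fraction of a perturbation
    surviving one step (smaller means faster synchronization)."""
    return sorted(np.abs(np.linalg.eigvals(M)), reverse=True)[1]

for eps in (0.25, 0.5, 1.0):  # eps = 1 is the strongest coupling in this model
    M = (1 - eps) * np.eye(n) + eps * A
    print(f"eps = {eps:.2f}: sparse-network factor = {convergence_factor(M):.3f}")

# Even at eps = 1 the sparse network's factor stays well above zero
# (a topology-imposed speed limit), while the all-to-all network's
# factor is 1/(n-1), so synchronization there is almost instantaneous.
print(f"all-to-all at eps = 1.00: factor = {convergence_factor(F):.4f}")
```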
If this analysis captures key mechanisms of coordinating activity in neural networks of the brain, it would mean that the speed of neural information processing, i.e. thinking and reacting, can be severely limited by network connectivity. For instance, the analysis revealed that in random networks the speed of synchronization increases only slowly with the average number of connections per neuron. This would imply that brain areas within which rapid information exchange is essential have to be highly connected.
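This slow growth can be seen in a toy calculation (again an illustration under simple assumptions, not the paper's derivation): for random row-normalized coupling matrices, the second eigenvalue modulus shrinks roughly like 1/sqrt(k) with the number of connections k per unit, so the synchronization rate grows only logarithmically in k. Quadrupling the connectivity buys a roughly constant increment in speed, not a fourfold one.

```python
import numpy as np

rng = np.random.default_rng(1)

def sync_rate(n, k):
    """Synchronization rate -log|lambda_2| of a random network in which
    each of n units averages input from k randomly chosen others."""
    A = np.zeros((n, n))
    for i in range(n):
        inputs = rng.choice([j for j in range(n) if j != i], size=k, replace=False)
        A[i, inputs] = 1.0 / k
    lam2 = sorted(np.abs(np.linalg.eigvals(A)), reverse=True)[1]
    return -np.log(lam2)

# |lambda_2| ~ 1/sqrt(k) gives rate ~ (log k) / 2: each quadrupling of k
# adds only about the same fixed amount to the rate.
for k in (4, 16, 64):
    print(f"k = {k:2d}: synchronization rate ~ {sync_rate(200, k):.2f}")
```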