## Biological Research

*print version* ISSN 0716-9760

### Biol. Res. v.40 n.4 Santiago 2007

#### http://dx.doi.org/10.4067/S0716-97602007000500009

In the last twenty years an important effort in brain sciences, especially in cognitive science, has been the development of mathematical tools that can deal with the complexity of extensive recordings of the neuronal activity of hundreds of neurons. We discuss here, along with some historical issues, the advantages and limitations of Artificial Neural Networks (ANN), which can help to understand how simple brain circuits work, and whether ANN can be helpful in understanding brain neural complexity.
A major challenge in biology today is to understand how "cognitive abilities" emerge from a highly interconnected and complex neural net such as the brain. A heuristic way to approach the problem has been to model the brain on the basis of simple neuronal circuits (interconnected neurons or agents) with a minimum of learning and memory rules. However, an enormous challenge remains in establishing a realistic model, since such a model must be embedded with cognitive properties. In this paper we will discuss some approaches that have been used to relate finite discrete networks, in the field of Artificial Neural Networks (ANN), to brain activity.
An ANN is defined by a tuple (W, b), where W is a real n×n weight matrix and b an n-dimensional threshold vector. Given a vector x ∈ {0,1}^n, the new vector value y is given by y = **1**(Wx − b), or equivalently:

y_i = **1**(Σ_j w_ij x_j − b_i), where **1**(u) = 1 if u ≥ 0 and 0 otherwise.

Such networks were introduced by McCulloch and Pitts (1943) as a model of the nervous system in their seminal article "A logical calculus of the ideas immanent in nervous activity." The ANN output follows a "neuron activation function," which may take, e.g., threshold, sigmoid, or linear form. The application of an ANN model leads to solutions of problems that tend to be associated with a particular topology, which can be acyclic (without feedback) or cyclic (recurrent), depending on how the neurons or nodes are interconnected. We do not plan to review the theory and applications of ANN in detail (for a review see Anderson and Rosenfeld, 1988; Arbib, 2002), but to present some issues related to their complexity and their possible relation to brain cognitive processes (e.g., learning and memory, decision making).
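The threshold rule above can be sketched directly in code. This is a minimal illustration, not the authors' implementation; the particular weights and thresholds chosen for the gates are standard textbook values, assumed here for concreteness.

```python
# A McCulloch-Pitts threshold unit: y = 1(w.x - b >= 0).

def threshold_unit(weights, bias, x):
    """Fire (return 1) iff the weighted input sum reaches the threshold b."""
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s - bias >= 0 else 0

# Elementary logical gates, each realized by a single threshold unit
# (weights/thresholds are illustrative standard choices):
def AND(a, b): return threshold_unit([1, 1], 1.5, [a, b])
def OR(a, b):  return threshold_unit([1, 1], 0.5, [a, b])
def NOT(a):    return threshold_unit([-1], -0.5, [a])
```

Each gate is one unit: AND fires only when both inputs are on (sum 2 reaches threshold 1.5), OR when at least one is on, and NOT inverts via a negative weight.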
At the cellular level a single artificial neuron should enfold cell excitability, that is, the biophysical properties describing the conductivity of the neuron membrane as a function of ion concentrations. The conductivity changes determine neuron inhibition or excitation, which will or will not trigger an action potential (or spike), depending on whether a threshold is reached. On the basis of such simple rules, McCulloch and Pitts proved that by assembling an arbitrary number of such elementary neuron units it was possible to build a computer. Since a computer is nothing else than a set of circuits, it suffices that they were able to build a universal set of logical gates, say the AND, OR, and NOT gates. For instance, consider the XOR function, i.e., XOR(1,0) = XOR(0,1) = 1 and XOR(1,1) = XOR(0,0) = 0. This function can be built from the elementary functions as shown in Figure 2. To recover biological regulatory properties in neural cognitive networks, e.g., dynamical feedback, recurrence, robustness, learning, and memory retrieval, a further step was important: the introduction of the Perceptron by Rosenblatt in the late 1950s.
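The composition of XOR from elementary gates can be sketched as follows. The wiring used here is the standard construction XOR(a,b) = (a OR b) AND NOT(a AND b), assumed for illustration rather than copied from Figure 2.

```python
# XOR is not computable by a single threshold unit, but it composes
# from three layers of elementary threshold gates.

def step(s):
    """The threshold (Heaviside) activation 1(s >= 0)."""
    return 1 if s >= 0 else 0

def AND(a, b): return step(a + b - 1.5)
def OR(a, b):  return step(a + b - 0.5)
def NOT(a):    return step(-a + 0.5)

def XOR(a, b):
    # (a OR b) AND NOT(a AND b): on iff exactly one input is on.
    return AND(OR(a, b), NOT(AND(a, b)))
```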
The Perceptron is a very simple ANN able to recognize some families of patterns. It consists of a square retina (a bidimensional array of cells) connected by weighted wires (w_i) to a threshold unit. The idea was the following: suppose we want to recognize patterns which belong to a family F; the Perceptron adjusts its weights so as to separate the patterns in F from those outside it, which is possible whenever the two classes are linearly separable. Unfortunately, Minsky and Papert (1988), in their famous book "Perceptrons," later proved that the linearly separable cases are not the most interesting ones. There exist many patterns that our brain classifies quickly and yet are not linearly separable. The Minsky and Papert results literally slowed down, for more than ten years, the use of ANN.
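Rosenblatt's weight-adjustment idea can be sketched on a tiny linearly separable example. The dataset (the AND function), learning rate, and epoch count are illustrative assumptions; the update rule is the classical Perceptron rule.

```python
# Perceptron learning: nudge the separating hyperplane toward each
# misclassified point until the (linearly separable) set is classified.

def train_perceptron(samples, epochs=20, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if w[0] * x[0] + w[1] * x[1] - b >= 0 else 0
            err = target - y
            # On error, move weights toward (or away from) the input.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b -= lr * err
    return w, b

# AND is linearly separable, so the rule converges.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

For a non-linearly-separable target such as XOR, the same loop never settles, which is exactly the limitation Minsky and Papert exposed.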
Geometrically speaking, a Perceptron can only separate two classes of patterns by a hyperplane, i.e., it can only handle linearly separable classes. Another example, more related to our perception, is the failure of a Perceptron to decide whether a figure is connected.
Only in the 1980s did more powerful models of ANN appear, able to learn to recognize non-linearly separable patterns. Essentially, one considers a multilayer Perceptron, trained by error back-propagation (Rumelhart et al., 1986).
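A minimal multilayer Perceptron trained by back-propagation on XOR, the classic non-linearly-separable case, can be sketched as below. Layer sizes, learning rate, iteration count, and the squared-error loss are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input  -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)                      # forward pass
    Y = sigmoid(H @ W2 + b2)
    losses.append(float(np.mean((Y - T) ** 2)))
    dY = (Y - T) * Y * (1 - Y)                    # backward pass:
    dH = (dY @ W2.T) * H * (1 - H)                # chain rule through layers
    W2 -= H.T @ dY; b2 -= dY.sum(axis=0)          # gradient descent step
    W1 -= X.T @ dH; b1 -= dH.sum(axis=0)
```

Unlike the single Perceptron, the hidden layer gives the network internal representations in which XOR becomes separable, so the training error decreases.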
Symmetric neural networks (SNN) appeared in the 1980s, related to some automata dynamics and to the Ising model in statistical physics. We say that a network is symmetric when its weight matrix satisfies w_ij = w_ji. To illustrate the associative memory model we will consider the weight matrix defined by the Hebb rule, in which each pattern x^k to be memorized contributes the products of its components: w_ij = Σ_k x_i^k x_j^k, with w_ii = 0. Hopfield established the conditions under which such vectors are attractors and also proved experimentally the retrieval qualities of that network: vectors at small Hamming distance from an attractor converge to it under a sequential update procedure. In fact, researchers in statistical physics showed that the retrieval capacity of this model is high only when the number of patterns to be memorized is small, about 0.14n, where n is the number of artificial neurons (Kamp and Hasler, 1990). An interesting point of Hopfield's analysis is that each network is driven by an energy operator similar to that of the Ising model. In this context, each step of the network reduces its energy, so fixed points (i.e., memorized vectors) are local minima (see also Goles et al., 1985). A similar result was obtained for the synchronous update, concerning also the period-two configurations (Goles and Olivos, 1980). One may believe that an SNN is less powerful than a non-symmetric one, but their computing performances are equivalent. In fact, to convey information in an SNN it is enough to establish a line of neurons with a decreasing ratio between thresholds and weights. Furthermore, after building symmetric gates it is possible to simulate any circuit (Figure 6).
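The Hebb-rule storage and retrieval described above can be sketched for a single ±1 pattern. The pattern, network size, and number of update sweeps are illustrative assumptions.

```python
import numpy as np

# Store one +/-1 pattern with the Hebb rule: W = p p^T, zero diagonal.
p = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0)                      # no self-coupling (w_ii = 0)

# Corrupt the pattern at Hamming distance 1 from the attractor.
x = p.copy()
x[0] = -x[0]

# Sequential (asynchronous) threshold dynamics: sign of the local field.
for _ in range(5):
    for i in range(len(x)):
        x[i] = 1 if W[i] @ x >= 0 else -1
```

With a single stored pattern the local field at every unit points back toward the memorized vector, so the corrupted bit is repaired and the dynamics settles at the fixed point p, illustrating retrieval by energy descent.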
In order to illustrate the complexity of ANN dynamics, let us consider the following: given a symmetric n×n neural net with states −1 and 1, is it possible to know in a reasonable amount of time whether the network admits fixed points (problem (H))? At first sight this seems a very simple question, but it is actually very hard: the majority of problems related to neural net dynamics are hard to solve. Indeed, (H) can be proved NP-Complete by reducing to it a problem (P) already known to be NP-Complete (see Floreen, 1992).
As we said at the beginning, the majority of interesting dynamical problems for neural nets are hard to solve, so one has to develop approximations or local strategies to build neural nets that solve particular problems. From this point of view one may say that, since McCulloch's work, the use of ANN models to understand real neurons remains, at least theoretically, a challenge at the edge of our mathematical and algorithmic knowledge. An important issue that could help develop further applications of ANN to brain sciences is to expand, mathematically, their internal adaptive properties. For instance, the threshold function b should be set to depend not only on a short-term plasticity process but also on a long-term plasticity process that would involve, e.g., the synthesis of new proteins, similar to the long-term plasticity that follows learning (Whitlock et al., 2006). Recently, the introduction of new methodologies, such as multielectrode arrays for simultaneous neural recording, opens new issues for ANN, since we are now able to explore the way brain neural networks develop associative or cooperative behavior (i.e., synchronization, oscillation), which is critical to understand the mechanisms of brain communication necessary to bring perceptual coherence to brain activity.
Partially supported by Fondecyt 1070022, Centro de Modelamiento Matemático (CMM-UCH) (E.G.), PBCT-CONICYT ACT45 (AGP). We are grateful to John Ewer for editorial suggestions.
## References

- ANDERSON JA, ROSENFELD E (1988) Neurocomputing: Foundations of Research. MIT Press.
- ARBIB MA (2002) The Handbook of Brain Theory and Neural Networks. MIT Press.
- FLOREEN P (1992) Computational Complexity Problems in Neural Associative Memories. Report A-1992-5, Dept. of Computer Science, Univ. of Helsinki, Finland.
- GOLES E (1982) Fixed point of threshold functions on a finite set. SIAM Journal on Algebraic and Discrete Methods 3(4).
- GOLES E, FOGELMAN F, PELLEGRIN D (1985) The energy as a tool for the study of threshold networks. Discrete Applied Maths 12: 261-277.
- GOLES E, OLIVOS J (1980) Periodic behavior of generalized threshold functions. Discrete Math 30: 187-189.
- HOPFIELD JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79: 2554-2558.
- KAMP Y, HASLER M (1990) Réseaux de neurones récursifs pour mémoires associatives. Presses Polytechniques Romandes, Lausanne, Switzerland (English edition: Recursive Neural Networks for Associative Memory, Wiley).
- MCCULLOCH W, PITTS W (1943) A logical calculus of the ideas immanent in nervous activity. Bull of Mathematical Biophysics 5: 521-531.
- MINSKY M, PAPERT S (1988) Perceptrons. MIT Press, expanded edition (first edition 1969).
- ROSENBLATT F (1958) The Perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65: 386-408.
- RUMELHART DE, HINTON GE, WILLIAMS RJ (1986) Learning internal representations by error propagation. In: Rumelhart and McClelland (eds) Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, Cambridge, pp 318-362.
- WHITLOCK JR, HEYNEN AJ, SHULER MG, BEAR MF (2006) Learning induces long-term potentiation in the hippocampus. Science 313: 1093-1097.

Received: October 10, 2007. Accepted: March 3, 2008.