Statistical description of the operation of one neuron

When a neuron is immersed in an active neuronal network, we can describe its activity with a statistical formulation. This will later allow us to put together many neurons in one network and understand the emergent activation properties of the circuit.

The first thing that we will assume is that the current entering through the neuron's membrane, as a result of all the activity of its presynaptic neurons, is highly noisy and irregular and can be described as Gaussian noise (white or colored). Statistically, such a current can be described with two parameters: the mean current μ and its standard deviation σ.
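
As a hedged illustration, the short sketch below draws discretized samples of such a Gaussian white-noise current in Python; the time step, duration, and values of μ and σ are assumptions chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.1       # time step (ms); assumed value
T = 100.0      # duration (ms); assumed value
mu = 15.0      # mean current mu (illustrative, voltage units of mV)
sigma = 5.0    # noise amplitude sigma (illustrative)

# Gaussian white noise: independent samples; the 1/sqrt(dt) factor keeps
# the noise power independent of the discretization step.
I = mu + sigma * rng.standard_normal(int(T / dt)) / np.sqrt(dt)
print(f"empirical mean: {I.mean():.2f}, sigma estimate: {I.std() * np.sqrt(dt):.2f}")
```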

Driven by this noisy input current, the neuron's membrane potential also follows an irregular trajectory, which occasionally reaches a threshold that makes the neuron fire an action potential. The time between two action potentials of a single neuron is called the ISI (inter-spike interval), which can be computed from the parameters μ and σ of the input current of an integrate-and-fire neuron using the Fokker-Planck formalism. This computation allows us to define a response function Φ of the neuron, depending on μ and σ, that gives the discharge frequency ν of the neuron when subjected to a stream of Gaussian noise defined by these parameters (the discharge frequency ν is defined as the inverse of the average ISI of the neuron).
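
A minimal simulation sketch, assuming a leaky integrate-and-fire neuron with illustrative parameters (membrane time constant τ, threshold θ, reset potential), shows how the ISIs and the discharge frequency ν can be estimated from a noisy input with given μ and σ:

```python
import numpy as np

rng = np.random.default_rng(1)

tau, theta, V_reset = 20.0, 20.0, 10.0    # ms, mV, mV (assumed values)
dt, T = 0.1, 10000.0                      # time step and duration (ms)
mu, sigma = 18.0, 5.0                     # Gaussian input parameters (mV)

V = V_reset
spikes = []
for k in range(int(T / dt)):
    # Euler-Maruyama step of tau*dV/dt = -V + mu + sigma*sqrt(tau)*xi(t)
    V += (dt / tau) * (mu - V) + sigma * np.sqrt(dt / tau) * rng.standard_normal()
    if V >= theta:            # threshold crossing: the neuron fires
        spikes.append(k * dt)
        V = V_reset           # reset after the action potential

isi = np.diff(spikes)                     # inter-spike intervals (ms)
nu = 1000.0 / isi.mean()                  # discharge frequency (Hz)
print(f"{len(spikes)} spikes, mean ISI {isi.mean():.1f} ms, nu = {nu:.1f} Hz")
```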

We have learnt, then, that a neuron immersed in a network showing highly variable activity can be described once we know the parameters of the Gaussian that approximates the current inputs it receives. The statistical description therefore provides a mathematical relationship between the input parameters and the discharge of the neuron:

ν = Φ(μ, σ)    (1)
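
For the leaky integrate-and-fire neuron, the first-passage-time calculation in the Fokker-Planck formalism yields the classical Siegert formula for Φ. The sketch below implements it numerically; the neuron parameters are assumptions for illustration, and erfcx is used to evaluate exp(u²)(1 + erf(u)) without overflow. Its output can be compared with the rate estimated by the simulation sketch above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx

def phi(mu, sigma, tau=20.0, theta=20.0, V_reset=10.0, tau_ref=2.0):
    """Firing rate (Hz) of an LIF neuron for Gaussian input (mu, sigma)."""
    a = (V_reset - mu) / sigma            # lower integration bound
    b = (theta - mu) / sigma              # upper integration bound
    # exp(u^2) * (1 + erf(u)) == erfcx(-u), numerically stable form
    integral, _ = quad(lambda u: erfcx(-u), a, b)
    mean_isi = tau_ref + tau * np.sqrt(np.pi) * integral   # in ms
    return 1000.0 / mean_isi

print(f"phi(18, 5) = {phi(18.0, 5.0):.1f} Hz")
```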

How can we compute the states of a recurrent network?

The synaptic current is generated by the activity of the presynaptic neurons, so the mean μ and the standard deviation σ depend on their discharge frequency according to the following formulae:

μ = μ_ext + C J ν τ
σ² = σ²_ext + C J² ν τ    (2)

where C is the number of presynaptic neurons of our neuron, ν is their discharge frequency, J is the strength of the connections, τ is the membrane time constant, and the parameters μ_ext and σ²_ext describe the components of the input current that do not depend on the neurons we are simulating but result from interactions with neurons in the external circuit.
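
A direct transcription of equations (2), with illustrative values for C, J, τ, μ_ext and σ_ext (none of these numbers come from the text):

```python
import numpy as np

def input_stats(nu, C=1000, J=0.02, tau=20.0, mu_ext=12.0, sigma_ext=4.0):
    """Mean and std of the input current for presynaptic rate nu (Hz)."""
    nu_ms = nu / 1000.0                   # convert Hz -> spikes/ms
    mu = mu_ext + C * J * nu_ms * tau     # eq. (2), mean
    sigma = np.sqrt(sigma_ext**2 + C * J**2 * nu_ms * tau)  # eq. (2), std
    return mu, sigma

print(input_stats(5.0))   # input produced by 5 Hz presynaptic firing
```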

Therefore, such a formulation allows us to move from the discharge frequency of a group of presynaptic neurons to the discharge frequency of the postsynaptic neuron through the Gaussian parameters that relate them. However, in a recurrent network a good fraction of the presynaptic neurons belong to the same network, so the neuron we call postsynaptic is itself presynaptic to many other neurons, and vice versa. The only way to make this picture self-consistent is to impose that the pre- and postsynaptic discharge frequencies be equal; we then arrive at a closed equation: ν defines μ and σ by means of equations (2), whereas μ and σ define ν by means of equation (1). What solutions satisfy this condition?
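
One way to look for such solutions numerically is a fixed-point iteration that alternates equations (2) and (1); the sketch below reuses the illustrative phi() and input_stats() functions defined above.

```python
# Self-consistency sketch: iterate nu -> Phi(mu(nu), sigma(nu)) until
# the rate settles (all parameter values remain illustrative).
nu = 1.0                                  # initial guess (Hz)
for _ in range(200):
    mu, sigma = input_stats(nu)           # equations (2)
    nu_new = phi(mu, sigma)               # equation (1)
    if abs(nu_new - nu) < 1e-6:
        break
    nu = nu_new
print(f"self-consistent rate: {nu:.2f} Hz")
```

Note that this simple iteration can only settle onto stable solutions; unstable ones repel it, which anticipates the distinction drawn below.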

We can study this graphically. If we plot the postsynaptic discharge frequency as a function of the presynaptic discharge frequency, the solutions we seek are the intersections with the bisector (the line along which the two discharge frequencies are equal).
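
The same construction can be carried out numerically: scan a grid of presynaptic rates, evaluate the postsynaptic rate, and locate the sign changes of their difference, which mark the intersections with the bisector (again reusing the illustrative functions above).

```python
import numpy as np

nus = np.linspace(0.0, 60.0, 601)         # grid of presynaptic rates (Hz)
post = np.array([phi(*input_stats(nu)) for nu in nus])
gap = post - nus                          # zero exactly on the bisector
crossings = nus[np.where(np.diff(np.sign(gap)) != 0)[0]]
print("approximate self-consistent rates (Hz):", crossings)
```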

Stable activity states of the network

There can be two types of solutions: unstable solutions, from which small perturbations drive the system away, and stable solutions, to which the system returns after small perturbations.
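
Under the same illustrative assumptions, stability can be checked from the slope of the effective transfer function ν → Φ(μ(ν), σ(ν)) at each crossing: a slope below 1 means small perturbations shrink back toward the solution, a slope above 1 means they grow.

```python
# Stability check sketch, reusing phi(), input_stats() and the crossings
# found in the scan above; the finite-difference step eps is arbitrary.
def effective_slope(nu, eps=0.1):
    f = lambda r: phi(*input_stats(r))
    return (f(nu + eps) - f(nu - eps)) / (2 * eps)

for nu_star in crossings:
    kind = "stable" if effective_slope(nu_star) < 1.0 else "unstable"
    print(f"nu = {nu_star:.1f} Hz -> slope {effective_slope(nu_star):.2f} ({kind})")
```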

A network can function as working memory

In order to have an operating regime that allows the network to keep a memory, the system must have at least two stable states as solutions of equations (1) and (2). A brief stimulus can then bring the system to one of the two states, where it will remain; by checking the state of the system later, one can "remember" which stimulus had been presented. How can we obtain this behavior? As depicted in the figure (left), the trick is to choose a suitable value of the parameter J for the Φ function, which must be an increasing function whose concavity changes around the bisector. In such a way, we have two stable states:

  1. The state of spontaneous activity (low discharge rate across the network).
  2. The state of persistent activity (high discharge rate in a subpopulation of the network).

These two stable states change if we change the parameter J, the strength of the connections within the network, so the desired behavior (two stable states) only exists within a relatively narrow range of values of J. The network must therefore be quite finely tuned to function as a working-memory system, as the sketch below illustrates.
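
The sketch below illustrates this fine-tuning, counting the crossings with the bisector as J varies; the range of J values is arbitrary and the functions are the illustrative ones defined earlier. Three crossings correspond to the desired bistable regime (two stable states separated by an unstable one).

```python
import numpy as np

nus = np.linspace(0.0, 100.0, 1001)       # grid of rates (Hz)
for J in [0.01, 0.02, 0.03, 0.04, 0.05]:  # illustrative coupling strengths
    post = np.array([phi(*input_stats(nu, J=J)) for nu in nus])
    n_cross = int(np.count_nonzero(np.diff(np.sign(post - nus))))
    print(f"J = {J:.2f}: {n_cross} crossing(s) with the bisector")
```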