A building block of the neuronal Generalized Linear Model: How to generate spikes from a Poisson process with a given rate?
What is a theory? According to Wikipedia: "A theory is a contemplative and rational type of abstract or generalized thinking, or the results of such thinking." For me, a theory is a compression of complex streams of data from phenomena into a succinct model. It must be possible to derive novel predictions from theories, so that we can test them and compare them to reality and data.
Neuroscience is a science of the brain. As such, there are a number of theories that pervade it. One such theory is a mathematical description of how stimuli are translated into the language of neurons, a neural code. This is called encoding, and it's the topic of this blog post. For a fun YouTube channel, #TheoryMatters, with interviews of neuroscientists who discuss the importance of theories in neuroscience, see https://www.youtube.com/results?search_query=%23theoryMatters
Here's an encoding model which is an instance of a statistical model called the Generalized Linear Model, more specifically a Linear-Nonlinear-Poisson model.
This is a causal model of how a spike train is generated by a neuron. As input we have a stimulus. This stimulus gets matched, in a convolution, with the receptive field of the neuron (an idealization of the prototypical stimulus features that the neuron is sensitive to, i.e. tends to respond to). The convolved stimulus is passed through a nonlinearity which turns it into a Poisson rate parameter, which describes counts of spikes in independent time bins. Individual spikes are generated from this rate.
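To make the three stages concrete, here is a minimal sketch of the Linear-Nonlinear-Poisson pipeline. The stimulus, the receptive-field filter, and the exponential nonlinearity are all made up for illustration; they are not the particular choices used in the book.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                          # 1 ms time bins
stimulus = rng.normal(size=1000)    # a made-up white-noise stimulus
rf = np.exp(-np.arange(20) / 5.0)   # a toy exponentially decaying receptive field

# Linear stage: convolve the stimulus with the receptive field.
drive = np.convolve(stimulus, rf, mode="same")

# Nonlinear stage: a nonlinearity maps the drive to a non-negative
# firing rate (an exponential is one common, assumed choice here).
rate = np.exp(drive)                # spikes per second

# Poisson stage: the spike count in each bin is Poisson with mean rate*dt.
spikes = rng.poisson(rate * dt)
```

The rest of this post drops the linear and nonlinear stages and keeps only the last one, with a constant rate.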
In this post we will study just a part of this model, i.e. the rate, the dice, and the spike part of the diagram. It's based on the exercises of the first chapter of a classic textbook, "Theoretical Neuroscience" by Dayan and Abbott. The book and the exercises are available for free at http://www.gatsby.ucl.ac.uk/~lmate/biblio/dayanabbott.pdf and http://www.gatsby.ucl.ac.uk/~dayan/book/exercises/ .
The first chapter of the book is on Encoding, and I would like to understand encoding models thoroughly; the book works through every stage of the diagram. You have to start somewhere, so I start with the first exercise from the book. The first and simplest task is to generate spikes from a Poisson process with a constant rate. This means that the stimulus is not dynamic: it doesn't change in time.
To do so I followed the algorithm from: https://www.tu-chemnitz.de/informatik/KI/scripts/ws0910/Neuron_Poisson.pdf
The algorithm is:
1. Split time into consecutive time bins dt (we choose a bin of 1 ms, because a spike lasts about 1 ms).
2. For each bin, draw a sample from a uniform random distribution between 0 and 1.
3. Compare the sample to the quantity rate*dt. If the sample is smaller or equal, the neuron emits a spike in that bin; otherwise, it doesn't.
The task is then to quantify the variance of the interspike interval distribution and its Fano factor (the variance of spike counts in time windows divided by their mean), and to plot the interspike interval histogram.
Here's the code for the task and the result:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

r = 100          # firing rate in Hz
dt = 1.0 / 1000  # 1 ms time bin
spike_lst = []
# Simulate for 10 seconds, i.e. 10000 bins of 1 ms, with a rate of 100 Hz.
for t in range(0, 10 * 1000):
    if np.random.uniform() <= r * dt:
        spike_lst.append(1)
    else:
        spike_lst.append(0)
plt.scatter(range(0, 100), spike_lst[0:100])
plt.show()
# Spike times are the indices of the bins that contain a spike.
times = []
for i in range(0, len(spike_lst)):
    if spike_lst[i] == 1:
        times.append(i)

# Interspike intervals (in ms, since the bins are 1 ms wide).
isi = []
for t in range(1, len(times)):
    isi.append(times[t] - times[t - 1])
var = np.var(isi)
print(var)
plt.hist(isi, bins=20)
plt.show()

# Spike counts in consecutive 100 ms windows, and their Fano factor.
spike_counts = []
for f in range(100, len(spike_lst) + 100, 100):
    spike_counts.append(sum(spike_lst[f - 100:f]))
Fano = np.var(spike_counts) / np.mean(spike_counts)
print(np.var(spike_counts))
print(Fano)
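As an aside, the same simulation and analysis can be written without explicit Python loops. Here is a vectorized NumPy sketch (equivalent up to the random draws; the variable names are mine, not the book's):

```python
import numpy as np

r, dt, T = 100, 0.001, 10.0   # rate in Hz, 1 ms bins, 10 s of simulation
n_bins = int(T / dt)

# One uniform draw per bin; a spike occurs wherever the draw is <= r*dt.
spikes = (np.random.uniform(size=n_bins) <= r * dt).astype(int)

# Interspike intervals in ms, straight from the spike-time indices.
times = np.flatnonzero(spikes)
isi = np.diff(times)

# Fano factor of the counts in consecutive 100 ms windows.
counts = spikes.reshape(-1, 100).sum(axis=1)
fano = counts.var() / counts.mean()
```

This runs in a few milliseconds and makes the bin-by-bin Bernoulli trial structure explicit in a single comparison.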
Here are the plots (sorry Blogger cuts the images for some reason that I don't know how to fix):
(1 is spiking, 0 is no activity)-- this is a spike train over 100 ms.
This is the interspike interval distribution, which is exponential for Poisson neurons.
The variance of the interspike interval distribution was 82 ms² (a standard deviation of about 9 ms).
The Fano factor for the spike counts (0.92) closely matched the theoretical prediction of 1 (a Poisson distribution has equal mean and variance).
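A small side check: with one Bernoulli trial per 1 ms bin, the counts in a 100 ms window are actually binomial rather than exactly Poisson, which predicts a Fano factor slightly below 1:

```python
# Counts per 100-bin window follow Binomial(n=100, p=r*dt) = Binomial(100, 0.1),
# for which the Fano factor is var/mean = (1 - p).
n, p = 100, 0.1
mean = n * p             # 10 expected spikes per window
var = n * p * (1 - p)    # 9
print(var / mean)        # 0.9, close to the measured 0.92
```

So the measured 0.92 is about where the binned simulation should land; shrinking dt would push it toward the ideal Poisson value of 1.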
So that's it. The first exercise is complete. I would like to also do the other exercises and post solutions. For example, one exercise requires you to incorporate refractory effects into the Poisson rate. And there are exercises for approximating the filter of a neuron, i.e. its receptive field (see the picture of the Linear-Nonlinear-Poisson model above), from a stimulus and a spike train. Lots of interesting stuff to do. I think it's really worth it to sweat through the exercises and do the implementations. Working through the book gives you a solid foundation in neural models so that you can better understand the literature.
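As a teaser for the refractory exercise, here is one simple way to add an absolute refractory period to the bin-by-bin simulation: forbid spiking for a fixed number of bins after each spike. The 2 ms value and this particular scheme are just assumptions for illustration, not the book's prescribed solution.

```python
import numpy as np

r, dt = 100, 0.001            # 100 Hz rate, 1 ms bins
refractory_bins = 2           # assumed 2 ms absolute refractory period
spikes = []
since_last = refractory_bins  # bins elapsed since the last spike
for t in range(10 * 1000):
    # A spike is only possible once the refractory period has elapsed;
    # the 2 bins immediately after a spike are silent by construction.
    if since_last >= refractory_bins and np.random.uniform() <= r * dt:
        spikes.append(1)
        since_last = 0
    else:
        spikes.append(0)
        since_last += 1
```

With this change the interspike interval histogram should show a gap at short intervals instead of the pure exponential decay, and the Fano factor should drop below 1, since the refractory period makes the spike train more regular.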