
(Optional) Tutorial 2#

Probability & Statistics III: Statistical Encoding & Decoding

[insert your name]

Important reminders: Before starting, click “File -> Save a copy in Drive”. To produce a PDF for submission, use “File -> Print” and then choose “Save to PDF”.

To complete this tutorial, you should have watched Videos 8.1 and 8.2.

Imports

# @markdown Imports

# Imports
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets  # interactive display
import math
import scipy
import scipy.interpolate
import time
from ipywidgets import *

Plotting functions

# @markdown Plotting functions

import numpy
from numpy.linalg import inv, eig
from math import ceil
from matplotlib import pyplot, ticker, get_backend, rc
from mpl_toolkits.mplot3d import Axes3D
from itertools import cycle

from matplotlib.gridspec import GridSpec


%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")

Helper functions

# @markdown Helper functions
def get_ps_basis(ps_filter_basis, dt):
    # Build the post spike filter basis (raised cosines) at resolution dt

    ncos = ps_filter_basis['ncos']      # number of raised cosine basis vectors
    b = ps_filter_basis['b']            # offset controlling the log stretch of the time axis
    hpeaks = ps_filter_basis['hpeaks']  # positions of the first and last peaks
    absref = ps_filter_basis['absref']  # absolute refractory period (not implemented here)

    ihbasis, iht = get_cos_basis(hpeaks, b, ncos, dt)

    if absref > 0:
        raise NotImplementedError

    # Set first cosine basis vector bins (before 1st peak) to 1
    ihbasis[iht <= hpeaks[0], 0] = 1

    # Prepend a zero bin at lag 0
    hbasis = np.pad(ihbasis, ((1, 0), (0, 0)))

    return hbasis

def get_cos_basis(peaks, b, ncos, dt):
    # Generate a basis of raised cosine bumps, linearly spaced in log(t + b)
    # peaks: positions of the first and last bump peaks
    # b:     offset controlling the log stretch of the time axis
    # ncos:  number of basis vectors
    # dt:    time step

    yrnge = np.log(peaks + 1e-20 + b)
    db = (yrnge[1] - yrnge[0]) / (ncos - 1)   # spacing between bump centers
    ctrs = np.arange(yrnge[0], yrnge[1] + .1, db)
    mxt = np.exp(yrnge[1] + 2 * db) - 1e-20 - b
    kt0 = np.arange(0, mxt, dt)

    nt = len(kt0)

    x = np.tile(np.log(kt0 + b + 1e-20)[:, None], (1, ncos))
    c = np.tile(ctrs, (nt, 1))

    aa = np.minimum(math.pi, (x - c) * math.pi / db / 2)
    kbasis0 = (np.cos(np.maximum(-math.pi, aa)) + 1) / 2

    return kbasis0, kt0


def get_stim_filter_basis(stim_filter_basis, nkt, dt):
    # Based on makeBasis_StimKernel.m

    neye = stim_filter_basis['neye']      # number of identity basis vectors
    ncos = stim_filter_basis['ncos']      # number of raised cosine basis vectors
    kpeaks = stim_filter_basis['kpeaks']  # positions of the first and last peaks
    b = stim_filter_basis['b']            # offset controlling the log stretch of the time axis
    kdt = 1

    if neye != 0:
        raise NotImplementedError('neye != 0 not implemented')

    kbasis0, kt0 = get_cos_basis(kpeaks, b, ncos, kdt)

    nkt0 = kt0.shape[0]
    kbasis0 = kbasis0[::-1]  # flip time so the last sample corresponds to zero lag

    # Pad with zeros at the start so the basis covers nkt ms
    if nkt0 < nkt:
        kbasis0 = np.pad(kbasis0, ((nkt - nkt0, 0), (0, 0)))
    else:
        raise NotImplementedError

    # Normalize columns and interpolate onto a grid with resolution dt
    kbasisTemp = kbasis0 / np.sqrt(np.sum(kbasis0**2, axis=0))
    nkb = kbasisTemp.shape[1]
    lenkb = kbasisTemp.shape[0]
    kbasis = np.zeros((int(lenkb / dt), nkb))
    for bNum in range(nkb):
        interp = scipy.interpolate.interp1d(np.arange(0, lenkb),
                                            kbasisTemp[:, bNum],
                                            bounds_error=False)
        kbasis[:, bNum] = interp(np.linspace(0, lenkb - 1, int(lenkb / dt)))

    return kbasis


def remove_abs_ref(A):
  # Replace the very negative values that mark the absolute refractory period
  # with interpolated values, so the traces plot nicely.
  # https://stackoverflow.com/questions/6518811/interpolate-nan-values-in-a-numpy-array

  A[A < -500] = np.nan
  ok = ~np.isnan(A)
  xp = ok.ravel().nonzero()[0]
  fp = A[~np.isnan(A)]
  x = np.isnan(A).ravel().nonzero()[0]

  A[np.isnan(A)] = np.interp(x, xp, fp)
  return A


def simulate_glm(k_component_2, h_component_1):

    # Get basis filters
    dt = 0.1
    nkt = 100 # 100 ms in stim filter
    len_sim = 600
    x = np.zeros((int(len_sim/dt),))
    x[int(100/dt):int(500/dt)] = .4  # stimulus pulse from 100 ms to 500 ms

    stim_filter_basis = {'neye': 0, 'ncos': 2, 'kpeaks': np.array([0, round(nkt/4)]), 'b': 100}
    ps_filter_basis = {'ncos': 2, 'hpeaks': np.array([10, 40]), 'b': 50, 'absref': 0}

    kbasis = get_stim_filter_basis(stim_filter_basis, nkt, dt)
    hbasis = get_ps_basis(ps_filter_basis, dt)

    k_params = np.array([1, k_component_2])
    k = np.dot(kbasis, k_params)

    h_params = [h_component_1, -3]
    h = np.dot(hbasis, h_params)
    h = np.pad(h, (round(3/dt), 0), constant_values=-1000)  # hard 3 ms refractory period

    dc = -1

    ### Simulate GLM firing: get filtered output over time and spikes
    nTimePts = len(x)
    refreshRate = 1000/dt  # stimulus is in ms, sampled at dt

    g = np.zeros((nTimePts + len(h),))  # log firing rate: filtered stimulus + dc (padded so h can be added near the end)
    y = np.zeros((nTimePts, 1))  # response vector (0 or 1 spikes per bin)
    r = np.zeros((nTimePts + len(h) - 1,))  # firing rate (output of nonlinearity)
    hcurr = np.zeros(g.shape)  # summed post spike filter contributions

    xconvk = np.convolve(x, k[::-1], 'full')[:-len(k)+1]

    g = np.pad(xconvk + dc, (0, len(h)))

    # Loop over time to get spikes and add post spike filter
    for t in range(nTimePts):
      r[t] = np.exp(g[t]) # firing rate
      y[t] = np.minimum(np.random.poisson(r[t]/refreshRate), 1) # 0 or 1 spikes
      if y[t] > 0:
        g[t:t+len(h)] += h
        hcurr[t:t+len(h)] += h
    t = np.arange(0, len_sim, dt)
    return x, xconvk, hcurr, g, y, k, h, t, dt, len_sim 

Exercise 1: Neural dynamics from LNP with spike history filter#

Let’s look a little more closely at the kinds of neural dynamics we can capture with a spike history filter. We will assume a 1D stimulus pulse (like an injected current) and examine the resulting simulated spiking for different stimulus and post spike filters.
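
As a reminder, the generative model simulated here (this is what the simulate_glm helper above implements, up to the unit conversion for $\Delta t$) is an LNP model with an exponential nonlinearity, a constant offset, and a post spike filter added to the log firing rate after every spike; each time bin produces at most one spike:

$$
\lambda(t) = \exp\!\Big( (k \ast x)(t) + \sum_{t_j < t} h(t - t_j) + b \Big),
\qquad
y(t) \sim \min\big(\mathrm{Poisson}(\lambda(t)\,\Delta t),\, 1\big)
$$

where $x$ is the stimulus, $k$ the stimulus filter, $h$ the post spike filter, $t_j$ the previous spike times, $b$ the constant offset (dc in the code), and $\Delta t$ the simulation time step.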

This setup was inspired by Weber AI & Pillow JW (2017) and the code in Helper Functions is adapted from https://github.com/aiweber/GLM_and_Izhikevich.

Play with the following interactive demo (note that it may respond a bit slowly when you change parameters). The sliders change components of the stimulus filter and post spike filter: to be precise, each filter is a weighted sum of 2 raised cosine basis functions (bumps), and each slider changes the weight on one of them. Then answer the questions that follow the demo.
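
To make the slider mechanics concrete, here is a minimal sketch (not part of the demo) of how a slider value becomes a filter. It uses the helper functions defined above with the same basis parameters as simulate_glm; the values -0.5 and 0.0 below stand in for the k and h slider settings.

# Minimal sketch: building the two filters from 2 raised cosine basis functions.
# Uses get_stim_filter_basis / get_ps_basis from the helper cell above, with
# the same basis parameters as simulate_glm.
dt = 0.1
nkt = 100  # stimulus filter covers 100 ms

kbasis = get_stim_filter_basis({'neye': 0, 'ncos': 2,
                                'kpeaks': np.array([0, round(nkt/4)]),
                                'b': 100}, nkt, dt)   # shape (1000, 2)
k = kbasis @ np.array([1, -0.5])   # -0.5 plays the role of the k_comp slider

hbasis = get_ps_basis({'ncos': 2, 'hpeaks': np.array([10, 40]),
                       'b': 50, 'absref': 0}, dt)
h = hbasis @ np.array([0.0, -3])   # 0.0 plays the role of the h_comp slider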

Please make sure you understand what is happening in the model before moving on, in particular how the filter outputs, the firing rate, and the spikes are generated from the stimulus and the two filters.
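
To make that concrete, here is a compressed sketch of the generative step. It mirrors simulate_glm in the helper cell (and reuses k, h, and dt from the sketch above), but leaves out the hard 3 ms refractory padding that simulate_glm adds to h.

# Compressed sketch of the generative loop in simulate_glm (refractory padding
# of h omitted; k, h, and dt come from the sketch above).
x = np.zeros(int(600/dt))                 # 1D stimulus: a pulse of 0.4 ...
x[int(100/dt):int(500/dt)] = 0.4          # ... from 100 ms to 500 ms
dc = -1                                   # constant offset
refreshRate = 1000/dt                     # time axis is in ms

xconvk = np.convolve(x, k[::-1], 'full')[:len(x)]   # 'k output' trace
g = np.pad(xconvk + dc, (0, len(h)))                # log firing rate, padded for h
y = np.zeros(len(x))
for t_idx in range(len(x)):
    rate = np.exp(g[t_idx])                                   # nonlinearity
    y[t_idx] = min(np.random.poisson(rate/refreshRate), 1)    # 0 or 1 spikes
    if y[t_idx] > 0:
        g[t_idx:t_idx+len(h)] += h        # 'h output': each spike adds h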

Execute this cell to enable the demo

# @markdown Execute this cell to enable the demo

x, xconvk, hcurr, g, y, k, h, t, dt, len_sim = simulate_glm(-0.5, 0)
fig = plt.figure(figsize=(15, 10))

gs = GridSpec(4, 2, figure=fig, width_ratios = [.4, .1])
ax = [None]*5

ax[0] = fig.add_subplot(gs[0, 0])
ax[0].set(title = 'Stimulus', xlim=[0, len_sim], xticks=[])
ax[0].plot(t, x, 'k')

# Filter outputs
ax1 = fig.add_subplot(gs[1, 0])
kout, = ax1.plot(t, xconvk, 'b')
hcurr_interp = remove_abs_ref(hcurr)
hout,  = ax1.plot(t, hcurr_interp[:-len(h)], 'r')
ax1.set(title = 'Filter outputs', xlim=[0, len_sim], xticks=[], ylim=[-40, 40])
ax1.legend(['k output', 'h output'])

# Log firing rate
ax2 = fig.add_subplot(gs[2, 0])
g_interp = remove_abs_ref(g)
fr_out, = ax2.plot(t, g_interp[:-len(h)], 'b')
ax2.set(title = 'Log firing rate', xlim=[0, len_sim], xticks=[], ylim=[-20, 20])

# Spikes
ax3 = fig.add_subplot(gs[3, 0])
events, = ax3.eventplot(np.where(y)[0], color='k')
ax3.set_xlim([0, int(len_sim/dt)])
ax3.set_xticks([0, int(len_sim/dt)])
ax3.set_xticklabels([0, str(len_sim) + 'ms'])
ax3.set(title = 'Spikes')

# k filter
ax4 = fig.add_subplot(gs[0:2, 1])
ax4.plot([0, len(k)*dt], [0, 0], 'k', alpha=.6)
k_plot, = ax4.plot(np.arange(0, len(k)*dt, dt), k, 'b')
ax4.set(title = 'Stimulus filter k', xticks=[0, 100], xticklabels=['-100 ms', 0], ylim=[-.2, .2])

# h filter
ax5 = fig.add_subplot(gs[2:, 1])
h_interp = remove_abs_ref(h)
ax5.plot([0, len(h)*dt], [0, 0], 'k', alpha=.6)
h_plot, = ax5.plot(np.arange(0, len(h)*dt, dt), h_interp, 'r')
ax5.set(title = 'Post spike filter h', xticks=[0, 100], xticklabels=[0, '100 ms'], ylim=[-3, 1])

plt.close(fig)


def update(k_comp, h_comp):

    x, xconvk, hcurr, g, y, k, h, t, dt, len_sim = simulate_glm(k_comp, h_comp)
    kout.set_ydata(xconvk)
    
    hcurr_interp = remove_abs_ref(hcurr)
    hout.set_ydata(hcurr_interp[:-len(h)])

    g_interp = remove_abs_ref(g)
    fr_out.set_ydata(g_interp[:-len(h)])

    events.set_positions(np.where(y)[0])
    h_interp = remove_abs_ref(h)
    k_plot.set_ydata(k)
    h_plot.set_ydata(h_interp)
    display(fig)

interact(update, k_comp = (-1.5, 0.5, .1), h_comp = (-1, 1, .1));

i) What is a parameter combination from the sliders that results in tonic bursting (repetitive bursting)?

ii) What is a parameter combination from the sliders that results in phasic spiking (a spike or two at the onset of the stimulus increase and then nothing)? Note that this is different from phasic bursting, which is a burst of spikes at the onset of the stimulus increase.

Answer#

i) Answer here
ii) Answer here

Papers referenced in videos#

E.J. Chichilnisky (2001) A simple white noise analysis of neuronal light responses, Network: Computation in Neural Systems, 12:2, 199-213, DOI: 10.1080/net.12.2.199.213

  • Introduces the spike-triggered average (STA)

Liam Paninski (2003) Convergence properties of three spike-triggered analysis techniques, Network: Computation in Neural Systems, 14:3, 437-464, DOI: 10.1088/0954-898X_14_3_304

  • Discussion of the conditions under which the STA is an unbiased estimator

Liam Paninski (2004) Maximum likelihood estimation of cascade point-process neural encoding models, Network: Computation in Neural Systems, 15:4, 243-262, DOI: 10.1088/0954-898X_15_4_002

  • Covers MLE for LNPs; shows that the negative log likelihood is convex for certain nonlinearities

Odelia Schwartz, Jonathan W. Pillow, Nicole C. Rust, Eero P. Simoncelli; Spike-triggered neural characterization. Journal of Vision 2006;6(4):13. doi: https://doi.org/10.1167/6.4.13.

  • Practical coverage of using spike triggered average and covariance analyses to estimate filters and nonlinearities for LNPs

Pillow, J., Shlens, J., Paninski, L. et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454, 995–999 (2008). https://doi.org/10.1038/nature07140

  • Introduces spike history filters/coupling filters for LNP models

Weber AI & Pillow JW (2017). Capturing the dynamical repertoire of single neurons with generalized linear models. Neural Computation 29(12): 3260-3289

  • Covers neural spiking dynamics that can be captured by GLMs