Tuesday, June 24, 2014

Day 26: Womp, womp

Notebook

Today I'm trying out another validation, this time making sure that sources that originate
in the amygdala aren't measured on the cortical surface. Unfortunately, this test has failed
with flying colors. I've included figures below with the cortical maps at peak source
activation. Both sources show widespread activation in posterior regions. I include the
PySurfer code at the bottom, but commented out, since I still haven't figured out how to
get its windows to show up in the IPython Notebook.

I'd like to highlight the method used to simulate the evoked data. It turns out it's as
simple as taking the dot product of the forward solution (n_channels x n_sources) with the
simulated source data (n_sources x n_times).
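Stripped of the MNE objects, the shape bookkeeping looks like this (a minimal NumPy sketch with made-up sizes; the real gain matrix comes from `fwd['sol']['data']` as in the code below):

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels, n_sources, n_times = 274, 2, 300

# hypothetical gain (forward) matrix: n_channels x n_sources
gain = rng.standard_normal((n_channels, n_sources))

# simulated source time courses: n_sources x n_times
source_data = rng.standard_normal((n_sources, n_times))

# projecting the sources to the sensors is a single matrix product
sensor_data = gain @ source_data
print(sensor_data.shape)  # (274, 300)
```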

I'd also like to highlight that I'm using free orientations in this source space. Fixed
orientations generally assume that dipoles are oriented outward (normal to the cortical
surface), which is a reasonable assumption for cortical sources. Free orientation allows each
dipole to point in any direction, which is reasonable for subcortical volumes.
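Concretely, a free-orientation forward model carries three orthogonal dipole components per source, so the gain matrix has 3 × n_sources columns; fixing the orientation collapses each triplet onto a single direction, leaving one column per source. A conceptual sketch with invented sizes and random "normals" (not the MNE implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

n_channels, n_sources = 100, 5

# free orientation: three orthogonal dipole components per source,
# so the gain matrix has 3 * n_sources columns
gain_free = rng.standard_normal((n_channels, 3 * n_sources))

# fixed orientation projects each component triplet onto one chosen
# direction (e.g. the surface normal), giving one column per source
normals = rng.standard_normal((n_sources, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

gain_fixed = np.stack(
    [gain_free[:, 3 * k:3 * k + 3] @ normals[k] for k in range(n_sources)],
    axis=1)
print(gain_fixed.shape)  # (100, 5)
```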

Next, I would like to try merging a cortical surface with fixed orientations and a subcortical
volume with free orientations. Additionally, I'd like to try creating a subcortical surface
instead of a subcortical volume.

In [1]:
%matplotlib inline
%load spm_test_subcort2cort.py
In [3]:
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import mne
from mne.datasets import spm_face
from surfer import Brain, TimeViewer

mne.set_log_level(False)

# set data files
data_path = spm_face.data_path()
labels_path = data_path + '/MEG/spm/labels/%s.label'
fstring = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces1_3D'
fwd_fname = fstring + '_oct-6-amyg-vol-fwd.fif'
evo_fname = fstring + '-ave.fif'
cov_fname = fstring + '-cov.fif'

# read evoked data
evo = mne.read_evokeds(evo_fname, condition=0)
sfreq = evo.info['sfreq']
times = evo.times

# read the forward model
fwd = mne.read_forward_solution(fwd_fname, force_fixed=False)

# extract the source space
src = fwd['src']

# read noise covariance
cov = mne.read_cov(cov_fname)

# generate two waveforms
stc_data = np.zeros((2, len(times)))
ws = mne.time_frequency.morlet(sfreq, [3, 10], n_cycles=[1, 1.5])
stc_data[0][:len(ws[0])] = np.real(ws[0])
stc_data[1][:len(ws[1])] = np.real(ws[1])
stc_data[0] = np.roll(stc_data[0], 100)
stc_data[1] = np.roll(stc_data[1], 200)
stc_data *= 100 * 1e-9

# generate simulated sparse source estimates
v = [np.array([]), np.array([]), np.array([0., 3000.])]
sim_stc = mne.source_estimate.SourceEstimate(stc_data, vertices=v,
                                             tmin=times.min(), tstep=1./sfreq)

# set parameters for inverse operator
snr = 10
lambda2 = 1. / snr**2

# generate simulated evoked data
gain = fwd['sol']['data'][:, (0, 3000)]
sim_data = np.dot(gain, sim_stc.data)
sim = evo.copy()
sim.nave = 1
sim.data = sim_data
sim.info = mne.forward._fill_measurement_info(sim.info, fwd, sfreq)

# add noise to the evoked data
noise = mne.simulation.generate_noise_evoked(sim, cov)
sim = mne.simulation.add_noise_evoked(sim, noise, snr)

# make the inverse operator
inv = mne.minimum_norm.make_inverse_operator(evo.info, fwd, cov, loose=None, 
                                             depth=None, fixed=False)

# apply the inverse operator
stc = mne.minimum_norm.apply_inverse(sim, inv, lambda2)

# convert time to milliseconds
t = stc.times*1000

#brain = Brain('spm', 'both', 'inflated')
#brain.add_data(stc.lh_data[:4098], vertices=stc.lh_vertno, hemi='lh',
#               smoothing_steps=10, colormap='hot', time=t, time_label='%.2f ms')
#brain.add_data(stc.rh_data[:4098], vertices=stc.rh_vertno, hemi='rh',
#               smoothing_steps=10, colormap='hot', time=t, time_label='%.2f ms')
#
#brain.scale_data_colormap(fmin=0, fmid=3, fmax=6, transparent=True)
#TimeViewer(brain)
In [4]:
from IPython.display import Image
Image('rhamg2cort.png')
Out[4]:
In [5]:
Image('lhamg2cort.png')
Out[5]:
