Monday, August 18, 2014

Pencils Down

Welp, that's the end of Google Summer of Code 2014, but not the end of this project. I didn't blog as much as I would have liked to, but I think I accomplished a lot overall.

Regarding the last post, the problem I encountered was that the source spaces were in "head" coordinates, which happens when the forward solution is computed. "Head" coordinates refers to the coordinate space of the MEG or EEG sensors. To resolve this, I had to transform the grid of MRI voxels into "head" coordinates as well when mri_resolution=True. This wasn't an issue when mri_resolution=False because the volume source space had also been converted to "head" coordinates.
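To make this concrete, here's a minimal sketch (not the project's actual code) of the kind of transformation involved: compose a voxel-to-MRI affine with an MRI-to-head transform and apply the result to a grid of voxel indices. The two identity matrices are placeholders standing in for the affines that would really come from the MRI header and the coregistration file.

import numpy as np
from mne.transforms import apply_trans

# Placeholder affines, for illustration only: in practice vox_to_mri comes
# from the MRI header and mri_to_head from the coregistration (-trans.fif).
vox_to_mri = np.eye(4)   # voxel indices -> MRI (surface RAS) coordinates
mri_to_head = np.eye(4)  # MRI -> head coordinates

voxels = np.array([[10., 20., 30.], [40., 50., 60.]])  # a few grid points
vox_to_head = np.dot(mri_to_head, vox_to_mri)          # compose the affines
head_coords = apply_trans(vox_to_head, voxels)         # grid in head coords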

The SourceSpaces class now has a method called export_volume, which saves the source spaces as a NIfTI or MGZ file that can be viewed in freeview. This only works for mixed source spaces containing at least one volume source space, since the volume source space is responsible for setting up the 3d grid.
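As a rough sketch of the intended usage (only the output filename is passed to export_volume here; other arguments are omitted and may be needed in practice), building a mixed source space for the sample subject and exporting it might look like this:

import mne
from mne.datasets import sample

data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
aseg_fname = subjects_dir + '/sample/mri/aseg.mgz'

# cortical surfaces plus a left cerebellar volume -> a mixed source space
src = mne.setup_source_space('sample', subjects_dir=subjects_dir)
vol_src = mne.setup_volume_source_space('sample', mri=aseg_fname,
                                        volume_label='Left-Cerebellum-Cortex',
                                        subjects_dir=subjects_dir)
src += vol_src  # append the volume source space to the surface ones

# write the combined source space as a volume that freeview can display
src.export_volume('mixed-src.nii')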

The source estimate can also be computed from a mixed source space. I wasn't able to implement code to view the source estimate as a 4d image, but that will build largely on the export_volume code previously described.

In addition, I created an example file to generate mixed source spaces. This example outputs the following figures.

The first figure shows the cortical surface with the additional volume source space of the left cerebellum. The locations of the dipoles are in yellow. The second figure shows the .nii file in freeview, where source spaces are in red.

Future work includes creating these visualizations for source estimates, adding options to fix the orientation of surface dipoles but not volume dipoles, and continuing to test the accuracy of these combined source spaces using simulated data.

Wednesday, August 6, 2014

Day 56: Checking In

It's been a while since I wrote a post. There haven't been a lot of changes to the actual source localization, but I've spent a lot of time trying to integrate my work into the existing MNE repository.

For the past two weeks or so, I've been struggling with visualizing source spaces, which is an integral part of visualizing the source estimates. I've had success plotting the exact locations of the source dipoles in down-sampled source spaces (e.g. volumes with 5 mm spacing, or surfaces with 6 mm spacing), but when I try to interpolate onto a higher-resolution image, I can't seem to get the right transformations for the surface sources.

For instance, the image below shows the source spaces for the cerebellum (blue) and the cortex (white). They look pixelated because they're at a lower resolution than the MRI image.

But when I try to create higher resolution images of the source spaces, the cerebellum lines up but the cortex does not.

The goal now is to go through the code and find out exactly what coordinate frames the surfaces and volumes are generated in and then figure out how to transform from one coordinate system to another.
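As a first step, the coordinate frame of each source space can be inspected directly; here's a small sketch of that (the frame_names mapping is just for readable output):

import mne
from mne.io.constants import FIFF
from mne.datasets import sample

data_path = sample.data_path()
src = mne.setup_source_space('sample', subjects_dir=data_path + '/subjects')

frame_names = {FIFF.FIFFV_COORD_MRI: 'MRI (surface RAS)',
               FIFF.FIFFV_COORD_HEAD: 'head'}
for s in src:
    # 'coord_frame' records which frame the 'rr' positions are expressed in
    print(s['type'], frame_names.get(s['coord_frame'], s['coord_frame']))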

If you're interested, the nitty-gritty details are in this pull request on GitHub.

Thursday, July 10, 2014

Day 37: Aligning source spaces

Notebook

In case you're wondering what I've been up to all week, I got some feedback from the MNE community about how to incorporate source spaces based on volumes of interest into the existing MNE code. We decided to implement volumes of interest using the function mne.setup_volume_source_space. If you check out my pull request, you can see the various steps I've been through to make this possible. The biggest hurdle was incorporating an interpolation matrix, which maps values defined on the source-space vertices (identified by their vertex numbers in the 3d source grid) onto the MRI voxel grid.
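To illustrate the idea with a toy example (this is not the project's code), the interpolation matrix can be thought of as a sparse n_voxels-by-n_sources matrix that spreads per-vertex values onto the full MRI voxel grid:

import numpy as np
from scipy import sparse

n_sources, shape = 4, (2, 2, 2)     # tiny toy example: 4 sources, 8 voxels
n_voxels = int(np.prod(shape))

# toy interpolator: each voxel receives a weighted mix of nearby source values
interp = sparse.random(n_voxels, n_sources, density=0.5, format='csr')

source_values = np.random.rand(n_sources)   # e.g. one time point of activity
voxel_values = interp.dot(source_values)    # values on the MRI voxel grid
volume = voxel_values.reshape(shape)        # ready to be saved as an image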

I won't go into detail about the process, but I've included a plot below showing how 3 different source spaces align in MRI space. The surfaces of the left and right hemispheres are in blue and green, respectively, the whole brain volume is in red, and a volume of interest based on the left cortical white matter is in black.

In [1]:
%matplotlib inline
%load transform_workspace.py
In [3]:
import mne
from mne.datasets import sample
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

mne.set_log_level(False)
data_path = sample.data_path()
subject = 'sample'
subjects_dir = data_path + '/subjects'
aseg_fname = subjects_dir + '/sample/mri/aseg.mgz'

# create a whole brain volume source
vol_src = mne.setup_volume_source_space(subject, mri=aseg_fname,
                                        subjects_dir=subjects_dir)

# create left and right hemisphere cortical surface sources
srf_src = mne.setup_source_space(subject, subjects_dir=subjects_dir,
                                 overwrite=True)

# create a left hemisphere white matter source
lwm_src = mne.setup_volume_source_space(subject, mri=aseg_fname,
                                        volume_label='Left-Cerebral-White-Matter',
                                        subjects_dir=subjects_dir)

# get the coordinates from all sources
vol_rr = vol_src[0]['rr']
ix = vol_src[0]['inuse'].astype('bool')
lh_rr = srf_src[0]['rr']
rh_rr = srf_src[1]['rr']
lwm_rr = lwm_src[0]['rr']

# plot the source spaces in 3 dimensions
ax = plt.axes(projection='3d')
ax.plot(vol_rr[ix, 0], vol_rr[ix, 1], vol_rr[ix, 2], 'r.', alpha=0.1)
ax.plot(lh_rr[:, 0], lh_rr[:, 1], lh_rr[:, 2], 'b.', alpha=0.1)
ax.plot(rh_rr[:, 0], rh_rr[:, 1], rh_rr[:, 2], 'g.', alpha=0.1)
ix = lwm_src[0]['inuse'].astype('bool')
ax.plot(lwm_rr[ix, 0], lwm_rr[ix, 1], lwm_rr[ix, 2], 'k.', alpha=0.5)

plt.show()

(Running this emits a warning: the add_dist parameter to mne.setup_source_space currently defaults to False, but the default will change to True in release 0.9; specify the parameter explicitly to avoid the warning.)