Monday, May 18, 2015

resampling images with wb_command -volume-affine-resample

I often need to resample images without performing other calculations, for example, making a 3x3x3 mm voxel version of an anatomical image with 1x1x1 mm voxels for use as an underlay. This can be done with ImCalc in SPM, but that's a bit annoying, as it requires firing up SPM, and only outputs two-part NIfTI images (minor annoyances, but still).

The wb_command -volume-affine-resample program gets the resampling done at the command prompt with a single long command:

 wb_command -volume-affine-resample d:/temp/inImage.nii.gz d:/temp/affine.txt d:/temp/matchImage.nii CUBIC d:/temp/outImage.nii  

If the wb_command program isn't on the path, run the command from wherever wb_command.exe (or the equivalent for your platform) is installed. Quite a few things need to be specified:
  • inImage.nii.gz is the image you want to resample (for example, the 1x1x1 mm anatomical image)
  • affine.txt is a text file with the transformation to apply (see below)
  • matchImage.nii is the image with the dimensions you want the output image to have - what inImage should be transformed to match (for example, the 3x3x3 mm functional image)
  • CUBIC is how to do the resampling; other options are TRILINEAR and ENCLOSING_VOXEL
  • outImage.nii is the new image that will be written: inImage resampled to match matchImage; specifying an outImage.nii.gz filename will cause a gzipped NIfTI to be written.
The program writes outImage as a one-file (not a header-image pair) NIfTI. It accepts both compressed (i.e., .nii.gz) and uncompressed (i.e., .nii) one-file NIfTIs as input, but wouldn't accept a header-image pair.

You need to specify an affine transform, but since I don't want to warp anything, the matrix is just the identity (no rotation, scaling, or translation); put this matrix into a plain text file (I called it affine.txt):
 1 0 0 0  
 0 1 0 0  
 0 0 1 0  

UPDATE 24 May 2016: similar demo using AFNI's 3dresample.
UPDATE 20 May 2015: Changed the resampling method to CUBIC and added a note that the program can output compressed images, as suggested by Tim Coalson.

Friday, May 15, 2015

MVPA on the surface: to interpolate or not to interpolate?

A few weeks ago I posted about a set of ROI-based MVPA results using HCP images, comparing the results of doing the analysis with the surface or volume version of the dataset. As mentioned there, there hasn't been a huge amount of MVPA with surface data, but there has been some, particularly using the algorithms in Surfing (they're also in PyMVPA and CoSMoMVPA), described by Nikolaas Oosterhof and colleagues (Oosterhof et al., 2011).

The general strategy in MVPA (volume or surface) is usually to change the fMRI timeseries as little as possible; motion correction is pretty much always unavoidable, but is sometimes the only whole-brain image manipulation applied: voxels are kept at the acquired resolution, not smoothed, not slice-time corrected, not spatially normalized to an atlas (i.e., each individual analyzed in their own space, allowing people to have differently-shaped brains). The hope is that this minimal preprocessing will maximize spatial resolution: since we want to detect voxel-level patterns, let's change the voxels as little as possible.

The surface searchlighting procedure in Surfing follows this minimum-voxel-manipulation strategy, using a combination of surface and volume representations: voxel timecourses are used, but adjacency is determined from the surface representation. Rephrased, even though the searchlights are drawn following the surface (using a high-resolution surface representation), the functional data is not interpolated, but rather kept as voxels: each surface vertex is spatially mapped to a voxel, allowing multiple vertices to fall within a single voxel in highly folded areas. Figure 2 from the Surfing documentation shows this dual surface-and-volume way of working with the data, and describes the voxel selection procedure in more detail. In the terms I've used to describe my own searchlight code, the Surfing procedure results in a lookup table (which voxels constitute the searchlight for each center) where the searchlights are shaped to follow the surface.
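The vertex-to-voxel mapping at the heart of this approach can be sketched in a few lines. This is a toy illustration with made-up inputs (a hypothetical voxel-from-mm affine and vertex coordinates), not Surfing's actual code:

```python
import numpy as np

def vertices_to_voxels(vertices, inv_affine):
    """Return the i,j,k index of the voxel under each surface vertex.

    vertices: (n, 3) array of vertex coordinates in mm.
    inv_affine: (4, 4) voxel-from-mm affine (inverse of the image's affine).
    """
    coords = np.c_[vertices, np.ones(len(vertices))]  # homogeneous mm coords
    ijk = (inv_affine @ coords.T).T[:, :3]            # mm -> voxel space
    return np.rint(ijk).astype(int)                   # nearest voxel index

# toy example: 3 mm isotropic voxels, origin at 0
inv_affine = np.diag([1/3, 1/3, 1/3, 1.0])
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 1.0, 1.0],
                     [4.0, 0.0, 0.0]])
# the first two (nearby) vertices map to the same voxel, as happens
# in highly folded areas where vertices are closely spaced
print(vertices_to_voxels(vertices, inv_affine))
```

Adjacency for each searchlight still comes from the surface mesh; this mapping just says which voxel's timecourse each vertex contributes.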

It should be possible to do this (Surfing-style, surface searchlights with voxel timecourses) with the released HCP data. The HCP volumetric task-fMRI images are spatially normalized to the MNI atlas, which will simplify things, since the same lookup table can be used with all people, though possibly at the cost of some spatial normalization-caused distortions. [EDIT 17 May 2015: Nick Oosterhof pointed out that even with MNI-normalized volumetric fMRI data, the subject-space surfaces could be used to map adjacent vertices, in which case each person would need their own lookup table. With this mapping, the same i,j,k-coordinate voxel could have different searchlights in different people.]

The HCP task fMRI data is also available as (CIFTI-format) surfaces, which were generated by resampling the (spatially-normalized) voxels' timecourses into surface vertices. The timecourses in the HCP surface fMRI data have thus been interpolated several times, including to volumetric MNI space and to the vertices.

Is this extra interpolation beneficial or not? Comparisons are needed, and I'd love to hear about any if you've tried them. The ones I've done so far are with comparatively large parcels, not searchlights, and certainly not the last word.

grey matter musings

fMRI data is always acquired as volumes, usually (in humans) with voxels something like 2x2x2 to 4x4x4 mm in size. Some people have argued that for maximum power, analyses should concentrate on the grey matter, ideally as surface representations. This strikes me as a bit dicey: fMRI data is acquired at the same resolution all over the brain; it isn't more precise where the brain is more folded (areas with more folding have closer-spaced vertices in the surface representation, so multiple vertices can fall within a single voxel).

But how much of a problem is this? How does the typically-acquired fMRI voxel size compare to the size of the grey matter? Trying to separate out fMRI signals from the grey matter is a very different proposition if something like ten voxels typically fit within the ribbon vs. just one.

Fischl and Dale (2000, PNAS, "Measuring the thickness of the human cerebral cortex from magnetic resonance images") answer my basic question of how thick the grey matter typically is in adults: about 2.5 mm on average. This figure (Figure 3) shows the histogram of grey matter thickness that they found in one person's cortex; in that person, "More than 99% of the surface is between 1- and 4.5-mm thick."
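A quick back-of-the-envelope calculation (assuming that 2.5 mm average thickness and ignoring alignment) makes the mismatch concrete:

```python
# How many voxel widths span the ~2.5 mm average cortical ribbon,
# for common acquired fMRI voxel sizes?
thickness_mm = 2.5
for voxel_mm in (2.0, 3.0, 4.0):
    n_widths = thickness_mm / voxel_mm
    print(f"{voxel_mm} mm voxels: {n_widths:.2f} voxel widths fit in the ribbon")
```

Even with 2 mm voxels, only about 1.25 voxel widths fit across the average ribbon; with 3 or 4 mm voxels, the ribbon is thinner than a single voxel.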

So, it's more typical for the grey matter to be one fMRI voxel wide than multiple. A 4x4x4 mm functional voxel will be wider than nearly all grey matter; most voxels within the grey matter will contain only a fraction of grey matter, not grey matter alone. Things are better with 2x2x2 mm acquired voxels, but a voxel falling completely within the grey matter will still be fairly unusual, and even these totally-grey voxels will be surrounded on several sides by non-grey matter voxels. To make it concrete, here's a sketch of common fMRI voxel sizes on a perfectly straight grey matter ribbon.



This closeness of all-grey, some-grey, and no-grey voxels is problematic for analysis. An obvious issue is blurring from motion: head motion of a mm or two within a run is almost impossible to avoid, and will totally change the proportion of grey matter within a given voxel. Even if there were no motion at all, the differing proportions of grey matter cause problems ("partial volume effects"): if all the signal came from the grey matter, the furthest-right 2 mm voxels in the image above would be less informative than the adjacent 2 mm voxel centered in the grey, just because of their smaller proportion of grey. Field inhomogeneity effects, scanner drift, slice-time correction, resampling, smoothing, spatial normalization, etc. cause further blurring.
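The motion point can be sketched in one dimension. Assume a perfectly straight 2.5 mm ribbon and 2 mm voxels (the ribbon edges and voxel positions below are invented for illustration), and watch how the grey fraction of each voxel changes after a 1 mm shift:

```python
def grey_fraction(voxel_start, voxel_size, ribbon=(0.0, 2.5)):
    """Fraction of a 1-D voxel [start, start + size) inside the grey ribbon."""
    lo = max(voxel_start, ribbon[0])
    hi = min(voxel_start + voxel_size, ribbon[1])
    return max(hi - lo, 0.0) / voxel_size

size = 2.0
starts = (-2.0, 0.0, 2.0)   # three adjacent 2 mm voxels
print([round(grey_fraction(s, size), 2) for s in starts])        # [0.0, 1.0, 0.25]
# the same three voxels after a 1 mm head shift:
print([round(grey_fraction(s + 1.0, size), 2) for s in starts])  # [0.5, 0.75, 0.0]
```

A 1 mm shift turns an all-grey voxel into a 75%-grey one and a no-grey voxel into a half-grey one, so the grey matter "label" of a voxel is not stable across a run.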

But the cortex grey matter is of course not perfectly flat like in the sketch: it's twisted and folded in three dimensions, as shown here in Figure 1 from Fischl and Dale (2000). This folding complicates things further: individual voxels still have varying amounts of grey matter, but can also encompass structures far apart if measured along the surface.





This figure is panels C (left) and D (right) from Figure 2 of Kang et al. (2007, Magnetic Resonance Imaging, "Improving the resolution of functional brain imaging"), and illustrates some of the "complications". The yellow outline at left is the grey-white boundary on an anatomical image (1x1x1 mm), with two functional voxels superimposed, one in red and one in green (the squares mark the voxels' corners; they had 1.88x1.88x5 mm functional voxels). The right panel shows the same two voxels' locations on a surface flat map (dark areas are grey matter, light areas white matter). In their words, "Although the centers of the filled squares in the corners of the red and green functional voxels in (C) are the same distance apart in the 3-D space and points in the same voxel must be within 5.35 mm, functional activations in the red voxel spread to areas over 30 mm apart on the flat map, while activations in the green voxel remain close to each other."

Volume-to-surface mapping algorithms and processing pipelines attempt to minimize these problems, but there's no perfect solution: acquired voxels will never fall perfectly within the grey matter ribbon. We shouldn't let the perfect be the enemy of the good (no fMRI research would ever occur!) and give up on grey matter-localized analyses entirely, but we also shouldn't discount or minimize the additional difficulties and assumptions in surface-based fMRI analysis.

Tuesday, May 12, 2015

upcoming travels: PRNI and HBM

I'll be traveling a lot next month: attending PRNI June 10-12, then HBM June 14-18. I'll be talking about statistical testing for MVPA at both conferences, focusing on permutation testing for group analyses at PRNI, and speaking a bit more generally at our HBM workshop. I hope to meet some of you at one or both of these conferences!