NiTransforms
A development repo for nipy/nibabel#656.
About
Spatial transforms formalize mappings between coordinates of objects in biomedical images. Transforms are typically the outcome of image registration methodologies, which estimate the alignment between two images. Image registration is a prominent task present in nearly all standard image processing and analysis pipelines. The proliferation of software implementations of image registration methodologies has resulted in a spread of data structures and file formats used to preserve and communicate transforms. This segregation of formats hinders compatibility between tools and endangers the reproducibility of results. We propose a software tool capable of converting between formats and resampling images to apply transforms generated by the most popular neuroimaging packages and libraries (AFNI, FSL, FreeSurfer, ITK, and SPM). The proposed software is subject to continuous integration tests to check compatibility with each supported tool after every change to the code base. Compatibility between software tools and imaging formats is a necessary bridge to ensure the reproducibility of results and enable the optimization and evaluation of current image processing and analysis workflows.
Contents
Installation
NiTransforms is distributed via PyPI and can easily be installed into your Python environment with:
python -m pip install nitransforms
Alternatively, you can install the bleeding-edge version of the software directly from the GitHub repo with:
python -m pip install git+https://github.com/poldracklab/nitransforms.git@master
To verify the installation, you can run the following command:
python -c "import nitransforms as nt; print(nt.__version__)"
You should see the version number.
Developers
Advanced users and developers who plan to contribute bugfixes, documentation, etc., can first clone our Git repository:
git clone https://github.com/poldracklab/nitransforms.git
and install the tool in editable mode:
cd nitransforms
python -m pip install -e .
Examples
A collection of Jupyter Notebooks to serve as interactive tutorials.
ISBI2020 presentation
Introduction
Submission (654) Software tool to read, represent, manipulate and apply \(n\)-dimensional spatial transforms
Spatial transforms formalize mappings between coordinates of objects in biomedical images. Transforms are typically the outcome of image registration methodologies, which estimate the alignment between two images. Image registration is a prominent task present in nearly all standard image processing and analysis pipelines. The proliferation of software implementations of image registration methodologies has resulted in a spread of data structures and file formats used to preserve and communicate transforms. This segregation of formats hinders compatibility between tools and endangers the reproducibility of results. We propose a software tool capable of converting between formats and resampling images to apply transforms generated by the most popular neuroimaging packages and libraries (AFNI, FSL, FreeSurfer, ITK, and SPM). The proposed software is subject to continuous integration tests to check compatibility with each supported tool after every change to the code base. Compatibility between software tools and imaging formats is a necessary bridge to ensure the reproducibility of results and enable the optimization and evaluation of current image processing and analysis workflows.
The process is summarized in the following figure:
[2]:
import os
from pathlib import Path
import nibabel as nb
from niworkflows.viz.notebook import display
import nitransforms as nt
[3]:
print(nt.__version__)
DATA_PATH = Path(os.getenv("NT_TEST_DATA", "~/.nitransforms/testdata")).expanduser().absolute()
20.0.0rc2
Step 0: Load some data
We are going to load a structural T1w image and an average through time of a BOLD fMRI dataset. Both belong to participant sub-01 of https://openneuro.org/datasets/ds000005.
We first check that each image has a different extent and sampling grid, using nibabel.
[4]:
t1w_nii = nb.load(DATA_PATH / "T1w_scanner.nii.gz")
print(t1w_nii.affine)
print(t1w_nii.shape)
[[ 1. 0. 0. -81. ]
[ 0. 1.33333302 0. -133. ]
[ 0. 0. 1.33333302 -129. ]
[ 0. 0. 0. 1. ]]
(160, 192, 192)
[5]:
bold_nii = nb.load(DATA_PATH / "bold.nii.gz")
print(bold_nii.affine)
print(bold_nii.shape)
[[ -3.125 0. 0. 101. ]
[ 0. 3.125 0. -72. ]
[ 0. 0. 4. -99. ]
[ 0. 0. 0. 1. ]]
(64, 64, 34)
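The difference in extent is easy to quantify: multiplying each axis' voxel size (the column norms of the affine) by the grid size gives the field of view in mm. A minimal NumPy sketch using the affines and shapes printed above (`fov_mm` is an illustrative helper, not part of NiTransforms):

```python
import numpy as np

def fov_mm(affine, shape):
    """Approximate field of view (mm) per axis: voxel size times grid size.

    Illustrative helper, not part of NiTransforms.
    """
    voxel_sizes = np.sqrt((np.asarray(affine)[:3, :3] ** 2).sum(axis=0))
    return voxel_sizes * np.asarray(shape[:3])

# Affines and shapes copied from the outputs above
t1w_aff = np.array([[1.0, 0, 0, -81], [0, 1.33333302, 0, -133],
                    [0, 0, 1.33333302, -129], [0, 0, 0, 1.0]])
bold_aff = np.array([[-3.125, 0, 0, 101], [0, 3.125, 0, -72],
                     [0, 0, 4.0, -99], [0, 0, 0, 1.0]])

print(fov_mm(t1w_aff, (160, 192, 192)))  # ~160 x 256 x 256 mm
print(fov_mm(bold_aff, (64, 64, 34)))    # ~200 x 200 x 136 mm
```

Despite the different grids, the two fields of view overlap the same anatomy, which is why resampling one image onto the other's grid is meaningful.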
Step 1: Check they are not aligned
Let’s use NiTransforms to resample the BOLD image into the T1w’s space. We can do that by simply applying an identity transform.
[6]:
identity_xfm = nt.linear.Affine(reference=t1w_nii)
identity_xfm.matrix
[6]:
array([[1., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.]])
[7]:
resampled_in_t1 = identity_xfm.apply(bold_nii)
print(resampled_in_t1.affine)
print(resampled_in_t1.shape)
[[ 1. 0. 0. -81. ]
[ 0. 1.33333302 0. -133. ]
[ 0. 0. 1.33333302 -129. ]
[ 0. 0. 0. 1. ]]
(160, 192, 192)
As can be seen above, after applying the identity transform, both datasets have the same structure (i.e., they are sampled exactly the same way, and their internal data matrices have samples at the same locations of the physical extent they represent). However, the information in the images is not aligned (i.e., the brain structures captured by the functional signal and the T1w signal are not aligned).
[8]:
display(t1w_nii, resampled_in_t1)
Step 2: Image registration
Let’s use FreeSurfer’s bbregister image registration tool to estimate a function that maps the space of the T1w image to that of the BOLD image. That way, we will be able to bring both brains into alignment.
The test data folder contains the result of the process in the LTA format, which is unique to FreeSurfer (no other software utilizes it).
[9]:
t1w_to_bold_xfm = nt.linear.load(DATA_PATH / "from-scanner_to-bold_mode-image.lta", fmt="fs")
t1w_to_bold_xfm.reference = t1w_nii
[10]:
moved_to_t1 = t1w_to_bold_xfm.apply(bold_nii)
print(moved_to_t1.affine)
print(moved_to_t1.shape)
[[ 1. 0. 0. -81. ]
[ 0. 1.33333302 0. -133. ]
[ 0. 0. 1.33333302 -129. ]
[ 0. 0. 0. 1. ]]
(160, 192, 192)
[11]:
display(t1w_nii, moved_to_t1)
Say we want to do the opposite: bring some information/knowledge we have in T1w space (e.g., a manual segmentation or annotation) into the BOLD data grid. We would need the transform in the opposite direction. That’s pretty easy with the inverse (~) operator:
[12]:
bold_to_t1w_xfm = ~t1w_to_bold_xfm
bold_to_t1w_xfm.reference = bold_nii
[13]:
display(bold_nii, bold_to_t1w_xfm.apply(t1w_nii))
Final notes
Installation
pip install nitransforms
See also: https://github.com/poldracklab/nitransforms
[ ]:
[1]:
%matplotlib inline
I/O - Reading and writing transforms
This notebook showcases the nitransforms.io module, which implements the input/output operations that allow this library to use other software packages’ formats and tools for transforms.
Preamble
Prepare a Python environment and use a temporary directory for the outputs. After that, fetch the actual file from the NiBabel documentation.
[2]:
import os
from pathlib import Path
from tempfile import TemporaryDirectory
import numpy as np
import nibabel as nb
import nitransforms as nt
cwd = TemporaryDirectory()
os.chdir(cwd.name)
print(f"This notebook is being executed under <{os.getcwd()}>.")
This notebook is being executed under </private/var/folders/l9/0lkn3g4s27bgkk75n6jj778r0000gp/T/tmpx7qowz2n>.
[3]:
anat_file = Path(os.getenv("TEST_DATA_HOME", str(Path.home() / ".nitransforms"))) / "someones_anatomy.nii.gz"
Load in one sample image
We pick NiBabel’s example dataset, someones_anatomy.nii.gz. This is a 3D T1-weighted MRI image stored in NIfTI format. Before any transformation, let’s first visualize the example image, and retain some copies of the original header and affine. Although it is not the only use-case for 3D images, most often when working with spatial transforms it is the case that at least one 3D image is involved.
Depending on how the software implements the function that converts coordinates between two reference systems (which is, in essence, the transformation itself), the input/output images may play a role in defining said reference systems. The most common scenario where images are important to the definition of the spatial transform is that of image alignment (that is, resolving the image registration problem), where the algorithm works with the images’ array coordinates. Obviously, in such a framework, it is impossible to interpret any given transform without knowing the image(s) that define the real coordinates.
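To make this concrete: an image's affine is exactly the function that maps integer array indexes to physical (RAS+) coordinates, so the same index refers to different physical points under different affines. A minimal sketch with a made-up affine (the values are for illustration only):

```python
import numpy as np

# A hypothetical image affine: 2 mm isotropic voxels, origin at (-90, -126, -72) mm
affine = np.array([
    [2.0, 0.0, 0.0, -90.0],
    [0.0, 2.0, 0.0, -126.0],
    [0.0, 0.0, 2.0, -72.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Map array index (i, j, k) = (10, 20, 30) into physical coordinates (mm)
ijk = np.array([10, 20, 30, 1])  # homogeneous coordinates
xyz = affine @ ijk
print(xyz[:3])  # prints [-70. -86. -12.]
```

An image with a different affine would send the very same index (10, 20, 30) to a different physical location, which is why a transform estimated in array coordinates cannot be interpreted without the images that define it.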
[4]:
# Load the example
nii = nb.load(anat_file)
hdr = nii.header.copy()
aff = nii.affine.copy()
data = np.asanyarray(nii.dataobj)
nii.orthoview()
[4]:
<OrthoSlicer3D: /Users/oesteban/datalad/nitransforms-tests/someones_anatomy.nii.gz (57, 67, 56)>

Image orientation
NIfTI images have two header entries to define how the data array indexed by integer coordinates between (0, 0, 0) and (\(S_i - 1\), \(S_j - 1\), \(S_k - 1\)) maps onto a continuous space of, e.g., scanner coordinates (typically in mm). In order to ensure NiTransforms implements all possible combinations of transform formats and image orientations, we will need to generate similar images with different orientations.
To do so, we will use the nitransforms.tests.test_io._generate_reoriented(path, directions, swapaxes, parameters) function.
[5]:
from nitransforms.tests.test_io import _generate_reoriented
For instance, we may want to generate an image in LAS orientation, where the direction of the first axis has been flipped and coordinates with positive sign get further away from the origin towards the left, rather than towards the right (as in RAS).
[6]:
las_anatomy, _ = _generate_reoriented(anat_file, (-1, 1, 1), None, {"x": 0, "y": 0, "z": 0})
print(f"Orientation: {''.join(nb.aff2axcodes(las_anatomy.affine))}.")
print(f"Orientation of the original file: {''.join(nb.aff2axcodes(nii.affine))}.")
Orientation: LAS.
Orientation of the original file: RAS.
Because both orientations point to the same spatial mapping of T1w MRI measurements, with the LAS array having the first axis reversed, both images should look the same when visualized:
[7]:
nii.orthoview()
las_anatomy.orthoview()
[7]:
<OrthoSlicer3D: (57, 67, 56)>
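The equivalence between the two orientations can also be verified numerically: flipping the first axis of the data array while compensating in the affine leaves the index-to-mm mapping unchanged. A minimal NumPy sketch on a toy array and affine (not the actual dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
data_ras = rng.random((4, 5, 6))            # toy RAS+ data array
affine_ras = np.diag([2.0, 2.0, 2.0, 1.0])  # toy RAS+ affine

# Flip the first (x) axis of the data...
data_las = np.flip(data_ras, axis=0)
# ...and compensate in the affine: negate the first column and shift the origin
flip = np.eye(4)
flip[0, 0] = -1.0
flip[0, 3] = data_ras.shape[0] - 1
affine_las = affine_ras @ flip

# The same value now lives at a mirrored index, but maps to the same mm point
i, j, k = 1, 2, 3
i_flipped = data_ras.shape[0] - 1 - i
assert data_las[i_flipped, j, k] == data_ras[i, j, k]
assert np.allclose(affine_las @ [i_flipped, j, k, 1], affine_ras @ [i, j, k, 1])
print("physical mapping preserved")
```

This is why the LAS and RAS images look identical in the viewer: the viewer resolves the affine, so both arrays describe the same physical scene.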


Writing a rigid-body transform for AFNI, ANTs, and FSL
Now, let’s use these variants to check how they behave in concatenation with other transforms.
First, we check that, because NiBabel displays the data array disregarding the affine, the .orthoview() visualization of the oblique image shows the same apparent data orientation as the original file.
Create a transform. We test with a rigid-body transformation comprising three rotations and three translations.
[8]:
T = nb.affines.from_matvec(nb.eulerangles.euler2mat(x=0.9, y=0.001, z=0.001), [4.0, 2.0, -1.0])
Resampling the image with *NiTransforms*. Let’s resample the dataset using NiTransforms. We will use the LAS image as the reference, which means that it will define the output space, and the RAS version as the moving image, from which the actual values are drawn. This result should be similar to those obtained with the other libraries.
[9]:
xfm = nt.linear.Affine(T)
xfm.reference = las_anatomy
[10]:
moved = xfm.apply(nii, order=0)
moved.to_filename('moved-nb.nii.gz')
[11]:
moved.orthoview()
[11]:
<OrthoSlicer3D: moved-nb.nii.gz (57, 67, 56)>

Store the transform in other formats. Let’s leverage NiTransforms’ features to store the transforms for ANTs, FSL, and AFNI. Because transform files can (generally) take any extension, for NiTransforms to know the format of the output we need to inform the transform object’s method of the format with the fmt argument.
The transform object (xfm in this notebook) has a convenient method, to_filename(). This method internally calls a factory class that assigns the correct type from nitransforms.io to correctly write out the output. to_filename() accepts a moving argument, and makes use of the reference property for those packages that require them (for example, FSL requires both; FreeSurfer's LTA format writes their characteristics into the output file, although they are only necessary when the transform type is voxel-to-voxel; and AFNI needs them only if the data array is not aligned with the cardinal axes, i.e., oblique).
[12]:
xfm.to_filename('M.tfm', fmt='itk')
xfm.to_filename('M.fsl', moving=las_anatomy, fmt='fsl') # reference is set in the xfm object
xfm.to_filename('M.afni', moving=las_anatomy, fmt='afni') # reference is set in the xfm object
!ls
M.afni M.fsl M.tfm moved-nb.nii.gz
The equivalent way of storing a transform using the low-level interface of the io submodule follows. Let’s store the transform T in FreeSurfer’s LTA format:
[13]:
lta = nt.io.lta.FSLinearTransform.from_ras(T, moving=las_anatomy, reference=nii)
print(lta.to_string())
# LTA file created by NiTransforms
type = 1
nxforms = 1
mean = 0.0000 0.0000 0.0000
sigma = 1.0000
1 4 4
9.999990000003335e-01 1.404936252078656e-03 1.617172252089032e-04 -4.002644155280283e+00
-9.999993333334666e-04 6.216088741390577e-01 7.833271395738223e-01 -4.558906113709593e-01
9.999998333333415e-04 -7.833265179640611e-01 6.216096574657062e-01 2.184262694060495e+00
0.000000000000000e+00 0.000000000000000e+00 0.000000000000000e+00 1.000000000000000e+00
src volume info
valid = 1 # volume info valid
filename = /Users/oesteban/datalad/nitransforms-tests/someones_anatomy.nii.gz
volume = 57 67 56
voxelsize = 2.750000000000000e+00 2.750000000000000e+00 2.750000000000000e+00
xras = 1.000000000000000e+00 0.000000000000000e+00 0.000000000000000e+00
yras = 0.000000000000000e+00 1.000000000000000e+00 0.000000000000000e+00
zras = 0.000000000000000e+00 0.000000000000000e+00 1.000000000000000e+00
cras = 3.750000000000000e-01 1.125000000000000e+00 -1.400000000000000e+01
dst volume info
valid = 1 # volume info valid
filename = None
volume = 57 67 56
voxelsize = 2.750000000000000e+00 2.750000000000000e+00 2.750000000000000e+00
xras = -1.000000000000000e+00 0.000000000000000e+00 0.000000000000000e+00
yras = 0.000000000000000e+00 1.000000000000000e+00 0.000000000000000e+00
zras = 0.000000000000000e+00 0.000000000000000e+00 1.000000000000000e+00
cras = -2.375000000000000e+00 1.125000000000000e+00 -1.400000000000000e+01
Applying the transforms we generated before on images
Now, let’s check that the transforms written out by NiTransforms, when used with their corresponding software packages, generate the same output as NiTransforms’ apply() method.
First, we will need to store a copy of our reference and moving images in the temporary directory where we are working:
[14]:
nii.to_filename("someones_anatomy_RAS.nii.gz")
las_anatomy.to_filename("someones_anatomy_LAS.nii.gz")
The AFNI use-case. Let’s apply AFNI’s 3dAllineate to resample someones_anatomy_LAS.nii.gz into the grid of someones_anatomy_RAS.nii.gz through the affine we generated above:
[15]:
!3dAllineate -base someones_anatomy_RAS.nii.gz -input someones_anatomy_LAS.nii.gz -1Dmatrix_apply M.afni -prefix moved-afni.nii.gz -final NN
moved_afni = nb.load('moved-afni.nii.gz')
++ 3dAllineate: AFNI version=AFNI_16.0.00 (Jan 1 2016) [64-bit]
++ Authored by: Zhark the Registrator
** AFNI converts NIFTI_datatype=2 (UINT8) in file someones_anatomy_RAS.nii.gz to FLOAT32
Warnings of this type will be muted for this session.
Set AFNI_NIFTI_TYPE_WARN to YES to see them all, NO to see none.
++ Source dataset: ./someones_anatomy_LAS.nii.gz
++ Base dataset: ./someones_anatomy_RAS.nii.gz
++ You might want to use '-master' when using '-1D*_apply'
++ Loading datasets
++ NOTE: base and source coordinate systems have different handedness
+ Orientations: base=Right handed (LPI); source=Left handed (RPI)
++ master dataset for output = base
++ OpenMP thread count = 4
++ ========== Applying transformation to 1 sub-bricks ==========
++ ========== sub-brick #0 ========== [total CPU to here=0.1 s]
++ Output dataset ./moved-afni.nii.gz
++ 3dAllineate: total CPU time = 0.1 sec Elapsed = 0.1
++ ###########################################################
Now, the two resampled images (moved, which we generated at the beginning using NiTransforms’ apply(), and moved_afni, just generated using 3dAllineate) should look the same.
[16]:
moved.orthoview()
moved_afni.orthoview()
[16]:
<OrthoSlicer3D: moved-afni.nii.gz (57, 67, 56)>


ANTs/ITK transforms. Similarly, let’s test antsApplyTransforms:
[17]:
!antsApplyTransforms -d 3 -i 'someones_anatomy_LAS.nii.gz' -r 'someones_anatomy_RAS.nii.gz' -o 'moved-itk.nii.gz' -n 'NearestNeighbor' -t 'M.tfm' --float
nb.load('moved-itk.nii.gz').orthoview()
[17]:
<OrthoSlicer3D: moved-itk.nii.gz (57, 67, 56)>

FSL. Finally, let’s check with FSL’s flirt:
[18]:
!flirt -in someones_anatomy_LAS.nii.gz -ref someones_anatomy_RAS.nii.gz -out moved-fsl.nii.gz -init M.fsl -applyxfm
nb.load('moved-fsl.nii.gz').orthoview()
[18]:
<OrthoSlicer3D: moved-fsl.nii.gz (57, 67, 56)>

The special case of oblique datasets and AFNI
AFNI implements spatial transforms in physical coordinates (mm), so it doesn’t generally need to know the reference and moving images to calculate coordinate mappings (obviously, both are required when applying the transform to align one with the other).
Let’s use an oblique dataset, rotated 0.2 rad around the X axis and 0.1 rad around Y, as the reference image in this case.
[19]:
oblique, _ = _generate_reoriented(anat_file, (1, 1, 1), None, {"x": 0.2, "y": 0.1, "z": 0})
print("Dataset is oblique" if nt.io.afni._is_oblique(oblique.affine) else "not oblique (?)")
oblique.to_filename("oblique.nii.gz")
Dataset is oblique
Let’s first check the contents of the output file when neither the reference nor the moving images were oblique:
[20]:
nonoblique_M = nt.io.afni.AFNILinearTransform.from_ras(T, moving=las_anatomy, reference=nii)
print(nonoblique_M.to_string())
# 3dvolreg matrices (DICOM-to-DICOM, row-by-row):
0.999999 -0.000999999 -0.001 -4 0.00140494 0.621609 0.783327 -2 -0.000161717 -0.783327 0.62161 -1
Now, let’s replace the reference with the oblique image:
[21]:
oblique_M = nt.io.afni.AFNILinearTransform.from_ras(T, moving=las_anatomy, reference=oblique)
oblique_M.to_filename("M.oblique.afni")
print(oblique_M.to_string())
# 3dvolreg matrices (DICOM-to-DICOM, row-by-row):
0.994885 -0.000781397 -0.101006 -13.492 0.0903701 0.453595 0.886614 12.7541 0.0451231 -0.891208 0.451346 -13.9663
It is apparent that the transform is not the same as above anymore. Let’s see whether AFNI interprets these new parameters correctly.
[22]:
!3dAllineate -base oblique.nii.gz -input someones_anatomy_LAS.nii.gz -1Dmatrix_apply M.oblique.afni -prefix moved-afni-oblique.nii.gz -final NN
moved_afni = nb.load('moved-afni-oblique.nii.gz')
++ 3dAllineate: AFNI version=AFNI_16.0.00 (Jan 1 2016) [64-bit]
++ Authored by: Zhark the Registrator
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as oblique.nii.gz,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:oblique.nii.gz is 12.794579 degrees from plumb.
++ Source dataset: ./someones_anatomy_LAS.nii.gz
++ Base dataset: ./oblique.nii.gz
++ You might want to use '-master' when using '-1D*_apply'
++ Loading datasets
++ NOTE: base and source coordinate systems have different handedness
+ Orientations: base=Right handed (LPI); source=Left handed (RPI)
++ master dataset for output = base
++ OpenMP thread count = 4
++ ========== Applying transformation to 1 sub-bricks ==========
++ ========== sub-brick #0 ========== [total CPU to here=0.1 s]
++ Output dataset ./moved-afni-oblique.nii.gz
++ 3dAllineate: total CPU time = 0.1 sec Elapsed = 0.1
++ ###########################################################
Looking closely at the standard output of 3dAllineate, we can spot that the dataset is correctly identified as oblique, and AFNI triggers its special behavior of deobliquing it:
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as oblique.nii.gz,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:oblique.nii.gz is 12.794579 degrees from plumb.
Let’s now run the corresponding operation with our original xfm object. First, we need to replace the old reference, then execute apply():
[23]:
xfm.reference = oblique
moved_oblique = xfm.apply(las_anatomy)
The outputs of AFNI and NiTransforms should be consistent:
[24]:
moved_afni.orthoview()
moved_oblique.orthoview()
[24]:
<OrthoSlicer3D: (57, 67, 56)>


Wait, the two images do not match! Let’s look at the affines of each.
[25]:
moved_afni.affine[:3, ...].round(2), moved_oblique.affine[:3, ...].round(2)
[25]:
(array([[ 2.75, -0. , -0. , -86.7 ],
[ -0. , 2.75, -0. , -72.74],
[ 0. , 0. , 2.75, -99.19]]),
array([[ 2.740e+00, 0.000e+00, 2.700e-01, -8.670e+01],
[ 5.000e-02, 2.700e+00, -5.400e-01, -7.274e+01],
[-2.700e-01, 5.500e-01, 2.680e+00, -9.919e+01]]))
We can see that AFNI has generated a dataset without obliquity. If both tools generate the same data at the output, with only a difference in the orientation metadata, then both data arrays should be similar. Let’s overwrite the metadata and compare visually:
[26]:
moved_oblique.__class__(
np.asanyarray(moved_afni.dataobj, dtype="uint8"),
moved_oblique.affine,
moved_oblique.header,
).orthoview()
moved_oblique.orthoview()
[26]:
<OrthoSlicer3D: (57, 67, 56)>


[ ]:
Contributing to NiTransforms (a NiBabel feature-repo)
Welcome to NiBabel, and the NiTransforms repository! We’re excited you’re here and want to contribute.
Please see the NiBabel Developer Guidelines on our documentation website.
These guidelines are designed to make it as easy as possible to get involved. If you have any questions that aren’t discussed in our documentation, or it’s difficult to find what you’re looking for, please let us know by opening an issue!
What’s new?
21.0.0 (September 10, 2021)
A first release of NiTransforms. This release accompanies a corresponding JOSS submission.
FIX: Final edits to JOSS submission (#135)
FIX: Add mention to potential alternatives in JOSS submission (#132)
FIX: Misinterpretation of voxel ordering in LTAs (#129)
FIX: Suggested edits to the JOSS submission (#121)
FIX: Invalid DOI (#124)
FIX: Remove the --inv flag from the mri_vol2vol regression test (#78)
FIX: Improve handling of optional fields in LTA (#65)
FIX: LTA conversions (#36)
ENH: Add more comprehensive comments to notebook (#134)
ENH: Add an .asaffine() member to TransformChain (#90)
ENH: Read (and apply) ITK/ANTs’ composite HDF5 transforms (#79)
ENH: Improved testing of LTA handling: ITK-to-LTA, mri_concatenate_lta (#75)
ENH: Add FS transform regression (#74)
ENH: Add ITK-LTA conversion test (#66)
ENH: Support for transforms mappings (e.g., head-motion correction) (#59)
ENH: command line interface (#55)
ENH: Facilitate loading of displacements field transforms (#54)
ENH: First implementation of AFNI displacement fields (#50)
ENH: Base implementation of transforms chains (composition) (#43)
ENH: First implementation of loading and applying ITK displacements fields (#42)
ENH: Refactor of AFNI and FSL I/O with StringStructs (#39)
ENH: More comprehensive implementation of ITK affines I/O (#35)
ENH: Added some minimal test-cases to the Affine class (#33)
ENH: Rewrite load/save utilities for ITK’s MatrixOffsetBased transforms in io (#31)
ENH: Rename resample() to apply() (#30)
ENH: Write tests pulling up the coverage of the base submodule (#28)
ENH: Add tests and implementation for Displacements fields and refactor linear accordingly (#27)
ENH: Uber-refactor of code style, method names, etc. (#24)
ENH: Increase coverage of linear transforms code (#23)
ENH: FreeSurfer LTA file support (#17)
ENH: Use obliquity directly from nibabel (#18)
ENH: Setting up a battery of tests (#9)
ENH: Revise doctests and get them ready for more thorough testing (#10)
DOC: Add Zenodo metadata record (#136)
DOC: Better document the IPython notebooks (#133)
DOC: Transfer CoC from NiBabel (#131)
DOC: Clarify integration plans with NiBabel in the README (#128)
DOC: Add contributing page to RTD (#130)
DOC: Add CONTRIBUTING.md file pointing at NiBabel (#127)
DOC: Add example notebooks to sphinx documentation (#126)
DOC: Add an Installation section (#122)
DOC: Display API per module (#120)
DOC: Add figure to JOSS draft / Add @smoia to author list (#61)
DOC: Initial JOSS draft (#47)
MAINT: Add imports of modules in __init__.py to work around #91 (#92)
MAINT: Fix missing python3 binary on CircleCI build job step (#85)
MAINT: Use setuptools_scm to manage versioning (#83)
MAINT: Split binary test-data out from gh repo (#84)
MAINT: Add Docker image/circle build (#80)
MAINT: Drop Python 3.5 (#77)
MAINT: Better config on setup.py (binary operator starting line) (#60)
MAINT: add docker build to travis matrix (#29)
MAINT: testing coverage (#16)
MAINT: pep8 complaints (#14)
MAINT: skip unfinished implementation tests (#15)
MAINT: pep8speaks (#13)
Library API (application programming interface)
Information on specific functions, classes, and methods for developers.
Base
Common interface for transforms.
- class nitransforms.base.ImageGrid(image)[source]
Class to represent spaces of gridded data (images).
- property affine
Access the indexes-to-RAS affine.
- property inverse
Access the RAS-to-indexes affine.
- property ndcoords
List the physical coordinates of this gridded space’s samples.
- property ndindex
List the indexes corresponding to the space grid.
- class nitransforms.base.SampledSpatialData(dataset)[source]
Represent sampled spatial data: regularly gridded (images) and surfaces.
- property ndcoords
List the physical coordinates of this sample.
- property ndim
Access the number of dimensions.
- property npoints
Access the total number of voxels.
- property shape
Access the size of each dimension of the space.
- class nitransforms.base.TransformBase(reference=None)[source]
Abstract image class to represent transforms.
- apply(spatialimage, reference=None, order=3, mode='constant', cval=0.0, prefilter=True, output_dtype=None)[source]
Apply a transformation to an image, resampling on the reference spatial object.
- Parameters
spatialimage (spatialimage) – The image object containing the data to be resampled in reference space
reference (spatial object, optional) – The image, surface, or combination thereof containing the coordinates of samples that will be sampled.
order (int, optional) – The order of the spline interpolation, default is 3. The order has to be in the range 0-5.
mode ({'constant', 'reflect', 'nearest', 'mirror', 'wrap'}, optional) – Determines how the input image is extended when the resampling overflows a border. Default is ‘constant’.
cval (float, optional) – Constant value for mode='constant'. Default is 0.0.
prefilter (bool, optional) – Determines if the image’s data array is prefiltered with a spline filter before interpolation. The default is True, which will create a temporary float64 array of filtered values if order > 1. If setting this to False, the output will be slightly blurred if order > 1, unless the input is prefiltered, i.e., it is the result of calling the spline filter on the original input.
- Returns
resampled – The image data after resampling to reference space.
- Return type
spatialimage or ndarray
- map(x, inverse=False)[source]
Apply \(y = f(x)\).
TransformBase implements the identity transform.
- Parameters
x (N x D numpy.ndarray) – Input RAS+ coordinates (i.e., physical coordinates).
inverse (bool) – If True, apply the inverse transform \(x = f^{-1}(y)\).
- Returns
y – Transformed (mapped) RAS+ coordinates (i.e., physical coordinates).
- Return type
N x D numpy.ndarray
- property ndim
Access the dimensions of the reference space.
- property reference
Access a reference space where data will be resampled onto.
IO
Reading and writing of transform files.
Base I/O
Read/write linear transforms.
- class nitransforms.io.base.BaseLinearTransformList(xforms=None, binaryblock=None, endianness=None, check=True)[source]
A string-based structure for series of linear transforms.
- property xforms
Get the list of internal transforms.
- class nitransforms.io.base.DisplacementsField[source]
A data structure representing displacements fields.
- class nitransforms.io.base.LinearParameters(parameters=None)[source]
A string-based structure for linear transforms.
Examples
>>> lp = LinearParameters()
>>> np.all(lp.structarr['parameters'] == np.eye(4))
True
>>> p = np.diag([2., 2., 2., 1.])
>>> lp = LinearParameters(p)
>>> np.all(lp.structarr['parameters'] == p)
True
- class nitransforms.io.base.LinearTransformStruct(binaryblock=None, endianness=None, check=True)[source]
File data structure for linear transforms.
Tool Specific I/O
AFNI
Read/write AFNI’s transforms.
- class nitransforms.io.afni.AFNIDisplacementsField[source]
A data structure representing displacements fields.
- class nitransforms.io.afni.AFNILinearTransform(parameters=None)[source]
A string-based structure for AFNI linear transforms.
- classmethod from_ras(ras, moving=None, reference=None)[source]
Create an AFNI affine from a nitransform’s RAS+ matrix.
AFNI implicitly de-obliques image affine matrices before applying transforms, so for consistency we update the transform to account for the obliquity of the images.
>>> moving.affine == ras @ reference.affine
We can decompose the affines into oblique and de-obliqued components:
>>> moving.affine == m_obl @ m_deobl
>>> reference.affine == r_obl @ r_deobl
To generate an equivalent AFNI transform, we need an effective transform (e_ras):
>>> m_obl @ m_deobl == ras @ r_obl @ r_deobl
>>> m_deobl == inv(m_obl) @ ras @ r_obl @ r_deobl
Hence,
>>> m_deobl == e_ras @ r_deobl
>>> e_ras == inv(m_obl) @ ras @ r_obl
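The identities above can be sanity-checked numerically with synthetic stand-ins for the oblique components (rot_x and all matrix values below are illustrative, not taken from any real dataset):

```python
import numpy as np

def rot_x(a):
    """Homogeneous 4x4 rotation about the X axis (illustrative helper)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

# Synthetic oblique components of moving/reference, and a rigid "ras" transform
m_obl = rot_x(0.1)
r_obl = rot_x(-0.2)
ras = rot_x(0.3)
ras[:3, 3] = [4.0, 2.0, -1.0]
r_deobl = np.diag([2.0, 2.0, 2.0, 1.0])  # a plumb (de-obliqued) reference affine

# The effective AFNI transform from the derivation above
e_ras = np.linalg.inv(m_obl) @ ras @ r_obl

# Both routes to m_deobl must agree, and composing back with m_obl
# must reproduce ras applied between the oblique affines
m_deobl = np.linalg.inv(m_obl) @ ras @ r_obl @ r_deobl
assert np.allclose(m_deobl, e_ras @ r_deobl)
assert np.allclose(m_obl @ e_ras @ r_deobl, ras @ r_obl @ r_deobl)
print("derivation consistent")
```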
- class nitransforms.io.afni.AFNILinearTransformArray(xforms=None, binaryblock=None, endianness=None, check=True)[source]
A string-based structure for series of AFNI linear transforms.
FSL
Read/write FSL’s transforms.
- class nitransforms.io.fsl.FSLDisplacementsField[source]
A data structure representing displacements fields.
- class nitransforms.io.fsl.FSLLinearTransform(parameters=None)[source]
A string-based structure for FSL linear transforms.
- class nitransforms.io.fsl.FSLLinearTransformArray(xforms=None, binaryblock=None, endianness=None, check=True)[source]
A string-based structure for series of FSL linear transforms.
- classmethod from_filename(filename)[source]
Read the struct from a file given its path.
If the file does not exist, then indexed names with the zero-padded suffix .NNN are attempted, following FSL’s MCFLIRT conventions.
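That fallback convention could be sketched as follows; candidate_paths is a hypothetical illustration of the naming scheme, not NiTransforms’ actual implementation:

```python
from pathlib import Path

def candidate_paths(filename, n=3):
    """Yield the given path, then zero-padded indexed variants (.000, .001, ...).

    Hypothetical illustration of the MCFLIRT-style naming convention;
    not NiTransforms' actual code.
    """
    path = Path(filename)
    yield path
    for i in range(n):
        yield path.parent / f"{path.name}.{i:03d}"

print([p.name for p in candidate_paths("motion.mat", n=2)])
# ['motion.mat', 'motion.mat.000', 'motion.mat.001']
```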
ITK
Read/write ITK transforms.
- class nitransforms.io.itk.ITKDisplacementsField[source]
A data structure representing displacements fields.
- class nitransforms.io.itk.ITKLinearTransform(parameters=None, offset=None)[source]
A string-based structure for ITK linear transforms.
- class nitransforms.io.itk.ITKLinearTransformArray(xforms=None, binaryblock=None, endianness=None, check=True)[source]
A string-based structure for series of ITK linear transforms.
- classmethod from_ras(ras, moving=None, reference=None)[source]
Create an ITK affine from a nitransform’s RAS+ matrix.
The moving and reference parameters are included in this method’s signature for a consistent API, but they have no effect on this particular method because ITK already uses RAS+ coordinates to describe transforms internally.
- property xforms
Get the list of internal ITKLinearTransforms.
FreeSurfer/LTA
Read/write linear transforms.
- class nitransforms.io.lta.FSLinearTransform(binaryblock=None, endianness=None, check=True)[source]
Represents a single LTA’s transform structure.
- classmethod from_ras(ras, moving=None, reference=None)[source]
Create an affine from a nitransform’s RAS+ matrix.
- to_ras(moving=None, reference=None)[source]
Return a nitransforms’ internal RAS+ array.
Seemingly, the matrix of an LTA is defined such that it maps coordinates from the dest volume to the src volume. Therefore, without inversion, the LTA matrix is appropriate to move the information from the src volume into the dest volume’s grid.
Important
The moving and reference parameters are dismissed because VOX2VOX LTAs are converted to RAS2RAS type before returning the RAS+ matrix, using the dest and src volumes contained in the LTA. Both arguments are kept for API compatibility.
- Parameters
moving (dismissed) – The spatial reference of moving images.
reference (dismissed) – The spatial reference of fixed images.
- Returns
matrix – The RAS+ affine matrix corresponding to the LTA.
- Return type
numpy.ndarray
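The VOX2VOX-to-RAS2RAS conversion described above can be sketched with plain numpy. This is an illustrative assumption about the mechanism, not the library’s internal code; the variable names (src_vox2ras, dst_vox2ras, vox2vox) are made up for the example:

```python
import numpy as np

# A VOX2VOX LTA matrix maps dest voxel indices to src voxel indices.
# Wrapping it with the two voxel-to-RAS affines of the dest and src
# volumes yields a RAS2RAS (RAS+) matrix.
src_vox2ras = np.diag([2.0, 2.0, 2.0, 1.0])  # hypothetical src volume affine
dst_vox2ras = np.eye(4)                      # hypothetical dest volume affine
vox2vox = np.eye(4)                          # LTA matrix: dest vox -> src vox

# dest RAS -> dest vox -> src vox -> src RAS
ras2ras = src_vox2ras @ vox2vox @ np.linalg.inv(dst_vox2ras)
```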
- class nitransforms.io.lta.FSLinearTransformArray(xforms=None, binaryblock=None, endianness=None, check=True)[source]
A list of linear transforms generated by FreeSurfer.
Linear Transforms
Linear transforms.
- class nitransforms.linear.Affine(matrix=None, reference=None)[source]
Represents linear transforms on image data.
- classmethod from_filename(filename, fmt='X5', reference=None, moving=None)[source]
Create an affine from a transform file.
- map(x, inverse=False)[source]
Apply \(y = f(x)\).
- Parameters
x (N x D numpy.ndarray) – Input RAS+ coordinates (i.e., physical coordinates).
inverse (bool) – If
True
, apply the inverse transform \(x = f^{-1}(y)\).
- Returns
y – Transformed (mapped) RAS+ coordinates (i.e., physical coordinates).
- Return type
N x D numpy.ndarray
Examples
>>> xfm = Affine([[1, 0, 0, 1], [0, 1, 0, 2], [0, 0, 1, 3], [0, 0, 0, 1]])
>>> xfm.map((0,0,0))
array([[1., 2., 3.]])
>>> xfm.map((0,0,0), inverse=True)
array([[-1., -2., -3.]])
- property matrix
Access the internal representation of this affine.
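The mapping shown in the map() examples above amounts to a product in homogeneous coordinates. A minimal numpy sketch of that operation, independent of the library:

```python
import numpy as np

# Append a homogeneous coordinate, multiply by the 4x4 affine, drop it.
matrix = np.array([[1., 0, 0, 1],
                   [0, 1., 0, 2],
                   [0, 0, 1., 3],
                   [0, 0, 0, 1.]])
x = np.atleast_2d([0., 0., 0.])                   # N x 3 RAS+ coordinates
xh = np.hstack([x, np.ones((x.shape[0], 1))])     # N x 4 homogeneous coords
y = (matrix @ xh.T).T[:, :3]
# y -> [[1., 2., 3.]], matching the doctest above
```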
- class nitransforms.linear.LinearTransformsMapping(transforms, reference=None)[source]
Represents a series of linear transforms.
- apply(spatialimage, reference=None, order=3, mode='constant', cval=0.0, prefilter=True, output_dtype=None)[source]
Apply a transformation to an image, resampling on the reference spatial object.
- Parameters
spatialimage (spatialimage) – The image object containing the data to be resampled in reference space
reference (spatial object, optional) – The image, surface, or combination thereof containing the coordinates of samples that will be sampled.
order (int, optional) – The order of the spline interpolation, default is 3. The order has to be in the range 0-5.
mode ({"constant", "reflect", "nearest", "mirror", "wrap"}, optional) – Determines how the input image is extended when the resampling overflows a border. Default is “constant”.
cval (float, optional) – Constant value for mode="constant". Default is 0.0.
prefilter (bool, optional) – Determines if the image’s data array is prefiltered with a spline filter before interpolation. The default is True, which will create a temporary float64 array of filtered values if order > 1. If setting this to False, the output will be slightly blurred if order > 1, unless the input is prefiltered, i.e., it is the result of calling the spline filter on the original input.
- Returns
resampled – The image data after resampling into the reference space.
- Return type
spatialimage or ndarray
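Resampling of this kind is typically built on scipy’s map_coordinates; the following standalone sketch (an assumption about the mechanism, not the library’s code) shows how the order, mode, cval, and prefilter parameters come into play:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Resample a small volume at its own (here, untransformed) voxel
# coordinates. A real transform would first map the reference grid
# through the affine before interpolating.
data = np.arange(27, dtype=float).reshape(3, 3, 3)
coords = np.indices((3, 3, 3)).reshape(3, -1).astype(float)
resampled = map_coordinates(
    data, coords, order=3, mode="constant", cval=0.0, prefilter=True
).reshape(3, 3, 3)
```

With the identity mapping used here, interior values are reproduced; the interesting cases arise when the transformed coordinates fall between voxels or outside the grid, where mode and cval govern the result.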
- map(x, inverse=False)[source]
Apply \(y = f(x)\).
- Parameters
x (N x D numpy.ndarray) – Input RAS+ coordinates (i.e., physical coordinates).
inverse (bool) – If
True
, apply the inverse transform \(x = f^{-1}(y)\).
- Returns
y – Transformed (mapped) RAS+ coordinates (i.e., physical coordinates).
- Return type
N x D numpy.ndarray
Examples
>>> xfm = LinearTransformsMapping([
...     [[1., 0, 0, 1.], [0, 1., 0, 2.], [0, 0, 1., 3.], [0, 0, 0, 1.]],
...     [[1., 0, 0, -1.], [0, 1., 0, -2.], [0, 0, 1., -3.], [0, 0, 0, 1.]],
... ])
>>> xfm.matrix
array([[[ 1.,  0.,  0.,  1.],
        [ 0.,  1.,  0.,  2.],
        [ 0.,  0.,  1.,  3.],
        [ 0.,  0.,  0.,  1.]],

       [[ 1.,  0.,  0., -1.],
        [ 0.,  1.,  0., -2.],
        [ 0.,  0.,  1., -3.],
        [ 0.,  0.,  0.,  1.]]])
>>> y = xfm.map([(0, 0, 0), (-1, -1, -1), (1, 1, 1)])
>>> y[0, :, :3]
array([[1., 2., 3.],
       [0., 1., 2.],
       [2., 3., 4.]])
>>> y = xfm.map([(0, 0, 0), (-1, -1, -1), (1, 1, 1)], inverse=True)
>>> y[0, :, :3]
array([[-1., -2., -3.],
       [-2., -3., -4.],
       [ 0., -1., -2.]])
- nitransforms.linear.load(filename, fmt='X5', reference=None, moving=None)[source]
Load a linear transform file.
Examples
>>> xfm = load(regress_dir / "affine-LAS.itk.tfm", fmt="itk")
>>> isinstance(xfm, Affine)
True
>>> xfm = load(regress_dir / "itktflist.tfm", fmt="itk")
>>> isinstance(xfm, LinearTransformsMapping)
True
Manipulations
Common interface for transforms.
- class nitransforms.manip.TransformChain(transforms=None)[source]
Implements the concatenation of transforms.
- append(x)[source]
Concatenate one element to the chain.
Example
>>> chain = TransformChain(transforms=TransformBase())
>>> chain.append((TransformBase(), TransformBase()))
>>> len(chain)
3
- classmethod from_filename(filename, fmt='X5', reference=None, moving=None)[source]
Load a transform file.
- insert(i, x)[source]
Insert an item at a given position.
Example
>>> chain = TransformChain(transforms=[TransformBase(), TransformBase()])
>>> chain.insert(1, TransformBase())
>>> len(chain)
3
>>> chain.insert(1, TransformChain(chain))
>>> len(chain)
6
- map(x, inverse=False)[source]
Apply a succession of transforms, e.g., \(y = f_3(f_2(f_1(f_0(x))))\).
Example
>>> chain = TransformChain(transforms=[TransformBase(), TransformBase()])
>>> chain([(0., 0., 0.), (1., 1., 1.), (-1., -1., -1.)])
[(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (-1.0, -1.0, -1.0)]
>>> chain([(0., 0., 0.), (1., 1., 1.), (-1., -1., -1.)], inverse=True)
[(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (-1.0, -1.0, -1.0)]
>>> TransformChain()((0., 0., 0.))
Traceback (most recent call last):
TransformError:
- property transforms
Get the internal list of transforms.
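For affine elements, the chained mapping \(y = f_2(f_1(x))\) is plain function composition, i.e., a product of matrices. A small numpy sketch (unrelated to the library’s internals) illustrating the equivalence:

```python
import numpy as np

# Two illustrative 4x4 affines: a translation and a scaling.
f1 = np.array([[1., 0, 0, 1], [0, 1., 0, 0], [0, 0, 1., 0], [0, 0, 0, 1.]])
f2 = np.array([[2., 0, 0, 0], [0, 2., 0, 0], [0, 0, 2., 0], [0, 0, 0, 1.]])
x = np.array([1., 1., 1., 1.])     # a point in homogeneous coordinates

y_chain = f2 @ (f1 @ x)            # apply the transforms in succession
y_composed = (f2 @ f1) @ x         # pre-compose, then apply once
# both give [4., 2., 2., 1.]
```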
- nitransforms.manip.load(filename, fmt='X5', reference=None, moving=None)
Load a transform file.
Nonlinear Transforms
Nonlinear transforms.
- class nitransforms.nonlinear.BSplineFieldTransform(coefficients, reference=None, order=3)[source]
Represent a nonlinear transform parameterized by BSpline basis.
- apply(spatialimage, reference=None, order=3, mode='constant', cval=0.0, prefilter=True, output_dtype=None)[source]
Apply a B-Spline transform on input data.
- map(x, inverse=False)[source]
Apply the transformation to a list of physical coordinate points.
\[\mathbf{y} = \mathbf{x} + \Psi^3(\mathbf{k}, \mathbf{x}), \label{eq:1}\tag{1}\]
- Parameters
x (N x D numpy.ndarray) – Input RAS+ coordinates (i.e., physical coordinates).
inverse (bool) – If
True
, apply the inverse transform \(x = f^{-1}(y)\).
- Returns
y – Transformed (mapped) RAS+ coordinates (i.e., physical coordinates).
- Return type
N x D numpy.ndarray
Examples
>>> xfm = BSplineFieldTransform(test_dir / "someones_bspline_coefficients.nii.gz")
>>> xfm.reference = test_dir / "someones_anatomy.nii.gz"
>>> xfm.map([-6.5, -36., -19.5]).tolist()
[[-6.5, -31.476097418406784, -19.5]]
>>> xfm.map([[-6.5, -36., -19.5], [-1., -41.5, -11.25]]).tolist()
[[-6.5, -31.476097418406784, -19.5], [-1.0, -3.8072675377121996, -11.25]]
- class nitransforms.nonlinear.DisplacementsFieldTransform(field, reference=None)[source]
Represents a dense field of displacements (one vector per voxel).
- map(x, inverse=False)[source]
Apply the transformation to a list of physical coordinate points.
\[\mathbf{y} = \mathbf{x} + D(\mathbf{x}), \label{eq:2}\tag{2}\]
where \(D(\mathbf{x})\) is the value of the discrete field of displacements \(D\) interpolated at the location \(\mathbf{x}\).
- Parameters
x (N x D numpy.ndarray) – Input RAS+ coordinates (i.e., physical coordinates).
inverse (bool) – If
True
, apply the inverse transform \(x = f^{-1}(y)\).
- Returns
y – Transformed (mapped) RAS+ coordinates (i.e., physical coordinates).
- Return type
N x D numpy.ndarray
Examples
>>> xfm = DisplacementsFieldTransform(test_dir / "someones_displacement_field.nii.gz")
>>> xfm.map([-6.5, -36., -19.5]).tolist()
[[-6.5, -36.475167989730835, -19.5]]
>>> xfm.map([[-6.5, -36., -19.5], [-1., -41.5, -11.25]]).tolist()
[[-6.5, -36.475167989730835, -19.5], [-1.0, -42.038356602191925, -11.25]]
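Eq. (2) can be demonstrated with a toy numpy field. This minimal sketch evaluates the displacement exactly at a voxel center rather than interpolating, for simplicity; the array names are illustrative, not the library’s internals:

```python
import numpy as np

# A dense displacements field: one 3-vector per voxel of a 3x3x3 grid.
field = np.zeros((3, 3, 3, 3))
field[1, 1, 1] = [0., -0.5, 0.]   # displace only the center voxel

x = np.array([1, 1, 1])           # a point that sits on a voxel center
y = x + field[tuple(x)]           # Eq. (2): y = x + D(x)
# y -> [1., 0.5, 1.]
```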
Interpolation methods
Method groups
B-Splines
Interpolate with 3D tensor-product B-Spline basis.
- nitransforms.interp.bspline.grid_bspline_weights(target_grid, ctrl_grid)[source]
Evaluate tensor-product B-Spline weights on a grid.
For each of the \(N\) input locations \(\mathbf{x} = (x_i, x_j, x_k)\) and \(K\) control points or knots \(\mathbf{c} =(c_i, c_j, c_k)\), the tensor-product cubic B-Spline kernel weights are calculated:
\[\Psi^3(\mathbf{x}, \mathbf{c}) = \beta^3(x_i - c_i) \cdot \beta^3(x_j - c_j) \cdot \beta^3(x_k - c_k), \label{eq:bspline_weights}\tag{1}\]
where each \(\beta^3\) represents the cubic B-Spline for one dimension. The 1D B-Spline kernel implementation uses numpy.piecewise, and is based on the closed-form given by Eq. (6) of [Unser1999].
By iterating over dimensions, the data samples that fall outside of the compact support of the tensor-product kernel associated to each control point can be filtered out and dismissed to lighten computation.
Finally, the resulting weights matrix \(\Psi^3(\mathbf{k}, \mathbf{s})\) can be easily identified in Eq. \(\eqref{eq:1}\) and used as the design matrix for approximation of data.
- Parameters
target_grid (ImageGrid or nibabel.spatialimages) – Regular grid of \(N\) locations at which the tensor B-Spline basis will be evaluated.
ctrl_grid (ImageGrid or nibabel.spatialimages) – Regular grid of \(K\) control points (knots) where the B-Spline basis is defined.
- Returns
weights – A sparse matrix of interpolating weights \(\Psi^3(\mathbf{k}, \mathbf{s})\) for the N voxels of the target EPI, for each of the total K knots. This sparse matrix can be directly used as design matrix for the fitting step of approximation/extrapolation.
- Return type
numpy.ndarray (\(K \times N\))
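The 1D kernel \(\beta^3\) and its tensor product can be written directly with numpy.piecewise, as the text describes. This is a hedged sketch of the closed-form cubic B-spline from Eq. (6) of Unser (1999), not the library’s implementation:

```python
import numpy as np

def cubic_bspline(d):
    """Evaluate the 1D cubic B-spline kernel at distance(s) d."""
    d = np.abs(np.asarray(d, dtype=float))
    return np.piecewise(
        d,
        [d < 1.0, (d >= 1.0) & (d < 2.0)],
        [
            lambda d: (4.0 - 6.0 * d**2 + 3.0 * d**3) / 6.0,
            lambda d: (2.0 - d) ** 3 / 6.0,
        ],  # implicitly zero outside the compact support |d| >= 2
    )

# Tensor-product weight (Eq. 1 above) for one (location, knot) pair in 3D,
# with made-up coordinates in knot units:
x, c = np.array([0.3, 0.7, 1.2]), np.zeros(3)
weight = np.prod(cubic_bspline(x - c))
```

A useful sanity check is the partition-of-unity property: for any location, the kernel weights over the four nearest integer knots sum to one.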
Patched Nibabel Functions
- class nitransforms.patched.LabeledWrapStruct(binaryblock=None, endianness=None, check=True)[source]
- nitransforms.patched.shape_zoom_affine(shape, zooms, x_flip=True, y_flip=False)[source]
Get affine implied by given shape and zooms.
We get the translations from the center of the image (implied by shape).
- Parameters
shape ((N,) array-like) – shape of image data. N is the number of dimensions.
zooms ((N,) array-like) – zooms (voxel sizes) of the image
x_flip ({True, False}) – whether to flip the X row of the affine. Corresponds to radiological storage on disk.
y_flip ({False, True}) – whether to flip the Y row of the affine. Corresponds to DICOM storage on disk when x_flip is also True.
- Returns
aff – affine giving correspondence of voxel coordinates to mm coordinates, taking the center of the image as origin
- Return type
(4,4) array
Examples
>>> shape = (3, 5, 7)
>>> zooms = (3, 2, 1)
>>> shape_zoom_affine((3, 5, 7), (3, 2, 1))
array([[-3.,  0.,  0.,  3.],
       [ 0.,  2.,  0., -4.],
       [ 0.,  0.,  1., -3.],
       [ 0.,  0.,  0.,  1.]])
>>> shape_zoom_affine((3, 5, 7), (3, 2, 1), False)
array([[ 3.,  0.,  0., -3.],
       [ 0.,  2.,  0., -4.],
       [ 0.,  0.,  1., -3.],
       [ 0.,  0.,  0.,  1.]])
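The first doctest result can be re-derived in a few lines of numpy: scale by the zooms (negating X for the radiological flip) and translate so that the image center maps to the origin. This is an illustrative reconstruction, not the function’s actual source:

```python
import numpy as np

shape, zooms = np.array([3, 5, 7]), np.array([3., 2., 1.])
signs = np.array([-1., 1., 1.])           # x_flip=True, y_flip=False

scaled = signs * zooms                    # signed voxel sizes
origin = -scaled * (shape - 1) / 2.0      # put the image center at 0 mm

aff = np.eye(4)
aff[:3, :3] = np.diag(scaled)
aff[:3, 3] = origin
# aff matches the first doctest result above
```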