tedana: TE Dependent ANAlysis

The tedana package is part of the ME-ICA pipeline, performing TE-dependent analysis of multi-echo functional magnetic resonance imaging (fMRI) data.


Citations

When using tedana, please include the following citations:

1. tedana. Available from: https://doi.org/10.5281/zenodo.1250561

2. Kundu, P., Inati, S. J., Evans, J. W., Luh, W. M. & Bandettini, P. A. (2011). Differentiating BOLD and non-BOLD signals in fMRI time series using multi-echo EPI. NeuroImage, 60, 1759-1770.

3. Kundu, P., Brenowitz, N. D., Voon, V., Worbe, Y., Vértes, P. E., Inati, S. J., Saad, Z. S., Bandettini, P. A., & Bullmore, E. T. (2013). Integrated strategy for improving functional connectivity mapping using multiecho fMRI. Proceedings of the National Academy of Sciences, 110, 16187-16192.

Alternatively, you can automatically compile relevant citations by running your tedana code with duecredit. For example, if you plan to run a script using tedana (in this case, tedana_script.py):

python -m duecredit tedana_script.py

You can also learn more about why citing software is important.

License Information

tedana is licensed under GNU Lesser General Public License version 2.1.

Multi-echo fMRI

In multi-echo (ME) fMRI, data are acquired for multiple echo times, resulting in multiple time series for each voxel.
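To first approximation, the signal at each echo follows a monoexponential decay, S(TE) = S0 * exp(-TE / T2*). A minimal sketch of what the per-echo signals look like for one voxel, using hypothetical S0 and T2* values (the echo times match the example given for the -e argument):

```python
import math

def echo_signal(s0, t2star, te):
    """Monoexponential decay: predicted signal at echo time te (ms)."""
    return s0 * math.exp(-te / t2star)

# Hypothetical values: baseline signal S0 and a T2* of 30 ms
s0, t2star = 1000.0, 30.0
tes = [15.0, 39.0, 63.0]  # echo times in ms
signals = [echo_signal(s0, t2star, te) for te in tes]
# Later echoes carry less signal but more T2* (BOLD) weighting
print(signals)
```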

The physics of multi-echo fMRI

Why use multi-echo?

Resources

Journal articles
  • A review on multi-echo fMRI and its applications
Videos
Sequences
  • Multi-echo sequences: who has them and how to get them.
Datasets

A small number of multi-echo datasets have been made public so far. This list is not necessarily up to date, so please check OpenNeuro for more.

tedana’s approach

tedana works by decomposing multi-echo BOLD data via PCA and ICA. These components are then analyzed to determine whether they are TE-dependent or -independent. TE-dependent components are classified as BOLD, while TE-independent components are classified as non-BOLD, and are discarded as part of data cleaning.
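The TE-dependence test can be sketched as a model comparison: a BOLD-like (R2*-driven) component's per-echo amplitudes scale with TE, while a non-BOLD (S0-driven) component's amplitudes are flat across TEs. The following is a simplified illustration of that idea, not tedana's actual kappa/rho computation (hypothetical amplitude values):

```python
import numpy as np

def te_dependence_scores(amplitudes, tes):
    """Residual sums of squares for two models of a component's
    per-echo amplitudes: TE-dependent (amplitude ~ b * TE) and
    TE-independent (amplitude ~ constant). Lower RSS = better fit."""
    tes = np.asarray(tes, dtype=float)
    amps = np.asarray(amplitudes, dtype=float)
    b = amps @ tes / (tes @ tes)          # least-squares slope for amp = b * TE
    rss_te = float(np.sum((amps - b * tes) ** 2))
    c = amps.mean()                        # least-squares constant for amp = c
    rss_const = float(np.sum((amps - c) ** 2))
    return rss_te, rss_const

tes = [15.0, 39.0, 63.0]
bold_like = [0.3, 0.78, 1.26]      # grows with TE -> TE-dependent
nonbold_like = [1.0, 1.02, 0.98]   # flat across TEs -> TE-independent

for name, amps in [("bold_like", bold_like), ("nonbold_like", nonbold_like)]:
    rss_te, rss_const = te_dependence_scores(amps, tes)
    label = "BOLD (TE-dependent)" if rss_te < rss_const else "non-BOLD"
    print(name, "->", label)
```

In tedana itself this comparison is carried out voxelwise with F-statistics, summarized per component as Kappa (TE-dependence) and Rho (TE-independence).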

Derivatives

  • medn
    ‘Denoised’ BOLD time series after basic preprocessing, T2*-weighted averaging of echoes (i.e. ‘optimal combination’), and ICA denoising. Use this dataset for task analysis and resting-state time series correlation analysis.
  • tsoc
    ‘Raw’ BOLD time series after basic preprocessing and T2*-weighted averaging of echoes (i.e. ‘optimal combination’). ‘Standard’ denoising or task analyses (e.g., motion regression, physiological noise correction, scrubbing) can be performed on this dataset for comparison to ME-ICA denoising.
  • mefc
    Component maps (in units of delta S) of accepted BOLD ICA components. Use this dataset for ME-ICR seed-based connectivity analysis.
  • mefl
    Component maps (in units of delta S) of ALL ICA components.
  • ctab
    Table of component Kappa, Rho, and variance explained values, plus listing of component classifications.

Usage

tedana minimally requires:

  1. acquired echo times (in milliseconds), and
  2. functional datasets, one per acquired echo.

Many other options are also available; view them with tedana -h or t2smap -h.
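For example, a minimal call pairing three echo-specific files with their echo times might look like this (filenames are placeholders; supply your own data):

```shell
# One functional dataset per acquired echo, with echo times (in ms)
# listed in the same order as the files
tedana -d echo1.nii.gz echo2.nii.gz echo3.nii.gz -e 15.0 39.0 63.0
```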

Run tedana

This is the full tedana workflow, which runs multi-echo ICA and outputs multi-echo denoised data along with many other derivatives. To see which files are generated by this workflow, check out the workflow documentation: tedana.workflows.tedana_workflow().

usage: tedana [-h] -d FILE [FILE ...] -e TE [TE ...] [--mask FILE]
              [--mix FILE] [--ctab FILE] [--manacc MANACC] [--kdaw KDAW]
              [--rdaw RDAW] [--conv CONV] [--sourceTEs STE]
              [--combmode {t2s,ste}] [--initcost {tanh,pow3,gaus,skew}]
              [--finalcost {tanh,pow3,gaus,skew}] [--denoiseTEs] [--strict]
              [--no_gscontrol] [--stabilize] [--filecsdata] [--wvpca]
              [--label LABEL] [--seed FIXED_SEED]
Named Arguments
-d Multi-echo dataset for analysis. May be a single file with spatially concatenated data or a set of echo-specific files, in the same order as the TEs are listed in the -e argument.
-e Echo times (in ms). E.g., 15.0 39.0 63.0
--mask Binary mask of voxels to include in TE Dependent ANAlysis. Must be in the same space as data.
--mix File containing the mixing matrix. If not provided, ME-PCA and ME-ICA are performed.
--ctab File containing a component table from which to extract pre-computed classifications.
--manacc Comma-separated list of manually accepted components.
--kdaw

Dimensionality augmentation weight (Kappa). Default=10. -1 for low-dimensional ICA

Default: 10.0

--rdaw

Dimensionality augmentation weight (Rho). Default=1. -1 for low-dimensional ICA

Default: 1.0

--conv

Convergence limit. Default 2.5e-5

Default: 2.5e-5

--sourceTEs

Source TEs for models. E.g., 0 for all, -1 for opt. com., and 1,2 for just TEs 1 and 2. Default=-1.

Default: -1

--combmode

Possible choices: t2s, ste

Combination scheme for TEs: t2s (Posse 1999, default), ste (Poser)

Default: “t2s”

--initcost

Possible choices: tanh, pow3, gaus, skew

Initial cost function for ICA.

Default: “tanh”

--finalcost

Possible choices: tanh, pow3, gaus, skew

Final cost function for ICA. Same options as initcost.

Default: “tanh”

--denoiseTEs

Denoise each TE dataset separately.

Default: False

--strict

Ignore low-variance ambiguous components

Default: False

--no_gscontrol

Disable global signal regression.

Default: True (global signal regression is performed unless this flag is passed)

--stabilize

Stabilize convergence by reducing dimensionality, for low quality data

Default: False

--filecsdata

Save component selection data

Default: False

--wvpca

Perform PCA on wavelet-transformed data

Default: False

--label Label for output directory.
--seed

Value passed to repr(mdp.numx_rand.seed()). Set to an integer value for reproducible ICA results, or to -1 for varying results across calls.

Default: 42

Note

The --mask argument is not intended for use with very conservative region-of-interest analyses. One of the ways by which components are assessed as BOLD or non-BOLD is their spatial pattern, so overly conservative masks will invalidate several steps in the tedana workflow. To examine regions-of-interest with multi-echo data, apply masks after TE Dependent ANAlysis.
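Applying an ROI mask after the workflow can be sketched with numpy boolean indexing (hypothetical array shapes; in practice the tedana outputs would first be loaded with a NIfTI library such as nibabel):

```python
import numpy as np

# Hypothetical denoised output: 4D array (x, y, z, time)
denoised = np.random.rand(4, 4, 4, 10)

# Hypothetical ROI mask, applied only AFTER TE-dependent analysis
roi_mask = np.zeros((4, 4, 4), dtype=bool)
roi_mask[1:3, 1:3, 1:3] = True  # 2 x 2 x 2 = 8 voxels

# Extract ROI voxel time series: shape (n_voxels_in_roi, n_timepoints)
roi_timeseries = denoised[roi_mask]
print(roi_timeseries.shape)  # (8, 10)
```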

Run t2smap

This workflow uses multi-echo data to optimally combine data across echoes and to estimate T2* and S0 maps or time series. To see which files are generated by this workflow, check out the workflow documentation: tedana.workflows.t2smap_workflow().

usage: t2smap [-h] -d FILE [FILE ...] -e TE [TE ...] [--mask FILE]
              [--fitmode {all,ts}] [--combmode {t2s,ste}] [--label LABEL]
Named Arguments
-d Multi-echo dataset for analysis. May be a single file with spatially concatenated data or a set of echo-specific files, in the same order as the TEs are listed in the -e argument.
-e Echo times (in ms). E.g., 15.0 39.0 63.0
--mask Binary mask of voxels to include in TE Dependent ANAlysis. Must be in the same space as data.
--fitmode

Possible choices: all, ts

Monoexponential model fitting scheme. “all” fits the model per voxel, across all timepoints; “ts” fits the model per voxel and per timepoint.

Default: “all”

--combmode

Possible choices: t2s, ste

Combination scheme for TEs: t2s (Posse 1999, default), ste (Poser)

Default: “t2s”

--label Label for output directory.
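The monoexponential fit behind t2smap can be sketched as a log-linear least-squares problem: log S(TE) = log S0 - TE / T2*. A simplified single-voxel illustration on noiseless synthetic data (not tedana's implementation, which handles noise, masking, and adaptive echo selection more carefully):

```python
import numpy as np

def fit_monoexponential(signal, tes):
    """Estimate S0 and T2* for one voxel via log-linear least squares:
    log S(TE) = log S0 - TE / T2*."""
    tes = np.asarray(tes, dtype=float)
    logs = np.log(np.asarray(signal, dtype=float))
    slope, intercept = np.polyfit(tes, logs, 1)  # highest degree first
    return np.exp(intercept), -1.0 / slope       # S0, T2*

tes = [15.0, 39.0, 63.0]
true_s0, true_t2star = 1000.0, 30.0
signal = [true_s0 * np.exp(-te / true_t2star) for te in tes]
s0_hat, t2star_hat = fit_monoexponential(signal, tes)
print(round(s0_hat, 1), round(t2star_hat, 1))  # recovers 1000.0 and 30.0
```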

API

tedana.workflows: Common workflows

tedana.workflows
tedana.workflows.tedana_workflow(data, tes) Run the “canonical” TE-Dependent ANAlysis workflow.
tedana.workflows.t2smap_workflow(data, tes) Estimate T2* and S0, and optimally combine data across TEs.

tedana.model: Modeling TE-dependence

tedana.model
tedana.model.fit_decay(data, tes, mask, masksum) Fit voxel-wise monoexponential decay models to data
tedana.model.fit_decay_ts(data, tes, mask, …) Fit voxel- and timepoint-wise monoexponential decay models to data
tedana.model.fitmodels_direct(catd, mmix, …) Fit TE-dependence and -independence models to components.
tedana.model.make_optcom(data, tes, mask[, …]) Optimally combine BOLD data across TEs.
tedana.model.monoexponential Functions to estimate S0 and T2* from multi-echo data.
tedana.model.fit Fit models.
tedana.model.combine Functions to optimally combine data across echoes.
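The t2s combination scheme (Posse 1999) used by make_optcom can be sketched as a T2*-weighted average, with per-echo weights proportional to TE * exp(-TE / T2*). A simplified single-voxel illustration with hypothetical signal values, not make_optcom itself:

```python
import numpy as np

def optimal_combine(signals, tes, t2star):
    """T2*-weighted average of per-echo signals for one voxel.
    Weights are proportional to TE_i * exp(-TE_i / T2*),
    normalized to sum to 1."""
    tes = np.asarray(tes, dtype=float)
    w = tes * np.exp(-tes / t2star)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(signals, dtype=float)))

tes = [15.0, 39.0, 63.0]
signals = [606.5, 272.5, 122.5]  # hypothetical per-echo signals
combined = optimal_combine(signals, tes, t2star=30.0)
```

Because the weights are normalized, the combined value always lies within the range of the per-echo signals.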

tedana.decomposition: Data decomposition

tedana.decomposition
tedana.decomposition.tedpca(catd, OCcatd, …) Use principal components analysis (PCA) to identify and remove thermal noise from multi-echo data.
tedana.decomposition.tedica(n_components, …) Performs ICA on dd and returns mixing matrix
tedana.decomposition._utils Utility functions for tedana decomposition

tedana.selection: Component selection

tedana.selection
tedana.selection.selcomps(seldict, mmix, …) Labels ICA components to keep or remove from denoised data
tedana.selection._utils Utility functions for tedana.selection

tedana.utils: Utility functions

tedana.utils
tedana.utils.io Functions to handle file input/output
tedana.utils.utils Utilities for tedana package

Contributing to tedana

This document explains how to set up a development environment for contributing to tedana and the code style conventions we follow within the project. For a more general guide to tedana development, please see our contributing guide. Please also follow our code of conduct.

Style Guide

Code

Docstrings should follow numpydoc convention. We encourage extensive documentation.

The code itself should follow PEP8 convention as much as possible, with at most about 500 lines of code (not including docstrings) per script.

Pull Requests

We encourage the use of standardized tags for categorizing pull requests. When opening a pull request, please use one of the following prefixes:

  • [ENH] for enhancements
  • [FIX] for bug fixes
  • [TST] for new or updated tests
  • [DOC] for new or updated documentation
  • [STY] for stylistic changes
  • [RF] for refactoring existing code

Pull requests should be submitted early and often! If your pull request is not yet ready to be merged, please also include the [WIP] prefix. This tells the development team that your pull request is a “work-in-progress”, and that you plan to continue working on it.

Release Checklist

This is the checklist of items that must be completed when cutting a new release of tedana. These steps can only be completed by a project maintainer, but they are a good resource for releasing your own Python projects!

  1. All continuous integration must be passing and docs must be building successfully.
  2. Create a new release, using the GitHub guide for creating a release on GitHub. Release-drafter should have already drafted release notes listing all changes since the last release; check to make sure these are correct.
  3. Pulling from the master branch, locally build a new copy of tedana and upload it to PyPI.
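Step 3 commonly looks like the following (a sketch of one conventional approach using setuptools and twine; not necessarily the project's exact release commands):

```shell
# Build source and wheel distributions from an up-to-date master checkout
git checkout master && git pull
python setup.py sdist bdist_wheel
# Upload to PyPI (requires credentials)
twine upload dist/*
```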

We have set up tedana so that releases automatically mint a new DOI with Zenodo; a guide for doing this integration is available here.
