Talks

Image processing and diffeomorphic registration

Ariel Rokem

Image processing operations are fundamental to the analysis of scientific imaging data, yet we often rely on black-box implementations of the operations that we wish to perform on our data. In the image processing part of the workshop, we will construct an image processing pipeline with an emphasis on human brain MRI data. We will dive into the details, starting from simple segmentation, contrast correction, and detrending operations, and progress in complexity to modern diffeomorphic registration algorithms that let you register multi-modal data, overcoming even severe distortion and differing contrast mechanisms.
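
To make the registration step concrete, here is a minimal sketch of symmetric diffeomorphic registration of two brain volumes. The abstract does not name a specific library, so the choice of DIPY and nibabel, the cross-correlation metric, and all file and variable names below are illustrative assumptions, not the workshop's actual pipeline.

    import nibabel as nib
    from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
    from dipy.align.metrics import CCMetric

    # Load two 3D volumes to align (file names are placeholders).
    static = nib.load('static_t1.nii.gz').get_fdata()
    moving = nib.load('moving_epi.nii.gz').get_fdata()

    # Similarity metric: cross-correlation over 3D neighborhoods;
    # multi-modal pairs may call for a different metric.
    metric = CCMetric(3)

    # Multi-resolution optimization schedule: iterations per pyramid level.
    sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[100, 50, 25])

    # Estimate the diffeomorphic mapping and warp the moving volume
    # into the space of the static volume.
    mapping = sdr.optimize(static, moving)
    warped = mapping.transform(moving)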

The representation of semantic information in the human brain during listening and reading

Fatma (Imamoglu) Deniz

Extracting meaning from spoken and written language is a uniquely human ability. A recent publication from our laboratory showed that semantic information from spoken language is represented in a broad network of semantically selective areas distributed across the human cerebral cortex. However, it is unclear which of the representations revealed in that study are specific to the modality of speech, and which are amodal. Here we studied how the semantic content of narratives received through two different modalities, listening and reading, is represented in the human brain. We used functional magnetic resonance imaging (fMRI) to record brain activity in two separate experiments while participants listened to and read several hours of the same narrative stories. We then built voxel-wise encoding models to characterize selectivity for semantic content across the cerebral cortex. We found that, in a variety of regions across temporal, parietal, and prefrontal cortices, voxel-wise models estimated from one modality (e.g. listening) accurately predicted responses in the other modality (e.g. reading). In fact, these cross-modal predictions failed only in sensory regions such as early auditory and visual cortices. We then used principal component analysis on the estimated model weights to recover semantic selectivity for each voxel. We found strong correlations between the cortical maps of semantic content produced from listening and reading within these components. These results suggest that semantic representations of language outside of early sensory areas are not tied to the specific modality through which the semantic information is received.
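
The abstract itself contains no code, but the core analysis pattern (a voxel-wise encoding model fit in one modality and evaluated cross-modally, followed by PCA on the weights) can be sketched in a few lines. Everything below, including the use of scikit-learn's RidgeCV, the random placeholder data, and the array shapes, is an illustrative assumption and not the authors' actual pipeline.

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.decomposition import PCA

    # Placeholder data: semantic stimulus features (time x features) and
    # fMRI responses (time x voxels) for each modality.
    rng = np.random.RandomState(0)
    X_listen, Y_listen = rng.randn(1000, 200), rng.randn(1000, 2000)
    X_read, Y_read = rng.randn(1000, 200), rng.randn(1000, 2000)

    # Fit one regularized linear model per voxel from the listening data.
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    model.fit(X_listen, Y_listen)

    # Cross-modal test: predict reading responses with listening weights,
    # scoring each voxel by prediction-response correlation.
    Y_pred = model.predict(X_read)
    r = np.array([np.corrcoef(Y_pred[:, v], Y_read[:, v])[0, 1]
                  for v in range(Y_read.shape[1])])

    # Recover shared semantic dimensions: PCA on the voxels x features
    # weight matrix.
    pca = PCA(n_components=4).fit(model.coef_)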

Introduction to machine learning

Chris Holdgraf

With the advent of open-source languages and improved computing technology, we have many more tools for using data-driven methods to ask questions in neuroscience. In particular, machine learning allows us to use larger datasets to find more complex patterns in our data. Python offers one of the most comprehensive and user-friendly libraries for performing machine learning, called "scikit-learn". In addition, others have built tools on top of these libraries for machine learning in neuroimaging and other neuroscience modalities. This tutorial will be a brief introduction to the principles of machine learning and model fitting in neuroscience. It will cover the components of any model, as well as some specific analyses that are often run (regression and classification). It will also cover these topics specifically in the context of neuroimaging, and serve as an introduction to popular packages that accomplish these goals.
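
As a preview of the estimator pattern the tutorial introduces, the sketch below shows that regression and classification share the same scikit-learn API; the toy random data is a stand-in for any trials x features matrix of neural measurements.

    import numpy as np
    from sklearn.linear_model import Ridge, LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Toy data standing in for neural features: 200 trials x 50 features.
    rng = np.random.RandomState(0)
    X = rng.randn(200, 50)
    y_value = X[:, 0] + 0.1 * rng.randn(200)  # e.g. a continuous stimulus value
    y_label = (y_value > 0).astype(int)       # e.g. a stimulus category

    # Regression: cross-validated R^2 of a linear model.
    print(cross_val_score(Ridge(alpha=1.0), X, y_value, cv=5))

    # Classification: cross-validated accuracy, same fit/predict API.
    print(cross_val_score(LogisticRegression(), X, y_label, cv=5))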

Introduction to deep learning and deep learning with caffe

Maryana Alegro

Deep learning neural networks are representation-learning methods that work with multiple layers of representation, meaning that they can automatically discover the unknown patterns that best characterize a raw data set. Moreover, they model data in a bottom-up fashion: in the lower layers, data are characterized by small, low-level features such as edges and connected objects, and the representations increase in abstraction and complexity in the following layers. A neural network with several layers (hence the term "deep") can represent highly intricate images in a way that resembles the human visual system. Improvements in theoretical results, together with advances in hardware and software, have made deep learning neural networks the state-of-the-art methods for recognition and detection tasks, such as object localization and image segmentation. In our deep learning talk we will present the building blocks for creating a deep convolutional neural network, while in our hands-on tutorial we will show how to implement and use a network for classifying images using the Caffe framework (http://caffe.berkeleyvision.org/) and Python.
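
As a preview of the hands-on session, classifying an image with an already-trained network through Caffe's Python interface looks roughly like the sketch below. The file names, and the assumption that the input and output blobs are called 'data' and 'prob', are placeholders rather than a specific tutorial model.

    import caffe

    caffe.set_mode_cpu()  # use caffe.set_mode_gpu() if a GPU is available

    # Load a trained network from its architecture definition and learned
    # weights (both file names are placeholders).
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    # Preprocess the input to match the network's training-time layout:
    # channels-first axes and 0-255 pixel values.
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))
    transformer.set_raw_scale('data', 255)

    image = caffe.io.load_image('example.jpg')  # placeholder image path
    net.blobs['data'].data[...] = transformer.preprocess('data', image)

    # Forward pass; the predicted class is the argmax of the output
    # probabilities (assuming a softmax layer named 'prob').
    output = net.forward()
    predicted_class = output['prob'].argmax()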
