Transfer Learning for Brain Decoding using Deep Architectures


Velioglu B., YARMAN VURAL F. T.

IEEE 16th International Conference on Cognitive Informatics and Cognitive Computing (ICCI*CC), Oxford, United Kingdom, 26 - 28 July 2017, pp.65-70

  • Publication Type: Conference Paper / Full Text
  • City: Oxford
  • Country: United Kingdom
  • Page Numbers: pp.65-70
  • Keywords: Deep Neural Network, Transfer Learning, Feature Learning, Brain Decoding, NETWORK
  • Middle East Technical University Affiliated: Yes

Abstract

Is there a general representation of the information content of the human brain, which can be extracted from functional magnetic resonance imaging (fMRI) data? Is it possible to learn this representation automatically from big data sets by unsupervised learning methods? Is it possible to transfer this representation to learn and decode a set of cognitive states in other fMRI data sets? This study offers partial answers to the above questions by using transfer learning in deep architectures. First, a hierarchical representation for fMRI data is learned from a large data set in the Human Connectome Project (HCP) by a 3-layered stacked denoising autoencoder (SDAE). Then, the learned representations are used to train on and recognize the cognitive states recorded in a relatively small data set from a one-back repetition detection experiment. Results show that it is possible to learn a general representation and transfer the learned representation from one fMRI data set to another for the brain decoding problem. The learned representation has better discriminative power than Pearson correlation features. Results also show that deep neural networks transfer representations better than the factor models commonly used in the pattern recognition and neuroscience literature.
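The following is a minimal sketch, not the authors' code, of the pipeline the abstract describes: greedy layer-wise pretraining of a 3-layer stacked denoising autoencoder on a large unlabeled fMRI feature set, then transferring the learned encoder to a small labeled set for cognitive-state classification. The layer sizes, corruption noise level, number of classes, and the data tensors are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """One layer trained to reconstruct the clean input from a noisy copy."""
    def __init__(self, in_dim, hid_dim, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)  # corrupt the input
        return self.decoder(self.encoder(noisy))

def pretrain_layer(dae, data, epochs=10, lr=1e-3):
    """Greedy unsupervised pretraining of a single denoising autoencoder layer."""
    opt = torch.optim.Adam(dae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(dae(data), data)   # reconstruct clean data from noisy input
        loss.backward()
        opt.step()
    return dae

# --- Unsupervised stage on the large (HCP-like) data set (placeholder tensors) ---
torch.manual_seed(0)
source_data = torch.randn(512, 1000)          # stand-in for fMRI feature vectors
dims = [1000, 400, 100, 50]                    # assumed sizes for the 3-layer SDAE
layers, h = [], source_data
for in_d, out_d in zip(dims[:-1], dims[1:]):
    dae = pretrain_layer(DenoisingAutoencoder(in_d, out_d), h)
    layers.append(dae.encoder)
    with torch.no_grad():
        h = dae.encoder(h)                     # codes become input to the next layer

# --- Transfer stage on the small labeled (one-back experiment) data set ----------
encoder = nn.Sequential(*layers)               # transferred hierarchical representation
classifier = nn.Sequential(encoder, nn.Linear(dims[-1], 2))  # assumed 2 cognitive states
target_x = torch.randn(64, 1000)               # stand-in labeled fMRI samples
target_y = torch.randint(0, 2, (64,))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()
for _ in range(20):                            # fine-tune encoder and classifier jointly
    opt.zero_grad()
    loss = ce(classifier(target_x), target_y)
    loss.backward()
    opt.step()
```

The design choice to hedge here is the transfer step itself: the pretrained encoder weights are reused unchanged as the lower layers of the classifier, so the small labeled data set only has to adapt the representation rather than learn it from scratch.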