BRAIN IMAGING AND BEHAVIOR, vol. 12, no. 4, pp. 1067-1083, 2018 (SCI-Expanded)
The human brain is thought to process information in multiple frequency bands. Therefore, we can extract diverse information from functional Magnetic Resonance Imaging (fMRI) data by processing it at multiple resolutions. We propose a framework, called Hierarchical Multi-resolution Mesh Networks (HMMNs), which establishes a set of brain networks at multiple resolutions of the fMRI signal to represent the underlying cognitive process. Our framework first decomposes the fMRI signal into various frequency subbands using the wavelet transform. Then, a brain network is formed at each subband by ensembling a set of local meshes. The arc weights of each local mesh are estimated by ridge regression. Finally, the adjacency matrices of the mesh networks obtained at different subbands are used to train classifiers in an ensemble learning architecture, called fuzzy stacked generalization (FSG). Our decoding performance on the Human Connectome Project task-fMRI dataset shows that HMMNs can discriminate tasks with 99% accuracy across 808 subjects. The diversity of information embedded in the mesh networks of multiple subbands enables the ensemble of classifiers to collaborate for brain decoding. The proposed HMMNs decode the cognitive tasks better than a single classifier applied to any individual subband. Mesh networks also have greater representation power than pairwise correlations or average voxel time series. Moreover, fusion of diverse information using FSG outperforms fusion with majority voting. We conclude that fMRI data recorded during a cognitive task provide diverse information in multi-resolution mesh networks. Our framework fuses this complementary information and boosts the brain decoding performance obtained at individual subbands.
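To make the pipeline concrete, the sketch below illustrates one HMMN stage on toy data: each region's time series is split into wavelet subbands, and the arc weights of a local mesh around each seed region are estimated by ridge regression, yielding a subband-level adjacency matrix. This is only a minimal illustration, not the paper's implementation: the wavelet family (db4), decomposition level, mesh size (p = 5), correlation-based neighbor selection, ridge penalty, the random toy data, and the use of PyWavelets and scikit-learn are all assumptions made here for demonstration.

```python
# Hedged sketch of one HMMN stage: wavelet subband decomposition followed by
# local-mesh arc-weight estimation via ridge regression.
# All parameter choices below are illustrative, not the paper's settings.
import numpy as np
import pywt
from sklearn.linear_model import Ridge


def wavelet_subbands(ts, wavelet="db4", level=3):
    """Split a 1-D time series into level+1 subband reconstructions."""
    coeffs = pywt.wavedec(ts, wavelet, level=level)
    bands = []
    for i in range(len(coeffs)):
        # Zero out all coefficient sets except one, then reconstruct.
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(kept, wavelet)[: len(ts)])
    return bands  # approximation band followed by detail bands


def mesh_arc_weights(X, seed, neighbors, alpha=1.0):
    """Regress the seed signal on its p neighbors within one subband;
    the ridge coefficients serve as the arc weights of the local mesh."""
    model = Ridge(alpha=alpha, fit_intercept=False)
    model.fit(X[:, neighbors], X[:, seed])
    return model.coef_


# Toy example: 200 time points, 20 regions, meshes of p = 5 neighbors.
rng = np.random.default_rng(0)
data = rng.standard_normal((200, 20))

# Take one detail subband (index 1) for every region.
subband = np.column_stack([wavelet_subbands(data[:, r])[1] for r in range(20)])

adjacency = np.zeros((20, 20))
corr = np.abs(np.corrcoef(subband, rowvar=False))
for seed in range(20):
    # Neighbors are picked by correlation here; the paper may define
    # neighborhoods spatially or functionally instead (assumption).
    neighbors = np.argsort(-corr[seed])[1:6]
    adjacency[seed, neighbors] = mesh_arc_weights(subband, seed, neighbors)

# The flattened adjacency matrix of each subband would become the feature
# vector for that subband's classifier, which FSG later fuses.
features = adjacency.ravel()
```

Repeating this construction for every subband produces one adjacency matrix per resolution; in the framework described above, the resulting feature vectors feed the subband-level classifiers whose decisions FSG combines.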