Orthogonal connectivity factorization: Interpretable decomposition of variability in correlation matrices

Authors A. Hyvärinen, J. Hirayama, V. Kiviniemi and M. Kawanabe
Journal/Conference Name Neural Computation, 28(3), 445-484
Paper Category
Paper Abstract In many multivariate time series, the correlation structure is non-stationary, i.e., it changes over time. The correlation structure may also change as a function of other cofactors, for example, the identity of the subject in biomedical data. A fundamental approach for the analysis of such data is to estimate the correlation structure (connectivities) separately in short time windows or for different subjects, and to use existing machine learning methods, such as principal component analysis (PCA), to summarize or visualize the changes in connectivity. However, the visualization of such a straightforward PCA is problematic because the ensuing connectivity patterns are much more complex objects than, e.g., spatial patterns. Here, we develop a new framework for analysing variability in connectivities, using the PCA approach as the starting point. First, we show how to further analyze and visualize the principal components of connectivity matrices by a tailor-made rank-two matrix approximation in which we use the outer product of two orthogonal vectors. This leads to a new kind of transformation of eigenvectors which is particularly suited for this purpose, and often enables interpretation of the principal component as connectivity between two groups of variables. Second, we show how to incorporate the orthogonality and the rank-two constraint into the estimation of PCA itself to improve the results. We further provide an interpretation of these methods in terms of the estimation of a probabilistic generative model related to blind separation of dependent sources. Experiments on brain imaging data give very promising results.
Date of publication 2016
Code Programming Language MATLAB
Comment
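As an informal complement to the abstract above, the following MATLAB sketch illustrates the basic idea: correlation matrices are estimated in short time windows, PCA is run on their vectorized upper triangles, and one principal component, reshaped into a symmetric matrix, is approximated by the symmetrized outer product w1*w2' + w2*w1' of two orthogonal vectors. The random data, the sizes n, K and winLen, and the specific construction used in step 4 (a 45-degree rotation of the eigenvectors with the largest and smallest eigenvalues) are placeholders and simplifying assumptions; this is not the authors' released implementation, which also modifies the PCA estimation itself.

% Minimal illustrative sketch (not the authors' released code):
% PCA of windowed correlation matrices, followed by a symmetric
% rank-two approximation w1*w2' + w2*w1' with orthogonal w1, w2.

rng(0);
n = 10;                      % number of variables (e.g. brain regions)
K = 40;                      % number of time windows (or subjects)
winLen = 200;                % samples per window
X = randn(K*winLen, n);      % placeholder multivariate time series

% 1) Correlation matrix in each window, vectorized (upper triangle only).
iu = find(triu(true(n), 1));
V = zeros(K, numel(iu));
for k = 1:K
    seg = X((k-1)*winLen + (1:winLen), :);
    C = corrcoef(seg);
    V(k, :) = C(iu)';
end

% 2) PCA across windows of the vectorized correlations (via SVD).
Vc = V - mean(V, 1);
[~, ~, P] = svd(Vc, 'econ');         % columns of P are principal axes

% 3) Reshape the first principal component back into a symmetric matrix.
M = zeros(n);
M(iu) = P(:, 1);
M = M + M';

% 4) Rank-two approximation M ~ w1*w2' + w2*w1' with w1 orthogonal to w2:
%    rotate the eigenvectors belonging to the largest and smallest
%    eigenvalues of M by 45 degrees and rescale them.
[U, D] = eig(M);
d = diag(D);
[dmin, imin] = min(d);
[dmax, imax] = max(d);
s  = (dmax - dmin) / 2;              % common scale of the two vectors
w1 = sqrt(s) * (U(:, imax) + U(:, imin)) / sqrt(2);
w2 = sqrt(s) * (U(:, imax) - U(:, imin)) / sqrt(2);
Mhat = w1*w2' + w2*w1';

fprintf('w1''*w2 = %.2g, relative approximation error = %.3f\n', ...
        w1'*w2, norm(M - Mhat, 'fro') / norm(M, 'fro'));

Under these assumptions (the reshaped component has zero diagonal, so its eigenvalues sum to zero), the 45-degree rotation yields the best Frobenius-norm approximation of the form w1*w2' + w2*w1' with orthogonal vectors, and the dominant entries of w1 and w2 can be read as two groups of variables whose mutual connectivity the component describes.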
