The components xi of the observed random vector x = (x1, ..., xm)T are generated as a sum of the independent components sk, k = 1, ..., n, weighted by the mixing weights ai,k:

xi = ai,1 s1 + ai,2 s2 + ... + ai,n sn

The same generative model can be written in vector form as

x = s1 a1 + s2 a2 + ... + sn an

where the observed random vector x is represented by the basis vectors ak = (a1,k, ..., am,k)T. The basis vectors ak form the columns of the mixing matrix A = (a1, a2, ..., an), and the generative formula can be written as x = As, where s = (s1, s2, ..., sn)T is the vector of independent components.
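The generative model can be sketched in a few lines of NumPy. The sources, sample count, and mixing matrix below are illustrative assumptions, not values from the text; the only requirement the model imposes is that the components of s be independent and (for ICA to work) non-Gaussian:

```python
import numpy as np

# Hypothetical example: two independent non-Gaussian sources
# mixed into two observed signals via x = As.
rng = np.random.default_rng(0)
n_samples = 1000
s = np.vstack([
    rng.laplace(size=n_samples),        # s1: Laplacian (super-Gaussian)
    rng.uniform(-1.0, 1.0, n_samples),  # s2: uniform (sub-Gaussian)
])

# Mixing matrix A; its columns are the basis vectors ak.
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])

x = A @ s  # each component xi = ai,1 s1 + ai,2 s2
print(x.shape)  # (2, 1000)
```

Each row of x is one observed signal; each column is one realization (sample) of the random vector x.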

Given the model and realizations (samples) of the random vector x, the task is to estimate both the mixing matrix A and the sources s. This is done by adaptively calculating weight vectors w and setting up a cost function that either maximizes the nongaussianity of the calculated sk = wT x or minimizes the mutual information between the estimated components. In some cases, a priori knowledge of the probability distributions of the sources can be used in the cost function.

The original sources s can be recovered by multiplying the observed signals x with the inverse of the mixing matrix, W = A−1, also known as the unmixing matrix. Here it is assumed that the mixing matrix is square (n = m). If the number of basis vectors is greater than the dimensionality of the observed vectors (n > m), the task is overcomplete but is still solvable with the pseudoinverse.
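The nongaussianity-maximizing estimation can be sketched with a FastICA-style fixed-point iteration on whitened data, using the kurtosis contrast g(u) = u^3. Everything below (the sources, mixing matrix, iteration count, and tolerance) is an illustrative assumption, not the article's own procedure, and the recovered components come back only up to permutation, sign, and scale:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Hypothetical independent non-Gaussian sources and mixing matrix.
s = np.vstack([rng.laplace(size=n),
               rng.uniform(-1.0, 1.0, n)])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
x = A @ s  # observed mixtures

# Center and whiten the observations so cov(z) = I.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

# Fixed-point iteration: w <- E[z (wT z)^3] - 3w, with deflation
# (projecting out already-found rows) to extract one component at a time.
W = np.zeros((2, 2))
for k in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = w @ z
        w_new = (z * wx**3).mean(axis=1) - 3.0 * w
        w_new -= W[:k].T @ (W[:k] @ w_new)  # deflation step
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < 1e-9
        w = w_new
        if converged:
            break
    W[k] = w

s_est = W @ z  # recovered sources (up to permutation, sign, and scale)
```

When A is known and square, this machinery is unnecessary: s = A−1 x directly (np.linalg.inv), and np.linalg.pinv plays the analogous role in the non-square case.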