Let’s say I want to generate correlated random variables. I understand that I can use the Cholesky decomposition of the correlation matrix to obtain the correlated values. If $C$ is the correlation matrix, then we can do the Cholesky decomposition:

$LL^{T}=C$

Then I can easily generate correlated random variables:

$LX=Y$,

where $X$ are uncorrelated values and $Y$ are correlated values. If I want two correlated random variables then $L$ is:

$$L = \begin{bmatrix} 1 & 0 \\ \rho & \sqrt{1 - \rho^2} \end{bmatrix}$$

I understand that this works, but I don’t really understand why… My question is: Why does this work?
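For concreteness, here is a minimal sketch of the two-variable case in Python with NumPy (the sample size and $\rho = 0.7$ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.7

# X: two uncorrelated standard-normal samples; each row is one variable.
X = rng.standard_normal((2, 100_000))

# The lower-triangular L from the question.
L = np.array([[1.0, 0.0],
              [rho, np.sqrt(1.0 - rho**2)]])

# Y = LX: the rows of Y are now correlated with correlation rho.
Y = L @ X

print(np.corrcoef(Y)[0, 1])  # should be close to rho = 0.7
```

With a large sample, the empirical correlation of the rows of $Y$ lands very close to the chosen $\rho$.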

**Answer**

The covariance matrix of any zero-mean random vector $Y$ is given by $\mathbb{E} \left(YY^T \right)$, where $Y$ is a random column vector of size $n \times 1$. Now take a random vector, $X$, consisting of uncorrelated random variables, with each random variable, $X_i$, having zero mean and unit variance. Since the $X_i$’s are uncorrelated random variables with zero mean and unit variance, we have $\mathbb{E} \left(X_iX_j\right) = \delta_{ij}$. Hence, $$\mathbb{E} \left( X X^T \right) = I$$ To generate a random vector with a given covariance matrix $Q$, look at the Cholesky decomposition of $Q$, i.e. $Q = LL^T$. Such a decomposition exists because a covariance matrix is by definition symmetric and positive semi-definite; when $Q$ is positive definite, the Cholesky factor $L$ exists and is unique.
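A quick sanity check of the factorization in NumPy (the entries of $Q$ below are hypothetical example values chosen to make $Q$ symmetric positive definite):

```python
import numpy as np

# A symmetric positive-definite covariance matrix (example values).
Q = np.array([[4.0, 1.2, 0.8],
              [1.2, 9.0, 2.1],
              [0.8, 2.1, 1.0]])

# Cholesky factor: lower-triangular L with Q = L L^T.
L = np.linalg.cholesky(Q)

print(np.allclose(L @ L.T, Q))  # True: L L^T reproduces Q
```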

Now look at the random vector $Z = LX$. Since expectation is a linear operator, $$\mathbb{E} \left(ZZ^T\right) = \mathbb{E} \left((LX)(LX)^T \right) = \mathbb{E} \left(LX X^T L^T\right) = L\, \mathbb{E} \left(XX^T \right) L^T = LIL^T = LL^T = Q$$ Hence, the random vector $Z$ has the desired covariance matrix, $Q$.
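The derivation above can be checked empirically: draw uncorrelated zero-mean, unit-variance samples $X$, form $Z = LX$, and compare the sample covariance of $Z$ with $Q$ (the particular $Q$ and sample size below are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Target covariance matrix (example values, symmetric positive definite).
Q = np.array([[2.0, 0.6],
              [0.6, 1.5]])
L = np.linalg.cholesky(Q)

# X: uncorrelated samples with zero mean and unit variance, one row per variable.
X = rng.standard_normal((2, 200_000))

# Z = LX has covariance L I L^T = Q.
Z = L @ X

print(np.cov(Z))  # approximately Q
```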

**Attribution**
*Source : Link , Question Author : Flux Capacitor , Answer Author : Community*