Texture Modeling
We try to learn a bank of textures with which any texture can be modeled as a linear combination of the stored textures. The idea is analogous to the wavelet basis methodology; but unlike wavelet models, whose basis functions are mathematically defined, the texture bank is learned from data. This method is called Dictionary Learning.
Consider the simple case of linear regression
\[Ax = b\]
There are two problems here:

Which solution to pick if multiple solutions exist? (underconstrained)

Overfitting to noisy data (overconstrained)
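Both regimes can be seen directly with NumPy. This is a minimal sketch; the matrices, dimensions, and variable names are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Underconstrained: 2 equations, 3 unknowns -> infinitely many exact solutions.
A_under = rng.standard_normal((2, 3))
b_under = rng.standard_normal(2)
# lstsq returns the minimum-norm solution among all exact solutions.
x_min_norm, *_ = np.linalg.lstsq(A_under, b_under, rcond=None)

# Any null-space direction of A can be added without changing Ax.
null_dir = np.linalg.svd(A_under)[2][-1]   # last right-singular vector
x_other = x_min_norm + 5.0 * null_dir      # a different, equally valid solution

# Overconstrained: 20 noisy equations, 3 unknowns -> no exact solution,
# so least squares finds the best-fit x instead.
A_over = rng.standard_normal((20, 3))
b_over = A_over @ np.ones(3) + 0.1 * rng.standard_normal(20)
x_ls, *_ = np.linalg.lstsq(A_over, b_over, rcond=None)
```

In the underconstrained case, `x_min_norm` and `x_other` fit the data equally well, which is exactly the ambiguity that a prior or regularizer is meant to resolve.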
Priors and regularization are the tools used to address these problems. Ridge regression is a popular regularization technique in which the 2-norm of the solution is penalized. The objective function in ridge regression is quadratic, so finding the optimal value is easy.
\[\text{Ridge} = \min_x \{\vert\vert Ax - b\vert\vert^2 + \vert\vert \Gamma x\vert\vert^2 \} \\ \hat{x} = (A^TA + \Gamma^T\Gamma)^{-1}A^Tb\]
The Bayesian interpretation is a Gaussian prior on $x$ with mean $0$ and covariance $(\Gamma^T\Gamma)^{-1}$.
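A minimal NumPy sketch of the closed-form solution above, assuming the common Tikhonov choice $\Gamma = \sqrt{\lambda}\, I$ (the data here is synthetic, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.05 * rng.standard_normal(50)

lam = 0.1
Gamma = np.sqrt(lam) * np.eye(10)   # Tikhonov choice: Gamma = sqrt(lam) * I

# Closed-form ridge solution: x_hat = (A^T A + Gamma^T Gamma)^{-1} A^T b.
# np.linalg.solve is preferred over forming the inverse explicitly.
x_hat = np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ b)
```

Note that the regularized normal matrix $A^TA + \Gamma^T\Gamma$ is always invertible for $\lambda > 0$, even when $A^TA$ alone is singular.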
Lasso regularization instead uses the L1 norm in place of the L2 norm. Note that this objective function has no closed-form solution, so an iterative method (e.g., proximal gradient descent) must be used. The equivalent prior is the Laplace distribution.
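One such iterative method is ISTA (proximal gradient descent), where each gradient step on the smooth term is followed by soft-thresholding, the proximal operator of the L1 penalty. A rough NumPy sketch; the function names and parameter choices are illustrative, not from the source:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrinks each entry toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, b, lam, n_iters=500):
    """Minimize ||Ax - b||^2 + lam * ||x||_1 with ISTA (proximal gradient)."""
    # Step size 1/L, where L = 2 * sigma_max(A)^2 is the Lipschitz
    # constant of the gradient of the smooth term ||Ax - b||^2.
    step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = 2 * A.T @ (A @ x - b)    # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

Unlike ridge, the soft-thresholding step sets small coefficients exactly to zero, which is why lasso solutions are sparse.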
Dictionary Learning
One of the techniques earlier discussed was Dictionary Learning. This problem can now be formally defined as follows.
 Data Samples  $X = [x_1, \ldots x_K]$ where each $x_i \in \mathcal{R}^d$, so $X\in\mathcal{R}^{d\times K}$
 Dictionary  $D\in\mathcal{R}^{d\times n}:D=[d_1,\ldots d_n]$ (each column is an “atom”)
 Coefficient Vectors  $R = [r_1,\ldots r_K]$ where each $r_i\in\mathcal{R}^n$, so $R\in\mathcal{R}^{n\times K}$
The goal is to solve
\[\min_{D\in\mathcal{C},\,R} \vert\vert X - DR\vert\vert_F^2 + \lambda\sum_{i=1}^{K}\vert\vert r_i\vert\vert_1\]
where $\mathcal{C}$ is a constraint set on the dictionary, e.g. $\{D : \vert\vert d_j\vert\vert_2 \le 1 \text{ for all } j\}$. The condition $\mathcal{C}$ is required to avoid trivial solutions in which the entries of $D$ blow up while the $r_i$ shrink toward zero, leaving the product $DR$ unchanged. The regularization parameter $\lambda$ can be tuned on a case-by-case basis.
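One common way to attack this problem (a sketch of the general approach, not necessarily the exact method these notes have in mind) is alternating minimization: sparse coding of $R$ with $D$ fixed, then a dictionary update with $R$ fixed, projecting each atom back onto the unit-norm constraint. All names and parameter values below are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dictionary_learning(X, n_atoms, lam=0.1, n_iters=20, seed=0):
    """Alternating minimization for
    min_{D in C, R} ||X - D R||_F^2 + lam * sum_i ||r_i||_1,
    with C = {D : unit-norm columns (atoms)}."""
    rng = np.random.default_rng(seed)
    d, K = X.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)                 # project atoms onto C
    R = np.zeros((n_atoms, K))
    for _ in range(n_iters):
        # Sparse coding: a few ISTA steps on R with D fixed.
        step = 1.0 / (2 * np.linalg.norm(D, 2) ** 2)
        for _ in range(50):
            R = soft_threshold(R - step * 2 * D.T @ (D @ R - X), step * lam)
        # Dictionary update: least squares on D with R fixed, then
        # renormalize each atom to enforce the constraint C.
        D = X @ R.T @ np.linalg.pinv(R @ R.T)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, R
```

The renormalization after the least-squares step is exactly where the condition $\mathcal{C}$ bites: without it, the objective could be driven down by scaling $D$ up and $R$ down indefinitely.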