
Representation of Continuous Functions


Consider a one-dimensional function $f(x)$ with $0 \leq x < 1$. The variable $x$ can be expressed to high precision as a binary fraction \begin{equation} x = 0.b_1b_2\cdots b_n = b_1/2 + b_2/2^2 + \ldots + b_n/2^n \ . \end{equation} with bits $b_i = 0,1$. (For example, $1/3 \approx 0.010101$.) This way of writing numbers is similar to the binary representation of integers, but here the numbers step through a finely-spaced grid of spacing $1/2^n$ rather than in steps of size 1.
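As a concrete illustration, here is a minimal sketch (in Python, with hypothetical helper names) of extracting the first $n$ bits of a number $x \in [0, 1)$ and reconstructing the nearest grid point $0.b_1 b_2 \cdots b_n$:

```python
def binary_fraction_bits(x, n):
    """Return the first n bits b_1, ..., b_n of x in [0, 1) as a binary fraction."""
    bits = []
    for _ in range(n):
        x *= 2            # shift the next bit into the integer part
        bit = int(x)
        bits.append(bit)
        x -= bit          # keep only the remaining fractional part
    return bits

def bits_to_x(bits):
    """Reconstruct the grid point 0.b_1 b_2 ... b_n = sum_i b_i / 2^i."""
    return sum(b / 2**(i + 1) for i, b in enumerate(bits))
```

For example, `binary_fraction_bits(1/3, 6)` yields `[0, 1, 0, 1, 0, 1]`, matching $1/3 \approx 0.010101$, and the reconstructed value differs from $1/3$ by less than the grid spacing $1/2^6$.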

Next, we can write the values that the function $f(x)$ takes on this grid as $f(x) = f(0.b_1b_2\cdots b_n) = F^{b_1 b_2 \cdots b_n}$, so that the $2^n$ values of $f(x)$ have been repackaged into an $n$-index tensor $F$, with each index of dimension two.
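In practice this repackaging is just a reshape. A sketch using NumPy (the function $\sin(2\pi x)$ is an arbitrary choice for illustration):

```python
import numpy as np

n = 10                              # number of bits -> grid of 2**n points
grid = np.arange(2**n) / 2**n       # x = 0.b_1 b_2 ... b_n for every bit string
f = np.sin(2 * np.pi * grid)        # sample a function on the grid

# Repackage into an n-index tensor F^{b_1 ... b_n}. Bit b_1 is the most
# significant bit of the grid index, so a plain reshape to (2, 2, ..., 2)
# gives exactly the indexing F[b1, b2, ..., bn] = f(0.b_1 b_2 ... b_n).
F = f.reshape((2,) * n)

# Spot check: (b_1, ..., b_n) = (0, 1, 0, ..., 0) is x = 0.01 (binary) = 1/4,
# which is grid index 2**(n-2) = 256.
assert F[0, 1, 0, 0, 0, 0, 0, 0, 0, 0] == f[256]
```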

The last step is to approximate $F$ as a tensor network, such as a matrix product state / tensor train network: \begin{equation} F^{b_1 b_2 \cdots b_n} = \sum_{\{\mathbf{\alpha}\}} A^{b_1}_{\alpha_1} A^{b_2}_{\alpha_1 \alpha_2} A^{b_3}_{\alpha_2 \alpha_3} \cdots A^{b_n}_{\alpha_{n-1}} \end{equation}
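One standard way to obtain such a factorization is by sequential truncated SVDs of the unfoldings of $F$. The following is a minimal sketch of that procedure (function names and the tolerance convention are illustrative, not from a particular library):

```python
import numpy as np

def tt_decompose(F, tol=1e-12):
    """Factor a tensor F of shape (2,)*n into TT/MPS cores via sequential
    truncated SVDs. Each core has shape (r_left, 2, r_right)."""
    n = F.ndim
    cores = []
    r = 1
    M = F.reshape(r * 2, -1)        # split off the first bit index
    for _ in range(n - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))   # truncated rank
        cores.append(U[:, :keep].reshape(r, 2, keep))
        r = keep
        # absorb the singular values to the right and split off the next bit
        M = (s[:keep, None] * Vt[:keep]).reshape(r * 2, -1)
    cores.append(M.reshape(r, 2, 1))
    return cores

def tt_contract(cores):
    """Contract the cores back into the full (2,)*n tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape((2,) * len(cores))
```

Applied to $\sin(2\pi x)$ sampled on $2^8$ points, this reproduces the tensor exactly with all bond dimensions at most two, consistent with $\sin(2\pi x)$ being a combination of two exponentials.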

Such tensor networks turn out to have very low rank (or bond dimension) for a wide class of functions $f(x)$, including functions that are smooth or have periodic or self-similar structure. For example, both $f(x) = e^{i k x}$ and $f(x) = \delta(x-k)$ give MPS/TT ranks of exactly one (a Kronecker product of vectors).
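The rank-one structure of the exponential is easy to verify directly: since $x = \sum_j b_j/2^j$, the function $e^{2\pi i k x}$ factorizes over the bits as $\prod_j e^{2\pi i k\, b_j/2^j}$, so $F$ is an outer (Kronecker) product of $n$ length-two vectors. A quick numerical check (the values of $n$ and $k$ are arbitrary):

```python
import numpy as np

n, k = 8, 3
x = np.arange(2**n) / 2**n
f = np.exp(2j * np.pi * k * x)          # e^{2 pi i k x} on the bit grid

# One length-2 vector per bit: entry b_j is e^{2 pi i k b_j / 2^j}
vecs = [np.array([1.0, np.exp(2j * np.pi * k / 2**(j + 1))]) for j in range(n)]

# Kronecker product in bit order b_1 (most significant) ... b_n
g = vecs[0]
for v in vecs[1:]:
    g = np.kron(g, v)

assert np.allclose(f, g)                # exact rank-1 factorization
```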

The idea of using a tensor train (TT) in this way is known as the “quantics tensor train” (QTT) representation of a function. It generalizes straightforwardly to two-dimensional, three-dimensional, or higher-dimensional functions by using a tensor train with two, three, or more open indices on each tensor.
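For a two-dimensional function, this means pairing bit $i$ of $x$ with bit $i$ of $y$ on the same tensor. A sketch of the corresponding reshape (the interleaving convention shown is one common choice, not the only one):

```python
import numpy as np

n = 5
x = np.arange(2**n) / 2**n
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(2 * np.pi * (X + Y))          # f(x, y) on a 2^n x 2^n grid

# Axes start ordered as (b^x_1, ..., b^x_n, b^y_1, ..., b^y_n).
F = f.reshape((2,) * (2 * n))

# Interleave to (b^x_1, b^y_1, b^x_2, b^y_2, ...), then fuse each pair
# into one dimension-4 index c_i = 2*b^x_i + b^y_i per tensor.
order = [i // 2 + (i % 2) * n for i in range(2 * n)]
F = F.transpose(order).reshape((4,) * n)
```

Each of the $n$ tensors now carries one combined index of dimension four (equivalently, two open indices of dimension two each), and the same SVD-based factorization applies unchanged.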

