Functions of Continuous Variables (Quantum Inspired)

Introduction

Consider a one-dimensional function $f(x)$ with $0 \leq x < 1$. The variable $x$ can be expressed to high precision as a binary fraction \begin{equation} x = 0.x_1x_2\cdots x_n = x_1/2 + x_2/2^2 + \ldots + x_n/2^n \ . \end{equation} described by bits $x_i = 0,1$. (For example, $1/3 \approx 0.010101$.) This way of writing numbers is similar to the binary representation of integers, but with the values stepping through a finely spaced grid of spacing $1/2^n$ instead of steps of size 1.
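
To make this concrete, here is a minimal sketch in plain Python of extracting the digits $x_1, \ldots, x_n$ and reconstructing $x$ from them (the helper names binary_digits and from_digits are ours, chosen for illustration):

```python
def binary_digits(x, n):
    """Return the first n binary-fraction digits of 0 <= x < 1."""
    digits = []
    for _ in range(n):
        x *= 2
        bit = int(x)        # the next digit is the integer part
        digits.append(bit)
        x -= bit            # keep the fractional remainder
    return digits

def from_digits(digits):
    """Reconstruct x = sum_j digits[j-1] / 2^j."""
    return sum(b / 2**(j + 1) for j, b in enumerate(digits))

print(binary_digits(1/3, 10))               # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(from_digits(binary_digits(1/3, 10)))  # 0.3330078125, within 2^-10 of 1/3
```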

Next we can write the values the function $f(x)$ takes on this grid as $f(x) = f(0.x_1x_2\cdots x_n) = F^{x_1 x_2 \cdots x_n}$ so that the values of $f(x)$ have been repackaged into an $n$-index tensor $F$.
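
A minimal sketch of this repackaging, assuming NumPy: sample $f$ on the grid $x = i/2^n$ and reshape the resulting length-$2^n$ vector into an $n$-index tensor, so that $x_1$ becomes the first (most significant) index:

```python
import numpy as np

n = 8
grid = np.arange(2**n) / 2**n    # x = 0, 1/2^n, 2/2^n, ..., (2^n - 1)/2^n
f_vals = np.exp(-grid**2)        # any example function f(x)
F = f_vals.reshape((2,) * n)     # the n-index tensor F^{x_1 x_2 ... x_n}

# Spot check: F[x_1, ..., x_n] equals f(0.x_1 x_2 ... x_n)
bits = (1, 0, 1, 0, 0, 0, 0, 0)  # x = 1/2 + 1/8 = 0.625
x = sum(b / 2**(j + 1) for j, b in enumerate(bits))
print(np.isclose(F[bits], np.exp(-x**2)))   # True
```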

The last step is to approximate $F$ by a tensor network, such as a matrix product state / tensor train network: \begin{equation} F^{x_1 x_2 \cdots x_n} = \sum_{\{\mathbf{\alpha}\}} A^{x_1}_{\alpha_1} A^{x_2}_{\alpha_1 \alpha_2} A^{x_3}_{\alpha_2 \alpha_3} \cdots A^{x_n}_{\alpha_{n-1}} \end{equation}
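
One standard way to compute such a decomposition is by sequential singular value decompositions (the “TT-SVD” algorithm). Below is a minimal NumPy sketch of that approach; the function name tt_svd and its cutoff convention are ours, and production tensor network libraries provide more refined implementations:

```python
import numpy as np

def tt_svd(F, cutoff=1e-12):
    """Compress an n-index tensor F (each index of size 2) into TT cores."""
    n = F.ndim
    cores = []
    M = F.reshape(1, -1)                # running matrix, left rank 1
    r = 1
    for j in range(n - 1):
        M = M.reshape(r * 2, -1)        # split off the next physical index
        U, S, Vh = np.linalg.svd(M, full_matrices=False)
        keep = max(1, int(np.sum(S > cutoff * S[0])))   # truncate small s.v.'s
        cores.append(U[:, :keep].reshape(r, 2, keep))
        M = S[:keep, None] * Vh[:keep]  # absorb the weights to the right
        r = keep
    cores.append(M.reshape(r, 2, 1))
    return cores

n = 10
grid = np.arange(2**n) / 2**n
F = np.exp(-grid**2).reshape((2,) * n)
cores = tt_svd(F)
print([c.shape[2] for c in cores])      # the TT ranks (bond dimensions)
```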

Such tensor networks turn out to have very low rank (or bond dimension) for a wide class of functions $f(x)$, including functions which are smooth, have self-similar structure, or have a finite number of sharp features. For example, both the functions $f(x) = e^{i k x}$ and $f(x) = \delta(x-k)$ give MPS/TT ranks of exactly one (that is, they are equal to an outer product of vectors, or a “product state”).

The idea of using a tensor train (TT) this way is known as the “quantics tensor train” (QTT) (or “quantized tensor train”) representation of a function [1][2][3][4][5]. It can be straightforwardly generalized to two-dimensional, three-dimensional, or higher-dimensional functions by using a tensor train with two, three, or more open indices on each tensor. For dimensions higher than three, however, other tensor decompositions such as tree tensor networks may be more appropriate.

In quantum physics and quantum computing, an essentially equivalent idea is known as the “amplitude encoding” of a function into a quantum register. In this context, it becomes an efficient tensor network method when the quantum entanglement (closely tied to tensor network rank) is moderate or low.

Motivating Examples

An exponential function $f(x) = e^{a x}$ can be represented by a tensor network of rank 1 (or “product state”). This holds whether $a$ is real or complex. The construction is as follows: \begin{align} e^{a x} = e^{a (x_1/2 + x_2/2^2 + \ldots + x_n/2^n)} = (e^{a/2})^{x_1}\ \ (e^{a/2^2})^{x_2}\ \ (e^{a/2^3})^{x_3} \cdots (e^{a/2^n})^{x_n} \end{align} Because the right-hand side is an outer product of $n$ two-component vectors, it is exactly an MPS/TT with all ranks (bond dimensions) equal to 1, where one sets the elements of each tensor as $A^{x_j} = (e^{a/2^j})^{x_j}$.
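
A minimal NumPy sketch of this construction, checking that the outer product of the $n$ vectors $(1, e^{a/2^j})$ reproduces $e^{a x}$ on the entire grid:

```python
import numpy as np

n, a = 8, -2.0
# Each rank-1 core is the two-component vector (1, e^{a/2^j}):
cores = [np.array([1.0, np.exp(a / 2**j)]) for j in range(1, n + 1)]

# The outer product of the n vectors is the full tensor F^{x_1 ... x_n}:
F = np.ones(1)
for c in cores:
    F = np.multiply.outer(F, c)
F = F.ravel()                    # flatten with x_1 as most significant bit

grid = np.arange(2**n) / 2**n
print(np.allclose(F, np.exp(a * grid)))   # True
```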

The above result for an exponential function implies that $\cos(a x)$ and $\sin(a x)$ are exactly MPS/TT of rank 2: by Euler's formula, $\cos(ax) = (e^{iax} + e^{-iax})/2$ is a sum of two rank-1 exponentials, and the sum of two MPS with ranks $r_1$ and $r_2$ gives a new MPS whose rank is at most $r_1+r_2$.
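
This rank-2 claim is easy to check numerically, since the TT rank across the cut between bits $j$ and $j+1$ equals the matrix rank of $F$ unfolded into a $2^j \times 2^{n-j}$ matrix. A minimal sketch, assuming NumPy:

```python
import numpy as np

n, a = 10, 7.3
grid = np.arange(2**n) / 2**n
F = np.cos(a * grid)

# Unfold F at each cut and compute the numerical matrix rank:
for j in range(1, n):
    M = F.reshape(2**j, 2**(n - j))
    print(j, np.linalg.matrix_rank(M, tol=1e-10))   # prints 2 at every cut
```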

Application Areas

Early applications of this format include representing bosons in quantum systems [6] and compressing images [3]. More recently this image compression approach has been proposed for machine learning on classical and quantum computers [7][8].

The QTT representation can be used to represent solutions of a wide range of differential equations [5].

For example, QTT representation has been used to represent velocity fields of fluids in numerical algorithms for solving the Navier-Stokes partial differential equation [9][10], with a similar approach used to solve the Vlasov-Poisson equations for plasma physics [11].

Green’s functions of quantum many-body systems can also be compactly described by the QTT representation [12][13], and tensor network algorithms can be used to solve the Dyson and Bethe-Salpeter equations.

Electronic orbitals used in quantum chemistry can be captured by QTTs, leading to more efficient representations than with Gaussian-type orbitals [14].

Rigorous Results

For the case of one-dimensional (univariate) functions, the tensor network or QTT representation has been shown to achieve optimal approximation order for functions in classical smoothness spaces (Sobolev or Besov spaces) of any smoothness order.[15] On the other hand, the QTT approximation class is itself not embedded in any classical smoothness space: the space of all functions representable by QTTs lies outside every such space.

Similar results can be proved for multivariate functions, with further considerations regarding whether the function belongs to an isotropic, anisotropic, or mixed smoothness space. Many types of classical smoothness spaces can be continuously embedded into the QTT approximation classes, while the QTT approximation classes are themselves not embedded into any classical smoothness space.[16]

Any univariate polynomial of degree $p$ can be represented by a QTT of rank at most $(1+p)$.[15]
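
The bound can be illustrated numerically with the same unfolding-rank check used above: splitting $x = x_{\rm left} + x_{\rm right}$ at any cut and expanding the polynomial by the binomial theorem leaves at most $p+1$ terms $x_{\rm right}^0, \ldots, x_{\rm right}^p$. A minimal NumPy sketch for a degree-3 example:

```python
import numpy as np

n, p = 12, 3
grid = np.arange(2**n) / 2**n
F = 1 + 2*grid - 5*grid**2 + grid**3     # an example polynomial of degree 3

# The rank of every unfolding is bounded by p + 1:
ranks = [np.linalg.matrix_rank(F.reshape(2**j, -1), tol=1e-9)
         for j in range(1, n)]
print(max(ranks) <= p + 1)               # True
```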

The discrete Fourier transform can be represented as a matrix product operator (MPO) tensor network with low rank (small bond dimension). In the QTT formalism, the discrete Fourier transform is equivalent to the “quantum Fourier transform” algorithm known from quantum computing, but compressed into a tensor network.[12][17]
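
A minimal NumPy sketch illustrating the low rank (the pairing and cutoff choices here are ours): tensorize the DFT matrix over $n$ bits, pair each row bit $j_m$ with the bit-reversed column bit $k_{n+1-m}$, matching the output ordering of the standard QFT circuit, and count the significant singular values across each cut:

```python
import numpy as np

n = 8
N = 2**n
row, col = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(2j * np.pi * row * col / N) / np.sqrt(N)   # the DFT matrix

# One 2-dim index per bit: axes (j_1 ... j_n, k_1 ... k_n), j_1 most significant.
T = F.reshape((2,) * (2 * n))

# Interleave row bit j_m with the bit-reversed column bit k_{n+1-m}:
perm = []
for m in range(n):
    perm += [m, 2 * n - 1 - m]
T = np.transpose(T, perm)

# Rank across the cut after m bit pairs = number of significant singular
# values of the (4^m, 4^(n-m)) unfolding; it stays far below the maximal
# possible value 4^min(m, n-m).
for m in range(1, n):
    s = np.linalg.svd(T.reshape(4**m, 4**(n - m)), compute_uv=False)
    print(m, int(np.sum(s > 1e-8 * s[0])))
```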

References

  1. Quantics-TT Collocation Approximation of Parameter-Dependent and Stochastic Elliptic PDEs, B.N. Khoromskij, I. Oseledets, Computational Methods in Applied Mathematics 10, 376–394 (2010)
  2. Approximation of 2^d × 2^d matrices using tensor decomposition, Ivan V. Oseledets, SIAM Journal on Matrix Analysis and Applications 31, 2130–2145 (2010)
  3. Image compression and entanglement, Jose I. Latorre (2005), quant-ph/0510031
  4. O(d log N)-quantics approximation of N-d tensors in high-dimensional numerical modeling, Boris N. Khoromskij, Constructive Approximation 34, 257–280 (2011)
  5. Tensor Numerical Methods for High-dimensional PDEs: Basic Theory and Initial Applications, Boris N. Khoromskij (2014), 1408.4053
  6. Density-matrix renormalization-group study of the polaron problem in the Holstein model, Eric Jeckelmann, Steven R. White, Phys. Rev. B 57, 6376–6385 (1998)
  7. Data compression for quantum machine learning, Rohit Dilip, Yu-Jie Liu, Adam Smith, Frank Pollmann, Phys. Rev. Res. 4, 043007 (2022)
  8. Deterministic Tensor Network Classifiers, L. Wright, F. Barratt, J. Dborin, V. Wimalaweera, B. Coyle, A. G. Green (2022), 2205.09768
  9. A quantum-inspired approach to exploit turbulence structures, Nikita Gourianov, Michael Lubasch, Sergey Dolgov, Quincy Y Berg, Hessam Babaee, Peyman Givi, Martin Kiffner, Dieter Jaksch, Nature Computational Science 2, 30–37 (2022)
  10. Tensor network reduced order models for wall-bounded flows, Martin Kiffner, Dieter Jaksch (2023), 2303.03010
  11. Quantum-inspired method for solving the Vlasov-Poisson equations, Erika Ye, Nuno F. G. Loureiro, Phys. Rev. E 106, 035208 (2022)
  12. Multiscale Space-Time Ansatz for Correlation Functions of Quantum Systems Based on Quantics Tensor Trains, Hiroshi Shinaoka, Markus Wallerberger, Yuta Murakami, Kosuke Nogaki, Rihito Sakurai, Philipp Werner, Anna Kauch, Phys. Rev. X 13, 021015 (2023)
  13. Quantics Tensor Cross Interpolation for High-Resolution, Parsimonious Representations of Multivariate Functions in Physics and Beyond, Marc K. Ritter, Yuriel Nunez Fernandez, Markus Wallerberger, Jan von Delft, Hiroshi Shinaoka, Xavier Waintal (2023), 2303.11819
  14. Tensorized orbitals for computational chemistry, Nicolas Jolly, Yuriel Nunez Fernandez, Xavier Waintal (2023), 2308.03508
  15. Approximation Theory of Tree Tensor Networks: Tensorized Univariate Functions, Mazen Ali, Anthony Nouy, Constructive Approximation
  16. Approximation Theory of Tree Tensor Networks: Tensorized Multivariate Functions, Mazen Ali, Anthony Nouy (2021), 2101.11932
  17. Quantum Fourier Transform Has Small Entanglement, Jielun Chen, E.M. Stoudenmire, Steven R. White, PRX Quantum 4, 040318 (2023)
