The density matrix renormalization group (DMRG) is an adaptive algorithm for optimizing a matrix product state (MPS), or tensor train, tensor network such that after optimization the MPS is approximately the dominant eigenvector of a large matrix $H$. The matrix $H$ is usually assumed to be Hermitian, but the algorithm can also be formulated for more general matrices.
The DMRG algorithm works by optimizing two neighboring MPS tensors at a time, temporarily combining them into a single tensor to be optimized. The optimization is performed using an iterative eigensolver approach such as Lanczos or Davidson. Before the next step, the single tensor is factorized using an SVD or density matrix decomposition in order to restore the MPS form. During this factorization, the bond dimension (or tensor train rank) of the MPS can be adapted. This adaptation is optimal in the sense that it minimizes the distance between the network just after the optimization step and the network with the restored MPS form.
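A minimal sketch of this two-site update in Python/NumPy follows. For simplicity it assumes the effective Hamiltonian for the merged tensor is available as an explicit dense Hermitian matrix `H_eff`; in a practical implementation it would instead be applied implicitly through precomputed environment tensors. All names here are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def two_site_update(A1, A2, H_eff, max_bond, cutoff=1e-10):
    """One DMRG update of a neighboring pair of MPS tensors.

    A1: (chi_l, d, chi) left tensor; A2: (chi, d, chi_r) right tensor.
    H_eff: effective Hamiltonian as an explicit dense Hermitian matrix
           acting on vectors of length chi_l*d*d*chi_r -- an illustrative
           simplification; real codes apply it via environment tensors.
    """
    chi_l, d, _ = A1.shape
    _, _, chi_r = A2.shape

    # Merge the two tensors by contracting their shared bond index.
    theta = np.tensordot(A1, A2, axes=(2, 0))     # (chi_l, d, d, chi_r)

    # Iterative (Lanczos) eigensolve for the minimum eigenpair,
    # warm-started from the current merged tensor.
    E, v = eigsh(H_eff, k=1, which='SA', v0=theta.reshape(-1))
    theta = v[:, 0].reshape(chi_l * d, d * chi_r)

    # Factorize with an SVD to restore MPS form; discarding small singular
    # values adapts the bond dimension while minimizing the distance to theta.
    U, S, Vh = np.linalg.svd(theta, full_matrices=False)
    keep = max(1, min(max_bond, int(np.sum(S > cutoff * S[0]))))
    U, S, Vh = U[:, :keep], S[:keep], Vh[:keep, :]

    A1_new = U.reshape(chi_l, d, keep)
    A2_new = (np.diag(S) @ Vh).reshape(keep, d, chi_r)
    return E[0], A1_new, A2_new
```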
In physics and chemistry applications, DMRG is mainly used to find ground states of Hamiltonians of many-body quantum systems. It has also been extended to compute excited states, as well as simulate dynamical, finite-temperature, and non-equilibrium systems.
Algorithms similar to or inspired by DMRG have been developed for more general MPS computations, such as summing two MPS, multiplying an MPS by an MPO network, or finding MPS solutions to linear systems.
Consider a Hermitian matrix $H$ acting in a vector space that is the tensor product of $N$ smaller spaces, each of dimension $d$:

$$H : (\mathbb{C}^{d})^{\otimes N} \rightarrow (\mathbb{C}^{d})^{\otimes N}\,.$$
The DMRG algorithm seeks the dominant eigenvector of $H$ in the form of an MPS tensor network

$$H | \psi_0 \rangle = E_0 | \psi_0 \rangle\,, \qquad | \psi_0 \rangle \approx \sum_{s_1, s_2, \ldots, s_N} A^{s_1}_1 A^{s_2}_2 \cdots A^{s_N}_N \, | s_1 s_2 \cdots s_N \rangle\,.$$

Here $E_0\ (\leq E_1 \leq E_2 \leq \cdots)$ is the minimum of the eigenvalues $E_n$ of $H$.
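To make the MPS form concrete, the following sketch represents an MPS as a list of rank-3 NumPy tensors and contracts it into the full $d^N$-dimensional vector (at exponential cost, so only useful for checking small systems; all names are illustrative):

```python
import numpy as np

def mps_to_vector(tensors):
    """Contract an MPS, given as a list of (chi_left, d, chi_right) tensors
    with boundary bond dimensions equal to 1, into the full d^N vector.
    Exponential cost: only for checking small systems."""
    psi = tensors[0]                              # (1, d, chi)
    for A in tensors[1:]:
        psi = np.tensordot(psi, A, axes=(-1, 0))  # attach one more site
    return psi.reshape(-1)                        # drop trivial boundary bonds

# Example: a random 6-site MPS with physical dimension d=2, bond dimension 3
d, chi, N = 2, 3, 6
dims = [1] + [chi] * (N - 1) + [1]
mps = [np.random.randn(dims[j], d, dims[j + 1]) for j in range(N)]
print(mps_to_vector(mps).shape)                   # (64,)
```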
For the algorithm to be efficient, $H$ must have certain simplifying properties. For example, $H$ could be a sum of local terms,

$$H = \sum_{j=1}^{N-1} h_{j,\, j+1}\,,$$

where each $h_{j,\,j+1}$ acts non-trivially only on sites $j$ and $j+1$.
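As an illustration, the sketch below builds such a sum of nearest-neighbor terms as an explicit matrix via Kronecker products, using a transverse-field Ising coupling purely as an example (the model choice and all names are assumptions for this sketch):

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def sum_of_local_terms(N, g=1.0):
    """Build H = sum_j h_{j,j+1} as an explicit 2^N x 2^N matrix, with
    h = -Z(x)Z - g X(x)I as an example local term (exponential cost;
    only for checking small systems)."""
    h = -np.kron(Z, Z) - g * np.kron(X, I2)       # two-site local term
    H = np.zeros((2**N, 2**N))
    for j in range(N - 1):
        # Embed h on sites (j, j+1), identity everywhere else.
        H += np.kron(np.kron(np.eye(2**j), h), np.eye(2**(N - j - 2)))
    return H

H = sum_of_local_terms(6)
print(np.allclose(H, H.conj().T))                 # Hermitian: True
```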
More generally, $H$ could be given as an MPO tensor network,

$$H = \sum_{\{s\},\{s'\}} W^{s_1 s'_1}_1 W^{s_2 s'_2}_2 \cdots W^{s_N s'_N}_N \, | s_1 s_2 \cdots s_N \rangle \langle s'_1 s'_2 \cdots s'_N |\,.$$
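For instance, the transverse-field Ising Hamiltonian admits a bond-dimension-3 MPO; the sketch below constructs it with one common lower-triangular convention and contracts it into a full matrix for small $N$ as a check (the conventions and names here are one choice among several):

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2, O = np.eye(2), np.zeros((2, 2))

def ising_mpo(N, g=1.0):
    """MPO tensors W[a, b, s, s'] for H = -sum_j Z_j Z_{j+1} - g sum_j X_j,
    using a lower-triangular, bond-dimension-3 construction."""
    W = np.array([[I2,     O,  O],
                  [Z,      O,  O],
                  [-g * X, -Z, I2]])              # shape (3, 3, 2, 2)
    first = W[2:3, :, :, :]                       # left boundary: bottom row
    last = W[:, 0:1, :, :]                        # right boundary: first column
    return [first] + [W] * (N - 2) + [last]

def mpo_to_matrix(mpo):
    """Contract an MPO into a full matrix (exponential cost; small N only)."""
    M = mpo[0]                                    # (1, b, d, d)
    for W in mpo[1:]:
        M = np.einsum('abst,bcuv->acsutv', M, W)
        a, c, s, u, t, v = M.shape
        M = M.reshape(a, c, s * u, t * v)
    return M[0, 0]

H = mpo_to_matrix(ising_mpo(6))
print(H.shape, np.allclose(H, H.conj().T))        # (64, 64) True
```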
Other simplifying forms of $H$ can also permit efficient formulations of the DMRG algorithm, such as when $H$ is a sum of MPO tensor networks or of outer products of MPS tensor networks.
The DMRG algorithm can be extended straightforwardly to arbitrary tree tensor networks, of which MPS are a special case. Instead of sweeping forward and backward along an MPS chain, the algorithm visits each bond of a tree, otherwise performing very similar optimization and factorization steps.
An important optimization compared to the MPS case, however, is that instead of merging adjacent tree tensors by simply contracting their shared bond index, one first factorizes each of the two tensors into a piece carrying its external index and a remainder, then merges only the pieces carrying the external indices, as shown below

After the merged tensor is optimized, it is factorized using an SVD and the tree tensor network form is restored:

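Under assumed index conventions, a minimal NumPy sketch of this factor-then-merge step (and the SVD that follows optimization) might look like the following; all shapes and names are illustrative:

```python
import numpy as np

def split_external(A):
    """Factor a tree tensor with index order (other_bonds..., s, c) into an
    isometry Q over the other bonds and a small core C with indices (r, s, c),
    where s is the external index and c the bond shared with the neighbor."""
    *rest, d, chi = A.shape
    M = A.reshape(int(np.prod(rest)), d * chi)
    Q, R = np.linalg.qr(M)                        # M = Q @ R, Q isometric
    r = Q.shape[1]
    return Q.reshape(*rest, r), R.reshape(r, d, chi)

# Two neighboring tree tensors sharing a bond of dimension chi; each also
# carries two other internal bonds and one external index of dimension d.
d, chi = 2, 4
A = np.random.randn(3, 3, d, chi)                 # indices (b1, b2, s, c)
B = np.random.randn(3, 3, d, chi)                 # indices (b3, b4, s, c)

QA, CA = split_external(A)                        # CA: (rA, d, chi)
QB, CB = split_external(B)                        # CB: (rB, d, chi)

# Merge only the small cores over the shared bond c: the tensor to be
# optimized has shape (rA, d, d, rB) instead of (3, 3, d, d, 3, 3).
theta = np.einsum('asc,btc->astb', CA, CB)

# ... optimize theta with an iterative eigensolver, then restore the tree
# form with a (possibly truncated) SVD over the central bond:
rA, _, _, rB = theta.shape
U, S, Vh = np.linalg.svd(theta.reshape(rA * d, d * rB), full_matrices=False)
# U and Vh are then recombined with QA and QB to rebuild the two tree tensors.
```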