It is well known that functions of matrices are important both in theory and in applications. They are widely used in science and engineering and are of growing interest. The archetypal application of matrix functions is the solution of differential equations. This paper focuses on the symbolic computation of a matrix power, which is a good starting point for the symbolic computation of a general matrix function, since the power function is a typical elementary function.
Let $A$ be a square matrix of order $n$ and let $m$ be an integer not less than $n$. Based on some facts from algebraic combinatorics and on the Bell polynomials, three approaches to the symbolic computation of $A^m$ are provided in this paper.
Consider expanding $A^m$ as a polynomial in $A$. By the Cayley-Hamilton theorem, we may write
\begin{equation}\label{expan}
A^m = \sum^{n-1}_{k=0}\alpha_{mk}A^k.
\end{equation}
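To make \eqref{expan} concrete, the following minimal sketch in plain Python (the test matrix and all helper names are illustrative assumptions, not part of the paper's method) computes the coefficients $\alpha_{mk}$ by reducing $x^m$ modulo the characteristic polynomial and then checks the identity for a $2\times 2$ integer matrix.

```python
# Sketch: verify A^m = sum_{k=0}^{n-1} alpha_{mk} A^k on a small example.
# The alpha_{mk} are obtained by reducing x^m modulo the (monic)
# characteristic polynomial p(x); coefficients are stored low-to-high.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, m):
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(m):
        R = mat_mul(R, A)
    return R

def x_power_mod(m, p):
    """Coefficients (low to high) of x^m mod p, with p monic of degree n."""
    n = len(p) - 1
    r = [0] * n
    r[0] = 1                          # start from x^0
    for _ in range(m):                # repeatedly multiply by x and reduce
        lead = r[n - 1]
        r = [0] + r[:n - 1]           # shift: multiply by x
        for k in range(n):
            r[k] -= lead * p[k]       # x^n = -(p_0 + ... + p_{n-1} x^{n-1})
    return r

A = [[2, 1], [1, 1]]                  # trace 3, determinant 1
p = [1, -3, 1]                        # p(x) = x^2 - 3x + 1
alpha = x_power_mod(5, p)             # coefficients of A^5 in I, A
lhs = mat_pow(A, 5)
rhs = [[alpha[0] * int(i == j) + alpha[1] * A[i][j] for j in range(2)]
       for i in range(2)]
print(alpha, lhs == rhs)              # -> [-21, 55] True
```

Here $A^5=55A-21I$, which the reduction of $x^5$ modulo $x^2-3x+1$ reproduces exactly.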
It is important to find explicit formulas for $\alpha_{mk}$, and there are several ways to do so. Let
\begin{equation}\label{eqch}
p(\lambda):=\det(\lambda I_n-A)=\sum^{n}_{j=0}(-1)^je_j\lambda^{n-j}
\end{equation}
be the characteristic polynomial of the matrix $A$, where the $e_j$'s are the elementary symmetric functions of the eigenvalues $\lambda_1, \ldots, \lambda_n$. For generality, we regard the $e_j$'s as functions in the $n$ indeterminates $x_1, \ldots, x_n$, that is,
\begin{eqnarray*}
e_k:=e_k(x_i~|~i\in [n]) &:=&\left\{
\begin{array}{ll}
1, & \hbox{$k=0$;} \\
\displaystyle\sum_{1\leq i_1<i_2<\cdots<i_k\leq n}x_{i_1}x_{i_2}\cdots x_{i_k}, & \hbox{$1\leq k\leq n$;} \\
0, & \hbox{$k>n$,}
\end{array}
\right.
\end{eqnarray*}
where $[n]=\{1,2,\ldots,n\}.$
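For instance, the $e_k$'s can be generated by expanding $\prod_{i=1}^{n}(1+x_it)$ one factor at a time. The sketch below (plain Python, with illustrative values only) does this and recovers the coefficients of $p(\lambda)=(\lambda-1)(\lambda-2)(\lambda-3)$.

```python
# Sketch: e_0, ..., e_n of x_1, ..., x_n via the product expansion
# prod_i (1 + x_i t) = sum_k e_k t^k, updating coefficients in place.

def elementary_symmetric(xs):
    e = [1] + [0] * len(xs)           # e[k] holds e_k; e_0 = 1
    for x in xs:
        for k in range(len(xs), 0, -1):
            e[k] += x * e[k - 1]      # multiply current product by (1 + x t)
    return e

# Eigenvalues 1, 2, 3 give p(lambda) = lambda^3 - 6 lambda^2 + 11 lambda - 6.
print(elementary_symmetric([1, 2, 3]))   # -> [1, 6, 11, 6]
```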
The key step in the first two approaches is the formal power series expansion of the resolvent $(\lambda I_n-A)^{-1}$, which makes use of the following evident identity
\begin{equation}\label{eqbasic}
(\lambda I_n-A)\operatorname{adj}(\lambda I_n-A)=p(\lambda)I_n,
\end{equation}
where $\operatorname{adj}(A)$ denotes the adjugate matrix of $A$. Equation \eqref{eqbasic} yields
\begin{equation}\label{eqresol}
(\lambda I_n-A)^{-1}=p(\lambda)^{-1}\operatorname{adj}(\lambda I_n-A),
\end{equation}
while, on the other hand, for certain coefficient matrices $Q_k$ not depending on $\lambda$, we obviously have
\begin{equation}\label{eqadj}
\operatorname{adj}(\lambda I_n-A)=\sum^{n-1}_{k=0}Q_k\lambda^{n-k-1}.
\end{equation}
A recursive relation for the $Q_k$'s is thus obtained by substituting \eqref{eqadj} into \eqref{eqbasic} and making use of \eqref{eqch}, which in turn yields
\begin{equation}\label{eqadjex}
\operatorname{adj}(\lambda I_n-A)=\sum^{n-1}_{k=0}\left(\sum^{k}_{j=0}(-1)^je_jA^{k-j}\right)\lambda^{n-k-1}.
\end{equation}
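As a sanity check on \eqref{eqadjex}, the sketch below (plain Python; the triangular test matrix is an arbitrary assumption) forms $Q_k=\sum_{j=0}^{k}(-1)^je_jA^{k-j}$ and verifies $(\lambda I_n-A)\sum_k Q_k\lambda^{n-k-1}=p(\lambda)I_n$ at a few integer values of $\lambda$.

```python
# Sketch: check (lambda I - A) adj(lambda I - A) = p(lambda) I with the
# adjugate coefficients Q_k = sum_{j<=k} (-1)^j e_j A^{k-j}.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X))] for i in range(len(X))]

def mat_scale(c, X):
    return [[c * X[i][j] for j in range(len(X))] for i in range(len(X))]

n = 3
A = [[1, 2, 3], [0, 2, 4], [0, 0, 3]]   # triangular: eigenvalues 1, 2, 3
e = [1, 6, 11, 6]                        # elementary symmetric functions
I = [[int(i == j) for j in range(n)] for i in range(n)]
P = [I, A, mat_mul(A, A)]                # powers A^0, A^1, A^2

Q = []
for k in range(n):
    Qk = [[0] * n for _ in range(n)]
    for j in range(k + 1):
        Qk = mat_add(Qk, mat_scale((-1) ** j * e[j], P[k - j]))
    Q.append(Qk)

for lam in (5, 7):
    M = [[lam * I[i][j] - A[i][j] for j in range(n)] for i in range(n)]
    adjM = [[0] * n for _ in range(n)]
    for k in range(n):
        adjM = mat_add(adjM, mat_scale(lam ** (n - k - 1), Q[k]))
    p_lam = (lam - 1) * (lam - 2) * (lam - 3)
    assert mat_mul(M, adjM) == mat_scale(p_lam, I)
print("adjugate identity verified")
```

Matching powers of $\lambda$ in the product reproduces exactly the recursion behind \eqref{eqadjex}: the constant term vanishes by the Cayley-Hamilton theorem.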
The formula \eqref{eqadjex} plays a central role in the first two approaches. Next, consider the following formal power series expansion
\begin{equation}\label{eqresolex}
(\lambda I_n-A)^{-1}=\sum^{\infty}_{m=0}A^m\lambda^{-m-1}.
\end{equation}
Taking \eqref{eqresol} and \eqref{eqadjex} into account, we only need to expand $p(\lambda)^{-1}$.
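The expansion \eqref{eqresolex} can also be checked numerically: for $|\lambda|$ larger than the spectral radius of $A$, the truncated series approaches $(\lambda I_n-A)^{-1}$ entrywise. The sketch below (plain Python with exact rational arithmetic; the matrix and the value of $\lambda$ are arbitrary assumptions) illustrates this for a $2\times2$ matrix.

```python
# Sketch: truncations of sum_m A^m / lambda^{m+1} converge (entrywise) to
# (lambda I - A)^{-1} when |lambda| exceeds the spectral radius of A.
from fractions import Fraction

def mat_mul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 1]]                   # spectral radius about 2.618
lam = Fraction(10)

# Exact inverse of M = lam*I - A = [[8, -1], [-1, 9]] by the 2x2 formula
# (swap diagonal, negate off-diagonal, divide by det = 71).
inv = [[Fraction(9, 71), Fraction(1, 71)],
       [Fraction(1, 71), Fraction(8, 71)]]

def truncation_error(terms):
    """Max entrywise gap between the `terms`-term partial sum and inv."""
    S = [[Fraction(0)] * 2 for _ in range(2)]
    P = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]  # A^0
    for m in range(terms):
        for i in range(2):
            for j in range(2):
                S[i][j] += P[i][j] / lam ** (m + 1)
        P = mat_mul(P, A)
    return max(abs(S[i][j] - inv[i][j]) for i in range(2) for j in range(2))

assert truncation_error(21) < truncation_error(6) < Fraction(1, 100)
print("partial sums approach the resolvent")
```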
The first two approaches differ only in how $p(\lambda)^{-1}$ is expanded. The first approach obtains a formal power series expansion by utilizing the Bell polynomials \cite{bell,com}; we omit the details here due to space limitations (they will be given in the full length paper). The second approach expands $p(\lambda)^{-1}$ by using the complete homogeneous symmetric functions defined by
\begin{equation}\label{compsym}
h_k(x_i~|~i\in [n])=\sum_{1\leq i_1\leq i_2\leq\cdots\leq i_k\leq n}x_{i_1}x_{i_2}\cdots x_{i_k}, \quad k\geq 1.
\end{equation}
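The $h_k$'s need not be computed from \eqref{compsym} directly: since the generating functions of the $e_k$'s and $h_k$'s are reciprocal, $\sum_{j=0}^{k}(-1)^je_jh_{k-j}=0$ for $k\geq 1$, which gives a recursion. A sketch (plain Python; the sample values are illustrative assumptions) comparing the recursion with brute-force enumeration:

```python
# Sketch: complete homogeneous symmetric functions h_k, computed two ways:
# by brute-force enumeration of multisets, and by the recursion
# h_k = sum_{j=1}^{k} (-1)^{j+1} e_j h_{k-j}   (from E(-t) H(t) = 1).
from itertools import combinations_with_replacement
from math import prod

def h_bruteforce(xs, k):
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def h_from_e(e, k_max):
    """h_0, ..., h_{k_max} from e_0, ..., e_n (e_j = 0 for j > n)."""
    h = [1]
    for k in range(1, k_max + 1):
        s = 0
        for j in range(1, k + 1):
            ej = e[j] if j < len(e) else 0
            s += (-1) ** (j + 1) * ej * h[k - j]
        h.append(s)
    return h

xs = (1, 2, 3)
e = [1, 6, 11, 6]
assert [h_bruteforce(xs, k) for k in range(6)] == h_from_e(e, 5)
print(h_from_e(e, 5))
```

The recursion is what makes the $h_k$'s expressible as polynomials in the $e_j$'s alone.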
If $A$ is invertible, then the main result in the second approach reads as follows.
\begin{equation}\label{main-compsym}
\alpha_{mk}=\sum^{n-k-1}_{j=0}(-1)^je_jh_{m-k-j}(\lambda_1, \lambda_2, \ldots, \lambda_n),
\end{equation}
where the $h_{m-k-j}$'s can be expressed in terms of the $e_j$'s (see \cite{hous,wangyang,yang}). Consequently, $\alpha_{mk}$ can be expressed as a polynomial in the $e_j$'s alone, not depending on the eigenvalues $\lambda_1, \ldots, \lambda_n$, whose computation is thus avoided. Moreover, the restriction that $A$ be invertible can also be removed. All these details can be found in the full length paper.
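As a quick sanity check, the following sketch (plain Python; the companion matrix and the exponent are arbitrary assumptions) evaluates $\alpha_{mk}=\sum_{j=0}^{n-k-1}(-1)^je_jh_{m-k-j}$, with the symmetric functions taken in the eigenvalues, and confirms that $A^m=\sum_k\alpha_{mk}A^k$.

```python
# Sketch: alpha_{mk} = sum_{j=0}^{n-k-1} (-1)^j e_j h_{m-k-j}, with e_j and
# h_r evaluated at the eigenvalues, reproduces A^m for a companion matrix.
from itertools import combinations_with_replacement
from math import prod

def h(xs, k):                          # complete homogeneous, brute force
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

eigs = (2, 3)                          # eigenvalues of the test matrix
A = [[5, -6], [1, 0]]                  # companion matrix of x^2 - 5x + 6
n, m = 2, 5
e = [1, 5, 6]                          # e_0, e_1, e_2 of the eigenvalues

alpha = [sum((-1) ** j * e[j] * h(eigs, m - k - j) for j in range(n - k))
         for k in range(n)]

Am = [[1, 0], [0, 1]]
for _ in range(m):
    Am = mat_mul(Am, A)
combo = [[alpha[0] * int(i == j) + alpha[1] * A[i][j] for j in range(2)]
         for i in range(2)]
print(alpha, Am == combo)              # -> [-390, 211] True
```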
The third approach is based on Lagrange-Hermite interpolation, which enters the picture as follows.
Suppose that dividing $\lambda^m$ by $p(\lambda)$ gives
$$
\lambda^m=q(\lambda)p(\lambda)+R_{n-1}(\lambda),
$$
where $q(\lambda)$ is the quotient and the remainder $R_{n-1}(\lambda)$ has degree at most $n-1$.
Again, by the Cayley-Hamilton theorem, we have
$$
A^m=R_{n-1}(A).
$$
Therefore, finding explicit formulas for $\alpha_{mk}$ is equivalent to finding those of $R_{n-1}(\lambda)$. It is easy to check that $R_{n-1}(\lambda)$ interpolates the function $f(\lambda)=\lambda^m$ at all the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. Suppose
$$
p(\lambda)=\prod^s_{i=1}(\lambda-\lambda_i)^{n_i},
$$
where $\lambda_1, \lambda_2, \ldots, \lambda_s$ are the distinct eigenvalues of $A$, with respective multiplicities $n_1, \ldots, n_s$. This leads to the following
Lagrange-Hermite interpolation problem
$$
R_{n-1}^{(j)}(\lambda_i)=f^{(j)}(\lambda_i),\; j=0, 1, \ldots, n_i-1, \quad i=1, \ldots, s.
$$
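For a concrete instance with a repeated eigenvalue (plain Python; the Jordan block is an assumed example, not from the paper): for the $2\times2$ Jordan block with eigenvalue $2$ we have $p(\lambda)=(\lambda-2)^2$, so the conditions above give $R_1(\lambda)=2^m+m\,2^{m-1}(\lambda-2)$, and indeed $R_1(A)=A^m$.

```python
# Sketch: Hermite interpolation for f(x) = x^m on a Jordan block.
# For A = [[2, 1], [0, 2]], p(lambda) = (lambda - 2)^2, so R_1 must match
# f and f' at lambda = 2: R_1(x) = 2^m + m 2^(m-1) (x - 2).

def mat_mul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [0, 2]]
m = 6
f0 = 2 ** m                            # f(2)
f1 = m * 2 ** (m - 1)                  # f'(2)

# R_1(A) = f0 * I + f1 * (A - 2 I)
R1_of_A = [[f0 * int(i == j) + f1 * (A[i][j] - 2 * int(i == j))
            for j in range(2)] for i in range(2)]

Am = [[1, 0], [0, 1]]
for _ in range(m):
    Am = mat_mul(Am, A)
print(R1_of_A == Am)                   # the interpolant reproduces A^m
```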
To facilitate the symbolic computation of a matrix power, we use a new representation of the Hermite interpolation polynomial derived in \cite{wang}. This representation is built on the complete Bell polynomials, which admit a recursive relation, and it is this recursion that makes both symbolic and numerical computation considerably easier.
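The recursion in question can be sketched as follows (plain Python; the evaluation point is an illustrative assumption): the complete Bell polynomials satisfy $B_{n+1}(x_1,\ldots,x_{n+1})=\sum_{k=0}^{n}\binom{n}{k}B_{n-k}(x_1,\ldots,x_{n-k})\,x_{k+1}$ with $B_0=1$, and at $x_1=\cdots=x_n=1$ they reduce to the Bell numbers, which serves as a convenient check.

```python
# Sketch: complete Bell polynomials B_0, ..., B_n evaluated at x_1, ..., x_n
# via the recurrence B_{n+1} = sum_{k=0}^{n} C(n, k) B_{n-k} x_{k+1}.
from math import comb

def complete_bell(xs):
    B = [1]                            # B_0 = 1
    for n in range(len(xs)):
        B.append(sum(comb(n, k) * B[n - k] * xs[k] for k in range(n + 1)))
    return B

# At x_1 = ... = x_5 = 1 the values are the Bell numbers.
print(complete_bell([1] * 5))          # -> [1, 1, 2, 5, 15, 52]
```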