We have seen that, given a regulatory network, we can describe the behavior of the system using a set of equations. For example, the state of a system with two elements is described by their “levels” or “concentrations”. For any given time \(t\) we represent the levels of each of the two elements by \(x_1(t)\) and \(x_2(t).\)

In the general case we know that the state “tomorrow” depends on the state “today”: \[ \begin{gather} x_1(t+h) = A_{11} x_1(t) + A_{12} x_2(t)\\ x_2(t+h) = A_{21} x_1(t) + A_{22} x_2(t)\end{gather}\tag{1} \] In the same way, the state “today” depends on the state “yesterday”. To be fully general, we allow the constants in this case to be different: \[ \begin{gather} x_1(t) = B_{11} x_1(t-h) + B_{12} x_2(t-h)\\ x_2(t) = B_{21} x_1(t-h) + B_{22} x_2(t-h)\end{gather}\tag{2} \] Now, if we substitute these values of \(x_1(t)\) and \(x_2(t)\) into the first pair of equations, we can express the state “tomorrow” directly in terms of the state “yesterday”: \[ \begin{gather} x_1(t+h) = A_{11} (B_{11} x_1(t-h) + B_{12} x_2(t-h)) + A_{12} (B_{21} x_1(t-h) + B_{22} x_2(t-h))\\ x_2(t+h) = A_{21} (B_{11} x_1(t-h) + B_{12} x_2(t-h)) + A_{22} (B_{21} x_1(t-h) + B_{22} x_2(t-h)) \end{gather} \] Rearranging these formulas we get \[ \begin{gather} x_1(t+h) = (A_{11} B_{11}+A_{12} B_{21}) x_1(t-h) + (A_{11} B_{12} + A_{12} B_{22}) x_2(t-h)\\ x_2(t+h) = (A_{21} B_{11}+A_{22} B_{21}) x_1(t-h) + (A_{21} B_{12} + A_{22} B_{22}) x_2(t-h) \end{gather} \] which has the same shape as the first two equations. We can then write \[ \begin{gather} x_1(t+h) = C_{11} x_1(t-h) + C_{12} x_2(t-h)\\ x_2(t+h) = C_{21} x_1(t-h) + C_{22} x_2(t-h) \end{gather}\tag{3} \] where \[ \begin{gather} C_{11} = A_{11} B_{11} + A_{12} B_{21}\\ C_{12} = A_{11} B_{12} + A_{12} B_{22}\\ C_{21} = A_{21} B_{11} + A_{22} B_{21}\\ C_{22} = A_{21} B_{12} + A_{22} B_{22} \end{gather} \] or, more compactly, \[C_{ij}=A_{i1} B_{1j} + A_{i2} B_{2j}\quad\forall i,j.\tag{4}\]
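As a quick check of equation (4), here is a minimal sketch in plain Python. The coefficients and the levels “yesterday” are invented for illustration; the point is only that going yesterday → today → tomorrow in two steps agrees with the one-step rule built from the \(C_{ij}\).

```python
# Made-up coefficients for equations (1) and (2), and a made-up state "yesterday".
A11, A12, A21, A22 = 0.9, 0.1, 0.2, 0.8
B11, B12, B21, B22 = 0.7, 0.3, 0.4, 0.6
x1_prev, x2_prev = 1.0, 2.0

# Equation (2): the state "today" from the state "yesterday".
x1_now = B11*x1_prev + B12*x2_prev
x2_now = B21*x1_prev + B22*x2_prev

# Equation (1): the state "tomorrow" from the state "today".
x1_next = A11*x1_now + A12*x2_now
x2_next = A21*x1_now + A22*x2_now

# Equation (4): the combined coefficients, then equation (3) in one step.
C11, C12 = A11*B11 + A12*B21, A11*B12 + A12*B22
C21, C22 = A21*B11 + A22*B21, A21*B12 + A22*B22
y1 = C11*x1_prev + C12*x2_prev
y2 = C21*x1_prev + C22*x2_prev

print(abs(y1 - x1_next) < 1e-12, abs(y2 - x2_next) < 1e-12)  # True True
```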

## Matrix as a simpler notation

In the previous section we had to be careful with many details and write the subindices in the correct positions. One way to avoid all these annoyances is to use **matrix** notation. In this case equation (1) can be rewritten as \[
\begin{bmatrix}
x_1(t+h) \\ x_2(t+h)
\end{bmatrix}
=
\begin{bmatrix}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{bmatrix}
\begin{bmatrix}
x_1(t)\\
x_2(t)
\end{bmatrix}
\tag{5}
\] Equation (2) can also be rewritten as \[
\begin{bmatrix}
x_1(t) \\ x_2(t)
\end{bmatrix}
=
\begin{bmatrix}
B_{11} & B_{12}\\
B_{21} & B_{22}
\end{bmatrix}
\begin{bmatrix}
x_1(t-h)\\
x_2(t-h)
\end{bmatrix}
\tag{6}
\] Then we can substitute equation (6) into equation (5) to get \[
\begin{bmatrix}
x_1(t+h) \\ x_2(t+h)
\end{bmatrix}
=
\begin{bmatrix}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{bmatrix}
\begin{bmatrix}
B_{11} & B_{12}\\
B_{21} & B_{22}
\end{bmatrix}
\begin{bmatrix}
x_1(t-h)\\
x_2(t-h)
\end{bmatrix}
\tag{6'}
\]
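The substitution above can also be checked numerically. A minimal sketch in plain Python (the coefficients and the state are invented, and `matvec` is a helper of our own, not part of the text): applying \(B\) and then \(A\) to the state “yesterday” produces the state “tomorrow” in two steps.

```python
# Hypothetical coefficients; the point is the bookkeeping, not the values.
A = [[0.9, 0.1], [0.2, 0.8]]   # "today" -> "tomorrow", equation (5)
B = [[0.7, 0.3], [0.4, 0.6]]   # "yesterday" -> "today", equation (6)
x_prev = [1.0, 2.0]            # the state "yesterday"

def matvec(M, v):
    """Apply a 2x2 matrix to a 2-component state vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

# First B, then A -- exactly the chained product in the last equation.
two_steps = matvec(A, matvec(B, x_prev))
print(two_steps)  # approximately [1.33, 1.54]
```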

## Matrix multiplication

Now we can also write equation (3) as \[
\begin{bmatrix}
x_1(t+h) \\ x_2(t+h)
\end{bmatrix}
=
\begin{bmatrix}
C_{11} & C_{12}\\
C_{21} & C_{22}
\end{bmatrix}
\begin{bmatrix}
x_1(t-h)\\
x_2(t-h)
\end{bmatrix}
\tag{7}
\] Thus, comparing the last equation of the previous section with equation (7), we can say that \[
\begin{bmatrix}
C_{11} & C_{12}\\
C_{21} & C_{22}
\end{bmatrix}
=
\begin{bmatrix}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{bmatrix}
\begin{bmatrix}
B_{11} & B_{12}\\
B_{21} & B_{22}
\end{bmatrix}
\tag{8}
\] This operation is called **matrix multiplication**. Using the formula in equation (4) we have \[
\begin{bmatrix}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{bmatrix}
\begin{bmatrix}
B_{11} & B_{12}\\
B_{21} & B_{22}
\end{bmatrix}
=
\begin{bmatrix}
A_{11} B_{11} + A_{12} B_{21} &
A_{11} B_{12} + A_{12} B_{22}\\
A_{21} B_{11} + A_{22} B_{21} &
A_{21} B_{12} + A_{22} B_{22}
\end{bmatrix}
\tag{9}
\]
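Equation (9) can be written out directly in code. A minimal sketch in plain Python, with arbitrary numbers in \(A\) and \(B\) (the helper name `matmul2` is our own):

```python
def matmul2(A, B):
    """Return AB for 2x2 matrices stored as lists of rows, entry by entry
    as in equation (9)."""
    return [
        [A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
        [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]],
    ]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul2(A, B))  # [[19, 22], [43, 50]]
```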

## Generalization

For simplicity, in this document we will mainly use matrices with 2 rows and 2 columns, which represent systems with two elements, but there is no reason why we cannot extend this to any number of components: the same results hold for matrices representing systems with any other number of elements. For example, we can represent a regulatory network with a large number of genes.

If the system has \(n\) components, then the state will be represented by a **vector** \(x(t)\) that we can write as \[
x(t)=
\begin{bmatrix}
x_1(t) \\ \vdots\\x_n(t)
\end{bmatrix}
=
\begin{bmatrix}
x_1 \\ \vdots\\x_n
\end{bmatrix}(t)
\] The last form is sometimes easier to read. The system of equations that describes how the state changes between time \(t\) and \(t+h\) will be represented by a matrix \(A\) that can be written as \[
A=
\begin{bmatrix}
A_{11} & \cdots & A_{1n}\\
\vdots & \ddots & \vdots\\
A_{n1} & \cdots & A_{nn}
\end{bmatrix}
\] In this case the multiplication of two matrices \(A\) and \(B\) will be the matrix \(C=AB\) with components \(C_{ij}\) such that \[C_{ij}=A_{i1} B_{1j} + \cdots +A_{in} B_{nj}\quad\forall i,j\] which can also be written as \[C_{ij}=\sum_{k=1}^nA_{ik} B_{kj}\quad\forall i,j\tag{10}\]
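For general \(n\), equation (10) translates into a sum over the index \(k\). A minimal sketch in plain Python, with invented \(3\times 3\) matrices (the helper name `matmul` is our own):

```python
def matmul(A, B):
    """C_ij = sum_k A_ik * B_kj, as in equation (10), for square matrices
    stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 0, 2], [0, 1, 0], [3, 0, 1]]
B = [[1, 1, 0], [0, 2, 0], [1, 0, 1]]
print(matmul(A, B))  # [[3, 1, 2], [0, 2, 0], [4, 3, 1]]
```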

## Non square matrices and vectors

Looking carefully at equation (10) we can see that we can multiply any rectangular matrices, not only square ones. The only condition is that the number of columns of the first one must be equal to the number of rows of the second.

If the matrix \(A\) has \(m\) rows and \(n\) columns, and \(B\) has \(n\) rows and \(p\) columns, then the product \(AB\) will have \(m\) rows and \(p\) columns. In equation (10) the index \(i\) goes from 1 to \(m,\) the index \(j\) goes from 1 to \(p,\) and the sum over \(k\) goes from 1 to \(n.\)
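The same sum formula works for rectangular matrices. A minimal sketch in plain Python, with arbitrary sizes \(m=2,\) \(n=3,\) \(p=4\) and invented entries (the helper name `matmul` is our own):

```python
def matmul(A, B):
    """C_ij = sum_k A_ik * B_kj; i runs over the rows of A, j over the
    columns of B, and k over the shared inner dimension."""
    n = len(B)          # rows of B = columns of A
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]       # 3 x 4
C = matmul(A, B)
print(len(C), len(C[0]))  # 2 4 -> the product is 2 x 4
```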

In particular we can see that *vectors* are just one-column matrices. For instance, if we write the vector \(x(t)\) as a one-column matrix, we can rewrite equation (5) as \[
\begin{bmatrix}
x_{1,1}(t+h) \\ x_{2,1}(t+h)
\end{bmatrix}
=
\begin{bmatrix}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{bmatrix}
\begin{bmatrix}
x_{1,1}(t)\\
x_{2,1}(t)
\end{bmatrix}
=
\begin{bmatrix}
A_{11} x_{1,1}(t) + A_{12} x_{2,1}(t)\\
A_{21} x_{1,1}(t) + A_{22} x_{2,1}(t)
\end{bmatrix}
\] which has the same meaning as equation (1).

Therefore the notation that we introduced just as a convenience can be understood as a genuine matrix multiplication, so it makes sense.
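In code the same idea means that a vector can be handled as an \(n\times 1\) matrix, so the general product rule applies unchanged. A minimal sketch in plain Python, with invented coefficients and levels (the helper name `matmul` is our own):

```python
def matmul(A, B):
    """C_ij = sum_k A_ik * B_kj for matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0.9, 0.1],
     [0.2, 0.8]]         # 2 x 2
x = [[1.0],
     [2.0]]              # the state as a 2 x 1 (one-column) matrix
y = matmul(A, x)         # again a 2 x 1 matrix
print(y)                 # approximately [[1.1], [1.8]]
```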

## Notation

We are going to use capital letters to represent matrices, such as \(A, B\) and \(C.\) The components of these matrices are represented with the same capital letter and two subindices for row and column, like \(A_{ij}\) and \(B_{12}.\) Please notice that in \(B_{12}\) the subindex has two parts: row 1 and column 2. In case of ambiguity we can use commas and write \(B_{1,2}.\)

We use lower case letters to represent vectors, such as \(x\) and \(y.\) The components of the vector are written with the same letter and a subindex, such as \(x_i\) and \(y_n.\)

Finally, to represent single numbers we use Greek letters, such as \(\alpha,\beta,\lambda\) and so on. The components of vectors and matrices are also *single* numbers, but we represent them as described earlier.