# An alternative perspective to differential equations systems

Foreword: In biological systems modelling, we are often faced with a (huge) set of differential equations. These describe physical phenomena and are therefore relatively easy to construct and inspect, provided the system was built the right way. However, solving them is computationally taxing, and there is no way to predict the outcome just by looking at the entire set of differential equations. Since being introduced to systems engineering, I have wanted to use control theory to simplify these massive (and messy) systems, and thus be able to anticipate and adjust the system behaviour according to the needs of the model.

System Modelling

A Simple Block Diagram in Time Domain

Here we have a very simple process model: an input $u(t)$ is manipulated by a $System$ to produce an output $y(t)$.

A set of differential equations is commonly used to model the system, which then gives the response in the time domain. However, this method gives very limited insight into predictable system behaviours.

1. Laplace Transform

• Transform state-space equations in time domain into frequency domain
• Definition: $\mathcal{L}[f(t)]=F(s)=\int^{\infty}_{0}f(t)e^{-st}dt$
• Similar to integration, we can look up the Laplace transform of a specific function (only some common functions are listed):

| $f(t)$ | $F(s)$ |
|---|---|
| $\delta(t)$ | $1$ |
| $u(t)$ | $\frac{1}{s}$ |
| $tu(t)$ | $\frac{1}{s^{2}}$ |
| $t^{n}u(t)$ | $\frac{n!}{s^{n+1}}$ |
| $e^{-at}u(t)$ | $\frac{1}{s+a}$ |
| $\sin(at)$ | $\frac{a}{s^{2}+a^{2}}$ |
| $\cos(at)$ | $\frac{s}{s^{2}+a^{2}}$ |
• With the following theorems, system analysis in the frequency domain is greatly simplified:
| Theorem | Name |
|---|---|
| $\mathcal{L}[e^{-at}f(t)]=F(s+a)$ | Frequency Shift Theorem |
| $\mathcal{L}[f(t-T)]=e^{-sT}F(s)$ | Time Shift Theorem |
| $\mathcal{L}[f^{'}(t)]=sF(s)-f(0)$ | First-Order Differentiation Theorem |
| $\mathcal{L}[f^{''}(t)]=s^{2}F(s)-sf(0)-f^{'}(0)$ | Second-Order Differentiation Theorem |
| $\mathcal{L}[\int^{t}_{0}f(\tau)d\tau]=\frac{F(s)}{s}$ | Integration Theorem |
| $f(\infty)=\lim_{s\to0}sF(s)$ | Final Value Theorem |
| $f(0)=\lim_{s\to\infty}sF(s)$ | Initial Value Theorem |
• An example of how Laplace Transform is used to solve ordinary differential equations is shown below:
• Equation: $R\frac{dq}{dt}+\frac{1}{C}q=v$
• Given $R=10^{-2}\Omega; C=2\times10^{-5}F; v(t)=12V$
• When we take the Laplace transform (from the look-up table):
• $R[sQ(s)-q(0)]+\frac{1}{C}Q(s)=V(s) \to Q(s)=\frac{V(s)+Rq(0)}{Rs+\frac{1}{C}}=\frac{\frac{V(s)}{R}+q(0)}{s+\frac{1}{CR}}$
• also the input $V(s)=\frac{12}{s}$
• $Q(s)=\frac{\frac{1200}{s}+q(0)}{s+\frac{1}{CR}}$
• Now we take the partial fraction (and that $\frac{1}{CR}=5\times10^{6}$):
• $Q(s)=\frac{1200}{s(s+5\times10^{6})}+\frac{q(0)}{s+5\times10^{6}}=\frac{2.4\times10^{-4}}{s}+\frac{q(0)-2.4\times10^{-4}}{s+5\times10^6}$
• Take the inverse Laplace (to get back the time-domain representation):
• $q(t)=2.4\times10^{-4}+(q(0)-2.4\times10^{-4})e^{-5\times10^{6}t}$
• This shows that the Laplace transform can solve ODEs just as the usual explicit methods do. The benefit of this approach is the simplicity of the transformed system: the underlying information (including the speed and stability of the response) is easily accessible with simple tools.
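The worked RC-circuit example above can be cross-checked symbolically. Below is a minimal Python sketch (using SymPy as an assumed substitute for MATLAB's symbolic toolbox) that builds $Q(s)$ as derived and inverts it back to the time domain:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
q0 = sp.Symbol('q0')  # initial charge q(0)

# Circuit constants from the example: R = 1e-2 ohm, C = 2e-5 F, v(t) = 12 V
R = sp.Rational(1, 100)
C = sp.Rational(2, 100000)
Vs = sp.Integer(12) / s  # Laplace transform of the 12 V step input

# Q(s) = (V(s)/R + q(0)) / (s + 1/(CR)), as derived above
Qs = (Vs / R + q0) / (s + 1 / (C * R))

# Inverse Laplace transform recovers q(t)
qt = sp.inverse_laplace_transform(Qs, s, t)
print(sp.simplify(qt))
```

The printed expression should reproduce $q(t)=2.4\times10^{-4}+(q(0)-2.4\times10^{-4})e^{-5\times10^{6}t}$ (possibly carrying a Heaviside factor, since SymPy assumes $f(t)=0$ for $t<0$).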

2. Block Diagram

• Rationale: Compartmentalise different components of the system
• Processes = System/Plant $G(s)$
• Input-output = measurable variables, e.g. molecule production, gene transcription/translation events or any physical quantities
• In a closed-loop system, a sensor for the output signal sends back information to determine the error between the actual and desired output.

Open Loop System

In the open loop system presented here, $R(s)$ is the input to the plant $G(s)$ to give the output $Y(s)$. This gives the relationship of $Y(s) = G(s) R(s)$. Hence, the plant is $G(s)=\frac{Y(s)}{R(s)}$

Closed Loop System

In the closed loop system: $R(s)$ is the reference (desirable) output; $E(s)$ is the error between the actual and desirable outputs, which in this case acts as the input to the system $G(s)$.

System analysis gives the following relations:

$E(s)=R(s)-Y(s)$

$Y(s)=G(s)E(s)=G(s)(R(s)-Y(s)) \to Y(s)=\frac{G(s)}{1+G(s)}R(s)$

This eliminates the error term, expressing the actual output directly in terms of the desired output and the plant.

Alternatively, system reduction according to Mason’s rule (as shown on the right of the diagram) gives the following equation for the plant:

$\frac{Y(s)}{R(s)}=\frac{G(s)}{1+G(s)}$
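As a concrete (hypothetical) illustration, take a pure integrator plant $G(s)=\frac{K}{s}$. Closing the loop gives

$\frac{Y(s)}{R(s)}=\frac{K/s}{1+K/s}=\frac{K}{s+K}$

so feedback converts pure integration into a stable first-order response with a single pole at $s=-K$: for a unit step reference, $y(t)=1-e^{-Kt}$, which settles faster as the gain $K$ increases.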

The transfer function $G(s)$ (the Laplace transform of the impulse response $g(t)$) can be implemented in MATLAB with the following code:

Assume $g(t)=te^{-t}$

syms s t; g = t*exp(-t);                    % Enter equation in g(t)
GL = laplace(g, s);                         % Laplace transform
[num, den] = numden(GL);                    % Extract numerator and denominator
num = sym2poly(num); den = sym2poly(den);   % Get coefficients
G = tf(num, den);                           % Set up transfer function
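For readers without MATLAB, roughly the same pipeline can be sketched in Python with SymPy (an assumed substitute: `laplace_transform` plays the role of `laplace`, and `Poly(...).all_coeffs()` stands in for `sym2poly`):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
g = t * sp.exp(-t)  # g(t) = t*e^(-t), as in the MATLAB example

# Laplace transform: G(s) = 1/(s + 1)^2
GL = sp.laplace_transform(g, t, s, noconds=True)

# Extract numerator/denominator and their coefficients (cf. numden + sym2poly)
num, den = sp.fraction(sp.together(GL))
num_coeffs = sp.Poly(num, s).all_coeffs()  # [1]
den_coeffs = sp.Poly(den, s).all_coeffs()  # [1, 2, 1]
print(GL, num_coeffs, den_coeffs)
```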


3. System Analysis

• Root-finding:
• First order polynomial: $p(s)=As+B$ then $s=-B/A$
• Second order polynomial: $p(s)=as^{2}+bs+c$ then $s=\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}$
• Third order polynomial with purely imaginary roots: if $p(s)=s^{3}+3s^{2}+3s+1+K$ has roots at $s=\pm j\omega$, it factors as $p(s)=(s^{2}+\omega^{2})(s+a)$
• Inspect convergence from Pole-Zero Map:
• Usage: Inspect stability and convergence
• Pole = root(s) of the denominator polynomial (‘x’)
• Zero = root(s) of the numerator polynomial (‘o’)
• MATLAB command
pzmap(sys)

• Example:
• $J(s)=\frac{2(s^{2}-2s+5)}{s(s^{2}+4)}$ has poles (‘x’) at $0$ and $\pm2j$ and zeros (‘o’) at $1\pm2j$
• Pole-Zero Map of the system $J(s)=\frac{2(s^{2}-2s+5)}{s(s^{2}+4)}$
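The poles and zeros quoted for $J(s)$ can be checked numerically; a small sketch with NumPy:

```python
import numpy as np

# J(s) = 2(s^2 - 2s + 5) / (s(s^2 + 4))
num = [2, -4, 10]      # numerator coefficients: 2s^2 - 4s + 10
den = [1, 0, 4, 0]     # denominator coefficients: s^3 + 4s
zeros = np.roots(num)  # zeros ('o'): 1 +/- 2j
poles = np.roots(den)  # poles ('x'): 0, +/- 2j
print(zeros, poles)
```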

• Properties:
• Right-half plane (RHP) refers to the real part of the root > 0;
• Left-half plane (LHP) refers to the real part of the root < 0.
• We only consider the denominator polynomial (poles) for convergence evaluation:
| 1st Order Polynomial | 2nd Order Polynomial | Attribute |
|---|---|---|
| $s+a$ | All additive terms | Root(s) in LHP $\to$ Convergent |
| $s-b$ | Contains subtractive terms | Root(s) in RHP $\to$ Divergent |
• For a third order polynomial: according to the Routh array, $P(s) = As^{3}+Bs^{2}+Cs+D$ has all roots in the LHP only if $AD < BC$ (with all coefficients positive, this condition is also sufficient)
• Pole-zero map Analysis

Pole position gives insight into the system behaviour. This diagram roughly defines the regions of pole positions with the predicted responses. The RHP is not considered, since such systems do not converge (unlike genetic circuits, in engineering we strive to model stabilisation and convergence!)
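The Routh condition for cubics mentioned above can be cross-checked against a numerical root-finder; a minimal sketch (assuming positive coefficients, the gain values are hypothetical):

```python
import numpy as np

def routh_stable_cubic(A, B, C, D):
    """Routh criterion for P(s) = A*s^3 + B*s^2 + C*s + D with positive
    coefficients: all roots lie in the LHP iff B*C > A*D."""
    assert min(A, B, C, D) > 0, "criterion assumes positive coefficients"
    return B * C > A * D

# Cross-check against the actual roots for a stable and an unstable cubic:
for coeffs in [(1, 3, 3, 1), (1, 1, 1, 2)]:
    roots = np.roots(coeffs)
    assert routh_stable_cubic(*coeffs) == all(r.real < 0 for r in roots)
```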

• Now we focus on the oscillatory responses by inspecting the characteristics of second order systems:
• We first transform the system into monic form: $G(s)=\frac{\omega^{2}_{n}}{s^{2}+2\zeta\omega_{n}s+\omega^{2}_n}$
• Damping ratio $\zeta$: ratio of the exponential decay constant to the natural frequency: $\zeta=\frac{|\sigma|}{\omega_{n}}$ (which places the system poles at $-\sigma \pm j\omega_{d}$)
• Undamped natural frequency: $\omega_{n}=\frac{\omega_{d}}{\sqrt{1-\zeta^{2}}}$
• Here are the four types of system responses:

Four Second-Order System Response Types

• Some other specific characteristics are interesting to look at in underdamped systems ($0<\zeta<1$):
• Damped natural frequency: $\omega_{d}=\omega_{n}\sqrt{1-\zeta^{2}}$
• Rise time $T_{r}$: time for the response to go from 10% to 90% of the final value
• Peak time $T_{p}$: time to the first peak
• Percentage overshoot $\%OS$: the amount by which the first peak exceeds the steady-state value, expressed as a percentage of the steady-state value
• Settling time $T_{s}$: time for oscillations to reach and stay within 2% (sometimes 5%) of steady-state value
• Ultimately, with all these system analysis tools, we would like to manipulate different blocks of the system to attain desirable outputs. Given the existing plant, how can we manipulate the system behaviour? And if there are disturbances in the system, how do we reject them?
• A typical engineering scenario: when an aircraft on automatic cruise faces changes in the air currents around it, how do the different components (engine speed, direction etc.) coordinate to respond and provide a stable flight experience?
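The underdamped characteristics listed above have standard closed-form expressions in terms of $\zeta$ and $\omega_{n}$; a small Python sketch (the 2% settling time uses the usual approximation $T_{s}\approx4/(\zeta\omega_{n})$):

```python
import numpy as np

def underdamped_metrics(zeta, wn):
    """Standard step-response metrics of a second-order underdamped
    system (0 < zeta < 1) with undamped natural frequency wn."""
    wd = wn * np.sqrt(1 - zeta**2)                           # damped frequency
    Tp = np.pi / wd                                          # peak time
    OS = 100 * np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))  # % overshoot
    Ts = 4 / (zeta * wn)                                     # ~2% settling time
    return {"wd": wd, "Tp": Tp, "%OS": OS, "Ts": Ts}

print(underdamped_metrics(0.5, 1.0))
```

For example, $\zeta=0.5$ gives roughly 16% overshoot regardless of $\omega_{n}$, since $\%OS$ depends on the damping ratio alone.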

4. Controller design

• Controllers are inserted into the system to enhance the plant performance and/or reject disturbances. Here is an overview of the two traditional controller design strategies:
• PID control:
• PID Controller acting on a system

| Control Action | Effects on System Response |
|---|---|
| Proportional $K_{P}$ | Scales the error signal $\to$ direct effect on steady-state error; changes transient response (changes $\zeta$); increases oscillations |
| Integral $\frac{K_{I}}{s}$ | Removes steady-state error (operates even when the error is 0); introduces oscillations; introduces a critically stable pole |
| Derivative $K_{D}s$ | Damps down oscillations; provides a prediction of future error |
• Classical control:
• Stability criteria in the frequency domain step up to another level of complexity. In circuits, these controller designs are determined by the root-locus diagram, the Bode plot (gain and phase margins) and the Nyquist criterion. With these indicators, a lead compensator $K_{lead}=K\frac{s+a}{s+ra}$ or a lag compensator $K_{lag}=K\frac{s+ra}{s+a}$ is chosen to bring about the desirable system response in terms of bandwidth, high- and low-frequency gains (final steady states) and so on.
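The three PID actions in the table above can be sketched as a discrete-time update (a minimal textbook form, not a production controller; the gains and time step in the example are hypothetical):

```python
def pid_step(error, state, Kp, Ki, Kd, dt):
    """One update of a textbook PID controller.
    state = (integral_of_error, previous_error); returns (u, new_state)."""
    integral, prev_error = state
    integral += error * dt                  # integral action accumulates error
    derivative = (error - prev_error) / dt  # derivative action predicts trend
    u = Kp * error + Ki * integral + Kd * derivative
    return u, (integral, error)

# Example: purely proportional action on a constant error of 2.0
u, state = pid_step(error=2.0, state=(0.0, 0.0), Kp=1.5, Ki=0.0, Kd=0.0, dt=0.1)
```

Even with $K_{I}=K_{D}=0$, the integral state keeps accumulating, which is why the integral term continues to act once the error reaches zero.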

• Outlook: Designing compensators to drive a desirable system response can give us insight into how other components in the biological system (which had not been considered in the main system plant $G(s)$) are contributing to the system. For instance, the need to introduce a lead compensator (which speeds up the response) might translate to the fact that some biological factors unknown to the system (i.e. not represented in the model) are acting to speed up the responses of the known components.