## Continuous-time

In this first section, we'll assume that all signals are functions of a continuous time variable $t$. Later, we'll discretize the continuous-time controllers into a discrete-time approximation that can easily be manipulated by computers and microcontrollers.

### Closed-loop controllers

Figure 1 shows the block diagram of a general closed-loop or feedback control system. The output of the system $y(t)$ is subtracted from the reference $r(t)$, and this error $e(t)$ is fed to the controller, which produces the control signal $u(t)$ that is sent to the plant (the system being controlled) in an attempt to drive the error to zero.


### The PID controller

In a PID controller, the control signal is calculated as the sum of three components: a proportional component, an integral component, and a derivative component. The proportional component simply multiplies the error by a constant $K_p$, the integral component multiplies the time integral of the error by a constant $K_i$, and the derivative component multiplies the time derivative of the error by a constant $K_d$. Mathematically, the control law is given by
$$u(t) = K_p\,e(t) + K_i \int_0^t e(\tau)\,\mathrm{d}\tau + K_d\,\frac{\mathrm{d}}{\mathrm{d}t}e(t).$$
The constants $K_p$, $K_i$ and $K_d$ are referred to as the proportional gain, the integral gain, and the derivative gain, respectively.
The block diagram of this type of controller is shown in Figure 2.


You can find intuitive explanations of the purpose of each of the three components all over the internet, but in short: the proportional component makes the controller act on the instantaneous error, the integral component accumulates past errors in order to minimize the steady-state and tracking error, and the derivative component penalizes the velocity at which the output changes, which can help to reduce overshoot.

### Frequency domain

In the frequency or $s$-domain, the PID control law can be written as
$$U(s) = \left(K_p + K_i\,\frac{1}{s} + K_d\,s\right) E(s),$$
where $U(s)$ and $E(s)$ are the Laplace transforms of the respective time-domain signals $u(t)$ and $e(t)$. This formulation is represented by Figure 3.


### Derivative filtering

The derivative of the error can be rather noisy, so practical PID controllers often include a low-pass filter. Let $e_f(t)$ be the low-pass filtered error; the control law can then be modified to
$$u(t) = K_p\,e(t) + K_i \int_0^t e(\tau)\,\mathrm{d}\tau + K_d\,\frac{\mathrm{d}}{\mathrm{d}t}e_f(t).$$
Or, in the frequency domain,
$$U(s) = \left(K_p + K_i\,\frac{1}{s} + K_d\,s\,H(s)\right) E(s).$$
Here, $H(s)$ is the transfer function of the low-pass filter for the derivative component.

For the sake of simplicity, we'll use a single-pole low-pass filter to filter the error before taking the derivative. The transfer function of this filter is
$$H(s) = \frac{1}{1 + s\,T_f},$$
where $T_f$ is the filter's time constant, a parameter we can tune later.

## Discrete-time

Since computers and microcontrollers cannot deal with continuous time, the control law has to be discretized. We'll use $T_s$ to denote the time step or sampling interval.

### Discrete-time signals

Given the continuous-time error signal $e : \mathbb{R} \to \mathbb{R} : t \mapsto e(t)$, define the discrete-time error signal $e[k]$ as $e(t)$ sampled at $t = k\,T_s$ (with sampling interval $T_s$):
$$e[\,\cdot\,] : \mathbb{Z} \to \mathbb{R} : k \mapsto e[k] \triangleq e(k\,T_s).$$

We will use the same letters for continuous-time and discrete-time transfer functions and signals in the $s$- and $z$-domain; it should be clear from the context and the variables used ($s$ or $z$) whether a continuous-time or a discrete-time signal is meant. For example, $H(s)$ is a continuous-time transfer function, and $H(z)$ is a discrete-time transfer function, defined by different rational functions.

### Forward Euler

The first discretization method we'll have a look at is the forward Euler method. It is one of the simplest methods available for approximating a continuous-time ordinary differential equation by a discrete-time difference equation or recurrence relation.

#### Integral

When the time step $T_s$ is sufficiently small, the integral term of the PID control law at time $t = k\,T_s$ can be approximated by a Riemann sum:
$$e_i(t) \triangleq \int_0^t e(\tau)\,\mathrm{d}\tau \approx \sum_{n=0}^{k-1} e[n]\,T_s \triangleq e_i[k].$$
Note that this is an approximation: $e_i(k\,T_s) \approx e_i[k]$; they are not exactly equal.

This signal $e_i[k]$ can also be defined by the following recurrence relation:
$$\begin{cases} e_i[k] = e_i[k-1] + e[k-1]\,T_s \\ e_i[0] = 0. \end{cases}$$
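As a quick sanity check, the recurrence can be run on a known signal. The sketch below (plain Python; the names `Ts`, `e`, `e_i` and the values are illustrative, not from the article) integrates $e(t) = t$ and compares the result against the exact integral $t^2/2$:

```python
Ts = 0.001                          # sampling interval (illustrative value)
N = 1000                            # number of steps, so t runs up to 1 s

e = [k * Ts for k in range(N + 1)]  # samples of e(t) = t

e_i = [0.0]                         # initial condition: e_i[0] = 0
for k in range(1, N + 1):
    # e_i[k] = e_i[k-1] + e[k-1] * Ts
    e_i.append(e_i[-1] + Ts * e[k - 1])

exact = (N * Ts) ** 2 / 2           # exact integral of t from 0 to 1 s
print(abs(e_i[N] - exact))          # small: the approximation error is O(Ts)
```

Halving $T_s$ roughly halves the error, as expected from a first-order method.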

In the $z$-domain, the forward Euler discretization we carried out in the previous paragraph can be expressed as
$$\begin{aligned} E_i(z) &= z^{-1}\,E_i(z) + T_s\,z^{-1}\,E(z) \\ \iff\quad E_i(z) &= \frac{T_s}{z-1}\,E(z). \end{aligned}$$
Recall that in the $s$-domain, the relation between $E_i(s)$ and $E(s)$ was given by $E_i(s) = \frac{1}{s}\,E(s)$, so in general, we can define forward Euler discretization as the mapping from the $s$-domain to the $z$-domain given by $s \mapsto \frac{z-1}{T_s}$.

### Backward Euler

The backward Euler method is very similar to forward Euler, but it has a different time delay.
When applied to the derivative $y(t) = \frac{\mathrm{d}}{\mathrm{d}t}x(t)$, the forward Euler method results in the discrete-time recurrence relation $y[k] = \frac{x[k+1] - x[k]}{T_s}$, which is non-causal (the output $y[k]$ depends on the future input $x[k+1]$). The backward Euler method instead discretizes this derivative as the causal recurrence $y[k] = \frac{x[k] - x[k-1]}{T_s}$.

#### Derivative

We can approximate the derivative term in the control law using finite differences:
$$e_d(t) \triangleq \frac{\mathrm{d}}{\mathrm{d}t} e_f(t) \approx \frac{e_f(t) - e_f(t - T_s)}{T_s}.$$
Evaluating at $t = k\,T_s$ gives the discrete-time signal
$$e_d[k] \triangleq \frac{e_f[k] - e_f[k-1]}{T_s}.$$

In the $z$-domain, this is equivalent to
$$\begin{aligned} E_d(z) &= \frac{1 - z^{-1}}{T_s}\,E_f(z) \\ \iff\quad E_d(z) &= \frac{z-1}{z\,T_s}\,E_f(z). \end{aligned}$$

In the $s$-domain, we have $E_d(s) = s\,E_f(s)$, so backward Euler discretization is the mapping $s \mapsto \frac{z-1}{z\,T_s}$.
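The finite-difference approximation can be checked numerically as well. This sketch (plain Python, illustrative names and values) differentiates $e_f(t) = \sin t$, whose exact derivative is $\cos t$:

```python
import math

Ts = 1e-3                             # sampling interval (illustrative)
k = 500                               # evaluate at t = k*Ts = 0.5 s

e_f_now  = math.sin(k * Ts)           # e_f[k]
e_f_prev = math.sin((k - 1) * Ts)     # e_f[k-1]

e_d = (e_f_now - e_f_prev) / Ts       # backward difference
print(e_d, math.cos(k * Ts))          # agree up to an O(Ts) error
```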

#### Low-pass filter

Applying this mapping to the transfer function of the low-pass filter for the derivative results in the following:
$$\begin{aligned} E_f(s) &= \frac{1}{1 + s\,T_f}\,E(s) \\ E_f(z) &= \frac{1}{1 + \frac{z-1}{z\,T_s}\,T_f}\,E(z) \\ &= \frac{z\,T_s}{z\,(T_s + T_f) - T_f}\,E(z) \\ &= \frac{z\,\beta}{z - (1 - \beta)}\,E(z), \end{aligned}$$
where $\beta \triangleq \frac{T_s}{T_s + T_f}$. You might recognize this expression as the transfer function of the exponential moving average filter, usually defined by the recurrence relation $e_f[k] = \beta\,e[k] + (1 - \beta)\,e_f[k-1]$.
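The filter's unit DC gain can be verified with a short simulation. The sketch below (illustrative values for $T_s$ and $T_f$) feeds a unit step through the exponential-moving-average recurrence and checks that the output settles at the input's value:

```python
Ts, Tf = 0.01, 0.1          # illustrative sampling interval and time constant
beta = Ts / (Ts + Tf)       # EMA weight from the backward Euler discretization

e_f = 0.0
for k in range(1000):       # unit-step input: e[k] = 1 for all k
    # e_f[k] = beta * e[k] + (1 - beta) * e_f[k-1]
    e_f = beta * 1.0 + (1 - beta) * e_f

print(e_f)                  # settles near 1.0, the DC gain of H(s)
```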

In practice, one often treats the derivative term as a whole, discretizing the derivative and the low-pass filter in one go by combining their transfer functions and then applying forward Euler:
$$\begin{aligned} E_d(s) &= s\,H(s)\,E(s) \\ &= \frac{s}{1 + s\,T_f}\,E(s) \\ &= \frac{1}{\frac{1}{s} + T_f}\,E(s) \\ E_d(z) &= \frac{1}{\frac{T_s}{z-1} + T_f}\,E(z) \\ &= \frac{z-1}{T_s - T_f + z\,T_f}\,E(z). \end{aligned}$$
In the time domain, this becomes
$$\begin{aligned} (T_s - T_f)\,e_d[k-1] + T_f\,e_d[k] &= e[k] - e[k-1] \\ \iff\quad e_d[k] &= \alpha\,\frac{e[k] - e[k-1]}{T_s} + (1 - \alpha)\,e_d[k-1], \end{aligned}$$
where $\alpha \triangleq \frac{T_s}{T_f}$. This can be written as
$$\begin{aligned} e_d[k] &= \frac{e_f[k] - e_f[k-1]}{T_s} \\ e_f[k] &\triangleq \alpha\,e[k] + (1 - \alpha)\,e_f[k-1]. \end{aligned}$$
The first equation is the finite-differences approximation of a derivative, and the second is again an exponential moving average filter, but with a different weight factor compared to the result we got earlier using backward Euler.
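The two-step form lends itself directly to implementation. As a sketch (plain Python, illustrative names and values), the recurrences can be applied to a unit ramp $e(t) = t$, whose slope the filtered difference should recover once the filter transient has died out:

```python
Ts, Tf = 1e-3, 1e-2            # illustrative values
alpha = Ts / Tf                # weight of the EMA filter

e_f_prev = 0.0
e_d = 0.0
for k in range(1, 1001):       # unit-ramp input: e[k] = k*Ts
    e_f = alpha * (k * Ts) + (1 - alpha) * e_f_prev  # EMA filter
    e_d = (e_f - e_f_prev) / Ts                      # finite difference
    e_f_prev = e_f

print(e_d)                     # approaches 1, the slope of the ramp
```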

### Other discretization methods

An alternative method is the bilinear transform (also known as the trapezoidal rule or Tustin's rule). It is of a higher order than forward and backward Euler, and it has some nice properties, such as the fact that stable poles in one domain map to stable poles in the other. Other techniques include pole-zero matching, matched step response, and frequency-response approximations, but these are outside the scope of this article, as they are not usually applied to PID controllers.

### Overview

The following table gives an overview of all signals that make up the PID control law, as well as their discretizations. The third column is the most important one, because the discrete-time recurrence relations can easily be implemented in software.

| Continuous-time | $s$-domain | Discrete-time | $z$-domain |
|---|---|---|---|
| $e(t)$ | $E(s)$ | $e[k]$ | $E(z)$ |
| $e_i(t) = \int_0^t e(\tau)\,\mathrm{d}\tau$ | $E_i(s) = \frac{1}{s}\,E(s)$ | $e_i[k] = e_i[k-1] + T_s\,e[k-1]$ | $E_i(z) = \frac{T_s}{z-1}\,E(z)$ |
| $e_d(t) = \frac{\mathrm{d}}{\mathrm{d}t}e_f(t)$ | $E_d(s) = s\,E_f(s)$ | $e_d[k] = \frac{e_f[k] - e_f[k-1]}{T_s}$ | $E_d(z) = \frac{z-1}{z\,T_s}\,E_f(z)$ |
| $e_f(t) = e(t) - T_f\,\frac{\mathrm{d}}{\mathrm{d}t}e_f(t)$ | $E_f(s) = \frac{1}{1+s\,T_f}\,E(s)$ | $e_f[k] = \alpha\,e[k] + (1-\alpha)\,e_f[k-1]$ | $E_f(z) = \frac{\alpha\,z}{z-(1-\alpha)}\,E(z)$ |
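Putting the third column together gives a complete update routine. The following sketch is one possible Python implementation; the gains, time constants, and the toy first-order plant in the usage example are all illustrative choices, not values from the article:

```python
class PID:
    def __init__(self, Kp, Ki, Kd, Ts, Tf):
        self.Kp, self.Ki, self.Kd, self.Ts = Kp, Ki, Kd, Ts
        self.alpha = Ts / Tf           # weight of the derivative's EMA filter
        self.e_i = 0.0                 # integral state, e_i[k-1]
        self.e_prev = 0.0              # previous raw error, e[k-1]
        self.e_f_prev = 0.0            # previous filtered error, e_f[k-1]

    def update(self, e):
        # Forward-Euler integral: e_i[k] = e_i[k-1] + Ts * e[k-1]
        self.e_i += self.Ts * self.e_prev
        # EMA filter followed by a backward difference for the derivative
        e_f = self.alpha * e + (1 - self.alpha) * self.e_f_prev
        e_d = (e_f - self.e_f_prev) / self.Ts
        self.e_prev, self.e_f_prev = e, e_f
        return self.Kp * e + self.Ki * self.e_i + self.Kd * e_d

# Toy closed loop: first-order plant y' = u - y, itself discretized with
# forward Euler, driven toward the constant reference r = 1.
pid = PID(Kp=2.0, Ki=1.0, Kd=0.1, Ts=0.01, Tf=0.05)
y = 0.0
for _ in range(2000):                  # simulate 20 s
    u = pid.update(1.0 - y)            # e[k] = r - y
    y += 0.01 * (u - y)
print(y)                               # converges near the reference 1.0
```

Thanks to the integral term, the steady-state error goes to zero even though the plant's DC gain differs from 1.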

## Derivative on measurement

One disadvantage of the PID topology discussed above is that the derivative component will become very large if the reference $r(t)$ suddenly changes. This effect is known as “derivative kick”.
The solution is really simple: instead of the derivative of the error, the derivative of the measurement is used. The former is known as “derivative on error”, the latter as “derivative on measurement”. Both topologies are equivalent if the reference is constant, because if $\frac{\mathrm{d}}{\mathrm{d}t}r(t) = 0$, then $\frac{\mathrm{d}}{\mathrm{d}t}e(t) = -\frac{\mathrm{d}}{\mathrm{d}t}y(t)$.
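The kick is easy to demonstrate numerically. In this sketch (illustrative values), the reference steps from 0 to 1 between two samples while the measurement stays put; the error derivative jumps by $1/T_s$, while the measurement derivative remains zero:

```python
Ts = 0.01                          # sampling interval (illustrative)
r = [0.0, 0.0, 1.0, 1.0]           # reference steps up at k = 2
y = [0.0, 0.0, 0.0, 0.0]           # measurement unchanged over these samples

e = [ri - yi for ri, yi in zip(r, y)]
d_error = (e[2] - e[1]) / Ts       # derivative on error: kick of 1/Ts
d_meas = -(y[2] - y[1]) / Ts       # derivative on measurement: no kick
print(d_error, d_meas)
```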

Figure 4 shows a block diagram of this new derivative on measurement topology, including the low-pass filter on the derivative.
