Chapter 8

Control Analysis and Design through Frequency Domain Methods

Transfer function shaping via feedback control: PID controllers, Bode plot analysis, root locus method, Nyquist stability criterion, system gain and passivity, predictive and feedforward control.

Transfer Function Shaping through Control: Closed-Loop vs. Open-Loop

In the relevant section, we discussed some control theoretic configurations mapping an input to an output, and how this map can be shaped by control design. Two common architectures are depicted in Figure 1.2 (a general output feedback, where the control depends on the output of the system) and in Figure 1.3 (an error output feedback control system, where the control depends on the error between the external input and the system output). More general configurations are also possible, as discussed in the relevant section. For concreteness, we will focus on a particular architecture, noting that the analysis to follow can be generalized to any of these models.

Standard Negative Feedback Loop
[Figure 8.4. Block diagram: the reference r enters a summing junction with positive sign, the output y is fed back with negative sign; the error e drives the controller C(s), whose output u drives the plant P(s), producing the output y.]

Consider the (error) feedback loop given in Figure 8.4. Here P(s) denotes the transfer function of the system to be controlled. By writing Y(s) = P(s)C(s)(R(s) - Y(s)), it follows that

\frac{Y(s)}{R(s)} = \frac{P(s)C(s)}{1 + P(s)C(s)}

is the closed-loop transfer function (under negative unity feedback). Compare this with the setup where there is no feedback: in this case, the transfer function would have been P(s)C(s). This latter expression is often called the (open-)loop transfer function.

The goal is to shape the closed-loop transfer function via the control characterized by C(s) in the frequency domain.

Some motivation via a common class of controllers: PID controllers

By Laplace transforms, we know that differentiation in the time domain corresponds to multiplication by s, and integration to multiplication by \frac{1}{s}. In the context of the setup of Figure 8.4, let e(t) = r(t) - y(t). A popular and practical type of control structure involves:

u(t) = k_i \int_0^t e(\tau)\, d\tau + k_d \frac{de}{dt}(t) + k_p e(t)

which, in the Laplace domain, reads

U(s) = \left(\frac{k_i}{s} + k_d s + k_p\right) E(s).

Thus, the control uses the error itself (proportional), its integral, and its derivative, leading to the term PID control.
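The control law above can be sketched in discrete time. The gains, the step size, and the double-integrator test plant below are illustrative assumptions, not values from the text:

```python
# A minimal discrete-time PID controller (a sketch; gains and dt are
# illustrative assumptions).
def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_e": 0.0}

    def pid(e):
        # integral term: running Riemann sum of the error
        state["integral"] += e * dt
        # derivative term: backward difference of the error
        de = (e - state["prev_e"]) / dt
        state["prev_e"] = e
        return kp * e + ki * state["integral"] + kd * de

    return pid

# Example: drive a double integrator y'' = u toward r = 1 with PD action.
pid = make_pid(kp=1.0, ki=0.0, kd=1.5, dt=0.01)
y, v = 0.0, 0.0               # position and velocity
for _ in range(10000):        # simulate 100 seconds
    u = pid(1.0 - y)          # e = r - y, negative unity feedback
    v += u * 0.01             # Euler step of the double integrator
    y += v * 0.01
```

With k_i = 0 this reduces to the PD structure used in the double-integrator example later in this section.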


Bode-Plot Analysis

Bode plots were studied earlier in class. With Bode plots, we observed that we can identify the transfer function of a system when the system is already stable. However, the Bode plot does not, on its own, provide insight on how to design a control system, or how to adjust a controller, so that stability is attained.


The Root Locus Method

The set \{s : 1 + C(s)P(s) = 0\} consists of the poles of the closed-loop transfer function. If one associates with the controller a gain parameter K, the root locus method traces the set \{s : 1 + KC(s)P(s) = 0\} as K ranges from 0 to \infty (and often from 0 to -\infty as well). Thus, this method provides a design technique for identifying desirable values of the parameter K.

The root locus method allows one to identify the poles. As we studied in the relevant section, the pole locations determine BIBO stability: all of the poles must have negative real parts for the closed-loop system to be BIBO stable.

Additionally, it lets one select desirable pole locations: for example, poles with large imaginary components lead to significant transient oscillations, and poles with real parts closer to the origin (in the left half plane) dominate the response characteristics. A control engineer/designer may have reasons to choose certain poles over others.

For the approach to be applicable, the following key mathematical result is to be noted.

Theorem (Continuous Dependence of Roots on Parameters)

Consider the polynomial equation a(s) + Kb(s) = 0, where a and b are polynomials. The roots of this equation vary continuously as K \in \mathbb{R} changes.

Remark.

Intuition: This theorem guarantees that as you slowly turn a gain knob, the poles of your system move smoothly through the complex plane -- they cannot suddenly jump from one location to another. This is what makes the root locus method work: you can trace continuous curves showing where the poles go as gain increases, and identify the exact gain value where poles cross from the stable left half-plane into the unstable right half-plane.

Remark.

The above also directly establishes the very useful result that when one is given a matrix, the eigenvalues of the matrix are continuous in (pointwise perturbations of) its entries.

Example (Root Locus for Double Integrator)

Consider Figure 8.4.

a) [Proportional Control] Let the plant and controller be given by P(s) = \frac{1}{s^2} (double integrator dynamics) and C(s) = k_p (such a controller is known as a proportional controller). Find the root locus as k_p changes from 0 to \infty. Can this system be made stable by such a proportional controller?

b) [PD Control] Consider P(s) = \frac{1}{s^2} (double integrator dynamics), and C(s) = k_p + k_d s (the term PD controller means proportional plus derivative control). Let k_p = k_d = K. Find the root locus as K changes from 0 to \infty. Conclude, comparing with part a) above, that the addition of the derivative term has pushed the poles to the left half plane (thus leading to stability!)

c) [Reference Tracking] For the system with the controller in part b), let K > 0. Let r(t) = A 1_{\{t \ge 0\}} for some A \in \mathbb{R}. Find \lim_{t \to \infty} y(t). Hint: Apply the Final Value Theorem. We have that R(s) = \frac{A}{s}, and with Y(s) = A\frac{K(s+1)}{s(s^2 + Ks + K)} we have that sY(s) has all poles in the left half plane. By the final value theorem, the limit is A. Thus, the output asymptotically tracks the input signal.

Some engineering interpretation. \frac{1}{s^2} can be viewed as a map from acceleration to position: \frac{d^2 y}{dt^2} = u. Part a) above suggests that if we only use position error, we cannot have a stable tracking system; but if we use position and derivative (that is, velocity) information, then we can make the system stable. Furthermore, if we have a reference tracking problem, the output will indeed track the reference path.

Solution. a) The roots solve s^2 + K = 0, giving the purely imaginary pair s = \pm i\sqrt{K}; therefore the system is not BIBO stable for any K > 0. b) Here, 1 + K\frac{s+1}{s^2} = 0 and thus s^2 + Ks + K = 0. All reals to the left of -1, together with values on the unit circle centred at -1, form the root locus. Therefore, with K > 0, the presence of the derivative term together with the proportional control pushes the root locus into the left half plane!

c) We have that R(s) = \frac{A}{s}. Then, Y(s) = A\frac{K(s+1)}{s(s^2+Ks+K)}. Note that sY(s) has all poles in the left half plane. By the final value theorem, the limit is A (you can also directly apply an analysis based on partial fraction expansion, which upon some analysis leads to A\left(\frac{\alpha_1}{s} + \frac{\beta_1 s + \beta_2}{s^2 + Ks + K}\right) for constants \alpha_1, \beta_1, \beta_2; only the term involving the partial fraction \frac{\alpha_1}{s} leads to a term which does not decay to zero, in view of part b), and thus it suffices to compute \alpha_1 = 1). Thus, the output asymptotically tracks the input signal.
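The root locus claims in part b) can be spot-checked numerically. A minimal sketch (standard library only) computing the roots of s^2 + Ks + K via the quadratic formula:

```python
# Numeric check of the root locus: the roots of s^2 + K s + K lie on the
# unit circle centred at -1 for 0 < K < 4, and on the real axis to the
# left of -1 for K >= 4; in all cases they are in the left half plane.
import cmath

def roots_pd(K):
    disc = cmath.sqrt(K * K - 4 * K)
    return ((-K + disc) / 2, (-K - disc) / 2)

for K in (0.5, 1.0, 2.0, 3.9):            # complex-root regime
    for s in roots_pd(K):
        assert abs(abs(s + 1) - 1) < 1e-9  # on the circle |s + 1| = 1
        assert s.real < 0                  # left half plane

for K in (4.0, 10.0, 100.0):               # real-root regime
    for s in roots_pd(K):
        assert abs(s.imag) < 1e-9
        assert s.real <= -1 + 1e-9         # real axis, left of -1
```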


Nyquist Stability Criterion

With the Bode plot, we observed that we can identify the transfer function when a system is already stable. However, the Bode plot does not provide insight into how to adjust the controller so that stability is attained. The Root Locus method allows for parametrically adjusting the closed-loop pole locations. Complementing the Root Locus method, the Nyquist plot provides further insight on controller design and its robustness to parameter variations, with the phase of the transfer function also appearing in the analysis (unlike in the Root Locus method), as discussed further below.

Recall first that a right-half plane zero of 1 + P(s)C(s) is a right-half plane pole of the closed-loop transfer function, and hence implies instability. In general, it is not difficult to identify the poles of the (open-loop) transfer function P(s)C(s). Therefore, in the following we will assume that we know the number of right-half plane poles of P(s)C(s). Note also that the poles of P(s)C(s) are the same as the poles of 1 + P(s)C(s); thus, we will assume that we know the number of right-half plane poles of 1 + P(s)C(s).

Constructing the Nyquist contour. For the Nyquist plot, we construct a clockwise contour starting from -iR, moving up the imaginary axis through the origin to +iR, and then closing the curve along a semi-circle of radius R in the right half plane. Later on we will take R \to \infty.

We will refer to this contour as a Nyquist contour.

If there is a pole of L(s) on the imaginary axis, we should carefully exclude it from our contour: to do so, we divert the path along a semi-circle of a very small radius r around the pole in a counter-clockwise fashion (with r later taken arbitrarily close to 0). The exclusion of such a pole will not make a difference in the stability analysis: we focus on the zeroes of 1 + P(s)C(s) in the right half plane, and such a pole cannot make 1 + P(s)C(s) = 0.

Theorem (Nyquist Criterion)

Consider a closed loop system with the loop transfer function L(s) = C(s)P(s). Suppose L(s) has P poles in the region encircled by the Nyquist contour. Let N be the number of clockwise encirclements of -1 by L(s) when s traverses the Nyquist contour \Gamma clockwise (where we note that -N would be the number of counter-clockwise encirclements). Then, the closed loop has N + P poles in the right half plane.

Remark.

Intuition: The Nyquist criterion lets you determine closed-loop stability by looking at the open-loop frequency response -- you trace out the open-loop transfer function as frequency sweeps from 0 to infinity and count how many times the curve encircles the critical point -1. This is powerful because you can assess stability from experimental frequency response data without knowing the exact system model. If there are no open-loop right-half-plane poles (P = 0), then stability requires zero encirclements of -1.

Note: One can alternatively trace the contour counter-clockwise and then count N as the number of counter-clockwise encirclements. The result will be the same.

The proof builds on what is known as the principle of the variation of the argument, which we state next.

Theorem (Principle of Variation of the Argument)

Let D be a closed region in the complex plane with boundary \Gamma. Let f : \mathbb{C} \to \mathbb{C} be complex differentiable (and hence analytic) on D (and on \Gamma) except at a finite number of poles, and with a finite number of zeroes, all in the interior of D. Then, the change in the argument of f (normalized by 2\pi) over \Gamma (known as the winding number w_n) is given by:

w_n = \frac{1}{2\pi}\Delta_\Gamma \arg(f) = \frac{1}{2\pi i} \oint_\Gamma \frac{f'(z)}{f(z)}\, dz = Z - P,

where \Delta_\Gamma is the net variation in the angle (or argument) of f when z traces the contour \Gamma in the counter-clockwise direction; Z is the number of zeroes, and P is the number of poles (with multiplicities counted).

Remark.

Intuition: Think of walking along a closed path in the complex plane and watching where f(z) points. Each zero of f inside the contour acts like a "positive vortex" that winds the output once counterclockwise, while each pole acts like a "negative vortex" that winds it once clockwise. The net winding number (total counterclockwise turns) equals the difference Z - P. This is the mathematical engine behind the Nyquist criterion: by counting encirclements, you are really counting the zeros minus poles of 1 + L(s) in the right half-plane.

Now, to apply this theorem, consider f(s) = 1 + P(s)C(s). As noted earlier, the poles of 1 + P(s)C(s) are the same as the poles of P(s)C(s). We are interested in the zeroes of 1 + P(s)C(s), and whether they are in the right half plane. So, all we need to compute is the number of zeroes of 1 + P(s)C(s), through the encirclement count. Note now that the number of encirclements of 1 + P(s)C(s) around 0 is the same as the number of encirclements of P(s)C(s) around -1. So, the number of zeroes of 1 + P(s)C(s) in the right half plane will be P plus the winding number. As a final note, in the Nyquist analysis we traverse the contour clockwise and apply the argument principle accordingly (so the encirclements around -1 are counted clockwise). Hence, the number of interest is N + P, as claimed.

Computing the Nyquist plot. To compute the number of clockwise encirclements, compute L(i\omega) starting from \omega = 0 and increase \omega to \infty. Since L(i\omega) = \overline{L(-i\omega)}, computing L(i\omega) for \omega > 0 suffices to obtain the values for \omega < 0. Finally, as |s| \to \infty, |L(s)| often converges to a constant, and the essence of the encirclements is given by the changes observed as s traces the imaginary axis.
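This procedure can be automated: sample L(iω) along a finite stretch of the imaginary axis and accumulate the change in the argument of L(iω) + 1. The two loop transfer functions below are illustrative choices (not plants from the text); they cross the negative real axis at -3/8 and -9/8, on either side of -1.

```python
# A numeric sketch of encirclement counting for the Nyquist criterion
# (standard library only; the loop transfer functions are illustrative).
import cmath
import math

def clockwise_encirclements(L, omegas):
    # Accumulate the change in arg(L(i w) + 1) along the upward sweep of
    # the imaginary axis (the clockwise traversal of the Nyquist contour);
    # a net change of -2*pi is one clockwise encirclement of -1.
    total = 0.0
    prev = cmath.phase(L(1j * omegas[0]) + 1)
    for w in omegas[1:]:
        cur = cmath.phase(L(1j * w) + 1)
        d = cur - prev
        if d > math.pi:        # unwrap jumps across the branch cut
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return -round(total / (2 * math.pi))

omegas = [w / 100 for w in range(-10000, 10001)]
stable_L = lambda s: 3 / (s + 1) ** 3    # crosses the real axis at -3/8
unstable_L = lambda s: 9 / (s + 1) ** 3  # crosses the real axis at -9/8
print(clockwise_encirclements(stable_L, omegas))    # 0: stable (P = 0)
print(clockwise_encirclements(unstable_L, omegas))  # 2: two RHP poles
```

Since both examples are open-loop stable (P = 0), the closed loop has N + P = N right-half plane poles: zero in the first case, two in the second.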

Problem 8.4.1 (Nyquist Stability Analysis)

a) Consider C(s) = K, P(s) = \frac{1}{(s+1)^2}. Is this system stable for a given K > 0? Explain through the Nyquist stability criterion.

b) Consider P(s)C(s) = \frac{1}{(s+a)^3} with the controller in an error feedback form, so that the closed loop transfer function is given by \frac{P(s)C(s)}{1 + P(s)C(s)}. Is this system stable? Explain through the Nyquist stability criterion.

c) Let P(s)C(s) = \frac{3}{(s+1)^3}. Compute the gain stability margin. Draw the phase stability margin on the Nyquist curve.

Problem 8.4.2 (Inverted Pendulum with Nyquist Analysis)

Consider the inverted pendulum displayed in Figure 8.2, where a torque u is applied to maintain the pendulum around \theta = 0. The dynamics can be derived as

\frac{d^2\theta(t)}{dt^2} = \frac{g}{l}\sin(\theta(t)) + \frac{u(t)\cos(\theta(t))}{ml^2}

For simplicity, let us assume the coefficients (m, l) are selected so that the above simplifies to:

\frac{d^2\theta(t)}{dt^2} = \sin(\theta(t)) + u(t)\cos(\theta(t))

Consider the linearization around \theta = 0, \frac{d\theta}{dt} = 0 (with the approximations \sin(\theta) \approx \theta, \cos(\theta) \approx 1).

Then, it follows that the plant, modeling the linearized inverted pendulum, has the transfer function

P(s) = \frac{1}{s^2 - 1}

Now, suppose we apply the control

C(s) = k(s + 1),

with an error feedback control configuration as in Figure 8.4.

Via the Nyquist criterion, find conditions on k so that the closed-loop linearized system is BIBO stable.

Hint. Note that P(s) has one right-half plane pole, so for stability the Nyquist plot has to encircle -1 once counter-clockwise.
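As a cross-check of the answer one should reach via the Nyquist criterion: the closed-loop characteristic equation here is 1 + k(s+1)/(s^2 - 1) = 0, i.e. s^2 + ks + (k - 1) = 0, whose roots can be examined directly (a sketch, standard library only):

```python
# Roots of the closed-loop characteristic polynomial s^2 + k s + (k - 1)
# for the linearized pendulum with C(s) = k(s + 1).
import cmath

def closed_loop_roots(k):
    disc = cmath.sqrt(k * k - 4 * (k - 1))
    return ((-k + disc) / 2, (-k - disc) / 2)

for k in (0.5, 0.9):    # k < 1: a right-half plane pole remains
    assert max(r.real for r in closed_loop_roots(k)) > 0
for k in (1.1, 2, 10):  # k > 1: both poles in the left half plane
    assert max(r.real for r in closed_loop_roots(k)) < 0
```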

Robustness via Nyquist plots

Nyquist's criterion suggests a robustness analysis, via gain and phase margins, mainly as a way to measure how far the loop transfer function L(s) = P(s)C(s) is from -1 in both the magnitude (1) and phase (\pi) terms.

Roughly speaking, for systems which hit the real line only once, the angle between \pi and the location where the Nyquist plot crosses the unit circle in the complex plane (i.e., with magnitude equaling 1) is called the phase stability margin.

The ratio between -1 and the point where the Nyquist plot crosses the negative real axis is called the gain stability margin.

The root locus method cannot give such a strong characterization of robustness. A useful implication is the small gain theorem, presented in the following subsection.

System gain, passivity and the small gain theorem

Consider a linear system with feedback, which we assume to be stable. We generalize the observation above by viewing the input as an element of L_2(\mathbb{R}; \mathbb{C}). Consider then the gain of the linear system:

\gamma := \sup_{u \in L_2(\mathbb{R}; \mathbb{C}) : \|u\|_2 \neq 0} \frac{\|y\|_2}{\|u\|_2}

We know, by Parseval's theorem, that

\gamma = \sup_{u \in L_2(\mathbb{R}; \mathbb{C}) : \|u\|_2 \neq 0} \frac{\|\mathcal{F}_{CC}(y)\|_2}{\|\mathcal{F}_{CC}(u)\|_2}

By writing i\omega instead of i2\pi f in the following, we have

\mathcal{F}_{CC}(y)(\omega) = \mathcal{F}_{CC}(u)(\omega)\, G(i\omega),

where G is the closed-loop transfer function evaluated at s = i\omega. It can then be shown, by noting that

\int_\omega |\mathcal{F}_{CC}(u)(i\omega)|^2 |G(i\omega)|^2\, d\omega \le \left(\sup_\omega |G(i\omega)|^2\right) \int_\omega |\mathcal{F}_{CC}(u)(i\omega)|^2\, d\omega,

the following holds:

\gamma = \sup_{u \in L_2(\mathbb{R}; \mathbb{C}) : \|u\|_2 \neq 0} \sqrt{\frac{\int_\omega |\mathcal{F}_{CC}(u)(i\omega)\, G(i\omega)|^2\, d\omega}{\|\mathcal{F}_{CC}(u)\|_2^2}} = \sup_{\omega \in \mathbb{R}} |G(i\omega)| =: \|G\|_\infty

In the above, we write \sup_{u \in L_2(\mathbb{R};\mathbb{C})} (rather than a maximum) since, even when a maximizing frequency \omega^* exists, the corresponding input u(t) = e^{i\omega^* t} is not square integrable. However, the supremum can be approached arbitrarily well by truncation of the input and the output:

First observe that since g is integrable, G(i\omega) is continuous in \omega (equivalently, \hat{g}(2\pi f) is continuous in f). This can be shown by an application of the dominated convergence theorem. Let \omega^* = 2\pi f^* and u_K(t) = 1_{\{-\frac{K}{2} \le t < \frac{K}{2}\}} e^{i2\pi f^* t}. As K \to \infty, the gain of the system with input u_K approximates \gamma arbitrarily well. This is slightly beyond the scope of our course, but the reasoning is that the Fourier transform of u_K forms an approximate identity sequence \frac{\sin(\pi(f-f^*)K)}{\pi(f-f^*)}, and \frac{1}{K}|\hat{u}_K(f)|^2 will also form an approximate identity-like sequence around f^*, since

\frac{1}{K}\left(\frac{\sin(\pi(f-f^*)K)}{\pi(f-f^*)}\right)^2 = \frac{\sin(\pi(f-f^*)K)}{\pi(f-f^*)K}\, \frac{\sin(\pi(f-f^*)K)}{\pi(f-f^*)}

Thus, the gain, when the input is u_K, is:

\left(\frac{\int_f \big|\frac{\sin(\pi(f-f^*)K)}{\pi(f-f^*)}\big|^2 |\hat{g}(f)|^2\, df}{\int_f \big|\frac{\sin(\pi(f-f^*)K)}{\pi(f-f^*)}\big|^2\, df}\right)^{1/2}

Now, by viewing the function

R(f) := \frac{\big|\frac{\sin(\pi(f-f^*)K)}{\pi(f-f^*)}\big|^2}{\int_v \big|\frac{\sin(\pi(v-f^*)K)}{\pi(v-f^*)}\big|^2\, dv}

as a probability density function which concentrates around f^*, we have that

\left(\int_f \frac{\big|\frac{\sin(\pi(f-f^*)K)}{\pi(f-f^*)}\big|^2}{\int_{f'} \big|\frac{\sin(\pi(f'-f^*)K)}{\pi(f'-f^*)}\big|^2\, df'}\, |\hat{g}(f)|^2\, df\right)

converges to |\hat{g}(f^*)|^2 as K gets larger, by the continuity of \hat{g}.

Thus, we can get arbitrarily close to the supremum gain: first approach the supremum value over all frequencies arbitrarily well, and then approach the gain corresponding to such frequencies with inputs constructed as above. Accordingly, we define the gain of a linear system as:

\gamma := \|G(i\omega)\|_\infty = \sup_{\omega \in \mathbb{R}} |G(i\omega)| = \sup_{f \in \mathbb{R}} |\hat{g}(f)|
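Numerically, this gain can be approximated by a dense sweep over frequencies. The lightly damped second-order G below is an illustrative assumption, chosen because its peak gain has a known closed form 1/(2\zeta\sqrt{1-\zeta^2}):

```python
# Approximate ||G||_inf = sup_w |G(i w)| by a dense frequency sweep.
def G(s):
    # illustrative second-order system with zeta = 0.1, omega_n = 1
    return 1 / (s * s + 0.2 * s + 1)

gain = max(abs(G(1j * (w / 1000))) for w in range(0, 100001))
print(round(gain, 2))  # close to 1/(2*0.1*sqrt(1 - 0.01)) ~= 5.03
```

Sweeping only w >= 0 suffices here because |G(iw)| = |G(-iw)| for a real impulse response.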

Define a system to be L_2-stable if a bounded input, in the L_2 sense, leads to a bounded output in the L_2 sense. BIBO stability of a linear system implies L_2-stability: BIBO stability implies \|h\|_1 < \infty, which implies that H(i\omega) is uniformly bounded, with |H(i\omega)| \le \|h\|_1 for all \omega \in \mathbb{R}. Then, observe that the output satisfies \|y^m\|_2^2 = \int |Y^m(i\omega)|^2 \frac{1}{2\pi}\, d\omega = \int |H(i\omega)|^2 |U^m(i\omega)|^2 \frac{1}{2\pi}\, d\omega \le \|h\|_1^2 \int |U^m(i\omega)|^2 \frac{1}{2\pi}\, d\omega. Thus, following the relevant theorem, we can extend the domain (input function space) to be the entire L_2 space: approximate any u with \|u\|_2 < \infty by a sequence \{u^m \in \mathcal{S}, m \in \mathbb{N}\}; taking the limit of the corresponding output sequence lets us conclude that \|y\|_2 is also bounded. Thus \|y\|_2^2 = \int |Y(i\omega)|^2 \frac{1}{2\pi}\, d\omega = \int |H(i\omega)|^2 |U(i\omega)|^2 \frac{1}{2\pi}\, d\omega < \infty.

We next state a useful result on the verification of stability.

Theorem (Small Gain Theorem)

Consider a feedback control system with closed-loop transfer function G(s) = \frac{H_1(s)}{1 + H_1(s)H_2(s)}, where H_1 and H_2 are stable. Suppose further that the gains of H_1 and H_2 are \gamma_1 and \gamma_2, respectively. Then, if \gamma_1 \gamma_2 < 1, the closed-loop system is stable.

Remark.

Intuition: The small gain theorem says that if a feedback loop has two stable subsystems and the product of their peak gains is less than 1, then the loop cannot amplify signals enough to sustain instability. Think of it like an echo between two walls: if each wall reflects less than the full sound energy, the echo eventually dies out. This is a conservative but robust test -- it ignores phase information and only looks at magnitudes, so it works even when you have uncertainty about the exact system model.

Note that |H_1(i\omega)H_2(i\omega)| \le \gamma_1\gamma_2 < 1 uniformly for \omega \in \mathbb{R}, and therefore \sup_\omega \left|\frac{H_1(i\omega)}{1 + H_1(i\omega)H_2(i\omega)}\right| is finite, since |1 + H_1(i\omega)H_2(i\omega)| \ge 1 - \gamma_1\gamma_2 is uniformly bounded away from 0. The proof then follows from Nyquist's criterion: since H_1, H_2 are stable, they have no poles in the right half plane. Furthermore, since \gamma_1 \gamma_2 < 1, the loop gain H_1(i\omega)H_2(i\omega) (playing the role of C(s)P(s) above) stays uniformly away from the point -1; thus a positive gain margin is maintained and -1 is not encircled. The closed-loop system is then stable.
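A small numeric illustration of the theorem, with illustrative stable subsystems H1(s) = 0.5/(s+1) and H2(s) = 1/(s+2) (assumptions, not systems from the text):

```python
# Small gain check: peak gains multiply to 0.25 < 1, and the closed-loop
# poles indeed lie in the left half plane.
import cmath

def H1(s): return 0.5 / (s + 1)   # illustrative stable subsystem
def H2(s): return 1.0 / (s + 2)   # illustrative stable subsystem

ws = [w / 100 for w in range(0, 10001)]
gamma1 = max(abs(H1(1j * w)) for w in ws)  # 0.5, attained at w = 0
gamma2 = max(abs(H2(1j * w)) for w in ws)  # 0.5, attained at w = 0
assert gamma1 * gamma2 < 1                 # small gain condition holds

# Closed-loop characteristic polynomial: (s+1)(s+2) + 0.5 = s^2 + 3s + 2.5
disc = cmath.sqrt(9 - 4 * 2.5)
poles = [(-3 + disc) / 2, (-3 - disc) / 2]
print([round(p.real, 2) for p in poles])   # [-1.5, -1.5]: left half plane
```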

The above concept does not involve phase properties, and it may be conservative for certain applications. On phase properties, there is a further commonly used concept of passivity: by Nyquist's criterion, for stable P and C, if P(s)C(s) is such that its phase is in (-\pi, \pi) for all s = i\omega, \omega \in \mathbb{R}, then the closed loop system will be stable, since P(i\omega)C(i\omega) never hits the negative real axis and therefore -1 is not encircled. Since P and C are stable, they have no poles in the right half plane. The closed-loop system is then stable.


Predictive and Feedforward Control

In addition to stability and robustness, one may also wish to have desirable tracking and settling time properties (such as a low delay in the response). For such setups, it is often desirable to have a minimum-phase closed loop system (with no zeroes in the right half-plane). When one has a plant which is not minimum-phase, the design is often more challenging, due to the effective time delay present, as discussed in the relevant section.

In such cases, it may be necessary to go beyond classical PID control, and consider schemes such as predictive control or feedforward control where part of the control input can be computed by some pre-processing by possibly using feedback from the output.

To gain some insight, consider the following equation:

\frac{P(s)C(s)}{1 + P(s)C(s)} = H_d(s)

where H_d(s) is a desired closed-loop transfer function, which we may assume to be minimum-phase. Some algebra leads to:

C(s) = \frac{1}{P(s)}\cdot\frac{H_d(s)}{1 - H_d(s)}

What we see is that if P(s) has a zero in the right half plane, the presence of \frac{1}{P(s)} above leads to an unstable control transfer function.
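A quick spot-check of the shaping relation: solving \frac{P(s)C(s)}{1+P(s)C(s)} = H_d(s) for C(s) gives C(s) = \frac{H_d(s)}{P(s)(1 - H_d(s))}, and substituting back recovers H_d. The plant and target below are illustrative minimum-phase choices, not examples from the text:

```python
# Verify numerically that C = Hd / (P * (1 - Hd)) achieves the desired
# closed loop: P*C/(1 + P*C) equals Hd at sample points on the i-axis.
def P(s):  return 1 / (s + 2)    # illustrative minimum-phase plant
def Hd(s): return 1 / (s + 1)    # illustrative desired closed loop
def C(s):  return Hd(s) / (P(s) * (1 - Hd(s)))

for w in (0.1, 1.0, 10.0):
    s = 1j * w
    closed = P(s) * C(s) / (1 + P(s) * C(s))
    assert abs(closed - Hd(s)) < 1e-12
```

Here C(s) works out to (s+2)/s, which is stable to implement; with a right-half plane zero in P, the same construction would place that zero in the denominator of C.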

For such settings, predictive control or feedforward control policies are often utilized in practice. Feedforward control is a general control paradigm where, unlike feedback control, the control is computed by an open-loop (inner) system. The feedforward term is often used as a predictive correction for the system. In many designs, the feedforward term is added to a feedback control term, leading to an overall control with stronger robustness properties.

To gain further insight, let P(s) = P_m(s)P_{nm}(s), where P_m is minimum-phase and P_{nm} is not. Consider Y_f(s) = Y(s)P_{nm}^{-1}(s). This effectively allows the control to work only with the minimum-phase portion of the system, as if the feedback Y_f only involved P_m. A PID control can then be designed for this system, and the output may then be subjected to P_{nm} at the end. Observe that, via Y_f(s) = Y(s)P_{nm}^{-1}(s), y_f acts as a prediction of the output y: for example, if P_{nm}(s) = e^{-2s} were a pure delay system (see the Pade approximation discussion in the relevant section), then Y_f(s) = Y(s)e^{2s} would essentially be written in time domain as y_f(t) = y(t+2), serving as a prediction of the output. Thus, the future is anticipated in the design.

For example, in reference tracking applications in which a path is given in advance and the system is to follow it closely, using feedback and causal reference information alone may lead to weaker performance; here, future reference information can be used to compute an additional control input via an inverse analysis mapping reference paths to control actions. If it is possible to predict/estimate the future behaviour of a system to be tracked, this may make the application of a baseline control feasible (e.g., if an obstacle is known to be coming up ahead, one may slow down in anticipation). This additional control input is a predictive control, and its presence can provide significant improvement in performance.

Some specific predictive control design methods are known as Smith Predictor Based or Internal Model Based Control Designs (which allow the control to anticipate the output and require further system model information), or Model Predictive Control (which is an optimization based control design and is more aligned with modern control theory).


The Routh-Hurwitz Stability Criterion

The root locus and Nyquist methods provide ways to analyze stability graphically. The Routh-Hurwitz criterion provides an algebraic test: given a polynomial (such as the characteristic polynomial of a closed-loop system), it determines whether all roots have negative real parts -- without computing the roots explicitly.

This is particularly useful for checking BIBO stability of a closed-loop transfer function \frac{P(s)C(s)}{1 + P(s)C(s)}, where the roots of 1 + P(s)C(s) = 0 determine the pole locations.

Necessary Condition

Theorem (Necessary Condition for Stability)

For a polynomial p(s) = a_n s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0 to have all roots with negative real parts, it is necessary that all coefficients a_i have the same sign and that no coefficient is zero.

Remark.

Intuition: If any coefficient is zero or has a different sign from the others, you can immediately conclude the system is unstable (or marginally stable) without further analysis. This is a quick first check before constructing the full Routh table.

Constructing the Routh Table

Given a polynomial p(s) = a_n s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0, the Routh table is constructed as follows. The first two rows are filled directly from the polynomial coefficients:

  • Row s^n: a_n, \; a_{n-2}, \; a_{n-4}, \; \ldots
  • Row s^{n-1}: a_{n-1}, \; a_{n-3}, \; a_{n-5}, \; \ldots
  • Row s^{n-2}: b_1, \; b_2, \; b_3, \; \ldots
  • Row s^{n-3}: c_1, \; c_2, \; c_3, \; \ldots
  • Continue until row s^0.

Each subsequent entry is computed by a 2 \times 2 determinant divided by the leading element of the row above:

b_1 = \frac{a_{n-1} \cdot a_{n-2} - a_n \cdot a_{n-3}}{a_{n-1}}, \qquad b_2 = \frac{a_{n-1} \cdot a_{n-4} - a_n \cdot a_{n-5}}{a_{n-1}}

and similarly for subsequent rows. The table has n + 1 rows total.
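The construction above can be sketched in code (first column only, since that is what the criterion inspects; the sketch assumes no zero arises in the first column, which would require the usual special-case handling):

```python
# First column of the Routh table for coefficients given from highest to
# lowest degree. Assumes no zero appears in the first column.
def routh_first_column(coeffs):
    n = len(coeffs) - 1
    row1 = coeffs[0::2]               # s^n row
    row2 = coeffs[1::2]               # s^(n-1) row
    if len(row2) < len(row1):
        row2 = row2 + [0.0]           # pad to equal length
    first_col = [row1[0], row2[0]]
    for _ in range(n - 1):
        new = []
        for j in range(len(row1) - 1):
            # 2x2 determinant divided by the leading element above
            new.append((row2[0] * row1[j + 1] - row1[0] * row2[j + 1]) / row2[0])
        new.append(0.0)
        row1, row2 = row2, new
        first_col.append(row2[0])
    return first_col

# Example from the worked example below: p(s) = s^3 + 6 s^2 + 11 s + 6
print(routh_first_column([1, 6, 11, 6]))  # [1, 6, 10.0, 6.0]
```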

Theorem (Routh-Hurwitz Criterion)

The number of roots of p(s) with positive real parts equals the number of sign changes in the first column of the Routh table. In particular, the polynomial has all roots in the open left half-plane if and only if all entries in the first column are positive (assuming a_n > 0).

Remark.

Intuition: Each sign change in the first column of the Routh table corresponds to a root crossing from the left half-plane to the right half-plane. The table effectively counts how many roots are "on the wrong side" without ever computing the roots themselves. This is especially valuable when the characteristic polynomial depends on a design parameter (like a gain K), because you can determine the range of K for which all first-column entries remain positive.

Worked Example

Example (Routh-Hurwitz for a Third-Order System)

Consider a closed-loop system with characteristic polynomial

p(s) = s^3 + 6s^2 + 11s + 6.

Step 1: Check the necessary condition. All coefficients are positive, so the necessary condition is satisfied.

Step 2: Construct the Routh table.

  • Row s^3: 1, 11
  • Row s^2: 6, 6
  • Row s^1: b_1, 0
  • Row s^0: c_1

Computing b_1:

b_1 = \frac{6 \cdot 11 - 1 \cdot 6}{6} = \frac{66 - 6}{6} = 10

Computing c_1:

c_1 = \frac{10 \cdot 6 - 6 \cdot 0}{10} = 6

The first column is: 1, 6, 10, 6 -- all positive, so there are zero sign changes.

Conclusion: All roots have negative real parts. The system is stable. (Indeed, the roots are s = -1, -2, -3.)

Example (Routh-Hurwitz with a Gain Parameter)

Consider a feedback system with loop transfer function L(s) = \frac{K}{s(s+1)(s+2)}. The closed-loop characteristic polynomial is

p(s) = s^3 + 3s^2 + 2s + K.

The Routh table:

  • Row s^3: 1, 2
  • Row s^2: 3, K
  • Row s^1: \frac{6-K}{3}, 0
  • Row s^0: K

For stability, all first-column entries must be positive:

  • 1>01 > 0 (always satisfied)
  • 3>03 > 0 (always satisfied)
  • 6K3>0    K<6\frac{6 - K}{3} > 0 \implies K < 6
  • K>0K > 0

Conclusion: The system is stable for 0 < K < 6. At K = 6 the system is marginally stable (the s^1 row vanishes, indicating purely imaginary roots). For K > 6, there are two right half-plane poles.
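The boundary at K = 6 can be confirmed numerically by computing the roots of the characteristic polynomial on either side of it (a sketch using numpy):

```python
# Roots of s^3 + 3 s^2 + 2 s + K across the stability boundary K = 6.
import numpy as np

def max_real_part(K):
    return max(r.real for r in np.roots([1, 3, 2, K]))

assert max_real_part(1.0) < 0          # inside the stable range
assert max_real_part(5.9) < 0          # still stable just below K = 6
assert max_real_part(6.1) > 0          # unstable just above K = 6
assert abs(max_real_part(6.0)) < 1e-6  # marginal: p(s) = (s+3)(s^2+2)
```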

Remark.

Intuition: The Routh-Hurwitz criterion complements the Nyquist and root locus methods. While root locus traces poles graphically and Nyquist counts encirclements, Routh-Hurwitz gives a purely algebraic condition. It is especially useful for finding the exact stability boundary in terms of a design parameter, as in the example above.


Exercises

Problem 8.6.1 (Inverted Pendulum Control Design)

Consider the inverted pendulum displayed in Figure 8.3, where a torque u is applied to keep the pendulum around \theta = 0. The dynamics can be derived as \frac{d^2\theta(t)}{dt^2} = \frac{g}{l}\sin(\theta(t)) + \frac{u(t)\cos(\theta(t))}{ml^2}. Assume the coefficients (m, l) are selected so that this simplifies to \frac{d^2\theta(t)}{dt^2} = \sin(\theta(t)) + u(t)\cos(\theta(t)). Linearizing this system at \theta = 0, \frac{d\theta}{dt} = 0 (with the approximations \sin(\theta) \approx \theta, \cos(\theta) \approx 1), we arrive at the linear system:

\frac{d^2\theta(t)}{dt^2} = \theta(t) + u(t).

Now, consider the linearized pendulum model as our plant P, mapping u to \theta, to be used in Figure 8.4. Note that the transfer function of this model is P(s) = \frac{1}{s^2 - 1}. Here y = \theta.

Suppose that the transfer function of the controller C is given by C(s) = K_1 s + K_2, with K_1, K_2 \in \mathbb{R}_+. Observe that this means that u(t) = K_1 \frac{d}{dt}e(t) + K_2 e(t), where e(t) = r(t) - y(t).

a) Find the (closed-loop) transfer function mapping the signal r to y in the frequency/Laplace domain.

b) With C(s) = K_1 s + K_2, set K_2 = 0. For what values of K_1 \ge 0 is the closed-loop system, mapping the signal r to y, BIBO stable? You can use any method.

c) With C(s) = K_1 s + K_2, set K_1 = 0. For what values of K_2 \ge 0 is the closed-loop system, mapping the signal r to y, BIBO stable? You can use any method.

d) Let C(s) = Ks + K. For what values of K \ge 0 is the closed-loop system, mapping the signal r to y, BIBO stable? You can use any method.

e) Interpret your analysis in the context of the inverted pendulum system.

Problem 8.6.2 (Root Locus Study)

Let C(s) = K, P(s) = \frac{s+1}{s(\frac{s}{10} - 1)}.

Study stability properties using the root locus method as K is varied from 0 to \infty.

Problem 8.6.3 (State Space P-D Controller Analysis)

Consider Figure 8.4. The plant P(s) is a linearized inverted pendulum; suppose for simplicity that its transfer function is P(s) = \frac{1}{s^2 - 1}. Suppose that the controller applied is given by C(s) = k(s + 2) (and thus it is a P-D controller) for some parameter k \in \mathbb{R}_+.

a) Write the plant P in state space form, where the input is u and the output is y.

b) By writing u as a function of r(t) - y(t), express the overall (closed-loop) system as a linear map from r to y.

c) Find conditions on k for the system to be BIBO stable.

d) Verify that this result is consistent with a Nyquist or root-locus method analysis.