
The Laplace and Z-Transformations

Two-sided and one-sided Laplace and Z-transforms. Properties: linearity, convolution, shift, differentiation, scaling. Initial and final value theorems. Inverse transforms, systems analysis, causality, stability, and minimum-phase systems.

Why Laplace and Z-Transforms?

The Fourier transform is a powerful tool, but it has a fundamental limitation: it requires the signal to be square-integrable (finite energy). Many signals we encounter in practice -- exponentially growing signals in unstable systems, ramp signals, even simple step functions -- fail this requirement. Their Fourier transforms simply do not exist.

The Laplace transform fixes this by a clever trick: before taking the Fourier transform, multiply the signal by a decaying exponential $e^{-\sigma t}$. If $\sigma$ is large enough, the product $x(t) e^{-\sigma t}$ decays fast enough for the Fourier integral to converge. In other words:

X(s) = \int_{-\infty}^{\infty} x(t) \, e^{-\sigma t} \, e^{-j\omega t} \, dt = \text{Fourier transform of } x(t) e^{-\sigma t}

So the Laplace transform is really just the Fourier transform with a damping factor built in. The complex variable $s = \sigma + j\omega$ packages both pieces together:

  • $\sigma = \text{Re}(s)$ -- the damping/convergence factor, controlling how aggressively we attenuate the signal before analyzing it
  • $\omega = \text{Im}(s)$ -- the frequency, exactly as in the Fourier transform

The Z-transform plays the same role for discrete-time signals. The complex variable $z = r e^{j\omega}$ separates into:

  • $r = |z|$ -- the damping factor (analogous to $e^{\sigma}$)
  • $\omega = \angle z$ -- the frequency, exactly as in the DCFT

This is why the Fourier transform is a special case of both:

  • Laplace: set $\sigma = 0$ (no damping), so $s = j\omega$, and you recover the CCFT. Geometrically, this means evaluating $X(s)$ along the imaginary axis.
  • Z-transform: set $r = 1$ (no damping), so $z = e^{j\omega}$, and you recover the DCFT. Geometrically, this means evaluating $X(z)$ on the unit circle.

The extra freedom in choosing $\sigma$ or $r$ is what lets these transforms handle a much broader class of signals -- but it also means we must keep track of which values of $s$ or $z$ make the integral/sum converge. This is the region of convergence (ROC), a concept that has no analogue in Fourier analysis and which will be central to everything that follows.
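The damping trick is easy to check numerically. The sketch below (with arbitrarily chosen values $\sigma = 1$, $\omega = 2$) integrates the damped unit step and compares against the known transform $1/s$ of the step, valid for $\text{Re}(s) > 0$:

```python
import numpy as np

# Numerical sketch of the damping trick: the Laplace transform of the unit
# step u(t), evaluated at s = sigma + j*omega with sigma > 0, equals 1/s,
# even though the undamped Fourier integral of u(t) does not converge.
sigma, omega = 1.0, 2.0
s = sigma + 1j * omega

dt, N = 1e-4, 400000                     # integrate t in [0, 40], midpoint rule
t = (np.arange(N) + 0.5) * dt
X_numeric = np.sum(np.exp(-s * t)) * dt  # integral of u(t) e^{-sigma t} e^{-j omega t}

assert abs(X_numeric - 1.0 / s) < 1e-6
```

The tail beyond $t = 40$ contributes about $e^{-40}$, which is negligible; without the $e^{-\sigma t}$ factor the integrand would oscillate forever and the integral would not settle.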


Introduction

The powerful tools of Fourier transforms do not directly generalize to signals which are not square integrable. Laplace and Z-transforms allow for the generalization of the machinery developed for Fourier transformable signals to a more general class of signals. For example, applications in control systems often require the design of control policies/laws which may turn an open-loop unstable system into a stable system; for studying such unstable signals it is essential to expand the class of signals which can be studied using frequency domain methods. Furthermore, one-sided Laplace and Z-transforms will be seen to be useful in studying systems which have non-zero initial conditions.

The Laplace transform generalizes the CCFT and the Z-transform generalizes the DCFT: if a signal $x$ is not in $L_1(\mathbb{R}; \mathbb{R})$, then $y(t) = x(t)e^{-rt}$ may be in $L_1(\mathbb{R}; \mathbb{R})$ for some $r > 0$. Likewise, if a signal $x$ is not in $\ell_1(\mathbb{Z}; \mathbb{R})$, then $y(n) = x(n)r^{-n}$ may be in $\ell_1(\mathbb{Z}; \mathbb{R})$ for some $r > 1$. The Fourier transforms of these scaled signals correspond to the Laplace and Z-transforms of $x$.

A signal is said to be of at-most-exponential growth if there exist real numbers $M, \alpha$ with $|x(t)| \le M 1_{\{t \ge 0\}} e^{\alpha t}$. For such signals, the Laplace transform will be defined for certain parameter values. A similar discussion applies for the Z-transform: if $|x(n)| \le M 1_{\{n \ge 0\}} r^n$ for some real $M, r$, the Z-transform will be defined for a range of values.

Remark.

Understanding the Region of Convergence (ROC) in plain English:

  • The ROC tells you for which values of $s$ (or $z$) the integral/sum actually converges to a finite number. Outside the ROC, the transform is infinite and meaningless.
  • Different ROCs for the same $X(s)$ correspond to different time-domain signals. For example, $\frac{1}{s-a}$ is the Laplace transform of $e^{at} 1_{\{t \ge 0\}}$ when $\text{Re}(s) > a$, but it is also the transform of $-e^{at} 1_{\{t < 0\}}$ when $\text{Re}(s) < a$. The formula is the same; the ROC tells you which signal you have.
  • For causal signals (zero for $t < 0$ or $n < 0$): the ROC is a right half-plane $\text{Re}(s) > \sigma_0$ for Laplace, or the exterior of a circle $|z| > r_0$ for the Z-transform.
  • For BIBO stability: the ROC must include the imaginary axis $s = j\omega$ (Laplace) or the unit circle $|z| = 1$ (Z-transform). This is because stability requires the Fourier transform to exist, and Fourier is the special case on those curves.
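The convergence/divergence boundary can be seen numerically. A sketch (with the assumed example $x(t) = e^{t}1_{\{t \ge 0\}}$, so the ROC is $\text{Re}(s) > 1$): the truncated Laplace integral settles to a finite value inside the ROC and blows up with the truncation length outside it.

```python
import numpy as np

# ROC illustration: for x(t) = e^{a t} u(t) with a = 1, the Laplace integral
# (real s = sigma for simplicity) converges only when sigma > a.
a = 1.0
dt, N = 1e-3, 60000                      # truncate the integral at T = 60
t = (np.arange(N) + 0.5) * dt            # midpoint rule
vals = {sigma: np.sum(np.exp((a - sigma) * t)) * dt for sigma in (2.0, 0.5)}

# sigma = 2 is inside the ROC: the integral matches 1/(sigma - a) = 1.
assert abs(vals[2.0] - 1.0) < 1e-6
# sigma = 0.5 is outside the ROC: the truncated integral is already enormous.
assert vals[0.5] > 1e10
```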

The Two-sided Laplace Transform

The two-sided Laplace transform of a continuous-time signal $x$ is defined through the pointwise relation:

X = \mathcal{L}(x)

with

X(s) = \int_{t \in \mathbb{R}} x(t) e^{-st}\, dt, \quad s \in \mathbb{C}

The set $\left\{s \in \mathbb{C} : \int_{t \in \mathbb{R}} |x(t)e^{-st}|\, dt < \infty \right\}$ is called the region of convergence (ROC).

Remark.

What each part does:

X(s) = \int_{-\infty}^{\infty} x(t) \, e^{-st} \, dt, \quad s = \sigma + j\omega

  • $x(t)$ -- the original time-domain signal
  • $e^{-st} = e^{-\sigma t} \cdot e^{-j\omega t}$ -- two factors combined: $e^{-\sigma t}$ is a real exponential that "damps" the signal (making divergent signals convergent), and $e^{-j\omega t}$ is the Fourier "frequency detector"
  • $\int dt$ -- integrate over all time to accumulate the contribution at each combination of damping rate and frequency
  • $X(s)$ -- the result is a function of the complex variable $s$. Each value of $s$ probes a different combination of damping rate $\sigma$ and frequency $\omega$
  • The Fourier transform is the special case $\sigma = 0$, i.e., evaluating $X(s)$ along the imaginary axis $s = j\omega$

The Two-sided Z-Transform

The two-sided Z-transform of a discrete-time signal $x$ is defined through the pointwise relation:

X = \mathcal{Z}(x)

with

X(z) = \sum_{n \in \mathbb{Z}} x(n) z^{-n}, \quad z \in \mathbb{C}

The set $\{z \in \mathbb{C} : \sum_{n \in \mathbb{Z}} |x(n) z^{-n}| < \infty\}$ is called the region of convergence (ROC).

Remark.

What each part does:

X(z) = \sum_{n \in \mathbb{Z}} x(n) \, z^{-n}, \quad z = r e^{j\omega}

  • $x(n)$ -- the original discrete-time signal
  • $z^{-n} = r^{-n} \cdot e^{-j\omega n}$ -- two factors combined: $r^{-n}$ damps or grows the signal (controlling convergence, just as $e^{-\sigma t}$ does for Laplace), and $e^{-j\omega n}$ is the Fourier "frequency detector"
  • $\sum_n$ -- sum over all time indices to accumulate each sample's contribution
  • $X(z)$ -- the result is a function of the complex variable $z$. The magnitude $|z| = r$ controls the damping, and the angle $\angle z = \omega$ selects the frequency
  • The DCFT (Fourier) is the special case $|z| = 1$, i.e., evaluating $X(z)$ on the unit circle $z = e^{j\omega}$
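A quick numerical sketch of the definition (with assumed example values $a = 0.8$ and a point $z$ chosen inside the ROC): the partial sums of $\sum_n a^n z^{-n}$ converge to the closed form $1/(1 - a z^{-1})$ whenever $|z| > |a|$.

```python
import numpy as np

# Z-transform of x(n) = a^n u(n): the sum converges to 1/(1 - a/z)
# at any z with |z| > |a|, i.e., inside the ROC.
a = 0.8
z = 1.2 * np.exp(0.7j)                   # |z| = 1.2 > |a| = 0.8
n = np.arange(200)
X_partial = np.sum(a**n * z**(-n))       # truncated transform sum
X_exact = 1.0 / (1.0 - a / z)

assert abs(X_partial - X_exact) < 1e-8   # tail ~ (a/|z|)^200, negligible
```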

The One-sided Laplace Transform

The one-sided Laplace transform of a continuous-time signal is defined through the pointwise relation:

X_+ = \mathcal{L}_+(x)

with

X_+(s) = \int_{t \in \mathbb{R}_+} x(t) e^{-st}\, dt

The set $\{s \in \mathbb{C} : \int_{t \in \mathbb{R}_+} |x(t)e^{-st}|\, dt < \infty\}$ is called the region of convergence (ROC).

Remark.

One-sided vs. two-sided: The one-sided Laplace transform only integrates over $t \ge 0$. This is equivalent to applying the two-sided transform to $x(t) \cdot 1_{\{t \ge 0\}}$. The key advantage is that it naturally handles initial conditions -- when you apply the differentiation property, the boundary term $x(0)$ appears explicitly, which is exactly what you need for solving differential equations with given initial values.

The One-sided Z-Transform

The one-sided Z-transform of a discrete-time signal is defined through the pointwise relation:

X_+ = \mathcal{Z}_+(x)

with

X_+(z) = \sum_{n \in \mathbb{Z}_+} x(n) z^{-n}

The set $\{z \in \mathbb{C} : \sum_{n \in \mathbb{Z}_+} |x(n) z^{-n}| < \infty\}$ is called the region of convergence (ROC).

Remark.

One-sided vs. two-sided: The one-sided Z-transform only sums over $n \ge 0$. Just as with the one-sided Laplace transform, the key advantage is in handling initial conditions for difference equations. When you apply the shift property, terms like $x(0), x(-1), \ldots$ appear explicitly, giving you the mechanism to incorporate initial values into the frequency-domain solution.


Properties

Linearity

Provided that $z$ is in the ROC for both of the signals $x$ and $y$:

(\mathcal{Z}(x + y))(z) = (\mathcal{Z}(x))(z) + (\mathcal{Z}(y))(z)

This property applies to the other transforms as well.

Convolution

Provided that $z$ is in the ROC for both of the signals $x$ and $y$:

(\mathcal{Z}(x * y))(z) = (\mathcal{Z}(x))(z)(\mathcal{Z}(y))(z)

This property applies to the other transforms as well.

Shift Property

Z-transform (two-sided). Let $y(n) = x(n+m)$; then

(\mathcal{Z}(y))(z) = (\mathcal{Z}(x))(z) z^m

One-sided Z-transform. For the one-sided transform, let $m = 1$. Then,

(\mathcal{Z}_+(y))(z) = \Big((\mathcal{Z}_+(x))(z) - x(0)\Big) z

The general case for $m \in \mathbb{Z}_+$ can be computed accordingly. For example, let $y(n) = x(n-1)$; then

(\mathcal{Z}_+(y))(z) = \Big(\mathcal{Z}_+\{x\}(z)\Big) z^{-1} + x(-1).

Laplace transform (two-sided). Likewise, let $y(t) = x(t - \theta)$. Then,

(\mathcal{L}(y))(s) = (\mathcal{L}(x))(s) e^{-s\theta}

One-sided Laplace transform. For the one-sided transform, let $y(t) = x(t - \theta)1_{\{t \ge \theta\}}$ with $\theta \ge 0$. Then,

(\mathcal{L}_+(y))(s) = (\mathcal{L}_+(x))(s) e^{-s\theta}
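The one-sided shift identities above can be checked numerically. A sketch, using the assumed example $x(n) = a^n 1_{\{n \ge 0\}}$ and an arbitrarily chosen $z$ in the ROC:

```python
import numpy as np

# One-sided shift property: with y(n) = x(n+1),
#   Z+{y}(z) = z * (Z+{x}(z) - x(0)).
a, z = 0.5, 1.5 + 0.5j                   # |z| > a, inside the ROC
n = np.arange(300)
X = np.sum(a**n * z**(-n))               # Z+{x}(z), truncated sum
Y = np.sum(a**(n + 1) * z**(-n))         # Z+{y}(z) with y(n) = x(n+1) = a^(n+1)

assert abs(Y - z * (X - 1.0)) < 1e-10    # x(0) = a^0 = 1
```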

Converse Shift Property

Z-transform. Let $y(n) = x(n) a^n 1_{\{n \ge 0\}}$; then

(\mathcal{Z}(y))(z) = (\mathcal{Z}(x))\Big(\frac{z}{a}\Big),

provided that $\frac{z}{a}$ is in the ROC for $x$.

Laplace transform. Let $y(t) = x(t) e^{at} 1_{\{t \ge 0\}}$; then

(\mathcal{L}(y))(s) = (\mathcal{L}(x))(s - a),

provided that $s - a$ is in the ROC for $x$.

Differentiation Property (in time domain)

Let $D(x)$ denote $\frac{dx}{dt}$ (assumed to exist), and suppose that $x(t) = 0$ for $t \le b$ for some $b \in \mathbb{R}$ and that $|x(t)| \le Me^{at}$. Then,

\mathcal{L}(Dx)(s) = s\mathcal{L}(x)(s)

\mathcal{L}_+(Dx)(s) = s\mathcal{L}_+(x)(s) - x(0),

for $\text{Re}\{s\} > a$.

Converse Differentiation

Z-transform. Suppose that $\limsup_{n \to \infty} |x(n)|^{1/n} \le R$ for some $R \in \mathbb{R}$. This implies that $\{z : |z| > R\}$ is in the ROC. To see this, note that for every $\delta > 0$ there exists $N_\delta$ such that for $n > N_\delta$ we have $|x(n)|^{1/n} < R + \delta$. Then, take $\delta$ to be less than $|z| - R$ for any given $|z| > R$.

Now, let $y(n) = -nx(n)$. Then,

\mathcal{Z}_+(y)(z) = z \frac{d}{dz}(\mathcal{Z}_+(x))(z),

for $|z| > R$.

Laplace transform. The derivative rule for the Laplace transform follows from similar reasoning. Let $|x(t)| \le Me^{at}$ for some $M, a \in \mathbb{R}$, and let $y(t) = -tx(t)$. Then, for $s$ with $\text{Re}\{s\} > a$, we have

\mathcal{L}_+(y)(s) = \frac{d}{ds}(\mathcal{L}_+(x))(s)

Remark.

Note that for $\mathcal{F}_{CC}$, a more subtle argument is needed for pushing the derivative inside the integration, via applying Theorem A.2.1 to the imaginary and real parts of an exponential separately. For the Z and Laplace transforms above, we are using the liberty of the region of convergence being outside the critical curves/lines (as in $|z| > R$).

Scaling

If $y(t) = x(\alpha t)$, then

\mathcal{L}(y)(s) = \frac{1}{|\alpha|} \mathcal{L}(x)\Big(\frac{s}{\alpha}\Big),

provided that $\frac{s}{\alpha}$ is in the ROC for $x$.

Initial Value Theorem

Discrete-time. Let $|x(n)| \le Ma^n$ for all $n \in \mathbb{Z}$ and for some $M, a \in \mathbb{R}$. Then,

\lim_{z \to \infty} X_+(z) = x(0),

for $|z| > a$.

Continuous-time. Let $|x(t)| \le Me^{at}$ and $|\frac{d}{dt}x(t)| \le Me^{at}$ for all $t \in \mathbb{R}$ and for some $M, a \in \mathbb{R}$. Then,

\lim_{s \to \infty, \text{Re}\{s\} > a} sX_+(s) = x(0),

for $\text{Re}\{s\} > a$. The proof of this result follows from the differentiation property (in time domain) and an application of the dominated convergence theorem (see Theorem A.1.5) if $\text{Re}(s) \to \infty$, or the Riemann-Lebesgue Lemma (see Theorem) if $\text{Im}(s) \to \infty$ but the real part does not converge to infinity.

Final Value Theorem

Continuous-time. If $\lim_{t \to \infty} x(t) =: M < \infty$, then

\lim_{t \to \infty} x(t) = \lim_{s \to 0, s \in \mathbb{R}} sX_+(s).

We note that we can relax the above to:

\lim_{t \to \infty} x(t) = \lim_{s \to 0,\ \text{Re}\{s\} > 0,\ \limsup \frac{|s|}{\text{Re}\{s\}} < \infty} sX_+(s).

To be able to apply the Final Value Theorem, it is important to ensure that the finiteness condition, $\lim_{t \to \infty} x(t) =: M < \infty$, holds. Note that if all poles of $sX_+(s)$ are in the left half-plane, this ensures that $\lim_{t \to \infty} x(t)$ exists and is finite.

Discrete-time. For a discrete-time signal, if $\lim_{n \to \infty} x(n) < \infty$, then

\lim_{n \to \infty} x(n) = \lim_{z \to 1, |z| > 1} (1 - z^{-1}) X_+(z)

The proof follows from the same arguments used in the proof above for the Laplace setup. Once again, note that if all poles of $(1 - z^{-1})X_+(z)$ are strictly inside the unit circle, then $\lim_{n \to \infty} x(n)$ exists and is finite.
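Both value theorems can be checked symbolically. A sketch on the assumed example $x(t) = 1 - e^{-t}$, which has $x(0) = 0$ and steady-state value $1$ (all poles of $sX_+(s)$ are in the left half-plane, so the final value theorem applies):

```python
import sympy as sp

# Initial and final value theorems on x(t) = 1 - e^{-t}.
s, t = sp.symbols('s t', positive=True)
X = sp.laplace_transform(1 - sp.exp(-t), t, s, noconds=True)  # = 1/s - 1/(s+1)

assert sp.limit(s * X, s, sp.oo) == 0    # initial value theorem: x(0) = 0
assert sp.limit(s * X, s, 0) == 1        # final value theorem: x(t) -> 1
```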


Key Properties Reference

This section collects the most important transform properties side by side for quick reference. For each property, we give the Laplace and Z-transform versions, a plain-English description, and when you would typically use it.


Linearity:

  • Laplace: $a \, x_1(t) + b \, x_2(t) \;\leftrightarrow\; a \, X_1(s) + b \, X_2(s)$
  • Z-transform: $a \, x_1(n) + b \, x_2(n) \;\leftrightarrow\; a \, X_1(z) + b \, X_2(z)$
  • Meaning: The transform of a sum is the sum of the transforms; scaling passes through unchanged.
  • Use when: You have a signal that is a weighted combination of simpler signals whose transforms you already know.

Convolution:

  • Laplace: $x_1(t) * x_2(t) \;\leftrightarrow\; X_1(s) \cdot X_2(s)$
  • Z-transform: $x_1(n) * x_2(n) \;\leftrightarrow\; X_1(z) \cdot X_2(z)$
  • Meaning: Convolution in time becomes multiplication in the transform domain. This is the single most powerful property -- it turns the hardest operation in time (convolution) into the easiest operation in frequency (multiplication).
  • Use when: Computing the output of an LTI system (output = input convolved with impulse response).

Time Shift:

  • Laplace (two-sided): $x(t - t_0) \;\leftrightarrow\; e^{-s t_0} X(s)$
  • Z-transform (two-sided): $x(n - n_0) \;\leftrightarrow\; z^{-n_0} X(z)$
  • Meaning: A time delay in the signal just multiplies the transform by a complex exponential. Delays are trivial to handle in the transform domain.
  • Use when: You see a time-delayed version of a known signal, or when modeling transport delays in a system.

Converse Shift (Frequency/Exponential Shift):

  • Laplace: $e^{at} x(t) \;\leftrightarrow\; X(s - a)$
  • Z-transform: $a^n x(n) \;\leftrightarrow\; X(z/a)$
  • Meaning: Multiplying a signal by an exponential in time shifts the transform in the complex plane.
  • Use when: You have a known transform but the signal is modulated by an exponential (e.g., a damped sinusoid).

Differentiation in Time:

  • Laplace (two-sided): $\frac{d}{dt} x(t) \;\leftrightarrow\; s \, X(s)$
  • Laplace (one-sided): $\frac{d}{dt} x(t) \;\leftrightarrow\; s \, X_+(s) - x(0)$
  • Meaning: Differentiation in time becomes multiplication by $s$. The one-sided version also picks up the initial condition $x(0)$.
  • Use when: Solving differential equations -- each derivative becomes a power of $s$, turning the ODE into an algebraic equation.

Converse Differentiation (Differentiation in $s$ or $z$):

  • Laplace: $-t \, x(t) \;\leftrightarrow\; \frac{d}{ds} X(s)$
  • Z-transform: $-n \, x(n) \;\leftrightarrow\; z \frac{d}{dz} X(z)$
  • Meaning: Multiplying a signal by $t$ (or $n$) in time corresponds to differentiating its transform.
  • Use when: You need the transform of $t \cdot x(t)$ or $n \cdot x(n)$, which arises in computing moments or when partial fractions have repeated roots.

Scaling (Laplace only):

  • Laplace: $x(\alpha t) \;\leftrightarrow\; \frac{1}{|\alpha|} X\!\left(\frac{s}{\alpha}\right)$
  • Meaning: Compressing a signal in time stretches its transform in frequency (and vice versa). This is the continuous-time time-frequency duality.
  • Use when: You know the transform of a signal and need the transform of a time-scaled version.

Initial Value Theorem:

  • Laplace: $x(0) = \lim_{s \to \infty} s \, X_+(s)$
  • Z-transform: $x(0) = \lim_{z \to \infty} X_+(z)$
  • Meaning: You can read off the initial value of a signal directly from its transform without inverting.
  • Use when: You want a quick sanity check on a computed transform, or you need $x(0)$ but only have $X(s)$ or $X(z)$.

Final Value Theorem:

  • Laplace: $\lim_{t \to \infty} x(t) = \lim_{s \to 0} s \, X_+(s)$ (provided the limit exists)
  • Z-transform: $\lim_{n \to \infty} x(n) = \lim_{z \to 1} (1 - z^{-1}) X_+(z)$ (provided the limit exists)
  • Meaning: You can find the steady-state value of a signal directly from its transform. Caution: this only works if $x(t)$ (or $x(n)$) actually converges to a finite limit. Check that all poles of $s X_+(s)$ are in the left half-plane (Laplace) or that all poles of $(1 - z^{-1}) X_+(z)$ are strictly inside the unit circle (Z-transform).
  • Use when: Determining the steady-state output of a stable system without computing the full inverse transform.

Computing the Inverse Transforms

There are usually three methods that can be applied, depending on the particular problem.

Method 1: Partial Fraction Expansion. The first method is through the partial fraction expansion and the properties of the transforms. Typically, for linear systems, this is the most direct approach. All that is required is to know that

\mathcal{Z}(a^n 1_{\{n \ge 0\}})(z) = \frac{1}{1 - az^{-1}}

with $z \in \{z : |z| > |a|\}$ and

\mathcal{L}(e^{at} 1_{\{t \ge 0\}})(s) = \frac{1}{s - a}

with $s \in \{s : \text{Re}\{s\} > a\}$, together with the properties we discussed above. One needs to pay particular attention to the regions of convergence: for example, both of the signals $x_1(t) = e^{at}1_{\{t \ge 0\}}$ and $x_2(t) = -e^{at}1_{\{t < 0\}}$ have $\frac{1}{s-a}$ as their Laplace transform, but the first one is defined for $\text{Re}\{s\} > a$ and the second one for $\text{Re}\{s\} < a$.
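Method 1 can be carried out symbolically. A sketch on an assumed example (not from the text), $X(s) = \frac{1}{(s+1)(s+2)}$ with ROC $\text{Re}(s) > -1$:

```python
import sympy as sp

# Partial fractions + the table entry L{e^{at} 1_{t>=0}} = 1/(s-a).
s, t = sp.symbols('s t')
X = 1 / ((s + 1) * (s + 2))

# Partial fraction expansion: each term matches 1/(s-a) with a = -1, -2,
# so x(t) = (e^{-t} - e^{-2t}) 1_{t>=0}.
assert sp.apart(X, s) == 1 / (s + 1) - 1 / (s + 2)

# Cross-check with sympy's inverse transform, evaluated at t = 1.
x = sp.inverse_laplace_transform(X, s, t)
err = abs((x.subs(t, 1) - (sp.exp(-1) - sp.exp(-2))).evalf())
assert err < 1e-12
```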

Example (Inverse Z-Transform via Partial Fractions).

Compute the inverse transform of $X(z) = \frac{1}{z^2 - 1}$ where the region of convergence is defined to be $\{z : |z| > 1\}$. You may want to first write $X(z) = z^{-2} \frac{1}{1 - z^{-2}}$.

Method 2: Power/Laurent Series Expansion. A second method is to try to expand the transforms using power series (Laurent series) and match the components in the series with the signal itself.

Method 3: Contour Integration. The most general method is to compute a contour integral along the unit circle or the imaginary line of a scaled signal. In this case, for Z-transforms:

x(n) = \frac{1}{i2\pi} \int_c X(z) z^{n-1}\, dz

where the contour integral is taken along a circle in the region of convergence in a counter-clockwise fashion. For the Laplace transform:

x(t) = \frac{1}{i2\pi} \int_c X(s) e^{st}\, ds,

where the integral is taken along the line $\text{Re}\{s\} = R$ which is in the region of convergence. Cauchy's Integral Formula (see Theorem B.0.2) may be employed to obtain solutions.

However, for applications considered in this course, the partial fraction expansion is the most direct approach.


Systems Analysis using the Laplace and the Z Transforms

For a given convolution system, where $u$ is an input, $h$ is the impulse response, and $y$ is the output, the property

(\mathcal{L}(y))(s) = (\mathcal{L}(u))(s)(\mathcal{L}(h))(s)

leads to the fact that

H(s) = \frac{Y(s)}{U(s)}

Likewise,

(\mathcal{Z}(y))(z) = (\mathcal{Z}(u))(z)(\mathcal{Z}(h))(z)

leads to the fact that

H(z) = \frac{Y(z)}{U(z)}

As with the complex harmonics, for a continuous-time convolution system with input $u(t) = e^{st}$, the output equals $H(s)e^{st}$ provided that $H$ is well-defined. Likewise, for a discrete-time system with input $u(n) = z^n$, the output equals $H(z)z^n$ provided that $H$ is well-defined. In view of this, $H$ above is called the transfer function of the convolution system with impulse response $h$ (and thus generalizes the frequency response introduced earlier).

From these relationships, one can often compute the transfer functions directly.

Example (Moving Average System).

Let a convolution system (which defines a moving-average system) be defined with the relation

y(t) = \frac{1}{T_1 + T_2} \int_{t - T_1}^{t + T_2} u(\tau)\, d\tau

Show that

H(s) = \frac{1}{T_1 + T_2} \frac{1}{s}(e^{sT_2} - e^{-sT_1}),

by considering $u(t) = e^{st}$ as the input.
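The stated transfer function can be checked numerically: with input $u(t) = e^{st}$, the output at $t = 0$ should equal $H(s)$. A sketch, with arbitrarily chosen $T_1, T_2, s$ and midpoint-rule integration:

```python
import numpy as np

# Moving-average system: y(0) = (1/(T1+T2)) * integral of e^{s*tau}
# over [-T1, T2], which should equal H(s).
T1, T2 = 1.0, 2.0
s = 0.3 + 0.5j

H = (np.exp(s * T2) - np.exp(-s * T1)) / (s * (T1 + T2))

dtau = 1e-5
tau = -T1 + (np.arange(int((T1 + T2) / dtau)) + 0.5) * dtau   # midpoints
y0 = np.sum(np.exp(s * tau)) * dtau / (T1 + T2)

assert abs(y0 - H) < 1e-8
```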

Besides being able to compute the impulse response and transfer function for such systems, we can obtain useful properties of a convolution system through the use of Laplace and Z-transforms.


Causality, Stability and Minimum-Phase Systems

Recall that a convolution system is causal if $h(n) = 0$ for $n < 0$. This implies that if $r \in \mathbb{R}_+$ is in the region of convergence, so is $R$ for any $R > r$. Therefore, the region of convergence must contain the entire area outside some circle if it is non-empty.

Recall that a convolution system is BIBO stable if and only if $\sum_n |h(n)| < \infty$, which implies that $|z| = 1$ must be in the region of convergence.

In particular, let $P(z)$ and $Q(z)$ be polynomials in $z$. Let the transfer function of a discrete-time LTI system be given by

H(z) = \frac{P(z)}{Q(z)}

This system is stable and causal if and only if the degree of $P$ is less than or equal to the degree of $Q$ and all poles of $H$ (that is, the zeros of $Q$ which do not cancel with the zeros of $P$) are inside the unit circle.

Therefore, a discrete-time causal convolution system of the form $H(z) = \frac{P(z)}{Q(z)}$ noted above is BIBO stable if and only if the region of convergence is of the form $\{z : |z| > R\}$ for some $R < 1$.
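The discrete-time test above reduces to two checks on the coefficient lists of $P$ and $Q$. A sketch, with an assumed example $H(z) = z/(z^2 - 0.9z + 0.2)$ whose poles are $0.4$ and $0.5$:

```python
import numpy as np

# Causal + BIBO-stable check for H(z) = P(z)/Q(z):
#   (i)  deg P <= deg Q, and
#   (ii) all poles (roots of Q not cancelled by P) strictly inside |z| = 1.
P = [1, 0]                 # P(z) = z
Q = [1, -0.9, 0.2]         # Q(z) = z^2 - 0.9 z + 0.2, roots 0.5 and 0.4
poles = np.roots(Q)

assert len(P) <= len(Q)                  # degree condition
assert np.all(np.abs(poles) < 1)        # poles inside the unit circle
```

(For simplicity the sketch skips pole-zero cancellation, which would require comparing the roots of $P$ and $Q$.)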

A similar discussion applies for continuous-time systems: such a system is causal if $h(t) = 0$ for $t < 0$. Therefore, whenever the region of convergence includes $a \in \mathbb{C}$, the set $\{s : \text{Re}\{s\} > \text{Re}\{a\}\}$ is also in the region of convergence.

Such a system is BIBO stable if and only if $\int |h(t)|\, dt < \infty$, which implies that the imaginary axis is in the region of convergence. Thus, if $P(s)$ and $Q(s)$ are polynomials in $s$ and the transfer function of a continuous-time LTI system is given by

H(s) = \frac{P(s)}{Q(s)},

this system is stable if the poles of $H$ are in the left half-plane.

Therefore, a continuous-time system is BIBO stable and causal if the region of convergence is of the form $\{s : \text{Re}\{s\} > R\}$ for some $R < 0$.

A practically important property of stable and causal convolution systems is whether the inverse of the transfer function is also realizable through a causal and stable system: such systems are called minimum-phase systems. Thus, a discrete-time system is minimum-phase if all of its zeros and poles are inside the unit circle. A similar discussion applies for continuous-time systems; all the poles and zeros in this case belong to the left half-plane.

Such systems are called minimum phase since every rational system transfer function can be written as a product of a minimum-phase transfer function and a transfer function which has unit magnitude for $s = i2\pi f, f \in \mathbb{R}$, but which has a larger (positive) phase change as the frequency is varied. To make this more concrete, consider

G_1(s) = \frac{s - 1}{s + 5}

This system is not minimum-phase. Now, write this system as:

G_1(s) = \left(\frac{s+1}{s+5}\right)\left(\frac{s-1}{s+1}\right)

Here, $G_2(s) := \frac{s+1}{s+5}$ is minimum-phase. $G_3(s) := \frac{s-1}{s+1}$ is such that its magnitude on the imaginary axis is always 1. However, this term contributes a positive phase. This can be observed by plotting the Bode diagram: for small $\omega$ values, the signal has a phase close to $\pi$, which gradually decays to zero.
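Both claims about $G_3$ (unit magnitude on the imaginary axis, phase decaying from near $\pi$ to near $0$) are easy to verify numerically, as a sketch:

```python
import numpy as np

# All-pass factor G3(s) = (s-1)/(s+1), evaluated on the imaginary axis.
w = np.logspace(-2, 2, 500)              # frequencies from 0.01 to 100
G3 = (1j * w - 1) / (1j * w + 1)

assert np.allclose(np.abs(G3), 1.0)           # unit magnitude at every frequency
assert abs(np.angle(G3[0]) - np.pi) < 0.05    # phase close to pi at low frequency
assert abs(np.angle(G3[-1])) < 0.05           # phase close to 0 at high frequency
```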

This phase is associated with delay. To observe this added delay, consider the following: write $\frac{s-1}{s+1} = 1 - \frac{2}{s+1}$, and observe that the inverse Laplace transform of this term is the Dirac delta impulse plus $b(t) := -2e^{-t}1_{\{t \ge 0\}}$, the inverse Laplace transform of $-\frac{2}{s+1}$. In terms of a linear system, $b$ is the impulse response of a causal system mapping an input $u$ to the output $\int b(t-\tau)u(\tau)\, d\tau$ (or more generally $\int^t Ce^{A(t-s)}Bu(s)\, ds$ for appropriate matrices $A, B, C$): this term adds a delayed response compared with the effect of the Dirac delta impulse.

Yet another interpretation is the following. Let $\theta > 0$. One has the approximation

\frac{1 - s\theta/2}{1 + s\theta/2} \approx e^{-s\theta}

for small $s = i\omega$ values, through expanding the exponential term. By our analysis earlier, a negative complex exponential in the Laplace transform corresponds to a positive time delay. The approximation above is known as a first-order Pade approximation.
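The quality of the first-order Pade approximation at a small frequency can be checked numerically; the error is $O((s\theta)^3)$. A sketch with assumed values $\theta = 0.1$, $\omega = 0.3$:

```python
import numpy as np

# First-order Pade approximation: (1 - s*theta/2)/(1 + s*theta/2) ~ e^{-s*theta}
theta = 0.1
s = 1j * 0.3                             # small |s * theta| = 0.03
pade = (1 - s * theta / 2) / (1 + s * theta / 2)

# Leading error term is (s*theta)^3 / 12, about 2e-6 here.
assert abs(pade - np.exp(-s * theta)) < 1e-5
```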

Thus, non-minimum-phase systems have higher delay properties in their impulse responses compared to minimum-phase systems.


Initial Value Problems using the Laplace and Z Transforms

The one-sided Laplace and Z transforms are very useful in finding solutions to differential equations or difference equations with initial conditions.


Exercises

Problem 7.7.1 (Laplace and Z-Transform Computations)

a) Compute the (two-sided) Z-transform of

x(n) = 2^n 1_{\{n \ge 0\}}

Note that you should find the Region of Convergence as well.

b) Compute the (two-sided) Laplace transform of

x(t) = e^{2t} 1_{\{t \ge 0\}}

Find the regions in the complex plane, where the transforms are finite valued.

c) Show that the one-sided Laplace transform of $\cos(\alpha t)$ satisfies

\mathcal{L}_+\{\cos \alpha t\} = \frac{s}{s^2 + \alpha^2}, \quad \text{Re}(s) > 0

d) Compute the inverse Laplace transform of

\frac{s^2 + 9s + 2}{(s-1)^2(s+3)}, \quad \text{Re}(s) > 1

Hint: Use partial fraction expansion and the properties of the derivative of a Laplace transform.

Problem 7.7.2 (Inverse Z-Transform)

Find the inverse Z-transform of:

X(z) = \frac{3 - \frac{5}{6}z^{-1}}{(1 - \frac{1}{4}z^{-1})(1 - \frac{1}{3}z^{-1})}, \quad |z| > 1

Problem 7.7.3 (BIBO Stability and Causality)

Let $P(z)$ and $Q(z)$ be polynomials in $z$. Let the transfer function of a discrete-time LTI system be given by

H(z) = \frac{P(z)}{Q(z)}

a) Suppose the system is BIBO stable. Show that the system is causal (non-anticipative) if and only if $\frac{P(z)}{Q(z)}$ is a proper fraction (that is, the degree of the numerator polynomial cannot be greater than that of the denominator).

b) Show that the system is BIBO stable if and only if the Region of Convergence of the transfer function contains the unit circle. Thus, for a system to be both causal and stable, what are the conditions on the roots of $Q(z)$?

Problem 7.7.4 (Discrete-Time System Analysis)

Let a system be described by:

y(n+2) = 3y(n+1) - 2y(n) + u(n), \quad n \in \mathbb{Z}.

a) Is this system non-anticipative? Bounded-input-bounded-output (BIBO) stable?

b) Compute the transfer function of this system.

c) Compute the impulse response of the system.

d) Compute the output when the input is

u(n) = (-1)^n 1_{\{n \ge 0\}}


Problem 7.7.6 (Causal Filter and Initial Value Problem)

a) Let $H(z) = \frac{1}{1 - \frac{1}{2}z}$. Given that $H$ represents the transfer function (Z-transform of the impulse response) of a causal filter, find $h(n)$.

b) Find the solution to the following sequence of equations:

y(n+2) = 2y(n+1) + y(n), \quad n \ge 0

with initial conditions:

y(0) = 0, \quad y(1) = 1.

That is, find $y(n)$ for $n \ge 0$.