SCHAUM'S OUTLINE OF
THEORY AND PROBLEMS
OF
STATE SPACE
and
LINEAR SYSTEMS
BY
DONALD M. WIBERG, Ph.D.
Associate Professor of Engineering
University of California, Los Angeles
SCHAUM'S OUTLINE SERIES
McGRAW-HILL BOOK COMPANY
New York, St. Louis, San Francisco, Düsseldorf, Johannesburg, Kuala Lumpur, London, Mexico
Montreal, New Delhi, Panama, Rio de Janeiro, Singapore, Sydney, and Toronto
Copyright © 1971 by McGraw-Hill, Inc. All Rights Reserved. Printed in the
United States of America. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording, or otherwise, without the
prior written permission of the publisher.
07-070096-6
Preface
The importance of state space analysis is recognized in fields where the time behavior
of any physical process is of interest. The concept of state is comparatively recent, but the
methods used have been known to mathematicians for many years. As engineering, physics,
medicine, economics, and business become more cognizant of the insight that the state space
approach offers, its popularity increases.
This book was written not only for upper division and graduate students, but for prac-
ticing professionals as well. It is an attempt to bridge the gap between theory and practical
use of the state space approach to the analysis and design of dynamical systems. The book
is meant to encourage the use of state space as a tool for analysis and design, in proper
relation with other such tools. The state space approach is more general than the "classical"
Laplace and Fourier transform theory. Consequently, state space theory is applicable to all
systems that can be analyzed by integral transforms in time, and is applicable to many
systems for which transform theory breaks down. Furthermore, state space theory gives
a somewhat different insight into the time behavior of linear systems, and is worth studying
for this aspect alone.
In particular, the state space approach is useful because: (1) linear systems with time-
varying parameters can be analyzed in essentially the same manner as time-invariant linear
systems, (2) problems formulated by state space methods can easily be programmed on a
computer, (3) high-order linear systems can be analyzed, (4) multiple input-multiple output
systems can be treated almost as easily as single input-single output linear systems, and
(5) state space theory is the foundation for further studies in such areas as nonlinear
systems, stochastic systems, and optimal control. These are five of the most important
advantages obtained from the generalization and rigorousness that state space brings to
the classical transform theory.
Because state space theory describes the time behavior of physical systems in a mathe-
matical manner, the reader is assumed to have some knowledge of differential equations and
of Laplace transform theory. Some classical control theory is needed for Chapter 8 only.
No knowledge of matrices or complex variables is prerequisite.
The book may appear to contain too many theorems to be comprehensible and/or useful
to the nonmathematician. But the theorems have been stated and proven in a manner
suited to show the range of application of the ideas and their logical interdependence.
Space that might otherwise have been devoted to solved problems has been used instead
to present the physical motivation of the proofs. Consequently I give my strongest recom-
mendation that the reader seek to understand the physical ideas underlying the proofs rather
than to merely memorize the theorems. Since the emphasis is on applications, the book
might not be rigorous enough for the pure mathematician, but I feel that enough informa-
tion has been provided so that he can tidy up the statements and proofs himself.
The book has a number of novel features. Chapter 1 gives the fundamental ideas of
state from an informal, physical viewpoint, and also gives a correct statement of linearity.
Chapter 2 shows how to write transfer functions and ordinary differential equations in
matrix notation, thus motivating the material on matrices to follow. Chapter 3 develops
the important concepts of range space and null space in detail, for later application. Also
exterior products (Grassmann algebra) are developed, which give insight into determinants,
and which considerably shorten a number of later proofs. Chapter 4 shows how to actually
solve for the Jordan form, rather than just proving its existence. Also a detailed treatment
of pseudoinverses is given. Chapter 5 gives techniques for computation of transition
matrices for high-order time-invariant systems, and contrasts this with a detailed develop-
ment of transition matrices for time-varying systems. Chapter 6 starts with giving physical
insight into controllability and observability of simple systems, and progresses to the point
of giving algebraic criteria for time-varying systems. Chapter 7 shows how to reduce a
system to its essential parameters. Chapter 8 is perhaps the most novel. Techniques from
classical control theory are extended to time-varying, multiple input-multiple output linear
systems using state space formulation. This gives practical methods for control system
design, as well as analysis. Furthermore, the pole placement and observer theory developed
can serve as an introduction to linear optimal control and to Kalman filtering. Chapter 9
considers asymptotic stability of linear systems, and the usual restriction of uniformity is
dispensed with. Chapter 10 gives motivation for the quadratic optimal control problem,
with special emphasis on the practical time-invariant problem and its associated computa-
tional techniques. Since Chapters 6, 8, and 9 precede, relations with controllability, pole
placement, and stability properties can be explored.
The book has come from a set of notes developed for engineering course 122B at UCLA,
originally dating from 1966. It was given to the publisher in June 1969. Unfortunately,
the publication delay has dated some of the material. Fortunately, it also enabled a number
of errors to be weeded out.
Now I would like to apologize because I have not included references, historical develop-
ment, and credit to the originators of each idea. This was simply impossible to do because
of the outline nature of the book.
I would like to express my appreciation to those who helped me write this book. Chapter
1 was written with a great deal of help from A. V. Balakrishnan. L. M. Silverman helped
with Chapter 7 and P.K.C. Wang with Chapter 9. Interspersed throughout the book is
material from a course given by R. E. Kalman during the spring of 1961 at Caltech. J. J.
DiStefano, R. C. Erdmann, N. Levan, and K. Yao have used the notes as a text in UCLA
course 122B and have given me suggestions. I have had discussions with R. E. Mortensen,
M. M. Sholar, A. R. Stubberud, D. R. Vaughan, and many other colleagues. Improvements
in the final draft were made through the help of the control group under the direction of
J. Ackermann at the DFVLR in Oberpfaffenhofen, West Germany, especially by G. Grubel
and R. Sharma. Also, I want to thank those UCLA students, too numerous to mention, that
have served as guinea pigs and have caught many errors of mine. Ruthie Alperin was very
efficient as usual while typing the text. David Beckwith, Henry Hayden, and Daniel Schaum
helped publish the book in its present form. Finally, I want to express my appreciation of
my wife Merideth and my children Erik and Kristin for their understanding during the
long hours of involvement with the book.
Donald M. Wiberg
University of California, Los Angeles
June 1971
CONTENTS
Page
Chapter 1 MEANING OF STATE 1
Introduction to State. State of an Abstract Object. Trajectories in State
Space. Dynamical Systems. Linearity and Time Invariance. Systems Con-
sidered. Linearization of Nonlinear Systems.
Chapter 2 METHODS FOR OBTAINING THE STATE EQUATIONS 16
Flow Diagrams. Properties of Flow Diagrams. Canonical Flow Diagrams
for Time-Invariant Systems. Jordan Flow Diagram. Time-Varying Systems.
General State Equations.
Chapter 3 ELEMENTARY MATRIX THEORY 38
Introduction. Basic Definitions. Basic Operations. Special Matrices. Deter-
minants and Inverse Matrices. Vector Spaces. Bases. Solution of Sets of
Linear Algebraic Equations. Generalization of a Vector. Distance in a Vector
Space. Reciprocal Basis. Matrix Representation of a Linear Operator. Ex-
terior Products.
Chapter 4 MATRIX ANALYSIS 69
Eigenvalues and Eigenvectors. Introduction to the Similarity Transformation.
Properties of Similarity Transformations. Jordan Form. Quadratic Forms.
Matrix Norms. Functions of a Matrix. Pseudoinverse.
Chapter 5 SOLUTIONS TO THE LINEAR STATE EQUATION 99
Transition Matrix. Calculation of the Transition Matrix for Time-Invariant
Systems. Transition Matrix for Time-Varying Differential Systems. Closed
Forms for Special Cases of Time-Varying Linear Differential Systems. Peri-
odically-Varying Linear Differential Systems. Solution of the Linear State
Equations with Input. Transition Matrix for Time-Varying Difference Equa-
tions. Impulse Response Matrices. The Adjoint System.
Chapter 6 CONTROLLABILITY AND OBSERVABILITY 128
Introduction to Controllability and Observability. Controllability in Time-
Invariant Linear Systems. Observability in Time-Invariant Linear Systems.
Direct Criteria from A, B, and C. Controllability and Observability of Time-
Varying Systems. Duality.
Chapter 7 CANONICAL FORMS OF THE STATE EQUATION 147
Introduction to Canonical Forms. Jordan Form for Time-Invariant Systems.
Real Jordan Form. Controllable and Observable Forms for Time-Varying
Systems. Canonical Forms for Time-Varying Systems.
Chapter 8 RELATIONS WITH CLASSICAL TECHNIQUES 164
Introduction. Matrix Flow Diagrams. Steady State Errors. Root Locus.
Nyquist Diagrams. State Feedback Pole Placement. Observer Systems.
Algebraic Separation. Sensitivity, Noise Rejection, and Nonlinear Effects.
Chapter 9 STABILITY OF LINEAR SYSTEMS 191
Introduction. Definitions of Stability for Zero-Input Linear Systems. De-
finitions of Stability for Nonzero Inputs. Liapunov Techniques. Liapunov
Functions for Linear Systems. Equations for the Construction of Liapunov
Functions.
Chapter 10 INTRODUCTION TO OPTIMAL CONTROL 208
Introduction. The Criterion Functional. Derivation of the Optimal Control
Law. The Matrix Riccati Equation. Time-Invariant Optimal Systems. Out-
put Feedback. The Servomechanism Problem. Conclusion.
INDEX 233
Chapter 1
Meaning of State
1.1 INTRODUCTION TO STATE
To introduce the subject, let's take an informal, physical approach to the idea of state.
(An exact mathematical approach is taken in more advanced texts.) First, we make a
distinction between physical and abstract objects. A physical object is an object perceived
by our senses whose time behavior we wish to describe, and its abstraction is the mathe-
matical relationships that give some expression for its behavior. This distinction is made
because, in making an abstraction, it is possible to lose some of the relationships that make
the abstraction behave similar to the physical object. Also, not all mathematical relation-
ships can be realized by a physical object.
The concept of state relates to those physical objects whose behavior can change with
time, and to which a stimulus can be applied and the response observed. To predict the
future behavior of the physical object under any input, a series of experiments could be
performed by applying stimuli, or inputs, and observing the responses, or outputs. From
these experiments we could obtain a listing of these inputs and their corresponding observed
outputs, i.e. a list of input-output pairs. An input-output pair is an ordered pair of real
time functions defined for all t ≥ t0, where t0 is the time the input is first applied. Of
course segments of these input time functions must be consistent and we must agree upon
what kind of functions to consider, but in this introduction we shall not go into these
mathematical details.
Definition 1.1: The state of a physical object is any property of the object which relates
input to output such that knowledge of the input time function for t ≥ t0
and state at time t = t0 completely determines a unique output for t ≥ t0.
Example 1.1.
Consider a black box, Fig. 1-1, contain-
ing a switch to one of two voltage dividers.
Intuitively, the state of the box is the posi-
tion of the switch, which agrees with Defi-
nition 1.1. This can be ascertained by the
experiment of applying a voltage V to the
input terminal. Natural laws (Ohm's law)
dictate that if the switch is in the lower
position A, the output voltage is V/2, and
if the switch is in the upper position B, the
output voltage is V/4. Then the state A
determines the input-output pair to be
(V, V/2), and the state B corresponds to
(V, V/4). Fig. 1-1
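The switch example can be sketched in code (a hypothetical illustration, not from the text): once the state, the switch position, is known, the input-output pair is completely determined.

```python
# Hypothetical sketch of the black box of Example 1.1: the state is the
# switch position, and fixing it determines the input-output pair.
def black_box_output(V, state):
    """Output voltage for input V, given switch position 'A' or 'B'."""
    if state == 'A':
        return V / 2.0   # lower position A: output is V/2
    if state == 'B':
        return V / 4.0   # upper position B: output is V/4
    raise ValueError("state must be 'A' or 'B'")

print(black_box_output(10.0, 'A'))  # → 5.0, the pair (V, V/2)
```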
1.2 STATE OF AN ABSTRACT OBJECT
The basic ideas contained in the above example can be extended to many physical objects
and to the abstract relationships describing their time behavior. This will be done after
abstracting the properties of physical objects such as the black box. For example, the
color of the box has no effect on the experiment of applying a voltage. More subtly, the
value of resistance R is immaterial if it is greater than zero. All that is needed is a listing
of every input-output pair over all segments of time t ≥ t0, and the corresponding states
at time to.
Definition 1.2:
An abstract object is the totality of input-output pairs that describe the
behavior of a physical object.
Instead of a specific list of input time functions and their corresponding output time
functions, the abstract object is usually characterized as a class of all time functions that
obey a set of mathematical equations. This is in accord with the scientific method of
hypothesizing an equation and then checking to see that the physical object behaves in a
manner similar to that predicted by the equation. Hence we can often summarize the
abstract object by using the mathematical equations representing physical laws.
The mathematical relations which summarize an abstract object must be oriented,
in that m of the time functions that obey the relations must be designated inputs (denoted
by the vector u, having m elements ui) and k of the time functions must be designated
outputs (denoted by the vector y, having k elements yi). This need has nothing to do with
causality, in that the outputs are not "caused" by the inputs.
Definition 1.3:
The state of an abstract object is a collection of numbers which together
with the input u(t) for all t ≥ t0 uniquely determines the output y(t)
for all t ≥ t0.
In essence the state parametrizes the listing of input-output pairs. The state is the
answer to the question "Given u(t) for t ≥ t0 and the mathematical relationships of the
abstract object, what additional information is needed to completely specify y(t) for t ≥ t0?"
Example 1.2.
A physical object is the resistor-capacitor network
shown in Fig. 1-2. An experiment is performed by applying
a voltage u(t), the input, and measuring a voltage y(t), the
output. Note that another experiment could be to apply
y(t) and measure u(t), so that these choices are determined
by the experiment.
The list of all input-output pairs for this example is
the class of all functions u(t), y(t) which satisfy the mathe-
matical relationship
RCdy/dt + y = u (1.1)
This summarizes the abstract object. The solution of (1.1) is

    y(t) = y(t0) e^((t0−t)/RC) + (1/RC) ∫_t0^t e^((τ−t)/RC) u(τ) dτ     (1.2)

Fig. 1-2
This relationship explicitly exhibits the list of input-output pairs. For any input time function u(t) for
t ≥ t0, the output time function y(t) is uniquely determined by y(t0), a number at time t0. Note the
distinction between time functions and numbers. Thus the set of numbers y(t0) parametrizes all input-
output pairs, and therefore is the state of the abstract object described by (1.1). Correspondingly, a
choice of state of the RC network is the output voltage at time t0.
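The claim that y(t0) alone fixes the future output can be checked numerically (illustrative code, not from the text; a simple Euler integration is used as an independent check of the closed form, with RC = 1 and a constant input assumed for simplicity):

```python
import math

# Numerical check of equation (1.2) for the RC network of Example 1.2.
# With RC = 1 and constant input u(t) = 1, the closed form reduces to
# y(t) = y(t0)*exp(t0 - t) + (1 - exp(t0 - t)).
RC, t0, y0, u = 1.0, 0.0, 2.0, 1.0

def closed_form(t):
    return y0 * math.exp((t0 - t) / RC) + u * (1.0 - math.exp((t0 - t) / RC))

# Independently integrate RC*dy/dt + y = u with small Euler steps.
y, t, dt = y0, t0, 1e-4
while t < 1.0:
    y += dt * (u - y) / RC
    t += dt

print(abs(y - closed_form(1.0)) < 1e-3)  # True: the two answers agree
```

Only y(t0) and the input enter either computation; how the capacitor reached y(t0) is irrelevant.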
Example 1.3.
The physical object shown in Fig. 1-3 is two RC networks in series. The pertinent equation is
    R²C² d²y/dt² + 2.5RC dy/dt + y = u     (1.3)
Fig. 1-3
with a solution

    y(t) = (1/3)[4e^((t0−t)/2RC) − e^(2(t0−t)/RC)] y(t0) + (2RC/3)[e^((t0−t)/2RC) − e^(2(t0−t)/RC)] (dy/dt)(t0)
           + (2/3RC) ∫_t0^t [e^((τ−t)/2RC) − e^(2(τ−t)/RC)] u(τ) dτ     (1.4)

Here the set of numbers y(t0) and (dy/dt)(t0) parametrizes the input-output pairs, and may be chosen as state.
Physically, the voltage and its derivative across the smaller capacitor at time t0 correspond to the state.
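The zero-input part of the solution can be spot-checked numerically (illustrative code, not from the text; RC = 1, u = 0, and the initial state y(0) = 1, dy/dt(0) = 0 are chosen for the check):

```python
import math

# Spot check of the zero-input part of the solution to (1.3) with RC = 1:
# y(t) = (1/3)*(4*exp(-t/2) - exp(-2t)) for y(0) = 1, dy/dt(0) = 0.
def closed_form(t):
    return (1.0 / 3.0) * (4.0 * math.exp(-t / 2.0) - math.exp(-2.0 * t))

# Independently integrate d2y/dt2 = -(2.5*dy/dt + y) with Euler steps.
y, dy, t, h = 1.0, 0.0, 0.0, 1e-4
while t < 1.0:
    y, dy = y + h * dy, dy + h * (-(2.5 * dy + y))
    t += h

print(abs(y - closed_form(1.0)) < 1e-3)  # True
```

The two-dimensional state (y, dy/dt) is exactly what the integrator needs to march forward, which is the point of the example.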
Definition 1.4: A state variable, denoted by the vector x(t), is the time function whose
value at any specified time is the state of the abstract object at that time.
Note this difference in going from a set of numbers to a time function. The state can
be a set consisting of an infinity of numbers (e.g. Problems 1.1 and 1.2), in which case the
state variable is an infinite collection of time functions. However, in most cases considered
in this book, the state is a set of n numbers and correspondingly x(t) is an n-vector function
of time.
Definition 1.5: The state space, denoted by Σ, is the set of all x(t).
Example 1.4.
The state variable in Example 1.2 is x(t) = y(t), whereas in Example 1.1 the state variable remains
either A or B for all time.
Example 1.5.
The state variable in Example 1.3 is the 2-vector x(t) with components y(t) and dy/dt(t).
The state representation is not unique. There can be many different ways of expressing
the relationship of input to output.
Example 1.6.
In Example 1.3, instead of the voltage and its derivative across the smaller capacitor, the state could
be the voltage and its derivative across the larger capacitor, or the state could be the voltages across
both capacitors.
There can exist inputs that do not influence the state, and, conversely, there can exist
outputs that are not influenced by the state. These cases are called uncontrollable and
unobservable, respectively, about which much more will be said in Chapter 6.
Example 1.7.
In Example 1.1, the physical object is state uncontrollable. No input can make the switch change
positions. However, the switch position is observable. If the wire to the output were broken, it would be
unobservable. A state that is both unobservable and uncontrollable makes no physical sense, since it can-
not be detected by experiment. Examples 1.2 and 1.3 are both controllable and observable.
One more point to note is that we consider here only deterministic abstract objects.
The problem of obtaining the state of an abstract object in which random processes are
inputs, etc., is beyond the scope of this book. Consequently, all statements in the whole
book are intended only for deterministic processes.
1.3 TRAJECTORIES IN STATE SPACE
The state variable x(t) is an explicit function of time, but also depends implicitly on the
starting time t0, the initial state x(t0) = x0, and the input u(τ). This functional dependency
can be written as x(t) = φ(t; t0, x0, u(τ)), called a trajectory. The trajectory can be plotted
in n-dimensional state space as t increases from t0, with t an implicit parameter. Often
this plot can be made by eliminating t from the solutions to the state equation.
Example 1.8.
Given x1(t) = sin t and x2(t) = cos t, squaring each equation and adding gives x1² + x2² = 1. This
is a circle in the x1x2 plane with t an implicit parameter.
Example 1.9.
In Example 1.3, note that equation (1.4) depends on t, u(τ), x(t0) and t0, where x(t0) is the vector with
components y(t0) and dy/dt(t0). Therefore the trajectories φ depend on these quantities.
Suppose now u(t) = 0 and RC = 1. Let x1 = y(t) and x2 = dy/dt. Then dx1/dt = x2 and d²y/dt² = dx2/dt.
Therefore dt = dx1/x2 and so d²y/dt² = x2 dx2/dx1. Substituting these relationships into (1.3) gives

    x2 dx2/dx1 + 2.5x2 + x1 = 0

which is independent of t. This has a solution

    x1 + 2x2 = C(2x1 + x2)^4

where the constant C = [x1(t0) + 2x2(t0)]/[2x1(t0) + x2(t0)]^4. Typical trajectories in state space are shown
in Fig. 1-4. The one passing through points x1(t0) = 0 and x2(t0) = 1 is drawn in bold. The arrows
point in the direction of increasing time, and all trajectories eventually reach the origin for this particular
stable system.
Fig. 1-4
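The trajectory invariant of Example 1.9 can be verified along a simulated trajectory (illustrative code, not from the text; the system with RC = 1 and zero input is Euler-integrated from the bold trajectory's starting point):

```python
# Check the trajectory invariant x1 + 2*x2 = C*(2*x1 + x2)**4 of
# Example 1.9 along a simulated path.  Starting at x1(t0) = 0,
# x2(t0) = 1 gives C = 2.
x1, x2, t, h = 0.0, 1.0, 0.0, 1e-4
C = (x1 + 2.0 * x2) / (2.0 * x1 + x2) ** 4

# Euler-integrate dx1/dt = x2, dx2/dt = -2.5*x2 - x1 (RC = 1, u = 0).
while t < 0.5:
    x1, x2 = x1 + h * x2, x2 + h * (-2.5 * x2 - x1)
    t += h

print(abs((x1 + 2.0 * x2) - C * (2.0 * x1 + x2) ** 4) < 1e-2)  # True
```

Eliminating t this way is exactly how the phase-plane plot of Fig. 1-4 is drawn.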
1.4 DYNAMICAL SYSTEMS
In the foregoing we have assumed that an abstract object exists, and that sometimes
we can find a set of oriented mathematical relationships that summarizes this listing of
input and output pairs. Now suppose we are given a set of oriented mathematical relation-
ships, do we have an abstract object? The answer to this question is not always affirmative,
because there exist mathematical equations whose solutions do not result in abstract objects.
Example 1.10.
The oriented mathematical equation y(t) = ju(t) cannot give an abstract object, because either the
input or the output must be imaginary.
If a mathematical relationship always determines a real output y(t) existing for all
t ≥ t0 given any real input u(t) for all time t, then we can form an abstract object. Note
that by supposing an input u(t) for all past times as well as future times, we can form an
abstract object from the equation for a delayor y(t) = u(t − T). [See Problem 1.1.]
However, we can also form an abstract object from the equation for a predictor
y(t) = u(t + T). If we are to restrict ourselves to mathematical relations that can be
mechanized, we must specifically rule out such relations whose present outputs depend
on future values of the input.
Definition 1.6: A dynamical system is an oriented mathematical relationship in which:
(1) A real output y(t) exists for all t ≥ t0 given a real input u(t) for
all t.
(2) Outputs y(t) do not depend on inputs u(τ) for τ > t.
Given that we have a dynamical system relating y(t) to u(t), we would like to construct
a set of mathematical relations defining a state x(t). We shall assume that a state space
description can be found for the dynamical system of interest satisfying the following
conditions (although such a construction may take considerable thought):
Condition 1: A real, unique output y(t) = η(t, φ(t; t0, x0, u(τ)), u(t)) exists for all t ≥ t0
given the state x0 at time t0 and a real input u(t) for t ≥ t0.
Condition 2: A unique trajectory φ(t; t0, x0, u(τ)) exists for all t ≥ t0 given the state at
time t0 and a real input for all t ≥ t0.
Condition 3: A unique trajectory starts from each state, i.e.

    lim (t → t1⁺) φ(t; t1, x(t1), u(τ)) = x(t1) for all t1 ≥ t0     (1.5)

Condition 4: Trajectories satisfy the transition property

    φ(t; t0, x(t0), u(τ)) = φ(t; t1, x(t1), u(τ)) for t0 < t1 < t     (1.6)

    where x(t1) = φ(t1; t0, x(t0), u(τ))     (1.7)

Condition 5: Trajectories φ(t; t0, x0, u(τ)) do not depend on inputs u(τ) for τ > t.
Condition 1 gives the functional relationship y(t) = η(t, x(t), u(t)) between initial state
and future input such that a unique output is determined. Therefore, with a proper state
space description, it is not necessary to know inputs prior to to, but only the state at time to.
The state at the initial time completely summarizes all the past history of the input.
Example 1.11.
In Example 1.2, it does not matter how the voltage across the capacitor was obtained in the past.
All that is needed to determine the unique future output is the state and the future input.
Condition 2 insures that the state at a future time is uniquely determined. Therefore
knowledge of the state at any time, not necessarily to, uniquely determines the output. For
a given u(t), one and only one trajectory passes through each point in state space and exists
for all finite t ≥ t0. As can be verified in Fig. 1-4, one consequence of this is that the state
trajectories do not cross one another. Also, notice that condition 2 does not require the
state to be real, even though the input and output must be real.
Example 1.12.
The relation dy/dt = u(t) is obviously a dynamical system. A state space description dx/dt = ju(t)
with output y(t) = −jx(t) can be constructed satisfying conditions 1-5, yet the state is imaginary.
Condition 3 merely requires the state space description to be consistent, in that the
starting point of the trajectory should correspond to the initial state. Condition 4 says
that the input u(τ) takes the system from a state x(t0) to a state x(t), and if x(t1) is on
that trajectory, then the corresponding segment of the input will take the system from
x(t1) to x(t). Finally, condition 5 has been added to assure causality of the input-output
relationship resulting from the state space description to correspond with the causality of
the original dynamical system.
Example 1.13.
We can construct a state space description of equation (1.1) of Example 1.2 by defining a state
x(t) = y(t). Then condition 1 is satisfied as seen by examination of the solution, equation (1.2). Clearly
the trajectory φ(t; t0, x0, u(τ)) exists and is unique given a specified t0, x0 and u(τ), so condition 2 is satisfied.
Also, conditions 3 and 5 are satisfied. To check condition 4, given x(t0) = y(t0) and u(τ) over t0 ≤ τ ≤ t,
then

    x(t) = x(t1) e^((t1−t)/RC) + (1/RC) ∫_t1^t e^((τ1−t)/RC) u(τ1) dτ1     (1.8)

where

    x(t1) = x(t0) e^((t0−t1)/RC) + (1/RC) ∫_t0^t1 e^((τ0−t1)/RC) u(τ0) dτ0     (1.9)

Substitution of (1.9) into (1.8) gives the previously obtained (1.2). Therefore the dynamical system (1.1) has
a state space description satisfying conditions 1-5.
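The transition property checked in Example 1.13 can also be demonstrated numerically (illustrative code, not from the text; Euler integration with RC = 1 and an arbitrary cosine input are my choices):

```python
import math

# Numerical check of the transition property (1.6)-(1.7) for the RC network:
# integrating from t0 straight to t gives the same state as stopping at t1
# and restarting from x(t1).
RC, h = 1.0, 1e-4
u = lambda t: math.cos(t)

def step(x, t0, t1):
    """Euler-integrate RC*dx/dt + x = u(t) from t0 to t1."""
    n = round((t1 - t0) / h)
    for k in range(n):
        x += h * (u(t0 + k * h) - x) / RC
    return x

x_direct = step(2.0, 0.0, 1.0)                   # t0 = 0 straight to t = 1
x_via_t1 = step(step(2.0, 0.0, 0.5), 0.5, 1.0)   # stop at t1 = 0.5, restart
print(abs(x_direct - x_via_t1) < 1e-9)  # True: trajectory segments concatenate
```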
Henceforth, instead of "dynamical system with a state space description" we will simply
say "system" and the rest will be understood.
1.5 LINEARITY AND TIME INVARIANCE
Definition 1.7: Given any two numbers α, β; two states x1(t0), x2(t0); two inputs u1(τ), u2(τ);
and two corresponding outputs y1(τ), y2(τ) for τ ≥ t0. Then a system is
linear if (1) the state x3(t0) = αx1(t0) + βx2(t0), the output y3(τ) = αy1(τ) +
βy2(τ), and the input u3(τ) = αu1(τ) + βu2(τ) can appear in the oriented ab-
stract object and (2) both y3(τ) and x3(τ) correspond to the state x3(t0) and
input u3(τ).
An equivalent statement is that the operators φ(t; t0, x0, u(τ)) = x(t) and η(t, φ(t; t0, x0, u(τ)), u(t)) = y(t)
are linear on {u(τ)} ⊕ {x(t0)}.
Example 1.14.
In Example 1.2,

    y1(t) = x1(t) = x1(t0) e^((t0−t)/RC) + (1/RC) ∫_t0^t e^((τ−t)/RC) u1(τ) dτ

    y2(t) = x2(t) = x2(t0) e^((t0−t)/RC) + (1/RC) ∫_t0^t e^((τ−t)/RC) u2(τ) dτ

are the corresponding outputs y1(t) and y2(t) to the states x1(t) and x2(t) with inputs u1(t) and u2(t).
Since any magnitude of voltage is permitted in this idealized system, any state x3(t) = αx1(t) + βx2(t),
any input u3(t) = αu1(t) + βu2(t), and any output y3(t) = αy1(t) + βy2(t) will appear in the list of input-
output pairs that form the abstract object. Therefore part (1) of Definition 1.7 is satisfied. Furthermore,
let's look at the response generated by x3(t0) and u3(τ):

    y(t) = x3(t0) e^((t0−t)/RC) + (1/RC) ∫_t0^t e^((τ−t)/RC) u3(τ) dτ

         = αy1(t) + βy2(t) = y3(t)

Since y3(t) = x3(t), both the future output and state correspond to x3(t0) and u3(τ) and the system is linear.
Example 1.15.
Consider the system of Example 1.1. For some α and β there is no state equal to αA + βB, where A and
B are the switch positions. Consequently the system violates condition (1) of Definition 1.7 and is not linear.
Example 1.16.
Given the system dx/dt = 0, y = u cos x. Then y1(t) = u1(t) cos x1(t0) and y2(t) = u2(t) cos x2(t0).
The state x3(t) = x3(t0) = αx1(t0) + βx2(t0) and is linear, but the output

    y(t) = [αu1(t) + βu2(t)] cos [αx1(t0) + βx2(t0)] ≠ αy1(t) + βy2(t)

except in special cases like x1(t0) = x2(t0) = 0, so the system is not linear.
If a system is linear, then superposition holds for nonzero u(t) with x(t0) = 0 and also
for nonzero x(t0) with u(t) = 0, but not both together. In Example 1.14, with zero initial
voltage on the capacitor, the response to a biased a-c voltage input (constant + sin ωt) could
be calculated as the response to a constant voltage input plus the response to an unbiased
a-c voltage input. Also, note from Example 1.16 that even if superposition does hold for
nonzero u(t) with x(t0) = 0 and for nonzero x(t0) with u(t) = 0, the system may still not
be linear.
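The zero-state superposition just described can be demonstrated numerically for the RC network (illustrative code, not from the text; the specific inputs and Euler integration are my choices):

```python
import math

# Zero-state superposition for the RC network (RC = 1): the response to
# u(t) = 1 + sin(t) equals the response to 1 plus the response to sin(t).
def response(u, T=2.0, h=1e-3):
    """Euler-integrate dy/dt = u(t) - y from zero initial state."""
    y, t = 0.0, 0.0
    while t < T:
        y += h * (u(t) - y)
        t += h
    return y

y_const = response(lambda t: 1.0)
y_sine  = response(lambda t: math.sin(t))
y_both  = response(lambda t: 1.0 + math.sin(t))
print(abs(y_both - (y_const + y_sine)) < 1e-9)  # True: linear in the input
```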
Definition 1.8: A system is time-invariant if the time axis can be translated and an equiva-
lent system results.
One test for time-invariance is to compare the original output with the shifted output.
First, shift the input time function by T seconds. Starting from the same initial state
x0 at time t0 + T, does y(t + T) of the shifted system equal y(t) of the original system?
Example 1.17.
Given the nonlinear differential equation

    dx/dt = x² + u²

with x(6) = a. Let τ = t − 6 so that dτ = dt and

    dx/dτ = x² + u²

where x(τ = 0) = a, resulting in the same system.
If the nonlinear equation for the state x were changed to

    dx/dt = tx² + u²

then with the substitution τ = t − 6,

    dx/dτ = τx² + u² + 6x²

and the appearance of the last term on the right gives a different system. Therefore this is a time-
varying nonlinear system. Equations with explicit functions of t as coefficients multiplying the state will
usually be time-varying.
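The shift test below Definition 1.8 can be carried out numerically for the time-invariant case (illustrative code, not from the text; the sine input, initial state, and Euler integration are my choices):

```python
import math

# Numerical version of the time-invariance test: run dx/dt = x**2 + u**2
# from t0 = 0, then rerun with the input shifted by T = 5 starting from
# the same initial state at t0 + T, and compare the responses.
def simulate(u, t_start, n=10000, h=1e-4, x0=0.1):
    """Euler-integrate dx/dt = x**2 + u(t)**2 for n steps from t_start."""
    x = x0
    for k in range(n):
        x += h * (x * x + u(t_start + k * h) ** 2)
    return x

u = lambda t: math.sin(t)
x_end     = simulate(u, 0.0)                      # original system
x_shifted = simulate(lambda t: u(t - 5.0), 5.0)   # input shifted by T = 5
print(abs(x_end - x_shifted) < 1e-9)  # True: same response, so time-invariant
```

Repeating the experiment with dx/dt = t·x² + u² would make the two runs disagree, matching the time-varying case of Example 1.17.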
1.6 SYSTEMS CONSIDERED
This book will consider only time-invariant and time-varying linear dynamical systems
described by sets of differential or difference equations of finite order. We shall see in the
next chapter that in this case the state variable x(t) is an n-vector and the system is linear.
Example 1.18.
A time-varying linear differential system of order n with one input and one output is described by the
equation

    d^n y/dt^n + α1(t) d^(n−1)y/dt^(n−1) + ··· + αn(t) y = β0(t) d^n u/dt^n + ··· + βn(t) u     (1.10)
Example 1.19.
A time-varying linear difference system of order n with one input and one output is described by
the equation
    y(k + n) + α1(k) y(k + n − 1) + ··· + αn(k) y(k) = β0(k) u(k + n) + ··· + βn(k) u(k)     (1.11)

The values of αi(k) depend on the step k of the process, in a way analogous to which the αi(t) depend on
t in the previous example.
1.7 LINEARIZATION OF NONLINEAR SYSTEMS
State space techniques are especially applicable to time-varying linear systems. In this
section we shall find out why time-varying linear systems are of such practical importance.
Comparatively little design of systems is performed from the time-varying point of view
at present, but state space methods offer great promise for the future.
Consider a set of n nonlinear differential equations of first order:

    dy1/dt = f1(y1, y2, ..., yn, u, t)
    dy2/dt = f2(y1, y2, ..., yn, u, t)
    ...........................
    dyn/dt = fn(y1, y2, ..., yn, u, t)     (1.12)

A nonlinear equation of nth order d^n y/dt^n = g(y, dy/dt, ..., d^(n−1)y/dt^(n−1), u, t) can always
be written in this form by defining y1 = y, dy/dt = y2, ..., d^(n−1)y/dt^(n−1) = yn. Then a set
of n first order nonlinear differential equations can be obtained as

    dy1/dt = y2
    dy2/dt = y3
    ...........
    dyn−1/dt = yn
    dyn/dt = g(y1, y2, ..., yn, u, t)
Example 1.20.
To reduce the second order nonlinear differential equation d²y/dt² − 2y³ + u dy/dt = 0 to two first
order nonlinear differential equations, define y = y1 and dy/dt = y2. Then

    dy1/dt = y2
    dy2/dt = 2y1³ − u y2
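The reduction in Example 1.20 translates directly into a state-derivative function (illustrative code, not from the text):

```python
# State-equation form of Example 1.20: the second order equation
# d2y/dt2 - 2*y**3 + u*dy/dt = 0 becomes two first order equations
# in the state (y1, y2) = (y, dy/dt).
def f(y1, y2, u):
    dy1 = y2                       # dy1/dt = y2
    dy2 = 2.0 * y1 ** 3 - u * y2   # dy2/dt = 2*y1**3 - u*y2
    return dy1, dy2

print(f(1.0, -1.0, 0.0))  # → (-1.0, 2.0), matching y = 1/t at t = 1
```

Any standard ODE integrator can now march this pair forward from an initial state, which is exactly the form assumed by equations (1.12).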
Suppose a solution can be found (perhaps by computer) to equations (1.12) for some
initial conditions y1(t0), y2(t0), ..., yn(t0) and some input w(t). Denote this solution as the
trajectory φ(t; w(t), y1(t0), ..., yn(t0), t0). Suppose now that the initial conditions are changed:

    y(t0) = y1(t0) + x1(t0),  dy/dt(t0) = y2(t0) + x2(t0),  ...,  d^(n−1)y/dt^(n−1)(t0) = yn(t0) + xn(t0)

where x1(t0), x2(t0), ..., xn(t0) are small. Furthermore, suppose the input is changed slightly
to u(t) = w(t) + v(t) where v(t) is small. To satisfy the differential equations,

    d(φ1 + x1)/dt = f1(φ1 + x1, φ2 + x2, ..., φn + xn, w + v, t)
    d(φ2 + x2)/dt = f2(φ1 + x1, φ2 + x2, ..., φn + xn, w + v, t)
    .................................................
    d(φn + xn)/dt = fn(φ1 + x1, φ2 + x2, ..., φn + xn, w + v, t)
If f1, f2, ..., fn can be expanded about φ1, φ2, ..., φn and w using Taylor's theorem for several variables, then neglecting higher order terms we obtain

    dφi/dt + dxi/dt = fi(φ1, φ2, ..., φn, w, t) + (∂fi/∂y1)x1 + (∂fi/∂y2)x2 + ... + (∂fi/∂yn)xn + (∂fi/∂u)v

where each ∂fi/∂yj is the partial derivative of fi(y1, y2, ..., yn, u, t) with respect to yj evaluated at y1 = φ1, y2 = φ2, ..., yn = φn and u = w. Now, since each φi satisfies the original equation, the terms dφi/dt = fi can be canceled to leave

         [x1]   [∂f1/∂y1  ∂f1/∂y2  ...  ∂f1/∂yn] [x1]   [∂f1/∂u]
    d/dt [x2] = [∂f2/∂y1  ∂f2/∂y2  ...  ∂f2/∂yn] [x2] + [∂f2/∂u] v
         [..]   [  ...      ...           ...  ] [..]   [  ... ]
         [xn]   [∂fn/∂y1  ∂fn/∂y2  ...  ∂fn/∂yn] [xn]   [∂fn/∂u]

which is, in general, a time-varying linear differential equation, so that the nonlinear equation has been linearized. Note this procedure is valid only for x1, x2, ..., xn and v small enough so that the higher order terms in the Taylor series can be neglected. The matrix of ∂fi/∂yj evaluated at yj = φj is called the Jacobian matrix of the vector f(y, u, t).
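The linearization procedure can be sketched numerically: approximate each ∂fi/∂yj and ∂fi/∂u by central differences about a point of the nominal trajectory. The code below is a sketch (the function names are ours, not the text's) using the system of Example 1.20; it should reproduce the Jacobian entries found by hand in Example 1.21.

```python
# Numerical linearization of dy/dt = f(y, u, t) about a nominal point,
# using central differences to approximate the Jacobian matrix.

def f(y, u, t):
    # System of Example 1.20: dy1/dt = y2, dy2/dt = 2 y1^3 - u y2
    y1, y2 = y
    return [y2, 2.0 * y1**3 - u * y2]

def linearize(f, y, u, t, h=1e-6):
    """Return A[i][j] ~ dfi/dyj and b[i] ~ dfi/du evaluated at (y, u, t)."""
    n = len(y)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        yp, ym = list(y), list(y)
        yp[j] += h
        ym[j] -= h
        fp, fm = f(yp, u, t), f(ym, u, t)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2 * h)
    fp, fm = f(y, u + h, t), f(y, u - h, t)
    b = [(fp[i] - fm[i]) / (2 * h) for i in range(n)]
    return A, b

# Nominal trajectory of Example 1.21 at t = 2: phi1 = 1/t, phi2 = -1/t^2, w = 0
t = 2.0
A, b = linearize(f, [1.0 / t, -1.0 / t**2], 0.0, t)
```

Here A should come out close to [[0, 1], [6/t², 0]] and b close to [0, 1/t²], matching ∂f2/∂y1 = 6y1², ∂f2/∂y2 = -u and ∂f2/∂u = -y2 evaluated on the trajectory.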
Example 1.21.
Consider the system of Example 1.20 with initial conditions y(t0) = 1 and dy/dt(t0) = -1 at t0 = 1. For the particular input w(t) = 0, we obtain the trajectories φ1(t) = t⁻¹ and φ2(t) = -t⁻². Since f1 = y2, then ∂f1/∂y1 = 0, ∂f1/∂y2 = 1 and ∂f1/∂u = 0. Since f2 = 2y1³ - u y2, then ∂f2/∂y1 = 6y1², ∂f2/∂y2 = -u and ∂f2/∂u = -y2. Hence for initial conditions y(t0) = 1 + x1(t0), dy/dt(t0) = -1 + x2(t0) and inputs u = v(t), we obtain

    d/dt [x1]   [ 0    1] [x1]   [ 0 ]
         [x2] = [6t⁻²  0] [x2] + [t⁻²] v

This linear equation gives the solution y(t) ≈ x1(t) + t⁻¹ and dy/dt ≈ x2(t) - t⁻² for the original nonlinear equation, and is valid as long as x1, x2, and v are small.
Example 1.22.
Given the system dy/dt = ky - y² + u. Taking u(t) = 0, we can find two constant solutions, φ(t) = 0 and φ(t) = k. The equation for small motions x(t) about φ(t) = 0 is dx/dt = kx + u so that y(t) ≈ x(t), and the equation for small motions x(t) about φ(t) = k is dx/dt = -kx + u so that y(t) ≈ k + x(t).
Solved Problems
1.1. Given a delay line whose output is a voltage input delayed T seconds. What is the
physical object, the abstract object, the state variable and the state space? Also,
is it controllable, observable and a dynamical system with a state space description?
The physical object is the delay line for an input u(t) and an output y(t) = u(t - T). This equation is the abstract object. Given an input time function u(t) for all t, the output y(t) is defined for t ≥ t0, so it is a dynamical system. To completely specify the output given only u(t) for t ≥ t0, the voltages already inside the delay line must be known. Therefore the state at time t0 is x(t0) = u_[t0-T, t0), where the notation u_[t1, t2) means the time function u(t) for t in the interval t1 ≤ t < t2. For ε > 0 as small as we please, u_[t0-T, t0) can be considered as the noncountably infinite set of numbers

    {u(t0 - T), u(t0 - T + ε), ..., u(t0 - ε)} = u_[t0-T, t0) = x(t0)

In this sense we can consider the state as consisting of an infinity of numbers. Then the state variable is the infinite set of time functions

    x(t) = u_[t-T, t) = {u(t - T), u(t - T + ε), ..., u(t - ε)}

The state space is the space of all time functions T seconds long, perhaps limited by the breakdown voltage of the delay line.

An input u(t) for t0 ≤ t < t0 + T will completely determine the state T seconds later, and any state will be observed in the output after T seconds, so the system is both observable and controllable. Finally, x(t) is uniquely made up of x(t - τ) shifted τ seconds plus the input over those τ seconds, so that the mathematical relation y(t) = u(t - T) gives a system with a state space description.
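The idea that the delay line's state is the T seconds of input stored inside it can be illustrated with a discrete approximation. In this sketch (ours, not the text's) the interval T is sampled at n points, and the state is simply the buffer contents:

```python
from collections import deque

class DelayLine:
    """Discrete approximation of a T-second delay line sampled at n points.

    The state is the buffer of the last n input samples, i.e. the time
    function u over [t - T, t)."""
    def __init__(self, n, initial=0.0):
        self.buf = deque([initial] * n, maxlen=n)

    def step(self, u):
        y = self.buf[0]      # oldest sample leaves the line: y(t) = u(t - T)
        self.buf.append(u)   # newest sample enters the line
        return y

    def state(self):
        return list(self.buf)

line = DelayLine(n=3)
outputs = [line.step(u) for u in [1, 2, 3, 4, 5]]
```

After n steps the output is the input delayed n samples, and the buffer alone (the state) determines all future outputs, just as in Problem 1.1.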
1.2. Given the uncontrollable partial differential equation (diffusion equation)

    ∂y/∂t = ∂²y/∂r² + 0u(r, t)

with boundary conditions y(0, t) = y(1, t) = 0. What is the state variable?

The solution to this zero input equation for t ≥ t0 is

    y(r, t) = Σ_{n=1}^{∞} cn e^{-n²π²(t - t0)} sin nπr

where cn = 2 ∫_0^1 y(r, t0) sin nπr dr. All that is needed to determine the output is y(r, t0), so that y(r, t) is a choice for the state at any time t. Since y(r, t) must be known for almost all r in the interval 0 ≤ r ≤ 1, the state can be considered as an infinity of numbers similar to the case of Problem 1.1.
1.3. Given the mathematical equation (dy/dt)² = y² + 0u. Is this a dynamical system with a state space description?

A real output exists for all t ≥ t0 for any u(t), so it is a dynamical system. The equation can be written as dy/dt = s(t)y, where s(t) is a member of a set of time functions that take on the value +1 or -1 at any fixed time t. Hence knowledge of y(t0) and s(t) for t ≥ t0 uniquely specifies y(t), and together they are the state.
1.4. Plot the trajectories of

    d²y/dt² = 0     if dy/dt - y ≥ 1
    d²y/dt² = -y    if dy/dt - y < 1

Changing variables to y = x1 and dy/dt = x2, the mathematical relation becomes

    x2 dx2/dx1 = 0    or    x2 dx2/dx1 = -x1

The former equation can be integrated immediately to x2(t) = x2(t0), a straight line in the phase plane. The latter equation is solved by multiplying by dx1 to obtain x2 dx2 + x1 dx1 = 0. This can be integrated to obtain x2²(t) + x1²(t) = x2²(t0) + x1²(t0). The result is an equation of circles in the phase plane. The straight lines lie to the left of the line x2 - x1 = 1 and the circles to the right, as plotted in Fig. 1-5.

Fig. 1-5

Note x1 increases for x2 > 0 (positive velocity) and decreases for x2 < 0, giving the motion of the system in the direction of the arrows as t increases. For instance, starting at the initial conditions x1(t0) and x2(t0) corresponding to the point numbered 1, the system moves along the outer trajectory to the point 6. Similarly point 2 moves to point 5. However, starting at either point 3 or point 4, the system goes to point 7 where the system motion in the next instant is not determined. At point 7 the output y(t) does not exist for future times, so that this is not a dynamical system.
1.5. Given the electronic device diagrammed in Fig. 1-6, with a voltage input u(t) and a voltage output y(t). The resistors R have constant values. For t0 ≤ t < t1, the switch S is open; and for t ≥ t1, S is closed. Is this system linear?

Fig. 1-6

Referring to Definition 1.7, it becomes apparent the first thing to do is find the state. No other information is necessary to determine the output given the input, so there is no state, i.e. the dimension of the state space is zero. This problem is somewhat analogous to Example 1.1, except that the position of the switch is specified at each instant of time.

To see if the system is linear, since x(t) = 0 for all time, we only need assume two inputs u1(t) and u2(t). Then y1(t) = u1(t)/2 for t0 ≤ t < t1 and y1(t) = u1(t)/3 for t ≥ t1. Similarly y2(t) = u2(t)/2 or u2(t)/3. Now assume an input αu1(t) + βu2(t). The output is [αu1(t) + βu2(t)]/2 for t0 ≤ t < t1 and [αu1(t) + βu2(t)]/3 for t ≥ t1. Substituting y1(t) and y2(t), the output is αy1(t) + βy2(t), showing that superposition does hold and that the system is linear. The switch S can be considered a time-varying resistor whose resistance is infinite for t < t1 and zero for t ≥ t1. Therefore Fig. 1-6 depicts a time-varying linear device.
1.6. Given the electronic device of Problem 1.5 (Fig. 1-6), with a voltage input u(t) and a voltage output y(t). The resistors R have constant values. However, now the position of the switch S depends on y(t). Whenever y(t) is positive, the switch S is open; and whenever y(t) is negative, the switch S is closed. Is this system linear?

Again there is no state, and only superposition for zero state but nonzero input need be investigated. The input-output relationship is now

    y(t) = [5u(t) + u(t) sgn u(t)]/12

where sgn u = +1 if u is positive and -1 if u is negative. Given two inputs u1 and u2 with resultant outputs y1 and y2 respectively, an output y with an input u3 = αu1 + βu2 is expressible as

    y = [5(αu1 + βu2) + (αu1 + βu2) sgn (αu1 + βu2)]/12

To be linear, αy1 + βy2 must equal y, which would be true only if

    αu1 sgn u1 + βu2 sgn u2 = (αu1 + βu2) sgn (αu1 + βu2)

This equality holds only in special cases, such as sgn u1 = sgn u2 = sgn (αu1 + βu2), so that the system is not linear.
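The failure of superposition in Problem 1.6 is easy to exhibit with numbers. A sketch (function names ours): y(u) = [5u + u sgn u]/12 gives u/2 for positive inputs and u/3 for negative ones, so inputs of mixed sign break additivity.

```python
def sgn(u):
    # sgn u = +1 if u is positive, -1 if u is negative
    return 1 if u > 0 else -1 if u < 0 else 0

def y(u):
    # Input-output relation of Problem 1.6
    return (5 * u + u * sgn(u)) / 12

same_sign = y(1) + y(2) == y(1 + 2)        # holds: both inputs positive
mixed_sign = y(3) + y(-3) == y(3 + (-3))   # fails: signs differ
```

Here y(3) + y(-3) = 1.5 - 1.0 = 0.5, while y(0) = 0, confirming the special-case nature of the equality above.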
1.7. Given the abstract object characterized by

    y(t) = x0 e^{t0 - t} + ∫_{t0}^{t} e^{τ - t} u(τ) dτ

Is this time-varying?

This abstract object is that of Example 1.2, with RC = 1 in equation (1.1). By the same procedure used in Example 1.16, it can be shown time-invariant. However, it can also be shown time-invariant by the test given after Definition 1.8. The input time function u(t) is shifted by T to become w(t). Then as can be seen in Fig. 1-7,

    w(t) = u(t - T)

Starting from the same initial state at time t0 + T,

    y(σ) = x0 e^{t0 + T - σ} + ∫_{t0+T}^{σ} e^{τ - σ} u(τ - T) dτ

Let ξ = τ - T:

    y(σ) = x0 e^{t0 + T - σ} + ∫_{t0}^{σ - T} e^{ξ + T - σ} u(ξ) dξ

Evaluating at σ = t + T gives

    y(t + T) = x0 e^{t0 - t} + ∫_{t0}^{t} e^{ξ - t} u(ξ) dξ

which is identical with the output y(t).

Fig. 1-7. Original and Shifted Systems
Supplementary Problems

1.8. Given the spring-mass system shown in Fig. 1-8. What is the physical object, the abstract object, and the state variable?

Fig. 1-8

1.9. Given the hereditary system y(t) = ∫_{-∞}^{t} K(t, τ) u(τ) dτ where K(t, τ) is some single-valued continuously differentiable function of t and τ. What is the state variable? Is the system linear? Is the system time-varying?
1.10. Given the discrete time system x(n + 1) = x(n) + u(n), the series of inputs u(0), u(1), ..., u(k), ..., and the state at step 3, x(3). Find the state variable x(m) at any step m ≥ 0.
1.11. An abstract object is characterized by y(t) = u(t) for t0 ≤ t < t1, and by dy/dt = du/dt for t ≥ t1. It is given that this abstract object will permit discontinuities in y(t) at t1. What is the dimension of the state space for t0 ≤ t < t1 and for t ≥ t1?
1.12. Verify the solution (1.2) of equation (1.1), and then verify the solution (1.5) of equation (1.4). Finally, verify the solution x1 + 3x2 = C(2x1 + u2)² of Example 1.9.
1.13. Draw the trajectories in two dimensional state space of the system d²y/dt² + y = 0.
1.14. Given the circuit diagram of Fig. 1-9(a), where the nonlinear device NL has the voltage-current relation shown in Fig. 1-9(b). A mathematical equation is formed using i = C dv/dt and v = f(i) = f(C dv/dt):

    dv/dt = (1/C) f⁻¹(v)

where f⁻¹ is the inverse function. Also, v(t0) is taken to be the initial voltage on the capacitor. Is this mathematical relation a dynamical system with a state space description?

Fig. 1-9
1.15. Is the mathematical equation y² + 1 = u a dynamical system?
1.16. Is the system dy/dt = t^y time-varying? Is it linear?
1.17. Is the system dy/dt = 1/y time-varying? Is it linear?
1.18. Verify that the system of Example 1.16 is nonlinear.
1.19. Show equation (1.10) is linear.

1.20. Show equation (1.10) is time-invariant if the coefficients αi and βj, for i = 1, 2, ..., n and j = 0, 1, ..., n, are not functions of time.

1.21. Given dx1/dt = x1 + 2, dx2/dt = x2 + u, y = x1 + x2.
(a) Does this system have a state space description?
(b) What is the input-output relation?
(c) Is the system linear?
1.22. What is the state space description for the anticipatory system y(t) = u(t + T) in which only condition 5 for dynamical systems is violated?

1.23. Is the system dx/dt = e^t u, y = e^{-t} x time-varying?

1.24. What is the state space description for the differentiator y = du/dt?

1.25. Is the equation y = f(t)u a dynamical system given values of f(t) for t0 ≤ t ≤ t1 only?
Answers to Supplementary Problems
1.8. The physical object is the spring-mass system, the abstract object is all x obeying M d²x/dt² + kx = 0, and the state variable is the vector having elements x(t) and dx/dt. This system has a zero input.
1.9. It is not possible to represent

    y(t) = y(t0) + ∫_{t0}^{t} K(t, τ) u(τ) dτ    unless K(t, τ) = K(t0, τ)

(This is true for τ ≥ t0 - T for the delay line.) For a general K(t, τ), the state at time t must be taken as u(τ) for -∞ < τ < t. The system is linear and time-varying, unless K(t, τ) = K(t - τ), in which case it is time-invariant.
1.10. x(k) = x(3) + Σ_{i=3}^{k-1} u(i) for k = 4, 5, ... and x(k) = x(3) - Σ_{i=1}^{3-k} u(3 - i) for k = 0, 1, 2, 3. Note we need not "tie" ourselves to an "initial" condition, because any one of the values x(i) will serve as the state for i = 0, 1, ..., n.
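The answer to Problem 1.10 can be checked by running the recursion x(n + 1) = x(n) + u(n) forwards and backwards from x(3). A sketch (function name ours):

```python
def state_from_x3(x3, u, m):
    """State of x(n+1) = x(n) + u(n) at step m, given x(3) and inputs u[0], u[1], ...

    Forward  (m >= 3):  x(m) = x(3) + sum of u(i) for i = 3 .. m-1
    Backward (m <  3):  x(m) = x(3) - sum of u(i) for i = m .. 2
    """
    if m >= 3:
        return x3 + sum(u[3:m])
    return x3 - sum(u[m:3])

u = [10, 20, 30, 40, 50]        # u(0), u(1), ...
x3 = 7
x5 = state_from_x3(x3, u, 5)    # x(5) = x(3) + u(3) + u(4)
x0 = state_from_x3(x3, u, 0)    # x(0) = x(3) - u(2) - u(1) - u(0)
```

Running the recursion forward again from x(0) should return x(3), confirming that any one of the x(i) serves as the state.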
1.11. The dimension of the state space is zero for t0 ≤ t < t1, and one for t ≥ t1. Because the state space is time-dependent in general, it must be a family of sets, one for each time t. Usually it is possible to consider a single set of input-output pairs over all t, i.e. the state space is time-invariant. Abstract objects possessing this property are called uniform abstract objects. This problem illustrates a nonuniform abstract object.
1.12. Plugging the solutions into the equations will verify them.
1.13. The trajectories are circles. It is a dynamical system.
1.14. It is a dynamical system, because v(t) is real and defined for all t ≥ t0. However, care must be taken in giving a state space description, because f⁻¹(v) is not single-valued. The state space description must include a means of determining which of the lines 1-2, 2-3 or 3-4 a particular voltage corresponds to.
1.15. No, because the input u < 1 results in an imaginary output.
1.16. It is linear and time-varying.
1.17. It is nonlinear and time-invariant.
1.21. (a) Yes
(b) y(t) = e^{t - t0}[x1(t0) + x2(t0)] + ∫_{t0}^{t} e^{t - τ}[u(τ) + 2] dτ
(c) No
1.22. No additional knowledge is needed other than the input, so the state is zero dimensional and the state space description is y(t) = u(t + T). It is not a dynamical system because it is not realizable physically if u(t) is unknown in advance for all t ≥ t0. However, its state space description violates only condition 5, so that equations other than dynamical systems can have a state space description if the requirement of causality is waived.
1.23. Yes. y(t) = e^{-t} x0 + ∫_{t0}^{t} e^{τ - t} u(τ) dτ, which depends on when the system is started. If x0 = 0, the system is equivalent to dx/dt = -x + u where y = x, which is time-invariant.
1.24. If we define

    du/dt = lim_{ε→0} [u(t + ε) - u(t)]/ε

so that y(t0) is defined, then the state space is zero dimensional and knowledge of u(t) determines y(t) for all t ≥ t0. Other definitions of du/dt may require knowledge of u(t) for t0 - ε ≤ t ≤ t0, which would be the state in that case.
1.25. Obviously y(t) is not defined for t > t1, so that as stated the equation is not a dynamical system. However, if the behavior of engineering interest lies between t0 and t1, merely append y = 0u for t ≥ t1 to the equation and a dynamical system results.
chapter 2
Methods for Obtaining
the State Equations
2.1 FLOW DIAGRAMS
Flow diagrams are a simple diagrammatical means of obtaining the state equations.
Because only linear differential or difference equations are considered here, only four basic
objects are needed. The utility of flow diagrams results from the fact that no differentiating devices are permitted.
Definition 2.1: A summer is a diagrammatical abstract object having n inputs u1(t), u2(t), ..., un(t) and one output y(t) that obey the relationship

    y(t) = ±u1(t) ± u2(t) ± ... ± un(t)

where the sign is positive or negative as indicated in Fig. 2-1, for example.

Fig. 2-1. Summer

Definition 2.2: A scalor is a diagrammatical abstract object having one input u(t) and one output y(t) such that the input is scaled up or down by the time function a(t) as indicated in Fig. 2-2. The output obeys the relationship y(t) = a(t) u(t).

Fig. 2-2. Scalor

Definition 2.3: An integrator is a diagrammatical abstract object having one input u(t), one output y(t), and perhaps an initial condition y(t0) which may be shown or not, as in Fig. 2-3. The output obeys the relationship

    y(t) = y(t0) + ∫_{t0}^{t} u(τ) dτ

Fig. 2-3. Integrator at Time t
Definition 2.4: A delayor is a diagrammatical abstract object having one input u(k), one output y(k), and perhaps an initial condition y(l) which may be shown or not, as in Fig. 2-4. The output obeys the relationship

    y(j + l + 1) = u(j + l)    for j = 0, 1, 2, ...

Fig. 2-4. Delayor at Time k
2.2 PROPERTIES OF FLOW DIAGRAMS
Any set of time-varying or time-invariant linear differential or difference equations of the form (1.10) or (1.11) can be represented diagrammatically by an interconnection of the foregoing elements. Also, any given transfer function can be represented merely by rewriting it in terms of (1.10) or (1.11). Furthermore, multiple input and/or multiple output systems can be represented in an analogous manner.
Equivalent interconnections can be made to represent the same system.
Example 2.1.
Given the system

    dy/dt = ay + au          (2.1)

with initial condition y(t0). An interconnection for this system is shown in Fig. 2-5.

Fig. 2-5

Since a is a constant function of time, the integrator and scalor can be interchanged if the initial condition is adjusted accordingly, as shown in Fig. 2-6.

Fig. 2-6

This interchange could not be done if a were a general function of time. In certain special cases it is possible to use integration by parts to accomplish this interchange. If a(t) = t, then (2.1) can be integrated to

    y(t) = y(t0) + ∫_{t0}^{t} τ[y(τ) + u(τ)] dτ
Using integration by parts,

    y(t) = y(t0) - ∫_{t0}^{t} ∫_{t0}^{τ} [y(ξ) + u(ξ)] dξ dτ + t ∫_{t0}^{t} [y(τ) + u(τ)] dτ

which gives the alternate flow diagram shown in Fig. 2-7.

Fig. 2-7
Integrators are used in continuous time systems, delayors in discrete time (sampled data) systems. Discrete time diagrams can be drawn by considering the analogous continuous time system, and vice versa. For time-invariant systems, the diagrams are almost identical, but the situation is not so easy for time-varying systems.
Example 2.2.
Given the discrete time system

    y(k + l + 1) = a y(k + l) + a u(k + l)          (2.2)

with initial condition y(l). The analogous continuous time system is equation (2.1), where d/dt takes the place of a unit advance in time. This is more evident by taking the Laplace transform of (2.1),

    sY(s) - y(t0) = aY(s) + aU(s)

and the z transform of (2.2),

    zY(z) - zy(l) = aY(z) + aU(z)

Hence from Fig. 2-5 the diagram for (2.2) can be drawn immediately as in Fig. 2-8.

Fig. 2-8
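Equation (2.2) can be simulated directly: the delayor stores the current output sample, and each step applies the update. A sketch (ours, with l = 0), checked against the discrete closed-form solution y(k) = a^k y(0) + Σ_{i<k} a^{k-i} u(i), the analog of the continuous solution in Example 2.3:

```python
def simulate(a, y0, u):
    # One delayor holding y(k); update y(k+1) = a*y(k) + a*u(k)
    y = [y0]
    for uk in u:
        y.append(a * y[-1] + a * uk)
    return y

def closed_form(a, y0, u, k):
    # y(k) = a^k y(0) + sum over i < k of a^(k-i) u(i)
    return a**k * y0 + sum(a**(k - i) * u[i] for i in range(k))

a, y0, u = 0.5, 2.0, [1.0, -1.0, 3.0]
ys = simulate(a, y0, u)
```

The recursion and the closed form agree at every step, which is the discrete counterpart of verifying the state variable as in Examples 2.3 and 2.4.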
If the initial condition of the integrator or delayor is arbitrary, the output of that
integrator or delayor can be taken to be a state variable.
Example 2.3.
The state variable for (2.1) is y(t), the output of the integrator. To verify this, the solution to equation (2.1) is

    y(t) = y(t0) e^{a(t - t0)} + a ∫_{t0}^{t} e^{a(t - τ)} u(τ) dτ

Note y(t0) is the state at t0, so the state variable is y(t).

Example 2.4.
The state variable for equation (2.2) is y(k + l), the output of the delayor. This can be verified in a manner similar to the previous example.
Example 2.5.
From Fig. 2-7, the state is the output of the second integrator only, because the initial condition of the
first integrator is specified to be zero. This is true because Fig. 2-7 and Fig. 2-5 are equivalent systems.
Example 2.6.
A summer or a scalor has no state associated with it, because the output is completely determined by
the input.
2.3 CANONICAL FLOW DIAGRAMS FOR TIME-INVARIANT SYSTEMS

Consider a general time-invariant linear differential equation with one input and one output, with the letter p denoting the time derivative d/dt. Only differential equations need be considered, because by Section 2.2 discrete time systems follow analogously.

    p^n y + α1 p^{n-1} y + ... + α(n-1) p y + αn y = β0 p^n u + β1 p^{n-1} u + ... + β(n-1) p u + βn u          (2.3)

This can be rewritten as

    p^n(y - β0 u) + p^{n-1}(α1 y - β1 u) + ... + p(α(n-1) y - β(n-1) u) + αn y - βn u = 0

because αi p^{n-i} y = p^{n-i} αi y, which is not true if αi depends on time. Dividing through by p^n and rearranging gives

    y = β0 u + (1/p)(β1 u - α1 y) + ... + (1/p^{n-1})(β(n-1) u - α(n-1) y) + (1/p^n)(βn u - αn y)          (2.4)

from which the flow diagram shown in Fig. 2-9 can be drawn starting with the output y at the right and working to the left.

Fig. 2-9. Flow Diagram of the First Canonical Form

The output of each integrator is labeled as a state variable. The summer equations for the state variables have the form

    y = x1 + β0 u
    dx1/dt = -α1 y + x2 + β1 u
    dx2/dt = -α2 y + x3 + β2 u
    ...................
    dx(n-1)/dt = -α(n-1) y + xn + β(n-1) u
    dxn/dt = -αn y + βn u          (2.5)
Using the first equation in (2.5) to eliminate y, the differential equations for the state variables can be written in the canonical matrix form

         [x1    ]   [-α1      1  0  ...  0] [x1    ]   [β1 - α1 β0        ]
         [x2    ]   [-α2      0  1  ...  0] [x2    ]   [β2 - α2 β0        ]
    d/dt [ ...  ] = [ ...                 ] [ ...  ] + [ ...              ] u          (2.6)
         [x(n-1)]   [-α(n-1)  0  0  ...  1] [x(n-1)]   [β(n-1) - α(n-1) β0]
         [xn    ]   [-αn      0  0  ...  0] [xn    ]   [βn - αn β0        ]

We will call this the first canonical form. Note the 1s above the diagonal and the αs down the first column of the n × n matrix. Also, the output can be written in terms of the state vector

    y = (1  0  ...  0) x + β0 u          (2.7)

Note this form can be written down directly from the original equation (2.3).
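Since the first canonical form follows mechanically from the coefficients of (2.3), it is easy to construct by program. A sketch (function name ours) that builds A, b, c, d from α1, ..., αn and β0, ..., βn, checked against the system of Problem 2.1:

```python
def first_canonical(alpha, beta):
    """First canonical form (2.6)-(2.7) for
    p^n y + a1 p^(n-1) y + ... + an y = b0 p^n u + ... + bn u.

    alpha = [a1, ..., an], beta = [b0, b1, ..., bn].
    Returns (A, b, c, d) with dx/dt = A x + b u, y = c x + d u.
    """
    n = len(alpha)
    b0 = beta[0]
    # -alphas down the first column, 1s above the diagonal
    A = [[(-alpha[i] if j == 0 else 0.0) + (1.0 if j == i + 1 else 0.0)
          for j in range(n)] for i in range(n)]
    b = [beta[i + 1] - alpha[i] * b0 for i in range(n)]
    c = [1.0] + [0.0] * (n - 1)
    return A, b, c, b0

# y'' + 5y' + 6y = u' + u: alpha = [5, 6], beta = [0, 1, 1]
A, b, c, d = first_canonical([5.0, 6.0], [0.0, 1.0, 1.0])
```

For this input the result should be A = [[-5, 1], [-6, 0]], b = (1, 1), c = (1, 0), d = 0, the same matrices derived by flow diagram in Problem 2.1.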
Another useful form can be obtained by turning the first canonical flow diagram "backwards." This change is accomplished by reversing all arrows and integrators, interchanging summers and connection points, and interchanging input and output. This is a heuristic method of deriving a specific form that will be developed further in Chapter 7.

Fig. 2-10. Flow Diagram of the Second Canonical (Phase-variable) Form

Here the output of each integrator has been relabeled. The equations for the state variables are now

    dx1/dt = x2
    dx2/dt = x3
    ...................
    dx(n-1)/dt = xn
    dxn/dt = -αn x1 - α(n-1) x2 - ... - α1 xn + u          (2.8)
In matrix form, (2.8) may be written as

         [x1    ]   [ 0    1     0  ...  0 ] [x1    ]   [0]
         [x2    ]   [ 0    0     1  ...  0 ] [x2    ]   [0]
    d/dt [ ...  ] = [ ...                  ] [ ...  ] + [.] u          (2.9)
         [x(n-1)]   [ 0    0     0  ...  1 ] [x(n-1)]   [0]
         [xn    ]   [-αn  -α(n-1)  ... -α1 ] [xn    ]   [1]

and

    y = (βn - αn β0   β(n-1) - α(n-1) β0   ...   β1 - α1 β0) x + β0 u          (2.10)

This will be called the second canonical form, or phase-variable canonical form. Here the 1s are above the diagonal but the αs go across the bottom row of the n × n matrix. By eliminating the state variables x, the general input-output relation (2.3) can be verified. The phase-variable canonical form can also be written down upon inspection of the original differential equation (2.3).
2.4 JORDAN FLOW DIAGRAM

The general time-invariant linear differential equation (2.3) for one input and one output can be written as

    y = [β0 p^n + β1 p^{n-1} + ... + β(n-1) p + βn] / [p^n + α1 p^{n-1} + ... + α(n-1) p + αn] u          (2.11)

By dividing once by the denominator, this becomes

    y = β0 u + [(β1 - α1 β0) p^{n-1} + (β2 - α2 β0) p^{n-2} + ... + (βn - αn β0)] / [p^n + α1 p^{n-1} + ... + α(n-1) p + αn] u          (2.12)

Consider first the case where the denominator polynomial factors into distinct poles λi, i = 1, 2, ..., n. Distinct means λi ≠ λj for i ≠ j, that is, no repeated roots. Because most practical systems are stable, the λi usually have negative real parts.

    p^n + α1 p^{n-1} + ... + α(n-1) p + αn = (p - λ1)(p - λ2)···(p - λn)          (2.13)

A partial fraction expansion can now be made having the form

    y = β0 u + [ρ1/(p - λ1)] u + [ρ2/(p - λ2)] u + ... + [ρn/(p - λn)] u          (2.14)

Here the residue ρi can be calculated as

    ρi = [(β1 - α1 β0)λi^{n-1} + (β2 - α2 β0)λi^{n-2} + ... + (β(n-1) - α(n-1) β0)λi + (βn - αn β0)] / [(λi - λ1)(λi - λ2) ··· (λi - λ(i-1))(λi - λ(i+1)) ··· (λi - λn)]          (2.15)

The partial fraction expansion (2.14) gives a very simple flow diagram, shown in Fig. 2-11 following.
Fig. 2-11. Jordan Flow Diagram for Distinct Roots
Note that because ρi and λi can be complex numbers, the states xi are complex-valued functions of time. The state equations assume the simple form

    dx1/dt = λ1 x1 + u
    dx2/dt = λ2 x2 + u
    ...................
    dxn/dt = λn xn + u
    y = β0 u + ρ1 x1 + ρ2 x2 + ... + ρn xn          (2.16)
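For distinct poles, the residue formula (2.15) is just the numerator of (2.12) evaluated at λi, divided by the product of pole differences. A numerical sketch (function names ours), checked on (p + 1)/[(p + 2)(p + 3)], whose residues are worked by hand in Problem 2.3:

```python
def residues(lam, num):
    """Residues rho_i of num(p) / prod(p - lam_i) at each distinct pole.

    lam: list of distinct poles; num: numerator coefficients with the
    highest power first (degree less than len(lam)), as in (2.12)."""
    def polyval(coeffs, x):
        v = 0.0
        for ck in coeffs:
            v = v * x + ck      # Horner evaluation of the numerator
        return v
    rho = []
    for i, li in enumerate(lam):
        denom = 1.0
        for j, lj in enumerate(lam):
            if j != i:
                denom *= (li - lj)
        rho.append(polyval(num, li) / denom)
    return rho

# (p + 1)/((p + 2)(p + 3)): poles -2 and -3, numerator p + 1
r = residues([-2.0, -3.0], [1.0, 1.0])
```

The result should be ρ1 = -1 at p = -2 and ρ2 = 2 at p = -3, agreeing with the expansion used in Problem 2.3.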
Consider now the general case. For simplicity, only one multiple root (actually one Jordan block, see Section 4.4, page 73) will be considered, because the results are easily extended to the general case. Then the denominator in (2.12) factors to

    p^n + α1 p^{n-1} + ... + α(n-1) p + αn = (p - λ1)^ν (p - λ(ν+1)) ··· (p - λn)          (2.17)

instead of (2.13). Here there are ν identical roots. Performing the partial fraction expansion for this case gives

    y = β0 u + ρ1 u/(p - λ1)^ν + ρ2 u/(p - λ1)^{ν-1} + ... + ρν u/(p - λ1) + ρ(ν+1) u/(p - λ(ν+1)) + ... + ρn u/(p - λn)          (2.18)

The residues at the multiple root can be evaluated as

    ρk = lim_{p→λ1} [1/(k - 1)!] d^{k-1}/dp^{k-1} [(p - λ1)^ν f(p)]    for k = 1, 2, ..., ν          (2.19)

where f(p) is the polynomial fraction in p from (2.12). This gives the flow diagram shown in Fig. 2-12 following.
Fig. 2-12. Jordan Flow Diagram with One Multiple Root
The state equations are then

    dx1/dt = λ1 x1 + x2
    dx2/dt = λ1 x2 + x3
    ...................
    dx(ν-1)/dt = λ1 x(ν-1) + xν
    dxν/dt = λ1 xν + u
    dx(ν+1)/dt = λ(ν+1) x(ν+1) + u
    ...................
    dxn/dt = λn xn + u
    y = β0 u + ρ1 x1 + ρ2 x2 + ... + ρn xn          (2.20)
The matrix differential equations associated with this Jordan form are

         [x1    ]   [λ1  1   0  ...  0       ...  0 ] [x1    ]   [0]
         [x2    ]   [0   λ1  1  ...  0       ...  0 ] [x2    ]   [0]
         [ ...  ]   [        ...                    ] [ ...  ]   [.]
    d/dt [xν    ] = [0   ... 0  λ1   0       ...  0 ] [xν    ] + [1] u          (2.21)
         [x(ν+1)]   [0   ... 0  0    λ(ν+1)  ...  0 ] [x(ν+1)]   [1]
         [ ...  ]   [        ...                    ] [ ...  ]   [.]
         [xn    ]   [0   ... 0  0    0       ...  λn] [xn    ]   [1]

and

    y = (ρ1  ρ2  ...  ρn) x + β0 u          (2.22)

In the n × n matrix, there is a diagonal row of 1s above each λ1 of the Jordan block on the diagonal, and then the other λs follow on the diagonal.
Example 2.7.
Derive the Jordan form of the differential system

    ÿ + 2ẏ + y = u̇ + u          (2.23)

Equation (2.23) can be written as y = [(p + 1)/(p² + 2p + 1)] u, whose partial fraction expansion gives

    y = [0/(p + 1)²] u + [1/(p + 1)] u

Figure 2-13 is then the flow diagram.

Fig. 2-13

Because the scalor following x1 is zero, this state is unobservable. The matrix state equations in Jordan form are

    d/dt [x1]   [-1  1] [x1]   [0]
         [x2] = [ 0 -1] [x2] + [1] u

    y = (0  1) [x1]
               [x2]
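That the Jordan realization of Example 2.7 reproduces the original transfer function can be checked by evaluating c(sI - A)⁻¹b at a few test points, where it should equal (s + 1)/(s + 1)² = 1/(s + 1). A numpy sketch (ours):

```python
import numpy as np

A = np.array([[-1.0, 1.0],
              [0.0, -1.0]])   # one 2x2 Jordan block at lambda = -1
b = np.array([0.0, 1.0])
c = np.array([0.0, 1.0])      # rho1 = 0 (unobservable state), rho2 = 1

def transfer(s):
    # c (sI - A)^(-1) b, the transfer function of the realization
    return c @ np.linalg.solve(s * np.eye(2) - A, b)

checks = [abs(transfer(s) - 1.0 / (s + 1.0)) for s in [0.0, 1.0, 2.5]]
```

The same check works for any of the canonical forms in this chapter, since they all realize the same input-output relation.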
2.5 TIME-VARYING SYSTEMS
Finding the state equations of a time-varying system is not as easy as for time-invariant
systems. However, the procedure is somewhat analogous, and so only one method will be
given here.
The general time-varying differential equation of order n with one input and one output is shown again for convenience.

    d^n y/dt^n + α1(t) d^{n-1}y/dt^{n-1} + ... + αn(t) y = β0(t) d^n u/dt^n + β1(t) d^{n-1}u/dt^{n-1} + ... + βn(t) u          (1.10)

Differentiability of the coefficients a suitable number of times is assumed. Proceeding in a manner somewhat similar to the second canonical form, we shall try defining states as in (2.8). However, an amount γi(t) [to be determined] of the input u(t) enters into all the states.

    dx1/dt = x2 + γ1(t) u
    dx2/dt = x3 + γ2(t) u
    ...................
    dx(n-1)/dt = xn + γ(n-1)(t) u
    dxn/dt = -αn(t) x1 - ... - α1(t) xn + γn(t) u
    y = x1 + γ0(t) u          (2.24)

By differentiating y n times and using the relations for each state, each of the unknown γi can be found.
Example 2.8.
Consider the second order equation

    ÿ + α1(t) ẏ + α2(t) y = β0(t) ü + β1(t) u̇ + β2(t) u          (2.26)

Then by (2.24),

    y = x1 + γ0(t) u          (2.27)

and differentiating,

    ẏ = ẋ1 + γ̇0(t) u + γ0(t) u̇          (2.28)

Substituting the first relation of (2.24) into (2.28) gives

    ẏ = x2 + [γ1(t) + γ̇0(t)] u + γ0(t) u̇          (2.29)

Differentiating again,

    ÿ = ẋ2 + [γ̇1(t) + γ̈0(t)] u + [γ1(t) + 2γ̇0(t)] u̇ + γ0(t) ü          (2.30)

From (2.24) we have

    ẋ2 = -α1(t) x2 - α2(t) x1 + γ2(t) u          (2.31)

Now substituting (2.27), (2.29) and (2.30) into (2.31) yields

    ÿ - [γ̇1(t) + γ̈0(t)] u - [γ1(t) + 2γ̇0(t)] u̇ - γ0(t) ü
        = -α1(t){ẏ - [γ1(t) + γ̇0(t)] u - γ0(t) u̇} - α2(t)[y - γ0(t) u] + γ2(t) u          (2.32)
Equating coefficients in (2.26) and (2.32),

    γ2 + γ̇1 + γ̈0 + (γ1 + γ̇0)α1 + γ0 α2 = β2          (2.33)
    γ1 + 2γ̇0 + α1 γ0 = β1          (2.34)
    γ0 = β0          (2.35)

Substituting (2.35) into (2.34),

    γ1 = β1 - 2β̇0 - α1 β0          (2.36)

and putting (2.35) and (2.36) into (2.33),

    γ2 = β̈0 - β̇1 + β2 + (α1 β0 + 2β̇0 - β1)α1 + β0 α̇1 - β0 α2          (2.37)

Using equation (2.24), the matrix state equations become

         [x1  ]   [ 0      1    0  ...  0    ] [x1  ]   [γ1(t)]
         [x2  ]   [ 0      0    1  ...  0    ] [x2  ]   [γ2(t)]
    d/dt [ ...] = [ ...                      ] [ ...] + [ ... ] u          (2.38)
         [xn  ]   [-αn(t)  ...       -α1(t)  ] [xn  ]   [γn(t)]

    y = (1  0  ...  0) x + γ0(t) u
2.6 GENERAL STATE EQUATIONS

Multiple input-multiple output systems can be put in the same canonical forms as single input-single output systems. Due to complexity of notation, they will not be considered here. The input becomes a vector u(t) and the output a vector y(t). The components are the inputs and outputs, respectively. Inspection of matrix equations (2.6), (2.9), (2.21) and (2.38) indicates a similarity of form. Accordingly a general form for the state equations of a linear differential system of order n with m inputs and k outputs is

    dx/dt = A(t)x + B(t)u
    y = C(t)x + D(t)u          (2.39)

where x(t) is an n-vector, u(t) is an m-vector, y(t) is a k-vector, A(t) is an n × n matrix, B(t) is an n × m matrix, C(t) is a k × n matrix, and D(t) is a k × m matrix.

In a similar manner a general form for discrete time systems is

    x(n + 1) = A(n) x(n) + B(n) u(n)
    y(n) = C(n) x(n) + D(n) u(n)          (2.40)

where the dimensions are also similar to the continuous time case.
Specifically, if the system has only one input u and one output y, the differential equations for the system are

    dx/dt = A(t)x + b(t)u
    y = c†(t)x + d(t)u

and similarly for discrete time systems. Here c(t) is taken to be a column vector, and c†(t) denotes the complex conjugate transpose of the column vector. Hence c†(t) is a row vector, and c†(t)x is a scalar. Also, since u, y and d(t) are not boldface, they are scalars. Since these state equations are matrix equations, to analyze their properties a knowledge of matrix analysis is needed before progressing further.
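The general state equations dx/dt = A(t)x + B(t)u, y = C(t)x + D(t)u can be integrated numerically once the coefficient matrices are given. A minimal Euler sketch (ours, specialized to the scalar case n = m = k = 1 for brevity); for the time-invariant case dx/dt = -x + u, y = x with u = 1 and x(0) = 0, the exact answer is y(t) = 1 - e^{-t}.

```python
import math

def simulate(A, B, C, D, u, x0, t0, t1, steps):
    """Euler integration of dx/dt = A(t)x + B(t)u(t), y = C(t)x + D(t)u(t).
    A, B, C, D and u are scalar-valued functions of t; returns y(t1)."""
    dt = (t1 - t0) / steps
    x, t = x0, t0
    for _ in range(steps):
        x = x + dt * (A(t) * x + B(t) * u(t))   # Euler step on the state
        t += dt
    return C(t) * x + D(t) * u(t)

# Time-invariant scalar case dx/dt = -x + u, y = x, u = 1, x(0) = 0
yT = simulate(lambda t: -1.0, lambda t: 1.0, lambda t: 1.0, lambda t: 0.0,
              lambda t: 1.0, 0.0, 0.0, 5.0, 50000)
```

The vector case only changes the scalar arithmetic to matrix-vector products; the time-varying case needs no change at all, since the coefficients are already functions of t.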
Solved Problems

2.1. Find the matrix state equations in the first canonical form for the linear time-invariant differential equation

    ÿ + 5ẏ + 6y = u̇ + u          (2.41)

with initial conditions y(0) = y0, ẏ(0) = ẏ0. Also find the initial conditions on the state variables.
Using p = d/dt, equation (2.41) can be written as p²y + 5py + 6y = pu + u. Dividing by p² and rearranging,

    y = (1/p)(u - 5y) + (1/p²)(u - 6y)

The flow diagram of Fig. 2-14 can be drawn starting from the output at the right.

Fig. 2-14

Next, the outputs of the integrators are labeled the state variables x1 and x2 as shown. Now an equation can be formed using the summer on the left:

    ẋ2 = -6y + u

Similarly, an equation can be formed using the summer on the right:

    ẋ1 = x2 - 5y + u

Also, the output equation is y = x1. Substitution of this back into the previous equations gives

    ẋ1 = -5x1 + x2 + u
    ẋ2 = -6x1 + u          (2.42)
The state equations can then be written in matrix notation as

    d/dt [x1]   [-5  1] [x1]   [1]
         [x2] = [-6  0] [x2] + [1] u

with the output equation

    y = (1  0) [x1]
               [x2]

The initial conditions on the state variables must be related to y0 and ẏ0, the given output initial conditions. The output equation is x1(t) = y(t), so that x1(0) = y(0) = y0. Also, substituting y(t) = x1(t) into (2.42) and setting t = 0 gives

    ẏ(0) = -5y(0) + x2(0) + u(0)

Use of the given initial conditions determines

    x2(0) = ẏ0 + 5y0 - u(0)

These relationships for the initial conditions can also be obtained by referring to the flow diagram at time t = 0.
2.2. Find the matrix state equations in the second canonical form for the equation (2.41) of Problem 2.1, and the initial conditions on the state variables.

The flow diagram (Fig. 2-14) of the previous problem is turned "backwards" to get the flow diagram of Fig. 2-15.

Fig. 2-15

The outputs of the integrators are labeled x1 and x2 as shown. These state variables are different from those in Problem 2.1, but are also denoted x1 and x2 to keep the state vector x(t) notation, as is conventional. Then looking at the summers gives the equations

    y = x1 + x2          (2.43)
    ẋ2 = -6x1 - 5x2 + u          (2.44)

Furthermore, the input to the left integrator is

    ẋ1 = x2          (2.45)

This gives the state equations

    d/dt [x1]   [ 0  1] [x1]   [0]
         [x2] = [-6 -5] [x2] + [1] u

and

    y = (1  1) [x1]
               [x2]

The initial conditions are found using (2.43),

    y0 = x1(0) + x2(0)          (2.46)

and its derivative

    ẏ0 = ẋ1(0) + ẋ2(0)
Use of (2.44) and (2.45) then gives

    ẏ0 = x2(0) - 6x1(0) - 5x2(0) + u(0)          (2.47)

Equations (2.46) and (2.47) can be solved for the initial conditions

    x1(0) = -2y0 - ẏ0/2 + u(0)/2
    x2(0) = 3y0 + ẏ0/2 - u(0)/2
2.3. Find the matrix state equations in Jordan canonical form for equation (241) of Prob-
lem 2.1, and the initial conditions on the state variables.
The transfer function is
y
p + 1
p^ + 5p + 6
p + 1
(p + 2)(p + S)'
A partial fraction expansion gives
- -1 , 2
p+2 p+S
From this the flow diagram can be drawn:
+
o
+
+
o
+
O^
-H -1
X2
<3H
-H 2
Fig. 2-16
6-
The state equations can then be written from the equalities at each summer:
    d/dt (x1; x2) = (−2 0; 0 −3)(x1; x2) + (1; 1)u
    y = (−1 2)(x1; x2)
From the output equation and its derivative at time t = 0,
    y0 = 2x2(0) − x1(0)
    ẏ0 = 2ẋ2(0) − ẋ1(0)
The state equation is used to eliminate ẋ1(0) and ẋ2(0):
    ẏ0 = 2x1(0) − 6x2(0) + u(0)
Solving these equations for x1(0) and x2(0) gives
    x1(0) = u(0) − 3y0 − ẏ0
    x2(0) = (1/2)[u(0) − 2y0 − ẏ0]
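The partial fraction expansion underlying the Jordan form can be spot-checked numerically; this sketch is illustrative and not part of the original text.

```python
# Check that (p+1)/((p+2)(p+3)) = -1/(p+2) + 2/(p+3) at a few
# arbitrary test points (any p other than the poles -2, -3 will do).
for p in (0.0, 1.5, -0.7, 10.0):
    lhs = (p + 1) / ((p + 2) * (p + 3))
    rhs = -1 / (p + 2) + 2 / (p + 3)
    assert abs(lhs - rhs) < 1e-12
print("partial fraction expansion verified")
```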
2.4. Given the state equations
    d/dt (x1; x2) = (0 1; −6 −5)(x1; x2) + (0; 1)u
    y = (1 1)(x1; x2)
Find the differential equation relating the input to the output.
In operator notation, the state equations are
    px1 = x2
    px2 = −6x1 − 5x2 + u
    y = x1 + x2
Eliminating x1 and x2 then gives
    p²y + 5py + 6y = pu + u
This is equation (2.41) of Example 2.1, and the given state equations were derived from this equation in Problem 2.2. Therefore this is a way to check the results.
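Another way to perform this check numerically (a sketch under the stated matrices, not from the book) is to compare C(sI − A)⁻¹B with (s + 1)/(s² + 5s + 6) at a few test points:

```python
import numpy as np

# State matrices of Problem 2.4 and the transfer function they should realize.
A = np.array([[0.0, 1.0], [-6.0, -5.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])

for s in (0.5, 1.0, 2.0, -0.5):      # arbitrary points away from the poles
    G_ss = (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0]
    G_tf = (s + 1) / (s**2 + 5*s + 6)
    assert abs(G_ss - G_tf) < 1e-10
print("C(sI - A)^-1 B matches (s+1)/(s^2+5s+6)")
```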
2.5. Given the feedback system of Fig. 2-17 find a state space representation of this
closed loop system.
    G(s) = 1/(s + 1),    H(s) = 1/(s + 3)
Fig. 2-17
The transfer function diagram is almost in flow diagram form already. Using the Jordan
canonical form for the plant G(s) and the feedback H(s) separately gives the flow diagram of
Fig. 2-18.
Fig. 2-18
Note G(s) in Jordan form is enclosed by the dashed lines. Similarly the part for H(s) was drawn, and then the transfer function diagram is used to connect the parts. From the flow diagram,
    d/dt (x1; x2) = (−1 −1; 1 −3)(x1; x2) + (1; 0)r,    c(t) = (1 0)(x1; x2)
2.6. Given the linear, time-invariant, multiple input–multiple output, discrete-time system
    y1(n+2) + α1y1(n+1) + α2y1(n) + γ3y2(n+1) + γ4y2(n) = β1u1(n) + δ1u2(n)
    y2(n+1) + γ1y2(n) + α3y1(n+1) + α4y1(n) = β2u1(n) + δ2u2(n)
Put this in the form
    x(n+1) = Ax(n) + Bu(n)
    y(n) = Cx(n) + Du(n)
where
    y(n) = (y1(n); y2(n)),    u(n) = (u1(n); u2(n))
The first canonical form will be used. Putting the given input-output equations into z operations (analogous to p operations of continuous time systems),
    z²y1 + α1zy1 + α2y1 + γ3zy2 + γ4y2 = β1u1 + δ1u2
    zy2 + γ1y2 + α3zy1 + α4y1 = β2u1 + δ2u2
Dividing by z² and z respectively and solving for y1 and y2,
    y1 = (1/z)(−α1y1 − γ3y2) + (1/z²)(β1u1 + δ1u2 − α2y1 − γ4y2)
    y2 = −α3y1 + (1/z)(β2u1 + δ2u2 − γ1y2 − α4y1)
Starting from the right, the flow diagram can be drawn as shown in Fig. 2-19.
Fig. 2-19
Note that a delayer with arbitrary initial condition is needed only where a state variable is assigned; the remaining signals follow algebraically from the states and inputs. Reading the equalities at the three delayers and summers of Fig. 2-19 gives
    x(n+1) = (−α1+γ3α3  −γ3  1; −α4+γ1α3  −γ1  0; −α2+γ4α3  −γ4  0) x(n) + (0 0; β2 δ2; β1 δ1) u(n)
    y(n) = (1 0 0; −α3 1 0) x(n)
2.7. Write the matrix state equations for a general time-varying second order discrete time equation, i.e. find matrices A(n), B(n), C(n), D(n) such that
    x(n+1) = A(n)x(n) + B(n)u(n)
    y(n) = C(n)x(n) + D(n)u(n)    (2.48)
given the discrete time equation
    y(n+2) + α1(n)y(n+1) + α2(n)y(n) = β0(n)u(n+2) + β1(n)u(n+1) + β2(n)u(n)    (2.49)
Analogously with the continuous time equations, try
    x1(n+1) = x2(n) + γ1(n)u(n)    (2.50)
    x2(n+1) = −α1(n)x2(n) − α2(n)x1(n) + γ2(n)u(n)    (2.51)
    y(n) = x1(n) + γ0(n)u(n)    (2.52)
Stepping (2.52) up one and substituting (2.50) gives
    y(n+1) = x2(n) + γ1(n)u(n) + γ0(n+1)u(n+1)    (2.53)
Stepping (2.53) up one and substituting (2.51) yields
    y(n+2) = −α1(n)x2(n) − α2(n)x1(n) + γ2(n)u(n) + γ1(n+1)u(n+1) + γ0(n+2)u(n+2)    (2.54)
Substituting (2.52), (2.53), (2.54) into (2.49) and equating coefficients of u(n), u(n+1) and u(n+2) gives
    γ0(n) = β0(n−2)
    γ1(n) = β1(n−1) − α1(n−1)β0(n−2)
    γ2(n) = β2(n) − α1(n)β1(n−1) + [α1(n)α1(n−1) − α2(n)]β0(n−2)
In matrix form this is
    (x1(n+1); x2(n+1)) = (0 1; −α2(n) −α1(n))(x1(n); x2(n)) + (γ1(n); γ2(n))u(n)
    y(n) = (1 0)(x1(n); x2(n)) + γ0(n)u(n)
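The γ(n) formulas can be checked by simulation. In the sketch below (not from the text), a1, a2, b0, b1, b2 are randomly chosen stand-ins for α1(n), α2(n), β0(n), β1(n), β2(n); a state recursion initialized consistently at n0 = 2 must then reproduce the direct recursion (2.49).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12
# random time-varying coefficients and input; a1, a2, b0, b1, b2 are
# stand-ins for alpha1(n), alpha2(n), beta0(n), beta1(n), beta2(n)
a1 = rng.standard_normal(N + 2); a2 = rng.standard_normal(N + 2)
b0 = rng.standard_normal(N + 2); b1 = rng.standard_normal(N + 2)
b2 = rng.standard_normal(N + 2); u = rng.standard_normal(N + 3)

def g0(n): return b0[n - 2]                       # gamma0(n)
def g1(n): return b1[n - 1] - a1[n - 1] * b0[n - 2]
def g2(n): return (b2[n] - a1[n] * b1[n - 1]
                   + (a1[n] * a1[n - 1] - a2[n]) * b0[n - 2])

# direct recursion, started from rest
y = np.zeros(N + 2)
for n in range(N):
    y[n + 2] = (-a1[n] * y[n + 1] - a2[n] * y[n]
                + b0[n] * u[n + 2] + b1[n] * u[n + 1] + b2[n] * u[n])

# state recursion, initialized consistently at n0 = 2
n0 = 2
x1 = y[n0] - g0(n0) * u[n0]
x2 = y[n0 + 1] - g0(n0 + 1) * u[n0 + 1] - g1(n0) * u[n0]
for n in range(n0, N):
    ys = x1 + g0(n) * u[n]                        # y(n) = x1(n) + g0(n)u(n)
    assert abs(ys - y[n]) < 1e-6 * max(1.0, abs(y[n]))
    x1, x2 = x2 + g1(n) * u[n], -a1[n]*x2 - a2[n]*x1 + g2(n)*u[n]
print("time-varying canonical form reproduces the direct recursion")
```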
2.8. Given the time-varying second order continuous time equation with zero input,
    ÿ + a1(t)ẏ + a2(t)y = 0    (2.55)
write this in a form corresponding to the first canonical form
    d/dt (x1; x2) = (−a1(t) 1; −a2(t) 0)(x1; x2)    (2.56)
    y = (γ1(t) γ2(t))(x1; x2)    (2.57)
To do this, differentiate (2.57) and substitute for ẋ1 and ẋ2 from (2.56),
    ẏ = (γ̇1 − γ1a1 − γ2a2)x1 + (γ1 + γ̇2)x2    (2.58)
Differentiating again and substituting for ẋ1 and ẋ2 as before,
    ÿ = (γ̈1 − 2a1γ̇1 − ȧ1γ1 − ȧ2γ2 − 2a2γ̇2 − a2γ1 + a1a2γ2 + a1²γ1)x1 + (2γ̇1 − a1γ1 − a2γ2 + γ̈2)x2    (2.59)
Substituting (2.57), (2.58) and (2.59) into (2.55) and equating coefficients of x1 and x2 gives the equations
    γ̈1 − a1γ̇1 − ȧ1γ1 − 2a2γ̇2 − ȧ2γ2 = 0
    γ̈2 + a1γ̇2 + 2γ̇1 = 0
In this case γ1(t) may be taken to be zero, and any non-trivial γ2(t) satisfying
    2a2γ̇2 + ȧ2γ2 = 0  and  γ̈2 + a1γ̇2 = 0    (2.60)
will give the desired canonical form.
This problem illustrates the utility of the given time-varying form (2.38). It may always be found by differentiating known functions of time. Other forms usually involve the solution of equations such as (2.60), which may be quite difficult, or require differentiation of the a_i(t). Addition of an input contributes even more difficulty. However, in a later chapter forms analogous to the first canonical form will be given.
Supplementary Problems
2.9. Given the discrete time equation y(n+2) + 3y(n+1) + 2y(n) = u(n+1) + 3u(n), find the matrix state equations in (i) the first canonical form, (ii) the second canonical form, (iii) the Jordan canonical form.
2.10. Find a matrix state equation for the multiple input–multiple output system
    ÿ1 + α1ẏ1 + α2y1 + γ3ẏ2 + γ4y2 = β1u1 + δ1u2
    ÿ2 + γ1ẏ2 + γ2y2 + α3ẏ1 + α4y1 = β2u1 + δ2u2
2.11. Write the matrix state equations for the system of Fig. 2-20, using Fig. 2-20 directly.
Fig. 2-20
2.12. Consider the plant dynamics and feedback compensation illustrated in Fig. 2-21. Assuming the initial conditions are specified in the form v(0), v̇(0), v̈(0), v⃛(0), w(0) and ẇ(0), write a state space equation for the plant plus compensation in the form ẋ = Ax + Bu and show the relation between the specified initial conditions and the components of x(0).
    Plant:  500/[p(p + 6)(p² + p + 100)]
    Feedback compensation:  (p² + p + 100)/(p² + 2p + 100)
Fig. 2-21
2.13. The procedure of turning "backwards" the flow diagram of the first canonical form to obtain the second canonical form was never justified. Do this by verifying that equations (2.8) satisfy the original input-output relationship (2.5). Why can't the time-varying flow diagram corresponding to equations (2.24) be turned backwards to get another canonical form?
2.14. Given the linear time-varying differential equation
    ÿ + a1(t)ẏ + a2(t)y = β0(t)ü + β1(t)u̇ + β2(t)u
with the initial conditions y(0) = y0, ẏ(0) = ẏ0.
(i) Draw the flow diagram and indicate the state variables.
(ii) Indicate on the flow diagram the values of the scalars at all times.
(iii) Write down the initial conditions of the state variables.
2.15. Verify equation (2.37) using equation (2.25).
2.16. The simplified equations of a d-c motor are
    Motor armature:  Ri + L di/dt = V − K_f dθ/dt
    Inertial load:  K_T i = J d²θ/dt² + B dθ/dt
Obtain a matrix state equation relating the input voltage V to the output shaft angle θ using a state vector
    x = (i; θ; dθ/dt)
2.17. The equations describing the time behavior of the neutrons in a nuclear reactor are
    Prompt neutrons:  l ṅ = (ρ(t) − β)n + Σ_{i=1}^{6} λ_i C_i
    Delayed neutrons:  Ċ_i = β_i n − λ_i C_i
where β = Σ_{i=1}^{6} β_i and ρ(t) is the time-varying reactivity, perhaps induced by control rod motion. Write the matrix state equations.
2.18. Assume the simplified equations of motion for a missile are given by
    Lateral translation:  z̈ + K1φ + K2α + K3β = 0
    Rotation:  φ̈ + K4α + K5β = 0
    Angle of attack:  α = φ − K6ż
    Rigid body bending moment:  M(l) = K_a(l)α + K_b(l)β
where z = lateral translation, φ = attitude, α = angle of attack, β = engine deflection, and M(l) = bending moment at station l. Obtain a state space equation in the form
    dx/dt = Ax + Bu,    y = Cx + Du
where u = β and y = (φ; M(l)), taking as the state vector x
(i) x =    (ii) x =    (iii) x =
2.19. Consider the time-varying electrical network of Fig. 2-22. The voltages across the inductors and the currents in the capacitors can be expressed by the relations
    d/dt(L1i1) = L1 di1/dt + i1 dL1/dt = e_a − e1
    d/dt(C1e1) = C1 de1/dt + e1 dC1/dt = i1 − i2
    d/dt(L2i2) = L2 di2/dt + i2 dL2/dt = e1 − e_b
It is more convenient to take as states the inductor fluxes
    p1 = ∫_{t0}^{t} (e_a − e1) dt + p1(t0)
    p2 = ∫_{t0}^{t} (e1 − e_b) dt + p2(t0)
and the capacitor charge
    q1 = ∫_{t0}^{t} (i1 − i2) dt + q1(t0)
Obtain a state space equation for the network in the form
    dx/dt = A(t)x + B(t)u
    y(t) = C(t)x + D(t)u
where the state vector x, input vector u, and output vector y are
    (i) x = (p1; q1; p2), u = (e_a; e_b), y = (i1; e1; i2)
    (ii) x = (i1; e1; i2), u = (e_a; e_b), y = (i1; e1; i2)
Fig. 2-22
2.20. Given the quadratic Hamiltonian H = (1/2)qᵀVq + (1/2)pᵀTp, where q is a vector of n generalized coordinates, p is a vector of corresponding conjugate momenta, V and T are n × n matrices corresponding to the potential and kinetic energy respectively, and the superscript ᵀ on a vector denotes transpose. Write a set of matrix state equations to describe the state.
Answers to Supplementary Problems
2.9. (i) x(n+1) = (−3 1; −2 0)x(n) + (1; 3)u(n),    y(n) = (1 0)x(n)
    (ii) x(n+1) = (0 1; −2 −3)x(n) + (0; 1)u(n),    y(n) = (3 1)x(n)
    (iii) x(n+1) = (−2 0; 0 −1)x(n) + (1; 1)u(n),    y(n) = (−1 2)x(n)
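As a sketch (not in the original), the three forms for Problem 2.9 can be checked against each other by simulating each with the same arbitrary input from zero initial state; all three should produce identical output sequences.

```python
import numpy as np

# The three canonical forms from the answer to Problem 2.9; simulated from
# zero initial state, they should all give the same output sequence.
forms = [
    (np.array([[-3., 1.], [-2., 0.]]), np.array([1., 3.]), np.array([1., 0.])),   # first
    (np.array([[0., 1.], [-2., -3.]]), np.array([0., 1.]), np.array([3., 1.])),   # second
    (np.array([[-2., 0.], [0., -1.]]), np.array([1., 1.]), np.array([-1., 2.])),  # Jordan
]
u = np.array([1., 0.5, -2., 0., 3., 1., 0., 0.])   # an arbitrary input sequence

responses = []
for A, b, c in forms:
    x = np.zeros(2)
    out = []
    for un in u:
        out.append(c @ x)          # y(n) = c x(n)
        x = A @ x + b * un         # x(n+1) = A x(n) + b u(n)
    responses.append(np.array(out))

assert np.allclose(responses[0], responses[1])
assert np.allclose(responses[0], responses[2])
print("all three canonical forms give the same response")
```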
2.10. Similar to the first canonical form,
    dx/dt = (−α1 1 −γ3 0; −α2 0 −γ4 0; −α3 0 −γ1 1; −α4 0 −γ2 0)x + (0 0; β1 δ1; 0 0; β2 δ2)u,    y = (1 0 0 0; 0 0 1 0)x
or similar to the second canonical form,
    dx/dt = (0 1 0 0; −α2 −α1 −α4 −α3; 0 0 0 1; −γ4 −γ3 −γ2 −γ1)x + (0 0; 1 0; 0 0; 0 1)u,    y = (β1 0 β2 0; δ1 0 δ2 0)x
2.11.    dx/dt = (−ω1 ω1 0; 0 −ω2 ω2; 0 0 −ω3)x + (0; 0; ω3)u,    y = (1 0 0)x
2.12. For x1 = v, x2 = v̇, x3 = v̈, x4 = v⃛, x5 = w, x6 = ẇ the initial conditions result immediately and
    dx/dt = (0 1 0 0 0 0; 0 0 1 0 0 0; 0 0 0 1 0 0; 0 −600 −106 −7 −500 0; 0 0 0 0 0 1; 100 1 1 0 −100 −2)x + (0; 0; 0; 500; 0; 0)u
2.13. The time-varying flow diagram cannot be turned backwards to obtain another canonical form be-
cause the order of multiplication by a time-varying coefficient and an integration cannot be inter-
changed.
2.14. (i)
Fig. 2-23
(ii) The values of γ0, γ1 and γ2 are given by equations (2.33), (2.34) and (2.35).
(iii) x1(0) = y0 − γ0(0)u(0),    x2(0) = ẏ0 − (γ̇0(0) + γ1(0))u(0) − γ0(0)u̇(0)
2.17.    dx/dt = ((ρ(t) − β)l⁻¹  λ1l⁻¹  ⋯  λ6l⁻¹; β1  −λ1  ⋯  0; ⋮; β6  0  ⋯  −λ6)x,    x = (n; C1; …; C6)
2.18. (i) A =
(ii) A
C =
(iii) A =
C =
2.19. (i) A =
C =
(ii) A
C =
2.20. The equations of motion are
    dq_i/dt = ∂H/∂p_i,    dp_i/dt = −∂H/∂q_i,    i = 1, …, n
Using the state vector x = (q1 … qn p1 … pn)ᵀ, the matrix state equations are
    dx/dt = (0 T; −V 0)x
chapter 3
Elementary Matrix Theory
3.1 INTRODUCTION
This and the following chapter present the minimum amount of matrix theory needed
to comprehend the material in the rest of the book. It is recommended that even those well
versed in matrix theory glance over the text of these next two chapters, if for no other
reason than to become familiar with the notation and philosophy. For those not so well
versed, the presentation is terse and oriented towards later use. For a more comprehensive
presentation of matrix theory, we suggest study of textbooks solely concerned with the
subject.
3.2 BASIC DEFINITIONS
Definition 3.1: A matrix, denoted by a capital boldfaced letter, such as A or Φ, or by the notation {a_ij}, is a rectangular array of elements that are members of a field or ring, such as real numbers, complex numbers, polynomials, functions, etc. The kind of element will usually be clear from the context.
Example 3.1.
An example of a matrix is
    A = (a11 a12 a13; a21 a22 a23) = (0 2 j; π x² sin t) = {a_ij}
Definition 3.2:
A row of a matrix is the set of all elements through which one horizontal
line can be drawn.
Definition 3.3:
A column of a matrix is the set of all elements through which one vertical
line can be drawn.
Example 3.2.
The rows of the matrix of Example 3.1 are (0 2 j) and (π x² sin t). The columns of this matrix are
    (0; π), (2; x²) and (j; sin t)
Definition 3.4: A square matrix has the same number of rows and columns.
Definition 3.5: The order of a matrix is m × n, read m by n, if it has m rows and n columns.
Definition 3.6: A scalar, denoted by a letter that is not in boldface type, is a 1 × 1 matrix. In other words, it is one element. When part of a matrix A, the notation a_ij means the particular element in the ith row and jth column.
Definition 3.7:
A vector, denoted by a lower case boldfaced letter, such as a, or with its contents displayed in braces, such as {a_i}, is a matrix with only one row or only one column. Usually a denotes a column vector, and aᵀ a row vector.
Definition 3.8:
The diagonal of a square matrix is the set of all elements an of the matrix
in which i = j. In other words, it is the set of all elements of a square
matrix through which can pass a diagonal line drawn from the upper left
hand corner to the lower right hand corner.
Example 3.3.
Given the matrix
    {b_ij} = (b11 b12 b13; b21 b22 b23; b31 b32 b33)
The diagonal is the set of elements through which the solid line is drawn, b11, b22, b33, and not those sets determined by the dashed lines.
Definition 3.9: The trace of a square matrix A, denoted tr A, is the sum of all elements
on the diagonal of A.
    tr A = Σ_{i=1}^{n} a_ii
3.3 BASIC OPERATIONS
Definition 3.10: Two matrices are equal if their corresponding elements are equal. A = B means a_ij = b_ij for all i and j. The matrices must be of the same order.
Definition 3.11: Matrix addition is performed by adding corresponding elements. A + B = C means a_ij + b_ij = c_ij for all i and j. Matrix subtraction is analogously defined. The matrices must be of the same order.
Definition 3.12: Matrix differentiation or integration means differentiate or integrate each element, since differentiation and integration are merely limiting operations on sums, which have been defined by Definition 3.11.
Addition of matrices is associative and commutative, i.e. A + (B + C) = (A + B) + C
and A + B = B + A. In this and all the foregoing respects, the operations on matrices have
been natural extensions of the corresponding operations on scalars and vectors. However,
recall there is no multiplication of two vectors, and the closest approximations are the dot
(scalar) product and the cross product. Matrix multiplication is an extension of the dot
product a · b = a1b1 + a2b2 + ⋯ + anbn, a scalar.
Definition 3.13: To perform matrix multiplication, the element c_ij of the product matrix C is found by taking the dot product of the ith row of the left matrix A and the jth column of the right matrix B, where C = AB, so that
    c_ij = Σ_{k=1}^{n} a_ik b_kj
Note that this definition requires the left matrix (A) to have the same number of columns
as the right matrix (B) has rows. In this case the matrices A and B are said to be compatible.
It is undefined in other cases, excepting when one of the matrices is 1 × 1, i.e. a scalar. In this case each of the elements is multiplied by the scalar, e.g. aB = {ab_ij} for all i and j.
Example 3.4.
The vector equation y = Ax, where y and x are 2 × 1 matrices, i.e. column vectors, is
    (y1; y2) = (a11 a12; a21 a22)(x1; x2)
where y_i = Σ_{k=1}^{2} a_ik x_k for i = 1 and 2. But suppose x = Bz, so that x_k = Σ_{j=1}^{2} b_kj z_j. Then
    y_i = Σ_{k=1}^{2} a_ik Σ_{j=1}^{2} b_kj z_j = Σ_{j=1}^{2} (Σ_{k=1}^{2} a_ik b_kj) z_j
so that y = A(Bz) = Cz, where AB = C.
Example 3.4 can be extended to show (AB)C = A(BC), i.e. matrix multiplication is associative. But, in general, matrix multiplication is not commutative, AB ≠ BA. Also, there is no matrix division.
Example 3.5.
To show AB ≠ BA, consider
    AB = (1 2; 3 0)(1 1; 2 2) = (5 5; 3 3) = D
    BA = (1 1; 2 2)(1 2; 3 0) = (4 2; 8 4) = C
and note D ≠ C. Furthermore, to show there is no division, consider
    BF = (1 1; 2 2)(4 2; 0 0) = (4 2; 8 4) = C
Since BA = BF = C, "division" by B would imply A = F, which is certainly not true.
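The computations of Example 3.5 can be reproduced numerically; in this sketch (not from the text) the particular F with BF = BA is one assumed choice among many.

```python
import numpy as np

A = np.array([[1, 2], [3, 0]])
B = np.array([[1, 1], [2, 2]])
F = np.array([[4, 2], [0, 0]])   # an assumed F satisfying BF = BA, F != A

assert np.array_equal(A @ B, np.array([[5, 5], [3, 3]]))   # AB = D
assert np.array_equal(B @ A, np.array([[4, 2], [8, 4]]))   # BA = C
assert not np.array_equal(A @ B, B @ A)                    # AB != BA
assert np.array_equal(B @ F, B @ A) and not np.array_equal(A, F)
print("AB != BA, and BA = BF with A != F, so there is no division by B")
```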
Suppose we have the vector equation Ax = Bx, where A and B are n × n matrices. It can be concluded that A = B only if x is an arbitrary n-vector. For, if x is arbitrary, we may choose x successively as e1, e2, ..., en and find that the column vectors a1 = b1, a2 = b2, ..., an = bn. Here the e_i are unit vectors, defined after Definition 3.17, page 41.
Definition 3.14: To partition a matrix, draw a vertical and/or horizontal line between two
rows or columns and consider the subset of elements formed as individual
matrices, called submatrices.
As long as the submatrices are compatible, i.e. have the correct order so that addition
and multiplication are possible, the submatrices can be treated as elements in the basic
operations.
Example 3.6.
A 3 × 3 matrix A can be partitioned into a 2 × 2 matrix A11, a 1 × 2 matrix A21, a 2 × 1 matrix A12, and a 1 × 1 matrix A22:
    A = (A11 A12; A21 A22)  where  A11 = (a11 a12; a21 a22), A12 = (a13; a23), A21 = (a31 a32), A22 = (a33)
A similarly partitioned 3 × 3 matrix B adds as
    A + B = (A11 + B11  A12 + B12; A21 + B21  A22 + B22)
and multiplies as
    AB = (A11B11 + A12B21  A11B12 + A12B22; A21B11 + A22B21  A21B12 + A22B22)
Facility with partitioned matrix operations will often save time and give insight.
3.4 SPECIAL MATRICES
Definition 3.15: The zero matrix, denoted 0, is a matrix whose elements are all zeros.
Definition 3.16: A diagonal matrix is a square matrix whose off-diagonal elements are all
zeros.
Definition 3.17: The unit matrix, denoted I, is a diagonal matrix whose diagonal elements
are all ones.
Sometimes the order of I is indicated by a subscript, e.g. I_n is an n × n matrix. Unit vectors, denoted e_i, have a one as the ith element and all other elements zero, so that I_m = (e1 | e2 | … | e_m).
Note IA = AI = A, where A is any compatible matrix.
Definition 3.18: An upper triangular matrix has all zeros below the diagonal, and a lower
triangular matrix has all zeros above the diagonal. The diagonal elements
need not be zero.
An upper triangular matrix added to or multiplied by an upper triangular matrix results
in an upper triangular matrix, and similarly for lower triangular matrices.
Definition 3.19: A transpose matrix, denoted Aᵀ, is the matrix resulting from an interchange of rows and columns of a given matrix A. If A = {a_ij}, then Aᵀ = {a_ji}, so that the element in the ith row and jth column of A becomes the element in the jth row and ith column of Aᵀ.
Definition 3.20: The complex conjugate transpose matrix, denoted A†, is the matrix whose elements are complex conjugates of the elements of Aᵀ.
Note (AB)ᵀ = BᵀAᵀ and (AB)† = B†A†.
Definition 3.21: A matrix A is symmetric if A = Aᵀ.
Definition 3.22: A matrix A is Hermitian if A = A†.
Definition 3.23: A matrix A is normal if A†A = AA†.
Definition 3.24: A matrix A is skew-symmetric if A = −Aᵀ.
Note that in each of the following cases the matrix is normal: Hermitian A = A†, skew-Hermitian A = −A†, real symmetric Aᵀ = A, real skew-symmetric A = −Aᵀ, unitary A†A = I, orthogonal AᵀA = I, and diagonal D.
3.5 DETERMINANTS AND INVERSE MATRICES
Definition 3.25: The determinant of an n × n (square) matrix {a_ij} is the sum of the signed products of all possible combinations of n elements, where each element is taken from a different row and column.
    det A = Σ (−1)^p a_{1p1} a_{2p2} ⋯ a_{npn}    (3.1)
Here p1, p2, ..., pn is a permutation of 1, 2, ..., n, and the sum is taken over all possible permutations. A permutation is a rearrangement of 1, 2, ..., n into some other order, such as 2, n, ..., 1, that is obtained by successive transpositions. A transposition is the interchange of places of two (and only two) numbers in the list 1, 2, ..., n. The exponent p of −1 in (3.1) is the number of transpositions it takes to go from the natural order 1, 2, ..., n to p1, p2, ..., pn. There are n! possible permutations of n numbers, so each determinant is the sum of n! products.
Example 3.7.
To find the determinant of a 3 × 3 matrix, all possible permutations of 1, 2, 3 must be found. Performing one transposition at a time, the following table can be formed.

    p    p1, p2, p3
    0    1, 2, 3
    1    3, 2, 1
    2    2, 3, 1
    3    2, 1, 3
    4    3, 1, 2
    5    1, 3, 2

This table is not unique in that for p = 1, possible entries are also 1, 3, 2 and 2, 1, 3. However, these entries can result only from an odd p, so that the sign of each product in a determinant is unique. Since there are 3! = 6 terms, all possible permutations are given in the table. Notice at each step only two numbers are interchanged. Using the table and (3.1) gives
    det A = (−1)⁰a11a22a33 + (−1)¹a13a22a31 + (−1)²a12a23a31 + (−1)³a12a21a33 + (−1)⁴a13a21a32 + (−1)⁵a11a23a32
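Definition (3.1) can be implemented directly; this illustrative sketch (not from the text) counts inversions, whose parity equals the parity of the transposition count p, and sums the n! signed products.

```python
from itertools import permutations
from math import prod

def sign(perm):
    # parity of the permutation: (-1)^(number of inversions), which
    # equals (-1)^p for p transpositions
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det(A):
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
assert det(A) == -3     # agrees with the cofactor expansion
print(det(A))
```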
Theorem 3.1: Let A be an n × n matrix. Then det(Aᵀ) = det(A).
Proof is given in Problem 3.3, page 59.
Theorem 3.2: Given two n × n matrices A and B. Then det(AB) = (det A)(det B).
Proof of this theorem is most easily given using exterior products, defined in Section
3.13, page 56. The proof is presented in Problem 3.15, page 65.
Definition 3.26: Elementary row (or column) operations are:
(i) Interchange of two rows (or columns),
(ii) Multiplication of a row (or column) by a scalar,
(iii) Adding a scalar times a row (or column) to another row (column).
To perform an elementary row (column) operation on an n × m matrix A, calculate the product EA (AE for columns), where E is the n × n (m × m) matrix found by performing the elementary operation on the n × n (m × m) unit matrix I. The matrix E is called an elementary matrix.
Example 3.8.
Consider the 2 × 2 matrix A = {a_ij}. To interchange two rows, interchange the two rows in I to obtain E. Then
    EA = (0 1; 1 0)(a11 a12; a21 a22) = (a21 a22; a11 a12)
To add b times the second column to the first, multiply
    AE = (a11 a12; a21 a22)(1 0; b 1) = (a11 + b a12  a12; a21 + b a22  a22)
Using Theorem 3.2 on the product AE or EA, it can be found that (i) interchange of two rows or columns changes the sign of a determinant, i.e. det(AE) = −det A, (ii) multiplication of a row or column by a scalar a multiplies the determinant by a, i.e. det(AE) = a det A, and (iii) adding a scalar times a row to another row does not change the value of the determinant, i.e. det(AE) = det A.
Taking the value of a in (ii) to be zero, it can be seen that a matrix containing a row
or column of zeros has a zero determinant. Furthermore, if two rows or columns are
identical or multiples of one another, then use of (iii) will give a zero row or column, so that
the determinant is zero.
Each elementary matrix E always has an inverse E⁻¹, found by undoing the row or column operation on I. Of course an exception is a = 0 in (ii).
Example 3.9.
The inverse of E = (0 1; 1 0) is E⁻¹ = (0 1; 1 0). The inverse of E = (a 0; 0 1) is E⁻¹ = (a⁻¹ 0; 0 1).
Definition 3.27: The determinant of the matrix formed by deleting the ith row and the jth column of the matrix A is the minor of the element a_ij, denoted det M_ij. The cofactor c_ij = (−1)^{i+j} det M_ij.
Example 3.10.
The minor of a22 of a 3 × 3 matrix A is det M22 = a11a33 − a13a31. The cofactor c22 = (−1)⁴ det M22 = det M22.
Theorem 3.3: The Laplace expansion of the determinant of an n × n matrix A is
    det A = Σ_{i=1}^{n} a_ij c_ij for any column j,  or  det A = Σ_{j=1}^{n} a_ij c_ij for any row i.
Proof of this theorem is presented as part of the proof of Theorem 3.21, page 57.
Example 3.11.
The Laplace expansion of a 3 × 3 matrix A about the second column is
    det A = a12c12 + a22c22 + a32c32 = −a12(a21a33 − a23a31) + a22(a11a33 − a13a31) − a32(a11a23 − a13a21)
Corollary 3.4: The determinant of a triangular n × n matrix equals the product of the diagonal elements.
Proof is by induction. The corollary is obviously true for n = 1. For arbitrary n, the Laplace expansion about the nth row (or column) of an n × n upper (or lower) triangular matrix gives det A = a_nn c_nn. By assumption, c_nn = a11a22⋯a_{n−1,n−1}, proving the corollary.
Explanation of the induction method: First the hypothesis is shown true for n = n0, where n0 is a fixed number. (n0 = 1 in the foregoing proof.) Then assume the hypothesis is true for an arbitrary n and show it is true for n + 1. Let n = n0, for which it is known true, so that it is true for n0 + 1. Then let n = n0 + 1, so that it is true for n0 + 2, etc. In this manner the hypothesis is shown true for all n ≥ n0.
Corollary 3.5: The determinant of a diagonal matrix equals the product of the diagonal
elements.
This is true because a diagonal matrix is also a triangular matrix.
The Laplace expansion for a matrix whose kth row equals the ith row is det A = Σ_{j=1}^{n} a_kj c_ij for k ≠ i, and for a matrix whose kth column equals the jth column the Laplace expansion is det A = Σ_{i=1}^{n} a_ik c_ij. But these determinants are zero since A has two identical rows or columns.
Definition 3.28: The Kronecker delta δ_ij = 1 if i = j and δ_ij = 0 if i ≠ j.
Using the Kronecker notation,
    Σ_{i=1}^{n} a_ik c_ij = Σ_{i=1}^{n} a_ki c_ji = δ_kj det A    (3.2)
Definition 3.29: The adjugate matrix of A is adj A = {c_ij}ᵀ, the transpose of the matrix of cofactors of A.
The adjugate is sometimes called "adjoint", but this term is saved for Definition 5.2. Then (3.2) can be written in matrix notation as
    A adj A = I det A    (3.3)
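Identity (3.3) can be checked numerically from Definitions 3.27 and 3.29; in this sketch (not from the text) the matrix is an arbitrary choice.

```python
import numpy as np

# Cofactors per Definition 3.27, adjugate per Definition 3.29,
# and the identity (3.3): A @ adj(A) = det(A) I.
def cofactor(A, i, j):
    M = np.delete(np.delete(A, i, axis=0), j, axis=1)   # minor M_ij
    return (-1) ** (i + j) * np.linalg.det(M)

def adjugate(A):
    n = A.shape[0]
    C = np.array([[cofactor(A, i, j) for j in range(n)] for i in range(n)])
    return C.T                                          # transpose of cofactor matrix

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3))
print("A adj A = (det A) I verified")
```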
Definition 3.30: An m × n matrix B is a left inverse of the n × m matrix A if BA = I_m, and an m × n matrix C is a right inverse if AC = I_n. A is said to be nonsingular if it has both a left and a right inverse.
If A has both a left inverse B and a right inverse C, C = IC = BAC = BI = B. Since BA = I_m and AC = I_n, and C = B, A must be square. Furthermore suppose a nonsingular matrix A has two inverses G and H. Then G = GI = GAH = IH = H, so that a nonsingular matrix A must be square and have a unique inverse denoted A⁻¹ such that A⁻¹A = AA⁻¹ = I. Finally, use of Theorem 3.2 gives det A det A⁻¹ = det I = 1, so that if det A = 0, A can have no inverse.
Theorem 3.6: (Cramer's rule) Given an n × n (square) matrix A such that det A ≠ 0. Then
    A⁻¹ = (adj A)/(det A)
The proof follows immediately from equation (3.3).
Example 3.12.
The inverse of a 2 × 2 matrix A is
    A⁻¹ = [1/(a11a22 − a12a21)] (a22 −a12; −a21 a11)
Another and usually faster means of obtaining the inverse of a nonsingular matrix A is to use elementary row operations to reduce A in the partitioned matrix (A | I) to the unit matrix. To reduce A to a unit matrix, interchange rows until a11 ≠ 0. Denote the interchange by E1. Divide the first row by a11, denoting this row operation by E2. Then E2E1A has a one in the upper left hand corner. Multiply the first row by a_i1 and subtract it from the ith row for i = 2, 3, ..., n, denoting this operation by E3. The first column of E3E2E1A is then the unit vector e1. Next, interchange rows of E3E2E1A until the element in the second row and column is nonzero. Then divide the second row by this element and subtract from all other rows until the unit vector e2 is obtained. Continue in this manner until E_m ⋯ E1A = I. Then E_m ⋯ E1 = A⁻¹, and operating on I by the same row operations will produce A⁻¹. Furthermore, det A⁻¹ = det E1 det E2 ⋯ det E_m from Theorem 3.2.
Example 3.13.
To find the inverse of (0 1; −1 1), adjoin the unit matrix to obtain
    (0 1 | 1 0; −1 1 | 0 1)
Interchange rows (det E1 = −1):
    (−1 1 | 0 1; 0 1 | 1 0)
Divide the first row by −1 (det E2 = −1):
    (1 −1 | 0 −1; 0 1 | 1 0)
It turns out the first column is already e1, and all that is necessary to reduce this matrix to I is to add the second row to the first (det E3 = 1):
    (1 0 | 1 −1; 0 1 | 1 0)
The matrix to the right of the partition line is the inverse of the original matrix, which has a determinant equal to [(−1)(−1)(1)]⁻¹ = 1.
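The row-operation procedure above can be sketched in code (an illustrative implementation assuming A is nonsingular, not the book's); applied to the matrix of Example 3.13 it reproduces the same inverse.

```python
import numpy as np

# Gauss-Jordan inverse via elementary row operations on (A | I).
def inverse_by_row_ops(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])        # the partitioned matrix (A | I)
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # interchange rows
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                          # divide by the pivot element
        for r in range(n):                             # subtract multiples of the pivot row
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]

A = np.array([[0., 1.], [-1., 1.]])
Ainv = inverse_by_row_ops(A)
assert np.allclose(Ainv, np.array([[1., -1.], [1., 0.]]))   # as in Example 3.13
assert np.allclose(A @ Ainv, np.eye(2))
print(Ainv)
```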
3.6 VECTOR SPACES
Not all matrix equations have solutions, and some have more than one.
Example 3.14.
(a) The matrix equation
    (1; 2)ξ = (0; 1)
has no solution because no ξ exists that satisfies the two equations written out as
    ξ = 0
    2ξ = 1
(b) The matrix equation
    (0; 0)ξ = (0; 0)
is satisfied for any ξ.
To find the necessary and sufficient conditions for existence and uniqueness of solutions
of matrix equations, it is necessary to extend some geometrical ideas. These ideas are
apparent for vectors of 2 and 3 dimensions, and can be extended to include vectors having
an arbitrary number of elements.
Consider the vector (2 3). Since the elements are real, they can be represented as points in a plane. Let (ξ1 ξ2) = (2 3). Then this vector can be represented as the point in the ξ1, ξ2 plane shown in Fig. 3-1.
Fig. 3-1. Representation of (2 3) in the Plane
If the real vector were (1 2 3), it could be represented as a point in (ξ1 ξ2 ξ3) space by drawing the ξ3 axis out of the page. Higher dimensions, such as (1 2 3 4), are harder to draw but can be imagined. In fact some vectors have an infinite number of elements. This can be included in the discussion, as can the case where the elements of the vector are other than real numbers.
Definition 3.31: Let V_n be the set of all vectors with n components. Let a1 and a2 be vectors having n components, i.e. a1 and a2 are in V_n. This is denoted a1 ∈ V_n, a2 ∈ V_n. Given arbitrary scalars β1 and β2, it is seen that (β1a1 + β2a2) ∈ V_n, i.e. an arbitrary linear combination of a1 and a2 is in V_n.
V_1 is an infinite line and V_2 is an infinite plane. To represent diagrammatically these and, in general, V_n for any n, one uses the area enclosed by a closed curve. Let U be a set of vectors in V_n. This can be represented as shown in Fig. 3-2.
Fig. 3-2. A Set of Vectors U in V_n
Definition 3.32: A set of vectors U in V_n is closed under addition if, given any a1 ∈ U and any a2 ∈ U, then (a1 + a2) ∈ U.
Example 3.15.
(a) Given U is the set of all 3-vectors whose elements are integers. This subset of V_3 is closed under addition because the sum of any two 3-vectors whose elements are integers is also a 3-vector whose elements are integers.
(b) Given U is the set of all 2-vectors whose first element is unity. This set is not closed under addition because the sum of two vectors in U must give a vector whose first element is two.
Definition 3.33: A set of vectors U in V_n is closed under scalar multiplication if, given any vector a ∈ U and any arbitrary scalar ρ, then ρa ∈ U. The scalar ρ can be a real or complex number.
Example 3.16.
Given U is the set of all 3-vectors whose second and third elements are zero. Any scalar times any vector in U gives another vector in U, so U is closed under scalar multiplication.
Definition 3.34: A set of n-vectors U in V_n that contains at least one vector is called a vector space if it is (1) closed under addition and (2) closed under scalar multiplication.
If a ∈ U, where U is a vector space, then 0a = 0 ∈ U because U is closed under scalar multiplication. Hence the zero vector is in every vector space.
Given a1, a2, ..., an, then the set of all linear combinations of the a_i is a vector space (linear manifold):
    U = {Σ_i β_i a_i}  for all β_i
3.7 BASES
Definition 3.35: A vector space U in V_n is spanned by the vectors a1, a2, ..., ak (k need not equal n) if (1) a1 ∈ U, a2 ∈ U, ..., ak ∈ U and (2) every vector in U is a linear combination of a1, a2, ..., ak.
Example 3.17.
Given a vector space U in V_3 to be the set of all 3-vectors whose third element is zero. Then (1 2 0), (1 1 0) and (0 1 0) span U because any vector in U can be represented as (α β 0), and
    (α β 0) = (β − α)(1 2 0) + (2α − β)(1 1 0) + 0(0 1 0)
Definition 3.36: Vectors a1, a2, ..., ak ∈ V_n are linearly dependent if there exist scalars β1, β2, ..., βk not all zero such that β1a1 + β2a2 + ⋯ + βkak = 0.
Example 3.18.
The three vectors of Example 3.17 are linearly dependent because
    1(1 2 0) − 1(1 1 0) − 1(0 1 0) = (0 0 0)
Note that any set of vectors that contains the zero vector is a linearly dependent set.
Definition 3.37: A set of vectors are linearly independent if they are not linearly dependent.
Theorem 3.7: If and only if the column vectors a1, a2, ..., an of a square matrix A are linearly dependent, then det A = 0.
Proof: If the column vectors of A are linearly dependent, from Definition 3.36 for some β1, β2, ..., βn not all zero we get 0 = β1a1 + β2a2 + ⋯ + βnan. Denote one nonzero β as β_i, and let E be the matrix that equals the unit matrix with its ith column replaced by (β1 β2 ⋯ βn)ᵀ. Then
    AE = (a1 | a2 | ⋯ | 0 | ⋯ | an)
since the ith column of AE is β1a1 + β2a2 + ⋯ + βnan = 0. Since a matrix with a zero column has a zero determinant, use of the product rule of Theorem 3.2 gives det A det E = 0. Because det E = β_i ≠ 0, then det A = 0.
Next, suppose the column vectors of A are linearly independent. Then so are the column vectors of AE, for any elementary column operation E. Proceeding stepwise as on page 44, but now adjoining I to the bottom of A, we find E1, ..., Em such that AE1 ⋯ Em = I. Each step can be carried out since the column under consideration is not a linear combination of the preceding columns. Hence (det A)(det E1) ⋯ (det Em) = 1, so that det A ≠ 0.
Using this theorem it is possible to determine whether a1, a2, . . ., ak, k ≤ n, are linearly
dependent: calculate all the k × k determinants formed by deleting all combinations of n − k
rows. If and only if all these determinants are zero, the set of vectors is linearly dependent.
Example 3.19.
Consider the two 3-vectors (1 2 6)^T and (2 4 6)^T. Deleting the bottom row gives
det(1 2; 2 4) = 0. Deleting the top row gives det(2 4; 6 6) = −12. Hence the vectors are
linearly independent. There is no need to check the determinant formed by deleting the middle row.
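The submatrix test above is easy to mechanize. The following is a minimal sketch (not from the book) that checks two 3-vectors for linear independence by forming every 2 × 2 determinant obtained by deleting one row; the vectors used here are illustrative.

```python
# Test linear independence of k vectors in V_n (here k = 2, n = 3) by checking
# all k x k determinants formed by deleting n - k rows (Theorem 3.7 applied
# to submatrices).
from itertools import combinations

def det2(m):
    # determinant of a 2 x 2 matrix given as a list of rows
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

a1, a2 = [1, 2, 6], [2, 4, 6]           # two 3-vectors, written as columns
dets = []
for rows in combinations(range(3), 2):  # keep 2 of the 3 rows
    sub = [[a1[r], a2[r]] for r in rows]
    dets.append(det2(sub))

independent = any(d != 0 for d in dets)
print(dets, independent)                # one nonzero determinant suffices
```

Here the pair of rows (1, 2) already gives a nonzero determinant, so there is no need to evaluate the rest in practice.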
Definition 3.38: A set of n-vectors a1, a2, . . ., ak form a basis for U if (1) they span U and
(2) they are linearly independent.
Example 3.20.
Any two of the three vectors given in Examples 3.17 and 3.18 form a basis of the given vector space,
since (1) they span V as shown and (2) any two are linearly independent. To verify this for (1 2 0) and
(1 1 0), set
    β1(1 2 0) + β2(1 1 0) = (0 0 0)
This gives the equations β1 + β2 = 0, 2β1 + β2 = 0. The only solution is that β1 and β2 are both zero,
which violates the conditions of the definition of linear dependence.
Example 3.21.
Any three noncoplanar vectors in three-dimensional Euclidean space form a basis of V3 (not necessarily
orthogonal vectors). However, note that this definition has been abstracted to include vector spaces
that can be subspaces of Euclidean space. Since conditions on the solution of algebraic equations are the
goal of this section, it is best to avoid strictly geometric concepts and remain with the more abstract ideas
represented by the definitions.
Consider U to be any infinite plane in three-dimensional Euclidean space V3. Any two noncolinear
vectors in this plane form a basis for U.
Theorem 3.8: If a1, a2, . . ., ak are a basis of V, then every vector in V is expressible
uniquely as a linear combination of a1, a2, . . ., ak.
The key word here is uniquely. The proof is given in Problem 3.6.
To express any vector in V uniquely, a basis is needed. Suppose we are given a set of
vectors that span V. The next theorem is used in constructing a basis from this set.
Theorem 3.9: Given nonzero vectors a1, a2, . . ., am ∈ Vn. The set is linearly dependent if
and only if some ak, for 1 < k ≤ m, is a linear combination of a1, a2, . . ., ak−1.
Proof of this is given in Problem 3.7. This theorem states the given vectors need only
be considered in order for the determination of linear dependency. We need not check
that each ak is linearly independent of the remaining vectors.
Example 3.22.
Given the vectors (1 −1 0), (−2 2 0) and (0 1 0). They are linearly dependent because (−2 2 0) =
−2(1 −1 0). We need not check whether (0 1 0) can be formed as a linear combination of the first
two vectors.
To construct a basis from a given set of vectors a1, a2, . . ., am that span a vector space
U, test to see if a2 is a linear combination of a1. If it is, delete it from the set. Then test
if a3 is a linear combination of a1 and a2, or of a1 only if a2 has been deleted. Next test
a4, etc., and in this manner delete all linearly dependent vectors from the set in order. The
remaining vectors in the set form a basis of U.
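The deletion procedure above can be sketched in code. This is an assumption-laden illustration rather than the book's method: the dependence test is carried out by keeping an orthonormal "shadow" of the vectors retained so far (an internal Gram-Schmidt step) and asking whether the residual of the candidate vector is zero.

```python
import math

def dot(x, y):
    return sum(a*b for a, b in zip(x, y))

def build_basis(vectors, tol=1e-10):
    """Scan the spanning set in order; keep a vector only if it is not a
    linear combination of the vectors already kept."""
    basis, ortho = [], []            # kept vectors and an orthonormal shadow
    for v in vectors:
        r = list(v)
        for u in ortho:              # subtract components along kept directions
            c = dot(u, r)
            r = [x - c*y for x, y in zip(r, u)]
        n = math.sqrt(dot(r, r))
        if n > tol:                  # nonzero residual: v is independent, keep it
            basis.append(v)
            ortho.append([x/n for x in r])
    return basis

vecs = [(1, -1, 0), (-2, 2, 0), (0, 1, 0)]   # the spirit of Example 3.22
print(build_basis(vecs))                      # the second vector is deleted
```

Note that the vectors are tested strictly in the given order, exactly as Theorem 3.9 permits.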
Theorem 3.10: Given a vector space V with a basis a1, a2, . . ., am and with another basis
b1, b2, . . ., bl. Then m = l.
Proof: Note a1, b1, b2, . . ., bl are a linearly dependent set of vectors. Using Theorem
3.9, delete the vector bk that is linearly dependent on a1, b1, . . ., bk−1. Then a1, b1, . . ., bk−1,
bk+1, . . ., bl still span V. Next note a2, a1, b1, . . ., bk−1, bk+1, . . ., bl are a linearly depen-
dent set. Delete another b-vector such that the set still spans V. Continuing in this manner
gives al, . . ., a2, a1 spanning V. If l < m, there is an al+1 that is a linear combination of al, . . .,
a2, a1. But the hypothesis states the a-vectors are a basis, so they must all be linearly
independent; hence l ≥ m. Interchanging the b- and a-vectors in the argument gives m ≥ l,
proving the theorem.
Since all bases in a vector space V contain the same number of vectors, we can give
the following definition.
Definition 3.39: A vector space V has dimension n if and only if a basis of V consists of n
vectors.
Note that this extends the intuitive definition of dimension to a subspace of Vm.
3.8 SOLUTION OF SETS OF LINEAR ALGEBRAIC EQUATIONS
Now the means are at hand to examine the solution of matrix equations. Consider the
matrix equation
    0 = Ax = (a1 | a2 | . . . | an)(ξ1 ξ2 . . . ξn)^T = Σ_{i=1}^{n} ξi ai
If all ξi are zero, x = 0 is the trivial solution, which can be obtained in all cases. To obtain
a nontrivial solution, some of the ξi must be nonzero, which means the ai must be linearly
dependent, by definition. Consider the set of all solutions of Ax = 0. Is this a vector space?
(1) Does the set contain at least one element?
    Yes, because x = 0 is always one solution.
(2) Are solutions closed under addition?
    Yes, because if Az = 0 and Ay = 0, then the sum x = z + y is a solution of
    Ax = 0.
(3) Are solutions closed under scalar multiplication?
    Yes, because if Ax = 0, then βx is a solution of A(βx) = 0.
So the set of all solutions of Ax = 0 is a vector space.
Definition 3.40: The vector space of all solutions of Ax = 0 is called the null space of A.
Theorem 3.11: If an m × n matrix A has n columns with r linearly independent columns,
then the null space of A has dimension n − r.
The proof is given in Problem 3.8.
Definition 3.41: The dimension of the null space of A is called the nullity of A.
Corollary 3.12: If A is an m × n matrix with n linearly independent columns, the null space
has dimension zero. Hence the solution x = 0 of Ax = 0 is unique.
Theorem 3.13: The dimension of the vector space spanned by the row vectors of a matrix
is equal to the dimension of the vector space spanned by the column vectors.
See Problem 3.9 for the proof.
Example 3.23.
Given the matrix
    | 1 2 3 |
    | 2 4 6 |
It has one independent row vector and therefore must have only one independent column vector.
Definition 3.42: The vector space of all y such that Ax = y for some x is called the range
space of A.
It is left to the reader to verify that the range space is a vector space.
Example 3.24.
The range space and the null space may have other vectors in common in addition to the zero vector.
Consider
    A = | 0 1 |,  b = (1 0)^T,  c = (0 1)^T
        | 0 0 |
Then Ab = 0, so b is in the null space; and Ac = b, so b is also in the range space.
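A quick numeric check of this phenomenon can be done with a hand-rolled matrix-vector product. The matrices here are one standard choice satisfying Ab = 0 and Ac = b, as in the example.

```python
# A vector b lying in both the null space and the range space of A:
# Ab = 0 (null space) and Ac = b for some c (range space).
A = [[0, 1],
     [0, 0]]
b = (1, 0)
c = (0, 1)

def matvec(M, x):
    # plain matrix-vector multiplication
    return tuple(sum(M[i][j]*x[j] for j in range(len(x))) for i in range(len(M)))

print(matvec(A, b), matvec(A, c))   # b maps to zero, c maps to b
```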
Definition 3.43: The rank of the m × n matrix A is the dimension of the range space of A.
Theorem 3.14: The rank of A equals the maximum number r of linearly independent column
vectors of A, i.e. the range space has dimension r.
The proof is given in Problem 3.10. Note the dimension of the range space plus the
dimension of the null space = n for an m × n matrix. This provides a means of determining
the rank of A. Determinants can be used to check the linear dependency of the row or
column vectors.
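The rank-plus-nullity bookkeeping just described can be illustrated with a small row-reduction routine. This is a sketch under the assumption of exact (tolerance-based) arithmetic, not a numerically robust rank algorithm.

```python
# Rank by Gauss-Jordan row reduction, illustrating
# dim(range space) + dim(null space) = n.
def rank(rows, tol=1e-10):
    m = [list(r) for r in rows]          # work on a copy
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > tol), None)
        if piv is None:
            continue                     # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [x - m[i][c]*y for x, y in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 2, 3], [2, 4, 6]]               # second row twice the first
n = len(A[0])
print(rank(A), n - rank(A))              # rank 1, so the nullity is 2
```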
Theorem 3.15: Given an m × n matrix A. Then rank A = rank A^T = rank A^T A = rank AA^T.
See Problem 3.12 for the proof.
Theorem 3.16: Given an m × n matrix A and an n × k matrix B, then rank AB ≤ rank A,
rank AB ≤ rank B. Also, if B is nonsingular, rank AB = rank A; and if A
is nonsingular, rank AB = rank B.
See Problem 3.13 for the proof.
3.9 GENERALIZATION OF A VECTOR
From Definition 3.7, a vector is defined as a matrix with only one row or column. Here
we generalize the concept of a vector space and of a vector.
Definition 3.44: A set U of objects x, y, z, . . . is called a linear vector space and the objects
x, y, z, . . . are called vectors if and only if for all complex numbers α and β
and all objects x, y and z in U:
(1) x + y is in U,
(2) x + y = y + x,
(3) (x + y) + z = x + (y + z),
(4) for each x and y in U there is a unique z in U such that x + z = y,
(5) αx is in U,
(6) α(βx) = (αβ)x,
(7) 1x = x,
(8) α(x + y) = αx + αy,
(9) (α + β)x = αx + βx.
The vectors of Definition 3.7 (n-vectors) and the vector space of Definition 3.34 satisfy
this definition. Sometimes α and β are restricted to be real (linear vector spaces over the
field of real numbers), but for generality they are taken to be complex here.
Example 3.25.
The set V of time functions that are linear combinations of sin t, sin 2t, sin 3t, . . . is a linear vector
space.
Example 3.26.
The set of all solutions to dx/dt = A(t)x is a linear vector space, but the set of all solutions to dx/dt =
A(t)x + B(t)u for fixed u(t) ≠ 0 does not satisfy (1) or (5) of Definition 3.44, and is not a linear vector space.
Example 3.27.
The set of all complex valued discrete time functions x(nT) for n = 0, 1, . . . is a linear vector space,
as is the set of all complex valued continuous time functions x(t).
All the concepts of linear independence, basis, null space, range space, etc., extend im-
mediately to a general linear vector space.
Example 3.28.
The functions sin t, sin 2t, sin 3t, . . . form a basis for the linear vector space V of Example 3.25, and
so the dimension of V is countably infinite.
3.10 DISTANCE IN A VECTOR SPACE
The concept of distance can be extended to a general vector space. To compare the
distance from one point to another, no notion of an origin need be introduced. Furthermore,
the ideas of distance involve two points (vectors), and the "yardstick" measuring
distance may very well change in length as it is moved from point to point. To abstract
this concept of distance, we have
Definition 3.45: A metric, denoted ρ(a, b), is any scalar function of two vectors a and b
in V with the properties
(1) ρ(a, b) ≥ 0 (distance is always positive),
(2) ρ(a, b) = 0 if and only if a = b (zero distance if and only if the
points coincide),
(3) ρ(a, b) = ρ(b, a) (distance from a to b is the same as distance
from b to a),
(4) ρ(a, b) + ρ(b, c) ≥ ρ(a, c) (triangle inequality).
Example 3.29.
(a) An example of a metric for n-vectors a and b is
    ρ(a, b) = [ (a − b)†(a − b) / (1 + (a − b)†(a − b)) ]^{1/2}
(b) For two real continuous scalar time functions x(t) and y(t) for t0 ≤ t ≤ t1, one metric is
    ρ(x, y) = [ ∫_{t0}^{t1} [x(t) − y(t)]² dt ]^{1/2}
Further requirements are sometimes needed. By introducing the idea of an origin and
by making a metric have constant length, we have the following definition.
Definition 3.46: The norm, denoted ||a||, of a vector a is a metric from the origin to the
vector a ∈ V, with the additional property that the "yardstick" is not a
function of a. In other words a norm satisfies requirements (1)-(4) of a
metric (Definition 3.45), with b understood to be zero, and has the addi-
tional requirement
(5) ||αa|| = |α| ||a||.
The other four properties of a metric read, for a norm,
(1) ||a|| ≥ 0,
(2) ||a|| = 0 if and only if a = 0,
(3) ||a|| = ||a|| (trivial),
(4) ||a|| + ||c|| ≥ ||a + c||.
Example 3.30.
A norm in V2 is ||(a1 a2)||2 = ||a||2 = √(|a1|² + |a2|²).
Example 3.31.
A norm in V2 is ||(a1 a2)||1 = ||a||1 = |a1| + |a2|.
Example 3.32.
A norm in Vn is the Euclidean norm ||a||2 = √(a†a) = √( Σ_{i=1}^{n} |ai|² ).
In the above examples the subscripts 1 and 2 distinguish different norms.
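The two norms just defined are easy to compute side by side. A minimal sketch, with an illustrative vector not taken from the book:

```python
import math

def norm1(a):
    # the 1-norm of Example 3.31: sum of absolute values
    return sum(abs(x) for x in a)

def norm2(a):
    # the Euclidean 2-norm of Examples 3.30 and 3.32
    return math.sqrt(sum(abs(x)**2 for x in a))

a = (3, -4)
print(norm1(a), norm2(a))   # the two norms generally differ on the same vector
```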
Any positive monotone increasing concave function of a norm is a metric from the origin 0 to a.
Definition 3.47: The inner product, denoted (a, b), of any two vectors a and b in V is a
complex scalar function of a and b such that given any complex numbers
α and β,
(1) (a, a) ≥ 0,
(2) (a, a) = 0 if and only if a = 0,
(3) (a, b)* = (b, a),
(4) (αa + βb, c) = α*(a, c) + β*(b, c).
An inner product is sometimes known as a scalar or dot product.
Note (a, a) is real, (a, αb) = α(a, b) and (a, 0) = 0.
Example 3.33.
An inner product in Vn is (a, b) = a†b = Σ_{i=1}^{n} ai* bi.
Example 3.34.
An inner product in the vector space V of time functions of Example 3.25 is
    (x, y) = ∫_{t0}^{t1} x*(t) y(t) dt
Definition 3.48: Two vectors a and b are orthogonal if (a,b) = 0.
Example 3.35.
In three-dimensional space, two vectors are perpendicular if they are orthogonal for the inner product
of Example 3.33.
Theorem 3.17: For any inner product (a, b) the Schwarz inequality |(a, b)|² ≤ (a, a)(b, b)
holds, and furthermore the equality holds if and only if a = αb or a or b
is the zero vector.
Proof: If a or b is the zero vector, the equality holds, so take b ≠ 0. Then for any
scalar β,
    0 ≤ (a + βb, a + βb) = (a, a) + β*(b, a) + β(a, b) + β*β(b, b)
where the equality holds if and only if a + βb = 0. Setting β = −(b, a)/(b, b) and re-
arranging gives the Schwarz inequality.
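The inequality can be checked numerically for the inner product of Example 3.33. The complex vectors below are illustrative:

```python
# Numeric check of the Schwarz inequality |(a,b)|^2 <= (a,a)(b,b)
# for the inner product (a,b) = sum(conj(a_i) * b_i).
def inner(a, b):
    return sum(x.conjugate()*y for x, y in zip(a, b))

a = (1+2j, 0, 3)
b = (2, 1j, 1-1j)

lhs = abs(inner(a, b))**2
rhs = (inner(a, a) * inner(b, b)).real   # (a,a) and (b,b) are real by property (1)
print(lhs, rhs, lhs <= rhs)
```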
Example 3.36.
Using the inner product of Example 3.34,
    | ∫_{t0}^{t1} x*(t) y(t) dt |² ≤ ( ∫_{t0}^{t1} |x(t)|² dt )( ∫_{t0}^{t1} |y(t)|² dt )
Theorem 3.18: For any inner product, √((a + b, a + b)) ≤ √((a, a)) + √((b, b)).
Proof:
    (a + b, a + b) = |(a + b, a + b)|
        = |(a, a) + (b, a) + (a, b) + (b, b)| ≤ |(a, a)| + |(b, a)| + |(a, b)| + |(b, b)|
Use of the Schwarz inequality then gives
    (a + b, a + b) ≤ (a, a) + 2√((a, a)(b, b)) + (b, b)
and taking the square root of both sides proves the theorem.
This theorem shows that the square root of the inner product (a, a) satisfies the triangle
inequality. Together with the requirements of Definition 3.47 of an inner product, it
can be seen that √((a, a)) satisfies Definition 3.46 of a norm.
Definition 3.49: The natural norm, denoted ||a||2, of a vector a is ||a||2 = √((a, a)).
Definition 3.50: An orthonormal set of vectors a1, a2, . . ., an is a set for which the inner
product
    (ai, aj) = δij = { 1 if i = j;  0 if i ≠ j }
where δij is the Kronecker delta.
Example 3.37.
An orthonormal set of basis vectors in Vn is the set of unit vectors e1, e2, . . ., en, where ei is defined as
the vector with a 1 in the ith position and zeros elsewhere.
Given any set of k vectors, how can an orthonormal set of basis vectors be formed from
them? To illustrate the procedure, suppose there are only two vectors, a1 and a2. First
choose either one and make its length unity:
    b1 = γ1 a1
Because b1 must have length unity, γ1 = ||a1||2^{-1}. Now the second vector must be broken up
into its components: γ2 b1 is the component of a2 along b1, and a2 − γ2 b1 is the component
perpendicular to b1. To find γ2, note that this is the dot, or inner, product γ2 = (b1, a2).
Finally, the second orthonormal vector is constructed by making a2 − γ2 b1 have unit length:
    b2 = (a2 − γ2 b1) / ||a2 − γ2 b1||2
This process is called the Gram-Schmidt orthonormalization procedure. The orthonormal bj
is constructed from aj and the preceding bi for i = 1, . . ., j − 1 according to
    bj = ( aj − Σ_{i=1}^{j−1} (bi, aj) bi ) / || aj − Σ_{i=1}^{j−1} (bi, aj) bi ||2
Example 3.38.
Given the vectors a[ = (1 1 0), aj = (1 -2 1)/V2. Then ||ai||2 = VI, so bj" = (1 1 0)/\/2.
By the formula, the numerator is
Making this have length unity gives bj = (3 —3 2)/v22.
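The Gram-Schmidt formula above translates almost line for line into code. The sketch below reproduces the numbers of Example 3.38 for real vectors:

```python
import math

def dot(x, y):
    return sum(a*b for a, b in zip(x, y))

def gram_schmidt(vectors):
    """Return an orthonormal list b1, b2, ... built from the given vectors
    by the Gram-Schmidt procedure (real inner product (a,b) = sum a_i b_i)."""
    basis = []
    for a in vectors:
        r = list(a)
        for b in basis:
            c = dot(b, r)                       # component of a along b
            r = [x - c*y for x, y in zip(r, b)] # subtract it off
        n = math.sqrt(dot(r, r))
        basis.append([x/n for x in r])          # normalize the remainder
    return basis

b1, b2 = gram_schmidt([(1, 1, 0), (1, -2, 1)])
print(b2)   # should be (3, -3, 2)/sqrt(22), as in Example 3.38
```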
Theorem 3.19: Any finite dimensional linear vector space in which an inner product exists
has an orthonormal basis.
Proof: Use the Gram-Schmidt orthonormalization procedure on the set of all vectors
in V, or any set that spans V, to construct the orthonormal basis.
3.11 RECIPROCAL BASIS
Given a set of basis vectors b1, b2, . . ., bn for the vector space Vn, an arbitrary vector
x ∈ Vn can be expressed uniquely as
    x = β1b1 + β2b2 + · · · + βnbn   (3.4)
Definition 3.51: Given a set of basis vectors b1, b2, . . ., bn for Vn, a reciprocal basis is a set
of vectors r1, r2, . . ., rn such that the inner product
    (ri, bj) = δij,  i, j = 1, 2, . . ., n   (3.5)
Because of this relationship, defining R as the matrix made up of the ri as column
vectors and B as the matrix with the bi as column vectors, we have
    R†B = I
which is obtained by partitioned multiplication and use of (3.5).
Since B has n linearly independent column vectors, B is nonsingular, so that R is
uniquely determined as R = (B^{-1})†. This demonstrates that the set of ri exists, and the set is a
basis for Vn because R^{-1} = B† exists, so that all n of the ri are linearly independent.
Having a reciprocal basis, the coefficients βi in (3.4) can conveniently be expressed.
Taking the inner product with an arbitrary ri on both sides of (3.4) gives
    (ri, x) = β1(ri, b1) + β2(ri, b2) + · · · + βn(ri, bn)
Use of the property (3.5) then gives βi = (ri, x).
Note that an orthonormal basis is its own reciprocal basis. This is what makes "breaking
a vector into its components" easy for an orthonormal basis, and indicates how to go
about it for a non-orthonormal basis in Vn.
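The construction R = (B^{-1})† can be carried out directly for a small real example (where the conjugate transpose reduces to the plain transpose). A minimal sketch in V2, with an illustrative basis:

```python
# Reciprocal basis in V_2 via R = (B^{-1})^T for real vectors:
# the rows of B^{-1} are the reciprocal vectors r_i.
def reciprocal_basis(b1, b2):
    # B has b1, b2 as columns; invert the 2 x 2 matrix by the cofactor formula
    det = b1[0]*b2[1] - b1[1]*b2[0]
    Binv = [[ b2[1]/det, -b2[0]/det],
            [-b1[1]/det,  b1[0]/det]]
    r1 = (Binv[0][0], Binv[0][1])
    r2 = (Binv[1][0], Binv[1][1])
    return r1, r2

b1, b2 = (1, 0), (1, 1)          # a non-orthonormal basis of V_2
r1, r2 = reciprocal_basis(b1, b2)
print(r1, r2)                     # check (r_i, b_j) = delta_ij by hand
```

With these vectors, (r1, b1) = 1, (r1, b2) = 0, (r2, b1) = 0, (r2, b2) = 1, as Definition 3.51 requires.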
3.12 MATRIX REPRESENTATION OF A LINEAR OPERATOR
Definition 3.52: A linear operator L is a function whose domain is the whole of a vector
space V1 and whose range is contained in a vector space V2, such that for
a and b in V1 and scalars α and β,
    L(αa + βb) = αL(a) + βL(b)
Example 3.39.
An m × n matrix A is a linear operator whose domain is the whole of Vn and whose range is in Vm, i.e.
it transforms n-vectors into m-vectors.
Example 3.40.
Consider the set V of time functions that are linear combinations of sin nt, for n = 1, 2, . . . . Then
any x(t) in V can be expressed as x(t) = Σ_{n=1}^{∞} ξn sin nt. The operation of integrating with respect to
time is a linear operator, because
    ∫ [αx1(t) + βx2(t)] dt = α ∫ x1(t) dt + β ∫ x2(t) dt
Example 3.41.
The operation of rotating a vector in V2 by an angle φ is a linear operator of V2 onto V2. Rotation of a
vector αa + βb is the same as rotation of both a and b first and then adding α times the rotated a plus β
times the rotated b.
Theorem 3.20: Given a vector space V1 with a basis b1, b2, . . ., bn, . . . and a vector space
V2 with a basis c1, c2, . . ., cm, . . . . Then the linear operator L whose
domain is V1 and whose range is in V2 can be represented by a matrix
{γji} whose ith column consists of the components of L(bi) relative to the
basis c1, c2, . . ., cm, . . . .
Proof will be given for n dimensional V1 and m dimensional V2. Consider any x in V1.
Then x = Σ_{i=1}^{n} βi bi. Furthermore, since L(bi) is a vector in V2, determine γji such that
    L(bi) = Σ_{j=1}^{m} γji cj
But
    L(x) = L( Σ_{i=1}^{n} βi bi ) = Σ_{i=1}^{n} βi L(bi) = Σ_{i=1}^{n} Σ_{j=1}^{m} βi γji cj
Hence the jth component of L(x) relative to the basis {cj} equals Σ_{i=1}^{n} γji βi, i.e. the matrix
{γji} times a vector whose components are the βi.
Example 3.42.
The matrix representation of the rotation by an angle φ of Example 3.41 can be found as follows.
Consider the basis e1 = (1 0) and e2 = (0 1) of V2. Then any x = (β1 β2) = β1e1 + β2e2. Rotating
e1 clockwise by an angle φ gives L(e1) = (cos φ)e1 − (sin φ)e2, and similarly L(e2) = (cos φ)e2 + (sin φ)e1,
so that γ11 = cos φ, γ21 = −sin φ, γ12 = sin φ, γ22 = cos φ. Therefore rotation by an angle φ can be
represented by
    |  cos φ  sin φ |
    | −sin φ  cos φ |
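This matrix representation can be exercised numerically. The sketch below applies the rotation matrix of Example 3.42 to e1 at φ = 90°:

```python
import math

def rotation_matrix(phi):
    # Columns are the images of e1 and e2 under clockwise rotation by phi,
    # expressed in the basis e1, e2 (Example 3.42).
    return [[ math.cos(phi), math.sin(phi)],
            [-math.sin(phi), math.cos(phi)]]

def apply(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

M = rotation_matrix(math.pi / 2)
image = apply(M, [1, 0])
print(image)   # e1 rotated clockwise by 90 degrees lands on -e2
```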
Example 3.43.
An elementary row or column operation on a matrix A is represented by the elementary matrix E as
given in the remarks following Definition 3.26.
The null space, range space, rank, etc., of a linear transformation are obvious extensions
of the definitions for its matrix representation.
3.13 EXTERIOR PRODUCTS
The advantages of exterior products have become widely recognized only recently, but
the concept was introduced in 1844 by H. G. Grassmann. In essence it is a generalization
of the cross product. We shall only consider its application to determinants here, but it is
part of the general framework of tensor analysis, in which there are many applications.
First realize that a function of an m-vector can itself be a vector. For example,
z = Ax + By illustrates a vector function z of the vectors x and y. An exterior product Φ^p
is a vector function of p m-vectors a1, a2, . . ., ap. However, Φ^p is an element in a generalized
vector space, in that it satisfies Definition 3.44, but has no components that can be arranged
as in Definition 3.7 except in special cases.
The functional dependency of Φ^p upon a1, a2, . . ., ap is written as Φ^p = a1 ∧ a2 ∧ · · · ∧ ap,
where it is understood there are p of the a's. Each a is separated by a ∧, read "wedge",
which is Grassmann notation.
Definition 3.53: Given m-vectors ai in V, where V has dimension n. An exterior product
Φ^p = a1 ∧ a2 ∧ · · · ∧ ap for p = 0, 1, 2, . . ., n is a vector in an abstract vector
space, denoted Λ^pV, such that for any complex numbers α and β,
    (αai + βaj) ∧ ak ∧ · · · ∧ al = α(ai ∧ ak ∧ · · · ∧ al) + β(aj ∧ ak ∧ · · · ∧ al)   (3.6)
    ai ∧ · · · ∧ aj ∧ · · · ∧ ak ∧ · · · ∧ al = −ai ∧ · · · ∧ ak ∧ · · · ∧ aj ∧ · · · ∧ al   (3.7)
    ai ∧ aj ∧ · · · ∧ ak ≠ 0 if ai, aj, . . ., ak are linearly independent.
Equation (3.6) says Φ^p is multilinear, and (3.7) says Φ^p is alternating.
Example 3.44.
The cases p = 0 and p = 1 are degenerate, in that Λ^0V is the space of all complex numbers and
Λ^1V = V, the original vector space of m-vectors having dimension n. The first nontrivial example is then
Λ^2V. Then equations (3.6) and (3.7) become the bilinearity property
    (αai + βaj) ∧ ak = α(ai ∧ ak) + β(aj ∧ ak)   (3.8)
and also
    ai ∧ aj = −aj ∧ ai   (3.9)
Interchanging the vectors in (3.8) according to (3.9) and multiplying by −1 gives
    ak ∧ (αai + βaj) = α(ak ∧ ai) + β(ak ∧ aj)   (3.10)
By (3.10) we see that ai ∧ aj is linear in either ai or aj (bilinear) but not in both together, because in general
(αai) ∧ (αaj) ≠ α(ai ∧ aj).
Note that setting ai = aj in (3.9) gives ai ∧ ai = 0. Furthermore ai ∧ aj = 0 if and only if ai is a linear
combination of aj.
Let b1, b2, . . ., bn be a basis of V. Then ai = Σ_{k=1}^{n} αk bk and aj = Σ_{l=1}^{n} γl bl, so that
    ai ∧ aj = ( Σ_{k=1}^{n} αk bk ) ∧ ( Σ_{l=1}^{n} γl bl ) = Σ_{k=1}^{n} Σ_{l=1}^{n} αk γl (bk ∧ bl)
Since bk ∧ bk = 0 and bk ∧ bl = −bl ∧ bk for k < l, this sum can be rearranged to
    ai ∧ aj = Σ_{l=1}^{n} Σ_{k=1}^{l−1} (αk γl − αl γk) bk ∧ bl   (3.11)
Because ai ∧ aj is an arbitrary vector in Λ^2V, and (3.11) is a linear combination of the bk ∧ bl, the vectors
bk ∧ bl for 1 ≤ k < l ≤ n form a basis for Λ^2V. Counting all possible k and l satisfying this rela-
tion shows that the dimension of Λ^2V is n(n − 1)/2.
Similar to the case Λ^2V, if any ai is a linear combination of the other a's, the exterior
product is zero, and otherwise not. Furthermore the exterior products bi ∧ bj ∧ · · · ∧ bk for
1 ≤ i < j < · · · < k ≤ n form a basis for Λ^pV, so that the dimension of Λ^pV is
    n! / ((n − p)! p!)   (3.12)
In particular the only basis vector for Λ^nV is b1 ∧ b2 ∧ · · · ∧ bn, so Λ^nV is one-dimensional.
Thus the exterior product containing n vectors of an n-dimensional space V can be identified
with a scalar.
Definition 3.54: The determinant of a linear operator L whose domain and range is the
vector space V with a basis b1, b2, . . ., bn is defined as
    det L = ( L(b1) ∧ L(b2) ∧ · · · ∧ L(bn) ) / ( b1 ∧ b2 ∧ · · · ∧ bn )
The definition is completely independent of the matrix representation of L. If L is an
n × n matrix A with column vectors a1, a2, . . ., an, and since e1, e2, . . ., en are a basis
for Vn,
    det A = ( Ae1 ∧ Ae2 ∧ · · · ∧ Aen ) / ( e1 ∧ e2 ∧ · · · ∧ en ) = ( a1 ∧ a2 ∧ · · · ∧ an ) / ( e1 ∧ e2 ∧ · · · ∧ en )
Without loss of generality, we may define e1 ∧ e2 ∧ · · · ∧ en = 1, so that
    det A = a1 ∧ a2 ∧ · · · ∧ an   (3.14)
and det I = e1 ∧ e2 ∧ · · · ∧ en = 1.
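Expanding det A = a1 ∧ a2 ∧ · · · ∧ an in the basis exterior products of the ei reproduces the classical permutation (Leibniz) sum for the determinant. A direct, deliberately inefficient sketch:

```python
# det A as the alternating sum over permutations: each term picks one entry
# per column, with sign given by the parity of the permutation.
import math
from itertools import permutations

def parity(p):
    # sign of a permutation, found by sorting it with transpositions
    s, q = 1, list(p)
    for i in range(len(q)):
        while q[i] != i:
            j = q[i]
            q[i], q[j] = q[j], q[i]
            s = -s
    return s

def det(A):
    n = len(A)
    return sum(parity(p) * math.prod(A[p[i]][i] for i in range(n))
               for p in permutations(range(n)))

print(det([[1, 2], [3, 4]]))   # 1*4 - 2*3
```

This is O(n · n!) work; it is shown only to make the definition concrete, not as a practical algorithm.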
Now note that the Grassmann notation has a built-in multiplication process.
Definition 3.55: For an exterior product Φ^p = a1 ∧ · · · ∧ ap in Λ^pV and an exterior product
Ψ^q = c1 ∧ · · · ∧ cq in Λ^qV, define exterior multiplication as
    Φ^p ∧ Ψ^q = a1 ∧ · · · ∧ ap ∧ c1 ∧ · · · ∧ cq   (3.15)
This definition is consistent with Definition 3.53 for exterior products, and so
Φ^p ∧ Ψ^q is itself an exterior product.
Also, a1 ∧ · · · ∧ a_{n−1} can be regarded as an n-vector, since Λ^{n−1}V has dimension n from
(3.12) and must therefore coincide with V, so the product equals some vector in V.
Theorem 3.21 (Laplace expansion): Given an n × n matrix A with column vectors ai.
Then det A = a1 ∧ a2 ∧ · · · ∧ an = a1^T (a2 ∧ · · · ∧ an).
Proof: Let ei be the ith unit vector in Vn and ιj be the jth unit vector in Vn−1, i.e. ei has
n components and ιj has n − 1 components. Then a1 = a11e1 + a21e2 + · · · + an1en, so that
    a1 ∧ a2 ∧ · · · ∧ an = a11 e1 ∧ a2 ∧ · · · ∧ an + a21 e2 ∧ a2 ∧ · · · ∧ an + · · · + an1 en ∧ a2 ∧ · · · ∧ an
The first exterior product in the sum on the right can be written
    e1 ∧ a2 ∧ · · · ∧ an = e1 ∧ (a12e1 + a22e2 + · · · + an2en) ∧ · · · ∧ (a1ne1 + a2ne2 + · · · + annen)
Using the multilinearity property and e1 ∧ e1 = 0 gives
    e1 ∧ a2 ∧ · · · ∧ an = e1 ∧ (a22e2 + · · · + an2en) ∧ · · · ∧ (a2ne2 + · · · + annen)
Since det In = 1 = det In−1, performing exactly the same multilinear operations on the
products of the ei as on the corresponding products of the ιj gives
    e1 ∧ (a22e2 + · · · + an2en) ∧ · · · ∧ (a2ne2 + · · · + annen)
        = [a22ι1 + · · · + an2ιn−1] ∧ · · · ∧ [a2nι1 + · · · + annιn−1] = det A11 = c11
where c11 is the cofactor of a11. Similarly,
    ej ∧ a2 ∧ · · · ∧ an = cj1  and  a1 ∧ a2 ∧ · · · ∧ an = Σ_{j=1}^{n} aj1 cj1 = (c11 c21 . . . cn1) a1
so that a2 ∧ · · · ∧ an = (c11 c21 . . . cn1)^T, and Theorem 3.21 is nothing more than a state-
ment of the Laplace expansion of the determinant about the first column. The use of column
interchanges generalizes the proof to the Laplace expansion about any column, and use of
det A = det A^T provides the proof of expansion about any row.
Solved Problems
3.1. Multiply the following matrices, where a1, a2, b1 and b2 are column vectors with n
elements:
    (i) (1 2)(3 4)^T   (ii) (1 2)^T(3 4)   (iii) (a1 | a2)^T b1
    (iv) (a1 | a2)(b1 | b2)^T   (v) (a1 | a2)^T (b1 | b2)   (vi) (a1 | a2) | λ1 0 ; 0 λ2 |
Using the rule for multiplication from Definition 3.13, and realizing that multiplication of a
k × n matrix times an n × m matrix results in a k × m matrix, we have
    (i) (1×3 + 2×4) = (11)
    (ii) | 1×3 1×4 ; 2×3 2×4 | = | 3 4 ; 6 8 |
    (iii) (a1^T b1 ; a2^T b1)^T
    (iv) a1b1^T + a2b2^T
    (v) | a1^T b1  a1^T b2 ; a2^T b1  a2^T b2 |
    (vi) (λ1a1 | λ2a2)
where semicolons separate the rows of a matrix.
3.2. Find the determinant of A (a) by direct computation, (b) by using elementary row
and column operations, and (c) by Laplace's expansion, where
    A = | 1 0 0 2 |
        | 1 2 0 6 |
        | 1 3 1 8 |
        | 0 0 0 2 |
(a) To facilitate direct computation, form the table of all 4! = 24 permutations
    p = (p1, p2, p3, p4) of (1, 2, 3, 4):
    1234  1243  1342  1324  1423  1432  2134  2143  2341  2314  2413  2431
    3124  3142  3241  3214  3412  3421  4123  4132  4231  4213  4312  4321
    Then det A is the sum over these 24 permutations of (−1)^σ a1p1 a2p2 a3p3 a4p4, giving
    det A = +1·2·1·2 − 1·2·8·0 + 1·0·8·0 − 1·0·3·2 + 1·6·3·0 − 1·6·1·0 + · · · = 4
    since every one of the remaining products contains at least one zero factor of A.
(b) Using elementary row and column operations, subtract 1, 3 and 4 times the bottom row from
    the first, second and third rows respectively. This reduces A to a triangular matrix, whose
    determinant is the product of its diagonal elements 1 · 2 · 1 · 2 = 4.
(c) Using the Laplace expansion about the bottom row, whose only nonzero element is a44 = 2,
    det A = 2 det | 1 0 0 ; 1 2 0 ; 1 3 1 | = 2 · 2 = 4
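Part (c) is easy to verify in code, since the bottom row of A leaves only one surviving cofactor. A minimal check using a hand-written 3 × 3 determinant:

```python
# Check of Problem 3.2(c): Laplace expansion about the bottom row (0 0 0 2),
# where only the 4,4 cofactor survives.
A = [[1, 0, 0, 2],
     [1, 2, 0, 6],
     [1, 3, 1, 8],
     [0, 0, 0, 2]]

def det3(m):
    # cofactor expansion of a 3 x 3 determinant about its first row
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

minor = [row[:3] for row in A[:3]]   # delete row 4 and column 4
print(2 * det3(minor))               # agrees with parts (a) and (b)
```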
3.3. Prove that det(A^T) = det A, where A is an n × n matrix.
    If A = {aij}, then A^T = {aji}, so that det(A^T) is the sum over all n! permutations of
    (−1)^σ a_{p1 1} a_{p2 2} · · · a_{pn n}.
    Since a determinant is all possible combinations of products of elements where only one element
    is taken from each row and column, the individual terms in the sum are the same as those of det A.
    Therefore the only question is the sign of each product. Consider a typical term from a 3 × 3 matrix:
    a31 a12 a23, i.e. p1 = 3, p2 = 1, p3 = 2. Permute the elements through a12 a31 a23 to a12 a23 a31, so that
    the row numbers are in natural 1, 2, 3 order instead of the column numbers. From this, it can be concluded
    in general that it takes exactly the same number of transpositions to undo p1, p2, . . ., pn to 1, 2, . . ., n
    as it does to permute 1, 2, . . ., n to obtain p1, p2, . . ., pn. Therefore σ must be the same for each
    product term in the two series, and so the determinants are equal.
3.4. A Vandermonde matrix V has the form
    V = | 1        1        . . .  1        |
        | θ1       θ2       . . .  θn       |
        | . . .                             |
        | θ1^{n−1} θ2^{n−1} . . .  θn^{n−1} |
Prove V is nonsingular if and only if θi ≠ θj for i ≠ j.
    This will be proven by showing
        det V = (θ2 − θ1)(θ3 − θ2)(θ3 − θ1) · · · (θn − θn−1)(θn − θn−2) · · · (θn − θ1) = Π_{i>j} (θi − θj)
    For n = 2, det V = θ2 − θ1, which agrees with the hypothesis. By induction, if the hypothesis
    can be shown true for n given that it is true for n − 1, then the hypothesis holds for all n ≥ 2. Note each
    term of det V will contain one and only one element from the nth column, so that
        det V = γ0 + γ1θn + · · · + γn−1 θn^{n−1}
    If θn = θi for i = 1, 2, . . ., n − 1, then det V = 0 because two columns are equal. Therefore
    θ1, θ2, . . ., θn−1 are the roots of the polynomial, and
        det V = γn−1 (θn − θ1)(θn − θ2) · · · (θn − θn−1)
    But γn−1 is the cofactor of θn^{n−1} by the Laplace expansion; hence
        γn−1 = det | 1        1        . . .  1          |
                   | θ1       θ2       . . .  θn−1       |
                   | . . .                               |
                   | θ1^{n−2} θ2^{n−2} . . .  θn−1^{n−2} |
    By assumption that the hypothesis holds for n − 1,
        γn−1 = Π_{i>j; i,j ≤ n−1} (θi − θj)
    Combining these relations gives
        det V = (θn − θ1)(θn − θ2) · · · (θn − θn−1) Π_{i>j; i,j ≤ n−1} (θi − θj) = Π_{i>j} (θi − θj)
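The product formula can be checked against a brute-force determinant for a small case. The θ values below are illustrative:

```python
# Vandermonde determinant: compare the permutation-sum determinant with the
# product formula det V = prod_{i>j} (theta_i - theta_j).
import math
from itertools import permutations

def det(A):
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        s, q = 1, list(p)
        for i in range(n):          # parity of p by sorting with transpositions
            while q[i] != i:
                j = q[i]
                q[i], q[j] = q[j], q[i]
                s = -s
        total += s * math.prod(A[i][p[i]] for i in range(n))
    return total

thetas = [2, 3, 5]
V = [[t**i for t in thetas] for i in range(len(thetas))]   # rows 1, theta, theta^2
product = math.prod(thetas[i] - thetas[j]
                    for i in range(len(thetas)) for j in range(i))
print(det(V), product)
```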
3.5. Show det | A B ; 0 C | = det A det C, where A and C are n × n and m × m matrices
respectively.
    Either det A = 0 or det A ≠ 0. If det A = 0, then the column vectors of A are linearly
    dependent. Hence the first n column vectors of | A B ; 0 C | are linearly dependent, so that
        det | A B ; 0 C | = 0
    and the hypothesis holds.
    If det A ≠ 0, then A^{-1} exists and
        | A B ; 0 C | = | A 0 ; 0 I | | I 0 ; 0 C | | I A^{-1}B ; 0 I |
    The rightmost matrix is an upper triangular matrix, so its determinant is the product of the
    diagonal elements, which is unity. Furthermore, repeated use of the Laplace expansion about the
    diagonal elements of I gives
        det | A 0 ; 0 I | = det A  and  det | I 0 ; 0 C | = det C
    Use of the product rule of Theorem 3.2 then gives the proof.
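A small numeric instance makes the block-triangular result concrete. The 2 × 2 blocks below are illustrative:

```python
# Numeric check of det([[A, B], [0, C]]) = det A * det C for 2 x 2 blocks.
import math
from itertools import permutations

def det(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        s, q = 1, list(p)
        for i in range(n):
            while q[i] != i:
                j = q[i]
                q[i], q[j] = q[j], q[i]
                s = -s
        total += s * math.prod(M[i][p[i]] for i in range(n))
    return total

A = [[1, 2], [3, 4]]        # det A = -2
C = [[2, 0], [1, 3]]        # det C =  6
M = [[1, 2, 5, 6],          # B = [[5, 6], [7, 8]] does not affect the result
     [3, 4, 7, 8],
     [0, 0, 2, 0],
     [0, 0, 1, 3]]

print(det(M), det(A) * det(C))
```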
3.6. Show that if a1, a2, . . ., ak are a basis of V, then every vector in V is expressible
uniquely as a linear combination of a1, a2, . . ., ak.
    Let x be an arbitrary vector in V. Because x is in V, and V is spanned by the basis vectors
    a1, a2, . . ., ak by definition, the question is one of uniqueness. If there are two or more linear
    combinations of a1, a2, . . ., ak that represent x, they can be written as
        x = Σ_{i=1}^{k} βi ai
    and
        x = Σ_{i=1}^{k} αi ai
    Subtracting one expression from the other gives
        0 = (β1 − α1)a1 + (β2 − α2)a2 + · · · + (βk − αk)ak
    Because the basis consists of linearly independent vectors, the only way this can be an equality is
    for βi = αi. Therefore all representations are the same, and the theorem is proved.
    Note that both properties of a basis were used here. If a set of vectors did not span V, not all
    vectors would be expressible as a linear combination of the set. If the set did span V but
    were linearly dependent, a representation of other vectors would not be unique.
3.7. Given the set of nonzero vectors a1, a2, . . ., am in Vn. Show that the set is linearly
dependent if and only if some ak, for 1 < k ≤ m, is a linear combination of a1, a2,
. . ., ak−1.
    If part: If ak is a linear combination of a1, a2, . . ., ak−1, then
        ak = Σ_{i=1}^{k−1} βi ai
    where the βi are not all zero since ak is nonzero. Then
        0 = β1a1 + β2a2 + · · · + βk−1ak−1 + (−1)ak + 0ak+1 + · · · + 0am
    which satisfies the definition of linear dependence.
    Only if part: If the set is linearly dependent, then
        0 = β1a1 + β2a2 + · · · + βkak + · · · + βmam
    where not all the βi are zero. Find that nonzero βk such that βi = 0 for all i > k. Then the
    linear combination is
        ak = −βk^{-1} Σ_{i=1}^{k−1} βi ai
3.8. Show that if an m X n matrix A has n columns with at most r linearly independent
columns, then the null space of A has dimension n — r.
Because A is w X w, then a; G X)^, x G °i;„. Renumber the a; so that the first r are the inde-
pendent ones. The rest of the column vectors can be written as
ar + i = ^iiai + /3i2a2 + • • • + Pir^r
{3.16)
because a^+i, . . .,a„ are linearly dependent and can therefore be expressed as a linear combination
of the linearly independent column vectors. Construct the n — r vectors XjjX^, . . .,x„_r such that
62
ELEMENTARY MATRIX THEORY
[CHAP. 3
^11
-1
X2 =
w
/321
iS2r
Pn—r, 1
Pn — r/2
Hn — r, r
Note that Axi = by the first equation of {3.16), Axj = by the second equation of {S.16), etc.,
so these are n — r solutions.
Now it will be shown that these vectors are a basis for the null space of A. First, they must
be linearly independent because of the different positions of the —1 in the bottom part of the vectors.
To show they are a basis, then, it must be shown that all solutions can be expressed as a linear
combination of the x_i, i.e. it must be shown that the x_i span the null space of A. Consider an arbitrary
solution x of Ax = 0. Then
    x = ( ξ₁  ξ₂  ...  ξᵣ  ξ_{r+1}  ξ_{r+2}  ...  ξₙ )ᵀ

Or, in vector notation,

    x = Σ_{i=1}^{n−r} (−ξ_{r+i}) x_i + s

where s is a remainder if the x_i do not span the null space of A. If s = 0, then the x_i do span
the null space of A. Check that the last n − r elements of s are zero by writing the vector equality
as a set of scalar equations: in each of the last n − r positions only one x_i has a nonzero entry,
namely −1, so that element of s is ξ_{r+i} − ξ_{r+i} = 0.
Multiply both sides of the equation by A:

    Ax = Σ_{i=1}^{n−r} (−ξ_{r+i}) Ax_i + As
Since Ax = 0 and each Ax_i = 0, it follows that As = 0. Writing this out in terms of the column
vectors of A gives Σ_{i=1}^{r} σ_i a_i = 0, where the σ_i are the first r elements of s (the last
n − r elements are zero). But these column vectors are linearly independent, so every σ_i = 0 and
hence s = 0. Therefore the n − r vectors x_i are a basis, and the null space of A has dimension n − r.
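The rank–nullity bookkeeping of this problem can be checked numerically. A minimal sketch, assuming NumPy is available (the matrix and coefficient vectors below are illustrative choices, not from the text):

```python
import numpy as np

# A 3x4 matrix built so that columns 3 and 4 are combinations of columns 1 and 2,
# hence r = 2 independent columns and a null space of dimension n - r = 4 - 2 = 2.
a1 = np.array([1.0, 0.0, 2.0])
a2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([a1, a2, a1 + a2, 2*a1 - a2])

r = np.linalg.matrix_rank(A)

# Following the construction in the problem: x_i carries the coefficients
# beta_i1, ..., beta_ir of the dependent column, followed by -1 in position r + i.
x1 = np.array([1.0, 1.0, -1.0, 0.0])      # column 3 = 1*a1 + 1*a2
x2 = np.array([2.0, -1.0, 0.0, -1.0])     # column 4 = 2*a1 - 1*a2

null_dim = A.shape[1] - r                 # dimension of the null space, n - r
```

Both x₁ and x₂ satisfy Ax = 0, and they are independent because of the positions of the −1 entries, exactly as in the proof.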
3.9. Show that the dimension of the vector space spanned by the row vectors of a matrix
is equal to the dimension of the vector space spanned by the column vectors.
Without loss of generality let the first r column vectors a_i be linearly independent, and let s be the
number of linearly independent row vectors of A. Partition A as follows:
Let x_i = ( a_{i1}  a_{i2}  ...  a_{ir} ), the i-th row of A truncated to its first r elements, and let
y_j = ( a_{1j}  a_{2j}  ...  a_{r+1,j} )ᵀ, the j-th column of A truncated to its first r + 1 elements.
Since the x_i are r-vectors, any r + 1 of them are linearly dependent, so Σ_{i=1}^{r+1} b_i x_i = 0
for some b_i not all zero. Let bᵀ = ( b₁  b₂  ...  b_{r+1} ), so that

    0 = Σ_{i=1}^{r+1} b_i x_i = ( Σ_{i=1}^{r+1} b_i a_{i1}   Σ_{i=1}^{r+1} b_i a_{i2}   ...   Σ_{i=1}^{r+1} b_i a_{ir} ) = ( bᵀy₁   bᵀy₂   ...   bᵀyᵣ )

Therefore bᵀy_j = 0 for j = 1, 2, ..., r.
Since the last n − r column vectors a_j are linearly dependent on the first r, a_j = Σ_{i=1}^{r} α_{ij} a_i
for j = r+1, ..., n. Then y_j = Σ_{i=1}^{r} α_{ij} y_i, so that bᵀy_j = Σ_{i=1}^{r} α_{ij} bᵀy_i = 0
for j = r+1, ..., n. Hence

    0 = ( bᵀy₁  bᵀy₂  ...  bᵀyₙ ) = b₁( a₁₁ ... a₁ₙ ) + b₂( a₂₁ ... a₂ₙ ) + ··· + b_{r+1}( a_{r+1,1} ... a_{r+1,n} )

Therefore r + 1 of the row vectors of A are linearly dependent, so that s ≤ r. Now consider Aᵀ.
The same argument leads to r ≤ s, so that r = s.
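The equality of row rank and column rank can be illustrated numerically. A quick sketch, assuming NumPy (the matrix is an illustrative choice):

```python
import numpy as np

# A rectangular matrix whose second row is a multiple of the first:
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],    # dependent on row 1
              [0.0, 1.0, 1.0, 2.0]])

col_rank = np.linalg.matrix_rank(A)    # dimension of the column space
row_rank = np.linalg.matrix_rank(A.T)  # dimension of the row space
```

Both computations return 2, as the theorem requires.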
3.10. Show that the rank of the m × n matrix A equals the maximum number of linearly
independent column vectors of A.
Let there be r linearly independent column vectors, and without loss of generality let them be
a₁, a₂, ..., aᵣ. Then a_i = Σ_{j=1}^{r} α_{ij} a_j for i = r+1, ..., n. Any y in the range space of A can be ex-
pressed in terms of an arbitrary x as

    y = Ax = Σ_{i=1}^{n} ξ_i a_i = Σ_{i=1}^{r} ξ_i a_i + Σ_{i=r+1}^{n} ξ_i ( Σ_{j=1}^{r} α_{ij} a_j ) = Σ_{j=1}^{r} ( ξ_j + Σ_{i=r+1}^{n} α_{ij} ξ_i ) a_j

This shows the a_i for i = 1, ..., r span the range space of A, and since they are linearly inde-
pendent they are a basis, so the rank of A = r.
3.11. For an m X n matrix A, give necessary and sufficient conditions for the existence and
uniqueness of the solutions of the matrix equations Ax = 0, Ax = b and AX = B
in terms of the column vectors of A.
For Ax = 0, the solution x = 0 always exists. A necessary and sufficient condition for
uniqueness of the solution x = 0 is that the column vectors a₁, ..., aₙ be linearly independent.
To show the necessity: if the a_i are dependent, by definition of linear dependence there exist ξ_i,
not all zero, such that Σ_{i=1}^{n} ξ_i a_i = 0. Then there exists another solution x = (ξ₁ ... ξₙ)ᵀ ≠ 0.
To show sufficiency: if the a_i are independent, only ξ_i = 0 satisfy Σ_{i=1}^{n} ξ_i a_i = Ax = 0.
For Ax = b, rewrite the equation as b = Σ_{i=1}^{n} ξ_i a_i. Then from Problem 3.10 a necessary and sufficient
condition for existence of a solution is that b lie in the range space of A, i.e. the space spanned by
the column vectors. To find conditions for uniqueness of solutions, write one solution as
(η₁ η₂ ... ηₙ)ᵀ and another as (ζ₁ ζ₂ ... ζₙ)ᵀ. Then b = Σ_{i=1}^{n} η_i a_i = Σ_{i=1}^{n} ζ_i a_i, so that 0 = Σ_{i=1}^{n} (η_i − ζ_i) a_i.
The solution is unique if and only if a₁, ..., aₙ are linearly independent.
Whether or not b = 0, necessary and sufficient conditions for existence and uniqueness of
solution to Ax = b are that b lie in the vector space spanned by the column vectors of A and that
the column vectors are linearly independent.
Since AX = B can be written as Ax_j = b_j for each column vector x_j of X and b_j of B, by
the preceding it is required that each b_j lie in the range space of A and that the column vectors of A
form a basis for that range space, i.e. be linearly independent.
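The existence condition "b lies in the space spanned by the columns of A" can be tested numerically as rank [A | b] = rank A. A sketch, assuming NumPy (the matrices are illustrative choices, not from the text):

```python
import numpy as np

# Solvability of Ax = b: b must lie in the space spanned by the columns of A,
# i.e. appending b as an extra column must not increase the rank.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [1.0, 0.0]])
b_in  = A @ np.array([1.0, 1.0])       # in the range space by construction
b_out = np.array([1.0, 0.0, 0.0])      # not expressible in the columns of A

rank_A   = np.linalg.matrix_rank(A)
solvable = np.linalg.matrix_rank(np.column_stack([A, b_in]))  == rank_A
no_soln  = np.linalg.matrix_rank(np.column_stack([A, b_out])) != rank_A
```

Since the two columns of A are independent here, the solution for b_in is also unique.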
3.12. Given an m × n matrix A. Show rank A = rank Aᵀ = rank AᵀA = rank AAᵀ.
By Theorem 3.14, the rank of A equals the number r of linearly independent column vectors
of A. Hence the dimension of the vector space spanned by the column vectors equals r. By
Theorem 3.13, the dimension of the vector space spanned by the row vectors then also equals r. But the
row vectors of A are the column vectors of Aᵀ, so Aᵀ has rank r.
To show rank A = rank AᵀA, note both A and AᵀA have n columns. Consider any vector
y in the null space of A, i.e. Ay = 0. Then AᵀAy = 0, so that y is also in the null space of AᵀA.
Now consider any vector z in the null space of AᵀA, i.e. AᵀAz = 0. Then zᵀAᵀAz = ‖Az‖₂² = 0, so
that Az = 0, i.e. z is also in the null space of A. Therefore the null space of A equals the
null space of AᵀA, with some common dimension k. Use of Theorem 3.11 gives rank A = n − k =
rank AᵀA. Substitution of Aᵀ for A in this result gives rank Aᵀ = rank AAᵀ.
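All four ranks can be compared at once on a small example. A sketch assuming NumPy (the rank-2 matrix below is an illustrative choice):

```python
import numpy as np

# A 4x4 matrix of rank 2: row 2 = 2*row 1, row 4 = row 3 - 3*row 1.
A = np.array([[1.0, 2.0, 0.0,  1.0],
              [2.0, 4.0, 0.0,  2.0],
              [3.0, 6.0, 1.0,  0.0],
              [0.0, 0.0, 1.0, -3.0]])

r = np.linalg.matrix_rank(A)   # the common rank of A, A^T, A^T A, A A^T
```

The same value is returned for A, Aᵀ, AᵀA and AAᵀ, in agreement with the problem.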
3.13. Given an m × n matrix A and an n × k matrix B, show that rank AB ≤ rank A and
rank AB ≤ rank B. Also show that if B is nonsingular, rank AB = rank A, and
that if A is nonsingular, rank AB = rank B.
Let rank A = r, so that A has r linearly independent column vectors a₁, ..., aᵣ. Then
a_i = Σ_{j=1}^{r} α_{ij} a_j for i = r+1, ..., n. Using partitioned multiplication, AB = Σ_{i=1}^{n} a_i b_iᵀ, where the b_iᵀ are the row vectors
of B. Hence

    AB = Σ_{i=1}^{r} a_i b_iᵀ + Σ_{i=r+1}^{n} Σ_{j=1}^{r} α_{ij} a_j b_iᵀ = Σ_{j=1}^{r} a_j ( b_jᵀ + Σ_{i=r+1}^{n} α_{ij} b_iᵀ )

so that all the column vectors of AB are linear combinations of the r independent
column vectors of A, and therefore rank AB ≤ r.
Furthermore, use of Theorem 3.15 gives rank B = rank Bᵀ. Then use of the first part of this
problem with Bᵀ substituted for A and Aᵀ for B gives rank BᵀAᵀ ≤ rank Bᵀ = rank B. Again, Theorem 3.15
gives rank AB = rank BᵀAᵀ, so that rank AB ≤ rank B.
If A is nonsingular, rank B = rank A⁻¹(AB) ≤ rank AB, using A⁻¹ for A and AB for B in
the first result of this problem. But since rank AB ≤ rank B, then rank AB = rank B if A⁻¹
exists. Similar reasoning can be used to prove the remaining statement.
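Both inequalities, and the equality when one factor is nonsingular, are easy to check numerically. A sketch assuming NumPy (the matrices are illustrative choices):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])     # rank 1
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])     # rank 2, nonsingular

rank_A  = np.linalg.matrix_rank(A)
rank_B  = np.linalg.matrix_rank(B)
rank_AB = np.linalg.matrix_rank(A @ B)
```

Here rank AB = rank A because B is nonsingular, and rank AB ≤ min(rank A, rank B) as the problem asserts.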
3.14. Given n vectors x₁, x₂, ..., xₙ in a generalized vector space possessing a scalar product.
Define the Gram matrix G as the matrix whose elements are g_{ij} = (x_i, x_j). Prove that
det G = 0 if and only if x₁, x₂, ..., xₙ are linearly dependent. Note that G is a matrix
whose elements are scalars, even though each x_i might be, for instance, a function of time.
Suppose det G = 0. Then from Theorem 3.7, β₁g₁ + β₂g₂ + ··· + βₙgₙ = 0, where the g_j are the column
vectors of G and the β_j are not all zero. Then 0 = Σ_{j=1}^{n} β_j g_{ij} = Σ_{j=1}^{n} β_j (x_i, x_j) for each i.
Multiplying by β_i* and summing over i still gives zero, so that

    0 = Σ_{i=1}^{n} β_i* Σ_{j=1}^{n} β_j (x_i, x_j) = ( Σ_{i=1}^{n} β_i x_i, Σ_{j=1}^{n} β_j x_j )

Use of property (2) of Definition 3.47 then gives Σ_{i=1}^{n} β_i x_i = 0, which is the definition of linear
dependence.
Now suppose the x_j are linearly dependent. Then there exist γ_j, not all zero, such that Σ_{j=1}^{n} γ_j x_j = 0.
Taking the inner product with any x_i gives 0 = Σ_{j=1}^{n} γ_j (x_i, x_j) = Σ_{j=1}^{n} γ_j g_{ij} for any i. Therefore
Σ_{j=1}^{n} γ_j g_j = 0 and the column vectors of G are linearly dependent, so that det G = 0.
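The Gram-determinant test for dependence can be sketched numerically, assuming NumPy and the ordinary dot product as the scalar product (the vectors are illustrative choices):

```python
import numpy as np

x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = x1 - 2*x2                      # linearly dependent on x1 and x2

def gram(vectors):
    # G_ij = (x_i, x_j) with the ordinary dot product as the scalar product
    return np.array([[np.dot(a, b) for b in vectors] for a in vectors])

det_dep   = np.linalg.det(gram([x1, x2, x3]))   # ~0: dependent set
det_indep = np.linalg.det(gram([x1, x2]))       # > 0: independent set
```

For the independent pair, det G = det [[2, 1], [1, 2]] = 3 > 0; including the dependent x₃ drives the determinant to zero.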
3.15. Given two linear transformations L₁ and L₂, both of whose domains and ranges are
in 𝒱. Show det (L₁L₂) = (det L₁)(det L₂), so that as a particular case det AB =
det BA = det A det B.
Let 𝒱 have a basis b₁, b₂, ..., bₙ. Using exterior products, from (3.13),

    det (L₁L₂) = [ L₁L₂(b₁) ∧ L₁L₂(b₂) ∧ ··· ∧ L₁L₂(bₙ) ] / [ b₁ ∧ b₂ ∧ ··· ∧ bₙ ]

If L₂ is singular, the vectors L₂(b_i) = c_i are linearly dependent, and so are the vectors L₁L₂(b_i) =
L₁(c_i). Then det (L₁L₂) = 0 = (det L₁)(det L₂), since det L₂ = 0. If L₂ is nonsingular, the vectors c_i are linearly independent
and form a basis of 𝒱. Then c₁ ∧ c₂ ∧ ··· ∧ cₙ ≠ 0, so that

    det (L₁L₂) = ( [ L₁(c₁) ∧ ··· ∧ L₁(cₙ) ] / [ c₁ ∧ ··· ∧ cₙ ] ) ( [ L₂(b₁) ∧ ··· ∧ L₂(bₙ) ] / [ b₁ ∧ ··· ∧ bₙ ] )
                = (det L₁)(det L₂)
3.16. Using exterior products, prove that det (I + abᵀ) = 1 + bᵀa.

    det (I + abᵀ) = (e₁ + b₁a) ∧ (e₂ + b₂a) ∧ ··· ∧ (eₙ + bₙa)
                  = e₁ ∧ e₂ ∧ ··· ∧ eₙ + b₁ a ∧ e₂ ∧ ··· ∧ eₙ + b₂ e₁ ∧ a ∧ e₃ ∧ ··· ∧ eₙ + ···
                    + bₙ e₁ ∧ ··· ∧ eₙ₋₁ ∧ a

since every term containing a in two positions vanishes. Use of Theorem 3.21, together with
a ∧ e₂ ∧ ··· ∧ eₙ = (aᵀe₁) e₁ ∧ e₂ ∧ ··· ∧ eₙ, e₁ ∧ a ∧ e₃ ∧ ··· ∧ eₙ = (aᵀe₂) e₁ ∧ e₂ ∧ ··· ∧ eₙ, etc., gives

    det (I + abᵀ) = 1 + b₁aᵀe₁ + b₂aᵀe₂ + ··· + bₙaᵀeₙ = 1 + aᵀ(b₁e₁ + ··· + bₙeₙ) = 1 + aᵀb = 1 + bᵀa
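This rank-one determinant identity is simple to verify numerically. A sketch assuming NumPy (the vectors are illustrative choices):

```python
import numpy as np

a = np.array([1.0, -2.0, 3.0])
b = np.array([4.0, 0.5, 2.0])

# det(I + a b^T) should equal 1 + b^T a, here 1 + (4 - 1 + 6) = 10.
lhs = np.linalg.det(np.eye(3) + np.outer(a, b))
rhs = 1.0 + b @ a
```

The agreement is to rounding error, for any choice of a and b.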
As an application, consider the projection matrix P = I − abᵀ(aᵀb)⁻¹. Direct multiplication shows
P² = P, and the transformation y = Px maps every x into the hyperplane bᵀx = 0, since
bᵀy = bᵀ[I − abᵀ(aᵀb)⁻¹]x = (bᵀ − bᵀ)x = 0, while leaving that hyperplane pointwise invariant,
since bᵀx = 0 implies Px = x − a(aᵀb)⁻¹(bᵀx) = x.
Supplementary Problems
3.17. Prove that an upper triangular matrix added to or multiplied by an upper triangular matrix results
in an upper triangular matrix.
3.18. Using the formula given in Definition 3.13 for operations with elements, multiply the following
matrices:

    ( a  1 ) ( 0  2   1     )
    ( 0  b ) ( π  x²  sin t )

Next, partition in any compatible manner and verify the validity of partitioned multiplication.
3.19. Transpose the matrix

    ( 0  2   j     )
    ( π  x²  sin t )

and then take the complex conjugate.
3.20. Prove IA = AI = A for any compatible matrix A.
3.21. Prove all skew-symmetric matrices have all their diagonal elements equal to zero.
3.22. Prove (AB)ᵀ = BᵀAᵀ.
3.23. Prove that matrix addition and multiplication are associative and distributive, and that matrix
addition is commutative.
3.24. Find a nonzero matrix which, when multiplied by the matrix B of Example 3.5, page 40, results
in the zero matrix. Hence conclude that AB = 0 does not necessarily mean A = 0 or B = 0.
3.25. How many times does one particular element appear in the sum for the determinant of an n x n
matrix?
3.26. Prove (AB)⁻¹ = B⁻¹A⁻¹ if the indicated inverses exist.
3.27. Prove (A⁻¹)ᵀ = (Aᵀ)⁻¹.
3.28. Prove det A⁻¹ = (det A)⁻¹.
3.29. Verify that det [( 2  2; 1  3 )( −1  0; 1  1 )] = det ( 0  2; 2  3 ) = det ( 2  2; 1  3 ) det ( −1  0; 1  1 ),
and that ( 0  2; 2  3 )⁻¹ = ( −1  0; 1  1 )⁻¹ ( 2  2; 1  3 )⁻¹.
3.30. Given

    A = ( 2  4  6
          1  1  1
          3  5  8 )

Find A⁻¹.
3.31. If both A-i and B-i exist, does (A + B)-i exist in general?
3.32. Given a matrix A(t) whose elements are functions of time. Show dA⁻¹/dt = −A⁻¹ (dA/dt) A⁻¹.
3.33. Let a nonsingular matrix A be partitioned into A₁₁, A₁₂, A₂₁ and A₂₂ such that A₁₁ and
Δ = A₂₂ − A₂₁A₁₁⁻¹A₁₂ have inverses. Show that

    A⁻¹ = ( I   −A₁₁⁻¹A₁₂ ) ( A₁₁⁻¹   0   ) (  I           0 )
          ( 0     I       ) (  0     Δ⁻¹ ) ( −A₂₁A₁₁⁻¹    I )

and that if A₂₁ = 0, then

    A⁻¹ = ( A₁₁⁻¹   −A₁₁⁻¹A₁₂A₂₂⁻¹ )
          (  0        A₂₂⁻¹        )
3.34. Are the vectors (2 -1 3), (1 -3 4 0) and (1 1 -2 2) linearly independent?
3.35. Given the matrix equation

    ( a₁₁  a₁₂ ) ( ξ₁ )   =   ( β₁ )
    ( a₂₁  a₂₂ ) ( ξ₂ )       ( β₂ )

Using algebraic manipulations on the scalar equations a₁₁ξ₁ + a₁₂ξ₂ = β₁ and a₂₁ξ₁ + a₂₂ξ₂ = β₂,
find the conditions under which no solutions exist and the conditions under which many solutions
exist, and thus verify Theorem 3.11 and the results of Problem 3.11.
3.36. Let X be in the null space of A and y be in the range space of A'''. Show x'^'y = 0.
3.37. Given matrix A = ( j . Find a basis for the null space of A.
3.38. For the matrix A = ( · ), show that (a) an arbitrary vector z in 𝒱₂ can be expressed as the
sum of two vectors, z = x + y, where x is in the range space of A and y is in the null space of the
transpose of A, and (b) this is true for any matrix A.
3.39. Given n × k matrices A and B and an m × n matrix X such that XA = XB. Under what conditions
can we conclude A = B?
3.40. Given x, y in 𝒱, where b₁, b₂, ..., bₙ are an orthonormal basis of 𝒱. Show that

    (x, y) = Σ_{i=1}^{n} (x, b_i)(b_i, y)
3.41. Given real vectors x and y such that ‖x‖₂ = ‖y‖₂. Show (x + y) is orthogonal to (x − y).
3.42. Show that rank (A + B) ≤ rank A + rank B.
3.43. Define T as the operator that multiplies every vector in 𝒱₂ by a constant α and then adds a
translation vector t₀. Is T a linear operator?
3.44. Given the three vectors a₁ = (√119  −4  3), a₂ = (√119  −1  7) and a₃ = (√119  −10  −5), use
the Gram-Schmidt procedure on a₁, a₂ and a₃ in the order given to find a set of orthonormal basis
vectors.
3.45. Show that the exterior product x₁ ∧ ··· ∧ x_p satisfies Definition 3.44, i.e. that it is an element
of a generalized vector space Λ_p𝒱, the space of all linear combinations of p-fold exterior products.
3.46. Show (α₁e₁ + α₂e₂ + α₃e₃) ∧ (β₁e₁ + β₂e₂ + β₃e₃) = (α₂β₃ − α₃β₂)e₁ + (α₁β₃ − α₃β₁)e₂ + (α₁β₂ − α₂β₁)e₃,
illustrating that a ∧ b corresponds to the cross product in 𝒱₃.
3.47. Given vectors x₁, x₂, ..., xₙ and an n × n matrix A such that y₁, y₂, ..., yₙ are linearly independent,
where y_i = Ax_i. Prove that x₁, x₂, ..., xₙ are linearly independent.
3.48. Prove that the dimension of Λ_p𝒱 is n!/[(n − p)! p!].
3.49. Show that the remainder for the Schwarz inequality is

    (a, a)(b, b) − |(a, b)|² = ½ Σ_{i=1}^{n} Σ_{j=1}^{n} |a_i b_j − a_j b_i|²

for n-vectors. What is the remainder for the inner product defined as (a, b) = ∫ a*(t) b(t) dt ?
Answers to Supplementary Problems
3.18.   ( π    2a + x²   a + sin t )
        ( bπ   bx²       b sin t   )

3.19.   ( 0    π     )
        ( 2    x²    )
        ( −j   sin t )
3.24.   ( −2α   α )    for any α and β
        ( −2β   β )
3.25. (n − 1)!, as is most easily seen by the Laplace expansion.
3.30.   A⁻¹ = ( −3/2    1    1 )
              (  5/2    1   −2 )
              ( −1     −1    1 )
3.31. No
3.34. No
3.37. (5 −4 1 0)ᵀ and (6 −5 0 1)ᵀ are a basis.
3.38. (a) x = ( · )α and y = ( · )θ, where the scalars α and θ are arbitrary; since x and y are independ-
ent they span 𝒱₂.
3.39. The n column vectors of X must be linearly independent.
3.43. No, this is an affine transformation.
3.44. b₁ = (√119  −4  3)/12, b₂ = (0  3  4)/5, and a₃ is coplanar with a₁ and a₂ so that only b₁ and b₂
are required.
3.48. One method of proof uses induction.
3.49.  ½ ∫∫ |a(t) b(τ) − a(τ) b(t)|² dτ dt
Chapter 4
Matrix Analysis
4.1 EIGENVALUES AND EIGENVECTORS
Definition 4.1: An eigenvalue of the n × n (square) matrix A is one of those scalars λ that
permit a nontrivial (x ≠ 0) solution to the equation

    Ax = λx        (4.1)

Note this equation can be rewritten as (A − λI)x = 0. Nontrivial solution vectors x
exist only if det (A − λI) = 0, for otherwise (A − λI)⁻¹ exists and forces x = 0.
Example 4.1.
Find the eigenvalues of

    A = ( 3  4 )
        ( 1  3 )

The eigenvalue equation is

    ( 3  4 ) ( x₁ )   =  λ ( x₁ )
    ( 1  3 ) ( x₂ )        ( x₂ )

The characteristic equation is

    det ( 3 − λ    4     )  =  0
        (   1    3 − λ   )

Then (3 − λ)(3 − λ) − 4 = 0, or λ² − 6λ + 5 = 0,
a second-order polynomial equation whose roots are the eigenvalues λ₁ = 1, λ₂ = 5.
Definition 4.2: The characteristic polynomial of A is det (A − λI). Note the characteristic
polynomial is an nth-order polynomial in λ. Then there are n eigenvalues
λ₁, λ₂, ..., λₙ that are the roots of this polynomial, although some might be
repeated roots.
Definition 4.3: An eigenvalue of the square matrix A is said to be distinct if it is not a
repeated root.
Definition 4.4: Associated with each eigenvalue λ_i of the n × n matrix A there is a nonzero
solution vector x_i of the eigenvalue equation Ax_i = λ_i x_i. This solution
vector is called an eigenvector.
Example 4.2.
In the previous example, the eigenvector associated with the eigenvalue 1 is found as follows:

    ( 3  4 ) ( x₁ )   =  (1) ( x₁ )
    ( 1  3 ) ( x₂ )          ( x₂ )

so that

    ( 2  4 ) ( x₁ )   =  ( 0 )
    ( 1  2 ) ( x₂ )      ( 0 )

Then 2x₁ + 4x₂ = 0 and x₁ + 2x₂ = 0, from which x₁ = −2x₂. Thus the eigenvector is x = (−2  1)ᵀ x₂,
where the scalar x₂ can be any number.
Note that eigenvectors have arbitrary length, because for any scalar α the equation Ax = λx
also has the solution vector αx, since A(αx) = αAx = αλx = λ(αx).
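Examples 4.1 and 4.2 can be reproduced numerically. A sketch, assuming NumPy is available (not part of the original text):

```python
import numpy as np

A = np.array([[3.0, 4.0],
              [1.0, 3.0]])

# Eigenvalues and eigenvectors (eigenvectors are the columns of X).
lam, X = np.linalg.eig(A)
order = np.argsort(lam)            # sort so lam = [1, 5]
lam, X = lam[order], X[:, order]

# Each column satisfies A x = lambda x, to within rounding.
resid = np.linalg.norm(A @ X - X * lam, axis=0)
```

The eigenvector for λ = 1 returned by `eig` is proportional to (−2 1)ᵀ, the length being arbitrary as noted above.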
Definition 4.5: An eigenvector is normalized to unity if its length is unity, i.e. ||x|| = 1.
Sometimes it is easier to normalize x such that one of the elements is unity.
Example 4.3.
The eigenvector of unit length belonging to the eigenvalue 1 in the previous example is
x₁ = (−2  1)ᵀ/√5, whereas normalizing its first element to unity gives x₁ = (1  −1/2)ᵀ.
4.2 INTRODUCTION TO THE SIMILARITY TRANSFORMATION
Consider the general time-invariant continuous-time state equation with no inputs,

    dx(t)/dt = Ax(t)        (4.2)

where A is a constant coefficient n × n matrix of real numbers. The initial condition is
given as x(0) = x₀.
Example 4.4.
Written out, the state equations (4.2) are

    dx₁(t)/dt = a₁₁x₁(t) + a₁₂x₂(t) + ··· + a₁ₙxₙ(t)
    dx₂(t)/dt = a₂₁x₁(t) + a₂₂x₂(t) + ··· + a₂ₙxₙ(t)
    ⋮
    dxₙ(t)/dt = aₙ₁x₁(t) + aₙ₂x₂(t) + ··· + aₙₙxₙ(t)

and the initial conditions are given as the numbers x₁(0), x₂(0), ..., xₙ(0), i.e. x(0) = x₀.
Now define a new variable, an n-vector y(t), by the one-to-one relationship

    y(t) = M⁻¹x(t)        (4.3)

It is required that M be an n × n nonsingular constant coefficient matrix so that the solution
x can be determined from the solution for the new variable y(t). Putting x(t) = My(t)
into the system equation gives

    M dy(t)/dt = AMy(t)

Multiplying on the left by M⁻¹ gives

    dy(t)/dt = M⁻¹AMy(t)        (4.4)
Definition 4.6: The transformation T⁻¹AT, where T is a nonsingular matrix, is called a
similarity transformation on A. It is called a similarity because the transformed problem
is similar to the original one, but with a change of variables from x to y.
Suppose M can be chosen so cleverly that M⁻¹AM is a diagonal matrix Λ. Then equation (4.4)
becomes

    dy(t)/dt = Λ y(t),    where    Λ = ( λ₁   0   ...   0  )
                                       (  0   λ₂  ...   0  )
                                       (  ⋮    ⋮   ⋱    ⋮  )
                                       (  0    0  ...   λₙ )
Writing this equation out gives dy_i/dt = λ_i y_i for i = 1, 2, ..., n. The solution can be ex-
pressed simply as y_i(t) = y_i(0)e^{λ_i t}. Therefore if an M such that M⁻¹AM = Λ can be found,
solution of dx/dt = Ax becomes easy. Such an M can usually, though not always, be found.
In cases where it cannot, a T can always be found such that T⁻¹AT is almost diagonal.
Physically it must be the case that not all differential equations can be reduced to this
simple form: some differential equations have solutions of the form te^{λt}, and there is no way to obtain
such a solution from the decoupled form.
The transformation M is constructed from the solution of the eigenvalue problem for all
the eigenvectors x_i, i = 1, 2, ..., n. Because Ax_i = λ_i x_i for i = 1, 2, ..., n, the equations
can be "stacked up" using the rules of multiplication of partitioned matrices:

    A(x₁ | x₂ | ... | xₙ) = (Ax₁ | Ax₂ | ... | Axₙ)
                          = (λ₁x₁ | λ₂x₂ | ... | λₙxₙ)
                          = (x₁ | x₂ | ... | xₙ) Λ

Therefore

    M = (x₁ | x₂ | ... | xₙ)        (4.5)
When M is singular, Λ cannot be obtained this way. Under a number of different conditions it can
be shown that M is nonsingular. One of these conditions is stated as the next theorem;
other conditions will be found later.
Theorem 4.1: If all the eigenvalues of an n × n matrix are distinct, the eigenvectors are
linearly independent.
Note that if the eigenvectors are linearly independent, M is nonsingular.
Proof: The proof is by contradiction. Let A have distinct eigenvalues. Let x₁, x₂, ..., xₙ
be the eigenvectors of A, with x₁, x₂, ..., x_k independent and x_{k+1}, ..., xₙ dependent. Then

    x_j = Σ_{i=1}^{k} β_{ji} x_i    for j = k+1, k+2, ..., n

where not all β_{ji} = 0. Since x_j is an eigenvector,

    Ax_j = λ_j x_j = λ_j Σ_{i=1}^{k} β_{ji} x_i    for j = k+1, ..., n

Also, multiplying the first relation by A gives

    Ax_j = Σ_{i=1}^{k} β_{ji} Ax_i = Σ_{i=1}^{k} β_{ji} λ_i x_i

Subtracting this equation from the previous one gives

    0 = Σ_{i=1}^{k} β_{ji} (λ_j − λ_i) x_i

But the x_i, i = 1, 2, ..., k, were assumed to be linearly independent, so each β_{ji}(λ_j − λ_i) = 0.
Because not all β_{ji} are zero, some λ_i = λ_j. This contradicts the assumption that A has distinct
eigenvalues, and so all the eigenvectors of A must be linearly independent.
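With distinct eigenvalues, the diagonalization M⁻¹AM = Λ of equation (4.4) can be carried out numerically. A sketch assuming NumPy, using the matrix of Example 4.1:

```python
import numpy as np

A = np.array([[3.0, 4.0],
              [1.0, 3.0]])

lam, M = np.linalg.eig(A)             # columns of M are the eigenvectors
Lambda = np.linalg.inv(M) @ A @ M     # the similarity transformation M^{-1} A M
```

`Lambda` comes out diagonal with the eigenvalues 1 and 5 on the diagonal, and M Λ M⁻¹ recovers A.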
4.3 PROPERTIES OF SIMILARITY TRANSFORMATIONS
To determine when a T can be found such that T⁻¹AT gives a diagonal matrix, the
properties of a similarity transformation must be examined. Define S = T⁻¹AT.
Then the eigenvalues of S are found as the roots of det (S − λI) = 0. But

    det (S − λI) = det (T⁻¹AT − λI)
                 = det (T⁻¹AT − λT⁻¹IT)
                 = det [T⁻¹(A − λI)T]

Using the product rule for determinants,

    det (S − λI) = det T⁻¹ · det (A − λI) · det T

Since det T⁻¹ = (det T)⁻¹ from Problem 3.28, det (S − λI) = det (A − λI). Therefore we
have proved
Theorem 4.2: All similar matrices have the same eigenvalues.
Corollary 4.3: All similar matrices have the same traces and determinants.
Proof of this corollary is given in Problem 4.1.
A useful fact to note here also is that all triangular matrices B display their eigenvalues on
the diagonal, because the determinant of the triangular matrix (B − λI) is the product of
its diagonal elements.
Theorem 4.4: A matrix A can be reduced to a diagonal matrix A by a similarity trans-
formation if and only if a set of n linearly independent eigenvectors can
be found.
Proof: By Theorem 4.2, the diagonal matrix Λ must have the eigenvalues of A appear-
ing on the diagonal. If AT = TΛ, by partitioned matrix multiplication it is required that
At_i = λ_i t_i, where the t_i are the column vectors of T. Therefore it is required that T have the
eigenvectors of A as its column vectors, and T⁻¹ exists if and only if its column vectors are
linearly independent.
It has already been shown that when the eigenvalues are distinct, T is nonsingular.
So consider what happens when the eigenvalues are not distinct. Theorem 4.4 says that
the only way we can obtain a diagonal matrix is to find n linearly independent eigenvectors.
Then there are two cases:
Case 1. For each root that is repeated k times, the space of eigenvectors belonging to
that root is k-dimensional. In this case the matrix can still be reduced to a diagonal form.
Example 4.5.
Given the matrix

    A = (  1  0  0 )
        (  1  1  1 )
        ( −1  0  0 )

Then det (A − λI) = −λ(1 − λ)² and the eigenvalues are
0, 1 and 1. For the zero eigenvalue, solution of Ax = 0 gives x = (0  1  −1)ᵀ. For the unity eigenvalue,
the eigenvalue problem is

    (  1  0  0 ) ( x₁ )        ( x₁ )
    (  1  1  1 ) ( x₂ )  = (1) ( x₂ )
    ( −1  0  0 ) ( x₃ )        ( x₃ )

This gives the set of equations

    0 = 0,    x₁ + x₃ = 0,    −x₁ − x₃ = 0

Therefore all eigenvectors belonging to the eigenvalue 1 have the form

    x = x₂ (0  1  0)ᵀ + x₁ (1  0  −1)ᵀ

where x₁ and x₂ are arbitrary. Hence any two linearly independent vectors in the space spanned by
(0  1  0)ᵀ and (1  0  −1)ᵀ will do. The transformation matrix is then

    M = (  0  0   1 )
        (  1  1   0 )
        ( −1  0  −1 )

and M⁻¹AM = Λ, where Λ has 0, 1 and 1 on the diagonal in that order.
Note that the occurrence of distinct eigenvalues falls into Case 1. Every distinct eigen-
value must have at least one eigenvector associated with it, and since there are n distinct
eigenvalues there are n eigenvectors. By Theorem 4.1 these are linearly independent.
Case 2. The conditions of Case 1 do not hold. Then the matrix cannot be reduced
to a diagonal form by a similarity transformation.
Example 4.6.
Given the matrix

    A = ( 1  1 )
        ( 0  1 )

Since A is triangular, the eigenvalues are displayed as 1 and 1. Then the eigenvalue problem is

    ( 1  1 ) ( x₁ )  = (1) ( x₁ )
    ( 0  1 ) ( x₂ )        ( x₂ )

which gives the set of equations x₂ = 0, 0 = 0. All eigenvectors belonging to 1 have the form (x₁  0)ᵀ.
Two linearly independent eigenvectors are simply not available to form M.
Because in Case 2 a diagonal matrix cannot be formed by a similarity transformation,
there arises the question of what is the simplest matrix that is almost diagonal that can be
formed by a similarity transformation. This is answered in the next section.
4.4 JORDAN FORM
The form closest to diagonal to which an arbitrary n × n matrix can be transformed
by a similarity transformation is the Jordan form, denoted J. Proof of its existence in all
cases can be found in standard texts. In the interest of brevity we omit the lengthy develop-
ment needed to show this form can always be obtained, and merely show how to obtain it.
The Jordan form J is an upper triangular matrix and, as per the remarks of the preceding
section, the eigenvalues of the A matrix must be displayed on its diagonal. If the A matrix
has r linearly independent eigenvectors, the Jordan form has n − r ones above the diagonal,
and all other off-diagonal elements are zero. The general form is block diagonal,

    J = diag ( L₁₁(λ₁),  L₂₁(λ₁),  ...,  L_{m_p p}(λ_p) )        (4.7)
Each L_{ji}(λ_i) is an upper triangular square matrix, called a Jordan block, on the diagonal
of the Jordan form J. Several L_{ji}(λ_i) can be associated with each value of λ_i, and they may differ
in dimension from one another. A general L_{ji}(λ_i) looks like

    L_{ji}(λ_i) = ( λ_i   1    0   ...   0  )
                  (  0   λ_i   1   ...   0  )        (4.8)
                  (  ⋮              ⋱    ⋮  )
                  (  0    0    0   ...  λ_i )

where λ_i appears on the diagonal and ones occur in all places just above the diagonal.
Example 4.7.
Consider the Jordan form

    J = ( λ₁  1   0   0  )
        (  0  λ₁  0   0  )
        (  0  0   λ₁  0  )
        (  0  0   0   λ₂ )

Because all ones must occur just above the diagonal within a Jordan block, wherever a zero above the
diagonal occurs in J there must be a boundary between two Jordan blocks. Therefore this J contains three
Jordan blocks,

    L₁₁(λ₁) = ( λ₁  1 )
              (  0  λ₁ ),    L₂₁(λ₁) = λ₁,    L₁₂(λ₂) = λ₂
There is one and only one linearly independent eigenvector associated with each Jordan
block, and vice versa. This leads to the calculation procedure for the other column vectors
t_i of T, called generalized eigenvectors, associated with each Jordan block L_{ji}(λ_i):

    At₁ = λ_i t₁ + x_i
    At₂ = λ_i t₂ + t₁        (4.9)
    ⋮
    At_l = λ_i t_l + t_{l−1}

Note the number of t's equals the number of ones in the associated L_{ji}(λ_i). Then

    A(x_i | t₁ | t₂ | ... | t_l | ...) = (λ_i x_i | λ_i t₁ + x_i | λ_i t₂ + t₁ | ... | λ_i t_l + t_{l−1} | ...)
                                       = (x_i | t₁ | t₂ | ... | t_l | ...) L_{ji}(λ_i)
This procedure for calculating the ti works very well as long as Xi is determined to within
a multiplicative constant, because then each ti is determined to within a multiplicative
constant. However, difficulty is encountered whenever there is more than one Jordan
block associated with a single value of an eigenvalue. Considerable background in linear
algebra is required to find a construction procedure for the ti in this case, which arises so
seldom in practice that the general case will not be pursued here. If this case arises, a
trial and error procedure along the lines of the next example can be used.
Example 4.8.
Find the transformation matrix T that reduces the matrix A to Jordan form, where

    A = ( 2   1  1 )
        ( 0   3  1 )
        ( 0  −1  1 )

The characteristic equation is (2 − λ)(3 − λ)(1 − λ) + (2 − λ) = 0. A factor 2 − λ can be removed,
and the remaining equation can be arranged so that the characteristic equation becomes (2 − λ)³ = 0.
Solving for the eigenvectors belonging to the eigenvalue 2 results in x₂ + x₃ = 0, with x₁ arbitrary.
Therefore any eigenvector can be expressed as the linear combination

    x = α (1  0  0)ᵀ + β (0  1  −1)ᵀ

What combination should be tried to start the procedure described by equations (4.9)? Trying the general
expression gives

    ( 2   1  1 ) ( τ₁ )     ( τ₁ )   (  α )
    ( 0   3  1 ) ( τ₂ )  = 2( τ₂ ) + (  β )
    ( 0  −1  1 ) ( τ₃ )     ( τ₃ )   ( −β )

Then

    τ₂ + τ₃ = α,    τ₂ + τ₃ = β,    −τ₂ − τ₃ = −β

These equations are satisfied only if α = β. This gives the correct x = α(1  1  −1)ᵀ. Normalizing x by setting
α = 1 gives t = (τ₁  τ₂  1 − τ₂)ᵀ. The transformation matrix is completed by any other linearly inde-
pendent choice of eigenvector, say (0  1  −1)ᵀ, and any choice of τ₁ and τ₂ such that t is linearly independent of the
eigenvectors chosen, say τ₁ = 0 and τ₂ = 1. This gives AT = TJ, or

    ( 2   1  1 ) (  1  0   0 )     (  1  0   0 ) ( 2  1  0 )
    ( 0   3  1 ) (  1  1   1 )  =  (  1  1   1 ) ( 0  2  0 )
    ( 0  −1  1 ) ( −1  0  −1 )     ( −1  0  −1 ) ( 0  0  2 )
4.5 QUADRATIC FORMS
Definition 4.7: A quadratic form 𝒬 is a real polynomial in the real variables ξ₁, ξ₂, ..., ξₙ
containing only terms of the form α_{ij} ξ_i ξ_j, so that

    𝒬 = Σ_{i=1}^{n} Σ_{j=1}^{n} α_{ij} ξ_i ξ_j

where α_{ij} is real for all i and j.
Example 4.9.
Some typical quadratic forms are

    𝒬₂ = 3ξ₁² − 2ξ₁ξ₂ + ξ₂² + 4ξ₁ξ₃ − 7ξ₂ξ₃
    𝒬₃ = α₁₁ξ₁² + α₁₂ξ₁ξ₂ + α₂₁ξ₂ξ₁ + α₂₂ξ₂²
    𝒬₄ = tξ₁² + (1 − t²)ξ₁ξ₂ − t²ξ₂²
Theorem 4.5: All quadratic forms 𝒬 can be expressed as the inner product (x, Qx), and
vice versa, where Q is an n × n Hermitian matrix, i.e. Q† = Q.
Proof: First, 𝒬 to (x, Qx): given 𝒬 = Σ_i Σ_j α_{ij} ξ_i ξ_j,
let Q = {q_{ij}} with q_{ij} = ½(α_{ij} + α_{ji}). Then q_{ij} = q_{ji}, so Q is real and symmetric, and 𝒬 = xᵀQx.
Next, (x, Qx) to 𝒬 (the problem is to prove the coefficients are real):

    (x, Qx) = Σ_{i=1}^{n} Σ_{j=1}^{n} q_{ij} ξ_i ξ_j    and    (x, Q†x) = Σ_{i=1}^{n} Σ_{j=1}^{n} q*_{ij} ξ_i ξ_j

Then, since Q† = Q,

    (x, Qx) = ½(x, Qx) + ½(x, Q†x) = ½ Σ_{i=1}^{n} Σ_{j=1}^{n} (q_{ij} + q*_{ij}) ξ_i ξ_j

So (x, Qx) = Σ_{i=1}^{n} Σ_{j=1}^{n} Re(q_{ij}) ξ_i ξ_j = 𝒬 and the coefficients are real.
Theorem 4.6: The eigenvalues of an n × n Hermitian matrix Q = Q† are real, and the
eigenvectors belonging to distinct eigenvalues are orthogonal.
The most important case, real symmetric Q, is included in Theorem 4.6 because the
set of real symmetric matrices is included in the set of Hermitian matrices.
Proof: The eigenvalue problems for specific λ_i and λ_j are

    Qx_i = λ_i x_i        (4.10)
    Qx_j = λ_j x_j        (4.11)

Since Q is Hermitian, Q†x_j = λ_j x_j. Taking the complex conjugate transpose gives

    x_j†Q = λ*_j x_j†        (4.12)

Multiplying (4.12) on the right by x_i and (4.10) on the left by x_j† and subtracting gives

    0 = x_j†Qx_i − x_j†Qx_i = (λ*_j − λ_i) x_j†x_i

If j = i, then √(x_i†x_i) is a norm of x_i and cannot be zero, so that λ_i = λ*_i, meaning each eigen-
value is real. Then if j ≠ i, λ*_j − λ_i = λ_j − λ_i. But for distinct eigenvalues, λ_j − λ_i ≠ 0,
so x_j†x_i = 0 and the eigenvectors are orthogonal.
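Both conclusions, real eigenvalues and orthogonal eigenvectors, can be verified numerically for a complex Hermitian matrix. A sketch assuming NumPy (the matrix is an illustrative choice):

```python
import numpy as np

Q = np.array([[2.0,       1.0 + 1.0j],
              [1.0 - 1.0j, 3.0      ]])    # Q equals its conjugate transpose

# eigh exploits the Hermitian structure and returns real eigenvalues
# and orthonormal eigenvectors (as the columns of X).
lam, X = np.linalg.eigh(Q)
```

The columns of X satisfy x_i†x_j = δ_ij, exactly as in Theorem 4.6 and Corollary 4.8.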
Theorem 4.7: Even if the eigenvalues are not distinct, a set of n orthonormal eigenvectors
can be found for an n × n normal matrix N, i.e. a matrix satisfying NN† = N†N.
The proof is left to the solved problems. Note both Hermitian and real symmetric
matrices are normal, so that Theorem 4.6 is a special case of this theorem.
Corollary 4.8: A Hermitian (or real symmetric) matrix Q can always be reduced to a
diagonal matrix by a unitary transformation, where U⁻¹QU = Λ and
U⁻¹ = U†.
Proof: Since Q is Hermitian, it is also normal. Then by Theorem 4.7 there are n
orthonormal eigenvectors and they are all independent. By Theorem 4.4 this is a necessary
and sufficient condition for diagonalization. To show the transformation matrix is unitary,
construct U with the orthonormal eigenvectors x_i as column vectors. Then

    U†U = ( x₁†x₁   x₁†x₂   ...   x₁†xₙ )
          ( x₂†x₁   x₂†x₂   ...   x₂†xₙ )
          (   ⋮       ⋮             ⋮   )
          ( xₙ†x₁   xₙ†x₂   ...   xₙ†xₙ )
But x_i†x_j = (x_i, x_j) = δ_{ij} because the eigenvectors are orthonormal. Then U†U = I. Since the column
vectors of U are linearly independent, U⁻¹ exists, so multiplying on the right by U⁻¹ gives
U† = U⁻¹, which was to be proven.
Therefore if a quadratic form 𝒬 = x†Qx is given, rotating coordinates by defining
x = Uy gives 𝒬 = y†U†QUy = y†Λy. In other words, 𝒬 can be expressed as

    𝒬 = λ₁|y₁|² + λ₂|y₂|² + ··· + λₙ|yₙ|²

where the λ_i are the real eigenvalues of Q. Note 𝒬 is always positive if the eigenvalues
of Q are positive, unless y, and hence x, is identically the zero vector. Then the square root of
𝒬 is a norm of the x vector, because an inner product can be defined as (x, y)_Q = x†Qy.
Definition 4.8: An n × n Hermitian matrix Q is positive definite if its associated quadratic
form 𝒬 is always positive except when x is identically the zero vector.
Then Q is positive definite if and only if all its eigenvalues are > 0.
Definition 4.9: An n × n Hermitian matrix Q is nonnegative definite if its associated
quadratic form 𝒬 is never negative. (It may be zero for some x that are
not zero.) Then Q is nonnegative definite if and only if all its eigenvalues are ≥ 0.
Example 4.10.
𝒬 = ξ₁² − 2ξ₁ξ₂ + ξ₂² = (ξ₁ − ξ₂)² can be zero when ξ₁ = ξ₂, and so is nonnegative definite.
The locus of constant 𝒬 when Q is positive definite is an ellipsoid in n-space.
Theorem 4.9: A unique positive definite Hermitian matrix R exists such that RR = Q,
where Q is a Hermitian positive definite matrix. R is called the square root
of Q.
Proof: Let U be the unitary matrix that diagonalizes Q, so that Q = UΛU†. Since
each λ_i is a positive diagonal element of Λ, define Λ^{1/2} as the diagonal matrix of positive λ_i^{1/2}. Then

    Q = UΛ^{1/2}Λ^{1/2}U† = UΛ^{1/2}U†UΛ^{1/2}U†

Now let R = UΛ^{1/2}U†; it is Hermitian and positive definite because its eigenvalues
are positive. Uniqueness is proved in Problem 4.5.
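The construction in the proof is directly computable. A sketch assuming NumPy, for a real symmetric positive definite Q (an illustrative choice):

```python
import numpy as np

Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # symmetric positive definite

# Diagonalize: Q = U diag(lam) U^T with orthonormal eigenvectors in U.
lam, U = np.linalg.eigh(Q)

# Square root per Theorem 4.9: R = U Lambda^{1/2} U^T.
R = U @ np.diag(np.sqrt(lam)) @ U.T
```

R is symmetric, has positive eigenvalues, and satisfies RR = Q, matching the theorem.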
One way to check if a Hermitian matrix is positive definite (or nonnegative definite)
is to see if its eigenvalues are all positive (or nonnegative). Another way to check is to use
Sylvester's criterion.
Definition 4.10: The mth leading principal minor, denoted detQm, of the n x n Hermitian
matrix Q is the determinant of the matrix Qm formed by deleting the last
n — m rows and columns of Q.
Theorem 4.10: A Hermitian matrix Q is positive definite if and only if all the leading
principal minors of Q are positive.
A proof is given in Problem 4.6.
Example 4.11.
Given Q = {qᵢⱼ}. Then Q is positive definite if and only if
0 < det Q₁ = q₁₁;   0 < det Q₂ = det ( q₁₁  q₁₂ )   ...;   0 < det Qₙ = det Q
                                     ( q₁₂  q₂₂ )
If < is replaced by ≤, we cannot conclude that Q is nonnegative definite.
Rearrangement of the elements of Q sometimes leads to simpler algebraic inequalities.
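Sylvester's criterion is easy to mechanize; a hedged numpy sketch (the test matrices are illustrative assumptions, not from the text). For large or ill-conditioned matrices an attempted Cholesky factorization is numerically preferable to computing determinants:

```python
import numpy as np

def is_positive_definite(Q):
    """Sylvester's criterion: all leading principal minors of Q must be positive."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    return all(np.linalg.det(Q[:m, :m]) > 0 for m in range(1, n + 1))

# Leading minors are 1, 3, 2, so Q is positive definite
Q = np.array([[1.0, 1.0, 0.0],
              [1.0, 4.0, 2.0],
              [0.0, 2.0, 2.0]])
print(is_positive_definite(Q))            # True
print(is_positive_definite(-np.eye(2)))   # first minor -1 < 0 -> False
```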
78 MATRIX ANALYSIS [CHAP. 4
Example 4.12.
The quadratic form
𝒬 = q₁₁ξ₁² + 2q₁₂ξ₁ξ₂ + q₂₂ξ₂² = q₁₁(ξ₁ + q₁₂ξ₂/q₁₁)² + (q₂₂ − q₁₂²/q₁₁)ξ₂²
is positive definite if q₁₁ > 0 and q₁₁q₂₂ − q₁₂² > 0. But 𝒬 can be written another way:
𝒬 = q₂₂(ξ₂ + q₁₂ξ₁/q₂₂)² + (q₁₁ − q₁₂²/q₂₂)ξ₁²
which is positive definite if q₂₂ > 0 and q₁₁q₂₂ − q₁₂² > 0. The conclusion is that Q is positive definite if
det Q > 0 and either q₁₁ or q₂₂ can be shown greater than zero.
4.6 MATRIX NORMS
Definition 4.11: A norm of a matrix A, denoted ||A||, is the minimum value of κ such that
||Ax|| ≤ κ||x|| for all x.
Geometrically, multiplication by a matrix A changes the length of a vector. Choose
the vector x₀ whose length is increased the most. Then ||A|| is the ratio of the length of
Ax₀ to the length of x₀.
The matrix norm is understood to be the same kind of norm as the vector norm in its
defining relationship. Vector norms were covered in Section 3.10; the matrix norm is to be
taken in the sense of the corresponding vector norm.
Example 4.13.
To find ||U||₂, where U† = U⁻¹, consider any nonzero vector x. Then ||Ux||₂² = x†U†Ux = x†x = ||x||₂²,
so that ||U||₂ = 1.
Theorem 4.11: Properties of any matrix norm are:
(1) ||Ax|| ≤ ||A|| ||x||
(2) ||A|| = max ||Au||, where the maximum value of ||Au|| is to be taken
over those u such that ||u|| = 1.
(3) ||A + B|| ≤ ||A|| + ||B||
(4) ||AB|| ≤ ||A|| ||B||
(5) |λ| ≤ ||A|| for any eigenvalue λ of A.
(6) ||A|| = 0 if and only if A = 0.
Proof: Since ||A|| = κ_min, substitution into Definition 4.11 gives (1).
To show (2), consider the vector u = x/α, where α = ||x||. Then
||Ax|| = ||αAu|| = |α| ||Au|| ≤ κ||x|| = κ|α| ||u||
Division by |α| gives ||Au|| ≤ κ||u||, so that only unit length vectors need be considered
instead of all x in Definition 4.11. Geometrically, since A is a linear operator its effect on
the length of αx is in direct proportion to its effect on x, so that the selection of x₀ should
depend only on its direction.
To show (3),
||(A + B)x|| = ||Ax + Bx|| ≤ ||Ax|| + ||Bx|| ≤ (||A|| + ||B||) ||x||
for any x. The first inequality results from the triangle inequality, Definition 3.46(4), for
vector norms, as defined previously.
To show (4), use Definition 4.11 on the vectors Bx and x:
||ABx|| = ||A(Bx)|| ≤ ||A|| ||Bx|| ≤ ||A|| ||B|| ||x||
for any x.
To show (5), consider the eigenvalue problem Ax = λx. Then using the norm property
of vectors, Definition 3.46(5),
|λ| ||x|| = ||λx|| = ||Ax|| ≤ ||A|| ||x||
Since x is an eigenvector, ||x|| ≠ 0 and (5) follows.
To show (6), ||A|| = 0 implies ||Ax|| = 0 from Definition 4.11. Then Ax = 0, so that
Ax = 0x for any x. Therefore A = 0. The converse ||0|| = 0 is obvious.
Theorem 4.12: ||A||₂ = ρ_max, where ρ²_max is the maximum eigenvalue of A†A, and further-
more
ρ_min ≤ ||Ax||₂ / ||x||₂ ≤ ρ_max
To calculate ||A||₂, find the maximum eigenvalue of A†A.
Proof: Consider the eigenvalue problem
A†Agᵢ = ρᵢ²gᵢ
Since (x, A†Ax) = (Ax, Ax) = ||Ax||₂² ≥ 0, A†A is nonnegative definite and ρᵢ² ≥ 0.
Since A†A is Hermitian, ρᵢ² is real and the gᵢ can be chosen orthonormal. Express any x
in 𝒰ₙ as x = Σ_{i=1}^n ξᵢgᵢ. Then
||Ax||₂² = (Ax, Ax) = (x, A†Ax) = Σ_{i=1}^n |ξᵢ|²ρᵢ²
Since ρ²_min Σ|ξᵢ|² ≤ Σ|ξᵢ|²ρᵢ² ≤ ρ²_max Σ|ξᵢ|² and ||x||₂² = Σ|ξᵢ|², taking square roots gives the inequality of
the theorem. Note that ||Ax||₂ = ρ_max ||x||₂ when x is the eigenvector gᵢ belonging to ρ²_max.
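In numpy terms, Theorem 4.12 says ||A||₂ is the largest singular value of A. A small sketch with a randomly generated A (an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))

# rho_max^2 is the largest eigenvalue of the nonnegative definite A^T A
rho = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())

# Agrees with the induced 2-norm, and bounds ||Ax|| / ||x|| for any x
assert np.isclose(rho, np.linalg.norm(A, 2))
x = rng.standard_normal(4)
assert np.linalg.norm(A @ x) <= rho * np.linalg.norm(x) + 1e-12
```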
4.7 FUNCTIONS OF A MATRIX
Given an analytic scalar function f(α) of a scalar α, it can be uniquely expressed in a
convergent Maclaurin series,
f(α) = Σ_{k=0}^∞ f_k α^k / k!
where f_k = d^k f(α)/dα^k evaluated at α = 0.
Definition 4.12: Given an analytic function f(α) of a scalar α, the function of an n×n matrix
A is f(A) = Σ_{k=0}^∞ f_k A^k / k!.
Example 4.14.
Some functions of a matrix A are
cos A = (cos 0)I + (−sin 0)A + (−cos 0)A²/2! + ··· + (−1)^m A^{2m}/(2m)! + ···
e^{At} = (e⁰)I + (e⁰)At + (e⁰)A²t²/2! + ··· + A^k t^k/k! + ···
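Definition 4.12 suggests a direct computation: sum partial terms of the series and compare with the diagonalization route of Theorem 4.14. A sketch, with an arbitrary example matrix A assumed here (not one from the text):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2, assumed example

# Partial sums of exp(A) = sum_k A^k / k!
S = np.zeros_like(A)
term = np.eye(2)
for k in range(30):
    S = S + term
    term = term @ A / (k + 1)

# Check against f(A) = T f(Lambda) T^{-1} for f = exp
lam, T = np.linalg.eig(A)
expA = (T * np.exp(lam)) @ np.linalg.inv(T)
assert np.allclose(S, expA.real)
```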
Theorem 4.13: If T⁻¹AT = J, then f(A) = T f(J) T⁻¹.    (4.15)
Proof: Note A^k = AA···A = TJT⁻¹TJT⁻¹···TJT⁻¹ = TJ^kT⁻¹. Then
f(A) = Σ_{k=0}^∞ f_k A^k/k! = T( Σ_{k=0}^∞ f_k J^k/k! )T⁻¹ = T f(J) T⁻¹
Theorem 4.14: If A = TΛT⁻¹, where Λ is the diagonal matrix of eigenvalues λᵢ, then
f(A) = f(λ₁)x₁r₁† + f(λ₂)x₂r₂† + ··· + f(λₙ)xₙrₙ†
Proof: From Theorem 4.13,
f(A) = T f(Λ) T⁻¹
But
f(Λ) = Σ_{k=0}^∞ f_k Λ^k/k! = diag( Σ_k f_k λ₁^k/k!, ..., Σ_k f_k λₙ^k/k! ) = diag( f(λ₁), f(λ₂), ..., f(λₙ) )    (4.14)
Also, let T = (x₁ | x₂ | ... | xₙ) and T⁻¹ = (r₁ | r₂ | ... | rₙ)†, where rᵢ is the reciprocal basis
vector. Then from (4.15) and (4.14),
                                 ( f(λ₁)              )                         ( f(λ₁)r₁† )
f(A) = (x₁ | x₂ | ... | xₙ)      (       f(λ₂)        ) (r₁ | r₂ | ... | rₙ)†
                                 (             ⋱      )   = (x₁ | x₂ | ... | xₙ) ( f(λ₂)r₂† )
                                 (              f(λₙ) )                         (    ⋮     )
                                                                                ( f(λₙ)rₙ† )
The theorem follows upon partitioned multiplication of these last two matrices.
The square root function f(α) = α^{1/2} is not analytic upon substitution of an arbitrary λᵢ. There-
fore the square root R of the positive definite matrix Q had to be adapted by always taking
the positive square root of λᵢ for uniqueness in Theorem 4.9.
Definition 4.13: Let f(α) = α in Theorem 4.14 to get the spectral representation of
A = TΛT⁻¹:
A = Σ_{i=1}^n λᵢxᵢrᵢ†
Note that this is valid only for those A that can be diagonalized by a similarity trans-
formation.
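The spectral representation can be verified numerically; a sketch using the matrix of Problem 4.2(a) (eigenvalues 1, 2, 3), with the reciprocal basis vectors taken from the rows of T⁻¹:

```python
import numpy as np

A = np.array([[8.0, -8.0, -2.0],
              [4.0, -3.0, -2.0],
              [3.0, -4.0, 1.0]])   # distinct eigenvalues 1, 2, 3

lam, T = np.linalg.eig(A)
R = np.linalg.inv(T)               # rows of T^{-1} are the r_i^T

# A = sum_i lam_i x_i r_i^T  (Definition 4.13)
S = sum(lam[i] * np.outer(T[:, i], R[i, :]) for i in range(3))
assert np.allclose(S, A)
```

The products xᵢrᵢ† are unchanged if each xᵢ is rescaled, since rᵢ rescales inversely, so the arbitrary normalization chosen by `eig` does not matter.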
To calculate f(A) = T f(J) T⁻¹, we must find f(J). From equation (4.7), J is block diagonal
in the Jordan blocks Lᵢⱼ(λᵢ), and since powers of a block diagonal matrix remain block diagonal,
f(J) = diag( f(L₁₁(λ₁)), ..., f(L_km(λₙ)) )
where the equality follows from partitioned matrix multiplication. Hence it is only necessary
to find f(L) and use (4.15). By calculation it can be found that for an l×l Jordan block L(λ),
        ( λ^k   (k over 1)λ^{k−1}   ...   (k over l−1)λ^{k−l+1} )
L^k(λ) = (  0         λ^k           ...   (k over l−2)λ^{k−l+2} )    (4.16)
        (  ⋮                         ⋱            ⋮             )
        (  0          0             ...          λ^k            )
where (k over m) = k!/[(k−m)! m!], the number of combinations of k elements taken m at a time.
Then from f(L) = Σ_{k=0}^∞ f_k L^k/k!, the upper right hand terms a₁ₗ are
a₁ₗ = Σ_{k=0}^∞ f_k (k over l−1) λ^{k−l+1}/k! = [1/(l−1)!] Σ_{k=0}^∞ f_k λ^{k−l+1}/(k−l+1)!    (4.17)
but
d^{l−1}f(λ)/dλ^{l−1} = Σ_{k=0}^∞ f_k λ^{k−l+1}/(k−l+1)!    (4.18)
The series converge since f(λ) is analytic, so that by comparing (4.17) and (4.18),
a₁ₗ = [1/(l−1)!] d^{l−1}f(λ)/dλ^{l−1}
Therefore
        ( f(λ)   df/dλ   ...   [(l−1)!]⁻¹ d^{l−1}f/dλ^{l−1} )
f(L) =  (  0     f(λ)    ...   [(l−2)!]⁻¹ d^{l−2}f/dλ^{l−2} )    (4.19)
        (  ⋮               ⋱              ⋮                 )
        (  0      0      ...              f(λ)              )
From (4.19) and (4.15) and Theorem 4.13, f(A) can be computed.
Another almost equivalent method that can be used comes from the Cayley-Hamilton
theorem.
Theorem 4.15 (Cayley-Hamilton): Given an arbitrary n×n matrix A with charac-
teristic polynomial φ(λ) = det(A − λI). Then φ(A) = 0.
The proof is given in Problem 5.4.
Example 4.15.
Given A = ( 3  2 ). Then
           ( 2  3 )
det (A − λI) = φ(λ) = λ² − 6λ + 5    (4.20)
By the Cayley-Hamilton theorem,
φ(A) = A² − 6A + 5I = ( 13  12 ) − 6( 3  2 ) + 5( 1  0 ) = ( 0  0 )
                      ( 12  13 )    ( 2  3 )    ( 0  1 )   ( 0  0 )
The Cayley-Hamilton theorem gives a means of expressing any power of a matrix in
terms of a linear combination of A^m for m = 0, 1, ..., n−1.
Example 4.16.
From Example 4.15, the given A matrix satisfies 0 = A² − 6A + 5I. Then A² can be expressed in
terms of A and I by
A² = 6A − 5I    (4.21)
Also A³ can be found by multiplying (4.21) by A and then using (4.21) again:
A³ = 6A² − 5A = 6(6A − 5I) − 5A = 31A − 30I
Similarly any power of A can be found by this method, including A⁻¹ if it exists, because (4.21) can be
multiplied by A⁻¹ to obtain
A⁻¹ = (6I − A)/5
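The manipulations of Examples 4.15 and 4.16 can be confirmed in a few lines of numpy:

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 3.0]])   # phi(lambda) = lambda^2 - 6*lambda + 5
I = np.eye(2)

assert np.allclose(A @ A - 6 * A + 5 * I, 0)                     # phi(A) = 0
assert np.allclose(A @ A, 6 * A - 5 * I)                         # (4.21)
assert np.allclose(np.linalg.matrix_power(A, 3), 31 * A - 30 * I)
assert np.allclose(np.linalg.inv(A), (6 * I - A) / 5)
```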
Theorem 4.16: For an n×n matrix A,
f(A) = γ₁A^{n−1} + γ₂A^{n−2} + ··· + γ_{n−1}A + γₙI
where the scalars γᵢ can be found from
f(J) = γ₁J^{n−1} + γ₂J^{n−2} + ··· + γ_{n−1}J + γₙI
Here f(J) is found from (4.15) and (4.19), and very simply from (4.14) if A can be
diagonalized.
This method avoids the calculation of T and T⁻¹ at the expense of solving
f(J) = Σ_{i=1}^n γᵢJ^{n−i} for the γᵢ.
Proof: Since f(A) = Σ_k f_k A^k/k! and A^k = Σ_{m=0}^{n−1} α_{km}A^m by the Cayley-Hamilton
theorem, then
f(A) = Σ_k f_k ( Σ_m α_{km}A^m )/k! = Σ_m A^m [ Σ_k f_k α_{km}/k! ]
The quantity in brackets is γ_{n−m}.
Also, from Theorem 4.13,
f(J) = T⁻¹f(A)T = T⁻¹( Σ_{i=1}^n γᵢA^{n−i} )T = Σ_{i=1}^n γᵢT⁻¹A^{n−i}T = Σ_{i=1}^n γᵢJ^{n−i}
Example 4.17.
For the A given in Example 4.15, cos A = γ₁A + γ₂I. Here A has eigenvalues λ₁ = 1, λ₂ = 5. From
cos Λ = γ₁Λ + γ₂I we obtain
cos λ₁ = γ₁λ₁ + γ₂
cos λ₂ = γ₁λ₂ + γ₂
Solving for γ₁ and γ₂ gives
cos A = [(cos 1 − cos 5)/(1 − 5)] ( 3  2 ) + [(5 cos 1 − cos 5)/(5 − 1)] ( 1  0 )
                                  ( 2  3 )                               ( 0  1 )
      = (cos 1)/2 (  1  −1 ) + (cos 5)/2 ( 1  1 )
                  ( −1   1 )             ( 1  1 )
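The computation of Example 4.17 amounts to solving a 2×2 linear system for γ₁ and γ₂; a numpy sketch that also checks the closed-form answer:

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 3.0]])
lam = np.array([1.0, 5.0])

# Solve cos(lam_i) = g1 * lam_i + g2 for the gammas (Theorem 4.16 with n = 2)
g1, g2 = np.linalg.solve(np.column_stack([lam, np.ones(2)]), np.cos(lam))
cosA = g1 * A + g2 * np.eye(2)

# Agrees with (cos 1)/2 [[1,-1],[-1,1]] + (cos 5)/2 [[1,1],[1,1]]
check = 0.5 * np.cos(1) * np.array([[1.0, -1.0], [-1.0, 1.0]]) \
      + 0.5 * np.cos(5) * np.array([[1.0, 1.0], [1.0, 1.0]])
assert np.allclose(cosA, check)
```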
Use of complex variable theory gives a very neat representation of f(A) and leads to
other computational procedures.
Theorem 4.17: If f(s) is analytic in a region containing the eigenvalues λᵢ of A, then
f(A) = (1/2πj) ∮ f(s)(sI − A)⁻¹ ds
where the contour integration is around the boundary of the region.
Proof: Since f(A) = T f(J) T⁻¹ = T[ (1/2πj) ∮ f(s)(sI − J)⁻¹ ds ]T⁻¹, it suffices to show
that f(L) = (1/2πj) ∮ f(s)(sI − L)⁻¹ ds. Since
         ( s−λ   −1   ...   0  )
sI − L = (  0   s−λ   ...   0  )
         (  ⋮           ⋱   −1 )
         (  0    0    ...  s−λ )
then
                             ( (s−λ)^{l−1}   (s−λ)^{l−2}   ...      1      )
(sI − L)⁻¹ = [1/(s−λ)^l]     (      0        (s−λ)^{l−1}   ...     s−λ     )
                             (      ⋮                       ⋱       ⋮      )
                             (      0             0        ...  (s−λ)^{l−1} )
The upper right hand term a₁ₗ of (1/2πj) ∮ f(s)(sI − L)⁻¹ ds is
a₁ₗ = (1/2πj) ∮ f(s) ds/(s − λ)^l
Because all the eigenvalues are within the contour, use of the Cauchy integral formula then
gives
a₁ₗ = [1/(l−1)!] d^{l−1}f/ds^{l−1} evaluated at s = λ
which is identical with equation (4.19).
Example 4.18.
Using Theorem 4.17,
cos A = (1/2πj) ∮ cos s (sI − A)⁻¹ ds
For A as given in Example 4.15,
(sI − A)⁻¹ = ( s−3   −2 )⁻¹ = [1/(s² − 6s + 5)] ( s−3    2  )
             ( −2   s−3 )                       (  2    s−3 )
Then
cos A = (1/2πj) ∮ [cos s / ((s−1)(s−5))] ( s−3    2  ) ds
                                         (  2    s−3 )
Performing a matrix partial fraction expansion gives
cos A = (1/2πj) ∮ [cos s/(s−1)] (1/2)(  1  −1 ) ds + (1/2πj) ∮ [cos s/(s−5)] (1/2)( 1  1 ) ds
                                     ( −1   1 )                                   ( 1  1 )
      = (cos 1)/2 (  1  −1 ) + (cos 5)/2 ( 1  1 )
                  ( −1   1 )             ( 1  1 )
4.8 PSEUDOINVERSE
When the determinant of an n×n matrix is zero, or even when the matrix is not square,
there still exists a way to obtain a solution "as close as possible" to the equation Ax = y. In
this section we let A be an m×n real matrix and first examine the properties of the real
symmetric square matrices AᵀA and AAᵀ, which are n×n and m×m respectively.
Theorem 4.18: The matrix BᵀB is nonnegative definite.
Proof: Consider the quadratic form 𝒬 = yᵀy, which is never negative. Let y = Bx,
where B can be m×n. Then xᵀBᵀBx ≥ 0, so that BᵀB is nonnegative definite.
From this theorem, Theorem 4.6 and Definition 4.9, the eigenvalues of AᵀA are either zero
or positive, and real. This is also true of the eigenvalues of AAᵀ, for which we let
B = Aᵀ in Theorem 4.18.
Theorem 4.19: Let A be an m×n matrix of rank r, where r ≤ m and r ≤ n; let gᵢ be an
orthonormal eigenvector of AᵀA; let fᵢ be an orthonormal eigenvector of
AAᵀ; and let ρᵢ² be the nonzero eigenvalues of AᵀA. Then
(1) ρᵢ² are also the nonzero eigenvalues of AAᵀ.
(2) Agᵢ = ρᵢfᵢ for i = 1, 2, ..., r
(3) Agᵢ = 0 for i = r+1, ..., n
(4) Aᵀfᵢ = ρᵢgᵢ for i = 1, 2, ..., r
(5) Aᵀfᵢ = 0 for i = r+1, ..., m
Proof: From Problem 3.12, there are exactly r nonzero eigenvalues of AᵀA and AAᵀ.
Then AᵀAgᵢ = ρᵢ²gᵢ for i = 1, 2, ..., r and AᵀAgᵢ = 0 for i = r+1, ..., n. Define an
m-vector hᵢ as hᵢ = Agᵢ/ρᵢ for i = 1, 2, ..., r. Then
AAᵀhᵢ = AAᵀAgᵢ/ρᵢ = A(ρᵢ²gᵢ)/ρᵢ = ρᵢ²hᵢ
Furthermore,
hᵢᵀhⱼ = gᵢᵀAᵀAgⱼ/ρᵢρⱼ = ρⱼgᵢᵀgⱼ/ρᵢ = δᵢⱼ
Since for each i there is one normalized eigenvector, hᵢ can be taken equal to fᵢ and (2) is
proven. Furthermore, since there are r of the ρᵢ², these must be the eigenvalues of AAᵀ and
(1) is proven. Also, we can find fᵢ for i = r+1, ..., m such that AAᵀfᵢ = 0 and the fᵢ are ortho-
normal. Hence the fᵢ are an orthonormal basis for 𝒰ₘ and the gᵢ are an orthonormal basis
for 𝒰ₙ.
To prove (3), AᵀAgᵢ = 0 for i = r+1, ..., n. Then ||Agᵢ||² = gᵢᵀAᵀAgᵢ = 0, so that
Agᵢ = 0. Similarly, since AAᵀfᵢ = 0 for i = r+1, ..., m, then Aᵀfᵢ = 0 and (5) is proven.
Finally, to prove (4),
Aᵀfᵢ = Aᵀ(Agᵢ/ρᵢ) = (AᵀAgᵢ)/ρᵢ = ρᵢ²gᵢ/ρᵢ = ρᵢgᵢ
Example 4.19.
Let A = ( 6  0  4  0 ). Then m = r = 2, n = 4.
        ( 6  1  0  6 )
AAᵀ = ( 52  36 )        AᵀA = ( 72  6  24  36 )
      ( 36  73 )              (  6  1   0   6 )
                              ( 24  0  16   0 )
                              ( 36  6   0  36 )
The eigenvalues of AAᵀ are ρ₁² = 100 and ρ₂² = 25. The eigenvalues of AᵀA are ρ₁² = 100, ρ₂² = 25,
ρ₃² = 0, ρ₄² = 0. The eigenvectors of AᵀA and AAᵀ are
g₁ = ( 0.84   0.08   0.24   0.48)ᵀ
g₂ = ( 0.24  −0.12   0.64  −0.72)ᵀ
g₃ = (−0.49   0.00   0.73   0.49)ᵀ
g₄ = ( 0.04  −0.98  −0.06  −0.13)ᵀ
f₁ = ( 0.6   0.8)ᵀ
f₂ = ( 0.8  −0.6)ᵀ
From the above, propositions (1), (2), (3), (4) and (5) of Theorem 4.19 can be verified directly. Computationally
it is easiest to find the eigenvalues ρ₁² and ρ₂² and eigenvectors f₁ and f₂ of AAᵀ, and then obtain the gᵢ
from propositions (4) and (3).
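Theorem 4.19 is, in modern terms, the singular value decomposition of A. A numpy sketch verifying properties (1), (2) and (4) for the matrix of Example 4.19 (`svd` may return fᵢ and gᵢ with signs opposite to the text's, but it always flips each pair together, so the properties still hold):

```python
import numpy as np

A = np.array([[6.0, 0.0, 4.0, 0.0],
              [6.0, 1.0, 0.0, 6.0]])

# A = U diag(s) Vt, with s = (rho_1, rho_2) = (10, 5)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(s, [10.0, 5.0])

for i in range(2):
    f, g, rho = U[:, i], Vt[i, :], s[i]
    assert np.allclose(A @ g, rho * f)     # property (2)
    assert np.allclose(A.T @ f, rho * g)   # property (4)
```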
Theorem 4.20: Under the conditions of Theorem 4.19, A = Σ_{i=1}^r ρᵢfᵢgᵢᵀ.
Proof: The m×n matrix A is a mapping from 𝒰ₙ to 𝒰ₘ. Also, g₁, g₂, ..., gₙ and
f₁, f₂, ..., fₘ form orthonormal bases for 𝒰ₙ and 𝒰ₘ respectively. For arbitrary x in 𝒰ₙ,
x = Σ_{i=1}^n ξᵢgᵢ  where  ξᵢ = gᵢᵀx    (4.22)
Use of properties (2) and (3) of Theorem 4.19 gives
Ax = Σ_{i=1}^n ξᵢAgᵢ = Σ_{i=1}^r ξᵢAgᵢ + Σ_{i=r+1}^n ξᵢAgᵢ = Σ_{i=1}^r ξᵢρᵢfᵢ
From (4.22), ξᵢ = gᵢᵀx and so Ax = Σ_{i=1}^r ρᵢfᵢgᵢᵀx. Since this holds for arbitrary x, the
theorem is proven.
Note that the representation of Theorem 4.20 holds even when A is rectangular, and has
no spectral representation.
Example 4.20.
Using the A matrix and the results of Example 4.19,
( 6  0  4  0 ) = 10 ( 0.6 )(0.84  0.08  0.24  0.48) + 5 (  0.8 )(0.24  −0.12  0.64  −0.72)
( 6  1  0  6 )      ( 0.8 )                             ( −0.6 )
Definition 4.14: The pseudoinverse, denoted A^{-I}, of the m×n real matrix A is the n×m
real matrix A^{-I} = Σ_{i=1}^r ρᵢ⁻¹gᵢfᵢᵀ.
Example 4.21.
Again, for the A matrix of Example 4.19,
         ( 0.84 )                    (  0.24 )                   (  0.0888   0.0384 )
A^{-I} = ( 0.08 )(0.6  0.8)/10   +   ( −0.12 )(0.8  −0.6)/5  =   ( −0.0144   0.0208 )
         ( 0.24 )                    (  0.64 )                   (  0.1168  −0.0576 )
         ( 0.48 )                    ( −0.72 )                   ( −0.0864   0.1248 )
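Definition 4.14 can be checked against numpy's built-in pseudoinverse, which is computed from the same SVD construction:

```python
import numpy as np

A = np.array([[6.0, 0.0, 4.0, 0.0],
              [6.0, 1.0, 0.0, 6.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
# Definition 4.14: A^{-I} = sum_i rho_i^{-1} g_i f_i^T
Apinv = sum(np.outer(Vt[i], U[:, i]) / s[i] for i in range(len(s)))

assert np.allclose(Apinv, np.linalg.pinv(A))
assert np.allclose(Apinv[0], [0.0888, 0.0384])   # first row as in Example 4.21
```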
Theorem 4.21: Given an m×n real matrix A and an arbitrary m-vector y, consider the
equation Ax = y. Define x₀ = A^{-I}y. Then ||Ax − y||₂ ≥ ||Ax₀ − y||₂ for all x, and
for those z ≠ x₀ such that ||Az − y||₂ = ||Ax₀ − y||₂, we have ||z||₂ > ||x₀||₂.
In other words, if no solution to Ax = y exists, x₀ gives the closest possible solution.
If the solution to Ax = y is not unique, x₀ gives the solution with the minimum norm.
Proof: Using the notation of Theorems 4.19 and 4.20, an arbitrary m-vector y and an
arbitrary n-vector x can be written as
y = Σ_{i=1}^m ηᵢfᵢ,   x = Σ_{i=1}^n ξᵢgᵢ    (4.23)
where ηᵢ = fᵢᵀy and ξᵢ = gᵢᵀx. Then use of properties (2) and (3) of Theorem 4.19 gives
Ax − y = Σ_{i=1}^n ξᵢAgᵢ − Σ_{i=1}^m ηᵢfᵢ = Σ_{i=1}^r (ξᵢρᵢ − ηᵢ)fᵢ − Σ_{i=r+1}^m ηᵢfᵢ    (4.24)
Since the fᵢ are orthonormal,
||Ax − y||₂² = Σ_{i=1}^r (ξᵢρᵢ − ηᵢ)² + Σ_{i=r+1}^m ηᵢ²
To minimize ||Ax − y||₂ the best we can do is choose ξᵢ = ηᵢ/ρᵢ for i = 1, 2, ..., r. Then
those vectors z in 𝒰ₙ that minimize ||Ax − y||₂ can be expressed as
z = Σ_{i=1}^r (ηᵢ/ρᵢ)gᵢ + Σ_{i=r+1}^n ξᵢgᵢ    (4.25)
where ξᵢ for i = r+1, ..., n is arbitrary. But
||z||₂² = Σ_{i=1}^r (ηᵢ/ρᵢ)² + Σ_{i=r+1}^n ξᵢ²
The z with minimum norm must have ξᵢ = 0 for i = r+1, ..., n. Then using ηᵢ = fᵢᵀy from
(4.23) gives the z with minimum norm as
Σ_{i=1}^r ρᵢ⁻¹ηᵢgᵢ = Σ_{i=1}^r ρᵢ⁻¹gᵢfᵢᵀy = A^{-I}y = x₀
Example 4.22.
Solve the equations 6x₁ + 4x₃ = 1 and 6x₁ + x₂ + 6x₄ = 10.
This can be written Ax = y, where y = (1 10)ᵀ and A is the matrix of Example 4.19. Since the
rank of A is 2 and there are four columns, the solution is not unique. The solution x₀ with the minimum
norm is
                (  0.0888   0.0384 )           (  0.4728 )
x₀ = A^{-I}y =  ( −0.0144   0.0208 ) (  1 ) =  (  0.1936 )
                (  0.1168  −0.0576 ) ( 10 )    ( −0.4592 )
                ( −0.0864   0.1248 )           (  1.1616 )
Theorem 4.22: If it exists, any solution to Ax = y can be expressed as x = A^{-I}y +
(I − A^{-I}A)z, where z is any arbitrary n-vector.
Proof: For a solution to exist, ηᵢ = 0 for i = r+1, ..., m in equation (4.24), and
ξᵢ = ηᵢ/ρᵢ for i = 1, 2, ..., r. Then any solution x can be written as
x = Σ_{i=1}^r (ηᵢ/ρᵢ)gᵢ + Σ_{i=r+1}^n ζᵢgᵢ
where the ζᵢ for i = r+1, ..., n are arbitrary scalars. Denote by z an arbitrary vector in 𝒰ₙ,
z = Σ_{i=1}^n ζᵢgᵢ, where ζᵢ = gᵢᵀz. Note from Definition 4.14 and Theorem 4.20,
A^{-I}A = Σ_{k=1}^r Σ_{i=1}^r ρₖ⁻¹ρᵢ gₖfₖᵀfᵢgᵢᵀ = Σ_{k=1}^r Σ_{i=1}^r ρₖ⁻¹ρᵢ gₖδₖᵢgᵢᵀ = Σ_{i=1}^r gᵢgᵢᵀ    (4.26)
Furthermore, since the gᵢ are orthonormal basis vectors for 𝒰ₙ, I = Σ_{i=1}^n gᵢgᵢᵀ. Hence
(I − A^{-I}A)z = Σ_{i=r+1}^n gᵢgᵢᵀz = Σ_{i=r+1}^n gᵢζᵢ    (4.27)
From equation (4.23), ηᵢ = fᵢᵀy, so that substitution of (4.27) into (4.25) and use of Definition
4.14 for the pseudoinverse gives
x = A^{-I}y + (I − A^{-I}A)z
Some further properties of the pseudoinverse are:
1. If A is nonsingular, A^{-I} = A⁻¹.
2. A^{-I}A ≠ AA^{-I} in general.
3. AA^{-I}A = A
4. A^{-I}AA^{-I} = A^{-I}
5. (AA^{-I})ᵀ = AA^{-I}
6. (A^{-I}A)ᵀ = A^{-I}A
7. A^{-I}Aw = w for all w in the range space of Aᵀ.
8. A^{-I}x = 0 for all x in the null space of Aᵀ.
9. A^{-I}(y + z) = A^{-I}y + A^{-I}z for all y in the range space of A and all z in the null
space of Aᵀ.
10. Properties 3-6 and also properties 7-9 completely define a unique A^{-I} and are
sometimes used as definitions of A^{-I}.
11. Given a diagonal matrix Λ = diag(λ₁, λ₂, ..., λₙ), where some λᵢ may be zero.
Then Λ^{-I} = diag(λ₁⁻¹, λ₂⁻¹, ..., λₙ⁻¹), where 0⁻¹ is taken to be 0.
12. Given a Hermitian matrix H = H†. Let H = UΛU†, where U† = U⁻¹. Then
H^{-I} = UΛ^{-I}U†, where Λ^{-I} can be found from 11.
13. Given an arbitrary m×n matrix A. Let H = A†A. Then A^{-I} = H^{-I}A† = (AH^{-I})†,
where H^{-I} can be computed from 12.
14. (A^{-I})^{-I} = A
15. (Aᵀ)^{-I} = (A^{-I})ᵀ
16. The rank of A, AᵀA, AAᵀ, A^{-I}, A^{-I}A and AA^{-I} equals tr(AA^{-I}) = r.
17. If A is square, there exists a unique polar decomposition A = UH, where H² = AᵀA
and U = AH^{-I}. If and only if A is nonsingular, H is positive definite real
symmetric and U is orthogonal, Uᵀ = U⁻¹. When A is singular, H becomes nonnegative
definite and U becomes singular.
18. If A(t) is a general matrix of continuous time functions, A^{-I}(t) may have discon-
tinuities in the time functions of its elements.
19. A^{-I} = lim_{ε→0⁺} Aᵀ(AAᵀ + εQ)⁻¹ = lim_{ε→0⁺} (AᵀA + εP)⁻¹Aᵀ, where P and Q are any
symmetric positive definite matrices that commute with AᵀA and AAᵀ respectively.
Proofs of these properties are left to the solved and supplementary problems.
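Properties 3-6 (the four Penrose conditions) and property 16 can be spot-checked numerically; the random matrix here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
Ap = np.linalg.pinv(A)

assert np.allclose(A @ Ap @ A, A)          # property 3
assert np.allclose(Ap @ A @ Ap, Ap)        # property 4
assert np.allclose((A @ Ap).T, A @ Ap)     # property 5
assert np.allclose((Ap @ A).T, Ap @ A)     # property 6
assert np.isclose(np.trace(A @ Ap), np.linalg.matrix_rank(A))  # property 16
```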
Solved Problems
4.1. Show that all similar matrices have the same determinants and traces.
To show this, we show that the determinant of a matrix equals the product of its eigenvalues
and that the trace of a matrix equals the sum of its eigenvalues, and then use Theorem 4.2.
Factoring the characteristic polynomial gives
det (A − λI) = (λ₁ − λ)(λ₂ − λ)···(λₙ − λ) = λ₁λ₂···λₙ + ··· + (λ₁ + λ₂ + ··· + λₙ)(−λ)ⁿ⁻¹ + (−λ)ⁿ
Setting λ = 0 gives det A = λ₁λ₂···λₙ. Furthermore,
det (A − λI) = (a₁ − λe₁) ∧ (a₂ − λe₂) ∧ ··· ∧ (aₙ − λeₙ)
= a₁ ∧ a₂ ∧ ··· ∧ aₙ + (−λ)[e₁ ∧ a₂ ∧ ··· ∧ aₙ + a₁ ∧ e₂ ∧ ··· ∧ aₙ + ··· + a₁ ∧ a₂ ∧ ··· ∧ eₙ]
+ ··· + (−λ)ⁿ⁻¹[a₁ ∧ e₂ ∧ ··· ∧ eₙ + e₁ ∧ a₂ ∧ ··· ∧ eₙ + ··· + e₁ ∧ e₂ ∧ ··· ∧ aₙ]
+ (−λ)ⁿ e₁ ∧ e₂ ∧ ··· ∧ eₙ
Comparing the coefficients of (−λ)⁰ again gives λ₁λ₂···λₙ = a₁ ∧ a₂ ∧ ··· ∧ aₙ, and comparing the coefficients of (−λ)ⁿ⁻¹ gives
λ₁ + λ₂ + ··· + λₙ = a₁ ∧ e₂ ∧ ··· ∧ eₙ + e₁ ∧ a₂ ∧ ··· ∧ eₙ + ··· + e₁ ∧ e₂ ∧ ··· ∧ aₙ
However,
a₁ ∧ e₂ ∧ ··· ∧ eₙ = (a₁₁e₁ + a₂₁e₂ + ··· + aₙ₁eₙ) ∧ e₂ ∧ ··· ∧ eₙ = a₁₁ e₁ ∧ e₂ ∧ ··· ∧ eₙ = a₁₁
and similarly e₁ ∧ a₂ ∧ ··· ∧ eₙ = a₂₂, etc. Therefore
λ₁ + λ₂ + ··· + λₙ = a₁₁ + a₂₂ + ··· + aₙₙ = tr A
4.2. Reduce the matrix A to Jordan form, where
        ( 8  −8  −2 )              ( −1   2  −1 )
(a) A = ( 4  −3  −2 )      (c) A = (  0  −1   0 )
        ( 3  −4   1 )              (  0   0  −1 )
        ( −1   1  −1 )             ( −1   0   0 )
(b) A = (  0   1  −4 )     (d) A = (  0  −1   0 )
        (  0   1  −3 )             (  0   0  −1 )
                       ( 8−λ   −8    −2  )
(a) Calculation of det (  4   −3−λ   −2  ) = 0 gives λ₁ = 1, λ₂ = 2, λ₃ = 3. The eigen-
                       (  3    −4   1−λ  )
vector xᵢ is solved for from the equations
( 8−λᵢ   −8    −2  )      ( 0 )
(  4    −3−λᵢ  −2  ) xᵢ = ( 0 )
(  3     −4   1−λᵢ )      ( 0 )
where the third element has been normalized to one in x₂ and x₃ and to two in x₁. Then
( 8  −8  −2 )( 4  3  2 )   ( 4  3  2 )( 1  0  0 )
( 4  −3  −2 )( 3  2  1 ) = ( 3  2  1 )( 0  2  0 )
( 3  −4   1 )( 2  1  1 )   ( 2  1  1 )( 0  0  3 )
                       ( −1−λ    1    −1  )
(b) Calculation of det (   0   1−λ   −4  ) = 0 gives λ₁ = λ₂ = λ₃ = −1.
                       (   0    1   −3−λ )
Solution of the eigenvalue problem (A − (−1)I)x = 0, i.e.
( 0  1  −1 )
( 0  2  −4 ) x = 0
( 0  1  −2 )
gives only one eigenvector, x₁ = (α 0 0)ᵀ, where α is arbitrary. Therefore it can be concluded there is only
one Jordan block L₁₁(−1), so
                ( −1   1   0 )
J = L₁₁(−1) =   (  0  −1   1 )
                (  0   0  −1 )
Solving (A + I)t₁ = x₁ gives t₁ = (β 2α α)ᵀ, where β is arbitrary. Finally, from
( 0  1  −1 )
( 0  2  −4 ) t₂ = t₁
( 0  1  −2 )
we find t₂ = (γ  2β−α  β−α)ᵀ, where γ is arbitrary. Choosing α, β and γ to be 1, 0 and 0
respectively gives a nonsingular T = (x₁ | t₁ | t₂), so that
( −1   1   0 )   ( 1  0   0 )⁻¹( −1  1  −1 )( 1  0   0 )
(  0  −1   1 ) = ( 0  2  −1 )  (  0  1  −4 )( 0  2  −1 )
(  0   0  −1 )   ( 0  1  −1 )  (  0  1  −3 )( 0  1  −1 )
(c) Since A is triangular, it exhibits its eigenvalues on the diagonal, so λ₁ = λ₂ = λ₃ = −1.
The solution of Ax = (−1)x is x = (α β 2β)ᵀ, so there are two linearly independent eigen-
vectors α(1 0 0)ᵀ and β(0 1 2)ᵀ. Therefore there are two Jordan blocks L₁(−1) and L₂(−1).
These can form two different Jordan matrices J₁ or J₂:
     ( −1   1   0 )        ( −1   0   0 )
J₁ = (  0  −1   0 )   J₂ = (  0  −1   1 )
     (  0   0  −1 )        (  0   0  −1 )
It makes no difference whether we choose J₁ or J₂, because we merely reorder the eigenvectors
and generalized eigenvector in the T matrix, i.e.
A(x₁ | t₁ | x₂) = (x₁ | t₁ | x₂)J₁    and    A(x₂ | x₁ | t₁) = (x₂ | x₁ | t₁)J₂
How do we choose α and β to get the correct x₁ to solve for t₁? From (4.9),
             ( 0  2  −1 )      ( α  )
(A − λI)t₁ = ( 0  0   0 ) t₁ = ( β  ) = x₁
             ( 0  0   0 )      ( 2β )
from which β = 0, so that x₁ = (α 0 0)ᵀ and t₁ = (γ  δ  2δ−α)ᵀ, where γ and δ are also
arbitrary. From Problem 4.41 we can always take α = 1 and γ, δ, etc. = 0, but in general
( −1   2  −1 )( α  γ     0  )   ( α  γ     0  )( −1   1   0 )
(  0  −1   0 )( 0  δ     β  ) = ( 0  δ     β  )(  0  −1   0 )
(  0   0  −1 )( 0  2δ−α  2β )   ( 0  2δ−α  2β )(  0   0  −1 )
Any choice of α, β, γ and δ such that the inverse of the T matrix exists will give a similarity
transformation to a Jordan form.
(d) The A matrix is already in Jordan form. Any nonsingular matrix T will transform it, since
A = −I and T⁻¹(−I)T = −I. This can also be seen from the eigenvalue problem
(A − λI)x = (−I − (−1)I)x = 0x = 0
so that any 3-vector x is an eigenvector. The space of eigenvectors belonging to −1 is three-
dimensional, so there are three 1×1 Jordan blocks L₁₁(λ) = −1, L₁₂(λ) = −1 and L₁₃(λ) = −1
on the diagonal for λ = −1.
4.3. Show that a general normal matrix N (i.e. NN† = N†N), not necessarily with distinct
eigenvalues, can be diagonalized by a similarity transformation U such that U† = U⁻¹.
The proof is by induction. First, it is true for a 1×1 matrix, because it is already a diagonal
matrix and U = I. Now assume it is true for (k−1)×(k−1) normal matrices and prove it is true
for a k×k normal matrix Nₖ.
Form T with the first column vector equal to the eigenvector x₁ belonging to λ₁, an eigenvalue of Nₖ.
Then form k−1 other orthonormal vectors x₂, x₃, ..., xₖ from 𝒰ₖ using the Gram-Schmidt process,
and make T = (x₁ | x₂ | ... | xₖ). Note T†T = I. Then
                                                              ( λ₁  a₁₂  ...  a₁ₖ )
T†NₖT = T†(Nₖx₁ | Nₖx₂ | ... | Nₖxₖ) = T†(λ₁x₁ | Nₖx₂ | ... | Nₖxₖ) = (  0  a₂₂  ...  a₂ₖ )
                                                              (  ⋮              ⋮  )
                                                              (  0  aₖ₂  ...  aₖₖ )
where the aᵢⱼ are some numbers, and the zeros in the first column appear because xᵢ†x₁ = 0 for i ≠ 1.
But T†NₖT is normal, because
T†NₖT(T†NₖT)† = T†NₖTT†Nₖ†T = T†NₖNₖ†T = T†Nₖ†NₖT = (T†NₖT)†(T†NₖT)
Equating the first element of the matrix product on the left with the first element of the product
on the right gives
|λ₁|² + |a₁₂|² + ··· + |a₁ₖ|² = |λ₁|²
Therefore a₁₂, a₁₃, ..., a₁ₖ must all be zero, so that
        ( λ₁    0   )
T†NₖT = (  0  Aₖ₋₁  )
where Aₖ₋₁ = {aᵢⱼ}, i, j = 2, 3, ..., k, and Aₖ₋₁ is normal. Since Aₖ₋₁ is (k−1)×(k−1) and normal,
by the inductive hypothesis there exists a Uₖ₋₁ with Uₖ₋₁† = Uₖ₋₁⁻¹ such that Uₖ₋₁†Aₖ₋₁Uₖ₋₁ = D,
where D is a diagonal matrix.
Define Sₖ such that
     ( 1    0   )
Sₖ = ( 0  Uₖ₋₁  )
Then Sₖ†Sₖ = I and
             ( 1    0    )( λ₁    0   )( 1    0   )   ( λ₁  0 )
Sₖ†T†NₖTSₖ = ( 0  Uₖ₋₁†  )(  0  Aₖ₋₁  )( 0  Uₖ₋₁  ) = (  0  D )
Therefore the matrix TSₖ diagonalizes Nₖ, and by Theorem 4.2, D has the other eigenvalues
λ₂, λ₃, ..., λₖ of Nₖ on the diagonal.
Finally, to show TSₖ is unitary,
I = Sₖ†Sₖ = Sₖ†ISₖ = Sₖ†T†TSₖ = (TSₖ)†(TSₖ)
4.4. Prove that two n×n Hermitian matrices A = A† and B = B† can be simultaneously
diagonalized by an orthonormal matrix U (i.e. U†U = I) if and only if AB = BA.
If A = UΛU† and B = UDU†, then
AB = UΛU†UDU† = UΛDU† = UDΛU† = UDU†UΛU† = BA
Therefore all matrices that can be simultaneously diagonalized by an orthonormal U commute.
To show the converse, start with AB = BA. Assume A has distinct eigenvalues. Then
Axᵢ = λᵢxᵢ, so that ABxᵢ = BAxᵢ = λᵢBxᵢ. Hence if xᵢ is an eigenvector of A, so is Bxᵢ. For
distinct eigenvalues, the eigenvectors are proportional, so that Bxᵢ = ρᵢxᵢ, where ρᵢ is a constant of
proportionality. But then ρᵢ is also an eigenvalue of B, and xᵢ is an eigenvector of B. By normal-
izing the xᵢ so that xᵢ†xᵢ = 1, U = (x₁ | ... | xₙ) simultaneously diagonalizes A and B.
If neither A nor B has distinct eigenvalues, the proof is slightly more complicated. Let λ
be an eigenvalue of A having multiplicity m. For nondistinct eigenvalues, all eigenvectors of
λ belong in the m-dimensional null space of A − λI spanned by orthonormal x₁, x₂, ..., xₘ. There-
fore Bxⱼ = Σ_{i=1}^m cᵢⱼxᵢ, where the constants cᵢⱼ can be determined by cᵢⱼ = xᵢ†Bxⱼ. Then for
C = {cᵢⱼ} and X = (x₁ | x₂ | ... | xₘ), C = X†BX = X†B†X = C†, so C is an m×m Hermitian
matrix. Then C = UₘDₘUₘ†, where Dₘ and Uₘ are m×m diagonal and unitary matrices respec-
tively. Now A(XUₘ) = λ(XUₘ), since linear combinations of eigenvectors are still eigenvectors,
and Dₘ = Uₘ†X†BXUₘ. Therefore the set of m column vectors of XUₘ together with all other
normalized eigenvectors of A can diagonalize both A and B. Finally, (XUₘ)†(XUₘ) = Uₘ†X†XUₘ =
Uₘ†IₘUₘ = Iₘ, so that the column vectors of XUₘ are orthonormal.
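The converse direction can be illustrated numerically with a commuting pair; A = 2I + B is a convenient assumed example, and since A has distinct eigenvalues its eigenvectors necessarily diagonalize B as well:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert np.allclose(A @ B, B @ A)   # the pair commutes

# Orthonormal eigenvectors of A diagonalize both A and B
_, U = np.linalg.eigh(A)
for M in (A, B):
    D = U.T @ M @ U
    assert np.allclose(D, np.diag(np.diag(D)))
```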
4.5. Show the positive definite Hermitian square root R, such that R² = Q, is unique.
Since R is Hermitian, R = UΛ₁U†, where U is unitary. Also, R² and R commute, so that
both R and Q can be simultaneously reduced to diagonal form by Problem 4.4, and Q = UDU†.
Therefore D = Λ₁². Suppose another matrix S satisfies S² = Q, with S = VΛ₂V†. By similar reason-
ing, D = Λ₂². Since a number > 0 has a unique positive square root, Λ₂ = Λ₁, and V and U are
matrices of orthonormal eigenvectors. The normalized eigenvectors corresponding to distinct
eigenvalues are unique. For any nondistinct eigenvalue λ of Q with orthonormal eigenvectors
x₁, x₂, ..., xₘ, the corresponding contribution to the square root is
(x₁ | ... | xₘ) λ^{1/2}Iₘ (x₁ | ... | xₘ)† = λ^{1/2}XX†
and for any other linear combination of orthonormal eigenvectors, y₁, y₂, ..., yₘ,
(y₁ | ... | yₘ) = (x₁ | ... | xₘ)Tₘ
where Tₘ† = Tₘ⁻¹. Then
(y₁ | ... | yₘ) λ^{1/2}Iₘ (y₁ | ... | yₘ)† = (x₁ | ... | xₘ)Tₘ λ^{1/2}Iₘ Tₘ†(x₁ | ... | xₘ)† = λ^{1/2}XX†
Hence R and S are equal even though U and V may differ when Q has nondistinct eigen-
values.
4.6. Prove Sylvester's theorem: A Hermitian matrix Q is positive definite if and only
if all leading principal minors det Qₘ > 0.
If Q is positive definite, 𝒬 = (x, Qx) > 0 for any nonzero x. Let xₘ be the vector of the first m
elements of x. For those x whose last n − m elements are zero, (xₘ, Qₘxₘ) = (x, Qx) > 0.
Therefore Qₘ is positive definite, and all its eigenvalues are positive. From Problem 4.1, the
determinant of any matrix equals the product of its eigenvalues, so det Qₘ > 0.
If det Qₘ > 0 for m = 1, 2, ..., n, we proceed by induction. For n = 1, det Q₁ = q₁₁ > 0.
Assume now that if det Q₁ > 0, ..., det Qₙ₋₁ > 0, then Qₙ₋₁ is positive definite and must
possess an inverse. Partition Qₙ as
     ( Qₙ₋₁  q   )   ( I          0 )( Qₙ₋₁          0          )( I  Qₙ₋₁⁻¹q )
Qₙ = ( q†    qₙₙ ) = ( q†Qₙ₋₁⁻¹   1 )(  0   qₙₙ − q†Qₙ₋₁⁻¹q     )( 0     1    )    (4.28)
We are also given det Qₙ > 0. Then use of Problem 3.5 gives det Qₙ = (qₙₙ − q†Qₙ₋₁⁻¹q) det Qₙ₋₁,
so that qₙₙ − q†Qₙ₋₁⁻¹q > 0. Hence
            ( Qₙ₋₁          0          )( xₙ₋₁ )
(xₙ₋₁† | xₙ*)(  0   qₙₙ − q†Qₙ₋₁⁻¹q    )( xₙ   ) > 0
for any nonzero vector (xₙ₋₁† | xₙ*)†. Then for any nonzero vector y, setting
    ( I  Qₙ₋₁⁻¹q )
x = ( 0     1    ) y
and substituting into the quadratic form above, together with (4.28), gives (y, Qₙy) > 0.
4.7. Show that if ||A|| < 1, then (I − A)⁻¹ = Σ_{n=0}^∞ Aⁿ.
Let Sₖ = Σ_{n=0}^k Aⁿ and S = Σ_{n=0}^∞ Aⁿ. Then
||S − Sₖ|| = || Σ_{n=k+1}^∞ Aⁿ || ≤ Σ_{n=k+1}^∞ ||A||ⁿ
by properties (3) and (4) of Theorem 4.11. Using property (6) and ||A|| < 1 gives S = lim Sₖ.
Note ||A^{k+1}|| ≤ ||A||^{k+1}, so that lim A^{k+1} = 0. Since (I − A)Sₖ = I − A^{k+1}, taking limits as
k → ∞ gives (I − A)S = I. Since S exists, it is (I − A)⁻¹. This is called a contraction mapping,
because ||(I − A)x|| ≥ (1 − ||A||) ||x||.
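The Neumann series of Problem 4.7 converges quickly when ||A|| is well below one; a sketch with an assumed contraction A:

```python
import numpy as np

A = np.array([[0.1, 0.4],
              [0.2, 0.3]])
assert np.linalg.norm(A, 2) < 1   # required for convergence

# Partial sums of sum_n A^n
S = np.zeros_like(A)
P = np.eye(2)
for _ in range(200):
    S = S + P
    P = P @ A

assert np.allclose(S, np.linalg.inv(np.eye(2) - A))
```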
                                          (  1  1   2 )
4.8. Find the spectral representation of A = ( −1  1  −2 )
                                          (  0  0   1 )
The spectral representation of A is Σ_{i=1}^3 λᵢxᵢrᵢ†. The matrix A has eigenvalues λ₁ = 1,
λ₂ = 1 − j and λ₃ = 1 + j, and eigenvectors x₁ = (1 1 −0.5)ᵀ, x₂ = (j 1 0)ᵀ and x₃ = (−j 1 0)ᵀ.
The reciprocal basis rᵢ can be found from
                 (  1    j  −j )⁻¹
(r₁ | r₂ | r₃)† = (  1    1   1 )
                 ( −0.5  0   0 )
Then
        (  1   )                    ( j )                            ( −j )
A = (1) (  1   )(0  0  −2) + (1−j)  ( 1 )(−0.5j  0.5  1−j) + (1+j)   (  1 )(0.5j  0.5  1+j)
        ( −0.5 )                    ( 0 )                            (  0 )
4.9. Show that the relations AA^{-I}A = A, A^{-I}AA^{-I} = A^{-I}, (AA^{-I})ᵀ = AA^{-I} and
(A^{-I}A)ᵀ = A^{-I}A define a unique matrix A^{-I} that can also be expressed as in De-
finition 4.14.
Represent A = Σ_{i=1}^r ρᵢfᵢgᵢᵀ and A^{-I} = Σ_{k=1}^r ρₖ⁻¹gₖfₖᵀ. Then
AA^{-I}A = ( Σ_{i=1}^r ρᵢfᵢgᵢᵀ )( Σ_{j=1}^r ρⱼ⁻¹gⱼfⱼᵀ )( Σ_{k=1}^r ρₖfₖgₖᵀ )
         = Σ_{i=1}^r Σ_{j=1}^r Σ_{k=1}^r ρᵢρⱼ⁻¹ρₖ fᵢgᵢᵀgⱼfⱼᵀfₖgₖᵀ
Since gᵢᵀgⱼ = δᵢⱼ and fⱼᵀfₖ = δⱼₖ,
AA^{-I}A = Σ_{i=1}^r ρᵢfᵢgᵢᵀ = A
Similarly A^{-I}AA^{-I} = A^{-I}, and from equation (4.26),
A^{-I}A = Σ_{i=1}^r gᵢgᵢᵀ = ( Σ_{i=1}^r gᵢgᵢᵀ )ᵀ = (A^{-I}A)ᵀ
Similarly (AA^{-I})ᵀ = ( Σ_{i=1}^r fᵢfᵢᵀ )ᵀ = AA^{-I}.
To show uniqueness, assume two solutions X and Y satisfy the four relations. Then
1. AXA = A    3. (AX)ᵀ = AX    5. AYA = A    7. (AY)ᵀ = AY
2. XAX = X    4. (XA)ᵀ = XA    6. YAY = Y    8. (YA)ᵀ = YA
and transposing 1 and 5 gives
9. AᵀXᵀAᵀ = Aᵀ    10. AᵀYᵀAᵀ = Aᵀ
The following chain of equalities can be established by using the equation number in parentheses
over each equals sign as the justification for that step:
X (2)= XAX (4)= AᵀXᵀX (10)= AᵀYᵀAᵀXᵀX (4)= AᵀYᵀXAX (2)= AᵀYᵀX (8)= YAX
  (6)= YAYAX (3)= YAYXᵀAᵀ (7)= YYᵀAᵀXᵀAᵀ (9)= YYᵀAᵀ (7)= YAY (6)= Y
Therefore the four relations given form a definition for the pseudoinverse that is equivalent to
Definition 4.14.
4.10. The outcome y of a certain experiment is thought to depend linearly upon a para-
meter x, such that y = αx + β. The experiment is repeated three times, during
which x assumes the values x₁ = 1, x₂ = −1 and x₃ = 0, and the corresponding out-
comes are y₁ = 2, y₂ = −2 and y₃ = 3. If the linear relation were exact,
2 = α(1) + β
−2 = α(−1) + β
3 = α(0) + β
However, experimental uncertainties are such that the relations are not quite satisfied
in each case, so α and β are to be chosen such that
Σ_{i=1}^3 (yᵢ − αxᵢ − β)²
is minimum. Explain why the pseudoinverse can be used to select the best α and β,
and then calculate the best α and β using the pseudoinverse.
The equations can be written in the form y = Ax as
(  2 )   (  1  1 )
( −2 ) = ( −1  1 ) ( α )
(  3 )   (  0  1 ) ( β )
Defining x₀ = A^{-I}y, by Theorem 4.21, ||y − Ax₀||₂² = Σ (yᵢ − α₀xᵢ − β₀)² is minimized. Since
AᵀA = ( 2  0 ), then ρ₁ = √2 and ρ₂ = √3, and g₁ = (1  0)ᵀ and g₂ = (0  1)ᵀ. Since
      ( 0  3 )
fᵢ = Agᵢ/ρᵢ, then f₁ = (1  −1  0)ᵀ/√2 and f₂ = (1  1  1)ᵀ/√3. Now A^{-I} can be calculated from
Definition 4.14 to be
A^{-I} = ( 1/2  −1/2   0  )
         ( 1/3   1/3  1/3 )
so that the best α₀ = 2 and β₀ = 1.
Note this procedure can also be applied to fits with more parameters, such as Σ (yᵢ − αxᵢ − βwᵢ − γ)², etc.
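The fit of Problem 4.10 is one line with the pseudoinverse; numpy's `pinv` recovers α₀ = 2 and β₀ = 1:

```python
import numpy as np

# Fit y = alpha*x + beta through (1, 2), (-1, -2), (0, 3) in the least squares sense
A = np.array([[ 1.0, 1.0],
              [-1.0, 1.0],
              [ 0.0, 1.0]])
y = np.array([2.0, -2.0, 3.0])

alpha, beta = np.linalg.pinv(A) @ y
assert np.isclose(alpha, 2.0) and np.isclose(beta, 1.0)
```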
4.11. Show that if an n×n matrix C is nonsingular, then a matrix B exists such that
C = e^B, or B = ln C.
Reduce C to Jordan form, so that C = TJT⁻¹. Then B = ln C = T ln J T⁻¹, so that the
problem is to find ln L(λ), where L(λ) is an l×l Jordan block, because
ln J = diag( ln L₁₁(λ₁), ..., ln L_km(λₙ) )
Using the Maclaurin series for the logarithm of λI + [L(λ) − λI],
ln L(λ) = ln [λI + L(λ) − λI] = I ln λ − Σ_{k=1}^∞ k⁻¹(−λ)⁻ᵏ[L(λ) − λI]ᵏ
Note all the eigenvalues ζ of L(λ) − λI are zero, so that its characteristic equation is ζ^l = 0. Then
by the Cayley-Hamilton theorem, [L(λ) − λI]^l = 0, so that
ln L(λ) = I ln λ − Σ_{k=1}^{l−1} k⁻¹(−λ)⁻ᵏ[L(λ) − λI]ᵏ
Since λ ≠ 0 because C⁻¹ exists, ln L(λ) exists and can be calculated, so that ln J and hence ln C
can be found.
Note B may be complex, because in the 1×1 case ln (−1) = jπ. Also, in the converse case
where B is given, C is always nonsingular for arbitrary B because C⁻¹ = e⁻ᴮ.
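For a diagonalizable nonsingular C the construction collapses to the eigendecomposition route (the general Jordan-block case needs the series above); C here is an assumed symmetric example:

```python
import numpy as np

C = np.array([[5.0, 4.0],
              [4.0, 5.0]])   # eigenvalues 9 and 1, both nonzero

lam, T = np.linalg.eig(C)
# B = ln C = T diag(ln lam_i) T^{-1}; complex log covers negative eigenvalues
B = (T * np.log(lam.astype(complex))) @ np.linalg.inv(T)

# exp(B) recovers C (partial sums of the exponential series)
E = np.zeros_like(B)
term = np.eye(2, dtype=complex)
for k in range(40):
    E = E + term
    term = term @ B / (k + 1)
assert np.allclose(E, C)
```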
Supplementary Problems
4.12. Why does at least one nonzero eigenvector belong to each distinct eigenvalue?
1 -4'
4.13. Find the eigenvalues and eigenvectors of A where A = f 3
-2 -1^
4.14. Suppose all the eigenvalues of A are zero. Can we conclude that A = 0?
4.15. Prove by induction that the generalized eigenvector tₗ of equation (4.9) lies in the null space of
(A − λᵢI)^{l+1}.
4.16. Let x be an eigenvector of both A and B. Is x also an eigenvector of (A − 2B)?
4.17. Let x₁ and x₂ be eigenvectors of a matrix A corresponding to the eigenvalues λ₁ and λ₂, where
λ₁ ≠ λ₂. Show that αx₁ + βx₂ is not an eigenvector of A if α ≠ 0 and β ≠ 0.
4.18. Using a similarity transformation to a diagonal matrix, solve the set of difference equations
( x₁(n+1) )   ( 0   2 )( x₁(n) )          ( x₁(0) )   ( 2 )
( x₂(n+1) ) = ( 2  −3 )( x₂(n) )   with   ( x₂(0) ) = ( 1 )
4.19. Show that all the eigenvalues of the unitary matrix U, where U†U = I, have an absolute value
of one.
4.20. Find the unitary matrix U and the diagonal matrix A such that
/ 2 " t
Check your work.
Ut/ 1 ]U = A
                           ( 1  3/5  −4/5 )
4.21. Reduce the matrix A = ( 0   1     0  ) to Jordan form.
                           ( 0   0     1  )
4.22. Given a 3×3 real matrix P ≠ 0 such that P² = 0. Find its Jordan form.
1 o""
4.23. Given the matrix A = ( -1 2 ). Find the transformation matrix T that reduces A
10 1^
to Jordan form, and identify the Jordan blocks Ly(Xj).
4.24. Find the eigenvalues and eigenvectors of A = ( ^ ]. What happens as e ^ 0?
4.25. Find the square root of
( 4  5 )
( 5  4 )
4.26. Given the quadratic form 𝒬 = ξ₁² + 2ξ₁ξ₂ + 4ξ₂² + 4ξ₂ξ₃ + 2ξ₃². Is it positive definite?
4.27. Show that Σ_{n=0}^∞ Aⁿ/n! converges for any A.
4.28. Show that the coefficient αₙ of I in the Cayley-Hamilton theorem Aⁿ + α₁Aⁿ⁻¹ + ··· + αₙI = 0
is zero if and only if A is singular.
4.29. Does A^2 f(A) = [f(A)]A^2?
4.30. Let the 2 × 2 matrix A have distinct eigenvalues λ1 and λ2. Does A^3 (A - λ1 I)(A - λ2 I) = 0?
4.31. Given a real vector x = (x1 x2 ··· xn)^T and a scalar α. Define the vector
grad_x α = ( ∂α/∂x1   ∂α/∂x2   ···   ∂α/∂xn )^T
Show grad_x x^T Q x = 2Qx if Q is symmetric, and evaluate grad_x x^T A x for a nonsymmetric A.
4.32. Given a basis x1, x2, ..., xn of V_n, and its reciprocal basis r1, r2, ..., rn. Show that I = Σ_{i=1}^{n} x_i r_i†.
4.33. Suppose A^2 = 0. Can A be nonsingular?
4.34. Find e^{At}, where A is the matrix of Problem 4.2(a).
4.35. Find e^{At} for A = ( a   ω ; -ω   a ).
4.36. Find the pseudoinverse of A = ( 4  -3 ; 0  0 ).
4.37. Find the pseudoinverse of A = ( 1  2  0 ; 3  6  0 ).
4.38. Prove that the listed properties 1-18 of the pseudoinverse are true.
4.39. Given an m × n real matrix A and scalars σ_i such that
A g_i = σ_i f_i      A^T f_i = σ_i g_i
Show that f_i† f_j = δ_ij if g_i† g_j = δ_ij.
4.40. Given a real m × n matrix A. Starting with the eigenvalues and eigenvectors of the real symmetric (n+m) × (n+m) matrix ( 0   A ; A^T   0 ), derive the conclusions of Theorems 4.19 and 4.20.
4.41. Show that the T matrix of J = T^{-1}AT is arbitrary to within n constants if A is an n × n matrix in which each Jordan block has distinct eigenvalues. Specifically, show T = T0 K where T0 is fixed and
K = diag( K1, K2, ..., Km )
where K_j is an l × l matrix corresponding to the jth Jordan block L_j, of the form
K_j = ( α1  α2  α3  ···
         0  α1  α2  ···
         0   0  α1  ···
         0   0   0  ··· )
where the α_i are arbitrary constants.
4.42. Show that for any partitioned matrix (A | 0), (A | 0)^+ = ( A^+ ; 0 ), the partitioned matrix with A^+ on top of 0.
4.43. Show that |x†Ax| ≤ ||A|| ||x||_2^2.
4.44. Show that another definition of the pseudoinverse is A^+ = lim_{ε→0} (A^T A + εP)^{-1} A^T, where P is any positive definite symmetric matrix that commutes with A^T A.
4.45. Show ||A|| = 0 if and only if A = 0.
Answers to Supplementary Problems
4.12. Because det (A - λ_i I) = 0, the column vectors of A - λ_i I are linearly dependent, giving a null space of at least one dimension. The eigenvector of λ_i lies in the null space of A - λ_i I.
4.13. λ = 3, 3, -3;  for λ = 3, x = α(-2 0 1)^T + β(0 1 0)^T;  for λ = -3, x = γ(1 0 1)^T
4.14. No
4.16. Yes
4.18. x1(n) = 2 and x2(n) = 1 for all n.
4.20.
4.21. T = ( 1    0     0
            0   0.6   0.8
            0  -0.8   0.6 )
4.22. J = ( 0  1  0 ; 0  0  0 ; 0  0  0 )
4.23. T = ( 0  α  τ ; 0  α  τ+α ; α  τ  θ ) for α, τ, θ arbitrary, and J = ( 1  1  0 ; 0  1  1 ; 0  0  1 ), all one big Jordan block.
4.24. λ1 = 1 + ε/2, λ2 = 1 - ε/2, x1 = (1 0)^T, x2 = (1 -ε)^T; as ε → 0, λ2 → λ1 and x2 → x1. Slight perturbations on the eigenvalues break the multiplicity.
4.25. ( 2  1 ; 1  2 )
4.26. Yes
4.27. Use matrix norms.
4.28. If A^{-1} exists, A^{-1} = -a_n^{-1}[A^{n-1} + a1 A^{n-2} + ··· + a_{n-1} I]. If a_n = λ1 λ2 ··· λn = 0, then at least one eigenvalue is zero and A is singular.
4.29. Yes
4.30. Yes
4.31. (A^T + A)x
4.33. No
4.34. e^{At} = ( -4e^t + 6e^{2t} - e^{3t}    -3e^t + 4e^{2t} - e^{3t}    -e^t + 2e^{2t} - e^{3t}
                  8e^t - 9e^{2t} + e^{3t}     6e^t - 6e^{2t} + e^{3t}     2e^t - 3e^{2t} + e^{3t}
                 -4e^t + 3e^{2t} + e^{3t}    -3e^t + 2e^{2t} + e^{3t}    -e^t + e^{2t} + e^{3t} )
4.35. e^{At} = e^{at} (  cos ωt   sin ωt
                        -sin ωt   cos ωt )
4.36. A^+ = (1/25) (  4  0
                     -3  0 )
4.37. A^+ = (1/50) ( 1  3
                     2  6
                     0  0 )
4.40. There are r nonzero positive eigenvalues ρ_i and r nonzero negative eigenvalues -ρ_i. Corresponding to the eigenvalues ρ_i are the eigenvectors ( f_i ; g_i ), and to -ρ_i the eigenvectors ( f_i ; -g_i ). Spectral representation of I then gives the desired result.
4.43. |x†Ax| ≤ ||x||_2 ||Ax||_2 by the Schwarz inequality.
chapter 5
Solutions to the Linear State Equation
5.1 TRANSITION MATRIX
From Section 1.3, a solution to a nonlinear state equation with an input u(t) and an initial condition x0 can be written in terms of its trajectory in state space as x(t) = φ(t; u(t), x0, t0). Since the state of a zero-input system does not depend on u(t), it can be written x(t) = φ(t; x0, t0). Furthermore, if the system is linear, then it is linear in the initial condition, so that from Theorem 3.20 we obtain the
Definition 5.1: The transition matrix, denoted Φ(t, t0), is the n × n matrix such that
x(t) = φ(t; x0, t0) = Φ(t, t0)x0
This is true for any t0, i.e. x(t) = Φ(t, τ)x(τ) for t < τ as well as t ≥ τ. Substitution of x(t) = Φ(t, t0)x0 for arbitrary x0 in the zero-input linear state equation dx/dt = A(t)x gives the matrix equation for Φ(t, t0),
∂Φ(t, t0)/∂t = A(t)Φ(t, t0)     (5.1)
Since for any x0, x0 = x(t0) = Φ(t0, t0)x0, the initial condition on Φ(t, t0) is
Φ(t0, t0) = I     (5.2)
Notice that if the transition matrix can be found, we have the solution to a time-varying linear differential equation. Also, analogous to the continuous time case, the discrete time transition matrix obeys
Φ(k+1, m) = A(k)Φ(k, m)     (5.3)
with
Φ(m, m) = I     (5.4)
so that x(k) = Φ(k, m)x(m).
Theorem 5.1: Properties of the continuous time transition matrix for a linear, time-varying system are
(1) transition property
Φ(t2, t0) = Φ(t2, t1)Φ(t1, t0)     (5.5)
(2) inversion property
Φ(t0, t1) = Φ^{-1}(t1, t0)     (5.6)
(3) separation property
Φ(t1, t0) = θ(t1)θ^{-1}(t0)     (5.7)
(4) determinant property
det Φ(t, t0) = e^{∫_{t0}^{t} tr A(τ) dτ}     (5.8)
and properties of the discrete time transition matrix are
(5) transition property
Φ(k, m) = Φ(k, l)Φ(l, m)     (5.9)
100 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5
(6) inversion property
Φ(m, k) = Φ^{-1}(k, m)     (5.10)
(7) separation property
Φ(m, k) = θ(m)θ^{-1}(k)     (5.11)
(8) determinant property
det Φ(k, m) = [det A(k-1)][det A(k-2)]···[det A(m)]   for k > m     (5.12)
In the continuous time case, Φ^{-1}(t, t0) always exists. However, in rather unusual circumstances, A(k) may be singular for some k, so there is no guarantee that the inverses in equations (5.10) and (5.11) exist.
Proof of Theorem 5.1: Because we have a linear zero-input dynamical system, the transition relations (1.6) and (1.7) become Φ(t, t0)x(t0) = Φ(t, t1)x(t1) and x(t1) = Φ(t1, t0)x(t0). Combining these relations gives Φ(t, t0)x(t0) = Φ(t, t1)Φ(t1, t0)x(t0). Since x(t0) is an arbitrary initial condition, the transition property is proven. Setting t2 = t0 in equation (5.5) and using (5.2) gives Φ(t0, t1)Φ(t1, t0) = I, so that if det Φ(t0, t1) ≠ 0 the inversion property is proven. Furthermore, let θ(t) = Φ(t, 0) and set t1 = 0 in equation (5.5), so that Φ(t2, t0) = θ(t2)Φ(0, t0). Use of (5.6) gives Φ(0, t0) = Φ^{-1}(t0, 0) = θ^{-1}(t0), so that the separation property is proven.
To prove the determinant property, partition Φ into its row vectors φ1, φ2, ..., φn. Then
det Φ = φ1 ∧ φ2 ∧ ··· ∧ φn
and
d(det Φ)/dt = (dφ1/dt) ∧ φ2 ∧ ··· ∧ φn + φ1 ∧ (dφ2/dt) ∧ ··· ∧ φn + ··· + φ1 ∧ φ2 ∧ ··· ∧ (dφn/dt)     (5.13)
From the differential equation (5.1) for Φ, the row vectors are related by
dφ_i/dt = Σ_{k=1}^{n} a_ik(t) φ_k     for i = 1, 2, ..., n
Because this is a linear, time-varying dynamical system, each element a_ik(t) is continuous and single-valued, so that this uniquely represents dφ_i/dt for each t. Since the wedge product of repeated rows vanishes,
φ1 ∧ ··· ∧ (dφ_i/dt) ∧ ··· ∧ φn = φ1 ∧ ··· ∧ (Σ_k a_ik φ_k) ∧ ··· ∧ φn = a_ii φ1 ∧ ··· ∧ φ_i ∧ ··· ∧ φn
Then from equation (5.13),
d(det Φ)/dt = [tr A(t)] φ1 ∧ φ2 ∧ ··· ∧ φn = [tr A(t)] det Φ
Separating variables gives
d(det Φ)/det Φ = tr A(t) dt
Integrating and taking antilogarithms results in
det Φ(t, t0) = γ e^{∫_{t0}^{t} tr A(τ) dτ}
where γ is the constant of integration. Setting t = t0 gives det Φ(t0, t0) = det I = 1 = γ, so that the determinant property is proven. Since e^{f(t)} = 0 only if f(t) → -∞, the inverse of Φ(t, t0) always exists because the elements of A(t) are bounded.
The proof of the properties for the discrete time transition matrix is quite similar,
and the reader is referred to the supplementary problems.
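The determinant property (5.8) can be checked numerically for a time-varying system; the A(t) below is an arbitrary illustration, and the transition matrix is obtained by integrating (5.1) with SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# An illustrative time-varying coefficient matrix
def A(t):
    return np.array([[0.0, 1.0],
                     [-2.0, -np.cos(t)]])

# Integrate dPhi/dt = A(t) Phi, Phi(t0, t0) = I, column-stacked as a vector
def rhs(t, phi_flat):
    return (A(t) @ phi_flat.reshape(2, 2)).ravel()

t0, t1 = 0.0, 3.0
sol = solve_ivp(rhs, (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Phi = sol.y[:, -1].reshape(2, 2)

# determinant property (5.8): det Phi(t1, t0) = exp(integral of tr A)
integral, _ = quad(lambda t: np.trace(A(t)), t0, t1)
assert np.isclose(np.linalg.det(Phi), np.exp(integral), rtol=1e-6)
```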
5.2 CALCULATION OF THE TRANSITION MATRIX FOR TIME-INVARIANT SYSTEMS
Theorem 5.2: The transition matrix for a time-invariant linear differential system is
Φ(t, τ) = e^{A(t-τ)}     (5.14)
and for a time-invariant linear difference system is
Φ(k, m) = A^{k-m}     (5.15)
Proof: The Maclaurin series for e^{At} is Σ_{k=0}^{∞} A^k t^k / k!, which is uniformly convergent as shown in Problem 4.27. Differentiating with respect to t gives de^{At}/dt = Σ_{k=0}^{∞} A^{k+1} t^k / k!, so substitution into equation (5.1) verifies that e^{A(t-τ)} is a solution. Furthermore, for t = τ, e^{A(t-τ)} = I, so this is the unique solution starting from Φ(τ, τ) = I.
Also, substitution of Φ(k, m) = A^{k-m} into equation (5.3) verifies that it is a solution, and for k = m, A^{k-m} = I. Note that e^A e^B ≠ e^B e^A in general, but e^{At0} e^{At1} = e^{At1} e^{At0} = e^{A(t0+t1)} and Ae^{At} = e^{At}A, as is easily shown using the Maclaurin series.
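A numerical sketch of Theorem 5.2 together with the properties of Theorem 5.1, for an arbitrarily assumed time-invariant A:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Phi = lambda t, tau: expm(A * (t - tau))      # equation (5.14)
t0, t1, t2 = 0.3, 1.1, 2.0
assert np.allclose(Phi(t2, t0), Phi(t2, t1) @ Phi(t1, t0))      # (5.5)
assert np.allclose(Phi(t0, t1), np.linalg.inv(Phi(t1, t0)))     # (5.6)
# discrete-time analog (5.15): Phi(k, m) = A^(k-m) obeys the transition property
assert np.allclose(np.linalg.matrix_power(A, 5),
                   np.linalg.matrix_power(A, 3) @ np.linalg.matrix_power(A, 2))
```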
Since time-invariant linear systems are the most important, numerical calculation of e^{At} is often necessary. However, sometimes only x(t) for t ≥ t0 is needed. Then x(t) can be found by some standard differential equation routine, such as Runge-Kutta or Adams, on the digital computer, or by simulation on the analog computer.
When e^{At} must be found, a number of methods for numerical calculation are available. No one method has yet been found that is the easiest in all cases. Here we present four of the most useful, based on the methods of Section 4.7.
1. Series method:
e^{At} = Σ_{k=0}^{∞} A^k t^k / k!     (5.16)
2. Eigenvalue method:
e^{At} = T e^{Jt} T^{-1}     (5.17)
and, if the eigenvalues are distinct,
e^{At} = Σ_{i=1}^{n} e^{λ_i t} x_i r_i†
3. Cayley-Hamilton:
e^{At} = Σ_{i=0}^{n-1} γ_i(t) A^i     (5.18)
where the γ_i(t) are evaluated from e^{Jt} = Σ_{i=0}^{n-1} γ_i(t) J^i. Note that from (4.15),
e^{Jt} = block diag( e^{L11(λ1)t}, e^{L21(λ1)t}, ..., e^{L12(λ2)t}, ..., e^{Lmk(λk)t} )     (5.19)
where if L_ji(λ_i) is the l × l Jordan block
L_ji(λ_i) = ( λ_i  1  ···  0
               0  λ_i ···  0
               ···
               0   0  ···  λ_i )     (5.20)
then
e^{L_ji(λ_i)t} = e^{λ_i t} ( 1  t  t^2/2!  ···  t^{l-1}/(l-1)!
                             0  1  t       ···  t^{l-2}/(l-2)!
                             ···
                             0  0  0       ···  1 )     (5.21)
4. Resolvent matrix:
e^{At} = ℒ^{-1}{R(s)}     (5.22)
where R(s) = (sI - A)^{-1}.
The hard part of this method is computing the inverse of (sI - A), since its elements are polynomials in s. For matrices with many zero elements, substitution and elimination is about the quickest method. For the general case up to about third order, Cramer's rule can be used. Somewhat higher order systems can be handled from the flow diagram of the Laplace transformed system. The elements r_ij(s) of R(s) are the response of the ith state (integrator) to a unit impulse input of the jth state (integrator). For higher order systems, Leverrier's algorithm might be faster.
Theorem 5.3: Leverrier's algorithm. Define the n × n real matrices F1, F2, ..., Fn and scalars θ1, θ2, ..., θn as follows:
F1 = I                    θ1 = -tr AF1 / 1
F2 = AF1 + θ1 I           θ2 = -tr AF2 / 2
.................................
Fn = AF_{n-1} + θ_{n-1} I     θn = -tr AFn / n
Then
(sI - A)^{-1} = (s^{n-1} F1 + s^{n-2} F2 + ··· + s F_{n-1} + Fn) / (s^n + θ1 s^{n-1} + ··· + θ_{n-1} s + θn)     (5.23)
Also, AFn + θn I = 0, to check the method. Proof is given in Problem 5.4.
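Leverrier's algorithm is straightforward to program. The sketch below uses an assumed 2×2 example whose characteristic polynomial is s^2 + 3s + 2, and exploits AFn + θnI = 0 as the built-in roundoff check:

```python
import numpy as np

def leverrier(A):
    """Leverrier's algorithm: returns F_1..F_n and theta_1..theta_n so that
    (sI - A)^-1 = (s^(n-1) F1 + ... + Fn) / (s^n + theta_1 s^(n-1) + ... + theta_n)."""
    n = A.shape[0]
    F = np.eye(n)
    Fs, thetas = [F], []
    for k in range(1, n + 1):
        theta = -np.trace(A @ F) / k
        thetas.append(theta)
        if k < n:
            F = A @ F + theta * np.eye(n)
            Fs.append(F)
    # roundoff check: A F_n + theta_n I = 0
    assert np.allclose(A @ Fs[-1] + thetas[-1] * np.eye(n), 0)
    return Fs, thetas

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Fs, thetas = leverrier(A)
# denominator is the characteristic polynomial s^2 + 3s + 2
assert np.allclose(thetas, [3.0, 2.0])
```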
Having R(s), a matrix partial fraction expansion can be performed. First, factor the denominator as
det (sI - A) = s^n + θ1 s^{n-1} + ··· + θ_{n-1} s + θn = (s - λ1)(s - λ2)···(s - λn)     (5.24)
where the λ_i are the eigenvalues of A and the poles of the system. Next, expand R(s) in matrix partial fractions. If the eigenvalues are distinct, this has the form
R(s) = R1/(s - λ1) + R2/(s - λ2) + ··· + Rn/(s - λn)     (5.25)
where Rk is the matrix-valued residue
Rk = (s - λk) R(s) |_{s=λk}     (5.26)
For mth order roots λ1, the residue of (s - λ1)^{-i} is
R_i = [1/(m-i)!] d^{m-i}/ds^{m-i} [ (s - λ1)^m R(s) ] |_{s=λ1}     (5.27)
Then e^{At} is easily found by taking the inverse Laplace transform. In the case of distinct roots, equation (5.25) becomes
e^{At} = e^{λ1 t} R1 + e^{λ2 t} R2 + ··· + e^{λn t} Rn     (5.28)
Note, from the spectral representation equation (5.17),
R_i = x_i r_i†     (5.29)
so that the eigenvectors x_i and their reciprocals r_i can easily be found from the R_i. In the case of repeated roots,
ℒ^{-1}{(s - λ1)^{-m}} = t^{m-1} e^{λ1 t} / (m-1)!     (5.30)
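For distinct eigenvalues the residues R_i = x_i r_i† can be formed directly from an eigendecomposition; a sketch with an arbitrarily assumed stable A:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # distinct eigenvalues -1 and -2
lam, X = np.linalg.eig(A)
Rec = np.linalg.inv(X)                # rows are the reciprocal basis vectors r_i
t = 0.7
# e^{At} = sum_i e^{lam_i t} R_i with R_i = x_i r_i  (equations (5.28), (5.29))
E = sum(np.exp(l * t) * np.outer(X[:, i], Rec[i]) for i, l in enumerate(lam))
assert np.allclose(E, expm(A * t))
# the residues resolve the identity: sum_i R_i = I
assert np.allclose(sum(np.outer(X[:, i], Rec[i]) for i in range(2)), np.eye(2))
```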
To find A^k, methods similar to those discussed for e^{At} are available.
1. Series method:
A^k = A·A···A   (k factors)     (5.31)
2. Eigenvalue method:
A^k = T J^k T^{-1}     (5.32)
and for distinct eigenvalues
A^k = Σ_{i=1}^{n} λ_i^k x_i r_i†     (5.33)
3. Cayley-Hamilton:
A^k = Σ_{i=0}^{n-1} γ_i(k) A^i     (5.34)
where the γ_i(k) are evaluated from J^k = Σ_{i=0}^{n-1} γ_i(k) J^i, where from equation (4.15),
J^k = block diag( L11^k(λ1), ..., L12^k(λ2), ..., Lmk^k(λk) )     (5.35)
and if L_ji(λ_i) is l × l as in equation (5.20),
L_ji^k(λ_i) = ( λ_i^k   kλ_i^{k-1}   ···   k! λ_i^{k+1-l} [(l-1)!(k-l+1)!]^{-1}
                 0      λ_i^k        ···   k! λ_i^{k+2-l} [(l-2)!(k-l+2)!]^{-1}
                 ···
                 0      0            ···   λ_i^k )
4. Resolvent matrix:
A^k = 𝒵^{-1}{ z R(z) }
where R(z) = (zI - A)^{-1}.
Since R(z) is exactly the same form as R(s) except with z for s, the inversion procedures given previously are exactly the same.
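A sketch of the eigenvalue method (5.33) for A^k, using an assumed A with distinct eigenvalues 1 and -4:

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [2.0, -3.0]])          # eigenvalues 1 and -4
lam, X = np.linalg.eig(A)
Rec = np.linalg.inv(X)
k = 7
# A^k = sum_i lam_i^k x_i r_i  (equation (5.33))
Ak = sum(l**k * np.outer(X[:, i], Rec[i]) for i, l in enumerate(lam))
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```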
The series method is useful if A^{k0} = 0 for some k = k0. Then the series truncates at k0 - 1. Because the eigenvalue problem Ax = λx can be multiplied by A^{k0-1} to obtain 0 = A^{k0}x = λA^{k0-1}x = ··· = λ^{k0}x, then λ = 0. Therefore the series method is useful only for systems with k0 poles only at the origin. Otherwise it suffers from slow convergence, roundoff, and difficulties in recognizing the resulting infinite series.
The eigenvalue method is not very fast because each eigenvector must be computed. However, at the 1968 Joint Automatic Control Conference it was the general consensus that this was the only method that anyone had any experience with that could compute e^{At} up to twentieth order.
The Cayley-Hamilton method is very similar to the eigenvalue method, and usually involves a few more multiplications.
The resolvent matrix method is usually simplest for systems of less than tenth order. This is the extension to matrix form of the usual Laplace transform techniques for single input-single output systems that have worked so successfully in the past. For very high order systems, Leverrier's algorithm involves very high powers of A, which makes the spread of the eigenvalues very large unless A is scaled properly. However, it involves no matrix inversions, and gives a means of checking the amount of roundoff in that AFn + θn I should equal 0. In the case of distinct roots, R_i = x_i r_i†, so that the eigenvectors can easily be obtained. Perhaps a combination of both Leverrier's algorithm and the eigenvalue method might be useful for very high order systems.
5.3 TRANSITION MATRIX FOR TIME-VARYING DIFFERENTIAL SYSTEMS
There is NO general solution for the transition matrix of a time-varying linear system such as there is for the time-invariant case.
Example 5.1.
We found that the transformation A = TJT^{-1} gave a general solution
Φ(t, t0) = Φ(t - t0) = e^{A(t-t0)} = T e^{J(t-t0)} T^{-1}
for the time-invariant case. For the time-varying case,
dx/dt = A(t)x
Then A(t) = T(t) J(t) T^{-1}(t), where the elements of T and J must be functions of t. Attempting a change of variable x = T(t)y results in
dy/dt = J(t)y - T^{-1}(t)(dT(t)/dt)y
which does not simplify unless dT(t)/dt = 0 or some very fortunate combination of elements.
We may conclude that knowledge of the time-varying eigenvalues of a time-varying system usually does not help.
The behavior of a time-varying system depends on the behavior of the coefficients of the A(t) matrix.
Example 5.2.
Given the time-varying scalar system dξ/dt = ξ sgn (t - t1), where sgn is the signum function, so that sgn (t - t1) = -1 for t < t1 and sgn (t - t1) = +1 for t > t1. This has a solution ξ(t) = ξ(t0)e^{-(t-t0)} for t < t1 and ξ(t) = ξ(t1)e^{t-t1} for t ≥ t1. For times t < t1, the system appears stable, but actually the solution grows without bound as t → ∞. We shall see in Chapter 9 that the concept of stability must be carefully defined for a time-varying system.
Also, the phenomenon of finite escape time can arise in a time-varying linear system,
whereas this is impossible in a time-invariant linear system.
Example 5.3.
Consider the time-varying scalar system
dξ/dt = ξ/(t - t1)^2     with ξ(t0) = ξ0,   t0 ≤ t < t1
Then
ξ(t) = ξ0 e^{(t0 - t1)^{-1} - (t - t1)^{-1}}
and the solution is represented in Fig. 5-1. The solution goes to infinity in a finite time, as t → t1.
Fig. 5-1
These and other peculiarities make the analysis of time-varying linear systems relatively more difficult than the analysis of time-invariant linear systems. However, the analysis of time-varying systems is of considerable practical importance. For instance, a time-varying linear system usually results from the linearization of a nonlinear system about a nominal trajectory (see Section 1.6). Since a control system is usually designed to keep the variations from the nominal small, the time-varying linear system is a good approximation.
Since there is no general solution for the transition matrix, what can be done? In certain special cases a closed-form solution is available. A computer can almost always find a numerical solution, and with the use of the properties of the transition matrix (Theorem 5.1) this makes a powerful tool for analysis. Finally and perhaps most importantly, solutions for systems with an input can be expressed in terms of the transition matrix.
5.4 CLOSED FORMS FOR SPECIAL CASES OF TIME-VARYING LINEAR DIFFERENTIAL SYSTEMS
Theorem 5.4: A general scalar time-varying linear differential system dξ/dt = a(t)ξ has the scalar transition matrix
φ(t, τ) = e^{∫_{τ}^{t} a(η) dη}
Proof: Separating variables in the original equation, dξ/ξ = a(t) dt. Integrating and taking antilogarithms gives ξ(t) = ξ(τ) e^{∫_{τ}^{t} a(η) dη}.
Theorem 5.5: If A(t)A(τ) = A(τ)A(t) for all t, τ, the time-varying linear differential system dx/dt = A(t)x has the transition matrix
Φ(t, τ) = e^{∫_{τ}^{t} A(η) dη}     (5.36)
This is a severe requirement on A(t), and is usually met only on final examinations.
Proof: Use of the series form for the exponential gives
e^{∫_{τ}^{t} A(η) dη} = I + ∫_{τ}^{t} A(η) dη + (1/2)(∫_{τ}^{t} A(η) dη)^2 + ···
Taking derivatives,
(d/dt) e^{∫_{τ}^{t} A(η) dη} = A(t) + (1/2)A(t)∫_{τ}^{t} A(η) dη + (1/2)∫_{τ}^{t} A(η) dη A(t) + ···     (5.37)
But multiplying the series for (5.36) by A(t),
A(t) e^{∫_{τ}^{t} A(η) dη} = A(t) + A(t)∫_{τ}^{t} A(η) dη + ···
This equation and (5.37) are equal, so that (5.36) satisfies equation (5.1), if and only if
A(t) ∫_{τ}^{t} A(η) dη = ∫_{τ}^{t} A(η) dη A(t)
Differentiating with respect to τ and multiplying by -1 gives the requirement
A(t)A(τ) = A(τ)A(t)
Only A(t)A(τ) = G(t, τ) need be multiplied in the application of this test. Substitution of τ for t and t for τ will then indicate whether G(t, τ) = G(τ, t).
Example 5.4.
Given A(t) = ( 1  t ; 0  t ). Then
A(t)A(τ) = G(t, τ) = ( 1  τ + tτ ; 0  tτ )
and we see immediately that G(τ, t) = ( 1  t + τt ; 0  τt ) ≠ G(t, τ), so Theorem 5.5 does not apply.
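The commutivity test of Theorem 5.5 is easy to spot-check numerically. Both matrices below are illustrative assumptions, one failing and one passing the test:

```python
import numpy as np

def G(A, t, tau):
    """Form G(t, tau) = A(t) A(tau) for the commutivity test of Theorem 5.5."""
    return A(t) @ A(tau)

t, tau = 0.5, 2.0
A = lambda s: np.array([[1.0, s], [0.0, s]])      # does not commute across times
assert not np.allclose(G(A, t, tau), G(A, tau, t))
B = lambda s: np.array([[1.0, s], [0.0, 1.0]])    # I + s*N with constant N: commutes
assert np.allclose(G(B, t, tau), G(B, tau, t))
```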
Theorem 5.6: A piecewise time-invariant system, in which A(t) = A_i for t_i ≤ t < t_{i+1}, for i = 0, 1, 2, ..., where each A_i is a constant matrix, has the transition matrix
Φ(t, t0) = e^{A_i(t - t_i)} Φ(t_i, t0)     for t_i ≤ t < t_{i+1}
Proof: Use of the continuity property of dynamical systems, the transition property (equation (5.5)) and the transition matrix for time-invariant systems gives this proof.
Successive application of this theorem gives
Φ(t, t0) = e^{A0(t - t0)}     for t0 ≤ t < t1
Φ(t, t0) = e^{A1(t - t1)} e^{A0(t1 - t0)}     for t1 ≤ t < t2
etc.
Example 5.5.
Given the flow diagram of Fig. 5-2 with a switch S that switches from the lower position to the upper position at time t1. Then dx1/dt = x1 for t0 ≤ t < t1 and dx1/dt = 2x1 for t1 ≤ t. The solutions during each time interval are x1(t) = x10 e^{t-t0} for t0 ≤ t < t1 and x1(t) = x1(t1) e^{2(t-t1)} for t1 ≤ t, where x1(t1) = x10 e^{t1-t0} by continuity.
Fig. 5-2
It is common practice to approximate slowly varying coefficients by piecewise constants. This can be dangerous because errors tend to accumulate, but it often suggests means of system design that can be checked by simulation with the original system.
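Theorem 5.6 applied to the scalar switching system of Example 5.5 can be sketched as follows (the initial state x10 = 3 and the times are assumed for illustration):

```python
import numpy as np
from scipy.linalg import expm

# Piecewise-constant A(t): A0 on [t0, t1), A1 on [t1, t)
A0 = np.array([[1.0]])          # dx1/dt = x1   (before switching)
A1 = np.array([[2.0]])          # dx1/dt = 2x1  (after switching)
t0, t1, t = 0.0, 1.0, 1.5
Phi = expm(A1 * (t - t1)) @ expm(A0 * (t1 - t0))   # Theorem 5.6
x10 = 3.0
# matches the closed form x1(t) = x10 e^{t1-t0} e^{2(t-t1)} of Example 5.5
assert np.isclose(Phi[0, 0] * x10, x10 * np.exp(t1 - t0) * np.exp(2 * (t - t1)))
```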
Another special case is the nth order time-varying equation
t^n d^n y/dt^n + a1 t^{n-1} d^{n-1} y/dt^{n-1} + ··· + a_{n-1} t dy/dt + a_n y = 0
which can be solved by assuming y = t^λ. Then a scalar polynomial results for λ, analogous to the characteristic equation. If there are multiplicities of order m in the solution of this polynomial, y = t^λ (ln t)^i is a solution for i = 0, 1, 2, ..., m - 1.
A number of "classical" second order linear equations have closed form solutions in the sense that the properties of the solutions have been investigated.
Bessel's equation:
t^2 ÿ + (1 - 2α)t ẏ + [β^2 γ^2 t^{2γ} + (α^2 - p^2 γ^2)]y = 0     (5.38)
Associated Legendre equation:
(1 - t^2)ÿ - 2tẏ + [n(n+1) - m^2/(1 - t^2)]y = 0
Hermite equation:
ÿ - 2tẏ + 2αy = 0
or, with z = e^{-t^2/2} y,
z̈ + (1 - t^2 + 2α)z = 0
Laguerre equation:
tÿ + (1 - t)ẏ + αy = 0
with solution L_α(t), or
tÿ + (k + 1 - t)ẏ + (α - k)y = 0
with solution d^k L_α(t)/dt^k.
Hypergeometric equation:
t(1 - t)ÿ + [γ - (α + β + 1)t]ẏ - αβy = 0
Confluent hypergeometric equation:
tÿ + (γ - t)ẏ - αy = 0
Mathieu equation:
ÿ + (α + β cos t)y = 0     (5.39)
or, with τ = cos^2 (t/2),
4τ(1 - τ)ÿ + 2(1 - 2τ)ẏ + 4[α + β(2τ - 1)]y = 0
The solutions and details on their behavior are available in standard texts on engineering mathematics and mathematical physics.
Also available in the linear time-varying case are a number of methods to give Φ(t, τ) as an infinite series. Picard iteration, Peano-Baker integration, perturbation techniques,
etc., can be used, and sometimes give quite rapid convergence. However, even only three
or four terms in a series representation greatly complicate any sort of design procedure,
so discussion of these series techniques is left to standard texts. Use of a digital or analog
computer is recommended for those cases in which a closed form solution is not readily
found.
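A Picard (Peano-Baker) iteration can be sketched numerically; the A(t), the grid, and the trapezoidal quadrature below are all assumptions chosen for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Picard iteration: Phi_{k+1}(t) = I + int_tau^t A(s) Phi_k(s) ds
def A(t):
    return np.array([[0.0, 1.0],
                     [-1.0, -t]])

tau, t_end, m = 0.0, 1.0, 2001
ts = np.linspace(tau, t_end, m)
Phi = np.repeat(np.eye(2)[None, :, :], m, axis=0)     # Phi_0(t) = I on the grid
for _ in range(12):                                   # a few terms of the series
    integrand = np.array([A(s) @ P for s, P in zip(ts, Phi)])
    integral = np.concatenate(
        (np.zeros((1, 2, 2)),
         np.cumsum((integrand[1:] + integrand[:-1]) / 2, axis=0) * (ts[1] - ts[0])))
    Phi = np.eye(2) + integral

# Compare against direct numerical integration of (5.1)
sol = solve_ivp(lambda s, p: (A(s) @ p.reshape(2, 2)).ravel(),
                (tau, t_end), np.eye(2).ravel(), rtol=1e-10)
assert np.allclose(Phi[-1], sol.y[:, -1].reshape(2, 2), atol=1e-3)
```

With 12 iterations the truncated series already agrees with the integrated transition matrix to about three digits on this interval.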
5.5 PERIODICALLY-VARYING LINEAR DIFFERENTIAL SYSTEMS
Floquet theory is applicable to time-varying linear systems whose coefficients are constant or vary periodically. Floquet theory does not help find the solution, but instead gives insight into the general behavior of periodically-varying systems.
Theorem 5.7: (Floquet). Given the dynamical linear time-varying system dx/dt = A(t)x, where A(t) = A(t + ω). Then
Φ(t, τ) = P(t, τ) e^{R(t-τ)}
where P(t, τ) = P(t + ω, τ) and R is a constant matrix.
Proof: The transition matrix satisfies
∂Φ(t, τ)/∂t = A(t)Φ(t, τ)     with Φ(τ, τ) = I
Replacing t by t + ω and using A(t) = A(t + ω) gives
∂Φ(t + ω, τ)/∂t = A(t + ω)Φ(t + ω, τ) = A(t)Φ(t + ω, τ)     (5.40)
It was shown in Example 3.26 that the solutions to dx/dt = A(t)x for any initial condition form a generalized vector space. The column vectors φ_i(t, τ) of Φ(t, τ) span this vector space, and since det Φ(t, τ) ≠ 0, the φ_i(t, τ) are a basis. But equation (5.40) states that the φ_i(t + ω, τ) are solutions to dx/dt = A(t)x, so that
φ_i(t + ω, τ) = Σ_{j=1}^{n} c_ji φ_j(t, τ)     for i = 1, 2, ..., n
Rewriting this in matrix form, where C = {c_ji},
Φ(t + ω, τ) = Φ(t, τ)C     (5.41)
Then
C = Φ(τ, t)Φ(t + ω, τ)
Note that C^{-1} exists, since C is the product of two nonsingular matrices. Therefore by Problem 4.11 the logarithm of C exists and will be written in the form
C = e^{ωR}     (5.42)
Since P(t, τ) can be any matrix, it is merely a change of variables to write
Φ(t, τ) = P(t, τ) e^{R(t-τ)}     (5.43)
But from equations (5.43), (5.41) and (5.42),
P(t + ω, τ) = Φ(t + ω, τ)e^{-R(t+ω-τ)} = Φ(t, τ)e^{ωR}e^{-R(t+ω-τ)} = Φ(t, τ)e^{-R(t-τ)} = P(t, τ)
From (5.41)-(5.43), R = ω^{-1} ln [Φ(τ, t)Φ(t + ω, τ)] and P(t, τ) = Φ(t, τ)e^{-R(t-τ)}, so that to find R and P(t, τ) the solution Φ(t, τ) must already be known. It may be concluded that Floquet's theorem does not give the solution, but rather shows the form of the solution. The matrix P(t, τ) gives the periodic part of the solution, and e^{R(t-τ)} gives the envelope of the solution. Since e^{R(t-τ)} is the transition matrix of dz/dt = Rz, this is the constant coefficient equation for the envelope of x(t). If the system dz/dt = Rz has all poles in the left half plane, the original time-varying system x(t) is stable. If R has all eigenvalues in the left half plane except for some on the imaginary axis, the steady state of z(t) is periodic with the frequency of its imaginary eigenvalues. To have a periodic envelope in the sense that no element of x(t) behaves exponentially, all the eigenvalues of R must be on the imaginary axis. If any eigenvalues of R are in the right half plane, then z(t) and x(t) are unstable. In particular, if the coefficients of the A(t) matrix are continuous functions of some parameter α, the eigenvalues of R are also continuous functions of α, so that periodic solutions form the stability boundaries of the system.
Example 5.6.
The periodic solutions of the Mathieu equation (5.39) are Mathieu functions, which exist only for certain combinations of α and β. The values of α and β for which these periodic solutions exist are given by the curves in Fig. 5-3 below. These curves then form the boundary for regions of stability.
Fig. 5-3
Whether the regions are stable or unstable can be determined by considering the point β = 0 and α < 0 in region 1. This is known to be unstable, so the whole region 1 is unstable. Since the curves are stability boundaries, regions 2 and 6 are stable. Similarly, all the odd numbered regions are unstable and all the even numbered regions are stable. The line β = 0, α ≥ 0 represents a degenerate case, which agrees with physical intuition.
It is interesting to note from the example above that an originally unstable system might be stabilized by the introduction of a periodically-varying parameter, and vice versa.
Another use of Floquet theory is in simulation of Φ(t, τ). Only Φ(t, τ) for one period ω need be calculated numerically, and then Floquet's theorem can be used to generate the solution over the whole time span.
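Floquet's construction can be carried out numerically: integrate (5.1) over one period to get C, take R = (1/ω) ln C, and verify that P(t, τ) = Φ(t, τ)e^{-R(t-τ)} is periodic. The periodic A(t) below is an arbitrary assumption:

```python
import numpy as np
from scipy.linalg import expm, logm
from scipy.integrate import solve_ivp

omega, tau = 2 * np.pi, 0.0
def A(t):
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.2 * np.cos(t), -0.5]])

def Phi(t1, t0):
    sol = solve_ivp(lambda s, p: (A(s) @ p.reshape(2, 2)).ravel(),
                    (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

C = Phi(tau + omega, tau)
R = logm(C) / omega                      # C = e^{omega R}, equation (5.42)
assert np.allclose(expm(omega * R), C)
# P(t, tau) = Phi(t, tau) e^{-R(t-tau)} repeats with period omega
P = lambda t: Phi(t, tau) @ expm(-R * (t - tau))
t = 1.3
assert np.allclose(P(t), P(t + omega), atol=1e-6)
```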
5.6 SOLUTION OF THE LINEAR STATE EQUATIONS WITH INPUT
Knowledge of the transition matrix gives the solution to the linear state equation with input, even in time-varying systems.
Theorem 5.8: Given the linear differential system with input
dx/dt = A(t)x + B(t)u
y = C(t)x + D(t)u     (2.39)
with transition matrix Φ(t, τ) obeying ∂Φ(t, τ)/∂t = A(t)Φ(t, τ) [equation (5.1)]. Then
x(t) = Φ(t, t0)x(t0) + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ
y(t) = C(t)Φ(t, t0)x(t0) + ∫_{t0}^{t} C(t)Φ(t, τ)B(τ)u(τ) dτ + D(t)u(t)     (5.44)
The integral is the superposition integral, and in the time-invariant case it becomes a convolution integral.
Proof: Since the zero-input equation dx/dt = A(t)x has the solution x(t) = Φ(t, t0)x(t0), in accordance with the method of variation of parameters we change variables to k(t), where
x(t) = Φ(t, t0)k(t)     (5.45)
Substituting into equation (2.39),
dx/dt = (∂Φ/∂t)k + Φ dk/dt = A(t)Φk + B(t)u
Use of equation (5.1) and multiplication by Φ(t0, t) gives
dk/dt = Φ(t0, t)B(t)u(t)
Integrating from t0 to t,
k(t) = k(t0) + ∫_{t0}^{t} Φ(t0, τ)B(τ)u(τ) dτ     (5.46)
Since equation (5.45) evaluated at t = t0 gives x(t0) = k(t0), use of (5.45) in (5.46) yields
Φ(t0, t)x(t) = x(t0) + ∫_{t0}^{t} Φ(t0, τ)B(τ)u(τ) dτ
Multiplying by Φ(t, t0) completes the proof for x(t). Substituting into y(t) = C(t)x(t) + D(t)u(t) gives y(t).
In the constant coefficient case, use of equation (5.14) gives
x(t) = e^{A(t-t0)}x(t0) + ∫_{t0}^{t} e^{A(t-τ)}Bu(τ) dτ     (5.47)
and
y(t) = Ce^{A(t-t0)}x(t0) + ∫_{t0}^{t} Ce^{A(t-τ)}Bu(τ) dτ + Du(t)     (5.48)
This is the vector convolution integral.
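Equation (5.47) can be checked against direct simulation; the system and input below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
u = lambda s: np.sin(s)
t0, t, x0 = 0.0, 2.0, np.array([1.0, 0.0])

# x(t) = e^{A(t-t0)} x0 + int_t0^t e^{A(t-s)} B u(s) ds   (equation (5.47))
conv = np.array([quad(lambda s, i=i: (expm(A * (t - s)) @ B).ravel()[i] * u(s),
                      t0, t)[0] for i in range(2)])
x_t = expm(A * (t - t0)) @ x0 + conv

# cross-check by direct simulation of dx/dt = Ax + Bu
sol = solve_ivp(lambda s, x: A @ x + (B * u(s)).ravel(), (t0, t), x0, rtol=1e-10)
assert np.allclose(x_t, sol.y[:, -1], atol=1e-6)
```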
Theorem 5.9: Given the linear difference equation
x(k+1) = A(k)x(k) + B(k)u(k)
y(k) = C(k)x(k) + D(k)u(k)     (2.40)
with transition matrix Φ(k, m) obeying Φ(k+1, m) = A(k)Φ(k, m) [equation (5.3)]. Then
x(k) = Π_{i=m}^{k-1} A(i) x(m) + Σ_{j=m}^{k-2} [ Π_{i=j+1}^{k-1} A(i) ] B(j)u(j) + B(k-1)u(k-1)     (5.49)
where the order of multiplication starts with the largest integer, i.e. A(k-1)A(k-2)···.
Proof: Stepping equation (2.40) up one gives
x(m+2) = A(m+1)x(m+1) + B(m+1)u(m+1)
Substituting in equation (2.40) for x(m+1),
x(m+2) = A(m+1)A(m)x(m) + A(m+1)B(m)u(m) + B(m+1)u(m+1)
Repetitive stepping and substituting gives
x(k) = A(k-1)···A(m)x(m) + A(k-1)···A(m+1)B(m)u(m) + A(k-1)···A(m+2)B(m+1)u(m+1) + ··· + A(k-1)B(k-2)u(k-2) + B(k-1)u(k-1)
This is equation (5.49) with the sums and products written out.
5.7 TRANSITION MATRIX FOR TIME-VARYING DIFFERENCE EQUATIONS
Setting B = 0 in equation (5.49) gives the transition matrix for time-varying difference equations as
Φ(k, m) = Π_{i=m}^{k-1} A(i)     for k > m     (5.50)
For k = m, Φ(m, m) = I, and if A^{-1}(i) exists for all i, Φ(k, m) = Π A^{-1}(i) for k < m. Then equation (5.49) can be written as
x(k) = Φ(k, m)x(m) + Σ_{j=m}^{k-1} Φ(k, j+1)B(j)u(j)     (5.51)
This is very similar to the corresponding equation (5.44) for differential equations, except that the integral is replaced by a sum.
Often difference equations result from periodic sampling and holding of inputs to differential systems.
Fig. 5-4.  A hold element applies u(t_k) between samples to the continuous system dx/dt = A(t)x + B(t)u, y = C(t)x + D(t)u, and the output y(t) is sampled to give y(t_k).
In Fig. 5-4 the output of the hold element is u(t) = u(t_k) for t_k ≤ t < t_{k+1}, where t_{k+1} - t_k = T for all k. Use of (5.44) at time t = t_{k+1} with t0 = t_k gives
x(t_{k+1}) = Φ(t_{k+1}, t_k)x(t_k) + ∫_{t_k}^{t_{k+1}} Φ(t_{k+1}, τ)B(τ) dτ u(k)     (5.52)
Comparison with the difference equations (2.40) results in
A_s(k) = Φ(t_{k+1}, t_k)      B_s(k) = ∫_{t_k}^{t_{k+1}} Φ(t_{k+1}, τ)B(τ) dτ
C_s(k) = C(t_k)               D_s(k) = D(t_k)
where the subscript s refers to the difference equations of the sampled system. For time-invariant differential systems, A_s = e^{AT}, B_s = ∫_{0}^{T} e^{A(T-τ)}B dτ, C_s = C and D_s = D. Since in this case A_s is a matrix exponential, it is nonsingular no matter what A is (see the comment after Problem 4.11).
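For a time-invariant system the sampled matrices A_s and B_s can be computed as above; the sketch below (with an assumed A, B and sampling period T) also cross-checks B_s with the block-matrix exponential identity:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

# Zero-order-hold sampling: As = e^{AT}, Bs = int_0^T e^{A(T-s)} B ds
A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
T = 0.5
As = expm(A * T)
Bs = np.array([[quad(lambda s, i=i: (expm(A * (T - s)) @ B)[i, 0], 0, T)[0]]
               for i in range(2)])

# cross-check: expm([[A, B], [0, 0]] T) = [[As, Bs], [0, 1]]
M = np.zeros((3, 3))
M[:2, :2], M[:2, 2:] = A, B
E = expm(M * T)
assert np.allclose(E[:2, :2], As) and np.allclose(E[:2, 2:], Bs)
# As is nonsingular no matter what A is: det e^{AT} = e^{tr(A) T} > 0
assert np.isclose(np.linalg.det(As), np.exp(np.trace(A) * T))
```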
Although equation (5.50) is always a representation for Φ(k, m), its behavior is not usually displayed. Techniques corresponding to the differential case can be used to show this behavior. For instance, Floquet's theorem becomes Φ(k, m) = P(k, m)R^{k-m}, where P(k, m) = P(k + ω, m) if A(k) = A(k + ω). Also, the nth order time-varying difference equation
[(k+n)!/k!] y(k+n) + a1 [(k+n-1)!/k!] y(k+n-1) + ··· + a_{n-1}(k+1)y(k+1) + a_n y(k) = 0
has solutions of the form λ^k/k!. Piecewise time-invariant, classical second order linear, and series solutions also have a corresponding discrete time form.
5.8 IMPULSE RESPONSE MATRICES
With zero initial condition x(t0) = 0, from (5.44) the output y(t) is
y(t) = ∫_{t0}^{t} C(t)Φ(t, τ)B(τ)u(τ) dτ + D(t)u(t)     (5.53)
This suggests that y(t) can be written as a matrix generalization of the superposition integral,
y(t) = ∫_{t0}^{t} H(t, τ)u(τ) dτ     (5.54)
where H(t, τ) is the impulse response matrix, i.e. h_ij(t, τ) is the response of the ith output at time t due to an impulse at the jth input at time τ. Comparison of equations (5.53) and (5.54) gives
H(t, τ) = C(t)Φ(t, τ)B(τ) + D(t)δ(t - τ)   for t ≥ τ,   H(t, τ) = 0   for t < τ     (5.55)
In the time-invariant case the Laplace transformation of H(t, 0) gives the transfer function matrix
ℒ{H(t, 0)} = C(sI - A)^{-1}B + D     (5.56)
Similarly, for discrete-time systems y(k) can be expressed as
y(k) = Σ_{m=-∞}^{k} H(k, m)u(m)     (5.57)
where
H(k, m) = C(k)Φ(k, m+1)B(m)   for k > m
H(k, m) = D(k)                for k = m     (5.58)
H(k, m) = 0                   for k < m
Also, the z transfer function matrix in the time-invariant case is
𝒵{H(k, 0)} = C(zI - A)^{-1}B + D     (5.59)
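Equation (5.56) evaluated at a test frequency, for an assumed single input-single output example:

```python
import numpy as np

# H(s) = C (sI - A)^-1 B + D
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
s = 1.0j * 2.0
H = C @ np.linalg.inv(s * np.eye(2) - A) @ B + D
# for this companion-form example, H(s) = 1/(s^2 + 3s + 2)
assert np.allclose(H[0, 0], 1.0 / (s**2 + 3 * s + 2))
```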
5.9 THE ADJOINT SYSTEM
The concept of the adjoint occurs quite frequently, especially in optimization problems.
Definition 5.2: The adjoint, denoted L_a, of a linear operator L is defined by the relation
(p, Lx) = (L_a p, x)     for all x and p     (5.60)
We are concerned with the system dx/dt = A(t)x. Defining L = A(t) - I d/dt, this becomes Lx = 0. Using the inner product (p, x) = ∫_{t0}^{t1} p†x dt, the adjoint system is found from equation (5.60) using integration by parts:
(p, Lx) = ∫_{t0}^{t1} p†A(t)x dt - ∫_{t0}^{t1} p† (dx/dt) dt
        = ∫_{t0}^{t1} [p†A(t) + dp†/dt] x dt + p†(t0)x(t0) - p†(t1)x(t1)
For the case p(t0) = 0 = p(t1), we find L_a = A†(t) + I d/dt.
Since Lx = 0, then (p, Lx) = 0 for all x and p. Using (5.60) it can be concluded that L_a p = 0, so that the adjoint system is defined by the relation
dp/dt = -A†(t)p     (5.61)
Denote the transition matrix of this adjoint system as Ψ(t, t0), i.e.
∂Ψ(t, t0)/∂t = -A†(t)Ψ(t, t0)     with Ψ(t0, t0) = I     (5.62)
Theorem 5.10: Given the system dx/dt = A(t)x + B(t)u and its adjoint system dp/dt = -A†(t)p. Then
p†(t1)x(t1) = p†(t0)x(t0) + ∫_{t0}^{t1} p†(t)B(t)u(t) dt     (5.63)
and
Ψ†(t, t0) = Φ^{-1}(t, t0)     (5.64)
The column vectors ψ_i(t, t0) of Ψ(t, t0) are the reciprocal basis to the column vectors φ_i(t, t0) of Φ(t, t0). Also, if u(t) = 0, then p†(t)x(t) = scalar constant for any t.
Proof: Differentiate p†(t)x(t) to obtain
d(p†x)/dt = (dp†/dt)x + p† dx/dt
Using the system equation dx/dt = A(t)x + B(t)u and equation (5.61) gives
d(p†x)/dt = p†Bu
Integration from t0 to t1 then yields (5.63). Furthermore, if u(t) = 0,
p†(t0)x(t0) = p†(t)x(t)
From the transition relations, x(t) = Φ(t, t0)x(t0) and p(t) = Ψ(t, t0)p(t0), so that
p†(t0)Ix(t0) = p†(t0)Ψ†(t, t0)Φ(t, t0)x(t0)
for any p(t0) and x(t0). Therefore (5.64) must hold.
The adjoint system transition matrix gives another way to express the forced solution of (5.44):
x(t) = Φ(t, t0)x(t0) + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ
The variable of integration τ is the second argument of Φ(t, τ), which sometimes poses simulation difficulties. Since Φ(t0, τ) = Φ^{-1}(τ, t0) = Ψ†(τ, t0), this becomes
x(t) = Φ(t, t0) [ x(t0) + ∫_{t0}^{t} Ψ†(τ, t0)B(τ)u(τ) dτ ]     (5.65)
in which the variable of integration τ is the first argument of Ψ(τ, t0).
The adjoint often can be used conveniently when a final value is given and the system motion backwards in time must be found.
Example 5.7.
Given dx/dt = ( t  2t / 3t  2t )x. Use the adjoint system to find the set of states (x1(1), x2(1)) that permit the system to pass through the point x1(2) = 1.
The adjoint system is dp/dt = -t( 1  3 / 2  2 )p. This has a transition matrix
    Ψ(t, τ) = 0.2e^{(t²-τ²)/2}( 3  -3 / -2  2 ) + 0.2e^{2(τ²-t²)}( 2  3 / 2  3 )
Since p†(2)x(2) = p†(1)x(1), if we choose p†(2) = (1  0), then (1  0)x(2) = x1(2) = 1 = p†(1)x(1) = p1(1)x1(1) + p2(1)x2(1). But p(1) = Ψ(1, 2)p(2), so that
    p†(1) = 0.2(3e^{-1.5} + 2e^{6}    2e^{6} - 2e^{-1.5})
The set of states x(1) that gives x1(2) = 1 is determined by
    1 = (0.6e^{-1.5} + 0.4e^{6})x1(1) + (0.4e^{6} - 0.4e^{-1.5})x2(1)
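The result of Example 5.7 can be spot-checked numerically (a sketch, not from the text; numpy assumed): pick one state on the line above and integrate dx/dt = t( 1 2 / 3 2 )x forward from t = 1 to t = 2, confirming x1(2) = 1.

```python
import numpy as np

C = np.array([[1.0, 2.0], [3.0, 2.0]])        # dx/dt = t * C x
c1 = 0.6*np.exp(-1.5) + 0.4*np.exp(6.0)
c2 = 0.4*np.exp(6.0) - 0.4*np.exp(-1.5)
x = np.array([1.0/c1, 0.0])                   # lies on the line c1*x1(1) + c2*x2(1) = 1

# RK4 from t = 1 to t = 2
t, h = 1.0, 0.00025
f = lambda s, y: s * (C @ y)
for _ in range(4000):
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2*k1)
    k3 = f(t + h/2, x + h/2*k2)
    k4 = f(t + h, x + h*k3)
    x = x + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
    t += h
# x[0] should come out (numerically) equal to 1
```

Any other point on the line, e.g. x2(1) free and x1(1) = (1 - c2·x2(1))/c1, passes the same check.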
Solved Problems
5.1. Given Φ(t, t0), find A(t).
Using (5.2) in (5.1) at time t0 = t, A(t) = ∂Φ(t, t0)/∂t evaluated at t0 = t. This is a quick check on any solution.
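This check is easy to mechanize. The sketch below (numpy assumed; the series-based matrix exponential is an illustration, not the text's method) differentiates Φ(t, t0) = e^{A(t-t0)} in its first argument by central differences, using the matrix of Problem 5.2 below, and recovers A:

```python
import numpy as np

def expm_series(M, terms=30):
    # Matrix exponential via its defining power series (adequate for small ||M||)
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[-1.0, 0.0, 0.0], [0.0, -4.0, 4.0], [0.0, -1.0, 0.0]])
t = t0 = 0.7
h = 1e-5
# d Phi(t, t0)/dt evaluated at t0 = t, by central difference, should reproduce A
dPhi = (expm_series(A*(t + h - t0)) - expm_series(A*(t - h - t0))) / (2*h)
err = np.max(np.abs(dPhi - A))
```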
5.2. Find the transition matrix for the system
    dx/dt = ( -1  0  0 / 0  -4  4 / 0  -1  0 )x
by (a) series method, (b) eigenvalue method, (c) Cayley-Hamilton method, (d) resolvent matrix method. In the resolvent matrix method, find (sI - A)^{-1} by (1) substitution and elimination, (2) Cramer's rule, (3) flow diagram, (4) Leverrier's algorithm.
(a) Series method. From (5.16), e^{At} = I + At + A²t²/2! + ···. Substituting for A,
    e^{At} = ( 1  0  0 / 0  1  0 / 0  0  1 ) + t( -1  0  0 / 0  -4  4 / 0  -1  0 ) + (t²/2)( 1  0  0 / 0  12  -16 / 0  4  -4 ) + ···
Recognizing the series expressions for e^{-t} and e^{-2t} in each element,
    e^{At} = ( e^{-t}  0  0 / 0  (1-2t)e^{-2t}  4te^{-2t} / 0  -te^{-2t}  (1+2t)e^{-2t} )
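A short sketch (numpy assumed; not part of the text) comparing the truncated series with the closed form recognized above:

```python
import numpy as np

A = np.array([[-1.0, 0.0, 0.0], [0.0, -4.0, 4.0], [0.0, -1.0, 0.0]])
t = 0.5

# Truncated defining series I + At + (At)^2/2! + ...
S, term = np.eye(3), np.eye(3)
for k in range(1, 40):
    term = term @ (A*t) / k
    S = S + term

# Closed form recognized from the series
e1, e2 = np.exp(-t), np.exp(-2*t)
closed = np.array([[e1, 0.0, 0.0],
                   [0.0, (1 - 2*t)*e2, 4*t*e2],
                   [0.0, -t*e2, (1 + 2*t)*e2]])
err = np.max(np.abs(S - closed))
```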
(b) The eigenvalues of A are -1, -2 and -2, with corresponding eigenvectors (1 0 0)^T and (0 2 1)^T. The generalized eigenvector corresponding to -2 is (0 1 1)^T, so that
    T = ( 1  0  0 / 0  2  1 / 0  1  1 ),    T^{-1} = ( 1  0  0 / 0  1  -1 / 0  -1  2 )
Using equation (5.17),
    e^{At} = T e^{Jt} T^{-1} = ( 1  0  0 / 0  2  1 / 0  1  1 )( e^{-t}  0  0 / 0  e^{-2t}  te^{-2t} / 0  0  e^{-2t} )( 1  0  0 / 0  1  -1 / 0  -1  2 )
Multiplying out the matrices gives the answer obtained in (a).
(c) Again, the eigenvalues of A are calculated to be -1, -2 and -2. To find the γi(t) in equation (5.18),
    e^{Jt} = γ0 I + γ1 J + γ2 J²
which gives the equations
    e^{-t} = γ0 - γ1 + γ2
    e^{-2t} = γ0 - 2γ1 + 4γ2
    te^{-2t} = γ1 - 4γ2
Solving for the γi,
    γ0 = 4e^{-t} - 3e^{-2t} - 2te^{-2t}
    γ1 = 4e^{-t} - 4e^{-2t} - 3te^{-2t}
    γ2 = e^{-t} - e^{-2t} - te^{-2t}
Using (5.18) then gives
    e^{At} = (4e^{-t} - 3e^{-2t} - 2te^{-2t})( 1 0 0 / 0 1 0 / 0 0 1 ) + (4e^{-t} - 4e^{-2t} - 3te^{-2t})( -1 0 0 / 0 -4 4 / 0 -1 0 ) + (e^{-t} - e^{-2t} - te^{-2t})( 1 0 0 / 0 12 -16 / 0 4 -4 )
Summing these matrices again gives the answer obtained in (a).
(d1) Taking the Laplace transform of the original equation,
    sℒ{x1} - x10 = -ℒ{x1}
    sℒ{x2} - x20 = -4ℒ{x2} + 4ℒ{x3}
    sℒ{x3} - x30 = -ℒ{x2}
Solving these equations by substitution and elimination,
    ℒ{x1} = x10/(s+1)
    ℒ{x2} = [1/(s+2) - 2/(s+2)²]x20 + [4/(s+2)²]x30
    ℒ{x3} = -[1/(s+2)²]x20 + [1/(s+2) + 2/(s+2)²]x30
Putting this in matrix form ℒ{x} = R(s)x0,
    R(s) = ( 1/(s+1)  0  0 / 0  s/(s+2)²  4/(s+2)² / 0  -1/(s+2)²  (s+4)/(s+2)² )
Inverse Laplace transformation gives e^{At} as found in (a).
(d2) From (5.22), R(s) = (sI - A)^{-1}, where
    sI - A = ( s+1  0  0 / 0  s+4  -4 / 0  1  s )
Using Cramer's rule,
    R(s) = 1/[(s+1)(s+2)²] ( (s+2)²  0  0 / 0  s(s+1)  4(s+1) / 0  -(s+1)  (s+1)(s+4) )
Performing a partial fraction expansion,
    R(s) = 1/(s+1) ( 1 0 0 / 0 0 0 / 0 0 0 ) + 1/(s+2) ( 0 0 0 / 0 1 0 / 0 0 1 ) + 1/(s+2)² ( 0 0 0 / 0 -2 4 / 0 -1 2 )
Addition will give R(s) as in (d1).
(d3) The flow diagram of the Laplace transformed system is shown in Fig. 5-5 (an integrator 1/s for each state, with the feedback gains of A and the initial conditions x10, x20, x30 injected at the integrator inputs).
Fig. 5-5
For x10 = 1, ℒ{x1} = 1/(s+1) and ℒ{x2} = ℒ{x3} = 0.
For x20 = 1, ℒ{x1} = 0, ℒ{x2} = s/(s+2)² and ℒ{x3} = -1/(s+2)².
For x30 = 1, ℒ{x1} = 0, ℒ{x2} = 4/(s+2)² and ℒ{x3} = (s+4)/(s+2)².
Therefore,
    R(s) = ( 1/(s+1)  0  0 / 0  s/(s+2)²  4/(s+2)² / 0  -1/(s+2)²  (s+4)/(s+2)² )
Again a partial fraction expansion can be performed to obtain the previous result.
(d4) Using Theorem 5.3 for Leverrier's algorithm,
    F1 = I,                         θ1 = -tr(AF1) = 5
    F2 = θ1 I + AF1 = 5I + A,       θ2 = -tr(AF2)/2 = 8
    F3 = θ2 I + AF2 = 8I + 5A + A², θ3 = -tr(AF3)/3 = 4
Using equation (5.25),
    R(s) = [s²I + s(5I + A) + (8I + 5A + A²)] / (s³ + 5s² + 8s + 4)
A partial fraction expansion of this gives the previous result.
5.3. Using (a) the eigenvalue method and then (b) the resolvent matrix, find the transition matrix for
    x(k+1) = ( -1  1 / 0  -2 )x(k)
(a) The eigenvalues are -1 and -2, with eigenvectors (1 0)^T and (1 -1)^T respectively. The reciprocal basis is (1 1) and (0 -1). Using the spectral representation equation (5.31),
    A^k = (-1)^k ( 1 / 0 )(1  1) + (-2)^k ( 1 / -1 )(0  -1)
(b) From equation (5.35),
    R(z) = ( z+1  -1 / 0  z+2 )^{-1} = 1/[(z+1)(z+2)] ( z+2  1 / 0  z+1 ) = ( 1/(z+1)  1/(z+1) - 1/(z+2) / 0  1/(z+2) )
so that
    A^k = ( (-1)^k  (-1)^k - (-2)^k / 0  (-2)^k )
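The spectral representation in (a) can be checked directly (a numerical sketch, numpy assumed):

```python
import numpy as np

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
k = 7

# Spectral representation from part (a): outer products of eigenvectors
# with their reciprocal basis rows, weighted by eigenvalue powers
x1, r1 = np.array([[1.0], [0.0]]), np.array([[1.0, 1.0]])
x2, r2 = np.array([[1.0], [-1.0]]), np.array([[0.0, -1.0]])
Ak_spectral = (-1.0)**k * (x1 @ r1) + (-2.0)**k * (x2 @ r2)

Ak_direct = np.linalg.matrix_power(A, k)
err = np.max(np.abs(Ak_spectral - Ak_direct))
```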
5.4. Prove Leverrier's algorithm,
    (sI - A)^{-1} = (s^{n-1}F1 + s^{n-2}F2 + ··· + sF_{n-1} + Fn) / (s^n + θ1 s^{n-1} + ··· + θ_{n-1}s + θn)    (5.23)
and the Cayley-Hamilton theorem (Theorem 4.15).
Let det(sI - A) = φ(s) = s^n + θ1 s^{n-1} + ··· + θ_{n-1}s + θn. Use of Cramer's rule gives φ(s)(sI - A)^{-1} = F(s), where F(s) is the adjugate matrix, i.e. the matrix of signed cofactors transposed, of (sI - A).
An intermediate result must be proven before proceeding, namely that tr F(s) = dφ/ds. In the proof of Theorem 3.21, it was shown the cofactor cij of a general matrix B can be represented as ci1 = ei ∧ b2 ∧ ··· ∧ bn, and similarly it can be shown cii = b1 ∧ ··· ∧ b_{i-1} ∧ ei ∧ b_{i+1} ∧ ··· ∧ bn, so that letting B = sI - A and using tr F(s) = c11(s) + c22(s) + ··· + cnn(s) gives
    tr F(s) = e1 ∧ (se2 - a2) ∧ ··· ∧ (sen - an) + (se1 - a1) ∧ e2 ∧ ··· ∧ (sen - an) + ··· + (se1 - a1) ∧ ··· ∧ (se_{n-1} - a_{n-1}) ∧ en
But
    φ(s) = det(sI - A) = (se1 - a1) ∧ (se2 - a2) ∧ ··· ∧ (sen - an)
and
    dφ/ds = e1 ∧ (se2 - a2) ∧ ··· ∧ (sen - an) + ··· + (se1 - a1) ∧ ··· ∧ en = tr F(s)
and so the intermediate result is established.
Substituting the definitions for φ(s) and F(s) into the intermediate result,
    tr F(s) = tr(s^{n-1}F1 + s^{n-2}F2 + ··· + sF_{n-1} + Fn) = s^{n-1} tr F1 + s^{n-2} tr F2 + ··· + s tr F_{n-1} + tr Fn
    dφ/ds = n s^{n-1} + (n-1)θ1 s^{n-2} + ··· + θ_{n-1}
Equating like powers of s,
    tr F_{k+1} = (n - k)θk    (5.66)
for k = 1, 2, ..., n-1, and tr F1 = n. Rearranging (5.23) as
    (s^n + θ1 s^{n-1} + ··· + θn)I = (sI - A)(s^{n-1}F1 + s^{n-2}F2 + ··· + Fn)
and equating like powers of s gives I = F1, θn I = -AFn, and
    θk I = -AFk + F_{k+1}    (5.67)
for k = 1, 2, ..., n-1. These are half of the relationships required. The other half are obtained by taking their trace,
    nθk = -tr AFk + tr F_{k+1}
and substituting (5.66) to get
    kθk = -tr AFk
which are the other half of the relationships needed for the proof.
To prove the Cayley-Hamilton theorem, successively substitute for F_{k+1}, i.e.,
    F1 = I
    F2 = θ1 I + AF1 = θ1 I + A
    F3 = θ2 I + AF2 = θ2 I + θ1 A + A²
    ⋮
    Fn = θ_{n-1}I + θ_{n-2}A + ··· + θ1 A^{n-2} + A^{n-1}
Using the last relation θn I = -AFn then gives
    0 = θn I + θ_{n-1}A + ··· + θ1 A^{n-1} + A^n = φ(A)
which is the Cayley-Hamilton theorem.
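The recursion just proved — tr F1 = n, kθk = -tr(AFk), F(k+1) = θk I + AFk — is easily mechanized. The sketch below (numpy assumed; not part of the text) runs it on the matrix of Problem 5.2 and also exhibits the Cayley-Hamilton theorem, since the final iterate equals φ(A) = 0:

```python
import numpy as np

def leverrier(A):
    # Leverrier (Faddeev) recursion: F1 = I, k*theta_k = -tr(A Fk),
    # F(k+1) = theta_k I + A Fk; the last F computed equals phi(A).
    n = len(A)
    F = np.eye(n)
    thetas = []
    for k in range(1, n + 1):
        theta = -np.trace(A @ F) / k
        thetas.append(theta)
        F = theta*np.eye(n) + A @ F
    return thetas, F

A = np.array([[-1.0, 0.0, 0.0], [0.0, -4.0, 4.0], [0.0, -1.0, 0.0]])
thetas, phi_of_A = leverrier(A)   # expect thetas = [5, 8, 4], phi_of_A = 0
```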
5.5. Given the time-varying system
    dx/dt = ( α  e^{-t} / -e^{-t}  α )x
Find the transition matrix and verify the transition properties (equations (5.5)-(5.8)).
Note that A(t) commutes with A(τ), i.e.,
    A(t)A(τ) = ( α² - e^{-(t+τ)}   α(e^{-t} + e^{-τ}) / -α(e^{-t} + e^{-τ})   α² - e^{-(t+τ)} ) = A(τ)A(t)
It can be shown, similar to Problem 4.4, that two n×n matrices B and C with n independent eigenvectors can be simultaneously diagonalized by a nonsingular matrix T if and only if B commutes with C. Identifying A(t) and A(τ) for fixed t and τ with B and C, then A(t) = T(t)Λ(t)T^{-1}(t) and A(τ) = T(τ)Λ(τ)T^{-1}(τ) means that T(t) = T(τ) for all t and τ. This implies the matrix of eigenvectors T is constant, so dT/dt = 0. Referring to Example 5.1, when dT(t)/dt = 0 then a general solution exists for this special case of A(t)A(τ) = A(τ)A(t).
For the given time-varying system, A(t) has the eigenvalues λ1 = α + je^{-t} and λ2 = α - je^{-t}. Since A(t) commutes with A(τ), the eigenvectors are constant, (j  -1)^T and (j  1)^T. Consequently
    T = ( j  j / -1  1 ),    Λ(t) = ( α + je^{-t}  0 / 0  α - je^{-t} ),    T^{-1} = (1/2j)( 1  -j / 1  j )
From Theorem 5.5,
    Φ(t, τ) = T exp( ∫_{τ}^{t} Λ(ξ) dξ ) T^{-1}
Substituting the numerical values, integrating and multiplying out gives
    Φ(t, τ) = e^{α(t-τ)} ( cos(e^{-τ} - e^{-t})   sin(e^{-τ} - e^{-t}) / -sin(e^{-τ} - e^{-t})   cos(e^{-τ} - e^{-t}) )
To check this, use Problem 5.1:
    ∂Φ/∂t = αe^{α(t-τ)} ( cos(e^{-τ} - e^{-t})   sin(e^{-τ} - e^{-t}) / -sin(e^{-τ} - e^{-t})   cos(e^{-τ} - e^{-t}) ) + e^{-t}e^{α(t-τ)} ( -sin(e^{-τ} - e^{-t})   cos(e^{-τ} - e^{-t}) / -cos(e^{-τ} - e^{-t})   -sin(e^{-τ} - e^{-t}) )
Setting τ = t in this gives A(t).
To verify Φ(t2, t0) = Φ(t2, t1)Φ(t1, t0), note
    cos(e^{-t0} - e^{-t2}) = cos(e^{-t0} - e^{-t1} + e^{-t1} - e^{-t2}) = cos(e^{-t0} - e^{-t1})cos(e^{-t1} - e^{-t2}) - sin(e^{-t0} - e^{-t1})sin(e^{-t1} - e^{-t2})
and
    sin(e^{-t0} - e^{-t2}) = sin(e^{-t0} - e^{-t1})cos(e^{-t1} - e^{-t2}) + cos(e^{-t0} - e^{-t1})sin(e^{-t1} - e^{-t2})
so that
    e^{α(t2-t0)} ( cos(e^{-t0} - e^{-t2})  sin(e^{-t0} - e^{-t2}) / -sin(e^{-t0} - e^{-t2})  cos(e^{-t0} - e^{-t2}) )
        = e^{α(t2-t1)} ( cos(e^{-t1} - e^{-t2})  sin(e^{-t1} - e^{-t2}) / -sin(e^{-t1} - e^{-t2})  cos(e^{-t1} - e^{-t2}) ) e^{α(t1-t0)} ( cos(e^{-t0} - e^{-t1})  sin(e^{-t0} - e^{-t1}) / -sin(e^{-t0} - e^{-t1})  cos(e^{-t0} - e^{-t1}) )
To verify Φ^{-1}(t1, t0) = Φ(t0, t1), calculate Φ^{-1}(t1, t0) by Cramer's rule:
    Φ^{-1}(t1, t0) = e^{α(t0-t1)} / [cos²(e^{-t0} - e^{-t1}) + sin²(e^{-t0} - e^{-t1})] ( cos(e^{-t0} - e^{-t1})  -sin(e^{-t0} - e^{-t1}) / sin(e^{-t0} - e^{-t1})  cos(e^{-t0} - e^{-t1}) )
Since cos²(e^{-t0} - e^{-t1}) + sin²(e^{-t0} - e^{-t1}) = 1 and sin(e^{-t0} - e^{-t1}) = -sin(e^{-t1} - e^{-t0}), this equals Φ(t0, t1).
To verify Φ(t1, t0) = Θ(t1)Θ^{-1}(t0), set
    Θ(t) = e^{αt} ( cos(1 - e^{-t})  sin(1 - e^{-t}) / -sin(1 - e^{-t})  cos(1 - e^{-t}) )
Then
    Θ^{-1}(t) = e^{-αt} ( cos(1 - e^{-t})  -sin(1 - e^{-t}) / sin(1 - e^{-t})  cos(1 - e^{-t}) )
Since we have
    cos(e^{-t0} - e^{-t1}) = cos(e^{-t0} - 1)cos(1 - e^{-t1}) - sin(e^{-t0} - 1)sin(1 - e^{-t1})
and a similar formula for the sine, multiplication of Θ(t1) and Θ^{-1}(t0) will verify this.
Finally, det Φ(t, τ) = e^{2α(t-τ)} and tr A(t) = 2α, so that integration shows that the determinant property holds.
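A numerical spot-check of this closed form (a sketch, numpy assumed; α = 0.3 is an arbitrary value) integrates dM/dt = A(t)M and compares with Φ(2, 1):

```python
import numpy as np

a = 0.3                                   # the constant alpha; arbitrary test value
A = lambda t: np.array([[a, np.exp(-t)], [-np.exp(-t), a]])

def Phi(t, tau):
    # Closed form from Problem 5.5
    th = np.exp(-tau) - np.exp(-t)
    return np.exp(a*(t - tau)) * np.array([[np.cos(th), np.sin(th)],
                                           [-np.sin(th), np.cos(th)]])

# RK4 on dM/dt = A(t) M from M(1) = I up to t = 2
M, t, h = np.eye(2), 1.0, 0.001
for _ in range(1000):
    k1 = A(t) @ M
    k2 = A(t + h/2) @ (M + h/2*k1)
    k3 = A(t + h/2) @ (M + h/2*k2)
    k4 = A(t + h) @ (M + h*k3)
    M = M + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
    t += h
err = np.max(np.abs(M - Phi(2.0, 1.0)))
```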
5.6. Find the transition matrix for the system
    dx/dt = ( 0  1 / -1 - (α² + α)/t   -2α - 1/t )x
Writing out the matrix equations, we find dx1/dt = x2 and so
    d²x1/dt² + (2α + 1/t)dx1/dt + [1 + (α² + α)/t]x1 = 0
Multiplying by the integrating factor te^{αt}, which was found by much trial and error with the equation, gives
    t(e^{αt}d²x1/dt² + 2αe^{αt}dx1/dt + α²e^{αt}x1) + (e^{αt}dx1/dt + αe^{αt}x1) + te^{αt}x1 = 0
which can be rewritten as
    t d²(e^{αt}x1)/dt² + d(e^{αt}x1)/dt + te^{αt}x1 = 0
This has the same form as Bessel's equation (5.38) of order zero, so the solution is
    x1(t) = e^{-αt}[c1 J0(t) + c2 Y0(t)]
    x2(t) = dx1/dt = -αx1(t) + e^{-αt}[-c1 J1(t) + c2 dY0/dt]
To solve for the constants c1 and c2, write x(t) = F G(t)c, so that x(t) = F G(t)G^{-1}(τ)F^{-1}x(τ), where
    F = ( 1  0 / -α  1 ),    G(t) = e^{-αt} ( J0(t)  Y0(t) / -J1(t)  dY0/dt )
and so Φ(t, τ) = F G(t)G^{-1}(τ)F^{-1}. This is true only for t and τ > 0 or t and τ < 0, because at the point t = 0 the elements of the original A(t) matrix blow up. This accounts for Y0(0) being infinite.
Admittedly this problem was contrived, and in practice a man-made system would only accidentally have this form. However, Bessel's equation often occurs in nature, and knowledge that as t → ∞, J0(t) ~ √(2/πt) cos(t - π/4) and Y0(t) ~ √(2/πt) sin(t - π/4) gives great insight into the behavior of the system.
5.7. Given the time-varying difference equation x(n+1) = A(n)x(n), where A(n) = A0 if n is even and A(n) = A1 if n is odd. Find the fundamental matrix, analyze by Floquet theory, and give the conditions for stability if A0 and A1 are nonsingular.
From equation (5.50),
    Φ(k, m) = A1A0···A1A0   if k is even and m is even
    Φ(k, m) = A0A1···A1A0   if k is odd and m is even
    Φ(k, m) = A0A1···A0A1   if k is odd and m is odd
    Φ(k, m) = A1A0···A0A1   if k is even and m is odd
For m even, Φ(k, m) = P(k, m)(A1A0)^{(k-m)/2}, where P(k, m) = I if k is even and P(k, m) = (A0A1^{-1})^{1/2} if k is odd. For m odd, Φ(k, m) = P(k, m)(A0A1)^{(k-m)/2}, where P(k, m) = I if k is odd and P(k, m) = (A1A0^{-1})^{1/2} if k is even. For instability, the eigenvalues of R² = A1A0 must be outside the unit circle. Since the eigenvalues of B² are the squares of the eigenvalues of B, it is enough to find the eigenvalues of A1A0. This agrees with the stability analysis of the equation x(n+2) = A1A0x(n).
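The alternating-product fundamental matrix and the role of the eigenvalues of A1A0 can be illustrated numerically (a sketch, numpy assumed; A0 and A1 are arbitrary nonsingular choices):

```python
import numpy as np

A0 = np.array([[0.0, 1.0], [-0.5, 0.3]])    # arbitrary nonsingular test matrices
A1 = np.array([[0.2, -1.0], [1.0, 0.1]])

k = 8
Phi = np.eye(2)
for n in range(k):                          # A(n) = A0 for n even, A1 for n odd
    Phi = (A0 if n % 2 == 0 else A1) @ Phi

# For k and m = 0 both even: Phi(k, 0) = (A1 A0)^(k/2)
err = np.max(np.abs(Phi - np.linalg.matrix_power(A1 @ A0, k // 2)))
rho = max(abs(np.linalg.eigvals(A1 @ A0)))  # spectral radius governs stability
```

Stability of the sampled chain requires rho < 1; for these arbitrary matrices rho happens to exceed 1, so the example system is unstable.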
5.8. Find the impulse response of the system
    d²y/dt² + (1 - 2α)dy/dt + (α² - α + e^{-2t})y = u
Choose x1 = y and x2 = e^{t}(dx1/dt - αx1) to find a state representation in which A(t) commutes with A(τ). Then in matrix form the system is
    dx/dt = ( α  e^{-t} / -e^{-t}  α )x + ( 0 / e^{t} )u,    y = (1  0)x
From equation (5.55) and Φ(t, τ) obtained from Problem 5.5,
    H(t, τ) = e^{τ + α(t-τ)} sin(e^{-τ} - e^{-t})
This is the response y(t) to an input u(t) = δ(t - τ).
5.9. In the system of Problem 5.8, let u(t) = e^{(α-2)t}, y(t0) = y0 and (dy/dt)(t0) = αy0. Find the complete response.
From equation (5.44) and Problem 5.5,
    y(t) = e^{α(t-t0)} cos(e^{-t0} - e^{-t})y0 + e^{αt} ∫_{t0}^{t} e^{-τ} sin(e^{-τ} - e^{-t}) dτ
Changing variables from τ to η in the integral, where e^{-τ} - e^{-t} = η, gives
    y(t) = e^{α(t-t0)} cos(e^{-t0} - e^{-t})y0 + e^{αt}[1 - cos(e^{-t0} - e^{-t})]
Notice this problem cannot be solved by Laplace transformation in one variable.
5.10. Given a step input U(s) = 6/s into a system with a transfer function
    H(s) = (s + 1)/(s² + 5s + 6)
Find the output y(t) assuming zero initial conditions.
The easiest way to do this is by using classical techniques.
    ℒ{y(t)} = U(s)H(s) = 6(s + 1)/(s³ + 5s² + 6s) = 1/s + 3/(s + 2) - 4/(s + 3)
Taking the inverse Laplace transform determines y = 1 + 3e^{-2t} - 4e^{-3t}.
Doing this by state space techniques shows how it corresponds with the classical techniques. From Problem 2.3 the state equations are
    d/dt( x1 / x2 ) = ( -2  0 / 0  -3 )( x1 / x2 ) + ( 1 / 1 )u,    y = (-1  2)( x1 / x2 )
The transition matrix is obviously
    Φ(t, τ) = ( e^{-2(t-τ)}  0 / 0  e^{-3(t-τ)} )
The response can be expressed directly in terms of (5.44). With t0 = 0,
    y(t) = (-1  2)Φ(t, 0)·0 + ∫_{0}^{t} (-1  2)( e^{-2(t-τ)}  0 / 0  e^{-3(t-τ)} )( 1 / 1 )6 dτ = 1 + 3e^{-2t} - 4e^{-3t}    (5.68)
This integral is usually very complicated to solve analytically, although it is easy for a computer. Instead, we shall use the transfer function matrix of equation (5.56).
    C(sI - A)^{-1}B = -1/(s + 2) + 2/(s + 3) = (s + 1)/(s² + 5s + 6) = H(s)
This is indeed our original transfer function, and the integral (5.68) is a convolution whose Laplace transform is
    ℒ{y} = [(s + 1)/(s² + 5s + 6)](6/s)
whose inverse Laplace transform gives y(t) as before.
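Since A is diagonal here, the convolution integral in (5.68) separates into scalar modes, which makes a numerical comparison with the classical answer trivial (a sketch, numpy assumed):

```python
import numpy as np

# Diagonal state model from Problem 5.10, step input u = 6, x(0) = 0
Cvec = np.array([-1.0, 2.0])
t = 0.8
# Each mode integrates independently: x_i(t) = 6*(1 - e^{lambda_i t})/(-lambda_i)
x = np.array([6*(1 - np.exp(-2*t))/2, 6*(1 - np.exp(-3*t))/3])
y_state = Cvec @ x
y_classical = 1 + 3*np.exp(-2*t) - 4*np.exp(-3*t)
```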
5.11. Using the adjoint matrix, synthesize a form of control law for use in guidance.
We desire to guide a vehicle to a final state x(tf), which is known. From (5.44),
    x(tf) = Φ(tf, t)x(t) + ∫_{t}^{tf} Φ(tf, τ)B(τ)u(τ) dτ
Choose u(t) = U(t)c where U(t) is a prespecified matrix of time functions that are easily mechanized, such as polynomials in t. The vector c is constant, except that at intervals of time it is recomputed as knowledge of x(t) becomes better. Then c can be computed as
    c = [ ∫_{t}^{tf} Φ(tf, τ)B(τ)U(τ) dτ ]^{-1} [x(tf) - Φ(tf, t)x(t)]
However, this involves finding Φ(tf, t) as the transition matrix of dx/dt = A(t)x with x(t) as the initial condition going to x(tf). Therefore Φ(tf, t) would have to be computed at each recomputation of c, starting with the best estimates of x(t). To avoid this, the adjoint transition matrix Ψ(τ, tf) can be found starting with the final time tf, and be stored and used for all recomputations of c because, from equation (5.64), Ψ†(τ, tf) = Φ(tf, τ) and c is found from
    c = [ ∫_{t}^{tf} Ψ†(τ, tf)B(τ)U(τ) dτ ]^{-1} [x(tf) - Ψ†(t, tf)x(t)]
Supplementary Problems
5.12. Prove equations (5.9), (5.10), (5.11), and (5.12).
5.13. Given Φ(k, m), how can A(k) be found?
5.14. Prove that Ae^{At} = e^{At}A and then find the conditions on A and B such that e^{A}e^{B} = e^{B}e^{A}.
5.15. Verify that Φ(t, τ) = e^{A(t-τ)} and Φ(k, m) = A^{k-m} satisfy the properties of a transition matrix given in equations (5.5)-(5.12).
5.16. Given the fundamental matrix
    Φ(t, τ) = ½( e^{-4(t-τ)} + 1   e^{-4(t-τ)} - 1 / e^{-4(t-τ)} - 1   e^{-4(t-τ)} + 1 )
What is the state equation corresponding to this fundamental matrix?
5.17. Find the transition matrix of the system dx/dt = ( 0  2 / 2  -3 )x.
5.18. Calculate the transition matrix Φ(t, 0) for dx/dt = ( · )x using (a) reduction to Jordan form, (b) the Maclaurin series, (c) the resolvent matrix.
5.19. Find e^{At} by the series method, where A = ( 1  1 / 0  0 ). This shows a case where the series method is the easiest.
5.20. Find e^{At} using the resolvent matrix and Leverrier's algorithm, where A = ( -3  1  0 / 1  -3  1 / 0  0  -3 ).
5.21. Find e^{At} using the Cayley-Hamilton method, where A is the matrix given in Problem 5.20.
5.22. Use the eigenvalue method to find e^{At} for
A =
5.23. Use the resolvent matrix and Cramer's rule to find e^{At} for A as given in Problem 5.22.
5.24. Use the resolvent matrix and Cramer's rule to find A^k for A as given in Problem 5.22.
5.25. Find e^{At} by using the Maclaurin series, Cayley-Hamilton and resolvent matrix methods when A = ( 2  -2 / 0  0 ).
5.26. Find the fundamental, or transition, matrix for the system
\ x/
using the matrix Laplace transform method.
5.27. Given the continuous time system
    dx/dt = ( · )x + ( · )u,    y = (1  0)x + 4u
Compute y(t) using the transition matrix if u is a unit step function. Compare this with the solution obtained by finding the 1×1 transfer function matrix from the input to the output.
5.28. Given the discrete time system
    x(n+1) = ( · )x(n) + ( · )u(n),    x(0) = ( · )
    y(n) = (1  0)x(n) + 4u(n)
Compute y(n) using the transition matrix if u is the series of ones 1, 1, 1, ..., 1.
5.29. (a) Calculate Φ(t, t0) for the system dx/dt = ( · )x using Laplace transforms.
(b) Calculate Φ(k, m) for the system x(k+1) = ( · )x(k) using z transforms.
5.30. How does the spectral representation for e^{At} extend to the case where the eigenvalues of A are not distinct?
5.31. In the Cayley-Hamilton method of finding e^{At}, show that the equation e^{At} = Σ_{i=0}^{n-1} γi(t)A^i can always be solved for the γi(t). For simplicity, consider only the case of distinct eigenvalues.
5.32. Show that the column vectors of Φ(t, τ) span the vector space of solutions to dx/dt = A(t)x.
5.33. Show A(t)A(τ) = A(τ)A(t) when A(t) = a(t)C, where C is a constant n×n matrix and a(t) is a scalar function of t. Also, find the conditions on the aij(t) such that A(t)A(τ) = A(τ)A(t) for a 2×2 A(t) matrix.
5.34. Given the time-varying system
    dx/dt = ( · )x
Find the transition matrix. Hint: Find an integrating factor.
5.35. Prove Floquet's theorem for discrete time systems, Φ(k, m) = P(k, m)R^{k-m}, where P(k, m) = P(k + ω, m) if A(k) = A(k + ω).
5.36. Given the time-varying periodic system dx/dt = ( sin t  sin t / sin t  sin t )x. Find the transition matrix Φ(t, t0) and verify it satisfies Floquet's result Φ(t, t0) = P(t, t0)e^{R(t-t0)} where P is periodic and R is constant. Also find the fundamental matrix of the adjoint system.
5.37. The linear system shown in Fig. 5-6 is excited by a square wave s(t) with period 2 and amplitude |s(t)| = 1. The system equation is ÿ + [β + α sgn(sin πt)]y = 0.
Fig. 5-6
It is found experimentally that the relationship between α and β that permits a periodic solution can be plotted as shown in Fig. 5-7.
Fig. 5-7
Find the equation involving α and β so that these lines could be obtained analytically. (Do not attempt to solve the equations.) Also give the general form of solution for all α and β and mark the regions of stability and instability on the diagram.
5.38. Given the sampled data system of Fig. 5-8, where S is a sampler that transmits the value of e(t) once a second to the hold circuit and the plant has transfer function (s + 1)/(s² + 5s + 6). Find the state space representation at the sampling instants of the closed loop system. Use Problem 5.10.
Fig. 5-8
5.39. Find Φ(t, τ) and the forced response for the system t² d²v/dt² + t dv/dt + v = p(t) with v(t0) = v0 and (dv/dt)(t0) = v̇0.
CHAP. 5] SOLUTIONS TO THE LINEAR STATE EQUATION 125
5.40. Consider the system
    d/dt( x1 / x2 ) = ( a  b / c  d )( x1 / x2 ) + Bu
    ( y1 / y2 ) = ( g  h / e  f )( x1 / x2 ) + Du
or ẋ = Ax + Bu, y = Cx + Du. Find the transfer functions ℒ{y1}/ℒ{u1} and ℒ{y2}/ℒ{u2} using the relation
    ℒ{y}ℒ^{-1}{u} = C(Is - A)^{-1}B + D
5.41. The steady state response x_ss(t) of an asymptotically stable linear differential system satisfies the equation
    dx_ss/dt = A(t)x_ss + B(t)u(t)
but does not satisfy the initial condition x_ss(t0) = x0 and has no reference to the initial time t0. Φ(t, τ) is known.
(a) Verify, by substitution into the given equation, that
    x_ss(t) = ∫^t Φ(t, τ)B(τ)u(τ) dτ
where ∫^t is the indefinite integral evaluated at τ = t. Hint: For an arbitrary vector function f(t, τ),
    d/dt ∫_{h(t)}^{g(t)} f(t, τ) dτ = f(t, g(t)) dg/dt - f(t, h(t)) dh/dt + ∫_{h(t)}^{g(t)} ∂f(t, τ)/∂t dτ
(b) Suppose A(t) = A(t+T), B(t) = B(t+T), u(t) = u(t+T), and the system is stable. Find an expression for a periodic x_ss(t) = x_ss(t+T) in the form
    x_ss(t) = K(T) ∫ Φ(t, τ)B(τ)u(τ) dτ
where K(T) is an n×n matrix to be found, depending on T and independent of t.
5.42. Check that h(t) = e^{τ + α(t-τ)} sin(e^{-τ} - e^{-t}) satisfies the system equation of Problem 5.8 with u(t) = δ(t - τ).
5.43. Fi nd in closed form the response of the system {l-t^)^-\^ = u to the input u(t) =
ty/f^-l with zero initial conditions.
5.44. Consider the scalar system dy/dt = -(1 + t)y + (1 + t)u. If the initial condition is y(0) = 10.0, find the sign and magnitude of the impulse in u(t) required at t = 1.0 to make y(2) = 1.0.
/ -3/(2 \
5.45. Given the system dx/dt = ( ^ g/^ )x. Find the relationship between Xi(t) and x^it) such
that Xiitf) — 1, using the adjoint.
5.46. Show that if an n×n nonsingular matrix solution T(t) to the equation dT/dt = A(t)T - TD(t) is known explicitly, where D(t) is an n×n diagonal matrix, then an explicit solution to dx/dt = A(t)x is also known.
Answers to Supplementary Problems
5.13. A(k) = Φ(k+1, k)
5.14. If and only if AB = BA does e^{A}e^{B} = e^{B}e^{A}.
5.15. det e^{A(t-τ)} = det e^{J(t-τ)} = e^{(t-τ)Σλi} = e^{(t-τ)tr A}
5.16. A = ( -2  -2 / -2  -2 )
5.17. Φ(t, τ) = ⅕[ e^{t-τ} ( 2 / 1 )(2  1) + e^{-4(t-τ)} ( 1 / -2 )(1  -2) ]
5.19. e^{At} = ( e^t  e^t - 1 / 0  1 )
5.20. θ1 = 9, θ2 = 26, θ3 = 24
    e^{At} = 0.5e^{-2t} ( 1 1 1 / 1 1 1 / 0 0 0 ) + e^{-3t} ( 0 0 -1 / 0 0 0 / 0 0 1 ) + 0.5e^{-4t} ( 1 -1 1 / -1 1 -1 / 0 0 0 )
5.21. e^{At} = (6e^{-2t} - 8e^{-3t} + 3e^{-4t})I + 0.5(7e^{-2t} - 12e^{-3t} + 5e^{-4t})A + 0.5(e^{-2t} - 2e^{-3t} + e^{-4t})A²
5.23. (sI - A)^{-1} = F(s)/[s(s-1)²] = D/s + B/(s-1) + C/(s-1)² where
C =
Vi 1 -i/
^ -A; -fc - 1 2fe + 1
5.24. A" =
5.25. e^{At} = ( e^{2t}  1 - e^{2t} / 0  1 )
5.27. y{t) = |[ll-2e-(t-to)/2_e--(t-t„)][7(i_y
5.28. y(k) = 4 + [7 - 3(-1)^k - 4(-1/2)^k]/12
5.29. *(t,to) = „ , , ), *(fc,m) = (^ i /
\ gt-to y V 1
5.30. Let the generalized eigenvector t2 of A have a reciprocal basis vector s2, etc. Then
    e^{At} = e^{λt}x1 r1† + e^{λt}t2 s2† + te^{λt}x1 s2† + ···
5.31. A Vandermonde matrix in the eigenvalues results, which is then always invertible.
5.33. Requires only a12(t)a21(τ) = a12(τ)a21(t) and [a22(t) - a11(t)]a21(τ) = a21(t)[a22(τ) - a11(τ)].
5.34. Φ(t, t0) = ( cos ∫_{t0}^{t} a^{-1}(τ)dτ    a(t0) sin ∫_{t0}^{t} a^{-1}(τ)dτ / -a^{-1}(t) sin ∫_{t0}^{t} a^{-1}(τ)dτ    cos ∫_{t0}^{t} a^{-1}(τ)dτ )
5.36. Φ(t, t0) = e^{τ} ( cosh τ  sinh τ / sinh τ  cosh τ )
where τ = cos t0 - cos t, so Φ(t, t0) = P(t, t0) and R = 0. Also Ψ†(t, t0) = Φ^{-1}(t, t0) = Φ(t0, t).
5.37. Let γ² = β + α, δ² = β - α. Then e^{2R} = Φ(2, 0) = Φ(2, 1)Φ(1, 0) where
    Φ(2, 1) = ( cos δ   δ^{-1} sin δ / -δ sin δ   cos δ ),    Φ(1, 0) = ( cos γ   γ^{-1} sin γ / -γ sin γ   cos γ )
For periodicity of the envelope x(t + 4π/θ) = x(t), the eigenvalues λ of e^{2R} must equal e^{±jθ}, where
    det(λI - e^{2R}) = λ² - 2λ cos θ + 1 = λ² - λ tr e^{2R} + det e^{2R}
    det Φ(2, 1) det Φ(1, 0) = 1
    2 cos θ = 2 cos γ cos δ - (γ/δ + δ/γ) sin γ sin δ
The stability boundaries are then determined by
    ±2 = 2 cos γ cos δ - (γ/δ + δ/γ) sin γ sin δ
The solution is of the form Φ(t, τ) = P(t, τ)e^{R(t-τ)} and the given curves form the stability boundaries between unstable regions and periodic regions.
Reference: B. van der Pol and M. J. O. Strutt, "On the Stability of the Solutions of Mathieu's Equation," Philosophical Magazine, 7th series, vol. V, January-June 1928, pp. 18-38.
5.38. x(fc + l) = ^(l + ^"')/2 '-'-' ]^(k)+ ('-^'~yM
^ ' \{\-e-^)l^ (5e-3-2)/3y^ ^ ^ \^2-2e-3y 6
5.39. v(t) = v0 cos(ln t/t0) + v̇0 t0 sin(ln t/t0) - ∫_{t0}^{t} sin(ln τ/t) p(τ) dτ/τ
5.41. K(T) = (e^{-RT} - I)^{-1}
5.42. Since h(t) is an element of the transition matrix,
    d²h/dt² + (1 - 2α)dh/dt + (α² - α + e^{-2t})h = 0   for t ≠ τ
Also, since dh/dt and h are continuous in t,
    lim_{ε→0} ∫_{τ-ε}^{τ+ε} (d²h/dt²) dt = ∫_{τ-ε}^{τ+ε} δ(t - τ) dt = 1
5.43. y{t) = (1/4 - ^/W^n. )(t - to) + (sin-i t - sin-i to)/2
5.44. u(t) = ½(e^{5/2} - 10e^{-3/2})δ(t - 1.0)
5.45. 4 = (3/τ + τ³)x1(t0) + 3(1 - τ⁴)x2(t0)/tf, where τ = tf/t0
5.46. Let X = T(«)z.
chapter 6
Controllability and Observability
6.1 INTRODUCTION TO CONTROLLABILITY AND OBSERVABILITY
Can all the states of a system be controlled and/or observed? This fundamental question
arises surprisingly often in both practical and theoretical investigations and is most easily
investigated using state space techniques.
Definition 6.1: A state x1 of a system is controllable if all initial conditions x0 at any previous time t0 can be transferred to x1 in a finite time by some control function u(t, x0).
If all states x1 are controllable, the system is called completely controllable or simply controllable. If controllability is restricted to depend on t0, the state is said to be controllable at time t0. If the state can be transferred from x0 to x1 as quickly as desired independent of t0, instead of in some finite time, that state is totally controllable. The system is totally controllable if all states are totally controllable. Finally, we may talk about the output y instead of the state x and give similar definitions for output controllable, e.g. an output controllable at time t0 means that a particular output y can be attained starting from any arbitrary x0 at t0.
To determine complete controllability at time t0 for linear systems, it is necessary and sufficient to investigate whether the zero state instead of all initial states can be transferred to all final states. Writing the complete solution for the linear case,
    x(t1) = Φ(t1, t0)x(t0) + ∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ) dτ
which is equivalent to starting from the zero state and going to a final state x′(t1) = x(t1) - Φ(t1, t0)x(t0). Therefore if we can show the linear system can go from 0 to any x′(t1), then it can go from any x(t0) to any x(t1).
The concept of observability will turn out to be the dual of controllability.
Definition 6.2: A state x(t) at some given t of a system is observable if knowledge of the input u(τ) and output y(τ) over a finite time segment t0 < τ ≤ t completely determines x(t).
If all states x(t) are observable, the system is called completely observable. If observability depends on t0, the state is said to be observable at t0. If the state can be determined for τ in any arbitrarily small time segment independent of t0, it is totally observable. Finally, we may talk about observability when u(t) = 0, and give similar definitions for zero-input observable.
To determine complete observability for linear systems, it is necessary and sufficient to see if the initial state x(t0) of the zero-input system can be completely determined from y(τ), because knowledge of x(t0) and u(τ) permits x(t) to be calculated from the complete solution equation (5.44).
CHAP. 6]
CONTROLLABILITY AND OBSERVABILITY
129
We have already encountered uncontrollable and unobservable states in Example 1.7, page 3. These states were physically disconnected from the input or the output. By physically disconnected we mean that for all time the flow diagram shows no connection, i.e. the control passes through a scalar with zero gain. Then it follows that any state vectors having elements that are disconnected from the input or output will be uncontrollable or unobservable. However, there exist uncontrollable and unobservable systems in which the flow diagram is not always disconnected.
Example 6.1.
Consider the time-varying system
    d/dt( x1 / x2 ) = ( a  0 / 0  b )( x1 / x2 ) + ( e^{at} / e^{bt} )u
From the flow diagram of Fig. 6-1 it can be seen that u(t) passes through scalars that are never zero.
Fig. 6-1
For zero initial conditions,
    ∫_{t0}^{t1} u(τ) dτ = x1(t1)e^{-at1} = x2(t1)e^{-bt1}
Only those x2(t1) = x1(t1)e^{(b-a)t1} can be reached at t1, so that x2(t1) is fixed after x1(t1) is chosen. Therefore the system is not controllable.
6.2 CONTROLLABILITY IN TIME-INVARIANT LINEAR SYSTEMS
For time-invariant systems dx/dt = Ax + Bu in the case where A has distinct eigen-
values, connectedness between the input and all elements of the state vector becomes
equivalent to the strongest form of controllability, totally controllable. We shall first con-
sider the case of a scalar input to avoid complexity of notation, and consider distinct eigen-
values before going on to the general case.
Theorem 6.1: Given a scalar input u(t) to the time-invariant system dx/dt = Ax + bu, where A has distinct eigenvalues λi. Then the system is totally controllable if and only if the vector f = M^{-1}b has no zero elements. M is the modal matrix with eigenvectors of A as its column vectors.
Proof: Only if part: Change variables by x = Mz. Then the system becomes dz/dt = Λz + fu, where Λ is the diagonal matrix of distinct eigenvalues λi. A flow diagram of this system is shown in Fig. 6-2 below.
If any element fi of the f vector is zero, the element of the state vector zi is disconnected from the control. Consequently any x made up of a linear combination of the z's involving zi will be uncontrollable. Therefore if the system is totally controllable, then all elements fi must be nonzero.
Fig. 6-2
If part: Now we assume all fi are nonzero, and from the remarks following Definition 6.1 we need investigate only whether the transformed system can be transferred to an arbitrary z(t1) from z(t0) = 0, where t1 can be arbitrarily close to t0. To do this, note
    zi(t1) = fi ∫_{t0}^{t1} e^{λi(t1-τ)}u(τ) dτ,   i = 1, 2, ..., n    (6.1)
It is true, but yet unproven, that if the fi are nonzero, many different u(t) can transfer 0 to z(t1).
Now we construct a particular u(t) that will always do the job. Prescribe u(t) as
    u(t) = Σ_{k=1}^{n} μk e^{λ̄k(t-t1)}    (6.2)
where the μk are constants to be chosen and the overbar denotes the complex conjugate. Substituting the construction (6.2) into equation (6.1) gives
    zi(t1) = fi Σ_{k=1}^{n} μk (e^{λk(t1-τ)}, e^{λi(t1-τ)})   for all i    (6.3)
where the inner product is defined as
    (θ(τ), φ(τ)) = ∫_{t0}^{t1} θ*(τ)φ(τ) dτ
Equation (6.3) can be written in matrix notation as
    ( z1(t1)/f1 / z2(t1)/f2 / ⋮ / zn(t1)/fn ) = ( g11  g12  ···  g1n / g21  g22  ···  g2n / ⋮ / gn1  gn2  ···  gnn )( μ1 / μ2 / ⋮ / μn )    (6.4)
where gik = (e^{λk(t1-τ)}, e^{λi(t1-τ)}). Note that because fi ≠ 0 by assumption, division by fi is permitted. Since the time functions e^{λi(t1-τ)} are obviously linearly independent, the Gram matrix {gik} is nonsingular (see Problem 3.14). Hence we can always solve equation (6.4) for μ1, μ2, ..., μn, which means the control (6.2) will always work.
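Theorem 6.1's test — f = M^{-1}b has no zero elements — can be sketched numerically (numpy assumed; the matrices below are arbitrary illustrations, with b_bad deliberately chosen parallel to an eigenvector so that one component of f vanishes):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # distinct eigenvalues -1 and -2
M = np.linalg.eig(A)[1]                     # modal matrix of eigenvectors

b_good = np.array([0.0, 1.0])
b_bad = np.array([1.0, -1.0])               # parallel to the eigenvector for -1

f_good = np.linalg.solve(M, b_good)         # f = M^{-1} b
f_bad = np.linalg.solve(M, b_bad)

controllable = bool(np.all(np.abs(f_good) > 1e-9))     # no zero elements
uncontrollable = bool(np.any(np.abs(f_bad) < 1e-9))    # a zero element appears
```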
Now we can consider what happens when A is not restricted to have distinct eigenvalues.
Theorem 6.2: Given a scalar input u(t) to the time-invariant system dx/dt = Ax + bu, where A is arbitrary. Then the system is totally controllable if and only if:
(1) each eigenvalue λi associated with a Jordan block Lji(λi) is distinct from an eigenvalue associated with another Jordan block, and
(2) each element fi of f = T^{-1}b, where T^{-1}AT = J, associated with the bottom row of each Jordan block is nonzero.
Note that Theorem 6.1 is a special case of Theorem 6.2.
Proof: Only if part: The system is assumed controllable. The flow diagram for one l×l Jordan block Lji(λi) of the transformed system dz/dt = Jz + fu is shown in Fig. 6-3. The control u is connected to z1, z2, ..., z_{l-1} and z_l only if f_l is nonzero. It does not matter if f1, f2, ... and f_{l-1} are zero or not, so that the controllable system requires condition (2) to hold. Furthermore suppose condition (1) did not hold. Then the bottom rows of two different Jordan blocks with the same eigenvalue [Lνi(λi) and Lηi(λi)] could be written as
    dzν/dt = λi zν + fν u
    dzη/dt = λi zη + fη u
Fig. 6-3
Consider the particular state having one element equal to fη zν - fν zη. Then
    d(fη zν - fν zη)/dt = fη(λi zν + fν u) - fν(λi zη + fη u) = λi(fη zν - fν zη)
Therefore fη zν(t) - fν zη(t) = [fη zν(0) - fν zη(0)]e^{λi t} and is independent of the control. We have found a particular state that is not controllable, so if the system is controllable, condition (1) must hold.
If part: Again, a control can be constructed in similar manner to equation (6.2) to show the system is totally controllable if conditions (1) and (2) of Theorem 6.2 hold.
Example 6.2.
To illustrate why condition (1) of Theorem 6.2 is important, consider the system

         ( 2  0 )     ( 3 )
 dx/dt = ( 0  2 )x +  ( 1 )u,      y = (1  −3)x

Then ℒ{x₁} = (3ℒ{u} + x₁₀)/(s − 2) and ℒ{x₂} = (ℒ{u} + x₂₀)/(s − 2), so that y(t) = (x₁₀ − 3x₂₀)e^{2t} regardless
of the action of the control. The input is physically connected to the state and the state is
physically connected to the output, but the output cannot be controlled.
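The failure can also be seen from the rank test of Theorem 6.8 (introduced later in this chapter). A minimal numerical sketch, assuming the matrices A = diag(2, 2) and b = (3 1)† implied by the Laplace transforms of Example 6.2:

```python
import numpy as np

# Example 6.2 sketch: A = 2I carries the eigenvalue 2 in two separate
# Jordan blocks, so condition (1) of Theorem 6.2 fails.
A = np.array([[2.0, 0.0],
              [0.0, 2.0]])
b = np.array([[3.0],
              [1.0]])

# Controllability matrix Q = (b | Ab) for n = 2.
Q = np.hstack([b, A @ b])
rank = np.linalg.matrix_rank(Q)
print(rank)  # 1 < n = 2: not totally controllable
```

Since Ab is just 2b, every column of Q is a multiple of b, which is exactly the geometric content of the cancellation in y(t).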
For discrete-time systems, an analogous theorem holds.
Theorem 6.3: Given a scalar input u(m) to the time-invariant system x(m+1) = Ax(m) + bu(m),
where A is arbitrary. Then the system is completely controllable if and
only if conditions (1) and (2) of Theorem 6.2 hold.
Proof: Only if part: This is analogous to the only if part of Theorem 6.2, in that the
flow diagram shows the control is disconnected from at least one element of the state vector
if condition (2) does not hold, and a particular state vector with an element equal to
f_η z_ν − f_ν z_η is uncontrollable if condition (1) does not hold.
If part: Consider the transformed system z(m+1) = Jz(m) + fu(m), and for simplicity
assume distinct roots so that J = Λ. Then for zero initial condition,
z_i(m) = [λ_i^{m−1}u(0) + λ_i^{m−2}u(1) + ··· + u(m−1)]f_i
For an nth order system, the desired state can be reached on the nth step because
( z₁(n)/f₁ )     ( λ₁^{n−1}  λ₁^{n−2}  ···  1 ) (  u(0)  )
( z₂(n)/f₂ )  =  ( λ₂^{n−1}  λ₂^{n−2}  ···  1 ) (  u(1)  )        (6.5)
(    ⋮     )     (    ⋮         ⋮           ⋮ ) (   ⋮    )
( zₙ(n)/fₙ )     ( λₙ^{n−1}  λₙ^{n−2}  ···  1 ) ( u(n−1) )
Note that a Vandermonde matrix with distinct elements results, and so it is nonsingular.
Therefore we can solve (6.5) for a control sequence u(0), u(1), ..., u(n−1) to bring the system
to a desired state in n steps if the conditions of the theorem hold.
For discrete-time systems with a scalar input, it takes at least n steps to transfer to an
arbitrary desired state. The corresponding control, found from equation (6.5), is called
dead beat control. Since it takes n steps, only complete controllability was stated in the
theorem. We could (but will not) change the definition of total controllability to say that,
in the case of discrete-time systems, transfer in n steps is total control.
The phenomenon of hidden oscillations in sampled data systems deserves some mention
here. Given a periodic function, such as sin ωt, if we sample it at multiples of its period
it will be undetectable. Referring to Fig. 6-4, it is impossible to tell from the sample points
whether the dashed straight line or the sine wave is being sampled. This has nothing to
do with controllability or observability, because it represents a failure of the abstract object
(the difference equation) to represent a physical object. In this case, a differential-difference
equation can be used to represent behavior between sampling instants.
Fig. 6-4
6.3 OBSERVABILITY IN TIME-INVARIANT LINEAR SYSTEMS
Analogous to Theorem 6.1, connectedness between state and output becomes equivalent
to total observability for dx/dt = Ax + Bu, y = c†x + d†u, when the system is stationary
and A has distinct eigenvalues. To avoid complexity, first we consider scalar outputs and
distinct eigenvalues.
Theorem 6.4: Given a scalar output y(t) of the time-invariant system dx/dt = Ax + Bu,
y = c†x + d†u, where A has distinct eigenvalues λ_i. Then the system is
totally observable if and only if the vector g† = c†M has no zero elements.
M is the modal matrix with eigenvectors of A as its column vectors.
Proof: From the remarks following Definition 6.2 we need to see if x(t₀) can be recon-
structed from measurement of y(τ) over t₀ ≤ τ ≤ t₁ in the case where u(t) = 0. We do this
by changing variables as x = Mz. Then the system becomes dz/dt = Λz and y = c†Mz = g†z.
The flow diagram for this system is given in Fig. 6-5.
Each z_i(t) = z_i(t₀)e^{λ_i(t−t₀)} can be determined by taking n measurements of y(t) at times
t_k = t₀ + (t₁ − t₀)k/n for k = 1, 2, ..., n and solving the set of equations
y(t_k) = Σ_{i=1}^n g_i z_i(t₀) e^{λ_i(t_k−t₀)}
for the g_i z_i(t₀). When written in matrix form this set of equations gives a Vandermonde-type
matrix which is always nonsingular if the λ_i are distinct. If all g_i ≠ 0, then all z_i(t₀) can be
found. To find x(t), use x(t₀) = Mz(t₀) and dx/dt = Ax + Bu. Only if g_i ≠ 0 is each state
connected to y.
Fig. 6-5
The extension to a general A matrix is similar to the controllability Theorem 6.2.
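The reconstruction in this proof can be illustrated with a small numerical sketch. The modal data below (distinct eigenvalues, nonzero g, u = 0) are made-up numbers:

```python
import numpy as np

# Sketch of Theorem 6.4's proof: with u = 0,
# y(t_k) = sum_i g_i z_i(t0) exp(lam_i (t_k - t0)).
# Sampling y at n distinct times gives a nonsingular system
# for the products g_i z_i(t0).
lam = np.array([-1.0, -2.0, -3.0])     # distinct eigenvalues (assumed)
g = np.array([1.0, 0.5, 2.0])          # g† = c†M, no zero elements
z0 = np.array([1.0, -1.0, 0.5])        # the "unknown" initial modal state

t0, t1, n = 0.0, 1.0, 3
tk = t0 + (t1 - t0) * np.arange(1, n + 1) / n
E = np.exp(np.outer(tk - t0, lam))     # row k: exp(lam_i (t_k - t0))
y = E @ (g * z0)                       # the n measurements of y(t_k)

gz = np.linalg.solve(E, y)             # recover g_i z_i(t0)
print(np.allclose(gz / g, z0))         # True: z(t0) found since all g_i != 0
```

If some g_i were zero, the corresponding z_i(t₀) could not be divided out, which is exactly where the proof's "only if" direction bites.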
Theorem 6.5: Given a scalar output y(t) from the time-invariant system dx/dt = Ax + Bu,
y = c†x + d†u, where A is arbitrary. Then the system is totally observable
if and only if:
(1) each eigenvalue λ_i associated with a Jordan block L_ji(λ_i) is distinct from
an eigenvalue associated with another Jordan block, and
(2) each element g_l of g† = c†T, where T⁻¹AT = J, associated with the
top row of each Jordan block is nonzero.
The proof is similar to that of Theorem 6.2.
Theorem 6.6: Given a scalar output y(m) from the time-invariant system x(m+1) =
Ax(m) + Bu(m), y(m) = c†x(m) + d†u(m), where A is arbitrary. Then the
system is completely observable if and only if conditions (1) and (2) of
Theorem 6.5 hold.
The proof is similar to that of Theorem 6.3.
Now we can classify the elements of the state vectors of dx/dt = Ax + Bu, y = Cx + Du
and of x(m+1) = Ax(m) + Bu(m), y = Cx + Du according to whether they are controllable
or not, and whether they are observable or not. In particular, in the single input–single
output case when A has distinct eigenvalues, those elements of the state z_i that have non-
zero f_i and g_i are both controllable and observable, those elements z_j that have zero f_j but
nonzero g_j are uncontrollable but observable, etc. When A has repeated eigenvalues, a
glance at Fig. 6-3 shows that z_k is controllable if and only if not all of f_k, f_{k+1}, ..., f_l are
zero, and the eigenvalues associated with individual Jordan blocks are distinct.
Unobservable and uncontrollable elements of a state vector cancel out of the transfer
function of a single input-single output system.
Theorem 6.7: For the single input–single output system dx/dt = Ax + bu, y = c†x + du,
the transfer function c†(sI − A)⁻¹b has poles that are canceled by zeros if
and only if some states are uncontrollable and/or unobservable. A similar
statement holds for discrete-time systems.
Proof: First, note that the Jordan flow diagram (see Section 2.4) to represent the
transfer function cannot be drawn with repeated eigenvalues associated with different
Jordan blocks. (Try it! The elements belonging to a particular eigenvalue must be com-
bined.) Furthermore, if the J matrix has repeated eigenvalues associated with different
Jordan blocks, an immediate cancellation occurs. This can be shown by considering the
bottom rows of two Jordan blocks with identical eigenvalues λ:

         ( λ  0 )     ( b₁ )
 dz/dt = ( 0  λ )z +  ( b₂ )u,      y = (c₁  c₂)z + du

Then for zero initial conditions ℒ{z₁} = (s − λ)⁻¹b₁ℒ{u} and ℒ{z₂} = (s − λ)⁻¹b₂ℒ{u}, so that
ℒ{y} = [c₁(s − λ)⁻¹b₁ + c₂(s − λ)⁻¹b₂ + d]ℒ{u}
Combining terms gives
ℒ{y} = [(c₁b₁ + c₂b₂)(s − λ)⁻¹ + d]ℒ{u}
This is a first-order transfer function representing a second-order system, so a cancellation
has occurred. Starting from the transfer function gives a system representation of
dz/dt = λz + u, y = (c₁b₁ + c₂b₂)z + du, which illustrates why the Jordan flow diagram can-
not be drawn for systems with repeated eigenvalues.
Now consider condition (2) of Theorems 6.2 and 6.5. Combining the flow diagrams of
Figs. 6-2 and 6-5 to represent the element z_l of the bottom row of a Jordan block gives
Fig. 6-6.
Fig. 6-6
Comparing this figure with Fig. 2-12 shows that f_l g_l = ρ_l, the residue of λ_l. If and only
if ρ_l = 0 does a cancellation occur, and ρ_l = 0 if and only if f_l = 0 and/or g_l = 0, which occurs
when the system is uncontrollable and/or unobservable.
Note that it is the uncontrollable and/or unobservable element of the state vector that
is canceled from the transfer function.
6.4 DIRECT CRITERIA FROM A, B, AND C
If we need not determine which elements of the state vector are controllable and ob-
servable, but merely need to investigate the controllability and/or observability of the whole
system, calculation of the Jordan form is not necessary. Criteria using only A, B and C
are available and provide an easy and general way to determine if the system is completely
controllable and observable.
Theorem 6.8: The time-invariant system dx/dt = Ax + Bu is totally controllable if and
only if the n × nm matrix Q has rank n, where
Q = (B | AB | ··· | Aⁿ⁻¹B)
Note that this criterion is valid for vector u, and so is more general than Theorem 6.2.
One method of determining whether rank Q = n is to see if det (QQ†) ≠ 0, since rank Q =
rank QQ† from Property 16, page 87. However, there exist faster machine methods for
determining the rank.
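A minimal implementation of this test is easy to sketch. The example pair below is made up; the rank computation uses an SVD, which in floating point is more reliable than forming det(QQ†):

```python
import numpy as np

# Theorem 6.8 test: build Q = (B | AB | ... | A^{n-1} B), check rank Q = n.
def controllability_matrix(A, B):
    n = A.shape[0]
    blocks, AkB = [], B
    for _ in range(n):
        blocks.append(AkB)
        AkB = A @ AkB          # next power of A applied to B
    return np.hstack(blocks)

def is_controllable(A, B):
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Quick check on a controllable pair in companion form (illustrative).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))  # True
```

The same function works for vector inputs, since B may have m columns and Q is then n × nm as in the theorem.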
Proof: To reach an arbitrary x(t₁) from the zero initial state, we must find a control
u(t) such that
x(t₁) = ∫_{t₀}^{t₁} e^{A(t₁−τ)} B u(τ) dτ        (6.6)
Use of Theorem 4.16 gives e^{A(t₁−τ)} = Σ_{i=1}^n γ_i(τ)Aⁿ⁻ⁱ, so that substitution into (6.6) gives
                                               ( w₁ )
x(t₁) = Bw₁ + ABw₂ + ··· + Aⁿ⁻¹Bwₙ  =  Q       (  ⋮ )
                                               ( wₙ )
where w_k = ∫_{t₀}^{t₁} γ_{n+1−k}(τ) u(τ) dτ. Hence x(t₁) lies in the range space of Q, so that Q must
have rank n to reach an arbitrary vector in the n-dimensional state space. Therefore if the
system is totally controllable, Q has rank n.
Now we assume Q has rank n and show that the system is totally controllable. This
time we construct u(τ) as
u(τ) = μ₁δ(τ − t₀) + μ₂δ⁽¹⁾(τ − t₀) + ··· + μₙδ⁽ⁿ⁻¹⁾(τ − t₀)        (6.7)
where the μ_k are constant m-vectors to be found and δ⁽ᵏ⁾(t) is the kth derivative of the Dirac
delta function. Substituting this construction into equation (6.6) gives
x(t₁) = e^{A(t₁−t₀)}Bμ₁ + e^{A(t₁−t₀)}ABμ₂ + ··· + e^{A(t₁−t₀)}Aⁿ⁻¹Bμₙ        (6.8)
since ∫_{t₀}^{t₁} e^{A(t₁−τ)}B δ(τ − t₀) dτ = e^{A(t₁−t₀)}B and the defining relation for δ⁽ᵏ⁾ is
∫ δ⁽ᵏ⁾(ξ − t) g(ξ) dξ = dᵏg/dtᵏ
Using the inversion property of transition matrices and the definition of Q, equation (6.8)
can be rewritten
                       ( μ₁ )
e^{A(t₀−t₁)}x(t₁) = Q  (  ⋮ )
                       ( μₙ )
From Problem 3.11, page 63, a solution for the μ_k always exists if rank Q = n. Hence some
μ_k (perhaps not unique) always exist such that the control (6.7) will drive the system to
x(t₁).
The construction for the control (6.7) gives some insight as to why completely control-
lable stationary linear systems can be transferred to any desired state as quickly as desired:
no restrictions are put on the magnitude or shape of u(t). If the magnitude of the control
is bounded, the set of states to which the system can be transferred by u is called the set of
reachable states at t₁, which has as its dual concept in observability the recoverable states at
t₁. Any further discussion of this point is beyond the scope of this text.
A proof can be given involving a construction for a bounded u(t) similar to equation
(6.2), instead of the unbounded u(t) of (6.7). However, as t₁ → t₀, any control must become
unbounded to introduce a jump from x(t₀) to x(t₁).
The dual theorem to the one just proven is
Theorem 6.9: The time-invariant system dx/dt = Ax + Bu, y = Cx + Du is totally ob-
servable if and only if the kn × n matrix P has rank n, where
    (   C    )
P = (  CA    )
    (   ⋮    )
    ( CAⁿ⁻¹  )
Theorems 6.8 and 6.9 are also true (replacing totally by completely) for the discrete-time
system x(m+1) = Ax(m) + Bu(m), y(m) = Cx(m) + Du(m). Since the proof of Theorem
6.9 is quite similar to that of Theorem 6.8 in the continuous-time case, we give a proof for
the discrete-time case. It is sufficient to see if the initial state x(l) can be reconstructed
from knowledge of y(m) for l ≤ m < ∞, in the case where u(m) = 0. From the state equa-
tion,
y(l) = Cx(l)
y(l+1) = Cx(l+1) = CAx(l)
. . . . . . . . . . . . . . .
y(l+n−1) = CAⁿ⁻¹x(l)
This can be rewritten as
(   y(l)   )
(  y(l+1)  )  =  Px(l)
(    ⋮     )
( y(l+n−1) )
and a unique solution exists if and only if P has rank n, as shown in Problem 3.11, page 63.
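This discrete-time reconstruction is directly computable. A sketch with made-up matrices, stacking the outputs and solving Px(l) = y:

```python
import numpy as np

# Sketch of the discrete-time proof above: stack y(l), ..., y(l+n-1)
# into P x(l) and solve for x(l) when rank P = n.
def observability_matrix(A, C):
    n = A.shape[0]
    rows, CAk = [], C
    for _ in range(n):
        rows.append(CAk)
        CAk = CAk @ A          # next row block C A^k
    return np.vstack(rows)

A = np.array([[0.0, 1.0], [-0.5, 1.0]])
C = np.array([[1.0, 0.0]])
P = observability_matrix(A, C)

x0 = np.array([2.0, -1.0])     # "unknown" initial state x(l)
ys = np.array([(C @ np.linalg.matrix_power(A, k) @ x0).item()
               for k in range(2)])
x_rec, *_ = np.linalg.lstsq(P, ys, rcond=None)
print(np.allclose(x_rec, x0))  # True, since rank P = n
```

With rank P < n, the least-squares solve would return only the observable part of x(l), leaving the unobservable component undetermined.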
6.5 CONTROLLABILITY AND OBSERVABILITY OF TIME-VARYING SYSTEMS
In time-varying systems, the difference between totally and completely controllable be-
comes important.
Example 6.3.
Consider the time-varying scalar system dx/dt = −x + b(t)u and y = x, where b(t) is either zero
or unity as shown in Fig. 6-7.
Fig. 6-7
If b(t) = 0 for t₀ ≤ t < t₀ + Δt, the system is uncontrollable in the time interval [t₀, t₀ + Δt). If
b(t) = 1 for t₀ + Δt ≤ t ≤ t₁, the system is totally controllable in the time interval t₀ + Δt ≤ t ≤ t₁.
However, the system is not totally controllable, but is completely controllable, over the time interval
t₀ ≤ t ≤ t₁ to reach a desired final state x(t₁).
Now suppose b(t) = 0 for t₁ < t ≤ t₂, and we wish to reach a final state x(t₂). The state x(t₂) can
be reached by controlling the system such that the state at time t₁ is x(t₁) = e^{t₂−t₁}x(t₂). Then with zero
input the free system will coast to the desired state x(t₂) = Φ(t₂, t₁)x(t₁). Therefore if the system is totally
controllable for some time interval t₀ ≤ t ≤ t₀ + Δt, it is completely controllable for all t₂ ≥ t₀ + Δt.
For the time-varying system, a criterion analogous to Theorem 6.8 can be formed.
Theorem 6.10: The time-varying system dx/dt = A(t)x + B(t)u is totally controllable if
and only if the matrix Q(t) has rank n for times t everywhere dense in
[t₀, t₁], where Q(t) = (Q₁ | Q₂ | ··· | Qₙ), in which Q₁ = B(t) and Q_{k+1} =
−A(t)Q_k + dQ_k/dt for k = 1, 2, ..., n−1. Here A(t) and B(t) are assumed
piecewise differentiable at least n−2 and n−1 times, respectively.
The phrase "for times t everywhere dense in [t₀, t₁]" essentially means that there can exist
only isolated points in t at which rank Q < n. Because this concept occurs frequently, we
shall abbreviate it to "Q(t) has rank n (e.d.)".
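The recursion Q₁ = B(t), Q_{k+1} = −A(t)Q_k + dQ_k/dt is mechanical and well suited to a symbolic sketch. The example system below is made up for illustration:

```python
import sympy as sp

# Symbolic construction of Theorem 6.10's matrix Q(t).
t = sp.symbols('t')

def time_varying_Q(A, B, n):
    Qk, blocks = B, [B]
    for _ in range(n - 1):
        Qk = -A * Qk + sp.diff(Qk, t)   # Q_{k+1} = -A Q_k + dQ_k/dt
        blocks.append(Qk)
    return sp.Matrix.hstack(*blocks)

A = sp.Matrix([[0, 1], [0, t]])
B = sp.Matrix([0, 1])
Q = time_varying_Q(A, B, 2)
print(sp.simplify(Q.det()))  # 1, so rank Q(t) = 2 for all t
```

Here det Q(t) is identically 1, so the rank condition holds everywhere, not merely everywhere dense.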
Proof: First we assume rank Q = n in an interval containing a time η such that t₀ < η < t₁
and show the system is totally controllable.
Construct u(t) as
u(t) = Σ_{k=1}^n μ_k δ⁽ᵏ⁻¹⁾(t − η)        (6.9)
To attain an arbitrary x(t₁) starting from x(t₀) = 0 we must have
x(t₁) = ∫_{t₀}^{t₁} Φ(t₁, τ)B(τ)u(τ) dτ = Σ_{k=1}^n {∂ᵏ⁻¹/∂τᵏ⁻¹ [Φ(t₁, τ)B(τ)]}_{τ=η} μ_k        (6.10)
But
∂/∂τ [Φ(t₁, τ)B(τ)] = Φ(t₁, τ) dB/dτ + [∂Φ(t₁, τ)/∂τ] B(τ)        (6.11)
Note 0 = dI/dt = d(ΦΦ⁻¹)/dt = (dΦ/dt)Φ⁻¹ + Φ dΦ⁻¹/dt = AΦΦ⁻¹ + Φ dΦ⁻¹/dt, so that
∂Φ(t₁, τ)/∂τ = −Φ(t₁, τ)A(τ) and equation (6.11) becomes
∂/∂τ [Φ(t₁, τ)B(τ)] = Φ(t₁, τ)Q₂(τ)
Similarly,
∂ᵏ⁻¹/∂τᵏ⁻¹ [Φ(t₁, τ)B(τ)] = Φ(t₁, τ)Q_k(τ)        (6.12)
Therefore (6.10) becomes
                      ( μ₁ )
x(t₁) = Φ(t₁, η)Q(η)  (  ⋮ )
                      ( μₙ )
A solution always exists for the μ_k because rank Q(η) = n and Φ is nonsingular.
Now assume the system is totally controllable and show rank Q = n (e.d.). From Problem
6.8, page 142, there exists no constant n-vector z ≠ 0 such that, for times t everywhere
dense in t₀ ≤ t ≤ t₁,
z†Φ(t₀, t)B(t) = 0
By differentiating k − 1 times with respect to t and using equation (6.12), this becomes
z†Φ(t₀, t)Q_k(t) = 0. Since Φ(t₀, t) is always nonsingular, there is no n-vector y† =
z†Φ(t₀, t) ≠ 0 such that
0 = (y†Q₁ | y†Q₂ | ··· | y†Qₙ) = y†Q = y₁q₁ + y₂q₂ + ··· + yₙqₙ
where the q_i are the row vectors of Q. Since the n row vectors are then linearly independent,
the rank of Q(t) is n (e.d.).
Theorem 6.11: The time-varying system dx/dt = A(t)x + B(t)u, y = C(t)x + D(t)u is totally
observable if and only if the matrix P(t) has rank n (e.d.), where P†(t) =
(P₁† | P₂† | ··· | Pₙ†), in which P₁ = C(t) and P_{k+1} = P_kA(t) + dP_k/dt for
k = 1, 2, ..., n−1. Again A(t) and C(t) are assumed piecewise differentiable
at least n−2 and n−1 times, respectively.
Again, the proof is somewhat similar to Theorem 6.10 and will not be given. The situ-
ation is somewhat different for the discrete-time case, because generalizing the proof follow-
ing Theorem 6.9 leads to the criterion rank P = n, where for this case
P₁ = C(l),  P₂ = C(l+1)A(l),  ...,  P_k = C(l+k−1)A(l+k−2)···A(l)
The situation changes somewhat when only complete controllability is required. Since
any system that is totally controllable must be completely controllable, if Q(t) has
rank n for some t > t₀ [not rank n (e.d.)] then x(t₀) can be transferred to x(t₁) for t₁ ≥ t.
On the other hand, systems for which rank Q(t) < n for all t might be completely control-
lable (but not totally controllable).
Example 6.4.
Given the system

         ( a  0 )     ( f₁(t) )
 dx/dt = ( 0  a )x +  ( f₂(t) )u

where
f₁(t) = { sin t     for 2kπ ≤ t < (2k+1)π
        { 0         for (2k+1)π ≤ t < 2(k+1)π,      k = 0, ±1, ±2, ...
and f₂(t) = f₁(t + π). Then
Q(t) = ( f₁(t)   −af₁(t) + df₁/dt )
       ( f₂(t)   −af₂(t) + df₂/dt )
At each instant of time, one row of Q(t) is zero, so rank Q(t) = 1 for all t. However,
Φ(t₀, t)B(t) = e^{a(t₀−t)} ( f₁(t) )  =  ( f₁(t)e^{a(t₀−t)} )
                          ( f₂(t) )     ( f₂(t)e^{a(t₀−t)} )
If t₁ − t₀ > π, the rows of Φ(t₀, t)B(t) are linearly independent time functions, and from Problem 6.30,
page 145, the system is completely controllable for t₁ − t₀ > π. The system is not totally controllable because
for every t₀, if t₂ − t₀ < π, either f₁(τ) or f₂(τ) is zero for t₀ ≤ τ ≤ t₂.
However, for systems with analytic A(t) and B(t), it can be shown that complete con-
trollability implies total controllability. Therefore rank Q = n is necessary and sufficient
for complete controllability also. (Note f₁(t) and f₂(t) are not analytic in Example 6.4.)
For complete controllability in a nonanalytic system with rank Q(t) < n, the rank of
Φ(t, τ)B(τ) must be found.
6.6 DUALITY
In this chapter we have repeatedly used the same kind of proof for observability as was
used for controllability. Kalman first remarked on this duality between observing a
dynamic system and controlling it. He notes the determinant of the W matrix of Problem
6.8, page 142, is analogous to Shannon's definition of information. The dual of the optimal
control problem of Chapter 10 is the Kalman filter. This duality is manifested by the
following two systems:
System #1:   dx/dt = A(t)x + B(t)u
             y = C(t)x + D(t)u
System #2:   dw/dt = −A†(t)w + C†(t)v
             z = B†(t)w + D†(t)v
Then system #1 is totally controllable (observable) if and only if system #2 is totally
observable (controllable), which can be shown immediately from Theorems 6.10 and 6.11.
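For constant matrices, the duality is easy to check numerically: the observability matrix of system #2 with output matrix B† is, row block by row block, the conjugate transpose of the controllability matrix of system #1 (up to signs that do not change rank). A sketch with made-up data:

```python
import numpy as np

# Duality check: (A, B) controllable iff (-A†, with output B†) observable.
def ctrb(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[1.0, 1.0], [0.0, 2.0]])
B = np.array([[0.0], [1.0]])

rank_c = np.linalg.matrix_rank(ctrb(A, B))
rank_o = np.linalg.matrix_rank(obsv(-A.T.conj(), B.T.conj()))
print(rank_c == rank_o)  # True: the ranks agree, as duality requires
```

The k-th row block of the dual observability matrix is B†(−A†)ᵏ = (−1)ᵏ(AᵏB)†, so the two ranks must always coincide.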
Solved Problems

6.1. Given the system

          ( 1  2 −1 )     ( 0 )
 dx/dt =  ( 0  1  0 )x +  ( 0 )u,      y = (1  −1  1)x
          ( 1 −4  3 )     ( 1 )

find the controllable and uncontrollable states and then find the observable and un-
observable states.
Following the standard procedure for transforming a matrix to Jordan form gives A = TJT⁻¹:

( 1  2 −1 )   ( −1 0 0 )( 2 1 0 )( −1  0 0 )
( 0  1  0 ) = (  0 0 1 )( 0 2 0 )(  1 −2 1 )
( 1 −4  3 )   (  1 1 2 )( 0 0 1 )(  0  1 0 )

Then f = T⁻¹b = (0 1 0)† and g† = c†T = (0 1 1). The flow diagram of the Jordan system is
shown in Fig. 6-8.
Fig. 6-8
The element z₁ of the state vector z is controllable (through z₂) but unobservable. The element
z₂ is both controllable and observable. The element z₃ is uncontrollable but observable.
Note Theorem 6.1 is inapplicable.
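The decomposition is easy to verify numerically. The T and J used below are the ones reconstructed in this solution (one valid choice among many, since T is only determined up to the arbitrary constants of Problem 4.41):

```python
import numpy as np

# Check of Problem 6.1: A = T J T^{-1}, f = T^{-1} b, g† = c† T.
A = np.array([[1.0,  2.0, -1.0],
              [0.0,  1.0,  0.0],
              [1.0, -4.0,  3.0]])
b = np.array([0.0, 0.0, 1.0])
c = np.array([1.0, -1.0, 1.0])
T = np.array([[-1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0],
              [ 1.0, 1.0, 2.0]])
J = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

Tinv = np.linalg.inv(T)
print(np.allclose(T @ J @ Tinv, A))  # True
print(Tinv @ b)                      # f  = [0. 1. 0.]
print(c @ T)                         # g† = [0. 1. 1.]
```

The zero in f₁ and the zero in g₃ are exactly what makes z₁ unobservable-but-controllable and z₃ observable-but-uncontrollable.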
6.2. Find the elements of the state vector z that are both controllable and observable for
the system of Problem 6.1.
Taking the Laplace transform of the system with zero initial conditions gives the transfer
function. Using the same symbols for original and transformed variables, we have
sx₁ = x₁ + 2x₂ − x₃        (6.13)
sx₂ = x₂        (6.14)
sx₃ = x₁ − 4x₂ + 3x₃ + u        (6.15)
y = x₁ − x₂ + x₃        (6.16)
From (6.15), x₁ = 4x₂ + (s − 3)x₃ − u. Putting this in (6.13) gives (4s − 6)x₂ = (s − 1)u − (s − 2)²x₃.
Substituting this in (6.14), (s − 1)(s − 2)²x₃ = (s − 1)²u, so x₃ = (s − 1)u/(s − 2)² and, from the
previous relation, x₂ = 0. Then from (6.16),
y = x₁ − x₂ + x₃ = (s − 2)x₃ − u = [(s − 1)/(s − 2) − 1]u = u/(s − 2)
Thus the transfer function h(s) = (s − 2)⁻¹, and from Theorem 6.7 the only observable and con-
trollable element of the state vector is z₂ as defined in Problem 6.1.
6.3. Given the system of Problem 6.1. Is it totally observable and totally controllable?
Forming the P and Q matrices of Theorems 6.9 and 6.8 gives

    ( 1 −1 1 )          ( 0 −1 −4 )
P = ( 2 −3 2 )      Q = ( 0  0  0 )
    ( 4 −7 4 )          ( 1  3  8 )

Then rank P = 2, the dimension of the observable state space; and rank Q = 2, the dimension
of the controllable state space. Hence the system is neither totally observable nor totally controllable.
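The same conclusion can be drawn directly from the criteria of Section 6.4, using the system matrices of Problem 6.1:

```python
import numpy as np

# Check of Problem 6.3: rank P = rank Q = 2 < 3.
A = np.array([[1.0,  2.0, -1.0],
              [0.0,  1.0,  0.0],
              [1.0, -4.0,  3.0]])
b = np.array([[0.0], [0.0], [1.0]])
c = np.array([[1.0, -1.0, 1.0]])

Q = np.hstack([b, A @ b, A @ A @ b])      # (b | Ab | A^2 b)
P = np.vstack([c, c @ A, c @ A @ A])      # (c†; c†A; c†A^2)
print(np.linalg.matrix_rank(Q), np.linalg.matrix_rank(P))  # 2 2
```

Both ranks fall one short of n = 3, matching the one uncontrollable element (z₃) and the one unobservable element (z₁) found in Problem 6.1.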
6.4. Given the time-invariant system

          ( 0  a )     ( 0 )
 dx/dt =  ( 0 −1 )x +  ( 1 )u,      y = (1  0)x

and that u(t) = e⁻ᵗ and y(t) = 2 − ate⁻ᵗ. Find x₁(t) and x₂(t). Find x₁(0) and x₂(0).
What happens when a = 0?
Since y = x₁, then x₁(t) = 2 − ate⁻ᵗ. Also, dx₁/dt = ax₂, so differentiating the output gives
x₂(t) = −e⁻ᵗ + te⁻ᵗ. Then x₁(0) = 2 and x₂(0) = −1. When a = 0, this procedure does not work
because dx₁/dt = 0. There is no way to find x₂(t), because x₂ is unobservable, as can be verified
from Theorem 6.5. (For a = 0, the system is in Jordan form.)
6.5. The normalized equations of vertical motion y(r, θ, t) for a circular drumhead being
struck by a force u(t) at a point r = r₀, θ = θ₀ are
∂²y/∂t² = ∇²y + 2πr δ(r − r₀)δ(θ − θ₀)u(t)        (6.17)
where y(r₁, θ, t) = 0 at r = r₁, the edge of the drum. Can this force excite all the
modes of vibration of the drum?
The solution for the mode shapes is
y(r, θ, t) = Σ_{n=0}^∞ Σ_{m=1}^∞ J_n(κ_m r/r₁)[x_{2n,m}(t) cos 2nπθ + x_{2n+1,m}(t) sin 2nπθ]
where κ_m is the mth zero of the nth order Bessel function J_n. Substituting the motion of the first
harmonic m = 1, n = 1 into equation (6.17) gives
d²x₂₁/dt² = λx₂₁ + γ cos 2πθ₀ u
d²x₃₁/dt² = λx₃₁ + γ sin 2πθ₀ u
where λ = −(κ₁/r₁)² and γ = r₀J₁(κ₁r₀/r₁). Using the controllability criteria, it can be found
that one particular state that is not influenced by u(t) is the first harmonic rotated so that its node
line is at angle θ₀. This is illustrated in Fig. 6-9. A noncolinear point of application of another
force is needed to excite, or damp out, this particular uncontrollable mode.
Fig. 6-9
6.6. Consider the system

          ( 1 1 0 )     ( 0 )
 dx/dt =  ( 0 1 0 )x +  ( 1 )u₁ + b u₂
          ( 0 0 1 )     ( 1 )

where u₁ and u₂ are two separate scalar controls. Determine whether the system
is totally controllable if
(1) b = (0 0 0)†
(2) b = (0 0 1)†
(3) b = (1 0 0)†
For each case, we investigate the controllability matrix Q = (B | AB | A²B) for

    ( 1 1 0 )         ( 0 b₁ )
A = ( 0 1 0 )    B =  ( 1 b₂ )
    ( 0 0 1 )         ( 1 b₃ )

For b = 0 it is equivalent to scalar control, and by condition (1) of Theorem 6.2 the system is
uncontrollable. For case (2),

    ( 0 0 1 0 2 0 )
Q = ( 1 0 1 0 1 0 )
    ( 1 1 1 1 1 1 )

The first three columns are linearly independent, so Q has rank 3 and the system is controllable.
For case (3),

    ( 0 1 1 1 2 1 )
Q = ( 1 0 1 0 1 0 )
    ( 1 0 1 0 1 0 )

The bottom two rows are identical and so the system is uncontrollable.
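The three cases can be checked in a few lines. Note the case (2) column b = (0 0 1)† is the reconstruction used above (the digit is partly illegible in this copy), so treat it as an assumption:

```python
import numpy as np

# Check of Problem 6.6: rank of Q = (B | AB | A^2 B) for three b's.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
b1 = np.array([0.0, 1.0, 1.0])

ranks = []
for b2 in ([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]):
    B = np.column_stack([b1, b2])
    Q = np.hstack([B, A @ B, A @ A @ B])
    ranks.append(int(np.linalg.matrix_rank(Q)))
print(ranks)  # [2, 3, 2]: only case (2) is totally controllable
```

Case (1) reduces to scalar control with the eigenvalue 1 split across two Jordan blocks; case (3) fails because the second input column is invariant under A.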
6.7. Investigate total controllability of the time-varying system

          (  t  0 )     ( eᵗ )
 dx/dt =  ( −1  0 )x +  (  2 )u

The Q(t) matrix of Theorem 6.10 is
Q(t) = ( eᵗ   eᵗ(1 − t) )
       (  2      eᵗ     )
Then det Q(t) = eᵗ(eᵗ + 2t − 2). Since eᵗ + 2t = 2 only at one instant of time, rank Q(t) = 2 (e.d.)
and the system is totally controllable.
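This determinant can be verified symbolically. The A and B below are the entries as reconstructed in this solution (parts of the original are illegible, so they are assumptions chosen to be consistent with the stated determinant):

```python
import sympy as sp

# Symbolic check of Problem 6.7 (reconstructed A, B).
t = sp.symbols('t')
A = sp.Matrix([[t, 0], [-1, 0]])
B = sp.Matrix([sp.exp(t), 2])

Q1 = B
Q2 = -A * Q1 + sp.diff(Q1, t)          # Theorem 6.10 recursion
Q = sp.Matrix.hstack(Q1, Q2)
print(sp.simplify(Q.det() - sp.exp(t) * (sp.exp(t) + 2 * t - 2)))  # 0
```

The difference simplifies to zero, confirming det Q(t) = eᵗ(eᵗ + 2t − 2) for this choice of A and B.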
6.8. Show that the time-varying system dx/dt = A(t)x + B(t)u is totally controllable if
and only if the matrix W(t, τ) is positive definite for every t and every τ > t, where
W(t, τ) = ∫_t^τ Φ(τ, η)B(η)B†(η)Φ†(τ, η) dη
Note that this criterion depends on Φ(t, τ) and is not as useful as Theorems 6.10 and
6.11. Also note positive definite W is equivalent to linear independence of the rows
of Φ(τ, η)B(η) for t ≤ η ≤ τ.
If W(t, τ) is positive definite, W⁻¹ exists. Then choose
u(τ) = B†(τ)Φ†(t₁, τ)W⁻¹(t₀, t₁)x(t₁)
Substitution will verify that
x(t₁) = ∫_{t₀}^{t₁} Φ(t₁, τ)B(τ)u(τ) dτ
so that the system is totally controllable if W(t, τ) is positive definite. Now suppose the system is
totally controllable and show W(t, τ) is positive definite. First note for any constant vector k,
(k, Wk) = ∫_t^τ k†Φ(τ, η)B(η)B†(η)Φ†(τ, η)k dη = ∫_t^τ ‖B†(η)Φ†(τ, η)k‖² dη ≥ 0
Therefore the problem is to show W is nonsingular if the system is totally controllable. Suppose
W is singular, to obtain a contradiction. Then there exists a constant n-vector z ≠ 0 such that
(z, Wz) = 0. Define a continuous m-vector function of time f(t) = −B†(t)Φ†(t₀, t)z. But
∫_{t₀}^{t₁} ‖f(t)‖² dt = ∫_{t₀}^{t₁} z†Φ(t₀, t)B(t)B†(t)Φ†(t₀, t)z dt = (z, W(t₀, t₁)z) = 0
so that f(t) ≡ 0 and 0 = ∫_{t₀}^{t₁} f†(t)u(t) dt for any u(t). Substituting for f(t) gives
0 = −∫_{t₀}^{t₁} z†Φ(t₀, t)B(t)u(t) dt        (6.18)
In particular, since the system is assumed totally controllable, take u(t) to be the control that
transfers 0 to x(t₁) = Φ(t₁, t₀)z ≠ 0. Then
z = ∫_{t₀}^{t₁} Φ(t₀, t)B(t)u(t) dt
Substituting this into equation (6.18) gives 0 = z†z, which is impossible for any nonzero z.
Therefore no nonzero z exists for which (z, Wz) = 0, so W must be positive definite.
Supplementary Problems
6.9. Consider the bilinear scalar system dξ/dt = u(t)ξ(t). It is linear in the initial state and in the
control, but not both, so that it is not a linear system and the theorems of this chapter do not apply.
The flow diagram is shown in Fig. 6-10. Is this system completely controllable according to Defini-
tion 6.1?
Fig. 6-10
6.10. Given the system
dx
dt
(-2 1 0)x
Determine which states are observable and which are controllable, and check your work by deriving
the transfer function.
6.11. Given the system
dx
dt
1 /3
X +
M,
y = a i)x
Classify the states according to their observability and controllability, compute the P and Q matrices,
and find the transfer function.
6.12. Six identical frictionless gears with inertia I are mounted on shafts as shown in Fig. 6-11, with
a center crossbar keeping the outer two pairs diametrically opposite each other. A torque u(t)
is the input and a torque y(t) is the output. Using the angular position of the two outer gearshafts
as two of the elements in a state vector, show that the system is state uncontrollable but totally
output controllable.
Fig. 6-11
Fig. 6-12
6.13. Given the electronic circuit of Fig. 6-12, where u(t) can be any voltage (function of time),
under what conditions on R, L₁ and L₂ can both i₁(t₁) and i₂(t₁) be arbitrarily prescribed,
given that i₁(t₀) and i₂(t₀) can be any numbers?
6.14. Consider the simplified model of a rocket vehicle
'$\ / 1
Under what conditions is the vehicle state controllable?
6.15. Find some other construction than equation (6.2) that will transfer a zero initial condition to an
arbitrary z(t₁).
6.16. Prove Theorem 6.5.
6.17. Prove Theorem 6.6.
6.18. What are the conditions similar to Theorem 6.2 for which a two-input system is totally controllable?
6.19. Given the controllable sampled data system
y(n+2) + 3y(n+1) + 2y(n) = u(n+1) − u(n)
Write the state equation, find the transition matrix in closed form, and find the control that will
force an arbitrary initial state to zero in the smallest number of steps. (This control depends upon
the arbitrary initial conditions.)
6.20. Given the system with nondistinct eigenvalues

          ( −1  1 −1 )     ( … )
 dx/dt =  (  …  …  4 )x +  ( 2 )u,      y = (0  1  −1)x
          (  …  1 −3 )     ( 1 )

Classify the elements of the state vector z corresponding to the Jordan form into observable/not
observable, controllable/not controllable.
6.21. Using the criterion Q = (b | Ab | ··· | Aⁿ⁻¹b), develop the result of Theorem 6.1.
6.22. Consider the discrete system x(k+1) = Ax(k) + bu(k)
where x is a 2-vector, u is a scalar, A = (…), b = (…).
(a) Is the system controllable? (b) If the initial condition is x(0) = (…), find the control
sequence u(0), u(1) required to drive the state to the origin in two sample periods (i.e., x(2) = 0).
6.23. Consider the discrete system
x(k+1) = Ax(k),      y(k) = c†x(k)
where x is a 2-vector, y is a scalar, A = (…), c† = (1  2).
(a) Is the system observable? (b) Given the observation sequence y(1) = 8, y(2) = 14, find the
initial state x(0).
6.24. Prove Theorem 6.8, constructing a bounded u(t) instead of equation (6.7). Hint: See Problem 6.8.
6.25. Given the multiple input–multiple output time-invariant system dx/dt = Ax + Bu, y = Cx + Du,
where y is a k-vector and u is an m-vector. Find a criterion matrix somewhat similar to the Q
matrix of Theorem 6.8 that assures complete output controllability.
6.26. Consider three linear time-invariant systems of the form
S_i:  dx⁽ⁱ⁾/dt = A⁽ⁱ⁾x⁽ⁱ⁾ + B⁽ⁱ⁾u⁽ⁱ⁾,   y⁽ⁱ⁾ = C⁽ⁱ⁾x⁽ⁱ⁾,   i = 1, 2, 3
(a) Derive the transfer function matrix for the interconnected system of Fig. 6-13 in terms of
A⁽ⁱ⁾, B⁽ⁱ⁾ and C⁽ⁱ⁾, i = 1, 2, 3.
Fig. 6-13
(b) If the overall interconnected system in part (a) is observable, show that S₃ is observable.
(Note that u⁽ⁱ⁾ and y⁽ⁱ⁾ are vectors.)
6.27. Given the time-varying system
i = (J O'-'G)- """-"-"'^
Is this system totally controllable and observable?
6.28. Prove Theorem 6.9 for the continuous-time case.
6.29. Prove a controllability theorem similar to Theorem 6.10 for the discrete time-varying case.
6.30. Similar to Problem 6.8, show that the time-varying system dx/dt = A(t)x + B(t)u is completely
controllable for t₁ > τ if and only if the matrix W(τ, t) is positive definite for every τ and some
finite t > τ. Also show this is equivalent to linear independence of the rows of Φ(τ, η)B(η) for
some finite η > τ.
6.31. Prove that the linear time-varying system dx/dt = A(t)x, y = C(t)x is totally observable if and
only if M(t₁, t₀) is positive definite for all t₁ > t₀, where
M(t₁, t₀) = ∫_{t₀}^{t₁} Φ†(t, t₀)C†(t)C(t)Φ(t, t₀) dt
Answers to Supplementary Problems
6.9. No nonzero ξ(t₁) can be reached from ξ(t₀) = 0, so the system is uncontrollable.
6.10. The states belonging to the eigenvalue 2 are unobservable and those belonging to −1 are uncon-
trollable. The transfer function is 3(s + 3)⁻¹, showing only the states belonging to −3 are both
controllable and observable.
6.11. One state is observable and controllable, the other is neither observable nor controllable.
6.13. L₁ ≠ L₂
6.14. Mg # and ^„ 7^
6.15. Many choices are possible, such as
u(t) = Σ_{k=1}^n μ_k {U[t − t₀ − (t₁ − t₀)(k−1)/n] − U[t − t₀ − (t₁ − t₀)k/n]}
where U(t − τ) is a unit step at t = τ; another choice is u(t) = Σ_{k=1}^n μ_k e⁻ᵏᵗ. In both cases the
expression for the μ_k is different from equation (6.4) and must be shown to have an inverse.
6.18. Two Jordan blocks with the same eigenvalue can be controlled if f₁₁f₂₂ − f₁₂f₂₁ ≠ 0, the f's being
the coefficients of u₁ and u₂ in the last rows of the Jordan blocks.
6.19. x(n+1) = ( −3  1 )x(n) + (  1 )u(n);      y(n) = (1  0)x(n)
               ( −2  0 )       ( −1 )

      Φ(n, k) = (−1)ⁿ⁻ᵏ( −1  1 ) + (−2)ⁿ⁻ᵏ( 2  −1 )
                       ( −2  2 )          ( 2  −1 )

      ( u(0) )     1  ( 13  −5 )( x₁(0) )
      ( u(1) )  =  ―  ( 10  −2 )( x₂(0) )
                   6
6.20. The flow diagram is shown in Fig. 6-14, where z₁ and z₂ are controllable and z₂ and z₃ are observable.
Fig. 6-14
for ζ = 1 and ρ = 0 in T.
6.22. Yes; u(0) = −4, u(1) = 2.
6.23. Yes; x(0) = (…)
6.25. rank R = k, where R = (CB | CAB | ··· | CAⁿ⁻¹B | D)
6.26. H(s) = C⁽³⁾(Is − A⁽³⁾)⁻¹B⁽³⁾[C⁽¹⁾(Is − A⁽¹⁾)⁻¹B⁽¹⁾ + C⁽²⁾(Is − A⁽²⁾)⁻¹B⁽²⁾]
6.27. It is controllable but not observable.
chapter 7
Canonical Forms
of the State Equation
7.1 INTRODUCTION TO CANONICAL FORMS
The general state equation dx/dt = A(t)x + B(t)u appears to have all n² elements of
the A(t) matrix determine the time behavior of x(t). The object of this chapter is to reduce
the number of states to m observable and controllable states, and then to transform the m²
elements of the corresponding A(t) matrix to only m elements that determine the input-
output time behavior of the system. First we look at time-invariant systems, and then at
time-varying systems.
7.2 JORDAN FORM FOR TIME-INVARIANT SYSTEMS
Section 2.4 showed how equation (2.21) in Jordan form can be found from the transfer
function of a time-invariant system. For single-input systems, to go directly from the
form dx/dt = Ax + bu, let x = Tz so that dz/dt = Jz + T⁻¹bu where T⁻¹AT = J. The
matrix T is arbitrary to within n constants, so that T = T₀K as defined in Problem 4.41,
page 97.
For distinct eigenvalues, dz/dt = Λz + K⁻¹T₀⁻¹bu, where K is a diagonal matrix with
elements k_ii on the diagonal. Defining g = T₀⁻¹b, the equation for each state is dz_i/dt =
λ_iz_i + (g_iu/k_ii). If g_i = 0, the state z_i is uncontrollable (Theorem 6.1) and does not enter
into the transfer function (Theorem 6.7). For controllable states, choose k_ii = g_i. Then
the canonical form of equation (2.16) is attained.
For the case of nondistinct eigenvalues, look at the l × l system of one Jordan block with
one input, dz/dt = Lz + T⁻¹bu. If the system is controllable, it is desired that T⁻¹b = e₁,
as in equation (2.21). Then using T = T₀K, we require T₀⁻¹b = Ke₁ = (a_l a_{l−1} ... a₁)†,
where the a_i are the l arbitrary constants in the T matrix as given in Problem 4.41. In this
manner the canonical form of equation (2.21) can be obtained.
Therefore, by transformation to Jordan canonical form, the uncontrollable and unobservable
states can be found and perhaps omitted from further input-output considerations.
Also, the n^2 elements in the A matrix are transformed to the n eigenvalues that
characterize the time behavior of the system.
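As a numerical illustration of the distinct-eigenvalue case, the following sketch (the 2 × 2 system values are assumed for illustration) scales the eigenvector matrix T_0 by K = diag(g_1, ..., g_n) so that the transformed input vector becomes all ones:

```python
import numpy as np

# system values assumed for illustration; eigenvalues -1 and -3 are distinct
A = np.array([[-1.0, 1.0],
              [ 0.0, -3.0]])
b = np.array([1.0, 2.0])

lam, T0 = np.linalg.eig(A)          # columns of T0 are eigenvectors
g = np.linalg.solve(T0, b)          # g = T0^{-1} b
assert np.all(np.abs(g) > 1e-12)    # g_i != 0 <=> state z_i controllable

T = T0 @ np.diag(g)                 # T = T0 K with k_ii = g_i
Lam = np.linalg.inv(T) @ A @ T      # still diagonal: K commutes with Lambda
bz = np.linalg.solve(T, b)          # transformed input vector K^{-1} T0^{-1} b

assert np.allclose(Lam, np.diag(lam))
assert np.allclose(bz, np.ones(2))  # every controllable state driven with unit gain
```

The scaling leaves the diagonal Λ untouched because a diagonal K commutes with it; only the input distribution changes.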
7.3 REAL JORDAN FORM
Sometimes it is easier to program a computer if all the variables are real. A slight
drawback of the Jordan form is that the canonical states z{t) are complex if A has any
complex eigenvalues. This drawback is easily overcome by a change of variables. We keep
the same Zi as in Section 7.2 when Ai is real, but when Ai is complex we use the following
procedure. Since A is a real matrix, if A is an eigenvalue then its complex conjugate A*
is also an eigenvalue and if t is an eigenvector then its complex conjugate t* is also. Without
loss of generality we can look at two Jordan blocks for the case of complex eigenvalues.
\frac{d}{dt}\begin{pmatrix} z \\ z^* \end{pmatrix} = \begin{pmatrix} L & 0 \\ 0 & L^* \end{pmatrix}\begin{pmatrix} z \\ z^* \end{pmatrix}
147
148 CANONICAL FORMS OF THE STATE EQUATION [CHAP. 7
If Re means "real part of" and Im means "imaginary part of", this is

\frac{d}{dt}\begin{pmatrix} \mathrm{Re}\,z + j\,\mathrm{Im}\,z \\ \mathrm{Re}\,z - j\,\mathrm{Im}\,z \end{pmatrix} = \begin{pmatrix} \mathrm{Re}\,L + j\,\mathrm{Im}\,L & 0 \\ 0 & \mathrm{Re}\,L - j\,\mathrm{Im}\,L \end{pmatrix}\begin{pmatrix} \mathrm{Re}\,z + j\,\mathrm{Im}\,z \\ \mathrm{Re}\,z - j\,\mathrm{Im}\,z \end{pmatrix}
= \begin{pmatrix} \mathrm{Re}\,L\,\mathrm{Re}\,z - \mathrm{Im}\,L\,\mathrm{Im}\,z + j(\mathrm{Re}\,L\,\mathrm{Im}\,z + \mathrm{Im}\,L\,\mathrm{Re}\,z) \\ \mathrm{Re}\,L\,\mathrm{Re}\,z - \mathrm{Im}\,L\,\mathrm{Im}\,z - j(\mathrm{Re}\,L\,\mathrm{Im}\,z + \mathrm{Im}\,L\,\mathrm{Re}\,z) \end{pmatrix}

By equating real and imaginary parts, the system can be rewritten in the "real" Jordan
form as

\frac{d}{dt}\begin{pmatrix} \mathrm{Re}\,z \\ \mathrm{Im}\,z \end{pmatrix} = \begin{pmatrix} \mathrm{Re}\,L & -\mathrm{Im}\,L \\ \mathrm{Im}\,L & \mathrm{Re}\,L \end{pmatrix}\begin{pmatrix} \mathrm{Re}\,z \\ \mathrm{Im}\,z \end{pmatrix}
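A quick numerical check of this change of variables (the matrix values are assumed for illustration): for a real A with eigenvalues σ ± jω, the columns Re t and −Im t of an eigenvector t give a real basis in which A takes the real Jordan block form above.

```python
import numpy as np

# real matrix with complex eigenvalues sigma +/- j*omega (values assumed)
A = np.array([[0.5, -2.0],
              [2.0,  0.5]])

lam, vecs = np.linalg.eig(A)
i = int(np.argmax(lam.imag))        # pick the eigenvalue with positive imaginary part
sigma, omega = lam[i].real, lam[i].imag
t = vecs[:, i]                      # complex eigenvector

# real change of basis replacing the conjugate pair (t, t*)
T = np.column_stack([t.real, -t.imag])
L_real = np.linalg.inv(T) @ A @ T

# real Jordan block [[Re L, -Im L], [Im L, Re L]] for a 1x1 complex block
assert np.allclose(L_real, np.array([[sigma, -omega],
                                     [omega,  sigma]]))
```

The result is independent of how the eigenvector is normalized, since any complex scaling of t only rotates the real pair (Re t, Im t).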
7.4 CONTROLLABLE AND OBSERVABLE FORMS FOR
TIME-VARYING SYSTEMS
We can easily transform a linear time-invariant system into a controllable or observable
subsystem by transformation to Jordan form. However, this cannot be done for time-
varying systems because they cannot be transformed to Jordan form, in general. In this
section we shall discuss a method of transformation to controllable and/or observable sub-
systems without solution of the transition matrix. Of course this method is applicable to
time-invariant systems as a subset of time-varying systems.
We consider the transformation of the time-varying system
dx/dt = A^x(t)x + B^x(t)u,   y = C^x(t)x      (7.1)

into controllable and observable subsystems. The procedure for transformation can be
extended to the case y = C^x(t)x + D^x(t)u, but for simplicity we take D^x(t) = 0. We adopt
the notation of placing a superscript on the matrices A, B and C to refer to the state variable,
because we shall make many transformations of the state variable.

In this chapter it will always be assumed that A(t), B(t) and C(t) are differentiable n-2,
n-1 and n-1 times, respectively. The transformations found in the following sections
lead to the first and second ("phase-variable") canonical forms (2.6) and (2.9) when applied
to time-invariant systems as a special case.
Before proceeding, we need two preliminary theorems.
Theorem 7.1: If the system (7.1) has a controllability matrix Q^x(t), and an equivalence
transformation x(t) = T(t)z(t) is made, where T(t) is nonsingular and differentiable,
then the controllability matrix of the transformed system is
Q^z(t) = T^{-1}(t)Q^x(t), and rank Q^z(t) = rank Q^x(t).

Proof: The transformed system is dz/dt = A^z(t)z + B^z(t)u, where A^z = T^{-1}(A^x T - dT/dt)
and B^z = T^{-1}B^x. Since Q^x = (Q_1^x | Q_2^x | ... | Q_n^x) and Q^z is similarly partitioned, we need to
show Q_k^z = T^{-1}Q_k^x for k = 1, 2, ..., n, using induction. First, Q_1^z = B^z = T^{-1}B^x = T^{-1}Q_1^x.
Then, assuming Q_{k-1}^z = T^{-1}Q_{k-1}^x,

Q_k^z = -A^z Q_{k-1}^z + dQ_{k-1}^z/dt
     = -T^{-1}(A^x T - dT/dt)(T^{-1}Q_{k-1}^x) + (dT^{-1}/dt)Q_{k-1}^x + T^{-1}\,dQ_{k-1}^x/dt
     = T^{-1}(-A^x Q_{k-1}^x + dQ_{k-1}^x/dt) = T^{-1}Q_k^x

for k = 2, 3, ..., n. Now Q^z(t) = T^{-1}(t)Q^x(t), and since T(t) is nonsingular for any t,
rank Q^z(t) = rank Q^x(t) for all t.
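For the time-invariant special case (dT/dt = 0), Theorem 7.1 is easy to check numerically; the system and transformation below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# time-invariant test system (values assumed for illustration)
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 2.0]])
b = np.array([[0.0], [1.0], [1.0]])

def controllability(A, B):
    # Q_1 = B, Q_k = -A Q_{k-1} (the dQ/dt term vanishes for constant matrices)
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(-A @ cols[-1])
    return np.hstack(cols)

Qx = controllability(A, b)

# constant equivalence transformation x = T z
T = rng.random((3, 3)) + 3.0 * np.eye(3)        # almost surely nonsingular
Az = np.linalg.inv(T) @ A @ T                    # dT/dt = 0 for constant T
bz = np.linalg.inv(T) @ b
Qz = controllability(Az, bz)

assert np.allclose(Qz, np.linalg.inv(T) @ Qx)    # Q^z = T^{-1} Q^x
assert np.linalg.matrix_rank(Qz) == np.linalg.matrix_rank(Qx)
```

Note that this particular test system happens to be uncontrollable (rank 2), which the theorem handles just as well: the rank is preserved, controllable or not.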
It is reassuring to know that the controllability of a system cannot be altered merely
by a change of state variable. As we should expect, the same holds for observability.
Theorem 7.2: If the system (7.1) has an observability matrix P^x(t), then an equivalence
transformation x(t) = T(t)z(t) gives P^z(t) = P^x(t)T(t) and rank P^z(t) =
rank P^x(t).
The proof is similar to that of Theorem 7.1.
Use of Theorem 7.1 permits construction of a T^c(t) that separates (7.1) into its controllable
and uncontrollable states. Using the equivalence transformation x(t) = T^c(t)z(t),
(7.1) becomes

\frac{d}{dt}\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} A_{11}(t) & A_{12}(t) \\ 0 & A_{22}(t) \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} + \begin{pmatrix} B_1(t) \\ 0 \end{pmatrix} u      (7.2)

where the subsystem

dz_1/dt = A_{11}(t)z_1 + B_1(t)u      (7.3)

is of order n_1 ≤ n and has a controllability matrix Q^{z_1}(t) = (Q_1^{z_1} | Q_2^{z_1} | ... | Q_{n_1}^{z_1}) with
rank n_1 (e.d.). This shows z_1 is controllable and z_2 is uncontrollable in (7.2).

The main problem is to keep T^c(t) differentiable and nonsingular everywhere, i.e. for all
values of t. Also, we will find Q^z such that it has n - n_1 zero rows.
Theorem 7.3: The system (7.1) is reducible to (7.2) by an equivalence transformation if
and only if Q^x(t) has rank n_1 (e.d.) and can be factored as Q^x = V_1(S | R),
where V_1 is an n × n_1 differentiable matrix with rank n_1 everywhere, S is
an n_1 × mn_1 matrix with rank n_1 (e.d.), and R is any n_1 × m(n - n_1) matrix.

Note here we do not say how the factorization Q^x = V_1(S | R) is obtained.
Proof: First assume (7.1) can be transformed by x(t) = T^c(t)z(t) to the form of (7.2).
Using induction, Q_1^z = B^z = \begin{pmatrix} Q_1^{z_1} \\ 0 \end{pmatrix}, and if Q_i^z = \begin{pmatrix} Q_i^{z_1} \\ 0 \end{pmatrix} then for i = 1, 2, ...,

Q_{i+1}^z = -A^z Q_i^z + dQ_i^z/dt = \begin{pmatrix} -A_{11}Q_i^{z_1} + dQ_i^{z_1}/dt \\ 0 \end{pmatrix}      (7.4)

Therefore

Q^z = \begin{pmatrix} Q^{z_1} & F \\ 0 & 0 \end{pmatrix}

where F(t) is the n_1 × m(n - n_1) matrix manufactured by the iteration process (7.4) for
i = n_1, n_1 + 1, ..., n - 1. Since Q^{z_1} has rank n_1 (e.d.), Q^z must also. Use of Theorem 7.1 and
the nonsingularity of T^c shows Q^x has rank n_1 (e.d.). Furthermore, let T^c(t) = (T_1(t) | T_2(t)),
so that

Q^x(t) = T^c(t)Q^z(t) = (T_1(t) | T_2(t))\begin{pmatrix} Q^{z_1} & F \\ 0 & 0 \end{pmatrix} = (T_1 Q^{z_1} | T_1 F) = T_1(Q^{z_1} | F)

Since T_1(t) is an n × n_1 differentiable matrix with rank n_1 everywhere and Q^{z_1} is an n_1 × mn_1
matrix with rank n_1 (e.d.), the "only if" part of the theorem has been proven.
For the proof in the other direction, since Q^x factors, then

Q^x(t) = (V_1(t) | V_2(t))\begin{pmatrix} S & R \\ 0 & 0 \end{pmatrix}

where V_2(t) is any set of n - n_1 differentiable columns making V(t) = (V_1 | V_2) nonsingular.
But what is the system corresponding to the controllability matrix on the right? From
Theorem 6.10, Q_1^z = B^z. Also, partitioning the recursion Q_{k+1}^z = -A^z Q_k^z + dQ_k^z/dt
according to this block structure, the lower rows give

0 = -A_{21}(t)S(t)

Therefore, since S(t) has rank n_1 (e.d.) and A_{21}(t) is continuous, by
Problem 3.11, A_{21}(t) = 0. Therefore the transformation V(t) = (V_1(t) | V_2(t)) is the required
equivalence transformation.
The dual relationship for observability is x(t) = T^o(t)z(t), which transforms (7.1) into

\frac{d}{dt}\begin{pmatrix} z_3 \\ z_4 \end{pmatrix} = \begin{pmatrix} A_{33}(t) & 0 \\ A_{43}(t) & A_{44}(t) \end{pmatrix}\begin{pmatrix} z_3 \\ z_4 \end{pmatrix} + \begin{pmatrix} B_3(t) \\ B_4(t) \end{pmatrix} u,   y = (C_3(t) | 0)\begin{pmatrix} z_3 \\ z_4 \end{pmatrix}      (7.5)

where the subsystem

dz_3/dt = A_{33}(t)z_3,   y = C_3(t)z_3      (7.6)

is of order n_3 ≤ n and has an observability matrix P^{z_3}(t) with rank n_3 (e.d.). Then P^z(t)
has n - n_3 zero columns.
Theorem 7.4: The system (7.1) is reducible to (7.5) by an equivalence transformation if
and only if P^x has rank n_3 (e.d.) and factors as P^x = \begin{pmatrix} S \\ R \end{pmatrix} R_3, where R_3 is an
n_3 × n differentiable matrix with rank n_3 everywhere and S is a kn_3 × n_3
matrix with rank n_3 (e.d.).

The proof is similar to that of Theorem 7.3. Here T^o = \begin{pmatrix} R_3 \\ R_4 \end{pmatrix}^{-1}, where R_4 is any
set of n - n_3 differentiable rows making T^o nonsingular and S is the observability matrix
of the totally observable subsystem.
We can extend this procedure to find states w_1, w_2, w_3 and w_4 that are controllable and
observable, uncontrollable and observable, controllable and unobservable, and uncontrollable
and unobservable, respectively. The system (7.1) is transformed into

\frac{d}{dt}\begin{pmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} & 0 & 0 \\ 0 & A_{22} & 0 & 0 \\ A_{31} & A_{32} & A_{33} & A_{34} \\ 0 & A_{42} & 0 & A_{44} \end{pmatrix}\begin{pmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \end{pmatrix} + \begin{pmatrix} B_1 \\ 0 \\ B_3 \\ 0 \end{pmatrix} u,   y = (C_1^w\ C_2^w\ 0\ 0)w      (7.7)

in which w_i is an n_i-vector for i = 1, 2, 3, 4 and where the subsystem
\frac{d}{dt}\begin{pmatrix} w_1 \\ w_3 \end{pmatrix} = \begin{pmatrix} A_{11} & 0 \\ A_{31} & A_{33} \end{pmatrix}\begin{pmatrix} w_1 \\ w_3 \end{pmatrix} + \begin{pmatrix} B_1 \\ B_3 \end{pmatrix} u      (7.8)

has a controllability matrix of rank n_1 + n_3 ≤ n (e.d.), and where the subsystem

\frac{d}{dt}\begin{pmatrix} w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix}\begin{pmatrix} w_1 \\ w_2 \end{pmatrix},   y = (C_1^w\ C_2^w)\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}

has an observability matrix of rank n_1 + n_2 ≤ n (e.d.). Hence these subsystems
are totally controllable and totally observable, respectively. Clearly if such a system as
(7.7) can be found, the states w_1, w_2, w_3, w_4 will be as desired because the flow diagram of
(7.7) shows w_2 and w_4 disconnected from the control and w_3 and w_4 disconnected from the
output.
Theorem 7.5: The system (7.1) is reducible to (7.7) by an equivalence transformation if
and only if P^x has rank n_1 + n_2 (e.d.) and factors as P^x = R^1 U_1 + R^2 U_2, and
Q^x has rank n_1 + n_3 (e.d.) and factors as Q^x = V_1 S^1 + V_3 S^3.

Here R^i(t) is a kn × n_i matrix with rank n_i (e.d.), S^i(t) is an n_i × mn matrix with rank n_i (e.d.),
U_i(t) is an n_i × n differentiable matrix with rank n_i everywhere, V_i(t) is an n × n_i differentiable
matrix with rank n_i everywhere, and U_i(t)V_j(t) = δ_{ij}I_{n_i}. Furthermore, the ranks of R^i and S^i
must be such that the controllability and observability matrices of (7.8) have the correct
rank.
Proof: First assume (7.1) can be transformed to (7.7). By reasoning similar to (7.4),

Q^w = \begin{pmatrix} Q_{11} & Q_{13} & F_{12} & F_{14} \\ 0 & 0 & 0 & 0 \\ Q_{31} & Q_{33} & F_{32} & F_{34} \\ 0 & 0 & 0 & 0 \end{pmatrix},   P^w = \begin{pmatrix} P_{11} & P_{12} & 0 & 0 \\ P_{21} & P_{22} & 0 & 0 \\ G_{31} & G_{32} & 0 & 0 \\ G_{41} & G_{42} & 0 & 0 \end{pmatrix}

so that Q^w and P^w have rank n_1 + n_3 (e.d.) and n_1 + n_2 (e.d.), respectively. Let

T(t) = (V_1 | V_2 | V_3 | V_4),   T^{-1}(t) = \begin{pmatrix} U_1 \\ U_2 \\ U_3 \\ U_4 \end{pmatrix}

Then

Q^x = V_1(Q_{11}\ Q_{13}\ F_{12}\ F_{14}) + V_3(Q_{31}\ Q_{33}\ F_{32}\ F_{34})

and

P^x = (P_{11}^T\ P_{21}^T\ G_{31}^T\ G_{41}^T)^T U_1 + (P_{12}^T\ P_{22}^T\ G_{32}^T\ G_{42}^T)^T U_2

which is the required form.

Now suppose P^x and Q^x factor in the required manner. Then, by reasoning similar to
the proof of Theorem 7.3, R^i = (P_{1i}^T\ P_{2i}^T\ G_{3i}^T\ G_{4i}^T)^T for i = 1, 2 and S^i = (Q_{i1}\ Q_{i3}\ F_{i2}\ F_{i4})
for i = 1, 3, and A_{13}, A_{14}, A_{21}, A_{23}, A_{24}, A_{41} and A_{43} have all zero elements.
Theorem 7.5 leads to the following construction procedure:
1. Factor P^x Q^x = R^1 S^1 to obtain R^1 and S^1.
2. Solve P^x V_1 = R^1 and U_1 Q^x = S^1 for V_1 and U_1.
3. Check that U_1 V_1 = I_{n_1}.
4. Factor P^x - R^1 U_1 = R^2 U_2 and Q^x - V_1 S^1 = V_3 S^3 to obtain R^2, U_2, V_3 and S^3.
5. Find the reciprocal basis vectors of U_2 to form V_2.
6. Find V_4 as any set of n_4 differentiable columns making T(t) nonsingular.
7. Check that U_i V_j = δ_{ij}I_{n_i}.
8. Form T(t) = (V_1 V_2 V_3 V_4).

Unfortunately, factorization and finding sets of differentiable columns to make T(t)
nonsingular are not easy in general.
Example 7.1.
Given P^x = \begin{pmatrix} \sin t & 1 \\ \sin t & 1 \end{pmatrix}. Obviously rank P^x = 1 and it is factored by inspection, but suppose we
try to mechanize the procedure for the general case by attempting elementary column operations analogous
to Example 3.13, page 44. Then

\begin{pmatrix} \sin t & 1 \\ \sin t & 1 \end{pmatrix}\begin{pmatrix} (\sin t)^{-1} & -(\sin t)^{-2} \\ 0 & (\sin t)^{-1} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}

The matrix on the right is perfect for P^z, but the transformation matrix is not differentiable at t = iπ for
i = ..., -1, 0, 1, ...
However, if α(t) and β(t) are analytic functions with no common zeros, let

E(t) = \begin{pmatrix} \alpha(t) & -\beta(t) \\ \beta(t) & \alpha(t) \end{pmatrix}

Then (α(t)\ β(t))E(t) = (α^2(t) + β^2(t)\ \ 0), and E(t) is always nonsingular and differentiable.
This gives us a means of attempting a factorization even if not all the elements are analytic.
If α(t) and β(t) are analytic but have common zeros ζ_1, ζ_2, ..., ζ_k, ..., the matrix E(t)
can be fixed up as

E(t) = \begin{pmatrix} \alpha(t) & -\beta(t) \\ \beta(t) & \alpha(t) \end{pmatrix}\prod_k (t - \zeta_k)^{-p_k}\gamma_k(t)

where p_k is the order of their common zero ζ_k and γ_k(t) is a convergence factor.
Example 7.2.
Let α(t) = t^2 and β(t) = t^3. Their common zero is ζ_1 = 0, with order 2 = p_1; for a single zero no
convergence factor is needed, so take γ_1 = 1. Then

E(t) = \begin{pmatrix} t^2 & -t^3 \\ t^3 & t^2 \end{pmatrix} t^{-2} = \begin{pmatrix} 1 & -t \\ t & 1 \end{pmatrix}

Note E(0) is nonsingular.
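The effect of the fixed-up E(t) of Example 7.2 can be checked numerically: multiplying (α β) on the right annihilates the second entry while E(t) stays nonsingular, since det E(t) = 1 + t² never vanishes.

```python
import numpy as np

# alpha(t) = t^2, beta(t) = t^3 share a zero of order 2 at t = 0;
# dividing the rotation-like matrix by t^2 removes the singularity
def E(t):
    return np.array([[1.0, -t],
                     [t,   1.0]])   # = [[t^2, -t^3], [t^3, t^2]] / t^2

for t in np.linspace(-2.0, 2.0, 9):
    alpha, beta = t**2, t**3
    row = np.array([alpha, beta]) @ E(t)
    assert abs(row[1]) < 1e-12                  # second entry annihilated
    assert np.isclose(row[0], t**2 * (1 + t**2))  # first entry carries the rank
    assert np.linalg.det(E(t)) >= 1.0           # det = 1 + t^2, never zero
```

This is exactly the property needed to carry out the factorization P^x T = (P_1 | 0) without losing differentiability at the common zero.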
Using repeated applications of this form of elementary row or column operation, it is
obvious that we can find T(t) = E_n(t)E_{n-1}(t) ··· E_1(t) such that P^x T = (P_1 | 0), and similarly
for Q^x, if P^x or Q^x is analytic and of fixed rank r ≤ n. Also, denoting T^{-1} = \begin{pmatrix} U_1 \\ U_2 \end{pmatrix},
then P^x factors as P_1 U_1.
The many difficulties encountered if P^x or Q^x changes rank should be noted. Design of
filters or controllers in this case can become quite complicated.
7.5 CANONICAL FORMS FOR TIME-VARYING SYSTEMS
As discussed in Section 5.3, no form analogous to the Jordan form exists for a general
time-varying linear system. Therefore we shall study transformations to forms analogous
to the first and second canonical forms of Section 2.3.
Consider first the transformation of a single-input, time-varying system

dx/dt = A(t)x + b(t)u      (7.9)

to a form analogous to (2.6),

\frac{dz}{dt} = \begin{pmatrix} -\alpha_1(t) & 1 & 0 & \cdots & 0 \\ -\alpha_2(t) & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ -\alpha_{n-1}(t) & 0 & 0 & \cdots & 1 \\ -\alpha_n(t) & 0 & 0 & \cdots & 0 \end{pmatrix} z + \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} u      (7.10)

Theorem 7.6: The system (7.9) can be transformed to (7.10) by an equivalence transformation
if and only if Q^x(t), the controllability matrix of (7.9), is differentiable
and has rank n everywhere.
Note this implies (7.9) must be more than totally controllable to be put in the form (7.10),
in that rank Q = n everywhere, not just n (e.d.). However, using the methods of the previous
section, the totally controllable states can be found to form a subsystem that can be put
in canonical form.
Proof: This is proven by showing that the needed equivalence transformation is T(t) =
Q^x(t)K, where K is a nonsingular constant matrix. First let x = Q^x(t)w, where the n × n
matrix Q^x(t) is partitioned as (q_1^x | ... | q_n^x). Then

\frac{d\mathbf{w}}{dt} = \begin{pmatrix} 0 & 0 & \cdots & 0 & (-1)^n\alpha_n \\ -1 & 0 & \cdots & 0 & (-1)^{n-1}\alpha_{n-1} \\ 0 & -1 & \cdots & 0 & (-1)^{n-2}\alpha_{n-2} \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & -1 & -\alpha_1 \end{pmatrix} w + \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} u      (7.11)

This is true because b^x = Q^x b^w = q_1^x, so that b^w = e_1, and A^w = Q^{x\,-1}(A^x Q^x - dQ^x/dt),
so that the relation -A^x q_i^x + dq_i^x/dt = q_{i+1}^x for i = 1, 2, ..., n-1 gives the columns
-e_{i+1} of A^w. Also, A^w e_n = Q^{x\,-1}(A^x q_n^x - dq_n^x/dt). Setting w = Kz, where K is the
constant matrix with antidiagonal elements K_{n+1-j,\,j} = (-1)^{n-j} and zeros elsewhere, will
give the desired form (7.10).

If the system can be transformed to (7.10) by an equivalence transformation, then from
Theorem 7.1, Q^x = TQ^z. Using Theorem 6.10 on (7.10), Q^z = K^{-1}, which has rank n everywhere,
and T by definition has rank n everywhere, so Q^x must have rank n everywhere.
Now we consider the transformation to the second canonical form of Section 2.3, which
is also known as phase-variable canonical form.
\frac{dz}{dt} = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ a_1(t) & a_2(t) & a_3(t) & \cdots & a_n(t) \end{pmatrix} z + \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} u      (7.12)
The controllability matrix Q^z of this system is triangular, with zeros above the antidiagonal
and elements (-1)^{k-1} down the antidiagonal; its columns are generated by the recursion
Q_{k+1}^z = -A^z Q_k^z + dQ_k^z/dt of Theorem 6.10, so that the entries q_{ik} below the antidiagonal
are sums of products of the a_i(t) and their derivatives. For example,

Q_2^z = -e_{n-1} - a_n e_n,   Q_3^z = e_{n-2} + a_n e_{n-1} + (a_{n-1} + a_n^2 - da_n/dt)e_n

One further step of the recursion produces the column q_{n+1}^z used in the proof below.
Theorem 7.7: The system (7.9) can be transformed to (7.12) by an equivalence transformation
if and only if Q^x(t), the controllability matrix of (7.9), is differentiable
and has rank n everywhere.

Proof: The transformation matrix is T = Q^x Q^{z\,-1}, and Q^z has rank n everywhere.
Therefore the proof proceeds similarly to that of Theorem 7.6, transforming to (7.11) and
then setting z = Q^z w. To determine the a_i(t) of (7.12) from the α_i(t) obtained from (7.11), note
that -A^z Q^z + dQ^z/dt = -Q^z A^w, written out in its columns, gives

(q_2^z | q_3^z | ... | q_n^z | q_{n+1}^z) = (q_2^z | q_3^z | ... | q_n^z | -Q^z A^w e_n)

Therefore q_{n+1}^z = -Q^z A^w e_n, which gives a set of relations that can be solved recursively
for the a_i(t) in terms of the α_i(t) of (7.11).
Example 7.3.
Suppose we have a second-order system. Then

Q^z = \begin{pmatrix} 0 & -1 \\ 1 & -a_2 \end{pmatrix}

and

q_3^z = \begin{pmatrix} a_2 \\ a_1 + a_2^2 - da_2/dt \end{pmatrix},   -Q^z A^w e_2 = \begin{pmatrix} -\alpha_1 \\ -\alpha_2 - a_2\alpha_1 \end{pmatrix}

By equating the two expressions it is possible to find, recursively, a_2 = -α_1 and a_1 = -α_2 - dα_1/dt.
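For constant coefficients (so the dα/dt terms drop), the relations of Example 7.3 can be checked numerically; the α values below are assumed for illustration:

```python
import numpy as np

# Example 7.3 numerically, constant coefficients (values assumed)
alpha1, alpha2 = 0.7, -1.3

# form (7.11) for n = 2: subdiagonal -1, last column (alpha2, -alpha1), input e1
Aw = np.array([[0.0,    alpha2],
               [-1.0, -alpha1]])
bw = np.array([[1.0], [0.0]])

# phase-variable coefficients from Example 7.3 (constant case)
a2 = -alpha1
a1 = -alpha2
Az = np.array([[0.0, 1.0],
               [a1,  a2]])

# z = Q^z w with Q^z the controllability matrix of (7.12) for n = 2
Qz = np.array([[0.0, -1.0],
               [1.0, -a2]])

assert np.allclose(Az, Qz @ Aw @ np.linalg.inv(Qz))   # A^z Q^z = Q^z A^w
assert np.allclose(Qz @ bw, np.array([[0.0], [1.0]])) # b^z = e_2
```

The similarity relation A^z Q^z = Q^z A^w is the constant-coefficient specialization of -A^z Q^z + dQ^z/dt = -Q^z A^w used in the proof of Theorem 7.7.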
It appears that the conditions of Theorems 7.6 and 7.7 can be relaxed if the b^z(t) of
equation (7.10) or (7.12) is left a general vector function of time instead of e_n. No results
are available at present for this case.

Note that if we are given (7.9), defining y = z_1 in (7.12) permits us to find a corresponding
scalar nth-order equation d^n y/dt^n = a_1 y + a_2\,dy/dt + ··· + a_n\,d^{n-1}y/dt^{n-1} + u.
For the case where u is a vector instead of a scalar in (7.9), a possible approach is to set
all elements in u except one equal to zero; if the resulting Q^x has rank n everywhere,
then the methods developed previously are applicable. If this is not possible or desirable,
the form

\frac{d\mathbf{w}}{dt} = \begin{pmatrix} A_{11}^w & A_{12}^w & \cdots & A_{1l}^w \\ A_{21}^w & A_{22}^w & \cdots & A_{2l}^w \\ \vdots & & & \vdots \\ A_{l1}^w & A_{l2}^w & \cdots & A_{ll}^w \end{pmatrix} w + B^w u      (7.13)

may be obtained, where the columns of B^w are in general nonzero n-vector functions of time,
each diagonal block A_{ii}^w is of the form (7.11), and for i ≠ j each off-diagonal block A_{ij}^w
has all zero elements except in its last column.
This form is obtained by calculating m Q^x matrices, one for each of the m columns of the B
matrix, i.e. treating the system as m single-input systems. Then from any l of these
single-input Q^x matrices, choose the first n_1 columns of the first Q^x matrix, the first n_2
columns of the second Q^x matrix, ..., and the first n_l columns of the lth Q^x matrix, such that
these columns are independent and differentiable and n_1 + n_2 + ··· + n_l = n. In this order
these columns form the T matrix. The proof is similar to that of Theorem 7.6.
To transform from this form to a form analogous to the first canonical form {7.10),
use a constant matrix similar to the K matrix used in the proof of Theorem 7.6. However,
as yet a general theory of reduction of time-varying linear systems has not been developed.
Solved Problems
7.1. Transform
x(m + 1) =
x(m) +
to Jordan form.
From Example 4.8, page 74, and Problem 4.41, page 97,
u{m)
«2
«1
K =
Then
/?!
from which a_2 = 2, a_1 = -1, β_1 = -3 are obtained. The proper transformation T = T_0 K is then
/ 1 0\ /-I 2 0\ /-I 2
T = ToK = ( 1 1 1 1/ -1 J = ( -1 1
V-1 -1/ \ -3/ \ 1 -2
Substitution of x = Tz in the system equation then gives the canonical form
z(m+1) = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix} z(m) + \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} u(m)
7.2. Transform
/ 1 1 2\ 0'
dxldt = -1 1 -2 X + I M
\ -0 ,1/ \i.
to real Jordan form.
From Problem 4.8, page 93, and Problem 4.41, page 97,
Substitution into the system equation gives
dildt =
Then a
+ ( (l-i)//3 \u
,(1 + j)/p*J
-2 and p — 1—j. Putting this in real form gives
Re«2 j + I 1 )"
Im 22/ \ .
7.3. Using the methods of Section 7.4, reduce
dx/dt =
1 0\ /I
2 2 3 X + 2 \u,
2 0-1/ \-2.
y = (1\ \ 1\ \ 2)x
(i) to a controllable system, (ii) to an observable system, (iii) to a controllable
and observable system.
(i) Form the controllability matrix Q —
Observe that Q = (b | -Ab | A^2 b), in accord with using the form of Theorem 6.10 in the
time-invariant case, and not Q = (b | Ab | A^2 b), which is the form of Theorem 6.8. Performing
elementary row operations to make the bottom row of Q^x zero gives
Q =
Then the required transformation is x = Tz where
1
2
1
1
1
and
T =
Using this change of variables, dz/dt = T^{-1}(AT - dT/dt)z + T^{-1}bu:

dz/dt = \begin{pmatrix} 0 & 1 & 0 \\ -2 & -1 & 3 \\ 0 & 0 & 2 \end{pmatrix} z + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} u,   y = (-1\ \ -1\ \ 2)z
(ii) Forming the observability matrix gives
/ 1 1 2\ / ^ 2 0\ A -1 0\
P^ = ( -1 2 1 j = ( -1 1 )[ 1 1 ) = P^(TO)-i
\ 1 4 5/ \ 1 5 0/\0 1/
where the factorization is made using elementary column transformations. Using the transformation
x = T^o z,
-1 -3 o\ /-A
dz/dt = ( 2 o)z+ OJM i/ = (12 0)z
-2 -2 1/ \-2/
(iii) Using the construction procedure,
-1 -1 -1
P^Q* = ( 1 1 1
Then
/l-t>3
P^V, = ( -1 2 1 )( 0-21 ) = ( -1 ) = R' gives Vi = -v^i
... \ •^31
and
UiQ^ = (Mil Mi2 "is) ( 2 2 1 = (-1 -1 -1) = SI gives Uj = (1 u^g - 1 M13)
\-2 0-2/
Note U_1 V_1 = 1 for any values of u_13 and v_31. The usual procedure would be to pick a value
for each, but for instructional purposes we retain u_13 and v_31 arbitrary. Then
/O 2 - Mi3 2 - uia"
P^ - RlUi = (0 1 + Mi3 1 + Mi3 ) = f 1 + Mi3 j (0 1 1) = R2U2
\0 5 - Mi3 5 - Mi3^
and
/ 2--y3i -V31 2-i;3i\
Qx_ViSi = / 2-'i;gi -vai 2-Vsi ) = ( 1 )(2--ygi -yg, 2 - Vgi) = VgSa
\-2 + i;3i i;3i -2 + ^31^
Choosing v_31 = 0 and u_13 = 1 gives
/l ^12 l\ / 1 1
T = I ^22 1 j = 1 1
\0 V32 -1/ \M31 M32 M33/
Using U_i V_j = δ_{ij} we can determine
/l 1'
T = I 1 1
Vo 0-1.
The reduced system is then
V3y
y = (1\ \ 1\ \ 0)z

where z_1 is controllable and observable, z_2 is observable and uncontrollable, and z_3 is controllable
and unobservable. Setting v_31 = u_13 = 1 leads to the Jordan form, and setting
v_31 = u_13 = 0 leads to the system found in (ii), which coincidentally happened to be in the
correct controllable form in addition to the observable form.
7.4. Reduce the following system to controllable-observable form.
dxjdt
X +
y = (0\ \ t^2\ \ -1)x
Using the construction procedure following Theorem 7.5,
Qx
PX =
t -5 -16 -2t -6t-40'
2
Then from P»^Q* = R^Si,
R^1 = \begin{pmatrix} -1 \\ -3t^2 - 1 \\ -3t^3 + 2t^2 - 12t - 1 \\ -3t^4 + 2t^3 - 24t^2 + 15t - 18 \end{pmatrix},   S^1 = (2\ \ 2\ \ 2\ \ 2)
Solving P^Vi = Ri and UjQ^ = S^ gives
Vi =
Ui = (0 1 0)
Factoring P^-RiUi and Q^-ViSi gives
U2 = (0 1)
S3 = (t -5 -16 -2i -6t-40)
R^2 = \begin{pmatrix} 0 \\ -3t^2 - t \\ -3t^3 + 2t^2 - 12t - 1 \\ -3t^4 + 2t^3 - 24t^2 + 15t - 18 \end{pmatrix}

Then V_2 is found as the reciprocal basis of U_2, V_4 as any set of differentiable columns making
T nonsingular, and T = (V_1 | V_2 | V_3 | V_4). It is interesting to note that this equivalence
transformation will also put the system in the form (7.7).
7.5. Reduce the following system to controllable form:
sint J \coatJ
■ —j— —cos t
First we calculate Q^x and use elementary row operations to obtain

E(t)Q^x = \begin{pmatrix} t & \cos t \\ -\cos t & t \end{pmatrix}\begin{pmatrix} t & t\cos t \\ \cos t & \cos^2 t \end{pmatrix} = \begin{pmatrix} t^2 + \cos^2 t & t^2\cos t + \cos^3 t \\ 0 & 0 \end{pmatrix}

The required transformation is

T = E^{-1}(t) = \frac{1}{t^2 + \cos^2 t}\begin{pmatrix} t & -\cos t \\ \cos t & t \end{pmatrix}
7.6. Put the time-invariant system
/ 1 0\ /1\
x(m + l) =223 x(m) + 2 U(m)
\-2 0-1/ \2/
(i) into first canonical form {7.10) and (ii) into phase-variable canonical form (7.12).
Note this is of the same form as the system of Problem 7.3, except that this is a discrete-time
system. Since the controllability matrix has the same form in the time-invariant case, the pro-
cedures developed there can be used directly. (See Problem 7.19 for time-varying discrete-time
systems.)
From part (i) of Problem 7.3, the system can be put in the form
z(m+1) = \begin{pmatrix} 0 & 1 & 0 \\ -2 & -1 & 3 \\ 0 & 0 & 2 \end{pmatrix} z(m) + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} u(m)
Therefore the best we can do is put the controllable subsystem

z_1(m+1) = \begin{pmatrix} 0 & 1 \\ -2 & -1 \end{pmatrix} z_1(m) + \begin{pmatrix} 1 \\ 0 \end{pmatrix} u(m)

into the desired form. The required transformation is
T = Q^x K = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -2 & 0 \end{pmatrix}

to the first canonical form

z_c(m+1) = \begin{pmatrix} -1 & 1 \\ -2 & 0 \end{pmatrix} z_c(m) + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u(m)
To obtain the phase-variable canonical form,

T = Q^x(Q^z)^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 1 \end{pmatrix}^{-1}

where q_22 = -a_2 = α_1, from Example 7.3. From the relation q_{n+1}^z = -Q^z A^w e_n we obtain
the phase-variable canonical form

z_p(m+1) = \begin{pmatrix} 0 & 1 \\ -2 & -1 \end{pmatrix} z_p(m) + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u(m)

By chance, this also happens to be in the form of the first canonical form.
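The chain Q → K → T of Problem 7.6 can be verified numerically for the controllable subsystem (as read here, A = [[0, 1], [−2, −1]] with b = (1, 0)ᵀ):

```python
import numpy as np

# controllable subsystem of Problem 7.6 (values as read here)
A = np.array([[0.0,  1.0],
              [-2.0, -1.0]])
b = np.array([[1.0], [0.0]])

Q = np.hstack([b, -A @ b])                 # Theorem 6.10 form: (b | -Ab)
K = np.array([[0.0,  1.0],
              [-1.0, 0.0]])                # constant matrix of Theorem 7.6, n = 2
T = Q @ K

Ac = np.linalg.inv(T) @ A @ T              # first canonical form
bc = np.linalg.solve(T, b)

# first column -alpha_i, superdiagonal 1, input e_n
assert np.allclose(Ac, np.array([[-1.0, 1.0], [-2.0, 0.0]]))
assert np.allclose(bc, np.array([[0.0], [1.0]]))
```

Since the subsystem here is constant, the same arithmetic applies to the discrete-time problem unchanged.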
7.7. Put the time-varying system
ft sini\ / 2\
dx/d« = h _TL )x + (-1 )m for i^O
(i) into first canonical form, (ii) into second canonical form, and (iii) find a scalar
second-order equation with the given state space representation.
To obtain the first canonical form,
^ - <^ - {:, -''T%: I) - CT' -\
from which the first canonical form (7.10) is obtained, where

\begin{pmatrix} \alpha_1(t) \\ \alpha_2(t) \end{pmatrix} = \frac{1}{6 + 2t - \sin t}\begin{pmatrix} 8 - 4t - 2t^2 + (t-1)\sin t - \cos t \\ 6 - 6t - 2t^2 - (t+6)\sin t - 3\cos t + \sin^2 t \end{pmatrix}
To obtain the phase-variable form, proceed as in Example 7.3, giving a_2 = -α_1 and
a_1 = -α_2 - dα_1/dt, from which

dz_p/dt = \begin{pmatrix} 0 & 1 \\ a_1(t) & a_2(t) \end{pmatrix} z_p + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u

The second-order scalar equation corresponding to this is

d^2y/dt^2 + \alpha_1\,dy/dt + (\alpha_2 + d\alpha_1/dt)y = u

where y = z_{p1}.
7.8. Transform to canonical form the multiple-input system
dx/dt = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 1 & 0 \\ 1 & -4 & 3 \end{pmatrix} x + \begin{pmatrix} 0 & 1 \\ 0 & 1 \\ 1 & 1 \end{pmatrix} u
To obtain the form (7.13), we calculate the two Q^ matrices resulting from each of the columns
of the B^ matrix separately:
Q^x(u_1\ \text{only}) = \begin{pmatrix} 0 & 1 & -4 \\ 0 & 0 & 0 \\ 1 & -3 & 8 \end{pmatrix},   Q^x(u_2\ \text{only}) = \begin{pmatrix} 1 & -2 & 4 \\ 1 & -1 & 1 \\ 1 & 0 & -2 \end{pmatrix}
Note both of these matrices are singular. However, choosing T as the first two columns of
Q^x(u_1 only) and the first column of Q^x(u_2 only) gives
so that
dvildt —
Also, T could have been chosen as the first column of Q^x(u_1 only) and the first two columns of
Q^x(u_2 only).
To transform to a form analogous to the first canonical form, let w = Kz where
Then
K =
0'
dzldt =
-1 4 llz + (0 0)m
2 -4 0/ Vl 0>
Supplementary Problems
7.9. Transform dx/dt = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 1 & 0 \\ 1 & -4 & 3 \end{pmatrix} x + \begin{pmatrix} -1 \\ 0 \\ 0 \end{pmatrix} u to Jordan form.
7.10. Transform rfx/dt = /~^ i)''"*'(-4)" *« realJordan form.
7.11. Using the methods of Section 7.4, reduce
(i) to a controllable system, (ii) to an observable system, (iii) to an observable and controllable
system.
7.12. Prove Theorem 7.2, page 149.
7.13. Prove Theorem 7.4, page 150.
7.14. Show that the factorization requirement of Theorem 7.3 can be dropped if T(t) can be nonsingular
and differentiable for times t everywhere dense in [t_0, t_1].
7.15. Consider the system \frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} f_1(t) \\ f_2(t) \end{pmatrix} u, where f_1(t) = 1 - \cos t for 0 ≤ t ≤ 2π and zero
elsewhere, and f_2(t) = 0 for 0 ≤ t ≤ 2π and 1 - \cos t elsewhere. Can this system be put in the
form of equation (7.3)?
7.16. Find the observable states of the system
^4 6 2 8^
dx/dt =12 8 4 6'" 2/ = (1 1 1 l)x
^8 4 6
7.17. Check that the transformation of Problem 7.5, page 159, puts the system in the form of equation
(7.2), page 149, by calculating A^z and B^z.
7.18. Develop Theorem 7.3 for time-varying discrete-time systems.
7.19. Develop the transformation to a form similar to equation {7.11) for time-varying discrete-time
systems.
t-1 -i + 2\ /I'
7.20. Reduce the system dx/dt = { -t-2 1 t + 2Jx + (l)M
t -t+1/ \0^
to a system of the form of equation {7.2).
7.21. Given the time-invariant system dx/dt = Ax + e_n u, where the system is in phase-variable canonical
form as given by equation (7.12). Let z = Tx where z is in the Jordan canonical form dz/dt =
Λz + bu and Λ is a diagonal matrix. Show that T is the Vandermonde matrix of eigenvalues.
7.22. Verify the relationship for the q_{ik} in terms of the a_i, following equation (7.12), for a third-order
system.
7.23. Solve for the a_i in terms of the α_i for i = 1, 2, 3 (third-order system) in a manner analogous to
Example 7.3, page 154.
7.24. Transform the system -~ = (-1/2 1 0]x-(-
to phase-variable canonical form.
-11 4 6
1/2 1
-27/2 6 1 ,
7.25. Transform the system d-s.ldt =
to phase-variable canonical form.
7.26. Using the results of Section 7.5, find the transformation x = Tw that puts the system
/ 1 -1 2\ /l 3\^
dx/dt =
/ -3 6\ /l 0\
into the form dw/di = ( -1 -1 w 4- J u
V 1 0/ \0 1/
7.27. Obtain explicit formulas to go to phase-variable canonical form directly in the case of time-invariant
systems.
7.28. Use the duality principle to find a transformation that puts the system dx/dt = A(t)x and y = C(t)x
into the form
\frac{dz}{dt} = \begin{pmatrix} 0 & 0 & \cdots & 0 & a_1(t) \\ 1 & 0 & \cdots & 0 & a_2(t) \\ 0 & 1 & \cdots & 0 & a_3(t) \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & a_n(t) \end{pmatrix} z,   y = (0\ \cdots\ 0\ 1)z
7.29. Prove that ||T|| < ∞ for the transformation to phase-variable canonical form.
Answers to Supplementary Problems
7.9.
1-1
0 β) where β is any number ≠ 0.
-1 2/3.
7.10. T =
4 -8
-4 -12
7.11. There is one controllable and observable state, one controllable and unobservable state, and one
uncontrollable and observable state.
7.15. No. Q^x = V_1(1\ \ 0), but V_1 does not have rank one everywhere.
7.16. The transformation
T =
r4i
r43
puts the system into the form of equation (7.6), for any r_{4i} that make T nonsingular. Also, Jordan
form can be used but is more difficult algebraically.
7.23.
-a3 — Q!i, — «£ — 0^2 "^" 2q:x, — aj — 03 + «£ + «!
7.24. T-i =
-3/2 1 1
5/2 1 -2
-1-1 1,
7.25. T-i
ot fi2t — (>t
A =
1
2e~t -e-t 1,
7.26. T
'1 -1 3^
1
.0 1 2,
7.28. This form is obtained using the same transformation that puts the system dw/dt = A^T(t)w + C^T(t)u
into phase-variable canonical form.
7.29. The elements of Q^{z\,-1} are a linear combination of the elements of Q^z, which are always finite as
determined by the recursion relation.
chapter 8
Relations with Classical Techniques
8.1 INTRODUCTION
Classical techniques such as block diagrams, root locus, error constants, etc., have been
used for many years in the analysis and design of time-invariant single input-single output
systems. Since this type of system is a subclass of the systems that can be analyzed by
state space methods, we should expect that these classical techniques can be formulated
within the framework already developed in this book. This formulation is the purpose of
this chapter.
8.2 MATRIX FLOW DIAGRAMS
We have already studied flow diagrams in Chapter 2 as a graphical aid to obtaining the
state equations. The flow diagrams studied in Chapter 2 used only four basic objects
(summer, scalor, integrator, delayer) whose inputs and outputs were scalar functions of
time. Here we consider vector inputs and outputs to these basic objects. In this chapter
these basic objects will have the same symbols, and Definitions 2.1-2.4, pages 16-17, hold with
the following exceptions. A summer has n m-vector inputs u_1(t), u_2(t), ..., u_n(t) and one
output m-vector y(t) = ±u_1(t) ± u_2(t) ± ··· ± u_n(t). A scalor has one m-vector input u(t)
and one output k-vector y(t) = A(t)u(t), where A(t) is a k × m matrix. An integrator has
one m-vector input u(t) and one output m-vector y(t) = y(t_0) + \int_{t_0}^{t} u(\tau)\,d\tau. To denote
vector (instead of purely scalar) time function flow from one basic object to another, thick
arrows will be used.
Example 8.1.
Consider the two input - one output system

\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix},   y = (3\ \ 2)x

This can be diagrammed as in Fig. 8-1.
Fig. 8-1
Also, flow diagrams of transfer functions (block diagrams) can be drawn in a similar
manner for time-invariant systems. We denote the Laplace transform of x(t) as ℒ{x}, etc.
Example 8.2.
The block diagram of the system considered in Example 8.1 is shown in Fig. 8-2.
Fig. 8-2
Using equation (5.56) or proceeding analogously from block
diagram manipulation, this can be reduced to the diagram
of Fig. 8-3 where
H(s) = (6/s\ \ (2s + 3)/s^2)
Fig. 8-3
Vector block diagram manipulations are similar to the scalar case, and are as useful
to the system designer. Keeping the system representation in matrix form is often helpful,
especially when analyzing multiple input-multiple output devices.
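As a numerical cross-check of Example 8.2, the transfer function matrix H(s) = C(sI − A)⁻¹B can be evaluated pointwise; the A, B, C values below (an integrator chain with output (3 2), and B's first column inferred from the 6/s entry) are the values consistent with the entries of H(s):

```python
import numpy as np

# Example 8.1/8.2 matrices (B's first column inferred from the 6/s entry)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])
C = np.array([[3.0, 2.0]])

def H(s):
    # transfer function matrix C (sI - A)^{-1} B
    return C @ np.linalg.solve(s * np.eye(2) - A, B)

for s in (1.0, 2.0, 5.0):
    expected = np.array([[6.0 / s, (2.0 * s + 3.0) / s**2]])
    assert np.allclose(H(s), expected)
```

Solving the linear system (sI − A)x = B column-by-column avoids forming the resolvent inverse explicitly, which is the usual numerical practice for evaluating C(sI − A)⁻¹B.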
8.3 STEADY STATE ERRORS
Knowledge of the type of feedback system that will follow an input with zero steady
state error is useful for designers. In this section we shall investigate steady state errors
of systems in which only the output is fed back (unity feedback). The development can be
extended to nonunity feedback systems, but involves comparing the plant output with a
desired output which greatly complicates the notation (see Problem 8.22). Here we extend
the classical steady state error theory for systems with scalar unity feedback to time-
varying multiple input-multiple output systems. By steady state we mean the asymptotic
behavior of a function for large t. The system considered is diagrammed in Fig. 8-4. The
plant equation is dx/dt = A(t)x + B(t)e, the output is y = C(t)x, and the reference input is
d(t) = y(t) + e(t), where y, d and e are all m-vectors. For this system it will always be
assumed that the zero output is asymptotically stable, i.e.

\lim_{t\to\infty} C(t)\Phi_{A-BC}(t, \tau) = 0

where d\Phi_{A-BC}(t, \tau)/dt = [A(t) - B(t)C(t)]\Phi_{A-BC}(t, \tau) and \Phi_{A-BC}(\tau, \tau) = I. Further, we shall
be concerned only with inputs d(t) that do not drive ||y(t)|| to infinity before t = ∞, so that
we obtain a steady state as t tends to infinity.
Fig. 8-4. Unity Feedback System with Asymptotically Stable Zero Output
166 RELATIONS WITH CLASSICAL TECHNIQUES [CHAP. 8
Theorem 8.1: For the system of Fig. 8-4, \lim_{t\to\infty} e(t) = 0 if and only if d = C(t)w + g,
where dw/dt = A(t)w + B(t)g for all t ≥ t_0, in which g(t) is any function
such that \lim_{t\to\infty} g(t) = 0, and A, B, C are unique up to a transformation
on w.
Proof: Consider two arbitrary functions f(t) and h(t) whose limits may not exist as t
tends to ∞. If \lim_{t\to\infty}[f(t) - h(t)] = 0, then f(t) - h(t) = r(t) for all t, where r(t) is an
arbitrary function such that \lim_{t\to\infty} r(t) = 0. From this, if 0 = \lim_{t\to\infty} e(t) = \lim_{t\to\infty}[d(t) - y(t)],
then for all t,

d(t) = y(t) + r(t) = C(t)\Phi_{A-BC}(t, t_0)x(t_0) + \int_{t_0}^{t} C(t)\Phi_{A-BC}(t, \tau)B(\tau)d(\tau)\,d\tau + r(t)
     = \int_{t_0}^{t} C(t)\Phi_{A-BC}(t, \tau)B(\tau)d(\tau)\,d\tau + g(t) + C(t)\Phi_{A-BC}(t, t_0)w(t_0)

where the change of variables g(t) = r(t) + C(t)\Phi_{A-BC}(t, t_0)[x(t_0) - w(t_0)] is one-to-one for
arbitrary constant w(t_0) because \lim_{t\to\infty} C(t)\Phi_{A-BC}(t, t_0) = 0. This Volterra integral equation
for d(t) is equivalent to the differential equations dw/dt = [A(t) - B(t)C(t)]w + B(t)d and
d = C(t)w + g. Substituting the latter equation into the former gives the set of equations
that generate any d(t) such that \lim_{t\to\infty} e(t) = 0.
Conversely, from Fig. 8-4, dx/dt = A(t)x + B(t)e = [A(t) - B(t)C(t)]x + B(t)d. Assuming
d = C(t)w + g and subtracting dw/dt = A(t)w + B(t)g gives

d(x - w)/dt = [A(t) - B(t)C(t)](x - w)

Then

\lim_{t\to\infty} e = \lim_{t\to\infty}(d - y) = \lim_{t\to\infty}[g - C(t)(x - w)]
= \lim_{t\to\infty} g - [\lim_{t\to\infty} C(t)\Phi_{A-BC}(t, t_0)][x(t_0) - w(t_0)] = 0
From the last part of the proof we see that e(t) = g(t) - C(t)\Phi_{A-BC}(t, t_0)[x(t_0) - w(t_0)]
regardless of what the function g(t) is. Therefore the system dw/dt = A(t)w + B(t)g with
d = C(t)w + g and the system dx/dt = [A(t) - B(t)C(t)]x + B(t)d with e = d - C(t)x are
inverse systems. Another way to see this is that in the time-invariant case we have the
transfer function matrix of the open loop system H(s) = C(sI - A)^{-1}B relating e to y. Then
for zero initial conditions, ℒ{d} = [H(s) + I]ℒ{g} and ℒ{e} = [H(s) + I]^{-1}ℒ{d}, so that
ℒ{g} = ℒ{e}. Consequently the case where g(t) is a constant vector forms a sort of boundary
between functions that grow with time and those that decay. Of course this neglects
those functions (like sin t) that oscillate, for which we can also use Theorem 8.1.
Furthermore, the effect of nonzero initial conditions w(t₀) can be incorporated into g(t). Since we are interested in only the output characteristics of the plant, we need concern ourselves only with observable states. Also, because uncontrollable but observable states of the plant must tend to zero by the assumed asymptotic stability of the closed loop system, we need concern ourselves only with states that are both observable and controllable. Use of equation (6.9) shows that the response due to any w(t₀) is identical to the response due to an input made up of delta functions and derivatives of delta functions. These are certainly included in the class of all g(t) such that lim_{t→∞} g(t) = 0.
Since the case g(t) = constant forms a sort of boundary between increasing and decreasing functions, and since we can incorporate initial conditions into this class, we may take g(t) as the unit vectors to give an indication of the kind of input the system can follow with zero error. In other words, consider inputs

    dᵢ(t) = C(t) ∫_{t₀}^{t} Φ_A(t, τ)B(τ)g(τ) dτ = C(t) ∫_{t₀}^{t} Φ_A(t, τ)B(τ)eᵢ dτ    for i = 1, 2, ..., m
which can be combined into the matrix function

    C(t) ∫_{t₀}^{t} Φ_A(t, τ)B(τ)(e₁ | e₂ | ... | e_m) dτ = C(t) ∫_{t₀}^{t} Φ_A(t, τ)B(τ) dτ

Inputs of this form give unity error, and probably inputs that go to infinity any little bit slower than this will give zero error.
Example 8.3.
Consider the system of Example 8.1 in which e(t) = u₂(t) and there is no input u₁(t). The output of the resulting zero input, unity feedback system,

    y = (3  2)x = e^{−t}{[3x₁(0) + 2x₂(0)] cos √2 t + (1/√2)[x₁(0) + x₂(0)] sin √2 t}

tends to zero asymptotically. Consequently Theorem 8.1 applies to the unity feedback system, so that the equations dw/dt = A(t)w + B(t)g with d = (3  2)w + g, where lim_{t→∞} g(t) = 0, generate the class of inputs d(t) that the system can follow with zero error. Solving this system of equations gives

    d(t) = 3w₁(0) + 2w₂(0) + 3t w₂(0) + ∫₀^{t} [3(t − τ) + 2]g(τ) dτ + g(t)

For g(t) = 0, we see that the system can follow arbitrary steps and ramps with zero error, which is in agreement with the classical conclusion that the system is of type 2. Also, evaluating

    C(t) ∫₀^{t} Φ_A(t, τ)B(τ) dτ = ∫₀^{t} [3(t − τ) + 2] dτ = 1.5t² + 2t

shows the system will follow t² with constant error and will probably follow with zero error any function t^{2−ε} for any ε > 0. This is in fact the case, as can be found by taking g(t) = t^{−ε}.
Now if we consider the system of Example 8.1 in which e(t) = u₁(t) and there is no input u₂(t), the output of the closed loop system is y = 0.5x₂(0) + [3x₁(0) + 1.5x₂(0)]e^{−4t}, which does not tend to zero asymptotically, so Theorem 8.1 cannot be used.
Definition 8.1: The system of Fig. 8-4 is called a type-l system (l = 1, 2, ...) when lim_{t→∞} e(t) = 0 for the inputs dᵢ = (t − t₀)^{l−1}U(t − t₀)eᵢ for all i = 1, 2, ..., m.
In the definition, U(t − t₀) is the unit step function starting at t = t₀ and eᵢ is the ith unit vector. All systems that do not satisfy Definition 8.1 will be called type-0 systems.
Use of Theorem 8.1 involves calculation of the transition matrix and integration of the superposition integral. For classical scalar type-l systems the utility of Definition 8.1 is that the designer can simply observe the power of s in the denominator of the plant transfer function and know exactly what kind of input the closed loop system will follow. The following theorem is the extension of this, but is applicable only to time-invariant systems with the plant transfer function matrix H(s) = C(sI − A)⁻¹B.
Theorem 8.2: The time-invariant system of Fig. 8-4 is of type l ≥ 1 if and only if H(s) = s^{−l}R(s) + P(s), where R(s) and P(s) are any matrices such that lim_{s→0} sR⁻¹(s) = 0 and ‖lim_{s→0} s^{l−1}P(s)‖ < ∞.
Proof: From Theorem 8.1, the system is of type l if and only if ℒ{(d₁ | d₂ | ... | d_m)} = (l − 1)! s^{−l} I = [H(s) + I]G(s), where ℒ{gᵢ}, the columns of G(s), are the Laplace transforms of any functions gᵢ(t) such that 0 = lim_{t→∞} gᵢ(t) = lim_{s→0} sℒ{gᵢ}, where sℒ{gᵢ} is analytic for Re s ≥ 0.
First, assume H(s) = s^{−l}R(s) + P(s) where lim_{s→0} sR⁻¹(s) = 0, so that R⁻¹(s) exists in a neighborhood of s = 0. Choose

    G(s) = (l − 1)! s^{−l}[H(s) + I]⁻¹ = (l − 1)! [R(s) + sˡP(s) + sˡI]⁻¹

Since [H(s) + I]⁻¹ is the asymptotically stable closed loop transfer function matrix, it is analytic for Re s ≥ 0. Then sG(s) has at most a pole of order l − 1 at s = 0 in the region Re s ≥ 0. In some neighborhood of s = 0 where R⁻¹(s) exists we can expand

    sG(s) = (l − 1)! s[R(s) + sˡP(s) + sˡI]⁻¹ = (l − 1)! sR⁻¹(s)[I − Z(s) + Z²(s) − ...]

where Z(s) = sR⁻¹(s)[s^{l−1}P(s) + s^{l−1}I]. Since lim_{s→0} Z(s) = 0, this expansion is valid for small |s|, and lim_{s→0} sG(s) = 0. Consequently sG(s) has no pole at s = 0 and must be analytic in Re s ≥ 0, which satisfies Theorem 8.1.
Conversely, assume lim_{s→0} sG(s) = 0 where sG(s) is analytic in Re s ≥ 0. Write H(s) = s^{−l}R(s) + P(s) where P(s) is any matrix such that ‖lim_{s→0} s^{l−1}P(s)‖ < ∞ and R(s) = sˡ[H(s) − P(s)] is still arbitrary. Then

    (l − 1)! s^{−l} I = [s^{−l}R(s) + P(s) + I]G(s)

can be solved for sR⁻¹(s) as

    (l − 1)! sR⁻¹(s) = sG(s)[I + W(s)]⁻¹ = sG(s)[I − W(s) + W²(s) − ...]

where (l − 1)! W(s) = [s^{l−1}P(s) + s^{l−1}I]sG(s). This expansion is valid for ‖sG(s)‖ small enough, so that R⁻¹(s) exists in some neighborhood of s = 0. Taking limits then gives lim_{s→0} sR⁻¹(s) = 0.
We should be careful in the application of Theorem 8.2, however, in light of Theorem 8.1. The classification of systems into type l is not as clear cut as it appears. A system with H(s) = (s + ε)⁻¹ can follow inputs of the form e^{−εt}. As ε tends to zero this tends to a step function, so we need only take ε⁻¹ on the order of the time of operation of the system.
Unfortunately, for time-varying systems there is no guarantee that if a system is of type N, then it is of type N − k for all k ≥ 0. However, this is true for time-invariant systems. (See Problem 8.24.)
Example 8.4.
[Fig. 8-5: block diagram of a two input-two output feedback system; the plant blocks involve the polynomial 1 + 12s + 3s².]
The system shown in Fig. 8-5 has a plant transfer function matrix H(s) that can be written in the form

    H(s) = s⁻²R(s) + P(s)

in which ‖lim_{s→0} sP(s)‖ < ∞
and where lim_{s→0} sR⁻¹(s) ≠ 0. Since lim_{s→0} sR⁻¹(s) has a nonzero element (its lower right entry is −6), the system is not of type 2, as appears to be the case upon first inspection. Rewrite H(s) in the form (where R(s) and P(s) are different)

    H(s) = s⁻¹R(s) + P(s)

Again, ‖lim_{s→0} P(s)‖ < ∞, but now lim_{s→0} sR⁻¹(s) = 0. Since the closed loop system has poles at −1, −1, −0.5 and −0.5, the zero output is asymptotically stable. Therefore the system is of type 1.
To find the error constant matrix of a type-l system, we use block diagram manipulations on Fig. 8-4 to get ℒ{e} = [I + H(s)]⁻¹ℒ{d}. If it exists, then

    lim_{t→∞} e(t) = lim_{s→0} s[I + s^{−l}R(s) + P(s)]⁻¹ℒ{d}
                  = lim_{s→0} s^{l+1}[sˡI + R(s) + sˡP(s)]⁻¹ℒ{d} = lim_{s→0} s^{l+1}R⁻¹(s)ℒ{d}

for any l > 0. Then an error constant matrix table can be formed for time-invariant systems of Fig. 8-4.
Steady State Error Constant Matrices

    System Type | Step Input              | Ramp Input          | Parabolic Input
    0           | lim_{s→0} [I + H(s)]⁻¹  | *                   | *
    1           | 0                       | lim_{s→0} R⁻¹(s)    | *
    2           | 0                       | 0                   | lim_{s→0} R⁻¹(s)

In the table * means the system cannot follow all such inputs.
Example 8.5.
The type-1 system of Example 8.4 has an error constant matrix lim_{s→0} R⁻¹(s) = ( 0  0 ; 0  −6 ). Then if the input were (t − t₀)U(t − t₀)e₂, the steady state output would be [(t − t₀)U(t − t₀) + 6]e₂. The system can follow with zero steady state error an input of the form (t − t₀)U(t − t₀)e₁.
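The table can be checked symbolically for a concrete plant. The sketch below assumes SymPy is available and uses a hypothetical scalar plant H(s) = 1/[s(s + 1)] (an assumption for illustration, not one of the examples above); it applies the type test of Theorem 8.2 and then reads the ramp-input error constant lim_{s→0} R⁻¹(s) off the table:

```python
import sympy as sp

s = sp.symbols('s')

# Hypothetical scalar plant (illustration only): H(s) = 1/(s(s+1))
H = 1 / (s * (s + 1))

# Decompose H = s**(-l) * R(s) + P(s) with l = 1 and P = 0, so R = 1/(s+1)
l = 1
R = sp.simplify(H * s**l)

# Type test of Theorem 8.2: lim_{s->0} s*R^(-1)(s) must vanish
print(sp.limit(s / R, s, 0))      # 0, so the system is of type l = 1

# Ramp-input error constant from the table: lim_{s->0} R^(-1)(s)
print(sp.limit(1 / R, s, 0))      # 1, the classical 1/Kv for this plant
```

For a matrix H(s) the same limits apply to the matrix R⁻¹(s), provided the closed loop zero output is asymptotically stable as Theorem 8.1 requires.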
8.4 ROOT LOCUS
Because root locus is useful mainly for time-invariant systems, we shall consider only
time-invariant systems in this section. Both single and multiple inputs and outputs can
be considered using vector notation, i.e. we consider
dx/dt = Ax + Bu y = Cx-f-Du (8.1)
Then the transfer function from u to y is H(s) = C(sl- A)-iB-(-D, with poles determined
by det (si - A) = 0. Note for the multiple input and output case these are the poles of the
whole system. The eigenvalues of A determine the time behavior of all the outputs.
We shall consider the case where equations (8.1) represent the closed loop system. Suppose that the characteristic equation det(sI − A) = 0 is linear in some parameter κ so that it can be written as

    sⁿ + α₁sⁿ⁻¹ + ... + α_{n−1}s + α_n + κ(β₁sⁿ⁻¹ + β₂sⁿ⁻² + ... + β_{n−1}s + β_n) = 0

This can be rearranged to standard root locus form under κ variation,

    −1 = κ(β₁sⁿ⁻¹ + β₂sⁿ⁻² + ... + β_{n−1}s + β_n) / (sⁿ + α₁sⁿ⁻¹ + ... + α_{n−1}s + α_n)

The roots of the characteristic equation can be found as κ varies using standard root locus techniques. The assumed form of the characteristic equation results from both loop gain variation and parameter variation of the open loop system.
Example 8.6.
Given the system of Fig. 8-6 with variable feedback gain κ.
[Fig. 8-6: block diagram of the closed loop system with feedback gain κ and initial condition x₀.]
The closed loop system matrix is

    ( −κ   1 )
    ( −κ  −3 )

The characteristic equation is s² + (3 + κ)s + 4κ = 0. Putting it into standard root locus form,

    −1 = κ(s + 4) / [s(s + 3)]

leads to the root locus shown in Fig. 8-7.
[Fig. 8-7: root locus in the s-plane, starting at the open loop poles 0 and −3.]
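The locus is just the set of roots of s² + (3 + κ)s + 4κ = 0 as κ varies, so it can be traced numerically; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Roots of the closed loop characteristic equation s^2 + (3 + k)s + 4k = 0,
# swept over the gain k; this traces the locus of -1 = k(s + 4)/[s(s + 3)].
for k in (0.0, 1.0, 10.0, 100.0):
    print(k, np.sort_complex(np.roots([1.0, 3.0 + k, 4.0 * k])))

# At k = 0 the locus starts at the open loop poles 0 and -3
assert np.allclose(np.sort(np.roots([1.0, 3.0, 0.0]).real), [-3.0, 0.0])
# For large k one branch approaches the zero at s = -4
assert abs(max(np.roots([1.0, 103.0, 400.0]).real) + 4.0) < 0.1
```

Sweeping κ densely and plotting the root pairs reproduces Fig. 8-7 without any hand construction.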
Example 8.7.
The feedback system of Fig. 8-8 has an unknown parameter α. Find the effect of variations in α upon the closed loop roots.
[Fig. 8-8: unity feedback loop with open loop transfer function (s + 3) sinh α / (s² + 3s + sinh α).]
Let sinh α = κ. The usual procedure is to set the open loop transfer function equal to −1 to find the closed loop poles of a unity feedback system:

    −1 = (s + 3)κ / (s² + 3s + κ)

This can be rearranged to form the characteristic equation of the closed loop system, s² + 3s + κ + (s + 3)κ = 0. Further rearrangement gives the standard root locus form under κ variation,

    −1 = κ(s + 4) / [s(s + 3)]

This happens to give the same root locus as in the previous example for sinh α = κ.
8.5 NYQUIST DIAGRAMS
First we consider the time-invariant single input-single output system whose block diagram is shown in Fig. 8-9. The standard Nyquist procedure is to plot G(s)H(s) where s varies along the Nyquist path enclosing the right half s-plane. To do this, we need polar plots of G(jω)H(jω) where ω varies from −∞ to +∞.
[Fig. 8-9: feedback loop with forward block G(s) and feedback block H(s); the loop is broken between e and v.]
Using standard procedures, we break the closed loop between e and v. Then setting this up in state space form gives

    dx/dt = Ax + be        v = cᵀx + de        (8.2)

Then G(jω)H(jω) = cᵀ(jωI − A)⁻¹b + d. Usually a choice of state variable x can be found such that the gain or parameter variation κ of interest can be incorporated into the c vector only. Digital computer computation of (jωI − A)⁻¹b as ω varies can be most easily done by iterative techniques, such as Gauss-Seidel. Each succeeding evaluation of (jω_{i+1}I − A)⁻¹b can be started with the initial condition (jω_i I − A)⁻¹b, which usually gives fast convergence.
Example 8.8.
Given the system shown in Fig. 8-10, the cascade of κ/s and 1/(s + 1) with unity feedback. The state space form of this is, in phase-variable canonical form for the transfer function from e to v,

    dx/dt = ( 0   1 ) x + ( 0 ) e        v = (κ  0)x
            ( 0  −1 )     ( 1 )

Then cᵀ(jωI − A)⁻¹b = κ / [jω(jω + 1)], giving the polar plot of Fig. 8-11.
[Fig. 8-10: blocks κ/s and 1/(s + 1) in cascade. Fig. 8-11: polar plot of GH in the Re GH-Im GH plane.]
About the only advantage of this over standard techniques is that it is easily mechanized
for a computer program. For multiple-loop or multiple-input systems, matrix block diagram
manipulations give such a computer routine even more flexibility.
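As an illustration, the polar plot data of Example 8.8 can be generated directly from the state space form; a sketch with κ = 1, assuming NumPy is available:

```python
import numpy as np

# Phase-variable realization of Example 8.8 with kappa = 1:
# the gain appears only in the c vector.
A = np.array([[0.0, 1.0], [0.0, -1.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])

for w in (0.5, 1.0, 2.0):
    gh = c @ np.linalg.solve(1j * w * np.eye(2) - A, b)
    # Each point must agree with the transfer function kappa/[jw(jw + 1)]
    assert np.isclose(gh, 1.0 / (1j * w * (1j * w + 1.0)))
    print(w, gh)
```

A direct solve is used here for clarity; the iterative Gauss-Seidel startup described above matters mainly for high-order systems evaluated at many frequencies.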
Example 8.9.
Given the 2 input-2 output system with block diagram shown in Fig. 8-12.
[Fig. 8-12: two-loop block diagram built from blocks G(s) and H(s).]
Then dx/dt = Ax + b₁e₁ + b₂e₂ with v₁ = c₁ᵀx and v₂ = c₂ᵀx. The loop connecting v₁ and e₁ can be closed, so that e₁ = v₁ = c₁ᵀx. Then

    ℒ{v₂} = c₂ᵀ(sI − A − b₁c₁ᵀ)⁻¹b₂ ℒ{e₂}

so that we can ask the computer to give us the polar plot of c₂ᵀ(jωI − A − b₁c₁ᵀ)⁻¹b₂.
8.6 STATE FEEDBACK POLE PLACEMENT
Here we discuss a way to feed back the state vector to shape the closed loop transition matrix to correspond to that of any desired nth-order scalar linear differential equation. For time-invariant systems in particular, the closed loop poles can be placed where desired. This is why the method is called pole placement, though it is applicable to general time-varying systems. To "place the poles," the totally controllable part of the state equation is transformed via x(t) = T(t)z(t) to phase-variable canonical form (7.12) as repeated here:

    dz/dt = (   0      1      0    ...    0     ) z + ( 0 ) u        (7.12)
            (   0      0      1    ...    0     )     ( 0 )
            (   .      .      .           .     )     ( . )
            (   0      0      0    ...    1     )     ( 0 )
            ( a₁(t)  a₂(t)  a₃(t)  ...  a_n(t)  )     ( 1 )

Now the scalar control u is constructed by feeding back a linear combination of the z state variables as u = kᵀ(t)z, where each kᵢ(t) = −aᵢ(t) − αᵢ(t):

    u = [−a₁(t) − α₁(t)]z₁ + [−a₂(t) − α₂(t)]z₂ + ... + [−a_n(t) − α_n(t)]z_n

Each αᵢ(t) is a time function to be chosen. This gives the closed loop system

    dz/dt = (    0       1       0     ...    0      ) z
            (    0       0       1     ...    0      )
            (    .       .       .            .      )
            (    0       0       0     ...    1      )
            ( −α₁(t)  −α₂(t)  −α₃(t)  ...  −α_n(t)   )

Then z₁(t) obeys z₁⁽ⁿ⁾ + α_n(t)z₁⁽ⁿ⁻¹⁾ + ... + α₂(t)ż₁ + α₁(t)z₁ = 0, and each z_{i+1}(t) for i = 1, 2, ..., n − 1 is the ith derivative of z₁(t). Since the αᵢ(t) are to be chosen, the corresponding closed loop transition matrix Φ_z(t, t₀) can be shaped accordingly. Note, however, that x(t) = T(t)Φ_z(t, t₀)T⁻¹(t₀)x(t₀), so that shaping of the transition matrix Φ_z(t, t₀) must be done keeping in mind the effect of T(t).
This minor complication disappears when dealing with time-invariant systems. Then T(t) is constant, and furthermore each αᵢ(t) is constant. In this case the time behavior of x(t) and z(t) is essentially the same, in that both A_x and A_z have the same characteristic equation λⁿ + α_nλⁿ⁻¹ + ... + α₂λ + α₁ = 0. For the closed loop system to have poles at the desired values γ₁, γ₂, ..., γ_n, comparison of coefficients of the λ's in (λ − γ₁)(λ − γ₂)···(λ − γ_n) = 0 determines the desired value of each αᵢ.
Example 8.10.
Given the system

    dx/dt = ( 0   1   0 ) x + ( 0 ) u
            ( 0   0   1 )     ( 0 )
            ( 2  −3   t )     ( 1 )

It is desired to have a time-invariant closed loop system with poles at 0, −1 and −2. Then the desired system will have the characteristic equation λ³ + 3λ² + 2λ = 0. Therefore we choose u = (−2  1  −(t + 3))x, so kᵀ = (−2  1  −(t + 3)).
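For a time-invariant plant the coefficient-matching recipe is purely mechanical. The sketch below assumes NumPy and uses a hypothetical phase-variable plant with bottom row (2, −3, 4), not the time-varying system above, placing the poles at −1, −2 and −3:

```python
import numpy as np

# Hypothetical phase-variable plant: bottom row a = (a1, a2, a3) = (2, -3, 4)
a = np.array([2.0, -3.0, 4.0])
A = np.vstack([np.hstack([np.zeros((2, 1)), np.eye(2)]), a])
b = np.array([0.0, 0.0, 1.0])

# Desired poles -1, -2, -3: (s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6,
# so alpha = (alpha1, alpha2, alpha3) = (6, 11, 6) and k_i = -a_i - alpha_i
alpha = np.array([6.0, 11.0, 6.0])
k = -a - alpha

# u = k^T z changes only the bottom row of A
A_closed = A + np.outer(b, k)
print(np.sort(np.linalg.eigvals(A_closed).real))   # approximately [-3, -2, -1]
```

The whole design reduces to one vector subtraction because the plant is already in phase-variable canonical form; the transformation T(t) of the general case does not appear.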
For multiple-input systems, the system is transformed to the form of equation (7.13), except that each subsystem dwᵢ/dt = Aᵢᵢwᵢ + bᵢuᵢ must be in phase-variable canonical form (7.12) and, for i ≠ j, the Aᵢⱼ(t) must be all zeros except for the bottom row. Procedures similar to those used in Chapter 7 can usually attain this form, although general conditions are not presently available. If this form can be attained, each control is chosen as uᵢ = kᵢᵀ(t)wᵢ − eₙᵀAᵢⱼwⱼ for j ≠ i, to "place the poles" of Aᵢᵢ(t) and to subtract off the coupling terms.
Why bother to transform to canonical form when trial and error can determine k?
Example 8.11.
Place the poles of the system of Problem 7.8 at p₁, p₂ and p₃. We calculate det(λI − A − BK). This is

    λ³ − (k₁₃ + k₂₁ + k₂₂ + k₂₃ + 5)λ²
       + [k₁₁ + 2k₁₃ + 3k₂₁ + 4k₂₂ + 3k₂₃ + k₁₃(k₂₁ + k₂₂) − k₁₂(k₂₁ + k₂₂) + 8]λ
       − k₁₁ − k₁₃ − 2k₂₁ − 4k₂₂ − 6k₂₃ − k₁₁(k₂₂ + k₂₃) + k₁₂(k₂₁ + k₂₃) + k₁₃(k₂₁ − k₂₂) − 4

It would take much trial and error to choose the k's to match

    (λ − p₁)(λ − p₂)(λ − p₃) = λ³ − (p₁ + p₂ + p₃)λ² + (p₁p₂ + p₂p₃ + p₁p₃)λ − p₁p₂p₃
Trial and error is usually no good, because the algebra is nonlinear and increases greatly with the order of the system. Also, Theorem 7.7 tells us when it is possible to "place the poles," namely when Q(t) has rank n everywhere. Transformation to canonical form seems the best method, as it can be programmed on a computer.
State feedback pole placement has a number of possible defects: (1) The solution appears after transformation to canonical form, with no opportunity for obtaining an engineering feeling for the system. (2) The compensation is in the feedback loop, and experience has shown that cascade compensation is usually better. (3) All the state variables must be available for measurement. (4) The closed loop system may be quite sensitive to small variations in plant parameters. Despite these defects state feedback pole placement may lead to a very good system. Furthermore, it can be used for very high-order and/or time-varying systems for which any compensation may be quite difficult to find. Perhaps the best approach is to try it and then test the system, especially for sensitivity.
Example 8.12.
Suppose that the system of Example 8.10 had t − ε instead of t in the lower right hand corner of the A(t) matrix, where ε is a small positive constant. Then the closed loop system has the characteristic equation λ³ + 3λ² + 2λ − ε = 0, which has an unstable root. Therefore this system is extremely sensitive.
8.7 OBSERVER SYSTEMS
Often we need to know the state of a system, and we can measure only the output of the
system. There are many practical situations in which knowledge of the state vector is
required, but only a linear combination of its elements is known. Knowledge of the state,
not the output, determines the future output if the future input is known. Conversely
knowledge of the present state and its derivative can be used in conjunction with the state
equation to determine the present input. Furthermore, if the state can be reconstructed
from the output, state feedback pole placement could be used in a system in which only the
output is available for measurement.
In a noise-free environment, n observable states can be reconstructed by differentiating a single output n − 1 times (see Section 10.6). In a noisy environment, the optimal reconstruction of the state from the output of a linear system is given by the Kalman-Bucy filter. A discussion of this is beyond the scope of this book. In this section, we discuss an observer system that can be used in a noisy environment because it does not contain differentiators. However, in general it does not reconstruct the state in an optimal manner.
To reconstruct all the states at all times, we assume the physical system to be observed is totally observable. For simplicity, at first only single-output systems will be considered. We wish to estimate the state of dx/dt = A(t)x + B(t)u, where the output y = cᵀ(t)x. The state, as usual, is denoted x(t), and here we denote the estimate of the state as x̂(t).
First, consider an observer system of dimension n. The observer system is constructed as

    dx̂/dt = A(t)x̂ + k(t)[cᵀ(t)x̂ − y] + B(t)u        (8.3)

where k(t) is an n-vector to be chosen. Then the observer system can be incorporated into the flow diagram as shown in Fig. 8-13.
[Fig. 8-13: the physical system driving the observer system (8.3).]
Since the initial state x(t₀), where t₀ is the time the observer system is started, is not known, we choose x̂(t₀) = 0. Then we can investigate the conditions under which x̂(t) tends to x(t). Define the error e(t) = x(t) − x̂(t). Then

    de/dt = dx/dt − dx̂/dt = [A(t) + k(t)cᵀ(t)]e        (8.4)

Similar to the method of the previous Section 8.6, k(t) can be chosen to "place the poles" of the error equation (8.4). By duality, the closed loop transition matrix Φ(t, t₀) of the adjoint equation dp/dt = −Aᵀ(t)p − c(t)v is shaped using v = kᵀ(t)p. The transition matrix of equation (8.4) is then found by transposing this matrix and reversing its time arguments, using the relation between a system and its adjoint from Chapter 5. For time-invariant systems, it is simpler to consider dw/dt = Aᵀw + cv rather than the adjoint. This is because the matrix Aᵀ + ckᵀ and the matrix A + kcᵀ have the same eigenvalues. This is easily proved by noting that if λ is an eigenvalue of Aᵀ + ckᵀ, its complex conjugate λ* is also. Then λ* satisfies the characteristic equation det(λ*I − Aᵀ − ckᵀ) = 0. Taking the complex conjugate of this equation and realizing the determinant is invariant under matrix transposition completes the proof. Hence the poles of equations (8.3) and (8.4) can be placed where desired. Consequently the error e(t) can be made to decay as quickly as desired, and the state of the observer system tends to the state of the physical system. However, as is indicated in Problem 8.3, we do not want to make the error tend to zero too quickly in a practical system.
Example 8.13.
Given the physical system

    dx/dt = ( −2   1 ) x + ( −1 ) u        y = (1  1)x
            (  2  −2 )     (  0 )

Construct an observer system such that the error decays with poles at −2 and −3.
First we transform the hypothetical system

    dw/dt = ( −2   2 ) w + ( 1 ) v
            (  1  −2 )     ( 1 )

to the phase variable canonical form

    dz/dt = (  0   1 ) z + ( 0 ) v
            ( −2  −4 )     ( 1 )

where w = Tz is obtained by Theorem 7.7. We desire the closed loop system to have the characteristic equation 0 = (λ + 2)(λ + 3) = λ² + 5λ + 6. Therefore choose v = (−4  −1)z = (−1  0)w. Then k = (−1  0)ᵀ and the observer system is constructed as

    dx̂/dt = ( −2   1 ) x̂ + ( −1 ) (x̂₁ + x̂₂ − y) + ( −1 ) u
            (  2  −2 )     (  0 )                 (  0 )
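The gain found above can be checked numerically; a sketch assuming NumPy, using the matrices as reconstructed in this example:

```python
import numpy as np

# Error equation of the full-order observer: de/dt = (A + k c^T)e
A = np.array([[-2.0, 1.0], [2.0, -2.0]])
c = np.array([1.0, 1.0])
k = np.array([-1.0, 0.0])

# Observer error poles should be -2 and -3
print(np.sort(np.linalg.eigvals(A + np.outer(k, c)).real))

# The dual matrix A^T + c k^T used in the design has the same eigenvalues
print(np.sort(np.linalg.eigvals(A.T + np.outer(c, k)).real))
```

The second check is the duality argument of the text: transposition leaves the characteristic equation unchanged.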
Now we consider an observer system of dimension less than n. In the case of a single-output system we only need to estimate n − 1 elements of the state vector, because the known output and the n − 1 estimated elements will usually give an estimate of the nth element of the state vector. In general for a system having k independent outputs we shall construct an observer system of dimension n − k.
We choose P(t) to be certain n − k differentiable rows such that the n × n matrix formed by stacking P(t) on C(t) has an inverse at all times,

    ( P(t) )⁻¹ = (H(t) | G(t))
    ( C(t) )

where H has n − k columns. The estimate x̂ is constructed as

    x̂(t) = H(t)w + G(t)y        (8.5)

Analogous to equation (8.3), we require

    P dx̂/dt = P[Ax̂ + L(Cx̂ − y) + Bu]        (8.6)

where L(t) is an n × k matrix to be found. (It turns out we only need to find PL, not L.) This is equivalent to constructing the following system to generate w, from equation (8.5):

    dw/dt = (dP/dt)x̂ + P dx̂/dt = Fw − PLy + PBu

where F is determined from FP = dP/dt + PA + PLC. Then

    (F | −PL) ( P ) = dP/dt + PA
              ( C )

so that F and PL are determined from (F | −PL) = (dP/dt + PA)(H | G). From (8.5) and (8.6) the error e = P(x − x̂) = Px − w obeys the equation de/dt = Fe. The flow diagram is then as shown in Fig. 8-14.
[Fig. 8-14: flow diagram of the reduced-order observer.]
Example 8.14.
Given the system of Example 8.13, construct a first-order observer system.
Since C = (1  1), choose P = (p₁  p₂) with p₁ ≠ p₂. Then

    ( P )⁻¹ = ( p₁  p₂ )⁻¹ = (1/(p₁ − p₂)) (  1  −p₂ ) = (H | G)
    ( C )     (  1   1 )                   ( −1   p₁ )

Therefore

    x̂ = (1/(p₁ − p₂)) (  1 ) w + (1/(p₁ − p₂)) ( −p₂ ) y
                      ( −1 )                   (  p₁ )

and

    F = (dP/dt + PA)H = (−3p₁ + 4p₂)/(p₁ − p₂)        PL = −(dP/dt + PA)G = −(p₁² − 2p₂²)/(p₁ − p₂)

so that (p₁ − p₂) dw/dt = (−3p₁ + 4p₂)w + (p₁² − 2p₂²)y − p₁(p₁ − p₂)u is the first-order observer. A bad choice of p₁/p₂ with 1 < p₁/p₂ < 4/3 gives an unstable observer and makes the error blow up.
The question is, can we place the poles of F by proper selection of P in a manner similar to that of the n-dimensional observer? One method is to use trial and error, which is sometimes more rapid for low-order, time-invariant systems. However, to show that the poles of F can be placed arbitrarily, we use the transformation x = Tz to obtain the canonical form

    d/dt ( z₁ ) = ( A₁₁  A₁₂ ) ( z₁ ) + T⁻¹Bu
         ( z₂ )   ( A₂₁  A₂₂ ) ( z₂ )

where the subsystem dzᵢ/dt = Aᵢᵢzᵢ + Bᵢu with yᵢ = cᵢᵀzᵢ is in the dual phase variable canonical form

    dzᵢ/dt = ( 0  0  ...  0  a₁(t)     ) zᵢ + Bᵢu        (8.7)
             ( 1  0  ...  0  a₂(t)     )
             ( 0  1  ...  0  a₃(t)     )
             ( .  .       .  .         )
             ( 0  0  ...  1  a_{nᵢ}(t) )

    yᵢ = (0  0  ...  0  1)zᵢ = z_{nᵢ}

in which Bᵢ is defined from T⁻¹B and nᵢ is the dimension of the ith subsystem.
As per the remarks following Example 8.10, the conditions under which this form can always be obtained are not known at present for the time-varying case, and an algorithm is not available for the time-invariant multiple-output case.
However, assuming the subsystem (8.7) can be obtained, we construct the observer equation (8.6) for the subsystem (8.7) by the choice of Pᵢ = (I | kᵢ), where kᵢ(t) is an (nᵢ − 1)-vector that will set the poles of the observer. We assume kᵢ(t) is differentiable. Then

    ( Pᵢ  ) = ( I  kᵢ )        ( Pᵢ  )⁻¹ = ( I  −kᵢ ) = (Hᵢ | Gᵢ)        (8.8)
    ( cᵢᵀ )   ( 0   1 )        ( cᵢᵀ )     ( 0    1 )

We find Fᵢ = (dPᵢ/dt + PᵢAᵢᵢ)Hᵢ = [(0 | dkᵢ/dt) + (I | kᵢ)Aᵢᵢ]Hᵢ, from which

    Fᵢ = ( 0  0  ...  0  k_{i1}(t)       )
         ( 1  0  ...  0  k_{i2}(t)       )
         ( 0  1  ...  0  k_{i3}(t)       )
         ( .  .       .  .               )
         ( 0  0  ...  1  k_{i,nᵢ−1}(t)   )

By matching coefficients of the "characteristic equation" with "desired pole positions," we make the error decay as quickly as desired. Also, we find PᵢLᵢ as PᵢLᵢ = −(dPᵢ/dt + PᵢAᵢᵢ)Gᵢ. Then x̂ = Tẑ, where

    ẑᵢ = Hᵢwᵢ + Gᵢyᵢ = ( I ) wᵢ + ( −kᵢ ) yᵢ
                       ( 0 )      (  1  )
Example 8.15.
Again consider the system of Example 8.13,

    dx/dt = ( −2   1 ) x + ( −1 ) u        y = (1  1)x
            (  2  −2 )     (  0 )

To construct an observer system with a pole at −2, use the transformation x = Tz where

    (Tᵀ)⁻¹ = ( 4  1 )
             ( 3  1 )

Then equations (8.7) are

    dz/dt = ( 0  −2 ) z + ( −4 ) u        y = (0  1)z = z₂
            ( 1  −4 )     ( −1 )

The estimate ẑ according to equation (8.8) is now obtained as

    ẑ = ( 1 ) w + ( 2 ) y
        ( 0 )     ( 1 )

where k₁ = −2 sets the pole of the observer at −2. Then F = −2 and PL = −2, so that the observer system is dw/dt = −2w + 2y − 2u. Therefore

    x̂ = Tẑ = (  1  −3 ) ( 1  2 ) ( w ) = (  w − y  )
             ( −1   4 ) ( 0  1 ) ( y )   ( −w + 2y )

and the error PT⁻¹x − w = 2x₁ + x₂ − w decays with a time constant of 1/2. This gives the block diagram of Fig. 8-15.
[Fig. 8-15: block diagram of the system of Example 8.13 with its first-order observer.]
8.8 ALGEBRAIC SEPARATION
In this section we use the observer system of Section 8.7 to generate a feedback control to place the closed loop poles where desired, as discussed in Section 8.6. Specifically, we consider the physical open loop system

    dx/dt = A(t)x + B(t)u + J(t)d        y = C(t)x        (8.9)

with an observer system (see equation (8.6)),

    dw/dt = F(t)w − P(t)L(t)y + P(t)B(t)u + P(t)J(t)d
    x̂ = H(t)w + G(t)y        (8.10)

and a feedback control u(t) that has been formed to place the poles of the closed loop system as

    u = W(t)x̂        (8.11)

Then the closed loop system block diagram is as in Fig. 8-16.
[Fig. 8-16: the physical system, the observer, and the feedback u = W(t)x̂ connected in a loop, driven by d.]
Theorem 8.3: (Algebraic Separation). For the system (8.9) with observer (8.10) and feedback control (8.11), the characteristic equation of the closed loop system can be factored as det(λI − A − BW) det(λI − F) = 0.
This means we can set the poles of the closed loop system by choosing W using the pole placement techniques of Section 8.6 and by choosing P using the techniques of Section 8.7.
Proof: The equations governing the closed loop system are obtained by substituting equation (8.11) into equations (8.9) and (8.10):

    d/dt ( x ) = ( A + BWGC         BWH     ) ( x ) + ( J  ) d
         ( w )   ( PBWGC − PLC   F + PBWH   ) ( w )   ( PJ )

Changing variables to e = Px − w and using HP + GC = I and FP = dP/dt + PA + PLC gives

    d/dt ( x ) = ( A + BW   −BWH ) ( x ) + ( J ) d        y = (C  0) ( x )
         ( e )   (   0        F  ) ( e )   ( 0 )                    ( e )
Note that the bottom equation de/dt = Fe generates an input −BWHe to the closed loop system dx/dt = (A + BW)x. Use of Problem 3.5 then shows the characteristic equation factors as hypothesized. Furthermore, the observer dynamics are in general observable at the output (through coupling with x) but are uncontrollable by d and hence cancel out of the closed loop transfer function.
Example 8.16.
For the system of Example 8.13, construct a one-dimensional observer system with a pole at −2 to generate a feedback that places both the system poles at −1.
We employ the algebraic separation theorem to separately consider the system pole placement and the observer pole placement. To place the poles of

    dx/dt = ( −2   1 ) x + ( −1 ) (u + d)        y = (1  1)x
            (  2  −2 )     (  0 )

using the techniques of Section 8.6 we would like

    u = (−2  3/2)x

which gives closed loop poles at −1. However, we cannot use x to form u, but must use x̂ as found from the observer system with a pole at −2, which was constructed in Example 8.15:

    dw/dt = −2w + 2y − 2(u + d)

We then form the control as

    u = (−2  3/2)x̂ = (−2  3/2) (  w − y  ) = −7w/2 + 5y
                               ( −w + 2y )

Thus the closed loop system is as in Fig. 8-17.
[Fig. 8-17: block diagram of the plant, the first-order observer, and the feedback u = −7w/2 + 5y.]
Note that the control is still essentially in the feedback loop and that no reasons were given as to why plant poles at −1 and observer pole at −2 were selected. However, the procedure works for high-order, multiple input-multiple output, time-varying systems.
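Theorem 8.3 can be checked numerically on this example. The sketch below assembles the composite (x, w) system from the matrices as reconstructed in this section (W = (−2, 3/2), H = (1, −1)ᵀ, G = (−1, 2)ᵀ, P = (2, 1), F = −2, PL = −2) and confirms that its eigenvalues are the plant poles {−1, −1} together with the observer pole {−2}; NumPy is assumed:

```python
import numpy as np

A = np.array([[-2.0, 1.0], [2.0, -2.0]])
b = np.array([-1.0, 0.0])
C = np.array([1.0, 1.0])
W = np.array([-2.0, 1.5])     # feedback u = W x_hat
H = np.array([1.0, -1.0])     # x_hat = H w + G y
G = np.array([-1.0, 2.0])
P = np.array([2.0, 1.0])
F, PL = -2.0, -2.0

WG, WH, Pb = W @ G, W @ H, P @ b   # scalars

# d/dt (x, w) with u = W(H w + G C x) substituted into plant and observer
M = np.block([
    [A + WG * np.outer(b, C), (WH * b).reshape(2, 1)],
    [(Pb * WG * C - PL * C).reshape(1, 2), np.array([[F + Pb * WH]])],
])

print(np.sort(np.linalg.eigvals(M).real))   # approximately [-2, -1, -1]
```

The observer pole −2 appears in the composite spectrum but, as the proof notes, it is uncontrollable by d and cancels out of the closed loop transfer function.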
8.9 SENSITIVITY, NOISE REJECTION, AND NONLINEAR EFFECTS
Three major reasons for the use of feedback control, as opposed to open loop control,
are (1) to reduce the sensitivity of the response to parameter variations, (2) to reduce the
effect of noise disturbances, and (3) to make the response of nonlinear elements more linear.
A proposed design of a feedback system should be evaluated for sensitivity, noise rejection,
and the effect of nonlinearities. Certainly any system designed using the pole placement
techniques of Sections 8.6, 8.7 and 8.8 must be evaluated in these respects because of the
cookbook nature of pole placement.
In this section we consider these topics in a very cursory manner, mainly to show the relationship with controllability and observability. Consequently we consider only small percentage changes in parameter variations, small noise compared with the signal, and nonlinearities that are almost linear. Under these assumptions we will show how each effect produces an unwanted input into a linear system and then how to minimize this unwanted input.
First we consider the effect of parameter variations. Let the subscript N refer to the nominal values and the subscript a refer to actual values. Then the nominal system (the system with zero parameter variations) can be represented by

    dx_N/dt = A_N(t)x_N + B(t)u
    y_N = C(t)x_N + D(t)u        (8.12)

These equations determine x_N(t) and y_N(t), so these quantities are assumed known. If some of the elements of A_N drift to some actual A_a (keeping B, C and D fixed only for simplicity), then

    dx_a/dt = A_a(t)x_a + B(t)u
    y_a = C(t)x_a + D(t)u        (8.13)

Then let δx = x_a − x_N, δA = A_a − A_N, δy = y_a − y_N, subtract equations (8.12) from (8.13), and neglect the product of small quantities δA δx. Warning: that δA δx is truly small at all times must be verified by simulation. If this is so, then

    d(δx)/dt = A_N(t)δx + δA(t)x_N
    δy = C(t)δx        (8.14)

In these equations A_N(t), C(t) and x_N(t) are known, and δA(t), the variation of the parameters of the A(t) matrix, is the input to drive the unwanted signal δx.
For the case of noise disturbances d(t), the nominal system remains equations (8.12) but the actual system is

    dx_a/dt = A_N(t)x_a + B(t)u + J(t)d
    y_a = C(t)x_a + D(t)u + K(t)d        (8.15)

Then we subtract equations (8.12) from (8.15) to obtain

    d(δx)/dt = A_N(t)δx + J(t)d
    δy = C(t)δx + K(t)d        (8.16)

Here the noise d(t) drives the unwanted signal δx.
Finally, to show how a nonlinearity produces an unwanted signal, consider a scalar input x_N into a nonlinearity as shown in Fig. 8-18. This can be redrawn into a linear system with a large output and a nonlinear system with a small output (Fig. 8-19).
[Fig. 8-18: a nonlinearity driven by x_N. Fig. 8-19: the same nonlinearity redrawn as a linear gain in parallel with a small nonlinear residual δd.]
Here the unwanted signal is δd, which is generated by the nominal x_N. This can be incorporated into a block diagram containing linear elements, and the effect of the nonlinearity can be evaluated in a manner similar to that used in deriving equations (8.16):

    d(δx)/dt = A_N(t)δx + j(t)δd
    δy = C(t)δx + k(t)δd        (8.17)

Now observability and controllability theory can be applied to equations (8.14), (8.16) and (8.17). We conclude that, if possible, we will choose C(t), A_N(t) and the corresponding input matrices B(t), D(t) or J(t), K(t) such that the unwanted signal is unobservable with respect to the output δy(t), or at least that the elements of the state vector associated with the dominant poles are uncontrollable with respect to the unwanted signal. If this is impossible, the system gain with respect to the unwanted signal should be made as low as possible.
Example 8.17.
Consider the system
    dx/dt = (−1+a  0; 0  −1)x + (1; 1)u
The nominal value of the parameter a is zero and the nominal input u(t) is a unit step function:
    dx_N/dt = (−1 0; 0 −1)x_N + (1; 1)u
If x_N(0) = 0, then x_N(t) = (1 − e^{-t}; 1 − e^{-t}). The effect of small variations in a can be evaluated from equation (8.14). Simplifying,
    d(δx)/dt = (−1 0; 0 −1)δx + (1; 0)a(1 − e^{-t})
We can eliminate the effects of a variation upon the output y = c†x if c is chosen such that the output observability matrix (c†b  c†A_N b) = 0, where b = (1; 0) is the vector through which the unwanted signal enters. This results in a choice c† = (0  γ) where γ is any number.
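The unobservability argument can be checked numerically. The sketch below (standard-library Python only) assumes the matrices as reconstructed in this example, A_N = diag(−1, −1) with the parameter variation entering the first state as a(1 − e^{-t}); Euler integration shows the output choice c† = (0 γ) never sees the unwanted signal while c† = (1 0) does.

```python
import math

def simulate(c, a=0.1, dt=1e-3, T=10.0):
    """Euler-integrate d(dx)/dt = diag(-1,-1) dx + (1, 0) a(1 - e^(-t))."""
    dx1, dx2 = 0.0, 0.0          # perturbation state, initially zero
    worst, t = 0.0, 0.0
    while t < T:
        drive = a * (1.0 - math.exp(-t))          # unwanted forcing term
        dx1, dx2 = dx1 + dt * (-dx1 + drive), dx2 + dt * (-dx2)
        worst = max(worst, abs(c[0] * dx1 + c[1] * dx2))
        t += dt
    return worst

print(simulate((1.0, 0.0)))   # this output is driven by the variation
print(simulate((0.0, 1.0)))   # the choice c = (0, gamma) is not
```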
Furthermore, all the analysis and synthesis techniques developed in this chapter can be used to analyze and design systems to reduce sensitivity and nonlinear effects and to reject noise. This may be done by using the error constant matrix table, root locus, Nyquist, and/or pole placement techniques on equations (8.14), (8.16) and (8.17).
Solved Problems
8.1. For the system of Fig. 8-4, let
    dx/dt = A(t)x + B(t)e   with   y = (1 1)x
Find the class of inputs that give a constant steady state error.

The closed loop system with zero input (d(t) = 0) is
    dx/dt = [A(t) − B(t)(1 1)]x
which has a double pole at −1. Therefore the zero output of the closed loop system is asymptotically stable and the remarks following Theorem 8.1 are valid. The class of all d(t) that the system can follow with lim e(t) = 0 is generated by
    dw/dt = [A(t) − B(t)(1 1)]w + B(t)d
with d = (1 1)w + g. Computing the transition matrix of this system and carrying out the integration gives
    d(t) = (t+1)(t0+1)^{-1}[w1(t0) + w2(t0)] + (t+1)∫ from t0 to t of (τ+2)(τ+1)^{-2} g(τ) dτ + g(t)
(Notice this system is unobservable.) For constant error let g(t) = κ, an arbitrary constant. Then
    κ(t+1)∫ from t0 to t of (τ+2)(τ+1)^{-2} dτ = κ(t+1)[ln(t+1) − ln(t0+1) + (t0+1)^{-1} − (t+1)^{-1}]
and the system follows with error κ all functions that are asymptotic to this. Since the system is reasonably well behaved, we can assume that the system will follow all functions going to infinity slower than κ(t+1) ln(t+1).
8.2. Given the multiple-input system
    dz/dt = (−1 0 0; 0 −2 0; 0 0 −3)z + (a 1; 1 0; 1 0)(u1; u2)
Place the poles of the closed loop system at −4, −5 and −6.

Transform the system to phase variable canonical form using the results of Problem 7.21:
    x = (1 1 1; −1 −2 −3; 1 4 9)(κ1 0 0; 0 κ2 0; 0 0 κ3)z
so that
    z = (κ1^{-1} 0 0; 0 κ2^{-1} 0; 0 0 κ3^{-1})(3 5/2 1/2; −3 −4 −1; 1 3/2 1/2)x
and in the new coordinates the system matrix is the companion form
    (0 1 0; 0 0 1; −6 −11 −6)
To obtain phase variable canonical form for u1, we set
    (aκ1; κ2; κ3) = (1/2; −1; 1/2)
which gives κ1 = 1/(2a), κ2 = −1, κ3 = 1/2.

For a ≠ 0, we have the phase variable canonical system
    dx/dt = (0 1 0; 0 0 1; −6 −11 −6)x + (0 1/(2a); 0 −1/(2a); 1 1/(2a))(u1; u2)
To have the closed loop poles at −4, −5 and −6 we desire a characteristic polynomial λ³ + 15λ² + 74λ + 120. Therefore we choose u1 = −114x1 − 63x2 − 9x3 and u2 = 0x1 + 0x2 + 0x3.

In the case a = 0, the state z1 is uncontrollable with respect to u1 and, from Theorem 7.7, the system cannot be put into phase variable canonical form with respect to u1 alone. Hence we must use u2 to control z1, and can assure a pole at −4 by choosing u2 = −3z1. Then we have the single-input system
    dz/dt = (−4 0 0; 0 −2 0; 0 0 −3)z + (0; 1; 1)u1
whose controllable subsystem
    d(z2; z3)/dt = (−2 0; 0 −3)(z2; z3) + (1; 1)u1
can be transformed to phase variable canonical form, and u1 = −12z2 + 6z3.

The above procedure can be generalized to give a means of obtaining multiple-input pole placement.
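The single-input pole placement above can be verified numerically; a sketch using numpy, with the phase variable matrices as derived in this problem:

```python
import numpy as np

# companion (phase variable) form with open loop poles -1, -2, -3
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
b = np.array([[0.0], [0.0], [1.0]])
k = np.array([[-114.0, -63.0, -9.0]])   # feedback u1 = -114x1 - 63x2 - 9x3

poles = np.linalg.eigvals(A + b @ k)
print(sorted(poles.real))   # expect approximately [-6, -5, -4]
```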
8.3. Given the system d²y/dt² = 0. Construct the observer system such that it has a double pole at −γ. Then find the error e = x − x̂ as a function of γ, if the output really is y(t) + η(t), where η(t) is noise.

The observer system has the form
    dx̂/dt = (0 1; 0 0)x̂ + (k1; k2)[(1 0)x̂ − y − η]
The characteristic equation of the closed loop system is λ(λ − k1) − k2 = 0. A double pole at −γ has the characteristic equation λ² + 2γλ + γ² = 0. Hence set k1 = −2γ and k2 = −γ². Then the equation for the error is
    d(e1; e2)/dt = (−2γ 1; −γ² 0)(e1; e2) + (−2γ; −γ²)η
Note the noise drives the error and prevents it from reaching zero.
The transfer function is found from
    (ê1; ê2) = −(s² + 2γs + γ²)^{-1}(2γs + γ²; γ²s)η̂
As γ → ∞, then e1 → −η and e2 → −dη/dt. If η(t) = η0 cos ωt, then ωη0, the amplitude of dη/dt, may be large even though η0 is small, because the noise may be of very high frequency. We conclude that it is not a good idea to set the observer system gains too high, so that the observer system can filter out some of the noise.
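The warning about high observer gain can be seen numerically. The sketch below assumes the noise-to-e2 transfer function derived in this problem, −γ²s/(s + γ)², and evaluates its magnitude at a fixed noise frequency ω: the amplitude approaches ωη0 as γ grows.

```python
import math

def e2_amplitude(gamma, omega, eta0=1.0):
    """|e2| for eta = eta0*cos(omega*t), using -gamma^2 s/(s+gamma)^2 at s = j*omega."""
    s = complex(0.0, omega)
    return abs(-gamma**2 * s / (s + gamma) ** 2) * eta0

for gamma in (1.0, 10.0, 100.0):
    print(gamma, e2_amplitude(gamma, omega=50.0))
```

The amplitude grows monotonically with γ toward ω·η0, so a faster observer passes more of the noise derivative into the control.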
8.4. Given the discrete-time system
    x(n+1) = (0 1; −2 3)x(n) + (0; 1)u(n),   y(n) = (1 0)x(n)
Design a feedback controller giving dominant closed loop poles in the z plane at (1 ± j)/2.

We shall construct an observer to generate the estimate of the state from which we can construct the desired control. The desired closed loop characteristic equation is λ² − λ + 1/2 = 0. Hence we choose u = 3x̂1/2 − 2x̂2. To generate x̂1 and x̂2 we choose a first-order observer with a pole at −0.05, so that it will hardly affect the response due to the dominant poles and yet will filter high-frequency noise. The transformation of variables
    z = (0 1; 1 0)x
gives
    z(n+1) = (3 −2; 1 0)z(n) + (1; 0)u(n),   y(n) = (0 1)z(n)
Use of equation (8.8) gives
    P = (1 −3.05)   and the observer gain on y,   (1 −3.05)(3 −2; 1 0)(3.05; 1) = −2.1525
Then the observer is
    w(n+1) = −0.05w(n) − 2.1525y(n) + u(n)
and the estimate is reconstructed as
    ẑ(n) = (1; 0)w(n) + (3.05; 1)y(n),   x̂(n) = (0 1; 1 0)ẑ(n)
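A simulation sketch of the design as reconstructed above (plant matrix, control law and observer recursion as in this problem): the estimation error x2 − (w + 3.05y) should shrink by the observer pole −0.05 at every step.

```python
x1, x2 = 1.0, -0.5        # plant state
w = 0.0                   # first-order observer state
ratios = []
for n in range(6):
    y = x1
    xh1, xh2 = y, w + 3.05 * y        # state estimate
    u = 1.5 * xh1 - 2.0 * xh2         # u = 3*xh1/2 - 2*xh2
    e = x2 - xh2                      # estimation error before the step
    x1, x2 = x2, -2.0 * x1 + 3.0 * x2 + u   # plant update
    w = -0.05 * w - 2.1525 * y + u          # observer update
    e_next = x2 - (w + 3.05 * x1)
    if abs(e) > 1e-12:
        ratios.append(e_next / e)
print(ratios)   # each ratio should be close to the observer pole -0.05
```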
8.5. Find the sensitivity of the poles of the system
    dx/dt = (−1+a²  a; 2  −2)x
to changes in the parameter a where |a| is small.

We denote the actual system as dx_a/dt = A_a x_a and the nominal system as dx_N/dt = A_N x_N, where A_a = A_N when a = 0. In general, A_a = A_N + δA where ||δA|| is small since |a| is small. We assume A_N has distinct eigenvalues λi^N so that we can always find a corresponding eigenvector wi from A_N wi = λi^N wi. Denote the eigenvectors of A_N† as vi, so that A_N† vi = λi^N vi. Note the eigenvalues λi^N are the same for A_N and A_N†. Taking the transpose gives
    vi† A_N = λi^N vi†     (8.18)
which we shall need later. Next we let the actual eigenvalues be λi^a = λi^N + δλi. Substituting this into the eigenvalue equation for the actual A_a gives
    (A_N + δA)(wi + δwi) = (λi^N + δλi)(wi + δwi)     (8.19)
Subtracting A_N wi = λi^N wi and multiplying by vi† gives
    vi† δA wi + vi† A_N δwi = δλi vi† wi + λi^N vi† δwi + vi†(δλi I − δA)δwi
Neglecting the last quantity on the right since it is of second order and using equation (8.18) then leads to
    δλi = vi† δA wi / (vi† wi)     (8.20)
Therefore for the particular system in question,
    A_a = (−1+a²  a; 2  −2) = (−1 0; 2 −2) + (a²  a; 0  0) = A_N + δA
Then for A_N we find
    λ1^N = −1,   w1 = (1; 2),   v1 = (1; 0)
    λ2^N = −2,   w2 = (0; 1),   v2 = (−2; 1)
Using equation (8.20),
    λ1^a ≈ −1 + (1 0)(a² a; 0 0)(1; 2) = −1 + a² + 2a
    λ2^a ≈ −2 + (−2 1)(a² a; 0 0)(0; 1) = −2 − 2a
For larger values of a, note we can use root locus under parameter variation to obtain exact values. However, the root locus is difficult computationally for very high-order systems, whereas the procedure just described has been applied to a 51st-order system.
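The first-order estimates can be compared with the exact eigenvalues of the perturbed matrix; a sketch using numpy:

```python
import numpy as np

a = 0.01
Aa = np.array([[-1.0 + a * a, a],
               [2.0, -2.0]])
exact = sorted(np.linalg.eigvals(Aa).real)                   # [~lambda2, ~lambda1]
approx = sorted([-1.0 + a * a + 2.0 * a, -2.0 - 2.0 * a])    # equation (8.20)
print(exact, approx)   # should agree to within O(a^2)
```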
8.6. Given the scalar nonlinear system of Fig. 8-20 with input a sin t. Should K > 1 be increased or decreased to minimize the effect of the nonlinearity?

Fig. 8-20
Fig. 8-21

The nominal linear system is shown in Fig. 8-21. The steady state value of e_N is (a sin t)/(K − 1). We approximate this as the input to the unwanted signal, d²(δy)/dt² = e_N³, which gives the steady state value of δy = a³(27 sin t − sin 3t)/[36(K − 1)³]. This approximation, that d²(δy)/dt² = e_N³ instead of e_a³, must be verified by simulation. It turns out this is a good approximation for |a| < 1, and we can conclude that for |a| < 1 we increase K to make δy/y become smaller.
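Up to the overall sign fixed by the feedback loop, the steady state expression above is the double integral of e_N³, which rests on the identity d²/dt²[(27 sin t − sin 3t)/36] = −sin³t. A numeric check of this identity (sketch, using a central second difference):

```python
import math

a, K = 0.5, 3.0

def dy(t):
    """Steady state formula a^3 (27 sin t - sin 3t) / [36 (K-1)^3]."""
    return a**3 * (27.0 * math.sin(t) - math.sin(3.0 * t)) / (36.0 * (K - 1.0) ** 3)

def eN3(t):
    """Cube of the nominal error eN = a sin t / (K - 1)."""
    return (a * math.sin(t) / (K - 1.0)) ** 3

h = 1e-4
err = max(abs((dy(t + h) - 2.0 * dy(t) + dy(t - h)) / h**2 + eN3(t))
          for t in [0.3 * i for i in range(1, 20)])
print(err)   # only the O(h^2) discretization error should remain
```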
8.7. A simplified, normalized representation of the control system of a solid-core nuclear rocket engine is shown in Fig. 8-22, where δn, δT, δP, δp_cr and δv are the changes from nominal in neutron density, core temperature, core hydrogen pressure, control rod setting, and turbine power control valve setting, respectively. Also, G1(s), G2(s) and G3(s) are scalar transfer functions, and in the compensation k1, k2, k3 and k4 are scalar constants, so that the control is proportional plus integral. Find a simple means of improving response.

Fig. 8-22
The main point is to realize this "multiple loop" system is really a multiple input-multiple output system. The control system shown has constrained δp_cr to be an inner loop. A means of improving response is to rewrite the system in matrix form as shown in Fig. 8-23. This opens up the possibility of "cross coupling" the feedback, such as having δp_cr depend on δP as well as δT. Furthermore it is evident that the system is of type 1 and can follow a step input with zero steady state error.

Fig. 8-23
8.8. Compensate the system (s + p1)^{-1}(s + p2)^{-1} to have poles at −π1 and −π2, with an observer pole at −π0, by using the algebraic separation theorem, and discuss the effect of noise η at the output.

The state space representation of the plant is
    dx/dt = (0 1; −p1p2  −p1−p2)x + (0; 1)u,   y = (1 0)x
The feedback compensation can be found immediately as
    u = (p1p2 − π1π2   p1 + p2 − π1 − π2)x̂
To construct the observer system, let P = (a1 a2). Then
    PA = (a1 a2)(0 1; −p1p2  −p1−p2)
from which
    a1 = (p1 + p2 − π0)a2   and   −PL = [π0(p1 + p2 − π0) − p1p2]a2
Also PB = a2. Therefore the estimator dynamics are
    dw/dt = −π0w + [π0(p1 + p2 − π0) − p1p2]a2y + a2u
To construct the estimate,
    x̂ = (P; 1 0)^{-1}(w; y),   i.e.   x̂1 = y,   x̂2 = a2^{-1}w + (π0 − p1 − p2)y
The flow diagram of the closed loop system is shown in Fig. 8-24.

Fig. 8-24
Note the noise η is fed through a first-order system and gain elements to form u. If the noise level is high, it is better to use a second-order observer or a Kalman filter, because then the noise goes through no gain elements directly to the control, but instead is processed through first- and second-order systems. If there is no noise whatsoever, flow diagram manipulations can be used to show the closed loop system is equivalent to one compensated by a lead network, i.e. the above flow diagram with η = 0 can be rearranged as in Fig. 8-25.

Fig. 8-25

Note the observer dynamics have cancelled out, and the closed loop system remains second-order. This corresponds to the conclusion that the observer dynamics are uncontrollable by d. However, the effect of the noise η on u can be seen from the first flow diagram.
Supplementary Problems
8.9. Given the matrix block diagram of Fig. 8-26. Show that this reduces to Fig. 8-27 when the indicated inverse exists.

Fig. 8-26
Fig. 8-27 (the single block G(I + HG)^{-1})
8.10. Given the matrix block diagram of Fig. 8-28. Reduce the block diagram to obtain H1 in an isolated single feedback loop.

Fig. 8-28
8.11. Determine whether the scalar feedback system of Fig. 8-29, in which y(t) is related to e(t) as
    dx/dt = A(t)x + b(t)e,   y = (1 0)x
(i) can follow a step input a with zero error, (ii) can follow a ramp input at with zero error.

Fig. 8-29
8.13. Given the time-varying system d²y/dt² + α(t) dy/dt + β(t)y = u. Find a feedback control u such that the closed loop system behaves like d²z/dt² + θ(t) dz/dt + φ(t)z = 0.
8.14. Given the system
    dx/dt = (1 1 −2; 0 −1 1; · · ·)x,   y = (0 1 −1)x
Construct a third-order observer system with poles at 0, −1 and −1.
8.15. Given the system
    dx/dt = Ax,   y = Cx
Construct a first-order observer system with a pole at −1.

8.16. Given the system
    dx/dt = Ax + bu,   y = (1 0)x
Construct a first-order observer system with a pole at −3 and then find a feedback control u = k1x̂1 + k2x̂2 that places both the closed loop system poles at −2.

8.17. Given the system
    dx/dt = (1/4)(3 1; 1 3)x,   y = (1 1)x
Construct a first-order observer system with a pole at −4.

8.18. Given the system dx/dt = A(t)x + B(t)u where y = C(t)x + D(t)u. What is the form of the observer system when D(t) ≠ 0? What is the algebraic separation theorem when D(t) ≠ 0?

8.19. Given the system
    dx/dt = (1/4)(3  1; 1  3 + 4α(t))x + (0; 1)u,   y = (0 1)x
where α(t) is a small variation. What are A_N(t), B_N(t), C_N(t), D_N(t) and u_N(t)?

8.20. Given the system
    dx/dt = (1 + t + f(t)  · ; f(t)  ·)x + (·; ·)u
Choose u(t) such that at least one state will be insensitive to small variations in f(t), given the nominal solution x_N(t).

8.21. Given that the input d(t) is generated by the scalar system dζ/dt = p(t)ζ. Under what conditions on p(t) can the system dx/dt = α(t)x + β(t)e with y = γ(t)x follow d(t) with zero error?
8.22. Given the general nonunity feedback time-invariant system of Fig. 8-30. Under what conditions can lim e(t) = 0? Set F = I and H = I and derive Theorem 8.2 from these conditions.

Fig. 8-30
8.23. In the proof of Theorem 8.1, why is
    d(t) = ∫ from t0 to t of C(t)Φ_{A−BC}(t, τ)B(τ)d(τ)dτ + g(t) + C(t)Φ_{A−BC}(t, t0)w(t0)
equivalent to dw/dt = [A(t) − B(t)C(t)]w + B(t)d and d = C(t)w + g?
8.24. Show that if a time-invariant system is of type N, then it is of type N − k for all integers k such that 0 ≤ k ≤ N.
8.25. Given the system of Fig. 8-31 where the constant matrix K has been introduced as compensation. Show that
(a) The type number of the system cannot change if K is nonsingular.
(b) The system is of type zero if K is singular.

Fig. 8-31
8.26. Design a system that will follow d(t) = sin t with zero steady state error.
Answers to Supplementary Problems
8.10. This cannot be done unless the indicated inverse exists (Fig. 8-32). The matrices must be in the order given.

Fig. 8-32 (forward block G4G1[I + H2(G2 + G3)G4G1]^{-1}, feedback H1)
8.11. The closed loop system output tends to zero, so Theorem 8.1 applies. Computing the class of inputs d(t) that the system can follow then shows that the system can follow steps but not ramps with zero error.

8.13. For the state equation
    dx/dt = (0 1; −β −α)x + (0; 1)u
corresponding to the given input-output relation, the control
    u = (β − φ   α − θ)x
gives d²y/dt² + α dy/dt + βy = (α − θ) dy/dt + (β − φ)y, the desired closed loop system, which shows the basic idea behind "pole placement".
8.14. dx̂/dt = (1 1 −2; 0 −1 1; · · ·)x̂ + k[(0 1 −1)x̂ − y], with the gain vector k chosen to place the observer poles at 0, −1 and −1.

8.15. dw/dt = −w + (0 1)y, from which the estimate x̂ is reconstructed as a linear combination of w and y.
8.16. See Fig. 8-33.

8.17. This cannot be done because the system is unobservable.

8.18. Subtract D(t)u from y entering the observer, and this reduces to the given formulation.

8.19. The nominal quantities are those of the given system with α(t) set to zero; δy is then related to α(t) through a transfer function with denominator 4(s + 1)(s + 1).

8.20. Form the matrix Q(t) from x_N(t) and its derivative, and choose u(t) such that det Q(t) = 0 for all t.
8.21. p(t) = e^{−∫ from t0 to t of α(τ)dτ}[θ(t) + κ], where θ(t) is any function such that lim θ(t) = 0 and κ is an arbitrary constant.

8.22. lim as s → 0 of [F(s) − G(s)(I + G(s)H(s))^{-1}]sL{d} = 0, and [F(s) − G(s)(I + G(s)H(s))^{-1}]sL{d} is analytic for Re s ≥ 0.

8.24. Use Theorem 8.2.

8.25. Use Theorem 8.2.

8.26. One answer is H(s) = s(s² + 1)^{-1}.
chapter 9
Stability of Linear Systems
9.1 INTRODUCTION
Historically, the concept of stability has been of great importance to the system designer.
The concept of stability seems simple enough for linear time-invariant systems. However,
we shall find that its extension to nonlinear and/or time-varying systems is quite complicated. For the unforced linear time-invariant system dx/dt = Ax, we are assured that the solution x(t) = Te^{J(t−t0)}T^{-1}x0 does not blow up if all the eigenvalues of A are in the left half of the complex plane.
Other than transformation to Jordan form, there are many direct tests for stability.
The Routh-Hurwitz criterion can be applied to the characteristic polynomial of A as a yes-
or-no test for the existence of poles in the right half plane. More useful techniques are
those of root locus, Nyquist, Bode, etc., which indicate the degree of stability in some sense.
These are still the best techniques for the analysis of low-order time-invariant systems, and
we have seen how to apply them to multiple input-multiple output systems in the previous
chapter. Now we wish to extend the idea of stability to time-varying systems, and to do
this we must first examine carefully what is meant by stability for this type of system.
9.2 DEFINITIONS OF STABILITY FOR ZERO-INPUT LINEAR SYSTEMS
A type of "stability" results if we can say the response is bounded.
Definition 9.1: For every x0 and every t0, if there exists a constant κ depending on x0 and t0 such that ||x(t)|| ≤ κ for all t ≥ t0, then the response x(t) is bounded.
Even this simple definition has difficulties. The trouble is that we must specify the response to what. The trajectory x(t) = φ(t; u(τ), x(t0), t0) depends implicitly on three quantities: u(τ), x(t0) and t0. By considering only u(t) = 0 in this section, we have eliminated one difficulty for a while. But can the boundedness of the response depend on x(t0)?
Example 9.1.
Consider the scalar zero-input nonlinear equation dx/dt = −x + x² with initial condition x(0) = x0. The solution to the linearized equation dx/dt = −x is obviously bounded for all x0. However, the solution to the nonlinear equation is
    x(t) = x0/[x0 + e^t(1 − x0)]
For all negative values of x0, this is well behaved. For values of x0 > 1, the denominator vanishes at a time t1 = ln x0 − ln(x0 − 1), so that lim as t → t1 of x(t) = ∞.
It can be concluded that boundedness depends upon the initial conditions for nonlinear
equations in general.
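The finite escape time of Example 9.1 can be checked directly; a sketch in standard-library Python, verifying that the closed form satisfies the differential equation and that the denominator vanishes at t1 = ln 2 when x0 = 2:

```python
import math

x0 = 2.0
x = lambda t: x0 / (x0 + math.exp(t) * (1.0 - x0))   # closed-form solution

h = 1e-6
for t in (0.0, 0.2, 0.4):
    lhs = (x(t + h) - x(t - h)) / (2.0 * h)   # numeric dx/dt
    rhs = -x(t) + x(t) ** 2                   # right side of the equation
    print(abs(lhs - rhs))                     # should be ~0

t1 = math.log(x0 / (x0 - 1.0))                # = ln 2 for x0 = 2
print(x0 + math.exp(t1) * (1.0 - x0))         # denominator vanishes at t1
```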
Theorem 9.1: The boundedness of the response x(t) of the linear system dx/dt = A(t)x is independent of the initial condition x0.
192  STABILITY OF LINEAR SYSTEMS  [CHAP. 9
Proof:  ||x(t)|| = ||φ(t; 0, x0, t0)|| = ||Φ(t, t0)x0|| ≤ ||Φ(t, t0)|| ||x0||
Since ||x0|| is a constant, if ||x(t)|| becomes unbounded as t → ∞, it is solely due to Φ(t, t0).
Now we shall consider other, different types of stability. First, note that x = 0 is a steady state solution (an equilibrium state) of the zero-input linear system dx/dt = A(t)x. We shall define a region of state space by ||x|| < ε, and see if there exists a small region of nonzero perturbations surrounding the equilibrium state x = 0 that gives rise to trajectories which remain within ||x|| < ε. If this is true for all ε > 0, no matter how small, then we have
Definition 9.2: The equilibrium state x = 0 of dx/dt = A(t)x is stable in the sense of Liapunov (for short, stable i.s.L.) if for any t0 and every real number ε > 0, there is some δ > 0, as small as we please, depending on t0 and ε, such that if ||x0|| < δ then ||x(t)|| < ε for all t ≥ t0.
This definition is also valid for nonlinear systems with an equilibrium state x = 0. It is the most common definition of stability, and in the literature "stable i.s.L." is often shortened to "stable". States that are not stable i.s.L. will be called unstable. Note stability i.s.L. is a local condition, in that δ can be as small as we please. Finally, since x = 0 is an obvious choice of equilibrium state for a linear system, when speaking about linear systems we shall not be precise but instead will say the system is stable when we mean the zero state is stable.
Example 9.2.
Consider the nonlinear system of Example 9.1. If x0 ≤ 1, then
    |x(t)| = |x0|/|x0 + e^t(1 − x0)| = |x0|/|1 + (e^t − 1)(1 − x0)| ≤ |x0|
In Definition 9.2 we can set δ = ε if ε ≤ 1, and if ε > 1 we set δ = 1, to show the zero state of Example 9.1 is stable i.s.L. Hence the zero state is stable i.s.L. even though the response can become unbounded for some x0. (This situation corresponds to Fig. 9-2(b) of Problem 9.1.) Of course if the response became unbounded for all x0 ≠ 0, the zero state would be considered unstable. Another point to note is that in the application of Definition 9.2, in the range where ε is small there results the choice of a correspondingly small δ.
Example 9.3.
Given the Van der Pol equation
    d(x1; x2)/dt = (x2; (1 − x1²)x2 − x1)
with initial condition x(0) = x0. The trajectories in state space can be plotted as shown in Fig. 9-1. We will call the trajectory in bold the limit cycle. Trajectories originating outside the limit cycle spiral in towards it and trajectories originating inside the limit cycle spiral out towards it. Consider a small circle of any radius, centered at the origin but such that it lies completely within the limit cycle. Call its radius ε and note only x0 = 0 will result in a trajectory that stays within ||x|| < ε. Therefore the zero state of the Van der Pol equation is unstable, but any trajectory is bounded. (This situation corresponds to Fig. 9-2(e) of Problem 9.1.)

Fig. 9-1
Theorem 9.2: The transition matrix of the linear system dx/dt = A(t)x is bounded as ||Φ(t, t0)|| < κ(t0) for all t ≥ t0 if and only if the equilibrium state x = 0 is stable i.s.L.
Note ||x(t)|| is bounded if ||Φ(t, t0)|| is bounded.
Proof: First assume ||Φ(t, t0)|| < κ(t0), where κ is a constant depending only on t0. If we are given any ε > 0, then we can always find δ = ε/κ(t0) such that if ||x0|| < δ then ε = κ(t0)δ > ||Φ(t, t0)|| ||x0|| ≥ ||Φ(t, t0)x0|| = ||x(t)||. From Definition 9.2 we conclude stability i.s.L.
Next we assume stability i.s.L. Let us suppose Φ(t, t0) is not bounded, so that there is at least one element Φij(t, t0) that becomes large as t tends to ∞. If ||x0|| < δ for a nonzero δ, then the element x0j of x0 can be nonzero, which results in a trajectory that eventually leaves any region in state space defined by ||x|| < ε. This results in an unstable system, so that we have reached a contradiction and conclude that Φ(t, t0) must be bounded.
Taken together, Theorems 9.1 and 9.2 show that boundedness of ||x(t)|| is equivalent to stability i.s.L. for linear systems, and is independent of x0. When any form of stability is independent of the size of the initial perturbation x0, we say the stability is global, or speak of stability in the large. Therefore another way of stating Theorem 9.1 is to say (local) stability i.s.L. implies global stability for linear systems. The nonlinear system of Example 9.1 is stable i.s.L. but not globally stable i.s.L.
In practical applications we often desire the response to return eventually to the equilibrium position x = 0 after a small displacement. This is a stronger requirement than stability i.s.L., which only demands that the response stay within a region ||x|| < ε.
Definition 9.3: The equilibrium state x = 0 is asymptotically stable if (1) it is stable i.s.L. and (2) for any t0 and any x0 sufficiently close to 0, x(t) → 0 as t → ∞.
This definition is also valid for nonlinear systems. It turns out that (1) must be assumed besides (2), because there exist pathological systems where x(t) → 0 but which are not stable i.s.L.
Example 9.4.
Consider the linear harmonic oscillator dx/dt = (0 1; −1 0)x with transition matrix
    Φ(t, 0) = (cos t  sin t; −sin t  cos t)
The A matrix has eigenvalues at ±j. To apply Definition 9.2, ||x(t)||2 ≤ ||Φ(t, t0)||2 ||x0||2 = ||x0||2 < ε = δ since ||Φ(t, t0)||2 = 1. Therefore the harmonic oscillator is stable i.s.L. However, x(t) never damps out to 0, so the harmonic oscillator is not asymptotically stable.
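A quick numerical sketch of this example: the transition matrix is a rotation, so the Euclidean norm of the state is constant, which is stability i.s.L. without asymptotic stability.

```python
import math

def norm_at(t, x0):
    """||Phi(t,0) x0||_2 for the harmonic oscillator rotation matrix."""
    c, s = math.cos(t), math.sin(t)
    x1 = c * x0[0] + s * x0[1]
    x2 = -s * x0[0] + c * x0[1]
    return math.hypot(x1, x2)

x0 = (0.6, -0.8)   # unit-norm initial state
print([round(norm_at(t, x0), 12) for t in (0.0, 1.0, 10.0, 100.0)])  # all 1.0
```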
Example 9.5.
The equilibrium state x = 0 is asymptotically stable for the system of Example 9.1, since any small perturbation (x0 < 1) gives rise to a trajectory that eventually returns to 0.
In all cases except one, if the conditions of the type of stability are independent of t0, then the adjective uniform is added to the descriptive phrase.
Example 9.6.
If δ does not depend on t0 in Definition 9.2, we have uniform stability i.s.L.
Example 9.7.
The stability of any time-invariant system is uniform.
The exception to the usual rule of adding "uniform" to the descriptive phrase results
because here we only consider linear systems. In the general framework of stability def-
initions for time-varying nonlinear systems, there is no inconsistency. To avoid the com-
plexities of general nonlinear systems, here we give
Definition 9.4: If the linear system dx/dt = A(t)x is uniformly stable i.s.L. and if for all t0 and for any fixed ρ, however large, ||x0|| < ρ gives rise to a response x(t) → 0 as t → ∞, then the system is uniformly asymptotically stable.
The difference between Definitions 9.3 and 9.4 is that the conditions of 9.4 do not depend on t0, and additionally must hold for all ρ. If ρ could be as small as we please, this would be analogous to Definition 9.3. This complication arises only because of the linearity, which in turn implies that Definition 9.4 is also global.
Theorem 9.3: The linear system dx/dt = A(t)x is uniformly asymptotically stable if and only if there exist two positive constants κ1 and κ2 such that ||Φ(t, t0)|| ≤ κ1e^{−κ2(t−t0)} for all t ≥ t0 and all t0.
The proof is given in Problem 9.2.
Example 9.8.
Given the linear time-varying scalar system dx/dt = −x/t. This has a transition matrix Φ(t, t0) = t0/t. For initial times t0 > 0 the system is asymptotically stable. However, the response does not tend to 0 as fast as an exponential. This is because for t0 < 0 the system is unstable, and the asymptotic stability is not uniform.
Example 9.9.
Any time-invariant linear system dx/dt = Ax is uniformly asymptotically stable if and only if all the eigenvalues of A have negative real parts.
However, for the time-varying system dx/dt = A(t)x, if A(t) has all its eigenvalues with negative real parts for each fixed t, this in general does not mean the system is asymptotically stable or even stable.
Example 9.10.
Given the system
    d(x1; x2)/dt = (4κ  −3κe^{8κt}; κe^{−8κt}  0)(x1; x2)
with initial conditions x(0) = x0. The eigenvalues are λ1 = κ and λ2 = 3κ. Then if κ < 0, both eigenvalues have real parts less than zero. However, the exact solution is
    2x1(t) = 3(x10 + x20)e^{5κt} − (x10 + 3x20)e^{7κt}
    2x2(t) = (x10 + 3x20)e^{−κt} − (x10 + x20)e^{−3κt}
For any nonzero real κ the system is unstable.
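This example can be checked numerically; the sketch below takes κ = −1 (frozen-time eigenvalues −1 and −3 for every t), verifies the closed-form solution against the second differential equation with a numeric derivative, and shows the growing mode.

```python
import math

k = -1.0
x10, x20 = 1.0, 1.0
s, q = x10 + x20, x10 + 3.0 * x20

def x1(t):
    return 0.5 * (3.0 * s * math.exp(5 * k * t) - q * math.exp(7 * k * t))

def x2(t):
    return 0.5 * (q * math.exp(-k * t) - s * math.exp(-3 * k * t))

# check that the closed form satisfies dx2/dt = kappa e^(-8 kappa t) x1
h, t = 1e-6, 1.0
lhs = (x2(t + h) - x2(t - h)) / (2.0 * h)
print(abs(lhs - k * math.exp(-8.0 * k * t) * x1(t)))   # ~0

print(abs(x2(10.0)))   # grows without bound although both eigenvalues are negative
```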
There are many other types of stability to consider in the general case of nonlinear
time-varying systems. For brevity, we shall not discuss any more than Definitions 9.1-9.4
for zero-input systems. Furthermore, these definitions of stability, and those of the next
section, carry over in an obvious manner to discrete-time systems. The only difference is
that t takes on discrete values.
9.3 DEFINITIONS OF STABILITY FOR NONZERO INPUTS
In some cases we are more interested in the input-output relationships of a system than
its zero-input response. Consider the system
    dx/dt = A(t)x + B(t)u,   y = C(t)x     (9.1)
with initial condition x(t0) = x0 and where ||A(t)|| < κA, a constant, and ||B(t)|| < κB, another constant, for all t ≥ t0, and A(t) and B(t) are continuous.
Definition 9.5: The system (9.1) is externally stable if for any t0, any x0, and any u such that ||u(t)|| ≤ δ for all t ≥ t0, there exists a constant ε which depends only on t0, x0 and δ such that ||y(t)|| ≤ ε for all t ≥ t0.
In other words, if every bounded input produces a bounded output we have external stability.
Theorem 9.4: The system (9.1), with x0 = 0 and single input and output, is uniformly externally stable if and only if there exists a number β < ∞ such that
    ∫ from t0 to t of |h(t, τ)| dτ ≤ β
for all t ≥ t0, where h(t, τ) is the impulse response.
If β depends on t0 the system is only externally stable. If x0 ≠ 0, we additionally require the zero-input system to be stable i.s.L. and C(t) bounded for t > t0 so that the output does not become unbounded. If the system has multiple inputs and/or multiple outputs, the criterion turns out to be ∫ from t0 to t of ||H(t, τ)||1 dτ ≤ β, where the l1 norm of a vector v is ||v||1 = Σ|vi| (see Sections 3.10 and 4.6).
Proof: First we show that if ∫ from t0 to t of |h(t, τ)|dτ ≤ β, then we get external stability. Since
    y(t) = ∫ from t0 to t of h(t, τ)u(τ)dτ
we can take absolute values on both sides and use norm properties to obtain
    |y(t)| ≤ ∫ from t0 to t of |h(t, τ)||u(τ)|dτ ≤ δ ∫ from t0 to t of |h(t, τ)|dτ ≤ δβ = ε
From Definition 9.5 we then have external stability.
Next, if the system is externally stable we shall prove ∫ from t0 to t of |h(t, τ)|dτ ≤ β by contradiction. We set u1(τ) = sgn h(t, τ) for t0 ≤ τ ≤ t, where sgn is the signum function, which has bound 1 for any τ. By the hypothesis of external stability,
    ε ≥ |y(t)| = |∫ from t0 to t of h(t, τ)u1(τ)dτ| = ∫ from t0 to t of |h(t, τ)|dτ
Now suppose ∫ from t0 to t of |h(t, τ)|dτ ≤ β is not true. Then by taking suitable values of t and t0 we can always make this integral larger than any preassigned number α. Suppose we choose t = θ and t0 = θ0 in such a way that ∫ from θ0 to θ of |h(θ, τ)|dτ > α. Again we set u2(τ) = sgn h(θ, τ) so that ∫ from θ0 to θ of h(θ, τ)u2(τ)dτ = y(θ). Since α is any preassigned number, we can set α = ε and arrive at a contradiction.
Example 9.11.
Consider the system dy/dt = u. This has an impulse response h(t, τ) = U(t − τ), a unit step starting at time τ. Then ∫ from t0 to t of |U(t − τ)|dτ = t − t0, which becomes unbounded as t tends to ∞. Therefore this system is not externally stable although the zero-input system is stable i.s.L.
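Example 9.11 is easy to confirm with a simulation sketch: the bounded input u(t) = 1 drives the integrator output to t − t0, which has no bound independent of t.

```python
t0 = 0.0
y, t, dt = 0.0, t0, 1e-2
while t < 100.0:
    y += dt * 1.0        # Euler step of dy/dt = u with bounded input u = 1
    t += dt
print(y)   # approximately 100: the output tracks t - t0 and keeps growing
```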
In general, external stability has no relation whatsoever with zero-input stability con-
cepts because external stability has to do with the time behavior of the output and zero-
input stability is concerned with the time behavior of the state.
Example 9.12.
Consider the scalar time-varying system
    dx/dt = α(t)x + e^{t+θ(t,t0)}u   and   y = e^{−t+θ(t0,t)}x
where θ(t, t0) = ∫ from t0 to t of α(τ)dτ. Then the transition matrix is Φ(t, t0) = e^{θ(t,t0)}. Therefore
    h(t, τ) = e^{−t+θ(t0,t)}Φ(t, τ)e^{τ+θ(τ,t0)} = e^{τ−t}
so that ∫ from t0 to t of |h(t, τ)|dτ = 1 − e^{t0−t} ≤ 1. Thus the system is externally stable. However, since α(t) can be almost anything, any form of zero-input stability is open to question.
However, if we make C(t) a constant matrix, then we can identify the time behavior of the output with that of the state. In fact, with a few more restrictions on (9.1) we have
Theorem 9.5: For the system (9.1) with l1 norms, C(t) = I, and nonsingular B(t) such that ||B^{-1}(t)||1 < κ, the system is externally stable if and only if it is uniformly asymptotically stable.
To have B(t) nonsingular requires u to be an n-vector. If B is constant and nonsingular, it satisfies the requirements stated in Theorem 9.5. Theorem 9.5 is proved in Problem 9.2.
9.4 LIAPUNOV TECHNIQUES
The Routh-Hurwitz, Nyquist, root locus, etc., techniques are valid for linear, time-invariant systems. The method of Liapunov has achieved some popularity for dealing with nonlinear and/or time-varying systems. Unfortunately, in most cases its practical utility is severely limited because response characteristics other than stability are desired.
Consider some metric ρ(x(t), 0). This is the "distance" between the state vector and the zero vector. If some metric, any metric at all, can be found such that the metric tends to zero as t → ∞, it can be concluded that the system is asymptotically stable. Actually, Liapunov realized we do not need a metric to show this, because the triangle inequality (Property 4 of Definition 3.45) can be dispensed with.
Definition 9.6: A time-invariant Liapunov function, denoted v(x), is any scalar function of the state variable x that satisfies the following conditions for all t ≥ t0 and all x in the neighborhood of the origin:
(1) v(x) and its partial derivatives exist and are continuous;
(2) v(0) = 0;
(3) v(x) > 0 for x ≠ 0;
(4) dv/dt = (grad_x v)† dx/dt < 0 for x ≠ 0.
The problem is to find a Liapunov function for a particular system, and there is no general
method to do this.
Example 9.13.
Given the scalar system dx/dt = −x. We shall consider the particular function v(x) = x². Applying the tests in Definition 9.6: (1) v(x) = x² and ∂v/∂x = 2x are continuous, (2) v(0) = 0, (3) v(x) = x² > 0 for all x ≠ 0, and (4) dv/dt = 2x dx/dt = −2x² < 0 for all x ≠ 0. Therefore v(x) is a Liapunov function.
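The conclusion of Example 9.13 can be seen in a short simulation sketch: along any trajectory of dx/dt = −x the candidate v(x) = x² decreases monotonically, as dv/dt = −2x² < 0 predicts.

```python
x = 3.0
dt = 1e-3
v_prev = x * x
decreasing = True
for _ in range(5000):          # integrate out to t = 5
    x += dt * (-x)             # Euler step of dx/dt = -x
    v = x * x                  # Liapunov candidate along the trajectory
    if v >= v_prev:
        decreasing = False
    v_prev = v
print(decreasing, v)   # True, with v near 0
```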
Theorem 9.6: Suppose a time-invariant Liapunov function can be found for the state variable x of the system dx/dt = f(x, t) where f(0, t) = 0. Then the state x = 0 is asymptotically stable.
Proof: Here we shall only prove x → 0 as t → ∞. Definition 9.6 assures existence and continuity of v and dv/dt. Now consider v(φ(t; t0, x0)), i.e. v as a function of t. Since v > 0 and dv/dt < 0 for x ≠ 0, integration with respect to t shows v(φ(t1; t0, x0)) > v(φ(t2; t0, x0)) for t0 < t1 < t2. Although v is thus positive and monotone decreasing, its limit may not be zero. (Consider 1 + e^{−t}.) Assume v has a constant limit κ > 0. Then dv/dt = 0 when v = κ. But dv/dt < 0 for x ≠ 0, and when x = 0 then dv/dt = (grad_x v)†f(0, t) = 0. So dv/dt = 0 implies x = 0, which implies v = 0, a contradiction. Thus v → 0, assuring x → 0.
If Definition 9.6 holds for all t0, then we have uniform asymptotic stability. Additionally,
if the system is linear or if we substitute "everywhere" for "in a neighborhood of the
origin" in Definition 9.6, we have uniform global asymptotic stability. If condition (4) is
weakened to dv/dt ≤ 0, we have only stability i.s.L.
CHAP. 9] STABILITY OF LINEAR SYSTEMS 197
Example 9.14.
For the system of Example 9.13, since we have found a Liapunov function v(x) = x², we conclude
the system dx/dt = -x is uniformly asymptotically stable. Notice that we did not need to solve the state
equation to do this, which is the advantage of the technique of Liapunov.
Definition 9.7: A time-varying Liapunov function, denoted v(x, t), is any scalar function
of the state variable x and time t that satisfies for all t ≥ t0 and all x
in a neighborhood of the origin:
(1) v(x, t) and its first partial derivatives in x and t exist and are con-
tinuous;
(2) v(0, t) = 0;
(3) v(x, t) ≥ α(||x||) > 0 for x ≠ 0 and t ≥ t0, where α(0) = 0 and α(ξ)
is a continuous nondecreasing scalar function of ξ;
(4) dv/dt = (grad_x v)^T dx/dt + ∂v/∂t < 0 for x ≠ 0.
Note that for all t ≥ t0, v(x, t) must be bounded below by a continuous nondecreasing, time-invariant
function of the norm ||x||.
Theorem 9.7: Suppose a time-varying Liapunov function can be found for the state
variable x(t) of the system dx/dt = f(x, t). Then the state x = 0 is asymp-
totically stable.
Proof: Since dv/dt < 0 and v is positive, integration with respect to t shows that
v(x(t0), t0) > v(x(t), t) for t > t0. Now the proof must be altered from that of Theorem 9.6
because the time dependence of the Liapunov function could permit v to tend to zero even
though x remains nonzero. (Consider v = x²e^{−t} when x = t.) Therefore we require
v(x, t) ≥ α(||x||), and v = 0 implies ||x|| = 0. Hence if v tends to zero with this additional
assumption, we have asymptotic stability for some t0.
If the conditions of Definition 9.7 hold for all t0 and if v(x, t) ≤ β(||x||) where β(ξ) is a
continuous nondecreasing scalar function of ξ with β(0) = 0, then we have uniform asymp-
totic stability. Additionally, if the system is linear or if we substitute "everywhere" for
"in a neighborhood of the origin" in Definition 9.7 and require α(||x||) → ∞ as ||x|| → ∞,
then we have uniform global asymptotic stability.
Example 9.15.
Given the scalar system dx/dt = x. The function x²e^{−4t} satisfies all the requirements of Definition
9.6 at any fixed t, and yet the system is not stable. This is because there is no α(||x||) meeting the require-
ment of Definition 9.7.
It is often of use to weaken condition (4) of Definition 9.6 or 9.7 in the following manner.
We need v(x, t) to eventually decrease to zero. However, it is permissible for v(x, t) to be
constant in a region of state space if we are assured the system trajectories will move to
a region of state space in which dv/dt is strictly less than zero. In other words, instead of
requiring dv/dt < 0 for all x ≠ 0 and all t ≥ t0, we could require dv/dt ≤ 0 for all x ≠ 0
and that dv/dt not vanish identically in t ≥ t0 for any t0 and any trajectory arising from a
nonzero initial condition x(t0).
Example 9.16.
Consider the Sturm-Liouville equation d²y/dt² + p(t) dy/dt + q(t)y = 0. We shall impose conditions
on the scalars p(t) and q(t) such that uniform asymptotic stability is guaranteed. First, call y = x1 and
dy/dt = x2, and consider the function v(x, t) = x1² + x2²/q(t). Clearly conditions (1) and (2) of Definition
9.7 hold if q(t) is continuously differentiable, and for (3) to hold suppose 1/q(t) ≥ κ1 > 0 for all t so that
α(||x||) = ||x||₂² min{1, κ1}. Here min{1, κ1} = 1 if κ1 ≥ 1, and min{1, κ1} = κ1 if κ1 < 1. For (4) to
hold, we calculate
dv/dt = 2x1 dx1/dt + (2x2/q(t)) dx2/dt − (x2²/q²(t)) dq/dt
Since dx1/dt = x2 and dx2/dt = −p(t)x2 − q(t)x1, then
dv/dt = −[2p(t) q(t) + dq/dt] x2² / q²(t)
Hence we require 2p(t) q(t) + dq/dt > 0 for all t ≥ t0, since q⁻¹ ≥ κ1. But even if this requirement
is satisfied, dv/dt = 0 for x2 = 0 and any x1, so condition (4) cannot hold. However, if we can show that
when x2 = 0 and x1 ≠ 0, the system moves to a region where x2 ≠ 0, then v(x, t) will still be a Liapunov
function. Hence when x2 = 0, we have dx2/dt = −q(t)x1. Therefore if q(t) ≠ 0, then x2 must become
nonzero when x1 ≠ 0. Thus v(x, t) is a Liapunov function whenever, for any t ≥ t0, q(t) ≠ 0,
1/q(t) ≥ κ1 > 0, and 2p(t) q(t) + dq/dt > 0. Under these conditions the zero state of the given Sturm-
Liouville equation is asymptotically stable.
Furthermore we can show it is uniformly globally asymptotically stable if these conditions are inde-
pendent of t0 and if q(t) ≥ κ2. The linearity of the system implies global stability, and since the previous
calculations did not depend on t0 we can show uniformity if we can find a β(||x||) ≥ v(x, t). If q(t) ≥ κ2,
then
β(||x||) = ||x||₂² max{1, 1/κ2} ≥ v(x, t)
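As a numerical illustration of Example 9.16, the sketch below (an addition; the coefficients p(t) = 2 and q(t) = 1 + 0.5 sin t are hypothetical choices satisfying 1/q ≥ κ1 > 0 and 2pq + dq/dt > 0) integrates the Sturm-Liouville equation with scipy and confirms that v(x, t) = x1² + x2²/q(t) is nonincreasing and decays along the trajectory.

```python
# Numerical check of Example 9.16: v = x1**2 + x2**2/q(t) decreases along
# trajectories of y'' + p(t) y' + q(t) y = 0 when the stated conditions hold.
import numpy as np
from scipy.integrate import solve_ivp

p = lambda t: 2.0
q = lambda t: 1.0 + 0.5 * np.sin(t)     # 1/q >= 2/3 > 0; 2pq + dq/dt > 0

def rhs(t, x):                          # x1 = y, x2 = dy/dt
    return [x[1], -p(t) * x[1] - q(t) * x[0]]

t = np.linspace(0.0, 20.0, 2001)
sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)
v = sol.y[0]**2 + sol.y[1]**2 / q(t)

assert np.all(np.diff(v) <= 1e-8)       # v is (numerically) nonincreasing
assert v[-1] < 1e-2 * v[0]              # and has essentially decayed to zero
```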
For the discrete-time case, we have the analog of Definition 9.7 as
Definition 9.8: A discrete-time Liapunov function, denoted v(x, k), is any scalar function
of the state variable x and integer k that satisfies, for all k > k0 and all
x in a neighborhood of the origin:
(1) v(x, k) is continuous;
(2) v(0, k) = 0;
(3) v(x, k) ≥ α(||x||) > 0 for x ≠ 0;
(4) Δv(x, k) = v(x(k+1), k+1) − v(x(k), k) < 0 for x ≠ 0.
Theorem 9.8: Suppose a discrete-time Liapunov function can be found for the state
variable x(k) of the system x(k+1) = f(x, k). Then the state x = 0 is
asymptotically stable.
The proof is similar to that of Theorem 9.7. If the conditions of Definition 9.8 hold for
all k0 and if v(x, k) ≤ β(||x||) where β(ξ) is a continuous nondecreasing function of ξ with
β(0) = 0, then we have uniform asymptotic stability. Additionally, if the system is linear
or if we substitute "everywhere" for "in a neighborhood of the origin" in Definition 9.8
and require α(||x||) → ∞ with ||x|| → ∞, then we have uniform global asymptotic stability.
Again, we can weaken condition (4) in Definition 9.8 to achieve the same results. Condition
(4) can be replaced by Δv(x, k) ≤ 0 if we have assurance that the system trajectories will
move from regions of state space in which Δv = 0 to regions in which Δv < 0.
Example 9.17.
Given the linear system x(k+1) = A(k)x(k), where ||A(k)|| < 1 for all k and for some norm. Choose
as a discrete-time Liapunov function v(k) = ||x(k)||. Then
Δv = v(x(k+1)) − v(x(k)) = ||x(k+1)|| − ||x(k)|| = ||A(k)x(k)|| − ||x(k)||
Since ||A(k)|| < 1, then ||A(k)x(k)|| ≤ ||A(k)|| ||x(k)|| < ||x(k)|| so that Δv < 0. The other pertinent
properties of v can be verified, so this system is uniformly globally asymptotically stable.
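Example 9.17 is easy to see in numbers. The sketch below (an addition; the matrix is a hypothetical example with spectral norm below one) iterates x(k+1) = Ax(k) with a constant A and confirms Δv = ||x(k+1)|| − ||x(k)|| < 0 at every step.

```python
# Discrete-time Liapunov function v(k) = ||x(k)|| for ||A|| < 1 (Example 9.17).
import numpy as np

A = np.array([[0.5, 0.3],
              [0.0, 0.6]])
assert np.linalg.norm(A, 2) < 1.0       # spectral norm is less than one

x = np.array([1.0, -2.0])
norms = [np.linalg.norm(x)]
for _ in range(30):
    x = A @ x                           # x(k+1) = A x(k)
    norms.append(np.linalg.norm(x))

# Delta v < 0 at every step, so ||x(k)|| decreases monotonically to zero
assert all(b < a for a, b in zip(norms, norms[1:]))
```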
9.5 LIAPUNOV FUNCTIONS FOR LINEAR SYSTEMS
The major difficulty with Liapunov techniques is finding a Liapunov function. Clearly,
if the system is unstable no Liapunov function exists. There is indeed cause to worry that
no Liapunov function exists even if the system is stable, so that the search for such a func-
tion would be in vain. However, for nonlinear systems it has been shown under quite
general conditions that if the equilibrium state x = 0 is uniformly globally asymptotically
stable, then a time-varying Liapunov function exists. The purpose of this section is to
manufacture a Liapunov function. We shall consider only linear systems, and to be specific
we shall be concerned with the asymptotically stable real linear system
dx/dt = A(t)x     (9.2)
in which there exists a constant κB depending on t0 such that ||A(t)||₂ ≤ κB(t0), and where
∫_t^∞ ||Φ(τ, t)||² dτ exists for some norm. Using Theorem 9.3 we can show that uniformly
asymptotically stable systems satisfy this last requirement, because ||Φ(τ, t)|| ≤ κ1 e^{−κ2(τ−t)}
so that
∫_t^∞ ||Φ(τ, t)||² dτ ≤ κ1² ∫_t^∞ e^{−2κ2(τ−t)} dτ = κ1²/2κ2 = κ4 < ∞
However, there do exist asymptotically stable systems that do not satisfy this requirement.
Example 9.18.
Consider the time-varying scalar system dx/dt = −x/2t. This has a transition matrix Φ(t, t0) =
(t0/t)^{1/2}. The system is asymptotically stable for t0 > 0. However, even for t ≥ t0 > 0, we find
∫_t^∞ ||Φ(τ, t)||² dτ = ∫_t^∞ (t/τ) dτ = ∞
Note carefully the position of the arguments of Φ.
Theorem 9.9: For the system (9.2), v(x, t) = x^T P(t) x is a time-varying Liapunov function,
where P(t) = ∫_t^∞ Φ^T(τ, t) Q(τ) Φ(τ, t) dτ, in which Q(t) is any continuous
positive definite symmetric matrix such that ||Q(t)|| ≤ κ5(t0) and Q(t) − εI
is positive definite for all t ≥ t0 if ε > 0 is small enough.
This theorem is of little direct help in the determination of stability because Φ(τ, t) must be
known in advance to compute v(x, t). Note P(t) is positive definite, real, and symmetric.
Proof: Since Φ and Q are continuous, if P(t) < ∞ for all t ≥ t0 then condition (1) of
Definition 9.7 holds. But ||P(t)|| ≤ κ5 ∫_t^∞ ||Φ(τ, t)||² dτ. Since the in-
tegral exists for the systems (9.2) under discussion, condition (1) holds. Condition (2)
obviously holds. To show condition (3),
v(x, t) = ∫_t^∞ x^T(t) Φ^T(τ, t) Q(τ) Φ(τ, t) x(t) dτ = ∫_t^∞ x^T(τ) Q(τ) x(τ) dτ
Since Q(τ) − εI is positive definite, then x^T(Q − εI)x ≥ 0 so that x^T Q x ≥ ε x^T x > 0 for any
x(τ) ≠ 0. Therefore
v(x, t) ≥ ε ∫_t^∞ x^T(τ) x(τ) dτ = ε ∫_t^∞ ||x||₂² dτ
Since ||A(t)||₂ ≤ κB for the system (9.2), use of Problem 4.43 gives
v(x, t) ≥ ε κB⁻¹ ∫_t^∞ ||A(τ)||₂ ||x(τ)||₂² dτ ≥ −ε κB⁻¹ ∫_t^∞ x^T A x dτ
= −ε κB⁻¹ ∫_t^∞ x^T (dx/dτ) dτ = ε κB⁻¹ ||x(t)||₂²/2 = α(||x||) > 0
To show condition (4), since v(x, t) = ∫_t^∞ x^T(τ) Q(τ) x(τ) dτ, then
dv/dt = −x^T(t) Q(t) x(t) < 0 since Q is positive definite.
Example 9.19.
Consider the scalar system of Example 9.8, dx/dt = −x/t. Then Φ(t, t0) = t0/t. A Liapunov function
can be found for this system for t0 > 0, when it is asymptotically stable, because
||A(t)||₂ = t⁻¹ ≤ t0⁻¹ = κB(t0)   and   ∫_t^∞ ||Φ(τ, t)||² dτ = ∫_t^∞ (t/τ)² dτ = t < ∞
For simplicity, we choose Q(t) = 1 so that P(t) = ∫_t^∞ (t/τ)² dτ = t. Then v(x, t) = tx², which is a
time-varying Liapunov function for all t ≥ t0 > 0.
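The two computations in Example 9.19 can be checked numerically. The sketch below (an addition) evaluates P(t) = ∫_t^∞ (t/τ)² dτ by quadrature and confirms that v = tx² decreases along the exact solution x(t) = x0 t0/t.

```python
# Numerical check of Example 9.19: P(t) = t, and v = t*x**2 decays.
import numpy as np
from scipy.integrate import quad

t = 2.0
P, _ = quad(lambda tau: (t / tau)**2, t, np.inf)   # integral of (t/tau)**2
assert abs(P - t) < 1e-8                           # P(t) = t

t0, x0 = 1.0, 3.0
ts = np.linspace(t0, 50.0, 500)
xs = x0 * t0 / ts                    # exact solution of dx/dt = -x/t
v = ts * xs**2                       # v = t*x**2 = x0**2*t0**2/t, decreasing
assert np.all(np.diff(v) < 0)
```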
Consider the system
dx/dt = A(t)x + f(x, u(t), t)     (9.3)
where for t ≥ t0, f(x, u, t) is real, continuous for small ||x|| and ||u||, and ||f|| → 0 as ||x|| → 0
and as ||u|| → 0. As discussed in Section 8.9, this equation often results when considering
the linearization of a nonlinear system or considering parameter variations. (Let δx from
Section 8.9 equal x in equation (9.3).) In fact, most nonswitching control systems satisfy
this type of equation, and ||f|| is usually small because it is the job of the controller to keep
deviations from the nominal small.
Theorem 9.10: If the system (9.3) reduces to the asymptotically stable system (9.2) when
f = 0, then the equilibrium state is asymptotically stable for some small
||x|| and ||u||.
In other words, if the linearized system is asymptotically stable for t ≥ t0, then the corre-
sponding equilibrium state of the nonlinear system is asymptotically stable to small dis-
turbances.
Proof: Use the Liapunov function v(x, t) = x^T P(t) x given by Theorem 9.9, in which
Φ(τ, t) is the transition matrix of the asymptotically stable system (9.2). Then conditions
(1) and (2) of Definition 9.7 hold for this as a Liapunov function for system (9.3) in the
same manner as in the proof of Theorem 9.9. For small ||x|| it makes little difference
whether we consider equation (9.3) or (9.2), and so the lower bound α(||x||) in condition (3)
can be fixed up to hold for trajectories of (9.3). For condition (4) we investigate dv/dt =
d(x^T P x)/dt = 2x^T P dx/dt + x^T (dP/dt) x = 2x^T P A x + 2x^T P f + x^T (dP/dt) x. But from the proof
of Theorem 9.9 we know dv/dt along motions of the system dx/dt = A(t)x satisfies dv/dt =
−x^T Q x = 2x^T P A x + x^T (dP/dt) x, so that −Q = 2PA + dP/dt as quadratic forms. Hence for the system (9.3),
dv/dt = −x^T Q x + 2x^T P f. But ||f|| → 0 as ||x|| → 0 and ||u|| → 0, so that for small enough
||x|| and ||u||, the term −x^T Q x dominates and dv/dt is negative.
Example 9.20.
Consider the system of Example 9.1. The linearized system dx/dt = −x is uniformly asymptotically
stable, so Theorem 9.10 says there exist some small motions in the neighborhood of x = 0 which result
in the asymptotic stability of the nonlinear system. In fact, we know from the solution to the nonlinear
equation that initial conditions x0 < 1 cause trajectories that return to x = 0.
Example 9.21.
Consider the system dx/dt = −x + u(1 + x − u), where u(t) = ε, a small constant. Then f(x, ε, t) =
ε(1 + x − ε) does not tend to zero as x → 0. Consequently Theorem 9.10 cannot be used directly. However,
note the steady state value of x is ε. Therefore redefine variables as z = x − ε. Then dz/dt = −z + εz,
which is stable for all ε ≤ 1.
Example 9.22.
Consider the system
dx1/dt = x2 + x1(x1² + x2²)/2   and   dx2/dt = −x1 + x2(x1² + x2²)/2
The linearized system dx1/dt = x2 and dx2/dt = −x1 is stable i.s.L. because it is the harmonic oscillator
of Example 9.4. But the solution to the nonlinear equation is x1² + x2² = [(x10² + x20²)⁻¹ − t]⁻¹, which is
unbounded (in fact escapes in finite time) for any nonzero x10 or x20. The trouble is that the linearized
system is only stable i.s.L., not asymptotically stable.
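The finite escape time in Example 9.22 shows up clearly in simulation. The sketch below (an addition) integrates the nonlinear system from r0² = x10² + x20² = 1; since d(r²)/dt = r⁴, the closed-form solution r²(t) = 1/(1 − t) blows up at t = 1.

```python
# Finite escape time in Example 9.22, checked by direct integration.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    r2 = x[0]**2 + x[1]**2
    return [x[1] + x[0] * r2 / 2, -x[0] + x[1] * r2 / 2]

x0 = [1.0, 0.0]                      # r0**2 = 1, so escape occurs at t = 1
sol = solve_ivp(rhs, (0.0, 0.999), x0, rtol=1e-10, atol=1e-12,
                dense_output=True)
assert sol.success

# Midway, r**2 should match the closed form 1/(1 - t): r**2(0.5) = 2
r2_mid = sol.sol(0.5)[0]**2 + sol.sol(0.5)[1]**2
assert abs(r2_mid - 2.0) < 1e-3

# Just before the escape time the solution has already blown up
r2_end = sol.y[0, -1]**2 + sol.y[1, -1]**2
assert r2_end > 100.0                # closed form gives 1000 at t = 0.999
```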
9.6 EQUATIONS FOR THE CONSTRUCTION OF LIAPUNOV FUNCTIONS
From Theorem 9.9, if we take the derivative of P(t) we obtain
dP/dt = −Φ^T(t, t) Q(t) Φ(t, t) + ∫_t^∞ [∂Φ(τ, t)/∂t]^T Q(τ) Φ(τ, t) dτ
+ ∫_t^∞ Φ^T(τ, t) Q(τ) [∂Φ(τ, t)/∂t] dτ
Since Φ(t, t) = I and ∂Φ(τ, t)/∂t = ∂Φ⁻¹(t, τ)/∂t = −Φ(τ, t) A(t), we obtain
dP/dt + A^T(t) P(t) + P(t) A(t) = −Q(t)     (9.4)
Theorem 9.11: If and only if the solution P(t) to equation (9.4) is positive definite, where
A(t) and Q(t) satisfy the conditions of Theorem 9.9, then the system (9.2),
dx/dt = A(t)x, is uniformly asymptotically stable.
Proof: If P(t) is positive definite, then v = x^T P(t) x is a Liapunov function. If the
system (9.2) is uniformly asymptotically stable, we obtain equation (9.4) for P.
Theorem 9.11 represents a rare occurrence in Liapunov theory, namely a necessary and
sufficient condition for asymptotic stability. Usually it is a difficult procedure to find a
Liapunov function for a nonlinear system, but equation (9.4) gives a means to generate a
Liapunov function for a linear equation. Unfortunately, finding a solution to equation (9.4)
means solving n(n + 1)/2 equations (since P is symmetric), whereas it is usually easier to
solve the n equations associated with dx/dt = A(t)x. However, in the constant coefficient
case we can take Q to be a constant positive definite matrix, so that P is the solution to
A^T P + P A = −Q     (9.5)
This equation gives a set of n(n + 1)/2 linear algebraic equations that can be solved for P
after any positive definite Q has been selected. It turns out that the solution always exists
and is unique if the system is asymptotically stable. (See Problem 9.6.) Then the solution
P can be checked by Sylvester's criterion (Theorem 4.10), and if and only if P is positive
definite the system is asymptotically stable. This procedure is equivalent to determining
if all the eigenvalues of A have negative real parts. (See Problem 9.7.) Experience has
shown that it is usually easier to use the Lienard-Chipart (Routh-Hurwitz) test than the
Liapunov method, however.
Example 9.23.
Given the system d²y/dt² + 2 dy/dt + y = 0. To determine the stability by Liapunov's method we set
up the state equations
dx/dt = (  0   1 ) x        y = (1  0) x
        ( −1  −2 )
We arbitrarily choose Q = I and solve
( 0  −1 )( p11  p12 )   ( p11  p12 )(  0   1 )   ( −1   0 )
( 1  −2 )( p12  p22 ) + ( p12  p22 )( −1  −2 ) = (  0  −1 )
The solution is P = (1/2)( 3  1 ; 1  1 ). Using Sylvester's criterion we find p11 = 3/2 > 0 and det P = 1/2 > 0,
and so P is positive definite and the system is stable. But it is much easier to use Routh's test, which almost
immediately says the system is stable.
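Example 9.23 can be reproduced with a numerical Liapunov-equation solver. The sketch below (an addition) uses scipy's solve_continuous_lyapunov; since scipy's convention is AX + XAᵀ = Q, we pass Aᵀ to obtain equation (9.5), then apply Sylvester's criterion to the result.

```python
# Reproducing Example 9.23: solve A'P + PA = -I and check positive definiteness.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])
Q = np.eye(2)

# scipy solves A X + X A^T = Q, so pass A^T and -Q to get A^T P + P A = -Q
P = solve_continuous_lyapunov(A.T, -Q)
assert np.allclose(P, 0.5 * np.array([[3.0, 1.0], [1.0, 1.0]]))

# Sylvester's criterion: both leading principal minors positive
assert P[0, 0] > 0 and np.linalg.det(P) > 0
```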
We may ask at this point what practical use can be made of Theorem 9.9, other than the
somewhat comforting conclusion of Theorem 9.10. The rather unsatisfactory answer is
that we might be able to construct a Liapunov function for systems of the form dx/dt =
A(t)x + f(x, t) where A(t) has been chosen such that we can use Theorem 9.9 (or equation
(9.5) when the system is time-invariant). A hint on how to choose Q is given in Problem 9.7.
Example 9.24.
Given the system d²y/dt² + (2 + e^{−t}) dy/dt + y = 0. We set up the state equation in the form
dx/dt = Ax + f:
dx/dt = (  0   1 ) x + (      0      )
        ( −1  −2 )     ( −e^{−t} x2 )
As found in Example 9.23, 2v = 3x1² + 2x1x2 + x2². Then
dv/dt = d(x^T P x)/dt = 2x^T P dx/dt = 2x^T P (Ax + f)
= −x^T Q x + 2x^T P f = −x1² − x2² − (x1 + x2) e^{−t} x2
In matrix form,
dv/dt = −x^T (    1        e^{−t}/2   ) x
             ( e^{−t}/2   1 + e^{−t}  )
Using Sylvester's criterion, 1 > 0 and 1 + e^{−t} > e^{−2t}/4, so the given system is uniformly asymptotically
stable.
For discrete-time systems, the analog of the construction procedure given in Theorem
9.9 is
v(x, k) = Σ_{m=k}^∞ x^T(k) Φ^T(m, k) Q(m) Φ(m, k) x(k)
and the analog of equation (9.5) is A^T P A − P = −Q.
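The discrete-time analog can be exercised the same way. The sketch below (an addition; the matrix is a hypothetical stable example with eigenvalues inside the unit circle) solves AᵀPA − P = −Q with scipy and checks that P is positive definite.

```python
# Solving the discrete-time Liapunov equation A'PA - P = -Q numerically.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.2],
              [-0.1, 0.4]])
Q = np.eye(2)

# scipy solves A X A^H - X + Q = 0; pass A^T so that A^T P A - P = -Q
P = solve_discrete_lyapunov(A.T, Q)
assert np.allclose(A.T @ P @ A - P, -Q)

# P is positive definite, confirming asymptotic stability of x(k+1) = A x(k)
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.all(np.abs(np.linalg.eigvals(A)) < 1)
```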
Solved Problems
9.1. A ball bearing with rolling friction rests on a deformable surface in a gravity field.
Deform the surface to obtain examples of the various kinds of stability.
[Fig. 9-2. Six deformed surfaces (a)–(f), whose labels include: global asymptotic stability; asymptotic stability; stability i.s.L.; stability i.s.L. but unbounded; unstable but bounded; and unstable and unbounded.]
We can see that if the equilibrium state is globally asymptotically stable as in Fig. 9-2(a),
then it is also stable i.s.L., as are (b), (c) and (d), etc. If the shape of the surface does not vary
with time, then the adjective "uniform" can be affixed to the description under each diagram. If the
shape of the surface varies with time, the adjective "uniform" may or may not be affixed, depend-
ing on how the shape of the surface varies with time.
9.2. Prove Theorems 9.3 and 9.5.
We wish to show dx/dt = A(t)x is uniformly asymptotically stable if and only if
||Φ(t, t0)|| ≤ κ1 e^{−κ2(t−t0)}
for all t ≥ t0 and all t0. Also we wish to show dx/dt = A(t)x + B(t)u, where
||B⁻¹(t)||₁ ≤ κ_{B⁻¹}, is externally stable if and only if ||Φ(t, t0)||₁ ≤ κ1 e^{−κ2(t−t0)},
and hence if and only if the unforced system is uniformly asymptotically stable.
Since ||x(t)|| ≤ ||Φ(t, t0)|| ||x0||, if ||Φ(t, t0)|| ≤ κ1 e^{−κ2(t−t0)} and ||x0|| ≤ δ, then ||x(t)|| ≤ κ1δ
and tends to zero as t → ∞, which proves uniform asymptotic stability. Next, if dx/dt = A(t)x
is uniformly asymptotically stable, then for any ρ > 0 and any η > 0 there is a t1 (independent
of t0 by uniformity) such that if ||x0|| ≤ ρ then ||x(t)|| ≤ η for all t > t0 + t1. In fact, for ρ = 1
and η = e⁻¹, there is a t1 = T such that η = e⁻¹ ≥ ||x(t0 + T)|| = ||Φ(t0 + T, t0)x0||. Since
||x0|| = 1, from Theorem 4.11, property 2, ||Φ(t0 + T, t0)x0|| = ||Φ(t0 + T, t0)||. Therefore for all
t0, ||Φ(t0 + T, t0)|| ≤ e⁻¹ and for any positive integer k,
||Φ(t0 + kT, t0)|| ≤ ||Φ(t0 + kT, t0 + (k−1)T)|| ⋯ ||Φ(t0 + 2T, t0 + T)|| ||Φ(t0 + T, t0)|| ≤ e^{−k}
Choose k such that t0 + kT ≤ t < t0 + (k+1)T and define κ1 = ||Φ(t, t0 + kT)|| e < ∞ from
Theorem 9.2. Then
||Φ(t, t0)|| ≤ ||Φ(t, t0 + kT)|| ||Φ(t0 + kT, t0)|| = κ1 e⁻¹ ||Φ(t0 + kT, t0)|| ≤ κ1 e^{−(k+1)}
= κ1 e^{−(t0 + (k+1)T − t0)/T} ≤ κ1 e^{−(t−t0)/T}
Defining κ2 = 1/T proves Theorem 9.3.
If ||Φ(t, t0)||₁ ≤ κ1 e^{−κ2(t−t0)}, then
∫_{t0}^t ||H(t, τ)||₁ dτ ≤ ∫_{t0}^t ||Φ(t, τ)||₁ ||B(τ)||₁ dτ ≤ κB κ1 ∫_{t0}^t e^{−κ2(t−τ)} dτ
= κB κ1 (1 − e^{−κ2(t−t0)})/κ2 ≤ κB κ1/κ2 = β
so that use of Theorem 9.4 proves the system is uniformly externally stable.
If the system is uniformly externally stable, then from Theorem 9.4,
β ≥ ∫_{t0}^t ||H(t, τ)||₁ dτ = ∫_{t0}^t ||Φ(t, τ) B(τ)||₁ dτ
Since ||B⁻¹(τ)||₁ ≤ κ_{B⁻¹}, then
∫_{t0}^t ||Φ(t, τ)||₁ dτ ≤ ∫_{t0}^t ||Φ(t, τ) B(τ)||₁ ||B⁻¹(τ)||₁ dτ ≤ β κ_{B⁻¹} < ∞
for all t0 and all t. Also, for the system (9.1) to be uniformly externally stable with arbitrary x0,
it must be uniformly stable i.s.L., so that ||Φ(τ, t0)||₁ < κ from Theorem 9.2. Therefore
κ β κ_{B⁻¹} ≥ ∫_{t0}^t ||Φ(t, τ)||₁ ||Φ(τ, t0)||₁ dτ ≥ ∫_{t0}^t ||Φ(t, τ) Φ(τ, t0)||₁ dτ
= ∫_{t0}^t ||Φ(t, t0)||₁ dτ = ||Φ(t, t0)||₁ (t − t0)
Define the time T = e κ β κ_{B⁻¹}, so that at time t = t0 + T, ||Φ(t0 + T, t0)||₁ ≤ κ β κ_{B⁻¹}/T = e⁻¹.
Using the argument found in the last part of the proof just given for Theorem 9.3 then shows that
||Φ(t, t0)||₁ ≤ κ1 e^{−(t−t0)/T}, which proves Theorem 9.5.
9.3. Is the scalar linear system dx/dt = 2t(2 sin t − 1)x uniformly asymptotically stable?
We should expect something unusual because |A(t)| increases as t increases, and A(t) alternates
in sign. We find the transition matrix as Φ(t, t0) = e^{θ(t) − θ(t0)} where θ(t) = 4 sin t − 4t cos t − t².
Since θ(t) ≤ 4 + 4|t| − t², lim_{t→∞} θ(t) = −∞, so that lim_{t→∞} Φ(t, t0) = 0 and the system is asymptoti-
cally stable. This is true globally, i.e. for any initial condition x0, because of the linearity of the
system (Theorem 9.1). Now we investigate uniformity by plotting Φ(t, t0) starting from three dif-
ferent initial times: t0 = 2π, t0 = 3π and t0 = 4π (Fig. 9-3). The peaks occur at Φ((2n+1)π, 2nπ) =
e^{π(4−π)(4n+1)}, so that the vertical scale is compressed to give a reasonable plot. We can pick an
initial time t0 large enough that some initial condition in |x0| < δ will give rise to a trajectory
that will leave |x| < ε, for any ε > 0 and δ > 0. Therefore although the system is stable i.s.L.,
it is not uniformly stable i.s.L. and so cannot be uniformly asymptotically stable.
[Fig. 9-3. Plots of Φ(t, t0) versus t for the three initial times.]
9.4. If ξ(t) ≤ κ + ∫_{t0}^t [p(τ) ξ(τ) + μ(τ)] dτ, where p(t) ≥ 0 and κ is a constant, show
that
ξ(t) ≤ ( κ + ∫_{t0}^t e^{−∫_{t0}^τ p(σ) dσ} μ(τ) dτ ) e^{∫_{t0}^t p(τ) dτ}
This is the Gronwall-Bellman lemma and is often useful in establishing bounds for
the response of linear systems.
We shall establish the following chain of assertions to prove the Gronwall-Bellman lemma.
Assertion (i): If dω/dt ≤ 0 and ω(t0) ≤ 0, then ω(t) ≤ 0 for t ≥ t0.
Proof: If ω(t1) > 0, then there is a τ in t0 ≤ τ ≤ t1 such that ω(τ) = 0 and ω(t) > 0 for
τ < t ≤ t1. But using the mean value theorem gives ω(t1) = ω(t1) − ω(τ) = (t1 − τ) dω/dt ≤ 0,
which is a contradiction.
Assertion (ii): If dω/dt − a(t)ω ≤ 0 and ω(t0) ≤ 0, then ω(t) ≤ 0 for t ≥ t0.
Proof: Multiplying the given inequality by e^{−θ(t, t0)}, where θ(t, t0) = ∫_{t0}^t a(τ) dτ, gives
0 ≥ e^{−θ(t, t0)} dω/dt − a(t) e^{−θ(t, t0)} ω = d(e^{−θ(t, t0)} ω)/dt
Applying assertion (i) gives e^{−θ(t, t0)} ω ≤ 0, and since e^{−θ(t, t0)} > 0 then ω ≤ 0.
Assertion (iii): If dφ/dt − a(t)φ ≤ dγ/dt − a(t)γ and φ(t0) ≤ γ(t0), then φ(t) ≤ γ(t).
Proof: Let ω = φ − γ in assertion (ii).
Assertion (iv): If dω/dt − a(t)ω ≤ μ(t) and ω(t0) ≤ κ, then ω(t) ≤ ( κ + ∫_{t0}^t e^{−θ(τ, t0)} μ(τ) dτ ) e^{θ(t, t0)}.
Proof: Set φ = ω and γ = ( κ + ∫_{t0}^t e^{−θ(τ, t0)} μ(τ) dτ ) e^{θ(t, t0)} in assertion (iii), because dγ/dt −
a(t)γ = μ(t) and γ(t0) = κ.
Assertion (v): The Gronwall-Bellman lemma is true.
Proof: Let ω(t) = κ + ∫_{t0}^t [p(τ) ξ(τ) + μ(τ)] dτ, so that the given inequality says ξ(t) ≤ ω(t).
Since p(t) ≥ 0 we have dω/dt = p(t) ξ(t) + μ(t) ≤ p(t) ω(t) + μ(t), and ω(t0) = κ. Applying assertion
(iv) with a = p gives
ξ(t) ≤ ω(t) ≤ ( κ + ∫_{t0}^t e^{−θ(τ, t0)} μ(τ) dτ ) e^{θ(t, t0)}
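A quick numerical sanity check of the lemma (an addition; p(t) = 1/(1+t) and μ(t) = 1 are hypothetical choices): when the hypothesis holds with equality, ξ solves dξ/dt = pξ + μ, and the Gronwall-Bellman bound is attained exactly.

```python
# The Gronwall-Bellman bound is tight when the hypothesis holds with equality.
import numpy as np
from scipy.integrate import solve_ivp, quad

p = lambda t: 1.0 / (1.0 + t)
mu = lambda t: 1.0
kappa, t0, t1 = 2.0, 0.0, 3.0

# Integrate dxi/dt = p(t)*xi + mu(t), xi(t0) = kappa
sol = solve_ivp(lambda t, xi: [p(t) * xi[0] + mu(t)], (t0, t1), [kappa],
                rtol=1e-10, atol=1e-12)
xi_end = sol.y[0, -1]

# Closed-form bound: theta(t) = int_{t0}^t p = log(1 + t), so e^theta = 1 + t
theta = lambda t: np.log(1.0 + t)
inner, _ = quad(lambda tau: np.exp(-theta(tau)) * mu(tau), t0, t1)
bound = (kappa + inner) * np.exp(theta(t1))   # = (2 + ln 4) * 4 here

assert abs(xi_end - bound) < 1e-6
```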
9.5. Show that if A(t) remains bounded for all t ≥ t0 (||A(t)|| ≤ κ), then the system cannot
have finite escape time. Also show ||x(t)|| ≤ ||x0|| e^{∫_{t0}^t ||A(τ)|| dτ} for any A(t).
Integrating dx/dt = A(t)x between t0 and t gives x(t) = x0 + ∫_{t0}^t A(τ) x(τ) dτ. Taking the
norm of both sides gives
||x(t)|| = ||x0 + ∫_{t0}^t A(τ) x(τ) dτ|| ≤ ||x0|| + ||∫_{t0}^t A(τ) x(τ) dτ|| ≤ ||x0|| + ∫_{t0}^t ||A(τ)|| ||x(τ)|| dτ
Since ||A(τ)|| ≥ 0, we use the Gronwall-Bellman inequality of Problem 9.4 with μ(t) = 0 to obtain
||x(t)|| ≤ ||x0|| e^{∫_{t0}^t ||A(τ)|| dτ}. If ||A(τ)|| ≤ κ, then ||x(t)|| ≤ ||x0|| e^{κ(t−t0)}, which shows x(t) is bounded
for finite t and so the system cannot have finite escape time. However, if ||A(t)|| becomes un-
bounded, we cannot conclude the system has finite escape time, as shown by Problem 9.3.
9.6. Under what circumstances does a unique solution P exist for the equation A^T P +
P A = −Q?
We consider the general equation BP + PA = C where A, B and C are arbitrary real n×n
matrices. Define the column vectors of P and C as p_i and c_i respectively, and the row vectors of A as
a_i^T for i = 1, 2, ..., n. Then the equation BP + PA = C can be written as BP + Σ_{i=1}^n p_i a_i^T = C, which
in turn can be written as
[ ( B  0  ...  0 )   ( a11 I  a21 I  ...  an1 I ) ] ( p1 )   ( c1 )
[ ( 0  B  ...  0 ) + ( a12 I  a22 I  ...  an2 I ) ] ( p2 ) = ( c2 )
[ ( .  .  ...  . )   (  ...    ...   ...   ...  ) ] ( .. )   ( .. )
[ ( 0  0  ...  B )   ( a1n I  a2n I  ...  ann I ) ] ( pn )   ( cn )
Call the first matrix in brackets B̂ and the second Â, and the stacked vectors p and c. Then this equation
can be written (B̂ + Â)p = c. Note B̂ has eigenvalues equal to the eigenvalues of B, call them
β_j, repeated n times. Also, the rows of Â − λI_{n²} are linearly dependent if and only if the rows of
A^T − λI_n are linearly dependent. This happens if and only if det(A^T − λI_n) = 0. So Â has eigen-
values equal to the eigenvalues of A, call them α_i, repeated n times. Call T the matrix that reduces
B to Jordan form J, and V = [v_ij] the matrix that reduces A to Jordan form. Then the block
transformation built from T reduces B̂ to block diagonal form with J repeated n times, and the
corresponding transformation built from V reduces Â to block triangular form; together they reduce
B̂ + Â to a block triangular matrix whose diagonal entries are the sums β_j + α_i.
Hence the eigenvalues of B̂ + Â are α_i + β_j for i, j = 1, 2, ..., n.
A unique solution to (B̂ + Â)p = c exists if and only if det(B̂ + Â) = Π_{i,j} (α_i + β_j) ≠ 0.
Therefore if and only if α_i + β_j ≠ 0 for all i and j does a unique solution to BP + PA = C exist.
If B = A^T then β_j = α_j, so we require α_i + α_j ≠ 0. If the real parts of all α_i are in the left half
plane (for asymptotic stability), then it is impossible that α_i + α_j = 0. Hence a unique solution for
P always exists if the system is asymptotically stable.
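The stacking construction above is exactly a Kronecker product, and numpy can carry it out directly. In the sketch below (an addition; the matrices are arbitrary random examples), column-stacking P turns BP + PA = C into (I ⊗ B + Aᵀ ⊗ I)p = c, and the big matrix's eigenvalues are the sums α_i + β_j.

```python
# Problem 9.6's stacking via Kronecker products.
import numpy as np

rng = np.random.default_rng(0)
n = 3
A, B, C = rng.normal(size=(3, n, n))

# Column-stack (Fortran order) BP + PA = C into (I kron B + A^T kron I) p = c
M = np.kron(np.eye(n), B) + np.kron(A.T, np.eye(n))
p = np.linalg.solve(M, C.flatten(order='F'))
P = p.reshape((n, n), order='F')
assert np.allclose(B @ P + P @ A, C)

# Eigenvalues of M are all sums alpha_i + beta_j
alpha, beta = np.linalg.eigvals(A), np.linalg.eigvals(B)
sums = sorted((a + b).real for a in alpha for b in beta)  # imaginary parts pair off
eigs = sorted(np.linalg.eigvals(M).real)
assert np.allclose(sums, eigs, atol=1e-8)
```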
9.7. Show that PA + A^T P = −Q, where Q is a positive definite matrix, has a positive
definite solution P if and only if the eigenvalues of A are in the left half plane.
This is obvious by Theorem 9.11, but we shall give a direct proof assuming A can be diagonalized
as M⁻¹AM = Λ. Construct P as (M⁻¹)†M⁻¹. Obviously P is Hermitian, but not necessarily real.
Note P is positive definite because for any nonzero vector x we have x†Px = x†(M⁻¹)†M⁻¹x =
(M⁻¹x)†(M⁻¹x) > 0. Also, x†Qx = −(M⁻¹x)†(Λ + Λ†)(M⁻¹x) > 0 if and only if the real parts of the
eigenvalues of A are in the left half plane. Furthermore, v = x†(M⁻¹)†M⁻¹x decays as fast as is
possible for a quadratic in the state. This is because the state vector decays in the worst case with a
time constant 1/η, where −η is the real part of the eigenvalue of A nearest the imaginary axis, and
hence the square of the norm of the state vector decays in the worst case with a time constant equal
to 1/2η. To investigate the time behavior of v, we find
dv/dt = −x†Qx = (M⁻¹x)†(Λ + Λ†)(M⁻¹x) ≤ −2η(M⁻¹x)†(M⁻¹x) = −2ηv
For a choice of M⁻¹x equal to the unit vector that picks out the eigenvalue with real part −η, this
becomes an equality and then v decays with time constant 1/2η.
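The construction of Problem 9.7 is easy to check numerically. The sketch below (an addition, using a hypothetical diagonalizable stable A) forms P = (M⁻¹)†M⁻¹ from the eigenvector matrix M and verifies that both P and Q = −(A†P + PA) are positive definite.

```python
# Problem 9.7's construction: P = (M^-1)^H M^-1 for a stable diagonalizable A.
import numpy as np

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
lam, M = np.linalg.eig(A)
assert np.all(lam.real < 0)           # A is asymptotically stable

Minv = np.linalg.inv(M)
P = Minv.conj().T @ Minv              # P = (M^-1)^H M^-1, Hermitian
Q = -(A.conj().T @ P + P @ A)         # then Q = -(A^H P + P A)

# Both quadratic forms are positive definite
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.all(np.linalg.eigvalsh(Q) > 0)
```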
9.8. In the study of passive circuits and unforced passive vibrating mechanical systems
we often encounter the time-invariant real matrix equation F d²y/dt² + G dy/dt +
Hy = 0, where F and H are positive definite and G is at least nonnegative definite.
Prove that the system is stable i.s.L.
Choose as a Liapunov function
v = (dy/dt)^T F (dy/dt) + ∫_{t0}^t (dy/dτ)^T (G + G^T) (dy/dτ) dτ + y^T H y
This is positive definite since F and H are and the integral is nonnegative, so that
v ≥ (dy/dt)^T F (dy/dt) + y^T H y
which is a positive definite quadratic function of the state. Also,
dv/dt = (dy/dt)^T [F d²y/dt² + G dy/dt + Hy] + [d²y^T/dt² F + dy^T/dt G + y^T H] (dy/dt) = 0
By Theorem 9.6 (with condition (4) weakened to dv/dt ≤ 0), the system is stable i.s.L. The reason
this particular Liapunov function was chosen is because it is the energy of the system.
Supplementary Problems
9.9. Given the scalar system dx/dt = tx + u and y = x.
(a) Show ∫_{−∞}^t |h(t, τ)| dτ ≤ √(2π) e^{t²/2}.
(b) Show the system is not stable i.s.L.
(c) Explain why Theorem 9.4 does not hold.
9.10. Given the system dx/dt = −x/t + u and y = x. (a) Show the response to a unit step function
input at time t0 > 0 gives an unbounded output. (b) Explain why Theorem 9.5 does not hold even
though the system is asymptotically stable for t0 > 0.
9.11. By altering only one condition in Definition 9.6, show how Liapunov techniques can be used to give
sufficient conditions to show the zero state is not asymptotically stable.
9.12. Prove that the real parts of the eigenvalues of a constant matrix A are < α if and only if, given
any symmetric, positive definite matrix Q, there exists a symmetric, positive definite matrix P which
is the unique solution of the set of n(n + 1)/2 linear equations −2αP + A^T P + P A = −Q.
9.13. Show that the scalar system dx/dt = −(1 + t)x is asymptotically stable for t ≥ 0 using Liapunov
theory.
[Fig. 9-4. Network with time-varying elements R(t), L(t) and C(t).]
9.14. Consider the time-varying network of Fig. 9-4. Let x1(t) = charge on the capacitor and
x2(t) = flux in the inductor, with initial values x10 and x20 respectively. Then L(t) dx1/dt = x2
and L(t)C(t) dx2/dt + L(t)x1 + R(t)C(t)x2 = 0. Starting with
P(t) = ( R + 2L/RC   1   )
       (     1      2/R  )
find conditions on R, L, and C that guarantee asymptotic stability.
9.15. Using the results of Example 9.16, show that if α > 0 and β is sufficiently small, the Mathieu
equation d²y/dt² + α dy/dt + (1 + β cos 2t)y = 0 is uniformly asymptotically stable.
9.16. Given the time-varying linear system dx/dt = A(t)x, where A(t) need not be symmetric. Using the
Liapunov function v(x) = x^T x, show
||x0||₂ e^{∫_{t0}^t λmin(τ) dτ} ≤ ||x(t)||₂ ≤ ||x0||₂ e^{∫_{t0}^t λmax(τ) dτ}
where λmin(t) and λmax(t) are the smallest and largest eigenvalues of (A(t) + A^T(t))/2.
9.17. What is the construction similar to that of Problem 9.7 for P in the discrete-time case
A^T P A − P = −Q?
9.18. Show that if the system of Example 9.22, page 200, is changed slightly to dx1/dt = x2 − εx1 +
x1(x1² + x2²)/2 and dx2/dt = −x1 − εx2 + x2(x1² + x2²)/2, where ε > 0, then the system is uniformly
asymptotically stable (but not globally).
9.19. Given the system
dx/dt = ( −1   a + e(t) ) x
        (  a      −1    )
Construct a Liapunov function for the system in the case e(t) ≡ 0. This Liapunov function must
give the necessary and sufficient condition for stability, i.e. give the exact stability boundaries on a.
Next, use this Liapunov function on the system where e(t) is not identically zero to find a condition
on e(t) under which the system will always be stable.
9.20. Given the system dx/dt = (A(t) + B(t))x, where dx/dt = A(t)x has a transition matrix with norm
||Φ(t, τ)|| ≤ e^{−κ2(t−τ)}. Using the Gronwall-Bellman inequality with μ = 0, show ||x|| ≤ ||x0|| e^{(κ3−κ2)(t−t0)}
if ||B(t)|| ≤ κ3.
9.21. Show that ||x(t)|| ≤ ||x0|| e^{∫_{t0}^t ||A(τ)|| dτ} + ∫_{t0}^t e^{∫_τ^t ||A(σ)|| dσ} ||B(τ)u(τ)|| dτ for the system dx/dt = A(t)x +
B(t)u with initial condition x(t0) = x0.
Answers to Supplementary Problems
9.9. (a) ∫_{−∞}^t |h(t, τ)| dτ = e^{t²/2} ∫_{−∞}^t e^{−τ²/2} dτ ≤ √(2π) e^{t²/2}
(b) Φ(t, t0) = e^{t²/2} e^{−t0²/2}
(c) The system is not externally stable since β depends on t in Theorem 9.4.
9.10. (a) y(t) = (t − t0²/t)/2 for t ≥ t0 > 0.
(b) The system is not uniformly asymptotically stable.
9.11. Change condition (4) to read dv/dt > 0 in a neighborhood of the origin.
9.12. Replace A by A − αI in equation (9.5).
9.13. Use v = x².
9.14. 0 < κ1 ≤ R(t) ≤ κ2 < ∞
0 < κ3 ≤ L(t) ≤ κ4 < ∞
0 < κ5 ≤ C(t) ≤ κ6 < ∞
0 < κ7 ≤ 1 + (dR/dt)(L/R² − C/2) + (dC/dt) L/RC − (dL/dt)/R
9.17. P = (M⁻¹)†M⁻¹ is always positive definite, and Q = (M⁻¹)†(I − Λ†Λ)M⁻¹, which is positive
definite if and only if each eigenvalue of A has absolute value less than one.
9.18. Use any Liapunov function constructed for the linearized system.
9.19. If Q = 2I, then P = (1/(1 − a²)) ( 1  a ; a  1 ). The system is stable for |a| < 1 if e ≡ 0. If e ≠ 0, we
require 4(1 − a²)² − 4ae(1 − a²) − e² > 0.
9.20. ||x|| ≤ ||x0|| e^{−κ2(t−t0)} + ∫_{t0}^t e^{−κ2(t−τ)} ||B(τ)|| ||x(τ)|| dτ, so apply the Gronwall-Bellman inequality to
||x|| e^{κ2t} ≤ ||x0|| e^{κ2t0} + ∫_{t0}^t κ3 ||x(τ)|| e^{κ2τ} dτ
9.21. Use Problem 9.4 with ||x(t)|| = ξ(t), ||x0|| = κ, ||A(t)|| = p(t), and ||B(t)u(t)|| = μ(t).
chapter 10
Introduction to Optimal Control
10.1 INTRODUCTION
In this chapter we shall study a particular type of optimal control system as an intro-
duction to the subject. In keeping with the spii-it of the book, the problem is stated in
such a manner as to lead to a linear closed loop system. A general optimization problem
usually does not lead to a linear closed loop system.
Suppose it is a control system designer's task to find a feedback control for an open loop
system in which all the states are the real output variables, so that
dx/dt = A(t)x + B(t)u    y = Ix + 0u    (10.1)
In other words, we are given the system shown in Fig. 10-1 and wish to design what goes into the box marked *. Later we will consider what happens when y(t) = C(t)x(t) where C(t) is not restricted to be the unit matrix.
[Fig. 10-1. Vector Feedback Control System: the input d(t) is summed with the output of the box marked *, driving System (10.1), whose state x(t) = y(t) is fed back to *.]
A further restriction must be made on the type of system to be controlled.
Definition 10.1: A regulator is a feedback control system in which the input d(t) = 0.
For a regulator the only forcing terms are due to the nonzero initial conditions of the state variables, x(t₀) = x₀ ≠ 0 in general. We shall study only regulators first because (1) later the extension to servomechanisms (which follow a specified d(t)) will become easier, (2) it turns out that the solution is the same if d(t) is white noise (the proof of this result is beyond the scope of the book), and (3) many systems can be reduced to regulators.
Example 10.1.
Given a step input of height a, u(t) = aU(t − t₀), where the unit step U(t − t₀) = 0 for t < t₀ and U(t − t₀) = 1 for t ≥ t₀, into a scalar system with transfer function 1/(s + β) and initial state x₀. But this is equivalent to a system with a transfer function 1/[s(s + β)], with initial states a and x₀ and with no input. In other words, we can add an extra integrator with initial condition a at the input to the system flow diagram to generate the step, and the resultant system is a regulator. Note, however, the input becomes a state that must be measured and fed back if the system is to be of the form (10.1).
Under the restriction that we are designing a regulator, we require d(t) = 0. Then the input u(t) is the output of the box marked * in Fig. 10-1 and is the feedback control. We shall assume u is then some function to be formed of the present state x(t) and time. This is no restriction, because from condition 1 of Section 1.4 upon the representation of the state of this deterministic system, the present state completely summarizes all the past history of the abstract object. Therefore we need not consider controls u = u(x(τ), t) for t₀ ≤ τ ≤ t, but can consider merely u = u(x(t), t) at the start of our development.
10.2 THE CRITERION FUNCTIONAL
We desire the system to be optimal, but must be very exact about the sense in which the system is optimal. We must find a mathematical expression to measure how optimal the system is in comparison with other systems. A great many factors influence the engineering utility of a system: cost, reliability, consumer acceptance, etc. The factors mentioned are very difficult to measure and put a single number on for purposes of comparison. Consequently in this introductory chapter we shall simply avoid the question by considering optimality only in terms of system dynamic performance. It is still left to the art of engineering, rather than the science, to incorporate unmeasurable quantities into the criterion of optimality.
In terms of performance, the system response is usually of most interest. Response is the time behavior of the output, as the system tries to follow the input. Since the input to a regulator is zero, we wish to have the time behavior of the output, which in this case is the state, go to zero from the initial condition x₀. In fact, for the purposes of this chapter there exists a convenient means to assign a number to the distance that the response [x(τ) for t₀ ≤ τ ≤ t₁] is from 0. Although the criterion need not be a metric, a metric on the space of all time functions between t₀ and t₁ will accomplish this. Also, if we are interested only in how close we are to zero at time t₁, a metric only on all x(t₁) is desired. To obtain a linear closed loop system, here we shall only consider the particular quadratic metric
ρ²(x, 0) = ½xᵀ(t₁)Sx(t₁) + ½∫_{t₀}^{t₁} xᵀ(τ)Q(τ)x(τ) dτ
where S is an n × n symmetric constant matrix and Q(τ) is an n × n symmetric time-varying matrix. If either S or Q(τ), or both, are positive definite and the other one at least nonnegative definite, then ρ(x, 0) is a norm on the product space {x(t₁), x(τ)}. It can be shown this requirement can be weakened to S, Q nonnegative definite if the system dx/dt = A(t)x with y = √Q(t)x is observable, but for simplicity we assume one is positive definite. The exact form of S and Q(τ) is to be fixed by the designer at the outset. Thus a number is assigned to the response obtained by each control law u(x(τ), τ), and the optimum system is that whose control law gives the minimum ρ(x, 0).
The choice of Q(τ) is dictated by the relative importance of each state over the time interval t₀ ≤ t < t₁.
Example 10.2.
Consider the system of an earlier problem with its choice of state variables. If the angle of attack α is the quantity to be kept small, we can minimize the integral of α² = xᵀHx, where H is the appropriate nonnegative definite matrix. Then Q = H is nonnegative definite, and we must choose S to be positive definite so that the criterion remains a norm.
The choice of S is dictated by the relative importance of each state at the final time, t₁.
Example 10.3.
Suppose a missile is to intercept a target at the final time t₁, with miss distance z. It makes no difference what path the missile flies to arrive near z = 0 at t = t₁. Then we may take Q = 0 and choose S diagonal,
S = diag(ε₁, ε₂, ε₃)
where ε₁, ε₂, ε₃ are small fixed positive numbers to be chosen by trial and error after finding the closed loop system for each fixed εᵢ. If any εᵢ = 0, an unstable system could result, but might not.
For the system (10.1), the choice of control u(x(t), t) to minimize ρ(x, 0) is u = −B⁻¹(t)x₀δ(t − t₀) in the case where B(t) has an inverse. Then x(t) = 0 for all t > t₀ and ρ(x, 0) = 0, its minimum value. If B(t) does not have an inverse, the optimal control is a sum of delta functions and their derivatives, such as equation (6.7), page 135, and can drive x(t) to 0 almost instantaneously. Because it is very hard to mechanize a delta function, which is essentially achieving infinite closed loop gain, this solution is unacceptable. We must place a bound on the control. Again, to obtain the linear closed loop system, here we shall only consider the particular quadratic
½∫_{t₀}^{t₁} uᵀ(τ)R(τ)u(τ) dτ
where R(τ) is an m × m symmetric time-varying positive definite matrix to be fixed at the outset by the designer. R is positive definite to assure each element of u is bounded. This is the generalized control "energy", and will be added to ρ²(x, 0). The relative magnitudes of ||Q(τ)|| and ||R(τ)|| are in proportion to the relative values of the response and control energy. The larger ||Q(τ)|| is relative to ||R(τ)||, the quicker the response and the higher the gain of the system.
10.3 DERIVATION OF THE OPTIMAL CONTROL LAW
The mathematical problem statement is that we are given dx/dt = A(t)x + B(t)u and want to minimize
v[x, u] = ½xᵀ(t₁)Sx(t₁) + ½∫_{t₀}^{t₁} [xᵀ(τ)Q(τ)x(τ) + uᵀ(τ)R(τ)u(τ)] dτ    (10.2)
Here we shall give a heuristic derivation of the optimal control and defer the rigorous derivation to Problem 10.1. Consider a real n-vector function of time p(t), called the costate or Lagrange multiplier, which will obey a differential equation that we shall determine. Note for any x and u obeying the state equation, pᵀ(t)[A(t)x + B(t)u − dx/dt] = 0. Adding this null quantity to the criterion changes nothing:
v[x, u] = ½xᵀ(t₁)Sx(t₁) + ∫_{t₀}^{t₁} (½xᵀQx + ½uᵀRu + pᵀAx + pᵀBu − pᵀdx/dτ) dτ
Integrating the term pᵀdx/dτ by parts gives
v[x, u] = v₁[x(t₁)] + v₂[x] + v₃[u]
v₁[x(t₁)] = ½xᵀ(t₁)Sx(t₁) − xᵀ(t₁)p(t₁) + xᵀ(t₀)p(t₀)    (10.3)
v₂[x] = ∫_{t₀}^{t₁} (½xᵀQx + xᵀAᵀp + xᵀdp/dτ) dτ    (10.4)
v₃[u] = ∫_{t₀}^{t₁} (½uᵀRu + pᵀBu) dτ    (10.5)
Introduction of the costate p has permitted v[x, u] to be broken into v₁, v₂ and v₃, and heuristically we suspect that v[x, u] will be minimum when v₁, v₂ and v₃ are each independently minimized. Recall from calculus that when a smooth function attains a local minimum its derivative is zero. Analogously, we suspect that if v₁ is a minimum, then the gradient of (10.3) with respect to x(t₁) is zero:
p(t₁) = Sx^op(t₁)    (10.6)
and that if v₂ is a minimum, then the gradient of the integrand of (10.4) with respect to x is zero:
dp/dt = −Aᵀ(t)p − Q(t)x^op    (10.7)
Here x^op(t) is the response x(t) using the optimal control law u^op(x(t), t) as the input to the system (10.1). Consequently we define p(t) as the vector which obeys equations (10.6) and (10.7). It must be realized that these steps are heuristic because the minimum of v might not occur at the combined minima of v₁, v₂ and v₃; a differentiable function need not result from taking the gradient at each instant of time; and also taking the gradient with respect to x does not always lead to a minimum. This is why the final step of setting the gradient of the integrand of (10.5) with respect to u equal to zero (equation (10.8)) does not give a rigorous proof of the following theorem.
Theorem 10.1: Given the feedback system (10.1) with the criterion (10.2), having the costate defined by (10.6) and (10.7). Then if a minimum exists, it is obtained by the optimal control
u^op = −R⁻¹Bᵀp    (10.8)
The proof is given in Problem 10.1, page 220.
To calculate the optimal control, we must find p(t). This is done by solving the system equation (10.1) using the optimal control (10.8) together with the costate equation (10.7),
d/dt [x^op; p] = [A(t)  −B(t)R⁻¹(t)Bᵀ(t); −Q(t)  −Aᵀ(t)] [x^op; p]    (10.9)
with x^op(t₀) = x₀ and p(t₁) = Sx^op(t₁).
Example 10.4.
Consider the scalar time-invariant system dx/dt = 2x + u with criterion
v = x²(t₁) + ½∫₀¹ (3x² + ¼u²) dt
Then A = 2, B = 1, R = 1/4, Q = 3, S = 2. Hence we solve
d/dt [x^op; p] = [2  −4; −3  −2] [x^op; p]
with x^op(0) = x₀, p(1) = 2x^op(1). Using the methods of Chapter 5,
[x^op(t); p(t)] = (1/8)[ e^{−4t}(2  4; 3  6) + e^{4t}(6  −4; −3  2) ] [x^op(0); p(0)]    (10.10)
Evaluating this at t = 1 and using p(1) = 2x^op(1) gives
p(0) = [(15e⁸ + 1)/(10e⁸ − 2)] x₀    (10.11)
Then from Theorem 10.1,
u^op(x₀, t) = [(2e^{4t} + 30e^{8−4t})/(1 − 5e⁸)] x₀    (10.12)
Note this procedure has generated u^op as a function of t and x₀. If x₀ is known, storage of u^op(x₀, t) as a function of time only, in the memory of a computer controller, permits open loop control; i.e. starting at t₀ the computer generates an input time-function for dx^op/dt = Ax^op + Bu^op(x₀, t) and no feedback is needed.
However, in most regulator systems the initial conditions x₀ are not known. By the introduction of feedback we can find a control law such that the system is optimum for arbitrary x₀. For purposes of illustration we will now give one method for finding the feedback, although this is not the most efficient way to proceed for the problem studied in this chapter. We can eliminate x₀ from dx^op/dt = Ax^op + Bu^op(x₀, t) by solving for x^op(t) in terms of x₀ and t, i.e. x^op(t) = φ(t; x₀, t₀) where φ is the trajectory of the optimal system. Then x^op(t) = φ(t; x₀, t₀) can be solved for x₀ in terms of x^op(t) and t, x₀ = x₀(x^op(t), t). Substituting for x₀ in the system equation then gives the feedback control system dx^op/dt = Ax^op + Bu^op(x^op(t), t).
Example 10.5.
To find the feedback control for Example 10.4, from (10.10) and (10.11),
x^op(t) = [(10e^{8−4t} − 2e^{4t})/(10e⁸ − 2)] x₀
Solving this for x₀ and substituting into (10.12) gives the feedback control
u^op(x^op(t), t) = [(2 + 30e^{8(1−t)})/(1 − 5e^{8(1−t)})] x^op(t)
Hence what goes in the box marked * in Fig. 10-1 is the time-varying gain element K(t) where
K(t) = (2 + 30e^{8(1−t)})/(1 − 5e^{8(1−t)})
and the overall closed loop system is
dx^op/dt = [(4 + 20e^{8(1−t)})/(1 − 5e^{8(1−t)})] x^op
10.4 THE MATRIX RICCATI EQUATION
To find the time-varying gain matrix K(t) directly, let p(t) = P(t)x^op(t), where P(t) is an n × n Hermitian matrix to be found. Then u^op(x, t) = −R⁻¹BᵀPx^op, so that K(t) = −R⁻¹(t)Bᵀ(t)P(t). The closed loop system then becomes dx^op/dt = (A − BR⁻¹BᵀP)x^op; call its transition matrix Φ_cl(t, τ). Substituting p = Px^op into the bottom equation of (10.9) gives
(dP/dt)x^op + P dx^op/dt = −Qx^op − AᵀPx^op
Using the top equation of (10.9) for dx^op/dt then gives
0 = (dP/dt + Q + AᵀP + PA − PBR⁻¹BᵀP)x^op
But x^op(t) = Φ_cl(t, t₀)x₀, and since x₀ is arbitrary and Φ_cl is nonsingular, we find the n × n matrix P must satisfy the matrix Riccati equation
−dP/dt = Q + AᵀP + PA − PBR⁻¹BᵀP    (10.13)
This has the "final" condition P(t₁) = S since p(t₁) = P(t₁)x(t₁) = Sx(t₁). Changing independent variables by τ = t₁ − t, the matrix Riccati equation becomes dP/dτ = Q + AᵀP + PA − PBR⁻¹BᵀP where the arguments of the matrices are t₁ − τ instead of t. The equation can then be solved numerically on a computer from τ = 0 to τ = t₁ − t₀, starting at the initial condition P(0) = S. Occasionally the matrix Riccati equation can also be solved analytically.
Example 10.6.
For the system of Example 10.4, the 1 × 1 matrix Riccati equation is
−dP/dt = 3 + 4P − 4P²
with P(1) = 2. Since this is separable for this example,
t − t₁ = ∫ dP/[4(P − 3/2)(P + 1/2)] = (1/8) ln [P(t) − 3/2]/[P(t₁) − 3/2] − (1/8) ln [P(t) + 1/2]/[P(t₁) + 1/2]
Taking antilogarithms and rearranging, after setting t₁ = 1, gives
P(t) = (15 + e^{8(t−1)})/(10 − 2e^{8(t−1)})
Then K(t) = −R⁻¹BᵀP = −4P(t), which checks with the answer obtained in Example 10.5.
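The numerical procedure of Section 10.4 can be sketched for this example: integrating dP/dτ = 3 + 4P − 4P² forward in τ = t₁ − t from P = S = 2 reproduces the analytic solution:

```python
import math

def rhs(P):
    # In reversed time tau = t1 - t, the Riccati equation of Example 10.6
    # reads dP/dtau = 3 + 4P - 4P^2
    return 3.0 + 4.0 * P - 4.0 * P * P

def riccati_backward(P=2.0, tau_end=1.0, steps=1000):
    """Classical fourth-order Runge-Kutta integration from P(0) = S = 2."""
    h = tau_end / steps
    for _ in range(steps):
        k1 = rhs(P)
        k2 = rhs(P + 0.5 * h * k1)
        k3 = rhs(P + 0.5 * h * k2)
        k4 = rhs(P + h * k3)
        P += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return P

def P_analytic(t):
    e = math.exp(8.0 * (t - 1.0))
    return (15.0 + e) / (10.0 - 2.0 * e)
```

riccati_backward() returns P at t = t₁ − τ_end = 0, which matches P_analytic(0.0).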
Another method of solution of the matrix Riccati equation is to use the transition matrix of (10.9) directly. Partition the 2n × 2n transition matrix of (10.9) as
Φ(t, t₁) = [Φ₁₁(t, t₁)  Φ₁₂(t, t₁); Φ₂₁(t, t₁)  Φ₂₂(t, t₁)]    (10.14)
Then
x(t) = [Φ₁₁(t, t₁) + Φ₁₂(t, t₁)S]x(t₁)    p(t) = [Φ₂₁(t, t₁) + Φ₂₂(t, t₁)S]x(t₁)
Eliminating x(t₁) from this gives
p(t) = P(t)x(t) = [Φ₂₁(t, t₁) + Φ₂₂(t, t₁)S][Φ₁₁(t, t₁) + Φ₁₂(t, t₁)S]⁻¹x(t)    (10.15)
so that P is the product of the two bracketed matrices. A sufficient condition (not necessary) for the existence of the solution P(t) of the Riccati equation is that the open loop transition matrix does not become unbounded in t₀ ≤ t ≤ t₁. Therefore the inverse in equation (10.15) can always be found under this condition.
Example 10.7.
For Example 10.4, use of equation (10.10) gives P(t) from (10.15) as
P(t) = [−3e^{4(t−1)} + 3e^{−4(t−1)} + 2(2e^{4(t−1)} + 6e^{−4(t−1)})] / [6e^{4(t−1)} + 2e^{−4(t−1)} + 2(−4e^{4(t−1)} + 4e^{−4(t−1)})]
which reduces to the answer obtained in Example 10.6.
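A sketch of (10.15) for this example, forming the partitioned transition matrix numerically rather than in closed form:

```python
import math
import numpy as np

H = np.array([[2.0, -4.0], [-3.0, -2.0]])   # Hamiltonian matrix of Example 10.4
S = 2.0

def P_from_transition(t, t1=1.0):
    """P(t) = [Phi_21 + Phi_22 S][Phi_11 + Phi_12 S]^{-1}, equation (10.15)."""
    w, V = np.linalg.eig(H)
    Phi = (V @ np.diag(np.exp(w * (t - t1))) @ np.linalg.inv(V)).real
    num = Phi[1, 0] + Phi[1, 1] * S
    den = Phi[0, 0] + Phi[0, 1] * S
    return num / den

def P_analytic(t):
    e = math.exp(8.0 * (t - 1.0))
    return (15.0 + e) / (10.0 - 2.0 * e)
```

At t = t₁ the partition reduces to P = S = 2, and for earlier t it tracks the solution of Example 10.6.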
We have a useful check on the solution of the matrix Riccati equation.
Theorem 10.2: If S is positive definite and Q(t) at least nonnegative definite, or vice versa, and R(t) is positive definite, then an optimum v[x, u^op] exists if and only if the solution P(t) to the matrix Riccati equation (10.13) exists and is bounded and positive definite for all t < t₁. Under these conditions v[x, u^op] = ½xᵀ(t₀)P(t₀)x(t₀).
Proof is given in Problem 10.1, page 220. It is evident from the proof that if both S and Q(t) are nonnegative definite, P(t) can be nonnegative definite if v[x, u^op] exists.
Theorem 10.3: For S, Q(t) and R(t) symmetric, the solution P(t) to the matrix Riccati equation (10.13) is symmetric.
Proof: Take the transpose of (10.13), note it is identical to (10.13), and recall that there is only one solution P(t) that is equal to S at time t₁.
Note this means for an n × n P(t) that only n(n + 1)/2 equations need be solved on the computer, because S, Q and R can always be taken symmetric. Further aids in obtaining a solution for the time-varying case are given in Problems 10.20 and 10.21.
10.5 TIME-INVARIANT OPTIMAL SYSTEMS
So far we have obtained only time-varying feedback gain elements. The most important engineering application is for time-invariant closed loop systems. Consequently in this section we shall consider A, B, R and Q to be constant, S = 0 and t₁ → ∞ to obtain a constant feedback element K.
Because the existence of the solution of the Riccati equation is guaranteed if the open loop transition matrix does not become unbounded in t₀ ≤ t ≤ t₁, in the limit as t₁ → ∞ we should expect no trouble from asymptotically stable open loop systems. However, we wish to incorporate unstable systems in the following development, and need the following existence theorem for the limit as t₁ → ∞ of the solution to the Riccati equation, denoted by Π.
Theorem 10.4: If the states of the system (10.1) that are not asymptotically stable are controllable, then lim_{t₁→∞} P(t₀; t₁) = Π(t₀) exists and Π(t₀) is constant and positive definite.
Proof: Define a control u¹(τ) = −Bᵀ(τ)Φᵀ(t₂, τ)W⁻¹(t₀, t₂)x(t₀) for t₀ ≤ τ ≤ t₂, similar to that used in Problem 6.8, page 142. Note u¹ drives all the controllable states to zero at the time t₂ < ∞. Let u¹(τ) = 0 for τ > t₂. Defining x_as(t) as the response of the asymptotically stable states, ∫_{t₂}^{∞} x_asᵀQx_as dt < ∞. Therefore
v[x, u¹] = ∫_{t₀}^{t₂} (xᵀQx + u¹ᵀRu¹) dt + ∫_{t₂}^{∞} x_asᵀQx_as dt < ∞
Note v[x, u¹] ≤ α(t₀, t₂)xᵀ(t₀)x(t₀) after carrying out the integration, since both x(t) and u¹(t) are linear in x(t₀). Here α(t₀, t₂) is some bounded scalar function of t₀ and t₂. Then from Theorem 10.2,
½xᵀ(t₀)Π(t₀)x(t₀) = v[x, u^op] ≤ α(t₀, t₂) xᵀ(t₀) x(t₀) < ∞
Therefore ||Π(t₀)||₂ ≤ 2α(t₀, t₂) < ∞ so Π(t₀) is bounded. It can be shown that for S = 0, P_{S=0}(t₀; t) ≤ P_{S=0}(t₀; t₁) for t₀ ≤ t ≤ t₁, and also that when S > 0, then
lim_{t₁→∞} ||P_{S>0}(t₀; t₁) − P_{S=0}(t₀; t₁)|| = 0
Therefore lim_{t₁→∞} P(t₀; t₁) must be a constant because for t₁ large any change in P(t₀; t₁) must be to increase P(t₀; t₁) until it hits the bound Π(t₀).
Since we are now dealing with a time-invariant closed loop system, by Definition 1.8 the time axis can be translated and an equivalent system results. Hence we can send t₀ → −∞, start the Riccati equation at P(t₁) = S = 0 and integrate numerically backwards in time until the steady state constant Π is reached.
Example 10.8.
Consider the system dx/dt = u with criterion v = ½∫_{t₀}^{t₁} (x² + u²) dt. Then the Riccati equation is
−dP/dt = 1 − P² with P(t₁) = 0. This has solution P(t₀) = tanh (t₁ − t₀). Therefore
Π = lim_{t₁→∞} P(t₀; t₁) = lim_{t₁→∞} tanh (t₁ − t₀) = lim_{t₀→−∞} tanh (t₁ − t₀) = 1
The optimal control is u^op = −R⁻¹BᵀΠx = −x.
Since Π is a positive definite constant solution of the matrix Riccati equation, it satisfies the quadratic algebraic equation
0 = Q + AᵀΠ + ΠA − ΠBR⁻¹BᵀΠ    (10.16)
Example 10.9.
For the system of Example 10.8, the quadratic algebraic equation satisfied by Π is 0 = 1 − Π². This has solutions ±1, so that Π = 1, which is the only positive definite solution.
A very useful engineering property of the quadratic optimal system is that it is always
stable if it is controllable. Since a linear, constant coefficient, stable closed loop system
always results, often the quadratic criterion is chosen solely to attain this desirable feature.
Theorem 10.5: If Π exists for the constant coefficient system, the closed loop system is asymptotically stable.
Proof: Choose a Liapunov function V = xᵀΠx. Since Π is positive definite, V > 0 for all nonzero x. Then
dV/dt = xᵀΠ(Ax − BR⁻¹BᵀΠx) + (Ax − BR⁻¹BᵀΠx)ᵀΠx = −xᵀQx − xᵀΠBR⁻¹BᵀΠx < 0
for all x ≠ 0 because Q is positive definite and ΠBR⁻¹BᵀΠ is nonnegative definite.
Since there are in general n(n + 1) solutions of (10.16), it helps to know that Π is the only positive definite solution.
Theorem 10.6: The unique positive definite solution of (10.16) is Π.
Proof: Suppose there exist two positive definite solutions Π₁ and Π₂. Then subtracting (10.16) for each,
(Π₁ − Π₂)(A − BR⁻¹BᵀΠ₁) + (A − BR⁻¹BᵀΠ₂)ᵀ(Π₁ − Π₂) = Q − Q = 0    (10.17)
It is known that the equation XF + GX = K has a unique solution X whenever λᵢ(F) + λⱼ(G) ≠ 0 for any i and j, where λᵢ(F) is an eigenvalue of F and λⱼ(G) is an eigenvalue of G. But A − BR⁻¹BᵀΠᵢ for i = 1 and 2 are stability matrices, from Theorem 10.5. Therefore the sum of the real parts of any combination of their eigenvalues must be less than zero. Hence (10.17) has the unique solution Π₁ − Π₂ = 0.
Generally the equation (10.16) can be solved for very high-order systems, n ≤ 50. This gives another advantage over classical frequency domain techniques, which are not so easily adapted to computer solution. Algebraic solution of (10.16) for large n is difficult, because there are n(n + 1) solutions that must be checked. However, if a good initial guess P(t₂) is available, i.e. P(t₂) = δP(t₂) + Π where δP(t₂) is small, numerical solution of (10.13) backward from any t₂ to a steady state gives Π. In other words, δP(t) = P(t) − Π tends to zero as t → −∞, which is true because:
Theorem 10.7: If A, B, R and Q are constant matrices, and R, Q and the initial state P(t₂) are positive definite, then equation (10.13) is globally asymptotically stable as t → −∞ relative to the equilibrium state Π.
Proof: δP obeys the equation (subtracting (10.16) from (10.13))
−dδP/dt = FᵀδP + δPF − δPBR⁻¹BᵀδP
where F = A − BR⁻¹BᵀΠ. Choose as a Liapunov function 2V = tr[δPᵀ(ΠΠᵀ)⁻¹δP]. For brevity, we investigate here only the real scalar case. Then 2F = −QΠ⁻¹ − B²R⁻¹Π and 2V = Π⁻²δP², so that dV/dt = Π⁻²δP dδP/dt = Π⁻²δP²(QΠ⁻¹ + B²R⁻¹P) > 0 for all nonzero δP since Q, R, P and Π are all > 0. Hence V → 0 as t → −∞, and δP → 0. It can be shown in the vector case that dV/dt > 0 also.
Example 10.10.
Suppose the Riccati equation of Example 10.8 is integrated backward from an incorrect initial condition, i.e. P(t₁) = ε instead of 0. The solution is then P(t₀) = tanh (t₁ − t₀ + tanh⁻¹ ε), and for any |ε| < 1 it still tends to Π = 1 as t₀ → −∞, in accord with Theorem 10.7.
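This convergence is easy to sketch numerically; starting the backward integration of Example 10.8 from a wrong final value ε still drives P toward Π = 1:

```python
import math

def backward_P(eps, span=6.0, steps=6000):
    """Integrate -dP/dt = 1 - P^2 backward over `span` units of time from
    the incorrect final condition P(t1) = eps (forward in tau = t1 - t)."""
    h = span / steps
    P = eps
    for _ in range(steps):
        k1 = 1.0 - P * P
        k2 = 1.0 - (P + 0.5 * h * k1) ** 2
        k3 = 1.0 - (P + 0.5 * h * k2) ** 2
        k4 = 1.0 - (P + h * k3) ** 2
        P += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return P
```

The result matches tanh(span + tanh⁻¹ ε) and approaches 1 as the span grows.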
Experience has shown that for very high-order systems, the approximations inherent in numerical solution (truncation and roundoff) often lead to an unstable solution for bad initial guesses, in spite of Theorem 10.7. Another method of computation of Π will now be investigated.
Theorem 10.8: If λᵢ is an eigenvalue of H, the constant 2n × 2n matrix corresponding to (10.9), then −λᵢ is also an eigenvalue of H.
Proof: Denote the top n elements of the ith eigenvector of H as fᵢ and the bottom n elements as gᵢ. Then
λᵢ [fᵢ; gᵢ] = H [fᵢ; gᵢ],  i.e.  λᵢfᵢ = Afᵢ − BR⁻¹Bᵀgᵢ,  λᵢgᵢ = −Qfᵢ − Aᵀgᵢ    (10.18)
Rearranging,
−λᵢ [gᵢ; −fᵢ] = [Aᵀ  −Q; −BR⁻¹Bᵀ  −A] [gᵢ; −fᵢ] = Hᵀ [gᵢ; −fᵢ]
This shows −λᵢ is an eigenvalue of Hᵀ. Also, since det M = det Mᵀ for any matrix M, for any ξ such that det (H − ξI) = 0 we find that det (Hᵀ − ξI) = 0. This shows that if ξ is an eigenvalue of Hᵀ then ξ is an eigenvalue of H. Since −λᵢ is an eigenvalue of Hᵀ, it is concluded that −λᵢ is an eigenvalue of H.
This theorem shows that the eigenvalues of H are placed symmetrically with respect to the imaginary axis in the complex plane. Then H has at most n eigenvalues with real parts < 0, and will have exactly n unless some are purely imaginary.
Example 10.11.
The H matrix corresponding to Example 10.8 is H = [0  −1; −1  0]. This has eigenvalues +1 and −1, which are symmetric with respect to the imaginary axis.
Example 10.12.
A fourth-order system might give rise to an 8 × 8 H having eigenvalues placed as shown in Fig. 10-2.
[Fig. 10-2: eight eigenvalues plotted in the complex plane (Im λ versus Re λ), placed symmetrically about the imaginary axis.]
The factorization into eigenvalues with real parts < 0 and real parts > 0 is another way of looking at the Wiener factorization of a polynomial p(ω²) into p⁺(ω)p⁻(ω).
Theorem 10.9: If λ₁, λ₂, ..., λₙ are n distinct eigenvalues of H having real parts < 0, then Π = GF⁻¹ where F = (f₁ | f₂ | ... | fₙ) and G = (g₁ | g₂ | ... | gₙ) as defined in equation (10.18).
Proof: Since Π is the unique Hermitian positive definite solution of (10.16), we will show first that GF⁻¹ is a solution to (10.16), then that it is Hermitian, and finally that it is positive definite.
Define an n × n diagonal matrix Λ with λ₁, λ₂, ..., λₙ as its diagonal elements, so that from the eigenvalue equation (10.18),
[F; G] Λ = H [F; G]
from which
FΛ = AF − BR⁻¹BᵀG    (10.19)
GΛ = −QF − AᵀG    (10.20)
Premultiplying (10.19) by F⁻¹ and substituting for Λ in (10.20) gives
GF⁻¹AF − GF⁻¹BR⁻¹BᵀG = −QF − AᵀG
Postmultiplying by F⁻¹ shows GF⁻¹ satisfies (10.16).
Next, we show GF⁻¹ is Hermitian. It suffices to show M = F†G is Hermitian, since GF⁻¹ = F⁻†MF⁻¹ is then Hermitian. Let the elements of M be mⱼₖ = fⱼ†gₖ. For j ≠ k, use of (10.18) gives
(λⱼ* + λₖ)mⱼₖ = −fⱼ†Qfₖ − gⱼ†BR⁻¹Bᵀgₖ = (λⱼ* + λₖ)mₖⱼ*
so that (λⱼ* + λₖ){mⱼₖ − mₖⱼ*} = 0, and λⱼ* + λₖ ≠ 0 because Re λⱼ < 0 and Re λₖ < 0.
Since the term in braces equals 0, we have mⱼₖ = mₖⱼ* and thus GF⁻¹ is Hermitian.
Finally, to show GF⁻¹ is positive definite, define two n × n matrix functions of time as Θ(t) = Fe^{Λt}F⁻¹ and Ψ(t) = Ge^{Λt}F⁻¹. Then Θ(∞) = 0 = Ψ(∞). Using (10.19) and (10.20),
dΘ/dt = (AF − BR⁻¹BᵀG)e^{Λt}F⁻¹
dΨ/dt = −(QF + AᵀG)e^{Λt}F⁻¹
Then
d(Θ†Ψ)/dt = −(e^{Λt}F⁻¹)†(F†QF + G†BR⁻¹BᵀG)(e^{Λt}F⁻¹)
so that
GF⁻¹ = Θ†(0)Ψ(0) = −∫₀^∞ [d(Θ†Ψ)/dt] dt = ∫₀^∞ (e^{Λt}F⁻¹)†(F†QF + G†BR⁻¹BᵀG)(e^{Λt}F⁻¹) dt
Since the integrand is positive definite, GF⁻¹ is positive definite, and GF⁻¹ = Π.
Corollary 10.10: The closed loop response is x(t) = Fe^{Λ(t−t₀)}F⁻¹x₀ with costate p(t) = Ge^{Λ(t−t₀)}F⁻¹x₀.
The proof is similar to the proof that GF⁻¹ is positive definite.
Corollary 10.11: The eigenvalues of the closed loop matrix A − BR⁻¹BᵀΠ are λ₁, λ₂, ..., λₙ, and the eigenvectors of A − BR⁻¹BᵀΠ are f₁, f₂, ..., fₙ.
The proof follows immediately from equation (10.19). Furthermore, since λ₁, λ₂, ..., λₙ are assumed distinct, then from Theorem 4.1 we know f₁, f₂, ..., fₙ are linearly independent, so that F⁻¹ always exists. Furthermore, Theorem 10.5 assures Re λᵢ < 0 so no λᵢ can be imaginary.
Example 10.13.
The H matrix corresponding to the system of Example 10.8 has eigenvalues −1 = λ₁ and +1 = λ₂. Corresponding to these eigenvalues,
[f₁; g₁] = α[1; 1]    [f₂; g₂] = β[1; −1]
where α and β are any nonzero constants. Usually we would merely set α and β equal to one, but for purposes of instruction we do not here. We discard λ₂ and its eigenvector since it has a real part > 0, and form F = f₁ = α and G = g₁ = α. Then Π = GF⁻¹ = 1 because the α's cancel. From Problem 4.41, in the vector case F = F₀K and G = G₀K where K is the diagonal matrix of arbitrary constants associated with each eigenvector, but still Π = GF⁻¹ = G₀K(F₀K)⁻¹ = G₀F₀⁻¹.
Use of an eigenvalue and eigenvector routine to calculate Π from Theorem 10.9 has given results for systems of order n ≤ 50. Perhaps the best procedure is to calculate an approximate Π₀ = GF⁻¹ using eigenvectors, and next use Re(Π₀ + Π₀†)/2 as an initial guess for the Riccati equation (10.13). Then the Riccati equation stability properties (Theorem 10.7) will reduce any errors in Π₀, as well as provide a check on the eigenvector calculation.
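A sketch of this eigenvector procedure in code; the double-integrator test case and its known solution are standard results added here for illustration, not taken from the text:

```python
import numpy as np

def pi_from_eigenvectors(A, B, Q, R):
    """Form the Hamiltonian matrix H of (10.9), keep the n eigenvectors
    with Re(lambda) < 0, and return Pi = G F^{-1} (Theorem 10.9)."""
    n = A.shape[0]
    BRB = B @ np.linalg.inv(R) @ B.T
    H = np.block([[A, -BRB], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]        # eigenvectors for Re(lambda) < 0
    F, G = stable[:n, :], stable[n:, :]
    return (G @ np.linalg.inv(F)).real

# Example 10.8: A = 0, B = Q = R = 1 gives Pi = 1
Pi1 = pi_from_eigenvectors(np.zeros((1, 1)), np.eye(1), np.eye(1), np.eye(1))

# Double integrator with Q = I, R = 1; known answer [[sqrt(3), 1], [1, sqrt(3)]]
A2 = np.array([[0.0, 1.0], [0.0, 0.0]])
B2 = np.array([[0.0], [1.0]])
Pi2 = pi_from_eigenvectors(A2, B2, np.eye(2), np.eye(1))
```

The arbitrary scaling of each eigenvector cancels in GF⁻¹, as in Example 10.13.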
10.6 OUTPUT FEEDBACK
Until now we have considered only systems in which the output was the state, y = Ix. For y = Ix, all the states are available for measurement. In the general case y = C(t)x, the states must be reconstructed from the output. Therefore we must assume the observability of the closed loop system dx/dt = F(t)x where F(t) = A(t) − B(t)R⁻¹(t)Bᵀ(t)P(t).
To reconstruct the state from the output, the output is differentiated n − 1 times:
y = C(t)x = N₁(t)x
dy/dt = N₁(t)dx/dt + (dN₁/dt)x = (N₁F + dN₁/dt)x = N₂x
⋮
d^{n−1}y/dt^{n−1} = (N_{n−1}F + dN_{n−1}/dt)x = Nₙx
where Nᵀ = (N₁ᵀ | ... | Nₙᵀ) is the observability matrix defined in Theorem 6.11. Define an nk × k matrix of differentiation operators H(d/dt) by Hᵀ(d/dt) = (I | I d/dt | ... | I d^{n−1}/dt^{n−1}). Then x = N⁺(t)H(d/dt)y. Since the closed loop system is observable, N has rank n. From Property 16 of Section 4.8, page 87, the generalized inverse N⁺ has rank n. Using the results of Problem 3.11, page 63, we conclude the n-vector x exists and is uniquely determined. The optimal control is then u = −R⁻¹BᵀPN⁺Hy.
Example 10.14.
Given the system
d/dt [x₁; x₂] = [0  1; −α₂  −α₁] [x₁; x₂]    y = (1  0)x
with optimal control u = k₁x₁ + k₂x₂. Since y = x₁ and dx₁/dt = x₂, the optimal control in terms of the output is u = k₁y + k₂ dy/dt.
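The reconstruction x = N⁺H(d/dt)y of Section 10.6 can be sketched for this example; the numbers chosen for α₁, α₂ and the state are arbitrary illustrations:

```python
import numpy as np

a1, a2 = 3.0, 2.0                       # illustrative values, not from the text
F = np.array([[0.0, 1.0], [-a2, -a1]])  # closed loop matrix of Example 10.14
C = np.array([[1.0, 0.0]])

N = np.vstack([C, C @ F])               # observability matrix (N1; N2)
N_plus = np.linalg.pinv(N)              # generalized inverse N^+

# For any state x, y = Cx = x1 and dy/dt = CFx = x2; N^+ recovers x exactly
x = np.array([0.4, -1.3])
y_and_dy = np.array([(C @ x)[0], (C @ F @ x)[0]])
x_rec = N_plus @ y_and_dy
```

Here N is square and nonsingular, so the generalized inverse reduces to the ordinary inverse.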
[Fig. 10-3. System (10.1) feeds its output to a state estimator; the estimated state is multiplied by −R⁻¹BᵀP to form the control.]
Since u involves derivatives of y, it may appear that this is not very practical. This mathematical result arises because we are controlling a deterministic system in which there is no noise, i.e. in which differentiation is feasible.
However, in most cases the noise is such that the probabilistic nature of the system must be taken into account. A result of stochastic optimization theory is that under certain circumstances the best estimate of the state can be used in place of the state in the optimal control and still an optimum is obtained (the "separation theorem"). An estimate of each state can be obtained from the output, so that the structure of the optimal controller is as shown in Fig. 10-3.
10.7 THE SERVOMECHANISM PROBLEM
Here we shall discuss only servomechanism problems that can be reduced to regulator
problems. We wish to find the optimum compensation to go into the box marked * * in
Fig. 10-4.
Fig. 10-4. The Servomechanism Problem
The criterion to be minimized is
½eᵀ(t₁)Se(t₁) + ½∫_{t₀}^{t₁} [eᵀ(τ)Q(τ)e(τ) + uᵀ(τ)R(τ)u(τ)] dτ    (10.21)
Note when e(t) = y(t) − d(t) is minimized, y(t) will follow d(t) closely.
To reduce this problem to a regulator, we consider only those d(t) that can be generated by arbitrary z(t₀) in the equation dz/dt = A(t)z, d = C(t)z. The coefficients A(t) and C(t) are identical to those of the open loop system (10.1).
Example 10.15.
Given the open loop system
dx/dt = [0  1  0; 0  0  0; 0  0  −2]x + [b₁; b₂; b₃]u    y = (1  1  1)x
Then we consider only those inputs d(t) = z₁₀ + z₂₀(t + 1) + z₃₀e^{−2t}. In other words, we can consider as inputs only ramps, steps and e^{−2t} functions and arbitrary linear combinations thereof.
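This can be sketched numerically: simulating dz/dt = Az from arbitrary z(t₀) and summing the components reproduces d(t) = z₁₀ + z₂₀(t + 1) + z₃₀e^{−2t}:

```python
import math

z10, z20, z30 = 0.7, -0.3, 1.1      # arbitrary initial conditions z(0)

def d_exact(t):
    return z10 + z20 * (t + 1.0) + z30 * math.exp(-2.0 * t)

def d_simulated(t_end=2.0, steps=2000):
    """RK4 simulation of dz/dt = Az, A from Example 10.15; d = z1 + z2 + z3."""
    def f(z):
        return [z[1], 0.0, -2.0 * z[2]]
    h = t_end / steps
    z = [z10, z20, z30]
    for _ in range(steps):
        k1 = f(z)
        k2 = f([zi + 0.5 * h * ki for zi, ki in zip(z, k1)])
        k3 = f([zi + 0.5 * h * ki for zi, ki in zip(z, k2)])
        k4 = f([zi + h * ki for zi, ki in zip(z, k3)])
        z = [zi + (h / 6.0) * (a + 2.0 * b + 2.0 * c + d)
             for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]
    return sum(z)
```

The simulated d(t) agrees with the closed form at the final time to integrator accuracy.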
Restricting d(t) in this manner permits defining new state variables w = x − z. Then
dw/dt = A(t)w + B(t)u    e = C(t)w    (10.22)
Now we have the regulator problem (10.22) subject to the criterion (10.21), and solution of the matrix Riccati equation gives the optimal control as u = −R⁻¹BᵀPw. The states w can be found from e and its n − 1 derivatives as in Section 10.6, so the content of the box marked ** in Fig. 10-4 is R⁻¹BᵀPN⁺H.
The requirement that the input d(t) be generated by the zero-input equation dz/dt = A(t)z has significance in relation to the error constants discussed in Section 8.3. Theorem 8.1 states that for unity feedback systems, such as that of Fig. 10-4, if the zero output is asymptotically stable then the class of inputs d(t) that the system can follow (such that lim_{t→∞} e(t) = 0) is generated by the equations dw/dt = A(t)w + B(t)g and d(t) = C(t)w + g where g(t) is any function such that lim_{t→∞} g(t) = 0. Unfortunately, such an input is not reducible to the regulator problem in general. However, taking g = 0 assures us the system can follow the inputs that are reducible to the regulator problem if the closed loop zero output is asymptotically stable.
Restricting the discussion to time-invariant systems gives us the assurance that the closed loop zero output is asymptotically stable, from Theorem 10.5. If we further restrict discussion to inputs of the type dᵢ = (t − t₀)^{m−1}U(t − t₀)eᵢ as in Definition 8.1, then Theorem 8.2 applies. Then we must introduce integral compensation when the open loop transfer function matrix H(s) is not of the correct type l to follow the desired input.
Example 10.16.
Consider a system with transfer function G{s) in which G(0) ¥^ », i.e. it contains no pure integrations.
We must introduce integral compensation, lis. Then the optimal servomechanism to follow a step input
is as shown in Fig. 10-5 where the box marked ** contains R^ib^nN~iH(s). This is a linear combination
of l,s, . . .,s" since the overall system G{s)/s is of order m + 1. Thus we can write the contents of **
as fcj + &2S + • • • + A;„s". The compensation for G{s) is then fej/s + A;2 + ^38 + • • • + fc^s""!. In a noisy
environment the differentiations cannot be realized and are approximated by a filter, so that the compen-
sation takes the form of integral plus proportional plus a filter.
[Fig. 10-5. Optimal servomechanism for a step input d(t): the error −e(t) drives the box marked ** and the integral compensation, producing u(t) into the plant to give y(t).]
10.8 CONCLUSION
In this chapter we have studied the linear optimal control problem. For time-invariant systems, we note a similarity to the pole placement technique of Section 8.6. We can take our choice as to how to approach the feedback control design problem. Either we select the pole positions or we select the weighting matrices Q and R in the criterion. The equivalence of the two methods is manifested by Corollary 10.11, although the equivalence is not one-to-one because analysis similar to Problem 10.8 shows that a control u = kᵀx is optimal if and only if |det (jωI − A − bkᵀ)| ≥ |det (jωI − A)| for all ω. The dual of this equivalence is that Section 8.7 on observers is similar to the Kalman-Bucy filter and the algebraic separation theorem of Section 8.8 is similar to the separation theorem of stochastic optimal control.
Solved Problems
10.1. Prove Theorem 10.1, page 211, and Theorem 10.2, page 213.
The heuristic proof given previously was not rigorous for the following six reasons.
(1) By minimizing under the integral sign at each instant of time, i.e. with t fixed, the resulting minimum is not guaranteed continuous in time. Therefore the resulting optimal functions x^op(t) and p(t) may not have a derivative, and equation (10.7) would make no sense.
(2) The open loop time function u(t) was found, and only later related to the feedback control u(x, t).
(3) We wish to take piecewise continuous controls into account.
(4) Taking derivatives of smooth functions gives local maxima, minima, or inflection points. We wish to guarantee a global minimum.
(5) We supposed the minimum of each of the three terms ν₁, ν₂ and ν₃ gave the minimum of ν.
(6) We said in the heuristic proof that if the function were a minimum, then equations (10.6), (10.7) and (10.8) held, i.e. were necessary conditions for a minimum. We wish to give sufficient conditions for a minimum: we will start with the assumption that a certain quantity v(x, t) obeys a partial differential equation, and then show that a minimum is attained.
To start the proof, call the trajectory x(t) = φ(t; x(t₀), t₀) corresponding to the optimal system dx/dt = Ax + Bu^op(x, t) starting from t₀ and initial condition x(t₀). Note φ does not depend on u since u has been chosen as a specified function of x and t. Then ν[x, u^op(x, t)] can be evaluated if φ is known, simply by integrating out φ and leaving the parameters t₁, t₀ and x(t₀). Symbolically,

ν[x, u^op] = ½φᵀ(t₁; x(t₀), t₀) S φ(t₁; x(t₀), t₀) + ½∫_{t₀}^{t₁} [φᵀ(τ; x(t₀), t₀) Q(τ) φ(τ; x(t₀), t₀) + u^opᵀ(φ, τ) R u^op(φ, τ)] dτ

Since we can start from any initial conditions x(t₀) and initial time t₀, we can consider ν[x, u^op] = ν(x(t₀), t₀; t₁) where ν is an explicit function of the n + 1 variables x(t₀) and t₀, depending on the fixed parameter t₁.
Suppose we can find a solution v(x, t) to the (Hamilton-Jacobi) partial differential equation

∂v/∂t + ½xᵀQx − ½(grad_x v)ᵀBR⁻¹Bᵀ grad_x v + (grad_x v)ᵀAx = 0    (10.23)

with the boundary condition v(x(t₁), t₁) = ½xᵀ(t₁) S x(t₁). Note that for any control u(x, t),

½uᵀRu + ½(grad_x v)ᵀBR⁻¹Bᵀ grad_x v + (grad_x v)ᵀBu = ½(u + R⁻¹Bᵀ grad_x v)ᵀ R (u + R⁻¹Bᵀ grad_x v) ≥ 0

where the equality is attained only for

u(x, t) = −R⁻¹Bᵀ grad_x v    (10.24)

Rearranging the inequality and adding ½xᵀQx to both sides,

½xᵀQx + ½uᵀRu ≥ ½xᵀQx − ½(grad_x v)ᵀBR⁻¹Bᵀ grad_x v − (grad_x v)ᵀBu

Using (10.23) and dx/dt = Ax + Bu, we get

½xᵀQx + ½uᵀRu ≥ −[∂v/∂t + (grad_x v)ᵀ(Ax + Bu)] = −dv(x(t), t)/dt
Integration preserves the inequality, except for a set of measure zero:

½∫_{t₀}^{t₁} (xᵀQx + uᵀRu) dt ≥ v(x(t₀), t₀) − v(x(t₁), t₁)

Given the boundary condition on the Hamilton-Jacobi equation (10.23), the criterion ν[x, u] for any control is

ν[x, u] = ½xᵀ(t₁) S x(t₁) + ½∫_{t₀}^{t₁} (xᵀQx + uᵀRu) dt ≥ v(x(t₀), t₀)

so that if v(x(t₀), t₀) is nonnegative definite, then it is the cost due to the optimal control (10.24):

u^op(x, t) = −R⁻¹(t)Bᵀ(t) grad_x v(x, t)

To solve the Hamilton-Jacobi equation (10.23), set v(x, t) = ½xᵀP(t)x where P(t) is a time-varying Hermitian matrix to be found. Then grad_x v = P(t)x and the Hamilton-Jacobi equation reduces to the matrix Riccati equation (10.13). Therefore the optimal control problem has been reduced to the existence of positive definite solutions P(t) for t < t₁ of the matrix Riccati equation. Using more advanced techniques, it can be shown that a unique positive definite solution to the matrix Riccati equation always exists if ‖A(t)‖ < ∞ for t < t₁. We say positive definite because

½xᵀ(t₀) P(t₀) x(t₀) = v(x(t₀), t₀) = ν[x, u^op] > 0 for all t₀ < t₁

and all nonzero x(t₀), which proves Theorem 10.2. Since we know from Section 10.4 that the matrix Riccati equation is equivalent to (10.9), this proves Theorem 10.1.
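The substitution v(x, t) = ½xᵀP(t)x can be checked concretely in the scalar case. The following sketch (plain Python, with the illustrative values a = −1, b = q = r = 1, which are not taken from the text) computes the positive root P of the steady-state scalar Riccati equation and verifies that the Hamilton-Jacobi residual of v = ½Px² vanishes for every x, so the partial differential equation indeed collapses to the Riccati equation.

```python
import math

# Scalar time-invariant example: dx/dt = a*x + b*u, cost (1/2)∫(q x² + r u²) dt.
# Illustrative values (not from the text):
a, b, q, r = -1.0, 1.0, 1.0, 1.0

# Steady-state Riccati equation 0 = q + 2aP - P²b²/r  →  positive root
P = r * (2 * a + math.sqrt(4 * a * a + 4 * q * b * b / r)) / (2 * b * b)
assert abs(q + 2 * a * P - P * P * b * b / r) < 1e-12

def hj_residual(x):
    # Hamilton-Jacobi left side with v = ½Px²: ∂v/∂t = 0 (P constant), grad_x v = Px
    grad = P * x
    return 0.5 * q * x * x - 0.5 * grad * (b * b / r) * grad + grad * a * x

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(hj_residual(x)) < 1e-10

# The optimal control (10.24) is u = -(b P / r) x
print("P =", P, " feedback gain =", b * P / r)
```

The same check works for any stable scalar example; only the root formula for P changes with a, b, q and r.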
10.2. Find the optimal feedback control for the scalar time-varying system dx/dt = −x/2t + u/t to minimize ν = ½∫_{t₀}^{t₁} (x² + u²) dt.

Note the open loop system has a transition matrix φ(t, τ) = (τ/t)^{1/2}, which escapes at t = 0. The corresponding Riccati equation is

dP/dt = −1 + P/t + P²/t²

with boundary condition P(t₁) = 0. This has the solution

P(t) = t(t₁² − t²)/(t₁² + t²)

so that P(t) ≥ 0 for 0 < t ≤ t₁ and for t ≤ t₁ < 0. However, for −t₁ < t < 0 < t₁, the interval in which the open loop system escapes, P(t) < 0. Hence we do not have a nonnegative solution of the Riccati equation, and the control is not the optimal one in this interval. This can only happen when ‖A(t)‖ is not bounded in the interval considered.
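A candidate Riccati solution found by inspection, such as P(t) = t(t₁² − t²)/(t₁² + t²) here, is worth confirming by backward numerical integration from the boundary condition. A sketch in plain Python (with the illustrative choice t₁ = 1):

```python
t1 = 1.0

def f(t, P):
    # Riccati equation of Problem 10.2: dP/dt = -1 + P/t + P²/t²
    return -1.0 + P / t + P * P / (t * t)

def closed_form(t):
    return t * (t1 ** 2 - t ** 2) / (t1 ** 2 + t ** 2)

# Classical RK4, integrating backward from the boundary condition P(t1) = 0
P, t, h = 0.0, t1, -1e-4
while t > 0.2 + 1e-12:
    k1 = f(t, P)
    k2 = f(t + h / 2, P + h * k1 / 2)
    k3 = f(t + h / 2, P + h * k2 / 2)
    k4 = f(t + h, P + h * k3)
    P += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h

print("numerical P =", P, " closed form =", closed_form(t))
assert abs(P - closed_form(t)) < 1e-6
```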
10.3. Given the nonlinear scalar system dy/dt = y³d. An open loop control d(t) = −t has been found (perhaps by some other optimization scheme) such that the nonlinear system dy_n/dt = y_n³d with initial condition y_n(0) = 1 will follow the nominal path y_n(t) = (1 + t²)^{−1/2}. Unfortunately the initial condition is not exactly one, but y(0) = 1 + ε where ε is some very small, unknown number. Find a feedback control that is stable and will minimize the error y(t₁) − y_n(t₁) at some time t₁.

Call the error y(t) − y_n(t) = x(t) at any time. Consider the feedback system of Fig. 10-6 below. The equations corresponding to this system are

dy/dt = dy_n/dt + dx/dt = (y_n + x)³(d + u)
      = y_n³d + 3y_n²xd + 3y_n x²d + x³d + y_n³u + 3y_n²xu + 3y_n x²u + x³u

Assume the error |x| ≪ |y_n| and corrections |u| ≪ |d| so that

|3y_n²xd + y_n³u| > |3y_n x²d + x³d + 3y_n²xu + 3y_n x²u + x³u|    (10.25)
[Fig. 10-6: the nominal control d(t) plus the correction u(t) multiply the cuber output y³(t) to form dy/dt; the output y(t), with y(0) = 1 + ε, is compared with the nominal y_n(t) to form the error x(t), which passes through the gain K(t) to give u(t).]
Fig. 10-6
Since dy_n/dt = y_n³d, we have the approximate relation

dx/dt = 3y_n²xd + y_n³u = −(3t/(1 + t²))x + (1 + t²)^{−3/2}u

We choose to minimize

2ν = x²(t₁) + ∫₀^{t₁} R(t) u²(t) dt

where R(t) is some appropriately chosen weighting function such that neither x(t) nor u(t) at any time t gets so large that the inequality (10.25) is violated. Then K(t) = −P(t)/[(1 + t²)^{3/2}R(t)], where P(t) is the solution to the Riccati equation

dP/dt = 6tP/(1 + t²) + P²/[(1 + t²)³R(t)]

with P(t₁) = 1. This Riccati equation is solved numerically on a computer from t₁ to 0. Then the time-varying gain function K(t) can be found and used in the proposed feedback system.
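The numerical backward integration mentioned above is straightforward. The sketch below (plain Python, with the illustrative choices R(t) ≡ 1 and t₁ = 1, which are not specified in the text) integrates dP/dt = 6tP/(1 + t²) + P²/[(1 + t²)³R(t)] from P(t₁) = 1 back to t = 0 and forms the gain K(t) = −P(t)/[(1 + t²)^{3/2}R(t)]:

```python
t1, h = 1.0, -1e-4

def R(t):
    return 1.0  # illustrative weighting, not specified in the text

def f(t, P):
    return 6.0 * t * P / (1 + t * t) + P * P / ((1 + t * t) ** 3 * R(t))

def K(t, P):
    return -P / ((1 + t * t) ** 1.5 * R(t))

# RK4 backward from the boundary condition P(t1) = 1
P, t = 1.0, t1
while t > 1e-12:
    k1 = f(t, P); k2 = f(t + h / 2, P + h * k1 / 2)
    k3 = f(t + h / 2, P + h * k2 / 2); k4 = f(t + h, P + h * k3)
    P += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h

# dP/dt ≥ 0 for t ≥ 0, so integrating backward can only decrease P: 0 < P(0) < 1
print("P(0) =", P, " K(0) =", K(0.0, P))
assert 0.0 < P < 1.0
```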
10.4. Given the open loop system with a transfer function (s + a)⁻¹. Find a compensation to minimize

2ν = ∫₀^∞ [(y − d₀)² + (du/dt)²] dt

where d₀ is the value of an arbitrary step input.

Since the closed loop system must follow a step, and has no pure integrations, we introduce integral compensation. The open loop system in series with a pure integration is described by dx₂/dt = −ax₂ + x₁, and defining x₁ = u and du/dt = μ gives dx₁/dt = du/dt = μ, so that

dx/dt = (0 0; 1 −a)x + (1; 0)μ,   y = (0 1)x

Since an arbitrary step can be generated by this, the equations for the error become

e = (0 1)w,   dw/dt = (0 0; 1 −a)w + (1; 0)μ

subject to minimization of 2ν = ∫₀^∞ (e² + μ²) dt. The corresponding matrix Riccati equation is

0 = (0 0; 0 1) + (0 1; 0 −a)Π + Π(0 0; 1 −a) − Π(1 0; 0 0)Π

Using the method of Theorem 10.9 gives

Π = (γ  1 − aγ; 1 − aγ  (a + γ)(1 − aγ))   where γ = −a + √(a² + 2)

The optimal control is then μ = −γw₁ − (1 − aγ)w₂. Since e = w₂ and w₁ = ae + de/dt, this is du/dt = −γ de/dt − e, so that the compensation acting on the error is integral plus proportional, γ + 1/s, and the closed loop system is as shown in Fig. 10-7 below.
[Fig. 10-7: the step input and the output form the error −e, which passes through the compensation (γs + 1)/s and the plant 1/(s(s + a)) to produce y.]
Fig. 10-7
Figure 10-8 is equivalent to Fig. 10-7, so that the compensation introduced is integral plus proportional.

[Fig. 10-8: the same loop redrawn with the compensation written as γ + 1/s, a proportional gain γ in parallel with an integrator.]
Fig. 10-8
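The design can be checked numerically: with γ = −a + √(a² + 2), the candidate solution Π₁₁ = γ, Π₁₂ = 1 − aγ, Π₂₂ = (a + γ)(1 − aγ) should zero the algebraic Riccati equation element by element, and the closed loop characteristic polynomial should be the stable spectral factor s² + √(a² + 2)s + 1. A plain Python sketch with the illustrative value a = 2:

```python
import math

a = 2.0  # illustrative plant parameter
g = -a + math.sqrt(a * a + 2.0)  # gamma

# Candidate ARE solution for A = (0 0; 1 -a), b = (1; 0), Q = diag(0, 1), R = 1
p11, p12 = g, 1.0 - a * g
p22 = (a + g) * p12

# Element-by-element residual of 0 = Q + AᵀΠ + ΠA − Πbbᵀ Π
r11 = 0.0 + 2.0 * p12 - p11 * p11
r12 = 0.0 + p22 - a * p12 - p11 * p12
r22 = 1.0 - 2.0 * a * p22 - p12 * p12
for res in (r11, r12, r22):
    assert abs(res) < 1e-12

# Closed loop matrix A − bbᵀΠ has characteristic polynomial s² + (a+γ)s + (aγ + Π₁₂)
trace_term, det_term = a + p11, a * p11 + p12
assert abs(trace_term - math.sqrt(a * a + 2.0)) < 1e-12
assert abs(det_term - 1.0) < 1e-12
print("closed loop polynomial: s^2 + %.4f s + %.4f" % (trace_term, det_term))
```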
10.5. Given the single-input time-invariant system in partitioned Jordan canonical form

d/dt (z_c; z_d) = (J_c 0; 0 J_d)(z_c; z_d) + (b; 0)u

where z_c are the j controllable states, z_d are the n − j uncontrollable states, J_c and J_d are j × j and (n − j) × (n − j) matrices corresponding to real Jordan form, Section 7.3, b is a j-vector that contains no zero elements, u is a scalar control, and the partitioning separates the controllable and uncontrollable parts of the vectors and matrices. Suppose it is desired to minimize the quadratic criterion ν = ½∫₀^∞ [z_cᵀQ_c z_c + ρ⁻¹u²] dτ. Here ρ > 0 is a scalar, and Q_c is a positive definite symmetric real matrix. Show that the optimal feedback control for this regulator problem is of the form u(t) = −kᵀz_c(t) where k is a constant j-vector, and no uncontrollable states z_d are fed back. From the results of this, indicate briefly the conditions under which a general time-invariant single-input system that is in Jordan form will have only controllable states fed back, i.e. the conditions under which the optimal closed loop system can be separated into controllable and uncontrollable parts.
The matrix Π satisfies equation (10.16). For the case given,

A = (J_c 0; 0 J_d),  B = (b; 0),  Q = (Q_c 0; 0 0),  R = ρ⁻¹

and partition Π = (Π_c Π_cd; Π_cdᵀ Π_d). Then

u = −ρ(bᵀ | 0ᵀ)(Π_c Π_cd; Π_cdᵀ Π_d)(z_c; z_d) = −ρbᵀΠ_c z_c − ρbᵀΠ_cd z_d

Now if Π_c can be shown to be constant and Π_cd can be shown to be 0, then kᵀ = ρbᵀΠ_c. But from (10.16),

0 = (J_cᵀ 0; 0 J_dᵀ)Π + Π(J_c 0; 0 J_d) − ρΠ(bbᵀ 0; 0 0)Π + (Q_c 0; 0 0)

Within the upper left-hand partition,

0 = J_cᵀΠ_c + Π_c J_c − ρΠ_c bbᵀΠ_c + Q_c

the matrix Riccati equation for the controllable system, which has a constant positive definite solution Π_c. Within the upper right-hand corner,

0 = J_cᵀΠ_cd + Π_cd J_d − ρΠ_c bbᵀΠ_cd

This is a linear equation in Π_cd and has the solution Π_cd = 0 by inspection, and thus u = −ρbᵀΠ_c z_c.
Now a general system dx/dt = Ax + Bu can be transformed to real Jordan form by x = Tz, where T⁻¹AT = J. Then the criterion

½∫₀^∞ (xᵀQx + ρ⁻¹u²) dτ = ½∫₀^∞ (zᵀTᵀQTz + ρ⁻¹u²) dτ

gives the matrix Riccati equation shown for z_c and z_d if TᵀQT = (Q_c 0; 0 0), i.e. only the controllable states are weighted in the criterion. Otherwise uncontrollable states must be fed back. If the uncontrollable states are included, ν diverges if any are unstable, but if they are all stable the action of the controllable states is influenced by them and ν remains finite.
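The conclusion Π_cd = 0 can be observed numerically on the smallest possible example. The sketch below (plain Python) uses the illustrative data J_c = −1, J_d = −2, b = 1, Q_c = 1 and ρ = 1, none of which come from the text; integrating the Riccati equation to steady state, the off-diagonal block stays at zero and Π_c converges to the root √2 − 1 of the scalar controllable-subsystem equation.

```python
import math

# Illustrative data: controllable mode -1, uncontrollable mode -2 (A is diagonal)
A = [[-1.0, 0.0], [0.0, -2.0]]
b = [1.0, 0.0]
Q = [[1.0, 0.0], [0.0, 0.0]]
rho = 1.0

P = [[0.0, 0.0], [0.0, 0.0]]
h = 1e-3
for _ in range(20000):  # Euler steps of dP/dτ = Q + AᵀP + PA − ρ Pbbᵀ P (backward time)
    Pb = [P[0][0] * b[0] + P[0][1] * b[1],
          P[1][0] * b[0] + P[1][1] * b[1]]
    dP = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            # AᵀP + PA simplifies elementwise because A is diagonal here
            dP[i][j] = Q[i][j] + A[i][i] * P[i][j] + P[i][j] * A[j][j] \
                       - rho * Pb[i] * Pb[j]
    for i in range(2):
        for j in range(2):
            P[i][j] += h * dP[i][j]

# Off-diagonal block vanishes; Π_c solves 0 = 1 − 2Π_c − ρΠ_c²  →  Π_c = √2 − 1
assert abs(P[0][1]) < 1e-9 and abs(P[1][0]) < 1e-9
assert abs(P[0][0] - (math.sqrt(2.0) - 1.0)) < 1e-6
print("Pi_c =", P[0][0], " Pi_cd =", P[0][1])
```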
10.6. Given the controllable and observable system

dx/dt = A(t)x + B(t)u,  y = C(t)x    (10.26)

It is desired that the closed loop response of this system approach that of an ideal or model system dw/dt = L(t)w. In other words, we have a hypothetical system dw/dt = L(t)w and wish to adjust u such that the real system dx/dt = A(t)x + B(t)u behaves in a manner similar to the hypothetical system. Find a control u such that the error between the model and plant output vector derivatives becomes small, i.e. minimize

ν = ½∫_{t₀}^{t₁} [(dy/dt − Ly)ᵀQ(dy/dt − Ly) + uᵀRu] dt    (10.27)

Note that in the case A, L = constants and B = C = I, R = 0, this is equivalent to asking that the closed loop system dx/dt = (A + BK)x be identical to dx/dt = Lx and hence have the same poles. Therefore in this case it reduces to a pole placement scheme (see Section 8.6).
Substituting the plant equation (10.26) into the performance index (10.27),

ν = ½∫_{t₀}^{t₁} {[(dC/dt + CA − LC)x + CBu]ᵀQ[(dC/dt + CA − LC)x + CBu] + uᵀRu} dt    (10.28)

This performance index is not of the same form as criterion (10.2) because cross products of x and u appear. However, let ū = u + (R + BᵀCᵀQCB)⁻¹BᵀCᵀQ(dC/dt + CA − LC)x. Since R is positive definite, zᵀRz > 0 for any nonzero z. Since BᵀCᵀQCB is nonnegative definite, 0 < zᵀRz ≤ zᵀ(R + BᵀCᵀQCB)z, so that R̂ = R + BᵀCᵀQCB is positive definite and hence its inverse always exists. Therefore the control u can always be found in terms of ū and x. Then the system (10.26) becomes

dx/dt = Âx + Bū    (10.29)

and the performance index (10.28) becomes

ν = ½∫_{t₀}^{t₁} (xᵀQ̂x + ūᵀR̂ū) dt

in which, defining M(t) = dC/dt + CA − LC and K(t) = (BᵀCᵀQCB + R)⁻¹BᵀCᵀQM,

Â = A − BK,  Q̂ = (M − CBK)ᵀQ(M − CBK) + KᵀRK,  R̂ = R + BᵀCᵀQCB    (10.30)
Since R̂ has been shown positive definite and Q̂ is nonnegative definite by a similar argument, the regulator problem (10.29) and (10.30) is in standard form. Then ū = −R̂⁻¹BᵀPx is the optimal solution to the regulator problem (10.29) and (10.30), where P is the positive definite solution to

−dP/dt = Q̂ + ÂᵀP + PÂ − PBR̂⁻¹BᵀP

with boundary condition P(t₁) = 0. The control to minimize (10.27) is then

u = −R̂⁻¹Bᵀ(P + CᵀQM)x

Note that cases in which Q̂ is not positive definite or the system (10.29) is not controllable may give no solution P to the matrix Riccati equation. Even though the conditions under which this procedure works have not been clearly defined, the engineering approach would be to try it on a particular problem and see if a satisfactory answer can be obtained.
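The completing-the-square step that removes the cross products of x and u can be verified directly: for every x and u the original integrand must equal xᵀQ̂x + ūᵀR̂ū. A scalar sketch (plain Python, with arbitrary illustrative numbers standing in for M, CB, Q and R):

```python
# Scalar stand-ins for the matrices of Problem 10.6 (illustrative values only)
m, cb, q, r = 1.7, 0.8, 2.0, 0.5   # M, CB, Q, R

K = cb * q * m / (r + cb * cb * q)           # K = (R + BᵀCᵀQCB)⁻¹ BᵀCᵀQM
R_hat = r + q * cb * cb                      # R̂ = R + BᵀCᵀQCB
Q_hat = q * (m - cb * K) ** 2 + r * K * K    # Q̂ = (M−CBK)ᵀQ(M−CBK) + KᵀRK

def original(x, u):
    # integrand of (10.28): (Mx + CBu)ᵀQ(Mx + CBu) + uᵀRu
    return q * (m * x + cb * u) ** 2 + r * u * u

def completed(x, u):
    # integrand after the substitution ū = u + Kx: xᵀQ̂x + ūᵀR̂ū
    u_bar = u + K * x
    return Q_hat * x * x + R_hat * u_bar * u_bar

for x in (-1.0, 0.3, 2.5):
    for u in (-0.7, 0.0, 1.2):
        assert abs(original(x, u) - completed(x, u)) < 1e-12
print("cross terms eliminated: integrands agree")
```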
10.7. In Problem 10.6, feedback was placed around the plant dx/dt = A(t)x + B(t)u, y = C(t)x so that it would behave similarly to a model that might be hypothetical. In this problem we consider actually building a model dw/dt = L(t)w with output v = J(t)w and using it as a prefilter ahead of the plant. Find a control u to the plant such that the error between the plant and model outputs becomes small, i.e. minimize

½∫_{t₀}^{t₁} [(y − v)ᵀQ(y − v) + uᵀRu] dt    (10.31)
Again, we reduce this to an equivalent regulator problem. The plant and model equations can be written as

d/dt (w; x) = (L 0; 0 A)(w; x) + (0; B)u    (10.32)

and the criterion is, in terms of the (w; x) vector,

ν = ½∫_{t₀}^{t₁} [(w; x)ᵀ(−J | C)ᵀQ(−J | C)(w; x) + uᵀRu] dt

Thus the solution proceeds as usual and u is a linear combination of w and x. However, note that the system (10.32) is uncontrollable with respect to the w variables. In fact, merely by setting L = 0 and using J(t) to generate the input, a form of general servomechanism problem results. The conditions under which the general solution to the servomechanism problem exists are not known. Hence we should expect the solution to this problem to exist only under a more restrictive set of circumstances than Problem 10.6. But again, if a positive definite solution of the corresponding matrix Riccati equation can be found, this procedure can be useful in the design of a particular engineering system.

The resulting system can be diagrammed as shown in Fig. 10-9.
[Fig. 10-9: the model L(t) with output matrix J(t) produces v(t) from the model state w(t); feedforward gains on w(t) and feedback gains on the plant state x(t) are summed to form u(t), which drives the plant, whose output matrix C(t) gives y(t).]
Fig. 10-9
10.8. Given the time-invariant controllable and observable single input-single output system dx/dt = Ax + bu, y = cᵀx. Assume that u = kᵀx, where k has been chosen such that 2ν = ∫₀^∞ (qy² + u²) dt has been minimized. Find the asymptotic behavior of the closed loop system eigenvalues as q → 0 and q → ∞.

Since the system is optimal, kᵀ = −bᵀΠ where Π satisfies (10.16), which is written here as

0 = qccᵀ + AᵀΠ + ΠA − ΠbbᵀΠ    (10.33)
If q = 0, then Π = 0 is the unique nonnegative definite solution of the matrix Riccati equation. Then u = 0, which corresponds to the intuitive feeling that if the criterion is independent of response, make the control zero. Therefore as q → 0, the closed loop eigenvalues tend to the open loop eigenvalues.

To examine the case q → ∞, add and subtract sΠ from (10.33), multiply on the right by (sI − A)⁻¹ and on the left by (−sI − Aᵀ)⁻¹ to obtain

0 = q(−sI − Aᵀ)⁻¹ccᵀ(sI − A)⁻¹ − Π(sI − A)⁻¹ − (−sI − Aᵀ)⁻¹Π − (−sI − Aᵀ)⁻¹ΠbbᵀΠ(sI − A)⁻¹    (10.34)

Multiply on the left by bᵀ and on the right by b, and call the scalar G(s) = cᵀ(sI − A)⁻¹b and the scalar H(s) = bᵀΠ(sI − A)⁻¹b. The reason for this notation is that, given an input with Laplace transform ρ(s) to the open loop system, x(t) = 𝓛⁻¹{(sI − A)⁻¹bρ(s)}, so that

𝓛{y(t)} = cᵀ𝓛{x(t)} = cᵀ(sI − A)⁻¹bρ(s) = G(s)ρ(s)
and −𝓛{u(t)} = bᵀΠ𝓛{x(t)} = bᵀΠ(sI − A)⁻¹bρ(s) = H(s)ρ(s)

In other words, G(s) is the open loop transfer function and H(s) is the transfer function from the input to the control. Then (10.34) becomes

0 = qG(−s)G(s) − H(s) − H(−s) − H(s)H(−s)

Adding 1 to each side and rearranging gives

|1 + H(s)|² = 1 + q|G(s)|²    (10.35)
It has been shown that only optimal systems obey this relationship. Denote the numerator of H(s) as n(H) and of G(s) as n(G), and the denominator of H(s) as d(H) and of G(s) as d(G). But d(G) = det(sI − A) = d(H). Multiplying (10.35) by |d(G)|² gives

|d(G) + n(H)|² = |d(G)|² + q|n(G)|²    (10.36)

As q → ∞, if there are m zeros of n(G), the open loop system, then 2m zeros of (10.36) tend to the zeros of |n(G)|². The remaining 2(n − m) zeros tend to ∞ and are asymptotic to the zeros of the equation s^{2(n−m)} = ±q. Since the closed loop eigenvalues are the left half plane zeros of |d(G) + n(H)|², we conclude that as q → ∞, m closed loop poles tend to the m open loop zeros and the remaining n − m closed loop poles tend to the left half plane zeros of the equation s^{2(n−m)} = ±q. In other words, they tend to a stable Butterworth configuration of order n − m and radius q^{1/[2(n−m)]}. If the system has 3 open loop poles and one open loop zero, then 2 closed loop poles are asymptotic as shown in Fig. 10-10. This is independent of the open loop pole-zero configuration. Also, note equation (10.36) requires the closed loop poles to tend to the open loop poles as q → 0. Furthermore, the criterion ∫₀^∞ (qy² + u²) dt is quite general for scalar controls since c can be chosen by the designer (also see Problem 10.19).

Of course we should remember that the results of this analysis are valid only for this particular criterion involving the output and are not valid for a general quadratic in the state.

[Fig. 10-10: the two asymptotic closed loop poles lie on ±45° rays in the left half plane, a second-order Butterworth pattern.]
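These asymptotes are easy to exhibit for the double integrator G(s) = 1/s² (an illustrative example, not from the text), for which the ARE with cost ∫(qy² + u²) dt solves in closed form: the feedback gains are (√q, √(2√q)), the closed loop polynomial is s² + √(2√q)s + √q, and its roots have magnitude exactly q^{1/4} — the Butterworth radius with n − m = 2. A plain Python sketch:

```python
import cmath, math

def closed_loop_poles(q):
    # ARE for A = (0 1; 0 0), b = (0; 1), cost q*x1² + u²:
    # Π₁₂ = √q, Π₂₂ = √(2√q); feedback gains k = (Π₁₂, Π₂₂)
    k1, k2 = math.sqrt(q), math.sqrt(2.0 * math.sqrt(q))
    # roots of the closed loop polynomial s² + k2 s + k1
    disc = cmath.sqrt(k2 * k2 - 4.0 * k1)
    return (-k2 + disc) / 2.0, (-k2 - disc) / 2.0

for q in (1.0, 100.0, 1e6):
    radius = q ** 0.25
    for s in closed_loop_poles(q):
        assert s.real < 0                                   # stable
        assert abs(abs(s) - radius) < 1e-9 * radius         # radius q^(1/4)
        assert abs(abs(s.real) - abs(s.imag)) < 1e-9 * radius  # ±45° rays
print("poles lie on the order-2 Butterworth pattern, radius q^(1/4)")
```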
Supplementary Problems
10.9. Given the scalar system dx/dt = 4x + u with initial condition x(0) = x₀. Find a feedback control u to minimize 2ν = ∫₀^∞ (9x² + u²) dt.
10.10. Consider the plant

d/dt (x₁; x₂) = (0 1; 0 −√5)(x₁; x₂) + (0; 1)u

We desire to find u to minimize 2ν = ∫₀^∞ (4x₁² + u²) dt.
(a) Write the canonical equations (10.9) and their boundary conditions.
(b) Solve the canonical equations for x(t) and p(t).
(c) Find the open loop control u(x₀, t).
10.11. For the system and criterion of Problem 10.10,
(a) write three coupled nonlinear scalar equations for the elements p_ij of the P(t) matrix,
(b) find P(t) using the relationship p(t) = P(t)x(t) and the results of part (b) of Problem 10.10,
(c) verify the results of (b) satisfy the equations found in (a), and that P(t) is constant and positive definite,
(d) draw a flow diagram of the closed loop system.
10.12. Consider the plant dx/dt = Ax + bu and performance index

ν = ½∫₀ᵀ (xᵀQx + u²) dt

Assuming the initial conditions are x₁(0) = 5, x₂(0) = −3, find the equations (10.9) whose solutions will give the control (10.8). What are the boundary conditions for these equations (10.9)?
10.13. Consider the scalar plant dx/dt = x + u and the performance index

2ν = Sx²(T) + ∫₀ᵀ (x² + u²) dt

(a) Using the state-costate transition matrix Φ(t, τ), find P(t) such that p(t) = P(t)x(t).
(b) What is lim_{t→−∞} P(t)?
10.14. Given the system of Fig. 10-11, find K(t) to minimize 2ν = ∫_{t₀}^{t₁} (x² + u²/ρ) dt, and find lim_{t₁→∞} K(t).
10.15. Given the system

dx/dt = (1 0; 0 −1)x + (0; 1)u

Find the control that minimizes 2ν = ∫_{t₀}^∞ (xᵀx + u²) dt.

[Fig. 10-11: a plant 1/(s + a) with zero input in a negative feedback loop; the loop error e passes through the time-varying gain K(t) to form u.]
Fig. 10-11
10.16. Consider the motion of a missile in a straight line. Let dr/dt = v and dv/dt = u, where r is the position and v is the velocity. Also, u is the acceleration due to the control force. Hence the state equation is

d/dt (r; v) = (0 1; 0 0)(r; v) + (0; 1)u

It is desired to minimize

2ν = q_v v²(T) + q_r r²(T) + ∫₀ᵀ u²(t) dt

where q_v and q_r are scalars > 0. Find the feedback control matrix M(θ) relating the control u(t) to the states r(t) and v(t) in the form

u(t) = M(θ)(r(t); v(t))

where θ = T − t. Here θ is known as the "time to go".
10.17. Given the system dx/dt = (0 1; −1 −1)x + (0; 1)u with criterion 2ν = ∫₀^∞ (qx₁² + u²) dt. Draw the root locus of the closed loop roots as q varies from zero to ∞.
10.18. Consider the matrix

H = (a  −b²/r; −q  −a)

associated with the optimization problem

dx/dt = ax + bu,  2ν = ∫₀ᵀ (qx² + ru²) dt,  x(0) = x₀,  p(T) = 0

Show that the roots of H are symmetrically placed with respect to the jω axis and show that the location of the roots of H depends only upon the ratio q/r for fixed a and b.
10.19. Given the system (10.1) and let S = 0, Q(t) = ηQ₀(t) and R(t) = ρR₀(t) in the criterion (10.2). Prove that the optimal control law u^op depends only on the ratio η/ρ if Q₀(t) and R₀(t) are fixed.
10.20. Show that the transition matrix Φ(t, t₀) of the canonical equations (10.14) is symplectic, i.e.

Φᵀ(t, t₀)EΦ(t, t₀) = E  where  E = (0 I; −I 0)

and also show det Φ(t, t₀) = 1.

10.21. Using the results of Problem 10.20, show that for the case S = 0, P(t) = −Φ₂₂⁻¹(t₁, t)Φ₂₁(t₁, t). This provides a check on numerical computations.
10.22. For the scalar time-varying system dx/dt = −x tan t + u, find the optimal feedback control u(x, t) to minimize ν = ½∫_{t₀}^{t₁} (x² + u²) dt.
10.23. Find the solution P(t) to the scalar Riccati equation corresponding to the open loop system with finite escape time dx/dt = −(x + u)/t with criterion ν = ½∫_{t₀}^{t₁} (6x² + u²) dt. Consider the behavior for 0 < t₀ < t₁, for t₀ < t₁ < 0, and for −γt₁ < t₀ < 0 < t₁ where γ⁵ = 3/2.
10.24. Given the system shown in Fig. 10-12. Find the compensation K to optimize the criterion 2ν = ∫₀^∞ [(y − d)² + u²] dt.

[Fig. 10-12: the step input d(t) and the output y(t) form the error −e(t), which passes through the compensation K to give u(t); u(t) drives an integrator 1/s to produce y(t).]
Fig. 10-12
10.25. In Problem 10.6, show that if C = B = I in equation (10.26) and if R = 0 and Q = I in equation (10.27), then the system becomes identical to the model.
10.26. Use the results of Problem 10.4 to generate a root locus for the closed loop system as a varies.
What happens when a = 0?
10.27. Given the linear time-invariant system dx/dt = Ax + Bu with criterion 2ν = ∫_t^{t₁} (xᵀQx + uᵀRu) dt, where Q and R are both time-invariant positive definite symmetric matrices, Q is n × n and R is m × m. Find the feedback control u(x, t; t₁) that minimizes ν in terms of the constant n × n matrices K₁₁, K₁₂, K₂₁, K₂₂, where

(K₁₁ K₁₂; K₂₁ K₂₂) = (f₁ ⋯ fₙ fₙ₊₁ ⋯ f₂ₙ; g₁ ⋯ gₙ gₙ₊₁ ⋯ g₂ₙ)

in which the 2n-vectors (fᵢ; gᵢ) satisfy the eigenvalue problem given in equation (10.18), where λ₁, λ₂, …, λₙ have negative real parts and λₙ₊₁, λₙ₊₂, …, λ₂ₙ have positive real parts and are distinct.
10.28. Given only the matrices G and F as defined in Theorem 10.9, note that G and F are in general complex since they contain partitions of eigenvectors. Find a method for calculating Π = GF⁻¹ using only the real number field, not complex numbers.
10.29. (a) Given the nominal scalar system dξ/dt = ξ + u with ξ₀ = ξ(0), and the performance criterion 2ν = ∫₀^∞ [(η² − 1)ξ² + u²] dt for constant η > 1. Construct two Liapunov functions for the optimal closed loop system that are valid for all η > 1. Hint: One method of construction is given in Chapter 10 and another in Chapter 9.
(b) Now suppose the actual system is dξ/dt = εξ³ + ξ + u. Using both Liapunov functions obtained in (a), give estimates upon how large ε > 0 can be such that the closed loop system remains asymptotically stable. In other words, find a function f(η, ξ) such that ε < f(η, ξ) implies asymptotic stability of the closed loop system.
10.30. Given the system dx/dt = A(t)x + B(t)u with x(t₀) = x₀ and with criterion function

J = xᵀ(t₁)x(t₁)/2 + ∫_{t₀}^{t₁} uᵀu dt/2

(a) What are the canonical equations and their boundary conditions?
(b) Given the transition matrix Φ(t, τ), where ∂Φ(t, τ)/∂t = A(t)Φ(t, τ) and Φ(τ, τ) = I, solve the canonical equations in terms of Φ(t, τ) and find a feedback control u(t).
(c) Write and solve the matrix Riccati equation.
Answers to Supplementary Problems
10.9. u = -9x
10.10. (a) dx₁/dt = x₂, x₁(0) = x₁₀
dx₂/dt = −√5 x₂ − p₂, x₂(0) = x₂₀
dp₁/dt = −4x₁, p₁(∞) = 0
dp₂/dt = −p₁ + √5 p₂, p₂(∞) = 0

(b) x₁(t) = (2x₁₀ + x₂₀)e^{−t} − (x₁₀ + x₂₀)e^{−2t}
x₂(t) = −(2x₁₀ + x₂₀)e^{−t} + 2(x₁₀ + x₂₀)e^{−2t}
p₂(t) = (√5 − 1)(2x₁₀ + x₂₀)e^{−t} + (4 − 2√5)(x₁₀ + x₂₀)e^{−2t}
p₁(t) = 4(2x₁₀ + x₂₀)e^{−t} − 2(x₁₀ + x₂₀)e^{−2t}

(c) u(t) = (1 − √5)(2x₁₀ + x₂₀)e^{−t} + (2√5 − 4)(x₁₀ + x₂₀)e^{−2t}
10.11. (a) 0 = 4 − p₁₂²
0 = p₁₁ − √5 p₁₂ − p₁₂p₂₂
0 = 2p₁₂ − 2√5 p₂₂ − p₂₂²

(b) P(t) = (6 2; 2 3 − √5), constant and positive definite

(d) [Fig. 10-13: flow diagram of the closed loop system.]
Fig. 10-13
10.12. d/dt (x; p) = (A −bbᵀ; −Q −Aᵀ)(x; p) with x₁(0) = 5, x₂(0) = −3, p₁(T) = 0, p₂(T) = 0.

10.13. (a) P(t) = {[S(1 + √2) + 1]e^{2√2(T−t)} − S(1 − √2) − 1} / {[S + √2 − 1]e^{2√2(T−t)} + 1 + √2 − S}

(b) lim_{t→−∞} P(t) = 1 + √2 regardless of S.
10.14. K(t) = ρ(e^{γ(t₁−t)} − 1) / [(a + √(a² + ρ))e^{γ(t₁−t)} + √(a² + ρ) − a], where γ = 2√(a² + ρ); lim_{t₁→∞} K(t) = √(a² + ρ) − a.
10.15. Any control results in ν = ∞ since there is an unstable uncontrollable state.
10.16. M(θ) = −Δ⁻¹(q_rθ + q_rq_vθ²/2   q_v + q_rθ² + q_rq_vθ³/3), where Δ = 1 + q_vθ + q_rθ³/3 + q_rq_vθ⁴/12.
10.17. This is the diagram associated with Problem 10.8.
10.18. λ = ±√(a² + b²q/r)
10.19. u^op = −(η/ρ)R₀⁻¹BᵀP₀x, where P₀ depends only on the ratio η/ρ as

−dP₀/dt = Q₀ + AᵀP₀ + P₀A − (η/ρ)P₀BR₀⁻¹BᵀP₀
10.20. Let Ψ(t, t₀) = Φᵀ(t, t₀)EΦ(t, t₀) − E where E = (0 I; −I 0). Then Ψ(t₀, t₀) = 0 and dΨ/dt = Φᵀ(t, t₀)(HᵀE + EH)Φ(t, t₀) = 0, so Ψ(t, t₀) = 0. This shows that for every increasing function of time in Φ there is a corresponding decreasing function of time.
10.22. u = [tan t − tan t₁/(1 + (t₁ − t) tan t₁)]x
10.23. P(t) = 6t(t₁⁵ − t⁵)/(3t₁⁵ + 2t⁵). For 0 < t₀ < t₁ and t₀ < t₁ < 0, P(t) > 0. For −γt₁ < t₀ < 0 < t₁, P(t) < 0 and tends to −∞ as t tends to −γt₁ from the right.
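The answer to Problem 10.23 can be checked by substituting P(t) = 6t(t₁⁵ − t⁵)/(3t₁⁵ + 2t⁵) into the Riccati equation dP/dt = −6 + 2P/t + P²/t² implied by dx/dt = −(x + u)/t and the weighting 6x² + u². A plain Python sketch (illustrative t₁ = 1) compares a finite difference of P with the Riccati right-hand side:

```python
t1 = 1.0

def P(t):
    return 6.0 * t * (t1 ** 5 - t ** 5) / (3.0 * t1 ** 5 + 2.0 * t ** 5)

def riccati_rhs(t):
    # dP/dt = -6 + 2P/t + P²/t² for dx/dt = -(x+u)/t with weighting 6x² + u²
    return -6.0 + 2.0 * P(t) / t + (P(t) / t) ** 2

eps = 1e-6
for t in (0.3, 0.7, 0.95):
    dP = (P(t + eps) - P(t - eps)) / (2.0 * eps)  # central difference
    assert abs(dP - riccati_rhs(t)) < 1e-4

assert abs(P(t1)) < 1e-12  # boundary condition P(t1) = 0
print("closed form satisfies the Riccati equation and P(t1) = 0")
```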
10.24. K = 1
10.26. At a = 0, the closed loop system poles are at e^{j3π/4} and e^{−j3π/4}.
10.27. (x(t); p(t)) = (K₁₁ K₁₂; K₂₁ K₂₂)(e^{Λ₁(t−t₁)} 0; 0 e^{Λ₂(t−t₁)})(K₁₁ K₁₂; K₂₁ K₂₂)⁻¹(x(t₁); p(t₁)), where Λ₁ = diag(λ₁, …, λₙ) and Λ₂ = diag(λₙ₊₁, …, λ₂ₙ). Note p(t₁) = 0 and solve for x(t₁) in terms of x(t). Then p(t) = P(t)x(t).
10.28. Let the first 2m eigenvectors be complex conjugates, so that f₂ᵢ₋₁ = f̄₂ᵢ and g₂ᵢ₋₁ = ḡ₂ᵢ for i = 1, 2, …, m. Define

F = (Re f₁ | Im f₁ | Re f₃ | Im f₃ | ⋯ | Re f₂ₘ₋₁ | Im f₂ₘ₋₁ | f₂ₘ₊₁ | ⋯ | fₙ)

and G similarly. Then F and G are real and Π = GF⁻¹.
10.29. (a) From Chapter 10, V₁ = (η + 1)ξ², and from Chapter 9, V₂ = ξ²/(2η).
(b) ε < ηξ⁻² for both V₁ and V₂.
10.30. (a) dx/dt = Ax − BBᵀp and dp/dt = −Aᵀp with x(t₀) = x₀ and p(t₁) = x(t₁).
(c) Obvious from (b) above.
INDEX
Abstract object, 2-6
uniform, 14
Adjoint of a linear operator, 112
Adjoint system, 112-114
Adjugate matrix, 44
Aerospace vehicle, state equations of, 34
Algebraic
equation for time-invariant optimal
control, 214
separation, 178
Alternating property, 56
Analytic function, 79
Anticipatory system. See State variable of
anticipatory system; Predictor
Associated Legendre equation, 107
Asymptotic
behavior of optimal closed loop poles, 226
stability, 193
state estimator (see Observer systems;
Steady state errors)
Basis, 47
Bessel's equation, 107
BIBO stability. See External stability
Block diagrams, 164-169
Bounded
function, 191
input-bounded output stability
(see External stability)
Boundedness of transition matrix, 192
Butterworth configuration, 226
Cancellations in a transfer function, 133-134
Canonical
equations of Hamilton, 211
flow diagrams (see Flow diagrams, first
canonical form; Flow diagrams,
second canonical form)
Cauchy integral representation, 83
Causality, 2-6
Cayley-Hamilton theorem, 81
Characteristic polynomial, 69
Check on solution for transition matrices, 114
Closed
forms for time-varying systems, 105
loop eigenvalues, 217
loop optimal response, 217
under addition, 46
under scalar multiplication, 46
Cofactor of a matrix element, 43
Column of a matrix, 38
Commutativity of matrices, 40
Compatible, 39-40
Compensator
pole placement, 172
integral plus proportional, 219-223
Complex conjugate transpose matrix, 41
Component. See Element
Contour integral representation of f(A), 83
Contraction mapping, 92-93
Control
energy, 210
law, 208-211
Controllable form of time-varying
systems, 148-152
Controllability, 128-146
of discrete-time systems, 132, 136
of the output, 144
for time-invariant systems with distinct
eigenvalues, 129-130
for time-invariant systems with nondistinct
eigenvalues, 131, 135
of time-varying systems, 136-138, 142
Convergence of the matrix exponential, 96
Convolution integral, 109
Costate, 210
Cramer's rule, 44
Criterion functional, 209
Dead beat control, 132
Decoupling. See Pole placement of
multiple-input systems
Delay line, state variables of.
See State variable of a delay line
Delayer, 4, 17, 164
Delta function
derivatives of Dirac, 135
Dirac, 135
Kronecker, 44
Derivative
formula for integrals, 125
of a matrix, 39
Determinant, 41
by exterior products, 57
of the product of two matrices, 42
of similar matrices, 72
of a transition matrix, 99-100
of a transpose matrix, 42
of a triangular matrix, 43
Diagonal matrix, 41
Diagonal of a matrix, 39
Difference equations, 7
from sampling, 111
Differential equations, 7
Differentiator, state of.
See State variable of a differentiator
Diffusion equation, 10
Dimension, 49
of null space, 49
of range space, 50
Disconnected flow diagram, 129
Discrete-time systems, 7
from sampling, 111
Distinct eigenvalues, 69-71
Dot product. See Inner product
Duality, 138-139
Dynamical systems, 4-6
Eigenvalue, 69
of a nonnegative definite matrix, 77
of a positive definite matrix, 77
of similar matrices, 72
of a time-varying system, 194
of a triangular matrix, 72
of a unitary matrix, 95
Eigenvector, 69
generalized, 74
expression for Π = GF⁻¹, 216, 229
of a normal matrix, 76
Element, 38
Elementary operations, 42
Ellipse in n-space, 77
Equilibrium state, 192
Equivalence of pole placement and
optimal control, 220
Equivalence transformation, 148
Estimation of state. See Observer systems
Existence of solution
to matrix equations, 63
to Riccati equation, 213
to time-invariant optimal control
problem, 214
Exp At, 101-111
Exponential stability.
See Stability, uniform asymptotic
Exterior product, 56
External stability, 194
Factorization of time-varying matrices, 152
Feedback optimal control, 211-231
Finite escape time, 104-105, 204
First canonical form, 19, 20
for time-varying systems, 153
Floquet theory, 107-109
Flow diagrams, 16-24
of first canonical form, 19
interchange of elements in, 17
of Jordan form, 21-24
of second canonical (phase variable) form, 20
for time-varying systems, 18
Frequency domain characterization of
optimality, 220, 226
Function of a matrix, 79-84
cosine, 82
exponential, 96-104
logarithm, 94
square root, 80, 91
Fundamental matrix. See Transition matrix
Gauss elimination (Example 3.13), 44
Gauss-Seidel method, 171
Global
minimum of criterion functional, 220
stability, 193
Gradient operator, 96
Gram matrix, 64
Gram-Schmidt process, 53-54
Grassmann algebra. See Exterior product
Gronwall-Bellman lemma, 203
Guidance control law, 121-122
Hamilton-Jacobi equation, 220
Hamiltonian matrix, 211
Harmonic oscillator, 193
Hereditary system.
See State variable of hereditary system
Hermite equation, 107
Hermitian matrix, 41
nonnegative definite, 77
positive definite, 77
Hidden oscillations in sampled data
systems, 132
Hypergeometric equation, 107
Impulse response matrices, 112
Induced norm. See Matrix, norm of
Induction, 43
Inner product, 52
for quadratic forms, 75
Input-output pair, 1-6
Input solution to the state equation, 109
Instantaneous controllability and observability.
See Totally controllable and observable
Integral
of a matrix, 39
representation of f(A), 83
Integral plus proportional control, 219-223
Integrator, 16, 164
Inverse
matrix, 44
of (si — A), computation of, 102
system, 166
Jacobian matrix, 9
Jordan block, 74
Jordan form
flow diagram of, 21-24
of a matrix, 73-75
of state equations, 147
Kronecker delta, 44
Lagrange multiplier, 210
Laguerre equation, 107
Laplace expansion of a determinant, 43, 57
Least squares fit, 86
Left inverse, 44
Legendre equation, 107
Leverrier's algorithm, 102
Liapunov function
construction of, for linear systems, 199-202
discrete-time, 198
time-invariant, 196
time-varying, 197
Linear
dependence, 47
independence, 47
manifold, 46
network with a switch, 11, 12
Linear (cont.)
system, 6-7
vector space, 50
Linear operator, 55
matrix representation of, 55
null space of, 49
range of, 49
Linearization, 8, 9
Logarithm of a matrix, 94
Loss matrix. See Criterion functional
Lower triangular matrix, 41
Mathieu equation, 107-109
Matrix, 38. See also Linear operator
addition and subtraction, 39
characteristic polynomial of, 69
compatibility, 39
differentiation of, 39
eigenvalue of, 69
eigenvector of, 69
equality, 39
function of, 79-84
fundamental, 99-111
generalized eigenvector of, 74
Hermitian, 107
Hermitian nonnegative definite, 77
Hermitian positive definite, 77
integration of, 39
Jordan form of, 73-75
logarithm of, 94
multiplication, 39
norm of, 78
principal minor of, 77
rank of, 50
representation of a linear operator, 55
Riccati equation, 211-231
square, 38
state equations, 26, 27
transition, 99-111
unitary, 41
Matrizant. See Transition matrix
Mean square estimate, 94
Metric, 51
Minimal order observer, 175-177
Minimum
energy control (Problem 10.30), 229
norm solution of a matrix equation, 86
Minor, 43
principal, 77
Missile equations of state, 34
Model-following systems, 224-225
Motor equations of state, 34
Multilinear, 56
Multiple loop system, 185-186
Natural norm, 53
of a matrix, 79
Neighboring optimal control, 221
Noise rejection, 179-181
Nominal optimal control, 221
Nonanticipative system. See Causality
Nonexistence
of solution to the matrix Riccati
equation, 221, 228
of trajectories, 10
Nonlinear effects, 179-181
Nonnegative definite, 77
Nonsingular, 44
Norm, 51
of a matrix, 78
natural, 53
Normal matrix, 41
Normalized eigenvector, 70
Null space, 49
Nullity, 49
n-vectors, 38
Nyquist diagrams, 171
Observable form of time-varying
systems, 148-152
Observability, 128-146
of discrete-time systems, 133-138
of time-invariant systems, 133-136
of time-varying systems, 136-138, 146
Observer systems, 173-177
Open loop optimal control, 211
Optimal control law, 210
Order of a matrix, 38
Oriented mathematical relations, 2-4
Orthogonal vectors, 52
Orthonormal vectors, 53
Output feedback, 217-218
Parameter variations, 170, 171
Partial differential equation of state
diffusion, 10
wave, 141
Partial fraction expansion, 21, 22
of a matrix, 102
Partition of a matrix, 40
Passive system stability, 205
Penalty function. See Criterion functional
Periodically varying system, 107-109
Permutation, 41
Phase plane, 11
Phase variable canonical form, 20, 21
for time-varying systems, 153-154
Physical object, 1
Piecewise time-invariant system, 106
Polar decomposition of a matrix, 87
Pole placement, 172-173, 220, 224
Poles of a system, 169
Pole-zero cancellation, 134
Positive definite, 77
Predictor, 5
Principal minor, 77
Pseudoinverse, 84-87
Quadratic form, 75
Random processes, 3
Rank, 50
n(e.d.), 137
Range space, 49
Reachable states, 135
Reciprocal basis, 54
Recoverable states, 135
Regulator, 208
Representation
of a linear operator, 55
spectral, 80
for rectangular matrices, 85
Residue
of poles, 21, 22
matrix, 102-103
Response function. See Trajectory
Riccati equation, 212-213
negative solution to, 221
Right inverse, 44
RLC circuit, 2, 6, 11, 35
stability of, 205-207
Root locus, 169
Row of matrix, 38
Sampled systems, 111
Scalar, 38
equation representation for time-varying
systems, 153-154, 160
product (see Inner product)
transition matrix, 105
Scalor, 16, 164
Schmidt orthonormalization.
See Gram-Schmidt process
Schwarz inequality, 52
Second canonical form, 20-21
for time-varying systems, 153-154
Sensitivity, 179-181
Servomechanism, 208
problem, 218-220
Similarity transformation, 70-72
Simultaneous diagonalization
of two Hermitian matrices, 91
of two arbitrary matrices, 118
Skew-symmetric matrix, 41
Span, 46
Spectral representation, 80
Spring-mass system.
See State variable of spring-mass system
Square root of a matrix, 77
Stability
asymptotic, 194
BIBO, 194
external, 194
i.s.L., 192
in the large, 193
of optimal control, 214
of solution of time-invariant
Riccati equation, 215
of a system, 192
uniform, 193
uniform asymptotic, 193
State
of an abstract object, 1-3
controllable, 128
observable, 128
of a physical object, 1
reachable, 135
recoverable, 135
State equations
in matrix form, 26, 27
solutions of, with input, 109-110
solutions of, with no input, 99-109
State feedback pole placement, 172-173
State space, 3
conditions on description by, 5
State transition matrix, 99-111
State variable, 3
of anticipatory system, 14
of delay line, 10
of differentiator, 14
of diffusion equation, 10
of hereditary system, 10
of spring-mass system, 10
Steady state
errors, 165
response, 125
Sturm-Liouville equation, 197
stability properties of, 198
Submatrices, 40
Sufficient conditions for optimal control, 220
Summer, 16, 164
Superposition, 7
integral, 109
Switch. See Linear network with a switch
Sylvester's criterion, 77
Symmetric matrix, 41
Symmetry
of eigenvalues of H, 215
of solution to matrix Riccati equation, 213
Symplectic, 228
Systems, 7, 8
dynamical, 5
nonanticipative, 5
physical, 1
Time to go, 228
Time-invariant, 7
optimal systems, 213-217
Time-varying systems
flow diagram of, 18
matrix state equations of, 25-26
transition matrices for, 104-111
Totally controllable and observable, 128
Trace, 39
of similar matrices, 72
Trajectory, 4, 5
Transfer function, 112
Transition
matrix, 99-111
property, 5
Transpose matrix, 41
Transposition, 41
Triangle inequality, 51
Triangular matrix, 41
Type-1 systems, 167
Undetermined coefficients, method of.
See Variation of parameters, method of
Uniform abstract object.
See Abstract object, uniform
Uniform stability, 193
asymptotic, 193
Uncontrollable, 3
states in optimal control, 223
Uniqueness of solution
to AᵀP + PA = -Q, 204
to matrix equations, 63
to the time-invariant optimal control, 215
for the square root matrix, 91
Unit
matrix, 41
vector, 41
Unitary matrix, 41
for similarity transformation of a
Hermitian matrix, 76
Unit-time delayor. See Delayor
Unity feedback systems, 165
Unobservable, 3
Unstable, 192
Upper triangular matrix, 41
Van der Pol equation, 192
Vandermonde matrix, 60
Variation of parameters, method of, 109
Variational equations. See Linearization
Vector, 38, 46, 50
Vector flow diagrams, 164
Voltage divider, 1, 3
Wave equation, 141
White noise input, 208
Wiener factorization, 216
Zero matrix, 41
Zero-input stability, 191
z transfer function, 18