
Principles of Digital Communication and Coding

Andrew J. Viterbi
Jim K. Omura



PRINCIPLES OF DIGITAL COMMUNICATION AND CODING



McGraw-Hill Series in Electrical Engineering 

Consulting Editor 

Stephen W. Director, Carnegie-Mellon University 



Networks and Systems 

Communications and Information Theory 

Control Theory 

Electronics and Electronic Circuits 

Power and Energy 

Electromagnetics 

Computer Engineering and Switching Theory 

Introductory and Survey 

Radio, Television, Radar, and Antennas 



Previous Consulting Editors 

Ronald M. Bracewell, Colin Cherry, James F. Gibbons, Willis W. Harman, 
Hubert Heffner, Edward W. Herold, John G. Linvill, Simon Ramo, Ronald A. Rohrer, 
Anthony E. Siegman, Charles Susskind, Frederick E. Terman, John G. Truxal, 
Ernst Weber, and John R. Whinnery 



Communications and Information Theory 

Consulting Editor 

Stephen W. Director, Carnegie-Mellon University 

Abramson: Information Theory and Coding 

Angelakos and Everhart: Microwave Communications 

Antoniou: Digital Filters: Analysis and Design 

Bennett: Introduction to Signal Transmission 

Berlekamp: Algebraic Coding Theory 

Carlson: Communications Systems 

Davenport: Probability and Random Processes: An Introduction for Applied Scientists and Engineers

Davenport and Root: Introduction to Random Signals and Noise 
Drake: Fundamentals of Applied Probability Theory 
Gold and Rader: Digital Processing of Signals 
Guiasu: Information Theory with New Applications 
Hancock: An Introduction to Principles of Communication Theory 
Melsa and Cohn: Decision and Estimation Theory 
Papoulis: Probability, Random Variables, and Stochastic Processes 
Papoulis: Signal Analysis 

Schwartz: Information Transmission, Modulation, and Noise 
Schwartz, Bennett, and Stein: Communication Systems and Techniques 
Schwartz and Shaw: Signal Processing 

Shooman: Probabilistic Reliability: An Engineering Approach 
Taub and Schilling: Principles of Communication Systems 
Viterbi: Principles of Coherent Communication 
Viterbi and Omura: Principles of Digital Communication and Coding 



PRINCIPLES OF DIGITAL COMMUNICATION AND CODING

Andrew J. Viterbi
LINKABIT Corporation

Jim K. Omura
University of California, Los Angeles



McGraw-Hill, Inc. 

New York St. Louis San Francisco Auckland Bogota 

Caracas Lisbon London Madrid Mexico City Milan 

Montreal New Delhi San Juan Singapore 

Sydney Tokyo Toronto 



PRINCIPLES OF DIGITAL COMMUNICATION AND CODING 

Copyright 1979 by McGraw-Hill, Inc. All rights reserved. 
Printed in the United States of America. No part of this publication 
may be reproduced, stored in a retrieval system, or transmitted, in any 
form or by any means, electronic, mechanical, photocopying, 
recording, or otherwise, without the 
prior written permission of the publisher. 

9101112 KPKP 976543 



This book was set in Times Roman. 
The editors were Frank J. Cerra and J. W. Maisel; 
the cover was designed by Albert M. Cetta; 
the production supervisor was Charles Hess. 
The drawings were done by Santype Ltd. 
Kingsport Press was printer and binder. 



Library of Congress Cataloging in Publication Data 

Viterbi, Andrew J 

Principles of digital communication and coding. 

(McGraw-Hill electrical engineering series: Communications and information theory section)

Includes bibliographical references and index. 

1. Digital communications. 2. Coding theory. 
I. Omura, Jim K., joint author. II. Title. 
III. Series. 

TK5103.7.V57 621.38 78-13951 

ISBN 0-07-067516-3



CONTENTS

Preface  xi

Part One  Fundamentals of Digital Communication and Block Coding

Chapter 1  Digital Communication Systems: Fundamental Concepts and Parameters  3
1.1 Sources, Entropy, and the Noiseless Coding Theorem  7
1.2 Mutual Information and Channel Capacity  19
1.3 The Converse to the Coding Theorem  28
1.4 Summary and Bibliographical Notes  34
Appendix 1A Convex Functions  35
Appendix 1B Jensen Inequality for Convex Functions  40
Problems  42

Chapter 2  Channel Models and Block Coding  47
2.1 Block-coded Digital Communication on the Additive Gaussian Noise Channel  47
2.2 Minimum Error Probability and Maximum Likelihood Decoder  54
2.3 Error Probability and a Simple Upper Bound  58
2.4 A Tighter Upper Bound on Error Probability  64
2.5 Equal Energy Orthogonal Signals on the AWGN Channel  65
2.6 Bandwidth Constraints, Intersymbol Interference, and Tracking Uncertainty  69
2.7 Channel Input Constraints  76
2.8 Channel Output Quantization: Discrete Memoryless Channels  78
2.9 Linear Codes  82
*2.10 Systematic Linear Codes and Optimum Decoding for the BSC  89
*2.11 Examples of Linear Block Code Performance on the AWGN Channel and Its Quantized Reductions  96
2.12 Other Memoryless Channels  102
2.13 Bibliographical Notes and References  116
Appendix 2A Gram-Schmidt Orthogonalization and Signal Representation  117
Problems  119

Chapter 3  Block Code Ensemble Performance Analysis  128
3.1 Code Ensemble Average Error Probability: Upper Bound  128
3.2 The Channel Coding Theorem and Error Exponent Properties for Memoryless Channels  133
3.3 Expurgated Ensemble Average Error Probability: Upper Bound at Low Rates  143
3.4 Examples: Binary-Input, Output-Symmetric Channels, and Very Noisy Channels  151
3.5 Chernoff Bounds and the Neyman-Pearson Lemma  158
3.6 Sphere-Packing Lower Bounds  164
*3.7 Zero Rate Lower Bounds  173
*3.8 Low Rate Lower Bounds  178
*3.9 Conjectures and Converses  184
*3.10 Ensemble Bounds for Linear Codes  189
3.11 Bibliographical Notes and References  194
Appendix 3A Useful Inequalities and the Proofs of Lemma 3.2.1 and Corollary 3.3.2  194
Appendix 3B Kuhn-Tucker Conditions and Proofs of Theorems 3.2.2 and 3.2.3  202
Appendix 3C Computational Algorithm for Capacity  207
Problems  212

Part Two  Convolutional Coding and Digital Communication

Chapter 4  Convolutional Codes  227
4.1 Introduction and Basic Structure  227
4.2 Maximum Likelihood Decoder for Convolutional Codes: The Viterbi Algorithm  235
4.3 Distance Properties of Convolutional Codes for Binary-Input Channels  239
4.4 Performance Bounds for Specific Convolutional Codes on Binary-Input, Output-Symmetric Memoryless Channels  242
4.5 Special Cases and Examples  246
4.6 Structure of Rate 1/n Codes and Orthogonal Convolutional Codes  253
4.7 Path Memory Truncation, Metric Quantization, and Code Synchronization in Viterbi Decoders  258
*4.8 Feedback Decoding  262
*4.9 Intersymbol Interference Channels  272
*4.10 Coding for Intersymbol Interference Channels  284
4.11 Bibliographical Notes and References  286
Problems  287

Chapter 5  Convolutional Code Ensemble Performance  301
5.1 The Channel Coding Theorem for Time-varying Convolutional Codes  301
5.2 Examples: Convolutional Coding Exponents for Very Noisy Channels  313
5.3 Expurgated Upper Bound for Binary-Input, Output-Symmetric Channels  315
5.4 Lower Bound on Error Probability  318
*5.5 Critical Lengths of Error Events  322
5.6 Path Memory Truncation and Initial Synchronization Errors  327
5.7 Error Bounds for Systematic Convolutional Codes  328
*5.8 Time-varying Convolutional Codes on Intersymbol Interference Channels  331
5.9 Bibliographical Notes and References  341
Problems  342

Chapter 6  Sequential Decoding of Convolutional Codes  349
6.1 Fundamentals and a Basic Stack Algorithm  349
6.2 Distribution of Computation: Upper Bound  355
6.3 Error Probability Upper Bound  361
6.4 Distribution of Computations: Lower Bound  365
6.5 The Fano Algorithm and Other Sequential Decoding Algorithms  370
6.6 Complexity, Buffer Overflow, and Other System Considerations  374
6.7 Bibliographical Notes and References  378
Problems  379

Part Three  Source Coding for Digital Communication

Chapter 7  Rate Distortion Theory: Fundamental Concepts for Memoryless Sources  385
7.1 The Source Coding Problem  385
7.2 Discrete Memoryless Sources: Block Codes  388
7.3 Relationships with Channel Coding  404
7.4 Discrete Memoryless Sources: Trellis Codes  411
7.5 Continuous Amplitude Memoryless Sources  423
*7.6 Evaluation of R(D): Discrete Memoryless Sources  431
*7.7 Evaluation of R(D): Continuous Amplitude Memoryless Sources  445
7.8 Bibliographical Notes and References  453
Appendix 7A Computational Algorithm for R(D)  454
Problems  459

Chapter 8  Rate Distortion Theory: Memory, Gaussian Sources, and Universal Coding  468
8.1 Memoryless Vector Sources  468
8.2 Sources with Memory  479
8.3 Bounds for R(D)  494
8.4 Gaussian Sources with Squared-Error Distortion  498
8.5 Symmetric Sources with Balanced Distortion Measures and Fixed Composition Sequences  513
8.6 Universal Coding  526
8.7 Bibliographical Notes and References  534
Appendix 8A Chernoff Bounds for Distortion Distributions  534
Problems  541

Bibliography  547
Index  553

* May be omitted without loss of continuity.



PREFACE 



Digital communication is a much used term with many shades of meaning, widely varying and strongly dependent on the user's role and requirements. This book is directed to the communication theory student and to the designer of the channel, link, terminal, modem, or network used to transmit and receive digital messages. Within this community, digital communication theory has come to signify the body of knowledge and techniques which deal with the two-faceted problem of (1) minimizing the number of bits which must be transmitted over the communication channel so as to provide a given printed, audio, or visual record within a predetermined fidelity requirement (called source coding); and (2) ensuring that bits transmitted over the channel are received correctly despite the effects of interference of various types and origins (called channel coding). The foundations of the theory which provides the solution to this twofold problem were laid by Claude Shannon in one remarkable series of papers in 1948. In the intervening decades, the evolution and application of this so-called information theory have had ever-expanding influence on the practical implementation of digital communication systems, although their widespread application has required the evolution of electronic-device and system technology to a point which was hardly conceivable in 1948. This progress was accelerated by the development of the large-scale integrated-circuit building block and the economic incentive of communication satellite applications.

We have not attempted in this book to cover peripheral topics related to 
digital communication theory when they involve a major deviation from the 
basic concepts and techniques which lead to the solution of this fundamental 
problem. For this reason, constructive algebraic techniques, though valuable for 
developing code structures and important theoretical results of broad interest, are 
specifically avoided in this book. Similarly, the peripheral, though practically 
important, problems of carrier phase and frequency tracking, and time synchronization are not treated here. These have been covered adequately elsewhere. On
the other hand, the equally practical subject of intersymbol interference in 





digital communication, which is fundamentally similar to the problem of convolutional coding, is covered and new insights are developed through connections
with the mainstream topics of the text. 

This book was developed over approximately a dozen years of teaching a 
sequence of graduate courses at the University of California, Los Angeles, and later 
at the University of California, San Diego, with partial notes being distributed 
over the past few years. Our goal in the resulting manuscript has been to provide 
the most direct routes to achieve an understanding of this field for a variety of 
goals and needs. All readers require some fundamental background in probability 
and random processes and preferably their application to communication 
problems; one year's exposure to any of a variety of engineering or mathematics
courses provides this background and the resulting maturity required to start. 

Given this preliminary knowledge, there are numerous approaches to utilization of this text to achieve various individual goals, as illustrated graphically by the prerequisite structure of Fig. P.1. A semester or quarter course for the beginning graduate student may involve only Part One, consisting of the first three chapters (omitting starred sections), which provide, respectively, the fundamental concepts and parameters of sources and channels, a thorough treatment of channel models based on physical requirements, and an undiluted initiation into the evaluation of code capabilities based on ensemble averages.

[Figure P.1 Organization and prerequisite structure: Part One, Fundamentals of digital communication and block coding; Part Two, Convolutional coding for digital communication; Part Three, Source coding for digital communication; ranging from introductory to advanced material.]

The advanced student or specialist can then proceed with Part Two, an equally detailed exposition of convolutional coding and decoding. These techniques are most effective in exploiting the capabilities of the channel toward approaching virtually error-free communications. It is possible in a one-year course to cover Part Three as well, which demonstrates how optimal source coding techniques are derived essentially as the duals of the channel coding techniques of Parts One and Two.

The applications-oriented engineer or student can obtain an understanding 
of channel coding for physical channels by tackling only Chapters 2, 4, and about 
half of 6. Avoiding the intricacies of ensemble-average arguments, the reader 
can learn how to code for noisy channels without making the additional effort 
to understand the complete theory. 

At the opposite extreme, students with some background in digital 
communications can be guided through the channel-coding material in Chapters 
3 through 6 in a one-semester or one-quarter course, and advanced students, 
who already have channel-coding background, can cover Part Three on source 
coding in a course of similar duration. Numerous problems are provided to 
furnish examples, to expand on the material or indicate related results, and 
occasionally to guide the reader through the steps of lengthy alternate proofs 
and derivations. 

Aside from the obvious dependence of any course in this field on Shannon's work, two important textbooks have had notable effect on the development and organization of this book. These are Wozencraft and Jacobs [1965], which first emphasized the physical characteristics of digital communication channels as a basis for the development of coding theory fundamentals, and Gallager [1968], which is the most complete and expert treatment of this field to date.

Collaboration with numerous university colleagues and students helped establish the framework for this book. But the academic viewpoint has been tempered in the book by the authors' extensive involvement with industrial applications. A particularly strong influence has been the close association of the first author with the design team at LINKABIT Corporation, led by I. M. Jacobs, J. A. Heller, A. R. Cohen, and K. S. Gilhousen, which first implemented high-speed reliable versions of all the convolutional decoding techniques treated in this book. The final manuscript also reflects the thorough and complete reviews and critiques of the entire text by J. L. Massey, many of whose suggested improvements have been incorporated to the considerable benefit of the prospective reader.

Finally, those discouraged by the seemingly lengthy and arduous route to a 
thorough understanding of communication theory might well recall the ancient 
words attributed to Lao Tzu of twenty-five centuries ago: "The longest journey 
starts with but a single step." 

Andrew J. Viterbi 
Jim K. Omura 



PART ONE

FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING

CHAPTER ONE

DIGITAL COMMUNICATION SYSTEMS: FUNDAMENTAL CONCEPTS AND PARAMETERS



In the field of communication system engineering, the second half of the twentieth century is destined to be recognized as the era of the evolution of digital communication, as indeed the first half was the era of the evolution of radio communication to the point of reliable transmission of messages, speech, and television, mostly in analog form.

The development of digital communication was given impetus by three prime 
driving needs: 

1. Greatly increased demands for data transmission of every form, from computer data banks to remote-entry data terminals for a variety of applications, with ever-increasing accuracy requirements

2. Rapid evolution of synchronous artificial satellite relays which facilitate worldwide communications at very high data rates, but whose launch costs, and consequent power and bandwidth limitations, impose a significant economic incentive on the efficient use of the channel resources

3. Data communication networks which must simultaneously service many different users with a variety of rates and requirements, in which simple and efficient multiplexing of data and multiple access of channels is a primary economic concern

[Figure 1.1 Basic model of a digital communication system: source → source encoder → channel encoder → channel → channel decoder → source decoder → destination, with the channel encoder, channel, and channel decoder enclosed in a dashed contour; the source sequence enters the source encoder, and the input and output digital sequences appear at the channel encoder input and channel decoder output.]

These requirements and the solid-state electronic technology needed to support the development of efficient, flexible, and error-free digital communication

systems evolved simultaneously and in parallel throughout the third quarter of this century, but the theoretical foundations were laid just before mid-century by the celebrated "Mathematical Theory of Communication" papers of C. E. Shannon [1948]. With unique intuition, Shannon perceived that the goals of approaching error-free digital communication on noisy channels and of maximally efficient conversion of analog signals to digital form were dual facets of the same problem, that they share a common framework and virtually a common solution. For the most part, this solution is presented in the original Shannon papers. The refinement, embellishment, and reduction to practical form of the theory occupied many researchers for the next two decades in efforts which paralleled in time the technology development required to implement the techniques and algorithms which the theory dictated.

The dual problem formulated and solved by Shannon is best described in terms of the block diagram of Fig. 1.1. The source is modeled as a random generator of data or a stochastic signal to be transmitted. The source encoder performs a mapping from the source output into a digital sequence (usually binary). If the source itself generates a digital output, the encoder mapping can be one-to-one. Ignore for the moment the channel with its encoder and decoder (within the dashed contour in Fig. 1.1) and replace it by a direct connection called a noiseless channel. Then if the source encoder mapping is one-to-one, the source decoder can simply perform the inverse mapping and thus deliver to the destination the same data as was generated by the source. The purpose of the source encoder-decoder pair is then to reduce the source output to a minimal representation. The measure of the "data compression" achieved is the rate in symbols (usually binary) required per unit time to fully represent and, ultimately at the source decoder, to reconstitute the source output sequence. This minimum rate at which the stochastic digital source sequence can be transmitted over a noiseless channel and reconstructed perfectly is related to a basic parameter of stochastic sources called entropy.

When the source is analog, it cannot be represented perfectly by a digital sequence because the source output sequence takes on values from an uncountably infinite set, and thus obviously cannot be mapped one-to-one into a discrete set, i.e., a digital alphabet.¹ The best that can be done in mapping the source into a digital sequence is to tolerate some distortion at the destination after the source decoder operation, which now only approximates the inverse mapping. In this case, the distortion (appropriately defined) is set at a fixed maximum, and the goal is to minimize the rate, again defined in digital symbols per unit time, subject to the distortion limit. The solution to this problem requires the generalization of the entropy parameter of the source to a quantity called the rate distortion function. This function of distortion represents the minimum rate at which the source output can be transmitted over a noiseless channel and still be reconstructed within the given distortion.

¹ The simplest example of an analog source encoder is an analog-to-digital converter, also called a quantizer, for which the source decoder is a digital-to-analog converter.

The dual to this first problem is the accurate transmission of the digital (source encoder output) sequence over a noisy channel. Considering now only the blocks within the dashed contour in Fig. 1.1, the noisy channel is to be regarded as a random mapping of its input defined over a given discrete set (digital alphabet) into an output defined over an arbitrary set which is not necessarily the same as the input set. In fact, for most physical channels the output space is often continuous (uncountable) although discrete channel models are also commonly considered.

The goal of the channel encoder and decoder is to map the input digital sequence into a channel input sequence and conversely the channel output sequence into an output digital sequence such that the effect of the channel noise is minimized; that is, such that the number of discrepancies (errors) between the output and input digital sequences is minimized. The approach is to introduce redundancy in the channel encoder and to use this redundancy at the decoder to reconstitute the input sequence as accurately as possible. Thus in a simplistic sense the channel coding is dual to the source coding in that the latter eliminates or reduces redundancy while the former introduces it for the purpose of minimizing errors. As will be shown to the reader who completes this book, this duality can be established in a much more quantitative and precise sense. Without further evolution of the concepts at this point, we can state the single most remarkable conclusion of the Shannon channel coding theory: namely, that with sufficient but finite redundancy properly introduced at the channel encoder, it is possible for the channel decoder to reconstitute the input sequence to any degree of accuracy desired. The measure of redundancy introduced is established by the rate of digital symbols per unit time input to the channel encoder and output from the channel decoder. This rate, which is the same as the rate at the source encoder output and source decoder input, must be less than the rate of transmission over the noisy channel because of the redundancy introduced. Shannon's main result here is that, provided the input rate to the channel encoder is less than a given value established by the channel capacity (a basic parameter of the channel which is a function of the random mapping distribution which defines the channel), there exist encoding and decoding operations which asymptotically for arbitrarily long sequences can lead to error-free reconstruction of the input sequence.

As an immediate consequence of the source coding and channel coding theories, it follows that if the minimum rate at which a digital source sequence can be uniquely represented by the source encoder is less than the maximum rate for which the channel output can be reconstructed error-free by the channel decoder, then the system of Fig. 1.1 can transfer digital data with arbitrarily high accuracy from source to destination. For analog sources the same holds, but only within a predetermined (tolerable) distortion which determines the source encoder's minimum rate, provided this rate is less than the channel maximum rate mentioned above.

This text aims to present quantitatively these fundamental concepts of digital 
communication system theory and to demonstrate their applicability to existing 
channels and sources. 

In this first chapter, two of the basic parameters, source entropy and channel capacity, are defined and a start is made toward establishing their significance. Entropy is shown to be the key parameter in the noiseless source coding theorem, proved in Sec. 1.1. The similar role of the capacity parameter for channels is partially established by the proof in Sec. 1.3 of the converse to the channel coding theorem, which establishes that for no rate greater than the maximum determined by capacity can error-free reconstruction be effected by any channel encoder-decoder pair. The full significance of capacity is established only in the next two chapters. Chap. 2 defines and derives the models of the channels of greatest interest to the communication system designer and introduces the rudimentary concepts of channel encoding and decoding. In Chap. 3 the proof of the channel coding theorem is completed in terms of a particular class of channel codes called block codes, and thus the full significance of capacity is established.

However, while the theoretical capabilities and limitations of channel coding 
are well established by the end of Chap. 3, their practical applicability and manner 
of implementation is not yet clear. This situation is for the most part remedied by 
Chap. 4 which describes a more practical and powerful class of codes, called 
convolutional codes, for which the channel encoding operation is performed by a 
digital linear filter, and for which the channel decoding operation arises in a 
natural manner from the simple properties of the code. Chap. 5 establishes further 
properties and limitations of these codes and compares them with those of block 
codes established in Chap. 3. Then Chap. 6 explores an alternative decoding 
procedure, called sequential decoding, which permits under some circumstances 
and with some limitations the use of extremely powerful convolutional codes. 

Finally Chap. 7 returns to the source coding problem, considering analog 
sources for the first time and developing the fundamentals of rate distortion 
theory for memoryless sources. Both block and convolutional source coding 
techniques are treated and thereby the somewhat remarkable duality between 
channel and source coding problems and solutions is established. Chap. 8 extends 
the concepts of Chap. 7 to sources with memory and presents more advanced 
topics in rate distortion theory. 

Shannon's mathematical theory of communication almost from the outset became known as information theory. While indeed one aspect of the theory is to define information and establish its significance in practical engineering terms, the main contribution of the theory has been in establishing the ultimate capabilities and limitations of digital communication systems. Nevertheless, a natural starting point is the quantitative definition of information as required by the communication engineer. This will lead us in Sec. 1.1 to the definition of entropy and the development of its key role as the basic parameter of digital source coding.



1.1 SOURCES, ENTROPY, AND THE NOISELESS CODING THEOREM

" The weather today in Los Angeles is sunny with moderate amounts of smog " is a 
news event that, though not surprising, contains some information, since our 
previous uncertainty about the weather in Los Angeles is now resolved. On the 
other hand, the news event, " Today there was a devastating earthquake in Cali 
fornia which leveled much of downtown Los Angeles," is more unexpected and 
certainly contains more information than the first report. But what is informa 
tion? What is meant by the "information" contained in the above two events? 
Certainly if we are formally to define a quantitative measure of information con 
tained in such events, this measure should have some intuitive properties such as : 

1. Information contained in events ought to be defined in terms of some measure 
of the uncertainty of the events. 

2. Less certain events ought to contain more information than more certain 
events. 

In addition, assuming that weather conditions and earthquakes are unrelated 
events, if we were informed of both news events we would expect that the total 
amount of information in the two news events be the sum of the information 
contained in each. Hence we have a third desired property: 

3. The information of unrelated events taken as a single event should equal the 
sum of the information of the unrelated events. 

A natural measure of the uncertainty of an event $\alpha$ is the probability of $\alpha$, denoted $P(\alpha)$. The formal term for "unrelatedness" is independence; two events $\alpha$ and $\beta$ are said to be independent if
$$P(\alpha \cap \beta) = P(\alpha)P(\beta) \tag{1.1.1}$$
Once we agree to define the information of an event $\alpha$ in terms of the probability of $\alpha$, the properties (2) and (3) will be satisfied if the information in event $\alpha$ is defined as
$$I(\alpha) = -\log P(\alpha) \tag{1.1.2}$$
from which it follows that, if $\alpha$ and $\beta$ are independent, $I(\alpha \cap \beta) = -\log P(\alpha)P(\beta) = -\log P(\alpha) - \log P(\beta) = I(\alpha) + I(\beta)$. The base of the logarithm merely specifies the scaling and hence the unit of information we wish to use. This definition of information appears naturally from the intuitive properties proposed above, but what good is such a definition of information? Although we would not expect such a simple definition to be particularly useful in quantifying most of the complex exchanges of information, we shall demonstrate that this definition is not only appropriate but also a central concept in digital communication.

Our main concern is the transmission and processing of information in which the information source and the communication channel are represented by probabilistic models. Sources of information, for example, are defined in terms of probabilistic models which emit events or random variables. We begin by defining the simplest type of information source.

Definition A discrete memoryless source (DMS) is characterized by its output, the random variable $u$, which takes on letters from a finite alphabet $\mathscr{U} = \{a_1, a_2, \ldots, a_A\}$ with probabilities
$$P(a_k) \qquad k = 1, 2, \ldots, A \tag{1.1.3}$$
Each unit of time, say every $T_s$ seconds, the source emits a random variable which is independent of all preceding and succeeding source outputs.

According to our definition of information, if at any time the output of our DMS is $u = a_k$, which situation we shall label as event $a_k$, then that output contains
$$I(a_k) = -\log P(a_k) \tag{1.1.4}$$
units of information. If we use natural logarithms, then our units are called "nats," and if we use logarithms to the base 2, our units are called "bits." Clearly, the two units differ merely by the scale factor $\ln 2$. We shall use "log" to mean logarithm to the base 2 and "ln" to denote natural logarithm. The average amount of information per source output is simply²
$$H(\mathscr{U}) = \sum_{u} P(u) \log \frac{1}{P(u)} \tag{1.1.5}$$
$H(\mathscr{U})$ is called the entropy of the DMS. Here we take $0 \log 0 = \lim_{\epsilon \to 0} \epsilon \log \epsilon = 0$.

² Throughout this book we shall write a variable below the summation sign to mean summation over the entire range of the variable (i.e., all possible values which the variable can assume). When the summation is over only a subset of all the possible values, then the subset will also be shown under the summation.

To establish the operational significance of entropy we require the fundamental inequality
$$\ln x \le x - 1 \tag{1.1.6}$$



which can be verified by noting that the function $f(x) = \ln x - (x - 1)$ has a unique maximum value of $0$ at $x = 1$. In Fig. 1.2 we sketch $\ln x$ and $x - 1$.

[Figure 1.2 Sketch of the functions $\ln x$ and $x - 1$.]

For any two probability distributions $P(\cdot)$ and $Q(\cdot)$ on the alphabet $\mathscr{U}$, it follows from this inequality that
this inequality that 



<(ln2)- 
= 



- 1 



which establishes the inequality 



(1.1.7) 
(1.1.8) 



with equality if and only if Q(w) = P(w) for all u 6 J l/. 

Inequalities (1.1.6) and (1.1.8) are among the most commonly used inequali 
ties in information theory. Choosing Q(u) = I/ A for all u e {a^ a 2 , ..., a A } m 
(1.1.8), for example, shows that sources with equiprobable output symbols have 
the greatest entropy. That is, 



< JFf (#) < log A 

with equality if and only if P(u) = \/A for all u e ^ = 



(1.1.9) 



Example (Binary memoryless source) For a DMS with alphabet $\mathscr{U} = \{0, 1\}$ with probabilities $P(0) = p$ and $P(1) = 1 - p$ we have entropy
$$H(\mathscr{U}) = p \log \frac{1}{p} + (1 - p) \log \frac{1}{1 - p} = \mathscr{H}(p) \text{ bits}$$
where $\mathscr{H}(p)$ is called the binary entropy function. Here $\mathscr{H}(p) \le 1$ with equality if and only if $p = \frac{1}{2}$. When $p = \frac{1}{2}$, we call this source a binary symmetric source (BSS). Each output of a BSS contains 1 bit of information.
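A minimal sketch of the binary entropy function of this example (Python; the function name is ours, not the text's):

```python
import math

def binary_entropy(p):
    """H(p) = p log2(1/p) + (1-p) log2(1/(1-p)) bits, with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1.0 / p) + (1.0 - p) * math.log2(1.0 / (1.0 - p))

for p in (0.0, 0.1, 0.25, 0.5, 0.9):
    print(p, binary_entropy(p))
# The maximum, 1 bit, occurs at p = 1/2 (the binary symmetric source).
```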

Suppose we next let $\mathbf{u} = (u_1, u_2, \ldots, u_N)$ be the DMS output random sequence of length $N$. The random variables $u_1, u_2, \ldots, u_N$ are independent and identically distributed; hence the probability distribution of $\mathbf{u}$ is given by³
$$P_N(\mathbf{u}) = \prod_{n=1}^{N} P(u_n) \tag{1.1.10}$$
where $P(\cdot)$ is the given source output distribution. Note that for source output sequences $\mathbf{u} = (u_1, u_2, \ldots, u_N) \in \mathscr{U}_N$ of length $N$, we can define the average amount of information per source output sequence as
$$H(\mathscr{U}_N) = \sum_{\mathbf{u}} P_N(\mathbf{u}) \log \frac{1}{P_N(\mathbf{u})} \tag{1.1.11}$$
As expected, since the source is memoryless, we get
$$H(\mathscr{U}_N) = \sum_{\mathbf{u}} P_N(\mathbf{u}) \log \frac{1}{\prod_{n=1}^{N} P(u_n)} = \sum_{n=1}^{N} \sum_{\mathbf{u}} P_N(\mathbf{u}) \log \frac{1}{P(u_n)} = NH(\mathscr{U}) \tag{1.1.12}$$
which shows that the total average information in a sequence of independent outputs is the sum of the average information in each output in the sequence.



³ We adopt the notation that a subscript on a density or distribution function indicates the dimensionality of the random vector; however, in the case of a one-dimensional random variable, no subscript is used. Similar subscript notation is used for alphabets to indicate Cartesian products.




If the $N$ outputs are not independent, the equality (1.1.12) becomes only an upper bound. To obtain this more general result, let
$$Q_N(\mathbf{u}) = \prod_{n=1}^{N} P(u_n) \qquad \text{where } P(u_n) = \sum_{\substack{u_i,\ i \neq n}} P_N(\mathbf{u}) \tag{1.1.13}$$
is the first-order probability⁴ of output $u_n$ and $Q_N(\mathbf{u}) \neq P_N(\mathbf{u})$ unless the variables are independent. Then it follows from (1.1.8) that
$$H(\mathscr{U}_N) = \sum_{\mathbf{u}} P_N(\mathbf{u}) \log \frac{1}{P_N(\mathbf{u})} \le \sum_{\mathbf{u}} P_N(\mathbf{u}) \log \frac{1}{Q_N(\mathbf{u})} = \sum_{\mathbf{u}} P_N(\mathbf{u}) \log \frac{1}{\prod_{n=1}^{N} P(u_n)} = NH(\mathscr{U})$$
where the last step follows exactly as in the derivation of (1.1.12). Hence
$$H(\mathscr{U}_N) \le NH(\mathscr{U}) \tag{1.1.14}$$
with equality if and only if the source outputs $u_1, u_2, \ldots, u_N$ are independent; i.e., the random variables $u_1, u_2, \ldots, u_N$ are the outputs of a memoryless source.

⁴ We assume here that this distribution is the same for each output and that $H(\mathscr{U}) = -\sum_u P(u) \log P(u)$. For generalizations see Prob. 1.2.

In many applications, the outputs of an information source are either transmitted to some destination or stored in a computer. In either case, it is convenient to represent the source outputs by binary symbols. It is imperative that this be done in such a way that the original source outputs can be recovered from the binary symbols. Naturally, we would like to use as few binary symbols per source output as possible. Shannon's first theorem, called the noiseless source coding theorem, shows that the average number of binary symbols per source output can be made to approach the entropy of the source and no less. This rather surprising result gives the notion of entropy of a source its operational significance. We now prove this theorem for the DMS.

Let $\mathbf{u} = (u_1, u_2, \ldots, u_N)$ be a DMS output random sequence of length $N$ and $\mathbf{x} = (x_1, x_2, \ldots, x_{l_N})$ be the corresponding binary sequence of length $l_N(\mathbf{u})$ representing the source sequence $\mathbf{u}$. For fixed $N$, the set of all $A^N$ binary sequences (codewords) corresponding to all the source sequences of length $N$ is called a code. Since codeword lengths can be different, in order to be able to recover the original source sequence from the binary symbols we require that no two distinct finite sequences of codewords form the same overall binary sequence. Such codes are called uniquely decodable. A sufficient condition for a code to be uniquely decodable is the property that no codeword of length $l$ is identical to the first $l$ binary symbols of another codeword of length greater than or equal to $l$. That is, no codeword is a prefix of another codeword. This is clearly a sufficient condition, for given the binary sequence we can always uniquely determine the end of a codeword and no two codewords are the same. Uniquely decodable codes with this prefix property have the practical advantage of being "instantaneously decodable"; that is, each codeword can be decoded as soon as the last symbol of the codeword is received.

Example Suppose $\mathscr{U} = \{a, b, c\}$. Consider the following codes for sequences of length $N = 1$.

         Code 1    Code 2    Code 3
    a    0         00        1
    b    1         01        10
    c    01        10        100

Code 1 is not uniquely decodable since the binary sequence 0101 can be due to source sequences abab, abc, cc, or cab. Code 2 is uniquely decodable since all codewords are the same length and distinct. Code 3 is also uniquely decodable since "1" always marks the beginning of a codeword and codewords are distinct. For $N = 2$ suppose we have a code

         Code 4
    aa   000
    ab   001
    ac   010
    ba   011
    bb   1000
    bc   1001
    ca   1010
    cb   1011
    cc   1100

This code for source sequences of length 2 in $\mathscr{U}_2$ is uniquely decodable since all sequences are unique and the first symbol tells us the codeword length. A first "0" tells us the codeword is of length 3 while a first "1" will tell us the codeword is of length 4. Furthermore this code has the property that no codeword is a prefix of another. That is, all codewords are distinct and no codeword of length 3 can be the first 3 binary symbols of a codeword of length 4.
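The prefix condition is easy to test mechanically. The sketch below (Python; the helper names are ours, not the text's) checks it for Codes 1 through 4 of the example and decodes a Code 3 string by using the fact that "1" marks the beginning of every codeword. Note that Code 3 fails the prefix test yet is uniquely decodable, illustrating that the prefix property is only a sufficient condition.

```python
def is_prefix_free(code):
    """Sufficient condition for unique decodability: no codeword is a prefix of another."""
    words = list(code.values())
    return not any(w1 != w2 and w2.startswith(w1) for w1 in words for w2 in words)

code1 = {'a': '0', 'b': '1', 'c': '01'}
code2 = {'a': '00', 'b': '01', 'c': '10'}
code3 = {'a': '1', 'b': '10', 'c': '100'}
code4 = {'aa': '000', 'ab': '001', 'ac': '010', 'ba': '011',
         'bb': '1000', 'bc': '1001', 'ca': '1010', 'cb': '1011', 'cc': '1100'}

for name, code in [('Code 1', code1), ('Code 2', code2), ('Code 3', code3), ('Code 4', code4)]:
    print(name, 'prefix-free:', is_prefix_free(code))
# Code 1: False (and indeed not uniquely decodable: 0101 parses several ways).
# Code 3: False as well, yet it IS uniquely decodable -- here "1" marks the start
# of every codeword, so decoding is still unambiguous.

def decode_code3(bits):
    """Decode Code 3 by splitting just before each '1', which begins every codeword."""
    inv = {v: k for k, v in code3.items()}
    out, word = [], ''
    for b in bits:
        if b == '1' and word:          # a new codeword starts; emit the previous one
            out.append(inv[word])
            word = ''
        word += b
    out.append(inv[word])
    return ''.join(out)

print(decode_code3('1101001'))         # -> 'abca'
```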

We now proceed to state and prove the noiseless source coding theorem in its 
simplest form. This theorem will serve to establish the operational significance of 
entropy. 



Theorem 1.1.1: Noiseless coding theorem for discrete memoryless sources Given a DMS with alphabet $\mathscr{U}$ and entropy $H(\mathscr{U})$, for source sequences of length $N$ ($N = 1, 2, \ldots$) there exists a uniquely decodable binary code consisting of binary sequences (codewords) of lengths $l_N(\mathbf{u})$ for $\mathbf{u} \in \mathscr{U}_N$ such that the average length of the codewords is bounded by
$$\langle L_N \rangle \le N[H(\mathscr{U}) + o(N)] \tag{1.1.15}$$
where $o(N)$ is a term which becomes vanishingly small as $N$ approaches infinity. Conversely, no such code exists for which
$$\langle L_N \rangle < NH(\mathscr{U})$$

The direct half of the theorem, as expressed by (1.1.15), is proved by constructing a uniquely decodable code which achieves the average length bound. There are several such techniques, the earliest being that of Shannon [1948] (see Prob. 1.6), and the one producing an optimal code, i.e., the one which minimizes the average length for any value of $N$, being that of Huffman [1952]. We present here yet another technique which, while less efficient than these standard techniques, not only proves the theorem very directly, but also serves to illustrate an interesting property of the DMS, shared by a much wider class of sources, called the asymptotic equipartition property (AEP). We develop this by means of the following:

Lemma 1.1.1 For any $\epsilon > 0$, consider a DMS with alphabet $\mathscr{U}$, entropy $H = H(\mathscr{U})$, and the subset of all source sequences of length $N$ defined by
$$S(N, \epsilon) = \{\mathbf{u}\colon 2^{-N[H + \epsilon]} \le P_N(\mathbf{u}) < 2^{-N[H - \epsilon]}\} \tag{1.1.16}$$
Then all the source sequences in $S(N, \epsilon)$ can be uniquely represented by binary codewords of fixed length $L_N$ where
$$N[H(\mathscr{U}) + \epsilon] \le L_N < N[H(\mathscr{U}) + \epsilon] + 1 \tag{1.1.17}$$
Furthermore, the probability $F_N$ of the set of all remaining source sequences satisfies
$$F_N \equiv \Pr\{\mathbf{u} \notin S(N, \epsilon)\} \le \frac{\sigma^2}{N\epsilon^2} \tag{1.1.18}$$
where $\sigma^2$ is the variance of the random variable $-\log P(u)$.

Note that all source sequences in the set $S(N, \epsilon)$ are nearly equiprobable, deviating from the value $2^{-NH(\mathscr{U})}$ by a factor no greater than $2^{N\epsilon}$.


PROOF Since $S(N, \epsilon)$ is a subset of $\mathscr{U}_N$, the set of sequences of length $N$, we have the inequality
$$1 = \sum_{\mathbf{u} \in \mathscr{U}_N} P_N(\mathbf{u}) \ge \sum_{\mathbf{u} \in S(N, \epsilon)} P_N(\mathbf{u}) \tag{1.1.19}$$
Since by definition $P_N(\mathbf{u}) \ge 2^{-N[H + \epsilon]}$ for every $\mathbf{u} \in S(N, \epsilon)$, this becomes
$$1 \ge 2^{-N[H + \epsilon]} \, |S(N, \epsilon)| \tag{1.1.20}$$
where $|S(N, \epsilon)|$ is the number of distinct sequences in $S(N, \epsilon)$. This gives us the bound
$$|S(N, \epsilon)| \le 2^{N[H + \epsilon]} \tag{1.1.21}$$
This bound, of course, is generally not an integer, let alone a power of 2. However, we may bracket it between powers of 2 by choosing the integer $L_N$ such that
$$2^{L_N - 1} < 2^{N[H + \epsilon]} \le 2^{L_N} \tag{1.1.22}$$
Since there are $2^{L_N}$ distinct binary sequences of length $L_N$, we can represent uniquely all source sequences belonging to $S(N, \epsilon)$ with binary sequences of length $L_N$, which satisfies (1.1.17).

Turning now to the probability of the set $\bar{S}(N, \epsilon)$, the complement of $S(N, \epsilon)$, which consists of all sequences not represented in this manner, let
$$F_N = \Pr\{\mathbf{u} \in \bar{S}(N, \epsilon)\} = \sum_{\mathbf{u} \in \bar{S}(N, \epsilon)} P_N(\mathbf{u}) \tag{1.1.23}$$
From the definition (1.1.16) of $S(N, \epsilon)$, we have
$$\begin{aligned} S(N, \epsilon) &= \{\mathbf{u}\colon -N[H + \epsilon] \le \log P_N(\mathbf{u}) < -N[H - \epsilon]\} \\ &= \left\{\mathbf{u}\colon -NH - N\epsilon \le \log \prod_{n=1}^{N} P(u_n) < -NH + N\epsilon\right\} \\ &= \left\{\mathbf{u}\colon -N\epsilon \le \sum_{n=1}^{N} \log P(u_n) + NH < N\epsilon\right\} \\ &= \left\{\mathbf{u}\colon \left|\frac{1}{N} \sum_{n=1}^{N} [-\log P(u_n)] - H\right| \le \epsilon\right\} \end{aligned} \tag{1.1.24}$$
Hence the complementary set is
$$\bar{S}(N, \epsilon) = \left\{\mathbf{u}\colon \left|\frac{1}{N} \sum_{n=1}^{N} [-\log P(u_n)] - H\right| > \epsilon\right\} \tag{1.1.25}$$

The random variables
$$z_n = -\log P(u_n) \qquad n = 1, 2, \ldots, N \tag{1.1.26}$$
are independent identically distributed random variables with expected value
$$\bar{z} = E[z] = -\sum_{k=1}^{A} P(a_k) \log P(a_k) = H(\mathscr{U}) \tag{1.1.27}$$
and a finite variance which we denote as
$$\sigma^2 = \operatorname{var}[z]$$
From the well-known Chebyshev inequality (see Prob. 1.4) it follows that for the sum of $N$ such random variables
$$\Pr\left\{\left|\frac{1}{N} \sum_{n=1}^{N} z_n - \bar{z}\right| > \epsilon\right\} \le \frac{\sigma^2}{N\epsilon^2} \tag{1.1.28}$$
Hence for $F_N$ we have
$$F_N = \Pr\left\{\mathbf{u}\colon \left|\frac{1}{N} \sum_{n=1}^{N} [-\log P(u_n)] - H\right| > \epsilon\right\} \le \frac{\sigma^2}{N\epsilon^2} \tag{1.1.29}$$
Thus $F_N$, the probability of occurrence of any source sequence not encoded by a binary sequence of length $L_N$, becomes vanishingly small as $N$ approaches infinity. Indeed, using the tighter Chernoff bound (see Prob. 1.5) we can show that $F_N$ decreases exponentially with $N$. The property that source output sequences belong to $S(N, \epsilon)$ with probability approaching 1 as $N$ increases to infinity is called the asymptotic equipartition property.
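For a binary DMS the typical set $S(N, \epsilon)$ of (1.1.16) and the probability $F_N$ of its complement can be computed exactly even for fairly large $N$, since every sequence with $k$ ones has the same probability. The sketch below (Python; $p$ and $\epsilon$ are arbitrary illustrative values, not from the text) shows $F_N$ shrinking as $N$ grows, as the Chebyshev bound (1.1.29) guarantees.

```python
import math

def F_N(p, N, eps):
    """Exact Pr{u not in S(N, eps)} for a binary DMS with P(1) = p."""
    H = p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))   # entropy in bits
    prob_outside = 0.0
    for k in range(N + 1):                                        # k = number of ones
        # log2 of the probability of any single sequence containing k ones
        log2_Pseq = k * math.log2(p) + (N - k) * math.log2(1 - p)
        sample_entropy = -log2_Pseq / N                           # -(1/N) log2 P_N(u)
        if abs(sample_entropy - H) > eps:                         # sequence lies outside S(N, eps)
            # log2 of the total probability of all C(N, k) such sequences
            log2_binom = (math.lgamma(N + 1) - math.lgamma(k + 1)
                          - math.lgamma(N - k + 1)) / math.log(2)
            prob_outside += 2.0 ** (log2_binom + log2_Pseq)
    return prob_outside

p, eps = 0.2, 0.05
for N in (10, 100, 1000, 5000):
    print(N, F_N(p, N, eps))
# F_N shrinks toward 0 as N grows: source outputs fall in the typical set
# S(N, eps) with probability approaching 1 (asymptotic equipartition property).
```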



PROOF OF THEOREM 1.1.1 Using the results of Lemma 1.1.1, suppose we add one more binary symbol to each of the binary representatives of the sequences in $S(N, \epsilon)$ by preceding these binary representatives with a "0." While this increases the binary sequence lengths from $L_N$ to $L_N + 1$, it has a vanishingly small influence for asymptotically large $N$. Then using (1.1.17) we have that all sequences in $S(N, \epsilon)$ are represented uniquely with binary sequences of length $1 + L_N < N[H + \epsilon] + 2$ bits. For all other sequences in $\bar{S}(N, \epsilon)$, suppose these are represented by a sequence of length $1 + \hat{L}_N$ where the first binary symbol is always "1" and the remaining $\hat{L}_N$ symbols uniquely represent each sequence in $\bar{S}(N, \epsilon)$. This is certainly possible if $\hat{L}_N$ satisfies
$$2^{\hat{L}_N - 1} < A^N \le 2^{\hat{L}_N}$$
or
$$N \log A \le \hat{L}_N < N \log A + 1 \tag{1.1.30}$$
since this is enough to represent uniquely all sequences in $\mathscr{U}_N$.

We now have a unique binary representation or codeword for each output sequence of length $N$. This code is the same type as Code 4 in the example. It is uniquely decodable since the first bit specifies the length ("0" means length $1 + L_N$ and "1" means length $1 + \hat{L}_N$) of the codeword and the remaining bits uniquely specify the source sequence of length $N$. If the first bit is a "0" we examine the next $L_N$ bits which establish uniquely a source sequence in $S(N, \epsilon)$, while if the first bit is a "1" we examine the next $\hat{L}_N$ bits which establish uniquely a source sequence in $\bar{S}(N, \epsilon)$. Each codeword is a unique binary sequence and there is never any uncertainty as to when a codeword sequence begins and ends. No codeword is a prefix of another. The encoder just described is illustrated in Fig. 1.3.

[Figure 1.3 Noiseless source encoder: the DMS output $\mathbf{u} = (u_1, u_2, \ldots, u_N)$ is mapped to $\mathbf{x} = (0, x_1, \ldots, x_{L_N})$ if $\mathbf{u} \in S(N, \epsilon)$, where $L_N < N[H(\mathscr{U}) + \epsilon] + 1$, and to $\mathbf{x} = (1, x_1, \ldots, x_{\hat{L}_N})$ if $\mathbf{u} \in \bar{S}(N, \epsilon)$, where $\hat{L}_N < N \log A + 1$.]

We have thus developed a uniquely decodable code with two possible codeword lengths, $1 + L_N$ and $1 + \hat{L}_N$. The average length of codewords is thus
$$\langle L_N \rangle = (1 + L_N) \Pr\{\mathbf{u} \in S(N, \epsilon)\} + (1 + \hat{L}_N) \Pr\{\mathbf{u} \in \bar{S}(N, \epsilon)\} \le 1 + L_N + \hat{L}_N F_N \tag{1.1.31}$$
and it follows from (1.1.17), (1.1.18), and (1.1.30) that
$$\langle L_N \rangle \le 1 + N[H(\mathscr{U}) + \epsilon] + 1 + [N \log A + 1] \frac{\sigma^2}{N\epsilon^2}$$
or
$$\frac{\langle L_N \rangle}{N} \le H(\mathscr{U}) + \epsilon + \frac{2}{N} + \left[\log A + \frac{1}{N}\right] \frac{\sigma^2}{N\epsilon^2} \tag{1.1.32}$$


Choosing $\epsilon = N^{-1/3}$, this yields
$$\frac{\langle L_N \rangle}{N} \le H(\mathscr{U}) + 2N^{-1} + [(\log A + N^{-1})\sigma^2 + 1] N^{-1/3} \tag{1.1.33}$$
which establishes the direct half of the theorem.
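As a concrete, if toy-sized, illustration of the construction just completed, the sketch below (Python; the source parameters $p$, $N$, and $\epsilon$ are arbitrary illustrative values, and the function name is ours) builds the two-codeword-length code of Fig. 1.3 for a binary DMS by enumerating all $2^N$ sequences, and compares the resulting $\langle L_N \rangle / N$ with $H(\mathscr{U})$.

```python
import math
from itertools import product

def two_length_code(p, N, eps):
    """Sketch of the Fig. 1.3 encoder for a binary DMS with P(1) = p (alphabet size A = 2)."""
    H = p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))
    seqs = list(product((0, 1), repeat=N))                     # all 2^N source sequences

    def prob(u):
        k = sum(u)
        return p ** k * (1 - p) ** (N - k)

    typical = set(u for u in seqs
                  if abs(-math.log2(prob(u)) / N - H) <= eps)  # S(N, eps) of (1.1.16)

    L = math.ceil(N * (H + eps))       # fixed length for typical sequences, cf. (1.1.17)
    L_hat = N                          # N log2 A with A = 2, cf. (1.1.30)
    code, i_typ, i_atyp = {}, 0, 0
    for u in seqs:
        if u in typical:
            code[u] = '0' + format(i_typ, '0%db' % L)          # "0" + L-bit index
            i_typ += 1
        else:
            code[u] = '1' + format(i_atyp, '0%db' % L_hat)     # "1" + L_hat-bit index
            i_atyp += 1
    avg = sum(prob(u) * len(code[u]) for u in seqs)
    return H, avg / N

H, rate = two_length_code(p=0.1, N=16, eps=0.25)
print('H =', H, 'bits;  <L_N>/N =', rate)
# For this toy block length the overhead terms of (1.1.32) dominate, so the rate is
# well above H; it approaches H only as N grows (with eps shrinking as in (1.1.33)).
```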

Before proceeding with the converse half of the theorem, we note that by virtue of the asymptotic equipartition property, for large $N$ nearly all codewords can be made of equal length, slightly larger than $NH(\mathscr{U})$, and only two lengths of codewords are required.⁵ For small $N$, a large variety of codeword lengths becomes more desirable. In fact, just as we have chosen here the length $L_N$ to be approximately equal to the negative logarithm (base 2) of the almost common probability of the output sequence of length $N$ where $N$ is large, so it is desirable (and nearly optimal) to make the codeword lengths proportional to the logarithms of the source sequence probabilities when $N$ is small. In the latter case, individual source sequence probabilities are not generally close together and hence many codeword lengths are required to achieve small average length. The techniques for choosing these so as to produce a uniquely decodable code are several (Shannon [1948], Huffman [1952]) and they have been amply described in many texts. The techniques are not prerequisites to any of the material presented in this book and thus they will not be included in this introductory chapter on fundamental parameters (see, however, Prob. 1.6).

To prove the converse, we must keep in mind that in general we may have a large variety of codeword lengths. Thus for source sequence $\mathbf{u} \in \mathscr{U}_N$ we have a codeword $\mathbf{x}(\mathbf{u})$ which represents $\mathbf{u}$ and has length denoted $l_N(\mathbf{u})$. The lengths of the codewords may be arbitrary. However, the resulting code must be uniquely decodable. For an arbitrary uniquely decodable code we establish a lower bound on $\langle L_N \rangle$.

Consider the identity
$$\left(\sum_{\mathbf{u}} 2^{-l_N(\mathbf{u})}\right)^M = \left(\sum_{\mathbf{u}_1} 2^{-l_N(\mathbf{u}_1)}\right)\left(\sum_{\mathbf{u}_2} 2^{-l_N(\mathbf{u}_2)}\right) \cdots \left(\sum_{\mathbf{u}_M} 2^{-l_N(\mathbf{u}_M)}\right) = \sum_{\mathbf{u}_1} \sum_{\mathbf{u}_2} \cdots \sum_{\mathbf{u}_M} 2^{-[l_N(\mathbf{u}_1) + l_N(\mathbf{u}_2) + \cdots + l_N(\mathbf{u}_M)]} \tag{1.1.34}$$
where each sum on both sides of the equation is over the entire space $\mathscr{U}_N$. If we let $A_k$ be the number of sequences of $M$ successive codewords having a total length of $k$ binary symbols, (1.1.34) can be expressed as
$$\left(\sum_{\mathbf{u}} 2^{-l_N(\mathbf{u})}\right)^M = \sum_{k=1}^{M l^*} A_k 2^{-k} \tag{1.1.35}$$
where $l^* = \max_{\mathbf{u}} l_N(\mathbf{u})$. But in order for the source sequences to be recoverable from the binary sequences we must have
$$A_k \le 2^k \qquad k = 1, 2, \ldots, M l^* \tag{1.1.36}$$
Otherwise two or more sequences of $M$ successive codewords will give the same binary sequence, violating our uniqueness requirement. Using this bound for $A_k$, we have
$$\left(\sum_{\mathbf{u}} 2^{-l_N(\mathbf{u})}\right)^M \le \sum_{k=1}^{M l^*} 1 = M l^* \tag{1.1.37}$$
for all integers $M$. Clearly this can be satisfied for all $M$ if and only if
$$\sum_{\mathbf{u}} 2^{-l_N(\mathbf{u})} \le 1 \tag{1.1.38}$$
for the left side of (1.1.37) behaves exponentially in $M$ while the right side grows only linearly with $M$. This inequality is known as the Kraft-McMillan inequality (Kraft [1949], McMillan [1956]).
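The Kraft-McMillan inequality is easy to check for the example codes given earlier (Python sketch; the helper name is ours):

```python
def kraft_sum(lengths):
    """Left side of the Kraft-McMillan inequality (1.1.38): sum of 2^(-l) over codewords."""
    return sum(2.0 ** (-l) for l in lengths)

# Codeword lengths of the earlier example codes
print(kraft_sum([1, 1, 2]))                      # Code 1: 1.25 > 1, so it cannot be uniquely decodable
print(kraft_sum([2, 2, 2]))                      # Code 2: 0.75 <= 1
print(kraft_sum([1, 2, 3]))                      # Code 3: 0.875 <= 1
print(kraft_sum([3, 3, 3, 3, 4, 4, 4, 4, 4]))    # Code 4: 0.8125 <= 1
```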

If we were now to use the general variable length source encoder whose code lengths must satisfy (1.1.38) we would have an average of
$$\langle L_N \rangle = \sum_{\mathbf{u}} P_N(\mathbf{u}) \, l_N(\mathbf{u}) \tag{1.1.39}$$
binary symbols per source sequence. Defining on $\mathbf{u} \in \mathscr{U}_N$ the distribution
$$Q_N(\mathbf{u}) = \frac{2^{-l_N(\mathbf{u})}}{\sum_{\mathbf{u}'} 2^{-l_N(\mathbf{u}')}} \tag{1.1.40}$$
we have from inequality (1.1.8) and (1.1.12)
$$NH(\mathscr{U}) = \sum_{\mathbf{u}} P_N(\mathbf{u}) \log \frac{1}{P_N(\mathbf{u})} \le \sum_{\mathbf{u}} P_N(\mathbf{u}) \log \frac{1}{Q_N(\mathbf{u})} = \sum_{\mathbf{u}} P_N(\mathbf{u}) \, l_N(\mathbf{u}) + \log \sum_{\mathbf{u}} 2^{-l_N(\mathbf{u})} \tag{1.1.41}$$
Since the Kraft-McMillan inequality (1.1.38) guarantees that the second term is not positive we have
$$NH(\mathscr{U}) \le \langle L_N \rangle \tag{1.1.42}$$
This bound applies for any sequence length $N$ and it follows that any source code for which the source sequence can be recovered from the binary sequence (uniquely decodable) requires at least an average of $H(\mathscr{U})$ bits per source symbol.

This completes the proof of Theorem 1.1.1 and we have thus shown that it is possible to source encode a DMS with an average number of binary symbols per source symbol arbitrarily close to its entropy and that it is impossible to have a lower average. This is a special case of the noiseless source coding theorem of information theory which applies for arbitrary discrete alphabet stationary ergodic sources and arbitrary finite code alphabets (see Prob. 1.3) and gives the notion of entropy its operational significance. If we were to relax the requirement that the source sequence be recoverable from the binary-code sequence and replaced it by some average distortion requirement, then of course, we could use fewer than $H(\mathscr{U})$ bits per source symbol. This generalization to source encoding with a distortion measure is called rate distortion theory. This theory, which was first presented by Shannon in 1948 and developed further by him in 1959, is the subject of Chap. 7 and Chap. 8.

Another important consequence of the theorem is the asymptotic equality of the probability of source sequences as $N$ becomes large. If we treat these sequences of length $N$ as messages to be transmitted, even without considering their efficient binary representation, we have shown that the "typical" messages are asymptotically equiprobable, a useful property in subsequent chapters where we treat means of accurately transmitting messages over noisy channels.
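The optimal variable-length construction of Huffman [1952] mentioned above is deferred to Prob. 1.6 rather than developed here. Purely as an illustration of the theorem's content for $N = 1$, the sketch below (Python; the distribution is an arbitrary example, and the standard fact that the single-letter Huffman average length lies below $H(\mathscr{U}) + 1$ is assumed rather than proved in this chapter) builds such a code, compares its average length with the entropy, and checks the Kraft-McMillan inequality for the resulting lengths.

```python
import heapq, itertools, math

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for the given probabilities
    (a standard construction, not developed in this chapter; see Prob. 1.6)."""
    counter = itertools.count()                  # tie-breaker so heapq never compares dicts
    heap = [(p, next(counter), {i: 0}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        merged = {i: l + 1 for i, l in {**d1, **d2}.items()}   # one more bit for every leaf below
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]                            # symbol index -> codeword length

P = [0.4, 0.2, 0.2, 0.1, 0.1]                    # arbitrary example distribution
lengths = huffman_lengths(P)
H = sum(p * math.log2(1 / p) for p in P)
avg = sum(P[i] * l for i, l in lengths.items())
print('H =', H, ' average length =', avg)        # H <= avg, consistent with (1.1.42)
print('Kraft sum =', sum(2.0 ** (-l) for l in lengths.values()))   # <= 1, as (1.1.38) requires
```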



1.2 MUTUAL INFORMATION AND CHANNEL CAPACITY 

Shannon demonstrated how information can be reliably transmitted over a noisy communication channel by considering first a measure of the amount of information about the transmitted message contained in the observed output of the channel. To do this he defined the notion of mutual information between events $\alpha$ and $\beta$, denoted $I(\alpha; \beta)$, which is the information provided about the event $\alpha$ by the occurrence of the event $\beta$. As before the probabilities $P(\alpha)$, $P(\beta)$, and $P(\alpha \cap \beta)$ are assumed as given parameters of the model. Clearly to be consistent with our previous definition of information we must have two boundary condition properties:

1. If $\alpha$ and $\beta$ are independent events ($P(\alpha \cap \beta) = P(\alpha)P(\beta)$), then the occurrence of $\beta$ would provide no information about $\alpha$. That is, $I(\alpha; \beta) = 0$.

2. If the occurrence of $\beta$ indicates that $\alpha$ has definitely occurred ($P(\alpha \,|\, \beta) = 1$), then the occurrence of $\beta$ provides us with all the information regarding $\alpha$. That is, $I(\alpha; \beta) = I(\alpha) = \log\,[1/P(\alpha)]$.

These two boundary condition properties are satisfied if the mutual information between events $\alpha$ and $\beta$ is defined as
$$I(\alpha; \beta) = \log \frac{P(\alpha \cap \beta)}{P(\alpha)P(\beta)} = \log \frac{P(\alpha \,|\, \beta)}{P(\alpha)} \tag{1.2.1}$$
Note that this definition is symmetric in the two events since $I(\alpha; \beta) = I(\beta; \alpha)$. Also mutual information is a generalization of the earlier definition of the information of an event $\alpha$ since $I(\alpha) = \log\,[1/P(\alpha)] = I(\alpha; \alpha)$. Hence $I(\alpha)$ is sometimes referred to as the self-information of the event $\alpha$. Note that although $I(\alpha)$ is always nonnegative, mutual information $I(\alpha; \beta)$ can assume negative values. For example, if $P(\alpha \,|\, \beta) < P(\alpha)$ then $I(\alpha; \beta) < 0$ and we see that observing $\beta$ makes $\alpha$ seem less likely than it was a priori before the observation.

We are primarily interested in the mutual information between inputs and outputs of a communication channel. Virtually all the channels treated throughout this book will be reduced to discrete-time channels which may be regarded as a random mapping of the random variable $x_n$, the channel input, to the variable $y_n$, the channel output, at integer-valued time $n$. Generally these random variables will be either discrete random variables or absolutely continuous random variables. While only the former usually apply to practical systems, the latter also merit consideration in that they represent the limiting case of the discrete model. We start with discrete channels where the input and output random variables are discrete random variables. Generalizations to continuous random variables or a combination of a discrete input random variable and a continuous output random variable is usually trivial and requires simply changing probability distributions to probability densities and summations to integrals. In Chap. 2 we shall see how these various channels appear in practice when we have additive white Gaussian noise disturbance in the channel. Here we begin by formally defining discrete memoryless channels.

Definition A discrete memoryless channel (DMC) is characterized by a discrete input alphabet $\mathscr{X}$, a discrete output alphabet $\mathscr{Y}$, and a set of conditional probabilities for outputs given each of the inputs. We denote the given conditional probabilities⁶ by $p(y \,|\, x)$ for $y \in \mathscr{Y}$ and $x \in \mathscr{X}$. Each output letter of the channel depends only on the corresponding input so that for an input sequence of length $N$, denoted $\mathbf{x} = (x_1, x_2, \ldots, x_N)$, the conditional probability of a corresponding output sequence, denoted $\mathbf{y} = (y_1, y_2, \ldots, y_N)$, may be expressed as⁷
$$p_N(\mathbf{y} \,|\, \mathbf{x}) = \prod_{n=1}^{N} p(y_n \,|\, x_n) \tag{1.2.2}$$
This is the memoryless condition of the definition. We define next the most common type of DMC.

⁶ Throughout the book we use lowercase letters for both probability distributions and probability densities associated with channel input and output random variables.

⁷ This definition is appropriate when and only when feedback is excluded; that is, when the transmitter has no knowledge of what was received. In general, we would require $p(y_n \,|\, x_1, \ldots, x_n, y_1, \ldots, y_{n-1}) = p(y_n \,|\, x_n)$ for all $n$.

Definition A binary symmetric channel (BSC) is a DMC with $\mathscr{X} = \mathscr{Y} = \{0, 1\}$ and conditional probabilities of the form
$$p(0 \,|\, 0) = p(1 \,|\, 1) = 1 - p \qquad p(1 \,|\, 0) = p(0 \,|\, 1) = p \tag{1.2.3}$$
This is represented by the diagram of Fig. 1.4.

[Figure 1.4 Binary symmetric channel: each input passes to the same output with probability $1 - p$ and to the opposite output with probability $p$.]

We can easily generalize our definition of DMC to channels with alphabets 
that are not discrete. A common example is the additive Gaussian noise channel 
which we define next. 

Definition The memoryless discrete-input additive Gaussian noise channel is a 
memoryless channel with discrete input alphabet 3C = [a l9 a 2 , . . . , a Q ], output 
alphabet ^ = ( oo, oo) and conditional probability density 



p(y\a k ) = =e-o-"* 12 * 2 for all v e # (1.2.4) 



where k = 1, 2, ..., Q. 

This is represented by the diagram of Fig. 1.5 where n is a Gaussian random 
variable with zero mean and variance a 2 . For this case, memoryless again means 



⁷ This definition is appropriate when and only when feedback is excluded; that is, when the
transmitter has no knowledge of what was received. In general, we would require
p(y_n | x_1, ..., x_n, y_1, ..., y_{n−1}) = p(y_n | x_n) for all n.










Figure 1.5 Additive Gaussian noise channel. 



that for any input sequence x of length N and any corresponding output sequence 
y we have 

    p_N(y | x) = ∏_{n=1}^{N} p(y_n | x_n)    (1.2.2)

for all N. These and other channels will be discussed further in Chap. 2. In this 
chapter we examine only discrete memoryless channels. 

Consider a DMC with input alphabet 𝒳, output alphabet 𝒴, and conditional
probabilities p(y | x) for y ∈ 𝒴, x ∈ 𝒳. Suppose, in addition, that input letters
occur with probability q(x) for x ∈ 𝒳. We can then regard the input to the channel
as a random variable and the output as a random variable. If we observe the 
output y then the amount of information this provides about the input x is the 
mutual information 

    I(x; y) = log [p(y | x)/p(y)]    where p(y) = ∑_x p(y | x) q(x)    (1.2.6)

As with sources, we are primarily interested in the average amount of information 
that the output of the channel provides about the input. Thus we define the 
average mutual information between inputs and outputs of the DMC as 8 

    I(𝒳; 𝒴) = E[I(x; y)] = ∑_y ∑_x q(x) p(y | x) log [p(y | x)/p(y)]    (1.2.7)



The average mutual information I(𝒳; 𝒴) is defined in terms of the given channel
conditional probabilities and the input probability distribution, which is independent of the



8 Actually the definition is not restricted to channel inputs and outputs. It is the appropriate 
definition for the average mutual information between an arbitrary pair of random variables. For 
absolutely continuous random variables we replace summations and probabilities by integrals and 
density functions. 




DMC. We can then maximize I(𝒳; 𝒴) with respect to the input probability
distribution q = {q(x) : x ∈ 𝒳}.

Definition The channel capacity of a DMC is the maximum average mutual 
information, where the maximization is over all possible input probability 
distributions. That is, 

    C = max_q I(𝒳; 𝒴)    (1.2.8)

Example (BSC) By symmetry the capacity for the BSC with crossover probability p, as shown in
Fig. 1.4, is achieved with channel input probability q(0) = q(1) = 1/2. Hence

    C = I(𝒳; 𝒴) |_{q(0) = q(1) = 1/2}
      = 1 − p log (1/p) − (1 − p) log (1/(1 − p))
      = 1 − ℋ(p)  bits/symbol

As expected, when p = 1/2 we have ℋ(1/2) = 1 and C = 0. With p = 0 we get ℋ(0) = 0 and C = 1 bit,
which is exactly the information in each channel input. Note that we also have C = 1 when p = 1
since from the output symbol, which is the complement of the input binary symbol, we can
uniquely determine the input symbol. By extending this argument, it follows that
C(p) = C(1 − p).
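The closed form above is easy to evaluate numerically. As a quick sanity check of C = 1 − ℋ(p) and of the symmetry C(p) = C(1 − p), the following short Python sketch (illustrative only; not from the text) computes the binary entropy function and the BSC capacity in bits:

    import math

    def binary_entropy(p):
        # H(p) = p log2(1/p) + (1 - p) log2(1/(1 - p)), with H(0) = H(1) = 0
        if p == 0.0 or p == 1.0:
            return 0.0
        return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

    def bsc_capacity(p):
        # Capacity of the binary symmetric channel with crossover probability p
        return 1.0 - binary_entropy(p)

    for p in (0.0, 0.11, 0.5, 0.89, 1.0):
        print(p, bsc_capacity(p))   # C(0) = C(1) = 1, C(0.5) = 0, and C(p) = C(1 - p)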

Note that channel capacity is defined only in terms of given channel charac 
teristics. Even though these are assumed given, performing the maximization to 
find channel capacity is generally difficult. Maximization or minimization of func 
tions over probability distributions can often be evaluated with the aid of the 
Kuhn-Tucker theorem (see App. 3B). In Chap. 3 we shall find necessary and 
sufficient conditions on the input probability assignment that achieves capacity as 
well as for the maximization of other functions that arise in the analysis of digital 
communication systems. (In App. 3C we also give a simple computational algor 
ithm for evaluating capacity.) We shall see that, like the entropy parameter for a 
source, the capacity for a channel has operational significance, related directly to 
limitations on the reliable transmission of information through the channel. First, 
however, we examine some properties of average mutual information, which will 
be useful later. 
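A standard computational procedure of the kind just mentioned is the Arimoto-Blahut alternating maximization (whether App. 3C presents exactly this variant is not shown here). As a minimal Python sketch — with the channel assumed to be given as a matrix P[y, x] of transition probabilities; all names here are illustrative, not from the text — the iteration looks like this:

    import numpy as np

    def channel_capacity(P, iters=200):
        # Arimoto-Blahut iteration for a DMC with P[y, x] = p(y|x).
        # Returns capacity in bits and the maximizing input distribution q(x).
        ny, nx = P.shape
        q = np.full(nx, 1.0 / nx)            # start from the uniform input distribution
        for _ in range(iters):
            p_y = P @ q                       # output distribution p(y) = sum_x p(y|x) q(x)
            with np.errstate(divide="ignore", invalid="ignore"):
                ratio = np.where(P > 0, P / p_y[:, None], 1.0)
                D = np.sum(P * np.log(ratio), axis=0)   # D(x) = sum_y p(y|x) ln [p(y|x)/p(y)]
            q = q * np.exp(D)
            q /= q.sum()                      # renormalize the input distribution
        p_y = P @ q
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(P > 0, P / p_y[:, None], 1.0)
        I = np.sum(q * np.sum(P * np.log2(ratio), axis=0))
        return I, q

    # Example: BSC with p = 0.11 -> capacity approximately 1 - H(0.11)
    P = np.array([[0.89, 0.11],
                  [0.11, 0.89]])
    print(channel_capacity(P)[0])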

Lemma 1.2.1 

    0 ≤ I(𝒳; 𝒴) ≤ ∑_y ∑_x p(y | x) q(x) log [p(y | x)/p̃(y)]    (1.2.9)

where p̃(·) is any probability distribution. Equality is achieved in the upper
bound if and only if p̃(y) = p(y) = ∑_x q(x) p(y | x) for all y ∈ 𝒴. I(𝒳; 𝒴) = 0 if
and only if the output random variable is independent of the input random
variable.




PROOF⁹ The lower bound is found by using the inequality ln x ≤ x − 1 as
follows:

    −I(𝒳; 𝒴) = ∑_y ∑_x q(x) p(y | x) log [p(y)/p(y | x)]
             ≤ (ln 2)⁻¹ ∑_y ∑_x q(x) p(y | x) [p(y)/p(y | x) − 1]
             = (ln 2)⁻¹ [ ∑_y ∑_x p(y) q(x) − ∑_y ∑_x p(y | x) q(x) ]
             = 0    (1.2.10)

with equality to zero if and only if p(y | x) = p(y) for all y and x.
    The upper bound to I(𝒳; 𝒴) follows from the form

    I(𝒳; 𝒴) = ∑_y ∑_x p(y | x) q(x) log [p(y | x)/p̃(y)] + ∑_y p(y) log [p̃(y)/p(y)]    (1.2.11)

It follows from (1.1.8) that

    ∑_y p(y) log [p̃(y)/p(y)] ≤ 0

with equality if and only if p̃(y) = p(y) for all y ∈ 𝒴. Substituting this inequality
into (1.2.11) yields the desired result.

Consider a sequence of input random variables of length N denoted
x = (x_1, x_2, ..., x_N). Let the probability of the input sequence be given by q_N(x)
for x ∈ 𝒳^N and let the resulting marginal probability of x_n be q^(n)(x) for x ∈ 𝒳
where n = 1, 2, ..., N. That is,

    q^(n)(x) = ∑_{x_1} ⋯ ∑_{x_{n−1}} ∑_{x_{n+1}} ⋯ ∑_{x_N} q_N(x_1, ..., x_{n−1}, x, x_{n+1}, ..., x_N)

for each n. The average mutual information between input sequences of length N
and the corresponding output sequences of length N is

    I(𝒳^N; 𝒴^N) = ∑_y ∑_x p_N(y | x) q_N(x) log [p_N(y | x)/p_N(y)]    (1.2.12)

    where p_N(y) = ∑_x p_N(y | x) q_N(x)

9 Although the properties given here hold for any logarithm base we shall prove properties for base 
2. Generalization to any base is trivial. 




Since the channel is memoryless, the average mutual information between x n and 
the corresponding output y n is 



    I^(n)(𝒳; 𝒴) = ∑_y ∑_x p(y | x) q^(n)(x) log [p(y | x)/p^(n)(y)]    (1.2.13)

where p^(n)(y) = ∑_x p(y | x) q^(n)(x) and n = 1, 2, ..., N. We then have
Lemma 1.2.2 

    I(𝒳^N; 𝒴^N) ≤ ∑_{n=1}^{N} I^(n)(𝒳; 𝒴) ≤ NC    (1.2.14)

where equality is achieved in the lower inequality when (but not only when)
x_1, x_2, ..., x_N are independent random variables, and in the upper inequality
when and only when each independent input random variable has the probability
distribution that achieves channel capacity.

PROOF From Lemma 1.2.1 we have

    I(𝒳^N; 𝒴^N) ≤ ∑_y ∑_x p_N(y | x) q_N(x) log [p_N(y | x)/p̃_N(y)]    (1.2.15)

for any probability distribution p̃_N(·). Now choose

    p̃_N(y) = ∏_{n=1}^{N} p^(n)(y_n)    (1.2.16)

Then since

    p_N(y | x)/p̃_N(y) = ∏_{n=1}^{N} [p(y_n | x_n)/p^(n)(y_n)]    (1.2.17)

we have

    I(𝒳^N; 𝒴^N) ≤ ∑_{n=1}^{N} ∑_y ∑_x p(y | x) q^(n)(x) log [p(y | x)/p^(n)(y)]
               = ∑_{n=1}^{N} I^(n)(𝒳; 𝒴)    (1.2.18)

with equality if and only if p_N(y) = p̃_N(y) for all y ∈ 𝒴^N. Equality is thus
achieved when the output random variables y_1, y_2, ..., y_N are independent.
Since the channel is memoryless, this certainly happens if the input random
variables x_1, x_2, ..., x_N are independent. The upper inequality follows trivially
since I^(n)(𝒳; 𝒴) ≤ C with equality, according to (1.2.8), when and only when
the input probability distribution q^(n)(·) achieves the maximum average
mutual information.






Figure 1.6 Cascade of channels (x → DMC 1 → y → DMC 2 → v).



Lemma 1.2.3 Consider three random variables x, y, v which have joint probability
p(x, y, v) for x ∈ 𝒳, y ∈ 𝒴, v ∈ 𝒱. Let average mutual information be
defined for each pair of random variables, I(𝒳; 𝒴), I(𝒳; 𝒱), and I(𝒴; 𝒱).
Assume, as shown in Fig. 1.6, that x is an input random variable to one
channel with output y which in turn becomes the input to a second channel
with output random variable v. Assume further that, as implied by Fig. 1.6,

    p(v | x, y) = p(v | y)    (1.2.19)

which means that x influences v only through y. Then I(𝒳; 𝒱), I(𝒴; 𝒱), and
I(𝒳; 𝒴) are related by the inequalities

    I(𝒳; 𝒱) ≤ I(𝒴; 𝒱)    (1.2.20)

and

    I(𝒳; 𝒱) ≤ I(𝒳; 𝒴)    (1.2.21)

PROOF

    I(𝒳; 𝒱) − I(𝒳; 𝒴) = ∑_v ∑_x p(x, v) log [p(v | x)/p(v)] − ∑_y ∑_x p(x, y) log [p(y | x)/p(y)]
                      = ∑_v ∑_y ∑_x p(x, y, v) log [p(v | x) p(y)/(p(v) p(y | x))]
                      ≤ (ln 2)⁻¹ ∑_v ∑_y ∑_x p(x, y, v) [p(v | x) p(y)/(p(v) p(y | x)) − 1]    (1.2.22)

where we have again used ln x ≤ x − 1. Note further that by Bayes' rule

    p(x, y, v) p(v | x) p(y)/[p(v) p(y | x)] = p(x, y, v) p(x, v) p(y)/[p(v) p(x, y)]
                                            = p(v | x, y) p(x | v) p(y)
                                            = p(v | y) p(x | v) p(y)    (1.2.23)






Figure 1.7 Data processing system (u → encoder → x → DMC → y → decoder → v).

where in the last equality we used the hypothesis (1.2.19). Hence, combining
(1.2.22) and (1.2.23),

    I(𝒳; 𝒱) − I(𝒳; 𝒴) ≤ (ln 2)⁻¹ ∑_v ∑_y ∑_x [p(v | y) p(x | v) p(y) − p(x, y, v)]
                      = (ln 2)⁻¹ [1 − 1]
                      = 0    (1.2.24)

The second inequality follows from a similar argument.



Lemma 1.2.3 can be generalized easily to various length sequences in a
cascade of devices. A special case of the second DMC in Fig. 1.6 is a deterministic
device that maps input y into output v deterministically. Next consider Fig. 1.7
where we assume that u is a sequence of length L of random variables with
probability p_L(u) for u ∈ 𝒰^L which generates the inputs to a deterministic device
called an encoder whose output sequence x is of length N. The sequence x is then
the input to the DMC for which, by definition,



    p_N(y | x) = ∏_{n=1}^{N} p(y_n | x_n)    for y ∈ 𝒴^N and x ∈ 𝒳^N



Finally y is the input to a deterministic device called a decoder whose output is v, a
sequence of length L. The encoder can be assumed to operate on the entire L-length
sequence u to generate the N-length output sequence x. Similarly the
decoder can be assumed to operate on the entire N-length sequence y to output
the L-length sequence v. Regarding sequences as single inputs and outputs we have
from Lemma 1.2.3 the inequalities

    I(𝒳^N; 𝒱^L) ≤ I(𝒳^N; 𝒴^N)    (1.2.25)



and

    I(𝒰^L; 𝒱^L) ≤ I(𝒳^N; 𝒱^L)    (1.2.26)



Combining these we obtain the data-processing theorem: 

Theorem 1.2.1: Data-processing theorem For the system of Fig. 1.7 

    I(𝒰^L; 𝒱^L) ≤ I(𝒳^N; 𝒴^N)    (1.2.27)

This result assumes that each sequence influences subsequent sequences 
as shown in Fig. 1.7. That is u influences v only through x, which in turn 




influences v only through y, so that p_L(v | u, x, y) = p_L(v | y) where y ∈ 𝒴^N and
v ∈ 𝒱^L. Also, from Lemma 1.2.2, we obtain the result that for the system of
Fig. 1.7

    I(𝒰^L; 𝒱^L) ≤ NC    (1.2.28)

where C is the channel capacity of the DMC.

The above properties of average mutual information follow easily from simple 
inequalities and definitions. Even though mutual information can be negative- 
valued, the average mutual information cannot be negative. Furthermore, the 
average mutual information between outputs and inputs of a DMC is nonnegative 
and becomes zero only when the outputs are independent of the inputs. Thus it is 
not surprising to find that by cascading more devices between inputs and outputs 
the average mutual information decreases, for the insertion of each additional 
device weakens the dependence between input and output. Other properties of 
average mutual information are given in App. 1A. Although these properties of 
average mutual information are discussed in terms of " channels " they apply to 
more general situations. For example, the "data-processing theorem" applies 
even when the encoder, channel, and decoder in Fig. 1.7 are replaced by arbitrary 
"data processors." 

To show the significance of the definition of mutual information, average 
mutual information, and channel capacity, we examine the problem of sending the 
outputs of a source over a communication channel. We shall show that if the 
entropy of the source is greater than the capacity of the channel, then the com 
munication system cannot operate with arbitrarily small error no matter how 
complex the coding system. This negative result is called the converse to the 
coding theorem. 



1.3 THE CONVERSE TO THE CODING THEOREM 

Let us now examine the problem of sending the outputs of a discrete memoryless 
source (DMS) to a destination through a communication channel modeled as a 
discrete memoryless channel (DMC). Specifically, consider the block diagram of 
Fig. 1.8 where the DMS alphabet is 𝒰 = {a_1, a_2, ..., a_A}, with probability distribution
P(·) and entropy H(𝒰). We assume that source outputs occur once every T_s
seconds so that the DMS average information output rate is H(𝒰)/T_s bits per
second, when H(𝒰) is measured in bits per output. The destination accepts letters
belonging to the same alphabet, 𝒱 = 𝒰, at the same source rate of one symbol
every T_s seconds.

The DMC has input alphabet 𝒳, output alphabet 𝒴, and conditional probabilities
p(y | x) for y ∈ 𝒴, x ∈ 𝒳. It also has a channel capacity of C bits per channel
use, when mutual information is measured in bits. We assume that the channel is
used once every T_c seconds.






Figure 1.8 A communication system: DMS → encoder → DMC → decoder → destination.



We are now dealing with a DMS that outputs a symbol once every T_s seconds
and a DMC that can be used once every T_c seconds. Without compromising
notation, we can continue to label source outputs and channel inputs with integer
indices. We merely adopt the convention that the source output u_l occurs at time
lT_s and x_n is the channel input at time nT_c + T_d where T_d is the encoding delay.

We assume that the DMS and DMC are given and are not under our control. 
The encoder and decoder, on the other hand, can be designed in any way we 
please. In particular, the encoder takes source symbols and outputs channel input 
symbols while the decoder takes channel output symbols and outputs symbols 
belonging to the source alphabet 𝒱 = 𝒰. Suppose now we wish to send to the
destination L source output symbols, u. The encoder then sends N channel input 
symbols, x, over the channel where we assume that 



    L T_s = N T_c    (1.3.1)



Each channel input symbol can depend on the L source symbols, u, in any way 
desired. Similarly the decoder takes the N channel output symbols y, and outputs 
a sequence of L destination symbols, v. Again each destination symbol can depend 
on the N channel output symbols, y, in any way desired. The channel is mem- 
oryless so that for each time nT c + T d the channel output symbol y n depends only 
on the corresponding channel input symbol x n . 

In any communication system of this type we would like to achieve very small 
error probabilities. In particular, we are interested in the probability of error for 
each source letter, as defined by

    P_{e,l} = ∑_u ∑_{v ≠ u} P^(l)(u, v)    (1.3.2)

for l = 1, 2, ..., L. Here P^(l)(u, v) is the joint probability distribution of u_l and v_l.
P_{e,l} is the probability that the lth source output u_l is decoded incorrectly by the




destination. The average per digit error probability, ⟨P_e⟩, over the L source
outputs is defined as

    ⟨P_e⟩ = (1/L) ∑_{l=1}^{L} P_{e,l}    (1.3.3)



For most digital communication systems ⟨P_e⟩ is the appropriate performance
criterion for evaluating the system. If ⟨P_e⟩ can be made arbitrarily small we have
a reliable communication system. We proceed to show that if the source entropy is
greater than channel capacity, reliable communication is impossible. This result is 
known as the converse to the coding theorem. 

We begin by considering the difference between the entropy of the source
sequence, H(𝒰^L), and the average mutual information between the source sequence
and the destination sequence, I(𝒰^L; 𝒱^L). From the definitions and Bayes'
rule, it follows that

    H(𝒰^L) − I(𝒰^L; 𝒱^L) = ∑_u P_L(u) log [1/P_L(u)] − ∑_u ∑_v P_L(u, v) log [P_L(u | v)/P_L(u)]
                         = ∑_u ∑_v P_L(u, v) log [1/P_L(u | v)]    (1.3.4)

Next we apply the inequality (1.1.8) to get the bound

    H(𝒰^L) − I(𝒰^L; 𝒱^L) ≤ ∑_u ∑_v P_L(u, v) log [1/P̃_L(u | v)]    (1.3.5)

for any conditional probability P̃_L(u | v). Let us now choose



1=1 



where 



(1.3.6) 



and 



= Z 



(1-3-8) 




This choice in (1.3.4) and (1.3.5) yields the bound

    H(𝒰^L) − I(𝒰^L; 𝒱^L) ≤ ∑_{l=1}^{L} { ∑_u ∑_{v≠u} P^(l)(u, v) log [1/P^(l)(u | v)]
                                        + ∑_u P^(l)(u, u) log [1/P^(l)(u | u)] }    (1.3.9)



We now consider the two parts in this bound separately, using again the fundamental
inequality ln x ≤ x − 1 and the relationship

    P_{e,l} = ∑_u ∑_{v≠u} P^(l)(u, v)    (1.3.2)

from which it readily follows that

    1 − P_{e,l} = ∑_u P^(l)(u, u)    (1.3.10)



We bound the first term in the brace in (1.3.9) as follows, using (1.3.2):

    ∑_u ∑_{v≠u} P^(l)(u, v) log [1/P^(l)(u | v)] − P_{e,l} log [(A − 1)/P_{e,l}]
        = ∑_u ∑_{v≠u} P^(l)(u, v) log [P_{e,l}/((A − 1) P^(l)(u | v))]
        ≤ (ln 2)⁻¹ ∑_u ∑_{v≠u} P^(l)(u, v) [P_{e,l} P^(l)(v)/((A − 1) P^(l)(u, v)) − 1]
        = (ln 2)⁻¹ [(P_{e,l}/(A − 1))(A − 1) − P_{e,l}]
        = 0

so that

    ∑_u ∑_{v≠u} P^(l)(u, v) log [1/P^(l)(u | v)] ≤ P_{e,l} log [(A − 1)/P_{e,l}]    (1.3.11)



The second term is bounded in a similar manner, using (1.3.10):

    ∑_u P^(l)(u, u) log [1/P^(l)(u | u)] − (1 − P_{e,l}) log [1/(1 − P_{e,l})]
        = ∑_u P^(l)(u, u) log [(1 − P_{e,l})/P^(l)(u | u)]
        ≤ (ln 2)⁻¹ ∑_u P^(l)(u, u) [(1 − P_{e,l}) P^(l)(v = u)/P^(l)(u, u) − 1]
        = (ln 2)⁻¹ [(1 − P_{e,l}) − (1 − P_{e,l})]
        = 0

so that

    ∑_u P^(l)(u, u) log [1/P^(l)(u | u)] ≤ (1 − P_{e,l}) log [1/(1 − P_{e,l})]    (1.3.12)




Recalling from Sec. 1.1 the definition of the binary entropy function

    ℋ(p) = p log (1/p) + (1 − p) log (1/(1 − p))    (1.3.13)

and the definition (1.3.3) of ⟨P_e⟩, and using the bounds (1.3.11) and (1.3.12) in
(1.3.9), we obtain

    H(𝒰^L) − I(𝒰^L; 𝒱^L) ≤ ∑_{l=1}^{L} [ P_{e,l} log ((A − 1)/P_{e,l}) + (1 − P_{e,l}) log (1/(1 − P_{e,l})) ]
                          = L⟨P_e⟩ log (A − 1) + ∑_{l=1}^{L} ℋ(P_{e,l})    (1.3.14)

The next-to-final form of the desired inequality follows from the observation that
from (1.1.8) we have

    P_{e,l} log (1/P_{e,l}) + (1 − P_{e,l}) log (1/(1 − P_{e,l}))
        ≤ P_{e,l} log (1/⟨P_e⟩) + (1 − P_{e,l}) log (1/(1 − ⟨P_e⟩))    (1.3.15)

so that

    ∑_{l=1}^{L} ℋ(P_{e,l}) ≤ ∑_{l=1}^{L} [ P_{e,l} log (1/⟨P_e⟩) + (1 − P_{e,l}) log (1/(1 − ⟨P_e⟩)) ]
                          = L ℋ(⟨P_e⟩)    (1.3.16)

Hence

    H(𝒰^L) − I(𝒰^L; 𝒱^L) ≤ L⟨P_e⟩ log (A − 1) + L ℋ(⟨P_e⟩)    (1.3.17)

Since the source is memoryless, from (1.1.12) we have

    H(𝒰^L) = L H(𝒰)    (1.3.18)

Furthermore, Theorem 1.2.1, Lemma 1.2.2, and (1.3.1) give us

    I(𝒰^L; 𝒱^L) ≤ I(𝒳^N; 𝒴^N) ≤ NC = (T_s/T_c) L C    (1.3.19)

Using (1.3.18) and (1.3.19) in (1.3.17) yields the desired bound

    H(𝒰) − (T_s/T_c) C ≤ ⟨P_e⟩ log (A − 1) + ℋ(⟨P_e⟩)    (1.3.20)

For convenience in using the upper bound of (1.3.20), we define

    F(⟨P_e⟩) = ⟨P_e⟩ log (A − 1) + ℋ(⟨P_e⟩)






Figure 1.9 F(⟨P_e⟩) = ⟨P_e⟩ log (A − 1) + ℋ(⟨P_e⟩). (Vertical-axis values log A and log (A − 1) are marked on the curve.)



According to (1.3.20), if the source entropy of H(𝒰)/T_s bits per second is
greater than the channel capacity of C/T_c bits per second, then F(⟨P_e⟩) =
⟨P_e⟩ log (A − 1) + ℋ(⟨P_e⟩) is greater than the constant β = H(𝒰) −
(T_s/T_c)C > 0. Figure 1.9 shows F(⟨P_e⟩) as a function of ⟨P_e⟩. From this it is clear
that if β > 0 then there exists some α > 0 such that ⟨P_e⟩ ≥ α. Note that this holds
regardless of the source sequence length L, and hence yields the following form of
the converse theorem due to Fano [1952].

Theorem 1.3.1 (Converse to the Coding Theorem) If the entropy per second,
H(𝒰)/T_s, of the source is greater than the channel capacity per second, C/T_c,
then there exists a constant α > 0 such that ⟨P_e⟩ ≥ α for all sequence lengths.
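The constant α can be evaluated numerically from (1.3.20): it is the smallest ⟨P_e⟩ at which F(⟨P_e⟩) reaches β = H(𝒰) − (T_s/T_c)C. A small Python sketch (illustrative only; the function and variable names are not from the text) that finds this error floor by bisection on the rising part of F:

    import math

    def binary_entropy(p):
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

    def fano_error_floor(H_source, A, C, Ts_over_Tc):
        # Smallest <P_e> permitted by (1.3.20) when H(U) > (Ts/Tc) C; zero otherwise
        beta = H_source - Ts_over_Tc * C
        if beta <= 0.0:
            return 0.0
        F = lambda p: p * math.log2(A - 1) + binary_entropy(p) if A > 2 else binary_entropy(p)
        lo, hi = 0.0, (A - 1) / A        # F is increasing on [0, (A-1)/A]
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if F(mid) < beta:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Example: binary source (A = 2) with H(U) = 1 bit, BSC of capacity 0.5 bit,
    # one channel use per source symbol: every code must suffer <P_e> >= about 0.11
    print(fano_error_floor(1.0, 2, 0.5, 1.0))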

The converse to the coding theorem shows that it is impossible for a commun 
ication system to operate with arbitrarily small average error probability when the 
information rate of the source is greater than channel capacity. We shall see in 
subsequent chapters that if the information rate is less than channel capacity, then 
there are ways to achieve arbitrarily small average error probability. These results 
give the concepts of mutual information and particularly channel capacity their 
operational significance. 



1.4 SUMMARY AND BIBLIOGRAPHICAL NOTES 

In this introductory chapter we have presented the basic concepts of information 
and its more general form, mutual information. We have shown that for a discrete 
memoryless source the average amount of information per source output, called 
entropy, represents the theoretical limit on the minimum average number of 
binary symbols per source output necessary to represent source-output sequences. 
This result generalizes to discrete stationary ergodic sources (see Prob. 1.3) and 
more general code alphabets. Next we defined discrete memoryless channels 




which serve as models for many real noisy communication channels. The maxi 
mum average mutual information of a discrete memoryless channel, called chan 
nel capacity, represents the theoretical limit on the rate of information that can be 
reliably transmitted over the channel. In this introductory chapter we have proved 
for discrete memoryless channels the negative part of this result, commonly called 
the converse to the coding theorem. This result generalizes easily to all memoryless 
channels. 

The theoretical foundations of digital communication were laid by C. E. 
Shannon [1948]. Most of the concepts of this chapter are found in greater gener 
ality in this original work. Other similar treatments can be found in Fano [1961],
Abramson [1963], Gallager [1968], and Jelinek [1968a]. The books of Feinstein
[1958], Wolfowitz [1961], and Ash [1965] may appeal to those who prefer mathematics
to engineering applications.



APPENDIX 1A CONVEX FUNCTIONS 



In this chapter we defined two fundamental parameters of information theory: 
H(𝒰), the entropy of an information source, and I(𝒳; 𝒴), the average mutual
information between the inputs and outputs of a communication channel. These
are two examples of a more general class of functions which have the property 
known as convexity. In this section we briefly examine convex functions and some 
of their properties. These results will be useful throughout the rest of this book. 

Definition A real-valued function f(·) of a real number is defined to be
convex ∩ over an interval 𝒥 if, for all x_1 ∈ 𝒥, x_2 ∈ 𝒥, and θ, 0 < θ < 1, the
function satisfies

    θf(x_1) + (1 − θ)f(x_2) ≤ f[θx_1 + (1 − θ)x_2]    (1A.1)

If the inequality in (1A.1) is reversed for all such x_1, x_2, and θ, then f(·) is called
convex ∪. When (1A.1) or its converse is a strict inequality whenever x_1 ≠ x_2,
then we call f(·) strictly convex ∩ or strictly convex ∪.

In Fig. 1A.1 we sketch a typical convex ∩ function for fixed x_1 and x_2 as
a function of θ. From this it is clear why the ∩ (cap) notation is used here.¹⁰
Similar comments apply to convex ∪ (cup) functions. In fact, since a convex ∪
function is the negative of a convex ∩ function, we need only examine the properties
of convex ∩ functions. Commonly encountered convex ∩ functions are
ln x and x^p (0 < p < 1) for the interval 𝒥 = (0, ∞). Convex ∪ functions include



¹⁰ In the mathematical literature a convex ∩ function is called concave and a convex ∪ function
convex. Gallager [1968] introduced the notation used here to avoid the usual confusion associated with
the names concave and convex.






Figure 1A.1 A convex ∩ function, with f[θx_1 + (1 − θ)x_2] lying above the chord θf(x_1) + (1 − θ)f(x_2).



−ln x and x^p (p > 1) for 𝒥 = (0, ∞). Functions that are both convex ∪ and
convex ∩ are linear functions of the form ax + b.

Sometimes with more complex functions it is difficult to tell whether or not a
function is convex ∩. A useful test is given next.

Lemma 1A.1 Suppose f(·) is a real-valued function with derivatives f′(·) and
f″(·) defined on an interval 𝒥. Then f(·) is a convex ∩ function over interval
𝒥 if and only if

    f″(x) ≤ 0    for all x ∈ 𝒥    (1A.2)



PROOF Let x_1, x_2, and y be any set of points in 𝒥. Integrating f″(·) twice,
we have

    ∫_y^{x_1} ∫_y^{α} f″(β) dβ dα = f(x_1) − f(y) − f′(y)[x_1 − y]    (1A.3)

and

    ∫_y^{x_2} ∫_y^{α} f″(β) dβ dα = f(x_2) − f(y) − f′(y)[x_2 − y]    (1A.4)

For any θ ∈ (0, 1) we combine these equations to obtain

    θf(x_1) + (1 − θ)f(x_2) − f(y) − f′(y)[θx_1 + (1 − θ)x_2 − y]
        = θ ∫_y^{x_1} ∫_y^{α} f″(β) dβ dα + (1 − θ) ∫_y^{x_2} ∫_y^{α} f″(β) dβ dα    (1A.5)

Now choosing y = θx_1 + (1 − θ)x_2 we see from (1A.5) that

    θf(x_1) + (1 − θ)f(x_2) ≤ f[θx_1 + (1 − θ)x_2]

for all x_1 and x_2 in 𝒥 and θ ∈ (0, 1) if and only if (1A.2) is true.

We proceed to define convex functions of several variables, but first we need
to define a convex region in a real vector space. Let ℛ^N be the set of N-dimensional
real vectors. We define a region 𝒥_N ⊂ ℛ^N to be a convex region if for each vector
x_1 ∈ 𝒥_N and each vector x_2 ∈ 𝒥_N, the vector θx_1 + (1 − θ)x_2 is in 𝒥_N for all
θ ∈ (0, 1). This means that for a convex region all points on the line segment connecting
any two points in the region also belong to the region. The convex region most often
encountered in this book is 𝒫_N, the set of probability vectors. Formally,

    𝒫_N = { x : x_n ≥ 0, n = 1, 2, ..., N; ∑_{n=1}^{N} x_n = 1 }    (1A.6)

Definition A real-valued function f(·) of vectors of dimension N is defined to
be convex ∩ over a convex region 𝒥_N if, for all x_1 ∈ 𝒥_N, x_2 ∈ 𝒥_N, and θ,
0 < θ < 1, the function satisfies

    θf(x_1) + (1 − θ)f(x_2) ≤ f[θx_1 + (1 − θ)x_2]    (1A.7)

If we have a strict inequality whenever x_1 ≠ x_2, then f(·) is called strictly
convex ∩. The function is convex ∪ if the inequality is reversed.

For convex ∩ functions of vectors we have two important properties:

1. If f_1(x), f_2(x), ..., f_L(x) are convex ∩ functions and if c_1, c_2, ..., c_L are positive
numbers, then

    f(x) = ∑_{l=1}^{L} c_l f_l(x)

is convex ∩, with strict convexity if any of the {f_l(x)} are strictly convex ∩. This
follows immediately from the definition given in (1A.7).

2. Let x be a random vector of dimension N and let f(x) be any convex ∩ function
of vectors of dimension N. Then

    E[f(x)] ≤ f(E[x])

where E[·] is the expectation. This very useful inequality, known as the
Jensen inequality, is proved in App. 1B.


The entropy function

    H(x) = ∑_{n=1}^{N} x_n ln (1/x_n)    (1A.10)

is a convex ∩ function over 𝒫_N defined by (1A.6). To see this let

    f_n(x) = x_n ln (1/x_n)    for n = 1, 2, ..., N

By using Lemma 1A.1 we see that each f_n(x) is convex ∩. Then by property 1 we
have that H(x) = ∑_{n=1}^{N} f_n(x) is also convex ∩. Another proof can be obtained
directly from inequality (1.1.8). (See Prob. 1.12.)
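These convexity claims are easy to spot-check numerically. The following Python fragment (an illustrative check, not part of the text) verifies on random probability vectors that H(θx_1 + (1 − θ)x_2) ≥ θH(x_1) + (1 − θ)H(x_2), i.e., that the entropy function of (1A.10) is convex ∩ over 𝒫_N:

    import numpy as np

    def entropy_nats(x):
        # H(x) = sum_n x_n ln(1/x_n), with 0 ln(1/0) taken as 0
        x = np.asarray(x, dtype=float)
        nz = x > 0
        return float(-(x[nz] * np.log(x[nz])).sum())

    rng = np.random.default_rng(0)
    N = 8
    for _ in range(1000):
        x1 = rng.random(N); x1 /= x1.sum()        # random probability vectors in P_N
        x2 = rng.random(N); x2 /= x2.sum()
        theta = rng.random()
        chord = theta * entropy_nats(x1) + (1 - theta) * entropy_nats(x2)
        mix = entropy_nats(theta * x1 + (1 - theta) * x2)
        assert chord <= mix + 1e-12               # the convex-cap inequality (1A.1)
    print("entropy concavity verified on 1000 random trials")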




Finally suppose we consider a DMC with input alphabet 𝒳, output alphabet
𝒴, and transition probabilities p(y | x) for x ∈ 𝒳, y ∈ 𝒴. For an input probability
distribution q(x) for x ∈ 𝒳, we defined average mutual information as

    I(𝒳; 𝒴) = ∑_y ∑_x q(x) p(y | x) log [p(y | x)/p(y)]    (1.2.7)

To emphasize the dependence of I(𝒳; 𝒴) on the transition probabilities represented
by P and the input probability distribution represented by q, we write this
as I(q; P).

Lemma 1A.2 I(𝒳; 𝒴) for fixed channel transition probabilities is a convex ∩
function over the input probability space, and for fixed input probability
distribution a convex ∪ function over the channel transition probability
space. That is,

    θI(q^(1); P) + (1 − θ)I(q^(2); P) ≤ I(θq^(1) + (1 − θ)q^(2); P)    (1A.12)

where θq^(1) + (1 − θ)q^(2) represents the probability distribution θq^(1)(x) +
(1 − θ)q^(2)(x), x ∈ 𝒳, for any input probability distributions q^(1) and q^(2) and
for all θ ∈ (0, 1); and

    θI(q; P^(1)) + (1 − θ)I(q; P^(2)) ≥ I(q; θP^(1) + (1 − θ)P^(2))    (1A.13)

where θP^(1) + (1 − θ)P^(2) represents the transition probabilities θp^(1)(y | x) +
(1 − θ)p^(2)(y | x) for y ∈ 𝒴, x ∈ 𝒳, for any transition probabilities P^(1) and P^(2)
and for all θ ∈ (0, 1). P in (1A.12) represents any transition probabilities and q
in (1A.13) represents any input probability distribution.

PROOF For any given P and q let us denote by p the output distribution

    p(y) = ∑_x q(x) p(y | x)

For fixed P it should be clear that when input distributions q^(1) and q^(2) result
in output distributions p^(1) and p^(2) respectively, then the input distribution
θq^(1) + (1 − θ)q^(2) results in the output distribution θp^(1) + (1 − θ)p^(2). Now
note that

    I(q; P) = ∑_x q(x) ∑_y p(y | x) log p(y | x) + H(p)    (1A.15)

where H(p) is the entropy of the output alphabet. The first term in (1A.15) is
linear in q and therefore convex ∩ in q. The second term is convex ∩ in p, as
established by the argument following (1A.10). But since p is linear in q, this
means that it is also convex ∩ in q. By property 1 we see that I(q; P) is convex
∩ in q for fixed P. This proves (1A.12).






To prove (1A.13), let

    p^(θ)(y | x) = θp^(1)(y | x) + (1 − θ)p^(2)(y | x)    y ∈ 𝒴 and x ∈ 𝒳

and

    p^(θ)(y) = ∑_x q(x) p^(θ)(y | x)

Then

    (ln 2) I(q; θP^(1) + (1 − θ)P^(2))
        = ∑_y ∑_x q(x) p^(θ)(y | x) ln [p^(θ)(y | x)/p^(θ)(y)]
        = θ ∑_y ∑_x q(x) p^(1)(y | x) ln [p^(θ)(y | x)/p^(θ)(y)]
          + (1 − θ) ∑_y ∑_x q(x) p^(2)(y | x) ln [p^(θ)(y | x)/p^(θ)(y)]    (1A.16)

Next using the inequality ln x ≤ x − 1 we have

    ∑_y ∑_x q(x) p^(1)(y | x) ln [p^(θ)(y | x)/p^(θ)(y)]
        = ∑_y ∑_x q(x) p^(1)(y | x) ln [p^(1)(y | x)/p^(1)(y)]
          + ∑_y ∑_x q(x) p^(1)(y | x) ln [p^(θ)(y | x) p^(1)(y)/(p^(θ)(y) p^(1)(y | x))]
        ≤ (ln 2) I(q; P^(1)) + ∑_y ∑_x q(x) p^(1)(y | x) [p^(θ)(y | x) p^(1)(y)/(p^(θ)(y) p^(1)(y | x)) − 1]
        = (ln 2) I(q; P^(1))    (1A.17)

since the second term sums to zero. Similarly

    ∑_y ∑_x q(x) p^(2)(y | x) ln [p^(θ)(y | x)/p^(θ)(y)] ≤ (ln 2) I(q; P^(2))    (1A.18)

Using (1A.17) and (1A.18) in (1A.16) we have the desired result (1A.13).

We have shown here that the fundamental parameters, entropy and average 
mutual information, have certain convexity properties. In subsequent chapters we 
shall encounter other important parameters of information theory that also have 
convexity properties. 
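As with the entropy function, the two convexity properties of Lemma 1A.2 can be checked numerically. A brief Python sketch (illustrative, not from the text) that evaluates I(q; P) and tests (1A.12) and (1A.13) on random distributions:

    import numpy as np

    def mutual_info_bits(q, P):
        # I(q; P) = sum_x sum_y q(x) p(y|x) log2 [p(y|x)/p(y)],  P[y, x] = p(y|x)
        p_y = P @ q
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(P > 0, P / p_y[:, None], 1.0)
        return float(np.sum(q * np.sum(P * np.log2(ratio), axis=0)))

    rng = np.random.default_rng(1)
    nx, ny = 3, 4

    def rand_dist(n):
        v = rng.random(n)
        return v / v.sum()

    def rand_channel():
        M = rng.random((ny, nx))
        return M / M.sum(axis=0, keepdims=True)   # columns are valid p(.|x)

    for _ in range(200):
        q1, q2, q = rand_dist(nx), rand_dist(nx), rand_dist(nx)
        P1, P2, P = rand_channel(), rand_channel(), rand_channel()
        t = rng.random()
        # (1A.12): convex-cap in q for fixed P
        assert t * mutual_info_bits(q1, P) + (1 - t) * mutual_info_bits(q2, P) \
               <= mutual_info_bits(t * q1 + (1 - t) * q2, P) + 1e-10
        # (1A.13): convex-cup in P for fixed q
        assert t * mutual_info_bits(q, P1) + (1 - t) * mutual_info_bits(q, P2) \
               >= mutual_info_bits(q, t * P1 + (1 - t) * P2) - 1e-10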




APPENDIX IB JENSEN INEQUALITY FOR 

CONVEX FUNCTIONS 



Lemma Let f(·) be a convex ∩ real-valued function defined on the real line.
Let x be a random variable with finite expectation. Then

    E[f(x)] ≤ f(E[x])

For convex ∪ functions, the inequality is reversed.

PROOF We first prove this for a discrete finite sample space. The definition of
a convex ∩ function is most concisely stated as the property that any line
segment connecting two points (x_1, f(x_1)) and (x_2, f(x_2)) must lie below the
function over the interval x_1 ≤ x ≤ x_2 (see Fig. 1B.1). Consider first the
distribution p_1, p_2 = 1 − p_1 for a binary-valued random variable. Then it
follows from the definition of the line that the point (p_1x_1 + p_2x_2, p_1f(x_1) +
p_2f(x_2)) lies on the line and hence must lie directly below the point
(p_1x_1 + p_2x_2, f(p_1x_1 + p_2x_2)) on the function. It follows that

    p_1f(x_1) + p_2f(x_2) ≤ f(p_1x_1 + p_2x_2)    (1B.1)

Now extending to a three-point distribution, p_1, p_2, p_3, where p_1 + p_2 + p_3 = 1,

    p_1f(x_1) + p_2f(x_2) + p_3f(x_3)
        = (p_1 + p_2) [ (p_1/(p_1 + p_2)) f(x_1) + (p_2/(p_1 + p_2)) f(x_2) ] + p_3f(x_3)
        ≤ (p_1 + p_2) f( (p_1x_1 + p_2x_2)/(p_1 + p_2) ) + p_3f(x_3)    (1B.2)

where we have used (1B.1), recognizing that the coefficients p_1/(p_1 + p_2) and
p_2/(p_1 + p_2) constitute a binary distribution defined at the points x_1 and x_2.



f(p l x 




Pl x l 



X 2 



Figure 1R1 Convex n function. 




Again using (1B.1) on the binary distribution (p_1 + p_2) and p_3, defined at
the points c = (p_1x_1 + p_2x_2)/(p_1 + p_2) and x_3, we have

    (p_1 + p_2) f(c) + p_3 f(x_3) ≤ f[(p_1 + p_2)c + p_3x_3]    (1B.3)

Substituting for c and combining (1B.2) and (1B.3), we obtain

    p_1f(x_1) + p_2f(x_2) + p_3f(x_3) ≤ f(p_1x_1 + p_2x_2 + p_3x_3)    (1B.4)

We proceed to extend by induction to a finite distribution of order n.
Suppose that for a distribution of order n − 1

    ∑_{i=1}^{n−1} p_i f(x_i) ≤ f( ∑_{i=1}^{n−1} p_i x_i )    (1B.5)

Then for order n

    ∑_{j=1}^{n} p_j f(x_j) = ( ∑_{i=1}^{n−1} p_i ) ∑_{j=1}^{n−1} [ p_j / ∑_{i=1}^{n−1} p_i ] f(x_j) + p_n f(x_n)
                         ≤ ( ∑_{i=1}^{n−1} p_i ) f(c) + p_n f(x_n)    (1B.6)

where

    c = ∑_{j=1}^{n−1} p_j x_j / ∑_{i=1}^{n−1} p_i

and where we used (1B.5) and the fact that p_j / ∑_{i=1}^{n−1} p_i, for j = 1, 2, ...,
n − 1, constitutes an (n − 1)-point distribution. Now applying (1B.1) on the
binary distribution ∑_{i=1}^{n−1} p_i and p_n, it follows that

    ( ∑_{i=1}^{n−1} p_i ) f(c) + p_n f(x_n) ≤ f( ( ∑_{i=1}^{n−1} p_i ) c + p_n x_n )    (1B.7)

Finally, combining (1B.6) and (1B.7), we have

    ∑_{j=1}^{n} p_j f(x_j) ≤ f( ∑_{j=1}^{n} p_j x_j )    (1B.8)

or E[f(x)] ≤ f(E[x]), as was to be shown.

Extension to any infinite discrete sample space is direct, as is extension to any
distribution function P(·) for which the Stieltjes integral ∫ f(x) dP(x) exists. For
such cases (1B.8) becomes

    ∫ f(x) dP(x) ≤ f( ∫ x dP(x) )    (1B.9)

Inequalities (1B.8) and (1B.9) can be expressed generically as

    E[f(x)] ≤ f(E[x])

where f(x) is a convex ∩ function. If f(x) is convex ∪, it immediately follows that
all inequalities are reversed.



PROBLEMS 

1.1 Examples of entropy 

(a) For a DMS with A = 2 output letters, 𝒰 = {a_1, a_2} and P(a_1) = p, show by direct differentiation
that the entropy

    H(𝒰) = p log (1/p) + (1 − p) log (1/(1 − p))

is maximized when p = 1/2.

(b) For the binary source in (a) consider sequences of two outputs as a single source output of an
extended source with alphabet 𝒰² = {(a_1, a_1), (a_1, a_2), (a_2, a_1), (a_2, a_2)}. Show directly that
H(𝒰²) = 2H(𝒰).



(c) Consider the drawing of a card (with replacement) from a deck of 52 playing cards as a DMS. 
What is the entropy of a randomly selected card? Suppose suits are ignored so that the output space is
now 𝒰 = {A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K}. What is the entropy of a randomly selected card now?
What if 𝒰 = {face card, not a face card}?

(d) What is the entropy of the output of the toss of a fair die? Suppose the die is biased so that the 
probability of any face is proportional to the number of dots; now what is the entropy? 

1.2 Given a sequence of discrete random variables u_1, u_2, ..., u_N with alphabets 𝒰^(1), 𝒰^(2), ..., 𝒰^(N) and
a joint probability P_N(u) for u ∈ 𝒰^(1) × 𝒰^(2) × ⋯ × 𝒰^(N). Its entropy is

    H(𝒰^(1), 𝒰^(2), ..., 𝒰^(N)) = ∑_u P_N(u) log [1/P_N(u)]

Show that

    H(𝒰^(1), 𝒰^(2), ..., 𝒰^(N)) ≤ ∑_{n=1}^{N} H(𝒰^(n))

with equality if and only if the random variables are independent. Here H(𝒰^(n)) is the entropy of the nth
random variable.

1.3 For an arbitrary stationary ergodic source define entropy as

    H(𝒰) = lim_{N→∞} (1/N) H(𝒰_N)

where

    H(𝒰_N) = ∑_u P_N(u) log [1/P_N(u)]

The asymptotic equipartition property of stationary ergodic sources gives

    lim_{N→∞} F_N(ε) = 0

for any ε > 0, where

    F_N(ε) = Pr {u : | (1/N) log [1/P_N(u)] − H(𝒰) | > ε}






(a) Show that

    (1/n) H(𝒰_n) ≤ (1/k) H(𝒰_k)    for k ≤ n

(b) Prove the noiseless source coding theorem assuming that lim_{N→∞} F_N(ε) = 0 for every ε > 0.



1.4 (Chebyshev Inequality and the Weak Law of Large Numbers)

(a) Show that for a random variable x with mean m and variance σ², Pr {|x − m| > ε} ≤ σ²/ε²
for any ε > 0.

(b) Let z_1, z_2, ..., z_N be independent identically distributed random variables with mean z̄ and
variance σ². Show that for any ε > 0

    Pr { | (1/N) ∑_{n=1}^{N} z_n − z̄ | > ε } ≤ σ²/(Nε²)

and

    lim_{N→∞} Pr { | (1/N) ∑_{n=1}^{N} z_n − z̄ | > ε } = 0

Hint: Lower bound the variance of x by reducing the region of integration.

1.5 (Chernoff Bound) Show that F_N defined in (1.1.23) decreases at least exponentially with N by the
following steps:

(a) Define S(N, ε)_+ = {u : (1/N) log [1/P_N(u)] − H ≥ ε}.

(b) For z_n defined in (1.1.26), note that for u ∈ S(N, ε)_+

    ∑_{n=1}^{N} z_n − N(H + ε) ≥ 0

so that 1 ≤ exp {s[∑_{n=1}^{N} z_n − N(H + ε)]} for u ∈ S(N, ε)_+ and any s ≥ 0. Hence for any s ≥ 0 show that

    F_N^+ = ∑_{u ∈ S(N, ε)_+} P_N(u) ≤ e^{−NG(s)}

where

    G(s) = s[H + ε] − log E[exp (s z_n)]

(c) By examining the first two derivatives of G(s) show that for some s* > 0 we have

    F_N^+ ≤ e^{−NG(s*)}

where G(s*) > 0.

(d) Do the same for

    S(N, ε)_− = {u : (1/N) log [1/P_N(u)] − H ≤ −ε}

then combine with the result of (c) to get the desired bound.






1.6 Assume a DMS with alphabet 𝒰 = {a_1, a_2, ..., a_A} and probability P(u) for u ∈ 𝒰.

(a) For each u ∈ 𝒰 pick a binary codeword of length l(u) which satisfies

    log [1/P(u)] ≤ l(u) < log [1/P(u)] + 1

Show that the average length

    ⟨L⟩ = ∑_u P(u) l(u)

satisfies

    H(𝒰) ≤ ⟨L⟩ < H(𝒰) + 1

(b) Repeat (a) for source sequences of length N and obtain A^N binary codewords of lengths
{l(u) : u ∈ 𝒰^N} with average length

    ⟨L_N⟩ = ∑_u P_N(u) l(u)

which satisfies

    H(𝒰) ≤ ⟨L_N⟩/N < H(𝒰) + 1/N



(c) Show that the codewords in (a) and (b) can be chosen such that no codeword of length l is
identical to the first l bits of a codeword of length greater than or equal to l. That is, no codeword is a
prefix of another. Such a set of distinct codewords has the uniquely decodable property that no two
different codeword sequences can form the same binary sequence. Hence with these codes the source
outputs can be uniquely determined from the binary-code sequence.
1.7 Show that

(a) I(𝒳; 𝒴) = H(𝒴) − H(𝒴 | 𝒳)

and

(b) I(𝒳; 𝒴) = H(𝒳) − H(𝒳 | 𝒴)

where I(𝒳; 𝒴) is defined by (1.2.7) and

    H(𝒴) = ∑_y p(y) log [1/p(y)]
    H(𝒴 | 𝒳) = ∑_x ∑_y q(x) p(y | x) log [1/p(y | x)]
    H(𝒳) = ∑_x q(x) log [1/q(x)]
    H(𝒳 | 𝒴) = ∑_x ∑_y p(x, y) log [1/q(x | y)]



1.8 Find the average mutual information between inputs and outputs of the following DMCs. Then 
find their capacities. 

(a) The binary erasure channel (BEC) of Fig. P1.8a

(b) The Z channel of Fig. P1.8b



Figure P1.8 (a) Binary erasure channel; (b) Z channel.



1.9 For the BEC given in Prob. 1.8(a) suppose that the encoder can observe the outputs of the 
channel and constructs a variable length code as follows: 

When the information symbol (assume a zero-memory binary-symmetric information source) is a 
"0." then the encoder keeps sending Os across the channel until an unerased output is achieved. If the 
information symbol is a " 1." then the encoder keeps sending Is until an unerased output is achieved. 
For each information symbol the number of channel symbols used is a random variable. 

Compute the average codeword length for each information bit. What is the rate of this encoding 
scheme measured in information bits per channel use? What is the information bit error probability? 

1.10 There are two biased coins in a box. The first coin when flipped will produce a "head" with 
probability | while the second coin will produce a " head " with probability \. A coin is randomly 
selected from the box and flipped. 

(a) If a head appears how much information does this provide about the first coin being selected ? 
The second coin? 

(b) What is the average mutual information provided about the coin selected when the outcome 
of a flip of a randomly selected coin is observed? 

1.11 There are 13 coins of which 12 are known to have equal weight. The remaining coin is 
either the same weight or heavier or lighter than the other coins. The objective is to find the 
odd coin, if any, after the coins are mixed and determine whether the odd coin is heavy or light 
by using a balance and a known standard coin. 

(a) Show by considering the information provided that it is impossible to guarantee solving the 
problem in two uses of the balance. Similarly show that it might be possible always to solve the 
problem in three weighings. 

(b) By trying to maximize the average information provided by the three weighings, give a 
weighing strategy that works. 

(c) Show that three weighings are not enough without the standard coin. 

1.12 For a finite alphabet 𝒰 consider the three distributions P_1(u), P_2(u), and P_λ(u) = λP_1(u) +
(1 − λ)P_2(u) for all u ∈ 𝒰 and λ ∈ (0, 1). Let H_1(𝒰), H_2(𝒰), and H_λ(𝒰) be the corresponding entropies.
Using inequality (1.1.8) show that

    λH_1(𝒰) + (1 − λ)H_2(𝒰) ≤ H_λ(𝒰)



1.13 Let y be an absolutely continuous random variable with probability density function p(y), y ∈ (−∞, ∞),
where

    ∫_{−∞}^{∞} y p(y) dy = 0    and    ∫_{−∞}^{∞} y² p(y) dy = σ_y²

Using a version of (1.1.8) show that

    ∫_{−∞}^{∞} p(y) log [1/p(y)] dy ≤ (1/2) log (2πeσ_y²)

with equality when y is a Gaussian random variable. Use this to find the maximum mutual information
I(𝒳; 𝒴) for the additive Gaussian noise channel of Fig. 1.5, where we maximize over the input probability
density function q(x), x ∈ 𝒳, subject to

    ∫_{−∞}^{∞} x q(x) dx = 0    and    ∫_{−∞}^{∞} x² q(x) dx = σ_x²

1.14 Use the source encoder discussed in Prob. 1.6(b) to show that in the limit of large N the
combination of source and source encoder approximates (in the sense that H_N, defined below,
approaches 1 as N → ∞) a binary-symmetric source. Do this by the following steps.

(a) If source sequence u ∈ 𝒰^N is mapped into codeword x(u) of length l(u), then the encoder
output has normalized information of

    log [1/P_N(u)] / l(u)    bits/binary symbol

The average information per binary symbol out of the source encoder is then

    H_N = ∑_u P_N(u) log [1/P_N(u)] / l(u)



Show that

    1/[1 + 1/(N log (1/p*))] ≤ H_N ≤ 1    where p* = max_u P(u)

(b) Next show that the binary-symmetric source (BSS) is the only binary source that has H N = 1 
for all N. 

1.15 Use the source encoder described in the proof of Theorem 1.1.1 (see Fig. 1.3) to show that in the
limit of large N the combination of source and source encoder becomes a binary-symmetric source in
the sense that H_N → 1 as N → ∞.



CHAPTER 

TWO 

CHANNEL MODELS AND BLOCK CODING 



2.1 BLOCK-CODED DIGITAL COMMUNICATION ON THE 
ADDITIVE GAUSSIAN NOISE CHANNEL 

The most general digital communication system to be treated in this chapter and 
the next is that shown in Fig. 2.1. The input digital data is usually binary, but may 
have been encoded into any alphabet of q ≥ 2 symbols. The incoming data, which
arrives at the rate of one symbol every T_s seconds, is stored in an input register
until a block of K data symbols¹ has been accumulated. This block is then presented
to the channel encoder as one of M possible messages, denoted H_1, H_2,
..., H_M where M = q^K and q is the size of the data alphabet. The combination of
encoder and modulator performs a mapping from the set of M messages, {H_m}, onto
a set of M finite-energy signals, {x_m(t)}, of finite duration T = KT_s.

While the encoder-modulator would thus appear to perform a single indivisible
function, it can in fact be divided into separate discrete-time and continuous-time
operations. The justification for this separation lies in the Gram-Schmidt
orthogonalization procedure, which permits the representation of any M finite-energy
time functions as linear combinations of N ≤ M orthonormal basis functions.
That is, over the finite interval 0 ≤ t ≤ T the M finite-energy signals x_1(t),
x_2(t), ..., x_M(t), representing the M block messages H_1, H_2, ..., H_M respectively,
can be expressed as (see App. 2A)

    x_m(t) = ∑_{n=1}^{N} x_mn φ_n(t)    m = 1, 2, ..., M    (2.1.1)



1 When the data alphabet is binary, these are generally called bits, whether or not they correspond 
to bits of information in the sense of Sec. 1.1. 

Figure 2.1 Block diagram of the communication system treated in this chapter: (a) encoder and modulator; (b) demodulator and decoder.



where for each m and n

    x_mn = ∫_0^T x_m(t) φ_n(t) dt

and the basis functions {φ_1(t), φ_2(t), ..., φ_N(t)} are orthonormal:

    ∫_0^T φ_n(t) φ_j(t) dt = 1    n = j
                          = 0    n ≠ j    (2.1.2)

and N ≤ M. In fact, N = M if and only if the signals are linearly independent. A
consequence of this representation is that the signal energies can be expressed as
square norms of the vectors

    x_m = (x_m1, x_m2, ..., x_mN)    m = 1, 2, ..., M

for it follows from (2.1.2) that for each m

    ℰ_m = ∫_0^T [x_m(t)]² dt
        = ∑_{n=1}^{N} ∑_{j=1}^{N} x_mn x_mj ∫_0^T φ_n(t) φ_j(t) dt
        = ∑_{n=1}^{N} x_mn²
        = ||x_m||²    (2.1.3)

The representation (2.1.1) suggests the general implementation of encoder and
modulator shown in Fig. 2.1a. Thus the encoder becomes a mapping from a
discrete set of M messages to a vector of N ≤ M real numbers. The most general
modulator consists of N amplitude modulators [waveform φ_n(t) modulated by
amplitude x_mn for n = 1, 2, ..., N] followed by a summer. In fact, this most
general form is considerably simplified, as will be discussed in Sec. 2.7: when the
amplitudes {x_mn} are constrained to be elements of a finite alphabet, strictly
digital encoders can be used, and when the basis functions {φ_n(t)} are chosen to be
disjoint time-orthogonal (i.e., functions which take on nonzero values on disjoint
time intervals), only a single time-shared modulator need be implemented.
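The Gram-Schmidt construction behind (2.1.1)-(2.1.3) is straightforward to carry out numerically on sampled waveforms. The following Python sketch (illustrative only; it assumes each signal is given as samples spaced dt seconds apart, and none of these names come from the text) builds an orthonormal basis and the coefficient vectors x_m:

    import numpy as np

    def gram_schmidt(signals, dt):
        # signals: list of equal-length sample arrays x_m(t); dt: sample spacing.
        # Returns (basis, coeffs): orthonormal basis functions phi_n as sample arrays
        # (inner product approximated by dt * dot) and the coefficient vectors x_m.
        basis = []
        for s in signals:
            r = s.astype(float).copy()
            for phi in basis:                      # subtract projections onto existing basis
                r -= (dt * np.dot(r, phi)) * phi
            energy = dt * np.dot(r, r)
            if energy > 1e-12:                     # keep only linearly independent directions
                basis.append(r / np.sqrt(energy))
        coeffs = np.array([[dt * np.dot(s, phi) for phi in basis] for s in signals])
        return basis, coeffs

    # Example: two disjoint pulses plus their sum span only N = 2 dimensions
    t = np.arange(0, 1, 1e-3)
    s1 = np.where(t < 0.5, 1.0, 0.0)
    s2 = np.where(t >= 0.5, 1.0, 0.0)
    basis, X = gram_schmidt([s1, s2, s1 + s2], dt=1e-3)
    print(len(basis), X)           # N = 2; each row of X is a coefficient vector x_m
    print([1e-3 * np.dot(s, s) for s in [s1, s2, s1 + s2]],
          (X ** 2).sum(axis=1))    # waveform energies match ||x_m||^2, as in (2.1.3)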

The transmitter and receiver of the general system of Fig. 2.1, together with 
the propagation medium, may be regarded as a random mapping from the finite 
set of transmitted waveforms {x m (t)} to the received random process y(t). All sorts 
of distortions including fading, multipath, intersymbol interference, nonlinear 
distortion, and additive noise may be inflicted upon the signal by the propagation 
medium and the electromagnetic componentry before it emerges from the 
receiver. At this point the only disturbance that we will consider is additive white 




Gaussian noise, both to establish a minimally complex model for our starting 
point, and also because this model is in fact very accurate for an important class of 
communication systems. In Sees. 2.6 and 2.12 we shall consider the influence of 
some of the other forms of disturbance just mentioned. 

The additive white Gaussian noise (AWGN) channel is modeled simply with a
summing junction, as shown in Fig. 2.1b. For an input x_m(t), the output² is

    y(t) = x_m(t) + n(t)    0 ≤ t ≤ T    (2.1.4)

where n(t) is a stationary random process whose power is spread uniformly over a
bandwidth much wider than the signal bandwidth; hence it is modeled as a
process with a uniform, arbitrarily wide spectral density, or, equivalently, with
covariance function

    R(τ) = (N_0/2) δ(τ)    (2.1.5)

where δ(·) is the Dirac delta function and N_0 is the one-sided noise power spectral
density.³

The demodulator-decoder can be regarded in general as a mapping from the
received process y(t) to a decision on the state of the original message H_m. But, for
this specific channel model, the demodulator-decoder can also be decomposed
into two separate functions which are essentially the duals of those performed by
the encoder-modulator. Consider first projecting the random process y(t) onto
each of the modulator's basis functions, thus generating the N integral inner
products

    y_n = ∫_0^T y(t) φ_n(t) dt    n = 1, 2, ..., N    (2.1.6)

This can be performed by the system of Fig. 2.1b. We define also

    n_n = ∫_0^T n(t) φ_n(t) dt    n = 1, 2, ..., N    (2.1.7)

and hence it follows from (2.1.1) and (2.1.4) that

    y_n = x_mn + n_n    n = 1, 2, ..., N    (2.1.8)

Now consider the process

    ŷ(t) = y(t) − ∑_{n=1}^{N} y_n φ_n(t)    (2.1.9)

² Although the propagation medium naturally attenuates the signal, we may ignore this effect by
conceptually amplifying both signal and noise to the normalized pretransmission level.

³ This means that, in response to this noise input, an ideal bandpass filter of bandwidth 1 Hz would
produce an output power of N_0 watts.




Given that x_m(t) is the transmitted signal, it follows from (2.1.9) and (2.1.1) that
this process can be written as

    ŷ(t) = x_m(t) + n(t) − ∑_{n=1}^{N} (x_mn + n_n) φ_n(t)
         = n(t) − ∑_{n=1}^{N} n_n φ_n(t) ≡ n̂(t)    (2.1.10)

which depends only on the noise process. Thus we may represent the original
process as

    y(t) = ∑_{n=1}^{N} y_n φ_n(t) + ŷ(t) = ∑_{n=1}^{N} y_n φ_n(t) + n̂(t)    (2.1.11)

Now, as will be elaborated upon in Sec. 2.2, any statistical decision regarding 
the transmitted message is based on the a priori probabilities of the messages and 
on the conditional probabilities (densities or distributions) of the measurements 
performed on y(t), generally called the observables. Suppose, for the moment, that 
we take as the observables only the N projections {y n } defined by (2.1.6). Because 
y(t), defined by (2.1.4), is a Gaussian process, the observables are Gaussian vari 
ables with means depending only on the corresponding signal components, since 





= x mn n=l,2,...,N (2.1.12) 

and with variances equal to N_0/2, since for any n

    var [y_n | x_mn] = E[(y_n − x_mn)² | x_mn]
                     = E ∫_0^T ∫_0^T n(t) n(u) φ_n(t) φ_n(u) dt du
                     = (N_0/2) ∫_0^T ∫_0^T δ(t − u) φ_n(t) φ_n(u) dt du
                     = (N_0/2) ∫_0^T φ_n²(t) dt
                     = N_0/2    (2.1.13)




Similarly, it follows that these observables are mutually uncorrelated since, for
n ≠ j,

    E[(y_n − x_mn)(y_j − x_mj) | x_m] = E[n_n n_j]
        = E ∫_0^T ∫_0^T n(t) n(u) φ_n(t) φ_j(u) dt du
        = (N_0/2) ∫_0^T ∫_0^T δ(t − u) φ_n(t) φ_j(u) dt du
        = (N_0/2) ∫_0^T φ_n(t) φ_j(t) dt
        = 0    (2.1.14)

which, since the variables are Gaussian, implies that they are also independent.
Then defining the vector of N observables

    y = (y_1, y_2, ..., y_N)
whose components are independent Gaussian variables with means given by 
(2.1.12) and variances N_0/2, it follows that the conditional probability density of y
given the signal vector x_m (or equivalently, given that message H_m was sent) is

    p_N(y | x_m) = ∏_{n=1}^{N} (πN_0)^{−1/2} exp [−(y_n − x_mn)²/N_0]    (2.1.15)


Returning to the representation (2.1.11) of y(t), while it is clear that the vector
of observables y = (y_1, y_2, ..., y_N) completely characterizes the terms of the
summation, there remains the term n̂(t), defined by (2.1.10), which depends only
on the noise and not at all on the signals. Furthermore, since the noise has zero
mean, n̂(t) is a zero-mean Gaussian process. Finally, n̂(t), and hence any observable
derived therefrom, is independent of all the observables {y_n} because

    E[n̂(t) y_j] = E [ n̂(t) ∫_0^T y(u) φ_j(u) du ]
               = E [ n̂(t) ∫_0^T n(u) φ_j(u) du ]
               = E [ ( n(t) − ∑_{n=1}^{N} n_n φ_n(t) ) n_j ]
               = (N_0/2) φ_j(t) − (N_0/2) φ_j(t)
               = 0    j = 1, 2, ..., N

Thus, since any observable based on n̂(t) is independent of the observables {y_n}
and of the transmitted signal x_m, it should be clear that such an observable is
irrelevant to the decision of which message was transmitted. More explicitly, if n̂ is


Figure 2.2 General memoryless channel: H_m → encoder → x_m → channel p_N(y | x_m) → y → decoder → Ĥ_m.

any vector of N′ observables based only on n̂(t), then it follows from the above that
the joint conditional probability density is

    p(y, n̂ | x_m) = p_N(y | x_m) p_{N′}(n̂)

Since the term p_{N′}(n̂) enters into all the conditional densities (for m = 1, 2, ..., M)
in identically the same way, it is useless in making the decision.

Hence, we conclude finally that the components of the original observable
vector y are the only data based on y(t) useful for the decision and thus represent
sufficient statistics. Therefore the demodulator can be implemented as shown in
Fig. 2.1b. The time-continuous process is thus reduced by the demodulator to the
N-dimensional random vector y, which then constitutes the input to the decoder
whose structure we shall study in the next section. We may summarize the results
of this section by noting that, for the AWGN channel, by using the general but
explicit forms of modulators and demodulators of Figs. 2.1a and 2.1b, we can
reduce the overall system to the model of Fig. 2.2 where the channel is in effect a
random mapping defined by the conditional probability density

    p_N(y | x) = ∏_{n=1}^{N} p(y_n | x_n)    (2.1.16)
While this result has only been shown to characterize an AWGN channel, many 
other channels can be characterized in this way. Any channel whose conditional 
(or transition) probability density (or distribution) satisfies (2.1.16) is called a 
memoryless channel. We shall discuss a class of memoryless channels derived from 
the AWGN channel in Sec. 2.8, and give more elaborate examples in Sec. 2.12. 
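The reduction just described — correlate the received waveform against each basis function and keep only the N numbers y_n — is easy to mimic in simulation. A short Python sketch (illustrative; the sampling step and names are assumptions, not from the text) showing that the projected observables behave as y_n = x_mn + n_n with independent noise components of variance N_0/2:

    import numpy as np

    rng = np.random.default_rng(2)
    dt, T = 1e-3, 1.0
    t = np.arange(0, T, dt)
    N0 = 0.5

    # Two orthonormal basis functions on [0, T]
    phi = [np.sqrt(2 / T) * np.sin(2 * np.pi * (k + 1) * t / T) for k in range(2)]
    x_m = np.array([1.0, -0.5])                          # signal coefficients x_mn
    signal = sum(c * p for c, p in zip(x_m, phi))

    trials = np.zeros((5000, 2))
    for i in range(trials.shape[0]):
        # white noise of two-sided density N0/2: per-sample variance (N0/2)/dt
        noise = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=t.size)
        y_t = signal + noise
        trials[i] = [dt * np.dot(y_t, p) for p in phi]   # observables y_n of (2.1.6)

    print(trials.mean(axis=0))   # approximately x_m, as in (2.1.12)
    print(trials.var(axis=0))    # approximately N0/2 per component, as in (2.1.13)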



2.2 MINIMUM ERROR PROBABILITY AND MAXIMUM 
LIKELIHOOD DECODER 

There remain the problems of characterizing more explicitly the encoder and 
decoder. Both will occupy the better part of this book. The principles and optimal 
design of the decoder are more easily developed, although its implementation is 
usually more complex than that of the encoder. The goal of the decoder is to
perform a mapping from the vector y to a decision Ĥ on the message transmitted.
Such a decision must be based on some desirable criterion of performance. The
most reasonable, as well as the most convenient, criterion for this decision is to
minimize the probability of error of the decision. Suppose that, when the vector y
takes on some particular value (a real vector), we make the decision Ĥ = H_m.



The probability of an error in this decision, which we denote by P_E(H_m; y), is just

    P_E(H_m; y) = Pr (H_m not sent | y)
                = 1 − Pr (H_m sent | y)    (2.2.1)

Now, since our criterion is to minimize the error probability in mapping each
given y into a decision, it follows that the optimum decision rule is

    Ĥ = H_m    if Pr (H_m sent | y) ≥ Pr (H_m′ sent | y)    for all m′ ≠ m    (2.2.2)

If m satisfies inequality (2.2.2) but equality holds for one or more values of m′, we
may choose any of these m′ as the decision and achieve the same error probability.
Condition (2.2.1), which is completely general for any channel (memoryless or
not), can be expressed more explicitly in terms of the prior probabilities of the
messages

    π_m = Pr (H_m sent)    m = 1, 2, ..., M    (2.2.3)

and in terms of the conditional probabilities of y given each H_m (usually called the
likelihood functions⁴)

    p_N(y | x_m)    m = 1, 2, ..., M    (2.2.4)

This last relation follows from the fact that the mapping from H_m to x_m, which is
the coding operation, is deterministic and one-to-one. These likelihood functions,
which are in fact the channel characterization (Fig. 2.2), are also called the channel
transition probabilities. Applying Bayes' rule to (2.2.2), using (2.2.3) and (2.2.4),
and for the moment ignoring ties, we conclude that, to minimize error probability,
the optimum decision is

    Ĥ = H_m    if π_m p_N(y | x_m)/p_N(y) > π_m′ p_N(y | x_m′)/p_N(y)    for all m′ ≠ m    (2.2.5)

Since the denominator p_N(y), the unconditional probability (density) of y, is
independent of m, it can be ignored. Also, since it is usually more convenient to
perform summations than multiplications, and since A > B > 0 implies ln A >
ln B, we rewrite (2.2.5) as

    Ĥ = H_m    if ln π_m + ln p_N(y | x_m) > ln π_m′ + ln p_N(y | x_m′)    for all m′ ≠ m

    (2.2.6)

For a memoryless channel as defined by (2.1.16), this decision simplifies further to

    Ĥ = H_m    if ln π_m + ∑_{n=1}^{N} ln p(y_n | x_mn) > ln π_m′ + ∑_{n=1}^{N} ln p(y_n | x_m′n)

    for all m′ ≠ m    (2.2.7)



⁴ p_N(·) is a density function if y is a vector of continuous random variables, and is a distribution if y
is a vector of discrete random variables.






Another useful interpretation of the above, consistent with our original view 
of the decoder as a mapping, is that the decision rule (2.2.6) or (2.2.7) defines a 
partition of the N-dimensional space of all observable vectors y into regions A 1? 
A 2 , ..., A M where 

A m = {y : In n m + In p N (y \ x m ) > In n m , + In p N (y \x m ,)} for all m + m} (2.2.8) 
As is clear from their definition, these regions must be disjoint, i.e., 

A fc n Aj F = for all k+j (2.2.9) 

Then the decision rule can indeed be considered as the mapping from y to H A such 
that 



if 



yeA 



then 



(2.2.10) 



Aside from the boundaries between regions, it is also clear from definition (2.2.8) 
that the regions A m cover the entire space of observable vectors y. We shall adopt 
the convention that all ties will be resolved at random. That is, the boundary 
region between A m and \ m , , consisting of all y for which (2.2.8) becomes an 
equality, will be resolved a priori by the flip of a fair coin ; the outcome of such a 
flip does not alter the ultimate error probability since, for y on the boundary, 
(2.2.2) is satisfied with equality. It then follows that the union of the regions covers 
the entire N-dimensional observation space <& N ; that is 



U 



(2.2.11) 



The above concept can best be demonstrated by examining again the AWGN
channel defined by (2.1.15). Since the channel is memoryless, we have, using
(2.1.16) and (2.1.3) and the boundary convention,⁵

    Λ_m = {y : ln π_m − ||y − x_m||²/N_0 ≥ ln π_m′ − ||y − x_m′||²/N_0    for all m′ ≠ m}

        = {y : ||y − x_m′||² − ||y − x_m||² ≥ N_0 ln (π_m′/π_m)    for all m′ ≠ m}

        = {y : 2(y, x_m − x_m′) ≥ ℰ_m − ℰ_m′ + N_0 ln (π_m′/π_m)    for all m′ ≠ m}    (2.2.12)

⁵ We denote the inner product ∑_{n=1}^{N} a_n b_n of vectors a = (a_1, a_2, ..., a_N) and b = (b_1, b_2, ..., b_N)
by (a, b).





Figure 2.3 Signal sets and decision regions. (a) ℰ_1 = ℰ_3 < ℰ_2 = ℰ_4, π_2 = π_4 < π_1 = π_3. (b) ℰ_m = ℰ, π_m = 1/4, m = 1, 2, 3, 4.

Note also that, by virtue of (2.1.1) and (2.1.6),

    (x_m − x_m′, y) = ∫_0^T [x_m(t) − x_m′(t)] y(t) dt

while

    ℰ_m − ℰ_m′ = ∫_0^T [x_m²(t) − x_m′²(t)] dt

Thus it follows from (2.2.12) that for the AWGN channel the decision regions are
regions of the N-dimensional real vector space, bounded by linear
[(N − 1)-dimensional hyperplane] boundaries. Figure 2.3a and b gives two
examples of decision regions for M = 4 signals in N = 2 dimensions, the first with
unequal energies and prior probabilities, and the second with equal energies and
prior probabilities, i.e., with ℰ_m = ℰ and π_m = 1/M for all m. Decision regions for
more elaborate signal sets are treated in Probs. 2.1 and 2.2. We note also from this
result and (2.2.12) that the decision rule, and hence the decoder, for the AWGN
channel can be implemented as shown in Fig. 2.4, where the M multipliers each multiply
the N observables by the N signal component values and the products are successively
added to form the inner products. When the prior probabilities and energies
are all equal, the additional summing junctions can be eliminated. Examples of
decoders for other channels will be given in Secs. 2.8 and 2.12.

In most cases of interest, the message a priori probabilities are all equal; that
is,

    π_m = 1/M    m = 1, 2, ..., M    (2.2.13)






Figure 2.4 An implementation of decoder for AWGN channel. 

As was discussed in Chap. 1, this is in fact the situation when the original data 
source has been efficiently encoded into equiprobable sequences of data symbols. 
In this case, the factors π_m and π_m′ can be eliminated in (2.2.5) through (2.2.8) and
(2.2.12). The decision rule and corresponding decoder are then referred to as
maximum likelihood. The maximum likelihood decoder depends only on the channel,
and is often robust in the sense that it gives the same or nearly the same error
probability for each message regardless of the true message a priori probabilities. 
From a practical design point of view, this is important because different users 
may have different message a priori probabilities. Henceforth, in the text we shall 
assume only equiprobable messages 6 and thus the maximum likelihood decoder 
will be optimum. Unequal prior probability cases will be treated in the problems. 
For a memoryless channel, the logarithm of the likelihood function (2.2.4) is 
commonly called the metric; thus a maximum likelihood decoder computes the 
metrics for each possible signal vector, compares them, and decides in favor of the 
maximum. 
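As an illustration of this decision rule (an added sketch, not part of the original text), the following Python fragment accumulates the per-symbol metric ln p(y_n | x_mn) for each candidate signal vector and decides in favor of the maximum; the AWGN metric and the small signal set in the example are hypothetical placeholders.

```python
import math

def ml_decode(y, signals, log_likelihood):
    """Maximum likelihood decoding for a memoryless channel.

    y           : received vector (y_1, ..., y_N)
    signals     : list of candidate signal vectors x_m, each of length N
    log_likelihood(y_n, x_mn) : per-symbol metric ln p(y_n | x_mn)

    Returns the index m that maximizes the metric sum_n ln p(y_n | x_mn).
    """
    best_m, best_metric = None, -math.inf
    for m, x in enumerate(signals):
        metric = sum(log_likelihood(yn, xn) for yn, xn in zip(y, x))
        if metric > best_metric:
            best_m, best_metric = m, metric
    return best_m

# Example: AWGN channel with noise variance N0/2, for which
# ln p(y|x) = -(y - x)^2 / N0 up to an additive constant.
N0 = 2.0
awgn_metric = lambda yn, xn: -(yn - xn) ** 2 / N0
signals = [(+1.0, +1.0), (+1.0, -1.0), (-1.0, +1.0), (-1.0, -1.0)]
print(ml_decode((0.9, -1.2), signals, awgn_metric))   # -> 1
```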



2.3 ERROR PROBABILITY AND A SIMPLE UPPER BOUND 

Having established the optimum decoder to minimize error probability for any 
given set of observables, we now wish to determine its performance as a function 
of the signal set. Given that message H m (signal vector x m ) was sent and a given 

⁶ Note that for the AWGN channel, unequal prior probabilities require only inclusion of the
additive term in (2.2.12).




observation vector y was received, an error will occur if y is not in Λ_m (denoted
y ∉ Λ_m or y ∈ Λ̄_m). Since y is a random vector, the probability of error when x_m is
sent is then

$$P_{E_m} = \Pr\{\mathbf{y} \notin \Lambda_m \mid \mathbf{x}_m\} = 1 - \Pr\{\mathbf{y} \in \Lambda_m \mid \mathbf{x}_m\} = \sum_{\mathbf{y} \in \bar{\Lambda}_m} p_N(\mathbf{y} \mid \mathbf{x}_m) \tag{2.3.1}$$

We use the symbol ∑ to denote summation or integration over a subspace of the
observation space. Thus, for continuous channels (such as the AWGN channel)
with N-dimensional observation vectors, ∑ represents an N-dimensional integration
and p_N(·) is a density function. On the other hand, for discrete channels where
both the x_m and y vector components are elements of a finite symbol alphabet,
∑ represents an N-fold summation and p_N(·) represents a discrete distribution.

The overall error probability is then the average of the message error
probabilities

$$P_E = \sum_{m=1}^{M} \pi_m P_{E_m} \tag{2.3.2}$$

Although the calculation of P E by (2.3.2) is conceptually straightforward, it is 
computationally impractical in all but a few special cases (see, e.g., Probs. 2.4 and 
2.5). On the other hand, simple upper bounds on P E are available which in some 
cases give very tight approximations. When these fail, a more elaborate upper 
bound, derived in the next section, gives tight results for virtually all cases of 
practical interest. 

A simple upper bound on P_E is obtained by examining the complements Λ̄_m of the
decision regions. By definition (2.2.8) with π_m = 1/M for all m, Λ̄_m can be
written as⁷

$$\begin{aligned}
\bar{\Lambda}_m &= \{\mathbf{y} : \ln p_N(\mathbf{y} \mid \mathbf{x}_{m'}) \ge \ln p_N(\mathbf{y} \mid \mathbf{x}_m) \text{ for some } m' \ne m\} \\
&= \bigcup_{m' \ne m} \{\mathbf{y} : \ln p_N(\mathbf{y} \mid \mathbf{x}_{m'}) \ge \ln p_N(\mathbf{y} \mid \mathbf{x}_m)\} \\
&= \bigcup_{m' \ne m} \Lambda_{mm'}
\end{aligned} \tag{2.3.3}$$

where

$$\Lambda_{mm'} = \{\mathbf{y} : \ln p_N(\mathbf{y} \mid \mathbf{x}_{m'}) \ge \ln p_N(\mathbf{y} \mid \mathbf{x}_m)\}$$



⁷ We take for the moment the pessimistic view that all ties are resolved in favor of the other
message, thus at worst increasing the error probability. We note, however, that for continuous channels
such as the AWGN, the boundaries do not contribute measurably to the error probability.



Figure 2.5 Λ_{1m'} regions for signal set of Fig. 2.3b. Λ̄_1 = Λ_{12} ∪ Λ_{13} ∪ Λ_{14}.

Note that each of the terms Λ_{mm'} of the union is actually the decision region for x_{m'}
when there are only the two signals (messages) x_m and x_{m'}. An example based on
the signal set of Fig. 2.3b is shown in Fig. 2.5. Using (2.3.3) in (2.3.1), we find from
the axioms of probability that

$$P_{E_m} = \Pr\Big\{\mathbf{y} \in \bigcup_{m' \ne m} \Lambda_{mm'} \,\Big|\, \mathbf{x}_m\Big\} \le \sum_{m' \ne m} \Pr\{\mathbf{y} \in \Lambda_{mm'} \mid \mathbf{x}_m\} \triangleq \sum_{m' \ne m} P_E(m \to m') \tag{2.3.4}$$



where P_E(m → m') denotes the pairwise error probability when x_m is sent and x_{m'} is
the only alternative. We note that the inequality (2.3.4) becomes an equality⁸

⁸ Also, for some trivial channels, P_E(m → m') ≠ 0 for at most one m' ≠ m, thus obviously satisfying
(2.3.4) as an equality.




whenever the regions Λ_{mm'} are disjoint, which occurs only in the trivial case where
M = 2. For obvious reasons the bound of (2.3.4) is called a union bound.

For the AWGN channel, the terms of the union bound can be calculated
exactly, by using (2.2.12) with π_m = π_{m'}. This gives

$$P_E(m \to m') = \Pr\{Z_{mm'} \ge (\mathscr{E}_{m'} - \mathscr{E}_m)/N_0 \mid \mathbf{x}_m\} \tag{2.3.5}$$

where

$$Z_{mm'} = \frac{2}{N_0} \sum_{n=1}^{N} (x_{m'n} - x_{mn})\, y_n$$

But, since x_m was sent,

$$y_n = x_{mn} + n_n \tag{2.3.6}$$

for each n is a Gaussian random variable with mean x_{mn} and variance N_0/2. Also,
as was shown in (2.1.14), y_n and y_l are independent for all n ≠ l. Hence, since Z_{mm'}
is a linear combination of independent Gaussian variables, it must itself be Gaussian;
using (2.1.3) and (2.3.6), we find its mean

$$E(Z_{mm'} \mid \mathbf{x}_m) = \frac{2}{N_0} \sum_{n=1}^{N} (x_{m'n} - x_{mn})\, x_{mn} \triangleq \mu_Z \tag{2.3.7}$$

and its variance

$$\operatorname{var}(Z_{mm'} \mid \mathbf{x}_m) = \frac{4}{N_0^2} \sum_{n=1}^{N} (x_{m'n} - x_{mn})^2 \operatorname{var}(y_n \mid x_{mn}) = \frac{2}{N_0} \|\mathbf{x}_{m'} - \mathbf{x}_m\|^2 \triangleq \sigma_Z^2 \tag{2.3.8}$$

Thus

$$P_E(m \to m') = \int_{(\mathscr{E}_{m'} - \mathscr{E}_m)/N_0}^{\infty} \frac{1}{\sqrt{2\pi\sigma_Z^2}} \exp\!\left[-\frac{(z - \mu_Z)^2}{2\sigma_Z^2}\right] dz \tag{2.3.9}$$

This leads finally to the simple expression

$$P_E(m \to m') = Q\!\left(\frac{\|\mathbf{x}_m - \mathbf{x}_{m'}\|}{\sqrt{2N_0}}\right) \tag{2.3.10}$$

where Q(·) is the Gaussian integral function

$$Q(x) = \int_x^{\infty} \frac{e^{-y^2/2}}{\sqrt{2\pi}}\, dy \tag{2.3.11}$$


Returning to the error probability bound (2.3.4), we now derive a weaker but
completely general bound on P_E(m → m'). It follows immediately from (2.3.4) and
(2.3.3) that

$$P_E(m \to m') = \sum_{\mathbf{y} \in \Lambda_{mm'}} p_N(\mathbf{y} \mid \mathbf{x}_m) \tag{2.3.12}$$

where

$$\Lambda_{mm'} = \{\mathbf{y} : \ln p_N(\mathbf{y} \mid \mathbf{x}_{m'}) \ge \ln p_N(\mathbf{y} \mid \mathbf{x}_m)\}$$

and 𝒴_N is the entire observation space. We may express this alternatively as

$$P_E(m \to m') = \sum_{\mathbf{y} \in \mathscr{Y}_N} f(\mathbf{y})\, p_N(\mathbf{y} \mid \mathbf{x}_m) \tag{2.3.13}$$

where

$$f(\mathbf{y}) = \begin{cases} 1 & \mathbf{y} \in \Lambda_{mm'} \\ 0 & \text{otherwise} \end{cases}$$

But we may easily bound f(y) by

$$f(\mathbf{y}) \le \sqrt{\frac{p_N(\mathbf{y} \mid \mathbf{x}_{m'})}{p_N(\mathbf{y} \mid \mathbf{x}_m)}} \qquad \text{for all } \mathbf{y} \in \mathscr{Y}_N \tag{2.3.14}$$



where the upper branch bound follows from (2.3.12), while the lower branch
bound follows trivially. Then since the factors in the summands of (2.3.13) are
everywhere nonnegative, we may replace f(y) by its bound (2.3.14) and obtain

$$P_E(m \to m') \le \sum_{\mathbf{y}} \sqrt{p_N(\mathbf{y} \mid \mathbf{x}_{m'})\, p_N(\mathbf{y} \mid \mathbf{x}_m)} \tag{2.3.15}$$

The expression (2.3.15) is called the Bhattacharyya bound, and its negative logarithm
the Bhattacharyya distance. It is a special case of the Chernoff bound which
will be derived in the next chapter (see also Prob. 2.10).

Combining the union bound (2.3.4) with the general Bhattacharyya bound
(2.3.15), we obtain finally a bound on the error probability for the mth message

$$P_{E_m} \le \sum_{m' \ne m} P_E(m \to m') \le \sum_{\mathbf{y}} \sum_{m' \ne m} \sqrt{p_N(\mathbf{y} \mid \mathbf{x}_{m'})\, p_N(\mathbf{y} \mid \mathbf{x}_m)} \tag{2.3.16}$$

The interchange of summations is always valid because at least the sum over m' is
over a finite set. Equation (2.3.16) will be shown to be a special case of the more
elaborate bound derived in the next section.

To assess the tightness of the Bhattacharyya bound and to gain some intuition,
we again consider the AWGN channel and substitute the likelihood functions
of (2.1.15) into (2.3.15). Then, since 𝒴_N is a space of real vectors, we obtain

$$P_E(m \to m') \le \exp\{-\|\mathbf{x}_m - \mathbf{x}_{m'}\|^2 / 4N_0\} \tag{2.3.17}$$

Comparing the bound (2.3.17) with the exact expression (2.3.10), we find that we
have replaced Q(β) by exp(−β²/2). But it is well known (see Wozencraft and
Jacobs [1965]) that

$$\left(1 - \frac{1}{x^2}\right) \frac{e^{-x^2/2}}{x\sqrt{2\pi}} < Q(x) < \frac{e^{-x^2/2}}{x\sqrt{2\pi}} \qquad x > 0 \tag{2.3.18}$$

Thus, for large arguments, the bound (2.3.17) is reasonably tight. Note also that
the negative logarithm of (2.3.17) is proportional to the square of the distance
between signals. To carry this one step further and evaluate the tightness of the
union bound, we consider the special case of M equal-energy M-dimensional
signals, each with a unique nonzero component

$$x_{mn} = \sqrt{\mathscr{E}}\, \delta_{mn} = \begin{cases} \sqrt{\mathscr{E}} & n = m \\ 0 & n \ne m \end{cases}$$

(This is a special case of an orthogonal signal set and will be considered further in
Sec. 2.5.) Then (2.3.17) becomes

$$P_E(m \to m') \le e^{-\mathscr{E}/(2N_0)} \qquad \text{for all } m' \ne m$$




and consequently (2.3.16) yields the union bound

$$P_{E_m} \le (M - 1)\, e^{-\mathscr{E}/(2N_0)} \qquad m = 1, 2, \ldots, M \tag{2.3.19}$$

Thus this bound is useless when M ≥ exp(ℰ/2N_0). In the next section, we derive a
bound which is useful over a considerably extended range.

2.4 A TIGHTER UPPER BOUND ON ERROR PROBABILITY 

When the union bound fails to give useful results, a more refined technique will
invariably yield an improved bound which is tight over a significantly wider range.
Returning to the original general expression (2.3.1), we begin by defining the
subset of the observation space

$$\bar{\Lambda}'_m = \left\{\mathbf{y} : \sum_{m' \ne m} \left[\frac{p_N(\mathbf{y} \mid \mathbf{x}_{m'})}{p_N(\mathbf{y} \mid \mathbf{x}_m)}\right]^{\lambda} \ge 1\right\} \qquad \lambda > 0 \tag{2.4.1}$$

which contains the region of summation Λ̄_m. For if π_m = 1/M for all m, then, by
the definition (2.2.8) we have for any y ∈ Λ̄_m

$$\frac{p_N(\mathbf{y} \mid \mathbf{x}_{m''})}{p_N(\mathbf{y} \mid \mathbf{x}_m)} \ge 1 \qquad \text{for some } m'' \ne m \tag{2.4.2}$$

Moreover, since λ > 0, raising both sides of the inequality (2.4.2) to the λth power
does not alter the inequality, and summing over all m' ≠ m will include the m''
term for which (2.4.2) holds, in addition to other nonnegative terms. Hence (2.4.2)
implies

$$\sum_{m' \ne m} \left[\frac{p_N(\mathbf{y} \mid \mathbf{x}_{m'})}{p_N(\mathbf{y} \mid \mathbf{x}_m)}\right]^{\lambda} \ge 1 \qquad \text{for all } \mathbf{y} \in \bar{\Lambda}_m \tag{2.4.3}$$

It then follows from (2.4.1) and (2.4.3) that every y ∈ Λ̄_m is also in Λ̄'_m, and
consequently that

$$\bar{\Lambda}_m \subset \bar{\Lambda}'_m \tag{2.4.4}$$

Thus, since the summand in (2.3.1) is always nonnegative, by enlarging the domain
of summation of (2.3.1) we obtain the bound

$$P_{E_m} \le \sum_{\mathbf{y}} f(\mathbf{y})\, p_N(\mathbf{y} \mid \mathbf{x}_m) \qquad \text{where } f(\mathbf{y}) = \begin{cases} 1 & \mathbf{y} \in \bar{\Lambda}'_m \\ 0 & \text{otherwise} \end{cases} \tag{2.4.5}$$

Furthermore, we have

$$f(\mathbf{y}) \le \left\{\sum_{m' \ne m} \left[\frac{p_N(\mathbf{y} \mid \mathbf{x}_{m'})}{p_N(\mathbf{y} \mid \mathbf{x}_m)}\right]^{\lambda}\right\}^{\rho} \qquad \text{for all } \mathbf{y} \in \mathscr{Y}_N,\; \rho > 0,\; \lambda > 0 \tag{2.4.6}$$

for it follows from the definition (2.4.1) that, if y ∈ Λ̄'_m, the right side of (2.4.6) is
greater than or equal to 1, while, if y ∉ Λ̄'_m, the right side is still nonnegative.
Substituting the bound (2.4.6) for f(y) in (2.4.5) yields

$$P_{E_m} \le \sum_{\mathbf{y}} p_N(\mathbf{y} \mid \mathbf{x}_m) \left\{\sum_{m' \ne m} \left[\frac{p_N(\mathbf{y} \mid \mathbf{x}_{m'})}{p_N(\mathbf{y} \mid \mathbf{x}_m)}\right]^{\lambda}\right\}^{\rho} \qquad \lambda > 0,\; \rho > 0 \tag{2.4.7}$$

Since λ and ρ are arbitrary positive numbers, we may choose λ = 1/(1 + ρ) and
thus obtain

$$P_{E_m} \le \sum_{\mathbf{y}} [p_N(\mathbf{y} \mid \mathbf{x}_m)]^{1/(1+\rho)} \left\{\sum_{m' \ne m} [p_N(\mathbf{y} \mid \mathbf{x}_{m'})]^{1/(1+\rho)}\right\}^{\rho} \qquad \rho > 0 \tag{2.4.8}$$

This bound, which is due to R. G. Gallager [1965], is much less intuitive than the
union bound. However, it is clear that the union bound (2.3.16) is the special case
of this bound obtained by setting ρ = 1 in (2.4.8). To what extent the Gallager
bound is more powerful than the union bound will be demonstrated by the
example of the next section.
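To make the comparison concrete (an added sketch, not from the text), the Gallager bound (2.4.8) can be evaluated by brute-force summation over all output vectors when the observation space is small, for example a short block code on a BSC; the code and parameter values below are hypothetical.

```python
import itertools

def gallager_bound(codewords, p, rho):
    """Gallager bound (2.4.8) on P_Em for message m = 0, evaluated by brute
    force for a BSC with crossover probability p; codewords are binary tuples
    of length N.  Valid for any rho > 0; rho = 1 gives the union bound."""
    def likelihood(y, x):
        d = sum(yi != xi for yi, xi in zip(y, x))      # Hamming distance
        return (p ** d) * ((1 - p) ** (len(y) - d))

    N = len(codewords[0])
    x0 = codewords[0]
    total = 0.0
    for y in itertools.product((0, 1), repeat=N):
        inner = sum(likelihood(y, xk) ** (1.0 / (1 + rho))
                    for xk in codewords[1:])
        total += likelihood(y, x0) ** (1.0 / (1 + rho)) * inner ** rho
    return total

# Hypothetical code with M = 4 codewords of length N = 5.
code = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
for rho in (0.5, 1.0):
    print(rho, gallager_bound(code, p=0.05, rho=rho))
```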



2.5 EQUAL-ENERGY ORTHOGONAL SIGNALS ON THE 
AWGN CHANNEL 

To test the results of the preceding section on a specific signal set and channel, we
consider the most simply described and represented signal set on the AWGN
channel. This is the set of equal-energy orthogonal signals defined by the relations

$$\int_0^T x_m(t)\, x_n(t)\, dt = \mathscr{E}\, \delta_{mn} = \begin{cases} \mathscr{E} & m = n \\ 0 & m \ne n \end{cases} \qquad m, n = 1, 2, \ldots, M \tag{2.5.1}$$

In the next section, we shall consider several examples of orthogonal signal sets.
Since the signals are already orthogonal, the orthonormal basis functions are most
conveniently chosen as

$$\phi_m(t) = \frac{x_m(t)}{\sqrt{\mathscr{E}}} \qquad m = 1, 2, \ldots, M \tag{2.5.2}$$

which clearly satisfies (2.1.2). Then the signal vector components become simply

$$x_{mn} = \sqrt{\mathscr{E}}\, \delta_{mn} \qquad m, n = 1, 2, \ldots, M \tag{2.5.3}$$

and consequently the likelihood function for the AWGN channel given in (2.1.15)
becomes, with N = M,

$$p_N(\mathbf{y} \mid \mathbf{x}_m) = \prod_{n=1}^{M} \frac{\exp[-(y_n - \sqrt{\mathscr{E}}\,\delta_{mn})^2/N_0]}{\sqrt{\pi N_0}} = \left[\prod_{n=1}^{M} \frac{e^{-y_n^2/N_0}}{\sqrt{\pi N_0}}\right] e^{-\mathscr{E}/N_0} \exp\!\left[\frac{2\sqrt{\mathscr{E}}\, y_m}{N_0}\right] \qquad m = 1, 2, \ldots, M \tag{2.5.4}$$



Substituting into (2.4.8), we obtain after a few manipulations, for every m,

$$P_{E_m} \le e^{-\mathscr{E}/N_0} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \prod_{n=1}^{M} \frac{e^{-y_n^2/N_0}}{\sqrt{\pi N_0}} \exp\!\left[\frac{2\sqrt{\mathscr{E}}\, y_m}{N_0(1+\rho)}\right] \left\{\sum_{m' \ne m} \exp\!\left[\frac{2\sqrt{\mathscr{E}}\, y_{m'}}{N_0(1+\rho)}\right]\right\}^{\rho} dy_1 \cdots dy_M \qquad \rho > 0$$

Letting z_n = y_n/√(N_0/2), this becomes

$$P_{E_m} \le e^{-\mathscr{E}/N_0} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \prod_{n=1}^{M} \frac{e^{-z_n^2/2}}{\sqrt{2\pi}}\; g(z_m) \left[\sum_{m' \ne m} g(z_{m'})\right]^{\rho} dz_1 \cdots dz_M \qquad \rho > 0 \tag{2.5.5}$$

where

$$g(z) = \exp\!\left[\frac{\sqrt{2\mathscr{E}/N_0}}{1+\rho}\, z\right] \tag{2.5.6}$$



Since the M-fold product in (2.5.5) is the density function of M independent
normalized (zero mean, unit variance) Gaussian variables, (2.5.5) can be expressed
as

$$P_{E_m} \le e^{-\mathscr{E}/N_0}\, E\!\left\{g(z_m)\left[\sum_{m' \ne m} g(z_{m'})\right]^{\rho}\right\} = e^{-\mathscr{E}/N_0}\, E[g(z_m)]\; E\!\left\{\left[\sum_{m' \ne m} g(z_{m'})\right]^{\rho}\right\} \qquad \rho > 0 \tag{2.5.7}$$

where the expectation is with respect to the independent normalized Gaussian
variables z_1, z_2, …, z_M, and the factoring follows because z_m is independent of the
z_{m'} for m' ≠ m. Then the expectation of (2.5.6) is readily determined to be

$$E[g(z)] = \int_{-\infty}^{\infty} \frac{e^{-z^2/2}}{\sqrt{2\pi}} \exp\!\left[\frac{\sqrt{2\mathscr{E}/N_0}}{1+\rho}\, z\right] dz = \exp\!\left[\frac{\mathscr{E}}{N_0(1+\rho)^2}\right] \tag{2.5.8}$$



The second expectation in (2.5.7) cannot be evaluated in closed form. But it
can be upper bounded simply, provided we restrict the parameter ρ to lie in the
unit interval. For, by the Jensen inequality derived in App. 1B, we have for a
convex ∩ function f(·) of a random variable x

$$E[f(x)] \le f(E[x]) \tag{2.5.9}$$

Now letting

$$x = \sum_{m' \ne m} g(z_{m'}) \qquad \text{and} \qquad f(x) = x^{\rho}$$

which is a convex ∩ function provided 0 < ρ ≤ 1, we obtain from (2.5.9)

$$E\!\left\{\left[\sum_{m' \ne m} g(z_{m'})\right]^{\rho}\right\} \le \left\{E\!\left[\sum_{m' \ne m} g(z_{m'})\right]\right\}^{\rho} = (M-1)^{\rho}\, (E[g(z)])^{\rho} \qquad 0 < \rho \le 1 \tag{2.5.10}$$



where the equality follows because all the random variables z_{m'} are identically
distributed. Thus (2.5.7) becomes

$$P_{E_m} \le (M-1)^{\rho}\, e^{-\mathscr{E}/N_0}\, (E[g(z)])^{1+\rho} \tag{2.5.11}$$

This bound holds uniformly for all m and hence is also a bound on P_E. Finally,
substituting (2.5.8) into (2.5.11), we obtain

$$P_E \le (M-1)^{\rho} \exp\!\left[-\frac{\mathscr{E}}{N_0}\,\frac{\rho}{1+\rho}\right] \qquad 0 < \rho \le 1 \tag{2.5.12}$$

Clearly, (2.5.12) is a generalization of the union-Bhattacharyya bound (2.3.19), to
which it reduces when ρ = 1.

Before proceeding to optimize this bound with respect to ρ, it is convenient to
define the signal-to-noise parameter

$$C_T = \frac{S}{N_0} \qquad \text{nats/s} \tag{2.5.13}$$

where S = ℰ/T is the signal power or energy per second, and to define the rate⁹
parameter

$$R_T = \frac{\ln M}{T} = \frac{\ln q}{T_s} \qquad \text{nats/s} \tag{2.5.14}$$

as is appropriate since we assumed that the source emits one of q equally likely
symbols once every T_s seconds. Then, trivially bounding (M − 1) by M, we can
express (2.5.12) in terms of (2.5.13) and (2.5.14) as

$$P_E < \exp\{-T[E_0(\rho) - \rho R_T]\}$$

where

$$E_0(\rho) = \frac{\rho C_T}{1+\rho} \qquad 0 < \rho \le 1 \tag{2.5.15}$$

The tightest upper bound of this form is obtained by maximizing the negative
exponent of (2.5.15) with respect to ρ on the unit interval. But, for positive ρ, this
negative exponent is a convex ∩ function, as shown in Fig. 2.6, with maximum at
ρ = √(C_T/R_T) − 1. Thus for ¼ ≤ R_T/C_T ≤ 1, this maximum occurs within the unit
interval; but, for R_T/C_T < ¼, the maximum occurs at ρ > 1 and consequently the
negative exponent increases monotonically on the unit interval; hence, in the
latter case, the tightest bound results when ρ = 1. Substituting these values of ρ
into (2.5.15), we obtain

$$P_E \le e^{-T E(R_T)} \tag{2.5.16}$$

9 This is a scaling of the binary data rate for which the logarithm is usually taken to the base 2 and 
the dimensions are in bits per second. 



Figure 2.6 Negative exponent E_0(ρ) − ρR_T of upper bound (2.5.15): (a) ¼ ≤ R_T/C_T ≤ 1; (b) R_T/C_T < ¼.



where

$$E(R_T) = \begin{cases} \dfrac{C_T}{2} - R_T & 0 \le R_T/C_T \le \tfrac{1}{4} \\[2mm] \left(\sqrt{C_T} - \sqrt{R_T}\right)^2 & \tfrac{1}{4} \le R_T/C_T \le 1 \end{cases}$$



For R T > C T , the bound is useless and in fact, as will be discussed in the next 
chapter, in this region, P E -> 1 as T and M approach infinity. 

The bound (2.5.16) was first obtained in a somewhat more elaborate form by 
R. M. Fano [1961]. The negative exponent E(R T ), sometimes called the reliability 
function, is shown in Fig. 2.7. Note that the union-Bhattacharyya bound (2.3.19), 
corresponding to (2.5.12) with ρ = 1, would produce the straight-line exponent
shown dashed in the figure. Thus the Gallager bound dominates the union bound 
everywhere but at low rates, a property we shall find true for much more general 
channels and signal sets. 
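The two exponents can be compared numerically with a short sketch (added here, not part of the original text); reliability_exponent implements the piecewise form of E(R_T) given above, and the union-Bhattacharyya straight line is E_0(1) − R_T = C_T/2 − R_T.

```python
import math

def reliability_exponent(R, C):
    """Reliability function E(R_T) of (2.5.16) for orthogonal signals on the
    AWGN channel; R and C in nats per second."""
    if R >= C:
        return 0.0
    if R <= C / 4.0:
        return C / 2.0 - R                         # rho = 1 region
    return (math.sqrt(C) - math.sqrt(R)) ** 2      # optimized 0 < rho < 1

def union_bhattacharyya_exponent(R, C):
    # Straight-line exponent of (2.3.19)/(2.5.12) with rho = 1.
    return max(C / 2.0 - R, 0.0)

C = 1.0
for R in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(R, reliability_exponent(R, C), union_bhattacharyya_exponent(R, C))
```

The two exponents coincide for R_T ≤ C_T/4 and the Gallager exponent dominates above that rate, in agreement with Fig. 2.7.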

Figure 2.7 Negative exponent E(R_T)/C_T of optimized upper bound (2.5.16), with the straight-line exponent of the union-Bhattacharyya bound shown dashed.

Another choice of parameters, more physically oriented than those in (2.5.13)
and (2.5.14), involves the received energy per information bit. This is defined in
terms of the system of Fig. 2.1 where q = 2, as the energy per signal normalized by
the number of bits transmitted per signal, that is,

$$\mathscr{E}_b = \frac{\mathscr{E}}{\log_2 M} \tag{2.5.17}$$

Comparing with (2.5.13) and (2.5.14), we see that

$$\frac{C_T}{R_T} = \frac{\mathscr{E}_b/N_0}{\ln 2} \tag{2.5.18}$$

ℰ_b/N_0 is called the bit energy-to-noise density ratio. Thus, (2.5.16) and (2.5.18)
together imply that, with orthogonal signals, P_E decreases exponentially with T
for all ℰ_b/N_0 > ln 2.
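As a small numerical note (added, not in the original), the threshold ℰ_b/N_0 > ln 2 of (2.5.18) is easily expressed in decibels:

```python
import math

# The condition Eb/N0 > ln 2 implied by (2.5.18) with R_T < C_T, in decibels:
eb_no_threshold_db = 10.0 * math.log10(math.log(2.0))
print(eb_no_threshold_db)   # approximately -1.59 dB
```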

Ultimately, the most important consequence of (2.5.16) is that, by letting T, 
and hence M, become asymptotically large, we can make P E become arbitrarily 
small for all transmission rates R_T < C_T (or in this case, for all ℰ_b/N_0 > ln 2).
Again, this is a fundamental result applicable to all channels. However, making T 
very large may be prohibitive in system complexity. In fact, as will be shown in the 
next section, this is always the case for orthogonal signals. The major part of this 
book deals with the problem of finding signal sets, or codes, and decoding 
techniques for which system complexity remains manageable as T and M increase. 



2.6 BANDWIDTH CONSTRAINTS, INTERSYMBOL 
INTERFERENCE, AND TRACKING UNCERTAINTY 

Up to this point, the only constraint we have imposed on the signal set was the
fundamental one of finite energy. Almost as important is the dimensionality constraint
imposed by bandwidth requirements. The only limitation on dimensionality
discussed thus far was the one inherent in the fact that M signals defined over
a T-second interval can be represented using no more than M orthogonal basis
functions, or dimensions, as established by the Gram-Schmidt theorem (App. 2A).
These orthogonal functions (or signal sets) can take on an infinite multitude of
forms. Four of the most common are given in Table 2.1. The orthonormal relation
(2.1.2) can be verified in each case. An obvious advantage of the orthonormal set
of Example 1 is that, as contrasted with the general modulator and demodulator
of Fig. 2.1, only a single modulator and demodulator element need be implemented,
for this can be time-shared among the N dimensions, as shown in Fig. 2.8.
The observables {y_n} then appear serially as sampled outputs of the integrator.
These are generated by a device which integrates over each symbol period of
duration T/N, is sampled, dumps its contents, and then proceeds to integrate over
the next symbol period, etc.







5" 


! ! I 






o 


*o *o o 






J8 


j} ju ju 






g, 


!& !& ^ 






"3 


* "3 






E 


see 






1 


111 






c c c 






V V V ^ ^ 






* 


*- V 






vi vi vi y 








5 * VI Vl 






E .8 ? 


<u tlT u ^ > 
^ . 5L ? 








H ^ -H ^ 






I J i 1 i J -TT- ^^ 









o -^ ^ c g 








+ 






3 1 3 f = 






.S . 


,^~-~- - \ ^ -^^ , ^^ 






on c, 


.. 








,^ G G ? 






IS ^ IS 


hi H S C 






_____^ 


^^ O ~~?* O ^^ - ^ ^ - 


1 




II 


1 II II II 1 


1 


| 


5 " 


\ ^ ^ ^ 


.Sf 


C 
UN 




- ~S 


13 








1 

s 




4) 

e 3 
O g 


175 3 

| 1 


*o 




3 


"5 


I 




1 1 

03 03 


1 | 
2 ^ 






C C 


c -5 ^ e 


i 




c? o* 


S S o -2 

w i. >. w 


3 


U 


5 c 

9 9 

i D 


C ^^ O CH 

if 

1 1 f 1 


Jg 


1 


H H 


"S. u- o, 


CS 





-H <N 


rn ^ 


H 









70 



CHANNEL MODELS AND BLOCK CODING 71 



+ (^ 


nT/N 

J (n \)T/N 


1 


y^ 





(a) Modulator during 
/?th subinterval 



Demodulator during 
;jth subinterval 



(n ~ 1 )r/.V < t < nTlN n = 1 , 2, . . . , N 
Figure 2.8 Modulator and demodulator for time-orthogonal functions. 



The orthonormal set of Example 2 requires two
modulator-demodulator elements, as shown in Fig. 2.9, which are generally called
quadrature modulator-demodulators. On the other hand, Examples 3 and 4 would seem to
require a full bank of N demodulating elements.¹⁰

It is well known that the maximum number of orthogonal dimensions transmittable
in time T over a channel of bandwidth W is approximately

$$N \approx 2WT \tag{2.6.1}$$

The approximation comes about because of the freedom in the definition and
interpretation of bandwidth. To illustrate, we begin by giving a simplistic interpretation
of bandwidth. Suppose all communication channels on all frequency
bands are operating on a common time scale and using a common set of orthogonal
signals, such as the frequency-orthogonal functions of Example 3. Then,
depending on its requirements, a channel would be assigned a given number N of
basis functions which are sinusoids at consecutive frequency multiples of π/T

10 In fact, there exist both analog and digital techniques for implementing the entire bank with a 
single serial processing device (Darlington [1964], Oppenheim and Schafer [1975]). 



Figure 2.9 Demodulator for time-orthogonal quadrature-phase functions, (n − 1)T/N ≤ t < nT/N.




radians per second. Given that these were processed ideally by the demodulator of
Fig. 2.1b, all other channels would have no effect on the given channel's performance
since the basis functions of the other channels are orthogonal to those
assigned to the given channel and consequently would add zero components to
the integrator outputs y_1, y_2, …, y_N of the demodulator of Fig. 2.1b. Now
suppose we defined the channel bandwidth occupancy as the minimum frequency
separation between the basis functions of the Example 3 signal set times the
number of functions utilized by the channel. Then since the former is π/T radians
per second or 1/(2T) Hz, for a number of dimensions N, the bandwidth occupancy
W in Hz is given exactly by (2.6.1). We note also that, if the frequency separation
were any less, the functions would not be orthogonal.

The same argument can be made for the time-orthogonal functions of
Example 1 provided we take ω_0 to be a multiple of πN/T. Then it is readily verified
that, where the waveforms of any two channels overlap for a time interval T/N,
they are orthogonal over this interval and consequently the demodulator of one
channel is unaffected by the signals of the other. Thus the separation between
channels is exactly πN/T radians per second or N/(2T) Hz, again verifying (2.6.1).
In Examples 2 and 4 two phases (sine and cosine) are used for each frequency, but
as a result, consecutive frequencies must be spaced twice as far apart; hence the
bandwidth occupancy is the same as for Examples 1 and 3.

The weakness in the above arguments, aside from the obvious impossibility of 
regulating all channels to adopt a common modulation system with identical 
timing, is that, inherent in the transmitter, receiver, and transmission medium, 
there is a linear distortion which causes some frequencies to be attenuated more 
than others. This leads to the necessity of defining bandwidth in terms of the signal 
spectrum. The spectral density of the transmission just described is actually nonzero
for all frequencies, although its envelope decreases in proportion to the
frequency separation from ω_0. This, in fact, is a property of all time-limited
signals.

On the other hand, we may adopt another simplistic viewpoint, dual to the
above, and require that all our signals be strictly bandwidth-limited in the sense
that their spectral density is identically zero outside a bandwidth of W Hz. Then,
according to the classical sampling theorem, any signal or sequence of signals
satisfying this constraint can be represented as

$$x(t) = \sum_{n} (a_n \sin \omega_0 t + b_n \cos \omega_0 t)\, \sqrt{2}\; \frac{\sin[\pi W(t - n/W)]}{\pi W(t - n/W)} \tag{2.6.2}$$

This suggests then that any subset of the set of band-limited functions

$$\begin{aligned}
\phi_{2n}(t) &= \sqrt{2W}\; \frac{\sin[\pi W(t - n/W)]}{\pi W(t - n/W)}\, \sin \omega_0 t \\
\phi_{2n+1}(t) &= \sqrt{2W}\; \frac{\sin[\pi W(t - n/W)]}{\pi W(t - n/W)}\, \cos \omega_0 t
\end{aligned} \qquad n \text{ any integer} \tag{2.6.3}$$



Figure 2.10 Envelope of φ_{2n}(t) and φ_{2n+1}(t) of (2.6.3).



can be used as the basis functions for our transmission set. It is readily verified
that the functions are orthonormal over the doubly infinite interval, i.e., that

$$\int_{-\infty}^{\infty} \phi_j(t)\, \phi_k(t)\, dt = \delta_{jk}$$

As shown in Fig. 2.10, the envelope of both φ_{2n}(t) and φ_{2n+1}(t) reaches its peak at
t = n/W and has nulls at all other multiples of 1/W seconds. Furthermore, the
functions (2.6.3) can be regarded as the band-limited duals of the time-orthogonal
quadrature-phase orthonormal functions of Example 2, where we have exchanged
finite time and infinite bandwidth for finite bandwidth and infinite time. Another
interesting feature of this set of band-limited basis functions is that the demodulator
can be implemented as a pair of ideal bandpass filters (or quadrature multipliers
and ideal lowpass filters), sampled every 1/W seconds, producing at
t = n/W the two observables y_{2n} and y_{2n+1} (see Fig. 2.11; also Prob. 2.6). Thus
again it appears that we can transmit in this way 2W dimensions per second so
that, as T → ∞, where we can ignore the slight excess time-width of the basis
functions, (2.6.1) is again satisfied.

Figure 2.11 Demodulator for functions of Eqs. (2.6.3) (quadrature multipliers and ideal lowpass filters of bandwidth W/2, sampled at t = n/W).



On the basis of (2.6.1) we may draw a conclusion about the practicality of the
orthogonal signal set whose performance was analyzed in Sec. 2.5. There we found
that the error probability decreases exponentially with the product TE(R_T), but
it follows from (2.5.14) that the number of signals, and therefore orthogonal
dimensions, is N = M = e^{TR_T}. Consequently, according to (2.6.1), we find that, for
orthogonal signals,

$$W \approx \frac{e^{TR_T}}{2T} \tag{2.6.4}$$

which implies that, for all R_T > C_T/4, the bandwidth grows more rapidly with T
than the inverse error probability. This exponential bandwidth growth is a severe
handicap to the utilization of such signal sets. We shall find, however, in the next
chapter that there exist codes or signal sets whose dimensionality grows only
linearly with T and yet which perform nearly as well as the orthogonal set.
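A brief numerical sketch (added, not part of the text) makes the severity of (2.6.4) apparent: for a fixed rate R_T the required bandwidth for orthogonal signaling grows exponentially with the signal duration T. The rate value below is a hypothetical placeholder.

```python
import math

def orthogonal_bandwidth(T, R_T):
    """Approximate bandwidth (2.6.4) needed for M = exp(T*R_T) orthogonal
    signals: W ~ e^(T*R_T) / (2T) Hz, with R_T in nats per second."""
    return math.exp(T * R_T) / (2.0 * T)

R_T = 100.0   # nats per second (hypothetical)
for T in (0.1, 0.2, 0.4, 0.8):
    print(T, orthogonal_bandwidth(T, R_T))
```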

The impossibility of generating functions which are both time-limited and
band-limited has led to many approaches to a compromise (Slepian and Pollak
[1961], Landau and Pollak [1961, 1962]). In terms of the previous discussions, we
may generalize on the time-orthogonal functions of Table 2.1 (Examples 1 and 2)
by multiplying all the functions in question by an envelope function f(t − nT/N)
with the property that

$$\int_{-\infty}^{\infty} f\!\left(t - \frac{mT}{N}\right) f\!\left(t - \frac{nT}{N}\right) dt = \delta_{mn} \tag{2.6.5}$$

to obtain

$$\phi_n(t) = \sqrt{2}\, f\!\left(t - \frac{nT}{N}\right) \sin \omega_0 t \qquad n = 1, 2, \ldots \tag{2.6.6}$$

and

$$\begin{aligned}
\phi_{2n}(t) &= \sqrt{2}\, f\!\left(t - \frac{nT}{N}\right) \sin \omega_0 t \\
\phi_{2n+1}(t) &= \sqrt{2}\, f\!\left(t - \frac{nT}{N}\right) \cos \omega_0 t
\end{aligned} \qquad n = 1, 2, \ldots \tag{2.6.7}$$

Equation (2.6.7) includes as a special case the band-limited example of (2.6.3)
where the envelope function is taken to be

$$f(t) = \sqrt{W}\; \frac{\sin \pi W t}{\pi W t} \tag{2.6.8}$$

Typically, however, f(t) is chosen to be time-limited, though not necessarily to
T/N seconds, and, though of infinite frequency duration, its spectrum decreases
much more rapidly than 1/W.




The choice of envelope function, also called the spectrum shaping function, is 
not made on the basis of signal spectrum alone. For bandwidth is never an end 
unto itself; rather, the goal is to minimize interference and linear distortion in 
troduced by the channel. Thus, even if f(t) is the ideal band-limited function of 
(2.6.8) and the demodulator contains ideal lowpass filters (as shown in Fig. 2.11), 
the transmitter, transmitting media, and receiver introduce other (non-ideal) 
linear filtering characteristics which distort the waveform, so that the signal com 
ponent of the received waveform is no longer exactly f(t). As a result, we no longer
have the orthogonality condition (2.6.5) among the signals for successive dimen 
sions and the demodulator output for a given dimension is influenced by the 
signal component of adjacent dimensions. This phenomenon is called intersymbol 
interference. The degree of this effect depends on the bandwidth of the filters, or 
linearly distorting elements, in the transmitter, receiver, and medium. Only when 
the bandwidth of these distorting filters is on the order of that of f(t) does this 
become a serious problem. In such cases, of which data communication over 
analog telephone lines is a prime example, spectrum shaping functions are chosen 
very carefully to minimize the intersymbol interference. Also, with intersymbol 
interference present, the demodulator of Fig. 2.1b is no longer optimum because
of the nonorthogonality of signals for successive dimensions. Optimum demodulation
for such channels, which has been studied extensively (Lucky, Salz, and
Weldon [1968], Forney [1972], Omura [1971]), leads to nonindependent observ-
ables. In this chapter and the next we shall avoid the problem of intersymbol 
interference, by assuming a sufficiently wideband channel. In Chap. 4, we return 
to this issue and treat the problem as a natural extension of decoding techniques 
developed in that chapter. 

Additional sources of imperfection arise because of uncertainties in tracking
carrier frequency and phase, and symbol timing. For the time-orthogonal functions
(Example 1 of Table 2.1), uncertainty in phase or frequency will cause the
demodulator to attenuate the signal component of the output. For example, if the
frequency error is Δω and the phase error is φ, the attenuation factor is easily
shown to be approximately

$$\cos\phi\; \frac{\sin(T\,\Delta\omega/N)}{T\,\Delta\omega/N}$$

provided we take T(Δω)/N ≪ 1 and Tω_0/N ≫ 1 (see Prob. 2.7). For time-orthogonal
quadrature-phase functions (Example 2 of Table 2.1), the situation is
aggravated by the fact that incorrect phase causes intersymbol interference between
the two dimensions which share a common frequency. For with a phase
error φ, the signal component of y_{2n} is proportional to x_{2n} cos φ + x_{2n+1} sin φ,
while that of y_{2n+1} is proportional to x_{2n+1} cos φ − x_{2n} sin φ (see Prob. 2.7).
Finally, symbol time uncertainty will cause adjacent symbol overlap during
presumed symbol times and hence intersymbol interference. The influence of all
these imperfections on demodulation and decoding has been treated in the applications
literature (Jacobs [1967], Heller and Jacobs [1971]).




2.7 CHANNEL INPUT CONSTRAINTS 

The last section treated the causes of performance degradation which arise in the 
channel, comprising the modulator, transmitter, medium, receiver and demodula 
tor (Fig. 2.1). These imperfections and constraints are inherent in the continuous 
or analog components of the channel which, as we noted, are not easily control 
lable. In contrast, we now consider constraints on the channel inputs and outputs 
imposed by limitations in the encoder and decoder. Such limitations, which may 
lead to suboptimal operation, are imposed whenever the encoder and decoder are 
implemented digitally. In most cases, they produce a very small degradation 
which can be very accurately predicted and controlled. 

A digital implementation of the encoder requires that the encoder output
symbols {x_mn} be elements of a finite alphabet. The most common and simplest
code alphabet is binary. For an AWGN channel with binary inputs, for any m and
n, the choice x_mn = ±√ℰ_s (where ℰ_s is the energy per channel symbol) guarantees
a constant-energy transmitted signal. The binary choice can be implemented either
by amplitude modulation (plus or minus amplitude), or by phase modulation (0°
or 180° phases) of any of the basis function sets discussed in the last section. When
used with time-orthogonal functions (Table 2.1, Example 1), this is usually referred
to as biphase modulation; when used with time-orthogonal quadrature-phase
functions (Table 2.1, Example 2), this is usually called quadriphase modulation.
The reason for the latter term is that two successive encoded symbols generate
the modulator output signal in the single interval (n − 1)T/N < t ≤ nT/N, that is,

$$\pm\sqrt{\mathscr{E}_s}\,\sqrt{\frac{2N}{T}}\, \sin \omega_0 t \;\pm\; \sqrt{\mathscr{E}_s}\,\sqrt{\frac{2N}{T}}\, \cos \omega_0 t = 2\sqrt{\frac{\mathscr{E}_s N}{T}}\, \sin(\omega_0 t + \theta_n)$$

where θ_n = π/4 + kπ/2, k = 0, 1, 2, or 3.

Note that this results in twice the symbol energy of biphase modulation, but it 
is spread out over twice the time, since two code symbols are transmitted ; hence the 
signal energy per symbol and consequently the power is the same. We note also 
that, as shown in Sec. 2.2, the demodulator outputs are the same in both cases, 
and consequently the performance is identical. 1 1 

An obvious disadvantage of a binary-code symbol alphabet is that it limits the
number of messages which can be transmitted with N dimensions, or channel
symbols, to M ≤ 2^N and hence constrains the transmission rate to
R_T ≤ (N/T) ln 2 nats/s. We may remove this limitation by increasing



11 Provided of course the phase tracking errors are negligible; otherwise the intersymbol interfer 
ence from the quadrature component, as discussed in Sec. 2.6, can degrade performance relative to the 
biphase case. 



the code alphabet size to any integer q, although, for efficient digital implementation
reasons, q is usually taken to be a power of 2. Then M ≤ q^N and
R_T ≤ (N/T) ln q nats per second, which can be made as large as desired or as
permitted by the channel noise, as we shall find. As an aside, we note that M = N
orthogonal signals can always be implemented as biphase-modulated time-orthogonal
basis functions, whenever N is a multiple of 4 (see Prob. 2.5 for
N = 2^K, K ≥ 2).

For the time-orthogonal waveforms 1 or 2 of Table 2.1, the modulator for
q-ary code symbols is commonly implemented as a multiple amplitude modulator.
For example, with a four-symbol alphabet, the modulator input symbols might be
chosen as {a_1, a_2, a_3, a_4}. With equiprobable symbols and a_1 = −a_2 = a,
a_3 = −a_4 = 3a, the average symbol energy is ℰ_s = 5a². Of course, a disadvantage
is that the transmitted power is no longer constant. A remedy for this is to use
multiphase rather than multiamplitude modulation. This is easily conceived as a
generalization of the frequency-orthogonal quadrature-phase basis set of Table
2.1, Example 4. A 16-phase modulation system would transmit a symbol from a 16-
symbol alphabet as

$$\sqrt{\frac{2\mathscr{E}_s}{T}}\left[\cos\frac{2\pi k}{L}\, \sin \omega_0 t + \sin\frac{2\pi k}{L}\, \cos \omega_0 t\right] \qquad 0 \le t \le T$$



where k = 0, 1, 2, ..., 15 and L= 16. We note, however, that this requires two 
dimensions per symbol so that, in terms of bandwidth or dimensionality, this 
16-symbol code alphabet simultaneously modulating two dimensions of a time- 
orthogonal quadrature-phase system is equivalent to the four-symbol alphabet 
amplitude modulating one dimension at a time. The signal geometry of the two 
systems for equal average symbol energy ℰ_s is shown in Fig. 2.12. It is easily



Figure 2.12 Two examples of 16 signals in two dimensions: (a) multiamplitude signal set in two dimensions; (b) multiphase signal set in two dimensions.

shown (Prob. 2.8) that, for equal ℰ_s, the amplitude modulation system outperforms
the phase modulation system, but the latter has the advantage of constant
energy. Obviously, we could generalize to signals on a three-dimensional sphere
or higher, but for both practical and theoretical reasons to be discussed in the next
chapter, this is not profitable.
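The claim can be checked with a small calculation (added here as an illustration, not from the text): for equal average symbol energy, compare the minimum Euclidean distances of the two 16-point sets of Fig. 2.12, a pair of four-level amplitude symbols versus one 16-phase symbol.

```python
import math

def min_distances(E_avg):
    """Minimum Euclidean distances of two 16-point signal sets in two
    dimensions with equal average symbol energy E_avg (cf. Fig. 2.12):
    (a) four-level amplitude modulation (levels +-a, +-3a) on each dimension,
    (b) 16-phase modulation on a circle of constant energy."""
    a = math.sqrt(E_avg / 10.0)          # since E[x^2] per dimension = 5a^2
    d_amplitude = 2.0 * a
    r = math.sqrt(E_avg)                 # constant-energy circle radius
    d_phase = 2.0 * r * math.sin(math.pi / 16.0)
    return d_amplitude, d_phase

print(min_distances(1.0))   # the amplitude set has the larger minimum distance
```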



2.8 CHANNEL OUTPUT QUANTIZATION: DISCRETE 
MEMORYLESS CHANNELS 

We now turn to limitations imposed by the digital implementation of the decoder.
Considering first the AWGN optimum decoder (Fig. 2.4), we note the obvious
incentive to implement digitally the discrete inner-product calculations (x_m, y) =
∑_{n=1}^{N} x_{mn} y_n. While the input symbols {x_mn} are normally elements of a finite set as
discussed in the last section, the outputs {y_n} are continuous Gaussian variables
and consequently must be quantized to a finite number of levels if digital multiplications
and additions are to be performed. An example of an eight-level uniform
quantizer is shown in Fig. 2.13. Uniform quantizers are most commonly
employed, although nonuniform quantization levels may improve performance
to a slight degree.

The performance of a quantized, and hence suboptimum, version of the optimum
decoder of Fig. 2.4 is difficult to analyze precisely. On the other hand,
quantization of the output to one of J levels simply transforms the AWGN
channel to a finite-input, finite-output alphabet channel. An example of a biphase-modulated
AWGN channel with output quantized to eight levels is shown in
Fig. 2.14. Denoting the binary input alphabet by {a_1, a_2} where a_1 = −a_2 = √ℰ_s
and denoting the output alphabet by {b_1, b_2, …, b_8}, we can completely describe



Figure 2.13 Uniform eight-level quantizer.



Figure 2.14 Quantized demodulator and channel model: (a) quantized demodulator for binary PSK signals; (b) quantized channel model.



the channel by the conditional probabilities or likelihood functions

$$p_N(\mathbf{y} \mid \mathbf{x}_m) = \prod_{n=1}^{N} p(y_n \mid x_{mn})$$

where for each m and n

$$p(y_n = b_j \mid x_{mn} = a_k) = \Pr\{y_n \in B_j \mid x_{mn} = a_k\} \qquad j = 1, 2, \ldots, J; \quad k = 1, 2 \tag{2.8.1}$$

and B_j is the jth quantization interval. We note that, while a_k can actually be
associated with the numerical value of the signal amplitude, b_j is an abstract
symbol. Although we could associate with b_j the value of the midpoint of the
interval, there is nothing gained by doing this. More significant are the facts that
the vector likelihood function can be written as the product of symbol conditional
probabilities and that all symbols are identically distributed. In this case, of
course, this is just a consequence of the AWGN channel for which individual
observables (demodulator outputs prior to quantization) are independent. A
channel satisfying these conditions is called memoryless, and when its input and
output alphabets are finite it is called a discrete memoryless channel (DMC) (cf.
Sec. 1.2). Other examples of discrete memoryless channels, derived from physical
channels other than the AWGN channel, will be treated in Sec. 2.12. Figure 2.14b
completely describes the DMC just considered in terms of its binary-input, octal-output
conditional probability distribution, sometimes called the channel transition
distribution.

Figure 2.15 Binary-symmetric channel.



Clearly, this distribution, and consequently the decoder
performance, depends on the location of the quantization levels, which in turn
must depend on the signal level and noise variance. Thus, to implement an effective
multilevel quantizer, a demodulator must incorporate automatic gain control
(AGC).
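As an added sketch (not part of the original text), the transition probabilities (2.8.1) of the quantized binary-input AWGN channel follow directly from the Gaussian integral function; the eight-level uniform quantizer step chosen below is an arbitrary placeholder which in practice would be scaled, via the AGC, to the signal level and noise variance as discussed above.

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def transition_probabilities(Es, N0, thresholds):
    """Transition probabilities (2.8.1) of the quantized binary-input AWGN
    channel: p(b_j | a_k) = Pr{y_n in B_j | x_mn = a_k}, where the B_j are the
    intervals defined by the sorted thresholds and a_k = +-sqrt(Es)."""
    sigma = math.sqrt(N0 / 2.0)
    edges = [-math.inf] + sorted(thresholds) + [math.inf]
    table = {}
    for a in (+math.sqrt(Es), -math.sqrt(Es)):
        row = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            # Pr{lo < y <= hi} for y ~ N(a, sigma^2), written with Q(.)
            p_lo = Q((lo - a) / sigma) if lo != -math.inf else 1.0
            p_hi = Q((hi - a) / sigma) if hi != math.inf else 0.0
            row.append(p_lo - p_hi)
        table[a] = row
    return table

# Eight-level uniform quantizer as in Fig. 2.13 (step a chosen here as 0.5).
a = 0.5
probs = transition_probabilities(Es=1.0, N0=1.0,
                                 thresholds=[k * a for k in range(-3, 4)])
for inp, row in probs.items():
    print(inp, [round(p, 3) for p in row])
```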

The simplest DMC is the one with binary input and output symbols, which
may be derived from a binary-input AWGN channel by utilizing a two-level
quantizer. The quantizer, whose output is b_1 for nonnegative inputs and b_2 otherwise,
is generally called a hard quantizer (or limiter), in contrast with a multilevel
quantizer which is usually called a soft quantizer. The resulting hard-quantized
output channel is the binary-symmetric channel (BSC). When derived from the
AWGN channel, the BSC has the conditional distribution diagram shown in Fig.
2.15 with p = Pr{y = b_2 | x = a_1} = Pr{y = b_1 | x = a_2}, generally called the crossover
probability, being the same as the symbol error probability for an uncoded
digital communication system. The principal advantage of hard quantizing the 
AWGN channel into a BSC is that no knowledge is needed of the signal energy. 
In contrast, as commented above, the soft quantizer requires this information 
and hence must employ AGC. On the other hand, as will be elaborated on in 
Sec. 2.11 and the next chapter, the hard quantizer considerably degrades per 
formance relative to a properly adjusted soft quantizer. 

With quadriphase modulation, demodulated by the system of Fig. 2.9, the 
same quantization scheme can be used on each of the two streams of observables, 
resulting in exactly the same channel as with biphase modulation, provided we 
can ignore the quadrature intersymbol interference discussed in Sec. 2.6. Multi- 
amplitude modulation can be treated in the same way as two-level modulation. 
For the case of Q input levels and J-level output quantization, the AWGN
channel is reduced to a Q-input, J-output DMC. With Q-phase multiphase modulation
employing both quadrature dimensions, as shown in case b of Fig. 2.12, the
quantization may be more conveniently implemented in phase rather than 
amplitude. 

Once the AWGN channel has been reduced to a DMC by output quantiza 
tion, the decoder of Fig. 2.4, or its digital equivalent operating on quantized data, 
is no longer optimum. Rather, the optimum decoder must implement the decision 
rule (2.2.7) which is optimum for the resulting memoryless channel. For 
equiprobable messages, this reduces to the maximum likelihood decoder or 
decision rule 

$$\hat{H} = H_m \quad \text{if} \quad \sum_{n=1}^{N} [\ln p(y_n \mid x_{mn}) - \ln p(y_n \mid x_{m'n})] \ge 0 \quad \text{for all } m' \ne m \tag{2.8.2}$$




where for each m and n

x_{mn} ∈ {a_1, a_2, …, a_Q}    and    y_n ∈ {b_1, b_2, …, b_J}

For the BSC, (2.8.2) reduces to an even simpler rule. For, as shown in Fig. 2.15,
the conditional probability for the nth symbol is p if y_n ≠ x_{mn} and is (1 − p) if
y_n = x_{mn}. Suppose that the received vector y = (y_1, …, y_N) differs from a transmitted
vector x_m = (x_{m1}, …, x_{mN}) in exactly d_m positions. The number d_m is then
said to be the Hamming distance between vectors x_m and y. The conditional
probability of receiving y given that x_m was transmitted is

$$p_N(\mathbf{y} \mid \mathbf{x}_m) = \prod_{n=1}^{N} p(y_n \mid x_{mn}) = p^{d_m}(1-p)^{N - d_m} \tag{2.8.3}$$

Note that, because of the symmetry of the channel, this likelihood function does
not depend on the particular value of the transmitted symbol, but only on whether
or not the channel caused a transition from a_1 to b_2 or from a_2 to b_1. Thus

$$\ln \prod_{n=1}^{N} p(y_n \mid x_{mn}) = \sum_{n=1}^{N} \ln p(y_n \mid x_{mn}) = N \ln(1-p) - d_m \ln\frac{1-p}{p} \tag{2.8.4}$$

Substituting (2.8.4) into (2.8.2), we obtain for the BSC the rule

$$\hat{H} = H_m \quad \text{if} \quad (d_{m'} - d_m) \ln\frac{1-p}{p} \ge 0 \quad \text{for all } m' \ne m$$

Without loss of generality, we may assume p < ½ (for if this is not the case, we may
make it such by just interchanging the indices on b_1 and b_2). Then the decoding
rule becomes

$$\hat{H} = H_m \quad \text{if} \quad d_m \le d_{m'} \quad \text{for all } m' \ne m \tag{2.8.5}$$

where d_m is the Hamming distance between x_m and y. In each case, ties are
assumed to be resolved randomly as before.

Hence, we conclude that, for the BSC, the maximum likelihood decoder re 
duces to a minimum distance decoder wherein the received vector y is compared 
with each possible transmitted signal vector and the one closest to y, in the sense 
of minimum number of differing symbols (Hamming distance), is chosen as the 
correct transmitted vector. Although this suggests a much simpler mechanization, 
this rule could be implemented as in Fig. 2.4 if we took y and x m to be binary 
vectors and a v = b l = + 1 and a 2 = b 2 = 1. 
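A minimal sketch of the minimum-distance rule (2.8.5) (added, not from the text; ties here go to the lowest index rather than being randomized as in the text):

```python
def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

def min_distance_decode(y, codewords):
    """Minimum-distance decoder (2.8.5) for the BSC: choose the codeword
    closest to the received vector y in Hamming distance."""
    return min(range(len(codewords)),
               key=lambda m: hamming_distance(y, codewords[m]))

# Hypothetical code with M = 4 codewords of length N = 5.
codewords = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
print(min_distance_decode((1, 1, 1, 0, 1), codewords))   # -> 1
```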

For discrete memoryless channels other than the BSC, the decoding rule 
(2.8.2) can be somewhat simplified in many cases (see Prob. 2.9), but usually not 
to the point of being independent of the transition probabilities as has just been 
shown for the BSC. Generally, the rule will depend on these probabilities and 
hence on the energy-to-noise ratios as well as on the quantization scheme used. 




This leads to a potential decoder mismatch or suboptimality due to unknown 
signal levels (incorrect AGC) or noise variance. Also, since the transition probabil 
ities are themselves real numbers, quantization of these is required to implement 
the rule of (2.8.2) digitally with a resulting minor degradation. As we shall discover 
in later chapters, some decoders are relatively insensitive to channel statistics, 
while others degrade rapidly as a function of decoder mismatch. However, it is 
generally true that, for binary inputs, even with a mismatched decoder, perfor 
mance of a multilevel (soft) quantized channel decoder is superior to that of a 
two-level (hard) quantized channel decoder. In performance evaluation of binary- 
input channels with variable output quantization, we shall generally treat the 
limiting cases of an AWGN channel without quantization and with hard quanti 
zation (BSC) to establish the two limits. Some intermediate cases will also be 
treated to indicate the rate of approach of multilevel (soft) quantization to the 
unquantized ideal case. 



2.9 LINEAR CODES 

Thus far we have devoted considerable attention to all parts of the communication
system except the encoder. In its crudest form, encoding can be regarded as a
table-look-up operation; each of the M signal vectors x_1, x_2, …, x_M is stored in an
N-stage register of a memory bank and, whenever message H_m is to be transmitted,
the corresponding signal vector x_m is read into the modulator. Alternatively,
we may label each of the M = q^K messages, as in Fig. 2.1, by a K-vector
over a q-ary alphabet. Then the encoding becomes a one-to-one mapping from the
set of message vectors {u_m = (u_{m1}, …, u_{mK})} into the set of signal vectors
{x_m = (x_{m1}, …, x_{mN})}. We shall concern ourselves primarily with binary alphabets;
thus initially we take u_{mn} ∈ {0, 1} for all m, n and generalize later to q > 2. A
particularly convenient mapping to implement is a linear code. For binary-input
data, a linear code consists simply of a set of modulo-2 linear combinations of the
data symbols, which may be implemented as shown in Fig. 2.16. The K-stage
register corresponds precisely to the data block register in the general system
diagram of Fig. 2.1. The coder then consists of L modulo-2 adders, each of which
adds together a subset of the data symbols u_{m1}, u_{m2}, …, u_{mK} to generate one code
symbol v_{mn} where n = 1, 2, …, L as shown in Fig. 2.16. We shall refer to the vector
v_m = (v_{m1}, v_{m2}, …, v_{mL}) as the code vector. Modulo-2 addition of binary symbols
will be denoted by ⊕ and is defined by

$$0 \oplus 1 = 1 \oplus 0 = 1 \qquad 0 \oplus 0 = 1 \oplus 1 = 0 \tag{2.9.1}$$

It is readily verified by exhaustive testing that this operation is associative and
commutative; that is, if a, b, c are binary symbols (0 or 1), then

$$(a \oplus b) \oplus c = a \oplus (b \oplus c) \tag{2.9.2a}$$

Figure 2.16 Linear block encoder: a K-stage data register (u_{m1}, u_{m2}, …, u_{mK}), L modulo-2 adders producing the code symbols v_{m1}, v_{m2}, …, v_{mL}, and a binary-symbol-to-channel-symbol mapping.



and

$$a \oplus b = b \oplus a \tag{2.9.2b}$$



Thus the first stage of the linear coding operation for binary data can be represented
by

$$\begin{aligned}
v_{m1} &= u_{m1} g_{11} \oplus u_{m2} g_{21} \oplus \cdots \oplus u_{mK} g_{K1} \\
v_{m2} &= u_{m1} g_{12} \oplus u_{m2} g_{22} \oplus \cdots \oplus u_{mK} g_{K2} \\
&\;\,\vdots \\
v_{mL} &= u_{m1} g_{1L} \oplus u_{m2} g_{2L} \oplus \cdots \oplus u_{mK} g_{KL}
\end{aligned} \tag{2.9.3}$$



where g_{kn} ∈ {0, 1} for all k, n. The term u_{mk} g_{kn} is an ordinary multiplication, so that
u_{mk} enters into the particular combination for v_{mn} if and only if g_{kn} = 1. The matrix

$$\mathbf{G} = \begin{bmatrix} g_{11} & g_{12} & \cdots & g_{1L} \\ g_{21} & g_{22} & \cdots & g_{2L} \\ \vdots & & & \vdots \\ g_{K1} & g_{K2} & \cdots & g_{KL} \end{bmatrix} = \begin{bmatrix} \mathbf{g}_1 \\ \mathbf{g}_2 \\ \vdots \\ \mathbf{g}_K \end{bmatrix} \tag{2.9.4a}$$

is called the generator matrix of the linear code and {g_k} are its row vectors. Thus,
(2.9.3) can be expressed in vector form as

$$\mathbf{v}_m = \mathbf{u}_m \mathbf{G} = \bigoplus_{k=1}^{K} u_{mk}\, \mathbf{g}_k \tag{2.9.4b}$$

where the summation is performed with modulo-2 addition.




where both u m and v m are binary row vectors. Note that the set of all possible 
codewords is the space spanned by the row vectors of G. The rows of G then form 
a basis with the information bits being the basis coefficients for the codeword. 
Since the basis vectors are not unique for any linear space, it is clear that there are 
many generator matrices that give the same set of codewords. 
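As an added illustration (not part of the original text), the encoding operation (2.9.4b) is simply a modulo-2 matrix multiplication; the small generator matrix below is a hypothetical example.

```python
def encode(u, G):
    """Linear block encoding (2.9.4b): v = u G with modulo-2 arithmetic.
    u is a K-bit data vector, G a K x L generator matrix of 0s and 1s."""
    K, L = len(G), len(G[0])
    return tuple(sum(u[k] * G[k][n] for k in range(K)) % 2 for n in range(L))

# Hypothetical (L = 5, K = 2) generator matrix.
G = [[1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1]]
for u in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(u, encode(u, G))
```

Adding any two of the printed code vectors modulo 2 reproduces another code vector, which anticipates the closure property discussed below.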

To complete the linear encoding, we must convert the L-dimensional code
vector v_m with elements in {0, 1} into the N-dimensional real-number signal vector
x_m = (x_{m1}, x_{m2}, …, x_{mN}). For the simplest cases of biphase or quadriphase modulation,
we need only the one-dimensional mapping

$$x_{mn} = \begin{cases} +\sqrt{\mathscr{E}_s} & v_{mn} = 0 \\ -\sqrt{\mathscr{E}_s} & v_{mn} = 1 \end{cases} \tag{2.9.5}$$

so that, in fact, L = N. For more elaborate modulation schemes, we must take
L > N. For example, for the four-level amplitude modulation scheme of Fig. 2.12a
we need to take L = 2N. Then the four possible combinations of the pair
(v_{ml}, v_{m,l+1}) (where l is odd) give rise to one of four values for the signal (amplitude)
symbol x_{mn} [where n = (l + 1)/2]. Similarly, for the 16-phase modulation
scheme of Fig. 2.12b, we must take L = 4N and use four consecutive v-symbols to
select one of the 16-phase x-symbols.

Before considering the code or modulation space further, we shall examine an
extremely important property of linear codes known as closure: namely, the
property that the modulo-2 termwise sum of two code vectors v_m and v_k

$$\mathbf{v}_m \oplus \mathbf{v}_k = (v_{m1} \oplus v_{k1},\, v_{m2} \oplus v_{k2},\, \ldots,\, v_{mL} \oplus v_{kL})$$

is also a code vector. This is easily shown, for, by applying the associative law to
v_m = u_m G and v_k = u_k G, we obtain

$$\mathbf{v}_m \oplus \mathbf{v}_k = \mathbf{u}_m \mathbf{G} \oplus \mathbf{u}_k \mathbf{G} = (\mathbf{u}_m \oplus \mathbf{u}_k)\mathbf{G}$$

But since u_m and u_k are two K-dimensional data vectors, their modulo-2 sum must
also be a data vector, for the 2^K data vectors must coincide with all possible binary
vectors of dimension K. Thus, denoting this data vector u_m ⊕ u_k = u_r, it follows
that

$$\mathbf{v}_m \oplus \mathbf{v}_k = \mathbf{u}_r \mathbf{G} = \mathbf{v}_r \tag{2.9.6}$$

which is, therefore, a code vector. We generally label the data vectors consecutively
with the convention that u_1 = (0, 0, …, 0) = 0. It follows from (2.9.4) that
also v_1 = 0. The vector 0 is called the identity vector since, for any other code
vector,

$$\mathbf{v}_m \oplus \mathbf{0} = \mathbf{v}_m \tag{2.9.7}$$




We note also that, as a consequence of (2.9.1),

$$\mathbf{v}_m \oplus \mathbf{v}_m = \mathbf{0} \tag{2.9.8}$$

which means that every vector is its own negative (or additive inverse) under the
operation of modulo-2 addition. When a set satisfies the closure property (2.9.6),
the identity property (2.9.7), and the inverse property (2.9.8) under an operation
which is associative and commutative (2.9.2), it is called an Abelian group. Hence
linear codes are also called group codes. They are also called parity-check codes,
since the code symbol v_{mn} = 1 if the "parity" of the data symbols added to form
v_{mn} is odd, and v_{mn} = 0 if the parity is even.

An interesting consequence of the closure property is that the set of Hamming
distances from a given code vector to the (M − 1) other code vectors is the same
for all code vectors. To demonstrate this, it is convenient to define first the Hamming
weight of a binary vector as the number of ones in the vector. The Hamming
distance between two vectors v_m and v_{m'} is then just the Hamming weight of their
modulo-2 termwise sum, denoted w(v_m ⊕ v_{m'}). For example, if

v_m = (01101)   and   v_{m'} = (10110)

then

w(v_m ⊕ v_{m'}) = w(11011) = 4

which is clearly the number of differing positions and hence the Hamming distance
between the vectors. Now the set of distances of the other code vectors from
v_1 = 0 is clearly {w(v_2), w(v_3), …, w(v_M)}. On the other hand, the set of distances
from any code vector v_m ≠ 0 to the other code vectors is just {w(v_m ⊕ v_{m'}): all
m' ≠ m}. But, by the closure property, v_m ⊕ v_{m'} is some code vector other than v_1.
Furthermore, for any two distinct vectors v_{m'}, v_{m''} where m' ≠ m, m'' ≠ m, and
m' ≠ m'' we have

v_{m'} ⊕ v_m ≠ v_{m''} ⊕ v_m    and    v_{m'} ⊕ v_m ≠ v_1 = 0

Hence, as the index m' varies over all code vectors other than m, the operation
v_m ⊕ v_{m'} generates all (M − 1) distinct nonzero code vectors and consequently the
entire set except v_1. It follows that

{v_m ⊕ v_{m'}: all m' ≠ m} = {v_2, v_3, …, v_M}    (2.9.9)

and thus also that

{w(v_m ⊕ v_{m'}): all m' ≠ m} = {w(v_2), w(v_3), …, w(v_M)}    (2.9.10)

which means that the set of distances of all other code vectors from a given code
vector v_m is the same as the set of distances of all code vectors from v_1. Thus,
without loss of generality, we may compute just the distances from v_1 or, equivalently,
the weights of all the nonzero code vectors.
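A short added sketch (not from the text) of this computation: enumerate the nonzero data vectors, form the corresponding codewords, and record their Hamming weights; the generator matrix below is a hypothetical example.

```python
import itertools

def weight_distribution(G):
    """Hamming weights of all nonzero codewords generated by G (mod 2).
    By the closure property, these are also the distances from any codeword
    to all the others."""
    K, L = len(G), len(G[0])
    weights = []
    for u in itertools.product((0, 1), repeat=K):
        if not any(u):
            continue                       # skip the all-zeros data vector
        v = [sum(u[k] * G[k][n] for k in range(K)) % 2 for n in range(L)]
        weights.append(sum(v))
    return sorted(weights)

G = [[1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1]]
print(weight_distribution(G))   # weights of the 2^K - 1 nonzero codewords
```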

Another very useful consequence of the closure property of linear codes is
that, when these are used on a binary-input, output-symmetric channel with maximum
likelihood decoding, the error probability for the mth message is the same for
all m; that is,

$$P_{E_m} = P_E \qquad m = 1, 2, \ldots, M \tag{2.9.11}$$

as we next show.

A binary-input symmetric channel, which includes the biphase and quadriphase
AWGN channels as well as all symmetrically quantized reductions thereof,
can be defined as follows. Let, for each m and n,

$$p(y_n \mid v_{mn} = 0) \triangleq p_0(y_n) \qquad p(y_n \mid v_{mn} = 1) \triangleq p_1(y_n) \tag{2.9.12}$$

This binary-input channel is said to be symmetric if

$$p_1(y) = p_0(-y) \tag{2.9.13}$$

It is easily verified that the AWGN channel, the BSC,¹² and any other symmetrically
quantized AWGN channel all satisfy (2.9.13). To prove the uniform error
property (2.9.11) for a binary linear code on this class of channels using maximum
likelihood decoding, we note, using (2.3.1), (2.3.3), and (2.9.1), that

$$P_{E_m} = \sum_{\mathbf{y} \in \bar{\Lambda}_m} p_N(\mathbf{y} \mid \mathbf{x}_m)$$

where

$$\bar{\Lambda}_m = \{\mathbf{y} : \ln p_N(\mathbf{y} \mid \mathbf{x}_{m'}) \ge \ln p_N(\mathbf{y} \mid \mathbf{x}_m) \text{ for some } m' \ne m\} = \left\{\mathbf{y} : \sum_{n=1}^{N} \ln \frac{p(y_n \mid v_{m'n})}{p(y_n \mid v_{mn})} \ge 0 \text{ for some } m' \ne m\right\} \tag{2.9.14a}$$

¹² For the BSC we must use the convention "0" → +1 and "1" → −1, so that y = +1 or −1, in
order to use the definition (2.9.13) of symmetry.



We have

$$P_{E_m} = \sum_{\mathbf{y} \in \bar{\Lambda}_m} \prod_{n:\, v_{mn}=0} p(y_n \mid v_{mn} = 0) \prod_{n:\, v_{mn}=1} p(y_n \mid v_{mn} = 1) = \sum_{\mathbf{y} \in \bar{\Lambda}_m} \prod_{n:\, v_{mn}=0} p_0(y_n) \prod_{n:\, v_{mn}=1} p_0(-y_n) \tag{2.9.15a}$$

But if we let

z_n = y_n for all n such that v_{mn} = 0
z_n = −y_n for all n such that v_{mn} = 1

which is just a change of dummy variables in the summation (or integration),
(2.9.15) and (2.9.14) become respectively

$$P_{E_m} = \sum_{\mathbf{z} \in \bar{\Lambda}'_m} \prod_{n=1}^{N} p_0(z_n) \tag{2.9.15b}$$

where now

$$\bar{\Lambda}'_m = \left\{\mathbf{z} : \sum_{n=1}^{N} \ln \frac{p(z_n \mid v_{m'n} \oplus v_{mn})}{p_0(z_n)} \ge 0 \text{ for some } m' \ne m\right\} \tag{2.9.14b}$$

But comparing (2.9.14b) and (2.9.15b) with (2.9.14a) and (2.9.15a), respectively, with
m = 1 (v_{1n} = 0 for all n) in the latter pair, we find that, because of the symmetry of
the linear code (2.9.9) and the random resolution of ties (see Sec. 2.2)

$$P_{E_m} = P_{E_1} \qquad \text{for } m = 1, 2, \ldots, M \tag{2.9.16}$$

Thus not only are all message error probabilities the same, but, in calculating P E 
for linear codes on binary-input symmetric channels, we may without loss of 
generality assume always that the all-zeros code vector was transmitted. This 
greatly reduces the effort and simplifies the computations. 

As an example, consider a linearly coded biphase-modulated signal on the
AWGN channel. Although computation of the exact error probability is generally
prohibitively complicated (except for special cases like the orthogonal or simplex
codes; see Probs. 2.4, 2.5), the union upper bound of Sec. 2.3 can easily be calculated
if the set of weights of all code vectors is known. For from (2.3.4) and
(2.3.10), we obtain that, for the AWGN channel with biphase modulation

$$P_E \le \sum_{k=2}^{M} Q\!\left(\frac{\|\mathbf{x}_k - \mathbf{x}_1\|}{\sqrt{2N_0}}\right)$$

where ||x_k − x_1|| is the Euclidean distance between signal vectors. Now suppose
the weight of v_k, which is also its Hamming distance from v_1, is w_k. This means
that w_k of the code symbols of v_k are ones and consequently that w_k of the code
symbols of x_k are −√ℰ_s (the remainder being +√ℰ_s), and of course all code
symbols of x_1 are +√ℰ_s since v_1 = 0. Thus

$$\|\mathbf{x}_k - \mathbf{x}_1\|^2 = 4\mathscr{E}_s w_k$$

and, consequently, for the biphase- (or quadriphase-) modulated AWGN channel

we have the union bound

$$P_E \le \sum_{k=2}^{M} Q\!\left(\sqrt{\frac{2\mathscr{E}_s w_k}{N_0}}\right) \tag{2.9.17}$$
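As an added numerical sketch (not part of the original text), (2.9.17) can be evaluated directly from a weight distribution such as the one produced by the earlier sketch; the weights and the value of ℰ_s/N_0 below are hypothetical.

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_biphase(weights, Es, N0):
    """Union bound (2.9.17) for a linear code with biphase modulation on the
    AWGN channel, given the Hamming weights of all nonzero codewords."""
    return sum(Q(math.sqrt(2.0 * Es * w / N0)) for w in weights)

# Hypothetical weight distribution and Es/N0 = 2.
print(union_bound_biphase([3, 3, 4], Es=1.0, N0=0.5))
```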



We may readily generalize (2.9.17) to any binary-input symmetric channel by 
using the Bhattacharyya bound (2.3.15) in conjunction with the union bound 
(2.3.4). For memoryless channels, (2.3.15) becomes 



n 

n=i 



i. = 0) (2- 9 - 18 ) 

where the last step follows since each sum in the first product equals unity. Since 
v kn = v ln in exactly w fc positions we have 



and hence from (2.3.4) we find 13 that 

PE < I exp L In V^Wy)] ( 2 - 9 - 19 ) 

fc=2 I y J 



where In J] y vPoO^PiIy) i tne previously defined Bhattacharyya distance 
which becomes S /N for the AWGN channel [see (2.3.17) and (2.9.17)]. We note 
also that for the BSC 



where p is the crossover probability. Tighter results for the BSC will be obtained 
in the next section. 

In principle, we could employ the tighter Gallager bound of (2.4.8), but this
generally requires more knowledge of the code structure than just the set of
distances between code vectors. In fact, even the set of all code vector weights is
not easily calculated in general. Often, the only known parameter of a code is the
minimum distance between code vectors. Then from (2.9.17) and (2.9.19) we can
obtain the much weaker bound for the AWGN channel

$$P_E \le (M-1)\, Q\!\left(\sqrt{\frac{2\mathscr{E}_s w_{\min}}{N_0}}\right) \qquad w_{\min} = \min_{k \ne 1} w_k \tag{2.9.20}$$



¹³ This Bhattacharyya bound is also valid for asymmetric channels, but it is a weaker bound than
the Chernoff bound in such cases (see Prob. 2.10).



and for general binary-input channels

$$P_E \le (M-1) \exp\!\left\{w_{\min} \ln \sum_{y} \sqrt{p_0(y)\, p_1(y)}\right\} \tag{2.9.21}$$



A seemingly unsurmountable weakness of this approach to the evaluation of 
linear codes is that essentially all those long codes which can be elegantly 
described or constructed with known distances have poor distance properties. A 
few short codes, such as the Golay code to be treated in Sec. 2.11, are optimum for
relatively short block lengths and for some rates, and these are indeed useful to 
demonstrate some of the advantages of coding. But a few scattered examples of 
moderately short block codes hardly begin to scratch the surface of the remark 
able capabilities of coding, both linear and otherwise. In the next chapter we shall 
demonstrate most of these capabilities by examining the entire ensemble of codes 
of a given length and rate, rather than hopelessly searching for the optimum 
member of this ensemble. 



2.10 SYSTEMATIC LINEAR CODES AND OPTIMUM 
DECODING FOR THE BSC* 

In the last section, we defined a linear code as one whose code vectors are generated
from the data vectors by the linear mapping

$$\mathbf{v}_m = \mathbf{u}_m \mathbf{G} \qquad m = 1, 2, \ldots, M \tag{2.10.1}$$

where G is an arbitrary K × L matrix of zeros and ones. We now demonstrate that,
because any useful linear code is a one-to-one mapping from the data vectors to the
code vectors, it is equivalent to some linear code whose generator matrix is of the
form


$$G = \begin{bmatrix}
1 & 0 & 0 & \cdots & 0 & g_{1,K+1} & \cdots & g_{1L} \\
0 & 1 & 0 & \cdots & 0 & g_{2,K+1} & \cdots & g_{2L} \\
0 & 0 & 1 & \cdots & 0 & g_{3,K+1} & \cdots & g_{3L} \\
\vdots & & & \ddots & & \vdots & & \vdots \\
0 & 0 & 0 & \cdots & 1 & g_{K,K+1} & \cdots & g_{KL}
\end{bmatrix} \qquad (2.10.2)$$



We note first that a linear code (2.10.1) generated by the matrix (2.10.2) has its first 
K code symbols identical to the data symbols, that is 



$$v_{mn} = u_{mn} \qquad n = 1, 2, \ldots, K \qquad (2.10.3a)$$

and the remainder are as before^14 given by

$$v_{mn} = \bigoplus_{k=1}^{K} u_{mk}\,g_{kn} \qquad n = K+1, \ldots, L \qquad (2.10.3b)$$

* May be omitted without loss of continuity.
14 ⊕ means modulo-2 summation.




Such a code, which transmits the original K data symbols unchanged together with L − K "parity-check" symbols, is called a systematic code.

Any one-to-one linear code is equivalent in performance to a systematic code, 
as is shown by the following argument. Interchanging any two rows of G or 
adding together modulo-2 any combination of rows does not alter the set of code 
vectors generated; it simply relabels them. For, denoting the row vectors of G as g_1, g_2, ..., g_K, we note that interchanging the two rows g_i and g_j changes the original code vectors

$$\mathbf{v}_m = u_{m1}\mathbf{g}_1 \oplus \cdots \oplus u_{mi}\mathbf{g}_i \oplus \cdots \oplus u_{mj}\mathbf{g}_j \oplus \cdots \oplus u_{mK}\mathbf{g}_K$$

into the new code vectors

$$\mathbf{v}_m' = u_{m1}\mathbf{g}_1 \oplus \cdots \oplus u_{mi}\mathbf{g}_j \oplus \cdots \oplus u_{mj}\mathbf{g}_i \oplus \cdots \oplus u_{mK}\mathbf{g}_K$$

But, since u_mi and u_mj take on all possible combinations of values, the set {v_m'} is identical to the set {v_m} except for relabeling. Similarly, adding row g_j to row g_i changes the original set into the new set of code vectors

$$\mathbf{v}_m'' = u_{m1}\mathbf{g}_1 \oplus \cdots \oplus u_{mi}(\mathbf{g}_i \oplus \mathbf{g}_j) \oplus \cdots \oplus u_{mj}\mathbf{g}_j \oplus \cdots \oplus u_{mK}\mathbf{g}_K$$

But, since u_mi g_j is itself a code vector, as a consequence of the closure property demonstrated in the last section, adding the same code vector to each of the original code vectors again generates the original set in different order. Hence {v_m''} = {v_m}.

To complete the argument, we perform row additions and interchanges on the generator matrix in the following order. Beginning with the first nonzero column j, we take the first row with a one in the jth position, interchange its position with the first row, and add it to all other rows containing ones in the jth position. This ensures that the jth column of the reduced matrix has a one in only the first row. We then proceed to the next nonzero column of the reduced matrix which has a one in any of the last K − 1 rows, interchange rows so there is a one in the second row, and add this second row to all rows (including possibly the first) with ones in this position. After K such steps we are left either with K columns, each having a one in a single different row, or with one or more zero rows at the bottom of the matrix; the latter occurs when the original matrix had two or more linearly dependent rows. In the latter case, the reduced generator matrix, and hence also the original G, cannot generate 2^K different code vectors; hence the mapping is not one-to-one and corresponds, therefore, to a poor code since two or more data vectors produce the same code vector. In the first case, we might need to interchange column vectors in order to arrive at the generator matrix of (2.10.2). This merely results in a reordering of the code symbols.^15



15 This does not alter the performance on any binary-input memoryless channel; it might, however, alter performance on a non-binary-input channel, for which each signal dimension depends on more than one code symbol; this is not of interest here.
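
The row-reduction procedure just described is easy to mechanize. The sketch below (ours; the function and example matrix are not from the text) carries out the modulo-2 row operations and column reordering on a small generator matrix.

```python
# Sketch (assumptions ours): reduce a binary generator matrix to the systematic
# form (2.10.2) by the row operations described above -- row interchanges,
# modulo-2 row additions, and, if needed, column reordering.
def to_systematic(G):
    """Return (G_sys, column_order) or raise if rows are linearly dependent."""
    G = [row[:] for row in G]                # work on a copy
    K, L = len(G), len(G[0])
    cols = list(range(L))
    for i in range(K):
        # find a row (at or below i) with a 1 in some column at or beyond i
        pivot = next(((r, c) for c in range(i, L) for r in range(i, K) if G[r][cols[c]]), None)
        if pivot is None:
            raise ValueError("linearly dependent rows: not a one-to-one code")
        r, c = pivot
        G[i], G[r] = G[r], G[i]              # row interchange
        cols[i], cols[c] = cols[c], cols[i]  # column reordering (relabels symbols)
        for rr in range(K):                  # clear all other ones in this column
            if rr != i and G[rr][cols[i]]:
                G[rr] = [(a ^ b) for a, b in zip(G[rr], G[i])]
    return [[row[c] for c in cols] for row in G], cols

if __name__ == "__main__":
    # example: a non-systematic generator of a (6, 3) code (made up for illustration)
    G = [[1, 1, 0, 1, 0, 1],
         [0, 1, 1, 1, 1, 0],
         [1, 0, 1, 1, 0, 0]]
    G_sys, order = to_systematic(G)
    for row in G_sys:
        print(row)
```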






Thus, whenever the code-generator matrix has linearly independent rows and 
nonzero columns, the code is equivalent, except for relabeling of code vectors and 
possibly reordering of the columns, to a systematic code generated by (2.10.2). 

We therefore restrict attention henceforth to systematic linear block codes, 
and consider, in particular, their use on the BSC. We demonstrated in Sec. 2.8 that 
maximum likelihood decoding of any binary code transmitted over the BSC is 
equivalent to minimum distance decoding. That is, 



$$\hat{H} = H_m \qquad \text{if } d_m(\mathbf{y}) < d_{m'}(\mathbf{y}) \text{ for all } m' \ne m \qquad (2.10.4)$$



with ties resolved randomly. If we take y_n ∈ {0, 1} and x_mn = v_mn ∈ {0, 1}, the Hamming distance is given by

$$d_m(\mathbf{y}) = w(\mathbf{x}_m \oplus \mathbf{y}) = w(\mathbf{v}_m \oplus \mathbf{y}) \qquad (2.10.5)$$



Also, since the code and signal symbols are the same here, L = N. Thus decoding 
might be performed by taking the weight of the vector formed by adding 
modulo-2 the received binary vector y to each possible code vector and deciding 
in favor of the message whose code vector results in the lowest weight. 

We now demonstrate a simpler table-look-up technique for decoding 
systematic linear codes on the BSC. Substituting (2.10.3a) into the right side of (2.10.3b) and adding v_mn to both sides of the latter, we obtain

$$0 = v_{mn} \oplus \bigoplus_{k=1}^{K} v_{mk}\,g_{kn} \qquad n = K+1, \ldots, L$$

or, in vector form

$$\mathbf{v}_m H^T = \mathbf{0} \qquad (2.10.6a)$$

where H^T is the L × (L − K) matrix

$$H^T = \begin{bmatrix}
g_{1,K+1} & g_{1,K+2} & \cdots & g_{1L} \\
g_{2,K+1} & g_{2,K+2} & \cdots & g_{2L} \\
\vdots & & & \vdots \\
g_{K,K+1} & g_{K,K+2} & \cdots & g_{KL} \\
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{bmatrix} \qquad (2.10.6b)$$

Its transpose, the matrix H, is called the parity-check matrix. Thus, from (2.10.6a), we see that any code vector multiplied by H^T yields the zero vector; thus the code vectors constitute the null space of the parity-check matrix. Now consider postmultiplying any received vector y by H^T. The resulting (L − K)-dimensional binary vector is called the syndrome of the received vector and is given by

$$\mathbf{s} = \mathbf{y}H^T \qquad (2.10.7)$$




This operation can be performed in exactly the same manner as the encoding 
operation (2.9.3) or (2.9.4), except here we require an L-stage register and L − K modulo-2 adders. Obviously, if no errors are made, y = v_m and consequently the syndrome is zero. Now suppose that the BSC causes an arbitrary sequence of errors e = (e_1, e_2, ..., e_L), where we adopt the convention that

$$e_n = \begin{cases} 1 & \text{if an error occurs in the } n\text{th symbol transmission} \\ 0 & \text{if no error occurs in the } n\text{th symbol transmission} \end{cases}$$

Then, if v_m is transmitted,

$$\mathbf{y} = \mathbf{v}_m \oplus \mathbf{e} \qquad (2.10.8)$$

and

$$\mathbf{v}_m \oplus \mathbf{y} = \mathbf{e} \qquad (2.10.9)$$

Also, the syndrome is given by

$$\mathbf{s} = \mathbf{y}H^T = (\mathbf{v}_m \oplus \mathbf{e})H^T = \mathbf{e}H^T \qquad (2.10.10)$$

Now, for a given received vector y and corresponding syndrome vector s, (2.10.10) will have M = 2^K solutions {e_m = y ⊕ v_m}, one for each possible transmitted vector. However, we have from (2.10.5) that the maximum likelihood (minimum distance) decoder for the BSC chooses the codeword corresponding to the smallest weight vector among the set {v_m ⊕ y}. But, according to (2.10.9), for systematic linear codes this indicates that, given the channel output y,

$$\hat{H} = H_m \qquad \text{if } w(\mathbf{e}_m) < w(\mathbf{e}_{m'}) \text{ for all } m' \ne m \qquad (2.10.11)$$

This then suggests the following mechanization of the maximum likelihood decoder for the BSC:

0. Initially, prior to decoding, for each of the 2^(L−K) possible syndromes s, store the minimum weight vector e(s) which satisfies (2.10.10) in a table of 2^(L−K) L-bit entries.

1. From the L-dimensional received vector y, generate the (L − K)-dimensional syndrome s by the linear operation (2.10.7); this requires an L-stage register and L − K modulo-2 adders.

2. Do a table look-up in the table of step 0 to obtain ê = e(s) from s.

3. Obtain the most likely code vector by the operation $\hat{\mathbf{v}}_m = \mathbf{y} \oplus \hat{\mathbf{e}}$, and the first K symbols are the data symbols according to (2.10.3a).

The complexity of this procedure lies in the table containing 2^(L−K) vectors of dimension L; it follows trivially from step 3 that, because the code is systematic, each entry can be reduced to just a K-dimensional vector; that is, it is necessary to store only the errors which occurred in the K data symbols and not those in the L − K parity-check symbols.
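
The following sketch (ours, with an assumed systematic (7, 4) Hamming generator) makes steps 0 through 3 concrete: it builds the syndrome table and decodes a received vector containing a single error.

```python
# Sketch (ours, not the text's): syndrome table-look-up decoding (steps 0-3 above)
# for the systematic (7,4) Hamming code.  All arithmetic is modulo 2.
from itertools import product

K, L = 4, 7
P = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]            # parity part of G = [I_K | P]
G = [[1 if j == i else 0 for j in range(K)] + P[i] for i in range(K)]
# H^T is the L x (L-K) matrix [P over I_{L-K}] of (2.10.6b)
HT = P + [[1 if j == i else 0 for j in range(L - K)] for i in range(L - K)]

def mat_vec(v, M):
    """Binary vector-matrix product v M over GF(2)."""
    return tuple(sum(v[i] * M[i][j] for i in range(len(v))) % 2 for j in range(len(M[0])))

# Step 0: for every syndrome store a minimum-weight error pattern satisfying (2.10.10)
table = {}
for e in sorted(product((0, 1), repeat=L), key=sum):
    table.setdefault(mat_vec(e, HT), e)

def decode(y):
    s = mat_vec(y, HT)                                # step 1: syndrome
    e = table[s]                                      # step 2: table look-up
    v = tuple(yi ^ ei for yi, ei in zip(y, e))        # step 3: most likely code vector
    return v[:K]                                      # data symbols (systematic code)

if __name__ == "__main__":
    u = (1, 0, 1, 1)
    v = mat_vec(u, G)
    y = list(v); y[5] ^= 1                            # introduce a single channel error
    print("decoded:", decode(tuple(y)), "sent:", u)
```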

As a direct consequence of (2.10.4), (2.10.5), and (2.10.9), it follows that a maximum likelihood decoder for any binary code on the BSC will decode correctly if

$$w(\mathbf{e}) < \tfrac{1}{2}d_{\min} \qquad (2.10.12)$$

where d_min is the minimum Hamming distance among all pairs of codewords. Letting y = x_m' in (2.10.5), it follows that

$$d_{\min} = \min_{m' \ne m} w(\mathbf{x}_m \oplus \mathbf{x}_{m'})$$

With the convention that ties are resolved randomly, correct decoding will 
occur with some nonzero probability when (2.10.12) is an equality. Thus, when 
ever the number of errors is less than half the minimum distance between code 
vectors, the decoder will be guaranteed to correct them. (However, this is not an 
only if condition, unless the code vectors are sphere-packed, as will be discussed 
below.) Nevertheless, (2.10.12) leads to an upper bound on error probability for 
linear codes on the BSC because, as a consequence of (2.9.11), we have 

$$P_E = P_{E_m} \le \Pr\left\{w(\mathbf{e}) \ge \tfrac{1}{2}d_{\min}\right\} \qquad (2.10.13)$$

Then, since e_n = 1 with probability p for each n = 1, 2, ..., L, (2.10.13) is just the binomial sum

$$P_E \le \begin{cases} \displaystyle\sum_{k=(d_{\min}+1)/2}^{L} \binom{L}{k} p^k (1-p)^{L-k} & d_{\min} \text{ odd} \\[3ex] \displaystyle\sum_{k=d_{\min}/2}^{L} \binom{L}{k} p^k (1-p)^{L-k} & d_{\min} \text{ even} \end{cases} \qquad (2.10.14)$$

Codes for which (2.10.14) is exact include the Hamming single-error-correcting codes, which may conventionally be defined in terms of their parity-check matrix: H is the parity-check matrix of an (L, K) Hamming code if its L columns (L rows of H^T) consist of all possible nonzero (L − K)-dimensional binary vectors. This implies that for a Hamming code

$$L = 2^{L-K} - 1$$

An example of H^T for a (7, 4) Hamming code is given in Fig. 2.17. Since all rows of H^T are distinct, each of the L unit-weight (single) error vectors has a different nonzero syndrome (corresponding to one row of H^T). There are, in fact, just 2^(L−K) = L + 1 distinct syndromes, one of which is the zero vector, corresponding to no errors, and the remaining L correspond to the single-error vectors. For note, from step 0 of the syndrome table-look-up decoder, that the minimum weight error vector should be used for each syndrome. Since here all the unit-weight error vectors correspond to all the distinct nonzero syndromes, the Hamming codes all correct one error and only one error. This can also be verified by showing that d_min = 3 (see Prob. 2.11).

Figure 2.17 Transpose of parity-check matrix for (7, 4) Hamming code.
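
This single-error-correcting property, and the value d_min = 3, can be checked by direct enumeration; the sketch below (ours) constructs H^T from all nonzero (L − K)-tuples and finds the minimum nonzero codeword weight.

```python
# Sketch (ours): verify numerically that a Hamming code has minimum distance 3.
# H^T has as rows all nonzero (L-K)-tuples, so L = 2**(L-K) - 1; here L-K = 3, L = 7.
from itertools import product

def hamming_HT(m):
    """Rows of H^T: every nonzero m-bit vector (an (L, L-m) Hamming code, L = 2**m - 1)."""
    return [tuple((i >> j) & 1 for j in range(m)) for i in range(1, 2 ** m)]

def min_distance(HT):
    L, m = len(HT), len(HT[0])
    best = L
    for v in product((0, 1), repeat=L):
        if any(v):
            s = tuple(sum(v[i] * HT[i][j] for i in range(L)) % 2 for j in range(m))
            if s == (0,) * m:                 # v is a codeword (null space of H^T)
                best = min(best, sum(v))
    return best

if __name__ == "__main__":
    print("d_min of (7,4) Hamming code:", min_distance(hamming_HT(3)))  # expected 3
```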

It is instructive to investigate the linear code generated by the H matrix of the 
Hamming code (which is called its dual code) 



$$G = H_{\text{Hamming}} \qquad (2.10.15)$$

This is a K × L matrix where

$$L = 2^K - 1$$

and the columns consist of all possible nonzero K-dimensional binary vectors. Figure 2.18 shows the generator matrix of the (7, 3) code, which is the dual of the (7, 4) Hamming code whose transposed parity-check matrix was given in Fig. 2.17. In addition, to its right in Fig. 2.18 we adjoin the all-zeros column to create an (8, 3) code. We can generalize to a (2^K, K) code whose generator matrix is the transpose of the (2^K − 1) × K matrix H^T of a Hamming code augmented by an all-zeros column, and can show that every nonzero codeword of this augmented code has weight

$$w(\mathbf{v}_m) = L/2 = 2^{K-1} \qquad \text{for all } m \ne 1 \qquad (2.10.16)$$



For any code vector

$$\mathbf{v}_m = \mathbf{u}_m G = u_{m1}\mathbf{g}_1 \oplus u_{m2}\mathbf{g}_2 \oplus \cdots \oplus u_{mK}\mathbf{g}_K$$

where g_k is the kth row of G. Also, since the data symbols u_mk are zeros and ones, v_m is the modulo-2 sum of the remaining rows of G, after some subset of the rows has been deleted. But we note that deletion of one row results in a matrix of L = 2^K columns of dimension K − 1, where each of the possible 2^(K−1) binary columns appears exactly twice; similarly, deletion of j rows results in a matrix of L = 2^K columns of dimension K − j, with each of the possible 2^(K−j) columns repeated exactly 2^j times. But in each case, half of these 2^(K−j) columns contain an odd number of ones and the other half an even number. Hence, adding all the


nondeleted rows modulo-2 is equivalent to adding all the nondeleted symbols of the L columns, half of which have even parity and the other half odd. Thus the result is L/2 zeros and L/2 ones; hence, the desired result (2.10.16).

Figure 2.18 Generator matrices for (7, 3) regular simplex and (8, 3) orthogonal codes.
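
The weight property (2.10.16) is easy to verify numerically; the following sketch (ours) builds the K × (2^K − 1) generator of (2.10.15) and confirms that every nonzero codeword has weight 2^(K−1).

```python
# Sketch (ours): the generator of the code dual to a Hamming code -- the
# K x (2**K - 1) matrix G of (2.10.15) whose columns are all nonzero K-tuples --
# produces nonzero codewords all of weight 2**(K-1), as claimed in (2.10.16)
# (the deleted all-zeros column of the (2**K, K) orthogonal code changes no weight).
from itertools import product

def simplex_generator(K):
    """K x (2**K - 1) generator whose columns are the nonzero K-bit vectors."""
    cols = [tuple((i >> j) & 1 for j in range(K)) for i in range(1, 2 ** K)]
    return [[c[k] for c in cols] for k in range(K)]

def codeword_weights(G):
    K, L = len(G), len(G[0])
    weights = set()
    for u in product((0, 1), repeat=K):
        if any(u):
            v = [sum(u[k] * G[k][n] for k in range(K)) % 2 for n in range(L)]
            weights.add(sum(v))
    return weights

if __name__ == "__main__":
    for K in (2, 3, 4):
        print(K, codeword_weights(simplex_generator(K)))   # expect {2**(K-1)} each time
```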

Equation (2.10.16) also implies, by the closure property (2.9.6), that the Hamming distance between all pairs of codewords is

$$w(\mathbf{v}_m \oplus \mathbf{v}_{m'}) = 2^{K-1} \qquad \text{for all } m \ne m'$$

Consequently, the biphase-modulated signals generated by such a code (augmented by the additional all-zeros column) are all mutually orthogonal, for the normalized inner product of any two binary signals is in general

$$\frac{1}{\mathscr{E}}\int_0^T x_m(t)\,x_{m'}(t)\,dt = \frac{\left[L - 2w(\mathbf{v}_m \oplus \mathbf{v}_{m'})\right]\mathscr{E}_s}{L\mathscr{E}_s} = 1 - \frac{2w(\mathbf{v}_m \oplus \mathbf{v}_{m'})}{L} \qquad (2.10.17)$$

For the code under consideration, we thus have

$$\int_0^T x_m(t)\,x_{m'}(t)\,dt = 0 \qquad \text{for all } m \ne m' \qquad (2.10.18)$$



Returning to the original code generated by the K × (2^K − 1) matrix G of (2.10.15), we note that the weight of each nonzero code vector v_m is unchanged when the additional all-zeros column (of Fig. 2.18) is deleted. However, the biphase signals derived from the code are no longer orthogonal since now L = 2^K − 1. From (2.10.17) we obtain

$$\frac{1}{\mathscr{E}}\int_0^T x_m(t)\,x_{m'}(t)\,dt = -\frac{1}{2^K - 1} \qquad \text{for all } m \ne m' \qquad (2.10.19)$$



This code is called a regular simplex or transorthogonal code. It is easily shown 
(Prob. 2.5) that (2.10.19) corresponds to the minimum average inner product of 
any equal-energy signal set. We shall discuss the relative performance of the 
orthogonal and regular simplex signal sets in the next section. 

Considerable attention has been devoted, since the earliest days of informa 
tion theory, to the study of numerous classes of linear block codes, and partic 
ularly to algebraic decoding algorithms which are of reasonable complexity and 




do not require the exponentially growing storage of the syndrome table-look-up 
approach which we have described. While some very elegant and reasonably 
powerful linear codes and decoding techniques have been discovered, particularly 
among the class of " cyclic " codes, these codes fall far short of the performance of 
the best linear codes, as will be determined in the next chapter. Also, the more 
readily implementable decoding algorithms, while guaranteeing the correction of 
a given number of errors per block, are generally suboptimum and restricted to 
hard quantized channels such as the BSC for binary codes. The last, and probably 
most important, cause for the limited practical success of linear block codes is the 
generally far superior capabilities of linear convolutional codes, to be discussed in 
Chap. 4. 

Much of the material in these last two sections can be generalized to non-bin 
ary-code alphabets, and specifically to data and code alphabets of size q, where q is 
either a prime or some power of a prime. For practical storage and implementa 
tion purposes, one almost always requires q to be a power of 2. While such 
generalization is straightforward, it requires the development of some elementary 
concepts of finite field theory. The limited utility of the results does not seem to 
warrant their inclusion here. Excellent treatments of algebraic codes over binary 
as well as nonbinary alphabets are available in Berlekamp [1968], Gallager [1968], 
Lin [1970], Van Lint [1971], Peterson and Weldon [1972], Blake and Mullin 
[1976]. 



2.11 EXAMPLES OF LINEAR BLOCK CODE PERFORMANCE 
ON THE AWGN CHANNEL AND ITS QUANTIZED 
REDUCTIONS* 

In this section, we consider briefly the performance of the two most commonly 
used linear block codes for a biphase- (or quadriphase-) modulated AWGN chan 
nel, both without and with output quantization. First we consider the classes of 
orthogonal and regular simplex signals. We found in Sec. 2.5 that the performance 
of orthogonal signals on the AWGN channel is invariant to the particular wave 
forms used. Hence, we have the union-Bhattacharyya bound (2.3.19) or the 
tighter Gallager bound (2.5.12) with M = 2^K and ℰ = 2^K ℰ_s. One can also readily obtain the exact expression (see Prob. 2.4), which is

$$P_E = 1 - \int_{-\infty}^{\infty}\left[1 - Q(y)\right]^{M-1}\frac{\exp\left[-\tfrac{1}{2}\left(y - \sqrt{2\mathscr{E}/N_0}\right)^2\right]}{\sqrt{2\pi}}\,dy \qquad (2.11.1)$$

This integral has been tabulated for M = 2^K for all K up to 10 (see Viterbi [1966]). It is plotted in Fig. 2.19, for K = 6, as a function of ℰ_b/N_0, where ℰ_b is the energy per transmitted bit, which is related to ℰ and K by the relation

$$\mathscr{E}_b = \frac{\mathscr{E}}{K} = \frac{2^K \mathscr{E}_s}{K} \qquad (2.11.2)$$

* May be omitted without loss of continuity.



10 



- icr 3 
>, 



1 



I 10 4 



i-S 



1C) 



6 



Upper bound 
Orthogonal 
Two-level quantization 



Exact 

Golay (24, 12) 
Two-level quantization 



Upper bound 
Exact 

Orthogonal 

No quantization 



Upper bound 
Golay (24. 12) 
No quantization 




10 



Figure 2.19 Error probability for 2 6 orthogonal and Golay (24, 12) coded signals on the AWGN 
channel. 
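
For readers who wish to reproduce curves like those of Fig. 2.19, the sketch below (ours) evaluates the exact expression (2.11.1) numerically for M = 2^6 orthogonal signals; the quadrature step and integration limits are arbitrary choices.

```python
# Sketch (ours): numerical evaluation of the exact orthogonal-signal error
# probability (2.11.1) for M = 2**K signals, by trapezoidal quadrature.
import math

def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def orthogonal_pe(M, e_over_n0, steps=4000):
    """P_E = 1 - integral of [1 - Q(y)]**(M-1) * N(y; sqrt(2E/N0), 1) dy  (2.11.1)."""
    mu = math.sqrt(2.0 * e_over_n0)
    lo, hi = mu - 10.0, mu + 10.0
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        y = lo + i * h
        f = (1.0 - q_func(y)) ** (M - 1) * math.exp(-0.5 * (y - mu) ** 2) / math.sqrt(2 * math.pi)
        total += f * (0.5 * h if i in (0, steps) else h)
    return 1.0 - total

if __name__ == "__main__":
    K = 6
    for eb_over_n0_db in (4.0, 6.0, 8.0):
        eb_over_n0 = 10 ** (eb_over_n0_db / 10)
        print(eb_over_n0_db, orthogonal_pe(2 ** K, K * eb_over_n0))   # E = K * E_b  (2.11.2)
```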



The regular simplex signal set performs exactly as well as the orthogonal signal set 
for, as is evident from Fig. 2.18, one symbol or dimension is identical for all signals 
in the set; hence, it might as well not be transmitted for it does not assist at all in 
discrimination between signals. However, in so dropping the rightmost symbol 
from the orthogonal code to obtain the regular simplex code, we are actually 



reducing the energy per transmitted bit to

$$\mathscr{E}_b' = \frac{(2^K - 1)\mathscr{E}_s}{K} = \mathscr{E}_b\left(1 - 2^{-K}\right)$$

This means that the error probability curve as a function of ℰ_b/N_0 of Fig. 2.19 is actually translated to the left by an amount 10 log_10 (1 − 2^(−K)) dB, which for K = 6 is approximately 0.07 dB. For comparison purposes, the union bound for orthogonal codes, obtained from (2.3.4) and (2.3.10), is also shown.

Now let us consider the limiting case of two-level (hard) quantization so that the AWGN channel is reduced to the BSC. In this case, we have the general bound (2.10.14). For orthogonal codes, however, this bound is very weak. For, while d_min = 2^(K−1) = L/2, it is possible to decode correctly in many cases where the number of errors is greater than L/4 because of the sparseness of the codewords in the 2^K-dimensional space. In fact, the bound (2.10.14) becomes increasingly poor as K increases (see Prob. 2.12). On the other hand, we may proceed to bound the BSC performance by using the union bound (2.3.4), resolving ties randomly

$$P_E \le \sum_{m'=2}^{2^K}\Pr\{w(\mathbf{y}\oplus\mathbf{x}_{m'}) < w(\mathbf{y}\oplus\mathbf{x}_1)\,|\,\mathbf{x}_1\} + \tfrac{1}{2}\sum_{m'=2}^{2^K}\Pr\{w(\mathbf{y}\oplus\mathbf{x}_{m'}) = w(\mathbf{y}\oplus\mathbf{x}_1)\,|\,\mathbf{x}_1\}$$
$$= (2^K - 1)\left[\Pr\{\text{more than } 2^{K-2}\text{ errors in } 2^{K-1}\text{ positions}\} + \tfrac{1}{2}\Pr\{2^{K-2}\text{ errors in } 2^{K-1}\text{ positions}\}\right]$$
$$= (2^K - 1)\left[\sum_{k=2^{K-2}+1}^{2^{K-1}}\binom{2^{K-1}}{k}p^k(1-p)^{2^{K-1}-k} + \tfrac{1}{2}\binom{2^{K-1}}{2^{K-2}}p^{2^{K-2}}(1-p)^{2^{K-2}}\right] \qquad (2.11.3)$$

where p = Q(√(2ℰ_s/N_0)). This result is also plotted for K = 6 in Fig. 2.19. Again, the performance for regular simplex codes is the same but the transmitted energy is slightly less.

Probably the most famous, and possibly the most useful, linear block codes 
are the Golay (23, 12) and (24, 12) codes, which have minimum distances equal to 7 
and 8 respectively. The former is called a perfect code which means that all spheres 
of Hamming radius r around each code vector v m (i.e., the sets of all vectors at 
Hamming distance r from the code vector) are disjoint and every vector y is at 
most a distance r from some code vector v m . The only nontrivial 16 perfect binary 
codes are the Hamming codes with r = 1, and the Golay (23, 12) code with r = 3. 
The (24, 12) code is only quasi-perfect, meaning that all spheres of radius r about 
each code vector are disjoint, but that every vector y is at most at distance r + 1 



16 Two codewords of odd length that differ in every position form a perfect code, and there are many perfect codes with d_min = 1.




from some code vector v m . Here again r = 3. It is easy to show that perfect and 
quasi-perfect codes achieve the minimum error probability for the given values of 
(L, K). This second code is actually used more often than the first for various 
reasons including its slightly better performance on the AWGN channel. The 
Golay codes are among the few linear codes, besides the Hamming and ortho 
gonal classes, for which all the code vector weights are known. These are sum 
marized in Table 2.2. While an exact expression for P E on the AWGN channel is 
not obtainable in closed form, given all the code vector weights, we may apply the 
union bound of (2.9.17) and thus obtain

$$P_E \le \sum_{w \in W} N_w\,Q\!\left(\sqrt{2w\mathscr{E}_s/N_0}\,\right) \qquad (2.11.4)$$

where the index set W and the integers N_w are given in Table 2.2. This result is also plotted in Fig. 2.19 and, although it is only a bound, it is reasonably tight as verified by simulation.
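
The bound (2.11.4) is simple to evaluate from Table 2.2; the following sketch (ours) does so for the (24, 12) code.

```python
# Sketch (ours): the union bound (2.11.4) for the Golay (24, 12) code on the
# biphase-modulated AWGN channel, using the weight distribution of Table 2.2.
import math

GOLAY_24_12 = {8: 759, 12: 2576, 16: 759, 24: 1}   # nonzero weights w: N_w (Table 2.2)

def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def golay_union_bound(eb_over_n0):
    es_over_n0 = eb_over_n0 * 12.0 / 24.0           # rate-1/2 code: E_s = E_b / 2
    return sum(n_w * q_func(math.sqrt(2.0 * w * es_over_n0))
               for w, n_w in GOLAY_24_12.items())

if __name__ == "__main__":
    for db in (3.0, 4.0, 5.0, 6.0):
        print(db, golay_union_bound(10 ** (db / 10)))
```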

On the BSC, for the (24, 12) code, minimum distance decoding always cor 
rects 3 or fewer errors and corrects one-sixth of the weight 4 error vectors. On the 
other hand, error vectors of weight 5 or more are never corrected, since by the 
quasi-perfect property, there exists some code vector at a distance no greater 
than 4 from every received vector y. Similarly for the (23, 12) code all 
error vectors of weight 3 or less, and only these, are corrected. Hence for the 
(23, 12) code the expression (2.10.14) holds exactly. For the (24, 12) code we can 
multiply the first term in (2.10.14) by 5/6 and also obtain an exact result. This 
result, for p = Q(√(2ℰ_s/N_0)), L = 24, d_min = 8, and ℰ_b = 2ℰ_s, is plotted in Fig. 2.19.

A potentially disturbing feature of the above results is that in each case we 
have determined the block error probability. But for orthogonal and regular sim 
plex codes, we have used K = 6 bits/block while for the Golay code we have 
K = 12 bits/block, and we would expect that the block error probability might be 

Table 2.2 Weight of code vectors in Golay codes (Peterson [1961])

                    Number of code vectors of weight w, N_w
    Weight, w       (23, 12) code       (24, 12) code
        0                 1                   1
        7               253                   -
        8               506                 759
       11              1288                   -
       12              1288                2576
       15               506                   -
       16               253                 759
       23                 1                   -
       24                 -                   1
    Total              4096                4096




influenced by the number of bits transmitted by the block code. We may define bit error probability P_b as the expected number of information bit errors per block divided by the total number of information bits transmitted per block. For orthogonal and regular simplex codes, all block errors are equiprobable since all 2^K code vectors are mutually equidistant. Thus, since there are $\binom{K}{k}$ ways in which k out of K bits may be in error, and since a block error will cause any pattern of errors in the data vector with equal probability P_E/(2^K − 1), it follows that for orthogonal and regular simplex codes over any of the channels considered

$$P_b = \sum_{k=1}^{K}\frac{k}{K}\binom{K}{k}\frac{P_E}{2^K - 1} = \frac{2^{K-1}}{2^K - 1}\,P_E$$

which, for all but very small K, is very nearly

$$P_b \approx \tfrac{1}{2}P_E$$
The evaluation of P b is not nearly as simple and elegant for other linear block 
codes and in fact depends on the particular generator matrix chosen. However, for 
the Golay (24, 12) code with a systematic encoder, we may argue approximately 
as follows. Block errors will usually (with high probability) cause a choice of an 
incorrect code vector which is at distance 8 from the correct code vector. This 
means that one-third of all code symbols are usually in error when a block error is 
made. But since the code is systematic and half the code symbols are data symbols, 
the same ratio occurs among the data symbols. Hence, it follows that approximately P_b ≈ P_E/3. In general, in any case, we have trivially P_b ≤ P_E and also the lower bound P_E/K ≤ P_b. Hence the upper bounds on P_E are also valid for P_b, and the comparison of P_E for two codes is nearly as useful as that of P_b even when the block lengths are different.

Comparison in Fig. 2.19 of the performance of each code on the AWGN 
channel and on its hard quantized reduction, the BSC, indicates that hard quanti 
zation causes a degradation of very nearly 2 dB. This result is best explained by 
using the union-Bhattacharyya bound (2.9.19). By this bound

$$P_E \le \sum_{k=2}^{M} e^{-w_k d} \qquad (2.11.5)$$

where

$$d = -\ln\sum_{y}\sqrt{p_0(y)\,p_1(y)}$$

is a function of the quantization procedure, while w_2, w_3, ..., w_M, the weights of the nonzero codewords, are invariant to quantization. As also demonstrated by






(2.3.17), for the AWGN^17 channel

$$d = -\ln\int_{-\infty}^{\infty}\frac{\exp\left\{-\tfrac{1}{4}\left[\left(y - \sqrt{2\mathscr{E}_s/N_0}\right)^2 + \left(y + \sqrt{2\mathscr{E}_s/N_0}\right)^2\right]\right\}}{\sqrt{2\pi}}\,dy = \frac{\mathscr{E}_s}{N_0} \qquad (2.11.6)$$

For the BSC, on the other hand, we have shown in Sec. 2.9 that

$$d = -\ln\sqrt{4p(1-p)} \qquad (2.11.7a)$$

where

$$p = Q\!\left(\sqrt{2\mathscr{E}_s/N_0}\,\right) \qquad (2.11.7b)$$

But in the case of orthogonal codes

$$\frac{\mathscr{E}_s}{N_0} = \frac{K\mathscr{E}_b}{2^K N_0}$$

which is extremely small when K ≫ 1. Similarly, for any code in which L ≫ K,

$$\frac{\mathscr{E}_s}{N_0} = \frac{K\mathscr{E}_b}{L N_0} \ll 1$$

In such cases (2.11.7b) approaches

$$p \approx \frac{1}{2} - \sqrt{\frac{\mathscr{E}_s}{\pi N_0}} \qquad (2.11.8)$$

Thus for the BSC with ℰ_s/N_0 ≪ 1 (or, almost equivalently, L ≫ K)

$$d \approx \frac{2\mathscr{E}_s}{\pi N_0} \qquad (2.11.9)$$

Comparing (2.11.6) and (2.11.9), we see that in order to obtain the same bound (2.11.5), we must increase the energy by a factor π/2 (2 dB) for the BSC relative to the AWGN channel. Even though (2.11.9) has been shown under the condition that ℰ_s/N_0 ≪ 1, the approximate 2 dB degradation for two-level quantization seems empirically to hold even when this condition is not met (see, for example, Fig. 2.19). Cases of intermediate quantization are also readily evaluated (see Prob. 2.13) and the resulting d is easily computed. Other measures of quantization loss will also be considered in the next chapter.



17 For p_0(y), the random variable y can be taken to have mean √ℰ_s and variance N_0/2, or equivalently we may normalize it to have mean √(2ℰ_s/N_0) and unit variance. The latter is used in (2.11.6).
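
The 2 dB figure can be checked directly; the sketch below (ours) computes the ratio of the unquantized distance (2.11.6) to the hard-quantized distance (2.11.7) over a range of ℰ_s/N_0.

```python
# Sketch (ours): compare the unquantized Bhattacharyya distance d = E_s/N_0 of
# (2.11.6) with the hard-quantized value d = -ln sqrt(4p(1-p)), p = Q(sqrt(2E_s/N_0)),
# of (2.11.7); their ratio approaches pi/2 (about 2 dB) as E_s/N_0 becomes small.
import math

def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def d_awgn(es_over_n0):
    return es_over_n0

def d_bsc(es_over_n0):
    p = q_func(math.sqrt(2.0 * es_over_n0))
    return -0.5 * math.log(4.0 * p * (1.0 - p))

if __name__ == "__main__":
    for es_over_n0 in (0.01, 0.1, 0.5, 1.0, 2.0):
        ratio_db = 10.0 * math.log10(d_awgn(es_over_n0) / d_bsc(es_over_n0))
        print(es_over_n0, round(ratio_db, 3), "dB; pi/2 is",
              round(10 * math.log10(math.pi / 2), 3), "dB")
```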




Upon initially defining linear block codes in Sec. 2.9, we showed that they 
could be used in conjunction with multiple-amplitude and multiple-phase modu 
lation by using several code symbols to select each signal symbol (or dimension). 
However, we have given no examples of performance of such signal sets. One 
reason is that the uniform error property P Em = P E does not generally hold for 
such cases, making the analysis of a particular code much more complex; another 
is that the results are much less revealing. On the other hand, in the next chapter 
we shall develop the technique of ensemble performance evaluation, which is no 
more difficult for these cases of nonbinary modulation than for biphase (or 
quadriphase) modulation. 



2.12 OTHER MEMORYLESS CHANNELS 

Thus far we have concentrated exclusively on the AWGN channel and its 
quantized reductions, all of which are memoryless channels. These channel 
models apply most accurately to line-of-sight space and satellite communication. 
As a result, since such channels have become commonplace, coding to improve 
error performance in digital communication has been most prevalent in these 
applications. 

2.12.1 Colored Noise 

Yet even with the AWGN channel, certain imperfections invariably enter to 
degrade performance, some of which were discussed in Sec. 2.6. For example, 
intersymbol interference is caused by linear filtering in the transmitter, channel, or 
receiver when the " predetection " filters are not sufficiently wideband for the given 
signal. But receiver filtering also modifies the noise spectral density so that the 
white noise model is no longer appropriate. The resulting zero-mean noise with 
nonuniform spectral density is called colored. It can be treated in either of two 
ways. The rigorous theoretical approach is to expand the noise process in a 
Karhunen-Loeve series 

$$n(t) = \lim_{N\to\infty}\sum_{n=1}^{N} n_n\,\phi_n(t)$$

where the {φ_n(t)} are normalized eigenfunctions of the noise covariance function and the {n_n} are independent Gaussian variables with zero means and variances equal to the eigenvalues of the noise covariance function (Helstrom [1968], Van Trees [1968]). In particular, if the noise covariance function is positive definite, the eigenfunctions form a complete basis for finite-energy functions so that the signals {x_m(t)} can also be represented in terms of their projections on the basis {φ_n(t)}. We then have the representation

$$x_m(t) = \sum_{n} x_{mn}\,\phi_n(t)$$

where

$$x_{mn} = \int_0^T x_m(t)\,\phi_n(t)\,dt$$

and the channel can be represented as an infinite-dimensional additive vector channel

$$\mathbf{y} = \mathbf{x}_m + \mathbf{n} \qquad \text{when } H_m \text{ is the transmitted message}$$

wherein the individual variances of the noise components differ from dimension to 
dimension. One can then conceive of coding the signal projections {x mn } for this 
channel model which is memoryless, but not constant since the noise variance 
varies from dimension to dimension. Such a development has been carried out by 
Gallager [1968] who obtained the code ensemble average error probability under 
a constraint on the signal energy. However, no practical channel could be rea 
sonably encoded in this way. 

An alternative and more direct, though less rigorous, approach to colored 
noise, proposed by Bode and Shannon [1950] (see also Wozencraft and Jacobs 
[1965], Chap. 7) is to "whiten " the noise by passing the received process through 
a whitening filter, the squared magnitude of whose transfer function is the inverse 
of the noise spectral density. While this also distorts the signal, it does so in a 
known manner so that the result is a known, though distorted, signal set in white 
Gaussian noise which can be treated as before. The weakness of this approach is 
that it ignores boundary effects for finite-time signals and is hence somewhat 
imprecise unless the signal symbol durations are long compared to the inverse 
noise bandwidth. Probably the best approach to this problem is to guarantee that 
the receiver predetection bandwidth is sufficiently wide, compared to the inverse 
symbol time, and that the noise spectral density is uniform in the frequency region 
of interest, so that the white noise model can be applied with reasonable accuracy. 

2.12.2 Noncoherent Reception 

Another degrading feature, noted briefly in Sec. 2.6, is that of imperfectly known 
carrier phase, as well as imperfectly known carrier frequency and symbol time. 
While the latter two parameters must always be estimated with reasonable accur 
acy, for any digital communication system will degrade intolerably otherwise, it is 
possible to operate without knowledge of the phase. Referring to Table 2.1 in 
Sec. 2.6 and to Fig. 2.9, we suppose that we have only two frequency-orthogonal 
signals whose frequency separation is a multiple of 2n/T radians per second. Note 
that this is the separation required for quadrature-phase frequency-orthogonal 
functions; the same separation is necessary when the phase is unknown, for in this 
event the sine and cosine functions will be indistinguishable upon reception. Thus 
we have 



$$x_m(t) = \sqrt{2\mathscr{E}}\,f(t)\sin(\omega_m t + \phi) \qquad m = 1, 2 \qquad (2.12.1)$$

where f(t) is a known envelope function of unit norm, ω_m is some multiple of 2π/T, ω_1 ≠ ω_2, and φ may be taken as a random variable uniformly distributed on the interval 0 to 2π. This is generally called noncoherent reception. It is clear that the optimum demodulator (Fig. 2.20) consists of two devices, each equivalent to those required by a quadrature-phase signal. When signal x_1(t) is sent, the set of four observables is

$$y_{1s} = \sqrt{\mathscr{E}}\cos\phi + n_{1s} \qquad y_{2s} = n_{2s}$$
$$y_{1c} = \sqrt{\mathscr{E}}\sin\phi + n_{1c} \qquad y_{2c} = n_{2c} \qquad (2.12.2)$$

where

$$n_{ms} = \sqrt{2}\int_0^T n(t)\,f(t)\sin\omega_m t\,dt \qquad m = 1, 2$$
$$n_{mc} = \sqrt{2}\int_0^T n(t)\,f(t)\cos\omega_m t\,dt \qquad m = 1, 2$$

all four of which are mutually independent with zero mean and variance N_0/2.

Figure 2.20 Optimum demodulator for noncoherent reception.




The likelihood function, when message 1 is sent and the phase is φ, is therefore

$$p_4(\mathbf{y}\,|\,\mathbf{x}_1,\phi) = \frac{\exp\left\{-\left[\left(y_{1s} - \sqrt{\mathscr{E}}\cos\phi\right)^2 + \left(y_{1c} - \sqrt{\mathscr{E}}\sin\phi\right)^2 + y_{2s}^2 + y_{2c}^2\right]/N_0\right\}}{(\pi N_0)^2} \qquad (2.12.3)$$

But φ is a uniformly distributed random variable and thus the likelihood function of the observables y, given message 1, is just (2.12.3) averaged over φ, namely

$$p_4(\mathbf{y}\,|\,\mathbf{x}_1) = \frac{\exp\left[-\left(y_{1s}^2 + y_{1c}^2 + y_{2s}^2 + y_{2c}^2 + \mathscr{E}\right)/N_0\right]}{(\pi N_0)^2}\,I_0\!\left(\frac{2\sqrt{\mathscr{E}}\,y_1}{N_0}\right) \qquad (2.12.4)$$

where

$$y_m^2 = y_{ms}^2 + y_{mc}^2 \qquad m = 1, 2 \qquad (2.12.5)$$

and where

$$I_0(x) = \frac{1}{2\pi}\int_0^{2\pi} e^{x\cos\phi}\,d\phi$$

is the zeroth order modified Bessel function, which is a monotonically increasing function of x. By symmetry, it is clear that p_4(y|x_2) is the same as p_4(y|x_1) with the subscripts 1 and 2 interchanged throughout. Thus the decision rule for two messages is, according to (2.2.7),

$$\hat{H} = H_1 \qquad \text{if } \ln p_4(\mathbf{y}\,|\,\mathbf{x}_1) > \ln p_4(\mathbf{y}\,|\,\mathbf{x}_2)$$

or in this case

$$\hat{H} = H_1 \qquad \text{if } I_0\!\left(\frac{2\sqrt{\mathscr{E}}\,y_1}{N_0}\right) > I_0\!\left(\frac{2\sqrt{\mathscr{E}}\,y_2}{N_0}\right)$$

Since I_0 is a monotonically increasing function of its argument, this is equivalent to

$$\hat{H} = H_1 \qquad \text{if } y_1 > y_2 \qquad (2.12.6)$$

Thus the decision depends only on the sum of the squares of the observables, y_1² and y_2², of each demodulator for each signal (or any monotonic finite function thereof) whose generation from y_1c, y_1s, y_2c, and y_2s is as shown in Fig. 2.20.
Henceforth then, we may consider y_1 and y_2 to be the observables. It follows from (2.12.4) that these observables are independent, i.e., that

$$p_2(y_1, y_2\,|\,\mathbf{x}_1) = p(y_1\,|\,\mathbf{x}_1)\,p(y_2\,|\,\mathbf{x}_1)$$

Also from the definition (2.12.5), which is equivalent to a Cartesian-to-polar coordinate transformation, and from the result of (2.12.4), it follows that (with the observables normalized so that each noise component has unit variance)

$$p(y_1\,|\,\mathbf{x}_1) = y_1\exp\left[-\tfrac{1}{2}\left(y_1^2 + \frac{2\mathscr{E}}{N_0}\right)\right]I_0\!\left(y_1\sqrt{2\mathscr{E}/N_0}\right) \qquad p(y_2\,|\,\mathbf{x}_1) = y_2\,e^{-y_2^2/2} \qquad (2.12.7)$$

It is then relatively simple to obtain the error probability for noncoherent demodulation of two frequency-orthogonal signals. For

$$P_E = \Pr\{y_2 \ge y_1\,|\,\mathbf{x}_1\} = \int_0^\infty p(y_1\,|\,\mathbf{x}_1)\left[\int_{y_1}^{\infty} y_2\,e^{-y_2^2/2}\,dy_2\right]dy_1 = \int_0^\infty p(y_1\,|\,\mathbf{x}_1)\,e^{-y_1^2/2}\,dy_1 \qquad (2.12.8)$$

By symmetry,

$$P_E = P_{E_1} = P_{E_2} = \tfrac{1}{2}\,e^{-\mathscr{E}/2N_0} \qquad (2.12.9)$$
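
The result (2.12.9) is easily confirmed by simulation of the square-law rule (2.12.6); the sketch below (ours) uses normalized observables with unit-variance noise components.

```python
# Sketch (ours): Monte Carlo check of the noncoherent binary orthogonal-signal
# error probability (2.12.9), P_E = (1/2) exp(-E/2N_0), using the square-law
# decision rule y_1 > y_2 of (2.12.6) on normalized observables.
import math, random

def simulate(e_over_n0, trials=200_000, seed=1):
    rng = random.Random(seed)
    a = math.sqrt(2.0 * e_over_n0)              # normalized signal amplitude
    errors = 0
    for _ in range(trials):
        phi = rng.uniform(0.0, 2.0 * math.pi)   # unknown carrier phase
        y1s = a * math.cos(phi) + rng.gauss(0.0, 1.0)
        y1c = a * math.sin(phi) + rng.gauss(0.0, 1.0)
        y2s, y2c = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        if y1s ** 2 + y1c ** 2 <= y2s ** 2 + y2c ** 2:   # decide the wrong message
            errors += 1
    return errors / trials

if __name__ == "__main__":
    for db in (6.0, 8.0, 10.0):
        e_over_n0 = 10 ** (db / 10)
        print(db, simulate(e_over_n0), 0.5 * math.exp(-e_over_n0 / 2.0))
```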

Generalization to M frequency-orthogonal signals of the form (2.12.1) is 
completely straightforward. The demodulator becomes a bank of devices of the 
type of Fig. 2.20. The error probability can be obtained as an (M − 1)-term sum
mation of exponentials (Prob. 2.14) and an asymptotically tight upper bound can 
be derived which is identical to that for coherent (known phase) reception of M 
orthogonal signals, given by (2.5.16) (see Prob. 2.15). This result does not imply, 
however, that ignorance of phase does not in general degrade performance. The 
fact that the performance of noncoherent reception of M orthogonal signals is 
asymptotically the same as for coherent reception is explained by noting that, as 
M becomes larger, so does T, and consequently the optimum receiver effectively 
estimates the phase over a long period T in the process of deciding among the M 
possible signals. As an example of the opposite extreme, consider a binary coded 




system of the type treated in the previous section where each binary symbol is 
transmitted as one of two frequency-orthogonal signals (2.12.1) which are demod 
ulated symbol by symbol, resulting in a BSC with transition probability p given 
by (2.12.9) with ℰ = ℰ_s. Now when ℰ_s/N_0 ≪ 1, the union-Bhattacharyya error bound for such a coded system is the same as (2.11.5) but with the Bhattacharyya distance given by

$$d = -\tfrac{1}{2}\ln\left[4p(1-p)\right] \approx \frac{1}{8}\left(\frac{\mathscr{E}_s}{N_0}\right)^2 \qquad (2.12.10)$$

This is clearly a great degradation relative to the coherent case, for which d ≈ (2/π)ℰ_s/N_0 when ℰ_s/N_0 ≪ 1. One would suspect initially that a cause of this degradation is that the distance between signals for each symbol is reduced by a factor of 2 by the use of orthogonal signals compared to biphase signals, for the latter are opposite in sign and consequently have ||s_1 − s_2||² = 4ℰ_s. There is in fact a technique applicable to noncoherent reception, called differential phase shift keying (see, e.g., Viterbi [1966], Van Trees [1968]), which effectively doubles the energy per symbol and produces the error probability of (2.12.9) with energy doubled. But this is clearly not a sufficient explanation because, even if we used double the energy in the noncoherent case, we would merely multiply (2.12.10) by a factor of 4 and this would still be a negligibly small d compared to the coherent case when ℰ_s/N_0 ≪ 1. The situation is somewhat improved with optimum unquantized decoding, but there is still significant degradation.

There is in fact no justification in a coded system for not measuring the phase 
accurately enough to avoid this major degradation, provided, of course, that the 
phase varies very slowly relative to the code block length, as assumed here. When 
the phase varies rapidly, this is usually accompanied by rapidly varying ampli 
tude, and the channel may be characterized as a fading-dispersive medium, the 
case which we consider next. 



2.12.3 Fading-Dispersive Channels 

A more serious source of degradation, prevalent in over-the-horizon propagation 
such as high-frequency ionospheric reflection and tropospheric scatter communi 
cation, is the presence of amplitude fading as well as rapid phase variations. The 
model of this phenomenon is usually taken to be a large number of diffuse scat- 
terers or reflectors which move randomly relative to one another, causing the 
signal to arrive at the receiver as a linear combination of many replicas of 




the original signal, each attenuated and phase shifted by random amounts. By 
the central limit theorem, the distribution of the sum of many independent 
random variables approaches the Gaussian distribution. Hence a sinusoidal signal 
sin ω_m t will arrive at the receiver as

$$y(t) = \sqrt{2\mathscr{E}}\,f(t)\left[a(t)\sin\omega_m t + b(t)\cos\omega_m t\right] + n(t) \qquad 0 \le t \le T \qquad (2.12.11)$$

where a(t) and b(t) are independent zero-mean Gaussian processes, with given 
covariance functions, and where n(t) is AWGN of thermal origin present in the 
observation. While we might consider more general signal sets, it should be clear 
that, in view of the random amplitude and phase perturbation by the channel, 
signals can be distinguished only by frequency. Each received signal, aside from 
the additive noise n(t), is a Gaussian random process with bandwidth dictated by the propagation medium and determined from the spectral densities of a(t) and b(t). If the frequencies ω_m are spaced sufficiently far apart compared to their
bandwidth, the signal random processes will have essentially nonoverlapping 
spectra and the problem reduces to that of detecting one of M " orthogonal " 
random processes. Once the observable statistics have been established, the prob 
lem is very similar to that of M orthogonal deterministic signals treated in 
Sec. 2.5, except that the decoding involves quadratic rather than linear operations 
on the observables (Helstrom [1968], Kennedy [1969], Viterbi [1967c]). 

A more realistic model, less wasteful of bandwidth, more amenable to coding, 
and more representative of practical systems, results from assuming that over 
short subintervals of T/N seconds the random signal is essentially constant. Then, 
assuming signal pulses of duration T/N during a given nth subinterval, we have 
the received signal 

$$y(t) = \sqrt{2\mathscr{E}_s}\,f\!\left(t - \frac{(n-1)T}{N}\right)\left[a\sin\omega_m t + b\cos\omega_m t\right] + n(t)$$
$$(n-1)T/N \le t \le nT/N, \qquad m = 1, 2 \qquad (2.12.12)$$

where a and b are zero-mean independent Gaussian variables with variance σ², ω_m is a multiple of 2πN/T, ℰ_s = ℰ/N is the symbol energy, and f(t), with unit norm, is as defined in (2.6.5). Defining

$$r = \sqrt{a^2 + b^2} \qquad \phi = \tan^{-1}(b/a) \qquad (2.12.13)$$

we may rewrite (2.12.12) as

$$y(t) = \sqrt{2\mathscr{E}_s}\,r\,f\!\left(t - \frac{(n-1)T}{N}\right)\sin(\omega_m t + \phi) + n(t)$$
$$(n-1)T/N \le t \le nT/N, \qquad m = 1, 2 \qquad (2.12.14)$$

The statistics of r and φ are easily obtained from those of a and b by the



transformation^18

$$a = r\cos\phi \qquad b = r\sin\phi$$

$$p(r, \phi) = \frac{r}{2\pi\sigma^2}\,e^{-r^2/2\sigma^2} = p(\phi)\,p(r) \qquad 0 \le \phi < 2\pi,\ r \ge 0 \qquad (2.12.15)$$

Thus φ is uniformly distributed on [0, 2π] and r is Rayleigh distributed; hence the term Rayleigh fading.

We shall limit attention primarily to a binary input alphabet (M = 2) based 
on two frequency-orthogonal signals, although generalization to a larger set of 
frequencies is straightforward. Comparing (2.12.14) with (2.12.1), we note that the 
only difference is the random amplitude in the former. But since the quadrature 
demodulator of Fig. 2.20 is optimum for a uniformly distributed random phase 
and any amplitude, it is clear that the fact that the amplitude is a random variable 
is immaterial. Assuming for the moment that we are merely interested in one 
symbol (or alternatively that the random variables r and </>, or a and b, are 
constant over the entire T seconds), we may readily evaluate the error probability 
for the Rayleigh fading binary frequency-orthogonal signals from that for non 
coherent detection of fixed amplitude signals. For, if r were known exactly, using the optimum demodulator of Fig. 2.20,^19 we would have the error probability for noncoherent reception of (2.12.9) with ℰ replaced by ℰ_s r². Hence

$$P_E(r) = \tfrac{1}{2}\,e^{-\mathscr{E}_s r^2/2N_0}$$

Now since r is a random variable whose distribution is given by the second factor of (2.12.15), we see that the symbol error probability with Rayleigh fading is

$$P_E = \int_0^\infty p(r)\,P_E(r)\,dr = \frac{1}{2\left(1 + \bar{\mathscr{E}}_s/2N_0\right)} = \frac{1}{2 + \bar{\mathscr{E}}_s/N_0} \qquad (2.12.16)$$

18 For the rectangular-to-polar transformation used here, the Jacobian is ∂(a, b)/∂(r, φ) = r.

19 The demodulator integrates for T/N second intervals here rather than T seconds as shown in 
Fig. 2.20. 




where we have denoted the average received energy per symbol by

$$\bar{\mathscr{E}}_s = \mathscr{E}_s\int_0^\infty r^2 p(r)\,dr = 2\sigma^2\mathscr{E}_s \qquad (2.12.17)$$

It is quite interesting to note that while phase randomness does not destroy the 
exponential dependence of P E on energy-to-noise ratio, amplitude randomness 
does change it into the much weaker inverse linear dependence. 

Let us now consider the demodulation and decoding of multidimensional, or 
multiple symbol, coded Rayleigh fading signals. The most common form of coding 
for Rayleigh fading is the trivial repetitive code, using the same signal for all N 
dimensions, which is generally called diversity transmission. Before proceeding 
with the analysis even in this case, we must impose a fundamental assumption on 
the communication system: namely, that the random channel amplitude and 
phase variables are independent from symbol to symbol. Several techniques are 
commonly used to achieve this independence. First, different pairs of frequencies 
can be used for successive symbols. If the frequency pair for one symbol is widely 
separated from that of the next few symbols, the necessary independence can 
usually be acquired, but at the cost of greatly expanded bandwidth. Another 
approach, space diversity, actually transmits a single symbol, but uses N antennas 
sufficiently separated spatially that the random phases and amplitudes are in 
dependent of one another; then the N observables consist of a combination of N 
single observables from each antenna-receiver. Of course, spatial diversity cor 
responds only to the case of trivial repetitive coding. When nontrivial coding is 
used, particularly when bandwidth must be conserved, a third approach called 
time-diversity is commonly employed. This technique achieves the independence 
by spacing successive symbols of a given codeword at wide intervals in time, 
placing in between similarly spaced symbols of other codewords. This technique, 
illustrated in Fig. 2.21 and discussed further below, is generally called interleaving. 

Given the independence among symbols, we can consider an N-dimensional signal where each dimension consists of the transmission of one of two binary frequency-orthogonal signals. We then have from the demodulator of Fig. 2.20 (with integration over T/N second intervals) the 2N observables (y_1, y_2, ..., y_N) = (y_11 y_21, y_12 y_22, ..., y_1N y_2N), consisting of N pairs of observations (where y_1n, y_2n is the pair of observables for the nth symbol), for the two possible transmitted frequencies ω_1 and ω_2.

Again for a fixed amplitude r and a uniformly distributed phase φ, we have from (2.12.7) that, for the nth symbol, the observables y_1n and y_2n are independent with probability density functions

$$p(y_{mn}\,|\,x_{mn}, r) = y_{mn}\exp\left[-\tfrac{1}{2}\left(y_{mn}^2 + \frac{2\mathscr{E}_s r^2}{N_0}\right)\right]I_0\!\left(y_{mn}\,r\sqrt{2\mathscr{E}_s/N_0}\right)$$
$$p(y_{m'n}\,|\,x_{mn}, r) = y_{m'n}\,e^{-y_{m'n}^2/2} \qquad m \text{ and } m' = 1 \text{ or } 2,\ m' \ne m \qquad (2.12.18)$$











T d 


T 


a 






T 














/ 


Sj 














cs 






^ 






t 





1 
m 






N 






s. 






Ill 



























J O O O O O 



111 






so that the latter is independent of r. Since r is a Rayleigh distributed variable with 
parameter a 2 , we have, using (2.12.17) 



PGUO = f 

J 



dr 




exp 



= v e~ y /2 

\mn *- 



2(1 + 



Hence 



exp 



-yL 



2(1 



= y 



m and m = 1 or 2, m + m (2.12.19) 



Examining first the case of trivial repetitive coding of two equiprobable messages, we have from (2.2.7) that the optimum decoder for equal prior probabilities and average symbol energies ℰ̄_s is

$$\hat{H} = H_m \qquad \text{if } \sum_{n=1}^{N}\ln p(y_{1n}, y_{2n}\,|\,x_{mn}) > \sum_{n=1}^{N}\ln p(y_{1n}, y_{2n}\,|\,x_{m'n}) \quad \text{for all } m' \ne m$$

which simplifies, according to (2.12.19), to

$$\hat{H} = H_m \qquad \text{if } \sum_{n=1}^{N}\left(y_{mn}^2 - y_{m'n}^2\right) > 0 \quad \text{for all } m' \ne m \qquad (2.12.20)$$

Given that message H_m was sent, we can calculate the error probability by finding the distribution of the sum in (2.12.20), conditioned on x_m, from (2.12.19). It is easily shown (Wozencraft and Jacobs [1965, chap. 7]) that this is a chi-square distribution and that consequently the two-message repetition code error probability is given by

$$P_E = p^N\sum_{k=0}^{N-1}\binom{N-1+k}{k}(1-p)^k \qquad (2.12.21)$$

where

$$p = \frac{1}{2 + \bar{\mathscr{E}}_s/N_0} \qquad (2.12.22)$$



However, more insight can be drawn from deriving the Bhattacharyya upper bound. From (2.3.15) and (2.12.19), we have

$$P_E \le \left[4p(1-p)\right]^N \qquad (2.12.23)$$

where p is given by (2.12.22). It can be shown (Wozencraft and Jacobs [1965, chap. 7]) that the ratio of the exact expression (2.12.21) to the bound (2.12.23) approaches [2√(πN)(1 − 2p)]^(−1) as N → ∞, so that the bound is asymptotically tight in an exponential sense. Finally, we write the bound as

$$P_E \le e^{-Nd} \qquad (2.12.24a)$$

where

$$d = -\ln\left[4p(1-p)\right] \qquad (2.12.24b)$$

$$p = \frac{1}{2 + \bar{\mathscr{E}}_s/N_0}$$

Both the decoding rule (2.12.20) and the error probability bound (2.12.24) can be 
easily generalized to the case where the symbol energies are not equal (Wozencraft 
and Jacobs [1965, chap. 7]). A most interesting conclusion can be drawn by 
comparing (2.12.16) with (2.12.24a). Both cases deal with the transmission of a single bit by one of two messages. Suppose the total average received energy is ℰ̄. Then in the first case ℰ̄_s = ℰ̄ and P_E decreases only inversely with ℰ̄/N_0. In the second case, using the repetitive N-dimensional code, we have ℰ̄_s = ℰ̄/N and

$$P_E \le e^{-Nd} \qquad (2.12.25)$$

where

$$d = -\ln\frac{1 + (\bar{\mathscr{E}}/N_0)/N}{\left[1 + (\bar{\mathscr{E}}/N_0)/2N\right]^2} \;\to\; \frac{(\bar{\mathscr{E}}/N_0)^2}{4N^2} \qquad \text{as } N \to \infty$$

Clearly then, making N very large degrades performance. However, we can readily show that the maximum of Nd, the exponent of (2.12.25), occurs when

$$\frac{\bar{\mathscr{E}}_s}{N_0} = \frac{\bar{\mathscr{E}}/N_0}{N} \approx 3 \qquad (2.12.26)$$

in which case (2.12.25) becomes

$$P_E \lesssim e^{-0.149\,\bar{\mathscr{E}}/N_0} \qquad (2.12.27)$$

Comparing to the exact expression (2.12.9) for the noncoherent case and ignoring the multiplicative factor of ½ in the latter, we note that fading thus causes a loss of about 5.25 dB in effective energy. More important, we conclude that, while repetitive coding has no effect (either favorable or detrimental) for the coherent AWGN channel, and while it has strictly a detrimental effect when the phase alone is unknown, it can actually improve performance in the case of fading channels provided that the dimensionality is chosen properly, the optimum being given approximately by (2.12.26).
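
The optimization leading to (2.12.26) can be reproduced numerically; the sketch below (ours) searches over the diversity order N for a fixed total ℰ̄/N_0.

```python
# Sketch (ours): the diversity trade-off of (2.12.25)-(2.12.27).  For a fixed
# total average received energy E/N_0, the exponent N*d with
# p = 1/(2 + (E/N0)/N) and d = -ln[4p(1-p)] is maximized near (E/N0)/N ~ 3.
import math

def exponent(total_e_over_n0, N):
    p = 1.0 / (2.0 + total_e_over_n0 / N)
    d = -math.log(4.0 * p * (1.0 - p))
    return N * d

if __name__ == "__main__":
    total = 30.0                      # E/N_0 of the whole transmission
    best = max(range(1, 31), key=lambda N: exponent(total, N))
    for N in (1, 2, 5, best, 20, 30):
        print(N, round(exponent(total, N), 3))
    print("best N:", best, " E_s/N_0 per symbol:", round(total / best, 2))
```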

Finally, turning to nontrivial coding, we may again apply the union-Bhattacharyya bound as in Sec. 2.9. Then if a binary linear code is used, since it is obvious from (2.12.19) that the channel is symmetrical, it follows that P_Em = P_E for all m. Then exactly as in (2.9.19) we have

$$P_E \le \sum_{k=2}^{M} e^{-w_k d} \qquad (2.12.28)$$

where d is given by (2.12.24b) and w_k is the Hamming weight of the kth nonzero codeword.

Of interest also is the effect of quantization. Clearly, the maximum likelihood decoder output (2.12.20) may be quantized by quantizing the decoder symbol output set {y_1n² − y_2n²} to any number of levels. In the simplest case of hard two-level quantization (positive or negative), this reduces the fading channel to a BSC with crossover probability given by (2.12.16). But this is exactly equal to the parameter p defined by (2.12.22); and, for the BSC, we found in Sec. 2.9 that the Bhattacharyya distance is

$$d_{\text{BSC}} = -\ln\sqrt{4p(1-p)} = -\tfrac{1}{2}\ln\left[4p(1-p)\right] \qquad (2.12.29)$$

Thus, comparing with (2.12.24b), we find that for the fading channel, hard quantization of the decoder outputs effectively reduces the Bhattacharyya distance by a factor of 2 (3 dB). This is a more serious degradation than for the AWGN channel, and is a strong argument for "soft" multilevel quantization (Wozencraft and Jacobs [1965, chap. 7]).

2.12.4 Interleaving 

With the exception of the AWGN channel, most practical channels exhibit statist 
ical dependence among successive symbol transmissions. This is particularly true 
of fading channels when the fading varies slowly compared to one symbol time. 
Such channels with memory considerably degrade the performance of codes 
designed to operate on memoryless channels. The simplest explanation of this is 




that memory reduces the number of independent degrees of freedom of the 
transmitted signals. A simple example helps to clarify this point. Suppose a BSC 
with memory makes errors very rarely, say on the average once every million 
symbols, but that immediately after any error occurs, the probability of another 
error is 0.1. Thus, for example, the probability of a burst of three or more errors is 
one percent of the probability of a single error. Consider coding for this channel 
using the (7, 4) Hamming single-error correcting code. If this were a memoryless BSC so that errors occurred independently, the probability of error for each four-bit seven-symbol codeword would be reduced by coding from approximately 7 × 10⁻⁶ down to approximately 3.5 × 10⁻¹¹. On the other hand, for the BSC with memory as just described, the codeword error probability is reduced to only about 6 × 10⁻⁷. Coding techniques for channels with memory have been proposed and demonstrated to be reasonably effective in some cases (Kohlenberg and Forney [1968], Brayer [1971]; see also Secs. 4.9 and 4.10). The greatest problem
with coding for such channels is that it is difficult to find accurate statistical 
models and, even worse, the channel memory statistics are often time-varying. 
Codes matched to one set of memory parameters will be much less effective for 
another set of values, as in the simple example above. 

One technique which requires no knowledge of channel memory other than 
its approximate length, and is consequently very robust to changes in memory 
statistics, is the use of time-diversity, or interleaving, which eliminates the effect of 
memory. Since in all practical cases, memory decreases with time separation, if all 
the symbols of a given codeword are transmitted at widely spaced intervals and 
the intervening spaces are filled similarly by symbols of other codewords, the 
statistical dependence between symbols can be effectively eliminated. This inter 
leaving technique may be implemented using the system shown in Fig. 2.21. Each 
code symbol out of the encoder is inserted into one of the I tapped shift registers of the interleaver bank. The zeroth element of this bank provides no storage (the symbol is transmitted immediately), while each successive element provides j symbols more storage than the preceding one. The input commutator switches from one register to the next until the (I − 1)th, after which the commutator returns to the zeroth. I is the minimum channel transmission separation provided for any two code symbols output by the encoder with a separation of less than J = jI symbols. For a block code, J should be made at least equal to the block length. The output commutator feeds to the channel (including the modulator) one code symbol at a time, switching from one register to the next after each symbol, synchronously with the input commutator. When the channel input is not binary, it may be preferable to interleave signal dimensions rather than code symbols. This is achieved, at least conceptually, by making each stage of the registers a storage device for a signal dimension rather than a channel symbol (easily implemented if each dimension contains an integral number of symbols). It is easily verified that, for a natural ordering of input symbols ..., v_i, v_{i+1}, v_{i+2}, ..., the interleaver output sequence and hence the channel transmission ordering is as shown in Fig. 2.21, where it is clear that the minimum separation




in channel transmission is at least I for any two code symbols generated by the encoder within a separation of J − 1. This is called an (I, J) interleaver.

The deinterleaver, which must invert the action of the interleaver, is clearly 
just its converse. Observables are fed in with each dimension going to a different 
shift register. Note, however, that to store the observables digitally, the channel 
outputs must have been quantized. Hence, the deinterleaver storage must be 
several times the size of the interleaver storage. For example, if the channel input 
is binary, we require J(I − 1)/2 bits of storage in the interleaver. On the other hand, with eight-level quantization at the channel (demodulator) output, each output dimension contains 3 bits so that the storage required in the deinterleaver is three times as great. We note also that the delay introduced by this interleaving technique is equal to J(I − 1) symbol times.

The system of Fig. 2.21 represents a conceptually simple interleaving 
technique, and it can be shown to be the minimal implementation of an (I, J)
interleaver in the sense of storage requirements and delay (Ramsey [1970]). 
However, shift registers of varying lengths may be considerably more costly in 
terms of numbers of required integrated circuits than, for example, a random- 
access memory with appropriate timing and control to perform the functions of 
the system of Fig. 2.21, even though the total storage of such a random-access 
memory will be double that shown in this implementation. The main point to be 
drawn from this discussion is that channels with memory can be converted into 
essentially memoryless channels at a cost of only buffer storage and transmission 
delay. This cost, of course, can become prohibitive if the channel memory is very 
long compared to the transmission time per symbol. 
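
A software model of the (I, J) interleaver and deinterleaver of Fig. 2.21 is given below (our sketch; delay-line modeling with None as the fill symbol is an implementation choice, not from the text). It checks that the original symbol order is restored after the end-to-end delay J(I − 1).

```python
# Sketch (ours): a software model of the (I, J) interleaver of Fig. 2.21 and its
# deinterleaver, built from banks of delay lines (register i delays by i*j symbols,
# with J = j*I).  Fill symbols are None until the lines are flushed.
from collections import deque

class ConvInterleaver:
    def __init__(self, I, j, reverse=False):
        # interleaver: delays 0, j, 2j, ...; deinterleaver: the complementary delays
        delays = [((I - 1 - i) if reverse else i) * j for i in range(I)]
        self.lines = [deque([None] * d) for d in delays]
        self.i = 0

    def push(self, symbol):
        line = self.lines[self.i]
        line.append(symbol)
        out = line.popleft()
        self.i = (self.i + 1) % len(self.lines)
        return out

if __name__ == "__main__":
    I, j = 4, 2                       # J = j*I = 8
    inter = ConvInterleaver(I, j)
    deinter = ConvInterleaver(I, j, reverse=True)
    data = list(range(40))
    received = [inter.push(s) for s in data]             # channel transmission order
    restored = [deinter.push(s) for s in received]
    delay = j * I * (I - 1)                               # end-to-end delay J(I-1)
    print(restored[delay:] == data[:len(data) - delay])   # True: original order restored
```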



2.13 BIBLIOGRAPHICAL NOTES AND REFERENCES 

The first half of this chapter, through Sec. 2.8, owes much of its organization to the text of Wozencraft and Jacobs [1965], specifically chaps. 4 and 5. This text pioneered in presenting information-theoretic concepts in the framework of practical digital communication systems. We have deviated by presenting in Secs. 2.4 and 2.5 the more sophisticated upper bounds due to Gallager [1965] and Fano [1961] to establish the groundwork for the more elaborate and tighter bounds of successive chapters.

Sections 2.9 and 2.10 are, in part, standard introductory treatments of linear codes. The proof of the uniform error property for linear codes on binary-input, output-symmetric channels is a generalization of a proof of this property for the BSC due to Fano [1961]. The evaluation of error probabilities and bounds for specific linear codes on channels other than the BSC carried out in Secs. 2.9 and 2.11 is scattered throughout the applications literature. Section 2.12 follows for the most part the development of chap. 7 of Wozencraft and Jacobs [1965]. The interleaving technique of Fig. 2.21 is due to Ramsey [1970].




APPENDIX 2A GRAM-SCHMIDT ORTHOGONALIZATION AND SIGNAL REPRESENTATION



Theorem Given M finite-energy functions {x_m(t)} defined on [0, T], there exist N ≤ M unit-energy (normalized) orthogonal functions {φ_n(t)} (that is, for which ∫_0^T φ_n(t)φ_k(t) dt = δ_nk) such that

$$x_m(t) = \sum_{n=1}^{N} x_{mn}\,\phi_n(t) \qquad m = 1, 2, \ldots, M \qquad (2.1.1)$$

where for each m and n

$$x_{mn} = \int_0^T x_m(t)\,\phi_n(t)\,dt$$

Furthermore, N = M if and only if the set {x_m(t)} is linearly independent. The {φ_n(t)} are said to form a basis for the space generated by the set of functions {x_m(t)}.

PROOF Let ℰ_m = ∫_0^T x_m²(t) dt. Define the first normalized basis function

$$\phi_1(t) = \frac{x_1(t)}{\sqrt{\mathscr{E}_1}}$$

Then clearly

$$x_1(t) = \sqrt{\mathscr{E}_1}\,\phi_1(t) = x_{11}\,\phi_1(t) \qquad (2A.2)$$

where x_11 = √ℰ_1 and φ_1(t) has unit energy as required. Before proceeding to define the second basis function, define x_21 as the projection of x_2(t) on φ_1(t), that is

$$x_{21} = \int_0^T x_2(t)\,\phi_1(t)\,dt \qquad (2A.3)$$

Now define

$$\phi_2(t) = \frac{x_2(t) - x_{21}\,\phi_1(t)}{x_{22}} \qquad (2A.4)$$

where

$$x_{22} = \left\{\int_0^T\left[x_2(t) - x_{21}\,\phi_1(t)\right]^2 dt\right\}^{1/2} \qquad (2A.5)$$



It then follows from (2A.3) and (2A.4) that

$$\int_0^T \phi_1(t)\,\phi_2(t)\,dt = 0 \qquad (2A.6)$$

and from (2A.4) and (2A.5) that φ_2(t) has unit energy since

$$\int_0^T \phi_2^2(t)\,dt = 1 \qquad (2A.7)$$

Also, from (2A.4), we have

$$x_2(t) = x_{21}\,\phi_1(t) + x_{22}\,\phi_2(t) \qquad (2A.8)$$

and from (2A.6) and (2A.7) it follows that

$$x_{22} = \int_0^T x_2(t)\,\phi_2(t)\,dt$$

We now proceed to generalize (2A.2) and (2A.8) to the mth function x m (t), 
by induction. Suppose that for all k < m 

x t (t) = I x ka <t> n (t} k = 1, 2, . . . , m - 1 (2A.9) 

n=l 

where 

T k (t)<l> n (t)dt (2A.10) 



and where the {</>(), n = 1, 2, . . . , k} are mutually orthogonal and each has 
unit energy. Then define 

* mn = fxMMf) dt n = 1, 2, ..., m - 1 (2A.11) 

and 



i- 1 



0m(0 = (2A. 12) 

Xmm 

where 



1-1 



It follows from (2A.11) and (2 A. 12) that 

f (t> m (t)(t> n (t) dt = foralln<m (2A.14) 



CHANNEL MODELS AND BLOCK CODING 119 

and from (2A.12) and (2A.13) that <j) m (t) has unit energy. Reordering (2A.12), 
we have 



and from (2 A. 14) 



= x m (t)4> n (t)dt 

J 



(2A.15) 



(2A.16) 



It thus follows that, for M finite-energy functions (x m (r)}, the representation 
(2.1.1) is always possible with N no greater than M. 

Suppose, however, that a subset of these functions is linearly dependent ; 
i.e., that there exists a set of nonzero real numbers a lt a 2 , . . ., a j for which 



ajx mj (t) = 



where 



< w 2 < 



< m 



In such an event, it follows that x m .(t) can be expressed as a linear combi 
nation of x mi (t) - - - x m ._ t (r) and thus as a linear combination of the basis func 
tions which generate these previous signal functions. As a result, it is not 
necessary to generate a new basis function (/> mj (t) in order to add x mj (t) to the 
set of represented functions. In this way, one (or more) basis functions may be 
omitted and hence N < M. It should be clear that a basis function can be thus 
skipped if and only if the set (x m (r)} is not linearly independent. 
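A discrete-time sketch of this procedure may help fix ideas (this is an added illustration, not part of the text; sampled waveforms stand in for the continuous functions, so inner products become sums scaled by the sample spacing).

import numpy as np

def gram_schmidt(signals, dt, tol=1e-10):
    # Orthonormalize sampled waveforms following the steps (2A.11)-(2A.13).
    # signals: array (M, num_samples); dt: sample spacing.
    basis, coeffs = [], []
    for x in signals:
        proj = [dt * np.dot(x, phi) for phi in basis]          # x_mn, as in (2A.11)
        residual = x - sum(c * phi for c, phi in zip(proj, basis))
        norm = np.sqrt(dt * np.dot(residual, residual))        # x_mm, as in (2A.13)
        if norm > tol:                                         # independent: add a new basis function
            basis.append(residual / norm)
            proj.append(norm)
        coeffs.append(proj)
    a = np.zeros((len(signals), len(basis)))
    for m, row in enumerate(coeffs):
        a[m, :len(row)] = row
    return np.array(basis), a

# Example: three waveforms on [0, 1]; the third is a linear combination of the
# first two, so only N = 2 basis functions are produced.
t = np.linspace(0, 1, 1000, endpoint=False)
dt = t[1] - t[0]
x = np.array([np.ones_like(t), t, 2 * np.ones_like(t) - 3 * t])
phi, a = gram_schmidt(x, dt)
print(phi.shape[0], "basis functions")       # 2
print(np.allclose(a @ phi, x, atol=1e-6))    # True: the representation (2.1.1) holds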



PROBLEMS 

2.1 (a) For the 16-signal set shown in Fig. P2.1a, transmitted over the AWGN channel, with equal a 
priori probabilities, determine the optimum decision regions, and express the exact error probability in 
terms of the average energy-to-noise density ratio. 

(b) Repeat for the tilted signal set shown in Fig. P2.1b. 



Figure P2.1 (a) 16-point signal set on a square grid with coordinates ±a, ±3a in each dimension; (b) a tilted version of the same 16-point set. 






2.2 For the seven-signal set shown, transmitted over the AWGN channel, with equal a priori 
probabilities 

(a) Determine the optimum decision regions. 

(b) Show that one can obtain an upper bound on P Em , m = 1, 2, ..., 7, and hence on P E , by 
calculating the probability that the norm of the two-dimensional noise vector is greater than 

and calculate this bound. 




Figure P2.2 

2.3 For the signal set of Prob. 2.2, obtain a union bound on P Em of the form of (2.3.4) for each m. 
Compare the resulting bound on P E with that obtained in Prob. 2.2. 

2.4 For the orthogonal signal set of M equal-energy signals transmitted over the AWGN channel, first 
treated in Sec. 2.3 

(a) Show that the error probability is given exactly by 

P_E = P_{E_1} = 1 - \Pr\{y_m < y_1 \text{ for all } m \ne 1 \mid x_1\} 

where the {y_m} are the M observables. 

(b) From this, derive Eq. (2.11.1). 

(c) Letting ℰ = ℰ_b \log_2 M, where ℰ_b is the energy per bit, show that the function in question approaches a limit as M → ∞ that depends only on whether ℰ_b/N_0 < ln 2 or ℰ_b/N_0 > ln 2, and consequently that lim_{M→∞} P_E is 0 if ℰ_b/N_0 > ln 2 and is 1 if the inequality is reversed. 

Hint: Use L'Hospital's rule on the logarithm of the function in question. 

2.5 (a) Show that, if M = 2^K, an orthogonal signal set of M dimensions can be generated for any 
integer value of K by the following inductive construction. For K = 1, let 

[x_1; x_2] = H_1 = \sqrt{ℰ/2} \begin{bmatrix} +1 & +1 \\ +1 & -1 \end{bmatrix} 

Then for any integer K ≥ 2 the rows of 

H_K = \frac{1}{\sqrt{2}} \begin{bmatrix} H_{K-1} & H_{K-1} \\ H_{K-1} & -H_{K-1} \end{bmatrix} 

form the M = 2^K signal vectors x_1, x_2, ..., x_M. 

(b) Note that, for this construction, the first component of each signal vector is always equal to 
+\sqrt{ℰ/M}. Consider deleting this component in each vector, thus obtaining a signal set {x'_j} with M - 1 
dimensions and normalized inner products among all vectors 

\frac{x'_j \cdot x'_k}{ℰ'} = -\frac{1}{M-1}    for all j ≠ k 

where ℰ' = ℰ(M - 1)/M, which is the signal energy after deletion of the first component. This new 
signal set is called a regular simplex signal set. 

(c) Show that P_E for the regular simplex signal set is identical to that of orthogonal signals as 
given in Prob. 2.4, but, since the energy has been reduced in the simplex case, 

P_E\!\left(\frac{ℰ'}{N_0},\, -\frac{1}{M-1}\right) = P_E\!\left(\frac{ℰ'}{N_0}\,\frac{M}{M-1},\, 0\right) 

where the first parameter indicates the energy-to-noise density and the second gives the common 
normalized inner product among all signal vectors. 

(d) Show that, for any set of M equal-energy signals, the average normalized inner product satisfies 

\frac{1}{M(M-1)} \sum_{j} \sum_{k \ne j} \frac{x_j \cdot x_k}{ℰ} \ge -\frac{1}{M-1} 

and hence the set generated in (b) achieves the minimum. 

(e) Generalize the argument used in (c) to show that, if all normalized inner products are equal to 
ρ > -1/(M - 1), then 



2.6 (a) Show that an ideal lowpass filter with transfer function 

H(\omega) = 1 for |\omega| ≤ \pi W, and 0 otherwise 

has noncausal impulse response 

h(t) = \frac{\sin \pi W t}{\pi W t} 

(b) Show that, in response to a signal z(t), the response of this lowpass filter at time n/W will be 

\int_{-\infty}^{\infty} z(t)\, \frac{\sin \pi W(t - n/W)}{\pi W(t - n/W)}\, dt 

(c) Show then that the mechanization of Fig. 2.11 is equivalent to that of Fig. 2.9 with finite-time 
integrators replaced by infinite-time integrators. 

2.7 (a) Suppose that a signal set utilizes the basis functions of Table 2.1, Example 1, but that at the 
receiver the frequency and phase are incorrectly known so that the function 

\hat{\phi}_n(t) = \sqrt{2N/T} \sin[(\omega_0 + \Delta\omega)t + \phi]    (n - 1)T/N ≤ t ≤ nT/N 

is used. Assuming \omega_0 T/N ≫ 1 and \Delta\omega T/N ≪ 1, show that the observables are attenuated approximately 
by a factor depending on \Delta\omega and \phi. 

(b) For the basis functions of Table 2.1, Example 2, assume \omega_0 is known exactly at the receiver 
but that the phase is incorrectly assumed to be 0. Show that, if \omega_0 T/N ≫ 1, the signal components 
of the observables become approximately 

y_{2n} = x_{2n} \cos\phi + x_{2n+1} \sin\phi 
y_{2n+1} = -x_{2n} \sin\phi + x_{2n+1} \cos\phi 




(c) For part (b), let M = 4 and N = 2 (quadriphase transmission). Show how the decision regions 
are distorted by the incorrect phase \phi, and obtain expressions for the resulting error probabilities. 

2.8 (a) For the signal set of Fig. 2.12b transmitted over the AWGN channel, with equal a priori 
probabilities, determine the optimum decision regions. 

(b) Show that for all m 

P ≤ P_{E_m} ≤ 2P 

where P = Q(\sqrt{2ℰ/N_0} \sin(\pi/16)). 

(c) Compare this lower bound on P_E with the exact expression for P_E of the signal set of 
Fig. 2.12a (Prob. 2.1), and thus determine which set is superior in performance for equal average 
energies. 

2.9 (a) For the binary input AWGN channel with octal output quantization shown in Fig. 2.14 obtain 
explicit expressions for the transition probabilities 



(b) Give the optimum decision rule in as compact a form as possible. 
2.10 (Chernoff Bound) 

(a) Let z be a random variable such that its distribution (density) p(·) has finite moments of all 
order. Show that 

\Pr\{z \ge 0\} = \sum_{z \ge 0} p(z) \le \sum_{z} f(z) p(z) = E\{f(z)\} 

where f(z) is any function satisfying f(z) ≥ 1 for z ≥ 0 and f(z) ≥ 0 for all z. 

(b) Choose f(z) = e^{\rho z}, ρ > 0, and thus show that Pr{z ≥ 0} ≤ E[e^{\rho z}], ρ > 0. 

(c) In (2.3.12), let z(y|x_m) = \ln [p_N(y|x_{m'})/p_N(y|x_m)] where y has distribution (density) p_N(y|x_m). 
Using (b), show that 

P_E(m \to m') \le E_y\{[p_N(y|x_{m'})/p_N(y|x_m)]^{\rho}\} = \sum_y p_N(y|x_{m'})^{\rho}\, p_N(y|x_m)^{1-\rho}    ρ > 0 

(d) Show that the bound in (c) reduces to the Bhattacharyya bound when ρ = 1/2. 

(e) Consider the asymmetric binary "Z" channel, in which one input symbol is always received 
correctly while the other is received in error with probability p. Let x_m = 00...0 and x_{m'} = 11...1 be 
complementary N-dimensional vectors. Show that the Chernoff bound with ρ optimized yields 

P_E(m \to m') \le p^N 

and show that this is the exact result for maximum likelihood decoding. Compare with the Bhattacharyya 
bound. 
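The comparison in part (e) is easy to check numerically. The short sketch below is an added illustration (the crossover probability, block length, and channel orientation are my assumptions, not part of the problem statement): it evaluates the per-letter Chernoff bound, the Bhattacharyya bound (ρ = 1/2), and the exact pairwise error probability for the Z channel.

import numpy as np

# Assumed orientation: the transmitted all-zeros word can be corrupted, i.e.
# p(1|0) = p, p(0|0) = 1 - p, p(1|1) = 1, p(0|1) = 0.
p, N = 0.1, 5    # illustrative values

def chernoff(rho):
    # E[(p_N(y|x_m')/p_N(y|x_m))^rho] = [p(1|1)^rho p(1|0)^(1-rho)]^N = p^(N(1-rho))
    return p ** (N * (1 - rho))

rhos = np.linspace(0.01, 1.0, 200)
print("tightest Chernoff bound on the grid:", min(chernoff(r) for r in rhos))  # -> p**N as rho -> 0
print("Bhattacharyya bound (rho = 1/2):   ", chernoff(0.5))                    # p**(N/2)
print("exact pairwise error probability:  ", p ** N)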
2.11 (a) Show that the code whose parity-check matrix is given in Fig. 2.17 has the generator matrix 

G = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix} 

(b) Generalize to obtain the form of G for any (L, K) Hamming single-error-correcting code 
where L = 2^{L-K} - 1, L - K ≥ 2. 

(c) Show that, for all the codes in (b), d_min = 3. 

2.12 (a) For binary orthogonal codes, show that the expected number of symbol errors η occurring on 
a BSC defined by hard quantizing an AWGN channel is 

E[η] = Lp    where p = Q(\sqrt{2ℰ/LN_0}) and L = 2^K 

and that the variance is var[η] = Lp(1 - p). 

(b) For large K and L = 2^K, show that 

E[η] = Lp → L/2    as K → ∞ 

and that 

var[η] = Lp(1 - p) → L/4    as K → ∞ 

(c) Since d_min = L/2 for the codes of (a), show that the bound (2.10.14) can be expressed as 
P_E ≤ Pr{η ≥ L/4}. 

(d) Using (b) and the Chebyshev inequality, show that Pr{η < L/4} → 0 as K → ∞, and consequently 
that the bound of (c) approaches unity. 
2.13 Consider the following normalized four-level quantizer used with the AWGN channel: the quantizer 
output is -2, -1, +1, or +2 according to whether its input lies below -a, between -a and 0, between 
0 and a, or above a. 

(a) Show that the resulting binary-input quaternary-output channel is symmetric, and obtain its 
transition probabilities in terms of the Q(·) function. 

(b) Evaluate the Bhattacharyya distance 

d = -\ln \sum_y \sqrt{p(y|x) p(y|x')} 

and optimize the threshold a for ℰ_s/N_0 = 2. 

2.14 Generalize (2.12.8) to noncoherent detection of M orthogonal signals. 

(a) Show that 

P_{E_1} = 1 - \Pr\{y_1 > y_l \text{ for all } l \ne 1 \mid x_1\} 

where p(y_1|x_1) and p(y_l|x_l), l ≠ 1, are given by (2.12.7). 

(b) Substitute as justified in (a) to obtain a single integral over y_1 for P_{E_1}. 

(c) Show that the integral in (b) reduces to a finite sum over j = 2, 3, ..., M of binomial coefficients 
with alternating signs. 

2.15 (Continuation and Bound) 

(a) Show that the term in brackets in Prob. 2.14(b) is upper bounded by 

1 - (1 - e^{-y^2/2})^{M-1} \le \min[(M-1)e^{-y^2/2},\, 1] \le [(M-1)e^{-y^2/2}]^{\rho}    0 ≤ ρ ≤ 1 

(b) Use this to show that 

P_E \le (M-1)^{\rho} \exp\left[-\frac{ℰ}{N_0}\left(\frac{\rho}{1+\rho}\right)\right]    0 ≤ ρ ≤ 1 



which is the same as the bound (2.5.12) for coherent detection and leads directly to the exponential 
bound (2.5.16). 

(c) Give an intuitive argument for the perhaps unexpected result of (b). 

2.16 Consider a binary linear block code with K = 4, L = 7 and generator matrix 

G = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix} 

(a) Find a parity-check matrix H for this code. 

(b) Suppose we use this code over a BSC and the received output is y = (1 1 1 1 1 1). What is the 
maximum likelihood decision for the transmitted codeword? 

(c) Repeat (b) for y= (100 1101). 

(d) What is the minimum distance of this code? 

2.17 Consider M completely known, orthogonal, time-limited, equal-energy, equally likely signals 
x_1(t), ..., x_M(t) where 

\int_0^T x_i(t) x_j(t)\, dt = ℰ\,\delta_{ij} 

These signals are used for digital communication over the usual additive white Gaussian noise channel 
with spectral density N_0/2. Consider a receiver that computes the correlations 

\Lambda_k = \int_0^T y(t) x_k(t)\, dt    k = 1, 2, ..., M 

and decides m_{k*} when \Lambda_{k*} = \max_k \{\Lambda_k\}, provided that \max_k \{\Lambda_k\} > δ. If \Lambda_k ≤ δ for all k, then the receiver 
declares an erasure and does not make any decision. Let δ > 0 and b = \sqrt{2ℰ/N_0}. 

(a) Find the probability of an erasure. 

(b) Find the probability of a correct decision. 

2.18 Consider detection of a signal of random amplitude in additive white Gaussian noise such that 

H_0:  y(t) = n(t)    0 ≤ t ≤ T 
H_1:  y(t) = x\phi(t) + n(t)    0 ≤ t ≤ T 

where 

E\{n(t)n(t+\tau)\} = \frac{N_0}{2}\delta(\tau)    and    \int_0^T \phi^2(t)\, dt = 1 

and x is a Gaussian random variable with zero mean and unit variance. What is the minimum average 
error probability when both hypotheses H_0 and H_1 have a priori probability of 1/2? 



2.19 Consider the three signals 

x_k(t) = \sqrt{2ℰ/T} \cos\left(\frac{2\pi t}{T} + \frac{2\pi k}{3}\right)    0 ≤ t ≤ T;  0 elsewhere;  k = 0, 1, 2 

to be used to send one of three messages over an additive white Gaussian noise channel of spectral 
density N_0/2. 

(a) When the messages are equally likely, show that the minimum probability of error is given by 



(b) Find the minimum probability of error when the a priori probabilities are 

TT O = Pr (m is sent} = \ 
TTj = Pr {m 1 is sent) = \ 
n 2 = Pr (m 2 is sent} = 

(c) Find the minimum probability of error when 

71 = 7Ti = * 71 2 = | 

2.20 Consider the detection of two equally likely signals in additive colored Gaussian noise where 

E[n(t)n(s)] = \Phi(t, s) 

H_1:  y(t) = x_1(t) + n(t)    0 ≤ t ≤ T 
H_2:  y(t) = x_2(t) + n(t)    0 ≤ t ≤ T 

Suppose that the functions \psi_1, \psi_2, ..., \psi_m and the constants \sigma_1^2, \sigma_2^2, ..., \sigma_m^2 satisfy the equation 

\int_0^T \Phi(t, s)\psi_k(s)\, ds = \sigma_k^2 \psi_k(t)    0 ≤ t ≤ T,  k = 1, 2, ..., m 

where 

\int_0^T \psi_j(t)\psi_k(t)\, dt = \delta_{jk} 

Suppose that the signals are 

*t(0 = 



and let 



y_k = \int_0^T y(t)\psi_k(t)\, dt    k = 1, 2, ..., m 






(a) Show that the minimum-probability-of-error decision rule is 



if 



choose//! 



Otherwise choose H 2 

(b) Using the optimum decision rule, find an exact expression in terms of Q(·) for the error 
probability as a function of \sigma_1^2, ..., \sigma_m^2 and ℰ_1, ℰ_2, ..., ℰ_m. Check your answer for the special case 
where \sigma_k^2 = N_0/2 for all k, with ℰ = ℰ_1 + ℰ_2 + ... + ℰ_m denoting the total signal energy. 

2.21 [Staggered (Offset) QPSK and Minimum Shift Keying (MSK)] 

Consider the signal set generated by binary modulation (x_k = ±1 for each k) of the basis vectors 

\phi_{2n-1}(t) = f(t - (n - \tfrac{1}{2})T) \sin \omega_0 t    (n-1)T ≤ t ≤ nT;  0 otherwise 
\phi_{2n}(t) = f(t - nT) \cos \omega_0 t    (n - \tfrac{1}{2})T ≤ t ≤ (n + \tfrac{1}{2})T;  0 otherwise 

where \omega_0 is a multiple of 2\pi/T. 

(a) [Staggered (Offset) QPSK (SQPSK)] 
Let 

f(t) = \sqrt{2/T}    -T/2 ≤ t ≤ T/2;  0 otherwise 

Show that the performance with optimum demodulation is the same as for QPSK, and that the 
spectral density of the modulation sequence \sum_k x_k \phi_k(t) for a random binary sequence {x_k} is the same 
as for QPSK, namely a translation of the lowpass density to ±\omega_0: 

S(\omega) \propto S_L(\omega - \omega_0) + S_L(\omega + \omega_0) 

where 

S_L(\omega) \propto \left[\frac{\sin(\omega T/2)}{\omega T/2}\right]^2 

(b) Comparing (a) with Prob. 2.7(b), show that the cross-channel interference effect of the phase 
error is reduced relative to ordinary QPSK. 

(c) [Minimum Shift Keying (MSK)] 
Let 

f(t) = \frac{2}{\sqrt{T}} \cos\frac{\pi t}{T}    -T/2 ≤ t ≤ T/2;  0 otherwise 

Show that the performance is the same as for QPSK and SQPSK with optimum demodulation. 

(d) For MSK, show that for random binary modulation in the interval (n - \tfrac{1}{2})T ≤ t ≤ (n + \tfrac{1}{2})T 
the signal can be expressed as 

x_{2n}\phi_{2n}(t) + x_{2n+1}\phi_{2n+1}(t) = \frac{2}{\sqrt{T}} \cos[(\omega_0 ± \pi/T)t] 

which amounts to continuous-phase frequency shift keying. 

(e) Show that the spectral density of MSK can be expressed in the form given in (a) but with 

S_L(\omega) \propto \left[\frac{\cos(\omega T/2)}{1 - (\omega T/\pi)^2}\right]^2 

which decreases for large frequencies as \omega^{-4} rather than \omega^{-2} as is the case for QPSK and SQPSK. 



CHAPTER 

THREE 

BLOCK CODE ENSEMBLE PERFORMANCE 
ANALYSIS 



3.1 CODE ENSEMBLE AVERAGE ERROR PROBABILITY: 
UPPER BOUND 

In Chap. 2 we made only modest progress in evaluating the error performance of 
specific coded signal sets. Since exact expressions for error probability involve 
multidimensional integrals which are generally prohibitively complex to calculate, 
we developed tight upper bounds, such as the union-Bhattacharyya bound (2.3.16) 
and the Gallager bound (2.4.8), which are applicable to any signal set. Never 
theless, evaluation of these error bounds for a specific signal set, other than a few 
cases such as those treated in Sec. 2.11, is essentially prohibitive, and particularly 
so as the size of the signal set, M , and the dimensionality, N, become large. It 
follows that, given the difficulty in analyzing specific signal sets, the search for the 
optimum for a given M and N is generally futile. 

Actually, the exit from this impasse was clearly indicated by Shannon [1948], 
who first employed the central technique of information theory now referred to, 
not very appropriately, as " random coding." The basis of this technique is very 
simple: given that the calculation of the error probabilities for a particular set of 
M signal (or code) vectors of dimension N is not feasible, consider instead the 
average error probability over the ensemble of all possible sets of M signals with 
dimensionality N. A tight upper bound on this average over the entire ensemble 





turns out to be amazingly simple to calculate. Obviously at least one signal set 
must have an error probability which is no greater than the ensemble average; 
hence the ensemble average is an upper bound on the error probability for the 
optimum signal set (or code) of M signals of dimensionality N. Surprisingly, for 
most rates this upper bound is asymptotically tight, as we shall demonstrate by 
calculating lower bounds in the latter half of this chapter. 

To begin the derivation of this ensemble upper bound, consider a specific code 
or signal set¹ of M signal vectors x_1, x_2, ..., x_M, each of dimension N. Suppose 
there are Q possible channel inputs so that x_mn ∈ 𝒳 = {a_1, a_2, ..., a_Q}, and m = 1, 
2, ..., M; n = 1, 2, ..., N. As discussed in Sec. 2.7, these inputs may be taken as 
amplitudes, phases, vectors, or just as abstract quantities. In any case, this ensures 
that there are in all exactly Q^{MN} possible distinct signal sets with the given parameters, 
some of which are naturally absurd such as those for which x_i = x_j for 
some i ≠ j. Nevertheless, if P_{E_m}(x_1, x_2, ..., x_M) is the error probability for the mth 
message with a given signal set, the average error probability for the mth message 
over the ensemble of all possible Q^{MN} signal sets is 

\overline{P}_{E_m} = \frac{1}{Q^{MN}} \sum_{x_1} \sum_{x_2} \cdots \sum_{x_M} P_{E_m}(x_1, x_2, \ldots, x_M)    m = 1, 2, ..., M    (3.1.1) 

where each of the M summations runs over all Q^N possible N-dimensional Q-ary 
vectors from x = (a_1, a_1, ..., a_1) to x = (a_Q, a_Q, ..., a_Q). Hence the M-dimensional 
sum runs over all possible Q^{MN} signal sets and we divide by this number to obtain 
the ensemble average. 

For the sake of later generalization, we rewrite (3.1.1) as 

\overline{P}_{E_m} = \sum_{x_1} \sum_{x_2} \cdots \sum_{x_M} q_N(x_1) q_N(x_2) \cdots q_N(x_M)\, P_{E_m}(x_1, x_2, \ldots, x_M)    m = 1, 2, ..., M    (3.1.2) 

where q_N(x) will be taken as any distribution over 𝒳^N; for now, however, we 
continue with the uniform weighting of (3.1.1) and take 

q_N(x_m) = Q^{-N}    m = 1, 2, ..., M    (3.1.3) 

In Sec. 2.4 we derived an upper bound on P_{E_m} for any specific signal set, namely 

P_{E_m} \le \sum_y p_N(y|x_m)^{1/(1+\rho)} \left[\sum_{m' \ne m} p_N(y|x_{m'})^{1/(1+\rho)}\right]^{\rho}    ρ > 0    (3.1.4) 

This Gallager bound is more general than the union-Bhattacharyya bound of 
Sec. 2.3 to which it reduces for ρ = 1. Initially, for the sake of manipulative 
simplicity, we consider only m = 1. Then inserting (3.1.4) into (3.1.2) and changing 



1 Throughout this chapter we shall use the pairs of terms code and signal set, and code vector and 
signal vector, interchangeably. 






the order of the summations, we obtain as the upper bound on ensemble error 
probability when message 1 is sent 

\overline{P}_{E_1} \le \sum_y \sum_{x_1} q_N(x_1)\, p_N(y|x_1)^{1/(1+\rho)} \left\{ \sum_{x_2} \sum_{x_3} \cdots \sum_{x_M} q_N(x_2) \cdots q_N(x_M) \left[ \sum_{m'=2}^{M} p_N(y|x_{m'})^{1/(1+\rho)} \right]^{\rho} \right\}    ρ > 0    (3.1.5) 

To proceed further, we must restrict the arbitrary parameter ρ to lie in the unit 
interval 0 < ρ ≤ 1. Then limiting attention to the term in braces in (3.1.5) and 
defining 

f_N(x_2, \ldots, x_M) \equiv \sum_{m'=2}^{M} p_N(y|x_{m'})^{1/(1+\rho)}    0 < ρ ≤ 1    (3.1.6) 

we have from the Jensen inequality (App. 1B) 

\sum_{x_2} \sum_{x_3} \cdots \sum_{x_M} q_N(x_2) \cdots q_N(x_M) \left[f_N(x_2, \ldots, x_M)\right]^{\rho} \le \left[ \sum_{x_2} \sum_{x_3} \cdots \sum_{x_M} q_N(x_2) \cdots q_N(x_M)\, f_N(x_2, \ldots, x_M) \right]^{\rho}    (3.1.7) 

since f^{\rho} is a convex ∩ function of f when 0 < ρ ≤ 1. Here q_N(x) ≥ 0 and 

\sum_x q_N(x) = 1    (3.1.8) 

Next, using (3.1.6), we can evaluate the right side of (3.1.7) exactly to be 

\sum_{x_2} \cdots \sum_{x_M} q_N(x_2) \cdots q_N(x_M)\, f_N(x_2, \ldots, x_M) = \sum_{m'=2}^{M} \sum_{x_{m'}} q_N(x_{m'})\, p_N(y|x_{m'})^{1/(1+\rho)} = (M-1) \sum_x q_N(x)\, p_N(y|x)^{1/(1+\rho)}    (3.1.9) 

where the last step follows from the fact that each vector x_{m'} is summed over the 
same space 𝒳^N. Combining (3.1.5) through (3.1.7) and (3.1.9) and recognizing 
that, since the factors of the summand of (3.1.5) are nonnegative, upper bounding 
any of them results in an upper bound on the sum, we have 

\overline{P}_{E_1} \le (M-1)^{\rho} \sum_y \left[ \sum_{x} q_N(x)\, p_N(y|x)^{1/(1+\rho)} \right]^{1+\rho}    0 < ρ ≤ 1    (3.1.10) 




Now if, for any m ≠ 1, we were to interchange the indices 1 and m throughout the 
above derivation from (3.1.5) on, we would arrive at the same bound, which is 
consequently independent of m. Thus, trivially over-bounding M - 1 by M, we 
obtain finally the following upper bound on the ensemble average error probability 
when any message m is sent 

\overline{P}_{E_m} \le M^{\rho} \sum_y \left[\sum_x q_N(x)\, p_N(y|x)^{1/(1+\rho)}\right]^{1+\rho}    0 < ρ ≤ 1,  m = 1, 2, ..., M    (3.1.11) 

This bound is valid for any discrete (Q-ary) input and discrete or continuous output 
channel, provided in the latter case we replace the summation over y by an 
N-dimensional integral and take p_N(·) to be a density function. It is also noteworthy 
that the steps followed in deriving (3.1.11) are formally similar to those 
involved in the derivation of \overline{P}_E for orthogonal signals over the AWGN channel in 
Sec. 2.5. This similarity will become even more striking in the next section. 

Note, however, that we have not yet restricted the channel to be memoryless. If 
we so restrict it, we have 

p_N(y|x) = \prod_{n=1}^{N} p(y_n|x_n)    (3.1.12) 

If we also restrict q_N(x) to be a product distribution 

q_N(x) = \prod_{n=1}^{N} q(x_n)    (3.1.13) 

[which is trivially true for the special case (3.1.3) in which q(x) = 1/Q] then upon 
inserting (3.1.12) and (3.1.13) in (3.1.11), we have for a memoryless channel that 

\overline{P}_{E_m} \le M^{\rho} \left\{\sum_y \left[\sum_x q(x)\, p(y|x)^{1/(1+\rho)}\right]^{1+\rho}\right\}^{N}    0 < ρ ≤ 1    (3.1.14) 

where p(y|x) is the symbol transition probability (density). [In the special case 
(3.1.3), q(x) = 1/Q for all x ∈ 𝒳.] 

Before proceeding to evaluate the consequences of the elegantly simple result 
(3.1.14), let us generalize it slightly. We began in (3.1.1) and (3.1.2) by taking a 




uniform average over the entire ensemble of possible coded signal sets. However, 
for some signal sets and for some channels, it will develop that certain choices are 
preferable to others. In evaluating an average where the ultimate goal is to bound 
the performance of the best member of the ensemble, it is logical that, based on 
some side information or intuition, we might wish to weigh certain sets of signal 
vectors (or certain signal vectors, or certain symbols or components of signal 
vectors) more heavily than others. An appropriate, though banal, example would 
be to use the average test score of a class of students to lower bound the score of 
the best student. However, if an instructor s experience is that red-haired, green- 
eyed students generally perform above average and green-haired, red-eyed stu 
dents perform below average, he may choose to use a weighted average which 
weighs the score of any student from the first group most heavily, that of any 
student from the second group least heavily, and that of any other student some 
where between the two extremes. The only constraint is that the sum of the 
(nonnegative) weights be unity or, equivalently, that the vector of weights be a 
distribution vector. If the instructor s bias is justified, this weighted average will 
then be a tighter lower bound on the performance of the best student than the 
original uniform average, but it will always be a valid lower bound regardless of 
the validity of bias. 

We can easily achieve such a priori biasing from (3.1.2) on by allowing q_N(x) 
to be any distribution on the Q^N possible signal vectors. Thus (3.1.2) may be 
regarded as a weighted ensemble average where the weighting of the signal sets, 
which are members of the ensemble, is given by the product measure 
\prod_{m=1}^{M} q_N(x_m). The same may be said of all subsequent ensemble averages through 
(3.1.11). For a memoryless channel, defined by (3.1.12), we further restrict this 
arbitrary weighting to be of the form (3.1.13) which corresponds to weighting each 
component of each codeword independently according to q(x). For many classes 
of channels, including all binary-input, output-symmetric channels, a nonuniform 
weighting does not reduce the bound on the ensemble average error probability. 
For others such as the Z channel (Prob. 3.1), there is a marked improvement at 
some rates. And clearly, if by nonuniform weighting of the members of the en 
semble we manage to reduce this average, then the best signal set must perform 
better than this newly reduced average. The advantage of nonuniform weighting 
depends generally on the skewness of the channel. 

We may express (3.1.14) alternatively in terms of the data rate per dimension 

R = \frac{\ln M}{N}    nats/dimension    (3.1.15) 

which is of course related to the rate R_T in nats per second defined in Sec. 2.5 by 

R = R_T(T/N) ≈ R_T/2W    (3.1.16) 

Thus, since M = e^{NR}, we obtain for memoryless channels 

\overline{P}_{E_m} \le \exp\{-N[E_0(\rho, q) - \rho R]\}    0 < ρ ≤ 1    (3.1.17) 




where² 

E_0(\rho, q) = -\ln \sum_y \left[\sum_x q(x)\, p(y|x)^{1/(1+\rho)}\right]^{1+\rho}    (3.1.18) 

and where q = (q(a_1), q(a_2), ..., q(a_Q)) is an arbitrary distribution vector; that is, q 
is an arbitrary vector over the finite space 𝒳 = {a_1, a_2, ..., a_Q} with the properties 

q(x) ≥ 0 for every x ∈ 𝒳 
and 

\sum_x q(x) = 1    (3.1.19) 

We observe finally that, since (3.1.17) is an error probability bound for any 
message sent, it must also be a bound on the ensemble average of the overall error 
probability \overline{P}_E no matter what the message prior probabilities may be, provided the 
maximum likelihood decision rule is used. Also, since ρ is arbitrary within the unit 
interval and q is an arbitrary distribution vector subject to the constraints (3.1.19), 
we may optimize these parameters to yield the tightest upper bound. This is 
achieved, of course, by maximizing the negative exponent of (3.1.17) with the 
result that the average error probability over the ensemble of all possible signal 
sets for a Q-ary input memoryless channel may be bounded by 

\overline{P}_E \le e^{-NE(R)}    (3.1.20) 

where 

E(R) = \max_q \max_{0 \le \rho \le 1} [E_0(\rho, q) - \rho R] 

E_0(ρ, q) is given by (3.1.18) and q is a distribution vector subject to the constraints 
(3.1.19). It obviously follows that at least one signal set in the ensemble must have 
P_E no greater than this ensemble average bound. 
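To make (3.1.18) and (3.1.20) concrete, the following numerical sketch (added here, not part of the text; the BSC parameters and the grid-search maximization are illustrative assumptions) evaluates E_0(ρ, q) from a channel transition matrix and maximizes E_0(ρ, q) − ρR over 0 < ρ ≤ 1.

import numpy as np

def gallager_E0(rho, q, P):
    # E0(rho, q) = -ln sum_y [ sum_x q(x) p(y|x)^(1/(1+rho)) ]^(1+rho)
    # q: input distribution, shape (Q,); P: transition matrix p(y|x), shape (Q, J)
    inner = q @ P ** (1.0 / (1.0 + rho))      # sum over x for each output y
    return -np.log(np.sum(inner ** (1.0 + rho)))

def ensemble_exponent(R, q, P, num_rho=1000):
    # E(R, q) = max over 0 < rho <= 1 of [E0(rho, q) - rho*R], by grid search
    rhos = np.linspace(1e-3, 1.0, num_rho)
    return max(gallager_E0(r, q, P) - r * R for r in rhos)

# Illustrative example: BSC with crossover probability 0.01, uniform input.
p = 0.01
P_bsc = np.array([[1 - p, p], [p, 1 - p]])
q_uni = np.array([0.5, 0.5])
print("E0(1, q) =", gallager_E0(1.0, q_uni, P_bsc))
print("E(R, q) at R = 0.2 nats/symbol:", ensemble_exponent(0.2, q_uni, P_bsc))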

We leave the detailed discussion of this remarkably simple result to the next 
section where we utilize it to prove Shannon s channel coding theorem. 



3.2 THE CHANNEL CODING THEOREM AND ERROR 
EXPONENT PROPERTIES FOR MEMORYLESS CHANNELS 

The key to assessing the value of the bound on the ensemble average error 
probability given by (3.1.20) lies in determining the properties of the function 
E_0(ρ, q) given by (3.1.18). The important properties of this function that depend 
only on the memoryless channel statistics {p(y|x)} and the arbitrary input 
weighting distribution q(·) are summarized in the following. 

² The function E_0(ρ, q) appears in other bounds as well. It was first defined by Gallager [1965] and 
is referred to as the Gallager function. 




Lemma 3.2.1 (Gallager [1965]) Let 

E_0(\rho, q) = -\ln \sum_y \left[\sum_x q(x)\, p(y|x)^{1/(1+\rho)}\right]^{1+\rho}    (3.2.1) 

where q(·) is a probability distribution over the finite space 𝒳 = {a_1, a_2, ..., a_Q}, and suppose that³ 

I(q) = \sum_x \sum_y q(x)\, p(y|x) \ln \frac{p(y|x)}{\sum_{x'} q(x')\, p(y|x')}    (3.2.2) 

is nonzero. Then the function E_0(ρ, q) has the following properties: 

E_0(\rho, q) > 0    for ρ > 0 
E_0(\rho, q) < 0    for -1 < ρ < 0    (3.2.3) 

with equality in either case if and only if ρ = 0; and 

\partial E_0(\rho, q)/\partial\rho > 0    (3.2.4) 

\partial^2 E_0(\rho, q)/\partial\rho^2 \le 0    for ρ > -1    (3.2.5a) 

with equality in (3.2.5a) if and only if 

\ln \frac{p(y|x)}{\sum_{x'} q(x')\, p(y|x')} = I(q)    (3.2.5b) 

for all x ∈ 𝒳 such that q(x) > 0 and all y such that p(y|x) > 0. 

In (3.2.2) and (3.2.5b) we find the function I(q), called the average mutual 
information of the channel, first defined in Sec. 1.2,⁴ where it was shown to be 
nonnegative. Direct substitution of ρ = 0 in (3.2.1) shows that E_0(0, q) = 0 and 
hence that the inequalities (3.2.3) follow from (3.2.4); the proof of inequalities 
(3.2.4) and (3.2.5a) is based on certain fundamental inequalities of analysis. 
Appendix 3A contains these inequalities and gives the proof of (3.2.4) and (3.2.5a). 

Thus, in all cases except when the condition (3.2.5b) holds, E_0(ρ, q) is a positive 
increasing convex ∩ function, for positive ρ, with a slope at the origin equal to 

³ I(q) = I(𝒳; 𝒴) was first defined in Sec. 1.2. Henceforth, the channel input distribution is used as 
the argument, in preference to the input and output spaces, because this is the variable over which all 
results will be optimized. 

⁴ Note that average mutual information here evolves naturally as a parameter of the error probability 
bound, while in Sec. 1.2 it was defined in a more abstract framework. 







Figure 3.1 Function E_0(ρ, q). 



I(q). An example is sketched in Fig. 3.1. On the other hand, if (3.2.5b) holds, the 
second derivative of E_0(ρ, q) with respect to ρ is zero for all ρ, and consequently in 
this case E_0(ρ, q) = ρI(q). While it is possible to construct nontrivial examples of 
discrete channels for which (3.2.5b) holds (see Prob. 3.2), these do not include any 
case of practical importance. 

Then restricting our consideration to the case where (3.2.5a) is a strict inequality, 
we have that, for any particular distribution vector q, the function to be 
maximized in (3.1.20), [E_0(ρ, q) - ρR], is the difference between a convex ∩ function 
and a straight line, and hence must itself be convex ∩ for positive ρ, as shown 
in Fig. 3.2. Defining 

E(R, q) = \max_{0 \le \rho \le 1} [E_0(\rho, q) - \rho R]    (3.2.6) 

Figure 3.2 Function E_0(ρ, q) - ρR: (a) R ≤ ∂E_0(ρ, q)/∂ρ at ρ = 1; (b) R = ∂E_0(ρ, q)/∂ρ at some ρ_0 < 1. 




we note that, as a consequence of Lemma 3.2.1, the exponent has a unique maximum. 
For small R (Fig. 3.2a), the maximum of [E_0(ρ, q) - ρR] occurs for ρ ≥ 1 
and, consequently, the maximum on the unit interval lies at ρ = 1. For larger R 
(Fig. 3.2b), the maximum occurs at the value of ρ for which R = ∂E_0(ρ, q)/∂ρ. 
Since the second derivative is strictly negative, the first derivative is a decreasing 
function of ρ, so that we can express the maximum of (3.2.6) for low rates as 

E(R, q) = E_0(1, q) - R    0 ≤ R ≤ \partial E_0(\rho, q)/\partial\rho \,|_{\rho=1}    (3.2.7) 

while for higher rates we must use the parametric equations 

E(R, q) = E_0(\rho, q) - \rho\, \partial E_0(\rho, q)/\partial\rho    (3.2.8) 
R = \partial E_0(\rho, q)/\partial\rho 

for \partial E_0(\rho, q)/\partial\rho \,|_{\rho=1} ≤ R ≤ \partial E_0(\rho, q)/\partial\rho \,|_{\rho=0} = I(q). 

For this higher-rate region, the slope is obtained as the ratio of partial derivatives 

\frac{dE(R, q)}{dR} = \frac{\partial[E_0(\rho, q) - \rho\, \partial E_0(\rho, q)/\partial\rho]/\partial\rho}{\partial R/\partial\rho} = -\rho    (3.2.9) 

and the second derivative is 

\frac{d^2 E(R, q)}{dR^2} = \frac{\partial[dE(R, q)/dR]/\partial\rho}{\partial R/\partial\rho} = \frac{-1}{\partial^2 E_0(\rho, q)/\partial\rho^2} > 0    (3.2.10) 

Hence, while E(R, q) for low rates is linear in R with slope equal to -1, for higher 
rates it is monotonically decreasing and convex ∪. Its slope, which is -ρ 
(0 ≤ ρ ≤ 1), increases from -1 at R = ∂E_0(ρ, q)/∂ρ at ρ = 1, where it equals the slope 
of the low-rate linear segment, to 0 at R = I(q) = ∂E_0(ρ, q)/∂ρ at ρ = 0, where the 
function E(R, q) itself goes to zero. A typical E(R, q) function demonstrating these 
properties is shown in Fig. 3.3a. 
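The parametric equations (3.2.7) and (3.2.8) are easy to trace numerically. The sketch below is an added illustration (the finite-difference derivative and the BSC example are my assumptions); it reuses the Gallager function defined after (3.1.20) and produces (R, E(R, q)) pairs for a fixed q.

import numpy as np

def gallager_E0(rho, q, P):
    inner = q @ P ** (1.0 / (1.0 + rho))
    return -np.log(np.sum(inner ** (1.0 + rho)))

def exponent_curve(q, P, num=50, h=1e-5):
    # Trace (R, E(R, q)) via (3.2.7)-(3.2.8); dE0/drho is a central difference.
    points = []
    for rho in np.linspace(h, 1.0, num):
        dE0 = (gallager_E0(rho + h, q, P) - gallager_E0(rho - h, q, P)) / (2 * h)
        points.append((dE0, gallager_E0(rho, q, P) - rho * dE0))
    # Below the rate at rho = 1 the exponent is the straight line E0(1, q) - R.
    R_crit = points[-1][0]
    E0_1 = gallager_E0(1.0, q, P)
    low_rate = [(R, E0_1 - R) for R in np.linspace(0.0, R_crit, 20)]
    return sorted(low_rate + points)

p = 0.05
P_bsc = np.array([[1 - p, p], [p, 1 - p]])
curve = exponent_curve(np.array([0.5, 0.5]), P_bsc)
print("largest rate on the curve (close to I(q)):", curve[-1][0])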

For the special class of channels for which condition (3.2.5b) holds, so that the 
second derivative of E_0(ρ, q) is everywhere zero, we have 

E(R, q) = \max_{0 \le \rho \le 1} \rho[I(q) - R] = I(q) - R    0 ≤ R ≤ I(q)    (3.2.11) 

Thus, as shown in Fig. 3.3b, the curved portion of the typical E(R, q) function 
disappears and only the linear segment remains. 

Figure 3.3 Examples of the E(R, q) function: (a) ∂²E_0(ρ, q)/∂ρ² < 0; (b) ∂²E_0(ρ, q)/∂ρ² = 0. 

The negative exponent E(R) of (3.1.20) is, of course, obtained from E(R, q) by 
maximizing over all possible distribution vectors. That is 

E(R) = \max_q E(R, q)    (3.2.12) 

where q = {q(x): x ∈ 𝒳} with the properties 

q(x) ≥ 0 for all x ∈ 𝒳    and    \sum_x q(x) = 1 

Note that, as a consequence of these distribution constraints, the space of allowed 
q is a closed convex region. For certain channels, including some of greatest 
physical interest (see Sec. 3.4), a unique distribution vector q maximizes E(R) for 
all rates; for other channels (Prob. 3.3c), two or more distributions maximize E(R) 
over disjoint intervals of R; for still other channels (Prob. 3.5), the maximizing 
distribution varies continuously with R. Regardless of which of the above situations 
holds, we have shown that, as a consequence of Gallager's lemma, E(R, q) is 
a bounded, decreasing, convex ∪, positive function of R for all rates R, 0 ≤ R < 
I(q). E(R) as defined by (3.2.12) is then the upper envelope of the set of all 
functions E(R, q) over the space of probability distributions q. It is easily shown 
that the upper envelope of a set of bounded, decreasing, convex ∪, positive functions 
of R is itself a bounded, decreasing, convex ∪, positive function of R. Thus, 
for all rates R 

0 ≤ R < C = \max_q I(q) = \max_q \sum_x \sum_y q(x)\, p(y|x) \ln \frac{p(y|x)}{\sum_{x'} q(x')\, p(y|x')}    (3.2.13) 


E(R) is a bounded, decreasing, convex ∪, positive function of R. C is called the 
channel capacity. This then proves the celebrated channel coding theorem. 

Theorem 3.2.1 (Shannon [1948] et al.⁵) For any discrete-input memoryless 
channel, there exists an N-symbol code (signal set) of rate R nats per symbol 
for which the error probability with maximum likelihood decoding is 
bounded by 

P_E \le e^{-NE(R)}    (3.2.14) 

where E(R), as defined by (3.1.20) and (3.1.18), is a convex ∪, decreasing, 
positive function of R for 0 ≤ R < C, where C is defined by (3.2.13). 

Channel capacity was first defined in conjunction with average mutual information 
in Sec. 1.2. Like the latter, it emerges here naturally as a fundamental 
parameter of the error bounds, namely, the rate above which the exponential 
bound is no longer valid. Its significance is increased further by the converse 
theorem of Sec. 1.3, as well as by that to be proved in Sec. 3.9. 

In spite of its unquestionable significance, this coding theorem leaves us with 
two sources of uneasiness. The first disturbing thought is that, while there exists a 
signal set or code whose error probability P_E, averaged over all transmitted 
messages, is bounded by (3.2.14), the message error probability P_{E_m} for some 
message or signal vector x_m may be much greater than the bound. While this may 
indeed be true for some codes, we now show that there always exists a signal set or 
code in the ensemble for which P_{E_m} is within a factor of 4 of the coding theorem 
bound for every m. 

Corollary For any discrete-input memoryless channel, there exists an N-symbol 
code of rate R for which maximum likelihood decoding yields 

P_{E_m} \le 4e^{-NE(R)}    0 < R < C,  m = 1, 2, ..., M    (3.2.15) 

PROOF The proof involves applying the channel coding theorem to the ensemble 
of codes of the same dimensionality but with twice as many messages. 
Let us assume further, arbitrarily, that the 2M messages are all a priori 
equiprobable. Then from the above theorem, we have that there exists at least 
one code in the ensemble of codes with 2M messages for which 

\frac{1}{2M}\sum_{m=1}^{2M} P_{E_m} \le e^{-NE(\ln(2M)/N)}    (3.2.16) 

since the rate of this code is ln(2M)/N. Now suppose we discard the M code 

⁵ Shannon actually proved that P_E → 0 as N → ∞, while the exponential bound was proved in 
various progressively more explicit forms by Feinstein [1954, 1955], Elias [1955], Wolfowitz [1957], 
Fano [1961], and Gallager [1965]. 




(signal) vectors with highest P_{E_m}. This guarantees that the remaining M code 
vectors have 

P_{E_m} \le 2e^{-NE(\ln(2M)/N)}    (3.2.17) 

for, if this were not so, just the average of the M code vectors with largest 
error probabilities would exceed the bound (3.2.16). Substituting (3.1.20) for 
the exponent, we have for rate ln(2M)/N 

P_{E_m} \le 2 \exp \{-N \max_q \max_{0 \le \rho \le 1} [E_0(\rho, q) - \rho(\ln M)/N - \rho(\ln 2)/N]\} 
\le 2 \exp \{-N \max_q \max_{0 \le \rho \le 1} [E_0(\rho, q) - \rho(\ln M)/N - (\ln 2)/N]\} 
= 4e^{-NE(R)}    (3.2.18) 

for each of the M code vectors. (Note that, while the above development was 
for the code set of 2M messages and the corresponding maximum likelihood 
decision regions, reducing the set to M messages can only reduce P_{E_m} by 
expanding each decision region.) This proves the corollary. 

The second disturbing thought is that, even though a code exists with low 
error probability, it may be difficult, if not nearly impossible, to find. We may 
dispel this doubt quickly for ensembles where uniform weighting (that is, 
q(x) = 1/Q for all x ∈ 𝒳 = {a_1, a_2, ..., a_Q}) is optimum. For in this case at least 
half the codes in the ensemble must have 

P_E \le 2\overline{P}_E 

for again, if this were not so, the ensemble average could not be bounded by \overline{P}_E as 
given by (3.1.20). For nonuniformly weighted ensembles, the argument must include 
the effect of weighting and reduces essentially to a probabilistic statement. 
In any case, the practical problem is not solely one of finding codes that yield low 
P_E, but codes which are easily generated and especially which are easily decoded, 
that yield low P_E. This will be the problem addressed throughout this book. 

Before leaving the coding theorem, we dwell a little further on the problem of 
finding the weighting distribution q which maximizes the negative exponent of the 
bound at each rate. To approach this analytically, it is most convenient to rewrite 
the exponent as 

E(R) = \max_{0 \le \rho \le 1} \left[-\rho R + \max_q E_0(\rho, q)\right]    (3.2.19) 

Thus we need only maximize E_0(ρ, q) or, equivalently, minimize with respect to q 

\exp[-E_0(\rho, q)] = \sum_y \alpha(y, q)^{1+\rho}    (3.2.20) 


where 

\alpha(y, q) = \sum_x q(x)\, p(y|x)^{1/(1+\rho)}    (3.2.21) 

The key to this minimization lies in the following lemma. 

Lemma 3.2.2 The quantity exp[-E_0(ρ, q)] is a convex ∪ function on the 
space of probability distributions q for all fixed ρ ≥ 0. 

PROOF The convexity follows from the fact that α(y, q) is linear in q, while 
the function α^{1+ρ} is convex ∪ in α for all ρ ≥ 0. Thus, by the definition of 
convexity (App. 1A), for every y ∈ 𝒴, α(y, q)^{1+ρ} must be a convex ∪ function 
of q. Finally, the sum of convex ∪ functions must also be convex ∪; hence the 
lemma is proved. 

Also of interest, although it may be regarded as a byproduct of the maximization 
of the exponent, is the problem of maximizing I(q) to obtain the channel 
capacity C. It turns out that this problem is almost equivalent to minimizing 
exp[-E_0(ρ, q)] because I(q) has similar properties, summarized in the following. 

Lemma 3.2.3 I(q) is a convex ∩ function on the space of probability distributions q. 

PROOF We begin the proof by rewriting the definition (3.2.2) of I(q) as 

I(q) = \sum_x \sum_y q(x)\, p(y|x) \ln p(y|x) + \sum_y \beta(y) \ln [1/\beta(y)]    (3.2.22) 

The first term is linear in q; hence it is trivially convex. The second term can 
be written as 

\sum_y \beta(y) \ln [1/\beta(y)] 
where 

\beta(y) = \sum_x q(x)\, p(y|x) 

is linear in q(x). But d^2[\beta \ln (1/\beta)]/d\beta^2 = -1/\beta < 0 since \beta > 0. Hence 
\beta \ln (1/\beta) is convex ∩ in \beta, and \beta is linear in q. Thus by the same argument as 
for the previous lemma, \beta(y) \ln [1/\beta(y)] is convex ∩ in q; and I(q), which is the sum 
(finite, infinite, or even uncountably infinite) of convex ∩ functions, is itself 
convex ∩, thus proving the lemma. 

The minimization (maximization) of convex ∪ (∩) functions over a space of 
distributions is treated in App. 3B where necessary and sufficient conditions, due 
to Kuhn and Tucker [1951], are derived for the minimum (maximum). For application 
to the problems at hand, the following theorems are proved there. 

Theorem 3.2.2: Exponents Necessary and sufficient conditions on the distribution 
vector q which minimizes exp[-E_0(ρ, q)] [or, equivalently, maximizes 
E_0(ρ, q)], for ρ > 0, are 

\sum_y p(y|x)^{1/(1+\rho)}\, \alpha(y, q)^{\rho} \ge \sum_y \alpha(y, q)^{1+\rho}    for all x ∈ 𝒳 = {a_1, a_2, ..., a_Q}    (3.2.23) 

where α(y, q) is defined by (3.2.21), with equality for all x for which q(x) > 0. 

Theorem 3.2.3: Average mutual information Necessary and sufficient conditions 
on the distribution vector q which maximizes I(q), to yield C, are 

\sum_y p(y|x) \ln \frac{p(y|x)}{\sum_{x'} q(x')\, p(y|x')} \le C    for all x ∈ 𝒳 = {a_1, a_2, ..., a_Q}    (3.2.24) 

with equality for all x for which q(x) > 0. 

The above theorems do not give explicit formulas for min_q exp[-E_0(ρ, q)] 
and C. However, (3.2.23) and (3.2.24) do serve the purpose of verifying or disproving 
an intuitive guess for the optimizing distribution. As a very simple example, 
for a binary-input (Q = 2) output-symmetric channel as defined in Sec. 2.9, these 
necessary and sufficient conditions verify the intuitive fact that the optimizing 
distribution in each case is uniform [q(a_1) = q(a_2) = 1/2]. In the special case where 
the output space as well as the input space is {a_1, a_2, ..., a_Q} and where the Q by Q 
transition matrix {p(y|x)} is nonsingular, it can be shown (Prob. 3.4) that the 
conditions of both theorems are easily satisfied with the inequalities all holding 
with equality, and explicit formulas may be obtained both for the optimizing q 
and for the quantities to be optimized. In general, however, the maximization 
(minimization) must be performed numerically. This is greatly facilitated by the 
fact that the functions are convex, which guarantees convergence to a maximum 
(minimum) for any of a class of steepest ascent (descent) algorithms. Appendix 3C 
presents an efficient computational algorithm for determining channel capacity. 
Similar algorithms for computing E(R) have been found by Arimoto [1976] and 
Lesh [1976]. 
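The algorithm of App. 3C is not reproduced here. Purely as an illustration, the sketch below uses the standard Blahut-Arimoto alternating-maximization iteration (my assumption; it is not necessarily the algorithm of App. 3C) to compute the capacity of a discrete memoryless channel; the BSC example and tolerances are arbitrary choices.

import numpy as np

def capacity_blahut_arimoto(P, tol=1e-9, max_iter=10000):
    # Capacity C = max_q I(q) of a DMC with transition matrix P[x, y].
    Q, J = P.shape
    q = np.full(Q, 1.0 / Q)                  # start from the uniform input distribution
    for _ in range(max_iter):
        beta = q @ P                          # output distribution beta(y) = sum_x q(x) p(y|x)
        ratio = np.where(P > 0, P / beta, 1.0)
        D = np.exp(np.sum(np.where(P > 0, P * np.log(ratio), 0.0), axis=1))
        q_new = q * D / np.sum(q * D)         # multiplicative update toward the optimizing q
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    beta = q @ P
    ratio = np.where(P > 0, P / beta, 1.0)
    C = np.sum(q[:, None] * np.where(P > 0, P * np.log(ratio), 0.0))
    return C, q

# Illustrative check: BSC with p = 0.11; capacity should be ln 2 - H(0.11) nats,
# attained by the uniform input distribution.
p = 0.11
C, q_opt = capacity_blahut_arimoto(np.array([[1 - p, p], [p, 1 - p]]))
H = -p * np.log(p) - (1 - p) * np.log(1 - p)
print(C, np.log(2) - H, q_opt)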

Even when the optimum q is known and is the same for all rates, the actual 
computation of E_0(ρ, q), E(R), and C is by no means simple in general. Usually the 
simplest parameter to calculate is 

E(0) = \max_q E_0(1, q)    (3.2.25) 

Since E(R) is a decreasing function of rate, this provides a bound on the exponent. 
Also, [E_0(1, q) - R] as given by (3.2.7) is the low-rate exponent when maximized 




over q. For any binary-input channel, this maximum is readily evaluated, since the 
optimizing distribution is uniform, and yields 

E(0) = \max_q E_0(1, q) = \ln 2 - \ln (1 + Z)    (3.2.26) 

where 

Z = \sum_y \sqrt{p_0(y)\, p_1(y)}    (3.2.27a) 

For the special cases of the BSC and AWGN channel,⁶ Z is readily calculated to 
be 

Z = \sqrt{4p(1-p)}    (BSC)    (3.2.27b) 

Z = e^{-ℰ_s/N_0}    (AWGN)    (3.2.27c) 

It follows upon applying the Schwarz inequality (App. 3A) to (3.2.27a) that 
0 ≤ Z ≤ 1. 

Now suppose the rate is sufficiently low that the linear bound (3.2.7) is appropriate. 
Then, for a binary-input channel, we have 

\overline{P}_E \le e^{-N[\ln 2 - \ln(1+Z) - R]}    (3.2.28) 

On the other hand, we showed in Sec. 2.9 that for linear binary block codes used 
on this class of channels 

P_E \le \sum_{k=2}^{M} e^{w_k \ln Z}    (2.9.19) 

where w_2, ..., w_M are the weights of the nonzero code vectors and Z is given by 
(3.2.27a). In particular, as was shown in Sec. 2.10, if we restrict M = N [so that 
R = (ln N)/N → 0 as N → ∞], then orthogonal codes exist for all N = 2^K with the 
property that w_k = N/2 for all k ≠ 1. In this case we have the bound 

P_E < M e^{-N(-\frac{1}{2} \ln Z)}    (3.2.29) 

Since in this latter case the rate approaches zero asymptotically with N, it is clear 
that the bound (3.2.29) should be compared with the ensemble average upper 
bound (3.2.28) as (ln M)/N → 0 in both cases. It is easily shown that the negative 
exponent of (3.2.29) dominates that of (3.2.28), that is 

-\tfrac{1}{2} \ln Z \ge \ln 2 - \ln (1 + Z)    0 ≤ Z ≤ 1    (3.2.30) 

with equality if and only if Z = 1. 
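A quick numerical illustration of (3.2.30) for a BSC (added here; the crossover probabilities are arbitrary values): compute Z = √(4p(1−p)) and verify that −(1/2) ln Z exceeds ln 2 − ln(1 + Z).

import numpy as np

for p in (0.01, 0.05, 0.11, 0.25, 0.45):
    Z = np.sqrt(4 * p * (1 - p))               # Bhattacharyya parameter of the BSC, (3.2.27b)
    orth = -0.5 * np.log(Z)                     # exponent of the orthogonal-code bound (3.2.29)
    ensemble = np.log(2) - np.log(1 + Z)        # zero-rate ensemble exponent of (3.2.28)
    print(f"p={p:.2f}  Z={Z:.3f}  -0.5 ln Z={orth:.3f}  ln2-ln(1+Z)={ensemble:.3f}")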

The two exponents of (3.2.28) and (3.2.29) are shown in Fig. 3.4. Note from 



⁶ See (2.11.6) for the definition of p_0(y) for the AWGN channel. 






Figure 3.4 Negative exponents of (3.2.28) and (3.2.29), -\tfrac{1}{2} \ln Z and \ln 2 - \ln(1 + Z), as functions of Z. 

the examples (3.2.27b and c) that Z = 0 applies to a noiseless channel and that Z 
grows monotonically with increasing noise, with the channel becoming useless in 
the limit of Z = 1. We note also from the figure that the curves diverge as Z → 0, 
while they have the same negative slope at unity. 

Now it may not be surprising that a particularly good code (e.g., the orthogonal 
code) is far better than the ensemble average which includes the effect of 
some exceedingly bad codes having two or more code vectors which are identical. 
In fact, however, this discrepancy occurs only at low rates; we shall show in 
Sec. 3.6 that, for R > ∂E_0(ρ, q)/∂ρ at ρ = 1 [that is, over the curved portion of the E(R) 
function], the best code can perform no better asymptotically than the ensemble 
average. Nevertheless, if at very low rates certain bad codes can cause such a 
dramatic difference between the ensemble average and the best code, it stands to 
reason that the ensemble average as such is not a useful bound at these rates. 
While this might lead the more skeptical to discard the averaging technique at this 
point, we shall in fact see that, with cleverness, the technique may be modified in 
such a way as to eliminate the culprits. This modification, called an expurgated 
ensemble average bound, is treated in the next section and shown for the special 
case of binary-input, output-symmetric channels to yield the exponent of (3.2.29) 
at asymptotically low rates. 



3.3 EXPURGATED ENSEMBLE AVERAGE ERROR 
PROBABILITY: UPPER BOUND AT LOW RATES 

The approach to improving the bound at low rates is to consider a larger en 
semble of codes (or signal sets) with the same dimensionality N but having twice 
as many code vectors, 2M. If our conjecture in the last section was correct, then 
the error probability for most codes can be improved considerably by eliminating 
the particularly bad code vectors. Thus we shall resort to the expurgation of the 
worst half of the code vectors of some appropriate code of the ensemble. The 
result will be a code with M code vectors of dimensionality N whose average 
error probability can be shown to be much smaller at low rates than the upper 
bound given in the channel coding theorem (Theorem 3.2.1). 






In developing this expurgated bound, it is convenient to work not with the 
ensemble average P Em , but rather with the ensemble average of a fractional power 
P s Em , where < s < 1. We obtain such a bound in the form of 

Lemma 3.3.1 For the ensemble of codes, defined by a distribution q^(x) on 
SE N with M vectors of N symbols used on a discrete-input channel, the en 
semble average of the sth power of the error probability for the mth message, 
when maximum likelihood decoding is used, is bounded by 



P S E <B 0<s< 



m= 1, 2, ..., 



where 



*> V X" 1 / \ / /\ I V 

= M x I %W^(x ) x 



(3.3.1) 
xx L y 

Consequently, the sum of these averages over all messages is bounded by 

M _ 

P s Em < MB (3.3.2) 

OT = 1 

PROOF The derivation of (3.3.1) is along the lines of Sec. 3.1. First of all, since 
we shall be interested principally in low rates, we use only the union- 
Bhattacharyya bound, which coincides with the more elaborate Gallager 
bound at p = 1. Hence from (3.1.4) we have 



But if we restrict s to lie in the unit interval, we may use the inequality 



I a.. 







(3.3.4) 



which follows from the Holder inequality (App. 3 A), to obtain 

Z 



o<s<i (3.3.5) 



Now taking the ensemble average as in Sec. 3.1, we obtain 



sj^X-X 

= (M - 1) 1 1 



(3.3.6) 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 145 

where the last step follows from the facts that each vector x m is summed over 
its entire space and ZX^N^JC) = 1- From this, (3.3.1) follows trivially, as does 
(3.3.2) since the former is a uniform bound for all messages; hence the lemma. 

We now proceed to the key result which will induce us to expurgate half the 
code vectors of a particular code with 2M code vectors to obtain a much better 
code for M messages. 

Lemma 3.3.2 At least one code, in the ensemble of all codes with 2M vectors 
of N symbols, has 

PJ.(x I ,...,x 2M )<2B 0<s<l (3.3.7) 

for at least M of its code vectors {x m }, where B is given by Lemma 3.3.1 with 



PROOF The proof is by contradiction and follows easily from Lemma 3.3.1. 
Suppose Lemma 3.3.2 were not true. Then every code in the ensemble would 
have 

P s l (x 1> ...,x 2M )>2B (3.3.8) 

for at least M + 1 of its code vectors {x t }. But then the sum over all code 
vectors of the ensemble average of the sth moment would be lower bounded 
by 

2M _ 2M 

Z P L = Z XX"- I %(Xl)^(X2)" 
tn1 m 1 xi X2 X2M 

2M 

= Z Z Z flvfrikvM 4v(x 2M ) Z 

XI X2 X2M m=l 

^ 1 1 " I %(xikv(x 2 ) <?N(x 2M )2B(M + 1) 

XI X2 X2M 

where the last step follows from the facts that at least (M + 1 ) terms of the 
(2M)-term sum are lower bounded by (3.3.8), that the remainder of the terms 
are nonnegative, and that the weighting distribution factors q^(\i) 9 <j.v(x 2 ), 
i <?.v( x 2M ) are nonnegative. Finally, since the sum of the distribution over the 
entire Q 2MN terms of the ensemble is unity, we have 

2M _ 

Y P s Em > 2MB 



which is in direct contradiction to (3.3.2) of Lemma 3.3.1 for M = 2M. The 
lemma is thus proved by contradiction. 

On the basis of this result, we note that if we expurgate (eliminate) the M code 
vectors with the highest error probability P_{E_m} from the code which satisfies (3.3.7) 
of Lemma 3.3.2, we are left with a code of only M code vectors of dimension N 
such that 

P_{E_m}(x_1, x_2, \ldots, x_M) \le (2B)^{1/s}    m = 1, 2, ..., M,  0 < s ≤ 1    (3.3.9) 

where we denote the unexpurgated code vectors by {x_m}. However, in justifying 
(3.3.9), we must note that the unexpurgated code vectors will have their error 
probabilities altered by removal of the expurgated code vectors, but this alteration 
can only lower the P_{E_m} of the remaining vectors, since removal of some vectors 
causes the optimum decision regions of the remainder to be expanded. Hence 
(3.3.9) follows. In combining (3.3.9) with (3.3.1) to obtain the explicit form of the 
expurgated bound, if we make the further substitution ρ = 1/s, where 1 ≤ ρ < ∞, 
we obtain a result whose form bears a striking similarity to the form of the coding 
theorem of Sec. 3.2. For we then have that for every message m of the expurgated 
code, 

P_{E_m} \le \left\{4M \sum_x \sum_{x'} q_N(x)\, q_N(x') \left[\sum_y \sqrt{p_N(y|x)\, p_N(y|x')}\right]^{1/\rho}\right\}^{\rho}    1 ≤ ρ < ∞    (3.3.10) 

If we finally impose the memoryless condition (3.1.12) on the channel, and 
similarly take the distribution to be a product measure, then we obtain, by the 
identical set of steps used in deriving (3.1.14), 

P_{E_m} \le \left\{4M \left[\sum_x \sum_{x'} q(x)\, q(x') \left(\sum_y \sqrt{p(y|x)\, p(y|x')}\right)^{1/\rho}\right]^{N}\right\}^{\rho}    1 ≤ ρ < ∞    (3.3.11) 

To obtain the tightest bound, we must minimize with respect to the distribution 
q(x) and the parameter ρ ≥ 1. The result can be expressed in exponential form as 

Theorem 3.3.1: Expurgated coding theorem (Gallager [1965]) For a discrete-input 
memoryless channel there exists at least one code of M code vectors 
of dimension N for which the error probability of each message, when 
maximum likelihood decoding is used, is bounded by 

P_{E_m} \le e^{-N E_{ex}(R)}    m = 1, 2, ..., M    (3.3.12) 

where 

E_{ex}(R) = \max_q \sup_{\rho \ge 1} \left[E_x(\rho, q) - \rho\left(R + \frac{\ln 4}{N}\right)\right]    (3.3.13) 

E_x(\rho, q) = -\rho \ln \sum_x \sum_{x'} q(x)\, q(x') \left[\sum_y \sqrt{p(y|x)\, p(y|x')}\right]^{1/\rho}    (3.3.14) 


Note that, since the region of ρ is semi-infinite, the "maximum" over ρ 
becomes a supremum. One slight inconvenience in the form of this theorem is the 
appearance of the nuisance term (ln 4)/N added to the rate. Of course, this term 
is negligible for most cases of interest. In fact, this term can be made to disappear 
by an alternative proof (Prob. 3.21); hence we shall ignore it henceforth. In 
order to assess the significance of this result and the range of rates for which it is 
useful, we need to establish a few properties of E_x(ρ, q), which are somewhat 
analogous to those of E_0(ρ, q) discussed in Sec. 3.2. These are summarized as 

Theorem 3.3.2 For any discrete-input memoryless channel for which I(q) ≠ 0, 
for all finite ρ ≥ 1 

E_x(1, q) = E_0(1, q)    (3.3.15) 
E_x(\rho, q) > 0    (3.3.16) 
\partial E_x(\rho, q)/\partial\rho > 0    (3.3.17) 
\partial^2 E_x(\rho, q)/\partial\rho^2 \le 0    (3.3.18) 

with equality in (3.3.18) if and only if, for every pair of distinct inputs x and x' 
for which q(x) > 0 and q(x') > 0, p(y|x)p(y|x') = 0 for all y. [If so, E_x(ρ, q) 
is just a constant multiple of ρ; such a channel is said to be noiseless.] 

The equality (3.3.15) follows directly from (3.3.14) and (3.2.1) since 

E_x(1, q) = -\ln \sum_x \sum_{x'} q(x)\, q(x') \sum_y \sqrt{p(y|x)\, p(y|x')} = -\ln \sum_y \left[\sum_x q(x)\sqrt{p(y|x)}\right]^2 = E_0(1, q) 

The remainder of the theorem, consisting of inequalities whose form is identical to 
those for E_0(ρ, q), is proved in App. 3A. 

We note also that E_x(ρ, q) and E_0(ρ, q) are of interest for their corresponding 
bounds over disjoint intervals of the real line, except for the common point ρ = 1 
where the functions are equal. Figure 3.5 shows a composite graph of the 

Figure 3.5 E_0(ρ, q) and E_x(ρ, q) for a typical channel. 




two functions for a typical channel. It can, in fact, be shown (Prob. 3.10) that 
∂E_x(ρ, q)/∂ρ at ρ = 1 is no greater than ∂E_0(ρ, q)/∂ρ at ρ = 1 unless the channel is useless [that is, C = 
max_q I(q) = 0]. The maximization of (3.3.13) with respect to ρ is quite similar to 
that of the ensemble average error exponent in Sec. 3.2. Letting 

E_{ex}(R, q) \equiv \sup_{\rho \ge 1} [E_x(\rho, q) - \rho R]    (3.3.19) 

and taking the channel to be other than noiseless or useless, we have from (3.3.18) 
that E_x(ρ, q) is strictly convex ∩ in ρ, so that the supremum occurs at 

R = \frac{\partial E_x(\rho, q)}{\partial\rho}    provided    R \le \frac{\partial E_x(\rho, q)}{\partial\rho}\bigg|_{\rho=1}    (3.3.20) 

Then in this region the negative exponent of (3.3.12) for a fixed q is given by the 
parametric equations 

E_{ex}(R, q) = E_x(\rho, q) - \rho\, \frac{\partial E_x(\rho, q)}{\partial\rho}    (3.3.21) 
R = \frac{\partial E_x(\rho, q)}{\partial\rho} 

for \lim_{\rho \to \infty} \partial E_x(\rho, q)/\partial\rho \le R \le \partial E_x(\rho, q)/\partial\rho \,|_{\rho=1}. 

Also, by exactly the same manipulation as in (3.2.9) and (3.2.10), we have 

\frac{dE_{ex}(R, q)}{dR} = -\rho    and    \frac{d^2 E_{ex}(R, q)}{dR^2} > 0    (3.3.22) 

so that the exponent is convex ∪ with negative slope -ρ, ρ ≥ 1, over the region given 
in (3.3.20). Furthermore, it is tangent to the straight line 

E_x(1, q) - R = E_0(1, q) - R 

at the point 

R = \frac{\partial E_x(\rho, q)}{\partial\rho}\bigg|_{\rho=1} 



Figure 3.6 Composite exponent function: E_{ex}(R, q) and E(R, q) versus R, up to I(q). 

The composite of the E_{ex}(R, q) and E(R, q) functions for a typical channel is as 
shown in Fig. 3.6. For all physical channels, E_{ex}(R) is bounded for all rates as 
shown, for example, in Fig. 3.6. However, for certain nonphysical but interesting 
channels (e.g., that in Fig. 3.7), E_{ex}(R) becomes unbounded for sufficiently small 
but positive rates, and consequently the error probability is exactly zero for certain 
codes of finite length. 

The low-rate behavior of both classes of channels can be determined by 
examining E_x(ρ, q) and its first derivative in the limit as ρ → ∞. We note that 
(3.3.21) holds only for rates greater than the limiting value of the derivative. This 
value is readily determined from the definition (3.3.14), by use of L'Hospital's rule, as 

R_x(\infty, q) \equiv \lim_{\rho \to \infty} \frac{\partial E_x(\rho, q)}{\partial\rho} = -\ln \sum_x \sum_{x'} q(x)\, q(x')\, \phi(x, x')    (3.3.23) 

where 

\phi(x, x') = 1 if \sum_y \sqrt{p(y|x)\, p(y|x')} \ne 0, and 0 otherwise 

Thus 

R_x(\infty, q) = 0    if    \sum_y \sqrt{p(y|x)\, p(y|x')} \ne 0    (3.3.24) 

for all pairs of inputs x, x' ∈ 𝒳, while 

R_x(\infty, q) > 0    if    \sum_y \sqrt{p(y|x)\, p(y|x')} = 0    (3.3.25) 

for some pair of inputs x, x' ∈ 𝒳 such that q(x)q(x') ≠ 0. In the latter case, we note 
also that, since according to (3.3.22) the slope of E_{ex}(R, q) approaches -∞ as 
ρ → ∞, the function E_{ex}(R, q) approaches infinity as R → R_x(∞, q) > 0 from the 
right, and is thus infinite for all lower rates. 



Figure 3.7 Composite exponent function for the channel formed by two BSCs in disjoint parallel combination. 

An example of such an exponent and the corresponding channel, which is the disjoint parallel 
combination of two BSCs, is shown in Fig. 3.7 (Prob. 3.9). The intuitive explanation of this nonphysical 
result is that if a channel has two distinct inputs which cannot both reach any 
particular output, then the exclusive use of these two input 
symbols will result in error-free communication even without coding. 

Returning to the physical situation where R_x(∞, q) = 0, it is of interest also to 
determine the value of the zero-rate expurgated exponent. Here, according to 
(3.3.17) and (3.3.19), and letting s = 1/ρ, we have 

E_{ex}(0, q) = \sup_{\rho \ge 1} E_x(\rho, q) = \lim_{\rho \to \infty} E_x(\rho, q) 
= \lim_{s \to 0} -\frac{1}{s} \ln \sum_x \sum_{x'} q(x)\, q(x') \left[\sum_y \sqrt{p(y|x)\, p(y|x')}\right]^{s}    (3.3.26) 

Finally, using L'Hospital's rule, we have 

E_{ex}(0) = \max_q \left[-\sum_x \sum_{x'} q(x)\, q(x') \ln \sum_y \sqrt{p(y|x)\, p(y|x')}\right]    (3.3.27) 

The optimization with respect to q is exceedingly difficult because E_x(ρ, q) is 
not convex ∩ over q and can have several local maxima.⁷ Little is known about 
the optimum weighting distribution except for a special class of channels 
(Prob. 3.11) and for binary-input channels. In the latter case, it follows easily from 
(3.3.14) that 

E_x(\rho, q) = -\rho \ln [q^2(a_1) + q^2(a_2) + 2q(a_1)q(a_2) Z^{1/\rho}] 
= -\rho \ln [1 - 2q(a_1)q(a_2)(1 - Z^{1/\rho})]    (3.3.28) 

where 

Z = \sum_y \sqrt{p(y|a_1)\, p(y|a_2)} 

It follows trivially that this is maximized for all ρ by the vector q = (1/2, 1/2), and that 

E_x(\rho) = \max_q E_x(\rho, q) = -\rho \ln \left(\frac{1 + Z^{1/\rho}}{2}\right)    (3.3.29) 

⁷ Furthermore, memoryless channels exist for which the product measure 

q_N(x) = \prod_{n=1}^{N} q(x_n) 

does not optimize the expurgated exponent (Jelinek [1968b]). 




Thus, for all binary-input memoryless channels 

E_{ex}(R) = \sup_{\rho \ge 1} \left[-\rho \ln \left(\frac{1 + Z^{1/\rho}}{2}\right) - \rho R\right]    (3.3.30) 

Furthermore, as R → 0, we have, from (3.3.27) with q = (1/2, 1/2), that 

E_{ex}(0) = -\tfrac{1}{2} \ln Z    (3.3.31) 

Note that this is the same as the exponent of (3.2.29) for low-rate orthogonal 
signals (Z being the same in both cases), which, being considerably greater than 
the ensemble average bound exponent (Fig. 3.4), originally prompted our further 
investigation of low-rate exponents. Also noteworthy are the facts that the present 
results are for binary-input channels which need not be output-symmetric and, 
somewhat surprisingly, that the uniform input weighting distribution is optimum 
for all these channels. 



3.4 EXAMPLES: BINARY-INPUT, OUTPUT-SYMMETRIC 
CHANNELS AND VERY NOISY CHANNELS 

The computation of exponential bounds for explicit channels is generally very 
involved. Except for certain contrived (generally nonphysical) examples 
(Probs. 3.2, 3.5) and some limiting cases, explicit formulas are not available. Even 
for the particularly simple, often studied, binary-symmetric channel, the high-rate 
exponents of both the ensemble average and the expurgated ensemble average 
bounds can only be obtained in parametric form. These are nevertheless valuable 
for later comparison. 

For the BSC with crossover probability p < 1/2, beginning with the ensemble 
average bound of the coding theorem (Sec. 3.2), we have from (3.2.1) 



max E (p, q) = p In 2 - (1 + p) In [p 1 * 1 *" + (1 - p) 1 / 1 ^] (3.4.1) 

q 

since the maximizing distribution for this completely symmetrical channel is 
always the uniform distribution. Upon substituting in (3.2.7) and (3.2.8), after 
considerable calculation of derivatives and manipulations, letting 

jT(x) = -x In x - (1 - x) In (1 - x) (3.4.2) 

T x (y) = -y In x - (1 - y) In (1 - x) (3.4.3) 

which is the line tangent to 3tf(x) at y = x, and letting 

p = ? (3.4.4) 

we find for low rates 

E(R) = In 2 - 2 In (Jp + ^T^p) -R 0< J R<ln2-. 



1 -p/ 
(3.4.5) 



152 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

and we find for high rates the parametric equations 
E(R) = T p (p p ) - jT(p p ) 

R = \n2- jf(p) < p < 



In 2 - tf ^ < R < In 2 - j#>(p) (3.4.6) 

v 1 - 



As for the expurgated ensemble average error bound (Sec. 3.3), for the class of 
binary-input channels 8 (which includes the BSC as the simplest case), we have 
from (3.3.29) 

(1 + 7 l /p\ 
-% I ( 3A? ) 

and consequently maximizing (3.3.30) we obtain, after some manipulation, the 
parametric equations 

E ex (K) = -<5mZ 

R = \n2-jV(d) Q<R<\n2-jr(^\ (3.4.8) 

where 



For the BSC, as was shown in (3.2.276), Z = ^4p(l - p). 

The exponent of the exponential upper bounds for any channel is charac 
terized mainly by three parameters: 

1. (0) = max E (l, q), the zero-rate ensemble average exponent 

q 

2. E ex (0) = lim max E x (p, q), the zero-rate expurgated ensemble average 

p->oo q 

exponent 

3. C = max 7(q) = lim max ^ - , the channel capacity 

q p^O q dp 

These are important for two reasons. First, as can be seen in Fig. 3.6 the latter two 
represent the E-axis and K-axis intercepts of the best upper bounds found, while 
E(0)-R is the "support" line of the bound to which the low-rate and high-rate 
bounds are both tangent. 9 More important, as we shall find in the next two 
sections, both ex (0) and C are similar parameters of the exponent of the lower 

8 Recall that output symmetry is not required for the expurgated bound, because the optimizing 
distribution is (^, ^) for any binary-input channel; this is not the case for the ensemble average 
bound, however. 

9 For convolutional codes, as we shall discover in Chaps. 5 and 6, E(0) is the most important 
parameter, especially in connection with sequential decoding. 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 153 

bound on error probability of the best code, and at least one point of the line 
E(Q)-R also lies on the exponent curve for the lower bound. 

It is particularly instructive to examine these three parameters for the subclass 
of binary-input, output-symmetric channels, which includes the binary-input 
AWGN channel and its symmetrically quantized reductions, the simplest of which 
is the BSC, as originally described in Sec. 2.8. For this class, all parameters are 
optimized by the uniform distribution q = (^, ^). The first two parameters are 
easily expressed in terms of the generic parameter Z [see (3.2.26) and (3.3.31)] as 

(0)- In 2 -In (1 + Z) (3.4.10) 

ex (0)=-ilnZ (3.4.11) 

where 

~y) (3.4.12) 



Capacity is more difficult to calculate but is readily expressed, upon using the fact 
that p v (y) = Po(-y) in (3.2.13), as 

C = I Po(y) In Po (y) - Z p(y) In p(y) (3.4.13) 

y y 

where 

y) (3A14) 



For the AWGN channel, the first two parameters are characterized by 

z = e -*jN (AWGN) (3.4.15) 

as was first established in (3.2.27c), and the capacity is 

C = i In 27K? - f p(y) In p(y) dy (AWGN) 

* oo 

where 10 

2 (3.4,6) 




and p(y) is given by (3.4.14). 

For the BSC considered as a two-level quantized reduction of the AWGN 
channel, we have from (3.2.276) and (3.4.13) or (3.4.6) 

~^p) (BSC) (3.4.17) 



C = In 2 - jf(p) (BSC) (3.4.18) 

where 



See Eq. (2.11.6). 



154 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

Intermediate cases of soft quantization require calculation of p (y) for y e {6 1? b 2 , 
..., bj} as a function of S S /N . With symmetric octal quantization, these are 
determined in (2.8.1). Calculation of Z and C using (3.4.12) and (3.4.13) is straight 
forward, but tedious and can best be handled numerically. The results for the 
AWGN channel, BSC, and the binary-input octal-output-quantized channel are 
shown in Fig. 3.8 where all three parameters, (3.4.10), (3.4.11), and (3.4.13), 
normalized by JN , are plotted as a function of &JN . 

Most noteworthy is the behavior as <$JN -> 0. It appears from the figure that 
for the AWGN channel 



C 



for <$ S /N 1 (AWGN) 



(3.4.19) 



1.0 
0.8 
0.6 
0.4 
0.2 


0.5 
0.4 
0.3 
0.2 
0.1 



0.5 
0.4 
0.3 
0.2 
0.1 






-10 



10 



/=00 




J=2 



-10 



10 



J=2 



-18 -14 -10 -6-226 
S//V decibels 



10 



Figure 3.8 Exponents and capacity for 
binary-input symmetrically quantized- 
output AWGN channels J = 2 -> hard 
quantization; J = 8 -> octal; J = oo - 
unquantized. 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 155 

while for the BSC 

C 2 - for SJN < 1 (BSC) (3.4.20) 



For the octal channel with uniform quantization and a = \^/N /2 (see Fig. 2.13) 

& 
C % 0.95 -*- for <f S /N <^ 1 (octal) (3.4.21) 

^0 

Also of interest is the fact that for all these channels 

ex (0) * (0) * C/2 for g s /N 1 (3.4.22) 

Thus, as the symbol energy-to-noise density ratio becomes very small, it appears that 
the expurgated ensemble bound blends into the ensemble bound and both have an 
-axis intercept at C/2; hard quantization causes a loss of a factor of 2/n in all 
parameters, and soft (octal) quantization causes a negligible loss relative to un- 
quantized decoding. 

The asymptotic relations (3.4.19), (3.4.20), and (3.4.22) can be easily shown 
analytically (Prob. 3.12). In each case, letting & S /N - results in a channel which 
is an example of a very noisy channel This class of channels is characterized by the 
property 



aax,y (3.4.23) 

where 

k(*. y)\ * i 

and 

) (3.4.24) 



Since q(x) is the input weighting distribution used in all bounds, it follows that 
p(y) is also a distribution, sometimes called the output distribution. 11 Hence 

i = I p(>> I *) = I p(yi + 1(*> y)] = i + Z p(vM*. y) (3.4.25) 



and 



(3.4.26) 



1 Note that p(y) is the actual output distribution when the input distribution is <?(x); however, the 
weighting distribution g(x) is only an artifice used to define an ensemble of codes it says nothing 
about the actual input distribution when a particular code is used on the channel. 



156 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 



From (3.4.25) we obtain 

Z p{y)t(x, y) = for all x e & (3.4.27) 

y 

and from (3.4.26) 

Z q(x)t(x 9 y) = for all y for which p(y) > (3.4.28) 

X 

Since the optimizing input distribution q(x) = \ for both inputs, it is easy to 
verify that, for a BSC with p = ^(1 e) where \c\ <^ 1, p(y) = % for both outputs and 
(3.4.23) holds with 

+ e for x = y 
e for x ^ y 

A similar but more elaborate argument (Prob. 3.13) shows that, for S /N <^ 1, the 
unquantized AWGN channel satisfies the definition (3.4.23) of very noisy channels, 
as one would expect. Now using the definition (3.4.23) and the resulting properties 
(3.4.27) and (3.4.28), we obtain for the basic function of the ensemble bound (3.2.1) 

I \ l+p 

E (P, q) = -In Z Z *(x)p(y) 1/(1+p) [l + c(x, y)] l/(l+p) 



Since \e\ <^ 1, we may expand (1 + cY l(l + p) in a Taylor series about = and 
drop all terms above quadratic powers. The result is 



1- 



2(1 + P) 2 



where the last step follows from (3.4.28). Expanding the result in a Taylor series 
about e=0 and again dropping terms above quadratic, we obtain 



1 - 



2(1 + p) 



x y 



(3.4.29) 



But for the same class of channels, performing the same operations, we obtain for 
the channel capacity 



C = max 7(q) 



p(y)[l+c(x,y)] 



= max <J(*)PGO[1 + f(x, y)] In 

q * y P(y) 



1 1 <?(*)pM[i + <(*,> )] 

X y 

/,.\_/ .\ V ^/ 



(3.4.30) 



q x y 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 157 

Thus maximizing (3.4.29) over q and using (3.4.30), we obtain 

E = max (p,q) T ^C (3.4.31) 

For this class of very noisy channels, the ensemble average error bound exponent 
(3.1.20) thus becomes 



E(R)= max max E (p, q) - pR 

0<p<l [ q 



max 

0<p< 1 



C- pR 



(3.4.32) 



But this is identical to the problem of maximizing the negative exponent of (2.5.15) 
required to obtain the tightest bound on orthogonal signal sets on the AWGN 
channel. Thus we employ the same argument that led from (2.5.15) to (2.5.16) to 
obtain 



iC-R 



E(R) 



< R/C < i 
i<R/C<l 



(3.4.33) 



which is the function shown in Fig. 2.7. We defer comment on this remarkable 
equivalence until we have also evaluated the expurgated bound exponent. For the 
class of very noisy channels, we have from (3.3.14) 



X X 



E x (p, q)=-plnZZ q(x)q(* ) |Z p(>V[l + e(x, y)][l + e(x , y)] 

, fay) t 2 (x,y) 



* -p In 






8 



i <> 



-p In X I 



x x y 



6(x, y)e(x , y) 2 (x, y) + e 2 (x , y) 



Thus finally 



max ,(/?, q) % max ^ Z 
q q x y 



(3.4.34) 



158 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

and from (3.3.19) we have that the expurgated bound exponent is 



E eK (R) = sup 



max E x (p, q) - pR 



&$C-R (3.4.35) 

Since this coincides with the straight-line portion of the ensemble average bound 
(3.4.33), it is clear that expurgation produces no improvement for a very noisy 
channel. We note also that (3.4.33) and (3.4.35), evaluated at zero rate, confirm the 
previous limiting result (3.4.22). 

We turn now to showing that the coincidence of the results (3.4.33) for very 
noisy channels with those of (2.5.16) for orthogonal signal sets on the AWGN 
channel is not so surprising after all. For, while Sec. 2.5 dealt with arbitrary 
orthogonal signal sets, we found in Sec. 2.10 that a binary orthogonal signal set 
could be generated from an orthogonal linear code with the number of code 
vectors equal to the number of symbols N. Now the symbol energy for this signal 
set is $ s = &/N where $ is the energy per signal. Thus no matter how large S/N 
may be, for large N, $ S /N becomes arbitrarily small; hence the code is operating 
over a very noisy channel. To complete the parallelism, we note from (2.5.13) and 
(2.5.14) that 



TN T C (3A36) 

while 



r 

Thus (2.5.16) may be rewritten using (3.4.36) and (3.4.37) as 

P E < e -TE(R T ) _ e ~NE(R) 

where E(R) is given by (3.4.33). 

This concludes our treatment of upper bounds on error probability of general 
block codes. To assess their tightness and consequent usefulness, we must deter 
mine corresponding lower bounds on the best signal set (or code) for the given 
channel and with the given parameters. In the next three sections, we shall 
discover an amazing degree of similarity between such lower bounds and the 
upper bounds we have already found, thus demonstrating the value of the latter. 



3.5 CHERNOFF BOUNDS AND THE 
NEYMAN-PEARSON LEMMA 

All lower bounds on error probability depend essentially on the following 
theorem which is a stronger version of the well-known Neyman-Pearson lemma 
for binary hypothesis testing. After stating the theorem, we shall comment on its 
uses and applications prior to proceeding with the proof. 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 159 

Theorem 3.5.1 (Shannon, Gallager, and Berlekamp [1967]). Let p ( ff(y) and 
PN } (y) be arbitrary probability distributions (density functions) on the N- 
dimensional observation space & N , and let / a and / b be any two disjoint 
subspaces of 3/ N with W a and W b their respective complements. Let there be at 
least one y e $/ N for which p ( N } (y)pN\y) = 0. Then for each s, < s < 1, at least 
one of the following pair of inequalities must hold 

P. = I_ pj?(y) 

ye*a 

> i exp MS) - sn (s) - sj2j?$ft (3.5.1) 

n s _ PJ? (y) 

ye*, 

> i exp Oi(s) + (1 - sMs) - (1 - *X/2/Tj] (3.5.2) 
where 

H(s) = In pj?>(yr !#>&) (3.5.3) 



y 



is a nonpositive convex u function on the interval < s < 1. Furthermore, for 
the choice 



= b (3.5.4) 

then both of the following upper bounds hold 

(s) (3.5.5) 



These latter two inequalities are known as Chernoff bounds. If we associate & N 
with the observation space for a two-message signal set, p ( \y) and pjj^y) with the 
likelihood functions of the two signals, and ty a and 3/ b with the corresponding 
decision regions, it follows that P a and P b are the error probabilities for the two 
messages. Thus, this theorem is closely related to the Neyman-Pearson lemma 
(Neyman and Pearson [1928]) as can best be demonstrated by inspecting the 
graph of n(s), a convex u nonpositive function on the unit interval, shown for a 
typical channel in Fig. 3.9. We note in particular that //(O) = /z(l) = if and only 
if, for every y e %/ N , p ( N\y)p ( N \y) + 0, a condition met by most practical 
channels. 12 We note further that for memoryless channels, since 



n=l 



12 However the Z channel described in Probs. 2.10 and 3.17 does not meet this condition; it has 



160 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 




Figure 3.9 Typical /i(s) and relation between exponents of P a and P b . 



we have for the two code vectors x a and 



= I ^ 



n=l 



(3.5.7) 



which grows linearly with N. Consequently, as N -> oo, the square roots in (3.5.1) 
and (3.5.2) become asymptotically negligible in comparison with the other terms 
in the exponents. Thus, if we disregard these terms as well as the asymptotically 
even less significant factors of 1/4, we find that the alternative lower bounds 
become identical to the upper bounds. Then it follows, as shown in Fig. 3.9, that 
the line tangent to //(s) at some point s will intercept the two vertical lines s = 
and s = 1 at negative values of ^ exactly equal to the two exponents. It also 
follows from the statement of the theorem that, fixing the exponent (and hence the 
asymptotic value) of P b at [jt(s ) + (1 s )n (s )] where s e [0, 1], guarantees that 
the exponent of P a will be [^(s ) - s n (s )] and that no lower (more negative) 
exponent is possible for P a . A lower value for the exponent of P b (or P a ) requires 
repositioning of the tangent line on this functional "see-saw," with a resulting 
increase in the value of the other exponent. Thus it should be apparent that the 
theorem is essentially equivalent to the Neyman-Pearson lemma, although it 
contains somewhat more detail than the conventional form. The parallel is com 
plete if we note that the subspaces, which make both upper bounds equal asymp 
totically to the lower bounds and hence the best achievable, are given by (3.5.4). 
But these correspond to the likelihood ratio rule, which is the optimum according 
to the Neyman-Pearson lemma, with threshold //(s), which is the slope of the 
tangent line in Fig. 3.9. 

We note finally that, in the two-message case over an N-dimensional mem- 
oryless channel, if we require P a and P b to be equal, then we must choose s such 
that n (s) = in (3.5.4). Then (3.5.5) and (3.5.6) give identical upper bounds, and 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 161 

(3.5.1) and (3.5.2) give asymptotically equal lower bounds. We conclude that, if 
s = s where //(s ) = 0, then 

^ (So )-.vo<) < p x < e , (So} for x = aorb ( 3 5 g) 

where 13 o(N)->0 as N -> oo. With reference to Fig. 3.9, it is clear that this 
corresponds to the case where the straight line is tangent to the minimum point 



PROOF We now proceed to prove the theorem, beginning by twice differen 
tiating (3.5.3) to obtain 



(3 5 9) 



I pW p&Wn MfoVitfWSP 

"(*) = - ypyfr i - pg frT ~ [//(s)]2 (3 5 10) 

y 
Now we denote the log likelihood ratio by 

D(y) = In [^(yVP/v^y)] (3.5.11) 

Also, in the interval < s < 1, we define the " tilted " probability density 

(a)/ \l-s (b)/ \s 



As the tilting variable 5 approaches and 1, fiJJ^y) approaches p ( v } (y) and 
pjv } (y), respectively. Now if we take y to be a random vector with probability 
(density) Q ( ^(y), it is clear from (3.5.9) through (3.5.12) that the random 
variable D(y) has a mean equal to //(s) and a variance equal to // (s); con 
sequently, [i"(s) > 0. Furthermore, it follows from (3.5.3) that /^(O) < and 
/z(l)<0 with equality in either case if and only if, for every ye^ N , 
P ( N\y)p ( N ) (y) 1= 0- T nus it follows that ^(s) is a nonpositive convex function in 
this interval. 

Comparing (3.5.3), (3.5.11), and (3.5.12), we see immediately that 

) (3.5.13) 

v%) (3.5.14) 

We can now establish the upper bounds (3.5.5) and (3.5.6). Let the decision 
regions be chosen according to (3.5.4), corresponding to a likelihood ratio 



1 3 Hereof N) ^ 



162 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

decision rule with threshold fjf(s). Then, using (3.5.11), these may be expressed 
as 



= W b (3.5.15) 

from which it follows that 

-sD(y) < -s//(s) for all y e <^ fl , < s < 1 (3.5.16) 

and 

(1 - s)D(y) < (1 - s)n (s) for all y e ^ fc , < s < 1 (3.5.17) 

Consequently, from (3.5.13) and (3.5.14), we have 

P a = I P ( N\y) < exp MS) - sn (s)] <2S?(y) (3-5.18) 

y e a y e a 

P k = I Ptf (y) < exp MS) + (1 - sMs)] Z Q^(y) (3.5.19) 

y e ^b y e # b 

and, since QS?(y) is a probability (density), the sums (integrals) in (3.5.18) and 
(3.5.19) are bounded by unity, which yields (3.5.5) and (3.5.6). 

We now prove the lower bounds of (3.5.1) and (3.5.2) for arbitrary disjoint 
decision regions. We begin by defining the subspace 

p (s)\ < v/W) (3-5-20) 



Then, recalling that fjf(s) and /z"(s) > are respectively the mean and variance 
of D(y) with respect to the probability density Qj?(y), we see from the Cheby- 
chev inequality that 

!_ Q ( N\y) = Pr { I D(y) - E s [D(y)] \ > J2n"(s)} 

var s [D(y)] 1 /-j< 91 \ 

< - = - (3.J.21) 

where E s [ ] and var s [ ] indicate the mean and variance with respect to Q$( ). 
Thus 



Z QSv s) 

e^ s 



and we may lower bound P a and P b by summing over a smaller subspace in 
each case, as follows. 



I PJ?>(y) (3.5-22) 

y e ^ fl n ^, 



> J] PJf (y) (3.5-23) 

y e <^j, n <^ s 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 163 

But, for all y e ^ s , it follows from (3.5.20) that 

Ai (s) - v/2/7 7 ^) < D(y) < AI (S) + v^M (3-5.24) 

and consequently, from (3.5.13), (3.5.14), and (3.5.24), that for all y e & s 

pjfly) > exp MS) - s/i (s) - sy27((y) (3-5.25) 

p>(y) exp MS) + (1 - s)/i (s) - (1 - sX/OTlfiSHy) (3-5.26) 

Then since the regions of summation (integration) for the right sides of 
(3.5.22) and (3.5.23) are subspaces of # s , it follows that 

P a >expWs)-sv (s)-sj2iS{s)] fijf>(y) (3.5.27) 

y e # a n # s 

P b > exp MS) + (1 - 5)^(5) -(1- 5)v/27M] I 2.!v s) (y) (3.5.28) 

y e 3> b n ^ s 

Finally, since % a and ^ b are disjoint, we have 

W a uW b = & N 
Hence, it follows from this and the consequence of (3.5.21) that 



Thus, at least one of the following inequalities must hold 

fijfly) > i (3.5.29) 



y e 



I &%)>i (3.5.30) 

y e W b n *, 

Combining (3.5.27) through (3.5.30) yields the lower-bound relations (3.5.1) 
and (3.5.2), and hence the balance of the theorem. 

We have already drawn the immediate parallel to binary hypothesis testing. 
In applying the theorem in the next section to lower-bound code error probabili 
ties, we shall demonstrate its further power relative to M hypotheses. Before 
proceeding with this more general case, however, we specialize the result to obtain 
an upper bound on the tail of the distribution of N independent identically dis 
tributed random variables y n . Thus let 



and 



!> (3-5.31) 

n=l 



p { ff\y)= pOO (3-5-32) 

n= 1 



164 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

where p( ) is the common probability distribution (density) of all N variables. Let 
us further define the dummy distribution (density) 

ptf (y) = <?- N pW(y) (3.5.33) 

where a is a constant chosen to properly normalize its sum over 3/ N to unity. This 
will allow us to apply the previous theorem since 



In W(y)/ptf(y)] = n ~ Net (3.5.34) 

Consequently (3.5.4) reduces to 

9. = {y: 1 < M) + No} (3.5.35) 

and (3.5.5) reduces to 



(*) (3.5.36) 

where, as follows from (3.5.31) and (3.5.33) 



= Nln ( 

y 
Thus, if we let 



(3.5.37) 



we obtain from (3.5.36) as an upper bound on the tail of the distribution of rj 

Pr {r\ > 8} < e y(s) ~ sy (s} (3.5.38) 

where y (s) = 6 and 

y(s) = N\n^ p(y) e sy < s < 1 

y 

This is also a Chernoff bound and, as one would suspect, can be derived more 
directly than from the above theorem (see Probs. 2.10 and 3.18). Furthermore, by 
arguments very similar to those used in the proof of the theorem, the bound (3.5.38) 
can be shown to be asymptotically tight (Gallager [1968]). 

3.6 SPHERE-PACKING LOWER BOUNDS 

Theorem 3.5.1 provides the tools for obtaining lower bounds for any discrete- 
input memoryless channel. Its application in the general proof, however, involves 
an intellectual tour-de-force, for which the reader is best directed to the original 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 165 

work (Shannon, Gallager, and Berlekamp [1967]). 14 We shall content ourselves to 
state the general result at the end of this section. On the other hand, the flavor, 
style, elegance, and even the major details of the general lower-bound proof are 
brought out simply and clearly by the derivation of lower bounds for two special, 
but important, cases: the unconstrained bandwidth AWGN channel with equal- 
energy signals, and the BSC. We proceed to consider them in this order, and then 
return to a discussion of the general result. 



3.6.1 Unconstrained Bandwidth AWGN Channel with Equal-Energy 
Signals 

Let each of the M signals have duration T seconds and equal energy <?, while the 
additive white Gaussian noise has one-sided spectral density N W/Hz. By lack of 
bandwidth constraints, we mean that no limitations are placed on the signal 
dimensionality N or, equivalently, on W - N/2T as discussed in Sec. 2.6. How 
ever, as we found in Sec. 2.1, any set of M finite-energy signals can be repre 
sented using at most M dimensions. Thus unconstrained bandwidth means simply 
that we do not restrict N to be any less than M. In Sec. 2.1, we found that the 
likelihood function for the mth signal-vector sent over this channel is 



n =i 
where 



Ixj| 2 = 



= 6 (3.6.1) 

We express this more conveniently for our present purpose as 



Our immediate goal is to lower-bound 

My (3.6.3) 



for the maximum likelihood decision region A m given by 

A. = {y: p N (y |xj > p N (y |x m .) for all m + m] (3.6.4) 

with boundary points resolved arbitrarily. We have at our disposal Theorem 3.5.1. 



14 Or the more recent and somewhat more direct approach of Blahut [1974] and Omura [1975] (see 
Probs. 3.22 and 3.24). 



166 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 



Clearly, we wish to associate p N (y \ \ m ) and A m with one inequality in this theorem, 
but the choice of the other appears to be an enigma. We proceed, just as in the last 
example of the previous section [(3.5.31) through (3.5.38)], by choosing the other 
to be a convenient " dummy " probability density ; namely 



-n 



n=l 



nN, 



(3.6.5) 



and we let 



while 

a = \n = b (3.6.6) 

We have then met the conditions and hypotheses of Theorem 3.5.1 and may 
therefore apply (3.5.1) through (3.5.3) to conclude that, for each transmitted signal 
vector \ m , at least one of the following pair of inequalities must hold. 



*=[" f fl(y) dy > 1 exp MS) - sn (s) - 

1 J 



exp MS) + (1 - 



- (1 - 



where 



p(yy- s p N (y\x m Ydy 0<s<l 

oo 

Substitution of (3.6.2) and (3.6.5) in (3.6.9), using (3.6.1), yields 



(3.6.7) 

(3.6.8) 
(3.6.9) 



li(s) = In exp 



n=l 



exp 



- Z l>. - 



//2 



dy 



(3.6.10) 



Thus n(s) is invariant to the signal vector s orientation and depends only on its 
energy. To determine the significance of the auxiliary variable \l/ m of (3.6.7), we 
sum over all messages m. Since the optimum decision regions (3.6.4) are disjoint 
and their union covers the entire N-space, we obtain 

I*.- Z f - U(y)rfy=f -f" p(y)dy = i (3.6.1 1) 

m=l m=lye A m oo oo 

Hence, for at least one message m, we must have 

^ < 1/M (3.6.12) 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 167 

for otherwise the summation (3.6.1 1) would exceed unity. It follows therefore that, 
for this message m, ij/ A may be upper-bounded by 1/M. Consequently, letting 

P max = max P Em (3.6.13) 

m 

we conclude from (3.6.12), (3.6.13), and (3.6.7) through (3.6.10) that at least one of 
the following pair of inequalities must hold. 

1/M > \li A > i exp [/i(s) s//(s) s^/2// (s)] (3.6.14) 

PE^ > PE* > 4 exp MS) + (1 - s)|i (s) - (1 - s)^2^(s)] (3.6.15) 
where 



= -TC T s(l-s) (3.6.16) 
Consequently 

Ai (s)=-7r T (l-2s) (3.6.17) 

// (s) = 2TC T (3.6.18) 
In the last three equations, we employed the notation of Sec. 2.5, namely 

C T = (g/N.)/T (2.5.13) 
We shall also use the rate parameter defined there, namely 

K r = -^nats/s (2.5.14) 

Upon use of (3.6.16) through (3.6.18), (2.5.13), and (2.5.14), the lower bounds 
(3.6.14) and (3.6.15) become the alternative bounds 15 

R T < T~ l [TC T s 2 + 2s^fTC~ T + In 4] 

= C T s 2 + o(T) (3.6.19) 

and 

P Emn >exp{-[rC 7 (l- s) 2 + 2(1 - s)^/TC~ T + In 4]} 

= exp {- T[C T (\ - s) 2 + o(T)}} (3.6.20) 

Since at least one of this last pair of inequalities must hold, we choose s = s such 
that 

R T = C T s 2 + o(T) (3.6.21) 

where 

< R T < C T 



Here 



168 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

or equivalently 



- o(T)]/C T 
where 

< s < I 

Then (3.6.19) is not satisfied; consequently (3.6.20) must be satisfied with s = s 
yielding finally 



= exp {- T[(CV - JR T - o(T)) 2 

= exp { - 7\(^C- T - y^) 2 + o(T)]} (3.6.22) 



While (3.6.22) lower-bounds the probability of error for the worst case, we 
actually desire to bound the average error probability 

P E (M) = (1/M) P Em (3.6.23) 

m=l 

Now suppose we have the best code of M/2 signals. From (3.6.22), we see that the 
maximum error probability among this set of signals is lower-bounded by 

P max (M/2) > exp { - T[(x/C; - T^V) 2 + o(T)]} (3.6.24) 

where 

In (M/2) 
RT= ~^~ 

= R T - o(T) 

Thus R T can be replaced by R T in (3.6.24). On the other hand, for the best code of 
M signals, at least M/2 of its code vectors have 

P Em < 2P E (M) (3.6.25) 

But this subset can be regarded as a code for M/2 signals. Hence, the error 
probability for the worst signal in this case must be lower-bounded by (3.6.24) 
which pertains to the best code of M/2 signals. As a result, we have 

P (M)>iP max (M/2) 

>exp{-TlE sp (R T ) + o(T)]} (3.6.26) 

where 

E SP (R T ) = (Vc~ T - y^) 2 o < R T < C T 



Amazingly enough, for the range of rates C r /4 < R T < C T , this lower bound 
agrees asymptotically with the upper bound for orthogonal signals of (2.5.16). For 
lower rates, the upper bound and this lower bound diverge. However, in the next 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 169 

two sections, we shall determine tighter lower bounds for low rates that agree with 
(2.5.16) also for < R T < C T /4. 

One minor consequence of these results then is that they establish that orthog 
onal signals are asymptotically optimum (as T and M->oo) for the uncon 
strained AWGN channel. (Regular simplex signal sets are always better, but 
asymptotically they are indistinguishable from orthogonal sets.) More impor 
tantly, we have demonstrated in a special case a very powerful technique for 
obtaining asymptotically tight lower bounds at all but low rates. This bound is 
called the sphere-packing bound for essentially historical reasons, based on the 
classical proof for this example and the next example (Fano [1961]). (See 
Probs. 3.22 and 3.24 for another proof of the sphere-packing bound.) 

3.6.2 Binary-Symmetric Channel 

We now turn to the application of Theorem 3.5.1 to the classically most often 
considered channel, the BSC, repeating essentially the arguments used for the 
AWGN channel but with different justification. In Sec. 2.8, we showed that the 
likelihood function for this channel is 

P(y|xJ = p d "(l-pr d " (2.8.3) 

where d m = w(y x m ) is the Hamming distance between the channel input and 
output binary vectors. For the dummy distribution, we pick in this case the 
uniform distribution 

p N (y) = 2~ x for ally e # v (3.6.27) 

Here we identify 16 p v (y) with p^Hy) an< ^ Av(y| x m) w i tn P.v^y), and consequently 
also identify A m with ^ fl and A m with % b . Since these quantities meet the condi 
tions of Theorem 3.5.1, we can then apply (3.5.1) and (3.5.2) to assert that, for 
message m, at least one of the following pair of inequalities must hold: 

^ m EE X 2- > i exp MS) - s/i (s) - Sv/W] (3.6.28) 

yeA m 



> i exp [/i(s) + (1 - s)n (s) - (1 - sX/2J7] (3.6.29) 

where 

H(s) = In X 2- V(1 - s) [p dm O - P) v ~ dm ] s < 5 < 1 (3.6.30) 

y 

and d m = w(y x m ). But, since x m is some N-dimensional binary vector and y runs 
over the set of all such vectors, it is clear that there exists exactly one vector y 
(namely, x m ) for which d m = Q,N vectors y for which d m = 1 (at Hamming distance 

16 It is actually immaterial whether this or the opposite association is chosen. In the latter case, we 
would have to define s = p/(l + p) instead of (3.6.39). 



170 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

1 from x m ), (2) vectors y for which d m = 2, and generally ( ) vectors y for which 
d m = k (0 < k < N). Thus (3.6.30) may be written and summed as 



-N(l-s) 



= N{\n [(I - p) s + p s ] - (1 - s) In 2} 



(3.6.31) 



To identify \j/ m , we again recognize that the A m are disjoint decision regions whose 
union covers the total space ^ N . Hence, summing over all messages, we have 



M 



M 



= 1 m = 1 y e A m 



and hence for some m 

* < l/M (3.6.32) 

From (3.6.28) and (3.6.29) we have the two alternative inequalities for some mes 
sage m 



l/M > ^ > | exp \jt(s) - s/i (s) - 



where from (3.6.31) we have 

/i(s) - s^ (s) = N{-ln 2 + In [(1 - p) s + p s ] - 
Ai(s) + (1 - s)ji (s) = Mln [(1 - P) s + P s ] + (1 - 5) <5 
where 

(1 - p) s In (1 - p) + p s In p 






and 



17 



Finally, if we make the substitution 

1 



s = - 



< p < oo 



(3.6.33) 
(3.6.34) 

(3.6.35) 
(3.6.36) 

(3.6.37) 
(3.6.38) 
(3.6.39) 



we find that (3.6.35) and (3.6.36) become, after some algebraic manipulation, 



H(s) - sn 



= -NEJ0) 



(3.6.40) 



= -N[E (p)-pE (p)] (3.6.41) 



Hereo(N) * 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 171 

where 

E (p) = p In 2 - (1 + p) In [(1 - p) 1 <*+* + p 1 " 1 ^] 

Note that this is identical to Eq. (3.4.1) which represents the basic exponent 
function for the BSC with input weighting distribution optimized at q = (i, j). 

We may now conclude the argument by defining the rate in nats per binary- 
channel symbol as 

R = (In M)/N nats/symbol (3.6.42) 

and choosing p = p* positive. Consequently s* = 1/(1 + p*) e [0, 1] is the appro 
priate value such that 

R = (In M)/N 
In 



+ o(N) (3.6.43) 

where we have used (3.6.40). This then satisfies (3.6.33) with equality and con 
sequently requires that (3.6.34) must be an inequality. Thus, using (3.6.41), we 
have 



F^ tf l/(s*) + (l-s*)Ai (s* 
Emax > e 

= e -/v[ ( P *)-p* (p*) + o(.v)] < p* < oo (3.6.44) 

By exactly the same argument which led to (3.6.26), we then have 



(3.6.45) 

where E sp (R) is defined by the parametric equations 

sp (K) = E (p*) - p*;(p*) < p* < oo 

R = E (p*) 0<R<C (3.6.46) 

The limits of/? are established from the properties of E (p) (Sec. 3.2); namely, the 
facts that E (p) is a convex n monotonically increasing function and that 
lim E (p) = C and lim E (p) = 0. But (3.6.46) is then identical to the upper bound 

p-0 p-0 

E(R) of (3.2.8) for the higher-rate region, E (l) < R < C, for the BSC for which the 
latter is optimized for all rates by the choice q = (^, 3). For lower rates, < R < 
0(1), the lower-bound exponent E sp (R) continues to grow faster than linearly 
since the function is convex u, while the upper-bound exponent E(R) grows only 
linearly (see Fig. 3.10). The gap between the upper and lower bounds at low rates 
will be reduced in the next two sections. 

By analogy to (3.2.6), (3.2.8), and (3.2.12), it follows also that the lower-bound 
exponent (3.6.46) can be written as 

E sp (R) = max sup [.(p, q) - pR] (3.6.47) 

q P>O 



172 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 



E (0) L- \ / Lo w - ra te improved lower bound 




Figure 3.10 Exponents E(R\ (/?), 



C E sp (R) and low-rate lower bound. 



Thus the construction from the E (p, q) function (see Figs. 3.1 and 3.2) is the same 
as for the upper bound but, rather than terminating at p = 1 for R = E (l) (see 
Fig. 3.3a), it continues on for all p and hence approaches R = 0. This also explains 
why the bounds diverge for rates below R = E (l). 

We have thus obtained almost the same result for the BSC as we had 
previously for the AWGN channel; namely, that the lower bound is asymptot 
ically the same as the upper bound (and identical in exponent) for all rates above 
some critical medium rate. The results for both of the above special cases can be 
obtained in a more intuitive, classical manner using a so-called sphere-packing 
argument (see, e.g., Gallager [1968]). We have chosen this less intuitive approach 
for two reasons: first, it augments and illustrates the power of Theorem 3.5.1, the 
strong version of the Neyman-Pearson lemma; second, it demonstrates the key 
steps in the proof for any discrete-input memoryless channel. By these same basic 
arguments, augmented by other somewhat more involved and sophisticated 
steps, 18 the following general sphere-packing lower bound has been proved. 



18 The simplicity of the proofs for the BSC and AWGN channel is due to the considerable input and 
output symmetry of these channels. Without this natural symmetry, one must impose the formalism of 
"fixed composition codes," whose justification and eventual removal obscures the basic elegance of 
the above technique. 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 173 

Theorem 3.6.1 (Shannon, Gallager, Berlekamp [1967]) For any discrete mem 
ory less channel, the best code of M vectors [rate R = (In M)/N] has 



where E sp (R) is given by the parametric equations (3.6.46) and E (p) is identi 
cal to the function defined for the upper bound on P E for the given channel. 



3.7 ZERO-RATE LOWER BOUNDS 

As we have just noted, the upper and lower bounds, which agree asymptotically 
for R > o(l), diverge below this rate and are farthest apart at R = 0. We now 
remedy this situation by deriving new zero-rate lower bounds for the AWGN 
channel and for all binary-input, output-symmetric channels, which agree asymp 
totically with the least upper bound in each case at zero rate. This consequently 
guarantees that the expurgated upper bound is asymptotically exact at zero rate. 
The low-rate problem is treated in the next section. 



3.7.1 Unconstrained Bandwidth AWGN Channel with Equal-Energy 
Signals 

The principal parameter utilized in low-rate bounds is the minimum distance 
between signal vectors. For M real signal vectors of equal energy in an arbitrary 
number of dimensions, we upper-bound this minimum distance by first upper- 
bounding the average distance between distinct vectors. 19 



M- 1 

2M 
M- 1 

2M 
M- 1 



2M 
M- 1 



M- 1 M 



1 



with equality if and only if the centroid Y x m = 0. 

~ 



9 The average involves only those terms for which i /_/; hence the denominator is the number of 
such terms. However, in the summation we include the i = j terms since they are all zero. 



174 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

Consequently 



d min = min II x - x. 



(3.7.1) 



Equality holds in (3.7.1) if and only if the signal set is the regular simplex defined 
by (2.10.19). 

We now apply this result to lower-bounding the error probability for any such 
signal set on the AWGN channel. It is reasonable to expect that the greatest 
contribution to this error probability will be that resulting from the closest signal 
pair. Arbitrarily designating these two signals as x 1 and x 2 , we have 

PE^>P El >P E (l->2) (3.7.2) 

where the notation for the right-hand inequality is that of (2.3.4) and denotes the 
pairwise error probability when only the two signals Xj and x 2 are possible and 
the former is transmitted. This inequality follows from the fact that eliminating all 
signals but Xj and x 2 from the signal set allows us to expand both decision regions 
and thus obtain a lower error probability. Further, in Sec. 2.3 [Eq. (2.3.10)], we 
determined this error probability to be 




> Q(J4M/N (M - 1)) (3.7.3) 

where the last inequality follows from (3.7.1) and the fact that the function Q(x) is 
monotonically decreasing in x. Finally, from (3.7.2) and the classical bounds for 
the error function given in (2.3.18), we have 



P-^. ~-g/2N + o(T) (1 7 A\ 

E ma ^ e (6. /A) 

where o(T) goes to zero as Tgoes to infinity. Thus, using the same argument 
which led to (3.6.26) and the same notation as (2.5.13), we have 20 

P (M)>iP max (M/2) 



While this lower bound on P E for the best code is independent of rate, it agrees 
asymptotically with (2.5.16), the upper bound (for orthogonal signals), only at 
R T = 0. Also, at high rates, it is clearly looser (smaller) than the sphere-packing 
bound (3.6.26). In fact, in the next section, we shall discuss a low-rate bound which 
begins with this result and improves on it for all rates < R T < C T /4. 



This form could also have been obtained from (3.5.8). 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 175 

3.7.2 Binary-Input, Output-Symmetric Channels 

The zero-rate lower bound is as easily obtained for this more general class as for 
the BSC. The first step again is to upper-bound the minimum distance among 
code-vectors. Unsurprisingly, the argument is somewhat reminiscent of the above 
for the Gaussian channel. We summarize it in the following lemma due to Plotkin 
[1951]. 

Lemma 3.7.1: Plotkin bound For any binary code of M code vectors of 

dimension TV, the normalized minimum distance between code vectors is 
upper-bounded by 






PROOF We begin by listing all binary code vectors in an M x N array 

*11 X \2 X 1N 

X il X i2 X iN 

X jl X j2 X jN 

X M2 X MN 



Let d(\i, \j) = wfa \j) be the Hamming distance between x, and \ jt and 
consider the sum over all pairwise Hamming distances (thus counting each 
nondiagonal term twice and not bothering to eliminate the case i = j since 
it contributes to the sum) 

MM N M M 

where 

10 if x,,. = x , 



Let v(n) be the number of zeros in the nth column. Clearly for any good 
code, v(n) < M ; for otherwise that column could be omitted without decreas 
ing d min . Then, for each column , there is an m for which x mn = 1. Thus there 
are v(n) values of m for which x m = and hence for which x m . n x mn . 
Consequently 



176 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

Furthermore, by the same assumption, there are M v(n) values of m for 
which x mn = 1. Thus 

Z I d(x mn , *,,) = [M-v(n)]v(n) (3.7.8) 

m: x mn = 1 m = 1 

At the same time there are v(n) values of m for which x mn = 0, and con 
sequently M v(n) values of m for which x m , n = 1. Thus 

Z Z d(x mn , Xm . n ) = V (n)[M-v(n)] (3.7.9) 

m: x mn = Q m =l 

Adding (3.7.8) and (3.7.9), we obtain 



m= 1 m = 



(3-7.10) 

since the factor v(M v) is maximized by v = M/2. Substituting in (3.7.7), we 
obtain 

X d(x m9 x m ,) = 2v(n)[M-v(n)] 

m=l m = l n=l 

(3.7-11) 

But, since d(x m , x m ) = trivially for all diagonal terms, letting 

d min = min d(x m ,x m .) 



be the minimum of the nondiagonal terms, we have 
MM M 

X Z rf(x m ,X m ,)= X I <*(XmX ) 
m= 1 m = 1 m= 1 m f m 

> M(M - 1) d min (3.7.12) 

Combining the inequalities (3.7.11) and (3.7.12) we obtain 



which is just (3.7.6) and hence proves the lemma. 

We now proceed just as for the AWGN channel. Denoting by X! and x 2 two 
code vectors at minimum distance, we use (3.7.2) again 

>2) (3.7.2) 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 177 

with the same justification as before. But in Sec. 3.5 we showed that the two- 
message error probability is bounded by (3.5.8) 

p {\ 2\ = P O i ^ ^58^ 



where 



and s e [0, 1] is such that 



(3.7.13) 



= 



But since the channel is a memoryless binary-input, output-symmetric channel, we 
have 21 

P(y|xi) d T p(y ft |*ifc) 



P(y|x 



where k refers to any component for which x u i= x 2k = x lk . Suppose that x lk = 
in / of these components and x lfc = 1 in the remaining d min - /. Then (3.7.13) 
becomes 



=ln I 



p. 



PoO ) 



III 



XP.GO 



PoO ) 



But for this class of channels, the output space is symmetric [i.e., for every y, there 
corresponds a -y such that p^y) = p ( y)]- Thus 



= I PO(V) 

>->o 1 



Po(-J>) 



Po(>0 


j + Po(0) 
(3.7.14) 


Po(-y) 



Hence 



(3.7.15) 



Since n(s) is convex u, it has a unique minimum in (0, 1). Furthermore, since 
n(s) = n(l s) this minimum must occur at s = \, and at this point 



= 



(3.7.16) 



21 To avoid dividing by 0, if p(y k \x lk ) = 0, we replace it by e. Then we calculate the exponent 
(3.7.18), which depends only on Z, and finally let c - 0. The result is that Z is exactly the same as if Z 
were calculated directly for the original channel with zero transition probability. 



178 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

Thus choosing s = s = \, we have from (3.5.8) and (3.7.15) 

P E (l - 2) > exp IN \(d min /N) In Jp (y) Po (-y) - o(N)\ I (3.7.17) 
\ I y J I 

Consequently, as in (3.7.5), we have, using (3.7.2), 

P E (M)>P Em jM/2) 

> e N[(d m - tn /N)\nZ-o(N)] (3 7 18) 

where 



Finally from Lemma 3.7.1, letting 22 M > N for any fixed rate as N becomes large, 
we have 



Hence 

P E (M)>e N[ * lnZ ~ 0(N)] 

_ e -N[E e *(0) + o(N)] /} 7 J9\ 

where we have used (3.3.31), which is the zero-rate upper-bound exponent. 

Once again we have obtained a result which is asymptotically tight at zero 
rate. This same result has been shown for the entire class of memoryless discrete- 
input channels (Shannon, Gallager, and Berklekamp [1967]). 



3.8 LOW-RATE LOWER BOUNDS* 

We have just closed the gap between the asymptotic lower-bound and the upper- 
bound expressions for zero rate, as well as for rates above R = E (l). We now turn 
to narrowing the gap for the range < R < E (\). This is partially accomplished 
by the following useful theorem. 

Theorem 3.8.1 (Shannon, Gallager, and Berlekamp [1967]) Given two rates 
R" < R for which error bounds on the best code of dimension N are given by 

P E (R ) > e ~ NlEsp(R 
P E (R") > 



22 This restriction is inconsequential since, provided M grows no faster than linearly with N, 

= (\nM)/N-+QasN-+ao. 

* May be omitted without loss of continuity. 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 179 

where o(N) -> as N -> oo, E sp (R) is the sphere-packing bound exponent, and 
/(#) is any tighter low-rate exponent. Then, for the intermediate rate 
R = Ji(R ) + (1 - A)/T, < A < 1, the error probability for the best code of 
dimension N is lower-bounded by 



P E (R) > g-MA,p(ji )+(i-A)KK") + o(.v)] when = AK + (1 - A)/?", < A < 1 

(3.8.1) 

In other words, if we have a point on the sphere-packing bound exponent and 
another on any other asymptotic lower-bound exponent, the straight line connect 
ing these points is itself an asymptotic lower-bound exponent for all intermediate 
rates. In connection with the results of the last two sections, this suggests that we 
connect the asymptotically tight result at zero rate with the sphere-packing bound 
by a straight line which intersects the latter at a rate as close to E (l) as possible. 
This of course, is achieved by drawing a tangent from the zero-rate exponent value 
to the curve of the sphere-packing bound exponent. The result (see Fig. 3.10) is a 
bound which is everywhere asymptotically exact for the unconstrained AWGN 
channel and for the limit of very noisy channels, 23 while for all other channels 
when (0) is finite, it is generally reasonably close to the best (expurgated) upper 
bounds. 

The proof of this theorem is best approached by first proving two key lemmas, 
which are interesting in their own right. The first has to do with list decoding, an 
important concept with numerous ramifications. Suppose that in decoding a code 
of M vectors of dimension N we were content to output a list of the L messages 
corresponding to the L highest likelihood functions and declare that an error 
occurred only if the transmitted message were not on the list. Then naturally the 
probability of error for list-of-L decoding is lower than for ordinary decoding with 
a single choice. However, a lower bound, which is identical in form to the sphere- 
packing lower bound, holds also in this case. 

Lemma 3.8.1 For a code of M vectors of dimension 24 N with list-of-L decod 
ing the error probability is lower bounded by 

P (N, M, L) > *- w.p<*> + <x*>] (3. 8 2) 

where 

C (3.8.3) 



N 



PROOF (for binary-input, output-symmetric and AWGN channels) The argu 
ment is almost identical to those in Sec. 3.6 with the exception that now the 
decision regions are enlarged to 

A m = {y: p(y |x m ) > p(y\x k ) for all x k i |x m , x mi , x m2 , ..., x mi _J} 



23 These channels, for which ex (0) = (0) = (!), are the only channels for which everywhere 
asymptotically exact results are known. See also Sec. 3.9. 

24 For the unconstrained AWGN channel, the lemma holds with N replaced by T throughout. 



180 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

That is, A m is the region over which p(y \ x m ) is among the top L likelihood 
functions. Consequently, each point y e <& N (the observation space) must lie 
in exactly L regions; specifically if p(y |x mi ) > > p(y |x m j are the L greatest 
likelihood functions, then y e A mk , k = 1, 2, . . . , L. With this redefinition of 
A m , the pairs of inequalities (3.6.7), (3.6.8), and (3.6.28), (3.6.29), as well as the 
forms of n(s), (3.6.9) and (3.6.30), appear exactly as before. However, the 
values of the \j/ m now differ, and this requires changes in (3.6.12) and (3.6.32). 
For now 

I> m = I 

m=l m=l y A m 



(3.8.4) 

since it follows that, if each y lies in exactly L regions, summing over each of 
the M regions {A m } results in counting each point in the space L times. Thus 
(3.6.12) and (3.6.32) are replaced by 

t* < L/M (3.8.5) 

and the rest of the derivation is identically the same. For binary-input, output- 
symmetric channels, this means that we replace 25 (3.6.42) by 

- In (M/L) 

R= ^r 

and proceed in exactly the same manner as before, thus obtaining (3.6.45) and 
(3.6.46) with R replaced by R, which are just (3.8.2) and (3.8.3) of this lemma. 

The other key lemma relates ordinary decoding with list decoding as an 
intermediate step. 

Lemma 3.8.2 For arbitrary dimensions N\ and N 2 , code size M, and list size 
L, on a memoryless channel 

P E (N l + N 2 , M) > P (N!, M, L)P (N 2 , L + 1) (3.8.6) 

where P E (N, M, L) is the list-of-L average error probability for the best code 
of length N with M codewords, and P E (N , M ) is the ordinary average error 
probability for the best code of length N with M codewords. The two 
argument error probabilities apply to ordinary decoding; the three argument 
probabilities apply to list decoding. 

The intuitive basis of this result is that an error will certainly occur for a 
transmitted code vector of length A/^ + N 2 if L other code vectors have higher 

25 For the AWGN channel we replace (2.5.14) by R T = " 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 181 

likelihood functions over the first N x symbols and if any one of these has a higher 
likelihood function over the last N 2 symbols. 

PROOF Let each transmitted code- vector x m of dimension N 1 + N 2 be 
separated into a prefix 

*m = ( X m 1 > *m2 > > X mN i / 

and a suffix 



X = 



m, (N i 



Similarly let the received vector y be so separated into an N^ -dimensional 
prefix y and an N 2 -dimensional suffix y". The overall error probability for 
ordinary decoding is, of course, given by (2.3.1) and (2.3.2) as 

f * = i I Ip(yK) (3.8.7) 

M m=l yeA m 

For each prefix y let 

A;(y ) = {y :y = (y ,y")eA m } (3.8.8) 

be the set of suffixes for which the overall vector y is in the mth decision 
region. Then, since the channel is memoryless, we may rewrite (3.8.7) as 

Af 



m ~ y y"eA ^(y ) 

= i I IP,,(y l" m )PUy ) (3.8.9) 

1V1 m = 1 y 

where P m (y ) is the error probability for message m given that the prefix y 
was received. 

Let Wj(y ), w 2 (y ), ..., w L (y ) be the L values of m (the L messages) for 
which the overall error probabilities, conditioned on the prefix y being 
received, are the smallest. That is 

PE..&) < P Em , 2 (y ) < < P Em , L (y ) < P Em , k (y ) p.8.io) 

for every k > L Consequently, for every 

* {wii(y ), m 2 (y ), ..., m L (y )} 
it follows that 

PE,. t (y )>PE(N 2 ,L+l) (3.8.11) 

For suppose on the contrary that 



182 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

Now restrict the code to only the L+ 1 messages m l9 w 2 , ..., W L , m k . The 
decision regions could then be expanded leading to 

P Em ,(y , L + 1) < P Em ,(y ) j = 1, 2, ..., L, k 

where the left side of the inequality refers to error events for the L 4- 1 message 
code. Combining these two inequalities, we obtain 

P m .(y ,L+l)<P (N 2 ,L+l) 

which is obviously in contradiction to the fact that P E (N 2 , L+ 1) is a lower 
bound for the best code of L + 1 vectors. Thus (3.8.11) must hold and we can 
lower-bound the inner summation in (3.8.9) by 



me{m 1 (y ),m 2 (y ),...,m L (y / )} 

P E (N 2 , L + 1) m = m k (y ) where k> L 

(3.8.12) 

Substituting (3.8.12) in (3.8.9) and changing the order of summation, we 
obtain 

^il I p Nt (y \x m )P E (N 2 ,L+l) (3.8.13) 

Ivl y > m = m k (y ):k>L 

Finally, consider the prefix symbols x x , x 2 , . . . , x^ as a code of M vectors of 
dimension N^ Then again interchanging the order of summation, we have, 
using (3.8.10), 

^Z Z p Nl (y \* m ) = ^ p Nl (y\x m ) (3.8.14) 

M y m = m k (y ):k>L M m=l y e 7^ 

where A^ = {y : m e {m^y ), m 2 (y ), ..., m L (y )}}. 

Hence, the right side of (3.8.14) is just the overall error probability for a 
list-of-L decoder and consequently is lower-bounded by P E (N^ M, L). Substi 
tuting this lower bound for (3.8.14) into (3.8.13) we obtain 

P E (Ni + N 2 , M) > P E (N 19 M, L)P (AT 2 , L + 1) (3.8.6) 
which thus proves the lemma. 

PROOF (of Theorem 3.8.1) Substituting (3.8.2) for P (ATj, M, L) and an 
arbitrary low-rate exponential lower bound for P (JV 2 , L + 1) into (3.8.6), 
we have 



JV 2 , M) > e - Nl[E ** (R )+0(Nl)] - N2[El(R " ) + 0(N2)] (3.8.15) 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 183 

where o(N l ) - and o(N 2 ) -> as A^ - oo and JV 2 - oo, respectively. From 
(3.8.3), we have 



and we let 



Defining 






we have, using (3.8.16) through (3.8.18) 

InM 



,3.8,6) 



(3.8.18) 









(3.8.19) 



Hence, letting N = N l + N 2 where both N l -> oo and N 2 -> oo, we obtain 
from (3.8.15) 



where 



and 



P E (R) > 

R = AK + (1 - 



< A < 1 



R" <R< R 

which is just (3.8.1) and hence proves the theorem. 

The application of Theorem 3.8.1 involves letting R" = and using the zero- 
rate bound of Sec. 3.7 for ,(0). Thus in (3.8.1) we let R" = 0, R = R/A, and 
,(0) = ex (0) so that 

P (R) > 



184 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

where R l = R/k and hence 



The best choice of R l9 obviously, is the one for which the line from ex (0) at R = 
has maximum slope, i.e., the rate at which a tangent line from ex (0) at R = 
strikes the sphere-packing bound exponent (see Fig. 3.10). 



3.9 CONJECTURES AND CONVERSES* 

In the preceding three sections, we have found lower bounds on the best code, for 
given N and M, which agree asymptotically at R = and R > E (\) with the 
upper bounds derived in the first half of this chapter by ensemble average argu 
ments. For the low-rate region < R < E (\), asymptotically tight results are not 
available, although the exponents of upper and lower bounds are generally close 
together and become asymptotically the same in the limit of very noisy channels. 
The most likely improvement in the lower bound for this region will come 
about as a result of an improvement in the upper bound on minimum distance. In 
the case of binary-input, output-symmetric channels, we found in Sec. 3.7 the 
lower bound 



P E > -N[-(d m in//V)lnZ + o(JV)] (3.7.18) 

where 



Thus, an upper bound on d m[n is needed to complete the error bound. In Sec. 3.7, 
we derived the Plotkin bound 



where o(N) -> as N - oo, which then led to the lower error bound (3.7.19) which 
is tight at zero rate. But it is intuitively clear that, the higher the rate, the more 
code vectors are placed in the JV-dimensional space and the achievable minimum 
distance is lower. It is possible to modify the Plotkin bound so as to obtain a form 
which decreases linearly with rate (see Prob. 3.33), specifically 

(3.9.1) 



but this is by no means tight either. A tighter upper bound on d min is due to Elias 
[I960]. 26 Also of interest is the tightest known lower bound on d min ; this was derived 



* May be omitted without loss of continuity. 

26 An even tighter upper bound has been derived by McEliece, Rodemich, Rumsey, and Welch 
[1977]. Also, see McEliece and Omura [1977]. 



BLOCK CODE ENSEMBLE PERFORMANCE ANALYSIS 185 



by Gilbert [1952] using an essentially constructive argument (see Prob. 3.34). [One 
can also derive the Gilbert bound 27 by using the expurgated upper bound (3.4.8) 
and the lower error bound (3.7.18) for the binary-input, output-symmetric chan 
nel.] The Gilbert lower and Elias upper bounds on normalized minimum distance 
for a binary code of N symbols are, respectively. 



N 



<2S(R)[\-6(R)] 



where 6(R) is the function defined by 
R = In 2 - 



(3.9.2) 



< -5 < i (3.9.3) 

The Plotkin and Elias upper bounds and the Gilbert lower bound are all plotted 
in Fig. 3.11. 

It is tempting to conjecture, as have Shannon, Gallager, and Berlekamp 
[1967], that in fact the Gilbert bound is tight, i.e., that 



N 



= 6(R) + o(N) [conjecture] 



(3.9.4) 



where 3(R) is given by (3.9.3). For then, at least for binary-input, output- 
symmetric channels, we would have, using (3.7.18) and (3.9.3) 



27 Also known as the Varshamov-Gilbert bound, in recognition of independent work of Varshamov 
[1957]. 



0.5 



.V 



Plotkin upper bound 
Elias upper bound 

Varshamov-Gilbert lower bound 




0. 



0.1 0.2 0.3 

Figure 3.11 Bounds on d m{ JN. 



0.4 



0.5 



0.6 0.7 



186 FUNDAMENTALS OF DIGITAL COMMUNICATION AND BLOCK CODING 

where 

E t (R) = -din Z [conjecture] 

R = In 2 - JP(6) (3.9.5) 

But interestingly enough, this coincides asymptotically with the expurgated bound 
for these channels derived in Sec. 3.4 [see (3.4.8)] so that 

,(#) - ex (K) [conjecture] < R < In 2 - jjfLJL j (3.9.6) 

Finally, for rates ln 2 − ℋ[Z/(1 + Z)] = E_x(1) ≤ R ≤ E_0(1), the upper-bound 
exponent is a straight line of slope −1, tangent to the curved portions for low and 
high rates at E_x(1) and E_0(1), respectively. Similarly, by Theorem 3.8.1, if the lower 
bound (3.9.6) holds, we could then connect it at the highest rate E_x(1) to the 
sphere-packing bound at E_0(1) by the same straight line. Thus it appears, at least 
for the class of binary-input, output-symmetric channels, that the missing link in 
showing that the best upper bounds are asymptotically tight everywhere is being 
able to show that the conjecture (3.9.4) on the asymptotic tightness of the Gilbert 
bound is indeed true. No evidence exists to the contrary, but no real progress 
toward a proof is evident. Historical precedents demonstrate that when a particu- 
lar result is proven for the BSC, the proof can ultimately be bent to cover essen- 
tially all memoryless channels. Thus, the asymptotic tightness of the Gilbert 
bound is one of the most important open questions in information theory. 

The other gap in the results of this chapter involves the behavior of any of the 
channels considered at rates above capacity. Since both upper and asymptotic 
lower-bound exponents approach zero as R -> C from below, it would appear that 
there is little chance for good performance above C. In fact, for rates above 
capacity, two very negative statements can be made. These are known as the 
converses to the coding theorem. The first, more general result due to Fano [1952] 
was derived and discussed in Sec. 1.3. It shows that, independent of the encoding 
and decoding technique, the average (per symbol) error probability is bounded 
away from zero. The second converse, which holds only for block codes, is 
the following stronger result. 



Theorem 3.9.1: Strong converse to the coding theorem^28 (Arimoto [1973]) 
For an arbitrary discrete-input memoryless channel of capacity C and equal 
a priori message probabilities, the error probability of any block code of 
dimension N and rate R > C is lower-bounded by 

    P_E ≥ 1 − e^{−N E_sc(R)}      (3.9.7) 



^28 Earlier versions are due to Wolfowitz [1957], who first showed that lim_{N→∞} P_E = 1 for R > C, and 
Gallager [1968], who first obtained an exponential form of the bound. 




where 

    E_sc(R) = max_{−1 < ρ < 0} [ min_q E_0(ρ, q) − ρR ] > 0      for R > C      (3.9.8) 

and E_0(ρ, q) is given in (3.1.18). 

{Note that E_sc(R) is a dual to the form of E(R) given in Sec. 3.1. The main 
difference is that the parameter ρ is restricted to the interval (−1, 0).} 
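To make the theorem concrete, here is a small Python sketch (ours, not from the text) that evaluates E_sc(R) for a BSC by a grid search over ρ ∈ (−1, 0], assuming that the uniform input distribution attains the inner minimum for this symmetric channel; it then evaluates the lower bound (3.9.7) for a modest block length.

    import math

    def E0(rho, p):
        # E_0(rho, q) for a BSC with crossover p and the uniform input
        # distribution (assumed to extremize over q for this symmetric channel)
        a = 1.0/(1.0 + rho)
        s = 0.5*p**a + 0.5*(1.0 - p)**a
        return -math.log(2.0 * s**(1.0 + rho))

    def E_sc(R, p, steps=2000):
        # crude grid search of max over rho in (-1, 0] of E0(rho) - rho*R
        best = 0.0
        for i in range(1, steps):
            rho = -(i/steps)*0.999         # stay strictly above -1
            best = max(best, E0(rho, p) - rho*R)
        return best

    p = 0.1
    C = math.log(2) + p*math.log(p) + (1 - p)*math.log(1 - p)   # BSC capacity, nats
    for R in [1.1*C, 1.5*C, 2.0*C]:
        e = E_sc(R, p)
        print(f"R/C={R/C:.1f}  E_sc={e:.4f}  P_E >= {1 - math.exp(-100*e):.4f} (N=100)")

Note that E_sc(R) = 0 at R = C and grows with R, so the printed lower bound on P_E approaches 1 rapidly for rates even moderately above capacity.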

PROOF We bound the average probability of correct decoding of an arbitrary 
code by first examining the form 

    P_C = (1/M) Σ_{m=1}^{M} Σ_{y ∈ Λ_m} p_N(y | x_m) 
        = (1/M) Σ_y max_m p_N(y | x_m)      (3.9.9) 

This follows from the fact that the optimum decision regions are defined as 

    Λ_m = {y : max_{m'} p_N(y | x_{m'}) = p_N(y | x_m)}      m = 1, 2, ..., M      (3.9.10) 

Now, for any β > 0, we have 

    max_m p_N(y | x_m) = [max_m p_N(y | x_m)^{1/β}]^β ≤ [Σ_m p_N(y | x_m)^{1/β}]^β      (3.9.11) 

Defining a special probability distribution on codewords, namely 

    q_N(x) = 1/M      for x = x_m, m = 1, 2, ..., M 
           = 0        otherwise      (3.9.12) 

gives us the relation 

    max_m p_N(y | x_m) ≤ [M Σ_x q_N(x) p_N(y | x)^{1/β}]^β      (3.9.13) 

Using this in (3.9.9), we have the bound 

    P_C ≤ M^{β−1} Σ_y [Σ_x q_N(x) p_N(y | x)^{1/β}]^β 
        ≤ M^{β−1} max_{q_N} Σ_y [Σ_x q_N(x) p_N(y | x)^{1/β}]^β      (3.9.14) 




where the maximization is over all distributions q_N on 𝒳^N, not just the special 
distribution of (3.9.12). Defining the parameter 

    ρ = β − 1 > −1      (3.9.15) 

where ρ = −1 is taken as the limit as ρ → −1 from above, we have 

    P_C ≤ M^ρ max_{q_N} Σ_y [Σ_x q_N(x) p_N(y | x)^{1/(1+ρ)}]^{1+ρ}      (3.9.16) 

In Lemma 3.2.2, we showed that 

    Σ_y [Σ_x q_N(x) p_N(y | x)^{1/(1+ρ)}]^{1+ρ}      (3.9.17) 

is a convex ∪ function over the space of distributions q_N(·) on 𝒳^N for ρ ≥ 0. 
For ρ < 0, the same proof of Lemma 3.2.2 shows that (3.9.17) is a convex ∩ 
function over the space of distributions q_N(·) on 𝒳^N. We now restrict ρ to the 
semi-open interval ρ ∈ (−1, 0]. The Kuhn-Tucker theorem (App. 3B) shows 
that there is a unique maximum of (3.9.17) with respect to distributions on 𝒳^N 
and that it satisfies the necessary and sufficient conditions 

    Σ_y p_N(y | x)^{1/(1+ρ)} α_N(y, q)^ρ ≤ Σ_y α_N(y, q)^{1+ρ}      (3.9.18) 

where 

    α_N(y, q) = Σ_x q_N(x) p_N(y | x)^{1/(1+ρ)} 

for all x ∈ 𝒳^N, with equality when q_N(x) > 0. This maximization is satisfied by 
a distribution of the form 



    q_N(x) = Π_{n=1}^{N} q(x_n)      (3.9.19) 

where q(·) satisfies the necessary and sufficient conditions 

    Σ_y p(y | x)^{1/(1+ρ)} α(y, q)^ρ ≤ Σ_y α(y, q)^{1+ρ}      (3.9.20) 

where 

    α(y, q) = Σ_x q(x) p(y | x)^{1/(1+ρ)} 

for all x ∈ 𝒳, with equality when q(x) > 0. Hence from (3.9.16), we have 

    P_C ≤ M^ρ max_q [ Σ_y (Σ_x q(x) p(y | x)^{1/(1+ρ)})^{1+ρ} ]^N 
        = exp{−N min_q [E_0(ρ, q) − ρR]}      (3.9.21) 




Minimizing the bound with respect to ρ ∈ (−1, 0] yields 

    P_C ≤ e^{−N E_sc(R)}      (3.9.22) 

and hence (3.9.7) when we use P_E = 1 − P_C. 

For R > C we can show that E_sc(R) is greater than zero by examining proper- 
ties of E_0(ρ, q) for −1 < ρ ≤ 0. Using Lemma 3.2.1, which is proved in 
App. 3A, we have for −1 < ρ ≤ 0 

    E_0(ρ, q) ≤ 0      (3.9.23) 

with equality if and only if ρ = 0. Further, we have 

    ∂E_0(ρ, q)/∂ρ > 0      (3.9.24) 

    ∂E_0(ρ, q)/∂ρ |_{ρ=0} = I(q)      (3.9.25) 

and 

    ∂²E_0(ρ, q)/∂ρ² ≤ 0      (3.9.26) 

With these properties, we see that E_sc(R) > 0 for R > C by using arguments 
dual to those used in Sec. 3.2 to show that E(R) > 0 for R < C. 

This concludes our discussion of converses, as well as our treatment of error 
bounds for general block codes. 

3.10 ENSEMBLE BOUNDS FOR LINEAR CODES* 

All the bounds derived so far pertain to the best code over the ensemble of all 
possible codes of a given size, M, and dimension, N. However, virtually all codes 
employed in practical applications are members of the much more restricted 
ensemble of linear codes. Clearly the best linear code can be no better than the best 
code over the wider set of all possible codes. Hence, all the previous lower bounds 
also apply to linear codes. The problem is that the upper bounds, based on 
averages over the wider ensemble of all codes, must now be proved over the 
narrower ensemble of linear codes. It turns out that this task is not nearly as 
formidable as would initially appear. We shall consider here only binary linear 
codes and binary-input, output-symmetric channels, but the extension to the codes 
over any finite field alphabet is straightforward. 

A binary linear code of M = 2^K code vectors, as defined in Sec. 2.9, is one in 
which the code vectors {v_m} are generated by a linear algebraic operation on the 
data vectors {u_m}, the latter being lexicographically associated with all 2^K possible 
binary vectors from u_1 = 00···0 to u_{2^K} = 11···1. We generalize the definition of linear codes 



* May be omitted without loss of continuity. 




of Sec. 2.9 to one for which the code vectors contain a constant additive vector v_0, 
that is 

    v_m = u_m G ⊕ v_0      m = 1, 2, ..., 2^K      (3.10.1) 

where 

    G = | g_11  g_12  ...  g_1N | 
        | g_21  g_22  ...  g_2N | 
        |  .     .           .  | 
        | g_K1  g_K2  ...  g_KN | 

and 

    v_0 = (v_01, v_02, ..., v_0N) 

are an arbitrary binary matrix and binary vector. We take here L = N and we take 
the signal vectors {x_m = v_m} to be the code vectors. The additive vector v_0 is an 
unnecessary artifice for output-symmetric channels but becomes necessary for the 
proof in the absence of symmetry. It is clear that the ensemble of all possible 
binary linear codes contains 2^{(K+1)N} members, corresponding to all distinguishable 
forms of G and v_0. 
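The following short Python sketch (ours, not from the text) draws one member of this ensemble at random and lists its code vectors; the particular dimensions K and N are arbitrary illustrations.

    import numpy as np

    # one random member of the ensemble (3.10.1): v_m = u_m G + v_0 (mod 2)
    K, N = 4, 8
    rng = np.random.default_rng(0)
    G  = rng.integers(0, 2, size=(K, N))      # arbitrary binary K x N generator
    v0 = rng.integers(0, 2, size=N)           # arbitrary additive vector

    # all 2^K data vectors u_m, listed lexicographically
    U = np.array([[(m >> (K - 1 - j)) & 1 for j in range(K)] for m in range(2**K)])
    V = (U @ G + v0) % 2                      # the 2^K code vectors
    print(V[:4])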

The average error probability of the mth message over the ensemble of all 
possible linear codes is, analogously to (3.1.1), 

    P̄_Em = (1/2^{(K+1)N}) Σ_{x_1, ..., x_M ∈ 𝒢_{(K+1)N}} P_Em(x_1, ..., x_M)      (3.10.2) 

where 𝒢_{(K+1)N} is the space of all possible signal sets generated by (3.10.1). Substi- 
tuting the error probability bound for a specific signal set (3.1.4), we have for 
m = 1 

    P̄_E1 ≤ (1/2^{(K+1)N}) Σ_{x_1, ..., x_M} Σ_y p_N(y | x_1)^{1/(1+ρ)} [ Σ_{m'=2}^{M} p_N(y | x_{m'})^{1/(1+ρ)} ]^ρ      ρ > 0      (3.10.3) 



But 

    x_1 = v_1 = 0·G ⊕ v_0 = v_0 

and hence x_1 can take on any of 2^N values. However, once x_1 is fixed by the choice of 
v_0, the remaining signal vectors x_2, ..., x_M jointly can take on just 2^{KN} possible 
values, depending only on the KN binary degrees of freedom of the matrix G. Thus 
we may express (3.10.3) as 



    P̄_E1 ≤ (1/2^N) Σ_{v_0} Σ_y p_N(y | x_1)^{1/(1+ρ)} { (1/2^{KN}) Σ_{x_2, ..., x_M ∈ 𝒢_{KN}} [ Σ_{m'=2}^{M} p_N(y | x_{m'})^{1/(1+ρ)} ]^ρ }      ρ > 0      (3.10.4) 






where 𝒢_KN is the space of all signal sets generated by (3.10.1) when v_0 is fixed. 
Using the definition (3.1.6), we find, analogously to (3.1.7) and (3.1.9) with 
0 ≤ ρ ≤ 1, that the expression in brackets in (3.10.4) becomes 

    (1/2^{KN}) Σ_{x_2, ..., x_M} [ Σ_{m'=2}^{M} p_N(y | x_{m'})^{1/(1+ρ)} ]^ρ 
      ≤ [ (1/2^{KN}) Σ_{x_2, ..., x_M} Σ_{m'=2}^{M} p_N(y | x_{m'})^{1/(1+ρ)} ]^ρ      (3.10.5) 



But clearly, for m' ≠ 1, any given value of x_{m'} ∈ 𝒳^N can be obtained by choosing 
some row vector of the G matrix to be an appropriate distinct function of the 
remaining row vectors. However, this only leaves 2^{(K−1)N} choices for the remain- 
ing vectors. Thus 

    (1/2^{KN}) Σ_{x_2, ..., x_M} p_N(y | x_{m'})^{1/(1+ρ)} = (1/2^N) Σ_{x ∈ 𝒳^N} p_N(y | x)^{1/(1+ρ)}      (3.10.6) 



Combining (3.10.4) through (3.10.6), and using definition (3.1.6), we obtain 

    P̄_E1 ≤ Σ_y [ (1/2^N) Σ_{x_1} p_N(y | x_1)^{1/(1+ρ)} ] [ (M − 1)(1/2^N) Σ_x p_N(y | x)^{1/(1+ρ)} ]^ρ 

         ≤ (M − 1)^ρ Σ_y [ (1/2^N) Σ_x p_N(y | x)^{1/(1+ρ)} ]^{1+ρ}      0 < ρ ≤ 1      (3.10.7) 

which is identical to (3.1.10) with q_N(x) = 1/2^N. Clearly P̄_Em can be identically 
bounded by interchanging indices m and 1 throughout, and the rest of the en- 
semble average upper-bound derivation (i.e., the balance of Secs. 3.1 and 3.2) 
follows identically to that for the wider ensemble of all block codes. Thus all the 
results of Sec. 3.2 hold for binary linear codes also (with Q = 2) when q(0) = 
q(1) = 1/2, which holds for output-symmetric channels. As we found in Sec. 3.6, 
this bound is asymptotically tight for all rates R ≥ E_0(1) for this class of channels. 
To improve the upper bound at low rates, for the wider ensemble of all block 
codes we employed an expurgation argument (Sec. 3.3). However, for binary 






linear codes, the proof of the expurgated bound is easier than the more general 
proof of Sec. 3.3. Indeed, expurgation of codewords is not necessary. For M 
binary codewords of a linear code used over any binary-input memoryless chan- 
nel, we have from (2.3.16) the Bhattacharyya bound for the mth message error 
probability 

    P_Em ≤ Σ_{m'≠m} Σ_y √(p_N(y | x_m) p_N(y | x_{m'}))      (2.3.16) 

For binary code vectors x_m and x_{m'}, we have 

    Σ_y √(p_N(y | x_m) p_N(y | x_{m'})) = Z^{w(x_m ⊕ x_{m'})}      (3.10.8) 



where w(·) denotes the weight of the vector and here w(x_m ⊕ x_{m'}) equals the 
number of symbols in which x_{m'} differs from x_m. Thus 

    P_Em ≤ Σ_{m'≠m} Z^{w(x_m ⊕ x_{m'})}      (3.10.9) 

For any linear code of the form 

    x_m = u_m G      (3.10.10) 

we have from (2.9.10),^29 for any m 

    {w(x_m ⊕ x_{m'}) for all m' ≠ m} = {w(u_2 G), w(u_3 G), ..., w(u_M G)} 

Thus 

    P_Em ≤ Σ_{m=2}^{M} Z^{w(u_m G)} 

for m = 1, 2, ..., M. Since the bound is the same for all codewords, we have 

    P_E ≤ Σ_{m=2}^{M} Z^{w(u_m G)}      (3.10.12) 

Note that this is exactly the form of (2.9.19), but it holds for arbitrary binary-input 
memoryless channels, without the requirement of output symmetry. 
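As an illustration of the bound (3.10.12), the following Python sketch (ours) enumerates the weights w(u_m G) of a small binary linear code and evaluates the union-Bhattacharyya bound for a BSC, for which Z = 2√(p(1 − p)); the particular generator matrix is used purely as an example.

    import numpy as np

    p = 0.01
    Z = 2*np.sqrt(p*(1 - p))                   # Bhattacharyya parameter of the BSC

    G = np.array([[1, 0, 0, 0, 1, 1, 0],       # a (7,4) generator, illustration only
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    K, N = G.shape

    bound = 0.0
    for m in range(1, 2**K):                   # all nonzero messages u_m
        u = np.array([(m >> (K - 1 - j)) & 1 for j in range(K)])
        w = int(((u @ G) % 2).sum())           # Hamming weight w(u_m G)
        bound += Z**w
    print("union-Bhattacharyya bound on P_E:", bound)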
Defining 

    Z = Σ_y √(p_0(y) p_1(y))      (3.10.13) 

and the parameter 0 < s ≤ 1, we have the inequality (App. 3A) 

    P_E^s ≤ Σ_{m=2}^{M} Z^{s w(u_m G)}      (3.10.14) 



^29 We identify v_m there with v_m ⊕ v_0 here and note that x_m ⊕ x_{m'} = (u_m ⊕ u_{m'})G. 




P_E^s and its bound (3.10.14) depend on the particular code generator G as 
shown in (3.10.10). We next average P_E^s and its bound over the ensemble of all 
possible binary linear codes, which contains 2^{KN} members, corresponding to 
all distinguishable forms of G. The average of (3.10.14) over all possible linear 
codes is 

    P̄_E^s ≤ (1/2^{KN}) Σ_G Σ_{m=2}^{M} Z^{s w(u_m G)} 
          = Σ_{m=2}^{M} (1/2^{KN}) Σ_G Z^{s w(u_m G)}      (3.10.15) 

where we sum over the space of all possible generator matrices G. Noting that 
each generator matrix consists of K rows of dimension N, we can express (3.10.15) 
in terms of row vectors of the generator matrices as follows. 



    P̄_E^s ≤ Σ_{m=2}^{M} (1/2^N)^K Σ_{g_1} ··· Σ_{g_K} Z^{s w(u_{m1} g_1 ⊕ u_{m2} g_2 ⊕ ··· ⊕ u_{mK} g_K)}      (3.10.16) 

where now, for each row, we sum over the space of all possible row vectors. In this 
case, all the row vector spaces are the same N-dimensional binary vector space, 
𝒳_N. For each m ≠ 1, we have u_m ≠ 0 and hence, in u_m G = u_{m1} g_1 ⊕ u_{m2} g_2 ⊕ 
··· ⊕ u_{mK} g_K, at least one row vector adds into the sum to form u_m G. Varying over 
all 2^{NK} possible matrices G and taking the sum over the rows g_k for which u_{mk} ≠ 0 
results in a 2^{N(K−1)}-fold repetition of each of the 2^N possible N-dimensional vectors 
x = u_m G. Thus 



    (1/2^N)^K Σ_{g_1} ··· Σ_{g_K} Z^{s w(u_m G)} = (1/2^N) Σ_{x ∈ 𝒳_N} Z^{s w(x)} = [(1 + Z^s)/2]^N      (3.10.17) 

Combining (3.10.16) and (3.10.17), and using (M − 1) < M, yields 

    P̄_E^s ≤ M [(1 + Z^s)/2]^N      (3.10.18) 



Hence there exists at least one linear code for which 

    P_E^s ≤ M [(1 + Z^s)/2]^N 

or 

    P_E ≤ M^{1/s} [(1 + Z^s)/2]^{N/s}      0 < s ≤ 1      (3.10.19) 

or, with parameter 1 ≤ ρ = 1/s < ∞, 

    P_E ≤ exp{−N[E_x(ρ, ½) − ρR]}      (3.10.20) 




where 

    E_x(ρ, ½) = −ρ ln { [1 + (Σ_y √(p_0(y) p_1(y)))^{1/ρ}] / 2 }      (3.10.21) 

Minimizing over ρ ≥ 1, we have that, for at least one binary linear code, 

    P_E ≤ e^{−N E_ex(R)}      (3.10.22) 

where 

    E_ex(R) = sup_{ρ ≥ 1} [E_x(ρ, ½) − ρR] 

which corresponds to (3.3.12) and (3.3.13) for this class of channels and is given in 
parametric form in (3.4.8) with Z given by (3.10.13). 
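As a numerical sketch of (3.10.20) through (3.10.22) (ours, not from the text), the Python fragment below evaluates E_x(ρ, ½) for a BSC, where Z = 2√(p(1 − p)), and approximates E_ex(R) by a grid search over ρ ≥ 1.

    import math

    p = 0.05
    Z = 2*math.sqrt(p*(1 - p))

    def Ex(rho):                               # E_x(rho, 1/2) of (3.10.21)
        return -rho*math.log((1 + Z**(1.0/rho))/2)

    def E_ex(R, rho_max=200.0, steps=4000):    # sup over rho >= 1 of Ex(rho) - rho*R
        best = -1e9
        for i in range(steps + 1):
            rho = 1 + i*(rho_max - 1)/steps
            best = max(best, Ex(rho) - rho*R)
        return best

    for R in [0.01, 0.05, 0.1]:                # rates in nats per symbol
        print(f"R={R:.2f}  E_ex(R)={E_ex(R):.4f}")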

Thus, for linear block codes over output-symmetric channels, we have obtained 
the ensemble average upper bound of Sec. 3.1, and we have demonstrated that the 
expurgated error bound of Sec. 3.3 holds regardless of whether or not the channel 
is output-symmetric. In the next three chapters we shall consider a special class 
of linear codes which can be conveniently decoded and which achieves performance 
superior to that of linear block codes. 

3.11 BIBLIOGRAPHICAL NOTES AND REFERENCES 

The fundamental concepts of this chapter are contained in the original work of 
Shannon [1948]. The first published presentation of the results in Sees. 3.1 and 3.2 
appeared in Fano [1961], as did those of Sec. 3.10 for the ensemble average. The 
present development of Secs. 3.1 through 3.4 is due to Gallager [1965]. The lower- 
bound results in Secs. 3.5 through 3.8 follow primarily from Shannon, Gallager, 
and Berlekamp [1967]. The strong converse in Sec. 3.9 was first proved by 
Wolfowitz [1957]; the present result is due to Arimoto [1973]. 



APPENDIX 3A USEFUL INEQUALITIES AND THE 
PROOFS OF LEMMA 3.2.1 AND THEOREM 3.3.2 



3A.1 USEFUL INEQUALITIES (after Gallager [1968], 
Jelinek [1968]) 30 

Throughout this appendix we use real positive parameters r > 0, s > 0, and 
0 < λ < 1. Letting I = {1, 2, ..., A} be an index set, we define real nonnegative 
numbers indexed by I 

    a_i ≥ 0,  b_i ≥ 0      for i ∈ I 

30 See also Hardy, Littlewood, Polya [1952]. 






and probability distributions indexed by I, 

    P_i ≥ 0,  Q_i ≥ 0      for i ∈ I 

where 

    Σ_{i=1}^{A} P_i = 1      Σ_{i=1}^{A} Q_i = 1 

We proceed to state and prove 11 basic and useful inequalities. 

(a)  ln r ≤ r − 1, with equality iff^31 r = 1 

PROOF f(r) = ln r − (r − 1) has derivatives f'(r) = 1/r − 1 and f''(r) = −1/r². 
Since f''(r) < 0 and f'(1) = 0, we have a unique maximum at r = 1. Hence f(r) = ln r − 
(r − 1) ≤ f(1) = 0, with equality iff r = 1. 


< p. fl . with equality iff 



PROOF From (a) we have 



, 



with equality iff 



fl,= I/V.j for 



7=1 



z ^-^ 

7=1 



such that p f 



^31 Note that iff denotes "if and only if." 



A 

(c)  Σ_{i=1}^{A} Q_i^λ P_i^{1−λ} ≤ 1, with equality iff P_i = Q_i for all i ∈ I. 
PROOF From (b) we have for each i e / 



with equality iff a { = b t . Hence, substituting P f and Q t for ^ and b t and 
summing over z, 



i = 1 i = 1 

= 1 

with equality iff P, = Q t for all i e /. 



(d)  Σ_{i=1}^{A} a_i b_i ≤ (Σ_{i=1}^{A} a_i^{1/λ})^λ (Σ_{i=1}^{A} b_i^{1/(1−λ)})^{1−λ}      (Hölder inequality) 

with equality iff, for some c, a_i^{1/λ} = c b_i^{1/(1−λ)} for all i ∈ I. 

PROOF In (c\ for each / e /, let 

Q= al u Pi= 



7=1 7=1 

The special case λ = ½ gives 

    Σ_{i=1}^{A} a_i b_i ≤ (Σ_{i=1}^{A} a_i²)^{1/2} (Σ_{i=1}^{A} b_i²)^{1/2}      (Cauchy inequality) 

and the integral analog 

    ∫ a(x) b(x) dx ≤ (∫ a²(x) dx)^{1/2} (∫ b²(x) dx)^{1/2}      (Schwarz inequality) 

(e)  Σ_{i=1}^{A} P_i a_i b_i ≤ (Σ_{i=1}^{A} P_i a_i^{1/λ})^λ (Σ_{i=1}^{A} P_i b_i^{1/(1−λ)})^{1−λ}      (variant of Hölder inequality) 

with equality iff, for some c 

    a_i^{1/λ} = c b_i^{1/(1−λ)}      for all i ∈ I 

More generally, if the g_i are any nonnegative real numbers indexed by I, then 

    Σ_{i=1}^{A} g_i a_i b_i ≤ (Σ_{i=1}^{A} g_i a_i^{1/λ})^λ (Σ_{i=1}^{A} g_i b_i^{1/(1−λ)})^{1−λ} 



PROOF Let a_i' = g_i^λ a_i and b_i' = g_i^{1−λ} b_i be used in (d). 

(f)  (Σ_{i=1}^{A} P_i a_i^λ)^{1/λ} ≤ Σ_{i=1}^{A} P_i a_i ≤ (Σ_{i=1}^{A} P_i a_i^{1/λ})^λ      (Jensen inequality) 

with equality iff, for some c, P_i a_i = c P_i for all i ∈ I. 

PROOF The upper bound follows from (e) with b_i = 1 for all i ∈ I. The lower 
bound follows from (e) with a_i replaced by a_i^λ and b_i = 1 for all i ∈ I. 



(g)  (Σ_{i=1}^{A} a_i^{1/λ})^λ ≤ Σ_{i=1}^{A} a_i ≤ (Σ_{i=1}^{A} a_i^λ)^{1/λ}      with equality iff only one a_i is nonzero. 

PROOF Let 

    P_i = a_i / Σ_{j=1}^{A} a_j      for all i ∈ I 

Since P_i ≤ 1 we have 

    P_i^{1/λ} ≤ P_i ≤ P_i^λ 

with equality iff P_i = 0 or 1, and thus 

    Σ_{i=1}^{A} P_i^{1/λ} ≤ Σ_{i=1}^{A} P_i = 1 ≤ Σ_{i=1}^{A} P_i^λ 

with equality iff only one a_i is nonzero. Thus 

    (Σ_i a_i^{1/λ})^λ ≤ Σ_j a_j 

and 

    Σ_j a_j ≤ (Σ_i a_i^λ)^{1/λ} 


(h)  (Σ_{i=1}^{A} P_i a_i^r)^{1/r} ≤ (Σ_{i=1}^{A} P_i a_i^s)^{1/s}      0 < r < s 

with equality iff, for some constant c, P_i a_i = c P_i for all i ∈ I. 




PROOF Let b_i = 1, a_i → a_i^r, and λ = r/s ∈ (0, 1) in (e). 



As+Ar A 



Z<&ap 

where I = 1 - A, with equality iff, for some c 

cfiifl? /s = Qi^l" for all i e / 



PROOF Let 



n 



^A/(As+Ar) L _ -A/fAs+Af) 





(/) Let flj k be a set of nonnegative numbers for 1 < j < J and 1 < k < K. Then 



A K / J \A 



and 



J / K \1/A 

Z Z* 

j=i \fc=i / 



K I J \1/A J / K \ All/A 

Z S a Jk] ^ Z Z fl jk (Minkowski inequality) 

k=l \j=l / j=l \k=l / \ 



PROOF Note that 



/ * \ 1/A / * W * \ (1/ 

u=i Jk ) u=i J /\k=i Jk / 

X / K \ 

-Z-J.Z-JI 

fc = 1 \ i = 1 / 



K / K \(1-A)/A 



and from (d) 



J / K \1/A J K / K \(1-A)/A 

Z Z* =1 Z* .Z* 

j=l \k=l / j=l k=l \ t = l / 



-z 



J / K \(1-A)/A 



k=l 
K 1 J \A 



J / K \1/A 



/ J \A J / X \ 

I4 M Z Z% 

l \ j= i / j= i \i=i / 



l-A 



or, by dividing both sides by the second term on the right, we have 

J / K \1/A A K I J \A 

Z Z% ^Z Z4 U 

j=i \k=i / k=i \j=i / 

The second inequality follows from this one with the substitution a jk = aj k 






(k) Let a jk be a set of nonnegative numbers for 1 <j < J, 1 < k < K. Then 



J / K \1/A A K / J \A 

id? a * silzw 1 

7=1 \ k = l / k = l \J = 1 / 



and 



K I J \1/A J i K \ All/A 

Z Z 2/4 ^ Z Qj\ Z % (Variant of Minkowski inequality) 

k =l \7=i / 7=1 \ k = i / J 

PROOF Let a jk = Qja jk in (j) for the first inequality and a jk = Q}^a jk in (j) for 
the second inequality. 



3A.2 PROOF OF LEMMA 3.2.1 



    E_0(ρ, q) = −ln Σ_y [Σ_x q(x) p(y | x)^{1/(1+ρ)}]^{1+ρ}      (3A.1) 

From inequality (h) we have, for −1 < ρ_1 < ρ_2 

    [Σ_x q(x) p(y | x)^{1/(1+ρ_2)}]^{1+ρ_2} ≤ [Σ_x q(x) p(y | x)^{1/(1+ρ_1)}]^{1+ρ_1}      (3A.2) 



with equality iff, for some c, q(x)p(y|x) = cq(x) for all x ∈ 𝒳. Hence 

    E_0(ρ_1, q) ≤ E_0(ρ_2, q)      (3A.3) 

with equality iff, for every y ∈ 𝒴, p(y|x) is independent of x ∈ 𝒳 for those x for which 
q(x) > 0. But this is impossible since we assumed I(q) > 0 [see property 1 given in 
(1.2.9)]. Thus E_0(ρ, q) is strictly increasing for ρ > −1 and hence 

    ∂E_0(ρ, q)/∂ρ > 0      ρ > −1      (3A.4) 

Also 

    E_0(ρ, q) ≥ E_0(0, q) = 0      for ρ ≥ 0      (3A.5) 

with equality iff ρ = 0. The inequality is reversed for −1 < ρ < 0. 

Letting λ ∈ (0, 1) and ρ_λ = λρ_1 + λ̄ρ_2 (where λ̄ = 1 − λ), we have from in- 
equality (i), upon letting s = 1 + ρ_1 and r = 1 + ρ_2, 



l+PA 



A(l-l-pi) 



1 



(3A.6) 




Summing (3A.6) over all y e ^, we have 



y x 



A(l+p2) 



(3A.7) 



Applying inequality (d) to the right side of (3 A. 7), we have 



II 



kl/U+P2) 



1+P2 



(3A.8) 



Taking −ln(·) of both sides of this last equation yields the desired result 

    E_0(λρ_1 + λ̄ρ_2, q) ≥ λE_0(ρ_1, q) + λ̄E_0(ρ_2, q)      (3A.9) 

This proves that E_0(ρ, q) is convex ∩ in ρ for ρ > −1, and therefore 

    ∂²E_0(ρ, q)/∂ρ² ≤ 0      (3A.10) 



Equality is achieved in (3A.9) and (3 A. 10) iff we had equality in the application of 
inequalities (i) and (d) that led to (3A.9). Inequality (i) resulted in (3 A.6) where for 
the given y we have equality iff, for some c y 

q(x)p(y\x) 1/(l+pl) = c y q(x)p(y\x) ll(l+p2 > for all x (3A.11) 

Thus equality holds in (3A.7) iff (3 A. 11) holds for each y. Inequality (d) used to 
obtain (3A.8) holds with equality iff, for some c 

1 +P1 1 +P2 

,I/U + PI) _ ^ \ v /,SvWi,l v-Wd + p2) for all y 

(3A.12) 
In (3 A. 12), because of (3 A. 11), we can factor out p(y\x) = c" y > to obtain 






= c 



x:p(y\x)>0 



+ P2 



This implies that for some constant a 

q(x) = a for all y 

x:p(y\x)>0 

Thus, for all x, y such that q(x)p(y | x) > 0, we have 



= a 



for all y (3A.13) 
(3A.14) 

(3A.15) 




or as a consequence of definition (3.2.2) 

In ^ , .v , . ^ = /(q) (3 A. 16) 



3A.3 PROOF OF THEOREM 3.3.2 



= -In 



Let 1 < P! < p 2 . From inequality (/z), we have 

I/PI 



1 1 4(*)9(.x ! 



* 1 1 <?(*)<?(* ) 

I X JC 



with equality iff, for some c 



= c 



(3A.17) 



1/P2JP2 



(3A.18) 



(3A.19) 



for all x, x such that q(x]q(x ) > 0. Hence E x (p, q) is an increasing function of p 
for p > 1. 

Let us examine the condition for equality given by (3 A. 19). For any x such 
that q(x) > 0, we have trivially q(x)q(x) > and 



Furthermore inequality (c) states that 



(3A.20) 



)^! (3A.21) 

y 

with equality iff p(y\x) = p(y|x ). Hence equality in (3A.18) is achieved iff 



for all y and all x, x such that q(x)q(x ) > 0. This is impossible since we assume 
/(q) > 0. Thus E x (p, q) is strictly increasing with p for p > 1 



, q) 



>0 



(3A.22) 






and 



E x (p, q) > ,(1, q) > for p > 1 



(3A.23) 



Next, from inequality (/), it follows that, for any A e (0, 1) and p x = Ap x 
Ip 2 , we have 



ZZ <?(*)<?(* ) 



I/Pi 



X W 

IH 

with equality iff, for some c 



for all nonzero values of 
I 



* ) 


Z ^p(y\x)p(y\x ) 


j (3A.24) 
(3A.25) 


p(y|x)p(y|x ) = c 



where <?(.x)<?(x ) > 



From inequality (c), we again have that this sum is 1 iff for all y, p(y \ x) = p(y \ x ). 
The sum is iff, for all y, p(y \ x)p(y \ x ) = 0. Thus from (3A.24) we have 

E x (l Pl + Ip 2 , q) > *E x (p l9 q) + lE,(p 2 , q) (3A.26) 

or equivalently 

, q) 



dp 



<0 



(3A.27) 



with equality iff, for every pair of inputs x and x for which g(x)g(x ) > 0, either 
p(y I x )p(y I x/ ) = for all y or p(y \ x) = p(y \ x ) for all y. 



APPENDIX 3B KUHN-TUCKER CONDITIONS AND PROOFS 

OF THEOREMS 3.2.2 AND 3.2.3 



3B.1 KUHN-TUCKER CONDITIONS 

Theorem (Gallager [1965], special case of Kuhn and Tucker [1951]) Let f(q) 
be a continuous convex ∩ function of q = (q_1, q_2, ..., q_Q) defined over the 
region 

    𝒫_Q = { q : q_k ≥ 0 for all k,  Σ_{k=1}^{Q} q_k = 1 } 




Assume that the partial derivatives ∂f(q)/∂q_k, k = 1, 2, ..., Q exist and are 
continuous, except possibly when q_k = 0 (on the boundary of 𝒫_Q). Then f(q) 
has a maximum for some q° ∈ 𝒫_Q, and necessary and sufficient conditions on 
q° = (q_1°, ..., q_Q°) to maximize f(q) are that, for some constant λ 

    ∂f(q)/∂q_k |_{q = q°} ≤ λ      for all k such that q_k° = 0      (3B.1) 

    ∂f(q)/∂q_k |_{q = q°} = λ      for all k such that q_k° ≠ 0      (3B.2) 



It is well known that, in real vector spaces without constraints, a convex ∩ 
function either has a unique maximum or, if it possesses more than one maximum, 
they are all equal, and all points on the line, plane, or hyperplane connecting 
these maxima are also maxima of the function. Also, necessary and sufficient 
conditions for maxima are that all partial derivatives be zero. 

Now, if we impose a linear constraint such as 

    Σ_{k=1}^{Q} q_k = 1 

then, by the standard technique of Lagrange multipliers, this can be treated as the 
problem of maximizing 

    f(q) + λ Σ_k q_k 

which yields then (3B.2), and λ can be obtained from the constraint equation. On 
the other hand, if the region 𝒫_Q is bounded by hyperplanes (q_k ≥ 0), we must 
recognize that a maximum may occur on the boundary, in which case (3B.2) will 
not hold for that dimension, but it would appear that (3B.1) should (see Fig. 3B.1 
for the one-dimensional case). We now proceed to prove (3B.1) and (3B.2). 







(a) Maximum at interior point (3B.2). (b) Maximum on boundary (3B.1). 
Figure 3B.1 Examples of maxima over regions bounded by hyperplanes. 






PROOF Necessity: Assume f(q) has a maximum at q°. Let q = (q_1, q_2, ..., q_Q) 
be a distribution vector with q_k > 0 for all k. Since q° maximizes f(q), we have, 
for any θ ∈ (0, 1) 

    0 ≥ f(θq + (1 − θ)q°) − f(q°)      (3B.3) 

[Note: θq + (1 − θ)q° is interior to 𝒫_Q since q is interior to 𝒫_Q.] Then 
consider 

    ∂f(θq + (1 − θ)q°)/∂θ = Σ_{k=1}^{Q} [∂f(q')/∂q_k'] |_{q' = θq + (1−θ)q°} (q_k − q_k°)      (3B.4) 



Since q = 0q -I- (1 0)q is interior to 0> Q , all partial derivatives exist by the 
hypothesis of the theorem, and consequently the left side also exists. 
Obviously 



q = 



so that, by the mean value theorem, we have from (3B.3) 

(1 - a)q ] 



0> 



Using (3B.4) and letting 6 -> 0, we obtain 



for some a e (0, 9) 



(3B.5) 



and since the derivatives are continuous by hypothesis 

Q Af( n \ 

r*-2) 



k=l 



(3B.6) 



Now, for some k = k_1, we must have q_{k_1}° ≠ 0. Let k_2 be any other integer from 
1 to Q. Now, since q was an arbitrary point in 𝒫_Q, let us choose it such that 

    q_{k_2} − q_{k_2}° = q_{k_1}° − q_{k_1} = ε > 0      (3B.7) 

    q_k = q_k°      for all k ≠ k_1 or k_2      (3B.8) 

This is always possible, since q_{k_1}° ≠ 0, and (3B.7) and (3B.8) guarantee that q 
so chosen is a distribution vector. Substituting (3B.7) and (3B.8) in (3B.6), 
we have 



= q 



Now 



(3B.9) 



(3B.10) 






Since ε > 0 in (3B.9), it follows that 



(3B.11) 



But, since k_2 is arbitrary, this establishes the necessity of (3B.1). Furthermore, 
if q_{k_2}° ≠ 0, we could take ε < 0 in (3B.7). This reverses inequality (3B.11), 
which thus proves the necessity of (3B.2). 

Sufficiency: Now given (3B.1) and (3B.2), we show that 

/(q)>/(q ) for all q e ^ Q 
Given (3B.1) and (3B.2), we have 



3ft .. . 

with equality if q% ^ 0. Summing over /c, we have 



k = 



(<?; - rf) < 4 X <?; - Ef)=o 

q=q \l=l 1=1 



Now (3B.4) yields 



- fl)q ] 



k = 



or equivalently 



(3B.12) 



But, since f(q) is convex ∩, the left side of (3B.12) can be bounded using 

    f[θq + (1 − θ)q°] ≥ θf(q) + (1 − θ)f(q°) 

which proves the sufficiency of (3B.1) and (3B.2). 

3B.2 APPLICATION TO E_0(ρ, q) AND I(q) 

PROOF OF THEOREM 3.2.2 We showed in Lemma 3.2.2 that e^{−E_0(ρ, q)} is convex ∪ 
in q. Thus 

    f(ρ, q) = −e^{−E_0(ρ, q)} 

is convex ∩, and maximizing f(ρ, q) is equivalent to maximizing 
E_0(ρ, q) = −ln [−f(ρ, q)]. Then applying (3B.1) and (3B.2), we have 






since 

    ∂f(ρ, q)/∂q(x) = −(1 + ρ) Σ_y p(y | x)^{1/(1+ρ)} α(y, q)^ρ 

Thus 

    Σ_y p(y | x)^{1/(1+ρ)} α(y, q)^ρ ≥ λ' = −λ/(1 + ρ)      for all x ∈ 𝒳      (3B.13) 

with equality if q(x) ≠ 0. Summing over 𝒳, after multiplying by q(x) and 
interchanging the order of summation, we have for the left side of (3B.13) 

    Σ_y α(y, q)^{1+ρ} 

and for the right side 

    Σ_x q(x) λ' = λ' 

Thus (3B.13) requires 

    Σ_y α(y, q)^{1+ρ} = λ'      (3B.14) 

[since (3B.13) holds as an equality if q(x) > 0 while, if q(x) = 0, it did not 
figure in the sum on either side]. Thus, combining (3B.13) and (3B.14), we have 

    Σ_y p(y | x)^{1/(1+ρ)} α(y, q)^ρ ≥ Σ_y α(y, q)^{1+ρ}      for all x ∈ 𝒳      (3.2.23) 

with equality for all x such that q(x) > 0. 

PROOF OF THEOREM 3.2.3 In Lemma 3.2.3, we proved that I(q) is convex ∩. 
Thus, applying (3B.1) and (3B.2), we have 

    ∂I(q)/∂q(x) = Σ_y p(y | x) ln [p(y | x)/p(y)] − 1 ≤ λ      (3B.15) 

Summing over x ∈ 𝒳, after multiplying by q(x), we have for the left side 

    Σ_x q(x) Σ_y p(y | x) ln [p(y | x)/p(y)] − 1 = I(q) − 1 

and for the right side, of course 

    Σ_x q(x) λ = λ 

Thus λ = I(q) − 1, and consequently (3B.15) becomes 

    Σ_y p(y | x) ln [p(y | x)/p(y)] ≤ I(q) 

[since q maximizes I(q)] with equality for all x such that q(x) > 0. 
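As a quick numerical illustration of this condition (a sketch of ours, not from the text), the following Python fragment checks that, for a BSC with its capacity-achieving uniform input, the quantity Σ_y p(y|x) ln [p(y|x)/p(y)] takes the same value C for both inputs.

    import math

    p = 0.1
    P = [[1 - p, p], [p, 1 - p]]              # p(y|x), rows indexed by x
    q = [0.5, 0.5]                            # capacity-achieving input for the BSC
    py = [sum(q[x]*P[x][y] for x in range(2)) for y in range(2)]

    for x in range(2):
        Dx = sum(P[x][y]*math.log(P[x][y]/py[y]) for y in range(2))
        print(f"x={x}:  sum_y p(y|x) ln[p(y|x)/p(y)] = {Dx:.6f}")
    print("C =", math.log(2) + p*math.log(p) + (1 - p)*math.log(1 - p))

Both inputs yield the same value, equal to C = ln 2 − ℋ(p), as the Kuhn-Tucker condition requires.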



APPENDIX 3C COMPUTATIONAL ALGORITHM FOR 
CAPACITY (Arimoto [1972], Blahut [1972]) 



We have a DMC with input alphabet 𝒳, output alphabet 𝒴, and transition 
probabilities p(y|x) for x ∈ 𝒳, y ∈ 𝒴. Let q = (q(x): x ∈ 𝒳) be a probability dis- 
tribution on 𝒳. Then channel capacity is 

    C = max_q I(q)      (3C.1) 

where 

    I(q) = Σ_x Σ_y p(y | x) q(x) ln [q(x | y)/q(x)]      (3C.2) 

where 

    q(x | y) = p(y | x) q(x) / p(y)      (3C.3) 

and 

    p(y) = Σ_x p(y | x) q(x)      (3C.4) 


Let Q = {Q(x | y): x ∈ 𝒳, y ∈ 𝒴} be any set of conditional probability distribu- 
tions; then 

    Q(x | y) ≥ 0      for all x, y      (3C.5) 

and 

    Σ_x Q(x | y) = 1      for all y      (3C.6) 




Let 

    F(q, Q) = Σ_x Σ_y p(y | x) q(x) ln [Q(x | y)/q(x)]      (3C.7) 

Lemma For any Q we have 

    I(q) ≥ F(q, Q)      (3C.8) 

with equality iff Q(x | y) = q(x | y) for all x, y. 

PROOF From inequality (1.1.8), we have for any y 

    Σ_x q(x | y) ln [Q(x | y)/q(x | y)] ≤ 0      (3C.9) 

with equality iff Q(x | y) = q(x | y) for all x. Observing that 

    I(q) − F(q, Q) = −Σ_y p(y) Σ_x q(x | y) ln [Q(x | y)/q(x | y)] 

we see that (3C.8) follows directly from (3C.9). 
This lemma then yields 

    I(q) = max_Q F(q, Q)      (3C.10) 

where the maximum is achieved by 

    Q(x | y) = p(y | x) q(x) / Σ_{x'} p(y | x') q(x')      for all x, y      (3C.11) 

Channel capacity can thus be expressed in the form 

    C = max_q max_Q F(q, Q)      (3C.12) 

Suppose now we fix Q and consider the maximization of F(q, Q) with respect to 
the input probability distribution q. First we note from (3C.7) that, for fixed Q, 

    F(q, Q) = Σ_x q(x) ln [1/q(x)] + Σ_x Σ_y p(y | x) q(x) ln Q(x | y)      (3C.13) 

is a convex ∩ function of the set of input distributions q. The Kuhn-Tucker 
theorem (App. 3B) states that necessary and sufficient conditions on the q that 
maximizes F(q, Q) are 

    ∂F(q, Q)/∂q(x) ≤ λ      for all x      (3C.14) 




with equality when q(x) > 0. λ is chosen to satisfy the equality constraint 

    Σ_x q(x) = 1 

For q(x) > 0, this becomes 

    −1 − ln q(x) + Σ_y p(y | x) ln Q(x | y) = λ      (3C.15) 

or 

    q(x) = exp [Σ_y p(y | x) ln Q(x | y)] e^{−(1+λ)}      (3C.16) 

Choosing λ to meet the equality constraint, we have for q(x) > 0 

    q(x) = exp [Σ_y p(y | x) ln Q(x | y)] / Σ_{x'} exp [Σ_y p(y | x') ln Q(x' | y)]      (3C.17) 

Hence we have (3C.11) for the Q that maximizes F(q, Q) for fixed q, and we have 
(3C.17) for the q that maximizes F(q, Q) for fixed Q. Simultaneous satisfaction of 
(3C.11) and (3C.17) by q and Q achieves capacity. 

The computational algorithm consists of alternating the application of (3C.11) 
and (3C.17). For any index k = 0, 1, 2, ..., let us define 

    Q^(k)(x | y) = p(y | x) q^(k)(x) / Σ_{x'} p(y | x') q^(k)(x')      (3C.18) 

    q^(k+1)(x) = exp [Σ_y p(y | x) ln Q^(k)(x | y)] / Σ_{x'} exp [Σ_y p(y | x') ln Q^(k)(x' | y)]      for all x      (3C.19) 

and 

    C(k + 1) = F(q^(k+1), Q^(k))      (3C.20) 

The algorithm is as follows: 



Step 1. Pick an initial input probability distribution q^(0) and set k = 0. (The 
uniform distribution will do.) 




Step 2. Compute Q^(k) according to (3C.18). 
Step 3. Compute q^(k+1) according to (3C.19). 
Step 4. Change index k to k + 1 and go to Step 2. 

To stop the algorithm, merely set some tolerance level δ > 0 and stop when index 
k first achieves 

    |C(k + 1) − C(k)| < δ      (3C.21) 
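The following Python sketch (ours, not from the text) implements the alternating steps (3C.18) through (3C.21); the example channel matrix is only an illustration.

    import numpy as np

    def capacity(P, tol=1e-9, max_iter=10_000):
        # P[x, y] holds p(y|x)
        q = np.full(P.shape[0], 1.0/P.shape[0])        # Step 1: uniform start
        C_old = 0.0
        for _ in range(max_iter):
            py = q @ P                                 # p(y) = sum_x q(x) p(y|x)
            Q = (q[:, None]*P) / py[None, :]           # Step 2: (3C.18)
            # Step 3: (3C.19); guard 0*log(0) terms
            logr = np.where(P > 0, P*np.log(np.where(Q > 0, Q, 1.0)), 0.0).sum(axis=1)
            r = np.exp(logr)
            q = r / r.sum()
            F = np.where(P > 0,
                         q[:, None]*P*np.log(np.where(Q > 0, Q, 1.0)/q[:, None]),
                         0.0)
            C_new = F.sum()                            # C(k+1) = F(q^(k+1), Q^(k))
            if abs(C_new - C_old) < tol:               # stopping rule (3C.21)
                break
            C_old = C_new
        return C_new, q

    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])                         # a binary asymmetric channel
    C, q = capacity(P)
    print("capacity (nats):", C, " optimal q:", q)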

There remains only the proof that this algorithm converges to the capacity. 

Theorem For the above algorithm 

    lim_{k→∞} |C − C(k)| = 0      (3C.22) 

PROOF Let 

    r^(k+1)(x) = exp [Σ_y p(y | x) ln Q^(k)(x | y)]      k = 0, 1, 2, ...      (3C.23) 

so that, from (3C.19), 

    q^(k+1)(x) = r^(k+1)(x) / Σ_{x'} r^(k+1)(x')      (3C.24) 



From (3C.12), we have C > C(k + 1) where now 
l)=F(q (k+1) ,Q (k) ) 



r (k+ % 



- In r (k+1) (x) + In / r (fc+1) (x ))[ (3C.25) 

\ x /I 

From the definition of r (fc+ 1} (x)in (3C.23), we see that the first two terms cancel 
giving us 

(3C.26) 






Now suppose q* achieves capacity so that C = /(q*). Consider 

M in ^-S = i *M in "T"^ 



= -C(k + 1) + I ,*(.x) In ^^ + X q*(x) In r k + 
= -C(k+ 1) 



= -C(k + 1) + I I p(y|.x),*(.x) In 



C + I P*(y) n T^ (3C.27) 



where 



and 



Again using inequality (1.1.8), we have 

? p * Mln ^- (3C28) 

and, from (3C.27) 

l)<X?*(x)lnl (3C.29) 



Noting that C ≥ C(k + 1) and summing (3C.29) over k from 0 to N − 1, we 
have 

    Σ_{k=0}^{N−1} |C − C(k + 1)| ≤ Σ_x q*(x) ln [q^(N)(x)/q^(0)(x)]      (3C.30) 

Again from inequality (1.1.8), we have 

    Σ_x q*(x) ln q^(N)(x) ≤ Σ_x q*(x) ln q*(x)      (3C.31) 



and thus 

    Σ_{k=0}^{N−1} |C − C(k + 1)| ≤ Σ_x q*(x) ln [q*(x)/q^(0)(x)]      (3C.32) 

The upper bound in (3C.32) is finite and independent of N. Hence 
{|C − C(k)|}_{k=1}^{∞} is a convergent series, which implies 

    lim_{k→∞} |C − C(k)| = 0      (3C.33) 



Similar efficient computational algorithms have been developed for the ex- 
purgated exponent E_ex(R) (Lesh [1976]) given by (3.3.13) and (3.3.14), and for the 
sphere-packing exponent E_sp(R) (Arimoto [1976], Lesh [1976]) given by (3.6.47). 
Recall that the ensemble average exponent equals the sphere-packing exponent for 
higher rates and is easy to derive from E_sp(R). 



PROBLEMS 

3.1 Compute E_0(1, q) and E_0(1) = max_q E_0(1, q) for each of the following channels: 

[Figure P3.1  (a) BSC  (b) BEC  (c) Z channel] 



3.2 (a) Compute max_q E_0(ρ, q) = E_0(ρ) and C = max_q I(q) for the following channels. 
(b) Compute E(R) for each channel. 
Hint: Check conditions (3.2.23) for the obvious intuitive choice of q. 

[Figure P3.2] 



33 (a) Compute E (p\ C, and E(R) for the Q-input, Q-output noiseless channel 

I _ . J, k= 1, 2, ... Q 

(b\ Compute E (p\ E (p\ (1), and C = lim E (p] for the Q-input, Q-ouiput channel 

p~0 

p(b t |a,-) = p for all / 4 k 

p(b k \a k ) = p where p + (Q-l)p=l 

Do not compute E(R\ but sketch it denoting key numerical parameters. 

(c) Find the optimizing q and sketch E(R) for the six-input, four-output channel. 



5 




i,k = 1,2,3,4 



a 6 

Figure P3.3 

H/nf: Show that (c) can be regarded as the superimposition of (b) with Q = 4 and (a) with Q = 2. 
3.4 (a) Show that, for a Q-input, J-output memoryless channel, the necessary and sufficient condition 
(3.2.23) on the input distribution q which maximizes E (p, q) can be stated in matrix form as follows 

[Pi (1+p) ] > (exp [- (p)])u where * T = [Pj k (l+f>) ]q T 
with equality for all k for which q k 0, where we have used the notation 

<?* = q(a k ) 7=1.2 J 

P jk = p(bj\a k ) k-l,2,...,fi 
=(a 1 , a 2 ,...,a Q ) a p = (a?, a^ a) 

u = (1, 1, . . . , 1) (p) = max >. q) (scalar) 
-Q - 

and X r is the transpose of X. 

(fe) Under the following conditions 
(i) J = Q 

(ii) det[Pj k (1+p) ]*Q 
(in) <? k > for all k 
show that 

(1) E ^ 

(2) e^ p}l 
and consequently 



(3) Applying the constraint equation (u, q) = 1, show that 



3.5 Apply the results of Prob. 3.4 to obtain, for the channel of Prob. 3.1c with p = 



(b) Find q in terms of p and thus show that the optimizing distribution varies with R. Indicate 
specifically q| p=0 and q| p=1 . 

(c) Find C. 

(d) Sketch E(R). 

3.6 For the Q-input, (Q + l)-output "erasure" channel with 



P( b Q+l\ a k)= 2 

p(bj\a k ) = Q j^k j =1,2,. ..Q /c=l,2,...Q 

(a) Determine the maximizing distribution q for all p e [0, 1]. 

(b) Determine E (p\ E (p) and E(R) explicitly and sketch E(R). 

3.7 For all three channels of Prob. 3.1, determine 

(a) E x (p) = max p E x (p, q) 

(b) E x (p) 

(c) E eK (R) and sketch 

3.8 (a) For the channel of Prob. 3.3a, determine E x (p) and ex (/?). 

(b) Repeat for the channels of Prob. 3.2 and discuss the difference in the results of (/) and (ii). 

3.9 For the channel of Fig. 3.7 

(a) Find the maximizing q, E (p], and C. Sketch E(R). 

(b) Find E x (p, q) using the same q as in (a). Sketch E ex (K, q) on the same diagram as (a). 

3.10 For any distribution q, show that 



if 



W q ) w i t h equality iff C = 



3.11 (a) Show that if the Q x Q matrix with elements 



is nonnegative definite, then the function 

/(q) = II 

X X 

is convex u in the probability distribution space of q (Jelinek [1968b]). 

(b) Obtain necessary and sufficient conditions on q to minimize /, and consequently maximize 
E x (p, q), for any channel satisfying (a). 

3.12 (a) For the binary-input AWGN channel and for the BSC derived from it by hard quantization 
of the channel output symbols, verify (3.4.19) and (3.4.20). 

(b) Verify Fig. 3.8 (a), (b), (c) in the limit as SJN -> and SJN - oo. 

(c) Verify (3.4.21) and obtain curves for the octal output quantized AWGN channel (for the 
quantizer of Fig. 2.13, let a 




3.13 Show that the AWGN channel with SJN 1 satisfies the definitions (3.4.23) and (3.4.24) of a 
very noisy channel. 

3.14 (Parallel Channels) (Gallager [1965]) Let the independent memoryless channels 1 and 2 with 
identical input and output alphabets be used in parallel. That is. for each symbol time, we send a 
symbol x over channel 1 and simultaneously a symbol z over channel 2. 

(a) Treating these parallel channels as a single composite channel, show that for the composite 



where the subscripts 1 and 2 refer to the corresponding exponent function for the individual channels 
and q = (q^ q 2 ) is a 2Q-dimensional vector where qj and q 2 are each Q dimensional. 
(b) Show then that 

max E (p. q) = max E 0l (p, q x ) + max ,(p. q 2 ) 

q q, q 2 

3.15 (Sum Channels) (Gallager [1968]) Suppose we have n independent memoryless channels, possibly 
with different input and output alphabets. At each symbol time, a symbol is sent over only one of 
the channels. We call this a sum channel. 

Let E 0i (p) = max E .(p. q) for the fth channel, i = 1, 2, . . . , n 
q 

E (p) = max E (p. q) for the sum channel 
q 

/?(/) = Pr (using ith channel) 
Hence if the weighting vector for the ith channel is (tff, . . . , q ( g) = q\ the sum channel weighting is 

i-MiHftjPW" ..... /W"). 

(a) Show that 



and 



y [..!<?)/*] 

1 = 1 

(b) Show from this that the sum channel capacity C is related to the individual channel capacities 
C.by 

C = In f e c < 
1 = 1 

(c) Apply these results to obtain E (p) and C for the channel of Fig. 3.7. 

3.16 (List Decoding) Suppose the decoder, rather than deciding in favor of a single message m, con 
structs a list m l , m 2 m L of the L most likely messages. An error is said to occur in list decoding if 

the correct message is not in this list. 

(a) Show that, for a memoryless channel 

PE. = X>v(y O 
y e A* 

where A^ = y : m > 1 for some set of L messages m t , m 2 , . . . , m L where m, m for all /. 



(b) Using the techniques of Sec. 2.4, show that 
A_c A* = v: Y Y 






and 



where 



and thus that, with A 



i n 



X m) 



p >0 



pL 



y \mitm m L *m 1=1 

(c) Now applying the techniques of Sec. 3.1, obtain an ensemble average bound 

L\p 



and, since ( M L 1 ) < (M - 1) L , show that 

P e < X I w (x)p w (y |x) 1/(1+pL) (M - i) <M*K(y I*! 

y x 

< e -JV[.(p.q)-plt] 



0<p<l 



where p = pL so that < p < L. 

(d) Compare this result, after maximizing with respect to q, with the sphere-packing lower 
bound. 

3.17 Find /i(s) of (3.5.3) for the two N-symbol code vectors (a, a, ..., a) and (b, b, ..., b) for each 
of the following channels 



a 



1-p 



1-p 



z? 




a 





(a) BSC 
Figure P3.17 



(b) Z channel 



3.18 (Chernoff Upper Bound on the Tail of a Distribution) 

(a) If η is an arbitrary random variable with finite moments of all orders and θ is a constant, show that 

    Pr{η ≥ θ} ≤ E[e^{s(η − θ)}] = e^{Γ(s) − sθ}      s ≥ 0 

where 

    Γ(s) = ln E[e^{sη}] 


(b) Show that minimizing on s results in 

    Pr{η ≥ θ} ≤ e^{Γ(s) − sΓ'(s)}      where θ = Γ'(s) = dΓ(s)/ds 

Hint: Show first that Γ(s) − sθ is convex ∪ by comparing it to μ(s) of (3.5.3). 

(c) Let 

    η = Σ_{n=1}^{N} y_n 

where the y_n are independent, identically distributed random variables. Verify (3.5.38): 

    Pr{η ≥ θ} ≤ e^{N[γ(s) − s γ'(s)]}      where θ = Nγ'(s) 

and 

    γ(s) = Γ(s)/N = ln Σ_y e^{sy} p(y) 

3.19 (a) Apply Prob. 3.18c to the binomial distribution by letting 

    η = Σ_{n=1}^{N} y_n      where y_n = 0 with probability 1 − p, 1 with probability p 

Obtain upper bounds on Pr(η ≥ ND) and Pr(η ≤ ND): 

    Pr(η ≥ ND) ≤ e^{−N[D ln (D/p) + (1−D) ln ((1−D)/(1−p))]}      p < D 

    Pr(η ≤ ND) ≤ e^{−N[D ln (D/p) + (1−D) ln ((1−D)/(1−p))]}      p > D 

Hint: For p > D, replace η by N − η, p by 1 − p, and D by 1 − D. 
Also show that, when p = 1/2 and D < 1/2, 

    Pr(η ≤ ND) ≤ e^{−N R(D)} 

where 

    R(D) = ln 2 − ℋ(D) 

(b) Apply Prob. 3.18b to the Gaussian distribution, showing that, if η is a zero-mean, unit-variance 
Gaussian random variable, 

    Pr{η ≥ θ} = Q(θ) ≤ e^{−θ²/2} 
3.20 Find the sphere-packing bound for all the channels of Probs. 3.2, 3.3, 3.5, and 3.6. 

3.21 Alternative proof of the expurgated bound: For any DMC channel, the expurgated bound given 
by Theorem 3.3.1 can be proven using a sequence of ensemble average arguments rather than expurgat 
ing codes from an ensemble as is done in Sec. 3.3. 

Begin with a code of block length N and rate R = (In M)/N given by # = {x 1? x 2 , . . . , X M }. The 
Bhattacharyya bound of (2.3.16) gives 



EVft 

m *m 

for m = 1, 2, . . . , M. 






(a) Show that for any s e (0, 1] 



form- 1,2,..., M. 

(b) Consider an ensemble of codewords where any codeword x is chosen with probability 



Assume that the M 1 codewords (x m .} m .^ m are fixed and average B^(^) with respect to codeword x, 
chosen from the above ensemble. Denote this average by B s m (^) and show that 



, q)] 



where 



7(5, q) = max 



[Here, without loss of essential generality, assume that all codewords in <# satisfy q N (\) > 0.] 

(c) Given code #, show that there exists a codeword x m such that a new code # m which is the 
same as # with x m replaced by x m satisfies 

p m (^m) ^ B m(^ m ) ^ M 1/s [y(s, q)] w/s for any s e (0, 1] 

(d) Using (c), construct a sequence of codes 

t = ^ = { Xl ,x 2 ,...,x M } 

#1={*1> X 2 ,X 3 ,...,X M } 

^ 2 = {x 1 , x 2 ,x 3 ,...,x M } 



V V V 

X 2 X 3 X MJ 



such that 



where 



for m = 1, 2, . . . , M. 

(e) For code ^ M = {x 1} x 



for m= 1, 2, ..., M. 
(/) For code # M , 



1 I y 



. , X M }, show that 

M 



l y 




and the average error probability is defined by 






m=l 

Show that for any s e (0, 1] 



(g) By examining necessary conditions for achieving a minimum, show that, over all probability 
distributions on 3C 

min Z Z <?( X M- X )(Z x/PtH-^PM* )) = min 7( s <l) 

This then proves that, for any distribution q(-) and any p = 1/s e [1, oo), there exists a code < of 
block length N and rate R such that 

nNp 



where 

E x (p, q) = -p In 



"I? 



By maximizing the exponent with respect to the distribution q and the parameter p e [1, oo), obtain 
the expurgated bound. Note that this proof does not give the inconvenient term (In 4)/N added to the 
rate R as does that in Theorem 3.3.1. 

3.22 Discrimination functions and the sphere-packing bound : The sphere-packing lower bound can be 
proven for discrete memoryless channels using discrimination functions (see Omura [1975]). Here this 
approach is demonstrated for the BSC channel with crossover probability p. 

Define a "dummy" BSC with crossover probability p and capacity = In 2 - Jf (p). The dis 
crimination between the actual channel and the dummy channel is defined in terms of channel 
transition probabilities as 



- vr r/ 4 r\r \~t / 

= p In - + (1 - p) In - - 
P 1 - P 

For any 7 > and any x e jT v , define the subset G..(\) a 3/ N as follows: 



(a) For any code % = (x^ x 2 , . . . , \ M ] of block length N and rate R = (In M)/N, show that 
PE = Z Z P.v(y|O 



.P + v> _ 

^ m=l yeS^ 



M m=l yG..(x,.) 

where P is the error probability when code % is used over the dummy BSC. 




(b) Show that 

y e G~ y (\ m ) 

goes to as N -> oo. 

(c) Using the converse to the coding theorem for p chosen such that 

C = In 2 - Jf(p) < R 
show that there exists an a > such that for N large enough 

Since this is true for any y > and dummy BSC where C < R, define the limiting exponent 

where p satisfies 

In 2 - jf(p) = R 

and check that this is the sphere-packing exponent for the BSC. 

3.23 Consider sending one of two codewords^! or x 2 , over a BSC with crossover probability p. Using 
the method of Prob. 3.22 where M = 2 and p = \, show that for any y > and for all N large enough 

P (l -> 2) > i exp [-w( Xl x 2 ){-ln ^(1 - p) + y}] 

where w(\ i x 2 ) is the Hamming distance between the two codewords. [For large N we assume 
w(x 1 x 2 ) is also large.] 

Hint: Consider only those coordinates where x ln =/= x 2n , n = 1, 2, . . . , N. 

3.24 For the unconstrained AWGN channel, prove the sphere-packing lower bound on P E for any 
code <g = {x t , x 2 , . . . , X M } with 

|| x m || 2 = $ m = 1, 2, . . . , M 

by following the method of Prob. 3.22. Here use the "dummy" AWGN channel that multiplies all 
inputs by p. That is, for codeword x e S N the dummy AWGN channel has transition probability 
density 

/ J \N,2 

P*(yW=(-) .-.--* 

whereas the actual channel has transition probability density 

(1 \ N > 2 
_) <--.<*. 

3.25 Consider AWGN channels that employ m frequency-orthogonal signals such as in (2.12.1). These 
are commonly called MFSK signals. Show that, for these M-ary input memoryless channels, the 
expurgated function E x (p) = max q E x (p, q) has the form 



E x (p)= -pin 



M 

where 




for any x x . Find Z for the following cases: 

(a) Coherent channel with hard M-ary decision outputs. 

(b) Noncoherent channel with hard M-ary decision outputs. 

(c) Coherent channel with unquantized output vectors. 

(d) Noncoherent channel with unquantized output vectors. 
Show that the expurgated exponent D = E e% (R) satisfies 

R = In M - jf(D/d) - (D/d) In (M - 1) 

where d = - In Z. 

3.26 Suppose we have a DMC with transition probabilities p(y |x). The decoder mistakenly assumes 
that the transition probabilities are p(y\x) and bases the maximum likelihood decision rule on these 
incorrect transition probabilities. Following Sees. 2.4 and 3.1, derive an ensemble average upper bound 
to the probability of a decoding error (Stiglitz [1966]). 

It should have the form 

P E < exp {-N[-pR + F(p, q. p, p]} < p < 1 
The quantity 

K (p, p) = max F(l,q, p, p) 

+ q 
can be used to examine the loss due to not knowing the actual channel parameters. 

3.27 Repeat Prob. 3.25 for the noncoherent fading channels with MSFK signals that are discussed 
in Sec. 2.12.3. 

3.28 Suppose we have a DMC with input alphabet 3C containing Q symbols. Let 



satisfy the " balanced channel " condition 



for all x e 3C. This shows that the set of Bhattacharyya distances from any input x to all other inputs are 
the same for all x e 3C. For these channels, show that the expurgated exponent D = () is given 
parametrically by 



for s = l/p [ 1, 0]. Give the specific form of these equations for the multiphase signal set of 
Fig. 2.12/> used over the AWGN channel. 

3.29 Consider a DMC with input alphabet 3t, output alphabet #, and transition probabilities 
{p(y | x) : x e 3", y e &}. Given a code # = (x^ x 2 , . . . , X M } of block length N and rate R = (In M)/N, 
following the proof of Theorem 3.9.1, 

(a) Show that the probability of correct decoding is 



(Assume all messages are equally likely and that the optimum decision rule is used.) 



(b) For any /? > show that 



(c) Consider an ensemble of codes where code # = {x 1} x 2 , . . . , X M } is selected with probability 



where 



qM= ft <?( 



and g( ) is any distribution on 3C. For < ft < 1, show that P c averaged over this code ensemble 
satisfies 



- P ji} where 
and , = -^ 



y x 

(d) Show that, for some distribution q(-) and p [ 1, 0] 

(P, q) - pK > for K > C 

Here C is channel capacity for the DMC. 

(e) From (d), it follows that, over the ensemble of codes of block length N and rate R with some 
distribution q(-\ the average probability of correct decoding satisfies 

pT< e -NE ic (R) 

where 

E SC (R)= max {>, q) - p/?} > 
- i <P<O 

for K > C. Compare this result with the strong converse coding theorem in Sec. 3.9. What is the 
difference between these results? Explain why the above result is not useful. 
3.30 Consider the K-input, K-output DMC where 

P(b k \a k )=l-p /c=l,2,...,X 
P(bk + iM = P *c=l,2,...,K-l 
and 

^i I %) = P where < p < J 

(a) Find E (p) = max E (p, q) and E (p). 

q 

(b) Find channel capacity. 

(c) Suppose codeword x l which has N components gives an output y. We now randomly select 
x 2 according to the probability 



What is the probability that x 2 is chosen such that it is possible for x 2 also to give output y? 

(d) Suppose x 2 , x 3 , . . . , X M are randomly selected as in (c). Find a union upper bound for the 
probability that one or more of the codewords x 2 , x 3 , . . . , X M can give output y. 




(e) Determine the ensemble average exponent E(R ) for the case where p = ^, and compare this with 
the exponent in the bound obtained in (d). 

(/) Determine E(R) for p = and explain why it is finite. 

331 Consider M signals and an additive white Gaussian noise channel with spectral density N /2. The 
signal set is 

*,( )= Z***k(0 0<f<T,/=l,2,...,M 
* = i 

where ($ k ( )}*=! is a set of orthonormal functions. Suppose we now randomly select codewords by 
choosing each x ik independently from the ensemble of random variables with zero mean and variance . 
Using a union of events bound, show that there exists a set of codewords such that the error 
probability satisfies 

P e < M2~ vc 

Find C when 

(a) x is a Gaussian random variable. 

l + \ f with probability I 
I - v ^ with probability \ 

Hint: Assume .x 1 (f) is sent and bound the error probability by the sum of the two signal error 
probabilities between Xj(f) and each of the other signals. Then use the bound 



1 



and average the bound over the ensemble of codewords. 
3.32 Consider the four-input, four-output DMC shown. 

(a) What is the channel capacity? 

(b) Determine E (p) = max E (p, q). 

q 

(c) Determine and sketch E(R), E ex (K), and E sp (R). 





P kk =0 All 



3.33 (Improved Plotkin Bound) Assume a systematic binary linear code of M = 2^K code vectors of 
dimensionality N. Let d_min be the minimum distance between code vectors in this code. 

(a) For any 1 ≤ j ≤ K, consider the 2^j code vectors in the code with the first K − j information 
bits constrained to be 0. By eliminating these first K − j components in these code vectors, a binary 
code of 2^j code vectors of dimensionality N − (K − j) is obtained. Use Lemma 3.7.1 to show that 

    d_min ≤ [N − (K − j)] 2^{j−1} / (2^j − 1) 

(b) Next, show the improved Plotkin bound 

    d_min/N ≤ (1/2)(1 − R/ln 2) + o(N) 

where 

R = (In M )/N 

(c) Show that the improved Plotkin bound is valid for all binary codes of M code vectors of 
dimensionality N. 
3.34 (Gilbert Bound for Binary Codes) 

1. List all 2 N possible distinct binary vectors of length N. 

2. Choose an arbitrary binary vector from this list and denote it as x t . Delete from the list x t and all 
other binary vectors of distance d 1 or less from \ l . 

3. From the remaining binary vectors on the list arbitrarily pick x 2 , then delete from the list x 2 and all 
other binary vectors that are distance d 1 or less from x 2 . 

4. Repeat Step 3 for vectors x 3 , x 4 , . . . , X M until the list is empty. 

(a) Show that the number of binary vectors selected, M, satisfies 

    M ≥ 2^N / Σ_{i=0}^{d−1} (N choose i) 

(b) Using the Chernoff bound (see Prob. 3.19a), show that 

    Σ_{i=0}^{d−1} (N choose i) ≤ 2^N e^{−N[ln 2 − ℋ((d−1)/N)]} 

and, choosing p = 1/2, show 

    Σ_{i=0}^{d−1} (N choose i) ≤ e^{N ℋ(d/N)} 

(c) From (a) and (b), show that, for any rate R = (ln M)/N < ln 2, there exists a code of 
minimum distance d_min where 

    d_min ≥ Nδ 

and δ satisfies δ ≤ 1/2 and 

    R = ln 2 − ℋ(δ) 

(d) Rederive the Gilbert bound for large N by using the expurgated upper bound (3.4.8) and the 
lower error bound (3.7.18) for the binary-input, output-symmetric channel. Furthermore, show that 
the Gilbert bound holds for linear codes as well by using the expurgated upper error bound derived in 
Sec. 3.10. 
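For small block lengths, the greedy selection of Prob. 3.34 can be carried out by brute force. The Python sketch below (ours) does so and compares the number of codewords found with the guarantee of part (a); the choice N = 8, d = 3 is purely illustrative.

    import math
    from itertools import product

    def gilbert_greedy(N, d):
        # pick any surviving vector, delete all vectors within distance d-1 of it,
        # and repeat until the list is empty
        remaining = set(product((0, 1), repeat=N))
        code = []
        while remaining:
            x = min(remaining)
            code.append(x)
            remaining = {y for y in remaining
                         if sum(a != b for a, b in zip(x, y)) > d - 1}
        return code

    N, d = 8, 3
    code = gilbert_greedy(N, d)
    guarantee = 2**N // sum(math.comb(N, i) for i in range(d))   # part (a)
    print(f"selected {len(code)} codewords; part (a) guarantees at least {guarantee}")

By construction, every pair of selected vectors is at Hamming distance at least d, so the procedure exhibits a code meeting the Gilbert guarantee.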



PART 



TWO 



CONVOLUTIONAL CODING AND 
DIGITAL COMMUNICATION 



CHAPTER 

FOUR 

CONVOLUTIONAL CODES 



4.1 INTRODUCTION AND BASIC STRUCTURE 

In the two preceding chapters we have treated digital communication over a 
variety of memoryless channels and the performance enhancement achievable by 
block coding. Beginning with the most general block codes, we proceeded to 
impose the linearity condition which endowed the codes with additional structure, 
thus simplifying both the encoding-decoding procedure and the performance 
analysis for many channels of interest. Of particular significance is the fact that, 
for a given block length and code rate, the best linear block code performs about 
as well as the best block code with the same parameters. This was demonstrated 
for a few isolated codes and channels in Chap. 2, and more generally by ensemble 
arguments in Chap. 3. 

In the narrowest sense, convolutional codes can be viewed as a special class of 
linear block codes but, by taking a more enlightened viewpoint, we shall find that 
the additional convolutional structure endows a linear code with superior proper- 
ties which both facilitate decoding and improve performance. We begin with the 
narrow viewpoint, mainly to establish the connection with previous material, and 
then gradually widen our horizon. In this chapter and the next, we shall exploit 
the additional structure to derive a maximum likelihood decoder of reduced 
complexity and improved performance, first for specific codes and channels and 
then more generally on an ensemble basis, following essentially the outlines used 
for block codes in Chaps. 2 and 3. Finally, in Chap. 6, we treat sequential decod 
ing algorithms which reduce decoder complexity at the cost of increased memory 
and computational speed requirements. 





Consider first the linear block code specified by the binary generator matrix 

    G = | g_1^(1)  g_2^(2)  g_3^(3)  . . .  g_K^(K)                          | 
        |          g_1^(2)  g_2^(3)  . . .           g_K^(K+1)               | 
        |                   g_1^(3)  . . .                      g_K^(K+2)    | 
        |                            . . .                                   | 
        |                            g_1^(B-K+1)     . . .      g_K^(B)      |      (4.1.1) 



where g_j^(i) = (g_{j1}^(i), g_{j2}^(i), ..., g_{jn}^(i)) is an n-dimensional binary vector and blank areas in 
the matrix G indicate zero values. G describes an (nB, B − K + 1) linear block 
code which could be implemented, as shown in Fig. 2.16, by a (B − K + 1)-stage 
fixed register and nB modulo-2 adders. A simpler mechanization, particularly 
since generally B ≫ K, utilizes a K-stage shift register with n modulo-2 adders and 
time-varying tap coefficients g_{jk}^(i), as shown in Fig. 4.1. The shift register can be 
viewed either as a register whose contents are shifted one stage to the right as each 
new bit is shifted in from the left, with the rightmost stage contents being lost, or 
as a digital delay line in which each delay element stores one bit between arrival 
times of the input bits. Both representations are shown in Fig. 4.1, with the former 
shown dotted, the latter being the preferred form. 

We note also that in the shift register or delay line implementation the 



[Figure 4.1  A time-varying convolutional encoder: rate r = 1/n bits/channel symbol.] 




(B − K + 1)st (last) bit must be followed by K − 1 zeros to clear the register^1 and 
to produce the last K − 1 output branches, which are sometimes called the tail of 
the code. 

Thus it appears that the encoder complexity is independent of the block length 
nB, and depends only on the register length K and the code rate,^2 which, when 
measured in bits per output symbol, approaches 1/n as B → ∞. K is called the 
constraint length of the convolutional code. On the basis of the shift register im- 
plementation it should also be clear that the greater the ratio B/K, the less the tail 
"overhead" in the sense that, since the last K − 1 input bits are zeros, the tail 
reduces the code rate in proportion to (K − 1)/B. 

The term "convolutional" applies to this class of codes because the output 
symbol sequence v can be expressed as the convolution of the input (bit) sequence u 
with the generator sequences. For, since the code is linear, we have 

    v = uG 

and, as a consequence of the form of G of Eq. (4.1.1), 

    v_i = Σ_{k = max(1, i−K+1)}^{i} u_k g_{i−k+1}^(i)      i = 1, 2, ...      (4.1.2) 

where v_i = (v_{i1}, v_{i2}, ..., v_{in}) is the n-dimensional coder output just after the ith bit 
has entered the encoder. 

While, for theoretical reasons, in the next chapter we shall be interested in 
the ensemble of time-varying convolutional codes just described, virtually all 
convolutional codes of practical interest are time-invariant (fixed). For such 
codes, the tap coefficients are fixed for all time, and consequently we may delete 
all superscripts in the matrix (4.1.1), with the result that each row is identical to 
the preceding row shifted n terms to the right. An example of a fixed convolutional 
code with constraint length K = 3 and code rate 1/2 is shown in Fig. 4.2a. Here 
the generator matrix of (4.1.1) has the form 

    G = | 11 10 11                      | 
        |       11 10 11                | 
        |             11 10 11          | 
        |                   . . .       |      (4.1.3) 

^1 This is required to terminate the code. Alternatively, the convolutional code may be regarded as a 
long block code (with block length nB arbitrarily large), and this termination with (K − 1) zeros clears 
the encoder register for the next block. 

^2 The code rate is actually [1 − (K − 1)/B]/n because of the (K − 1) zeros in the tail. However, we 
generally disregard the rate loss in the tail since it is almost always insignificant, and henceforth the 
rate shall refer to the asymptotic rate; that is, to the ratio of input bits to output symbols, exclusive 
of the tail (1/n in this case). 






The rate of the class of convolutional codes defined in this manner is 1/n bits 
per output symbol.³ To generalize to any other rational rate less than unity, 
we must generalize the matrix G of (4.1.1) or its implementation in Fig. 4.1. We 
may most easily describe higher code rate convolutional codes by specifying that 
b > 1 bits be shifted together in parallel into the encoder every b bit times, and 
simultaneously that the bits already within the encoder are shifted to the right 
in blocks of b. Here K is the number of b-tuples in the register so that a total 
of bK bits influence any given output and consequently bK is now the constraint 
length. In terms of the generator matrix (4.1.1), we may describe a convolutional 
code of rate b/n by replacing the n-dimensional vector components g_j^{(i)} with 
b x n matrices. The implementation of Fig. 4.1 can best be generalized by 
providing b parallel delay lines, every stage of each delay line being connected 
through a tap multiplier to each modulo-2 adder. Examples of fixed convolutional 
codes of rates 2/3 and 3/4 with K = 2 are shown in Fig. 4.2b and c, respectively. 
The generalization to time-varying convolutional codes of any rate b/n and any 
constraint length K is immediate.
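
As a concrete illustration of the shift register implementation just described, the following sketch (in Python; not part of the original text, and all names are ours) encodes a data sequence with a fixed rate 1/n code given by its n tap vectors. The K = 3, rate 1/2 generators (1,1,1) and (1,0,1) used below are those of the code of Fig. 4.2a; they reproduce the branch outputs 00, 11, 01, 01 quoted later in this section for the input sequence 0110.

    # Sketch (not from the text): fixed convolutional encoder, rate 1/n, constraint length K.
    # Each generator is a tap vector of length K; taps[0] multiplies the newest register bit.

    def conv_encode(bits, generators, terminate=True):
        K = len(generators[0])                          # constraint length
        register = [0] * K                              # shift register, newest bit first
        data = list(bits) + ([0] * (K - 1) if terminate else [])   # optional (K-1)-zero tail
        out = []
        for u in data:
            register = [u] + register[:-1]              # shift the new bit in, drop the oldest
            for g in generators:                        # one modulo-2 adder per output symbol
                out.append(sum(b & t for b, t in zip(register, g)) % 2)
        return out

    # K = 3, rate 1/2 code of Fig. 4.2a
    G = [(1, 1, 1), (1, 0, 1)]
    print(conv_encode([0, 1, 1, 0], G, terminate=False))    # [0,0, 1,1, 0,1, 0,1] = 00 11 01 01

A rate b/n encoder is obtained in the same way by shifting b bits in per branch and using b x n tap matrices, as described above.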

A fixed convolutional coder may be regarded as a linear time-invariant finite- 
state machine whose structure can be exhibited with the aid of any one of several 
diagrams. We shall demonstrate the use and insight provided by such diagrams 
with the aid of the simple example of Fig. 4.2a. It is both traditional in this field 
and instructive to begin with the tree diagram of Fig. 4.3. On it we may display 
both input and output sequences of the encoder. Inputs are indicated by the path 
followed in the diagram, while outputs are indicated by symbols along the tree's 
branches. An input zero specifies the upper branch of a bifurcation while a one 
specifies the lower one. Thus, for the encoder of Fig. 4.2a, the input sequence 0110 
is indicated by moving up at the first branching level, down at the second and 
third, and again up at the fourth to produce the outputs indicated along the 
branches traversed: 00, 11, 01, 01. Thus, on the diagram of Fig. 4.3, we may 
indicate all output sequences corresponding to all 32 possible sequences for the 
first five input bits. 

From the diagram, it also becomes clear that after the first three branches the 
structure becomes repetitive. In fact, we readily recognize that beyond the third 
branch the code symbols on branches emanating from the two nodes labeled a are 
identical, and similarly for all the identically labeled pairs of nodes. The reason for 
this is obvious from examination of the encoder. When the third input bit enters 
the encoder, the first input bit comes out of the rightmost delay element, and 
thereafter no longer influences the output code symbols. Consequently, the data 
sequences 100xy... and 000xy... generate the same code symbols after the third 
branch and thus both nodes labeled a in the tree diagram can be joined together. 

This leads to redrawing the tree diagram as shown in Fig. 4.4. This new figure 
has been called a trellis diagram, since a trellis is a tree-like structure with remerg- 



3 We use small r to denote code rate in bits per output symbol; that is, when we use the logarithm 
to the base 2 to define rate. 








(b) K = 2, r = 2/3

Figure 4.2 Fixed convolutional encoder examples.



ing branches. We adopt the convention here that code branches produced by a 
"0" input bit are shown as solid lines and code branches produced by a "1" input 
bit are shown dashed. We note also that, since after B - K + 1 input bits the code 
block (4.1.1) is terminated by inserting K - 1 zeros into the encoder, the trellis 
terminates at an a node as shown in Fig. 4.4. The last two branches are then the 
tail of the code in this case. 

The completely repetitive structure of the trellis diagram suggests a further 
reduction of the representation of the code to the state diagram of Fig. 4.5. The 
states of the state diagram are labeled according to the nodes of the trellis 
diagram. However, since the states correspond merely to the last two input bits to 
the coder, we may use these bits to denote the nodes or states of this diagram. 






Figure 4.3 Tree-code representation for encoder of Fig. 4.2a.



Figure 4.4 Trellis-code representation for encoder of Fig. 4.2a.

Throughout the text, we shall adopt this convention of denoting the state of a rate 
1/n convolutional encoder by the latest K - 1 binary symbols in the register, with 
the most recent bit being the last bit in the state.

We observe finally that the state diagram can be drawn directly by observing 
the finite-state machine properties of the encoder and particularly by observing 
the fact that a four-state directed graph can be used to represent uniquely the 
input-output relation of the K = 3 stage machine. For the nodes represent the 
previous two bits, while the present bit is indicated by the transition branch; for 
example, if the encoder contains 011, this is represented in the diagram by the 




Figure 4.5 State diagram for encoder of 
Fig. 4.2a.






Figure 4.6 State diagram for encoder of 
Fig. 4.2b.



transition from state b = 01 to state d = 11 and the corresponding branch indi 
cates the code symbol outputs 01. 
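
This input-output relation can be tabulated mechanically. The short sketch below (ours, not the authors') lists the next state and branch outputs for every state and input bit of the K = 3, rate 1/2 code of Fig. 4.2a, which is exactly the information displayed by the state diagram of Fig. 4.5; states are labeled a, b, c, d as in the text.

    # Sketch (ours): enumerate the state diagram of the K = 3, rate 1/2 code of Fig. 4.2a.
    # A state is the latest K - 1 input bits, with the most recent bit last, per the text's convention.

    G = [(1, 1, 1), (1, 0, 1)]                          # tap vectors, newest register bit first

    def step(state, u):
        register = (u, state[1], state[0])              # newest bit, previous bit, oldest bit
        out = tuple(sum(r & t for r, t in zip(register, g)) % 2 for g in G)
        return (state[1], u), out                       # next state drops the oldest bit

    labels = {(0, 0): 'a', (0, 1): 'b', (1, 0): 'c', (1, 1): 'd'}
    for state in labels:
        for u in (0, 1):
            nxt, out = step(state, u)
            print(f"{labels[state]} --{u}/{out[0]}{out[1]}--> {labels[nxt]}")

For example, the line printed for state b with input 1 is the transition b -> d with outputs 01 just mentioned.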

To generalize to rate b/n convolutional codes, we note simply that the tree 
diagram will now have 2^b branches emanating from each branching node. 
However, the effect of the constraint length K is the same as before, and hence, 
after the first K branches, the paths will begin to remerge in groups of 2^b; more 
precisely, all paths whose last b(K - 1) data bits are identical will merge together, produc- 
ing a trellis of 2^{b(K-1)} states with all branchings and mergings occurring in groups 
of 2^b branches. Here K represents the number of b-tuples stored in the register. 
Consequently, the state diagram will also have 2^{b(K-1)} states, with each state 
having 2^b output branches emanating from it and 2^b input branches arriving into 
it. An example of a state diagram for the rate 2/3 code of Fig. 4.2b is shown in 
Fig. 4.6. Other examples are treated in the problems.

Up to this point in our treatment of nonblock codes, we have only considered 
linear codes. Just as linear block codes are a subclass of block codes, convolu- 
tional codes are a subclass of a broader class of codes which we call trellis codes. 
Rate b/n trellis encoders also emit n channel symbols each time b source bits enter 
the register. However, general trellis encoders can produce symbols from any 
channel input alphabet, and these symbols may be an arbitrary (nonlinear) func 
tion of the bK source bits in the encoder register. Since the K-stage register is the 
same for the general class of trellis codes as for convolutional codes, the tree, 
trellis, and state diagrams are the same and the trellis encoder output symbols can 
be associated with branches just as was done previously for the subclass of convo 
lutional codes. It is clear that general trellis codes have the same relationship to 
general block codes as convolutional codes have to linear block codes. 

We have seen here that the tree, trellis, and state diagram descriptions of 
convolutional and trellis codes are quite different from our earlier description of 
block codes. How then do we compare block codes with convolutional codes? 




Returning to our earlier discussion on the generation of convolutional codes, we 
see that the parameters bK, the constraint or " memory " length of the encoder, 
and r = b/n, the rate in bits per channel symbol, are common to both block and 
convolutional encoders. For both cases, the same values of these parameters result 
in roughly the same encoder complexity. We shall soon see that the complexity of 
a maximum likelihood decoder for the same bK and r is also roughly the same for 
block codes and convolutional or trellis codes. Hence, for the purpose of compar 
ing block codes and convolutional codes, we use the parameters bK and r. We 
shall see that, for the same parameters bK and r, convolutional codes can achieve 
much smaller error probabilities than block codes. 

We began the discussion in this section by viewing convolutional codes as a 
special case of block codes. By choosing K = 1 and n = N in the above, we get a 
rate b/N block code, and thus paradoxically linear block codes can themselves be 
considered special cases of convolutional codes, and the broader class of block 
codes can be considered special cases of trellis codes. It is a matter of taste as to 
which description is considered more general. 



4.2 MAXIMUM LIKELIHOOD DECODER FOR 
CONVOLUTIONAL CODES: THE VITERBI ALGORITHM

As we have seen, convolutional codes can be regarded as a special class of block 
codes; hence the maximum likelihood decoder for a convolutional code, as 
specified by (4.1.1), can be implemented just as described in Chap. 2 for a block 
code of B - K + 1 bits, and will achieve a minimum block error probability for 
equiprobable data sequences. The difficulty, of course, is that efficient convolu 
tional codes have a very large block length relative to the constraint length K, as 
discussed in the preceding section; in fact, rarely is B less than several hundred, 
and often the encoded data consists of the entire message (plus final tail). Since the 
number of code vectors or code paths through the tree or trellis is 2^{b(B-K+1)}, a 
straightforward block maximum likelihood decoder utilizing one decoder element 
per code vector would appear to be absurdly complex. On the other hand, just as 
we found that the encoder can be implemented with a complexity which depends 
on K - 1 rather than on B, we shall demonstrate that the decoder complexity 
need only grow exponentially with K - 1 rather than with B. For the sake of simple 
exposition, we begin this discussion by treating the K = 3, rate = 1/2 code of 
Fig. 4.2a, and we assume transmission over a binary symmetric channel (BSC). 
Once the basic concepts are established by this example, the maximum likelihood 
decoder of minimum complexity can be easily found for any convolutional code 
and any memoryless channel. 

We recall from Sec. 2.8 that, for a BSC which transforms a channel code 
symbol "0" to "1" or "1" to "0" with probability p, the maximum likelihood 
decoder reduces to a minimum distance decoder which computes the Hamming 
distance from the error-corrupted received vector y_1, y_2, ..., y_j, ... to each pos- 




sible transmitted code vector x_1, x_2, ..., x_j, ... and decides in favor of the closest 
code vector (or its corresponding data vector). 

Referring first to the tree diagram code representation of Fig. 4.3, we see that 
this implies that we should choose that path in the tree whose code sequence 
differs in the fewest number of symbols from the received sequence. However, 
recognizing that the transmitted code branches remerge continually, we may 
equally limit our choice to the possible paths in the trellis diagram of Fig. 4.4. 
Examination of this diagram indicates that it is unnecessary to consider the entire 
received sequence of length nB (n = 2 in this case) in deciding upon earlier seg 
ments of the most likely (minimum distance) transmitted sequence, since we can 
eliminate segments of nonminimum distance paths when paths merge. In particu 
lar, immediately after the third branch we may determine which of the two paths 
leading to node or state a is more likely to have been sent. For example if 010001 is 
received, then since this sequence is at distance 2 from 000000 while it is at 
distance 3 from 111011, we may exclude the lower path into node a. For, no 
matter what the subsequent received symbols will be, they will affect the distances 
only over subsequent branches after these two paths have remerged, and con 
sequently in exactly the same way. The same can be said for pairs of paths merging 
at the other three nodes, b, c and d, after the third branch. Of the two paths 
merging at a given node, we shall refer to the minimum distance one as the 
survivor. Thus it is necessary to remember only which was the survivor (or 
minimum-distance path from the received sequence) at each node, as well as the 
value of that minimum distance. This is necessary because, at the next node level, 
we must compare the two branches merging at each node that were survivors at 
the previous level for possibly different nodes; thus the comparison at node a after 
the fourth branch is among the survivors of comparisons at nodes a and c after the 
third branch. For example, if the received sequence over the first four branches is 
01000111, the survivor at the third node level for node a is 000000 with distance 2 
and at node c it is 110101, also with distance 2. In going from the third node 
level to the fourth, the received sequence agrees precisely with the survivor from c 
but has distance 2 from the survivor from a. Hence the survivor at node a of the 
fourth level is the data sequence 1100, which produced the code sequence 11010111, 
which is at (minimum) distance 2 from the received sequence.

In this way, we may proceed through the trellis and, at each step for each 
state, preserve only one surviving path and its distance from the received 
sequence; this distance is the metric 4 for this channel. The only difficulty which 
may arise is the possibility that, in a given comparison between merging paths, the 
distances or metrics are identical. Then we may simply flip a coin to choose one, as 
was done for block codewords at equal distances from the received sequence. For 
even if we preserved both of the equally valid contenders, further received symbols 
would affect both metrics in exactly the same way and thus not further influence 



4 As defined in Sec. 2.2, the metric is the logarithm of the likelihood function. For the BSC, it is 
convenient to use the negative of this which is proportional to Hamming distance [see also (4.2.2)]. 










Figure 4.7 Example of decoding for encoder of Fig. 4.2a on BSC: decoder state metrics are encircled.



our choice. The decoding algorithm just described was first proposed by Viterbi 
[1967a]; it can perhaps be better appreciated with the aid of Fig. 4.7, which 
shows the trellis for the code just considered with the accumulated distance and 
corresponding survivors for the particular received vector 0100010000 .... 

It is also evident that, in the final decisions for the complete trellis of Fig. 4.4, 
the four possible trellis states are reduced to two and then to one in the tail of the 
code. While at first glance this appears appropriate, practically it is unacceptable 
because it requires a decoding delay of B as well as the storage, for each state, 
of path memories (i.e., the sequence of input bits leading to the most likely set 
of four states at each node level) of length B. We shall demonstrate in Secs. 4.7 
and 5.6 that performance is hardly degraded by proper truncation of delay and 
memory at a few constraint lengths. For the moment, however, we shall ignore this 
problem and be amply content with the realization that we have reduced the 
number of decoding elements per data bit (metric calculations) to an exponential 
growth in K - 1 (2^{K-1} = 4 in this case) rather than in B.
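
A minimal hard-decision version of the procedure just described is sketched below (our code, with our naming; it is not the authors' implementation). It keeps one survivor and one metric per state and reproduces the worked example above: for the received symbols 01 00 01 11, the survivor into node a after the fourth level is the data sequence 1100 at Hamming distance 2.

    # Sketch (ours): Viterbi decoding of the K = 3, rate 1/2 code of Fig. 4.2a on a BSC,
    # i.e., minimum Hamming-distance survivor selection at each node level.

    G = [(1, 1, 1), (1, 0, 1)]

    def branch(state, u):
        reg = (u, state[1], state[0])
        out = tuple(sum(r & t for r, t in zip(reg, g)) % 2 for g in G)
        return (state[1], u), out

    def viterbi_bsc(received_pairs):
        metric, path = {(0, 0): 0}, {(0, 0): []}        # start in state a = 00
        for y in received_pairs:                        # y is one received branch (n = 2 symbols)
            new_metric, new_path = {}, {}
            for s, m in metric.items():
                for u in (0, 1):
                    nxt, out = branch(s, u)
                    d = m + sum(a != b for a, b in zip(out, y))
                    if nxt not in new_metric or d < new_metric[nxt]:
                        new_metric[nxt], new_path[nxt] = d, path[s] + [u]
            metric, path = new_metric, new_path
        return metric, path

    metric, path = viterbi_bsc([(0, 1), (0, 0), (0, 1), (1, 1)])
    print(path[(0, 0)], metric[(0, 0)])                 # -> [1, 1, 0, 0] 2

For any other memoryless channel only the branch metric changes; for the binary-input AWGN channel, for example, the Hamming distance is replaced by an inner-product metric, as discussed below.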

Another description of the algorithm can be obtained from the state-diagram 
representation of Fig. 4.5. Suppose we sought that path around the directed state 
diagram, arriving at node a after the kth transition, whose code symbols are at a 
minimum distance from the received sequence. But clearly this minimum distance 
path to node a at time k can be only one of two candidates: the minimum distance 
path to node a at time k - 1 and the minimum distance path to node c at time 
k - 1. The comparison is performed by adding the new distance accumulated in 
the kth transition by each of these paths to their minimum distances (metrics) at 
time k - 1.

It thus appears that the state diagram also represents a system diagram for 
this decoder. With each node or state, we associate a storage register which 
remembers the minimum-distance path into the state after each transition, as well 
as a metric register which remembers its (minimum) distance from the received 




sequence. Furthermore, comparisons are made at each step between the two paths 
which lead into each node. Thus, one comparator must also be provided for each 
state, four in the above example. 

Generalization to convolutional codes of any constraint length K and any 
rational rate b/n is straightforward. The number of states becomes 2^{b(K-1)}, with 
each branch again containing n code symbols. The only modification required for 
b > 1 is due to the fact that 2^b paths now merge at any given level beyond the 
(K - 1)st; comparison of distance or metric must be made among 2^b rather than 
just two paths, and again only one survivor is preserved. Hence the potential 
path population is reduced by a factor 2^{-b} at each merging level, but it then 
grows again by the factor 2^b before the next branching level, thus keeping the 
number of states constant at 2^{b(K-1)}.

Generalization to arbitrary memoryless channels is almost as immediate. 
First, we note that, just as in Sec. 2.9, we may map the branch vectors (v_{i1}, v_{i2}, ..., 
v_{in}) into nonbinary signal vectors x_i (of arbitrary dimension up to n) over an 
arbitrary finite alphabet of symbols (for example, amplitudes, phases, etc.). The 
memoryless channel (including the demodulator, see Fig. 2.1) then converts these 
symbols into noisy output vectors y_i of dimension up to n. The Viterbi decoder⁵ 
is then based on the metric

    P(y | x_m) = Π_{i=1}^{B} p(y_i | x_{mi})

or equivalently its logarithm

    ln P(y | x_m) = Σ_{i=1}^{B} ln p(y_i | x_{mi})                              (4.2.1)

where x_{mi} is the code-subvector of the mth message sequence for the ith branching 
level. For the BSC just considered, this reduces to

    p(y_i | x_{mi}) = p^{d_{mi}} (1 - p)^{n - d_{mi}}

where d_{mi} is the distance between the n-dimensional received vector and the code- 
subvector for the ith branch of the mth path. The logarithm of this metric for a 
particular path is 

    Σ_{i=1}^{B} ln p(y_i | x_{mi}) = -( Σ_{i=1}^{B} d_{mi} ) ln [(1 - p)/p] + nB ln (1 - p)        (4.2.2)

Maximizing this metric is equivalent to maximizing

    -α Σ_{i=1}^{B} d_{mi} + β


5 While the exact terminology is maximum likelihood decoder using the Viterbi algorithm (VA), it 
has become common usage to call this simply the Viterbi decoder; we have chosen to adhere to 
common usage, with apologies by the first author for this breach of modesty. 




where α is a positive constant (for p < 1/2) and β is completely arbitrary. We should 
choose paths with maximum metric or, equivalently, we should minimize the 
Hamming distance as we have done. For the binary-input constant-energy 
AWGN channel, on the other hand, we have [see (2.1.15)]



f=l 



= I .y.jX.,,-11 H.2.3) 

1=1 j=l 

where y {j is thejth symbol of the ith branch, x mij is thejth symbol of the zth branch 
for the mth possible code path, and /? is a constant. Maximizing this metric is 
equivalent to maximizing the accumulated inner product of the received vector 
with the signal vector for each path. Comparisons are made exactly as for the 
BSC, except that the survivor in this case corresponds to the maximum inner 
product rather than the minimum distance. A similar argument applies to any 
memoryless channel, based on the accumulated metric given by (4.2.1). 

For maximum likelihood decoding of general trellis codes, the Viterbi algor 
ithm proceeds exactly as for convolutional codes. Thus, only the encoder of trellis 
codes differs essentially from that of convolutional codes as discussed in Sec. 4.1. 
It would of course be desirable to be able to generate code symbols using the 
simpler convolutional encoders, if the performance is the same. In Chap. 5, we 
shall find that in most applications this is in fact the case. 



4.3 DISTANCE PROPERTIES OF CONVOLUTIONAL CODES 
FOR BINARY-INPUT CHANNELS 

We found in Chap. 2 that the error probability for linear codes and binary-input 
channels can be bounded simply by (2.9.19) in terms of the weights of all code 
vectors, which correspond to the set of distances from any one code vector to all 
others. Error performance of convolutional codes, which constitute a subclass of 
linear codes, can similarly be bounded, but with considerably more explicit results 
as we shall discover below. 

The calculation of the set of code path weights, or equivalently the set of 
distances from the all-zeros path to all paths which have diverged from it, is 
readily performed with the aid of the code trellis or state diagram. For expository 
purposes, we again pursue the example of the K = 3, r = 1/2 code of Fig. 4.2a whose 
trellis and state diagram are shown in Figs. 4.4 and 4.5, respectively. We begin by 
redrawing the trellis in Fig. 4.8, labeling the branches according to their distances 
from the all-zeros path. 

Consider now all paths which merge with the all-zeros path for the first time at 
some arbitrary node j. It is seen from the diagram that, of these paths, there will be 
just one path at distance 5 from the all-zeros path, and that this path diverged 
from the latter three branches back. Similarly, there are two at distance 6 from the 




Figure 4.8 Trellis diagram labeled with distances from the all-zeros path.



all-zeros path, one which diverged four branches back and the other which 
diverged five branches back, and so forth. We note also that the input bits for the 
distance 5 path are 00...0100, and thus differ in only one input bit from those of 
the all-zeros path (which of course consists of all input zeros), while the 
input bits for the distance 6 paths are 00...001100 and 00...010100, and thus 
each differs in two input bits from the all-zeros path. The minimum distance, 
sometimes called the free distance, among all paths is thus seen to be 5. This 
implies that any pair of errors over the BSC can be corrected, for two or fewer 
errors will cause the received sequence to be at most distance 2 from the trans 
mitted (correct) sequence but it will be at least at distance 3 from any other 
possible code sequence. It appears that with enough patience the distance of all 
paths from the all-zeros (or any arbitrary) path can be determined from the trellis 
diagram. 

However, by examining instead the state diagram, we can readily obtain a 
closed-form expression whose expansion yields all distance information directly. 
We begin by labeling the branches⁶ of the state diagram of Fig. 4.5 either D^2, D, or 
D^0 = 1, where the exponent corresponds to the distance of the particular branch 
from the corresponding branch of the all-zeros path. Also we split open the node 
a = 00, since circulation around this self-loop simply corresponds to branches of 
the all-zeros path, whose distance from itself is obviously zero. The result is 
Fig. 4.9. Now, as is clear from examination of the trellis diagram, every path which 
first remerges with state a = 00 at node level j must have at some previous node 
level (possibly the first) originated at this same state a = 00. All such paths can be 
traced on the modified state diagram. Adding branch exponents, we see that path 
a b c a is at distance 5 from the correct path, paths a b d c a and a b c b c a are 
both at distance 6, and so forth, since the generating functions of the output sequence 
weights of these paths are D^5 and D^6, respectively.

Now we may evaluate the generating function of all paths merging with the 
all-zeros path at the jth node level simply by summing the generating functions of all 
the output sequences of the encoder. This generating function, which can also be 



6 The parameters D, L, and I in this section are abstract terms.






Figure 4.9 State diagram labeled with distances from the all-zeros path.



regarded as the transfer function of a signal-flow graph with unity input, can most 
directly be computed by simultaneous solution of the state equations obtained 
from Fig. 4.9

    ξ_b = D^2 + ξ_c
    ξ_c = D ξ_b + D ξ_d
    ξ_d = D ξ_b + D ξ_d
    T(D) = D^2 ξ_c                                                              (4.3.1)

where ξ_b, ξ_c, and ξ_d are dummy variables for the partial paths to the intermediate 
nodes, the input to the a node is unity, and the output is the desired generating 
function T(D). Solution of (4.3.1) for T(D) results in



    T(D) = D^5 / (1 - 2D)
         = D^5 + 2D^6 + 4D^7 + ... + 2^k D^{k+5} + ...                          (4.3.2)



This verifies our previous observation, and in fact shows that, among the paths 
which merge with the all-zeros path at a given node, there are 2^k paths at distance k + 5 
from the all-zeros path.
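
Because the state equations (4.3.1) are linear with nonnegative coefficients, the series (4.3.2) can also be produced numerically by iterating the equations with truncated polynomial arithmetic, which is sometimes more convenient than formal solution. The sketch below (ours, not from the text) recovers the coefficients 1, 2, 4, ... of D^5, D^6, D^7, ... for the example.

    # Sketch (ours): expand T(D) of (4.3.2) by iterating the state equations (4.3.1),
    # with every polynomial in D truncated at degree MAXDEG.  Polynomials are dicts {degree: coeff}.

    MAXDEG = 12

    def shift(p, k):                                    # multiply by D^k
        return {d + k: c for d, c in p.items() if d + k <= MAXDEG}

    def add(p, q):
        out = dict(p)
        for d, c in q.items():
            out[d] = out.get(d, 0) + c
        return out

    xb, xc, xd = {}, {}, {}
    for _ in range(4 * MAXDEG):                         # enough passes for all terms up to MAXDEG
        xb = add({2: 1}, xc)                            # xi_b = D^2 + xi_c
        xc = add(shift(xb, 1), shift(xd, 1))            # xi_c = D xi_b + D xi_d
        xd = add(shift(xb, 1), shift(xd, 1))            # xi_d = D xi_b + D xi_d
    T = shift(xc, 2)                                    # T(D) = D^2 xi_c
    print(sorted(T.items()))                            # [(5, 1), (6, 2), (7, 4), (8, 8), ...]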

Of course, (4.3.2) holds for an infinitely long code sequence; if we are dealing 
with the jth node level, we must truncate the series at some point. This is most 
easily done by considering the additional information indicated in the modified 
state diagram of Fig. 4.10. The L terms will be used to determine the length of a 
given path; since each branch has an L, the exponent of the L factor will be 
augmented by one every time a branch is passed through. The I term is included 
only if that branch transition was caused by an input data "1," corresponding to a 
dotted branch in the trellis diagram. Rewriting the state equations (4.3.1), includ- 
ing now the factors in I and L shown in Fig. 4.10, and solving for the augmented 







Figure 4.10 State diagram labeled with distance, length, and number of input "1"s.



generating function yields

    T(D, L, I) = D^5 L^3 I / [1 - DL(1 + L)I]
               = D^5 L^3 I + D^6 L^4 (1 + L) I^2 + D^7 L^5 (1 + L)^2 I^3 + ...        (4.3.3)

Thus we have verified that of the two distance 6 paths, one is of length 4 and the 
other is of length 5, and both differ in two input bits from the all-zeros. Thus, for 
example, if the all-zeros was the correct path and the noise causes us to choose one 
of these incorrect paths, two bit errors will be made. Also, of the distance 7 paths, 
one is of length 5, two are of length 6, and one is of length 7; all four paths 
correspond to input sequences with three "1"s. If we are interested in the jth node 
level, clearly we should truncate the series such that no terms with powers of L greater 
than L^j are included.

We have thus fully determined the properties of all code paths of this simple 
convolutional code. The same techniques can obviously be applied to any binary- 
symbol code of arbitrary constraint length and arbitrary rate b/n. However, for 
b > 1, each state equation of the type of (4.3.1) is a relationship among at most 
2^b + 1 node variables. In general, there will be 2^{b(K-1)} state variables and as many 
equations. (For further examples, see Probs. 4.6, 4.17, and 4.18.) In the next two 
sections we shall demonstrate how the generating function can be used to bound 
directly the error probability of a Viterbi decoder operating on any convolutional 
code on a binary-input, memoryless channel. 



4.4 PERFORMANCE BOUNDS FOR SPECIFIC 
CONVOLUTIONAL CODES ON BINARY-INPUT, OUTPUT- 
SYMMETRIC MEMORYLESS CHANNELS 

It should be reasonably evident at this point that the block length nB of a 
convolutional code is essentially irrelevant, for both the encoder and decoder 
complexity and operation depend only on the constraint length K, the code rate, 






Figure 4.11 Example of error events.



and channel parameters; furthermore, the performance is a function of relative 
distances among signals, which may be determined from the code state diagram, 
whose structure and complexity depends strongly on the constraint length but not 
at all on the block length. Thus it would appear that block error probability is not 
a reasonable performance measure, particularly when, as is often the case, an 
entire message is convolutionally encoded as a single block, whereas in block 
coding the same message would be encoded into many smaller blocks. Ultimately, 
the most useful measure is bit error probability P b which, as initially defined in 
Sec. 2.11, is the expected number of bit errors in a given sequence of received bits 
normalized by the total number of bits in the sequence. 

While our ultimate goal is to upper-bound P b , we consider initially a more 
readily determined performance measure, the error probability per node, which we 
denote P_e. In Fig. 4.11 we show (as solid lines) two paths through the code trellis. 
Without loss of essential generality, we take the upper all-zeros path to be correct, 
and the lower path to be that chosen by the maximum likelihood decoder. For this 
to occur, the correct path metric increments over the unmerged segments must be 
lower than those of the incorrect (lower solid line) path shown. We shall refer to 
these error events as node errors at nodes i, j, and k. On the other hand, the dotted 
paths which diverge from the correct path at nodes j and k may also have higher 
metric increments than the correct path over the unmerged segments, and yet not 
be ultimately selected because their accumulated metrics are smaller than those of 
the lower solid paths. We may conclude from this exposition that a necessary, but 
not sufficient, condition for a node error to occur at node j is that the metric of an 
incorrect path diverging from the correct path at this node accumulates higher 
metric increments than the correct path over the unmerged segment. 

We may therefore upper-bound the probability of node error at node j by the 
probability that any path diverging from the correct path at node j accumulates 
higher total metric over the unmerged span of the path

    P_e(j) ≤ Pr { ∪_{x'_j ∈ X'(j)} [ΔM(x'_j, x_j) ≥ 0] }                        (4.4.1)



where x'_j is an incorrect path diverging from the correct path at node j, X'(j) is the 
set of all such paths, known as the incorrect subset for node j, and ΔM(x'_j, x_j) is the 
difference between the metric increment of this path and that of the correct path x_j 
over the unmerged segment.

Employing the union bound, we obtain the more convenient, although looser, 
form

    P_e(j) ≤ Σ_{x'_j ∈ X'(j)} Pr [ΔM(x'_j, x_j) ≥ 0]                            (4.4.2)




But each term of this summation is the pairwise error probability for two code 
vectors over the unmerged segment. For a binary-input channel, this is readily 
bounded as a function of the distance between code vectors over this segment. For, 
if the total Hamming distance between code vectors x_j and x'_j (over their un- 
merged segment) is d(x_j, x'_j) = d, we have from (2.9.19) that, for an output- 
symmetric channel, the pairwise error probability is bounded by the 
Bhattacharyya bound

    P_d ≤ exp { d ln Σ_y √[p_0(y) p_1(y)] }                                     (4.4.3)



where p_i(y) is the conditional (channel transition) probability of output y given 
that the input symbol was i (i = 0, 1). Equivalently, we may express this bound in 
the more convenient form

    P_d ≤ Z^d                                                                   (4.4.4)

where

    Z ≡ Σ_y √[p_0(y) p_1(y)]                                                    (3.4.12)

Thus given that there are a(d) incorrect paths which are at Hamming distance d 
from the correct path over the unmerged segment, we obtain from (4.4.1) through 
(4.4.4)

    P_e(j) ≤ Σ_{d=d_f}^{∞} Pr { error caused by any one of a(d) incorrect paths at distance d }

          ≤ Σ_{d=d_f}^{∞} a(d) P_d

          ≤ Σ_{d=d_f}^{∞} a(d) Z^d                                              (4.4.5)

where d_f is the minimum distance of any path from the correct path, which we 
called the free distance in the last section. Clearly (4.4.5) is a union-Bhattacharyya 
bound similar to those derived for block codes in Chap. 2. 

We also found in the last section that the set of all distances from any one path 
to all other paths could be found from the generating function T(D). For demon 
stration purposes, let us consider again the code example of Figs. 4.2a, 4.4, and 4.5. 
We found then that

    T(D) = D^5/(1 - 2D) = Σ_{d=5}^{∞} 2^{d-5} D^d



Thus in this case d_f = 5 and a(d) = 2^{d-d_f}. The same argument can be applied to 
any binary code whose generating function we can determine by the techniques of 




the last section. Thus we have in general that 



    T(D) = Σ_{d=d_f}^{∞} a(d) D^d                                               (4.4.6)

and it then follows from (4.4.5) and (4.4.6) that

    P_e(j) ≤ T(D) |_{D=Z}                                                       (4.4.7)



We note also that this node error probability bound for a fixed convolutional code 
is the same for all nodes when B = ∞, and that this is also an upper bound for 
finite B.

Turning now to the bit error probability, we note that the expected number of 
bit errors, caused by any incorrect path which diverges from the correct path at 
node j, can be bounded by weighting each term of the union bound by the number 
of bit errors which occur on that incorrect path. Taking the all-zeros data path to 
be the correct path (without loss of generality on output-symmetric channels), this 
then corresponds to the number of " 1 "s in the data sequence over the unmerged 
segment. Thus the bound on the expected number of bit errors caused by an 
incorrect path diverging at node j is 

    E[n_b(j)] ≤ Σ_{i=1}^{∞} Σ_{d=d_f}^{∞} i a(d, i) P_d ≤ Σ_{i=1}^{∞} Σ_{d=d_f}^{∞} i a(d, i) Z^d        (4.4.8)

where a(d, i) is the number of paths diverging from the all-zeros path (at node j) at 
distance d and with i "1"s in their data sequences over the unmerged segment. But 
the coefficients a(d, i) are also the coefficients of the augmented generating func- 
tion T(D, I) derived in the last section. For the running example, we have from 
(4.3.3) (with L = 1 since we are not interested in path lengths)

    T(D, I) = D^5 I / (1 - 2DI)
            = D^5 I + 2D^6 I^2 + 4D^7 I^3 + ... + 2^k D^{k+5} I^{k+1} + ...
            = Σ_{d=5}^{∞} 2^{d-5} D^d I^{d-4}

and hence

    a(d, i) = 2^{d-5}        if i = d - 4 and d ≥ 5
            = 0              otherwise

In this case then,

    E[n_b(j)] ≤ Σ_{d=5}^{∞} (d - 4) 2^{d-5} Z^d




In general it should be clear that the augmented generating function can be 
expanded in the form

    T(D, I) = Σ_{i=1}^{∞} Σ_{d=d_f}^{∞} a(d, i) D^d I^i                         (4.4.9)

whose derivative at I = 1 is

    ∂T(D, I)/∂I |_{I=1} = Σ_{i=1}^{∞} Σ_{d=d_f}^{∞} i a(d, i) D^d               (4.4.10)

Consequently, comparing (4.4.8) and (4.4.10), we have

    E[n_b(j)] ≤ ∂T(D, I)/∂I |_{I=1, D=Z}                                        (4.4.11)



This is an upper bound on the expected number of bit errors caused by an 
incorrect path diverging at any node j. 

For a rate 1/n code, each node (branch) represents one bit of information into 
the encoder or decoder. Thus the bit error probability, defined as the expected 
number of bit errors per bit decoded, is bounded by

    P_b(j) = E{n_b(j)} ≤ ∂T(D, I)/∂I |_{I=1, D=Z}                               (4.4.12)

as shown in (4.4.11). For a rate b/n code, one branch corresponds to b information 
bits. Thus in general

    P_b = E{n_b(j)}/b ≤ (1/b) ∂T(D, I)/∂I |_{I=1, D=Z}                          (4.4.13)



where Z is given by (3.4.12). 
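
For the running K = 3, rate 1/2 example, a(d, i) is nonzero only for i = d - 4, so (4.4.12) reduces to P_b ≤ Σ_{d=5}^{∞} (d - 4) 2^{d-5} Z^d = Z^5/(1 - 2Z)^2 whenever 2Z < 1. The sketch below (ours, not part of the text) evaluates this bound on a BSC, using Z = Σ_y √[p_0(y) p_1(y)] = √[4p(1 - p)]; both the closed form and a partial sum of the series are computed as a cross-check.

    # Sketch (ours): numerical bit error bound (4.4.12) for the code of Fig. 4.2a on a BSC
    # with crossover probability p.

    from math import sqrt

    def pb_bound_bsc(p, terms=200):
        Z = sqrt(4 * p * (1 - p))                       # Bhattacharyya parameter of the BSC
        closed_form = Z**5 / (1 - 2 * Z)**2             # dT(D, I)/dI at I = 1, D = Z, for this code
        partial_sum = sum((d - 4) * 2**(d - 5) * Z**d for d in range(5, 5 + terms))
        return closed_form, partial_sum

    for p in (0.001, 0.01):
        print(p, pb_bound_bsc(p))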



4.5 SPECIAL CASES AND EXAMPLES 

It is somewhat instructive to consider the BSC and the binary-input AWGN 
channel, special cases of the channels considered in the last section. Clearly the 
union-Bhattacharyya bounds apply with [see (2.11.6) and (2.11.7) and (3.4.15) 
and (3.4.17)] 



    Z_{BSC} = √[4p(1 - p)]                                                      (4.5.1)

and

    Z_{AWGN} = e^{-ℰ_s/N_0}                                                     (4.5.2)



We note also, as was already observed in Sec. 2.11, that if the AWGN channel is 
converted to the BSC by hard quantization then, for ℰ_s/N_0 ≪ 1, p ≈ 1/2 - √[ℰ_s/(πN_0)], 







in which case

    -ln Z = -(1/2) ln [1 - 4ℰ_s/(πN_0)] ≈ (2/π)(ℰ_s/N_0)

for a loss of 2/π, or approximately 2 dB, in energy-to-noise ratio.
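As a quick numerical check, 10 log_10 (π/2) ≈ 1.96 dB, which is the approximately 2 dB loss just quoted.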

However, for these two special channels, tighter bounds can be found by 
obtaining the exact pairwise error probabilities rather than their Bhattacharyya 
bounds. For the BSC, we recall from (2.10.14) that, for unmerged segments at 
distance d from the correct path⁷

    P_d = Σ_{k=(d+1)/2}^{d} (d choose k) p^k (1 - p)^{d-k}                                                  d odd

    P_d = (1/2)(d choose d/2) p^{d/2} (1 - p)^{d/2} + Σ_{k=d/2+1}^{d} (d choose k) p^k (1 - p)^{d-k}        d even

                                                                                (4.5.3)



This can be used in the middle expressions of inequalities (4.4.5) and (4.4.8) to 
obtain tighter results than (4.4.7) and (4.4.12) (see also Prob. 4.10). 

Similarly, for the binary-input AWGN channel, we have from (2.3.10) that the 
pairwise error probability for code vectors at distance d is

    P_d = Q(√(2dℰ_s/N_0))                                                       (4.5.4)

While we may substitute this in the above expressions in place of Z^d = e^{-dℰ_s/N_0}, a 
more elegant and useful expression results from noting that (Prob. 4.8)

    Q(√(x + y)) ≤ Q(√x) e^{-y/2}        x ≥ 0, y ≥ 0                            (4.5.5)

Since d ≥ d_f, we may bound (4.5.4) by

    P_d ≤ Q(√(2d_f ℰ_s/N_0)) exp [-(d - d_f) ℰ_s/N_0]                           (4.5.6)



which is tighter than the Bhattacharyya bound. Substituting in the middle terms 
of (4.4.5) and (4.4.8), then using (4.4.6) and (4.4.10), we obtain

    P_e(j) ≤ Q(√(2d_f ℰ_s/N_0)) e^{d_f ℰ_s/N_0} T(D) |_{D = e^{-ℰ_s/N_0}}       (4.5.7)



7 Ties are assumed to be randomly resolved. Note that unlike the block code case for which 
(2.10.14) holds, all probabilities here are for pairwise errors. 




and

    P_b ≤ (1/b) Q(√(2d_f ℰ_s/N_0)) e^{d_f ℰ_s/N_0} ∂T(D, I)/∂I |_{I=1, D=e^{-ℰ_s/N_0}}        (4.5.8)

The last bound has been used very effectively to obtain tight upper bounds for 
the bit error probability on the binary-input AWGN channel for a variety of 
convolutional codes of constraint lengths less than 10. For, while the computation 
of T(D, I) for a constraint length K code would appear to involve the analytical 
solution of 2^{b(K-1)} simultaneous algebraic equations (Sec. 4.3), the computation of 
T(D, I) for fixed values of D = Z and I becomes merely a numerical matrix 
inversion. Also, since T(D, I) is a polynomial in I with nonnegative coefficients and 
has a nondecreasing first derivative for positive arguments, the derivative at I = 1 
can be upper-bounded numerically by computing instead the normalized first 
difference. Thus

    ∂T(D, I)/∂I |_{I=1} ≤ [T(D, 1 + ε) - T(D, 1)]/ε        ε > 0                (4.5.9)

Even the numerical matrix inversion involved in calculating T(D, I) for fixed D 
and I is greatly simplified by the fact that the diagonal terms of the state equations 
matrix [see (4.3.1) and Probs. 4.17 and 4.18] dominate all other terms in the same 
row. As a result, the inverse can be computed as a rapidly convergent series of 
powers of the given matrix (see Prob. 4.18). The results for optimum rate 1/2 codes⁸ 
of constraint lengths 3 through 8 are shown in Fig. 4.12. To assess the tightness of 
these bounds we show also in the figure the results of simulations of the same 
codes, but with output quantization to eight levels. For the low error probability 
region (ℰ_b/N_0 > 5 dB), it appears that the upper bounds lie slightly below the 
simulation. The simulations should, in fact, lie above the exact curve because the 
quantization loss is on the order of 0.25 dB (see Sec. 2.8). This, in fact, appears to 
be the approximate separation between simulation and upper bounds, attesting to 
the accuracy of the bounds.
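
The numerical procedure described in this paragraph can be sketched as follows (our code, illustrating the idea on the small example of Fig. 4.10 with L = 1 rather than on a long-constraint-length code). For fixed numbers D = Z and I, the state equations form a small linear system; solving it at I = 1 and at I = 1 + ε and forming the normalized first difference gives an upper bound on the derivative, which for this example can be checked against the closed form D^5/(1 - 2D)^2.

    # Sketch (ours): evaluate T(D, I) numerically from the state equations and bound
    # dT/dI at I = 1 by the normalized first difference.

    import numpy as np

    def T(D, I):
        # xi_b           - I*xi_c            = D^2 * I
        # -D*xi_b        + xi_c    - D*xi_d  = 0
        # -D*I*xi_b                + (1 - D*I)*xi_d = 0
        A = np.array([[1.0,    -I,   0.0],
                      [-D,     1.0,  -D],
                      [-D * I, 0.0,  1.0 - D * I]])
        rhs = np.array([D**2 * I, 0.0, 0.0])
        xi_b, xi_c, xi_d = np.linalg.solve(A, rhs)
        return D**2 * xi_c

    D, eps = np.exp(-1.5), 1e-6                         # e.g. D = Z = exp(-Es/N0) with Es/N0 = 1.5
    first_difference = (T(D, 1 + eps) - T(D, 1)) / eps  # upper-bounds dT/dI at I = 1
    print(first_difference, D**5 / (1 - 2 * D)**2)      # the two values nearly coincide here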

In all codes considered thus far, the generating function sequence

    T(D) = Σ_{d=d_f}^{∞} a(d) D^d                                               (4.5.10)

was assumed to converge for any value of D less than unity. That this will not 
always be true is demonstrated by the example of Fig. 4.13. For this code, the self 



8 The codes were selected on the basis of maximum free distance and minimum number of bit 
errors caused by incorrect paths at the free distance, i.e., minimum a(df, i) (Odenwalder [1970]). 



(a) K = 3, 5, 7
(b) K = 4, 6, 8

Figure 4.12 P_b as a function of ℰ_b/N_0 for Viterbi decoding of rate 1/2 codes: simulations with eight- 
level quantization and 32-bit path memory (solid); upper bounds for unquantized AWGN (dotted). 
(Courtesy of Heller and Jacobs [1971].)











Figure 4.13 Encoder displaying catastrophic error propagation and its state diagram. 



loop at state d does not increase distance, so that the path abddd . . . ddca will be at 
distance 6 from the correct path no matter how many times it circulates about this 
self-loop. Thus it is possible on a BSC, for example, for a fixed finite number of 
channel errors to cause an arbitrarily large number of decoded bit errors. To 
illustrate in this case, for example, if the correct path is the all-zeros and the BSC 
produces two errors in the first branch, no errors in the next B branches, and two 
errors in the (B + 1)st branch, B - 1 decoded bit errors will occur for an arbi- 
trarily large B. For obvious reasons, such a code, for which a finite number of 
channel errors (or noise) can cause an infinite number of decoded bit errors, is 
called catastrophic. 

It is clear from the above example that a convolutional code is catastrophic 
if and only if, for some directed closed loop in the state diagram, all branches 
have zero weight; that is, the closed-loop path generating function is D^0 = 1. An even 
more useful method to ensure the avoidance of a catastrophic code is to establish 
necessary and sufficient conditions in terms of the code-generator sequences g_k. 
For rate 1/n codes, Massey and Sain [1968] have obtained such conditions in terms 
of the code generator polynomials, which are defined in terms of the generator 




sequences as⁹

    g_k(z) = 1 + g_{1,k} z + g_{2,k} z^2 + ... + g_{K-1,k} z^{K-1}        k = 1, 2, ..., n

In terms of these polynomials, the theorem of Massey and Sain (see Prob. 4.11) 
states that a fixed convolutional code is catastrophic if and only if all generator 
polynomials have a common polynomial factor (of degree at least one). Also of 
interest is the question of the relative fraction of catastrophic codes in the en- 
semble of all convolutional codes of a given rate and constraint length. Forney 
[1970] and Rosenberg [1971] have shown that, for a rate 1/n code, this fraction is 
1/(2^n - 1), independent of constraint length (see Prob. 4.12). Hence, generally, the 
search for a good code is not seriously encumbered by the catastrophic codes, 
which are relatively sparse and easy to distinguish.
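
The Massey-Sain condition can be tested mechanically by computing the greatest common divisor of the generator polynomials over GF(2). In the sketch below (ours, not from the text), each polynomial is stored as an integer whose bit i is the coefficient of z^i. The generators 1 + z + z^2 and 1 + z^2 of the code of Fig. 4.2a are relatively prime, so that code is not catastrophic; the pair 1 + z and 1 + z^2 = (1 + z)^2, used here purely as an illustration (it is not claimed to be the code of Fig. 4.13), shares the factor 1 + z and would be.

    # Sketch (ours): test for a common factor among the generator polynomials of a rate 1/n code.

    from functools import reduce

    def gf2_mod(a, b):                                  # remainder of a divided by b over GF(2)
        while a and a.bit_length() >= b.bit_length():
            a ^= b << (a.bit_length() - b.bit_length())
        return a

    def gf2_gcd(a, b):
        while b:
            a, b = b, gf2_mod(a, b)
        return a

    def is_catastrophic(generators):
        g = reduce(gf2_gcd, generators)
        return g.bit_length() > 1                       # common factor of degree >= 1

    print(is_catastrophic([0b111, 0b101]))              # Fig. 4.2a generators: False
    print(is_catastrophic([0b011, 0b101]))              # 1+z and (1+z)^2: True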

One subclass of convolutional codes that are not catastrophic is that of the 
systematic convolutional codes. As with systematic block codes, systematic convo- 

9 In this context, z is taken to be an abstract variable, not a real number. The lowest order 
coefficient can always be taken as one without loss of optimality or essential generality. 

Table 4.1 Maximum free distance 
of noncatastrophic codes

Rate r = 1/2

        Systematic†     Nonsystematic
K       d_f             d_f

2       3               3
3       4               5
4       4               6
5       5               7
6       6               8
7       6               10
8       7               10

Rate r = 1/3

        Systematic†     Nonsystematic
K       d_f             d_f

2       5               5
3       6               8
4       8               10
5       9               12
6       10              13
7       12              15
8       12              16

† With feed-forward logic.




lutional codes have the property that the data symbols are transmitted unchanged 
among the coded symbols. For a systematic rate b/n convolutional code, in each 
branch the first b symbols are data symbols followed by n - b parity or coded 
symbols. The coded symbols are generated just as for nonsystematic codes, and 
consequently depend on the last Kb data symbols where Kb is the constraint 
length. Since data symbols appear directly on each branch in the state or trellis 
diagram, for systematic convolutional codes it is impossible to have a self-loop in 
which distance to the all-zeros path does not increase, and therefore these codes 
are not catastrophic. 

In Sec. 5.7, we show that systematic feed-forward convolutional codes do not 
perform as well as nonsystematic convolutional codes.¹⁰ There we show that, for 
asymptotically large K, the performance of a systematic code of constraint length 
K is approximately the same as that of a nonsystematic code of constraint length 
K(1 - r) where r = b/n. Thus for rate r = 1/2 and very large K, systematic codes 
have about the performance of nonsystematic codes of half the constraint length, 
while requiring exactly the same optimal decoder complexity.

Another indication of the relative weakness of systematic convolutional codes 
is shown by the free distance, d_f, which is the exponent of D in the leading term of 
the generating function T(D). Table 4.1 shows the maximum free distance achiev- 
able with binary feed-forward systematic codes and nonsystematic codes that are 
not catastrophic. We show this for various constraint lengths K and rates r. As 
indicated by the results of Sec. 5.7, for large K the differences are even greater.



4.6 STRUCTURE OF RATE 1/n CODES AND ORTHOGONAL 
CONVOLUTIONAL CODES 

While the weight or distance properties of the paths of a convolutional code 
naturally depend on the encoder generator sequences, both the unmerged path 
lengths and the number of " 1 "s in the data sequence for a particular code path 
are functions only of the constraint length, K, and rate numerator, b. Thus for 
example, for any rate 1/n, constraint length 3 code [see (4.3.3) with D = 1]

    T_3(L, I) = I L^3 / [1 - IL(1 + L)]                                         (4.6.1)
To obtain a general formula for the generating function T_K(L, I) of any rate 1/n 
code of constraint length K, we may proceed as follows. Consider the state just 
prior to the terminal state in the state diagram of a constraint length K code (see 
Fig. 4.10 for K = 3). The (K - 1)-dimensional vector for this state is 10 ... 0. 
Suppose this were the terminal state and that when a path reached this state it was 
considered absorbed (or remerged) without the possibility to go on to either of the 



10 It can be shown (Forney [1970]) that for any nonsystematic convolutional code, there is an 
equivalent systematic code in which the parity symbols are generated with linear feedback logic. 




states 0...0 or 0...01. Then the initial input into the encoder register could be 
ignored, and we would have a code of constraint length K - 1. It follows that the 
generating function of all paths from the origin to this next-to-terminal state must 
be T_{K-1}(L, I). Now, if an additional "0" enters when the encoder is in this state, 
the terminal state is reached. If, on the other hand, a "1" enters, we are effectively 
back to the situation of initial entry into the state diagram; that is, the "1" takes 
us to state 00 ... 1 with the branch from 10 ... 0 playing the same role as that from 
the initial state. This implies that the recursion formula for the generating function 
T_K(L, I) is

    T_K(L, I) = L T_{K-1}(L, I) + T_{K-1}(L, I) T_K(L, I)        K ≥ 2          (4.6.2)

In words, to arrive at the terminal state, since we must first pass through the state 
100 ... 0, the first term on the right corresponds to a "0" entering when the 
encoder is in this state, in which case the terminal state is reached with an addition 
of one branch length (with data zero); the second term on the right corresponds 
to an input "1," in which case we may treat the state 100 ... 0 as if it were the 
initial state and the terminal state can only be reached by following one of 
the paths of T_K(L, I). From (4.6.2), we immediately obtain

    T_K(L, I) = L T_{K-1}(L, I) / [1 - T_{K-1}(L, I)]        K ≥ 2              (4.6.3)

Trivially, for K = 1

    T_1(L, I) = LI                                                              (4.6.4)

Then the solution of (4.6.3) is obtained by induction as

    T_K(L, I) = I L^K (1 - L) / {1 - L[1 + I(1 - L^{K-1})]}        K ≥ 2        (4.6.5)

If only the path length structure is of interest, we may restrict attention to

    T_K(L) = T_K(L, 1) = L^K (1 - L) / (1 - 2L + L^K)        K ≥ 2              (4.6.6)
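
The recursion (4.6.3) lends itself directly to series computation. The sketch below (ours, not from the text) builds T_K(L) = T_K(L, 1) as a truncated power series in L; for K = 3, (4.6.6) reduces to L^3(1 - L)/(1 - 2L + L^3) = L^3/(1 - L - L^2), whose coefficients 1, 1, 2, 3, 5, 8, ... the program reproduces.

    # Sketch (ours): expand T_K(L) = T_K(L, 1) from the recursion (4.6.3),
    # T_K = L T_{K-1} / (1 - T_{K-1}), with polynomials in L truncated at degree MAXDEG.

    MAXDEG = 10

    def pmul(p, q):                                     # product of two coefficient lists
        out = [0.0] * (MAXDEG + 1)
        for i, a in enumerate(p):
            if a:
                for j, b in enumerate(q):
                    if i + j <= MAXDEG:
                        out[i + j] += a * b
        return out

    def series_inv_one_minus(p):                        # 1/(1 - p) as a series; p has no constant term
        out = [1.0] + [0.0] * MAXDEG
        term = [1.0] + [0.0] * MAXDEG
        for _ in range(MAXDEG):
            term = pmul(term, p)
            out = [x + y for x, y in zip(out, term)]
        return out

    L = [0.0, 1.0] + [0.0] * (MAXDEG - 1)               # the polynomial L itself
    T = L[:]                                            # T_1(L, 1) = L, from (4.6.4)
    for K in range(2, 4):                               # apply the recursion for K = 2, 3
        T = pmul(pmul(L, T), series_inv_one_minus(T))
    print([round(c) for c in T])                        # [0, 0, 0, 1, 1, 2, 3, 5, 8, 13, 21]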

We shall utilize these results in the next chapter when we treat convolutional 
code ensembles. We conclude this discussion by considering a class of codes 
whose distance or weight properties are the same for all branches, and con 
sequently whose performance depends only on the path structure. Such a class of 
codes is the orthogonal, rate 2^{-K} convolutional codes generated by the encoder of 
Fig. 4.14. The block orthogonal encoder generates one of 2^K orthogonal binary 
sequences of dimension n = 2^K (as described in Sec. 2.5). Hence the weight of any 
branch not on the all-zeros data path is exactly 2^{K-1} = n/2. Thus for this class of 






Figure 4.14 Convolutional orthogonal encoder: constraint length K, rate r = 2^{-K}.

codes, since each branch has weight n/2, T_K(D, I) is obtained from T_K(L, I) by 
replacing L by D^{n/2} everywhere, and thus

    T_K(D, I) = I D^{Kn/2} (1 - D^{n/2}) / {1 - D^{n/2} [1 + I(1 - D^{(K-1)n/2})]}        (4.6.7)

    T_K(D) = D^{Kn/2} (1 - D^{n/2}) / (1 - 2D^{n/2} + D^{Kn/2})                           (4.6.8)



Consequently, employing (4.4.7) and (4.4.12), we obtain the node and bit error 
probabilities for the AWGN channel, for which Z = e^{-ℰ_s/N_0}, as

    P_e ≤ T_K(D) |_{D=Z} = Z^{Kn/2} (1 - Z^{n/2}) / (1 - 2Z^{n/2} + Z^{Kn/2})
                         < Z^{Kn/2} / (1 - 2Z^{n/2})                                      (4.6.9)

    P_b ≤ ∂T_K(D, I)/∂I |_{I=1, D=Z} = Z^{Kn/2} (1 - Z^{n/2})^2 / (1 - 2Z^{n/2} + Z^{Kn/2})^2
                                     < Z^{Kn/2} / (1 - 2Z^{n/2})^2                        (4.6.10)

Recognizing that, since for a rate 1/n code there are n code symbols/bit, nℰ_s = ℰ_b, 
we find

    P_e < e^{-Kℰ_b/2N_0} / (1 - 2e^{-ℰ_b/2N_0})

    P_b < e^{-Kℰ_b/2N_0} / (1 - 2e^{-ℰ_b/2N_0})^2        ℰ_b/N_0 > 2 ln 2                 (4.6.11)



We recall also from Sec. 2.5 [Eqs. (2.5.13) and (2.5.18)] that

    ℰ_b/(N_0 ln 2) = C_T/R_T                                                    (4.6.12)

where R_T, the transmission rate, and C_T, the capacity, are in nats per second, and 
the transmission time per bit is

    T_b = ln 2/R_T    s/bit

Thus (4.6.11) becomes

    P_e < 2^{-K C_T/(2R_T)} / [1 - 2^{-(C_T/(2R_T) - 1)}]          R_T < C_T/2            (4.6.13)

    P_b < 2^{-K C_T/(2R_T)} / [1 - 2^{-(C_T/(2R_T) - 1)}]^2        R_T < C_T/2            (4.6.14)

For orthogonal block codes we were able to show in Sec. 2.5 that the error 
probability decreases exponentially with block length for all R T < C T . We recall, 
however, that to obtain that result we employed a more refined bounding 
technique than the union bound; we now use a similar approach for convolutional 
codes to demonstrate an exponential bound in terms of constraint length for all 
rates up to capacity. 

We begin with node error probability, and recall that an error can occur at 
node j only if an incorrect path diverging at this node from the correct path has 
higher metric upon remerging. From the generating function T K (L) of (4.6.6), we 
can determine all unmerged path lengths, and from this the totality of diverging 
paths which remerge again a number of branches ahead. This formula can be 
somewhat simplified, if we bound (4.6.6), in the sense of counting for every L 
more paths than actually exist, as follows

    T_K(L) ≤ L^K / (1 - 2L) = Σ_{k=0}^{∞} 2^k L^{K+k}                           (4.6.15)

Thus, of the totality of paths diverging at a given node, there are no more than 2^k 
incorrect paths which merge after K + k branches; as we shall find, this overesti- 
mate of the number of paths (by approximately double) has negligible asymptotic 
effect. Now, for an orthogonal convolutional code, all paths which are unmerged 
from the correct path for K + k branches have code vectors which are orthogonal 
to it over this entire unmerged segment. The node error probability can be 
bounded by

    P_e ≤ Σ_{k=0}^{∞} Π_k                                                       (4.6.16)

where

    Π_k = Pr { error caused by any one of no more than 2^k incorrect paths unmerged over K + k branches }

This, of course, is a union bound, but rather than summing over all individual 
path error events, as in Sec. 4.4, we treat as a single event all errors caused by 
paths unmerged for the same number of branches. Now instead of bounding the 
probability of these events by a union bound over their members, as was done 
before, we employ a Gallager bound. In fact, we may apply precisely the deriva 
tion of Sec. 2.5, based on the Gallager bound (2.4.8), to the set of up to 2 k incorrect 




paths, unmerged with, and hence orthogonal to, the correct path over K + k 
branches. Noting that this argument does not require all code vectors to be 
mutually orthogonal, but only that each incorrect code vector be pairwise ortho 
gonal to the correct code vector, we thus have from (2.5.12), for orthogonal codes 
on the AWGN channel 



    Π_k ≤ 2^{kρ} exp [ -(K + k)(ℰ_b/N_0) ρ/(1 + ρ) ]        0 ≤ ρ ≤ 1           (4.6.17)



since the energy over this segment is (K + k) times the energy per branch, which 
equals ℰ_b for a rate 1/n code. Substituting (4.6.17) into (4.6.16), then using (4.6.12), 
yields



    P_e < exp [-K(ℰ_b/N_0) ρ/(1 + ρ)] Σ_{k=0}^{∞} exp {-kρ ln 2 [C_T/(R_T(1 + ρ)) - 1]}

        = exp [-K(ℰ_b/N_0) ρ/(1 + ρ)] / {1 - 2^{-ρ[C_T/(R_T(1 + ρ)) - 1]}}        0 ≤ ρ ≤ 1        (4.6.18)



Clearly if we take ρ = 1, we obtain the bound (4.6.13). On the other hand, taking 
for some 0 < ε ≪ 1

    ρ = (C_T/R_T)(1 - ε) - 1                                                    (4.6.19)

we have

    P_e < 2^{-K[C_T/R_T - 1/(1-ε)]} / {1 - 2^{-ε[C_T/R_T - 1/(1-ε)]}}        C_T/2 < R_T < C_T(1 - ε)        (4.6.20)



Thus we have an exponential decrease in K for all R_T < C_T(1 - ε). For asympto- 
tically large K, the denominator becomes insignificant so that we can let ε -> 0. 
To bound the bit error probability requires little more effort if we recognize 
that a node error due to an incorrect path which has been unmerged over K + k 
branches can cause at most k + 1 bit errors; for in order to merge with the correct 
path, the last K - 1 data bits for the incorrect path must coincide with those of the 
correct path. It follows then that the bit error probability is bounded by a summa- 
tion of the form of (4.6.16) with each term weighted by k + 1. Thus

    P_b ≤ Σ_{k=0}^{∞} (k + 1) Π_k



Now using (4.6.17), and recognizing that

    Σ_{k=0}^{∞} (k + 1) x^k = (1 - x)^{-2}        0 ≤ x < 1



we have

    P_b < exp [-K(ℰ_b/N_0) ρ/(1 + ρ)] / {1 - 2^{-ρ[C_T/(R_T(1 + ρ)) - 1]}}^2        0 ≤ ρ ≤ 1        (4.6.21)

Finally, applying (4.6.19) we obtain, analogously to (4.6.20)

    P_b < 2^{-K[C_T/R_T - 1/(1-ε)]} / {1 - 2^{-ε[C_T/R_T - 1/(1-ε)]}}^2        C_T/2 < R_T < C_T(1 - ε)        (4.6.22)

Combining (4.6.13), (4.6.14), (4.6.20), and (4.6.22), we obtain

    P_b < 2^{-K E_C(R_T)/R_T} / δ(R_T)        0 < R_T < C_T(1 - ε),    T_b = ln 2/R_T        (4.6.23)

where δ(R_T) > 0 for ε > 0 and

    E_C(R_T) = C_T/2                      0 < R_T ≤ C_T/2
             = C_T - R_T/(1 - ε)          C_T/2 < R_T < C_T(1 - ε)

Figure 4.15 compares E_C(R_T), the convolutional exponent as ε -> 0, with the block 
exponent E(R_T) of (2.5.16) as a function of R_T/C_T. Comparing the latter with 
(4.6.23), we note that T for block codes is the time to transmit K bits, as is KT_b for 
convolutional codes; thus (2.5.16) can be expressed as

    P_E < 2^{-K E(R_T)/R_T}
with E(R_T) as defined there. Hence, as is seen from the comparison of E_C(R_T) and 
E(R_T) in Fig. 4.15, the convolutional coding exponent clearly dominates the block 
coding exponent for orthogonal codes.



Figure 4.15 Limiting form of E_C(R_T) for orthogonal convolutional codes and comparison with 
orthogonal block codes.




Comparing decoding complexity, a maximum likelihood block decoder per- 
forms 2^K comparisons every K bits, or 2^K/K comparisons per bit, while a Viterbi 
maximum likelihood convolutional decoder performs 2^{K-1} comparisons per bit; 
the difference becomes insignificant for large K. On the other hand, the bandwidth 
expansion of block codes is proportional to 2^K/K while it is proportional to 2^{K-1} 
for convolutional codes, a severe drawback; this, however, is a feature only of 
orthogonal codes, and in the next chapter we shall show by ensemble arguments 
that, for the same bandwidth expansion (or code rate), the convolutional code 
exponent dominates the block exponent in a similar manner for all memoryless 
channels.



4.7 PATH MEMORY TRUNCATION, METRIC 
QUANTIZATION, AND CODE SYNCHRONIZATION 
IN VITERBI DECODERS 

In deriving the Viterbi algorithm for maximum likelihood decoding of convolutional codes in Sec. 4.2, we made three impractical assumptions which are, in order of importance, arbitrarily long path memories, arbitrarily accurate metrics, and perfect code synchronization. All three of these requirements can be eliminated with a minimal loss of performance, as we shall now discuss.

As initially described, the algorithm requires that a final decision on the most probable code path be deferred until the end of the code block, or message, when the trellis merges into the single state by insertion of a b(K − 1) zero "tail" into the coder register. Thus if the message or block length is (B − K + 1)b bits, the decoder must provide a register of this length for each of the 2^{b(K−1)} possible states. One obvious remedy to this situation is to limit B to some manageable number, say on the order of 1000 or less, by terminating the "block" with a b(K − 1) zero bit tail every (B − K + 1)b data bits. This has two disadvantages, however. First, it reduces the efficiency by increasing the effective required 𝓔_b/N_0, and the required bandwidth, by a multiplicative factor of 1 + (K − 1)/(B − K + 1); also it requires interruption of the data bit stream periodically to insert nondata tails, a common drawback of block codes.

These drawbacks can be avoided by simple modification of the basic algorithm. The simplest approach is to recognize that, other than in a catastrophic code, a path which is unmerged from the correct path will accumulate distance from it as an increasing function of the length of the unmerged span. Thus, upon merging, an incorrect path with a very long unmerged span will have very low probability of having higher metric than the correct path, since this probability decreases exponentially with distance. Consequently, with very high probability, the best path to each of the 2^{b(K−1)} states will have diverged from the correct path only within a reasonably short span, typically a few constraint lengths. Thus without ever inserting tails, we may truncate the path memory to say five constraint lengths, using shift registers of length 5bK for each state. As each new set of b bits enters the registers of each state, the b bits which entered 5K branch times earlier are eliminated, but as this occurs the decoder makes a final decision on these bits, either by choosing for each set of b bits the oldest shift register contents of the majority of the states, or more simply, by accepting the contents of an arbitrary state, on the grounds that, with high probability, all paths will be identical at this point and before. The analysis of the loss in performance caused by these truncation strategies appears quite difficult, but simulations indicate a minimal loss when path memories are truncated more than five constraint lengths back.

A better, but more complex, truncation strategy is to compare all likelihood functions or metrics after each new branch, not only in groups of 2^b but also among all 2^{b(K−1)} surviving paths, to determine the most probable path leaving the given node at the truncation point; then among the 2^{b(K−1)} paths, we choose the outputs corresponding to the highest metric (several constraint lengths forward). In a somewhat superior manner, this strategy permits reduction of the memory length to 4bK or less in practical situations, as has been determined by simulation. In addition, the loss in performance due to truncation with this decision strategy can be analyzed on an ensemble basis, as will be shown in Sec. 5.6 (also see Prob. 4.24).
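As an illustration of the first, simpler truncation strategy (a sketch only; the data structures, the majority rule, and the function name are assumptions, not the text's implementation), the fragment below decides the oldest b-bit group once every survivor register is full, either by a majority vote over the states or by taking an arbitrary state's contents.

```python
# Sketch of truncated path-memory decisions in a Viterbi decoder.
# Assumed representation: each survivor is a list of decided b-bit tuples.
from collections import Counter

def truncated_decision(survivors, delta, rule="majority"):
    """Decide the oldest b-bit group once every survivor holds delta branches.

    survivors : dict mapping state -> list of b-bit tuples (oldest first)
    delta     : truncation depth in branches (e.g., about five constraint lengths)
    rule      : "majority" = value held by most states,
                "arbitrary" = contents of an arbitrary state
    Returns the decided b-bit tuple, or None if the memories are not yet full.
    """
    if any(len(path) < delta for path in survivors.values()):
        return None
    oldest = [path[0] for path in survivors.values()]
    if rule == "arbitrary":
        decided = oldest[0]
    else:
        decided = Counter(oldest).most_common(1)[0][0]
    for path in survivors.values():      # drop the branch just decided
        path.pop(0)
    return decided
```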

The second inherent assumption has been that accumulated branch metrics can be stored precisely. We note that, other than for the BSC where they simplify to integers, the metrics will be real numbers. For example, on an AWGN channel, they consist of linear combinations of the demodulator outputs for each symbol [see (4.2.3)]. Even if these symbols are quantized to J levels, this does not mean that the metrics are quantized; for as shown in Sec. 2.8 (see Fig. 2.14) the quantized channel is characterized by a transition probability matrix {p(b_j | a_k)} whose logarithms are real numbers.^{11} Nevertheless, it has been found, again by extensive simulation (see Heller and Jacobs [1971]), that for all values of 𝓔_b/N_0, for an eight-level quantized AWGN channel with binary (biphase or quadriphase modulated) inputs, use of a definitely suboptimal integer metric formed from the quantized demodulator outputs y_k, which are n-dimensional vectors with integer values from zero to seven, results in a total performance loss of only 0.25 dB relative to that with unquantized demodulator outputs and unquantized metric (see Fig. 4.12). Another problem arises because, even though we may quantize each branch metric to a reasonable number of bits to be stored, the accumulated metrics will grow linearly with the number of branches decoded. This difficulty is easily avoided by renormalizing the highest metric to zero simply by subtracting an equal amount from each accumulated metric after each branch calculation. The maximum spread among all 2^{b(K−1)} state metrics is easily bounded as follows. Suppose that the greatest branch metric possible is zero and the least is the negative integer −ν, which we can guarantee by subtracting a constant from all possible branch metrics. For the binary-input, octal-output quantized AWGN channel just discussed with rate 1/n coding, ν = 7n. Then it follows easily that the maximum spread in metrics for a constraint length K, rate b/n code is (K − 1)ν, for any state can be reached from any other state in at most K − 1 transitions. At any node depth j + K − 1, consider the highest metric state a, without normalization, and any other state b. There exists a path (not necessarily the surviving path) which diverged from the path to state a at node j and arrives at state b at node j + K − 1 (see Fig. 4.16). Now since all branch metrics lie between zero and −ν, the metric change in the path to a over the last K − 1 branches is nonpositive while the metric change in the path to state b must be between zero and −(K − 1)ν; hence the spread is no greater than (K − 1)ν. Now if this particular path did not survive, this can only be due to the fact that the surviving path to state b has higher metric than this path; hence the spread will be even less. In conclusion then, if we renormalize by adding an integer to bring the highest state metric to zero after each branch calculation, the minimum state metric is never smaller than −(K − 1)ν, so we need only provide ⌈log₂ (K − 1)ν⌉ bits of storage for each state metric (where ⌈x⌉ denotes the least integer not less than x).

^{11} See Prob. 4.20 for a bounding technique for a decoder which uses integer metrics.

(Figure: direct paths in the trellis from state a at node j to states a and b at node j + K − 1.)
Figure 4.16 Unnormalized metrics for direct paths from state a at node j to states a and b at node j + K − 1.
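The renormalization and the storage estimate just derived can be summarized in a few lines (a sketch with assumed example values; the function names are not from the text):

```python
import math

def renormalize(metrics):
    """Subtract the largest accumulated metric so the best state reads zero.
    metrics: dict state -> accumulated metric (branch metrics lie in [-v, 0])."""
    best = max(metrics.values())
    return {state: m - best for state, m in metrics.items()}

def metric_storage_bits(K, v):
    """Bits per state metric: the spread never exceeds (K - 1) * v."""
    return math.ceil(math.log2((K - 1) * v))

# Example (assumed): rate-1/2 code (n = 2) on the octal-quantized AWGN channel,
# so v = 7n = 14, with constraint length K = 7.
print(metric_storage_bits(7, 14))   # -> 7 bits per state metric
```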

Finally, we consider the synchronization of a Viterbi decoder. For block 
codes, it is obvious that, without knowledge of the position of the initial symbol of 
each received code vector, decoding cannot be performed. Hence block coding 




systems either incorporate periodic uncoded synchronization sequences which 
permit the receiver initially to acquire the code synchronization, or they modify 
the block code so as to cause unsynchronized code vectors to be detected as such. 
In the first case, the effective data rate is reduced by insertion of the uncoded 
synchronization sequence, while in the second a relatively complex synchroniza 
tion system must be provided in the decoder. These difficulties are greatly reduced 
in convolutional decoders. In an unterminated convolutional code, it would 
appear that we require both branch and symbol synchronization. In a binary- 
input, rate b/n code, symbol synchronization refers to knowledge of which of n 
successive received symbols initiates a branch; let us assume initially that this has 
already been acquired. On the other hand, branch synchronization refers to know 
ledge of which branch in the code path is presently being received. But if symbol 
synchronization is known, branch synchronization is not required. For suppose 
that, rather than initiating the decoding operation at the initial node (all-zeros in 
the encoder) as we have always assumed, we were to begin in the middle of the 
trellis. The Viterbi algorithm is identical at each node; the only problem would be 
how to choose the initial values of the 2^{b(K−1)} state metrics. In normal decoding
when correct decisions are being made, one metric, generally corresponding to the 
correct path or at least to a path which diverged from it only a few branches back, 
will be largest; but, when errors are occurring, paths unmerged from the correct 
path will have the highest metric so that conceivably the correct path might even 
have the lowest metric at a given node. Yet we have seen that with probability one 
for all but catastrophic codes, error sequences are of finite length so that, even 
from the worst condition when the correct path has lowest metric at a given node, 
the decoder will eventually recover and resume making correct decisions. Thus it 
is clear that if we start decoding at an arbitrary node with all state metrics set to 
zero, the decoder performance may initially be poor for several branches but, after 
at most a few constraint lengths, it will recover in the sense that the correct path 
metric will begin to dominate, and the data will be decoded correctly in much the 
same way as the decoder recovers after a span of decoding errors. Analysis of this 
effect on an ensemble basis is very similar to that of path memory truncation, and 
is treated in Sec. 5.6. 

Thus the only real synchronization problem in Viterbi decoding of convolu 
tional codes is that of symbol synchronization. Here, fortuitously, we have the 
reverse of the above situation. In a rate b/n code, if the wrong initial symbol 
out of n possible consecutive symbols is initially assumed, the correct path branch 
metrics will appear much like those of other paths; thus all path metrics will tend 
to remain relatively close together with no path emerging with more rapidly 
growing metric than all the others (see for example Prob. 4.19); this condition can 
be easily detected and the initial assumption of the initial symbol changed to 
another of the n possibilities. Thus at most n positions must be searched; even 
with enough time spent at each position to be able to exclude the incorrect 
hypotheses with low probability of error, symbol synchronization can be achieved 
within a few hundred bits when n is on the order of four or less. 
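The symbol-synchronization search just outlined can be sketched as follows (an illustration under assumed interfaces, not the text's procedure): each of the n candidate offsets is decoded over a trial span, and an offset is accepted only when one path metric pulls clearly away from the rest.

```python
def find_symbol_sync(received, decode_segment, n, span, threshold):
    """Search the n possible symbol offsets of a rate b/n code.

    received       : list of received channel symbols
    decode_segment : assumed helper; runs the Viterbi decoder on a segment and
                     returns the final list of state metrics
    span           : number of symbols examined per hypothesis
    threshold      : required gap between the best and second-best metric
    Returns the accepted offset, or None if no hypothesis locks.
    """
    for offset in range(n):
        metrics = sorted(decode_segment(received[offset:offset + span]), reverse=True)
        if metrics[0] - metrics[1] > threshold:
            return offset       # one path dominates: correct symbol phase
    return None                 # all metrics stay bunched: collect more symbols
```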




4.8 FEEDBACK DECODING* 

We have thus far considered only maximum likelihood decoding of convolutional 
codes, developing and analyzing the Viterbi algorithm which results naturally 
from the structure of the code. Its major drawback is, of course, that while error 
probability decreases exponentially with constraint length, the number of code 
states, and consequently decoder complexity, grows correspondingly. Partially to 
avoid this difficulty, a number of other decoding algorithms have been proposed, 
analyzed, and employed in convolutionally coded systems. Sequential decoding 
achieves asymptotically the same error probability as maximum likelihood decod 
ing, but without searching all possible states; in fact, the number of states searched 
is essentially independent of constraint length, thus rendering possible the use of 
very large K and resulting in very low error probabilities. This very optimistic 
picture is clouded by the fact that the number of state metrics actually searched is 
a random variable with unbounded higher-order moments; this poses some rather 
subtle difficulties which degrade performance. To do justice to the complex sub 
ject of sequential decoding, we must first explore more of the ensemble properties 
of convolutional codes. This is done in Chap. 5, and then Chap. 6 is devoted to 
sequential decoding. 

Another class of decoding algorithms, known collectively as feedback decoding, has received much attention for its simplicity and applicability to interleaved data transmission. The principles of operation of a feedback decoder, or more precisely a syndrome-feedback decoder, are best understood in terms of a specific example; Fig. 4.17 shows the ultimately simplest rate = 1/2 encoder, and its trellis and tree diagrams. Clearly, the free distance is 3, so that, if on a BSC only one error occurs in a sequence of two branches (the constraint length), the error will be corrected by a maximum likelihood (minimum distance) decoder. Now suppose that instead of a true maximum likelihood decoder as described and analyzed earlier, we use instead a truncated-memory decoder which makes a maximum likelihood decision on a given bit or branch based only on a finite number of received branches beyond this point. For the example at hand, suppose the decision for the first bit were based on only the two branches shown on the tree diagram in dotted box A. If the metric at nodes a or b is greatest, we decide that the first bit was a "0"; while if the metric at nodes c or d is greatest, we decide in favor of a "1." Specifically, if the sequence received over a BSC is y = 100110, the metric (negative Hamming distance) is −1 at node c and less at all other nodes at the third branching level. This decoder will then decide irrevocably that the first transmitted bit was a "1." From this point, only paths in the lower half of the tree are considered. Thus the next decision is among the paths in dotted box B and is based on the metric at the four nodes e, f, g, and h. For the given y, the metric, based on branches 2 and 3, at nodes e and f is −1 and at all other nodes it is less. Hence the second bit decision is a "0." Note that the effect on the metric of the



* May be omitted without loss of continuity. 







(Figure: (a) encoder; (b) trellis diagram; (c) tree diagram, with the received vector 10 01 10 indicated.)
Figure 4.17 Code example for feedback decoding.




first branch could be removed because all paths in B have the same first branch, this having been irrevocably decided in the previous step. The decoder can then proceed in this manner, operating essentially as a "sliding block decoder" on codes of four codewords of 2 branches each. This decoder is also called a feedback decoder because the decisions are "fed back" to the decoder in determining the subset of code paths which are to be considered next. On the BSC, in general, it performs nearly as well^{12} as the Viterbi decoder in that it corrects all the more probable error patterns, namely all those of weight ≤ (d_f − 1)/2, where d_f is the free distance of the code. Hence, some (though not necessarily all) of the minimum weight error patterns which cause a decision error with this decoder will also cause decision errors with the Viterbi decoder.^{13}

The above example is misleadingly simple in that the memory length, which we henceforth denote by L, needs only to be equal to the constraint length to guarantee that the minimum distance between the correct path and any path which diverges from it will be at least d_f within L branches. Examining the trellis of the rate = 1/2, K = 3 code of Fig. 4.8, for which d_f = 5, we find that it takes L = 6 branches for all paths (unmerged as well as merged) which diverged from the correct path at the first node to accumulate a weight equal to d_f or greater. In fact, the worst culprit is the path whose data sequence 101010 takes exactly six branches to reach weight d_f = 5. This code then guarantees the correction of all two-error patterns in any sequence of 12 code symbols (six branches). To correct three-error patterns, we must have a code for which d_f is at least 7, which with rate = 1/2 requires K ≥ 5. The best K = 5, r = 1/2 code requires L = 12 for all unmerged incorrect paths to reach weight 7. It turns out, however, that there is a K = 10, r = 1/2 systematic code for which all paths which diverge from it reach weight 7 by L = 11; hence a feedback decoder for this code with L = 11 corrects all three-error patterns in any sequence of 22 symbols. We shall return to the question of systematic codes momentarily.

The fact that this decoder can be regarded as a sliding block decoder can be 
exploited to simplify its implementation for a BSC. We recall from Sec. 2.10 that a 
maximum likelihood, or minimum distance, decoder for a systematic block code 
on a BSC can be efficiently implemented by calculating the syndrome of the 
received vector, and from this obtaining the most likely error vector by consulting 
a table which contains the most likely error vector for each syndrome. The simple 
example under consideration is a systematic code since one of its generators 
contains only one tap, and hence the information symbols are transmitted 



12 For the special case of the code of Fig. 4.17, Morrissey [1970] has shown that on the BSC the 
feedback decoder coincides with the Viterbi decoder. However, this is the only case for which this is 
known to hold. 

13 Note that the Viterbi decoder must also truncate its memory for practical reasons, as discussed in 
Sec. 4.7, but that if this is done after about five constraint lengths negligible degradation results. In the 
present discussion, memory is truncated much earlier so that the decoders are generally much more 
suboptimal, although not in the special case of Fig. 4.17. 




unmodified through this tap. The transmitted code vector can be written for convenience as

$$\mathbf{x} = (u_1\,u_2\,u_3\cdots;\ p_1\,p_2\,p_3\cdots)$$

where u_j is the jth information symbol (upper generator in this case) and p_j is the jth parity (lower generator) symbol. This is generated from the data source by the operation

$$\mathbf{x} = \mathbf{u}G$$

where

$$G = \left[\;I\;\middle|\;\begin{matrix} 1 & 1 & & & \\ & 1 & 1 & & \\ & & 1 & 1 & \\ & & & \ddots & \ddots \end{matrix}\;\right]$$

Note that for convenience here we have departed from the convention established in Sec. 4.1 of writing consecutively^{14} all the generator outputs for the jth input, choosing rather to partition the vector into two (generally n) subsequences, one for each generator. This then requires that the generator matrix also be partitioned into submatrices, one for each subgenerator sequence. It follows from Sec. 2.10 then that the transpose of the parity-check matrix for this code is

$$H^T = \begin{bmatrix} 1 & 1 & & \\ & 1 & 1 & \\ & & \ddots & \ddots \\ \hline 1 & & & \\ & 1 & & \\ & & \ddots & \end{bmatrix}$$

But since each submatrix of the parity-check matrix also has the property that each row is shifted from the preceding row by one term, it is clear that the syndrome can be generated by passing the received noise-corrupted information symbols to a single-generator-sequence convolutional encoder and adding its output to the received noise-corrupted parity symbols. In Fig. 4.18 we show the information and parity symbols as if they were on two separate channels. In fact, a commutator at the encoder output provides for consecutive transmission of u_j and p_j, while a decommutator before the encoder separates the error-corrupted information and parity subsequences. However, since the channel is memoryless, we may treat the interleaved information and parity subsequences as if they were transmitted on separate BSCs with the same error statistics.

^{14} This is merely a convention and does not alter the fact that p_j is transmitted immediately after u_j and thus that the u and p subsequences are interleaved together on transmission.

(Figure: received information and parity streams enter a syndrome generator, which feeds syndrome storage, decision logic, and error feedback.)
Figure 4.18 Feedback decoder for code of Fig. 4.17.

Clearly, in the absence of errors, the syndrome is always zero since it is then the sum of two identical sequences. In the presence of errors, "1"s will appear. Returning to the "sliding block" decoding viewpoint, we see that if we preserve L symbols of the syndrome, then this represents the syndrome for a specialized block code corresponding to a segment of the tree over L branches. For the 2^L syndromes of this block code, we could provide a table-look-up corresponding to the most likely (lowest weight) error pattern for each syndrome. But, in fact, the decision at each step of a feedback decoder is only on which half of the tree is more likely. Equivalently, the decision may be on whether the received information symbol corresponding to the first branch was correctly received or had an error in it. This then requires only that the syndrome table store a "0" or "1," the latter corresponding to an error, which can then be added modulo-2 to the corresponding information symbol, which itself also had to be stored in an L-stage shift register.

For the code under consideration the syndromes, most likely error sequences, and required output are shown in Table 4.2.

Table 4.2 Syndrome look-up table for code of Fig. 4.17

    Syndrome s1 s2    Most likely error pattern    Table output
    0 0               no errors                    0
    0 1               error in p2                  0
    1 0               error in p1                  0
    1 1               error in u1                  1

Thus in general, the look-up table
can be implemented by a read-only memory logic element with L inputs whose 
single output is used to correct the information bit. In this simple case, as shown 
in Fig. 4.18, the general logic element may be replaced by an and-gate. There 
remains, however, one more function to be implemented in this feedback decoder. This is to feed back the decision just made; if the decision was that an error occurred in u_1, this is most easily implemented by adjusting in the decision device for the effect of that error. But the decision device here consists of just the stored syndrome for the past L branches and a time-invariant table. The error in u_1 here produced "1"s in both syndrome stages, but, for the next decision (on u_2) the contents of the rightmost stage is lost and replaced by the contents of the previous stage. Thus to eliminate the effect of the u_1 error, we need only add modulo-2 the decision output to the rightmost stage and store this until the arrival of the next syndrome bit.

As long as no decision errors occur, the decoder continues to operate in this manner, in this case correcting all single errors in any sequence of four symbols. However, if two out of any four consecutive symbols are ever in error, a decision error occurs that may propagate well beyond a single decision error since the error is in fact fed back to affect further decisions. This is called the error propagation effect, which is common to some extent to all convolutional decoders. That the error propagation in this case is finite, however, is evident from the fact that, if no channel errors occur in two consecutive branches, both syndrome stages cannot simultaneously contain "1"s; hence, as seen in Table 4.2, no decision error is detected or fed back, thus returning the syndrome register to the all-zeros state and passing the correct information symbols unmolested. The above example for rate r = 1/2 single-error-correcting codes can be generalized to any rate (n − 1)/n single-error-correcting code (see Prob. 4.13).
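The complete decoder of Fig. 4.18 for this code is small enough to sketch in full (an illustrative sketch under assumed conventions: lists of 0/1 symbols in, the and-gate decision of Table 4.2, and a one-branch decision delay; it is not the text's hardware description).

```python
def feedback_decode(info_rx, parity_rx):
    """Syndrome-feedback decoder for the systematic r = 1/2, K = 2 code of
    Fig. 4.17 (parity p_j = u_j XOR u_{j-1}); corrects any single error in four
    consecutive symbols.  The last information bit is returned undecided."""
    decoded = []
    u_prev = 0            # previous received information symbol (for re-encoding)
    s_old = None          # older of the two stored syndrome bits
    u_pending = None      # information symbol awaiting its decision
    for u, p in zip(info_rx, parity_rx):
        s_new = u ^ u_prev ^ p           # syndrome: re-encoded info + received parity
        if u_pending is not None:
            err = s_old & s_new          # and-gate: both stages 1 -> error in pending bit
            decoded.append(u_pending ^ err)
            s_new ^= err                 # feedback: cancel that error's effect
        u_pending, s_old, u_prev = u, s_new, u
    if u_pending is not None:
        decoded.append(u_pending)        # no look-ahead available for the final bit
    return decoded

# The received sequence of the text example, y = 10 01 10; the first two
# decisions agree with the "1" and "0" found above.
print(feedback_decode([1, 0, 1], [0, 1, 0]))   # -> [1, 0, 1]
```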

Generalization to any systematic convolutional code of any rate b/n is straightforward. For such a code, the information sequence is subdivided into subsequences of length b bits. Each symbol of each length-b information subsequence is transmitted and simultaneously inserted into the first stage of b encoder registers, each of which is of length K. The contents of these registers are linearly (modulo-2) combined to generate (n − b) parity symbols after each insertion of the b information symbols. The syndrome generator in the decoder then operates in almost the same way as the encoder. That is, the b error-corrupted information symbols are again encoded as above into (n − b) symbols, which are added to the corresponding error-corrupted parity symbols to form syndrome symbols. Thus for each subsequence of b information symbols, a subsequence of (n − b) syndrome symbols is generated, which, in the absence of errors, would be all zeros. If the truncated maximum likelihood decision is to be based on L branches back, the (n − b)L syndrome bits must be stored and, in general, the syndrome table-look-up must consist of 2^{(n−b)L} entries of b bits each. A "1" in any of the b table-look-up outputs indicates an error to be corrected in an information bit and corresponding corrections to be made in the syndrome.

To illustrate a somewhat more powerful code, as well as a further simplification in the decoder which is possible for a limited subclass of convolutional codes, let us consider a two-error-correcting, r = 1/2 code. As discussed earlier, the nonsystematic K = 3, r = 1/2 code of Fig. 4.20 requires a memory of six branches to ensure that all incorrect unmerged paths reach the free distance 5 from the correct path. But the complexity of the feedback decoder depends almost exclusively on the complexity of the syndrome logic and hardly at all on constraint length. Furthermore, for correcting information errors, systematic codes are more natural to work with than nonsystematic ones.^{15} In general, with true maximum likelihood decoding whose complexity depends directly and almost exclusively on constraint length, nonsystematic convolutional codes are superior as noted in Sec. 4.5 since they achieve higher free distance for a given K. On the other hand, since a feedback decoder is really a sliding block decoder of block length L branches and, as shown in Sec. 2.10, for every nonsystematic block code there is a systematic block code with equal performance, there appears to be every advantage to using systematic convolutional codes with feedback decoders. The systematic code which may be feedback decoded with a syndrome memory of six branches is shown in Fig. 4.19. It may be verified from the corresponding tree diagram that d_f = 5 and that all paths which diverge from the all-zeros path at a given node are at least at distance 5 from it within six branches. Thus a feedback decoder with a memory of six branches can correct any two symbol errors in any sequence of 12 symbols of this code.

^{15} Actually a syndrome can be calculated almost as easily for a nonsystematic code, but errors then must be found in all the received symbols, which must then be combined to generate the information symbol.

(Figure: systematic rate-1/2 encoder.)
Figure 4.19 Systematic encoder capable of two-error correction (K = 6, r = 1/2).

While the general structure of this feedback decoder has already been described, we now demonstrate a considerable simplification on the general syndrome lookup-table procedure that is possible for a limited class of codes of which this is a member. Suppose we denote possible errors in the information and parity symbols on the jth branch by e_j^u and e_j^p, respectively, each of which equals zero if the corresponding symbol is received correctly and equals one if it is in error. Then, since the syndrome symbols are all zero when no errors are made (assuming no previous errors), the first six syndrome symbols, corresponding to as many branches, are (see Fig. 4.20)



$$\begin{aligned}
S_1 &= e_1^u \oplus e_1^p \\
S_2 &= e_2^u \oplus e_2^p \\
S_3 &= e_3^u \oplus e_3^p \\
S_4 &= e_4^u \oplus e_1^u \oplus e_4^p \\
S_5 &= e_5^u \oplus e_2^u \oplus e_1^u \oplus e_5^p \\
S_6 &= e_6^u \oplus e_3^u \oplus e_2^u \oplus e_1^u \oplus e_6^p
\end{aligned} \qquad (4.8.1)$$

Now suppose we consider the set of equations for S_1, S_4, S_6 and the modulo-2 sum S_2 ⊕ S_5

$$\begin{aligned}
S_1 &= e_1^u \oplus e_1^p \\
S_4 &= e_4^u \oplus e_1^u \oplus e_4^p \\
S_2 \oplus S_5 &= e_5^u \oplus e_1^u \oplus e_2^p \oplus e_5^p \\
S_6 &= e_6^u \oplus e_3^u \oplus e_2^u \oplus e_1^u \oplus e_6^p
\end{aligned} \qquad (4.8.2)$$



These equations have two important properties: (a) each equation contains e_1^u, the error in the first information symbol, and (b) no other symbol error occurs in more than one equation. Such a set of equations is said to be orthogonal on e_1^u. Hence if u_1 is in error so that e_1^u = 1 and no other symbol errors occur in the first six branches, all the syndrome linear combinations of (4.8.2) will equal 1. If any other error occurs alone among the first 12 symbols [or, more precisely, among the 11 of those 12 symbols whose error terms appear in (4.8.2)] only one sum in (4.8.2) will be 1 and the other three will be 0. If e_1^u = 1 and any other error occurs among the first 12 symbols, three of the sums are 1 and the other is a 0. Finally, if e_1^u = 0 but two other symbols are in error, at most two sums of (4.8.2) will be 1, since each such error term occurs in one equation (possibly both in the same equation). Thus we conclude that e_1^u = 1 if and only if three or four of the sums of (4.8.2) equal 1, and e_1^u = 0 if less than three of the sums equal 1. This suggests the alternate mechanization of the syndrome table-look-up shown in Fig. 4.20. Aside from the simplification of the general logic element, the remainder of the syndrome-feedback decoder is as previously described. This special form of a feedback decoder is called a threshold decoder because of the threshold logic (also called majority logic) involved in the error decisions. The class of threshold-decodable convolutional codes was first defined and extensively developed by Massey [1963].

(Figure: the syndrome register and the modulo-2 sums of (4.8.2) feed a threshold device whose output is 1 if 3 or 4 inputs equal 1; the decision corrects the information symbol and is fed back to the syndrome register.)
Figure 4.20 Threshold decoder for encoder of Fig. 4.19.

Clearly, what has just been described for the first branch applies to all further branches, with a correction to the information symbol u_j performed by adding e_j^u to it. Also wherever e_j^u = 1, the effect of the error is canceled by adding, in the appropriate positions of the syndrome register, the parity-check symbols generated by a 1 in the erroneous information symbol. In this way it is clear that any single error or pair of errors in six consecutive branches are corrected. It should be recognized, however, that, since there are 64 possible syndromes and only $1 + \binom{6}{1} + \binom{6}{2} = 22$ error patterns of weight less than or equal to 2, this decoding procedure does not necessarily replace exactly the table-look-up corresponding to the truncated maximum likelihood decoder. That is, while it does guarantee that all error patterns of weight up to (d_f − 1)/2 are corrected, there may be some three-error patterns corrected by the table-look-up procedure which are not corrected here. Obviously, not all systematic convolutional codes may be decoded by a threshold decoder. It should be clear from the above example that a code is e-error-correctable by a threshold decoder if and only if 2e linear combinations of syndrome symbols can be formed which are orthogonal on one information symbol error. Then the threshold logic declares an error whenever more than e of the linear sums equal 1. It is also clear that our original example of Fig. 4.18 was a threshold decoder for a rate r = 1/2 code with e = 1.
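The majority-logic test on the four check sums of (4.8.2) is compact enough to show directly (a sketch only; it assumes the six syndrome bits of the current window are already available, uses the parity taps implied by (4.8.1) as reconstructed above, and omits the register updating and feedback).

```python
def first_bit_error(s):
    """Threshold (majority-logic) decision on the first information symbol.

    s : the six syndrome bits [S1, ..., S6] of the current window.
    Returns 1 (declare e1u = 1) when three or four of the orthogonal check sums
    of (4.8.2) equal 1, else 0."""
    checks = [
        s[0],           # S1      = e1u + e1p
        s[3],           # S4      = e4u + e1u + e4p
        s[1] ^ s[4],    # S2 + S5 = e5u + e1u + e2p + e5p
        s[5],           # S6      = e6u + e3u + e2u + e1u + e6p
    ]
    return 1 if sum(checks) >= 3 else 0

# A single error in u1 drives all four check sums to 1:
print(first_bit_error([1, 0, 0, 1, 1, 1]))   # -> 1
```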

Another difficulty with threshold decoding is that for more than three-error correction, the required syndrome memory length grows very rapidly. As noted previously, there exists a systematic rate = 1/2 convolutional code (Bussgang [1965]) for which all incorrect paths are at distance 7 from the correct path within L = 11 branches, thus affording the possibility of correcting any error pattern of up to three errors in any sequence of 22 symbols. However, it is not orthogonalizable and hence not threshold-decodable. The existence of read-only memories containing 2^{11} bits in a single integrated circuit makes it possible to implement the entire table-look-up function (with an 11-bit input and a single output) quite easily. The shortest L for a r = 1/2, three-error-correcting orthogonalizable convolutional code is 12, affording correction of up to three errors in sequences of length 24 symbols. The situation is much less favorable, however, for a r = 1/2 four-error-correcting code which requires that all incorrect paths be at distance at least 9 from the correct path. Computer search (Bussgang [1965]) has shown that the minimum value of L is 16. This code is not orthogonalizable and the table-look-up would require a memory of 2^{16} bits. On the other hand, the shortest orthogonalizable, and hence threshold-decodable, four-error-correcting r = 1/2 code requires L = 22. L grows rapidly beyond this point, with L = 36 for a threshold-decodable five-error-correcting convolutional code (Lucky, Salz, and Weldon [1968]).

While the feedback decoding, or sliding block decoding, concept can apply to channels other than the BSC, its appeal decreases considerably when the binary operations must be replaced by operations involving more elaborate metrics. Then also, as we have noted, even for the BSC, complexity hardly justifies the procedure for the correction of more than a few errors. On the other hand, one rather important feature of the procedure is that it can be very simply adapted to interleaved operation on channels with memory where bursts of errors are prevalent. While a general interleaver, applicable to any channel, was described in Sec. 2.12, this was external to the encoder and decoder, constituting an interface between the latter and the channel. A somewhat more direct approach to interleaving with convolutional codes is to replace all the single-unit delay elements in the encoder shift register by I-unit delay elements,^{16} where I is the degree of interleaving.

^{16} Integrated circuits providing thousands of bits of delay-line storage are common.




As a result, the encoder becomes effectively I serial encoders, each operating on one of the subsequences of information symbols u_i, u_{i+I}, u_{i+2I}, ..., where i is an integer between 1 and I. This technique is called internal interleaving. Its main drawback is that the storage required in the decoder is multiplied by I. Thus, for Viterbi decoding, the 2^{b(K−1)} state metrics and path memories must be stored for each of the I interleaved codes, making the resulting storage requirements prohibitive when I is in the hundreds or thousands. A feedback decoder, on the other hand, consists merely of the replica of the encoder, to generate the syndrome, and a syndrome memory shift register. Thus interleaved decoding, corresponding to I serial decoders, can be implemented just as in the encoder by replacing all single-unit delays by I-unit delays both in the syndrome generator and the syndrome memory register, leaving the syndrome table-look-up or threshold logic unchanged. Thus, for degree-I interleaving, a constraint length K, rate b/n, syndrome-memory L code requires (K − 1)b I-unit delay elements in the encoder and [(K − 1)b + (L − 1)(n − b)] I-unit delay elements in the decoder. Such decoders represent a simple approach to effective decoding of bursty channels such as are common in HF ionospheric propagation. Several techniques for embellishing this basic concept by varying the delays between coder and decoder stages have been proposed (Gallager [1968], Kohlenberg and Forney [1968], Peterson and Weldon [1972]) with moderate degrees of improvement.
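The internal-interleaving idea for the syndrome generator can be illustrated in a few lines (a sketch; the single-tap code of Fig. 4.17 and the list-based interface are assumptions): replacing the one-branch delay by an I-branch delay makes the same hardware act as I independent decoders on the subsequences u_i, u_{i+I}, u_{i+2I}, ....

```python
def interleaved_syndromes(info_rx, parity_rx, I):
    """Syndrome generator of Fig. 4.18 with its one-branch delay replaced by an
    I-branch delay (internal interleaving): branch j is checked against branch
    j - I, so the I interleaved subsequences are decoded independently."""
    delay = [0] * I                       # the I-unit delay element
    syndromes = []
    for j, (u, p) in enumerate(zip(info_rx, parity_rx)):
        u_delayed = delay[j % I]          # information symbol from I branches earlier
        delay[j % I] = u
        syndromes.append(u ^ u_delayed ^ p)
    return syndromes
```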



4.9 INTERSYMBOL INTERFERENCE CHANNELS* 

The Viterbi algorithm, originally developed for decoding convolutional codes, has also led somewhat surprisingly to a fundamental result in the optimum demodulation of channels exhibiting intersymbol interference. This phenomenon, first discussed in Sec. 2.6, arises whenever a digital signal is passed through a linear channel (or filter) whose transfer function is other than constant over the bandwidth of the signal. The narrower the channel bandwidth, the more severe the intersymbol interference. A general model of a band-limited channel is shown in Fig. 4.21. We first treat only the uncoded case of digital (pulse amplitude or biphase) modulation over an AWGN channel with intersymbol interference. In the next section and in Sec. 5.8, we shall extend our results to the coded case.

The digital signal is characterized by a sequence of impulses,^{17} or Dirac delta functions

$$u(t) = \sum_{k=-N}^{N-1} u_k\,\delta(t - kT) \qquad (4.9.1)$$

* Intersymbol interference is treated in Secs. 4.9, 4.10, and 5.8 only. These sections may be omitted without loss of continuity.

^{17} If the signal is instead a pulse train, i.e., a sequence of pulses each of duration T and amplitudes {u_k}, the channel output can still be represented by (4.9.2) with h(t) being the channel impulse response convolved with a single unit amplitude pulse of duration T.






(Figure: (a) analog model — impulse modulator, modulator/channel filter h(t), additive white Gaussian noise n(t), and matched filter demodulator; (b) digital equivalent model with correlated Gaussian noise n_k.)
Figure 4.21 Intersymbol interference (band-limited) channel and matched filter demodulator.



where {u_k} is a sequence from a finite alphabet, usually binary (u_k = ±1) in what follows. The total transmission sequence is taken to be of arbitrary length 2N. Then the first part of the channel, which is characterized by a linear impulse response h(t), has output

$$x(t) = \sum_{k=-N}^{N-1} u_k\,h(t - kT) \qquad -\infty < t < \infty \qquad (4.9.2)$$



The additive noise, n(t), is taken as usual to be white Gaussian noise with spectral density N_0/2, and the received signal is denoted, as in Chap. 2, by

$$y(t) = x(t) + n(t) \qquad (4.9.3)$$

The decision rule which minimizes the error probability, based on the entire received signal, is as derived in Sec. 2.2: choose x_m(t), m = 1, 2, ..., M, if

$$\|\mathbf{y} - \mathbf{x}_m\|^2 \le \|\mathbf{y} - \mathbf{x}_{m'}\|^2 \qquad \text{for all } m' \ne m$$

where y and x_m are the coefficients of the Gram-Schmidt orthonormal representation of the functions y(t) and x_m(t). The number, M, of possible sequences {x_m} is bounded by M ≤ Q^{2N} where Q is the size of the alphabet of the components x_mn. As shown in Sec. 2.2, assuming a priori equiprobable sequences, the log likelihood




ratio for m and m′ is given by

$$\ln \frac{p_m(\mathbf{y})}{p_{m'}(\mathbf{y})} = \frac{1}{N_0}\int_{-\infty}^{\infty}\left\{[y(t) - x_{m'}(t)]^2 - [y(t) - x_m(t)]^2\right\}dt \qquad (4.9.4)$$

For this case, the integral representation of the log likelihood function is more useful. The infinite limits of the integrals are a consequence of the fact that h(t) is defined over at least the semi-infinite line.

Returning to the representation (4.9.2) of x(t), and letting

$$x_m(t) = \sum_{k=-N}^{N-1} u_{mk}\,h(t - kT) \qquad m = 1, 2, \ldots, M$$

it follows that the maximum likelihood decision rule can be based on

$$\Lambda_m = \frac{2}{N_0}\sum_{k=-N}^{N-1} u_{mk}\,y_k - \frac{1}{N_0}\sum_{k=-N}^{N-1}\sum_{j=-N}^{N-1} u_{mk}u_{mj}\,h_{k-j} \qquad (4.9.5)$$

where

$$y_k = \int_{-\infty}^{\infty} y(t)h(t - kT)\,dt = \left[\int_{-\infty}^{\infty} y(t)h(t - \tau)\,dt\right]_{\tau = kT} \qquad (4.9.6)$$

and

$$h_{j-k} = \int_{-\infty}^{\infty} h(t - kT)h(t - jT)\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} |H(\omega)|^2 e^{i\omega(j-k)T}\,d\omega \qquad (4.9.7)$$

where H(ω) is the channel transfer function, which is the Fourier transform of its impulse response. The variables y_k are the observables on which all decisions will




be based. Note that it follows from (4.9.6) that these are formed by sampling, at intervals of T seconds, the received waveform y(t) convolved with the function h(−t). But this is just the output of the filter matched to the channel impulse response h(t) when its input is y(t). Thus, the observables are just the outputs of a matched filter. The result is similar and reminiscent of that first derived in Secs. 2.1 and 2.2 for maximum likelihood decisions in AWGN, except that here the infinite duration channel impulse response replaces the finite duration signal.

The constants, {h_i}, which depend only on the channel impulse response, are called intersymbol interference coefficients. Although, according to (4.9.7), h_i = h_{−i} is potentially nonzero for all i, in practice, for sufficiently large i, h_i ≈ 0. We shall accept this approximation to limit the dimensionality of the problem. Thus we take

$$h_i \approx 0 \quad \text{for } i \ge \mathcal{L} \quad \text{where } \mathcal{L} \ll N \qquad (4.9.8)$$

Also, by virtue of the symmetry of the coefficients h_i, the symmetrical quadratic form in (4.9.5) can be written as the sum of the diagonal terms plus twice the upper triangular quadratic form. Thus

$$\frac{1}{N_0}\sum_{k=-N}^{N-1}\sum_{j=-N}^{N-1} u_{mk}u_{mj}h_{k-j} = \frac{2}{N_0}\sum_{k=-N}^{N-1}\sum_{j=-N}^{k-1} u_{mk}u_{mj}h_{k-j} + \frac{1}{N_0}\sum_{k=-N}^{N-1}u_{mk}^2 h_0 = \frac{1}{N_0}\sum_{k=-N}^{N-1}u_{mk}^2 h_0 + \frac{2}{N_0}\sum_{k=-N}^{N-1}u_{mk}\sum_{i=1}^{k+N}u_{m,k-i}h_i$$

Substituting this in (4.9.5) and using (4.9.8), we have

$$\Lambda_m = \frac{1}{N_0}\sum_{k=-N}^{N-1}\left(2u_{mk}y_k - u_{mk}^2 h_0 - 2u_{mk}\sum_{i=1}^{\mathcal{L}-1}u_{m,k-i}h_i\right) = \sum_{k=-N}^{N-1}\lambda_k\big(y_k;\ u_{mk}, u_{m,k-1}, \ldots, u_{m,k-(\mathcal{L}-1)}\big) \qquad (4.9.9)$$

where we let u_{mj} = 0 for j < −N, and define

$$N_0\,\lambda_k\big(y_k;\ u_{mk}, u_{m,k-1}, \ldots, u_{m,k-(\mathcal{L}-1)}\big) = 2u_{mk}y_k - u_{mk}^2 h_0 - 2u_{mk}\sum_{i=1}^{\mathcal{L}-1}u_{m,k-i}h_i \qquad (4.9.10)$$

This expression is reminiscent of the branch metric which establishes the decoding criterion for convolutional codes on a binary-input AWGN channel. The latter, as given by (4.2.3) restated in the present notation, is

$$N_0\,\lambda_k\big(\mathbf{y}_k;\ u_{mk}, u_{m,k-1}, \ldots, u_{m,k-(K-1)}\big) = 2(\mathbf{x}_{mk}, \mathbf{y}_k) - n\mathcal{E}_s \qquad (4.9.11)$$

where

$$\mathbf{y}_k = (y_{k1}, y_{k2}, \ldots, y_{kn})$$




and

$$\mathbf{x}_{mk} = (x_{mk1}, x_{mk2}, \ldots, x_{mkn}) = \mathbf{x}_{mk}\big(u_{mk}, u_{m,k-1}, \ldots, u_{m,k-(K-1)}\big)$$

are respectively the n-symbol received vector for the kth branch and the n-symbol code vector for the kth branch of the mth code path, whose inner product properly scaled constitutes the kth branch metric. The last expression for x_mk follows from the observation that the code vector for the kth branch of a constraint length K convolutional code depends on the present data symbol and the preceding K − 1.

Equation (4.9.10) differs from (4.9.11), the branch metric expression for convolutional codes, in two respects: the "branch" observable is a scalar rather than a vector, and the expression is quadratic in the u_mk's rather than linear in the x_mkj's, which are algebraic (finite field) functions of the u_mk's. However, in the most important characteristic, namely the dependence on finitely many past data inputs, the expressions are fundamentally the same. This then leads us to the important conclusion that the maximum likelihood demodulation of binary data, transmitted over an AWGN channel with intersymbol interference of finite memory ℒ, can be based on a 2^{ℒ−1}-state trellis^{18} where the states are determined by the preceding ℒ − 1 data symbols. In other words, to maximize Λ_m, it suffices to maximize over all paths through the 2^{ℒ−1}-state trellis whose branch metrics are given by (4.9.10). This, of course, is achieved by the Viterbi algorithm (VA) developed in Sec. 4.2, which we restate as follows. Given the 2^{ℒ−1} best paths through branch k − 1, denoted by

$$\hat{u}_1\hat{u}_2\cdots\hat{u}_{k-\mathcal{L}},\ u_{k-(\mathcal{L}-1)}\cdots u_{k-1}$$

where u_{k−(ℒ−1)} ⋯ u_{k−1} denotes one of the 2^{ℒ−1} binary state vectors and û_1 ⋯ û_{k−ℒ} are the best path "memories" for that state, and given the corresponding path metrics to that point, M_{k−1}(u_{k−1}, ..., u_{k−(ℒ−1)}), the best paths to each state through branch k are determined by the pairwise maximization

$$M_k\big(u_k, u_{k-1}, \ldots, u_{k-(\mathcal{L}-2)}\big) = \max_{u_{k-(\mathcal{L}-1)} = \pm 1}\Big[M_{k-1}\big(u_{k-1}, \ldots, u_{k-(\mathcal{L}-1)}\big) + \lambda_k\big(y_k;\ u_k, u_{k-1}, \ldots, u_{k-(\mathcal{L}-1)}\big)\Big] \qquad (4.9.12)$$

If the maximum in (4.9.12) is achieved by u_{k−(ℒ−1)} = −1, the resulting path memory symbol is u_{k−(ℒ−1)} = −1 for the given state; while if it is achieved by u_{k−(ℒ−1)} = +1, then u_{k−(ℒ−1)} = +1 for this state.

^{18} This generalizes trivially for Q-level data input sequences to a Q^{ℒ−1}-state trellis.
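A minimal sketch of this demodulator (assumed data structures and names; the branch metric is (4.9.10) and the recursion is (4.9.12)):

```python
from itertools import product

def isi_viterbi(y, h):
    """Viterbi demodulation of binary (+1/-1) data over an ISI channel with
    coefficients h[0], ..., h[L-1] (h[i] assumed ~ 0 for i >= L).

    y : list of matched-filter observables y_k.
    Returns a maximum likelihood +1/-1 data sequence."""
    L = len(h)
    states = list(product((-1, 1), repeat=L - 1))   # (u_{k-1}, ..., u_{k-(L-1)})
    metric = {s: 0.0 for s in states}
    paths = {s: [] for s in states}
    for yk in y:
        new_metric, new_paths = {}, {}
        for s in states:
            for uk in (-1, 1):
                # branch metric (4.9.10): 2 u_k y_k - h_0 - 2 u_k sum_i h_i u_{k-i}
                lam = 2 * uk * yk - h[0] - 2 * uk * sum(h[i] * s[i - 1] for i in range(1, L))
                ns = (uk,) + s[:-1]                 # new state (u_k, ..., u_{k-(L-2)})
                m = metric[s] + lam
                if ns not in new_metric or m > new_metric[ns]:
                    new_metric[ns], new_paths[ns] = m, paths[s] + [uk]
        metric, paths = new_metric, new_paths
    return paths[max(metric, key=metric.get)]
```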




We proceed now to evaluate the performance of this maximum likelihood decision, based on the fact that it is implementable using the VA. Given the correct path due to message m and another path through the trellis corresponding to message m′ (or, equivalently, the state diagram), with corresponding metrics^{19} Λ and Λ′ for correct and incorrect paths respectively, an error will occur if Λ′ > Λ. The probability that this incorrect path causes an error is given by (2.3.10)

$$P_E(m \to m') = \Pr\{\Lambda' > \Lambda\} = Q\!\left(\frac{\|\mathbf{x} - \mathbf{x}'\|}{\sqrt{2N_0}}\right) \qquad (4.9.13)$$



where from (4.9.2) and (4.9.7) we get

$$\|\mathbf{x} - \mathbf{x}'\|^2 = \int_{-\infty}^{\infty}[x(t) - x'(t)]^2\,dt = \int_{-\infty}^{\infty}\left[\sum_{k=-N}^{N-1}(u_k - u_k')h(t - kT)\right]^2 dt$$
$$= \sum_{k=-N}^{N-1}\sum_{j=-N}^{N-1}(u_k - u_k')(u_j - u_j')\int_{-\infty}^{\infty}h(t - kT)h(t - jT)\,dt = \sum_{k=-N}^{N-1}\sum_{j=-N}^{N-1}(u_k - u_k')(u_j - u_j')\,h_{k-j} \qquad (4.9.14)$$



Again note that x is a vector of coefficients of the Gram-Schmidt orthonormal expansion of the signal x(t). Defining error signals

$$\epsilon_k = \tfrac{1}{2}(u_k - u_k') = \begin{cases} +1 & u_k = +1,\ u_k' = -1 \\ 0 & u_k = u_k' \\ -1 & u_k = -1,\ u_k' = +1 \end{cases} \qquad (4.9.15)$$

and noting that h_{k−j} = h_{j−k}, we rewrite (4.9.14) as

$$\|\mathbf{x} - \mathbf{x}'\|^2 = 4\sum_{k=-N}^{N-1}\sum_{j=-N}^{N-1}\epsilon_k\epsilon_j h_{k-j} = 4\sum_{k=-N}^{N-1}\epsilon_k\left(\epsilon_k h_0 + 2\sum_{i=1}^{\mathcal{L}-1}\epsilon_{k-i}h_i\right) \qquad (4.9.16)$$

^{19} In what follows, we avoid first subscripts m and m′ and use instead only a superscript prime to distinguish the incorrect path from the correct path.




Thus for a given error sequence ε = (ε_{−N}, ε_{−N+1}, ..., ε_0, ..., ε_{N−1}), we have the probability of error given by

$$P(\boldsymbol{\epsilon}) = \Pr\{\Lambda' > \Lambda\} = Q\!\left(\sqrt{\frac{2}{N_0}\sum_{k=-N}^{N-1}\epsilon_k\left(\epsilon_k h_0 + 2\sum_{i=1}^{\mathcal{L}-1}\epsilon_{k-i}h_i\right)}\;\right) \qquad (4.9.17)$$

Using the bound Q(x) ≤ e^{−x²/2} first used in Sec. 2.3, we have

$$P(\boldsymbol{\epsilon}) \le \exp\left[-\frac{1}{N_0}\sum_{k=-N}^{N-1}\epsilon_k\left(\epsilon_k h_0 + 2\sum_{i=1}^{\mathcal{L}-1}\epsilon_{k-i}h_i\right)\right] = \prod_{k=-N}^{N-1}\exp\left[-\frac{\epsilon_k}{N_0}\left(\epsilon_k h_0 + 2\sum_{i=1}^{\mathcal{L}-1}\epsilon_{k-i}h_i\right)\right] \qquad (4.9.18)$$

We use the notation P(ε) to indicate that the error probability depends on the differences between the data symbols along the two paths. Note further that the result depends on the sign of the differences and their locations. That is, unlike the error probability for linear codes over binary-input, output-symmetric channels, where performance does not depend on the sign of the channel input and hence the uniform error property holds, the situation is complicated here by the fact that the sign of the errors and hence of the data symbols must be accounted for. Of course, either sign is equally likely for each data symbol, and hence the error components for each pair of (correct and incorrect) paths can take on values 0, +1, and −1.

Up to this point, we have examined the transmitted binary sequence u and any other sequence u′. Defining the error sequence

$$\boldsymbol{\epsilon} = \tfrac{1}{2}(\mathbf{u} - \mathbf{u}') \qquad (4.9.19)$$

we defined the error probability P(ε) which is bounded by the expression (4.9.18). We now focus our attention on only those sequences u and u′ that can cause an error event as shown in Fig. 4.11. Equivalently, we restrict error sequences {ε} to those that begin at some fixed time and have no consecutive ℒ − 1 zeros until after the last nonzero component. Define the number of nonzero components of ε as w(ε), and note that ε uniquely specifies the source sequence u in exactly w(ε) places. Over the ensemble of all equally likely source sequences, the probability that a source sequence can have the error sequence ε is 2^{−w(ε)}, and thus the probability of an error event occurring at a given node is union bounded by

$$P_E \le \sum_{\boldsymbol{\epsilon}} 2^{-w(\boldsymbol{\epsilon})} P(\boldsymbol{\epsilon}) \qquad (4.9.20)$$

where the summation is over all error sequences starting at a given node and terminating when the two paths merge. This probability includes averaging over all transmitted sequences. To determine the bit error probability at any given node, we observe that, for any pair of sequences u and u′ with the resulting error



sequence ε, the number of bit errors associated with this error event is w(ε). Hence

$$P_b \le \sum_{\boldsymbol{\epsilon}} w(\boldsymbol{\epsilon})\,2^{-w(\boldsymbol{\epsilon})} P(\boldsymbol{\epsilon}) \qquad (4.9.21)$$

Using the bound (4.9.18), the bounds on probabilities (4.9.20) and (4.9.21) can be bounded further as

$$P_E \le \sum_{\boldsymbol{\epsilon}} 2^{-w(\boldsymbol{\epsilon})}\prod_{k=-N}^{N-1}\exp\left[-\frac{\epsilon_k}{N_0}\left(\epsilon_k h_0 + 2\sum_{i=1}^{\mathcal{L}-1}\epsilon_{k-i}h_i\right)\right] \qquad (4.9.22)$$

and

$$P_b \le \sum_{\boldsymbol{\epsilon}} w(\boldsymbol{\epsilon})\,2^{-w(\boldsymbol{\epsilon})}\prod_{k=-N}^{N-1}\exp\left[-\frac{\epsilon_k}{N_0}\left(\epsilon_k h_0 + 2\sum_{i=1}^{\mathcal{L}-1}\epsilon_{k-i}h_i\right)\right] \qquad (4.9.23)$$

The evaluation of (4.9.22) and (4.9.23) is facilitated by the use of the error-state diagram. The dimensionality of the diagram is 3^{ℒ−1}, since each pair of paths at a given node will differ in each state component by 0, +1, or −1, with +1 and −1 being equally likely. The all-zero error state is the initial and final state, as usual; Figs. 4.22 and 4.23 illustrate the error-state diagram for ℒ = 2 and ℒ = 3. Note that the weighting factors of (4.9.22) are accounted for by preceding the branch transfer function by a factor of 1/2 if the transition involves a discrepancy (error) between states. If the bit error probability is desired, it is for exactly these transitions that a bit error is made. Hence the factor I should also be inserted on these branches. P_b is then obtained by differentiating the generating function with respect to I and setting I = 1, just as in Sec. 4.4 for convolutional codes.

For the case ℒ = 2 of Fig. 4.22, in the complete error-state diagram of Fig. 4.22d, the +1 and −1 states are equivalent^{20} and can be combined into a single state resulting in the simpler error-state diagram of Fig. 4.22e. It follows directly from this that

$$P_b \le \frac{d}{dI}\left[\frac{a_0 I}{1 - (a_1 + a_2)I/2}\right]_{I=1} = \frac{a_0}{[1 - (a_1 + a_2)/2]^2}$$

^{20} There is no way to distinguish these states by observing branch values.



(Figure: (a) digital equivalent model for ℒ = 2; (b) branch metric generator; (c) branch metric generator for error-state diagram; (d) error-state diagram for bit error computation; (e) reduced error-state diagram and bit error bound.)
Figure 4.22 ISI channel example for ℒ = 2.



This result is of particular interest since it applies to duobinary transmission, where each transmitted signal is made a pulse of double width, that is

$$h(t) = \begin{cases} \sqrt{\mathcal{E}/2T} & 0 \le t < 2T \\ 0 & \text{otherwise} \end{cases}$$

For this case, it is easily verified from (4.9.7) that

$$h_0 = \mathcal{E} \qquad h_1 = \mathcal{E}/2 \qquad h_i = 0 \quad \text{for all } i \ge 2$$

Thus

$$P_b \le \frac{4e^{-\mathcal{E}/N_0}}{\left[1 - e^{-2\mathcal{E}/N_0}\right]^2}$$



Of particular importance is the fact that the asymptotic exponent is not degraded 
relative to the case without intersymbol interference. 
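As a quick numerical check of the duobinary coefficients (a sketch; unit pulse energy, the integration grid, and the helper name are assumptions), the fragment below evaluates (4.9.7) by a Riemann sum:

```python
import math

def isi_coefficients(h, T, dt=1e-4, t_min=-2.0, t_max=4.0, num=3):
    """Approximate h_i = integral of h(t) h(t - iT) dt  (Eq. 4.9.7)."""
    grid = [t_min + k * dt for k in range(int((t_max - t_min) / dt))]
    return [sum(h(t) * h(t - i * T) for t in grid) * dt for i in range(num)]

# Duobinary pulse of (assumed) unit energy: h(t) = sqrt(E/(2T)) on [0, 2T).
E, T = 1.0, 1.0
pulse = lambda t: math.sqrt(E / (2 * T)) if 0.0 <= t < 2 * T else 0.0

h0, h1, h2 = isi_coefficients(pulse, T)
print(round(h0, 3), round(h1, 3), round(h2, 3))   # -> 1.0, 0.5, 0.0
```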

The error-state diagram shown in Fig. 4.23d for the ℒ = 3 case has four equivalent pairs of states which can be combined to form the simpler reduced error-state diagram shown in Fig. 4.23e. This type of reduction always occurs, so that in general the error-state diagram for intersymbol interference of memory length ℒ has a reduced error-state diagram of (3^{ℒ−1} − 1)/2 nonzero states.

(Figure: (a) digital equivalent model for ℒ = 3 with correlated Gaussian noise n_k; (b) branch metric generator; (c) branch metric generator for error-state diagram; (d) error-state diagram for bit error computation; (e) reduced error-state diagram.)
Figure 4.23 ISI channel example for ℒ = 3.


We have treated here the case of a known modulator/channel filter followed 
by additive white Gaussian noise. The output of the filter is modeled as some 
linear weighting of a finite number of data symbols, and it can be viewed as a " real 
number convolutional code " which we decode with the Viterbi algorithm. Except 
for the error-state diagrams, there is nothing special about linear intersymbol 
interference channels of this type. Just as the Viterbi algorithm is a maximum 
likelihood decoding algorithm for arbitrary nonlinear trellis codes, it can also be 
applied as a maximum likelihood demodulator for data sequences that enter any 
channel which consists of an arbitrary (possibly nonlinear) but noiseless finite 
memory part followed by a memoryless noisy part. The noiseless finite memory 
part of the channel acts like a trellis code which the Viterbi algorithm demodula 
tor decodes in the presence of additive memoryless noise. 



4.10 CODING FOR INTERSYMBOL INTERFERENCE 
CHANNELS* 

Considering the commonality in structure of the optimum decoder for convolu 
tional codes and the optimum demodulator for intersymbol interference channels, 
it is reasonable to expect that a combined demodulator-decoder for coded trans 
mission over intersymbol interference channels would have the same structure as 
each component. This is readily shown by examining the intersymbol interference 
channel model of Fig. 4.21 preceded by a convolutional encoder, and the 
modifications this produces in Eqs. (4.9.2) through (4.9.10). 

With coding, the channel output signal, prior to addition of the AWGN, is

$$x(t) = \sum_{k=-Nn}^{Nn-1} x_k\,h(t - kT) \qquad (4.10.1)$$

where x_k is now the kth code symbol and hence, for a constraint length K convolutional code, it depends on K binary data symbols for a rate 1/n code, or on K b-dimensional binary data vectors for a rate b/n code. Thus the 2Nn code symbols {x_k} are generated from the 2N data vectors u_l by the expression

$$x_k = \gamma_{1+k-n\lfloor k/n \rfloor}\big(\mathbf{u}_{\lfloor k/n \rfloor},\ \mathbf{u}_{\lfloor k/n \rfloor - 1},\ \ldots,\ \mathbf{u}_{\lfloor k/n \rfloor - (K-1)}\big) \qquad -Nn \le k \le Nn - 1 \qquad (4.10.2)$$

where ⌊v⌋ is the greatest integer not greater than v and u_l = 0 for l < −N. For a rate 1/n code, the data vectors u_l become binary scalars and the γ_j function is the scalar projection of the vector formed from the terms of the jth tap sequence of the code (g_{0j}, g_{1j}, ..., g_{K−1,j} in Fig. 4.1). For a rate b/n code, the u_l are b-dimensional binary vectors and the γ_j function is the corresponding matrix operation on the data matrix (e.g., in Fig. 4.2b and c, this matrix is formed from the jth rows of the matrices g_1 and g_2).

Upon replacing (4.9.2) by (4.10.1) and (4.10.2), the remainder of the derivation

* May be omitted without loss of continuity. 




of Sec. 4.9 proceeds as before with u_mk replaced by x_mk. Thus (4.9.10) becomes, upon dropping the first subscript m for notational simplification

$$N_0\,\lambda_k\big(y_k;\ x_k, x_{k-1}, \ldots, x_{k-(\mathcal{L}-1)}\big) = 2x_k y_k - x_k^2 h_0 - 2x_k\sum_{i=1}^{\mathcal{L}-1}x_{k-i}h_i \qquad (4.10.3)$$

But from (4.10.2), we have that x_k depends on

$$\mathbf{u}_{\lfloor k/n \rfloor},\ \mathbf{u}_{\lfloor k/n \rfloor - 1},\ \ldots,\ \mathbf{u}_{\lfloor k/n \rfloor - (K-1)}$$

and hence similarly, for i = 1, 2, ..., ℒ − 1, x_{k−i} depends on

$$\mathbf{u}_{\lfloor (k-i)/n \rfloor},\ \mathbf{u}_{\lfloor (k-i)/n \rfloor - 1},\ \ldots,\ \mathbf{u}_{\lfloor (k-i)/n \rfloor - (K-1)}$$

Thus the kth branch metric of (4.10.3) can be written as the function of the ⌈(ℒ − 1)/n⌉ + K data vectors

$$\mathbf{u}_{\lfloor k/n \rfloor},\ \mathbf{u}_{\lfloor k/n \rfloor - 1},\ \ldots,\ \mathbf{u}_{\lfloor (k-\mathcal{L}+1)/n \rfloor - (K-1)}$$

(where ⌈v⌉ denotes the least integer not less than v) by substituting (4.10.2) with the appropriate index for each term x_k and x_{k−i} in (4.10.3).

Thus, maximizing (4.10.3) over all possible data paths {u_k} is exactly the same problem as maximizing (4.9.10) for uncoded intersymbol interference channels, or (4.9.11) for coded channels without interference. The only differences are that, for those cases, the state vectors are of dimensions ℒ − 1 and K − 1, respectively, while here their dimension is ⌈(ℒ − 1)/n⌉ + (K − 1), and the functions which define the branch metrics are somewhat more elaborate, being a composite of the previous two.

However, once the branch metrics are formed, the maximizing algorithm is again the VA, exactly as before. Thus, the algorithm is again expressed by (4.9.12) but with ℒ − 1 replaced by ⌈(ℒ − 1)/n⌉ + (K − 1). We conclude thus that the optimum demodulator-decoder for coded intersymbol interference channels is no more complex, other than for dimensionality, than the corresponding uncoded channel.
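The dimensionality comparison can be made concrete with a one-line count of trellis states (assumed example values; binary data and the state definition above):

```python
import math

def combined_states(b, n, K, L):
    """Trellis states for a rate b/n, constraint length K code over an ISI
    channel of memory L: 2^(b * (ceil((L-1)/n) + K - 1))."""
    return 2 ** (b * (math.ceil((L - 1) / n) + K - 1))

# Example (assumed): rate 1/2, K = 7 code, ISI memory L = 3.
print(combined_states(1, 2, 7, 3))   # -> 128 states (64 without the ISI)
```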

Unfortunately, however, the calculation of error probability is greatly complicated here, even though the error probability development in Sec. 4.9, beginning with (4.9.13) and leading to the expression (4.9.18) for P(ε), can proceed with u_k and u_k′ replaced by x_k and x_k′, respectively, and ε_k = ½(u_k − u_k′) of (4.9.15) through (4.9.18) replaced by

$$\epsilon_k = \tfrac{1}{2}(x_k - x_k') \qquad (4.10.4)$$

But the difficulty arises when we attempt to average over all possible error events as in (4.9.20). For, in the uncoded case, all error sequences are possible and, given that the data symbol is in error (ε_k ≠ 0), it is equally likely to be +1 or −1. This is not the case for coded transmission. First of all, not all error sequences ε are possible; for if the correct and incorrect path code symbols are identical in the kth position, then

$$\epsilon_k = \tfrac{1}{2}(x_k - x_k') = 0$$



286 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 

a condition dictated by the code. To make matters worse, if c k ^ 0, it is not 
necessarily equally likely to be + 1 or - 1 ; this too depends on the code. 
In principle, an expression similar to (4.9.20) can be written in the form 

PE< Z Z /(x,x )P (e = i(x-x )) (4.10.5) 

all correct all incorrect 
paths paths 

where /(x, x ), the distribution function, is dictated by the code and by the fact 
that all data sequences u are equally probable. While this calculation can be 
carried out with considerable effort in a few very simple cases (Acampora [1976]), 
it provides little insight into the general problem. Using ensemble average 
techniques to be developed in Chap. 5, we shall obtain in Sec. 5.8 some rather 
general and revealing results on the effect of intersymbol interference on the 
ensemble of time-varying convolutional codes. 

Before concluding, however, it is worth noting that some simplification is 
possible in the branch metric expression whenever n > 5 1 ; for the total 
memory then becomes 

\(< - l)/nl + (K- l) = K n>-\ (4.10.6) 

and the branch metric function given by (4.10.3) with (4.10.2) can be expressed 
functionally as 

^Afe "Lfc/nj, "lk/i.J-1. > U lfc/nj-*) (4.10.7) 

Now, while it would appear that the condition n > & - 1 is overly restrictive, this 
is not at all the case. For suppose & - 1 = 3, and the code rate b/n = 1/2 ; without 
any change in the code implementation, we may treat it as if it were a rate 
b/n = 2/4 code and thus achieve the desired condition for (4.10.6). Of course, the 
data vectors are now two-dimensional rather than scalar, but all code representa 
tions from shift register implementation to state diagram can be redrawn in this 
way without changing the code symbols generated and consequently the perfor 
mance in any way. For the code itself (not considering the intersymbol interfer 
ence channel), the state vector dimensionality is the same as before but the 
connectivity of the state diagram increases; yet the generating function does not 
change in any way (see Prob. 4.26), and thus it is clear that all that has changed is 
the representation. 



4.11 BIBLIOGRAPHICAL NOTES AND REFERENCES 

The concept of convolutional codes was first advanced by Elias [1955]. The first 
important decoding algorithm, known as sequential decoding, was introduced by 
Wozencraft [1957] and refined by Reiffen [I960]. This material, and the later more 



CONVOLUTIONAL CODES 287 

efficient algorithm due to Fano [1963], led to an important class of decoding 
techniques which will be treated in Chap. 6. The material in this chapter, while 
chronologically subsequent to these early developments, is more fundamental 
and, for tutorial purposes, logically precedes the presentation of sequential decod 
ing algorithms. 

Sections 4.2 through 4.6 follow primarily from three papers by Viterbi [1967a], 
[19676], and [1971]. The last is a tutorial exposition which contains most of the 
approach of Sees. 4.2 through 4.5 and 4.7. The material in Sec. 4.6 appeared in the 
second of the above papers. The so-called Viterbi algorithm was originally pre 
sented in the first paper as " a new probabilistic nonsequential decoding algor 
ithm." The tutorial exposition [1971] appeared after the availability in preliminary 
form of some most enlightening clarifications by Forney (which later appeared in 
final form in Forney [19726], [1973], and [1974]); to this work, we owe the concept 
of the trellis exposition of the decoder. In the same work, Forney also recognized 
the fact that the VA was a maximum likelihood decoding algorithm for trellis 
codes. Omura [1969] first observed that the VA could be derived from dynamic 
programming principles. 

The state diagram approach as a compressed trellis and the generating func 
tion analysis first appeared in Viterbi [1971]. The first code simulation which led 
to the recognition of the practical value of the decoding algorithm was performed 
by Heller [1968]. The code search leading to Table 4.1 was performed by Oden- 
walder [1970]. 

Feedback decoding traces its conceptual roots to the threshold decoder of 
Massey [1963]. Important code search results which revealed properties of convo- 
lutional codes necessary for feedback decoding appeared in Bussgang [1965]. The 
exposition in Sec. 4.8 follows primarily from the work of Heller [1975]. 

The important realization that maximum likelihood demodulation for 
intersymbol interference channels can be performed using the VA is due to Forney 
[19720]. The development of Sec. 4.9 follows this work conceptually, although the 
derivation is basically that of Acampora [1976]. Section 4.10 on combined demod 
ulation and decoding for intersymbol interference channels follows the work of 
Omura [1971], Mackechnie [1973], and Acampora [1976]. 



PROBLEMS 

4.1 Draw the code tree, trellis, and state diagram for the K = 2, r = % code generated by 



Data 





Code 



Figure P4.1 



288 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 



4.2 Draw the trellis and state diagram for the K = 3, r = ^ code generated by 




Figure P4.2 



4.3 Draw the state diagram for the K = 4, r = \ code generated by 




Figure P4.3 



4.4 Draw the code tree, trellis, and state diagram for the K = 2, r = J code of Fig. 4.2c. 

4.5 Given the K = 3, r = \ code of Fig. 4.4 of the text, suppose the code is used on a BSC and the 
received sequence for the first eight branches is 

00 01 10 00 00 00 10 01 

Trace the decisions on a trellis diagram labeling the survivor s Hamming distance metric at each node 
level. If a tie occurs in the metrics required for a decision, always choose the upper (lower) path. 

4.6 (a) Solve for the generating function (in D only) of the labeled state diagram of Fig. 4.6 of the text 
and show that the minimum distance between paths is 3. 

(b) Repeat for the K = 2 code of Prob. 4.1 and show that the minimum distance between paths 
is 3. 

(c) Repeat for the K = 4 code of Prob. 4.3 and show that the minimum distance between paths 
is 6. 

4.7 Determine the node error probability bounds and the bit error probability bounds for all codes of 
Prob. 4.6 for a binary-input, output-symmetric channel for which z is known. 

4.8 Verify inequality (4.5.5) of the text. 

4.9 It is of interest to determine the maximum value of the free distance for any fixed or time-invariant 
code of a given constraint length and rate. The following sequence of steps leads to an upper bound, 
nearly achievable for low K, for rate 1/n codes. Consider the rate l/ fixed convolutional code whose 
generator matrix is given by (4.1.1) with all rows shifted versions of the first row. 

(a) Show that for any binary linear code, if we array the code in a matrix, each of whose rows is a 
code vector, any column has either all zeros or half zeros and half ones. 

(b) Consider the set of all finite-length data sequences of length no greater than k. Show that the 



CONVOLUTIONAL CODES 289 

code generated by these finite-length data sequences has length (K - 1 + k) branches, or (K - 1 -I- k)n 
symbols, and show that the average weight (number of" 1 "s) of all codewords (excluding the all-zeros) 
is no greater than 



(c) Using (b) show that the code has minimum distance between paths (free distance) d f < 
for any k. 

(d) Let k vary over all possible integers and thus show that (Heller [1968]) 

2"- 1 (K- l + k)n 
d f < min - - 

k z ~ 

That is, for small K and n = 2, r = \ this yields 





Upper bound on d {ree 


Achievable 


K 


(integer) 


(noncatastrophic) 


2 


4t 


3 


3 


5 


5 


4 


6 


6 


5 


8t 


7 



t Achievable with catastrophic code. 



4.10 (Van de Meeberg [1974]) For a BSC where Z = ^/4p(l - p) show that (4.4.4) can be replaced by 

P d <Z d+1 when d is odd 

This can be shown by examining the decision region boundary and the Bhattacharyya bound on 
decoding error when d = 2t and when d = 2t 1. Using this show that 

P e (j) < i[(l + Z)T(Z) + (1 - Z)T(-Z)] 

can replace (4.4.7) when we have a BSC. 

Hint: First show that for two codewords at odd distance d, the decoding error probability 
(maximum likelihood) is the same as for two codewords at distance d + I. 

4.11 Consider a rate l/n fixed binary convolutional code and define the code generator polynomials 



Show that this convolutional code is catastrophic if and only if all the n generator polynomials have a 
common polynomial factor of degree at least one. Use the fact that for a catastrophic code some 
infinite-weight information sequence will result in a finite-weight code sequence. 

4.12 Use the result of Prob. 4.1 1 to show that, for rate l/n fixed binary convolutional codes the relative 
fraction of catastrophic codes in the ensemble of all convolutional codes of a given constraint length is 
l/(2" 1), which is independent of constraint length. 

4.13 For any integer n, find a rate (n - l)/n single-error-correcting code and its feedback decoding 
implementation, which is a generalization of Figs. 4.17 and 4.18. 

4.14 Consider a rate l/n convolutional code with constraint length K. Let a(l) be the number of paths 
that diverge from the all-zeros path at node j and remerge for the first time at node j + K + I. 



290 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 



(a) Show that 



/ = 

1 < /< K 



(b) Directly from (a) prove that 



T K (L}= 



- L) 



1 - 2L + L* 

(c) Noting that a(l) is the number of binary sequences of length / - 1 which do not have K 
consecutive " "s, show that 



Hence when l2~ (K ~ i} < 1, we have the approximation 

0(0*2 - 

For most codes with large constraint lengths K, this approximation would be valid for all values of / 
that contribute significantly to error probabilities. 

4.15 Given a K = 3, r = \ binary convolutional code with the partially completed state diagram 
below, find the complete state diagram and sketch a diagram for this encoder. 




Figure P4.15 

4.16 Suppose the K = 3, r = 3 convolutional code given in Prob. 4.2 is used over a BSC. Assume 
that the initial code state is the (00) state. At the output of the BSC, we receive the sequence 

y = (101 010 100 110 Oil rest all "0") 

Find the maximum likelihood path through the trellis and give the decoded information bits. In case of 
a tie between any two merged paths, choose the upper branch coming into the particular state. 



CONVOLUTIONAL CODES 291 

4.17 In Fig. 4.10 let b , c , and d be dummy variables for the partial paths to the intermediate nodes. 
Let 



fi-l 

UJ 



and write state equations of the form 

$ = A$ + b 

Find A, a 3 x 3 matrix, and vector b. , can be found by 

^(I-Aj- b 

where I is the 3x3 identity matrix. Solve this to find T(D, L, /) and check your answer with (4.3.3). 

4.18 In Prob. 4.17 consider the expansion 

(I - A)~ 1 = I + A + A 2 + A 3 + A 4 + 

(a) Use the Cayley-Hamilton theorem to show that for L = 1 and / = 1 

A 3 = DA 2 + DA 

Hint: The Cayley-Hamilton theorem states that a matrix A satisfies its characteristic equation 

p(/)= A -All =0 

(b) Use (a) to find (I - A)~ 1 by the above expansion, and then find T(D) for Prob. 4.17. 

(c) Show that terms in A* decrease at least as fast as D k/2 . 

(d) Repeat (a) and (b) for arbitrary L and /. 

4.19 Given the K = 3, r = \ code of Fig. 4.4 of the text, suppose the code is used on a BSC and the 
transmitted sequence for the first eight branches is 

11 01 01 00 10 11 00 

Suppose this sequence is received error free, but somehow the first bit is lost and the received sequence 
is incorrectly synchronized giving the assumed received sequence 

10 10 10 01 01 10 

This is the same sequence except the first bit is missing and there is now incorrect bit synchronization 
at the decoder. Trace the decisions on a trellis diagram, labeling the survivor s Hamming distance 
metric at each node. 

Note that, when there is incorrect synchronization, path metrics tend to remain relatively close 
together. This fact can be used to detect incorrect bit synchronization. 

4.20 Suppose that the Viterbi decoder uses only integer branch metrics ye (/i, / 2 , -, 0/2}* 
where J is even, giving rise to a channel with input and 1, transition probabilities PQ(J) with 
Pot/) ^ Po(-7) for) > 0, and P,(j) = P (-j). Let 

FI(z)= P (z)zJ and A(z)= a k z k 

J=-ij2 k=-N 

and define 

M(z)}_ = l/2 flo + I a k 



292 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 



(a) Show that the pairwise error probability for an incorrect path at Hamming distance d from 
the correct path upon remerging is exactly 



(b) If the code-generating function is T(D, I) and 

dT(D, I) 



dl 
show that 



/=! d=d, 



P b < X b(d)P d 
d=d f 

(c) In (a) show that 



where Z = min n(z) 

z< 1 
and using (b) show that 

1 dT(D, /) 



P ^2 

(d) For the BSC, show that Z = Z. 

4.21 For the DMC with input alphabet #", output alphabet ^, and transition probabilities 
|: y e ^, x e #"}, define the Bhattacharyya distance between any two inputs x, x e #" as 



For two sequences x, x e #" N define the Bhattacharyya distance as 



Show that, for any two diverging and remerging paths of a trellis whose Bhattacharyya distance is d, 
(4.4.3) generalizes to 

P d <e~ d 

and (4.4.5) generalizes to 



where a(d) is the number of paths of Bhattacharyya distance d from the transmitted path. What is 
necessary to be able to define generating functions that generalize (4.4.7) and (4.4.13)? 
4.22 Consider the r = \ convolutional code of Prob. 4.2. Suppose each time an information bit enters 
the register, the three code bits are used to transmit one of eight orthogonal signals over the white 
Gaussian noise channel. At the output of the channel, a hard decision is made as to which one of the 
eight signals was sent. This results in a DMC with transition probabilities 



CONVOLUTIONAL CODES 293 

Following the suggested generalization of Prob. 4.2 1, find the generating function T(D, /) and give the bit 
error bound of (4.4.13). Repeat this problem when the outputs of the channel are not forced to be 
hard decisions. 
4.23 Show that the bound in (4.6.15) can be made tighter by 



i -i- Z : 

k = 



4.24 In Fig. 4.9 let 



c x (D, t) = generating function for all paths that go from state 
a to state x in exactly r branches. 




x = b, c, d 
Let 

(a) Show that 



and find A. 
(b) Find 



and show that 

T(D) = [0 D 2 0] f $(0, t) 

r=l 

(c) Suppose we have a BSC with crossover-probability p and in the Viterbi decoder we truncate 
path memory after T branches and make a decision by comparing all metrics of surviving paths 
leaving the given node at the truncation point. Metrics are computed only for the T branches following 
the truncation point. Show that the probability of a node error at node j is bounded by 

P e (j, T) < [0 D 2 0] X (A + [1 1 1R(A T) 

r = 1 

where D = ^/4p( 1 - p). Give an interpretation of each term in the bound. Note that ^(D, r) is easily 
found recursively with initial condition 

pn 

UD, n= o 



(d) Obtain a closed-form expression for the bound in (c) for T = 7. 

Jfi nf : Use the result of Prob. 4.18(fc). 

4.25 Consider a binary convolutional code, a BSC with parameter p, and a feedback decoder of 
memory length L. In the usual state diagram of the convolutional code, label distances from the 
all-zeros path and define 

^(D, f) = generating function for all paths that initially leave the all-zero state and go to state x in 
exactly r branches (return to the all-zeros state is allowed) 



294 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 



(a) Show that the probability that the decoded path leaves the correct path at node j, P e (j, L), is 
bounded by 



< I UA L) 



where the summation is over all states including the all-zeros state. 

(b) Evaluate a closed-form expression for the bound in (a) for L = 6 and a code with the distance- 
labeled state diagram below: 



6=01 




c= 10 



Figure P4.25 



Hint: Find a recursive equation for 




D 2 






and use the Cayley-Hamilton theorem [see Prob. 4.18(6)]. 

4.26 Treat the K = 3, r = % code of Fig. 4.2a as if it were an r = | code; i.e., define a branch as the 
four symbols generated for every two data bits. Draw the corresponding state diagram and determine 
T(D, /). From this, compute the upper bound on P b using (4.4.13) and verify that the result is the same 
as computed directly from the original state diagram for the r = \ code. 

4.27 Show that the noise components of the matched filter output in Fig. 4.21 have covariance 



Instead of the matched filter, assume that the suboptimum " integrate and dump " filter is used. That is, 
assume that the observables are 



h = y(t)p(t - kT] dt k= -N, -N+ 1, ..., N- 1 



CONVOLUTIONAL CODES 295 



where 



0<r 



otherwise 

Show that the maximum likelihood demodulator based on observables y- N , y- N + l , ..., y N -i is 
realized with the Viterbi algorithm with the bit error bound analogous to (4.9.23) given by 

. N ~ l 1 



where 

h k _ j = I h(t - jT)p(t - kT) dt 

- oc 

For J^ = 2, give the state diagram, determine the transfer function, and find the generating function 
for the bit error bound in terms of K and /z\. 

4.28 (Whitened Matched Filter, Forney [1972a]) Consider the intersymbol interference example of 
Fig. 4.22 where & = 2. Suppose the matched filter outputs {y k } are followed by the following digital 
filter with outputs {y k }. 






Figure P4.28 

Here we choose / and/ t to satisfy 



fl 



/o/i - *i 

The matched filter combined with this transversal filter is called a whitened matched filter. 
(a) Show that the outputs {y k } are given by 



> * =/o"k 



where 



(b) Show that the maximum likelihood demodulator based on observables {y k } is realized with 
the Viterbi algorithm, and give the error-state diagram for this case. 

(c) Show that the bit error bound based on the error-state diagram in (b) is also given by (4.9.24). 

(d) Generalize the above results to arbitrary y. First define 



H(D)= 



296 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 

and show that there exists a polynomial of degree < 1 



such that 

Show that the transversal filter with inputs {yj and outputs {y k } that satisfy the difference equation 

I fi~y k +i = yk 

result in outputs satisfying the form 

<e- i 
~ V f. 4- ~ 

y k ^j J i k i k 

i = 

where 



(e) Describe the error-state diagram when using the whitened matched filter in (d) and derive the 
bit error bound 



(/) In (d), show that 

- 



4.29 (a) For the rate r = 2~ K orthogonal convolutional encoder shown, consider a noncoherent 
demodulator on each branch with linear combining so that path metrics are formed as the sums of the 
branch metrics z/m). Using techniques of Sees. 2.12 and 4.6, show that the probability of error caused 
by an incorrect path merging after J unmerged branches is bounded by 



< max [ [ 

0<p< 1 j=l 

= Z J 

where 

1 
Z = max 



From this, derive the bound on the bit error probability (Viterbi and Jacobs [1975]). 

Z* 



CONVOLUTIONAL CODES 297 



Select one of 2 K 
orthogonal-frequency signals 






- \)T<t<jT 



*- 



integer 




r,(0) 



1C 



2* noncoherent 

demodulators 

on each branch 



Viterbi decoder 



y(T)exp 



K )t 



dl 



= 0, 1, . ., 2* - 



Figure P4.29 



L30 (a) For the same noncoherent channel and demodulator-decoder as in Prob. 4.29, show that the 
ate r = \ quaternary code generated by the K = 5 encoder shown above yields 

1 + Zfl(Z) 



1 + Zb(Z) 
vhere a(Z) and b(Z) are polynomials in Z with integer coefficients. 




Select 
one of four 
orthogonal- 
frequency 

signals 



Figure P4.30 



298 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 



(b) Generalize the results to a rate r = 2 k code of constraint length K = 2k + 1 and a 2*-ary 
orthogonal signaling alphabet where k is any integer. This has been called the class of semiorthogonal 
convolutional encoders. 

4.31 Suppose the K = 3, r = j convolutional code shown in Fig. 4.2a is used over the intersymbol 
interference channel with < = 2 shown in Fig. 4.22a. Assume the whitened matched filter (see 
Prob. 4.28) so that the discrete model for the system becomes 




Finite-state machine 
Figure P4.31 

Here, the in the convolutional encoder is a modulo-2 sum and u is the binary data sequence with 
symbols from (0, 1}. The summer in the intersymbol interference is a real sum. For each binary symbol 
into the convolutional encoder, two coded symbols from { 1, 1} enter the intersymbol interference 
channel and there are two corresponding outputs of the channel. 

(a) Regard the combined convolutional encoder and intersymbol interference as a single finite- 
state machine with binary inputs {u k } and pairs of outputs from {z k }. Defining the state of the system as 
the binary sequence (a, b, c) shown as the contents of the encoder register, sketch the state diagram for 
the device with pairs of the outputs (z k , z k+ 1 ) on the branches from state to state. 

(b) Suppose the transmitted data sequence is u = 0. Consider another data sequence u where 
4 = <5fco That is MQ = 1 and u k = 0, k 0. What is the pairwise error probability P (u -* u )? 

(c) Assuming transmitted data sequence u = 0, construct a state diagram which will give a 
generating function with which we can bound P (u) and P b (u). Express the generating function in terms 
of vectors and matrices. 

4.32 Consider a channel with memory j? = 2, input alphabet SC = (0, 1}, output alphabet 
3f = {a, b, c, d] which consists of a noiseless memory part followed by a DMC as shown. 













i 


**e9C 




f(v v ^ 


z k eZ 


DMC 


i 






f(.x k ,x k _ 1 ) 




P(y\z) 


i 












i 

j 



Channel 



Figure P4.32 



CONVOLUTIONAL CODES 299 



Here 



t =/(** **-i 



x k = 0, x k _ l = 1 
x = 1. *- =0 



and 



= y = {a, fc, c, 



pq p* pq 



p 2 pq q 2 pq 

pq P pq q 



l-p>p 



(a) Assume x = and equiprobable binary data symbols x k (k = 1, 2, ...) are sent over the 
channel. For the channel output sequence 



> i = a y 2 = b y 3 = c y 4 = b y k = 
determine the maximum likelihood data sequence x k (k = 1, 2, ...). 
Hint: Consider branch metric - 



-(W-B 



k>5 



q nl \P> 

(b) Determine the union-Bhattacharyya bound on P b (x), the bit error probability when x = is 
sent. 

4.33 (Unknown Intersymbol Interference Channel) For the linear channel of memory y and the 
suboptimum " integrate and dump" filter discussed in Prob. 4.27, determine the performance degrada 
tion when the Viterbi demodulator is designed under the mistaken impression that the impulse- 
response is n(t) rather than h(t), the true channel impulse-response. Here, define 

x<v 
h k _j=\ fi(t - jT)p(t - kT) dt 

and h k _ t as in Prob. 4.27. 

(a) Show that, if the demodulator is realized with the Viterbi algorithm designed for /j(r), then the 
bit error bound is given by 



where ? is the summation over all incorrect sequences u which diverge from the correct sequence 
u at some fixed initial time and remerge with it later, w(u, u ) is the weight of the error sequence 
between u and u , and 



(y-\ 
X 
= 



+ 4 Z X 



Assume that h(t) and n(t) are zero for t > <?T. 

(b) For evaluation of the bit error bound using a generating function, we need a state diagram 
in which each state consists of a pair of states S and S\ where S is the correct state and 5 is an incorrect 
state. Initial and final states are states in which S = S . There are 2*~ 1 initial and 2^~ l final states. 
Introduce an initial dummy state and a final dummy state and note that there is probability 2 ~ ( ^~ l) of 
transition from the initial dummy state to each initial state. For = 2, consider the pair state diagram 
as shown below and find all transitions and the transfer function. Note that this state diagram can be 
reduced. 



300 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 




(c) Show that the bit error bound is given by 

cosh | -""!_ " I cosh p^fo"*- , exp 



Figure P433 



- /To) 



II cosh I -^ ^- I exp - 

\ ** / o 



CHAPTER 

FIVE 

CONVOLUTIONAL CODE ENSEMBLE 

PERFORMANCE 



5.1 THE CHANNEL CODING THEOREM FOR TIME-VARYING 
CONVOLUTIONAL CODES 

This chapter treats for convolutional codes the same ensemble average error 
bounds which were studied in Chap. 3 for block codes. However, useful tight 
bounds can be found only for time-varying convolutional codes, corresponding to 
the matrix (4.1.1) with gj fc) and gf~ l) not necessarily equal. (For a fixed convolu 
tional code, each row is a shifted replica of every other row.) 

For any convolutional code, we have from (4.4.1) and (4.4.2) that the node 
error probability at thejth node of a maximum likelihood decoder employing the 
Viterbi algorithm is bounded by 

F e (j)<Pr| U [AM(x},x,.)>0]|< I Pr [AM(x;, Xj .) > 0] (5.1.1) 

\Xj fff (j) x je& U) 

where x} is any incorrect path stemming from node 7, 3"(j) is the set of all such 
incorrect paths, x ; is the correct path after node;, and AM(x}, x,) is the difference 
between the metric increment of incorrect path x} and correct path \j over the 
branches of their unmerged span. 

For rate l/n (binary-trellis) convolutional codes, we determined in Sec. 4.6 the 
structure of all paths through the trellis. In particular, the bound (4.6.15) indicated 
that for a constraint length K code, there are less than 2* paths which diverge from 
the correct path at node; and remain unmerged for exactly K + k branches. This 
conclusion can be arrived at alternatively by the following argument. Without loss 
of generality, since a convolutional code is linear, we may take the all-zeros path 

301 



302 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 

to be the correct path. Then any incorrect path which diverges from the correct 
path at node j and remains unmerged for K + k branches must have binary data 
symbols 

1, u j+l , Uj+ 2 , ..., u j+k . l9 1,0,0, ...,0 

l-K-l-H (5.1.2) 

where u j+l , ..., Mj+ fc _i is any binary vector containing no strings of more than 
K 2 consecutive zeros. While the exact number of such incorrect paths is best 
computed by the generating function technique of Sec. 4.6, 2 k is an obvious 
upper bound on the number of such paths. We shall concentrate on rate l/n codes 
initially, and later generalize to rate b/n. 

Now, as was done for block codes in Chap. 3, we average this error probabi 
lity bound over all possible codes in the ensemble. We begin by noting that each 
term of the sum in the rightmost bound of (5.1.1) is a pairwise error probability 
between the correct path x, and the incorrect path x} over the unmerged segment 
of K + k branches, where k > 0. Using the Bhattacharyya bound (2.3.15) for each 
such term, we have 



Pr [AM (xj, x,) > 0] < X Jp N (y\x j)p N (y\Xj) (5.1.3) 

y 

where N = (K + k)n is the number of symbols on the unmerged segment of length 
K + k branches and y is the received vector for this unmerged segment. 

We must average over all possible values of X, and x} in the ensemble of 
time-varying convolutional codes. Suppose, as for block codes, that the channel 
input alphabet is Q-ary and that the time-varying convolutional code is generated 
by the operation (Fig. 5.1) 



,-m 

(5.1.4) 



where g}}*, g ( f, . . . , gL j are time-varying binary connection vectors of dimension /; 
w,_x + j, HJ-K + 2 > > U i are binary data symbols; v, is the ith binary branch vector 
with / symbols (where / is a multiple of n) and v ,- is an arbitrary binary branch 
vector with the same dimensionality as v, . Here v , plays the same role as in the 
linear block code ensemble of Sec. 3.10, and is required for nonbinary and asym 
metric binary channels. J^(v f ) is a memoryless mapping from sequences v, to 
sequences x,- of n Q-ary symbols (Q < 2 l/n ) (see Fig. 5.1 with b = 1). 

The mapping j? must be chosen carefully, particularly to ensure that the 
ensemble over which averages will be taken is properly constructed. In the first 
part of the derivation [through (5.1.9)], we shall deal with uniform weighting on the 
ensemble, just as was done in the earlier part of Sec. 3.1. Then / and n must be 
chosen so that 2 l Q", and each binary /-tuple input sequence should be mapped 
into a unique Q-ary n-tuple output sequence. This can be achieved exactly if Q is a 
power of 2, and otherwise approximated as closely as desired by choosing / and n 



5 % 



^ "o 

2 c-e 

rz C 





3 O 

i 




O 



O 



O 




303 



304 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 

sufficiently large. This results in approximately Q" possible Q-ary sequences with 
uniform weighting if the original 2 binary sequences have uniform weighting. 
We shall consider nonuniform weighting on the Q-ary sequence below. 

Now let us consider the correct path and any incorrect path unmerged with it 
from node j to node j + K + k. If we take the correct path to correspond to the 
all-zeros data (without loss of generality), then its code sequence is v 0>J -, v j+1 , 
v o,j+2 "-> v o,j + x + k-i an d there are 2 l(K + k) possible binary sequences over the 
(K + /c)-branch unmerged span. After mapping this sequence onto the signal 
vector x , we have Q n(K+k) possible Q-ary x sequences over this span. As for 
the incorrect path in question, it must correspond to a data vector u} over 
the unmerged span of the form of (5.1.2) where u j+1 }+*_ i contains no strings 
of more than K 2 consecutive zeros. This implies then that each of the corres 
ponding branch vectors of the form vj is formed by the modulo-2 sum of v and 
at least one of the vectors g}}*, g^ , . . . , gL j [see (5.1.4)]. Thus v}, v}+ 1, . .., \ j+K + k- 1 
can be any one of 2 l(K + k) possible binary sequences over the (K + /c)-branch 
unmerged span, and therefore x} can be any one of Q n(K+k) Q-ary sequences, 
independent of what the correct sequence x 7 - may be. As a result, we may average 
the bound (5.1.3) over the Q 2N [where N = n(K + k)] possible correct and incor 
rect sequences as follows 



Pr [AAf(xJ, x,) >0] = - I Pr [AM (x}, x,) > 0] 

\L X j Xj 




which is clearly independent of the node j. 

Also, using the fact that the channel is memory less and letting q(x) = l/Q for 
all x in the channel input alphabet, we obtain 




2\N 



(5.1.5) 
where N = n(K + k) and 

K.(q)=-lnl I<?W7^rf (5.1.6) 

y x ] 

Finally, inserting (5.1.5) into the ensemble average of (5.1.1) and using 2 k as the 
upper bound on the number of incorrect paths x j diverging from the correct path 



CONVOLUTIONAL CODE ENSEMBLE PERFORMANCE 305 

at node) and remerging K + k branches later, we obtain a bound on the ensemble 
average of the node error probability for each node, namely 



,(/) < 4 I2* 

e -KnK (q) 



2- 1] 



(5.1.7) 



Since r = l/n is the rate in bits per channel symbol, to define rate in nats per 
symbol as for block codes, we let 

R = r In 2 

= (In 2)/n nats/channel symbol (5.1.8) 

and thus obtain 



, _ 2 - .-,,, 



< R < RM (5.1.9) 



where R (q) is defined in (5.1.6). 

Note also that, just as was done for block code ensemble averages in Sec. 3.1, 
we may impose a nonuniform weighting q(x) on the channel input symbols. To 
achieve this nonuniform weighting, we must choose the binary to Q-ary mapping 
of Fig. 5.1 differently from that described after (5.1.4) for uniform weighting. Now 
let / = nA and let each binary A-tuple be mapped into a Q-ary symbol. Further let 
the mapping be chosen such that exactly r, of the 2 ; binary /.-tuples map into the 
Q-ary output symbol x t , where i = 1, 2, . . ., Q, and 

Q 



Thus by choosing A, and hence /, sufficiently large, any nonuniform distribution 
can be approximated arbitrarily closely [by the distribution (r,-2~ A )] starting with 
a uniform distribution on the binary /-tuples. Thus, (5.1.9) is valid even when R (q) 
is defined with an almost arbitrary nonuniform q(x). 

The bit error probability, defined as the expected number of bit errors per bit 
decoded, can be bounded by the same v argument used in Sec. 4.6 [preceding 
(4.6.21)]. There we noted that an incorrect path which has been unmerged over 
K + k branches can cause at most k + 1 bit errors, for, in order to merge with the 
correct path, the last K - 1 data bits must coincide. Thus it follows, using (5.1.8), 
that the ensemble average of the expected number of bit errors, caused by a node 
error which begins at node y, is bounded by 



KO )]< L (^ + i)2**~" c)R (q) 

__ ^ 2 < R < /? (q) (5.1.10) 

[12 



306 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 

Comparing R (q) as defined in (5.1.6) with the Gallager function E (p, q) as de 
fined by (3.1.18), we find 

R (q) = E (l q) (5.1.11) 

which is strictly less than capacity C for all physical channels, but may equal 
capacity in certain degenerate cases as was shown in Sec. 3.2 [see (3.2.11)]. 

To extend our bounds for rates up to capacity, we must employ a more refined 
argument than the simple union bound used so far. The technique, based on the 
Gallager bound, is similar to that used in the latter half of Sec. 4.6. We begin by 
considering the set of all incorrect paths diverging from the correct path at node; 
and unmerged for exactly K + k branches, and take the sum over all k > 0. Thus 
the node error probability at any node is bounded by 

P e (j)< ln k (/) (5.1.12) 

where 

I error caused by any one of up to 2 k incorrect 
(paths unmerged from node j to node j + k + K 

This, then, is still a union bound, but over larger sets. For Tl k (j) for a given code, 
we can again use the Gallager bound (2.4.8) 

I p 

n (/) < y v (vix-W (1+p) y p (vlx-W (1+p) (5113) 

y |tj 6 (j) 

where N = n(K + /c), &(j\ whose cardinality is no greater than 2 fc , is the set of all 
incorrect paths diverging at node j and remerging K + k branches later and x ; - is 
any member path of this set. 

As before we note that x ; , defined by (5.1.4) with u = 0, can be any one of Q N 
possible sequences. However the set 3C(j) is somewhat more restricted. For exam 
ple, suppose k = 2. Then there are just two compatible 1 paths in the set 9C(j) whose 
data sequences, between node j and node j + K + 2, are 

10 1000---0 and 1 1 1000---0 

But obviously over the first branch, after diverging from the correct (all-zeros) 
path, the two paths in question are still merged and hence their branch symbols x 
are identical for this branch. Yet, even though the cardinality of %(j) is limited, 
any single path in this set can take on any one of Q N code sequences, as can be 
shown by exactly the same argument as before. However, when one path has been 
chosen, all the others compatible with it are restricted in the choice of their code 
symbols, to a lesser or greater extent depending on the span over which they are 

1 Compatible refers to those incorrect paths which are unmerged from the correct path in the given 
number of branches. 



CONVOLUTIONAL CODE ENSEMBLE PERFORMANCE 307 



merged with already chosen paths. Let us then assign the weight q N (\j) to the 
N = (K + k)n symbols of the correct path between nodes; and; + K + /c; q N (x.j) 
equals l/Q N if we use a uniform weighting. Also, we assign the weight 

q N u(x { j l \ xJ 2) , . ... *H where {*<> : i = 1, 2, . . . , M] = #(/) 

is the set of compatible incorrect paths. For uniform weighting, this weight will 
just be the inverse of the number of distinct choices for the set of path sequences; 
in general, q NM ( ) has the property that its sum over all distinct possible members 
of the set #(/) equals unity. In fact, this notation allows us to augment the set fr(j) 
to include all Q NM choices of the M vectors x u) , ..., xj M) , where M < 2*, whether 
or not they are compatible based on the trellis structure just described, since any 
inadmissible combination may be eliminated by assigning it zero weight. 

Thus, averaging (5.1.13) over the ensemble with the weighting just defined, we 
have 



n.O)=I,W I I W*M 2) ,...,tnn lk (/) 



. *} 2) , .... 



(5.1.14) 

Note that the summation on i is now unrestricted, since any inadmissible path 
combinations are excluded by making q NM ( ) zero for that choice of x O) , xJ- 2) , . . . , 
xJ M) . Then, limiting p to the unit interval allows us to use the Jensen inequality 
(App. IB) to obtain 

fv \r> (v I v W( 1+ P) 
NV A j/rNlJ | A j7 

ZV . V rt fvU) v(2) v( W )\n /mrlv(0^ 

L L ^NMV X J > x j > x j ;P/vl v l x j > 



Now, for the terms in braces, suppose we consider the ith term of the outer sum 
and sum over all the internal summations except xj . Since only q NM ( ) depends 
on these \f ^ xf, we have 






The key observation to be made is that, as a result of this last step, we limit 
consideration to a single incorrect path xj . And, as was discussed previously, even 



308 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 



though the choices of the set of path sequences for the entire incorrect set &(j) is 
limited by trellis constraints, the symbols for any single path may be freely chosen 
among Q N possible sequences in the space & N . Thus the weighting q N (\ ( j ] ) is the 
same as q N (\j) for the correct path (both being \/Q N if uniform weighting is 
assumed). Hence the bound (5.1.15) can be written as 2 



n.U) 



Z 



il/(l+p) 



+p 



ii/U+p) 



l+p 



0<p<l (5.1.16) 



since M < 2 fc , the channel is memory less and q N (x) is a product of N identical one- 
dimensional weight functions. Since N = (K + k)n, this may be written as 



< 2 k e- (K+k)nE (p > } 



where as was first defined in Sec. 3.1 



(5.1.17) 



0<p< 1 (3.1.18) 



Finally, substituting (5.1.17) in the ensemble average of (5.1.12) and using (5.1.8), 
we obtain as our bound on the ensemble node error probability 



PJU) < Z n(/) 



p < E (p, 



(5.1.18) 



Similarly, the ensemble average of the expected number of bit errors caused by an 
error at nodej is obtained by weighting the /cth term in (5.1.18) by (k + 1), since an 
error caused by an incorrect path unmerged for K + k branches can cause no 
more than k + 1 bit errors. Thus 



-KnE (p,q) 



E (p, q)/R 



(5.1.19) 



2 Here we assume there are M = 2* such paths. Since this is larger than the actual number of 
incorrect paths, this gives us a further upper bound on the error probability U k (j). 



CONVOLUTIONAL CODE ENSEMBLE PERFORMANCE 309 

There remains only the problem of choosing the parameter p and the best weight 
distribution q. Note also that (5.1.18) and (5.1.19) reduce to (5.1.9) and (5.1.10), 
respectively, for p = 1, as follows from the definition (5.1.11). 

The function E (p, q) was first studied in Sec. 3.2 and its basic properties, 
summarized in Lemma 3.2.1, are that it is a positive increasing convex n function 
for positive p, approaching as p-0 with slope 7(q) (see Fig. 3.1). Thus to 
minimize the bounds for asymptotically large X, this suggests that we should 
choose p as large as possible consistent with a positive exponent in the braces of 
the denominator of (5. 1.18) and (5.1.19). Such a choice would be, for small 3 e > 



which reduces the bounds (5.1.18) and (5.1.19) to 



(5.1.216) 



where the exponent E C (R, q) is established by the parametric equations 
E C (R, q) = E (p, q) < p < 1 



e) (5.1.22) 

The construction of Fig. 5.2, based on the properties of E (p, q), establishes that 
the exponent E C (R, q) is positive and that the rate R increases continuously from 



to 

R = (1 - c) lim [E (p, q)/p] = (1 - )/(q) 

p-O 

as p decreases from 1 to 0. Recall also from Sec. 3.2 that 

max 7(q) = C 
q 

which is the channel capacity. 

3 Of course e is any positive number. Even though all our results are functions of e, exponents are 
plotted for the limiting case of c = 0, for which they are maximized. Strictly, as e - 0, the multiplying 
factor approaches oo, although only algebraically (not exponentially) in 1/c. 



310 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 




Figure 5.2 Construction for upper bound 
exponent (0 < p < 1). 



Finally we may combine (5.1.9) and (5.1.10) with our present result, with 
exponents maximized with respect to the weight distribution q. This yields 



PJU)< 



2~KE C (R)/R 



where 



and 



for 



[i- 



E C (R) = R = max (1, q) for < R < R (l - c) 



E C (R) = max E (p, q) < p < 1 



R = (1 - c) max [E (p, q)]/p R (l - e) < R < C(l - e] 



(5.1.23a) 
(5.1.23f>) 

(5.1.24) 



(5.1.25) 



The composite exponent is plotted for a typical memoryless channel in 
Fig. 5.3. Maximization of E (p, q) with respect to the weight distribution 
q = (<j(x): x = a l9 a 2 , . . ., a Q } is performed exactly as in Sec. 3.2 (Theorem 3.2.2). 
It is clear that, for asymptotically large K, c may be chosen asymptotically small. 



E C (R) 
R. 



RM-e) C(l-e) 



R 



Figure 5.3 E C (R) for typical memoryless channel. 



CONVOLUTIONAL CODE ENSEMBLE PERFORMANCE 311 

It remains to generalize this binary trellis (rate l/n) coding result to trellises 
with 2 b branches 4 per node (rate b/n). Such encoders, shown in Fig. 5.1, require 
effectively b shift registers, each of constraint length K, and the decoder storage 
and computational complexity grows as 2 b(K ~ 1} . For the present analysis, we need 
only determine the form of the data sequences for all incorrect paths diverging at 
node j and remerging with the correct path after an unmerged span of K + k 
branches, where again, without loss of generality, we may take the correct path to 
correspond to the all-zeros data sequence. For binary trellises, this was given by 
(5.1.2). For 2 b -ary trellises, this is generalized to the form 

u,, u j+1 , u,+ 2 , ..., iij+fc, 0, 0, ..., 

-JC-l- (5.1.26) 

where all terms are b-dimensional binary vectors representing the b bits input to 
the encoder register per branch. Now, u, and u j+k can be any of the 2 b 1 nonzero 
/^-dimensional binary vectors, since we require that the path diverge from the 
all-zeros at node j and not remerge before) + k + K. And u j+1 through u j+k _! 
may each be any ^-dimensional binary vector, the only limitation being that no 
string of K 1 or more consecutive vectors may begin before the (j + k + l)st 
branch, for otherwise remerging with the correct path would occur before node 
j + K + k. Thus there are less than (2 b l)2 bk possible incorrect paths in the 
subset 5T(/) of incorrect paths which diverge at node j and remerge at node 
j + K + k. Hence, all results obtained for rate l/n trellis codes can be generalized 
to rate b/n by replacing 2 k with (2 b - \}2 bk in expressions (5.1.7), (5.1.9), (4.6.16), 
(5.1.12), and (5.1.16) through (5.1.19). It suffices to consider only the last two 
expressions, which represent the most general case. Thus for rate b/n codes 



e -KnE (p,q) y r^b _ } y^Ky - knE Ap , q) 
fc = 

(2 b i\ 



where 

R = r In 2 

= (b/n) In 2 nats/channel symbol (5.1.28) 



4 The mapping function for rate bin codes is the same as for rate l/n codes see the description 
following (5.1.4) and (5.1.9) for uniform and nonuniform weightings. We could even consider trellises 
with p branches per node, where /? is not a power of 2. However, this requires linear encoders with 
input data in nonbinary form, a very impractical possibility. Also it requires that all linear operations 
be performed over a finite field of /? elements; hence /J must be a prime or the power of a prime. 



312 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 

Similarly generalizing (5.1.19) for rate b/n, recognizing that an erroneous branch 
can cause up to b bit errors, we obtain 



-KnE (p,q) 



< e 

b(2 b - . 

< (1 _ 2 -c*.(P.q)/iq-P))2 ^ P < Eo(p, q)/* < 1 (5-1.29) 

Choosing p = 1 for 

# < R (l - e) (5.1.300) 

and 



for higher rates, we generalize (5.1.23) and (5.1.24) to rate b/n codes by replacing 
E C (R) by bE c (R) and multiplying both expressions by 2 b 1 and the second also 
byb. 

All our results thus far have been for events at a particular node level. 
However, bit error probability is defined as the expected number of bit errors over 
the total length of the code, normalized by the number of bits decoded. Thus for 
an L-branch trellis code of rate b/n, since b bits are decoded per branch 

P =M^3 

Lb 

1 L 
< X^KO )] (5.1.31) 

where N b is the total number of bit errors in the L-branch code sequence and the 
inequality follows from the fact that bit error sequences may overlap, as discussed 
in Sec. 4.4. Consequently, combining (5.1.29), (5.1.30), and (5.1.31) and optimizing 
with respect to q, we obtain over the entire length of the code 



Lb 

2~KbE c (R)/R 



[1-2- 

E C (R) = R Q<R<R (l-c) (5.1.33) 

E C (R) = max (p, q) < p < 1 



= (1 - 6) max ^^ R (\ -e)<R<C(l-c) (5.1.34) 
Q P 



CONVOLUTIONAL CODE ENSEMBLE PERFORMANCE 313 

Since this is an ensemble average over all possible trellis codes of length L 
branches, we conclude that there must exist at least one code in the ensemble with 
P b < P b . Hence we obtain 

Theorem 5.1.1: Convolutional channel coding theorem (Viterbi [1967], 
[1971]) For any discrete-input memoryless channel, there exists a time- 
varying convolutional code of constraint length X, rate b/n bits per channel 
symbol, and arbitrary block length, whose bit error probability P b , resulting 
from maximum likelihood decoding, is bounded by (5.1.32) through (5.1.34) 
where e is an arbitrary positive number. 



5.2 EXAMPLES: CONVOLUTIONAL CODING EXPONENTS 
FOR VERY NOISY CHANNELS 

As was done in Sec. 3.4 for block codes, we now evaluate the error bound expo 
nents for convolutional codes, for the class of channels for which explicit formulas 
are most easily obtained. This will provide a direct comparison of the performance 
of block and convolutional codes. Of course, most of the effort is involved in 
computing E (p) and C and the techniques to do this are already available from 
Sec. 3.4. 

For the class of very noisy channels defined by (3.4.23), we have that 



E (p) = max E (p, q) 

C 0<p<l (3.4.31) 



q 

P 



Substituting this into (5.1.33) and using (5.1.11), we obtain R = C/2 and hence, 
for low rates 



= 0<R<(l-)C/2 (5.2.hz) 

For higher rates, substituting (3.4.31) into the second parametric equation (5.1.34) 
and solving for p, we obtain 



Then substituting this into (3.4.31) and in turn into the first parametric equation 
(5.1.34), we obtain 

E C (R) = C - y^- (1 - e)C/2 <R<(l-e)C (5.2. Ib) 



314 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 




Figure 5.4 lim E C (R) for very noisy channel. 
-.o 



Ignoring for the moment the parameter <^ 1, we plot the composite exponent 
(5.2. la) and (5.2. \b) in Fig. 5.4 and compare it with the exponent for block codes 
on very noisy channels given by (3.4.33). To obtain a meaningful comparison, we 
must let (for convolutional codes) 

NtmK n-*** (5.2.2) 

For block codes, of course 

N = ^^ (5.2.3) 

where K R is the block length in bits. 5 

With these definitions, the exponents of the bounds on error probability are 
N C E C (R) and NE(R), for convolutional codes and block codes, respectively. We 
recall from Sec. 4.6 that the relative decoding complexities per bit are 

2*8 gNR 

= - comparisons/bit (block codes) 

and, as follows from a direct generalization of previous results to rate b/n convolu 
tional codes 



(2 b l)(2 ft(X ~ 1) ) 2 Kb e NcR 

<=- comparisons/bit (convolutional codes) 

b bo 

Thus, setting N = N c , we find that, while the exponents diverge considerably, the 
computational complexity is only slightly greater for convolutional codes. Clearly, 
by making N c slightly smaller than N, we may achieve equal complexity, and still 
maintain a convolutional exponent which is much greater than the block 
exponent. 



5 Note that this compares encoders with the same " memory " since a convolutional code symbol is 
determined by Kb information bits and a block code symbol is determined by K B information bits. 
Decoder complexity grows roughly exponentially with this memory for both block and convolutional 
codes. 



CONVOLUTIONAL CODE ENSEMBLE PERFORMANCE 315 

Also noteworthy is the fact that the exponent of (5.2.1) for very noisy channels 
is identical to the exponent of (4.6.24) for convolutional orthogonal codes on the 
AWGN channel, provided we make the obvious substitution 

C nats/symbol C T nats/s , . 

R nats/symbol R T nats/s 

The explanation is the same as that in Sec. 3.4 for block codes. 



5.3 EXPURGATED UPPER BOUND FOR BINARY-INPUT, 
OUTPUT-SYMMETRIC CHANNELS 

We have thus demonstrated that the ensemble average convolutional exponent is 
considerably greater then the corresponding block exponent everywhere except 
for R = C and R = 0. In the former case, both exponents, of course, become zero; 
while at zero rate 



But, for block codes, we found in Chap. 3, Sec. 3.3 that, by expurgating the 
ensemble, we could obtain the much tighter upper-bound exponent 6 



E ex (0) = max 



-II q(*)q(x ) In I ^p(y\x)p(y\x ] 



(3.3.27) 



For binary-input channels, this reduces in fact to 

ex (0) = -i In Z > In 2 - In (1 + Z) = E (l) (3.3.31) 

where 



z = I \/Po(y)pi(y) 

y 

Thus the convolutional coding exponents, obtained thus far, are weaker than 
the block exponents at low rates. As already discussed in Sec. 3.10, it is not 
possible to expurgate code vectors from a linear code without destroying its 
linearity. With convolutional codes, not only would expurgation destroy linearity, 
but it would equally damage the essential topological structure of the trellis. 
However, on the class of binary-input, output-symmetric channels, we found in 
Sec. 2.9 that for a linear code the error probability is always the same no 
matter which code vector is transmitted. Hence, for this class of channels, we 
need not expurgate, since the bound on the bit error probability for any trans 
mitted path is a bound for the entire code (independent of the path transmitted). 



6 For physical channels, this exponent is finite, but for degenerate channels this exponent can be 
infinite. 



316 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 



The task then is to obtain a tighter bound on P b at low rates. Consider again 
node j and the probability that a bit error occurs at this node. A decoding bit error 
can occur at node j only if, for some k and some i, < i < /c, an error event of 
length K + k began diverging at node; i; that is, if the bit in question lies within 
an unmerged span corresponding to an error event. Since the event of a bit error is 
the union of such error events, 7 we have the union bound on the bit error 
probability at node j 

P*())Z I ln*(/-0 (5.3.1) 

fc = i = 

where we recall that n fc (/ - i) is the probability of an error event caused by one of 
up to 2 bk incorrect paths unmerged from node j i to node j i + k + K. For 
any parameter < s < 1, we have (inequality g in App. 3 A) 



PI(J) * I I nj(/ - 



The ensemble average of Pf,(j) is then 



00 k 

I I" 

* = i = 



(5-3.2) 



(5.3.3) 



For low rates, we may use the union-bound argument, which leads to (5.1.10), 
rather than the Gallager bound, which leads to (5.1.19), to bound n s k (j i). Thus 
for a rate b/n code 

njO -O < P" - l)2 tt [Pr{AAf(x}-,,x,_,)2>0}] (5.3.4) 

where x 7 _ t and x}_, are the correct path and an incorrect path unmerged for 
K + k branches, respectively. Then, by the same steps which led to (5.1.5) 



(2 b - l)2 



kb 



x)p N (y |x 



= (2 b - l)2 



kb 



q(x)q(x 



where N = n(K + k). Finally, letting p = 1/s, we obtain 



tippy - i) < (2 b - \)2 kb e~ T 1 < p < oo 

where, as was first defined in Sec. 3.3, 



1/p 



0<s < 1 

(5.3.5) 

(5.3.6) 
(3.3.14) 



7 The argument used here differs from that used previously for bit error probability bounds in this 
and the last chapter, which was based on the expected number of bit errors per error event. While it 
leads to the same result for the ensemble average bound of Theorem 5.1.1, it leads here to a tighter 
form of Theorem 5.3.1 than was previously obtained based on the earlier argument. 



CONVOLUTIONAL CODE ENSEMBLE PERFORMANCE 317 

Thus substituting (5.3.6) into (5.3.3), we have 

oo k 

pi IP < y y /2 b n2 fcb e~ fl( *~ l ~ k) * (p q)/p 



2~ b[Ex(p q)/(pR} ~ 



1 < p < oo (5.3.7) 



where R = (b/n) In 2 nats per channel symbol. 

Since for binary-input, output-symmetric channels, P b is the same for all paths 
of a given code, (5.3.7) can be regarded as a bound over the ensemble of convolu- 
tional codes, or equivalently, over the ensemble of generator matrices (4.1.1). Thus 
from (5.3.7) we have that for at least one code in the ensemble P b 1/p < Pl lp ; and 
hence for this code 



We now choose p such that 

(1 + )p = ^p> >0 (5.3.9) 

Finally, maximizing over q, we obtain 

Theorem 5.3.1 (Viterbi and Odenwalder [1969]) For binary-input, output- 
symmetric channels, there exists a time-varying convolutional code of con 
straint length K and rate b/n bits per symbol for which the bit error 
probability with maximum likelihood decoding satisfies 

2 b - 



( 5 - 3 - 10 ) 
where 



oo 



(5.3.11) 

>0 



where we have used the fact that 



max E x (l, q) = max E (l, q) = R t 
q q 



318 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 

Actually, we can obtain the exponent explicitly in terms of the rate, since, for 
binary-input, output-symmetric channels, we found in Sec. 3.3 that 

(i . z 1/p \ 
(3.3.29) 

z / 

where 



y 
Thus combining (5.3.11) and (3.3.29), we find 

Z l/p 



e -R(l+t) _ 



and, consequently, we have that 

P = 



ln [2 

Dividing the first equation of (5.3.11) by the second, and using (5.3.12), we 
obtain 

Corollary 5.3.1 The exponent of (5.3.11) can alternatively be expressed as 
E cex (K) (1 + ) In Z 



R ln[2< 
Note, finally, from this that 



< R < RJ(l + e) (5.3.13) 



lim E cex (#) = - \ In Z (5.3.14) 

j?->o 2 

which is precisely the same as the zero-rate exponent (3.3.31) for block codes. 
The exponent (5.3.13) is plotted in Fig. 5.5 and compared with the corres 
ponding exponent for block codes. 



5.4 LOWER BOUND ON ERROR PROBABILITY 

For a rate b/n trellis code, let P b (j) be the probability that any of the b information 
bits associated with node j are decoded incorrectly. Certainly the average bit error 
probability, P b , is lower-bounded by the smallest such node bit error probability. 
Thus 

P b > min P b (j) (5.4.1) 

j 

Assuming that path lengths are arbitrarily long (L->oo), we now proceed to 
lower-bound P b (j)> First note that a decoding error at node j can be caused by 



CONVOLUTIONAL CODE ENSEMBLE PERFORMANCE 319 

many possible paths that diverge from the correct path at node; or earlier. Recall 
that n k (y) is the probability that a path diverging from the correct path at node j 
and remerging at node j + K + k causes an error event. Since this is only one of 
many possible events that can cause a decoding error at node 7, we have 



for any k. Maximizing over k we get 

P b (i) > max n,(/) (5A2) 

I 

For arbitrary /c, H k (j) is the probability of a block decoding error with no more 
than 2 bk code vectors each of block length (K + k)n channel symbols. Thus, this 
can be regarded as a highly constrained block code of length N = (K + k)n and 
rate R k = (bk In 2)/[n(K + k)] nats per channel symbol. 8 Hence, using (3.6.45) and 
(3.6.46), we have 



n (/) > e~ 

+ k)bln2 



R 
where 8p (R, A) = E,(p) - pE (p) (5.4.4a) 

R = i^ = I^ ;(p) (5.4.46) 

with 

E (p) = max E (p, q) (5.4.5) 

q 

Thus combining (5.4.1) through (5.4.5), we obtain 
P h (i) > max 



_ 2~ Kb min[(l+A) sp (/l, 

where we assume K sufficiently large that A can be any rational number; any 
inaccuracy resulting from this is compensated for by the o(K) term. To minimize 
the exponent, we must take the lower envelope with respect to A of 
(1 + A) sp (K, A), which is defined parametrically by (5.4.4). We show now that 
this function is convex u, and thus we can obtain a minimum by setting the 
derivative equal to zero. For, from (5.4.4a), we have 



= E (p) - 

= E.(p) - P E (p) - ^ (5.4.7) 

8 Since the actual number of codewords is slightly less than 2 bk , the actual rate is slightly less than 
this. But for large K these differences are negligible and will be incorporated into o(K) terms in our 
bound. 



320 CONVOLUTIONAL CODING AND DIGITAL COMMUNICATION 

since from (5.4.46) we have 

,dp R 



Differentiating (5.4.7) and using (5.4.8) and (3.2.5), we have 



*(R, X)] _ _V_ >Q . } 

- 



Thus we may set (5.4.7) equal to zero and obtain the absolute minimum as a
function of lambda. We obtain

lambda = rho E_0'(rho)/[E_0(rho) - rho E_0'(rho)]        (5.4.10)

and combining (5.4.10) with (5.4.4b) we have

R = E_0(rho)/rho        (5.4.11)

while (5.4.4a), (5.4.10), and (5.4.11) yield

min_{lambda >= 0} (1 + lambda) E_sp(R, lambda) = rho R = E_0(rho)        (5.4.12)

Finally, combining (5.4.6), (5.4.11), and (5.4.12), and recognizing that the argu-
ments used assume no particular decoding algorithm, we have

Theorem 5.4.1: Convolutional coding lower bound (Viterbi [1967a]) The prob-
ability of bit error, for any convolutional code and any decoding algorithm, is
lower-bounded by

P_b >= 2^{-Kb[E_csp(R) + o(K)]/R}        (5.4.13)

where

E_csp(R) = E_0(rho)        0 < rho < infinity
R = E_0(rho)/rho           0 < R < C        (5.4.14)


Thus the convolutional lower-bound exponent agrees with the upper-bound
exponent (5.1.34) for the range R_0 < R < C (ignoring the epsilon's), but diverges at lower rates.
This parallels exactly the situation for block codes, except that the bounds for
block codes diverge at the lower rate E_0'(1) < E_0(1) = R_0. We note also that at
zero rate we have

E_csp(0) = lim_{rho -> infinity} E_0(rho) = lim_{rho -> infinity} [E_0(rho) - rho E_0'(rho)] = E_sp(0)        (5.4.15)

since either the monotonic increasing function E_0(rho) is bounded, in which case
lim_{rho -> infinity} rho E_0'(rho) = 0, or it is unbounded, in which case both exponents are infinite at
zero rate. Thus the convolutional and block code lower-bound exponents are
equal at zero rate, and neither bound is tight.

To improve the convolutional lower-bound exponent at low rates, we utilize
the zero-rate lower bound (3.7.19) instead of the sphere-packing bound. Then, in
place of (5.4.3), we have

PI_k(j) >= e^{-N[E_ex(0) + o(N)]}        (5.4.16)

Although we have used the zero-rate exponent, this result is valid for any rate,
since the exponent must decrease monotonically with R and R_k. Hence (5.4.6)
becomes

P_b(j) >= 2^{-Kb[E_ex(0) + o(K)]/R}        (5.4.17)

where E_ex(0) is given by (3.3.27) (see also Sec. 5.3). We may state this result as

Corollary 5.4.1: Low-rate lower bound For 0 < R <= R_1 < R_0, a tighter lower
bound on bit error probability than that in Theorem 5.4.1 is

P_b >= 2^{-Kb[E_ex(0) + o(K)]/R}        (5.4.18)

where R_1 is the rate at which E_csp(R_1) = E_ex(0).

The exponent of this bound is sketched for a typical binary-input, output-
symmetric channel in Fig. 5.5, where it is compared with the low-rate upper
bound, the latter holding only for this class of channels. We note also that we
could have used the low-rate lower bound of Sec. 3.8 (see Viterbi [1967a]), but this
would have yielded exactly the same results as (5.4.17).
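As an illustrative computation (an assumed example, not taken from the text), the following Python fragment evaluates the parametric pair (5.4.14) for a binary symmetric channel with crossover probability p, together with the zero-rate exponent E_ex(0) = -(1/2) ln Z used in Corollary 5.4.1.

import numpy as np

def E0(rho, p):
    # Gallager function E_0(rho) for the BSC with equiprobable inputs (nats)
    return rho * np.log(2) - (1 + rho) * np.log(p**(1/(1+rho)) + (1-p)**(1/(1+rho)))

p = 0.03                               # assumed BSC crossover probability
Z = 2 * np.sqrt(p * (1 - p))
print("E_ex(0) =", round(-0.5 * np.log(Z), 3))

for rho in [0.25, 0.5, 1.0, 2.0, 5.0]:
    R = E0(rho, p) / rho               # rate at which this rho applies, (5.4.14)
    print(f"rho = {rho:4.2f}   R = {R:.3f}   E_csp(R) = {E0(rho, p):.3f}")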

We comment finally on the possibility of obtaining bounds which are asymp 
totically tight for all rates. The arguments of Sec. 3.9 for block codes apply equally 
for convolutional codes. If the Gilbert bound is tight [conjecture (3.9.4)], then the 
resulting lower bound (3.9.5) can be used in place of (5.4.4), yielding then a 
low-rate lower bound for binary-input, output-symmetric channels, which agrees 
everywhere with the upper bound of (5.3.13). Thus all aspects of block code 
exponents are paralleled in convolutional code exponents, which are, however, 
always significantly greater in the entire range < R < C. 



Figure 5.5 Expurgated ensemble and sphere-packing bounds for convolutional and block codes on a binary-input, output-symmetric channel.



5.5 CRITICAL LENGTHS OF ERROR EVENTS*

* May be omitted without loss of continuity.

The maximization carried out in connection with the lower bound of the preceding
section [(5.4.2) and (5.4.6)] suggests that certain lengths of errors (unmerged
paths) are more likely than others. Based on the lower bound, it appears that the
most likely lambda = k/K is given by (5.4.10). Actually, to make this result precise, we
must use a combination of upper and lower bounds. First of all, we found in
Sec. 5.1 that the ensemble average probability of an error at node j caused by an
unmerged path of length K + k is bounded by (5.1.17) for rate 1/n codes, while for
rate b/n this generalizes [see (5.1.27)] to



PI_k(j) <= (2^b - 1) 2^{bk rho} e^{-(K+k)n E_0(rho, q)}        0 <= rho <= 1        (5.5.1)

We shall call this an error event of length bk, since a run of errors will occur within
k branches of b bits each, with no two errors separated by K - 1 or more
branches,(9) each with b correct bits. Rewriting (5.5.1) in terms of

N = (K + k)n = (K + k) b ln 2 / R

and

R_k = Rk/(K + k) = R lambda/(1 + lambda)

we have

PI_k(j) <= (2^b - 1) e^{-N[E_0(rho, q) - rho R_k]}        0 <= rho <= 1        (5.5.2)

Since the exponent is identical to that of the block coding bound (3.1.17), minimiz-
ing with respect to rho and q, we obtain the equivalent of (3.2.8), namely

PI_k(j) <= (2^b - 1) e^{-N E(R_k)}
         = (2^b - 1) 2^{-Kb(1+lambda) E(R, lambda)/R}        (5.5.3)

where

E(R, lambda) = E_0(rho) - rho E_0'(rho)        0 <= rho <= 1        (5.5.4a)

R lambda/(1 + lambda) = R_k = E_0'(rho)        E_0'(1) <= R_k <= C        (5.5.4b)

and

E_0(rho) = max_q E_0(rho, q)

Even though this is only a bound, we may expect to obtain an indication of
the most likely run length of errors by maximizing (5.5.3) with respect to k (or,
equivalently, lambda). Since, other than for asymptotically unimportant terms, (5.5.3) is
the same as the right side of (5.4.6), clearly the maximization (or minimization of
the negative exponent) proceeds identically, and we obtain again (5.4.7) through
(5.4.11). Let us call the length k = lambda K which maximizes (5.5.3) the critical length,
k_crit. Thus from (5.4.10) and (5.4.11), we have

lambda_crit = k_crit/K = rho R/[E_0(rho) - rho E_0'(rho)] - 1 = rho E_0'(rho)/[E_0(rho) - rho E_0'(rho)]        0 <= rho <= 1        (5.5.5)


(9) Note that this does not quite mean that b(K - 1) correct bits cannot occur between two incorrect
bits. For example, if b = 2 and the second bit of the first unmerged branch and the first bit of the
(K - 1)st unmerged branch are correct (with the other bit on both these branches being incorrect), the
number of correct bits between successive incorrect bits may be as large as 2(K - 2) + 2 = b(K - 1) in
this case.




We will next show that, for large K, the run length of errors tends to concentrate
around k_crit. More precisely, we prove

Theorem 5.5.1: Error run lengths (Forney [1972b], [1974]) Over the ensemble
of time-varying convolutional codes, for any epsilon > 0, the average fraction of
error events of run length k outside the interval k_crit - epsilon K < k < k_crit + epsilon K
approaches 0 as K -> infinity, where

k_crit/K = rho E_0'(rho)/[E_0(rho) - rho E_0'(rho)]        0 <= rho <= 1,  R_0 <= R < C
         = 0                                               0 < R < R_0        (5.5.6)



PROOF (5.4.13) is a lower bound on event error probability for the best code,
but in the high-rate region, R_0 < R < C, it agrees asymptotically with the
ensemble average upper bound. Hence, in this region, this is an asymptotically
exact expression for the ensemble average event error probability, P_e. For
lower rates, we have from (5.1.9) that

P_e <= 2^{-Kb[R_0 + o(K)]/R}        0 < R < R_0        (5.5.7)

But, over the same ensemble, this is also a lower bound to the average event
error probability since P_e is lower-bounded by the average probability of
pairwise errors for one incorrect path unmerged for the minimum length,
which is just K branches. Averaged over the ensemble, this lower bound is the
same as (5.1.5) except for a negligible o(K) term, since that result is based on
the Bhattacharyya bound (5.1.3), which can be shown to be asymptotically
tight by the methods of Sec. 3.5. Hence

P_e = 2^{-Kb[R_0 + o(K)]/R}                0 < R < R_0
P_e = 2^{-Kb[E_0(rho*) + o(K)]/R}          R_0 <= R = E_0(rho*)/rho* < C        (5.5.8)
Combining (5.5.1) and (5.5.8), we obtain for the high-rate region

SUM_{k >= lambda K} PI_k(j)/P_e <= (2^b - 1) 2^{-Kb[E_0(rho) - E_0(rho*) + o(K)]/R} SUM_{k >= lambda K} 2^{-kb[E_0(rho)/R - rho]}
    <= (2^b - 1) 2^{-Kb{[E_0(rho) - E_0(rho*)]/R + lambda[E_0(rho)/R - rho] + o(K)}} / (1 - 2^{-b[E_0(rho)/R - rho]})        (5.5.9)

where, from (5.5.8), we see that rho* satisfies

R = E_0(rho*)/rho*        (5.5.10)

and where rho must satisfy the condition

E_0(rho)/R - rho > 0        (5.5.11)

The exponent coefficient in (5.5.9) can be made positive for lambda large enough.
We next examine the critical value of lambda where the exponent is zero in the limit
as rho -> rho*. The critical value of lambda satisfies

[E_0(rho) - E_0(rho*)]/R + lambda[E_0(rho)/R - rho] = 0        (5.5.12)

or

lambda = [E_0(rho*) - E_0(rho)]/[E_0(rho) - rho R]        (5.5.13)

Using (5.5.10), we have

lambda = rho*[E_0(rho) - E_0(rho*)]/[rho E_0(rho*) - rho* E_0(rho)]
       = rho*{[E_0(rho) - E_0(rho*)]/(rho - rho*)} / {E_0(rho) - rho[E_0(rho) - E_0(rho*)]/(rho - rho*)}        (5.5.14)

and

lambda_crit = lim_{rho -> rho*} lambda = rho* E_0'(rho*)/[E_0(rho*) - rho* E_0'(rho*)]        (5.5.15)

which is exactly (5.5.5). Hence by choosing lambda = lambda_crit + epsilon, we have

lim_{K -> infinity} SUM_{k >= (lambda_crit + epsilon)K} PI_k(j)/P_e = 0        (5.5.16)

Noting that k_crit maximizes the bound on PI_k(j), we can similarly show that

lim_{K -> infinity} SUM_{k <= (lambda_crit - epsilon)K} PI_k(j)/P_e = 0        (5.5.17)

which completes the proof in the high-rate region.



In the low-rate region, we have from (5.5.1) with rho = 1 and from (5.5.8)

SUM_{k >= epsilon K} PI_k(j)/P_e <= (2^b - 1) 2^{-epsilon Kb[R_0/R - 1] + o(K)} / (1 - 2^{-b(R_0/R - 1)})        (5.5.18)

Hence also

lim_{K -> infinity} SUM_{k >= epsilon K} PI_k(j)/P_e = 0        (5.5.19)

and we have shown that the fraction of error events with lengths which
deviate from k_crit of (5.5.6) by epsilon K approaches zero as K -> infinity for any epsilon > 0.
This proves the theorem.

Figure 5.6 shows the ratio lambda_crit = k_crit/K as a function of R for a typical
memoryless channel. For the class of very noisy channels (see Sec. 5.2), we can, in
fact, obtain an exact expression, since in this case E_0(rho) = rho C/(1 + rho) and
rho = (C/R) - 1 for C/2 <= R < C, so that

k_crit/K = 0                    0 < R < C/2
         = 1/[(C/R) - 1]        C/2 <= R < C        (5.5.20)

Thus, for asymptotically large constraint lengths, the "most likely" error length is
very small for R < R_0, increases stepwise at R_0, and grows without bound as
R -> C. For very noisy channels, the step increase at R = C/2 is equal to one
constraint length.
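The following short Python fragment tabulates (5.5.20) for a very noisy channel; the capacity value C = 1 nat is only an assumed normalization.

import numpy as np

C = 1.0  # assumed normalization of capacity (nats)
for ratio in [0.1, 0.3, 0.5, 0.6, 0.8, 0.95]:
    R = ratio * C
    lam_crit = 0.0 if R < C / 2 else 1.0 / (C / R - 1.0)   # equation (5.5.20)
    print(f"R/C = {ratio:.2f}   k_crit/K = {lam_crit:.2f}")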



Figure 5.6 Normalized critical length of error runs.




5.6 PATH MEMORY TRUNCATION AND INITIAL 
SYNCHRONIZATION ERRORS 

In Sec. 4.7, we indicated that practical storage constraints require limiting the 
path memory for each state to a finite length, usually a few constraint lengths. One 
way to truncate memory at t branches is to make a maximum likelihood decision 
among all paths which are not merged t branches back. It easily follows that a 
truncation error can occur only if an incorrect path which diverges from the 
correct path at the jth node, and remains unmerged from it for t branches, has
higher metric than the correct path after t branches. For, if the paths merged
before t + 1 branches, the path with higher likelihood would survive, whether or
not truncation were employed. Thus, consider the set X(j, t) of paths which
diverge from the correct path at node j and remain unmerged for exactly t
branches. Now there are no more than 2^{bt} such paths. Thus, by exactly the same
argument used in Sec. 5.1, analogous(10) to (5.1.17) but for b >= 1, we find that the
ensemble average probability that an incorrect path has higher metric than the
correct path after t unmerged branches is bounded by

<= 2^{bt rho} e^{-tn E_0(rho, q)} = 2^{-bt[E_0(rho, q) - rho R]/R}        0 <= rho <= 1        (5.6.1)

Thus, maximizing with respect to rho and q, we obtain the usual ensemble error
upper bound for block codes of block length b(ln 2)t/R

<= 2^{-bt E(R)/R}        (5.6.2)

where

E(R) = R_0 - R        0 <= R <= E_0'(1)        (5.6.3)

and for the high-rate region

E(R) = E_0(rho) - rho E_0'(rho)        0 <= rho <= 1
R = E_0'(rho)                          E_0'(1) <= R <= C        (5.6.4)

Comparing (5.6.2) with (5.1.32) we may conclude that truncation errors will not
significantly (exponentially) affect the overall error probability if the truncation
length t is such that

tE(R) >= KE_c(R)        (5.6.5)

where E(R) of (5.6.3) and (5.6.4) is the block coding exponent, and E_c(R) is the
convolutional coding exponent of (5.1.33) and (5.1.34).

(10) This is just the block coding error bound for a code of nt symbols and 2^{bt} codewords.



For very noisy channels, condition (5.6.5) reduces to

t/K >= (1 - R/C)/(1 - sqrt(R/C))^2        C/2 < R < C        (5.6.6)

Note that, at R = C/2 = R_0, this indicates that the truncation length for very
noisy channels should be t >= K/(sqrt(2) - 1)^2, approximately 5.8K. In practice, truncation
lengths of 4 to 5 constraint lengths have been found sufficient to ensure minor to
negligible degradation.
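As a small illustration of condition (5.6.5) (an assumed example, using the very noisy channel exponents E_c(R) = C - R and E(R) = (sqrt(C) - sqrt(R))^2 for the high-rate region), the following Python fragment prints the required truncation length ratio t/K; the value 5.83 at R = C/2 matches the figure quoted above.

import numpy as np

C = 1.0  # assumed normalization of capacity (nats)
for ratio in [0.5, 0.6, 0.7, 0.8, 0.9]:
    R = ratio * C
    Ec = C - R                          # convolutional exponent, very noisy channel
    Eb = (np.sqrt(C) - np.sqrt(R))**2   # block exponent, very noisy channel
    print(f"R/C = {ratio:.1f}   required t/K >= {Ec/Eb:.2f}")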

Another problem arising in a practical decoder is that of initial synchroniza 
tion at any node other than the initial node. As we indicated in Sec. 4.7, synchro 
nization eventually occurs automatically once the initial symbol of each branch has 
been determined (which we assume here has already occurred). However, during 
the early stages of synchronization, many errors may occur. The situation in 
starting in midstream is that no initial state metrics are known. Thus we may take 
them all to be zero. In decoding in the usual way, we may regard as an initial 
synchronization error, any error which is caused by a path which is initially un- 
merged with the correct path, for an error caused by any path initially merged 
would have occurred anyway. Now s branches after decoding begins (in mid 
stream with all metrics set to zero at the outset), there is a set of at most 2 bs 
initially unmerged paths, which are merging for the first time with the correct 
path. Clearly, this set is the dual (and the mirror image) of the set X(j, t) con-
sidered above in connection with truncation errors. Thus the probability of initial
synchronization error decreases exponentially with s, the number of branches
after initiation of decoding. In fact, the ensemble average upper bound on initial
synchronization error is the same as (5.6.2) with t replaced by s. Thus after s
branches, where

sE(R) >= KE_c(R)        (5.6.7)

the effects of initial synchronization on error probability become insignificant. In
practice, the first s branches (s approximately 5K) are usually discarded as unreliable, when the
decoder is started in midstream.



5.7 ERROR BOUNDS FOR SYSTEMATIC CONVOLUTIONAL 
CODES 

In Sec. 2.10, we showed that every linear block code is equivalent in performance 
to a systematic linear block code, and in Sec. 3.10 we showed that the best linear 
code, and hence the best systematic linear block code, performs as well asymptoti 
cally as the best block code with the same parameters. That this is not the case for 




systematic convolutional codes was intimated in Sec. 4.5 where we found that, in 
general, the best systematic codes have smaller free distance than the best nonsys- 
tematic codes. 

We now proceed to obtain a more precise measure of the performance loss of 
systematic convolutional codes by deriving upper and lower bounds. We recall 
from Sec. 4.5 that a systematic rate b/n convolutional code is one in which, for
each branch, the b data symbols(11) are transmitted uncoded, followed by n - b
parity symbols, which are generated just as for nonsystematic codes and con-
sequently depend on the last Kb data symbols. The systematic constraint affects
primarily the form of the code paths during remerging, for any incorrect path
remerges with the correct path only when (K - 1)b consecutive data symbols are
identical to those of the correct path. But when this occurs, exactly this many of its
code symbols are identical to the code symbols of the correct path (the first b
symbols of each of the K - 1 branches just before remerging). Hence, the effective
length of the unmerged code paths is reduced by (K - 1)b code symbols, since
identical code symbols are useless in discriminating between code paths.

We first determine the effect of this property on the upper bound of Sec. 5.1. 
The bound (5.1.27) applies in the same way, but now the effective length of 
incorrect code paths unmerged for (K + k) branches is only 

N = (K + k)n - b(K - 1) = K(n - b) + kn + b (5.7.1) 

rather than (K + k)n, for the kth term of the summation. Note, however, that over
the first (k + 1) branches all possible data symbols are used; hence the ensemble is
not curtailed. Another viewpoint is that the kth term of (5.1.27) is an ensemble
average upper bound for a block code of 2^{b(k+1)} code vectors of length n(K + k);
we showed in Sec. 3.10, based on Sec. 2.10, that the ensemble average upper
bound for systematic block codes is the same as for nonsystematic block codes.
Hence we may employ this result, but the "block code" resulting from consider-
ing (2^b - 1)2^{bk} incorrect paths unmerged for K + k branches has only N rather
than (K + k)n effective code symbols. Thus substituting N of (5.7.1) in place of
(K + k)n in the kth term of (5.1.27), we obtain



PI_k(j) <= [(2^b - 1)2^{bk}]^rho e^{-N E_0(rho, q)}
         <= (2^b - 1) 2^{-Kb(1-r) E_0(rho, q)/R} 2^{-kb[E_0(rho, q)/R - rho]}        0 <= rho <= 1        (5.7.2)

where we have again used R = b ln 2/n and r = b/n = R/ln 2. Thus inserting
(5.7.2) for PI_k(j) in (5.1.29), we obtain, in place of (5.1.29)

P_b <= b(2^b - 1) 2^{-Kb(1-r) E_0(rho, q)/R} / [1 - 2^{-b[E_0(rho, q)/R - rho]}]^2

(11) If the channel input is not binary but Q-ary, then l = nu n (where nu = [log Q] is the least integer not
less than log Q). Each sequence of nu b input bits is transmitted, after mapping, as b Q-ary symbols
followed by l - nu b coded bits mapped into (n - b) Q-ary symbols (see Fig. 5.1).



Proceeding with the remainder of the steps in Sec. 5.1, we find that there exists a
systematic convolutional code whose bit error probability is bounded by

P_b <= 2^{-Kb[(1-r)E_c(R) + o(K)]/R}        (5.7.3)

where E_c(R) is given by (5.1.33) and (5.1.34).

We now turn to the lower bound, modifying the derivation in Sec. 5.4 in the
same way. Here again b(K - 1) code symbols of remerging incorrect paths are
constrained to be the same as those of the correct path. Hence, in (5.4.3),
N = (K + k)n must be replaced by N of (5.7.1). This yields, in place of (5.4.3)
through (5.4.5)

PI_k(j) >= e^{-N[E_sp(R_k) + o(N)]}        (5.7.4)

where

r = b/n

E_sp(R, lambda) = E_0(rho) - rho E_0'(rho)        (5.7.5a)

R_k = R lambda/(1 + lambda - r) = E_0'(rho)        (5.7.5b)

Then proceeding as in the remainder of Sec. 5.4, we have

P_b(j) >= 2^{-Kb{min_lambda [(1 + lambda - r) E_sp(R, lambda)] + o(K)}/R}
        = 2^{-Kb[E_csp(R)(1 - r) + o(K)]/R}        (5.7.6)

where

E_csp(R) = E_0(rho)        0 < rho < infinity
R = E_0(rho)/rho           0 < R < C        (5.7.7)

Thus the upper-bound and lower-bound exponents agree for R_0 < R < C. While
we cannot, in general, obtain tight bounds for lower rates, we can improve the
lower bound by using the zero-rate lower bound (3.7.19) in place of (5.7.5) with the
result

P_b(j) >= 2^{-Kb[E_ex(0)(1 - r) + o(K)]/R}        0 < R <= R_1 < R_0        (5.7.8)

We summarize all these results as 

Theorem 5.7.1: Systematic convolutional code bounds (Bucher and Heller 
[1970]) For systematic convolutional codes, all the upper and lower error 
bounds of nonsystematic codes hold with all numerator exponents multiplied 
by 

1 - r = 1 - b/n = 1 - R/ln 2        (5.7.9)

Note that there is a severe loss when b/n is close to unity. Even for b/n = 1/2, the
reduction in exponent requires doubling the constraint length to obtain with
systematic codes the same asymptotic results as for nonsystematic codes.
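A short illustration of Theorem 5.7.1 (simple arithmetic on the factor (5.7.9), with a few assumed example rates): the constraint length of a systematic code must be multiplied by roughly 1/(1 - b/n) to match a nonsystematic code asymptotically.

for b, n in [(1, 2), (1, 3), (2, 3), (3, 4)]:
    r = b / n
    # exponent factor 1 - r from (5.7.9) and the implied constraint-length multiplier
    print(f"rate {b}/{n}:  1 - r = {1 - r:.3f},  K multiplier = {1/(1 - r):.2f}")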



5.8 TIME- VARYING CONVOLUTIONAL CODES ON 
INTERSYMBOL INTERFERENCE CHANNELS* 

We conclude this chapter with an application of the ensemble average error 
probability analysis to the class of time-varying convolutional codes with the 
intersymbol interference (ISI) channel, first defined and analyzed in Secs. 4.9 and
4.10. Figures 5.7a and 5.7b illustrate the analog model and digital equivalent of the
intersymbol interference channel, which are the same as in Figs. 4.20 and 4.21 but
with a rate b/n convolutional encoder preceding the channel. In Sec. 4.10, we have
shown that the maximum likelihood combined demodulator-decoder can be
realized with a Viterbi algorithm of dimensionality [(L - 1)/n] + (K - 1), where
the trellis diagram comes from combining the convolutional encoder and ISI
linear filter into a single device. Here we shall assume such a maximum likelihood
demodulator-decoder.

In the trellis diagram for the combined demodulator-decoder, a path that 
diverges from the correct path and later remerges for the first time can cause an 
error event only if it accumulates a higher metric than the correct path while 
unmerged. Such a path can correspond to a data sequence with a path in the 
convolutional code trellis diagram which diverges and remerges with the correct 
path more than once during the same span of branches over which it is totally 
unmerged in the coded ISI trellis. We shall first consider only those paths for 
which there is only one unmerged span in the code trellis corresponding to the 



* May be omitted without loss of continuity. 



Figure 5.7 Coded ISI channel model: (a) analog model (rate b/n convolutional encoder, impulse generator, filter h(t), additive noise n(t)); (b) digital equivalent (correlated Gaussian noise).



unmerged span of the coded ISI trellis. That is, we first limit our discussion to 
error events for which the unmerged span in the coded ISI trellis corresponds to 
paths in the convolutional code trellis which diverge and remerge only once. 

Let {x_n} be the channel symbols (+1 or -1) of the correct path and let {x'_n} be
the channel symbols corresponding to a path that diverges from the correct
path and remerges for the first time after a span of N channel symbols. Here

x'_n differs from x_n for at least one n, 1 <= n <= N
x'_n = x_n        n <= 0, n >= N + 1        (5.8.1)

Defining epsilon_n = (1/2)(x_n - x'_n), n = 1, 2, ..., N, we have from (4.9.18)

P_E1(epsilon) <= exp{-(1/N_0) SUM_{n=1}^{N} [h_0 epsilon_n^2 + 2 epsilon_n SUM_{i=1}^{L-1} h_i epsilon_{n-i}]}        (5.8.2)

where the subscript E_1 indicates the restriction to an error event with paths that
diverge and remerge only once in the convolutional code trellis during the span of
N channel symbols. Suppose we could average P_E1(epsilon) over all sequences epsilon = (epsilon_1,
epsilon_2, ..., epsilon_N) using the product measure

q_N(epsilon) = PROD_{n=1}^{N} q(epsilon_n)        (5.8.3)

where

q(0) = 1/2        q(+1) = q(-1) = 1/4        (5.8.4)

or, equivalently

q(epsilon) = 2^{-(1 + |epsilon|)}        epsilon in {-1, 0, +1}        (5.8.5)

Averaging P_E1(epsilon) over this ensemble yields

P_E1(N) <= (1/2^N) SUM_epsilon PROD_{n=1}^{N} (1/2)^{|epsilon_n|} exp{-(1/N_0)[h_0 epsilon_n^2 + 2 epsilon_n SUM_{i=1}^{L-1} h_i epsilon_{n-i}]}        (5.8.6)

Note that this expression differs from (4.9.22) in that here the summation is over
all sequences and there is an additional weighting of 1/2^N. It remains, of
course, to justify the validity of this weighting, as we now do by the following
argument.

Figure 5.8 illustrates the generation of the terms inside the product in (5.8.6) 
for the two paths of N channel symbols which correspond to an incorrect path 
that diverges and remerges with the correct path in the code trellis diagram. Its 
right half resembles Figs. 4.22c and 4.23c (the uncoded cases), but the error se 
quence now depends on the code. The error sequence for a particular pair of 
(correct and incorrect) information sequences u and u are generated as shown in 
the left half of Fig. 5.8. The information sequence u is encoded by the convolu 
tional coder into the channel sequence x. The binary sequence is mapped into the 
real channel inputs according to the convention "0" -> +1 and "1" -> -1. Since
x_k^2 = 1, the error sequence term is given by

epsilon_k = (1/2)(x_k - x'_k) = x_k (1/2)|x_k - x'_k|        (5.8.7)

Because of the linearity of the convolutional code, we may form the vector

d = (1/2)|x - x'| = ((1/2)|x_1 - x'_1|, (1/2)|x_2 - x'_2|, ..., (1/2)|x_N - x'_N|)

by first forming the modulo-2 sum of the binary information sequences v = u + u',
encoding this sum using a convolutional encoder identical to that which encodes
u, and mapping the resulting binary sequence according to the convention
"0" -> 0 and "1" -> +1. The error sequence is then obtained, as determined by
(5.8.7), by multiplying this sequence componentwise by the coded sequence x. This
explains the form of the error sequence generator shown in the left half of Fig. 5.8.




(Figure 5.8: error sequence generator and branch metric generator for the coded ISI channel; see text.)

Over the ensemble of time-varying convolutional codes, each component of
the vector d is equally likely to be 0 or 1 provided u and u' are on unmerged paths
(or, equivalently, v has diverged from the all-zeros path). The bit error probability
is averaged not only over the code ensemble but over the data sequence u as well.
Since v varies over all binary sequences independent of u, the sequence x is
independent of the sequence d even though the two generators shown are identi-
cal. Each component of x is equally likely to be +1 or -1. Hence each compo-
nent of the error sequence epsilon is 0 with probability 1/2 and +1 or -1 each with
probability 1/4, which verifies the weighting of (5.8.4).
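The weighting (5.8.4) can be checked empirically with a few lines of Python (purely an illustrative simulation): draw x uniformly from {+1, -1} and d uniformly from {0, 1}, independently, and form epsilon = x d as in (5.8.7).

import numpy as np
rng = np.random.default_rng(0)

x = rng.choice([-1, 1], size=100_000)   # random coded channel symbol
d = rng.integers(0, 2, size=100_000)    # independent 0/1 component of the difference path
eps = x * d
for v in (-1, 0, 1):
    print(v, round(float(np.mean(eps == v)), 3))   # expect about 0.25, 0.5, 0.25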

The branch metric generator half of Fig. 5.8 is similar to that of Figs. 4.22c
and 4.23c except that here the weighting has an additional 1/2 factor to account for
the code ensemble averaging. We now present a straightforward matrix version of
the convolutional coded bit error bound discussed in Sec. 5.1 as modified for the
ISI channel.

Define the state sequence, which corresponds to the contents of the last L - 1
stages of the branch metric generator

s_n = (epsilon_{n-(L-1)}, ..., epsilon_{n-2}, epsilon_{n-1})        n = 1, 2, ..., N + 1        (5.8.8)

and the shift relationship

s_{n+1} = g(epsilon_n, s_n) = (epsilon_{n+1-(L-1)}, ..., epsilon_{n-1}, epsilon_n)        (5.8.9)

Let

A_0 = 0, A_1, A_2, ..., A_{3^{L-1} - 1}

be the 3^{L-1} possible distinct states. Initially we must have s_1 = A_0 = 0 since,
before unmerging, the error sequence is 0. Also define

f(epsilon_n, s_n) = q(epsilon_n) exp{-(1/N_0)[h_0 epsilon_n^2 + 2 epsilon_n SUM_{i=1}^{L-1} h_i epsilon_{n-i}]}        (5.8.10)

and define the 3^{L-1} x 3^{L-1} matrix

A = {a_ij}        (5.8.11)



where

a_ij = f(epsilon, A_j)   if A_i = g(epsilon, A_j) for some epsilon in {-1, 0, 1}
     = 0                 otherwise        (5.8.12)

Then (5.8.6) becomes

P_E1(N) <= [1  1  ...  1] A^N i(A_0)        (5.8.13)

where i(A_0) denotes the 3^{L-1}-dimensional column vector with a "1" in the position
corresponding to the initial state A_0 = 0 and "0" elsewhere.



The matrix A is the state transition matrix of the intersymbol interference. It
has only three nonzero components in each row and each column, where the
nonzero components are branch values of a state diagram whose generation is
shown in Fig. 5.8. For L = 2, for example, it is the state transition matrix for
Fig. 4.22d but with the "0" state included with a self-loop and all branches also
weighted by the probability q(epsilon). Hence

A = | f(0, 0)     f(0, -1)     f(0, 1)  |
    | f(-1, 0)    f(-1, -1)    f(-1, 1) |
    | f(1, 0)     f(1, -1)     f(1, 1)  |

where

f(0, 0) = f(0, -1) = f(0, 1) = 1/2
f(-1, 0) = f(1, 0) = (1/4) e^{-h_0/N_0}
f(-1, -1) = f(1, 1) = (1/4) e^{-(h_0 + 2h_1)/N_0}
f(-1, 1) = f(1, -1) = (1/4) e^{-(h_0 - 2h_1)/N_0}

and a_0, a_1, a_2 are given in Fig. 5.9a, which presents the state diagram for this
case. Note that the bound in (5.8.13) represents the set of all paths of length N
starting from the initial state s_1 = 0. It can terminate in any state, however, since
merger of the code path guarantees only that epsilon_n = 0, but the state vector
s_n = (epsilon_{n-(L-1)}, ..., epsilon_{n-2}, epsilon_{n-1}), the contents of the register of Fig. 5.8, is arbitrary.
This also explains the fact that the premultiplying vector in (5.8.13) is (1 1 ... 1).



Figure 5.9 Coded ISI channel error state diagram (ensemble average) for L = 2. (a) State diagram. (b) Reduced state diagram.



By symmetry, the set of all paths ending at state "1" is the same as the set of
all paths ending at state "-1". Hence, we have for L = 2 (for L = 3 see Prob. 5.11)

P_E1(N) <= [1  1] A~^N i(A_0)

where

A~ = | f(0, 0)      f(0, 1)            |        A_0 = 0
     | 2f(1, 0)     f(1, 1) + f(1, -1) |        A_1 = 1

with

f(0, 0) = f(0, 1) = 1/2
2f(1, 0) = (1/2) a_0
f(1, 1) + f(1, -1) = (1/4)(a_1 + a_2)

The corresponding reduced state diagram is shown in Fig. 5.9b.

In general for memory L, the 3^{L-1} x 3^{L-1} matrix A corresponds to a state
diagram where the 3^{L-1} - 1 nonzero states come in equivalent pairs, for which
the sets of N-step transitions to these states starting at the zero state are the same.
Hence we can always find a reduced state diagram and the corresponding square
matrix A~ of size (3^{L-1} - 1)/2 + 1 such that (5.8.13) is expressed as

P_E1(N) <= [1  1  ...  1] A~^N i(A_0)        (5.8.14)

Thus, in the following, the matrix A~ can be used interchangeably with A, with
concurrent reduction of the dimensionality of the vectors. Initially, however, for
clarity of exposition we shall consider the unreduced diagram; the reduction will
then follow immediately.

Recall that (5.8.13) is the convolutional code ensemble bound on the probabil 
ity that a path diverging from the correct path and remerging N channel symbols 
later (in the convolutional code trellis) causes an error event. If the span over 
which the two paths are apart is K + k branches, then N = n(K + k). The code-
ensemble average bit error bound due to these single code-merger error events is
then (see Sec. 5.1)

P_b1 <= (1/b) SUM_{k=0}^{infinity} b(k + 1)(2^b - 1) 2^{bk} P_k        (5.8.15)

where P_k = P_E1(N) with N = n(K + k). Substituting (5.8.13) into (5.8.15), we see
that P_b1 is bounded by

P_b1 <= SUM_{k=0}^{infinity} (k + 1)(2^b - 1) 2^{bk} [1  1  ...  1] A^{n(K+k)} i(A_0)        (5.8.16)



The matrix A is nonnegative and irreducible. The Perron-Frobenius theorem
(see, e.g., Gantmacher [1959]) states that such a matrix has a real maximum
eigenvalue lambda and an associated positive left eigenvector. Defining alpha > 0 to be the
largest component of the left eigenvector divided by its smallest component, we
have (Prob. 5.12) the inequalities

[1  1  ...  1] A^N i(A_0) <= alpha lambda^N        (5.8.17)

Thus (5.8.16) can be expressed as

P_b1 <= alpha(2^b - 1) SUM_{k=0}^{infinity} (k + 1) 2^{bk} lambda^{n(K+k)}
      = alpha(2^b - 1) lambda^{nK} / (1 - 2^b lambda^n)^2        (5.8.18)



Up to this point, we have restricted the error events to those paths that 
diverge and remerge only once in the convolutional code trellis during the un- 
merged span in the coded ISI trellis. Now consider again the transmitted convolu 
tional coded sequence {*} and another coded sequence {x n } corresponding to an 
error event satisfying (5.8.1), but suppose that the paths merge twice in the convo 
lutional code trellis, merging at NI but diverging again at N 2 where 

nK <= N_1 < N_2 <= N_1 + (L - 1)        (5.8.19)

This means that the code paths diverge again before the epsilon register of Fig. 5.8 is
allowed to clear, for that would require N_2 > N_1 + (L - 1). We thus have, in
addition to (5.8.1)

x'_n = x_n        n = N_1 + 1, N_1 + 2, ..., N_2        (5.8.20)

This situation is sketched in Fig. 5.10 where the paths in the convolutional code
trellis merge at N_1 and N. The error sequence for the N coded symbols is thus

epsilon = (epsilon_1, ..., epsilon_{N_1}, 0, ..., 0, epsilon_{N_2+1}, ..., epsilon_N)        (5.8.21)

Over the ensemble of time-varying convolutional codes with product measure
given by (5.8.3), epsilon has measure

PROD_{n=1}^{N_1} q(epsilon_n)  PROD_{n=N_2+1}^{N} q(epsilon_n)        (5.8.22)




Figure 5.10 Typical two code-merger path in the coded ISI trellis. 



For this error sequence, (5.8.2) becomes

P_E2(epsilon) <= exp{-(1/N_0) SUM_{n=1}^{N_1} [h_0 epsilon_n^2 + 2 epsilon_n SUM_{i=1}^{L-1} h_i epsilon_{n-i}]}
              x exp{-(1/N_0) SUM_{k=N_2+1}^{N} [h_0 epsilon_k^2 + 2 epsilon_k SUM_{i=1}^{L-1} h_i epsilon_{k-i}]}        (5.8.23)

and its average over the ensemble is

P_E2(N_1, N - N_2) = SUM_{epsilon_1} ... SUM_{epsilon_{N_1}} PROD_{n=1}^{N_1} f(epsilon_n, s_n) [1  1  ...  1] A^{N-N_2} i(s_{N_2+1})        (5.8.24)

where i(s_{N_2+1}) is the 3^{L-1}-dimensional column vector with "1" in the position
corresponding to state s_{N_2+1} and "0" elsewhere.(12) An inequality similar to
(5.8.17) also applies here (Prob. 5.12) to give

[1  1  ...  1] A^{N-N_2} i(s_{N_2+1}) <= alpha lambda^{N-N_2}        (5.8.25)

This bound eliminates the dependence on state s_{N_2+1} and allows separation of the
two code-trellis spans that make up the single error event in the coded ISI trellis.
Thus (5.8.13), (5.8.17), and (5.8.25) yield the further bound on (5.8.24)

P_E2(N_1, N - N_2) <= (alpha lambda^{N_1})(alpha lambda^{N-N_2})        (5.8.26)

For fixed N_1, N_2, and N given above, the number of paths that merge twice in
the convolutional code trellis is bounded by (2^b - 1)2^{bk_1}(2^b - 1)2^{bk_2}, where
n(K + k_1) = N_1 and n(K + k_2) = N - N_2. For such error events, there can be at
most b[(k_1 + 1) + (k_2 + 1)] coded binary symbol errors. Since

(k_1 + 1) + (k_2 + 1) <= 2(k_1 + 1)(k_2 + 1)        (5.8.27)

(see Prob. 5.13 for generalizations to l code mergers), the code ensemble average
bit error probability due to these two code-merger error events is bounded by

P_b2 <= (1/b) SUM_{k_1=0}^{infinity} SUM_{k_2=0}^{infinity} 2b(k_1 + 1)(k_2 + 1)(2^b - 1)2^{bk_1}(2^b - 1)2^{bk_2}
        x alpha lambda^{n(K+k_1)} alpha lambda^{n(K+k_2)}
      = 2[alpha(2^b - 1) lambda^{nK} / (1 - 2^b lambda^n)^2]^2        (5.8.28)



(12) This follows since here the initial state is not epsilon = 0 but rather the epsilon corresponding to the contents
of the register when the code paths diverge for the second time.



The bounds for two code-merger error events easily generalize to error events
where there are l path mergers in the convolutional code trellis during the single
unmerged span in the coded ISI trellis. For any integer l, the corresponding
code-ensemble average bit error bound due to these events is

P_bl <= 2^{l-1} [alpha(2^b - 1) lambda^{nK} / (1 - 2^b lambda^n)^2]^l        l = 1, 2, ...        (5.8.29)

Taking the union of events bound over all error events, we find that the code
ensemble average bit error is bounded by

P_b <= SUM_{l=1}^{infinity} P_bl
    <= [alpha(2^b - 1) lambda^{nK} / (1 - 2^b lambda^n)^2] / [1 - 2 alpha(2^b - 1) lambda^{nK} / (1 - 2^b lambda^n)^2]        (5.8.30)

From this we obtain

Theorem 5.8.1 For an additive Gaussian ISI channel with L nonzero
coefficients h_0, h_1, ..., h_{L-1}, there exists a time-varying convolutional code of
constraint length K and rate b/n for which the bit error probability with
maximum likelihood demodulation-decoding is bounded by

P_b <= gamma(R) 2^{-Kb R_0/R} / [1 - 2 gamma(R) 2^{-Kb R_0/R}]        (5.8.31)

where

gamma(R) = alpha(2^b - 1) / [1 - 2^{-b(R_0/R - 1)}]^2

R = (b/n) ln 2 < R_0

R_0 = -ln lambda   nats/channel symbol        (5.8.32)

and where lambda is the maximum eigenvalue of the ISI channel transition matrix
A, and alpha is the ratio of the maximum component over the minimum compo-
nent of the positive left eigenvector associated with lambda.

The maximum eigenvalue lambda and the ratio of eigenvector components alpha are
the same for both the state transition matrix A and the corresponding reduced-
state transition matrix A~ (Prob. 5.12). In the case of duobinary ISI, where h_0 = E_s
and h_1 = E_s/2, we have the maximum eigenvalue

lambda = (1/2){1/2 + (1/4)(1 + a_0^2) + sqrt{[1/2 + (1/4)(1 + a_0^2)]^2 - (1/2)(1 - a_0)^2}}

and ratio

alpha = (2 lambda - 1)/a_0

where

a_0 = e^{-E_s/N_0}

Figure 5.11 Maximum eigenvalue for duobinary ISI.
Figure 5.11 shows lambda as a function of E_s/N_0 for this special case, as well as for
the non-ISI AWGN channel (where h_0 = E_s and h_1 = 0), for which the only
nonzero eigenvalue is (1 + a_0)/2. It is interesting to note that rate = 1/2 encoding
together with duobinary digital linear filtering results in no net change in the
signal spectrum; yet the performance loss relative to rate = 1/2 coding only, as shown
by Fig. 5.11 and (5.8.32), is less than 1 dB. Of course, there are now three
signal levels rather than two.
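A minimal numerical sketch of the duobinary case in Python, assuming the reduced 2-state matrix reconstructed above (the matrix entries below are that assumption, not a quoted result): it finds the maximum eigenvalue lambda and R_0 = -ln lambda of (5.8.32).

import numpy as np

def duobinary_R0(Es_over_N0):
    a0 = np.exp(-Es_over_N0)
    # Assumed reduced 2-state transition matrix for duobinary ISI (h0 = Es, h1 = Es/2)
    A = np.array([[0.5,      0.5],
                  [0.5 * a0, 0.25 * (1 + a0**2)]])
    lam = max(np.linalg.eigvals(A).real)
    return lam, -np.log(lam)            # R_0 in nats/channel symbol

for snr in [0.5, 1.0, 2.0, 4.0]:
    lam, R0 = duobinary_R0(snr)
    print(f"Es/N0 = {snr:.1f}:  lambda = {lam:.3f},  R0 = {R0:.3f} nats")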



5.9 BIBLIOGRAPHICAL NOTES AND REFERENCES 

The basic upper and lower bounds on convolutional codes in Secs. 5.1, 5.2, and 5.4
first appeared in Viterbi [1967a]. The expurgated bound for binary-input, output-
symmetric channels in Sec. 5.3 appeared in slightly weaker form in Viterbi and
Odenwalder [1969]. The results on critical lengths of error events and memory
truncation and initial synchronization errors in Secs. 5.5 and 5.6 are due to
Forney [1974]. The modification of the results of the first four sections for
systematic convolutional codes, treated in Sec. 5.7, is due to Bucher and Heller
[1970]. Application of the ensemble average error probability techniques to the
intersymbol interference channel with coding has not been published previously.




PROBLEMS 

5.1 Find E C (R) and E csp (R) for the following channels and compare with the corresponding block 
exponents 

(a) All channels of Prob. 3.2. 

(b) All channels of Prob. 3.3. 

(c) Channel of Prob. 3.5. 

(d) Channel of Prob. 3.6. 

5.2 Verify (5.6.6) and plot k clh /K versus R for < R < C. 

5.3 (Construction of a Block Code from a Convolutional Code by Termination with a Zero Tail, Forney 
[1974]) Suppose we construct a block code of length (L + K - 1)n = N_b symbols by taking L branches
of a rate b/n convolutional code and terminating it with a (K - 1) branch tail of all-zero data.

(a) Show that the rate of this block code is given by R b = [L/(L + K - l)]R where R = (b/n) In 2 
is the convolutional code rate in nats per symbol. 

(b) Show that the block error probability of this block code is upper-bounded by 

U2 b - 1} 

" e -N b E (p)K/(L + K- 1) < < 1 



1 2~ b[Eo(p)IR ~ p] 
(c) Letting 



show that 



where 



and where Jf is a constant independent of K, for c > 0. 

(d) Now, since L and K are arbitrary, choose theta so as to minimize the bound on P_E. Show then
that

P_E <= K' e^{-N_b E_b(R_b)}

where

E_b(R_b) = max_{theta, R: R_b = R theta} {(1 - theta) E_c(R)}

(e) Substituting the result of (c) into that of (d) show

E_b(R_b) = max_{0 <= rho <= 1} [E_0(rho) - rho R_b]

Thus, aside from epsilon << 1 and the constant K', we have constructed a block code which is as good as
the ensemble average upper bound on block codes (Chap. 3).

5.4 In Prob. 5.3, suppose that after step (b), we arbitrarily choose L = k_crit of (5.5.5), the critical run
length of errors. Then show

R_b = [k_crit/(k_crit + K)]R = E_0'(rho)        0 <= rho <= 1
E_b(R_b) = E_0(rho) - rho E_0'(rho)

and thus obtain the same result as in 5.3(e)




5.5 (Lower Bound by a Termination with a Zero Tail Argument: Alternative Proof to Theorem 5.4.1, 
Forney [1974]) 

(a) Consider a terminated convolutional code as in Prob. 5.3. Show that this block code must 
have block error probability 



where 

N b = (L+K- l) 

R - L R 
R *- L + k -l R 

(b) Applying the definitions of Prob. 5.3(c) show that 



where 

sp (R b ) = E (p) - pE (p) < p < oo 
R b = E (p) = R0 < 9 < 1 

(c) Now show that the probability of at least one error in L branches of any convolutional code is 
lower-bounded by 

p ^ J - bK[E csp (f 

i e ^ 

where 

csp (R) = min 

e. R:R b = RO 

= mm E M-? E M 
o<p<oo l-E .(p)/R 

and this minimization yields 

E c (R) = E (p) where R = -^ < p < oo 

P 

5.6 (Upper Bound on Free Distance by a Termination with a Zero Tail Argument, Forney [1974]) 

(a) Show that, for a terminated convolutional code with parameter as in Prob. 5.3 

d min (block) > d free (convolutional) 

(b) Thus, given any upper bound for the block code 

J min (block) < D(R b ) 
show that [using the definition of 5.3(c)] 



and hence 






<9<1 N b (l-8) 






(c) Using the Plotkin bound D/N b i(l - R b /\n 2), show that (b) merely results in 

</,... 1 



(K - l)n ~ 2 

(Note that this agrees asymptotically with the result of Prob. 4.9. A tighter, more useful bound can be 
obtained from the Elias upper bound with considerably more manipulation.) 
5.7 (Gilbert-Type Lower Bound on Free Distance) 

(a) Suppose d free < 6(R) for every binary convolutional code. Show that, on a binary-input 
output-symmetric channel, any such code yields 



-(6(R)-o(K)] 



(b) Suppose that 



d(R) -R 



Kn In (2e~ R - 1) 



Show that this would imply that, for every binary convolutional code used on a binary-input, 
output-symmetric channel 



where 

E cex (K) In Z 



R In (2e~ R - 1) 

(c) Show that this is in direct contradiction to the upper bound of Theorem 5.3.1 and Corollary 
5.3.1, and that hence there exists a convolutional code for which 

d(R) -R 



Kn In (2e~ R - 1) 

(d) Suppose we terminate this code in exactly the same manner as Probs. 5.3, 5.5 and 5.6. Show 
that there exists a resulting block code with 



R = In 2 - jHf(d b ) (Gilbert bound) 

5.8 Consider an L-branch convolutional code of rate b/n. Show that, over some ensemble of convolu 
tional codes, the average node error probability for any node j is bounded by 

P~(/) < L 2~ Kb[E (p)+0(Kn/R 
where p satisfies 



R = 



when (1) < R < C, and p = 1 for R < E (l). 
5.9 Prove (5.5.17) of Theorem 5.5.1. 




5.10 Consider a K = 3, r = ^ time-varying convolutional encoder where at time i when the binary data 
symbol u t enters the encoder the output binary symbols are v, = (v ilt v i2 ) as shown below, and g^ , g\, 




Figure P5.10 

and g ( 2> are the time-varying connection vectors of dimension 2. Assuming the all-zero data sequence is 
transmitted, we can consider the modified state diagram showing distances of all branches from the 
all-zero path branch at time i as 




W"/ 



where, for k = 1, 2, 3, ..., 7, 



is the sum of the encoder output binary symbols for the kth branch of the state diagram at time i. 

(a) Define, for the above state diagram, C X (I>, /;y, i) = transition function for all paths going 
from state a to state b at time j and going to state x at exactly time i, where x = b, c, d. 

Let 






and find A(i + 1) such that 



Initially we have 



fD" Y 

. 






(b) For a binary-input, output-symmetric channel, show that the node error probability at node j 
is bounded by 



/=!. D=Z 

(c) Suppose at each time i the time-varying connection vectors are independently selected at 
random according to 

P(& sft 82) - (i) 6 

for all connections go , gj , and gj . Over this ensemble of time-varying codes, show that 



and the averaged bit error probability is bounded by 



_ ^ dT(D, I) 



where 



dl 



T(D, I) = 



D 3 / 



1 - 0(1 + D)I 

(d) Generalize (c) to arbitrary K and rate r = l/n where 

D K (\-D)I 



+ 7(1 - 



and 



-m- 



_ 2 -do/*) 



This gives a bound which is exponentially the same as those of (5.1.23) for rates 
R<R = -ln[(l + Z)/2]. 

Hint: See T(L, I) given by (4.6.5). 
5.11 Show that, for g = 3, the 9 x 9 matrix A defined in (5.8.11) reduces to the 5 x 5 matrix 



A = 



where a , a lt a 2 , a 3 , a 4 , a s , a 6 , a 7 , and a 8 are defined by Fig. 4.23. Here the state at time n is 
s n = (c n _ 2 , e n _ l ). Also sketch the reduced state diagram for this case and show that (5.8.13) becomes 





1 

2 











I 
2 


A 


= (0, 0) 


00 








- 


K7 + a 8 ) 


AI 


= (0, 1) 





ifl, 


i 3 


K 





A 2 


-(1. 1) 





i 2 


K 


1*5 





A 3 


= (1. + 1 ) 


.0 


1 

2 


i 


2 





A 4 


= (1,0) 



P E (N) < [1 1 1 1 l]A N 



5.12 Prove the inequalities (5.8.17) and (5.8.25) and show that lambda and alpha are the same for the state
transition matrix A and the reduced state transition matrix A~. Note that alpha is the ratio of the largest
component to the smallest component of the positive left eigenvector of A associated with the maxi-
mum eigenvalue lambda.



5.13 For nonnegative integers k_1, k_2, ..., k_l, prove the inequality

(k_1 + 1) + (k_2 + 1) + ... + (k_l + 1) <= 2^{l-1} (k_1 + 1)(k_2 + 1) ... (k_l + 1)

This general form of (5.8.27) is required to prove the code-ensemble average bit error bound for l
code-merger events given in (5.8.29).

5.14 Generalize the results of Sec. 5.8 to channels with an arbitrary but known finite memory part 
followed by a noisy memoryless part where, for channel input sequence x = (x x , x 2 , ..., x v ), the 
channel output sequence y = (y^ y 2 , , >\) has conditional probability 

N 

p. v (y|x,s 1 )= p(y n x.*-i, ..,*-(*-!)) 

where Sj = (x 2 _y, ..., x_ 1? x ). This is a channel with memory y. Defining the state sequence 

S n = ( X n-(^- 1) > X n-2 X n- l) 

the channel conditional probability becomes 



p N (y\x,Si) = 
where there is a state transition equation 



A- H e 9C 


Noiseless 




Noisy 


1 * 




memory JC 




memoryless 


1 



Channel 



(a) Assume two input sequences 



Figure P5.14 



and initial states s t and s^. Show that, for the maximum likelihood decision rule, the two-signal error 
probability is bounded as 



P (x, x ls^ 



(b) Select the components of x and x independently according to the probability distribution 
<j(x), x e 3C. Then show that 



F ,(X,X S^S J < S Z Z I S Z l 

x\ x t x 2 x 2 x s x.v n= 1 

(c) Define the " super state " 




where K = \3C\ is the number of channel input letters and A are the K 2( ^~ 1} distinct "super states." 
Defining x = (x, x ), q(\) = q(x)q(x ), and super-state transition expression s k+ 1 = g(x fc , s k ), show that 



*, & 2 v n=l 

= [\ l-\]A N i(*) 

where i(sj) is the (K 2( ^~ ^^dimensional column vector with " 1 " in the position corresponding to state 
Sj and "0" elsewhere, and where 



is the K 2( *~ l} x K 2( *~ l} matrix with 

_ j/(x, A,) if A, = g(x, Aj) for some x = (x, x ) 
I otherwise 



and 



/(M) = (x)(; 



(d) Verify that Theorem 5.8.1 generalizes to this general finite memory channel. 



CHAPTER 

SIX 



SEQUENTIAL DECODING OF 
CONVOLUTIONAL CODES 



6.1 FUNDAMENTALS AND A BASIC STACK ALGORITHM 

In the last two chapters, we described and analyzed maximum likelihood decoders
for convolutional codes. While their performance is significantly superior to that
of maximum likelihood decoders for block codes, they suffer from the same disad
vantage that computational complexity grows exponentially with constraint 
length. Thus, even though error probability decreases exponentially with the same 
parameter, the net effect is that error probability decreases only algebraically with 
computational complexity. The same is true for block coding, but of course the 
rate of decrease is much greater with convolutional codes. 

This situation could be improved if there were a way to avoid computing the 
likelihood, or metric, of every path in the trellis and concentrate only on those 
with higher metrics which presumably should include the correct path. 1 It is 
practically intuitive, based on our previous analyses, that while an incorrect path 
is unmerged from the correct path, its metric increments are much lower than 
those of the correct path over this segment. We can support this observation 
quantitatively by again considering the ensemble of all possible convolutional 
codes of a given constraint length for a given channel. Let x and x be the code- 
vectors for the correct and an incorrect path over a segment where the two are 
unmerged, and let y be the received output vector from the memoryless channel 

1 An extension of a given path is regarded as another path. 





over this segment. We now indicate the nth symbol of each vector by the subscript 
n. Suppose we arbitrarily choose for our metric 



M(x) = SUM_n m(x_n)        (6.1.1)

where

m(x_n) = ln [p(y_n | x_n)/p(y_n)] - R        (6.1.2)

and where we define

p(y_n) = SUM_{x_n} q(x_n) p(y_n | x_n)        (6.1.3)

and q(x) is the arbitrary weighting distribution imposed on the code ensemble
(see Sec. 5.1).

We note, first of all, that this choice of metric is consistent with the maximum 
likelihood metric used previously. For, in maximum likelihood decoding, only the 
difference between the metrics of the paths being compared is utilized. Thus, as we 
previously defined in (4.4.1) and (5.1.1), the metric difference is 



Delta M(x, x') = M(x) - M(x') = SUM_n ln [p(y_n | x_n)/p(y_n | x'_n)]        (6.1.4)



where the sum is over the symbols in the unmerged span. Consequently, the terms 
p(y n ) and R do not appear in the metric difference, and hence are immaterial in 
maximum likelihood decoding. On the other hand, in any algorithm which does 
not inspect every possible path in making a decision but must choose among 
paths of different lengths, these terms introduce a bias which is critical in optimiz 
ing the performance of the algorithm. 2 To illustrate the effect of these terms, 
consider the average metric increase for any symbol of the correct path. As usual, 
we take both the expectation with respect to the channel output conditional 
distribution p(y n \ x n ) and the ensemble average with respect to the input weighting 
distribution q(x n ). Thus we have 



E_{x_n, y_n}[m(x_n)] = SUM_{x_n} q(x_n) SUM_{y_n} p(y_n | x_n) {ln [p(y_n | x_n)/p(y_n)] - R}
                     = I(q) - R        (6.1.5)

and, if we choose the weighting vector q to maximize I(q) and thus make it equal
to channel capacity,

E_{x_n, y_n}[m(x_n)] = C - R > 0        for all R < C        (6.1.6)

2 Massey [1972] has given analytical justification that the metric (6.1.2) is the optimum decoding 
metric. This metric was first introduced by Fano [1963] and is referred to as the Fano metric (see 
Prob. 6.7). 



On the other hand, for any symbol on an unmerged incorrect path

E_{x_n, x'_n, y_n}[m(x'_n)] = SUM_{x_n} q(x_n) SUM_{y_n} SUM_{x'_n} p(y_n | x_n) q(x'_n) m(x'_n)
   = SUM_{x_n} SUM_{x'_n} SUM_{y_n} q(x_n) q(x'_n) p(y_n | x_n) {ln [p(y_n | x'_n)/p(y_n)] - R}
   <= SUM_{x_n} SUM_{x'_n} SUM_{y_n} q(x_n) q(x'_n) p(y_n | x_n) [p(y_n | x'_n)/p(y_n) - 1] - R

where we have used (6.1.3) and the inequality ln x <= x - 1. Then since the sum-
mation in the last inequality is identically zero we have

E_{x_n, x'_n, y_n}[m(x'_n)] <= -R        (6.1.7)

The reason that we had to average over the weighting of x n , the corresponding 
symbol of the correct path, is that the distribution of the channel output y n is 
conditioned on it. 

Thus, we have the heuristic result that the " average " metric 3 increment per 
symbol of the correct path is always positive for R < C, while on an unmerged 
incorrect path it is always negative. Obviously, any bias term less than C could be 
used in place of R, but this choice minimizes the computational complexity. The 
main conclusion to be drawn from this is that, on a long constraint length con- 
volutional code, it should be possible to search out the correct path, since only its 
metric will rise on the average, while that of any unmerged incorrect path will fall 
on the average. By making the constraint length K sufficiently long, the fall in 
any unmerged span can be detected and the path discarded, usually soon after 
diverging. 
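As a small numerical illustration of (6.1.6) and (6.1.7) (an assumed example: a BSC with p = 0.03, rate r = 1/2, and equiprobable input weighting), the following Python fragment computes the expected per-symbol metric drift on the correct and on an unmerged incorrect path.

import numpy as np

p, r = 0.03, 0.5
R = r * np.log(2)
C = np.log(2) + p * np.log(p) + (1 - p) * np.log(1 - p)   # BSC capacity in nats

# Correct-path symbol: agrees with the received symbol with probability 1 - p.
m_correct = (1 - p) * (np.log(2 * (1 - p)) - R) + p * (np.log(2 * p) - R)
# Incorrect-path symbol is independent of the received symbol, so it agrees half the time.
m_incorrect = 0.5 * (np.log(2 * (1 - p)) - R) + 0.5 * (np.log(2 * p) - R)

print("correct path  :", round(m_correct, 3), "  (C - R =", round(C - R, 3), ")")
print("incorrect path:", round(m_incorrect, 3), "  (bound -R =", round(-R, 3), ")")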

Before we can substantiate these heuristic generalities, we must describe an 
algorithm which somehow recognizes and utilizes these properties. We begin by 
defining a sequential decoding algorithm as an algorithm which computes the 
metric of paths by extending, by one branch only, a path which has already been 
examined, and which bases the decision on which path to extend only on the 
metrics of already examined paths. 

Probably the most basic algorithm in this class, and certainly the simplest to 
describe, is the stack sequential decoding algorithm whose flowchart is shown in 
Table 6.1 for a rate b/n convolutional code. We adopt the notation mu_{u,w} for the
branch metrics, which consist of the sum of n symbol metrics and depend on the b
data symbols w of the given branch as well as on the (K - 1)b preceding data
symbols u of the path which determine the state of the node. Thus the algorithm
creates a stack of already searched paths of varying lengths, ordered according to 
their metric values. At each step, the path at the top of the stack is replaced by its 



3 This average, of course, is over the ensemble of codes defined by the arbitrary weighting distribu 
tion q(x). From this we can not necessarily conclude at this point that the same will be true for a 
particular code. To deduce this from the ensemble average can only be considered a heuristic 
argument. 






Table 6.1 Stack algorithm flow chart

1. Initialize by placing the initial node with metric 0 in the stack.
2. Replace the top path u and its metric by its 2^b successors with augmented metrics*
   M_{u,w} = M_u + mu_{u,w},   w = 0, 1, ..., 2^b - 1.
3. If any of the 2^b newly added paths merges with a path already in the stack, eliminate the one with lower metric.
4. Reorder the stack according to metric values.
5. Is the node at the top of the stack at the end of the trellis? If not, return to step 2; if so, output the path for the top node and stop.

* The metric subscripts u and w indicate data vectors; in the next section we shall identify
metrics by their code vectors x used as arguments of M(.).



2^b successors extended by one branch, with correspondingly augmented metrics. If
any one of the newly added paths merges with any other trellis path already in the
stack, the one with lower metric is eliminated. The algorithm continues in this way
until the end of the trellis is reached.
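A minimal Python sketch of this stack algorithm follows (illustrative only: the generators, tap convention, and received word are assumptions, and the merge-elimination step of Table 6.1 is omitted for brevity). The symbol metrics +1 and -10 are the scaled BSC Fano metrics derived later in this section.

import heapq

G = [(1, 1, 1), (1, 0, 1)]                 # assumed rate-1/2, K=3 generator taps, newest bit first

def branch(state, bit):
    reg = (bit,) + state                   # register contents (newest, ..., oldest)
    out = [sum(r * g for r, g in zip(reg, gen)) % 2 for gen in G]
    return reg[:2], out                    # next state and the two code symbols

def metric(out, received):
    return sum(1 if o == r else -10 for o, r in zip(out, received))

def stack_decode(received_pairs):
    # heap of (-metric, data-so-far, state); heapq pops the smallest, so metrics are negated
    stack = [(0, (), (0, 0))]
    while True:
        neg_m, data, state = heapq.heappop(stack)
        if len(data) == len(received_pairs):      # top node is at the end of the trellis
            return data
        for bit in (0, 1):
            nstate, out = branch(state, bit)
            m = -neg_m + metric(out, received_pairs[len(data)])
            heapq.heappush(stack, (-m, data + (bit,), nstate))

received = [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]   # hypothetical received branches
print(stack_decode(received))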

An example of the basic stack algorithm search is illustrated in Fig. 6. 1 which 
shows the tree and path metrics for the K = 3, r = 1/2, convolutional code, first
studied in Chap. 4 (Fig. 4.2), transmitted over a BSC with p = 0.03. To determine 



Stack contents after reordering (path followed by metric)

Time 1:  0, -9;  1, -9
Time 2:  1, -9;  00, -18;  01, -18
Time 3:  10, -7;  00, -18;  01, -18;  11, -29
Time 4:  100, -16;  101, -16;  00, -18;  01, -18;  11, -29
Time 5:  101, -16;  00, -18;  01, -18;  1000, -25;  1001, -25;  11, -29
Time 6:  1010, -14;  00, -18;  01, -18;  1000, -25;  1001, -25;  11, -29;  1011, -36
Time 7:  10100, -12;  00, -18;  01, -18;  1000, -25;  1001, -25;  11, -29;  10101, -34;  1011, -36

Figure 6.1 Stack algorithm decoding example.



the symbol metrics, we note first that R = r ln 2 = 0.347 and p(y_n) = 1/2 for y_n = 0
and 1. Thus from (6.1.2) we have

m(x_n) = ln [2(1 - p)] - R = 0.316        for x_n = y_n
       = ln (2p) - R = -3.160             for x_n not equal to y_n

Since the order of the search is unaffected if the symbol metrics are all multiplied
by the same positive constant, we may equally use

m(x_n) = +1        for x_n = y_n
       = -10       for x_n not equal to y_n



and thus simplify the bookkeeping and the diagram. Figure 6. 1 shows the first seven 
steps of the search, indicating the path and metric values after each new pair of 
branches have been searched and the stack reordered. Since no two equal length 
paths with the same terminal state appear in the stack shown through the seventh 
step, no eliminations occur due to merging up to this point. If, for example, the 
correct data sequence had been 10100 so that the received code sequence 
contains two errors, in the first and third branches (underlined), then it appears 
that by the fifth step the correct path has reached the top of the stack and remains 
there at least through the seventh step. Assuming that, of the paths shown in 
Fig. 6.1, no path other than the top path is further extended (which would cer 
tainly be the case if no further errors occurred), we see that, from the third node 
(where the trellis reaches its full size) through the sixth node, only eight branch 
metric computations were required by this sequential stack algorithm as 
compared to the 24 computations required in maximum likelihood decoding. 
Obviously this comparison becomes ever more impressive as the code constraint 
length grows. 
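The symbol metrics used in this example are easily reproduced (a short illustrative computation for the BSC with p = 0.03 and r = 1/2, the values quoted above):

import numpy as np

p, r = 0.03, 0.5
R = r * np.log(2)                       # 0.347 nats
m_agree  = np.log(2 * (1 - p)) - R      # symbol agrees with the received symbol
m_differ = np.log(2 * p) - R            # symbol differs
print(round(m_agree, 3), round(m_differ, 3))       # 0.316  -3.16
print("ratio:", round(m_differ / m_agree, 1))      # about -10, hence the +1 / -10 scaling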

Nevertheless, each step of the sequential stack algorithm does not necessarily 
advance the search by one branch. It is clear from the example that the number of 
incorrect paths searched varies from node to node. At each node of the correct 
path, we define the incorrect subset to be the set of all paths which diverge from the 
correct path at this node. For a rate 1/n code, exactly half the paths emanating
from a given node are in its incorrect subset. In the example, assuming again that 
no further search occurs within the first six branching levels, we see that three 
paths were searched in the incorrect subset of the first node, one path in that of the 
second node, and three in that of the third node. Let us define a branch computa 
tion as the calculation of the metric of a single path by extension of one branch of 
a previously examined path. Thus the number of branch computations per node 
level is just one more than the number of branch computations in the incorrect 
subset of that node. As shown more generally in Fig. 6.2, there are C j paths 
(branch computations) in the incorrect subset of node;, and hence C, + 1 compu 
tations required ultimately to reach node level j + 1 from node j without ever 
again retreating. 4 Clearly C ; is a random variable, but, as we shall see in the next 

4 Note that the jth incorrect subset may be revisited at any later time, but we take Cj as the total 
number of branch metrics computed in this subset over all visits. 



Figure 6.2 Incorrect subsets for first three nodes (containing C_1, C_2, and C_3 paths, respectively, together with the correct path).



section, its distribution is independent of the constraint length of the code, 
although it does depend on the rate R. Equally important is the fact that, even 
though this algorithm is suboptimal, asymptotically for large constraint length it 
performs essentially as well as maximum likelihood decoding. 

We examine the distribution of the number of computations in Sec. 6.2 and 
the error probability in Sec. 6.3. 



6.2 DISTRIBUTION OF COMPUTATIONS: UPPER BOUND 



Let x be the correct path through the trellis and let x'_j be any incorrect path which
diverges from x at node j; that is, x'_j is a path in the incorrect subset of node j.
Further, let M[x(i)] be the metric up to node i of the correct path, and let M[x'_j(k)]
be the metric at node k of x'_j where k > j. The number of computations in the jth
incorrect subset will depend on the relative values of the metrics M[x'_j(k)] for all
incorrect paths in the subset and on M[x(i)] where both k > j and i >= j. Precisely,
we have the following condition:



Lemma 6.2.1 The incorrect path x'_j in the jth incorrect subset may have to be
searched beyond node k > j only if

M[x'_j(k)] >= min_{i >= j} M[x(i)] = gamma_j        (6.2.1)

PROOF A path is searched further if and only if it reaches the top of the stack.
We may assume node j on the correct path has been reached; otherwise the
incorrect subset for node j will be empty. The algorithm guarantees that, if
M[x'_j(k)] < M[x(i)], then this incorrect path x'_j cannot be searched further
until after x has been searched to a point at which its metric falls below
M[x'_j(k)], and hence its position in the stack falls below that of x'_j. But if

min_{i >= j} M[x(i)] > M[x'_j(k)]

then this never happens and consequently the incorrect path in question is
never searched again, which proves the lemma. Note that we have ignored
mergers, but since Lemma 6.2.1 is only a necessary, and not a sufficient,
condition for further search, the side condition which causes the pruning of
merging paths can be ignored.

This lemma is all that we need to determine the upper bound on the distribution
of computation in the jth incorrect subset, which we henceforth denote 𝒳'(j).
We note first that the number of computations C_j in this subset will exceed L only
if L paths in 𝒳'(j) satisfy condition (6.2.1). Hence

Pr{C_j ≥ L} ≤ Σ_y p(y|x) φ_y(L)     (6.2.2)



where the received code vector⁵ y runs over all symbols beyond node j, and

φ_y(L) = 1   if M[x_j'(k)] ≥ γ_j for at least L paths x_j'(k) ∈ 𝒳'(j)
       = 0   otherwise     (6.2.3)

We proceed to upper-bound (6.2.3) by noting that if, for a given y, φ_y(L) = 1, then
by definition

M[x_j'(k)] − γ_j ≥ 0   for at least L paths x_j'(k) ∈ 𝒳'(j)

and consequently

e^{α(M[x_j'(k)] − γ_j)} ≥ 1   for these L paths, for any α > 0

and is nonnegative for all other paths. Thus, summing over all paths in the
incorrect subset, we obtain that for any y for which φ_y(L) = 1

Σ_{x_j'(k)∈𝒳'(j)} e^{α(M[x_j'(k)] − γ_j)} ≥ L   for any α > 0



⁵ Notation and discussion is simplified if we do not specify the dimensions of vectors; these are
either implicit or specifically designated after each equation.




Equivalently,

[ (1/L) Σ_{x_j'(k)∈𝒳'(j)} e^{α(M[x_j'(k)] − γ_j)} ]^ρ ≥ 1 = φ_y(L)   for any α > 0 and ρ > 0     (6.2.4)

The inequality (6.2.4) also holds trivially (as a direct inequality without the
intermediate unity term) for y such that φ_y(L) = 0. Also, from the definition (6.2.1)
of γ_j, it follows that

e^{−αργ_j} = exp{ −αρ min_{i≥j} M[x(i)] }

and hence

e^{−αργ_j} ≤ Σ_{i≥j} e^{−αρM[x(i)]}     (6.2.5)

Thus combining (6.2.4) and (6.2.5), we have

φ_y(L) ≤ L^{−ρ} Σ_{i≥j} e^{−αρM[x(i)]} [ Σ_{x_j'(k)∈𝒳'(j)} e^{αM[x_j'(k)]} ]^ρ     (6.2.6)

for all y and any α > 0, ρ > 0. Substituting into (6.2.2) we obtain

Lemma 6.2.2 The distribution of computation in the jth incorrect subset is
upper-bounded by

Pr{C_j ≥ L} ≤ L^{−ρ} Σ_y p(y|x) Σ_{i≥j} e^{−αρM[x(i)]} [ Σ_{x_j'(k)∈𝒳'(j)} e^{αM[x_j'(k)]} ]^ρ     α > 0, 0 < ρ ≤ 1     (6.2.7)

Note, of course, that the metrics M[·], as defined by (6.1.1) and (6.1.2),
are functions of y as well as of x or x'.

To proceed, we again consider the ensemble of time-varying convolutional
codes, first described and used in Sec. 5.1. Averaging over this ensemble, and
arguing just as in (5.1.14) and (5.1.15) by restricting ρ to the unit interval and using
the Jensen inequality, we obtain⁶



\overline{Pr{C_j ≥ L}} ≤ L^{−ρ} Σ_y Σ_{i≥j} \overline{p(y|x) e^{−αρM[x(i)]}} [ Σ_{x_j'(k)∈𝒳'(j)} \overline{e^{αM[x_j'(k)]}} ]^ρ     α > 0, 0 < ρ ≤ 1

and where the first and second overbars on the right side indicate averages with
respect to the weighting distributions q[x(i)] and q[x_j'(k)], respectively. Finally,

⁶ Note that in taking this ensemble average, we are again ignoring possible merging of the correct
and incorrect paths. But, if merging occurs, we would not need to make any further computations on
the incorrect path in question; thus ignoring merging merely adds additional terms to the upper
bound, which is therefore still valid.




recognizing that in a rate b/n code, ignoring mergers, there are fewer than 2^{b(k−j)}
paths x_j'(k) for k > j and that all averages are the same, we obtain, with the aid of
inequality (g) of App. 3A

\overline{Pr{C_j ≥ L}} ≤ L^{−ρ} Σ_{i≥j} Σ_{k>j} Σ_y \overline{p(y|x) e^{−αρM[x(i)]}} 2^{b(k−j)ρ} [ \overline{e^{αM[x_j'(k)]}} ]^ρ

To simplify the notation, we let t = i − j and τ = k − j and summarize the above
results as

Lemma 6.2.3 The ensemble average computational distribution in the jth
incorrect subset is upper-bounded by

\overline{Pr{C_j ≥ L}} ≤ L^{−ρ} Σ_{t=0}^∞ Σ_{τ=0}^∞ T(t, τ)     (6.2.8)

where

T(t, τ) = 2^{bτρ} Σ_y \overline{p(y|x(t)) e^{−αρM[x(t)]}} [ \overline{e^{αM[x'(τ)]}} ]^ρ     α > 0, 0 < ρ ≤ 1     (6.2.9)

and where x(t) and x'(τ) are codeword segments of t and τ branches,
respectively.



This bound is clearly independent of j. To evaluate (6.2.9), it is necessary to
distinguish the cases τ ≤ t and τ > t. As shown in Fig. 6.3, the former case corresponds
to the case where the correct path segment under consideration is longer
than the incorrect, and vice versa for the latter. Then, since the channel is memoryless,
it follows from the definitions (6.1.1) through (6.1.3) that, for τ ≤ t



T(t, τ) = 2^{bτρ} [ e^{−E_C(α,ρ)} ]^{n(t−τ)} [ e^{−E_{CI}(α,ρ)} ]^{nτ}     (6.2.10a)

where

e^{−E_C(α,ρ)} = Σ_y \overline{p(y|x) e^{−αρ m(x,y)}}     (6.2.11)

and

e^{−E_{CI}(α,ρ)} = Σ_y \overline{p(y|x) e^{−αρ m(x,y)}} [ \overline{e^{α m(x',y)}} ]^ρ     (6.2.12)

while for τ > t

T(t, τ) = 2^{bτρ} [ e^{−E_{CI}(α,ρ)} ]^{nt} [ e^{−E_I(α,ρ)} ]^{n(τ−t)}     (6.2.10b)

where

e^{−E_I(α,ρ)} = Σ_y p(y) [ \overline{e^{α m(x,y)}} ]^ρ     (6.2.13)

Here the sums and averages are taken per channel symbol, with m(x, y) the per-symbol metric increment of (6.1.2) and the overbars denoting the same ensemble averages as before.




Figure 6.3 Relative node depths for Eq. (6.2.9): (a) t ≥ τ, the correct path segment extends beyond the incorrect one; (b) τ > t, the incorrect path segment extends beyond the correct one.



It should be clear that the single subscripts C and I correspond to segments which
contain only the correct or incorrect path branches, respectively, while the double
subscript CI corresponds to segments which contain branches of both the correct
and incorrect paths (see Fig. 6.3).

Thus, since R = (b/n) ln 2, we may rewrite T(t, τ) as



T(t, τ) = exp[ −n{ (t − τ)E_C(α, ρ) + τ[E_{CI}(α, ρ) − ρR] } ]     τ ≤ t
T(t, τ) = exp[ −n{ (τ − t)[E_I(α, ρ) − ρR] + t[E_{CI}(α, ρ) − ρR] } ]     τ > t     (6.2.14)

Finally, applying the Hölder inequality (App. 3A) to each component of the exponents,
using the definitions (6.2.11) through (6.2.13), we find

e^{−E_C(α,ρ)} ≤ e^{αρR − (1−αρ)E_0[αρ/(1−αρ)]} ≜ δ_C     (6.2.15)

e^{−[E_I(α,ρ) − ρR]} ≤ e^{ρ(1−α)R − αρE_0[(1−α)/α]} ≜ δ_I     (6.2.16)

e^{−[E_{CI}(α,ρ) − ρR]} ≤ e^{ρR − (1−αρ)E_0[αρ/(1−αρ)] − αρE_0[(1−α)/α]} = δ_C δ_I     (6.2.17)




where 0 < αρ < 1 and where E_0(ρ) is the Gallager function⁷ of (3.1.18). Thus

T(t, τ) ≤ δ_C^{nt} δ_I^{nτ}     for all t ≥ 0, τ ≥ 0     (6.2.18)

Now in order for the double summation (6.2.8) to be bounded, we must have
δ_C < 1 and δ_I < 1; but according to (6.2.15) and (6.2.16)

δ_C < 1   if R < [(1 − αρ)/(αρ)] E_0[αρ/(1 − αρ)]

δ_I < 1   if R < [α/(1 − α)] E_0[(1 − α)/α],  or  R < E_0(δ)/δ  where δ = (1 − α)/α > 0

Thus for α = 1/(1 + ρ), both conditions reduce to

δ_C < 1,  δ_I < 1   if R < E_0(ρ)/ρ     0 < ρ ≤ 1     (6.2.19)
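To see explicitly why the two conditions coincide under this substitution (a one-line check added here for clarity), note that with α = 1/(1 + ρ)

αρ/(1 − αρ) = [ρ/(1 + ρ)] / [1/(1 + ρ)] = ρ     and     (1 − α)/α = [ρ/(1 + ρ)] (1 + ρ) = ρ

so both arguments of E_0 above become ρ, and both rate conditions become R < E_0(ρ)/ρ as stated in (6.2.19).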

Finally, choosing ρ such that

R = (1 − ε) E_0(ρ)/ρ     ε > 0

we have from (6.2.8), (6.2.18), and (6.2.19)

\overline{Pr{C_j ≥ L}} ≤ A L^{−ρ}

where

A = 1/[(1 − δ_C^n)(1 − δ_I^n)]

Thus, we may conclude with 

Theorem 6.2.1 There exists a time-varying, rate b/n, convolutional code
whose distribution of computation in any incorrect subset (and hence of
computation required to advance one branch) is bounded by

Pr{C ≥ L} ≤ A L^{−ρ}     0 < ρ ≤ 1     (6.2.20)

where A is a constant, ρ is related to the rate R = (b/n) ln 2 by the
parametric equation

R = (1 − ε) E_0(ρ)/ρ     (6.2.21)

and ε is any positive constant. The distribution described in (6.2.20) is called a
Pareto distribution. Note that the power ρ goes to unity as R → R_0 and to zero
as R → C when we let ε → 0.

⁷ Maximization of the exponent with respect to q(x) is implied here.

Obviously, the condition (6.2.19) also yields an upper bound for lower
rates (R < R_0). We may take ρ = 1 so that

Pr{C ≥ L} ≤ A/L     R < R_0     (6.2.22)

However, one would expect a more rapid decrease with L for low rates, and in
fact, if we remove the linearity condition on our code, a tighter result can be
proved. Precisely, for a time-varying trellis code,⁸ it can be shown (Savage
[1966]) that

Pr{C ≥ L} ≤ A L^{−ρ}     0 < ρ < ∞     (6.2.23)

where

R = (1 − ε) E_0(ρ)/ρ     0 < R < C(1 − ε)

We shall show in Sec. 6.4 that this is the best possible computational 
distribution by deriving a lower bound for any sequential decoding algorithm. 
But before we do this, in Sec. 6.3 we upper-bound the error probability for 
this algorithm, to show that it is asymptotically optimum for large K. 



6.3 ERROR PROBABILITY UPPER BOUND 

The calculation of an upper bound on node or bit error probability for sequential
decoding is almost the same as that for the distribution of computations. We now
concentrate on the merging of paths, but rather than consider the probability that
an incorrect path metric exceeds the correct path metric upon merging, we recognize
that an incorrect path in the jth incorrect subset does not even get a chance to
reach the merging point if its metric at the merging point is below the minimum
metric of the correct path after node j. That is, consider the incorrect path x_j'(k)
which diverges from the correct path at node j and remerges at node k. If

M[x_j'(k)] < min_{i≥j} M[x(i)]

then the incorrect path does not even get a chance to be compared with the correct
path at this point.⁹ Alternatively, we can state this in the same form as Lemma
6.2.1.

⁸ A general time-varying trellis code of constraint length K can be generated by the same K-stage
shift register(s) as a convolutional code, but with time-varying arbitrary logic ("and" and "or" gates)
in place of linear logic (modulo-2 addition).

⁹ This implies also that the step in the stack algorithm (Table 6.1) which eliminates merging
paths may be omitted. (See Sec. 6.5.)




Lemma 6.3.1 An error may be caused by selecting an incorrect path x_j'(k)
which diverged from the correct path at node j and remerged with it at node k
only if

M[x_j'(k)] ≥ min_{i≥j} M[x(i)] = γ_j     (6.3.1)

Again this condition is necessary, but clearly not sufficient, for an error to
occur.

From this point, much of the derivation closely follows that of the previous
section. There is, however, one important difference. While x_j'(k) in Sec. 6.2 represented
any path in the jth incorrect subset, it now represents only such a path
which merges at node k. Thus the steps leading to Lemma 6.2.2 are essentially the
same, as is the lemma itself, but now the set 𝒳'(j) must be replaced by the union of
subsets ∪_{k=j+K}^∞ 𝒳'(j; k) ⊂ 𝒳'(j), where 𝒳'(j; k) contains all paths in 𝒳'(j) which
remerge with the correct path at node k. We can thus prove



Lemma 6.3.2 The node error probability at node j is upper-bounded by

P_e(j) ≤ Σ_y p(y|x) Σ_{i≥j} e^{−αρM[x(i)]} Σ_{k=j+K}^∞ [ Σ_{x_j'(k)∈𝒳'(j;k)} e^{αM[x_j'(k)]} ]^ρ     α > 0, 0 < ρ ≤ 1     (6.3.2)

and the expected number of bit errors caused by a path which diverged at
node j is upper-bounded by

E[n_b(j)] ≤ Σ_y p(y|x) Σ_{i≥j} e^{−αρM[x(i)]} Σ_{k=j+K}^∞ b[k − j − (K − 1)] [ Σ_{x_j'(k)∈𝒳'(j;k)} e^{αM[x_j'(k)]} ]^ρ     α > 0, 0 < ρ ≤ 1     (6.3.3)



PROOF Let P_e(j; k) be the probability of an error at node j caused by a path
which remerges at node k. Analogously to (6.2.2), we have, using Lemma 6.3.1

P_e(j; k) ≤ Σ_y p(y|x) φ_y(1)     (6.3.4)

where recall from (6.2.3) that

φ_y(1) = 1   if M[x_j'(k)] ≥ γ_j for some x_j'(k) ∈ 𝒳'(j; k)
       = 0   otherwise
Thus if, for a given y, φ_y(1) = 1, then

M[x_j'(k)] − γ_j ≥ 0

for some x_j'(k) ∈ 𝒳'(j; k). Hence for this y

[ Σ_{x_j'(k)∈𝒳'(j;k)} e^{α(M[x_j'(k)] − γ_j)} ]^ρ ≥ 1 = φ_y(1)     (6.3.5)

while if φ_y(1) = 0, (6.3.5) holds trivially without the intermediate unity term.
At the same time, e^{−αργ_j} may be bounded just as in (6.2.5). Thus substituting
(6.3.5) for φ_y(1) and (6.2.5) for e^{−αργ_j} yields

P_e(j; k) ≤ Σ_y p(y|x) Σ_{i≥j} e^{−αρM[x(i)]} [ Σ_{x_j'(k)∈𝒳'(j;k)} e^{αM[x_j'(k)]} ]^ρ     (6.3.6)

It takes at least K branches for a path to remerge; thus

P_e(j) ≤ Σ_{k=j+K}^∞ P_e(j; k)     (6.3.7)

Combining (6.3.6) and (6.3.7) yields (6.3.2). To find the expected number
of bit errors caused by such a node error, we observe, as in Sec. 5.1, that the
number of bit errors caused by an incorrect path unmerged for k − j branches
cannot be greater than b[k − j − (K − 1)] [since the last K − 1 branches, or
b(K − 1) symbols, must be the same as for the correct path]. Thus

E[n_b(j)] ≤ Σ_{k=j+K}^∞ b[k − j − (K − 1)] P_e(j; k)     (6.3.8)

Combining (6.3.6) and (6.3.8) yields (6.3.3), and thus proves the lemma.

If we now proceed as in Sec. 6.2, by averaging (6.3.3) over the same code
ensemble, restricting ρ to the unit interval and applying the Jensen inequality, we
obtain

\overline{E[n_b(j)]} ≤ Σ_y Σ_{i≥j} \overline{p(y|x) e^{−αρM[x(i)]}} Σ_{k=j+K}^∞ b[k − j − (K − 1)] [ Σ_{x_j'(k)∈𝒳'(j;k)} \overline{e^{αM[x_j'(k)]}} ]^ρ     α > 0, 0 < ρ ≤ 1     (6.3.9)

But the set 𝒳'(j; k) of incorrect paths diverging at node j and remerging at node k
contains no more than (2^b − 1) 2^{b[(k−j)−K]} paths, since the first branch must differ
from the correct path while, of the remaining (k − j − 1) branches, the last
(K − 1) branches must be identical to it. Thus, since the same weighting distribution
is used for all path branches

\overline{E[n_b(j)]} ≤ Σ_y Σ_{i≥j} \overline{p(y|x) e^{−αρM[x(i)]}} Σ_{k=j+K}^∞ b(k − j − K + 1)(2^b − 1)^ρ 2^{bρ[(k−j)−K]} [ \overline{e^{αM[x_j'(k)]}} ]^ρ


Finally, using (5.1.32) and letting t = i − j and τ = k − j, we have

Lemma 6.3.3 The ensemble average bit error probability is upper-bounded
by

\overline{P_b} ≤ Σ_{t=0}^∞ Σ_{τ=K}^∞ (τ − K + 1)(2^b − 1)^ρ 2^{−bρK} T(t, τ)     α > 0, 0 < ρ ≤ 1     (6.3.10)

where, as before,

T(t, τ) = 2^{bτρ} Σ_y \overline{p(y|x(t)) e^{−αρM[x(t)]}} [ \overline{e^{αM[x'(τ)]}} ]^ρ     (6.2.9)


But T(t, τ) is identical to the function defined in Sec. 6.2, and we have shown
there that

T(t, τ) ≤ δ_C^{nt} δ_I^{nτ}     for all t ≥ 0, τ ≥ 0     (6.2.18)

with

δ_C < 1,  δ_I < 1   if α = 1/(1 + ρ)  and  R < E_0(ρ)/ρ     (6.2.19)

Thus letting R = (1 − ε)E_0(ρ)/ρ, we have that the sum of (6.3.10) is bounded by

\overline{P_b} ≤ (2^b − 1)^ρ 2^{−ρbK} δ_I^{nK} / [(1 − δ_C^n)(1 − δ_I^n)^2]     0 < ρ ≤ 1,  R_0(1 − ε) ≤ R ≤ C(1 − ε)

For R < R_0, as usual we choose ρ = 1. Thus, using the terminology of Sec. 5.1
(5.1.34)

E_C(R) = E_0(ρ)     R = (1 − ε) E_0(ρ)/ρ

and taking ε = |ln δ_I|/E_C(R), we have the following theorem.



Theorem 6.3.1: Error probability with sequential decoding (Yudkin
[1964]) The ensemble average bit error probability of a sequentially decoded
time-varying convolutional code of rate b/n is upper-bounded by

\overline{P_b} < A 2^{−KbE_C(R)/R}     0 < R < C(1 − ε)     (6.3.12)




where E_C(R) is given by (5.1.34) and¹⁰

A = (2^b − 1) / { (1 − 2^{−bεE_C(R)/R}) [1 − 2^{−bE_C(R)/R}]^2 } ≤ (2^b − 1) / [1 − 2^{−bεE_C(R)/R}]^3     (6.3.13)

The exponent of (6.3.12) has the same form as that of (5.1.32), the upper
bound for Viterbi decoding at high rates¹¹ [except that ε here is related to δ_I,
whereas in (5.1.32) it is an arbitrary positive number]. On the other hand, for
lower rates R < R_0, the exponent is reduced from R_0/R > 1 to unity. It would be
possible to increase this exponent, as well as that of (6.2.22), by a different choice
of bias term [R replaced by R_0(1 − ε)] in the metric (6.1.2), but only at the cost of a
worse distribution of computation at higher rates (see Prob. 6.2).

There remains one issue to resolve. Although we proved in Sec. 6.2 that there
exists a code for which Pr{C ≥ L} ≤ AL^{−ρ}, and although it follows from Theorem
6.3.1 that there exists a code for which P_b ≤ \overline{P_b} is bounded by (6.3.12), these
bounds may not both hold simultaneously for the same code. The resolution of
this dilemma is arrived at by an argument similar to that used in Sec. 3.2.
Assuming, for the moment, a uniform weighting of the ensemble, there exist a and
β on the unit interval such that all but a fraction a of the codes satisfy

Pr{C ≥ L} ≤ (A/a) L^{−ρ}     (6.3.14)

while all but a fraction β of the codes satisfy

P_b ≤ \overline{P_b}/β     (6.3.15)

Thus, at most a fraction a + β fail to satisfy at least one of these bounds, and consequently
a fraction (1 − a − β) must satisfy both. With nonuniform ensemble
weighting, an essentially probabilistic statement must replace this simple argument.
In any case, there exists at least one code which, within unimportant multiplicative
constants, simultaneously satisfies the upper bounds of both Theorems
6.2.1 and 6.3.1.
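For concreteness, the fractions a and β above come from the Markov inequality applied to the nonnegative code-ensemble quantities; a one-line statement of that step (added here as a reminder, not quoted from the text) is

Pr{ Pr{C ≥ L} ≥ \overline{Pr{C ≥ L}}/a } ≤ a     and     Pr{ P_b ≥ \overline{P_b}/β } ≤ β

where the outer probability is over the choice of code in the ensemble.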



6.4 DISTRIBUTION OF COMPUTATION: LOWER BOUND 

We now proceed to show that the upper bound of Theorem 6.2.1 for convolutional
codes is asymptotically tight at least for R > R_0, and that the result (6.2.23)
for trellis codes is asymptotically tight for R < R_0 as well. The proof is based on
comparing the list of paths searched by a sequential decoder with the list of the
L paths of highest metric for a fixed block decoder, and employing the lower
bound of Lemma 3.8.1 on list-of-L block decoding.

¹⁰ The last inequality in (6.3.13) follows from the choice of ε and substitution of (6.2.19) in
(6.2.15) and (6.2.16).

¹¹ For systematic codes, the exponent is reduced by the factor 1 − r = 1 − R/ln 2. This is shown
by applying the same arguments as used in Sec. 5.7.




We begin by considering a sequential decoder aided by a benevolent genie
who oversees the decoder action on each incorrect subset. If any incorrect path of
the jth incorrect subset is searched to length l branches (N = ln symbols) beyond
node j, the genie stops the decoder and informs it to stop searching this path.
Provided no decoding error is made at the jth node, the distribution of computation
on the jth incorrect subset Pr{C_j ≥ L} is lower-bounded by the probability
that the genie stops the decoder L times. For first of all, a computation has been
defined as a branch computation, and L is just the number of computations on the
last branch of all the incorrect paths stopped by the genie. Hence we are ignoring
all but the last branch of each path in computing the lower bound. Furthermore,
many other paths may have been searched, but not to depth l branches. Finally, if
the genie were not present, the incorrect paths might be allowed to continue for
even more operations; but if no errors are made in the jth subset, we will ultimately
return, just as if the genie were present. Thus in the absence of errors

Pr{C_j ≥ L} ≥ P_g(L)     (6.4.1)

where

P_g(L) = Pr{genie stops decoder at depth l of jth incorrect subset more than L times}     (6.4.2)

Naturally, when the correct path arrives at depth l beyond node j, it is allowed
through. Thus suppose we construct a list 𝒳_L(j) of the first L paths (incorrect or
correct) emanating from node j of the correct path and examined by the genie at
node j + l. Then letting \bar{𝒳}_L(j) be the complementary set of 2^{bl} − L paths not on
this list, the probability that the genie stops the decoder more than L times for a
given received vector y is

Pr{x ∈ \bar{𝒳}_L(j) | y}     (6.4.3)

where x is the correct path over the given N-symbol segment. Then, since all 2^{bl}
paths of this length emanating from a common node j are a priori equiprobable,
it follows that

P_g(L) = Σ_y Σ_{x∈\bar{𝒳}_L(j)} 2^{−bl} p_N(y|x)     (6.4.4)

This procedure should remind us of the list-of-L decoder described in Sec. 3.8
and Prob. 3.16. A maximum likelihood list-of-L decoder for a code of M code-vectors
produces a list consisting of the L code-vectors with highest likelihood
functions (metrics). Suppose the number of code vectors is M = 2^{bl}. Let the list of
the L most likely code-vectors be denoted ℬ(L) and the complementary set,
consisting of the 2^{bl} − L code-vectors not on the list, be denoted \bar{ℬ}(L). Then for
any block code of 2^{bl} a priori equiprobable code vectors of length N, the block
error probability of such a decoder is

P_E(L) = Σ_y Σ_{x∈\bar{ℬ}(L)} 2^{−bl} p_N(y|x)     (6.4.5)




Now the genie-aided sequential decoding of all paths of length N symbols emanating
from the jth node can also be regarded as a decoding operation on a somewhat
constrained (truncated convolutional) block code of 2^{bl} vectors. However, while it
does produce a list-of-L output, this list does not correspond to maximum likelihood
decoding. Thus it follows that for every x_a ∈ 𝒳_L(j), there is some x_b ∈ ℬ(L)
such that

p_N(y|x_b) ≥ p_N(y|x_a)

Since ℬ(L) + \bar{ℬ}(L) = 𝒳_L(j) + \bar{𝒳}_L(j), it follows that if, for a given y, we sum the
2^{bl} − L elements of the complementary sets, then

Σ_{x∈\bar{𝒳}_L(j)} p_N(y|x) ≥ Σ_{x∈\bar{ℬ}(L)} p_N(y|x)     (6.4.6)

since \bar{ℬ}(L) consists of the (2^{bl} − L) vectors with lowest likelihood, while \bar{𝒳}_L(j)
may have some elements which are contained in ℬ(L).

Finally, combining (6.4.1) and (6.4.4) and employing (6.4.6) to compare this
with (6.4.5), we have

Pr{C_j ≥ L} ≥ P_g(L) = Σ_y Σ_{x∈\bar{𝒳}_L(j)} 2^{−bl} p_N(y|x)
            ≥ Σ_y Σ_{x∈\bar{ℬ}(L)} 2^{−bl} p_N(y|x)
            = P_E(L)     (6.4.7)

At this point we may use Lemma 3.8.1, which lower-bounds the list decoding
error probability P_E(L) = P_E(N, 2^{bl}, L), to obtain

Pr{C_j ≥ L} ≥ P_E(N, 2^{bl}, L)

where

E_{sp}(ρ) = E_0(ρ) − ρ E_0'(ρ)     0 < ρ < ∞
R = E_0'(ρ)     0 < R < C

To utilize this result, we must choose l, or N = nl, the genie's vantage point.
Suppose we arbitrarily pick¹²

N = n l_{crit} = ρ ln L / E_{sp}(ρ)     (6.4.8)

¹² The connection between l_{crit} and k_{crit} of (5.5.5) is noteworthy (see Prob. 6.5).




Combining (3.8.2), (3.8.3), (3.6.46), and (6.4.8), we obtain¹³

Pr{C_j ≥ L} ≥ e^{−ρ ln L[1 + o(L)]} = L^{−ρ}[1 − o(L)]     0 < ρ < ∞     (6.4.9)

To determine the relationship between rate and ρ, we have from (3.6.46), (3.8.3),
and (6.4.8)

ln L / N = R − E_0'(ρ) = E_{sp}(ρ)/ρ

Thus

R = E_0(ρ)/ρ     0 < ρ < ∞     (6.4.10)

Hence we obtain the following theorem.

Theorem 6.4.1: Computational distribution lower bound (Jacobs and Berlekamp
[1967]) The computational distribution of any convolutional (or
trellis) code, on any incorrect subset where no decoding error occurs, is lower-bounded
by

Pr{C ≥ L} ≥ L^{−ρ}[1 − o(L)]     (6.4.11)

where ρ is related to the rate by the parametric equation

R = E_0(ρ)/ρ     0 < ρ < ∞     (6.4.12)

Thus, by comparing Theorems 6.2.1 and 6.4.1, we see that the bounds in both
theorems are asymptotically tight for R_0 ≤ R < C. For lower rates 0 < R < R_0,
the lower bound (6.4.11) has been shown to be asymptotically tight only for
time-varying trellis (nonlinear convolutional) codes. For linear convolutional
codes, the lower-bound exponent of (6.4.11) does not agree for the lower rates
with the upper-bound exponent of (6.2.22). It is not known whether either bound
is tight.

Given the significance of this Pareto distribution for the operation of a sequential
decoder, it is worthwhile to examine how the key parameter ρ, known as
the Pareto exponent, varies with the channel probability distribution for specific
commonly used channels. For the BSC derived from the binary-input AWGN

¹³ Here o(L) ~ 1/√(ln L).




channel by hard quantization of the channel output (J = 2), the function E_0(ρ),
first derived in Sec. 3.4, depends only on the symbol energy-to-noise density ℰ_s/N_0
[see (3.4.1) with p given by (3.4.18)]. Then solving the parametric equation (6.4.12)
for ρ, with various values of code rate r = R/ln 2, results in the curves shown in
Fig. 6.4, where ρ is plotted as a function of ℰ_b/N_0 = (ℰ_s/N_0)/r in dB.

Of considerable interest is the behavior of the decoder when soft (multilevel)
quantization is used on the AWGN channel output. Figure 6.5 shows the corresponding
results for the octal (3-bit) quantizer of Fig. 2.13 and the corresponding
channel of Fig. 2.14, with the quantization step a = 0.58√(N_0/2). For this case,
E_0(ρ) is obtained from the general expression (3.1.18) with the transition probabilities
given by (2.8.1). We note that the improvement over the hard-quantized case
is very nearly π/2 (2 dB), the same improvement factor found for small ℰ_s/N_0 in
Sec. 3.4 (Fig. 3.8).

Note also that ρ = 1 corresponds to R = R_0 = E_0(1), and thus the
intercepts of the line ρ = 1 for each curve in Figs. 6.4 and 6.5 can be derived from
the J = 2 and J = 8 curves of Fig. 3.8b by finding the point at which
E_0(1) = R = r ln 2.
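As a rough numerical illustration of how such curves can be generated, the following Python sketch evaluates the Gallager function for the hard-quantized BSC with equiprobable inputs and solves the parametric equation R = E_0(ρ)/ρ for the Pareto exponent by bisection. The crossover probability and code rate in the example are assumed values, not figures taken from the text.

from math import log

def e0_bsc(rho, p):
    """Gallager function E_0(rho) in nats for a BSC, q = (1/2, 1/2)."""
    s = p ** (1.0 / (1.0 + rho)) + (1.0 - p) ** (1.0 / (1.0 + rho))
    return rho * log(2.0) - (1.0 + rho) * log(s)

def pareto_exponent(rate_bits, p, hi=20.0, tol=1e-9):
    """Solve R = E_0(rho)/rho for rho (R given in bits per channel symbol).
    E_0(rho)/rho decreases with rho, so a simple bisection suffices."""
    R = rate_bits * log(2.0)              # convert rate to nats
    lo = 1e-9
    if e0_bsc(hi, p) / hi > R:            # solution lies beyond 'hi'; cap it
        return hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if e0_bsc(mid, p) / mid > R:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Example: rate-1/2 code on a BSC with crossover probability 0.02.
print(pareto_exponent(0.5, 0.02))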



Figure 6.4 Pareto exponent versus ℰ_b/N_0 for an AWGN channel with hard quantization.




Figure 6.5 Pareto exponent versus ℰ_b/N_0 for an AWGN channel with octal quantization (a = 0.58√(N_0/2)).



6.5 THE FANO ALGORITHM AND OTHER SEQUENTIAL 
DECODING ALGORITHMS 

The basic stack sequential decoding algorithm is a distillation and ultimate
simplification of a number of successively discovered algorithms, each of which
was progressively simpler to describe and analyze. The original sequential decoding
algorithm, proposed and analyzed by Wozencraft [1957], utilized a sequence
of progressively looser thresholds to eliminate all paths in the incorrect subset of
node j before proceeding to search the paths emanating from node j + 1. This
technique is mainly of historical importance. The next important step was the
algorithm described by Fano [1963], whose complete analysis appears in the work
of Yudkin [1964], Wozencraft and Jacobs [1965], and Gallager [1968]. From a
practical viewpoint, the Fano algorithm is still probably the most important and
will be discussed further below. Stack algorithms, which form the basis of the
algorithm treated in Secs. 6.1 to 6.4, were proposed and analyzed independently by
Zigangirov [1966] and Jelinek [1969a]. Also of some tutorial value is the semisequential
algorithm proposed by Viterbi [1967a] (Prob. 6.4) and extended by
Forney [1974].

The Zigangirov and Jelinek algorithms are most similar to the one considered
here. They differ, however, in certain features designed to render them more
practical for implementation. First, both ignore merging and thus make no provisions
for comparing the metric of a path newly added to the stack with that of a
previously inserted path of the same length terminating in the same state. But this
does not significantly increase the probability of error, for in Sec. 6.3 we upper-bounded
P_e by determining the probability that an incorrect path was searched up
to the point of merging. This is tantamount to assuming that errors always occur if
an incorrect path is allowed to merge, and so the already calculated error bound is
valid even if comparisons of merging paths are not performed. As for the computational
distribution, it is possible that, by not eliminating merging paths, more
computations are required, since excess (duplicate) paths are carried along in the
stack. But as we have just noted, the probability of this event is on the order of the
error probability, which decreases exponentially with the constraint length K.
Moreover, the computational distribution upper bound is independent of K,
which suggests that K can be made very large, much larger than for maximum
likelihood decoding, as we shall discuss further in the next section; in that case,
both error probability and the additional computation due to ignoring mergers
will be negligible. From a practical viewpoint, ignoring mergers is very useful, for
carrying out the merge-elimination step in the flowchart of Table 6.1 would contribute
heavily to the computation time for each branch.

A much more serious weakness of the basic stack algorithm is that the stack
size, and hence required memory, increment for node j is proportional to C_j, the
number of computations in the incorrect subset, and hence it too is a Pareto-distributed
random variable. In the Zigangirov algorithm, this drawback is partially
remedied by discarding a path from the stack whenever its metric falls more
than a fixed amount β below the metric of the top path. The probability of
eliminating the correct path in this way decreases exponentially with β, so that the
effect on performance can be made negligible.

A third, and possibly most undesirable, drawback of the basic stack algorithm
is that the stack must be reordered for each new entry, requiring potentially a very
large number of comparisons each time. The Jelinek algorithm partially avoids
this by ordering paths only grossly; that is, all paths with metrics in the range
M_m ≤ M < M_m + Δ are placed in the mth "bin," and paths in the top bin are
further searched in inverse order of their arrival in the bin (last-in first-out or
"push down" stack). This requires then that any path not in the bin has its metric
compared only with one metric for all the paths in the bin. The effect of this gross
ordering is easily determined. The basic condition for further search of an incorrect
path (6.2.1) is modified to become

M[x_j'(k)] ≥ γ_j − Δ     k > j     (6.5.1)

The remainder of the derivation of the computational distribution of Sec. 6.2
follows in exactly the same way, with the result that the additional factor e^{αρΔ} is
carried throughout. Since in the final steps we choose α = 1/(1 + ρ), the final effect
is to multiply the factor A of (6.2.20) by e^{Δρ/(1+ρ)}, which obviously is asymptotically
insignificant. For the same reasons, the same factor also multiplies P_b of
(6.3.12).
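A minimal Python sketch of this bin bookkeeping follows; it is an illustration only, with the bin width Δ, path objects, and metric values assumed rather than taken from the text, and it is not a full decoder.

from collections import defaultdict, deque

DELTA = 2.0                              # assumed bin width

class BinStack:
    def __init__(self):
        self.bins = defaultdict(deque)   # bin index -> LIFO of (metric, path)

    def push(self, metric, path):
        self.bins[int(metric // DELTA)].append((metric, path))

    def pop(self):
        top = max(self.bins)             # highest occupied bin
        metric, path = self.bins[top].pop()   # last-in first-out within the bin
        if not self.bins[top]:
            del self.bins[top]
        return metric, path

# Usage: push partial paths as they are extended, pop the next path to extend.
stack = BinStack()
stack.push(0.0, [])                      # root path
print(stack.pop())

The point of the design is that ordering is only per bin, so inserting a path costs one quantization rather than a full sort of the stack.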

This brings us finally to the Fano algorithm, which is generally considered to
be the most practical to implement. It too utilizes a sequence of metric thresholds
spaced at intervals of Δ. Its most desirable feature is that it examines only one
path at a time, thus eliminating the storage of all but one path and its metric.
Basically, it continues to search further along a given path as long as its metric
is growing. Whenever the metric begins to decrease significantly, it backs up and
searches other paths stemming from previous nodes on the already travelled path.
It accomplishes this by varying a comparison threshold in steps of magnitude Δ.
The threshold is tightened (raised by Δ) whenever the metric is growing sufficiently
on a forward search, and relaxed (lowered by Δ) during backward searches.
This is done in such a way that no node is ever searched forward twice with the
same threshold setting; on each successive forward search, the threshold must be
lower than when it was previously searched.

The details of the Fano algorithm can be explained by examining the
flowchart of Fig. 6.6. In the first block, looking forward on the better node of a
binary tree refers to computing both branch metrics and tentatively augmenting
the current node metric by the greater of the two branch metrics. If the better node
has just been searched and the running threshold, T, violated, the forward look
must be to the worse node. This will occur if the first block is entered from point
(A), which corresponds to a single pass through the backward search. In either case,
the metric of the node arrived at is compared with T, and if it is satisfied (M ≥ T),
the search pointer is moved forward to that node. The next test is to determine
whether this is the first time this node has been visited in the sequential decoding
search.¹⁴ It can be shown (Gallager [1968]) that, if this is the case, the metric of the
preceding node will violate T + Δ. If so, we may attempt to tighten the threshold
by increasing T by integer multiples of Δ until M < T + Δ, and continue to look
forward. If the node has been searched before, it is essential that we not tighten the
threshold prior to searching further, for otherwise we may enter a closed loop and
repeat the same moves endlessly.

If, in the first block, upon forward search the new node has metric M < T, we
must enter the backward search mode. This involves subtracting the previous
branch metric from the current node metric. If this satisfies T, then the pointer is
moved back; if the branch upon which the backward move was made was the
better of the two emanating from the node just reached, the worse has yet to be
searched. Thus we return to the forward search via (A). If it was a worse branch,
there are no more branches to search forward from this node; hence we must
continue the backward search. If upon a backward look the current threshold T is

¹⁴ If the code tree is of finite length, this is also the point at which we should test for the end of the
tree tail and terminate when this is reached.






Figure 6.6 Fano sequential decoding algorithm for binary tree. (Flowchart: initialize with threshold T = 0; look forward to the better node, or to the worse node if entering via (A); forward-search and backward-search loops with tests on whether T is violated.)



violated, we cannot move back. When this occurs, all paths accessible from here
with the current T in effect have been searched and found to eventually violate T.
The threshold is now decreased by Δ and forward search is again attempted. Note
that, when a node is searched two or more times, each successive time it will be
searched with a lower current threshold; hence endless loops are avoided.
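As a tiny illustration of the two threshold moves just described (a sketch only, not the complete algorithm of Fig. 6.6; the step size Δ and the metric values are assumed example numbers):

DELTA = 4.0   # assumed threshold spacing

def tighten(T, M):
    """Raise T by multiples of DELTA until M < T + DELTA (first visit only)."""
    while M >= T + DELTA:
        T += DELTA
    return T

def relax(T):
    """Lower T by DELTA when neither a forward nor a backward move satisfies it."""
    return T - DELTA

print(tighten(0.0, 9.5), relax(0.0))   # prints: 8.0 -4.0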

Extensive treatments of the Fano algorithm are contained in Wozencraft and
Jacobs [1965] and Gallager [1968]. Its performance, as well as its analysis, is
essentially the same as that of the stack algorithm. In fact, the threshold increment
Δ of the Fano algorithm has exactly the same effect as the bin size Δ of the Jelinek
algorithm. Geist [1973] has shown that, under some weak conditions, the Fano
algorithm always finds the same path through the tree as the stack algorithm. The
only difference is that in the Fano algorithm a path node may be searched several
times, while in the stack it needs to be searched only once. The effect is a modest
increase in the number of branch computations, which can be accounted for by an
additional multiplicative factor in Theorem 6.2.1. This disadvantage is usually
more than offset by the advantage of a considerable reduction in storage
requirements.




6.6 COMPLEXITY, BUFFER OVERFLOW, AND OTHER 
SYSTEM CONSIDERATIONS 

For maximum likelihood decoding of convolutional codes by the Viterbi algorithm,
complexity is easily defined, for if the constraint length is K and the code
rate is b/n, the number of branch metric computations per branch (b bits) is 2^{Kb},
while the number of comparisons, and the number of storage registers required for
path memories and metrics, is 2^{(K−1)b}. Thus, if we define complexity for Viterbi
decoding as the number of branch metric computations per bit, that is

χ = 2^{Kb}/b     (6.6.1)

then it follows from (5.1.32) and (5.4.13) that the bit error probability is,
asymptotically¹⁵ for large K, and for R_0 ≤ R < C

P_b ≈ 2^{−KbE_C(R)/R} ~ χ^{−E_C(R)/R} = χ^{−ρ}     (6.6.2)

where

E_C(R) = E_0(ρ)     0 < ρ ≤ 1
R = E_0(ρ)/ρ     R_0 ≤ R < C

We have already seen in Sec. 5.2 that convolutional code behavior as a function of
complexity is much more favorable than that of block codes. In the present
context, for a K-bit block code, we should define χ = 2^K/K, the number of code-vector
metric computations per bit. Then from (3.2.14) and (3.6.45), it follows that
the block error probability for E_0(1) ≤ R < C is asymptotically

P_E ~ e^{−NE(R)} = 2^{−KE(R)/R} ~ χ^{−E(R)/R}     E_0(1) ≤ R < C     (6.6.3)

But since E(R) is significantly smaller than E_C(R) for R_0 ≤ R < C, the magnitude
of the negative exponent of (6.6.2) is significantly greater than that of (6.6.3); hence
the superiority of convolutional codes with Viterbi decoding.

The definition of complexity for sequential decoding is somewhat less
obvious. One possible definition would be the maximum number of branch metric
computations per bit; that is, the maximum number of computations in the incorrect
subset of each node, normalized by b, the number of bits decoded for each
node advanced. The problem is that this is a random variable, C/b, and for an
infinite-length tree C has a Pareto distribution with no maximum. On the other
hand, for practical reasons discussed further below, we must limit the number of

¹⁵ In this asymptotic expression, we ignore all terms which do not depend on K; hence both the
multiplicative constant and ε are omitted.



computations in any given incorrect subset or we might never complete decoding.¹⁶
Thus if we require C ≤ L_max for each incorrect subset, we may define
the complexity for sequential decoding as

χ = L_max/b     (6.6.4)

Then we have from (6.2.20) and (6.4.9) that the decoder will fail to decode a given
node, by virtue of requiring more computations for that node than are available,
with probability

P_failure ≈ A L_max^{−ρ} ~ χ^{−ρ}     (6.6.5)

where

ρ = E_C(R)/R     0 < ρ ≤ 1,  R_0 ≤ R < C

Thus, interestingly enough, we note by comparing (6.6.5) with (6.6.2) that the
probability of sequential decoding failure, when the number of computations per
branch is limited, asymptotically bears the same relation to complexity as does the
bit error probability for Viterbi (maximum likelihood) decoding. Note, however,
that in sequential decoding, the constraint length K does not appear, and it would
almost seem that, since complexity is independent of K, we should make this
arbitrarily large,¹⁷ thus eliminating the possibility of ordinary error and replacing
it by the kind of decoding failure just described.

Comparison of (6.6.2) and (6.6.5), or of (6.6.1) and (6.6.4), suggests choosing
L_max for sequential decoding such that

L_max = 2^{Kb}

where K pertains to Viterbi decoding. Thus, the effective constraint length for
sequential decoding is

(log_2 L_max)/b

which measures the effective complexity of the algorithm in the same way as does
the ordinary constraint length in Viterbi decoding.
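A two-line numerical check of these definitions, with example values of K and b assumed rather than taken from the text:

from math import log2

K, b = 7, 1
chi_viterbi = 2 ** (K * b) / b              # branch computations per bit, (6.6.1)
L_max = 2 ** (K * b)                        # computation limit chosen as 2^{Kb}
chi_sequential = L_max / b                  # definition (6.6.4)
effective_constraint_length = log2(L_max) / b
print(chi_viterbi, chi_sequential, effective_constraint_length)   # 128.0 128.0 7.0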

However, our definition of complexity for sequential decoding is somewhat
misleading, for it is based on the maximum number of computations per bit,
whereas normally, for most nodes, C_j will be much less than L_max. This scheme of
limiting C_j is also impractical by itself, since a decoding failure may well be

¹⁶ There is also the issue of the size of the stack, which grows with C_j for each node; however, by
using the Fano algorithm, all this storage is avoided at the cost of increased computation.

¹⁷ For other practical system considerations, discussed below, this is really neither feasible nor
desirable, but we might make K sufficiently large that P_b is negligible compared to the probability of
failure as given by (6.6.5).




"catastrophic" in the sense 18 that the decoder may possibly never recover from it 
to return to correct decoding. These issues suggest that, for any real sequential 
decoder, we need to provide some additional features and basic operational 
techniques, before we can realistically consider its performance and complexity. 
The best place to begin is to establish the size of the memory or buffer which 
contains the symbols of the received code-vector y as they await their turn to be 
searched by the decoder. Let this be fixed at B branches, or Bn channel symbols. 
Assuming a J-ary output channel, or equivalently a ./-level quantized continuous 
channel, this will require at most Bn\\og 2 J~\ bits of storage. (This does not include 
any of the memory required by the stack but, as already noted, we can avoid this 
altogether by using the Fano algorithm.) 19 Next, suppose the decoder can per 
form // branch computations during the interarrival time between two successive 
received branches thus [i is the number of computations for every b bit times and 
is generally called the decoder speed factor. Then, if the number of computations 
required in the jth incorrect subset is 

Cj > nB (6.6.6) 

it is clear that a failure will occur. For, even if the buffer is empty when the channel 
symbols of the jth branch are received, if C j > ^B computations are required in 
the jth subset, then clearly the received symbols for the jth branch cannot be 
discarded for at least C,/// > B branch times, since we may need to use these to 
compute a metric at any time until the jth branch decision is finally concluded. 
But in this time B more branches will have arrived and require storage, which is 
impossible unless the jth branch is discarded. This type of failure is called a buffer 
overflow for obvious reasons, and it follows from (6.6.6) and the lower bound of 
Theorem 6.4.1 that at the jth node 



(6.6.7) 
where 



We also know from Theorem 6.2.1 that this result is asymptotically tight for
R_0 < R < C, provided the buffer is assumed initially empty. Moreover, if we
widen our horizon to include arbitrary time-varying trellis (nonlinear convolutional)
codes, the result is asymptotically tight for all rates, according to (6.2.23),
as shown by Savage [1966]. Assuming this wider class of codes²⁰ in the



¹⁸ This is to be distinguished from the definition of catastrophic codes given in Chap. 4.

¹⁹ Arguments in favor of the stack algorithm point out that, if enough storage is available, the stack
algorithm is preferable because of the reduced computational distribution (by a moderate factor,
independent of L). But the counterargument can be made that, if properly organized, this additional
stack memory can be devoted to the input data buffer in the Fano algorithm, which does not require
the stack, and that this advantage will more than overcome the required increase in computation by
significantly increasing B while only moderately increasing 𝒦 [see (6.6.9)].




following, we thus conclude that at the jth node with an initially empty buffer

Pr{buffer overflow} ≈ 𝒦(μB)^{−ρ}     (6.6.9)



where 𝒦 is a constant. Experimental evidence (Forney and Bower [1971],
Gilhousen et al. [1971]) indicates that 𝒦 is on the order of 1 to 10, and that long
searches are sufficiently rare that the assumption of a nearly empty buffer at the
beginning of each search is reasonably accurate. Then, for a sufficiently low
overflow probability per node, which must be the case for efficient operation, we
would have

Pr{overflow in an ℒ-branch trellis} ≈ ℒ𝒦(μB)^{−ρ}     (6.6.10)

where ρ is related parametrically to R by (6.6.8).

Since overflow is almost certainly "catastrophic," it appears²¹ that one way to
operate a sequential decoder, with finite buffer size and speed factor, is to block off
the data in ℒ-branch (ℒb-bit) blocks and to insert, between successive blocks, tails
consisting of (K − 1) branches each containing b zeros. In this way, even if catastrophic
overflow occurs in one ℒ-branch block, the tail allows us to reset the
decoder to the correct state and recommence decoding with a loss of at most ℒb
bits. Of course, the insertion of tails introduces a reduction of rate by (K − 1)/(ℒ + K − 1)
and complicates the timing of the decoder. Thus, to keep the degradation small,
K/ℒ should be kept small. At the same time, ℒ cannot be made excessively large
because it appears as a multiplicative factor in the block overflow probability (6.6.10).
Typical values used in sequential decoders are ℒ ≈ 500 to 2000, K ≈ 20 to 40, and
buffer size in branches²² B ≈ 10⁴ to 10⁵. The speed factor μ depends, of course, on
the data rate in bits per second, and on the speed and complexity of the digital
logic required for the computations. For example, if we are limited, by a maximum
logic speed, to 10⁷ branch computations/s, and have a data rate of 10⁶ bits/s, then
μ = 10. Clearly, we must have μ > 1 just to keep up with the arriving data. Thus,
for low enough data rates (less than 100 kbits/s), μB products in excess of 10⁷ are
possible. Of course, μ also depends on the complexity of the metric calculation.
Obviously, computation of the Hamming distance metric for the BSC is far simpler
than metric computation for an octal output channel; thus, μ will be several
times greater for the BSC.
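A back-of-the-envelope sketch of the overflow estimate (6.6.10) follows; the numerical values in the example are assumptions drawn from the typical ranges quoted above (𝒦 of order 1 to 10, block lengths of 500 to 2000 branches, B of order 10⁴ to 10⁵), not measured figures.

def overflow_probability(block_branches, kappa, mu, buffer_branches, rho):
    """Estimate Pr{overflow in a block} ~ L * Kappa * (mu * B) ** (-rho), per (6.6.10)."""
    return block_branches * kappa * (mu * buffer_branches) ** (-rho)

# Example: 1000-branch blocks, Kappa = 5, speed factor 10, B = 1e5, rho = 1.5.
print(overflow_probability(1000, 5.0, 10.0, 1e5, 1.5))   # about 5e-6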



²⁰ Experimental evidence (Forney and Bower [1971], Gilhousen et al. [1971]) indicates that this
behavior is accurate even for time-invariant linear convolutional codes.

²¹ There is, however, another strategy which has been implemented effectively with systematic
codes and even some nonsystematic codes. As soon as an overflow occurs, the strategy is to guess the
correct state of the code at this time and start to decode at this point. The most likely state corresponds
to the last (K − 1)b bits (which are transmitted uncoded in a systematic code). If the guess is wrong, the
decoder will again overflow, and then the state is again guessed at that time. After several false starts,
the decoder ultimately "synchronizes."

²² As noted above, this translates into a memory size of Bn⌈log₂ J⌉ bits. Thus if we have 2 × 10⁵ bits
available, for a BSC with r = 1/2 (n = 2), this translates to B = 10⁵ branches of buffer storage, while for
an octal output channel with r = 1/3 this translates to B = 2.2 × 10⁴ branches.




We see by comparing (6.6.9) with (6.6.2) that, in sequential decoding, μB plays
a role similar to 2^{Kb} in Viterbi decoding. Of course, as we noted, μB may be of the
order of 10⁷ at low data rates, while, since 2^{(K−1)b} is the number of path memory-and-metric
storage registers in maximum likelihood decoding, it is not feasible to
make this much greater than 10³. On the other hand, at high data rates (10⁷ bits/s
or above), it is not practical to make μ sufficiently greater than unity, as required
for effective sequential decoding, except for a binary-output channel, and even
then this requires very fast digital logic. With Viterbi decoding, however, by
providing a separate metric calculator for each state, we require only that μ = 1.
Also, the highly repetitive "pipeline" nature of a Viterbi decoder, as described in
Chap. 4, serves to reduce its hardware complexity.

Another aspect to be considered is the decoding delay. With sequential decoding,
this must be Bb bits; for, in order for the data to be output in the same order
that it was encoded, the same (maximum) delay must be provided for all bits. For
Viterbi decoding, we found in Sec. 5.6 that the maximum delay need only be on
the order of a small multiple of Kb; on the other hand, B is typically two orders of
magnitude greater than K.

The final major consideration which must be included in any choice between 
Viterbi decoding and sequential decoding concerns their relative sensitivity to 
channel parameter variations, i.e., their robustness. In this category, the sequential 
decoder is inferior, for its performance is strongly influenced by the choice of 
metric, which depends on the channel parameters (e.g., on the channel error pro 
bability for a BSC, and on the energy-to-noise ratio for an AWGN channel). 
Another source of channel variation is the phase tracking inaccuracy (see Sec. 2.5). 
In fact, it has been demonstrated that both phase and gain (amplitude) variations 
affect sequential decoders more detrimentally than they do Viterbi decoders 
(Heller and Jacobs [1971], Gilhousen et al. [1971]). A revealing indication of the 
robustness of Viterbi decoders is that, in some cases, the decoder partially offsets 
imperfections of the demodulator which precedes it (Jacobs [1974]). 



6.7 BIBLIOGRAPHICAL NOTES AND REFERENCES 

As was noted previously, the original sequential decoding algorithm was proposed 
and analyzed by Wozencraft [1957]. The Fano algorithm [1963], with various 
minor modifications, has been analyzed by Yudkin [1964], Wozencraft and Jacobs 
[1965], Gallager [1968], and Jelinek [1968a]. Two versions of stack algorithms and 
their performance analyses are due to Zigangirov [1966] and Jelinek [1969a]. The 
precise form of the Pareto distribution on computation emerged from the works 
of Savage [1966] for the upper bound, and of Jacobs and Berlekamp [1967] 
for the lower bound. 

The development of Sees. 6.2 through 6.4 follows closely the tutorial presenta 
tion of Forney [1974]. 




PROBLEMS 

6.1 Apply the Hölder inequality

Σ_i a_i b_i ≤ [ Σ_i a_i^{1/λ} ]^λ [ Σ_i b_i^{1/(1−λ)} ]^{1−λ}     0 < λ < 1,  a_i ≥ 0,  b_i ≥ 0

to (6.2.11) through (6.2.13) to verify inequalities (6.2.15) through (6.2.17).

6.2 Suppose the metric defined in (6.1.2) is modified to utilize an arbitrary bias β. That is,

m(x_n) = ln [ p(y_n | x_n) / p(y_n) ] − β

(a) Show that the correct path mean is positive and the incorrect mean negative, provided β < C.

(b) Show that this modifies (6.2.11) through (6.2.13) by replacing R by β, and similarly for
(6.2.15), but (6.2.16) now involves both R and β, while (6.2.17) only involves R.

(c) Find the effect on (6.2.19) and Theorem 6.2.1 of using β = R_0(1 − ε) (choose αρ = 1/2) and thus
show that (6.2.22) is replaced by

Pr{C ≥ L} ≤ A L^{−(R_0/R)(1−ε)}     0 < R < R_0(1 − ε)

Find the effect on error probability (6.3.12) and (6.3.13) and thus show that, for low rates, (6.3.12) is
replaced by



6.3 If Pr{C ≥ L} = 𝒦 L^{−ρ} where R = E_0(ρ)/ρ, 0 < ρ < ∞

(a) Find the rate R^{(1)} below which the mean E(C) is finite.

(b) Find the rate R^{(2)} below which the second moment E(C²) is finite.

(c) Find the rate R^{(k)} below which the kth moment E(C^k) is finite.

6.4 (Semisequential Decoding) Consider a constraint length K, rate b/n, convolutional code of B
branches terminated by K − 1 zero branches. Suppose we station a genie at the end of the terminated code, and
that we utilize a maximum likelihood decoder of a code of shorter constraint length k ≤ K. Precisely,
let the decoder and genie operate as follows:

1. Suppose k = 1 and decode on this basis the entire B-branch code. If all the right decisions are made,
the genie accepts the result; otherwise, he sends us back to the beginning and step 2.

2. Suppose k = 2 and repeat. Again the genie either accepts the result or sends us back to step 3.

3. Repeat for k = 3 and continue until either the genie accepts or k = K.

Using the results of Chap. 5, show that the probability that the number of computations per
branch exceeds 2^{bk} is upper-bounded by

Pr{C ≥ 2^{bk}} ≤ D 2^{−bkρ}     0 < ρ ≤ 1

and hence, letting L = 2^{bk},

Pr{C ≥ L} ≤ D L^{−ρ}     0 < ρ ≤ 1

where R = E_0(ρ)/ρ and D is a constant independent of L.
6.5 (a) Show the relationship between l_crit and k_crit, where l_crit is given by (6.4.8) and k_crit is given by (5.5.5).

(b) Justify this result intuitively.




6.6 Consider a binary tree of depth L + T where two branches diverge from each node at depth less
than L and only one branch emanates from each node from depth L to the final terminal node at depth
L + T. Assume each branch of the tree has n channel symbols independently selected according to
distribution q(x), x ∈ 𝒳. Suppose one path in the tree is the actual transmitted sequence and let E_i,
1 ≤ i ≤ L, be the event that some path to the terminal node diverging from the correct path at node
depth (L − i) is incorrectly decoded. The probability that an incorrect path is decoded when using
maximum likelihood decoding is therefore bounded by

P_E ≤ Σ_{i=1}^{L} P(E_i)

For any DMC with input alphabet 𝒳 show that, for P_E averaged over an ensemble of such tree codes
defined by q(x), x ∈ 𝒳, we have

\overline{P_E} ≤ e^{−(T+1)nE_0(ρ,q)} / [1 − 2^ρ e^{−nE_0(ρ,q)}]

where

E_0(ρ, q) = −ln Σ_y [ Σ_x q(x) p(y|x)^{1/(1+ρ)} ]^{1+ρ}     and 0 < ρ ≤ 1

Note that this bound is independent of L. Show that, for any rate r = 1/n bits per channel symbol less
than capacity, the bound can be made to decrease exponentially with T. Generalize this result to rate
r = b/n bits per channel symbol using a 2^b-ary tree of depth L + T (Massey [1974]).
6.7 (The Fano Metric, Massey [1974]) Assume a variable length code {x_1, x_2, …, x_M} where codeword
x_m has a priori probability π_m and length N_m. To each codeword, add a random tail sequence to extend
the codewords to length N = max_m N_m. That is, for codeword x_m, add the tail t_m = (t_1, t_2, …, t_{N−N_m})
where t_m is randomly chosen according to the probability distribution

q_{N−N_m}(t_m) = Π_{k=1}^{N−N_m} q(t_k)

By adding independent random tails to each codeword, a code {z_1, z_2, …, z_M} of fixed block length N,
where z_m = (x_m, t_m) for each m, is obtained.

(a) Suppose that code {z_1, z_2, …, z_M} is used over a DMC where the decoder does not know
which random tails are used (only their probability distribution). Show that the minimum-probability-of-error
decision rule is to choose the m that maximizes L(m, y) for channel output sequence y = (y_1, y_2, …, y_N), where

L(m, y) = ln π_m + Σ_{n=1}^{N_m} ln [ p(y_n | x_{mn}) / f(y_n) ]

and

f(y) = Σ_x q(x) p(y | x)

where p(y|x) is the transition probability of the DMC.

(b) In a sequential decoder, suppose {x_1, x_2, …, x_M} above represents all the paths in the
encoding tree that have been explored up to the present time. The decoder is assumed to know nothing
about the symbols in the unexplored part of the encoded tree except that they are selected independently
according to q(·). In order for the decoder to learn the branch symbols that extend any already
explored path, it must pay the price of one computation. (Any sequential decoding algorithm can be
thought of as a rule for deciding which already-explored path to extend.) Show that, when the information
bits are independent and equally likely, then L(m, y) is the Fano metric given by (6.1.1) and (6.1.2).
Hence the basic stack algorithm always extends the path which is chosen according to the minimum
probability of error criterion.




6.8 (Massey [1973]) Suppose we have an r = 1/2 binary tree code used over the BEC with erasure
probability p. Using the stack decoding algorithm, we note that any path that disagrees with the
received sequence in any unerased position has metric −∞ and can never reach the top of the stack.
Over the ensemble of tree codes and received sequences, we now find bounds on the average computation
per node, \bar{C}.

Following the notation of Sec. 6.2, define the random variable

ε(x, x_{ji}'(k), y) = 1   if path x_{ji}'(k) ∈ 𝒳'(j) is extended by the algorithm
                   = 0   otherwise

where x_{ji}'(k) is the ith path in 𝒳'(j) at node depth k. (There are 2^{k−j−1} such paths.)

(a) Show that, over the ensemble of tree codes and received sequences,

Pr{ε(x, x_{ji}'(k), y) = 1} ≤ Pr{path x_{ji}'(k) agrees with y in all unerased positions}

Then show that

\bar{C} ≤ 2^{−2r_0} / (1 − 2^{1−2r_0})     provided r = 1/2 < r_0

where r_0 = 1 − log_2(1 + p).

(b) Next observe that, whenever path x_{ji}'(k) reaches the top of the stack before x(k), then
ε(x, x_{ji}'(k), y) = 1. Show that

Pr{ε(x, x_{ji}'(k), y) = 1} ≥ (1/2) Pr{path x_{ji}'(k) agrees with y in all unerased positions}

and thus

\bar{C} ≥ (1/2) · 2^{−2r_0} / (1 − 2^{1−2r_0})     provided r = 1/2 < r_0



PART THREE

SOURCE CODING FOR DIGITAL COMMUNICATION



CHAPTER SEVEN

RATE DISTORTION THEORY: FUNDAMENTAL CONCEPTS FOR MEMORYLESS SOURCES



7.1 THE SOURCE CODING PROBLEM 

Rate distortion theory is the fundamental theory of data compression. It establishes
the theoretical minimum average number of binary digits per source symbol
(or per unit time), i.e., the rate, required to represent a source so that it can be
reconstructed to satisfy a given fidelity criterion, i.e., to within the allowed distortion.
Although the foundations were laid by Shannon in 1948, it was not until 1959 that
Shannon fully developed this theory when he established the fundamental
theorems for the rate distortion function of a source with respect to a fidelity
criterion, which endow this function with its operational significance. Initially, rate
distortion theory did not receive as much attention as the better known channel
coding theory treated in Chaps. 2 through 6. Ultimately, however, interest grew in
expanding this theory and in the insights it affords into data compression practice.
Let us now re-examine the general basic block diagram of a communication
system depicted in Fig. 7.1. As always, we assume that we have no control over the
source, channel, and user.¹ We are free to construct only the encoders and decoders.
In Chap. 1 we determined the minimum number of binary symbols per
source symbol such that the original source sequence can be perfectly reconstructed
by observing the binary sequence. There we found that Shannon's noiseless coding

¹ In earlier chapters we referred to the user as the destination. To emphasize the active role of the
user of information in determining the fidelity measure, we now call the final destination point the user.








Figure 7.1 Communication system model. 

theorem gave operational significance to the entropy function of a source. In this 
chapter we generalize the theory of noiseless source coding in Chap. 1 by defining a 
distortion measure and examining the problem of representing source symbols 
within a given fidelity criterion. We shall examine the tradeoff between the rate of 
information needed to represent the source symbols and the fidelity with which 
source symbols can be reconstructed from this information. 

Chapters 2 through 6 were devoted to the channel coding problem where we 
restricted our attention to only the part of the block diagram of Fig. 7.1 consisting 
of the channel encoder, channel, and channel decoder. In these chapters, we 
showed that channel encoders and decoders can be found which ensure an arbi 
trarily small error probability for messages transmitted through the channel 
encoder, channel, and channel decoder as long as the message rate is less than the 
channel capacity. For the development of rate distortion theory, we assume that 
ideal channel encoders and decoders are employed so that the link between the 
source encoder and source decoder is noiseless as shown in Fig. 7.2. 2 This requires 
the assumption that the rate on this link is less than the channel capacity. 

The assumption that source and channel encoders can be considered 
separately will be justified on the basis that, in the limit of arbitrarily complex 
overall encoders and decoders, no loss in performance results from separating 
source and channel coding in this way. Representing the source output by a 
sequence of binary digits also isolates the problem of source representation from 
that of information transmission. From a practical viewpoint, this separation is 
desirable since it allows channel encoders and decoders to be designed inde 
pendently of the actual source and user. The source encoder and source decoder 
in effect adapt the source and user to the channel coding system. 



2 This is also a natural model for storage of data in a computer. In this case the capacity of the 
noiseless channel represents the limited amount of memory allowed per source symbol. 



(Figure 7.2: source → source encoder → noiseless channel → source decoder → user; the source emits u and the user receives v.)

Figure 7.2 Source coding model.

We begin by defining a source alphabet 𝒰, a user alphabet 𝒱 (sometimes called the representation alphabet), a distortion measure d(u, v) for each pair of symbols in 𝒰 and 𝒱, and a statistical characterization of the source. With these definitions and assumptions, we can begin our discussion of rate distortion theory. For this chapter we will consider discrete-time memoryless sources that emit a symbol u belonging to alphabet 𝒰 each unit of time, say every T_s seconds. Here the user alphabet 𝒱 depends on the user, although in many cases it is the same as the source alphabet. Throughout this chapter we will also assume a single-letter distortion measure between any source symbol u and any user symbol v, represented by d(u, v) and satisfying

d(u, v) ≥ 0    (7.1.1)



This is sometimes referred to as a context-free distortion measure since it does not 
depend on the other terms in the sequence of source and user symbols. 

Referring to Fig. 7.2 we now consider the problem of source encoding and decoding so as to achieve an average distortion no greater than D. Suppose we consider all possible encoder-decoder pairs that achieve average distortion D or less and denote by ℛ_D the set of rates required by these encoder-decoder pairs. By rate R, we mean the average number of nats per source symbol³ transmitted over the link between source encoder and source decoder in Fig. 7.2. We now define the rate distortion function for a given D as the minimum possible rate R necessary to achieve average distortion D or less. Formally, we define⁴



R*(D) = min_{R ∈ ℛ_D} R    nats/source symbol    (7.1.2)



Naturally this function depends on the particular source statistics and the distor 
tion measure. This direct definition of the rate distortion function does not allow 
us actually to evaluate R*(D) for various values of D. However, we shall see that 
this definition is meaningful for all stationary ergodic discrete-time sources with a 
single-letter distortion measure, and for these cases we will show that R*(D) can 
be expressed in terms of an average mutual information function, R(D), which will 
be derived in Sec. 7.2. 

There is another way of looking at this same problem, namely the distortion rate viewpoint. Suppose we consider all source encoder-decoder pairs that require fixed rate R and let 𝒟_R be the set of all the average distortions of these encoder-



3 Recall that R = r In 2 nats per symbol where r is the rate measured in bits per symbol. 

4 Strictly speaking, the "minimum" here should be "infimum." 




decoder pairs. Then, analogously to the previous definition, we define the distortion rate function as

D*(R) = min_{D ∈ 𝒟_R} D    (7.1.3)

For stationary ergodic sources with single-letter distortion measures, the 
definitions of R*(D) and D*(R) yield equivalent results, the only difference being 
the choice of dependent and independent variables. 

The study of rate distortion theory can be divided roughly into three areas. 
First, for each kind of source and distortion measure, one must find an explicit 
function R(D) and prove coding theorems which show that it is possible to achieve 
an average distortion of D or less with an encoding and decoding scheme of rate R 
for any rate R > R(D). A converse must also be derived which shows that if an 
encoder-decoder pair has rate R < R(D), then it is impossible to achieve average 
distortion of D or less with this pair. These two theorems (direct and converse) 
establish that R*(D) = R(D) and give operational significance to the function 
R(D). The second area concerns the actual determination of the optimal attainable 
performance, and this requires finding the form of the rate distortion function, 
R*(D), for various sources and distortion measures. Often when this is difficult, 
tight bounds on R*(D) can be obtained. The final category of study deals with the 
application of rate distortion theory to data compression practice. Developing 
effective sets of implementation techniques for source encoding which produces 
rates approaching R*(D\ finding meaningful measures of distortion that agree 
well with users needs, and finding reasonable statistical models for important 
sources are the three main problems associated with application of this theory to 
practice. 

In this chapter, we develop the basic theory for memoryless sources, beginning 
with block codes for discrete memoryless sources in the next section, and its 
relationship to channel coding theory in Sec. 7.3. Results on tree codes and trellis 
codes are presented in Sec. 7.4. All these results are extended to continuous- 
amplitude (discrete-time) memoryless sources in Sec. 7.5. Sections 7.6 and 7.7 
treat the evaluation of the rate distortion function for discrete memoryless sources 
and continuous-amplitude memoryless sources, respectively. Various generaliza 
tions of the theory are presented in Chap. 8, including sources with memory and 
universal coding concepts. 



7.2 DISCRETE MEMORYLESS SOURCES BLOCK CODES 

In this section and the following two sections we shall restrict our study of source coding with a fidelity criterion to the case of a discrete memoryless source with alphabet 𝒰 = {a_1, a_2, ..., a_A} and letter probabilities Q(a_1), Q(a_2), ..., Q(a_A). Then in each unit of time, say T_s seconds, the source emits a symbol u ∈ 𝒰 according to these probabilities and independently of past or future outputs. The user alphabet is denoted 𝒱 = {b_1, b_2, ..., b_B} and there is a nonnegative distortion




measure d(u, v) defined for each pair (u, v) in 𝒰 × 𝒱. Since the alphabet is finite, we may assume that there exists a finite number d_0 such that for all u ∈ 𝒰 and v ∈ 𝒱

0 ≤ d(u, v) ≤ d_0 < ∞    (7.2.1)



In this section, we consider block source coding where sequences of N source symbols will be represented by sequences of N user symbols. The average amount of distortion between N source output symbols u = (u_1, u_2, ..., u_N) and N representation symbols v = (v_1, v_2, ..., v_N) is given by

d_N(u, v) = (1/N) Σ_{n=1}^{N} d(u_n, v_n)    (7.2.2)



Let ℬ = {v_1, v_2, ..., v_M} be a set of M representation sequences of N user symbols each. This is called a block source code of size M and block length N, and each sequence in ℬ is called a codeword. Code ℬ will be used to encode a source sequence u ∈ 𝒰^N by choosing the codeword v ∈ ℬ which minimizes d_N(u, v). We denote this minimum by

d(u|ℬ) = min_{v ∈ ℬ} d_N(u, v)    (7.2.3)



and we define in a natural way the average distortion achieved with code ℬ as

d(ℬ) = Σ_u Q_N(u) d(u|ℬ)    (7.2.4)

where

Q_N(u) = Π_{n=1}^{N} Q(u_n)    (7.2.5)



as follows from the assumption that the source is memoryless. 

Each N units of time when N source symbols are collected by the source 
encoder, the encoder selects a codeword according to the minimum distortion rule 
(7.2.3). The index of the selected codeword is then transmitted over the link 
between source encoder and source decoder. The source decoder then selects the 
codeword with this transmitted index and presents it to the user. This block 
source coding system is shown in Fig. 7.3. Since, for each sequence of N source 
symbols, one of M indices is transmitted over the noiseless channel between the 
encoder and decoder (which can be represented by a distinct binary sequence 
whose length is the smallest integer greater than or equal to log M) the required 





(Figure 7.3: the source encoder searches for the codeword v_m in ℬ which minimizes d_N(u, v) and sends the index m ∈ {1, 2, ..., M}; the source decoder chooses codeword v_m ∈ ℬ and presents it to the user.)

Figure 7.3 Block source coding system.




rate⁵ is R = (ln M)/N nats per source symbol. In the following we will refer to code ℬ as a block code of block length N and rate R.

For a given fidelity criterion D, we are interested in determining how small a rate R can be achieved when d(ℬ) ≤ D. Unfortunately, for any given code ℬ, the average distortion d(ℬ) is generally difficult to evaluate. Indeed, the evaluation of d(ℬ) is analogous to the evaluation of error probabilities for specific codes in channel coding. Just as we did in channel coding, we now use ensemble average coding arguments to get around this difficulty and show how well the above block source coding system can perform. Thus we proceed to prove coding theorems that establish the theoretically minimum possible rate R for a given distortion D.
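As a concrete illustration of the encoding rule (7.2.3), and of why d(ℬ) is awkward to evaluate exactly, the following Python sketch (not part of the original text; the codebook, block length, and distortion measure are illustrative assumptions) encodes source blocks by exhaustive minimum-distortion search and estimates d(ℬ) by averaging over randomly drawn source blocks.

```python
# Illustrative sketch only: minimum-distortion block encoding, (7.2.3),
# and a Monte Carlo estimate of the average distortion d(B) of (7.2.4).
import math, random

def d_N(u, v, d):
    """Per-letter distortion (7.2.2): average single-letter distortion over a block."""
    return sum(d(a, b) for a, b in zip(u, v)) / len(u)

def encode(u, codebook, d):
    """Return the index m and distortion of the codeword minimizing d_N(u, v_m)."""
    m = min(range(len(codebook)), key=lambda i: d_N(u, codebook[i], d))
    return m, d_N(u, codebook[m], d)

# Assumed example: binary source, error (Hamming) distortion, an M = 4, N = 4 code.
hamming = lambda a, b: 0.0 if a == b else 1.0
codebook = [(0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 1, 1)]
rate = math.log(len(codebook)) / 4            # R = (ln M)/N nats per source symbol

random.seed(1)
blocks = [tuple(random.randint(0, 1) for _ in range(4)) for _ in range(10000)]
avg = sum(encode(u, codebook, hamming)[1] for u in blocks) / len(blocks)
print(rate, avg)                              # rate and an estimate of d(B)
```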

Let us first introduce an arbitrary conditional probability distribution {P(v|u): v ∈ 𝒱, u ∈ 𝒰}.⁶ For sequences u ∈ 𝒰^N and v ∈ 𝒱^N, we assume conditional independence in this distribution so that

P_N(v|u) = Π_{n=1}^{N} P(v_n|u_n)    (7.2.6)

Corresponding marginal probabilities are thus given by

P_N(v) = Π_{n=1}^{N} P(v_n)    (7.2.7)

where

P(v) = Σ_u P(v|u)Q(u)

Similarly, applying Bayes' rule, we have the backward conditional probabilities

Q_N(u|v) = Π_{n=1}^{N} Q(u_n|v_n)    (7.2.8)

where

Q(u|v) = P(v|u)Q(u)/P(v)



We attach no physical significance to the conditional probabilities {P(v|u): v ∈ 𝒱, u ∈ 𝒰}, but merely use them as a convenient tool for deriving bounds on the average distortion when using a code ℬ of size M and block length N.



5 M is usually taken to be a power of 2; however, even if this is not the case, we may combine the 
transmission of several indices into one larger channel codeword and thus approach R as closely as 
desired. 

6 We shall denote all probability distribution and density functions associated with source coding 
by capital letters. 




Recall from (7.2.4) that the average distortion achieved using code ℬ is

d(ℬ) = Σ_u Q_N(u) d(u|ℬ)    (7.2.4)

Since

Σ_v P_N(v|u) = 1

we can also write this as

d(ℬ) = Σ_u Σ_v Q_N(u) P_N(v|u) d(u|ℬ)    (7.2.9)

Here v ∈ 𝒱^N is not a codeword but only a dummy variable of summation. We now split the summation over u and v into two disjoint regions by defining the indicator function

Φ(u, v; ℬ) = 1 if d_N(u, v) < d(u|ℬ), and Φ(u, v; ℬ) = 0 otherwise    (7.2.10)

Since (1 - Φ) + Φ = 1, we have

d(ℬ) = Σ_u Σ_v Q_N(u) P_N(v|u) d(u|ℬ)[1 - Φ(u, v; ℬ)]
      + Σ_u Σ_v Q_N(u) P_N(v|u) d(u|ℬ) Φ(u, v; ℬ)    (7.2.11)

Using the inequality, which results from definition (7.2.10),

d(u|ℬ)[1 - Φ(u, v; ℬ)] ≤ d_N(u, v)    (7.2.12)

in the first summation, and using the inequality, which follows from (7.2.1),

d(u|ℬ) = min_{v ∈ ℬ} d_N(u, v) ≤ d_0    (7.2.13)

in the second summation in (7.2.11), we obtain the bound

d(ℬ) ≤ Σ_u Σ_v Q_N(u) P_N(v|u) d_N(u, v) + d_0 Σ_u Σ_v Q_N(u) P_N(v|u) Φ(u, v; ℬ)    (7.2.14)

The first term in this bound simplifies to

Σ_u Σ_v Q_N(u) P_N(v|u) d_N(u, v) = Σ_u Σ_v Q_N(u) P_N(v|u) (1/N) Σ_{n=1}^{N} d(u_n, v_n)
                                   = Σ_u Σ_v Q(u) P(v|u) d(u, v)
                                   ≡ D(P)    (7.2.15)

To bound the second term, we need to apply an ensemble average argument. In particular, we consider an ensemble of block codes of size M and block length N where ℬ = {v_1, v_2, ..., v_M} is assigned the product measure

P(ℬ) = Π_{m=1}^{M} P_N(v_m)    (7.2.16)

where P_N(v) is defined by (7.2.7) and is the marginal distribution corresponding to the given conditional probability distribution {P(v|u): v ∈ 𝒱, u ∈ 𝒰}. Averages over this code ensemble will be denoted by an overbar. The desired bound for the ensemble average of the second term in (7.2.14) is given by the following lemma.



Lemma 7.2.1

\overline{Σ_u Σ_v Q_N(u) P_N(v|u) Φ(u, v; ℬ)} ≤ e^{-NE(R; ρ, P)}    -1 < ρ < 0    (7.2.17)

where

E(R; ρ, P) = -ρR + E_0(ρ, P)    (7.2.18)

R = (ln M)/N

and

E_0(ρ, P) = -ln Σ_u [Σ_v P(v) Q(u|v)^{1/(1+ρ)}]^{1+ρ}



PROOF Using the Hölder inequality (see App. 3A), we have, for any -1 < ρ < 0,

Σ_v P_N(v|u) Φ(u, v; ℬ) ≤ [Σ_v P_N(v)^{ρ/(1+ρ)} P_N(v|u)^{1/(1+ρ)}]^{1+ρ} [Σ_v P_N(v) Φ(u, v; ℬ)]^{-ρ}    (7.2.19)

since it follows from definition (7.2.10) that Φ^{-1/ρ} = Φ. Averaging this over the code ensemble and applying the Jensen inequality over the same range of ρ yields

\overline{Σ_u Σ_v Q_N(u) P_N(v|u) Φ(u, v; ℬ)}
    ≤ Σ_u Q_N(u) [Σ_v P_N(v)^{ρ/(1+ρ)} P_N(v|u)^{1/(1+ρ)}]^{1+ρ} [\overline{Σ_v P_N(v) Φ(u, v; ℬ)}]^{-ρ}    (7.2.20)

The second bracketed term above is simply

\overline{Σ_v P_N(v) Φ(u, v; ℬ)} = Pr {d_N(u, v) < min (d_N(u, v_1), d_N(u, v_2), ..., d_N(u, v_M))}
                                 ≤ 1/(M + 1)    (7.2.21)

since the code ℬ has the product measure given in (7.2.16) and thus, for a fixed u, each of the random variables d_N(u, v), d_N(u, v_1), ..., d_N(u, v_M), which are independent and identically distributed, has the same probability of being the minimum. Using (7.2.21) in (7.2.20) and applying Bayes' rule P(v|u)/P(v) = Q(u|v)/Q(u) componentwise, we have

\overline{Σ_u Σ_v Q_N(u) P_N(v|u) Φ(u, v; ℬ)}
    ≤ (M + 1)^{ρ} Σ_u Q_N(u) [Σ_v P_N(v)^{ρ/(1+ρ)} P_N(v|u)^{1/(1+ρ)}]^{1+ρ}
    ≤ e^{NρR} Π_{n=1}^{N} Σ_{u_n} [Σ_{v_n} P(v_n) Q(u_n|v_n)^{1/(1+ρ)}]^{1+ρ}
    = e^{NρR} e^{-NE_0(ρ, P)}
    = e^{-NE(R; ρ, P)}    (7.2.22)

Let us briefly examine the behavior of this bound for various parameter values. As stated in the above lemma, the bound given in (7.2.17) applies for all ρ in the range -1 < ρ < 0 and for any choice of the conditional probability {P(v|u): v ∈ 𝒱, u ∈ 𝒰}. The expression E(R; ρ, P) is identical to the random coding exponent in channel coding theory introduced in Sec. 3.1. The only difference is that here the parameter ρ ranges between -1 and 0, while for channel coding this parameter ranges from 0 to 1. Also, here we can pick an arbitrary conditional probability {P(v|u)} which influences both P(v) and Q(u|v), while in the channel random coding exponent the channel conditional probability is fixed and only the distribution of the code ensemble is allowed to change. In the following lemmas, we draw upon our earlier examination of the random coding bound for channel coding. Here E_0(ρ, P) is a form of the Gallager function first defined in (3.1.18).




Lemma 7.2.2

E_0(ρ, P) = -ln Σ_u [Σ_v P(v) Q(u|v)^{1/(1+ρ)}]^{1+ρ}    (7.2.23)

has the following properties for -1 < ρ < 0:

E_0(ρ, P) ≤ 0

∂E_0(ρ, P)/∂ρ ≥ I(P) > 0    (7.2.24)

∂²E_0(ρ, P)/∂ρ² ≤ 0

E_0(0, P) = 0

∂E_0(ρ, P)/∂ρ |_{ρ=0} = I(P)

where⁷

I(P) = Σ_u Σ_v Q(u) P(v|u) ln [P(v|u)/P(v)]    (7.2.25)

is the usual average mutual information function.

PROOF This lemma is the same as Lemma 3.2.1. Its proof is given in App. 3A.

Since we are free to choose any ρ in the interval -1 < ρ < 0, the bound in Lemma 7.2.1 can be minimized with respect to ρ or, equivalently, the negative exponent can be maximized. We first establish that the minimum always corresponds to a negative exponent, and then show how to determine its value.



Lemma 7.2.3

max_{-1<ρ<0} E(R; ρ, P) > 0    for R > I(P)    (7.2.26)



PROOF It follows from the properties given in Lemma 7.2.2 and the mean value theorem that, for any δ > 0, there exists a ρ_0 in the interval -1 < ρ_0 < 0 such that⁸

[E_0(ρ_0, P) - E_0(0, P)]/ρ_0 ≤ ∂E_0(ρ, P)/∂ρ |_{ρ=0} + δ

⁷ I(P) = I(𝒰; 𝒱) was first defined in Sec. 1.2. Henceforth, the conditional probability distribution is used as the argument because this is the variable over which we optimize.

⁸ We assume E_0(ρ, P) is strictly convex ∩ in ρ. Otherwise this proof is trivial.




which, since E_0(0, P) = 0 and ∂E_0(ρ, P)/∂ρ |_{ρ=0} = I(P), implies

E_0(ρ_0, P) ≥ ρ_0 [I(P) + δ]    (7.2.27)

Hence

max_{-1<ρ<0} E(R; ρ, P) = max_{-1<ρ<0} [-ρR + E_0(ρ, P)]
                        ≥ -ρ_0 R + E_0(ρ_0, P)
                        ≥ -ρ_0 [R - I(P) - δ]

We can choose δ = [R - I(P)]/2 > 0 so that

max_{-1<ρ<0} E(R; ρ, P) ≥ -ρ_0 [R - I(P)]/2 > 0    (7.2.28)



Analogously to the channel coding bound for fixed conditional probability distribution {P(v|u): v ∈ 𝒱, u ∈ 𝒰}, the value of the exponent

max_{-1<ρ<0} E(R; ρ, P)

is determined by the parametric equations

max_{-1<ρ<0} E(R; ρ, P) = -ρ*R + E_0(ρ*, P)

R = ∂E_0(ρ, P)/∂ρ |_{ρ=ρ*}    (7.2.29)

for I(P) < R and -1 < ρ* < 0. In Fig. 7.4 we sketch these relationships.
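The maximization over ρ in (7.2.29) is easy to carry out numerically. The following Python sketch is not part of the original text: the test-channel {P(v|u)}, the source distribution Q, and the simple grid search (used in place of solving the parametric equations exactly) are illustrative assumptions. It evaluates E_0(ρ, P) of Lemma 7.2.1 and the exponent max_{-1<ρ<0} [-ρR + E_0(ρ, P)].

```python
# Illustrative sketch: evaluate E_0(rho, P) and the exponent of (7.2.29) by grid search.
import math

def E0(rho, Q, P_cond):
    """E_0(rho, P) = -ln sum_u [ sum_v P(v) Q(u|v)^{1/(1+rho)} ]^{1+rho}."""
    U, V = range(len(Q)), range(len(P_cond[0]))
    Pv = [sum(Q[u] * P_cond[u][v] for u in U) for v in V]        # marginal P(v)
    Quv = [[Q[u] * P_cond[u][v] / Pv[v] for u in U] for v in V]  # backward Q(u|v)
    s = sum(sum(Pv[v] * Quv[v][u] ** (1.0 / (1.0 + rho)) for v in V) ** (1.0 + rho)
            for u in U)
    return -math.log(s)

def exponent(R, Q, P_cond, steps=2000):
    """max over -1 < rho < 0 of E(R; rho, P) = -rho*R + E_0(rho, P)."""
    rhos = [-(k + 1) / (steps + 2) for k in range(steps)]        # grid inside (-1, 0)
    return max(-rho * R + E0(rho, Q, P_cond) for rho in rhos)

# Assumed example: binary symmetric source and a test channel with crossover D = 0.1.
D = 0.1
Q = [0.5, 0.5]
P_cond = [[1 - D, D], [D, 1 - D]]
R = 0.5                                  # nats per source symbol, here R > I(P)
print(exponent(R, Q, P_cond))            # positive, since R exceeds I(P)
```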

Now let us combine these results into a bound on the average distortion using codes of block length N and rate R. We take the code ensemble average of d(ℬ) given by (7.2.14) and bound this by the sum of (7.2.15) and the bound in Lemma 7.2.1. This results in the bound on \overline{d(ℬ)} given by

\overline{d(ℬ)} ≤ D(P) + d_0 e^{-NE(R; ρ, P)}    (7.2.30)

for any -1 < ρ < 0. Minimizing the bound with respect to ρ yields

\overline{d(ℬ)} ≤ D(P) + d_0 exp [-N max_{-1<ρ<0} E(R; ρ, P)]    (7.2.31)

where

max_{-1<ρ<0} E(R; ρ, P) > 0    for R > I(P)

and ρ* is given by the parametric equations (7.2.29).






(Figure 7.4: sketch of E_0(ρ, P) versus ρ for -1 < ρ < 0, showing the slope R at ρ = ρ*, the slope I(P) at ρ = 0, and the value ρ*R.)

Figure 7.4 E_0(ρ, P) curve.



At this point we are free to choose the conditional probability {P(v|u)} to minimize the bound on \overline{d(ℬ)} further. Suppose we are given a fidelity criterion D which we wish to satisfy with the block source encoder and decoder system of Fig. 7.3. Let us next define the set of conditional probabilities that satisfy the condition D(P) ≤ D

𝒫_D = {P(v|u): D(P) ≤ D}    (7.2.32)

It follows that 𝒫_D is a nonempty, closed, convex set for all

D ≥ Σ_u Q(u) min_v d(u, v) ≡ D_min    (7.2.33)

since in defining v(u) by the relation d(u, v(u)) = min_v d(u, v) we may construct the conditional distribution

P(v|u) = 1 if v = v(u), and P(v|u) = 0 otherwise    (7.2.34)


which belongs to 𝒫_D and achieves the lower bound. Now we define the source reliability function

E(R, D) = max_{P ∈ 𝒫_D} max_{-1<ρ<0} E(R; ρ, P)    (7.2.35)

and the function

R(D) = min_{P ∈ 𝒫_D} I(P)    (7.2.36)

We will soon show that in fact R(D) is the rate distortion function as defined in (7.1.2), but for the moment we shall treat it only as a candidate for the rate distortion function. With these definitions we have the source coding theorem.

Theorem 7.2.1: Source coding theorem For any block length N and rate R, there exists a block code ℬ with average distortion d(ℬ) satisfying

d(ℬ) ≤ D + d_0 e^{-NE(R, D)}    (7.2.37)

where

E(R, D) > 0    for R > R(D)

PROOF Suppose P* ∈ 𝒫_D achieves the maximization (7.2.35) in the source reliability function. Then from (7.2.31) we have

\overline{d(ℬ)} ≤ D(P*) + d_0 e^{-NE(R, D)}    (7.2.38)

where

E(R, D) > 0    for R > I(P*)

But by definition (7.2.32) of 𝒫_D, we have D(P*) ≤ D. Also since

E(R, D) ≥ max_{-1<ρ<0} E(R; ρ, P) > 0    for R > I(P)

where P can be any P ∈ 𝒫_D, we have

E(R, D) > 0    for R > min_{P ∈ 𝒫_D} I(P) = R(D)

Hence

\overline{d(ℬ)} ≤ D + d_0 e^{-NE(R, D)}    (7.2.39)

where E(R, D) > 0 for R > R(D). Since this bound holds for the ensemble average over all codes of block length N and rate R, we know that there exists at least one code whose distortion is less than or equal to \overline{d(ℬ)}, thus completing the proof.

Example (Binary symmetric source, error distortion) Let 𝒰 = 𝒱 = {0, 1} and d(u, v) = 1 - δ_{uv}. Also suppose Q(0) = Q(1) = 1/2. By symmetry, the distribution P ∈ 𝒫_D that achieves both E(R, D) and R(D) is given by

P(v|u) = D for v ≠ u and P(v|u) = 1 - D for v = u,    where 0 ≤ D ≤ 1/2    (7.2.40)

Then the parametric equations (7.2.29) become (see also Sec. 3.4)

E(R, D) = E(R; ρ*, P) = -δ_D ln D - (1 - δ_D) ln (1 - D) - ℋ(δ_D)    (7.2.41)

and

R = ln 2 - ℋ(δ_D)    (7.2.42)

where

δ_D = D^{1/(1+ρ*)} / [D^{1/(1+ρ*)} + (1 - D)^{1/(1+ρ*)}]    (7.2.43)

E(R, D) is sketched in Fig. 7.5 for 0 ≤ D ≤ 1/2 and R(D) ≤ R ≤ ln 2, where R(D) = ln 2 - ℋ(D).



(Figure 7.5: E(R, D) versus R, increasing from 0 at R = R(D).)

Figure 7.5 Sketch of E(R, D) for the binary symmetric source with error distortion.
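A quick numerical check of this example is straightforward. The Python sketch below is not part of the original text; it simply tabulates (7.2.41) and (7.2.42) with δ_D as the running parameter (an assumed parametrization of the sweep) and prints R(D) = ln 2 - ℋ(D) together with a few points of the reliability curve E(R, D).

```python
# Illustrative sketch: R(D) and E(R, D) for the binary symmetric source, error distortion.
import math

def H(x):
    """Binary entropy in nats."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log(x) - (1 - x) * math.log(1 - x)

def rate_distortion_bss(D):
    """R(D) = ln 2 - H(D) for 0 <= D <= 1/2."""
    return math.log(2) - H(D)

def reliability_curve(D, points=6):
    """Sample (R, E(R, D)) pairs as delta_D runs from D (where E = 0) down to 0."""
    out = []
    for k in range(points):
        delta = D * (1 - k / (points - 1))                       # delta_D in [0, D]
        R = math.log(2) - H(delta)                               # (7.2.42)
        E = -delta * math.log(D) - (1 - delta) * math.log(1 - D) - H(delta)   # (7.2.41)
        out.append((R, E))
    return out

D = 0.1
print(rate_distortion_bss(D))         # about 0.368 nats/source symbol
for R, E in reliability_curve(D):
    print(round(R, 3), round(E, 4))   # E grows from 0 at R = R(D) to -ln(1-D) at R = ln 2
```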




Theorem 7.2.1 shows that, as block length increases, we can find a code of any 
rate R > R(D) whose average distortion is arbitrarily close to D. A weaker but 
more common form of this theorem is given next. 

Corollary 7.2.2 Given any ε > 0, there exists a block code ℬ of rate R ≤ R(D) + ε with average distortion d(ℬ) ≤ D + ε.

PROOF Let R satisfy

R(D) < R ≤ R(D) + ε

and choose N large enough so that d_0 e^{-NE(R, D)} ≤ ε.

In order to show that R(D) is indeed the rate distortion function, we must show that it is impossible to achieve an average distortion of D or less with any source encoder-decoder pair that has rate R < R(D). To show this we first need two properties of I(P). First let {P_N(v|u): v ∈ 𝒱^N, u ∈ 𝒰^N} be any arbitrary conditional distribution on sequences of length N. Also let P^{(n)}(v_n|u_n) be the marginal conditional distribution for the nth pair (v_n, u_n) derived from this distribution. Defining

I(P_N) = Σ_u Σ_v Q_N(u) P_N(v|u) ln [P_N(v|u)/P_N(v)]    (7.2.44)

and

I(P^{(n)}) = Σ_u Σ_v Q(u) P^{(n)}(v|u) ln [P^{(n)}(v|u)/P^{(n)}(v)]    (7.2.45)

where

P_N(v) = Σ_u Q_N(u) P_N(v|u)    and    P^{(n)}(v) = Σ_u Q(u) P^{(n)}(v|u)

we have the following inequalities

I((1/N) Σ_{n=1}^{N} P^{(n)}) ≤ (1/N) Σ_{n=1}^{N} I(P^{(n)})    (7.2.46)

and

(1/N) Σ_{n=1}^{N} I(P^{(n)}) ≤ (1/N) I(P_N)    (7.2.47)




Inequality (7.2.46) is the statement that I(P) is a convex ∪ function of P. This statement is given in Lemma 1A.2 in App. 1A. Inequality (7.2.47) can be shown using an argument analogous to the proof of Lemma 1.2.2 for I(𝒳_N; 𝒴_N) given in Chap. 1 (see Prob. 7.1).

Theorem 7.2.3: Converse source coding theorem For any source encoder- 
decoder pair it is impossible to achieve average distortion less than or equal to 
D whenever the rate R satisfies R < R(D). 

PROOF Any encoder-decoder pair defines a mapping from source sequences to user sequences. For any length N, consider the mapping from 𝒰^N to 𝒱^N where we let M be the number of distinct sequences in 𝒱^N into which the sequences of 𝒰^N are mapped. Define the conditional distribution

P_N(v|u) = 1 if u is mapped into v, and P_N(v|u) = 0 otherwise    (7.2.48)

and let P^{(n)}(v|u) be the resulting marginal conditional distribution on the nth terms in the sequences. Also, define the conditional distribution

P(v|u) = (1/N) Σ_{n=1}^{N} P^{(n)}(v|u)    (7.2.49)

Now let us assume that the mapping results in an average distortion of D or less. Then

D(P_N) = Σ_u Q_N(u) d_N(u, v(u)) ≤ D    (7.2.50)

where u is mapped into v(u). But by definition (7.2.2)

D(P_N) = Σ_u Σ_v Q_N(u) P_N(v|u) (1/N) Σ_{n=1}^{N} d(u_n, v_n)
        = (1/N) Σ_{n=1}^{N} Σ_u Σ_v Q(u) P^{(n)}(v|u) d(u, v)
        = Σ_u Σ_v Q(u) P(v|u) d(u, v)
        = D(P) ≤ D    (7.2.51)




where the inequality follows from (7.2.50). Hence P(v|u) given by (7.2.49) belongs to 𝒫_D and so

R(D) ≤ I(P)
     = I((1/N) Σ_{n=1}^{N} P^{(n)})
     ≤ (1/N) Σ_{n=1}^{N} I(P^{(n)})
     ≤ (1/N) I(P_N)
     ≤ (1/N) ln M
     = R    (7.2.52)

We used here inequalities (7.2.46), (7.2.47), and I(P_N) ≤ ln M (see Prob. 1.7).⁹ Hence, D(P_N) ≤ D implies that R(D) ≤ R, which proves the theorem.

Note that this converse source coding theorem applies to all source encoder-decoder pairs and is not limited to block coding. For any encoder-decoder pair and any sequence of length N, there is some mapping defined from 𝒰^N to 𝒱^N, and that is all that is required in the proof. Later in Sec. 7.4, when we consider non-block codes called trellis codes, this theorem will still be applicable.

The source coding theorem (Theorem 7.2.1) and the converse source coding theorem (Theorem 7.2.3) together show that R(D) is the rate distortion function. Hence for discrete memoryless sources we have R*(D) = R(D) where

R(D) = min_{P ∈ 𝒫_D} I(P)    nats/source symbol

I(P) = Σ_u Σ_v Q(u) P(v|u) ln [P(v|u)/P(v)]    (7.2.53)

𝒫_D = {P(v|u): Σ_u Σ_v Q(u) P(v|u) d(u, v) ≤ D}

Thus for this case we have an explicit form of the rate distortion function in terms of a minimization of average mutual information.
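For sources that lack the symmetry of the preceding example, the minimization in (7.2.53) is usually carried out numerically. The following Python sketch is not part of the original text: it computes points of R(D) with the Blahut iteration, a standard fixed-point method that is not derived in this section, and the source distribution, distortion matrix, and slope parameters below are illustrative assumptions.

```python
# Illustrative sketch: numerical points on R(D) of (7.2.53) via the Blahut iteration.
import math

def rd_point(Q, d, s, iters=500):
    """Return (D, R) on the rate distortion curve for slope parameter s <= 0."""
    A, B = len(Q), len(d[0])
    q = [1.0 / B] * B                                    # output distribution q(v)
    for _ in range(iters):
        lam = [sum(q[v] * math.exp(s * d[u][v]) for v in range(B)) for u in range(A)]
        q = [q[v] * sum(Q[u] * math.exp(s * d[u][v]) / lam[u] for u in range(A))
             for v in range(B)]
    lam = [sum(q[v] * math.exp(s * d[u][v]) for v in range(B)) for u in range(A)]
    P = [[q[v] * math.exp(s * d[u][v]) / lam[u] for v in range(B)] for u in range(A)]
    Pv = [sum(Q[u] * P[u][v] for u in range(A)) for v in range(B)]
    D = sum(Q[u] * P[u][v] * d[u][v] for u in range(A) for v in range(B))
    R = sum(Q[u] * P[u][v] * math.log(P[u][v] / Pv[v])
            for u in range(A) for v in range(B) if P[u][v] > 0)
    return D, R

# Binary symmetric source with error distortion: result should match ln 2 - H(D).
Q = [0.5, 0.5]
d = [[0.0, 1.0], [1.0, 0.0]]
for s in (-1.0, -2.0, -4.0):
    D, R = rd_point(Q, d, s)
    closed_form = math.log(2) + D * math.log(D) + (1 - D) * math.log(1 - D)
    print(round(D, 4), round(R, 4), round(closed_form, 4))
```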

The preceding source coding theorem and its converse establish that the rate 
distortion function R(D) given by (7.2.53) specifies the minimum rate at which the 
source decoder must receive information about the source outputs in order to be 
able to represent it to the user with an average distortion that does not exceed D. 

9 With entropy source coding discussed in Chap. 1 it may be possible to reduce the rate below 
(In M)/N, but never below I(P N )/N. 




Theorem 7.2.1 also shows that block codes can achieve distortion D with rate R(D) in the limit as the block length N goes to infinity. For a block code ℬ of finite block length N and rate R, it is natural to ask how close to the rate distortion limit (D, R(D)) we can have (d(ℬ), R). The following theorem provides a bound on the rate of convergence to the limit (D, R(D)).

Theorem 7.2.4 There exists a code ℬ of block length N and rate R such that

0 ≤ d(ℬ) - D ≤ d_0 e^{-Nδ²(N)/(2C_0)}    (7.2.54)

when

0 < δ(N) ≡ R - R(D) ≤ (1/2)C_0

where C_0 = 2 + 16[ln A]² is a constant such that for all P

|∂²E_0(ρ, P)/∂ρ²| ≤ C_0    for -1/2 ≤ ρ ≤ 0



PROOF From (7.2.30) we know that, for each ρ in the interval -1 < ρ < 0 and for the conditional probability {P(v|u): v ∈ 𝒱, u ∈ 𝒰}, there exists a code ℬ of block length N and rate R such that

d(ℬ) ≤ D(P) + d_0 e^{-N[E_0(ρ, P) - ρR]}    (7.2.55)

Recall from (7.2.18) that

E(R; ρ, P) = -ρR + E_0(ρ, P)    (7.2.56)

For fixed P, twice integrating E_0''(ρ, P) = ∂²E_0(ρ, P)/∂ρ² yields

∫_0^ρ ∫_0^β E_0''(α, P) dα dβ = -ρ ∂E_0(0, P)/∂ρ + E_0(ρ, P) - E_0(0, P)    (7.2.57)

Since E_0(0, P) = 0 and ∂E_0(0, P)/∂ρ = I(P), we have

E_0(ρ, P) = ρI(P) + ∫_0^ρ ∫_0^β E_0''(α, P) dα dβ    (7.2.58)

Let C_0 be any constant upper bound to |E_0''(ρ, P)|. (See Prob. 7.3, where we show that C_0 = 2 + 16[ln A]² is such a bound for -1/2 ≤ ρ ≤ 0.) Then

E_0(ρ, P) ≥ ρI(P) - ∫_0^{|ρ|} ∫_0^β C_0 dα dβ
          = ρI(P) - (ρ²/2)C_0    (7.2.59)




Hence

E(R; ρ, P) ≥ -ρR + ρI(P) - (ρ²/2)C_0    (7.2.60)

Now choose P* ∈ 𝒫_D such that I(P*) = R(D). Then

E(R; ρ, P*) ≥ -ρ[R - R(D)] - (ρ²/2)C_0    (7.2.61)

Defining δ(N) = R - R(D), we choose

ρ* = -δ(N)/C_0    (7.2.62)

where δ(N) is assumed small enough to guarantee -1/2 ≤ ρ* < 0. Then

E(R; ρ*, P*) ≥ δ²(N)/(2C_0)    (7.2.63)

and putting this into (7.2.55) gives

d(ℬ) ≤ D + d_0 e^{-Nδ²(N)/(2C_0)}    (7.2.64)

There are many ways in which the bound on (d(ℬ), R) can be made to converge to (D, R(D)). For example, for some constant a > 0

δ(N) = R - R(D) = aN^{-3/8}    (7.2.65)

yields

d(ℬ) - D ≤ d_0 e^{-(a²/2C_0)N^{1/4}}    (7.2.66)

A different choice of

δ(N) = sqrt(2C_0 γ (ln N)/N)    (7.2.67)

yields

d(ℬ) - D ≤ d_0 N^{-γ}    (7.2.68)

which shows that, if R → R(D) as sqrt((ln N)/N), we can have d(ℬ) → D as N^{-γ} for any fixed γ > 0.

Although Theorem 7.2.4 does not yield the tightest known bounds on the convergence of (d(ℬ), R) to (D, R(D)) (cf. Berger [1971], Gallager [1968], Pilc [1968]), the bounds are easy to evaluate. It turns out that some sources, called symmetric sources, have block source coding schemes that can be shown to converge much faster with block length (see Chap. 8, Sec. 8.5).
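To get a feel for (7.2.54), the short Python sketch below (not from the text; the alphabet size, d_0 = 1, and rate margin are illustrative assumptions) evaluates the guaranteed excess distortion for a few block lengths, and makes the slow decay of this particular bound visible.

```python
# Illustrative sketch: numerical evaluation of the convergence bound of Theorem 7.2.4.
import math

A, d0 = 2, 1.0
C0 = 2 + 16 * math.log(A) ** 2            # the constant of Theorem 7.2.4

def excess_distortion_bound(N, delta):
    """d(B) - D <= d0 * exp(-N * delta^2 / (2 * C0)) for 0 < delta <= C0/2."""
    return d0 * math.exp(-N * delta ** 2 / (2 * C0))

for N in (10**3, 10**4, 10**5, 10**6):
    print(N, excess_distortion_bound(N, delta=0.05))
# Even a fixed margin above R(D) requires very long blocks before this
# guaranteed excess distortion becomes small.
```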




7.3 RELATIONSHIPS WITH CHANNEL CODING 

There are several relationships between the channel coding theory of Chaps. 2 
through 6 and rate distortion theory. Some of these appear simply because the 
same mathematical tools are applied in both theories, while others are of a more 
fundamental nature. 

Suppose we no longer assume a noiseless channel and consider both source and channel block coding as shown in Fig. 7.6. Assume the discrete memoryless source emits a symbol once every T_s seconds and that we have a source encoder and decoder for a source block code, ℬ, of block length N and rate R nats per symbol of duration T_s seconds. For each NT_s seconds, a minimum distortion codeword in ℬ = {v_1, v_2, ..., v_M} is chosen to represent the source sequence u ∈ 𝒰^N and the codeword index m is sent over the channel. Hence, once every NT_s seconds, one of M = e^{NR} messages is sent over the channel. We assume that the memoryless channel is used once every T_c seconds and has a channel capacity of C nats per channel use of duration T_c seconds. The channel encoder and decoder use a channel block code, 𝒞, of block length N_c and rate R_c nats per channel use where¹⁰

T_c N_c = T_s N
T_c R = T_s R_c    (7.3.1)

10 It is not strictly necessary for the channel block length to satisfy (7.3.1) since the channel encoder can regard sequences of source encoder outputs as channel input symbols; that is, N_c could be any multiple of its value as given by (7.3.1).




(Figure 7.6: the source encoder delivers the index m ∈ {1, 2, ..., M}, i.e., R nats every T_s seconds, to the channel encoder; the channel code carries R_c nats every T_c seconds; the channel decoder delivers m̂ to the source decoder, with Pr{ℰ} = Pr{m̂ ≠ m}.)

Figure 7.6 Combined source and channel coding.




Here let ℰ = {m̂ ≠ m} be the event that a channel message error occurs and let ℰ̄ = {m̂ = m} denote its complement. The average distortion attained when using source code ℬ and channel code 𝒞 is

d(ℬ, 𝒞) = E{d_N(u, v_m̂) | ℬ, 𝒞, ℰ̄} Pr{ℰ̄} + E{d_N(u, v_m̂) | ℬ, 𝒞, ℰ} Pr{ℰ}    (7.3.2)

where the expectation E{·} is over both source output random variables and noisy channel outputs. When no channel errors occur, we have

d_N(u, v_m̂) = d_N(u, v_m) = d(u|ℬ) = min_{v ∈ ℬ} d_N(u, v)

From (7.2.1), we have

d_N(u, v_m̂) ≤ d_0    (7.3.3)

Substituting this bound and

Pr{ℰ̄} ≤ 1    (7.3.4)

in (7.3.2), we have

d(ℬ, 𝒞) ≤ d(ℬ) + d_0 Pr{ℰ}    (7.3.5)



From channel coding theory (Theorem 3.2.1), we know that there exists a channel code 𝒞 such that the probability of a channel message error Pr{ℰ} is bounded by

Pr{ℰ} ≤ e^{-N_c E(R_c)}    (7.3.6)

where

E(R_c) > 0    for R_c < C

Similarly from Theorem 7.2.1, we know that there exists a source code ℬ such that

d(ℬ) ≤ D + d_0 e^{-NE(R, D)}    (7.3.7)

where E(R, D) > 0 for R > R(D). Applying these codes to the combined source and channel coding scheme of Fig. 7.6 and substituting (7.3.6) and (7.3.7) in (7.3.5) yields the average distortion given by the following theorem.

Theorem 7.3.1 For the combined source and channel coding scheme of Fig. 7.6 discussed above, there exists a source code ℬ of rate R and block length N and a channel code 𝒞 of rate R_c and block length N_c satisfying (7.3.1) such that the average distortion is bounded by

d(ℬ, 𝒞) ≤ D + d_0 e^{-NE(R, D)} + d_0 e^{-(T_s/T_c)NE((T_c/T_s)R)}    (7.3.8)

where

E(R, D) > 0    and    E((T_c/T_s)R) > 0

for R satisfying

R(D) < R < C̄    (7.3.9)

where

C̄ = (T_s/T_c)C    (7.3.10)

is the channel capacity in nats per T_s seconds.

As long as the rate distortion function is less than the channel capacity, R(D) < C̄, we can achieve average distortion arbitrarily close to D. When R(D) > C̄, this is impossible, as established by the following.
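As a small numerical illustration of condition (7.3.9) (not part of the original text; the source, channel, and timing parameters below are made-up assumptions), the following Python lines compare R(D) for a binary symmetric source against the capacity C̄ of (7.3.10) for a binary symmetric channel used several times per source symbol.

```python
# Illustrative sketch: checking R(D) < C_bar for a BSS sent over a BSC.
import math

H = lambda x: 0.0 if x in (0.0, 1.0) else -x * math.log(x) - (1 - x) * math.log(1 - x)

Ts, Tc = 1e-4, 0.25e-4        # one source symbol per Ts, one channel use per Tc (assumed)
p = 0.05                      # BSC crossover probability (assumed)
C_use = math.log(2) - H(p)    # channel capacity in nats per channel use
C_bar = (Ts / Tc) * C_use     # capacity in nats per T_s seconds, (7.3.10)

D = 0.02
RD = math.log(2) - H(D)       # rate distortion function of the binary symmetric source
print(RD, C_bar, RD < C_bar)  # distortion close to D is achievable iff R(D) < C_bar
```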

Theorem 7.3.2 It is impossible to reproduce the source in Theorem 7.3.1 with 
fidelity D at the receiving end of any discrete memoryless channel of capacity 
C < R(D) nats per source letter. 

PROOF The proof of this converse follows directly from the data-processing 
theorem (Theorem 1.2.1) and the converse source coding theorem (Theorem 
7.2.3) (see Prob. 7.5). 

The above converse theorem is true regardless of what type of encoders and 
decoders are used. In fact, they need not be separated as shown in Fig. 7.6, nor do 
they need to be block coding schemes for Theorem 7.3.2 to be true. Since 
Theorem 7.3.1 is true for the block source and channel coding scheme of Fig. 7.6, 
we see that in the limit of large block lengths there is no loss of generality in 
assuming a complete separation of source coding and channel coding. From a 
practical viewpoint, this separation is desirable since it allows channel encoders 
and decoders to be designed independently of the actual source and user. The 
source encoder and decoder in effect adapt the source and user to any channel
coding system which has sufficient capacity. As block length increases, the source 
encoder outputs become equally likely (asymptotic equipartition property) so that, 
in the limit of large block lengths, all source encoder outputs depend only on the 
rate of the encoder and not on the detailed nature of the source. 

From Fig. 7.6, we see a natural duality between source and channel block 
coding. The source encoder performs an operation similar to the channel decoder, 
while the channel encoder is similar to the source decoder. Generally, in channel 
coding, the channel decoder is the more complex device, while in source coding 
the source encoder is the more complex device. We shall see in Sec. 7.4 that this 
duality also holds for trellis coding systems. Finally, we note that, although the 
source encoder removes redundancy from source sequences and channel encoding 
adds redundancy, these operations are done for quite different reasons. The source 



RATE DISTORTION THEORY! FUNDAMENTAL CONCEPTS FOR MEMORYLESS SOURCES 407 

encoder takes advantage of the statistical regularity of long sequences of the 
source output in order to represent the source outputs with a limited rate R(D). 
The channel encoder adds redundancy so as to achieve immunity to channel 
errors. 

We next show an interesting channel coding interpretation for the source coding theorems of Sec. 7.2. For the general discrete memoryless source, representation alphabet, and distortion measure defined earlier, consider 𝒫_D = {P(v|u): D(P) ≤ D} for some fidelity D. For any P ∈ 𝒫_D, define the channel transition probability for a discrete memoryless channel with input alphabet 𝒱 and output alphabet 𝒰 as

Q(u|v) = P(v|u)Q(u)/P(v)    (7.3.11)

This is sometimes referred to as the "backward test channel." Now consider any source code ℬ = {v_1, v_2, ..., v_M} of rate R and block length N. We can regard {v_0, v_1, ..., v_M} as a channel code¹¹ for the above backward test channel as shown in Fig. 7.7. Assume that the codewords are equally likely so that the maximum probability of correct detection, denoted P_c(v_0, v_1, ..., v_M), would be achieved by the usual maximum likelihood decoder. But suppose we use a suboptimum channel decoder which uses the decision rule, for given channel output u ∈ 𝒰^N,

choose v ∈ {v_0, v_1, ..., v_M} which minimizes d_N(u, v)    (7.3.12)

Then for this suboptimum decoder, the probability of a correct decision, denoted P̃_c(v_0, v_1, ..., v_M), is upper-bounded by

P̃_c(v_0, v_1, ..., v_M) = (1/(M+1)) Σ_{m=0}^{M} Pr {d_N(u, v_m) < min_{m'≠m} d_N(u, v_{m'}) | v_m is sent}
                         ≤ P_c(v_0, v_1, ..., v_M)
                         ≤ e^{-N[E_0(ρ, P) - ρR]}    -1 < ρ < 0    (7.3.13)

where the last inequality follows from the strong converse to the coding theorem 
(Theorem 3.9.1). We now use (7.3.13) to show why in the source coding theorem 

11 The vector v_0 plays the same role as the dummy vector v in the proof of Lemma 7.2.1.




Figure 7.7 Backward test channel.




(Theorem 7.2.1, also see Lemma 7.2.1) the source coding exponent, E(R, D), is essentially the exponent in the strong converse to the coding theorem.

We are primarily interested in the term Pr{d_N(u, v_0) < min_{m≠0} d_N(u, v_m) | v_0 is sent}, which may or may not be larger than P_c(v_0, v_1, ..., v_M). However, if we average (7.3.13) over the ensemble of codewords {v_0, v_1, ..., v_M} where all components are chosen independently according to {P(v): v ∈ 𝒱}, we have¹²

\overline{Pr {d_N(u, v_0) < min_{m≠0} d_N(u, v_m) | v_0 is sent}} = \overline{P̃_c(v_0, v_1, ..., v_M)}
    ≤ e^{-N[E_0(ρ, P) - ρR]}    -1 < ρ < 0    (7.3.14)

which is exactly Lemma 7.2.1. Then, as in Sec. 7.2, for P ∈ 𝒫_D we have average distortion

\overline{d(ℬ)} ≤ D + d_0 \overline{Pr {d_N(u, v_0) < min_{m≠0} d_N(u, v_m) | v_0 is sent}}
               ≤ D + d_0 e^{-N[E_0(ρ, P) - ρR]}    (7.3.15)

where

max_{-1<ρ<0} [E_0(ρ, P) - ρR] > 0    for R > I(P)

Here we see that the source coding theorem can be derived directly from the strong converse to the coding theorem due to Arimoto [1973] by applying it to the backward test channel corresponding to any P ∈ 𝒫_D as shown in Fig. 7.7. Since the strong converse to the coding theorem results in an exponent that is dual to the ensemble average error exponent, the source coding exponent is dual to the ensemble average error exponent.

Perhaps the least direct relationship between channel and source coding theories is the relationship between the low-rate expurgated error bounds of channel coding theory and the natural rate distortion function associated with the Bhattacharyya distance measure of the channel. In particular, suppose we have a DMC with input alphabet 𝒳, output alphabet 𝒴, and transition conditional probabilities {p(y|x): y ∈ 𝒴, x ∈ 𝒳}. For any two channel input letters x, x' ∈ 𝒳, we have the Bhattacharyya distance defined [see (2.3.15)] as

d(x, x') = -ln Σ_y sqrt(p(y|x) p(y|x'))    (7.3.16)

and we suppose that the channel input letters have a probability distribution {q(x): x ∈ 𝒳}. Alternatively, for a source with alphabet 𝒰 = 𝒳, probability distribution {q(x): x ∈ 𝒳}, representation alphabet 𝒱 = 𝒳, and the Bhattacharyya distance in (7.3.16) as a distortion measure, we have a rate distortion function which

12 We again use the overbar to denote the code ensemble average. Symmetry gives the equality here.




we denote as R(D; q). This leads us to define the natural rate distortion function for the Bhattacharyya distance (7.3.16) as

R(D) = max_q R(D; q)    (7.3.17)

To show the relationship between R(D) and the expurgated exponent for the DMC, let us consider the BSC with crossover probability p. Here 𝒰 = 𝒱 = {0, 1} and the distortion measure is

d(x, x') = 0 for x' = x and d(x, x') = α for x' ≠ x    (7.3.18)

Thus, letting α = ln [1/sqrt(4p(1 - p))], we see that the Bhattacharyya distance is proportional to the Hamming distance. It is easy to show (Sec. 7.6) that

R(D) = max_q R(D; q) = ln 2 - ℋ(D/α)    (7.3.19)

and the corresponding source is the Binary Symmetric Source (BSS).

Recall from Sec. 3.4 [see (3.4.8)] that, by the expurgated bound for the BSC, there exists a block code 𝒞 of block length N and rate R such that

P_E ≤ e^{-NE_ex(R)}    (7.3.20)

where D = E_ex(R) satisfies R = R(D), and R(D) is given by (7.3.19). Hence, we see that the natural rate distortion function for the BSC yields the expurgated exponent as the distortion level.

We can also prove the Gilbert bound discussed in Sec. 3.9 by using the above relationship with rate distortion theory. Let

d(N, R) = max_𝒞 d_min(𝒞)    (7.3.21)

where

d_min(𝒞) = min_{x, x' ∈ 𝒞, x ≠ x'} d_N(x, x')    (7.3.22)

and where the maximization is over all codes of block length N and rate R. Next let 𝒞* be a code of block length N and rate R that achieves the maximum minimum distance with the fewest codeword pairs that have the minimum distance d(N, R). Hence

d(N, R) = d_min(𝒞*)
        ≥ d(x|𝒞*)    for all x ∈ 𝒳^N    (7.3.23)

where

d(x|𝒞*) = min_{x' ∈ 𝒞*} d_N(x, x')




This inequality follows from the fact that if there existed an x* ∈ 𝒳^N such that d(x*|𝒞*) > d_min(𝒞*), then by interchanging x* with a codeword in 𝒞* that achieves the minimum distance when paired with another codeword, there would result a new code with fewer pairs of codewords that achieve the minimum distance. This contradicts the definition of 𝒞*. With (7.3.23), we can now prove the Gilbert bound.
Gilbert bound. 

Theorem 7.3.3: Gilbert bound

d(N, R) ≥ D

where

R = R(D) = ln 2 - ℋ(D/α)    (7.3.24)

and D_H = D/α is the Hamming distance.

PROOF 𝒞* defined above is a code of rate R which has average distortion d(𝒞*) satisfying

d(𝒞*) = Σ_x q_N(x) d(x|𝒞*)
       ≤ d(N, R)    (7.3.25)

where (7.3.23) is used in this inequality. Here we consider 𝒞* as a source block code. The converse source coding theorem (Theorem 7.2.3) states that any source code 𝒞* with distortion d(𝒞*) must have

R ≥ R(d(𝒞*))    (7.3.26)

Since D is given by (7.3.24), we must have R(D) = R ≥ R(d(𝒞*)). Then since R(D) is a strictly decreasing function of D on 0 ≤ D ≤ α/2, we have

d(N, R) ≥ d(𝒞*)
        ≥ D
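A numerical reading of Theorem 7.3.3 for binary codes with the Hamming distance is simple to produce. The Python sketch below is not part of the original text; the bisection routine and the sample rate are assumptions used only for illustration. It solves R = ln 2 - ℋ(D_H) for the normalized minimum distance D_H guaranteed by the bound.

```python
# Illustrative sketch: the Gilbert bound of Theorem 7.3.3 for binary codes.
import math

def H(x):
    return 0.0 if x in (0.0, 1.0) else -x * math.log(x) - (1 - x) * math.log(1 - x)

def gilbert_distance(R):
    """Solve ln 2 - H(D_H) = R for D_H in [0, 1/2] by bisection."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if math.log(2) - H(mid) > R:
            lo = mid            # rate still above target: move toward 1/2
        else:
            hi = mid
    return (lo + hi) / 2

R = 0.5 * math.log(2)           # rate 1/2 bit per symbol, expressed in nats
print(gilbert_distance(R))      # about 0.11: minimum distance >= 0.11*N is guaranteed
```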



The results for the BSC generalize to all DMCs, when we use the Bhattacharyya distance, if for the parameter s such that D = D_s, the matrix [e^{sd(x, x')}] is positive definite (see Jelinek [1968b] and Lesh [1976]). This positive definite condition holds for all s < 0 in most channels of interest. This shows that, for an arbitrary DMC, the Bhattacharyya distance is the natural generalization of the Hamming distance for binary codes used over the BSC, and a generalized Gilbert bound analogous to Theorem 7.3.3 can be found (see Probs. 7.8 and 7.9).




7.4 DISCRETE MEMORYLESS SOURCES TRELLIS CODES 

For block codes we have demonstrated a duality between source and channel 
coding where the source encoder performs in a manner similar to the channel 
decoder and the source decoder performs like a channel encoder (see Fig. 7.6). 
This duality holds also for trellis codes. We now proceed to show that trellis codes 
can be used for source coding where the source encoder performs the operations 
that are essentially the operations of the maximum likelihood trellis decoding 
algorithm of channel coding, while the trellis source decoder is essentially a trellis 
channel encoder. In particular, we show that it is possible to use trellis codes, 
which are general forms of convolutional codes, to achieve the rate distortion limit 
(D, R(D)) of a discrete memoryless source. 

Furthermore, the same algorithm which attains the channel coding bound 
with convolutional channel codes (Viterbi [1967]) also attains the source coding
bound with trellis (generalized convolutional) source codes. In this context, 
however, the term " maximum likelihood " does not apply. 

We again consider a discrete memoryless source with alphabet 𝒰 = {a_1, a_2, ..., a_A} and nonzero letter probabilities Q(a_1), Q(a_2), ..., Q(a_A). The user alphabet is denoted by 𝒱 = {b_1, b_2, ..., b_B}, and there is a nonnegative bounded distortion measure d(u, v) which satisfies

0 ≤ d(u, v) ≤ d_0 < ∞    (7.4.1)

for all u ∈ 𝒰, v ∈ 𝒱 and some d_0 < ∞.

Trellis codes are generalized convolutional codes generated by the same shift 
register encoder as convolutional codes, but with arbitrary delayless nonlinear 
operations replacing the linear combinatorial logic of the latter. Whether fixed or 
time-varying, they can most conveniently be described and analyzed by means of 
the familiar trellis diagram of Chap. 4. Figure 7.8 shows a trellis source decoder 
and Fig. 7.9 shows the corresponding trellis diagram for the binary-trellis code 
with K - 1 delay elements and a delayless transformation. Following the same 
convention as for channel convolutional codes, we will refer to K as the constraint 
length of the trellis code. We assume for the present a binary-trellis code with n 
destination symbols per branch, resulting in a code rate r = 1/n bits per source



(Figure 7.8: the trellis source decoder consists of a shift register of K - 1 delay elements driven by the binary input x ∈ {0, 1}, followed by a delayless transformation that produces the n user symbols of each branch.)

Figure 7.8 Trellis source decoder.




(Figure 7.9: trellis diagram over L branches with 2^{K-1} states 00...00, 00...01, ..., beginning and ending in the all-zeros state.)

Figure 7.9 Trellis diagram.



symbol. This means that, for each binary input, the trellis source decoder emits n symbols from 𝒱, and a sequence of binary input symbols defines a path in the trellis diagram. We can easily generalize to nonbinary trellis source decoders later. Here, each branch of the diagram is labelled with the corresponding n-dimensional destination vector in 𝒱^n, and the states (contents of the source decoder's first K - 1 delay registers) are denoted by the vertical position in the diagram, also shown at the left of the trellis diagram. The trellis is assumed to be initiated and terminated in the zero state, and no encoding or decoding is performed during the final merging in what we will call the "tail" of the trellis. There are 2^{K-1} states, and we assume that the trellis source coding operates continuously for many source symbols so that the effects of the tail can be ignored. We let the total code length be L branches, while the tail requires K - 1 further branches.

The source encoder searches for that path in the trellis whose destination 
(user) sequence v most closely resembles (in the sense of minimum distortion) the 
source sequence u. Once the source encoder picks a path, then it sends binary 
symbols x through the channel (again taken to be noiseless) which drives the 
trellis source decoder through the desired sequence of states yielding the desired 
path v as the trellis source decoder output. Figure 7.10 shows the block diagram 
for the trellis source coding system. 

We assume that the trellis source coding system operates continuously for a long time between initial fan-out and final merging. This means we assume that L >> K and that the effects of the tail can be ignored. In particular, we will ignore the last K - 1 branches where all paths merge to the zero state. Hence, the total code length is taken to be L branches and we have a total source sequence length of N_L = nL. There are many possible paths or trellis codewords of length N_L, one




of which must be chosen to represent the source sequence. For a given source sequence u and any trellis codeword v, we have the distortion measure

d_{N_L}(u, v) = (1/N_L) Σ_{t=1}^{N_L} d(u_t, v_t)    (7.4.2)

The source encoder chooses the path corresponding to the trellis source decoder output v that minimizes d_{N_L}(u, v). Defining, for each index i ∈ {0, 1, 2, ..., L - 1}, the subsequences of length n

u_i = (u_{in+1}, u_{in+2}, ..., u_{in+n})    and    v_i = (v_{in+1}, v_{in+2}, ..., v_{in+n})    (7.4.3)

and branch distortion measures

d_n(u_i, v_i) = (1/n) Σ_{j=1}^{n} d(u_{in+j}, v_{in+j})    (7.4.4)

we can rewrite d_{N_L}(u, v) as

d_{N_L}(u, v) = (1/L) Σ_{i=0}^{L-1} d_n(u_i, v_i)    (7.4.5)



In this form it is clear that the source encoder selects a path in the trellis which 
consists of a sequence of L connected branches, where each branch adds an 
amount of distortion that is independent of the distortion values of other branches 
in the path. For a given source sequence u, the source encoder's search for the path





(Figure 7.10: the source encoder applies the Viterbi algorithm to search for the minimum distortion path in the trellis and sends x = (x_0, x_1, ..., x_{L-1}), x_i ∈ {0, 1}, over the noiseless channel; the trellis source decoder follows the path indicated by x and delivers v to the user.)

Figure 7.10 Trellis source coding system.




in the trellis that minimizes d NL (u, v) is equivalent to the channel decoding search 
problem where the Viterbi algorithm was used to find the path, or convolutional 
codeword, that minimizes the negative of the log-likelihood function. Hence the 
source encoder for trellis codes can be realized with the Viterbi algorithm. 
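A compact way to see this realization is to write it out. The Python sketch below is not from the text: the constraint length, the branch-output table (an arbitrary delayless map), the source block, and the Hamming distortion are all illustrative assumptions. It carries out an add-compare-select search over the 2^{K-1} states, in the spirit of the Viterbi algorithm described above, and returns the binary input sequence x that the source encoder would send over the noiseless channel.

```python
# Illustrative sketch: a Viterbi-algorithm source encoder for a binary trellis code.
def trellis_encode(u, branch_out, K, n, d):
    """Find the binary input sequence x whose decoder output path minimizes the
    distortion to the source sequence u (length L*n); returns (x, distortion)."""
    L = len(u) // n
    S = 1 << (K - 1)                                   # 2^(K-1) states
    INF = float("inf")
    cost = [0.0] + [INF] * (S - 1)                     # start in the all-zeros state
    paths = [[] for _ in range(S)]
    for i in range(L):
        block = u[i * n:(i + 1) * n]
        new_cost, new_paths = [INF] * S, [None] * S
        for s in range(S):
            if cost[s] == INF:
                continue
            for x in (0, 1):                           # two branches leave each node
                nxt = ((s << 1) | x) & (S - 1)         # shift-register state update
                v = branch_out(s, x)                   # n user symbols on this branch
                c = cost[s] + sum(d(a, b) for a, b in zip(block, v))
                if c < new_cost[nxt]:                  # keep the survivor into nxt
                    new_cost[nxt], new_paths[nxt] = c, paths[s] + [x]
        cost, paths = new_cost, new_paths
    best = min(range(S), key=lambda s: cost[s])        # tail/merging effects ignored
    return paths[best], cost[best] / len(u)            # distortion per source symbol

# Assumed example: K = 3, n = 2, binary user alphabet, Hamming distortion.
table = {(s, x): ((s ^ x) & 1, (s >> 1) ^ x) for s in range(4) for x in (0, 1)}
branch_out = lambda s, x: table[(s, x)]
u = (1, 0, 1, 1, 0, 0, 1, 0)
hamming = lambda a, b: 0.0 if a == b else 1.0
x, dist = trellis_encode(u, branch_out, 3, 2, hamming)
print(x, dist)
```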

For the given source and distortion measure, we have shown in Sec. 7.2 that 
the rate distortion function R(D) is given by (7.2.53). Regardless of what type of 
source coding system we consider, the converse source coding theorem (Theorem 
7.2.3) has shown that it is impossible to achieve average distortion of D or less 
with a system using rate less than R(D). This converse theorem applies to trellis 
source coding as well as to block source coding (Sec. 7.2). We have also shown 
that, in the limit of large block lengths, block source coding systems can achieve 
average distortion D with rate R(D) nats per source symbol, thus justifying R(D) 
as the rate distortion function. In this section, we will show that, in the limit of 
large constraint length K, trellis codes can also achieve the rate distortion limit.

We again appeal to an ensemble coding argument where we consider an ensemble of binary trellis source codes of constraint length K and bit rate r = 1/n. The ensemble and the corresponding distribution are so chosen that each branch of the trellis diagram has associated with it a user or representation sequence consisting of symbols with common probability distribution {P(v): v ∈ 𝒱} with independence among all symbols. Now for any given source sequence u and any given trellis code, we denote the minimum distortion path sequence as v(u). Thus by definition, we have the bound d_{N_L}(u, v(u)) ≤ d_{N_L}(u, v) for any other path sequence v belonging to the trellis code. We now choose v = v* as follows:

1. For a given trellis code and the given source sequence u, replace the representation sequence of the all-zeros state path by the sequence v_0 randomly selected according to the conditional probability

P(v_0|u) = Π_{t=1}^{N_L} P(v_{0t}|u_t)    (7.4.6)

This results in a new trellis code which differs from the original trellis code only 
in the branch values of the all-zeros state path. We call this modified trellis code 
a forbidden code, since in general we are not allowed to select parts of a trellis 
code after observing the source output sequence u. Note that the original code 
and the corresponding forbidden code differ only in the forbidden code path 
corresponding to the all-zeros state path. 

2. Given a source sequence u, for the above forbidden code, let v** be the mini 
mum distortion path sequence. That is, let v** correspond to the forbidden 
trellis code output sequence which represents u with minimum distortion. 

3. v** defines a path through the forbidden trellis diagram. Now choose v* as the 
corresponding path sequence in the original trellis diagram. Hence v** and v* 
are the same except for the subsequences on branches of the all-zeros state 
path. 

Note that v* is a trellis code sequence in the originally selected trellis code, and we 
introduced the forbidden trellis code only as a means of selecting this trellis code 




sequence. We never use the forbidden trellis code in the actual encoding of source sequences and require it only to derive the following bounds. Since v* is a path sequence in the trellis code, we have from the definition of v(u)

d_{N_L}(u, v(u)) ≤ d_{N_L}(u, v*)    (7.4.7)

We now derive a bound on \overline{d_{N_L}(u, v(u))}, where the overbar denotes an average over all source sequences and trellis codes in the ensemble. We do this by bounding


Lemma 7.4.1

\overline{d_{N_L}(u, v(u))} ≤ D(P) + (d_0/L) Σ_{j=0}^{L-1} Σ_{k=1}^{L-j-1} k P_{jk}    (7.4.8)

where

P_{jk} = Pr {v* merges with the all-zeros state path at node j and remains merged for exactly k branches}    (7.4.9)

and

D(P) = Σ_u Σ_v Q(u) P(v|u) d(u, v)    (7.4.10)



PROOF For a given source sequence u and for v* as selected above, let 𝒵 = {i: v_i* is a branch output vector of the all-zeros state path}. Then

L d_{N_L}(u, v*) = Σ_{i ∉ 𝒵} d_n(u_i, v_i*) + Σ_{i ∈ 𝒵} d_n(u_i, v_i*)    (7.4.11)

For i ∈ 𝒵 we use the bound d_n(u_i, v_i*) ≤ d_0, while for i ∉ 𝒵 we have d_n(u_i, v_i*) = d_n(u_i, v_i**). Hence

L d_{N_L}(u, v*) ≤ Σ_{i ∉ 𝒵} d_n(u_i, v_i**) + Σ_{i ∈ 𝒵} d_0
                 ≤ Σ_{i=0}^{L-1} d_n(u_i, v_{0i}) + Σ_{i ∈ 𝒵} d_0    (7.4.12)

where v_{0i} is the ith branch output vector of the all-zeros path of the forbidden code. This last inequality follows from the fact that, by the definition of v**, we have

d_{N_L}(u, v**) ≤ d_{N_L}(u, v_0)    (7.4.13)




in the forbidden trellis code, where v_0 is the all-zeros state output sequence. From (7.4.7) and (7.4.12), we obtain the inequality

d_{N_L}(u, v(u)) ≤ (1/L) Σ_{i=0}^{L-1} d_n(u_i, v_{0i}) + (d_0/L) Σ_{i ∈ 𝒵} 1    (7.4.14)

When we average (7.4.14) over all source sequences and over the trellis code ensemble, the first term becomes D(P). Using the definition of P_{jk} given in (7.4.9), we employ the union-of-events bound on the second term to get the desired result.

There remains only the evaluation of a tight bound for P_{jk}. This is computed over the ensemble of forbidden trellis codes which consist of the normal codes with the branch vectors of the all-zeros state path v_0 selected according to (7.4.6) for each source sequence u. Note that when v* merges with the all-zeros state path at node j and remains merged for exactly k branches, in the corresponding forbidden code v** also is merged with the all-zeros state for the same span. Hence P_{jk} is also the probability that, in the forbidden trellis codes, v** (the minimum distortion path) merges with the all-zeros state path at node j and remains merged for exactly k branches.

Let x** be the binary input sequence to the forbidden trellis decoder that yields the minimum distortion codeword v**. If v** merges with the all-zeros state for exactly k branches starting with the jth node, the binary sequence x** has the form

... a_1 a_2 ... a_{K-1} 1 0 0 ... 0 1 b_1 b_2 ... b_{K-1} ...    (7.4.15)

with the first "1" on the branch leaving node j - K, the merged span extending from node j through node j + k, and state b reached at node j + k + K. At node j - K, we take the forbidden trellis decoder to be in state a = (a_1, a_2, ..., a_{K-1}), and at node j + k + K to be in state b = (b_1, b_2, ..., b_{K-1}). The "1" immediately following node j - K is required, for otherwise merging could not start exactly at node j. Similarly, a "1" must follow node j + k, for otherwise the merged span would be longer than exactly k as assumed. The merged span is shown in Fig. 7.11. Note that states a and b are unrestricted, and either or both may possibly be the all-zeros state.

Now for the moment let us assume that states a and b are fixed. That is, the trellis path corresponding to the minimum-distortion forbidden trellis decoder output, v**, is assumed to have passed into state a at node j - K and state b at node j + k + K. Then we seek the probability that the subpath with decoder input sequence

a 1 0 ... 0 1 b    (7.4.16)

is the minimum distortion path (subsequence of x**) from state a to state b in the forbidden trellis code. Any other path from a to b has an input of the general form

a x_{j-K} x x_{j+k} b    (7.4.17)



(Figure 7.11: the path leaves state a at node j - K, merges with the all-zeros state path at node j, remains merged through node j + k, and reaches state b at node j + k + K.)

Figure 7.11 Merger with the all-zeros state path.

where x = (x_{j-K+1}, ..., x_{j+k-1}). Since the probability that path a 1 0 ... 0 1 b is the minimum distortion path among all paths of the general form a x_{j-K} x x_{j+k} b is upper-bounded by the probability that path a 1 0 ... 0 1 b is the minimum distortion path among all paths of the restricted form a 1 x 1 b, we now consider only paths of this restricted form. Let v(x) be the forbidden trellis decoder output for the (k + 2K) branches going from state a to state b corresponding to the input a 1 x 1 b. Then for random source subsequences of length n(k + 2K), denoted u, and for the ensemble of forbidden trellis codes, we seek to bound P_{jk} by first bounding the probability¹³

P_{jk}(a, b) = Pr {d(u, v(0)) < min_{x≠0} d(u, v(x)) | a, b}

By restricting our attention to subpaths from state a to state b of a forbidden code, we have formulated the problem as a block source coding problem. Our bound on P_{jk} will be developed in a way analogous to the block coding bound of Sec. 7.2.



Lemma 7.4.2 Over the trellis code ensemble just defined,

P_{jk}(a, b) ≤ 2^{(K-1)ρ} 2^{-k[E_0(ρ, P)/R - ρ]}    -1 < ρ < 0    (7.4.18)

where

E_0(ρ, P) = -ln Σ_u [Σ_v P(v) Q(u|v)^{1/(1+ρ)}]^{1+ρ}

and

R = r ln 2 = (ln 2)/n    (7.4.19)

13 To simplify the notation, in the following, when the subscript on the distortion d(·, ·) is missing, we assume that, as always, it is defined by the dimensions of the vectors involved.




PROOF We now require some notation to separate branch vectors of the all-zeros path from other branch vectors of the forbidden trellis. As was discussed above, we are concerned only with branch vectors associated with paths in the trellis of the form a 1 x 1 b. Hence our notation refers only to quantities associated with these paths.

u_c denotes the source subsequence over the central k branches
u_{jk} denotes the source subsequence over the first K and final K branches of the subtrellis under consideration
v_c(0) denotes the branch vectors of the all-zeros state path over the central k branches
𝒱_{jk} denotes the collection of all branch vectors over the central k branches not belonging to the all-zeros state path

If u is the subsequence of the source in going from state a to state b, then we have Q(u) = Q(u_{jk})Q(u_c), since all components of u are independent and identically distributed. The term v_c(0) also represents the only part of the all-zeros path of the forbidden trellis that is relevant; it is a random sequence selected according to P(v_c(0)|u_c) and is independent of 𝒱_{jk}. Vectors v_c(0) and 𝒱_{jk} comprise all the branch vectors corresponding to paths in the forbidden trellis code with binary subsequences of the form a 1 x 1 b. Hence all the quantities of interest have the joint probability distribution¹⁴

P(𝒱_{jk}, v_c(0), u_{jk}, u_c) = P(𝒱_{jk}) P(v_c(0)|u_c) Q(u_{jk}) Q(u_c)

Now we define the indicator function

Φ(u, v_c(0); 𝒱_{jk}) = 1 if d(u, v(0)) < min_{x≠0} d(u, v(x)), and Φ(u, v_c(0); 𝒱_{jk}) = 0 otherwise    (7.4.20)

Then

P_{jk}(a, b) = Pr {d(u, v(0)) < min_{x≠0} d(u, v(x)) | a, b}

            = Σ_{𝒱_{jk}} Σ_{u_{jk}} Σ_{u_c} Σ_{v_c(0)} P(𝒱_{jk}) Q(u_{jk}) Q(u_c) P(v_c(0)|u_c) Φ(u, v_c(0); 𝒱_{jk})

            ≤ Σ_{𝒱_{jk}} Σ_{u_{jk}} Σ_{u_c} P(𝒱_{jk}) Q(u_{jk}) Q(u_c) [Σ_{v_c(0)} P(v_c(0))^{ρ/(1+ρ)} P(v_c(0)|u_c)^{1/(1+ρ)}]^{1+ρ}
              × [Σ_{v_c(0)} P(v_c(0)) Φ(u, v_c(0); 𝒱_{jk})]^{-ρ}    (7.4.21)

14 Note that u_{jk} and 𝒱_{jk} are independent since they refer to disjoint segments.






where the Hölder inequality is used with -1 < ρ < 0. Next the Jensen inequality yields the further bound

P_{jk}(a, b) ≤ Σ_{u_{jk}} Q(u_{jk}) Σ_{u_c} Q(u_c) [Σ_{v_c(0)} P(v_c(0))^{ρ/(1+ρ)} P(v_c(0)|u_c)^{1/(1+ρ)}]^{1+ρ}
              × [Σ_{𝒱_{jk}} Σ_{v_c(0)} P(𝒱_{jk}) P(v_c(0)) Φ(u, v_c(0); 𝒱_{jk})]^{-ρ}    (7.4.22)

In the last bracketed term, we note that there is now complete symmetry for all paths involved since the section of the all-zeros state path v_c(0) has the probability P(v_c(0)) induced by definition (7.4.6), which is the same as for all branch vectors in 𝒱_{jk}; thus, since all of the 2^{k+K-1} paths¹⁵ of the form a 1 x 1 b have the same statistical properties, we have

Σ_{𝒱_{jk}} Σ_{v_c(0)} P(𝒱_{jk}) P(v_c(0)) Φ(u, v_c(0); 𝒱_{jk}) ≤ 2^{-(k+K-1)}    (7.4.23)

independent of u. Hence, using Bayes' rule componentwise,

P_{jk}(a, b) ≤ 2^{(k+K-1)ρ} Σ_{u_c} [Σ_{v_c(0)} P(v_c(0)) Q(u_c|v_c(0))^{1/(1+ρ)}]^{1+ρ}
            = 2^{(K-1)ρ} 2^{-k[E_0(ρ, P)/R - ρ]}    (7.4.24)

Here we use the fact that all components of all vectors are statistically independent of each other. Since the bound on P_{jk}(a, b) is independent of states a and b, we obtain the desired result.

Using this bound on P_jk, we now obtain from (7.4.8)

d̄ ≤ D(P) + d_0 2^{(K-1)ρ} / [ 1 - 2^{-[E_0(ρ, P)/R - ρ]} ]^2              (7.4.25)

provided E_0(ρ, P)/R - ρ > 0. Recall that E_0(ρ, P) is the Gallager function whose
properties are given in Lemma 7.2.2. From (7.2.58), we have

E_0(ρ, P) = ρ I(P) + ∫_0^ρ ∫_0^β [∂²E_0(α, P)/∂α²] dα dβ

15 This corresponds to all paths over the k central branches of the subtrellis starting in any one of
the 2^{K-1} possible states.




where

E_0''(ρ, P) ≡ ∂²E_0(ρ, P)/∂ρ²

For -1/2 ≤ ρ ≤ 0, we have the bound (see Prob. 7.3) 16

E_0''(ρ, P) ≥ -C = -(2 + 16[ln A]²)                                       (7.4.26)

which yields

E_0(ρ, P) ≥ ρ I(P) - (ρ²/2) C                                             (7.4.27)

This inequality is then used to bound

E_0(ρ, P)/R - ρ ≥ ρ[I(P) - R]/R - ρ²C/(2R)                                (7.4.28)

Next we choose

-ρ = ε(R; P) ≡ [R - I(P)]/C                                               (7.4.29)

where ε(R; P) > 0 for R > I(P), so that the lower bound in (7.4.28) becomes

E_0(ρ, P)/R - ρ ≥ [R - I(P)]² / (2RC)                                     (7.4.30)

Recall from (7.2.53) that the rate distortion function is

R(D) = min_{P ∈ 𝒫_D} I(P)

Substituting this choice of ρ into (7.4.25), we have

d̄(u, v(u)) ≤ D(P) + d_0 2^{-(K-1)[R - I(P)]/C} / [ 1 - 2^{-[R - I(P)]²/2RC} ]^2     (7.4.31)



16 Actually any bounded number larger than 2 + 16[ln A]² will suffice. By choosing C large enough
we can always choose ρ in (7.4.29) such that ρ > -1/2.




where

𝒫_D = { P(v|u) : Σ_u Σ_v Q(u)P(v|u)d(u, v) ≤ D }

For P* ∈ 𝒫_D that achieves R(D) = I(P*), we define

E_c(R, D) = [R - R(D)]/C

and we have the source coding theorem.

Theorem 7.4.1: Trellis source coding theorem Given distortion D, for any
constraint length K and rate R = (ln 2)/n > R(D) for any integer n, there
exists a binary trellis code T_K with average distortion d(T_K) satisfying

d(T_K) ≤ D + d_0 2^{-(K-1)E_c(R,D)} / [ 1 - 2^{-[R - R(D)]²/2RC} ]^2       (7.4.32)

where

E_c(R, D) = [R - R(D)]/C > 0                                              (7.4.33)



PROOF The only additional observation we make from (7.4.31) is that at least 
one code has average distortion less than or equal to the ensemble average 
distortion. 

This theorem shows that in the limit of large constraint length K we can 
achieve the rate distortion limit (D, R(D)) with the trellis source coding system 
shown in Fig. 7.10. Furthermore, it gives a bound on the distortion achievable 
with finite constraint length. 

Up to this point we have considered trellis decoders with only binary inputs, 
which corresponds to a trellis diagram where only two branches leave each node. 
We can easily generalize to the case where the decoder has one of q inputs so that 
the corresponding trellis diagram has q branches leaving each node. Over the 
noiseless channel, the encoder sends q-ary symbols for each n source symbols so 
that, for these codes, the rate is 

r = (log q)/n   bits/source symbol

or

R = (ln q)/n   nats/source symbol                                        (7.4.34)

There are still n representation symbols for each branch. The proof is essentially 
the same as for the binary case where q = 2, but it requires conditioning on states 




a, b and on the two nonzero symbols that follow a and precede b. For arbitrary 
integer q, (7.4.8), (7.4.9), and (7.4.10) are the same, but now P jk is bounded by 

P_jk ≤ q^{(K-1)ρ} q^{-k[E_0(ρ, P)/R - ρ]}                                  (7.4.35)

Hence, for this more general case, we have the same source coding theorem with
(7.4.32) replaced by

d(T_K) ≤ D + d_0 q^{-(K-1)E_c(R,D)} / [ 1 - q^{-[R - R(D)]²/2RC} ]^2       (7.4.36)

where, as before, E_c(R, D) = [R - R(D)]/C.



To examine the rate of convergence to the rate distortion limit (D, R(D)) as 
constraint length K increases, we merely substitute E C (R, D) into the bound 
(7.4.36) and rewrite this as 



d(T_K) ≤ D + d_0 q^{-(K-1)[R - R(D)]/C} / [ 1 - q^{-[R - R(D)]²/2RC} ]^2

       = D + d_0 q^{[R - R(D)]/C} e^{-N_t R[R - R(D)]/C} / [ 1 - q^{-[R - R(D)]²/2RC} ]^2     (7.4.37)

where N t = nK = (K/R) In q is the equivalent block length. Comparing this with 
the convergence of block source coding given by Theorem 7.2.4, we see that this 
bound on distortion has an exponent proportional to R[R - R(D)], whereas with
block codes the exponent is proportional to [R - R(D)]²/2. We observed similar
superiority for convolutional codes over block codes in channel coding. 

In this section we described and analyzed trellis source decoders and the 
optimum trellis source encoder implemented by the Viterbi algorithm. As with 
channel coding, the computational complexity of the optimum source encoder 
grows exponentially with constraint length. It is natural to consider suboptimum 
path search algorithms such as the sequential decoding algorithms for channel 
convolutional codes. These algorithms can reduce the computation required per 
source symbol. 
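As a concrete illustration of the minimum-distortion path search just described, the following Python sketch runs the Viterbi algorithm over a binary shift-register trellis source code. The function name trellis_source_encode and the branch_codebook table are our own illustrative constructs (not from the text), and squared error is assumed as the per-letter distortion measure.

import numpy as np

def trellis_source_encode(u, K, branch_codebook):
    """Viterbi search for the minimum-distortion path through a binary trellis
    source code (a sketch of the encoder of Sec. 7.4), squared-error distortion.

    u               : source sequence, shape (L, n) -- L branches of n letters each
    K               : constraint length, giving 2**(K-1) states
    branch_codebook : shape (2**(K-1), 2, n); branch_codebook[s, x] is the
                      representation vector on the branch taken when input bit x
                      is shifted into state s (a hypothetical table standing in
                      for the trellis decoder's branch vectors)
    Returns the binary data sequence and the per-letter distortion achieved.
    """
    L, n = u.shape
    S = 2 ** (K - 1)
    metric = np.full(S, np.inf)
    metric[0] = 0.0                          # start in the all-zeros state
    back = np.zeros((L, S, 2), dtype=int)    # survivor: (previous state, input bit)

    for l in range(L):
        new_metric = np.full(S, np.inf)
        for s in range(S):
            if not np.isfinite(metric[s]):
                continue
            for x in (0, 1):
                s_next = ((s << 1) | x) & (S - 1)      # shift-register update
                m = metric[s] + np.sum((u[l] - branch_codebook[s, x]) ** 2)
                if m < new_metric[s_next]:
                    new_metric[s_next] = m
                    back[l, s_next] = (s, x)
        metric = new_metric

    s = int(np.argmin(metric))               # best terminal state; trace back
    bits = np.zeros(L, dtype=int)
    for l in range(L - 1, -1, -1):
        s, bits[l] = back[l, s]
    return bits, float(metric.min()) / (L * n)

# Example: K = 3, n = 2, branch vectors drawn i.i.d. from a unit Gaussian P(v),
# as in the random coding argument of this section.
rng = np.random.default_rng(1)
K, n, L = 3, 2, 50
codebook = rng.normal(size=(2 ** (K - 1), 2, n))
source = rng.normal(size=(L, n))
bits, dist = trellis_source_encode(source, K, codebook)
print(dist)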

For the most part, sequential algorithms for source encoding are best 
analyzed in terms of tree codes which are trellis codes with infinite constraint 
length, K = oo. This results in a trellis diagram that never has merging nodes but 
continues to branch out forever with independent random representation vectors 
on all branches. For a tree source decoder, the optimum source encoder finds the 
path in the tree that can represent a source sequence with minimum distortion. 
Jelinek [1969], by using branching theory arguments (see Probs. 7.10 and 7.11), 
was the first to show that tree codes can achieve the rate distortion limit (D, R(D)). 
We obtain the same result by letting K = oo in (7.4.37). 




Note finally that the source trellis or tree encoder need not necessarily find the 
unique path that represents the given source sequence with minimum distortion. 
There may, in fact, be many paths that can represent a source sequence within a 
desired fidelity criterion D and so it is natural to consider various sequential 
search algorithms which choose the first subpath that meets a fidelity criterion. 
Anderson and Jelinek [1973] and Gallager [1974] have proposed and analyzed 
such algorithms and have shown convergence to the rate distortion limit for 
various sources. Sequential algorithms of this type, although suboptimum, yield 
less complex trellis or tree source encoders and still achieve the rate distortion 
limit. 



7.5 CONTINUOUS AMPLITUDE MEMORYLESS SOURCES 

Many sources, such as sampled speech, can be modeled as discrete-time sources 
with source outputs which are real numbers; that is, a source with alphabet 
3$ = (-co, oo ). We now consider such discrete-time continuous-amplitude mem- 
oryless sources where the source and representation alphabets are the real num 
bers, the source output at time n is a random variable u n with probability density 
function {Q(u)\ co<u< oo}, 17 and we have a possibly unbounded nonnegative 
distortion measure d(u, v) for each w, v e $. All outputs of the source are indepen 
dent and identically distributed. The distortion between sequences u and v of 
length N is again defined as d N (u, v) = (l/N) * =1 d(u n , i> n ). Besides the fact that 
the source outputs are now continuous real random variables, the main difference 
from the previous sections is the fact that the single-letter distortion measure can 
be unbounded. This is to allow many common distortions such as the magnitude 
error, d(u, v) = \ u v \ , and the squared error, d(u, v) = (u v) 2 , distortion 
measures. To overcome the fact that we no longer have a bounded distortion 
measure, we require instead the condition 

∫_{-∞}^{∞} Q(u) d²(u, 0) du ≤ d_0²                                         (7.5.1)

for some finite number d . This is the condition that the random variable d(u, 0) 
has bounded mean 18 and bounded variance which is satisfied in most cases 
of interest. Throughout the following we will assume this condition for continuous 
amplitude sources and distortion measures. 
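As a quick numerical illustration of condition (7.5.1) (our own example, not from the text): for a zero-mean Gaussian source with squared-error distortion, d(u, 0) = u², so the left side of (7.5.1) is E[u⁴] = 3σ⁴, which is finite. A minimal Python check, assuming NumPy:

import numpy as np

# Monte Carlo check of (7.5.1) for a Gaussian source and squared-error distortion:
# d(u, 0) = u**2, so the integral in (7.5.1) is E[u**4] = 3*sigma**4 (finite).
rng = np.random.default_rng(0)
sigma = 1.5
u = rng.normal(0.0, sigma, size=1_000_000)
print(np.mean(u ** 4), 3 * sigma ** 4)   # both approximately 15.2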



17 We shall denote all probability density functions associated with source coding with capital 
letters. 

18 Hölder's inequality (App. 3A) applied to (7.5.1), the bounded variance condition, implies a
bounded mean; that is,

∫_{-∞}^{∞} Q(u) d(u, 0) du ≤ d_0




7.5.1 Block Coding Theorems for Continuous- Amplitude Sources 

Again referring to Fig. 7.3 for the basic block source coding system, let 
& = ( v i> V 2> > V M} be a set of M representation sequences of N user symbols 
each, which we call a block code of length N and rate R = (In M)/N nats per 
source symbol. For this code the average distortion is now 

d̄(ℬ) = ∫_{-∞}^{∞} ⋯ ∫_{-∞}^{∞} Q_N(u) d(u|ℬ) du                           (7.5.2)

where

d(u|ℬ) = min_{v ∈ ℬ} d_N(u, v)

and

Q_N(u) = Π_{n=1}^{N} Q(u_n)

In proving coding theorems for block codes we essentially follow our earlier 
proofs for the discrete memoryless source presented in Sec. 7.2, the main differ 
ence being that integrals of probability density functions replace summations of 
probabilities. As before, we use an ensemble average coding argument by first 
introducing the conditional probability density function 

P_N(v|u) = Π_{n=1}^{N} P(v_n | u_n)                                        (7.5.3)

and the corresponding marginal probability density on ℛ^N

P_N(v) = ∫_{-∞}^{∞} ⋯ ∫_{-∞}^{∞} Q_N(u) P_N(v|u) du = Π_{n=1}^{N} P(v_n)   (7.5.4)

where

P(v) = ∫_{-∞}^{∞} Q(u) P(v|u) du                                           (7.5.5)

Proceeding exactly as in Sec. 7.2 [Eqs. (7.2.8) through (7.2.11)], but with summa 
tions replaced by integrals, we obtain 



- oo - oo 



= - 

- - oo 

" < 7 - 5 - 6 ) 



(7210) 



oo oo 

where 




Now, defining (as in 7.2.15)

D(P) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} Q(u) P(v|u) d(u, v) du dv                     (7.5.7)

we see that, since 1 - Φ ≤ 1, the first integral is easily bounded to yield

d̄(ℬ) ≤ D(P) + ∫ ⋯ ∫ Q_N(u) P_N(v|u) d(u|ℬ) Φ(u, v; ℬ) du dv              (7.5.8)



To bound the second term we can no longer appeal to the argument that 
d(u\J#) is bounded by d Q . Instead we use a simple form of the Holder inequality 
(App. 3A) 



oo oo 



1/2 



r oo oo 

x - & 

L - oo - oc 



1 2 



(7.5.9) 



where we noted that O 2 = <I>. Next we assume that v t e M = {vj, v 2 , . . . , \ M } is the 
all-zeros vector; that is, Vj = 0. Then d(u\J#) < d N (u, vj = d N (n, 0) and 



.00 .00 



P. v (v | u)(d(u 



du 



oo oo 



= | -j e>Mu 
/"/" e(4K o 2 du 



oo oo 



c/U 



(7.5.10) 
where the last inequality follows from (7.5.1). Hence when Vj = e ^, we have 



< D(P) + d 



Q N (u)P N (M | u)0(u, v; $} du 



oo oo 



1/2 



(7.5.11) 



We now proceed to bound the ensemble average of d($\ We consider an 
ensemble of codes in which v t = is fixed and the ensemble weighting for the 
remaining M 1 codewords is according to a product measure corresponding to 
independent identically distributed components having common probability den- 






sity (P(v): -oo < v < oo}. Now for any code ^ = {v l9 v 2 , ..., V M }, define 
J> = |v 2 , v 3 , . . . , V M } which is the code without codeword v t = 0. Then clearly 

d(u | #)<</(u | J) (7.5.12) 

and 

O(u, v; #) < O(u, v; J) (7.5.13) 

Hence for any code ^, we have from (7.5.11) and (7.5.13) 

1/2 



< D(P) + 



00 .00 



; J) 



oo co 



(7.5.14) 



Now averaging this over the code ensemble and using the Jensen inequality yields 
:D(P) + a 



r 

Q N (u)P N (\ u)<I>(u, v; &) du d\ 

J -oo 



1/2 



v; 



-co oo 



1/2 



(7.5.15) 



The term inside the bracket can now be bounded by following the proof of Lemma 
7.2.1 [(7.2.20) through (7.2.22)], replacing summations of probabilities with inte 
grals of probability densities. This yields the bound 



00 . CO 



SP N (v|u)<D(u,v; 



t -NE(R;p,P) 



(7.5.16) 



- oo oo 



where 

E(R;p 9 P)= - 



,(p,P)=-lnj 



oo I oo 



du 



-1 <p<0 

(7.5.17) 



The properties of E (p, P) are the same as those given in Lemma 7.2.2 where now 
/(P) is 



i 

/(P) = | | Q(u)P(v | u) In 

* - oo * oo 



the average mutual information. Then it follows from Lemma 7.2.3 that 

max E(R;p, P) > for K > 7(P) 

- i<p<o 

Combining these extensions of earlier results into (7.5.15) yields 



(7.5.18) 

(7.5.19) 
(7.5.20) 




where 



max E(R\p, P) > for R > /(P) 



At this point we are still free to choose the conditional probability density 
{P(v | u): u, v e J>} to minimize the bound on d(M\ Suppose the fidelity criterion 
to be satisfied by the block source coding system is specified as D(P) < D. Then let 

𝒫_D = { P(v|u) : D(P) ≤ D }                                               (7.5.21)

and define 

E(R,D)= sup max E(R; p, P) (7.5.22) 

- l<p<0 



and the rate distortion function 

R(D) = inf /(P) (7.5.23) 

Pe^o 

Applying these to (7.5.20) yields the coding theorem for continuous-amplitude 
memoryless sources. 

Theorem 7.5.1: Source coding theorem For any block length N and rate R 
there exists a block code with average distortion d(J#) satisfying 



) (7.5.24) 

where 

E(R, D) > for R > R(D) 

PROOF See the proof of Theorem 7.2.1. 

We defined R(D) in (7.5.23) as the rate distortion function for a continuous- 
amplitude memoryless source, where the unbounded single-letter distortion meas 
ure satisfies a bounded variance condition. To justify this definition we need to 
prove a converse theorem. This is easily done using the same basic proof given 
earlier for the discrete memoryless sources (see Theorem 7.2.3). 

Theorem 7.5.2: Converse source coding theorem For any source encoder- 
decoder pair, it is impossible to achieve average distortion less than or equal 
to D whenever the rate R satisfies R < R(D), 

The proof of the direct coding theorem given in this section certainly applies 
as well for discrete memoryless sources with unbounded distortion measure as long 
as the bounded variance condition is satisfied. Similarly for discrete sources with a 
countably infinite alphabet we can establish coding theorems similar to Theorem 
7.5.1. An example of such a source is one which emits a Poisson random variable 
and which has a magnitude distortion measure (see Probs. 7.25, 7.26, and 7.27). 




Here we have shown that coding theorems can be obtained for continuous 
amplitude sources using proofs that are essentially the same as those for discrete 
memoryless sources with bounded single-letter distortion measures. All of the 
earlier discussion concerning the relationship between channel and source coding 
also applies. In fact, the trellis coding theorem can also be extended in this way, as 
will be shown next. 



7.5.2 Trellis Coding Theorems for Continuous- Amplitude Sources 

We extend the results of Sec. 7.4 to the case of continuous-amplitude memoryless 
sources with unbounded distortion measures that satisfy the bounded variance 
condition of (7.5.1). The basic trellis source coding system is again presented in 
Fig. 7.10 with the only difference here being that the source and representation 
alphabet is the real line, ^ = ( 00, oo), and the distortion measure is possibly 
unbounded. 

Following the same discussion which led to (7.4.7), we have 

d_NL(u, v(u)) ≤ d_NL(u, v*)                                               (7.5.25)

where v* is a trellis decoder output sequence that is selected by finding v**, the 
minimum-distortion path sequence of the corresponding forbidden trellis code. 
Then defining % = {i: \f is a branch output vector of the all-zeros state path} we 
have again (7.4.11) 



/ nv i i / / n\ i i / v / 

For i 3?, recall that d n (u, \*) = d n (u, vf*) and so 

its i e S 

L- 1 

i - i e % 

L- 1 

or 

4v>, v*) < ^ L (u, v ) + - X d n (u f , vf) (7.5.27) 

since for the forbidden trellis code d NL (u, v**) < d NL (u, v ), where v is the all-zeros 
state output sequence of the forbidden trellis code. 

Thus, for any trellis code, output sequence u, and the corresponding forbidden 
trellis code, we have from (7.5.25) and (7.5.27) the bound 

4v>, v(u)) < <Uu, v ) + ) X d n (u i9 vf ) (7.5.28) 






The only difference between this construction and that used in Sec. 7.4 is that we 
now consider only trellis codes where all branch outputs of the zero state are zero. 
This does not change v of the forbidden trellis code but does imply that 
^(u., vf) = d(u t , 0) for all i e #. Hence 



v(u)) < d NL (u, Y O ) + d m (u t , 0) 



Let us define the indicator function 



Then 



v(u)) 



(u, v ) + 



(7.5.29) 



(7.530) 



(7.5.31) 



Now averaging over all source sequences u and v of the forbidden code we have 
for the given trellis code 



where 



j X f f " fiM.^ 

^i = J -oo J -oo 



D(P) = I J Q(w)P(r | u)d(u, v) dv 



(7.5.32) 



du 



oo oo 



The second term can be further bounded using the Holder inequality and the 
bounded variance condition of (7.5.1). 



"R(u,, WAX) du rfv 



" 



1/2 



oo oo 






oo QO 

" 



" 

-o 

Q NL (u)(d n (u,0)) 2 du 

r 
-I 

, 

fi l ,Pj l 

- 



O I WAX) du 

1/2 



oo oo 



oo - oo 



(7.5.33) 






Thus combining (7.5.32) and (7.5.33) we have the bound 






< D(P) 



J 



i L-l oo 

i&C f- 



1/2 



1/2 




(7.5.34) 

We now consider an ensemble of trellis codes that have zero branch vectors 
on the all-zeros state path, and on all nonzero state branch vectors have indepen 
dent identically distributed random variables with common probability density 
(P(v), ao<v< oo}. Proceeding as in the proof of Lemmas 7.4.1 and 7.4.2, we 
obtain 

L-l L-j-l J, ]l/2 

Z Z jPA (7.5-35) 

J=Q k =i L J 

where P jk is defined in (7.4.9) and bounded by 

p 4? 9(K~ 1)P9 -k[E (p,P)/R-p] /7 c lf.\ 

rj k ^ Z ^/.J.ju; 

where in this case 



. r oo ]I + P 

(p,P)=-lnj P(v)Q(u\v) l/(i+()) dv\ du 

-oo [ - oo J 



Since P jk depends on the forbidden trellis code which is the same as in Sec. 7.4, the 
bound (7.5.36) follows from the proof of Lemma 7.4.2 when sums of probabilities 
are replaced by integrals of probability densities. Thus we have finally, as in 
(7.4.25), 



d NL (u, v(u)) < D(P) + _ [o(p , P)/ ,_ p]2 (7.5.37) 



and have established the trellis source coding theorem for continuous-amplitude 
memoryless sources. 

Theorem 7.5.3: Trellis source coding theorem Given any fidelity D, constraint 
length K and rate R = (In q)/n > R(D) for some q and n, there exists a trellis 
code T_K with average distortion d(T_K) satisfying

d(T_K) ≤ D + d_0 q^{-(K-1)[R - R(D)]/2C} / [ 1 - q^{-[R - R(D)]²/2RC} ]^2   (7.5.38)



PROOF Having established the bound (7.5.37), the proof follows the proof of 
Theorem 7.4.1. 




7.6 EVALUATION OF R(D) DISCRETE MEMORYLESS 
SOURCES* 

For discrete memoryless sources the rate distortion function R(D) is given in 
(7.2.53). This definition is analogous to that of channel capacity in channel coding 
theory where the channel capacity is the maximum rate below which the random 
coding error exponent E(R) is positive. In the source coding theorem, R(D) is the 
minimum rate above which the exponent E(R, D) is positive. Analogous converse 
theorems also exist. Thus both channel capacity and rate distortion functions are 
defined in terms of extreme values of average mutual information over some 
constrained space of probability distributions; hence, it is not surprising to find 
that techniques for evaluating rate distortion functions are similar to those for 
finding channel capacity. In fact, it is not surprising as a result that, while channel 
capacity was shown to be the maximum average mutual information, R(D) 
appears as a minimum of average mutual information subject to the distortion 
measure constraint. In App. 3C, we presented a simple computational algorithm 
for channel capacity. A similar algorithm can be used to find R(D) and this is given 
in App. 7A. 

We now examine ways of finding the rate distortion function for various 
sources and distortion measures. First we examine some properties of R(D). Note 
that in general (see Prob. 1.7) 

I(P) = Σ_u Σ_v Q(u)P(v|u) ln [ P(v|u)/P(v) ]

     ≤ ln A                                                               (7.6.1)

where A is the alphabet size of the discrete memoryless source, and

I(P) ≥ 0

Hence we have the bound

0 ≤ R(D) ≤ H(𝒰) ≤ ln A                                                    (7.6.2)

Let us next examine the range of values of D for which R(D) exists. Recall from 
Sec. 7.2 that 

𝒫_D = { P(v|u) : D(P) = Σ_u Σ_v Q(u)P(v|u)d(u, v) ≤ D }

is a nonempty closed convex set for D ≥ D_min, where

D_min = Σ_u Q(u) min_{v ∈ 𝒱} d(u, v)                                       (7.6.3)

* May be omitted without loss of continuity. 




is the minimum possible average distortion. For D < D min , R(D) is not defined. 
Since 7(P) is a continuous, real-valued function of P, it must assume a minimum 
value in a nonempty, closed set and therefore, R(D) exists for all D > D m - tn . 

Let D_max be the least value of D for which R(D) = 0. This is equivalent to
finding the conditional probability {P(v|u)} satisfying

Σ_u Σ_v Q(u)P(v|u)d(u, v) ≤ D_max                                         (7.6.4)

and for which I(P) = 0. But I(P) = 0 if and only if 𝒰 and 𝒱 are independent (see
Lemma 1.2.1 given in Chap. 1); that is,

P(v|u) = P(v)   for all u ∈ 𝒰, v ∈ 𝒱                                      (7.6.5)

Hence

D(P) = Σ_v P(v) Σ_u Q(u)d(u, v)                                           (7.6.6)

which we can minimize over {P(v)} to obtain

D_max = min_v Σ_u Q(u)d(u, v)                                             (7.6.7)

where the minimizing {P(v)} is zero everywhere but at the value of v which minimizes
Σ_u Q(u)d(u, v) [see (7.6.10)]. From this we see that R(D) is positive for D_min ≤ D <
D_max. R(D) is clearly a nonincreasing function of D, since D_1 < D_2 implies 𝒫_{D_1} ⊂
𝒫_{D_2}, which in turn implies R(D_1) ≥ R(D_2). Otherwise, the most important
property of R(D) is its convexity, which we state in the following lemma.

Lemma 7.6.1 For D_min ≤ D_1 ≤ D_max, D_min ≤ D_2 ≤ D_max, and any 0 ≤ θ ≤ 1,

R(θD_1 + (1 - θ)D_2) ≤ θR(D_1) + (1 - θ)R(D_2)                            (7.6.8)

PROOF Let P_1 ∈ 𝒫_{D_1} and P_2 ∈ 𝒫_{D_2} be such that R(D_1) = I(P_1) and R(D_2) =
I(P_2). Then, since θP_1 + (1 - θ)P_2 ∈ 𝒫_{θD_1+(1-θ)D_2}, using the convexity of I(·),
we have

R(θD_1 + (1 - θ)D_2) = min_{P ∈ 𝒫_{θD_1+(1-θ)D_2}} I(P)
                     ≤ I(θP_1 + (1 - θ)P_2)
                     ≤ θI(P_1) + (1 - θ)I(P_2)

Thus R(D) is a convex ∪, continuous, strictly decreasing function of D for
D_min ≤ D ≤ D_max. The strictly decreasing property of R(D) further implies that if
P ∈ 𝒫_D yields R(D) = I(P), then D(P) = D; that is, the minimizing conditional
probability that yields R(D) satisfies the constraint with equality and therefore lies
on the boundary of 𝒫_D. Figure 7.12 shows a typical rate distortion function.







Figure 7.12 A typical rate distortion function.



Next we have from (7.6.2) 



(7.6.9) 



The only conditional probabilities {P min (v\u)} that yield R(D min ) = 7(P min ) are 



where r(w) satisfies 



Here 



! -B 



(u, v(u)) = min d(u, v) 



where 



R(D mln ) = /(P min ) 

= -IC()lnP min (tKi)) 

u 

/WO = I CWminH") 



(7.6.11) 



From this it is clear that, for the condition v(u) =f= v(u ) for u u, we have 
F min (r(u)) = Q(u) and K(D min ) = H(3t). This condition is typical for most cases of 
interest when the number of letters in ^, B, is greater than or equal to the number 
of letters in 31, A. 

We now find necessary and sufficient conditions for the conditional proba 
bility distribution P 6 & D that achieves R(D) = /(P). We seek to minimize 



(7.6.12) 






with respect to the AB variables {P(v \ u) : v e 1^, u e %} subject to the constraints 



P(v 



w) > for all u e W, v e V (7.6.13) 

u)= 1 for all u eW (7.6.14) 



Zl i Q(u)P(v\u)d(u,v) = D , (7.6.15) 

U V 

Without the inequality constraints (7.6.13) this would be a straightforward 
Lagrange multiplier minimization problem. We proceed initially as if this were the 
case and let (OL(U) : u e <%} and s be Lagrange multipliers for the equality con 
straints (7.6.14) and (7.6.15), respectively, and consider the minimization of 

J(P; a, s) = /(P) - X a() P(v\ u) - s Q(u)P(v u)d(u, v) (7.6.16) 

U V U V 

but keeping in mind ultimately the requirement of the inequality constraints 
(7.6.13). We find it convenient to define 

j i ( u ) = e*M/QM for all w e^ (7.6.17) 

so that (7.6.16) can be written as 

J(P; X, s) = 1 1 Q(u)P(v\ U ) In , (7.6.18) 



We now assume that X and s are fixed (later we choose them to satisfy the equality 
constraints) and we find conditions for the minimization of J(P; X, s) with respect 
to the AB variables (P(I;|M)} subject only to the inequality constraints (7.6.13). 

Since 7(P) and thus J(P; X, s) are convex u functions of {P(I;|M)}, a local 
minimization of J(P; X, s) is an absolute minimization. We use this in proving the 
following theorem. 

Theorem 7.6.1 A necessary and sufficient condition for the AB variables 
{P*(v | u) : v e i^, u e %} to minimize J(P; X, s) of (7.6.18), subject only to the 
inequality constraint (7.6.13), is that they satisfy 

P*(v\u) = l(u)P*(v)e sd(u < v) if P*(v) > (7.6.19) 

and 



) < 1 ifP*(u) = 

u 

where 

p *(v) = L P*(v I ")fi(") for a11 > 6 f 

u 

PROOF (Sufficiency) Let {P*(i>|w)} satisfy conditions (7.6.19). For any c > 0, 
taking a variation cq(v \ u) about P*(r u) such that 

P*(r u) + crj(v | M) > for all u e #, r e y " (7.6.20) 




and defining 



) = Z fi("M l< | ") for all u 



we have 



AJ() = J(P* + 01; *, s) - J(P*; X, 5) 



P*(i(P*(r) 



The first term in (7.6.21) is 19 

Z Z Q()P*(r ") In 



while the second term is 



(P*(v 



u r:P*(r)>0 

<X I 

u r:P*(r) = 



u r:P*(r) = 



since for P*(r) > we have 



(7.6.21) 



(7.6.22) 



i 
P*(v\u) 



0(c 



+ O(f 2 ) (7.6.23) 



= 1 



by (7.6.19), which is true by hypothesis. Hence 



u r:P*(t-) = 



0(c 2 ) (7.6.24) 



The term 0(.v) is proportional to .x. 




Now using the inequality In x > 1 - (1/x) [see (1.1.6)], we have 



Q(u)n(v\u] 



u v:P*(v) = 



, V) 



rj(v\u] 



+ 



= I 

v.P*(v) = 



since by hypothesis for P*(v) = 

Q(u)l(u)e* u v) < 1 and 

u 

Hence 



(7.6.25) 



> 



lim 



lim 



>o 



(7.6.26) 

which assumes a local minimum at P*. By convexity of J(P; X,, s), this must be 
an absolute minimum. 

(Necessity) Let P* minimize J(P; X, s) subject to the inequality con 
straint (7.6.13). From above we have for any c > and numbers {P*(i;|w)} 
such that P*(v | u) + crj(v \ u) > 0, for all u e <%, v e i^ 



ZT 1 
z < 

u v:P*(v)>0 



n(v\u] 



O(c 2 ) (7.6.27) 



u v:P*(v) = 

First let us choose r\(v \ u) = for all v where P*(r) = 0. Then 



t>:P*(t>)>0 



0(c 2 ) (7.6.28) 



where rj(v\u) can be any set of positive or negative numbers as long as 
P*(v | u) + crj(v | u) > 0. Hence, for AJ(e) > for arbitrarily small > 0, we 
require 



P*(t; | u) = P*(r)A 

Suppose next in (7.6.27) we choose 

O 







(7.6.29) 




Then 



= - I >f(i-) In fi( Mi )*" 101 - * + O(e 2 ) (7.6.30) 

r:P*(r) = [u 

Since rj(v) > when P*(i;) = 0, in order for AJ(e) > for all e > 0, we require 

*> < 1 if P*( v ) = 



To find necessary and sufficient conditions for P* e ^ D that yield R(D), in 
addition to (7.6.19) we need only choose Lagrange multipliers X, and s to satisfy 
the equality constraints (7.6.14) and (7.6.15). Hence from (7.6.14) and (7.6.19) it 

follows that λ is given by

λ(u) = [ Σ_v P(v)e^{sd(u,v)} ]^{-1}   for all u ∈ 𝒰                        (7.6.31)

It is more convenient to keep s as a free parameter and express the distortion 
D = D s and rate distortion function R(D) = R(D S ) in terms of s. 

Theorem 7.6.2 Necessary and sufficient conditions for a conditional probability
{P(v|u)} to yield the rate distortion function R(D) at distortion D are that
the conditions of Theorem 7.6.1 be satisfied, where the Lagrange multipliers λ
satisfy (7.6.31) and s satisfies the parametric equations

D_s = Σ_u Σ_v λ(u)Q(u)P(v)e^{sd(u,v)} d(u, v)                              (7.6.32)

and

R(D_s) = sD_s + Σ_u Q(u) ln λ(u)                                          (7.6.33)

PROOF We need only use P(v|u) = λ(u)P(v)e^{sd(u,v)} in D = D(P) and
R(D) = I(P) to obtain (7.6.32) and (7.6.33).

Although this theorem gives us necessary and sufficient conditions for the 
conditional probabilities that yield points on the R(D) curve, actual evaluation 
of R(D) is difficult in most cases. Usually we must guess at a conditional prob 
ability and check the above conditions. There are, however, a few relationships 
which are helpful in evaluating R(D). 

Lemma 7.6.2 The parameter s in (7.6.32) and (7.6.33) is the slope of the rate
distortion function at the point D = D_s. That is,

dR(D)/dD |_{D=D_s} = s                                                    (7.6.34)






PROOF The chain rule yields the relation 
K(D} _dR _3R SRUs 

-db-dD + - 

Using (7.6.33) we have 



8R 









Recall that for P(v \ u) > we have 



Multiplying by Q(u) and summing over u e ^ gives the relation 

v) = 1 (7.6.37) 



when P(y) > 0. Differentiating with respect to 5 yields 

(u)d(u, v) + u"- = 



Multiplying by P(v) and summing over v E i^ gives 



, 



The first term is D s = D and 



which yields the relationship 



(7.6.38) 



f)e sJ(u - rt = (7.6.39) 



(7.6.40) 






Hence from (7.6.36) it follows that R (D S ) = s. 



Since R(D) is a decreasing function of D for D min < D < D max , this lemma 
implies that the parameter values of interest satisfy s < 0. We next show that the 
slope of R(D) is also continuous in this range. 

Lemma 7.6.3 The derivative R (D) is continuous for D min < D < Z) max . 
PROOF Let D min < D* < D max and consider the parameters 



s_ = lim R (D) 

D T D* 



(7.6.42) 




and 

s + = lim R (D) (7.6.43) 

D [ D* 

These are defined since R(D) is a continuous, convex u function of 
D min < D < D max . We let P+ and P_ be corresponding conditional probabili 
ties. By continuity of R(D) we have 



= I(P_) (7.6.44) 

For any < 9 < 1 let P d = 9P + + (1 - 0)P_ . Certainly P 9 satisfies 
D(P 9 ) = D* so that 



< 9I(P + ) + (1 - 0)/(P_) (7.6.45) 

The second inequality follows from convexity as proved in Lemma 1A.2. Since 
7(P + ) = 7(P_ ) = R(D*), we have 

R(D*) < I(P e ) < R(D*) (7.6.46) 

Thus we must have equality in each of the above steps. On examining the 
proof of Lemma 1A.2 in App. 1A for P e (u) > 0, we have 

P e (v\u) P + (v u) P_(v\u) 



P,(r) P + (r) P_(r) 
or 

^M e ^- s .^.,, (7647) 

A_\U) 

Here A + (u) and x_(w) are the A(u) corresponding to P + (v\u) and P_(r|u), 
respectively. Since v does not appear on the left side, either s+ = s_ or 
d(u, r) = d(w), independent of r. If d(u, v} = d(u), then 

or summing over all v where P e (v) > 

1 =/U(w)^+ d(u) (7.6.49) 

Hence 

P + (v\u) = P + (v) (7.6.50) 

and consequently D* > D max since R(D*) = I(P + ) = 0. But since D < D max we 
conclude that s+ = s_ . 






It has been shown further (Gallager [1968], Berger [1971]) that R (D) goes to 
oo as D approaches D min , and that the only place a discontinuity of R (D) can 
occur is at D = D max . We next derive another form of R(D) which is useful in 
obtaining lower bounds to R(D). 



Theorem 7.6.3 The rate distortion function can also be expressed as

R(D) = max_{s≤0, λ∈Λ_s} [ sD + Σ_u Q(u) ln λ(u) ]                          (7.6.51)

where

Λ_s = { λ(u) ≥ 0 : Σ_u λ(u)Q(u)e^{sd(u,v)} ≤ 1 for all v ∈ 𝒱 }             (7.6.52)

Necessary and sufficient conditions for s and λ to achieve the maximum are
the same as those given in Theorem 7.6.2.

PROOF Let s < 0, X e A s , and P e 0> D . Then using D(P) < D we have 
_ sD - Q(u) In A(n) > /(P) - sD(P) - fiM^ I ") In A(u) 



Again using the inequality In x > 1 (1/x), we have 



/(P) - sD - 



In 



1 - 



(7.6.53) 



and clearly 



> 1 - 1 



-0 



Hence for each P e 3? D we have 
/(P) > sD 



/(P) > max 

s<0, 



ln * 



But from Theorem 7.6.2 we know that there exists a P* e 
X* e A s * such that 

= I(P*) = 5*D + fi(") In **(") 



(7.6.54) 

,, s* <0, and 

(7.6.55) 




Hence 



R(D) = max 



s<0, 



sD 



We now examine a few examples. It will be clear that even for simple cases, 
unless certain symmetries hold, it is difficult to evaluate the rate distortion func 
tion. Fortunately, there is a very useful computational algorithm available for 
computing rate distortion functions as well as channel capacities. In App. 7A we 
present this algorithm, which is due to Blahut [1972]. 



Example (Binary source, error distortion) Consider the simple binary source, error distortion
case where 𝒰 = 𝒱 = {0, 1}, Q(0) = q ≤ 1/2, Q(1) = 1 - q, and d(u, v) = 1 - δ_uv. To find R(D), first
we observe that D_min = 0 and D_max = q.

Also for this case we see that

R(0) = ℋ(q) = -q ln q - (1 - q) ln (1 - q)                                (7.6.56)

We now find R(D) for 0 < D < q. Clearly, if for any P ∈ 𝒫_D we have P(0) = 0, then D(P) = q; and
if P(1) = 0 it follows that D(P) = 1 - q ≥ q. Hence for 0 < D < q we must have, for any P ∈ 𝒫_D,
the condition P(0) > 0 and P(1) > 0. The conditional probabilities that achieve the rate distortion
function must then satisfy

P(v|u) = λ(u)P(v)e^{sd(u,v)}

Multiplying by Q(u) and summing over u ∈ 𝒰 = {0, 1} yields the equations

λ(0)q e^s + λ(1)(1 - q) = 1
λ(0)q + λ(1)(1 - q)e^s = 1

which have solutions

λ(0) = 1/[q(1 + e^s)],   λ(1) = 1/[(1 - q)(1 + e^s)]                      (7.6.57)

Now we attempt to find P(0) and P(1) of the optimum conditional probability in 𝒫_D. From
(7.6.31)

1/λ(0) = P(0) + P(1)e^s,   1/λ(1) = P(0)e^s + P(1)                        (7.6.58)

which combined with (7.6.57) yield the equations

q(1 + e^s) = P(0) + P(1)e^s
(1 - q)(1 + e^s) = P(0)e^s + P(1)

yielding solutions

P(0) = [q - (1 - q)e^s]/(1 - e^s),   P(1) = [(1 - q) - q e^s]/(1 - e^s)    (7.6.59)

This then gives the parametric equation for D = D_s

D_s = e^s/(1 + e^s)                                                       (7.6.60)

Hence the Lagrange multiplier s must satisfy

s = ln [D/(1 - D)]                                                        (7.6.61)

Now we have for R(D), using (7.6.33),

R(D) = sD + Σ_u Q(u) ln λ(u)

     = ℋ(q) - ℋ(D)   0 ≤ D ≤ q                                            (7.6.62)

Note that since ℋ(q) ≤ ℋ(1/2), the rate distortion function for a binary
symmetric source requires the highest rate of any binary source to achieve a given
average distortion D. This is expected since there is greatest uncertainty in the 
outputs of the binary symmetric source. The natural generalization of this 
example will be examined next. Except for a very special case of this next example, 
the rate distortion function seems too complex to merit detailed presentation here 
(see, however, Berger [1971]). Instead we use Theorem 7.6.3 to find a lower bound 
to R(D). 
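The closed form (7.6.62) is easy to evaluate numerically; the following Python sketch (function names are ours, not from the text) computes R(D) = ℋ(q) - ℋ(D) in nats for the binary source with error distortion.

import numpy as np

def binary_entropy(p):
    """H(p) in nats, with the convention 0 ln 0 = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)

def rd_binary_error(q, D):
    """Eq. (7.6.62): R(D) = H(q) - H(D), valid for 0 <= D <= q <= 1/2."""
    return binary_entropy(q) - binary_entropy(D)

# Symmetric source (q = 1/2) reproduced within average error probability 0.1:
print(rd_binary_error(0.5, 0.1))     # about 0.368 nat per source symbol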

Example (Error distortion) Consider the natural generalization of the previous example where
we are given alphabets 𝒰 = 𝒱 = {1, 2, ..., A}; source probabilities Q(1), Q(2), ..., Q(A); and
distortion measure d(u, v) = 1 - δ_uv. Rather than derive the exact expression for R(D), we
develop an important lower bound to it. Recall from Theorem 7.6.3 that

R(D) ≥ sD + Σ_u Q(u) ln λ(u)

for any s ≤ 0 and λ(1), λ(2), ..., λ(A) that satisfy

Σ_u λ(u)Q(u)e^{sd(u,v)} ≤ 1   for all v

Suppose we choose λ(k)Q(k) to be a constant for k = 1, 2, ..., A and require

λ(k)Q(k)[1 + (A - 1)e^s] ≤ 1                                              (7.6.63)

so that the constraint is satisfied. Now choose

s = ln [ D / ((A - 1)(1 - D)) ]                                           (7.6.64)

and so

λ(k)Q(k) = 1 - D   k = 1, 2, ..., A                                       (7.6.65)

For this choice of s ≤ 0 and λ ∈ Λ_s

R(D) ≥ D ln [ D / ((A - 1)(1 - D)) ] + Σ_k Q(k) ln [ (1 - D)/Q(k) ]

     = H(𝒰) - ℋ(D) - D ln (A - 1)                                         (7.6.66)



Note that for A = 2, our previous example, this lower bound gives the exact
expression for R(D). Also for the special case where Q(1) = Q(2) = ⋯ =
Q(A) = 1/A, we can easily check that P(1) = P(2) = ⋯ = P(A) = 1/A, with s and λ
chosen above, satisfy the necessary and sufficient conditions, and again this lower
bound is the exact rate distortion function. It turns out, in fact, that for

0 ≤ D ≤ (A - 1) min_k Q(k)

this lower bound is the exact rate distortion function for the general case. For

(A - 1) min_k Q(k) < D < D_max

where

D_max = 1 - max_k Q(k)



the exact form of R(D) is more complex and the lower bound is no longer tight 
(see Jelinek [1967]). 
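A short Python sketch of the lower bound (7.6.66) follows; the function name is ours, and the bound is exact only in the range 0 ≤ D ≤ (A - 1) min_k Q(k) noted above.

import numpy as np

def r_lb_error_distortion(Q, D):
    """Lower bound (7.6.66): R_LB(D) = H(Q) - H(D) - D ln(A-1) for the
    A-letter source with error distortion d(u, v) = 1 - delta_{uv}.
    It equals the exact R(D) whenever 0 <= D <= (A - 1) * min(Q)."""
    Q = np.asarray(Q, dtype=float)
    A = Q.size
    H = -np.sum(Q * np.log(Q))
    HD = 0.0 if D <= 0.0 else -D * np.log(D) - (1.0 - D) * np.log(1.0 - D)
    return H - HD - D * np.log(A - 1)

Q = [0.4, 0.3, 0.2, 0.1]
D = 0.2                        # (A-1)*min(Q) = 0.3, so the bound is exact here
print(r_lb_error_distortion(Q, D))   # about 0.560 nats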

In the above example, for equally likely source outputs we had a symmetric 
condition which rendered easy the determination of the exact rate distortion 
function. This case is a special example of a class of sources and distortion meas 
ures referred to as symmetric sources with balanced distortion. 

Example (Symmetric source and balanced distortion) Given 𝒰 = 𝒱 = {1, 2, ..., A} and equally
likely source probabilities Q(1) = Q(2) = ⋯ = Q(A) = 1/A, suppose the distortion matrix
{d(k, j)} has the same set of entries in every row and column. That is, there exist nonnegative
numbers d_1, d_2, ..., d_A such that

{d(k, j): j = 1, 2, ..., A} = {d_1, d_2, ..., d_A}   for k = 1, 2, ..., A

and

{d(k, j): k = 1, 2, ..., A} = {d_1, d_2, ..., d_A}   for j = 1, 2, ..., A

In this case {d(k, j)} is called a balanced distortion matrix, and we now compute the exact rate
distortion function. By symmetry, we guess that P(1) = P(2) = ⋯ = P(A) = 1/A and λ(1) =
λ(2) = ⋯ = λ(A). We now check to see if the necessary and sufficient conditions of Theorem 7.6.2
are satisfied for this guess. The conditional probability must satisfy

P(j|k) = e^{sd(k,j)} / Σ_{i=1}^{A} e^{sd_i}   for all j, k                  (7.6.67)

and from (7.6.31)

λ(k) = A / Σ_{i=1}^{A} e^{sd_i}                                            (7.6.68)

This conditional probability satisfies the conditions (7.6.19) with the required λ value. The rate
distortion function is given in parametric form by (7.6.32) and (7.6.33), which reduce to

D_s = Σ_{k=1}^{A} d_k e^{sd_k} / Σ_{k=1}^{A} e^{sd_k}                       (7.6.69)

and

R(D_s) = sD_s + ln A - ln ( Σ_{k=1}^{A} e^{sd_k} )                         (7.6.70)
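The parametric pair (7.6.69)-(7.6.70) is straightforward to sweep numerically. The Python sketch below (names are ours) traces (D_s, R(D_s)) for a balanced distortion matrix, illustrated with the error-distortion row (0, 1, ..., 1), for which the values can be checked against (7.6.66).

import numpy as np

def balanced_rd_curve(d_row, s_values):
    """Parametric points (D_s, R(D_s)) from (7.6.69)-(7.6.70) for a symmetric
    source with a balanced distortion matrix; d_row is the common multiset
    d_1, ..., d_A of each row, s_values is a list of slope parameters s < 0."""
    d = np.asarray(d_row, dtype=float)
    A = d.size
    points = []
    for s in s_values:
        w = np.exp(s * d)
        D_s = np.sum(d * w) / w.sum()
        R_s = s * D_s + np.log(A) - np.log(w.sum())
        points.append((D_s, R_s))
    return points

# Error distortion on a 4-letter alphabet: every row of d(k, j) is a permutation
# of (0, 1, 1, 1); the results agree with H(U) - H(D) - D ln(A-1) of (7.6.66).
for D, R in balanced_rd_curve([0.0, 1.0, 1.0, 1.0], [-0.5, -1.0, -2.0, -4.0]):
    print(f"D = {D:.3f}   R = {R:.3f} nats")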

The symmetric source with balanced distortion example suggests a general 
way of obtaining a lower bound to R(D) for arbitrary discrete memoryless sources. 

Lemma 7.6.4 A lower bound to the rate distortion function for a discrete
memoryless source with entropy H(𝒰) is given by

R(D) ≥ R_LB(D) = sD + H(𝒰) - ln ( Σ_u e^{sd(u,v*)} )                       (7.6.71)

where v* satisfies

Σ_u e^{sd(u,v*)} = max_v Σ_u e^{sd(u,v)}                                   (7.6.72)

and s satisfies the constraint

D = Σ_u d(u, v*) e^{sd(u,v*)} / Σ_u e^{sd(u,v*)}                           (7.6.73)

PROOF Choose {λ(u): u ∈ 𝒰} such that

λ(u)Q(u) = [ Σ_{u'} e^{sd(u',v*)} ]^{-1}                                   (7.6.74)

Then

Σ_u λ(u)Q(u)e^{sd(u,v)} = Σ_u e^{sd(u,v)} / Σ_{u'} e^{sd(u',v*)} ≤ 1        (7.6.75)

and thus λ ∈ Λ_s. From Theorem 7.6.3 we have, for this choice of λ ∈ Λ_s,

R(D) ≥ sD + H(𝒰) - ln ( Σ_u e^{sd(u,v*)} )

for any s ≤ 0. We now choose s to maximize this lower bound. By direct
differentiation of the lower bound with respect to s and setting the derivative
to zero, we find that s must satisfy (7.6.73).

Evaluation of R(D) requires finding P e 3P D that satisfy the necessary and 
sufficient conditions given in Theorem 7.6.2. Except for examples with certain 
symmetry properties this is difficult. Using the lower bound, which is often tight 
for small values of D, is a convenient way to find an approximation to R(D). 
Another approach to evaluating R(D) for a specific example is to use the computa 
tional algorithms of App. 7A. 

7.7 EVALUATION OF R(D) CONTINUOUS -AMPLITUDE 
MEMORYLESS SOURCES* 

The conditions for the evaluation of R(D) for continuous-amplitude memoryless 
sources are similar to those for discrete memoryless sources. Recall that the rate 
distortion function is defined by (7.5.23), (7.5.18), and (7.5.21) as 

R(D) = inf 7(P) nats/source symbol 
where 

/(P) = I * | Q(u)P(v \u)\n ^Y dv du 

OO* 00 V / 

and 

,00 ,00 



D = lP(v\u): D(P) = j j Q(u}P(v\u)d(u, v) dv du < D 



oo oo 



As with discrete sources, R(D) is a continuous, strictly decreasing function of D for
D_min ≤ D ≤ D_max, where here

D_min = ∫_{-∞}^{∞} Q(u) inf_v d(u, v) du                                    (7.7.1)

and

D_max = inf_v ∫_{-∞}^{∞} Q(u)d(u, v) du                                     (7.7.2)





The strictly decreasing property of R(D) further implies that if P e g? D yields 
R(D) = /(P), then D(P) = D. 

To find R(D), we want to minimize

I(P) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} Q(u)P(v|u) ln [ P(v|u)/P(v) ] dv du           (7.7.3)

subject to the conditions on P(v|u)

P(v|u) ≥ 0   for all u, v ∈ ℛ                                             (7.7.4)

∫_{-∞}^{∞} P(v|u) dv = 1   for all u ∈ ℛ                                    (7.7.5)

∫_{-∞}^{∞} ∫_{-∞}^{∞} Q(u)P(v|u)d(u, v) du dv = D                           (7.7.6)

Using Lagrange multipliers for the equality constraints, (7.7.5) and (7.7.6), 
and the calculus of variations, we can obtain the continuous-amplitude form of 
Theorem 7.6.1, given in the following theorem (see Berger [1971], chap. 4). 

Theorem 7.7.1 Necessary and sufficient conditions for a conditional probabil 
ity P E 0> D to yield the rate distortion function R(D) at distortion D are that it 
satisfy 20 

P(v|u) = λ(u)P(v)e^{sd(u,v)}   if P(v) > 0                                (7.7.7)

∫_{-∞}^{∞} λ(u)Q(u)e^{sd(u,v)} du ≤ 1   if P(v) = 0                         (7.7.8)

where

λ(u) = [ ∫_{-∞}^{∞} P(v)e^{sd(u,v)} dv ]^{-1}                               (7.7.9)

and where for s ≤ 0, R(D) and D satisfy the parametric equations

D = ∫_{-∞}^{∞} ∫_{-∞}^{∞} λ(u)Q(u)P(v)e^{sd(u,v)} d(u, v) du dv             (7.7.10)

R(D) = sD + ∫_{-∞}^{∞} Q(u) ln λ(u) du                                      (7.7.11)

Following the same arguments as for the discrete case we have the following 
lemmas. 



20 In a strict sense, these relations hold for almost all 




Lemma 7.7.1 The parameter s is the slope of the rate distortion function at
the point D = D_s. That is,

dR(D)/dD |_{D=D_s} = s                                                    (7.7.12)



Lemma 7.7.2 The derivative R'(D) is continuous for D_min < D < D_max.

Theorem 7.7.2 The rate distortion function can be expressed as

R(D) = sup_{s≤0, λ∈Λ_s} [ sD + ∫_{-∞}^{∞} Q(u) ln λ(u) du ]                 (7.7.13)

where

Λ_s = { λ(u) ≥ 0 : ∫_{-∞}^{∞} λ(u)Q(u)e^{sd(u,v)} du ≤ 1, -∞ < v < ∞ }      (7.7.14)

Necessary and sufficient conditions for s and λ to realize the maximum are
the same as those given in Theorem 7.7.1.

The main difference between the rate distortion functions for continuous and 
discrete sources is that R(D) -> oo as D -> D min , since the entropy of a continuous 
amplitude source is infinite. For continuous amplitude sources there are only a 
few examples of explicit analytical evaluation of the rate distortion function. We 
present first the well-known, most commonly used example of a memoryless 
Gaussian source with a squared-error distortion measure. 



Example (Gaussian source, squared-error distortion) Consider a source that outputs an independent
Gaussian random variable each symbol time with probability density

Q(u) = (2πσ²)^{-1/2} e^{-u²/2σ²}   -∞ < u < ∞                              (7.7.15)

and assume a squared-error distortion measure d(u, v) = (u - v)². For this distortion and source
we have D_min = 0 and D_max = σ². We next seek a conditional probability density P ∈ 𝒫_D which
satisfies the necessary and sufficient conditions of Theorem 7.7.1 for 0 < D < σ². A natural choice
is to choose, for some β²,

P(v) = (2πβ²)^{-1/2} e^{-v²/2β²}   -∞ < v < ∞                              (7.7.16)

This then yields the Lagrange multiplier λ(·), which satisfies

λ(u) = [ ∫ P(v)e^{s(u-v)²} dv ]^{-1} = [(a² + β²)/a²]^{1/2} e^{u²/2(a²+β²)}   (7.7.17)

where a² = -1/(2s). This choice of P(v) then requires P(v|u) of the form

P(v|u) = λ(u)P(v)e^{s(u-v)²}

All that remains is to satisfy the parametric equations for D and R(D). First,

D = ∫∫ Q(u)P(v|u)d(u, v) du dv = a²β²/(a² + β²) + σ²a⁴/(a² + β²)²

So far a² is directly related to the parameter s, whereas β² is unrestricted. We choose β² to satisfy
a² + β² = σ², which gives D = a² and hence forces the relation on s given by

s = -1/(2D)                                                               (7.7.19)

The expression for R(D) then becomes

R(D) = sD + ∫ Q(u) ln λ(u) du

     = (1/2) ln (σ²/D)   nats/source symbol   0 < D ≤ σ²                   (7.7.20)

The above is the simplest example. We next present without proof other 
known examples. 
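For reference, (7.7.20) can be evaluated directly; a minimal Python sketch (function name ours):

import numpy as np

def rd_gaussian_squared_error(sigma2, D):
    """Eq. (7.7.20): R(D) = (1/2) ln(sigma^2 / D) nats, for 0 < D <= sigma^2."""
    return 0.5 * np.log(sigma2 / D) if D < sigma2 else 0.0

# A unit-variance Gaussian source reproduced within mean squared error 0.25
# requires about 0.693 nat (one bit) per source symbol.
print(rd_gaussian_squared_error(1.0, 0.25))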

Example (Gaussian source, magnitude error distortion) Consider next the same Gaussian source 
with probability density given by (7.7.15) and assume now a distortion measure d(u, v) = 
\u v\. Here D min = and D max = ^/2a 2 /n. For < D < D max , the rate distortion function (Tan 
and Yao [1975]) is given parametrically by 






-In (2Q(0)) (7.7.21) 



-IU 2 

- 2Q(9))e e2l2 Q(6) + ^/ne 92 2 - 20Q(0)\ (7.7.22) 



where < 9 < oo. Similar analytical evaluation of rate distortion functions for classes of sources 
of probability densities with constrained tail decays under magnitude error distortion are also 
given in Tan and Yao [1975]. [In this example only Q(-) is the Gaussian integral function 
denned by (2.3.11).] 




Example (Exponential source, magnitude error distortion) Suppose the source probability density
is

Q(u) = (α/2)e^{-α|u|}   -∞ < u < ∞                                         (7.7.23)

with a distortion measure d(u, v) = |u - v|. Then D_min = 0 and D_max = 1/α. For 0 < D < 1/α, the
choice (Berger [1971])

P(v) = α²D² δ(v) + (α/2)(1 - α²D²)e^{-α|v|}                                (7.7.24)

yields the rate distortion function

R(D) = -ln (αD)   nats/source symbol                                      (7.7.25)


Example (Uniform source, magnitude error distortion) Consider a source with uniform probability
density

Q(u) = 1/(2A)   for |u| ≤ A,   0 otherwise                                 (7.7.26)

and a distortion measure d(u, v) = |u - v|. Then D_min = 0 and D_max = A/2. For 0 < D < A/2, we
have (Tan and Yao [1975])

R(D) = -ln [1 - (1 - 2D/A)^{1/2}] - (1 - 2D/A)^{1/2}                        (7.7.27)



Finally we note that Rubin [1973] has evaluated the rate distortion function 
for the Poisson source under the magnitude error distortion criterion. Evaluations 
of rate distortion functions for most other cases are limited to a low range of 
distortion values, wherein often a simple lower bound to the rate distortion 
function coincides with the actual rate distortion function. 
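The closed forms (7.7.25) and (7.7.27) are likewise simple to evaluate; in the sketch below (Python, function names ours) the uniform-source parameter A enters only through the ratio 2D/A, exactly as in (7.7.27).

import numpy as np

def rd_exponential_abs(alpha, D):
    """Eq. (7.7.25): R(D) = -ln(alpha*D) for the two-sided exponential source
    with magnitude-error distortion, valid for 0 < D < 1/alpha."""
    return -np.log(alpha * D)

def rd_uniform_abs(A, D):
    """Eq. (7.7.27): R(D) for the uniform source of (7.7.26) with magnitude-error
    distortion, valid for 0 < D < D_max = A/2."""
    t = np.sqrt(1.0 - 2.0 * D / A)
    return -np.log(1.0 - t) - t

print(rd_exponential_abs(2.0, 0.1))   # ln 5 = 1.609... nats
print(rd_uniform_abs(2.0, 0.5))       # about 0.521 nats at D = D_max/2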

Since R(D) is generally difficult to evaluate, it is natural to consider various 
bounds on the rate distortion function. Upper bounds follow easily from the 
definition, since 

R(D) = inf 7(P) 



for any P e 3? D . The trick is to choose a convenient form for P e # D . Often, for a 
given distortion measure, there is a natural choice for the conditional probability 
density that yields a simple, convenient upper bound. For example, for the 
squared-error distortion d(u, r) = (u r) 2 , a natural choice was to let P e # D be 
the Gaussian density. 

Theorem 7.7.3 Let Q(·) be any source probability density with mean zero
and variance σ². That is, suppose

∫_{-∞}^{∞} u Q(u) du = 0                                                    (7.7.28)

and

∫_{-∞}^{∞} u² Q(u) du = σ²                                                  (7.7.29)

For this source probability density and the squared-error distortion measure,
d(u, v) = (u - v)², the rate distortion function is bounded by

R(D) ≤ (1/2) ln (σ²/D)   nats/source symbol   0 < D ≤ σ²                   (7.7.30)

where equality holds if and only if Q(·) is the Gaussian density.

PROOF For a given D in the interval 0 < D < σ², let

P(v|u) = [2πD(1 - D/σ²)]^{-1/2} exp{ -[v - (1 - D/σ²)u]² / [2D(1 - D/σ²)] }   (7.7.31)

Then

∫_{-∞}^{∞} d(u, v)P(v|u) dv = D(1 - D/σ²) + (D/σ²)² u²                       (7.7.32)

and

D(P) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} d(u, v)Q(u)P(v|u) dv du

     = D(1 - D/σ²) + (D/σ²)² σ²                                            (7.7.33)

     = D                                                                  (7.7.34)

Hence P ∈ 𝒫_D and we have R(D) ≤ I(P). But

I(P) = ∫∫ Q(u)P(v|u) ln [ P(v|u)/P(v) ] dv du

Letting h(𝒱) = -∫ P(v) ln P(v) dv

be the differential entropy of P(v), and noting that

-∫∫ Q(u)P(v|u) ln P(v|u) dv du = (1/2) ln [2πeD(1 - D/σ²)]                 (7.7.35)

it follows that

R(D) ≤ I(P) = h(𝒱) - (1/2) ln [2πeD(1 - D/σ²)]                             (7.7.36)

for the choice of P of (7.7.31). Note from (7.7.28), (7.7.29), and (7.7.31) that

∫_{-∞}^{∞} v P(v) dv = 0                                                    (7.7.37)

and

∫_{-∞}^{∞} v² P(v) dv = ∫ Q(u) [ ∫ v² P(v|u) dv ] du

                      = ∫ Q(u) { D(1 - D/σ²) + (1 - D/σ²)² u² } du

                      = (1 - D/σ²)σ²

                      = σ² - D                                             (7.7.38)

It follows also, from Prob. 1.13, that the differential entropy for any probability
density is upper-bounded by the differential entropy of the Gaussian
density having the same mean and variance. Hence, using (7.7.37) and (7.7.38),
we have

h(𝒱) ≤ (1/2) ln [2πe(σ² - D)]                                              (7.7.39)

which in turn yields the desired bound. 

Thus, for a given variance, the Gaussian source yields the maximum rate 
distortion function with respect to a squared-error criterion. It follows that, for 
any given source of variance o 2 and squared-error fidelity criterion D, there exists 
a block code of fixed rate R > \ In (0 2 /D) nats per symbol that can achieve 
average distortion D. In fact, Sakrison [1975] has shown that for R > \ In (o 2 /D), 
codes that are designed to achieve average distortion D for the Gaussian source 
will be good (in the sense of also achieving distortion D) for any other source with 
the same variance. Similar results were obtained for sources with fixed moments 
other than the second. 

Most of the efforts in evaluating rate distortion functions have concentrated 
on deriving lower bounds to R(D). This is due in part to the fact that, for many 
sources and distortion measures, a convenient lower bound due to Shannon 
[1959] coincides with the actual rate distortion function for some lower range of 
values of the fidelity criterion D. To derive lower bounds to R(D), we examine the 
form of the rate distortion function given in Theorem 7.7.2. Specifically 



R(D) = sup 



sD + I Q(u) In A(U) du 



s<0, Xe A s 

sD + [ Q(u) In )i(u] du (7.7.40) 




for any s < and any X e A s , where 

A s = JA(w): J k(u)Q(u)e sd( ^ v) du < 1, - 





oo < v < oo 



Again we seek a convenient choice of X e A s . For difference distortion meas 
ures d(u, v) = d(u v), which depends only on the difference u v, we have the 
following lower bound, R LB (D). 

Theorem 7.7.4: Shannon lower bound For a source with probability density 
function Q( ) and difference distortion measure d(u, v) = d(u v) 

\ r" 

R(D) > R LB (D) = sup \h(<*) + sD - In e sd(z) dz (7.7.41) 

s<0 [ -oo 

where 

h(W) = - 1 Q(u) In Q(u) du (7.7.42) 

oo 

is the differential entropy of the source. 
PROOF Let A(w) be chosen according to 

[A(w)]~ l = Q(u) | e sd(z) dz (7.7.43) 

~ OO 

Then 



I l(u)Q(u 



j e sd(z) dz 



- 1 



which establishes X e A s . Substituting (7.7.43) in (7.7.40) yields the desired 
result. 

Using direct differentiation with respect to s on the lower bound (7.7.41), we 
can easily obtain two special cases. 

Corollary 7.7.5: Squared error For d(u, v) = (u - v)² in the above lemma we
have

R(D) ≥ R_LB(D) = h(𝒰) - (1/2) ln (2πeD)   nats/source symbol               (7.7.44)

Corollary 7.7.6: Magnitude error For d(u, v) = |u - v| in the above lemma
we have

R(D) ≥ R_LB(D) = h(𝒰) - ln (2eD)   nats/source symbol                      (7.7.45)
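Both corollaries reduce to one-line computations once the differential entropy h(𝒰) is known; the Python sketch below (function names ours) also illustrates the tightness claims made next for the Gaussian/squared-error and two-sided exponential/magnitude-error cases.

import numpy as np

def shannon_lb_squared_error(h, D):
    """Corollary 7.7.5: R_LB(D) = h(U) - (1/2) ln(2*pi*e*D)."""
    return h - 0.5 * np.log(2.0 * np.pi * np.e * D)

def shannon_lb_magnitude_error(h, D):
    """Corollary 7.7.6: R_LB(D) = h(U) - ln(2*e*D)."""
    return h - np.log(2.0 * np.e * D)

# Gaussian source, squared error: the bound equals (1/2) ln(sigma^2/D), i.e. it is tight.
sigma2, D = 1.0, 0.25
h_gauss = 0.5 * np.log(2.0 * np.pi * np.e * sigma2)
print(shannon_lb_squared_error(h_gauss, D))      # 0.693... = 0.5*ln(4)

# Two-sided exponential source, magnitude error: the bound equals -ln(alpha*D).
alpha, D = 2.0, 0.1
h_exp = 1.0 + np.log(2.0 / alpha)
print(shannon_lb_magnitude_error(h_exp, D))      # 1.609... = ln 5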




In many cases the Shannon lower bound is tight. This occurs when /( ) given 
in (7.7.43) also meets the conditions of Theorem 7.7.1, which are satisfied if and 
only if a probability density P s ( ) can be found that satisfies (see Prob. 7.18) 

f" P s (v) sd(u - v) dv 
Q( U ) = -^ (7.7.46) 



I e sd(z) dz 

* oo 

for some s > 0. For these values of s, we have R(D S ) = R LB (D S ). 

For the Gaussian source with squared-error distortion measure, the Shannon 
lower bound is tight everywhere; that is, R(D) = R LB (D) for all < D < a 2 . This is 
also true of the source with the two-sided exponential density (7.7.23) with a 
magnitude error criterion whose R(D) is given by (7.7.25). For a case in which 
the lower bound is nowhere tight consider the following. 

Example (Gaussian source, magnitude distortion) For the Gaussian source with probability
density

Q(u) = (2πσ²)^{-1/2} e^{-u²/2σ²}   -∞ < u < ∞                              (7.7.47)

and the distortion measure d(u, v) = |u - v|, we have

h(𝒰) = (1/2) ln (2πeσ²)                                                    (7.7.48)

and

R_LB(D) = (1/2) ln (πσ²/2eD²)                                              (7.7.49)

for 0 < D < D_max = (2σ²/π)^{1/2}.



Here in general the true rate distortion function [see the example resulting in (7.7.21) and (7.7.22)] 
is strictly greater than the Shannon lower bound. However, by numerical calculations, Tan and 
Yao [1975] have shown that the maximum of R(D) - R LB (D) is roughly 0.036 nat per source 
symbol, and that at rates above one nat per source symbol the difference is less than one part in a 
million. Thus one can conclude that R LB (D) is a very good approximation of R(D) for this 
source (see Prob. 7.21). 



7.8 BIBLIOGRAPHICAL NOTES AND REFERENCES 

The seeds of rate distortion theory can be found in Shannon s original 1948 paper. 
It was another eleven years, however, before Shannon [1959] presented the fun 
damental theorems which serve as the cornerstone of rate distortion theory. In the 
late sixties there was a renewed interest in this theory, and at that time the general 
information theory texts by Gallager [1968] and Jelinek [19680] each contained a 
chapter devoted to rate distortion theory. The most complete presentation of this 
theory can be found in the text by Berger [1971], which is devoted primarily to this 
subject. 




In this chapter, the presentation of rate distortion theory is different from 
earlier treatments in that we first emphasize the coding theorems and later discuss 
the rate distortion function, its properties, and its evaluation. The proofs of the 
coding theorems for block codes (Theorem 7.2.1) and for trellis codes (Theorem 
7.4.1) are due to the authors (Omura [1973], Viterbi and Omura [1974]). They are 
analogous to the proofs of the corresponding channel coding theorems of 
Chaps. 3 and 5. The de-emphasis of techniques for the evaluation of R(D) is due 
primarily to the fact that there now exists an efficient computational algorithm for 
R(D) which is due to Blahut [1972] and is included here in App. 7A. 



APPENDIX 7A COMPUTATIONAL ALGORITHM 

FOR R(D) (BLAHUT [1972]) 



The algorithm for computing R(D) is similar to the algorithm for channel 
capacity given in App. 3C. Recall that for a discrete memoryless source with 
alphabet ^, letter probability distribution {Q(u)\ u e <%\ representation alphabet 
V, and distortion {d(u, v): u e ^, v e i^} the rate distortion function R(D) is given 
by (7.2.53) 

R(D)= mm I(P) 

where 



and 



p(v 1 11): D(P) = Z Q(")P(v \ ")d(u, v)<D 

U V 



The parametric representation for R(D) in terms of the parameter s ≤ 0 is given by
(7.6.32) and (7.6.33)

D_s = Σ_u Σ_v λ(u)Q(u)P(v)e^{sd(u,v)} d(u, v)

R(D_s) = sD_s + Σ_u Q(u) ln λ(u)

where, by (7.6.31),

λ(u) = [ Σ_v P(v)e^{sd(u,v)} ]^{-1}   for all u ∈ 𝒰

The transition probability {P(v|u)} which achieves R(D_s) is given by the necessary
and sufficient conditions of (7.6.19)

P(v|u) = λ(u)P(v)e^{sd(u,v)}   if P(v) > 0

and

Σ_u λ(u)Q(u)e^{sd(u,v)} ≤ 1   if P(v) = 0

The algorithm for computing R(D) is based on the following theorem. 

Theorem 7A.1 Given parameter s ≤ 0, let {P_0(v): v ∈ 𝒱} be a probability
vector for which P_0(v) > 0 for all v ∈ 𝒱. For integers n = 0, 1, 2, ... define

P_n(v|u) = P_n(v)e^{sd(u,v)} / Σ_{v'} P_n(v')e^{sd(u,v')}                   (7A.1)

and

P_{n+1}(v) = Σ_u Q(u)P_n(v|u)                                             (7A.2)

Then, in the limit as n → ∞, we have

(D(P_n), I(P_n)) → (D_s, R(D_s))                                          (7A.3)

where (D_s, R(D_s)) is the point on the R(D)-versus-D curve parameterized by s.

PROOF Consider the ID plane shown in Fig. 7 A.I. Define V(P) = I(P) - 
sD(P) which can be interpreted as the /-axis intercept of a line of slope s which 
passes through the point (/(P), D(P)). Recall that the point on the R(D)- 
versus-D curve parameterized by s has a tangent that is parallel to every such 
line of slope s, and this point lies beneath all such lines since R(D) is defined as 
a minimization over /(P). We show that V(P n ) is strictly decreasing with H, 
unless (/(P n ), D(P n )) is a point on the K(D)-versus-D curve. 




D Figure 7A.1 Sketch of ID plane. 




From (7A.2) we have 



n+\ 



P.+M 

From this we get the difference 



- 1 



(7A.4) 



- 1 



= (7A.5) 

where again we used In x < x - 1. We have equality in (7A.5) if and only if 



P 



z 



(7A.6) 



which is one of the conditions (7.6.19) for the distribution that achieves R(D S ). 
Since V(P n ) is nonincreasing and is bounded below by R(D) sD, it must 






converge to some value V(P ao ) as n -* oo. The sequence P n must have a limit 
point P*, and by continuity of V(P) this limit point must satisfy (7A.1) 

< u \ e sd(u, v) 



,,\ y 



(7A.7) 



Thus P* satisfies necessary and sufficient conditions to achieve R(D S ) so that 
*) = R(D s )-sD s . 



The accuracy of the computational algorithm after a finite number of steps is 
given by the following theorem. 



Theorem 7A.2 Given any probability distribution {P(v): v ∈ 𝒱}, let

C(v) = Σ_u Q(u)e^{sd(u,v)} / Σ_{v'} P(v')e^{sd(u,v')}   for all v ∈ 𝒱       (7A.8)

Then for {P(v|u): u ∈ 𝒰, v ∈ 𝒱} satisfying

P(v|u) = P(v)e^{sd(u,v)} / Σ_{v'} P(v')e^{sd(u,v')}                         (7A.9)

we have at the point

D = Σ_u Σ_v Q(u)P(v|u)d(u, v)

the bounds

-max_v ln C(v) ≤ R(D) - sD + Σ_u Q(u) ln [ Σ_v P(v)e^{sd(u,v)} ] ≤ -Σ_v P(v)C(v) ln C(v)     (7A.10)



PROOF If D(P) = D then P 6 & D and 
R(D) < /(P) 



= Z I Q(u)P(v u) In 



(7A.11) 




But 

^Q(u)P(v\u) = P(v)C(v) (7A.12) 

u 

so that 

R(D) <sD-^ Q(u) In f P(v)e sd ^ *> ] - P(v)C(v) In C(v) (7A.13) 

[v \ v 

From Theorem 7.6.3 we have 

R(D) > sD + fi() 1" A(II) (7A. 14) 

u 

where X is any vector such that 

/l(w)e(")^ (u v) < 1 for all v e V (7A.15) 

u 

Let us choose 



C max L P(v)e sd( for all u e ^ (7A. 16) 

where 

C max = max C(i?) 
f 

Then (7A.15) is satisfied and 



R(D) > s - Q(u) In 



-max In C(i?) (7A.17) 



We see that, for (P(v): v e i^} that achieves the point R(D\ we have 

R(D) = sD-^ fi(ii) In f X P()e^-- "> (7A.18) 



and 

C(i?)<l (7A.19) 

with equality when P(v) > 0. Thus 

-max In C(v) = -^ ^(^)C(r) In C(i?) 

r v 

= (7A.20) 

and the bounds in (7A.10) are tight. The two theorems suggest the following 
algorithm for a given e > level of desired accuracy. 

Step 1: Set n = 1 and pick an initial probability P_0. (The uniform distribution will
do.)

Step 2: For the given Q(u), d(u, v), and for any s < 0,²¹ compute c_n(v) and P_{n+1}(v)
from (7A.1) and (7A.2), together with the bounds of Theorem 7A.2

    A_n = −Σ_v P_n(v) c_n(v) ln c_n(v)
    B_n = −max_v ln c_n(v)

Step 3: If A_n − B_n < ε, compute D(P_n) and

    R(D(P_n)) = s D(P_n) − Σ_u Q(u) ln [ Σ_v P_n(v) e^{s d(u, v)} ]

and stop.

Step 4: If A_n − B_n ≥ ε, change n to n + 1 and go to step 2.
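The two theorems translate directly into a short program. The following is a minimal
sketch in Python with NumPy, assuming a finite source alphabet with probabilities
Q(u) > 0 and a distortion matrix d[u, v]; the function name blahut_rd and the
particular bookkeeping of the bounds A_n, B_n are ours for illustration, not notation
from the text.

    import numpy as np

    def blahut_rd(Q, d, s, eps=1e-6, max_iter=5000):
        """Approximate the point (D_s, R(D_s)) for a parameter s < 0.

        Q : source probabilities, shape (A,), all positive
        d : distortion matrix d[u, v], shape (A, B)
        s : slope parameter, s < 0
        Returns (D, R) in nats.
        """
        A, B = d.shape
        P = np.full(B, 1.0 / B)          # initial output distribution (uniform)
        expsd = np.exp(s * d)            # e^{s d(u, v)}

        for _ in range(max_iter):
            denom = expsd @ P            # sum_v P(v) e^{s d(u, v)}, one entry per u
            c = (Q / denom) @ expsd      # c_n(v) as in (7A.1)
            P_next = P * c               # P_{n+1}(v) as in (7A.2)

            # Gap between the two bounds of Theorem 7A.2
            A_n = -np.sum(P_next * np.log(c))
            B_n = -np.log(c.max())
            P = P_next
            if A_n - B_n < eps:
                break

        # Optimal transition probabilities for this P, then D and R
        denom = expsd @ P
        W = (P * expsd) / denom[:, None]     # P(v|u) = P(v) e^{s d(u,v)} / denom(u)
        D = np.sum(Q[:, None] * W * d)
        R = s * D - np.sum(Q * np.log(denom))  # within the Theorem 7A.2 gap of R(D)
        return D, R

Sweeping s over a range of negative values traces out the R(D) curve point by point;
for an equiprobable binary source with the error distortion measure this reproduces
R(D) = ln 2 − ℋ(D).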

PROBLEMS 

7.1 Prove inequality (7.2.47), using a proof similar to the proof of Lemma 1.2.2 
Chap. 1. 

7.2 For sequences of length N define

    𝒫_{D,N} = { P_N(v|u): Σ_u Σ_v Q_N(u) P_N(v|u) d_N(u, v) ≤ D }

and

    R_N(D) = min_{P_N ∈ 𝒫_{D,N}} (1/N) I(P_N)

For a discrete memoryless source where

    Q_N(u) = Π_{n=1}^{N} Q(u_n)

show that

    R_1(D) = R_N(D)        N = 1, 2, ...

7.3 (Gallager [1968]) For Theorem 7.2.4 show that

-E" (p; P) < C = 2 + 16[ln A] 
for ^ < p < 0. It is convenient to define 



(7A.22) 
(7A.23) 

(7A.24) 



(7A.25) 



) given in 



a(u) 



²¹ The choice of s = 0 yields R(D_max) = 0, where D_max is given by (7.6.7). The choice of s < 0 yields
the point (R(D), D), where the slope is s.




and show that 



and 



7.4 Show that the source coding theorem given by Corollary 7.2.2 remains true when either D + ε is
replaced by D (provided D > D_min) or R(D) + ε is replaced by R(D).

7.5 Prove Theorem 7.3.2 using the proof given in the converse source coding theorem (Theorem 7.2.3) 
and the data processing theorem (Theorem 1.2.1). 

7.6 A source and channel are said to be matched to each other when the channel transition probabili 
ties satisfy the conditions for achieving R(D) of the source, and the source letter probabilities drive the 
channel at capacity. Here R(D) = C where the time per source output is equal to the time per channel 
use. Show that when a source and channel are matched there is no need for any source and channel 
encoding to achieve ideal performance. Examine the equiprobable binary source with error distortion 
at fidelity D and the binary symmetric channel with crossover probability ε where ε = D.

7.7 Consider the source encoder and decoder of Fig. 7.3. If fidelity D can be achieved with a code of 
rate R > R(D), show that the entropy of the encoder output 



where P m is the probability of index m e {1, 2, ..., M}, satisfies 

R(D) < H(W) < R 
7.8 For an arbitrary DMC, show that the expurgated exponent ex (R) satisfies 



where R = R(D) is given by (7.3.17). Also find the necessary and sufficient conditions for equality. 
Hint: Examine Prob. 3.21 and show that 

R(D S ) > R L (D S ) = sD s - In y(s, q) 

7.9 For a DMC, define the Bhattacharyya distance given by (7.3.16) and the natural rate distortion 
function given by (7.3.17). Then prove a generalized Gilbert bound for this distance measure, analo 
gous to Theorem 7.3.3. 

7.10 (Analysis of Tree Codes, Jelinek [1969]) Suppose we have a binary symmetric source with the error
distortion measure. From (7.2.42), the rate distortion function is given by R(D) = ln 2 − ℋ(D) nats
per source symbol where 0 ≤ D ≤ 1/2. We now consider encoding this source with a binary tree code of
rate R = (ln 2)/n nats per source symbol where we assume R > R(D). This tree code has n binary
representation symbols on each branch. Let 7] be such a tree code that is terminated at / branches. 
Then for source sequence u e ^ n/ , we define d(u T,) as the minimum normalized error distortion 
between u and paths in the terminated tree T t . A larger terminated tree T L where L = ml can be 
constructed from many terminated trees of length / by attaching the base nodes of these trees to 
terminal nodes of other trees. We now consider an ensemble of terminated tree codes where all branch 
binary representation symbols are independent and equally likely to be "0" or " 1 ". 






(a) Taking the expectation over the tree code ensemble for any source output sequence u, show 
that 

D* - lim E{d(u | TI)} 

I-oc 

- lim E{d(u\T ml )} 

m- oc 

exists and is independent of the source output sequence. 

(b) Next, given any 6 > and output sequence u e # nm/ , define u = (u t , u 2 , ..., u m ) where 
u, e 3f nl for each i = 1, 2, ..., m and variables z , z t , z 2 , z m associated with code tree T ml as follows: 

z = 1 

z l = number of paths with distortion D + 6 or less from u l over the first / branches 
z 2 = number of paths extending the z t paths above with distortion D + S or less from u 2 over the 
second / branches 

z m = number of paths extending the z m _ t paths above with distortion D + 6 or less from u m over the 
mth / branches 

It follows from the branching process extinction theorem (Feller [1957]) that, over the tree code 
ensemble, Pr{z m > 0} decreases monotonically with m and approaches a strictly positive limiting value, 

lim Pr{z m > 0} > 

provided EjzJ > 1. Using Chernoff bounds (see App. 8A), show that 

E{zi} = 2 l PT{d l (u,v)<D+8\u} 



Hence for small 6 such that R > R(D) - R (D)S, we can find / large enough to have E{z l }> 1. 

(c) Assuming E{z l } > 1 use the branching process extinction theorem in (b) to prove 

lim Pr {d(u \ T ml ) < D + 6} > 

m-> oc 

(d) From the definition of D* given in (a), we can choose / large enough so that 

D* - 6 < E{d(u\T,)} < D* + 6 
For such a choice of / show that 

lim Pr {d(u T ml ) < D* - 6} = 



Hint: For u = (11,, u 2 , . . ., uj, note that 



where v = (v l5 v 2 , ..., v m ) is the minimum distortion path sequence in T ml . But for / = 1, 2, ..., m 



where T\ is the subtree of T ml in which v, belongs. Thus 

Pr {d(u T ml ) < D* - 6} < Pr j- d,(u,. | T\) < D* - S\ 
\ m i=i 




(e) Note that we have from (c) 

lim Pr {d(u \ T ml ) < D + <5} > 

m-oo 

and from (d) 

lim Pr {d(u \ T ml ) < D* - 6} = 

m-> oo 

From this show that, for any c > 0, there exists a binary tree code of rate R = (In 2)/n > R(D) such that 
the average distortion D* satisfies 

D* <D + c 

Here the average is taken over all source output sequences. 

7.11 Consider the same source coding situation presented in Prob. 7.10 where we consider binary tree 
codes of rate R = (In 2)/n nats per source symbol. For fixed source sequence u E <&,, we define over 
the tree code ensemble the probabilities 



where 



Show that 

G(t\l+l)= 

Numerical solutions to G(t \ I) show that for this symmetric source, tree codes also exhibit the doubly 
exponential behavior observed for block source coding of symmetric sources with balanced distor 
tions (see Chap. 8). 

7.12 Show that for continuous-amplitude memoryless sources if the condition on the distortion given 
by (7.5.1) is replaced by 

00 

j Q(u)d*(u, 0) du <d* for a > 1 

- oo 

then the source coding theorem (Theorem 7.5.1) has the form for (7.5.24) given by 



7.13 Show that convexity of R(D) implies that R(D) is a continuous strictly decreasing function of D
for D_min ≤ D ≤ D_max. Show that it further implies that if P ∈ 𝒫_D yields R(D) = I(P), then D(P) = D.

7.14 Let R(D) be the rate distortion function for a discrete memoryless source with distortion measure
{d(u, v): u ∈ 𝒰, v ∈ 𝒱}. Now consider another distortion measure defined as

    { d̃(u, v) = d(u, v) − min_{v ∈ 𝒱} d(u, v):  u ∈ 𝒰, v ∈ 𝒱 }

and let R̃(D) be the corresponding rate distortion function. Show that

    R̃(D) = R(D + D_min)

where

    D_min = Σ_u Q(u) min_v d(u, v)






7.15 Consider a source alphabet # = {0, 1} with probability 0(0) = Q(l) = \. Let the representation 
alphabet be i = {0, 1, 2} and the distortion defined as 

I v = 

d(Q, v) = 1 v = 1 

la r = 2 

r = 



where a < {. Sketch R(D) for this case. For a > 3, show that R(D) = In 2 - 

7.16 Suppose 𝒱 = 𝒰 and d(u, v) = 1 − δ_{uv}. For

    0 ≤ D ≤ (A − 1) min_u Q(u)

show that

    R(D) = H(𝒰) − ℋ(D) − D ln (A − 1)

7.17 (Gallager [1968]) Consider a discrete memoryless source with four equiprobable outputs from 
# = (1, 2, 3, 4}. Let r = {1, 2, 3, 4, 5, 6, 7}, and distortion be given by 

I u = v 

1 1 u = 1 or 2 and v = 5 

d(u, v) = ^ 1 u = 3 or 4 and r = 6 

3 , = 7 

oc otherwise 

Show that the rate distortion function is given as shown: 



In 2 



Figure P7. 17 



Note: With infinite distortion measure the source coding theorem still holds if there is a v* e i " 
such that u Q(u) d(u, v*) < oo (y* = 7 in this example). (For further results concerning infinite distor 
tion measures, see Gallager [1968]. Also note the discussion following Theorem 7.5.2.) 

7.18 For parameter s < 0, show that if a probability density (P s (^): oo < v < 00} satisfies (7.7.46), 
then Shannon s lower bound is tight. That is, R(D S ) = R LB (D S ). 

7.19 For memoryless sources the Shannon lower bound, R LB (D), is given by Theorem 7.7.4. 
(a) Show that the maximizing value of s < in the definition of R LB (D) satisfies 

\d(z)G s (z) dz = D 
where 




(b) Next let ^ D be the set of all probability densities for which 

I d(z)G(z) dz<D 
and use variational calculus to show that 



R LB (D) = h(W) - max h(G) 

Ge D 

7.20 (Berger [1971].) In Prob. 7.19, let d(u, v) = |u − v|.
(a) Show that

    R_LB(D) = h(𝒰) − ln (2eD)

(b) For R(D) = R_LB(D), {P(v): −∞ < v < ∞} must satisfy (7.7.46) (see Prob. 7.18). Show that
then P(v) must satisfy

    P(v) = Q(v) − D² Q″(v)        −∞ < v < ∞

(c) Apply (b) to a source with

    Q(u) = (a/2) e^{−a|u|}        −∞ < u < ∞

and show that

    R(D) = R_LB(D) = −ln (aD)        0 < D ≤ 1/a

(d) Apply (b) to a source with 

Q(u) = -(I + u 2 ) 2 -oo<u<oo 
n 

and show that 

3 <D < 

7.21 (Berger [1971].) For a memoryless Gaussian source and a difference distortion measure 
other than d(u, v) = (u v) 2 , show that the Shannon lower bound is never exact. That is, R(D) > 
R LB (D) for all D. Note the example of d(u, v) = \u-v\ in Sec. 7.7. 

Hint: Use Cramer s theorem. 

7.22 (Linkov [1965] and Pinkston [1966].) For a memoryless Gaussian source, find R LB (D) for 
d(u, v)= u - v | a . Check your results by specializing to a = 1 and a = 2. 

7.23 Generalize the lower bound in Lemma 7.6.4 to continuous-amplitude memoryless sources. Then 
show that this becomes the Shannon lower bound when d(u, v) is a difference distortion measure. 

7.24 Using the calculus of variations (see Courant and Hilbert [1953], chap. 4), prove Theorem 7.7.1. 

7.25 (Countably Infinite Size Alphabet) Discrete memoryless sources with a countably infinite size 
alphabet and unbound distortion measures have coding theorems that are given by Theorem 7.5.1 and 
Theorem 7.5.3, where R(D) is given by (7.5.23) with integrals replaced by summations. 




(a) For a discrete memoryless source with a countably infinite alphabet of integers ,/ = {0, 1, 
2, ...} and the magnitude distortion measure, d(u, v) = \u - r|, show that R(D) > R LB (D) where 
R LB (D) is given parametrically by 



R LB (D S ) = tf(#) + sD s - In (1 + e*) + (1 - 2(0)) In (1 - 
and 



for s < 0. 

Hint: Choose A(U) as follows (See Tan and Yao [1975].): 



(6) Show that necessary and sufficient conditions for R(D) = R LB (D) in (a) is that there exists a 
probability distribution (P(r)} that satisfies 



7.26 In Prob. 7.25, let the source have a Poisson distribution 



(u) = -e-* x>0, u = 0, 1, 2, .. 



Show that R(D) > R LB (D), for all < D < D max . 

7.27 In Prob. 7.25, let the source have a geometric distribution 

Q(u} = (\-Q} u < 9 < 1, u = 0, 1, 2, ... 
For this case show that R(D) = R LB (D) for all < D < D c where 



and s c is given by 



otherwise. 



(See Tan and Yao [1975].) 

7.28 (Shannon s First Theorem Revisited, Gallager [1976].) Consider a DMS with alphabet # and 
probability distribution Q(u), u$t. We encode each sequence u 6 3f N of length N by an index 
/(u) = m e (1, 2, ..., M} where M = e RN . The index m is sent over a noiseless channel to a source 
decoder that estimates the sequence by 




That is, û is chosen among all u which satisfy f(u) = m and maximizes Q_N(u), the probability of the
sequence u. We want to show that there exist encoders [described by f(u) = m] such that

    lim_{N→∞} Pr {û ≠ u} = 0

as long as R > H(𝒰), the entropy of the source. This system is shown below.





Figure P7.28 DMS → source encoder [f(u) = m] → noiseless channel [m ∈ {1, 2, ..., M}, M = e^{RN}]
→ source decoder [û = the u with f(u) = m that maximizes Q_N(u)].

(a) Define the functions 



...... -K 



Show that for any < p < 1 

Pr {u u} < Q N (u) I iA(u, u | f )<D(u, u | Q) 

u 

(b) Next, show that for any < p < 1 



(c) Randomly choose a source encoder function f such that over the ensemble of encoder 
functions for any u = u and any m, m we have 

Pr {/(u) = m, /(u) = m | u, u} = Pr {/(u) = m | u} Pr {/(u) = m | u} 



M 2 
Averaging Pr {u u} over the ensemble of encoders, show that 



<p < 1 



(rf) Prove the source coding theorem from above by showing that for any N there exists an 
encoder and decoder such that 



Pr {u=/=u} < 
where E S (R) > for R > H(%\ 

7.29 (Slepian and Wolf Extension to Side Information.) Suppose in Prob. 7.28 only the source decoder 
has additional side information v e T^ N such that when the source sequence is u e ^ then the source 
decoder receives index m from the channel as well as v. Here u, v have joint distribution 




The source decoder chooses u where 

u = max 1 Q v (u v) 

Prove the generalization of Prob. 7.28 for this side information at the decoder situation, by showing 
that for any N there exist encoders and decoders such that 

where E S (R) > for R > H(% i ). 

7.30 (Joint Source and Channel Coding Theorem, Gallager [1968].) 

(a) Let p v (v | x) be the transition probability assignment for sequences of length N on a discrete 
channel, and consider an ensemble of codes, in which M codewords are independently chosen, each 
with a probability assignment <j v (x). Let the messages encoded into these codewords have a probability 
assignment Q m , 1 < m < M, and consider a maximum a posteriori probability decoder, which, given 
y, chooses the m that maximizes Q m p.v(y|x m ). Let 



be the average error probability over this ensemble of messages and codes, and by modifying the proof 
of (3.1.14) where necessary, show that 



P e < 



(b) Let the channel be memoryless with transition probabilities p(y|.x), let the letters of the code 
words be independently chosen with probability assignment <j(.x), and let the messages be sequences of 
length L from a discrete memoryless source 31 with probability assignment Q(i), I < i < A. Show that 

P ^ O -* E O(P-<\ 



= (l+p)\n 



(c) Show that E s (0) = 0, that 



= tf (#) 



and that E s (p) is strictly increasing in p [if no Q(i) = 1]. 

(d) Let /. = L/N, and let N -> oo with A fixed. Show that P e - if 
channel capacity. 



< C where C is the 



CHAPTER 

EIGHT 

RATE DISTORTION THEORY: 
MEMORY, GAUSSIAN SOURCES, 
AND UNIVERSAL CODING 



8.1 MEMORYLESS VECTOR SOURCES 

Chapter 7 presented the rate distortion theory for memoryless sources that emit 
discrete or continuous random variables. For these sources, an output occurs once 
every T s seconds, and the sequence of outputs are independent random variables 
with identical probability distributions. We can extend these results to mem 
oryless sources with outputs that belong to more abstract alphabets. For example, 
the output of the memoryless source may be a random vector, a continuous-time 
random process, or a random field. By generalizing in this manner, we can extend 
the theory to more general sources with memory. 

Consider a memoryless source that outputs every T s seconds a random vector 
of dimension L denoted by x. Here 

    x = (u^(1), u^(2), ..., u^(L))        (8.1.1)

where¹

    u^(l) ∈ 𝒰 = {a_1, a_2, ..., a_A}        l = 1, 2, ..., L



1 We can equally allow each component to belong to a different alphabet. Although x is a vector, 
we regard it as a letter from some abstract alphabet X = % L . 





Denote the alphabet for all such vectors by 𝒳 = 𝒰^L and assume that the probability
distribution for x ∈ 𝒳 is given by Q(x). Note that the components of x are not
necessarily independent. We represent the L-dimensional vector source outputs by vectors

    y = (v^(1), v^(2), ..., v^(L))        (8.1.2)

belonging to the alphabet 𝒴 = 𝒱^L, where 𝒱 = {b_1, b_2, ..., b_B}. Throughout this
discussion, assume that for each source-user pair of vectors x ∈ 𝒰^L and y ∈ 𝒱^L,
we have a bounded distortion defined by the set of L distortion measures

    0 ≤ d^(l)(u, v) ≤ d_0^(l) < ∞        for all u ∈ 𝒰, v ∈ 𝒱,  l = 1, 2, ..., L        (8.1.3)



The memoryless source that emits a vector of dimension L every T s seconds 
can be viewed as L memoryless sources with outputs every 7^ seconds that are not 
necessarily independent of each other. This description is shown in Fig. 8.1, where 
we assume that only one noiseless channel is available. From this viewpoint, we 
have L users who seek an estimate of the corresponding L source outputs, and 
each source-user pair has a distortion measure given by (8.1.3). 

Although a single-letter distortion measure is given for each source-user com 
ponent pair, there is no overall fidelity criterion for evaluating or designing a 
source encoder-decoder system. A vector distortion measure consisting of the L 
single-letter distortion measures, for example, is inadequate because two systems 
yielding two average vector distortions generally cannot be compared, since vec 
tors, unlike real numbers, cannot be completely ordered. Therefore, we require 
some overall real-valued distortion measure to proceed further in our analysis. 
Next, we consider two such distortion measures. 





Source 



Encoder 



H 



Noiseless 
channel 



Source 



Decoder 





Figure 8.1 Multiple source-user system. 




8.1.1 Sum Distortion Measure 

A natural choice for a single real-valued distortion measure is the sum distortion
measure between x ∈ 𝒳 and y ∈ 𝒴 defined by

    γ(x, y) = Σ_{l=1}^{L} d^(l)(u^(l), v^(l))        (8.1.4)

where

    x = (u^(1), u^(2), ..., u^(L))   and   y = (v^(1), v^(2), ..., v^(L))        (8.1.5)

For sequences of N successive terms x ∈ 𝒳^N and y ∈ 𝒴^N the obvious generalization
is

    γ_N(x, y) = (1/N) Σ_{n=1}^{N} γ(x_n, y_n) = Σ_{l=1}^{L} d_N^(l)(u^(l), v^(l))        (8.1.6)

where

    d_N^(l)(u^(l), v^(l)) = (1/N) Σ_{n=1}^{N} d^(l)(u_n^(l), v_n^(l))        l = 1, 2, ..., L        (8.1.7)

Since d^(l)(u, v) ≤ d_0^(l) < ∞ for all l, we have

    γ(x, y) ≤ Σ_{l=1}^{L} d_0^(l) < ∞

Hence, for the sum distortion measure defined above, we have reduced the problem
to a single discrete memoryless source with alphabet 𝒳, probability Q(·),
representation alphabet 𝒴, and a bounded single-letter distortion measure γ(x, y).
The coding theorems of Sec. 7.2 apply directly and the rate distortion function is
given by [see (7.2.53)]

    R(D) = min_{P ∈ 𝒫_D} I(P)        (8.1.8)

where

    𝒫_D = { P(y|x): Σ_x Σ_y Q(x) P(y|x) γ(x, y) ≤ D }        (8.1.9)

From Theorem 7.6.1, necessary and sufficient conditions for P ∈ 𝒫_D to
achieve R(D) = I(P) are given by

    P(y|x) = λ(x) P(y) e^{s γ(x, y)}        if P(y) > 0        (8.1.10)

and

    Σ_x λ(x) Q(x) e^{s γ(x, y)} ≤ 1        if P(y) = 0        (8.1.11)

where

    λ(x) = [ Σ_y P(y) e^{s γ(x, y)} ]^{−1}        x ∈ 𝒳        (8.1.12)

and s < 0 satisfies the parametric equations for R(D)

    D = Σ_x Σ_y λ(x) Q(x) P(y) e^{s γ(x, y)} γ(x, y)        (8.1.13)

and

    R(D) = sD + Σ_x Q(x) ln λ(x)        (8.1.14)

Note that the components of each x are not necessarily independent, although the 
successive vectors Xj, x 2 , ..., X N are mutually independent. A special case of 
interest is when the L components of the source output vectors are independent of 
each other, so that in the description of Fig. 8.1, we have L independent mem- 
oryless sources. 

Lemma 8.1.1: Independent components sum distortion measure For independent
components and the sum distortion measure, the rate distortion
function is given parametrically by

    D_s = Σ_{l=1}^{L} D_s^(l)        (8.1.15)

and

    R(D_s) = Σ_{l=1}^{L} R^(l)(D_s^(l))        (8.1.16)

where R^(l)(D_s^(l)) is the rate distortion function for the lth component with the
lth distortion measure, and is given parametrically by the same parameter
s < 0 for all l = 1, 2, ..., L.

PROOF Let {Q^(l)(u): u ∈ 𝒰} be the lth source output component probability
distribution. Recall that the distortion measure for this component is given by
d^(l)(u, v). (We can regard each component as an output of some discrete
memoryless source.) Suppose the conditional probability P^(l)(v|u) achieves
the rate distortion function of the lth component sequence for parameter
s < 0 and thus satisfies

    P^(l)(v|u) = λ^(l)(u) P^(l)(v) e^{s d^(l)(u, v)}        if P^(l)(v) > 0        (8.1.17)

and

    Σ_u λ^(l)(u) Q^(l)(u) e^{s d^(l)(u, v)} ≤ 1        if P^(l)(v) = 0        (8.1.18)

where

    λ^(l)(u) = [ Σ_v P^(l)(v) e^{s d^(l)(u, v)} ]^{−1}        u ∈ 𝒰        (8.1.19)

and that it also satisfies the parametric equations

    D_s^(l) = Σ_u Σ_v λ^(l)(u) Q^(l)(u) P^(l)(v) e^{s d^(l)(u, v)} d^(l)(u, v)        (8.1.20)

and

    R^(l)(D_s^(l)) = s D_s^(l) + Σ_u Q^(l)(u) ln λ^(l)(u)        (8.1.21)

Since the sources are independent, we have

    Q(x) = Π_{l=1}^{L} Q^(l)(u^(l))        (8.1.22)

Defining

    P(y|x) = Π_{l=1}^{L} P^(l)(v^(l)|u^(l))        (8.1.23)

and

    λ(x) = Π_{l=1}^{L} λ^(l)(u^(l))        (8.1.24)

we see that this choice of {P(y|x): y ∈ 𝒴, x ∈ 𝒳} and {λ(x): x ∈ 𝒳} satisfies the
necessary and sufficient conditions of (8.1.10) to (8.1.14), giving the desired
result.

One expects that when the L source output components are not independent, 
then the rate distortion function is upper-bounded by the corresponding rate 
distortion function when we assume the components are independent. We show 
this next. 

Theorem 8.1.1 For the sum distortion measure, the rate distortion function
R(D) is bounded by R̄(D), the rate distortion function obtained if the source
output components were independent with the same marginal probability
distributions. That is,

    R(D) ≤ R̄(D)        (8.1.25)

where R̄(D) is given by (8.1.15) and (8.1.16).

PROOF Recall that for any P ∈ 𝒫_D

    R(D) ≤ I(P)        (8.1.26)

But from Lemma 1.2.1

    I(P) ≤ Σ_x Σ_y Q(x) P(y|x) ln [ P(y|x) / P(y) ]        (8.1.27)

for any probability distribution P(y). Choose

    P(y) = Π_{l=1}^{L} P^(l)(v^(l))        (8.1.28)

and

    P(y|x) = Π_{l=1}^{L} P^(l)(v^(l)|u^(l))        (8.1.29)

where {P^(l)(·)} and {P^(l)(·|·)} correspond to the P ∈ 𝒫_{D_s^(l)} that achieves R^(l)(D_s^(l))
for each l = 1, 2, ..., L. Then

    I(P) ≤ Σ_{l=1}^{L} Σ_u Σ_v Q^(l)(u) P^(l)(v|u) ln [ P^(l)(v|u) / P^(l)(v) ]

         = Σ_{l=1}^{L} R^(l)(D^(l))        (8.1.30)

These theorems also hold for continuous-amplitude random vectors under 
the bounded variance condition on the distortion measures of each of the L 
source-user component pairs. The memoryless source with vector outputs, 
together with the above sum distortion measure, is a very useful model in under 
standing the problem of encoding sources with memory. We will return to the 
above results when we discuss both discrete-time and continuous-time sources 
with memory. 

Example (Gaussian vector sources, squared-error distortion) Suppose we have a memoryless
source that emits every T_s seconds a vector with L independent zero-mean Gaussian components
where

    E{(u^(l))²} = ∫_{−∞}^{∞} u² Q^(l)(u) du = σ_l²        l = 1, 2, ..., L        (8.1.31)

Also let d^(l)(u, v) = (u − v)² for l = 1, 2, ..., L. For the lth source-user component pair (see
Sec. 7.7), we have the rate distortion function

    R^(l)(D^(l)) = max [ 0, ½ ln (σ_l²/D^(l)) ]        (8.1.32)

with slope (Lemma 7.7.1)

    s = −1/(2D^(l))        0 < D^(l) ≤ σ_l²        (8.1.33)

for l = 1, 2, ..., L. Hence for common parameter s, for each component

    D_s^(l) = −1/(2s)   for −∞ < s ≤ −1/(2σ_l²),    D_s^(l) = σ_l²   for −1/(2σ_l²) < s < 0        (8.1.34)

or

    D_s^(l) = min (θ, σ_l²)        (8.1.35)

where

    θ = −1/(2s) > 0        (8.1.36)

For the sum distortion measure, the rate distortion function is thus given in terms of parameter
θ as

    D_θ = Σ_{l=1}^{L} min (θ, σ_l²)        (8.1.37)

and

    R(D_θ) = Σ_{l=1}^{L} max [ 0, ½ ln (σ_l²/θ) ]        (8.1.38)

For small distortions D ≤ min {σ_1², σ_2², ..., σ_L²}, we have θ = D/L and this becomes

    R(D) = Σ_{l=1}^{L} ½ ln (L σ_l²/D)        (8.1.39)
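Equations (8.1.37) and (8.1.38) are straightforward to evaluate numerically: for a target sum
distortion D one solves (8.1.37) for θ (the left side is nondecreasing in θ, so bisection works) and
substitutes into (8.1.38). Below is a minimal sketch in Python with NumPy; the function name and
the bisection search are ours for illustration, not something from the text.

    import numpy as np

    def gaussian_vector_rd(variances, D):
        """R(D) in nats for L independent Gaussian components under the
        sum squared-error distortion, via (8.1.37)-(8.1.38).

        variances : sequence of sigma_l^2
        D         : target sum distortion, 0 < D < sum(variances)
        """
        var = np.asarray(variances, dtype=float)

        def total_distortion(theta):
            return np.sum(np.minimum(theta, var))        # (8.1.37)

        # Bisection on theta: total distortion increases with theta
        lo, hi = 0.0, var.max()
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if total_distortion(mid) < D:
                lo = mid
            else:
                hi = mid
        theta = 0.5 * (lo + hi)
        R = np.sum(np.maximum(0.0, 0.5 * np.log(var / theta)))   # (8.1.38)
        return theta, R

For instance, with variances (4, 1, 0.25) and D = 0.6 the solution places θ below every σ_l², which
is the small-distortion regime leading to (8.1.39).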



8.1.2 Maximum Distortion Measure 

For the memoryless source with vector outputs, where there is a set of L source- 
user component distortion measures (8.1.3), another natural choice for a single 
real-valued distortion measure for sequences of length N is to define the distortion 
between x ∈ 𝒳^N and y ∈ 𝒴^N as

    γ_N(x, y) = max_l { d_N^(l)(u^(l), v^(l)) − θ_l }

             = max_l { (1/N) Σ_{n=1}^{N} d^(l)(u_n^(l), v_n^(l)) − θ_l }        (8.1.40)

where θ_1, θ_2, ..., θ_L is a set of nonnegative real numbers. Recall that for each
n = 1, 2, ..., N

    x_n = (u_n^(1), u_n^(2), ..., u_n^(L))
    y_n = (v_n^(1), v_n^(2), ..., v_n^(L))

This distortion measure is essentially the maximum of the L distortions of the 
source-user pairs. The bias parameters {0J allow for control of the amount of 
distortion of each source-user pair. Since we attempt to minimize distortion 
7jv(x, y), this can be viewed as a minimax approach. Note that although the source 
is memoryless, the above distortion is no longer a single letter distortion measure. 
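As a small illustration of (8.1.40), γ_N(x, y) is just the largest of the L per-pair average
distortions after the biases θ_l are subtracted; a code is then judged by the minimum of this
quantity over its codewords, as in (8.1.43) below. A minimal sketch in Python with NumPy (the
array layout and names are ours):

    import numpy as np

    def max_distortion(dists, thetas):
        """gamma_N(x, y) of (8.1.40).

        dists  : array of shape (L, N); dists[l, n] = d^(l)(u_n^(l), v_n^(l))
        thetas : array of shape (L,); nonnegative bias parameters theta_l
        """
        per_pair = dists.mean(axis=1)        # d_N^(l)(u^(l), v^(l)), one per pair
        return np.max(per_pair - np.asarray(thetas, dtype=float))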




That is 

Hence it appears that the coding theorem of Sec. 7.2 will no longer apply. 
However, with only slight modifications, the earlier coding-theorem proofs can be 
applied to this maximum distortion measure. 

For a code M = {y l9 y 2 , . . . , y M } of block length N and rate R = (In M)/N nats 
per source symbol, the average distortion is 



v(<) = I e,v(x)y(x 



= Z6jv(x)miny,v(x, y) 

x y e 



(8.1.43) 



Now for any conditional probability distribution {P(y |x): y E #, x e 3C\ consider 
a code ensemble where all codeword components are selected independently 
according to (P(y): y e ^} where 

<) ye& (8.1.44) 



Then following Sec. 7.2 leading to (7.2.30), we find that the code-ensemble average 
of y(M) is bounded by 



- 



(8.1.45) 



where 



and where 



= 11 C.v(xV\(y I x) max {<(u<, v ) - 



*-* 

/=i 

E(R, P) = max E(R\ p, P) 

- i<p<o 

E(R;p,P)= -pR + E (p,P) 



(8.1.46) 

(8.1.47) 
(8.1.48) 
(8.1.49) 
(8.1.50) 



and 



(8.1.51) 




The only change from the form in (7.2.30) is the term y N (P) which is bounded 
further by defining sets 

j*i = {(x, y): d ( ff(u (l \ v (/) ) - 0, > 0} / - 1, 2, . . ., L (8.1.52) 

and the union 



** = U ^i = (*> y) : max ( (/) > v(/) ) - 0i) > o (8.1.53) 

/=! / 

Then, restricting the sum to st and using the union-of-events bound, we have 



^ Z Z OVMy I *) max {4V. v (/) ) - 0, 

(x, y) e j* / 



<max4 IJ ZZ fi*MJyy|x) 

I (x, y) e st 

I L 

< 70 Pr (x, y) e d = \]s/ k 



1=1 

Using this to further bound (8.1.45) gives 

L 

< y V Pr (d (/) (u (i) v (/) ) > 9 \P] + y e~ NE(R p) (8 1 55) 



/=! 

where 

E(R, P)>0 forfl>/(P) (8.1.56) 

Suppose we now wish to encode in such a way as to achieve average distor 
tions {D (l) : I = 1, 2, . . . , L} for each source-user pair. Let D = (D (l \ D (2 \ . . . , D (L) ) 
be the desired vector distortion. Consider the average of d (l \u (l \ v (l} ) over the joint 
distribution Q(x)P(y \ x) = Q(u (1 \ u (2 \ ..., u (L) )P(v {l \ e< 2) ,..., v (L) \u (l \ u (2 \ . . . , u (L) ) 

Z Z Q(x)P(y \ x)d (l) (u (l \ v (l) ) = Z Z Q (l \u)P (l \v \ u)d (l \u, v) (8.1.57) 

x y u v 

where Q (l) (u) and P (l) (v\u) are marginal distributions of Q(x) and P(y\x\ and 
define the class of conditional probability distributions 

) : Z Z Q(x)P(y\x)d (l) (u (l \ v (l) ) < D (/) ; / - 1, 2, ..., L (8.1.58) 



We now define the vector rate distortion function as 

K(D)= min/(P) (8.1.59) 




To show that #(D) indeed is the rate distortion function for encoding each 
source-user component pair with distortion {D (/) : / = 1, ..., L}, we must prove a 
coding theorem and its converse. 

Theorem 8.1.2: Source coding theorem vector distortion Given c > and 
desired distortions {D (/) : / = 1, 2, ..., L} for the L source-user component 
pairs, there exists an integer N e such that for each block length N > N c there 
exists a code M = {y l9 y 2 , . . . , y M } of rate R < R(D) + e for which 



G, v (x) min max 



- D> < c 



(8.1.60) 



That is, the /th source-user pair has average distortion less than or equal to 

PROOF In equality (8.1.55), choose parameters O t = D (l) + e/2. Then for 
each / 

l I 1 N (i) (i) (i) e 

(8.1.61) 

For Pe^ D and source distribution Q( ), the terms {d (l) (uH\ vli } ): 
n = 1, 2, ..., N} are independent, identically distributed random variables 
with mean values less than or equal to D (l \ Hence by the weak law of large 
numbers 



Pr 



lim Pr 



P =0 



(8.1.62) 



for any P e ^ D . In particular, let P e ^ D achieve K(D) = /(P). Then from 
(8.1.55) and (8.1.62), for any R > R(D) there exists an integer N t such that, for 
any block length N > N t 



y(%) < 70 I Pr \dW(u (l \ v (/) ) > 

/=! I 



-NE(R,P) 



(8.1.63) 



Hence there exists a code ^ of rate R < R(D) + and block length N > 
such that 



mn max 



or 






II N 

Q N (\) min max - V 



(8.1.64) 
< (8.1.65) 




Theorem 8.1.3: Converse source coding theorem vector distortion If a 

code $ = {y l9 y 2 , . . . , y M } of block length N and rate R = (In M)/N achieves 
average distortion {D (/) : / = 1, 2, ..., L} for each of the L source-user com 
ponent pairs, then R > R(D) where D = (D (1) , D (2) , . . ., D (L) ). 

PROOF For the code ^ = {y 1? y 2 , . . ., y M }, define the conditional probability 

V e $ and 



(8.1.66) 
otherwise 

Then since code 3ft achieves average distortion D (/) for each /, it follows that 

/= 1, 2, ...,L (8.1.67) 



y 



Let P (n) (y|x) be the nth marginal distribution of P N (y|x) and define the 
probability distribution 



Then for each / 

Z I e*(xn(y | xKV, v (l) ) = Z I fi w (xn(y I ") Z 



xy xy 



x y 



<D (/) (8.1.69) 

Hence P given by (8.1.68) belongs to ^ D and thus 



(8.1.70) 
From inequalities (7.2.46) and (7.2.47), we have bounds 



n=l 



<-lnM 
N 



(8.1.71) 




Theorems 8.1.2 and 8.1.3 establish -R(D) as the rate distortion function for the 
vector source with a maximum distortion measure. For the special case where the 
L source components are independent, we have the following corollary. 

Corollary 8.1.4: Independent components maximum distortion When the
L source components are independent so that

    Q(x) = Π_{l=1}^{L} Q^(l)(u^(l))

where x = (u^(1), u^(2), ..., u^(L)), the rate distortion function is

    R(D) = Σ_{l=1}^{L} R^(l)(D^(l))        (8.1.72)

where R (l) (D (l) ) is the rate distortion function of the /th component of the 
source output vector. 

PROOF This follows directly from the proofs of Theorems 8.1.2 and 8.1.3 with 
the further independence P(y \ x) = f[f = x P (l) (v (l) \ u (l} ). Heuristically, since the 
sources are independent in the multiple source-user description of Fig. 8.1, 
the source encoder is forced to send, for each source-user pair, enough infor 
mation to achieve its distortion independent of information for other source- 
user pairs. 



8.2 SOURCES WITH MEMORY 

Although many sources that arise in practice can be modeled as discrete-time 
sources with real-valued output symbols, they generally have memory of some 
sort. By taking advantage of the statistical dependence between source output 
symbols, for a given fidelity criterion, sources with memory can be encoded using 
fewer bits per source symbol than with corresponding memoryless sources. For a 
given average distortion level D, the rate distortion function for a source with 
statistical dependence between output symbols is less than for a corresponding 
memoryless source. Theorem 8.1.1 shows this to be true in a special case. Indeed, 
for memoryless sources, the data rate cannot be reduced without incurring large 
distortions. For this reason, source coding techniques of rate distortion theory are 
mainly worthwhile for sources with memory. In this section, we examine discrete- 
time stationary sources with memory and define the rate distortion function for 
discrete-time stationary ergodic sources. 

Many sources, such as speech, are modeled as continuous-time sources. Con 
tinuous-time sources can be treated as discrete-time sources with source alphabets 
that are time functions. By considering general alphabets, we can treat a large 
class of sources, including picture sources such as television. Coding theorems for 
discrete-time stationary ergodic sources with general abstract alphabets are given 
in Berger [1971]. In Sec. 8.4 of this chapter, we examine a few Gaussian source 
examples of these more abstract alphabets. 




Let us consider now a discrete-time source with statistically dependent source
output symbols. For convenience, attention is restricted to a source with discrete
output alphabet 𝒰 = {a_1, a_2, ..., a_A}. Let u = (..., u_{−1}, u_0, u_1, ...) denote the
random sequence of output letters produced by the source.² The source is completely
described by the probabilities

    Q_L(α_1, α_2, ..., α_L; t) = Pr {u_{1+t} = α_1, u_{2+t} = α_2, ..., u_{L+t} = α_L}

for all times t and lengths L. In general, little can be said about source coding of
sources which are nonstationary. Hence, we assume throughout this section that
the source is stationary; that is

    Q_L(α_1, α_2, ..., α_L; t) = Q_L(α_1, α_2, ..., α_L)        (8.2.1)

is independent of time t for all letter sequences {α_1, α_2, ..., α_L} and all lengths L.
In addition to assuming that the source is stationary, we temporarily require 
that the source also be ergodic. (Later, in Sec. 8.6, we shall relax this ergodicity 
assumption by examining an example of a nonergodic stationary source which we 
can encode efficiently.) Ergodicity is essentially equivalent to the requirement that 
the time averages over any sample source output sequence are equal to the en 
semble averages. Specifically, let u = (..., u_{−1}, u_0, u_1, ...) be a sample output
sequence and let u^l denote the sequence u shifted in time by l positions. That is

    u_t^l = u_{t+l}        for all t        (8.2.2)

Also, let f_N(u) be a function of u that depends only on u_1, u_2, ..., u_N. Then a
stationary source is ergodic if and only if, for all N ≥ 1 and all such functions f_N(u)
for which

    E{|f_N(u)|} < ∞        (8.2.3)

we have

    lim_{L→∞} (1/L) Σ_{l=1}^{L} f_N(u^l) = E{f_N(u)}        (8.2.4)

for all source sequences u (except at most a set of probability zero). Here E{ } is 
the usual ensemble average. The ergodicity assumption will ensure that if a source 
code is " good " for encoding a particular sample sequence with fidelity D, then it 
will also be good for all sample sequences of the stationary ergodic source. This 
will become even more evident when we consider an example of a nonergodic 
stationary source. 

Now suppose we have a discrete-time stationary ergodic source, as described 
above, with discrete source alphabet ^ = {a l5 a 2 , , a A }, representation 
alphabet *V = {b l9 b 2 , ..., b B }, and a bounded single-letter distortion measure 
{d(u, v)} where < d(u, v) < d for all M, v. The source probabilities are given by 



2 Continuous-amplitude sources can be easily handled by replacing probability distributions with 
probability density functions. Coding theorems follow with an appropriate bounded moment condi 
tion similar to that imposed in Sec. 7.5. 




{Q_L(u_1, u_2, ..., u_L), L ≥ 1}. Shannon [1959] and Gallager [1968] have shown that
the rate distortion function is given by

    R(D) = lim_{L→∞} R_L(D)        (8.2.5)

where

    R_L(D) = min_{P_L ∈ 𝒫_{D,L}} (1/L) I(P_L)        (8.2.6)

    𝒫_{D,L} = { P_L(v|u): Σ_u Σ_v Q_L(u) P_L(v|u) d_L(u, v) ≤ D }        (8.2.7)

    I(P_L) = Σ_u Σ_v Q_L(u) P_L(v|u) ln [ P_L(v|u) / P_L(v) ]        (8.2.8)

    P_L(v) = Σ_u Q_L(u) P_L(v|u)        (8.2.9)

    d_L(u, v) = (1/L) Σ_{l=1}^{L} d(u_l, v_l)        (8.2.10)



The coding theorem for this case is rather difficult to prove. A direct proof will be 
given here only for the Gaussian source with the squared-error distortion meas 
ure. General proofs can be found in Gallager [1968] and Berger [1971]. 

We present instead a simple heuristic argument which requires an additional 
assumption that appears reasonable for many real source models. Assume that 
there exists a finite interval T such that source outputs separated by T or more 
units of time are statistically independent. That is, for two random source output 
letters u t and u t > at corresponding times t and t , for which 1 1 t \ > T , we have 

Q(u t ,U t ,}=Q(u t )Q(u t ,} (8.2.11) 

For many real sources, a model with such a finite interval of dependence seems 
reasonable. From a mathematical point of view, this is a rather strong assumption 
which simplifies our heuristic argument that (8.2.5) is the rate distortion function. 
Consider grouping together consecutive source output symbols into groups 
of length L+ T . Out of each group we only attempt to encode the first L 
source output symbols and ignore the remaining T symbols (i.e., we neither 
represent them nor send them, although the decoder knows that these last T 
symbols are missing). Because of our assumption we then have a sequence of 
independent identically distributed sets of source output sequences, each consist 
ing of L source output symbols. Defining 

x = (u v , u 2 , ..., u,) e X = %, 

y = (lll2 J 6 * = y-* (8 - 2 12) 

and distortion 

1 L 
d L ( x , y) = T Y d(u t , v t ) < d (8.2.13) 




we have a new extended discrete memoryless source with source probability 
Q L (x) = Q L (UI, u 2 , . . . , U L ) for each letter x e 3C = ^ L and single-letter distortion 
measure d L (x, y). Applying the results of Sec. 8.1 for a memoryless source with 
vector outputs, the rate distortion function for the extended discrete memoryless 
source is given by 

R(D; L) = min T(P L ) nats/extended source symbol (8.2.14) 
where 



u): I I QP L (v |u)4(u, v) < D 



(8.2.15) 



Here the dimensions of R(D; L) are nats per symbol of the extended source. But 
each extended source symbol corresponds to L + T actual source output sym 
bols. Hence, in terms of nats per actual source symbol, (8.2.14) becomes, using 
(8.2.6), 

^ = R L (D) nats/source symbol (8.2.16) 

L -+ TO L + TO 

Since the T unrepresented source symbols can produce upon decoding the maxi 
mum distortion d , this means that by using the above encoding strategy we can 
achieve average distortion 

D + d 
with a code of rate 



Clearly, by letting L-> oo, we can achieve average distortion D with a code rate 

R(D) = lim R L (D) 

L-KX, 

Hence we see that if there exists a finite interval T such that source outputs 
separated by T or more units of time are statistically independent, then the 
heuristic proof of the coding theorem follows directly from the coding theorem for 
memoryless vector sources. For general stationary ergodic sources there are simi 
lar (though more difficult to prove) coding theorems resulting in the definition of 
the rate distortion function given in (8.2.5). 

In the above discussion we assumed that R L (D) converged for stationary 
ergodic sources. We can interpret R L (D) as the normalized rate distortion function 
of a memoryless vector source of L components as described in Sec. 8.1. By 
increasing L, more of the statistical dependence between source outputs can be 
exploited, so that we expect the required rate per source symbol to decrease with 
an increase in L. This is true for all stationary sources, as shown next. 




Lemma 8.2.1

    R(D) = lim_{L→∞} R_L(D) = inf_{L ≥ 1} R_L(D)        (8.2.17)



PROOF Consider integers / and w and let N = I + w. Let P, and P m be condi 
tional probabilities that achieve R^D) = (l//)/(P,) and R m (D) = (l/w)/(P m ) 
and define 

P N (v | u) = P^v | u )P m (v m | u m ) where v = v v" 1 , u = u l u m (8.2.18) 
Then 



+ 



= 11 Q 

= i I Z 

^v u / v / 



(8.2.19) 



Hence P v (v|u) belongs to 3? D N , and from Lemma 1.2.1 of Chap. 1 






(8.2.20) 



where P N (v) is any probability distribution. We choose P N (v) = P / (v )P m (v m ) 
where 

p^H I aw^i") 

- / 1 \ X ^ ^^. /\T-4/Ml\ V*"*/ 

and 
Hence 



(Pi) 



\T \ m 

N \m 



(8.2.22) 




Now let R(D) = inf L > 1 R L (D). Then for any e > 0, choose N to satisfy 

R N (D)<R(D) + c (8.2.23) 

From (8.2.22), letting / = m = N, we have 

R 2N (D)<R N (D) + R N (D) 
= R N (D) 

< R(D) + (8.2.24) 

Similarly 

+ e for all /c > 1 (8.2.25) 



For any integer L, we can find /c and j such that L = kN + j where < j < 
N-l. Then 

kN i 

R L (D)< R kN (D) + Rj(D) 

N 

e] + -R j (D) 
~Rj(D) 
< R(D) + e + ~ R(D) where R(D) = sup R L (D) (8.2.26) 

L L>1 

Since c > is arbitrary, we have 

lim R L (D) = R(D) 

L-oo 

Having given a heuristic argument to motivate the coding theorem for at least 
a subclass of stationary ergodic sources, we next prove the general converse 
coding theorem. 

Theorem 8.2.1: Converse source coding theorem stationary ergodic sources 

For any source encoder-decoder pair, if the average distortion is less than 
or equal to D, then the rate R must satisfy R ≥ R(D).

PROOF Any encoder-decoder pair defines a mapping from source sequences 
to user sequences. For any length N, consider the mapping from W N to i^ N , 
where we let M be the number of distinct sequences in i^ N into which se 
quences of W N are mapped. Define 

, (1 if v is the sequence into which u is mapped 

" /v( V U ) = \rv (5.2.2/j 

|0 otherwise 




Then if this mapping results in average distortion of D or less, we have average 
distortion 



< D (8.2.28) 

and hence P v e # D v . Thus 



*^N\" / r * V* N/ 

< ~ In M (8.2.29) 

Since 

K(D) - inf R L (D) < R N (D) 

L> 1 

we have R(D) < (l/N) In M = K. 

This converse theorem together with the heuristic proof of the coding theorem 
completes our justification of R(D) given by (8.2.5) as the rate distortion function
for stationary ergodic sources. 3 This discussion easily extends to continuous am 
plitude stationary ergodic sources where, instead of a bounded distortion meas 
ure, we require a bounded moment condition on the distortion measure. 

Another form of R(D) for stationary ergodic sources can be obtained using a 
definition given in terms of random processes, rather than limits of minimizations 
involving random vectors. This definition of a rate distortion function is analo 
gous to Khinchine s process definition of channel capacity [1957]. Again consider 
the stationary ergodic source described above. Next suppose there is a jointly 
stationary ergodic random process {u n , v n ] consisting of pairs u n e % and r n e 1 . 
This implies that there is a consistent family of probability distribution functions 
{P v (u, v): u 6 # v , v e 1~ N } for all N which satisfies the condition 

Q.v(u) = I P.v(u, v) for all N (8.2.30) 

V 

Given a stationary ergodic source, there always exists a jointly ergodic pair source 
that satisfies this condition. Since the pair process is stationary, we can define the 
average per letter mutual information 

/,= lim ~I p (^ N ,r N ) (8.2.31) 



3 This converse theorem is true for nonergodic stationary sources if we interpret average distortion 
as an ensemble average. 




where the subscript p emphasizes the dependence of the particular pair processes. 
In addition, we have average distortion 



I" 



= Z Z p ("> W(u, ) (8.2.32) 

U r 

For this particular jointly ergodic process, a sequence of block codes of rates 
approaching I p can be found that can achieve average distortion arbitrarily close 
to> p . 

Theorem 8.2.2 Given any c > and the jointly stationary ergodic process 
defined above which satisfies the source condition (8.2.30), there exists a 
sequence of block codes {& N } each of rate R < I p + e such that the average 
distortions [d($ N )} satisfy 

lim d(3$ N ) < D p + c 

PROOF For any block code 4 & N = [\ l9 v 2 , . . . , V M }, we have average distortion 

Ps) 

)d(u\a N ) (8.2.33) 



where 

d(u\ N )= min ^(u, v) (8.2.34) 

v e3$N 

Let 

(Dfu m\-{ 1 d( U \^)>d,(u,y) 
^-| 

Then 

= I Z ^(u, 



Noting that in the first term we have 

d(u |^)[1 - <D(u, v; ^)] < J N (u, v) (8.2.36) 



4 Each codeword here corresponds to the choice of N random processes Vj, v 2 , ..., v v as a 
mapping for the N source processes u t , u 2 , . . . , U N . 






and in the second term we can have 

d(u\3 N )<d 
then 

d( N ) < D p + d I P*(u, v)<D(u, v; N 

U V 

Defining 5 

/\(v) = lP N (u,v) 

U 

we bound the second term using the Holder inequality, 
P v (u, v)<D(u, v; N ) 



= IlP W (v)[ 



*IK 



<D(u, v; 



(8.2.37) 
(8.2.38) 

(8.2.39) 



(8.2.40) 



for any - 1 < p < 0. Averaging the bound with respect to an ensemble of 
codes of block length N and rate R where codewords are chosen indepen 
dently with probability distribution P N (v), results in 



v; 



P. v (v)d)(u, ;* 

./ i r 

\M + 1 / 



= e Np * (8.2.41) 

where the first inequality follows from the Jensen inequality and the first 
equality follows from the complete symmetry of {v, Vj, v 2 , ..., V M }. Thus, 



(8.2.42) 



for - 1 < p < 0. Certainly there exists at least one code ^ N in the ensemble 
which also satisfies this bound on the ensemble average. 

Next, letting p = a/N for any a > 0, choose N > a so that 



p= - 



(-1,0) 



5 As in the proof of Lemma 7.2.1, v is a dummy vector. 






Now consider the identity 



l + p 



N + l- 



= 1 + ~ + o(N) where 



= (*/N) 2 /[l - (a/N)] (8.2.43) 



and consider the inequality 

y |y P N O 

^_j \L^ J* V 



l + p 



< 1 1 e^u 



= |y y 

\L L 

\ U V 



v) 



Q N (u)P N (v 

P (U, V 



Substituting this into (8.2.42) yields for some code & N 



<D 



ln 



v) 



(8.2.44) 



1 1 - *IN 



(8.2.45) 

for any a > and N large enough to guarantee N > a. For jointly ergodic 
sources (McMillan [1953]) we have 

r / \ 

(8.2.46) 



where the convergence is with probability one. Hence 



lim d(3 N ) <D p + d Q e~* (R - 1 ^ (8.2.47) 

N-*oo 

Finally, choose R = I p 4- e/2 and a = (2/e) In (d A) where e < </ , so that 



lim d(38 N ) <D p + 



for K < I p + . 



According to Theorem 8.2.2, K(D) is the smallest possible rate for average 
distortion D. Thus 



inf I p > R(D) 

D P <D 



(8.2.48) 




where the minimization is taken over all stationary ergodic joint processes that 
satisfy D p <D and (8.2.30). It has been shown (Gray et al. [1975]) that the 
minimization can be taken with respect to all jointly stationary sources since all 
jointly stationary sources that satisfy the minimization can be approximated 
arbitrarily closely by a stationary ergodic source. In addition Gray et al. [1975] 
have proven a converse coding theorem analogous to Theorem 8.2.1 which 
establishes that 

R(D) = inf I p (8.2.49) 

D P <D 

For stationary ergodic sources, there are two equivalent definitions of the rate 
distortion function, R(D), given by (8.2.5) and (8.2.49). In (8.2.5), R(D) is given as 
a limit of minimizations involving random vectors, whereas in (8.2.49) it is given in 
terms of minimizations involving random processes. In either case, R(D) is gen 
erally difficult to evaluate for most stationary ergodic sources. This is one of the 
main weaknesses of the theory. 

The most direct way to compute R(D) is to first find the form of R L (D) and 
then take the limit as L -> oo. R L (D), given by (8.2.6), can be interpreted as the rate 
distortion function of an L-dimensional memoryless vector source, where the 
vector components are not necessarily independent and the distortion between 
vector outputs and representation vectors is the sum distortion measure. Thus 
R L (D) is exactly the rate distortion function of a vector source with the sum 
distortion measure discussed in Sec. 8.1.1. There we found a simple expression for 
this rate distortion function when the component sources were independent. If by 
an appropriate transformation we can reduce the calculation of R L (D) to that of 
the independent component sources, then we can obtain an equally simple expres 
sion for R L (D) and often obtain R(D). We do this in the following example. 



Example (Gaussian source, squared-error distortion) Consider a discrete-time zero-mean stationary
Gaussian source with output sequence (..., u_{−1}, u_0, u_1, ...) and correlation between
outputs u_i and u_j denoted by

    φ_{ij} = φ_{|i−j|} = E{u_i u_j}        for all i, j        (8.2.50)

Stationarity implies that this correlation depends only on |i − j|, and since the source is Gaussian
it also implies that it is ergodic. We wish to calculate R(D) for this source with the squared-error
distortion measure

    d(u, v) = (u − v)²        (8.2.51)

    d_L(u, v) = (1/L) Σ_{l=1}^{L} (u_l − v_l)²        (8.2.52)

for any u, v ∈ ℝ^L, where we have 𝒰 = 𝒱 = ℝ. We begin by calculating R_L(D).
The sequence u_L = (u_1, u_2, ..., u_L) has the joint density function

    Q_L(u) = [(2π)^L |Φ|]^{−1/2} exp (−½ u Φ^{−1} uᵀ)



where 



<D 



01 



0o 
0i 



(8.2.53) 



is the covariance matrix. Here, assume that O is positive definite so that <D l exists for any finite 
L. Let F denote the unitary modal matrix whose columns are the orthonormal eigenvectors 
of O with eigenvalues A l5 A 2 , ..., A L . Since <P is positive definite, the eigenvalues are positive. 
Letting 



A = 



(8.2.54) 



we have 



<D = FAF r and 



(8.2.55) 



Define transformed source and representation vectors by u = uF and v = vF, where now 
u has covariance matrix A and probability density 



.(*)= 



(8.2.56) 



Note that the components of u are independent random variables. In addition, since FF r = / we 
have 



= (U - V)(U - Y) T 



-(u-v)(u-y) r 
d L (u, v) 



(8.2.57) 



For any conditional probability density P L (v u), let P L (v |u) be the corresponding density for v 
conditioned on u. Since F is an invertible mapping, it preserves average mutual information 

. .* P fvlu) 

/(PJ=| - e,(u)P L (v|u)ln -^rfu 



7(P L ) 



(8.2.58) 




Also, d L (u, v) = d L (u, v) implies 



= D(P L ) (8.2.59) 

Thus K L (D) can be expressed in terms of the transformed space 



R L (D)= inf jI(P L ) 

PL^D.L L 



(8.2.60) 

PL^D.L 

where 

>D.L = {^L(V|"):^L)<^} (8-2.61) 



Since u J* L has independent components, we can regard R L (D) as the rate distortion function of 
a vector source with independent components where the /th component source output has 
density function 

2 ^ (8.2.62) 



V27T/, 

In Lemma 8.1.1, R_L(D) in nats per sample is given by the parametric equations

    D_s = (1/L) Σ_{l=1}^{L} D_s^(l)        (8.2.63)

and

    R_L(D_s) = (1/L) Σ_{l=1}^{L} R^(l)(D_s^(l))        (8.2.64)

where R^(l)(D_s^(l)) is the rate distortion function of the lth component source. The example in Sec. 8.1
gives for this case the parametric equations for R_L(D); for parameter θ = −1/(2s) > 0,

    D_θ = (1/L) Σ_{l=1}^{L} min (θ, λ_l)        (8.2.65)

and

    R_L(D_θ) = (1/L) Σ_{l=1}^{L} max [ 0, ½ ln (λ_l/θ) ]        (8.2.66)
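As a numerical check on (8.2.63) to (8.2.66), R_L(D) can be computed directly from the eigenvalues
of the L × L Toeplitz covariance matrix, reusing the water-filling rule of the Sec. 8.1 example. A
minimal sketch in Python with NumPy; the function name is ours, and the correlation function phi
(for example phi(k) = 0.9**k) is an assumed input, not something specified in the text.

    import numpy as np

    def rd_block_gaussian(phi, L, theta):
        """(D_theta, R_L(D_theta)) per (8.2.65)-(8.2.66) for one theta > 0.

        phi   : function k -> phi_k, the correlation E{u_t u_{t+k}}
        L     : block length
        theta : water-filling parameter, theta = -1/(2s) > 0
        """
        # L x L Toeplitz covariance matrix with entries phi(|i - j|)
        idx = np.arange(L)
        Phi = np.vectorize(phi)(np.abs(idx[:, None] - idx[None, :]))
        lam = np.linalg.eigvalsh(Phi)            # eigenvalues lambda_1 .. lambda_L
        lam = np.clip(lam, 1e-12, None)          # guard round-off; Phi is assumed positive definite
        D = np.mean(np.minimum(theta, lam))                            # (8.2.65)
        R = np.mean(np.maximum(0.0, 0.5 * np.log(lam / theta)))        # (8.2.66)
        return D, R

As L grows, the resulting pairs (D_θ, R_L(D_θ)) approach the spectral-density integrals (8.2.69) and
(8.2.70) obtained below.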



We are now ready to pass to the limit L-> x. To do this we need to use the well-known 
limit theorem for Toeplitz matrices. 

Theorem 8.2.3: Toeplitz distribution theorem Let Φ be the infinite covariance
matrix. The eigenvalues of Φ are contained in the interval δ ≤ λ ≤ Δ,
where δ and Δ denote the essential infimum and supremum, respectively, of the
function

    Φ(ω) = Σ_{k=−∞}^{∞} φ_k e^{−ikω}        (8.2.67)

Moreover, if both δ and Δ are finite and G(λ) is any continuous function of λ,
then

    lim_{L→∞} (1/L) Σ_{l=1}^{L} G(λ_l^{(L)}) = (1/2π) ∫_{−π}^{π} G[Φ(ω)] dω        (8.2.68)

where λ_l^{(L)} is the lth eigenvalue of Φ_L, the L × L covariance matrix.
PROOF See Grenander and Szego [1958, sec. 5.2]. 

Applying this theorem to (8.2.65) and (8.2.66), we have the parametric equation for R(D) 
given by 



    D_θ = (1/2π) ∫_{−π}^{π} min [θ, Φ(ω)] dω        (8.2.69)

and

    R(D_θ) = (1/4π) ∫_{−π}^{π} max [0, ln (Φ(ω)/θ)] dω        (8.2.70)
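The limiting expressions (8.2.69) and (8.2.70) can be evaluated by numerical integration once the
spectral density Φ(ω) is available. A minimal sketch in Python with NumPy, under the assumption
that the correlation sequence is summable so that truncating the sum at |k| ≤ K is harmless; the
names are ours, not the text's.

    import numpy as np

    def rd_spectral(phi, theta, K=200, n_grid=4096):
        """(D_theta, R(D_theta)) from (8.2.69)-(8.2.70) by numerical integration.

        phi    : function k -> phi_k (truncated at |k| <= K)
        theta  : parameter theta > 0
        """
        w = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
        k = np.arange(-K, K + 1)
        phik = np.array([phi(abs(kk)) for kk in k])
        # Spectral density Phi(w) = sum_k phi_k e^{-ikw} (real for symmetric correlation)
        S = (phik[None, :] * np.exp(-1j * k[None, :] * w[:, None])).sum(axis=1).real
        S = np.maximum(S, 1e-12)                 # guard; Phi(w) > 0 is assumed
        D = np.mean(np.minimum(theta, S))                               # (8.2.69)
        R = np.mean(np.maximum(0.0, 0.5 * np.log(S / theta)))           # equals (8.2.70)
        return D, R

For a geometric correlation φ_k = ρ^{|k|}, the computed Φ(ω) can be checked against the closed form
(1 − ρ²)/(1 + ρ² − 2ρ cos ω).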



For this Gaussian source with the squared-error distortion measure, we now prove a coding 
theorem, in essentially the same way as was done earlier for memoryless sources, by encoding 
the transformed source output sequence u = uF e & L with M codewords denoted 
$ = {v t , v 2 , ..., V M }. For any conditional density function 



( 8 - 2 - 71 ) 



we follow the coding theorem of Sec. 7.2 to get a bound on the average distortion (over code and 
source ensemble) 

d(@}<\ D< > + doe- (L/2)L(R "- Pl) (8-2.72) 

L , = 1 

where 



- oo oo 

3 max A 



(8.2.73) 
(8.2.74) 



E L (R,P,P L )=-pR-jln\C C (C C P L (*)Q L (&\*) llll+ > } d*}"]du (8.2.75) 

^ L -oo -oc \ - oo -oo / J 



= II 

i= i 



(8.2.76) 



and 



= n Q (I) ( 



( 8 - 2 - 77 ) 






For a given parameter 0, choose (Tan [1975]) 

f*W 
wftlfi,) 



A,<0 

e -(f,-0,u,)2 20,0 / > 



where 



This choice yields 



and 



Thus 



min (0, A,) 



E L (R,p t P L )=-pR-- 



Lfr 



max 



A, + p0 



d(*) < j I min (0, x,) + d exp f- ^ ( -pi? - | max JO, In 
L i=i I 2 \ L l=l 2 

The Toeplitz distribution theorem gives 



and 



where 



lim - 



l - p 

lim 2] max P ~ ^ n 

t-x ^1=1 2 



x (p, 0) = - I max 0, p In 

dTT J 



P) 



dco 



Here x (p, 0) has all the usual properties given in Lemma 7.2.2 where 

lim dE ^ e] 



(8.2.78) 

(8.2.79) 
(8.2.80) 
(8.2.81) 

(8.2.82) 

(8.2.83) 
(8.2.84) 
(8.2.85) 
(8.2.86) 



(8.2.87) 



Defining 

(/?, D g )= max [-p/? + x (p, 9)] 
- i <P<O 

we have that for each t > and 2 > there exists an integer N (0, R, e^ c 2 ) such that for each 
L > N there exists a block code M of rate R and block length L such that 

*- * (8.2.88) 



where E(R. D g ) > for R > R(D e ). 

This bound gives the rate of convergence to the rate distortion limit 
(D e , R(D e )) and can be generalized to continuous-time Gaussian sources and 




Gaussian image sources together with the squared-error distortion measures. Ex 
plicit evaluation of the rate distortion function 

R(D) = lim R L (D) 

L-^oo 

as was done in this example is generally possible only if R L (D) can be expressed as 
the rate distortion function of a vector source with independent components and a 
sum distortion measure. Otherwise we must settle for bounds on R(D). 



8.3 BOUNDS FOR R(D) 

For sources with memory, R(D) is known exactly only for a few cases, primarily 
those involving Gaussian sources with a squared-error distortion measure. Easy- 
to-calculate bounds to R(D) are therefore very important for general stationary 
ergodic sources. Lower bounds particularly are useful since they represent limits 
below which one cannot encode within the desired fidelity. 
Recall that for stationary ergodic sources

    R(D) = lim_{L→∞} R_L(D) = inf_L R_L(D) ≤ R_1(D)        (8.3.1)

Hence a trivial upper bound is R_1(D), which may be found analytically or by using
the computational methods of App. 7A. For a squared-error distortion measure,
there is a more general version of Theorem 7.7.3 which shows that the Gaussian
source has the largest rate distortion function.

Theorem 8.3.1 For any zero-mean stationary ergodic source with spectral
density

    Φ(ω) = Σ_{k=−∞}^{∞} φ_k e^{−ikω}        (8.3.2)

where φ_k = E{u_t u_{t+k}}, and the squared-error distortion measure, the rate
distortion function R(D) is bounded by

    R(D) ≤ (1/4π) ∫_{−π}^{π} max [0, ln (Φ(ω)/θ)] dω        (8.3.3)

where θ satisfies

    D = (1/2π) ∫_{−π}^{π} min [θ, Φ(ω)] dω        (8.3.4)

That is, for a given spectral density, the Gaussian source yields the largest rate 
distortion function. 




PROOF Recall from (8.2.55) that <D = TAP 7 , and A = diag (A ls ..., A L ) is the 
diagonal matrix of eigenvalues of O. The components of u = uF are uncor- 
related random variables with covariance matrix A. The rate distortion func 
tion R L (D) can be expressed in terms of these transformed coordinates. Now 
recall from Theorem 8.1.1 that 

R L (D) < R L (D) (8.3.5) 

where R L (D) is the rate distortion function obtained if the coordinates of 
u = uF are independent. R L (D), on the other hand, is given by the parametric 
equations (see Lemma 8.1.1) 



and 

RL(D,)=J I* (I W) (83.7) 

** /=! 

where R (l) (D ( s l} ) is the rate distortion function of the /th component source with 
the squared-error distortion measure. From Theorem 7.7.3, there is the fur 
ther bound 



<i max o, In > (83.8) 

Thus 

R L (D)<Ri(D) (8.3.9) 

where R 9 L (D) is the rate distortion function for the Gaussian source with the 
same spectral density. Taking the limit as L- oo, we get the desired result. 

Thus we have shown that one general upper bound on R(D) is simply R_1(D),
the first-order rate distortion function, while for the squared-error distortion, a 
bound can be obtained which is the known rate distortion function of a Gaussian 
source with the same covariance properties. Lower bounds can also be found by 
generalizing the lower bounds for memoryless sources. 

Suppose we have a continuous-amplitude stationary ergodic source and some 
distortion measure. Let R L (D) be its Lth-order rate distortion function. The L- 
dimensional version of Theorem 7.6.3 is 



$$R_L(D) = \sup_{s\le 0,\ \lambda_L\in\Lambda_s}\left\{sD + \frac{1}{L}\int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty} Q_L(\mathbf{u})\ln\lambda_L(\mathbf{u})\,d\mathbf{u}\right\} \qquad (8.3.10)$$

where

$$\Lambda_s = \left\{\lambda_L(\mathbf{u})\ge 0\colon \int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty} Q_L(\mathbf{u})\,\lambda_L(\mathbf{u})\,e^{sLd_L(\mathbf{u},\,\mathbf{v})}\,d\mathbf{u}\le 1\ \text{for all }\mathbf{v}\right\} \qquad (8.3.11)$$

Now choose

$$\lambda_L(\mathbf{u}) = \frac{\prod_{n=1}^{L} Q(u_n)\,\lambda(u_n)}{Q_L(\mathbf{u})} \qquad (8.3.12)$$

where λ ∈ Λ_s. Then

$$R_L(D) \ge \frac{1}{L}h(Q_L) - h(Q) + \sup_{s\le 0,\ \lambda\in\Lambda_s}\left\{sD + \int_{-\infty}^{\infty} Q(u)\ln\lambda(u)\,du\right\} \qquad (8.3.13)$$

where

$$h(Q_L) = -\int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty} Q_L(\mathbf{u})\ln Q_L(\mathbf{u})\,d\mathbf{u} \qquad (8.3.14)$$

This results in the following theorem.

Theorem 8.3.2 For a stationary ergodic source with rate distortion function

$$R(D) = \lim_{L\to\infty} R_L(D)$$

there is the lower bound

$$R(D) \ge R_1(D) - h + \bar{h} \qquad (8.3.15)$$

where

$$h = h(Q) = -\int_{-\infty}^{\infty} Q(u)\ln Q(u)\,du \qquad (8.3.16)$$

is the first-order differential entropy of the source and

$$\bar{h} = -\lim_{L\to\infty}\frac{1}{L}\int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty} Q_L(\mathbf{u})\ln Q_L(\mathbf{u})\,d\mathbf{u} \qquad (8.3.17)$$

is the differential entropy rate of the source.

PROOF Take the limit as L → ∞ in (8.3.13). The limiting value h̄ exists and is approached monotonically from above (see Fano [1961]).

In this general lower bound, we express the bound in terms of R_1(D), which can usually be found by computational methods. Of course, further lower bounds exist for R_1(D), as described in Sec. 7.7. For difference distortion measures there is also a generalized Shannon lower bound (see Prob. 8.7) given by⁶

$$R(D_s) \ge R_{LB}(D_s) = \bar{h} + sD_s - \ln\int_{-\infty}^{\infty} e^{s\,d(z)}\,dz \qquad (8.3.18)$$

where D_s is the distortion level associated with parameter s.

6 We assume

$$\int_{-\infty}^{\infty} e^{s\,d(z)}\,dz < \infty$$

Corollary 8.3.3 For a stationary Gaussian source with spectral density function Φ(ω) and the squared-error distortion measure, the rate distortion function R(D) is bounded from below by

$$R(D) \ge R_{LB}(D) = \tfrac{1}{2}\ln\frac{E}{D} \qquad (8.3.19)$$

where

$$E = \exp\left\{\frac{1}{2\pi}\int_{-\pi}^{\pi}\ln\Phi(\omega)\,d\omega\right\} \qquad (8.3.20)$$

is both the entropy rate power and the one-step prediction error of the Gaussian source (see Grenander and Szego [1958, chap. 10]). Moreover, R(D) = R_LB(D) for D ≤ δ, where δ is the essential infimum of Φ(ω).



PROOF For the Gaussian source, where

$$Q(u) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-u^2/2\sigma^2}$$

we have

$$h = \tfrac{1}{2}\ln\,(2\pi e\sigma^2) \qquad (8.3.21)$$

and

$$\bar{h} = \tfrac{1}{2}\ln\,(2\pi eE) \qquad (8.3.22)$$

(see Prob. 8.8). Thus

$$R_{LB}(D) = R_1(D) + \bar{h} - h = \tfrac{1}{2}\ln\frac{\sigma^2}{D} + \tfrac{1}{2}\ln\,(2\pi eE) - \tfrac{1}{2}\ln\,(2\pi e\sigma^2) = \tfrac{1}{2}\ln\frac{E}{D} \qquad (8.3.23)$$

From (8.2.69) and (8.2.70), we see that if θ ≤ δ then

$$D_\theta = \frac{1}{2\pi}\int_{-\pi}^{\pi}\min\,[\theta,\Phi(\omega)]\,d\omega = \theta \qquad (8.3.24)$$

and

$$R(D_\theta) = \frac{1}{4\pi}\int_{-\pi}^{\pi}\max\left[0,\ln\frac{\Phi(\omega)}{\theta}\right]d\omega \qquad (8.3.25)$$

$$= \frac{1}{2}\left[\frac{1}{2\pi}\int_{-\pi}^{\pi}\ln\Phi(\omega)\,d\omega - \ln D_\theta\right] = \frac{1}{2}\ln\frac{E}{D_\theta} \qquad (8.3.26)$$

As just shown in the Gaussian case, the generalized Shannon lower bound 
given by (8.3.15) is often equal to R(D) for a range of small D. The examples of 
Sec. 7.7 imply that in most cases the Shannon lower bound is fairly tight for all D. 
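Both the Gaussian upper bound of Theorem 8.3.1 and the lower bound of Corollary 8.3.3 are simple integrals of the spectral density, so they are easy to evaluate numerically. The sketch below is not from the text; it assumes, purely for illustration, a first-order autoregressive spectral density and uses trapezoidal integration to evaluate the parametric pair (8.2.69)-(8.2.70) together with the entropy rate power E of (8.3.20). For θ below the essential infimum of Φ(ω) the two bounds coincide, as the corollary states.

```python
import numpy as np

# Minimal numerical sketch of (8.2.69)-(8.2.70) and the Shannon lower bound of
# Corollary 8.3.3, assuming (illustratively) an AR(1) spectral density.
def ar1_spectrum(w, sigma2=1.0, a=0.9):
    # Phi(w) for a unit-variance source with correlation a^|k|
    return sigma2 * (1 - a**2) / (1 + a**2 - 2 * a * np.cos(w))

w = np.linspace(-np.pi, np.pi, 20001)
phi = ar1_spectrum(w)

def rate_distortion(theta):
    D = np.trapz(np.minimum(theta, phi), w) / (2 * np.pi)                 # (8.2.69)
    R = np.trapz(np.maximum(0.0, np.log(phi / theta)), w) / (4 * np.pi)   # (8.2.70)
    return D, R

# Entropy rate power E of (8.3.20); Shannon lower bound R_LB(D) = (1/2) ln(E/D)
E = np.exp(np.trapz(np.log(phi), w) / (2 * np.pi))

for theta in [0.01, 0.05, 0.2]:
    D, R = rate_distortion(theta)
    print(f"theta={theta:5.2f}  D={D:.4f}  R(D)={R:.4f} nats  R_LB(D)={0.5*np.log(E/D):.4f} nats")
```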



8.4 GAUSSIAN SOURCES WITH SQUARED-ERROR 
DISTORTION 

Up to this point we have always assumed that the source probability distributions 
and the distortion measure are given. In practice, statistical properties of real 
sources are not known a priori and must be determined by measurement. 
Typically only the mean and correlation properties of a source are available. 
These first- and second-order statistics of a source are sufficient to completely 
characterize a source if it is Gaussian, an assumption which is often made in 
practice. In many cases one can justify the Gaussian assumption with a central 
limit theorem argument. The choice of distortion measure depends on the applica 
tion, and again it is usually not known a priori. In speech and picture compression 
applications, for example, there have been evaluations of various distortion meas 
ures based on subjective fidelity ratings of compressed speech and pictures. In 
practice, the most commonly used distortion measure is the squared-error 
distortion. 

For the most part in data compression practice, the sources are assumed to be 
Gaussian and the distortion measure is assumed to be squared error. Theorem 
8.3.1 shows that, for the squared-error distortion measure, the Gaussian assump 
tion results in the maximum rate distortion function. Thus for a given fidelity D, 
the value of the rate distortion function of the Gaussian source R(D) is an achiev 
able rate regardless of whether or not the source is Gaussian. Another important 
point is the fact that the Gaussian source with the squared-error distortion meas 
ure is the only example where the rate distortion function is easily obtained for all 
sorts of generalizations. These serve as a baseline with which various compression 
techniques can be compared. We look first at quantization of a memoryless Gaus 
sian source and compare the resulting averaged squared error with the corre 
sponding distortion that is achievable according to the rate distortion function. 
Then we examine more general Gaussian sources with memory and find expres 
sions for their rate distortion functions. 



8.4.1 Quantization of Discrete-Time Memoryless Sources 

We begin with the simplest of sources, the discrete-time memoryless Gaussian 
source with the squared-error distortion measure, where the rate distortion func 
tion is given by (7.7.20) 



$$R(D) = \frac{1}{2}\ln\frac{\sigma^2}{D} \qquad 0 \le D \le \sigma^2$$



Here the source outputs are independent Gaussian random variables with zero mean and variance σ². For this example, R(D) represents the minimum rate required to achieve average squared-error distortion D and, as shown by Theorem 7.7.3, even for non-Gaussian sources, R(D) given above represents an achievable rate for the squared-error distortion measure.

The simplest and most common data compression technique is quantization of the real-valued outputs of the source. An m-level quantizer, for example, converts each source output u ∈ ℛ into one of m values q_1, q_2, ..., q_m. This can best be described in terms of thresholds T_1, T_2, ..., T_{m−1}, where the source output u ∈ ℛ is converted to q(u) ∈ {q_1, q_2, ..., q_m} according to



$$q(u) = \begin{cases} q_1 & u \le T_1 \\ q_l & T_{l-1} < u \le T_l, \quad l = 2, 3, \ldots, m-1 \\ q_m & u > T_{m-1} \end{cases} \qquad (8.4.1)$$

The m-level quantizer converts each source output independently of other outputs 
and yields an average distortion 



$$D_m = \sum_{l=1}^{m}\int_{T_{l-1}}^{T_l}(u - q_l)^2\,Q(u)\,du \qquad (8.4.2)$$

where we take T_0 = −∞ and T_m = ∞. Since there are m quantized values, this requires at most R_m = ln m nats per output of the source to send the exact quantized values over the channel. In Fig. 8.2, we plot the theoretical limit (D, R(D)) together with (D_m, R_m) for various values of m. Here we take the values of {q_1, q_2, ..., q_m} and thresholds {T_1, ..., T_{m−1}} that minimize D_m, as determined by Lloyd [1959] and Max [1960].



Figure 8.2 Quantization techniques: rate R (nats) versus normalized distortion D/σ², comparing the rate distortion limit R(D) with uniform quantizers, optimized and coded (Goblick and Holsinger), and with uncoded Lloyd-Max quantizers.

The quantization technique can be improved by observing that quantization level q_l has probability

$$P_l = \int_{T_{l-1}}^{T_l}\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-u^2/2\sigma^2}\,du \qquad (8.4.3)$$

and the entropy of the quantized values is

$$H_m = -\sum_{l=1}^{m} P_l\ln P_l \le \ln m \qquad (8.4.4)$$

We can encode the quantized source outputs without distortion (see Chap. 1) with rate arbitrarily close to H_m. Goblick and Holsinger [1967] investigated the minimization of H_m by varying m, {q_l}, and {T_l}, subject to the requirement that D_m ≤ D, for uniform spacing T_l − T_{l−1}. Their results consist of a family of uniform quantizers whose performance envelope is shown in Fig. 8.2. Quantization with distortionless coding of the quantized source outputs results in a required rate which is only about 0.2 nat per source output more than the theoretical limit given by the rate distortion function R(D) = ½ ln(σ²/D). This is not too surprising since the source is memoryless. Also, the distortionless source coding of the quantized values requires both memory and the use of codewords, similar to the procedure for encoding the source directly with a fidelity criterion. If distortionless coding is not used, then the performance gets worse as rate increases, as shown by the Lloyd-Max quantizers in Fig. 8.2.
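As a rough numerical check on the figures quoted above, the following sketch (an illustrative implementation assuming a unit-variance Gaussian source; not the tabulated designs of Lloyd [1959] or Max [1960]) runs Lloyd's iteration to approximate the optimum m-level quantizer and compares the uncoded rate ln m and the entropy H_m of (8.4.4) with the rate distortion value R(D_m) = ½ ln(1/D_m).

```python
import numpy as np
from scipy.stats import norm

# Lloyd's iteration for an m-level quantizer of a unit-variance Gaussian source.
def lloyd_max(m, iters=200):
    q = np.linspace(-2, 2, m)                          # initial levels
    for _ in range(iters):
        t = 0.5 * (q[:-1] + q[1:])                     # thresholds at midpoints
        edges = np.concatenate(([-np.inf], t, [np.inf]))
        P = norm.cdf(edges[1:]) - norm.cdf(edges[:-1])           # P_l of (8.4.3)
        q = (norm.pdf(edges[:-1]) - norm.pdf(edges[1:])) / P     # cell centroids
    D = 1.0 - np.sum(P * q**2)          # MSE of a centroid quantizer: E[u^2] - sum P_l q_l^2
    H = -np.sum(P * np.log(P))          # entropy of the quantized values (8.4.4), nats
    return D, H

for m in [2, 4, 8, 16]:
    D, H = lloyd_max(m)
    print(f"m={m:2d}  D_m={D:.4f}  ln m={np.log(m):.3f}  H_m={H:.3f}  R(D_m)={0.5*np.log(1/D):.3f}")
```

The gap between H_m and R(D_m) illustrates how close entropy-coded quantization comes to the rate distortion limit for this memoryless source.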

Although our example is based on the Gaussian memoryless source with 
squared-error distortion measure, for a large class of memoryless sources and 
distortion measures, the simple quantization technique should result in perform 
ance close to the theoretical rate distortion limit. Quantization followed by 
distortionless coding of the quantized values will further improve the perform 
ance. This example points out the fact that, in practice for most memoryless 
sources, quantization is an efficient technique. For real-valued sources with 
memory and for more general sources, quantization by itself is no longer 
adequate. 

8.4.2 Discrete-Time Stationary Sources 

Consider next a discrete-time stationary (ergodic) Gaussian source with output 
autocorrelation 

$$\phi_k = E\{u_t u_{t+k}\} \qquad \text{all } t, k \qquad (8.4.5)$$

For the squared-error distortion measure, the rate distortion function is given in 
terms of parameter 9 in (8.2.69) and (8.2.70) 

$$D_\theta = \frac{1}{2\pi}\int_{-\pi}^{\pi}\min\,[\theta,\Phi(\omega)]\,d\omega \qquad\text{and}\qquad R(D_\theta) = \frac{1}{4\pi}\int_{-\pi}^{\pi}\max\left[0,\ln\frac{\Phi(\omega)}{\theta}\right]d\omega$$

where

$$\Phi(\omega) = \sum_{k=-\infty}^{\infty}\phi_k\,e^{-ik\omega} \qquad (8.4.6)$$

Recall from the example in Sec. 8.2 that the above rate distortion function was 
derived by considering the encoding of transformed source outputs. In particular, 
for large integer N, let Γ be the unitary modal matrix of the correlation matrix

$$\boldsymbol{\Phi}_N = \{\phi_{k-j}\}, \qquad k, j = 1, 2, \ldots, N$$

and transform each source output sequence of length N, u ∈ ℛ^N, into ũ = uΓ. The components of ũ are uncorrelated (also independent for this Gaussian source). The Nth-order rate distortion function can be determined by regarding each component of ũ as an independent output of a memoryless source, where an output occurs each time N actual source outputs occur. There is no loss in encoding the transformed variables, since the transformation preserves the squared-error distortion measure. That is, for ũ = uΓ and ṽ = vΓ, we have

$$d_N(\tilde{\mathbf{u}}, \tilde{\mathbf{v}}) = \frac{1}{N}\|\tilde{\mathbf{u}} - \tilde{\mathbf{v}}\|^2 = \frac{1}{N}\|\mathbf{u} - \mathbf{v}\|^2 = d_N(\mathbf{u}, \mathbf{v}) \qquad (8.4.7)$$



We have already shown in Sec. 8.4.1 that, for a memoryless Gaussian source, 
quantization of the source outputs is an efficient way to encode. For a Gaussian 
source with correlated outputs, this suggests that we should first transform the 
source output sequence into an uncorrelated sequence and then quantize. This is 
in fact the most common data compression procedure. We may argue intuitively 
that since we have an efficient and simple data compression technique for mem 
oryless sources, we should first " whiten " the source output sequence and by so 
transforming it, obtain a memoryless (uncorrelated) sequence which can thus be 
efficiently encoded by quantization. The transformation should be chosen so as to 
preserve the distortion measure. For example, let T be an invertible transforma 
tion so that the output sequence u is transformed into the uncorrelated sequence 
u = u^T Let q be the quantized sequence of u and assume this is sent over the 
noiseless channel. The decoder uses q = qT~ * as the representation of the source 
sequence u. 



quantization (8.4.8) 



502 SOURCE CODING FOR DIGITAL COMMUNICATION 

For the squared-error distortion measure, the unitary modal matrix of the covari- 
ance matrix satisfies this requirement. Here, quantization may be slightly more 
general in that different quantizers may be applied to different components of the 
uncorrelated sequence u. 
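A minimal sketch of this transform-then-quantize procedure is given below. It assumes, for illustration only, a Gauss-Markov source with correlation a^|k|; the unitary modal matrix of the block covariance plays the role of T in (8.4.8), and the final print statement confirms the distortion-preserving property (8.4.7).

```python
import numpy as np

rng = np.random.default_rng(0)
N, a = 16, 0.9
cov = a ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))   # block covariance Phi_N
lam, Gamma = np.linalg.eigh(cov)                                   # Phi_N = Gamma diag(lam) Gamma^T

u = rng.multivariate_normal(np.zeros(N), cov, size=2000)           # source blocks
u_t = u @ Gamma                                                    # decorrelated components

step = 0.5                                                         # crude common quantizer step
q_t = step * np.round(u_t / step)                                  # quantize each component
v = q_t @ Gamma.T                                                  # decoder: inverse transform

print("eigenvalues:", np.round(lam[::-1][:4], 2),
      " empirical component variances:", np.round(np.var(u_t, axis=0)[::-1][:4], 2))
print(f"distortion in transform domain {np.mean((u_t - q_t)**2):.4f}"
      f" = distortion in source domain {np.mean((u - v)**2):.4f}")
```

In practice a different quantizer (or bit allocation) would be used for each decorrelated component, as noted above; the common step size here only keeps the sketch short.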



8.4.3 Continuous-Time Sources and Generalizations 

Up to this point we have examined only discrete-time sources with source 
alphabets which are sets of real numbers. Many common information sources 
with outputs such as voice waveforms and pictures can be modeled as discrete- 
time real-valued sources only if the source has been sampled in an appropriate 
manner. In this section we take the approach of modeling all such more general 
sources as discrete-time sources with abstract alphabets. For continuous-time 
sources such as voice, for example, we consider sources that emit a continuous- 
time waveform each unit of time. Thus each unit of time the discrete-time model 
for a voice source emits an element belonging to the more abstract alphabet of 
continuous-time functions. Picture sources or television can similarly be modeled 
as a discrete-time source with the source alphabet consisting of pictures. Hence, by 
allowing the source alphabets to lie in more general spaces, we can model more 
general classes of sources. 

The corresponding source coding problem for general sources modeled in this 
manner can be formulated conceptually in the same way as for those with real 
source alphabets. Defining appropriate probability measures on the abstract source 
and representation alphabets and defining a distortion measure between elements 
in these alphabets, Berger [1971] has formulated the problem in this more abstract 
setting. The resulting rate distortion functions are defined in terms of mutual 
information between source and representation alphabets in the same manner as 
those given earlier for stationary ergodic sources with real alphabets. The main 
difference lies in the more general probability measures required for the abstract 
alphabets. 

We do not attempt to prove coding theorems for discrete-time stationary 
ergodic sources with abstract alphabets. Indeed, we will not even define the corre 
sponding rate distortion function. Besides requiring some measure-theoretic 
definitions, generally these rate distortion functions are difficult to evaluate and 
are known exactly only for some special cases. In this section, we present only a 
few of the known cases for which the rate distortion function can be evaluated by 
reducing the source outputs to a countable collection of independent random 
variables, and where the distortion measure can be defined in terms of these 
representative random variables. 

Before proceeding with various examples we point out that, although we can 
derive rate distortion functions for sources with abstract alphabets, to achieve the 
limiting distortions implied by these functions requires coding with codewords 
whose components are elements from the abstract representation alphabet. In 
practice this is usually too difficult to accomplish. The rate distortion function 
does, however, set theoretical limits on performance and often motivates the 




design of more practical source encoding (data compression) schemes. The Gaus 
sian source with squared-error distortion which is presented here represents the 
worst case for the commonly used squared-error criterion. This and the sub 
sequent examples are often used as standards of comparison for practical data 
compression schemes. 

Continuous-time Gaussian process, squared-error distortion Consider a source that emits a zero-mean random process of T seconds duration, {u(t): 0 ≤ t ≤ T}. As we stated above, our approach is to model this source as a stationary ergodic discrete-time source with source alphabet consisting of time waveforms of duration T. Assume the energy of the output samples to be finite and choose the source alphabet to be

$$\mathscr{U} = \left\{u(t)\colon \int_0^T u^2(t)\,dt < \infty\right\} \qquad (8.4.9)$$

and the representation alphabet to be

$$\mathscr{V} = \left\{v(t)\colon \int_0^T v^2(t)\,dt < \infty\right\} \qquad (8.4.10)$$

That is, our abstract alphabets are 𝒰 = 𝒱 = L_2(T), the space of square-integrable functions over the interval 0 ≤ t ≤ T, and the distortion measure

$$d_T\colon \mathscr{U}\times\mathscr{V}\to[0,\infty) \qquad (8.4.11)$$

satisfies a bounded second moment condition. The rate distortion function is defined as a limit of average mutual informations defined on the abstract spaces 𝒰^N and 𝒱^N. For stationary ergodic discrete-time sources with these alphabets, there are coding theorems which establish that the rate distortion function does in fact represent the minimum possible rate to achieve the given distortion.

Modeling sources which generate continuous-time random processes as 
discrete-time sources is somewhat artificial since we do not assume continuity of 
the random process between successive source outputs (see Berger [1971]). Rather, 
we usually have a single continuous random process of long duration which we 
wish to encode efficiently. Still, in our discrete-time model, by letting the signal 
duration T get large, we can usually reduce the source to a memoryless vector 
source with outputs of duration T. This is analogous to the arguments in the 
heuristic proof of the coding theorem for stationary ergodic sources given in 
Sec. 8.2. When we assume the discrete-time source is memoryless, then the rate 
distortion function depends only on the single output probability measure, 
namely on the space 𝒰 × 𝒱 and the distortion d_T: 𝒰 × 𝒱 → [0, ∞). We denote this rate distortion function as R_T(D).

Even with the memoryless assumption, the rate distortion function R T (D) is 
difficult to evaluate. The key to its evaluation is the reduction of the problem from 
one involving continuous-time random processes to one involving a countable 
number of random variables. A natural step is to represent the output and representation waveforms⁷ in terms of an orthonormal basis {f_k(t)} for L_2(T) such that

$$u(t) = \sum_{k=1}^{\infty} u^{(k)} f_k(t) \qquad 0 \le t \le T \qquad (8.4.12)$$

and

$$v(t) = \sum_{k=1}^{\infty} v^{(k)} f_k(t) \qquad 0 \le t \le T \qquad (8.4.13)$$

for any u ∈ 𝒰 and v ∈ 𝒱. If now the distortion measure d_T: 𝒰 × 𝒱 → [0, ∞) can be expressed in terms of the coefficients {u^(k)} and {v^(k)}, then R_T(D) is the rate distortion function of a memoryless source with a real vector output. Earlier, in Sec. 8.1, we examined such rate distortion functions for the sum distortion measure

$$d(\mathbf{u}, \mathbf{v}) = \sum_{k=1}^{\infty} d^{(k)}(u^{(k)}, v^{(k)}) \qquad (8.4.14)$$

All known evaluations of R_T(D) involve reduction to not only a memoryless vector source with a sum or maximum distortion measure, but to one having uncorrelated vector components. This can be easily accomplished by choosing the basis {f_k} to be the Karhunen-Loeve expansion of the source output process. That is, choose the f_k(t) to be the orthonormal eigenfunctions of the integral equation

$$\int_0^T \phi(t, s)\,f(s)\,ds = \lambda f(t) \qquad 0 \le t \le T \qquad (8.4.15)$$

where φ(t, s) = E{u(t)u(s)} is assumed to be both positive definite and absolutely integrable over the rectangle⁸ 0 ≤ s, t ≤ T.

For each normalized eigenfunction f_k(t), the corresponding constant λ_k is an eigenvalue of φ(t, s). This choice of orthonormal basis yields the representation⁹

$$u(t) = \sum_{k=1}^{\infty} u^{(k)} f_k(t) \qquad (8.4.16)$$

where

$$E\{u^{(k)} u^{(j)}\} = \lambda_k\,\delta_{kj} \qquad \text{for } k, j = 1, 2, \ldots \qquad (8.4.17)$$

The choice of distortion measure is not always clear in practice. Yet, even though 
we are concerned with encoding a random process, there is no reason why we 
cannot choose distortion measures that depend directly on the expansion 

7 For source output {u(t): 0 ≤ t ≤ T}, this representation holds in the mean square sense uniformly in t ∈ [0, T].

8 This is a sufficient condition for the eigenfunctions {f_k} to be complete in L_2(T). However, completeness is not necessary, for we can, without loss of generality, restrict our spaces to the space spanned by the eigenfunctions.

9 Without loss of generality, we can assume λ_1 ≥ λ_2 ≥ ···. If {u^(k)} are mutually independent, this representation holds with probability one for each t ∈ [0, T].

coefficients of the random process with respect to some orthonormal basis. 
Indeed, practical data compression schemes essentially use this type of distortion 
measure. The squared-error distortion measure lends itself naturally to such a 
choice, for while d_T: 𝒰 × 𝒱 → [0, ∞) is given by

$$d_T(u, v) = \frac{1}{T}\int_0^T [u(t) - v(t)]^2\,dt \qquad (8.4.18)$$

it may also be expressed in terms of the Karhunen-Loeve expansion coefficients

$$d_T(u, v) = \frac{1}{T}\sum_{k=1}^{\infty}(u^{(k)} - v^{(k)})^2 \qquad (8.4.19)$$

The rate distortion function R T (D) is thus the rate distortion function of a mem- 
oryless vector source with uncorrelated components and a sum distortion meas 
ure. It follows from Lemma 7.7.3 and Theorem 8.1.1 that R T (D) is bounded by the 
corresponding rate distortion function for the Gaussian source. Thus from (8.2.65) 
and (8.2.66), we have 



$$R_T(D) \le \frac{1}{T}\sum_{k=1}^{\infty}\tfrac{1}{2}\max\left[0,\ln\frac{\lambda_k}{\theta}\right] \qquad (8.4.20)$$

where θ satisfies

$$D = \frac{1}{T}\sum_{k=1}^{\infty}\min\,(\theta, \lambda_k) \qquad (8.4.21)$$

Here (8.4.20) becomes an equality if and only if the continuous-time random 
process is Gaussian. Further, if we now let T -> oo and we assume the source 
output process is stationary with spectral density 

$$\Phi(\omega) = \int_{-\infty}^{\infty}\phi(\tau)\,e^{-i\omega\tau}\,d\tau \qquad (8.4.22)$$

where φ(τ) = E{u(t)u(t + τ)}, then based on a continuous-time version of the Toeplitz distribution theorem (see Berger [1971], theorem 4.5.4)¹⁰ we have

$$\lim_{T\to\infty} R_T(D) \le \frac{1}{4\pi}\int_{-\infty}^{\infty}\max\left[0,\ln\frac{\Phi(\omega)}{\theta}\right]d\omega \qquad (8.4.23)$$

where θ satisfies

$$D = \frac{1}{2\pi}\int_{-\infty}^{\infty}\min\,[\theta,\Phi(\omega)]\,d\omega \qquad (8.4.24)$$

with equality if and only if the source output process is Gaussian.



10 This requires finite second moment, φ(0) < ∞, and finite essential supremum of Φ(ω).

Again we see that for the squared-error distortion measure the Gaussian 
source statistics yield the largest rate distortion function among all stationary 
processes with the same spectral density O(o>). The Gaussian source rate distor 
tion function 

$$R^g(D) = \frac{1}{4\pi}\int_{-\infty}^{\infty}\max\left[0,\ln\frac{\Phi(\omega)}{\theta}\right]d\omega \qquad (8.4.25)$$

where θ satisfies (8.4.24), often serves as a basis for comparing various practical data compression schemes.
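The bound (8.4.20)-(8.4.21) can be evaluated numerically once the Karhunen-Loeve eigenvalues are known. The sketch below approximates them by discretizing the integral equation (8.4.15) on a grid; the exponential covariance φ(t, s) = e^{−|t−s|} is an illustrative assumption, not a case worked in the text.

```python
import numpy as np

T, n = 8.0, 800
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
phi = np.exp(-np.abs(np.subtract.outer(t, t)))        # phi(t, s) on the grid
lam = np.linalg.eigvalsh(phi * dt)                    # approximate operator eigenvalues
lam = np.sort(lam)[::-1]
lam = lam[lam > 1e-12]

def rate_and_distortion(theta):
    D = np.sum(np.minimum(theta, lam)) / T            # (8.4.21)
    R = np.sum(0.5 * np.maximum(0.0, np.log(lam / theta))) / T   # (8.4.20)
    return D, R

for theta in [0.02, 0.1, 0.5]:
    D, R = rate_and_distortion(theta)
    print(f"theta={theta:4.2f}  D={D:.4f}  R_T(D) <= {R:.4f} nats per second")
```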

Example (Band-limited Gaussian source) An ideal band-limited Gaussian source with constant spectral density

$$\Phi(\omega) = \begin{cases}\dfrac{\sigma^2}{2W} & |\omega| \le 2\pi W\\[4pt] 0 & |\omega| > 2\pi W\end{cases} \qquad (8.4.26)$$

yields the rate distortion function

$$R^g(D) = W\ln\frac{\sigma^2}{D} \qquad 0 \le D \le \sigma^2 \qquad (8.4.27)$$

This is Shannon's [1948] classical formula. It is easy to see that this is also the rate distortion function for any stationary Gaussian source of average power σ² whose spectral density is flat over any set of radian frequencies of total measure 4πW.

Gaussian images, squared-error distortion Information sources that produce pictures (two-dimensional images) may be modeled as discrete-time sources with outputs that are two-dimensional random fields represented by

$$\left\{u(x, y)\colon |x| \le \frac{L}{2},\ |y| \le \frac{L}{2}\right\} \qquad (8.4.28)$$

Images are usually described by the nonnegative image intensity function {i(x, y): |x| ≤ L/2, |y| ≤ L/2}. We assume that the source output is u(x, y) = ln i(x, y), which is modeled here as a zero-mean Gaussian random field. In addition, if u(x, y) and v(x, y) are any two-dimensional functions, we define the distortion measure to be

$$d_L(u, v) = \frac{1}{L^2}\int_{-L/2}^{L/2}\int_{-L/2}^{L/2}[u(x, y) - v(x, y)]^2\,dx\,dy \qquad (8.4.29)$$

The fact that we encode u(x, y) = In i(x, y), the log of the intensity function, with a 
mean square criterion may appear somewhat artificial. There is, however, 
evidence (see Campbell and Robson [1968] and Van Ness and Bouman [1965]) that 
an observer s ability to determine the difference between two field intensities 
corresponds to the difference between corresponding transformed fields of the 
logarithm of the intensities. 




Thus, for sources that produce two-dimensional images, we model our source 
as a discrete-time source that outputs a zero-mean Gaussian random field. The 
abstract source and representation alphabets are assumed to be 

$$\mathscr{U} = \mathscr{V} = \left\{u(x, y)\colon \int_{-L/2}^{L/2}\int_{-L/2}^{L/2} u^2(x, y)\,dx\,dy < \infty\right\} \qquad (8.4.30)$$

and we choose the squared-error distortion measure given by (8.4.29). If we assume the discrete-time source is stationary and ergodic, then a rate distortion function can be defined which represents the smallest rate achievable for a given average distortion. First assume that the discrete-time source is memoryless. This means that successive output images of the source are independent and the rate distortion function R_L(D) depends only on the probability measures on 𝒰 × 𝒱 and the single-output distortion measure given in (8.4.29).

For the memoryless case, evaluation of R_L(D) is the natural generalization of the continuous-time problem given above. We begin by defining the autocorrelation function of the zero-mean Gaussian random field as

$$\phi(x, y; x', y') = E\{u(x, y)\,u(x', y')\} \qquad (8.4.31)$$

To be able to evaluate R_L(D), we again require a representation of source outputs in terms of a countable number of independent random variables, and again we attempt to express our distortion measure in terms of these random variables. With the squared-error distortion measure, any orthonormal expansion of the source output random field will suffice. To have independent components, however, we need the Karhunen-Loeve expansion. We express outputs as

$$u(x, y) = \sum_{k=1}^{\infty} u^{(k)} f_k(x, y) \qquad |x| \le \frac{L}{2},\ |y| \le \frac{L}{2} \qquad (8.4.32)$$

where

$$u^{(k)} = \int_{-L/2}^{L/2}\int_{-L/2}^{L/2} u(x, y)\,f_k(x, y)\,dx\,dy \qquad (8.4.33)$$

and {f_k(x, y)} are orthonormal functions (eigenfunctions) that are solutions to the integral equation

$$\lambda f(x, y) = \int_{-L/2}^{L/2}\int_{-L/2}^{L/2}\phi(x, y; x', y')\,f(x', y')\,dx'\,dy' \qquad (8.4.34)$$

For each eigenfunction f_k(x, y), the corresponding eigenvalue λ_k is nonnegative and satisfies the condition¹¹

$$E\{u^{(k)} u^{(j)}\} = \lambda_k\,\delta_{kj} \qquad \text{for } k, j = 1, 2, \ldots \qquad (8.4.35)$$

11 Again we assume λ_1 ≥ λ_2 ≥ ···. This representation holds with probability one for every x, y ∈ [−L/2, L/2].

As for the one-dimensional case, we assume that the autocorrelation φ(x, y; x′, y′) satisfies the conditions necessary to insure that the eigenfunctions {f_k} span the alphabet space 𝒰 = 𝒱. Thus for any two functions in 𝒰 = 𝒱, we have

$$u(x, y) = \sum_{k=1}^{\infty} u^{(k)} f_k(x, y) \qquad (8.4.36)$$

$$v(x, y) = \sum_{k=1}^{\infty} v^{(k)} f_k(x, y) \qquad (8.4.37)$$

and the distortion measure becomes

$$d_L(u, v) = \frac{1}{L^2}\sum_{k=1}^{\infty}(u^{(k)} - v^{(k)})^2 \qquad (8.4.38)$$

For this sum distortion measure, R_L(D) is now expressed in terms of a memoryless vector source with output u = {u^(1), u^(2), ...} whose components are independent Gaussian random variables, with the variance of u^(k) given by λ_k for each k. The rate distortion function of the random field normalized to unit area is thus (see Sec. 8.2)

$$R_L(D) = \frac{1}{L^2}\sum_{k=1}^{\infty}\max\left[0,\ \tfrac{1}{2}\ln\frac{\lambda_k}{\theta}\right] \qquad (8.4.39)$$

where θ satisfies

$$D = \frac{1}{L^2}\sum_{k=1}^{\infty}\min\,(\theta, \lambda_k) \qquad (8.4.40)$$

Here R L (D) represents the minimum rate in nats per unit area required to encode 
the source with average distortion D or less. 

Since eigenvalues are difficult to evaluate, R L (D) given in this form is not very 
useful. We now take the limit as L goes to infinity. Defining 

$$R^g(D) = \lim_{L\to\infty} R_L(D) \qquad (8.4.41)$$

we observe that R^g(D) represents the minimum rate over all choices of L and thus
the minimum achievable rate per unit area. In addition, since for most images L is 
large compared to correlation distances, letting L approach infinity is a good 
approximation. To evaluate this limit we must now restrict our attention to 
homogeneous random fields where we have 

$$\phi(x, y; x', y') = \phi(x - x', y - y') \qquad (8.4.42)$$

This is the two-dimensional stationarity condition and allows us to define a 
two-dimensional spectral density function, 

$$\Phi(w_x, w_y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\phi(\tau_x, \tau_y)\,e^{-i(w_x\tau_x + w_y\tau_y)}\,d\tau_x\,d\tau_y \qquad (8.4.43)$$

Sakrison [1969] has derived a two-dimensional version of the Toeplitz distribution theorem which allows us to evaluate the asymptotic distribution of the eigenvalues of (8.4.34). This theorem shows that for any continuous function G(λ)

$$\lim_{L\to\infty}\frac{1}{L^2}\sum_{k=1}^{\infty} G(\lambda_k) = \frac{1}{4\pi^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} G[\Phi(w_x, w_y)]\,dw_x\,dw_y \qquad (8.4.44)$$

Applying this theorem to (8.4.39) and (8.4.40) yields

$$R^g(D) = \lim_{L\to\infty} R_L(D) = \frac{1}{8\pi^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\max\left[0,\ln\frac{\Phi(w_x, w_y)}{\theta}\right]dw_x\,dw_y \qquad (8.4.45)$$

where θ satisfies

$$D = \frac{1}{4\pi^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\min\,[\theta, \Phi(w_x, w_y)]\,dw_x\,dw_y \qquad (8.4.46)$$

As with our one-dimensional case, R^g(D) is an upper bound to all other rate distortion functions of non-Gaussian memoryless sources with the same spectral density Φ(w_x, w_y), and thus serves as a basis of comparison for various image compression schemes.

Example (Isotropic field) An isotropic field has a correlation function which depends only on the 
total distance between two points in the two-dimensional space. That is,

$$\phi(\tau_x, \tau_y) = \phi(r), \qquad r = \sqrt{\tau_x^2 + \tau_y^2} \qquad (8.4.47)$$

By defining r, θ_r, w, and θ_w as polar coordinates, where

$$\tau_x = r\cos\theta_r \qquad \tau_y = r\sin\theta_r \qquad (8.4.48)$$

and

$$w_x = w\cos\theta_w \qquad w_y = w\sin\theta_w \qquad (8.4.49)$$

we obtain

$$\Phi(w_x, w_y) = 2\pi\int_0^{\infty}\phi(r)\,J_0(wr)\,r\,dr \qquad (8.4.50)$$

where J_0(·) is the zeroth-order Bessel function of the first kind. Since there is no θ_w dependence,

$$\Phi(w) = 2\pi\int_0^{\infty} r\,\phi(r)\,J_0(wr)\,dr \qquad (8.4.51)$$

where

$$w = \sqrt{w_x^2 + w_y^2} \qquad (8.4.52)$$

Φ(w) and φ(r) are related by the Hankel transform of zero order.

For television images, a reasonably satisfactory power spectral density is

$$\Phi(w) = \frac{2\pi}{w_0^2\,[1 + (w/w_0)^2]^{3/2}} \qquad (8.4.53)$$

resulting in

$$\phi(r) = e^{-r/d_c} \qquad (8.4.54)$$

where d_c = 1/w_0 is the coherence distance of the field (Sakrison and Algazi [1971]).
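For an isotropic spectrum the double integrals in (8.4.45) and (8.4.46) collapse to one-dimensional integrals in the radial frequency w, which makes the rate distortion function easy to evaluate numerically. The sketch below assumes the spectral shape proportional to [1 + (w d_c)²]^(−3/2) discussed above; the constants are illustrative only.

```python
import numpy as np

d_c = 1.0
w = np.linspace(1e-4, 200.0, 200001)                      # radial frequency grid
Phi = 2 * np.pi * d_c**2 / (1 + (w * d_c) ** 2) ** 1.5    # assumed isotropic spectrum

def rate_distortion(theta):
    # dw_x dw_y = w dw dtheta_w, so an isotropic integrand contributes 2*pi*w dw
    D = np.trapz(np.minimum(theta, Phi) * w, w) / (2 * np.pi)                 # from (8.4.46)
    R = np.trapz(np.maximum(0.0, np.log(Phi / theta)) * w, w) / (4 * np.pi)   # from (8.4.45)
    return D, R

for theta in [0.01, 0.1, 1.0]:
    D, R = rate_distortion(theta)
    print(f"theta={theta:5.2f}  D={D:.4f}  R(D)={R:.4f} nats per unit area")
```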

For many sources successive images are often highly correlated so that the 
above memoryless assumption is unrealistic. We now find an upper bound to the 
rate distortion function of a discrete-time stationary ergodic source that emits the 
two-dimensional homogeneous Gaussian random field described above. Let the nth output be denoted

$$\left\{u_n(x, y)\colon |x| \le \frac{L}{2},\ |y| \le \frac{L}{2}\right\} \qquad (8.4.55)$$

Again use the usual Karhunen-Loeve expansion

$$u_n(x, y) = \sum_{k=1}^{\infty} u_n^{(k)} f_k(x, y) \qquad (8.4.56)$$

where {f k ( , )} and {A k } are eigenfunctions and eigenvalues which satisfy the in 
tegral equation of (8.4.34). By the assumed stationarity of the discrete-time source 
with memory, the autocorrelation of the random field 0(x, y\ x , /) is inde 
pendent of the output time index rc, and hence eigenfunctions and eigenvalues 
are the same for each output of the discrete-time stationary ergodic source. We 
now have a source that outputs a vector u n = (u ( n l \ u ( n 2 \ ...) at the nth time. 

The rate distortion function of the discrete-time stationary ergodic source is 
given by 

$$R_L(D) = \lim_{N\to\infty} R_{L,N}(D) \qquad (8.4.57)$$

where R_{L,N}(D) is the Nth-order rate distortion function [i.e., which uses only the first N terms in the expansion (8.4.56)]. We can upper-bound R_L(D) by the rate
required with any particular encoding scheme that achieves average distortion D. 
Consider the following scheme: 

1. Encode each Karhunen-Loeve expansion coefficient independently of other coefficients.¹² That is, regard the kth coefficient sequence {u_1^(k), u_2^(k), ...} as the

12 This amounts to partitioning the source into its spatial spectral components and treating suc 
cessive (in time) samples of a given component as a subsource which is to be encoded independent 
of all other component subsources. 



RATE DISTORTION THEORY! MEMORY, GAUSSIAN SOURCES, AND UNIVERSAL CODING 511 

output of a zero-mean Gaussian subsource and encode it with respect to a 
squared-error distortion measure with average distortion D (k) . 
2. Choose the distortions D (1 \ D (2 \ ... so as to achieve an overall average 
distortion D. 

The required rate for the above scheme, which we now proceed to evaluate, 
will certainly upper-bound R L (D). Let us define correlation functions for each 
subsource. 

$$\phi^{(k)}(\tau) = E\{u_n^{(k)} u_{n+\tau}^{(k)}\} \qquad (8.4.58)$$

and corresponding spectral density functions

$$\psi^{(k)}(w) = \sum_{\tau=-\infty}^{\infty}\phi^{(k)}(\tau)\,e^{-iw\tau} \qquad (8.4.59)$$

Consider encoding the sequence {u_1^(k), u_2^(k), ...} with respect to the squared-error distortion measure. From (8.2.69) and (8.2.70), we see that for distortion D^(k) the required rate is

$$R^{(k)}(D^{(k)}) = \frac{1}{4\pi}\int_{-\pi}^{\pi}\max\left[0,\ln\frac{\psi^{(k)}(w)}{\theta}\right]dw \qquad (8.4.60)$$

where θ satisfies

$$D^{(k)} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\min\,[\theta, \psi^{(k)}(w)]\,dw \qquad (8.4.61)$$


Here R^(k)(D^(k)) is in nats per output of the subsource.

Recall that the total single-output distortion measure is

$$d_L(\mathbf{u}, \mathbf{v}) = \frac{1}{L^2}\sum_{k=1}^{\infty}(u^{(k)} - v^{(k)})^2 \qquad (8.4.62)$$

Hence, choosing {D^(k)} such that

$$D = \frac{1}{L^2}\sum_{k=1}^{\infty} D^{(k)} \qquad (8.4.63)$$

will achieve average distortion D. The total rate per unit area is given by

$$R = \frac{1}{L^2}\sum_{k=1}^{\infty} R^{(k)}(D^{(k)}) \qquad (8.4.64)$$



Thus we have

$$R_L(D) \le \frac{1}{L^2}\sum_{k=1}^{\infty}\frac{1}{4\pi}\int_{-\pi}^{\pi}\max\left[0,\ln\frac{\psi^{(k)}(w)}{\theta}\right]dw \qquad (8.4.65)$$

where now we choose θ to satisfy

$$D = \frac{1}{L^2}\sum_{k=1}^{\infty}\frac{1}{2\pi}\int_{-\pi}^{\pi}\min\,[\theta, \psi^{(k)}(w)]\,dw \qquad (8.4.66)$$
We consider next a special case for which this upper bound is tight. 




Example (Separation of correlation) Suppose the time and spatial correlation of source outputs separate as follows:

$$E\{u_n(x, y)\,u_{n+\tau}(x', y')\} = \varphi(\tau)\,\phi(x - x', y - y') \qquad (8.4.67)$$

where 𝜑(0) = 1.

Recall that any two Karhunen-Loeve expansion coefficients u_n^(k) and u_{n+τ}^(j) are given by

$$u_n^{(k)} = \int_{-L/2}^{L/2}\int_{-L/2}^{L/2} u_n(x, y)\,f_k(x, y)\,dx\,dy \qquad u_{n+\tau}^{(j)} = \int_{-L/2}^{L/2}\int_{-L/2}^{L/2} u_{n+\tau}(x, y)\,f_j(x, y)\,dx\,dy \qquad (8.4.68)$$

Thus we have correlation

$$E\{u_n^{(k)} u_{n+\tau}^{(j)}\} = \int\!\!\int\!\!\int\!\!\int E\{u_n(x, y)\,u_{n+\tau}(x', y')\}\,f_k(x, y)\,f_j(x', y')\,dx\,dy\,dx'\,dy'$$
$$= \int\!\!\int\!\!\int\!\!\int \varphi(\tau)\,\phi(x - x', y - y')\,f_k(x, y)\,f_j(x', y')\,dx\,dy\,dx'\,dy'$$
$$= \varphi(\tau)\,\lambda_k\int_{-L/2}^{L/2}\int_{-L/2}^{L/2} f_k(x', y')\,f_j(x', y')\,dx'\,dy' = \lambda_k\,\varphi(\tau)\,\delta_{kj} \qquad (8.4.69)$$

Hence

$$E\{u_n^{(k)} u_{n+\tau}^{(k)}\} = \lambda_k\,\varphi(\tau) \qquad (8.4.70)$$

and for any k ≠ j

$$E\{u_n^{(k)} u_{n+\tau}^{(j)}\} = 0 \qquad \text{all } \tau \qquad (8.4.71)$$



Since we have Gaussian statistics, the uncorrelated random variables are independent random variables, and the different Karhunen-Loeve expansion coefficient sequences can be regarded as independent subsources. Lemma 8.1.1 shows that the upper bound given in (8.4.65) is in fact exact, and we have for this case

$$R_L(D) = \frac{1}{L^2}\sum_{k=1}^{\infty}\frac{1}{4\pi}\int_{-\pi}^{\pi}\max\left[0,\ln\frac{\lambda_k\,\psi(w)}{\theta}\right]dw \qquad (8.4.72)$$

where θ is chosen to satisfy

$$D = \frac{1}{L^2}\sum_{k=1}^{\infty}\frac{1}{2\pi}\int_{-\pi}^{\pi}\min\,[\theta, \lambda_k\,\psi(w)]\,dw \qquad (8.4.73)$$
Using (8.4.44) in taking the limit as L → ∞, we have the limiting rate distortion function given by

$$R(D) = \lim_{L\to\infty} R_L(D) = \frac{1}{16\pi^3}\int_{-\pi}^{\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\max\left[0,\ln\frac{\Phi(w_x, w_y)\,\psi(w)}{\theta}\right]dw_x\,dw_y\,dw \qquad (8.4.74)$$

where θ satisfies

$$D = \frac{1}{8\pi^3}\int_{-\pi}^{\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\min\,[\theta, \Phi(w_x, w_y)\,\psi(w)]\,dw_x\,dw_y\,dw \qquad (8.4.75)$$



and where Φ(w_x, w_y) is given by (8.4.43) and

$$\psi(w) = \sum_{\tau=-\infty}^{\infty}\varphi(\tau)\,e^{-iw\tau} \qquad (8.4.76)$$

This example shows that the particular scheme of encoding expansion coefficients 
independently of one another is an optimum encoding scheme when the time and 
spatial correlations are separated as in (8.4.67). This general idea of taking a 
complex source and decomposing it into independent subsources, which are 
encoded separately, is a basic design approach for practical data compression 
schemes. 



8.5 SYMMETRIC SOURCES WITH BALANCED DISTORTION 
MEASURES AND FIXED COMPOSITION SEQUENCES 

In Sec. 7.6 we found that for symmetric sources with balanced distortion meas 
ures, the rate distortion functions are easily obtained in closed parametric form 
[see (7.6.69) and (7.6.70)]. We now show that these symmetric sources with bal 
anced distortion measures have the property that, for fixed rate arbitrarily close 
to R(D) and sufficiently large block lengths, there exist codes that encode every 
source output sequence with distortion D or less. This is a considerably stronger 
result than that stated in Theorem 7.2.1 which shows this only for the average 
distortion. A similar strong result holds for the encoding of sequences of fixed 
composition of an arbitrary discrete source and this will lead us in the next section 
to the notion of robust source coding techniques that are independent of source 
statistics. We begin by restating the definition of symmetric sources and balanced 
distortion measures. 



8.5.1 Symmetric Sources with Balanced Distortion Measures 

A symmetric source is a discrete memoryless source with equally likely output 
letters. That is, 

$$\mathscr{U} = \{a_1, a_2, \ldots, a_A\} \qquad (8.5.1)$$

where

$$Q(a_k) = \frac{1}{A} \qquad k = 1, 2, \ldots, A \qquad (8.5.2)$$

Assuming the same number of representation letters as source letters, where 𝒱 = {b_1, b_2, ..., b_A}, for a balanced distortion measure there exist nonnegative numbers {d_1, d_2, ..., d_A} such that

$$\{d(u, b_1), d(u, b_2), \ldots, d(u, b_A)\} = \{d_1, d_2, \ldots, d_A\} \qquad \text{for all } u\in\mathscr{U}$$

and

$$\{d(a_1, v), d(a_2, v), \ldots, d(a_A, v)\} = \{d_1, d_2, \ldots, d_A\} \qquad \text{for all } v\in\mathscr{V} \qquad (8.5.3)$$



The rate distortion function R(D) is given parametrically by

$$D_s = \frac{\sum_{k=1}^{A} d_k\,e^{sd_k}}{\sum_{k=1}^{A} e^{sd_k}} \qquad (7.6.69)$$

$$R(D_s) = sD_s + \ln A - \ln\left(\sum_{k=1}^{A} e^{sd_k}\right) \qquad (7.6.70)$$

where s ≤ 0 is the independent parameter.
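These parametric equations are trivial to sweep numerically. The short sketch below does so for an arbitrary, purely illustrative balanced distortion row {d_1, ..., d_A}; as s → 0 the point (D_s, R(D_s)) approaches the average distortion at zero rate, and as s → −∞ it approaches D_min.

```python
import numpy as np

# Parametric evaluation of (7.6.69)-(7.6.70) for a symmetric source with a
# balanced distortion measure; the row of distortion values is illustrative.
d = np.array([0.0, 1.0, 1.0, 2.0])          # {d_1, ..., d_A}
A = len(d)

def point_on_curve(s):
    w = np.exp(s * d)
    D = np.sum(d * w) / np.sum(w)             # (7.6.69)
    R = s * D + np.log(A) - np.log(np.sum(w)) # (7.6.70)
    return D, R

for s in [-0.1, -0.5, -1.0, -3.0, -10.0]:
    D, R = point_on_curve(s)
    print(f"s={s:6.1f}  D={D:.4f}  R(D)={R:.4f} nats")
```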

Consider again the block source encoding and decoding system of Fig. 7.3. As 
we did earlier, we prove a coding theorem by considering an ensemble of block 
codes of size M and block length N. By symmetry in this ensemble, we choose code ℬ = {v_1, v_2, ..., v_M} with the uniform probability distribution

$$P(\mathscr{B}) = \left(\frac{1}{A}\right)^{MN} \qquad (8.5.4)$$

Here each code letter is chosen independently of other code letters and with a 
uniform one-dimensional probability distribution. Furthermore, since the distor 
tion matrix is balanced, for fixed u e <%, the random variable d(u, v) is independent 
of u. That is, for any u e W 

Pr {d(u, v ) = d k \u} = - k = 1, 2, . . . , A (8.5.5) 

VT. 

This means that for any fidelity criterion D and any two source sequences 
u, u e WH we have 

Pr {^(u, v) > D u} = Pr {</> , v) > D \ u } (8.5.6) 

This is the key property of symmetric sources with balanced distortion measures 
which we now exploit. 

Lemma 8.5.1 Given block length N, distortion level D > D_min, and any source output sequence u ∈ 𝒰^N, over the ensemble of codes ℬ of block length N and rate R > R(D)

$$\Pr\{d(\mathbf{u}\,|\,\mathscr{B}) > D\,|\,\mathbf{u}\} \le e^{-MF_N(D)} \le \exp\{-e^{N[R - R(D) + o(N)]}\} \qquad (8.5.7)$$

where

o(N) → 0 as N → ∞

and

$$F_N(D) = \Pr\{d_N(\mathbf{u}, \mathbf{v}) \le D\,|\,\mathbf{u}\} \qquad (8.5.8)$$

is independent of u ∈ 𝒰^N.

PROOF Let ℬ = {v_1, v_2, ..., v_M}. Then since codewords are independent and identically distributed, according to (8.5.6)

$$\Pr\{d(\mathbf{u}\,|\,\mathscr{B}) > D\,|\,\mathbf{u}\} = \Pr\left\{\min_{\mathbf{v}\in\mathscr{B}} d_N(\mathbf{u}, \mathbf{v}) > D\,\Big|\,\mathbf{u}\right\}$$
$$= \Pr\{d_N(\mathbf{u}, \mathbf{v}_m) > D\colon m = 1, 2, \ldots, M\,|\,\mathbf{u}\}$$
$$= \prod_{m=1}^{M}\Pr\{d_N(\mathbf{u}, \mathbf{v}_m) > D\,|\,\mathbf{u}\} = [1 - F_N(D)]^M \le e^{-MF_N(D)} \qquad (8.5.9)$$

where the inequality follows from ln x ≤ x − 1.
Next note that, for fixed u ∈ 𝒰^N,

$$d_N(\mathbf{u}, \mathbf{v}) = \frac{1}{N}\sum_{n=1}^{N} d(u_n, v_n) \qquad (8.5.10)$$

is a normalized sum of independent identically distributed random variables. In App. 8A we apply the Chernoff bounding technique to obtain, for any ε > 0,

$$F_N(D) \ge \left(1 - \frac{4}{N\epsilon^2}\right)e^{-N[R(D-\epsilon)\,+\,|s|\epsilon]} \qquad (8.5.11)$$

where s satisfies

$$D - \epsilon = \frac{\sum_{k=1}^{A} d_k\,e^{sd_k}}{\sum_{k=1}^{A} e^{sd_k}} \qquad (8.5.12)$$

We assume D > D_min and choose ε > 0 small enough so that D − ε > D_min. This guarantees that s is finite and converges to a finite limit as ε → 0. In particular, choosing ε to vanish slowly with N, say

$$\epsilon = N^{-1/3} \qquad (8.5.13)$$

we have

$$-\frac{1}{N}\ln F_N(D) \le R(D) + o(N) \qquad (8.5.14)$$

where o(N) → 0 as N → ∞.

From this lemma it follows immediately that the average distortion over the code ensemble satisfies

$$\bar{d}(\mathscr{B}) \le D + d_{\max}\exp\{-e^{N[R - R(D) + o(N)]}\} \qquad (8.5.15)$$

and hence that there exists a code ℬ for which d(ℬ) also satisfies this bound. Comparing this with Theorem 7.2.1, we see that this lemma is a stronger result

since the second term here is decreasing at a double exponential rate with block 
length N, compared to the single exponential rate of Theorem 7.2.1. Another 
observation is that Lemma 8.5.1 holds regardless of the source probability distri 
bution and is true even for sources with memory. This happens since we have a 
balanced distortion matrix and assume a uniform distribution on the code 
ensemble. Of course, when the source output probability distribution is not uni 
form, we cannot say that the R(D) of the symmetric source is the rate distortion 
function. It is clear, however, that the rate distortion function of the symmetric 
source, R(D), is an upper bound to the rate distortion functions of all other 
sources with the same balanced distortion, since we can always achieve distortion 
arbitrarily close to D with rate arbitrarily close to R(D). We consider this in 
greater detail when we examine the problem of encoding source sequences of fixed 
composition. We next prove the source coding theorem for symmetric sources 
with balanced distortion measures. 
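For the binary symmetric source with the error distortion measure, F_N(D) can be computed exactly, which makes the double-exponential behavior in Lemma 8.5.1 easy to see. The sketch below (illustrative parameter choices, not from the text) shows that −(1/N) ln F_N(D) approaches R(D) = ln 2 − ℋ(D) and that M F_N(D) grows without bound whenever R > R(D), so the ensemble bound exp{−M F_N(D)} collapses doubly exponentially.

```python
from math import comb, log, exp
import numpy as np

def H(p):
    return -p * log(p) - (1 - p) * log(1 - p)

def F_N(N, D):
    # exact Pr{d_N(u, v) <= D} for a uniformly random codeword v
    k_max = int(np.floor(N * D))
    return sum(comb(N, k) for k in range(k_max + 1)) / 2.0**N

D, R = 0.1, 0.45                       # R(0.1) = ln 2 - H(0.1) ~ 0.37 nats < R
for N in [50, 100, 200]:
    F = F_N(N, D)
    M = exp(N * R)
    print(f"N={N:3d}  -(1/N) ln F_N(D)={-log(F)/N:.3f}  R(D)={log(2)-H(D):.3f}"
          f"  M*F_N(D)={M*F:.3e}")
```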



Theorem 8.5.1 For a symmetric source with a balanced distortion measure and any rate R where R > R(D), there exists a block code ℬ of sufficiently large block length N and rate R such that

$$d(\mathbf{u}\,|\,\mathscr{B}) \le D \qquad \text{for all } \mathbf{u}\in\mathscr{U}^N \qquad (8.5.16)$$



PROOF For any code ℬ of block length N and rate R, define the indicator function

$$\Phi(\mathbf{u}\,|\,\mathscr{B}) = \begin{cases}1 & d(\mathbf{u}\,|\,\mathscr{B}) > D\\ 0 & \text{otherwise}\end{cases} \qquad (8.5.17)$$

for u ∈ 𝒰^N. Averaging Φ over source output sequences gives

$$\sum_{\mathbf{u}} Q_N(\mathbf{u})\,\Phi(\mathbf{u}\,|\,\mathscr{B}) = \frac{1}{A^N}\sum_{\mathbf{u}}\Phi(\mathbf{u}\,|\,\mathscr{B}) \qquad (8.5.18)$$

Averaging this over the ensemble of codes yields

$$\frac{1}{A^N}\sum_{\mathbf{u}}\overline{\Phi(\mathbf{u}\,|\,\mathscr{B})} = \sum_{\mathbf{u}} Q_N(\mathbf{u})\Pr\{d(\mathbf{u}\,|\,\mathscr{B}) > D\,|\,\mathbf{u}\} \le \exp\{-e^{N[R - R(D) + o(N)]}\} \qquad (8.5.19)$$

where the inequality follows from Lemma 8.5.1. This means there exists at least one code ℬ for which

$$\frac{1}{A^N}\sum_{\mathbf{u}}\Phi(\mathbf{u}\,|\,\mathscr{B}) \le \exp\{-e^{N[R - R(D) + o(N)]}\}$$

or

$$\sum_{\mathbf{u}}\Phi(\mathbf{u}\,|\,\mathscr{B}) \le A^N\exp\{-e^{N[R - R(D) + o(N)]}\} \qquad (8.5.20)$$

The bound can be made less than 1 by choosing N large enough when R > R(D). Then we have

$$\sum_{\mathbf{u}}\Phi(\mathbf{u}\,|\,\mathscr{B}) < 1 \qquad (8.5.21)$$

But by the definition (8.5.17), for each u, Φ(u|ℬ) can only be 0 or 1. Hence (8.5.21) implies that Φ(u|ℬ) = 0 for all u ∈ 𝒰^N, which requires d(u|ℬ) ≤ D for all u.

Since (8.5.16) holds for all output sequences, we see that this theorem holds for any source distribution {Q_N(u): u ∈ 𝒰^N} when R(D) is the symmetric source rate distortion function and R > R(D). For any other source distribution, the actual rate distortion function will be less than that of the uniform distribution.

8.5.2 Fixed-Composition Sequences Binary Alphabet Example 

There is a close relationship between symmetric sources with balanced distortions 
and fixed-composition source output sequences of an arbitrary discrete source. 
For sequences of fixed composition, we can prove a theorem analogous to 
Theorem 8.5.1. Although this property is easily generalizable to arbitrary discrete 
alphabet sources with a bounded single-letter distortion measure (see Martin 
[1976]), we demonstrate the results for the binary source alphabet and error 
distortion measure. 

Suppose we have a source alphabet 𝒰 = {0, 1}, a representation alphabet 𝒱 = {0, 1}, and error distortion measure

$$d(k, j) = 1 - \delta_{kj} \qquad \text{for } k, j = 0, 1 \qquad (8.5.22)$$

For u ∈ 𝒰^N, define its weight as w(u) = number of 1s in u, and define the composition classes

$$\mathscr{C}_N(l) = \{\mathbf{u}\colon \mathbf{u}\in\mathscr{U}^N,\ w(\mathbf{u}) = l\} \qquad l = 0, 1, 2, \ldots, N \qquad (8.5.23)$$

with composition probabilities

$$Q^{(l)}(1) = \frac{l}{N}, \qquad Q^{(l)}(0) = 1 - \frac{l}{N} \qquad l = 0, 1, 2, \ldots, N \qquad (8.5.24)$$

and corresponding rate distortion functions [see (7.6.62)]

$$R(D; Q^{(l)}) = \mathscr{H}\!\left(\frac{l}{N}\right) - \mathscr{H}(D) \qquad 0 \le D \le \min\left(\frac{l}{N},\ 1 - \frac{l}{N}\right),\quad l = 0, 1, 2, \ldots, N \qquad (8.5.25)$$

Using the Chernoff bound (see Prob. 1.5), we have for the number of sequences in 𝒞_N(l), denoted |𝒞_N(l)|,

$$|\mathscr{C}_N(l)| \le e^{N\mathscr{H}(l/N)} \qquad (8.5.26)$$

This means we can always find a code of rate R > ℋ(l/N) such that M = e^{NR} > |𝒞_N(l)|, which can uniquely represent each sequence in 𝒞_N(l) and thus achieve zero distortion. We shall encode some composition classes with zero distortion and others with some nonzero distortion.

Let us now pick δ such that 0 < δ < ln 2, pick fixed rate R in the interval δ < R < ln 2, and choose 0 < ε < 0.3 to satisfy

$$\mathscr{H}(\epsilon) < \delta \qquad (8.5.27)$$

Observe that we can make ε and δ as small as we please and still satisfy (8.5.27). Let the binary distribution Q* satisfying Q*(1) ≤ ½ be defined parametrically in terms of the rate R, ε, and δ as follows:

$$R = \mathscr{H}(Q^*(1)) - \mathscr{H}(\epsilon) + \delta \qquad (8.5.28)$$

Also let l* be the largest integer such that l*/N ≤ Q*(1) ≤ ½. Then from Fig. 8.3 we see that for any fixed composition class 𝒞_N(l) where either

$$\frac{l}{N} \le \frac{l^*}{N} \qquad\text{or}\qquad 1 - \frac{l}{N} \le \frac{l^*}{N} \qquad (8.5.29)$$

we have

$$\mathscr{H}\!\left(\frac{l}{N}\right) \le \mathscr{H}\!\left(\frac{l^*}{N}\right) \le \mathscr{H}(Q^*(1)) \qquad (8.5.30)$$

and

$$R > \mathscr{H}(Q^*(1)) \ge \mathscr{H}\!\left(\frac{l}{N}\right) \qquad (8.5.31)$$

Thus for any composition class 𝒞_N(l) for which ℋ(l/N) ≤ ℋ(Q*(1)), we can find a block code of rate R and block length N such that

$$\mathscr{H}\!\left(\frac{l}{N}\right) < R \qquad (8.5.32)$$

and from (8.5.26)

$$M = e^{NR} > e^{N\mathscr{H}(l/N)} \ge |\mathscr{C}_N(l)| \qquad (8.5.33)$$



Figure 8.3 Binary entropy relationships: the binary entropy function ℋ(l/N) plotted against l/N from 0 to 1, showing l*/N, Q*(1), and the distortion levels D_l determined by R, δ, and ε.



Therefore, since there are more representation sequences M than sequences in the class, such a code can encode sequences from 𝒞_N(l) with zero distortion where l satisfies (8.5.29).

For any other fixed composition class 𝒞_N(l) for which instead

$$\frac{l^*}{N} < \frac{l}{N} < 1 - \frac{l^*}{N} \qquad (8.5.34)$$

define D_l > ε to satisfy

$$R = \mathscr{H}\!\left(\frac{l}{N}\right) - \mathscr{H}(D_l) + \delta \qquad (8.5.35)$$

Such a D_l can be found in the range ε < D_l ≤ l/N. This is illustrated in Fig. 8.3. We show next that, like our result for the symmetric source with balanced distortion measure presented in Theorem 8.5.1, we can find a code of rate R such that all sequences in 𝒞_N(l) can be encoded with distortion D_l or less. First we establish a lemma analogous to Lemma 8.5.1 by considering an ensemble of block codes ℬ = {v_1, v_2, ..., v_M} of block length N and rate R = (ln M)/N with probability distribution

$$P(\mathscr{B}) = \prod_{m=1}^{M}\prod_{n=1}^{N} P^{(l)}(v_{mn}) \qquad (8.5.36)$$

where

$$P^{(l)}(v) = \sum_{u} Q^{(l)}(u)\,P^{(l)}(v\,|\,u) \qquad (8.5.37)$$

and P^{(l)}(v|u) is the conditional probability yielding the rate distortion function R(D_l; Q^{(l)}).


Lemma 8.5.2 Let ε > 0, δ > 0, and rate δ < R < ln 2 satisfy (8.5.27) and (8.5.28). For a fixed composition class 𝒞_N(l) satisfying (8.5.34), D_l satisfying (8.5.35), and any u ∈ 𝒞_N(l), over the ensemble of block codes with probability distribution (8.5.36)

$$\Pr\{d(\mathbf{u}\,|\,\mathscr{B}) > D_l\,|\,\mathbf{u}\in\mathscr{C}_N(l)\} \le \exp\left\{-\left(1 - \frac{4}{N\epsilon^2}\right)e^{N[\delta + \epsilon\ln(\epsilon/2)]}\right\} \qquad (8.5.38)$$

PROOF

$$\Pr\{d(\mathbf{u}\,|\,\mathscr{B}) > D_l\,|\,\mathbf{u}\in\mathscr{C}_N(l)\} = \Pr\{d_N(\mathbf{u}, \mathbf{v}_m) > D_l\colon m = 1, 2, \ldots, M\,|\,\mathbf{u}\in\mathscr{C}_N(l)\}$$
$$\le e^{-M\Pr\{d_N(\mathbf{u},\,\mathbf{v}) \le D_l\,|\,\mathbf{u}\in\mathscr{C}_N(l)\}} \qquad (8.5.39)$$

Here the key property we employ is that Pr{d_N(u, v) ≤ D_l | u ∈ 𝒞_N(l)} is independent of u ∈ 𝒞_N(l), since only the composition determines the probability distribution of d_N(u, v), which is a normalized sum of independent (though not identically distributed) random variables. The generalized Chernoff bounds in App. 8A again suffice for our purpose. Here we have

$$\Pr\{d_N(\mathbf{u}, \mathbf{v}) \le D_l\,|\,\mathbf{u}\in\mathscr{C}_N(l)\} \ge \left(1 - \frac{4}{N\epsilon^2}\right)e^{-N[R(D_l;\,Q^{(l)})\, -\, \epsilon\ln(\epsilon/2)]} \qquad (8.5.40)$$

Substituting (8.5.35) into (8.5.40) and the result into (8.5.39) then gives us the desired result.



It is easy to see that, for ε ≤ 0.3, we have ℋ(ε) > −ε ln(ε/2), so that δ > −ε ln(ε/2) > 0 (see Prob. 8.12). Hence the exponent [δ + ε ln(ε/2)] > 0 in (8.5.38). From this lemma follows the desired result.

Theorem 8.5.2 Let ε > 0, δ > 0 satisfy ℋ(ε) < δ. For a sufficiently large integer N*, for any rate R in the interval δ < R < ln 2, and any composition class 𝒞_N(l) where N ≥ N*, there exists a code ℬ_l of block length N and rate R such that

$$d(\mathbf{u}\,|\,\mathscr{B}_l) \le D_l \qquad \text{for all } \mathbf{u}\in\mathscr{C}_N(l) \qquad (8.5.41)$$

where D_l satisfies

$$R = \mathscr{H}\!\left(\frac{l}{N}\right) - \mathscr{H}(D_l) + \delta$$

when

$$Q^*(1) \le \frac{l}{N} \le 1 - Q^*(1)$$

and D_l = 0 otherwise. Here Q*(1) ≤ ½ satisfies

$$R = \mathscr{H}(Q^*(1)) - \mathscr{H}(\epsilon) + \delta$$

PROOF For l/N ∉ [Q*(1), 1 − Q*(1)], D_l = 0 as a result of (8.5.33). Now for any l/N ∈ [Q*(1), 1 − Q*(1)], suppose we have a source that emits only sequences from 𝒞_N(l) with equal probabilities. For any block code ℬ of block length N and rate R, define the indicator function

$$\Phi(\mathbf{u}\,|\,\mathscr{B}) = \begin{cases}1 & d(\mathbf{u}\,|\,\mathscr{B}) > D_l\\ 0 & \text{otherwise}\end{cases} \qquad (8.5.42)$$

Averaging Φ over output sequences, we obtain

$$\frac{1}{|\mathscr{C}_N(l)|}\sum_{\mathbf{u}\in\mathscr{C}_N(l)}\Phi(\mathbf{u}\,|\,\mathscr{B}) \qquad (8.5.43)$$

Next consider an ensemble of block codes where code ℬ = {v_1, v_2, ..., v_M} is chosen according to the probability distribution (8.5.36) and (8.5.37). Averaging (8.5.43) over this code ensemble yields

$$\frac{1}{|\mathscr{C}_N(l)|}\sum_{\mathbf{u}\in\mathscr{C}_N(l)}\overline{\Phi(\mathbf{u}\,|\,\mathscr{B})} \le \exp\left\{-\left(1 - \frac{4}{N\epsilon^2}\right)e^{N[\delta + \epsilon\ln(\epsilon/2)]}\right\} \qquad (8.5.44)$$

where the inequality follows from Lemma 8.5.2. Using the bound |𝒞_N(l)| ≤ 2^N, it follows that there exists a code ℬ_l of block length N and rate R such that

$$\sum_{\mathbf{u}\in\mathscr{C}_N(l)}\Phi(\mathbf{u}\,|\,\mathscr{B}_l) \le \sum_{\mathbf{u}\in\mathscr{C}_N(l)}\overline{\Phi(\mathbf{u}\,|\,\mathscr{B})} \le 2^N\exp\left\{-\left(1 - \frac{4}{N\epsilon^2}\right)e^{N[\delta + \epsilon\ln(\epsilon/2)]}\right\} \qquad (8.5.45)$$

Choosing N* to be any integer for which the bound is less than one, it follows as in the proof of Theorem 8.5.1 that Φ(u|ℬ_l) = 0 for all u ∈ 𝒞_N(l).



This theorem shows that given any 0 < δ < ln 2, rate R such that δ < R < ln 2, and 0 < ε < 0.3 satisfying ℋ(ε) < δ, for any composition class 𝒞_N(l) where N ≥ N*, we can find a block code ℬ_l of block length N and rate R such that d(u|ℬ_l) = 0 for all u ∈ 𝒞_N(l) if ℋ(l/N) ≤ ℋ(Q*(1)), and d(u|ℬ_l) ≤ D_l for all u ∈ 𝒞_N(l) if ℋ(l/N) > ℋ(Q*(1)), where Q* satisfies (8.5.28) and D_l > 0 satisfies (8.5.35) (see also Fig. 8.3).
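The quantities Q*(1) and D_l are obtained by inverting the binary entropy function, which the following sketch does by bisection for illustrative values of R, δ, and ε (chosen only to satisfy ℋ(ε) < δ); classes with ℋ(l/N) ≤ ℋ(Q*(1)) get zero distortion and the remaining classes get the D_l of (8.5.35).

```python
from math import log

def H(p):
    # binary entropy in nats
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log(p) - (1 - p) * log(1 - p)

def inv_H(h, lo=1e-12, hi=0.5, iters=80):
    # smallest p in [0, 1/2] with H(p) = h, by bisection
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H(mid) < h else (lo, mid)
    return 0.5 * (lo + hi)

R, eps, delta = 0.45, 0.01, 0.06           # nats; H(eps) ~ 0.056 < delta
q_star = inv_H(R + H(eps) - delta)         # (8.5.28): R = H(Q*) - H(eps) + delta
print(f"Q*(1) = {q_star:.3f}")

N = 100
for l in [5, 20, 40, 50]:
    if H(l / N) <= H(q_star):
        print(f"l/N={l/N:.2f}: encoded with zero distortion")
    else:
        D_l = inv_H(H(l / N) + delta - R)  # (8.5.35): R = H(l/N) - H(D_l) + delta
        print(f"l/N={l/N:.2f}: distortion D_l = {D_l:.3f}")
```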

It is natural to define the composite code

$$\mathscr{B}_c = \bigcup_{l=0}^{N}\mathscr{B}_l \qquad (8.5.46)$$

which has (N + 1)e^{NR} elements (e^{NR} for each of the N + 1 composition classes) and hence rate

$$R_c = R + \frac{\ln\,(N + 1)}{N} \qquad (8.5.47)$$

For the code ℬ_c, we have

$$d(\mathbf{u}\,|\,\mathscr{B}_c) \le D_l \qquad \text{if } \mathbf{u}\in\mathscr{C}_N(l) \qquad (8.5.48)$$

where we take D_l = 0 if ℋ(l/N) ≤ ℋ(Q*(1)). We see that, as N → ∞, R_c → R, and thus by choosing N large enough we can make the rate of the composite code ℬ_c arbitrarily close to R.

Up to this point, the results depend only on the source alphabet and are 
independent of the source statistics. The composite code 3# c satisfies (8.5.48) 
regardless of the actual source statistics. Suppose, however, that our binary source is memoryless with probability Q(1) = q ≤ ½ and Q(0) = 1 − q. Then the rate distortion function for this source is R(D) = ℋ(q) − ℋ(D) for 0 ≤ D ≤ q. How well does the composite code encode this source? The average distortion using the composite code is

$$\bar{d}(\mathscr{B}_c) = \sum_{l=0}^{N}\sum_{\mathbf{u}\in\mathscr{C}_N(l)} Q_N(\mathbf{u})\,d(\mathbf{u}\,|\,\mathscr{B}_c) \le \sum_{l=0}^{N}\binom{N}{l}q^l(1 - q)^{N - l}\,D_l \qquad (8.5.49)$$



As N increases, the binomial weight (N choose l) q^l (1 − q)^{N−l} concentrates its mass around its mean Nq. This follows from the asymptotic equipartition property (McMillan [1953]), which says that, as block length increases, almost all source sequences tend to have the same composition. Thus we have (see Prob. 8.13 and Chap. 1)

$$\lim_{N\to\infty}\sum_{l=0}^{N}\binom{N}{l}q^l(1 - q)^{N - l}\,D_l = D \qquad (8.5.50)$$

where D satisfies (8.5.35) with l = Nq; that is

$$R = \mathscr{H}(q) - \mathscr{H}(D) + \delta = R(D) + \delta \qquad (8.5.51)$$

The code rate for ℬ_c then becomes

$$R_c = R(D) + \delta + \frac{\ln\,(N + 1)}{N} \qquad (8.5.52)$$

Hence given any η > 0, we can find δ small enough and N large enough so that

$$\bar{d}(\mathscr{B}_c) \le D + \eta \qquad (8.5.53)$$

and

$$R_c \le R(D) + \eta \qquad (8.5.54)$$



Thus the composite codes can encode any memoryless binary source with error 
distortion arbitrarily close to the theoretical rate distortion limit. This is a robust 
source encoding scheme for memoryless sources in the sense that the same compo 
sition class code is efficient (near the rate distortion limit) for all such sources and 
the composite code is constructed independent of actual source statistics. 

The preceding example of a binary alphabet with the error distortion measure 
can be generalized to arbitrary discrete alphabets and arbitrary single-letter dis 
tortions (see Prob. 8.14). Further generalizations are possible by considering fixed 
finite sequences of source outputs as elements of a larger extended discrete 
alphabet. In this manner, the robust source coding technique can be applied to 
sources with memory (see Martin [1976]). The basic approach of considering a 
single source as a composite of subsources and finding codes for each subsource in 
constructing a total composite code is also used in encoding nonergodic station 
ary sources. This is referred to as universal source coding and is discussed in 
Sec. 8.6. 

We have demonstrated a similarity between symmetric sources with balanced 
distortions and fixed composition classes. In general with any discrete alphabet, 
for any fixed composition class, we may define a function R(D; Q) where Q is the 
distribution determined by the composition. We can show that if R > R(D; Q) 
and the block length is large enough, we can find a code that will encode all 
sequences of the composition class to distortion D or less. Certainly if 

R > max R(D; Q) (8.5.55) 

Q 




then every output sequence can be encoded with distortion D or less. Symmetric 
sources with balanced distortions have the property that 

R(D) = maxR(D ,Q) (8.5.56) 

Q 

Thus the symmetric source coding theorem (Theorem 8.5.1) is actually a special 
case of the composition class source coding theorem (Theorem 8.5.2 appropriately 
generalized to arbitrary discrete alphabets and any single-letter distortion meas 
ures). See Probs. 8.14 and 8.15 for generalizations and further details. 

8.5.3 Example of Encoding with Linear Block Codes 

We conclude our discussion with a coding example for the simplest symmetric 
source with balanced distortion, the binary symmetric source with error distortion 
measure. This example, due to Goblick [1962], shows that Theorem 8.5.1 is 
satisfied with a linear binary code. 

Let 𝒰 = 𝒱 = {0, 1}, Q(0) = Q(1) = ½, and d(k, j) = 1 − δ_{kj}. The rate distortion function is, of course, R(D) = ln 2 − ℋ(D) for 0 ≤ D ≤ ½. Now we consider linear binary (N, K) codes for source coding where the rate is r = K/N bits per symbol or R = (K/N) ln 2 nats per symbol. First consider K binary sequences of length N, {b_1, b_2, ..., b_K}, which we call code-generator vectors. With these generator vectors we generate a sequence of codes of block length N and different rates by defining, for l = 1, 2, ..., K, the subcodes



$$\mathscr{B}(l) = \{\mathbf{v}\colon \mathbf{v} = c_1\mathbf{b}_1\oplus c_2\mathbf{b}_2\oplus\cdots\oplus c_l\mathbf{b}_l\} \qquad (8.5.57)$$

where the binary coefficients c_1, c_2, ..., c_l range over all possible binary sequences of length l. There are then 2^l codewords in ℬ(l). By defining the set

$$\mathscr{B}(l;\mathbf{b}_{l+1}) = \{\mathbf{v}\colon \mathbf{v} = \mathbf{v}'\oplus\mathbf{b}_{l+1},\ \mathbf{v}'\in\mathscr{B}(l)\} \qquad (8.5.58)$$

we see that

$$\mathscr{B}(l + 1) = \mathscr{B}(l)\cup\mathscr{B}(l;\mathbf{b}_{l+1}) \qquad (8.5.59)$$

That is, code ℬ(l + 1), which has rate (l + 1)/N bits per symbol, is the union of code ℬ(l), which has rate l/N bits per symbol, and a "shifted" version of this code denoted ℬ(l; b_{l+1}).

Generate the ensemble of linear binary codes obtained by randomly selecting the generator vectors such that all components of all vectors are treated as independent equiprobable binary random variables. Since there are Nl components in the generator vectors b_1, b_2, ..., b_l, the code ℬ(l) has ensemble probability distribution given by

$$P(\mathscr{B}(l)) = \left(\frac{1}{2}\right)^{Nl} \qquad (8.5.60)$$

Recall that u ∈ 𝒰^N also has a uniform probability distribution, so that over the source and generator ensembles u and u ⊕ b_l are independent binary vectors. (Check this for N = 1 and generalize.)

The usual ensemble coding argument must be modified here to a series of average coding arguments and a sequential selection of codeword generators. Since code ℬ(l + 1) is constructed from code ℬ(l) and another randomly selected generator vector b_{l+1}, we have

$$\Pr\{d(\mathbf{u}\,|\,\mathscr{B}(l+1)) > D\,|\,\mathscr{B}(l)\} = \Pr\{d(\mathbf{u}\,|\,\mathscr{B}(l)) > D,\ d(\mathbf{u}\,|\,\mathscr{B}(l;\mathbf{b}_{l+1})) > D\,|\,\mathscr{B}(l)\} \qquad (8.5.61)$$

where the probability is over the ensemble of u ∈ 𝒰^N and b_{l+1} ∈ 𝒱^N. But now

$$d(\mathbf{u}\,|\,\mathscr{B}(l;\mathbf{b}_{l+1})) = \min_{\mathbf{v}\in\mathscr{B}(l)} d_N(\mathbf{u}, \mathbf{v}\oplus\mathbf{b}_{l+1}) = \min_{\mathbf{v}\in\mathscr{B}(l)} d_N(\mathbf{u}\oplus\mathbf{b}_{l+1}, \mathbf{v}) = d(\mathbf{u}\oplus\mathbf{b}_{l+1}\,|\,\mathscr{B}(l)) \qquad (8.5.62)$$

and, since u and u ⊕ b_{l+1} are independent of each other, (8.5.61) becomes

$$\Pr\{d(\mathbf{u}\,|\,\mathscr{B}(l+1)) > D\,|\,\mathscr{B}(l)\} = \Pr\{d(\mathbf{u}\,|\,\mathscr{B}(l)) > D\,|\,\mathscr{B}(l)\}\,\Pr\{d(\mathbf{u}\oplus\mathbf{b}_{l+1}\,|\,\mathscr{B}(l)) > D\,|\,\mathscr{B}(l)\} \qquad (8.5.63)$$

The left side of (8.5.63) can also be written as an average over b_{l+1}

$$\Pr\{d(\mathbf{u}\,|\,\mathscr{B}(l+1)) > D\,|\,\mathscr{B}(l)\} = \sum_{\mathbf{b}_{l+1}}\left(\frac{1}{2}\right)^N\Pr\{d(\mathbf{u}\,|\,\mathscr{B}(l+1)) > D\,|\,\mathscr{B}(l),\mathbf{b}_{l+1}\} \qquad (8.5.64)$$

Hence given any code ℬ(l), there exists a generator vector b_{l+1} such that

$$\Pr\{d(\mathbf{u}\,|\,\mathscr{B}(l+1)) > D\,|\,\mathscr{B}(l),\mathbf{b}_{l+1}\} \le \left[\Pr\{d(\mathbf{u}\,|\,\mathscr{B}(l)) > D\,|\,\mathscr{B}(l)\}\right]^2 \qquad (8.5.65)$$

We can select a sequence of generator vectors b_1, b_2, ..., b_K such that, for each l, (8.5.65) holds. Then for such a set of K generator vectors we have

$$\Pr\{d(\mathbf{u}\,|\,\mathscr{B}(K)) > D\,|\,\mathscr{B}(K)\} \le \left[\Pr\{d(\mathbf{u}\,|\,\mathscr{B}(0)) > D\}\right]^{2^K} = \left[\Pr\{d_N(\mathbf{u}, \mathbf{0}) > D\}\right]^{2^K}$$
$$= [1 - F_N(D)]^{2^K} \le e^{-2^K F_N(D)} \qquad (8.5.66)$$

where we have used ln x ≤ x − 1 and defined F_N(D) = Pr{d_N(u, 0) ≤ D}. From App. 8A, we have

$$F_N(D) \ge e^{-N[R(D) + o(N)]}$$

so that there exists a code ℬ(K) such that

$$\Pr\{d(\mathbf{u}\,|\,\mathscr{B}(K)) > D\,|\,\mathscr{B}(K)\} \le \exp\{-e^{N[R - R(D) + o(N)]}\} \qquad (8.5.67)$$

where R = (K/N) ln 2. Following the same argument as in the proof of Theorem 8.5.1, we see that by choosing N sufficiently large, for any fixed rate

$$R = \frac{K}{N}\ln 2 > R(D) = \ln 2 - \mathscr{H}(D)$$

there exists a linear binary (N, K) code ℬ(K) such that

$$d(\mathbf{u}\,|\,\mathscr{B}(K)) \le D \qquad \text{for all } \mathbf{u}\in\mathscr{U}^N \qquad (8.5.68)$$

Thus for a binary symmetric source and error distortion measure, a uniform distortion condition is met by a linear code.
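For very small N and K the construction can be checked by brute force. The sketch below is illustrative only; it draws random generator vectors, forms the 2^K codewords, and measures the covering distortion max_u min_v d_N(u, v). At such short block lengths the worst-case distortion is still well above the asymptotic value D solving (K/N) ln 2 = ln 2 − ℋ(D).

```python
import numpy as np
from itertools import product
from math import log

rng = np.random.default_rng(1)
N, K = 10, 4                                          # rate R = (K/N) ln 2
G = rng.integers(0, 2, size=(K, N))                   # generator vectors b_1, ..., b_K

codewords = np.array([(np.array(c) @ G) % 2 for c in product([0, 1], repeat=K)])
all_u = np.array(list(product([0, 1], repeat=N)))

# Hamming distance from every u to its nearest codeword, normalized by N
dmin = np.min(np.count_nonzero(all_u[:, None, :] != codewords[None, :, :], axis=2), axis=1)
print(f"rate R = {K/N*log(2):.3f} nats, worst-case distortion = {dmin.max()/N:.2f}")
# For comparison, ln 2 - H(D) = 0.4 ln 2 at D ~ 0.147
```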



8.6 UNIVERSAL CODING 

The source coding theorems of Sec. 8.2 were restricted to stationary ergodic 
sources. The formal definition of R(D) given by (8.2.5), however, can also be given 
for nonergodic stationary sources where Lemma 8.2.1 still applies. The converse 
coding theorem (Theorem 8.2.1) applies to nonergodic stationary sources only if 
we interpret average distortion as an ensemble average. The coding theorems, 
however, do require that the sources be ergodic. One might expect that it would be 
possible to prove coding theorems for arbitrary stationary sources and then show 
that R(D) is indeed the minimum possible rate that can be achieved with ensemble 
average distortion of D or less. We present next, however, a counterexample 
which shows that R(D) given by (8.2.5) does not represent the minimum possible 
rate necessary to achieve ensemble average distortion D for nonergodic stationary 
sources. 

Example (Gray [1975]) Consider a memoryless Gaussian source of zero mean and variance σ². For the squared-error distortion measure d(u, v) = (u − v)², the rate distortion function is given by (7.7.20)

$$R_g(D) = \tfrac{1}{2}\ln\frac{\sigma^2}{D}\ \text{nats/symbol}\qquad 0 \le D \le \sigma^2\qquad(7.7.20)$$

Next suppose we have any stationary source whose outputs are random variables (not necessarily independent) with zero mean and variance σ². For the squared-error distortion, we can define the function R(D) as given by (8.2.5). Lemma 8.2.1 shows that

$$R(D) \le R_1(D)\qquad(8.6.1)$$

where R_1(D) is the rate distortion function for the corresponding memoryless source. From Theorem 7.7.3, we have the inequality

$$R_1(D) \le \tfrac{1}{2}\ln\frac{\sigma^2}{D} = R_g(D)\qquad(8.6.2)$$

with equality if and only if the source single-letter probability density is Gaussian. Hence, for any rate R, if we pick D_1 and D_g to satisfy

$$R = R_1(D_1) = R_g(D_g)\qquad(8.6.3)$$







Figure 8.4 Composite source.

then we have

$$D_1 \le D_g = \sigma^2 e^{-2R}\qquad(8.6.4)$$

with equality if and only if the memoryless source is Gaussian.

Now consider a composite source consisting of two memoryless Gaussian subsources, each of zero mean. One subsource has variance σ_1² and the other subsource has variance σ_2² ≠ σ_1². The composite source has the output sequence of the first subsource with probability ½ and the output sequence of the second subsource with probability ½. This source is sketched in Fig. 8.4. Hence u ∈ ℛ_N has probability density

$$Q_N(\mathbf{u}) = \tfrac{1}{2}(2\pi\sigma_1^2)^{-N/2}e^{-\|\mathbf{u}\|^2/2\sigma_1^2} + \tfrac{1}{2}(2\pi\sigma_2^2)^{-N/2}e^{-\|\mathbf{u}\|^2/2\sigma_2^2}\qquad(8.6.5)$$

which is clearly non-Gaussian when σ_1² ≠ σ_2². The composite source has memory and is stationary. It is not ergodic.¹³ Its first-order density is

$$Q(u) = \tfrac{1}{2}Q^{(1)}(u) + \tfrac{1}{2}Q^{(2)}(u)\qquad(8.6.6)$$



where Q^{(1)} and Q^{(2)} are the zero-mean Gaussian densities with variances σ_1² and σ_2², respectively. Note that

$$\int_{-\infty}^{\infty} u\,Q(u)\,du = 0$$

and

$$\int_{-\infty}^{\infty} u^2 Q(u)\,du = \tfrac{1}{2}\int_{-\infty}^{\infty} u^2 Q^{(1)}(u)\,du + \tfrac{1}{2}\int_{-\infty}^{\infty} u^2 Q^{(2)}(u)\,du = \tfrac{1}{2}\sigma_1^2 + \tfrac{1}{2}\sigma_2^2 = \sigma^2\qquad(8.6.7)$$

For the distortion d(u, v) = (u − v)², we can define R(D) and R_1(D). For any rate R, we have from (8.6.4) that D_1, where R = R_1(D_1), satisfies

$$\sigma^2 e^{-2R} > D_1\qquad\text{or}\qquad \sigma^2 e^{-2R} = D_1 + \delta\qquad(8.6.8)$$

where δ > 0, since (8.6.6) is not a Gaussian density function.



¹³ The variance of any sample output sequence is either σ_1² or σ_2², while the ensemble variance is σ² = ½(σ_1² + σ_2²).
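Footnote 13 is easy to see in a short simulation (an illustration added here, not part of the text; the variances below are arbitrary): every sample path has an empirical variance near σ_1² or near σ_2², never near the ensemble variance ½(σ_1² + σ_2²).

    # Illustrative sketch: nonergodicity of the composite Gaussian source.
    import random, statistics

    sigma1, sigma2, n = 1.0, 3.0, 200_000
    for trial in range(5):
        sigma = sigma1 if random.random() < 0.5 else sigma2   # subsource picked once, for all time
        xs = [random.gauss(0.0, sigma) for _ in range(n)]
        print("sample variance:", round(statistics.pvariance(xs), 3))
    print("ensemble variance:", 0.5 * sigma1**2 + 0.5 * sigma2**2)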






Let ℬ be any block code of block length N and rate R. If this code is used to encode the composite source, the ensemble average distortion is

$$d(\mathscr{B}) = \int Q_N(\mathbf{u})\,d(\mathbf{u}\,|\,\mathscr{B})\,d\mathbf{u} = \tfrac{1}{2}d_1(\mathscr{B}) + \tfrac{1}{2}d_2(\mathscr{B})\qquad(8.6.9)$$

But

$$d_1(\mathscr{B}) = \int (2\pi\sigma_1^2)^{-N/2}e^{-\|\mathbf{u}\|^2/2\sigma_1^2}\,d(\mathbf{u}\,|\,\mathscr{B})\,d\mathbf{u}\qquad(8.6.10)$$

is the average distortion for the zero-mean Gaussian source with variance σ_1². The converse coding theorem states that

$$d_1(\mathscr{B}) \ge \sigma_1^2 e^{-2R}\qquad(8.6.11)$$

and similarly

$$d_2(\mathscr{B}) \ge \sigma_2^2 e^{-2R}\qquad(8.6.12)$$

yielding

$$d(\mathscr{B}) \ge \tfrac{1}{2}(\sigma_1^2 + \sigma_2^2)e^{-2R} = \sigma^2 e^{-2R} = D_1 + \delta\qquad(8.6.13)$$

where δ > 0, according to (8.6.8).

If R(D) represents the achievable rate for which we can encode the stationary composite source with ensemble average distortion D or less, then given any ε > 0 we can find a block code of rate¹⁴ R = R(D) such that

$$d(\mathscr{B}) \le D + \epsilon\qquad(8.6.14)$$

But from (8.6.1) and (8.6.3) we have that R = R(D) = R_1(D_1) ≤ R_1(D), which implies

$$D \le D_1\qquad(8.6.15)$$

and so

$$d(\mathscr{B}) \le D_1 + \epsilon\qquad(8.6.16)$$

However, from (8.6.13),

$$d(\mathscr{B}) \ge D_1 + \delta\qquad(8.6.17)$$

which is a contradiction since we can choose ε < δ. Hence R(D) does not represent minimum achievable rates for the stationary composite source.

The above counterexample shows us that the function R(D), although definable for arbitrary stationary sources, has operational significance only for stationary ergodic sources. It turns out, however, that a stationary source in general can always be viewed as a union of stationary ergodic subsources (Gray and Davisson [1974]). (In the above counterexample the source consisted of two subsources.) This fact has led to the development of coding theorems for general stationary sources without the ergodicity assumption. We illustrate this generalization to nonergodic stationary sources with a simple example.

¹⁴ In Corollary 7.2.2 we can replace R(D) + ε by R(D) (see Prob. 7.4).



Example (Stationary binary source) Suppose we have a binary source which consists of L memoryless binary subsources as shown in Fig. 8.5, where the lth subsource 𝒮_l outputs independent binary symbols with probability p_l of a "1" output at any given time, for l = 1, 2, ..., L. The composite binary source has as its output sequence the output sequence of one of its subsources. It has a priori probability π_l (l = 1, 2, ..., L) of being connected to subsource 𝒮_l for all time. Hence u = (u_1, u_2, ..., u_N) has probability

$$Q_N(\mathbf{u}) = \sum_{l=1}^{L}\pi_l\,p_l^{w(\mathbf{u})}(1 - p_l)^{N - w(\mathbf{u})}\qquad(8.6.18)$$

where w(u) is the number of "1"s in u. Clearly this binary source is a stationary source. It is not ergodic, since any sample output sequence (..., u_{-1}, u_0, u_1, ...) has time average

$$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n} u_i = p_l\qquad(8.6.19)$$

if it is the output of subsource 𝒮_l, whereas the ensemble expectation is

$$E\{u_n\} = \sum_{l=1}^{L}\pi_l\,p_l\qquad(8.6.20)$$

Figure 8.5 Stationary nonergodic binary source.




Suppose we have the representation alphabet 𝒱 = 𝒰 = {0, 1} and error distortion measure d(u, v) = 1 − δ_{uv}. What is the smallest average distortion we can achieve for this binary source? Although R(D) can be defined in terms of (8.2.5), the previous example showed that R(D) does not necessarily represent the minimum rate that can achieve average distortion D. We do know that, given ε > 0, there exist block codes ℬ_1, ℬ_2, ..., ℬ_L of block length N and rate R such that the first subsource can be encoded using code ℬ_1 with average distortion

$$d_1(\mathscr{B}_1) \le D_1 + \epsilon\qquad(8.6.21)$$

where D_1 satisfies

$$R = R(D_1; p_1) = \mathscr{H}(p_1) - \mathscr{H}(D_1)\qquad(8.6.22)$$

Similarly, the lth subsource can be encoded using code ℬ_l with average distortion

$$d_l(\mathscr{B}_l) \le D_l + \epsilon\qquad(8.6.23)$$

where D_l satisfies

$$R = R(D_l; p_l) = \mathscr{H}(p_l) - \mathscr{H}(D_l)\qquad(8.6.24)$$

In other words, for a given rate R and any ε > 0, we can find for each subsource a block code which will give average distortion within ε of the smallest average distortion possible for that subsource. The converse theorem applied to the lth subsource says we cannot do any better than average distortion D_l.
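As a purely numerical illustration (not from the text), the levels D_l can be computed by inverting the binary entropy function; the parameters p_l and R below are arbitrary.

    # Illustrative sketch: solving R = H(p_l) - H(D_l) for each subsource parameter p_l.
    import math

    def H(x):                                    # natural-log binary entropy
        return 0.0 if x in (0.0, 1.0) else -x*math.log(x) - (1 - x)*math.log(1 - x)

    def inverse_entropy(h):                      # D in [0, 1/2] with H(D) = h (bisection)
        lo, hi = 0.0, 0.5
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if H(mid) < h else (lo, mid)
        return (lo + hi) / 2

    R = 0.3                                      # nats per symbol
    for p in (0.1, 0.3, 0.5):
        if H(p) <= R:
            print(f"p_l = {p}: rate exceeds H(p_l), so D_l = 0")
        else:
            print(f"p_l = {p}: D_l = {inverse_entropy(H(p) - R):.4f}")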

Suppose we construct a code for our nonergodic stationary binary source as the union of the above codes designed for each subsource and denote this composite code

$$\mathscr{B}_c = \bigcup_{l=1}^{L}\mathscr{B}_l\qquad(8.6.25)$$

This code has

$$M_c = Le^{NR}\qquad(8.6.26)$$

codewords, since there are e^{NR} codewords in each of the subcodes ℬ_1, ℬ_2, ..., ℬ_L. The rate of the composite code is thus

$$R_c = \frac{\ln M_c}{N} = R + \frac{\ln L}{N}\qquad(8.6.27)$$

where, as N approaches infinity, (ln L)/N converges to zero. For any source sequence of length N, u = (u_1, u_2, ..., u_N), this code has distortion

$$d(\mathbf{u}\,|\,\mathscr{B}_c) = \min_{\mathbf{v}\in\mathscr{B}_c} d_N(\mathbf{u},\mathbf{v}) = \min\Bigl\{\min_{\mathbf{v}\in\mathscr{B}_1} d_N(\mathbf{u},\mathbf{v}),\ \min_{\mathbf{v}\in\mathscr{B}_2} d_N(\mathbf{u},\mathbf{v}),\ \ldots,\ \min_{\mathbf{v}\in\mathscr{B}_L} d_N(\mathbf{u},\mathbf{v})\Bigr\} = \min\{d(\mathbf{u}\,|\,\mathscr{B}_1),\ d(\mathbf{u}\,|\,\mathscr{B}_2),\ \ldots,\ d(\mathbf{u}\,|\,\mathscr{B}_L)\}\qquad(8.6.28)$$

Hence the average distortion using code ℬ_c is at least as small as is achievable with the knowledge of which subsource is connected to the output and using the appropriate subcode. That is,

$$d(\mathscr{B}_c) \le D_l + \epsilon\qquad(8.6.29)$$

if subsource 𝒮_l is connected to the output. Hence, for a fixed code rate and by choosing large enough block lengths, the code ℬ_c can have average distortion arbitrarily close to the minimum possible average distortion.
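The construction is easy to mimic numerically (an illustrative sketch, not the text's; the subcodes here are simply random codebooks biased toward each p_l rather than optimal codes): encode each block by the best word in the union of the L subcodes and note that the rate grows only by (ln L)/N.

    # Illustrative sketch: a composite code as the union of L subcodes.
    import math, random

    def distortion(u, code):                     # normalized Hamming distortion to nearest codeword
        return min(sum(a != b for a, b in zip(u, v)) for v in code) / len(u)

    N, R = 40, 0.2                               # block length, rate in nats/symbol
    M = int(math.exp(N * R))                     # codewords per subcode
    ps = [0.1, 0.3, 0.5]                         # subsource parameters (illustrative)
    subcodes = [[[int(random.random() < p) for _ in range(N)] for _ in range(M)] for p in ps]
    composite = [v for sub in subcodes for v in sub]

    print("composite code rate:", round(R + math.log(len(ps)) / N, 4), "nats/symbol")
    for p in ps:                                 # pretend subsource p is connected to the output
        u = [int(random.random() < p) for _ in range(N)]
        print(f"p = {p}: distortion with composite code = {distortion(u, composite):.3f}")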

In Sec. 8.2, we established the performance of the best possible encoding methods for stationary ergodic sources. Generalizing on the above example, we may show that when a source can be modeled as a finite collection of stationary ergodic subsources, then by using good codes for each of the subsources to form a composite code for the overall stationary, but not necessarily ergodic, source, we can still achieve the minimum average distortion possible for a fixed rate. This technique generalizes to a large class of nonergodic stationary sources, because nonergodic stationary sources can generally be represented as a collection of stationary ergodic subsources, characterized by an a priori probability distribution that any particular subsource output sequence is the total source output sequence. Although for many sources the number of subsources thus required is infinite, under certain topological conditions (on both the source and the distortion measure) the collection of subsources can be approximated by a finite collection of subsources. Once the finite approximation is made, we can proceed as in the above example. To illustrate this approach, we return to the binary example, but now with an uncountable number of stationary ergodic subsources.

Example (Binary source with a random parameter) Consider a memoryless binary source where the probability p of a "1" is a random variable with range between 0 and 1. We wish to encode this source using the error distortion measure d(u, v) = 1 − δ_{uv}. If p ∈ [0, 1] were known, we would have a memoryless binary source which is stationary and ergodic. Because of the random parameter p, the overall source is stationary but nonergodic. In order to reduce this problem to the case of our previous example, we need to approximate the set of all possible subsources by a finite set of subsources. To do this, we define a distance between two binary memoryless sources, each with known but different parameters.

Let 𝒮 and 𝒮̂ be two binary memoryless sources with parameters p and p̂ respectively. Let Q(u, û) be any joint distribution such that

$$p = \sum_{\hat u} Q(1, \hat u) = Q(1, 0) + Q(1, 1)\qquad(8.6.30)$$

and

$$\hat p = \sum_{u} Q(u, 1) = Q(0, 1) + Q(1, 1)\qquad(8.6.31)$$

That is, let Q(u, û) be any joint distribution with marginal distributions corresponding to sources 𝒮 and 𝒮̂. Define the distance between the two sources as

$$\bar d(p, \hat p) = \min_{Q\in\mathscr{Q}}\sum_{u}\sum_{\hat u} Q(u, \hat u)\,d(u, \hat u)\qquad(8.6.32)$$

where 𝒬 is the collection of such joint distributions. Then, for any such Q,

$$\sum_{u}\sum_{\hat u} Q(u, \hat u)\,d(u, \hat u) = Q(0, 1) + Q(1, 0)\qquad(8.6.33)$$

since d(u, û) = 1 − δ_{uû} is the distortion measure. It follows easily (see Prob. 8.17) that

$$\bar d(p, \hat p) = |p - \hat p|\qquad(8.6.34)$$
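As a worked illustration (anticipating Prob. 8.17; the explicit coupling below is an added example, not the text's argument), suppose p ≥ p̂ and take

$$Q(1, 1) = \hat p,\qquad Q(1, 0) = p - \hat p,\qquad Q(0, 1) = 0,\qquad Q(0, 0) = 1 - p$$

This joint distribution has the required marginals and gives Q(0, 1) + Q(1, 0) = p − p̂, while any admissible Q must have Q(1, 0) = p − Q(1, 1) ≥ p − p̂ because Q(1, 1) ≤ p̂; hence the minimum in (8.6.32) is exactly |p − p̂|.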






Let ℬ be any block code of length N, and let u ∈ 𝒰_N be an output sequence of length N from source 𝒮 and û ∈ 𝒰_N be an output sequence from source 𝒮̂. Let v(û) ∈ ℬ satisfy

$$d_N(\hat{\mathbf{u}}, \mathbf{v}(\hat{\mathbf{u}})) = \min_{\mathbf{v}\in\mathscr{B}} d_N(\hat{\mathbf{u}}, \mathbf{v})$$

Then

$$\min_{\mathbf{v}\in\mathscr{B}} d_N(\mathbf{u}, \mathbf{v}) \le d_N(\mathbf{u}, \mathbf{v}(\hat{\mathbf{u}})) \le d_N(\mathbf{u}, \hat{\mathbf{u}}) + d_N(\hat{\mathbf{u}}, \mathbf{v}(\hat{\mathbf{u}})) = d_N(\mathbf{u}, \hat{\mathbf{u}}) + \min_{\mathbf{v}\in\mathscr{B}} d_N(\hat{\mathbf{u}}, \mathbf{v})\qquad(8.6.35)$$

where the second inequality is the triangle inequality, which this error distortion measure clearly satisfies. By symmetry we then have

$$\min_{\mathbf{v}\in\mathscr{B}} d_N(\hat{\mathbf{u}}, \mathbf{v}) \le d_N(\mathbf{u}, \hat{\mathbf{u}}) + \min_{\mathbf{v}\in\mathscr{B}} d_N(\mathbf{u}, \mathbf{v})\qquad(8.6.36)$$

Now averaging either (8.6.35) or (8.6.36) with respect to the joint distribution Q(u, û) which satisfies (8.6.30), (8.6.31), (8.6.32), and (8.6.34), we obtain

$$|d(\mathscr{B}\,|\,p) - d(\mathscr{B}\,|\,\hat p)| \le \bar d(p, \hat p) = |p - \hat p|\qquad(8.6.37)$$

where d(ℬ|p) and d(ℬ|p̂) are the average distortions attained with code ℬ for sources 𝒮 and 𝒮̂, respectively. This "mismatch" equation tells us the maximum average distortion loss we can have when applying a code designed for one source to another source. It allows us to make a finite approximation to the source space since, when two sources are close in source distance d̄(p, p̂), a good code for one source is also good for the other. In addition, if R(D; p) and R(D; p̂) are the rate distortion functions for the two sources, we can easily show (see Prob. 8.18) that

$$R(D + \bar d(p, \hat p);\ \hat p) \le R(D;\ p) \le R(D - \bar d(p, \hat p);\ \hat p)\qquad(8.6.38)$$

Given any ε > 0, let us divide the unit interval into L equally spaced intervals of length less than ε, which requires L > 1/ε. Let p_1, p_2, ..., p_L be the midpoints of the L intervals. By construction |p_l − p_{l+1}| < ε, and for any p ∈ [0, 1] we have

$$\min_{l}|p - p_l| < \epsilon\qquad(8.6.39)$$

Hence for any subsource with parameter p there is a subsource with parameter in the finite set {p_1, p_2, ..., p_L} which is within "source distance" ε. We now use subsources corresponding to these parameters as the finite approximation to the uncountable set of subsources. Following the results of our earlier example, we find codes ℬ_1, ℬ_2, ..., ℬ_L satisfying (8.6.21) to (8.6.24) and define the composite code

$$\mathscr{B}_c = \bigcup_{l=1}^{L}\mathscr{B}_l\qquad(8.6.40)$$

/= i 

For any subsource with parameter p ∈ [0, 1], we have from (8.6.37) that the average distortion

$$d(\mathscr{B}_c\,|\,p) \le d(\mathscr{B}_c\,|\,p^*) + \bar d(p, p^*)\qquad(8.6.41)$$

where p* ∈ {p_1, p_2, ..., p_L} is such that d̄(p, p*) = |p − p*| < ε. Then, since d(ℬ_c|p*) ≤ D* + ε [see (8.6.29)], we have

$$d(\mathscr{B}_c\,|\,p) \le D^* + 2\epsilon\qquad(8.6.42)$$



where D* satisfies [see (8.6.24)]

$$R = \mathscr{H}(p^*) - \mathscr{H}(D^*) = R(D^*;\ p^*)\qquad(8.6.43)$$

For the source with parameter p, the smallest average distortion possible is D, where

$$R = R(D;\ p) = R(D^*;\ p^*)\qquad(8.6.44)$$

But from (8.6.38) we have

$$R(D^* + \epsilon;\ p) \le R(D^*;\ p^*) = R(D;\ p) \le R(D^* - \epsilon;\ p)\qquad(8.6.45)$$

and so

$$D^* \le D + \epsilon\qquad(8.6.46)$$

Thus, finally substituting in (8.6.42), we obtain

$$d(\mathscr{B}_c\,|\,p) \le D + 3\epsilon\qquad(8.6.47)$$

The code rate for the composite code ℬ_c is

$$R_c = R + \frac{\ln L}{N}\qquad(8.6.48)$$

which approaches R as N → ∞. This shows that for any fixed rate, regardless of the value of the unknown parameter p, we can use a single code ℬ_c to encode our binary source with unknown parameter with an average distortion which is asymptotically equal to the minimum achievable when the parameter is known.
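A short computation (added for illustration, not from the text) shows how quickly the rate penalty of the composite code disappears; the value of ε and the block lengths below are arbitrary.

    # Illustrative sketch: rate penalty (ln L)/N of the composite code for a grid of
    # L = ceil(1/eps) parameter values, with the distortion penalty 3*eps of (8.6.47).
    import math

    eps = 0.05
    L = math.ceil(1 / eps)
    for N in (100, 1_000, 10_000):
        print(f"N = {N}: rate penalty (ln L)/N = {math.log(L)/N:.6f} nats/symbol, "
              f"distortion penalty <= {3*eps}")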

The method of this example generalizes to a large class of nonergodic stationary sources and distortion measures. The basic idea is to first observe that all nonergodic stationary sources can be represented as a collection of stationary ergodic subsources (Rohlin [1967]). By defining a distance measure (see Prob. 8.18) on the subsource space, we can often "carve up" this space into a finite number of subsets, with each subset of subsources approximated by a single subsource. This finite approximation then allows us to design good codes for each of the finite representative subsources and take the union of these as the code for the actual source. If there are L such subsources, then the rate of the composite code is at most (ln L)/N larger than the rate for each subcode. For sufficiently large N, this additive term is negligible.

Universal coding refers to all such techniques where the performance of the code selected without knowledge of the unknown "true" source converges to the optimum performance possible with a code specifically designed for the known true source. The technique of representing or approximating a source as a finite composite of stationary ergodic subsources and forming a union code is one of several universal coding techniques. Another closely related technique involves using a small fraction of the rate to learn and characterize the stationary source, and then using the rest of the rate in encoding the source outputs. Earlier, in Sec. 8.5, we considered a stronger robust coding technique for finite alphabet




sources wherein the source outputs were classified according to a finite set of composition classes. This technique is also independent of the source statistics and is conceptually related to the approach in this section. In all cases the purpose is to encode unknown or nonergodic sources, which often may be characterized as sources with unknown parameters. The main result of these two sections is that these universal coding techniques can asymptotically do as well as when we know the unknown parameter exactly. A secondary purpose of this section is to demonstrate that, unlike the case of stationary ergodic sources, there is no single function for nonergodic stationary sources which plays the role of the rate distortion function.



8.7 BIBLIOGRAPHICAL NOTES AND REFERENCES 

Sources with memory were also first treated by Shannon [1948, 1959]. The calculation of the rate distortion function for discrete-time Gaussian sources is due to Shannon [1948], and the rate distortion function for a Gaussian random process is due to Kolmogorov [1956]. Sakrison and Algazi [1971] extended this to Gaussian random fields. Except for Gaussian sources with squared-error distortion, the evaluation of rate distortion functions is difficult, and various bounds, due to several researchers, have been developed.

The robust source encoding of fixed-composition sequences presented here appears in Berger [1971], while the techniques of universal coding are due to Ziv [1972], Davisson [1973], and Gray and Davisson [1974].



APPENDIX 8A CHERNOFF BOUNDS FOR 
DISTORTION DISTRIBUTIONS 



8A.1 SYMMETRIC SOURCES 

For the symmetric source defined by (8.5.1), (8.5.2), and (8.5.3), we have the rate distortion function given parametrically by (7.6.69)

$$D_s = \frac{\sum_{k=1}^{A} d_k e^{s d_k}}{\sum_{k=1}^{A} e^{s d_k}}$$

and (7.6.70)

$$R(D_s) = sD_s + \ln A - \ln\left(\sum_{k=1}^{A} e^{s d_k}\right)$$




where s ≤ 0. We now bound F_N(D) = Pr{d_N(u, v) ≤ D | u}, where v has the uniform probability distribution P_N(v) = 1/A^N. Using E{·} for expectation with respect to v, for any α > 0 we have the Chernoff bound

$$F_N(D) = \Pr\{d_N(\mathbf{u}, \mathbf{v}) \le D\,|\,\mathbf{u}\} = \Pr\left\{\sum_{n=1}^{N} d(u_n, v_n) - ND \le 0\right\} \le E\left\{\exp\left[\alpha\left(ND - \sum_{n=1}^{N} d(u_n, v_n)\right)\right]\right\} = \exp\left\{-N\left[-\alpha D + \ln A - \ln\left(\sum_{k=1}^{A} e^{-\alpha d_k}\right)\right]\right\}\qquad(8A.1)$$

By choosing α = −s > 0, where s satisfies (7.6.69) and (7.6.70), we have the bound

$$F_N(D) \le e^{-NR(D)}\qquad(8A.2)$$
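For the binary symmetric source with error distortion, this bound is easy to check numerically (an added illustration, not part of the appendix): there v is uniform on {0, 1}^N, so F_N(D) is a binomial tail and R(D) = ln 2 − ℋ(D).

    # Illustrative check of (8A.2) for the binary symmetric source with error distortion:
    # F_N(D) = Pr{Binomial(N, 1/2) <= N*D} versus the Chernoff bound exp(-N R(D)).
    import math

    def H(x):
        return 0.0 if x in (0.0, 1.0) else -x*math.log(x) - (1 - x)*math.log(1 - x)

    D = 0.2
    R = math.log(2) - H(D)                       # R(D) in nats for the BSS
    for N in (20, 50, 100):
        exact = sum(math.comb(N, k) for k in range(int(N * D) + 1)) / 2**N
        print(f"N = {N}:  F_N(D) = {exact:.3e}   exp(-N R(D)) = {math.exp(-N*R):.3e}")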

To derive a lower bound to F_N(D), define, for any β ≤ 0,

$$\mu(\beta) = \ln\left(\sum_{k=1}^{A}\frac{1}{A}\,e^{\beta d_k}\right)\qquad(8A.3)$$

and note that

$$\mu'(\beta) = \frac{\sum_{k=1}^{A} d_k e^{\beta d_k}}{\sum_{k=1}^{A} e^{\beta d_k}}\qquad(8A.4)$$

and

$$\mu''(\beta) = \frac{\sum_{k=1}^{A} d_k^2 e^{\beta d_k}}{\sum_{k=1}^{A} e^{\beta d_k}} - [\mu'(\beta)]^2\qquad(8A.5)$$




Here, since d_k ≤ d_0 for k = 1, 2, ..., A,

$$0 \le \mu''(\beta) \le d_0^2\qquad(8A.6)$$

For each u ∈ 𝒰, define a tilted probability on 𝒱 given by

$$P^{(\beta)}(v\,|\,u) = \frac{e^{\beta d(u, v)}}{\sum_{k=1}^{A} e^{\beta d_k}} = P(v)\,e^{\beta d(u, v) - \mu(\beta)}\qquad(8A.7)$$

Given u ∈ 𝒰_N, the tilted distribution for v ∈ 𝒱_N becomes

$$P_N^{(\beta)}(\mathbf{v}\,|\,\mathbf{u}) = \prod_{n=1}^{N} P^{(\beta)}(v_n\,|\,u_n)\qquad(8A.8)$$

Note that for this tilted distribution

$$\sum_{\mathbf{v}} P_N^{(\beta)}(\mathbf{v}\,|\,\mathbf{u})\,d_N(\mathbf{u}, \mathbf{v}) = \mu'(\beta)\qquad(8A.9)$$

and

$$\sum_{\mathbf{v}} P_N^{(\beta)}(\mathbf{v}\,|\,\mathbf{u})\,[d_N(\mathbf{u}, \mathbf{v}) - \mu'(\beta)]^2 = \frac{\mu''(\beta)}{N}\qquad(8A.10)$$

Given any δ > 0, we then have the bounds

$$\Pr\{d_N(\mathbf{u}, \mathbf{v}) \le \mu'(\beta) + \delta\,|\,\mathbf{u}\} \ge \sum_{\mathbf{v}:\,|d_N(\mathbf{u},\mathbf{v}) - \mu'(\beta)| \le \delta} P_N(\mathbf{v}) \ge e^{N[\mu(\beta) - \beta\mu'(\beta) + \beta\delta]}\sum_{\mathbf{v}:\,|d_N(\mathbf{u},\mathbf{v}) - \mu'(\beta)| \le \delta} P_N^{(\beta)}(\mathbf{v}\,|\,\mathbf{u})\qquad(8A.11)$$




The Chebyshev inequality (see Prob. 1.4) gives

$$\sum_{\mathbf{v}:\,|d_N(\mathbf{u},\mathbf{v}) - \mu'(\beta)| \le \delta} P_N^{(\beta)}(\mathbf{v}\,|\,\mathbf{u}) \ge 1 - \frac{\mu''(\beta)}{N\delta^2}\qquad(8A.12)$$

since d_N(u, v) has mean μ'(β) and variance μ''(β)/N over the tilted distribution. Here (8A.11) becomes

$$\Pr\{d_N(\mathbf{u}, \mathbf{v}) \le \mu'(\beta) + \delta\,|\,\mathbf{u}\} \ge \left(1 - \frac{\mu''(\beta)}{N\delta^2}\right)e^{N[\mu(\beta) - \beta\mu'(\beta) + \beta\delta]}\qquad(8A.13)$$



When D > D_min and ε > 0 is small enough so that D − ε > D_min, we can choose β ≤ 0 to satisfy

$$\mu'(\beta) = D - \epsilon\qquad(8A.14)$$

Let s satisfy (7.6.69) so that μ'(s) = D. Then

$$\int_{s}^{\beta}\int_{s}^{\alpha}\mu''(\alpha')\,d\alpha'\,d\alpha = \mu(\beta) - \mu(s) - (\beta - s)\mu'(s)\qquad(8A.15)$$

Since μ''(α) ≥ 0, we have

$$\mu(\beta) \ge \mu(s) + (\beta - s)\mu'(s)\qquad(8A.16)$$

so that subtracting βμ'(β) = βD − βε = βμ'(s) − βε from both sides gives

$$\mu(\beta) - \beta\mu'(\beta) \ge \mu(s) - s\mu'(s) + \beta\epsilon = -R(D) + \beta\epsilon\qquad(8A.17)$$

where we use (7.6.70). Using (8A.14) and (8A.17) in (8A.13), with δ = ε, we get the desired result

$$F_N(D) \ge \left(1 - \frac{d_0^2}{N\epsilon^2}\right)e^{-N[R(D) - 2\beta\epsilon]}\qquad(8A.18)$$




8A.2 BINARY ALPHABET COMPOSITION CLASS 

We have a source alphabet 𝒰 = {0, 1}, a representation alphabet 𝒱 = {0, 1}, and error distortion measure d(k, j) = 1 − δ_{kj} for k, j = 0, 1. For fixed integers N and l ≤ N, define, as in (8.5.23), (8.5.24), and (8.5.25), the composition class

$$\mathscr{U}_N(l) = \left\{\mathbf{u}\colon \sum_{n=1}^{N} u_n = l\right\}$$

the composition probability

$$Q^{(l)}(1) = \frac{l}{N}\qquad Q^{(l)}(0) = 1 - \frac{l}{N}$$

and the rate distortion function

$$R(D;\,Q^{(l)}) = \mathscr{H}\!\left(\frac{l}{N}\right) - \mathscr{H}(D)\qquad 0 \le D \le \min\left\{\frac{l}{N},\ 1 - \frac{l}{N}\right\}$$

Now pick any 0 < δ < ln 2 and a fixed rate R such that δ < R < ln 2, and choose 0 < ε < 0.3 to satisfy (8.5.27). Assume l is such that there exists a D_l > ε where, from (8.5.35),

$$R = R(D_l;\,Q^{(l)}) + \delta$$

We now find bounds as in (8.5.36) and (8.5.37) for Pr{d_N(u, v) ≤ D_l | u ∈ 𝒰_N(l)}, where v ∈ 𝒱_N has probability distribution

$$P_N(\mathbf{v}) = \prod_{n=1}^{N} P^{(l)}(v_n)$$

where

$$P^{(l)}(v) = \sum_{u} Q^{(l)}(u)\,P^{(l)}(v\,|\,u)$$

and P^{(l)}(v|u) is the conditional probability distribution yielding the rate distortion function, R(D_l; Q^{(l)}) = I(P^{(l)}).

Using E{·} for expectation with respect to v, for any s ≤ 0 we have the Chernoff bound

$$\Pr\{d_N(\mathbf{u}, \mathbf{v}) \le D_l\,|\,\mathbf{u}\in\mathscr{U}_N(l)\} \le e^{-NsD_l}\bigl[E\{e^{s d(1, v)}\}\bigr]^{l}\bigl[E\{e^{s d(0, v)}\}\bigr]^{N-l} = \exp\left\{-N\left[sD_l - \frac{l}{N}\ln\bigl(P^{(l)}(0)e^{s} + P^{(l)}(1)\bigr) - \left(1 - \frac{l}{N}\right)\ln\bigl(P^{(l)}(0) + P^{(l)}(1)e^{s}\bigr)\right]\right\}\qquad(8A.19)$$






We choose s to satisfy the parametric equations for D_l = D_s and R(D_s). From (7.6.58), we have

$$\frac{l}{N} = P^{(l)}(1)(1 - D_l) + P^{(l)}(0)D_l\qquad(8A.20)$$

and

$$s = \ln\frac{D_l}{1 - D_l}\qquad(8A.21)$$

Hence

$$sD_l - \frac{l}{N}\ln\bigl(P^{(l)}(0)e^{s} + P^{(l)}(1)\bigr) - \left(1 - \frac{l}{N}\right)\ln\bigl(P^{(l)}(0) + P^{(l)}(1)e^{s}\bigr) = R(D_l;\,Q^{(l)})$$

and

$$\Pr\{d_N(\mathbf{u}, \mathbf{v}) \le D_l\,|\,\mathbf{u}\in\mathscr{U}_N(l)\} \le e^{-NR(D_l;\,Q^{(l)})}\qquad(8A.22)$$
To derive a lower bound, we first define, for any β ≤ 0 and u ∈ 𝒰_N(l),

$$\mu(\beta\,|\,\mathbf{u}) = \frac{1}{N}\ln E\left\{e^{\beta\sum_{n=1}^{N} d(u_n, v_n)}\,\Big|\,\mathbf{u}\in\mathscr{U}_N(l)\right\}\qquad(8A.23)$$

$$= \frac{l}{N}\ln\bigl(P^{(l)}(0)e^{\beta} + P^{(l)}(1)\bigr) + \left(1 - \frac{l}{N}\right)\ln\bigl(P^{(l)}(0) + P^{(l)}(1)e^{\beta}\bigr)\qquad(8A.24)$$

Derivatives with respect to β are

$$\mu'(\beta\,|\,\mathbf{u}) = \frac{l}{N}\,\frac{P^{(l)}(0)e^{\beta}}{P^{(l)}(0)e^{\beta} + P^{(l)}(1)} + \left(1 - \frac{l}{N}\right)\frac{P^{(l)}(1)e^{\beta}}{P^{(l)}(0) + P^{(l)}(1)e^{\beta}}\qquad(8A.25)$$

and

$$\mu''(\beta\,|\,\mathbf{u}) = \frac{l}{N}\,\frac{P^{(l)}(0)P^{(l)}(1)e^{\beta}}{[P^{(l)}(0)e^{\beta} + P^{(l)}(1)]^2} + \left(1 - \frac{l}{N}\right)\frac{P^{(l)}(0)P^{(l)}(1)e^{\beta}}{[P^{(l)}(0) + P^{(l)}(1)e^{\beta}]^2}\qquad(8A.26)$$




Here we have

$$0 \le \mu''(\beta\,|\,\mathbf{u}) \le 1\qquad(8A.27)$$

For a given u ∈ 𝒰_N(l), define a tilted probability on 𝒱_N given by

$$\tilde P_N^{(\beta)}(\mathbf{v}\,|\,\mathbf{u}) = P_N(\mathbf{v})\,e^{\beta\sum_{n=1}^{N} d(u_n, v_n) - N\mu(\beta|\mathbf{u})}\qquad(8A.28)$$

For this tilted distribution, we have

$$\sum_{\mathbf{v}}\tilde P_N^{(\beta)}(\mathbf{v}\,|\,\mathbf{u})\,d_N(\mathbf{u}, \mathbf{v}) = \mu'(\beta\,|\,\mathbf{u})\qquad(8A.29)$$

and

$$\sum_{\mathbf{v}}\tilde P_N^{(\beta)}(\mathbf{v}\,|\,\mathbf{u})\,[d_N(\mathbf{u}, \mathbf{v}) - \mu'(\beta\,|\,\mathbf{u})]^2 = \frac{\mu''(\beta\,|\,\mathbf{u})}{N}\qquad(8A.30)$$

Now, following the same inequalities as in (8A.11), (8A.12), and (8A.13), we have

$$\Pr\{d_N(\mathbf{u}, \mathbf{v}) \le \mu'(\beta\,|\,\mathbf{u}) + \delta\,|\,\mathbf{u}\in\mathscr{U}_N(l)\} \ge \left(1 - \frac{\mu''(\beta\,|\,\mathbf{u})}{N\delta^2}\right)e^{N[\mu(\beta|\mathbf{u}) - \beta\mu'(\beta|\mathbf{u}) + \beta\delta]}\qquad(8A.31)$$

Since D_l > ε > 0, we have D_l − ε/2 > ε/2 > 0, and we can choose β ≤ 0 to satisfy the parametric equations for D_l − ε/2 = D_β and R(D_β; Q^{(l)}). Hence, from (7.6.58), we have

$$\beta = \ln\frac{D_l - \epsilon/2}{1 - D_l + \epsilon/2}\qquad(8A.34)$$

and

$$\mu'(\beta\,|\,\mathbf{u}) = D_l - \frac{\epsilon}{2}\qquad(8A.35)$$



Choosing the parameter s to satisfy D_l = D_s, we then have, as in (8A.17),

$$\mu(\beta\,|\,\mathbf{u}) - \beta\mu'(\beta\,|\,\mathbf{u}) \ge -R(D_l;\,Q^{(l)}) + \frac{\beta\epsilon}{2}\qquad(8A.36)$$

Using (8A.35) and (8A.36) in (8A.31), with δ = ε/2, results in the lower bound

$$\Pr\{d_N(\mathbf{u}, \mathbf{v}) \le D_l\,|\,\mathbf{u}\in\mathscr{U}_N(l)\} \ge \left(1 - \frac{4}{N\epsilon^2}\right)e^{-N[R(D_l;\,Q^{(l)}) - \beta\epsilon]}\qquad(8A.37)$$

From (8A.34) we have

$$-\beta \le \ln\frac{2}{\epsilon}$$

since D_l − ε/2 > ε/2 > 0. Hence (8A.37) becomes

$$\Pr\{d_N(\mathbf{u}, \mathbf{v}) \le D_l\,|\,\mathbf{u}\in\mathscr{U}_N(l)\} \ge \left(1 - \frac{4}{N\epsilon^2}\right)\exp\left\{-N\left[R(D_l;\,Q^{(l)}) + \epsilon\ln\frac{2}{\epsilon}\right]\right\}\qquad(8A.38)$$



PROBLEMS 

8.1 Consider L independent memoryless discrete-time Gaussian sources in the multiple-source-user system of Fig. 8.1. Let σ_1², σ_2², ..., σ_L² be the output variances of the sources, and for some positive weights w_1, w_2, ..., w_L define the sum distortion measure

$$d(\mathbf{u}, \mathbf{v}) = \sum_{l=1}^{L} w_l\,(u^{(l)} - v^{(l)})^2$$

where u^{(l)} is the lth source output symbol. Find a parametric form for the rate distortion function in terms of the variances and weights.




8.2 (a) In (8.2.66), for D ≤ min{λ_1, λ_2, ..., λ_L}, show that

$$R(D) = \frac{1}{2L}\ln\frac{|\mathbf{\Phi}|}{D^L}$$

where Φ is the covariance matrix defined in (8.2.53) and λ_1, λ_2, ..., λ_L are its eigenvalues.
(b) In (8.2.66), let λ_max = max{λ_1, λ_2, ..., λ_L} and show that

$$R(D) \le \frac{1}{2}\ln\frac{\lambda_{\max}}{D}$$

and

$$D(R) \le \lambda_{\max} e^{-2R}$$



8.3 Verify Eq. (8.2.72) by following the proof of the source coding theorem in Sec. 7.2. 

8.4 Consider a discrete-time first-order Gaussian Markov source with

$$u_n = \rho\, u_{n-1} + w_n$$

where the w_n are independent zero-mean Gaussian random variables and the source variance is σ². For the squared-error distortion show that

$$R(D) = \frac{1}{2}\ln\frac{\sigma^2(1 - \rho^2)}{D}\qquad 0 \le D \le \sigma^2\,\frac{1 - \rho}{1 + \rho}$$

(For larger D, see Berger [1971], Example 4.5.2.2.)

8.5 For any discrete-time zero-mean stationary ergodic source with spectral density Φ(ω) and the squared-error distortion measure, show that the rate distortion function is bounded by

$$R(D) \le \frac{1}{2}\ln\frac{\sigma^2}{D}$$

where

$$\sigma^2 = \frac{1}{2\pi}\int_{-\pi}^{\pi}\Phi(\omega)\,d\omega$$

Generalize this for continuous-time stationary sources where Φ(ω) = 0 for |ω| > ω_0.

8.6 Generalize the lower bound given in Theorem 8.3.2 to

$$R(D) \ge R_L(D) + h - h(q_L)\qquad\text{for any integer } L$$

8.7 Prove the generalized Shannon lower bound given by (8.3.18).

8.8 For a stationary discrete-time Gaussian source with spectral density function Φ(ω), show that the differential entropy rate is

$$h = \frac{1}{2}\ln(2\pi e E)$$

where

$$E = \exp\left\{\frac{1}{2\pi}\int_{-\pi}^{\pi}\ln\Phi(\omega)\,d\omega\right\}$$

8.9 Suppose we have a continuous-time Gaussian Markov source with spectral density

$$\Phi(\omega) = \frac{A}{1 + (\omega/\omega_0)^2}$$

where A is a normalizing constant that satisfies

$$\sigma^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty}\Phi(\omega)\,d\omega$$

For this case, find the parametric form of R(D) for the squared-error distortion measure. Here 1 + β² = A/θ = 2σ²/(ω_0 θ), where θ is the usual parameter (Berger [1971]).

8.10 For the continuous-time Gaussian process with the squared-error distortion measure discussed in Sec. 8.4.3, derive the source coding theorem in the same manner as shown in Sec. 8.2 for the discrete-time Gaussian process. That is, derive a bound similar to (8.2.88).

8.11 Verify that (8.5.15) follows from (8.5.7). 

8.12 Show that

$$\mathscr{H}(\epsilon) = -\epsilon\ln\epsilon - (1 - \epsilon)\ln(1 - \epsilon) \ge -\epsilon\ln\frac{\epsilon}{2}\qquad\text{for } \epsilon \le 0.3$$

8.13 Prove (8.5.50) by using the converse source coding theorem (Theorem 7.2.3) and the law of large numbers in

$$\sum_{l=0}^{N}\binom{N}{l}q^{l}(1 - q)^{N-l}D_l = \sum_{l:\,|l - Nq| < N\gamma}\binom{N}{l}q^{l}(1 - q)^{N-l}D_l + \sum_{l:\,|l - Nq| \ge N\gamma}\binom{N}{l}q^{l}(1 - q)^{N-l}D_l$$

for any γ > 0.

8.14 (Generalization of Sec. 8.5.2) Consider source alphabet 𝒰 = {a_1, a_2, ..., a_A}, representation alphabet 𝒱 = {b_1, b_2, ..., b_B}, and distortion {d(u, v): u ∈ 𝒰, v ∈ 𝒱} such that D_min = 0. Next define, for any u ∈ 𝒰_N, the numbers

n(a_k|u) = number of places where u_n = a_k,  k = 1, 2, ..., A

define the composition vector

n(u) = (n(a_1|u), n(a_2|u), ..., n(a_A|u))

and define the composition classes 𝒰_N(l), l = 1, 2, ..., L_N, where L_N is the number of distinct compositions of output sequences of length N. For the lth composition C_l = (n_{l1}, n_{l2}, ..., n_{lA}), define the probability

$$Q^{(l)}(a_k) = \frac{n_{lk}}{N}\qquad k = 1, 2, \ldots, A$$

and the rate distortion function R(D; Q^{(l)}), which is the rate distortion function of a memoryless source with output probability distribution Q^{(l)}.

(a) Show that L_N ≤ (N + 1)^{A−1}.

(b) Pick δ > 0, ε > 0, and rate R such that

$$\delta < R < \max_{Q} R(\epsilon;\,Q)$$

and let Q* satisfy

$$R = R(\epsilon;\,Q^*) + \delta$$

For a fixed composition class 𝒰_N(l) where R(ε; Q*) ≤ R(ε; Q^{(l)}), define D_l > ε such that

$$R = R(D_l;\,Q^{(l)}) + \delta$$

Generalize Lemma 8.5.2 and Theorem 8.5.2 for this case.

(c) Construct composite codes and show that, if the source is memoryless, these composite codes can approach the rate distortion limit.

8.15 For a memoryless source with source alphabet 𝒰, probability {Q(u): u ∈ 𝒰}, representation alphabet 𝒱, and distortion measure {d(u, v): u ∈ 𝒰, v ∈ 𝒱}, let R(D) be the rate distortion function and define

$$P_N(R, D) = \min_{\mathscr{B}}\Pr\{d(\mathbf{u}\,|\,\mathscr{B}) > D\,|\,\mathscr{B}\}$$

where the minimization is over all codes ℬ of block length N and rate R > R(D). Define the exponent

$$F(R, D) = -\lim_{N\to\infty}\frac{1}{N}\ln P_N(R, D)$$

(a) Let Q′ be any other source probability distribution and R(D; Q′) the corresponding rate distortion function. Show that

$$F(R, D) = \infty\qquad\text{if } R > \max_{Q'} R(D;\,Q')$$

Note that for symmetric sources with balanced distortions

$$R(D) = \max_{Q'} R(D;\,Q')$$

and thus F(R, D) = ∞ for all R > R(D).

Hint: Prob. 8.14 shows that, if R > max_{Q′} R(D; Q′), then all sequences can be encoded with distortion less than or equal to D for large enough N.

(b) Consider the composition classes defined in Prob. 8.14. Stirling's formula gives

$$\Pr\{\mathbf{u}\in\mathscr{U}_N(l)\} = \frac{N!}{n(a_1|\mathbf{u})!\,n(a_2|\mathbf{u})!\cdots n(a_A|\mathbf{u})!}\prod_{k=1}^{A}Q(a_k)^{n(a_k|\mathbf{u})} = e^{-N[J(Q^{(l)};\,Q)\,+\,o(N)]}$$

where

$$J(Q^{(l)};\,Q) = \sum_{u}Q^{(l)}(u)\ln\frac{Q^{(l)}(u)}{Q(u)}$$

Use the results of Prob. 8.14 to show that, for

$$R(D) < R < \max_{Q'}R(D;\,Q')$$

we have

$$P_N(R, D) \le \Pr\{\mathbf{u}\in\mathscr{U}_N(l)\colon R(D) \le R(D;\,Q^{(l)}),\ l = 1, 2, \ldots, L_N\} \le \sum_{l:\,R(D)\le R(D;\,Q^{(l)})}\Pr\{\mathbf{u}\in\mathscr{U}_N(l)\} \le \sum_{l:\,R(D)\le R(D;\,Q^{(l)})} e^{-N[J(Q^{(l)};\,Q)\,+\,o(N)]}$$

Then show that, for any δ > 0,

$$F(R, D) \ge \min_{\tilde Q} J(\tilde Q,\,Q)$$

where Q̃ satisfies R(D) ≤ R − δ ≤ R(D; Q̃).




(c) For R(D) < R < max_{Q′} R(D; Q′), let Q̃ be any probability distribution such that

$$R < R(D;\,\tilde Q)$$

Use the converse source coding theorem (Theorem 7.2.3) to show that there exists an a > 0 (independent of N) such that any code ℬ of rate R satisfies

$$\widetilde{\Pr}\{d(\mathbf{u}\,|\,\mathscr{B}) > D\,|\,\mathscr{B}\} \ge a$$

Here P̃r{·} is the probability using distribution Q̃.

(d) Next show that, for any γ > 0 and any code ℬ of block length N and rate R such that R(D) < R < R(D; Q̃), we have

$$\Pr\{d(\mathbf{u}\,|\,\mathscr{B}) > D\,|\,\mathscr{B}\} \ge \left(a - \frac{\tilde\sigma^2}{N\gamma^2}\right)e^{-N[J(\tilde Q,\,Q)\,+\,\gamma]}$$

where

$$\tilde\sigma^2 = \sum_{u}\tilde Q(u)\left[\ln\frac{\tilde Q(u)}{Q(u)}\right]^2 - J^2(\tilde Q,\,Q)$$

Hint: Define the region

$$G_\gamma = \left\{\mathbf{u}\colon \left|\frac{1}{N}\ln\frac{\tilde Q_N(\mathbf{u})}{Q_N(\mathbf{u})} - J(\tilde Q,\,Q)\right| \le \gamma\right\}$$

and obtain the lower bound to Pr{d(u|ℬ) > D | ℬ} by restricting the summation to this subset of outputs. Then lower-bound Q_N(u) by Q̃_N(u) exp{−N[J(Q̃, Q) + γ]} and use both (c) above and the Chebyshev inequality.

(e) Combine the above upper and lower bounds to P_N(R, D) to show that

$$F(R, D) = \min_{\tilde Q} J(\tilde Q,\,Q)$$

where Q̃ satisfies R ≤ R(D; Q̃) and where

$$R(D) < R < \max_{Q'} R(D;\,Q')$$

8.16 In (8.5.68) we showed that, for the binary symmetric source with error distortion, linear codes can achieve the rate distortion limit. Consider using the linear (7, 4) Hamming code for encoding source sequences of block length N = 7. This code has rate R = (4/7) ln 2 nats per source symbol. Find the average distortion using this code and compare it with the rate distortion limit.
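One brute-force way to attack this problem (a sketch added here, not a solution from the text; the generator matrix is one standard choice) is to enumerate the 16 codewords and all 128 source words. Since the Hamming code is perfect, every source word lies within Hamming distance 1 of some codeword.

    # Illustrative sketch for Prob. 8.16: the (7,4) Hamming code as a source code
    # for the binary symmetric source with error distortion.
    import itertools, math

    G = [[1,0,0,0,1,1,0],                        # one standard (7,4) Hamming generator matrix
         [0,1,0,0,1,0,1],
         [0,0,1,0,0,1,1],
         [0,0,0,1,1,1,1]]
    code = [tuple(sum(m*g for m, g in zip(msg, col)) % 2 for col in zip(*G))
            for msg in itertools.product([0, 1], repeat=4)]

    total = sum(min(sum(a != b for a, b in zip(u, v)) for v in code)
                for u in itertools.product([0, 1], repeat=7))
    avg = total / (128 * 7)                      # average per-letter distortion, equiprobable source

    def H(x):
        return -x*math.log(x) - (1 - x)*math.log(1 - x)

    R = (4/7) * math.log(2)                      # code rate in nats per source symbol
    lo, hi = 0.0, 0.5                            # solve R = ln 2 - H(D) for the limit D
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if math.log(2) - H(mid) > R else (lo, mid)
    print("average distortion:", avg, "  rate distortion limit:", round((lo + hi) / 2, 4))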

8.17 Show that, for the set of joint distributions Q(u, û) on {0, 1} × {0, 1} where

$$p = Q(1, 0) + Q(1, 1)$$

$$\hat p = Q(1, 1) + Q(0, 1)$$

for given p and p̂, d̄(p, p̂) as defined in (8.6.32) becomes

$$\bar d(p, \hat p) = |p - \hat p|$$

8.18 Let 𝒮 and 𝒮̂ be two memoryless sources that differ only in their source probabilities, {Q(u): u ∈ 𝒰} and {Q̂(u): u ∈ 𝒰}. Let 𝒱 = 𝒰 be a common representation alphabet and suppose that the distortion measure {d(u, v): u ∈ 𝒰, v ∈ 𝒱} satisfies the triangle inequality

$$d(x, y) \le d(x, z) + d(z, y)\qquad\text{for all } x, y, z \in \mathscr{U}$$

and the symmetry condition

$$d(u, \hat u) = d(\hat u, u)\qquad\text{for all } u, \hat u \in \mathscr{U}$$

For all joint distributions {Q(u, û): u, û ∈ 𝒰} where

$$Q(u) = \sum_{\hat u} Q(u, \hat u)\qquad\text{for all } u \in \mathscr{U}$$

and

$$\hat Q(\hat u) = \sum_{u} Q(u, \hat u)\qquad\text{for all } \hat u \in \mathscr{U}$$

define the distance between the two sources as

$$\bar d(\mathscr{S}, \hat{\mathscr{S}}) = \min_{Q}\sum_{u}\sum_{\hat u} Q(u, \hat u)\,d(u, \hat u)$$

Show that (8.6.37) generalizes to

$$|d(\mathscr{B}\,|\,\mathscr{S}) - d(\mathscr{B}\,|\,\hat{\mathscr{S}})| \le \bar d(\mathscr{S}, \hat{\mathscr{S}})$$

where d(ℬ|𝒮) and d(ℬ|𝒮̂) are the average distortions for sources 𝒮 and 𝒮̂ respectively when using the same block code ℬ. This is the general "mismatch theorem" for memoryless sources. If R(D; Q) and R(D; Q̂) are the rate distortion functions of the two sources, show that

$$R(D + \bar d(\mathscr{S}, \hat{\mathscr{S}});\ \hat Q) \le R(D;\ Q) \le R(D - \bar d(\mathscr{S}, \hat{\mathscr{S}});\ \hat Q)$$

This is the general form of (8.6.38).



BIBLIOGRAPHY 



Abramson, N. (1963), Information Theory and Coding, McGraw-Hill, New York. 

Acampora, A. S. (1976), "Maximum-Likelihood Decoding of Binary Convolutional Codes on Band- 
Limited Satellite Channels," Conf. Rec., National Telecommunication Conference. 

Anderson, J. B., and F. Jelinek (1973), "A 2-Cycle Algorithm for Source Coding with a Fidelity 
Criterion," IEEE Trans. Inform. Theor., vol. IT- 19, pp. 77-91. 

Arimoto, S. (1976), "Computation of Random Coding Exponent Functions," IEEE Trans. Inform. 
Theor., vol. IT-22, pp. 665-671. 

Arimoto, S. (1973), "On the Converse to the Coding Theorem for Discrete Memoryless Channels," 
IEEE Trans. Inform. Theor., vol. IT- 19, pp. 357-359. 

Arimoto, S. (1972), "An Algorithm for Computing the Capacity of Arbitrary Discrete Memoryless 
Channels," IEEE Trans. Inform. Theor., vol. IT- 18, pp. 14-20. 

Arthurs, E., and H. Dym (1962), "On the Optimum Detection of Digital Signals in the Presence of 
White Gaussian Noise: A Geometric Interpretation and a Study of Three Basic Data Transmission Systems," IRE Trans. Commun. Syst., vol. CS-10, pp. 336-372. 

Ash, R. B. (1965), Information Theory, Interscience, New York. 

Berger, T. (1971), Rate Distortion Theory, Prentice-Hall, Englewood Cliffs, N. J. 

Berlekamp, E. R. (1968), Algebraic Coding Theory, McGraw-Hill, New York. 

Blahut, R. E. (1974), "Hypothesis Testing and Information Theory," IEEE Trans. Inform. Theor., 
vol. IT-20, pp. 405-417. 

Blahut, R. E. (1972), "Computation of Channel Capacity and Rate-Distortion Functions," IEEE 
Trans. Inform. Theor., vol. IT- 18, pp. 460-473. 

Blake, I., and R. C. Mullin (1976), An Introduction to Algebraic and Combinatorial Coding Theory, 
Academic, New York. 

Bode, H. W., and C. E. Shannon (1950), "A Simplified Derivation of Linear Least-Squares Smoothing 
and Prediction Theory," Proc. IRE, vol. 38, pp. 417-425. 

Brayer, K. (1971), "Error-Correcting Code Performance on HF, Troposcatter, and Satellite Channels," IEEE Trans. Commun. Technol., vol. COM-19, pp. 835-848. 





Bucher, E. A., and J. A. Heller (1970), "Error Probability Bounds for Systematic Convolutional 

Codes," IEEE Trans. Inform. Theor., vol. IT- 16, pp. 219-224. 
Bussgang, J. J. (1965), "Some Properties of Binary Convolutional Code Generators," IEEE Trans. 

Inform. Theor., vol. IT- 11, pp. 90-100. 
Campbell, F. W., and J. G. Robson (1968), "Application of Fourier Analysis to the Visibility of 

Gratings," J. Physiol., vol. 197, pp. 551-566. 
Courant, R., and D. Hilbert (1953), Methods of Mathematical Physics, vol. 1, Wiley-Interscience, New 

York. 
Darlington, S. (1964), "Demodulation of Wideband, Low-Power FM Signals," Bell Syst. Tech. J. 

vol. 43, pp. 339-374. 
Davisson, L. D. (1973), "Universal Noiseless Coding," IEEE Trans. Inform. Theor., vol. IT- 19, 

pp. 783-795. 

Elias, P. (1960), unpublished. (See Berlekamp, E. R. [1968].) 
Elias, P. (1955), "Coding for Noisy Channels," IRE Conv. Rec., pt. 4, pp. 37-46. 
Fano, R. M. (1963), "A Heuristic Discussion of Probabilistic Decoding," IEEE Trans. Inform. Theor., 

vol. IT-9, pp. 64-74. 

Fano, R. M. (1961), Transmission of Information, MIT Press, Cambridge, Mass., and Wiley, New York. 
Fano, R. M. (1952), "Class Notes for Transmission of Information," Course 6.574, MIT, Cambridge, 

Mass. 

Feinstein, A. (1958), Foundations of Information Theory, McGraw-Hill, New York. 
Feinstein, A. (1955), "Error Bounds in Noisy Channels without Memory," IRE Trans. Inform. Theor., 

vol. IT-1, pp. 13-14. 
Feinstein, A. (1954), "A New Basic Theorem of Information Theory," IRE Trans. Inform. Theor., 

vol. PGIT-4, pp. 2-22. 
Feller, W. (1957), An Introduction to Probability Theory and its Applications, vol. 1, 2d ed. Wiley, 

New York. 

Forney, G. D., Jr. (1974), "Convolutional Codes II: Maximum-Likelihood Decoding" and "Convolutional Codes III: Sequential Decoding," Inform. Contr., vol. 25, pp. 222-297. 
Forney, G. D., Jr. (1973), "The Viterbi Algorithm," Proc. IEEE, vol. 61, pp. 268-278. 
Forney, G. D., Jr. (1972a), "Maximum-Likelihood Sequence Estimation of Digital Sequences in the 

Presence of Intersymbol Interference," IEEE Trans. Inform. Theor., vol. IT- 18, pp. 363-378. 
Forney, G. D., Jr. (1972b), "Convolutional Codes II: Maximum Likelihood Decoding," Stanford 

Electronics Labs. Tech. Rep. 7004-1. 
Forney, G. D., Jr. (1970), "Convolutional Codes I: Algebraic Structure," IEEE Trans. Inform. Theor., 

vol. IT-16, pp. 720-738. 
Forney, G. D., Jr., and E. K. Bower (1971), "A High-Speed Sequential Decoder: Prototype Design and 

Test," IEEE Trans. Commun. Tech., vol. COM- 19, pp. 821-835. 
Gallager, R. G. (1976), private communication. 
Gallager, R. G. (1974), "Tree Encoding for Symmetric Sources with a Distortion Measure," IEEE 

Trans. Inform. Theor., vol. IT-20, pp. 65-76. 

Gallager, R. G. (1968), Information Theory and Reliable Communication, Wiley, New York. 
Gallager, R. G. (1965), "A Simple Derivation of the Coding Theorem and Some Applications," IEEE 

Trans. Inform. Theor., vol. IT-11, pp. 3-18. 

Gantmacher, F. R. (1959), Applications of the Theory of Matrices, Interscience, New York. 
Geist, J. M. (1973), "Search Properties of Some Sequential Decoding Algorithms," IEEE Trans. 

Inform. Theor., vol. IT-19, pp. 519-526. 
Gilbert, E. N. (1952), "A Comparison of Signalling Alphabets," Bell Syst. Tech. J., vol. 31, 

pp. 504-522. 
Gilhousen, K. S., J. A. Heller, I. M. Jacobs, and A. J. Viterbi (1971), "Coding Study for High Data Rate 

Telemetry Links," Linkabit Corp. NASA CR- 114278 Contract NAS 2-6024. 
Goblick, T. J., Jr. (1962), "Coding for a Discrete Information Source with a Distortion Measure," 

Ph.D. Dissertation, MIT, Cambridge, Mass. 
Goblick, T. J., Jr., and J. L. Holsinger (1967), "Analog Source Digitization: A Comparison of Theory 

and Practice," IEEE Trans. Inform. Theor., vol. IT- 13, pp. 323-326. 




Gray, R. M. (1975), private communication. 

Gray, R. M., and L. D. Davisson (1974), "Source Coding without the Ergodic Assumption," IEEE 

Trans. Inform. Theor., vol. IT-20, pp. 502-516. 

Gray, R. M., D. L. Neuhoff, and J. K. Omura (1975), "Process Definitions of Distortion-Rate Func 
tions and Source Coding Theorems," IEEE Trans. Inform. Theor., vol. IT-21, pp. 524-532. 
Grenander, U.. and G. Szego (1958), Toeplitz Forms and Their Applications, University of California 

Press. Berkeley. 
Hardy, G. H., J. E. Littlewood, and G. Polya (1952), Inequalities, 2d ed., Cambridge University Press, 

London. 
Heller, J. A. (1975), "Feedback Decoding of Convolutional Codes," in A. J. Viterbi (ed.). Advances 

in Communication Systems, vol. 4, Academic, New York, pp. 261-278. 
Heller, J. A. (1968), "Short Constraint Length Convolutional Codes," Jet Propulsion Labs. Space 

Programs Summary 37-54, vol. Ill, pp. 171-177. 
Heller, J. A., and I. M. Jacobs (1971), "Viterbi Decoding for Satellite and Space Communication," 

IEEE Trans. Commun. Technoi, vol. COM- 19, pp. 835-848. 

Helstrom, C. W. (1968), Statistical Theory of Signal Detection, 2d ed., Pergamon, Oxford. 
Huffman, D. A. (1952), "A Method for the Construction of Minimum Redundancy Codes." Proc. IRE, 

vol. 40, pp. 1098-1101. 
Jacobs, I. M. (1974), "Practical Applications of Coding," IEEE Trans. Inform. Theor., vol. IT-20, 

pp. 305-310. 
Jacobs, I. M. (1967), "Sequential Decoding for Efficient Communication from Deep Space." IEEE 

Trans. Commun. Technoi., vol. COM- 15, pp. 492-501. 
Jacobs. I. M., and E. R. Berlekamp (1967), "A Lower Bound to the Distribution of Computation for 

Sequential Decoding." IEEE Trans. Inform. Theor., vol. IT-13, pp. 167-174. 
Jelinek, F. (1969a), "A Fast Sequential Decoding Algorithm Using a Stack," IBM J. Res. Dev., vol. 13, pp. 675-685. 
Jelinek, F. (1969b), "Tree Encoding of Memoryless Time-Discrete Sources with a Fidelity Criterion," IEEE Trans. Inform. Theor., vol. IT-15, pp. 584-590. 

Jelinek, F. (1968a), Probabilistic Information Theory, McGraw-Hill, New York. 
Jelinek, F. (1968b), "Evaluation of Expurgated Bound Exponents," IEEE Trans. Inform. Theor., 

vol. IT- 14, pp. 501-505. 

Kennedy. R. S. (1969), Fading Dispersive Communication Channels, Wiley, New York. 
Kohlenberg, A., and G. D. Forney, (1968), "Convolutional Coding for Channels with Memory," IEEE 

Trans. Inform. Theor., vol. IT- 14, pp. 618-626. 

Kolmogorov, N. (1956), "On the Shannon Theory of Information Transmission in the Case of Contin 
uous Signals," IRE Trans. Inform. Theor., vol. IT-2, pp. 102-108. 
Kraft, L. G. (1949), "A Device for Quantizing, Grouping and Coding Amplitude Modulated Pulses," 

M.S. Thesis, MIT. Cambridge, Mass. 
Kuhn, H. W., and A. W. Tucker (1951), "Nonlinear Programming," Proc. 2nd Berkeley Symp. Math. 

Stat. Prob., University of California Press, Berkeley, pp. 481-492. 
Landau, H. J., and H. O. Pollak (1962), "Prolate Spheroidal Wave Functions, Fourier Analysis, and 

Uncertainty-Ill," Bell System Tech. J., vol. 41, pp. 1295-1336. 
Landau, H. J., and H. O. Pollak (1961), "Prolate Spheroidal Wave Functions, Fourier Analysis, and 

Uncertainty-II," Bell System Tech. J., vol. 40, pp. 65-84. 
Lesh, J. R. (1976), "Computational Algorithms for Coding Bound Exponents," Ph.D. Dissertation, 

University of California, Los Angeles. 

Lin, S. (1970), An Introduction to Error-Correcting Codes, Prentice-Hall, Englewood Cliffs, N.J. 
Linkov, Yu. N. (1965), "Evaluation of ε-Entropy of Random Variables for Small ε," Problems of Inform. Transmission, vol. 1, pp. 12-18. (Trans. from Problemy Peredachi Informatsii, vol. 1, pp. 18-26.) 
Lloyd, S. P. (1959), "Least Square Quantization in PCM," unpublished Bell Telephone Lab. memo, 

Murray Hill, N.J. 
Lucky, R. W., J. Salz, and E. J. Weldon (1968), Principles of Data Communication, McGraw-Hill, New 

York. 




Mackechnie, L. K. (1973), "Maximum-Likelihood Receivers for Channels Having Memory," Ph.D. 

Dissertation, University of Notre Dame, Indiana. 
Martin, D. R. (1976), "Robust Source Coding of Finite Alphabet Sources via Composition Classes," 

Ph.D. Dissertation, University of California, Los Angeles. 
Massey, J. L. (1974), "Error Bounds for Tree Codes, Trellis Codes, and Convolutional Codes with 

Encoding and Decoding Procedures," Lectures presented at the Summer School on " Coding and 

Complexity," Centre International des Sciences Mechaniques, Udine, Italy. (Notes published by 

Springer-Verlag.) 
Massey, J. L. (1973), "Coding Techniques for Digital Communications," tutorial course notes, 1973 

International Conference on Communications. 
Massey, J. L. (1972), " Variable- Length Codes and the Fano Metric," IEEE Trans. Inform. Theor., 

vol. IT- 18, pp. 196-198. 

Massey, J. L. (1963), Threshold Decoding, MIT Press, Cambridge, Mass. 
Massey, J. L., and M. K. Sain (1968), "Inverses of Linear Sequential Circuits," IEEE Trans. Computers, 

vol. C-17, pp. 330-337. 

Max, J. (1960), "Quantizing for Minimum Distortion," IRE Trans. Inform. Theor., vol. IT-6, pp. 7-12. 
McEliece, R. J., and J. K. Omura, (1977), "An Improved Upper Bound on the Block Coding Error 

Exponent for Binary-Input Discrete Memoryless Channels," IEEE Trans. Inform. Theor., 

vol. IT-23, pp. 611-613. 
McEliece, R. J., E. R. Rodemich, H. Rumsey, and L. R. Welch (1977), "New Upper Bounds on the 

Rate of a Code via the Delsarte-MacWilliams Inequalities," IEEE Trans. Inform. Theor., 

vol. IT-23, pp. 157-166. 
McMillan, B. (1956), "Two Inequalities Implied by Unique Decipherability," IRE Trans. Inform. 

Theor., vol. IT-2, pp. 115-116. 
McMillan, B. (1953), "The Basic Theorems of Information Theory," Ann. Math. Stat., vol. 24, 

pp. 196-219. 
Morrissey, T. N., Jr. (1970), "Analysis of Decoders for Convolutional Codes by Stochastic Sequential 

Machine Methods," IEEE Trans. Inform. Theor., vol. IT- 16, pp. 460-469. 
Neyman, J., and E. Pearson, (1928), "On the Use and Interpretation of Certain Test Criteria for 

Purposes of Statistical Inference," Biometrika, vol. 20A, pp. 175-240, 263-294. 
Odenwalder, J. P. (1970), "Optimal Decoding of Convolutional Codes," Ph.D. Dissertation, Univer 
sity of California, Los Angeles. 
Omura, J. K. (1975), "A Lower Bounding Method for Channel and Source Coding Probabilities," 

Inform. Cont., vol. 27, pp. 148-177. 
Omura, J. K. (1973), "A Coding Theorem for Discrete-Time Sources," IEEE Trans. Inform. Theor., 

vol. IT- 19, pp. 490-498. 
Omura, J. K. (1971), "Optimal Receiver Design for Convolutional Codes and Channels with Memory 

via Control Theoretical Concepts," Inform. Sci., vol. 3, pp. 243-266. 
Omura, J. K. (1969), "On the Viterbi Decoding Algorithm," IEEE Trans. Inform. Theor., vol. IT-15, 

pp. 177-179. 
Oppenheim, A. V., and R. W. Schafer (1975), Digital Signal Processing, Prentice-Hall, Englewood 

Cliffs, N.J. 

Peterson, W. W. (1961), Error-Correcting Codes, MIT Press, Cambridge, Mass. 
Peterson, W. W., and E. J. Weldon (1972), Error-Correcting Codes, 2d ed., MIT Press, Cambridge, 

Mass. 
Pilc, R. (1968), "The Transmission Distortion of a Source as a Function of the Encoding Block 

Length," Bell Syst. Tech. J., vol. 47, pp. 827-885. 
Pinkston, J. T. (1966), "Information Rates of Independent Sample Sources," M.S. Thesis, MIT, 

Cambridge, Mass. 
Plotkin, M. (1960), "Binary Codes with Specified Minimum Distance," IRE Trans. Inform. Theor., 

vol. IT-6, pp. 445-450, originally Res. Div. Rep. 51-20, Univ. of Penn. (1951). 
Ramsey, J. L. (1970), "Realization of Optimum Interleaves," IEEE Trans. Inform. Theor., vol. IT- 16, 

pp. 338-345. 




Reiffen, B. (1960), "Sequential Encoding and Decoding for the Discrete Memoryless Channel," MIT 

Research Lab. of Electronics Tech. Rept. 374. 
Rohlin, V. A. (1967), " Lectures on the Entropy Theory of Measure-Preserving Transformations," Russ. 

Math. Surv., vol. 22, no. 5, pp. 1-52. 

Rosenberg, W. J. (1971), "Structural Properties of Convolutional Codes," Ph.D. Dissertation, Uni 
versity of California, Los Angeles. 
Rubin, I. (1973), "Information Rates for Poisson Sequences," IEEE Trans. Inform. Theor., vol. IT- 19, 

pp. 283-294. 
Sakrison, D. J. (1975), "Worst Sources and Robust Codes for Difference Distortion Measure," IEEE 

Trans. Inform. Theor., vol. IT-21, pp. 301-309. 
Sakrison, D. J. (1969), "An Extension of the Theorem of Kac, Murdock, and Szego to N Dimensions," 

IEEE Trans. Inform. Theor., vol. IT- 15, pp. 608-610. 

Sakrison, D. J., and V. R. Algazi (1971), "Comparison of Line-by-Line and Two Dimensional Encod 
ing of Random Images," IEEE Trans. Inform. Theor., vol. IT- 17, pp. 386-398. 
Savage, J. E. (1966), "Sequential Decoding the Computation Problem," Bell Syst. Tech. J., vol. 45, 

pp. 149-176. 
Shannon, C. E. (1959), "Coding Theorems for a Discrete Source with a Fidelity Criterion," IRE Nat. 

Com. Rec., pt. 4, pp. 142-163. Also in R. E. Machol (ed.), Information and Decision Processes, 

McGraw-Hill, New York, 1960. 
Shannon, C. E. (1948), "A Mathematical Theory of Communication," Bell System Tech. J., vol. 27, 

(pt. I), pp. 379-423 (pt. II), pp. 623-656. Reprinted in book form with postscript by W. Weaver, 

Univ. of Illinois Press, Urbana, 1949. 
Shannon, C. E., R. G. Gallager, and E. R. Berlekamp (1967), "Lower Bounds to Error Probability for 

Coding on Discrete Memoryless Channels," Inform. Contr., vol. 10, pt. I, pp. 65-103, pt. II, 

pp. 522-552. 
Slepian, D., and H. O. Pollak (1961), "Prolate Spheroidal Wave Functions, Fourier Analysis, and 

Uncertainty-I," Bell System Tech. J., vol. 40, pp. 43-64. (See Landau and Pollak [1961, 1962] for 

Parts II and III.) 
Stiglitz, I. G. (1966), "Coding for a Class of Unknown Channels," IEEE Trans. Inform. Theor., 

vol. IT-12, pp. 189-195. 
Tan, H. (1975), "Block Coding for Stationary Gaussian Sources with Memory under a Squared-Error 

Fidelity Criterion," Inform. Contr., vol. 29, pp. 11-28. 
Tan, H., and K. Yao (1975), "Evaluation of Rate Distortion Functions for a Class of Independent 

Identically Distributed Sources under an Absolute Magnitude Criterion," IEEE Trans. Inform. 

Theor., vol. IT-21, pp. 59-63. 

Van Lint, J. (1971), Coding Theory, Lecture Notes in Mathematics, Springer-Verlag, Berlin. 
Van de Meeberg, L. (1974), "A Tightened Upper Bound on the Error Probability of Binary Convolutional Codes with Viterbi Decoding," IEEE Trans. Inform. Theor., vol. IT-20, pp. 389-391. 
Van Ness, F. L., and M. A. Bouman (1965), "The Effects of Wavelength and Luminance on Visual 

Modulation Transfer," Excerpta Medica Int. Congr., ser. 125, pp. 183-192. 

Van Trees, H. L. (1968), Detection, Estimation, and Modulation Theory, Part I, Wiley, New York. 
Varsharmov, R. R. (1957), "Estimate of the Number of Signals in Error Correcting Codes," Dokl. 

Akad. Nauk, SSSR 117, no. 5, pp. 739-741. 
Viterbi, A. J. (1971), "Convolutional Codes and Their Performance in Communication Systems," 

IEEE Trans. Commun. Tech., vol. COM- 19, pp. 751-772. 
Viterbi, A. J. (1967a), "Error Bounds for Convolutional Codes and an Asymptotically Optimum 

Decoding Algorithm," IEEE Trans. Inform. Theor., vol. IT- 13, pp. 260-269. 
Viterbi, A. J. (19676), "Orthogonal Tree Codes for Communication in the Presence of White Gaussian 

Noise," IEEE Trans. Commun. Tech., vol. COM-15, pp. 238-242. 
Viterbi, A. J. (1967c), "Performance of an M-ary Orthogonal Communication System Using 

Stationary Stochastic Signals," IEEE Trans. Inform. Theor., vol. IT- 13, pp. 414-422. 
Viterbi, A. J. (1966), Principles of Coherent Communication, McGraw-Hill, New York. 
Viterbi, A. J., and I. M. Jacobs (1975), "Advances in Coding and Modulation for Noncoherent Chan- 




nels Affected by Fading, Partial Band, and Multiple-Access Interference," in A. J. Viterbi (ed.), 

Advances in Communication Systems, vol. 4, Academic, New York, pp. 279-308. 
Viterbi, A. J., and J. P. Odenwalder (1969), " Further Results on Optimum Decoding of Convolutional 

Codes," IEEE Trans. Inform. Theor., vol. IT- 15, pp. 732-734. 
Viterbi, A. J., and J. K. Omura (1974), "Trellis Encoding of Memoryless Discrete-Time Sources with a 

Fidelity Criterion," IEEE Trans. Inform. Theor., vol. IT-20, pp. 325-331. 

Wolfowitz, J. (1961), Coding Theorems of Information Theory, 2d ed., Springer-Verlag and Prentice-Hall, Englewood Cliffs, N.J. 
Wolfowitz, J. (1957), "The Coding of Messages Subject to Chance Errors," Ill. J. of Math., vol. 1, pp. 591-606. 
Wozencraft, J. M. (1957), "Sequential Decoding for Reliable Communication," IRE Nat. Conv. Rec., 

vol. 5, pt. 2, pp. 11-25. 

Wozencraft, J. M., and I. M. Jacobs (1965), Principles of Communication Engineering, Wiley, New York. 
Yudkin, H. L. (1964), "Channel State Testing in Information Decoding," Sc.D. Thesis, MIT, 

Cambridge, Mass. 
Zigangirov, K. Sh. (1966), "Some Sequential Decoding Procedures," Problemy Peredachi Informatsii, 

vol. 2, pp. 13-25. 
Ziv, J. (1972), "Coding of Sources with Unknown Statistics," IEEE Trans. Inform. Theor., vol. IT- 18, 

pp. 384-394. 



INDEX 



Abelian group, 85 

Abramson, N., 35 

Acampora, A. S., 286, 287 

Additive Gaussian noise channel, 21, 46 

AEP (asymptotic equipartition property), 13, 15, 523 

AGC (automatic gain control), 80 
Algazi, V. R., 510, 534 
All-zeros path, 239, 301 
Amplitude fading, 107 
Amplitude modulation, 50, 76 
AND gates, 361 
Anderson, J. B., 423 
Arimoto, S., 141, 186, 194, 207, 212, 408 
Ash, R. B., 35 
Associative law, 82 
Asymmetric binary "Z" channel, 122 
Asymptotic rate, 229 
Augmented generating function, 242, 246 
Autocorrelation, 508 
Autocorrelation function of zero-mean Gaussian 

random field, 507 
Average: 

per digit error probability, 30 
per letter mutual information, 485 
Average distortion, 387. 389-391, 405, 424, 475, 

482, 486, 499 

Average error probability, 219 
Average length of code words, 16 
Average metric increment, 351 
Average mutual information, 22, 24, 25, 35, 38, 

134, 141,387,394,426,431 
Average normalized inner product, 121 



AWGN (additive white Gaussian noise), 51 
AWGN channel, 51, 131, 151, 169, 180, 220, 239, 246, 369 



Backward conditional probabilities, 390 

Backward test channel, 407, 408 

Balanced channel condition, 221 

Balanced distortion, 444, 513, 516, 520 

Band-limited Gaussian source, 506 

Basis, 117 

Bayes rule, 26, 30, 390 

BEC (binary erasure channel), 44, 212 

Berger, T., 403, 440, 442, 446, 449, 453, 464, 479, 

481, 502, 503, 505, 534, 542, 543 
Berlekamp, E. R.,96, 159, 165, 173, 178, 185, 

194, 368, 378 

Bhattacharyya bound, 63, 88, 192, 212, 244, 302 
Bhattacharyya distance, 63, 88, 292, 408, 409, 

460 

Bias, 350, 474 

Binary alphabet composition class, 538 
Binary branch vector, 302 
Binary entropy function, 10, 33 
Binary erasure channel (BEC), 44, 212 
Binary feed-forward systematic codes, 253 
Binary generator matrix for convolutional code, 

228 

Binary hypothesis testing, 163 
Binary-input channels, 150,315 

AWGN, 214, 239, 247, 248 

constant energy AWGN, 239 

octal output quantized, 154 









Binary-input channels: 

output-symmetric, 86, 132, 179, 180, 184,278, 
315,317,318,341,346 

quaternary-output, 123 
Binary linear codes, 189 
Binary memoryless source, 10 
Binary PSK signals, 79 
Binary source: 

error distortion, 411 

with random parameter, 531 
Binary symmetric channel (BSC), 21,151,218, 

235, 246 
Binary symmetric source (BSS), 10, 397, 409, 

460, 545 

Binary-trellis convolutional codes, 301, 31 1 
Binomial distribution, 217 
Biphase modulation, 76 
Bit energy-to-noise density ratio, 69 
Bit error probability, 100, 243, 245, 246, 256, 305, 
312,317,335,346 

for AWGN channel, 254 
Bits, 8, 47 

Blahut, R. E., 207, 441, 454 
Blake, I., 96 

Block code, 50-212, 235, 390, 424 
Block coding theorems for 

amplitude-continuous sources, 424 
Block error probability, 99, 243, 257 
Block length, 3 14 
Block orthogonal encoder, 253 
Block source code, 389 
Bode,H.W., 103 
Bouman,M. A., 506 
Bounded distortion, 469 
Bounded second moment condition, 503 
Bounded variance condition, 428 
Bower, E.K., 377 
Branch distortion measure, 413 
Branch metric generator, 334 
Branch metrics, 259, 260 
Branch observables, 276 
Branch synchronization, 261 
Branch vectors, 238 

Branching process extinction theorem, 461 
Brayer, K., 115 
BSC (binary symmetric channel), 21, 80, 212, 

216,235,247 

BSS (binary symmetric source), 10, 409 
Bucher, E. A., 341 
Buffer overflow, 376 
Bussgang, J. J., 271, 287 



Campbell, F. W., 506 

Capacity for AWGN channel, 153 



Cascade of channels, 26 
Catastrophic codes, 250, 258, 261, 289, 376 
Cauchy inequality, 196 
Cayley-Hamilton theorem, 291, 294 
Central limit theorem, 108 
Centroid, 173 

Channel capacity, 5, 35, 138, 152, 156, 207, 208, 
309,431 

of discrete memoryless channel, 23 
Channel encoder and decoder, 5 
Channel transition distribution, 55, 79 
Channels with memory, 1 14 
Chebyshev inequality, 15, 43, 123, 162, 537, 545 
Chernoff bound, 15,43,63, 122, 158, 159, 164, 

216,461,515,518 

Chernoff bounds for distortion distributions, 534 
Chi-square distribution, 112 
Class of semiorthogonal convolutional 

encoders, 298 

Code-ensemble average, 475 
Code-ensemble average bit error bound, 340, 

377 

Code generator polynomials, 250, 289 
Code generator vectors, 524 
Code state diagram, 239, 240 
Code synchronization, 258, 261 
Code trellis, 239 
Code vector, 82, 129, 189 
Coded channels without interference, 285 
Codeword, 11,389 
Codeword length, 16 
Coherence distance of field, 510 
Coherent channel: 

with hard M-ary decision outputs, 221 

with unquantized output vectors, 221 
Coherent detection, 124 
Colored noise, 102, 125 
Commutative law, 82 
Compatible paths, 306 
Complete basis, 102 
Complexity: 

for sequential decoding, 375 

for Viterbi decoding, 374 
Composite code, 522, 523, 530, 532, 544 

for overall stationary source, 531 
Composite source, 527, 528 
Composition class, 518, 521, 543 
Computational algorithm: 

for capacity, 207 

for rate distortion function, 454 
Concave functions, 30 
Connection vectors, 302 
Constraint length, 229, 230, 235, 248 

of trellis code, 411 
Context-free distortion measure, 387 






Continuous (uncountable), 5 

Continuous amplitude discrete time memoryless 

sources, 388, 460, 464 
Continuous amplitude sources, 423, 480 
Continuous amplitude stationary ergodic 

sources, 485 

Continuous phase frequency shift keying, 127 
Continuous-time Gaussian Markov source, 542 
Continuous-time Gaussian process, 

squared-error distortion, 503 
Continuous-time Gaussian sources, 493 
Continuous-time sources, 479 
Converse to coding theorem, 6, 28, 30, 34, 35, 

186 
Converse source coding theorem, 400, 401 , 406, 

410,427,460,484,543 

Converse source coding-vector distortion, 478 
Convex cap (D) functions, 35, 37 
Convex cup ( U ) functions, 35, 37, 439 
Convex functions, 35 
Convex region, 37 

Convolutional channel coding theorem, 313 
Convolutional code ensemble performance, 301 , 

337 

Convolutional coding lower bound, 320 
Convolutional lower-bound exponent, 320-321 
Convolutional orthogonal codes on AWGN 

channel, 315 

Convolutional orthogonal encoder, 254 
Correlation functions, 511 
Countably infinite size alphabet, 464 
Covariance matrix, 490, 542 
Cramér's theorem, 464 
Critical length, 323 
Critical run length of errors, 342 
Courant, R., 464 
Cyclic codes, 96 



Darlington, S., 71 

Data buffer in Fano algorithm, 376 

Data compression schemes, 503 

Data processing system, 27 

Data processing theorem, 27, 406, 460 

Data rate per dimension, 132 

Davisson, L. D.,529, 534 

Decision rule, 55, 273 

Decoder speed factor, 376 

Degenerate channels, 315 

Deinterleaver, 1 16 

Destination, 4, 385 

Difference distortion measure, 452 

Differential entropy, 450-452, 496, 542 

Differential phase shift keying, 107 

Digital delay line, 228 



Dirac delta function [δ(·)], 51, 272 

Discrete alphabet stationary ergodic sources, 19 

Discrete memoryless channel (DMC), 20, 207, 

217 

Discrete memoryless source (DMS), 8, 388 
Discrete stationary ergodic sources, 34 
Discrete-time continuous amplitude 

memoryless source, 423 
Discrete-time first-order Gaussian Markov 

source, 542 
Discrete-time stationary ergodic source, 480, 

500, 502, 542 
Discrete-time stationary sources with memory, 

479 

Discrimination functions, 219 
Disjoint time-orthogonal functions, 50 
Distortion, 423 
Distortion matrix, 443, 514 
Distortion measure, 386, 387, 413, 428, 504, 508 
Distortion rate function, 388 
Distribution of computation, 356 
Diversity transmission, 1 10 
DMC (discrete memoryless channel), 20, 28, 79 
DMS (discrete memoryless source), 8, 28 
Dual code, 94 

Dummy AWGN channel, 220 
Dummy BSC,219 
Dummy distribution, 164, 166, 169 
Duobinary, 151, 279, 340, 341 
Dynamic programming, 287 



Effective length, 329 

Efficient (near rate distortion limit), 523 

Elias, P., 138, 184, 286

Elias upper bound, 185, 344 

Encoding delay, 29 

Energy:
  per signal, 158
  per transmitted bit, 96
Entropy of DMS, 8, 34, 35 
Entropy function, 37 
Entropy rate power, 497 
Envelope function of unit norm, 103 
Equal energy orthogonal signals, 65 
Erasure channel, Q-input, (Q + 1)-output, 214
Ergodic source, 480 
Ergodicity, 480 
Error distortion, 442 
Error event, 322 
Error run lengths, 324 
Error sequence, 278, 335 
Error sequence generator, 334 
Error signals, 277 
Error state diagram, 279 






Euclidean distance, 87 

Exponential source, magnitude error distortion, 449

Expurgated bound, 144, 146, 157, 217, 219 
Expurgated ensemble average bound, 143, 152 
Expurgated exponent, 157, 409, 460 



Fading channel, 114 

Fano, R. M., 34, 35, 68, 116, 138, 169, 186, 194, 287, 350, 370, 496
Fano algorithm, 370-378
Fano metric, 351, 380
Feed-forward logic, 251 
Feedback decoding, 262-272, 289 
Feinstein, A., 35, 138
Feller, W., 461 
Fidelity criterion, 385, 388 
Finite field, 311 
Finite state machine, 230, 298 
First-order differential entropy rate of source, 496

First-order rate distortion function, 495 
Fixed composition class, 544 
Fixed-composition sequences, 517 
Forbidden trellis output sequence, 414 
Forney, G. D., Jr., 75, 115, 251, 252, 272, 287, 295, 324, 341-343, 371, 377, 378
Free distance, 240, 252, 264 
Frequency-orthogonal functions, 70 



Gallager, R. G., 35, 65, 96, 103, 116, 133, 134, 138, 146, 159, 164, 165, 172, 173, 178, 185, 186, 194, 202, 215, 272, 370, 372, 373, 378, 403, 423, 430, 453, 459, 463, 467, 481

Gallager bound, 65, 68, 96, 129, 306, 316 

Gallager function, 133, 306, 360, 393 

Gallager's lemma, 137

Gantmacher, F. R., 338

Gaussian image sources, 494, 506 

Gaussian integral function Q(·), 62

Gaussian processes, 108 

Gaussian random field, 506, 534 

Gaussian source, 443, 448, 453 

Gaussian source rate distortion function, 506 

Gaussian vector sources, 473 

General mismatch theorem, 546 

Generalized Chernoff bound, 520 

Generalized Gilbert bound, 460 

Generalized Shannon lower bound, 496, 542 

Generating function sequence, 248 

Generating functions, 240, 241, 244, 252, 255 

Generator matrix, 83, 288 



Generator polynomials, 250 

Generator sequences, 251 

Geometric distributions, 465 

Gilbert, E. N., 185

Gilbert bound, 185, 186, 224, 321, 344, 409, 410

Gilbert-type lower bound on free distance, 344 

Gilhousen, K. S., 377, 378

Goblick, T. J., Jr., 499, 500, 524 

Golay code, 89, 98

Gram-Schmidt orthogonalization procedure, 47, 117
Gram-Schmidt orthonormal representation, 273, 277

Gray, R. M., 489, 526, 529, 534 
Grenander, U., 492, 497
Group codes, 85 



Hamming code, 93, 545 

Hamming distance, 81, 236, 239, 244, 262, 409 

Hamming single error correcting codes, 93, 115

Hamming weight of binary vector, 85 
Hard limiter, 80 
Hard quantization, 80, 155, 246 
Hardy, G. H., 194

Heller, J. A., 75, 249, 259, 287, 289, 341, 378 
Helstrom, C. W., 102, 108
Hilbert, D., 464
Hölder inequality, 144, 196, 359, 418, 423, 429, 487

Holsinger, J. L., 499, 500
Homogeneous random field, 508 
Huffman, D. A., 13, 17 
Hyperplane, 57, 203 



Identity vector, 84 
Improved Plotkin bound, 223, 224 
Inadmissible path, 307 
Incorrect subset of node, 243, 354 
Independent components:
  maximum distortion, 479
  sum distortion measure, 471
Independent events, 7, 19 
Indicator function, 391, 418, 429, 516, 521 
Information in an event, 7 
Information sequence, 333 
Information theory, 6 
Initial synchronization, 328, 341 
Input alphabet, 207 
Instantaneously decodable code, 12 
Integral equation, 507 
Intensity function, 506 






Interleaving, 110, 115, 116
  internal, 272
Intersymbol interference (ISI), 75, 272, 285, 331, 336
Isotropic field, 509 

Jacobian, 109 

Jacobs, I. M., 63, 75, 112-114, 116, 249, 259, 296, 368, 370, 373, 378
Jelinek, F., 35, 150, 194, 214, 361, 371, 376, 378, 410, 422, 423, 443, 453, 460
Jelinek algorithm, 371, 373
Jensen inequality, 37, 40, 197, 426, 487 
Joint source and channel coding theorem, 467 
Jointly ergodic pair source, 485, 488 
Jointly ergodic process, 486 

K-stage shift register, 228

Karhunen-Loève expansion, 102, 504, 505, 507, 510, 512

Kennedy, R. S., 108 
Khinchine's process, 485
Kohlenberg, A., 115, 272
Kolmogorov, N., 534 
Kraft, L. G., 18 
Kraft-McMillan inequality, 18 
Kuhn, H. W., 141, 202
Kuhn-Tucker conditions, 202 
Kuhn-Tucker theorem, 23, 188, 208 



Lagrange multipliers, 203, 434, 442, 446 

Landau, H. J., 74 

Law of large numbers, 543 

Lesh, J. R., 141, 212, 410

L'Hospital's rule, 120, 149, 150

Likelihood functions, 55, 159
  for BSC, 169

Limit theorem for Toeplitz matrices, 491 
Lin, S., 96 

Linear code, 82, 189, 526
Linear convolutional codes, 96 
Linear feedback logic, 252 
Linear intersymbol interference channels, 284 
Linkov, Yu. N., 464
List decoding, 179, 215, 365, 367
Littlewood, J. E., 194
Lloyd, S. P., 499 
Lloyd-Max quantizers, 499, 500 
Log likelihood ratio, 161, 273 
Low-rate lower bound, 321 
Lower-bound exponent, 171 
Lucky, R. W., 75, 271 



M-level quantizer, 499

McEliece, R. J., 184

Mackechnie, L. K., 287

McMillan, B., 18, 488, 523

Magnitude error distortion measures, 423, 427 

Majority logic, 270 

Mapping function, 311

Martin, D. R., 517, 523

Massey, J. L., 250, 251, 270, 287, 350, 380, 381 

Matched filter, 275 

Matched source and channel, 460 

Max, J., 499 

Maximum distortion measure, 474 

Maximum likelihood, 411

Maximum likelihood decision rule, 58 

Maximum likelihood decoder, 58, 227, 262
  for convolutional code, 235
Maximum likelihood list-of-L decoder, 366
Maximum likelihood trellis decoding algorithm, 239, 411

Memoryless channel, 54, 79, 132, 159 
Memoryless condition, 21, 146 
Memoryless discrete-input additive Gaussian noise channel, 21
Memoryless source, 8, 388, 469
Metric, 58, 236, 238, 262, 350
MFSK (M frequency orthogonal signal), 220
Minimax approach, 479
Minimum distance, 244 
Minimum distortion path, 416 
Minimum distortion rule, 389 
Minimum-probability-of-error decision rule, 380 
Minkowski inequality, 198 
Mismatch equation, 532 
Modulo-2 addition for binary symbols, 82 
Morrissey, T. N., Jr., 264 
MSK (minimum shift keying), 126 
Mullin, R. C., 96

Multiple-amplitude modulation, 102 
Multiple-phase modulation, 102 
Mutual information, 19 



Nats, 8

Natural rate distortion function, 409, 460

Neyman, J., 159

Neyman-Pearson lemma, 158-160, 172

Node errors, 243, 255, 301, 305, 362

Noiseless channel, 4, 143, 147

Noiseless source coding theorem, 6, 11-13, 19

Noisy channel, 5 

Nonbinary and asymmetric binary channels, 302 

Nonbinary modulation, 102 

Noncatastrophic codes, 251 






Noncoherent channel, 221 

Noncoherent reception, 104 

Nonergodic stationary source, 480, 523, 526 

Nonsystematic codes, 377 

Nonsystematic convolutional code, 252, 377 



Observables, 52-57, 276
Observation space, 56
Octal output quantized AWGN channel, 214
Octal quantization, 155, 214 
Odenwalder, J. P., 248, 287, 317, 341 
Omura, J. K., 75, 184, 219, 287, 454 
One-sided noise power spectral density, 51 
One-step prediction error of Gaussian source, 497

Oppenheim, A. V., 71 
Optimal code, 13 

Optimum decision regions, 56, 187 
Optimum decision rule for memoryless channel, 55

OR gates, 361 

Orthogonal codes, 98, 255-258
  on AWGN channel, 256, 257
Orthogonal convolutional codes, 253, 255, 257 
Orthogonal functions, 117 
Orthogonal set of equations, 269 
Orthogonal signal set, 120, 169 
Orthonormal basis functions, 47, 50, 504 



Pair process, 485 

Pair state diagram, 299 

Pairwise error probability, 60, 244, 302 

Parallel channels, 215

Pareto distribution, 361, 368, 371, 374, 378 

Pareto exponent, 368 

Parity-check codes, 85 

Parity-check matrix, 91, 265 

Pearson, E., 159 

Perfect code, 98 

Perron-Frobenius theorem, 338 

Peterson, W. W., 96, 99, 272 

Phase modulation, 76 

Pilc, R., 403

Pinkston, J. T., 464

Plotkin, M., 175

Plotkin bound, 175, 184, 344 

Poisson distribution, 427, 449, 465 

Pollack, H. O., 74

Polya, G., 194

Positive left eigenvector, 340 

Predetection filters, 102 

Prefix, 12, 181 



Prior probabilities, 55 
Push down stack, 371 



Q(·), 62, 247

Quadrature modulator-demodulators, 71 
Quadriphase modulation, 76, 122 
Quantization of discrete time memoryless sources, 498

Quantized demodulator outputs, 259 
Quantizer, 4, 78 
Quasi-perfect code, 98, 99 



Ramsey, J. L., 116 
Random-access memory, 116 
Random field, 468 
Random vector, 468 

Rate distortion function, 5, 387, 397, 427, 431, 445, 470, 471, 479, 481, 494, 503, 504
  for binary symmetric source, 442
  of random field, 508
  for stationary ergodic sources, 479, 485, 510
  for vector source with sum distortion measure, 489

Rayleigh distribution, 109, 112 
Rayleigh fading, 109
Received energy per information bit, 69 
Reduced error-state diagram, 281, 283 
Register length, 229 
Regular simplex, 95, 169 
Reiffen, B., 286
Reliability function, 68 
Reliable communication system, 30 
Representation alphabet, 387, 389 
Robson, J. G., 506
Robust source coding technique, 523
Rodemich, E. R., 184
Rohlin, V. A., 533
Rosenberg, W. J., 251
Rubin, I., 449
Rumsey, H., 184
Run length of errors, 324 



Sain, M. K., 250, 251
Sakrison, D. J., 451, 509, 510, 534
Salz, J., 75, 271
Sampling theorem, 72
Savage, J. E., 361, 376, 378
Schafer, R. W., 71
Schwarz inequality, 142, 196 
Self-information, 20 
Semisequential algorithm, 371 






Sequential decoding, 6, 152, 227, 262, 286, 349-379
Shannon, C. E., 4, 13, 17, 19, 35, 103, 128, 138, 159, 165, 173, 178, 185, 194, 385, 451, 481, 506, 534
Shannon lower bound, 452, 463, 464
Shannon's channel coding theorem, 5, 133
Shannon's mathematical theory of communications, 6
Shannon's noiseless coding theorem, 11, 385, 465

Shift register, 228 
Signal representation, 117
Signal set, 129 

Signal-to-noise parameter, 67 
Signal vector, 129 

Single-letter distortion measure, 387, 469, 482 
Slepian, D., 74
Slepian and Wolf extension to side information, 466

Sliding block decoder, 264, 271
Soft quantizer, 80, 155
Source, 4 

Source alphabet, 387 
Source coding model, 387 
Source coding theorem, 397, 401, 427, 460, 462, 474

Source decoder, 4, 396 
Source distance, 532
Source encoder, 4, 396
Source entropy, 4, 6
Source reliability function, 397
Sources with memory, 479, 494
Spectral density, 494, 511 
Spectrum shaping function, 75 
Sphere-packing bound, 169-216, 219, 321 
Sphere-packing bound exponent, 179, 212, 220 
Square-error distortion, 423, 449, 505, 542
Squared error distortion measures, 423, 505
Stack algorithm, 351, 361, 370
Staggered (offset) QPSK (SQPSK), 126
State diagram, 231 
State diagram descriptions of convolutional or trellis codes, 231, 234, 237, 240, 277
State sequences, 335
State transition matrix of intersymbol interference, 336
Stationarity, 489
Stationary binary source, 529
Stationary ergodic discrete-time sources, 387, 388

Stationary ergodic joint processes, 489 
Stationary ergodic source, 42, 480-526 
Stationary nonergodic binary source, 529 



Stationary source, 480 
Stieltjes integral, 41 
Stiglitz, I. G., 221
Stirling's formula, 544

Strong converse to coding theorem, 186, 408
Suboptimal metric, 259 
Sufficient statistics, 54 
Suffix, 181 
Sum channels, 215 
Sum distortion measure, 470-541 
Superstates, 348 
Surviving path, 236, 239 
Symbol energy, 108 

Symbol energy-to-noise density ratio, 155 
Symbol synchronization, 261 
Symbol transition probability, 131 
Symmetric sources, 403, 513 
  with balanced distortion, 443, 462, 513, 516, 544

Synchronization of Viterbi decoder, 260
Syndrome of received vector, 91, 264 
Syndrome feedback decoder, 262-272 
Syndrome table-look-up procedure, 269 
Systematic binary linear code, 223 
Systematic code, 90, 91, 251, 264, 268, 365, 377
Systematic convolutional codes, 251, 268, 329, 331
Szego, G., 492, 497



Table-look-up technique, 91 
Tail:
  of code, 229, 231, 258
  of trellis, 412

Tan, H., 448, 449, 453, 465, 493
Threshold-decodable convolutional codes, 270 
Threshold logic, 270 
Tilted probability, 161, 536, 540 
Tilting variable, 161 
Time-diversity, 110

Time-invariant (fixed) convolutional codes, 229 
Time-orthogonal functions, 70 
Time-orthogonal quadrature phase functions, 70 
Time-varying convolutional codes, 229, 301-305, 331-346, 357, 361
Toeplitz distribution theorem, 491, 493, 505, 509
Toeplitz matrices, 491 
Transition probabilities, 207 
Transition probability matrix, 259 
Transmission rate, 254
Transmission time per bit, 255 
Transorthogonal code, 95 
Tree-code representation, 232, 460 






Tree descriptions of convolutional or trellis codes, 234

Tree diagram, 230, 232, 236 
Trellis-code representation, 233 
Trellis codes, 234, 264, 401, 411
Trellis diagram, 230-240, 411, 412
Trellis source coding, 412, 414 
Trellis source coding theorem, 421, 430 
Triangle inequality, 545 
Truncated maximum likelihood decision, 268 
Truncated-memory decoder, 262 
Truncation errors, 327 
Tucker, A. W., 141, 202 
Two-dimensional spectral density function, 508
Two-dimensional version of Toeplitz distribution theorem, 509



Unbounded distortion measure, 427 

Unconstrained bandwidth, 165, 220 

Uniform error property, 86, 278 

Uniform quantizers, 499 

Uniform source, magnitude error distortion, 449

Union-Bhattacharyya bound, 63, 67, 244
Union bound, 61, 476
  on bit error probability, 316
Uniquely decodable code, 12 
Universal coding, 523, 526, 533 
Unquantized AWGN channel, 156
Useless channel, 148 
User, 385 
User alphabet, 387, 388 



VA (Viterbi algorithm), 238, 258, 261, 276, 287, 414
Van de Meeberg, L., 289



Van Lint, J., 96 
Van Ness, F. L., 506 
Van Trees, H. L., 102, 107 
Variant:
  of Hölder inequality, 196
  of Minkowski inequality, 199
Varshamov, R. R., 185
Varshamov-Gilbert lower bound, 185 
Vector distortion measure, 469, 476 
Very noisy channel, 155, 313, 326, 328 
Viterbi, A. J., 107, 108, 287, 296, 313, 317, 320, 321, 341, 371, 411, 454
Viterbi decoder, 237-334, 374-378, 411-423



Weak law of large numbers, 43, 447 
Weight distribution, 310 
Weighted ensemble average, 132
Weighting factors, 279
Welch, L. R., 184
Weldon, E. J., 75, 96, 271, 272
"Whiten" noise, 103, 501
Whitened matched filter, 295, 298 
Wolfowitz, J., 35, 138, 186, 194
Wozencraft, J. M., 63, 112-114, 116, 286, 370, 373, 378



Yao, K., 448, 449, 453, 465
Yudkin, H. L., 364, 370, 378



Z channel, 44, 159, 212, 216

Zero-rate exponent, 152, 178, 318, 321

Zeroth order Bessel function of first kind, 509 

Zeroth order modified Bessel function, 105 

Zigangirov, K. Sh., 371, 378

Zigangirov algorithm, 371 

Ziv, J., 534









ISBN 0-07-067516-3




9 780070 675162