
NPS55-82-015


NAVAL POSTGRADUATE SCHOOL

Monterey,  California 


FINITE  MARKOV  CHAIN  MODELS  SKIP-FREE 
IN  ONE  DIRECTION 

by 

G.  Latouche 
D.  P.  Gaver 
P.  A.  Jacobs 

April  1982 


FEDDOCS 
D  208.14/2: 
NPS-55-82-015 


Approved  for  public  release;  distribution  unlimited. 

Prepared  for: 

Office  of  Naval  Research 
Arlington, VA 22217


DUDLEY  KNOX  LIBRARY 

NAVAL  POSTGRADUATE  SCHOOL 

MONTEREY,  CA  93943-5101 

NAVAL  POSTGRADUATE  SCHOOL 

MONTEREY,  CALIFORNIA 

Rear Admiral J. J. Ekelund, Superintendent
David A. Schrady, Acting Provost

This  report  was  prepared  with  the  partial  support  of  the  Probability  and 
Statistics  Program  of  the  Office  of  Naval  Research,  Arlington,  VA.  Professor 
D.  P.  Gaver  also  gratefully  acknowledges  the  hospitality  of  the  Universite 
Libre  de  Bruxelles,  and  the  support  of  the  IBM  Company.  The  work  of  Associate 
Professor  P.  A.  Jacobs  was  partially  supported  by  The  National  Science 
Foundation,  Grant  Number  ENG-79-10825. 

Reproduction  of  all  or  part  of  this  report  is  authorized. 


UNCLASSIFIED
SECURITY CLASSIFICATION OF THIS PAGE (When Data Entered)

REPORT DOCUMENTATION PAGE (READ INSTRUCTIONS BEFORE COMPLETING FORM)

1. REPORT NUMBER: NPS55-82-015
2. GOVT ACCESSION NO.:
3. RECIPIENT'S CATALOG NUMBER:
4. TITLE (and Subtitle): FINITE MARKOV CHAIN MODELS SKIP-FREE IN ONE DIRECTION
5. TYPE OF REPORT & PERIOD COVERED: Technical Report
6. PERFORMING ORG. REPORT NUMBER:
7. AUTHOR(s): D. P. Gaver, P. A. Jacobs, G. Latouche
8. CONTRACT OR GRANT NUMBER(s):
9. PERFORMING ORGANIZATION NAME AND ADDRESS: Naval Postgraduate School, Monterey, CA 93940
10. PROGRAM ELEMENT, PROJECT, TASK AREA & WORK UNIT NUMBERS: 61153N; RR014-05-GE; N0001482WR20017
11. CONTROLLING OFFICE NAME AND ADDRESS: Office of Naval Research, Arlington, VA 22217
12. REPORT DATE: April 1982
13. NUMBER OF PAGES: 53
14. MONITORING AGENCY NAME & ADDRESS (if different from Controlling Office):
15. SECURITY CLASS. (of this report): UNCLASSIFIED
15a. DECLASSIFICATION/DOWNGRADING SCHEDULE:
16. DISTRIBUTION STATEMENT (of this Report): Approved for public release; distribution unlimited
17. DISTRIBUTION STATEMENT (of the abstract entered in Block 20, if different from Report):
18. SUPPLEMENTARY NOTES:
19. KEY WORDS (Continue on reverse side if necessary and identify by block number):
20. ABSTRACT: Finite Markov processes are considered, with bi-dimensional state space, such that transitions from state (n,i) to state (m,j) are possible only if m ≤ n+1. The analysis leads to efficient computational algorithms to determine the stationary probability distribution and moments of first passage times.

DD FORM 1473 (1 Jan 73). EDITION OF 1 NOV 68 IS OBSOLETE.
S/N 0102-014-6601

UNCLASSIFIED
SECURITY CLASSIFICATION OF THIS PAGE (When Data Entered)


FINITE  MARKOV  CHAIN  MODELS  SKIP-FREE 
IN  ONE  DIRECTION 


by 


G. LATOUCHE, Universite Libre de Bruxelles, Bruxelles, Belgium
P. A. JACOBS and D. P. GAVER, Naval Postgraduate School, Monterey, California, U.S.A.


Technical  Report  N°  123 


ABSTRACT

Finite Markov processes are considered, with bi-dimensional state space, such that transitions from state (n,i) to state (m,j) are possible only if m ≤ n+1. The analysis leads to efficient computational algorithms to determine the stationary probability distribution and moments of first passage times.


1.  INTRODUCTION 

We consider irreducible, bi-dimensional Markov processes $\{(X(t), Y(t)),\ t \ge 0\}$ on a finite state space $\{(n,i),\ 0 \le n \le N,\ 1 \le i \le K_n\}$, for which transitions from state $(n,i)$ to state $(m,j)$ are possible only if $m \le n+1$. The infinitesimal generator for such a process has a lower block-Hessenberg form:


$$
Q = \begin{pmatrix}
Q_{0,0}   & Q_{0,1}   & 0         & \cdots  & 0           & 0 \\
Q_{1,0}   & Q_{1,1}   & Q_{1,2}   & \cdots  & 0           & 0 \\
Q_{2,0}   & Q_{2,1}   & Q_{2,2}   & Q_{2,3} & \cdots      & 0 \\
\vdots    &           &           &         &             & \vdots \\
Q_{N-1,0} & Q_{N-1,1} & Q_{N-1,2} & \cdots  & Q_{N-1,N-1} & Q_{N-1,N} \\
Q_{N,0}   & Q_{N,1}   & Q_{N,2}   & \cdots  & Q_{N,N-1}   & Q_{N,N}
\end{pmatrix} \tag{1.1}
$$


The diagonal blocks $Q_{n,n}$ are square matrices of order $K_n$, $0 \le n \le N$. Their diagonal elements are strictly negative; all other elements are non-negative. The blocks $Q_{n,m}$, $n \ne m$, are rectangular with appropriate dimensions; their entries are non-negative. The row sums of $Q$ are equal to zero.

We denote by level $n$ the set $\{(n,i),\ 1 \le i \le K_n\}$ of states corresponding to the common value $n$ of the first index. The structure (1.1) of $Q$ permits the Markov process to move up by only one level at a time, and to move down by several levels at a time. The process is said to be skip-free to the right.
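To fix ideas, the structure (1.1) is easy to assemble numerically. The sketch below (Python with NumPy; the rate values, the choice $N = 2$ and the two states per level are all invented for illustration) builds the blocks $Q_{n,m}$, completes the diagonal blocks so that every row of $Q$ sums to zero, and confirms that no block above the first superdiagonal is occupied:

```python
import numpy as np

# Invented example: levels 0..N with N = 2 and K = 2 states per level.
K, N = 2, 2
up    = np.array([[1.0, 0.5], [0.0, 1.5]])   # Q_{n,n+1}: one level up, no skips
down1 = np.array([[2.0, 0.0], [1.0, 1.0]])   # Q_{n,n-1}: one level down
down2 = np.array([[0.5, 0.5], [0.0, 1.0]])   # Q_{2,0}:   a two-level skip down
local = np.array([[0.0, 0.7], [0.3, 0.0]])   # within-level transition rates

Qb = {(0, 1): up, (1, 2): up, (1, 0): down1, (2, 1): down1, (2, 0): down2}
for n in range(N + 1):
    # Diagonal block: within-level rates, with the diagonal chosen so that the
    # whole row of Q (across all blocks) sums to zero.
    out = local.sum(axis=1) + sum(Qb[(n, m)].sum(axis=1)
                                  for m in range(N + 1) if (n, m) in Qb)
    Qb[(n, n)] = local - np.diag(out)

Q = np.zeros(((N + 1) * K, (N + 1) * K))
for (n, m), B in Qb.items():
    Q[n * K:(n + 1) * K, m * K:(m + 1) * K] = B

print(np.allclose(Q.sum(axis=1), 0.0))   # generator row sums are zero
print(np.allclose(Q[0:K, 2 * K:], 0.0))  # no skip up: the block Q_{0,2} is zero
```

The same invented blocks are reused in the later sketches, so the notation `Qb[(n, m)]` for $Q_{n,m}$ carries through.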




Many models in queueing theory lead to Markov processes (or Markov chains) for which the infinitesimal generator (respectively, the transition probability matrix) is a (possibly infinite) block-Hessenberg matrix. The paradigm for the lower-Hessenberg case is the GI/M/1 queue; for the upper-Hessenberg case, it is the M/G/1 queue.

Neuts [7,9] has studied in great detail Markov processes and Markov chains with infinite block-Hessenberg matrices which exhibit a repetitive structure: for some $n^*$,

$$K_n = K_{n^*}, \qquad Q_{n,m} = Q_{n+1,m+1}, \quad \text{for all } n \ge n^*,\ m \le n+1,$$

if the matrix is lower-Hessenberg, and a similar condition if the matrix is upper-Hessenberg.

In  particular,  Neuts  has  shown  that  if  a  process  with  such  a  lower 
block-Hessenberg  matrix  is  positive  recurrent,  its  stationary  probability 
vector  has  a  modified  matrix-geometric  form.     For  processes  with  upper 
block-Hessenberg  matrix,  the  quantities  which  may  be  most  elegantly  studied 
are  first  passage  times  to  lower  levels;  these  first  passage  times  correspond 
to  the  busy  period  in  queueing  theory. 

There do not exist many published results for processes with finite block-Hessenberg matrices. Most of them deal with block-Jacobi matrices (the blocks $Q_{n,m}$ are zero if either $m > n+1$ or $m < n-1$). Torrez [12] proposes to determine the stationary distribution for birth-and-death processes in a random environment by using numerical procedures designed for band matrices. Neuts [10] and Carrol et al. [1] explicitly determine the stationary probability distribution for the PH/M/C and M/PH/1 queues with finite waiting room. Hajek [3] considers a model with repetitive structure: $Q_{n,n} = Q_{1,1}$, $1 \le n \le N-1$, $Q_{n,n+1} = Q_{1,2}$ and $Q_{n+1,n} = Q_{2,1}$, $1 \le n \le N-2$. Processes with general block-Jacobi matrices are analysed in Keilson et al. [4] and in Gaver et al. [2], where specific numerical examples are also given.

Wikarski [13] determines, by purely analytic arguments, the stationary probability vector for processes such that $K_0 \ge K_1 \ge \cdots \ge K_N$, and such that the matrices $Q_{n,n+1}$ are of rank $K_{n+1}$, $0 \le n \le N-1$. We do not need these restrictions. Moreover, our approach yields quantities which have a clear probabilistic interpretation.

The purpose of the present paper is to show how the stationary probability distribution, and moments of first passage times from some level to another level, may be efficiently computed.

The stationary probability distribution is analysed in the next section. In Section 3, we analyse the first passage time from level $n$ to a higher level $m$, $0 \le n < m \le N$. The results in these two sections are related to those in [2].

The analysis of the first passage time from level $n$ to a lower level $k$, $0 \le k < n \le N$, is more involved, and is presented in Section 4. We shall observe that it is possible to compute independently the stationary probability distribution and the moments of first passage times to higher levels. On the other hand, it is necessary to compute both the stationary distribution and the moments of first passage times to higher levels in order to compute the moments of first passage times to lower levels.

In Section 6, we give some numerical examples of common-cause failure models. We consider systems with $N$ machines and $R$ repairmen. If $n$ denotes the number of machines in working condition, the repair rate is $\mu \min(R, N-n)$. The failures occur in this way: in an interval $(t, t+dt)$, there is a jump from $n$ to $n-1$ with probability $\lambda n\,dt + o(dt)$ (independent failures), or to $n-k$, $1 \le k \le n$, with probability $\nu \binom{n}{k} q^k (1-q)^{n-k}\,dt + o(dt)$ (common-cause failures).

The results to follow are applicable, mutatis mutandis, to Markov processes with upper block-Hessenberg infinitesimal generator, and to finite Markov chains with block-Hessenberg transition probability matrix, as we briefly mention in Section 5.




Notational conventions.

We define a number of vectors and matrices with indices. Occasionally, we refer to specific elements of those. In order to somewhat simplify the notations, we use $Q_{n,m}(i,j)$ to represent the $(i,j)$th element of the matrix $Q_{n,m}$.

We systematically use "n" to denote a given level, "m" to denote a higher level, and "k" to denote a lower level.
2.  THE  STATIONARY  PROBABILITY  DISTRIBUTION 


We denote by $p$ the vector of stationary probabilities associated with the Markov process $Q$:

$$p\,Q = 0, \qquad p\,e = 1, \tag{2.1}$$

where $e$ is a column vector with unit elements. We partition the vector $p$ as $p = (p_0, p_1, \ldots, p_N)$, where the subvectors $p_n$ have $K_n$ elements and correspond to the states of level $n$, $0 \le n \le N$.

In order to determine the vector $p$, we need three preliminary lemmas.

Lemma 1

Consider a Markov process on the state space $\{1, 2, \ldots, r, r+1, r+2, \ldots, r+s\}$, with infinitesimal generator $Q$:

$$Q = \begin{pmatrix} A & \bar A \\ 0 & 0 \end{pmatrix},$$

where $A$ is a square matrix of order $r$ and $\bar A$ is a rectangular $r \times s$ matrix. The states $r+1$ to $r+s$ are all absorbing.

a) The states $1$ to $r$ are all transient if and only if the matrix $A$ is nonsingular, in which case $(-A^{-1}) \ge 0$.

b) The $(i,j)$th entry of $(-A^{-1})$, for $1 \le i, j \le r$, is the expected amount of time spent in the transient state $j$, starting from the transient state $i$, before absorption in any of the absorbing states.

c) The $(i,k)$th entry of $(-A^{-1}\bar A)$, for $1 \le i \le r$, $r+1 \le k \le r+s$, is the probability that, starting from the transient state $i$, absorption occurs in the state $k$.

d) The $(i,k)$th entry of $(-A^{-1})(-A^{-1}\bar A)$, for $1 \le i \le r$, $r+1 \le k \le r+s$, is the expectation of the time spent in the transient states, starting from the transient state $i$, before absorption, measured on those paths that lead to absorption in the state $k$.

Proof. These results are well known, although we are not aware of publications where they are proved under the stated form. The first statement is proved in Neuts [9], Lemma 2.2.1; the second statement is a consequence of Neuts and Meier [11], Corollary 2; the third statement is proved in [2], Lemma 1. To prove the fourth statement, we must show that the $(i,k)$th entry of $(-A^{-1})(-A^{-1}\bar A)$ is equal to $\int_0^\infty t\,dv_{i,k}(t)$, where

$$v_{i,k}(t) = P[\text{the process gets absorbed in } k \text{ before time } t \mid \text{the initial state is } i].$$

Conditioning on the state of the process at time $h > 0$, one obtains

$$v_{i,k}(t+h) = (1 + A_{ii}\,h)\,v_{i,k}(t) + \bar A_{i,k}\,h + \sum_{j \ne i} A_{ij}\,h\,v_{j,k}(t) + o(h),$$

hence

$$v'_{i,k}(t) = \sum_{j} A_{ij}\,v_{j,k}(t) + \bar A_{i,k}, \quad \text{or} \quad v'(t) = A\,v(t) + \bar A,$$

where $v(t)$ is the matrix formed by the $v_{i,k}(t)$'s. We already know that $v(0) = 0$ and $\lim_{t \to \infty} v(t) = (-A^{-1})\bar A$; therefore, we easily obtain that

$$\int_0^\infty t\,dv(t) = -\big[t\big((-A^{-1})\bar A - v(t)\big)\big]_0^\infty + \int_0^\infty \big((-A^{-1})\bar A - v(t)\big)\,dt$$
$$= \int_0^\infty (-A^{-1})\,v'(t)\,dt = (-A^{-1})\,[v(t)]_0^\infty = (-A^{-1})(-A^{-1}\bar A). \qquad \square$$


We now define the processes $S_n$, $0 \le n \le N$. For $1 \le n \le N$, $S_n$ is the restriction of the original process $Q$, observed during those intervals of time spent at level $n$, before the process $Q$ enters level $n-1$ for the first time. The state space of $S_n$ is $\{(n,j),\ 1 \le j \le K_n\}$. Clearly, all $S_n$, $1 \le n \le N$, are transient Markov processes. The process $S_0$ is the restriction of the process $Q$ observed at the lowest level 0; it is an ergodic Markov process. We denote by $C_n$ the infinitesimal generator of the process $S_n$, $0 \le n \le N$.

We also define the matrices $\Pi_{n,k}$, $0 \le k < n \le N$, as follows: $\Pi_{n,k}(i,j)$ is the probability that, starting from state $(n,i)$, the first state visited at any level below level $n$ is the state $(k,j)$.

Lemma 2

The matrices $C_n$, $0 \le n \le N$, are recursively determined as follows:

$$C_N = Q_{N,N}, \tag{2.2}$$
$$C_n = Q_{n,n} + Q_{n,n+1}\,\Pi_{n+1,n}, \quad 0 \le n \le N-1. \tag{2.3}$$

Proof. The equation (2.2) is obvious: starting from level $N$, the process $S_N$ terminates as soon as the process $Q$ enters any of the levels $0$ to $N-1$. Meanwhile, the transitions are governed by $Q_{N,N}$.

Consider $n$ fixed, $1 \le n \le N-1$, and let $Z_n(\tau)$, $\tau \ge 0$, be defined as follows: $Z_n(\tau) = i$ if the process $S_n$ is in state $(n,i)$ at time $\tau$.

Assume that $Z_n(\tau) = i$ and $Z_n(\tau + d\tau) = j \ne i$. Since $j \ne i$, a transition must have occurred in the process $Q$. $Z_n(\tau + d\tau) = j$ if and only if one of the following events has occurred.

a. The transition is from $(n,i)$ to $(n,j)$; this happens with probability $Q_{n,n}(i,j)\,d\tau + o(d\tau)$.

b. The transition is from $(n,i)$ to $(n+1,h)$ for some $h$, and the process $Q$ returns to $(n,j)$, after spending an unspecified amount of time at level $n+1$ or higher, before visiting any other state at level $n$ or lower. This happens with probability $Q_{n,n+1}(i,h)\,d\tau\,\Pi_{n+1,n}(h,j) + o(d\tau)$.

It is now clear that

$$P[Z_n(\tau + d\tau) = j \mid Z_n(\tau) = i] = Q_{n,n}(i,j)\,d\tau + \sum_{h=1}^{K_{n+1}} Q_{n,n+1}(i,h)\,\Pi_{n+1,n}(h,j)\,d\tau + o(d\tau), \quad \text{for } 1 \le i \ne j \le K_n,$$

and similarly, that

$$P[Z_n(\tau + d\tau) = i \mid Z_n(\tau) = i] = 1 + Q_{n,n}(i,i)\,d\tau + \sum_{h=1}^{K_{n+1}} Q_{n,n+1}(i,h)\,\Pi_{n+1,n}(h,i)\,d\tau + o(d\tau), \quad \text{for } 1 \le i \le K_n,$$

which proves (2.3) for $n \ge 1$. The proof for $n = 0$ is identical. $\square$
Lemma 3

The matrices $\Pi_{n,k}$, $0 \le k < n \le N$, are recursively determined as follows:

$$\Pi_{N,k} = (-C_N^{-1})\,Q_{N,k}, \quad \text{for } 0 \le k \le N-1, \tag{2.4}$$
$$\Pi_{n,k} = (-C_n^{-1})\big(Q_{n,k} + Q_{n,n+1}\,\Pi_{n+1,k}\big), \quad \text{for } 0 \le k < n \le N-1. \tag{2.5}$$

Proof. The equation (2.4) is an immediate consequence of Lemma 1 and (2.2). A simple probabilistic argument leads to

$$\Pi_{n,k} = (-Q_{n,n}^{-1})\big[Q_{n,k} + Q_{n,n+1}\,\Pi_{n+1,k} + Q_{n,n+1}\,\Pi_{n+1,n}\,\Pi_{n,k}\big],$$

which, together with (2.3), implies (2.5). $\square$

Remark. The matrices $C_n$, $0 \le n \le N$, and $\Pi_{n,k}$, $0 \le k < n \le N$, must be numerically computed together, as indicated below.

Algorithm A

A.1 $C_N := Q_{N,N}$;
A.2 for k from 0 to N-1 do determine $\Pi_{N,k}$ from (2.4);
A.3 for n from N-1 to 1 step -1 do
    begin determine $C_n$ from (2.3);
          for k from 0 to n-1 do determine $\Pi_{n,k}$ from (2.5)
    end;
A.4 determine $C_0$ from (2.3).
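As a concrete sketch of Algorithm A (Python with NumPy; the example generator is invented, with $N = 2$ and two states per level), the dictionaries `C` and `Pi` stand for the matrices $C_n$ and $\Pi_{n,k}$, and each multiplication by $(-C_n^{-1})$ is carried out as a linear solve. As a sanity check: from any level $n \ge 1$ the first passage below $n$ is certain, so the matrices $\Pi_{n,0}, \ldots, \Pi_{n,n-1}$ must have row sums adding to one.

```python
import numpy as np

# Invented example: N = 2 (levels 0, 1, 2), K = 2 states per level.
K, N = 2, 2
up    = np.array([[1.0, 0.5], [0.0, 1.5]])
down1 = np.array([[2.0, 0.0], [1.0, 1.0]])
down2 = np.array([[0.5, 0.5], [0.0, 1.0]])
local = np.array([[0.0, 0.7], [0.3, 0.0]])
Qb = {(0, 1): up, (1, 2): up, (1, 0): down1, (2, 1): down1, (2, 0): down2}
for n in range(N + 1):
    out = local.sum(axis=1) + sum(Qb[(n, m)].sum(axis=1)
                                  for m in range(N + 1) if (n, m) in Qb)
    Qb[(n, n)] = local - np.diag(out)
Qblk = lambda n, m: Qb.get((n, m), np.zeros((K, K)))

C = {N: Qb[(N, N)]}                                                   # A.1
Pi = {(N, k): np.linalg.solve(-C[N], Qblk(N, k)) for k in range(N)}   # A.2, (2.4)
for n in range(N - 1, 0, -1):                                         # A.3
    C[n] = Qb[(n, n)] + Qblk(n, n + 1) @ Pi[(n + 1, n)]               # (2.3)
    for k in range(n):
        Pi[(n, k)] = np.linalg.solve(                                 # (2.5)
            -C[n], Qblk(n, k) + Qblk(n, n + 1) @ Pi[(n + 1, k)])
C[0] = Qb[(0, 0)] + Qblk(0, 1) @ Pi[(1, 0)]                           # A.4

# First passage below level n is certain, so the Pi_{n,k} row sums add to one.
for n in range(1, N + 1):
    assert np.allclose(sum(Pi[(n, k)] for k in range(n)).sum(axis=1), 1.0)
```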

We  are  now  in  a  position  to  prove  our  first  main  result. 


Theorem 1.

The vectors $p_n$, $0 \le n \le N$, are determined by the equations

$$p_0\,C_0 = 0, \tag{2.6}$$
$$p_n = p_{n-1}\,Q_{n-1,n}\,(-C_n^{-1}), \quad \text{for } 1 \le n \le N, \tag{2.7}$$
$$\sum_{n=0}^{N} p_n\,e = 1. \tag{2.8}$$




Proof. From the structure (1.1) of $Q$, it results that the system $p\,Q = 0$ may be decomposed into

$$p_0\,Q_{0,0} + \sum_{v=1}^{N} p_v\,Q_{v,0} = 0, \tag{2.9}$$
$$p_{k-1}\,Q_{k-1,k} + p_k\,Q_{k,k} + \sum_{v=k+1}^{N} p_v\,Q_{v,k} = 0, \quad \text{for } 1 \le k \le N-1, \tag{2.10}$$
$$p_{N-1}\,Q_{N-1,N} + p_N\,Q_{N,N} = 0. \tag{2.11}$$

From (2.11) and Lemma 2, $p_N = -p_{N-1}\,Q_{N-1,N}\,Q_{N,N}^{-1} = p_{N-1}\,Q_{N-1,N}\,(-C_N^{-1})$, which proves (2.7) for $n = N$.

The remainder of the proof uses an induction argument in two steps.

Induction assumption 1: for a given $k$, $1 \le k \le N-1$, the equation (2.7) holds for $k+1 \le n \le N$.

In order to prove that (2.7) holds for $n = k$, we first need to show that

$$\sum_{v=t}^{N} p_v\,Q_{v,k} = p_t\,(-C_t)\,\Pi_{t,k}, \quad \text{for } k+1 \le t \le N. \tag{2.12}$$

Clearly, (2.12) holds for $t = N$, by (2.4). An induction argument will be used to prove (2.12) for arbitrary $t \ge k+1$.

Induction assumption 2: for a given $a$, $k+1 \le a+1 \le N$, the equation (2.12) holds for $a+1 \le t \le N$.

We prove that it holds for $t = a$, by using successively assumption 2, assumption 1, and (2.5):

$$\sum_{v=a}^{N} p_v\,Q_{v,k} = p_a\,Q_{a,k} + p_{a+1}\,(-C_{a+1})\,\Pi_{a+1,k}$$
$$= p_a\,\big(Q_{a,k} + Q_{a,a+1}\,\Pi_{a+1,k}\big) = p_a\,(-C_a)\,\Pi_{a,k}.$$

Finally, to show that (2.7) is satisfied for $n = k$, note that (2.10) can be written as

$$0 = p_{k-1}\,Q_{k-1,k} + p_k\,Q_{k,k} + \sum_{v=k+1}^{N} p_v\,Q_{v,k}$$
$$= p_{k-1}\,Q_{k-1,k} + p_k\,Q_{k,k} + p_{k+1}\,(-C_{k+1})\,\Pi_{k+1,k}$$
$$= p_{k-1}\,Q_{k-1,k} + p_k\,\big(Q_{k,k} + Q_{k,k+1}\,\Pi_{k+1,k}\big) = p_{k-1}\,Q_{k-1,k} + p_k\,C_k.$$

The equation (2.6) is proved in the same way. Since $C_0$ is the infinitesimal generator of a finite, irreducible Markov process, the equation (2.6) has a unique solution, up to a multiplicative constant, and that constant is determined by (2.8). $\square$
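Theorem 1 can be exercised numerically. The sketch below (Python with NumPy; the example is invented, with $N = 2$ and two states per level) runs Algorithm A, solves (2.6) for $p_0$ by the usual trick of replacing one column of $C_0$ with ones, propagates (2.7), normalizes by (2.8), and compares the result with a direct solution of $p\,Q = 0$ on the full generator:

```python
import numpy as np

# Invented example: N = 2 (levels 0, 1, 2), K = 2 states per level.
K, N = 2, 2
up    = np.array([[1.0, 0.5], [0.0, 1.5]])
down1 = np.array([[2.0, 0.0], [1.0, 1.0]])
down2 = np.array([[0.5, 0.5], [0.0, 1.0]])
local = np.array([[0.0, 0.7], [0.3, 0.0]])
Qb = {(0, 1): up, (1, 2): up, (1, 0): down1, (2, 1): down1, (2, 0): down2}
for n in range(N + 1):
    out = local.sum(axis=1) + sum(Qb[(n, m)].sum(axis=1)
                                  for m in range(N + 1) if (n, m) in Qb)
    Qb[(n, n)] = local - np.diag(out)
Qblk = lambda n, m: Qb.get((n, m), np.zeros((K, K)))

# Algorithm A: the generators C_n and matrices Pi_{n,k}.
C = {N: Qb[(N, N)]}
Pi = {(N, k): np.linalg.solve(-C[N], Qblk(N, k)) for k in range(N)}
for n in range(N - 1, 0, -1):
    C[n] = Qb[(n, n)] + Qblk(n, n + 1) @ Pi[(n + 1, n)]
    for k in range(n):
        Pi[(n, k)] = np.linalg.solve(
            -C[n], Qblk(n, k) + Qblk(n, n + 1) @ Pi[(n + 1, k)])
C[0] = Qb[(0, 0)] + Qblk(0, 1) @ Pi[(1, 0)]

# (2.6): p_0 C_0 = 0, via the column-replacement trick; (2.7)-(2.8) follow.
M = C[0].copy(); M[:, 0] = 1.0
p = {0: np.linalg.solve(M.T, np.eye(K)[0])}
for n in range(1, N + 1):
    p[n] = np.linalg.solve(-C[n].T, Qblk(n - 1, n).T @ p[n - 1])
total = sum(p[n].sum() for n in range(N + 1))
p = {n: p[n] / total for n in p}

# Direct check: stationary vector of the full generator Q.
Q = np.zeros(((N + 1) * K, (N + 1) * K))
for (n, m), B in Qb.items():
    Q[n * K:(n + 1) * K, m * K:(m + 1) * K] = B
Mq = Q.copy(); Mq[:, 0] = 1.0
p_direct = np.linalg.solve(Mq.T, np.eye((N + 1) * K)[0])
assert np.allclose(np.concatenate([p[n] for n in range(N + 1)]), p_direct)
```

The point of the theorem is visible in the sketch: only the $K_n \times K_n$ matrices $C_n$ are factored, never the full generator.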

We have thus shown that the matrices $C_n$, $0 \le n \le N$, are the crucial quantities needed to determine the stationary probability distribution. We shall observe in Section 4 that the same matrices, together with the matrices $\Pi_{n,k}$, $0 \le k < n \le N$, play a central role in the evaluation of first passage times from a given level $n$ to a lower level $k$. We must, however, first examine the first passage times to higher levels.

3.     FIRST  PASSAGE  TIME  TO  HIGHER  LEVELS 


We denote by $\tau_{n,n'}$ the first passage time from level $n$ to a different level $n'$: $\tau_{n,n'} = \inf\{t > 0 : X(t) = n' \mid X(0) = n\}$, $0 \le n, n' \le N$, $n \ne n'$. We are concerned in this section with the first passage times $\tau_{n,m}$ for $m > n$. We first need to establish some preliminary results.

We define the processes $\tilde S_n$, $0 \le n \le N$, in a similar fashion to the processes $S_n$ (the tilde distinguishes the new quantities from those of Section 2). For $0 \le n \le N-1$, the transient Markov process $\tilde S_n$ is the restriction of the process $Q$, observed during those intervals of time spent at level $n$, before the process $Q$ moves upward to level $n+1$ for the first time. The ergodic process $\tilde S_N$ is the restriction of the process $Q$ observed at the highest level $N$. We denote by $\tilde C_n$ the infinitesimal generator of the process $\tilde S_n$, $0 \le n \le N$.




We also define the matrices $\Gamma_{n,m}$, $0 \le n < m \le N$, as follows: $\Gamma_{n,m}(i,j)$ is the probability that, starting from state $(n,i)$, the first state visited at level $m$ is the state $(m,j)$. More formally, $\Gamma_{n,m}(i,j) = P[Y(\tau_{n,m}) = j \mid X(0) = n,\ Y(0) = i]$. Clearly,

$$\Gamma_{n,m}\,e = e, \quad \text{for all } n \text{ and } m. \tag{3.1}$$

Lemma 4.

The matrices $\tilde C_n$, $0 \le n \le N$, are recursively determined as follows:

$$\tilde C_0 = Q_{0,0}, \tag{3.2}$$
$$\tilde C_n = Q_{n,n} + \sum_{k=0}^{n-1} Q_{n,k}\,\Gamma_{k,n}, \quad \text{for } 1 \le n \le N. \tag{3.3}$$

The proof is analogous to that of Lemma 2 and is omitted here. $\square$


Lemma 5

The matrices $\Gamma_{n,m}$, $0 \le n < m \le N$, are recursively determined as follows:

$$\Gamma_{n,n+1} = (-\tilde C_n^{-1})\,Q_{n,n+1}, \quad \text{for } 0 \le n \le N-1, \tag{3.4}$$
$$\Gamma_{n,m} = \Gamma_{n,m-1}\,\Gamma_{m-1,m} \tag{3.5}$$
$$\phantom{\Gamma_{n,m}} = \prod_{t=n}^{m-1} \Gamma_{t,t+1}, \quad \text{for } 0 \le n \le N-2,\ n+2 \le m \le N. \tag{3.6}$$

Proof. For $n = 0$, the equation (3.4) is an immediate consequence of Lemma 1 and (3.2). For $1 \le n \le N-1$, observing the system at the first departure from level $n$ leads to

$$\Gamma_{n,n+1} = (-Q_{n,n}^{-1})\Big(Q_{n,n+1} + \sum_{k=0}^{n-1} Q_{n,k}\,\Gamma_{k,n}\,\Gamma_{n,n+1}\Big),$$

which, together with (3.3), implies (3.4). To prove (3.5), since $Q$ has the structure (1.1), one merely decomposes the probabilities in $\Gamma_{n,m}$ according to the first state visited at level $m-1$. Equation (3.6) is now obvious. $\square$

The matrices $\tilde C_n$, $0 \le n \le N$, and $\Gamma_{n,m}$, $0 \le n < m \le N$, must be numerically computed together, as indicated below.

Algorithm B

B.1 $\tilde C_0 := Q_{0,0}$;
    determine $\Gamma_{0,1}$ from (3.4);
B.2 for m from 1 to N-1 do
    begin determine $\tilde C_m$ from (3.3);
          determine $\Gamma_{m,m+1}$ from (3.4);
          for n from 0 to m-1 do determine $\Gamma_{n,m+1}$ from (3.5)
    end;
B.3 determine $\tilde C_N$ from (3.3).
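Algorithm B admits the same kind of sketch (Python with NumPy; the example generator is invented, as before). `Ct` and `G` stand for the matrices $\tilde C_n$ and $\Gamma_{n,m}$. By (3.1) every $\Gamma_{n,m}$ is stochastic, which the last loop verifies:

```python
import numpy as np

# Invented example: N = 2 (levels 0, 1, 2), K = 2 states per level.
K, N = 2, 2
up    = np.array([[1.0, 0.5], [0.0, 1.5]])
down1 = np.array([[2.0, 0.0], [1.0, 1.0]])
down2 = np.array([[0.5, 0.5], [0.0, 1.0]])
local = np.array([[0.0, 0.7], [0.3, 0.0]])
Qb = {(0, 1): up, (1, 2): up, (1, 0): down1, (2, 1): down1, (2, 0): down2}
for n in range(N + 1):
    out = local.sum(axis=1) + sum(Qb[(n, m)].sum(axis=1)
                                  for m in range(N + 1) if (n, m) in Qb)
    Qb[(n, n)] = local - np.diag(out)
Qblk = lambda n, m: Qb.get((n, m), np.zeros((K, K)))

Ct = {0: Qb[(0, 0)]}                                      # B.1, (3.2)
G = {(0, 1): np.linalg.solve(-Ct[0], Qblk(0, 1))}         # (3.4)
for m in range(1, N):                                     # B.2
    Ct[m] = Qb[(m, m)] + sum(Qblk(m, k) @ G[(k, m)] for k in range(m))  # (3.3)
    G[(m, m + 1)] = np.linalg.solve(-Ct[m], Qblk(m, m + 1))             # (3.4)
    for n in range(m):
        G[(n, m + 1)] = G[(n, m)] @ G[(m, m + 1)]                       # (3.5)
Ct[N] = Qb[(N, N)] + sum(Qblk(N, k) @ G[(k, N)] for k in range(N))      # B.3

# (3.1): upward passage to any fixed higher level is certain.
for Gm in G.values():
    assert np.allclose(Gm.sum(axis=1), 1.0)
```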


We now define, for $x \ge 0$, $0 \le n, n' \le N$, $n \ne n'$, $1 \le i \le K_n$, $1 \le j \le K_{n'}$,

$$g_{n,i;n',j}(x) = P[\tau_{n,n'} \le x,\ Y(\tau_{n,n'} + 0) = j \mid X(0) = n,\ Y(0) = i]; \tag{3.7}$$

we denote by $G_{n,n'}(\xi)$ the matrix such that $G_{n,n'}(\xi)(i,j)$ equals the Laplace-Stieltjes transform of the possibly defective distribution $g_{n,i;n',j}(\cdot)$.

Clearly, since the Markov process may only move up by one level at a time, we have that

$$G_{n,m}(\xi) = G_{n,m-1}(\xi)\,G_{m-1,m}(\xi) \tag{3.8}$$
$$\phantom{G_{n,m}(\xi)} = \prod_{t=n}^{m-1} G_{t,t+1}(\xi), \quad \text{for } 0 \le n \le N-2,\ n+2 \le m \le N. \tag{3.9}$$

A simple argument conditioning on the state of the process at time $h > 0$ shows that

$$g_{n,i;n+1,j}(x+h) = \big[1 + (Q_{n,n})_{i,i}\,h\big]\,g_{n,i;n+1,j}(x) + \sum_{v \ne i} (Q_{n,n})_{i,v}\,h\,g_{n,v;n+1,j}(x)$$
$$+\ \sum_{k=0}^{n-1} \sum_{v=1}^{K_k} (Q_{n,k})_{i,v}\,h \sum_{y=1}^{K_n} \big(g_{k,v;n,y} * g_{n,y;n+1,j}\big)(x) + (Q_{n,n+1})_{i,j}\,h + o(h), \quad \text{for } 1 \le n \le N-1, \tag{3.10}$$

where $*$ denotes the Stieltjes convolution, from which we conclude that

$$\xi\,G_{n,n+1}(\xi) = Q_{n,n}\,G_{n,n+1}(\xi) + Q_{n,n+1} + \sum_{k=0}^{n-1} Q_{n,k}\,G_{k,n}(\xi)\,G_{n,n+1}(\xi), \quad \text{for } 1 \le n \le N-1,$$

and similarly,

$$\xi\,G_{0,1}(\xi) = Q_{0,0}\,G_{0,1}(\xi) + Q_{0,1}. \tag{3.11}$$

These equations may be written as

$$G_{n,n+1}(\xi) = D_n(\xi)\,Q_{n,n+1}, \quad \text{for } 0 \le n \le N-1, \tag{3.12}$$

where

$$D_0(\xi) = (\xi I - Q_{0,0})^{-1}, \tag{3.13}$$
$$D_n(\xi) = \Big(\xi I - Q_{n,n} - \sum_{k=0}^{n-1} Q_{n,k}\,G_{k,n}(\xi)\Big)^{-1}, \quad \text{for } 1 \le n \le N-1. \tag{3.14}$$

Of course, we have $G_{n,m}(0) = \Gamma_{n,m}$, $0 \le n < m \le N$, and, from (3.3), $D_n(0) = (-\tilde C_n^{-1})$.




Let

$$U_{n,n'} = -\frac{\partial}{\partial \xi}\,G_{n,n'}(\xi)\Big|_{\xi=0} \tag{3.15}$$

and

$$u_{n,n'} = U_{n,n'}\,e, \quad \text{for } 0 \le n, n' \le N,\ n \ne n'. \tag{3.16}$$

We have

$$U_{n,m}(i,j) = E[\tau_{n,m};\ Y(\tau_{n,m} + 0) = j \mid X(0) = n,\ Y(0) = i],$$

and

$$u_{n,m}(i) = E[\tau_{n,m} \mid X(0) = n,\ Y(0) = i].$$


Theorem 2.

The expected values of the first passage times to higher levels are given by the following relations:

$$u_{0,1} = (-\tilde C_0^{-1})\,e, \tag{3.17}$$
$$u_{n,n+1} = (-\tilde C_n^{-1})\Big(e + \sum_{k=0}^{n-1} Q_{n,k}\,u_{k,n}\Big), \quad \text{for } 1 \le n \le N-1, \tag{3.18}$$
$$u_{n,m} = u_{n,m-1} + \Gamma_{n,m-1}\,u_{m-1,m}, \quad \text{for } 0 \le n \le N-2,\ n+2 \le m \le N. \tag{3.19}$$

Proof. The equation (3.17) immediately results from Lemma 1. The equations (3.19) are directly proved by using the strong Markov property:

$$u_{n,m}(i) = E[\tau_{n,m} \mid X(0) = n,\ Y(0) = i]$$
$$= E[\tau_{n,m-1} \mid X(0) = n,\ Y(0) = i] + E\big[E[\tau_{m-1,m} \mid Y(\tau_{n,m-1} + 0)] \,\big|\, X(0) = n,\ Y(0) = i\big].$$

Conditioning on the first level visited after level $n$, and using Lemma 1, we obtain

$$u_{n,n+1} = (-Q_{n,n}^{-1})\,e + \sum_{k=0}^{n-1} (-Q_{n,n}^{-1})\,Q_{n,k}\,u_{k,n+1}$$
$$= (-Q_{n,n}^{-1})\,e + \sum_{k=0}^{n-1} (-Q_{n,n}^{-1})\,Q_{n,k}\,\big(u_{k,n} + \Gamma_{k,n}\,u_{n,n+1}\big),$$

by (3.19). Hence,

$$\Big(Q_{n,n} + \sum_{k=0}^{n-1} Q_{n,k}\,\Gamma_{k,n}\Big)\,u_{n,n+1} = -\Big(e + \sum_{k=0}^{n-1} Q_{n,k}\,u_{k,n}\Big),$$

which, together with (3.3), proves (3.18). $\square$


Remarks.

a. Another argument uses the equations (3.12), (3.15) and (3.16), and requires the evaluation of the matrix $\partial D_n(\xi)/\partial \xi$ at $\xi = 0$. This is done in Appendix A. Similarly, higher moments of first passage times may be obtained by purely routine, albeit tedious, manipulations.

b. The vectors $u_{n,m}$ may be computed together with the matrices $\Gamma_{n,m}$, by appropriately modifying Algorithm B.
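Theorem 2 can be checked against an independent computation. The sketch below (Python with NumPy; invented example as before) runs Algorithm B, evaluates the mean upward passage times from (3.17)-(3.19), and then applies Lemma 1 directly: making level $m$ absorbing and solving $A h = -e$ over the states at levels $0, \ldots, m-1$ must return the same means.

```python
import numpy as np

K, N = 2, 2
up    = np.array([[1.0, 0.5], [0.0, 1.5]])
down1 = np.array([[2.0, 0.0], [1.0, 1.0]])
down2 = np.array([[0.5, 0.5], [0.0, 1.0]])
local = np.array([[0.0, 0.7], [0.3, 0.0]])
Qb = {(0, 1): up, (1, 2): up, (1, 0): down1, (2, 1): down1, (2, 0): down2}
for n in range(N + 1):
    out = local.sum(axis=1) + sum(Qb[(n, m)].sum(axis=1)
                                  for m in range(N + 1) if (n, m) in Qb)
    Qb[(n, n)] = local - np.diag(out)
Qblk = lambda n, m: Qb.get((n, m), np.zeros((K, K)))
Q = np.zeros(((N + 1) * K, (N + 1) * K))
for (n, m), B in Qb.items():
    Q[n * K:(n + 1) * K, m * K:(m + 1) * K] = B

Ct = {0: Qb[(0, 0)]}                                      # Algorithm B
G = {(0, 1): np.linalg.solve(-Ct[0], Qblk(0, 1))}
for m in range(1, N):
    Ct[m] = Qb[(m, m)] + sum(Qblk(m, k) @ G[(k, m)] for k in range(m))
    G[(m, m + 1)] = np.linalg.solve(-Ct[m], Qblk(m, m + 1))
    for n in range(m):
        G[(n, m + 1)] = G[(n, m)] @ G[(m, m + 1)]
Ct[N] = Qb[(N, N)] + sum(Qblk(N, k) @ G[(k, N)] for k in range(N))

u = {(0, 1): np.linalg.solve(-Ct[0], np.ones(K))}         # (3.17)
for n in range(1, N):                                     # (3.18)
    rhs = np.ones(K) + sum(Qblk(n, k) @ u[(k, n)] for k in range(n))
    u[(n, n + 1)] = np.linalg.solve(-Ct[n], rhs)
for m in range(2, N + 1):                                 # (3.19)
    for n in range(m - 1):
        u[(n, m)] = u[(n, m - 1)] + G[(n, m - 1)] @ u[(m - 1, m)]

# Independent check via Lemma 1: absorb at level m.
for m in range(1, N + 1):
    h = np.linalg.solve(-Q[:m * K, :m * K], np.ones(m * K))
    for n in range(m):
        assert np.allclose(u[(n, m)], h[n * K:(n + 1) * K])
```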


4. FIRST PASSAGE TIME TO LOWER LEVELS


We denote by $\tau_{n,k}$ the first passage time from level $n$ to a lower level $k$. In the preceding section, we have in essence decomposed the first passage time from level $n$ to a higher level $m$ into a sum of first passage times from $n$ to $n+1$, from $n+1$ to $n+2$, ..., from $m-1$ to $m$. Since it is possible to move down several levels in one transition, no such simple decomposition is possible for $\tau_{n,k}$. Rather, we must explicitly consider the first level visited below $n$, starting from level $n$. In order to do this, we define the first passage time $T_n$ as follows:

$$T_n = \inf\{t > 0 : X(t) < n \mid X(0) = n\} = \min\{\tau_{n,k},\ 0 \le k \le n-1\}, \quad \text{for } 1 \le n \le N.$$




We define, for $x \ge 0$, $0 \le k < n \le N$, $1 \le i \le K_n$, $1 \le j \le K_k$,

$$\hat g_{n,i;k,j}(x) = P[T_n \le x,\ X(T_n + 0) = k,\ Y(T_n + 0) = j \mid X(0) = n,\ Y(0) = i];$$

we denote by $\hat G_{n,k}(\xi)(i,j)$ the Laplace-Stieltjes transform of the possibly defective distribution $\hat g_{n,i;k,j}(\cdot)$, and by $\hat G_{n,k}(\xi)$ the matrix with entries given by $\hat G_{n,k}(\xi)(i,j)$.

An argument similar to the one used to obtain (3.10) and (3.11) shows that

$$\xi\,\hat G_{N,k}(\xi) = Q_{N,k} + Q_{N,N}\,\hat G_{N,k}(\xi), \quad \text{for } 0 \le k \le N-1, \tag{4.1}$$

and

$$\xi\,\hat G_{n,k}(\xi) = Q_{n,k} + Q_{n,n}\,\hat G_{n,k}(\xi) + Q_{n,n+1}\big[\hat G_{n+1,k}(\xi) + \hat G_{n+1,n}(\xi)\,\hat G_{n,k}(\xi)\big], \quad \text{for } 0 \le k < n \le N-1. \tag{4.2}$$

These equations may be written as

$$\hat G_{N,k}(\xi) = \hat D_N(\xi)\,Q_{N,k}, \quad \text{for } 0 \le k \le N-1, \tag{4.3}$$
$$\hat G_{n,k}(\xi) = \hat D_n(\xi)\big[Q_{n,k} + Q_{n,n+1}\,\hat G_{n+1,k}(\xi)\big], \quad \text{for } 0 \le k < n \le N-1, \tag{4.4}$$

where

$$\hat D_N(\xi) = (\xi I - Q_{N,N})^{-1}, \tag{4.5}$$
$$\hat D_n(\xi) = \big(\xi I - Q_{n,n} - Q_{n,n+1}\,\hat G_{n+1,n}(\xi)\big)^{-1}, \quad \text{for } 1 \le n \le N-1. \tag{4.6}$$

Obviously, we have that $\hat G_{n,k}(0) = \Pi_{n,k}$, and (Lemma 2) $\hat D_n(0) = (-C_n^{-1})$. Observe that

$$\Pi_{n,k}\,e \le e, \quad \text{for } 0 \le k < n \le N, \qquad \sum_{k=0}^{n-1} \Pi_{n,k}\,e = e, \quad \text{for } 1 \le n \le N. \tag{4.7}$$



Let

$$D_{n,k} = -\frac{\partial}{\partial \xi}\,\hat G_{n,k}(\xi)\Big|_{\xi=0}; \tag{4.8}$$

then

$$D_{n,k}(i,j) = E[T_n;\ X(T_n + 0) = k,\ Y(T_n + 0) = j \mid X(0) = n,\ Y(0) = i].$$

Lemma 6.

The matrices $D_{n,k}$ are recursively determined as follows:

$$D_{N,k} = (-C_N^{-1})\,\Pi_{N,k}, \quad \text{for } 0 \le k \le N-1, \tag{4.9}$$
$$D_{n,k} = (-C_n^{-1})\big[\Pi_{n,k} + Q_{n,n+1}\,\big(D_{n+1,k} + D_{n+1,n}\,\Pi_{n,k}\big)\big], \quad \text{for } 0 \le k < n \le N-1. \tag{4.10}$$

Proof. It results from Lemma 1 that

$$D_{N,k} = (-Q_{N,N}^{-1})\big(-Q_{N,N}^{-1}\,Q_{N,k}\big), \quad \text{for } 0 \le k \le N-1,$$

which, together with (2.4), proves (4.9). For $n \le N-1$, we condition on the first level visited after level $n$, and we obtain, by using Lemma 1 and the strong Markov property, that

$$D_{n,k} = (-Q_{n,n}^{-1})\,\Pi_{n,k} + (-Q_{n,n}^{-1})\,Q_{n,n+1}\,\big(D_{n+1,k} + D_{n+1,n}\,\Pi_{n,k} + \Pi_{n+1,n}\,D_{n,k}\big),$$

hence

$$\big(Q_{n,n} + Q_{n,n+1}\,\Pi_{n+1,n}\big)\,D_{n,k} = -\big[\Pi_{n,k} + Q_{n,n+1}\,\big(D_{n+1,k} + D_{n+1,n}\,\Pi_{n,k}\big)\big],$$

which, together with (2.3), proves (4.10). $\square$

Corollary 1.

The vectors $u_n$, defined as

$$u_n = \sum_{k=0}^{n-1} D_{n,k}\,e, \quad \text{for } 1 \le n \le N, \tag{4.11}$$

so that $u_n(i) = E[T_n \mid X(0) = n,\ Y(0) = i]$, are recursively determined as follows:

$$u_N = (-C_N^{-1})\,e, \tag{4.12}$$
$$u_n = (-C_n^{-1})\big(e + Q_{n,n+1}\,u_{n+1}\big), \quad \text{for } 1 \le n \le N-1. \tag{4.13}$$

This corollary immediately results from Lemma 6 and (4.7). $\square$
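The recursion (4.12)-(4.13) can be verified directly. In the sketch below (Python with NumPy; invented example as before), `un[n]` is the vector $u_n$ of mean first passage times below level $n$, computed on top of Algorithm A; the check makes the levels $0, \ldots, n-1$ absorbing and solves $A h = -e$ over the states at levels $n, \ldots, N$ (Lemma 1 again).

```python
import numpy as np

K, N = 2, 2
up    = np.array([[1.0, 0.5], [0.0, 1.5]])
down1 = np.array([[2.0, 0.0], [1.0, 1.0]])
down2 = np.array([[0.5, 0.5], [0.0, 1.0]])
local = np.array([[0.0, 0.7], [0.3, 0.0]])
Qb = {(0, 1): up, (1, 2): up, (1, 0): down1, (2, 1): down1, (2, 0): down2}
for n in range(N + 1):
    out = local.sum(axis=1) + sum(Qb[(n, m)].sum(axis=1)
                                  for m in range(N + 1) if (n, m) in Qb)
    Qb[(n, n)] = local - np.diag(out)
Qblk = lambda n, m: Qb.get((n, m), np.zeros((K, K)))
Q = np.zeros(((N + 1) * K, (N + 1) * K))
for (n, m), B in Qb.items():
    Q[n * K:(n + 1) * K, m * K:(m + 1) * K] = B

C = {N: Qb[(N, N)]}                                       # Algorithm A
Pi = {(N, k): np.linalg.solve(-C[N], Qblk(N, k)) for k in range(N)}
for n in range(N - 1, 0, -1):
    C[n] = Qb[(n, n)] + Qblk(n, n + 1) @ Pi[(n + 1, n)]
    for k in range(n):
        Pi[(n, k)] = np.linalg.solve(
            -C[n], Qblk(n, k) + Qblk(n, n + 1) @ Pi[(n + 1, k)])

un = {N: np.linalg.solve(-C[N], np.ones(K))}              # (4.12)
for n in range(N - 1, 0, -1):                             # (4.13)
    un[n] = np.linalg.solve(-C[n], np.ones(K) + Qblk(n, n + 1) @ un[n + 1])

# Independent check: absorb at the levels below n.
for n in range(1, N + 1):
    h = np.linalg.solve(-Q[n * K:, n * K:], np.ones((N + 1 - n) * K))
    assert np.allclose(un[n], h[:K])
```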

The Laplace-Stieltjes transforms of the distribution functions for the first passage times $\tau_{n,k}$ are given by the matrices $G_{n,k}(\xi)$. If one decomposes the trajectories from level $n$ to level $k$ according to the first level $v$ visited below $n$, one immediately obtains

$$G_{n,k}(\xi) = \hat G_{n,k}(\xi) + \sum_{v=0}^{k-1} \hat G_{n,v}(\xi)\,G_{v,k}(\xi) + \sum_{v=k+1}^{n-1} \hat G_{n,v}(\xi)\,G_{v,k}(\xi), \quad \text{for } 0 \le k < n \le N, \tag{4.14}$$

from which the moments of first passage times to specific lower levels may be derived. The first moments may be more easily obtained by using the strong Markov property. The next theorem is stated without proof.

Theorem 3.

The expected values of the first passage times to lower levels are given by the following relations:

$$u_{1,0} = u_1, \tag{4.15}$$
$$u_{n,k} = u_n + \sum_{v=0}^{k-1} \Pi_{n,v}\,u_{v,k} + \sum_{v=k+1}^{n-1} \Pi_{n,v}\,u_{v,k}, \quad \text{for } 0 \le k < n \le N, \tag{4.16}$$

where the matrices $\Pi_{n,v}$ are determined by Lemma 3, and the vectors $u_{v,k}$ for $v \le k-1$ are determined by Theorem 2. $\square$




We observe here a major difference between the first passage times to higher and to lower levels. We have observed in the preceding section (Remark b after Theorem 2) that the vectors $u_{n,m}$ for $m > n$ may be computed in one loop, starting from $n = 0$ to $n = N-1$. The evaluation of the vectors $u_{n,k}$ for $k < n$ requires two steps: one for computing the vectors $u_n$, starting from $n = N$ to $n = 1$; in the second step are computed the vectors $u_{n,k}$, starting from $n = 1$ to $n = N$. We observe also that the vectors $u_{n,m}$ for $m > n$ have to be computed, even if one is only interested in the first passage times to lower levels.

It is clear that one can combine the computation of the stationary probability distribution and the expected first passage times, as is done in Algorithm C, given in Appendix B.
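Putting the pieces together, Theorem 3 can be checked numerically. The sketch below (Python with NumPy; invented example as before) recomputes $C_n$ and $\Pi_{n,k}$ (Algorithm A), $\tilde C_n$, $\Gamma_{n,m}$ and the upward means $u_{n,m}$ (Algorithm B and Theorem 2), the vectors $u_n$ (Corollary 1), and finally the downward means $u_{n,k}$ of (4.15)-(4.16). Deleting the rows and columns of level $k$ from $Q$ and solving $A h = -e$ gives an independent value for the same quantities.

```python
import numpy as np

K, N = 2, 2
up    = np.array([[1.0, 0.5], [0.0, 1.5]])
down1 = np.array([[2.0, 0.0], [1.0, 1.0]])
down2 = np.array([[0.5, 0.5], [0.0, 1.0]])
local = np.array([[0.0, 0.7], [0.3, 0.0]])
Qb = {(0, 1): up, (1, 2): up, (1, 0): down1, (2, 1): down1, (2, 0): down2}
for n in range(N + 1):
    out = local.sum(axis=1) + sum(Qb[(n, m)].sum(axis=1)
                                  for m in range(N + 1) if (n, m) in Qb)
    Qb[(n, n)] = local - np.diag(out)
Qblk = lambda n, m: Qb.get((n, m), np.zeros((K, K)))
Q = np.zeros(((N + 1) * K, (N + 1) * K))
for (n, m), B in Qb.items():
    Q[n * K:(n + 1) * K, m * K:(m + 1) * K] = B

C = {N: Qb[(N, N)]}                                       # Algorithm A
Pi = {(N, k): np.linalg.solve(-C[N], Qblk(N, k)) for k in range(N)}
for n in range(N - 1, 0, -1):
    C[n] = Qb[(n, n)] + Qblk(n, n + 1) @ Pi[(n + 1, n)]
    for k in range(n):
        Pi[(n, k)] = np.linalg.solve(
            -C[n], Qblk(n, k) + Qblk(n, n + 1) @ Pi[(n + 1, k)])

Ct = {0: Qb[(0, 0)]}                                      # Algorithm B
G = {(0, 1): np.linalg.solve(-Ct[0], Qblk(0, 1))}
for m in range(1, N):
    Ct[m] = Qb[(m, m)] + sum(Qblk(m, k) @ G[(k, m)] for k in range(m))
    G[(m, m + 1)] = np.linalg.solve(-Ct[m], Qblk(m, m + 1))
    for n in range(m):
        G[(n, m + 1)] = G[(n, m)] @ G[(m, m + 1)]

u = {(0, 1): np.linalg.solve(-Ct[0], np.ones(K))}         # Theorem 2
for n in range(1, N):
    u[(n, n + 1)] = np.linalg.solve(
        -Ct[n], np.ones(K) + sum(Qblk(n, k) @ u[(k, n)] for k in range(n)))
for m in range(2, N + 1):
    for n in range(m - 1):
        u[(n, m)] = u[(n, m - 1)] + G[(n, m - 1)] @ u[(m - 1, m)]

un = {N: np.linalg.solve(-C[N], np.ones(K))}              # Corollary 1
for n in range(N - 1, 0, -1):
    un[n] = np.linalg.solve(-C[n], np.ones(K) + Qblk(n, n + 1) @ un[n + 1])

for k in range(N):                                        # Theorem 3
    for n in range(k + 1, N + 1):
        u[(n, k)] = un[n] + sum((Pi[(n, v)] @ u[(v, k)]
                                 for v in range(n) if v != k), np.zeros(K))

# Independent check: delete level k from Q and solve A h = -e.
for k in range(N):
    keep = [s for s in range((N + 1) * K) if not (k * K <= s < (k + 1) * K)]
    h = np.linalg.solve(-Q[np.ix_(keep, keep)], np.ones(len(keep)))
    rest = [lv for lv in range(N + 1) if lv != k]
    for i, lv in enumerate(rest):
        if lv > k:
            assert np.allclose(u[(lv, k)], h[i * K:(i + 1) * K])
```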

It is much easier to obtain moments for the first passage time from level $n$ to any level at or below $k \le n-1$. Let

$$T_{n,k} = \inf\{t > 0 : X(t) \le k \mid X(0) = n\} = \min\{\tau_{n,v} : 0 \le v \le k\}, \quad \text{for } 0 \le k < n \le N.$$

We denote by $\bar G_{n,k}(\xi)$ the matrices with entries equal to the Laplace-Stieltjes transforms of the distribution functions for the first passage times $T_{n,k}$. Also, let

$$\bar U_{n,k} = -\frac{\partial}{\partial \xi}\,\bar G_{n,k}(\xi)\Big|_{\xi=0} \quad \text{and} \quad \bar u_{n,k} = \bar U_{n,k}\,e, \quad \text{for } 0 \le k < n \le N.$$

If we condition on the first level $v$ visited below $n$, we obtain

$$\bar G_{n,k}(\xi) = \sum_{v=0}^{k} \hat G_{n,v}(\xi) + \sum_{v=k+1}^{n-1} \hat G_{n,v}(\xi)\,\bar G_{v,k}(\xi),$$

from which the moments of $T_{n,k}$ may be obtained. Again, it is easier to determine the first moments by using the strong Markov property. The next theorem is stated without proof.


Theorem 4.

The expected values of the first passage time from level n to any level at or below k are given by

ū_{n,n-1} = u_n ,                                                    (4.17)

and

ū_{n,k} = u_n + Σ_{v=k+1}^{n-1} Π_{n,v} ū_{v,k} ,   for 0 ≤ k < n ≤ N,     (4.18)

where u_n is defined in (4.11).   □

The vectors ū_{n,k} may be computed in one loop, beginning with k = N-1, n = N, and progressively decreasing k until k = 0; for each value of k, one computes the vectors ū_{k+1,k} up to ū_{N,k}. In Section 5, we shall refer to Algorithm D (in Appendix B), which is designed to recursively compute the vectors ū_{n,k}, N-1 ≥ k ≥ 0.


5.  GENERAL  REMARKS 


Remark  a. 


We firstly comment on Equation (2.7). The (i,j)th entry of the matrix Q_{n-1,n}(-C_n^{-1}) is equal to

(-Q_{n-1,n-1})(i,i) Σ_{k=1}^{K} [ Q_{n-1,n}(i,k) / (-Q_{n-1,n-1})(i,i) ] (-C_n^{-1})(k,j).      (5.1)



The factor Q_{n-1,n}(i,k)/(-Q_{n-1,n-1})(i,i) is the probability that, upon leaving the state (n-1,i), the process moves to the state (n,k). The (k,j)th entry of (-C_n^{-1}) is the expected time spent by the process in state (n,j), starting from (n,k), before hitting any state at a lower level, by Lemma 1 and the definition of C_n. If that lower level is below level n-1, the process will be forced, by the structure (1.1) of Q, to pass through level n-1 before reaching level n again.

Therefore, it results from (5.1) that [Q_{n-1,n}(-C_n^{-1})](i,j) is equal to (-Q_{n-1,n-1})(i,i) times the expected time spent in the state (n,j) before the next return to level n-1, given that the process starts in the state (n-1,i). This is exactly the interpretation of Neuts' matrix R ([9], Theorem 1.7.2, and [8]), and shows that (2.7) is the counterpart to the matrix-geometric solutions.

Remark  b. 

If the matrix Q is upper block-Hessenberg, instead of having the form (1.1), then our results can be modified in an obvious way. In that case, the first passage times to lower levels may be directly computed, via an algorithm derived from Algorithm B, while the first passage times to higher levels are more involved. Also, the stationary probability vector is computed by determining first the vector p_N (up to a multiplicative constant) and then recursively the vectors p_{N-1}, p_{N-2}, ..., p_0.

This last observation points to the reason why it is difficult to evaluate the stationary probability distribution for infinite upper block-Hessenberg matrices.

We briefly examine now the problem of obtaining a good approximation for the stationary distribution in such a case. Neuts [7] and Lucantoni and Neuts [6] consider infinite upper block-Hessenberg matrices of the following form:




    | B_0   B_1   B_2   B_3   . . . |
    | B_0   A_1   A_2   A_3   . . . |
Q = |  0    A_0   A_1   A_2   . . . |
    |  0     0    A_0   A_1   . . . |
    |  .     .     .     .          |

and stationary probability vector x = (x_0, x_1, x_2, ...). An explicit form is found in [6,7] for x_0 and x_1, as well as complicated but tractable expressions for

m_1 = Σ_{v=0}^{∞} v x_v e ,

and

m_2 = Σ_{v=0}^{∞} v^2 x_v e .

From these, a heuristic method is proposed to truncate the matrix Q at some level N, and it is suggested that x_2, x_3, ..., x_N be determined by a block Gauss-Seidel iterative procedure.

An alternative method would be to determine, as indicated in Section 1, the vector p corresponding to the finite matrix, and to compare the vectors (p_0, p_1) to (x_0, x_1), in order to check whether the level N is properly chosen.

Also, we may truncate the matrix Q on the basis of a first passage time argument, instead of using m_1 and m_2. It is very easy to adapt Algorithm D (in Appendix B), in order to compute successively the expected first passage time from level 0 to any level at or above m, for m = 1,2,... . One may then choose N to be the smallest value of m for which the expected first passage time exceeds a given, large value. When N is determined, the vector p can be readily obtained, since most of the preliminary computations are already performed.
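As a scalar (block order 1) illustration of this truncation rule — our own sketch, not an adaptation of Algorithm D — consider a birth-death chain with rates λ_n (up) and μ_n (down): the expected time T_n to move from level n to n+1 obeys T_0 = 1/λ_0 and T_n = 1/λ_n + (μ_n/λ_n) T_{n-1}, and N is taken as the smallest m whose cumulative expected passage time from level 0 exceeds the threshold.

```python
def truncation_level(birth, death, threshold, max_level=10**6):
    """Smallest m such that the expected first passage time from level 0
    to level m exceeds `threshold` (scalar birth-death illustration;
    birth(n) and death(n) are the rates out of level n)."""
    t_up = 0.0      # expected time from n-1 to n (0 before the loop)
    total = 0.0     # expected time from 0 to n
    for n in range(max_level):
        t_up = 1.0 / birth(n) + (death(n) / birth(n)) * t_up
        total += t_up               # now: expected time from 0 to n+1
        if total > threshold:
            return n + 1
    raise ValueError("threshold not reached below max_level")
```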


24. 


Remark  c. 

We have not considered first passage times to specific states, but have concentrated our attention on first passage times to levels. The reason is that, in a number of applications, such as chains in a random environment, the levels themselves are of importance. Nevertheless, it is not very difficult to extend our analysis to the study of τ_{n,i;m,j} = inf{t > 0 : X(t) = m, Y(t) = j | X(0) = n, Y(0) = i}. With the results of Sections 3 and 4, we can observe the process between successive visits to level m, until, for the first time, the state (m,j) is visited before the process leaves level m.

Remark  d. 


We have only considered continuous-parameter Markov processes. Our analysis may be immediately adapted to discrete-time Markov chains. The major difference is that, in all our equations, the matrices (-C_n^{-1}) and (-C̄_n^{-1}) will be replaced by fundamental matrices for absorbing Markov chains (Kemeny and Snell [5], Section 3.2).


6. NUMERICAL EXAMPLES


We have computed some characteristics for different machine-repairmen-like processes, using the concepts of common-cause failure (described in the introduction) and of randomly varying environments. We have considered four different types of systems, described below in increasing order of their complexity.

We have computed, for each system, the stationary distribution p and the vectors ū_{n,k}. To do this, we have used Algorithm E (in Appendix B), which includes parts of both Algorithms C and D.




We shall not attempt to answer any specific question about such systems. Rather, we shall describe the qualitative and quantitative differences in their behaviour. We shall use here, as the main performance measure, the expected times ū_{n,k} until there remain k or less operating machines, starting from a fully operational system.

Type I. The first model is the classical machine-repairmen process. We consider a system of N machines and R repairmen, with failure and repair rates respectively given by λ′ and μ. The infinitesimal generator is in this case a tridiagonal matrix.

Type II. This is a simple common-cause failure model. In superposition to a machine-repairmen system, with N machines, R repairmen, failure and repair rates given by λ and μ, we consider an independent point process with exponential interevent time distribution, with parameter ν: the process of common-cause failures. Whenever a common-cause failure occurs, each operating machine has a probability q of failing, independently of the other machines. Thus, if there are n operating machines, the number of machines that fail due to common cause has a binomial distribution, with parameters n and q. The infinitesimal generator has the form (1.1), with blocks of order 1.

We define as system II.A the common-cause failure system with N = 10, R = 1, λ = .005, μ = 1, ν = .01, q = .2. If 1/μ is equal to one hour, then 1/λ ≈ 8 days, and 1/ν ≈ 4 days. The system I.A is a machine-repairmen system with N = 10, R = 1, λ′ = .007, μ = 1. That value for λ′ is chosen so that both systems are globally similar in the following sense: when n machines are operating, they both have the same expected number of failures in an interval of length dt: this is n(λ dt + q ν dt) + o(dt) = .007 n dt for system II.A, and nλ′ dt + o(dt) = .007 n dt for system I.A.
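The Type II construction is easy to set up numerically. The sketch below (our own illustration, not the report's Algorithms A-E) builds the scalar-block generator of system II.A — repair at rate μ, individual failures at rate nλ, binomial(n, q) common-cause down-jumps at rate ν — and solves πQ = 0, πe = 1 by Gaussian elimination.

```python
import math

def build_generator(N, lam, mu, nu, q, R=1):
    """Generator for the common-cause machine-repairmen model.
    State n = number of operating machines, 0..N."""
    Q = [[0.0] * (N + 1) for _ in range(N + 1)]
    for n in range(N + 1):
        if n < N:                          # repair completion: n -> n+1
            Q[n][n + 1] = min(R, N - n) * mu
        if n >= 1:                         # individual failure: n -> n-1
            Q[n][n - 1] += n * lam
        for k in range(1, n + 1):          # common-cause shock: n -> n-k
            Q[n][n - k] += nu * math.comb(n, k) * q**k * (1 - q)**(n - k)
        Q[n][n] = -sum(Q[n][j] for j in range(N + 1) if j != n)
    return Q

def stationary(Q):
    """Solve pi Q = 0, pi e = 1 by Gaussian elimination with pivoting."""
    m = len(Q)
    A = [[Q[j][i] for j in range(m)] for i in range(m)]   # A = Q^T
    A[m - 1] = [1.0] * m                   # replace one eq. by pi e = 1
    b = [0.0] * (m - 1) + [1.0]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = sum(A[r][c] * x[c] for c in range(r + 1, m))
        x[r] = (b[r] - s) / A[r][r]
    return x

# System II.A parameters from the text
Q = build_generator(N=10, lam=0.005, mu=1.0, nu=0.01, q=0.2)
pi = stationary(Q)
phi = sum(n * pi[n] * (0.005 + 0.01 * 0.2) for n in range(11))
```

The resulting π places almost all of its mass at n = 9, 10, in line with Table I; φ is the stationary expected number of failures per unit of time for system II.A.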




We give in Table I the stationary probability distribution for these two systems. We observe that both systems have high, nearly equal, probabilities of having most machines in operation. Nevertheless, the two systems are different, since the distribution II.A has a longer tail towards the states with few operating machines.

This difference is strikingly apparent in Table II, where we give the expected times ū_{10,k}, 0 ≤ k ≤ 9. For instance, if we compare ū_{10,5} for both systems, we observe that the average time until 5 machines or more are under repair is smaller by a factor 10^3 for the common-cause failure system.

We conclude that multiple failures have a profound effect on the dynamic behaviour of the system. In support of our conclusion that multiple failures are the main cause for difference, we observe that both systems have identical repair facilities (R = 1, μ = 1), and that the stationary expected numbers of failures per unit of time (henceforth denoted by φ) are nearly equal: they are respectively given by

φ_{I.A} = Σ_{n=0}^{10} n p_n λ′ = .0694815,

and

φ_{II.A} = Σ_{n=0}^{10} n p_n (λ + νq) = .0693535.

Type III. We assume now that the failure and repair parameters change according to an independent environmental process: they are respectively given by λ_j, μ_j, ν_j and q_j when the environment state is j, 1 ≤ j ≤ K. We assume that the environmental process is an irreducible Markov process, with infinitesimal generator T. The system is now described by a pair of indices (n,j), where n denotes the number of operating machines, and j denotes the state of the environment. The infinitesimal generator has the form (1.1), with blocks of order K. A typical example, with N = 4, R = 2 and K = 3, is displayed in Appendix C. Note that the




off-diagonal blocks are diagonal matrices, because the only possible transitions are from (n,j) to (n,j'), with j ≠ j', or from (n,j) to (n',j), with n ≠ n'.

We firstly define a system with two environments. In environment 1 (the off-environment), no common-cause failure may occur. In environment 2 (the on-environment), common-cause failures may occur at the rate ν*. Individual failures occur with the same rate λ in both environments.

In order to make meaningful comparisons with the system II.A, we have chosen ν* so that the expected time between two common-cause failures is equal to 1/ν = (.01)^{-1}, the same as for the system II.A. The time between two successive common-cause failures has a PH-distribution (Neuts [9], Chapter 2), with representation (β, V), where

β = [1  0],

    | -(T_21 + ν*)    T_21  |
V = |                       |
    |     T_12       -T_12  |

and its first moment is given by -β V^{-1} e = (T_21 + T_12)(T_12 ν*)^{-1}, hence

ν* = ν (1 + T_21/T_12).


We have chosen T_12 = .00015 and T_21 = .004, from which we obtain ν* = .27666. If the unit of time is one hour, then 1/T_12 ≈ 9 months, 1/T_21 ≈ 10 days, and 1/ν* ≈ 3.5 hours.
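The first moment of the representation (β, V) and the resulting choice of ν* can be checked numerically; in the sketch below (the function name is ours), -β V^{-1} e is obtained by solving the 2×2 system V x = -e.

```python
def ph_mean(T12, T21, nu_star):
    """Mean -beta V^{-1} e of the PH law with beta = [1, 0] and
    V = [[-(T21 + nu_star), T21], [T12, -T12]]: solve V x = -e by
    Cramer's rule and return the first component of x."""
    a, b = -(T21 + nu_star), T21
    c, d = T12, -T12
    det = a * d - b * c              # = nu_star * T12
    return (-d + b) / det            # x1 for right-hand side (-1, -1)

T12, T21, nu = 0.00015, 0.004, 0.01
nu_star = nu * (1.0 + T21 / T12)     # = .27666..., as chosen in the text
```

The computed mean equals 1/ν, i.e. the expected time between successive common-cause failures is the same as in system II.A.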

We now define the systems III.A and III.B. For system III.A, we have N = 10, R = 1, K = 2, λ_1 = λ_2 = .005, μ_1 = μ_2 = 1, ν_1 = 0, ν_2 = .27666, q_1 = 0, q_2 = .2. For system III.B, there is only one difference: μ_2 = 2, the service rate is doubled in the on-environment.




In Table III, for each system, the first column represents the
stationary  marginal  distribution  of  the  number  of  operating  machines;  the 
last  row  represents  the  stationary  marginal  probability  of  being  in  each 
environment;  in  the  remaining  two  columns  are  given  the  stationary  conditional 
distributions  of  the  number  of  operating  machines,  given  that  the  system  is  in 
the  corresponding  environment. 

The comparison of the column III.A.0 in Table III and the column II.A in Table I shows that the two distributions are very similar, but the marginal distribution for the system in a random environment has a longer tail towards the states with few operating machines. We also observe that the marginal distribution is not a good descriptor for the system in a random environment: the two conditional distributions are quite different, which may have important implications, even if there is a small probability of being in the on-environment.

In Table IV are given, for the same two systems, the values of ū_{10,k}(1) and ū_{10,k}(2), for 0 ≤ k ≤ 9. The last row indicates the conditional probability of being in each environment, given that there are 10 machines in operation. The columns III.A.1 and III.A.2 are to be compared to the column II.A in Table II.

Several differences may be observed. Firstly, the time ū_{10,0} to complete failure is shorter by a factor 10^3 for the system in a random environment. Secondly, the values of ū_{10,1}(1) to ū_{10,7}(1) are close to each other: they differ at most by a factor 2. This is due to the influence of the on-environment: it takes on the average (T_12)^{-1} ≈ 6.6·10^3 units of time to switch from environment 1 to environment 2, at which time the common-cause failure process is turned on and brings the system more rapidly towards the states with few operating machines. As ū_{10,6}(1) is so close to (T_12)^{-1}, we have computed ū_{10,6}(1) + ū_{6,k}(2), for 0 ≤ k ≤ 5, and found that it differs from the corresponding ū_{10,k}(1) by at most 70 units of time.




We conclude that the systems II.A and III.A have markedly different dynamic behaviours, even when they have identical repair facilities and nearly equal stationary failure rates:

φ_{III.A} = Σ_{n=0}^{10} n [p_{n,1} λ_1 + p_{n,2} (λ_2 + ν_2 q_2)] = .0666435

is close to φ_{II.A}.

When examining the data for system III.B, we observe that the same comparisons can be made between III.B and II.A as between III.A and II.A. The stationary average service rate for system III.B is Σ_{n=0}^{10} (p_{n,1} μ_1 + p_{n,2} μ_2) = 1.036145, and φ_{III.B} = .0680697.

The  remaining  examples  will   be  derived  from  the  system  III.B. 

We shall next consider systems with three environments. Environment 1 is still the off-environment, where no common-cause failure may occur. Environments 2 and 3 together constitute the on-environment, where common-cause failures may occur at a constant rate ν*, but with different failure probabilities, which we denote respectively by f_2 and f_3, in order to distinguish them from the probability q_2 of the preceding model. The infinitesimal generator T for environmental changes is chosen as follows:


    |    -T_12           T_12              0          |
T = |     T_21      -(T_21 + T_23)        T_23        |          (6.1)
    |     T_21           T_32        -(T_21 + T_32)   |


thus the on-environment has exponential duration with parameter T_21, just as for the systems III.A and III.B. We shall set T_32 > T_23 and f_2 < f_3, so that the on-periods are comprised of intervals of relatively low failure probability f_2, interrupted by shorter intervals of higher failure probability f_3.




In order to make meaningful comparisons with the systems II.A and III.B, we have set, as before, ν* = ν (1 + T_21/T_12), so that the expected time between successive common-cause failures is the same for all systems. We also choose f_2 and f_3 such that

q_2 = a f_2 + (1-a) f_3 ,

where a = (T_21 + T_32)(T_21 + T_32 + T_23)^{-1}

and is the conditional probability of being in environment 2, given that the system is either in environment 2 or 3. This choice of f_2 and f_3 ensures that, on the average, the probability that any machine fails when a common-cause failure occurs is the same for all systems.

The system III.C is defined as follows: N = 10, R = 1, K = 3, λ_1 = λ_2 = λ_3 = .005, μ_1 = 1, μ_2 = μ_3 = 2, ν_1 = 0, ν_2 = ν_3 = .27666, q_1 = 0, f_2 = .1504, f_3 = .36, T_12 = .00015, T_21 = .004, T_23 = .04, T_32 = .125. For system III.D, the differences are as follows: f_2 = .1801, f_3 = .7, T_32 = 1. Thus, we have for system III.D very short bursts of high failure probability.
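The matching condition q_2 = a f_2 + (1 - a) f_3 is easy to verify for both parameter sets; a small sketch (the function name is ours):

```python
def env2_fraction(T21, T23, T32):
    """a = (T21 + T32) / (T21 + T32 + T23): conditional probability of
    environment 2, given the on-environment {2, 3}."""
    return (T21 + T32) / (T21 + T32 + T23)

# Systems III.C and III.D, parameters from the text
a_C = env2_fraction(T21=0.004, T23=0.04, T32=0.125)
a_D = env2_fraction(T21=0.004, T23=0.04, T32=1.0)
q2_C = a_C * 0.1504 + (1.0 - a_C) * 0.36   # should recover q2 = .2
q2_D = a_D * 0.1801 + (1.0 - a_D) * 0.7
```

Both combinations recover q_2 = .2 (up to the rounding of f_2 and f_3), so the average per-machine failure probability at a common-cause epoch is the same as in systems II.A and III.B.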

The stationary probability distributions are respectively given in Tables V and VI: in column 0, the marginal distribution of the number of operating machines; in columns 1 to 3, the conditional distributions, given the environment state. The last row of each table gives the marginal distribution of the environment state.

We compare Tables V and VI with the columns III.B of Table III. The same observations that were made earlier are valid here: the marginal distribution has a greater probability mass at the states with few operating machines. The probabilities corresponding to 6 to 10 operating machines, given the off-environment (columns III.B.1, III.C.1, III.D.1), are nearly equal in all three systems. The conditional distributions given the on-environment are noticeably different for all systems; in particular, the distribution in column III.D.3 is different from all the others, in that it is more or less uniform for n between 2 and 8.




In Table VII are given, for the same two systems, the values of ū_{10,k}(j), for 1 ≤ j ≤ 3, 0 ≤ k ≤ 9. The last row indicates the conditional probability of being in each environment, given that there are 10 machines in operation. We observe that short bursts of higher failure probability are more detrimental to the performance of the system, since the values of ū_{10,k}(j) for the system III.C are nearly consistently greater than those for the system III.D, the exception being ū_{10,k}(3) for high values of k. That exception is probably due to the very short duration of the environment 3.

Finally, we have computed that φ_{III.C} = .0679624, and φ_{III.D} = .0680954.

Our general conclusion is as follows: if one considers the systems III.A (or III.B), II.A and I.A as increasingly simplified approximations of the system III.C (or the system III.D), we obtain increasingly optimistic measures of their performance, based on the expected first passage times ū_{10,k}, although the systems have nearly equal stationary expected numbers of failures per unit of time.

Type IV. For our last numerical examples, we shall assume that the server does not change the service rate instantaneously when there is a change of environment. More precisely, we assume that the environment evolves as in the systems III.C and III.D, with environmental changes governed by a matrix T of the form (6.1). The "normal" service rate in the off-environment (environment 1) is equal to μ_1. When the system enters the on-environment (environment states 2 and 3), the server notices it when 2 or more machines fail simultaneously, or after a random interval of time of exponentially distributed duration, with parameter ξ_1. The server then changes the service rate to μ_2. The service rate remains equal to μ_2 until an interval of time has elapsed without any failure; that interval of time is exponentially distributed, with parameter ξ_2.

In this case, we need to describe the state of the system by three indices (n,j,k), where n denotes the number of machines that are operating, j denotes the environment state, and k = 1 or 2 indicates the service rate that is in effect. For each given value of n, we order the states as follows: (n,1,1), (n,2,1), (n,3,1), (n,1,2), (n,2,2), (n,3,2). The possible transitions from (n,j,k) to (n,j',k') are indicated below, together with the corresponding transition rates.


(n,2,l) 


(n.1.1)     *- 


'2,1 


2,3 


(n,3,l) 


(n,l,2) 


The infinitesimal generator Q has the form (1.1), with blocks of order 6 given by

Q_{n,n+1} = diag(μ_1, μ_1, μ_1, μ_2, μ_2, μ_2) ,     0 ≤ n ≤ N-1,

          |  *     T_12    .      .     .      .    |
          | T_21    *     T_23    .    ξ_1     .    |
Q_{n,n} = | T_21   T_32    *      .     .     ξ_1   |  ,     0 ≤ n ≤ N,
          | ξ_2     .      .      *    T_12    .    |
          |  .     ξ_2     .     T_21   *     T_23  |
          |  .      .     ξ_2    T_21  T_32    *    |


Q_{n,n-1} = diag(nλ_1, q_{n,1}^{(2)}, q_{n,1}^{(3)}, nλ_1, q_{n,1}^{(2)}, q_{n,1}^{(3)}) ,     1 ≤ n ≤ N,


            |  .   .   .   .       .           .       |
            |  .   .   .   .   q_{n,k}^{(2)}   .       |
Q_{n,n-k} = |  .   .   .   .       .       q_{n,k}^{(3)} |  ,   2 ≤ n ≤ N,  2 ≤ k ≤ n,
            |  .   .   .   .       .           .       |
            |  .   .   .   .   q_{n,k}^{(2)}   .       |
            |  .   .   .   .       .       q_{n,k}^{(3)} |

where q_{n,1}^{(j)} = nλ_j + ν_j n f_j (1-f_j)^{n-1}, and q_{n,k}^{(j)} = ν_j C(n,k) f_j^k (1-f_j)^{n-k} for 2 ≤ k ≤ n, with C(n,k) the binomial coefficient. The entries represented by a "." are equal to zero; the entries represented by a "*" are such that the row sums of Q are equal to zero.

We define the following type IV system: N = 10, R = 1, K = 3, λ_1 = λ_2 = λ_3 = .005, μ_1 = 1, μ_2 = 2, ν_1 = 0, ν_2 = ν_3 = ν* = .27666, q_1 = 0, f_2 = .1801, f_3 = .7, T_12 = .00015, T_21 = .004, T_23 = .04, T_32 = 1, ξ_1 = ξ_2 = ν*/2 = .13833.
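The down-jump rates of this model have a simple binomial form — each of the n operating machines fails independently with probability f_j at a common-cause epoch, and an individual failure contributes only to the one-step jump — which can be checked by summing over the jump size k; a sketch (our own naming):

```python
import math

def down_rate(n, k, lam, nu, f):
    """Rate of a jump from n to n - k operating machines in an
    environment with parameters (lam, nu, f): individual failures
    contribute n*lam for k = 1; a common-cause shock fells k of the
    n machines with binomial(n, f) probability."""
    rate = nu * math.comb(n, k) * f**k * (1.0 - f)**(n - k)
    if k == 1:
        rate += n * lam
    return rate

# Sanity check: summing over k = 1..n recovers the total rate of
# "at least one machine fails", n*lam + nu*(1 - (1-f)**n).
n, lam, nu, f = 10, 0.005, 0.27666, 0.7
total = sum(down_rate(n, k, lam, nu, f) for k in range(1, n + 1))
```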

Column 0 of Table VIII gives the marginal distribution of the number of operating machines; in the remaining columns are given the conditional distributions, given (j,k), where j is the environment state and k the index of the service rate. The last row gives the marginal stationary probabilities of the pairs (j,k). In Table IX are given the values of ū_{10,n}(j,k), for 1 ≤ j ≤ 3, k = 1, 2, 0 ≤ n ≤ 9. The last row indicates the conditional probabilities of being in the state (10,j,k), given that there are 10 operating machines.

We observe from the last row of Table VIII that the states in {(n,1,2), 0 ≤ n ≤ 10} are essentially states of transition towards the set {(n,1,1), 0 ≤ n ≤ 10}. For both j = 2 and 3, the sets {(n,j,1), 0 ≤ n ≤ 10} and {(n,j,2), 0 ≤ n ≤ 10} have similar probabilities, but different distributions: the states in {(n,j,1), 0 ≤ n ≤ 10} are more concentrated than those in {(n,j,2), 0 ≤ n ≤ 10} towards large values of n, in other words, towards few machines under repair. This is seemingly in contradiction with the inequality μ_1 < μ_2; it is due to the fact that transitions from (j,1) to (j,2) occur whenever there are multiple failures.

The  expected  first  passage  times  are  not  much  different  from  those  of 
the  system  III.D  (see  Table  VII). 




Finally, we mention that the program implementing Algorithm E has been written in FORTRAN 77 and run on a CDC 170/750. Care has been taken not to waste memory space, by making sure that no space is reserved for matrices Π_{i,j} with j > i, Q_{i,j} with j > i+1, or vectors u_{i,j} with j > i. In other respects, the implementation was straightforward. For systems with N = 10, the program uses on the average .2 seconds of central processor time for K = 3, and .6 seconds for K = 6.

7.  ACKNOWLEDGEMENTS 


The work of Donald P. Gaver was partly supported by a contract with the Office of Naval Research (NR 609-006). The work of P. A. Jacobs was partly supported by National Science Foundation Grant Number ENG 79-10825.




REFERENCES 

[1]       J.L.   Carroll,  A.   Van  de  Liefvoort  and  L.   Lipsky. 

Another  look  at  the  M/G/l  and  GI/M/1  queues.     Operations  Research, 
to  appear. 

[2]       D.P.  Gaver,  P. A.  Jacobs  and  G.  Latouche. 

Finite  birth-and-death  models  in  randomly  changing  environments. 
Tech. Report N°121, Laboratoire d'Informatique Théorique, Université Libre de Bruxelles, Brussels, 1981.

[3]       B.  Hajek. 

Birth-and-death processes on the integers with phases and general boundaries. Journal of Applied Probability, to appear.

[4]       J.  Keilson,  U.  Sumita  and  M.  Zachman. 

Row-continuous  finite  Markov  chains,  structure  and  algorithms.  Technical 

Report  N°8115.  Graduate  School  of  Management,  University  of  Rochester, 
(1981). 

[5]       J.G.   Kemeny  and  J.L.  Snell. 

Finite  Markov  chains.     Van  Nostrand,  Princeton,   (1960). 

[6]       D.M.  Lucantoni   and  M.F.   Neuts. 

Numerical  methods  for  a  class  of  Markov  chains  arising  in  queueing  theory. 
Tech.  Report  78/10,  Applied  Mathematics  Institute,  University  of  Delaware, 
Newark,   1978. 

[7]       M.F.  Neuts. 

Queues solvable without Rouché's theorem. Operations Research, 27,
(1979),   767-781. 

[8]       M.F.  Neuts. 

The  probabilistic  significance  of  the  rate  matrix  in  matrix-geometric 
invariant  vectors.     Journal  of  Applied  Probability,  17,  (1980),  291-296. 

[9]       M.F.   Neuts. 

Matrix-Geometric  Solutions  in  Stochastic  Models  -  An  Algorithmic  Approach. 
The  Johns  Hopkins  University  Press,  Baltimore,  1981. 

[10]       M.F.   Neuts. 

Explicit  steady-state  solutions  to  some  elementary  queueing  models. 
Operations  Research,  to  appear. 




[11]   M.F.  Neuts  and  K.S.  Meier. 

On  the  use  of  phase  type  distributions  in  reliability  modelling  of 
systems  with  a  small  number  of  components.  OR  Spektrum,  2,  (1981), 
227-234. 

[12]   W.C.  Torrez. 

Calculating  extinction  probabilities  for  the  birth-and-death  chain  in 
a  random  environment.     Journal   of  Applied  Probability,   16,   (1979), 
709-720. 

[13]       D.  Wikarski. 

An  algorithm  for  the  solution  of  linear  equation  systems  with  block 
structure.     Elektronische  Informationsverarbeitung  und  Kybernetik,   16, 
(1980),  615-620. 




 n        I.A        II.A

 0
 1
 2                 .000002
 3                 .000017
 4                 .000100
 5                 .000441
 6     .000011     .001472
 7     .000230     .003731
 8     .004104     .008752
 9     .065136     .054839
10     .930519     .930647

Table I

Stationary distribution for systems I.A and II.A.
Missing numbers are less than 10^-6.


 k         I.A          II.A

 0     923·10^12     106·10^6
 1       7·10^12       5·10^6
 2     105·10^9      463·10^3
 3       2·10^9       60·10^3
 4      63·10^6       10·10^3
 5       2·10^6        2·10^3
 6      93·10^3        710.9
 7       5·10^3        279.5
 8       256.9         117.6
 9        14.3          17.0

Table II

Expected time, starting from 10, until there remain
k or less operating machines, for systems I.A and II.A.


 n     III.A.0  III.A.1  III.A.2   III.B.0  III.B.1  III.B.2

 0     .000005           .000141                     .000010
 1     .000037           .001015   .000003           .000094
 2     .000141  .000001  .003871   .000018           .000487
 3     .000382  .000002  .010500   .000065           .001815
 4     .000827  .000006  .022717   .000189  .000001  .005199
 5     .001524  .000013  .041815   .000466  .000003  .000466
 6     .002480  .000027  .067911   .001002  .000010  .027451
 7     .003667  .000125  .098122   .001923  .000102  .050497
 8     .006577  .002196  .123402   .004864  .002165  .076818
 9     .051019  .047572  .142937   .049665  .047543  .106236
10     .933342  .950059  .487564   .941804  .950174  .718606

Env.            .963855  .036145            .963855  .036145

Table III

Stationary distributions for systems III.A and III.B.
Missing numbers are less than 10^-6.


 k      III.A.1    III.A.2    III.B.1    III.B.2

 0     218·10^3   211·10^3     2·10^6     1·10^6
 1      39·10^3    32·10^3   166·10^3   159·10^3
 2      16·10^3     9·10^3    39·10^3    32·10^3
 3      10·10^3     4·10^3    16·10^3     9·10^3
 4       8·10^3     2·10^3    10·10^3     3·10^3
 5       8·10^3     841.7      8·10^3     1·10^3
 6       7·10^3     443.9      7·10^3     548.8
 7       4·10^3     163.7      4·10^3     177.4
 8       454.6       15.1      454.7       15.4
 9        20.0        3.6       20.0        3.6

       .981118    .018882    .972421    .027579

Table IV

Expected time, starting from (10,j), until there
remain k or less operating machines, for systems
III.A and III.B.


 n     III.C.0  III.C.1  III.C.2  III.C.3

 0     .000004           .000014  .000468
 1     .000023           .000087  .002443
 2     .000071           .000348  .007189
 3     .000165  .000001  .001080  .015699
 4     .000328  .000003  .002939  .028580
 5     .000594  .000005  .007332  .045250
 6     .001016  .000013  .017046  .062420
 7     .001731  .000102  .036057  .074554
 8     .004557  .002165  .065256  .078259
 9     .049505  .047542  .106656  .086266
10     .942004  .950169  .763186  .598871

Env.            .963855  .027590  .008555

Table V

Stationary distributions for system III.C.
Missing numbers are less than 10^-6.


 n     III.D.0  III.D.1  III.D.2  III.D.3

 0     .000018           .000193  .008376
 1     .000054           .000716  .020724
 2     .000105  .000001  .001700  .033000
 3     .000175  .000002  .003398  .040077
 4     .000284  .000003  .006519  .039427
 5     .000495  .000005  .012724  .034300
 6     .000921  .000012  .024937  .030762
 7     .001739  .000101  .045872  .033505
 8     .004677  .002165  .072797  .043182
 9     .049594  .047542  .105755  .067661
10     .941938  .950169  .725389  .648986

Env.            .963855  .034760  .001385

Table VI

Stationary distributions for system III.D.
Missing numbers are less than 10^-6.


 k     III.C.1   III.C.2   III.C.3   III.D.1   III.D.2   III.D.3

 0    131·10^3  124·10^3  124·10^3   37·10^3   30·10^3   30·10^3
 1     32·10^3   25·10^3   25·10^3   17·10^3   11·10^3   10·10^3
 2     16·10^3    9·10^3    8·10^3   12·10^3    6·10^3    5·10^3
 3     11·10^3    4·10^3    4·10^3   10·10^3    3·10^3    3·10^3
 4      9·10^3    2·10^3    2·10^3    9·10^3    2·10^3    2·10^3
 5      8·10^3    1·10^3    831.7     8·10^3    1·10^3    956.4
 6      7·10^3    665.5     399.0     7·10^3    592.8     486.8
 7      4·10^3    230.2     138.7     4·10^3    197.2     168.6
 8      454.9      18.8      13.4     454.7      16.6      15.2
 9       20.0       3.9       3.5      20.0       3.7       3.6

      .972209   .022352   .005439   .972277   .026769   .000954

Table VII

Expected time, starting from (10,j), until there remain
k or less operating machines, for systems III.C and III.D.


 n        0       (1,1)     (2,1)     (3,1)     (1,2)     (2,2)     (3,2)

 0     .000024      --     .000252   .004878   .000054   .000320   .013208
 1     .000071      --     .001010   .015020   .000200   .001152   .029737
 2     .000142      --     .002395   .027803   .000473   .002744   .043229
 3     .000240   .000001   .004412   .036023   .000929   .005660   .049054
 4     .000393   .000002   .007597   .035707   .001699   .011110   .046435
 5     .000684   .000003   .014368   .031340   .003036   .020984   .040339
 6     .001246   .000009   .030240   .031798   .005317   .037149   .037239
 7     .002253   .000096   .061311   .044074   .008901   .059562   .040634
 8     .005373   .002157   .104263   .068355   .014212   .082957   .049765
 9     .050324   .047533   .145735   .107482   .041137   .111313   .071410
10     .939250   .950199   .628417   .597519   .924043   .667052   .578950

Env.      --     .963268   .015221   .000580   .000588   .019539   .000804

Table VIII
Stationary distribution for a Type IV system.
Missing numbers are less than 10^-6.


 n      (1,1)     (2,1)     (3,1)     (1,2)     (2,2)     (3,2)

 0    34 10^3   28 10^3   27 10^3   34 10^3   28 10^3   27 10^3
 1    17 10^3   10 10^3    9 10^3   17 10^3   10 10^3    9 10^3
 2    12 10^3    5 10^3    5 10^3   12 10^3    5 10^3    5 10^3
 3    10 10^3    3 10^3    3 10^3   10 10^3    3 10^3    3 10^3
 4     9 10^3    2 10^3    2 10^3    9 10^3    2 10^3    2 10^3
 5     8 10^3    1 10^3     839.8    8 10^3    1 10^3     851.9
 6     7 10^3     534.6     441.8    7 10^3     547.2     450.0
 7     4 10^3     187.0     160.8    4 10^3     190.7     163.3
 8      454.7      16.3      15.0     457.7      16.5      15.1
 9       20.0       3.7       3.6      20.0       3.7       3.6

      .974497   .010184   .000369   .000578   .013876   .000496

Table IX
Expected time, starting from (10,i,k), until there remain
n or fewer operating machines, for a Type IV system.


APPENDIX A

First order derivatives of the matrices D_n(ξ)

The equations (3.13) and (3.14) may be written as

    D_0(ξ)(ξI − Q_{0,0}) = I,

    D_n(ξ)(ξI − Q_{n,n} − Σ_{k=0}^{n−1} Q_{n,k} G_{k,n}(ξ)) = I,    1 ≤ n ≤ N−1.

By differentiating both sides of these equations with respect to ξ, we obtain

    D_0′(ξ)(ξI − Q_{0,0}) + D_0(ξ) = 0,

    D_n′(ξ)(ξI − Q_{n,n} − Σ_{k=0}^{n−1} Q_{n,k} G_{k,n}(ξ)) + D_n(ξ)(I − Σ_{k=0}^{n−1} Q_{n,k} G_{k,n}′(ξ)) = 0,    1 ≤ n ≤ N−1,

or

    D_0′(ξ) = −(D_0(ξ))²,                                                  (A.1)

    D_n′(ξ) = −D_n(ξ)(I − Σ_{k=0}^{n−1} Q_{n,k} G_{k,n}′(ξ)) D_n(ξ),    for 1 ≤ n ≤ N−1.    (A.2)

First order derivatives of the matrices D̄_n(ξ)

The equations (4.5) and (4.6) may be written as

    D̄_N(ξ)(ξI − Q_{N,N}) = I,

    D̄_n(ξ)(ξI − Q_{n,n} − Q_{n,n+1} Ḡ_{n+1,n}(ξ)) = I,    1 ≤ n ≤ N−1.

By differentiating both sides of these equations with respect to ξ, we obtain

    D̄_N′(ξ)(ξI − Q_{N,N}) + D̄_N(ξ) = 0,

or

    D̄_N′(ξ) = −(D̄_N(ξ))²,                                                (A.3)

and

    D̄_n′(ξ)(ξI − Q_{n,n} − Q_{n,n+1} Ḡ_{n+1,n}(ξ)) + D̄_n(ξ)(I − Q_{n,n+1} Ḡ_{n+1,n}′(ξ)) = 0,

or

    D̄_n′(ξ) = −D̄_n(ξ)(I − Q_{n,n+1} Ḡ_{n+1,n}′(ξ)) D̄_n(ξ),    for 1 ≤ n ≤ N−1.    (A.4)


APPENDIX B

Algorithm C.  Evaluation of the stationary probability vector p = (p_0,...,p_N), and the expected first passage times u_{n,n'}, for all pairs of levels n ≠ n'.

C1 (Initial computations: the matrices Π_{n,k}, C_n and the vectors u_n)
    C_N := Q_{N,N};
    determine u_N from (4.12);
    for k from 0 to N−1 do determine Π_{N,k} from (2.4);
    for n from N−1 to 1 step −1 do
        begin determine C_n from (2.3);
              determine u_n from (4.13);
              for k from 0 to n−1 do determine Π_{n,k} from (2.5)
        end;
    determine C_0 from (2.3);

C2 (Determination of the vectors π_n and u_{n,m})
    π_0 := solution of the system π_0 C_0 = 0, π_0 e = 1;
    C̄_0 := Q_{0,0};
    u_{1,0} := u_1;
    for n from 1 to N do determine u_{n,0} from (4.16);
    determine u_{0,1} from (3.17);
    determine r_{0,1} from (3.4);
    for m from 1 to N−1 do
        begin determine π_m from (2.7);
              re-normalize the vectors π_0 to π_m so that Σ_{i=0}^{m} π_i e = 1;
              for n from m+1 to N do determine u_{n,m} from (4.16);
              determine C̄_m from (3.3);
              determine u_{m,m+1} from (3.18);
              determine r_{m,m+1} from (3.4);
              for n from 0 to m−1 do
                  begin determine u_{n,m+1} from (3.19);
                        determine r_{n,m+1} from (3.5)
                  end
        end;
    determine π_N from (2.7);
    renormalize the vectors π_0 to π_N so that Σ_{n=0}^{N} π_n e = 1.

At the end of the algorithm, the vectors π_n are equal to the vectors p_n, 0 ≤ n ≤ N.  We systematically renormalize the vectors π_n in order to avoid overflow problems.
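The renormalization device can be illustrated concretely: rescaling every vector by the common constant Σ_i π_i e keeps intermediate quantities of order one without changing the final, normalized answer. A minimal sketch follows, in which the positive random matrices R_m are stand-ins for the recursion (2.7), not the report's model:

```python
import numpy as np

# Illustration of the running renormalization in Algorithm C.  The
# positive random matrices R_m are stand-ins for recursion (2.7):
# pi_m = pi_{m-1} R_m, where the whole family of vectors is defined
# only up to one common multiplicative constant.
rng = np.random.default_rng(0)
R = [rng.uniform(0.5, 1.5, size=(2, 2)) for _ in range(50)]

pi = [np.array([0.6, 0.4])]
for Rm in R:
    pi.append(pi[-1] @ Rm)
    total = sum(v.sum() for v in pi)    # current value of sum_i pi_i e
    pi = [v / total for v in pi]        # rescale all vectors together
# After the last rescaling, sum_i pi_i e = 1, and no intermediate
# vector ever grew large enough to overflow.
```

Without the rescaling, fifty such products would grow roughly geometrically; with it, every stored entry stays bounded by one.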


Algorithm D.  Recursive evaluation of the vectors v_k, defined by v_k = u_{k+1,k}, 0 ≤ k ≤ N−1.

D1 (Initial computations)
    C_N := Q_{N,N};
    determine u_N from (4.12);
    v_{N−1} := u_N;    (equation (4.17))

D2 (Main loop; at the k-th step, v_{k−1} is determined)
    for k from N−1 to 1 step −1 do
        begin for n from N to k+1 step −1 do determine Π_{n,k} from (2.4) and (2.5);
              determine C_k from (2.3);
              determine u_k from (4.13);
              v_{k−1} := u_k;    (equation (4.17))
              for n from k+1 to N do determine u_{n,k−1} from (4.18)
        end.
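In the scalar special case of one phase per level (a birth-death chain, with hypothetical upward rate lam and downward rate mu; these parameters are not the report's), the backward sweep of Algorithm D reduces to a familiar one-line recursion, computed from level N downward:

```python
# Scalar special case of Algorithm D's backward sweep: a birth-death
# chain (one phase per level) with hypothetical rates lam (up) and
# mu (down).  u[n] is the expected first-passage time from level n
# down to level n-1; conditioning on the first jump gives
#   u[n] = (1 + lam * u[n+1]) / mu   (no upward jump at n = N).
N = 10
lam, mu = 2.0, 3.0

u = [0.0] * (N + 2)
for n in range(N, 0, -1):
    up = lam if n < N else 0.0       # level N cannot move up
    u[n] = (1.0 + up * u[n + 1]) / mu
print(u[1])   # approaches 1/(mu - lam) = 1 as N grows
```

The matrix version replaces the scalars by the matrices C_k and the vectors u_k, but the sweep from N down to 1 has exactly this shape.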


Algorithm E.  Evaluation of the stationary probability vector p = (p_0,...,p_N), and the expected first passage times u_{n,k}, for 0 ≤ k < n ≤ N.  This algorithm combines Algorithm D and part of Algorithm C.

E1 (Initial computations)
    C_N := Q_{N,N};
    determine u_N from (4.12);
    for k from 0 to N−1 do determine Π_{N,k} from (2.4);

E2 (Determination of the vectors u_{n,k}, and preparation for the evaluation of the vector p)
    u_{N,N−1} := u_N;    (equation (4.17))
    for n from N−1 to 1 step −1 do
        begin determine C_n from (2.3);
              determine u_n from (4.13);
              u_{n,n−1} := u_n;    (equation (4.17))
              for m from n+1 to N do determine u_{m,n−1} from (4.18);
              for k from 0 to n−1 do determine Π_{n,k} from (2.5)
        end;


E3 (Determination of the vector p)
    π_0 := solution of the system π_0 C_0 = 0, π_0 e = 1;
    for m from 1 to N−1 do
        begin determine π_m from (2.7);
              renormalize the vectors π_0 to π_m so that Σ_{i=0}^{m} π_i e = 1
        end;
    determine π_N from (2.7);
    renormalize the vectors π_0 to π_N so that Σ_{i=0}^{N} π_i e = 1.

(The vectors π_n are equal to the vectors p_n, 0 ≤ n ≤ N.)
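The step "π_0 := solution of the system π_0 C_0 = 0, π_0 e = 1" can be carried out by replacing one column of C_0 with the vector e, which imposes the normalization directly; since the rows of a generator sum to zero, the balance equation that is dropped is redundant. A sketch, with a hypothetical 3x3 generator standing in for C_0:

```python
import numpy as np

# Solving pi0 C0 = 0 with pi0 e = 1: overwrite one column of C0 by e
# to impose the normalization; the dropped balance equation is
# redundant because the rows of a generator sum to zero.  C0 is a
# hypothetical 3x3 generator, not one of the report's matrices.
C0 = np.array([[-2.0, 1.0, 1.0],
               [1.0, -3.0, 2.0],
               [2.0, 2.0, -4.0]])
A = C0.copy()
A[:, 0] = 1.0                      # first equation becomes pi0 e = 1
b = np.zeros(3)
b[0] = 1.0
pi0 = np.linalg.solve(A.T, b)      # pi0 A = b  <=>  A^T pi0^T = b^T
print(pi0)                         # strictly positive, sums to 1
```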


APPENDIX C

Infinitesimal generator for Type III.

We assume N = 4, R = 2 and K = 3.  Entries represented by a "." are equal to zero; entries represented by a "*" are equal to minus the sum of the off-diagonal entries on the same row.

    Q = [block matrix with entries τ_{i,j}, μ_k, 2μ_k and q^(k)_{n,i}; the display is not legible in the scan]

where

    q^(k)_{n,i} = n λ_k + ν_k C(n,i) q_k^i (1 − q_k)^{n−i},    1 ≤ k ≤ 3,  1 ≤ n ≤ 4,  1 ≤ i ≤ n.
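A direct transcription of this entry as a function (the parameter values below are hypothetical, chosen only for illustration):

```python
from math import comb

# The entry q^(k)_{n,i} as transcribed above; lam, nu and q are
# hypothetical parameter values used only for illustration.
def q_rate(n, i, k, lam, nu, q):
    # q^(k)_{n,i} = n*lam_k + nu_k * C(n,i) * q_k^i * (1 - q_k)^(n-i)
    return n * lam[k] + nu[k] * comb(n, i) * q[k] ** i * (1 - q[k]) ** (n - i)

lam = {1: 0.1, 2: 0.2, 3: 0.3}
nu = {1: 1.0, 2: 1.0, 3: 1.0}
q = {1: 0.5, 2: 0.5, 3: 0.5}
print(q_rate(4, 2, 1, lam, nu, q))   # n*lam = 0.4 plus 6 * 0.5^2 * 0.5^2
```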


DISTRIBUTION LIST

                                                    NO. OF COPIES

Library, Code 0142                                        4
Naval Postgraduate School
Monterey, CA 93940

Dean of Research                                          1
Code 012A
Naval Postgraduate School
Monterey, CA 93940

Library, Code 55                                          2
Naval Postgraduate School
Monterey, CA 93940

Professor D. P. Gaver                                   183
Code 55Gv
Naval Postgraduate School
Monterey, CA 93940

Defense Technical Information Center                      2
ATTN: DTIC-DDR
Cameron Station
Alexandria, Virginia 22314
