OPTIMAL RULES AND ALGORITHMS FOR SOME PARALLEL PROCESSOR SCHEDULING PROBLEMS

By LOUIS ANTHONY MARTIN-VEGA

A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF

THE UNIVERSITY OF FLORIDA

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE

DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA 1975

For my dear Maggi and our two blessings, Louisito and Mónica

The young man asked the wise man: "Of the virtues of life, is it intelligence that rules?"

The wise man replied to the young man: "Do not deceive yourself, my son, for true power lies in persistence and trust in God."

ACKNOWLEDGEMENTS

I wish to acknowledge the numerous and significant contributions to this study by Dr. H. Donald Ratliff, who served as Chairman of my Doctoral Committee and provided the original motivation for this research. His guidance and insight have contributed considerably to both the form and contents of this dissertation and his dedication to the development of this work is sincerely appreciated.

I thank the other members of my committee, Dr. Richard L. Francis, Dr. Thom J. Hodgson and Dr. Ira Horowitz for their constructive comments and suggestions leading to the completion of this work. I am also grateful to Dr. M.E. Thomas and Dr. Eginhard Muth for their assistance and support throughout my stay at the University of Florida.

I also acknowledge the financial assistance received from the Economic Development Administration of Puerto Rico throughout the course of my graduate studies and the support of Messrs. Jorge Vega, Joaquin Flores and Luis V. Martin in the acquisition of this aid.

Thanks are also extended to Mrs. Edna Larrick, Miss Mary Van Meer and Mrs. Pat Whitehurst for the typing of the manuscript and to Mr. Jeff Whitehurst, for his artful preparation of all figures and illustrations.

Finally, I bow my head in loving respect to my dearest wife Maggi for her patience, understanding and ever-present gentle encouragement. To my fine son, Louisito, I extend my token of pride and

appreciation which is sure to grow with his years. And to my pequeña Mónica goes my delight at her unique insistence in sharing in the final stages of this work. To this my family, as well as to my parents Louis and Eva Martin, I extend my gratefulness for a moment which is as much theirs as it is mine.

This dissertation was supported in part under grants from the Office of Naval Research and the Army Research Office, Durham, N.C.

TABLE OF CONTENTS

Page

ACKNOWLEDGEMENTS iii

LIST OF FIGURES viii

ABSTRACT ix

CHAPTER

1 INTRODUCTION AND OVERVIEW 1

1.1 In the Beginning 1

1.2 The Parallel Processor Scheduling Problem 2

1.3 Representation of a Schedule 4

1.4 Scope of the Research 5

1.5 Overview 5

2 LITERATURE REVIEW 9

2.1 Classification Scheme 10

2.2 Optimality Criteria 10

2.3 Organization of the Review 13

2.4 Path 9 13

2.5 Path 8 14

2.5.1 Min-Sum Criteria 15

2.5.2 Min-Max Criteria 17

2.6 Path 7 18

2.7 Path 6 21

2.7.1 Min-Sum Criteria 21

2.7.2 Min-Max Criteria 23

2.8 Path 5 26

2.9 Paths 4 and 3 27

2.9.1 Path 4 27

2.9.2 Path 3 28

2.10 Paths 2 and 1 32

2.10.1 Min-Sum Criteria 33

2.10.2 Min-Max Criteria 39

3 SCHEDULING RULES FOR SOME PARALLEL PROCESSOR PROBLEMS. 41

3.1 Introduction 41

3.2 Scheduling with Job Splitting and No Restrictions on Job Availabilities 43

3.2.1 General Formulation 43

3.2.2 Minimization of Maximum Lateness 49

3.2.3 The "Due Date" Rule as a Special Case of the "Greatest Potential Lateness" Rule 53

3.3 Nonsimultaneous Job Arrivals 54

3.3.1 Minimization of Maximum Flow Time 54

3.3.2 Minimization of Maximum Lateness 59

3.4 Simultaneous Processing by More than One Machine.. 62

3.5 Two Min-Sum Problems 64

3.5.1 Limited Machine Availabilities 64

3.5.2 Jobs with Common Due Dates 65

3.6 Minimization of Maximum Deferral Cost 66

3.7 Conclusions 67

4 FIXED ROUTE PARALLEL PROCESSOR SCHEDULING PROBLEMS 69

4.1 Introduction 69

4.2 The One Train-One Track Problem 72

4.3 The M Train-One Track Problem 76

4.3.1 Problem Formulation 77

4.3.2 Maximization of the Number of Jobs Taken to their Destinations by the First K Trains 79

4.3.3 Minimization of the Sum of Completion Times 83

4.4 Restrictions on Job Availabilities 86

4.4.1 Jobs Not Simultaneously Available 86

4.4.2 Jobs Restricted by Due Dates 92

4.4.3 Jobs Not Simultaneously Available and Subject to Due Dates 97

4.4.4 Synopsis 98

4.5 Minimization of Maximum Lateness 98

4.5.1 Minimization of Lmax with Job Splitting ... 98

4.5.2 Feasible Schedules Without Job Splitting 103

4.6 M Trains on One Track with Switchyards 107

4.6.1 Train Schedules Based on Demand 108

4.6.2 Fixed Train Schedules 109

4.7 Routes Consisting of a Directed Network 117

4.8 Conclusion 120

5 MIN-MAX PARALLEL PROCESSOR ALLOCATION PROBLEMS 122

5.1 Introduction 122

5.2 Formulation 125

5.3 Case 1: Minimization of Maximum Loss Probability 126

5.4 Case 2: Minimization of Maximum Penalty 132

5.5 Conclusions 135

6 SUMMARY AND SUGGESTIONS FOR FUTURE RESEARCH 137

APPENDICES

1 COMPUTATIONAL RESULTS 143

2 REDISCOVERING THE SQUARE WHEEL 154

3 THE OPERATION OF THE SEABOARD COAST LINE RAILROAD (SCL) 166

BIBLIOGRAPHY 172

BIOGRAPHICAL SKETCH 178

LIST OF FIGURES

Figure Page

1 Classification of Deterministic Scheduling Problems 9

2 A Three Job, Two Machine Problem 16

3 Example of a Two Job, Three Period Problem with M Identical Processors 124

4 Network Representation of a Two Port, Three Period Min-Max Loss Probability Allocation Problem 128

5 An Example of a Min-Max Allocation Problem on a Directed Network 130

6 Network Representation of a Two Port, Three Period Min-Max Penalty Allocation Problem 134

7 Plot of CPU Time Versus Problem Size 152

8 Freight Railway Network 167

ABSTRACT OF DISSERTATION PRESENTED TO THE GRADUATE COUNCIL

OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE

REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

OPTIMAL RULES AND ALGORITHMS FOR SOME PARALLEL PROCESSOR SCHEDULING PROBLEMS

By

Louis Anthony Martin-Vega

March, 1975

Chairman: H. Donald Ratliff

Major Department: Industrial and Systems Engineering

A study of some deterministic parallel processor scheduling problems is presented. Optimal rules and algorithms (i.e., constructional methods which lead directly to an optimal solution) are developed for problems motivated by the theory of parallel processor scheduling itself as well as for problems motivated by their potential application in transportation systems. This is in contrast to enumeration type approaches (e.g., dynamic programming, branch and bound) which have been utilized for problems of this nature.

With respect to the classical parallel processor environment, a basic problem of scheduling identical processors without restrictions on job or machine availabilities is first considered. A priority-based rule is shown to minimize maximum lateness when a limited form of job splitting is allowed. Optimality of the rule is shown to extend for minimizing maximum flow time when jobs are not simultaneously available. Algorithms and properties are presented for special cases of more general problems and other optimality criteria. These

developments are also used to tie in previously unrelated results in the area.

Optimal rules are then presented for various criteria involving the scheduling of freight on a class of fixed route railway systems. Train movement is modeled as a series of parallel processors whose availability is subject to a predetermined train schedule. Situations ranging from maximization of the number of jobs taken to their destinations on one train and multiple train systems to the scheduling of jobs on systems consisting of directed networks are considered. Besides presenting a new scenario for scheduling research, these results represent one of the first developments of nonsimulation type models for scheduling problems in the railway industry.

An extensive review of the literature is also included which serves as a unified framework for future work in the area of parallel processor scheduling.

CHAPTER 1

INTRODUCTION AND OVERVIEW

1.1 In the Beginning

The origin of scheduling problems can be traced to the annals of primitive man. Surely his initial observation of the difficulties involved in trying to accomplish more than one task simultaneously presented him with the "first" scheduling problem. While our unsung "first scheduler" must by necessity remain anonymous, what is clear is that attempts to quantify the scheduling decision process began around the turn of the twentieth century. Such ventures, being mainly confined to the manufacturing shop environment, led to the phrase "machine scheduling problem" as a descriptor of such processes. These attempts and their documentation served to create what is known as the "theory of scheduling," a field of study whose present applicability transcends its original environment. While scheduling problems continue to attract increasing numbers of researchers, their complexity is such that even today, existing results consist mainly of a great number of individual contributions lacking, for the most part, any unifying theory.

Interest in scheduling problems has grown appreciably within the last two decades. Certainly a great deal of this interest is spurred by the countless applications that have arisen for problems initially relegated to a "machine shop" environment as well as the theoretical challenge that this apparently well-structured and yet difficult-to-solve problem presents. The development of numerous formulations and techniques for dealing with scheduling problems has expanded knowledge in this once highly specialized area to the point where the "theory of scheduling" is now a general term and notions such as "job shop" and "flow shop" problems constitute specializations of the general theory. This dissertation will concentrate on a particular class of deterministic scheduling problems, namely, the problem of scheduling jobs on parallel processors.

1.2 The Parallel Processor Scheduling Problem

Informally, the deterministic scheduling problem can be described as follows: (Rinnooy-Kan [61])

Given a set of n jobs (tasks, events, products, . . .) that have to pass through m machines (processors) in a given prescribed order (which may be no order) under certain restrictive assumptions (which may be no assumptions), what is, according to some criterion, the optimal schedule for handling the jobs?

The parallel processor problem is a subset of deterministic scheduling specifically characterized by the assumption that for any job j, j = 1, . . ., n and machine ℓ, ℓ = 1, . . ., m, either machine ℓ is capable of completely processing job j or machine ℓ does not have the capability of processing job j at all. This implies that the only ordering restrictions which exist are those that may be specified among the jobs themselves. This is in contrast to other deterministic problems where some of the jobs may require processing on more than one machine before they are completed, in which case machine ordering restrictions may also exist.

Interest in parallel processor systems has blossomed with the advent of multi-processor computer systems. As Muntz [56, p. 1] notes, "these systems have been shown to have several advantages over more conventional single processor organizations. The one of most importance to a large class of users is the potential increase in computing speed achievable by parallel programming. This is particularly of interest for real time applications and also for large computations, such as weather forecasting, when the results are needed more quickly than they can be provided by a single processor." Other areas of applicability that appear in the literature include manufacturing systems [63, 77] and problems concerning project planning under scarce resources [26].

The impact of a scheduling decision varies from one situation to the next. In many cases it is impossible to isolate the scheduling process. As Rinnooy-Kan [61, p. 4] points out, the scheduling decision is generally preceded by planning activity and followed by control activity, both of them involving economic and technological judgments that strongly influence the scheduling decision itself. Pounds [59] reports that management is often not even aware that a scheduling problem exists; there are so many decisions to be made that the order that jobs are processed in is not perceived as an influenceable and relevant variable. However, as Elmaghraby [26] notes, scheduling decisions are likely to get more and more important as the computer takes over many routine decisions and as improved operations research techniques perfect other ones. And

Mellor [53] quotes a list of no less than 27 goals that can be attained by good scheduling, including items as diverse as day-to-day stability of work force and anticipation of price changes. Moreover, there are situations in which the scheduling decision becomes the principal factor affecting daily operation, as is the case in crew scheduling on airlines. While the argument over the impact of scheduling decisions continues, there appears to be general agreement that oftentimes significant differences exist in the use of one schedule over another. It is this factor that constitutes the basic "raison d'etre" for work in this area.

1.3 Representation of a Schedule

The objective of a parallel processor problem centers around determining an optimal schedule. Informally, a schedule is simply a description of the work to be done by each machine at each moment in time. The simplest way of specifying the schedule is through the use of a Gantt chart. The chart consists of a time axis for each processor with intervals marked off and labeled with the name of the job being processed. A symbol of φ can be used to represent an idle period [56]. For example, the following schedule for three jobs shows that Job a is processed from t = 0 to t = 3 on machine one, Job b from t = 0 to t = 1 on machine two and t = 3 to t = 5 on machine one, and Job c from t = 2 to t = 5 on machine two. The period t = 1 to t = 2 is an idle period on machine two.

    M1 |      Job a      |   Job b   |
    M2 | Job b |    φ    |   Job c   |
    t = 0     1     2     3     4     5
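A schedule in this Gantt form is easy to generate programmatically. The following Python sketch (the function and variable names are this edit's own, not the dissertation's) renders one text row per machine, marks idle periods with φ, and reproduces the example above.

```python
def gantt_rows(schedule, horizon):
    """Render a schedule as text Gantt rows, one per machine.

    schedule maps a machine number to a list of (job, start, end)
    intervals; any time unit not covered by an interval is shown
    as the idle symbol φ.
    """
    rows = []
    for machine, intervals in sorted(schedule.items()):
        row = ["φ"] * horizon
        for job, start, end in intervals:
            for t in range(start, end):
                row[t] = job
        rows.append(f"M{machine} | " + " ".join(row))
    return rows

# The example from the text: Job a on machine one from t = 0 to 3,
# Job b on machine two from 0 to 1 and on machine one from 3 to 5,
# Job c on machine two from 2 to 5.
example = {1: [("a", 0, 3), ("b", 3, 5)],
           2: [("b", 0, 1), ("c", 2, 5)]}
for line in gantt_rows(example, 5):
    print(line)
```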

1 . 4 Scope of the Research

This dissertation presents a study of deterministic parallel processor scheduling problems. The primary contribution of the study is the development of optimal rules and algorithms (i.e., constructional methods which lead directly to an optimal solution) for some parallel processor problems. This is in contrast to enumeration type approaches (e.g., dynamic programming, branch and bound) which have also been utilized for problems of this nature. The research effort is divided between problems motivated essentially by the theory of parallel processor scheduling itself, as well as problems motivated by their potential application in transportation and military systems. The effort is brought together by the fact that the insight developed in the treatment of the former provides the foundation for the solution of the latter.

A secondary contribution is the interlacing of a number of heretofore unrelated results which appear in the literature on single and parallel processor problems and the establishment of a common, unified framework for future work in the area.

1.5 Overview

In Chapter 2, a classification scheme for parallel processor problems is developed. The scheme serves as the basic outline for an extensive review of the state of the art of parallel processor scheduling and provides a framework for identification of areas for future development. The review emphasizes and provides details for those problems where scheduling rules and network formulations suffice as solution

procedures although enumerative approaches are mentioned when applicable.

In Chapter 3 problems which arise from the theory of parallel processor scheduling are studied. The basic problem of scheduling identical processors without restrictions on job or machine availabilities is considered. An optimal rule is developed for minimizing maximum lateness when jobs are assumed simultaneously available and a limited type of job splitting is allowed. Optimality of the rule is shown to extend for the problem of minimizing maximum flow time when jobs are not simultaneously available. The treatment of more general problems is hindered by the lack of a dominant priority selection criterion. Algorithms and properties are also presented for special cases of more general problems and other optimality criteria. The developments of this chapter are also used to tie in previously unrelated results in the area of parallel processor scheduling.

Chapter 4 provides the most important contribution of this dissertation. Optimal rules are developed for a multiplicity of criteria in problems involving the scheduling of freight on a class of fixed route railway systems. In these problems the trains are modeled as parallel processors whose availability is subject to a predetermined train schedule. Situations ranging from maximization of the number of jobs taken to their destinations on one train and multiple train systems to the scheduling of jobs on systems consisting of directed networks are considered. Besides presenting a new scenario for scheduling research, this chapter represents one of the first developments of non-simulation type models for scheduling problems in the railway industry.

In Chapter 5, min-max parallel processor problems of the type generally found in production scheduling contexts are considered. These problems have the particular characteristic that penalties are assessed as a function of how much of a job has been completed by a certain period rather than as a function of a job's completion time as is the case in previous chapters. Optimal algorithms are presented for two cases. The problems considered in this chapter, while not of the same spirit as those in the previous chapters, chronologically were the first approached in this study. The notions utilized in their solution opened the door to many of the developments of Chapters 3 and 4.

Finally, Chapter 6 presents a summary and discussion of areas for future research.

CHAPTER 2

LITERATURE REVIEW

2.1 Classification Scheme

The accompanying diagram (Figure 1) presents a schematic classification of deterministic scheduling problems upon which the organization of this review is based. The scheme is based on identifying the restrictive assumptions currently imposed on deterministic scheduling problems and the level of difficulty they create in the solution of the same. Each path in the drawing represents one particular scheduling problem (e.g., Path 6 denotes the m identical parallel processor problem with sequence independent setup times and precedence constraints among the jobs). In general, the higher up in the tree an assumption occurs, the more critical it is to the computational efficiency of currently available algorithms for solving problems under the assumption. Underlying the classification scheme are the optimality criteria, which in this review will be partitioned into two major divisions: min-max and min-sum criteria.

2.2 Optimality Criteria [16, 61]

We first establish the following definitions: Let

r_j = the release time of job j (the earliest time that processing could start on job j; r_j = 0 for all j implies that all jobs are simultaneously available for processing)

[Figure 1 (Classification of Deterministic Scheduling Problems), a full-page schematic, appears here in the original.]

p_j = the total processing time for job j

W_j = the total waiting time for job j

C_j = the completion time of job j (the time at which job j is finished)

F_j = the flow time of job j (the total time job j spends in the shop)

These concepts are related as follows:

C_j = r_j + W_j + p_j

F_j = W_j + p_j

Setting due dates for each job allows us to define:

d_j = the due date or desired completion date of job j

L_j = C_j - d_j, the lateness of job j

T_j = max(0, L_j), the tardiness of job j.

Based on these definitions we have that the min-max optimality criteria commonly considered in the literature are:

(1) minimization of maximum completion time C_max = max_j (C_j)

(2) minimization of maximum flow time F_max = max_j (F_j)

(3) minimization of maximum waiting time W_max = max_j (W_j)

When due dates are set for all jobs two other criteria of interest are:

(4) minimization of maximum lateness L_max = max_j (L_j)

(5) minimization of maximum tardiness T_max = max_j (T_j)

Note that any schedule minimizing L_max also minimizes T_max (but not necessarily vice versa).

Defining c_j(t) = the deferral cost or penalty associated with the completion of job j at time t, we have as a final min-max criterion

(6) minimization of maximum deferral cost c_max = max_j c_j(C_j)

of which (1) through (5) are all special cases.

The min-sum optimality criteria commonly considered in parallel processor problems are:

(7) minimization of mean completion time (1/n) Σ_j C_j (equivalently, the sum or total completion time)

(8) minimization of mean flow time (1/n) Σ_j F_j (sum or total flow time)

(9) minimization of mean waiting time (1/n) Σ_j W_j (sum or total waiting time)

(7), (8), and (9) are really special cases of:

(10) minimization of the weighted sum of completion times Σ_j α_j C_j, where α_j denotes the relative importance of job j,

(11) minimization of the weighted sum of flow times Σ_j α_j F_j, and

(12) minimization of the weighted sum of waiting times Σ_j α_j W_j.

Since C_j = r_j + W_j + p_j and F_j = W_j + p_j, we have:

Σ_j α_j F_j = Σ_j α_j W_j + Σ_j α_j p_j   and   Σ_j α_j C_j = Σ_j α_j r_j + Σ_j α_j F_j

which, given that Σ_j α_j p_j and Σ_j α_j r_j are schedule independent constants, implies that (10), (11), and (12) are equivalent criteria, as are (7), (8), and (9).

When due dates d_j are set for the jobs, criteria of interest include:

(13) minimization of mean or total lateness (1/n) Σ_j L_j, and

(14) minimization of mean or total tardiness (1/n) Σ_j T_j,

as well as

(15) minimization of total weighted lateness Σ_j α_j L_j, and

(16) minimization of total weighted tardiness Σ_j α_j T_j.

Since Σ_j α_j L_j = Σ_j α_j C_j - Σ_j α_j d_j and Σ_j α_j d_j is a schedule independent constant, we conclude that (13) is equivalent to (7), (8), and (9) and (15) is equivalent to (10), (11), and (12). Another criterion of special interest is:

(17) minimization of the number of tardy jobs.

Finally, we note that all eleven min-sum criteria are special cases of

(18) minimization of Σ_j c_j(C_j), where c_j(t) is the penalty or cost associated with the completion of job j at time t.

A final criterion which arises in problems of a production scheduling context concerns the minimization of the total penalty cost associated with processing a set of jobs, where the cost is a function of how much of each job has been completed by time t (t = 1, . . ., H, where H is the scheduling horizon).
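The equivalences among criteria (7) through (15) rest on the schedule independent constants noted above. A small numeric sketch (the data and helper name are this edit's own) confirms that Σ_j α_j C_j and Σ_j α_j L_j differ by the constant Σ_j α_j d_j no matter which schedule produced the completion times:

```python
def weighted_sums(C, d, a):
    """Return (weighted completion time, weighted lateness) for a
    schedule with completion times C, due dates d and weights a."""
    sum_wC = sum(w * c for w, c in zip(a, C))
    sum_wL = sum(w * (c - dd) for w, c, dd in zip(a, C, d))
    return sum_wC, sum_wL

d = [4, 4, 6]   # due dates (schedule independent)
a = [1, 2, 3]   # weights   (schedule independent)
const = sum(w * dd for w, dd in zip(a, d))   # Σ α_j d_j

# Two different schedules yield different criteria values, but the
# difference Σ α_j C_j - Σ α_j L_j is the same constant in both, so
# ranking schedules by (10) or by (15) gives the same order.
for C in ([3, 5, 7], [7, 3, 5]):
    sum_wC, sum_wL = weighted_sums(C, d, a)
    assert sum_wC - sum_wL == const
```

Tardiness, being max(0, L_j), is nonlinear in C_j, which is why (14) and (16) do not join these equivalence classes.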

2.3 Organization of the Review

The review itself will start with the most general parallel processor scheduling problem (Path 8) and work its way downwards to the most basic parallel processor problem (Path 1). Path 9, while not a parallel processor problem, is included for the sake of completeness and will be commented upon only briefly. For each problem, current results may range from enumeration approaches to network formulations to the existence of scheduling rules, be they one pass or iterative. We again note that the emphasis in this review will be on rules and network formulations (i.e., constructional methods which lead directly to an optimal solution).

The results in each section (Path) will be dichotomized into those involving min-max or min-sum optimality criteria. This is done as much to highlight the surprising fact that the majority of the results found in the literature are for parallel processor problems with min-sum optimality criteria as to provide natural subheadings for the presentation of the results themselves.

2.4 Path 9

Deterministic scheduling problems can be initially delineated into situations where machines are identical versus situations where they are not. The system consists of non-identical processors if the processing time of job j, p_jℓ, varies with its machine assignment (i.e., p_jℓ denotes the processing time of job j on machine ℓ). When the processors are not identical, machine ordering restrictions may exist which define the various patterns the jobs or any subset of the jobs may follow in their progress towards completion. Two particular cases of interest within this category concern the job shop and flow shop problems. In both cases, it is assumed that some of the jobs require processing on more than one machine before they are completed. Job shop problems are those in which the machine ordering restrictions may take on any form. Flow shop problems require that all jobs be processed in the same machine order and in some instances impose the additional condition that the same job order be preserved on all machines.

As noted previously, a distinguishing feature of parallel processor systems is that machine ordering restrictions are not imposed on the jobs. It then follows that the problems of interest in this review pertain to Paths 8 through 1, in which case we conclude with Path 9. (Reviews of deterministic scheduling problems including those in Path 9 are found in Ashour [3], Bellman [8], Conway, Maxwell and Miller [16], Elmaghraby [26], Maxwell [48], Rinnooy-Kan [61] and Spinner [75].)

2.5 Path 8

This most general parallel processor problem has received scant attention in the literature. The scarcity of results is a by-product of the complexity introduced when job processing times are dependent on the machine assignment as well as the jobs themselves.


2.5.1 Min-Sum Criteria

The most significant result in this area is provided by Horn [40], who shows that when all jobs are simultaneously available the problem of minimizing mean flow time can be formulated as an assignment problem. An example network for a three job, two machine problem is given in Figure 2. To construct the network, define a node j corresponding to each job and a node kℓ corresponding to the kth from the last position on machine ℓ. From each node j, construct a forward arc (j, kℓ) to each node kℓ. A cost of kp_jℓ is assigned to each arc (j, kℓ), where p_jℓ denotes the processing time of job j on machine ℓ. For example, if job 2 is assigned to the third from the last position on machine one, its contribution to the total flow time (cost) is 3p_21. Define a source node S and a sink node T where a forward arc (S, j) is constructed for all j = 1, . . ., n and a forward arc (kℓ, T) is constructed for all k = 1, . . ., n and ℓ = 1, . . ., m. Since each job can only be assigned once, a capacity of one is placed on all arcs (S, j). Analogously, since each position on each machine can only be occupied by one job, a capacity of one is placed on each arc (kℓ, T). When n, the number of jobs, is large, computational difficulties may arise since it is difficult to determine "a priori" a bound on the number of jobs to be assigned to each machine and n positions must be allocated to each machine. Horn provides means for reducing computational difficulties caused by the size of the formulation.

Another result in this area is given by Gupta and Maykut [35], who present a dynamic programming formulation for the problem of minimizing Σ_j c_j(C_j).

[Figure 2 (A Three Job, Two Machine Problem), a network diagram with arc capacities, appears here in the original.]

2.5.2 Min-Max Criteria

The problem of minimizing F_max was looked at briefly by McNaughton [50] for the simplest case of three jobs and two machines. In order to solve this case he had to consider 13 different subproblems, an occurrence that did not inspire him (nor any other researchers for that matter) to continue onward. However, there is a special case for which results are attainable.

Consider the problem of scheduling n jobs with identical processing times on m non-identical processors so as to minimize F_max. Since the jobs are identical, p_jℓ is the same for all j and a given ℓ, ℓ = 1, . . ., m (i.e., p_jℓ = p_ℓ for all j = 1, . . ., n).

Let x_ℓ = the number of jobs assigned to machine ℓ and

p_ℓ = the processing time of any job on machine ℓ.

The problem of minimizing F_max can be formulated as:

(P-1)   min P(X)

        subject to   Σ_{ℓ=1}^{m} x_ℓ = n

                     x_ℓ ≥ 0 and integer

where

P(X) = max{p_1 x_1; p_2 x_2; . . .; p_m x_m}

(P-1) is a single constraint min-max problem similar to those considered by Jacobsen [43] and can be optimally solved using a marginal allocation procedure as follows:


(i) Set x_ℓ = 0 for all ℓ, and set i = 1 (i.e., P_i(X) = 0).

(ii) Assign any job j, from the set of jobs not yet assigned, to machine ℓ* if:

p_ℓ*(x_ℓ* + 1) - P_i(X) = min_{ℓ=1,...,m} {p_ℓ(x_ℓ + 1) - P_i(X)}

(iii) Once a job is assigned to ℓ*, x_ℓ* is increased by one and all other values of x_ℓ remain the same.

(iv) Let i = i + 1 and recompute P_i(X).

(v) Repeat steps (ii) through (iv) until all jobs are assigned, in which case the current value of P_i(X) is optimal.

2.6 Path 7

Even though we now assume that the processors are identical (i.e., p_jℓ = p_j for ℓ = 1, . . ., m), the inclusion of sequence dependent setup times makes this a very difficult problem. In the literature that we considered, no results were found for problems involving min-max criteria, although this was not the case for min-sum criteria, as we shall now see.

The only solution for the case of more than one processor is provided by Dantzig and Fulkerson [17]. They show that the special case where the objective is the minimization of the number of resources required to complete all jobs by their due dates, if p_j = d_j - r_j (i.e., no job slack), can be formulated and solved as a minimum cost network flow problem. This problem is referred to in the literature as the "Tanker Scheduling" problem.


For example, consider the following problem with four jobs, where release times r_j, due dates d_j and sequence dependent setup times s_ij are given by the following data:

    j   | 1   2   3   4
    r_j | 0   3   4   8
    d_j | 2   5   5   9

    s_ij | 1   2   3   4
      1  | -   1   3   ?
      2  | -   -   -   3
      3  | -   -   -   1
      4  | -   -   -   -

The network is constructed by creating a node d_i corresponding to each job due date, and defining a node r_j corresponding to each job release time. A directed arc (d_i, r_j) is drawn if and only if d_i + s_ij ≤ r_j. The network is completed by adding a source node S, a sink node T and a dummy node D such that a directed arc (S, d_i) with unit capacity is drawn from S to all nodes d_i, a directed arc (D, r_j) with infinite capacity and unit cost is drawn from D to all nodes r_j, and an arc (r_j, T) with unit capacity is drawn from r_j to T for all nodes r_j. Arcs in the corresponding network diagram are labeled (capacity, cost).

The minimum number of resources required to process all jobs can then be determined by solving a minimal cost flow problem on the above network such that all arcs (r_j, T) from r_j to T are saturated. The number of resources required is given by the flow out of the dummy node D.
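The construction can be sketched in code. Minimizing the number of resources is equivalent to maximizing the number of compatible handoffs d_i → r_j, which is a maximum bipartite matching; this is a sketch of that reformulation, not Dantzig and Fulkerson's original min-cost-flow network. The data are those of the example above (the illegible setup s_14 is simply omitted).

```python
# Tanker-scheduling sketch: each job j has release r[j] and due date d[j],
# with p[j] = d[j] - r[j] (no slack).  A single resource can handle job j
# after job i iff d[i] + s[i][j] <= r[j].  The minimum number of resources
# equals n minus a maximum matching on these compatibility arcs.

def min_resources(r, d, s):
    n = len(r)
    # compat[i] = jobs that can follow job i on the same resource
    compat = [[j for j in range(n)
               if j != i and (i, j) in s and d[i] + s[(i, j)] <= r[j]]
              for i in range(n)]
    match = {}                              # follower job -> predecessor job

    def augment(i, seen):                   # augmenting-path search
        for j in compat[i]:
            if j in seen:
                continue
            seen.add(j)
            if j not in match or augment(match[j], seen):
                match[j] = i
                return True
        return False

    handoffs = sum(augment(i, set()) for i in range(n))
    return n - handoffs

# Data from the example: r = (0, 3, 4, 8), d = (2, 5, 5, 9),
# setups s12 = 1, s13 = 3, s24 = 3, s34 = 1 (s14 is illegible and omitted).
r = [0, 3, 4, 8]
d = [2, 5, 5, 9]
s = {(0, 1): 1, (0, 2): 3, (1, 3): 3, (2, 3): 1}
print(min_resources(r, d, s))   # → 2
```

Here resource 1 can serve jobs 1, 2 and 4 in succession while resource 2 serves job 3, so two resources suffice.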

The inclusion of sequence dependent setup times is quite a formidable barrier since the setup times can no longer be included as part of the job processing times. The difficulty of this class of problems is appreciated by noting that for the one machine case where the objective is the minimization of the sum of setup times, the problem is equivalent to what is commonly referred to in the literature as the "traveling salesman" problem. Because of the fame or the notoriety, as you wish, of this problem, no need will be found for expounding upon it in this survey. A thorough review of this problem is provided by Bellmore and Nemhauser [9]. The specific role played by the problem in scheduling theory is elaborated upon by Conway, Maxwell and Miller [16] and Elmaghraby [26].

A final criterion considered by Glassey [33] concerns the minimization of the number of changeovers required to have produced d_j(t) units of product j by time t (t = 1, ..., T), where the machine produces one unit of product per time unit and T denotes the scheduling horizon. He combines graph-theoretical and dynamic programming arguments in order to solve the problem. Both the formulation and the results are discussed by Rinnooy-Kan [61].


2.7 Path 6

We now assume that setup times are sequence independent, in which case they can be included as part of the job processing times. We also assume that the jobs are related by precedence constraints such as those found in CPM and PERT, which create a partial ordering of the jobs defined by the precedence relation. As Baker [4] notes, the convention "i < j" is most commonly used to denote the fact that job i precedes job j (i.e., job j can not begin processing until job i is completed). When i < j, job i is called a predecessor of job j and job j is called a successor of job i. Job i is a direct predecessor of job j if i < j and there is no job k such that i < k < j.

2.7.1 Min-Sum Criteria

Baker [4] advances the opinion that a collection of precedence related jobs may not be a realistic context for dealing with a mean flowtime (F) criterion. He states that in many cases, a predecessor job is important only to the extent that it allows a later job to be processed; the flowtime of the predecessor job may itself be of no importance. He suggests that a more meaningful context might be one in which contributions to the F criterion arise only from jobs that have no successors, in which case the Σ a_j F_j criterion would be more appropriate with a_j = 0 for those jobs whose contribution is not meaningful. This belief is apparently shared by those who have researched this problem since current results deal basically with this criterion.

Horn [39] provides an algorithm in which the directed graph representing the precedence constraints is a forest; i.e., a collection


of trees, each with a root node. For example, given the following precedence relationships

there are only two candidates for the first job, jobs 1 and 2. When one of the two is selected, the node as well as all branches leading from it are deleted, generating a new set of trees with candidate root nodes. In order to solve the Σ a_j F_j problem for precedence constraints of the above form, Horn introduces the notion of a successor set S_j of job j where:

(i) if k ∈ S_j and k ≠ j, then j < k, and

(ii) if k ∈ S_j and i < k, then either i ∈ S_j or i < j.

His algorithm consists of calculating for each root j

    γ_j = min over sets S_j of (Σ_{i ∈ S_j} p_i) / (Σ_{i ∈ S_j} a_i)

and scheduling the root job with minimal γ_j. The procedure is then repeated with the new set of roots. Based on Conway, Maxwell and Miller's [16] observation that the pair of sequences that minimize and maximize Σ a_j F_j will be antithetical, Horn's algorithm is also optimal for situations where the precedence constraints are upside-down trees. The algorithm can be applied after turning the trees upside down again, reversing all directed arcs and replacing a_j by -a_j [61].


Sidney [72] considers the Σ c_j(t) problem when c_j(t) is linear. He presents a combinatorial algorithm and proves that a permutation is optimal if and only if it can be generated by the algorithm. He also develops more efficient procedures for special precedence structures such as parallel chains, parallel networks, job modules and rooted trees.

2.7.2 Min-Max Criteria

A fundamental result in this area is provided by T. C. Hu [41] for the special case when all jobs have unit processing time and the graph of ordering restrictions is a tree with all arcs directed towards the root node. Hu's algorithm for minimizing F_max consists of:

(i) Labeling the job without any successor by a_j = 1.

(ii) Labeling all other jobs by

    a_j = 1 + a_k

where k is the job that directly succeeds job j in the tree.

(iii) a. If the number of jobs without predecessors is less than or equal to m, the number of machines, then these are scheduled, leaving any excess machines idle.

b. If the number of jobs without predecessors exceeds m, then those m jobs with the largest a_j values are scheduled.

(iv) All scheduled jobs and branches incident to them are omitted, and step (iii) is repeated until all jobs are scheduled.

For example, consider the following problem with n = 7 jobs and m = 2 machines.


[Figure: precedence tree for the seven-job example, with the labels a_j computed in steps (i) and (ii).]

Since we have two machines, then according to the algorithm we first schedule job 7 and either job 4 or 5. Here we arbitrarily choose job 5 and enclose jobs 7 and 5 in a dotted curve. The successive steps are self-explanatory. The jobs are completed in F_max = 4 units of time, which is clearly minimal.
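Hu's labeling procedure is easy to state in code. The sketch below uses a hypothetical seven-job in-tree (the tree in the worked example above is not fully recoverable), with succ[j] giving the unique direct successor of job j, or None for the root.

```python
# Hu's algorithm for unit-time jobs whose precedence graph is an in-tree:
# label each job with 1 + (label of its direct successor), then each period
# schedule up to m available jobs (all predecessors done) with largest labels.

def hu_schedule(succ, m):
    n = len(succ)
    label = [None] * n

    def get_label(j):                       # steps (i)-(ii): distance to root
        if label[j] is None:
            label[j] = 1 if succ[j] is None else 1 + get_label(succ[j])
        return label[j]

    for j in range(n):
        get_label(j)

    preds = [set() for _ in range(n)]
    for j, k in enumerate(succ):
        if k is not None:
            preds[k].add(j)

    done, periods = set(), []
    while len(done) < n:                    # steps (iii)-(iv)
        ready = [j for j in range(n) if j not in done and preds[j] <= done]
        ready.sort(key=lambda j: -label[j])
        chosen = ready[:m]
        periods.append(chosen)
        done.update(chosen)
    return periods                          # periods[t] = jobs run in period t+1

# Hypothetical in-tree on 7 jobs (job 0 is the root): arcs j -> succ[j].
succ = [None, 0, 0, 1, 1, 2, 2]
sched = hu_schedule(succ, m=2)
print(len(sched))   # → 4
```

For this balanced tree the schedule finishes in four periods, which matches the lower bound ⌈7/2⌉ = 4 for seven unit jobs on two machines.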

As Baker [4] observes, Hu's algorithm takes on added significance when it is noted that any job j with integer processing time p_j can be modeled as a series of p_j jobs with unit processing time where each except the last has one direct predecessor. In this case, Hu's algorithm is optimal for jobs with arbitrary processing times and a tree precedence graph when a limited form of job splitting (i.e., one unit of each job can be processed in any period) is allowed.

Zaloom [79] has observed that any general set of precedence constraints can be converted to a tree by removing all but one arc lead- ing out of each node. Therefore, Hu's algorithm provides a lower bound on the problem with general precedence constraints when the same limited form of job splitting is allowed. If Hu's solution to the problem on the tree does not violate any of the extirpated precedences, then it is

optimal for the general network. If some precedences are violated, then what is lacking in this instance is an algorithm to optimally replace precedence arcs that are initially removed.

For the case of arbitrary job processing times and a general graph, Coffman and Muntz [15] present an algorithm for minimizing F_max on two processors. They allow job splitting and divide the jobs into independent (non-interfering) subsets, ordering within the groups and then combining them. The algorithm does not extend to any larger values of m.

For the single machine problem, Lawler [46] has presented a very efficient algorithm for minimizing max_j c_j(t) when c_j(t) is a monotonically non-decreasing penalty assessed if job j is completed at time t. His procedure consists of sequencing job k last if

    c_k(T) = min_{j ∈ S} c_j(T)    where T = Σ_{j ∈ J} p_j

where S denotes the subset of jobs without successors and J denotes the set of all jobs. After job k is selected, it is removed from J and the procedure is repeated to find the job to be scheduled next to last. This process is continued until all jobs have been scheduled. The L_max problem can now be solved by letting c_j(t) = t - d_j. The T_max problem is solved by letting c_j(t) = max(t - d_j, 0). In these two cases the algorithm becomes equivalent to sequencing the jobs from last to first, always choosing next from among the jobs without successors a job with the latest possible due date.
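Lawler's backward rule can be sketched directly. The instance below is hypothetical: three jobs with one precedence arc (job 0 before job 1), and the cost c_j(t) = t - d_j, so the procedure minimizes L_max.

```python
# Lawler's rule for min-max cost on one machine with precedence constraints:
# repeatedly place last the job, among those with no unscheduled successor,
# whose cost at T = (sum of unscheduled processing times) is smallest.

def lawler(p, cost, prec):
    jobs = set(range(len(p)))
    T = sum(p)
    seq = []
    while jobs:
        blocked = {i for (i, j) in prec if j in jobs}    # successor still unscheduled
        k = min((j for j in jobs if j not in blocked),
                key=lambda j: cost(j, T))
        seq.append(k)
        jobs.remove(k)
        T -= p[k]
    return seq[::-1]                # built last-to-first, so reverse

# Hypothetical data: processing times, due dates, and the arc 0 -> 1.
p = [2, 3, 2]
d = [3, 6, 9]
prec = {(0, 1)}
order = lawler(p, lambda j, t: t - d[j], prec)   # minimizes L_max
print(order)   # → [0, 1, 2]
```

At T = 7 only jobs 1 and 2 lack successors; job 2 has the smaller lateness 7 - 9 and goes last, and the rule then unwinds the rest of the sequence.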

The difficulty of the m identical parallel processor problem with precedence constraints is brought into perspective by noting that


even with job splitting, the problem is equivalent to what is referred to in the literature as the capacitated or constrained CPM problem with a single type of resource. No attempt will be made here to delve further into this extensively investigated area. The interested reader is referred to Bennington and McGinnis [10], Davis [18] and Herroelen [38] for recent state of the art reviews.

2.8 Path 5

Problems in which not all machines are available during the scheduling horizon represent a relatively untouched area in scheduling on parallel processors. Such a condition may arise as an "a priori" restriction on the problem or in instances where the machines themselves move in time, as may be the case in the scheduling of jobs in transportation systems. A major contribution of this dissertation is the development of optimal rules for minimizing L, L_max, F_max, C_max and other criteria for problems involving the scheduling of freight on a class of fixed route railway systems. In these problems the trains play the role of parallel processors whose availability is subject to a predetermined train schedule. These results are contained in Chapter 4.

With respect to more classical situations we find that in a recent paper Gelders and Kleindorfer [31] consider the problem that arises in trying to minimize overtime and scheduling costs on a single processor with varying capacity. They develop a branch and bound algorithm for dealing with this detailed and complex scheduling problem. We also find that the problem of minimizing Σ c_j(t) when the number of machines available varies from period to period can be formulated as an

assignment problem for the special case of jobs with unit processing time. This formulation is presented in Chapter 3.

2.9 Paths 4 and 3

The availability of a job for processing is typically restricted by its arrival or release time into the system r_j, by its due date d_j, or both. When r_j ≠ 0 for some j, the situation is commonly referred to as the case when jobs are not simultaneously available for processing. An analogous situation occurs when jobs must be processed by their due dates (i.e., T_max ≤ 0).

2.9.1 Path 4

In this section we consider problems where in addition to job availability restrictions it is also required that once processing has commenced on any job it must be carried through until the job is completed. The only results we uncovered here were based on the following min-max problems.

For a single processor and r_j ≠ 0 for some j, it can be noted that the F_max problem is trivially solved by scheduling jobs as soon as possible after they become available. The L_max problem for one machine is not quite so straightforward. Besides being of interest in itself, its importance is also due to its applicability in computing lower bounds for the general job shop problem [14]. Dessouky and Margenthaler [19] solve the problem by initially scheduling the jobs such that job k precedes job q if:

    r_k + d_k ≤ r_q + d_q


Branch and bound is then used to move from this initial solution to optimality in a finite number of steps. Other implicit enumeration schemes for this problem and the T_max problem are found in Bratley et al. [13] and Baker and Su [7]. More recently, McMahon and Florian [49] presented a very efficient branch and bound solution such that a complete solution is associated with each node of the enumeration tree.

When both availability restrictions are imposed, a problem considered in this area concerns the minimization of F_max subject to T_max = 0. A branch and bound procedure is developed by Bratley, Florian and Robillard [12, 13] based on their work for the job splitting version of the same problem to be discussed in Path 3.

2.9.2 Path 3

When job splitting is allowed the question of how often jobs can be split comes into play. Conway et al. [16] have observed that if all the jobs can be processed simultaneously on all m machines, the situation becomes equivalent to processing on a single machine. The m machines working simultaneously on a single job can be viewed as a single machine with m times the power of the basic machines. If the m machines are identical, a pseudo-processing time p'_j is defined for each job where:

    p'_j = p_j / m

and these p'_j are used whenever the scheduling procedure calls for a processing time.

If it is the case that the jobs have integer processing times and some (or all) jobs can be processed by more than one machine simultaneously, then it is possible to transform the problem into one where no job is processed simultaneously on more than one machine in each period. (See Chapter 3.)

A. Min-Sum Criteria

Dorsey, Hodgson and Ratliff [20, 21, 22] present a network approach for the problem of minimizing the total penalty cost associ- ated with producing a set of products over a finite planning horizon. Letting each product be a job, and the time it takes to produce a product be the processing time, allows their network approach to solve an equivalent parallel processor scheduling problem. The problem they solve concerns the minimization of total penalty cost associated with processing a set of jobs on m identical processors over a fixed time horizon H where:

(a) It is known how much of each job should be completed by period t (t = 1, ..., H) (i.e., there exists a fixed schedule for completing each job).

(b) The penalty cost for each job is a function of how much of the job has actually been completed by time t (i.e., a penalty is assessed for being off the fixed schedule).

This formulation is discussed in detail in Chapter 5 where algorithms are developed for parallel processor problems with the same type of penalty cost under min-max criteria.

With the exception of the above problem, results for parallel processor problems within this category are virtually non-existent. For the case of a single processor, Schrage and Miller [70], in looking at single server queuing problems, show that a "shortest remaining processing time" (SRPT) rule will minimize F. Unfortunately, the rule does not extend to m ≥ 2 as shown by the following example:

    Job (j)    r_j    p_j
       1        0      4
       2        0      5
       3        0      1
       4        1      1
       5        2      1
       6        6      1
       7        6      1

Extension of the SRPT rule for this example implies that among those jobs available for processing priority is given to those jobs with shortest remaining processing time.

For the data given, the rule yields the following schedule:

    t =        1  2  3  4  5  6  7  8  9
    Machine 1  1  1  1  2  2  2  6  2  2
    Machine 2  3  4  5  1  -  -  7  -  -

However, a better schedule is:

    t =        1  2  3  4  5  6  7
    Machine 1  2  2  1  1  1  1  6
    Machine 2  3  4  5  2  2  2  7

So while Conway, Maxwell and Miller [16] state that rules for single machine problems with simultaneous release times extend to non- simultaneous release times with job splitting, such fortune does not generalize to parallel processor systems.
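The counterexample can be checked mechanically. The sketch below recomputes each job's completion period from the machine assignments above and compares total flowtime: the SRPT schedule gives Σ F_j = 18 while the better schedule gives 17.

```python
# Compare the two schedules from the example: rows are machines, columns are
# periods, entries are job numbers (0 = idle).  Flowtime of job j is its
# completion period minus its release time r[j].

def total_flowtime(schedule, r):
    finish = {}
    for machine in schedule:
        for t, j in enumerate(machine, start=1):
            if j:
                finish[j] = max(finish.get(j, 0), t)
    return sum(finish[j] - r[j] for j in finish)

r = {1: 0, 2: 0, 3: 0, 4: 1, 5: 2, 6: 6, 7: 6}

srpt = [[1, 1, 1, 2, 2, 2, 6, 2, 2],
        [3, 4, 5, 1, 0, 0, 7, 0, 0]]
better = [[2, 2, 1, 1, 1, 1, 6],
          [3, 4, 5, 2, 2, 2, 7]]

print(total_flowtime(srpt, r), total_flowtime(better, r))   # → 18 17
```

The second schedule splits job 2 across both machines, trading job 1's flowtime up from 4 to 6 but cutting job 2's from 9 to 6, a net improvement.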


B. Min-Max Criteria

When both nonsimultaneous release times and due date restrictions are imposed on the problem, we find that Bratley, Florian and Robillard [13] have developed a network formulation for minimizing F_max subject to T_max = 0. It is best illustrated through the following example.

Let n = 3 and m = 2 with the following data:

    j    p_j
    1     2
    2     1
    3     3

Letting T denote the length of the scheduling horizon, suppose T = max_j d_j = 4. Then the problem of scheduling the jobs subject to release time and due date constraints is equivalent to finding a feasible flow on the following network.

[Figure: feasibility network for the example.]


That is, all arcs of the form (S, Job j) have capacity p_j, all arcs of the form (t_i, F) have capacity m, and an arc with capacity one is drawn from Job j to t_i if job j can be processed in period t_i (i.e., if r_j ≤ t_i ≤ d_j). If no feasible flow is obtained then T is increased and the procedure is repeated. If a feasible flow is obtained, then T is decreased. The minimum value of T for which a feasible flow is obtained is the optimal value of F_max.

When due date restrictions are not imposed on the jobs, a more efficient rule can be applied for minimizing F_max when the jobs are not simultaneously available. This result is presented in Chapter 3.
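The feasibility test for a given horizon T can be sketched with a small max-flow routine. The data below are hypothetical (the example's release and due dates are only partly legible): each job carries (r_j, p_j, d_j) with r_j read as the first period in which the job may run, each period node passes at most m units, and the instance is feasible iff the maximum flow equals Σ p_j.

```python
from collections import deque

# Feasibility network for F_max subject to T_max = 0: source -> job j
# (capacity p_j), job j -> period t (capacity 1, iff r_j <= t <= d_j),
# period t -> sink (capacity m).  Feasible iff max flow saturates all job arcs.

def feasible(jobs, m, T):
    # node ids: 0 = source, 1..n = jobs, n+1..n+T = periods, n+T+1 = sink
    n = len(jobs)
    sink = n + T + 1
    cap = {}

    def add(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)           # residual arc

    for j, (r, p, d) in enumerate(jobs, start=1):
        add(0, j, p)
        for t in range(1, T + 1):
            if r <= t <= d:
                add(j, n + t, 1)
    for t in range(1, T + 1):
        add(n + t, sink, m)

    flow = 0
    while True:                             # BFS for an augmenting path
        parent = {0: None}
        q = deque([0])
        while q and sink not in parent:
            u = q.popleft()
            for (a, b), c in cap.items():
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    q.append(b)
        if sink not in parent:
            break
        v = sink                            # push one unit along the path
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
    return flow == sum(p for _, p, _ in jobs)

# Hypothetical instance with n = 3, m = 2: (r_j, p_j, d_j) per job, T = 4.
jobs = [(1, 2, 4), (1, 1, 2), (2, 3, 4)]
print(feasible(jobs, m=2, T=4))   # → True
```

Shrinking the horizon to T = 3 leaves job 3 only two eligible periods for three units of work, so the same routine reports infeasibility, illustrating the bisection on T described above.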

2.10 Paths 2 and 1

Scheduling with jobs simultaneously available presents the simplest and by far most thoroughly understood problem in the field. Most of the research effort on parallel processor problems in this area is motivated by the success encountered in the solution of single machine problems, although the results have not been as gratifying. Inability to extend single machine results stems in part from the fact that properties of single machine problems do not carry over in general for more than one processor. For example, Conway et al. [16] have shown that for any optimality criterion that is strictly a function of job completion times (i.e., regular measures of performance), it is not necessary to consider schedules which involve job splitting. It then follows that the optimal schedule will be one of the n! permutations of the job numbers 1, ..., n (e.g., the SPT rule in Section 2.10.1 or the "due date" rule in Section 2.10.2). This convenient result does not extend to m ≥ 2 processors, for which job splitting is a relevant consideration in developing optimal schedules even when the jobs are all simultaneously available.

2.10.1 Min-Sum Criteria

A. F and Σ a_j F_j

Perhaps scheduling's most well known result was first established by Smith [74] who showed that the F problem for one machine is solved by the sequence [1], [2], ..., [n] with p_[1] ≤ p_[2] ≤ ... ≤ p_[n]. This result is more commonly known as the SPT, or shortest processing time, rule. The rule also minimizes W, L, C and Σ F_j^a (a > 0). Smith [74] also showed that the rule can be generalized to solve the Σ a_j F_j problem. The solution consists of the sequence [1], [2], ..., [n] with

    p_[1] / a_[1] ≤ p_[2] / a_[2] ≤ ... ≤ p_[n] / a_[n]

This "weighted SPT rule" also minimizes Σ a_j W_j, Σ a_j L_j and Σ a_j C_j.
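Both rules reduce to a single sort. A minimal sketch (the SPT rule is the special case with all a_j equal):

```python
# Smith's weighted SPT rule: sequence jobs in nondecreasing order of p_j / a_j;
# with all a_j equal this is the SPT rule minimizing mean flowtime.

def weighted_spt(p, a):
    order = sorted(range(len(p)), key=lambda j: p[j] / a[j])
    t, total = 0, 0
    for j in order:
        t += p[j]                 # completion (= flow) time of job j
        total += a[j] * t
    return order, total           # sequence and the value of sum a_j * F_j

p = [3, 1, 2]
a = [1, 1, 1]
order, value = weighted_spt(p, a)
print(order, value)   # → [1, 2, 0] 10
```

With unit weights the jobs run in SPT order with flowtimes 1, 3 and 6, summing to 10; no other permutation of these three jobs does better.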

Smith also considers the problem of minimizing F subject to the condition that no job be tardy. His algorithm (generalized by Rinnooy-Kan [61] to cover Σ a_j F_j) consists of initially scheduling the jobs according to the "due date" rule (see Section 2.10.2) to determine if at least one schedule exists such that no jobs are tardy. If a schedule does exist, he then reorders the jobs such that job k is assigned to be last in the sequence if

(a) d_k ≥ Σ_j p_j, and

(b) p_k / a_k ≥ p_i / a_i for all i with d_i ≥ Σ_j p_j.

That is, he allows a job to be in the last position only if (a) this does not cause it to have non-zero tardiness; and (b) the job has the greatest p_j / a_j ratio of all jobs that could be last without being tardy. Once a job is selected to be last, the procedure is repeated with the remaining n - 1, n - 2, ..., jobs until all jobs are ordered.

In the case of parallel processors McNaughton [50] proved that there exists an optimal schedule without job splitting for both the F and Σ a_j F_j problems.

For the F problem, Conway et al. [16] have shown that an extension of the SPT rule for single machines yields an optimal schedule. Their modified SPT rule initially assigns the m jobs with shortest processing time to the machines. Each time a machine finishes a job, it is assigned, from among those jobs waiting, the job with the shortest processing time.

An interesting property of this rule is that jobs in equivalent positions on different machines can be interchanged without affecting F, in which case several alternate assignments will yield optimum schedules. The rule also minimizes W and L. As we have seen in the previous section, however, this rule will not extend to parallel processor problems with nonsimultaneous job arrival times even if job splitting is allowed.

The Σ a_j F_j problem on parallel processors was first approached by Eastman, Even and Isaacs [23] who showed that a lower bound on the problem is given by


    F_a(m) ≥ [(n + m) / (m(n + 1))] F_a(1)        (1)

where F_a(m) is the mean weighted flow time with m machines. Moreover,

they show that, in general, no greater lower bound is attainable by exhibiting a set of jobs that attain this bound. At present a solu- tion to this problem has not been obtained. As Baker [4] notes, an optimal solution to the general problem will be characterized by the "weighted SPT" rule on each machine, but the problem arises in decid- ing how to partition the jobs to the machines. Baker and Merten [6] have explored some elementary properties of this problem and use them to develop three heuristic rules. An experimental comparison of the rules yielded no dominating results.

In a recent paper Miller [54] considers the more complex problem of scheduling n jobs on m parallel processors with the added restriction that exactly q_j machines must be assigned to job j for a continuous interval p_j, where job splitting is not allowed. He presents a branch and bound argument for minimizing F to arrive at optimal solutions for small problems (n = 8).

B. Number of Tardy Jobs

A rather ingenious procedure for solving the problem of minimizing the number of tardy jobs on a single machine was presented by Moore [55]. His algorithm, as modified by Hodgson [55], consists of:

(i) Scheduling the jobs by the "due date" rule. If no jobs are tardy, then we terminate.

(ii) If job [k] is the first tardy job in the due date sequence, then discard job [ℓ] where

    p_[ℓ] = max{p_[1], p_[2], ..., p_[k]}.

Repeat steps (i) and (ii) until no jobs are tardy.

(iii) Once a non-tardy subsequence has been found, all discarded jobs are added on in any order to complete the optimal sequence.
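The procedure can be sketched in a few lines; the four-job instance below is hypothetical.

```python
# Moore-Hodgson: schedule by due date; whenever a job finishes late, discard
# the longest job scheduled so far.  Discarded jobs go last (they are the
# tardy ones); the number discarded is the minimum number of tardy jobs.

def moore_hodgson(p, d):
    on_time, discarded, t = [], [], 0
    for j in sorted(range(len(p)), key=lambda j: d[j]):   # "due date" rule
        on_time.append(j)
        t += p[j]
        if t > d[j]:                                      # first tardy job
            longest = max(on_time, key=lambda k: p[k])
            on_time.remove(longest)
            discarded.append(longest)
            t -= p[longest]
    return on_time + discarded, len(discarded)

# Hypothetical data.
p = [1, 2, 3, 4]
d = [2, 4, 7, 8]
seq, n_tardy = moore_hodgson(p, d)
print(seq, n_tardy)   # → [0, 1, 2, 3] 1
```

The due date sequence first runs late at job 4 (completion 10 > 8); discarding it, the longest job so far, leaves the remaining three on time, so exactly one job must be tardy.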

Sidney [73] has presented a variation of this algorithm for solving this same problem subject to the constraint that some subset of the jobs must be completed on time.

Lawler and Moore [47] developed a functional equation which allows for the formulation of the weighted number of tardy jobs problem through dynamic programming.

Moore's rule does not extend to the general parallel processor problem. When job splitting is allowed, an extension of Moore's rule can be used to solve the special case when all job due dates are identical (i.e., d_j = d for all j). This result is presented in Chapter 3.

The problem of minimizing Σ c_j(t) and its special cases T and Σ a_j T_j on a single processor has received extensive treatment in the literature. The Σ a_j T_j problem was first postulated by McNaughton [50] who noted the solution for two extreme cases:

(a) If a solution exists such that no jobs are tardy, then Jackson's "due date" rule is optimal.

(b) If it is the case that for all feasible solutions all jobs would be tardy (i.e., d_j ≤ p_j for all j), then Smith's weighted SPT rule is optimal.


In the absence of these two occurrences the problem becomes formidable even though the generality of the criteria has motivated a multitude of researchers to try their hand at its solution.

The first approach on the T problem was carried out by Emmons [28]. He develops two basic theorems that can be used to eliminate subsets of sequences from consideration. He then uses the theorems as elimination rules in the implementation of a branch and bound solution procedure. While various other approaches have been tried (all procedures for solving Σ a_j T_j and Σ c_j(t) of course apply to T), it should be pointed out that Baker and Martin [5] have found a heuristic developed by Wilkerson and Irwin [78] to be the most efficient of current techniques for the T problem. Lawler and Moore [47] present a dynamic programming formulation for the case when all jobs are subject to primary and secondary due dates.

Research on the Σ a_j T_j problem was initiated by Schild and Fredman [68]. Despite their initial claims, their procedure was found to be non-optimal due to the inadequacy of the pairwise comparison method utilized in their decision rule [16]. Dynamic programming solutions for this problem are in Held and Karp [37], Lawler [45] and Srinivasan [76]. Branch and bound procedures have been developed by Elmaghraby [25], Fisher [29] and Shwimer [71]. Petersen [57] has presented an efficient heuristic based on the use of reordering operations.

The case of non-linear but monotone non-decreasing deferral costs is formulated by Lawler [45] as a dynamic programming problem. Rinnooy-Kan, Lageweg and Lenstra [62] develop a branch and bound approach.


The general non-linear case has been approached by Gotterer [34] and Schild and Fredman [69] through dynamic programming and by Shwimer [71] using branch and bound.

The problem for parallel processors was first addressed by McNaughton [50] who showed that if the cost functions are linear no job splitting is required. Rothkopf [66] came up with the first optimal procedure via a dynamic programming algorithm. Gupta and Walvekar [36] formulated the problem as a 0-1 mixed integer program. Arthanari and Ramamurthy [2] and Elmaghraby and Park [27] have devised branch and bound schemes that generate optimal solutions. Computa- tional results are provided only in the latter reference.

Lawler [45] looked at the problem of non-linear but monotone nondecreasing deferral costs and was able to formulate the special case of uniform processing times as a transportation problem. He also formulated the general non-identical job case as a capacitated transportation problem. The solution to this latter formulation, while not necessarily optimal, provides a lower bound on the optimal solution value.

Root [65] considers the special case when all jobs are subject to a common due date. He develops eight theorems by which non-optimal schedules are eliminated and generates a reduced set of schedules containing an optimal one. However, no systematic procedure is provided for determining which schedule(s) in the set is optimal.

Despite the assault that has been carried out on this problem, the lack of non-enumerative solutions is sufficient testimony to its complexity. This is further highlighted by Karp [44] having included the Σ a_j T_j problem for a single machine on his list of problems that are not expected to have a polynomial bound on their solution time. Even in the T problem, it appears that serious theoretical complications are caused by the non-linearity of T_j.

2.10.2 Min-Max Criteria

A. F_max

The F_max problem with job splitting on m identical parallel processors was first solved by McNaughton [50] who observed that a necessary condition for a schedule of value F_max = T* to be optimal is that:

    T* = max( max_j (p_j), ⌈(Σ_{j=1}^n p_j) / m⌉ )

where ⌈x⌉ denotes the smallest integer greater than or equal to x. McNaughton's procedure for generating a schedule of value T* (as explained by Baker [4]) consists of:

(i) Selecting any job to begin on machine one at time zero.

(ii) Choosing any unscheduled job and scheduling it as early as possible on the same machine. This step is repeated until the machine is occupied beyond T*, or until all jobs are scheduled.

(iii) Reassigning the processing scheduled beyond T* to the next machine instead, starting at time zero, and returning to step (ii).
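McNaughton's wrap-around construction can be sketched as follows; the five-job instance is hypothetical, and the output lists (job, start, end) pieces per machine.

```python
import math

# McNaughton's rule: T* = max(max p_j, ceil(sum p_j / m)).  Fill machine 1
# from time zero, and whenever a job runs past T*, move the excess to the
# next machine starting at time zero.  Since p_j <= T*, the two pieces of a
# split job never overlap in time.

def mcnaughton(p, m):
    t_star = max(max(p), math.ceil(sum(p) / m))
    machines, cur, t = [[] for _ in range(m)], 0, 0
    for j, pj in enumerate(p):
        while pj > 0:
            piece = min(pj, t_star - t)
            machines[cur].append((j, t, t + piece))
            pj -= piece
            t += piece
            if t == t_star:                # machine full: wrap to the next one
                cur, t = cur + 1, 0
    return t_star, machines

t_star, sched = mcnaughton([2, 3, 4, 5, 6], m=3)
print(t_star)   # → 7
```

Here Σ p_j = 20 on m = 3 machines gives T* = ⌈20/3⌉ = 7; job 2 is split between machines 1 and 2 at the wrap point.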

In Chapter 3 a "longest remaining processing time" rule for minimizing F_max even when jobs are not simultaneously available is developed, which provides an alternate procedure to McNaughton's rule for simultaneous job availabilities.

B. L_max

For single machine problems, Jackson [42] has shown that both the L_max and T_max problems are solved by the sequence [1], [2], ..., [n] where

    d_[1] ≤ d_[2] ≤ ... ≤ d_[n]

(i.e., schedule the jobs in nondecreasing due date order). This classic result is commonly referred to as the "due date" rule.

When job splitting is allowed it is possible to develop an efficient optimal rule for minimizing L_max on parallel processors when all jobs are simultaneously available. It is at this point that our review terminates and the development of rules for classical parallel processor problems begins in Chapter 3.

CHAPTER 3

SCHEDULING RULES FOR SOME PARALLEL PROCESSOR PROBLEMS

3.1 Introduction

When scheduling on a single machine, Conway et al. [16] have shown that situations where job splitting is allowed need not be considered. The most basic single machine problem is then the no job splitting problem without restrictions on job availabilities. Such is not the case for parallel processors, where it is often true that better schedules can be obtained by permitting jobs to be processed by more than one machine or by different machines in different periods. The job splitting assumption, while uncommon in problems of a machine shop nature, acquires relevance in instances where machine changeover times are practically instantaneous (e.g., computer systems) or where job processing times are relatively long (i.e., days, weeks) as oftentimes occurs in resource allocation problems modeled as parallel processor systems.

In Section 3.2 we consider the most basic parallel processor problem where job splitting is allowed and no restrictions are imposed on job availabilities. The problem is formulated in a general context which is used to develop an optimal rule for minimizing maximum lateness when all jobs are assumed simultaneously available for processing. The


rule consists essentially of assigning priority to those jobs which have the "greatest potential lateness" or the "smallest remaining slack" at the beginning of each period in the scheduling horizon. It is also shown that Jackson's "due date" algorithm (Section 2.10.2.B) for one machine problems is a special case of this rule.

In Section 3.3 we drop the assumption that all jobs are simul- taneously available. In this situation the results of Section 3.2 extend only to the problem of minimizing maximum flow time where a "longest remaining processing time" rule is found optimal. Properties and relationships of this rule to other problems in the literature are discussed. Finally a very efficient heuristic is developed for the maximum lateness problem based on the results of Section 3.2. Computa- tional results are also presented.

As noted in Section 2.9.2, an important question in job splitting concerns how often the jobs can be split. In Section 3.4, a straightforward transformation is presented which allows the results of Sections 3.2 and 3.3 to be applied to situations where some or all of the jobs can be processed by more than one machine simultaneously.

In Section 3.5 two min-sum parallel processor problems are presented. It is first shown that the problem of minimizing Σ c_j(t) for the special case of unit processing time can be formulated as an assignment problem. An algorithm for minimizing the number of tardy jobs when jobs are subject to a common due date and job splitting is allowed is also presented, based on the results of Section 3.3.

In Section 3.6 repeated application of a necessary condition is used to present an algorithm for minimizing max_j c_j(t).


Finally, a brief discussion of the results of this chapter is presented in Section 3.7.

3.2 Scheduling with Job Splitting and No Restrictions on Job Availabilities

In this section a scheduling rule for minimizing maximum lateness on m parallel processors is presented. We first consider the scheduling horizon to be divided into discrete periods such that the length of any period corresponds to one unit of processing. We then allow the jobs to be split in the sense that any job can be processed by any machine in any given period, and it is not required that once a job's processing has commenced it must be left on the machine until completed. It is required, however, that no job be processed by more than one machine in the same period.

The rule consists of determining the "potential lateness" or "remaining slack" for all jobs not yet completed at the beginning of each period. The rule then assigns priority to those jobs with "greatest potential lateness" or "smallest remaining slack." The values are updated at the beginning of each period and the procedure is repeated until all jobs have been scheduled.

3.2.1 General Formulation

Given n jobs to be processed on m identical parallel processors such that job splitting is allowed and all jobs are simultaneously available (i.e., r_j = 0 for all j):

Define: p_j(t) = the processing time remaining for job j at the beginning of period t.

A feasible schedule at the beginning of t is one that satisfies the following restrictions: (i) no job is processed on more than one machine during the same period; (ii) jobs are changed only at the beginning of periods; and (iii) p_j(T + 1) = 0 for all j, where T is the final period.

Let X(t) = [x_jt'] denote an n by (t - 1) matrix representing a feasible schedule generated up to the beginning of period t, where:

x_jt' = 1 if job j is processed on a machine during period t' (t' ≤ t - 1), and x_jt' = 0 otherwise,

and

Σ_{t'=1}^{t-1} x_jt' = the number of periods that job j has been processed in a feasible schedule X(t).

Let

C(X(t)) = {j | p_j(t) > 0 in a schedule X(t)}.

We define

I(X(t)) = the total idle time on all m machines in a schedule X(t), where:

I(X(t)) = m(t - 1) - Σ_{j=1}^{n} Σ_{t'=1}^{t-1} x_jt'

Associated with each job j is a "potential lateness function" L_j(X(t)) defined recursively as follows:


L_j(X(1)) = b_j, where b_j is a known constant, and

L_j(X(t)) = L_j(X(t - 1)) if job j is either on a machine during period t - 1 or if j ∉ C(X(t)); L_j(X(t)) = L_j(X(t - 1)) + 1 otherwise.

A schedule X*(t) will be called a t-optimal schedule if, for a given t,

ℒ(X*(t)) ≤ ℒ(X(t)) for all feasible schedules X(t), where:

ℒ(X(t)) = max_{j=1,...,n} {L_j(X(t))}

It will subsequently be shown that for any problem adhering to the previous formulation, the following "greatest potential lateness" rule will generate a t-optimal schedule.

Let |·| indicate the number of elements in a set. For any t' = 1, 2, ..., t, schedule all j ∈ C(X(t')) so that:

(i) If |C(X(t'))| ≤ m, then process all jobs in C(X(t')).

(ii) If |C(X(t'))| > m, then process any m jobs in C(X(t')) such that if job q is processed and job r is not, then

L_q(X(t')) ≥ L_r(X(t'))

(i.e., process jobs [1], ..., [m], where

L_[1](X(t')) ≥ L_[2](X(t')) ≥ ... ≥ L_[m](X(t')) ≥ ... ≥ L_[ℓ](X(t')),

ℓ ≤ n, and [j] denotes the job with the j-th largest "potential lateness function" value).

Properties of Schedules Generated by the "Greatest Potential Lateness" (GPL) Rule:

Suppose t* is any period such that at the beginning of t* there exist at least m + 1 jobs with L_[1](X*(t*)) = L_[2](X*(t*)) = ... = L_[m+1](X*(t*)) = ℒ(X*(t*)), where X*(t*) denotes any schedule generated by the GPL rule up to the beginning of t*. It then follows that an increase in the value of ℒ(X*(t)) occurs at the beginning of t* + 1 (i.e., ℒ(X*(t* + 1)) > ℒ(X*(t*))).

Define Q* to be the set of jobs j such that L_j(X*(t*)) = ℒ(X*(t*)), where |Q*| ≥ m + 1.

Property 3.1: If there exists a schedule X(t*) ≠ X*(t*) such that

ℒ(X(t* + 1)) < ℒ(X*(t* + 1)),

then it must be the case that for at least one job j ∈ Q*,

Σ_{t=1}^{t*-1} x_jt > Σ_{t=1}^{t*-1} x*_jt

Proof: If Σ_{t=1}^{t*-1} x_jt ≤ Σ_{t=1}^{t*-1} x*_jt for all j ∈ Q*, then L_j(X(t*)) ≥ ℒ(X*(t*)) for all j ∈ Q*. Since |Q*| > m, it follows that for some job j ∈ Q*,

L_j(X(t* + 1)) > ℒ(X*(t*)),

and therefore that ℒ(X(t* + 1)) ≥ ℒ(X*(t* + 1)). □

Property 3.2: Consider any schedule X(t*) having, for some job i ∈ (C(X*(t*)) ∩ C(X(t*))),

Σ_{t=1}^{t*-1} x_it < Σ_{t=1}^{t*-1} x*_it

(a) If i ∈ Q*, then L_i(X(t*)) > ℒ(X*(t*)).

(b) If i ∉ Q*, then L_i(X(t*)) ≥ ℒ(X*(t*)).

Proof: (a) If i ∈ Q*, then L_i(X*(t*)) = ℒ(X*(t*)). Since i is now processed at least one period less in X(t*), then L_i(X(t*)) > ℒ(X*(t*)).

(b) Let p and f denote the first and last periods, respectively, that job i was processed in the periods t = 1, 2, ..., t* - 1 in the schedule X*(t*). Since i ∉ Q*, |Q*| ≥ m + 1, and r_j = 0 for all j, it must be the case that at the beginning of period f, L_i(X*(f)) ≥ L_j(X*(f)) for some job j ∈ Q*. (Otherwise, under the GPL rule, job i would not be processed during period f.) Let X'(t*) be a schedule identical to X*(t*) except that job i is not processed in period f.

Since Σ_{t=1}^{t*-1} x'_it = Σ_{t=1}^{t*-1} x*_it - 1, then L_i(X'(f)) ≥ L_i(X*(f)), which implies L_i(X'(f)) ≥ L_j(X*(f)) for some job j ∈ Q*. Since job i is not processed from f to t*, then for all t where f ≤ t < t*, L_i(X'(t + 1)) = L_i(X'(t)) + 1, whereas L_j(X*(t + 1)) ≤ L_j(X*(t)) + 1. It then follows that for t = t*, L_i(X'(t*)) ≥ L_j(X*(t*)) for all j ∈ Q*. Finally, let X(t*) ≠ X*(t*) be any other schedule such that Σ_{t=1}^{t*-1} x_it < Σ_{t=1}^{t*-1} x*_it.

Note that for i incomplete by t*, L_i(X(t*)) is solely a function of the number of periods i is worked on up to the beginning of t*, not of when i is worked on. Therefore

L_i(X(t*)) ≥ L_i(X'(t*)) ≥ ℒ(X*(t*)). □

Property 3.3: In order to show that the GPL rule is t*-optimal, it suffices to show that for any other schedule X(t*) having

Σ_{t=1}^{t*-1} x_jt > Σ_{t=1}^{t*-1} x*_jt for some job j ∈ Q*,

it must be the case that

Σ_{t=1}^{t*-1} x_it < Σ_{t=1}^{t*-1} x*_it for some job i.

Proof: The result follows directly from Properties 3.1 and 3.2. □

Property 3.3 provides a condition whereby the "greatest potential lateness" rule is t-optimal for all periods t with the property of t*. In general, it is not a straightforward task to show that Property 3.3 will hold. However, the following property characterizes a situation under which the conditions of Property 3.3 will always hold.

Property 3.4: If X*(t*) is such that I(X*(t*)) ≤ I(X(t*)) for all X(t*) ≠ X*(t*), then Property 3.3 holds and X*(t*) is a t*-optimal schedule.

Proof: Let X*(t*) be a schedule generated by the GPL rule such that I(X*(t*)) ≤ I(X(t*)) for all X(t*) ≠ X*(t*). Assume there exists a schedule X(t*) ≠ X*(t*) with ℒ(X(t* + 1)) < ℒ(X*(t* + 1)) that does not satisfy Property 3.3 (i.e., X(t*) is such that Σ_{t=1}^{t*-1} x_jt > Σ_{t=1}^{t*-1} x*_jt for some j ∈ Q* and Σ_{t=1}^{t*-1} x_it ≥ Σ_{t=1}^{t*-1} x*_it for all other jobs i). Then by processing job j at least one more period before t*, total idle time is reduced and hence a contradiction exists. □

Finally, it should be noted that for any t ≠ t*, ℒ(X*(t)) = ℒ(X*(t + 1)) and the GPL rule is clearly optimal.

3.2.2 Minimization of Maximum Lateness (L_max)

It will now be shown that the problem of minimizing L_max on m parallel processors with job splitting and simultaneous job availabilities can be adapted to the previous formulation and optimally solved with the GPL rule.

For all jobs j = 1, ..., n, let:

p_j(t) = processing time remaining for job j at the beginning of period t, such that:

p_j(1) = total processing time for job j, and for t ≥ 2,

p_j(t) = p_j(t - 1) - 1 if job j is on a machine during period t - 1 or if j ∉ C(X(t)); p_j(t) = p_j(t - 1) otherwise.

In addition let:

d_j(t) = the number of periods remaining at the beginning of period t until job j is due, where:

d_j(1) = the original due date for job j, and

d_j(t) = d_j(t - 1) - 1, for t ≥ 2.

Finally, define the "potential lateness function" L_j(X(t)) such that:

L_j(X(1)) = p_j(1) - d_j(1), and

L_j(X(t)) = p_j(t) - d_j(t) for t ≥ 2.

It then follows from the definitions of p_j(t) and d_j(t) that

L_j(X(t)) = L_j(X(t - 1)) if job j is either on a machine during period t - 1 or if j ∉ C(X(t)); L_j(X(t)) = L_j(X(t - 1)) + 1 otherwise.

Rule 3.1: An optimal rule for the L_max (T_max) problem consists of scheduling, for any t' = 1, 2, ..., t, all j ∈ C(X(t')) so that:

(i) If |C(X(t'))| ≤ m, then all jobs in C(X(t')) are processed.

(ii) If |C(X(t'))| > m, then process any m jobs in C(X(t')) such that if job q is processed and job r is not, then

L_q(X(t')) ≥ L_r(X(t')), or equivalently,

p_q(t') - d_q(t') ≥ p_r(t') - d_r(t')

in the t'-optimal schedule X(t').

Proof: Optimality of the rule follows from the observation that since r_j = 0 for all j, I(X*(t*)) = 0 ≤ I(X(t*)) for all feasible schedules X(t*) ≠ X*(t*) and any period t* (where t* is as defined previously). Therefore, Property 3.4 holds and the optimality condition of Property 3.3 is satisfied. □

We also note that an equivalent formulation of Rule 3.1 consists in giving priority to job q if:

d_q(t') - p_q(t') ≤ d_r(t') - p_r(t'),

in which case the decision is based on favoring those m jobs with smallest remaining slack at the beginning of each period.

An Example: Consider the following problem requiring the scheduling of n = 5 jobs on m = 2 parallel processors, where the data generated by the GPL rule are displayed below.

t                    1     2     3     4
p_a(t)               3     3     3     3
p_b(t)               1     0     -     -
p_c(t)               2     1     0     -
p_e(t)               1     1     1     0
p_f(t)               5     5     4     3
d_a(t)               5     4     3     2
d_b(t)               2     -     -     -
d_c(t)               2     1     -     -
d_e(t)               3     2     1     -
d_f(t)               6     5     4     3
p_a(t) - d_a(t)     -2    -1     0*    1*
p_b(t) - d_b(t)     -1*   -     -     -
p_c(t) - d_c(t)      0*    0*    -     -
p_e(t) - d_e(t)     -2    -1     0*    -
p_f(t) - d_f(t)     -1*    0*    0*    0*
|C(X(t))|            5     4     3     2

where * denotes the jobs which are candidates for scheduling at each period.

t = 1: Job c and either job b or f must be processed during t = 1. Arbitrarily select job b.

t = 2: Job b is now completed and need no longer be considered (i.e., |C(X(2))| = 4).

Jobs c and f must be processed during t = 2.

t = 3: Job c is now completed and is no longer considered. ℒ(X(t*)) = 0 and |Q*| = 3 > m = 2, so t = 3 is a period t* at which an increase in ℒ(X(t*)) will occur at t* + 1 = 4. Jobs a, e and f ∈ Q*. The schedule up to the beginning of t* = 3 consists of:

M1 | Job c | Job c |
M2 | Job b | Job f |
     t = 1   t = 2

I(X(t*)) = 0, and it can be readily verified that if either job a, e, or f were worked on one more period before t* = 3, it would imply that ℒ(X(t*)) > 0 (Property 3.2).

Arbitrarily select jobs e and f for processing at t = 3.

t = 4: Job e is completed and no longer considered. At this point |C(X(4))| = 2 = m and r_j = 0 for all j imply that no further decisions are necessary. Both remaining jobs a and f are processed until completed. An optimal schedule with L_max = T_max = 1 is given by:

M1 | Job c | Job c | Job e | Job a | Job a | Job a |
M2 | Job b | Job f | Job f | Job f | Job f | Job f |
     t = 1     2       3       4       5       6
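The period-by-period mechanics of Rule 3.1 can be sketched in a few lines of code. The following is a minimal illustration only, not part of the original dissertation; the function name is arbitrary, and ties in potential lateness are broken by input order (by Rule 3.1, the tie-break does not affect the optimal value of L_max).

```python
def gpl_schedule(p, d, m):
    """Greatest Potential Lateness rule (Rule 3.1): job splitting
    allowed, all jobs available at t = 1 (r_j = 0).

    p, d: integer processing times and due dates; m: machines.
    Returns (completion periods, L_max)."""
    n = len(p)
    rem, comp, t = list(p), [0] * n, 1
    while any(rem):
        active = [j for j in range(n) if rem[j] > 0]
        # potential lateness L_j(X(t)) = p_j(t) - d_j(t), where
        # d_j(t) = d_j - (t - 1); process the m largest values
        active.sort(key=lambda j: rem[j] - (d[j] - (t - 1)), reverse=True)
        for j in active[:m]:
            rem[j] -= 1
            if rem[j] == 0:
                comp[j] = t
        t += 1
    lmax = max(comp[j] - d[j] for j in range(n))
    return comp, lmax
```

Running it on the five-job data of the example above (jobs a, b, c, e, f in that order) returns L_max = 1, agreeing with the optimal schedule shown.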

3.2.3 The "Due Date" Rule as a Special Case of the "Greatest Potential Lateness" Rule

In Section 2.10.2.B, Jackson's "due date" rule for minimizing L_max (T_max) on a single machine was shown to consist of sequencing the n jobs so that d_[1] ≤ d_[2] ≤ ... ≤ d_[n]. While his result is based on the assumption of no job splitting, application of the GPL rule to the one-machine problem allows one to arrive at the same conclusion.

Property 3.5: Any n job single machine problem with simultaneous job arrivals and job splitting can be converted into an equivalent N job problem without job splitting as follows:

Split job j (for all j = 1, ..., n) into p_j identical jobs, each with processing time p_j' = 1 and due date d_j' = d_j. The original L_max problem with job splitting is equivalent to an N (N = Σ_{j=1}^{n} p_j) job single machine problem where all jobs have unit processing times and due dates d_j'.

Applying the GPL rule to the N job problem results in scheduling, for any period t, job q' over job r' if:

p_q'(t) - d_q'(t) ≥ p_r'(t) - d_r'(t) for all r' ≠ q'.

Since p_q' = p_r' = 1 for all q' and r', the rule becomes equivalent to selecting job q' if

d_q' ≤ d_r' for all r' ≠ q'.

Since there is only one machine, the original job q will have

priority until it is completed. Therefore, no job in the optimal

schedule will be split and all original jobs will be scheduled in due

date order.

3.3 Non-Simultaneous Job Arrivals

When the jobs are not assumed to be simultaneously available (i.e., r_j ≠ 0 for some j), the rule of Section 3.2 minimizes L_max only for the special case of a single machine. We will show, however, that the problem of minimizing F_max can be solved by an extension of the GPL rule.

3.3.1 Minimization of Maximum Flow Time (F_max)

When due date restrictions are not imposed on the jobs but the jobs are not simultaneously available for processing, the following "longest remaining processing time" (LRPT) rule provides an efficient procedure for minimizing F_max.

Consider the scheduling horizon to be divided into discrete periods such that the length of each period corresponds to one unit of processing time. We again define:

p_j(t) = the processing time remaining for job j at the beginning of period t, where

p_j(t) = p_j(t - 1) - 1 if job j is processed during period t - 1; p_j(t) = p_j(t - 1) otherwise, and

C(X(t)) = {j | p_j(t) > 0 in the schedule X(t)}.

In addition define:

A(t) = {j | r_j ≤ t} (i.e., A(t) is the set of all jobs available for processing at or before the beginning of period t).

Rule 3.2: Any schedule X*(t) which has the property that at the beginning of any period t, all jobs available and not yet completed (i.e., all j ∈ A(t) ∩ C(X*(t))) are scheduled such that:

(i) if |A(t) ∩ C(X*(t))| ≤ m, then all jobs in A(t) ∩ C(X*(t)) are processed, and

(ii) if |A(t) ∩ C(X*(t))| > m, then any m jobs in A(t) ∩ C(X*(t)) are processed such that if job q is processed and job r is not, then

p_q(t) ≥ p_r(t),

minimizes F_max.


The rule translates to assigning priority at the beginning of each period to those m available jobs with longest remaining processing time. The procedure starts with t = 1 and is repeated until the beginning of period T, where T is such that |C(X*(T))| = 0.

Proof: Consider the beginning of any period t and any two jobs q and r such that either q or r may be processed on a machine during t, but not both. Assume that p_r(t) < p_q(t) but that r is selected for processing during t. Let X(T) be any arbitrary schedule after t; for example, one in which p_q(t) = 6 > 5 = p_r(t).

At the beginning of t, the maximum number of overlaps in X(T) between jobs q and r (i.e., the maximum number of periods that q and r could be processed simultaneously) is given by:

max OL(t) ≤ min(p_q(t), p_r(t)) = p_r(t).

However, since r is processed during t and q is not, the maximum number of overlaps from t + 1 to T in X(T) is given by:

max OL(t + 1) ≤ p_r(t + 1) = p_r(t) - 1.

This implies that there is at least one period after t where q is processed and r is not. Hence r and q can be interchanged at t, the schedule X'(T) after the interchange is also feasible, and F_max in X'(T) is less than or equal to F_max in X(T).

Now, given any arbitrary schedule, starting at t = 1 the jobs can be interchanged so that they satisfy the "longest remaining processing time" rule. The interchange procedure is repeated for t = 2, 3, ..., T to generate a schedule X*(T). Since all schedules which satisfy the LRPT rule have the same value of F_max, the result follows. □

This "longest remaining processing time" rule for minimizing F_max when jobs are not simultaneously available of course provides an alternate procedure to McNaughton's rule (Section 2.10.2.A) for the F_max problem with simultaneous job availabilities. In this case the LRPT rule simplifies to:

Starting with t = 1:

(i) Rank the jobs so that p_[1](t) ≥ p_[2](t) ≥ ... ≥ p_[n](t).

(ii) Schedule the first m jobs on the machines in period t.

(iii) Let t = t + 1, and update all processing times, where again p_j(t) = p_j(t - 1) - 1 if job j is processed during period t - 1; p_j(t) = p_j(t - 1) otherwise.

(iv) Repeat steps (i), (ii), (iii) until all jobs are completed.
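Steps (i)-(iv) above translate directly into code. The sketch below is an illustration under the simultaneous-availability assumption (r_j = 0 for all j) and is not part of the original dissertation; the function name is arbitrary, and the function returns the number of periods used, which is the value of F_max in this case.

```python
def lrpt_fmax(p, m):
    """Simplified LRPT rule for F_max with simultaneous availabilities.

    p: integer processing times; m: machines. Returns periods used."""
    rem = list(p)
    t = 0
    while any(rem):
        # step (i)-(ii): rank by remaining time, run the first m jobs
        for j in sorted(range(len(rem)), key=lambda j: rem[j],
                        reverse=True)[:m]:
            if rem[j] > 0:        # guard: fewer than m jobs may remain
                rem[j] -= 1
        t += 1                    # step (iii): advance one period
    return t
```

On the five-job data of the Section 3.2.2 example this yields 6 periods, i.e., max{⌈12/2⌉, 5} = 6, matching McNaughton's bound.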

In Section 2.7.2, a procedure by T. C. Hu for minimizing F_max when jobs are subject to precedence constraints was discussed. Hu's algorithm was based on the assumption that all jobs have unit processing time and the precedence graph is a tree with all arcs directed towards the root node. In the problem considered in this section we assume that the jobs are not constrained by any precedence relationships, but that job processing times are arbitrary integers. However, it can be shown that T. C. Hu's procedure can be applied to the problem in this section and that Hu's algorithm is equivalent to Rule 3.2.

Again observe that any job j with integer processing time p_j can be modeled as a series of p_j jobs with unit processing time, where each except the last has one direct predecessor. For example, if we have three jobs with p_a = 3, p_b = 2 and p_c = 4, the problem is equivalent to minimizing F_max on a precedence graph in which each original job is divided into p_j subjobs of unit processing time [figure: the three chains of subjobs for jobs a, b and c, joined at a common root node]. The precedence relationships assure that no original job is worked on more than once in the same period, and the jobs are tied to a root node to complete the tree. Each path in the precedence graph corresponds to a particular job in the original problem. T. C. Hu's algorithm leads one to select, at each step, those m subjobs without predecessors from each of the m longest paths. With respect to the original problem without precedences, Hu's algorithm is equivalent to processing, at each step, one unit of those m jobs with longest remaining processing time, in which case the procedure is equivalent to Rule 3.2.

To complete our discussion of the F_max problem with non-simultaneous job arrivals, we note that an alternate approach has been suggested by Robillard [64]. Let r_1, r_2, ..., r_n denote the job release times, and let T denote any horizon length such that all jobs can be completed by T.

Starting from T and working backwards, minimize the maximum lateness of all jobs (L_max) by letting the job release times play the role of due dates. Let c_1, c_2, ..., c_n denote the completion times of the jobs after a schedule minimizing L_max has been constructed. A schedule that minimizes F_max is obtained by shifting the L_max schedule to the left by an amount equal to min_{j=1,...,n} (c_j - r_j). It then follows that the L_max problem with simultaneous job arrivals and the F_max problem with non-simultaneous job arrivals are equivalent. If job splitting is allowed, then the "greatest potential lateness" rule (Rule 3.1) can be used with Robillard's observation to provide yet another way of minimizing F_max when r_j ≠ 0 for some j. Robillard's procedure does not require job splitting but is in fact limited to job splitting cases, since optimal rules have yet to be developed for either the L_max or non-simultaneous F_max problems when job splitting is not allowed.

3.3.2 Minimization of Maximum Lateness (L_max)

The "greatest potential lateness" rule does not generalize to problems where m ≥ 2 for minimizing L_max. Failure of the rule when r_j ≠ 0 for some j centers around a tradeoff between potential lateness p_j(t) - d_j(t) and remaining processing time p_j(t) which cannot be resolved unilaterally. Since neither criterion is dominant, an optimal measure for assigning job priorities cannot be attained. (See Appendix 2 for a discussion of this occurrence.)

Rule 3.1 is extended to provide what results in a very efficient heuristic for the L_max problem when r_j ≠ 0 for some j. The heuristic employs the following rule:

For any t' = 1, 2, ..., t, schedule all j ∈ {A(t') ∩ C(X(t'))} such that:

(i) If |A(t') ∩ C(X(t'))| ≤ m, then process all jobs in A(t') ∩ C(X(t')).

(ii) If |A(t') ∩ C(X(t'))| > m, then process any m jobs in A(t') ∩ C(X(t')) such that if job q is processed and r is not, then either

(a) p_q(t') - d_q(t') > p_r(t') - d_r(t'), or

(b) p_q(t') - d_q(t') = p_r(t') - d_r(t') and p_q(t') ≥ p_r(t'),

where the arbitrariness of selecting jobs with equal potential lateness functions is resolved by picking the one with the longer remaining processing time.
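The heuristic can be sketched as follows. This is an illustration only (the original experiments used an APL/360 program); the function name, and the convention that r[j] is the first period in which job j may run, are assumptions of this sketch.

```python
def gpl_lrpt_heuristic(p, d, r, m):
    """Section 3.3.2 heuristic: greatest potential lateness among
    available jobs, ties broken by longest remaining processing time.

    p, d, r: processing times, due dates, release periods.
    Returns the L_max of the generated schedule."""
    n = len(p)
    rem, comp, t = list(p), [0] * n, 1
    while any(rem):
        avail = [j for j in range(n) if rem[j] > 0 and r[j] <= t]
        # key: (potential lateness p_j(t) - d_j(t), remaining time p_j(t))
        avail.sort(key=lambda j: (rem[j] - (d[j] - (t - 1)), rem[j]),
                   reverse=True)
        for j in avail[:m]:
            rem[j] -= 1
            if rem[j] == 0:
                comp[j] = t
        t += 1
    return max(comp[j] - d[j] for j in range(n))
```

With all release times equal to 1 it reduces to the GPL rule of Section 3.2; on the five-job example of Section 3.2.2 it again produces L_max = 1.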

In order to test the efficiency of the heuristic, an experiment was conducted in which 279 different problems were run. The heuristic was programmed in APL/360 [32] and problem sizes of up to 80 jobs on 5 machines were generated. Solutions obtained by the algorithm were initially compared to a lower bound on the problem. Those that did not attain the lower bound value were tested for optimality by not allowing any job to be processed beyond the furthest period such that L_j ≤ L_max, as generated by the heuristic, for all j. If no feasible schedule exists under these conditions, then the schedule generated by the heuristic is optimal. The heuristic proved optimal in 277 out of the 279 problems. Computational efficiency of the heuristic was tested by running an 80 job, 2 machine problem, which was optimally solved in 16 seconds on an IBM/370 system. Complete details and discussion of the experiment are found in Appendix 1.

For the case of a single machine, the following property can be used to show that the above heuristic is in fact optimal when jobs are not simultaneously available.

Property 3.6: Any n job single machine problem with job splitting and non-simultaneous release times can be converted into an equivalent N job problem as follows:

Split job j (for all j = 1, ..., n) into p_j identical jobs, each with processing time p_j' = 1, due date d_j' = d_j and release time r_j' = r_j. The original L_max problem is equivalent to an N (N = Σ_j p_j) job single machine problem subject to the new processing times, due dates and release times.

The two selection criteria call for favoring job q' over job r', for any t' = 1, ..., t, if:

(a) p_q'(t') - d_q'(t') > p_r'(t') - d_r'(t'), or

(b) p_q'(t') ≥ p_r'(t'),

where both q', r' ∈ {A(t') ∩ C(X(t'))}. However, since p_q' = p_r' = 1, no tradeoff between (a) and (b) will ever occur and the optimal priority criterion selects q' over r' if

d_q'(t') ≤ d_r'(t'),

a result previously noted in its latter form in [24].

3.4 Simultaneous Processing by More than One Machine

In Section 2.9.2 we discuss Conway, Maxwell and Miller's [16] observation with respect to situations where all jobs can be processed by all m identical machines simultaneously. In that instance it is shown that such an assumption is equivalent to scheduling on a single machine with m times the power of the basic machine.

We again consider our scheduling horizon to be divided into discrete periods where the length of each period equals one unit of processing time. If it is the case that some (or all) jobs can be processed by more than one machine in the same period, then the following transformation can be used to convert the problem into one where no job is processed by more than one machine in each period.

Consider any job j (j = 1, ..., n) with integer processing time p_j which can be processed by up to q_j (q_j ≤ m) machines simultaneously in any period. Assume that if q̄_j (q̄_j ≤ q_j) machines process job j in a period, then the processing time for job j is reduced by q̄_j. Then job j can be divided into q_j jobs such that:

(i) p_j - q_j·[p_j/q_j] of these jobs are assigned a pseudo-processing time of [p_j/q_j] + 1.

(ii) The remaining jobs are assigned a processing time of [p_j/q_j],

where [·] denotes the largest integer less than or equal to its argument.

The transformed problem becomes one of scheduling N (N = Σ_{j=1}^{n} q_j) jobs on m identical machines subject to the condition that none of the N jobs may be processed on more than one machine simultaneously.

For example, consider the problem of scheduling n = 2 jobs on m = 2 machines such that:

p_a = 5, q_a = 2; p_b = 3, q_b = 2.

Since q_a = 2, job a is divided into two jobs such that p_a - q_a·[p_a/q_a] = 5 - 2·2 = 1 job, a1, is assigned a processing time of [p_a/q_a] + 1 = 3. The other job, a2, is assigned a processing time of two.

Since q_b = 2, job b is similarly divided into two jobs with processing times p_b1 = 2 and p_b2 = 1.

The transformed problem becomes one of scheduling N = 4 jobs with processing times p_a1 = 3, p_a2 = 2, p_b1 = 2 and p_b2 = 1 on two identical machines so that no job is processed by more than one machine in each period.

Correctness of the transformation follows from the observation that a solution to the transformed problem is feasible for the original problem. No more than q_j machines work on the original job j simultaneously (since it is divided into q_j jobs, none of which can be processed simultaneously on more than one machine in any period). Moreover, since all jobs in the transformed problem are completed and

Σ_{i=1}^{q_j} p_ji = p_j,

job j in the original problem is completely processed. The completion time of job j is given by the last of the completion times of the q_j transformed jobs.

It then follows from this transformation that the results in Sections 3.2 and 3.3 also apply to the situation where simultaneous processing by more than one machine is allowed.
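The splitting step itself can be sketched in a few lines; the function name is an illustrative choice, not from the original text.

```python
def split_for_parallel(pj, qj):
    """Section 3.4 transformation: a job of length pj that up to qj
    machines may work on simultaneously becomes qj ordinary jobs."""
    base = pj // qj              # the floor [p_j / q_j]
    extra = pj - qj * base       # this many pieces get base + 1
    return [base + 1] * extra + [base] * (qj - extra)
```

For the example above, split_for_parallel(5, 2) yields pieces of lengths 3 and 2 for job a, and split_for_parallel(3, 2) yields 2 and 1 for job b; in each case the pieces sum to the original processing time.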

3.5 Two Min-Sum Problems

3.5.1 Limited Machine Availabilities

In Section 2.8 it was noted that very few results have been obtained for problems with limitations on machine availabilities. One special case for which results are attainable is the following:

Consider the situation where we have unit processing times and the number of machines available varies from period to period. In this case the problem of minimizing Σ_j c_j(t) can be formulated as follows:

That is, all arcs of the form (S, j) have capacity p_j = 1, all arcs of the form (i, F) have capacity m_i (the number of machines available in period i), and an arc with cost c_j(i) is drawn from job j to period i if job j can be processed in period i (i.e., if r_j ≤ i). The solution to the above assignment problem provides a schedule which minimizes Σ_j c_j(t). Extension of this notion to arbitrary processing times is stymied by the inability to construct appropriate costs for the jobs.
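As an illustration of the formulation, each period i can be expanded into m_i identical machine-period slots and the unit-time jobs assigned to slots at minimum total cost. The brute-force sketch below (rather than a network-flow code) is only for exposition; the function name and data conventions are assumptions.

```python
from itertools import permutations

def min_cost_unit_assignment(release, caps, cost):
    """Section 3.5.1 assignment formulation, unit processing times.

    release[j]: earliest period for job j; caps[i-1]: machines m_i in
    period i; cost[j][i-1]: deferral cost c_j(i). Returns the minimum
    total cost, or None if no feasible assignment exists."""
    # expand each period i into caps[i-1] machine-period slots
    slots = [i for i, c in enumerate(caps, start=1) for _ in range(c)]
    n, best = len(release), None
    for perm in permutations(range(len(slots)), n):
        total, ok = 0, True
        for j, s in enumerate(perm):
            i = slots[s]
            if release[j] > i:          # job j not yet available in period i
                ok = False
                break
            total += cost[j][i - 1]
        if ok and (best is None or total < best):
            best = total
    return best
```

For two unit jobs with release periods 1 and 2, one machine per period, and c_j(i) = i, the second job must wait for period 2, giving a minimum total cost of 3.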

3.5.2 Jobs with Common Due Dates

In Section 2.10.1.B, Moore's algorithm for minimizing the number of tardy jobs was presented. While Moore's rule does not extend to general parallel processor problems, when job splitting is allowed the special case where all jobs have identical due dates (i.e., d_j = d for all j) can be solved using the following algorithm. Let d = due date for all jobs.

(i) Compute F_max = max { Σ_{j∈J} p_j / m ; max_{j∈J} p_j },

where m denotes the number of identical processors and J the set of all jobs.

(ii) (a) If F_max ≤ d, then use either the "longest remaining processing time" rule (Rule 3.2) or McNaughton's algorithm [50] to construct a schedule such that no jobs are tardy.

(b) If F_max > d, then discard job k, where k is such that p_k = max {p_i} for all i ∈ J. Remove job k from the set J and repeat (i) and (ii) until F_max ≤ d.

(iii) Once a non-tardy subschedule has been found, all discarded jobs are added on in any order such that no job is worked on by more than one machine in each period, thereby completing the optimal schedule.

The application of the above rule when job splitting is not allowed is of course contingent upon a procedure for determining a schedule of value F_max ≤ d, which has yet to be found.
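Steps (i) and (ii) of the algorithm can be sketched as follows. The sketch assumes integer data and discrete periods, so F_max is taken as max(⌈Σp/m⌉, max p); the function name and the decision to return only the count of discarded (tardy) jobs are choices of this illustration.

```python
import math

def min_tardy_common_due(p, m, d):
    """Section 3.5.2 algorithm: common due date d, job splitting allowed.

    Repeatedly discards a longest job while the remaining jobs cannot
    all finish by d; returns the number of tardy jobs."""
    jobs = sorted(p)                 # ascending, so a longest job is last
    tardy = 0
    while jobs:
        fmax = max(math.ceil(sum(jobs) / m), max(jobs))
        if fmax <= d:                # step (ii)(a): remaining jobs on time
            break
        jobs.pop()                   # step (ii)(b): discard a longest job
        tardy += 1
    return tardy
```

On the five-job data of Section 3.2.2 with m = 2, a common due date d = 6 allows all jobs on time, while d = 4 forces exactly one job (the one of length 5) to be tardy.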

3.6 Minimization of Maximum Deferral Cost

Given any parallel processor scheduling problem for which a feasible solution exists, repeated application of the following property can be used to generate a schedule that minimizes max_j c_j(t).

Property 3.7: Let X(T) be any feasible solution with value K = max_j c_j(t). A necessary condition for generating a strictly better schedule X'(T) is that c_j(t) < K for all j = 1, ..., n in X'(T).

The interpretation of this deceptively simple condition in a scheduling context leads to the subsequent iterative procedure for generating an optimal schedule:

(i) Generate any feasible schedule with value K = max_j c_j(t).

(ii) Determine the latest period each job can be completed so that max_j c_j(t) < K.

(iii) Impose due dates on the jobs equal to this latest period.

(iv) Determine a feasible schedule for all jobs subject to their newly assigned due dates.


(v) If a feasible schedule exists, update K and repeat (ii) through (v). If no feasible schedule exists, then stop with min max c_j(t) = K.

It should be noted that the procedure, while completely independent of how a feasible schedule is generated, does require the existence of a method for arriving at a feasible schedule. For example, if r_j = 0 for all j and job splitting is allowed, the results of Section 3.2 can be used at each iteration to determine the existence of a schedule with L_max = 0 for each new set of due dates. If r_j ≠ 0 for some j, then Bratley et al.'s network approach (Section 2.9.2.B) can be used for the same purpose. The lack of efficient methods for generating feasible schedules when job splitting is not allowed places limitations on the use of the above procedure for problems of this genre.
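For the case mentioned above (r_j = 0, job splitting allowed), steps (i)-(v) can be sketched as follows. The inner GPL rule of Section 3.2 plays the role of the feasibility test, and the cost representation (cost[j][t-1] = c_j(t), nondecreasing in t, over a horizon T) is an assumption of this illustration.

```python
def minimize_max_deferral(p, m, cost):
    """Iterative procedure of Section 3.6, sketched for r_j = 0 with
    job splitting. cost[j][t-1] = c_j(t); returns min max_j c_j(t)."""
    n, T = len(p), len(cost[0])

    def schedule(due):
        # GPL / least-slack rule (Rule 3.1): optimal for L_max with
        # splitting and r_j = 0, so it meets any achievable due dates
        rem, comp = list(p), [0] * n
        for t in range(1, T + 1):
            active = [j for j in range(n) if rem[j] > 0]
            active.sort(key=lambda j: rem[j] - (due[j] - (t - 1)),
                        reverse=True)
            for j in active[:m]:
                rem[j] -= 1
                if rem[j] == 0:
                    comp[j] = t
        if any(rem):
            return None                    # some job unfinished by T
        if any(comp[j] > due[j] for j in range(n)):
            return None                    # some due date missed
        return comp

    comp = schedule([T] * n)               # step (i): any feasible schedule
    if comp is None:
        raise ValueError("no feasible schedule within the horizon T")
    K = max(cost[j][comp[j] - 1] for j in range(n))
    while True:
        due = []                           # steps (ii)-(iii)
        for j in range(n):
            ok = [t for t in range(1, T + 1) if cost[j][t - 1] < K]
            if not ok:
                return K                   # job j cannot beat K
            due.append(ok[-1])
        comp = schedule(due)               # step (iv)
        if comp is None:
            return K                       # step (v): K is optimal
        K = max(cost[j][comp[j] - 1] for j in range(n))
```

For instance, two unit jobs on one machine with c_j(t) = t must have min max c_j(t) = 2, since one job necessarily completes in period 2.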

3.7 Conclusions

This chapter has been concerned with the development of optimal rules for problems that originate from the theory of parallel processor scheduling. A fundamental parallel processor problem was isolated and formulated. The general formulation presented for this basic problem of scheduling identical processors was motivated by the feeling that optimal rules could be developed for a class of problems adhering to this structure. A rule was then developed (Section 3.2) for minimizing maximum lateness. When job arrivals were allowed to be non-simultaneous, development of an optimal rule was limited to the criterion of maximum flow time (Section 3.3). However, extensions to more general cases were hindered by the lack of a dominant priority measure (Section 3.3.2)

as well as complexities introduced when various assumptions were relaxed (Appendix 2).

The relative lack of results for min-max problems in the literature (Chapter 2) prompted one to concentrate on this criterion. Min-sum problems were also considered, but with the exception of two special cases (Section 3.5) little success was attained. Perhaps the most intractable of the problems studied were situations where job splitting is not allowed, for which no results were attained for either min-max or min-sum criteria.

While utilization of the rules in this chapter is constrained by the rigidity of the assumptions invoked, it may very well be the case that these assumptions mark the perimeter beyond which development of optimal rules for parallel processor problems is highly improbable. Nonetheless, attempts to develop rules for both the successful situations presented in this chapter as well as the unsuccessful ventures contribute to a fundamental understanding of the parallel processor problem. It is this insight which assists greatly in Chapter 4, where rules are developed for problems motivated by their potential application in scheduling on railways.

CHAPTER 4 FIXED ROUTE PARALLEL PROCESSOR SCHEDULING PROBLEMS

4.1 Introduction

While scheduling problems have received prominent attention in the airline industry, comparatively little analytical work has been done concerning such problems in the railroad industry. This seems particularly strange when one considers the observation by Mehl [52, p. 2] that "Almost everywhere rail transportation systems are infested with problems. Transit systems all over the country lose money. Even the more profitable railroads could have earned more by investing in a passbook savings account." Mehl's remark takes on added significance when he notes that such problems arise equally in freight service as well as in the more publicized area of passenger service.

Most of the work related to scheduling problems in the railroad industry has concerned itself with simulating various independent activities within the structure of a particular railway system. A compendium of simulation models currently in use is summarized in the Association of American Railroads' Application Digest [1] and in the Railroad Research Developmental Issue [60]. Some recent analytical approaches that, according to Mehl [52], incorporate more practical considerations are a Network Design Model by Billheimer [11] and a Suburban Railway Fleetsize Model by Salzborn [67]. In general, however, the amount of analytical work done with respect to schedule generation is scarce.


The primary interest of this chapter centers around the development of solution procedures for problems involving both a scheduling and routing aspect. The specific problems dealt with concern the movement of freight on railway networks, although the notions developed are applicable in other contexts as well.

The emphasis will be on the simplest scheduling situation that arises when the route (network) consists of a single track connecting two major switchyards. This is akin to the operation of a "local line" of freight service (see Appendix 3 for a brief description of freight service operation on a railway system). The behavior of a local line follows two basic modes of operation:

(i) Based on demand or traffic available.

In this situation trains are formed and sent off as the need may arise. Interest here is focused on determining the size or number of trains required to carry all traffic when time is ignored.

(ii) Based on train movement according to a fixed schedule along the route.

Although we will deal with mode (i) for certain route configurations, the single track problems to be considered herein will concentrate on mode (ii), within which two major questions arise:

(a) What the train schedule is to be, and

(b) Given a train schedule, what should be transported on which train and what should be dropped off at each station.

In order to key on the most basic problem, we assume that the arrival of the trains at the stations along the line is predetermined (i.e., the train schedule is fixed a priori), so that question (b) is the one of interest. The trains will either slow down or speed up accordingly to meet the schedule.


A basic assumption that will be maintained throughout is that loading or unloading will not affect the schedule of the trains. The feasibility of such an assumption is enhanced by the operation of freight trains that usually run at intervals of every two or three days. In this context, loading and unloading times play roles of less significance.

Section 4.2 will deal with the simplest problem, namely the scheduling of one train on one track. The notions developed here are extended in Sections 4.3 through 4.5 to generate a series of basic feasibility results and optimal rules for the operation of M trains on one track subject to various restrictions on job availabilities and optimality criteria. Sections 4.6 and 4.7 allow for systems extending beyond a local line and networks with more than a single route, respectively, in which cases additional feasibility results are expounded.

Finally, we relate the problems of this chapter to the more classical parallel processor scheduling problems of Chapters 2 and 3 by noting that in the fixed route case, the trains play the role of processors and the freight that of the jobs. A basic difference which is exploited is that in the classical problems of the previous chapters jobs "queue up and wait" if not processed, whereas in the fixed route problems the machines move in time, causing any job which is not processed to wait for a subsequent machine. This last characteristic allows one to classify fixed route problems as pertaining to problems with limited machine availabilities, i.e., Path 5 in the classification scheme of Chapter 2.

4.2 The One Train - One Track Problem

(Alias the Waldo Train Scheduling Problem)

There exists a train station at Waldo, Florida, that is unique in a variety of aspects, many of which are not really relevant to the following discussion. Two relevant aspects are, however,

(i) That the station is serviced by only one track, and

(ii) That only one train per day passes through Waldo in a given direction. (While it is true that the same train passes through Waldo on its return trip, it suffices to view this occurrence as the same problem in the opposite direction.)

Based on this orientation we are interested in solving a scheduling problem that adheres to the following general description.

We have a single train with a fixed capacity (in boxcars), traveling a single fixed route subject to a fixed timetable. It encounters a series of local (Waldo-like) stations where jobs are waiting to be picked up and delivered to their destinations. All origin and destination stations for all jobs lie along the fixed route. Furthermore, we assume that all job units are commensurable with those of the train capacity (i.e., each job requires one boxcar).

Given that the train makes only one pass through the route, it is desired to determine which jobs are to be picked up or dropped off at the various stations so as to maximize the number of jobs that are taken to their destinations.

The problem can be schematically described as follows:

[Schematic: a train traveling along a single track past stations 1, 2, 3, 4, 5, 6, ..., ℓ]

where the stations along the route are numbered 1, 2, ..., ℓ. We define:


m = the capacity of the train (boxcars)

o_j = the origin station of job j, j = 1,...,n

f_j = the final or destination station of job j, j = 1,...,n

p_j(i,k) = the time it takes to transport job j from station i to station k (i < k) (i.e., the time it takes the train to travel from station i to station k).

Furthermore, letting s denote any station we define:

O(s) = {j | o_j = s} (i.e., O(s) is the set of jobs whose origin is station s)

F(s) = {j | f_j = s} (i.e., F(s) is the set of jobs whose destination is station s)

F̄(s) = {j | f_j ≠ s}

T(s) = {j | job j is on the train when it arrives at station s}

A(s) = {j | job j is available to be transported from s to s+1}, where

A(s) = {(T(s) ∩ F̄(s)) ∪ O(s)}

for all stations s = 1,...,ℓ and all jobs j = 1,...,n, and

Σ_{s=1,...,ℓ} |O(s)| = n (where |·| denotes the number of elements in the set ·)

Finally, let

X(ℓ) = any schedule or assignment of the jobs to the train for all station intervals s, s+1 (s = 1,...,ℓ).

We first note that the effect of the jobs upon the system consists in creating an overlap (or demand) for service between any two stations along the route. Let δ(s, s+1) denote the number of jobs (loads) which overlap between stations s and s+1, where formally

δ(s, s+1) = |( ∪_{v=1,...,s} O(v)) ∩ ( ∪_{v=s+1,...,ℓ} F(v))|


For example, given the following route and jobs, where each job can be represented by an interval starting at o_j, ending at f_j, of length p_j(o_j, f_j), we have

[Schematic: Jobs a, b, and c shown as intervals over stations 1 through 4]

δ(1, 2) = 1; δ(2, 3) = 3; and δ(3, 4) = 1.
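The overlap δ(s, s+1) is easy to compute directly from the job intervals: it counts the jobs whose origin lies at or before station s and whose destination lies beyond it. The sketch below is illustrative; the specific intervals for Jobs a, b, and c are assumptions chosen only to reproduce the stated overlap values, since the exact intervals are not recoverable from the figure.

```python
def overlap(jobs, s):
    """delta(s, s+1): number of jobs whose origin is in stations 1..s
    and whose destination is in stations s+1..l, i.e. o_j <= s < f_j."""
    return sum(1 for (o, f) in jobs.values() if o <= s < f)

# Hypothetical intervals (o_j, f_j) consistent with the example's values.
jobs = {"a": (1, 3), "b": (2, 4), "c": (2, 3)}
print([overlap(jobs, s) for s in (1, 2, 3)])  # [1, 3, 1]
```

Note that o_j ≤ s < f_j is exactly the condition that job j belongs to both unions in the formal definition of δ(s, s+1).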

Property 4.1. The minimum train capacity, m_min, required to transport all jobs to their destinations is

m_min = max_{s=1,...,ℓ} δ(s, s+1)

It then follows that if m ≥ max_{s=1,...,ℓ} δ(s, s+1), no scheduling problem exists since all jobs can be transported to their destinations. Therefore, we focus attention on the only case for which a scheduling problem arises.

Theorem 4.1. Let m < max_{s=1,...,ℓ} δ(s, s+1). Then any schedule that adheres to the following rule maximizes the number of jobs taken to their destinations.

When the train arrives at any station s, s = 1,...,ℓ,

(i) If |A(s)| ≤ m, then transport all jobs in A(s) to station s+1.

(ii) If |A(s)| > m, then transport any m jobs in A(s) to station s+1 such that if job q is transported and job r is not, then

p_q(s, f_q) ≤ p_r(s, f_r)

(i.e., select those m jobs which are presently closest to their destinations).
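The rule can be sketched as a short simulation. This is a minimal sketch, assuming unit (one-boxcar) jobs and integer stations; the function and variable names are illustrative and not from the text. At each station the train first drops off jobs at their destination, then keeps the m available jobs closest to their destinations; jobs left behind are never carried again, since the train makes only one pass.

```python
def schedule_one_train(jobs, m, n_stations):
    """Greedy rule of Theorem 4.1 for one train of capacity m boxcars.
    jobs: dict name -> (origin, destination). Returns the delivered jobs."""
    on_train, delivered = set(), set()
    for s in range(1, n_stations):                   # segment (s, s+1)
        arrived = {j for j in on_train if jobs[j][1] == s}
        delivered |= arrived                         # drop off at destination
        on_train -= arrived
        # A(s): jobs still aboard plus jobs originating at s
        available = on_train | {j for j, (o, f) in jobs.items() if o == s}
        # keep the m jobs presently closest to their destinations
        on_train = set(sorted(available, key=lambda j: jobs[j][1])[:m])
    delivered |= {j for j in on_train if jobs[j][1] == n_stations}
    return delivered

jobs = {"a": (1, 3), "b": (2, 4), "c": (2, 3)}
print(sorted(schedule_one_train(jobs, 2, 4)))  # ['a', 'c']
```

With m = 2 the rule drops Job b, since all three jobs contend for segment (2, 3) and b is farthest from its destination; with m = 1 exactly one job is delivered, consistent with m_min = max δ = 3 being needed to carry everything.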


Proof. Let X*(ℓ) denote a schedule generated by the rule. (Note that X*(ℓ) is not necessarily unique since ties are broken arbitrarily.) Assume that X*(ℓ) is not optimal. Let X'(ℓ) ≠ X*(ℓ) be a schedule that does not satisfy the property of the rule. That is, in X'(ℓ) there exists a station s* such that job q is transported from s* to s*+1 and job r is not, and

p_q(s*, f_q) > p_r(s*, f_r).

Assume that |J'| > |J*|, where |J'| and |J*| denote the number of jobs transported to their destinations in X'(ℓ) and X*(ℓ), respectively.

If q arrives at its destination in X'(ℓ), then q occupies a boxcar from s* for p_q(s*, f_q) units of time. Since p_q(s*, f_q) > p_r(s*, f_r), interchanging r for q at s* allows r to arrive at its destination without affecting any other jobs.

If q does not arrive at its destination in X'(ℓ), then there must exist a station s', where s' < f_q, where q was taken off the train. If s' < f_r, interchanging r for q at s* does not decrease the number of jobs that arrive at their destinations. If s' ≥ f_r, then after the interchange r will be able to arrive at its destination, thus increasing the number of jobs taken to their destinations by one.

In either case, interchanging r for q at s* results in the number of jobs taken to their destinations after the interchange being greater than or equal to the number before.

If we start with s* = 1, after at most mℓ interchanges of the above form, any schedule X'(ℓ) can be converted to a schedule X*(ℓ) with |J*| ≥ |J'|. Therefore, |J*| ≥ |J'|, contradicting the initial assumption.


It now remains to show that all different schedules generated by the rule have the same value. Let X*_1(ℓ) and X*_2(ℓ) be two schedules that have the property of Theorem 4.1 but such that X*_1(ℓ) ≠ X*_2(ℓ). We wish to show that |J*_1| = |J*_2|.

Since X*_1(ℓ) ≠ X*_2(ℓ), there must exist a station s* such that:

(a) r is transported from s* to s*+1 in X*_1(ℓ) while q is not,

(b) q is transported from s* to s*+1 in X*_2(ℓ) while r is not, and

(c) p_q(s*, f_q) = p_r(s*, f_r).

Then clearly interchanging q and r at s* does not affect the number of jobs taken to their destinations. Again starting with s* = 1, after at most mℓ interchanges, the schedule X*_2(ℓ) can be made identical to X*_1(ℓ) and |J*_2| = |J*_1|. Repetition of this argument for any schedule X*_u(ℓ) ≠ X*_1(ℓ) allows us to conclude that for all schedules X*_u(ℓ) with the property of Theorem 4.1, |J*_u| = |J*_1|. □

4.3 The M Train - One Track Problem

We now extend the formulation of Section 4.2 to the case where M trains service the stations along the route. In this situation it is important to distinguish between instances where job splitting is or is not allowed. If job splitting is permissible, then any job can be carried from its origin to its destination on any combination of trains. No job splitting requires that once a job is loaded on a train it must be carried its entire distance on the same train. (Note that in the One Train - One Track case it is irrelevant whether job splitting is or is not allowed, since all jobs that are not carried their entire distance need never be transported at all.)


4.3.1 Problem Formulation

The M train problem can be schematically described as follows:

[Schematic: three trains traveling a single track past stations 1, 2, ..., ℓ]

where again the stations along the route are numbered 1, 2, ..., ℓ (the above schematic illustrating the case where M = 3).

We assume that all definitions and assumptions of Section 4.2 still hold, although we redefine

m_i = the capacity in boxcars of train i, i = 1, 2, ..., M

T_i(s) = {j | job j is on train i when it arrives at station s}

A_i(s) = {j | job j is available to be transported on train i from s to s+1}, where

A_i(s) = {(T_i(s) ∩ F̄(s)) ∪ O(s)}

X(ℓ) = any feasible schedule or assignment of the jobs to the trains for all station intervals s, s+1 (s = 1,...,ℓ), where by feasible we mean that given any station s, any job j which is transported to s+1 on train Y either originates at s or is transported to s on one of the trains 1,...,Y.

We also define

t_i = the interarrival time between trains i and i+1 at all stations along the route (i = 1, 2, ..., M−1)

a_j = the time at which train 1 arrives at o_j (i.e., the earliest departure time of job j).

Theorem 4.2. Let

δ(s, s+1) = |( ∪_{v=1,...,s} O(v)) ∩ ( ∪_{v=s+1,...,ℓ} F(v))|

again denote the overlap or demand for boxcars between stations s and s+1.


Furthermore, let Σ_{i=1}^{Y} m_i = the total capacity of the first Y trains. It then follows that a necessary and sufficient condition for all jobs to arrive at their destinations is that at least M* trains travel the route, where

Σ_{i=1}^{M*−1} m_i < max_{s=1,...,ℓ} δ(s, s+1) ≤ Σ_{i=1}^{M*} m_i

Proof. The condition is clearly necessary. Sufficiency is established by assuming that Σ_{i=1}^{M*} m_i ≥ max_{s=1,...,ℓ} δ(s, s+1). A schedule can now be constructed in which all jobs arrive at their destinations by simply starting with the first station and arbitrarily assigning the jobs to any unfilled train. Then repeat the process at each station encountered along the route. If a station s' is reached where all loads cannot be assigned, it follows that Σ_{i=1}^{M*} m_i < δ(s', s'+1), which is a contradiction. □

Corollary 4.1. Assume the necessary and sufficient condition of Theorem 4.2 is satisfied. Then any schedule which has the property that no job which can be transported is left uncarried (i.e., train i travels empty between s and s+1 only if A_i(s) = ∅ for all i = 1,...,M*) will take all jobs to their destinations on M* trains, irrespective of whether job splitting is or is not allowed.

Proof. Follows directly from the proof of Theorem 4.2.

We also note that without loss of generality we can assume that m_i = 1 for all trains i = 1,...,M (i.e., each original train with capacity m_i can be conceptualized as m_i "trains" with unit capacity and interarrival time of zero between any two boxcars of the original train i). Therefore, Theorem 4.2 translates to

M* = max_{s=1,...,ℓ} δ(s, s+1)
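The fleet-size condition of Theorem 4.2 can be sketched directly: accumulate train capacities until they cover the peak segment overlap. The names below are illustrative, not from the text.

```python
def min_trains(jobs, capacities):
    """Smallest M* with m_1 + ... + m_M* >= max_s delta(s, s+1).
    jobs: dict name -> (origin, destination); capacities: list of m_i."""
    last = max(f for (_, f) in jobs.values())
    peak = max(sum(1 for (o, f) in jobs.values() if o <= s < f)
               for s in range(1, last))
    total = 0
    for i, m in enumerate(capacities, start=1):
        total += m
        if total >= peak:
            return i
    return None  # even the full fleet cannot cover the peak demand

jobs = {"a": (1, 3), "b": (2, 4), "c": (2, 3)}
print(min_trains(jobs, [1, 1, 1]))  # 3
print(min_trains(jobs, [2, 2]))     # 2
```

With unit capacities (m_i = 1) this reduces to M* = max_s δ(s, s+1), as stated above.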

4.3.2 Maximization of the Number of Jobs Taken to Their Destinations by the First K (K ≤ M*) Trains

In this section we will be concerned with generating schedules that not only transport all jobs to their destinations on M* trains, but at the same time maximize the number of jobs transported to their destinations by the first K trains for all K = 1,2,...,M*. We assume that the necessary and sufficient condition of Theorem 4.2 is satisfied and will first consider the situation when any job can be carried from its origin to its destination on any combination of trains.

Theorem 4.3. Assume that job splitting is allowed. Then any schedule having the property that given the arrival of train Y (1 ≤ Y ≤ K) at any station s (s = 1,...,ℓ),

(i) If |A_Y(s)| ≤ 1, then any job in A_Y(s) is transported to s+1 on train Y.

(ii) If |A_Y(s)| > 1, then any job in A_Y(s) is transported to s+1 on train Y such that if job q is transported and job r is not, then

p_q(s, f_q) ≤ p_r(s, f_r)

maximizes the number of jobs taken to their destinations by the first K trains for all K = 1,2,...,M*.


Proof. Let X'_K(ℓ) be any schedule that maximizes the number of jobs taken to their destinations by the first K trains. Disregard all jobs not taken to their destinations on the first K trains in X'_K(ℓ). Let D'_K(ℓ) be the set of jobs transported to their destinations by the first K trains in X'_K(ℓ), and let δ_D'(s, s+1) denote the overlap between s and s+1 of the jobs in D'_K(ℓ) for all s = 1,...,ℓ.

From Theorem 4.2 we have that max_s δ_D'(s, s+1) ≤ K. It then follows from Corollary 4.1 that any schedule which has the property that train Y travels empty between s and s+1 only if A_Y(s) = ∅, for all Y = 1,...,K, will transport all the jobs in D'_K(ℓ) to their destinations on the first K trains. Note that any schedule generated by conditions (i) and (ii) of Theorem 4.3 satisfies the property of Corollary 4.1. Therefore, use conditions (i) and (ii) to schedule the jobs in D'_K(ℓ) on the first K trains and denote this schedule by X°_K(ℓ).

Let X*_K(ℓ) be any schedule generated by conditions (i) and (ii) of Theorem 4.3 for the whole set of jobs and the first K trains. Disregard all jobs not taken to their destinations on the first K trains in X*_K(ℓ). Let D*_K(ℓ) be the set of jobs transported to their destinations on the first K trains in X*_K(ℓ). Starting with s = 1 and train one, compare X*_K(ℓ) and X°_K(ℓ) as follows:

(a) If train Y travels empty from s to s+1 in both schedules, continue to s+1.

(b) If train Y travels empty from s to s+1 in X°_K(ℓ) but does not travel empty in X*_K(ℓ), then transport the job carried from s to s+1 in X*_K(ℓ) from s to s+1 in X°_K(ℓ). Continue to s+1.

(c) Assume that train Y transports a job from s to s+1 in both schedules. Let q and r be the jobs transported in X°_K(ℓ) and X*_K(ℓ), respectively. If q = r, continue to s+1. Assume q ≠ r.

(1) If p_r(s, f_r) < p_q(s, f_q), it must be true that r is not transported at all in X°_K(ℓ), since the jobs transported in X°_K(ℓ) satisfy conditions (i) and (ii). Discard q and transport r from s to s+1 on Y in X°_K(ℓ). Since p_r(s, f_r) < p_q(s, f_q), r can be transported from s+1 to f_r in X°_K(ℓ) using the same trains that q traveled on previously. After this interchange X°_K(ℓ) remains optimal, since the number of jobs transported in X°_K(ℓ) remains the same.

(2) If p_r(s, f_r) = p_q(s, f_q), then transport r from s to s+1 on Y in X°_K(ℓ), and take r to its destination by interchanging q and r everywhere they appear in X°_K(ℓ). If r was not processed in X°_K(ℓ), then discard q. We again have that after the interchange the number of jobs transported in X°_K(ℓ) remains identical.

Either (a), (b) or (c) is applied for s = 2,...,ℓ. The procedure is repeated for trains 2, 3,...,K. At each step both schedules are identical up to train Y and station s, and after train K is scheduled at station ℓ, X°_K(ℓ) ≡ X*_K(ℓ). Since the number of jobs transported in X°_K(ℓ) remains identical after either (a), (b) or (c), X°_K(ℓ) ≡ X*_K(ℓ) implies that |D*_K(ℓ)| = |D'_K(ℓ)| and X*_K(ℓ) is optimal.

Finally, it is obvious that if X*_1(ℓ) and X*_2(ℓ) are any two schedules generated by conditions (i) and (ii) for the whole set of jobs and the first K trains, but such that X*_1(ℓ) ≠ X*_2(ℓ), then |D*_1(ℓ)| = |D*_2(ℓ)|, where D*_1(ℓ) and D*_2(ℓ) denote the sets of jobs transported to their destinations on trains 1,...,K in X*_1(ℓ) and X*_2(ℓ), respectively. □

Theorem 4.3 allows us to present the following algorithm for generating a schedule that maximizes the number of jobs taken to their destinations by the first K trains (for all K = 1,...,M*) when job splitting is not allowed.

(i) Use Theorem 4.3 to determine the maximum number of jobs taken to their destinations by the first K trains when job splitting is allowed. Let X*_K(ℓ) be the schedule generated by Theorem 4.3 and D*_K(ℓ) be the set of jobs taken to their destinations in X*_K(ℓ).

(ii) Disregard all jobs not taken to their destinations in X*_K(ℓ). Schedule all jobs in D*_K(ℓ) on the first K trains by starting with the first station and arbitrarily assigning the jobs to any unfilled train. Repeat the procedure for stations 2,...,ℓ, after which a schedule X̄_K(ℓ) without job splitting has been obtained which maximizes the number of jobs transported to their destinations by trains 1,...,K.

Proof. Let δ_D*(s, s+1) denote the overlap between s and s+1 of the jobs in D*_K(ℓ) in X*_K(ℓ) for all s = 1,...,ℓ. Since max_s δ_D*(s, s+1) ≤ K, it follows from Theorem 4.2 and Corollary 4.1 that in step (ii) all jobs in D*_K(ℓ) are transported to their destinations by X̄_K(ℓ) without job splitting. Since no schedule without job splitting can deliver more than |D*_K(ℓ)| jobs on the first K trains, X̄_K(ℓ) is optimal. □
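Step (ii) above is, in effect, greedy interval partitioning: scanning origins from left to right, each delivered job is placed on any unit-capacity train that is free over its whole trip, which always succeeds when the maximum overlap of the delivered set is at most K. A sketch under those assumptions (the function name and input convention are illustrative):

```python
def repack_without_splitting(delivered_jobs, K):
    """Assign each job in the delivered set to one of K unit-capacity
    trains for its entire trip (no job splitting). Returns {job: train}
    or None on failure, which cannot occur when max overlap <= K.
    delivered_jobs: dict name -> (origin, destination)."""
    free_at = {y: 1 for y in range(1, K + 1)}  # station from which train y is free
    assignment = {}
    for j, (o, f) in sorted(delivered_jobs.items(), key=lambda it: it[1][0]):
        for y in range(1, K + 1):
            if free_at[y] <= o:                # train y is empty at the origin
                assignment[j] = y
                free_at[y] = f                 # occupied until the destination
                break
        else:
            return None
    return assignment

# Jobs a and c (which overlap only on segment (2, 3)) fit on K = 2 trains.
print(repack_without_splitting({"a": (1, 3), "c": (2, 3)}, 2))  # {'a': 1, 'c': 2}
```

Because each job rides exactly one train from origin to destination, the resulting assignment satisfies the no job splitting requirement while delivering every job in the input set.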


4.3.3 Minimization of the Sum of Completion Times

We assume in this instance that job splitting is allowed and that the trains travel the route according to a fixed schedule with train interarrival times t_i. We are interested in generating schedules that minimize the mean or sum of job completion times, or equivalently minimize the mean lateness of all jobs.

We first observe that the completion time C_j of any job j is solely a function of the train on which it arrives at its destination f_j. That is, if job j arrives at f_j on train one, then

C_j(1) = a_j + p(o_j, f_j)

If it arrives at f_j on train two, then

C_j(2) = a_j + p(o_j, f_j) + t_1;

and in general, if it arrives at f_j on train Y, then

C_j(Y) = a_j + p(o_j, f_j) + Σ_{i=1}^{Y−1} t_i

Let W_j(Y) denote the waiting time of job j given that it arrives at its destination on train Y, where

W_j(Y) = Σ_{i=1}^{Y−1} t_i.

Property 4.2. Let X'(ℓ) be any schedule that delivers all jobs to their destinations on M* trains. Let Σ_{j=1}^{n} C_j(X'(ℓ)) and Σ_{j=1}^{n} W_j(X'(ℓ)) denote the sum of completion times and sum of waiting times, respectively, for all jobs in X'(ℓ). Then Σ_{j=1}^{n} C_j(X'(ℓ)) ≤ Σ_{j=1}^{n} C_j(X(ℓ)) if and only if Σ_{j=1}^{n} W_j(X'(ℓ)) ≤ Σ_{j=1}^{n} W_j(X(ℓ)) for all other schedules X(ℓ) that transport all jobs to their destinations on M* trains.


Proof. C_j(Y) = a_j + p(o_j, f_j) + Σ_{i=1}^{Y−1} t_i = a_j + p(o_j, f_j) + W_j(Y).

∴ Σ_{j=1}^{n} C_j(X'(ℓ)) = Σ_{j=1}^{n} a_j + Σ_{j=1}^{n} p(o_j, f_j) + Σ_{j=1}^{n} W_j(X'(ℓ))

where Σ_{j=1}^{n} a_j + Σ_{j=1}^{n} p(o_j, f_j) are constants independent of the schedule. □

Our objective reduces to one of minimizing Σ_{j=1}^{n} W_j. Since W_j is strictly a function of the train on which j arrives at its destination, it is intuitively appealing that what we would like to do is deliver as many jobs as possible to their destinations on train one, then on train two, etc., in order to minimize Σ_{j=1}^{n} W_j. In Theorem 4.3 we found that this is optimally accomplished by giving priority to that job which is closest to its destination whenever a decision arises. We now restate Theorem 4.3 as a scheduling rule and show that it also provides us with a procedure for minimizing the sum of waiting times.
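The decomposition C_j(Y) = a_j + p(o_j, f_j) + W_j(Y) can be checked numerically; the headways and travel times below are assumed values chosen purely for illustration.

```python
def waiting_time(Y, t):
    """W_j(Y) = t_1 + ... + t_{Y-1} for a job delivered by train Y."""
    return sum(t[:Y - 1])

def completion_time(a_j, p_j, Y, t):
    """C_j(Y) = a_j + p_j(o_j, f_j) + W_j(Y)."""
    return a_j + p_j + waiting_time(Y, t)

t = [5, 5]        # t_1, t_2: interarrival times between successive trains
a_j, p_j = 2, 7   # train one reaches o_j at time 2; travel time to f_j is 7
print([completion_time(a_j, p_j, Y, t) for Y in (1, 2, 3)])  # [9, 14, 19]
```

Since a_j + p_j does not depend on the schedule, minimizing the sum of completion times is the same as minimizing the sum of waiting times, which is the content of Property 4.2.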

Rule 4.1. Assume that job splitting is allowed. Then any schedule having the property that given the arrival of train Y (Y = 1,...,M*) at any station s (s = 1,...,ℓ),

(i) If |A_Y(s)| ≤ 1, then any job in A_Y(s) is transported to s+1 on train Y.

(ii) If |A_Y(s)| > 1, then any job in A_Y(s) is transported to s+1 on train Y such that if job q is transported and job r is not, then

p_q(s, f_q) ≤ p_r(s, f_r)

minimizes Σ_{j=1}^{n} W_j.

Proof. The proof will be by contradiction. Let X*(ℓ) be any schedule generated by Rule 4.1, where N*_1, N*_2, ..., N*_{M*} denote the number of jobs taken to their destinations by trains 1, 2, ..., M* in X*(ℓ). Let X°(ℓ) be any optimal schedule, where N°_1, N°_2, ..., N°_{M*} denote the number of jobs taken to their destinations by trains 1, 2, ..., M* in X°(ℓ). Let Σ_j W*_j and Σ_j W°_j denote the sums of waiting times in X*(ℓ) and X°(ℓ), respectively.

Assume that X*(ℓ) is not optimal. We then have that

Σ_j W°_j < Σ_j W*_j

or equivalently that

N°_2(t_1) + N°_3(t_1 + t_2) + ... + N°_{M*}(t_1 + t_2 + ... + t_{M*−1}) < N*_2(t_1) + N*_3(t_1 + t_2) + ... + N*_{M*}(t_1 + t_2 + ... + t_{M*−1})

which can also be expressed as

t_1(N°_2 + N°_3 + ... + N°_{M*}) + t_2(N°_3 + N°_4 + ... + N°_{M*}) + ... + t_{M*−1}(N°_{M*}) < t_1(N*_2 + N*_3 + ... + N*_{M*}) + t_2(N*_3 + N*_4 + ... + N*_{M*}) + ... + t_{M*−1}(N*_{M*})

By Theorem 4.3 we have that

(i) N*_1 ≥ N°_1, which implies that (N*_2 + N*_3 + ... + N*_{M*}) ≤ (N°_2 + N°_3 + ... + N°_{M*})

(ii) (N*_1 + N*_2) ≥ (N°_1 + N°_2), which implies that (N*_3 + N*_4 + ... + N*_{M*}) ≤ (N°_3 + N°_4 + ... + N°_{M*})

...

(M*−2) (N*_1 + N*_2 + ... + N*_{M*−2}) ≥ (N°_1 + N°_2 + ... + N°_{M*−2}), which implies that (N*_{M*−1} + N*_{M*}) ≤ (N°_{M*−1} + N°_{M*})

in which case Σ_j W°_j < Σ_j W*_j only if

N°_{M*} < N*_{M*}.

However, Σ_{i=1}^{M*} N*_i = Σ_{i=1}^{M*} N°_i, and Theorem 4.3 applied with K = M*−1 gives (N*_1 + ... + N*_{M*−1}) ≥ (N°_1 + ... + N°_{M*−1}), which implies that N*_{M*} ≤ N°_{M*}. Hence a contradiction ensues. □


4.4 Restrictions on Job Availabilities

Up to this point we have assumed that all jobs are available for shipment when train one arrives at their origin station and that the jobs can be transported to their destinations on any of the trains that travel the route. When this assumption is relaxed the situation is not so straightforward, and the question of job splitting plays a critical role.

The restrictions on job availabilities that we shall consider are the following:

1. That the earliest train some jobs can be transported on is some train other than train one.

2. That the last train some jobs can be transported to their destinations on is some train other than train M.

3. Both 1 and 2.

4.4.1 Jobs Not Simultaneously Available

We first assume that some of the jobs are not at their origin when train one arrives. Such a situation may arise when jobs arrive at their origin station intermittently in which case their travel along the track is restricted to a subset of the trains. The case when not all the jobs are simultaneously available for transporting on train one can be illustrated as follows:

[Schematic: jobs shown as intervals over stations 1 through 6, each labeled (1) or (2) with the earliest train it can go on]


where (E) denotes the first or earliest train that a particular job can go on. Define:

δ_E(s, s+1) = the number of jobs which overlap between stations s and s+1 from among those jobs which must go on train E or later,

where in the above diagram we have

δ_1(1,2) = 0; δ_1(2,3) = 2; δ_1(3,4) = 2; δ_1(4,5) = 2; δ_1(5,6) = 2

and δ_2(s, s+1) = 1 for all s = 1,...,5.

Theorem 4.4. Assume that job splitting is allowed. Then a necessary and sufficient condition for all jobs to be transported to their destinations on M trains (where m_i = 1 for all i = 1,...,M) is that

(i) max_s δ_M(s, s+1) ≤ 1

(ii) max_s δ_{M−1}(s, s+1) ≤ 2

...

(M) max_s δ_1(s, s+1) ≤ M

Proof of Theorem 4.4. The condition is clearly necessary. Sufficiency will be shown by constructing a schedule that delivers all jobs to their destinations on M trains.

Lemma 4.1. Suppose that the first Y trains have been scheduled and that Theorem 4.4 is satisfied with respect to the remaining jobs or portions of jobs that were not transported to their destinations on trains 1,...,Y. Considering the stations in the order 1,2,...,ℓ, at any station s let q and r be any jobs such that both q and r are available for transporting to s+1 on train Y+1 (i.e., q, r ∈ A_{Y+1}(s)). Transport q to s+1 on train Y+1 if

p_q(s, f_q) ≥ p_r(s, f_r)

that is, give priority to that job which is currently farthest away from its destination. Then after train Y+1 completes the route, all jobs or portions of jobs that remain to be transported on trains Y+2,...,M satisfy the condition of Theorem 4.4.

Proof. First consider Y = 1. Let k denote any job or portion of a job that remains to be transported on trains 2,...,M after train one completes the route using the above procedure for loading the jobs. Let s* denote the last station k was carried to by train one, where s* ≥ o_k. Since k was not taken to its destination on train one, there exists a job k' with o_{k'} = s* that is transported from s* to s' (s' ≥ f_k) on train one. This is shown in the following illustration, where the broken line indicates that portion of the job transported on train one and the solid line that portion remaining to be transported after train one completes the route.

[Illustration: job k carried from o_k to s* (broken line), with the remainder from s* to f_k (solid line); job k' carried from s* to s' on train one, where s' ≥ f_k]

Assume that after train one completes the route, the maximum overlap between any two stations along the segment s* to f_k is M. Since any job that overlaps with k between s* and f_k also overlaps with k', this would imply that before train one is scheduled the maximum overlap between any two stations from s* to f_k is M+1, which is a contradiction. Now label all jobs or portions of jobs with (E) = 1 that were not taken to their destinations on train one with (E) = 2, and leave the same label as before on all other jobs. It then follows that the condition of Theorem 4.4 is satisfied for this new set of jobs, that is,

(i) max_s δ_M(s, s+1) ≤ 1

...

(M−1) max_s δ_2(s, s+1) ≤ M−1

The argument is now repeated for trains two through M by simply renumbering the trains after each train is scheduled. □

It follows that if each train is scheduled by using the rule, then after trains 1,...,M−1 complete the route, train M can pick up all remaining jobs, and a schedule that transports all jobs to their destinations on M trains has been constructed. □

It now follows from Lemma 4.1 that a procedure for transporting all jobs to their destinations on M trains is given by the following rule.

Rule 4.2. Assume that job splitting is allowed and that the necessary and sufficient condition of Theorem 4.4 holds. Then any schedule having the property that given the arrival of train Y (for all Y = 1,...,M) at any station s (s = 1,...,ℓ),

(i) If |A_Y(s)| ≤ 1, then any job in A_Y(s) is transported to s+1 on train Y.

(ii) If |A_Y(s)| > 1, then any job in A_Y(s) is transported to s+1 on train Y such that if job q is transported and job r is not, then

p_q(s, f_q) ≥ p_r(s, f_r)

will transport all jobs to their destinations on M trains.
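Rule 4.2 can be sketched as a simulation with unit-capacity trains, job splitting allowed. The names are illustrative; the instance uses the intervals for Jobs a, b, c, d, and g that can be read off the narrative of the example that follows (Jobs e and h are omitted because their intervals are not recoverable from the figure).

```python
def rule_4_2(jobs, M):
    """Rule 4.2, job splitting allowed: each unit-capacity train, visiting
    stations in order, carries the available job farthest from its
    destination. jobs: name -> (origin, destination, earliest_train).
    Returns {train: [(job, from, to), ...]} movements."""
    pos = {j: o for j, (o, f, e) in jobs.items()}   # current station of each job
    moves = {y: [] for y in range(1, M + 1)}
    last = max(f for (_, f, _) in jobs.values())
    for y in range(1, M + 1):                       # trains in order of arrival
        for s in range(1, last):                    # segment (s, s+1)
            waiting = [j for j, (o, f, e) in jobs.items()
                       if pos[j] == s and s < f and e <= y]
            if waiting:
                # condition (ii): priority to the job farthest from its destination
                pick = max(waiting, key=lambda j: jobs[j][1] - s)
                moves[y].append((pick, s, s + 1))
                pos[pick] = s + 1
    return moves

jobs = {"a": (1, 4, 1), "b": (2, 5, 1), "c": (1, 3, 2),
        "d": (4, 5, 2), "g": (5, 7, 2)}
print(rule_4_2(jobs, 3)[1])  # [('a', 1, 2), ('b', 2, 3), ('b', 3, 4), ('b', 4, 5)]
```

Train one carries Job a from 1 to 2 and then switches to the farther Job b, leaving the remainder of Job a's trip to train two, exactly as in the worked example below.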


Example:

[Schematic: stations 1 through 8 with seven jobs shown as intervals, labeled by the earliest train each can go on: (1) Job a, (1) Job b, (2) Job c, (2) Job d, (2) Job g, (3) Job e, (3) Job h]

Here

(i) max_s δ_3(s, s+1) = 1

(ii) max_s δ_2(s, s+1) = 2

(iii) max_s δ_1(s, s+1) = 3

Train one transports Job a from s = 1 to s = 2. At s = 2,

p_b(2, f_b) = p_b(2, 5) > p_a(2, 4) = p_a(2, f_a)

Therefore Job b is transported from s = 2 to s = 5 on train one. Train one travels empty from s = 5 to s = 8 since no other jobs are available for it. After train one completes the route we have:

[Schematic: remaining jobs over stations 1 through 8: (2) Job a, (2) Job c, (2) Job d, (2) Job g, (3) Job e, (3) Job h]

where the remaining portion of Job a has been relabeled and all other jobs retain their original label. We now have

(i) max_s δ_3(s, s+1) = 1

(ii) max_s δ_2(s, s+1) = 2

Train two transports Job c from s = 1 to s = 2. At s = 2,

p_a(2, f_a) = p_a(2, 4) > p_c(2, 3) = p_c(2, f_c)

Therefore Job a is transported from s = 2 to s = 4 on train two. Train two completes its route by transporting Job d from s = 4 to s = 5 and Job g from s = 5 to s = 7. After train two reaches station 8 we have:

[Schematic: remaining jobs over stations 1 through 8: (3) Job c, (3) Job e, (3) Job h]

where the remaining portion of Job c has been relabeled and all other job labels remain the same. We now have

(i) max_s δ_3(s, s+1) = 1

in which case train three picks up Jobs c, e, and h. The constructed schedule is summarized as follows:

Train 1: Job a (1→2), Job b (2→5)

Train 2: Job c (1→2), Job a (2→4), Job d (4→5), Job g (5→7)

Train 3: Job c (2→3), Job e, Job h

We mentioned previously that job splitting plays a crucial role in problems with restrictions on job availabilities. The following example will show that when job splitting is not allowed, Theorem 4.4, while still necessary, is no longer sufficient to ensure that the M trains can in fact transport all jobs to their destinations.


[Schematic: six jobs over stations 1 through 11, labeled with earliest trains: (1) Job b, (2) Job a, (1) Job c, (1) Job d, (2) Job g, (1) Job e]

We have that

(i) max_s δ_2(s, s+1) = 1

(ii) max_s δ_1(s, s+1) = 2

However, since job splitting is not allowed, Jobs b and e must be completely carried by train one (since they overlap with Jobs a and g, respectively, which can only go on train two). This implies that Jobs c and d must both be transported on train two, and hence no schedule exists in which all jobs can be taken to their destinations on the two trains subject to the no job splitting and job availability constraints.

4.4.2 Jobs Restricted by Due Dates

We now assume that although all jobs are at their origin when train one arrives, it is required that a subset of the jobs be delivered to their destinations on some train before train M. Such a situation may arise when due dates are imposed on the jobs and our objective is one of delivering all jobs to their destinations on or before their due dates. For example, letting d_j denote the due date of job j, we observe that the lateness L_j of any job j is solely a function of the last train on which it is carried to its destination. That is, if job j arrives at f_j on train one, then

L_j(1) = (a_j + p(o_j, f_j)) − d_j

if it arrives at its destination on train two, then

L_j(2) = (a_j + p(o_j, f_j) + t_1) − d_j;

and in general, if it arrives at its destination f_j on train Y, then

L_j(Y) = (a_j + p(o_j, f_j) + Σ_{i=1}^{Y−1} t_i) − d_j.

The problem of finding a schedule such that all due dates are met is therefore equivalent to that of scheduling the jobs so that L_j ≤ 0 for all j, where the last train each job can travel on is predetermined as the largest Y for which L_j(Y) ≤ 0. This situation, the opposite of the one encountered in 4.4.1, can be illustrated as follows:
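Since L_j(Y) is nondecreasing in Y, the last permissible train for each job follows directly from its slack d_j - (a_j + p(o_j, f_j)) and the headways t_1, t_2, .... A minimal sketch (the function name and the list-based headway representation are our own, not the dissertation's):

```python
def last_feasible_train(a, p, d, headways):
    # L_j(Y) = (a_j + p(o_j, f_j) + sum of the first Y-1 headways) - d_j.
    # Return the largest train Y with L_j(Y) <= 0, or None if the job
    # is late even on train one.
    slack = d - (a + p)               # equals -L_j(1)
    best = None
    delay = 0                         # cumulative delay of train y behind train 1
    for y in range(1, len(headways) + 2):
        if delay <= slack:
            best = y                  # train y still meets the due date
        if y <= len(headways):
            delay += headways[y - 1]  # headway preceding train y+1
    return best
```

For instance, under assumed values a_j = 0, p(o_j, f_j) = 2, d_j = 4 and headways t_1 = 2, t_2 = 3, the job would carry the label (2): it meets its due date on train two but not on train three.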

[Figure: Four jobs along stations 1-6 bearing the labels (2), (2), (1), (1).]

where (L) now denotes the last train that a particular job can go on. Define

δ_(L)(s, s+1) = the number of jobs which overlap between stations s and s+1 from among those jobs which must be delivered to their destinations on train L or earlier,

where in the previous illustration we have

δ_(2)(1,2) = 1; δ_(2)(2,3) = 2; δ_(2)(3,4) = 2; δ_(2)(4,5) = 2; δ_(2)(5,6) = 2 and

δ_(1)(1,2) = 1; δ_(1)(2,3) = 1; δ_(1)(3,4) = 0; δ_(1)(4,5) = 1; δ_(1)(5,6) = 1.
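The quantity δ_(L)(s, s+1) is a direct count over job intervals. The sketch below uses hypothetical (origin, destination, last-train) triples chosen only to reproduce the δ values quoted above; the actual job positions in the figure are not recoverable:

```python
def delta(jobs, L, n_stations):
    # delta_(L)(s, s+1): among jobs whose last permitted train is L or
    # earlier, how many occupy the track segment from s to s+1
    return {s: sum(1 for o, f, last in jobs if last <= L and o <= s < f)
            for s in range(1, n_stations)}

# hypothetical intervals reproducing the illustration's delta values
jobs = [(1, 3, 1), (4, 6, 1), (2, 4, 2), (3, 6, 2)]
```

With these intervals, `delta(jobs, 1, 6)` gives {1: 1, 2: 1, 3: 0, 4: 1, 5: 1} and `delta(jobs, 2, 6)` gives {1: 1, 2: 2, 3: 2, 4: 2, 5: 2}, matching the two lines of δ values in the text.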

The analogy between this situation and the case when jobs are not simultaneously available can be used to assert:


Theorem 4.5. Assume that job splitting is allowed. Then a necessary and sufficient condition for all jobs to be transported to their destinations on M trains (where m_i = 1 for all i = 1,...,M) is that

(i) max_s δ_(1)(s, s+1) ≤ 1

(ii) max_s δ_(2)(s, s+1) ≤ 2

...

(M) max_s δ_(M)(s, s+1) ≤ M.

Proof. The proof is completely analogous to the proof of Theorem 4.4. A schedule that delivers all jobs to their destinations on or before their due dates on M trains can be constructed by inverting the procedure of Lemma 4.1 as follows:

Starting with train M (instead of train one) and station ℓ (instead of station 1), consider any station s having jobs q and r available for transporting to s-1 on train M (i.e., q and r are available if, given that they arrive at their destination on M, they are not late). Transport q to s-1 on train M if

p_q(s, o_q) > p_r(s, o_r),

that is, give priority to that job currently farthest away from its origin. After train M completes its inverted route, all jobs or portions of jobs that remain to be transported on trains M-1,...,1 satisfy conditions (i) through (M-1) of Theorem 4.5. This inverted procedure is repeated for trains M-1, M-2,...,1, after which all jobs will have been transported "from their destinations to their origins" on the M trains.

Example:

[Figure: Seven jobs along stations 1-8 with last-train labels: (2) Job d; (2) Job c, (1) Job e; (3) Job b, (1) Job h; (3) Job a, (2) Job g.]

where

(i) max_s δ_(1)(s, s+1) = 1

(ii) max_s δ_(2)(s, s+1) = 2

(iii) max_s δ_(3)(s, s+1) = 3

Train three is scheduled starting at station s = 8. It travels empty from s = 8 to s = 5, at which time it picks up Job b. At s = 3,

p_a(3, o_a) = p_a(3, 1) > p_b(3, 2) = p_b(3, o_b).

Therefore, Job a is transported from s = 3 to s = 1 on train three. After train three is scheduled we have

[Figure: Remaining jobs along stations 1-8: (2) Job d; (2) Job c, (1) Job e; (2) Job b, (1) Job h; (2) Job g.]

where the remaining portion of Job b has been relabeled and all other jobs retain their original label. We now have

(i) max_s δ_(1)(s, s+1) = 1

(ii) max_s δ_(2)(s, s+1) = 2


Train two travels empty from s = 8 to s = 7. At s = 7, Job g is picked up and taken to s = 5. At s = 5, Job d is transported to s = 4. After traveling empty from s = 4 to s = 3 we find that

p_c(3, o_c) = p_c(3, 1) > p_b(3, 2) = p_b(3, o_b).

Therefore, Job c is transported from s = 3 to s = 1 and train two completes its route. We now have

[Figure: Remaining jobs along stations 1-8: (1) Job e; (1) Job b, (1) Job h.]

where the remaining portion of Job b has been relabeled and all other job labels remain the same. Finally, we have

(i) max_s δ_(1)(s, s+1) = 1,

in which case train one picks up Jobs h, e and b. The constructed schedule is summarized as follows:

[Figure: Summary of the constructed schedule for Trains 1, 2 and 3 along stations 1-8.]

The awkwardness of developing a schedule in the previous inverted manner makes it desirable to develop procedures for accomplishing the same objective starting with train one, train two, etc. This is discussed in Section 4.5, where the analogous problem of minimizing the maximum lateness of all jobs is presented for both job splitting and no job splitting situations.


4.4.3 Jobs Not Simultaneously Available and Subject to Due Dates

When restrictions exist on both the first and last trains that a job may travel on, we find that even when job splitting is allowed, necessary and sufficient conditions for generating a feasible schedule are hard to come by. The situation can be illustrated as follows:

[Figure: Four jobs along stations 1-8: (1)-(1) Job a; (2)-(2) Job b; (1)-(1) Job c; (1)-(2) Job d.]

where (E) - (L) denote, respectively, the first and last trains a particular job can go on. When Theorem 4.4 and Theorem 4.5 are combined they result in a necessary condition for generating a schedule that satisfies the job availability restrictions with M trains. Lack of sufficiency is shown by the problem above where M = 2. We have that

Theorem 4.4: (i) max_s δ_(1)(s, s+1) = 1; (ii) max_s δ_(2)(s, s+1) = 2

Theorem 4.5: (i) max_s δ_(1)(s, s+1) = 1; (ii) max_s δ_(2)(s, s+1) = 2

However, Jobs a and c can only go on train one, and Job b can only go on train two. This implies that Job d must be transported from s = 2 to s = 3 on train two, s = 4 to s = 5 on train one and s = 6 to s = 7 on train two again. Hence no feasible schedule can be constructed on two trains regardless of which end of the route we start from even if job splitting is allowed.


4.4.4 Synopsis

The results of this section show that even the question of determining a feasible schedule to transport all jobs to their destinations becomes complicated if restrictions on job availabilities are imposed. In order to look at other optimality criteria, we revert to the situation where no job availability restrictions are imposed.

4.5 Minimization of Maximum Lateness

We assume that there are no restrictions on job availabilities; we are interested in this instance in generating schedules that minimize L_max (or equivalently C_max).

In Section 4.4.2, we defined d_j to be the due date of job j and noted that L_j, the lateness of job j, is solely a function of the last train job j is carried to its destination on. This observation allows us to establish the following property with respect to the L_max problem.

Property 4.3. Let X(ℓ) be any solution to the L_max problem that transports all jobs to their destinations on M trains. Let job q be such that L_q = L_max and assume that q arrives at f_q on train Y (Y > 1) in X(ℓ). A necessary and sufficient condition for any other schedule X'(ℓ) ≠ X(ℓ) to be better than X(ℓ) is that L_j(X'(ℓ)) < L_q(X(ℓ)) for all j ≠ q and that q arrive at f_q on one of trains Y-1, Y-2,...,1 in X'(ℓ).

4.5.1 Minimization of L_max with Job Splitting

The job splitting assumption for the M train one track problem implies that any job j may be transported between o_j and f_j by using any combination of available trains.


Given the arrival of train Y at station s, the potential lateness of all jobs j ∈ A_Y(s) is defined as

L_j(Y) = (a_j + p(o_j, f_j) + Σ_{i=1}^{Y-1} t_i) - d_j,

that is,

L_j(Y) = lateness of job j if it were to arrive at its destination on train Y.

Since the L_max criterion is dependent upon a single job (the latest one), it is intuitively appealing that whenever a decision is made between two jobs, priority should be given to that job exhibiting the greatest potential lateness of the jobs considered. In this case, the notion translates into the following rule, which can be used to generate a schedule X(ℓ) that minimizes L(X(s)) for all s = 1,...,ℓ (where L(X(s)) = max_j L_j(X(s)) for j = 1,...,n and s = 1,...,ℓ).

Theorem 4.6. Any schedule having the property that given the arrival of train Y at station s, for Y = 1,...,M and s = 1,...,ℓ,

(i) If |A_Y(s)| ≤ 1, then any job in A_Y(s) is transported to s+1 on train Y;

(ii) If |A_Y(s)| > 1, then any job in A_Y(s) is transported to s+1 on train Y such that if job q is transported and job r is not, then

L_q(Y) ≥ L_r(Y),

that is,

(a_q + p(o_q, f_q) + Σ_{i=1}^{Y-1} t_i) - d_q ≥ (a_r + p(o_r, f_r) + Σ_{i=1}^{Y-1} t_i) - d_r,

minimizes L(X(ℓ)) over all schedules X(ℓ).


Proof. Optimality of Theorem 4.6 will be shown by induction on the number of stations s. The rule is obviously optimal for ℓ = 1 and ℓ = 2. Assume that the rule is true for ℓ = s* and consider ℓ = s*+1.

Since we assume Theorem 4.6 to be optimal at s*, then L_j(X(s*)) = L_j(X(s*+1)) for all jobs j whose destination is s*. Therefore, consider the jobs that go from s* to s*+1.

(i) If all jobs transported from s* to s*+1 are taken on the same train that they were transported from s*-1 to s* on, then the rule is clearly optimal at s*+1 since L(X(s*+1)) = L(X(s*)).

(ii) If all jobs transported from s* to s*+1 are not taken on the same train that they were transported from s*-1 to s* on and L(X(s*+1)) > L(X(s*)), then it must be the case that job k, where L_k(X(s*+1)) = L(X(s*+1)), is either

(a) A new job (i.e., o_k = s*). In this case if k is transported from s* to s*+1 on train Y = 1, the rule is clearly optimal. If k is transported on train Y > 1, then it must be shown that L_j(Y) ≥ L_k(Y) for all j transported from s* to s*+1 on one of the trains 1,2,...,Y-1 in X(s*+1). But this follows by the rule since the new job was at the station when each of these jobs was sent.

(b) An old job (i.e., o_k < s*). In this instance k could not have gone on train Y = 1 from s* to s*+1. Again it must be shown that L_j(Y) ≥ L_k(Y) for all j transported from s* to s*+1 on one of the trains 1,2,...,Y-1.

Let V be the train that k arrived at s* on. Then L_j(X(s*+1)) ≥ L_k(X(s*+1)) for all j transported from s* to s*+1 on one of the trains V, V+1,...,Y, since k was at s* between trains V and Y.


Let q^1 be the job that replaced k on train V at s*, that is, the job carried on train V from s* to s*+1. Then again by the rule

L_{q^1}(V) ≥ L_k(V).

If q^1 is such that o_{q^1} = s*, then for all j on trains 1,...,V-1 from s* to s*+1, L_j(V) ≥ L_k(V).

If q^1 is such that o_{q^1} < s*, then let V^1 be the train that q^1 arrived on at s*. Repeating the above argument gives us

L_j(X(s*+1)) ≥ L_k(X(s*+1)) for all j transported from s* to s*+1 on one of the trains V^1, V^1+1,...,V, V+1,...,Y;

L_j(X(s*+1)) ≥ L_k(X(s*+1)) for all j transported from s* to s*+1 on one of the trains V^2, V^2+1,...,V^1, V^1+1,...,V, V+1,...,Y.

At some point one of the jobs q^1, q^2,...,q^w is such that o_{q^w} = s*, in which case

L_j(Y) ≥ L_k(Y) = L_k(X(s*+1))

for all j transported from s* to s*+1 on one of the trains 1,...,Y-1. If o_{q^w} ≠ s* for every q^w, then it must be true that all jobs transported from s* to s*+1 are taken on the same train they were transported from s*-1 to s* on, and by case (i) the rule is true.

Rather than having to compute L_j(Y) when Y arrives at s for all j in A_Y(s), we find that implementation of Theorem 4.6 is further facilitated by the following property:

Property 4.4. If L_q(1) > L_r(1), then L_q(Y) > L_r(Y) for all trains Y = 1,...,M and any station s.

Proof. L_j(Y) = L_j(1) + Σ_{i=1}^{Y-1} t_i. Therefore, if L_q(1) > L_r(1) we have that

L_q(Y) = L_q(1) + Σ_{i=1}^{Y-1} t_i > L_r(1) + Σ_{i=1}^{Y-1} t_i = L_r(Y).


It then follows from Theorem 4.6 and Property 4.4 that a schedule which minimizes L_max (C_max) is generated by the following procedure:

(i) Compute L_j(1) (or C_j(1)) for all jobs j.

(ii) Rank the jobs in order of nonincreasing values of L_j(1) (or C_j(1)).

(iii) Whenever a decision has to be made between two jobs, select q over r if q is ranked higher than r.

The procedure represents a more straightforward way of answering the question posed in 4.4.2 of generating a feasible schedule such that L_j ≤ 0 for all j. By minimizing L_max with M* trains, a schedule such that L_max ≤ 0 is generated which would then, of course, satisfy the due date restrictions.
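The ranking procedure can be sketched under the unit-travel-time model used in the examples (train one reaches station s at time s-1, so a_j = o_j - 1 and p(o_j, f_j) = f_j - o_j, and train Y lags train one by the first Y-1 headways). The encoding of jobs as (origin, destination, due date) triples and the sweep-based simulation are our own, not the dissertation's:

```python
def schedule_lmax(jobs, n_trains, headways):
    # jobs: name -> (origin, dest, due).  Each train sweeps the route once;
    # at every station it carries the highest-ranked waiting job one segment
    # (job splitting allowed).  Jobs not delivered in n_trains sweeps are
    # omitted from the result.
    L1 = {j: (f - 1) - d for j, (o, f, d) in jobs.items()}  # L_j(1) = a_j + p - d_j
    rank = sorted(jobs, key=lambda j: -L1[j])               # nonincreasing L_j(1)
    pos = {j: o for j, (o, f, d) in jobs.items()}
    done = {}                                               # job -> delivering train
    last = max(f for _, f, _ in jobs.values())
    for y in range(1, n_trains + 1):
        for s in range(1, last):
            waiting = [j for j in rank if j not in done and pos[j] == s]
            if waiting:
                j = waiting[0]          # rule (iii): highest-ranked job wins
                pos[j] += 1
                if pos[j] == jobs[j][1]:
                    done[j] = y
    # lateness: train y reaches station f at time (f - 1) + sum of headways
    return {j: (jobs[j][1] - 1) + sum(headways[:y - 1]) - jobs[j][2]
            for j, y in done.items()}
```

On a small hypothetical instance, `schedule_lmax({'a': (1, 3, 3), 'b': (1, 4, 3), 'c': (3, 4, 4)}, 2, [2])` returns latenesses {'a': 1, 'b': 0, 'c': 1}, so L_max = 1.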

Example: Consider the following problem involving four jobs and six stations where two trains travel the route with unit travel time between stations:

[Figure: Jobs a, b, c and d positioned along stations 1-6.]

Based on the following data:

  j    d_j    a_j    p(o_j, f_j)    L_j(1)    Rank
  a     4      0         2           -2        #3
  b     3      1         2            0        #1
  c     6      2         3           -1        #2
  d     8      3         2           -3        #4

We have b > c > a > d and an optimal schedule is given by:

[Figure: Train 1 carries a | b | b | d | d over segments 1-2 through 5-6; Train 2 carries the remaining loads.]

with b > a at s = 2, b > c at s = 3, and L_max = L_c = 1.

max c

4.5.2 Feasible Schedules Without Job Splitting

No job splitting for the M train one track problem requires that once a job j is picked up by train Y at o_j, it must arrive at f_j on the same train, for all j = 1,...,n and Y = 1,...,M. The problem we are interested in is:

(P-1) Given M trains with capacity m_i = 1 for all i = 1,...,M, can all the jobs be scheduled so that all satisfy their due dates when job splitting is not allowed?

In 4.4.2 we noted that the last train any job j can be delivered at f_j on so that it arrives on or before its due date is easily determined, in which case the situation can be illustrated by

[Figure: Jobs labeled (2), (3), (1), (2), (3) along stations 1 through ℓ.]

where (L) denotes the last train a job can go on in order to arrive by its due date. While efficient rules have been developed for job splitting problems, results in this section for no job splitting are limited to the case when M = 2. Again define

δ_(L)(s, s+1) = the number of jobs which overlap between stations s and s+1 from among those jobs which must be delivered to their destinations on train L or earlier.


Property 4.5. A necessary condition for all jobs j (j = 1,...,n) to be completed on or before their due dates is that

(i) max_s δ_(1)(s, s+1) ≤ 1

(ii) max_s δ_(2)(s, s+1) ≤ 2

...

(M) max_s δ_(M)(s, s+1) ≤ M.

If Property 4.5 were sufficient, then a feasible solution to (P-1) could be generated by constructing a schedule of the same nature as generated in the job splitting case. However, Property 4.5 is not sufficient, as illustrated by the following example for M = 2, ℓ = 11.

[Figure: Six jobs along stations 1-11 with last-train labels (1) and (2).]

M = max_{s=1,...,11} δ(s, s+1) = 2. However, no feasible schedule exists so that L_j ≤ 0 for all j, j = 1,...,6, with M = 2 trains.

The following algorithm can be used to generate a feasible schedule to (P-1) when M = 2.

(i) Check to see if the necessary condition of Property 4.5 is satisfied. That is,

max_s δ_(1)(s, s+1) ≤ 1 and max_s δ_(2)(s, s+1) ≤ 2.


(ii) Eliminate all jobs that do not overlap with any other jobs from the problem by assigning them to train one.

(iii) Assign train one to all remaining jobs with (L) = 1.

(iv) Assign train two to all jobs that overlap with jobs already assigned to train one.

(v) Assign train one to all jobs that overlap with jobs assigned in (iv).

(vi) Repeat steps (iv) and (v) until either:

(a) Any assignment which must be made creates a conflict, in which case there does not exist a solution such that all job due dates are satisfied.

(b) All jobs are assigned and a solution such that all due dates are satisfied has been found.

(c) No overlap exists but there remain jobs to be assigned. In this case all remaining unassigned jobs are due on train two and can be assigned to either train one or two. Since these jobs do not overlap with any job previously assigned, the problem of assigning these remaining jobs is identical to that of finding a feasible solution such that all jobs arrive at their destinations. Since max_s δ_(2)(s, s+1) ≤ 2, Corollary 4.1 can be used to generate a feasible schedule for the remaining unassigned jobs. A feasible solution with all due dates satisfied will then have been found.

Proof. At each step, a necessary condition for the existence of a schedule where all jobs arrive by their due dates is satisfied. The condition for jobs that must arrive on train one is initially satisfied. The jobs that must go on train one imply that a necessary condition for overlapping jobs must also be satisfied (i.e., these must go on train two), etc.

The algorithm either terminates due to lack of a feasible solution or with all jobs scheduled such that L_j ≤ 0 for all j, implying feasibility of the generated schedule.

Repeated application of the above algorithm for different values of L_j allows one to arrive at an optimal solution for L_max when M = 2.

Example:

[Figure: Seven jobs along stations 1-10 with last-train labels: (2) Job g; (2) Job b; (2) Job e; (1) Job c, (2) Job h; (1) Job a, (1) Job d.]

(i) max_{s=1,...,9} δ_(1)(s, s+1) = 1 and max_{s=1,...,9} δ_(2)(s, s+1) = 2.

(ii) Not applicable, since all jobs have at least one overlap.

(iii) Assign Jobs a, c, d to train one.

(iv) Assign Jobs b and e to train two.

(v) No further assignments can be made.

(vi) At this point, both Jobs g and h remain unassigned. Assign the jobs arbitrarily, say g to train one and h to train two. All jobs are assigned and a feasible schedule has been constructed.


4.6 M Trains on One Track with Switchyards

In the development of all previous sections, it has been the case that all M trains travel the complete route from station s = 1 to s = ℓ. However, when the route includes switchyards, this behavior may no longer hold. The switchyards play the role of stations at which new trains may originate or at which the movement of certain trains may cease. The inclusion of switchyards in the problem creates the concept of a route associated with each train, a situation which can be schematically modeled as follows:

[Figure: Jobs a, b, c, d above a six-station route; trains Y_1, Y_2, Y_3, Y_4 below, each spanning only part of the route.]

where stations 1, 3, and 6 represent switchyards. The jobs {a, b, c, d} play the same role as before. However, the trains {Y_1, Y_2, Y_3, Y_4} now originate and terminate at various switchyards along the route. The basic difference between this situation and those modeled previously is that the carrying capacity provided by the trains is no longer constant for all track segments. If we assume that m_Y = 1 for all trains Y and let


c(s, s+1) denote the carrying capacity between any two stations s and s+1 (s = 1,...,ℓ), we have for this example that:

  s, s+1    δ(s, s+1)    c(s, s+1)
   1-2          2            3
   2-3          3            3
   3-4          2            2
   4-5          2            2
   5-6          2            2

This is in contrast to the previous cases where c(s, s+1) = M for all s = 1,...,ℓ.

4.6.1 Train Schedules Based on Demand

We first consider the case where trains are scheduled based on demand (i.e., the movement of the trains is not based on an "a priori" fixed schedule). In this situation, trains do not move unless they can, in fact, transport a job.

Property 4.6. A necessary and sufficient condition for all jobs to be transported to their destinations is that

c(s, s+1) ≥ δ(s, s+1) for all s = 1,...,ℓ,

where we recall that δ(s, s+1) denotes the number of jobs (loads) which overlap between stations s and s+1.

Proof. The condition is clearly necessary. Sufficiency is established by assuming that c(s, s+1) ≥ δ(s, s+1) for all s = 1,...,ℓ. A schedule can now be constructed in which all jobs arrive at their destinations by simply starting with the first station and arbitrarily assigning the jobs to any unfilled train. Then repeat the process at each station encountered along the route. If a station s' is reached where all loads cannot be assigned, it follows that c(s', s'+1) < δ(s', s'+1), which is a contradiction.

4.6.2 Fixed Train Schedules

We now consider the situation that arises when the schedule of the trains is fixed "a priori." In this case Property 4.6 is no longer sufficient to ensure the delivery of all jobs to their destinations. For example, consider the following problem where we assume that m_Y = 1 for all trains Y, and the travel time between any two stations is identical (i.e., one unit of time).

[Figure: Jobs a, b, c, d along stations 1-5, and the routes of trains Y_1, Y_2, Y_3, Y_4, each with start time r_Y = 0.]

Since there is unit travel time between any two stations, the train schedules are specified by the start times of the trains. Letting r_Y denote the start time for train Y, in this example the data are as follows:


  s, s+1    c(s, s+1)    δ(s, s+1)        Y    r_Y
   1-2          2            2            1     0
   2-3          2            2            2     0
   3-4          1            1            3     0
   4-5          1            1            4     0

We note that the necessary condition of Property 4.6 is satisfied. However, there does not exist a schedule which delivers all jobs to their destinations on trains Y_1, Y_2, Y_3 and Y_4. The earliest moment Job c can arrive at station 2 is at t = 1, at which time, because of the train schedule, there is no train to transport it to station 3.

If, however, there does exist a schedule that transports all jobs to their destinations, then they can be delivered as follows:

Theorem 4.7. Assume that the necessary condition of Property 4.6 is satisfied. Furthermore, assume that all trains travel at the same constant speed (i.e., train passing is not allowed). Let m_Y = 1 for all trains Y (Y = 1,...,M', where M' denotes the number of trains which travel various sections of the route). Then any schedule having the property that given the arrival of train Y at station s, s = 1,...,ℓ,

(i) If |A_Y(s)| ≤ 1, then any job in A_Y(s) is transported to s+1 on train Y;

(ii) If |A_Y(s)| > 1, then any job in A_Y(s) is transported to s+1 on train Y such that if job q is transported and job r is not, then

p_q(s, f_q) ≥ p_r(s, f_r)

(i.e., give priority to that job currently farthest away from its destination), transports all jobs to their destinations if at least one such schedule exists.

Proof of Theorem 4.7. Let X'(ℓ) be any schedule that delivers all jobs to their destinations. Let t_Y(s) denote the arrival time of train Y at station s for any train Y (Y = 1,...,M') and s = 1,...,ℓ.

Lemma 4.2. Let q and r be any two jobs available for transporting from s to s+1 on train Y at t_Y(s) (i.e., q, r ∈ A_Y(s)). Assume that q is transported to s+1 on train Y in X'(ℓ) but that p_q(s, f_q) < p_r(s, f_r). If q and r are interchanged (that is, r is transported on Y to s+1 instead of q), both jobs can still be transported to their destinations without affecting the schedule of any other jobs or any of the trains that leave s on or before t_Y(s).

Proof. Assume that r is transported from s to s+1 on train Z in X'(ℓ), where t_Z(s) > t_Y(s). If q and r are interchanged at s, then after the interchange q travels on Z and r on Y from s to s+1. Consider station s+1, at which point either

(i) q arrives at its destination, or

(ii) both q and r continue to s+2.

Case (i): If f_q = s+1, then clearly after the interchange q still arrives at its destination. Since t_Y(s) < t_Z(s), r will arrive at s+1 earlier than in X'(ℓ) and can be transported from s+1 to f_r as prior to the interchange.


Case (ii): If f_q ≠ s+1, let Y' and Z' be the trains that transport q and r, respectively, from s+1 to s+2 in X'(ℓ), where Y' leaves s+1 either before or after train Z arrives.

(a) Assume Y' leaves s+1 after Z arrives (i.e., t_{Y'}(s+1) > t_Z(s+1)). After the interchange q arrives at s+1 on train Z, in which case it can still be transported from s+1 to s+2 on train Y'. It can then be transported from s+2 to f_q as before the interchange. Since r arrives at s+1 on train Y after the interchange, t_Y(s+1) < t_Z(s+1) implies that r arrives at s+1 earlier than before the interchange, in which case it can clearly be transported from s+1 to s+2 on Z' and from s+2 to f_r as before.

(b) Assume Y' leaves s+1 before Z arrives (i.e., t_{Y'}(s+1) < t_Z(s+1)). In this case q arrives at s+1 too late to catch Y'. However, by interchanging q and r again at s+1 (i.e., q is transported on Z' from s+1 to s+2 and r is transported on Y'), both q and r can be transported to s+2 without affecting any other jobs or any other trains except the ones they traveled on in X'(ℓ). After they are interchanged at s+1, the argument is repeated until either case (i) or case (ii)(a) occurs, after which no further interchanges are required. Termination of the procedure is assured since p_q(s, f_q) < p_r(s, f_r) (i.e., case (i) will always occur eventually unless (ii)(a) occurs earlier).

Now let X*(ℓ) denote any schedule that adheres to the property of Theorem 4.7. Let X'(ℓ) ≠ X*(ℓ) be any schedule that delivers all jobs to their destinations on M' trains. Let t_1 < t_2 < ... < T denote the moments in time at which a train either leaves or arrives at a station, where T is the total time required for all M' trains to complete their routes in X'(ℓ).

Consider t_1 and station s = 1. If no trains leave s = 1 at t_1, then continue to s = 2. If a train Y leaves s = 1 at t_1, let q be the job transported on it to s = 2. If p_q(1, f_q) ≥ p_j(1, f_j) for all j ∈ A_Y(1), then continue to s = 2. If not, schedule r on train Y at t_1 and q on train Z, where r is such that p_r(1, f_r) ≥ p_j(1, f_j) for all j ∈ A_Y(1) and Z transports r from s = 1 to s = 2 in X'(ℓ). It follows from Lemma 4.2 that after the interchange both q and r can still be delivered to their destinations. Repeat the procedure for s = 2,3,...,ℓ. When s = ℓ, it will be the case that the schedule for t_1 has the property of Theorem 4.7. Consider now t_2 and s = 1,...,ℓ. Since all interchanges occurring at t_2 will not affect the schedule at t_1 (Lemma 4.2), it follows that t_1 need never be reconsidered. After the schedule for t_2 is determined, it remains unaffected, and so on. The procedure terminates when the schedule for T is established, in which case a schedule X*(ℓ) has been generated.

It should be noted that it is not necessary to carry out Theorem 4.7 for all time periods t_1 < t_2 < ... < T unless it is the case that a feasible schedule is in fact generated. At any moment in time when the necessary condition of Property 4.6 is not satisfied, the procedure of Theorem 4.7 can be terminated, since it would then be the case that no feasible solution exists.


Example: Consider the following jobs and trains, where r_Y denotes the start time for train Y. We assume unit travel time for all trains between stations.

[Figure: Five jobs and the routes and start times of trains Y_1 through Y_5 along stations 1-6.]

We first note that the necessary condition of Property 4.6 is satisfied, that is, c(s, s+1) ≥ δ(s, s+1) for all s = 1,...,5:

  s, s+1    δ(s, s+1)    c(s, s+1)
   1-2          1            3
   2-3          3            4
   3-4          3            3
   4-5          2            3
   5-6          2            2

t_1 = 0: s = 1; Job a is transported from s = 1 to s = 2 on train 1.

s = 2; since p_c(2, f_c) > p_b(2, f_b), Job c is transported from s = 2 to s = 3 on train 3.

After the trains move, the necessary condition of Property 4.6 still holds.

t_2 = 1: s = 1; no jobs are available.

s = 2; since p_b(2, f_b) > p_a(2, f_a), Job b is transported from s = 2 to s = 3 on train 1.

s = 3; since p_d(3, f_d) > p_c(3, f_c), Job d is transported from s = 3 to s = 4 on train 3.

After the trains move, train one completes its route and the necessary conditions still hold.


t_3 = 2: s = 1; not applicable.

s = 2; Job a is transported on train two to s = 3.

s = 3; no trains available.

s = 4; Jobs d and e are transported on trains 4 and 3 to s = 5.


t_4 = 3: s = 1, 2; not applicable.

s = 3; since p_c(3, f_c) > p_b(3, f_b), Job c is transported on train 2 to s = 4. Job b is picked up by train 5.

s = 5; trains 3 and 4 complete their routes by transporting d and e.

Finally, train 5 completes its route with Job c.

The schedule is summarized as follows:

[Figure: Summary of the constructed schedule for Trains 1 through 5 along stations 1-6.]
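The farthest-from-destination rule of Theorem 4.7 is easy to simulate under the unit-travel-time assumption. The sketch below uses a small hypothetical instance of two trains over overlapping sub-routes, not the data of the example above:

```python
def dispatch(jobs, trains, horizon):
    # jobs: name -> (origin, dest); trains: list of (first, last, start_time).
    # Unit travel time: train (a, b, r) sits at station a + (t - r) at time t.
    pos = {j: o for j, (o, f) in jobs.items()}
    done = {}
    for t in range(horizon):
        for a, b, r in trains:
            s = a + (t - r)
            if t < r or s >= b:
                continue                 # train not yet started, or route finished
            waiting = [j for j in jobs if j not in done and pos[j] == s]
            if not waiting:
                continue
            # rule (ii): carry the job currently farthest from its destination
            j = max(waiting, key=lambda j: jobs[j][1] - s)
            pos[j] = s + 1
            if pos[j] == jobs[j][1]:
                done[j] = t + 1          # arrival time at the destination
    return done if len(done) == len(jobs) else None

# hypothetical instance: train 1 covers stations 1-3 from t = 0,
# train 2 covers stations 2-4 from t = 1
result = dispatch({'a': (1, 3), 'b': (2, 4)}, [(1, 3, 0), (2, 4, 1)], 6)
```

Here train 1 carries a onto station 2, hands the segment 2-3 to the farther job b, and train 2 sweeps a and then b to their destinations, so `result` is {'a': 2, 'b': 3}; a return value of None signals that the rule (and hence, by Theorem 4.7, every schedule) fails to deliver all jobs within the horizon.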

4.7 Routes Consisting of a Directed Network

In all previous sections it has been assumed that the movement of jobs and trains is along a single fixed route with one initial and one final station. Another way of describing this assumption is that our freight transportation network consists of a series of directed arcs with one source and one sink node. The case of parallel but noninteracting routes can be decomposed into independent single fixed route problems.


If the freight transportation network does not have the above form, the problem becomes quite complex (this is the situation faced by an actual railway system when all lines are considered). In this case, the results obtained are limited to that of determining the minimum number of trains required to deliver all jobs to their destinations for a freight transportation system consisting of a directed network.

Specifically, given a system which is a directed network G = [N, A], where N denotes the set of stations and A the various track sections, let S = {s_1, s_2,...} denote the stations at which trains originate and T = {t_1, t_2,...} the set of stations at which trains terminate their routes. For example:

[Figure: A directed freight network carrying jobs a through d, with trains originating at stations s_1, s_2, s_3 and terminating at stations t_1, t_2.]

In the above freight transportation system, trains leave either stations s_1, s_2 or s_3, pass through intermediate stations and eventually terminate their routes at either station t_1 or t_2.

We again note that the role of the jobs consists in creating an overlap upon the system. If m_Y = 1 for all trains, then each job creates the need for a train to travel along all arcs from the job's origin station to its destination.

Property 4.7. Let δ_{x,y} denote the total demand between any two stations x and y (i.e., along any arc (x, y)). Then δ_{x,y} is a lower bound on the number of trains required to cross arc (x, y).

We also note that for any intermediate node x ∈ N \ (S ∪ T), it must be the case that the number of trains entering station x must equal the number of trains which leave (basic conservation of trains property). If the trains are scheduled on demand (that is, there is no "a priori" train schedule), we find that:

Theorem 4.8. The minimum number of trains required to transport all jobs to their destinations is given by the minimum feasible flow from source to sink.

Proof. First model the system as a directed network with lower bounds δ_{x,y} for each arc (x, y). Let S be a supersource and T be a supersink. Use any available algorithm [30] to find the minimum feasible flow M* from S to T. We must show that all jobs can in fact be taken to their destinations on M* trains (i.e., how to assign the jobs to the trains).

Consider any intermediate node x and consider the incoming arcs (i.e., the set B(x) = {y ∈ N | (y, x) ∈ A}). Take k to be any job arriving on a train at x. Either

(i) k passes through the node and leaves on an outgoing arc. Then by leaving it on the same train it arrived on (i.e., assigning the train it arrived on to carry the job through its outgoing arc) we assure that k is taken beyond node x.

(ii) f_k = x (node x is job k's destination). In this case k has been taken to its destination.

After all jobs that are to be carried through node x have been assigned (i.e., these train assignments have been made), all jobs which originate at x remain to be assigned. By assigning all remaining unassigned trains so as to complete the flow requirements on the outgoing arcs, we assure that these jobs are taken beyond x.

An assignment of trains to jobs which delivers all jobs to their destinations can be made by labeling the nodes (stations) from 1,2,...,ℓ. For each node, assignments are made in accordance with the procedure in the above proof. The procedure terminates with node ℓ, at which time all jobs are assigned in deliverable fashion.

4.8 Conclusion

Problems with scheduling and routing aspects play a vital role in the efficient operation of transportation systems. In this chapter very elementary systems based on the movement of freight along a local line of a railway system have been considered. In such a situation, the routing aspect is fixed, allowing one to concentrate exclusively on the scheduling portion of the problem. This simplification allows one to develop scheduling rules based on "locally" optimal system performance. These rules have the particularly nice property that they can be updated as jobs enter or leave the system.

The motivation for the rules developed is based on conceptualiz- ing the problem as one of scheduling jobs on a system of parallel proc- essors. For the single train case, the m boxcars are considered to be m identical processors whose function it is to process a set of jobs. The jobs are fixed and the processors move in time in such a way that any job not scheduled when the processors arrive is relegated to being


performed at some later date. The M train case then consists of a series of M parallel processors adding greater flexibility as well as complexity to the scheduling problem. However, insistence upon the use of a single fixed route allows efficient optimal rules to be developed for various criteria and for assumptions on job availabilities. The applicability of solution procedures common to more classical parallel processor problems for the solution of these "single fixed route" problems can be noted in the similarity of the decision rules developed in Sections 4.3 through 4.5 and those of Chapter 3. When the "single fixed route" assumption is relaxed we find that even the question of generating feasible schedules is not straightforward although results were attained for particular network structures.

The complexity of most transportation systems presents a serious challenge to those attempting to analytically model and perhaps solve some of the problems inherent in their operation. However, the underlying mechanism of many a system can only be identified by stripping it to its most basic elements and building from there on. It is indeed fortunate that many of the problems considered in this chapter were amenable to efficient optimal rules, since we feel that the systems considered constitute the most basic structure of freight transportation. It remains to be seen whether the insight developed will carry onward to problems of a more general nature.

CHAPTER 5 MIN-MAX PARALLEL PROCESSOR ALLOCATION PROBLEMS

5.1 Introduction

This chapter is concerned with a class of parallel processor problems characterized by optimality criteria that are a function of how much of a job has been completed by a certain period. This is in contrast to criteria considered in previous chapters, which are a function of job completion times. Specifically, the problem of interest concerns the minimization of a cost function associated with processing a set of jobs on m identical processors over a fixed time horizon H where:

(a) It is known how much of each job should be completed by period t (t = 1, ..., H) (i.e., there exists a fixed schedule for completing each job).

(b) The penalty cost for each job is a function of how much of the job has been completed by time t (i.e., a penalty is assessed for being off the fixed schedule).

In Chapter 2 (Section 2.9.2.A) it was mentioned that Dorsey, Hodgson and Ratliff [20, 21, 22] have shown that problems adhering to the above description which arise in production scheduling contexts can be modeled as minimal cost network flow problems. An example of their formulation for a two job, three period problem such that job



two is available only in periods two and three is given in Figure 3. To construct the network, define a node m_t corresponding to each period t and define a node n_jt corresponding to each job j and period t in which j is available. From each node m_t, construct a forward arc (m_t, n_jt) to each node n_jt for all j = 1, ..., n. Since only one machine can be assigned to each job in each period, an upper bound of one is placed on each arc (m_t, n_jt). Next connect each node n_jt to node n_j,t+1 by two parallel forward arcs (n_jt, n_j,t+1) and one reverse arc (n_j,t+1, n_jt). An upper and lower bound w_jt is assigned to one of the forward arcs, corresponding to the number of machine periods of production of job j required by period t to meet the production schedule. If more machine periods of production are completed by period t than required, the flow on the other forward arc is assessed a cost representing the penalty for exceeding the schedule. Analogously, flow on the reverse arc is assessed a cost representing the penalty for having completed less than required by period t. Finally define a source node S and a sink node T, where a forward arc (S, m_t) is constructed for all t = 1, ..., H and a forward arc (n_jH, T) is constructed for all j = 1, ..., n. A capacity of M, the number of processors, is placed on the flow through each of the arcs (S, m_t). Since each job j must be completed at the end of the horizon, a lower bound of w_jH denoting the number of machine periods required to completely process job j is placed on the flow through the arc (n_jH, T). The problem is then solved using any of the algorithms for finding minimal cost flows in single commodity networks [30].
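The construction above can be sketched concretely. The following is our own illustration (not the cited authors' code) using the third-party networkx package; a lower bound l on an arc is handled by the standard transformation that shifts l onto the node demands (tail +l, head -l) and keeps capacity (upper - lower), and a return arc T -> S turns the model into a circulation that nx.min_cost_flow can solve directly. The toy instance (M = 1 machine, H = 2 periods, one job) is an assumption made for illustration.

```python
import networkx as nx

def bump(G, node, delta):
    # Accumulate the 'demand' attribute used by nx.min_cost_flow.
    G.add_node(node)
    G.nodes[node]['demand'] = G.nodes[node].get('demand', 0) + delta

def add_arc(G, u, v, lo, hi, cost=0):
    """Arc with lower bound lo: keep capacity hi - lo, shift lo onto demands."""
    if hi > lo:                 # a lo == hi arc is pure forced flow, no edge needed
        G.add_edge(u, v, capacity=hi - lo, weight=cost)
    bump(G, u, lo)
    bump(G, v, -lo)

# Toy instance (assumed data): M = 1 machine, H = 2 periods, a single job
# available in both periods, requiring w_11 = 1 machine-period by period 1
# and w_1 = 2 in total; off-schedule penalties of 5 per unit.
M, PEN = 1, 5
G = nx.DiGraph()
for t in (1, 2):
    add_arc(G, 'S', ('m', t), 0, M)             # arcs (S, m_t), capacity M
    add_arc(G, ('m', t), ('n', 1, t), 0, 1)     # one machine per job per period
add_arc(G, ('n', 1, 1), ('n', 1, 2), 1, 1)             # on-schedule arc, w_11 = 1 forced
add_arc(G, ('n', 1, 1), ('n', 1, 2), 0, 10, cost=PEN)  # ahead-of-schedule arc
add_arc(G, ('n', 1, 2), ('n', 1, 1), 0, 10, cost=PEN)  # behind-schedule arc
add_arc(G, ('n', 1, 2), 'T', 2, 2)                     # job total w_1 = 2 forced
add_arc(G, 'T', 'S', 0, 100)                           # circulation return arc

flow = nx.min_cost_flow(G)
cost = nx.cost_of_flow(G, flow)   # 0 here: the schedule can be met exactly
```

Note that the two forced arcs (lower bound equal to upper bound) disappear from the transformed graph entirely; their flow survives only as demand imbalances at the nodes, which is why a plain DiGraph suffices despite the parallel forward arcs in the original formulation.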


Figure 3: Example of a Two Job, Three Period Problem with M Identical Processors. [Arc annotations indicate that no cost is incurred when production is on schedule and that a cost is assessed for exceeding the production schedule.]


This network formulation extends beyond problems of a production scheduling context. In this chapter we shall apply the network flow formulation to problems involving the allocation of mine countermeasures, which we find can be modeled as parallel processor problems of this form.

5.2 Formulation

The context of the problems to be considered here will be that of providing the "defender" of a logistics system under mining attack with a quantitative basis for allocating his mine countermeasures (MCMs). The "defender" plays the role of the "scheduler" and the countermeasures, which consist primarily of helicopters and their support equipment, are the parallel processors. The logistics system consists of a set of ports which are mined. The clearing of each port represents a job to be carried out whose completion requires the assignment of a minimum number of MCMs. A penalty is incurred depending on the latest time each job is completed, where the processing of the job may be split among the parallel processors (the MCMs). The responsibility of the defender is to allocate his MCMs in such a way that the losses (penalties) incurred by the system are not extreme.

McWhite and Ratliff [51] have developed optimal and near optimal procedures for various problems in which the measure of effectiveness for MCM allocation consists of minimizing total shipping losses. Here the prime concern will be in obtaining optimal min-max allocations for the following two cases:


Case 1:

We assume that there is a loss probability associated with getting goods out of a port. The probability of loss is a nonincreasing function of the number of countermeasures assigned to the port. In addition, we have a schedule of what has to be shipped out of the ports and we assume that things go out on schedule. The objective in this case is to deploy the MCMs so that the maximum loss probability of the ports is minimized.

Case 2:

We assume that the goods in a port cannot leave until the port is reasonably cleared (i.e., until the loss probability of a port is less than or equal to some satisfactory level). A penalty is incurred for each period a shipment is delayed. The penalties are assumed to be nondecreasing functions of time. The objective for this situation is to minimize the maximum penalty incurred by all ports.

5.3 Case 1: Minimization of Maximum Loss Probability

To facilitate modeling this case we will consider our planning horizon to consist of H time periods. We will allow the MCMs to be redeployed among ports, but only between periods. For i = 1,2,...,m and t = 1,2,...,H, let l_it(Σ_{k=1}^{t} q_ik) represent the loss probability for port i in period t, where q_ik is the number of MCMs assigned to port i in period k. Implicit in this definition of the loss probability is the assumption that the effect of using an MCM is independent of the period. The min-max allocation problem under these assumptions can be formulated as:

min max l_it( Σ_{k=1}^{t} q_ik )   over i = 1,...,m and t = 1,...,H

subject to

Σ_{i=1}^{m} q_it ≤ Q_t   for t = 1,2,...,H

0 ≤ q_it ≤ q̄_it and integer for all i = 1,2,...,m and t = 1,2,...,H

where Q_t is the number of MCMs available during period t and q̄_it is the maximum number of MCMs which can be assigned to port i in period t.

The system can be modeled as a directed network by creating nodes n_t for t = 1,2,...,H and p_it for i = 1,2,...,m and t = 1,2,...,H. For t = 1,2,...,H, construct an arc from n_t to p_it with capacity q̄_it and associate a loss probability of zero with each of these arcs. For t = 1,2,...,H-1 and i = 1,2,...,m, construct an arc from p_it to p_i,t+1 with capacity Σ_{k=1}^{t} q̄_ik and loss probability l_it(f(p_it, p_i,t+1)), where f(p_it, p_i,t+1) is the arc flow. Construct a source node S with arcs from S to n_t having capacities Q_t and loss probabilities of zero for t = 1,2,...,H, and a sink node T with arcs from p_iH to T having capacities Σ_{t=1}^{H} q̄_it and loss probabilities l_iH(f(p_iH, T)) for i = 1,2,...,m.


Figure 4: Network Representation of a Two Port, Three Period Min-Max Loss Probability Allocation Problem. [Arcs are labeled with triples (Loss Probability, Capacity, Lower Bound), e.g. (0, Q_1, 0) on an arc (S, n_1) and (l_11(·), q̄_11, 0) on an arc leaving p_11.]


Finally, assign a lower bound of zero to all arcs. An example network for a 2 port, 3 period system is shown in Figure 4. (This network structure is the same as that presented in McWhite and Ratliff [51].)

If we assume that l_it(·) is a monotonically nonincreasing function of the number of MCMs assigned to port i in period t for i = 1,2,...,m and t = 1,2,...,H, then the following procedure provides an efficient method for minimizing the maximum loss probability:

(i) Construct a feasible flow from S to T.

(ii) Let l* = max l_it(·) over all i = 1,...,m and t = 1,...,H.

(iii) A necessary and sufficient condition for there to exist a better solution is that the loss probability on every arc be strictly less than l*. Therefore, increase the lower bound on each arc to the smallest integer value it can assume so that l_it < l* for all i = 1,...,m and t = 1,...,H.

(iv) Repeat steps (i), (ii) and (iii) until no feasible flow exists. The deployment corresponding to the last set of feasible flows found minimizes the maximum loss probability.

Optimality of the algorithm is ensured because at each step a necessary and sufficient condition for improving the current solution is generated. The algorithm terminates because the maximum flow is finite and at least one lower bound is increased by an integer at each step.

It should be noted that the algorithm used for the min-max allocation problem can be extended to any general directed network with monotonically nonincreasing functions on the arcs. (See Figure 5 for


Figure 5: An Example of a Min-Max Allocation Problem on a Directed Network. [Each arc (x,y) is labeled with its function f_(x,y)(q(x,y)).]


an example network.) That is, given a directed network G = [N, A], suppose that each arc (x,y) ∈ A has associated with it a monotonically nonincreasing function f_(x,y)(q(x,y)), where q(x,y) denotes the number of resources assigned to arc (x,y). Then an optimal solution to the problem:

min max f_(x,y)(q(x,y))   over (x,y) ∈ A

subject to

Σ_{y ∈ A(x)} q(x,y) − Σ_{y ∈ B(x)} q(y,x) =  Q if x = S;  0 if x ≠ S,T;  −Q if x = T

0 ≤ q(x,y) ≤ Q and integer for all (x,y) ∈ A

where A(x) = {y ∈ N | (x,y) ∈ A} and B(x) = {y ∈ N | (y,x) ∈ A}, is obtained by

(i) constructing a feasible flow from S to T;

(ii) letting K* = max f_(x,y)(q(x,y)) over all (x,y) ∈ A;

(iii) increasing q(x,y) for all (x,y) to the smallest value it can take such that f_(x,y)(q(x,y)) < K* for all (x,y);

(iv) repeating steps (i), (ii) and (iii) until no feasible flow exists. The last set of feasible flows is optimal.

This algorithm is an extension of methods used by Jacobsen [43] and Porteus and Yormak [58] for the solution of min-max marginal allocation problems of the form:


min max f_i(q_i)   over i = 1,...,n

subject to

Σ_{i=1}^{n} q_i = Q

q_i ≥ 0 and integer for all i = 1,...,n

where f_1, f_2, ..., f_n are n real-valued monotonically nonincreasing functions defined on the nonnegative reals and Q is a positive integer denoting the number of units of scarce resource.

5.4 Case 2: Minimization of Maximum Penalty

Again we allow our planning horizon to be made up of H time periods and we allow the MCMs to be redeployed among ports, but only between periods. Let Q_t denote the maximum number of MCMs which can be allocated to all ports in period t and q_i denote the number of MCM days required to work on port i for it to be cleared. Also let q̄_it denote the maximum number of MCMs which can be assigned to port i in period t and r_i denote the first period that port i can be worked on. In this case we will assume that a penalty c_i(t_i) is incurred if port i is not cleared until period t_i. We also require that all ports be completely cleared by period H.

This problem can be formulated as:

min max c_i(t_i)   over i = 1,...,m

subject to

Σ_{t=1}^{H} q_it = q_i   for i = 1,2,...,m

Σ_{i=1}^{m} q_it ≤ Q_t   for t = 1,2,...,H

0 ≤ q_it ≤ q̄_it   for all i = 1,2,...,m and t = 1,2,...,H

r_i ≤ t_i ≤ H   for i = 1,2,...,m

with q_it and t_i integer.

This system can be modeled as a network by creating nodes n_t for t = 1,...,H and p_i for i = 1,2,...,m. For t = r_i, r_i+1, ..., H, construct an arc from n_t to p_i with capacity q̄_it. Construct a source node S with arcs from S to n_t having capacities Q_t, and a sink node T with arcs from p_i to T with both an upper and lower bound of q_i. An example network for a 2 port, 3 period system where r_1 = 1 and r_2 = 2 is shown in Figure 6.

If we assume that c_i(t) is a monotonically nondecreasing function of t for all t = 1,...,H and i = 1,...,m, then the following observation can be made:

Given any feasible deployment and its associated objective function value, a necessary and sufficient condition for generating a strictly better deployment is that the penalty costs for all ports in the new solution be less than the current objective function value. This condition, in terms of the network, implies that for a better deployment to be obtained, flow may be allowed into p_i only from period nodes n_t for which c_i(t) is strictly less than the current objective function value. (This notion is utilized by Bratley et al. [13] to solve a related parallel processor scheduling problem.)


Figure 6: Network Representation of a Two Port, Three Period Min-Max Penalty Allocation Problem. [Arcs are labeled with pairs (Capacity, Lower Bound), e.g. (Q_3, 0) on the arc (S, n_3).]


Based on this condition, the following algorithm can be used to find a deployment that minimizes the maximum penalty cost.

(i) Generate a feasible flow for the network and calculate the penalty costs associated with the feasible solution.

(ii) Let K = max c_i(t_i) over all i = 1,...,m, where t_i is the period in which port i is cleared in the current solution.

(iii) For all i = 1,...,m, determine the latest period that p_i can be cleared so that c_i(t) is strictly less than K.

(iv) Delete all arcs between p_i and all period nodes which extend beyond the latest period determined in (iii), for all i = 1,...,m.

(v) Repeat steps (i) through (iv) with the new network generated at each iteration until no feasible flow can be attained. The deployment corresponding to the last feasible flow is optimal.

As in Case 1, optimality of the algorithm follows from the fact that at each step a necessary and sufficient condition for improving the current solution is satisfied. Termination of the algorithm upon arriving at the first infeasible solution implies that the solution before the last step is optimal.

5.5 Conclusions

In this chapter two cases of min-max parallel processor allocation problems have been considered. Both problems are found to adhere to a network flow formulation of the type utilized for the solution of related problems which arise in production scheduling contexts. While these problems differ from those considered in the previous chapters,


chronologically they represent the first parallel processor situations approached in this study. They can be related to the previous work by observing that the notion used in developing algorithms for both cases was found to extend to the solution of parallel processor problems such as those considered in sections 3.6 and 4.5.2.

CHAPTER 6 SUMMARY AND SUGGESTIONS FOR FUTURE RESEARCH

In this dissertation a study of some parallel processor sched- uling problems has been presented. We now summarize and comment upon the main contributions of this study which are contained in Chapters 3 and 4.

Chapter 3 presents some optimal rules and algorithms for prob- lems originating from the theory of parallel processor scheduling. The developments of this chapter were motivated by the belief that more ef- ficient methods could be constructed for dealing with parallel processor problems, some of which had been considered previously in the literature: thus the emphasis on optimal rules rather than enumerative type algo- rithms. Moreover, an extensive review of the previous literature (Chapter 2) served to support an initial observation that many current results in the area were isolated ventures with little interrelation- ship. This led to the objective that identification of some basic prob- lems in the area would not only lead to the development of optimal priority type rules for the solution of the same, but also serve to link previously unrelated observations and results for both single and multiple processor problems. The results of Chapter 3 can be considered as achieving but partial success with respect to this original, albeit, optimistic objective.



By allowing a limited form of job splitting, a priority based rule was shown to minimize maximum lateness for the parallel processor problem without restrictions on job or machine availabilities. Unfortunately, the rule did not extend optimally to the situation where jobs were not simultaneously available, although a rule for minimizing maximum flow time for this more general problem was found optimal. Both results are contributions to the theory for a very specialized parallel processor situation but lack the impact and generality attainable if, say, the rule for minimizing lateness had also been shown optimal for nonsimultaneous release times, where current results consist of a network formulation. Nonetheless, the rules do tie in many other results in the area.

A by-product of this part of the study has been to identify the extent to which optimal priority based procedures will play a role in the solution of parallel processor problems. It may well be the case that the assumptions imposed on the problems for which rules were attained mark the perimeter beyond which the development of optimal rules becomes highly improbable. This is certainly a strong conjecture for problems where job splitting is not allowed (partitioning problems with a high degree of discreteness and complexity). While efforts will surely continue to encompass more general problems than those considered here with optimal rules, it is felt that even when job splitting is allowed, this does not constitute a very promising area for future research.

The most attractive area for future development concerns the class of problems presented in Chapter 4. In this chapter, the


scheduling of freight on a class of "fixed route" railway systems was modeled as a parallel processor problem. It is important to emphasize that the only distinction between the problems in this chapter and those considered in Chapter 3 concerns the limited availability of the machines. In the fixed route problems jobs not carried by a machine (i.e., freight not carried by a train) at a particular moment in time can no longer be processed by the same machine and must wait until the next processor arrives or becomes available. This is in contrast to the more classical formulation of the parallel processor problem where jobs not processed at any point in the scheduling horizon "queue up" and wait until they are processed by either the same or another machine. Yet it is this property that opens up a new area of scheduling problems amenable to priority based optimal rules for the variety of criteria and assumptions considered in Chapter 4.

The development of this new area proceeds from the simplest case of one train (processor) traveling a single fixed route to the situation where M trains travel the route. It is shown that by giving priority to jobs currently closest to their destinations, the number of jobs taken to their destinations by the single train as well as by the first K trains (K ≤ M) of any M train system is maximized. An important characteristic of this criterion is that the rule, while dependent on job splitting for its implementation, provides sufficient information to develop solutions for no job splitting with the same optimal value. This is especially of interest when it is remembered that the no job splitting assumption provided an insurmountable obstacle for the development of optimal rules in the more classical


problems of Chapter 3. One then observes that an approach for dealing with other parallel processor problems without job splitting could consist of looking first at the job splitting case and trying to identify those situations where the value of the relaxed problem is in fact optimal when jobs cannot be split. In any case, optimal solutions to job splitting cases will always provide bounds on the optimal values of their no job splitting counterparts.

When restrictions are imposed on job availabilities, the analysis was not so straightforward. When job splitting was allowed, feasibility conditions and rules for transporting all jobs to their destinations were developed for cases when jobs are not simultaneously available or job due dates are invoked. However, inclusion of both constraints and/or a no job splitting assumption makes even the question of feasibility presently intractable. This latter occurrence identifies a niche for future analysis within the framework of fixed route problems.

Relaxing arrival restrictions on the jobs allowed one to minimize maximum lateness for the M train case when again job splitting is allowed. No job splitting results were confined to the case of two trains, and attempts for M ≥ 3 were thwarted by the highly combinatorial and unstructured nature of the problem that evolved.

The last two sections of Chapter 4 present scenarios of in- creasing practicality and not too surprisingly, of increasing diffi- culty. Allowing trains to originate and/or terminate their progress at different stations introduces the notion of a route associated with each train although all train routes are still confined to the same



single track. Enlarging the situation to systems of tracks (networks, in essence) increases dramatically the complexity of the problem. In making any decision concerning the loading of a job on a train, it no longer suffices to consider only the remaining processing time of a job. One must also take into account where the job is to go (i.e., the job route) and the train route. For example, in the one train system below, where the solid lines denote the job route and the broken line the train route,

[Diagram: a small network of stations with job routes shown as solid lines and the train route as a broken line.]

maximizing the number of jobs taken to their destinations requires that priority be given to job b, which is neither the job closest to nor the job farthest from its destination at station 1. In general, the inclusion of non-fixed routing aspects into the problem destroys much of the "locally optimal" properties of the fixed routing case. However, it is quite probable that the special structure of networks such as trees can be capitalized upon to generate efficient rules and algorithms.


In conclusion, it is felt that there exist greater incentive and promise for analyzing and exploring more general situations than those considered in Chapter 4. This study has served to introduce and develop a new area of scheduling on parallel processors. Further extensions of this research not only serve the goal of bridging the gap between the simplified but insightful special cases we have considered and the highly complicated situations encountered in freight transportation, but also add to an increased understanding of parallel processor systems, be they of theoretical or applied origin.

APPENDIX I

COMPUTATIONAL RESULTS

A.1.1 Introduction

As noted in Chapter 3, the shortest remaining slack time rule does not extend optimally to the situation where jobs are not simultaneously available. However, experience with the problem led the author to believe that with a slight modification the heuristic might demonstrate near optimal behavior as well as a very high degree of computational efficiency. To test this conjecture, an experiment was carried out. The global result for 279 different problems was that in 277 cases the rule yielded an optimal solution. A description of the experiment and details of the results obtained are now presented.

A.1.2 Description of the Experiment

The heuristic tested for solving the n job, m machine problem with job splitting and nonsimultaneous release times consists of scheduling the jobs so that for any t' = 1,2,...,T, all k ∈ {A(t') ∩ C(X(t'))} are scheduled as follows:

(i) If |A(t') ∩ C(X(t'))| ≤ m, then process all jobs in A(t') ∩ C(X(t')).

(ii) If |A(t') ∩ C(X(t'))| > m, then process any m jobs in A(t') ∩ C(X(t')) such that if job q is processed and s is not, then either

(a) d_q(t') − p_q(t') < d_s(t') − p_s(t'), or

(b) d_q(t') − p_q(t') = d_s(t') − p_s(t') and p_q(t') > p_s(t').
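Assuming A(t') is the set of released jobs and C(X(t')) the set of uncompleted jobs at time t', the rule amounts to running, in each period, the m available unfinished jobs with smallest remaining slack d_k − p_k(t'), breaking ties toward larger remaining processing time. A minimal Python sketch (our rendering, not the original APL):

```python
def schedule(p, r, d, m):
    """Simulate the modified shortest-remaining-slack rule and return the
    maximum lateness. p, r, d are processing times, release times and due
    dates; periods are 1-indexed as in the experiment."""
    n, rem = len(p), list(p)
    finish = [0] * n
    t = min(r)
    while any(rem):
        avail = [k for k in range(n) if r[k] <= t and rem[k] > 0]
        # priority: smallest slack d_k - remaining p_k, then largest remaining p_k
        avail.sort(key=lambda k: (d[k] - rem[k], -rem[k]))
        for k in avail[:m]:          # process the m highest-priority jobs
            rem[k] -= 1
            if rem[k] == 0:
                finish[k] = t
        t += 1
    return max(finish[k] - d[k] for k in range(n))
```

Since every job's due date shifts by the same amount from one period to the next, sorting on the static d_k minus the remaining processing time yields the same priorities as the rule stated in terms of d_q(t') − p_q(t').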

The algorithm was coded in APL/360 [32] and consists of three functions:

1. Function "DGEN"

The function "DGEN" makes use of the built-in random number generator of APL to generate

(i) Random processing times, P, between 1 and 20. (ii) Random release times, R, between 1 and 15, and (iii) Random due dates between P + R and P + R + TABLE. The smallest due date for any job is at least as large as its processing time plus release time (i.e., no job initially starts off late). The TABLE entry is a random number whose upper limit is fixed before each set of runs, allowing one to exercise control over whether job due dates are "tight" or "not so tight." For example, TABLE = 15 causes due dates to be randomly generated between P + R and P + R + 15. The minimum value of R, plus 1, is subtracted from all values generated in (ii) before the random due dates are generated. The function is activated by specifying the values m, the number of machines, and n, the number of jobs, as well as the value of TABLE for a particular set of runs. Initialization of COUNTER allowed the specified number of n job, m machine data sets to be generated.

145

VDGEN[U]

'.' ,1 DGEN N

[1]

DI. :P-- ?N, 20

[2]

PTEMP-- P

[3]

R--?N, 15

[4]

R-R-(i /R)

[5]

R-R+l

[6]

D-R+P+?N. TABLE [N-l

[7]

PP

[8]

COUNTER-COUNTER-1

[9]

--DL- i 0-COUNTER

As each problem is generated it is fed into: 2. Function "pp"

Function "PP" executes the scheduling algorithm by initially comparing all values in the vector P−D and selecting the m jobs which have priority. Then all processing times are updated and the procedure is repeated until |C(X(t'))| = 0.

vpp[D] v PP 11] LB<-r/(l /((R-1)+P-D),r/(| ((+/P)tM),(I7(R-1)+P))-I7D) [2] MAS-(2,M)pO

[3] T<!/R

[4] LOOP: FLAG--- (0-T) a(T>R)

[5] s-(p--n)

[6] ST-FLAG/S

[7] PT-FLAG/P

1 8 ] JOB-FLAG/ ( ( i p FLAG) xFLAG)

I 9] ->ZER0- l0 = RO+pST

[10] ->((R0=M) , (R0>M) ,R0<M)/EQL,C0NT,LSS

[11] LSS : TEMP-JOB, ( M-pST)pO

[12] ->PR

[13] EQL: TEMP -JOB

[14] ->PR

[ 15 ] CONT: ST--ST [TT-7ST]

[16] PT-PT[TT]

[17] JOB-JOB [TT]

[18] TEMP^-CNT^TIE 1-1

[19 ] SET : >ALG0>' i ST [ I J =ST [ 1+1 ]

[20] ->NEXTy iTIE=0

[21] TEMP-TEMP,JOB[(I-TIE) TO I]

[22] -*BY

[23] NEXT : TEMP^TEMP , J OB [ I ]

[24] BY : CNT-f-CNT+TIE+1

146

25 j ->( (CNT -M) , (CNT<M) , CNT=M) /GREAT , LESS , EQUAL

26] ALf;0:TlE-. TT.E+'I.

27] -THRU

23] LESS:TrE-0

29] THRU: I '-"1+1

30] '-TAKE-'i L=pST

31 J -SET

3 2 ] TAKE : TI - -$ (TI E+l ) : <j> JOB

331 CNT^-PST

34] -FIN

35 ] GREAT : TI^6 (TIE+1 ) ' tbTEMP

36 ] TEMP-s-ij) (TIE+1) !• cjiTEMP

37] FIN:EPT-P[TI]

38 1 OEPT< EPT [ XX-;' EPT ]

39 J TEMP TEMP, (M- (CNT- (TIE+1) )) tOTI^TI [XX]

40 ] EQUAL : TEMP-L ! TEMP

41] PR: MAS ROWCAT TEMP

42] P-«-(P-((idP)cTEMP))

431 D< (D-l)

44] T-T+1

45] >LOOP

46] ZERO:>PPSTOP:- i 1=a/0=P

47] TEMP+MpO

48] >PR

49] PPSTOP:"MAXL MAS

Once all jobs have been scheduled the program passes on to: 3. Function "MAXL"

Function "MAXL" computes the lateness of all jobs in the schedule by comparing the job completion times with their due dates. Once the lateness vector L is generated, it is scanned to identify the value of the maximum lateness.

YMAXL[Li]7 V MAXL MAS [1J DIM<-pMAS [2] TTER-:DIM[l]-2 [3] MAS- (0 0 ,ITERpl)/[l] MAS [4] CPT«-(pF)pO [5] CTR+-ITER

[6] LIST- (~(DIM[2]pO=MAS[CTR;]))/MAS[CTR;] [7] ->MLZxil=pLIST [8] LIST- LIST t^LIST] [9] MLZ: OPT [LIST ]-CTR

147

[10] ML:CTR-CTR-1

[11 J ->MLY*iO=CTR

[12] ^MLx;l= (a/MAS[CTR; ]=0)

[13] MASTEMP-(~(DIM[2]p0-MAS[CTR;])) /MAS[CTR;]

[ 1 4 ] ABSENT-: ~MASTEMPe LIST

[15] -ML*~l*v /ABSENT

[16] LI ST- L 1ST, IP- ABS ENT/MAST EMP

[17] LIST-LIST[ALIST]

[18] TP-TP[|TP]

[19] CFT[TP]-CTR

r')."v1 ourr

[21] MLY:D-D+ITER

[22] L--CPT-D

[23 J LMAX-r/L

[24] JMAX-L i LMAX

[25] >0-:iLB-LMAX

[26] TMAS^MAS

[27] 'PROCESSING TIMES'

[28] PTEMP

[29] PSLTM-H-/PTEMP

[30] 'SUM OF PROCESSING TIMES'

[31] PSUM

[32] 'DUE DATES'

[33] D

[34] 'RELEASE TIMES'

[35] R

[36] 'SCHEDULE'

[37] TMAS

[38] 'LOWER BOUND'

[39] LB

[40] 'LATENESS VECTOR'

[41] L

[42] 'MAXIMUM LATENESS'

[43] LMAX

[44] 'JOB WITH MAX LATENESS'

[45] JMAX

Once the value of L_max is determined, it is compared with LB, which is a lower bound on the value of L_max. If LB = L_max, then no results are printed out, since the "rule" behaved optimally. If LB < L_max, then the schedule and other relevant information are printed out. The printed information is analyzed to determine if the rule is optimal or not.

An example of a run of 5, 20 job, 3 machine problems with loose due dates (TABLE = 135) now follows:

TABLE ← 135
COUNTER ← 5
3 DGEN 20

Loose Due Dates: 4 of 5 hit LB

Processing Times
9 10 4 18 14 5 19 8 9 4 6 5 18 14 17 16 1 11 8 6

Sum of Processing Times
202

Due Dates
71 90 71 59 41 22 43 33 40 83 18 81 81 74 22 27 50 25 71 39

Release Times
14 15 5 3 8 11 10 10 2 13 9 12 1 3 3 4 12 8 9 9

Schedule
[Machine-by-period listing of the jobs processed in each period; the column layout of the original printout could not be reproduced.]

Lower Bound
-3

Lateness Vector
[Lateness values for the 20 jobs; layout lost in reproduction.]

Maximum Lateness
-2

Job with Max Lateness
[illegible in reproduction]

In this instance only one problem is printed out, since 4 of the 5 problems achieved their lower bound.

A lower bound LB on the value of L_max is determined by computing:

LB_1 = max_k [ (r_k − 1 + p_k) − d_k ]

and

LB_2 = max { ⌈(Σ_k p_k)/m⌉ ; max_k (r_k − 1 + p_k) } − max_k (d_k)

and letting LB = max(LB_1, LB_2).

LB_1 is a trivial lower bound which holds if all jobs are processed continuously from the first moment they become available. LB_2 is obtained by computing the minimum length of the schedule and subtracting from this value the maximum due date. Since at least one job is still being processed at time max { ⌈(Σ_k p_k)/m⌉ ; max_k (r_k − 1 + p_k) }, the best that could happen is that this job is the one with maximum due date.
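Under these conventions (a job released in period r_k can finish no earlier than period r_k − 1 + p_k), the bound is computed directly; the data used in the check below are our transcription of the sample problem printed above, for which the program reported a lower bound of −3.

```python
from math import ceil

def lower_bound(p, r, d, m):
    """LB = max(LB1, LB2) on the maximum lateness L_max.
    LB1 assumes each job runs continuously from its release; LB2 subtracts
    the largest due date from the minimum possible schedule length."""
    lb1 = max(rk - 1 + pk - dk for pk, rk, dk in zip(p, r, d))
    min_length = max(ceil(sum(p) / m),
                     max(rk - 1 + pk for pk, rk in zip(p, r)))
    lb2 = min_length - max(d)
    return max(lb1, lb2)
```

On the sample data LB_1 = −3 (attained by the job with p = 17, r = 3, d = 22) dominates LB_2 = 68 − 90 = −22, reproducing the printed lower bound of −3.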

Those schedules that do not achieve their lower bound are analyzed for optimality by not allowing any job to be processed beyond

the furthest period such that L. < L generated by the rule, for all j

j max -

If no feasible schedule exists under these conditions, then the sched- ule generated by the heuristic Ls optimal.
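The two bounds are easy to state in code. The sketch below assumes discrete periods numbered from 1 (so a job released at r_k finishes no earlier than r_k - 1 + p_k); it is a reconstruction of the formulas above, not the dissertation's APL implementation, and the function name is the author's own:

```python
import math

# Sketch of LB1 and LB2 above, assuming 1-indexed discrete periods.
def lower_bound(r, p, d, m):
    # LB1: each job runs without interruption from its release period.
    lb1 = max(rk - 1 + pk - dk for rk, pk, dk in zip(r, p, d))
    # LB2: minimum possible schedule length minus the largest due date.
    length = max(math.ceil(sum(p) / m),
                 max(rk - 1 + pk for rk, pk in zip(r, p)))
    lb2 = length - max(d)
    return max(lb1, lb2)

# Two jobs on one machine: the last job finishes at period 6, and the
# best case is that it is the job with due date 4, so LB = 2.
print(lower_bound([1, 1], [3, 3], [2, 4], 1))   # prints 2
```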


A.1.3 Results of the Experiment

A total of 279 problems were run, with n varying from 5 to 50 jobs and m from 2 to 5 machines. The number of problems that achieved their lower bound is also noted.

"Tight" Due Dates

          m = 2            m = 3            m = 4            m = 5
  N    Runs Opt  LB     Runs Opt  LB     Runs Opt  LB     Runs Opt  LB
  5     21   20   ?       -    -   -       -    -   -       -    -   -
 10      5    4   1       5    5   3       5    5   4       5    5   4
 20      5    5   1       5    5   0       -    -   -       5    5   0
 30      2    2   1       2    2   0       2    2   0       2    2   1
 50      1    1   0       1    1   0       1    1   0       1    1   0

"Not So Tight" Due Dates

          m = 2            m = 3            m = 4            m = 5
  N    Runs Opt  LB     Runs Opt  LB     Runs Opt  LB     Runs Opt  LB
  5     25   25  19      25   25  24      25   25  24       -    -   -
 10     15   15  12      15   15  13      15   15  13      15   15  15
 20     10   10   9      10   10   9      10   10   9      10   10   9
 30      5    5   5       5    5   4       5    5   ?       5    5   5
 50      4    4   ?       4    4   4       4    4   4       4    4   3


A.1.4 Computational Efficiency of the Algorithm

It was initially suspected that the number of computations required by the algorithm would be of O(n^2). This was arrived at by noting that at most n comparisons are required in each of the periods t = 1, ..., T. Since T >= ceil( (SUM_k p_k) / m ), p_k is generated randomly between 1 and 20, and 2 <= m <= 5, the number of computations, COMP, is approximately

    COMP ~ n T ~ n * ceil( (SUM_k p_k) / m ) ~ n * (10n / m) ~ O(n^2).

However, run times for m = 2 yielded the following data:

    n                5     10     20     25     30     35     40     55     60     80
    CPU time (sec)   0.40  1.06   2.80   3.92   4.33   5.02   6.73   9.42  11.06  15.92

which when plotted (Figure 7) revealed an almost linear relationship between CPU time and n, the number of jobs.

Figure 7: Plot of CPU Time Versus Problem Size
[Plot not reproduced; CPU time (sec) on the vertical axis, n on the horizontal.]

A least squares fit for the data yielded the following functional relationship:

    CPU (sec) = -1.50 + 0.21 n,   with r = .994 (correlation coefficient)
    and s = 1.605 (estimate of the standard deviation).

And so, for all practical purposes, the algorithm can be considered to be of O(n).
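As a check on the reported fit, the tabulated (n, CPU) pairs can be re-fit with an ordinary least-squares line. In this reconstruction (not the original computation) the correlation coefficient comes out at the reported r = .994, with slope and intercept landing near the quoted 0.21 and -1.50, differing only in rounding:

```python
# Ordinary least-squares re-fit of the timing data tabulated above.
n = [5, 10, 20, 25, 30, 35, 40, 55, 60, 80]
cpu = [0.40, 1.06, 2.80, 3.92, 4.33, 5.02, 6.73, 9.42, 11.06, 15.92]

N = len(n)
mx, my = sum(n) / N, sum(cpu) / N
sxx = sum((x - mx) ** 2 for x in n)
syy = sum((y - my) ** 2 for y in cpu)
sxy = sum((x - mx) * (y - my) for x, y in zip(n, cpu))

slope = sxy / sxx                      # close to the reported 0.21
intercept = my - slope * mx            # close to the reported -1.50
r = sxy / (sxx * syy) ** 0.5           # reproduces the reported .994
print(round(slope, 3), round(intercept, 2), round(r, 3))
```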

A.1.5 Summary

Based on the results of this experiment it may be concluded that the original conjecture still stands as valid. The algorithm was non-optimal in but 2 of the 279 problems and never failed to arrive at an optimal solution for "not so tight" due dates. Computation times are extremely efficient for m = 2 and should decrease for m > 2, since T decreases with the number of machines for a given value of n.

Finally, the author wishes to express his gratitude to Mr. Jesus Saenz for his generous assistance in the coding of the algorithm as well as his role in the author's conversion to an "APL" believer.

APPENDIX 2 REDISCOVERING THE SQUARE WHEEL

A.2.1 Introduction

The multiplicity of technical journals available to practitioners and researchers in the management sciences and related areas serves many purposes. One of the most important concerns the dissemination of solution procedures that are developed for problems of interest. Indeed, the first step in any investigative endeavor centers around a compilation of known results in the area to be considered. A comprehensive survey serves the objective of providing the researcher an orientation to the area in question, which in turn allows him to appraise his conjectures or hypotheses in light of the current "state of the art." The more complete the survey, the less risk there is that the researcher's efforts will conclude in a "rediscovery of the wheel," that is, a duplication of previous successful efforts.

While a great deal of incentive exists for publishing successful research, very little encouragement is provided for publishing information based on conjectures that proved incorrect under analysis. This is unfortunate because such negative results are also a part of the "state of the art," in that they too provide insight into the mechanism behind research problems. Since these results are generally known only to those who have personally encountered them, it is the author's


belief that a great deal of time (research $, etc.) is expended in literally "rediscovering the square wheel," that is, duplicating previously known but undisseminated negative results.

This appendix is directed towards a partial rectification of this situation. In this current research effort, large amounts of time have been dedicated to eventually disproving a series of "heuristics" related to parallel processor scheduling problems. It has been the case, at times, that the development of these counterexamples has required as much effort as the development of proofs for those rules that have been found to be optimal. Moreover, it is debatable which of the two occurrences has provided more insight into the structure of the problems at hand. With this in mind, the following "rediscoveries of square wheels" are now presented.

A.2.2 Minimization of L_max with Non-Simultaneous Job Arrivals

The efficiency of the "heuristic" of Section 3.32 for the above problem may raise speculation concerning the possible optimality of the rule for all but certain pathological cases. The following example provides one of the two instances where the rule was not optimal.

Job (j)   r_j   p_j(r_j)   d_j
   1       1        1        1
   2       1        2        4
   3       1        1        2
   4       2        1       10
   5       3        1        4
   6       3        1        4
   7       4        1        4
   8       4        1        4


At t = 1, jobs 1, 2, 3 are considered, where

    p_1(1) - d_1(1) = 1 - 1 =  0
    p_2(1) - d_2(1) = 2 - 4 = -2
    p_3(1) - d_3(1) = 1 - 2 = -1

and jobs 1 and 3 are scheduled.

At t = 2, only jobs 2 and 4 are available for scheduling.

At t = 3, jobs 2, 5, 6 are considered, where

    p_2(3) - d_2(3) = 1 - 2 = -1
    p_5(3) - d_5(3) = 1 - 2 = -1
    p_6(3) - d_6(3) = 1 - 2 = -1

Since p_2(3) = p_5(3) = p_6(3) = 1, the choice is arbitrary; say, 2 and 5.

At t = 4, jobs 6, 7, 8 are considered, where

    p_6(4) - d_6(4) = 1 - 1 = 0
    p_7(4) - d_7(4) = 1 - 1 = 0
    p_8(4) - d_8(4) = 1 - 1 = 0

and jobs 6 and 7 are scheduled.

Finally, at t = 5, only job 8 remains, with

    p_8(5) - d_8(5) = 1 - 0 = 1.

The schedule generated by the rule is:

    t =            1   2   3   4   5
    Machine One    1   2   2   6   8
    Machine Two    3   4   5   7   0

where L_8 = 1 = L_max. However, the following schedule is optimal:

    t =            1   2   3   4   5
    Machine One    1   3   5   7   4
    Machine Two    2   2   6   8   0

where L_max = L_7 = L_8 = 0.

The failure of the scheduling rule centers on the trade-off between potential lateness (p_j(t) - d_j(t)) and remaining job processing time p_j(t). In this example it is better to schedule job 2 at t = 1, even though p_3(1) - d_3(1) > p_2(1) - d_2(1). It appears that p_2(1) = 2 > 1 = p_3(1) dominates at t = 1, since when 3 is scheduled at t = 1, job 4 is forced into the schedule at t = 2 prematurely. If it were the case, for all t and for any jobs s and q, that whenever p_s(t) - d_s(t) > p_q(t) - d_q(t) we also had p_s(t) > p_q(t), then the rule would be optimal. Clearly no such trade-off exists in the F_max problem, since d_k(t) = 0 for all k, and it was shown in Section 3.31 that the rule is optimal in that instance. Attempts at using a combination of the two priorities have not yielded any positive results.

A.2.3 Precedence Constraints

The lack of results in this proposal for parallel processor problems with precedence constraints is not by design but by default. The inclusion of precedence constraints for identical processors converts the problem into the "m identical resource constrained CPM" problem. A number of promising rules were applied to the F_max and L_max problems for general precedence networks. None, however, returned optimal results.

The motivation behind looking at this problem is based on the following property:

Property A.1: Any n job, m machine problem with job splitting can be converted into an equivalent N job problem where all jobs are identical, as follows: split job j (for all j = 1, ..., n such that p_j >= 2) into p_j identical jobs in series, each with processing time 1. It then follows that any n job problem without precedence constraints and with non-identical jobs can be converted into a problem of scheduling N = SUM_{j=1}^{n} p_j identical jobs on m processors, with precedence constraints consisting of n parallel strings. Any n job problem with precedence constraints can be converted into an N job problem with identical jobs on an enlarged but equivalent precedence network.

It then follows that T. C. Hu's [41] algorithm for the F_max problem with identical jobs will solve any F_max problem with the above type of job splitting and precedence constraints in the form of a tree. However, the only algorithm for the F_max problem and general precedence constraints is given by Coffman and Muntz [15] for m = 2. The L_max problem is only solved for m = 1 by Lawler [46]. Therefore, in trying to arrive at rules for more general cases, only problems with identical jobs were looked at (i.e., p_j = 1 for all j).

A.2.3.1 F_max

Rule A-1: Let Q(t) = the set of all unprocessed jobs at t without predecessors. For any t, schedule jobs j in Q(t) so that:

(i) If |Q(t)| < m, then process all jobs in Q(t).

(ii) If |Q(t)| >= m, then process any m jobs in Q(t) such that if job q is processed and job s is not, then l_q >= l_s, where l_j denotes the length of the longest path from j to the end of the precedence graph.

That the rule will not work for m >= 3 is shown by the following example, where m = 3.

[Precedence graph with nodes A through P, from Start to Finish; not reproduced.]

At t = 1, A, B, C, D are in Q(t), where l_B = l_C = l_D = 4, so process B, C, D.
At t = 2, A, E are in Q(t).
At t = 3, F, G are in Q(t).
At t = 4, H, I, J, K, L, M, N are in Q(t), all with length 1, so we arbitrarily pick H, I, J.
At t = 5, K, L, M, N are in Q(t); pick K, L, M.


At t = 6, N, O are in Q(t); at t = 7, P is in Q(t), and the schedule results in:

    t =              1   2   3   4   5   6   7
    Machine One      B   A   F   H   K   N   P
    Machine Two      C   E   G   I   L   0   0
    Machine Three    D   0   0   J   M   0   0

with F_max = 7. However, an optimal schedule is given by:

    t =              1   2   3   4   5   6
    Machine One      A   D   E   J   L   0
    Machine Two      B   F   H   K   M   P
    Machine Three    C   O   I   G   N   0

with F_max = 6.

Rule A-2: For any t, schedule jobs j in Q(t) so that:

(i) If |Q(t)| < m, then process all jobs in Q(t).

(ii) If |Q(t)| >= m, and q is processed while s is not, then |S_q| >= |S_s|, where |S_j| = the number of successors of j (equivalent to the sum of remaining processing times for all jobs dependent on j's completion, since p_j = 1 for all j).

Again, this rule will not work for m >= 3.


[Precedence graph with nodes A through Q, from Start to Finish; not reproduced.]

At t = 1, A, B, C, D are in Q(t), where |S_A| = 7 > |S_B| = |S_C| = |S_D| = 6; therefore process A, B, C.
At t = 2, D, E, F, G, H, I, J are in Q(t), where |S_D| = 6, |S_E| = |S_F|, and |S_G| = |S_H| = |S_I| = |S_J| = 1; therefore process D, E, F.
At t = 3, G, H, I, J, K are in Q(t), where |S_K| = 5 > |S_G| = |S_H| = |S_I| = |S_J| = 1; therefore process K, G, H.
At t = 4, I, J, L are in Q(t).
At t = 5, M, Q are in Q(t).
At t = 6, N is in Q(t).
At t = 7, O is in Q(t).
At t = 8, P is in Q(t), and the schedule is:

    t =              1   2   3   4   5   6   7   8
    Machine One      A   D   K   I   M   N   O   P
    Machine Two      B   E   G   J   Q   0   0   0
    Machine Three    C   F   H   L   0   0   0   0

where F_max = 8. However, an optimal schedule is given by:

    t =              1   2   3   4   5   6   7
    Machine One      B   A   E   G   I   Q   P
    Machine Two      C   K   F   H   J   0   0
    Machine Three    D   O   L   M   N   0   0

with F_max = 7.

Lawler [46] was able to make use of the notion of scheduling jobs from last to first in arriving at his algorithm for the one machine case. However, both Rule A-1 and Rule A-2 fail if they are applied from last to first. The preceding examples can be used to show this by simply switching the start and finish nodes on both graphs.

A.2.3.2 L_max

Rule A-3: Sequence the jobs from last to first, always choosing next from among the jobs currently available (i.e., those without unscheduled successors) the m jobs with the latest possible due dates. (This rule is a direct extension of Lawler's optimal procedure for the n job, one machine problem [46].)

The rule will not work since, if d_j = 0 for all j, then the L_max problem is equivalent to minimizing F_max. The rule would translate to scheduling from last to first any m jobs currently without successors. However, T. C. Hu [41] has shown that even for a tree precedence structure the choice is not arbitrary.

Rule A-4: Sequence the jobs from last to first, always choosing from the jobs currently available, job q over job s if l'_q >= l'_s, where l'_j denotes the length of the longest path from j to the beginning of the graph.

Consider the following example, where m = 2 (precedence graph not reproduced):

At T, jobs E, F, B, C are available, where l'_E = 2 = l'_F and l'_B = 1 = l'_C, and so E and F are chosen.
At T-1, jobs B, C, D are available, where l'_B = l'_C = l'_D = 1, and so B and C are chosen.
At T-2, job D is available.
At T-3, job A is available. The resulting schedule is:


    t =            1   2   3   4 = T
    Machine One    A   D   B   E
    Machine Two    0   0   C   F

with L_max = L_E = L_F = 1. However, an optimal schedule is given by:

    t =            1   2   3   4 = T
    Machine One    A   D   E   B
    Machine Two    0   0   F   C

with L_max = L_E = L_F = 0.

Rule A-5: Sequence the jobs from last to first, always choosing from the jobs currently available, job q over job s if:

    l'_q - d_q >= l'_s - d_s.

Consider the following example, where m = 1 (precedence graph not reproduced). At T, the rule chooses A over B, since

    l'_A - d_A = 5 - 4 = 1 > -1 = 4 - 5 = l'_B - d_B.

However, this contradicts Lawler's optimal rule for m = 1, which picks B over A.
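For contrast, the one-machine last-to-first procedure of Lawler [46] that Rule A-5 tries to generalize can be sketched as follows, specialized to the L_max objective (function and variable names are my own):

```python
# Lawler's last-to-first rule for one machine, L_max objective: repeatedly
# assign to the last open position, among jobs whose successors are all
# already placed, the job that would be least late there.
def lawler_one_machine(p, d, succ):
    """p, d: dicts of processing times and due dates; succ: successor map."""
    unscheduled = set(p)
    order = []
    t = sum(p.values())              # completion time of the last open slot
    while unscheduled:
        ready = [j for j in sorted(unscheduled)
                 if not (set(succ.get(j, ())) & unscheduled)]
        j = min(ready, key=lambda j: t - d[j])   # least lateness if placed last
        order.append(j)
        unscheduled.remove(j)
        t -= p[j]
    return order[::-1]

# The two-job instance of Rule A-5's example, without precedence arcs.
print(lawler_one_machine({"A": 1, "B": 1}, {"A": 4, "B": 5}, {}))
```

On this instance the procedure places B in the final slot, i.e., "picks B over A" for the last position, which is exactly where Rule A-5 disagrees with it.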


While certainly the previous examples are not general proofs, they lend credence to the author's conjecture that no one-pass rule exists for either the F_max or L_max problem with precedence constraints. The rules presented represent both the logical extension of known optimal results for more special cases as well as the extension of the results obtained in this dissertation without precedence constraints.

APPENDIX 3 THE OPERATION OF THE SEABOARD COAST LINE RAILROAD (SCL)

A.3.1 Introduction

Although passenger services are the most widely publicized examples of inefficient railway operation, freight services have a far greater impact on the nation's economy. Typical problems of rail freight transport include low equipment utilization, long and variable origin/destination transit times, congested and costly terminals, expensive and loosely controlled local movements, and ever-present high labor costs.

In an effort to develop insight into the particular problems encountered by the railway industry, a visit was paid to the main headquarters of the Seaboard Coast Line Railroad in Jacksonville, Florida. The following description of operations is based on interviews with Mr. J. P. Roberts, Manager of Sales and Service, and Mr. T. J. Waddington, Director of Freight Scheduling, both of whose willing cooperation is hereby duly acknowledged.

A.3.2 The Freight Transportation Network

The freight services of the SCL encompass an area bordered on the northeast by Washington, D.C. (Potomac Yard), on the southeast by Miami, Florida, extending northwest to Chicago, Illinois, west to St. Louis, Missouri and Memphis, Tennessee, and southwest to New Orleans, Louisiana. A map of the freight transportation network is found in Figure 8 (Figure 8: Freight Railway Network; map not reproduced). The thick arcs denote the "parent" line and the narrow ones the annexed lines acquired through merger with the Louisville and Nashville railway. The whole network constitutes what is known as the set of "family" lines. The lines or routes are divided into three categories:

a. Run Lines: These are lines that pass through major switchyard centers where trains are broken up or service is added, depending on the circumstances (e.g., Washington, D.C. to Jacksonville, Florida). Trains along these routes usually travel directly between the major switchyards along the route, with few, if any, stops at smaller stations along the way.

b. Run-Through Lines: These are lines that run partially through the SCL network and continue through the service of another railway. Resources along this line are pooled with other participating railways (e.g., the Jacksonville to New Orleans segment). These lines also pass through major switchyards.

c. Local Lines: These are lines that serve small facilities. Trains running these lines stop to either pick up or drop off cars at local sidings. These lines usually originate and terminate at major switchyards. The formulation of Chapter 4 provides a description of railway operations along a local line.


A.3.3 The Capacity of a Train

A prime concern of much of the development of this study has been the determination of the minimum train capacity required to deliver all freight to its destination. The capacity of a train is mainly restricted by one of the following factors:

a. The engine size: In order for a certain engine to pull "x" boxcars on a given route at an acceptable speed, it is necessary that "x" not exceed a certain pre-determined value.

b. The "capacity" of a passing track: SCL was originally a "two-track" railway; that is, it was generally the case that most routes were connected by two tracks, each track strictly uni-directional in use. At present SCL is a "one-track" railway: routes consist of one track where trains run in opposite directions. At different locations along a route there are sidings or passing tracks (originally part of the second track), where a train may wait while another passes in the opposite direction. While the train(s) in one direction may have unlimited capacity, the train(s) in the opposite direction cannot be longer than the smallest passing track along the route.

c. An additional restriction imposed externally is that some trains may not leave their origins until they are at least "y" cars long.

A.3.4 The Development of Freight Schedules

Schedules are based primarily on the needs for transporting freight. The idea is to provide enough capacity throughout the system to move everything, every day. Arrival and departure times are determined so as to grab as much freight as possible, and then the trains are run as fast as possible from station to station. Since schedules are set to meet peak demands, the situation of having to choose which of two cars to add to a train does not occur often. However, a critical problem, from management's viewpoint, is the existence of a shortage of boxcars. They point out that the demand for boxcars exceeds the available supply; that if they had more boxcars, they could move more freight. However, it is not clear that efficient use is made of the boxcars on hand, since an average per-day, per-boxcar utilization rate of 0.7 reveals that on average 30% of all boxcars are traveling empty at any given time.

Management acknowledges that the delivery of some commodities is of greater urgency than others, and hence the concept of a "due date" does exist. However, current operations do not take this into account. The goal is to get everything to its destination as soon as possible, whether it has to be there or not. The possible use of any slack in the system is completely negated by this approach.

The complexity of moving over 50 different commodities is simplified by giving equal priority to all boxcars, thereby paying little real attention to the particular commodity being moved.

A.3.5 Summary

In summary, a scheduling problem concerning "what" to put on or drop off does not often arise in practice, because capacities are geared to meet peak demands. The scheduler views all the freight as a single commodity to be moved as quickly as possible. The scheduling function is, at present, essentially one of creating new routes or eliminating stops as demand fluctuates. Timetables are set in accordance with demand.

It is clear that the problems considered in this report are a simplification of current operations. However, inefficiencies in the current scheduling operations of SCL bring to light the need for perhaps considering such factors as "due dates" in order to more effectively utilize current equipment. Certainly scheduling problems do exist, and the ramifications of more efficient scheduling methods would most likely be seen in more efficient equipment utilization.

BIBLIOGRAPHY

[1] Applications Digest, Association of American Railroads, Washington, D.C.

[2] Arthanari, T.S. and Ramamurthy, K. G. , "A Branch and Bound Algo- rithm for Sequencing n Jobs on m Parallel Processors," Opsearch, 7, 1970, 3, September, pp. 147-156.

[3] Ashour, S., Sequencing Theory, Springer Verlag (1972).

[4] Baker, K. R., "Procedures for Sequencing Tasks with One Resource Type," Int. Journal of Prod. Research, 11 (1973), 125-138.

[5] Baker, Kenneth R. and Martin, James B., "An Experimental Comparison of Solution Algorithms for the Single-Machine Tardiness Problem," Technical Report No. 72-76, Department of Industrial and Operations Engineering, University of Michigan (presented at the Symposium on Scheduling, Raleigh, N.C., May, 1972).

[6] Baker, K. R. and Merten, A. G., "Scheduling with Parallel Processors and Linear Delay Costs," Nav. Res. Log. Quart., Vol. 20, 4, 1973.

[7] Baker, K. R. and Su, Z., "Sequencing with Due-Dates and Early Start Times to Minimize Maximum Tardiness," manuscript, June, 1972.

[8] Bellman, R. , "Some Mathematical Aspects of Scheduling Theory," J. Soc. Ind. and Appl. Math., 4, No. 3, September, 1956.

[9] Bellmore, M. and Nemhauser, G. L., "The Traveling Salesman Prob- lem, A Survey," Operations Research, 16, July-August, 1968.

[10] Bennington, G. E. and McGinnis, L. G., "A Critique of Project Planning Under Constrained Resources," Symposium on the Theory of Scheduling and Its Applications, Springer Verlag (1973).

[11] Billheimer, J. W., "Network Design with Fixed and Variable Cost Elements," Transportation Science (7) (1973), 47-74.

[12] Bratley, P., Florian, M. and Robillard, P., "Scheduling with

Earliest Start and Due Date Constraints," Nav. Res. Log. Quart., Vol. 18, No. 4, December, 1971.

[13] Bratley, P., Florian, M. and Robillard, P., "Scheduling with Earliest Start and Due Date Constraints on Multiple Machines," Technical Report #111, Universite de Montreal, Departement d'Informatique, January, 1973.

[14] Bratley, P., Florian, M. and Robillard, P., "On Sequencing with Earliest Starts and Due Dates with Application to Computing Bounds for the (n/m/G/F_max) Problem," Nav. Res. Log. Quart., 20 (1973), 57-67.

[15] Coffman, E. and Muntz, R., "Optimal Pre-emptive Scheduling on Two-Processor Systems," IEEE Transactions Comput., 18, 1014.

[16] Conway, R. W., Maxwell, W. L. and Miller, L. W., Theory of Scheduling, Addison-Wesley, August, 1967.

[17] Dantzig, G. B., and Fulkerson, D. R. , "Minimizing the Number of Tankers to Meet a Fixed Schedule," Nav. Res. Log. Quart., 1, 217-222 (1954).

[18] Davis, Edward W., "Resource Allocation in Project Network Models - A Survey," AIIE Transactions, Vol. 5, No. 4, December, 1973.

[19] Dessouky, M. E. and Margenthaler, C. P., "The One Machine Sequencing Problem with Early Starts and Due Dates," AIIE Transactions, 4, 3, September, 1972.

[20] Dorsey, R. C., Hodgson, T. J. and Ratliff, H. D., "A Network Approach to a Multi-Facility, Multi-Product Production Scheduling Problem without Backordering," Research Report No. 73-5, Department of Industrial and Systems Engineering, University of Florida, Gainesville, Florida, January, 1973.

[21] ______, "A Network Approach to a Multi-Facility, Multi-Product Production Scheduling Problem with Backordering," Research Report No. 73-6, Department of Industrial and Systems Engineering, University of Florida, Gainesville, Florida, January, 1973.

[22] ______, "A Multiple Facility, Multiple Product Production Scheduling Problem with Overtime," Research Report No. 73-7, Department of Industrial and Systems Engineering, University of Florida, Gainesville, Florida, January, 1973.

[23] Eastman, W. L., Even, S. and Isaacs, I. M. , "Bounds for Optimal Scheduling of Jobs," Management Science, Vol. 11, 2, December (1964).

[24] Elmaghraby, S. E. (ed.), Symposium on the Theory of Scheduling and Its Applications, Springer Verlag (1973).

[25] ______, "The One-Machine Sequencing Problem with Delay Costs," Journal of Industrial Engineering, Vol. 19, No. 2, February, 1968.

[26] ______, "The Machine Sequencing Problem: Review and Extensions," Nav. Res. Log. Quart., Vol. 15, June, 1968.

[27] Elmaghraby, S. E. and Park, S. H., "On the Scheduling of Jobs on a Number of Identical Machines," AIIE Transactions, Vol. 6, No. 1, March, 1974.

[28] Emmons, H. , "One-Machine Sequencing to Minimize Certain Func- tions of Job Tardiness," Operations Research, Vol. 17, No. 4, July, 1969.

[29] Fisher, M., "Optimal Solution of Scheduling Problems Using Lagrange Multipliers," Report 7210, Center for Math. Studies, University of Chicago, March (1972).

[30] Ford, L. R. , Jr. and Fulkerson, D. R. , Flows in Networks , Princeton University Press, Princeton, N.J., 1962.

[31] Gelders, L. and Kleindorfer, P. R., "Coordinating Aggregate and Detailed Scheduling Decisions in the One Machine Job Shop: Part I. Theory," Operations Research, Vol. 22, No. 1, January-February, 1974.

[32] Gillman, L. and Rose, A., APL/360: An Interactive Approach, John Wiley & Sons, 1970.

[33] Glassey, C. R. , "Minimum Changeover Scheduling of Several Pro- ducts on One Machine," Operations Research, Vol. 16, No. 2, March-April, 1968.

[34] Gotterer, M. H., "Scheduling with Deadlines, Priorities and Non-Linear Loss Functions," Research Report, Dept. of Industrial Engineering, Georgia Institute of Technology, Atlanta, Georgia.

[35] Gupta, J. and Maykut, A., "Scheduling Jobs on Parallel Processors With Dynamic Programming," Decision Sciences, November, 1973.

[36] Gupta, J. and Walvekar, A., "Sequencing n Jobs on m Parallel Processors," Opsearch, 6, 1969, 4, December, pp. 295-298.

[37] Held, M. and Karp, R. M. , "A Dynamic Programming Approach to Sequencing Problems," J. SIAM, Vol. 16, No. 2, March (1962).

[38] Herroelen, Willy S., "Resource-Constrained Project Scheduling: The State of the Art," Operational Research Quarterly, Vol. 23.

[39] Horn, W. A., "Single-Machine Job Sequencing with Treelike Precedence Ordering and Linear Delay Penalties," J. SIAM, 23 (1972), 189-202.

[40] Horn, W. A., "Minimizing Average Flow Time with Parallel Machines," Technical Note, Operations Research, Vol. 21, No. 3, May-June (1973).

[41] Hu, T. C, "Parallel Sequencing and Assembly Line Problems," Operations Research, 9, No. 6, November, 1961.

[42] Jackson, J. R., "Scheduling a Production Line to Minimize Maximum Tardiness," Research Report 43, Management Sciences Research Project, UCLA, January, 1955.

[43] Jacobsen, S., "On Marginal Allocation in Single Constraint Min-Max Problems," Management Science, July, 1971.

[44] Karp, R. M., "Reducibility Among Combinatorial Problems," in Complexity of Computer Computations, R. E. Miller, J. W. Thatcher and J. D. Bohlinger (eds.), Plenum Press, New York, 1972, pp. 85-103.

[45] Lawler, E. L., "On Scheduling Problems with Deferral Costs," Management Science, 11, No. 2, November, 1964.

[46] ______, "Optimal Sequencing of a Single Machine Subject to Precedence Constraints," Management Science, Vol. 19, No. 5, January, 1973.

[47] Lawler, E. L. and Moore, J. M., "A Functional Equation and Its Application to Resource Allocation and Sequencing Problems," Management Science, Vol. 16, No. 1, September, 1969.

[48] Maxwell, W. L., "The Scheduling of Single Machine Systems: A Review," The Int. Journal of Prod. Research, Vol. 3, No. 3, 1964.

[49] McMahon, G. and Florian, M., "On Scheduling with Ready Times and Due Dates to Minimize Maximum Lateness," Report #159, Universite de Montreal, Departement d'Informatique, January, 1974.

[50] McNaughton, R., "Scheduling with Deadline and Loss Functions," Management Science, 6, No. 1, October, 1959.

[51] McWhite, P. B. and Ratliff, H. D., "Allocation of Mine Countermeasure Resources When Shipping Flow is Fixed," Research Report No. 73-12, Industrial and Systems Engineering Department, University of Florida, Gainesville, Florida, January, 1973.

[52] Mehl, Roger H., "A Call for Practical Rail Transportation System Models," presented at the ORSA/TIMS Joint National Meeting, Boston, Mass., April 22-24, 1974.

[53] Mellor, P., "A Review of Job Shop Scheduling," Operational Research Quarterly, 17 (1966), 161-171.

[54] Miller, Louis W., "Branch and Bound and Heuristic Approaches to a Sequencing Problem with Team Requirements," AIIE Transactions, Vol. 6, No. 3, September, 1974.

[55] Moore, J. Michael, "An n Job, One Machine Sequencing Algorithm for Minimizing the Number of Late Jobs," Management Science, Vol. 15, No. 1, September, 1968.

[56] Muntz, R. R., Scheduling of Computations on Multiprocessor Systems: The Preemptive Assignment Discipline, Ph.D. Dissertation, Princeton University, Princeton, N.J., April, 1969.

[57] Petersen, Clifford C., "Solving Scheduling Problems Through Reordering Operations," Symposium on Scheduling, Raleigh, N.C., May 15-17, 1972.

[58] Porteus, E. L. and Yormark, J. S., "More on Min-Max Allocation," Management Science, Vol. 18, No. 9, May, 1972.

[59] Pounds, W. F., "The Scheduling Environment," in Industrial Scheduling, J. F. Muth and G. L. Thompson (eds.), Prentice-Hall, New York, 1963.

[60] Rail Transportation: A Developmental Issue, Railroad Research Information Service (1973), U. S. Department of Transportation.

[61] Rinnooy-Kan, A. H. G., "The Machine Scheduling Problem," Report BW 27/73, Mathematisch Centrum, Amsterdam, 1973.

[62] Rinnooy-Kan, A. H. G., Lageweg, B. J. and Lenstra, J. K., Report BW 33/74, Mathematisch Centrum, Amsterdam, April, 1974.

[63] Roberts, S. D. and Moodie, C. L., "Experiments with Priority Dispatching Rules in a Parallel Processor Shop," Int. Journal of Prod. Research, 6 (1968), 4, pp. 303-312.

[64] Robillard, P., private communication, October, 1973.

[65] Root, J. G., "Scheduling with Deadlines and Loss Functions on k Parallel Machines," Management Science, 11, No. 3, January, 1965.

[66] Rothkopf, Michael H., "Scheduling Independent Tasks on Parallel Processors," Management Science, Vol. 12, No. 6, January, 1966, pp. 437-447.

[67] Salzborn, F. J. M., "Minimum Fleetsize for a Suburban Railway System," Transportation Science, 4 (1970), 383-402.

[68] Schild, A. and Fredman, I. J., "Scheduling Tasks with Linear Loss Functions," Management Science, 7, No. 3, April, 1961.

[69] ______, "Scheduling Tasks with Deadlines and Non-Linear Loss Functions," Management Science, 9, No. 1, October, 1962.

[70] Schrage, L. and Miller, L. W., "The Queue M/G/1 with Shortest Remaining Processing Time Discipline," Operations Research, Vol. 14, No. 4, July-August, 1966.

[71] Shwimer, J., "On the N-Job, One Machine, Sequence-Independent Scheduling Problem with Tardiness Penalties: A Branch and Bound Solution," Management Science, Vol. 18, No. 6, February, 1972, pp. 301-313.

[72] Sidney, Jeffrey B., "One Machine Sequencing with Precedence Relations and Deferral Costs, Parts I, II," Working Papers Nos. 124, 125, Faculty of Commerce and Business Administration, University of British Columbia, April, 1972.

[73] ______, "An Extension of Moore's Due Date Algorithm," Working Paper No. 126, Faculty of Commerce and Business Administration, University of British Columbia, April, 1972.

[74] Smith, W. E., "Various Optimizers for Single State Production," Nav. Res. Log. Quart., 3, No. 1, March, 1956.

[75] Spinner, Allen H., "Sequencing Theory: Development to Date," Nav. Res. Log. Quart., 15, 319-330, June, 1968.

[76] Srinivasan, V., "A Hybrid Algorithm for the One Machine Sequencing Problem to Minimize Total Tardiness," Nav. Res. Log. Quart., Vol. 18, No. 3, September, 1971, pp. 317-327.

[77] Walter, J. M., "Scheduling Parallel Processors with Job Precedence Costs Using a Weighted Criterion," M.S. Thesis, Purdue University, August, 1969.

[78] Wilkerson, L. J. and Irwin, J. D., "An Improved Method for Scheduling Independent Tasks," AIIE Transactions, Vol. 3, No. 3, September, 1971.

[79] Zaloom, Victor A., "Research Abstract on a Feasibility Approach to Optimal Schedules," Symposium on the Theory of Scheduling and Its Applications, Springer-Verlag, 1973.

BIOGRAPHICAL SKETCH

Louis Anthony Martin-Vega was born on December 14, 1947 in New York, New York. He received his elementary and secondary education in New York, Florida and Puerto Rico, graduating from Franklin D. Roosevelt High School in Guanica, Puerto Rico, as president and valedictorian of the class of 1964. He attended the University of Puerto Rico at Mayagüez from 1964 to 1968, receiving a Bachelor's degree in Industrial Engineering, magna cum laude, in December, 1968. Upon graduation he was employed as an Assistant Systems Analyst by the Banco Credito y Ahorro Information Systems Center in Hato Rey, Puerto Rico. In September, 1969 he was awarded a scholarship by the Economic Development Administration of Puerto Rico to pursue graduate studies at New York University. He left graduate school in May, 1970 to serve as Assistant Director of a nine-month Traffic Safety Study for the Department of Public Works in San Juan, Puerto Rico. He returned to New York University in February, 1971, completing a Master of Science degree in Operations Research in June of the same year.

In June, 1971, Louis Martin-Vega entered the graduate program in Industrial and Systems Engineering at the University of Florida. He received a Master of Engineering degree from the University of Florida in March, 1973.

Louis Martin-Vega is a registered Professional Engineer and is a member of the Tau Beta Pi and Alpha Pi Mu honorary engineering societies as well as the American Institute of Industrial Engineers,



The Institute of Management Science, Operations Research Society of America and College of Engineers and Architects of Puerto Rico professional societies. He is married to the former Emed Magali Flores of Guanica, Puerto Rico, and is the proud father of one son, Louisito, and one daughter, Monica.

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.


H. Donald Ratliff, Chairman Associate Professor of Industrial and Systems Engineering

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Richard L. Francis Professor of Industrial and Systems Engineering

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Thom J. Hodgson

Associate Professor of Industrial and Systems Engineering

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Ira Horowitz

Professor of Management

This dissertation was submitted to the Graduate Faculty of the College of Engineering and to the Graduate Council, and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.

March, 1975


Dean, College of Engineering

Dean, Graduate School

UNIVERSITY OF FLORIDA
