


CAC Document Number 240 CCTC-WAD Document Number 7518

Networking Research in Front Ending and Intelligent Terminals

ENFE Final Report

September 30, 1977



Prepared for the
Command and Control Technical Center
WWMCCS ADP Directorate
Defense Communications Agency
Washington, D.C. 20305

under contract DCA100-76-C-0088

Center for Advanced Computation
University of Illinois at Urbana-Champaign
Urbana, Illinois 61801

September 30, 1977

Approved for Release:

Peter A. Alsberg, Principal Investigator

TABLE OF CONTENTS

Page

SUMMARY 2

Background

ENFE Research Program 2

ENFE SOFTWARE ARCHITECTURE 7

General Description 7

Host-to-Front-End Communications 7

Channel Protocol 7

Channel Protocol module 8

Process-to-Service Communications 8

Process-to-Service Protocols 9

Service Structure 9

Host-Host Service module 10

Program Access Service module 10

Server Virtual Terminal Service module 10

PROTOCOL SPECIFICATIONS 11

Host-to-Front-End Protocol 11

ARPANET Host-Host Process-to-Service Protocol 12

Program Access Process-to-Service Protocol 12

Server Virtual Terminal Process-to-Service Protocol.. 13

Other Process-to-Service Protocols 14

Telnet Data Entry Terminal Option 14

EXPERIMENTATION 16

ENFE Experiment Plan 16

Goals 16

Tools 16

Specific Tests 17

UNIX/ENFE Experimental Performance Report 17

Experimentation Software 17

Experiment Results 18

IMPLICATIONS OF EXPERIMENT RESULTS 21

Multi-Host Study 21

OFFLOADING STRATEGIES 41

Offloading the Telnet Protocol 41

Offloading the File Transfer Protocol 41

Offloading Other ARPANET Protocols 42

ALTERNATIVE ARCHITECTURES 43

Alternative Architecture Research Plan

State of the Art

Research Directions 43

Research Plan 43

SUMMARY

Background

Under contract DCA100-76-C-0088, the Center for Advanced Computation of the University of Illinois at Urbana-Champaign has investigated the capabilities of network front ends. As a part of that contract, an experimental network front end (ENFE) has been developed to interface a World-Wide Military Command and Control System (WWMCCS) H6000 to the ARPA network and to conduct experiments with the proposed ARPANET Host-to-Front-End Protocol. A total of 194.28 man-months were expended over a period of 12 months (1 October 1976 to 30 September 1977).

An experimental network front end (ENFE) was a primary contract deliverable. Delay in GFE hardware delivery significantly decreased the work that could have been accomplished. Digital Equipment Corporation delivered the ENFE development mini-computer (DEC PDP-11/70) three months late. This slowed construction of critical ENFE operating system software. Associated Computer Consultants delivered the Honeywell-DEC communications link three months late, which delayed software installation, testing, and evaluation. Therefore, only a crude evaluation of ENFE capabilities is available at this time. Despite these delays, however, all of the essential work contracted for has been successfully completed. The ENFE was built, tested, and is currently operational.


ENFE Research Program

The ENFE research program was organized into two teams. The first team implemented the ENFE. The ENFE uses a DEC PDP-11/70 computer running a Unix general purpose operating system. The Unix operating system capabilities were expanded to provide more general support for Host-to-Front-End Protocol (HFP) software. Measurement software was added to Unix to support tests and experiments.

The second team was concerned with protocol issues. This team revised the HFP specifications, generated Telnet protocol options for Honeywell VIP terminals, conducted offloading studies, followed AUTODIN II protocol developments, and constructed a plan for research into alternative front-end architectures.

Individuals and small groups were drawn from both of these teams to generate an experiment plan and to carry out and analyze ENFE tests and experiments.

There were some risks associated with the ENFE research program. The state of the art in network communications is geared to machine/terminal interaction rather than machine/machine interaction. A generalized machine-to-machine protocol like HFP had never been used to facilitate communications between a large computer and a front end. Furthermore, Unix was not designed to support high speed message switching. Given the state of the art in HFP and the architecture of Unix, the Unix ENFE and HFP software were strictly experimental. The primary product of this research program was experience with the host front-ending problem.

All work (except the multi-host study) performed under the contract has already been thoroughly documented (Tables 1 and 2). Thus, this final report will abstract those reports produced.


Table 1
Contract Deliverables

CAC Document  CCTC-WAD
Number        Document Number  Title                                 Date

220           7501             DRAFT H6000 Software Specifications   15 Nov. 1976
221           7502             DRAFT Experimental Network Front End  15 Dec. 1976
                               Functional Description
220           7501             FINAL H6000 Software Specifications   10 Jan. 1977
221           7502             FINAL Experimental Network Front End  15 Jan. 1977
                               Functional Description
227           7509             DRAFT Experimental Network Front End  28 Mar. 1977
                               Experiment Plan
227           7509             FINAL Experimental Network Front End  16 May 1977
                               Experiment Plan
232           7512             DRAFT Alternative Architecture        16 May 1977
                               Research Plan
233           7515             DRAFT Experimental Network Front End  1 July 1977
                               Software Functional Description
233           7515             FINAL Experimental Network Front End  1 Aug. 1977
                               Software Functional Description
232           7512             FINAL Alternative Architecture        30 Sept. 1977
                               Research Plan
239           7517             UNIX/ENFE Experimental Performance    30 Sept. 1977
                               Report
230           7511             Offloading ARPANET Protocols to a     30 Sept. 1977
                               Front End
241           7519             ENFE Nassi-Shneiderman Flow Charts    30 Sept. 1977
242           7520             ENFE Listings and Object Code         30 Sept. 1977
240           7518             ENFE Final Report                     30 Sept. 1977


Table 2
Unscheduled Reports Delivered

CAC Technical  CCTC-WAD
Memo Number    Document Number  Title                               Date

219            7503             Host to Front End Protocol/         19 Aug. 1977
                                Version I
80             7504             ARPANET Host-Host Process-to-       17 Mar. 1977
                                Service Protocol Specification
81             7505             Program Access Process-to-          10 Mar. 1977
                                Service Specification
82             7506             Server Virtual Terminal             17 Mar. 1977
                                Process-to-Service Protocol
                                Specification
84             7507             Illinois Inter-Process              1 Apr. 1977
                                Communication Facility for Unix
94             7514             Telnet Data Entry Terminal Option   27 June 1977


ENFE SOFTWARE ARCHITECTURE

General Description

The offloaded network software can be thought of as a set of services provided to host (H6000) processes or to users. These services allow the network and the various hosts connected to the network to be conveniently used. A complete functional description of the ENFE software architecture is contained in CAC Document No. 233. The key features are summarized below.

Host-to-Front-End Communications

A basic mechanism must be provided to support communication between host processes and front-end services. This mechanism is the Host-to-Front-End Protocol (HFP), which is defined in CAC Document 219 (ARPA Request for Comments (RFC) 710). The HFP specification distinguishes two protocol layers: the channel protocol and the process-to-service protocols.

Channel Protocol. By means of the channel protocol, logical channels are set up between host processes and the front-end services, and messages are transmitted on these channels. Provisions are made for flow control and for out-of-sequence signaling. The channel protocol defines five types of HFP Messages:


1. BEGIN, which sets up logical channels;

2. END, which terminates logical channels;

3. TRANSMIT, which transmits data;

4. SIGNAL, which provides a means for synchronizing the ends of a logical channel, for interrupting the other end, and for flushing data from the other end of the channel; and

5. EXECUTE, which provides a means for passing service-specific information "out of band;" i.e., outside the strict sequencing required for TRANSMIT Messages.

Each Message type can be either a Command (requesting that the action defined by the Message be taken) or a Response (indicating whether the action was taken and, if not, providing some explanation). The HFP specifications use the capitalized word, Message, to refer to these Message types.

Channel Protocol module. The front end contains a software module, the Channel Protocol module (CPM), which manages the logical channels and serves as a bi-directional multiplexor. The host also contains a CPM which similarly manages the other ends of the logical channels.

Process-to-Service Communications

Communications between a host process and a front-end service may be divided into three stages:

1. communications between the host process and the host CPM (described in CSC Document No. R493700056-2-1, "Host to Front-End Processor Protocol Interface Functional Description"),

2. communications between the host CPM and the front-end CPM (described in CAC Document No. 220, "H6000 Software Specifications" and CAC Document No. 219, "Host-to-Front-End Protocol"), and

3. communications between the front-end CPM and a front-end service (described in CAC Document No. 233, "Experimental Network Front End Software Functional Description").

Process-to-Service Protocols. The process-to-service protocols specify the content, sequencing, and type of HFP Messages by which host processes communicate with front-end services. The process-to-service protocols implemented in the ENFE are:

1. ARPANET Host-Host Process-to-Service Protocol (CAC Technical Memorandum No. 80),

2. Program Access Process-to-Service Protocol (CAC Technical Memorandum No. 81), and

3. Server Virtual Terminal Process-to-Service Protocol (CAC Technical Memorandum No. 82).

Service Structure. Each front-end service implements one process-to-service protocol. All front-end services execute within their own address spaces; i.e., as user-level programs.

Each program is structured as a finite state machine that accepts two types of inputs. HFP Message inputs are generated by processes in the host requesting action from the front-end services. I/O completion event inputs are generated by the system in response to service-initiated device I/O operations. Each input is associated with a specific HFP logical channel. The input type and current channel state determine the immediate action and next channel state. Most actions result in the transmission of data to another destination and in the generation of an HFP Response indicating the success or failure of the action.
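The dispatch just described, in which the input type and current channel state jointly select the action and the next channel state, can be sketched as a table-driven finite state machine. The states, input names, and handlers below are hypothetical simplifications for illustration, not the actual ENFE service code.

```python
# Minimal sketch of a front-end service as a finite state machine.
# Inputs are HFP Messages from the host or I/O completion events
# from the system; each input is tagged with a logical channel.

IDLE, OPEN = "IDLE", "OPEN"           # hypothetical channel states

def on_begin(chan):                   # each handler returns (next state, result)
    return OPEN, f"channel {chan} opened"

def on_transmit(chan):
    return OPEN, f"data relayed on channel {chan}"

def on_end(chan):
    return IDLE, f"channel {chan} closed"

# (current state, input type) -> action
TRANSITIONS = {
    (IDLE, "BEGIN"): on_begin,
    (OPEN, "TRANSMIT"): on_transmit,
    (OPEN, "io_complete"): on_transmit,
    (OPEN, "END"): on_end,
}

channel_state = {}                    # per-channel state table

def step(chan, input_type):
    """Dispatch one input: state and input type pick the action."""
    state = channel_state.get(chan, IDLE)
    handler = TRANSITIONS.get((state, input_type))
    if handler is None:
        return f"protocol error in state {state}"   # failure Response
    channel_state[chan], result = handler(chan)
    return result                                   # success Response

print(step(3, "BEGIN"))               # channel 3 opened
print(step(3, "TRANSMIT"))
print(step(3, "END"))
```

The returned strings stand in for the HFP Responses that a real service would generate after each action.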

Host-Host Service module. The ARPANET Host-Host Service (HHS) module enables programs running in the host to use the ARPANET Network Control Program (NCP) in the front end. It implements the ARPANET Host-Host process-to-service protocol.

Program Access Service module. The Program Access Service (PAS) module enables programs running in the host to execute arbitrary programs in the front end. It implements the Program Access process-to-service protocol.

Server Virtual Terminal Service module. The ARPANET Server Virtual Terminal Service (SVTS) module enables programs on the host to be accessed by terminals connected to other hosts on the ARPANET. It implements the ARPANET Server Virtual Terminal process-to-service protocol. It also implements the ARPANET Telnet protocol described in NIC Document No. 15372.


PROTOCOL SPECIFICATIONS

Host-to-Front-End Protocol

The performance of data communications tasks such as terminal handling and network protocol interpretation can impose a significant load on a host. Some of these tasks can be performed in the front end for the host. The Host-to-Front-End Protocol (HFP) defines a form of communication between the host and the front end to enable this "offloading" of services. Thus, the HFP provides specifications for:

1. a channel protocol,

2. individual Commands and Responses,

3. the process-to-CPM interface, and

4. the service-to-CPM interface.

In addition, the HFP provides specifications for specifying process-to-service protocols.

Each HFP Message contains a HEADER carrying channel protocol information and may contain TEXT carrying process-to-service protocol information. Process-to-service protocols use HFP Messages to carry information between a process and a service module. The HFP Message types are:

1. BEGIN Command/Response,

2. END Command/Response,


3. TRANSMIT Command/Response,

4. SIGNAL Command/Response, and

5. EXECUTE Command/Response.

ARPANET Host-Host Process-to-Service Protocol

CAC Technical Memorandum No. 80 specifies a process-to-service protocol for providing ARPANET Host-Host Protocol and Initial Connection Protocol services to a process through the HFP. The Host-Host Protocol is the basic inter-process communication protocol for the ARPANET (ARPANET NIC Document 8246). The program which implements it in each host is the Network Control Program (NCP). The service described here provides an interface, through the HFP, between a process in a host and an NCP in a front end. This enables the host process to establish and use ARPANET connections.

Program Access Process-to-Service Protocol

CAC Technical Memorandum No. 81 specifies a process-to-service protocol for the execution of, or attachment to, arbitrary programs in the front end. The intent was to provide a general mechanism that would allow the host to access terminal-oriented front-end services. Examples of such services are User Telnet and teleconferencing.

The protocol assumes that the program access service itself is completely offloaded to the front end. The only software remaining in the host is a relay process that passes properly formatted data between a host terminal or process and the Program Access Service module in the front end. HFP TRANSMIT Commands are used for this data transmission.

Server Virtual Terminal Process-to-Service Protocol

CAC Technical Memorandum No. 82 specifies a protocol for offloading the server side of a virtual terminal protocol; e.g., Server Telnet for the ARPANET. This protocol allows some flexibility in the degree of offloading that may be achieved. Although the protocol is applicable to a general virtual terminal service, the discussion in the specification is in terms of the ARPANET Telnet Protocol, which is currently the only such protocol widely used.

The functions of the typical Server Telnet implementation include:

1. manipulating network connections,

2. negotiating Telnet options,

3. mapping between local terminal representations and network virtual terminal representations,

4. transmitting data over connections,

5. handling special control functions, and


6. interfacing remote terminals so that they appear as if they were local terminals.

The server virtual terminal process-to-service protocol uses HFP Messages to carry information between the residual part of Server Telnet in the host (the "process") and the Server Virtual Terminal Service module (the "service") in the front end.
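Function 2 above, option negotiation, uses the Telnet protocol's standard WILL/WONT/DO/DONT command exchange. The sketch below shows the refuse-by-default discipline a server applies to options it does not support; the particular set of supported options is an invented example, not the SVTS implementation.

```python
# Telnet option negotiation command codes, as assigned in the
# ARPANET Telnet specification.
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254

# Hypothetical example: server supports ECHO (1) and SUPPRESS GO AHEAD (3).
SUPPORTED = {1, 3}

def negotiate(command, option):
    """Answer one IAC <command> <option> request from the remote side.

    A request involving an unsupported option is always refused,
    which is the safe default the protocol requires."""
    if command == DO:      # peer asks us to enable an option
        reply = WILL if option in SUPPORTED else WONT
    elif command == WILL:  # peer offers to enable an option
        reply = DO if option in SUPPORTED else DONT
    else:                  # DONT or WONT: acknowledge the disable
        reply = WONT if command == DONT else DONT
    return bytes([IAC, reply, option])

assert negotiate(DO, 1) == bytes([IAC, WILL, 1])    # accepted
assert negotiate(DO, 99) == bytes([IAC, WONT, 99])  # refused
```

Because either side may refuse any option, the negotiation always converges, and a minimal implementation that refuses everything remains protocol-correct.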

Other Process-to-Service Protocols

Further study of the problem of offloading Telnet led us to conclude that a single, symmetric protocol should be designed to handle both User and Server Telnet services. We have constructed, but have not implemented, such a protocol (CAC Technical Memorandum No. 103, "Network Virtual Terminal Process-to-Service Specification"). This protocol specification, in addition to specifications for three process-to-service protocols constructed in connection with our study of strategies for offloading the ARPANET File Transfer Protocol, is appended to CAC Document No. 230, "Offloading ARPANET Protocols to a Front End."

Telnet Data Entry Terminal Option

Under the current contract, we have been tasked to provide facilities for attaching data entry terminals, specifically the 7705 VIP terminal, to the ENFE and the Telnet software. However, the Telnet protocol was originally designed to support simple, scroll-mode terminals. To get the maximum amount of usefulness out of a data entry terminal, the Telnet protocol needed to be extended. Fortunately, the Telnet protocol has a built-in mechanism, the "option negotiation" mechanism, to allow such extension. We have therefore defined an option to support data entry terminals. In effect, this option defines the Network Virtual Data Entry Terminal. This option supports a minimal set of useful functions common to most data entry terminals and also allows a number of highly sophisticated functions to be negotiated. Details of this option may be found in "Data Entry Terminal Option," CAC Technical Memorandum No. 94 (ARPANET RFC 731).


EXPERIMENTATION

Experimental Network Front End Experiment Plan

This document (CAC Document No. 227) described our preliminary plans for the ENFE tests and experiments. The three main sections identified:

1. goals of the experimentation,

2. tools for experimentation, and

3. scenarios for specific experiments.

Goals. The goals of the experiment plan fell into three categories:

1. performance testing,

2. fine-tuning of the system, and

3. investigating benefits to be gained from design changes.

Tests were proposed to determine whether the system works as it is supposed to and is able to provide the front-ending facilities needed in the short term by the WWMCCS community. In particular, throughput and terminal support tests were given high priority.

Tools. In this document we described an optimal set of tools for thoroughly understanding the front end. The set we planned to implement included:

1. timing mechanisms for generating interrupts as specified by the experimenter and for timestamping messages,

2. artificial traffic generators (involving both software and hardware) at the three front-end interfaces (the interfaces to the host, to the network, and to the terminals), and

3. software to collect data as the experiments are run.

Other tools described in this document, such as a simulation program and queueing theory analysis, are essential to a follow-on study of how the front-end design may be improved.

Specific Tests. In the last section of the report, we presented a set of scenarios for specific tests. These were necessarily tentative, particularly as to details. We also mapped out a more comprehensive program of experimentation than we expected to be able to carry out under the current contract. Tests and experiments will continue under a follow-on contract (the Phase B work).

UNIX/ENFE Experimental Performance Report

CAC Document No. 239 reports on the results of the Phase A program of testing and experimentation.

Experimentation Software. The front-end tests used a configuration of software modules similar to the standard ENFE configuration, except that local and foreign host processes were simulated by processes resident in the ENFE itself. These processes served as message generators, sending messages to each other via the standard ENFE modules. An extra copy of the Channel Protocol module was also included in the ENFE to interface the "local host" message generator to the front-end's Channel Protocol module.

In order to make accurate timing measurements, a programmable clock was attached to the PDP-11/70. System calls were implemented to enable the experiment software to utilize this clock to get clock readings and to schedule interrupts. Timestamping software was built into the front end at various points. This software inserted clock readings (timestamps) into the texts of messages as they were transmitted through the front end. In this way the progress of a message through the ENFE could be measured to a high degree of accuracy.

Small modifications were made in the standard Unix monitoring facilities to provide for monitoring of kernel-level as well as user-level processes. This allowed a detailed analysis of processor usage.

Experiment Results. A large part of the Phase A testing and evaluation task involved testing the software to make certain that it operates correctly. A certain amount of fine-tuning (more than was anticipated in the Experiment Plan) was also carried out. The Phase A measurements reported here have allowed us to identify which portions of the system need to be made more efficient and to draw broad conclusions regarding the system architecture.

The measurements reported here include:

1. timing tests of the Inter-Process Communication (IPC) primitives,

2. timing of the progress of single messages through the front end,

3. monitoring of processor usage, and

4. saturation throughput using several different configurations.

From timing the IPC primitives, we found that it requires a minimum of five to six milliseconds to relay a message from one front-end process to another. The single-message timing measurements indicate that (except in the case of the NCP) the time a message spends being processed by the modules is a small fraction of the time required to relay the message between modules. Monitoring the processor usage by the Channel Protocol module and by the Host-Host Service shows that 85 to 90 percent of CPU time is expended in system calls. Furthermore, kernel-level monitoring shows that about half of this time is just in the overhead of making system calls. We conclude that, as long as the front-end architecture requires Unix system calls to relay messages from one module to another, no dramatic improvement in the efficiency of the front-end services can be expected.

Obtaining meaningful throughput measurements has been made difficult by the self-contained experimentation configuration, which included two passages through the NCP. In this configuration, we found that throughput is severely limited by the NCP. To investigate the extent of this limitation, we have sent messages as fast as possible from the ENFE through the Urbana IMP to an 11/50 at Urbana. With this configuration, each message is handled only once by the ENFE NCP. The maximum throughput measured to date with this configuration is about 50 kilobaud (already enough to saturate the ARPANET) when messages are sent by the 11/50 to the ENFE. However, when messages are sent from the ENFE the throughput is roughly 40 percent less. We attribute this difference to the inability of the slower 11/50 to receive data rapidly.

To further investigate the factors affecting throughput, we have separately exercised the two major portions of the message path in the self-contained configuration. The front-end portion of the path (from the message generator to the Host-Host Service) has a message throughput that is four to five times greater than that of the network portion (from a message generator through the NCP to the IMP and back through the NCP to a message receiver). These results provide corroboration of the limiting effect of the NCP.


IMPLICATIONS OF EXPERIMENT RESULTS

Multi-Host Study

In this study we examine the impact of a multi-host configuration on a network front end (NFE). This impact is a function of both the capacity and the structure of the NFE. In this study, the NFE examined is the WWMCCS Phase A Experimental NFE (ENFE). Our estimates show that the WWMCCS Phase A ENFE can support a multi-host configuration. Minor protocol changes may be required.

Figure 1 shows a model NFE in a single-host configuration. The NFE communicates with the host via a host interface. In the ENFE this is the Asynchronous Bit Serial Interface (ABSI). The NFE communicates with the network via a network interface. In the ENFE this is the IMP-11A. The NFE communicates with terminals via terminal interfaces. In the ENFE, these are DH-11 and V-11 interfaces.

Figure 2 shows an NFE in a multi-host configuration. The difference between this and the single-host configuration is that in the multi-host configuration there are additional host interfaces (ABSI's in the ENFE).

We discuss the quantitative and qualitative effects of multi-host configurations on the NFE. We first deal with the quantitative effects. In so doing, we develop a method for determining the resources required in the NFE for a given load imposed by the many hosts. We then deal with the qualitative effects on the NFE and the protocols that it uses.


Figure 1. Single-Host NFE Configuration

[Figure: the NFE connects a single host to the network and to terminals through three interfaces, where HI = interface to the host, NI = interface to the network, TI = interfaces to terminals, PT = port, and T = terminal.]


Figure 2. Multi-Host NFE Configuration

[Figure: as Figure 1, but with additional host interfaces connecting multiple hosts to the NFE, where HI = interface to the host, NI = interface to the network, TI = interfaces to terminals, PT = port, and T = terminal.]


When we ask whether an NFE can support a given multi-host configuration, we are asking, in part, whether the resources available in the NFE are sufficient to support the load imposed by the hosts. Put another way, if Rlimit is the amount of a given resource R which is available in the NFE and Rreq is the total amount of the resource R which is required to support the load imposed by the hosts, we must have:

Rreq < Rlimit, for all resources R.

To determine whether or not this condition holds, we must compute Rreq. To do this, we break the NFE down into its component modules, and determine the resources required by each module. We then analyze the load in terms of the use of NFE modules which it entails. Finally, we obtain the total resource requirement by summing the resources required by each module for supporting the load imposed by the hosts.

Let Rmodule[m] be the amount of resource R that is required by module m. Then Rreq is computed by the summation

(1)  Rreq = SUM(m=1 to M) Rmodule[m],

where M is the number of modules in the NFE. Rmodule[m] will in general consist of two parts: the fixed part, Rfixed[m], and the variable part, Rvar[m]. Thus

(2)  Rmodule[m] = Rfixed[m] + Rvar[m].

Rfixed[m] is the amount of resource R which is required by module m independently of the load. Rvar[m] is the amount of resource R which is required by module m and which varies with the load.

We can usually determine Rfixed[m] directly from m by measurement. To determine Rvar[m], we first determine the amount of resource R that is required by module m for each unit of load. This we call Rload[m]. We then determine the total load that entails the use of module m. This we call Lmodule[m]. Then

(3)  Rvar[m] = Rload[m] * Lmodule[m].

Lmodule[m] will be the sum of the loads which are imposed by each of the hosts and which entail the use of module m. We let L[h,m] be the load which is imposed by host h and which entails the use of module m. If H is the number of hosts,

(4)  Lmodule[m] = SUM(h=1 to H) L[h,m],

(5)  Rvar[m] = Rload[m] * SUM(h=1 to H) L[h,m],

(6)  Rmodule[m] = Rfixed[m] + (Rload[m] * SUM(h=1 to H) L[h,m]), and


(7)  Rreq = SUM(m=1 to M) (Rfixed[m] + (Rload[m] * SUM(h=1 to H) L[h,m])).

We note that the model we have developed employs a linear relationship between load and resource consumption. In the real world, such a relationship does not always hold, particularly with respect to time-limited resources such as processor time.
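As a worked sketch of equations (1) through (7): the module names and all resource figures below are invented for illustration, not ENFE measurements.

```python
# Evaluate equation (7):
#   Rreq = SUM over modules m of (Rfixed[m] + Rload[m] * SUM over hosts h of L[h,m]).
# All numbers here are invented example data.

Rfixed = {"CPM": 10, "HHS": 8}   # fixed resource required by each module
Rload  = {"CPM": 2,  "HHS": 3}   # resource required per unit of load
L = {                            # L[h][m]: load imposed by host h on module m
    "host1": {"CPM": 4, "HHS": 2},
    "host2": {"CPM": 1, "HHS": 5},
}

def rreq(Rfixed, Rload, L):
    total = 0
    for m in Rfixed:
        Lmodule = sum(L[h][m] for h in L)        # equation (4)
        total += Rfixed[m] + Rload[m] * Lmodule  # equations (5) and (6)
    return total                                 # equation (7)

Rlimit = 60
print(rreq(Rfixed, Rload, L), rreq(Rfixed, Rload, L) < Rlimit)  # 49 True
```

The final comparison against Rlimit is the adequacy condition Rreq < Rlimit stated above.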

We now discuss this method in more detail. In an NFE, the resources we consider are of two kinds: space-limited resources and time-limited resources. The space-limited resource we consider is primary memory. The time-limited resources we consider are processor time and interface bandwidth. We will discuss the computation of Rreq for primary memory in some detail. We will discuss the other two resources only insofar as their treatment differs from that of primary memory.

We first consider the primary memory resource C. Let Climit be the total available memory in the NFE. We want to compute Creq, the total amount of memory required to support the load imposed by the hosts. Then we can determine whether

Creq < Climit.

If this relation holds, then the memory in the NFE is sufficient to support the load imposed by the hosts.

Consulting equation (4), we see that the first step in computing Creq requires the determination of L[h,m] for each host h and for each module m. This information can be conveniently represented by a matrix:

L[1,1]  L[1,2]  ....  L[1,M]
L[2,1]  L[2,2]  ....  L[2,M]
  .       .             .
L[H,1]  L[H,2]  ....  L[H,M]

Each row represents the load which is imposed by a given host. Each column represents the load which entails a given module. We note that the sum of column m is Lmodule[m].

To determine the L[h,m], we need two sets of information which involve the services which the NFE performs for the hosts. We let U[h,s] be the number of simultaneous uses of service s by host h. This information can also be represented by a matrix:

U[1,1]  U[1,2]  ....  U[1,S]
U[2,1]  U[2,2]  ....  U[2,S]
  .       .             .
U[H,1]  U[H,2]  ....  U[H,S]

where S is the number of different services performed by the NFE. Each row represents the use of services by a given host. Each column represents the use of a given service across all hosts.

We let E[s,m] be the number of unit loads imposed on module m for each use of service s. Again the information can be represented by a matrix:

E[1,1]  E[1,2]  ....  E[1,M]
E[2,1]  E[2,2]  ....  E[2,M]
  .       .             .
E[S,1]  E[S,2]  ....  E[S,M]


Each row represents the load on the modules of the NFE for each use of a given service. Each column represents the load on a given module when each service is used once. Then

(8)  L[h,m] = SUM(s=1 to S) U[h,s] * E[s,m];

that is,

L = UE.

Consulting equation (5), we see that the next step in computing Creq requires the determination of Cload[m] for each module m. This is simply the amount of memory required by module m for each instance of whatever module m does. If a copy of module m is required for each instance of whatever module m does, then Cload[m] is the total memory used by each copy of module m. If the copies of module m all share the same reentrant program and each has a separate copy of the data space, then Cload[m] is the size of the data space for a copy of module m. If there is only one copy of module m which allocates table space and buffer space for each instance of whatever module m does, then Cload[m] is the size of the table space and buffer space allocated for each instance of whatever module m does.

Consulting equations (6) and (7), we see that the final steps in computing Creq require the determination of Cfixed[m] for each module m. This is simply the amount of memory required by module m independent of how many instances there are of whatever module m does. If a copy of module m is required for each instance of whatever module m does, then Cfixed[m] is zero. If the copies of module m all share the same reentrant program but each has a separate copy of the data space, then Cfixed[m] is the size of the reentrant program. If there is only one copy of module m which allocates table space and buffer space for each instance of whatever module m does, then Cfixed[m] is the size of module m when no such table space and buffer space is allocated.

The computation of Creq thus consists of the following 9 steps:

1) construction of the matrix U,
2) construction of the matrix E,
3) computation of the matrix L = UE,
4) computation of Lmodule[m] using equation (4),
5) determination of Cload[m],
6) computation of Cvar[m] using equation (5),
7) determination of Cfixed[m],
8) computation of Cmodule[m] using equation (6), and
9) computation of Creq using equation (7).

We note that steps 1, 2, 3, and 4 are independent of the resource under consideration. Therefore these steps need be performed only once for all resources.
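For concreteness, the nine steps can be sketched in a few lines of code. The matrices U and E and the per-module constants below are small hypothetical inputs chosen for illustration, not values taken from the ENFE example that follows.

```python
# Sketch of the 9-step Creq computation described above.
# U, E, Cload, and Cfixed are hypothetical inputs.

U = [[2, 1],            # step 1: U[h][s], uses of service s by host h
     [0, 3]]
E = [[1, 0, 2],         # step 2: E[s][m], unit loads on module m per use
     [1, 1, 0]]
Cload  = [10, 20, 30]   # step 5: bytes per unit load of each module
Cfixed = [100, 200, 0]  # step 7: fixed bytes of each module

H, S, M = len(U), len(E), len(E[0])

# Step 3: L = UE (equation 8).
L = [[sum(U[h][s] * E[s][m] for s in range(S)) for m in range(M)]
     for h in range(H)]

# Step 4: Lmodule[m], the column sums of L (equation 4).
Lmodule = [sum(L[h][m] for h in range(H)) for m in range(M)]

# Steps 6, 8, and 9: equations 5, 6, and 7.
Cvar    = [Lmodule[m] * Cload[m] for m in range(M)]
Cmodule = [Cvar[m] + Cfixed[m] for m in range(M)]
Creq    = sum(Cmodule)

print(Lmodule, Creq)
```

Note that the code through the computation of Lmodule depends only on U and E; as the text observes, it need be run only once for all resources.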

If Climit > Creq, the total memory in the NFE will be adequate. But if the NFE is implemented on a mini-computer, we may also have to take into account the limitation on the amount of memory that can be addressed by each module.

We let Caddr be the address limit imposed on each module by the structure of the NFE hardware. Then the condition that must hold is

Caddr > Cmodule[m], for all modules m.

This requires no additional computation, since the Cmodule[m] were computed in step 8 above.

We now turn our attention to the time-limited resources: processor time and interface bandwidth.

Let us first consider the processor time resource P. This resource will be treated differently from primary memory in three ways:

1) We are more likely to be interested in the fractional utilization of P than in absolute measures in seconds. That is, Plimit will be 1, and Pload[m], Pfixed[m], Pmodule[m], and Preq will be expressed in fractions of the available processor time.

2) For most modules m, Pfixed[m] will be very small or zero. Exceptions will be those modules which implement communication protocols that require continuous activity for maintaining communication, even when there is no data to be sent or received.

3) Our simple linear model of the relation between load and resource use may have to be replaced by a more sophisticated model such as queueing theory.

The last resource we consider is the interface bandwidth resource B. This resource will be treated differently from primary memory in the three ways mentioned above under processor time. Further, interfaces are specialized resources as opposed to universal resources such as memory space and processor time. This has two consequences:

1) Use of each interface will probably be limited to a single module.

2) The computations for each interface, or kind of interface, will have to be made separately.

We now apply our modeling method to a concrete example using numerical data. We set ourselves the task of computing Creq for the ENFE in a hypothetical multi-host configuration. Our purpose is to illustrate the application of the method. Therefore we will oversimplify wherever it suits our convenience.

We employ the 9-step method discussed above.

In step 1, we construct the matrix U. We recall that U[h,s] is the number of simultaneous uses of service s by host h. We must therefore define the services performed by the ENFE. There are five:

1) HH performs ARPANET Host-Host Protocol interpretation for each of the local hosts.

2) SVTn serves as intermediary between the local hosts on one hand and remote terminals connected via the ARPANET on the other.

3) SVTt serves as intermediary between the local hosts on one hand and local terminals attached to the ENFE on the other.

4) UVTh serves as intermediary between terminals attached to the local hosts on one hand and remote hosts connected via the ARPANET on the other.

5) UVTt serves as intermediary between local terminals attached to the ENFE on one hand and remote hosts connected via the ARPANET on the other.

We note that the service UVTt does not involve the local hosts, but only involves local terminals attached to the ENFE. To account for the ENFE resources consumed by this service, we introduce a "phantom host" T. Let there be 3 local hosts, and let the use of ENFE services be as shown in Figure 3. This then is the matrix U.

                     service
            HH   SVTn   SVTt   UVTh   UVTt

   h   1    10     5      5     10      0
   o   2    12     6      7      4      0
   s   3     9     3     11      6      0
   t   T     -     -      -      -     15

              Figure 3. U[h,s]

In step 2, we construct the matrix E. We recall that E[s,m] is the number of unit loads imposed on module m for each use of service s. We must therefore examine the structure of the ENFE to identify the modules in the ENFE, determine what constitutes a unit load for each module, and determine how many unit loads are imposed on each module for each use of each service.

There are 9 modules in the ENFE that are pertinent to this discussion:

1) CPM is the Host-to-Front-End Protocol (HFP) channel protocol module. Its unit load is a logical channel.

2) HHS is the HFP service module that enables the local hosts to access the ARPANET NCP. Its unit load is a logical channel and the associated duplex ARPANET connection.

3) NCPD is the portion of the ARPANET NCP which is implemented as a daemon process. Its unit load is a duplex ARPANET connection.

4) NCPK is the portion of the ARPANET NCP which is implemented as part of the Unix operating system kernel. Its unit load is a simplex ARPANET connection.

5) SVTS is the HFP service module that enables the local hosts to access both remote terminals connected via the ARPANET and local terminals attached to the ENFE. Its unit load is a logical channel and the associated terminal.

6) UTH is the program that enables terminals attached to the local hosts to access remote hosts connected to the ARPANET. Its unit load is a terminal and the associated ARPANET connection.

7) UTT is the program that enables terminals attached to the ENFE to access remote hosts connected via the ARPANET. Its unit load is a terminal and the associated ARPANET connection.

8) TD is the Unix terminal device driver. It enables terminals attached to the ENFE to access other modules in the ENFE. Its unit load is a terminal.

9) PA is the HFP service module that enables the local hosts to access programs in the ENFE (such as UTH). Its unit load is a logical channel and the associated program.

Given the functions performed by each of these modules, the matrix E is as shown in Figure 4.

                            module
             CPM  HHS  NCPD  NCPK  SVTS  UTH  UTT  TD  PA

   s   HH      1    1     1     2     0    0    0   0   0
   e   SVTn    1    0     1     2     1    0    0   0   0
   r   SVTt    1    0     0     0     1    0    0   1   0
   v   UVTh    1    0     1     2     0    1    0   0   1
   i   UVTt    0    0     1     2     0    0    1   1   0
   c
   e
                      Figure 4. E[s,m]


In step 3, we compute the matrix L = UE. This is shown in Figure 5. We now have the load imposed on each module by each host. For example, host 1 imposes 30 unit loads on the CPM.

                            module
             CPM  HHS  NCPD  NCPK  SVTS  UTH  UTT  TD  PA

   h   1      30   10    25    50    10   10    0   5  10
   o   2      29   12    22    44    13    4    0   7   4
   s   3      29    9    18    36    14    6    0  11   6
   t   T       0    0    15    30     0    0   15  15   0

                      Figure 5. L[h,m]
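As a check on the worked example, the multiplication of step 3 can be reproduced mechanically from Figures 3 and 4 (taking the dashes of Figure 3 as zeros); a short sketch:

```python
# U[h][s] from Figure 3; rows are hosts 1, 2, 3, and the phantom host T.
U = [[10, 5,  5, 10,  0],
     [12, 6,  7,  4,  0],
     [ 9, 3, 11,  6,  0],
     [ 0, 0,  0,  0, 15]]

# E[s][m] from Figure 4; rows are the services HH, SVTn, SVTt, UVTh, UVTt;
# columns are the modules CPM, HHS, NCPD, NCPK, SVTS, UTH, UTT, TD, PA.
E = [[1, 1, 1, 2, 0, 0, 0, 0, 0],
     [1, 0, 1, 2, 1, 0, 0, 0, 0],
     [1, 0, 0, 0, 1, 0, 0, 1, 0],
     [1, 0, 1, 2, 0, 1, 0, 0, 1],
     [0, 0, 1, 2, 0, 0, 1, 1, 0]]

# Equation (8): L[h][m] = sum over s of U[h][s] * E[s][m].
L = [[sum(U[h][s] * E[s][m] for s in range(5)) for m in range(9)]
     for h in range(4)]

print(L[0])   # host 1's row of Figure 5: [30, 10, 25, 50, 10, 10, 0, 5, 10]
```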

In step 4, we compute Lmodule[m] by summing the columns of L. For example, the sum for the CPM column of L is:

Lmodule[CPM] = 30 + 29 + 29 + 0 = 88.

We now have the total load imposed on each module by all hosts.

In step 5, we determine Cload[m] for each module m. For example, by examining the ENFE program listings, we find that for each logical channel, the CPM requires 14 bytes of table space. Therefore, Cload[m] for the CPM is 14 bytes.

In step 6, we compute Cvar[m] as the product of Lmodule[m] and Cload[m]. For example, for the CPM we have:

Cvar[CPM] = 88 * 14 = 1232 bytes.

In step 7, we determine Cfixed[m]. For example, by examining the memory maps of the ENFE modules, we find that the CPM requires 12522 bytes, exclusive of the memory required per logical channel. We note that the modules NCPK and TD are actually implemented as parts of the Unix operating system. To account for the memory used by the operating system, we introduce another module called UNIX. We include the Cfixed[m] for NCPK and TD in the Cfixed[m] for UNIX.

In step 8, we compute Cmodule[m] as the sum of Cvar[m] and Cfixed[m]. For example, for the CPM we have:

Cmodule[CPM] = 1232 + 12522 = 13754 bytes.

We now have the total memory required by each module under the load that is defined by matrix U.

The results of steps 4 through 8 are shown in Figure 6.

Module m   Lmodule[m]   Cload[m]   Cvar[m]   Cfixed[m]   Cmodule[m]

CPM            88           14       1232      12522       13754
HHS            31           32        992      10982       11974
NCPD           80           58       4640      17864       22504
NCPK          160          202      32320          0       32320
SVTS           37           56       2072      13162       15234
UTH            20         1294      25880       2974       28854
UTT            15         3976      59640       7183       66823
TD             38          192       7296          0        7296
PA             20           32        640      13522       14162
UNIX            0            0          0      71452       71452

                          Figure 6.

Finally, in step 9, we compute Creq as the sum of the Cmodule[m]. We find that

Creq = 284373 bytes.

For a PDP-11/70,

Climit = 2**22 = 4194304 bytes.

Hence we have, for the load defined by matrix U,


Creq < Climit.
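The arithmetic of steps 6 through 9 can likewise be reproduced from the Lmodule[m], Cload[m], and Cfixed[m] columns of Figure 6; a short sketch:

```python
# (Lmodule[m], Cload[m], Cfixed[m]) for each module, from Figure 6.
figure6 = {
    "CPM":  ( 88,   14, 12522),
    "HHS":  ( 31,   32, 10982),
    "NCPD": ( 80,   58, 17864),
    "NCPK": (160,  202,     0),
    "SVTS": ( 37,   56, 13162),
    "UTH":  ( 20, 1294,  2974),
    "UTT":  ( 15, 3976,  7183),
    "TD":   ( 38,  192,     0),
    "PA":   ( 20,   32, 13522),
    "UNIX": (  0,    0, 71452),
}

# Equations (5)-(7): Cvar[m] = Lmodule[m] * Cload[m],
# Cmodule[m] = Cvar[m] + Cfixed[m], and Creq = sum of the Cmodule[m].
Cmodule = {m: lm * cl + cf for m, (lm, cl, cf) in figure6.items()}
Creq = sum(Cmodule.values())

Climit = 2**22   # PDP-11/70: 22-bit physical addresses, 4194304 bytes

print(Creq, Creq < Climit)   # 284373 True
```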

All of the foregoing deals with the quantitative aspects of deciding whether an NFE can support a multi-host configuration. But there are also qualitative aspects to this question. These qualitative aspects deal with the structure of the NFE hardware and software and with the structure of the protocols that the NFE uses. To facilitate our discussion, we will consider the ENFE and the protocols that it uses.

We first consider the structure of the ENFE hardware. The hardware basis of the ENFE is the PDP-11/70 computer. A multi-host configuration will require the addition of ABSI's to the ENFE hardware. The UNIBUS structure of the PDP-11/70 makes the addition of ABSI's relatively easy.

We next consider the structure of the ENFE software. The software must take into account the existence of multiple ABSI's and of multiple hosts connected through them. There are two effects which must be considered:

1) the effect on the Unix operating system, and

2) the effect on the CPM and on the service modules.

The existence of multiple ABSI's presents no problem in the Unix operating system. The structure of the I/O system permits multiple devices of the same kind to be driven by a single reentrant device driver without confusion. It will permit the CPM to determine on which ABSI a given message arrived. Therefore the CPM will be able to determine from which host the message was received. It will also permit the CPM to direct a message to the proper ABSI, hence to the proper host.

The existence of multiple ABSI's, and of multiple hosts connected through them, may have some effect on the structure of the CPM and of the service modules. This effect can be very small if a minor change is made to the HFP. We first discuss the problem.

The current version of the CPM and of the service modules uses logical channel numbers to identify the separate logical communications channels between the CPM and the service modules. These are the same logical channel numbers that are used in communication between the CPM in the host and the CPM in the ENFE. If there are multiple hosts, and if no change is made in the current policy for assigning logical channel numbers, confusion will result in the ENFE when two or more hosts use the same logical channel number.

One solution is to add a host field to the state tables in the CPM and in the service modules. A host field would also have to be added to all communications between the CPM and the service modules. This host field would be used to distinguish logical channels with the same logical channel numbers but from different hosts. This solution would require substantial alteration of the CPM and the service modules.

Another solution is to allot to each of the hosts a disjoint subset of the logical channel name space. This would require that the CPM check the logical channel number as it receives each message. This would ensure that the logical channel number in the message matches the host from which it was received. The CPM would also have to use the logical channel number to direct each outgoing message to the proper ABSI. This solution requires relatively minor changes to the CPM. It requires no change at all to the service modules.

We last of all consider the structure of the protocols that the ENFE uses. Two protocols could affect, or could be affected by, a multi-host configuration: the HFP and the ARPANET Host-Host Protocol.

The only change which might be required in the HFP is in the way that logical channel numbers are assigned. Currently the (single) host may attempt to establish a logical channel using any 28-bit number whose high-order bit is zero. In a multi-host configuration, additional high-order bits could be used for identifying which logical channels belong to which hosts. This is not a significant change to the structure of the HFP or to its implementation.
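As an illustration only, the host identity could be carried in a few bits just below the high-order bit of the channel number. The 3-bit host field and its position here are our own assumption, not part of the HFP specification:

```python
# Hypothetical split of the 28-bit logical channel name space by host.
# The text requires only that high-order bits identify the host; the
# choice of a 3-bit host field in bits 24-26 (leaving bit 27 as the
# existing high-order bit) is an illustrative assumption.

HOST_SHIFT = 24
HOST_MASK = 0x7        # up to 8 hosts

def make_channel(host, number):
    """Form a channel number carrying the host id in its high-order bits."""
    assert 0 <= host <= HOST_MASK
    assert 0 <= number < (1 << HOST_SHIFT)
    return (host << HOST_SHIFT) | number

def channel_host(channel):
    """Recover the host id, as the CPM would when choosing an ABSI."""
    return (channel >> HOST_SHIFT) & HOST_MASK

ch = make_channel(2, 5)
print(hex(ch), channel_host(ch))   # 0x2000005 2
```

With such a scheme the CPM needs only a mask and shift to route each outgoing message to the proper ABSI, which is why this solution leaves the service modules untouched.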

The ARPANET Host-Host Protocol assumes a one-to-one correspondence between network ports, hosts, and NCP's. This means that confusion could result if a multi-host configuration used a single network interface as shown in Figure 2. It might be necessary to have a network interface for each host as shown in Figure 7. This situation should be avoided in the design of future networks such as AUTODIN II.


Multi-Host NFE Configuration (Multiple Network Interfaces)

[Block diagram not reproduced: each host connects to the NFE through its own host interface, and the NFE connects to the network through a separate network interface for each host.]

where:

HI = Interface to the host
NI = Interface to the network
TI = Interfaces to terminals
PT = Port
T  = Terminal

Figure 7.


OFFLOADING STRATEGIES

In CAC Document No. 230, "Offloading ARPANET Protocols to a Front End," we presented a broad survey of offloading strategies for the most important ARPA Network protocols. This survey should be useful as a basis both for the future design of expanded front ends and for quantitative studies to determine optimal offloading strategies.

Offloading the Telnet Protocol. We discussed in detail the trade-offs involved in different degrees of offloading and provided a brief analysis of the potential for offloading each of the Telnet options. We also discussed which process-to-service protocols were needed to implement the various schemes. The symmetry of this protocol allowed us to develop a new process-to-service protocol (the network virtual terminal PSP) designed to efficiently implement an intermediate level of Telnet offloading. (The maximum offloading strategy adopted for the ENFE was implemented with two separate PSP's - one for the user side and one for the server side.)

Offloading the File Transfer Protocol. We identified two major aspects of the File Transfer Protocol (FTP) that were candidates for offloading: the data transfer process and the bookkeeping and marker handling required for restarting a transfer that has aborted. Since FTP makes use of the Telnet protocol, Telnet functions can also be offloaded. Considering only the user FTP, we found that there are eight possible offloading schemes that differ from one another in major ways.

With regard to the offloading of Server FTP, we confined ourselves to examining the individual FTP Commands, classifying them as to whether they must be handled in the host or can be handled in the front end.

To make specific the schemes for offloading FTP, we designed three new process-to-service protocols: a User FTP PSP, a Server FTP PSP, and a File Access PSP. The latter provides a general facility for transferring files between host and front end.

Offloading Other ARPANET Protocols. Although our major contractual obligation was to study the offloading of FTP (and Telnet as a natural conjunct of this), we also looked briefly at three other protocols: Remote Job Entry (RJE), Teleconferencing, and Network Graphics.


ALTERNATIVE ARCHITECTURES

Alternative Architecture Research Plan

We developed a research plan (CAC Document No. 232) leading to the specification of a network front end (NFE) designed to meet WWMCCS needs through the 1980's. In CAC Document 232, we briefly reviewed the current state of the art as it affects NFE development, identified some promising directions for research, and presented a detailed plan for conducting the research.

State of the Art. We concluded that the NFE must be modular, efficient, and multi-level secure if it is to meet WWMCCS needs. Three groups of systems were considered relevant to NFE development:

1. existing network access systems,

2. secure systems, and

3. high-bandwidth communications systems.

Examination of these three groups revealed that there does not exist, nor will there exist in the near future, a system which will meet WWMCCS needs.

Research Directions. We presented some promising ideas for research which might lead to solutions to NFE problems. Two alternative hardware architectures were presented for solving the bandwidth problem. An alternative software architecture, the Hub System, was presented which may solve the problems of producing a modular, efficient, and multi-level secure system.

Research Plan. We presented a research plan with four phases:

1. the preparation phase,


2. the research phase,

3. the prototype phase, and

4. the specification phase.

The preparation phase will produce a set of technical constraints for the design of the WWMCCS NFE and will select a set of design features to be studied in the research phase. The selection will be performed through mathematical modeling using the technical constraints. The research phase will design and construct a Research NFE that will be used for evaluating architectural concepts through an iterative process of implementation, testing, and measurement. The prototype phase will design and construct a Prototype NFE which will serve as the basis for specifying the WWMCCS NFE. The specification phase will develop the WWMCCS NFE specification.


UNCLASSIFIED

SECURITY CLASSIFICATION OF THIS PAGE (When Data Entered)

REPORT DOCUMENTATION PAGE

READ INSTRUCTIONS BEFORE COMPLETING FORM

REPORT NUMBER

CAC Document Number 240
CCTC-WAD Document Number 7518

2. GOVT ACCESSION NO

3. RECIPIENT'S CATALOG NUMBER

4. TITLE (and Subtitle)

Networking Research in Front Ending and Intelligent Terminals - ENFE Final Report

5. TYPE OF REPORT & PERIOD COVERED

Research

6. PERFORMING ORG. REPORT NUMBER CAC #240

7. AUTHOR(s)

8. CONTRACT OR GRANT NUMBER(s)

DCA100-76-C-0088

PERFORMING ORGANIZATION NAME AND ADDRESS

Center for Advanced Computation University of Illinois at Urbana-Champaign Urbana, Illinois 61801

10. PROGRAM ELEMENT, PROJECT, TASK AREA & WORK UNIT NUMBERS

11. CONTROLLING OFFICE NAME AND ADDRESS

Command and Control Technical Center

WWMCCS ADP Directorate, 11440 Isaac Newton Sq., N

Reston, Virginia 22090

12. REPORT DATE

September 30, 1977

13. NUMBER OF PAGES 44

14. MONITORING AGENCY NAME & ADDRESS (if different from Controlling Office)

15. SECURITY CLASS. (of this report)

UNCLASSIFIED

15a. DECLASSIFICATION/DOWNGRADING SCHEDULE

16. DISTRIBUTION STATEMENT (of this Report)

Copies may be requested from the address given in (11) above.

17. DISTRIBUTION STATEMENT (of the abstract entered in Block 20, if different from Report)

No restriction on distribution.

18. SUPPLEMENTARY NOTES

None.

19. KEY WORDS (Continue on reverse side if necessary and identify by block number)

Network front end Network protocol

20. ABSTRACT (Continue on reverse side if necessary and identify by block number)

The CAC has been engaged in an investigation of the benefits to be gained by employing a network front end. A DEC PDP-11/70 was used as a front end for connecting a Honeywell 6000 host to the ARPANET. All work (except the multi-host study) performed under the contract has already been thoroughly documented. Thus, this final report abstracts those reports produced.
