AN ENGINEERING VIEW OF THE LRL OCTOPUS COMPUTER NETWORK


by David L. Pehrson
November 17, 1970
LRL UCID # 15754

Editor's Introduction

These remarks are added mainly as an explanation to those who weren't directly involved. In a few respects, this is a sad story that, while uncomplicated, is difficult to relate.

Dave Pehrson was interviewed, along with others, during the early 1990s. The interview went so well that he agreed to a further interview session.

The interview process requires a classification review before an interview can be published. During this review, sensitive material is removed to produce an unclassified transcript for publication. The offending tape is then destroyed or locked in an approved vault. For reasons that are entirely beyond my understanding, Dave's interview tape was one of five tapes held back by the classification officer. Of these tapes, two interviews were lost when our then typist and editor, Catherine Williams, had to take medical retirement from the Laboratory. Dave Pehrson's interview was one of those lost tapes.

By itself, the loss of a tape is not a mortally wounding event; we can redo the interview. In Dave's case, however, a new interview is not possible. Even as preparations for the follow-on interview were being made, Dave suffered a massive stroke which left him paralyzed and unable to speak. The tragic fate that has befallen Dave Pehrson does not, in any way, diminish his seminal contributions. It turns out that his engineering perceptions of the Octopus network are unique and deserve, all the more, to be made public.

I am indebted to Dave's wife for her help in republishing this early report; it is the only integrated discussion of the early roots of Octopus. Dave's report covers the early Octopus. Octopus was later modernized to use the newest and best networking designs, overall reliability was improved enormously, and the speeds of all the channels were raised by over a factor of ten. In addition to collecting a staff of superb engineers and technicians, Dave Pehrson led these developments.

This publication was read to Dave. I like to believe that he has given us his approval.


ABSTRACT

This document describes the evolution of Lawrence Radiation Laboratory's time-sharing system, the OCTOPUS Computer Network. To acquaint the reader more fully with time sharing at LRL, the present level of implementation and some considerations for future development are discussed.

A complete description of the background is provided, with emphasis on the features peculiar to LRL computing needs. The design of the File Transport Network, the Teletype Network, and other remote I/O is discussed and implementation of these designs is described. New efforts are considered and an overview of the future computing system is presented in the final section.

Daily use of the OCTOPUS system after initial development has produced stability requirements. The resultant multi-network approach provides for continuity, flexibility, and longevity unachievable with any single-computer system.

BACKGROUND

Goals


The LRL OCTOPUS Computer Network was conceived and initiated in 1964. The basic goal for OCTOPUS was increasing the overall effectiveness and efficiency of the computing facilities provided by LRL's several large computers (currently CDC 6000- and 7000-series systems).

Increased effectiveness was to be obtained by operating each large (worker) computer in a time-sharing mode and linking all of them to a central system providing two basic features. The first feature was on-line mass storage via a centralized hierarchy of storage devices that could be shared by the worker computer systems (a Centralized or Shared Data Base). The second feature was provision of a focal point for various forms of remote and local Input/Output (I/O) terminals. This provided universal access to all worker computers from all terminals and permitted the multi-computer complex to be viewed as a single computing resource.

A Shared Data Base

Interactive multiprogramming of the large CDC machines creates a requirement for on-line storage in vast quantities. Batch or fast-batch operation can rely heavily on magnetic tape since user interaction is essentially "over the counter." However, tape is very awkward when interactive remote terminals are introduced, particularly in a large center with 100 or more active users. Delays caused by tape use (requesting a tape from a vault, having it hung on a drive, and reading it to create a file) typically last for minutes and can be much longer under heavy load. This is not interactive operation. Interaction requires on-line availability of mass storage of tape capacity with access times comparable to human responses: a few seconds. In a multi-computer network system, it is reasonable to centralize and share this storage resource.

The Shared Data Base is not new as a concept, but examples of its implementation are still uncommon. Currently, LRL's OCTOPUS Data Base has a hierarchy of a Librascope fixed-head disk (billion-bit, rapid access, high transfer rate) and an IBM Data Cell, both of which support the large IBM Photostore (trillion-bit, photographic chip store), the major mass storage medium. There are two factors which make it advantageous for these storage devices to be shared among and accessed by the several large CDC systems.

The first and probably principal factor is economics. In computer subsystems, economy comes through scale (providing technology is not pushed excessively). As an example, file storage in the Photostore costs approximately 10^-4 cents/bit, including amortization of capital investment. Presently available large movable-head disk files have storage costs of at least 10^-2 cents/bit. Moving upward to fixed-head disk/drum files increases storage costs another order of magnitude to 10^-1 cents/bit.

Therefore, there is strong incentive to use (or at least share) a mass-storage device such as the Photostore. Storage devices of this type cost on the order of $10,000,000 because of their complexity and use of multiple technologies. For this reason, one can hardly justify such a device on each large worker computer system, given that the trillion-bit capacity is not imperative for each system today. The only reasonable alternative is sharing the device among the several large systems.

Economy through scale in storage devices is achieved by using different technologies. Rotating magnetic storage (disk/drum) asymptotically approaches a lower cost limit because of mechanical and physical limitations. The very large storage devices use alternate technologies such as digital recording on photographic film (LRL's Photostore, for example), laser recording on metallized mylar, and possibly holographic techniques in the future. These systems tend to be complex and require elaborate pneumatic, hydraulic, and mechanical subsystems for handling the storage media. They are therefore cost effective only in the very large sizes.

Present mass storage devices also have significantly longer access times than does rotating magnetic storage. Random access times are typically one second, compared with 10 to 100 msec for disk/drum systems. A storage hierarchy to balance and smooth I/O loads is therefore required to support a large device such as the Photostore. The hierarchy is also required to provide an indexing mechanism for the mass storage device. About 1 bit per 1000 bits of mass storage (depending on the average file size) is required for the indexing function, so approximately one billion bits are required for the file index functions for the trillion-bit Photostore.
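The index sizing above follows directly from the stated ratio. A quick sketch (the 1-bit-per-1000 figure is only the report's rule of thumb and varies with average file size):

```python
# Quick check of the file-index sizing rule for the Photostore:
# ~1 index bit per 1000 stored bits (an order-of-magnitude estimate
# only; the true ratio depends on average file size).
PHOTOSTORE_BITS = 1.02e12   # usable Photostore capacity, from Table 1
INDEX_RATIO = 1 / 1000      # index bits per stored bit (rule of thumb)

index_bits = PHOTOSTORE_BITS * INDEX_RATIO
print(f"index storage required: {index_bits:.2e} bits")  # ~1e9 bits
```

This is why roughly a billion bits of faster storage must sit in the hierarchy just to index the trillion-bit store.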

The second factor favoring a Shared Data Base is the flexibility and operational advantages that can be achieved. For example, files can be created on one worker system, transported to the Photostore, and accessed subsequently from another worker system. This can lead to file sharing by a group of users and elimination of the need for unique copies of all public files on each worker system. The store and forward computer controlling the storage hierarchy also allows computer-to-computer file transactions.

Centralized Data Bases are not without disadvantages. Specifically, reliability requirements are greatly increased since all worker systems now depend on the shared storage hierarchy. A major effort is also required to establish a network to implement the centralized storage.

Common Remote I/O

Effective time sharing of the large worker systems requires remote terminals for user/machine interaction. The OCTOPUS Network currently relies almost solely on Teletypes, with approximately 350 units connected from throughout the Laboratory. There is a strong need to augment these Teletypes with other terminals offering higher performance (speed), greater flexibility (easier interaction), and more capability (i.e., more than just a keyboard and printer).

All Teletypes are connected to the network via dedicated minicomputer-based concentrators which have access to all worker systems. Any terminal can therefore access any worker machine. This enables individual users to view the complex of worker computers (and Centralized Data Base) in the computer center as a single computing resource of several relatively independent systems. The results are obvious flexibility and a close approximation to a computing utility. Any individual system is occasionally unavailable because of preventive maintenance or failure. However, the user still has access to all other available systems with the implementation of common remote I/O.

An equally strong argument for common remote I/O is its adaptability to a changing and evolving computer center. The terminal system becomes relatively independent of the acquisition of new worker computer systems. Connecting the new system to the network provides access to the entire existing terminal I/O capability and simultaneously makes the new system available to all users. Also, acquisition of a new worker system does not require simultaneous acquisition of remote terminals for support.

Evolutionary Character of Network

A necessity for efficient computer utilization is provision of stability during inevitable change. The set of major systems comprising the worker computers at LRL is changing at the rate of about one major system per year.

Adding a system requires only connection to the OCTOPUS Network for integration with existing I/O and storage facilities. Users can then make a graceful transition to the new system by using it as an augmentation of existing worker computer resources. A step function from one stand-alone system to a new stand-alone system is not required.

Connection of a new type system (i.e., Serial No. 1) to the OCTOPUS Network does not make it immediately available to all users since, in practice, significant systems software development is necessary. This is only a temporarily introduced restraint and does not limit the advantage of full access by all users as the new system's full capabilities become available.

OCTOPUS Network connection reduces software efforts required for the new system; capabilities provided by the centralized storage facility and remote I/O systems need not be duplicated. Hardware and software interfaces only are required.

Classical OCTOPUS Network

Figure 1 shows the classical OCTOPUS Network as initially presented and implemented. A dual Digital Equipment Corporation (DEC) PDP-6 CPU system was acquired to control all network transactions, the second CPU being for backup. A large core memory of 256K words (K = 1024) formed the heart of the system. The processors, each worker computer, and the hierarchy of storage devices were all connected to it via direct memory data channels.

The above subsystems and links were termed the File Transport Network and were used to implement the centralized storage capability. The core memory and disk provided staging and queuing for the Photostore (or mass store). File indexes for the mass store were kept on the IBM Data Cell, which also provided medium-life file storage.

All local and remote I/O equipment was also directly attached to this control facility via the I/O bus of the PDP-6 CPU. Most of the data for these devices were handled on a character-by-character basis, causing a heavy interrupt load on the PDP-6. Slower response to real-time tasks resulted.

Two major problems became apparent about two years ago, both related to connecting all parts of the network to one system. The first and most apparent was the vulnerability of the complex to difficulties in the central system. Loss of a file transport channel between a worker computer and the central system inhibited access to the central storage devices. In addition, it inhibited access from all user terminals to that worker machine. Even worse, difficulties with the central system (e.g., loss of PDP-6 core memory) disrupted communication with all worker systems.

The second problem was equally important but less obvious: the need to change had to be reconciled with the need for reliability. Reliability is generally achieved by leaving things alone; one gets the chance to get the bugs out of the system at both the hardware and the software levels. This system, unfortunately, is not constant, and it is inevitable that it must change, at least through the addition of new large worker systems. The CDC 7600s have been delivered; a follow-on system will be installed by mid-1971. We are adding to the I/O capabilities steadily, and the system will continue to evolve. These facts have both software and hardware implications; when everything is tied into such a large network, any addition or change affects the rest of the system.

It is inevitable that we must change, but it is certainly necessary to have some reasonable level of reliability. We are faced with the dilemma that change is not conducive to reliability; in fact, change normally counteracts reliability objectives. We need to reconcile these changes with a reasonable level of reliability, at least to the terminals and to the users who depend on the operational capabilities of the system.

We have learned that we cannot rely on the central PDP-6 system to do everything for us. There was a pressing need to separate or partition major system functions to minimize loss of capability during failure.

Separating functions physically and logically has the additional benefit of enabling graceful addition of new functions. Hardware and software development efforts can proceed relatively independently of existing operational subsystems. Integration of the new capability is eased since impact on (or required change to) existing capabilities is minimized.

Present Network Philosophy

The present OCTOPUS Network is, in fact, the superposition of two nearly independent networks: (1) a File Transport Network consisting of the PDP-6 CPU and core memory, the storage hierarchy, and high-performance links to each of the large CDC worker machines; and (2) the Teletype Network of three line concentrators based on PDP-8 class machines and separate direct connections to the CDC worker machines. This configuration is shown in Figure 2.

It should be noted that the two networks are not completely independent at the present time, since some shunting (forwarding) of messages through the PDP-6 is required for access to all machines from all Teletypes. This is being augmented (late 1970) by establishing direct links between the PDP-8 Teletype concentrators for complete independence of the two networks.

Although the networks can be considered logically independent, they are still interconnected, providing backup paths for alternate routing and introducing some redundancy. For example, loss of a direct Teletype Network link from a PDP-8 concentrator to a CDC machine is no longer catastrophic since these Teletype messages (a subset of the short "packet class" of messages) can be routed via the PDP-6 over the file transport link.

Additional networks will be added (in a superimposed fashion) for new OCTOPUS functions. In particular, a third network will be incorporated by mid-1971 for Remote Job Entry terminals (RJET). This network will be, basically, several remote card reader/line printer terminals supported by a dedicated concentrator computer system with direct links to each worker machine and the File Transport Network via the PDP-6 (for access to the Centralized Data Base).

The Centralized Data Base has a dual role. Within the File Transport Network, the PDP-6 and supporting equipment function as a support system for the worker computers. However, they can also appear as a host (or worker) system to either the Teletype or RJET Network for direct access to the storage facility.

The implementation of the OCTOPUS Network as multiple superimposed networks has obvious advantages for reliability and minimization of vulnerability by partitioning functions and providing alternate routing capability. Removing any part of a network (or even a whole network) for preventive maintenance, repair, or modification makes unavailable only a subset of the network system capabilities.

Network partitioning and independence also reduce the impact of necessary change and evolution. Addition of the RJET Network, for example, can proceed almost independently of existing operation. Isolated connection of the new network to a single worker machine allows necessary software and system integration to take place without affecting other worker systems. Connection to remaining machines becomes an essentially plug-in operation.

Separate functional networks are also the only practical implementation of a very large network system from a staffing viewpoint. Many hardware and software personnel are required for the total effort which is spread among the various functional subsystems. The amount of communication required among these individuals increases in a highly nonlinear fashion (more like exponentially) as network complexity increases. Partitioning the network allows separation into manageable subsystems. Individuals need be expert only in specialized areas to make a meaningful contribution. The requirement for intercommunication among persons responsible for the separate networks is reduced to a workable level.

Change in Role of Centralized System

The shift to functionally separate networks has been accompanied by a modification in the role of the centralized PDP-6 system. During early development of the classical OCTOPUS Network, all remote I/O and the centralized storage hierarchy for file transport were connected to the PDP-6. It was suggested that, in addition to providing local control for these, the PDP-6 have a major central role for the network including the major worker computers.

One of the functions suggested was scheduling jobs for the various worker systems attached to the network. A user logging in at a Teletype would have been automatically steered to the system which was least heavily loaded. This was completely unreasonable for at least two reasons.

The first and major reason was that the large computers attached to the OCTOPUS System were not, in general, identical. (In fact, this is one of the strengths of the network, allowing flexibility for computer power growth consistent with technology and funding considerations.) Therefore, each worker machine did not offer identical resources for complete problem interchangeability. Many problems are highly optimized to a specific machine type and could execute (after possible recompilation) on another type only very inefficiently.

A second difficulty with centralized scheduling (job allocation) was measuring the load on a specific worker. A single multi-programmed (time-shared) large machine is a dynamic system in which load is a function of the number of jobs in execution, the job mix, and the combined demand on all resources provided by that particular machine. The dynamism is further influenced by the priority (and charge rate) associated with each job, which is under user control and can be varied during execution.

Therefore, logical connection of a remote terminal to a worker system is under user control and is specified by him when he logs in to the system.

Today, the PDP-6 based File Transport Network and other networks (e.g., Teletype Network) retain only that local control necessary for their functional responsibilities. They are passive resources to each large machine in roughly the same sense that local disk storage is a passive capability supported by local system software.

It was also suggested that the PDP-6 might do all the time accounting for the network and some "computing" in its spare time. However, centralized accounting is of marginal value, offers no real return for the effort required, and is also vulnerable to central system problems. Using the PDP-6 for problem solving is not realistic, considering the relative powers of the PDP-6 and the large CDC machines. The CDC 6600 is at least 10 times more powerful (perhaps more like 20), and the 7600 is larger by another factor of four or five. With these relationships, software development efforts for general problem solving on the PDP-6 are not warranted.

The PDP-6 based File Transport Network, and more recently the Teletype and RJET Networks, therefore, assume a different role from that originally envisioned. These networks now fill a support role for the major systems by augmenting their capability. We have thus gained stability and independence for these functions so the computer nodes (worker machines) can change or be replaced upward with some degree of continuity.

DESIGN OF THE FILE TRANSPORT NETWORK

The Storage Hierarchy

The ideal mass store device, with essentially infinite capacity and zero access time, does not exist. We use a secondary hierarchy (Figure 3) consisting of a fixed-head disk and an IBM Data Cell supporting the IBM Photostore. Characteristics are listed in Table 1.

The 256K words of core memory are in effect part of the hierarchy but are placed above the disk in performance and cost. Core also serves as the intermediate buffer when files are moved up or down the hierarchy (e.g., between disk and Data Cell). Direct transfers between secondary storage devices are not possible because of their synchronous nature (mechanical rotation) and mismatch of data rates.

Table 1. Characteristics of the OCTOPUS storage hierarchy.


  Storage device                     Librascope           IBM 2321          IBM 1360
                                     fixed-head disk      Data Cell         Photostore
  ---------------------------------  -------------------  ----------------  ---------------------
  Organization                       Head per track       Replaceable       Pneumatic-access
                                                          drum surface      photographic chips
  Usable capacity (bits)             0.806 x 10^9         3.20 x 10^9       1.02 x 10^12
  Average random-access time (msec)  35                   400               3000, overlapped
  Transfer rate^a (bps)              10 or 20 x 10^6      0.437 x 10^6      2.5 x 10^6 read
                                                                            0.55 x 10^6 write
  Cost/bit (cents)                   0.1                  0.005             0.0001

^a The transfer rates shown are burst mode read or write rates; average achievable rates across several consecutive sectors or records may be less by up to 40%.
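Because the secondary devices are synchronous and rate-mismatched, every file movement between them must stage through the 256K-word PDP-6 core. The toy calculation below (an illustrative sketch using Table 1's burst rates; the function name is invented, and real transfers involve latency and scheduling not modeled here) shows how a staged move is dominated by the slower device:

```python
# Toy model of staging a file through the shared core buffer.
# Secondary devices cannot transfer directly to one another, so each
# core-buffer load is read in from the source, then written out to
# the destination (no overlap assumed, for simplicity).

def stage_transfer(src_bps: float, dst_bps: float, file_bits: float,
                   core_bits: float = 256 * 1024 * 36) -> float:
    """Seconds to move file_bits between two devices via the
    256K-word (36-bit) core, one core-buffer load at a time."""
    total = 0.0
    remaining = file_bits
    while remaining > 0:
        chunk = min(remaining, core_bits)
        total += chunk / src_bps   # source device -> core
        total += chunk / dst_bps   # core -> destination device
        remaining -= chunk
    return total

t = stage_transfer(10e6, 0.437e6, file_bits=5e6)  # disk -> core -> Data Cell
print(f"5-Mbit file, disk to Data Cell via core: {t:.1f} s")
```

With these numbers the disk side contributes 0.5 s and the Data Cell side over 11 s, which is why the hierarchy, not the disk, sets the pace of downward file movement.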

All devices in the hierarchy have been connected to the File Transport Network for at least three years. Appreciable cost/performance improvement over these storage devices is offered today only by the disk-pack systems such as the recently announced IBM 3330. Cost of this file is comparable to the Data Cell, but access times are reduced to around 40 msec.

The Photostore and its Constraints

The large capacity of the Photostore is very useful, but there are several severe operating constraints. The Photostore uses a photographic process in which chips of silver-emulsion film are exposed by an electron beam and developed by an automatic developing system. When developed, the chip is read-only storage since the developed emulsion cannot be re-exposed. When a file is changed, it is read back onto magnetic storage, modified, and completely rewritten on a new film chip.

Approximately 5 million data bits plus redundancy can be recorded on a single chip (an area about 1 x 2-1/4 in.). Thirty-two chips are stored in a plastic container termed a cell. Five million bits are on the order of 10^5 CDC 6600 or 7600 words, but typical file lengths are only about 10^4 words. Since one cannot expose part of a chip, develop it, and then come back and write some more on it, queuing of files before writing is necessary for efficient utilization of the system.

This is one of the basic functions of the disk in the OCTOPUS Network. By sharing the storage hierarchy among the worker computer systems, the time one must wait to accumulate one chip of data is approximately inversely proportional to the number of machines connected to it. The average delay required to accumulate a reasonable amount of information is thus greatly reduced from that required by one worker computer recording on one mass store. (It is, of course, not possible to put a device of this kind on each worker system.) A maximum wait time is established so that a partial chip will be written if a full chip of data is not accumulated within a reasonable time.
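The inverse-proportionality claim above can be sketched with a simple fill-rate model. The per-machine file-arrival rate below is purely illustrative (the report gives no such figure), the file size is the typical 10^4 60-bit words mentioned earlier, and the maximum-wait rule is modeled as a simple cap:

```python
# Illustrative model: expected minutes to accumulate one 5-Mbit film
# chip of queued files, assuming each worker produces files at the
# same (invented) average rate. More workers sharing the hierarchy
# means a proportionally shorter wait, up to the maximum-wait cap.

def mean_wait_for_chip(n_workers: int, files_per_min_per_worker: float,
                       file_bits: float = 1e4 * 60,   # ~10^4 60-bit words
                       chip_bits: float = 5e6,
                       max_wait_min: float = 30.0) -> float:
    """Minutes to fill one chip, capped by the partial-chip rule."""
    fill_rate = n_workers * files_per_min_per_worker * file_bits  # bits/min
    return min(chip_bits / fill_rate, max_wait_min)

for n in (1, 2, 4):
    print(f"{n} worker(s): {mean_wait_for_chip(n, 1.0):.2f} min to fill a chip")
```

Doubling the number of connected workers halves the accumulation delay, which is the essence of the sharing argument.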

After a chip is recorded, it proceeds through the developing process which takes about 3-1/2 minutes. It is washed, dried, and put back into the plastic cell. The data are then read back to verify that a "good" chip has been written.

Film, unfortunately, cannot be fully tested without being exposed, and virtually all film emulsions have minute defects. At the Photostore recording density of about 2 million bits per square inch, each defect causes several bits to be dropped out. To provide reasonable data integrity, very sophisticated polynomial error checking and correcting mechanisms are used. They allow correction of burst errors up to 30 bits in length.

After being developed, the recorded chip is read with tightened chip error limits for record verification. The more stringent limits allow tolerance for some long-term degradation.

Let's consider a sample operation to see how the various parts fit together: Assume that a user has logged into the 6600-L machine and is compiling a program. When the program is compiled, a binary object file is generated. Rather than punching out a deck of cards or using magnetic tape, the user wishes to file it on the Photostore. (The files in the Photostore are stored for indefinite periods and are called long life-time files. Medium life-time files in the Data Cell have a life of about four weeks; if not accessed in that time, they are destroyed.)

He issues a call to the operating system in the 6600 to send the file to the PDP-6 system. The file is moved across the data channel from the L-machine to the PDP-6 core, which is the center buffer through which everything passes. The file is then written on the disk for the queuing process.

After some delay, the file (possibly with others) is moved from the disk back to core and then to the Photostore for the actual recording process which, unfortunately, is not very rapid. The effective recording rate is only about 250 kilobits per second (Kbps), one of the slowest data paths in the system. Effective read-back rates are somewhat higher; they are about 1.50 megabits per second (Mbps). (Burst mode read and write rates are 2.50 and 0.55 Mbps, respectively.) When the chip is exposed, it proceeds through the developing process, after which the read check is automatically made to verify the chip.
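A back-of-envelope check shows how much per-chip overhead separates Table 1's 0.55-Mbps burst write rate from the 250-Kbps effective recording rate. The split between recording time and overhead below is inferred for illustration, not measured in the report:

```python
# Back-of-envelope: the gap between burst and effective write rates
# implies roughly 11 s of non-recording overhead (positioning, setup,
# media handling) per chip. Figures from Table 1; the attribution of
# the gap to "overhead" is an illustrative assumption.
CHIP_BITS = 5e6            # one film chip of data
BURST_WRITE_BPS = 0.55e6   # burst write rate (Table 1)
EFFECTIVE_BPS = 0.25e6     # observed effective recording rate

write_time = CHIP_BITS / BURST_WRITE_BPS   # pure recording time, seconds
total_time = CHIP_BITS / EFFECTIVE_BPS     # observed time per chip, seconds
overhead = total_time - write_time
print(f"recording {write_time:.1f} s + overhead {overhead:.1f} s per chip")
```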

The file can be recalled from the storage hierarchy before recording on the Photostore is completed since it temporarily resides at a higher level in the storage hierarchy. The process of moving it down the hierarchy, from core to disk to Photostore, is entirely under PDP-6 system control [1] (the Elephant software system) and is invisible to the user. As far as the user is concerned, the file has entered the filing system when the transfer to the PDP-6 is complete.

OCTOPUS File Transport Channels

The set of worker computers for OCTOPUS originally comprised the UNIVAC LARC system, the IBM 7094 and 7030 (STRETCH) systems, and the CDC 3600 system. This set has evolved to the present two CDC 6600 and two CDC 7600 systems. The connection of such dissimilar and changing worker computers to the centralized storage hierarchy has been a basic difficulty in implementing the File Transport Network.

To accommodate this changing set, a fundamental decision was made to define and implement a standard data channel to connect these systems to the central system. This channel is termed the LRL OCTOPUS Data Channel. It is a first cousin to the CDC 3000-series channel and has remained basically unchanged except for expanding the data path to 36 bits for higher bandwidth.

All transfers are hardware-initiated by the worker machine, which interrupts the PDP-6 to start a read or write operation on the PDP-6. Prior to initiation of a transfer, status messages and channel status bits are transferred bidirectionally so that logical control of the network remains the responsibility of the PDP-6. Word-by-word transfers between the worker machine and the PDP-6 use Data/Reply (ready/resume) conventions.

To read data from the PDP-6:

  1. A Data signal is sent over the channel to the PDP-6 to request the next word.
  2. A Reply signal is sent back with a word of information to say "here it is."
  3. The process repeats until the operation is terminated (normally by the PDP-6, the sending end).


To write data to the PDP-6:
  1. A Data signal is sent over the channel to the PDP-6 along with a word of information to say "take a word."
  2. A Reply is returned to say "the word has been taken-send the next one."
  3. The sequence repeats until completion (normally terminated by the PPU, the sending end).
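The two handshakes above can be summarized in a minimal software simulation. This is illustrative only: the real exchange is performed in channel hardware, and the function names are invented:

```python
# Minimal simulation of the Data/Reply handshakes on the LRL OCTOPUS
# Data Channel. Illustrative sketch: the real exchange happens in
# channel hardware, and these names do not come from the report.

def read_from_pdp6(pdp6_words):
    """Worker reads: each Data signal requests the next word; the
    PDP-6 answers with a Reply carrying it. The PDP-6 (the sending
    end) terminates the operation when it runs out of words."""
    received = []
    for word in pdp6_words:       # each iteration: Data out, Reply back
        received.append(word)
    return received               # terminated by the sender (PDP-6)

def write_to_pdp6(worker_words, word_count):
    """Worker writes: each Data signal carries a word; the PDP-6
    returns a Reply ("word taken, send the next one"). The PPU
    terminates when its accumulator/word count reaches zero."""
    stored = []
    for word in worker_words:
        if word_count == 0:       # PPU word count decremented to zero
            break
        stored.append(word)       # PDP-6 takes the word, sends Reply
        word_count -= 1
    return stored

print(read_from_pdp6([1, 2, 3]))    # [1, 2, 3]
print(write_to_pdp6([4, 5, 6], 2))  # [4, 5]
```

Note the asymmetry in termination: the sending end always decides when the transfer is done, which matches the word-count behavior of the PPUs described later.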


Each type of worker machine has its own set of channel specifications and conventions. The LRL OCTOPUS Data Channel is connected to the particular worker machine channel by an adapter as shown in Figure 4 and Figure 5. This LRL-designed unit performs signal-level conversion, timing synchronization, word-length adjustment, and data buffering.

The 6600 and 7600 perform all I/O via Peripheral Processing Units (PPUs), which are 12-bit, 4K-word, programmed I/O processors. The 6600 has 10 PPUs, any of which, on a job basis, may be connected via channel switch to one of the 12 data channels on the system. The file transport connection, as implemented, requires the dedication of one 6000-series channel, but not of a PPU (Figure 4).

Initial 6600 operation used a 12-bit (data path) LRL channel and a single PPU. Files were transferred from PDP-6 core to the PPU, to the main frame, and then out to 6600 disk. This required pauses in a transfer (since files are long compared with the buffering capability of a 4K-word PPU) and required temporary assignment of part of the 6600 main core for staging before going to disk. Transfers to disk used two PPUs, alternating between core and the 6600 disk channel, for streaming (since the disk is mechanically synchronous and cannot pause).

Present operation has been upgraded by a 36-bit wide LRL Data Channel to allow data rates comparable to that of the 6600 disk. A pair of PPUs alternate between the adapter and the Disk Channel to stream data from the PDP-6 core directly to the 6600 disk, bypassing the main frame core. This reduces overhead on the 6600 system but requires tighter communication between the two systems in scheduling the transfer. (A 6600 disk must be allocated and the heads positioned before initiating the transfer.)
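The alternating-PPU scheme can be sketched as a double-buffering loop. This is a toy model with invented names; the real PPUs run channel programs against a mechanically synchronous disk, which this sketch does not attempt to time:

```python
# Toy model of the two-PPU streaming scheme: while one PPU buffer
# drains to the 6600 disk channel, the other fills from the LRL
# channel adapter; the buffers then swap roles. Names are invented
# for illustration; the real scheme is implemented in PPU programs.

def stream(blocks, ppu_buffer_words=4096):
    """Alternate two 4K-word PPU buffers between the adapter (fill)
    and the disk channel (drain); returns the words written to disk."""
    filling, draining = [], []            # the two PPU buffers
    written = []
    pending = [b[:ppu_buffer_words] for b in blocks]
    while pending or filling or draining:
        if pending:
            filling = pending.pop(0)      # adapter -> filling PPU
        if draining:
            written.extend(draining)      # draining PPU -> disk channel
            draining = []
        filling, draining = draining, filling  # the PPUs swap channels
    return written

print(stream([[1, 2], [3, 4], [5, 6]]))
```

Because one buffer is always ready to drain, the synchronous disk sees a continuous stream, which is the point of dedicating the second PPU.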

The 7600 (Figure 5) does not have a channel switch; it provides eight dedicated channels for each PPU. The 7000-series channel is rather rudimentary so that two channels are required for a 7600 adapter, one for data transfer and one for control. Since channels are dedicated, conventions for connection are similar to those for initial 6600 connections. The 7600 PPUs and CPU are significantly faster, and required pauses are of reasonable duration.

Channel data rates between the worker machines and the PDP-6 are approximately 10 × 10^6 bps. Long-term rates are reduced by required pauses and setup time.

The PPUs and data channels on the 6000- and 7000-series systems are not truly buffered and are not very flexible in certain operations, a real problem with this system arrangement. When a block transfer occurs between a PPU and a channel, the PPU's accumulator functions as the word-count register. The PPU stops program execution until the transfer runs to completion; no instructions are executed. Logic in the PPU, together with the accumulator, sequences the word-by-word transfer until it is terminated. The PPU terminates when the word count has been decremented to zero; the PDP-6 terminates the transfer if the file being shipped to the CDC machine is shorter than the word count. When everything is working this is not a big problem, but it does present difficulties during system implementation.
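The two termination conditions can be modeled behaviorally as follows; the function name, return values, and word values are illustrative assumptions, not CDC nomenclature.

```python
def ppu_block_transfer(accumulator, source_words):
    """Model the PPU block transfer: the accumulator holds the word count.

    The PPU side terminates when the count reaches zero; the PDP-6 side
    terminates early if the file is shorter than the requested count. The
    PPU executes no instructions while the transfer is in progress, which
    is why a hung transfer stops everything.
    """
    received = []
    src = iter(source_words)
    while accumulator > 0:
        word = next(src, None)
        if word is None:            # sender exhausted: PDP-6-side termination
            return received, accumulator, "terminated by sender"
        received.append(word)
        accumulator -= 1            # hardware decrements the word count
    return received, accumulator, "terminated by word count"

# A transfer of 8 words when only 5 are available ends early:
words, remaining, reason = ppu_block_transfer(8, [1, 2, 3, 4, 5])
assert reason == "terminated by sender" and remaining == 3
```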

When software controls the transfer and hardware provides the microcontrol functions, problems will initially exist at both the hardware and software levels. When a difficulty occurs, everything quits, since the PPU is stopped during the transfer. Tracing the problem can be very difficult since repetitive operation may be impossible. This has been resolved somewhat by incorporating additional logic in both the adapters and the line units. The added logic facilitates local diagnostics which do not rely on operation of the full link and the cooperation of the computer systems at each end of the link.

Another problem with the PPUs is the absence of an easy way to determine the contents of the PPU's program counter when the PPU has stopped; core dumps may provide little help. The location specified by the program counter can be determined only by probing the corresponding test points with an oscilloscope, a very awkward process.

PDP-6 Core Memory and Bus Multiplexors

The PDP-6 Core Memory Buffer itself consists of sixteen 16K-word banks. Fourteen of these banks are one-microsecond core; the other two are a little slower. There are six memory buses (ports) in the system (Figure 6); two go to the two CPUs, and the other four go to the various I/O on the system.

The memory structure is basically asynchronous; requests on one bus bear no fixed time relationship to requests on another bus. The function of each bank interface is to resolve simultaneous requests to that particular core bank, since memory access requests occur in a rather random fashion and, quite often, two or more requests arrive simultaneously at a single memory bank. The interface, with the hardware priority algorithms incorporated in it, parcels out the cycles within the 16K core module according to the appropriate algorithm so that each requester gets a "fair shake."

A fair shake is not equal time. Memory requests from synchronous devices must be serviced within a fixed period of time, even when the average request rate is low. Purely asynchronous devices (a CPU or a file transport channel) are very tolerant of delays but can profitably use nearly all remaining memory cycles.

The algorithm implemented in the core memory interfaces gives strict priority to the first two memory buses: the first has priority over the second, which has priority over the remaining four. The other four memory buses have equal priority and are granted memory references to a particular bank on a first-come, first-served (sequencing) basis.
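The stated arbitration rule can be sketched as a simple selection function; the port numbering and the queue representation are assumptions for illustration.

```python
from collections import deque

def next_grant(port1_waiting, port2_waiting, fcfs_queue):
    """Pick which memory bus a bank services next, per the stated algorithm:
    port 1 beats port 2, both beat ports 3-6, and ports 3-6 are served
    first-come, first-served among themselves.

    fcfs_queue is a deque of port numbers (3-6) in arrival order.
    """
    if port1_waiting:
        return 1
    if port2_waiting:
        return 2
    if fcfs_queue:
        return fcfs_queue.popleft()
    return None                      # no requests pending

q = deque([5, 3, 6])
assert next_grant(False, True, q) == 2   # port 2 outranks all sequenced ports
assert next_grant(False, False, q) == 5  # earliest arrival among ports 3-6
```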

The two CPUs are connected to two of the sequencing ports (the memory bus connections to the memory interfaces), while the Librascope Disk alone has the top-priority port because of its high-speed synchronous nature. The remaining memory access requirements are serviced by memory bus multiplexers, which allow several devices or data channels to share a single bus.

Two types of bus multiplexer are in the system now. The preferred type is the Radial Bus Multiplexer, two of which are installed. Each connected peripheral device can be attached to either multiplexer on a switched basis. This provides configuration (memory loading) flexibility and redundancy to minimize the effect of losing a subport on either multiplexer.

A fractional-binary-sequencing hardware algorithm is incorporated in each multiplexer to guarantee each pair of subports a minimum number of cycles as a fractional power of two. The top subport pair is guaranteed half the cycles, the next pair one-fourth of the cycles, and so on in sequence to a guaranteed one of every 256 cycles for the bottom subport pair. Any low priority subport can obtain a large number of cycles, however, since the fractional binary sequence is only a minimal guarantee to satisfy the demands of synchronous devices.
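The fractional-binary guarantee corresponds to the classic "ruler" sequence: cycle indices with exactly k-1 trailing zero bits belong to subport pair k. A sketch of that schedule follows; the cycle-numbering convention and the disposition of the one leftover cycle per frame are assumptions.

```python
from collections import Counter

def guaranteed_pair(cycle):
    """Fractional-binary sequencing: in a repeating 256-cycle frame, pair k
    (k = 1..8) is guaranteed the cycles whose index has exactly k-1 trailing
    zero bits, i.e. a 1/2**k share of all cycles. The one leftover cycle per
    frame (index 256) is left unassigned here and could go to any requester.
    """
    tz = (cycle & -cycle).bit_length() - 1   # trailing zeros of cycle index
    return tz + 1 if tz < 8 else None

counts = Counter(guaranteed_pair(c) for c in range(1, 257))
assert counts[1] == 128      # top pair: half of all cycles
assert counts[2] == 64       # next pair: one quarter
assert counts[8] == 1        # bottom pair: one cycle in every 256
```

As the text notes, this schedule is only a minimum guarantee; cycles a high pair does not claim are available to lower pairs.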

The second type of multiplexer, the "Flip-Chip" Bus Multiplexer, operates on a scan basis. Only two devices remain connected to it and they will shortly be converted to the Radial Multiplexer pair. Expansion to a third Radial Bus Multiplexer (replacing the Flip-Chip Bus Multiplexer) will be required in the near future.

Connection of Secondary Storage

Each device in the centralized storage hierarchy is connected to the PDP-6 core memory via a controller (Figure 3). Controllers gain memory access via bus multiplexers except for the Librascope Disk which effectively has the top priority memory port to itself on a dedicated basis. A console display also uses the top priority port when the disk is inactive.

Each controller is a special-purpose processor. It can execute a stylized or limited set of instructions to position mechanical storage elements, seek to a storage device location, select head groups, or control Read/Write/Verify operations for the storage device. Data assembly/disassembly, formatting, synchronization, and buffering among PDP-6 core memory accesses are provided as required.

The IBM 1360 Photostore is unique in containing a control computer, a modified IBM 1800 System operating as a process controller.

DESIGN OF THE TELETYPE NETWORK AND OTHER REMOTE I/O

The Separate Network Concept

The second basic feature to be provided by the OCTOPUS Network was a capability allowing logical connection of any remote terminal to any worker machine. Teletypes are the primary terminal types now available on the system and are connected via their private network as shown in Figure 2. The Teletype Network is redrawn separately in more detail in Figure 7.

Each worker machine has a Teletype Network link to one of the PDP-8 based line concentrators over which all messages are passed for terminals logged into that machine. If the Teletype is not attached to the PDP-8 with the worker machine link, the message must be forwarded via the PDP-6 system to the appropriate PDP-8 concentrator. Direct links among the three PDP-8 systems are being implemented and will be operational late in 1970, providing complete independence of the Teletype and File Transport Networks.

The PDP-6 will remain attached to the Teletype Network for several reasons. Logging into the PDP-6 (as a host or worker machine) is possible for direct manipulation of files.

The file transport system (software and the hardware network) uses the Teletype Network links to pass control messages between the worker machines and the PDP-6. The Teletype Network thus provides a mechanism for transferring packet messages within the OCTOPUS Network. Teletype messages are but a subset of the class of short messages termed "packets." This is done for two reasons. First, packet messages are all short and do not mix well, in a time-statistics sense, with the much larger files transported over the file transport links. Second, the PPU in a 6600 or 7600 is dedicated to packet handling to ensure quick response, whereas PPUs for file transport are assigned dynamically as required, with attendant delay.

The above features are necessities and alone justify retaining the PDP-6 connection to the Teletype Network. However, an advantage is also gained with respect to reliability: file transport links can be (and are) used as a secondary backup route for transferring packet messages when a PDP-8/worker machine link is lost. Greater delay is encountered and some awkwardness in PPU utilization results, but this backup provides a major step toward utility-grade availability.

A degree of vulnerability remains for a given PDP-8 concentrator system as implemented; loss of a system is loss of that set of Teletypes (one-third of the full set). This is undesirable but tolerable except for a few crucial operator-oriented Teletypes for each worker machine. These few Teletypes are connected to two concentrators on a manually switched basis so that operability of each worker machine is guaranteed.

Message Buffering Teletype Concentrator

A single PDP-8 based Teletype concentrator with a single channel to a CDC 6600/7600 worker machine is shown in Figure 8. Each PDP-8 (or PDP-8/L) is an 8K-word system (12 bits/word) with the second 4K words of memory used for line message assembly/disassembly buffering. A Teletype multiplexer, operating on a memory cycle steal basis, performs serial-to-parallel and parallel-to-serial character conversion for each of the 128 full-duplex Teletype lines attached to it. Highly specialized software in the PDP-8 controls the multiplexer, packs (or unpacks) characters into (or out of) the message buffer, and controls forwarding of messages over the Teletype Network links.

The motivations for using minicomputers in dedicated concentrator applications appear stronger today than when initially implemented. It is apparent that a minicomputer can be used very efficiently (therefore economically) because its relatively low cost permits specialized use and permits hardware and software design to be optimized.

Less apparent is the advantage of placing maximum buffering and intelligence as close to the user terminal as possible. Very large worker machines, as a class, are not particularly efficient at very small I/O tasks, and I/O interrupts at a hardware or software level are rather costly. One might say that they have a processing inertia or momentum which must be overcome to perform an I/O transaction. To minimize the frequency of these transactions, the information passed during a transaction must be maximized. In the case of Teletypes, this is a full Teletype line message, including a system-oriented header describing the message. Message analysis on a character-by-character basis is performed after the I/O transfer has been completed.

Teletype Network Channels

The links in the Teletype Network use the same LRL OCTOPUS Data Channel used by the File Transport Network, except for the connection to the PDP-6. An adapter is again provided to connect the LRL Channel to the channel on each worker machine, with an additional provision for a clock attachment (which provides chronological time and date information to the worker machine for accounting purposes).

The LRL Channel is connected to a PDP-8 by a PDP-8 Line Unit, which is essentially identical to the PDP-6 Line Unit except for the width of the data path. Performance of this link is only about 1.5 × 10^6 bps, but this is more than adequate for the load. Note that large files normally transported on the File Transport Network are not routed via the Teletype Network as an alternate. This is not possible because of PDP-8 data rate and storage limitations.

Connection of a PDP-8 to the PDP-6 (core memory) is provided by the PDP-8 File Channel. This channel operates on a core-to-core basis, with 12-to-36 bit assembly/disassembly, at data rates of 1.5 × 10^6 bps, comparable to the worker machine links.
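The 12-to-36 bit assembly/disassembly amounts to packing three 12-bit words into each 36-bit PDP-6 word. A sketch of the idea, assuming (the report does not say) that the first 12-bit word lands in the high-order position:

```python
def assemble_36(words12):
    """Pack 12-bit words, three at a time, into 36-bit words, first word
    in the high-order position (an assumed, not documented, ordering)."""
    assert len(words12) % 3 == 0
    out = []
    for i in range(0, len(words12), 3):
        a, b, c = words12[i:i + 3]
        out.append((a << 24) | (b << 12) | c)
    return out

def disassemble_36(words36):
    """Inverse operation: split each 36-bit word into three 12-bit words."""
    out = []
    for w in words36:
        out.extend([(w >> 24) & 0o7777, (w >> 12) & 0o7777, w & 0o7777])
    return out

sample = [0o1234, 0o5670, 0o0007, 0o7777, 0o0000, 0o4321]
assert disassemble_36(assemble_36(sample)) == sample   # lossless round trip
```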

All channels on the PDP-8s, including the forthcoming PDP-8/PDP-8 links, appear logically identical. This allows use of identical reentrant software control routines.

Selector Adapters

As implemented, each adapter currently connecting a Teletype Network link to a CDC machine requires the dedication of a CDC channel. An adapter is, in effect, a 1 X 1 channel converter. Additional connections for I/O will be required in the near future, initially for connection of the RJET Network which is now being implemented.

A modified version of the adapter, the "Selector Adapter," is under implementation and will permit connection of several (design maximum of 16) LRL OCTOPUS links to a single CDC channel. The Selector Adapter, then, is a 1 X N channel converter where N will be limited (in practice) by system loading and delay constraints. Connection to one of the N channels will be made on a job basis and, with a CDC channel and PPU, will be similar to a "Selector I/O Processor."

Television Monitor Display System (TMDS)

A 16-channel visual display system, based on disk refresh of static images on standard television monitors for low cost, has been implemented on the OCTOPUS Network (Figure 9). The disk has 32 tracks and rotates synchronously at 3600 rpm (60 revolutions per second), twice the standard TV frame rate of 30 frames per second. Standard TV is interlaced, so either ODD or EVEN scan lines are refreshed in 1/60th of a second on an alternating basis. Therefore, two disk tracks are assigned per channel, one for the ODD-line field and one for the EVEN-line field.
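As a quick consistency check (not part of the original report), the stated rotation rate, interlacing, and track count do yield the 16 display channels:

```python
RPM = 3600
revs_per_second = RPM / 60             # 60 revolutions per second
frame_rate = 30                        # standard TV frames per second
fields_per_frame = 2                   # interlaced ODD and EVEN fields
field_rate = frame_rate * fields_per_frame

# One revolution per field: each track is read out in exactly 1/60 s.
assert revs_per_second == field_rate == 60

tracks = 32
tracks_per_channel = fields_per_frame  # one track per field
assert tracks // tracks_per_channel == 16   # the 16 display channels
```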

Information is recorded as digital bits on the disk, corresponding to black or white picture points on a 512 X 512 display area. To provide 512 visible horizontal lines, 559 scan lines are required to allow for vertical retrace. Display data are recorded once with refresh coming from the disk. A controller connects the disk refresh subsystem to PDP-6 core as the source of the display data. (This is the same core buffer used for file transport applications.) The controller has a scan converter for alphanumeric characters; however, vector display data must be converted to raster scan format by software prior to writing the disk.

An electronic "crossbar switch" will shortly be added to TMDS to enable sharing of the 16 available channels by more than 16 monitors. Multiple monitors will also be able to display simultaneously the contents of a single disk channel, with appropriate security precautions.

It is apparent that connection of the TMDS to the PDP-6 core memory and supporting system is not entirely consistent with the philosophy of separate networks partitioned on a functional basis. The connection to the PDP-6 is partially the result of economics; the TMDS was intended as a pilot effort to develop techniques for low-cost display systems. It will not be expanded beyond its present capacity of 16 channels. Also, it was acquired before the separation of networks concept was finally accepted.

Data Acquisition System

Another I/O capability which has been implemented is a medium-speed serial line concentrator for 16 lines at 1200 bps (Figure 10). This system is similar to the PDP-8 based Teletype concentrators with a corresponding tradeoff between line speed and number of lines. Communication over the serial lines is full-duplex asynchronous by character, as for the Teletype systems.

The system provides a mechanism for transmitting data between the computer center (OCTOPUS Network) and remote experimental areas. This system is only partially operational at present as other parts of the OCTOPUS Network have higher priority.

The Data Acquisition System is connected only to the PDP-6 at present. Most applications require the ability to build a file for subsequent processing, which is possible now. Expansion to a full network by direct links from this concentrator to the CDC worker machines is possible but rather unlikely and unwarranted at present, considering the impending implementation of the RJET Network with its higher performance communications capability and direct connection to each worker machine.

NEW EFFORTS AND FUTURE DIRECTIONS

Present File Transport Network

Two areas require upgrading in the present centralized storage system. The CPUs are troublesome and require extensive high-quality maintenance. Spare circuit modules are no longer in production, yet are critical in circuit characteristics. Module replacement is highly selective and often requires replacement of components for satisfactory operation. Repair of processor problems therefore takes an inordinate amount of time and reliability is often marginal.

Installation of PDP-10 central processors will reduce preventive maintenance and difficulties during production operation (system crashes). The PDP-10 CPU is nearly a plug-in replacement for the PDP-6 CPU so only minor software adjustments will be required.

The second area of hardware reliability problems centers around the Photostore file index, which uses the Data Cell. The Data Cell is a rather complex, hybrid storage device using pneumatic, hydraulic, and mechanical subsystems for handling the magnetic strips, which are wrapped around a drum for Read/Write operations. Breakdowns and error rates rise in rough proportion to the rate of strip accesses and result in rather frequent unscheduled outages, because present use consists of many accesses to short files (an LRL usage pattern quite different from the Data Cell's designed usage).

This area of difficulty will be significantly reduced by the acquisition of a large, high-performance disk-pack system of a few billion bits capacity. It will fit into the storage hierarchy between the present Librascope fixed-head disk file and the Data Cell, with the Data Cell then being devoted primarily to medium life-time user file storage. Backup copies of the file index will be kept on the Photostore for recovery purposes. Use of the Photostore as an operating backup is not reasonable because of its slow random-access times.

Remote Job Entry Terminal (RJET) Network

A third network is being implemented to provide a capability for conventional card input and printer output at remote sites within the Lab. Studies have demonstrated cost effectiveness based on time saved by users traveling between remote office areas and the computer center. Further gains are achieved by the increased productivity of the user resulting from reduced turnaround. User efficiency will also be increased if artificial operational delays are eliminated.

The RJET Network is blocked out in Figure 11. It uses a dedicated line concentrator and message buffer (a dual PDP-11 system) to interface each serial line to the set of worker machines. The individual lines will use standard Binary Synchronous Communications protocol for compatibility with industry standards, at speed multiples of 1200 bps. Standard communication speed with RJET terminals will be 4800 bps, either half- or full-duplex.

The "front-end" Communications Handler PDP-11 provides line interfacing with a capacity of 18 lines at 4800 bps, half-duplex. Expansion beyond this would require another front-end PDP-11. The other PDP-11, the Message Handler, controls OCTOPUS Network communications and buffers partial messages as they are received from or transmitted to the respective terminals.

Connection to each CDC worker machine will be via one of the subports on the Selector Adapter similar to the Teletype Network connections. Translation from the 12-bit data path LRL OCTOPUS Channel to the 16-bit word length of the PDP-11 is performed in the Selector Line Unit. This line unit connects the message-buffering PDP-11 to each OCTOPUS channel in the RJET Network.

The PDP-6 file storage system connection to the RJET Network will be similar to that for the CDC worker machines; the PDP-6 will appear as another host or worker machine. One PDP-11 Selector Line Unit subport will link to the PDP-6 via a new adapter unit. This PDP-6 Adapter is rather like the inverse of the line unit channel used for file transport, since the channel is asymmetrical in a control sense.

High Performance Graphics System

Some of LRL's research programs require a fast, highly interactive graphics capability. This requirement is partially met by multiple consoles on a Sigma 7/Sigma 3 time-shared system and by a single graphics console on an IBM 7094 used in a dedicated mode. Both of these are off-line from LRL's main computer power as they are not part of the OCTOPUS Network. The 7094 System also has a limited life due to the economics of operation and maintenance.

A two-console graphics processing system will be added to the OCTOPUS Network by mid-1971 to augment the existing systems. This will provide, for the first time, an on-line capability closely coupled to the large CDC machines. The display processor supporting the two consoles will have a minimum performance level of 6000 short vectors at 40 frames/sec and special-purpose picture processing capabilities. It will be coupled to the PDP-6 core bank supporting the OCTOPUS File Transport Network and supported by one of the two CPUs (soon to be PDP-10s).

Medium Performance CRT Terminals

Practically all remote terminals on OCTOPUS are still teletypewriters despite pilot efforts in other areas (e.g., TMDS). Teletypes are well known to be slow, noisy, and not too reliable, but they are inexpensive, a factor of five or so less costly than small CRT-based systems. Terminal technology is moving rapidly, however.

Teletypes are tolerated because there is no cost-equivalent alternative. However, we need to look beyond initial dollar costs. Total investment in Teletypes, the associated PDP-8 concentrators, and the Teletype Network (for about 350 Teletype units) approximates 1% of the computing machinery investment at LRL. A very small percentage improvement in utilization of these computer systems would justify a significantly improved terminal and its cost. The economic value of improvements in user efficiency is more difficult to measure analytically but offers even greater potential return.

A minimal CRT terminal should display at least a thousand alphanumerics and provide graphical displays of a few hundred vector inches (measured on the CRT face). Interaction could be limited to that attainable from a keyboard with a cursor on the display. The inclusion of limited graphics would have a large influence on user habits since a few curves can impart information equal to a page of print. Actually seeing the curves and their relationships should provide much-improved user efficiency. Faster output of alphanumerics leads to use of the CRT terminal for "page flipping" of text, reducing requirements for large quantity printout.

Dollar constraints prohibit the mass replacement of Teletypes on a 1 for 1 basis. A program to augment the Teletypes with superior terminals at about 128 units per increment is being pursued at a technical level, but has not yet been funded.

The Future for File Transport

The present File Transport Network has an upper limit in attainable data rate. The Photostore records at about 0.25 × 10^6 bps and reads at 1.5 × 10^6 bps. The core buffer, disk, and file transport channels are fairly well matched for operation at about 10 × 10^6 bps. Even with a better mass storage device, I/O rates to the central hierarchy cannot exceed 10 × 10^6 bps until a new system is implemented (a major effort). Upgrading the present system is equivalent to implementing a new system, since it pinches out at all points almost simultaneously.

The 7600s and STAR (the next large worker computer) have I/O rates of at least 50 × 10^6 bps, which can be viewed as a lower limit for large systems appearing in this decade. Support of these new systems will require new file transport capabilities, which will likely be similar to those in the existing approach.

It appears that optical mass memory systems will become available within two or three years and will offer on the order of trillion-bit capacities with sub-microsecond access times. Data streaming rates will be 100 × 10^6 bps or greater, and storage costs will fall to about 10^-4 cents/bit. This will result in a radical change in large machine organization, completely eliminating the elaborate memory hierarchies presently required. The memory structure may take the form of one of these fast, relatively inexpensive optical mass memories, backed up by several active-circuit cache memories.

User data files tend to fall into two categories, short-term and long-term. Short-term files tend to be problem dependent and machine oriented and are accessed frequently for a few days. Long-term files tend to be reference copies of the short-term files or data bases which have been built up over a long period of time and have common interest to many users.

The present file transport system distinguishes between these two categories by using the Data Cell for short-term storage and the Photostore for long-term storage. On-line disk/drum storage on the large computers has a lifetime of only about four hours.

The projected local storage capabilities of the large machines may allow a more efficient structure. Short-term files could be kept locally for a few days if desired, avoiding the continual overhead of transporting them between the centralized store and the worker machine.

A centralized mass-storage system with supporting file-management capabilities will still be necessary for common data-base applications and long-term storage. It is reasonable to assume that economy through scale will remain valid. Economics will continue to favor the very large store shared among the various computers in the OCTOPUS Network.

The fundamental change from the present system will be an increase of one to two orders of magnitude in the capacities of the central store and the large computers' local stores, and in their interconnecting data rates. This will preserve, and possibly improve, the relationship of file storage facilities to the computing capabilities of the computers.

A Final Overview

The several implemented and planned networks that make up OCTOPUS are communications networks with various unique emphases or capabilities, depending on function. Common to all of them is the provision of I/O resources, giving these resources stability and independence from changes in and additions to the large worker machine computing resources. The networks allow a degree of dynamism in the makeup of the overall system and longevity through gradual evolution, which may in the end be their greatest strength.

ACKNOWLEDGEMENTS

The author is grateful to Mrs. C. Russell for her time and effort in preparing the original illustrations, to the many Computation Department personnel who helped gather this material, to Thom Nelson of TID for his editorial comments and suggestions, and to Mrs. C. Larsen and the CIC staff for their assistance and cooperation.










Octopus Engineering Diagrams

(Figures 1 - 11)




[Figure image files: Octopus.dir/images/Octopus.Figure.1.jpg through Octopus.dir/images/Octopus.Figure.11.jpg]
