High Performance Computing & Communications: Toward a National Information Infrastructure

1994

Committee on Physical, Mathematical, and Engineering Sciences
Federal Coordinating Council for Science, Engineering, and Technology
Office of Science and Technology Policy

A. Nico Habermann (1932-1993)

This "Blue Book" is dedicated to the memory of our friend and colleague, A. Nico Habermann, who passed away on August 8, 1993, as this publication was going to press. Nico served as Assistant Director of NSF for Computer and Information Science and Engineering (CISE) for the past two years and led NSF's commitment to developing the HPCC Program and enabling a National Information Infrastructure. He came to NSF from Carnegie Mellon University, where he was the Alan J. Perlis Professor of Computer Science and founding dean of the School of Computer Science. He was a visionary leader who helped promote collaboration between computer science and other disciplines and was devoted to the development and maturation of the interagency collaboration in HPCC. We know Nico was proud to be part of these efforts.

HIGH PERFORMANCE COMPUTING AND COMMUNICATIONS: TOWARD A NATIONAL INFORMATION INFRASTRUCTURE

A Report by the Committee on Physical, Mathematical, and Engineering Sciences
Federal Coordinating Council for Science, Engineering, and Technology
Office of Science and Technology Policy

EXECUTIVE OFFICE OF THE PRESIDENT
OFFICE OF SCIENCE AND TECHNOLOGY POLICY
WASHINGTON, D.C. 20506

MEMBERS OF CONGRESS:

I am pleased to forward with this letter "High Performance Computing and Communications: Toward a National Information Infrastructure," prepared by the Committee on Physical, Mathematical, and Engineering Sciences (CPMES) of the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), to supplement the President's Fiscal Year 1994 Budget.

This report describes the FCCSET Initiative in High Performance Computing and Communications. The interagency HPCC Initiative is developing computing, communications, and software technologies for the 21st century. It is making progress toward providing the high performance computing and communications capabilities and advanced software needed in critical research and development programs.

The HPCC Program is fully supportive of and coordinated with the emerging National Information Infrastructure (NII) Initiative, which is part of the President's and Vice President's Technology Initiative released February 22, 1993. To enable the NII Initiative to build on the HPCC Program, the FCCSET CPMES High Performance Computing, Communications, and Information Technology (HPCCIT) Subcommittee has included a new program component, Information Infrastructure Technology and Applications (IITA). It will provide for research and development needed to address National Challenges, problems where the application of HPCC technology can provide huge benefits to America. Working with industry, the participating agencies will develop and apply high performance computing and communications technologies to improve information systems for National Challenges in areas such as health care, design and manufacturing, the environment, public access to government information, education, digital libraries, energy management, public safety, and national security. IITA will support the development of the NII and the development of the computer, network, and software technologies needed to provide appropriate privacy and security protection for users.
The coordination and integration of the interagency research and development strategy for this Initiative and its coordination with other interagency FCCSET Initiatives have been led very ably by the CPMES and its HPCCIT Subcommittee. Donald A. B. Lindberg, Chair of the HPCCIT Subcommittee, his interagency colleagues, their associates, and staff are to be commended for their efforts in the Initiative itself and in this report.

Office of Science and Technology Policy

Federal Coordinating Council for Science, Engineering, and Technology
Committee on Physical, Mathematical, and Engineering Sciences

Department of Agriculture
Department of Commerce
Department of Defense
Department of Education
Department of Energy
Department of Health and Human Services
Department of Interior
Environmental Protection Agency
National Aeronautics and Space Administration
National Science Foundation
Office of Management and Budget
Office of Science and Technology Policy

FCCSET Directorate
Charles H. Dickens, Executive Secretary
Elizabeth Rodriguez, Senior Policy Analyst

High Performance Computing, Communications, and Information Technology (HPCCIT) Subcommittee

Donald A.B. Lindberg, National Coordination Office (Chair, HPCCIT Subcommittee and Executive Committee)
James Burrows, National Institute of Standards and Technology (Representative)
John S. Cavallini, Department of Energy (Alternate)
Melvyn Ciment, National Science Foundation (Alternate)
George Cotter, National Security Agency (Representative)
Robin L. Dennis, Environmental Protection Agency (Alternate)
Norman Glick, National Security Agency
A. Nico Habermann (deceased), National Science Foundation
Lee Holcomb, National Aeronautics and Space Administration
Paul E. Hunter, National Aeronautics and Space Administration
Steven Isakowitz, Office of Management and Budget
Charles R. Kalina, National Coordination Office (Executive Secretary)
Norman H. Kreisman, Department of Energy
R.J. (Jerry) Linn, National Institute of Standards and Technology
Daniel R. Masys, National Institutes of Health
Bruce McConnell, Office of Management and Budget
William L. McCoy, Federal Aviation Administration (Observer)
James A. Mitchell, Department of Education
David Nelson, Department of Energy
Michael R. Nelson, Office of Science and Technology Policy
Joan H. Novak, Environmental Protection Agency
Merrell Patrick, National Science Foundation (Alternate)
Alex Poliakoff, Department of Education (Alternate)
Thomas N. Pyke, National Oceanic and Atmospheric Administration (Representative)
John Silva, Advanced Research Projects Agency (Alternate)
Paul H. Smith, National Aeronautics and Space Administration (Alternate)
Stephen L. Squires, Advanced Research Projects Agency (Alternate)
John C. Toole, Advanced Research Projects Agency (Representative)
Judith L. Vaitukaitis, National Institutes of Health (Alternate)

Note: The Advanced Research Projects Agency, the National Science Foundation, the Department of Energy, and the National Aeronautics and Space Administration hold permanent positions on the HPCCIT Executive Committee; the other two positions rotate among the other agencies.

Editorial Group for High Performance Computing and Communications 1994

Editor
Sally E. Howe, Assistant Director for Technical Programs, National Coordination Office

Program Component and Section Editors
Melvyn Ciment, NSF (Past Co-Editor): HPCC Program and Program Components
Stephen Griffin, NSF: Case Studies
Daniel R. Masys, NIH: Video
Merrell Patrick, NSF: BRHR
Calvin T. Ramos, NASA: ASTA
Stephen L. Squires, ARPA: HPCC Program, HPCS, IITA
Roger Taylor, NSF: NREN

Agency Editors
Norman Glick, National Security Agency
Stephen Griffin, National Science Foundation
Frederick C. Johnson, National Institute of Standards and Technology
Thomas Kitchens, Department of Energy
Daniel R. Masys, National Institutes of Health
James A. Mitchell, Department of Education
Joan H. Novak, Environmental Protection Agency
Thomas N. Pyke, National Oceanic and Atmospheric Administration
Calvin T. Ramos, National Aeronautics and Space Administration
Stephen L. Squires, Advanced Research Projects Agency

Copy Editor
Patricia N. Williams, National Coordination Office

Acknowledgments

In addition to the FCCSET, CPMES, HPCCIT Subcommittee, and Editorial Group, many other people contributed to this book, and we thank them for their efforts. We explicitly thank the following contributors to the Component and Agency Sections:

Robert Aiken, Lawrence Livermore National Laboratory, DOE
Robin Dennis, EPA
Darleen Fisher, NSF
Michael St. Johns, USAF, ARPA
Anthony Villasenor, NASA

Contributors to the Case Studies are acknowledged at the end of the Case Studies section.

We thank Joe Fitzgerald and Troy Hill of the Audiovisual Program Development Branch at the National Library of Medicine for their artistic contributions and for the preparation of the book in its final form, and we thank Patricia Carson and Shannon Uncangco of the National Coordination Office for their contributions throughout the preparation of this book.

Table of Contents

I. Executive Summary
II. The HPCC Program
    1. Program Overview ... 7
    2. Program Management ... 15
    3. Interdependencies Among Components ... 20
    4. Coordination Among Agencies ... 21
    5. Membership on the HPCCIT Subcommittee ... 22
    6. Technology Collaboration with Industry and Academia ... 24
    7. Agency Budgets by HPCC Components ... 25
    8. Agency Responsibilities by HPCC Components ... 26
III. HPCC Program Components
    1. High Performance Computing Systems ... 29
    2. National Research and Education Network ... 32
    3. Advanced Software Technology and Algorithms ... 41
    4. Information Infrastructure Technology and Applications ... 47
    5. Basic Research and Human Resources ... 55
IV. Individual Agency Programs
    1. Advanced Research Projects Agency ... 59
    2. National Science Foundation ... 64
    3. Department of Energy ... 72
    4. National Aeronautics and Space Administration ... 81
    5. National Institutes of Health ... 87
    6. National Security Agency ... 96
    7. National Institute of Standards and Technology ... 101
    8. National Oceanic and Atmospheric Administration ... 106
    9. Environmental Protection Agency ... 111
    10. Department of Education ... 117
V. Case Studies
    Introduction ... 119
    1. Climate Modeling ... 120
    2. Sharing Remote Instruments ... 123
    3. Design and Simulation of Aerospace Vehicles ... 126
    4. High Performance Life Science: From Molecules to MRI ... 129
    5. Non-Renewable Energy Resource Recovery ... 132
    6. Groundwater Remediation ... 135
    7. Improving Environmental Decision Making ... 137
    8. Galaxy Formation ... 139
    9. Chaos Research and Applications ... 141
    10. Virtual Reality Technology ... 143
    11. HPCC and Education ... 145
    12. GAMS: An Example of HPCC Community Resources ... 148
    13. Process Simulation and Modeling ... 150
    14. Semiconductor Manufacturing for the 21st Century ... 152
    15. Advances Based on Field Programmable Gate Arrays ... 154
    16. High Performance Fortran and its Environment ... 156
    Contributing Writers ... 158
VI. Glossary ... 159
VII. Contacts ... 166
IX. Index ... 173

List of Tables

HPCCIT (High Performance Computing, Communications, and Information Technology) Subcommittee ... i-iii
Some Grand Challenge Problems ... 5
Some National Challenge Application Areas ... 5
Overview of the Five HPCC Components ... 11
HPCC Program Goals ... 12
HPCC Agencies ... 12
HPCC Program Strategies ... 13
HPCC Research Areas ... 14
Major HPCC Conferences and Workshops During FY 1993 ... 18
HPCC-Related Conferences ... 18-19
Evaluation Criteria for the HPCC Program ... 22-23
Agency Budgets by HPCC Program Components ... 25
Agency Responsibilities by HPCC Program Components ... 26-27
Major Federal Components of the Interagency Internet ... 33
InterNIC Service Awards ... 36
Gigabit Testbeds ... 39
HPCC Agency Grand Challenge Research Teams ... 43
Contrasts Between Grand Challenges and National Challenges ... 49
NSF centers directly supported within the ASTA component ... 66
On-Going NSF Grand Challenge Applications Groups ... 68
NSF Grand Challenge Applications Groups Initiated in FY 1993 ... 69
High performance systems under evaluation for biomedical applications ... 90
NOAA Grand Challenges Requiring HPCC Resources ... 106
NOAA Laboratories Involved in HPCC ... 107
NOAA National Data Centers ... 108

List of Illustrations

Schematic of interconnected NSF, NASA, and DOE "backbone" networks ... NREN 35
High performance interconnect and fine-grained parallel systems board ... ARPA 60
High performance networking map ... ARPA 61
Modern operating systems schematic ... ARPA 62
National Challenges and NII program layers schematic and examples ... ARPA 63
NSF metacenter hardware configuration ... NSF 64
Industrial affiliates and partners at NSF Supercomputer Centers (pie chart) ... NSF 65
Mosaic screens ... NSF 67
Hydrogen/air turbulent reacting flame streams ... DOE 72
ESnet map ... DOE 73
Biochemically activated environmental chemical benzo-a-pyrene ... DOE 74
Parallel Virtual Machine (PVM) ... DOE 75
National Storage Laboratory (NSL) ... DOE 76
High School Science Students Honors Program "Superkids" ... DOE 77
Volume renderings of density of bubble hit by shock wave ... DOE 78
Pacific Northwest Mesoscale Meteorological (MM) model results ... DOE 79
High velocity impact model results ... DOE 80
Simulation of the Earth's ozone layer ... NASA 81
Aeronautics Network (AERONet) map ... NASA 82
NASA Science Internet (NSInet) maps ... NASA 82
Simulated temperature profile for hypersonic re-entry body ... NASA 83
Simulation of surface pressure on High Speed Civil Transport (HSCT) ... NASA 84
Freestream airflow and engine exhaust of Harrier Vertical Takeoff and Landing (VTOL) aircraft ... NASA 85
Herpes simplex virus capsid cross-section reconstruction ... NIH 87
Myoglobin protein ... NIH 89
Binding of small molecules to DNA ... NIH 91
Three-dimensional reconstruction from computed tomography and magnetic resonance images ... NIH 92
Four megabyte digital radiology image ... NIH 93
X-ray diffraction spectroscopy for determining protein structure ... NIH 94
Terasys workstation ... NSA 96
Graph of time improvements using multiple vs. single processor ... NSA 97
Memory traffic flow ... NSA 98
High Speed Network (HNET) Testbed ... NSA 99
Board employing Field Programmable Gate Arrays ... NSA 100
MultiKron chip ... NIST 101
Computer-controlled coordinate measuring machine ... NIST 102
Machine tool in the Shop of the 90s ... NIST 103
Cost-effective ways to help protect computerized data ... NIST 104
Statistical analysis of video microscope images ... NIST 105
Model of Colorado Front Range blizzard ... NOAA 109
Contours of wind flow in middle atmosphere ... NOAA 110
EPA networking maps ... EPA 111
Electric field vectors for benzopyrene ... EPA 112
Regional Acid Deposition Model predictions of nitric acid over eastern U.S. ... EPA 113
Rendition of salinity in Chesapeake Bay ... EPA 114
Ozone concentrations through AVS interface ... EPA 115
Getting feedback on prototype user interface ... EPA 116
Fairfax County Public School teacher and students using PATHWAYS database and STAR SCHOOLS distance learning materials ... ED 117
Simulation of speed of currents at ocean surface and global ocean model ... 118
Sea surface temperature simulation maps ... 120
Chick cerebellum - Purkinje cell ... 123
Controlling high-voltage electron microscope over the Internet ... 124
Simulation of airflow over and past high performance aircraft wing ... 126
Airspeed contours around aircraft traveling at Mach 0.77 ... 127
Simulation of airflow through jet engine components ... 128
Modeling of Purkinje neuron ... 129
Computer-aided biopsy from magnetic resonance observation ... 130
Molecular dynamics of leukotriene molecules portrayed using "ghosting" ... 131
Gulf Coast Basin as described by geological database ... 132
Comparison of computed injectivity of carbon dioxide and field data from Texas oil reservoir ... 133
Groundwater remediation at Oak Ridge Waste Storage Facility ... 135
Sediment transport in Lake Erie ... 137
Astrophysical simulation of 8.8 million gravitational bodies ... 139
Pattern found by "chaos" research ... 141
Virtual reality environment for controlling scanning tunneling microscope ... 143
Detail of virtual reality environment ... 144
High school students using HPCC technologies ... 145
DLA fractal object research for 1993 Westinghouse Science Talent Search ... 146
Guide to Available Mathematical Software (GAMS) screen ... 148
Simulation of liquid molding process for manufacturing polymer composite automotive body panels ... 150
Simulation of current flow in bipolar transistor ... 152
Splash 2 board containing Field Programmable Gate Arrays ... 154
High Performance Fortran (HPF) code segments ... 157

High Performance Computing and Communications: Toward a National Information Infrastructure

The FY 1994 U.S. Research and Development Program

Executive Summary

The goal of the Federal High Performance Computing and Communications (HPCC) Program is to accelerate the development of future generations of high performance computers and networks and the use of these resources in the Federal government and throughout the American economy. Scalable high performance computers, advanced high speed computer communications networks, and advanced software are critical components of a new National Information Infrastructure (NII).
This infrastructure is essential to our national competitiveness and will enable us to strengthen and improve the civil infrastructure, digital libraries, education and lifelong learning, energy management, the environment, health care, manufacturing processes and products, national security, and public access to government information.

The HPCC Program evolved out of the recognition in the early 1980s by American scientists and engineers and leaders in government and industry that advanced computer and telecommunications technologies could provide huge benefits throughout the research community and the entire U.S. economy. The Program is the result of several years of effort by senior government, industry, and academic scientists and managers to initiate and implement a program to extend U.S. leadership in high performance computing and networking technologies and to apply those technologies to areas of profound impact on and interest to the American people.

The Program is planned, funded, and executed through the close cooperation of Federal agencies and laboratories, private industry, and academia. These efforts are directed toward ensuring that to the greatest extent possible the Program meets the needs of all communities involved and that the results of the Program are brought into the research and educational communities and into the commercial marketplace as rapidly as possible.

Now halfway through its five-year effort, the Program's considerable achievements include:

• More than a dozen high performance computing centers are in operation nationwide. New scalable high performance systems are in operation at these centers, more advanced systems are in the pipeline, and new systems software is making these systems increasingly easy to use. Benchmark results improve markedly with each new generation of hardware and software and bring the Program closer to its goal of achieving sustained teraflop (trillions of floating point operations per second) performance.

• Traffic on federally-funded networks and the number of new local and regional networks connected to these networks continue to double every year. More than 6,000 regional, state, and local IP (Internet Protocol) networks in the U.S., and more than 12,000 worldwide, are connected; more than 800 of the approximately 3,200 two-year and four-year colleges and universities in the Nation are interconnected; and an estimated 1,000 high schools also are connected to the Internet. Traffic on the NSFNET backbone has doubled over the past year and has increased a hundred-fold since 1988. Already, HPCC research in the next generation of networking technologies indicates that the Program goal of sustained gigabit (billions of bits) per second transmission speeds will be achieved by no later than 1996.

• Teams of researchers have made substantial progress in adapting software applications for use on scalable high performance systems and are taking advantage of the increased computational throughput to solve problems of increasing resolution and complexity.

Many of these problems are "Grand Challenges," fundamental problems in science and engineering with broad economic and scientific impact whose solution can be advanced by applying high performance computing techniques and resources. These science and engineering Grand Challenge problems have motivated both the creation and the evolution of the HPCC Program. Solution of these problems is critical to the missions of several agencies participating in the Program.
• The base of researchers, educators, and students trained in using HPCC technologies has grown substantially as agencies have provided training in these technologies and in application areas that rely on them.

The HPCC Program fully supports and is closely coordinated with the Administration's efforts to accelerate the development and deployment of the NII. The Program and its participating agencies will help provide the basic research and technological development to support NII implementation. To this end, several strategic and programmatic modifications have been made to the HPCC Program. The most significant of these is the addition of a new program component, Information Infrastructure Technology and Applications (IITA).

IITA is a research and development effort that will enable the integration of critical information systems and their application to "National Challenge" problems. National Challenges are major societal needs that computing and communications technology can help address in key areas such as the civil infrastructure, digital libraries, education and lifelong learning, energy management, the environment, health care, manufacturing processes and products, national security, and public access to government information. The IITA component will develop and demonstrate prototype solutions to National Challenge problems. IITA technologies will support advanced applications such as:

• Routine transmission of an individual's medical record (including X-ray and CAT scan images) to a consulting physician located a thousand miles away.

• The study of books, films, music, photographs, and works of art in the Library of Congress and in the Nation's great libraries, galleries, and museums on a regular basis by teachers and students anywhere in the country.

• The flexible incorporation of improved design and manufacturing to produce safer and more energy-efficient cars, airplanes, and homes.

• Universal access by industry and the public to government data and information products.

The five HPCC Program components and their key aspects are:

High Performance Computing Systems (HPCS)
• Scalable computing systems, with associated software, including networks of heterogeneous systems ranging from affordable workstations to large scale high performance systems
• Portable wireless interfaces

National Research and Education Network (NREN)
• Widened access by the research and education communities to high performance computing and research resources
• Accelerated development and deployment of networking technologies

Advanced Software Technology and Algorithms (ASTA)
• Prototype solutions to Grand Challenge problems through the development of advanced algorithms and software and the use of HPCC resources

Information Infrastructure Technology and Applications (IITA)
• Prototype solutions to National Challenge problems using HPCC enabling technologies

Basic Research and Human Resources (BRHR)
• Support for research, training, and education in computer science, computer engineering, and computational science, and infrastructure enhancement through the addition of HPCC resources

HPCC Program agencies work closely with industry and academia in developing, supporting, and using HPCC technology. In addition, industrial, academic, and professional societies provide critical analyses of the HPCC Program through conferences, workshops, and reports. Through these efforts, Program goals and accomplishments are better understood and Program planning and management are strengthened.
The National Coordination Office (NCO) for High Performance Computing and Communications was established in September 1992 to provide a central focus for Program implementation. The Office coordinates the activities of participating agencies and organizations, and acts as a liaison to Congress, industry, academia, and the public. National Library of Medicine Director Donald A. B. Lindberg concurrently serves as Director of the NCO, in which capacity he reports directly to John H. Gibbons, the Assistant to the President for Science and Technology and the Director of the Office of Science and Technology Policy.

In the past year, the National Security Agency, in the Department of Defense, and the Department of Education have joined the HPCC Program, bringing to 10 the number of participating agencies. The total FY 1993 HPCC budget for these 10 agencies is $805 million. For FY 1994, the proposed HPCC Program budget for the 10 agencies is $1.096 billion, representing a 36 percent increase over the appropriated FY 1993 level.

The HPCC Program is one of six multiagency programs under the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET). The other five programs are Advanced Manufacturing; Advanced Materials and Processing; Biotechnology Research; Global Change Research; and Science, Mathematics, Engineering, and Technology Education. Each of these depends on the capabilities provided by HPCC. The FY 1994 Program and this document are the products of the High Performance Computing, Communications, and Information Technology Subcommittee (HPCCIT) under the direction of the FCCSET Committee on Physical, Mathematical, and Engineering Sciences (CPMES).

Some Grand Challenge Problems

Build more energy-efficient cars and airplanes
Design better drugs
Forecast weather and predict global climate change
Improve environmental modeling
Improve military systems
Understand how galaxies are formed
Understand the nature of new materials
Understand the structure of biological molecules

Some National Challenge Application Areas

The civil infrastructure
Digital libraries
Education and lifelong learning
Energy management
The environment
Health care
Manufacturing processes and products
National security
Public access to government information

HPCC Program Overview

High performance computing has become a critical tool for scientific and engineering research. In many fields, computational science and engineering have become as important as the traditional methods of theory and experiment. This trend has been powered by computing hardware and software, computational methodologies and algorithms, availability and access to high performance computing systems, and the growth of a trained pool of scientists and engineers.

The High Performance Computing and Communications (HPCC) Program has accelerated this progress through its investment in advanced research in computer and network communications hardware and software, national networks, and agency high performance computing centers. The 10 Federal agencies that participate in the HPCC Program (listed on page 12), along with their partners in industry and academia, have made significant contributions to addressing critical areas of national interest to both the Federal government and the general public.

High performance computing is knowledge and technology intensive. Its development and application span all scientific and engineering disciplines.
Over the last 10 years, a new approach to computing has emerged that can support a broad range of needs ranging from workstations for individuals to the largest scale highest performance systems that are used as shared resources. The workstations may themselves be small scale parallel systems and may be connected by high performance networks into clusters. Through the combination of advanced computing and computer communication networks with associated software, these systems may be scaled over a wide performance range, may be heterogeneous, and may be shared over large geographic distances by interdisciplinary research communities. The largest scale parallel systems are referred to as massively parallel when hundreds, thousands, or more processors are involved. Networks of workstations provide access to shared computing resources consisting of other workstations and larger scale higher performance systems.

"High performance computing refers to the full range of supercomputing activities including existing supercomputer systems, special purpose and experimental systems, and the new generation of large scale parallel architectures."
- A Research and Development Strategy for High Performance Computing, Executive Office of the President, Office of Science and Technology Policy, November 20, 1987

The uses of and demand for advanced computer networking funded in part by the HPCC Program continue to expand. Progress and productivity in many fields of modern scientific and technical research rely on the close interaction of people located at distant sites, sharing and accessing computational resources across high performance networks. Their use of networks has provided researchers with unexpected and unique capabilities and collaborations. As a result, the scientific community is demanding even higher performance from networks. This increased demand includes increasing numbers of users; increasing usage by individual users; the need to transmit more information at faster rates; more sophisticated applications; and the need for increased security, privacy, and the protection of intellectual property.

The solution of "Grand Challenge" problems is a key part of the missions of many agencies in the HPCC Program. Grand Challenges are fundamental problems in science and engineering with broad economic and scientific impact whose solution can be advanced by applying high performance computing techniques and resources. These problems have taxed and will continue to tax any available computational and networking capabilities because of their demands for increased spatial and temporal resolution and increased model complexity. The fundamental physical sciences, engineering, and mathematical underpinnings are similar for many of these problems. To this end, a number of multiagency collaborations are underway. (Examples of these problems are identified in Some Grand Challenge Problems on page 5 and in HPCC Research Areas on page 14.)

Although the U.S. remains the world leader in most of the critical areas of computing and computer communications technology, this lead is being threatened by countries that recognize the strategic nature of these technology developments. The HPCC Program leads the Federal investment in the frontiers of computing and computer communications technologies, formulated to satisfy national needs in science and technology, the economy, human resources, and technology transfer.

The HPCC Program will help provide the technological foundation for the National Information Infrastructure (NII).
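The message-passing style of programming used on the scalable parallel systems and workstation clusters described at the start of this overview can be made concrete with a short sketch. The example below is not drawn from the Program itself; it is a minimal illustration in C, assuming the MPI message-passing interface (a contemporary counterpart of the PVM system pictured in the DOE section). Every process runs the same program, computes its share of a sum, and the partial results are combined over the network:

    #include <stdio.h>
    #include <mpi.h>

    /* Illustrative only: sum the integers 1..N by splitting the work
       across however many processes the system provides. */
    #define N 10000

    int main(int argc, char **argv)
    {
        int rank, size;     /* this process's id; total process count */
        long i, local = 0, total = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process sums a strided share of 1..N; adding processors
           shrinks every share, so the same code scales up or down. */
        for (i = rank + 1; i <= N; i += size)
            local += i;

        /* Combine the partial sums on process 0 across the interconnect
           or the network linking the workstations. */
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum(1..%d) = %ld, computed by %d processes\n",
                   N, total, size);

        MPI_Finalize();
        return 0;
    }

Whether the processes run within one massively parallel machine or across a cluster of workstations joined by a high performance network is hidden beneath the message-passing layer, which is what allows such configurations to be heterogeneous and geographically distributed.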
The NII will consist of computers and information appliances (including telephones and video displays), all containing computerized information, linked by high speed telecommunication lines capable of transmitting billions of bits of information in a second (an entire encyclopedia in a few seconds). A Nation of users will be trained to use this technology. The computing and networking technology that will make the NII possible is improving at an unprecedented rate, expanding its effectiveness and even further stimulating our imaginations about how it can be used.

Using these technologies, a doctor who seeks a second opinion could transmit a patient's entire medical record - X-rays and ultrasound scans included - to a colleague thousands of miles away, in less time than it takes to send a fax today. A school child in a small town could come home and through a personal computer reach into an electronic Library of Congress or a great art gallery or museum to view thousands of books, photographs, records, videos, and works of art, all stored electronically. At home, viewers could use equivalent commercial services to choose at any time to view one of thousands of films or segments of television programming.

The Administration is committed to accelerating the development and deployment of the NII, which the U.S. will need to compete in the 21st century. This infrastructure of "information superhighways" will revolutionize the way we work, learn, shop, and live, and will provide Americans the information they need, when they need it, and where they need it - whether in the form of text, images, sound, or video. It promises to have an even greater impact than the interstate highways or the telephone system. The NII will be as ubiquitous as the telephone system, but will be able to carry information at least 1,000 times faster. It will be able to transmit not only voice and fax, but will also provide hundreds of channels of interactive high-definition TV programming, teleconferencing, and access to huge volumes of information.

Thanks in part to the HPCC Program, this technology is already in use in many of our research laboratories where it is transforming the way research is done. Scientists and engineers can access information from computer databases scattered throughout the country and use high performance computers and research equipment thousands of miles away. Perhaps most importantly, researchers can collaborate and share information and tools with colleagues across the country and around the world as easily as if they were in the same room. This same telecommunications and computing technology could soon be available to all Americans, provided there is adequate public and private investment and forward-looking government policies that promote its deployment and use. The Administration believes that the Federal government has several important roles to play in the development of this infrastructure, which will be built and operated primarily by the private sector.

The HPCC Program is a key part of the Administration's strategy for the NII. On February 22, 1993, the President and the Vice President unveiled a Technology Initiative that outlined the five parts of the Administration's strategy for building the National Information Infrastructure:

1. Implement the HPCC Program.

This Program is helping develop the basic technology needed for the NII.

2. Develop NII technologies.
Through the new Information Infrastructure Technology and Applications (IITA) component of the HPCC Program, industry, universities, and Federal laboratories will collaborate to develop technologies needed to improve effective use of the NII.

3. Fund networking pilot projects.

The Federal government will provide funding for networking pilot projects through the National Telecommunications and Information Administration (NTIA) of the Department of Commerce, which currently plays a key role in developing Federal communications policy. NTIA will provide matching grants to states, school districts, libraries, and other non-profit entities to purchase the computers and network connections needed for distance learning and for linking into computer networks such as the Internet. These pilot projects will demonstrate the benefits of networking in the educational and library communities. In addition, to the extent that other agencies undertake networking pilot projects, NTIA will coordinate such projects, as appropriate.

4. Promote dissemination of Federal information.

Every year, the Federal government spends billions of dollars collecting and processing information (e.g., economic data, environmental data, and technical information). Unfortunately, while much of this information is very valuable, many potential users either do not know that it exists or do not know how to access it. The Administration is committed to using new computer and networking technology to make this information more available to the taxpayers who paid for it. This will require consistent Federal information policies designed to ensure that Federal information is made available at a fair price to as many users as possible while encouraging the growth of the information industry.

5. Reform telecommunications policies.

Government telecommunications policy has not kept pace with new developments in telecommunications and computer technology. As a result, government regulations have tended to inhibit competition and delay deployment of new technology and services. Without a consistent, stable regulatory environment, the private sector will hesitate to make the investments necessary to build the high speed national telecommunications network that this country needs to compete successfully in the 21st century. To address this and other problems, the Administration has created a White House-level interagency Information Infrastructure Task Force that will work with Congress, the private sector, and state and local governments to reach consensus on and implement policy changes needed to accelerate deployment of the NII.

Although the HPCC Program began as a research and development program, its impact is already being felt far beyond the research and education communities. The high performance computing technology developed under this Program has allowed users to improve understanding of global warming, discover more effective and safer drugs, design safer and more fuel-efficient cars and aircraft, and access huge "digital libraries" of information. The high speed networking technology developed and demonstrated by the HPCC Program has accelerated the growth of the Internet computer network and enabled millions of users not just to exchange electronic mail, but to access computers, digital libraries, and research equipment around the world.
This technology, which allows Internet users to hold a video conference from their desks, is enabling researchers across the country to collaborate as effectively as if they were in the same room. The new IITA component of the HPCC Program will accelerate the deployment of HPCC technology into the marketplace and ensure that all Americans can enjoy its benefits.

Federal investment in new technologies is one of the best investments the government can make, one that will provide huge, long-term benefits in terms of new jobs, better health care, better education, and a higher standard of living. This is particularly true in the case of the National Information Infrastructure, which will provide benefits to all sectors of our economy. Few initiatives offer as many potential benefits to all Americans.

Several strategic and programmatic modifications have been made to the HPCC Program in order to enable the NII Initiative to build on the Program's original four components. The most significant of these is the addition of the new IITA program component. IITA consists of research and development to enable the integration of critical information systems and the application of these systems to "National Challenges," problems where the application of HPCC technology can provide huge benefits to all Americans.

These efforts will develop and apply high performance computing and communications technologies to improve information systems for National Challenges such as the civil infrastructure, digital libraries, education and lifelong learning, energy management, the environment, health care, manufacturing processes and products, national security, and public access to government information. Working with industry, IITA will support the development of the NII and the development of the computer, network, and database technology needed to provide appropriate privacy and security protection for users.

Overview of the Five HPCC Components

Five equally important, integrated components represent the key areas of high performance computing and communications:

HPCS - High Performance Computing Systems

Extend U.S. technological leadership in high performance computing through the development of scalable computing systems, with associated software, capable of sustaining at least one trillion operations per second (teraops) performance. Scalable parallel and distributed computing systems will be able to support the full range of usage from workstations through the largest scale highest performance systems. Workstations will extend into portable wireless interfaces as technology advances.

NREN - National Research and Education Network

Extend U.S. technological leadership in computer communications by a program of research and development that advances the leading edge of networking technology and services. NREN will widen the research and education community's access to high performance computing and research centers and to electronic information resources and libraries. This will accelerate the development and deployment of networking technologies by the telecommunications industry. It includes nationwide prototypes for terrestrial, satellite, wireless, and wireline communications systems, including fiber optics, with common protocol support and applications interfaces.

ASTA - Advanced Software Technology and Algorithms

Demonstrate prototype solutions to Grand Challenge problems through the development of advanced algorithms and software and the use of HPCC resources.
Grand Challenge problems are computationally intensive problems such as forecasting weather, predicting climate, improving environmental monitoring, building more energy-efficient cars and airplanes, designing better drugs, and conducting basic scientific research.

IITA - Information Infrastructure Technology and Applications

Demonstrate prototype solutions to National Challenge problems using HPCC enabling technologies. IITA will support integrated systems technology demonstration projects for critical National Challenge applications through development of intelligent systems interfaces. These will include systems development environments with support for virtual reality, image understanding, language and speech understanding, and data and object bases for electronic libraries and commerce.

BRHR - Basic Research and Human Resources

Support research, training, and education in computer science, computer engineering, and computational science, and enhance the infrastructure through the addition of HPCC resources. Initiation of pilot projects for K-12 and lifelong learning will support expansion of the NII.

HPCC Program Goals

Extend U.S. technological leadership in high performance computing and computer communications.

Provide wide dissemination and application of the technologies to speed the pace of innovation and to improve the national economic competitiveness, national security, education, health care, and the global environment.

Provide key parts of the foundation for the National Information Infrastructure (NII) and demonstrate selected NII applications.

HPCC Agencies

ARPA - Advanced Research Projects Agency, Department of Defense
DOE - Department of Energy
ED - Department of Education
EPA - Environmental Protection Agency
NASA - National Aeronautics and Space Administration
NIH - National Institutes of Health, Department of Health and Human Services
NIST - National Institute of Standards and Technology, Department of Commerce
NOAA - National Oceanic and Atmospheric Administration, Department of Commerce
NSA - National Security Agency, Department of Defense
NSF - National Science Foundation

HPCC Program Strategies

Develop, through industrial collaboration, high performance computing systems using scalable parallel designs and technologies capable of sustaining at least one trillion operations per second (teraops) performance on large scientific and engineering problems such as Grand Challenges.

Support all HPCC components by helping to expand and upgrade the Internet.

Develop the networking technology required for deployment of nationwide gigabit speed networks through collaboration with industry. Demonstrate the productiveness of wide area gigabit networking to support and enhance Grand Challenge applications collaborations.

Demonstrate prototype solutions of Grand Challenge problems that achieve and exploit teraops performance.

Provide and encourage innovation in the use of high performance computing systems and network access technologies for solving Grand Challenge and other applications by establishing collaborations to provide and improve emerging software and algorithms.

Create an infrastructure, including high performance computing research centers, networks, and collaborations, that encourages the diffusion and use of high performance computing and communications technologies in U.S. research and industrial applications.

Work with industry to develop information infrastructure technology to support the National Information Infrastructure.
Leverage the HPCC investment by working with industry to implement National Challenge applications.

Enhance computational science as a widely recognized discipline for basic research by establishing nationally recognized and accepted educational programs in computational science at the pre-college, undergraduate, and postgraduate levels.

Increase the number of graduate and postdoctoral fellowships in computer science, computer engineering, computational science and engineering, and informatics, and initiate undergraduate computational sciences scholarships and fellowships.

HPCC Research Areas

• Aerospace
  - Aircraft
  - Spacecraft
• Basic Science and Technology
  - Astronomy
  - Computers and network communications
  - Earth sciences
  - Molecular, atomic, and nuclear structure
  - The nature of new materials
• Education
• Energy
  - Combustion systems (in automobile engines, for example)
  - Energy-efficient buildings
• Environment
  - Pollution
  - Weather, climate, and global change prediction and modeling
• Health
  - Biological molecules
  - Improved drugs
• Library and Information Science
• Manufacturing
• Military Systems and National Security Systems

HPCC Program Management

The HPCC Program is planned, funded, and executed with the close cooperation of Federal agencies and laboratories, private industry, and academia. These efforts are directed toward ensuring that to the greatest extent possible the Program meets the needs of all communities involved and that the results of the Program are brought into the research and educational communities and into the commercial marketplace as rapidly as possible.

The National Coordination Office (NCO) for High Performance Computing and Communications was established in September 1992. Donald A. B. Lindberg was selected to direct the NCO, while continuing to serve as Director of the National Library of Medicine. The NCO coordinates the activities of participating agencies and organizations, and serves as a liaison to Congress, industry, academia, and the public. As Director of the NCO, Lindberg reports to John H. Gibbons, the Assistant to the President for Science and Technology and the Director of the Office of Science and Technology Policy.

The Director of the NCO also chairs the CPMES High Performance Computing, Communications, and Information Technology (HPCCIT) Subcommittee. The Subcommittee meets regularly to coordinate agency HPCC programs through information exchanges, the common development of interagency programs, and the review of individual agency plans and budgets. It is also informed by presentations by other Federal working groups and by public bodies.

Several HPCCIT working groups coordinate activities in specific areas. Individual agencies are responsible for coordinating these efforts:

• The Communications group, led by NSF, coordinates network integration activities and works closely with the Federal Networking Council (FNC). The FNC consists of representatives from interested Federal agencies, coordinates the efforts of government HPCC participants and other NREN constituents, and provides liaison to others interested in the Federal Program.

• The Applications group, led by NASA, coordinates activities related to Grand Challenge applications, software tools needed for applications development, and software development at high performance computing centers.

• The Research group, led by ARPA, focuses on basic research, technology trends, and alternative approaches to address the technological limits of information technology.
Its activities are integrated into the overall research program through meetings with the various technical communities.

• The Education group, led by NIH, coordinates HPCC education and training activities and provides liaison with other education-related efforts under FCCSET.

A Federal Networking Advisory Committee (FNCAC) supports the FNC by providing input and recommendations from broad communities and constituencies of the NREN effort. Pursuant to P.L. 102-194, the High Performance Computing Act of 1991, a High Performance Computing Advisory Committee will be established to support the overall HPCC Program. The Committee will improve communications and collaborations with U.S. industry, universities, state and local governments, and the public.

Each participating agency has focal points for addressing matters related to the HPCC Program. Organizational and management structures facilitating participation in the HPCC Program are described in the sections presenting individual agency programs. Many participating agencies have published documents about their HPCC programs and solicitations for research in HPCC areas; requests should be directed to the respective HPCC contacts listed at the end of this document.

U.S. industry, academia, and other developers and users of HPCC technology are involved in agency program planning and execution through advisory committees, commissioned reviews, self-generated commentary, and through direct participation in HPCC research and development efforts and as suppliers of technology.

The HPCC Program has benefited from the interest, advice, and specific recommendations of a variety of governmental, industrial, academic, professional, trade, and other organizations. These include:

• Federal organizations
  - Commerce, Energy, NASA, NLM, Defense Information (CENDI) Group
  - Congressional Research Service (CRS)
  - Department of Commerce HPCC Coordinating Group
  - Department of Health and Human Services Agency Heads
  - Federal Aviation Administration (FAA)
  - Federal Information Resources Management Policy Council (FIRMPOC)
  - Federal Library and Information Center Committee (FLICC) Forum on Information Policies
  - Food and Drug Administration (FDA) Center for Drug Evaluation and Research
  - House Armed Services Committee Staff
  - House Subcommittee on Science; Committee on Science, Space, and Technology
  - House Subcommittee on Technology, Environment, and Aviation; Committee on Science, Space, and Technology
  - House Subcommittee on Telecommunications and Finance; Committee on Energy and Commerce
  - NASA Advisory Council
  - National Telecommunications and Information Administration (NTIA)
  - NIH Information Resources Management Council
  - Securities and Exchange Commission (SEC)
  - Senate Science Subcommittee; Committee on Commerce, Science, and Transportation

• Federally-chartered institutions
  - National Academy of Sciences (NAS)/Computer Science and Technology Board (CSTB)
  - NAS/CSTB/NRENaissance Study Group
  - NAS/Institute of Medicine
  - NAS/National Research Council (NRC) Executive Board

• State organizations
  - Texas Education Network
  - Wisconsin Governor's Council for Science and Technology

• University organizations
  - Computing Research Association (CRA)
  - EDUCOM

• Professional societies
  - American Association of Engineering Societies (AAES)
  - American College of Cardiologists
  - American Institute of Medical and Biological Engineering (AIMBE)
  - Association of American Medical Colleges
  - Coalition of Academic Supercomputer Centers (CASC)
  - Computer Professionals for Social Responsibility (CPSR)
  - International Medical Informatics Association (IMIA)

• Industrial organizations
  - American Electronics Association (AEA)
  - Computer Systems Policy Project (CSPP)
  - Information Industry Association (IIA)

• Local organizations
  - Montgomery County High Technology Council
  - The Suburban Maryland Technology Council

• Other organizations with interest in HPCC
  - Coalition for Networked Information (CNI)
  - Foundation for Educational Innovation
  - Microelectronics and Computer Technology Corporation (MCC)
  - Science, Technology, and Public Policy Program, John F. Kennedy School of Government, Harvard University
  - Supercomputing Center Directors

• Representatives from individual corporations, publishers, research laboratories, and supercomputing centers

• Foreign governments and organizations, including the British and Canadian governments

The HPCC Program has sponsored several major conferences and workshops. HPCC representatives have given presentations at a number of related conferences as well. These are itemized below.
Major HPCC Conferences and Workshops During FY 1993

Event / Sponsor
High Performance Computing Industry Presentations / NASA
Improving Medical Care: Vision of HPCCIT / National Library of Medicine
HPCC Applications in Chemistry / NIH
HPCC Comprehensive Review / NASA
Blue Ribbon Panel on HPCC / NSF
Workshop and Conference on Grand Challenge Applications and Software Technology / Nine HPCC Agencies

HPCC-Related Conferences

American Institute of Medical and Biological Engineering 1993 Annual Meeting
Coordinating Federal Health Care: Progress and Promise Workshop
Council on Competitiveness - Forum on Information Infrastructure
Electronic Industries Association Fifth Annual Federal Information Systems Conference
Georgetown University Medical School Computer Health Care Conference
Institute of Electrical and Electronics Engineers (IEEE) Information Exchange Conference
International Council for Scientific and Technical Information (ICSTI) Annual Meeting
NII Round Table sponsored by 3Com Corporation
Research Consortium Inc. (RCI) North American Annual Member Executive Conference
Scientific Computing and Automation Conference
Supercomputing '92
3Com Interop Conference
Wide Area Information Servers (WAIS) Conference

The dynamism and flexibility of the HPCC Program is illustrated by the incorporation of many of these recommendations into current Program plans. For example, the CSPP conducted its second intensive study of the HPCC Program structure and in January 1993 published "Perspectives on the National Information Infrastructure: CSPP's Vision and Recommendations for Action." Many of their recommendations formed the basis for plans in the HPCC Program's new IITA component.

Interdependencies Among Components

A complex web of interdependencies exists among the five components of the HPCC Program; success in each component is critical to the success of the others. Because of these interdependencies, maintaining balance among the components is crucial for Program success. The current balance is designed to foster that success as rapidly as possible. Some examples of the large number of interdependencies are given below.

Examples of two-component interdependencies:

HPCS and NREN - The development of routers for the NREN component and other advanced component technologies depends on HPCS research in scalable computing and component technologies.

HPCS and ASTA - The advanced computing systems with associated systems software are used by ASTA for Grand Challenge research.

HPCS and IITA - HPCS systems are used by IITA for National Challenge research.

NREN and ASTA - ASTA's Grand Challenge research helps to determine requirements for high performance networks for which NREN must provide new capabilities and technologies. Examples are distributed heterogeneous computing, scientific visualization, and the NSF metacenter research efforts.

NREN and IITA - NREN provides the networking technology base for IITA. An example is interactive video.

One example of a three-component interdependency:

HPCS, NREN, ASTA - Networks developed under the NREN component are used to access testbeds developed under the HPCS component in ASTA Grand Challenge research.

An example of a four-component interdependency:

HPCS, NREN, ASTA, IITA - Some computationally intensive IITA applications will use systems developed under the HPCS component, connected by NREN-developed networks, and Grand Challenge software developed under ASTA. An example is managing emergencies such as hurricanes.
Two five-component interdependencies:

BRHR dependencies - BRHR depends on each of the other components, both individually and in combination, for research subjects. BRHR provides training and education in the four other components and their interrelationships.

Coordination Among Agencies

The participating agencies cooperate extensively in their efforts toward accomplishing HPCC Program goals, in part through the management vehicles described earlier and through the use of HPCC products from those other agencies wherever feasible. There are many other collaborations, including:

• Evaluation of early systems - these systems are procured primarily by NSF, DOE, and NASA; together with NIH, NSA, NOAA, and EPA, they are evaluated for mission-specific computational and information processing applications.

• DOE and NASA are coordinating testbed development to ensure that a diverse set of computing systems are evaluated.

• NIST is developing guidelines for measuring system performance, performance measurement tools, and software needed to monitor and improve the performance of advanced computing systems at HPCC-sponsored high performance computing centers.

• The gigabit testbeds (described on pages 37-39 in the NREN section).

• High speed networking experiments - ARPA, NASA, and NSA collaborate.

• Network security - ARPA, NSA, NIST, and other agencies collaborate.

• The NSF Supercomputer Centers. Other agencies jointly support and use these environments for their own missions and constituencies. One example is the NIH Biomedical Research Technology program in biomedical computing applications.

• The Concurrent SuperComputing Consortium (described on page 44 in the ASTA section).

• The National Consortium for High Performance Computing established by ARPA in cooperation with NSF (described on pages 44-45 in the ASTA section).

• The High Performance Software Sharing Exchange uses ARPA's wide area file system, NASA's distributed access to electronic data, and software repositories from DOE and NIST. These repositories are accessed by the other agencies.

• Joint agency workshops (for example, the recent "Workshop and Conference on Grand Challenge Applications and Software Technology").

• Representation on research proposal review panels (for example, DOE uses other agency experts in its Grand Challenge Review Committee).

Membership on the HPCCIT Subcommittee

The National Coordination Office and the HPCCIT Subcommittee actively encourage other Federal agencies (Departments, agencies within Departments, or independent agencies) to consider joining HPCCIT either as Official Members or as Observers.

Official Membership

If an agency proposes a program and the HPCCIT Subcommittee determines that the program meets the Evaluation Criteria (see below) and approves it, then the Subcommittee will recommend to the Committee on Physical, Mathematical, and Engineering Sciences (CPMES) that the agency be added to the HPCC Program and participate in the budget crosscut.

Observer Status

Upon request from the agency and approval by the HPCCIT Subcommittee, an agency may participate in the technical program and attend HPCCIT Subcommittee meetings.

Requests for membership or observer status should be directed to the National Coordination Office (listed on page 166 in the Contacts section).

Evaluation Criteria for the HPCC Program

Relevance/Contribution.
The research must significantly contribute to the overall goals and strategy of the Federal High Performance Computing and Communications (HPCC) Program, including computing, software, networking, information infrastructure, and basic research, to enable solution of the Grand Challenges and the National Challenges.

Technical/Scientific Merit. The proposed agency program must be technically/scientifically sound and of high quality, and must be the product of a documented technical/scientific planning and review process.

Readiness. A clear agency planning process must be evident, and the organization must have demonstrated capability to carry out the program.

Timeliness. The proposed work must be technically/scientifically timely for one or more of the HPCC Program components.

Linkages. The responsible organization must have established policies, programs, and activities promoting effective technical and scientific connections among government, industry, and academic sectors.

Costs. The identified resources must be adequate, represent an appropriate share of the total available HPCC resources (e.g., a balance among program components), promote prospects for joint funding, and address long-term resource implications.

Agency Approval. The proposed program or activity must have policy-level approval by the submitting agency.

Technology Collaboration with Industry and Academia

HPCC agencies work in partnership with each other and with industry and academia to develop cost-effective high performance computing and communications technologies. The HPCC Program fosters the development and use of advanced off-the-shelf technology so that research results and products are simultaneously available to support both Federal agency missions and the computational needs of the academic and private sectors.

HPCC research is carried out in close collaboration with industry, particularly manufacturers of computer and communications hardware, software developers, and representatives from key applications areas. A number of mechanisms are used to facilitate these interactions: consortia, contracts, cooperative agreements such as Cooperative Research and Development Agreements (CRADAs), grants, and other transactions. Some examples of these collaborations are:

□ The gigabit testbeds (described on pages 37-39 in the NREN section).

□ The Concurrent SuperComputing Consortium (described on page 44 in the ASTA section).

□ The National Consortium for High Performance Computing established by ARPA in cooperation with NSF (described on pages 44-45 in the ASTA section).

□ NSF's National Supercomputer Centers and Science and Technology Centers and their metacenter efforts (described on page 66 in the NSF section).

□ The Computational Aerosciences Consortium (described on page 85 in the NASA section).

□ The Consortium on Advanced Modeling of Regional Air Quality (CAMRAQ) (described on pages 45-46 in the ASTA section).

The National Coordination Office for High Performance Computing and Communications meets with representatives from industry and academia. Major HPCC conferences and workshops held by HPCC agencies or addressing HPCC issues have included a number of representatives from industry and academia as well. Further details about these activities are presented in the Program Management section.
Agency Budgets by HPCC Program Components

FY 1993 Budget (Dollars in Millions)

Agency    HPCS    NREN    ASTA    BRHR    TOTAL
ARPA     119.5    43.6    49.7    62.2    275.0
NSF       25.9    40.5   108.0    50.8    225.2
DOE       10.9    10.0    65.3    14.8    101.0
NASA      11.1     9.0    59.1     2.9     82.1
NIH        3.0     4.1    31.4     8.0     46.5
NSA       34.8     3.2     5.4     0.2     43.6
NOAA       -       0.4     9.4     -        9.8
EPA        -       0.4     6.0     1.5      7.9
NIST       0.3     1.2     0.6     -        2.1
ED         -       2.0     -       -        2.0
TOTAL    205.5   114.4   334.9   140.4    795.2

FY 1994 Budget (Dollars in Millions)

Agency    HPCS    NREN    ASTA    IITA    BRHR    TOTAL
ARPA     151.8    60.8    58.7     -      71.7    343.0
NSF       34.2    57.6   140.0    36.0    73.2    341.0
DOE       10.9    16.8    75.1     -      21.0    123.8
NASA      20.1    13.2    74.2    12.0     3.5    123.0
NIH        6.5     6.1    26.2    24.0     8.3     71.1
NSA       22.7    11.2     7.6     -       0.2     41.7
NIST       0.3     1.2     0.6    24.0     -       26.1
NOAA       -       1.6    10.5     -       0.3     12.4
EPA        -       0.7     9.6     -       1.6     11.9
ED         -       2.0     -       -       -        2.0
TOTAL    246.5   171.2   402.5    96.0   179.8  1,096.0

[Chart: Agency Responsibilities by HPCC Program Component - a matrix summarizing each agency's activities under the HPCS, NREN, ASTA, IITA, and BRHR components. Representative entries include scalable computing systems, gigabit technologies and testbeds, and support for university research (ARPA and NSF); NSFNET backbone management and the Supercomputer Centers (NSF); ESnet management and energy Grand Challenges research (DOE); AERONet and NSInet management and Grand Challenges research in aeronautics and the Earth's environment (NASA); Medical Connections, the "Visible Human" project, and biomedical Grand Challenges research (NIH); performance measurement instrumentation and the Guide to Available Mathematical Software (NIST); environmental monitoring, prediction, and assessment applications (NOAA and EPA); and PATHWAYS for Internet access by teachers and parents (ED).]

High Performance Computing Systems (HPCS)

The HPCS component produces scalable parallel computing systems in collaboration with industry and academia.
Unlike the dedicated, single processor architectures of the past, scalable parallel systems have the property that increases in system size yield proportional increases in performance. This is achieved by connecting multiple processors and memory units through a scalable interconnection structure. Scalable systems can be configured over a wide range, delivering high performance computing to users at both small and very large scales. Because the computing system designs are scalable, they can also be used in smaller scale workstations. Such workstations may have high performance graphics capabilities that enable visualization of computational results and provide interactive interfaces to the user. These workstations may be linked to local networks connected to the Internet, a network of networks that includes high performance subnets linking higher performance and larger scale computing systems throughout the country.

HPCS focuses on the fundamental scientific and technological challenges of accelerating the advance of affordable scalable parallel high performance computing systems. Critical underlying technologies are developed in prototype form along with associated design tools. This allows evaluation of alternatives as the prototype systems mature. Evaluation continues throughout the research and development process, with experimental results used to refine successive generations of systems. Scalable computing technologies used in combination with scalable networking technologies provide the technology base needed to address the Grand Challenges and the National Challenges. The necessary software technologies are developed by the ASTA and IITA components.

HPCS is composed of four elements to produce progressively more advanced and mature systems:

I. Research for Future Generations of Computing Systems

This element develops the underlying architecture, components, packaging (integration of electronics, photonics, power, cooling, and other components), systems software, and scaling concepts to achieve affordable high performance computing systems. These efforts ensure that the required advanced technologies will be available for the new systems and provide a foundation for the more powerful systems to follow. This element also produces the basic approaches for systems software, programming languages, and environments for heterogeneous configurations of workstations and high performance servers.

II. System Design Tools

This element develops computer aided design tools and the technology to allow multiple design tools to work together in order to enable the design, analysis, simulation, and testing of system components and modules. These tools make rapid prototyping of new system concepts possible. New design tools will be produced to enable the design of more advanced prototype systems using new technologies as they emerge.

III. Advanced Prototype Systems

Systems capable of scaling to 100 gigaops (billions of operations per second) performance have begun to emerge. Teraops (trillions of operations per second) performance designs will be demonstrated by the mid 1990s. Research in high performance systems focuses on reducing the cost and size of these systems so they can be used for a broader range of applications.

IV. Evaluation of Early Systems

Experimental systems will be placed at sites where researchers can provide feedback to systems and software designers. Performance evaluation criteria for systems and results of evaluations will be made widely available.
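The scalability property on which HPCS rests can be made concrete with a standard fixed-time (scaled) speedup model - a textbook illustration, not an analysis from the Program; the serial fraction and processor count below are assumed example values:

    % Fixed-time (scaled) speedup on P processors, where s is the
    % fraction of the scaled workload that remains serial:
    S(P) = s + (1 - s)\,P
    % Example with assumed values s = 0.01 and P = 1024:
    S(1024) = 0.01 + 0.99 \times 1024 \approx 1014

Under a model of this kind, performance grows nearly in proportion to system size as long as the problem grows with the machine, which is why the same designs can serve both workstation scale and very large scale configurations.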
Scalability enables small to medium size systems to be used for early performance evaluation and software development in preparation for larger scale applications. Larger scale systems are included in the ASTA component for applications such as the Grand Challenges.

HPCS Accomplishments

□ Small, medium, and large scale systems developed under the HPCS component have been deployed and are being used in the ASTA component. This includes large systems deployed in various high performance computing centers and some systems installed in heterogeneous configurations.

□ The small and medium scale systems are being used to develop algorithms and software, including fundamental building blocks for Grand Challenge problems and a wide variety of new scientific computation models. These prototypes are characterized by very fast routing and component technology, capable of scaling up to 100 gigaops system configurations.

□ Scalable systems continue to be evaluated and refined, providing early feedback on hardware, operating systems, compilers, software development tools, input/output systems, and mass storage systems. This process has resulted in rapid upgrades in a commercial system to a scalable operating system based on very small and efficient software called microkernel technology. Extensions such as real time services and distributed and replicated file systems are under development.

□ New technologies are providing a scalable, modular approach to the mass storage performance and archiving needed in the new large scale parallel computing systems:

- Prototype scalable mass storage systems that use parallel arrays of inexpensive disk drives to achieve both high aggregate data transmission rates and large storage capacity have been demonstrated (a simple sketch of this striping idea follows this list). These systems demonstrate an approach that is the basis for a new generation of high performance file servers and mass storage systems that are internal to scalable parallel computing systems.

- Petabyte mass storage systems, which can hold images from about 30 university libraries, are now available using commodity storage modules with automated robotic transfer to multiple read/write units. These systems help meet the dramatically increasing requirements for mass storage - from storing library information to storing remotely sensed satellite imagery.

□ Evolving advanced component technology is being employed in early experimental computer systems. This technology will form the basis for a new generation of higher performance, physically smaller, and more affordable computing systems. Examples include the following:

- Single chip nodes that integrate processing, storage and communications, new systems software, and new development environments have been demonstrated. These have the potential of providing very cost-effective scalable computing using these single chip or fine grained nodes.

- Multichip modules are being studied in experiments to determine the optimal design for future scalable units.

□ Supporting technologies that enable the rapid design, prototyping, and manufacturing of HPCS systems have made an important contribution to HPCS progress. Examples of rapid prototyping facilities used by researchers include:

- A laser direct write multichip module tool and associated design capability has been developed to reduce the prototyping time of new modules from months to two weeks. This enables designs to be developed more rapidly, and allows for the exploration of more effective and cost-effective alternatives.
- New algorithms have been incorporated into design systems that extend synthesis to new technologies such as field programmable gate arrays and various integrated circuit technologies.

- A model "factory of the future," linking advanced design technologies from workstations to large scalable computing, was completed and coupled to a prototype factory (described on pages 152-153 in the Case Studies section). These technologies form the basis of a new generation of computational prototyping, exploiting networked and distributed design processes for rapidly prototyping future generations.

□ New competitive contractual mechanisms have been developed to enable the timely purchase of experimental systems. These joint government-industry research projects allow experimental use and early evaluation by a variety of user communities, which in turn provides early feedback to the hardware vendors and to developers of associated software technology. Such projects accelerate the maturation of these complex technologies in preparation for their larger scale use by the ASTA component.
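To illustrate the striped mass storage approach noted in the accomplishments above, the following minimal sketch shows how a logical block stream can be spread round-robin across an array of inexpensive drives so that all drives can transfer in parallel. The disk count, stripe unit, and mapping are illustrative assumptions, not details of any Program system:

    /* Illustrative sketch of striping across an array of inexpensive
     * disks: logical block i is mapped to disk (i mod NUM_DISKS), so
     * a large sequential transfer keeps all drives busy at once for
     * roughly NUM_DISKS times the bandwidth of a single drive.
     * Parameters are hypothetical example values.                   */
    #include <stdio.h>

    #define NUM_DISKS  8        /* assumed array size            */
    #define BLOCK_SIZE 65536L   /* assumed stripe unit, in bytes */

    /* Map a logical block number to a (disk, offset) pair. */
    static void map_block(long logical, int *disk, long *offset)
    {
        *disk   = (int)(logical % NUM_DISKS);
        *offset = (logical / NUM_DISKS) * BLOCK_SIZE;
    }

    int main(void)
    {
        long block;
        for (block = 0; block < 16; block++) {
            int  disk;
            long offset;
            map_block(block, &disk, &offset);
            printf("logical block %2ld -> disk %d, byte offset %ld\n",
                   block, disk, offset);
        }
        return 0;
    }

Because successive blocks land on different drives, the array delivers the high aggregate data rates and large capacity described above while using only commodity components.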
National Research and Education Network Program (NREN)

The NREN* component will establish a gigabit communications infrastructure to enhance the ability of U.S. researchers and educators to perform collaborative research and education activities, regardless of their physical location or local computational and information resources. This infrastructure will be an extension of the Internet, and will serve as a catalyst for the development of the high speed communications and information systems needed for the National Information Infrastructure (NII). The emerging NII will require: advances in the underlying foundations of networking technology and in generic networking services; the development and deployment of major new networking technologies; broader access to state-of-the-art high performance computing facilities; and early testing of new commercial products and services so that these can be effectively integrated into NREN associated networks.

* NSF has applied for registration of the service mark "NREN" with the U.S. Patent and Trademark Office.

The principal objectives of the NREN component are to:

□ Establish and encourage wide use of gigabit networks by the research and education communities to access high performance computing systems, research facilities, electronic information resources, and libraries.

□ Develop advanced high performance networking technologies and accelerate their deployment and evaluation in research and education environments.

□ Stimulate the wide availability at reasonable cost of advanced network products and services from the private sector for the general research and education communities.

□ Catalyze the rapid deployment of a high speed general purpose digital communications infrastructure for the Nation.

The NREN component's Interagency Internet and Gigabit Research and Development elements contribute to reaching these goals:

I. Interagency Internet

Near-term enhanced network services will be developed on the Nation's evolving networking and telecommunications infrastructure for use by mission agencies and the research and education communities. Interagency Internet activities include expansion of the connectivity and enhancement of the capabilities of the federally funded portion of today's research and education networks, and deployment of advanced high performance technologies and services as they mature. Coordinated among Federal agencies in cooperation with the private sector, this effort succeeds the Interim Interagency NREN element identified in previous reports about the HPCC Program.

The Interagency Internet is a network of networks, ranging from high speed cross-country networks, to regional and mid-level networks, to state and campus network systems. Its major Federal components are the national research agency networks listed below. When these agencies' "backbone networks" are upgraded, together they will form a national gigabit network to support research and education. This network may in turn serve as a prototype for broader national gigabit networks.

Major Federal Components of the Interagency Internet

NSFNET - NSF-funded national backbone network service
ESnet - DOE's Energy Sciences Network
NSI - NASA's Science Internet
ARPA's exploratory networks

The Interagency Internet and the other, non-federally-supported, portions of the Internet connect the Nation's communities of researchers and educators to each other; to facilities and resources such as computation centers, databases, libraries, laboratories, and scientific instruments; and to supporting organizations such as publishers and hardware and software vendors. The Interagency Internet also provides international connections that serve the national interest. These services will be continually enhanced as the Interagency Internet evolves.

The Interagency Internet also provides a testbed to stimulate the market for advanced network technologies such as Synchronous Optical Network (SONET) transmission infrastructure, Asynchronous Transfer Mode (ATM) cell switches, high speed routers, computer interfaces, and other communications hardware and software. These technologies are being developed by the telecommunications industry, routing vendors, and computer manufacturers, in collaboration with government and academia, as part of the NREN component of the HPCC Program. Through these efforts, the HPCC agencies will provide expertise in the systems integration of key technologies to form an integrated and interoperable high performance network system that will continue to meet the needs of the Nation's research and education communities. Once the initial development risks are reduced through this collaboration among government, industry, and academia, the U.S. communications community can build on these experiences and develop new products and services to serve the broader marketplace of NII applications.

II. Gigabit Research and Development

The Gigabit Research and Development element is a comprehensive program to develop the fundamental technologies needed for a national network with advanced capabilities and with a minimum gigabit per second (Gb/s) transmission speed. Gigabit research and development takes place in two ways: through a basic research program that provides the building blocks to move data at increasingly faster rates with novel techniques such as all optical networking; and through the deployment of testbed networks that use and prove the viability of these techniques. The testbeds provide an environment for the development of advanced applications targeted toward the solution of HPCC Grand Challenges. As these technologies for networking hardware and software are developed and shown to be viable and cost-effective, they will be incorporated into the Interagency Internet.
They will also provide a foundation for supporting Grand and National Challenges and their further extension to the National Information Infrastructure. Building on this foundation, the government and industrial partners will develop prototypes for a future high capability commercial communications infrastructure for the Nation.

NREN Component Management

Each agency implements its own NREN activities through normal agency structures and coordination with OMB and OSTP. All 10 agencies participate in the NREN component as users. Multiagency coordination is achieved through FCCSET, CPMES, HPCCIT, and the HPCCIT High Performance Communications working group.

Operation of the Interagency Internet is coordinated by the Federal Networking Council (FNC), which consists of agency representatives. The FNC and its Executive Committee establish direction, provide further coordination, and address technical, operational, and management issues through working groups and ad hoc task forces. The FNC has established the Federal Networking Council Advisory Committee, which consists of representatives from several sectors including library sciences, education, computers, telecommunications, information services, and routing vendors, to assure that program goals and objectives reflect the interests of these broad sectors.

Accomplishments

Increased Connectivity and Use of the Interagency Internet

The Interagency Internet has experienced tremendous growth in the number of connections (and hence the number of researchers) it supports, and in the amount of traffic that it carries. Significant leveraging of the Interagency Internet activities has resulted in the following:

□ More than 6,000 regional, state, and local IP (Internet Protocol) networks in the U.S. are connected as of March 1993. More than 12,000 such networks are connected worldwide.

□ More than 800 of the approximately 3,200 two-year and four-year colleges and universities in the Nation are interconnected, including all of the schools in the top two categories ("Research" and "Doctorate") of the Carnegie classification.

□ An estimated 1,000 U.S. high schools also are connected. The exact number is difficult to determine since regional networks have widely leveraged NSF and other agency funds to connect such institutions without direct agency involvement. For example, state initiatives in Texas and North Carolina proceed with little or no Federal funding or involvement.

Traffic on the NSFNET backbone has doubled over the past year, and has increased a hundred-fold since 1988. Improvements and upgrades to the network made by NSF have kept pace with the increased traffic and have advanced the state of network technology and operations.

ARPA, DOE, NASA, and NSF provide international connectivity to the Pacific Rim, Europe, Japan, the United Kingdom, South America, China, and the former Soviet Union, for mission-specific scientific collaborations and general research and education infrastructure requirements. These links are of varying speeds, with many of the larger "fat pipes" cost-shared and co-managed by agencies requiring high speed connectivity.

[Schematic of the interconnected "backbone" networks of NSF, NASA, and DOE, together with selected client regional and other networks. The backbone topology is shown on a plane above the outline of the U.S. Line segments connect backbone nodes with geographic locations where client networks attach.]
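As rough context for the NSFNET traffic figures cited above, a hundred-fold increase implies roughly a 2.5x average annual growth factor if, as assumed here, it accumulated over the five years from 1988 to 1993 (the exact interval is an assumption, not a figure from the report):

    /* Back-of-envelope growth arithmetic for the traffic figures
     * above: a 100-fold increase over an assumed 5-year span
     * implies an average annual growth factor of 100^(1/5).      */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double years  = 5.0;     /* assumed span (1988-1993) */
        double growth = 100.0;   /* hundred-fold increase    */
        double annual = pow(growth, 1.0 / years);
        printf("average annual growth factor: %.2f\n", annual); /* ~2.51 */
        return 0;
    }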
Procurement of Fast Packet Services

The need for advanced networking services has been driven by the requirements of distributed scientific visualization and remote experiment control, and more recently by the phenomenal growth of multimedia applications. In order to satisfy these needs, DOE and NASA are in the process of jointly acquiring fast packet services based on new telecommunications industry-provided services (for example, ATM/SONET). The initial deployment will provide a 45 megabits per second (Mb/s) backbone service; upgrades to higher speeds are planned as soon as technology and budgets permit.

A key feature of this procurement is the use of as yet untariffed telecommunications provider services in an alpha test mode. In order to meet this procurement's deployment schedule, the telecommunications industry has accelerated its prototyping and deployment plans. The IITA component of the HPCC Program will later use these technologies via commercial services from the telecommunications industry and router vendor market.

NASA and ARPA will use NASA's geostationary Advanced Communications Technology Satellite (ACTS), launched in 1993, to provide even higher speed (for example, 622 Mb/s) ATM/SONET transmission to remote sites such as Alaska and Hawaii. The deployment will allow the HPCC agencies to gain experience in interfacing both terrestrial and satellite high speed communications systems. ARPA manages the development and deployment of the ACTS High Data Rate Terminals.

Internet Network Information Center (InterNIC)

In 1992, NSF issued a competitive solicitation for an Internet Network Information Center (InterNIC) to provide a variety of services to the worldwide Internet community. Awards were made to the three organizations listed below to collaborate in providing these services. Information about how to connect to the Internet, pointers to network tools and resources, and seminars on various topics held across the country are available from the InterNIC Information Services (listed on page 167 in the Contacts section).

InterNIC Service Awards

Information Service - General information about the Internet and how to use it - awarded to General Atomics/CERFnet.
Directory and Database Service - Coordinated directory of the growing number of resources available on the Internet - awarded to AT&T.
Registration Service - Registry of the growing number of networks connected to the Internet - awarded to Network Solutions Inc.

Prompted by the recent development of network-based tools that seek out information by querying remote databases, NSF has established a Clearinghouse for Networked Information Discovery and Retrieval tools for assembling, disseminating, and enhancing such publicly available network tools. The clearinghouse complements the InterNIC.

Solicitation of the Next Generation NSFNET

Now that basic network services are readily and economically available commercially, NSFNET will, beginning in 1994, evolve into a very high speed national backbone for research applications requiring high bandwidth. In a new solicitation, NSF is requesting proposals to:

□ Establish an unspecified number of Network Access Points (NAPs) where regional and other service providers will be able to exchange traffic and routing information.

□ Establish a Routing Arbiter to ensure coherent and consistent routing of traffic among NAP participants.

□ Establish a very high speed Backbone Network Service (vBNS) linking the NSF-supported Supercomputing Centers.
□ Allow existing or realigned regional networks to connect to NAPs, or to Network Service Providers who would connect to NAPs, for interregional connectivity.

The NAPs will provide connectivity to mid-level or regional networks serving both commercial and research and education customers and will also provide access to the vBNS. With respect to regional networks, this solicitation addresses only interregional connectivity. Ongoing complementary intraregional support will continue and will be funded at constant or rising levels. These efforts include the Connections Program, which provides grants either to individual institutions or to more effective or more economical aggregates. A separate announcement to address intraregional connection of high-bandwidth users to the vBNS is planned for FY 1994.

Interconnecting the NSF Supercomputer Centers, the vBNS will be part of the Interagency Internet. It is expected that the vBNS will run at a minimum speed of 155 Mb/s and that low speed connections to NAPs will be routed elsewhere.

Gigabit Research Projects

By 1996, gigabit research will lead to an experimental nationwide network able to deliver speeds up to 2.4 billion bits per second to individual end user applications.

Ongoing research and development addresses communications protocols, resource allocation algorithms, network security systems, exploration of alternative network architectures, hardware and software, and the validation of that research by the deployment of several wide-area testbed networks. Several high data rate local area network testbeds will allow Federal agencies, industry, and academic researchers to explore innovative approaches to advanced applications such as global change research, computer imagery, and chip design.

In 1990, ARPA and NSF jointly began sponsoring five gigabit network research testbeds; all are expected to be operational by the end of 1993. The research at the five testbeds, and at testbeds initiated subsequently (e.g., MAGIC, sponsored by ARPA), focuses on network technology and network applications, with alternative network architectures, implementations, and applications of special interest.

Each testbed explores at least one aspect of high performance distributed computing and networking; together they seek to create and investigate a balanced high performance computing and communications environment. Testbed teams consist of several government agencies (ARPA, DOE, Department of the Interior, NASA, NSF, state centers, and supercomputer centers), a number of universities, computer companies, and various local and long distance telephone companies that participate both as service providers and experimenters.

Other Projects

ARPA-sponsored consortia and individual projects are implementing novel networks that minimize or eliminate electronic content and replace it with optical technology. These efforts use alternative optical schemes for data rates in excess of 10 Gb/s. Industry partnerships guarantee a rapid transition of the most promising technologies into the commercial sector.

ARPA's Washington Area Bitway is a multiple-technology testbed in the Washington-Baltimore area that enables early experience with advanced network technologies. The first phase, called the Advanced Technology Demonstration Network (ATDnet), uses the best commercial prototypes of SONET/ATM technology to provide 100 Mb/s to 1 Gb/s services to several DOD agencies and NASA. Applications include ACTS ground connections, imaging, and gigabit encryption. Later phases will demonstrate advanced optical technologies over the same optical fiber paths.
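The ATM technology running through these testbeds frames all traffic into fixed 53-byte cells, of which 48 bytes are payload. The short calculation below shows what that framing costs; the 155 Mb/s line rate and 10 MB transfer are assumed example figures, and SONET framing and adaptation-layer overhead are ignored:

    /* Illustrative ATM framing arithmetic: each 53-byte cell carries
     * a 48-byte payload, so at most 48/53 (~90.6%) of the line rate
     * is user data.  Line rate and transfer size are assumptions.   */
    #include <stdio.h>

    #define CELL_BYTES    53
    #define PAYLOAD_BYTES 48

    int main(void)
    {
        double line_mbps  = 155.0;                 /* assumed line rate */
        double efficiency = (double)PAYLOAD_BYTES / CELL_BYTES;
        long   file_bytes = 10L * 1024 * 1024;     /* example: 10 MB    */
        long   cells = (file_bytes + PAYLOAD_BYTES - 1) / PAYLOAD_BYTES;

        printf("payload efficiency: %.1f%%\n", efficiency * 100.0);
        printf("cells for a 10 MB transfer: %ld\n", cells);
        printf("best-case user throughput:  %.1f Mb/s\n",
               line_mbps * efficiency);
        return 0;
    }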
Another ARPA demonstration project will show the utility of asymmetric rate/asymmetric path (cable TV and dialup) network access. Planned for the San Francisco Bay Area and for the Washington, D.C. area, the project is designed to explore a relatively inexpensive alternative for satisfying the "last mile" - that is, connections to homes and businesses - in high speed networking implementations.

NSF also supports Project ACORN, a collaborative research effort with an NSF Engineering Research Center and its industrial consortia, that is investigating lightwave networks of the 21st century. The project's TeraNet, a laboratory implementation and feasibility demonstration, will lead to a campus-wide field experiment involving leading-edge users. NSF has also begun to support research in all-optical networks.

Gigabit Testbeds*

Aurora - Explore alternative network technologies, new network management and distributed system paradigms, and quality of service techniques for gigabit network multimedia applications.
Sites: Bellcore - Morristown, NJ; IBM - Hawthorne, NY; MIT - Cambridge, MA; U. Pennsylvania - Philadelphia.
Participants and carriers: Bell Atlantic, Bellcore, IBM, MCI, MIT, NYNEX, U. Arizona, U. Pennsylvania.

Blanca - Investigate network control, real-time protocols, and distributed interactive applications including remote thunderstorm visualization, radio astronomy imaging, and multimedia digital libraries.
Sites: Lawrence Berkeley Laboratory (LBL) - Berkeley, CA; NCSA - Champaign-Urbana, IL; U. California at Berkeley; U. Illinois - Champaign-Urbana; U. Wisconsin at Madison.
Participants and carriers: Ameritech, Astronautics, AT&T, Bell Atlantic, LBL, NCSA, Pacific Bell, U. California at Berkeley, U. Illinois at Champaign-Urbana, U. Wisconsin at Madison.

Casa - Investigate distributed large-scale supercomputing over wide-area gigabit networks using chemical reaction dynamics, geology, and climate modeling applications.
Sites: Caltech - Pasadena, CA; Jet Propulsion Laboratory (JPL) - Pasadena, CA; Los Alamos National Laboratory (LANL); San Diego Supercomputer Center (SDSC); UCLA.
Participants and carriers: Caltech, JPL, LANL, MCI, Pacific Bell, SDSC, UCLA, USWest.

Nectar - Investigate software and interfacing environments for gigabit-based heterogeneous computing and explore chemical processing and combinatorial optimization applications.
Sites: Carnegie Mellon U. (CMU) - Pittsburgh, PA; Pittsburgh Supercomputer Center (PSC).
Participants and carriers: Bell Atlantic/Bell of Pennsylvania, Bellcore, CMU, PSC.

VISTAnet - Evaluate the application of gigabit networks and distributed computing techniques to interactive radiation therapy medical treatment planning.
Sites: BellSouth - Chapel Hill, NC; GTE - Durham, NC; MCNC (formerly Microelectronics Center of North Carolina) - Research Triangle Park, NC; U. North Carolina at Chapel Hill (UNC-CH).
Participants and carriers: BellSouth, GTE, MCNC, North Carolina State U., UNC-CH.
MAGIC - Early demonstration of high speed terrain visualization, with longer range plans to incorporate real-time sensor data into a real-time virtual world model and display.
Sites: Minnesota Supercomputer Center - Minneapolis; U. of Kansas in Lawrence; U.S. Army's Future Battle Laboratory - Fort Leavenworth, KS; U.S. Geological Survey (USGS) - Sioux Falls, SD.
Participants and carriers: Digital Equipment Corp., DOE, Earth Resources Observation Systems Data Center, LBL, MITRE, Minnesota Supercomputer Center, Northern Telecom, Split Rock Telecom, Sprint, Southwestern Bell, SRI International, U. Kansas, U.S. Army High Performance Computing Research Center, U.S. Army's Future Battle Laboratory, USGS.

* Information summarized from "A Brief Description of the CNRI Gigabit Testbed Initiative," Corporation for National Research Initiatives, 1895 Preston White Drive, Suite 100, Reston, Virginia 22091-5434, (703) 620-8990.

NREN FY 1994 Milestones

□ Bring MAGIC testbed into operation. Conduct initial terrain visualization demonstrations.
□ Demonstrate prototypes of gigabit ATM/SONET technology operating over fiber and satellite (using ACTS) media.
□ Install initial gigabit network interconnection.
□ Bring all-optical testbed networks into operation.
□ Put medical, terrain visualization, and modeling applications on 100 megabit and gigabit class networks.
□ Complete ESnet and NSI fast packet upgrades. Beta test high speed LAN interconnects with ESnet's fast packet WAN services.
□ Make awards to establish a series of NAPs, a Routing Arbiter, and a vBNS that links NSF supercomputer sites and is accessible from the NAPs.
□ Formulate programs and solicit proposals to support high bandwidth applications on the vBNS.
□ Continue improvements in U.S.-to-international connectivity.

Advanced Software Technology and Algorithms (ASTA)

The purpose of the ASTA component is to develop the scalable parallel algorithms and software needed to realize the potential of the new high performance computing systems for solving Grand Challenge problems in science and engineering. The early experimental use of this software on the new systems accelerates their maturation and identifies and resolves scaling issues on the most challenging problems.

The principal objectives of the ASTA component are to:

□ Demonstrate large-scale, multidisciplinary computational results on heterogeneous, distributed systems, using the Internet to access distributed file systems and national software libraries.

□ Establish portable, scalable libraries that enable transition to the new scalable computing base across different systems and their continued advance through successive generations.

□ Develop a suite of software tools that enhance productivity (e.g., debuggers, monitoring and parallelization tools, run-time optimizers, load balancing tools, and data management and visualization tools).

□ Promote broad industry involvement.

The ASTA component is composed of four elements:

I. Support for Grand Challenges

Prototype applications software will be developed to address computationally intensive problems such as the Grand Challenges. Solution of these problems is not only critical to the missions of agencies in the HPCC Program, but has broad applicability to the national technology base. Continuing increases in computational power enable researchers in government, industry, and academia to address problems of greater magnitude and complexity. Increased computational power enables:

□ More realistic models as a result of higher resolution computational models. An example is weather models that show features on a local or regional scale, not just on a continental or global scale.

□ Reduced execution times. Models that took days of execution time now take hours, enabling the user to modify the input more frequently, perhaps interactively and graphically, thus gaining insight faster. Reduced execution times also enable modeling over longer time scales (for example, 100 year climate models can now be executed in the same time it used to take for 10 year models).

□ More sophisticated models, including models formerly too time consuming.
The radiative properties of clouds can be included in climate models, for example.

□ Lower cost solutions to specific problems, resulting in availability to a larger user community.

Multidisciplinary teams of computational scientists and disciplinary specialists from one or more Federal agencies and from industry and academia are working together to address these problems. Many of these Grand Challenge projects are cofunded and cosponsored by industry. Significant progress has already been made toward solving many of these problems, and progress is expected to continue as the Program advances. These new computational approaches are already being incorporated by industry into new products, services, and industrial processes such as testing and manufacturing.

II. Software Components and Tools

A complete collection of software components and tools is essential for a mature scalable parallel computing environment that supports portable software. These software components and tools must include standard higher level languages; advanced compilers; tools for performance measurement, optimization and parallelization, debugging and analysis; visualization capabilities; and interoperability and data management protocols.

As scalable parallel computing extends to a distributed computing environment, greater demands will be made upon the advanced network technologies developed and deployed through the NREN component. Operating systems and database management software for heterogeneous configurations of workstations and high performance servers will be developed, along with remote procedure calls and interprocess communication protocols to support gigabit per second interconnections.

Broad industry involvement will be promoted through the identification and development of system structures using open interfaces. Industry involvement will also increase as computational approaches become available for more problem domains and more individuals are trained in high performance computing.

III. Computational Techniques

Portable scalable software libraries are being developed to enable software to move across different computational platforms and from one generation to the next. The performance and generality of the new computing technologies will be evaluated using a variety of experimental applications. Standard systems-level tools will be developed to support visualization of data and processes.

IV. High Performance Computing Research Centers (HPCRC)

The HPCRCs will deploy prototype large scale parallel computing facilities accessible over the Internet through the integration of advanced and innovative computing architectures (both hardware and software). Computational scientists working on Grand Challenge applications, software components and tools, and computational techniques will be able to access the largest advanced systems available in order to conduct a wide spectrum of experiments and scalability studies. Through the HPCRCs, the HPCC Program will evaluate prototype system and subsystem interfaces, protocols, advanced prototypes of hierarchical storage subsystems, and high performance visualization facilities. This work is done in cooperation with the Evaluation of Early Systems element of the HPCS component.
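Many of the scalable algorithms and portable libraries described above rest on domain decomposition: a global grid is partitioned among processes, each advancing its own subdomain and exchanging boundary ("halo") values with its neighbors every time step. The sketch below shows only the partitioning bookkeeping; the grid size and process count are assumed example values, and a real code would perform the exchange through a message-passing layer:

    /* Sketch of the domain decomposition used by many Grand
     * Challenge grid codes: a global 1-D grid of N points is split
     * among P processes; each keeps one halo point per neighbor
     * that must be refreshed every time step.  N and P are assumed
     * example values.                                              */
    #include <stdio.h>

    #define N 1000   /* global grid points (assumed) */
    #define P 8      /* processes (assumed)          */

    int main(void)
    {
        int rank;
        for (rank = 0; rank < P; rank++) {
            int base  = N / P, rem = N % P;
            int count = base + (rank < rem ? 1 : 0);
            int start = rank * base + (rank < rem ? rank : rem);
            printf("rank %d: points %4d..%4d", rank, start,
                   start + count - 1);
            if (rank > 0)     printf("  exchanges halo with rank %d", rank - 1);
            if (rank < P - 1) printf("  exchanges halo with rank %d", rank + 1);
            printf("\n");
        }
        return 0;
    }

Because each process communicates only with its neighbors, the communication per step stays fixed as processes are added, which is what lets such codes scale to large machine sizes.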
Major FY 1993 activities and accomplishments and FY 1994 plans

Grand Challenges Research

The majority of ASTA Grand Challenge research has focused on key computational models using first generation HPCC computers. This research will be extended to include applications software for multidisciplinary applications, hybrid computational models, and heterogeneous and distributed problems on second generation computer testbeds.

HPCC Agency Grand Challenge Research Teams

NSF - Computational science and engineering in academic disciplines
DOE - Energy and materials
NASA - Aerosciences and Earth and space sciences
NIH - Biomedical applications
NOAA - Atmospheric and oceanic computational modeling
EPA - Environmental modeling

Collaboration with industry and academia, primarily through CRADAs and consortia, is a high priority. Collaborative efforts are currently underway in the areas of environmental and Earth sciences; computational physics; computational biology; computational chemistry; materials sciences; and computational fluid and plasma dynamics.

A four day "Workshop and Conference on Grand Challenge Applications and Software Technology" was held during May 1993 in Pittsburgh, PA. Funded by nine HPCC agencies, it brought together some 30 Grand Challenge teams. Discussions centered on multidisciplinary computational science research issues and approaches, the identification of major technology challenges facing users and providers, and refinements in software technology needed for Grand Challenges research applications. A special session was devoted to industrial applications, including the aeronautics, automotive, chemical, energy, financial, health care, and textile industries.

Software Sharing

The large collection of software needed to address the Grand Challenges and other computationally intensive problems is certain to grow at a rapid rate. Effective and efficient mechanisms to manage and reuse this software are essential. Toward this end, NASA coordinates the collection of and access to a high performance computing software repository.

The High Performance Computing Software Exchange uses ARPA's wide area file systems and NASA's distributed access to electronic data to connect software repositories in several Federal agencies. These repositories include netlib from NSF and DOE, and NIST's Guide to Available Mathematical Software (GAMS) (described on pages 148-149 in the Case Studies section). They will be expanded to include databases and bibliographic archives.

Mosaic, a hypermedia interface to repositories throughout the Internet, including gopher and WAIS, has been developed by the National Center for Supercomputer Applications (NCSA). In addition to providing a means to browse the Internet, it enables research results to be electronically published with text, images, image sequences, software libraries, and other resources. (Mosaic screens are shown on page 67 in the NSF section.)

An HPCC Software Exchange Prototype System is being built to establish foundations and guidelines for software submission standards, directories, indices, and Unix client/server interfaces. The critical processes and procedures to be used are derived from a 1992 Software Sharing Experiment, which identified and reviewed current software and set priorities and mechanisms for creating needed software.
Supercomputing Centers and Consortia

Two examples illustrate the collaborative efforts initiated through the HPCC Program:

□ The Concurrent SuperComputing Consortium (CSCC) is an alliance of universities, research laboratories, government agencies, and industry that pool their resources to gain access to unique computational facilities and to exchange technical information, share expertise, and collaborate on high performance computing, communications, and data storage issues. CSCC members are: ARPA; Argonne National Laboratory; California Institute of Technology; the Center for Research on Parallel Computation (an NSF Science and Technology Center); Intel's Supercomputer Systems Division; Jet Propulsion Laboratory; Los Alamos National Laboratory; NASA; Pacific Northwest Laboratory; Purdue University; and Sandia National Laboratory.

□ The National Consortium for High Performance Computing was initiated by ARPA to accelerate advances in high performance computing technology. It focuses on 1) software and applications development, and 2) the fostering of interdisciplinary collaboration among DOD and other Federal agencies and laboratories, industry, academic partners, and other research and development organizations to solve important problems in defense and national security. Initiated in coordination and consultation with NSF using non-HPCC funds, this Consortium is an example of how HPCC technologies are being deployed on a national scale.

Scalable parallel systems are also located at the NSF sponsored Supercomputer Centers - the Cornell Theory Center (CTC), the National Center for Supercomputer Applications (NCSA) at Champaign-Urbana, the Pittsburgh Supercomputer Center, and the San Diego Supercomputer Center (SDSC). Plans are underway to establish a "metacenter" in which these Centers will be interconnected via high performance networks, allowing their supercomputing resources to be used as if they were an integrated high performance computing system.

NSF's Supercomputer Centers offer an interdisciplinary and collaborative environment for industrial and academic researchers. More than 100 corporations have formal affiliations with the Centers, resulting in transition of enabling technologies and expertise. The Centers also work directly with vendors to identify and predict the needs of the computational science research community and to develop and test hardware and software systems. Special programs at the Centers are funded by other agencies, including the NIH Biomedical Research Technology program in biomedical computing applications at CTC and NCSA. Other federally-supported high performance computing activities not coordinated or budgeted through the HPCC Program, such as the National Center for Atmospheric Research (NCAR) in Boulder, CO, maintain important connections with HPCC.

DOE and NASA have also established HPCRCs. These include the DOE facilities at Los Alamos National Laboratory and Oak Ridge National Laboratory and the NASA facilities at Ames Research Center and the Goddard Space Flight Center.

The Consortium on Advanced Modeling of Regional Air Quality (CAMRAQ) is working to develop systems for modeling pollution, such as the regional environmental impact of pollutants including ozone, sulfates, nitrates, and particulates. Participants include:

Federal organizations: Defense Nuclear Agency; EPA; NASA; NOAA (Aeronomy Laboratory, Atmospheric Research Laboratory, National Meteorological Center); National Park Service;
U.S. Army Atmospheric Sciences Laboratory

Federally-chartered institutions: National Academy of Sciences/National Research Council

State and local organizations: California Air Resources Board; Northeast States for Coordinated Air Use Management; South Coast Air Quality Management District

Industrial organizations: American Petroleum Institute; Chevron Research Corporation; Electric Power Research Institute; PG&E; Southern California Edison Company

International organizations: Environment Canada, Atmospheric Environment Service; EUROTRAC/EUMAC; Ontario Ministry of the Environment

DOE and NASA are responsible for coordinating Grand Challenge applications software development, and are coordinating testbed development to ensure that a diverse set of computing systems is evaluated. Applications software and high performance computing benchmarks will be used by participating agencies to evaluate high performance computing system options. A key component of this effort will be to provide feedback to developers of teraops systems.

ASTA Milestones

FY 1993 - 1994
□ Demonstrate initial multidisciplinary Grand Challenge applications.
□ Deploy 100-gigaops systems to major high performance computing centers.

FY 1994
□ Complete initial software components and tools for large scale systems.
□ Deploy HPCC prototype libraries to the NII.

Beginning in FY 1994
□ Deploy 300-gigaops systems to major high performance computing centers and enable their use.

Information Infrastructure Technology and Applications (IITA)

The IITA component of the HPCC Program is a research and development program intended to:

□ Develop the technology base underlying a universally accessible National Information Infrastructure (NII).

□ Work with industry in using this technology to develop and demonstrate prototype "National Challenge" applications.

National Challenges are major societal needs that high performance computing and communications technology can address in areas such as the civil infrastructure, digital libraries, education and lifelong learning, energy management, the environment, health care, manufacturing processes and products, national security, and public access to government information. The list of National Challenges is dynamic and will expand as the technologies and other applications mature.

Solution of these National Challenges requires a three-part technology base consisting of the following services, tools, and interfaces:

□ Services that are common to and necessary for the efficient operation of the NII. For example, conventions and standards are needed to handle different formats of data such as text, image, audio, and video.

□ Tools for developing and supporting common services, user interfaces, and NII applications.

□ Intelligent user interfaces to NII applications.

The IITA component depends critically on the technologies already developed by the HPCC Program, and places its own set of demands on the Program. IITA efforts will strengthen the underlying HPCC technology base, broaden the market for these technologies, and accelerate industry development of the NII.

The Federal agencies that participate in the HPCC Program will work with industry and academia to develop these technologies. The NII, however, will be built and operated primarily by the private sector, which can form new markets for products and services enabled by the emerging NII. This joint effort by government, industry, academia,
and the public to develop the NII and address the National Challenges will require:

□ Deployment of substantially more high performance computing systems of increasingly higher performance.

□ A nationwide communications network with vastly greater capacity for connections and throughput than today's Internet.

□ Further development and wider deployment of applications software for computationally intensive problems such as the Grand Challenges.

□ Education of large numbers of developers of NII technologies and training of a Nation of users.

Users of the early NII will be able to take advantage of small to moderate capacity computers and slow to medium speed communications, provided they have high quality user interfaces and access to the applications. As user interfaces improve, more computing and communications performance may be required. This can be achieved through the continual advances in the underlying technology developed under the HPCC Program.

The HPCC Program's original focus on research and development will continue to play a pivotal role in enhancing the Nation's computing and communications capabilities. For example, the Grand Challenges will continue to provide the scientific focus for critical computing technologies because of their profound and direct impact on fundamental knowledge. The IITA component will enable the extension of these technologies and the development of National Challenge applications that have immediate and direct impact on critical information systems affecting every individual in the Nation. Distinctions between National Challenges and Grand Challenges are shown in the table below.

The following two examples illustrate potential applications of the NII.

A Medical Emergency

Having taken ill, a traveler is hospitalized and undergoes tests, including X-rays, CAT scans, and MRI. At the same time, the attending medical professionals quickly retrieve test results from the traveler's last physical examination. The images are compared, diagnoses made, and treatments prescribed.

This scenario is difficult if not impossible to implement today, in part because diagnostic images are commonly not in computer-readable form and network speeds are generally too slow to transmit large three-dimensional image data sets.

Truly remote medical care will depend on services, standards, tools, and user interfaces to store, find, transmit, manipulate, display (and superimpose), compare, and analyze three-dimensional image data from several sources. Diagnostic test results and large image data sets from the physical examination must be available on computers that can be accessed from the hospital's computers over a communications network; they must be retrieved quickly; the scientific data used in guiding the diagnosis and treatment must also be available from electronic libraries and must be quickly retrieved; and the privacy of these patient records must be protected. All of this supposes completion of rather extensive and complex inter-professional medical arrangements. In addition, it must be done using a user interface customized for the practice of "distance medicine," including collaborations among different sources of expertise.
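The network constraint in this scenario is easy to quantify. The short program below estimates transfer times for a single three-dimensional image volume at several link speeds; the 512 x 512 x 256 volume of 16-bit voxels and the link rates are illustrative assumptions, not figures from the Program:

    /* Why network speed matters for the medical scenario above:
     * time to move one three-dimensional image volume at several
     * link rates.  Volume dimensions and rates are assumptions.  */
    #include <stdio.h>

    int main(void)
    {
        double bytes = 512.0 * 512.0 * 256.0 * 2.0;  /* 16-bit voxels */
        double bits  = bytes * 8.0;
        double rates_mbps[] = { 1.5, 45.0, 155.0, 622.0 };
        int i;

        printf("volume size: %.0f MB\n", bytes / 1.0e6);
        for (i = 0; i < 4; i++)
            printf("at %6.1f Mb/s: %8.1f seconds\n",
                   rates_mbps[i], bits / (rates_mbps[i] * 1.0e6));
        return 0;
    }

At a 1.5 Mb/s leased-line rate such a volume takes roughly twelve minutes to move, while at the hundreds of megabits per second targeted by NREN it takes a few seconds - the difference between an impractical and a practical remote diagnosis.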
Contrasts Between Grand Challenges and National Challenges

Focus
- Grand Challenges: Computation intensive.
- National Challenges: Information intensive.

Users
- Grand Challenges: Computer scientists and computational scientists, extending to scientists and engineers (numbering in the millions).
- National Challenges: Information and application users in major national sectors, extending to all sectors (numbering in the hundreds of millions).

Distribution of HPCS Resources
- Grand Challenges: Focus on largest systems at "centers" and smaller systems for software development.
- National Challenges: Focus on distributed systems of moderate scale with many users, scaling with increasing number of users.

Main HPCS Use
- Grand Challenges: Workstation client/server systems for computing and systems development, scaling with scientific needs.
- National Challenges: Workstation client/server systems for access to and processing of information, with extensions to mobile and wireless systems.

Main NREN Use
- Grand Challenges: Share computing resources, data, and software; collaboration.
- National Challenges: Support distributed systems.

Main ASTA Use
- Grand Challenges: Scientific software for computationally intensive applications.
- National Challenges: Computationally based services.

Privacy and Security
- Grand Challenges: Desirable but not critical to deployment; essential in long term.
- National Challenges: Essential for deployment.

Copyright
- Grand Challenges: Desirable but not critical to deployment.
- National Challenges: Essential for deployment.

Network Performance
- Grand Challenges: High among largest scale computer systems and users needing visualization.
- National Challenges: Nominal to all involved, moderate to most when needed, high to those with greatest need.

A Weather Emergency

Using a real-time weather forecast for the area 20 miles directly ahead, a trucker diverts to an alternate route and reduces by hours the potential delay in delivering critically needed parts to a company that uses a just-in-time inventory system. Relayed from a weather forecast center to the truck's on-board navigation system, this highly accurate forecast that pinpoints developing adverse weather conditions is made possible by the use of new computer weather prediction models that exploit the capabilities of high performance, massively parallel computing systems.

Already, advances in computing and communications technologies have led to significant improvements in weather forecasting. As illustrated in the recent case of Hurricane Andrew in Florida, this improved forecasting can save lives as well as millions of dollars in evacuation costs through better targeting of evacuation efforts.

Services, standards, tools, and user interfaces are required to build and support systems for acquiring large amounts of three-dimensional environmental data from different sources (e.g., in situ and remote sensing observations). These systems must also support high resolution modeling using these data, incorporating improved representations of the physical environment, and a real-time information dissemination capability to provide detailed forecast information for hundreds of different locations to thousands of users.

Unlike the first example, the user community for environmental information is larger and more diverse. Weather forecasts are needed by the general public, and for aviation, ship navigation, and agriculture, for example, while HPCC-funded researchers use much of the same observational data to model global change. Starting with user interfaces tailored to different kinds of users, individual users customize them for their own needs. The delivery and use of environmental information for this broad range of applications is performed by a partnership of government and value-added private sector information companies, all part of the NII.
The IITA component will enable the development of an integrated infrastructure so that these two apparently unrelated applications can work together efficiently. This infrastructure includes:

- A networked computing base that provides appropriate performance.
- Methods to provide security, privacy, and copyright protection, and other services such as "digital signatures" to authenticate transactions.
- Technical conventions and standards (especially for databases).
- Tools to build and support the user interfaces.
- Tools to build and support the applications themselves.

The IITA component will demonstrate these technologies through testbed and pilot projects. These projects will evaluate new technologies, provide training in their use, and demonstrate specific National Challenge applications. Successful projects will serve as models to be further refined and engineered for larger scale deployment.

In order to facilitate the deployment of the NII by the private sector, the government will work closely with industry, academia, and users on all aspects of the IITA component. The private sector is expected to deploy many of its own applications, in areas such as commerce and entertainment. The government's goal is that from the user's point of view these applications will be integrated as seamlessly as possible into a single NII, with appropriate use restrictions and protections incorporated as needed.

The IITA component is organized into four interrelated elements. Each builds on the foundation of the HPCC Program and, in large measure, builds on its predecessor.

I. Information Infrastructure Services

These are the basic services and interfaces, and the underlying technical conventions and standards, that provide the common foundation for a broad range of information technology-based applications. Building upon the Interagency Internet, these modular units will in turn be the building blocks of the NII applications. These include:

- Data formats and object structures (including single- and multimedia formats such as image, audio, and video).
- Methods for managing distributed databases.
- Services that provide access to electronic libraries and computer-based databases.
- Methods for exchanging data (e.g., data compression) and integrating data (e.g., merging and overlaying images) from one or more electronic libraries and computer-based databases.
- Services to search for and retrieve data and objects.
- Protocols and processes such as "digital signatures" needed to obtain appropriately secure and legal access to information (including protection of copyrighted material).
- Usage metering mechanisms to enable implementation of payment policies.
- High integrity, fault-tolerant, trusted, scalable computing systems.
- Protocols and processes needed to obtain the appropriate communications speed and bandwidth.

II. Systems Development and Support Environment

This element includes a comprehensive suite of software tools and applications methods such as software toolkits and software generators for use by computer programmers, tools and methods for integrating elements of virtual reality systems, collaboration software systems, and applications-specific templates and frameworks. They will be used to:

- Interface to existing services (for example, existing search services).
- Develop new distributed services for the Information Infrastructure Services element and for the NII.
- Develop generic user interfaces, including templates and frameworks, to facilitate the use of the services provided by the Information Infrastructure Services element in the development of advanced and customized user interfaces by the Intelligent Interfaces element described below.
- Develop generic applications, including architectures and frameworks, for use in the Intelligent Interfaces element and for use by applications developers in implementing applications such as the National Challenges and other applications in the NII.

This element also includes the systems simulators and modeling methods to be used in designing the technology underlying the NII.

III. Intelligent Interfaces

In the future, high level user interfaces will bridge the gap between users and the NII. A large collection of advanced human/machine interfaces must be developed in order to satisfy the vast range of preferences, abilities, and disabilities that affect how users interact with the NII.

Intelligent interfaces will include elements of computer vision and image understanding; understanding of language, speech, handwriting, and printed text; knowledge-based processing; and multimedia computing and visualization. In order to enhance their functionality and ease of use, interfaces will access models of both the underlying infrastructure and the users. Just as people now do their own "desktop publishing," they will have their own "desktop work environments," environments that will extend to mobile and wireless networking modes. Users will be able to customize these environments, thereby reducing reliance on intermediate interface developers.

IV. National Challenges

National Challenges are fundamental applications that have broad and direct impact on the Nation's competitiveness and well-being. They will enable people to handle the increasing amounts of information and the increasing dynamics of the 21st century.

Using selected HPCC enabling technologies and the technologies developed by the other IITA elements, this element will use pilot projects to develop "customized applications" in areas such as the civil infrastructure, digital libraries, education and lifelong learning, energy management, the environment, health care, manufacturing processes and products, national security, and public access to government information. Detailed goals of four of these applications areas are as follows:

Digital Libraries

Develop systems and technology to:

- Enable electronic publishing and multimedia authoring.
- Provide technology for storing petabytes of data for nearly instantaneous access by users numbering in the millions or more.
- Quickly search, filter, and summarize large volumes of information (one standard technique behind such services, the inverted index, is sketched below).
- Quickly and accurately convert printed text and "pictures" of all forms into electronic form.
- Categorize and organize electronic information in a variety of formats.
- Use visualization to quickly browse large volumes of imagery.
- Provide electronic data standards.
- Simplify the use of networked databases in the U.S. and worldwide.

Prototypic scientific databases, including remote-sensing images, will be developed. Librarians and other users will be trained in the development and use of this technology.
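The "search, filter, and summarize" goal rests on well-understood machinery. The sketch below illustrates one standard building block, an inverted index that maps each term to the documents containing it; the toy collection, tokenizer, and frequency-based ranking are illustrative assumptions, not a description of any system funded under the Program.

```python
from collections import defaultdict

def tokenize(text):
    # Crude tokenizer: lowercase, keep alphabetic runs only.
    return ''.join(c.lower() if c.isalpha() else ' ' for c in text).split()

def build_index(docs):
    # Inverted index: term -> {doc_id: term frequency in that document}.
    index = defaultdict(dict)
    for doc_id, text in docs.items():
        for word in tokenize(text):
            index[word][doc_id] = index[word].get(doc_id, 0) + 1
    return index

def search(index, query):
    # Return documents containing every query term,
    # ranked by total term frequency (highest first).
    terms = tokenize(query)
    postings = [set(index.get(t, {})) for t in terms]
    if not postings:
        return []
    hits = set.intersection(*postings)
    return sorted(hits, key=lambda d: -sum(index[t][d] for t in terms))

# Illustrative toy collection.
docs = {
    "d1": "Parallel algorithms for weather prediction models",
    "d2": "Digital libraries store text, image, audio, and video",
    "d3": "Searching large digital libraries: digital indexes on parallel computers",
}
index = build_index(docs)
print(search(index, "digital libraries"))   # ['d3', 'd2']
```

A production service would add stemming, relevance weighting, and distributed storage, but the index structure itself is what makes sub-second search over very large collections possible.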
Education and Lifelong Learning

Conduct pilot projects that connect elementary and secondary schools to networks through which students and teachers can:

- Communicate with their peers and with students and faculty at colleges and universities across the country.
- Access information databases and other computing resources.
- Use authoring tools to embody the experiences of the best teachers in systems that others can use.
- Have greater access to the NII technologies, enabling them to develop and use them more effectively.
- Enable future generations to be literate in information technology so that they will be prepared for the 21st century and beyond.

Health Care

Develop and provide:

- Access to networks that link medical facilities and enable health care providers and researchers to share medical information.
- Technology to visualize and analyze human anatomy and to simulate medical procedures such as operations.
- The ability to treat patients in remote locations in real time by having "distance collaborations" with experts at other medical facilities.
- Technology by which health care providers can readily access databases of medical information and literature.
- Technology to store, access, and transmit patients' medical records and protect the accuracy and privacy of those records when doing so.

Manufacturing

Research and development to:

- Prototype advanced computer-integrated manufacturing systems and computer networks linking these systems.
- Work with industry in implementing standards for these advanced manufacturing operations, and process and inventory control.
- Transition the manufacturing process to the new scalable computing and networking technology base.
- Train management and employees in advanced manufacturing.

Basic Research and Human Resources (BRHR)

The BRHR component is designed to increase the flow of innovative ideas by encouraging investigator-initiated, long-term research in scalable high performance computing; to increase the pool of skilled and trained personnel by enhancing education and training in high performance computing and communications; and to provide the infrastructure needed to support these research and education activities. The BRHR component is organized into four elements:

I. Basic Research

This element supports increased participation by individual investigators in conducting disciplinary and multidisciplinary research in computer science, computer engineering, and computational science and engineering related to high performance computing and communications. Research topics include:

- Foundations of future high performance computing systems.
- High performance hardware components and systems, high density packaging technologies, and system design tools.
- Mathematical models, numeric and symbolic algorithms, and library development for scalable and massively parallel computers.
- High level languages, performance prediction models and tools, and fault tolerant strategies for parallel and distributed systems.
- Large scale database processing; knowledge based processing; image processing; digital libraries; visualization; and multimedia computing.
- Resource management strategies and software collaboratory environments for scalable parallel and heterogeneous distributed systems.

II. Research Participation and Training

This element addresses the human resources pipeline in the computer and computational sciences at the undergraduate, graduate, and postdoctoral (training and re-training) levels.
Activities include:

- Workshops, short courses, and seminars.
- Fellowships in computational science and engineering and experimental computer science.
- Career training in medical informatics through grants to young investigators.
- Institutional training and postdoctoral programs; knowledge transfer exchange programs at national laboratories, centers, universities, and industry.
- Software dissemination through national databases and libraries.

III. Infrastructure

This element seeks to improve university and government facilities for computer science, computer engineering, and computational science and engineering research related to high performance computing. Activities include:

- Improvement of equipment in computer science, computer engineering, and computational science and engineering academic departments, centers, and institutions; development of scientific databases and repositories.
- Distribution of integrated system building kits and software tools.

IV. Education, Training, and Curriculum

This element seeks to expand existing activities and initiate new efforts to improve K-12, undergraduate, and graduate level education and training opportunities in high performance computing and communications technologies and in computational science and engineering, for both students and educators. The introduction of associated curriculum and training materials at all levels is an integral part of this effort. Activities include:

- Bringing people, especially teachers, to national centers and laboratories for summer institutes and other training, technology transfer, and educational experiences.
- Utilizing professional scientists and engineers to provide curriculum development materials and instruction for high school students in the context of high school supercomputer programs, supercomputer user workshops, summer institutes, and career development informatics for health sciences.

BRHR Component Implementation

Each agency that participates in the BRHR component sponsors research participation and education/training programs designed to meet specific mission needs. Some of these activities are as follows.

- ARPA supports basic research in such areas as high performance components, high density packaging, scalable concepts, system design tools, and foundations of petaops systems.
- NSF's basic research programs promote innovative research on the foundation sciences and technologies of HPCC as well as specific disciplinary activities in HPCC. NSF coordinates its basic research and infrastructure activities to foster balance in the multiagency HPCC Program. Through its "research experiences for undergraduates," SuperQuest, postdoctoral, graduate fellowship, and educational and minority infrastructure programs, NSF addresses long term national needs in HPCC.
- DOE supports basic research to advance the knowledge of mathematical, computational, and computer sciences needed to model complex physical, chemical, and biological phenomena involved in energy production and storage systems. DOE also is actively involved in education and training activities at all levels.
- NASA conducts basic research through NASA research institutes and university block grants, including support at the graduate and postdoctoral level.
- NIH supports basic research and training in the use of advanced computing and network communications. Predoctoral and postdoctoral grants for career training in medical informatics are being expanded.
- NOAA conducts basic research in computational fluid dynamics applications in atmospheric and oceanic processes.
- EPA sponsors targeted fellowships and basic research activities, and develops and evaluates training methods and materials to support transfer of advanced environmental assessment tools to Federal, state, and industrial sectors.

Partnerships with industry, universities, and government help accomplish BRHR objectives.

FY 1993 Accomplishments

In FY 1993, more than 1,000 research awards fund the following activities:

- Basic research in high performance computational approaches to materials processing, molecular structures, fluid dynamics, and structural mechanics.
- Basic research on scalable parallel systems in fundamental areas of mathematical models, algorithms, performance evaluation techniques, databases, visualization and multimedia computing, digital libraries, and collaboratory technologies.
- An increased number of computer and computational science and engineering postdoctoral awards and graduate fellowships.
- High school honors programs, teacher training programs, and "research experience for undergraduates" in HPCC.
- The introduction of the computational science textbook, which involved 24 authors in 10 different disciplines, into classrooms as part of a pilot project for Computer Science for High School Teachers.
- Institutional infrastructure awards to support experimental and novel high performance computing research at universities and national laboratories.
- Educational and minority infrastructure awards in undergraduate institutions.

FY 1994 Plans

- Develop a program to apply the principles of artificial intelligence to advanced intelligent manufacturing.
- Develop a program to integrate virtual reality technology into high performance computing and communications systems.
- Develop an initiative in digital libraries.
- Increase research in real time, three-dimensional imaging and multimedia computing.
- Increase support of information intensive applications of HPCC technologies in health care, information libraries development, education, and manufacturing.
- Increase support for scalable parallel computers.
- Increase education and training activities in HPCC through establishment of network-based educational testbeds.
- Increase the number of postdoctoral and graduate fellowship awards.

Advanced Research Projects Agency (ARPA)

As the HPCC Program reaches the middle of its initial five-year phase, the ARPA program is shifting focus from stimulating the development of the new scalable computing technology base and early experimental use toward developing the technologies needed to enable a broad base of applications and users, including their extension to a National Information Infrastructure. In addition, the foundations for future generations of computing systems involving even more advanced technologies are being developed.

The current scalable computing technology base is characterized by the first 100-gigaops-class computing systems, which are being used experimentally on a wide variety of problems in the scientific and engineering research communities. Scalable operating systems are enabling software development on high performance workstations connected to higher performance servers through networks. The experience gained in the early experimental use of these new computing technologies is used to refine the next generation and guide the development of more advanced software and system development technologies.
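The workstation-to-server pattern in the preceding paragraph reduces to a simple message-passing exchange: a client ships a self-describing work request over a channel, and the server replies with a result. The sketch below uses Python's multiprocessing module purely to illustrate that request/reply structure; the message format and the trivial "sum" service are assumptions for the example, not ARPA software.

```python
from multiprocessing import Process, Pipe

def server(conn):
    # "Higher performance server": receive work messages, reply with results.
    while True:
        msg = conn.recv()
        if msg["op"] == "shutdown":
            break
        if msg["op"] == "sum":
            conn.send(sum(msg["data"]))

if __name__ == "__main__":
    client_end, server_end = Pipe()
    worker = Process(target=server, args=(server_end,))
    worker.start()

    # "Workstation client": ship the work to the server, wait for the reply.
    client_end.send({"op": "sum", "data": list(range(1_000_000))})
    print(client_end.recv())            # 499999500000
    client_end.send({"op": "shutdown"})
    worker.join()
```

The same structure scales from two processes on one machine to workstations and servers separated by a network; only the transport underneath the messages changes.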
The combination of scalable computing and scalable networking technologies provides the foundation for solving both Grand Challenges with large scale parallel systems and National Challenges with large scale distributed systems. This enables HPCC to progress toward an NII.

ARPA is the lead DOD agency for advanced technology research and has the leadership responsibility for High Performance Computing (HPC) within DOD. The ARPA HPC Program develops dual use technologies with broad applicability to enable the defense and intelligence communities to build on commercial technologies, with rapid development of more specific characteristics when needed. ARPA has no laboratories or centers of its own and executes its programs in close cooperation with the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Office, Service Laboratories, the National Security Agency, and other DOD organizations and Federal agencies. ARPA participates in joint projects with other agencies in the Federal HPCC Program, a variety of Defense agencies, the Intelligence community, and other Federal institutions.

A multi-year Caltech project developed high performance interconnect and fine-grained parallel systems, including boards consisting of 64 single-chip nodes, scalable to thousands of nodes. Several commercial systems have adopted architectures based on this research, and are beginning to demonstrate how low cost modules can be configured to meet a broad range of applications.

Joint projects with other agencies are established to accelerate technology development and transition. ARPA joint projects with NSF include foundations for scalable systems, visualization, Grand Challenges, gigabit networks, and accelerating the maturation of systems software at NSF Supercomputer Centers. Joint projects with NASA include an Internet software exchange, system software maturation, and ground stations for the ACTS gigabit satellite system. ARPA, NSF, and NASA also have a joint program in digital libraries. Joint projects with DOE include scalable software libraries and networking applications. ARPA is working with NIST to develop performance measurement technologies and techniques, privacy and trusted systems technologies, and the computer emergency response team system for the Internet. A joint project with NSA is developing gigabit network security technology and other secure and trusted systems technologies. In addition, a variety of early evaluation and experimental use projects involve different kinds of scalable parallel computing systems.

The ARPA program focuses on the advanced technology aspects of all five components of the HPCC Program as follows:

HPCS

ARPA projects stimulate the development of scalable computing technologies that are capable of being configured as networks of workstations and large scale parallel computing systems capable of sustaining trillions of operations per second. Systems can be configured over a wide performance range. The systems will be balanced to provide the processor-to-memory, scalable interconnection, and input/output bandwidth needed to sustain high internal and external system performance. The modular design of the system units of replication will enable them to cover the full range from workstations to the largest scale distributed and parallel systems. Scalable systems with vector accelerators may be configured as parallel vector systems.
Other kinds of accelerators, such as field programmable logic arrays, may be added for specialized applications. The largest scale parallel systems, with hundreds to thousands of processors or more, are sometimes referred to as massively parallel systems. The input/output interfaces of these systems may be used to configure heterogeneous systems with high performance networks.

Experimental gigabit networks are overlaying and enhancing the Internet. ARPA's Networking Systems program develops and evaluates these technologies as foundations for a global scale, ubiquitous information infrastructure supporting Grand Challenge, National Challenge, and Defense needs.

Scalable microkernel operating systems with a full complement of servers will enable software and system developers to work with a uniform set of application interfaces over the scalable computing base. Through the use of multiple servers, different application interfaces can be supported to enable the transition from legacy systems such as those available today. The system software may be configured as needed for particular applications, including trusted and real-time systems.

Advanced components and packaging technologies, including the associated design, prototyping, and support tools, will enable higher performance and more compact systems to be developed. These technologies also enable the development of embedded systems so that computing can be put in specialized physical and environmental settings (such as airplanes, spacecraft, land vehicles, or ships).

Early evaluation and experimental use of new computing systems is an integral part of the overall development process. Policies and mechanisms have been developed that enable the timely purchase of new small to medium scale computing systems for the purpose of early evaluation and experimental use. As these technologies mature, larger scale systems are purchased and deployed by other parts of the HPCC Program in consultation with their user communities.

NREN

ARPA projects develop scalable networking technologies to support the full range of applications from local networks, to regional networks, to national and global scale networks, including their wireless and mobile extensions. Different kinds of communication channels, or "bitways," will be integrated to enable network connectivity to be achieved between users and their applications.

Internet technologies will be developed to enable continued scaling of the networks to every individual and system needing access. Scalable high performance networking technologies will be developed to enable gigabit speeds to be delivered to the end users. A variety of networking testbeds developed in cooperation with other agencies are used to develop, demonstrate, and experimentally deploy new networking technologies. As these technologies mature, larger scale systems are deployed by other parts of the Program in consultation with their user communities.

Modern Operating Systems: Message passing, transparent access to user services, advanced memory management, and real time response are among the capabilities of modern scalable operating systems.

Two-dimensional clinical images from computed tomography and magnetic resonance imaging underpin modern medical diagnosis. Transmission of these diagnostic images over wide area networks and their reconstruction to form three-dimensional views are important health care applications of HPCC technologies.
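The reconstruction step in the caption above amounts to treating a stack of 2-D slices as a 3-D array that can be re-sliced or projected along any axis. A minimal sketch, assuming the slices arrive as uniform 2-D arrays (synthetic data stands in here for real CT slices):

```python
import numpy as np

# Illustrative stand-in for a stack of 2-D cross-sectional images;
# real data would be loaded from acquisition files.
rng = np.random.default_rng(0)
slices = [rng.integers(0, 4096, size=(256, 256), dtype=np.int32)
          for _ in range(64)]           # 64 axial slices, 12-bit intensities

volume = np.stack(slices, axis=0)       # shape (64, 256, 256): z, y, x

# Re-slice the volume along another axis to get a sagittal view ...
sagittal = volume[:, :, 128]            # shape (64, 256)

# ... or collapse it into a simple 2-D rendering via maximum intensity projection.
mip = volume.max(axis=0)                # shape (256, 256)

print(volume.shape, sagittal.shape, mip.shape)
```

Production systems add interpolation between slices, registration of images from different sources, and volume rendering, but the data structure, a dense 3-D array, is the common foundation.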
-"Human genetic linkage analysis to determine the likely position of a disease gene using LINKAGE, a gene linkage analysis program that calculates the probability of the association between a pat- tern of gene inheritance and a disease condition. It is used to analyze family pedigrees based on data obtained from gene probes. ASIA: "Visible Human" NLM initiated a two-year project to acquire the three- dimensional digital representation of entire human beings at millimeter-level resolution, derived from com- puted tomography, magnetic resonance imaging, and digitized cryosections. This "Visible Human" research data set will become available nationally via the Internet in 1994. ASIA: Prototype Program for Retrieving IVIolecular Biology Information NLM created a prototype advanced molecular biology information retrieval program that provides integrated access to genetic and protein molecular sequences, and the biomedical literature linked to those sequences. Field testing of the system has begun. ASTA: Faster Molecular Analysis and Imaging Algorithms NCRR and NLM achieved order of magnitude speedups in several existing molecular analysis algorithms. DCRT and NCRR developed new algorithms for regis- tration and rendering of three-dimensional images from two-dimensional clinical images and micrographs. ASTA: HIV Research Research conducted at NCI's Biomedical Supercomputer Center is increasing the understanding of the human immunodeficiency virus (HIV) that causes AIDS and is helping to design and develop new drugs to combat the deadly disease. NCI researchers have suc- cessfully predicted the secondary structure of the entire 9,000 unit HIV virus RNA. NCI and NCRR supercom- puting applications have assisted in the design of new drugs to inhibit HIV replication. 92 Digital radiology techniques require high speed networks: a single X-ray film rep- resented by a 2K-by-2K-by-10 bit gray scale generates a 4 megabyte image file. BRHR: Medical Informatics Grants and Other Training NLM competed the award of 10 Medical Informatics Training Grant programs at academic medical centers. The program supports cross-disciplinary training of health professionals in the use of advanced computing technologies. NCRR conducted a pilot project to introduce scientific computing methods to high school science teachers and their students. DCRT and NCRR sponsored "hands on" training of biomedical researchers in the use of new computational biology tools at NSF Supercomputer Centers and on the NIH campus in Bethesda. BRHR: Basic Research Through Long Distance Microscopy In the first demonstration of its kind, scientists at a work- station in Chicago viewed high-resolution images of nerve cells in a high-voltage electron microscope located 1,700 miles away at the San Diego Microscopy and Imaging Resource (SDMIR), which is supported by NCRR. Further details are given on pages 123-125 in the Case Studies section. FY 1994 Milestones NIH will accelerate the pace of molecular and genetic discovery by enabling the solution of currently intractable problems in molecular structure prediction, drug design, and human genome database analysis. The program will apply and evaluate new computer architectures to key problems of human health and dis- ease, in a manner that gives early feedback to computer designers on the strengths and limitations of their sys- tems for medical applications. The HPCC Program will rapidly build an electronic community among life science researchers by connecting academic medical centers to the Internet. 
It will create prototype medical imaging applications that use the Internet and provide a model for distance-independent medical consultation. It will double the pool of computationally trained investigators in biomedicine.

X-ray diffraction spectroscopy is a laboratory method for determining the folded structure of proteins and other biological macromolecules. Parallel computing systems can be used to automate interpretation of X-ray diffraction patterns acquired via two-dimensional array sensors.

HPCS: Evaluation of Parallel Systems

DCRT will obtain a next generation parallel computer that will allow the solution of new computationally intensive problems in biomedicine. NCRR will assess the efficiency and scalability of emerging massively parallel architectures for Grand Challenge problems.

NREN: More Connections to the Internet

NLM will establish Internet connections to 70 to 100 additional medical centers.

NREN: Digital Anatomy Databases

NLM will create advanced three-dimensional imaging databases for digital anatomy on the Internet for use by the nation's health professions schools. Workstations to access and display those images will be developed.

NREN: "Knowbots" for Natural Language Queries

An operational Knowbot-based database retrieval system will be deployed at NLM to provide integrated access to over 50 computerized knowledge sources in biomedicine, underpinned by a fully operational Unified Medical Language System that allows users to state scientific questions in their own language and have the answer retrieved and synthesized automatically from multiple databases at multiple sites on the Internet.

ASTA: Software Development and Hardware Placement

NCI will expand its advanced software development program to allow research on a broader range of molecular dynamics, structure-function problems, and structure-assisted drug design. NLM will develop and deploy advanced molecular biology workstations to approximately 1,000 molecular biology laboratories.

Through funding of five biomedical High Performance Computing Resource Center (HPCRC) programs at NSF- and ARPA-sponsored high performance computing centers, NCRR will support development of algorithms to compare molecular sequences, predict molecular structure from genetic and protein sequences, simulate protein folding, and model complex biological systems such as proteins interacting with membranes in an aqueous solution. Computed images of biological structure, from molecular to whole body, will be an additional area of ASTA software development.

IITA: Health-Related Research and Development

NIH will support the development of HPCC technologies through a Broad Agency Announcement to support two to five research and development projects in each of the six health-related areas listed above. NCI will provide Xconf, a prototype multimedia (including medical images) group conferencing tool that uses the Internet. Contingent on funding, NCI will 1) connect to the Internet both medical research centers (to facilitate multicenter environmental epidemiology studies) and the Cancer Information Service offices at its cancer centers, and 2) begin developing telemammography over high speed networks.

BRHR: New and Upgraded Centers and Additional Fellowships

In order to meet the needs of biomedical researchers, NCRR will establish at least one additional high performance computing resource center and upgrade the five existing centers. Enhanced cross training of biomedical research scientists will also be possible.
NLM will fund an additional 50 medical informatics training fellowships nationwide.

National Security Agency (NSA)

The goal of NSA's HPCC Program is to accelerate the development and application of the highest performance computing and communications technologies to meet national security requirements and to contribute to collective progress in the Federal HPCC Program.

By integrating processing directly into otherwise standard memory chips fabricated at its Special Processing Laboratory, the Supercomputing Research Center has developed the Terasys workstation, which outperforms one Cray Y-MP processor by 5 to 48 times on a set of nine NSA applications. A mature software environment, available for the workstation, makes the system easy to use.

In support of this goal, NSA:

- Develops algorithms and architectural simulators and testbeds that contribute to a balanced environment of workstations, vector supercomputers, massively parallel computer architectures, and high speed networks.
- Sponsors and participates in basic and applied research and development of gigabit networking technology.
- Develops network security and information security techniques and testbeds appropriate for high speed in-house networking and for interconnection with public networks.
- Develops software and hardware technology for highly parallel architectures scalable to sustained teraops performance.
- Investigates or develops new technologies in materials science, superconductivity, ultra-high-speed switching and interconnection techniques, networking, and mass storage systems fundamental to increased performance objectives of high performance computing programs.

NSA participates in all five components of the HPCC Program as follows.

HPCS: Heterogeneous High Performance Computing; Balanced Architectures

NSA deploys experimental scalable computer capabilities, emphasizing interoperability of massively parallel machines in a highly heterogeneous environment of workstations, vector supercomputers, and mass storage devices. NSA's HPCS program also emphasizes an open systems approach to development and maintenance of the HPCC environment and the integration of specialized high speed hardware within this environment.

Time improvement using multiple processors vs. a single processor for a parallel eigensolver algorithm for large dense symmetric matrices, which occur frequently in models of physical phenomena. A parallel algorithm "scales well" when it effectively uses almost all available resources as the problem size and the number of processors increase; the achievable performance improvement is then proportional to the increase in the number of processors (the standard definitions are sketched below). Parallel algorithm design and implementation is an iterative process with the goal of achieving the maximum performance possible. SRC research continues seeking ever-faster eigensolvers. These data are from the Intel Touchstone Delta at Caltech. Results are being collected for the IBM SP1, the Thinking Machines CM-5, and the Intel Paragon.

NREN: Network and Security Technology

NSA uses the Internet to provide high speed network connection among NSA, industry, and academic researchers. NSA takes the lead in developing network security and other information system security technology and products for high speed networks. It establishes high speed network testbeds to explore network and security interface technology issues.
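The notion of "scaling well" in the eigensolver caption can be made precise with the textbook definitions of speedup and efficiency (standard definitions, not results from the SRC study). For a problem solved in time \(T_1\) on one processor and \(T_p\) on \(p\) processors,

\[
S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p},
\]

and an algorithm scales well when the efficiency \(E(p)\) stays near 1 as \(p\) and the problem size grow. If a fraction \(f\) of the work is inherently serial, Amdahl's law bounds the achievable speedup:

\[
S(p) \;\le\; \frac{1}{f + (1-f)/p} \;\xrightarrow{\;p \to \infty\;}\; \frac{1}{f},
\]

which is why eigensolver research of the kind described above concentrates on driving the serial fraction, and the communication overhead, toward zero.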
ASTA: High Performance Systems Software, Tools, and Algorithm Research

NSA develops systems software to enhance productivity of users of high performance systems. This software includes compilers, simulators, performance monitoring tools, software that is portable among a variety of high performance systems, software for distributing jobs across a network, and visualization tools. New algorithms are developed to map problems to high performance architectures.

IITA: Development of Dual-Use Technology

NSA proposes to investigate developmental technologies to support information infrastructure applications in manufacturing, education, public health, and digital libraries. Research and development in transaction processing, database management systems, and digital data storage and retrieval systems are expected to make strong dual-use technology contributions to both the Federal and the IITA communities.

BRHR: Fundamental High Performance System Research and Education

NSA supports basic research into new technologies, theory and concepts of high performance computing and networking, and promotes research in high performance computing at the Institute for Defense Analyses' (IDA) Supercomputing Research Center (SRC) and at universities. NSA's National Cryptologic School and the SRC develop courses suitable for users of high performance computing environments.

Memory traffic flow from CPU memory ports through a memory arbitration network, and back to CPU input ports. Each colored rectangle represents one physical queue organized in groups. The colors represent the queue state: for example, an empty queue is white and a full queue is red. The animation of this display shows memory message traffic through the memory network. Its purpose is to point out possible clogs and busy spots within the network.

Management

The overall coordination of NSA's HPCC activities resides in the office of the NSA Chief Scientist, who reports to the Director of NSA. Responsibilities for execution of major elements of the plan lie with the Technology and Systems Directorate and the Information Systems Security Directorate. NSA sponsors the SRC in exploring massively parallel computer architectures and in developing algorithms and systems software for parallel and distributed systems. The SRC heavily supports NSA's HPCC activities directly and by collaborative efforts with other HPCC participants, especially industry and academia.

FY 1994 Plans

HPCS

- Extend existing high performance system simulators and testbeds at NSA and the SRC.
- Develop mass archival storage of 10^12 to 10^15 bits.
- Define techniques for the interoperability of distributed operating systems spanning large sets of heterogeneous computer assets, including supercomputers.
- Integrate a major new vector/scalar massively parallel processor as a research system.
- Demonstrate a terabit-per-second (10^12 bit operations per second) deskside SIMD system.
- Continue technical cooperation with major vector/massively parallel processor developers.
- Investigate, cooperatively with industry, processing-in-memory (PIM) technology in established systems architectures.

Front and rear views of the High Speed Network (HNET) Testbed. These 64 Unix processors attached to 64 custom-chip switch nodes, each with seven ports, serve as a high speed network routing protocol testbed.
The system can be configured to be an Asynchronous Transfer Mode (ATM) switch, for example, to allow experimentation with the efficiency of currently evolving commercial offerings.

NREN

- Install an in-house gigabit network testbed to explore high speed network architectures and techniques.
- Explore network security issues for 622 Mb/s and 2.4 Gb/s networks.
- Install a gigabit network testbed to explore compatibility of DOD networks with vendor-provided public networks.
- Initiate development of a bulk link encryptor for high speed networks.
- Develop proof of concept for cell encryption in very high speed switched network products.

ASTA

- Develop parallel extensions to high-level programming languages suitable for parallel architectures.
- Develop visualization techniques for performance analysis of parallel and vector high performance architectures.
- Develop system modeling tools for heterogeneous computing environments, including high performance systems.
- Develop algorithms and implementations for solving eigensystems and manipulating sparse matrices on parallel systems.
- Develop an evaluation testbed for exploring routing algorithms and topologies for computer interconnects.

IITA Candidates

- Study applicability of the heterogeneous database technology program to information infrastructure application needs.
- Investigate transferability of digital library technology to the private sector.
- Research public sector security technology issues unique to the IITA as well as dual-use technologies applicable to both national security and public sector communities.

NSA has used parallel processing to greatly accelerate workstation performance. Designed and fabricated at the SRC, this board employs 32 Field Programmable Gate Arrays which are custom programmed for each application. This customization and parallelization yields speedups of 100 to 1,000 times the already impressive workstation performance. (Field Programmable Gate Arrays are also described in the Case Studies section.)

BRHR

- Initiate collaborative and university research efforts in performance modeling of high performance computing systems for generic problem domains.
- Explore innovative parallel computer and network architectures.
- Investigate new technologies in materials science, superconductivity, optoelectronics, ultra-high-speed interconnection, and switching.

National Institute of Standards and Technology (NIST)

High performance computing and communications technology is an essential enabling component of NIST's mission to promote U.S. industrial leadership and international competitiveness and to provide measurement, calibration, and quality assurance techniques to support U.S. commercial and technological progress. The objectives of NIST's HPCC program are: to accelerate the development and deployment of high performance computing and networking technologies required for the National Information Infrastructure; to apply and test these technologies in a manufacturing environment; and to serve as coordinating agency for the manufacturing component of the Federal Program.

NIST's MultiKron chip provides low perturbation measurements for performance evaluation of computer and communication systems.

Specific goals of NIST's program are:

- To apply high performance communications and networking technology to promote improved U.S. product quality and manufacturing performance, to reduce production costs and time-to-market, and to increase competitiveness in international markets.
- To promote the development and deployment of advanced communications technology to support the education, research, and manufacturing communities and to increase the availability of scientific and engineering data via the National Information Infrastructure.
- To advance instrumentation and performance measurement methodologies for high performance computing and networking systems and components to achieve improved system and application performance.
- To develop efficient algorithms and portable, scalable software for the application of high performance computing systems to industrial problems, and to develop improved methods for the public dissemination of advanced software and documentation.
- To support, promote, and coordinate the development of voluntary standards that provide interoperability and common user interfaces among systems.

A computer-controlled coordinate measuring machine determines the exact dimensions of a precision-machined stainless steel part. NIST researchers use this and similar instruments to devise ways to improve machine tool performance.

NIST participates in four components of the HPCC Program:

HPCS: Performance Measurement of Scalable Systems

NIST develops instrumentation and methodology for performance measurement of high performance networks and massively parallel computer systems. Emphasis is on the use of low-perturbation data capture hardware and simplified software-based approaches to performance characterization.

NREN: Networking and Information Infrastructure

NIST supports and coordinates the development of standards within the Federal government to provide interoperability, common user interfaces to systems, and enhanced security. NIST works with other agencies to promote open system standards to aid in the commercialization of technology by U.S. industry. NIST promotes the development of communications infrastructure and the use of the Internet through information technology research, development, and related activities to enhance basic communications capabilities.

ASTA: Application Systems and Software Technology

NIST develops algorithms and generic software for advanced scientific, engineering, and manufacturing applications. Common elements and techniques are encapsulated in software libraries to promote ease of use and application portability. NIST's Guide to Available Mathematical Software (GAMS) provides industry and the public with improved electronic access to reusable software.

IITA: Systems Integration for Manufacturing Applications

NIST will build upon its experience in information technology and manufacturing engineering to accelerate the application of high performance computing and communications technology to manufacturing environments.

A machinist monitors a machine tool retrofitted with a personal computer controller. The machine is located in NIST's Shop of the 90s, where manufacturers can learn how to use open system integration technology and low-cost automation techniques to improve productivity and product quality.

NIST will support expanded programs in advanced manufacturing systems integration technologies; development and testing of prototype components and interface specifications for manufacturing systems; application of high performance computing and networking technologies to integrate design and production processes; and testbeds for achieving cost-effective application of advanced manufacturing systems and networks.
FY 1993 Accomplishments and FY 1994 Plans

Systems Integration for Manufacturing Applications

Beginning in FY 1994, NIST will establish an Advanced Manufacturing Systems and Networking Testbed to support research and development in high performance manufacturing systems and to test high performance computer and networking hardware and software in a manufacturing environment. The testbed will serve as a demonstration site for use by industrial technology suppliers and users, and will assist industry in the development and implementation of voluntary consensus standards. Research and testing will be conducted at the NIST testbed as well as at testbeds funded through the NIST Advanced Technology Program. A manufacturing systems environment will be developed to support the integration of advanced manufacturing systems and networking software and products.

A standards-based data exchange effort for computer integrated manufacturing will focus on improving data exchange among computer aided design, process, and manufacturing activities. Prototype systems and interface specifications will be communicated to appropriate standards organizations. Results will be made available to U.S. industry through workshops, training materials, electronic data repositories, and pre-commercial prototype systems that can be installed by potential vendors for test and evaluation. One role of advanced computing technology in manufacturing process modeling and simulation is described in the Case Studies section.

Since the early 1970s, NIST has been developing cost-effective ways to help protect computerized data. NIST has devised a prototype system for controlling access to a computer system that uses a password, a smart card, a fingerprint reader, and cryptography.

Networking and Information Infrastructure

NIST performance evaluation activities include measurement and characterization of the impact of software protocols on communication performance in order to minimize communication bottlenecks.
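In the same spirit as the MultiKron hardware probes described earlier, protocol costs can be characterized in software by timestamping each operation and deferring all analysis until afterward, so that the probe itself perturbs the measurement as little as possible. A minimal sketch of such a software probe follows; the checksum workload and frame size are hypothetical stand-ins, not NIST instrumentation.

```python
import time
from functools import wraps

def probe(fn, samples):
    # Lightweight software probe: record the wall-clock duration of each
    # call, deferring all analysis so the measurement itself stays cheap.
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        samples.append(time.perf_counter() - start)
        return result
    return wrapper

# Hypothetical workload standing in for a protocol-processing routine.
def checksum(payload: bytes) -> int:
    return sum(payload) & 0xFFFF

timings = []
checksum = probe(checksum, timings)

for _ in range(1000):
    checksum(b"x" * 1500)               # one roughly MTU-sized frame
print(f"mean {sum(timings) / len(timings) * 1e6:.1f} us "
      f"over {len(timings)} calls")
```

Post-processing the recorded samples, rather than computing statistics inside the timed path, is the software analogue of the low-perturbation data capture that the hardware approach achieves with dedicated measurement chips.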