
Languages, Compilers and Run-time Environments for Distributed Memory Machines.

By: Saltz, J.
Contributor(s): Mehrotra, P.
Material type: Text
Series: eBooks on Demand. Advances in Parallel Computing
Publisher: Amsterdam : Elsevier Science, 2014
Copyright date: ©1992
Description: 1 online resource (323 pages)
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9781483295381
Subject(s): Compilers (Computer programs) | Electronic data processing -- Distributed processing | Programming languages (Electronic computers)
Genre/Form: Electronic books
Additional physical formats: Print version: Languages, Compilers and Run-time Environments for Distributed Memory Machines
DDC classification: 004.36
Online resources: http://ebookcentral.proquest.com/lib/uttyler/detail.action?docID=1877036
Contents:
Front Cover -- Languages, Compilers and Run-Time Environments for Distributed Memory Machines -- Copyright Page -- Table of Contents -- PREFACE
Chapter 1. SUPERB: Experiences and Future Research -- Abstract -- 1 Introduction -- 2 Program Splitting -- 3 Data Partitioning -- 4 Interprocedural Partitioning Analysis -- 5 Automatic Insertion of Masking and Communication -- 6 Optimization -- 7 System Structure -- 8 Current and Future Research -- 9 Conclusion -- References
Chapter 2. Scientific Programming Languages for Distributed Memory Multiprocessors: Paradigms and Research Issues -- Abstract -- 1. Introduction -- 2. An Emerging Paradigm for Distributed Parallel Languages -- 3. An Example of the Paradigm: The DINO Language -- 4. Research Issues Regarding Virtual Parallel Computers -- 5. Research Issues Regarding Distributed Data Structures -- 6. Research Issues Regarding Models of Parallel Computation -- 7. Additional Research Issues Regarding Communication Features -- 8. Research Issues Regarding Support for Complex Parallel Programs -- 9. References
Chapter 3. VIENNA FORTRAN - A FORTRAN LANGUAGE EXTENSION FOR DISTRIBUTED MEMORY MULTIPROCESSORS -- Abstract -- 1 Introduction -- 2 The Basic Features of Vienna Fortran -- 3 Examples -- 4 Related Work -- 5 Conclusions -- Acknowledgments -- References
Chapter 4. Compiler Parallelization of SIMPLE for a Distributed Memory Machine -- Abstract -- 1 Introduction -- 2 What is SIMPLE? -- 3 Machine Model -- 4 Data Distribution -- 5 Code Generation -- 6 Results and Analysis -- 7 Summary -- Acknowledgements -- References
Chapter 5. Applications of the "Phase Abstractions" for Portable and Scalable Parallel Programming -- Abstract -- 1 Introduction -- 2 Preliminaries -- 3 Jacobi Iteration Example -- 4 Specification of the Processes, Level X -- 5 Global Data Declaration -- 6 Phase Definitions, Y Level -- 7 Main Program Body, Z Level -- 8 Commentary on the Program and Abstractions -- 9 Conclusions -- 10 Acknowledgments -- References
Chapter 6. Nicke - C Extensions for Programming on Distributed-Memory Machines -- Abstract -- 1 Introduction -- 2 Basic Constructs -- 3 Shared Variables -- 4 Implementation -- 5 Conclusion -- References
Chapter 7. A Static Performance Estimator in the Fortran D Programming System -- Abstract -- 1. INTRODUCTION -- 2. DISTRIBUTED MEMORY PROGRAMMING MODEL -- 3. CHOOSING THE DATA DECOMPOSITION SCHEME -- 4. AN EXAMPLE -- 5. THE TRAINING SET METHOD OF PERFORMANCE ESTIMATION -- 6. THE PERFORMANCE ESTIMATION ALGORITHM -- 7. A PROTOTYPE IMPLEMENTATION -- 8. CONCLUSION AND FUTURE WORK -- References
Chapter 8. Compiler Support for Machine-Independent Parallel Programming in Fortran D -- Abstract -- 1 Introduction -- 2 Fortran D -- 3 Basic Compilation Strategy -- 4 Compilation of Whole Programs -- 5 Validation -- 6 Relationship to Other Research -- 7 Conclusions and Future Work -- 8 Acknowledgements -- References
Chapter 9. PANDORE: A System to Manage Data Distribution -- Abstract -- 1. INTRODUCTION -- 2. OVERVIEW OF THE PANDORE SYSTEM -- 3. THE PANDORE LANGUAGE -- 4. FURTHER WORK -- References
Chapter 10. DISTRIBUTED MEMORY COMPILER METHODS FOR IRREGULAR PROBLEMS - DATA COPY REUSE AND RUNTIME PARTITIONING -- Abstract -- 1 Introduction -- 2 Overview -- 3 The PARTI Primitives -- 4 PARTI Compiler -- 5 Experimental Results -- 6 Conclusions -- Acknowledgements -- References
Chapter 11. Scheduling EPL Programs for Parallel Processing -- 1 Introduction -- 2 Basic Scheduling in EPL -- 3 Case Study: Horizontal Partitioning for the CM -- 4 Alignment Problem -- 5 Optimum Direction of Computation -- 6 Conclusion -- References
Chapter 12. Parallelizing Programs for Distributed-Memory Machines using the Crystal System -- Abstract -- 1 Introduction -- 2 Summary of Our Position -- 3 The Crystal Model and its Language and Metalanguage Features -- 4 The Crystal Compiler -- 5 Performance Results -- 6 Crystalizing FORTRAN -- References -- A Appendix
Chapter 13. Iteration Space Tiling for Distributed Memory Machines -- Abstract -- 1 Introduction -- 2 Background -- 3 Issues in tiling of iteration spaces -- 4 Extreme vectors and deadlock free tiling -- 5 Computing the extreme vectors -- 6 Choosing deadlock-free tiles -- 7 Tile Space Graph (TSG) -- 8 Optimizing tile size -- 9 Discussion -- References
Chapter 14. Systolic Loops -- Abstract -- 1. SUMMARY -- 2. TARGET PROCESSOR ARCHITECTURE -- 3. EFFICIENT PARALLEL LOOPS -- 4. UNIFORM RECURRENCE EQUATIONS AND SYSTOLIC ARRAYS -- 5. SYSTOLIC ARRAYS, WAVEFRONTS AND LOOP SKEWING -- 6. SYSTOLIC LOOP PROCESSING -- 7. OTHER WORK -- References
Chapter 15. An Optimizing C* Compiler for a Hypercube Multicomputer -- Abstract -- 1 Introduction -- 2 The C* Programming Language -- 3 Design of the C* Compiler -- 4 The Optimizer -- 5 Supporting Program Analysis -- 6 Evaluating the Optimizer -- 7 Summary -- References
Chapter 16. The Paragon Programming Paradigm and Distributed Memory Multicomputers -- Abstract -- 1 Introduction -- 2 Programming in Paragon -- 3 Paragon Primitive Implementation -- 4 Conclusion -- References
Summary: The papers in this volume cover a wide range of topics related to programming distributed memory machines. Although distributed memory architectures have the potential to supply the very high levels of performance required by future computing needs, they present awkward programming problems. The central issue is to design methods that enable compilers to generate efficient distributed memory programs from relatively machine-independent program specifications. This book collects papers describing research efforts aimed at easing the task of programming distributed memory machines.
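
To make the programming problem concrete, the short C sketch below (not drawn from the book; the array size, processor count, and names are illustrative assumptions) shows the block-distribution index bookkeeping that a programmer targeting a distributed memory machine must otherwise write by hand, and that the compilers surveyed in this volume aim to generate automatically from a machine-independent specification.

    /* Minimal sketch, not from the book: owner-computes bookkeeping for a
       BLOCK distribution of an N-element array over P processors.
       N and P are assumed values for illustration only. */
    #include <stdio.h>

    #define N 100   /* global array length (assumed) */
    #define P 4     /* number of processors (assumed) */

    int main(void) {
        int block = (N + P - 1) / P;   /* ceiling(N / P): elements owned per processor */

        for (int p = 0; p < P; p++) {
            int lo = p * block;                          /* first global index owned by p */
            int hi = (lo + block < N) ? lo + block : N;  /* one past the last owned index */
            printf("processor %d owns global indices [%d, %d)\n", p, lo, hi);
        }
        return 0;
    }

In a language such as Fortran D or Vienna Fortran (both presented in this volume), the programmer instead writes a data-distribution annotation and leaves this bookkeeping, together with the required communication, to the compiler.
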
Item type: Electronic Book
Current location: UT Tyler Online
Call number: QA76.9.D5 -- .L3655 1992
URL: http://ebookcentral.proquest.com/lib/uttyler/detail.action?docID=1877036
Status: Available
Barcode: EBC1877036


Description based on publisher supplied metadata and other sources.
