
Languages, Compilers and Run-Time Systems for Scalable Computers

Edited by Boleslaw K. Szymanski, Balaram Sinharoy
English, Hardback – 31 Oct 1995

Price: 665.89 lei

Old price: 1,062.96 lei
-37%

Express points: 999

Estimated price in other currencies:
127.58€ / 138.19$ / 109.40£

Book temporarily unavailable


Specifications

ISBN-13: 9780792396352
ISBN-10: 0792396359
Pages: 335
Illustrations: XVIII, 335 p.
Dimensions: 155 x 235 x 21 mm
Weight: 0.67 kg
Edition: 1996
Publisher: Springer US
Series: Springer
Place of publication: New York, NY, United States

Target audience

Research

Description

Languages, Compilers and Run-Time Systems for Scalable Computers contains 20 articles based on presentations given at the third workshop of the same title, and 13 extended abstracts from the poster session.
Starting with new developments in classical problems of parallel compiler design, such as dependence analysis and the exploitation of loop parallelism, the book goes on to address compiler strategies for specific architectures and programming environments. Several chapters investigate support for multi-threading, object orientation, irregular computation, locality enhancement, and communication optimization. The interface between language and operating-system support is also discussed. Finally, load-balancing issues are examined in several contexts, including sparse matrix computation and iteratively balanced adaptive solvers for partial differential equations. Additional topics are covered in the extended abstracts.
Each chapter provides a bibliography of relevant papers, so the book can be used as a reference to the most up-to-date research in parallel software engineering.

Contents

1. Non-Linear Array Dependence Analysis; W. Pugh, D. Wonnacott.
2. Path Collection and Dependence Testing in the Presence of Dynamic, Pointer-Based Data Structures; J. Hummel, et al.
3. CDA Loop Transformations; D. Kulkarni, M. Stumm.
4. Optimizing Data-Parallel Stencil Computations in a Portable Framework; S.W. Chappelow, et al.
5. A Compiler Strategy for Shared Virtual Memories; F. Bodin, M. O'Boyle.
6. Machine-Independent Parallel Programming Using the Divide-and-Conquer Paradigm; S. Kumaran, M.J. Quinn.
7. Load Balancing and Data Locality via Fractiling: An Experimental Study; S.F. Hummel, et al.
8. A Path to Scalability and Efficient Performance; C.K. Shank, et al.
9. Runtime Support for Portable Distributed Data Structures; Chih-Po Wen, et al.
10. User Defined Compiler Support for Constructing Distributed Arrays; M. Rosing.
11. Compiling for Multithreaded Multicomputers; B. Sinharoy.
12. Enabling Primitives for Compiling Parallel Languages; S.C. Goldstein, et al.
13. Integrating Data and Task Parallelism in Scientific Programs; E. Deelman, et al.
14. Communication Generation for Cyclic(K) Distributions; K. Kennedy, et al.
15. Point-to-Point Communication Using Migrating Ports; I.T. Foster, et al.
16. The Performance Impact of Address Relation Caching; P.A. Dinda, D.R. O'Hallaron.
17. The Design of Microkernel Support for the SR Concurrent Programming Language; G.D. Benson, R.A. Olsson.
18. Runtime Support for Programming in Adaptive Parallel Environments; G. Agrawal, et al.
19. Data-Parallel Language Features for Sparse Codes; M. Ujaldon, et al.
20. The Quality of Partitions Produced by an Iterative Load Balancer; C.L. Bottasso, et al.
21. A New Compiler Technology for Handling HPF Data Parallel Constructs; F. Andre, et al.
22. An Improved Type-Inference Algorithm to Expose Parallelism in Object-Oriented Programs; S. Kumar, et al.
23. Automatic Distribution of Shared Data Objects; K. Langendoen, et al.
24. Bottom-Up Scheduling with Wormhole and Circuit Switched Routing; K. Ghose, N. Mehdiratta.
25. Communication-Buffers for Data-Parallel, Irregular Computations; A. Muller, R. Ruhl.
26. Compiling Assembly Pattern on a Shared Virtual Memory; M. Hahad, et al.
27. Distributed Memory Implementation of a Shared-Address Parallel Object-Oriented Language; Chu-Cheow Lim, J.A. Feldman.
28. Distributed Tree Structures for N-Body Simulation; A.S. Pai, et al.
29. Communication Generation and Optimization for HPF; A. Thirumalai, et al.
30. Prediction Based Task Scheduling in Distributed Computing; M. Samadani, E. Kaltofen.
31. Refined Single-Threading for Parallel Functional Programming; G. Becker, et al.
32. Symmetric Distributed Computing with Dynamic Load Balancing and Fault Tolerance; T. Bubeck, et al.
33. The Relationship between Language Paradigm and Parallelism: The EQ Prototyping Language; T. Derby, et al.
Index.