Resource Optimization via Structured Parallel Programming
Danelutto, Marco; Pelagatti, Susanna
1994-01-01
Abstract
Programming massively parallel architectures raises many difficult problems. In this paper we show how, by adopting a structured style of parallel programming and a set of template-based compiling tools, most of the burden of writing massively parallel applications can be moved to the compiler design phase. In particular, we discuss how to tackle the problem of implementing a parallel application on a machine with a limited number of processing elements. By exploiting the information on the structure of the parallelism provided by the high-level language, together with the templates in the compiling tools, we devise a polynomial-time procedure that achieves efficient implementations of structured parallel programs on distributed-memory MIMD machines with a regular interconnection topology and a limited number of resources.
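The abstract refers to a polynomial-time procedure for mapping structured parallel programs onto a limited pool of processing elements. The Python sketch below is a hypothetical illustration of that general idea only, not the authors' procedure: it splits a fixed number of processing elements among the stages of a pipeline-structured program in proportion to their estimated per-item costs, a simple polynomial-time allocation heuristic. The function allocate_pes, its parameters, and the example costs are assumptions introduced here for illustration.

    # Hypothetical illustration: a simplified stand-in for the kind of
    # resource-limited mapping the abstract describes, not the authors' algorithm.
    def allocate_pes(stage_costs, total_pes):
        """Assign at least one PE per stage, then spread the remaining PEs
        in proportion to each stage's estimated per-item cost."""
        n = len(stage_costs)
        if total_pes < n:
            raise ValueError("need at least one PE per stage")
        alloc = [1] * n
        spare = total_pes - n
        total_cost = sum(stage_costs)
        # Proportional split of the spare PEs (integer part of each share).
        for i, c in enumerate(stage_costs):
            alloc[i] += int(spare * c / total_cost)
        leftover = total_pes - sum(alloc)
        # Hand out any leftover PEs to the currently slowest stages.
        order = sorted(range(n), key=lambda i: stage_costs[i] / alloc[i], reverse=True)
        for i in order[:leftover]:
            alloc[i] += 1
        return alloc

    # Example: a three-stage pipeline with unequal stage costs and 16 PEs available.
    print(allocate_pes([2.0, 10.0, 4.0], 16))   # prints [2, 10, 4]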