Optimizing message-passing on multicore architectures using hardware multi-threading

Daniele Buono; Gabriele Mencagli; Marco Vanneschi
2014-01-01

Abstract

Shared-memory and message-passing are two opposite models for developing parallel computations. The shared-memory model, adopted by existing frameworks such as OpenMP, represents a de-facto standard on multi-/many-core architectures. However, message-passing deserves to be studied for its inherent properties in terms of portability and flexibility, as well as for its ease of debugging. Achieving good performance from the use of messages on shared-memory architectures requires an efficient implementation of the run-time support. This paper investigates the definition of a delegation mechanism on multi-threaded architectures able to: (i) overlap communications with calculation phases; (ii) parallelize distribution and collective operations. Our ideas are exemplified through two parallel benchmarks on the Intel Phi, showing that in these applications our message-passing support outperforms MPI and reaches performance comparable to standard OpenMP implementations.
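The record carries no implementation details, but the delegation idea summarized in the abstract can be illustrated with a minimal, self-contained sketch (an assumed structure, not the paper's actual run-time support): a worker thread hands each completed "send" to a communicator thread, which performs the data transfer while the worker proceeds with the next calculation phase, thus overlapping communication with computation. All names (SendRequest, DelegationQueue) and sizes below are hypothetical.

// Sketch of a delegation mechanism: the worker enqueues send requests and
// keeps computing, while a communicator thread performs the actual copies.
#include <array>
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <thread>
#include <vector>

struct SendRequest {
    const double* src;   // source buffer owned by the worker
    double*       dst;   // destination buffer on the receiver's side
    std::size_t   len;   // number of elements to transfer
};

// Single-producer/single-consumer ring buffer: the worker pushes delegated
// sends, the communicator thread pops and executes them.
class DelegationQueue {
    static constexpr std::size_t N = 256;
    std::array<SendRequest, N> slots_{};
    std::atomic<std::size_t> head_{0}, tail_{0};
public:
    bool push(const SendRequest& r) {
        std::size_t t = tail_.load(std::memory_order_relaxed);
        if (t - head_.load(std::memory_order_acquire) == N) return false;  // full
        slots_[t % N] = r;
        tail_.store(t + 1, std::memory_order_release);
        return true;
    }
    bool pop(SendRequest& r) {
        std::size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire)) return false;      // empty
        r = slots_[h % N];
        head_.store(h + 1, std::memory_order_release);
        return true;
    }
};

int main() {
    constexpr std::size_t LEN = 1 << 20, CHUNK = LEN / 8;
    std::vector<double> in(LEN, 1.0), out(LEN, 0.0);
    DelegationQueue queue;
    std::atomic<bool> done{false};

    // Communicator ("delegate") thread: performs the data transfers on behalf
    // of the worker so they proceed while the worker computes.
    std::thread communicator([&] {
        SendRequest r;
        for (;;) {
            if (queue.pop(r)) {
                std::memcpy(r.dst, r.src, r.len * sizeof(double));
            } else if (done.load(std::memory_order_acquire)) {
                // Worker finished: drain any request that raced with the flag.
                while (queue.pop(r)) std::memcpy(r.dst, r.src, r.len * sizeof(double));
                break;
            }
        }
    });

    // Worker: compute a chunk, delegate its "send", and immediately move on to
    // the next chunk, overlapping the transfer with the next computation phase.
    for (std::size_t c = 0; c < LEN; c += CHUNK) {
        for (std::size_t i = c; i < c + CHUNK; ++i) in[i] *= 2.0;  // computation
        while (!queue.push({&in[c], &out[c], CHUNK})) { /* queue full: retry */ }
    }
    done.store(true, std::memory_order_release);
    communicator.join();

    std::printf("out[0]=%.1f out[last]=%.1f\n", out[0], out[LEN - 1]);
    return 0;
}

With hardware multi-threading, the communicator would presumably be pinned to a hardware thread sharing the worker's core, so the copies use otherwise idle execution slots rather than a dedicated core; that mapping is platform-specific and omitted from this sketch.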
Year: 2014
ISBN: 978-147992728-9
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/666270
Citations
  • PMC: not available
  • Scopus: 10
  • Web of Science (ISI): 9