
       Let us consider two versions, homogeneous and heterogeneous, of a complex real-life mpC application that solves partial differential equations on a hierarchy of nested grids for 3D modeling of a supernova explosion. The only difference between the versions is that the second one takes into account the differences in processor performances. The computations were performed on a local network of 12 diverse uniprocessor PCs running Windows 2000. When increasing the number of computers involved in the computations, the most powerful computers are involved first (a minimal sketch of the distribution idea is given below).
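       The following minimal C sketch (not the actual mpC source) illustrates the idea behind the two versions: the homogeneous version splits the grid layers evenly across the processors, while the heterogeneous one splits them in proportion to the relative processor performances. All names and numbers in the sketch are illustrative assumptions, not taken from the real application.

#include <stdio.h>

/* Distribute `total` grid layers over `p` processors in proportion to
 * their relative performances; setting perf[i] == 1 for all i reproduces
 * the homogeneous (even) distribution. */
static void distribute(int total, int p, const double *perf, int *share)
{
    double sum = 0.0;
    int assigned = 0;
    for (int i = 0; i < p; i++)
        sum += perf[i];
    for (int i = 0; i < p; i++) {
        share[i] = (int)(total * perf[i] / sum);
        assigned += share[i];
    }
    /* assign the leftover layers to the first (fastest) processor */
    share[0] += total - assigned;
}

int main(void)
{
    /* hypothetical relative performances of 4 processors */
    double perf[4] = { 1000, 540, 435, 315 };
    int share[4];
    distribute(128, 4, perf, share);
    for (int i = 0; i < 4; i++)
        printf("processor %d gets %d layers\n", i, share[i]);
    return 0;
}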

      To estimate how fully the application utilizes the performance potential of the executing network of computers, we use parallelization efficiency.

[Figure: mpC diagram]

       The network consists of computers with Athlon 1700, Pentium III 933 MHz, Pentium III 733 MHz, and Pentium III 533 MHz processors interconnected with an Intel ES460T24 Fast Ethernet switch. This configuration, which was not initially designed for parallel computing, is typical for a team network. As the underlying communication platform for mpC we use the MPI implementation MPIPro 1.6.3.

       The relative performances of the computers, as demonstrated on this application, are the following:

Processor           Computer number   Relative performance
Athlon 1700         1-3               1000
Pentium III 933     4-8               540
Pentium III 733     9                 435
Pentium III 533     10-12             315

       Parallelization efficiency is defined as Sreal / Sideal, where Sreal is the real speedup achieved by the parallel application on the parallel system, and Sideal is the ideal speedup that could be achieved when parallelizing the problem. The latter is calculated as the sum of the performances of the processors constituting the parallel system, divided by the performance of a base processor. As the base processor we use the most powerful processor.
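       For example, with all 12 computers from the table above, Sideal = (3·1000 + 5·540 + 1·435 + 3·315) / 1000 = 7080 / 1000 = 7.08. The following minimal C sketch shows the calculation; the serial and parallel execution times in it are hypothetical placeholders, not measured values.

#include <stdio.h>

int main(void)
{
    /* relative performances of the 12 computers (see the table above) */
    double perf[12] = { 1000, 1000, 1000, 540, 540, 540, 540, 540,
                        435, 315, 315, 315 };
    double base = 1000.0;          /* the most powerful processor */

    /* ideal speedup: total performance divided by the base performance */
    double s_ideal = 0.0;
    for (int i = 0; i < 12; i++)
        s_ideal += perf[i] / base;

    /* real speedup: serial time on the base processor divided by the
     * parallel time (both times below are hypothetical) */
    double t_serial = 3600.0, t_parallel = 700.0;
    double s_real = t_serial / t_parallel;

    printf("S_ideal    = %.2f\n", s_ideal);     /* prints 7.08 */
    printf("S_real     = %.2f\n", s_real);
    printf("efficiency = %.2f\n", s_real / s_ideal);
    return 0;
}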



Copyright © 2002 ISP RAS