In certain cases, it may be advantageous to combine MPI
(distributed-memory) and threads (shared-memory) parallelization.
FFTW supports this, with certain caveats. For example, if you have a
cluster of 4-processor shared-memory nodes, you may want to use
threads within the nodes and MPI between the nodes, instead of MPI for
all parallelization. FFTW's MPI code can also transparently use
FFTW's Cell processor support (e.g. for clusters of Cell processors).
In particular, it is possible to seamlessly combine the MPI FFTW
routines with the multi-threaded FFTW routines (see Multi-threaded
FFTW). In this case, you begin your program by calling both
fftw_init_threads() and fftw_mpi_init(); fftw_init_threads() should be
called first. Then, if you call fftw_plan_with_nthreads(N), every MPI
process will launch N threads to parallelize its transforms.
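A minimal initialization sequence might look like the following sketch
(the thread count of 4 is arbitrary, and error checking is omitted for
brevity):

```c
#include <fftw3-mpi.h>  /* also declares the fftw3.h and mpi.h interfaces */

int main(int argc, char **argv)
{
    /* Initialize FFTW's threading support before its MPI support. */
    fftw_init_threads();
    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* Every plan created after this call will use 4 threads
       within each MPI process. */
    fftw_plan_with_nthreads(4);

    /* ... create and execute plans as usual ... */

    fftw_mpi_cleanup();  /* deallocate FFTW's internal MPI data */
    MPI_Finalize();
    return 0;
}
```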
For example, in the hypothetical cluster of 4-processor nodes, you
might wish to launch only a single MPI process per node, and then call
fftw_plan_with_nthreads(4) on each process to use all processors in
the nodes.
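Concretely, for this one-process-per-node scenario you would arrange
for your MPI launcher to start a single process on each node (the
option for this varies by MPI implementation), and each process would
plan its transforms with 4 threads. The sketch below illustrates this
for a distributed 2d complex DFT; the problem size is an arbitrary
example:

```c
#include <fftw3-mpi.h>

#define N0 256
#define N1 256

int main(int argc, char **argv)
{
    fftw_init_threads();          /* call before fftw_mpi_init() */
    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    fftw_plan_with_nthreads(4);   /* 4 threads per MPI process */

    /* Ask FFTW how much local storage this process needs
       for its slab of the distributed N0 x N1 array. */
    ptrdiff_t local_n0, local_0_start;
    ptrdiff_t alloc = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                             &local_n0, &local_0_start);
    fftw_complex *data = fftw_alloc_complex(alloc);

    /* The transforms within each process run multi-threaded;
       communication between processes still goes through MPI. */
    fftw_plan plan = fftw_mpi_plan_dft_2d(N0, N1, data, data,
                                          MPI_COMM_WORLD,
                                          FFTW_FORWARD, FFTW_ESTIMATE);

    /* ... initialize data[0 .. local_n0*N1 - 1] ... */
    fftw_execute(plan);

    fftw_destroy_plan(plan);
    fftw_free(data);
    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}
```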
This may or may not be faster than simply using as many MPI processes
as you have processors, however. On the one hand, using threads within a
node eliminates the need for explicit message passing within the node.
On the other hand, FFTW's transpose routines are not multi-threaded,
and this means that the communications that do take place will not
benefit from parallelization within the node. Moreover, many MPI
implementations already have optimizations to exploit shared memory
when it is available.
(Note that this is quite independent of whether MPI itself is
thread-safe or multi-threaded: regardless of how many threads you
specify with fftw_plan_with_nthreads, FFTW will perform all of its MPI
communication only from the parent process.)