mpi_initialize_alm_tools


This subroutine initializes the mpi_alm_tools module and must be called by all processors in the MPI communicator before any of the advanced interface working routines. The root processor must supply all arguments; for the slaves they are optional, and any values they do supply are disregarded.
A major advantage of MPI parallelization is access to large quantities of memory, allowing the Legendre polynomials to be pre-computed even at high Nside and lmax, since each processor needs only a fraction (1/Nprocs) of the complete table. This feature is controlled by the ``precompute_plms'' parameter. In general, CPU time can be expected to decrease by roughly 50% with pre-computed Legendre polynomials for temperature calculations, and by about 30% for polarization calculations.

Location in HEALPix directory tree: src/f90/mod/mpi_alm_tools.f90 


FORMAT

call mpi_initialize_alm_tools( comm, [nsmax], [nlmax], [nmmax], [zbounds], [polarization], [precompute_plms], [w8ring_TQU] )


ARGUMENTS

name (dimensionality)   kind   in/out   description
       
comm I4B IN MPI communicator.
nsmax I4B IN the Nside value of the HEALPix map. (OPTIONAL)
nlmax I4B IN the maximum l value used for the alm. (OPTIONAL)
nmmax I4B IN the maximum m value used for the alm. (OPTIONAL)
zbounds(1:2) DP IN section of the map on which to perform the alm analysis, expressed in terms of z = sin(latitude) = cos(theta). If zbounds(1) < zbounds(2), the analysis is performed on the strip zbounds(1) < z < zbounds(2); otherwise, it is performed outside the strip zbounds(2) < z < zbounds(1). (OPTIONAL)
polarization LGT IN if polarization is required, this should be set to true, else it should be set to false. (OPTIONAL)
precompute_plms I4B IN 0 = do not pre-compute any Plm's; 1 = pre-compute PlmT; 2 = pre-compute PlmT and PlmP. (OPTIONAL)
w8ring_TQU(1:2*nsmax, 1:p) DP IN ring weights for quadrature corrections. If ring weights are not used, this array should be 1 everywhere. p is 1 for a temperature analysis and 3 for (T,Q,U). (OPTIONAL)
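
As a sketch of how these arguments fit together, a full-sky, temperature-only setup on the root processor might look as follows (the variable names and chosen values are illustrative, not part of the interface; per the table above, zbounds = (-1, 1) covers the full sky and unit ring weights apply no quadrature correction):

use healpix_types
use mpi_alm_tools

integer(i4b)          :: comm, nsmax, nlmax, nmmax, precompute_plms
logical(lgt)          :: polarization
real(dp)              :: zbounds(1:2)
real(dp), allocatable :: w8ring_TQU(:,:)

nsmax = 512                       ! map resolution (example value)
nlmax = 2*nsmax                   ! maximum multipole (example choice)
nmmax = nlmax
zbounds = (/ -1.0_dp, 1.0_dp /)   ! zbounds(1) < zbounds(2): full sky
polarization = .false.            ! temperature only
precompute_plms = 1               ! pre-compute PlmT

allocate(w8ring_TQU(1:2*nsmax, 1:1))  ! p = 1 for temperature analysis
w8ring_TQU = 1.0_dp                   ! no quadrature correction

call mpi_initialize_alm_tools(comm, nsmax, nlmax, nmmax, &
     & zbounds, polarization, precompute_plms, w8ring_TQU)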


EXAMPLE:

call mpi_comm_rank(comm, myid, ierr)
if (myid == root) then
   call mpi_initialize_alm_tools(comm, nsmax, nlmax, nmmax, &
        & zbounds, polarization, precompute_plms)
   call mpi_map2alm(map, alms)
else
   call mpi_initialize_alm_tools(comm)
   call mpi_map2alm_slave
end if
call mpi_cleanup_alm_tools
This example 1) initializes the mpi_alm_tools module (i.e., allocates internal arrays and defines required parameters), 2) executes a parallel map2alm operation, and 3) frees the previously allocated memory.


RELATED ROUTINES

This section lists the routines related to mpi_initialize_alm_tools

mpi_cleanup_alm_tools
Frees memory that is allocated by the current routine.
mpi_alm2map
Routine for executing a parallel inverse spherical harmonics transform (root processor interface)
mpi_alm2map_slave
Routine for executing a parallel inverse spherical harmonics transform (slave processor interface)
mpi_map2alm
Routine for executing a parallel spherical harmonics transform (root processor interface)
mpi_map2alm_slave
Routine for executing a parallel spherical harmonics transform (slave processor interface)
mpi_alm2map_simple
One-line interface to the parallel inverse spherical harmonics transform
mpi_map2alm_simple
One-line interface to the parallel spherical harmonics transform

Version 3.31, 2017-01-06