mpi
Utilities for interacting with MPI.
Functions:

- is_rank_zero – Returns true if the current MPI rank is zero, or if the application is not running using MPI.
- get_mpi_comm – A context manager that returns the main MPI communicator and installs signal handlers to abort MPI on exceptions.
- reduce_dict – Reduce a dictionary of values across MPI ranks.
- divide_tasks – Determine how many tasks the current MPI process should run given the total number that need to be distributed across all ranks.
- divide_gpus – Attempts to divide the available GPUs across MPI ranks. If there are more ranks than GPUs, then each GPU will be assigned to multiple ranks.
- run_on_rank_zero – A convenient decorator that ensures the function is only run on rank zero and that the outputs are broadcast to the other ranks.
is_rank_zero
Returns true if the current MPI rank is zero, or if the application is not running using MPI.
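A common use is guarding work that should only happen once per run, such as writing an output file. The sketch below assumes the zero-argument call shown in the summary above; the file name is purely illustrative.

```python
import femto.md.utils.mpi

# Only rank zero (or a plain, non-MPI run) writes the summary file;
# every other rank skips the block entirely.
if femto.md.utils.mpi.is_rank_zero():
    with open("summary.txt", "w") as file:
        file.write("written once, by rank zero\n")
```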
get_mpi_comm
A context manager that returns the main MPI communicator and installs signal handlers to abort MPI on exceptions.
The signal handlers are restored to their defaults when the context manager exits.
Returns:

- ContextManager[Intracomm] – The global MPI communicator.
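A minimal sketch of the intended usage, assuming the context manager takes no arguments; `Get_rank` and `Get_size` are standard mpi4py `Intracomm` methods.

```python
import femto.md.utils.mpi

# The context manager installs signal handlers that abort MPI on exceptions,
# and restores the default handlers when the block exits.
with femto.md.utils.mpi.get_mpi_comm() as mpi_comm:
    rank = mpi_comm.Get_rank()
    n_ranks = mpi_comm.Get_size()
    print(f"running as rank {rank} of {n_ranks}")
```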
reduce_dict
Reduce a dictionary of values across MPI ranks.
Parameters:

- value (dict[_K, _T]) – The dictionary of values to reduce.
- mpi_comm (Intracomm) – The MPI communicator to use for the reduction.
- root (int | None, default: None) – The rank to which the reduced dictionary should be sent. If None, the reduced dictionary will be broadcast to all ranks.

Returns:

- dict[_K, _T] – The reduced dictionary of values.
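A sketch of the two root modes. The dictionary keys and values are illustrative, and the docs above do not state which reduction operation is applied to each entry, so the comments stay agnostic about that.

```python
import femto.md.utils.mpi

with femto.md.utils.mpi.get_mpi_comm() as mpi_comm:
    # Each rank contributes its own local values (keys / values illustrative).
    local_values = {"n_samples": 100, "potential_energy": -123.4}

    # root=None (the default): the reduced dictionary is broadcast to all ranks.
    reduced_everywhere = femto.md.utils.mpi.reduce_dict(local_values, mpi_comm)

    # root=0: only rank zero receives the reduced dictionary.
    reduced_on_zero = femto.md.utils.mpi.reduce_dict(local_values, mpi_comm, root=0)
```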
divide_tasks
Determine how many tasks the current MPI process should run given the total number that need to be distributed across all ranks.
Parameters:

- mpi_comm (Intracomm) – The main MPI communicator.
- n_tasks (int) – The total number of tasks to run.

Returns:

- tuple[int, int] – The number of tasks to run on the current MPI process, and the index of the first task to be run by this worker.
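The returned pair maps naturally onto a contiguous block of task indices, as in the sketch below; that block interpretation is read off "the index of the first task to be run by this worker", while the total task count and loop body are illustrative.

```python
import femto.md.utils.mpi

N_TOTAL_TASKS = 12  # illustrative workload size

with femto.md.utils.mpi.get_mpi_comm() as mpi_comm:
    # How many tasks this rank owns, and the index of its first task.
    n_local, start = femto.md.utils.mpi.divide_tasks(mpi_comm, N_TOTAL_TASKS)

    for task_idx in range(start, start + n_local):
        ...  # run task `task_idx` on this rank
```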
divide_gpus
Attempts to divide the available GPUs across MPI ranks. If there are more ranks than GPUs, then each GPU will be assigned to multiple ranks.
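Since no parameters are documented, usage is presumably a single zero-argument call on every rank; calling it once at start-up, before any GPU work begins, is an assumption rather than a documented requirement.

```python
import femto.md.utils.mpi

# Assign this rank its share of the available GPUs (GPUs are shared between
# ranks when there are more ranks than devices).
femto.md.utils.mpi.divide_gpus()
```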
run_on_rank_zero
A convenient decorator that ensures the function is only run on rank zero and that the outputs are broadcast to the other ranks.
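A sketch using it as a plain, argument-free decorator (an assumption based on the description above); the configuration file and JSON format are illustrative.

```python
import json

import femto.md.utils.mpi


@femto.md.utils.mpi.run_on_rank_zero
def load_config(path: str) -> dict:
    # Only rank zero executes this body; the return value is then broadcast
    # so every rank ends up with the same dictionary.
    with open(path) as file:
        return json.load(file)


config = load_config("config.json")  # identical on all ranks
```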