====== Mpi4py for Caviness ======
MPI for Python ([[https://mpi4py.readthedocs.io/|mpi4py]]) provides bindings of the MPI standard for the Python programming language, allowing any Python program to exploit multiple processors. Check the versions available on Caviness with VALET:
<code bash>
$ vpkg_versions python-mpi

Available versions in package (* = default version):

[/.../python-mpi.vpkg_yaml]
python-mpi
* 2.7.15
  3.6.5
</code>
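
If you want to confirm that a loaded version works before building anything on top of it, a quick interactive check is enough. The following is a minimal sketch (Python 2 syntax, matching the default ''python-mpi/2.7.15''); ''MPI.Get_library_version()'' reports which MPI implementation mpi4py was built against:

<code python>
# Minimal environment check (Python 2 syntax for python-mpi/2.7.15).
from mpi4py import MPI

# MPI standard level implemented by the library, e.g. (3, 1)
print "MPI standard: %d.%d" % MPI.Get_version()
# Identifies the underlying MPI library (e.g. Open MPI) and its version
print MPI.Get_library_version()
</code>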

===== Sample mpi4py script =====

Adapted from the documentation provided by [[https://mpi4py.readthedocs.io/|mpi4py]].

<code python scatter-gather.py>
#
# Loaded Modules
#
import numpy as np
from mpi4py import MPI

#
# Communicator
#
comm = MPI.COMM_WORLD

my_N = 4
N = my_N * comm.size

if comm.rank == 0:
    A = np.arange(N, dtype=np.float64)
else:
    # Note that if I am not the root processor A is an empty array
    A = np.empty(N, dtype=np.float64)

my_A = np.empty(my_N, dtype=np.float64)

#
# Scatter data into my_A arrays
#
comm.Scatter( [A, MPI.DOUBLE], [my_A, MPI.DOUBLE] )

if comm.rank == 0:
    print "After Scatter:"

for r in xrange(comm.size):
    if comm.rank == r:
        print "[%d] %s" % (comm.rank, my_A)
    comm.Barrier()

#
# Everybody is multiplying by 2
#
my_A *= 2

#
# Allgather data into A again
#
comm.Allgather( [my_A, MPI.DOUBLE], [A, MPI.DOUBLE] )

if comm.rank == 0:
    print "After Allgather:"

for r in xrange(comm.size):
    if comm.rank == r:
        print "[%d] %s" % (comm.rank, A)
    comm.Barrier()
</code>
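
Note that the script above uses Python 2 syntax (''print'' statements and ''xrange''). If you load ''python-mpi/3.6.5'' instead, those are the only constructs that need changing; a sketch of the equivalent Python 3 lines:

<code python>
# Python 3 equivalents of the Python 2 constructs used above:
# print becomes a function, and xrange() is renamed range().
if comm.rank == 0:
    print("After Scatter:")

for r in range(comm.size):
    if comm.rank == r:
        print("[%d] %s" % (comm.rank, my_A))
    comm.Barrier()
</code>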

===== Batch job =====

Any MPI job requires you to use ''mpirun'' to initiate it, and on Caviness this should be done through the Slurm job scheduler.

The best results have been found by using the //openmpi.qs// template for Open MPI jobs. For example, copy the template

<code bash>
cp /opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs mympi4py.qs
</code>

and modify it for your application. There are several ways to communicate the number and layout of worker processes. In this example, we will modify the job script to specify a single node and 4 cores using ''#SBATCH --nodes=1'' and ''#SBATCH --ntasks=4''. You will also need to add the VALET command to load the package for this example:

<code bash>
vpkg_require python-mpi/2.7.15
</code>
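
If you are unsure which interpreter and mpi4py build the package provides, a quick check on the login node can confirm it. This is a sketch assuming Python 2 syntax to match ''python-mpi/2.7.15'':

<code python>
# Report the Python and mpi4py versions supplied by the loaded package.
import sys
import mpi4py

print "Python: %s" % sys.version.split()[0]
print "mpi4py: %s" % mpi4py.__version__
</code>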

Lastly, modify the section of the job script that executes your MPI program:

<code bash>
${UD_MPIRUN} python scatter-gather.py
</code>
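
''${UD_MPIRUN}'' is set by the template to invoke ''mpirun'' with flags appropriate for the allocated resources. If you want to verify the process layout before running the real workload, a tiny test script can be launched the same way. The script below is a hypothetical ''check-ranks.py'' (not part of the template), included only as a sketch:

<code python>
# check-ranks.py -- hypothetical sanity check: with the settings above,
# 4 ranks should each report their rank, the communicator size, and
# the node they are running on.
from mpi4py import MPI

comm = MPI.COMM_WORLD
print "rank %d of %d on %s" % (comm.rank, comm.size, MPI.Get_processor_name())
</code>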

All the options for ''sbatch'' are described in the Slurm documentation. Submit the job with

<code bash>
sbatch mympi4py.qs
</code>

The following output is based on the Python 2 script ''scatter-gather.py'' above, run with 4 processes on a single node:

<code bash>
Adding dependency `python/2.7.15` to your environment
Adding dependency `libfabric/...` to your environment
Adding dependency `openmpi/...` to your environment
Adding package `python-mpi/2.7.15` to your environment
-- Open MPI job setup complete (on r03n33):
-- mpi job startup
--   nhosts         = 1
--   nproc          = 4
--   nproc-per-node = 4
--   cpus-per-proc  = 1

-- Open MPI environment flags:
--   OMPI_MCA_btl_base_exclude=tcp
--   OMPI_MCA_rmaps_base_display_map=true
--   OMPI_MCA_orte_hetero_nodes=true
--   OMPI_MCA_hwloc_base_binding_policy=core
--   OMPI_MCA_rmaps_base_mapping_policy=core

 Data for JOB [51033,1] offset 0 Total slots allocated 4

 ========================   JOB MAP   ========================

 Data for node: r03n33
        Process OMPI jobid: [51033,1] App: 0 Process rank: 0 Bound: socket 0[core 0[hwt 0]]:
        Process OMPI jobid: [51033,1] App: 0 Process rank: 1 Bound: socket 0[core 1[hwt 0]]:
        Process OMPI jobid: [51033,1] App: 0 Process rank: 2 Bound: socket 0[core 2[hwt 0]]:
        Process OMPI jobid: [51033,1] App: 0 Process rank: 3 Bound: socket 0[core 3[hwt 0]]:

 =============================================================
After Scatter:
[0] [0. 1. 2. 3.]
[1] [4. 5. 6. 7.]
[2] [ 8.  9. 10. 11.]
[3] [12. 13. 14. 15.]
After Allgather:
[0] [ 0.  2.  4.  6.  8. 10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30.]
[1] [ 0.  2.  4.  6.  8. 10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30.]
[2] [ 0.  2.  4.  6.  8. 10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30.]
[3] [ 0.  2.  4.  6.  8. 10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30.]
</code>
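
As a sanity check on this output: ''Scatter'' hands each of the 4 ranks one contiguous 4-element slice of ''A'', every rank doubles its slice, and ''Allgather'' reassembles the doubled slices on all ranks, so each rank ends up holding ''2 * [0, 1, ..., 15]''. If you want the script to verify this itself, one possible assertion to append after the ''Allgather'' (an assumption, not part of the original example):

<code python>
# Optional verification: after Allgather every rank should hold
# 2 * [0, 1, ..., N-1].
expected = 2.0 * np.arange(N, dtype=np.float64)
assert np.allclose(A, expected), "Allgather result mismatch"
</code>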

===== Recipes =====
If you need to build a Python virtualenv based on a collection of Python modules that includes mpi4py, you will need to follow this recipe to get a properly-integrated mpi4py module:

  * [[technical:recipes:...|Building a Python virtualenv that includes mpi4py]]