mpi

package
v0.1.0
Published: Nov 10, 2023 License: BSD-3-Clause Imports: 2 Imported by: 0

README

Gosl. mpi. Message Passing Interface for parallel computing

More information is available in the documentation of this package.

The mpi package is a light wrapper to the OpenMPI C++ library, designed for developing parallel computing algorithms.

This package allows parallel computations over the network and extends the concurrency capabilities of Go.

Goroutines and MPI calls can coexist, assisting with High Performance Computing (HPC) work, as sketched below.
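
For instance, each processor can use goroutines internally to prepare its share of the data before joining a collective MPI call. The following is a minimal, hypothetical sketch of this pattern, not taken from the package documentation; the import path github.com/cpmech/gosl/mpi is assumed:

package main

import (
	"sync"

	"github.com/cpmech/gosl/mpi" // assumed import path
)

func main() {
	mpi.Start(false)
	defer mpi.Stop(false)

	// each MPI processor owns x and fills it concurrently with goroutines
	n := 1000
	x := make([]float64, n)
	nworkers := 4
	var wg sync.WaitGroup
	for w := 0; w < nworkers; w++ {
		wg.Add(1)
		go func(w int) { // one goroutine per chunk of x
			defer wg.Done()
			lo, hi := (w*n)/nworkers, ((w+1)*n)/nworkers
			for i := lo; i < hi; i++ {
				x[i] = float64(mpi.Rank()) // rank-dependent data
			}
		}(w)
	}
	wg.Wait() // local goroutines done; now the MPI collective call
	res := make([]float64, n)
	mpi.SumToRoot(res, x) // sum over all processors lands on rank 0
}

Here the goroutines exploit the local cores of each machine, while MPI handles the communication between processes.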

Examples

Communication between 3 processors

The code below can be executed with the following command:

mpirun -np 3 go run my_mpi_code.go
package main

import (
	"fmt"
	"testing"

	"github.com/cpmech/gosl/chk"
	"github.com/cpmech/gosl/io"
	"github.com/cpmech/gosl/mpi"
)

// setslice fills x with rank-dependent values so that the reduction
// results can be verified on every processor
func setslice(x []float64) {
	switch mpi.Rank() {
	case 0:
		copy(x, []float64{0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3})
	case 1:
		copy(x, []float64{10, 10, 10, 20, 20, 20, 30, 30, 30, 40, 40})
	case 2:
		copy(x, []float64{100, 100, 100, 1000, 1000, 1000, 2000, 2000, 2000, 3000, 3000})
	}
}

func main() {
	mpi.Start(false)
	defer mpi.Stop(false)

	if mpi.Rank() == 0 {
		io.PfYel("\nTest MPI 01\n")
	}
	if mpi.Size() != 3 {
		chk.Panic("this test needs 3 processors")
	}

	// each processor fills its own chunk of x
	n := 11
	x := make([]float64, n)
	id, sz := mpi.Rank(), mpi.Size()
	start, endp1 := (id*n)/sz, ((id+1)*n)/sz
	for i := start; i < endp1; i++ {
		x[i] = float64(i)
	}

	// Barrier
	mpi.Barrier()

	io.Pfgrey("x @ proc # %d = %v\n", id, x)

	// SumToRoot
	r := make([]float64, n)
	mpi.SumToRoot(r, x)
	var tst testing.T
	if id == 0 {
		chk.Vector(&tst, fmt.Sprintf("SumToRoot:       r @ proc # %d", id), 1e-17, r, []float64{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
	} else {
		chk.Vector(&tst, fmt.Sprintf("SumToRoot:       r @ proc # %d", id), 1e-17, r, make([]float64, n))
	}

	// BcastFromRoot
	r[0] = 666
	mpi.BcastFromRoot(r)
	chk.Vector(&tst, fmt.Sprintf("BcastFromRoot:   r @ proc # %d", id), 1e-17, r, []float64{666, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10})

	// AllReduceSum
	setslice(x)
	w := make([]float64, n)
	mpi.AllReduceSum(x, w)
	chk.Vector(&tst, fmt.Sprintf("AllReduceSum:    w @ proc # %d", id), 1e-17, w, []float64{110, 110, 110, 1021, 1021, 1021, 2032, 2032, 2032, 3043, 3043})

	// AllReduceSumAdd
	setslice(x)
	y := []float64{-1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000}
	mpi.AllReduceSumAdd(y, x, w)
	chk.Vector(&tst, fmt.Sprintf("AllReduceSumAdd: y @ proc # %d", id), 1e-17, y, []float64{-890, -890, -890, 21, 21, 21, 1032, 1032, 1032, 2043, 2043})

	// AllReduceMin
	setslice(x)
	mpi.AllReduceMin(x, w)
	chk.Vector(&tst, fmt.Sprintf("AllReduceMin:    x @ proc # %d", id), 1e-17, x, []float64{0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3})

	// AllReduceMax
	setslice(x)
	mpi.AllReduceMax(x, w)
	chk.Vector(&tst, fmt.Sprintf("AllReduceMax:    x @ proc # %d", id), 1e-17, x, []float64{100, 100, 100, 1000, 1000, 1000, 2000, 2000, 2000, 3000, 3000})
}

Documentation

Overview

package mpi wraps the Message Passing Interface for parallel computations

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func Abort

func Abort()

Abort aborts MPI

func AllReduceMax

func AllReduceMax(x, w []float64)

AllReduceMax combines all values in 'x' from all processors. When corresponding components (at the same position) exist in a number of processors, the maximum value is selected. 'w' is a workspace with length = len(x). The operations are:

w := join_all_selecting_max(x)
x := w

func AllReduceMin

func AllReduceMin(x, w []float64)

AllReduceMin combines all values in 'x' from all processors. When corresponding components (at the same position) exist in a number of processors, the minimum value is selected. 'w' is a workspace with length = len(x). The operations are:

w := join_all_selecting_min(x)
x := w

func AllReduceSum

func AllReduceSum(x, w []float64)

AllReduceSum combines all values in 'x' from all processors. Corresponding components in slice 'x' are added together. 'w' is a workspace with length = len(x). The operations are:

w := join_all_with_sum(x)
x := w

func AllReduceSumAdd

func AllReduceSumAdd(y, x, w []float64)

AllReduceSumAdd combines all values in 'x' from all processors and adds the result to another slice 'y'. Corresponding components in slice 'x' are added together. 'w' is a workspace with length = len(x). The operations are:

w := join_all_with_sum(x)
y += w

func Barrier

func Barrier()

Barrier forces synchronisation

func BcastFromRoot

func BcastFromRoot(x []float64)

BcastFromRoot broadcasts 'x' slice from root (Rank == 0) to all other processors

func DblRecv

func DblRecv(vals []float64, from_proc int)

DblRecv receives a slice of floats from processor 'from_proc'

NOTE: 'vals' must be pre-allocated with the right number of values that will
      be sent by 'from_proc'

func DblSend

func DblSend(vals []float64, to_proc int)

DblSend sends a slice of floats to processor 'to_proc'
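
A hypothetical fragment pairing DblSend with DblRecv (assumed to sit inside a program that has called mpi.Start, run with at least 2 processors; the lengths on both sides must agree):

vals := make([]float64, 3) // receiver pre-allocates 3 values
switch mpi.Rank() {
case 0:
	mpi.DblSend([]float64{1.5, 2.5, 3.5}, 1) // send 3 floats to processor 1
case 1:
	mpi.DblRecv(vals, 0) // receive 3 floats from processor 0
}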

func IntAllReduceMax

func IntAllReduceMax(x, w []int)

IntAllReduceMax combines all (int) values in 'x' from all processors. When corresponding components (at the same position) exist in a number of processors, the maximum value is selected. 'w' is a workspace with length = len(x). The operations are:

w := join_all_selecting_max(x)
x := w
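
A hypothetical fragment (assuming MPI has been started with 3 processors): each processor contributes rank-dependent integers, and all end up with the element-wise maxima:

id := mpi.Rank()
x := []int{id, 10 * id}  // rank-dependent values
w := make([]int, len(x)) // workspace with length = len(x)
mpi.IntAllReduceMax(x, w)
// on every processor, x is now {2, 20}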

func IntRecv

func IntRecv(vals []int, from_proc int)

IntRecv receives a slice of integers from processor 'from_proc'

NOTE: 'vals' must be pre-allocated with the right number of values that will
      be sent by 'from_proc'

func IntSend

func IntSend(vals []int, to_proc int)

IntSend sends a slice of integers to processor 'to_proc'

func IsOn

func IsOn() bool

IsOn tells whether MPI is on or not

NOTE: this returns true even after Stop

func Rank

func Rank() int

Rank returns the processor rank/ID

func SingleIntRecv

func SingleIntRecv(from_proc int) (val int)

SingleIntRecv receives a single integer 'val' from processor 'from_proc'

func SingleIntSend

func SingleIntSend(val, to_proc int)

SingleIntSend sends a single integer 'val' to processor 'to_proc'
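
A hypothetical fragment pairing SingleIntSend with SingleIntRecv (inside a started MPI program with at least 2 processors; fmt is assumed imported):

if mpi.Rank() == 1 {
	mpi.SingleIntSend(42, 0) // processor 1 sends 42 to processor 0
}
if mpi.Rank() == 0 {
	val := mpi.SingleIntRecv(1) // processor 0 receives from processor 1
	fmt.Println("received", val)
}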

func Size

func Size() int

Size returns the number of processors

func Start

func Start(debug bool)

Start initialises MPI

func Stop

func Stop(debug bool)

Stop finalises MPI
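
Programs using this package follow the same Start/Stop skeleton; a minimal sketch (import path assumed):

package main

import (
	"fmt"

	"github.com/cpmech/gosl/mpi" // assumed import path
)

func main() {
	mpi.Start(false)      // initialise MPI (debug == false)
	defer mpi.Stop(false) // finalise MPI on exit
	if mpi.Rank() == 0 {
		fmt.Println("running on", mpi.Size(), "processors")
	}
}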

func SumToRoot

func SumToRoot(dest, orig []float64)

SumToRoot sums all values in 'orig' to 'dest' in root (Rank == 0) processor

NOTE: orig and dest must be different slices, i.e. not pointing to the same underlying data structure

Types

This section is empty.
