== CIVL-C Reference ==

Almost everything from sequential C is allowed. The standard library is partially implemented, including `printf`, `assert`, `malloc`, `free`, .... Be sure to include the appropriate headers.

`$when (expr) stmt`:: a guarded command. Blocks until `expr` is true. The first atomic action of `stmt` is executed atomically with the evaluation of `expr` to true. Example: `$when (s>0) s--` defines Dijkstra's P operation on semaphores. The V operation is just `s++`. Note that `s--` and `s++` are both atomic in CIVL-C.
`_Bool`:: the Boolean type, with values 0 and 1. This is the same as in C11.
`$proc`:: the process type. A value of this type refers to a process.
`$spawn stmt`:: creates a new process executing the statement. Returns a value of type `$proc` referencing the new process.
`$wait($proc p)`:: blocks until process `p` terminates.
`$parfor (int i:a..b) stmt`:: spawns one process for each `i` in the range `a..b`, each executing `stmt`, then waits for all of them to terminate.
`$assert(expr)`:: an assertion. This is treated exactly the same as C's `assert`, except that the latter requires including `assert.h`. CIVL-C has a richer assertion language than C---e.g., you can use quantifiers.
`$assume(expr)`:: assumes this expression is true, i.e., if the expression is not true, the execution is not considered to be a valid execution of the program.
`$forall` and `$exists`:: mean what you think.
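The concurrency primitives above can be combined into a small program. The following sketch (assuming the usual `civlc.cvh` header; the variable names are illustrative) uses `$when` to build a binary semaphore protecting a shared counter, with `$spawn` and `$wait` for process management:

{{{
#include <civlc.cvh>

int s = 1;      // binary semaphore, initially available
int count = 0;  // shared counter protected by s

void worker() {
  $when (s > 0) s--;  // P: blocks until s>0, decrements atomically
  count++;            // critical section
  s++;                // V: atomic increment releases the semaphore
}

void main() {
  $proc p0 = $spawn worker();
  $proc p1 = $spawn worker();
  $wait(p0);
  $wait(p1);
  $assert(count == 2);  // holds in every interleaving
}
}}}

Without the semaphore, the two increments of `count` could interleave and the assertion could fail; CIVL would report the violating trace.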
Here are some examples to show the syntax:

* `$forall (double x | x>1) x>0` -- for all doubles `x` that are greater than 1, `x` is greater than 0
* `$forall (int i | a<=i && i<=b) p(i)` -- for all `i` between `a` and `b` (inclusive), `p(i)` holds
* `$forall (int i:a..b) p(i)` -- same as above, but more convenient

== MPI Reference ==

=== Constants ===

* `MPI_COMM_WORLD` (type `MPI_Comm`)
* `MPI_STATUS_IGNORE` (type `MPI_Status*`)
* `MPI_ANY_SOURCE` (type `int`)
* `MPI_ANY_TAG` (type `int`)
* `MPI_INT` (type `MPI_Datatype`)
* `MPI_DOUBLE` (type `MPI_Datatype`)

=== Functions ===

* `MPI_Init(NULL, NULL)`
* `MPI_Finalize()`

`MPI_Init` must be called before any other MPI function. `MPI_Finalize` must be called before termination and after all other MPI functions.

* `MPI_Comm_size(MPI_Comm comm, int *nprocs)`
* `MPI_Comm_rank(MPI_Comm comm, int *rank)`

Get the number of processes in the communicator, and the "rank" (ID number) of this process within the communicator.

* `int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)`

Sends a message to the process of rank `dest`. You can use `NULL` for `buf` if `count` is 0 --- an empty message.

* `int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)`

Receives a message from the specified source process. `source` can be `MPI_ANY_SOURCE` to receive from any source (a "wildcard"). `status` can be `MPI_STATUS_IGNORE` if you don't care about the status; otherwise, declare a variable of type `MPI_Status` and pass a pointer to it.

{{{
int MPI_Sendrecv(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
                 int dest, int sendtag,
                 void *recvbuf, int recvcount, MPI_Datatype recvtype,
                 int source, int recvtag,
                 MPI_Comm comm, MPI_Status *status);
}}}

Combines one `MPI_Send` operation and one `MPI_Recv` operation into a single command. It behaves as if the two operations execute concurrently in separate threads. Used to avoid deadlocks that could result if all processes do `MPI_Send; MPI_Recv`.
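A typical use is a ring shift, where every process sends to its right neighbor and receives from its left neighbor. The sketch below uses only the functions and constants listed above; the variable names are illustrative:

{{{
#include <mpi.h>

int main(void) {
  int nprocs, rank, sendval, recvval;

  MPI_Init(NULL, NULL);
  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  sendval = rank;
  // Send to the right neighbor, receive from the left neighbor.
  // If every process instead did MPI_Send followed by MPI_Recv,
  // all sends could block waiting for a matching receive (deadlock);
  // MPI_Sendrecv performs both "concurrently", so no cycle forms.
  MPI_Sendrecv(&sendval, 1, MPI_INT, (rank + 1) % nprocs, 0,
               &recvval, 1, MPI_INT, (rank + nprocs - 1) % nprocs, 0,
               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  // recvval now holds the rank of the left neighbor
  MPI_Finalize();
  return 0;
}
}}}

Note that whether the plain `MPI_Send; MPI_Recv` version deadlocks depends on internal buffering, which is exactly the kind of nondeterminism a verifier like CIVL is designed to expose.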