Changes between Initial Version and Version 1 of Challenge


Timestamp: 03/11/19 14:58:03
Author: siegel
== CIVL-C Reference ==

Almost everything from sequential C is allowed.
The standard library is partially implemented, including
`printf`, `assert`, `malloc`, `free`, ....
Be sure to include the appropriate headers.

Boolean type: `_Bool`.  Values: 0 and 1.
(Same as in C11.)
Semaphore operations:
 * V(s) = `s++`
 * P(s) = `$when (s>0) s--`
You could define them as functions, or as preprocessor macros
(`#define` ...).

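For example, the two operations can be packaged as macros. This is a sketch; the names `V`, `P`, `worker`, `s`, and `count` are illustrative, and the macro bodies come straight from the definitions above.

```c
#define V(s) s++                 /* signal: increment the semaphore */
#define P(s) $when (s > 0) s--   /* wait: block until s>0, then decrement */

int s = 1;       // semaphore guarding the critical section
int count = 0;   // shared variable

void worker() {
  P(s);          // acquire
  count++;       // critical section
  V(s);          // release
}
```

Defining them as macros keeps the `$when` guard visible at each use; function definitions would work equally well.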
 `$proc`::
  The process type.  A value of this type refers to a process.

 `$spawn stmt`::
  Creates a new process executing stmt.  Returns a `$proc`.

 `$wait($proc p)`::
  Blocks until process p terminates.

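Putting the three constructs together, a minimal sketch (the function `greet` and the variable names are illustrative):

```c
#include <stdio.h>

void greet(int id) {
  printf("hello from %d\n", id);
}

int main(void) {
  $proc p0 = $spawn greet(0);  // start two concurrent processes
  $proc p1 = $spawn greet(1);
  $wait(p0);                   // block until each one terminates
  $wait(p1);
  return 0;
}
```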
 `$parfor (int i:a..b) f(.. i ..);`::
  Spawns one process for each i in a..b, each calling f, and
  waits for all of them to terminate.

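A sketch of the same spawn-and-wait pattern written with `$parfor`, assuming `f` is any function of `i`:

```c
#include <stdio.h>

void f(int i) { printf("iteration %d\n", i); }

int main(void) {
  // one process per i in 0..3; control continues only after all finish
  $parfor (int i: 0..3) f(i);
  return 0;
}
```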
Assertions:
  Use either C's `assert` (include `<assert.h>`) or CIVL's `$assert`.
  They mean the same thing.

Quantification: `$forall` and `$exists`.  Examples:
  $forall (double x | x>0) x>1
  $forall (int i | a<=i && i<=b) p(i)
  $forall (int i:a..b) p(i)

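Quantified expressions are boolean expressions, so they can appear inside `$assert`. A sketch, assuming an array `a` of length `n` (illustrative names) and assuming `$exists` takes the same binder forms as `$forall`:

```c
// every element is non-negative
$assert($forall (int i: 0..n-1) a[i] >= 0);
// at least one element is zero
$assert($exists (int i | 0 <= i && i < n) a[i] == 0);
```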

== MPI Reference ==

Constants:

  MPI_COMM_WORLD (type MPI_Comm)
  MPI_STATUS_IGNORE (type MPI_Status*)
  MPI_ANY_SOURCE (type int)
  MPI_INT (type MPI_Datatype)
  MPI_DOUBLE (type MPI_Datatype)

Functions:

MPI_Init(NULL, NULL);
MPI_Finalize();

MPI_Init must be called before any other MPI function.  MPI_Finalize
must be called before termination and after all other MPI functions.

MPI_Comm_size(MPI_Comm comm, int *nprocs);
MPI_Comm_rank(MPI_Comm comm, int *rank);

Get the number of processes in the communicator, and the "rank" (ID
number) of this process within the communicator.

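The calls above combine into a minimal complete MPI program, in which each process prints its own rank:

```c
#include <mpi.h>
#include <stdio.h>

int main(void) {
  int nprocs, rank;
  MPI_Init(NULL, NULL);                      // must come first
  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);    // total number of processes
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);      // this process's ID
  printf("process %d of %d\n", rank, nprocs);
  MPI_Finalize();                            // must come last
  return 0;
}
```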
int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm);

Sends a message to the process of rank dest.
You can use NULL for buf if count is 0 --- an empty message.

int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source,
             int tag, MPI_Comm comm, MPI_Status *status);

Receives a message from the specified source process.
source can be MPI_ANY_SOURCE, to receive from any source ("wildcard").
status can be MPI_STATUS_IGNORE if you don't care about the status;
otherwise, declare a variable of type MPI_Status and pass a pointer to it.

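For example, a sketch in which rank 0 sends one int to rank 1 (run with at least 2 processes):

```c
#include <mpi.h>
#include <stdio.h>

int main(void) {
  int rank, x;
  MPI_Init(NULL, NULL);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0) {
    x = 42;
    MPI_Send(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   // dest=1, tag=0
  } else if (rank == 1) {
    MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);                      // source=0, tag=0
    printf("rank 1 received %d\n", x);
  }
  MPI_Finalize();
  return 0;
}
```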
int MPI_Sendrecv(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
                 int dest, int sendtag,
                 void *recvbuf, int recvcount, MPI_Datatype recvtype,
                 int source, int recvtag,
                 MPI_Comm comm, MPI_Status *status);

Combines one MPI_Send operation and one MPI_Recv operation into a
single command.  It behaves as if the two operations execute
concurrently in separate threads.  Use it to avoid the deadlock that
could result if every process calls MPI_Send and then MPI_Recv.
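A sketch of the classic use case: every process shifts a value one position around a ring. If each process instead did MPI_Send followed by MPI_Recv, all of them could block in the send at once; MPI_Sendrecv avoids that.

```c
#include <mpi.h>
#include <stdio.h>

int main(void) {
  int rank, nprocs, recvd;
  MPI_Init(NULL, NULL);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
  int dest   = (rank + 1) % nprocs;            // right neighbor
  int source = (rank + nprocs - 1) % nprocs;   // left neighbor
  MPI_Sendrecv(&rank,  1, MPI_INT, dest,   0,  // send my rank to the right
               &recvd, 1, MPI_INT, source, 0,  // receive from the left
               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  printf("rank %d got %d\n", rank, recvd);
  MPI_Finalize();
  return 0;
}
```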
     93