== OpenMP Primitives ==

Constructs
 * `parallel`
 * worksharing
   * `for`
   * `sections` and `section`
   * `single`
 * synchronization
   * `barrier`
   * `critical`
   * `atomic`
   * `ordered`
   * `master`
 * `threadprivate`
 * `flush`

Clauses
 * `private(list)`
 * `firstprivate(list)`
 * `lastprivate(list)`
 * `copyin(list)`
 * `shared(list)`
 * `default(none|shared)`
 * `num_threads(n)`
 * `schedule(static, n)`
 * `schedule(dynamic, n)`
 * ...
 * `ordered`
 * `nowait`

Functions
 * `omp_get_num_threads()`
 * `omp_get_thread_num()`

== Helper primitives ==

Here are some newly proposed CIVL abstract datatypes that may be helpful.

Handle type `$intseq`, representing a finite sequence of integers:
{{{
$intseq $intseq_empty();                                // returns the empty sequence
$intseq $intseq_add($intseq seq, int m);                // appends m to seq
$intseq $intseq_remove($intseq seq, int i);             // removes the i-th element
int     $intseq_get($intseq seq, int i);                // returns the i-th element
$intseq $intseq_set($intseq seq, int i, int value);     // sets the i-th element to value
$intseq $intseq_insert($intseq seq, int i, int value);  // inserts value at index i
int     $intseq_length($intseq seq);                    // returns the length of seq
}}}

Handle type `$distribution`, representing a distribution of n things into k disjoint sequences:
{{{
/* distributes the integers 0,1,...,n-1 into k sequences NONDETERMINISTICALLY */
$distribution $distribute(int n, int k);
$intseq $distribution_get($distribution dist, int i);  // returns the i-th sequence
}}}

We can play with different implementations of the `$distribute` function. At one extreme, it can return an arbitrary distribution; this allows exploration of all possible executions of an OpenMP `for` loop. At the other extreme, we can sacrifice soundness for tractability and use a fixed strategy, like round-robin with chunk size 1. And there are various things in between.

== Translation Strategies ==

=== Translating shared variables ===

=== Translating `parallel` ===

A `parallel` construct spawns some nondeterministic number of threads.
We will assume there is a constant `THREAD_MAX` defined somewhere. The number of threads created will be between 1 and `THREAD_MAX` (inclusive). Each thread is assigned an ID; the original ("master") thread has ID 0. All threads execute the parallel region.

{{{
#pragma omp parallel ...
S
}}}
=>
{{{
{
  int _nthreads = 1+$choose_int(THREAD_MAX);  // nondeterministic choice in 1..THREAD_MAX
  $proc _threads[_nthreads];
  void _thread(int _tid) {
    translate(S)
  }
  for (int i=0; i<_nthreads; i++) _threads[i] = $spawn _thread(i);
  for (int i=0; i<_nthreads; i++) $wait(_threads[i]);
}
}}}

All variables that occur in the parallel construct, i.e., in the lexical extent of the parallel construct, must be determined to be either private or shared. This is determined by the clauses and the default rules specified in the OpenMP Standard. Obviously any variable declared within the construct itself must be private.

For each private variable `x` not declared within the parallel construct, create a new variable of the same type, `_x`. The new variable is declared within the thread scope. If `x` is also firstprivate, then `_x` is initialized with the value of `x`, e.g., `int _x=x;`. Otherwise, `_x` is uninitialized and so has an undefined value.

=== Translating `for` ===

First, try to determine whether the loop iterations are independent. In that case, they can all be executed by one thread. Otherwise, iterations must be distributed among the threads in some nondeterministic way. This could blow up rapidly! Also, a thread does not have to execute its iterations in increasing order; it can execute them in any order.

For now we are trying a few different things, such as picking a particular scheduling policy like round-robin (static with chunk size 1). Of course you can always do this if the schedule is specified to be static. The question is: do we ever want to explore these interleavings? Is there any loss of generality in just running all iterations concurrently?

One approach: assume you have a function or macro `CIVL_owns(n, t, i)`.
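For the static round-robin policy with chunk size 1, such a predicate has a one-line body: thread `t` owns iteration `i` exactly when `i mod n == t`. A plain-C sketch of that one fixed policy (this body is just one possible choice of `CIVL_owns`, not the only one):

```c
#include <stdbool.h>

/* One possible CIVL_owns: round-robin ("static") schedule with
   chunk size 1.  Thread t of n owns iteration i iff i mod n == t.
   Assumes n > 0 and i >= 0. */
bool CIVL_owns(int n, int t, int i) {
  return i % n == t;
}
```

With 4 threads, for instance, thread 1 owns iterations 1, 5, 9, ...; a chunk size of c would generalize the test to `(i / c) % n == t`.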
`CIVL_owns` takes three ints and returns a boolean. The arguments are `n`, the number of threads; `t`, a thread ID between 0 and `n`-1 (inclusive); and `i`, an iteration index.

{{{
#pragma omp parallel for
for (i...) S
}}}
=>
{{{
for (i...) {
  if (CIVL_owns(_nthreads, _tid, i)) {
    translate(S)
  }
}
// barrier, unless nowait
}}}

=== Translating `sections` ===

If there are n sections, create n functions: section0, section1, .... Again the question is how to distribute them among threads and in what order. As with loops, you really want to check that the sections are independent and only do the interleaving exploration as a last resort.

{{{
#pragma omp sections
{
  #pragma omp section
  ...
  #pragma omp section
  ...
}
}}}
=>
{{{
{
  void section0() { ... }
  void section1() { ... }
  ...
  if (CIVL_owns(_nthreads, _tid, 0)) section0();
  if (CIVL_owns(_nthreads, _tid, 1)) section1();
  ...
  // barrier, unless nowait
}
}}}

=== Translating `single` ===

Nondeterministically choose a thread, i.e., `$choose_int(_nthreads)`. That thread executes the code; the rest skip it. The question is, which thread does the choosing? The first thread to arrive at the construct? Once again, try to determine whether it matters: if the reads and modifications do not involve any private data, it doesn't matter which thread executes the code, so make it thread 0. There is a barrier at the end.

=== Translating `barrier` ===

Provide some system functions for this. All the threads in the team (`_threads[i]`) register with a barrier object and partake in the barrier. That barrier object can be re-used for multiple barriers.

=== Translating `critical` ===

Basically, use a lock for each critical name, plus one for the "no name" case. A thread must obtain the lock to enter the critical section, then release it on exit.

=== Translating `atomic` ===

This is just `$atomic`.

=== Translating `ordered` ===

This can only be used inside an OMP `for` loop whose pragma has the `ordered` clause. (Check that.) It indicates that the specified region must be executed in iteration order.
{{{
#pragma omp for ordered
for (i=a; i<b; i++) {
  S1;
  #pragma omp ordered
  { S2; }
  S3;
}
}}}
=>
{{{
{
  int order1=a, order2=a;  // order1 tracks entries to the ordered region, order2 tracks exits
  for (i=a; i<b; i++) {
    if (CIVL_owns(_nthreads, _tid, i)) {
      translate(S1);
      $when(order2==i) order1++;  // wait until iteration i-1 has left the ordered region, then enter
      translate(S2);
      order2++;                   // leave, allowing iteration i+1 to enter
      translate(S3);
    }
  }
  // barrier, unless nowait
}
}}}
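Circling back to the `$distribute` helper from the Helper primitives section: the fixed-strategy extreme (round-robin with chunk size 1) is easy to make concrete. Below is a plain-C sketch of that one strategy, with sequences modeled as arrays; the struct layout and function name are illustrative, not part of CIVL:

```c
#include <stdlib.h>

/* Illustrative model of $distribute restricted to a fixed
   round-robin, chunk-size-1 strategy: sequence t receives the
   integers t, t+k, t+2k, ... that are below n.  Sequences are
   modeled as malloc'd int arrays with explicit lengths. */
typedef struct {
  int k;      /* number of sequences */
  int *len;   /* len[t] = length of sequence t */
  int **seq;  /* seq[t] = elements of sequence t */
} distribution;

distribution distribute_round_robin(int n, int k) {
  distribution d;
  d.k = k;
  d.len = calloc(k, sizeof(int));
  d.seq = malloc(k * sizeof(int *));
  for (int t = 0; t < k; t++)
    d.seq[t] = malloc(((n + k - 1) / k) * sizeof(int));
  for (int i = 0; i < n; i++) {
    int t = i % k;               /* round-robin owner of i */
    d.seq[t][d.len[t]++] = i;
  }
  return d;
}
```

For example, distributing n=7 items into k=3 sequences yields {0,3,6}, {1,4}, {2,5}. The fully nondeterministic `$distribute`, by contrast, would branch over every way of partitioning 0,1,...,n-1 into k ordered sequences.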