Version 36 (modified 7 years ago)
INDEX
1. The Sequentially Consistent Subset of OpenMP
2. Supported OpenMP Features
3. OpenMP Simplification
4. OpenMP Transformations
1. The Sequentially Consistent Subset of OpenMP
The sequentially consistent subset consists of OpenMP programs that:
- Do not use non-sequentially consistent atomic directives;
- Do not rely on the accuracy of a false result from omp_test_lock and omp_test_nest_lock; and
- Correctly avoid data races as required in Section 1.4.1, page 23, of the OpenMP 5.0 specification (spec)
The relaxed consistency model is invisible for such programs, and any explicit flush operations in such programs are redundant.
2. Supported OpenMP Features
OpenMP Constructs
- parallel: private(list), firstprivate(list), copyin(list), shared(list), default(none|shared), num_threads(n), reduction(op:list)
- sections: private(list), firstprivate(list), lastprivate(list), reduction(op:list), nowait
- section
- single: private(list), firstprivate(list), copyprivate(list), nowait
- for: private(list), firstprivate(list), lastprivate(list), reduction, schedule, collapse, nowait
- simd: safelen(n), linear(n), aligned(n), private, lastprivate, reduction, collapse
- for simd: safelen(n), linear(n), aligned(n), private, lastprivate, reduction, collapse, firstprivate, nowait, schedule
- declare simd: simdlen(n), linear, aligned(n), uniform, inbranch, notinbranch
- barrier
- critical [name]
- atomic: read | write | update | capture, seq_cst
- master
OpenMP Types
omp_lock_t
OpenMP Functions
omp_get_num_threads(), omp_get_thread_num(), omp_get_wtime(), omp_init_lock, omp_destroy_lock, omp_set_lock, omp_unset_lock, omp_test_lock
Plan
- We are going to get rid of the current OMP2CIVL transformer and come up with a new transformer that assumes the given OpenMP programs are sequentially consistent.
- We are going to improve the current OmpSimplifier using pointer alias analysis.
- Note that an atomic construct without seq_cst is outside the sequentially consistent subset of the language; we need a way to deal with that.
Notes
- Currently, the simplifier is not aware that an out-of-bound access on a multi-dimensional array can cause a data race. For example,

int a[10][5];
#pragma omp parallel for
for (int i = 0; i < 5; i++)
  for (int j = 1; j < 10; j++)
    a[i][j] = a[i][j-1]; // a[0][4] and a[1][-1] refer to the same element

The current simplifier incorrectly sequentializes the example above without realizing that it is sequentializable if and only if no "logical" out-of-bound access happens during execution. A fix for the simplifier could be to sequentialize the example with an inserted assertion ensuring that no "logical" out-of-bound error occurs.
3. OpenMP Simplification
OpenMP Simplifier
- We improve the existing OpenMP simplifier with the information provided by static analysis.
- Example 1: (DRB067-restrictpointer1-orig-no.c from DataRaceBench v1.2.0)

#include <stdlib.h>
typedef double real8;

void foo(real8 * restrict newSxx, real8 * restrict newSyy, int length) {
  int i;
#pragma omp parallel for private (i) firstprivate (length)
  for (i = 0; i <= length - 1; i += 1) {
    newSxx[i] = 0.0;
    newSyy[i] = 0.0;
  }
}

int main() {
  int length = 10;
  real8* newSxx = malloc(length * sizeof(real8));
  real8* newSyy = malloc(length * sizeof(real8));
  foo(newSxx, newSyy, length);
  free(newSxx);
  free(newSyy);
  return 0;
}

  - The OpenMP simplifier analyzes the parallel region using points-to information about the two pointer arguments newSxx and newSyy from static analysis.
  - The OpenMP simplifier can determine that no data race will happen in the parallel region as long as no array out-of-bound error happens in it.
  - The OpenMP simplifier sequentializes the program, which will then be checked by CIVL. Any possible array out-of-bound error will be caught by CIVL.
- Example 2: (DRB014-outofbounds-orig-yes.c from DataRaceBench v1.2.0)

#include <stdio.h>

int main(int argc, char* argv[]) {
  int i, j;
  int n = 10, m = 10;
  double b[n][m];
#pragma omp parallel for private(j)
  for (i = 1; i < n; i++)
    for (j = 0; j < m; j++)
      // Note there will be out of bound access
      b[i][j] = b[i][j-1];
  printf("b[50][50]=%f\n", b[50][50]);
  return 0;
}

  - The OpenMP Simplifier proves that b[i][j] and b[i][j-1], accessed by different threads holding different values of i, will have no data race as long as no array out-of-bound error happens.
  - The OpenMP Simplifier sequentializes the OpenMP program, while the verification condition for array out-of-bound freedom is checked by CIVL by default.
  - The CIVL back-end runs the sequentialized program and detects the array out-of-bound access.
The Current Approach For Model Checking OpenMP Programs
- Example:
#include <omp.h>

int main() {
  int a[10], i;
#pragma omp parallel for
  for (i = 0; i < 10; i++) {
    int c;
    c = i % 2;
    a[i + c] = 0;
  }
}

  - The OpenMP Simplifier cannot prove data-race freedom for this program.
- The OpenMP2CIVL Transformer will transform this program to an equivalent CIVL-C program for model checking. Here is a snippet of the CIVL-C program after transformation:
int main() {
  int a[10];
  {
    {
      int _elab_i = 0;
      for (; _elab_i < _omp_num_threads; _elab_i = _elab_i + 1) ;
    }
    int $sef$0 = $choose_int(_omp_num_threads);
    int _omp_nthreads = 1 + $sef$0;
    $range _omp_thread_range = 0 .. _omp_nthreads - 1;
    $domain(1) _omp_dom = ($domain){_omp_thread_range};
    $omp_gteam _omp_gteam = $omp_gteam_create($here, _omp_nthreads);
    $omp_gshared _omp_a_gshared = $omp_gshared_create(_omp_gteam, &(a));
    $parfor (int _omp_tid: _omp_dom) {
      $omp_team _omp_team = $omp_team_create($here, _omp_gteam, _omp_tid);
      int _omp_a_local[10];
      int _omp_a_status[10];
      $omp_shared _omp_a_shared = $omp_shared_create(_omp_team, _omp_a_gshared, &(_omp_a_local), &(_omp_a_status));
      {
        $range _omp_r1 = 1 .. 10 - 1 # 1;
        $domain(1) _omp_loop_domain = ($domain){_omp_r1};
        $domain $sef$1 = $omp_arrive_loop(_omp_team, 0, ($domain)_omp_loop_domain, 2);
        $domain(1) _omp_my_iters = ($domain(1))$sef$1;
        $for (int _omp_i_private: _omp_my_iters) {
          int c;
          c = _omp_i_private % 2;
          int _omp_write0;
          _omp_write0 = 0;
          $omp_write(_omp_a_shared, &(_omp_a_local[c]), &(_omp_write0));
        }
      }
      $omp_barrier_and_flush(_omp_team);
      $omp_shared_destroy(_omp_a_shared);
      $omp_team_destroy(_omp_team);
    }
    $omp_gshared_destroy(_omp_a_gshared);
    $omp_gteam_destroy(_omp_gteam);
  }
}

- CIVL reports PROVABLE errors:
Thread 1 can not safely write to memory location &<d5>a[0], because thread 0 has written to that memory location and hasn't flushed yet.
Violation 0 encountered at depth 64:
CIVL execution violation in p2 (kind: ASSERTION_VIOLATION, certainty: PROVEABLE)
at OpenMPTransformer "_omp_a_shared, &(_om" inserted by OpenMPTransformer.a_sharedWriteCall before civlc.cvh:105.14-20 "$malloc"
Assertion: false -> false
. . .
Call stacks:
process 0: main at OpenMPTransformer "$parfor (int _omp_ti" inserted by OpenMPTransformer.parallelPragma before civlc.cvh:105.14-20 "$malloc"
process 1: $barrier_exit at concurrency.cvl:58.2-6 "$when"
  called from $barrier_call at concurrency.cvl:63.2-14 "$barrier_exit"
  called from $omp_barrier_and_flush at civl-omp.cvl:322.2-14 "$barrier_call"
  called from _par_proc0 at OpenMPTransformer "_omp_team" inserted by OpenMPTransformer.barrierAndFlushCall before civlc.cvh:105.14-20 "$malloc"
process 2: $omp_write at civl-omp.cvl:247.4-10 "$assert"
  called from _par_proc0 at OpenMPTransformer "_omp_a_shared, &(_om" inserted by OpenMPTransformer.a_sharedWriteCall before civlc.cvh:105.14-20 "$malloc"
process 3: $omp_arrive_loop at civl-omp.cvl:405.2-8 "$atomic"
  called from _par_proc0 at OpenMPTransformer "_omp_team, 0, ($doma" inserted by OpenMPTransformer.myItersDeclaration before civlc.cvh:105.14-20 "$malloc"
Logging new entry 0, writing trace to CIVLREP/testOmp5_0.trace
Terminating search after finding 1 violation.
The New Approach for Model Checking OpenMP programs
- Using CIVL's new infrastructure that captures reads / writes at runtime
- If the OpenMP Simplifier fails to sequentialize a program, transform the OpenMP program to the following form for model checking:
$mem writes[nthreads], reads[nthreads];
$parfor (int tid : 0 .. nthreads-1) {
  $write_set_push();
  $read_set_push();
  block1;
  // barrier
  writes[tid] = $write_set_pop();
  reads[tid] = $read_set_pop();
  // check for data races (collective operation)
  $write_set_push();
  $read_set_push();
  // barrier
  stmt2;
  // barrier
  writes[tid] = $write_set_pop();
  reads[tid] = $read_set_pop();
  // check for data races (collective operation)
}

- Functions for managing $mem objects can be found in mem.cvh.
4. OpenMP Transformations
Support Types
$omp_gteam: global team object, represents a team of threads executing in a parallel region. A handle type. This is where all the state needed to correctly execute a parallel region is stored, including a global barrier and a worksharing queue (an incomplete array of $omp_work_record) for every thread. Definition:

typedef struct OMP_gteam {
  $scope scope;
  int nthreads;
  _Bool init[];
  $omp_work_record work[][];
  $gbarrier gbarrier;
} * $omp_gteam;

$omp_team: local object belonging to a single thread and referencing the global team object. A handle type. It also includes a local barrier. Definition:

typedef struct OMP_team {
  $omp_gteam gteam;
  $scope scope;
  int tid;
  $barrier barrier;
} * $omp_team;

$omp_work_record: the worksharing information that a thread needs for executing a worksharing region. It contains the kind of the worksharing region, the location of the region, the status of the region, and the subdomain (iterations/sections/tasks assigned to the thread). Definition:

typedef struct OMP_work_record {
  int kind;          // loop, barrier, sections, or single
  int location;      // location in model of construct
  _Bool arrived;     // has this thread arrived yet?
  $domain loop_dom;  // full loop domain; null if not loop
  $domain subdomain; // tasks this thread must do
} $omp_work_record;
Support Functions
Team creation and destruction
$omp_gteam $omp_gteam_create($scope scope, int nthreads) - creates a new global team object, allocating the object in the heap in the specified scope. The number of threads in the team is nthreads.

void $omp_gteam_destroy($omp_gteam gteam) - destroys the global team object. All shared objects associated with the team must have been destroyed before calling this function.

$omp_team $omp_team_create($scope scope, $omp_gteam gteam, int tid) - creates a new local team object for a specific thread.

void $omp_team_destroy($omp_team team) - destroys the local team object.
Other
void $omp_apply_assoc(void * x, $operation op, void * y) - translation of x = x op y.

void $omp_flush_all($omp_team team) - performs an OpenMP flush operation on all shared objects. This is the default in OpenMP if no argument is specified for a flush construct.
Worksharing and barriers
void $omp_barrier($omp_team team) - performs a barrier only. Note however that usually (always?) a barrier is accompanied by a flush-all, so $omp_barrier_and_flush should be used instead.

void $omp_barrier_and_flush($omp_team team) - combines a barrier and a flush on all shared objects owned by the team; checks for data race violations by examining read and write set intersections. Implicit in many OpenMP worksharing constructs.
$domain $omp_arrive_loop($omp_team team, int location, $domain loop_dom, $DecompositionStrategy strategy) - called by a thread when it reaches an omp for loop; returns the subset of the loop domain specifying the iterations that this thread will execute. The dimension of the returned domain equals the dimension of the given domain loop_dom.

$domain(1) $omp_arrive_sections($omp_team team, int location, int numSections) - called by a thread when it reaches an omp sections construct; returns the subset of the integers 0..numSections-1 specifying the indexes of the sections that this thread will execute. The sections are numbered from 0 in increasing order.

int $omp_arrive_single($omp_team team, int location) - called by a thread when it reaches an omp single construct; returns the thread ID of the thread that will execute the single construct.
Worksharing model
This section describes how the system functions dealing with worksharing are implemented.
The global data structure $omp_gteam contains a FIFO queue for each thread. The queue contains work-sharing records, one record for each work-sharing or barrier construct encountered. The record contains the basic information about the construct as provided by the arguments to the arrival function, as well as the distribution chosen for that thread.
The constructs are a lot like MPI collective operations, and are modeled similarly.
When a thread arrives at one of these constructs, it invokes the relevant arrival function. At this point we can determine whether this thread is the first to arrive at that construct: if its queue is empty, it is the first; otherwise it is not, and the oldest entry in its queue is the entry corresponding to this construct.
When a thread is the first thread to arrive at a construct, a distribution is chosen for every thread and a record is created and enqueued in each thread queue (including the caller). The distributions can be chosen nondeterministically, possibly with some restrictions to achieve some tractability/soundness compromise. The record for this thread is then dequeued and the iterator returned.
If a thread is not the first to arrive, its record is dequeued and compared with the arguments given in the function call. They should match, and if they don't, an error is reported. This indicates that either threads encountered constructs in different orders or the loop parameters changed.
Translations of specific directives
1. parallel construct
2. worksharing-loop construct
Translating parallel
parallel: this spawns some nondeterministic number of threads, unless a num_threads clause is present, in which case the number of threads is specified exactly. We assume there is a constant THREAD_MAX defined somewhere; the number of threads created will be between 1 and THREAD_MAX (inclusive). Each thread is assigned an ID; the original ("master") thread has ID 0. All threads execute the parallel region.
float a; // shared
int i; // private
int j; // firstprivate
#pragma omp parallel shared(a) private(i) firstprivate(j)
{
BLOCK
}
=>
float a; // shared
int i; // private
int j; // firstprivate
{ // parallel construct (begin)
  int nthreads = 1+$choose_int(THREAD_MAX);
int fstpvt_j = j;
$omp_gteam gteam = $omp_gteam_create($here, nthreads);
$parfor (int tid : ($domain){0..nthreads-1}) {
$local_start();
$omp_team team = $omp_team_create($here, gteam, tid);
$read_set_push(); $write_set_push();
int i; // private
int j = fstpvt_j; // first private init.
    transformed(BLOCK)
$omp_barrier_and_flush(team); //check data race
$read_set_pop(); $write_set_pop();
$omp_team_destroy(team);
$local_end();
}
$omp_gteam_destroy(gteam);
} // parallel construct (end)
All variables that occur in the parallel construct, i.e., the lexical extent of the parallel construct, must be determined to be either private or shared. This is determined by the clauses and the default rules as specified in the OpenMP Standard. Obviously any variable declared within the construct itself must be private.
For each private variable (here i and j, outside the $parfor) not declared within the parallel construct, create a new variable of the same type (i and j inside the $parfor), declared within the thread scope. Since j (outside the $parfor) is also firstprivate, j (inside the $parfor) is initialized with the value of fstpvt_j. Otherwise the new variable is left uninitialized, so, like i (inside the $parfor), it has an undefined value.
Translating for
Try to determine whether the loop iterations are independent. If so, they can all be executed by one thread. Otherwise:
#pragma omp for
for (i=1; i<n; i=i+1) {
BLOCK
}
=>
{ // worksharing-loop construct (begin)
// [lb .. ub # step] (lb <= ub)
$domain loop_domain = {1 .. n-1 # 1};
$domain(1) loop_dist = ($domain(1))$omp_arrive_loop(
team, loop_id++, loop_domain, STRATEGY);
$for (int i : loop_dist) {
    transformed(BLOCK)
}
  $omp_barrier_and_flush(team);
} // worksharing-loop construct (end)
We can vary the way the sub-domains are chosen to explore different tradeoffs and strategies. On one extreme, every kind of partition can be explored; on the other, some fixed strategy like round-robin with chunksize 1 can be used. This only changes the definition of $omp_arrive_loop, not the translation above.
#pragma omp parallel for collapse(3)
for (i=0; i<n; i++)
for (j=0; j<m; j++)
for (k=0; k<l; k++) {
BLOCK
}
=>
{ // worksharing-loop construct (begin)
$domain loop_domain = {0..n-1 #1, 0..m-1 #1, 0..l-1 #1};
$domain(3) loop_dist = ($domain(3))$omp_arrive_loop(
team, loop_id++, loop_domain, STRATEGY);
  $for (int i, j, k : loop_dist) {
    transformed(BLOCK)
}
$omp_barrier_and_flush(team);
} // worksharing-loop construct (end)
Translating reduction clause
#pragma omp for reduction(+:x,y)
for (i=a; i<b; i++) {
S
}
=>
{
$domain loop_domain = {a..b-1};
$domain(1) my_iters = ($domain(1))$omp_arrive_loop(team, FOR_LOC++, loop_domain, STRATEGY);
  double _x=0.0, _y=0.0; // fresh accumulators, not the local view of the shared variables x/y
$for (int i : my_iters) {
translate(S) but replace x with _x and y with _y;
}
$omp_apply_assoc(x_shared, CIVL_SUM, &_x);
$omp_apply_assoc(y_shared, CIVL_SUM, &_y);
$omp_barrier_and_flush(team);
}
Translating sections
Say there are numSections sections. This number is known statically.
#pragma omp sections
{
  #pragma omp section
  S0 // section 0
  #pragma omp section
  S1 // section 1
  ...
}
=>
{
  $domain(1) my_secs = $omp_arrive_sections(team, section_id++, numSections);
  $for (int i : my_secs) {
    if (i == 0) {
      translate(S0);
    }
    if (i == 1) {
      translate(S1);
    }
    ...
  } /* end of $for loop */
  $omp_barrier_and_flush(team);
}
Translating single
#pragma omp single S
=>
int owner = $omp_arrive_single(team, SINGLE_LOC++);
if (owner == _tid) {
translate(S);
}
$omp_barrier_and_flush(team);
Translating barrier
#pragma omp barrier
=>
$omp_barrier_and_flush(team);
Translating critical
Basically, use a lock for each critical name, plus one for the "no name". All threads must obtain lock to enter the critical section, then release it.
I.e., if there are critical sections named a, b, and c, there should be global root-scope variables of boolean type named _critical_noname, _critical_a, etc.
#pragma omp critical(a) S
=>
_Bool _critical_a = $false; // at root scope
. . .
$when (!_critical_a) _critical_a = $true;
translate(S);
_critical_a = $false;
Translating atomic
In general, reads and writes to shared variables will be processed using the protocols described above. However if the operation occurs within an omp atomic construct, it is translated differently.
TODO: need to look up the rules on the different flavors of atomics.
If sequentially consistent atomic...
If non-sequentially consistent atomic...
Translating ordered
This can only be used inside an OMP for loop whose pragma used the ordered clause. (Check that.) It indicates that the specified region must be executed in iteration order.
In this case the system function must return an int iterator in which the ints occur in loop order.
#pragma omp for ordered
for (i=a; i<b; i++) {
...
#pragma omp ordered
S1
...
#pragma omp ordered
S2
...
}
=>
{
  $domain loop_domain = {a..b-1};
$domain(1) my_iters = ($domain(1))$omp_arrive_loop(team, FOR_LOC++, loop_domain, STRATEGY);
int order1=a, order2=a;
$for (int i : my_iters) {
...
$when (order1==i) {
translate(S1);
order1++;
}
...
$when (order2==i) {
translate(S2);
order2++;
}
...
}
}
Translating master
#pragma omp master S
=>
if (_tid == 0) {
translate(S);
}
Translating nowait
Just leave out the $omp_barrier_and_flush at the end of the translated construct.
