Document Number: N4578
Date:
Revises: N4505
Editor: Jared Hoberock
NVIDIA Corporation
jhoberock@nvidia.com

Working Draft, Technical Specification for C++ Extensions for Parallelism Version 2

Note: this is an early draft. It’s known to be incomplet and incorrekt, and it has lots of bad formatting.

1  General  [parallel.general]

1.1  Scope  [parallel.general.scope]

This Technical Specification describes requirements for implementations of an interface that computer programs written in the C++ programming language may use to invoke algorithms with parallel execution. The algorithms described by this Technical Specification are realizable across a broad class of computer architectures.

This Technical Specification is non-normative. Some of the functionality described by this Technical Specification may be considered for standardization in a future version of C++, but it is not currently part of any C++ standard. Some of the functionality in this Technical Specification may never be standardized, and other functionality may be standardized in a substantially changed form.

The goal of this Technical Specification is to build widespread existing practice for parallelism in the C++ standard algorithms library. It gives advice on extensions to those vendors who wish to provide them.

1.2  Normative references  [parallel.general.references]

The following referenced document is indispensable for the application of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.

  • ISO/IEC 14882:—, Programming Languages — C++

ISO/IEC 14882:— is herein called the C++ Standard. The library described in ISO/IEC 14882:— clauses 17-30 is herein called the C++ Standard Library. The C++ Standard Library components described in ISO/IEC 14882:— clauses 25, 26.7 and 20.7.2 are herein called the C++ Standard Algorithms Library.

Unless otherwise specified, the whole of the C++ Standard's Library introduction (C++14 §17) is included into this Technical Specification by reference.

1.3  Namespaces and headers  [parallel.general.namespaces]

Since the extensions described in this Technical Specification are experimental and not part of the C++ Standard Library, they should not be declared directly within namespace std. Unless otherwise specified, all components described in this Technical Specification are declared in namespace std::experimental::parallel::v2.

[ Note: Once standardized, the components described by this Technical Specification are expected to be promoted to namespace std. end note ]

Unless otherwise specified, references to such entities described in this Technical Specification are assumed to be qualified with std::experimental::parallel::v2, and references to entities described in the C++ Standard Library are assumed to be qualified with std::.

Extensions that are expected to eventually be added to an existing header <meow> are provided inside the <experimental/meow> header, which shall include the standard contents of <meow> as if by

    #include <meow>
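
For example (non-normative), a program that includes <experimental/algorithm> can also rely on the standard contents of <algorithm>; the function name below is illustrative only, and the parallel sort overload is assumed to be provided by this Technical Specification:

#include <experimental/algorithm>        // behaves as if it also did #include <algorithm>
#include <experimental/execution_policy>
#include <vector>

void sketch(std::vector<int>& v)
{
  std::sort(v.begin(), v.end());                  // declared in <algorithm>
  std::experimental::parallel::sort(
      std::experimental::parallel::par,
      v.begin(), v.end());                        // declared in <experimental/algorithm>
}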
1.4  Terms and definitions  [parallel.general.defns]

For the purposes of this document, the terms and definitions given in the C++ Standard and the following apply.

A parallel algorithm is a function template described by this Technical Specification declared in namespace std::experimental::parallel::v2 with a formal template parameter named ExecutionPolicy.

Parallel algorithms access objects indirectly accessible via their arguments by invoking the following functions:

  • All operations of the categories of the iterators that the algorithm is instantiated with.
  • Functions on those sequence elements that are required by its specification.
  • User-provided function objects to be applied during the execution of the algorithm, if required by the specification.
  • Operations on those function objects required by the specification. [ Note: See clause 25.1 of the C++ Standard Algorithms Library. end note ]
These functions are herein called element access functions. [ Example: The sort function may invoke the following element access functions:
  • Methods of the random-access iterator of the actual template argument, as per 24.2.7, as implied by the name of the template parameter RandomAccessIterator.
  • The swap function on the elements of the sequence (as per 25.4.1.1 [sort]/2).
  • The user-provided Compare function object.
end example ]
1.5  Feature-testing recommendations  [parallel.general.features]

An implementation that provides support for this Technical Specification shall define the feature test macro(s) in Table 1.

Table 1 — Feature Test Macro(s)

  Name                                         Value    Header
  __cpp_lib_experimental_parallel_algorithm    201505   <experimental/algorithm>
                                                        <experimental/exception_list>
                                                        <experimental/execution_policy>
                                                        <experimental/numeric>
  __cpp_lib_experimental_parallel_task_block   201510   <experimental/task_block>
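
For example (non-normative), a program might test for the parallel algorithms extension as follows:

#include <experimental/algorithm>

#if defined(__cpp_lib_experimental_parallel_algorithm) && \
    __cpp_lib_experimental_parallel_algorithm >= 201505
  // The parallel algorithm overloads described by this Technical Specification are available.
#endif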
2  Execution policies  [parallel.execpol]

2.1  In general  [parallel.execpol.general]

This clause describes classes that are execution policy types. An object of an execution policy type indicates the kinds of parallelism allowed in the execution of an algorithm and expresses the consequent requirements on the element access functions.

[ Example:
std::vector<int> v = ...

// standard sequential sort
std::sort(v.begin(), v.end());

using namespace std::experimental::parallel;

// explicitly sequential sort
sort(seq, v.begin(), v.end());

// permitting parallel execution
sort(par, v.begin(), v.end());

// permitting vectorization as well
sort(par_vec, v.begin(), v.end());

// sort with dynamically-selected execution
size_t threshold = ...
execution_policy exec = seq;
if (v.size() > threshold)
{
  exec = par;
}

sort(exec, v.begin(), v.end());
end example ]
 
[ Note: Because different parallel architectures may require idiosyncratic parameters for efficient execution, implementations of the Standard Library may provide additional execution policies to those described in this Technical Specification as extensions. end note ]
2.2  Header <experimental/execution_policy> synopsis  [parallel.execpol.synopsis]
namespace std {
namespace experimental {
namespace parallel {
inline namespace v2 {
  // 2.3, Execution policy type trait
  template<class T> struct is_execution_policy;
  template<class T> constexpr bool is_execution_policy_v = is_execution_policy<T>::value;

  // 2.4, Sequential execution policy
  class sequential_execution_policy;

  // 2.5, Parallel execution policy
  class parallel_execution_policy;

  // 2.6, Parallel+Vector execution policy
  class parallel_vector_execution_policy;

  // 2.7, Dynamic execution policy
  class execution_policy;
}
}
}
}
2.3  Execution policy type trait  [parallel.execpol.type]
template<class T> struct is_execution_policy { see below };

is_execution_policy can be used to detect parallel execution policies for the purpose of excluding function signatures from otherwise ambiguous overload resolution participation.

is_execution_policy<T> shall be a UnaryTypeTrait with a BaseCharacteristic of true_type if T is the type of a standard or implementation-defined execution policy, otherwise false_type.



[ Note: This provision reserves the privilege of creating non-standard execution policies to the library implementation. end note ]

The behavior of a program that adds specializations for is_execution_policy is undefined.
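
As a non-normative illustration, a function template might use the trait to constrain itself so that it participates in overload resolution only for execution policies; the function name frobnicate is hypothetical, and C++14 std::enable_if_t is assumed:

#include <experimental/execution_policy>
#include <type_traits>

namespace par_ts = std::experimental::parallel;

// Participates in overload resolution only when the decayed
// ExecutionPolicy type is an execution policy type.
template<class ExecutionPolicy,
         class = std::enable_if_t<
           par_ts::is_execution_policy<std::decay_t<ExecutionPolicy>>::value>>
void frobnicate(ExecutionPolicy&& exec);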

2.4  Sequential execution policy  [parallel.execpol.seq]
class sequential_execution_policy{ unspecified };

The class sequential_execution_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and require that a parallel algorithm's execution may not be parallelized.

2.5  Parallel execution policy  [parallel.execpol.par]
class parallel_execution_policy{ unspecified };

The class parallel_execution_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and indicate that a parallel algorithm's execution may be parallelized.

2.6  Parallel+Vector execution policy  [parallel.execpol.vec]
class parallel_vector_execution_policy{ unspecified };

The class parallel_vector_execution_policy is an execution policy type used as a unique type to disambiguate parallel algorithm overloading and indicate that a parallel algorithm's execution may be vectorized and parallelized.

2.7  Dynamic execution policy  [parallel.execpol.dynamic]
class execution_policy
{
  public:
    // 2.7.1, execution_policy construct/assign
    template<class T> execution_policy(const T& exec);
    template<class T> execution_policy& operator=(const T& exec);

    // 2.7.2, execution_policy object access
    const type_info& type() const noexcept;
    template<class T> T* get() noexcept;
    template<class T> const T* get() const noexcept;
};

The class execution_policy is a container for execution policy objects. execution_policy allows dynamic control over standard algorithm execution.

[ Example:
std::vector<float> sort_me = ...
        
using namespace std::experimental::parallel;
execution_policy exec = seq;

if(sort_me.size() > threshold)
{
  exec = par;
}
 
sort(exec, std::begin(sort_me), std::end(sort_me));
end example ]

Objects of type execution_policy shall be constructible and assignable from objects of type T for which is_execution_policy<T>::value is true.

2.7.1  execution_policy construct/assign  [parallel.execpol.con]
template<class T> execution_policy(const T& exec);
Effects:
Constructs an execution_policy object with a copy of exec's state.
Remarks:
This constructor shall not participate in overload resolution unless is_execution_policy<T>::value is true.
template<class T> execution_policy& operator=(const T& exec);
Effects:
Assigns a copy of exec's state to *this.
Returns:
*this.
2.7.2  execution_policy object access  [parallel.execpol.access]
const type_info& type() const noexcept;
Returns:
typeid(T), such that T is the type of the execution policy object contained by *this.
template<class T> T* get() noexcept;
template<class T> const T* get() const noexcept;
Returns:
If type() == typeid(T), a pointer to the stored execution policy object; otherwise a null pointer.
Requires:
is_execution_policy<T>::value is true.
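A non-normative sketch of querying the policy stored in an execution_policy object; the function name inspect is illustrative only:

#include <experimental/execution_policy>
#include <typeinfo>

using namespace std::experimental::parallel;

void inspect(const execution_policy& exec)
{
  if (exec.type() == typeid(parallel_execution_policy)) {
    // the stored policy is parallel_execution_policy (e.g. par)
  }

  if (const sequential_execution_policy* p = exec.get<sequential_execution_policy>()) {
    // the stored policy is sequential_execution_policy; otherwise p is null
    (void)p;
  }
}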
2.8  Execution policy objects  [parallel.execpol.objects]
constexpr sequential_execution_policy      seq{};
constexpr parallel_execution_policy        par{};
constexpr parallel_vector_execution_policy par_vec{};

The header <experimental/execution_policy> declares a global object associated with each type of execution policy defined by this Technical Specification.

3  Parallel exceptions  [parallel.exceptions]

3.1  Exception reporting behavior  [parallel.exceptions.behavior]

During the execution of a standard parallel algorithm, if temporary memory resources are required and none are available, the algorithm throws a std::bad_alloc exception.

During the execution of a standard parallel algorithm, if the invocation of an element access function exits via an uncaught exception, the behavior of the program is determined by the type of execution policy used to invoke the algorithm:

  • If the execution policy object is of type class parallel_vector_execution_policy, std::terminate shall be called.
  • If the execution policy object is of type sequential_execution_policy or parallel_execution_policy, the execution of the algorithm exits via an exception. The exception shall be an exception_list containing all uncaught exceptions thrown during the invocations of element access functions, or optionally the uncaught exception if there was only one.
    
    
    [ Note: For example, when for_each is executed sequentially, if an invocation of the user-provided function object throws an exception, for_each can exit via the uncaught exception, or throw an exception_list containing the original exception. end note ]

    [ Note: These guarantees imply that, unless the algorithm has failed to allocate memory and exits via std::bad_alloc, all exceptions thrown during the execution of the algorithm are communicated to the caller. It is unspecified whether an algorithm implementation will "forge ahead" after encountering and capturing a user exception. end note ]

    [ Note: The algorithm may exit via the std::bad_alloc exception even if one or more user-provided function objects have exited via an exception. For example, this can happen when an algorithm fails to allocate memory while creating or adding elements to the exception_list object. end note ]
  • If the execution policy object is of any other type, the behavior is implementation-defined.

3.2  Header <experimental/exception_list> synopsis  [parallel.exceptions.synopsis]
namespace std {
namespace experimental {
namespace parallel {
inline namespace v2 {

  class exception_list : public exception
  {
    public:
      typedef unspecified iterator;
  
      size_t size() const noexcept;
      iterator begin() const noexcept;
      iterator end() const noexcept;

      const char* what() const noexcept override;
  };
}
}
}
}
      

The class exception_list owns a sequence of exception_ptr objects. The parallel algorithms may use the exception_list to communicate uncaught exceptions encountered during parallel execution to the caller of the algorithm.

The type exception_list::iterator shall fulfill the requirements of ForwardIterator.

size_t size() const noexcept;
Returns:
The number of exception_ptr objects contained within the exception_list.
Complexity:
Constant time.
iterator begin() const noexcept;
Returns:
An iterator referring to the first exception_ptr object contained within the exception_list.
iterator end() const noexcept;
Returns:
An iterator that is past the end of the owned sequence.
const char* what() const noexcept override;
Returns:
An implementation-defined NTBS.
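
A non-normative sketch of how a caller might examine an exception_list thrown by a parallel algorithm; the function name is illustrative only, and a single uncaught exception may also be rethrown directly as described in 3.1:

#include <experimental/algorithm>
#include <experimental/exception_list>
#include <experimental/execution_policy>
#include <exception>
#include <vector>

using namespace std::experimental::parallel;

void sort_and_report(std::vector<int>& v)
{
  try {
    sort(par, v.begin(), v.end());
  }
  catch (const exception_list& errors) {
    // Each element of the list is an exception_ptr for one uncaught exception.
    for (std::exception_ptr e : errors) {
      try { std::rethrow_exception(e); }
      catch (const std::exception& ex) { /* e.g. log ex.what() */ }
      catch (...)                      { /* non-standard exception type */ }
    }
  }
  catch (...) {
    // a single uncaught exception, or std::bad_alloc, may be thrown directly
  }
}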
4  Parallel algorithms  [parallel.alg]

4.1  In general  [parallel.alg.general]
This clause describes components that C++ programs may use to perform operations on containers and other sequences in parallel.
4.1.1  Requirements on user-provided function objects  [parallel.alg.general.user]

Function objects passed into parallel algorithms as objects of type BinaryPredicate, Compare, and BinaryOperation shall not directly or indirectly modify objects via their arguments.
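
For example (non-normative), a Compare object passed to a parallel sort may read from its arguments but must not modify objects through them:

#include <cstdlib>

// Permitted: only reads its arguments.
struct by_magnitude {
  bool operator()(int a, int b) const { return std::abs(a) < std::abs(b); }
};

// Not permitted as a Compare for a parallel algorithm:
// modifies an object via one of its arguments.
struct counting_compare {
  bool operator()(int& a, int b) const { ++a; return a < b; }
};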

4.1.2  Effect of execution policies on algorithm execution  [parallel.alg.general.exec]

Parallel algorithms have template parameters named ExecutionPolicy which describe the manner in which the execution of these algorithms may be parallelized and the manner in which they apply the element access functions.

The invocations of element access functions in parallel algorithms invoked with an execution policy object of type sequential_execution_policy execute in sequential order in the calling thread.

The invocations of element access functions in parallel algorithms invoked with an execution policy object of type parallel_execution_policy are permitted to execute in an unordered fashion in either the invoking thread or in a thread implicitly created by the library to support parallel algorithm execution. Any such invocations executing in the same thread are indeterminately sequenced with respect to each other. [ Note: It is the caller's responsibility to ensure correctness, for example that the invocation does not introduce data races or deadlocks. end note ]

[ Example:
using namespace std::experimental::parallel;
int a[] = {0,1};
std::vector<int> v;
for_each(par, std::begin(a), std::end(a), [&](int i) {
  v.push_back(i*2+1);
});
The program above has a data race because of the unsynchronized access to the container v. end example ]


      
    
[ Example:
using namespace std::experimental::parallel;
std::atomic<int> x{0};
int a[] = {1,2};
for_each(par, std::begin(a), std::end(a), [&](int n) {
  x.fetch_add(1, std::memory_order_relaxed);
  // spin wait for another iteration to change the value of x
  while (x.load(std::memory_order_relaxed) == 1) { }
});
The above example depends on the order of execution of the iterations, and is therefore undefined (may deadlock). end example ]


      
    
[ Example:
using namespace std::experimental::parallel;
int x=0;
std::mutex m;
int a[] = {1,2};
for_each(par, std::begin(a), std::end(a), [&](int) {
  m.lock();
  ++x;
  m.unlock();
});
The above example synchronizes access to object x ensuring that it is incremented correctly. end example ]

The invocations of element access functions in parallel algorithms invoked with an execution policy of type parallel_vector_execution_policy are permitted to execute in an unordered fashion in unspecified threads, and unsequenced with respect to one another within each thread. [ Note: This means that multiple function object invocations may be interleaved on a single thread. end note ]



[ Note: This overrides the usual guarantee from the C++ standard, Section 1.9 [intro.execution] that function executions do not interleave with one another. end note ]

Since parallel_vector_execution_policy allows the execution of element access functions to be interleaved on a single thread, synchronization, including the use of mutexes, risks deadlock. Thus the synchronization with parallel_vector_execution_policy is restricted as follows:

A standard library function is vectorization-unsafe if it is specified to synchronize with another function invocation, or another function invocation is specified to synchronize with it, and if it is not a memory allocation or deallocation function. Vectorization-unsafe standard library functions may not be invoked by user code called from parallel_vector_execution_policy algorithms.

[ Note: Implementations must ensure that internal synchronization inside standard library routines does not induce deadlock. end note ]

[ Example:
using namespace std::experimental::parallel;
int x=0;
std::mutex m;
int a[] = {1,2};
for_each(par_vec, std::begin(a), std::end(a), [&](int) {
  m.lock();
  ++x;
  m.unlock();
});
The above program is invalid because the applications of the function object are not guaranteed to run on different threads. end example ]


[ Note: The application of the function object may result in two consecutive calls to m.lock on the same thread, which may deadlock. end note ]

[ Note: The semantics of the parallel_execution_policy or the parallel_vector_execution_policy invocation allow the implementation to fall back to sequential execution if the system cannot parallelize an algorithm invocation due to lack of resources. end note ]

Algorithms invoked with an execution policy object of type execution_policy execute internally as if invoked with the contained execution policy object.

The semantics of parallel algorithms invoked with an execution policy object of implementation-defined type are implementation-defined.

4.1.3  ExecutionPolicy algorithm overloads  [parallel.alg.overloads]

The Parallel Algorithms Library provides overloads for each of the algorithms named in Table 2, corresponding to the algorithms with the same name in the C++ Standard Algorithms Library. For each algorithm in Table 2, if there are overloads for corresponding algorithms with the same name in the C++ Standard Algorithms Library, the overloads shall have an additional template type parameter named ExecutionPolicy, which shall be the first template parameter. In addition, each such overload shall have an additional function parameter of type ExecutionPolicy&&, which shall be the first function parameter.

Unless otherwise specified, the semantics of ExecutionPolicy algorithm overloads are identical to those of their overloads without an ExecutionPolicy parameter.

Parallel algorithms shall not participate in overload resolution unless is_execution_policy<decay_t<ExecutionPolicy>>::value is true.

[ Note: Not all algorithms in the Standard Library have counterparts in Table 2. end note ]
4.2  Definitions  [parallel.alg.defns]

Define GENERALIZED_SUM(op, a1, ..., aN) as follows:

  • a1 when N is 1
  • op(GENERALIZED_SUM(op, b1, ..., bK), GENERALIZED_SUM(op, bM, ..., bN)) where
    • b1, ..., bN may be any permutation of a1, ..., aN and
    • 1 < K+1 = M ≤ N.

Define GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aN) as follows:

  • a1 when N is 1
  • op(GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, ..., aK), GENERALIZED_NONCOMMUTATIVE_SUM(op, aM, ..., aN)) where 1 < K+1 = M ≤ N.
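
Informally (non-normative), for N = 3 and op = plus<>(), GENERALIZED_SUM permits any grouping and any ordering of the operands, whereas GENERALIZED_NONCOMMUTATIVE_SUM permits any grouping but preserves the operand order:

  GENERALIZED_SUM(op, a1, a2, a3) may be evaluated as op(op(a1, a2), a3), op(a1, op(a2, a3)), or op(op(a3, a1), a2), among others.
  GENERALIZED_NONCOMMUTATIVE_SUM(op, a1, a2, a3) may be evaluated only as op(op(a1, a2), a3) or op(a1, op(a2, a3)).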

4.3  Non-Numeric Parallel Algorithms  [parallel.alg.ops]

4.3.1  Header <experimental/algorithm> synopsis  [parallel.alg.ops.synopsis]
namespace std {
namespace experimental {
namespace parallel {
inline namespace v2 {
  template<class ExecutionPolicy,
           class InputIterator, class Function>
    void for_each(ExecutionPolicy&& exec,
                  InputIterator first, InputIterator last,
                  Function f);
  template<class InputIterator, class Size, class Function>
    InputIterator for_each_n(InputIterator first, Size n,
                             Function f);
  template<class ExecutionPolicy,
           class InputIterator, class Size, class Function>
    InputIterator for_each_n(ExecutionPolicy&& exec,
                             InputIterator first, Size n,
                             Function f);
}
}
}
}
4.3.2  For each  [parallel.alg.foreach]
template<class ExecutionPolicy,
      class InputIterator, class Function>
void for_each(ExecutionPolicy&& exec,
              InputIterator first, InputIterator last,
              Function f);
Effects:
Applies f to the result of dereferencing every iterator in the range [first,last). [ Note: If the type of first satisfies the requirements of a mutable iterator, f may apply nonconstant functions through the dereferenced iterator. end note ]
Complexity:
Applies f exactly last - first times.
Remarks:
If f returns a result, the result is ignored.
Notes:
Unlike its sequential form, the parallel overload of for_each does not return a copy of its Function parameter, since parallelization may not permit efficient state accumulation.
Requires:
Unlike its sequential form, the parallel overload of for_each requires Function to meet the requirements of CopyConstructible.
template<class InputIterator, class Size, class Function>
InputIterator for_each_n(InputIterator first, Size n,
                         Function f);
Requires:
Function shall meet the requirements of MoveConstructible. [ Note: Function need not meet the requirements of CopyConstructible. end note ]
Effects:
Applies f to the result of dereferencing every iterator in the range [first,first + n), starting from first and proceeding to first + n - 1. [ Note: If the type of first satisfies the requirements of a mutable iterator, f may apply nonconstant functions through the dereferenced iterator. end note ]
Returns:
first + n for non-negative values of n and first for negative values.
Remarks:
If f returns a result, the result is ignored.
template<class ExecutionPolicy,
      class InputIterator, class Size, class Function>
InputIterator for_each_n(ExecutionPolicy && exec,
                         InputIterator first, Size n,
                         Function f);
Effects:
Applies f to the result of dereferencing every iterator in the range [first,first + n), starting from first and proceeding to first + n - 1. [ Note: If the type of first satisfies the requirements of a mutable iterator, f may apply nonconstant functions through the dereferenced iterator. end note ]
Returns:
first + n for non-negative values of n and first for negative values.
Remarks:
If f returns a result, the result is ignored.
Notes:
Unlike its sequential form, the parallel overload of for_each_n requires Function to meet the requirements of CopyConstructible.
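A non-normative usage sketch of the parallel for_each_n overload; the function name is illustrative only, and n is assumed not to exceed v.size():

#include <experimental/algorithm>
#include <experimental/execution_policy>
#include <cstddef>
#include <vector>

using namespace std::experimental::parallel;

void double_first_n(std::vector<double>& v, std::size_t n)
{
  // Applies the lambda to the first n elements, possibly in parallel.
  // The lambda is CopyConstructible, as the parallel overload requires.
  for_each_n(par, v.begin(), n, [](double& x) { x *= 2.0; });
}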
4.4  Numeric Parallel Algorithms  [parallel.alg.numeric]

4.4.1  Header <experimental/numeric> synopsis  [parallel.alg.numeric.synopsis]
namespace std {
namespace experimental {
namespace parallel {
inline namespace v2 {
  template<class InputIterator>
    typename iterator_traits<InputIterator>::value_type
      reduce(InputIterator first, InputIterator last);
  template<class ExecutionPolicy,
           class InputIterator>
    typename iterator_traits<InputIterator>::value_type
      reduce(ExecutionPolicy&& exec,
             InputIterator first, InputIterator last);
  template<class InputIterator, class T>
    T reduce(InputIterator first, InputIterator last, T init);
  template<class ExecutionPolicy,
           class InputIterator, class T>
    T reduce(ExecutionPolicy&& exec,
             InputIterator first, InputIterator last, T init);
  template<class InputIterator, class T, class BinaryOperation>
    T reduce(InputIterator first, InputIterator last, T init,
             BinaryOperation binary_op);
  template<class ExecutionPolicy, class InputIterator, class T, class BinaryOperation>
    T reduce(ExecutionPolicy&& exec,
             InputIterator first, InputIterator last, T init,
             BinaryOperation binary_op);

  template<class InputIterator, class OutputIterator,
           class T>
    OutputIterator
      exclusive_scan(InputIterator first, InputIterator last,
                     OutputIterator result,
                     T init);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class T>
    OutputIterator
      exclusive_scan(ExecutionPolicy&& exec,
                     InputIterator first, InputIterator last,
                     OutputIterator result,
                     T init);
  template<class InputIterator, class OutputIterator,
           class T, class BinaryOperation>
    OutputIterator
      exclusive_scan(InputIterator first, InputIterator last,
                     OutputIterator result,
                     T init, BinaryOperation binary_op);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class T, class BinaryOperation>
    OutputIterator
      exclusive_scan(ExecutionPolicy&& exec,
                     InputIterator first, InputIterator last,
                     OutputIterator result,
                     T init, BinaryOperation binary_op);

  template<class InputIterator, class OutputIterator>
    OutputIterator
      inclusive_scan(InputIterator first, InputIterator last,
                     OutputIterator result);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator>
    OutputIterator
      inclusive_scan(ExecutionPolicy&& exec,
                     InputIterator first, InputIterator last,
                     OutputIterator result);
  template<class InputIterator, class OutputIterator,
           class BinaryOperation>
    OutputIterator
      inclusive_scan(InputIterator first, InputIterator last,
                     OutputIterator result,
                     BinaryOperation binary_op);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class BinaryOperation>
    OutputIterator
      inclusive_scan(ExecutionPolicy&& exec,
                     InputIterator first, InputIterator last,
                     OutputIterator result,
                     BinaryOperation binary_op);
  template<class InputIterator, class OutputIterator,
           class BinaryOperation, class T>
    OutputIterator
      inclusive_scan(InputIterator first, InputIterator last,
                     OutputIterator result,
                     BinaryOperation binary_op, T init);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class BinaryOperation, class T>
    OutputIterator
      inclusive_scan(ExecutionPolicy&& exec,
                     InputIterator first, InputIterator last,
                     OutputIterator result,
                     BinaryOperation binary_op, T init);

  template<class InputIterator, class UnaryOperation,
           class T, class BinaryOperation>
    T transform_reduce(InputIterator first, InputIterator last,
                       UnaryOperation unary_op,
                       T init, BinaryOperation binary_op);
  template<class ExecutionPolicy,
           class InputIterator, class UnaryOperation,
           class T, class BinaryOperation>
    T transform_reduce(ExecutionPolicy&& exec,
                       InputIterator first, InputIterator last,
                       UnaryOperation unary_op,
                       T init, BinaryOperation binary_op);

  template<class InputIterator, class OutputIterator,
           class UnaryOperation, class T, class BinaryOperation>
    OutputIterator
      transform_exclusive_scan(InputIterator first, InputIterator last,
                               OutputIterator result,
                               UnaryOperation unary_op,
                               T init, BinaryOperation binary_op);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class UnaryOperation, class T, class BinaryOperation>
    OutputIterator
      transform_exclusive_scan(ExecutionPolicy&& exec,
                               InputIterator first, InputIterator last,
                               OutputIterator result,
                               UnaryOperation unary_op,
                               T init, BinaryOperation binary_op);

  template<class InputIterator, class OutputIterator,
           class UnaryOperation, class BinaryOperation>
    OutputIterator
      transform_inclusive_scan(InputIterator first, InputIterator last,
                               OutputIterator result,
                               UnaryOperation unary_op,
                               BinaryOperation binary_op);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class UnaryOperation, class BinaryOperation>
    OutputIterator
      transform_inclusive_scan(ExecutionPolicy&& exec,
                               InputIterator first, InputIterator last,
                               OutputIterator result,
                               UnaryOperation unary_op,
                               BinaryOperation binary_op);

  template<class InputIterator, class OutputIterator,
           class UnaryOperation, class BinaryOperation, class T>
    OutputIterator
      transform_inclusive_scan(InputIterator first, InputIterator last,
                               OutputIterator result,
                               UnaryOperation unary_op,
                               BinaryOperation binary_op, T init);
  template<class ExecutionPolicy,
           class InputIterator, class OutputIterator,
           class UnaryOperation, class BinaryOperation, class T>
    OutputIterator
      transform_inclusive_scan(ExecutionPolicy&& exec,
                               InputIterator first, InputIterator last,
                               OutputIterator result,
                               UnaryOperation unary_op,
                               BinaryOperation binary_op, T init);
}
}
}
}
4.4.2  Reduce  [parallel.alg.reduce]
template<class InputIterator>
typename iterator_traits<InputIterator>::value_type
    reduce(InputIterator first, InputIterator last);
Effects:
Same as reduce(first, last, typename iterator_traits<InputIterator>::value_type{}).
template<class InputIterator, class T>
T reduce(InputIterator first, InputIterator last, T init);
Effects:
Same as reduce(first, last, init, plus<>()).
template<class InputIterator, class T, class BinaryOperation>
T reduce(InputIterator first, InputIterator last, T init,
         BinaryOperation binary_op);
Returns:
GENERALIZED_SUM(binary_op, init, *first, ..., *(first + (last - first) - 1)).
Requires:
binary_op shall not invalidate iterators or subranges, nor modify elements in the range [first,last).
Complexity:
O(last - first) applications of binary_op.
Notes:
The primary difference between reduce and accumulate is that the behavior of reduce may be non-deterministic for non-associative or non-commutative binary_op.
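A non-normative sketch contrasting reduce with accumulate; the function name is illustrative only:

#include <experimental/execution_policy>
#include <experimental/numeric>
#include <numeric>
#include <vector>

using namespace std::experimental::parallel;

double sum(const std::vector<double>& v)
{
  // accumulate applies plus<>() strictly left to right.
  double serial = std::accumulate(v.begin(), v.end(), 0.0);

  // reduce may regroup and reorder the additions, so for floating-point
  // values the result may differ slightly from the serial one.
  double par_sum = reduce(par, v.begin(), v.end(), 0.0);

  (void)serial;
  return par_sum;
}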
4.4.3  Exclusive scan  [parallel.alg.exclusive.scan]
template<class InputIterator, class OutputIterator, class T>
OutputIterator exclusive_scan(InputIterator first, InputIterator last,
                              OutputIterator result,
                              T init);
Effects:
Same as exclusive_scan(first, last, result, init, plus<>()).
template<class InputIterator, class OutputIterator, class T, class BinaryOperation>
OutputIterator exclusive_scan(InputIterator first, InputIterator last,
                              OutputIterator result,
                              T init, BinaryOperation binary_op);
Effects:
Assigns through each iterator i in [result,result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, init, *first, ..., *(first + (i - result) - 1)).
Returns:
The end of the resulting range beginning at result.
Requires:
binary_op shall not invalidate iterators or subranges, nor modify elements in the ranges [first,last) or [result,result + (last - first)).
Complexity:
O(last - first) applications of binary_op.
Notes:
The difference between exclusive_scan and inclusive_scan is that exclusive_scan excludes the ith input element from the ith sum. If binary_op is not mathematically associative, the behavior of exclusive_scan may be non-deterministic.
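A non-normative worked example of exclusive_scan; the function name is illustrative only:

#include <experimental/numeric>

using namespace std::experimental::parallel;

void exclusive_scan_sketch()
{
  int in[]  = {1, 2, 3, 4};
  int out[4];
  // With init = 0 and the default plus<>(), out becomes {0, 1, 3, 6}:
  // the ith output sum excludes the ith input element.
  exclusive_scan(in, in + 4, out, 0);
}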
4.4.4  Inclusive scan  [parallel.alg.inclusive.scan]
template<class InputIterator, class OutputIterator>
OutputIterator inclusive_scan(InputIterator first, InputIterator last,
                              OutputIterator result);
Effects:
Same as inclusive_scan(first, last, result, plus<>()).
template<class InputIterator, class OutputIterator, class BinaryOperation>
OutputIterator inclusive_scan(InputIterator first, InputIterator last,
                              OutputIterator result,
                              BinaryOperation binary_op);
template<class InputIterator, class OutputIterator, class BinaryOperation, class T>
OutputIterator inclusive_scan(InputIterator first, InputIterator last,
                              OutputIterator result,
                              BinaryOperation binary_op, T init);
Effects:
Assigns through each iterator i in [result,result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, *first, ..., *(first + (i - result))) or GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, init, *first, ..., *(first + (i - result))) if init is provided.
Returns:
The end of the resulting range beginning at result.
Requires:
binary_op shall not invalidate iterators or subranges, nor modify elements in the ranges [first,last) or [result,result + (last - first)).
Complexity:
O(last - first) applications of binary_op.
Notes:
The difference between exclusive_scan and inclusive_scan is that inclusive_scan includes the ith input element in the ith sum. If binary_op is not mathematically associative, the behavior of inclusive_scan may be non-deterministic.
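A non-normative worked example of inclusive_scan, mirroring the exclusive_scan sketch above; the function name is illustrative only:

#include <experimental/numeric>

using namespace std::experimental::parallel;

void inclusive_scan_sketch()
{
  int in[]  = {1, 2, 3, 4};
  int out[4];
  // With the default plus<>(), out becomes {1, 3, 6, 10}:
  // the ith output sum includes the ith input element.
  inclusive_scan(in, in + 4, out);
}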
4.4.5  Transform reduce  [parallel.alg.transform.reduce]
template<class InputIterator, class UnaryOperation, class T, class BinaryOperation>
T transform_reduce(InputIterator first, InputIterator last,
                   UnaryOperation unary_op, T init, BinaryOperation binary_op);
Returns:
GENERALIZED_SUM(binary_op, init, unary_op(*first), ..., unary_op(*(first + (last - first) - 1))).
Requires:
Neither unary_op nor binary_op shall invalidate subranges, or modify elements in the range [first,last).
Complexity:
O(last - first) applications each of unary_op and binary_op.
Notes:
transform_reduce does not apply unary_op to init.
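A non-normative sketch computing a sum of squares with transform_reduce; the function name is illustrative only:

#include <experimental/execution_policy>
#include <experimental/numeric>
#include <functional>
#include <vector>

using namespace std::experimental::parallel;

double sum_of_squares(const std::vector<double>& v)
{
  // unary_op is applied to each element (but not to init);
  // the transformed values are then combined with binary_op.
  return transform_reduce(par, v.begin(), v.end(),
                          [](double x) { return x * x; },
                          0.0, std::plus<>());
}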
4.4.6  Transform exclusive scan  [parallel.alg.transform.exclusive.scan]
template<class InputIterator, class OutputIterator,
      class UnaryOperation,
      class T, class BinaryOperation>
OutputIterator transform_exclusive_scan(InputIterator first, InputIterator last,
                                        OutputIterator result,
                                        UnaryOperation unary_op,
                                        T init, BinaryOperation binary_op);
Effects:
Assigns through each iterator i in [result,result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, init, unary_op(*first), ..., unary_op(*(first + (i - result) - 1))).
Returns:
The end of the resulting range beginning at result.
Requires:
Neither unary_op nor binary_op shall invalidate iterators or subranges, or modify elements in the ranges [first,last) or [result,result + (last - first)).
Complexity:
O(last - first) applications each of unary_op and binary_op.
Notes:
The difference between transform_exclusive_scan and transform_inclusive_scan is that transform_exclusive_scan excludes the ith input element from the ith sum. If binary_op is not mathematically associative, the behavior of transform_exclusive_scan may be non-deterministic. transform_exclusive_scan does not apply unary_op to init.
4.4.7  Transform inclusive scan  [parallel.alg.transform.inclusive.scan]
template<class InputIterator, class OutputIterator,
      class UnaryOperation,
      class BinaryOperation>
OutputIterator transform_inclusive_scan(InputIterator first, InputIterator last,
                                        OutputIterator result,
                                        UnaryOperation unary_op,
                                        BinaryOperation binary_op);
template<class InputIterator, class OutputIterator,
      class UnaryOperation,
      class BinaryOperation, class T>
OutputIterator transform_inclusive_scan(InputIterator first, InputIterator last,
                                        OutputIterator result,
                                        UnaryOperation unary_op,
                                        BinaryOperation binary_op, T init);
Effects:
Assigns through each iterator i in [result,result + (last - first)) the value of GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, unary_op(*first), ..., unary_op(*(first + (i - result)))) or GENERALIZED_NONCOMMUTATIVE_SUM(binary_op, init, unary_op(*first), ..., unary_op(*(first + (i - result)))) if init is provided.
Returns:
The end of the resulting range beginning at result.
Requires:
Neither unary_op nor binary_op shall invalidate iterators or subranges, or modify elements in the ranges [first,last) or [result,result + (last - first)).
Complexity:
O(last - first) applications each of unary_op and binary_op.
Notes:
The difference between transform_exclusive_scan and transform_inclusive_scan is that transform_inclusive_scan includes the ith input element in the ith sum. If binary_op is not mathematically associative, the behavior of transform_inclusive_scan may be non-deterministic. transform_inclusive_scan does not apply unary_op to init.
5  Task Block  [parallel.task_block]

5.1  Header <experimental/task_block> synopsis  [parallel.task_block.synopsis]
namespace std {
namespace experimental {
namespace parallel {
inline namespace v2 {
  class task_cancelled_exception;

  class task_block;

  template<class F>
    void define_task_block(F&& f);

  template<class F>
    void define_task_block_restore_thread(F&& f);
}
}
}
}
     
5.2  Class task_cancelled_exception  [parallel.task_block.task_cancelled_exception]
namespace std {
namespace experimental {
namespace parallel {
inline namespace v2 {

  class task_cancelled_exception : public exception
  {
    public:
      task_cancelled_exception() noexcept;
      virtual const char* what() const noexcept;
  };
}
}
}
}
     

The class task_cancelled_exception defines the type of objects thrown by task_block::run or task_block::wait if they detect that an exception is pending within the current parallel block. See 5.5, below.

5.2.1  task_cancelled_exception member function what  [parallel.task_block.task_cancelled_exception.what]
virtual const char* what() const noexcept;
Returns:
An implementation-defined NTBS.
5.3  Class task_block  [parallel.task_block.class]
namespace std {
namespace experimental {
namespace parallel {
inline namespace v2 {

  class task_block
  {
    private:
      ~task_block();

    public:
      task_block(const task_block&) = delete;
      task_block& operator=(const task_block&) = delete;
      void operator&() const = delete;

      template<class F>
        void run(F&& f);

      void wait();
  };
}
}
}
}
     

The class task_block defines an interface for forking and joining parallel tasks. The define_task_block and define_task_block_restore_thread function templates create an object of type task_block and pass a reference to that object to a user-provided function object.

An object of class task_block cannot be constructed, destroyed, copied, or moved except by the implementation of the task block library. Taking the address of a task_block object via operator& is ill-formed. Obtaining its address by any other means (including addressof) results in a pointer with an unspecified value; dereferencing such a pointer results in undefined behavior.

A task_block is active if it was created by the nearest enclosing task block, where “task block” refers to an invocation of define_task_block or define_task_block_restore_thread and “nearest enclosing” means the most recent invocation that has not yet completed. Code designated for execution in another thread by means other than the facilities in this section (e.g., using thread or async) is not enclosed in the task block, and a task_block passed to (or captured by) such code is not active within that code. Performing any operation on a task_block that is not active results in undefined behavior.

When the argument to task_block::run is called, no task_block is active, not even the task_block on which run was called. (The function object should not, therefore, capture a task_block from the surrounding block.)

[ Example:
define_task_block([&](auto& tb) {
  tb.run([&]{
    tb.run([] { f(); });               // Error: tb is not active within run
    define_task_block([&](auto& tb2) { // Define new task block
      tb2.run(f);
      ...
    });
  });
  ...
});
     
end example ]


[ Note: Implementations are encouraged to diagnose the above error at translation time. end note ]
5.3.1  task_block member function template run  [parallel.task_block.class.run]
template<class F> void run(F&& f);
Requires:
F shall be MoveConstructible. DECAY_COPY(std::forward<F>(f))() shall be a valid expression.
Preconditions:
*this shall be the active task_block.
Effects:
Evaluates DECAY_COPY(std::forward<F>(f))(), where DECAY_COPY(std::forward<F>(f)) is evaluated synchronously within the current thread. The call to the resulting copy of the function object is permitted to run on an unspecified thread created by the implementation in an unordered fashion relative to the sequence of operations following the call to run(f) (the continuation), or indeterminately sequenced within the same thread as the continuation. The call to run synchronizes with the call to the function object. The completion of the call to the function object synchronizes with the next invocation of wait on the same task_block or completion of the nearest enclosing task block (i.e., the define_task_block or define_task_block_restore_thread that created this task_block).
Throws:
task_cancelled_exception, as described in 5.5.
Remarks:
The run function may return on a thread other than the one on which it was called; in such cases, completion of the call to run synchronizes with the continuation. [ Note: The return from run is ordered similarly to an ordinary function call in a single thread. end note ]
Remarks:
The invocation of the user-supplied function object f may be immediate or may be delayed until compute resources are available. run might or might not return before the invocation of f completes.
5.3.2  task_block member function wait  [parallel.task_block.class.wait]
void wait();
Preconditions:
*this shall be the active task_block.
Effects:
Blocks until the tasks spawned using this task_block have completed.
Throws:
task_cancelled_exception, as described in 5.5.
Postconditions:
All tasks spawned by the nearest enclosing task block have completed.
Remarks:
The wait function may return on a thread other than the one on which it was called; in such cases, completion of the call to wait synchronizes with subsequent operations. [ Note: The return from wait is ordered similarly to an ordinary function call in a single thread. end note ] [ Example:
define_task_block([&](auto& tb) {
  tb.run([&]{ process(a, w, x); }); // Process a[w] through a[x]
  if (y < x) tb.wait();             // Wait if overlap between [w,x) and [y,z)
  process(a, y, z);                 // Process a[y] through a[z]
});
end example ]
5.4  Function template define_task_block  [parallel.task_block.define_task_block]
template<class F>
void define_task_block(F&& f);
template<class F>
void define_task_block_restore_thread(F&& f);
Requires:
Given an lvalue tb of type task_block, the expression f(tb) shall be well-formed.
Effects:
Constructs a task_block tb and calls f(tb).
Throws:
exception_list, as specified in 5.5.
Postconditions:
All tasks spawned from f have finished execution.
Remarks:
The define_task_block function may return on a thread other than the one on which it was called unless there are no task blocks active on entry to define_task_block (see 5.3), in which case the function returns on the original thread. When define_task_block returns on a different thread, it synchronizes with operations following the call. [ Note: The return from define_task_block is ordered similarly to an ordinary function call in a single thread. end note ] The define_task_block_restore_thread function always returns on the same thread as the one on which it was called.
Notes:
It is expected (but not mandated) that f will (directly or indirectly) call tb.run(function-object).
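A non-normative sketch of a recursive use of define_task_block, in the style of the earlier examples; the function name parallel_fib is illustrative only:

#include <experimental/task_block>

using namespace std::experimental::parallel;

int parallel_fib(int n)
{
  if (n < 2) return n;
  int left = 0, right = 0;
  define_task_block([&](task_block& tb) {
    tb.run([&] { left = parallel_fib(n - 1); });  // may run on another thread
    right = parallel_fib(n - 2);                  // runs as part of the continuation
  });                                             // all tasks spawned on tb have completed here
  return left + right;
}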
5.5  Exception Handling  [parallel.task_block.exceptions]

Every task_block has an associated exception list. When the task block starts, its associated exception list is empty.

When an exception is thrown from the user-provided function object passed to define_task_block or define_task_block_restore_thread, it is added to the exception list for that task block. Similarly, when an exception is thrown from the user-provided function object passed into task_block::run, the exception object is added to the exception list associated with the nearest enclosing task block. In both cases, an implementation may discard any pending tasks that have not yet been invoked. Tasks that are already in progress are not interrupted except at a call to task_block::run or task_block::wait as described below.

If the implementation is able to detect that an exception has been thrown by another task within the same nearest enclosing task block, then task_block::run or task_block::wait may throw task_cancelled_exception; these instances of task_cancelled_exception are not added to the exception list of the corresponding task block.

When a task block finishes with a non-empty exception list, the exceptions are aggregated into an exception_list object, which is then thrown from the task block.

The order of the exceptions in the exception_list object is unspecified.
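
A non-normative sketch of how exceptions from tasks surface to the caller of define_task_block; the function name is illustrative only, and an implementation may discard pending tasks after the first exception:

#include <experimental/exception_list>
#include <experimental/task_block>
#include <exception>
#include <stdexcept>

using namespace std::experimental::parallel;

void sketch()
{
  try {
    define_task_block([](task_block& tb) {
      tb.run([] { throw std::runtime_error("task one failed"); });
      tb.run([] { throw std::runtime_error("task two failed"); });
    });
  }
  catch (const exception_list& errors) {
    // errors holds an exception_ptr for each uncaught task exception,
    // in an unspecified order.
    for (std::exception_ptr e : errors) { /* inspect or rethrow e */ }
  }
}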