tf::PartitionerBase class

class to derive a partitioner for scheduling parallel algorithms

The class provides base methods to derive a partitioner that can be used to schedule parallel iterations (e.g., tf::Taskflow::for_each).

A partitioner defines the scheduling method for running parallel algorithms, such as tf::Taskflow::for_each, tf::Taskflow::reduce, and so on. By default, we provide the following partitioners: tf::GuidedPartitioner, tf::DynamicPartitioner, tf::StaticPartitioner, and tf::RandomPartitioner.

Depending on the application, the partitioning algorithm can significantly impact performance. For example, if a parallel-iteration workload contains a regular work unit per iteration, tf::StaticPartitioner can deliver the best performance. On the other hand, if the work unit per iteration is irregular and unbalanced, tf::GuidedPartitioner or tf::DynamicPartitioner can outperform tf::StaticPartitioner. In most situations, tf::GuidedPartitioner delivers decent performance and is thus used as our default partitioner.

Derived classes

class DynamicPartitioner
class to construct a dynamic partitioner for scheduling parallel algorithms
class GuidedPartitioner
class to construct a guided partitioner for scheduling parallel algorithms
class RandomPartitioner
class to construct a random partitioner for scheduling parallel algorithms
class StaticPartitioner
class to construct a static partitioner for scheduling parallel algorithms

Constructors, destructors, conversion operators

PartitionerBase() defaulted
default constructor
PartitionerBase(size_t chunk_size) explicit
construct a partitioner with the given chunk size

Public functions

auto chunk_size() const -> size_t
query the chunk size of this partitioner
void chunk_size(size_t cz)
update the chunk size of this partitioner

Protected variables

size_t _chunk_size
chunk size