public class Parallel extends Object
Parallel processing utilities inspired by the .NET Task Parallel Library. Provides control over how data is partitioned, drawing on ideas from Reed Copsey's blog.
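Before the API tables, a minimal self-contained sketch of the parallel-ForEach idea this class provides, written with plain `java.util.concurrent` and `Consumer` rather than this class's actual `Operation` callback (the `ForEachSketch` helper and its names are illustrative, not part of this library):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

public class ForEachSketch {
    // Illustrative stand-in for a parallel ForEach: apply op to every item
    // using a fixed-size thread pool, then wait for all tasks to finish.
    static <T> void forEach(Iterable<T> items, Consumer<T> op) {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        for (T item : items)
            pool.submit(() -> op.accept(item));
        pool.shutdown(); // accept no new tasks; let submitted ones finish
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Example use: sum a list in parallel via a thread-safe accumulator.
    static long parallelSum(List<Integer> data) {
        AtomicLong total = new AtomicLong();
        forEach(data, total::addAndGet);
        return total.get();
    }

    public static void main(String[] args) {
        System.out.println(parallelSum(Arrays.asList(1, 2, 3, 4, 5))); // 15
    }
}
```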
| Modifier and Type | Class and Description |
|---|---|
| static class | `Parallel.IntRange`: An integer range with a step size. |
| Constructor and Description | 
|---|
| Parallel() | 
| Modifier and Type | Method and Description |
|---|---|
| static <T> void | `forEach(Iterable<T> objects, Operation<T> op)`: Parallel ForEach loop over `Iterable` data. |
| static <T> void | `forEach(Iterable<T> objects, Operation<T> op, ThreadPoolExecutor pool)`: Parallel ForEach loop over `Iterable` data. |
| static <T> void | `forEach(Partitioner<T> partitioner, Operation<T> op)`: Parallel ForEach loop over partitioned data. |
| static <T> void | `forEach(Partitioner<T> partitioner, Operation<T> op, ThreadPoolExecutor pool)`: Parallel ForEach loop over partitioned data. |
| static <T> void | `forEachPartitioned(Partitioner<T> partitioner, Operation<Iterator<T>> op)`: Parallel ForEach loop over batched partitioned data. |
| static <T> void | `forEachPartitioned(Partitioner<T> partitioner, Operation<Iterator<T>> op, ThreadPoolExecutor pool)`: Parallel ForEach loop over partitioned data with batches of data. |
| static <T> void | `forEachUnpartitioned(Iterator<T> data, Operation<T> op)`: Parallel ForEach loop over unpartitioned data. |
| static <T> void | `forEachUnpartitioned(Iterator<T> data, Operation<T> op, ThreadPoolExecutor pool)`: Parallel ForEach loop over unpartitioned data. |
| static void | `forIndex(int start, int stop, int incr, Operation<Integer> op)`: Parallel integer for loop. |
| static void | `forIndex(int start, int stop, int incr, Operation<Integer> op, ThreadPoolExecutor pool)`: Parallel integer for loop. |
| static void | `forRange(int start, int stop, int incr, Operation<Parallel.IntRange> op)`: Parallel integer for loop. |
| static void | `forRange(int start, int stop, int incr, Operation<Parallel.IntRange> op, ThreadPoolExecutor pool)`: Parallel integer for loop. |
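The `forIndex`/`forRange` distinction in the table above is one of granularity: `forIndex` hands the operation one boxed `Integer` at a time, while `forRange` hands it a whole sub-range so the user iterates with a primitive `int`. A self-contained sketch of the `forRange` pattern, using plain `java.util.concurrent` rather than this class's actual internals (the `ForRangeSketch` names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLong;

public class ForRangeSketch {
    // Illustrative stand-in for Parallel.IntRange: [start, stop), step incr.
    static class IntRange {
        final int start, stop, incr;
        IntRange(int start, int stop, int incr) {
            this.start = start; this.stop = stop; this.incr = incr;
        }
    }

    interface RangeOp { void perform(IntRange range); }

    // forRange-style loop: split [start, stop) into one chunk per worker and
    // hand each chunk to op; op loops over primitive ints internally, avoiding
    // the per-element auto-boxing a forIndex-style loop incurs.
    static void forRange(int start, int stop, int incr, RangeOp op) {
        int nthreads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(nthreads);
        int n = Math.max(1, (stop - start + incr - 1) / incr);    // iteration count
        int perChunk = Math.max(1, (n + nthreads - 1) / nthreads); // iterations per chunk
        List<Future<?>> futures = new ArrayList<>();
        for (int s = start; s < stop; s += perChunk * incr) {
            final int cs = s;
            final int ce = Math.min(stop, s + perChunk * incr);
            futures.add(pool.submit(() -> op.perform(new IntRange(cs, ce, incr))));
        }
        for (Future<?> f : futures) {
            try { f.get(); } catch (Exception e) { throw new RuntimeException(e); }
        }
        pool.shutdown();
    }

    // Example: sum 0..n-1; note the extra loop over the range, as the
    // documentation below points out.
    static long sumTo(int n) {
        AtomicLong total = new AtomicLong();
        forRange(0, n, 1, r -> {
            long local = 0;                         // per-range accumulator
            for (int i = r.start; i < r.stop; i += r.incr)
                local += i;                         // primitive loop: no boxing
            total.addAndGet(local);
        });
        return total.get();
    }

    public static void main(String[] args) {
        System.out.println(sumTo(1000)); // 499500
    }
}
```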
public Parallel()
public static void forIndex(int start, int stop, int incr, Operation<Integer> op, ThreadPoolExecutor pool)

Parallel integer for loop.

Parameters:
start - starting value
stop - stopping value
incr - increment amount
op - operation to perform
pool - the thread pool.

public static void forIndex(int start, int stop, int incr, Operation<Integer> op)

Parallel integer for loop. Uses the default global thread pool.

Parameters:
start - starting value
stop - stopping value
incr - increment amount
op - operation to perform

See Also:
GlobalExecutorPool.getPool()

public static void forRange(int start, int stop, int incr, Operation<Parallel.IntRange> op)
Parallel integer for loop. Similar to forIndex(int, int, int, Operation), but potentially slightly faster as it avoids auto-boxing/unboxing and results in fewer method calls. The downside is that users have to write an extra loop to iterate over the Parallel.IntRange object. Uses the default global thread pool.

Parameters:
start - starting value
stop - stopping value
incr - increment amount
op - operation to perform

public static void forRange(int start, int stop, int incr, Operation<Parallel.IntRange> op, ThreadPoolExecutor pool)
Parallel integer for loop. Similar to forIndex(int, int, int, Operation, ThreadPoolExecutor), but potentially slightly faster as it avoids auto-boxing/unboxing and results in fewer method calls. The downside is that users have to write an extra loop to iterate over the Parallel.IntRange object.

Parameters:
start - starting value
stop - stopping value
incr - increment amount
op - operation to perform
pool - the thread pool.

public static <T> void forEach(Iterable<T> objects, Operation<T> op, ThreadPoolExecutor pool)
Parallel ForEach loop over Iterable data. The data is automatically partitioned; if the data is a List, then a RangePartitioner is used, otherwise a GrowingChunkPartitioner is used.

Type Parameters:
T - type of the data items

Parameters:
objects - the data
op - the operation to apply
pool - the thread pool.

See Also:
GlobalExecutorPool.getPool()

public static <T> void forEach(Iterable<T> objects, Operation<T> op)
Parallel ForEach loop over Iterable data. Uses the default global thread pool. The data is automatically partitioned; if the data is a List, then a RangePartitioner is used, otherwise a GrowingChunkPartitioner is used.

Type Parameters:
T - type of the data items

Parameters:
objects - the data
op - the operation to apply

See Also:
GlobalExecutorPool.getPool()

public static <T> void forEach(Partitioner<T> partitioner, Operation<T> op)
Parallel ForEach loop over partitioned data. Uses the default global thread pool.

Type Parameters:
T - type of the data items

Parameters:
partitioner - the partitioner applied to the data
op - the operation to apply

See Also:
GlobalExecutorPool.getPool()

public static <T> void forEach(Partitioner<T> partitioner, Operation<T> op, ThreadPoolExecutor pool)
Parallel ForEach loop over partitioned data.

Implementation details:
1. create partitions enumerator
2. schedule nprocs partitions
3. while there are still partitions to process:
   3.1. on completion of a partition, schedule the next one
4. wait for completion of remaining partitions
Type Parameters:
T - type of the data items

Parameters:
partitioner - the partitioner applied to the data
op - the operation to apply
pool - the thread pool.

public static <T> void forEachUnpartitioned(Iterator<T> data, Operation<T> op)
Parallel ForEach loop over unpartitioned data. Effectively equivalent to using a FixedSizeChunkPartitioner with a chunk size of 1, but with slightly less overhead. The unpartitioned for-each loop has slightly less throughput than a partitioned for-each loop, but exhibits much less delay in scheduling an item for processing as a partition does not have to first be populated. The unpartitioned for-each loop is particularly useful for processing temporal Streams of data. Uses the default global thread pool.

Implementation details:
1. create partitions enumerator
2. schedule nprocs partitions
3. while there are still partitions to process:
   3.1. on completion of a partition, schedule the next one
4. wait for completion of remaining partitions
Type Parameters:
T - type of the data items

Parameters:
data - the iterator of data items
op - the operation to apply

public static <T> void forEachUnpartitioned(Iterator<T> data, Operation<T> op, ThreadPoolExecutor pool)
Parallel ForEach loop over unpartitioned data. Effectively equivalent to using a FixedSizeChunkPartitioner with a chunk size of 1, but with slightly less overhead. The unpartitioned for-each loop has slightly less throughput than a partitioned for-each loop, but exhibits much less delay in scheduling an item for processing as a partition does not have to first be populated. The unpartitioned for-each loop is particularly useful for processing temporal Streams of data.

Implementation details:
1. create partitions enumerator
2. schedule nprocs partitions
3. while there are still partitions to process:
   3.1. on completion of a partition, schedule the next one
4. wait for completion of remaining partitions
Type Parameters:
T - type of the data items

Parameters:
data - the iterator of data items
op - the operation to apply
pool - the thread pool.

public static <T> void forEachPartitioned(Partitioner<T> partitioner, Operation<Iterator<T>> op, ThreadPoolExecutor pool)
Parallel ForEach loop over partitioned data with batches of data.

Implementation details:
1. create partitions enumerator
2. schedule nprocs partitions
3. while there are still partitions to process:
   3.1. on completion of a partition, schedule the next one
4. wait for completion of remaining partitions
Type Parameters:
T - type of the data items

Parameters:
partitioner - the partitioner applied to the data
op - the operation to apply
pool - the thread pool.

public static <T> void forEachPartitioned(Partitioner<T> partitioner, Operation<Iterator<T>> op)

Parallel ForEach loop over batched partitioned data. Uses the default global thread pool.
Type Parameters:
T - type of the data items

Parameters:
partitioner - the partitioner applied to the data
op - the operation to apply

See Also:
GlobalExecutorPool.getPool()
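The "Implementation details" steps repeated above (schedule nprocs partitions, then schedule the next partition as each one completes) can be sketched with an `ExecutorCompletionService`. The `PartitionScheduleSketch` class below is a self-contained illustration under assumed names: its fixed-size chunk partitioner and helper methods are hypothetical stand-ins, not this library's actual `Partitioner` classes:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

public class PartitionScheduleSketch {
    // Hypothetical fixed-size chunk partitioner: splits data into batches.
    static <T> Iterator<List<T>> partitions(List<T> data, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < data.size(); i += chunkSize)
            chunks.add(data.subList(i, Math.min(data.size(), i + chunkSize)));
        return chunks.iterator();
    }

    // Completion-driven scheduling, following the documented steps:
    // 1) create the partitions enumerator, 2) schedule nprocs partitions,
    // 3) as each partition completes, schedule the next, 4) drain the rest.
    static <T> void forEachPartitioned(Iterator<List<T>> parts, Consumer<List<T>> op)
            throws InterruptedException {
        int nprocs = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(nprocs);
        ExecutorCompletionService<Void> cs = new ExecutorCompletionService<>(pool);
        int inFlight = 0;
        while (inFlight < nprocs && parts.hasNext()) {       // step 2
            List<T> p = parts.next();
            cs.submit(() -> { op.accept(p); return null; });
            inFlight++;
        }
        while (inFlight > 0) {                               // steps 3-4
            cs.take();                                       // wait for one completion
            inFlight--;
            if (parts.hasNext()) {                           // step 3.1
                List<T> p = parts.next();
                cs.submit(() -> { op.accept(p); return null; });
                inFlight++;
            }
        }
        pool.shutdown();
    }

    // Example: count items across all batches (the op receives whole batches,
    // as in forEachPartitioned, rather than single items).
    static int countItems(List<Integer> data) {
        AtomicInteger seen = new AtomicInteger();
        try {
            forEachPartitioned(partitions(data, 3), batch -> seen.addAndGet(batch.size()));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen.get();
    }

    public static void main(String[] args) {
        System.out.println(countItems(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10))); // 10
    }
}
```

With a chunk size of 1 this degenerates into the unpartitioned-style loop described above: each completion immediately schedules a single new item, which is what gives the low scheduling delay at the cost of some throughput.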