DataParallelStrategy
class lightning.fabric.strategies.DataParallelStrategy(accelerator=None, parallel_devices=None, checkpoint_io=None, precision=None)
Bases: ParallelStrategy

Implements data-parallel training in a single process, i.e., the model is replicated to each device and each device gets a split of the data.
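The strategy is typically selected through Fabric rather than constructed by hand. A minimal sketch, assuming a machine with two CUDA devices (the accelerator and device count are illustrative, not part of the signature above):

```python
from lightning.fabric import Fabric

# "dp" selects DataParallelStrategy: one process, model replicated per device.
# accelerator/devices are illustrative assumptions; adjust to your hardware.
fabric = Fabric(accelerator="cuda", devices=2, strategy="dp")
fabric.launch()

# The strategy can also be instantiated directly and passed in:
# from lightning.fabric.strategies import DataParallelStrategy
# fabric = Fabric(strategy=DataParallelStrategy())
```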
all_reduce(collection, group=None, reduce_op='mean')

Reduces the given tensor (e.g. across GPUs/processes).
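A minimal sketch of a call, assuming a directly constructed strategy and a toy metric tensor (both are illustrative):

```python
import torch
from lightning.fabric.strategies import DataParallelStrategy

strategy = DataParallelStrategy()
# In this single-process strategy the reduction happens locally;
# 'mean' is the default reduce_op.
metric = torch.tensor([0.25, 0.75])
reduced = strategy.all_reduce(metric, reduce_op="mean")
```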
barrier(*args, **kwargs)

Synchronizes all processes, blocking them until the whole group enters this function.
batch_to_device(batch, device=None)

Moves the batch to the correct device. The returned batch is of the same type as the input batch, just having all tensors on the correct device.
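A short sketch with a dict-shaped batch; the batch layout and target device are made-up examples:

```python
import torch
from lightning.fabric.strategies import DataParallelStrategy

strategy = DataParallelStrategy()
batch = {"inputs": torch.randn(8, 16), "targets": torch.zeros(8, dtype=torch.long)}
# Tensors inside the container are moved; the dict structure is preserved.
batch = strategy.batch_to_device(batch, device=torch.device("cpu"))
```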
load_module_state_dict(module, state_dict, strict=True)
Loads the given state into the model.

Return type: None
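As a sketch, this mirrors loading a plain state_dict into the module; the model and the source of the weights here are hypothetical:

```python
import torch
from lightning.fabric.strategies import DataParallelStrategy

strategy = DataParallelStrategy()
model = torch.nn.Linear(16, 4)     # hypothetical model
state_dict = model.state_dict()    # e.g. weights restored from a checkpoint
strategy.load_module_state_dict(model, state_dict, strict=True)
```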
 
reduce_boolean_decision(decision, all=True)
Reduces a boolean decision over distributed processes. By default, this is analogous to `all` from the standard library, returning `True` only if all input decisions evaluate to `True`. If `all` is set to `False`, it behaves like `any` instead.
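A sketch of the all/any semantics, e.g. for an early-stopping vote; the local decision is made up, and with this single-process strategy the result simply equals it:

```python
from lightning.fabric.strategies import DataParallelStrategy

strategy = DataParallelStrategy()
local_should_stop = False  # this process's own (illustrative) decision

# all=True (default): stop only if every process agrees, like built-in all().
stop_all = strategy.reduce_boolean_decision(local_should_stop, all=True)

# all=False: stop as soon as any process wants to, like built-in any().
stop_any = strategy.reduce_boolean_decision(local_should_stop, all=False)
```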
setup_module(module)
Wraps the given model into a DataParallel module.

Return type: DataParallel
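A minimal sketch, assuming two CUDA devices are available (the devices and model are illustrative):

```python
import torch
from lightning.fabric.strategies import DataParallelStrategy

strategy = DataParallelStrategy(
    parallel_devices=[torch.device("cuda", 0), torch.device("cuda", 1)]
)
model = torch.nn.Linear(16, 4)
wrapped = strategy.setup_module(model)  # returns a torch.nn.DataParallel
```

The returned wrapper scatters each input batch across the parallel devices during the forward pass and gathers the outputs, which is what makes single-process replication work.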