DDPStrategy
class lightning.fabric.strategies.DDPStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision=None, process_group_backend=None, timeout=datetime.timedelta(seconds=1800), start_method='popen', **kwargs)
Bases: ParallelStrategy

Strategy for multi-process single-device training on one or multiple nodes.
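A minimal usage sketch, assuming a CPU machine and two processes; the accelerator and device count are illustrative, not requirements:

```python
from lightning.fabric import Fabric
from lightning.fabric.strategies import DDPStrategy

# One process per device; here, two CPU processes on a single node.
strategy = DDPStrategy(start_method="popen")
fabric = Fabric(accelerator="cpu", devices=2, strategy=strategy)
fabric.launch()  # starts the worker processes
```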
all_reduce(tensor, group=None, reduce_op='mean')

Reduces a tensor from several distributed processes to one aggregated tensor.

Parameters:
- tensor – the tensor to sync and reduce.
- group – the process group to reduce across. Defaults to all processes (world).
- reduce_op – the reduction operation. Defaults to 'mean'; can also be a string such as 'sum', or a ReduceOp.

Return type: Tensor

Returns: The reduced value. If the input was not a tensor, the output is returned unchanged.
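A hedged sketch of calling all_reduce from a launched function; using global_rank as the per-rank value and reaching the strategy via fabric.strategy are illustrative choices:

```python
import torch
from lightning.fabric import Fabric

def run(fabric):
    # Each rank contributes its rank index; the result is the mean over all ranks.
    local = torch.tensor(float(fabric.global_rank))
    reduced = fabric.strategy.all_reduce(local, reduce_op="mean")
    fabric.print(f"mean across ranks: {reduced.item()}")

fabric = Fabric(accelerator="cpu", devices=2, strategy="ddp")
fabric.launch(run)
```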
 
barrier(*args, **kwargs)

Synchronizes all processes, blocking each process until the whole group enters this function.
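A common pattern this enables, sketched with a hypothetical download_dataset helper: rank 0 performs one-time work while the remaining ranks wait.

```python
def run(fabric):
    if fabric.global_rank == 0:
        download_dataset("/tmp/data")  # hypothetical one-time setup helper
    fabric.strategy.barrier()          # every rank waits here until rank 0 arrives
```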
load_module_state_dict(module, state_dict, strict=True)

Loads the given state into the model.

Return type: None
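A minimal sketch, assuming a plain state dict was saved earlier to a hypothetical checkpoint.pt:

```python
import torch

def run(fabric):
    model = torch.nn.Linear(8, 8)
    state = torch.load("checkpoint.pt", map_location="cpu")  # hypothetical file
    fabric.strategy.load_module_state_dict(model, state, strict=True)
```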
 
setup_environment()

Sets up any processes or distributed connections. This must be called by the framework at the beginning of every process, before any distributed communication takes place.

Return type: None
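Fabric invokes this internally at process start; a rough sketch of the wiring its connector performs first, shown only to make the call order concrete (the CPU accelerator, single device, and LightningEnvironment are assumptions for a one-process run):

```python
import torch
from lightning.fabric.accelerators import CPUAccelerator
from lightning.fabric.plugins.environments import LightningEnvironment
from lightning.fabric.strategies import DDPStrategy

# Normally Fabric's connector fills these in before launching.
strategy = DDPStrategy(
    accelerator=CPUAccelerator(),
    parallel_devices=[torch.device("cpu")],
    cluster_environment=LightningEnvironment(),
)
strategy.setup_environment()  # initializes the torch.distributed process group
```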
 
setup_module(module)

Wraps the model into a DistributedDataParallel module.

Return type: DistributedDataParallel
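Continuing the single-process sketch above, setup_module returns the DistributedDataParallel wrapper; the assertion merely illustrates the documented return type:

```python
import torch
from torch.nn.parallel import DistributedDataParallel

model = torch.nn.Linear(8, 8)
ddp_model = strategy.setup_module(model)  # wraps the module for gradient synchronization
assert isinstance(ddp_model, DistributedDataParallel)
```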