HalfPrecision
- class lightning.pytorch.plugins.precision.HalfPrecision(precision='16-true')

  Bases: Precision

  Plugin for training with half precision.

  Parameters:

  - precision (Literal['bf16-true', '16-true']) – Whether to use torch.float16 ('16-true') or torch.bfloat16 ('bf16-true').
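A minimal usage sketch, assuming the standard Trainer entry point:

```python
from lightning.pytorch import Trainer
from lightning.pytorch.plugins.precision import HalfPrecision

# Run the model and its inputs in torch.bfloat16; equivalent to
# constructing the Trainer with precision="bf16-true".
trainer = Trainer(plugins=[HalfPrecision(precision="bf16-true")])
```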
- convert_input(data)
  Convert model inputs (forward) to the floating point precision type of this plugin.

  This is a no-op in the base precision plugin, since we assume the data already has the desired type (default is torch.float32).

  Return type: Any
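A short sketch of the conversion in isolation (constructing the plugin directly, outside of a Trainer, purely for illustration):

```python
import torch
from lightning.pytorch.plugins.precision import HalfPrecision

plugin = HalfPrecision(precision="16-true")
batch = torch.randn(8, 4)            # created as torch.float32 by default
batch = plugin.convert_input(batch)  # floating-point tensors are cast
print(batch.dtype)                   # torch.float16
```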
- convert_module(module)
  Convert the module parameters to the precision type this plugin handles.

  This is optional and depends on the precision limitations during optimization.

  Return type: Module
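As a sketch, converting a plain torch.nn module in isolation:

```python
import torch
from torch import nn
from lightning.pytorch.plugins.precision import HalfPrecision

plugin = HalfPrecision(precision="16-true")
model = nn.Linear(4, 2)                 # parameters start as torch.float32
model = plugin.convert_module(model)    # cast parameters to the plugin's dtype
print(next(model.parameters()).dtype)   # torch.float16
```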
- forward_context()
  A context manager to change the default tensor type when tensors get created during the module's forward.

  See: torch.set_default_tensor_type()
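Used as a plain context manager, tensors created inside the block pick up the plugin's dtype (a sketch outside of a Trainer run):

```python
import torch
from lightning.pytorch.plugins.precision import HalfPrecision

plugin = HalfPrecision(precision="bf16-true")
with plugin.forward_context():
    x = torch.zeros(3)  # created under the temporarily changed default dtype
print(x.dtype)          # torch.bfloat16
```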
- module_init_context()
  Instantiate module parameters or tensors in the precision type this plugin handles.

  This is optional and depends on the precision limitations during optimization.

  Return type: ContextManager
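A sketch of initializing a module directly in half precision, which avoids ever materializing the full-precision weights:

```python
import torch
from torch import nn
from lightning.pytorch.plugins.precision import HalfPrecision

plugin = HalfPrecision(precision="16-true")
with plugin.module_init_context():
    layer = nn.Linear(16, 16)  # parameters are created in torch.float16
print(layer.weight.dtype)      # torch.float16
```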