Trainer
class lightning.pytorch.trainer.trainer.Trainer(*, accelerator='auto', strategy='auto', devices='auto', num_nodes=1, precision=None, logger=None, callbacks=None, fast_dev_run=False, max_epochs=None, min_epochs=None, max_steps=-1, min_steps=None, max_time=None, limit_train_batches=None, limit_val_batches=None, limit_test_batches=None, limit_predict_batches=None, overfit_batches=0.0, val_check_interval=None, check_val_every_n_epoch=1, num_sanity_val_steps=None, log_every_n_steps=None, enable_checkpointing=None, enable_progress_bar=None, enable_model_summary=None, accumulate_grad_batches=1, gradient_clip_val=None, gradient_clip_algorithm=None, deterministic=None, benchmark=None, inference_mode=True, use_distributed_sampler=True, profiler=None, detect_anomaly=False, barebones=False, plugins=None, sync_batchnorm=False, reload_dataloaders_every_n_epochs=0, default_root_dir=None, enable_autolog_hparams=True, model_registry=None)

Bases: object

Customize every aspect of training via flags.

Parameters:
- accelerator (Union[str, Accelerator]) – Supports passing different accelerator types ("cpu", "gpu", "tpu", "hpu", "mps", "auto") as well as custom accelerator instances.
- strategy (Union[str, Strategy]) – Supports different training strategies with aliases as well as custom strategies. Default: "auto".
- devices (Union[list[int], str, int]) – The devices to use. Can be set to a positive number (int or str), a sequence of device indices (list or str), the value -1 to indicate all available devices should be used, or "auto" for automatic selection based on the chosen accelerator. Default: "auto".
- num_nodes (int) – Number of GPU nodes for distributed training. Default: 1.
- precision (Union[Literal[64, 32, 16], Literal['transformer-engine', 'transformer-engine-float16', '16-true', '16-mixed', 'bf16-true', 'bf16-mixed', '32-true', '64-true'], Literal['64', '32', '16', 'bf16'], None]) – Double precision (64, '64' or '64-true'), full precision (32, '32' or '32-true'), 16-bit mixed precision (16, '16' or '16-mixed'), or bfloat16 mixed precision ('bf16' or 'bf16-mixed'). Can be used on CPU, GPU, TPUs, or HPUs. Default: '32-true'.
- logger (Union[Logger, Iterable[Logger], bool, None]) – Logger (or iterable collection of loggers) for experiment tracking. A True value uses the default TensorBoardLogger if it is installed, otherwise CSVLogger. False will disable logging. If multiple loggers are provided, local files (checkpoints, profiler traces, etc.) are saved in the log_dir of the first logger. Default: True.
- callbacks (Union[list[Callback], Callback, None]) – Add a callback or list of callbacks. Default: None.
- fast_dev_run (Union[int, bool]) – Runs n batch(es) of train, val and test if set to n (int), or 1 batch of each if set to True, to find any bugs (i.e. a sort of unit test). Default: False.
- max_epochs (Optional[int]) – Stop training once this number of epochs is reached. Disabled by default (None). If both max_epochs and max_steps are not specified, defaults to max_epochs = 1000. To enable infinite training, set max_epochs = -1.
- min_epochs (Optional[int]) – Force training for at least this many epochs. Disabled by default (None).
- max_steps (int) – Stop training after this number of steps. Disabled by default (-1). If max_steps = -1 and max_epochs = None, will default to max_epochs = 1000. To enable infinite training, set max_epochs to -1.
- min_steps (Optional[int]) – Force training for at least this number of steps. Disabled by default (None).
- max_time (Union[str, timedelta, dict[str, int], None]) – Stop training after this amount of time has passed. Disabled by default (None). The time duration can be specified in the format DD:HH:MM:SS (days, hours, minutes, seconds), as a datetime.timedelta, or as a dictionary with keys that will be passed to datetime.timedelta.
- limit_train_batches (Union[float, int, None]) – How much of the training dataset to check (float = fraction, int = num_batches). Value is per device. Default: 1.0.
- limit_val_batches (Union[float, int, None]) – How much of the validation dataset to check (float = fraction, int = num_batches). Value is per device. Default: 1.0.
- limit_test_batches (Union[float, int, None]) – How much of the test dataset to check (float = fraction, int = num_batches). Value is per device. Default: 1.0.
- limit_predict_batches (Union[float, int, None]) – How much of the prediction dataset to check (float = fraction, int = num_batches). Value is per device. Default: 1.0.
- overfit_batches (Union[int, float]) – Overfit a fraction of the training/validation data (float) or a set number of batches (int). Default: 0.0.
- val_check_interval (Union[int, float, str, timedelta, dict[str, int], None]) – How often to check the validation set. Pass a float in the range [0.0, 1.0] to check after a fraction of the training epoch. Pass an int to check after a fixed number of training batches. An int value can only be higher than the number of training batches when check_val_every_n_epoch=None, which validates after every N training batches across epochs or during iteration-based training. Additionally, accepts a time-based duration as a string "DD:HH:MM:SS", a datetime.timedelta, or a dict of kwargs to datetime.timedelta. When time-based, validation triggers once the elapsed wall-clock time since the last validation exceeds the interval; the check occurs after the current batch completes, the validation loop runs, and the timer is reset. Default: 1.0.
- check_val_every_n_epoch (Optional[int]) – Perform a validation loop after every N training epochs. If None, validation will be done solely based on the number of training batches, requiring val_check_interval to be an integer value. When used together with a time-based val_check_interval and check_val_every_n_epoch > 1, validation is aligned to epoch multiples: if the interval elapses before the next multiple-of-N epoch, validation runs at the start of that epoch (after the first batch) and the timer resets; if it elapses during a multiple-of-N epoch, validation runs after the current batch. For the None or 1 cases, the time-based behavior of val_check_interval applies without additional alignment. Default: 1.
- num_sanity_val_steps (Optional[int]) – Sanity check runs n validation batches before starting the training routine. Set it to -1 to run all batches in all validation dataloaders. Default: 2.
- log_every_n_steps (Optional[int]) – How often to log within steps. Default: 50.
- enable_checkpointing (Optional[bool]) – If True, enable checkpointing. It will configure a default ModelCheckpoint callback if there is no user-defined ModelCheckpoint in callbacks. Default: True.
- enable_progress_bar (Optional[bool]) – Whether to enable the progress bar by default. Default: True.
- enable_model_summary (Optional[bool]) – Whether to enable model summarization by default. Default: True.
- accumulate_grad_batches (int) – Accumulates gradients over k batches before stepping the optimizer. Default: 1.
- gradient_clip_val (Union[float, int, None]) – The value at which to clip gradients. Passing gradient_clip_val=None disables gradient clipping. If using Automatic Mixed Precision (AMP), the gradients will be unscaled before clipping. Default: None.
- gradient_clip_algorithm (Optional[str]) – The gradient clipping algorithm to use. Pass gradient_clip_algorithm="value" to clip by value, and gradient_clip_algorithm="norm" to clip by norm. By default it will be set to "norm".
- deterministic (Union[bool, Literal['warn'], None]) – If True, sets whether PyTorch operations must use deterministic algorithms. Set to "warn" to use deterministic algorithms whenever possible, throwing warnings on operations that don't support deterministic mode. If not set, defaults to False. Default: None.
- benchmark (Optional[bool]) – The value (True or False) to set torch.backends.cudnn.benchmark to. If left unset, the value of torch.backends.cudnn.benchmark in the current session is used (False if not manually set). If deterministic is set to True, this will default to False. Override to manually set a different value. Default: None.
- inference_mode (bool) – Whether to use torch.inference_mode() or torch.no_grad() during evaluation (validate/test/predict).
- use_distributed_sampler (bool) – Whether to wrap the DataLoader's sampler with torch.utils.data.DistributedSampler. If not specified, this is toggled automatically for strategies that require it. By default, it will add shuffle=True for the train sampler and shuffle=False for the validation/test/predict samplers. If you want to disable this logic, you can pass False and add your own distributed sampler in the dataloader hooks. If True and a distributed sampler was already added, Lightning will not replace the existing one. For iterable-style datasets, we don't do this automatically.
- profiler (Union[Profiler, str, None]) – To profile individual steps during training and assist in identifying bottlenecks. Default: None.
- detect_anomaly (bool) – Enable anomaly detection for the autograd engine. Default: False.
- barebones (bool) – Whether to run in "barebones mode", where all features that may impact raw speed are disabled. This is meant for analyzing the Trainer overhead and is discouraged during regular training runs. The following features are deactivated: enable_checkpointing, logger, enable_progress_bar, log_every_n_steps, enable_model_summary, num_sanity_val_steps, fast_dev_run, detect_anomaly, profiler, log(), log_dict().
- plugins (Union[Precision, ClusterEnvironment, CheckpointIO, LayerSync, list[Union[Precision, ClusterEnvironment, CheckpointIO, LayerSync]], None]) – Plugins allow modification of core behavior like ddp and amp, and enable custom lightning plugins. Default: None.
- sync_batchnorm (bool) – Synchronize batch norm layers between process groups/whole world. Default: False.
- reload_dataloaders_every_n_epochs (int) – Set to a positive integer to reload dataloaders every n epochs. Default: 0.
- default_root_dir (Union[str, Path, None]) – Default path for logs and weights when no logger or checkpoint callback is passed. Can be a remote file path such as s3://mybucket/path or 'hdfs://path/'. Default: os.getcwd().
- enable_autolog_hparams (bool) – Whether to log hyperparameters at the start of a run. Default: True.
- model_registry (Optional[str]) – The name of the model being uploaded to the model hub.
 
Raises:
- TypeError – If gradient_clip_val is not an int or float.
- MisconfigurationException – If gradient_clip_algorithm is invalid.
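For illustration, a minimal construction sketch combining a few of these flags (all values are arbitrary examples, not recommendations):

    import lightning.pytorch as pl

    # Train on one GPU with bf16 mixed precision, clipping gradient norms
    # and stepping the optimizer every 4 batches.
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=1,
        precision="bf16-mixed",
        max_epochs=10,
        gradient_clip_val=1.0,       # clips by norm, the default algorithm
        accumulate_grad_batches=4,
        log_every_n_steps=25,
    )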
 
fit(model, train_dataloaders=None, val_dataloaders=None, datamodule=None, ckpt_path=None)

Runs the full optimization routine.

Parameters:
- model (LightningModule) – Model to fit.
- train_dataloaders (Union[Any, LightningDataModule, None]) – An iterable or collection of iterables specifying training samples. Alternatively, a LightningDataModule that defines the train_dataloader hook.
- val_dataloaders (Optional[Any]) – An iterable or collection of iterables specifying validation samples.
- datamodule (Optional[LightningDataModule]) – A LightningDataModule that defines the train_dataloader hook.
- ckpt_path (Union[str, Path, None]) – Path/URL of the checkpoint from which training is resumed. Could also be one of the special keywords "best", "last", "hpc" and "registry". Otherwise, if there is no checkpoint file at the path, an exception is raised.
  - "best": the best model checkpoint from the previous trainer.fit call will be loaded
  - "last": the last model checkpoint from the previous trainer.fit call will be loaded
  - "registry": the model will be downloaded from the Lightning Model Registry with the following notations:
    - 'registry': uses the latest/default version of the default model set with Trainer(..., model_registry="my-model")
    - 'registry:model-name': uses the latest/default version of the model named model-name
    - 'registry:model-name:version:v2': uses the specific version 'v2' of the model named model-name
    - 'registry:version:v2': uses the default model set with Trainer(..., model_registry="my-model") and version 'v2'
 
 
 
Raises:
- TypeError – If model is not a LightningModule (for torch versions below 2.0.0), or if model is not a LightningModule or torch._dynamo.OptimizedModule (for torch versions 2.0.0 and above).
For more information about multiple dataloaders, see this section.

Return type: None
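A short usage sketch (MyLightningModule and MyDataModule are placeholder classes):

    model = MyLightningModule()   # placeholder LightningModule
    dm = MyDataModule()           # placeholder LightningDataModule

    trainer.fit(model, datamodule=dm)                    # fresh run
    trainer.fit(model, datamodule=dm, ckpt_path="last")  # resume from the last checkpoint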
init_module(empty_init=None)
Tensors that you instantiate under this context manager are created on the target device right away, with the data type implied by the Trainer's precision setting. Parameters and tensors therefore materialize directly on the device with the right dtype, without memory being wasted on unnecessary intermediate allocations.
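A minimal sketch of the intended usage (the model class is a placeholder):

    trainer = pl.Trainer(accelerator="gpu", precision="16-true")

    with trainer.init_module():
        # Parameters are materialized on the GPU in float16 directly,
        # skipping a CPU float32 allocation followed by a transfer and cast.
        model = MyLightningModule()  # placeholder

    trainer.fit(model)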
predict(model=None, dataloaders=None, datamodule=None, return_predictions=None, ckpt_path=None)

Run inference on your data. This will call the model forward function to compute predictions. Useful for performing distributed and batched predictions. Logging is disabled in the predict hooks.

Parameters:
- model (Optional[LightningModule]) – The model to predict with.
- dataloaders (Union[Any, LightningDataModule, None]) – An iterable or collection of iterables specifying predict samples. Alternatively, a LightningDataModule that defines the predict_dataloader hook.
- datamodule (Optional[LightningDataModule]) – A LightningDataModule that defines the predict_dataloader hook.
- return_predictions (Optional[bool]) – Whether to return predictions. True by default except when an accelerator that spawns processes is used (not supported).
- ckpt_path (Union[str, Path, None]) – Either "best", "last", "hpc", "registry", or a path to the checkpoint you wish to predict with. If None and the model instance was passed, use the current weights. Otherwise, the best model checkpoint from the previous trainer.fit call will be loaded if a checkpoint callback is configured.
 
For more information about multiple dataloaders, see this section.

Return type: Optional[Union[list[Any], list[list[Any]]]]

Returns:
A list of dictionaries, one for each provided dataloader, containing their respective predictions.
Raises:
- TypeError – If no model is passed and there was no LightningModule passed in the previous run, or if the model passed is not a LightningModule or torch._dynamo.OptimizedModule.
- MisconfigurationException – If both dataloaders and datamodule are passed. Pass only one of these.
- RuntimeError – If a compiled model is passed and the strategy is not supported.
 
See the Lightning inference section for more.
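A brief sketch (the loader name is a placeholder):

    preds = trainer.predict(model, dataloaders=predict_loader)
    # With multiple dataloaders, preds[0], preds[1], ... hold the
    # batch-wise outputs of each loader.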
print(*args, **kwargs)

Print something only on the first process. If running on multiple machines, it will print from the first process in each machine. Arguments passed to this method are forwarded to the Python built-in print() function.

Return type: None
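For example, inside a LightningModule hook:

    def on_train_epoch_end(self):
        # Printed once per machine rather than once per process.
        self.trainer.print(f"finished epoch {self.current_epoch}")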
 
save_checkpoint(filepath, weights_only=False, storage_options=None)

Runs routine to create a checkpoint. This method needs to be called on all processes in case the selected strategy is handling distributed checkpointing.

Parameters:
- filepath (Union[str, Path]) – Path where the checkpoint is saved.
- weights_only (bool) – If True, only the model's weights are saved; otherwise, the optimizer states, lr-scheduler states, etc. are added to the checkpoint too.
- storage_options (Optional[Any]) – Parameter for how to save to storage, passed to the CheckpointIO plugin.

Raises:
- AttributeError – If the model is not attached to the Trainer before calling this method.

Return type: None
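A minimal sketch (the paths are placeholders):

    trainer.save_checkpoint("checkpoints/run.ckpt")
    # Save the model weights only, omitting optimizer and loop state:
    trainer.save_checkpoint("checkpoints/weights.ckpt", weights_only=True)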
 
test(model=None, dataloaders=None, ckpt_path=None, verbose=True, datamodule=None)

Perform one evaluation epoch over the test set. It's separated from fit to make sure you never run on your test set until you want to.

Parameters:
- model (Optional[LightningModule]) – The model to test.
- dataloaders (Union[Any, LightningDataModule, None]) – An iterable or collection of iterables specifying test samples. Alternatively, a LightningDataModule that defines the test_dataloader hook.
- ckpt_path (Union[str, Path, None]) – Either "best", "last", "hpc", "registry", or a path to the checkpoint you wish to test. If None and the model instance was passed, use the current weights. Otherwise, the best model checkpoint from the previous trainer.fit call will be loaded if a checkpoint callback is configured.
- verbose (bool) – If True, prints the test results. Default: True.
- datamodule (Optional[LightningDataModule]) – A LightningDataModule that defines the test_dataloader hook.
 
For more information about multiple dataloaders, see this section.

Return type: list[dict[str, float]]

Returns:
List of dictionaries with metrics logged during the test phase, e.g., in model or callback hooks like test_step(). The length of the list corresponds to the number of test dataloaders used.
Raises:
- TypeError – If no model is passed and there was no LightningModule passed in the previous run, or if the model passed is not a LightningModule or torch._dynamo.OptimizedModule.
- MisconfigurationException – If both dataloaders and datamodule are passed. Pass only one of these.
- RuntimeError – If a compiled model is passed and the strategy is not supported.
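A short sketch (the loader name is a placeholder):

    results = trainer.test(model, dataloaders=test_loader, ckpt_path="best")
    print(results[0]["test_acc"])  # assumes a metric named "test_acc" was logged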
 
 
validate(model=None, dataloaders=None, ckpt_path=None, verbose=True, datamodule=None)

Perform one evaluation epoch over the validation set.

Parameters:
- model (Optional[LightningModule]) – The model to validate.
- dataloaders (Union[Any, LightningDataModule, None]) – An iterable or collection of iterables specifying validation samples. Alternatively, a LightningDataModule that defines the val_dataloader hook.
- ckpt_path (Union[str, Path, None]) – Either "best", "last", "hpc", "registry", or a path to the checkpoint you wish to validate. If None and the model instance was passed, use the current weights. Otherwise, the best model checkpoint from the previous trainer.fit call will be loaded if a checkpoint callback is configured.
- verbose (bool) – If True, prints the validation results. Default: True.
- datamodule (Optional[LightningDataModule]) – A LightningDataModule that defines the val_dataloader hook.
 
For more information about multiple dataloaders, see this section.

Return type: list[dict[str, float]]

Returns:
List of dictionaries with metrics logged during the validation phase, e.g., in model or callback hooks like validation_step(). The length of the list corresponds to the number of validation dataloaders used.
Raises:
- TypeError – If no model is passed and there was no LightningModule passed in the previous run, or if the model passed is not a LightningModule or torch._dynamo.OptimizedModule.
- MisconfigurationException – If both dataloaders and datamodule are passed. Pass only one of these.
- RuntimeError – If a compiled model is passed and the strategy is not supported.
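For example, evaluating on the validation set before any training (the datamodule name is a placeholder):

    # Uses the current weights when no ckpt_path is given.
    val_results = trainer.validate(model, datamodule=dm)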
 
 
property callback_metrics: dict[str, torch.Tensor]

The metrics available to callbacks.

    def training_step(self, batch, batch_idx):
        self.log("a_val", 2.0)

    callback_metrics = trainer.callback_metrics
    assert callback_metrics["a_val"] == 2.0
property checkpoint_callback: Optional[Checkpoint]

The first ModelCheckpoint callback in the Trainer.callbacks list, or None if it doesn't exist.

property checkpoint_callbacks: list[lightning.pytorch.callbacks.checkpoint.Checkpoint]

A list of all instances of ModelCheckpoint found in the Trainer.callbacks list.

property ckpt_path: Optional[Union[str, Path]]

Set to the path/URL of a checkpoint loaded via fit(), validate(), test(), or predict(). None otherwise.

property default_root_dir: str

The default location to save artifacts of loggers, checkpoints etc.

It is used as a fallback if the logger or checkpoint callback does not define specific save paths.

property early_stopping_callback: Optional[EarlyStopping]

The first EarlyStopping callback in the Trainer.callbacks list, or None if it doesn't exist.

property early_stopping_callbacks: list[lightning.pytorch.callbacks.early_stopping.EarlyStopping]

A list of all instances of EarlyStopping found in the Trainer.callbacks list.
property estimated_stepping_batches: Union[int, float]

The estimated number of batches that will optimizer.step() during training.

This accounts for gradient accumulation and the current trainer configuration. This might be used when setting up your training dataloader, if it hasn't been set up already.

    def configure_optimizers(self):
        optimizer = ...
        stepping_batches = self.trainer.estimated_stepping_batches
        scheduler = torch.optim.lr_scheduler.OneCycleLR(
            optimizer, max_lr=1e-3, total_steps=stepping_batches
        )
        return [optimizer], [scheduler]

Raises:
- MisconfigurationException – If the estimated stepping batches cannot be computed due to different accumulate_grad_batches at different epochs.
property global_step: int

The number of optimizer steps taken (does not reset each epoch).

This includes multiple optimizers (if enabled).

property is_global_zero: bool

Whether this process is the global zero in multi-node training.

    def training_step(self, batch, batch_idx):
        if self.trainer.is_global_zero:
            print("in node 0, accelerator 0")

property log_dir: Optional[str]

The directory for the current experiment. Use this to save images to, etc.

Note: You must call this on all processes. Failing to do so will cause your program to stall forever.

    def training_step(self, batch, batch_idx):
        img = ...
        save_img(img, self.trainer.log_dir)

property logged_metrics: dict[str, torch.Tensor]

The metrics sent to the loggers.

This includes metrics logged via log() with the logger argument set.

property loggers: list[lightning.pytorch.loggers.logger.Logger]

The list of Logger instances used.

    for logger in trainer.loggers:
        logger.log_metrics({"foo": 1.0})
property model: Optional[Module]

The LightningModule, but possibly wrapped into DataParallel or DistributedDataParallel.

To access the pure LightningModule, use lightning_module() instead.

property num_predict_batches: list[Union[int, float]]

The number of prediction batches that will be used during trainer.predict().

property num_sanity_val_batches: list[Union[int, float]]

The number of validation batches that will be used during the sanity-checking part of trainer.fit().

property num_test_batches: list[Union[int, float]]

The number of test batches that will be used during trainer.test().

property num_training_batches: Union[int, float]

The number of training batches that will be used during trainer.fit().

property num_val_batches: list[Union[int, float]]

The number of validation batches that will be used during trainer.fit() or trainer.validate().

property predict_dataloaders: Optional[Any]

The prediction dataloader(s) used during trainer.predict().

property progress_bar_callback: Optional[ProgressBar]

An instance of ProgressBar found in the Trainer.callbacks list, or None if one doesn't exist.

property progress_bar_metrics: dict[str, float]

The metrics sent to the progress bar.

This includes metrics logged via log() with the prog_bar argument set.

property received_sigterm: bool

Whether a signal.SIGTERM signal was received.

For example, this can be checked to exit gracefully.