trainers

class transfer_nlp.plugins.trainers.BasicTrainer(model: torch.nn.Module, dataset_splits: transfer_nlp.loaders.loaders.DatasetSplits, loss: torch.nn.Module, optimizer: torch.optim.Optimizer, metrics: Dict[str, ignite.metrics.Metric], experiment_config: transfer_nlp.plugins.config.ExperimentConfig, device: str = None, num_epochs: int = 1, seed: int = None, cuda: bool = None, loss_accumulation_steps: int = 4, scheduler: Any = None, regularizer: transfer_nlp.plugins.regularizers.RegularizerABC = None, gradient_clipping: float = 1.0, output_transform=None, tensorboard_logs: str = None, embeddings_name: str = None, finetune: bool = False)[source]
freeze_and_replace_final_layer()[source]

Freeze all layers and replace the last layer with a custom Linear projection onto the predicted classes. Note: this method assumes that the pre-trained model ends with a classifier layer, which we want to learn.
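The mechanics can be sketched in plain PyTorch. This is a minimal illustration, not the library's actual implementation; the attribute name fc and the dimensions are assumptions:

    import torch.nn as nn

    def freeze_and_replace(model: nn.Module, hidden_size: int, num_classes: int) -> nn.Module:
        # Freeze every parameter of the pre-trained model
        for param in model.parameters():
            param.requires_grad = False
        # Swap in a fresh, trainable Linear projection onto the predicted classes
        # (assumes the model exposes its final classifier as `model.fc`)
        model.fc = nn.Linear(hidden_size, num_classes)
        return model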

train()[source]

Launch the ignite training pipeline. If fine-tuning mode is enabled in the config file, freeze all layers, replace the classification layer with a Linear layer, and reset the optimizer.
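In practice this typically reduces to building a trainer and calling train(). The sketch below follows the constructor signature above; model, splits, loss, optimizer, metrics, and config are placeholders you must provide:

    # Hypothetical usage following the constructor signature above;
    # model, splits, loss, optimizer, metrics and config are placeholders.
    trainer = BasicTrainer(
        model=model,
        dataset_splits=splits,
        loss=loss,
        optimizer=optimizer,
        metrics=metrics,
        experiment_config=config,
        num_epochs=5,
        finetune=True,  # enables the freeze-and-replace fine-tuning path
    )
    trainer.train()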

This class provides the abstraction interface for customizing runners. For the training loop, we use the engine logic from pytorch-ignite.
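For reference, the underlying pytorch-ignite pattern looks roughly like the sketch below, which assumes a model, optimizer, loss, and DataLoader already exist:

    from ignite.engine import Events, create_supervised_trainer

    # Minimal pytorch-ignite loop, shown independently of transfer-nlp
    engine = create_supervised_trainer(model, optimizer, loss)

    @engine.on(Events.EPOCH_COMPLETED)
    def log_epoch(engine):
        print(f"Epoch {engine.state.epoch}: loss={engine.state.output:.4f}")

    engine.run(train_loader, max_epochs=1)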

Check the experiments folder for examples of experiment JSON files.
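As a purely illustrative sketch of what such a file might contain (the real schema is defined by ExperimentConfig and the examples in the experiments folder; every key and value below is an assumption):

    {
        "model": {"_name": "MyClassifier", "hidden_dim": 100},
        "optimizer": {"_name": "Adam", "lr": 0.001},
        "trainer": {
            "_name": "BasicTrainer",
            "num_epochs": 5,
            "finetune": false
        }
    }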

class transfer_nlp.plugins.trainers.TrainingMetric(metric: ignite.metrics.Metric)[source]
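Based on the signature above, wrapping an ignite metric is straightforward; Accuracy here is just one example of a metric you might track during training:

    from ignite.metrics import Accuracy
    from transfer_nlp.plugins.trainers import TrainingMetric

    # Wrap an ignite metric so the trainer can track it during the training phase
    training_accuracy = TrainingMetric(metric=Accuracy())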