mixmo.core.scheduler.MultiGammaStepLR

class mixmo.core.scheduler.MultiGammaStepLR(optimizer, dict_milestone_to_gamma, last_epoch=-1)[source]

Bases: torch.optim.lr_scheduler._LRScheduler

Multi-step learning-rate decay scheduler: at each milestone epoch, the learning rate is multiplied by that milestone's own gamma factor, so different milestones can apply different decay rates.

__init__(optimizer, dict_milestone_to_gamma, last_epoch=-1)[source]

Initialize the scheduler with the wrapped optimizer and a dict mapping each milestone epoch to its decay factor gamma.

Methods

__init__(optimizer, dict_milestone_to_gamma)

Initialize the scheduler with an optimizer and a milestone-to-gamma dict.

get_last_lr()

Return last computed learning rate by current scheduler.

get_lr()

Compute the learning rate for each parameter group at the current step.

load_state_dict(state_dict)

Load the scheduler's state from a state dict.

state_dict()

Return the state of the scheduler as a dict.

step([epoch])

Advance the scheduler by one step and update the optimizer's learning rate.
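The decay rule above can be sketched in plain Python. This is a minimal, hypothetical illustration of the schedule's arithmetic (not the library's implementation): once an epoch passes a milestone, that milestone's gamma is folded multiplicatively into the learning rate, and distinct milestones may carry distinct gammas.

```python
def multi_gamma_lr(base_lr, dict_milestone_to_gamma, epoch):
    """Return the learning rate at `epoch` under a multi-gamma step schedule.

    Each milestone already reached multiplies the rate by its own gamma.
    Hypothetical helper for illustration; mirrors the scheduler's decay rule.
    """
    lr = base_lr
    for milestone, gamma in sorted(dict_milestone_to_gamma.items()):
        if epoch >= milestone:
            lr *= gamma
    return lr


# Decay by 0.1 at epoch 100, then by a further 0.5 at epoch 200.
schedule = {100: 0.1, 200: 0.5}
print(multi_gamma_lr(0.1, schedule, 50))   # before any milestone: 0.1
print(multi_gamma_lr(0.1, schedule, 150))  # after first milestone: 0.1 * 0.1
print(multi_gamma_lr(0.1, schedule, 250))  # after both: 0.1 * 0.1 * 0.5
```

In the real class, the same effect is obtained by constructing MultiGammaStepLR(optimizer, dict_milestone_to_gamma) and calling step() once per epoch; the per-milestone dict is what distinguishes it from torch.optim.lr_scheduler.MultiStepLR, which applies a single gamma at every milestone.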