chmncc.optimizers package

Submodules

chmncc.optimizers.adam module

Adam module

chmncc.optimizers.adam.get_adam_optimizer(net: Module, lr: float, weight_decay: float) Optimizer[source]

ADAM optimizer

Parameters:
  • net [nn.Module] – network architecture

  • lr [float] – learning rate

  • weight_decay [float] – weight decay

Returns:

optimizer [nn.Optimizer]
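A minimal usage sketch; the placeholder network and the hyperparameter values below are illustrative, not package defaults:

    import torch.nn as nn
    from chmncc.optimizers.adam import get_adam_optimizer

    # Placeholder network; any nn.Module is accepted
    net = nn.Linear(128, 10)

    # Illustrative hyperparameters
    optimizer = get_adam_optimizer(net, lr=1e-3, weight_decay=1e-5)

    # Standard training-step pattern
    optimizer.zero_grad()
    # loss.backward()  # backpropagate the loss of the current batch
    optimizer.step()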

chmncc.optimizers.adam.get_adam_optimizer_with_gate(net: Module, gate: DenseGatingFunction, lr: float, weight_decay: float) Optimizer[source]

ADAM optimizer for both the network and the gating function

Parameters:
  • net [nn.Module] – network architecture

  • gate [DenseGatingFunction] – gating function

  • lr [float] – learning rate

  • weight_decay [float] – weight decay

Returns:

optimizer [nn.Optimizer]
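A sketch of the gated variant, assuming net is an nn.Module and gate is a DenseGatingFunction constructed elsewhere (its constructor arguments are project-specific and omitted here):

    from chmncc.optimizers.adam import get_adam_optimizer_with_gate

    # net and gate are assumed to have been built beforehand
    optimizer = get_adam_optimizer_with_gate(net, gate, lr=1e-3, weight_decay=1e-5)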

chmncc.optimizers.exponential module

Exponential scheduler

chmncc.optimizers.exponential.get_exponential_scheduler(optimizer: Optimizer, gamma: float) _LRScheduler[source]

Exponential Decay Learning Rate

Parameters:
  • optimizer [nn.Optimizer] – optimizer whose learning rate is decayed

  • gamma [float] – decay rate

Returns:

scheduler [torch.optim.lr_scheduler._LRScheduler]
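A usage sketch, reusing an optimizer built with one of the factories above; gamma=0.99 and the epoch count are illustrative:

    from chmncc.optimizers.exponential import get_exponential_scheduler

    scheduler = get_exponential_scheduler(optimizer, gamma=0.99)

    for epoch in range(10):  # illustrative epoch count
        # ... run one training epoch ...
        scheduler.step()  # learning rate is multiplied by gamma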

chmncc.optimizers.reduce_lr_on_plateau module

Reduce LR on plateau scheduler

chmncc.optimizers.reduce_lr_on_plateau.get_plateau_scheduler(optimizer: Optimizer, patience: int) _LRScheduler[source]

Get Reduce on Plateau scheduler

Parameters:
  • optimizer [nn.Optimizer] – optimizer whose learning rate is reduced

  • patience [int] – number of epochs without improvement before the learning rate is reduced

Returns:

scheduler [torch.optim.lr_scheduler._LRScheduler]
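A usage sketch, assuming optimizer comes from one of the optimizer factories above; following the usual ReduceLROnPlateau convention, the monitored validation metric is passed to step():

    from chmncc.optimizers.reduce_lr_on_plateau import get_plateau_scheduler

    scheduler = get_plateau_scheduler(optimizer, patience=5)

    for epoch in range(10):  # illustrative epoch count
        val_loss = 0.0  # placeholder for the validation loss of this epoch
        scheduler.step(val_loss)  # lowers the lr when val_loss stops improving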

chmncc.optimizers.sgd module

Stochastic gradient descent optimizer

chmncc.optimizers.sgd.get_sdg_optimizer_with_gate(net: Module, gate: DenseGatingFunction, lr: float, weight_decay: float) Optimizer[source]

SGD optimizer for both the network and the gating function

Parameters:
  • net [nn.Module] – network architecture

  • gate [DenseGatingFunction] – gating function

  • lr [float] – learning rate

  • weight_decay [float] – weight decay

Returns:

optimizer [nn.Optimizer]
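A sketch analogous to the gated Adam variant, assuming net and a previously constructed DenseGatingFunction named gate; the hyperparameter values are illustrative:

    from chmncc.optimizers.sgd import get_sdg_optimizer_with_gate

    optimizer = get_sdg_optimizer_with_gate(net, gate, lr=0.01, weight_decay=1e-5)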

chmncc.optimizers.sgd.get_sgd_optimizer(net: Module, lr: float, momentum: float = 0.9) Optimizer[source]

SGD optimizer

Parameters:
  • net [nn.Module] – network architecture

  • lr [float] – learning rate

  • momentum [float] – momentum (default: 0.9)

Returns:

optimizer [nn.Optimizer]
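A minimal sketch; the learning rate value is illustrative and momentum falls back to the 0.9 default from the signature:

    import torch.nn as nn
    from chmncc.optimizers.sgd import get_sgd_optimizer

    net = nn.Linear(128, 10)  # placeholder network
    optimizer = get_sgd_optimizer(net, lr=0.01)  # momentum defaults to 0.9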

chmncc.optimizers.step_lr module

Step LR scheduler

chmncc.optimizers.step_lr.get_step_lr_scheduler(optimizer: Optimizer, step_size: int, gamma: float) _LRScheduler[source]

Get step LR scheduler

Parameters:
  • optimizer [nn.Optimizer] – optimizer whose learning rate is decayed

  • step_size [int] – period (in epochs) of the learning rate decay

  • gamma [float] – multiplicative decay factor

Returns:

scheduler [torch.optim.lr_scheduler._LRScheduler]
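A usage sketch with illustrative values that halve the learning rate every 30 epochs:

    from chmncc.optimizers.step_lr import get_step_lr_scheduler

    scheduler = get_step_lr_scheduler(optimizer, step_size=30, gamma=0.5)

    for epoch in range(90):  # illustrative epoch count
        # ... run one training epoch ...
        scheduler.step()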

Module contents

Optimizers module. It collects all the optimizers and learning-rate schedulers we have employed across the approaches we have experimented with.