Adam

class ivy.neural_net_stateful.optimizers.Adam(lr=0.0001, beta1=0.9, beta2=0.999, epsilon=1e-07, compile_step=False, dev_str=None)[source]

Bases: ivy.neural_net_stateful.optimizers.Optimizer

__init__(lr=0.0001, beta1=0.9, beta2=0.999, epsilon=1e-07, compile_step=False, dev_str=None)[source]

Construct an Adam optimizer. A minimal usage sketch follows the parameter list below.

Parameters
  • lr (float, optional) – Learning rate, default is 1e-4.

  • beta1 (float, optional) – Gradient forgetting factor, default is 0.9.

  • beta2 (float, optional) – Forgetting factor for the second moment of the gradient, default is 0.999.

  • epsilon (float, optional) – Small constant added to the denominator during the Adam update to prevent division by zero, default is 1e-07.

  • compile_step (bool, optional) – Whether to compile the optimizer step, default is False.

  • dev_str (str, optional) – Device on which to create the optimizer’s variables, e.g. ‘cuda:0’, ‘cuda:1’, ‘cpu’.
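A minimal usage sketch. Note that the step() method is inherited from the Optimizer base class and is not documented in this section, and the variable and gradient Containers below are hypothetical placeholders:

    import ivy
    from ivy.neural_net_stateful.optimizers import Adam

    # A backend framework is assumed to have been set beforehand.

    # Hypothetical variables and gradients, held as ivy Containers.
    v = ivy.Container({'w': ivy.variable(ivy.array([1., 2., 3.]))})
    grads = ivy.Container({'w': ivy.array([0.1, 0.1, 0.1])})

    optimizer = Adam(lr=1e-4, beta1=0.9, beta2=0.999, epsilon=1e-07)

    # Assumption: step(v, grads) returns the Container of updated variables.
    v = optimizer.step(v, grads)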

set_state(state)[source]

Set the state of the optimizer.

Parameters

state (Ivy container of state tensors) – Nested state to update.

property state
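A sketch of saving and restoring optimizer state between instances, assuming the state property returns the same ivy Container of state tensors that set_state accepts:

    # Capture the current optimizer state (an ivy Container of state tensors).
    saved_state = optimizer.state

    # Restore that state into a freshly constructed optimizer.
    restored_optimizer = Adam(lr=1e-4)
    restored_optimizer.set_state(saved_state)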

Supported Frameworks:

JAX, TensorFlow, PyTorch, MXNet, NumPy