lars_update(ws, dcdws, lr, decay_lambda=0, inplace=True, stop_gradients=True)¶
Update the weights ws of some function, given the derivatives of some cost c with respect to ws, [dc/dw for w in ws], by applying the Layer-wise Adaptive Rate Scaling (LARS) method.
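LARS (Layer-wise Adaptive Rate Scaling, You et al., 2017) scales the global learning rate per layer by a trust ratio formed from the norms of the weights and their gradients, so that layers whose gradients are small relative to their weights still take meaningful steps. A sketch of the commonly used rule, with weight decay factor \(\lambda\) = decay_lambda; the exact implementation may differ in details such as numerical stabilization of the denominator:

\[ w_{\text{new}} = w - lr \cdot \frac{\lVert w \rVert}{\lVert \partial c / \partial w \rVert + \lambda \lVert w \rVert} \left( \frac{\partial c}{\partial w} + \lambda w \right) \]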
ws (Ivy container) – Weights of the function to be updated.
dcdws (Ivy container) – Derivatives of the cost c with respect to the weights ws, [dc/dw for w in ws].
lr (float) – Learning rate, the rate at which the weights should be updated relative to the gradient.
decay_lambda (float, optional) – The factor used for weight decay. Default is zero.
inplace (bool, optional) – Whether to perform the operation in-place, for backends which support in-place variable updates and handle gradients behind the scenes, such as PyTorch. If the update step should form part of a computation graph (i.e. higher-order optimization), then this should be set to False. Default is True.
stop_gradients (bool, optional) – Whether to stop the gradients of the variables after each gradient step. Default is True.
Returns – The new function weights ws_new, following the LARS update.
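A minimal usage sketch in doctest style, assuming the function is exposed at the top level as ivy.lars_update and that ivy.Container, ivy.variable and ivy.array are available as elsewhere in this reference; the values are illustrative only:

>>> import ivy
>>> ws = ivy.Container({'w': ivy.variable(ivy.array([3., 1., 5.]))})
>>> dcdws = ivy.Container({'w': ivy.array([0.3, 0.1, 0.2])})  # dc/dw for each w in ws
>>> ws_new = ivy.lars_update(ws, dcdws, lr=0.1, inplace=False)

Passing inplace=False here returns the updated container rather than mutating ws, which keeps the step usable inside a larger computation graph.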