Meta

ivy.fomaml_step(batch, inner_cost_fn, outer_cost_fn, variables, inner_grad_steps, inner_learning_rate, inner_optimization_step=<function gradient_descent_update>, inner_batch_fn=None, outer_batch_fn=None, average_across_steps=False, batched=True, inner_v=None, keep_inner_v=True, outer_v=None, keep_outer_v=True, return_inner_v=False, num_tasks=None, stop_gradients=True)[source]

Perform a step of first-order MAML (FOMAML).

Parameters
  • batch (ivy.Container) – The input batch

  • inner_cost_fn (callable) – callable for the inner loop cost function, receiving the task-specific sub-batch, the inner variables and the outer variables

  • outer_cost_fn (callable, optional) – callable for the outer loop cost function, receiving the task-specific sub-batch, the inner variables and the outer variables. If None, the cost from the inner loop will also be optimized in the outer loop.

  • variables (ivy.Container) – Variables to be optimized during the meta step

  • inner_grad_steps (int) – Number of gradient steps to perform during the inner loop.

  • inner_learning_rate (float) – The learning rate of the inner loop.

  • inner_optimization_step (callable, optional) – The function used for the inner loop optimization. Default is ivy.gradient_descent_update.

  • inner_batch_fn (callable, optional) – Function to apply to the task sub-batch, before passing to the inner_cost_fn. Default is None.

  • outer_batch_fn (callable, optional) – Function to apply to the task sub-batch, before passing to the outer_cost_fn. Default is None.

  • average_across_steps (bool, optional) – Whether to average the inner loop steps for the outer loop update. Default is False.

  • batched (bool, optional) – Whether to batch along the time dimension, and run the meta steps in batch. Default is True.

  • inner_v (dict of str or list, optional) – Nested variable keys to be optimized during the inner loop, specified either as a list of key chains or as a nested dict with the same keys as variables and boolean values.

  • keep_inner_v (bool, optional) – If True, the key chains in inner_v will be kept, otherwise they will be removed. Default is True.

  • outer_v (dict of str or list, optional) – Nested variable keys to be optimized during the outer loop, specified either as a list of key chains or as a nested dict with the same keys as variables and boolean values.

  • keep_outer_v (bool, optional) – If True, the key chains in outer_v will be kept, otherwise they will be removed. Default is True.

  • return_inner_v (str, optional) – Either ‘first’, ‘all’, or False. With ‘first’, the inner loop variables for the first task are also returned; with ‘all’, the inner loop variables for all tasks are returned. Default is False.

  • num_tasks (int, optional) – Number of unique tasks to inner-loop optimize for the meta step. Determined from batch by default.

  • stop_gradients (bool, optional) – Whether to stop the gradients of the cost. Default is True.

Returns

The cost and the gradients with respect to the outer loop variables.
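A minimal usage sketch is shown below. The toy batch, the single weight w, the v keyword through which the cost function receives the variables, and the chosen step sizes are illustrative assumptions rather than requirements of the API.

    import ivy

    # Toy meta-batch: one sub-batch of inputs per task (illustrative shapes).
    num_tasks = 4
    batch = ivy.Container({"x": ivy.ones((num_tasks, 1))})

    # A single meta-learned weight, marked as trainable.
    variables = ivy.Container({"w": ivy.variable(ivy.zeros((1,)))})

    # Inner cost: squared error against a fixed target of 1.
    # Assumes the cost function receives the task sub-batch and a `v`
    # keyword holding the (inner-loop updated) variables.
    def inner_cost_fn(sub_batch, v):
        pred = sub_batch["x"] * v["w"]
        return ivy.mean((pred - 1.0) ** 2)

    # One first-order MAML meta step; the inner cost is reused in the outer loop.
    cost, outer_grads = ivy.fomaml_step(
        batch,
        inner_cost_fn,
        None,
        variables,
        inner_grad_steps=1,
        inner_learning_rate=0.1,
        num_tasks=num_tasks,
    )

    # Apply the outer update with any optimizer, here plain gradient descent.
    variables = ivy.gradient_descent_update(variables, outer_grads, 0.01)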

ivy.maml_step(batch, inner_cost_fn, outer_cost_fn, variables, inner_grad_steps, inner_learning_rate, inner_optimization_step=<function gradient_descent_update>, inner_batch_fn=None, outer_batch_fn=None, average_across_steps=False, batched=True, inner_v=None, keep_inner_v=True, outer_v=None, keep_outer_v=True, return_inner_v=False, num_tasks=None, stop_gradients=True)[source]

Perform a step of vanilla second-order MAML.

Parameters
  • batch (ivy.Container) – The input batch

  • inner_cost_fn (callable) – callable for the inner loop cost function, receiving the task-specific sub-batch, the inner variables and the outer variables

  • outer_cost_fn (callable, optional) – callable for the outer loop cost function, receiving the task-specific sub-batch, the inner variables and the outer variables. If None, the cost from the inner loop will also be optimized in the outer loop.

  • variables (ivy.Container) – Variables to be optimized during the meta step

  • inner_grad_steps (int) – Number of gradient steps to perform during the inner loop.

  • inner_learning_rate (float) – The learning rate of the inner loop.

  • inner_optimization_step (callable, optional) – The function used for the inner loop optimization. Default is ivy.gradient_descent_update.

  • inner_batch_fn (callable, optional) – Function to apply to the task sub-batch, before passing to the inner_cost_fn. Default is None.

  • outer_batch_fn (callable, optional) – Function to apply to the task sub-batch, before passing to the outer_cost_fn. Default is None.

  • average_across_steps (bool, optional) – Whether to average the inner loop steps for the outer loop update. Default is False.

  • batched (bool, optional) – Whether to batch along the time dimension, and run the meta steps in batch. Default is True.

  • inner_v (dict of str or list, optional) – Nested variable keys to be optimized during the inner loop, specified either as a list of key chains or as a nested dict with the same keys as variables and boolean values.

  • keep_inner_v (bool, optional) – If True, the key chains in inner_v will be kept, otherwise they will be removed. Default is True.

  • outer_v (dict of str or list, optional) – Nested variable keys to be optimized during the outer loop, specified either as a list of key chains or as a nested dict with the same keys as variables and boolean values.

  • keep_outer_v (bool, optional) – If True, the key chains in outer_v will be kept, otherwise they will be removed. Default is True.

  • return_inner_v (str, optional) – Either ‘first’, ‘all’, or False. With ‘first’, the inner loop variables for the first task are also returned; with ‘all’, the inner loop variables for all tasks are returned. Default is False.

  • num_tasks (int, optional) – Number of unique tasks to inner-loop optimize for the meta step. Determined from batch by default.

  • stop_gradients (bool, optional) – Whether to stop the gradients of the cost. Default is True.

Returns

The cost and the gradients with respect to the outer loop variables.
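As with fomaml_step, a short usage sketch follows; the quadratic toy cost, the v keyword in the cost function and the step sizes are assumptions made purely for illustration. Setting return_inner_v=‘first’ additionally returns the adapted inner loop variables of the first task, per the parameter description above.

    import ivy

    # Toy meta-batch with one sub-batch per task.
    num_tasks = 2
    batch = ivy.Container({"x": ivy.ones((num_tasks, 1))})
    variables = ivy.Container({"w": ivy.variable(ivy.zeros((1,)))})

    # Quadratic cost; assumes the variables arrive via a `v` keyword.
    def inner_cost_fn(sub_batch, v):
        return ivy.mean((sub_batch["x"] * v["w"] - 1.0) ** 2)

    # Second-order MAML step, also returning the first task's adapted variables.
    cost, outer_grads, inner_v_first = ivy.maml_step(
        batch,
        inner_cost_fn,
        None,
        variables,
        inner_grad_steps=2,
        inner_learning_rate=0.1,
        return_inner_v="first",
        num_tasks=num_tasks,
    )

    # Outer update applied with plain gradient descent.
    variables = ivy.gradient_descent_update(variables, outer_grads, 0.01)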

ivy.reptile_step(batch, cost_fn, variables, inner_grad_steps, inner_learning_rate, inner_optimization_step=<function gradient_descent_update>, batched=True, return_inner_v=False, num_tasks=None, stop_gradients=True)[source]

Perform a step of Reptile.

Parameters
  • batch (ivy.Container) – The input batch

  • cost_fn (callable) – callable for the cost function, receiving the task-specific sub-batch and the variables

  • variables (ivy.Container) – Variables to be optimized

  • inner_grad_steps (int) – Number of gradient steps to perform during the inner loop.

  • inner_learning_rate (float) – The learning rate of the inner loop.

  • inner_optimization_step (callable, optional) – The function used for the inner loop optimization. Default is ivy.gradient_descent_update.

  • batched (bool, optional) – Whether to batch along the time dimension, and run the meta steps in batch. Default is True.

  • return_inner_v (str, optional) – Either ‘first’, ‘all’, or False. With ‘first’, the inner loop variables for the first task are also returned; with ‘all’, the inner loop variables for all tasks are returned. Default is False.

  • num_tasks (int, optional) – Number of unique tasks to inner-loop optimize for the meta step. Determined from batch by default.

  • stop_gradients (bool, optional) – Whether to stop the gradients of the cost. Default is True.

Returns

The cost and the gradients with respect to the outer loop variables.
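A usage sketch for reptile_step in the same spirit; the toy cost and step sizes are illustrative, and the cost function is assumed to receive only the sub-batch plus a v keyword holding the variables, since Reptile uses a single cost function with no inner/outer variable split.

    import ivy

    # Toy meta-batch with one sub-batch per task.
    num_tasks = 4
    batch = ivy.Container({"x": ivy.ones((num_tasks, 1))})
    variables = ivy.Container({"w": ivy.variable(ivy.zeros((1,)))})

    # Single cost function; assumes a `v` keyword for the variables.
    def cost_fn(sub_batch, v):
        return ivy.mean((sub_batch["x"] * v["w"] - 1.0) ** 2)

    # One Reptile meta step across the tasks in the batch.
    cost, update_grads = ivy.reptile_step(
        batch,
        cost_fn,
        variables,
        inner_grad_steps=3,
        inner_learning_rate=0.1,
        num_tasks=num_tasks,
    )

    # The returned "gradients" act as Reptile's meta-update direction; apply
    # them with any optimizer, here plain gradient descent.
    variables = ivy.gradient_descent_update(variables, update_grads, 0.01)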