LSTM(input_channels, output_channels, num_layers=1, return_sequence=True, return_state=True, dev_str='cpu', v=None)
__init__(input_channels, output_channels, num_layers=1, return_sequence=True, return_state=True, dev_str='cpu', v=None)
LSTM layer, implemented as a stack of LSTM cells.
input_channels (int) – Number of input channels for the layer.
output_channels (int) – Number of output channels for the layer.
num_layers (int, optional) – Number of LSTM cells stacked in the layer. Default is 1.
return_sequence (bool, optional) – Whether to return the entire output sequence, or only the output at the final timestep. Default is True.
return_state (bool, optional) – Whether to return the final hidden and cell states. Default is True.
dev_str (str, optional) – The device on which to create the layer's variables, e.g. 'cuda:0', 'cuda:1', 'cpu'. Default is 'cpu'.
v (ivy container of parameter arrays, optional) – The variables for each of the LSTM cells, as a container. Constructed internally by default.
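To illustrate the semantics of `num_layers`, `return_sequence`, and `return_state`, here is a minimal NumPy sketch of a stacked-LSTM forward pass. This is not the library's implementation: the helper functions `lstm_cell` and `lstm_layer`, the weight packing, and the zero initial states are assumptions for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_seq, Wx, Wh, h0, c0):
    # Run one LSTM cell over a [timesteps, channels] input sequence.
    # Gate pre-activations are packed as [input | forget | cell | output].
    h, c = h0, c0
    outputs = []
    for x_t in x_seq:
        z = x_t @ Wx + h @ Wh                      # shape: [4 * hidden]
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        outputs.append(h)
    return np.stack(outputs), h, c

def lstm_layer(x_seq, cells, return_sequence=True, return_state=True):
    # Stacked cells: the output sequence of each cell feeds the next.
    states = []
    out = x_seq
    for Wx, Wh in cells:
        hidden = Wh.shape[0]
        out, h, c = lstm_cell(out, Wx, Wh, np.zeros(hidden), np.zeros(hidden))
        states.append((h, c))
    ret = out if return_sequence else out[-1]  # full sequence vs last timestep
    return (ret, states) if return_state else ret

# Single-cell layer: 3 input channels, 4 output channels, 5 timesteps.
rng = np.random.default_rng(0)
input_channels, output_channels, timesteps = 3, 4, 5
Wx = rng.standard_normal((input_channels, 4 * output_channels)) * 0.1
Wh = rng.standard_normal((output_channels, 4 * output_channels)) * 0.1
x = rng.standard_normal((timesteps, input_channels))

seq, states = lstm_layer(x, [(Wx, Wh)])
# seq has one output per timestep; states holds (h, c) per stacked cell.
```

With `return_sequence=False` the layer would return only `seq[-1]`, and with `return_state=False` the `(h, c)` pairs are dropped; this mirrors the flags documented above, under the stated assumptions.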
Get the initial hidden and cell states, constructed internally if not provided explicitly.
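A plausible shape-level sketch of that default initialization, assuming (as is conventional for LSTMs, though not confirmed by this page) that the internally constructed states are zero arrays with one (hidden, cell) pair per stacked cell; the helper name `initial_state` is hypothetical:

```python
import numpy as np

def initial_state(num_layers, batch_shape, output_channels):
    # One (hidden, cell) pair of zero arrays per stacked LSTM cell,
    # used when no explicit initial state is passed to the layer.
    # Zero init is an assumption, not taken from this documentation.
    return [
        (np.zeros(batch_shape + (output_channels,)),
         np.zeros(batch_shape + (output_channels,)))
        for _ in range(num_layers)
    ]

states = initial_state(num_layers=2, batch_shape=(8,), output_channels=16)
```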