PyTorch (nn.modules)#

Sequence models#

class ncps.torch.CfC(input_size, units, proj_size=None, return_sequences=True, batch_first=True, mixed_memory=False, mode='default', activation='lecun_tanh', backbone_units=None, backbone_layers=None, backbone_dropout=None)[source]#

Bases: Module

Applies a Closed-form Continuous-time RNN to an input sequence.

Examples:

>>> import torch
>>> from ncps.torch import CfC
>>>
>>> rnn = CfC(20, 50)
>>> x = torch.randn(2, 3, 20)  # (batch, time, features)
>>> h0 = torch.zeros(2, 50)  # (batch, units)
>>> output, hn = rnn(x, h0)
Parameters:
  • input_size (Union[int, Wiring]) – Number of input features

  • units – Number of hidden units

  • proj_size (Optional[int]) – If not None, the output of the RNN will be projected to a tensor with dimension proj_size (i.e., an implicit linear output layer)

  • return_sequences (bool) – Whether to return the full sequence or just the last output

  • batch_first (bool) – Whether the batch or time dimension is the first (0-th) dimension

  • mixed_memory (bool) – Whether to augment the RNN with a memory cell to help learn long-term dependencies in the data (see the example after this parameter list)

  • mode (str) – Either “default”, “pure” (direct solution approximation), or “no_gate” (without second gate).

  • activation (str) – Activation function used in the backbone layers

  • backbone_units (Optional[int]) – Number of hidden units in the backbone layer (default 128)

  • backbone_layers (Optional[int]) – Number of backbone layers (default 1)

  • backbone_dropout (Optional[float]) – Dropout rate in the backbone layers (default 0)
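
The optional arguments compose; a minimal sketch of proj_size and mixed_memory together (the output shape shown is an assumption that follows from the parameter descriptions above):

>>> import torch
>>> from ncps.torch import CfC
>>>
>>> rnn = CfC(20, 50, proj_size=10, mixed_memory=True)
>>> x = torch.randn(2, 3, 20)  # (batch, time, features)
>>> output, (h, c) = rnn(x)  # with mixed_memory, the hidden state is a pair
>>> output.shape  # each step projected to proj_size features
torch.Size([2, 3, 10])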

training: bool#
forward(input, hx=None, timespans=None)[source]#
Parameters:
  • input – Input tensor of shape (L,C) in batchless mode, (B,L,C) if batch_first is True, or (L,B,C) if batch_first is False

  • hx – Initial hidden state of the RNN of shape (B,H) if mixed_memory is False, or a tuple ((B,H),(B,H)) if mixed_memory is True. If None, the hidden states are initialized with all zeros.

  • timespans – Optional tensor of elapsed time between consecutive observations, for irregularly-sampled sequences; if None, a uniform time step of 1 is assumed (see the example below)

Returns:

A pair (output, hx), where output is the output of the RNN (the full sequence, or only the last step if return_sequences is False) and hx is the final hidden state
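
For irregularly-sampled sequences, the elapsed times can be passed alongside the input; a minimal sketch (the (B,L) shape of timespans is an assumption matching the batch-first input layout):

>>> rnn = CfC(20, 50)
>>> x = torch.randn(2, 3, 20)  # (batch, time, features)
>>> ts = torch.rand(2, 3)  # elapsed time between consecutive observations
>>> output, hn = rnn(x, timespans=ts)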

class ncps.torch.LTC(input_size, units, return_sequences=True, batch_first=True, mixed_memory=False, input_mapping='affine', output_mapping='affine', ode_unfolds=6, epsilon=1e-08, implicit_param_constraints=True)[source]#

Bases: Module

Applies a Liquid time-constant (LTC) RNN to an input sequence.

Examples:

>>> import torch
>>> from ncps.torch import LTC
>>>
>>> rnn = LTC(20, 50)
>>> x = torch.randn(2, 3, 20)  # (batch, time, features)
>>> h0 = torch.zeros(2, 50)  # (batch, units)
>>> output, hn = rnn(x, h0)

Note

To create a wired Neural Circuit Policy (NCP), you can pass an ncps.wirings.NCP object instead of the number of units

Examples:

>>> import torch
>>> from ncps.torch import LTC
>>> from ncps.wirings import NCP
>>>
>>> wiring = NCP(10, 10, 8, 6, 6, 4, 6)
>>> rnn = LTC(20, wiring)
>>> x = torch.randn(2, 3, 20)  # (batch, time, features)
>>> h0 = torch.zeros(2, 28)  # (batch, units)
>>> output, hn = rnn(x, h0)
Parameters:
  • input_size (int) – Number of input features

  • units – Wiring (ncps.wirings.Wiring instance) or integer representing the number of (fully-connected) hidden units

  • return_sequences (bool) – Whether to return the full sequence or just the last output

  • batch_first (bool) – Whether the batch or time dimension is the first (0-th) dimension

  • mixed_memory (bool) – Whether to augment the RNN with a memory cell to help learn long-term dependencies in the data

  • input_mapping (str) – Mapping applied to the inputs before they enter the ODE (“affine”, “linear”, or None)

  • output_mapping (str) – Mapping applied to the outputs of the ODE (“affine”, “linear”, or None)

  • ode_unfolds (int) – Number of unfolds of the fixed-step ODE solver per time step (default 6)

  • epsilon (float) – Small constant added to denominators for numerical stability (default 1e-08)

  • implicit_param_constraints (bool) – Whether to enforce the parameter constraints implicitly via a reparametrization (True) or explicitly by clipping after each optimizer step via apply_weight_constraints() (False)

property state_size#
property sensory_size#
property motor_size#
property output_size#
property synapse_count#
property sensory_synapse_count#
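
These size properties are forwarded from the underlying wiring. For the NCP wiring of the example above, a minimal sketch (the values shown follow from the NCP(10, 10, 8, ...) configuration, where the state size is the total neuron count and the output size is the number of motor neurons):

>>> rnn = LTC(20, NCP(10, 10, 8, 6, 6, 4, 6))
>>> rnn.state_size  # 10 inter + 10 command + 8 motor neurons
28
>>> rnn.output_size  # number of motor neurons
8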
forward(input, hx=None, timespans=None)[source]#
Parameters:
  • input – Input tensor of shape (L,C) in batchless mode, (B,L,C) if batch_first is True, or (L,B,C) if batch_first is False

  • hx – Initial hidden state of the RNN of shape (B,H) if mixed_memory is False, or a tuple ((B,H),(B,H)) if mixed_memory is True (see the example below). If None, the hidden states are initialized with all zeros.

  • timespans – Optional tensor of elapsed time between consecutive observations, for irregularly-sampled sequences; if None, a uniform time step of 1 is assumed

Returns:

A pair (output, hx), where output is the output of the RNN (the full sequence, or only the last step if return_sequences is False) and hx is the final hidden state
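
With mixed_memory=True, the hidden state is a pair rather than a single tensor, as described above; a minimal sketch:

>>> rnn = LTC(20, 50, mixed_memory=True)
>>> x = torch.randn(2, 3, 20)  # (batch, time, features)
>>> output, (h, c) = rnn(x)  # hidden state is an (h, c) tuple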

training: bool#

Single time-step models (RNN cells)#

class ncps.torch.CfCCell(input_size, hidden_size, mode='default', backbone_activation='lecun_tanh', backbone_units=128, backbone_layers=1, backbone_dropout=0.0, sparsity_mask=None)[source]#

Bases: Module

A Closed-form Continuous-time cell.

Note

This is an RNNCell that processes single time-steps. To get a full RNN that can process sequences, see ncps.torch.CfC.
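
A minimal sketch of driving the cell manually over a sequence (this assumes the cell returns an (output, new_hidden) pair, mirroring the sequence models above, and uses ts=1.0 for uniformly spaced observations):

>>> import torch
>>> from ncps.torch import CfCCell
>>>
>>> cell = CfCCell(20, 50)
>>> x = torch.randn(2, 3, 20)  # (batch, time, features)
>>> hx = torch.zeros(2, 50)  # (batch, hidden)
>>> for t in range(x.size(1)):
...     out, hx = cell(x[:, t], hx, ts=1.0)  # one update per time step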

Parameters:
  • input_size (int) – Number of input features

  • hidden_size (int) – Number of hidden units

  • mode (str) – Either “default”, “pure” (direct solution approximation), or “no_gate” (without second gate)

  • backbone_activation (str) – Activation function used in the backbone layers

  • backbone_units (int) – Number of hidden units in the backbone layers (default 128)

  • backbone_layers (int) – Number of backbone layers (default 1)

  • backbone_dropout (float) – Dropout rate in the backbone layers (default 0)

  • sparsity_mask – Optional mask for sparsifying the cell’s weight matrices

init_weights()[source]#
forward(input, hx, ts)[source]#
training: bool#
class ncps.torch.LTCCell(wiring, in_features=None, input_mapping='affine', output_mapping='affine', ode_unfolds=6, epsilon=1e-08, implicit_param_constraints=False)[source]#

Bases: Module

A Liquid time-constant (LTC) cell.

Note

This is an RNNCell that processes single time-steps. To get a full RNN that can process sequences, see ncps.torch.LTC.
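
A minimal sketch of a single update step (FullyConnected is one of the wirings provided by ncps.wirings; elapsed_time defaults to 1.0):

>>> import torch
>>> from ncps.torch import LTCCell
>>> from ncps.wirings import FullyConnected
>>>
>>> wiring = FullyConnected(8)  # 8 fully-connected units
>>> cell = LTCCell(wiring, in_features=20)
>>> x = torch.randn(2, 20)  # (batch, features), one time step
>>> hx = torch.zeros(2, 8)  # (batch, state_size)
>>> output, hx = cell(x, hx)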

Parameters:
  • wiring

  • in_features

  • input_mapping

  • output_mapping

  • ode_unfolds

  • epsilon

  • implicit_param_constraints

property state_size#
property sensory_size#
property motor_size#
property output_size#
property synapse_count#
property sensory_synapse_count#
add_weight(name, init_value, requires_grad=True)[source]#
apply_weight_constraints()[source]#
forward(inputs, states, elapsed_time=1.0)[source]#
training: bool#
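
When implicit_param_constraints=False (the default for this cell), the constraints on the parameters are not enforced inside the forward pass, so they are expected to be re-applied after each parameter update; a minimal sketch of the assumed training-loop usage (loss and optimizer set up as in any standard PyTorch loop):

>>> loss.backward()
>>> optimizer.step()
>>> cell.apply_weight_constraints()  # clip parameters back into their valid range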