TensorFlow (tf.keras Layers)#

Sequence models#

class ncps.tf.CfC(*args, **kwargs)[source]#

Bases: RNN

Applies a Closed-form Continuous-time RNN to an input sequence.

Examples:

>>> import tensorflow as tf
>>> from ncps.tf import CfC
>>>
>>> rnn = CfC(50)
>>> x = tf.random.uniform((2, 10, 20))  # (B,L,C)
>>> y = rnn(x)
Parameters:
  • units – Number of hidden units

  • mixed_memory – Whether to augment the RNN with a memory cell to help learn long-term dependencies in the data (default False)

  • mode – Either “default”, “pure” (direct solution approximation), or “no_gate” (without second gate). (default “default”)

  • activation – Activation function used in the backbone layers (default “lecun_tanh”)

  • backbone_units – Number of hidden units in the backbone layer (default 128)

  • backbone_layers – Number of backbone layers (default 1)

  • backbone_dropout – Dropout rate in the backbone layers (default 0)

  • return_sequences – Whether to return the full sequence or just the last output (default False)

  • return_state – Whether to return just the output of the RNN or a tuple (output, last_hidden_state) (default False)

  • go_backwards – If True, the input sequence will be processed from back to front (default False)

  • stateful – Whether to remember the last hidden state of the previous inference/training batch and use it as initial state for the next inference/training batch (default False)

  • unroll – Whether to unroll the graph, which may increase speed at the cost of more memory (default False)

  • time_major – Whether the time dimension (rather than the batch dimension) is the first (0-th) dimension of the input (default False)

  • kwargs
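
A minimal sketch combining several of these arguments (the parameter names follow the list above; the shapes and hyperparameter values are illustrative):

>>> import tensorflow as tf
>>> from ncps.tf import CfC
>>>
>>> # sequence-to-sequence CfC with a mixed memory cell and a deeper backbone
>>> rnn = CfC(50, mixed_memory=True, backbone_units=64, backbone_layers=2, return_sequences=True)
>>> x = tf.random.uniform((2, 10, 20))  # (B,L,C)
>>> y = rnn(x)  # (2, 10, 50)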

class ncps.tf.LTC(*args, **kwargs)[source]#

Bases: RNN

Applies a Liquid time-constant (LTC) RNN to an input sequence.

Examples:

>>> import tensorflow as tf
>>> from ncps.tf import LTC
>>>
>>> rnn = LTC(50)
>>> x = tf.random.uniform((2, 10, 20))  # (B,L,C)
>>> y = rnn(x)

Note

To create a wired Neural Circuit Policy (NCP), you can pass an ncps.wirings.NCP object instead of the number of units.

Examples:

>>> import tensorflow as tf
>>> from ncps.tf import LTC
>>> from ncps.wirings import NCP
>>>
>>> wiring = NCP(10, 10, 8, 6, 6, 4, 4)
>>> rnn = LTC(wiring)
>>> x = tf.random.uniform((2, 10, 20))  # (B,L,C)
>>> y = rnn(x)
Parameters:
  • units – Wiring (ncps.wirings.Wiring instance) or integer representing the number of (fully-connected) hidden units

  • mixed_memory – Whether to augment the RNN with a memory cell to help learn long-term dependencies in the data

  • input_mapping – Mapping applied to the sensory neurons. Possible values None, “linear”, “affine” (default “affine”)

  • output_mapping – Mapping applied to the motor neurons. Possible values None, “linear”, “affine” (default “affine”)

  • ode_unfolds – Number of ODE-solver steps per time-step (default 6)

  • epsilon – Auxiliary value to avoid dividing by 0 (default 1e-8)

  • initialization_ranges – A dictionary for overwriting the range of the uniform weight initialization (default None)

  • return_sequences – Whether to return the full sequence or just the last output (default False)

  • return_state – Whether to return just the output of the RNN or a tuple (output, last_hidden_state) (default False)

  • go_backwards – If True, the input sequence will be processed from back to front (default False)

  • stateful – Whether to remember the last hidden state of the previous inference/training batch and use it as initial state for the next inference/training batch (default False)

  • unroll – Whether to unroll the graph, which may increase speed at the cost of more memory (default False)

  • time_major – Whether the time dimension (rather than the batch dimension) is the first (0-th) dimension of the input (default False)

  • kwargs
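
As a sketch of how the layer fits into a standard Keras pipeline (the model architecture, shapes, and hyperparameters below are illustrative, not part of the API):

>>> import tensorflow as tf
>>> from ncps.tf import LTC
>>>
>>> # LTC used as a layer inside a Keras model, returning the full sequence
>>> model = tf.keras.Sequential([
...     tf.keras.layers.InputLayer(input_shape=(None, 20)),
...     LTC(32, return_sequences=True),
...     tf.keras.layers.Dense(1),
... ])
>>> model.compile(optimizer="adam", loss="mse")
>>> y = model(tf.random.uniform((2, 10, 20)))  # (2, 10, 1)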

Single time-step models (RNN cells)#

class ncps.tf.CfCCell(*args, **kwargs)[source]#

Bases: AbstractRNNCell

A Closed-form Continuous-time cell.

Note

This is an RNNCell that processes single time-steps. To get a full RNN that can process sequences, see ncps.tf.CfC or wrap the cell with a tf.keras.layers.RNN.

Parameters:
  • units – Number of hidden units

  • input_sparsity

  • recurrent_sparsity

  • mode – Either “default”, “pure” (direct solution approximation), or “no_gate” (without second gate).

  • activation – Activation function used in the backbone layers

  • backbone_units – Number of hidden units in the backbone layer (default 128)

  • backbone_layers – Number of backbone layers (default 1)

  • backbone_dropout – Dropout rate in the backbone layers (default 0)

  • kwargs

property state_size#
build(input_shape)[source]#
call(inputs, states, **kwargs)[source]#
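
Echoing the note above, a minimal sketch of wrapping the cell with tf.keras.layers.RNN to process whole sequences (shapes are illustrative):

>>> import tensorflow as tf
>>> from ncps.tf import CfCCell
>>>
>>> cell = CfCCell(50)
>>> rnn = tf.keras.layers.RNN(cell, return_sequences=True)
>>> x_seq = tf.random.uniform((1, 20, 4))  # (batch, time, features)
>>> y_seq = rnn(x_seq)  # (1, 20, 50)
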
class ncps.tf.LTCCell(*args, **kwargs)[source]#

Bases: AbstractRNNCell

A Liquid time-constant (LTC) cell.

Note

This is an RNNCell that processes single time-steps. To get a full RNN that can process sequences, see ncps.tf.LTC or wrap the cell with a tf.keras.layers.RNN.

Examples:

>>> import ncps
>>> import tensorflow as tf
>>> from ncps.tf import LTCCell
>>>
>>> wiring = ncps.wirings.Random(16, output_dim=2, sparsity_level=0.5)
>>> cell = LTCCell(wiring)
>>> rnn = tf.keras.layers.RNN(cell)
>>> x = tf.random.uniform((1, 4))  # (batch, features)
>>> h0 = tf.zeros((1, 16))
>>> y = cell(x, h0)
>>>
>>> x_seq = tf.random.uniform((1, 20, 4))  # (batch, time, features)
>>> y_seq = rnn(x_seq)
Parameters:
  • wiring

  • input_mapping

  • output_mapping

  • ode_unfolds

  • epsilon

  • initialization_ranges

  • kwargs

property state_size#
property sensory_size#
property motor_size#
property output_size#
build(input_shape)[source]#
call(inputs, states)[source]#
get_config()[source]#
classmethod from_config(config)[source]#
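
A hedged sketch of the serialization round trip suggested by get_config() and from_config() (the exact config contents depend on the ncps version):

>>> import ncps
>>> from ncps.tf import LTCCell
>>>
>>> wiring = ncps.wirings.Random(16, output_dim=2, sparsity_level=0.5)
>>> cell = LTCCell(wiring)
>>> config = cell.get_config()              # serializes the wiring and the cell hyperparameters
>>> restored = LTCCell.from_config(config)  # rebuilds an equivalent cell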