Optimizers

Apart from the optimizer below, any optimizer from tensorflow.keras.optimizers can be used.

class libspn_keras.optimizers.OnlineExpectationMaximization(learning_rate=0.01, accumulate_batches=1, name='OnlineEM', **kwargs)

Online expectation maximization.

Requires sum layers to use one of the EM-based SumOpBase instances, such as SumOpEMBackprop or SumOpHardEMBackprop.

Parameters
  • learning_rate (float) – Learning rate for EM. If the learning rate is \(\eta\), then updates are given by \(w \leftarrow (1-\eta)w + \eta \Delta w\).

  • accumulate_batches (int) – The number of batches to accumulate gradients before applying updates.

  • name (str) – Name of the optimizer.

  • kwargs – Remaining keyword arguments to pass to the Optimizer superclass.
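
Example: the sketch below compiles a small SPN with this optimizer. It is a minimal illustration based on the building blocks documented elsewhere in libspn_keras (set_default_sum_op, SumOpEMBackprop, the layers and losses modules); the layer choices and arguments are illustrative, not a prescribed architecture.

    import tensorflow as tf
    import libspn_keras as spnk

    # Sum layers must use an EM-based SumOpBase; setting the library-wide
    # default sum op is one way to arrange this.
    spnk.set_default_sum_op(spnk.SumOpEMBackprop())

    # A small SPN; the layers and their arguments are illustrative only.
    model = tf.keras.Sequential([
        spnk.layers.NormalLeaf(num_components=4),
        spnk.layers.DenseProduct(num_factors=2),
        spnk.layers.DenseSum(num_sums=2),
        spnk.layers.RootSum(return_weighted_child_logits=False),
    ])

    model.compile(
        optimizer=spnk.optimizers.OnlineExpectationMaximization(
            learning_rate=0.05,    # eta in w <- (1 - eta) w + eta * dw
            accumulate_batches=1,  # apply an update after every batch
        ),
        loss=spnk.losses.NegativeLogLikelihood(),
    )

Setting accumulate_batches to a value greater than 1 sums the accumulated statistics over that many batches before a single EM update is applied, which can help when individual batches are small.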