Models

This submodule provides some out-of-the-box model analogues of tensorflow.keras.Model. They can be used to train SPNs in e.g. generative scenarios, where there is no label for an input. There is also a DynamicSumProductNetwork that can be used for temporal modeling of pre-padded sequences.

Feedforward models

class libspn_keras.models.SumProductNetwork(*args, unsupervised=True, **kwargs)

An SPN analogue of tensorflow.keras.Model that can be trained generatively.

It does not expect labels y when calling .fit() if unsupervised == True.

Parameters

unsupervised (bool) – If True (default), the model does not expect label inputs in .fit() or .evaluate(). Also, losses and metrics should not expect a target output, just a y_hat.
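
As a hedged illustration of the training contract (not an example taken from the library's own documentation): the sketch below assumes an SPN graph built with the Keras functional API from libspn_keras layers, where the layer names and arguments (FlatToRegions, NormalLeaf, DenseProduct, DenseSum, RootSum) and the NegativeLogLikelihood loss are assumptions and may need adjusting.

    import numpy as np
    import tensorflow as tf
    import libspn_keras as spnk

    # Hypothetical functional graph over 4 variables; layer names and
    # arguments are assumptions for illustration, not prescribed by this class.
    x_in = tf.keras.Input(shape=(4,))
    h = spnk.layers.FlatToRegions(num_decomps=1)(x_in)
    h = spnk.layers.NormalLeaf(num_components=4)(h)
    h = spnk.layers.DenseProduct(num_factors=2)(h)
    h = spnk.layers.DenseSum(num_sums=4)(h)
    h = spnk.layers.DenseProduct(num_factors=2)(h)
    out = spnk.layers.RootSum(return_weighted_child_logits=False)(h)

    model = spnk.models.SumProductNetwork(inputs=x_in, outputs=out, unsupervised=True)

    # With unsupervised=True the loss only consumes the model output (y_hat),
    # so .fit() is called with inputs only and no labels.
    model.compile(optimizer="adam", loss=spnk.losses.NegativeLogLikelihood())
    x = np.random.rand(32, 4).astype("float32")
    model.fit(x, epochs=1)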

class libspn_keras.models.SequentialSumProductNetwork(layers, infer_no_evidence=False, unsupervised=None, infer_weighted_sum=True, **kwargs)

An analogue of Sequential that can be trained in an unsupervised way.

It does not expect labels y when calling .fit() if unsupervised == True. Inherits from Sequential, so layers are passed to it as a list.

Parameters
  • layers (list of Layer) – List of tf.keras.layers.Layer instances

  • infer_no_evidence (bool) – If True, the model expects an evidence mask defined as a boolean tensor which is used to mask out variables that are not part of the evidence.

  • unsupervised (bool) – If True, the model does not expect label inputs in .fit() or .evaluate(). Also, losses and metrics should not expect a target output, just a y_hat. If not given explicitly, it is inferred from infer_no_evidence and otherwise defaults to True.

  • infer_weighted_sum (bool) – If True, gives a weighted sum of leaf representations, where the coefficients are given by backprop signals. Otherwise, chooses the argmax of those coefficients.
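
A rough construction sketch using a list of layers; as above, the specific leaf, product and sum layers and their arguments are assumptions made for illustration rather than requirements of this class.

    import numpy as np
    import libspn_keras as spnk

    # Hypothetical layer stack over 8 variables (layer names/arguments assumed).
    spn = spnk.models.SequentialSumProductNetwork([
        spnk.layers.FlatToRegions(num_decomps=1),
        spnk.layers.NormalLeaf(num_components=4),
        spnk.layers.DenseProduct(num_factors=2),
        spnk.layers.DenseSum(num_sums=4),
        spnk.layers.DenseProduct(num_factors=2),
        spnk.layers.DenseSum(num_sums=4),
        spnk.layers.DenseProduct(num_factors=2),
        spnk.layers.RootSum(return_weighted_child_logits=False),
    ])

    # unsupervised is left unset and resolves to True, so .fit() and
    # .evaluate() are called with inputs only.
    spn.compile(optimizer="adam", loss=spnk.losses.NegativeLogLikelihood())
    x = np.random.rand(64, 8).astype("float32")
    spn.fit(x, epochs=1)

With infer_no_evidence=True, the model additionally expects a boolean evidence mask alongside the inputs; the exact calling convention for that case is not shown here.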

zero_evidence_inference(size)

Do inference when no evidence at all is provided.

This means that internally, all evidence booleans are set to False.

Parameters

size (Tensor) – Size of batch.

Return type

Tensor

Returns

Representation for a batch of the given size in the absence of evidence.
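
A hedged usage sketch, assuming an already-built SequentialSumProductNetwork spn constructed with infer_no_evidence=True (construction not shown):

    # All evidence booleans are set to False internally, so this yields a
    # representation for a batch of 16 samples without any observed variables.
    representation = spn.zero_evidence_inference(size=16)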

Temporal models

class libspn_keras.models.DynamicSumProductNetwork(template_network, interface_network_t0, interface_network_t_minus_1, top_network, return_last_step=True, unsupervised=True, **kwargs)

SPN that re-uses its nodes at each time step.

The input is expected to be pre-padded sequences with a full tensor shape of [num_batch, max_sequence_len, num_variables].

Parameters
  • template_network (Model) – Template network that is applied to the leaves and ends with nodes that cover all variables for each timestep.

  • interface_network_t0 (Model) – Interface network for t = t0, applied on top of the template network’s output at the current timestep.

  • interface_network_t_minus_1 (Model) – Interface network for t = t0 - 1, applied to the interfaced output of the previous timestep.

  • top_network (Model) – Network on top of the interfaced network at the current timestep (covers all variables of the current timestep as well as those of previous timesteps). This network must end with a root sum layer.

  • return_last_step (bool) – Whether to return only the root values at the last step, with shape [num_batch, root_num_out], or the root values at every step, with shape [num_batch, max_sequence_len, root_num_out].

  • unsupervised (bool) – If True (default), the model does not expect label inputs in .fit() or .evaluate(). Also, losses and metrics should not expect a target output, just a y_hat.
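
A rough construction sketch: only the constructor arguments and the [num_batch, max_sequence_len, num_variables] input shape come from this page; the contents of the sub-networks below are assumptions for illustration and may need adjusting to the structural requirements of the layers used.

    import numpy as np
    import tensorflow as tf
    import libspn_keras as spnk

    num_vars, max_len = 4, 10

    # Template network: applied per timestep, ends with nodes covering all
    # variables of that timestep (layer choices are assumptions).
    template = tf.keras.Sequential([
        spnk.layers.FlatToRegions(num_decomps=1),
        spnk.layers.NormalLeaf(num_components=4),
        spnk.layers.DenseProduct(num_factors=2),
        spnk.layers.DenseSum(num_sums=4),
        spnk.layers.DenseProduct(num_factors=2),
    ])

    # Interface networks for the current timestep and the interfaced output
    # of the previous timestep.
    interface_t0 = tf.keras.Sequential([spnk.layers.DenseSum(num_sums=4)])
    interface_t_minus_1 = tf.keras.Sequential([spnk.layers.DenseSum(num_sums=4)])

    # Top network, which must end with a root sum layer.
    top = tf.keras.Sequential([
        spnk.layers.RootSum(return_weighted_child_logits=False),
    ])

    dyn_spn = spnk.models.DynamicSumProductNetwork(
        template_network=template,
        interface_network_t0=interface_t0,
        interface_network_t_minus_1=interface_t_minus_1,
        top_network=top,
        return_last_step=True,
        unsupervised=True,
    )

    dyn_spn.compile(optimizer="adam", loss=spnk.losses.NegativeLogLikelihood())
    sequences = np.random.rand(8, max_len, num_vars).astype("float32")
    dyn_spn.fit(sequences, epochs=1)  # pre-padded sequences, no labels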