Losses

Supervised generative learning

The following loss can be used when trying to maximize \(p(X,Y)\).

class libspn_keras.losses.NegativeLogJoint(reduction='auto', name=None)

Compute \(-\log(p(X,Y))\).

Assumes that its input is \(\log(p(X|Y))\), where \(Y\) is indexed on the second axis. This loss can be used for supervised generative learning with gradient-based optimizers or (hard) expectation maximization.
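
A minimal usage sketch follows. The plain Dense stand-in model, the input shape, and the toy data are assumptions made only to keep the snippet runnable; in practice the model would be a sum-product network built from libspn_keras layers whose root outputs \(\log(p(X|Y))\) per class.

    import numpy as np
    import tensorflow as tf
    from libspn_keras.losses import NegativeLogJoint

    NUM_CLASSES = 3  # assumed number of values of Y

    # Stand-in for an SPN whose root outputs log(p(X|Y)) with Y on the second
    # axis; a real model would be composed of libspn_keras layers instead.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(4,)),
        tf.keras.layers.Dense(NUM_CLASSES),
    ])

    model.compile(
        optimizer=tf.keras.optimizers.Adam(),
        loss=NegativeLogJoint(),  # minimizes -log(p(X, Y))
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
    )

    # Toy data: X as evidence, Y as integer class indices (assumed label format).
    x_train = np.random.rand(8, 4).astype("float32")
    y_train = np.random.randint(0, NUM_CLASSES, size=(8,))
    model.fit(x_train, y_train, epochs=1, verbose=0)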

Unsupervised generative learning

The following loss can be used when trying to maximize \(p(X)\).

class libspn_keras.losses.NegativeLogLikelihood(reduction='auto', name=None)

Marginalize the logits over the last dimension to compute \(-\log(p(X))\).

This can be used for unsupervised generative learning.
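
A minimal usage sketch under the same assumptions as the example above (a plain Dense stand-in instead of a real SPN, toy data). Since the loss only marginalizes the model output, a dummy target array is passed to fit; that workaround is an assumption of this sketch, not a documented requirement.

    import numpy as np
    import tensorflow as tf
    from libspn_keras.losses import NegativeLogLikelihood

    # Stand-in for an SPN whose root outputs one logit per component on the
    # last axis; a real model would be composed of libspn_keras layers instead.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(4,)),
        tf.keras.layers.Dense(3),
    ])

    model.compile(
        optimizer=tf.keras.optimizers.Adam(),
        loss=NegativeLogLikelihood(),  # minimizes -log(p(X))
    )

    x_train = np.random.rand(8, 4).astype("float32")
    # The loss ignores the target and marginalizes the logits over the last
    # dimension, so a dummy target of matching batch size suffices (assumption).
    dummy_targets = np.zeros((8, 1), dtype="float32")
    model.fit(x_train, dummy_targets, epochs=1, verbose=0)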