tfmri.losses.ConfusionLoss

class ConfusionLoss(average='macro', class_weights=None, reduction='auto', name='confusion_loss')[source]

Bases: keras.losses.Loss

Abstract base class for losses derived from the confusion matrix.

A confusion matrix is a table that reports the number of true positives, false positives, true negatives and false negatives.

This provides a base class for losses that are calculated based on the values of the confusion matrix.

This class’s call method computes the confusion matrix and then calls the result method. Subclasses are expected to implement result to compute the loss value from the confusion matrix. Finally, the average over classes is computed according to the configuration.

This class exposes the attributes true_positives, true_negatives, false_positives and false_negatives for use by subclasses. Each of these is a list containing one value for each class.
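As a sketch of how these attributes relate to the inputs, the four counts can be derived from soft predictions as below. This is a NumPy illustration, not the actual tfmri implementation, and the helper name is hypothetical:

```python
import numpy as np

def soft_confusion_counts(y_true, y_pred):
    # Hypothetical helper illustrating how per-class counts can be
    # derived from soft predictions; not the actual tfmri code.
    # y_true, y_pred: arrays of shape [..., num_classes].
    # Sum over every axis except the trailing class axis, so each
    # result holds one value per class. (The actual class keeps the
    # batch dimension separate so the loss can be reduced over it.)
    axes = tuple(range(y_true.ndim - 1))
    tp = np.sum(y_true * y_pred, axis=axes)
    fp = np.sum((1.0 - y_true) * y_pred, axis=axes)
    fn = np.sum(y_true * (1.0 - y_pred), axis=axes)
    tn = np.sum((1.0 - y_true) * (1.0 - y_pred), axis=axes)
    return tp, tn, fp, fn
```

With hard (0/1) predictions these reduce to the usual integer confusion-matrix counts; with probabilities they give differentiable "soft" counts.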

This loss may be used for binary, multiclass and multilabel classification.

Inputs y_true and y_pred are expected to have shape [..., num_classes], with channel i containing labels/predictions for class i. y_true[..., i] is 1 if the element represented by y_true[...] is a member of class i and 0 otherwise. y_pred[..., i] is the predicted probability, in the range [0.0, 1.0], that the element represented by y_pred[...] is a member of class i.

This class further assumes that inputs y_true and y_pred have shape [batch_size, ..., num_classes]. The loss is computed for each batch element y_true[i, ...] and y_pred[i, ...], and then reduced over this dimension as specified by argument reduction.
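For example, sparse integer labels can be converted to the expected channel-last one-hot format as follows (a NumPy sketch with illustrative values):

```python
import numpy as np

num_classes = 3
# Sparse labels for a batch of two 1-D "images" of length 2.
labels = np.array([[0, 2],
                   [1, 1]])                    # shape [2, 2]
# One-hot encode along a new trailing class axis.
y_true = np.eye(num_classes)[labels]           # shape [2, 2, 3]
# Predictions are per-class probabilities in [0, 1] with the same shape.
y_pred = np.full((2, 2, num_classes), 1.0 / num_classes)
```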

This loss supports both classification and segmentation tasks. In multiclass and multilabel problems, the different classes are combined according to the average and class_weights arguments. Argument average can take one of the following values:

  • 'micro': Calculate the loss globally by counting the total number of true positives, true negatives, false positives and false negatives.

  • 'macro': Calculate the loss for each label, and return their unweighted mean. This does not take label imbalance into account.

  • 'weighted': Calculate the loss for each label, and find their average weighted by class_weights. If class_weights is None, the classes are weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance.
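The three averaging modes can be sketched as below, using a soft Dice loss as a stand-in for the per-class metric. This is a hedged NumPy illustration; the function name is hypothetical and not part of tfmri:

```python
import numpy as np

def averaged_dice_loss(tp, fp, fn, average='macro', class_weights=None):
    # tp, fp, fn: arrays with one confusion-matrix count per class.
    def dice_loss(tp, fp, fn):
        # Soft Dice loss from confusion-matrix counts.
        return 1.0 - 2.0 * tp / (2.0 * tp + fp + fn)

    if average == 'micro':
        # Pool the counts over all classes, then compute the loss once.
        return dice_loss(tp.sum(), fp.sum(), fn.sum())
    per_class = dice_loss(tp, fp, fn)
    if average == 'macro':
        # Unweighted mean over classes.
        return per_class.mean()
    if average == 'weighted':
        # Fall back to support (number of true instances, tp + fn)
        # when no explicit class weights are given.
        weights = (np.asarray(class_weights) if class_weights is not None
                   else tp + fn)
        return np.average(per_class, weights=weights)
    raise ValueError(f"Unknown average: {average!r}")
```

Note how 'micro' computes a single loss from pooled counts, whereas 'macro' and 'weighted' compute one loss per class and differ only in how the per-class values are combined.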

Parameters
  • average – A str. The class averaging mode. Valid values are 'micro', 'macro' and 'weighted'. Defaults to 'macro'. See above for details on the different modes.

  • class_weights – A list of float values specifying the weight for each class. Must have length equal to the number of classes. This parameter is only relevant if average is 'weighted'. Defaults to None.

  • reduction – A value in tf.keras.losses.Reduction. The type of loss reduction. Defaults to 'auto'.

  • name – A str. The name of the loss instance. Defaults to 'confusion_loss'.

Initializes Loss class.

Parameters
  • reduction – Type of tf.keras.losses.Reduction to apply to the loss. Default value is AUTO, which indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see the custom training tutorial for more details.

  • name – Optional name for the instance.

call(y_true, y_pred)[source]

Invokes the Loss instance.

Parameters
  • y_true – Ground truth values, with shape [batch_size, d0, ..., dN] (except for sparse loss functions such as sparse categorical crossentropy, where the shape is [batch_size, d0, ..., dN-1]).

  • y_pred – The predicted values, with shape [batch_size, d0, ..., dN].

Returns

Loss values with shape [batch_size, d0, ..., dN-1].

get_config()[source]

Returns the config dictionary for a Loss instance.