tfmri.losses.FocalTverskyLoss

class FocalTverskyLoss(alpha=0.3, beta=0.7, gamma=0.75, epsilon=1e-05, average='macro', class_weights=None, reduction='auto', name='focal_tversky_loss')[source]

Bases: tensorflow_mri.python.losses.confusion_losses.ConfusionLoss

Focal Tversky loss function.

The focal Tversky loss is computed as:

\[L = \left ( 1 - \frac{\mathrm{TP} + \epsilon}{\mathrm{TP} + \alpha \mathrm{FP} + \beta \mathrm{FN} + \epsilon} \right ) ^ \gamma\]

This loss allows control over the relative importance of false positives and false negatives through the alpha and beta parameters, which may be useful when classes are imbalanced. Additionally, the gamma exponent can be used to shift the focus towards difficult examples.
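As a minimal pure-Python sketch (not the tensorflow-mri implementation), the formula above can be evaluated directly from true-positive, false-positive and false-negative counts; the helper name `focal_tversky` is hypothetical:

```python
# Hypothetical standalone sketch of the focal Tversky formula:
#   L = (1 - (TP + eps) / (TP + alpha*FP + beta*FN + eps)) ** gamma
# This is NOT the tensorflow-mri code; it only illustrates the math.

def focal_tversky(tp, fp, fn, alpha=0.3, beta=0.7, gamma=0.75, epsilon=1e-5):
    tversky_index = (tp + epsilon) / (tp + alpha * fp + beta * fn + epsilon)
    return (1.0 - tversky_index) ** gamma

# A perfect prediction (no false positives/negatives) yields a loss of ~0.
print(focal_tversky(tp=10.0, fp=0.0, fn=0.0))

# With the default beta > alpha, false negatives are penalized more
# heavily than the same number of false positives.
print(focal_tversky(tp=10.0, fp=0.0, fn=4.0) > focal_tversky(tp=10.0, fp=4.0, fn=0.0))
```

Note how swapping alpha and beta would reverse the second comparison, which is why the Notes section below warns about the inverted notations in the two references.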

Inputs y_true and y_pred are expected to have shape [..., num_classes], with channel i containing labels/predictions for class i. y_true[..., i] is 1 if the element represented by y_true[...] is a member of class i and 0 otherwise. y_pred[..., i] is the predicted probability, in the range [0.0, 1.0], that the element represented by y_pred[...] is a member of class i.

This class further assumes that inputs y_true and y_pred have shape [batch_size, ..., num_classes]. The loss is computed for each batch element y_true[i, ...] and y_pred[i, ...], and then reduced over this dimension as specified by argument reduction.

This loss works for binary, multiclass and multilabel classification and/or segmentation. In multiclass/multilabel problems, the different classes are combined according to the average and class_weights arguments. Argument average can take one of the following values:

  • 'micro': Calculate the loss globally by counting the total number of true positives, true negatives, false positives and false negatives.

  • 'macro': Calculate the loss for each label, and return their unweighted mean. This does not take label imbalance into account.

  • 'weighted': Calculate the loss for each label, and find their average weighted by class_weights. If class_weights is None, the classes are weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance.
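The difference between 'micro' and 'macro' can be sketched in pure Python (an illustration only, not the library code; the function names are hypothetical). Each class contributes a `(tp, fp, fn)` triple:

```python
# Illustrative sketch of 'micro' vs 'macro' averaging over per-class
# confusion counts. NOT the tensorflow-mri implementation.

def focal_tversky(tp, fp, fn, alpha=0.3, beta=0.7, gamma=0.75, epsilon=1e-5):
    ti = (tp + epsilon) / (tp + alpha * fp + beta * fn + epsilon)
    return (1.0 - ti) ** gamma

def averaged_loss(counts, average='macro'):
    if average == 'micro':
        # Pool the counts over all classes, then compute a single loss.
        tp = sum(c[0] for c in counts)
        fp = sum(c[1] for c in counts)
        fn = sum(c[2] for c in counts)
        return focal_tversky(tp, fp, fn)
    if average == 'macro':
        # Compute one loss per class, then take the unweighted mean.
        return sum(focal_tversky(*c) for c in counts) / len(counts)
    raise ValueError(f"unsupported average: {average!r}")

# A well-segmented majority class and a poorly-segmented minority class.
counts = [(90.0, 5.0, 5.0), (2.0, 3.0, 8.0)]

# 'macro' gives the minority class equal weight, so its poor performance
# dominates; 'micro' is swamped by the majority class counts.
print(averaged_loss(counts, 'micro'), averaged_loss(counts, 'macro'))
```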

Parameters
  • alpha – A float. Weight given to false positives. Defaults to 0.3.

  • beta – A float. Weight given to false negatives. Defaults to 0.7.

  • gamma – A float. The focus parameter. A lower value increases the importance given to difficult examples. Defaults to 0.75.

  • epsilon – A float. A smoothing factor. Defaults to 1e-5.

  • average – A str. The class averaging mode. Valid values are 'micro', 'macro' and 'weighted'. Defaults to 'macro'. See above for details on the different modes.

  • class_weights – A list of floats. The weights for each class. Must have length equal to the number of classes. This parameter is only relevant if average is 'weighted'. Defaults to None.

  • reduction – A value in tf.keras.losses.Reduction. The type of loss reduction. Defaults to 'auto'.

  • name – A str. The name of the loss instance. Defaults to 'focal_tversky_loss'.
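The 'weighted' averaging mode described above can likewise be sketched in pure Python (an illustration, not the library implementation; the helper name `weighted_average` is hypothetical). When `class_weights` is None, each class is weighted by its support:

```python
# Illustrative sketch of the 'weighted' averaging mode: per-class losses
# are combined with explicit class_weights, or weighted by support (the
# number of true instances per class) when class_weights is None.
# NOT the tensorflow-mri implementation.

def weighted_average(per_class_losses, support, class_weights=None):
    if class_weights is None:
        # Weight each class by its number of true instances.
        class_weights = support
    total = sum(class_weights)
    return sum(w * l for w, l in zip(class_weights, per_class_losses)) / total

losses = [0.1, 0.8]      # hypothetical per-class focal Tversky losses
support = [95.0, 10.0]   # class 0 dominates the ground truth

print(weighted_average(losses, support))              # support-weighted
print(weighted_average(losses, support, [0.5, 0.5]))  # equal weights ~ 'macro'
```

With support weighting, the dominant class pulls the result toward its own (low) loss; passing equal `class_weights` recovers the unweighted 'macro' mean.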

Notes

[1] and [2] use inverted notations for the \(\alpha\) and \(\beta\) parameters. Here we use the notation of [1]. Also note that [2] refers to \(\gamma\) as \(\frac{1}{\gamma}\).

References

[1] Salehi, S. S. M., Erdogmus, D., & Gholipour, A. (2017, September). Tversky loss function for image segmentation using 3D fully convolutional deep networks. In International Workshop on Machine Learning in Medical Imaging (pp. 379-387). Springer, Cham.

[2] Abraham, N., & Khan, N. M. (2019, April). A novel focal Tversky loss function with improved attention U-Net for lesion segmentation. In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) (pp. 683-687). IEEE.

Initializes Loss class.

Parameters
  • reduction – Type of tf.keras.losses.Reduction to apply to the loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see the custom training tutorial for more details.

  • name – Optional name for the instance.

get_config()[source]

Returns the config dictionary for a Loss instance.