tfmri.losses.ConfusionLoss¶
- class ConfusionLoss(average='macro', class_weights=None, reduction='auto', name='confusion_loss')[source]¶
Bases: keras.losses.Loss

Abstract base class for losses derived from the confusion matrix.
A confusion matrix is a table that reports the number of true positives, false positives, true negatives and false negatives.
This class provides a common base for losses that are computed from the values of the confusion matrix.
This class’s call method computes the confusion matrix and then calls the result method. Subclasses are expected to implement result to compute the loss value based on the confusion matrix. Then, the average is computed according to the configuration.

This class exposes the attributes true_positives, true_negatives, false_positives and false_negatives for use by subclasses. Each of these is a list containing one value for each class.

This loss may be used for binary, multi-class and multi-label classification.
Inputs y_true and y_pred are expected to have shape [..., num_classes], with channel i containing labels/predictions for class i. y_true[..., i] is 1 if the element represented by y_true[...] is a member of class i and 0 otherwise. y_pred[..., i] is the predicted probability, in the range [0.0, 1.0], that the element represented by y_pred[...] is a member of class i.

This class further assumes that inputs
y_true and y_pred have shape [batch_size, ..., num_classes]. The loss is computed for each batch element y_true[i, ...] and y_pred[i, ...], and then reduced over this dimension as specified by the reduction argument.

This loss works for binary, multi-class and multi-label classification and/or segmentation. In multi-class/multi-label problems, the different classes are combined according to the average and class_weights arguments. Argument average can take one of the following values:

- 'micro': Calculate the loss globally by counting the total number of true positives, true negatives, false positives and false negatives.
- 'macro': Calculate the loss for each label, and return their unweighted mean. This does not take label imbalance into account.
- 'weighted': Calculate the loss for each label, and find their average weighted by class_weights. If class_weights is None, the classes are weighted by support (the number of true instances for each label). This alters 'macro' to account for label imbalance.
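As a sketch of how these pieces fit together, the following NumPy snippet (illustrative only, not the tfmri implementation) accumulates soft per-class confusion counts from one-hot labels and per-class probabilities, then contrasts 'macro' and 'micro' averaging using a Dice-style score derived from the counts:

```python
import numpy as np

# Three elements, two classes: y_true is one-hot, y_pred holds
# per-class probabilities, as described above.
y_true = np.array([[1.0, 0.0],   # element 0 belongs to class 0
                   [0.0, 1.0],   # element 1 belongs to class 1
                   [1.0, 0.0]])  # element 2 belongs to class 0
y_pred = np.array([[0.9, 0.1],
                   [0.2, 0.8],
                   [0.4, 0.6]])

# One value per class, summed over the element axis ("soft" counts,
# since y_pred holds probabilities rather than hard 0/1 decisions).
tp = np.sum(y_true * y_pred, axis=0)                  # true positives
fp = np.sum((1.0 - y_true) * y_pred, axis=0)          # false positives
fn = np.sum(y_true * (1.0 - y_pred), axis=0)          # false negatives
tn = np.sum((1.0 - y_true) * (1.0 - y_pred), axis=0)  # true negatives

# A toy per-class score built from the counts (a Dice-style ratio):
# 'macro' takes the unweighted mean of the per-class scores, while
# 'micro' pools the counts over classes first, then scores once.
macro = np.mean(2 * tp / (2 * tp + fp + fn))
micro = 2 * tp.sum() / (2 * tp.sum() + fp.sum() + fn.sum())
```

A 'weighted' average would replace the unweighted mean with np.average over the per-class scores, weighted by class_weights (or by support, tp + fn, when class_weights is None).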
- Parameters
average – A str. The class averaging mode. Valid values are 'micro', 'macro' and 'weighted'. Defaults to 'macro'. See above for details on the different modes.

class_weights – A list of float values. The weights for each class. Must have length equal to the number of classes. This parameter is only relevant if average is 'weighted'. Defaults to None.

reduction – A value in tf.keras.losses.Reduction. The type of loss reduction.

name – A str. The name of the loss instance.
Initializes the Loss class.

- Parameters
reduction – Type of tf.keras.losses.Reduction to apply to the loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details: https://www.tensorflow.org/tutorials/distribute/custom_training
name – Optional name for the instance.
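The semantics of the two concrete reduction modes can be sketched numerically (a NumPy illustration of what they compute, not the Keras implementation): SUM adds the per-sample losses, while SUM_OVER_BATCH_SIZE divides that sum by the number of samples.

```python
import numpy as np

# Hypothetical per-batch-element loss values, e.g. as returned by
# call(y_true, y_pred) before any reduction is applied.
per_sample_losses = np.array([0.5, 1.5, 1.0, 3.0])

# Reduction.SUM: plain sum over the batch dimension.
sum_reduction = per_sample_losses.sum()

# Reduction.SUM_OVER_BATCH_SIZE (the usual resolution of AUTO):
# the same sum, divided by the number of samples.
sum_over_batch = per_sample_losses.sum() / per_sample_losses.size
```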
- call(y_true, y_pred)[source]¶
Invokes the Loss instance.

- Parameters
y_true – Ground truth values, with shape = [batch_size, d0, .. dN], except for sparse loss functions such as sparse categorical crossentropy, where shape = [batch_size, d0, .. dN-1].

y_pred – The predicted values, with shape = [batch_size, d0, .. dN].
- Returns
Loss values with shape [batch_size, d0, .. dN-1].
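To make the call-then-result flow concrete, here is a conceptual mimic of the subclassing pattern described above, written in plain NumPy rather than Keras. The class names and the simplified single-step reduction are hypothetical; the real ConfusionLoss derives from keras.losses.Loss and reduces over the batch dimension separately, but the division of labour between call (count accumulation) and result (per-class loss from the counts) is the one the documentation describes:

```python
import numpy as np

class MimicConfusionLoss:
    """Illustration only: accumulate confusion counts, then delegate
    the per-class loss computation to a subclass-provided result()."""

    def call(self, y_true, y_pred):
        # Accumulate one count per class, summing over all other axes.
        axes = tuple(range(y_true.ndim - 1))
        self.true_positives = np.sum(y_true * y_pred, axis=axes)
        self.false_positives = np.sum((1 - y_true) * y_pred, axis=axes)
        self.false_negatives = np.sum(y_true * (1 - y_pred), axis=axes)
        self.true_negatives = np.sum((1 - y_true) * (1 - y_pred), axis=axes)
        # Per-class results, combined here with a 'macro' (unweighted) mean.
        return np.mean(self.result())

    def result(self):
        raise NotImplementedError('Subclasses must implement result.')

class MimicDiceLoss(MimicConfusionLoss):
    def result(self):
        # One Dice-style loss value per class, built from the counts.
        tp, fp, fn = (self.true_positives, self.false_positives,
                      self.false_negatives)
        return 1.0 - 2 * tp / (2 * tp + fp + fn)

loss = MimicDiceLoss()
value = loss.call(np.array([[1.0, 0.0], [0.0, 1.0]]),
                  np.array([[0.8, 0.2], [0.3, 0.7]]))
```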