tfmri.losses.SSIMMultiscaleLoss
- class SSIMMultiscaleLoss(max_val=None, power_factors=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333), filter_size=11, filter_sigma=1.5, k1=0.01, k2=0.03, batch_dims=None, image_dims=None, rank=None, multichannel=True, complex_part=None, reduction='auto', name='ssim_multiscale_loss')
Bases: tensorflow_mri.python.losses.iqa_losses.LossFunctionWrapperIQA
Computes the multiscale structural similarity (MS-SSIM) loss.
The MS-SSIM loss is equal to \(1.0 - \textrm{MS-SSIM}\).
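As a minimal usage sketch (not taken from the library's own examples; it assumes tensorflow-mri is installed and importable as tensorflow_mri, and that the loss follows the standard Keras Loss calling convention):

```python
import tensorflow as tf
import tensorflow_mri as tfmri

# Instantiate the loss. max_val=1.0 matches floating-point images in [0, 1].
loss_fn = tfmri.losses.SSIMMultiscaleLoss(max_val=1.0)

# 2D single-channel images with shape [batch, height, width, channels].
# With the default 5 scales and filter_size=11, each spatial dimension
# should be at least about 11 * 2**4 = 176 pixels so that the smallest
# scale still fits the Gaussian filter.
y_true = tf.random.uniform([4, 192, 192, 1])
y_pred = tf.random.uniform([4, 192, 192, 1])

loss = loss_fn(y_true, y_pred)  # scalar tensor, 1.0 - MS-SSIM (reduced over the batch)
print(loss)

# The instance can also be passed to Keras training directly, e.g.:
# model.compile(optimizer='adam', loss=loss_fn)
```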
- Parameters
max_val – The dynamic range of the images (i.e., the difference between the maximum and the minimum allowed values). Defaults to 1 for floating-point input images and MAX for integer input images, where MAX is the largest positive representable number for the data type.
power_factors – A list of weights for each of the scales. The length of the list determines the number of scales. Index 0 is the unscaled resolution's weight, and each increasing scale corresponds to the image being downsampled by 2. Defaults to (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), which are the values obtained in the original paper.
filter_size – The size of the Gaussian filter. Defaults to 11.
filter_sigma – The standard deviation of the Gaussian filter. Defaults to 1.5.
k1 – Factor used to calculate the regularization constant for the luminance term, as C1 = (k1 * max_val) ** 2. Defaults to 0.01.
k2 – Factor used to calculate the regularization constant for the contrast term, as C2 = (k2 * max_val) ** 2. Defaults to 0.03.
batch_dims – An int. The number of batch dimensions in the input images. If None, it is inferred from the inputs and image_dims as (rank of inputs) - image_dims - 1. If image_dims is also None, then batch_dims defaults to 1. batch_dims can always be inferred if image_dims was specified, so you only need to provide one of the two.
image_dims – An int. The number of spatial dimensions in the input images. If None, it is inferred from the inputs and batch_dims as (rank of inputs) - batch_dims - 1. Defaults to None. image_dims can always be inferred if batch_dims was specified, so you only need to provide one of the two.
rank – An int. The number of spatial dimensions. Must be 2 or 3. Defaults to tf.rank(y_true) - 2. In other words, if rank is not explicitly set, y_true and y_pred should have shape [batch, height, width, channels] when processing 2D images or [batch, depth, height, width, channels] when processing 3D images.
multichannel – A boolean. Whether multichannel computation is enabled. If False, the inputs y_true and y_pred are not expected to have a channel dimension, i.e. they should have shape batch_shape + [height, width] (2D) or batch_shape + [depth, height, width] (3D). See the sketch after this list for an example.
complex_part – The part of a complex input to be used in the computation of the metric. Must be one of 'real', 'imag', 'abs' or 'angle'. Note that real and imaginary parts, as well as angles, will be scaled to avoid negative numbers.
reduction – Type of tf.keras.losses.Reduction to apply to the loss. Default value is AUTO.
name – String name of the loss instance.
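The snippet below sketches how the shape-related arguments interact, as an illustration of the behavior described above rather than an official example: it disables the channel dimension and processes 3D images, and shortens power_factors to two scales so that small test volumes still satisfy the minimum size imposed by the Gaussian filter.

```python
import tensorflow as tf
import tensorflow_mri as tfmri

# 3D inputs without a channel axis: shape = [batch, depth, height, width].
y_true = tf.random.uniform([2, 64, 64, 64])
y_pred = tf.random.uniform([2, 64, 64, 64])

loss_fn = tfmri.losses.SSIMMultiscaleLoss(
    max_val=1.0,
    power_factors=(0.5, 0.5),  # only two scales, so 64-voxel volumes suffice
    image_dims=3,              # three spatial dimensions
    multichannel=False)        # inputs carry no channel dimension

print(loss_fn(y_true, y_pred))
```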
References
- 1. Zhao, H., Gallo, O., Frosio, I., & Kautz, J. (2016). Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging, 3(1), 47-57.
DEPRECATED FUNCTION ARGUMENTS
Deprecated: the following arguments are deprecated: rank. They will be removed after 2022-09-01. Instructions for updating: use the image_dims argument instead.
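A migration sketch for the deprecated rank argument, resting on the statement above that rank and image_dims both denote the number of spatial dimensions:

```python
import tensorflow_mri as tfmri

# Before (deprecated):
# loss_fn = tfmri.losses.SSIMMultiscaleLoss(rank=3)

# After (preferred):
loss_fn = tfmri.losses.SSIMMultiscaleLoss(image_dims=3)
```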