Computing gradients across two models

Problem description

Let's say we are building a basic CNN that recognizes pictures of cats and dogs (a binary classifier).

An example of such a CNN could be the following:

model = Sequential([
  Conv2D(32, (3,3), input_shape=...),
  Activation('relu'),
  MaxPooling2D(pool_size=(2,2)),

  Conv2D(32, (3,3)),
  Activation('relu'),
  MaxPooling2D(pool_size=(2,2)),

  Conv2D(64, (3,3)),
  Activation('relu'),
  MaxPooling2D(pool_size=(2,2)),

  Flatten(),
  Dense(64),
  Activation('relu'),
  Dropout(0.5),
  Dense(1),
  Activation('sigmoid')
])

Let's also assume that we want to split the model into two parts, model_0 and model_1.

model_0 will handle the input, and model_1 will take the output of model_0 and use it as its input.

For example, the previous model would become:

model_0 = Sequential([
  Conv2D(32, (3,3), input_shape=...),
  Activation('relu'),
  MaxPooling2D(pool_size=(2,2)),

  Conv2D(32, (3,3)),
  Activation('relu'),
  MaxPooling2D(pool_size=(2,2)),

  Conv2D(64, (3,3)),
  Activation('relu'),
  MaxPooling2D(pool_size=(2,2))
])

model_1 = Sequential([
  Flatten(),
  Dense(64),
  Activation('relu'),
  Dropout(0.5),
  Dense(1),
  Activation('sigmoid')
])
How can I train these two models as if they were a single model? I have tried setting the gradients manually, but I don't understand how to pass the gradients from model_1 back to model_0:

for epoch in range(epochs):
    for step, (x_batch, y_batch) in enumerate(train_generator):

        # model 0
        with tf.GradientTape() as tape_0:
            y_pred_0 = model_0(x_batch, training=True)

        # model 1
        with tf.GradientTape() as tape_1:
            y_pred_1 = model_1(y_pred_0, training=True)

            loss_value = loss_fn(y_batch, y_pred_1)

        grads_1 = tape_1.gradient(loss_value, model_1.trainable_weights)
        # this only differentiates model_0's output w.r.t. its own weights,
        # with no connection to the loss computed on tape_1
        grads_0 = tape_0.gradient(y_pred_0, model_0.trainable_weights)
        optimizer.apply_gradients(zip(grads_1, model_1.trainable_weights))
        optimizer.apply_gradients(zip(grads_0, model_0.trainable_weights))

This approach of course doesn't work: I'm basically just training two models separately and then stitching them together, which is not what I want to achieve.

Here is a Google Colab notebook with a simplified version of this problem, using only two fully connected layers and two activation functions: https://colab.research.google.com/drive/14Px1rJtiupnB6NwtvbgeVYw56N1xM6JU#scrollTo=PeqtJJWS3wyG

Please note that I am aware of Sequential([model_0, model_1]), but that is not what I want to achieve. I want to perform the backpropagation step manually.

Also, I would like to keep using two separate tapes. The trick here is to use grads_1 to compute grads_0.

Any clues?


Solution

After asking for help and getting a better understanding of how automatic differentiation (autodiff) works, I managed to get a working, simplified example of what I wanted to achieve. Even though this approach does not fully solve the problem, it puts us a step forward in understanding how to tackle the problem at hand.

Reference model

I have simplified the model into a much smaller one:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dense, Layer, Flatten, Conv2D
import numpy as np

tf.random.set_seed(0)
# a batch of 3 10x10 single-channel images
x = tf.random.uniform((3, 10, 10, 1))
y = tf.cast(tf.random.uniform((3, 1)) > 0.5, tf.float32)

layer_0 = Sequential([Conv2D(filters=6, kernel_size=2, activation="relu")])
layer_1 = Sequential([Conv2D(filters=6, kernel_size=2, activation="relu")])
layer_2 = Sequential([Flatten(), Dense(1), Activation("sigmoid")])

loss_fn = tf.keras.losses.MeanSquaredError()

We split it into three parts: layer_0, layer_1, and layer_2. The vanilla approach is simply to chain everything together and compute the gradients one after the other (or in a single step):

# persistent=True so we can call tape.gradient multiple times below
with tf.GradientTape(persistent=True) as tape:
    out_layer_0 = layer_0(x)
    out_layer_1 = layer_1(out_layer_0)
    out_layer_2 = layer_2(out_layer_1)
    loss = loss_fn(y, out_layer_2)

The different gradients can then be computed with a simple call to tape.gradient:

ref_conv_dLoss_dWeights2 = tape.gradient(loss, layer_2.trainable_weights)
ref_conv_dLoss_dWeights1 = tape.gradient(loss, layer_1.trainable_weights)
ref_conv_dLoss_dWeights0 = tape.gradient(loss, layer_0.trainable_weights)

ref_conv_dLoss_dY = tape.gradient(loss, out_layer_2)
ref_conv_dLoss_dOut1 = tape.gradient(loss, out_layer_1)
ref_conv_dOut2_dOut1 = tape.gradient(out_layer_2, out_layer_1)
ref_conv_dLoss_dOut0 = tape.gradient(loss, out_layer_0)
ref_conv_dOut1_dOut0 = tape.gradient(out_layer_1, out_layer_0)
ref_conv_dOut0_dWeights0 = tape.gradient(out_layer_0, layer_0.trainable_weights)
ref_conv_dOut1_dWeights1 = tape.gradient(out_layer_1, layer_1.trainable_weights)
ref_conv_dOut2_dWeights2 = tape.gradient(out_layer_2, layer_2.trainable_weights)

We will use these values later to check the correctness of our approach.

Splitting the model with manual autodiff

By splitting, we mean that every layer_x needs to have its own GradientTape, responsible for generating its own gradients:

with tf.GradientTape(persistent=True) as tape_0:
    out_layer_0 = layer_0(x)

with tf.GradientTape(persistent=True) as tape_1:
    tape_1.watch(out_layer_0)
    out_layer_1 = layer_1(out_layer_0)

with tf.GradientTape(persistent=True) as tape_2:
    tape_2.watch(out_layer_1)
    out_layer_2 = layer_2(out_layer_1)
    loss = loss_fn(y, out_layer_2)

Now, simply using tape_n.gradient for every single step will not work: we are basically throwing away a lot of information that cannot be recovered afterwards.
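To make the information loss concrete, here is a tiny check (a sketch reusing the tensors and persistent tapes defined above, not part of the original answer): gradient on a non-scalar target returns the gradient of the sum of its elements, so the per-output sensitivities needed for the chain rule are collapsed, while batch_jacobian keeps them:

g = tape_1.gradient(out_layer_1, out_layer_0)        # same shape as out_layer_0: all outputs summed together
j = tape_1.batch_jacobian(out_layer_1, out_layer_0)  # batch + output dims + input dims
print(g.shape, j.shape)  # (3, 9, 9, 6) vs (3, 8, 8, 6, 9, 9, 6) with the toy shapes above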

Instead, we have to use tape.jacobian and tape.batch_jacobian, except for the loss, where a plain tape.gradient is enough since we only have a single scalar value as the source.

dOut0_dWeights0 = tape_0.jacobian(out_layer_0, layer_0.trainable_weights)

dOut1_dOut0 = tape_1.batch_jacobian(out_layer_1, out_layer_0)
dOut1_dWeights1 = tape_1.jacobian(out_layer_1, layer_1.trainable_weights)

dOut2_dOut1 = tape_2.batch_jacobian(out_layer_2, out_layer_1)
dOut2_dWeights2 = tape_2.jacobian(out_layer_2, layer_2.trainable_weights)

dLoss_dOut2 = tape_2.gradient(loss, out_layer_2) # or dL/dY

We will use a couple of utility functions to adjust the results as needed:


def add_missing_axes(source_tensor, target_tensor):
    len_missing_axes = len(target_tensor.shape) - len(source_tensor.shape)
    # note: the number of tf.newaxis is determined by the number of axis missing to reach
    # the same dimension of the target tensor
    assert len_missing_axes >= 0

    # convenience renaming
    source_tensor_extended = source_tensor
    # add every missing axis
    for _ in range(len_missing_axes):
        source_tensor_extended = source_tensor_extended[..., tf.newaxis]

    return source_tensor_extended

def upstream_gradient_loss_weights(dOutUpstream_dWeightsLocal, dLoss_dOutUpstream):
    dLoss_dOutUpstream_extended = add_missing_axes(dLoss_dOutUpstream, dOutUpstream_dWeightsLocal)
    # reduce over the first axes
    len_reduce = range(len(dLoss_dOutUpstream.shape))
    return tf.reduce_sum(dOutUpstream_dWeightsLocal * dLoss_dOutUpstream_extended, axis=len_reduce)

def upstream_gradient_loss_out(dOutUpstream_dOutLocal, dLoss_dOutUpstream):
    dLoss_dOutUpstream_extended = add_missing_axes(dLoss_dOutUpstream, dOutUpstream_dOutLocal)
    len_reduce = range(len(dLoss_dOutUpstream.shape))[1:]
    return tf.reduce_sum(dOutUpstream_dOutLocal * dLoss_dOutUpstream_extended, axis=len_reduce)
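
For intuition, and glossing over the exact axis bookkeeping, these helpers implement the standard chain-rule contractions of a local Jacobian with the upstream gradient (the notation below is mine, not from the original answer):

dLoss/dW = sum over every output element i of (dOut_i / dW) * (dLoss / dOut_i)
dLoss/dOutLocal = sum, per sample, over upstream output elements i of (dOutUpstream_i / dOutLocal) * (dLoss / dOutUpstream_i)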

Finally, we can apply the chain rule:


dOut2_dOut1 = tape_2.batch_jacobian(out_layer_2, out_layer_1)
dOut2_dWeights2 = tape_2.jacobian(out_layer_2, layer_2.trainable_weights)

dLoss_dOut2 = tape_2.gradient(loss, out_layer_2) # or dL/dY
dLoss_dWeights2 = upstream_gradient_loss_weights(dOut2_dWeights2[0], dLoss_dOut2)
dLoss_dBias2 = upstream_gradient_loss_weights(dOut2_dWeights2[1], dLoss_dOut2)

dLoss_dOut1 = upstream_gradient_loss_out(dOut2_dOut1, dLoss_dOut2)
dLoss_dWeights1 = upstream_gradient_loss_weights(dOut1_dWeights1[0], dLoss_dOut1)
dLoss_dBias1 = upstream_gradient_loss_weights(dOut1_dWeights1[1], dLoss_dOut1)

dLoss_dOut0 = upstream_gradient_loss_out(dOut1_dOut0, dLoss_dOut1)
dLoss_dWeights0 = upstream_gradient_loss_weights(dOut0_dWeights0[0], dLoss_dOut0)
dLoss_dBias0 = upstream_gradient_loss_weights(dOut0_dWeights0[1], dLoss_dOut0)

print("dLoss_dWeights2 valid:", tf.experimental.numpy.allclose(ref_conv_dLoss_dWeights2[0], dLoss_dWeights2).numpy())
print("dLoss_dBias2 valid:", tf.experimental.numpy.allclose(ref_conv_dLoss_dWeights2[1], dLoss_dBias2).numpy())
print("dLoss_dWeights1 valid:", tf.experimental.numpy.allclose(ref_conv_dLoss_dWeights1[0], dLoss_dWeights1).numpy())
print("dLoss_dBias1 valid:", tf.experimental.numpy.allclose(ref_conv_dLoss_dWeights1[1], dLoss_dBias1).numpy())
print("dLoss_dWeights0 valid:", tf.experimental.numpy.allclose(ref_conv_dLoss_dWeights0[0], dLoss_dWeights0).numpy())
print("dLoss_dBias0 valid:", tf.experimental.numpy.allclose(ref_conv_dLoss_dWeights0[1], dLoss_dBias0).numpy())

The output is:

dLoss_dWeights2 valid: True
dLoss_dBias2 valid: True
dLoss_dWeights1 valid: True
dLoss_dBias1 valid: True
dLoss_dWeights0 valid: True
dLoss_dBias0 valid: True

since all the values are close to each other. Note that with the Jacobian approach there is some degree of error/approximation, around 1e-7, but I think this is good enough.

Gotchas

For extremely small or toy models this works perfectly fine. However, in real scenarios you will have large images with lots of dimensions, which is far from ideal when dealing with Jacobians, since they quickly grow to very high dimensionality. But that is a problem of its own.
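As a closing side note, and not part of the solution above: TensorFlow's GradientTape.gradient accepts an output_gradients argument, which lets you chain two tapes with a vector-Jacobian product instead of materializing any full Jacobian. A minimal sketch, reusing the names from the question (model_0, model_1, loss_fn, optimizer) and assuming a single batch (x_batch, y_batch):

with tf.GradientTape() as tape_0:
    y_pred_0 = model_0(x_batch, training=True)

with tf.GradientTape() as tape_1:
    tape_1.watch(y_pred_0)  # y_pred_0 is a plain tensor, so it must be watched explicitly
    y_pred_1 = model_1(y_pred_0, training=True)
    loss_value = loss_fn(y_batch, y_pred_1)

# gradients for model_1 and dLoss/dy_pred_0 from the same tape
grads_1, dloss_dypred0 = tape_1.gradient(loss_value, [model_1.trainable_weights, y_pred_0])
# chain into model_0: a vector-Jacobian product instead of a full Jacobian
grads_0 = tape_0.gradient(y_pred_0, model_0.trainable_weights, output_gradients=dloss_dypred0)

optimizer.apply_gradients(zip(grads_1, model_1.trainable_weights))
optimizer.apply_gradients(zip(grads_0, model_0.trainable_weights))

This keeps memory proportional to the activations rather than to Jacobians, which sidesteps the dimensionality problem described above.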

You can read more on the topic in the following resources:

  • (EN) https://mblondel.org/teaching/autodiff-2020.pdf
  • (EN) https://www.sscardapane.it/assets/files/nnds2021/Lecture_3_fully_connected.pdf
  • (ITA) https://iaml.it/blog/differenziazione-automatica-parte-1
