Freezing a TensorFlow 2 graph to a pb file

We deploy lots of TF1 models via graph freezing:

tf.train.write_graph(self.session.graph_def, some_path)

# get graph definitions with weights
output_graph_def = tf.graph_util.convert_variables_to_constants(
        self.session,  # The session is used to retrieve the weights
        self.session.graph.as_graph_def(),  # The graph_def is used to retrieve the nodes
        output_nodes,  # The output node names are used to select the useful nodes
)

# optimize graph
if optimize:
    output_graph_def = optimize_for_inference_lib.optimize_for_inference(
            output_graph_def, input_nodes, output_nodes, tf.float32.as_datatype_enum
    )

with open(path, "wb") as f:
    f.write(output_graph_def.SerializeToString())

and then load them with:

with tf.Graph().as_default() as graph:
    with graph.device("/" + args[name].processing_unit):
        tf.import_graph_def(graph_def, name="")
        for key, value in inputs.items():
            self.input[key] = graph.get_tensor_by_name(value + ":0")

We would like to save TF2 models in a similar way: a single protobuf file that includes both the graph and the weights. How can I achieve this?

I know of these saving methods:

- keras.experimental.export_saved_model(model, 'path_to_saved_model'), which is experimental and creates multiple files :(
- model.save('path_to_my_model.h5'), which saves the h5 format :(
- tf.saved_model.save(self.model, "test_x_model"), which again saves multiple files :(
4 Answers

The code above is a bit dated. It converts a vgg16 model fine, but fails when converting a resnet_v2_50 model. My tf version is tf 2.2.0. I finally found a code snippet that works:

import tensorflow as tf
from tensorflow import keras
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
import numpy as np


# use resnet50_v2 as an example
model = tf.keras.applications.ResNet50V2()
 
full_model = tf.function(lambda x: model(x))
full_model = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Get frozen ConcreteFunction
frozen_func = convert_variables_to_constants_v2(full_model)
frozen_func.graph.as_graph_def()
 
layers = [op.name for op in frozen_func.graph.get_operations()]
print("-" * 50)
print("Frozen model layers: ")
for layer in layers:
    print(layer)
 
print("-" * 50)
print("Frozen model inputs: ")
print(frozen_func.inputs)
print("Frozen model outputs: ")
print(frozen_func.outputs)
 
# Save frozen graph from frozen ConcreteFunction to hard drive
tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                  logdir="./frozen_models",
                  name="frozen_graph.pb",
                  as_text=False)
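
For completeness, here is a minimal sketch of loading such a frozen graph back and running inference in TF2 without sessions, following the wrap_function pattern from the repo referenced below. The tensor names "x:0" and "Identity:0" are assumptions; verify them against the inputs/outputs printed above for your own model.

import tensorflow as tf

# Hedged loading sketch: the tensor names are assumptions taken from the
# typical tf.function(lambda x: model(x)) export shown above.
def wrap_frozen_graph(graph_def, inputs, outputs):
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")
    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
    import_graph = wrapped_import.graph
    return wrapped_import.prune(
        tf.nest.map_structure(import_graph.as_graph_element, inputs),
        tf.nest.map_structure(import_graph.as_graph_element, outputs))

with open("./frozen_models/frozen_graph.pb", "rb") as fh:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(fh.read())

frozen_func = wrap_frozen_graph(graph_def, inputs="x:0", outputs="Identity:0")
predictions = frozen_func(tf.random.normal([1, 224, 224, 3]))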

Reference: https://github.com/leimao/Frozen_Graph_TensorFlow/tree/master/TensorFlow_v2 (updated)


The reference link you shared seems to have expired. Could you share it again? - Jash Shah
Sorry for the late reply. The link has been updated, please check. - zhenglin Li

I use TF2 to convert models, as follows:

  1. pass keras.callbacks.ModelCheckpoint(save_weights_only=True) to model.fit to save a checkpoint during training;
  2. after training, load the checkpoint with self.model.load_weights(self.checkpoint_path), then convert it to h5 format: self.model.save(h5_path, overwrite=True, include_optimizer=False) (a sketch of these two steps follows the conversion code below);
  3. convert h5 to pb:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras

# necessary !!! (the session-based freezing below requires TF1 graph mode)
tf.compat.v1.disable_eager_execution()

h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()
# save pb
with K.get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
logging.info("save pb successfully!")
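
As a sketch of steps 1 and 2 above (the model, data, and paths are hypothetical placeholders, not from the original answer):

import numpy as np
from tensorflow import keras

# Hypothetical stand-ins for the real model and training data.
model = keras.Sequential([keras.layers.Dense(10, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
x_train = np.random.rand(32, 4).astype('float32')
y_train = np.random.rand(32, 10).astype('float32')

# Step 1: save weights-only checkpoints during training.
checkpoint_path = '/path/to/ckpt/weights.ckpt'
cb = keras.callbacks.ModelCheckpoint(checkpoint_path, save_weights_only=True)
model.fit(x_train, y_train, epochs=2, callbacks=[cb])

# Step 2: reload the checkpointed weights and export to h5 without the optimizer.
model.load_weights(checkpoint_path)
model.save('/path/to/model.h5', overwrite=True, include_optimizer=False)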

Thanks for this code! A slight correction: write_graph takes the filename and the directory name as two separate arguments. Other than that it seems to work. - YvesQuemener
This works, but it seems to close the session; in my use case we need to save and export the model to a .pb file every n epochs (using an on_epoch_end callback). Any suggestions? - Luis Leal
Update: solved it by not using "with K.get_session() as sess" and instead storing the default Keras session with "session = tf.compat.v1.keras.backend.get_session()". - Luis Leal
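
A minimal sketch of that variant (an assumption reconstructed from the comment, not Luis Leal's actual code): fetch the default Keras session instead of entering it with a context manager, so the session is not closed and training can continue between exports.

import tensorflow as tf
from tensorflow.compat.v1 import graph_util

# Grab the default Keras session without closing it afterwards, then
# freeze and write the graph as in the answer above ('model' is assumed
# to be an already-built Keras model).
session = tf.compat.v1.keras.backend.get_session()
output_names = [out.op.name for out in model.outputs]
input_graph_def = session.graph.as_graph_def()
graph = graph_util.remove_training_nodes(input_graph_def)
graph_frozen = graph_util.convert_variables_to_constants(session, graph, output_names)
tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)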

I ran into a similar problem and found the solution below:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
from tensorflow.python.tools import optimize_for_inference_lib

loaded = tf.saved_model.load('models/mnist_test')
infer = loaded.signatures['serving_default']
f = tf.function(infer).get_concrete_function(
                            flatten_input=tf.TensorSpec(shape=[None, 28, 28, 1], 
                                                        dtype=tf.float32)) # change this line for your own inputs
f2 = convert_variables_to_constants_v2(f)
graph_def = f2.graph.as_graph_def()
optimize = True  # set False to skip the inference optimizations below
if optimize:
    # Remove NoOp nodes
    for i in reversed(range(len(graph_def.node))):
        if graph_def.node[i].op == 'NoOp':
            del graph_def.node[i]
    for node in graph_def.node:
        for i in reversed(range(len(node.input))):
            if node.input[i][0] == '^':
                del node.input[i]
    # Parse graph's inputs/outputs
    graph_inputs = [x.name.rsplit(':')[0] for x in f2.inputs]
    graph_outputs = [x.name.rsplit(':')[0] for x in f2.outputs]
    graph_def = optimize_for_inference_lib.optimize_for_inference(graph_def,
                                                                  graph_inputs,
                                                                  graph_outputs,
                                                                  tf.float32.as_datatype_enum)
# Export frozen graph
with tf.io.gfile.GFile('optimized_graph.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())
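
In case it helps, here is a minimal sketch (an assumption, not part of the original answer) of producing a SavedModel like the 'models/mnist_test' one loaded above. The architecture is a placeholder whose first layer yields the flatten_input signature name used in get_concrete_function:

import tensorflow as tf

# Hypothetical MNIST-shaped model; saving it yields a 'serving_default'
# signature whose input is named after the first layer ('flatten_input').
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1), name='flatten'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
tf.saved_model.save(model, 'models/mnist_test')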


Currently the way I do it is TF2 -> SavedModel (via keras.experimental.export_saved_model) -> frozen_graph.pb (via the freeze_graph tool, which can take a SavedModel as input). I don't know whether this is the "recommended" way, though.

Also, I still don't know how to load the frozen model back and run inference "the TF2 way" (i.e. no graphs, sessions, etc.).

You can also look at keras.save_model('path', save_format='tf'), which seems to produce checkpoint files (you still have to freeze those, so I personally think the SavedModel route is better).
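
A rough sketch of that freeze_graph step, using the Python API rather than the CLI; the output node name is an assumption (TF2 Keras exports typically expose a StatefulPartitionedCall node), so inspect your own graph first:

from tensorflow.python.tools import freeze_graph

# Freeze a SavedModel directory into a single pb file. The SavedModel
# path and the output node name below are placeholders/assumptions.
freeze_graph.freeze_graph(
    input_graph=None,
    input_saver=None,
    input_binary=False,
    input_checkpoint=None,
    output_node_names='StatefulPartitionedCall',  # assumption: check your graph
    restore_op_name=None,
    filename_tensor_name=None,
    output_graph='frozen_graph.pb',
    clear_devices=False,
    initializer_nodes=None,
    input_saved_model_dir='path_to_saved_model',
    saved_model_tags='serve')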

