How to Convert a TensorFlow Model and Use It with the OpenVINO™ Toolkit





A very simple guide for any TensorFlow developer who wants to start their OpenVINO journey.
To run a network with the OpenVINO™ Toolkit, you first need to convert it to the Intermediate Representation (IR). For that you need the Model Optimizer, a command-line tool included in the OpenVINO™ Toolkit developer package. The easiest way to get it is from PyPI.
pip install openvino-dev
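Before converting anything, you can run a quick sanity check that the package installed correctly. The snippet below is a minimal sketch that imports the runtime API and lists the inference devices visible on your machine (the device list will differ between systems):
# Sanity check: import the Inference Engine and list the available devices
from openvino.inference_engine import IECore

ie = IECore()
print(ie.available_devices)  # e.g. ['CPU'] or ['CPU', 'GPU']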
The Model Optimizer supports TensorFlow models directly, so the next step is to run the following command in a terminal:
mo --input_model v3-small_224_1.0_float.pb --input_shape "[1,224,224,3]"
This converts the v3-small_224_1.0_float.pb model, which expects a single 224x224 RGB image as input. You can of course specify more parameters, such as preprocessing steps or the desired model precision (FP32 or FP16):
mo --input_model v3-small_224_1.0_float.pb --input_shape "[1,224,224,3]" --mean_values="[127.5,127.5,127.5]" --scale_values="[127.5]" --data_type FP16
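As a quick check of the arithmetic these mean and scale values imply, every input pixel p becomes (p - 127.5) / 127.5, which you can verify in plain NumPy (a small sketch, not part of the conversion itself):
import numpy as np

# (pixel - mean) / scale maps the 8-bit range 0..255 onto [-1, 1]
pixels = np.array([0.0, 127.5, 255.0])
print((pixels - 127.5) / 127.5)  # [-1.  0.  1.]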
The model will now normalize all pixels to the [-1, 1] value range, and inference will run in FP16 mode. After running the command, you should see output like the one below, listing all explicit and implicit parameters: the model path, input shape, chosen precision, channel reversal, mean and scale values, conversion parameters, and so on.
Exporting TensorFlow model to IR... This may take a few minutes.
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/adrian/repos/openvino_notebooks/notebooks/101-tensorflow-to-openvino/model/v3-small_224_1.0_float.pb
- Path for generated IR: /home/adrian/repos/openvino_notebooks/notebooks/101-tensorflow-to-openvino/model
- IR output name: v3-small_224_1.0_float
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,224,224,3]
- Mean values: [127.5,127.5,127.5]
- Scale values: [127.5]
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
- Inference Engine found in: /home/adrian/repos/openvino_notebooks/openvino_env/lib/python3.8/site-packages/openvino
Inference Engine version: 2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 2021.4.1-3926-14e67d86634-releases/2021/4
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/adrian/repos/openvino_notebooks/notebooks/101-tensorflow-to-openvino/model/v3-small_224_1.0_float.xml
[ SUCCESS ] BIN file: /home/adrian/repos/openvino_notebooks/notebooks/101-tensorflow-to-openvino/model/v3-small_224_1.0_float.bin
[ SUCCESS ] Total execution time: 9.97 seconds.
[ SUCCESS ] Memory consumed: 374 MB.
SUCCESS means everything was converted correctly. You should now have the IR, which consists of two files: .xml and .bin. You can load this network into the Inference Engine and run inference. The code below assumes your model is an ImageNet classifier.
import cv2
import numpy as np
from openvino.inference_engine import IECore
# Load the model
ie = IECore()
net = ie.read_network(model="v3-small_224_1.0_float.xml", weights="v3-small_224_1.0_float.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
input_key = next(iter(exec_net.input_info))
output_key = next(iter(exec_net.outputs.keys()))
# Load the image
# The MobileNet network expects images in RGB format
image = cv2.cvtColor(cv2.imread(filename="image.jpg"), code=cv2.COLOR_BGR2RGB)
# resize to MobileNet image shape
input_image = cv2.resize(src=image, dsize=(224, 224))
# transpose HWC -> CHW and add a batch dimension to match the network input shape (NCHW)
input_image = np.expand_dims(input_image.transpose(2, 0, 1), axis=0)
# Do inference
result = exec_net.infer(inputs={input_key: input_image})[output_key]
result_index = np.argmax(result)
# Convert the inference result to a class name.
imagenet_classes = open("imagenet_2012.txt").read().splitlines()
# The model description states that for this model, class 0 is background,
# so we add background at the beginning of imagenet_classes
imagenet_classes = ["background"] + imagenet_classes
print(imagenet_classes[result_index])
It works! You get the class of the image (for example, the flat-coated retriever in the picture below). You can try it yourself with this demo.
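If a single class name is not enough, you can also look at the best few predictions. The following is a small sketch that reuses the result and imagenet_classes variables from the code above and simply sorts the scores (it assumes the output contains one score per class):
# Print the five highest-scoring classes with their raw scores
scores = result.squeeze()
top5 = np.argsort(scores)[-5:][::-1]
for index in top5:
    print(imagenet_classes[index], float(scores[index]))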
If you would like to try OpenVINO in a more limited way, with fewer changes to your code, take a look at our TensorFlow integration plugin.
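As a rough idea of what that workflow looks like, the sketch below assumes the openvino-tensorflow package (pip install openvino-tensorflow) and a standard Keras model; apart from selecting the backend, the TensorFlow code stays unchanged (check the plugin's documentation for the exact, current API):
import tensorflow as tf
import openvino_tensorflow as ovtf

# Route supported TensorFlow operations to OpenVINO on the CPU
ovtf.set_backend("CPU")

# Any regular Keras/TensorFlow model works as usual
model = tf.keras.applications.MobileNetV3Small(weights="imagenet")
predictions = model(tf.zeros([1, 224, 224, 3]))  # dummy forward pass
print(predictions.shape)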