
Deploying YOLOv5 with OpenVINO: Converting to IR Files


Environment:

Windows: 10
YOLOv5: 3.1
Python: 3.7.10
torch: 1.7.0+cu101
torchvision: 0.8.1+cu101
OpenVINO: openvino_.2.185
Anaconda: 2.0.4

Run the following script to temporarily set up the OpenVINO environment variables:

cd C:\Program Files (x86)\Intel\openvino_.2.185\bin

setupvars.bat

(Python37) C:\Program Files (x86)\Intel\openvino_.2.185\bin>setupvars.bat

Python 3.7.10

[setupvars.bat] OpenVINO environment initialized
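
If importing the OpenVINO Python API fails in a new shell, setupvars.bat has not taken effect there. Below is a minimal sanity-check sketch you can run in the same shell; the script name quick_env_check.py is just a hypothetical helper, not part of the toolkit:

# quick_env_check.py - hypothetical helper; run it in the shell where setupvars.bat was executed
import sys

import torch
import torchvision
import cv2

print("Python:", sys.version.split()[0])
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("OpenCV:", cv2.__version__)

try:
    # This import only succeeds once setupvars.bat has added OpenVINO to PATH/PYTHONPATH.
    from openvino.inference_engine import IECore
    print("Inference Engine devices:", IECore().available_devices)
except ImportError as err:
    print("OpenVINO Python API not found - re-run setupvars.bat in this shell:", err)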

Run the following commands to generate the IR for the YOLOv5 model:

cd C:\Program Files (x86)\Intel\openvino_.2.185\deployment_tools\model_optimizer

python mo.py --input_model M:\yolov5-3.1\yolov5-v3\best.onnx --model_name M:\yolov5-3.1\best -s 255 --reverse_input_channels --output Conv_487,Conv_471,Conv_455

Note: the mo.py file is located under \openvino_[version]\deployment_tools\model_optimizer.

--input_model specifies the pre-trained model.
--model_name sets the network name used for the generated IR and the output .xml/.bin files.
-s means that all values coming from the original network inputs are divided by this value.
--reverse_input_channels switches the input channel order from RGB to BGR (or vice versa).
--output names the output operations of the model.
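
The three Conv_* names passed to --output are the final convolution layers of the three YOLOv5 detection heads, and their numbers differ from one export to another, so they normally have to be looked up in your own best.onnx (for example in a graph viewer such as Netron). Below is a rough sketch of listing the Conv node names with the onnx package; the package use and the script name are my own additions, not part of the original workflow:

# list_conv_nodes.py - hypothetical helper for finding the detection-head Conv names
import onnx

model = onnx.load(r"M:\yolov5-3.1\yolov5-v3\best.onnx")

# Collect the names of all Conv nodes in graph order; the three head convolutions
# (here Conv_487, Conv_471, Conv_455) are among the last ones before the Detect post-processing.
conv_names = [node.name for node in model.graph.node if node.op_type == "Conv"]

print("Total Conv nodes:", len(conv_names))
print("Last Conv nodes:", conv_names[-6:])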

(pytorch) C:\Program Files (x86)\Intel\openvino_.2.185\deployment_tools\model_optimizer>python mo.py --input_model M:\yolov5-3.1\yolov5-v3\best.onnx --model_name M:\yolov5-3.1\best -s 255 --reverse_input_channels --output Conv_487,Conv_471,Conv_455

Model Optimizer arguments:

Common parameters:

- Path to the Input Model: M:\yolov5-3.1\yolov5-v3\best.onnx

- Path for generated IR: C:\Program Files (x86)\Intel\openvino_.2.185\deployment_tools\model_optimizer.

- IR output name: M:\yolov5-3.1\best

- Log level: ERROR

- Batch: Not specified, inherited from the model

- Input layers: Not specified, inherited from the model

- Output layers: Conv_487,Conv_471,Conv_455

- Input shapes: Not specified, inherited from the model

- Mean values: Not specified

- Scale values: Not specified

- Scale factor: 255.0

- Precision of IR: FP32

- Enable fusing: True

- Enable grouped convolutions fusing: True

- Move mean values to preprocess section: None

- Reverse input channels: True

ONNX specific parameters:

Model Optimizer version: .2.0-1877-176bdf51370-releases//2

[ SUCCESS ] Generated IR version 10 model.

[ SUCCESS ] XML file: M:\yolov5-3.1\best.xml

[ SUCCESS ] BIN file: M:\yolov5-3.1\best.bin

[ SUCCESS ] Total execution time: 37.53 seconds.

It’s been a while, check for a new version of Intel® Distribution of OpenVINO™ toolkit here /content/www/us/en/develop/tools/openvino-toolkit/choose-download.html?cid=other&source=Prod&campid=ww__bu_IOTG&content=upg_pro&medium=organic_uid_agjj or on the GitHub*

Test the converted IR files with OpenVINO .2:

python yolov5_OV.2.py -i img.jpg -m best.xml

Part of the yolov5_OV.2.py file:

#!/usr/bin/env python
from __future__ import print_function, division
import logging
import os
import sys
from argparse import ArgumentParser, SUPPRESS
from math import exp as exp
from time import time

import numpy as np
import cv2
from openvino.inference_engine import IENetwork, IECore

logging.basicConfig(format="[ %(levelname)s ] %(message)s", level=logging.INFO, stream=sys.stdout)
log = logging.getLogger()


def build_argparser():
    parser = ArgumentParser(add_help=False)
    args = parser.add_argument_group('Options')
    args.add_argument('-h', '--help', action='help', default=SUPPRESS, help='Show this help message and exit.')
    args.add_argument("-m", "--model", help="Required. Path to an .xml file with a trained model.",
                      required=True, type=str)
    args.add_argument("-i", "--input", help="Required. Path to an image/video file. (Specify 'cam' to work with "
                                            "camera)", required=True, type=str)
    args.add_argument("-l", "--cpu_extension",
                      help="Optional. Required for CPU custom layers. Absolute path to a shared library with "
                           "the kernels implementations.", type=str, default=None)
    args.add_argument("-d", "--device",
                      help="Optional. Specify the target device to infer on; CPU, GPU, FPGA, HDDL or MYRIAD is"
                           " acceptable. The sample will look for a suitable plugin for device specified. "
                           "Default value is CPU", default="CPU", type=str)
    args.add_argument("--labels", help="Optional. Labels mapping file", default=None, type=str)
    args.add_argument("-t", "--prob_threshold", help="Optional. Probability threshold for detections filtering",
                      default=0.5, type=float)
    args.add_argument("-iout", "--iou_threshold", help="Optional. Intersection over union threshold for overlapping "
                                                       "detections filtering", default=0.4, type=float)
    args.add_argument("-ni", "--number_iter", help="Optional. Number of inference iterations", default=1, type=int)
    args.add_argument("-pc", "--perf_counts", help="Optional. Report performance counters", default=False,
                      action="store_true")
    args.add_argument("-r", "--raw_output_message", help="Optional. Output inference results raw values showing",
                      default=False, action="store_true")
    args.add_argument("--no_show", help="Optional. Don't show output", action='store_true')
    return parser
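
The excerpt above only defines the argument parser; the rest of yolov5_OV.2.py is not shown here. As a rough sketch of what the loading and inference steps look like with this Inference Engine API (the preprocessing details and variable names below are my assumptions, not the script's actual code):

# Rough sketch of the load-and-infer flow; build_argparser comes from the excerpt above.
import os

import cv2
from openvino.inference_engine import IECore

args = build_argparser().parse_args()

ie = IECore()
weights = os.path.splitext(args.model)[0] + ".bin"
net = ie.read_network(model=args.model, weights=weights)   # best.xml + best.bin

input_blob = next(iter(net.input_info))                    # older releases use net.inputs instead
_, _, h, w = net.input_info[input_blob].input_data.shape

exec_net = ie.load_network(network=net, device_name=args.device)

frame = cv2.imread(args.input)                             # e.g. img.jpg
image = cv2.resize(frame, (w, h)).transpose((2, 0, 1))     # HWC -> CHW
image = image.reshape((1, 3, h, w))

outputs = exec_net.infer(inputs={input_blob: image})
for layer_name, blob in outputs.items():                   # one blob per requested Conv_* output
    print(layer_name, blob.shape)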

If you need to convert yolov5s.pt to an IR file for OpenVINO deployment, here is the download link for yolov5s.pt:

/ultralytics/yolov5/releases/download/v3.0/yolov5s.pt
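
mo.py takes an ONNX file as input, so yolov5s.pt (or your own best.pt) first has to be exported to ONNX from inside the YOLOv5 3.1 repository. In that release the exporter is models/export.py; a typical invocation looks like the one below (the exact flags are from memory of that release, so check python models/export.py --help if they differ):

python models/export.py --weights yolov5s.pt --img 640 --batch 1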

Custom training to generate a weight file:

python train.py --img 640 --batch-size 2 --epoch 5 --data ./hand_data/hand.yaml --cfg ./models/yolov5s.yaml --weights '' --workers 0 --nosave --cache --device cpu

--img: input image resolution; nargs='+' means one or more values can be passed.
--batch-size: batch size, i.e. the number of samples used in one training step; lower it if your GPU cannot keep up.
--epoch: total number of training epochs; one epoch means every sample in the training set is used once. Larger values give a more accurate model but a longer training time.
--data: dataset configuration file (dataset paths, class names, etc.), written in the same format as the coco.yaml dataset file.
--cfg: model configuration file defining the network structure, here the yolov5s.yaml used in the command above.
--weights: pre-trained weights to start from; you can use yolov5s.pt in the repository root or runs/train/exp/weights/best.pt. If left empty, training starts from scratch.
--workers: maximum number of dataloader workers; 0 is recommended.
--nosave: do not save intermediate models; default False.
--cache: cache images in memory beforehand to speed up training; default False.
--device: training device, e.g. cpu; 0 (one GPU, cuda:0); 0,1,2,3 (multiple GPUs). If left empty, training defaults to the machine's own GPU or CPU (see the quick check below).
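
For the --device flag, a quick way to see which devices PyTorch can actually use on your machine (a small standalone check, not part of train.py):

# Quick check of the devices --device can target
import torch

print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())
if torch.cuda.is_available():
    print("Device 0:", torch.cuda.get_device_name(0))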

The following is used when testing with a weight file produced by custom training.

The hand.yaml file:

train: ../hand_data/images/
val: ../hand_data/images/
nc: 1
names: ['1']
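
A small optional sanity check is to load the yaml and confirm that nc matches the number of class names and that the image directories exist; the helper below (check_data.py, using PyYAML) is my own addition, not part of the YOLOv5 repository:

# check_data.py - hypothetical helper, not part of the YOLOv5 repository
import os

import yaml  # PyYAML

with open("./hand_data/hand.yaml", "r") as f:
    data = yaml.safe_load(f)

assert data["nc"] == len(data["names"]), "nc must equal the number of class names"

for key in ("train", "val"):
    path = data[key]
    print(key, path, "exists" if os.path.isdir(path) else "MISSING")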

Dataset annotation tool: LabelImg

Download: /tzutalin/labelImg
