
Training a Classification Model with Transfer Learning: Practice, Part One

Date: 2020-03-07 21:11:18


Contents: Preface | Data acquisition and preprocessing | Building the model | Model parameter count and FLOPs | Testing the model

Preface

To keep things brief, this post does not cover any training. It only walks through preparing the data, building the model, and running inference with randomly initialized weights.

How to use a pretrained model and run the full training pipeline will be covered in later posts.

Data Acquisition and Preprocessing

Dataset: 102 Category Flower Dataset

Download link

It contains 102 flower categories, each with between 40 and 258 images. The images show large variations in scale, pose, and lighting. In addition, some categories have large within-class variation, while several categories are very similar to one another.

```shell
!unzip flower_data.zip
```

```python
# Import the required libraries
from collections import OrderedDict

import numpy as np
import torch
from torch import nn, optim
from torchvision import datasets, transforms, models
import torchvision.transforms.functional as TF
from torch.utils.data import Subset
from thop import profile, clever_format
from torchsummary import summary
from PIL import Image
```

```python
data_dir = 'flower_data'
```

```python
input_size = 224

# Mean and standard deviation used for normalization (ImageNet statistics)
normalize_mean = np.array([0.485, 0.456, 0.406])
normalize_std = np.array([0.229, 0.224, 0.225])
```
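As a quick illustration (not part of the original pipeline), here is what this normalization does to a single mid-gray pixel with RGB value (0.5, 0.5, 0.5) — each channel is shifted by its mean and scaled by its standard deviation independently:

```python
import numpy as np

normalize_mean = np.array([0.485, 0.456, 0.406])
normalize_std = np.array([0.229, 0.224, 0.225])

pixel = np.array([0.5, 0.5, 0.5])          # a mid-gray pixel in [0, 1] range
normalized = (pixel - normalize_mean) / normalize_std
print(normalized.round(4))                 # per-channel shift-and-scale result
```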

Building the Model

We use the ResNet provided by torchvision and modify its classifier to fit this dataset. The provided model was designed for ImageNet, so its classifier has 1000 output classes, which does not suit our data; we replace the original classifier with a 102-class one.

Other models could be used here as well; the model will be adjusted later based on results and requirements.
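A back-of-the-envelope check (mine, not from the post) of how many parameters the replacement head adds — a fully connected layer contributes a weight matrix plus a bias vector:

```python
# Parameter count of a Linear(512 -> 102) classification head
in_features, out_classes = 512, 102
head_params = in_features * out_classes + out_classes  # weights + biases
print(head_params)  # 52,326 parameters for the new head
```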

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f'Running on: {str(device).upper()}')
```

Running on: CUDA

```python
output_size = 102
model = models.resnet18()

# Replace the classifier with a 102-class head
classifier = OrderedDict()
classifier['layer0'] = nn.Linear(model.fc.in_features, output_size)
classifier['output_function'] = nn.LogSoftmax(dim=1)
model.fc = nn.Sequential(classifier)
model.to(device)
```

```
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Sequential(
    (layer0): Linear(in_features=512, out_features=102, bias=True)
    (output_function): LogSoftmax(dim=1)
  )
)
```

Checking the Model's Parameter Count and FLOPs

First, a distinction between FLOPS and FLOPs:

FLOPs: note the lowercase s. Short for floating point operations (the s marks the plural), i.e. the number of floating point operations — understood as the amount of computation. It can be used to measure the complexity of an algorithm or model.
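To make the notion of counting operations concrete, here is a hand computation (my own, illustrative) of the multiply-accumulate count for ResNet-18's first convolution on a 224x224 input: a 7x7 convolution from 3 to 64 channels with stride 2 produces a 112x112 feature map, and each output element costs one 7x7x3 dot product. Note that tools like thop count MACs; one MAC is commonly counted as two FLOPs (a multiply plus an add).

```python
# MAC count for conv1 of ResNet-18 on a 224x224 image
out_h = out_w = 112                 # 224 / stride 2 with padding 3
out_channels = 64
macs_per_output = 7 * 7 * 3         # one 7x7x3 dot product per output element
conv1_macs = out_h * out_w * out_channels * macs_per_output
print(conv1_macs)                   # roughly 0.118 G MACs for this single layer
```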

FLOPS: note the uppercase S. Short for floating point operations per second, a measure of hardware compute speed.

Reference: Zhihu

Inspecting the model, its parameter count, and its FLOPs helps us understand the model better, and can also suggest optimization directions for later deployment:

```python
# model.to(device)
_input = torch.randn(1, 3, input_size, input_size).to(device)
# Custom modules require: custom_ops={YourModule: count_your_model}
flops, params = profile(model, inputs=(_input,))
flops, params = clever_format([flops, params], '%.6f')
print('FLOPs:', flops, '\tparams:', params)  # FLOPs: 1.818607G  params: 11.228838M
```

```
[INFO] Register count_convNd() for <class 'torch.nn.modules.conv.Conv2d'>.
[INFO] Register count_bn() for <class 'torch.nn.modules.batchnorm.BatchNorm2d'>.
[INFO] Register zero_ops() for <class 'torch.nn.modules.activation.ReLU'>.
[INFO] Register zero_ops() for <class 'torch.nn.modules.pooling.MaxPool2d'>.
[WARN] Cannot find rule for <class 'torchvision.models.resnet.BasicBlock'>. Treat it as zero Macs and zero Params.
[WARN] Cannot find rule for <class 'torch.nn.modules.container.Sequential'>. Treat it as zero Macs and zero Params.
[INFO] Register count_adap_avgpool() for <class 'torch.nn.modules.pooling.AdaptiveAvgPool2d'>.
[INFO] Register count_linear() for <class 'torch.nn.modules.linear.Linear'>.
[WARN] Cannot find rule for <class 'torch.nn.modules.activation.LogSoftmax'>. Treat it as zero Macs and zero Params.
[WARN] Cannot find rule for <class 'torchvision.models.resnet.ResNet'>. Treat it as zero Macs and zero Params.
FLOPs: 1.818607G 	params: 11.228838M
```

For custom modules, you need to add your own hook to count their operations:

```python
custom_ops = {YourModule: count_your_model}
```

YourModule: the custom module class

count_your_model: the hook function that counts the custom module's operations

Reference: thop.profile.py
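The custom_ops mechanism is essentially a lookup table from module type to counting hook. A minimal pure-Python sketch of that dispatch idea (class and function names here are illustrative, not thop's actual internals):

```python
class MySwish:
    """Stand-in for a custom nn.Module that thop has no rule for."""
    pass

def count_my_swish(module, num_elements):
    # Rough estimate: one multiply + one sigmoid per element
    return 2 * num_elements

custom_ops = {MySwish: count_my_swish}

def profile_module(module, num_elements, custom_ops):
    """Look up a counting hook by module type; unknown types count as zero."""
    hook = custom_ops.get(type(module))
    if hook is None:
        return 0  # mirrors thop's "Treat it as zero Macs" warning
    return hook(module, num_elements)

ops = profile_module(MySwish(), 1000, custom_ops)
print(ops)
```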

```python
# model.to(device)
summary(model, (3, input_size, input_size))
```

```
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 112, 112]           9,408
       BatchNorm2d-2         [-1, 64, 112, 112]             128
              ReLU-3         [-1, 64, 112, 112]               0
         MaxPool2d-4           [-1, 64, 56, 56]               0
            Conv2d-5           [-1, 64, 56, 56]          36,864
       BatchNorm2d-6           [-1, 64, 56, 56]             128
              ReLU-7           [-1, 64, 56, 56]               0
            Conv2d-8           [-1, 64, 56, 56]          36,864
       BatchNorm2d-9           [-1, 64, 56, 56]             128
             ReLU-10           [-1, 64, 56, 56]               0
       BasicBlock-11           [-1, 64, 56, 56]               0
           Conv2d-12           [-1, 64, 56, 56]          36,864
      BatchNorm2d-13           [-1, 64, 56, 56]             128
             ReLU-14           [-1, 64, 56, 56]               0
           Conv2d-15           [-1, 64, 56, 56]          36,864
      BatchNorm2d-16           [-1, 64, 56, 56]             128
             ReLU-17           [-1, 64, 56, 56]               0
       BasicBlock-18           [-1, 64, 56, 56]               0
           Conv2d-19          [-1, 128, 28, 28]          73,728
      BatchNorm2d-20          [-1, 128, 28, 28]             256
             ReLU-21          [-1, 128, 28, 28]               0
           Conv2d-22          [-1, 128, 28, 28]         147,456
      BatchNorm2d-23          [-1, 128, 28, 28]             256
           Conv2d-24          [-1, 128, 28, 28]           8,192
      BatchNorm2d-25          [-1, 128, 28, 28]             256
             ReLU-26          [-1, 128, 28, 28]               0
       BasicBlock-27          [-1, 128, 28, 28]               0
           Conv2d-28          [-1, 128, 28, 28]         147,456
      BatchNorm2d-29          [-1, 128, 28, 28]             256
             ReLU-30          [-1, 128, 28, 28]               0
           Conv2d-31          [-1, 128, 28, 28]         147,456
      BatchNorm2d-32          [-1, 128, 28, 28]             256
             ReLU-33          [-1, 128, 28, 28]               0
       BasicBlock-34          [-1, 128, 28, 28]               0
           Conv2d-35          [-1, 256, 14, 14]         294,912
      BatchNorm2d-36          [-1, 256, 14, 14]             512
             ReLU-37          [-1, 256, 14, 14]               0
           Conv2d-38          [-1, 256, 14, 14]         589,824
      BatchNorm2d-39          [-1, 256, 14, 14]             512
           Conv2d-40          [-1, 256, 14, 14]          32,768
      BatchNorm2d-41          [-1, 256, 14, 14]             512
             ReLU-42          [-1, 256, 14, 14]               0
       BasicBlock-43          [-1, 256, 14, 14]               0
           Conv2d-44          [-1, 256, 14, 14]         589,824
      BatchNorm2d-45          [-1, 256, 14, 14]             512
             ReLU-46          [-1, 256, 14, 14]               0
           Conv2d-47          [-1, 256, 14, 14]         589,824
      BatchNorm2d-48          [-1, 256, 14, 14]             512
             ReLU-49          [-1, 256, 14, 14]               0
       BasicBlock-50          [-1, 256, 14, 14]               0
           Conv2d-51            [-1, 512, 7, 7]       1,179,648
      BatchNorm2d-52            [-1, 512, 7, 7]           1,024
             ReLU-53            [-1, 512, 7, 7]               0
           Conv2d-54            [-1, 512, 7, 7]       2,359,296
      BatchNorm2d-55            [-1, 512, 7, 7]           1,024
           Conv2d-56            [-1, 512, 7, 7]         131,072
      BatchNorm2d-57            [-1, 512, 7, 7]           1,024
             ReLU-58            [-1, 512, 7, 7]               0
       BasicBlock-59            [-1, 512, 7, 7]               0
           Conv2d-60            [-1, 512, 7, 7]       2,359,296
      BatchNorm2d-61            [-1, 512, 7, 7]           1,024
             ReLU-62            [-1, 512, 7, 7]               0
           Conv2d-63            [-1, 512, 7, 7]       2,359,296
      BatchNorm2d-64            [-1, 512, 7, 7]           1,024
             ReLU-65            [-1, 512, 7, 7]               0
       BasicBlock-66            [-1, 512, 7, 7]               0
AdaptiveAvgPool2d-67            [-1, 512, 1, 1]               0
           Linear-68                  [-1, 102]          52,326
       LogSoftmax-69                  [-1, 102]               0
================================================================
Total params: 11,228,838
Trainable params: 11,228,838
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 62.79
Params size (MB): 42.83
Estimated Total Size (MB): 106.20
----------------------------------------------------------------
```

Testing the Model

We run inference on a single image to verify that the whole pipeline works. Only the model's confidence scores are printed here; since the weights are randomly initialized, these values are meaningless, so they are not visualized. After the model is trained in a later post, running inference on test images and visualizing the results will give an intuitive sense of prediction quality and help evaluate the model.

```python
def process_image(image):
    '''Preprocess a PIL image and return a normalized tensor.'''
    image = TF.resize(image, 256)                    # shorter side -> 256
    upper_pixel = (image.height - 224) // 2          # center-crop offsets
    left_pixel = (image.width - 224) // 2
    image = TF.crop(image, upper_pixel, left_pixel, 224, 224)
    image = TF.to_tensor(image)
    image = TF.normalize(image, normalize_mean, normalize_std)
    return image
```
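The crop offsets above are just centered coordinates. A standalone sketch of the same arithmetic (the example image size below is mine, e.g. a 4:3 landscape photo whose shorter side was resized to 256):

```python
def center_crop_offsets(width, height, crop=224):
    """Top-left corner of a centered crop of the given size."""
    upper = (height - crop) // 2
    left = (width - crop) // 2
    return upper, left

# A 341x256 image (shorter side already resized to 256)
print(center_crop_offsets(341, 256))  # (16, 58)
```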

```python
def predict(image_path, model, topk=5):
    '''Read an image, run the model, and return the top-k predictions.'''
    image = Image.open(image_path)
    image = process_image(image)
    with torch.no_grad():
        model.eval()
        image = image.view(1, 3, 224, 224)
        image = image.to(device)
        predictions = model.forward(image)
        predictions = torch.exp(predictions)  # LogSoftmax -> probabilities
        top_ps, top_class = predictions.topk(topk, dim=1)
    return top_ps, top_class
```

```python
category = 30
image_name = 'image_03475.jpg'
image_path = data_dir + f'/valid/{category}/{image_name}'
probs, classes = predict(image_path, model)
print(probs)
print(classes)
```

```
tensor([[0.0301, 0.0275, 0.0264, 0.0257, 0.0219]], device='cuda:0')
tensor([[73, 76, 9, 62, 32]], device='cuda:0')
```
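Why the confidences hover around 0.03: with random weights the 102-way softmax output is close to uniform, about 1/102 ≈ 0.0098 per class, with the top entries only somewhat above that. A small numpy illustration of the same effect (random logits standing in for an untrained model's scores):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.standard_normal(102)                 # stand-in for untrained-model scores
probs = np.exp(logits) / np.exp(logits).sum()     # softmax
top5 = np.argsort(probs)[::-1][:5]                # indices of the 5 largest probabilities
print(probs.sum())                                # probabilities sum to 1
print(probs[top5])                                # top-5 values, only slightly above 1/102
```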
