
Learning neural networks: building a VGG-11 model and training it on FashionMNIST

Posted: 2022-03-16 11:21:52



The VGG-11 architecture is shown below:

Design vgg_block according to the figure above.

The code is as follows:

def vgg_block(num_convs, in_channels, out_channels):
    layers = []  # collect the layers in a list
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, out_channels,
                                kernel_size=3, padding=1))  # convolutional layer
        layers.append(nn.ReLU())  # activation
        in_channels = out_channels
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

args:

1. num_convs: number of convolutional layers in the VGG block

2. in_channels: number of input channels of the first convolutional layer

3. out_channels: number of output channels (i.e., the number of kernels in each convolutional layer)

Returns an nn.Sequential.
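A quick shape check makes the block's behavior concrete — a minimal sketch (the 1×224×224 input size is taken from the FashionMNIST training setup later in this article): each 3×3 convolution with padding=1 preserves the spatial size, and the final max pool halves it.

```python
import torch
from torch import nn

def vgg_block(num_convs, in_channels, out_channels):
    layers = []
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
        layers.append(nn.ReLU())
        in_channels = out_channels
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

block = vgg_block(num_convs=2, in_channels=1, out_channels=64)
x = torch.randn(1, 1, 224, 224)  # dummy grayscale image
y = block(x)
print(y.shape)                   # torch.Size([1, 64, 112, 112])
```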

Build the VGG network from vgg_block.

The code is as follows:

def vgg(conv_arch):
    '''Build the VGG network by nesting blocks.'''
    vgg_blks = []
    in_channels = 1
    for num_convs, out_channels in conv_arch:
        vgg_blks.append(vgg_block(num_convs, in_channels, out_channels))
        in_channels = out_channels
    return nn.Sequential(
        *vgg_blks,
        nn.Flatten(),  # flatten the tensor before the fully connected layers
        nn.Linear(out_channels * 7 * 7, 4096), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(4096, 10))

args:

conv_arch: the vgg_block skeleton of the whole VGG network; each element holds a (num_convs, out_channels) pair
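To see how conv_arch shapes the network, one can trace a dummy input through the five blocks (a sketch reusing the vgg_block definition above): the spatial size halves at each block, 224 → 112 → 56 → 28 → 14 → 7, which is where the 7 * 7 factor in the first Linear layer comes from.

```python
import torch
from torch import nn

def vgg_block(num_convs, in_channels, out_channels):
    layers = []
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
        layers.append(nn.ReLU())
        in_channels = out_channels
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

conv_arch = ((1, 64), (1, 128), (2, 256), (2, 512), (2, 512))
x = torch.randn(1, 1, 224, 224)
in_channels = 1
for num_convs, out_channels in conv_arch:
    x = vgg_block(num_convs, in_channels, out_channels)(x)
    in_channels = out_channels
    print(x.shape)  # spatial size halves after every block
# final shape: torch.Size([1, 512, 7, 7]) → nn.Flatten() yields 512*7*7 features
```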

Full code:

'''Define the VGG block, build VGG-11, and train it on FashionMNIST.'''
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision import transforms

def try_gpu(i=0):
    '''Use a GPU for speed if one is available.'''
    if torch.cuda.device_count() >= i + 1:
        return torch.device(f"cuda:{i}")
    return torch.device('cpu')

def vgg_block(num_convs, in_channels, out_channels):
    layers = []  # collect the layers in a list
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, out_channels,
                                kernel_size=3, padding=1))  # convolutional layer
        layers.append(nn.ReLU())  # activation
        in_channels = out_channels
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# The original VGG network has 5 convolutional blocks: the first two contain one
# convolutional layer each, the last three contain two each. The first block has
# 64 output channels, and each subsequent block doubles that number until it
# reaches 512. Since the network uses 8 convolutional layers and 3 fully
# connected layers, it is commonly called VGG-11.
conv_arch = ((1, 64), (1, 128), (2, 256), (2, 512), (2, 512))

# Implement VGG-11.
def vgg(conv_arch):
    '''Build the VGG network by nesting blocks.'''
    vgg_blks = []
    in_channels = 1
    for num_convs, out_channels in conv_arch:
        vgg_blks.append(vgg_block(num_convs, in_channels, out_channels))
        in_channels = out_channels
    return nn.Sequential(
        *vgg_blks,
        nn.Flatten(),  # flatten the tensor before the fully connected layers
        nn.Linear(out_channels * 7 * 7, 4096), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(4096, 10))

'''Train the model'''
ratio = 4
# Shrink every block's channel count by `ratio` via a list comprehension
small_conv_arch = [(pair[0], pair[1] // ratio) for pair in conv_arch]
net = vgg(small_conv_arch).to(try_gpu())

# Download the datasets
transform = transforms.Compose([transforms.ToTensor(), transforms.Resize(224)])
train_data = torchvision.datasets.FashionMNIST('FashionMNIST', train=True,
                                               transform=transform, download=True)
test_data = torchvision.datasets.FashionMNIST('FashionMNIST', train=False,
                                              transform=transform, download=True)

# Load the datasets
batch_size = 128
train_loader = DataLoader(train_data, batch_size=batch_size)
test_loader = DataLoader(test_data, batch_size=batch_size)

# Loss function
loss_fn = nn.CrossEntropyLoss()

# Optimizer
lr = 0.05
optimizer = torch.optim.SGD(net.parameters(), lr=lr)

# Number of epochs
epoch = 10
total_train_step = 0
total_test_step = 0

# Training loop
for i in range(epoch):
    print(f"Epoch {i} started")
    for data in train_loader:
        imgs, targets = data
        imgs = imgs.to(try_gpu())
        targets = targets.to(try_gpu())
        outputs = net(imgs)
        loss = loss_fn(outputs, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_train_step = total_train_step + 1
        if total_train_step % 100 == 0:
            print(f"Training step: {total_train_step}, loss: {loss.item()}")

    # Evaluation
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for data in test_loader:
            imgs, targets = data
            imgs = imgs.to(try_gpu())
            targets = targets.to(try_gpu())
            outputs = net(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss = total_test_loss + loss.item()
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy = accuracy + total_accuracy
    print(f"Loss on the whole test set: {total_test_loss}")
    print(f"Accuracy on the whole test set: {total_accuracy / len(test_data)}")
    total_test_step = total_test_step + 1

Summary:

1. VGG-11 builds the network from reusable convolutional blocks. Different VGG models are defined by varying the number of convolutional layers and output channels in each block.

2. Using blocks makes the network definition very concise, and blocks are an effective way to design complex networks.

3. In the VGG paper, Simonyan and Zisserman experimented with various architectures. In particular, they found that deep, narrow convolutions (i.e., 3×3) are more effective than shallower, wider ones.
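The "deep and narrow beats shallow and wide" point can be illustrated with a parameter count (an illustrative comparison, not from the article): two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution, yet use fewer parameters and add an extra non-linearity.

```python
import torch
from torch import nn

C = 256  # example channel count

# Two stacked 3x3 convolutions: 5x5 effective receptive field
stacked = nn.Sequential(
    nn.Conv2d(C, C, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(C, C, kernel_size=3, padding=1), nn.ReLU(),
)
# One 5x5 convolution with the same receptive field
single = nn.Conv2d(C, C, kernel_size=5, padding=2)

def count(m):
    return sum(p.numel() for p in m.parameters())

print(count(stacked))  # 2 * (3*3*C*C + C) = 1,180,160
print(count(single))   # 5*5*C*C + C      = 1,638,656
```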
