
Generating Handwritten Digit Images from the MNIST Dataset with a GAN (Generative Adversarial Network)

Date: 2019-03-05 23:43:55



Contents

1. Project overview
2. Related links
3. Code and results
   Importing packages
   Setting hyperparameters
   Defining the optimizers and the loss function
   The training iterations
   Training results

1. Project overview

By training on the MNIST dataset, we obtain a GAN that can generate images similar to those in MNIST.

A GAN is a generative model whose goal is to learn the distribution of real data and generate new data from it. A GAN consists of two networks: a generator and a discriminator. The generator's task is to produce samples that resemble real data from random noise; the discriminator's task is to judge whether a given sample is real or generated. Training can be viewed as an adversarial game: the generator and the discriminator compete with each other and keep improving until, ideally, the distribution of the generated samples matches the real data distribution and the discriminator can no longer tell them apart. With a GAN we can generate images realistic enough to pass for real ones, and GANs are widely used in image generation, speech generation, and similar applications. For example, the technology behind the well-known face-swapping application DeepFakes is a GAN.
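For reference, this adversarial game is commonly written as the minimax objective from the original GAN paper (the arXiv paper 1406.2661 that is also cited in the training code below):

$$\min_G \max_D V(D,G) \;=\; \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$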

Components of a generative adversarial network:

 generator network, discriminator network

Goals:

 Overall goal:

 a generative model that, given a set of existing images, generates new images similar to them

 Training goals (written out as loss functions right after this list):

Discriminator

 correctly identify real images

 correctly identify fake images

Generator

 generate images that the discriminator judges to be real
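Written with the binary cross-entropy loss that the code below uses, and with the non-saturating generator loss recommended in the GAN paper, these training goals become:

$$L_D \;=\; -\Big[\log D(x) + \log\big(1 - D(G(z))\big)\Big], \qquad L_G \;=\; -\log D(G(z))$$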

This project was run on Google Drive via Colab; for the deployment steps, see the blog post: Colab deployment guide

2. Related links

Reference: PyTorch GAN code on GitHub

All the code files: extraction code: f3vq

3. Code and results

Importing packages

import os
import time
import warnings
import random
import sys
import copy
import json

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import imageio
from PIL import Image

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision  # pip install torchvision
from torchvision import transforms, models, datasets  # /docs/stable/torchvision/index.html
from torchvision.utils import save_image

Setting hyperparameters

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyper-parameters
latent_size = 64
hidden_size = 256
image_size = 784
num_epochs = 200
batch_size = 100

# latent_size: the size of the latent vector fed into the generator to produce fake images.
#   It affects the diversity and quality of the generated images: if latent_size is too small,
#   the generator may fail to capture all the variation in the data distribution and the images
#   lack diversity; if it is too large, the generator may overfit the training data and image
#   quality drops.
# hidden_size: the width of the hidden layers in both the generator and the discriminator.
# image_size: the number of pixels per image; images are flattened into 1-D tensors, so this
#   equals the length of the flattened image (28 x 28 = 784).
# num_epochs: how many times the whole dataset passes through the networks during training.
# batch_size: the number of images per batch.

# Create a directory if it does not exist.
# The 'samples' directory stores two kinds of images: the real MNIST images and the images
# produced by our generator.
sample_dir = 'samples'
if not os.path.exists(sample_dir):
    os.makedirs(sample_dir)

# transforms.ToTensor() converts a PIL.Image or numpy.ndarray into a torch.FloatTensor and
# scales the pixel values into [0, 1].
# transforms.Normalize(mean, std) standardizes the tensor image by subtracting the given mean
# and dividing by the given std, which brings the image distribution closer to a standard
# normal distribution and helps the model train and converge.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5],  # 1 for greyscale channels
                         std=[0.5])])
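As a quick, optional sanity check (not part of the original script; the random greyscale image below is purely illustrative), the two transforms first map pixel values into [0, 1] and then into [-1, 1], which matches the Tanh output range of the generator defined later:

# Optional check: ToTensor -> [0, 1], Normalize(0.5, 0.5) -> [-1, 1].
import numpy as np
from PIL import Image
dummy = Image.fromarray(np.random.randint(0, 256, (28, 28), dtype=np.uint8))  # dummy greyscale image
x = transform(dummy)
print(x.shape, x.min().item(), x.max().item())  # torch.Size([1, 28, 28]), values within [-1, 1]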

# Mount your Google Drive under /content/drive/ in the Colab file system.
from google.colab import drive
drive.mount('/content/drive/')
# Then set the current working directory.

import os
# Path inside Google Drive; 'drive' is the root mounted above and must be part of the path.
os.chdir("/content/drive/MyDrive/gan/")

# MNIST dataset
mnist = torchvision.datasets.MNIST(root='./data',
                                   train=True,
                                   transform=transform,
                                   download=True)

# Data loader
data_loader = torch.utils.data.DataLoader(dataset=mnist,
                                          batch_size=batch_size,
                                          shuffle=True)
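As an illustrative check (not in the original post), each batch from data_loader has shape (batch_size, 1, 28, 28), and the training loop below flattens it to (batch_size, 784) to match the Linear layers of the discriminator:

images, labels = next(iter(data_loader))
print(images.shape)                          # torch.Size([100, 1, 28, 28])
print(images.reshape(batch_size, -1).shape)  # torch.Size([100, 784])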

# Define the discriminator: a fully connected binary classifier on flattened images.
# Discriminator
D = nn.Sequential(
    nn.Linear(image_size, hidden_size),
    # Compared with ReLU, LeakyReLU keeps a small slope on the negative axis, which helps avoid
    # neurons whose activations are so small that their parameters stop updating; the negative
    # slope α defaults to 0.01 (here 0.2 is used).
    nn.LeakyReLU(0.2),
    nn.Linear(hidden_size, hidden_size),
    nn.LeakyReLU(0.2),
    nn.Linear(hidden_size, 1),
    # Map the output to the range 0~1.
    nn.Sigmoid())

# Define the generator: maps a latent vector to a flattened image.
# Generator
G = nn.Sequential(
    nn.Linear(latent_size, hidden_size),
    nn.ReLU(),
    nn.Linear(hidden_size, hidden_size),
    nn.ReLU(),
    nn.Linear(hidden_size, image_size),
    # Map the output to the range -1~1 (matching the normalized images).
    nn.Tanh())

# Device setting
D = D.to(device)
G = G.to(device)
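A small shape check (illustrative only, not part of the original code): the generator maps a batch of latent vectors to flattened 28x28 images in [-1, 1], and the discriminator maps flattened images to a "real" probability in (0, 1):

z = torch.randn(3, latent_size).to(device)
fake = G(z)     # shape (3, 784), values in [-1, 1] because of Tanh
prob = D(fake)  # shape (3, 1), values in (0, 1) because of Sigmoid
print(fake.shape, prob.shape)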

Defining the optimizers and the loss function

# Binary cross entropy loss and optimizers.
# nn.BCELoss() is the binary cross-entropy loss used for two-class classification problems.
criterion = nn.BCELoss()
d_optimizer = torch.optim.Adam(D.parameters(), lr=0.0002)
g_optimizer = torch.optim.Adam(G.parameters(), lr=0.0002)
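A tiny numerical illustration (not in the original post) of the BCE formula -y*log(x) - (1-y)*log(1-x): for a prediction of 0.9, the loss is about -log(0.9) ≈ 0.105 when the label is 1 and about -log(0.1) ≈ 2.303 when the label is 0:

pred = torch.tensor([0.9])
print(criterion(pred, torch.ones(1)).item())   # ≈ 0.105
print(criterion(pred, torch.zeros(1)).item())  # ≈ 2.303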

# denorm converts the generator's output from the [-1, 1] range back to the [0, 1] pixel range.
def denorm(x):
    out = (x + 1) / 2
    return out.clamp(0, 1)

# reset_grad zeroes the gradients of both the discriminator and the generator before the next
# backward pass.
def reset_grad():
    d_optimizer.zero_grad()
    g_optimizer.zero_grad()
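For example (illustrative), denorm maps the endpoints of the Tanh range back to valid pixel values:

print(denorm(torch.tensor([-1.0, 0.0, 1.0])))  # tensor([0.0000, 0.5000, 1.0000])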

# Start training
total_step = len(data_loader)
for epoch in range(num_epochs):
    for i, (images, _) in enumerate(data_loader):
        images = images.reshape(batch_size, -1).to(device)

        # Create the labels which are later used as input for the BCE loss
        real_labels = torch.ones(batch_size, 1).to(device)
        fake_labels = torch.zeros(batch_size, 1).to(device)

        # ================================================================== #
        #                      Train the discriminator                       #
        # ================================================================== #
        # Train the discriminator so that it can judge both real and fake images.
        # Compute BCE_Loss using real images where BCE_Loss(x, y): - y * log(D(x)) - (1-y) * log(1 - D(x))
        # Second term of the loss is always zero since real_labels == 1
        outputs = D(images)
        # Loss on real samples: the real labels serve as y in the BCE formula and are combined
        # with the prediction x. When y = 1 the loss is -log(x); when y = 0 it is -log(1 - x).
        # The loss is therefore smallest when the prediction agrees with the label and largest
        # when they disagree.
        d_loss_real = criterion(outputs, real_labels)
        real_score = outputs

        # Compute BCELoss using fake images
        # First term of the loss is always zero since fake_labels == 0
        z = torch.randn(batch_size, latent_size).to(device)
        fake_images = G(z)
        outputs = D(fake_images)
        # Loss on fake samples
        d_loss_fake = criterion(outputs, fake_labels)
        fake_score = outputs

        # Backprop and optimize
        # The total loss combines both kinds of mistakes: we want a discriminator that classifies
        # both real and fake samples correctly.
        d_loss = d_loss_real + d_loss_fake
        reset_grad()
        d_loss.backward()
        d_optimizer.step()

        # ================================================================== #
        #                        Train the generator                         #
        # ================================================================== #
        # Compute loss with fake images
        # Sample random noise as the generator input,
        z = torch.randn(batch_size, latent_size).to(device)
        # generate fake images from it,
        fake_images = G(z)
        # and let the discriminator judge them.
        outputs = D(fake_images)
        # We train G to maximize log(D(G(z)) instead of minimizing log(1-D(G(z)))
        # For the reason, see the last paragraph of section 3. /pdf/1406.2661.pdf
        # We want the generated images to resemble real ones, so the target y is 1 and only the
        # -y*log(p(x)) term remains, i.e. the loss for being judged as real.
        g_loss = criterion(outputs, real_labels)

        # Backprop and optimize
        reset_grad()
        g_loss.backward()
        g_optimizer.step()

        if (i+1) % 200 == 0:
            print('Epoch [{}/{}], Step [{}/{}], d_loss: {:.4f}, g_loss: {:.4f}, D(x): {:.2f}, D(G(z)): {:.2f}'
                  .format(epoch, num_epochs, i+1, total_step, d_loss.item(), g_loss.item(),
                          real_score.mean().item(), fake_score.mean().item()))

    # Save real images
    if (epoch+1) == 1:
        images = images.reshape(images.size(0), 1, 28, 28)
        save_image(denorm(images), os.path.join(sample_dir, 'real_images.png'))

    # Save sampled images
    # After every epoch we save the fake images the generator produces, so that we can watch the
    # generated images gradually become more realistic.
    fake_images = fake_images.reshape(fake_images.size(0), 1, 28, 28)
    save_image(denorm(fake_images), os.path.join(sample_dir, 'fake_images-{}.png'.format(epoch+1)))

The training iterations

In the log of each iteration we can see that:

g_loss: the generator's loss is decreasing (the images it generates are increasingly judged as real by the discriminator)

D(G(z)): the probability that the discriminator assigns to generated samples being real is increasing (the generator keeps improving and gradually produces realistic-looking samples)

Epoch [0/200], Step [200/600], d_loss: 0.0252, g_loss: 5.2554, D(x): 1.00, D(G(z)): 0.02
Epoch [0/200], Step [400/600], d_loss: 0.1340, g_loss: 4.9320, D(x): 0.95, D(G(z)): 0.07
Epoch [0/200], Step [600/600], d_loss: 0.2178, g_loss: 4.7700, D(x): 0.93, D(G(z)): 0.07
Epoch [1/200], Step [200/600], d_loss: 0.3207, g_loss: 2.5174, D(x): 0.88, D(G(z)): 0.07
Epoch [1/200], Step [400/600], d_loss: 1.6857, g_loss: 3.8114, D(x): 0.64, D(G(z)): 0.30
Epoch [1/200], Step [600/600], d_loss: 1.2818, g_loss: 2.4387, D(x): 0.79, D(G(z)): 0.39
Epoch [2/200], Step [200/600], d_loss: 0.3939, g_loss: 3.4207, D(x): 0.91, D(G(z)): 0.19
Epoch [2/200], Step [400/600], d_loss: 0.2278, g_loss: 2.5563, D(x): 0.93, D(G(z)): 0.11
Epoch [2/200], Step [600/600], d_loss: 0.6092, g_loss: 4.0915, D(x): 0.83, D(G(z)): 0.19
...
Epoch [10/200], Step [200/600], d_loss: 0.0497, g_loss: 5.0490, D(x): 0.98, D(G(z)): 0.02
Epoch [10/200], Step [400/600], d_loss: 0.0652, g_loss: 5.5561, D(x): 0.97, D(G(z)): 0.01
Epoch [10/200], Step [600/600], d_loss: 0.1745, g_loss: 4.9265, D(x): 0.93, D(G(z)): 0.03
...
Epoch [50/200], Step [200/600], d_loss: 0.6406, g_loss: 3.6847, D(x): 0.82, D(G(z)): 0.20
Epoch [50/200], Step [400/600], d_loss: 0.6941, g_loss: 2.9424, D(x): 0.80, D(G(z)): 0.21
Epoch [50/200], Step [600/600], d_loss: 0.4733, g_loss: 3.2116, D(x): 0.86, D(G(z)): 0.15
...
Epoch [100/200], Step [200/600], d_loss: 0.9574, g_loss: 1.6630, D(x): 0.65, D(G(z)): 0.26
Epoch [100/200], Step [400/600], d_loss: 0.9069, g_loss: 1.6404, D(x): 0.65, D(G(z)): 0.23
Epoch [100/200], Step [600/600], d_loss: 0.7358, g_loss: 1.6987, D(x): 0.74, D(G(z)): 0.24
Epoch [101/200], Step [200/600], d_loss: 0.8096, g_loss: 1.5011, D(x): 0.73, D(G(z)): 0.26
Epoch [101/200], Step [400/600], d_loss: 0.9509, g_loss: 1.5224, D(x): 0.75, D(G(z)): 0.32
Epoch [101/200], Step [600/600], d_loss: 1.0163, g_loss: 1.7723, D(x): 0.76, D(G(z)): 0.41
Epoch [102/200], Step [200/600], d_loss: 0.9929, g_loss: 1.3493, D(x): 0.73, D(G(z)): 0.37
Epoch [102/200], Step [400/600], d_loss: 0.8234, g_loss: 2.2110, D(x): 0.67, D(G(z)): 0.19
Epoch [102/200], Step [600/600], d_loss: 0.6964, g_loss: 2.1019, D(x): 0.74, D(G(z)): 0.21

Saving the generator and discriminator models

# Save the model checkpoints
torch.save(G.state_dict(), 'G.ckpt')
torch.save(D.state_dict(), 'D.ckpt')
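A minimal sketch (not part of the original post; the output file name new_samples.png is arbitrary) of how the saved generator could be reloaded later to produce new digits, assuming the checkpoints above were written successfully:

# Reload the trained generator and sample a 4x4 grid of new digits.
G.load_state_dict(torch.load('G.ckpt', map_location=device))
G.eval()
with torch.no_grad():
    z = torch.randn(16, latent_size).to(device)
    samples = G(z).reshape(-1, 1, 28, 28)
    save_image(denorm(samples), os.path.join(sample_dir, 'new_samples.png'), nrow=4)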

Training results

Real images from the MNIST dataset, as provided at the start of training

Images produced by the generator after 1 epoch

Images produced by the generator after 10 epochs

Images produced by the generator after 50 epochs

Images produced by the generator after 103 epochs

We can see that, as the number of epochs increases, the images produced by the generator become more and more similar to the images in the MNIST dataset.
