
TensorFlow Learning Notes (10): Generating Handwritten Digits (MNIST) with a GAN

Date: 2020-12-10 19:24:08



Contents

1. GAN Principles
2. Hands-on Project
   2.1 Project Background
   2.2 Network Description
   2.3 Implementation

1. GAN Principles

A generative adversarial network (GAN) consists of two networks: a generator and a discriminator. Either one can be any kind of neural network (from convolutional networks and recurrent networks to autoencoders). The generator produces synthetic data from input noise (usually drawn from a uniform or normal distribution), while the discriminator tries to tell the generator's output apart from real data. The generator tries to produce data that looks ever more real, and the discriminator in turn tries to separate real from generated data ever more accurately. The two networks improve through this competition: as training progresses, the generator's output approaches the real data distribution, so the trained generator can produce the data we want (images, sequences, video, and so on). The network structure is as follows:

The GAN objective:
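The formula image did not survive in this copy; for reference, the standard minimax objective from the original GAN paper (Goodfellow et al., 2014), which the code below implements, is:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here $D(x)$ is the discriminator's estimated probability that $x$ is real, and $G(z)$ is the generator's output for noise $z$.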

For more theory, see the article "生成对抗网络GAN详细推导" (a detailed derivation of GANs).

Note that this GAN model uses no convolutions, just simple multilayer fully connected networks. The code also introduces no new TensorFlow functions, so after the previous notes in this series and the theory above, it is fairly straightforward to implement.

2. Hands-on Project

2.1 Project Background

We train a GAN on the MNIST dataset to generate handwritten digits. For an introduction to MNIST, see note 4 in this series: "tensorflow学习笔记(四):利用BP手写体(MNIST)识别" (handwritten digit recognition with a BP network).

2.2 Network Description

We build the GAN with TensorFlow. The generator is a two-layer fully connected network, and the discriminator is also a two-layer fully connected network. We train on MNIST and then use the generator to produce handwritten digits.
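Before the full TensorFlow script in 2.3, the two forward passes described above can be sketched in plain NumPy. This is only a shape check with untrained placeholder weights; the dimensions (noise 100, hidden 256, image 784) follow the article, and the init scale matches the article's `glorot_init`.

```python
import numpy as np

rng = np.random.default_rng(0)
noise_dim, hidden_dim, image_dim = 100, 256, 784

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init(fan_in, fan_out):
    # Same Xavier-style scale as the article's glorot_init: std = sqrt(2 / fan_in)
    return (rng.normal(0.0, np.sqrt(2.0 / fan_in), (fan_in, fan_out)),
            np.zeros(fan_out))

def generator(z, params):
    # FC -> ReLU -> FC -> sigmoid: noise vector to a 784-pixel image in (0, 1)
    (w1, b1), (w2, b2) = params
    h = np.maximum(z @ w1 + b1, 0.0)
    return sigmoid(h @ w2 + b2)

def discriminator(x, params):
    # FC -> ReLU -> FC -> sigmoid: image to a scalar "realness" score in (0, 1)
    (w1, b1), (w2, b2) = params
    h = np.maximum(x @ w1 + b1, 0.0)
    return sigmoid(h @ w2 + b2)

gen_params = [init(noise_dim, hidden_dim), init(hidden_dim, image_dim)]
disc_params = [init(image_dim, hidden_dim), init(hidden_dim, 1)]

z = rng.uniform(-1.0, 1.0, (4, noise_dim))   # batch of 4 noise vectors
fake = generator(z, gen_params)
score = discriminator(fake, disc_params)
print(fake.shape, score.shape)  # (4, 784) (4, 1)
```

The TensorFlow version below has exactly this structure, with `tf.Variable` weights so both networks can be trained by gradient descent.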

2.3 Implementation

```python
from __future__ import division, print_function, absolute_import
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

# Import the MNIST dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Training parameters
num_steps = 20000
batch_size = 128
learning_rate = 0.0002

# Network parameters
image_dim = 784  # 28*28
gen_hidden_dim = 256
disc_hidden_dim = 256
noise_dim = 100  # dimension of the noise input

# A custom initialization (see Xavier Glorot init)
def glorot_init(shape):
    return tf.random_normal(shape=shape, stddev=1. / tf.sqrt(shape[0] / 2.))

# Weights and biases for all layers
weights = {
    'gen_hidden1': tf.Variable(glorot_init([noise_dim, gen_hidden_dim])),
    'gen_out': tf.Variable(glorot_init([gen_hidden_dim, image_dim])),
    'disc_hidden1': tf.Variable(glorot_init([image_dim, disc_hidden_dim])),
    'disc_out': tf.Variable(glorot_init([disc_hidden_dim, 1])),
}
biases = {
    'gen_hidden1': tf.Variable(tf.zeros([gen_hidden_dim])),
    'gen_out': tf.Variable(tf.zeros([image_dim])),
    'disc_hidden1': tf.Variable(tf.zeros([disc_hidden_dim])),
    'disc_out': tf.Variable(tf.zeros([1])),
}

# Generator network
def generator(x):
    hidden_layer = tf.matmul(x, weights['gen_hidden1'])
    hidden_layer = tf.add(hidden_layer, biases['gen_hidden1'])
    hidden_layer = tf.nn.relu(hidden_layer)
    out_layer = tf.matmul(hidden_layer, weights['gen_out'])
    out_layer = tf.add(out_layer, biases['gen_out'])
    out_layer = tf.nn.sigmoid(out_layer)
    return out_layer

# Discriminator network
def discriminator(x):
    hidden_layer = tf.matmul(x, weights['disc_hidden1'])
    hidden_layer = tf.add(hidden_layer, biases['disc_hidden1'])
    hidden_layer = tf.nn.relu(hidden_layer)
    out_layer = tf.matmul(hidden_layer, weights['disc_out'])
    out_layer = tf.add(out_layer, biases['disc_out'])
    out_layer = tf.nn.sigmoid(out_layer)
    return out_layer

# Build the network
# Network inputs
gen_input = tf.placeholder(tf.float32, shape=[None, noise_dim], name='input_noise')
disc_input = tf.placeholder(tf.float32, shape=[None, image_dim], name='disc_input')

# Build the generator network
gen_sample = generator(gen_input)

# Apply the discriminator twice (once to real images, once to generated samples)
disc_real = discriminator(disc_input)
disc_fake = discriminator(gen_sample)

# Define the loss functions
gen_loss = -tf.reduce_mean(tf.log(disc_fake))
disc_loss = -tf.reduce_mean(tf.log(disc_real) + tf.log(1. - disc_fake))

# Define the optimizers
optimizer_gen = tf.train.AdamOptimizer(learning_rate=learning_rate)
optimizer_disc = tf.train.AdamOptimizer(learning_rate=learning_rate)

# Variables each optimizer should update
# Generator variables
gen_vars = [weights['gen_hidden1'], weights['gen_out'],
            biases['gen_hidden1'], biases['gen_out']]
# Discriminator variables
disc_vars = [weights['disc_hidden1'], weights['disc_out'],
             biases['disc_hidden1'], biases['disc_out']]

# Minimize each loss over its own variables only
train_gen = optimizer_gen.minimize(gen_loss, var_list=gen_vars)
train_disc = optimizer_disc.minimize(disc_loss, var_list=disc_vars)

# Initialize the variables
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    sess.run(init)
    for i in range(1, num_steps + 1):
        # Prepare a batch of real images
        batch_x, _ = mnist.train.next_batch(batch_size)
        # Feed noise to the generator
        z = np.random.uniform(-1., 1., size=[batch_size, noise_dim])
        # Train both networks on this batch
        feed_dict = {disc_input: batch_x, gen_input: z}
        _, _, gl, dl = sess.run([train_gen, train_disc, gen_loss, disc_loss],
                                feed_dict=feed_dict)
        if i % 1000 == 0 or i == 1:
            print('Step %i: Generator Loss: %f, Discriminator Loss: %f' % (i, gl, dl))

    # Generate images from noise with the trained generator
    f, a = plt.subplots(4, 10, figsize=(10, 4))
    for i in range(10):
        # Noise input
        z = np.random.uniform(-1., 1., size=[4, noise_dim])
        g = sess.run([gen_sample], feed_dict={gen_input: z})
        g = np.reshape(g, newshape=(4, 28, 28, 1))
        # Invert colors (white-on-black to black-on-white) for better display
        g = -1 * (g - 1)
        for j in range(4):
            # Extend to 3 channels for matplotlib
            img = np.reshape(np.repeat(g[j][:, :, np.newaxis], 3, axis=2),
                             newshape=(28, 28, 3))
            a[j][i].imshow(img)
    f.show()
    plt.draw()
    plt.waitforbuttonpress()
```
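Note the generator loss in the script is the non-saturating heuristic `-mean(log D(G(z)))` rather than the `log(1 - D(G(z)))` term from the minimax objective; it gives stronger gradients early in training. A quick NumPy check of both losses, using made-up discriminator outputs (the 0.1/0.9 values below are purely illustrative):

```python
import numpy as np

# Hypothetical discriminator outputs, for illustration only
disc_fake = np.array([0.1, 0.2, 0.15])   # D's scores on generated samples
disc_real = np.array([0.9, 0.8, 0.95])   # D's scores on real samples

# Non-saturating generator loss: -mean(log D(G(z)))
gen_loss = -np.mean(np.log(disc_fake))
# Discriminator loss: -mean(log D(x) + log(1 - D(G(z))))
disc_loss = -np.mean(np.log(disc_real) + np.log(1.0 - disc_fake))

print(round(gen_loss, 4), round(disc_loss, 4))  # 1.9364 0.2903
```

One practical caveat: `tf.log` returns `-inf` when the sigmoid output saturates at exactly 0 or 1, so in practice the discriminator output is often clipped to a range like `[1e-8, 1 - 1e-8]` to avoid NaN losses.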

After 20,000 training steps, the output is:

```
Step 1: Generator Loss: 1.002087, Discriminator Loss: 1.212741
Step 1000: Generator Loss: 3.819249, Discriminator Loss: 0.063358
Step 2000: Generator Loss: 4.281909, Discriminator Loss: 0.040046
Step 3000: Generator Loss: 3.737413, Discriminator Loss: 0.07
Step 4000: Generator Loss: 3.734505, Discriminator Loss: 0.121832
Step 5000: Generator Loss: 3.478826, Discriminator Loss: 0.155717
Step 6000: Generator Loss: 3.131607, Discriminator Loss: 0.167828
Step 7000: Generator Loss: 3.458174, Discriminator Loss: 0.176890
Step 8000: Generator Loss: 3.987390, Discriminator Loss: 0.132476
Step 9000: Generator Loss: 3.256813, Discriminator Loss: 0.246182
Step 10000: Generator Loss: 4.022185, Discriminator Loss: 0.106170
Step 11000: Generator Loss: 3.692181, Discriminator Loss: 0.229384
Step 12000: Generator Loss: 3.681010, Discriminator Loss: 0.221918
Step 13000: Generator Loss: 3.232910, Discriminator Loss: 0.276704
Step 14000: Generator Loss: 3.951521, Discriminator Loss: 0.223627
Step 15000: Generator Loss: 3.263102, Discriminator Loss: 0.262820
Step 16000: Generator Loss: 3.180792, Discriminator Loss: 0.326289
Step 17000: Generator Loss: 3.495943, Discriminator Loss: 0.350409
Step 18000: Generator Loss: 3.797458, Discriminator Loss: 0.174091
Step 19000: Generator Loss: 2.964710, Discriminator Loss: 0.286498
Step 20000: Generator Loss: 3.576961, Discriminator Loss: 0.336350
```

The generated images:
