
【062】MNIST Handwritten Digit Recognition

Date: 2023-01-17 06:38:15


Contents

I. MNIST Introduction
    1. About MNIST
    2. Ways to obtain the MNIST data
II. Model Training
    1. Load the dataset
    2. Inspect some images
    3. Keras model prediction
    4. Multilayer perceptron prediction
    5. Simple convolutional neural network prediction

I. MNIST Introduction

1. About MNIST

The MNIST dataset is split into training images and test images: 60,000 for training and 10,000 for testing. Each image represents one digit from 0 to 9, and every image is a 28*28 matrix.

train-images-idx3-ubyte.gz: training set images (9912422 bytes)

train-labels-idx1-ubyte.gz: training set labels (28881 bytes)

t10k-images-idx3-ubyte.gz: test set images (1648877 bytes)

t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes)
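The four files above use the simple IDX binary layout: a 4-byte magic number, one big-endian 32-bit count per dimension, then raw unsigned bytes. A minimal parsing sketch; the `parse_idx_images` helper and the synthetic buffer below are illustrative additions, not part of the original post:

```python
import struct

import numpy as np

# Hypothetical helper: parse an idx3-ubyte image file already read into memory.
def parse_idx_images(buf):
    # Header: magic number (2051 for image files, 2049 for label files),
    # then image count, rows, and columns, all big-endian 32-bit integers.
    magic, n, rows, cols = struct.unpack('>IIII', buf[:16])
    assert magic == 2051, 'not an idx3-ubyte image file'
    data = np.frombuffer(buf, dtype=np.uint8, offset=16)
    return data.reshape(n, rows, cols)

# Synthetic 2-image 28x28 buffer just to exercise the parser (not real MNIST).
fake = struct.pack('>IIII', 2051, 2, 28, 28) + bytes(2 * 28 * 28)
images = parse_idx_images(fake)
print(images.shape)  # (2, 28, 28)
```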

2. Ways to obtain the MNIST data

Method 1

Download from the official site. The MNIST dataset is maintained by Prof. Yann LeCun; download the four .gz files from his homepage at /exdb/mnist/. This is also how older versions of TensorFlow obtained MNIST. Note: the image data loaded this way takes values between 0 and 1.

Method 2

Download the single mnist.npz file from Google at /tensorflow/tf-keras-datasets/mnist.npz. This is how newer versions of TensorFlow obtain MNIST. Note: the image data takes values between 0 and 255.
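Since mnist.npz is a standard NumPy `.npz` archive, it can also be opened directly with `numpy.load`. A sketch, assuming the four key names used by tf.keras (`x_train`, `y_train`, `x_test`, `y_test`) and using a tiny in-memory stand-in rather than the real file:

```python
import io

import numpy as np

# Build a tiny stand-in archive with the same key names as mnist.npz
# (zeros here, not real MNIST pixels).
buf = io.BytesIO()
np.savez(buf,
         x_train=np.zeros((2, 28, 28), dtype=np.uint8),
         y_train=np.array([3, 7], dtype=np.uint8),
         x_test=np.zeros((1, 28, 28), dtype=np.uint8),
         y_test=np.array([1], dtype=np.uint8))
buf.seek(0)

with np.load(buf) as archive:
    print(sorted(archive.files))      # ['x_test', 'x_train', 'y_test', 'y_train']
    demo_images = archive['x_train']
print(demo_images.dtype, demo_images.shape)  # uint8 pixels in the 0-255 range
```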

Method 3

Obtain it through TensorFlow. Downloading the files ahead of time and placing them in the expected directory avoids download failures.

# TensorFlow < 1.7
# Download the dataset into the mnist folder in advance to avoid download
# failures, then point datapath at it.
from tensorflow.examples.tutorials.mnist import input_data
datapath = "./mnist/"
mnist = input_data.read_data_sets(datapath, one_hot=True)
train_x = mnist.train.images
train_y = mnist.train.labels
test_x = mnist.test.images
test_y = mnist.test.labels

# TensorFlow >= 1.7
# Download mnist.npz into ~/.keras/datasets/ in advance to avoid download failures.
import tensorflow as tf
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.mnist.load_data(path='mnist.npz')

Method 4

Obtain it through Keras.

from keras.datasets import mnist
(train_x, train_y), (test_x, test_y) = mnist.load_data()
# package source:  :\Program Files\Python\Python36-64\Lib\site-packages\keras\datasets
# download cache:  C:\Users\user_name\.keras\datasets

II. Model Training

1. Load the dataset

import tensorflow as tf
import time
import matplotlib
import matplotlib.pyplot as plt

start = time.time()
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Pixel grayscale values lie in 0-255; normalizing them to 0-1 usually
# improves training.
x_train, x_test = x_train / 255.0, x_test / 255.0
print((x_train.shape, y_train.shape), (x_test.shape, y_test.shape))
# ((60000, 28, 28), (60000,)) ((10000, 28, 28), (10000,))

2. Inspect some images

some_digit = x_train[3000]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap=matplotlib.cm.binary, interpolation="nearest")  # grayscale
# plt.imshow(some_digit_image)  # color
plt.axis('off')
plt.show()

from keras.datasets import mnist   # import the MNIST dataset from keras.datasets
import matplotlib.pyplot as plt    # alias matplotlib.pyplot as plt

# load_data() returns two tuples. The first holds the labeled training images,
# each paired with its digit label, and is used for training. The second holds
# the held-out test images: after training, the network classifies them, the
# predictions are compared with the true labels to compute a loss, and
# backpropagation updates the parameters; the cycle repeats until the loss is
# minimized.
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# subplot(331) lays out a 3x3 grid (up to 9 images) and selects slot 1;
# subplot(221) would instead give a 2x2 grid of 4 images.
plt.subplot(331)
plt.imshow(X_train[0], cmap=plt.get_cmap('gray'))
# X_train holds the 28x28 pixel arrays; y_train holds the matching digit labels.
plt.subplot(332)
plt.imshow(X_train[1], cmap=plt.get_cmap('gray'))
plt.subplot(333)
plt.imshow(X_train[2], cmap=plt.get_cmap('gray'))
plt.subplot(334)
plt.imshow(X_train[3], cmap=plt.get_cmap('gray'))
plt.subplot(335)
plt.imshow(X_train[4], cmap=plt.get_cmap('gray'))
plt.subplot(336)
plt.imshow(X_train[5], cmap=plt.get_cmap('gray'))
plt.subplot(337)
plt.imshow(X_train[6], cmap=plt.get_cmap('gray'))
plt.subplot(338)
plt.imshow(X_train[7], cmap=plt.get_cmap('gray'))
plt.subplot(339)
plt.imshow(X_train[8], cmap=plt.get_cmap('gray'))
# imshow takes the pixel array as its first argument and the colormap as the
# second; per the matplotlib docs, show() displays all open figures.
plt.show()

3. Keras model prediction

# with Dropout(0.2)
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
loss, accuracy = model.evaluate(x_test, y_test, verbose=2)
print('loss:%s, accuracy:%s' % (loss, accuracy))
end = time.time()
t = end - start
print('elapsed: %s seconds' % t)
# epochs=5:  loss:0.07518016551942565, accuracy:0.9781, 26.241849899291992 seconds
# epochs=10: loss:0.07639624819413875, accuracy:0.9778, 46.722630977630615 seconds

# with Dropout(0.2) commented out:
# epochs=5: loss:0.0709681018311996, accuracy:0.9776, elapsed: 25.166700839996338 seconds
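One detail worth noting in the Keras model above: `sparse_categorical_crossentropy` takes the integer labels directly, whereas `categorical_crossentropy` (used in the later examples) expects one-hot labels. Both compute the same negative log-likelihood; a hand-rolled sketch with a made-up `probs` vector:

```python
import numpy as np

probs = np.array([0.05, 0.05, 0.8, 0.1])  # a model's softmax output (made up)
label = 2                                  # integer ground-truth label

# categorical_crossentropy: dot the one-hot label with -log(probs)
one_hot = np.eye(len(probs))[label]
dense_loss = -np.sum(one_hot * np.log(probs))

# sparse_categorical_crossentropy: index -log(probs) by the integer label
sparse_loss = -np.log(probs[label])

print(np.isclose(dense_loss, sparse_loss))  # True
```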

4. Multilayer perceptron prediction

Before implementing a model as complex as a convolutional neural network, we first build a simpler model that still performs well: a multilayer perceptron (MLP), i.e. a neural network with a hidden layer.
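The forward pass of such a one-hidden-layer network can be sketched in plain NumPy before handing the training over to Keras; the weights here are random stand-ins, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.random(784)                           # one flattened 28x28 image

# Random stand-in weights; the Keras code below learns the real values.
W1, b1 = 0.01 * rng.standard_normal((784, 784)), np.zeros(784)
W2, b2 = 0.01 * rng.standard_normal((784, 10)), np.zeros(10)

h = np.maximum(0.0, x @ W1 + b1)              # hidden layer with ReLU
logits = h @ W2 + b2
probs = np.exp(logits - logits.max())         # numerically stable softmax
probs /= probs.sum()

print(probs.shape)                            # (10,): one probability per digit
```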

import time
start = time.time()
import numpy

# import the dataset and layers
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils

seed = 7  # fix the random seed
numpy.random.seed(seed)

(X_train, y_train), (X_test, y_test) = mnist.load_data()  # load data

# The dataset is a 3-D array (instances, width, height). An MLP takes 2-D
# input, so each 28*28 image is reshaped into a flat 784-element vector,
# which numpy's reshape handles easily.
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')

# Pixel grayscale values lie in 0-255; normalize to 0-1 for better training.
X_train = X_train / 255
X_test = X_test / 255

# The model outputs a score for each class 0-9: the probability that the input
# belongs to that class (higher means more confident). Since the raw labels
# are integers 0-9, they are one-hot encoded: a label of 5 becomes
# [0,0,0,0,0,1,0,0,0,0].
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]

# Now build the network: a function that creates a model with one hidden layer.
# define baseline model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu'))
    model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# The hidden layer has 784 neurons and accepts 784-length (28*28) inputs;
# softmax converts the outputs into label probabilities.

# Fit the model: 10 epochs, 200 samples per batch, with the test set as the
# validation set to monitor training.
# build the model
model = baseline_model()
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)
# Final evaluation of the model
loss, accuracy = model.evaluate(X_test, y_test, verbose=2)
print('loss:%s, accuracy:%s' % (loss, accuracy))
end = time.time()
t = end - start
print('elapsed: %s seconds' % t)

Train on 60000 samples, validate on 10000 samples
Epoch 1/10
-02-27 14:12:52.942687: I tensorflow/stream_executor/platform/default/:44] Successfully opened dynamic library cublas64_10.dll
 - 3s - loss: 0.2783 - accuracy: 0.9210 - val_loss: 0.1413 - val_accuracy: 0.9575
Epoch 2/10
 - 2s - loss: 0.1115 - accuracy: 0.9677 - val_loss: 0.0928 - val_accuracy: 0.9703
Epoch 3/10
 - 2s - loss: 0.0719 - accuracy: 0.9797 - val_loss: 0.0784 - val_accuracy: 0.9769
Epoch 4/10
 - 2s - loss: 0.0507 - accuracy: 0.9855 - val_loss: 0.0742 - val_accuracy: 0.9774
Epoch 5/10
 - 2s - loss: 0.0373 - accuracy: 0.9892 - val_loss: 0.0676 - val_accuracy: 0.9791
Epoch 6/10
 - 2s - loss: 0.0270 - accuracy: 0.9927 - val_loss: 0.0631 - val_accuracy: 0.9806
Epoch 7/10
 - 2s - loss: 0.0211 - accuracy: 0.9946 - val_loss: 0.0627 - val_accuracy: 0.9811
Epoch 8/10
 - 2s - loss: 0.0141 - accuracy: 0.9968 - val_loss: 0.0625 - val_accuracy: 0.9803
Epoch 9/10
 - 2s - loss: 0.0108 - accuracy: 0.9978 - val_loss: 0.0581 - val_accuracy: 0.9811
Epoch 10/10
 - 2s - loss: 0.0081 - accuracy: 0.9984 - val_loss: 0.0587 - val_accuracy: 0.9812
loss:0.05865979683827463, accuracy:0.9811999797821045
elapsed: 30.996304988861084 seconds

5. Simple convolutional neural network prediction

A convolutional neural network (CNN) is a deep neural network. Unlike a network with a single hidden layer, it also contains convolutional layers, pooling layers, Dropout layers, and so on, which gives it better performance on image classification.
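Before the code, it helps to track how the tensor shape shrinks through the layers of the architecture used below (32 feature maps, 5*5 kernels, 2*2 pooling). A quick back-of-the-envelope sketch; `conv_out` is a hypothetical helper, not a Keras function:

```python
# Output side length of a 'valid' (no padding) convolution.
def conv_out(size, kernel, stride=1):
    return (size - kernel) // stride + 1

side = conv_out(28, 5)   # 5x5 conv on a 28x28 image -> 24x24
side = side // 2         # 2x2 max pooling halves each side -> 12x12
flat = 32 * side * side  # 32 feature maps flattened -> 4608 inputs to Dense(128)
print(side, flat)  # 12 4608
```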

import numpy
import time
start = time.time()
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils

seed = 7
numpy.random.seed(seed)  # fix the random seed

# The CNN input is a 4-D tensor (think of it as a multi-dimensional vector):
# the first dimension is the sample count, then width, height, and pixel
# channels. Reshape the data accordingly, normalize the values, and one-hot
# encode the class labels.
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][width][height][channels]
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32')
X_train = X_train / 255
X_test = X_test / 255
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]

'''
CNN architecture:
The first layer is a convolutional layer with 32 feature maps (filters). It is
the input layer, accepting [width][height][channels] input; each feature map
is 5*5, followed by a 'relu' activation.
Next is a pooling layer using MaxPooling with size 2*2.
Next is a Dropout layer, which regularizes the parameters to prevent
overfitting.
Then comes a fully connected layer with 128 neurons and 'relu' activation.
The last layer is the output layer with 10 neurons, one per class; each output
is the probability that the sample belongs to that class.
'''
def baseline_model():
    # create model
    model = Sequential()
    model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

# Train the model
model = baseline_model()
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)
loss, accuracy = model.evaluate(X_test, y_test, verbose=2)
print('loss:%s, accuracy:%s' % (loss, accuracy))
end = time.time()
t = end - start
print('elapsed: %s seconds' % t)

Train on 60000 samples, validate on 10000 samples
Epoch 1/10
-02-27 14:32:19.877644: I tensorflow/stream_executor/platform/default/:44] Successfully opened dynamic library cublas64_10.dll
-02-27 14:32:20.129102: I tensorflow/stream_executor/platform/default/:44] Successfully opened dynamic library cudnn64_7.dll
-02-27 14:32:21.857355: W tensorflow/stream_executor/gpu/:312] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only. Relying on driver to perform ptx compilation. This message will be only logged once.
 - 9s - loss: 0.2238 - accuracy: 0.9364 - val_loss: 0.0751 - val_accuracy: 0.9764
Epoch 2/10
 - 6s - loss: 0.0712 - accuracy: 0.9785 - val_loss: 0.0467 - val_accuracy: 0.9835
Epoch 3/10
 - 6s - loss: 0.0505 - accuracy: 0.9846 - val_loss: 0.0422 - val_accuracy: 0.9857
Epoch 4/10
 - 6s - loss: 0.0400 - accuracy: 0.9876 - val_loss: 0.0386 - val_accuracy: 0.9877
Epoch 5/10
 - 6s - loss: 0.0318 - accuracy: 0.9900 - val_loss: 0.0344 - val_accuracy: 0.9885
Epoch 6/10
 - 6s - loss: 0.0260 - accuracy: 0.9919 - val_loss: 0.0337 - val_accuracy: 0.9896
Epoch 7/10
 - 6s - loss: 0.0221 - accuracy: 0.9928 - val_loss: 0.0341 - val_accuracy: 0.9893
Epoch 8/10
 - 6s - loss: 0.0186 - accuracy: 0.9941 - val_loss: 0.0331 - val_accuracy: 0.9888
Epoch 9/10
 - 6s - loss: 0.0159 - accuracy: 0.9949 - val_loss: 0.0300 - val_accuracy: 0.9902
Epoch 10/10
 - 6s - loss: 0.0131 - accuracy: 0.9961 - val_loss: 0.0301 - val_accuracy: 0.9912
loss:0.030062780727856443, accuracy:0.9911999702453613
elapsed: 73.75469589233398 seconds

With a 7x7 kernel, epochs=5:
loss:0.03231118987446, accuracy:0.9884999990463257
elapsed: 36.213218450546265 seconds

With a 3x3 kernel, epochs=5:
loss:0.04168314480264671, accuracy:0.9854999780654907
elapsed: 44.08165001869202 seconds
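The kernel-size comparison above can be made concrete by counting the trainable parameters of the single-conv model for each kernel size. A sketch under the stated architecture (32 filters, 2x2 pooling, Dense(128), 10 outputs); `model_params` is a hypothetical helper, not a Keras API:

```python
# Trainable parameters of the one-conv-layer model for kernel size k.
def model_params(k, filters=32, dense=128, classes=10, side=28):
    conv = filters * (k * k + 1)      # k*k weights + 1 bias per filter
    pooled = (side - k + 1) // 2      # 'valid' conv, then 2x2 max pooling
    flat = filters * pooled * pooled  # flattened feature count
    fc = (flat + 1) * dense           # dense layer weights + biases
    out = (dense + 1) * classes       # output layer
    return conv + fc + out

for k in (3, 5, 7):
    print(k, model_params(k))
# A larger kernel shrinks the post-pooling feature map, so the Dense layer
# after Flatten (which dominates the count) actually gets smaller.
```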

# Add a second conv layer, Conv2D(16, (5, 5)); epochs=5, 2 conv layers in total
def baseline_model():
    # create model
    model = Sequential()
    model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(16, (5, 5), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# Result:
# loss:0.02800338240491692, accuracy:0.9902999997138977
# elapsed: 39.675434827804565 seconds

# Add a second conv layer, Conv2D(16, (5, 5)); epochs=5, 2 conv layers, Dropout(0.8)
def baseline_model():
    # create model
    model = Sequential()
    model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(16, (5, 5), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.8))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# Result:
# loss:0.052183764855377374, accuracy:0.9843000173568726
# elapsed: 38.766382455825806 seconds

# Add a third conv layer, Conv2D(8, (3, 3)); epochs=5, 3 conv layers in total
def baseline_model():
    # create model
    model = Sequential()
    model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(16, (5, 5), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(8, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# Result:
# loss:0.1265007281795144, accuracy:0.9646999835968018
# elapsed: 39.36178970336914 seconds

About Me: 小婷儿

● Author: 小婷儿, focused on Python, data analysis, data mining, and machine learning, with an emphasis on putting these techniques to practical use

● Author's blog: /u010986753

● This series comes from the author's study notes, partly compiled from the web; please excuse any infringement or inaccuracies

