
[Python and Machine Learning Primer 2] Decision Trees 3: Predicting Contact Lens Type with a Decision Tree

Date: 2018-08-26 05:51:32


Reference blog: "Decision Trees in Practice: Fitting Yourself for Contact Lenses" by Jack-Cui (most of this post is adapted from there)

Reference book: Machine Learning in Action, Chapter 3, Section 3.4

Decision-tree fundamentals are covered in the previous two posts of this series.

Abstract: this post works through a contact-lens prediction example to show how to build and visualize a decision tree, and introduces code for building one with sklearn.

Contents

1 Data processing

2 Complete code

3 Matplotlib visualization

4 Building a decision tree with sklearn

1 Data processing

The contact-lens data set is a well-known data set that records observations of patients' eye conditions together with the lens type a doctor recommended. The lens type takes one of three values: hard, soft, or no lenses (the patient should not wear contact lenses). Given this data set, we build a decision tree to predict a patient's lens type.

The lenses.txt data set (originally shown as a screenshot) has 24 samples with 5 columns; the 5th column is the lens type, i.e., the class we want to predict.

The data columns are [age, prescript, astigmatic, tearRate, class],

i.e., [age, prescription type, whether astigmatic, tear production rate, and the final class label].
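To make the layout concrete, here is how one row is parsed. The sample line below is an illustrative stand-in following the tab-separated format the loading code expects, not a quoted row from the real file:

```python
# Illustrative only: one plausible line in the tab-separated lenses.txt format.
line = "young\tmyope\tno\treduced\tno lenses"

fields = line.strip().split("\t")
features, label = fields[:4], fields[4]  # first four columns are features, the last is the class
print(features)  # ['young', 'myope', 'no', 'reduced']
print(label)     # no lenses
```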

'''Create the data set'''
def createDataSet():
    fr = open('lenses.txt')
    dataSet = [rl.strip().split('\t') for rl in fr.readlines()]
    print dataSet
    labels = ['age', 'prescript', 'astigmatic', 'tearRate']  # feature names
    return dataSet, labels  # return the data set and the feature names

2 Complete code

#!/usr/bin/env python
# _*_ coding: utf-8 _*_
import numpy as np
import json
import operator
from math import log

'''Create the data set'''
def createDataSet():
    fr = open('lenses.txt')
    dataSet = [rl.strip().split('\t') for rl in fr.readlines()]
    labels = ['age', 'prescript', 'astigmatic', 'tearRate']  # feature names
    return dataSet, labels  # return the data set and the feature names

'''Empirical (Shannon) entropy'''
def calShannonEnt(dataset):
    m = len(dataset)
    lableCount = {}
    '''count each class'''
    for data in dataset:
        currentLabel = data[-1]
        if currentLabel not in lableCount.keys():
            lableCount[currentLabel] = 0
        lableCount[currentLabel] += 1
    '''sum over the class counts'''
    entropy = 0
    for label in lableCount:
        p = float(lableCount[label]) / m
        entropy -= p * log(p, 2)
    return entropy

'''Split out the subset where feature axis takes the given value'''
def splitdataset(dataset, axis, value):
    subSet = []
    for data in dataset:
        if data[axis] == value:
            data_x = data[:axis]
            data_x.extend(data[axis+1:])
            subSet.append(data_x)
    return subSet

'''Scan the data set for the feature with the best information gain'''
def chooseBestFeatureToSpit(dataSet):
    feature_num = len(dataSet[0]) - 1
    origin_ent = calShannonEnt(dataSet)
    infoGain = 0.0
    best_infogain = 0.0
    for i in range(feature_num):
        fi_all = [data[i] for data in dataSet]
        fi_all = set(fi_all)
        # print fi_all
        subset_Ent = 0
        '''iterate over all possible values'''
        for value in fi_all:
            # split out the subset
            # print i, value
            subset = splitdataset(dataSet, i, value)
            # print subset
            # weighted entropy of the subset
            p = float(len(subset)) / len(dataSet)
            subset_Ent += p * calShannonEnt(subset)
        # information gain
        infoGain = origin_ent - subset_Ent
        # keep the best IG
        # print "information gain of feature %d is %f" % (i, infoGain)
        if infoGain > best_infogain:
            best_feature = i
            best_infogain = infoGain
    return best_feature

'''Count classes and return the majority class'''
def majorityCnt(classList):
    classCount = {}
    for class_ in classList:
        if class_ not in classCount.keys():
            classCount[class_] = 0
        classCount[class_] += 1
    classSort = sorted(classCount.iteritems(), key=operator.itemgetter(1), reverse=True)
    return classSort[0][0]

'''Recursively build the tree'''
def createTree(dataSet, labels, feaLabels):
    '''all class labels in this data set'''
    classList = [example[-1] for example in dataSet]
    '''check the two stopping conditions'''
    '''1. all samples belong to one class'''
    if len(classList) == classList.count(classList[0]):
        return classList[0]
    '''2. only one feature column is left'''
    if len(dataSet[0]) == 1:
        majorClass = majorityCnt(classList)
        return majorClass
    '''otherwise keep splitting'''
    best_feature = chooseBestFeatureToSpit(dataSet)  # index of the best split feature
    best_feaLabel = labels[best_feature]
    feaLabels.append(best_feaLabel)  # record the chosen feature
    del(labels[best_feature])  # remove it from the feature list: ID3 consumes features
    feaValue = [example[best_feature] for example in dataSet]
    feaValue = set(feaValue)  # all values taken by the best feature
    deci_tree = {best_feaLabel: {}}  # the subtree's root key is the split feature; its value holds the recursive subtrees
    for value in feaValue:
        subLabel = labels[:]  # copy, so recursion does not modify the caller's labels
        subset = splitdataset(dataSet, best_feature, value)
        deci_tree[best_feaLabel][value] = createTree(subset, subLabel, feaLabels)
    # print deci_tree
    return deci_tree

if __name__ == '__main__':
    dataSet, labels = createDataSet()
    feaLabels = []
    mytree = createTree(dataSet, labels, feaLabels)
    print json.dumps(mytree, ensure_ascii=False)
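As a quick sanity check of the entropy formula used by calShannonEnt above (re-implemented compactly here so it runs standalone on a plain list of class labels): a 50/50 two-class set should give exactly 1 bit, and a pure set 0 bits.

```python
from math import log

# Compact re-implementation of the empirical entropy computed by
# calShannonEnt, operating directly on a list of class labels.
def shannon_entropy(labels):
    counts = {}
    for l in labels:
        counts[l] = counts.get(l, 0) + 1
    n = float(len(labels))
    return sum(-(c / n) * log(c / n, 2) for c in counts.values())

print(shannon_entropy(['hard', 'soft']))          # 50/50 split: 1.0 bit
print(shannon_entropy(['soft', 'soft', 'soft']))  # pure set: 0.0 bits
```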

The resulting tree:

{"tearRate": {"reduced": "no lenses", "normal": {"astigmatic": {"yes": {"prescript": {"hyper": {"age": {"pre": "no lenses", "presbyopic": "no lenses", "young": "hard"}}, "myope": "hard"}}, "no": {"age": {"pre": "soft", "presbyopic": {"prescript": {"hyper": "soft", "myope": "no lenses"}}, "young": "soft"}}}}}}

3 Matplotlib visualization

The dictionary printout above is hard to read, so next we visualize the tree with matplotlib.

Environment: macOS 10.12.3, Python 2.7

Install the module (Python 2):

pip install matplotlib

or, for Python 3:

pip3 install matplotlib

Import the modules in the code:

import matplotlib
import matplotlib.pyplot as plt

Functions we will need:

getNumLeafs: count the leaf nodes of the tree
getTreeDepth: count the levels of the tree
plotNode2: draw a node
plotMidText: label an edge with its feature value
plotTree: draw the tree
createPlot: create the drawing canvas

def getNumLeafs(myTree):
    numLeafs = 0  # leaf counter
    firstStr = next(iter(myTree))  # in Python 3, myTree.keys() returns dict_keys, not a list, so myTree.keys()[0] no longer works; list(myTree.keys())[0] would also do
    secondDict = myTree[firstStr]  # the next-level dictionary
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':  # if the value is not a dict, this child is a leaf
            numLeafs += getNumLeafs(secondDict[key])
        else:
            numLeafs += 1
    return numLeafs

def getTreeDepth(myTree):
    maxDepth = 0  # depth counter
    firstStr = next(iter(myTree))  # same dict_keys caveat as in getNumLeafs
    secondDict = myTree[firstStr]  # the next-level dictionary
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':  # if the value is not a dict, this child is a leaf
            thisDepth = 1 + getTreeDepth(secondDict[key])
        else:
            thisDepth = 1
        if thisDepth > maxDepth:
            maxDepth = thisDepth  # update the depth
    return maxDepth

def plotNode2(nodeTxt, centerPt, parentPt, nodeType):
    arrow_args = dict(arrowstyle="<-")  # arrow style
    # font = FontProperties(fname=r"c:\windows\fonts\simsun.ttc", size=14)  # Chinese font, if needed
    createPlot.ax1.annotate(nodeTxt, xy=parentPt, xycoords='axes fraction',  # draw the node
                            xytext=centerPt, textcoords='axes fraction',
                            va="center", ha="center", bbox=nodeType, arrowprops=arrow_args)

def plotMidText(cntrPt, parentPt, txtString):
    xMid = (parentPt[0] - cntrPt[0]) / 2.0 + cntrPt[0]  # midpoint of the edge
    yMid = (parentPt[1] - cntrPt[1]) / 2.0 + cntrPt[1]
    createPlot.ax1.text(xMid, yMid, txtString, va="center", ha="center", rotation=30)

def plotTree(myTree, parentPt, nodeTxt):
    decisionNode = dict(boxstyle="sawtooth", fc="0.8")  # decision-node style
    leafNode = dict(boxstyle="round4", fc="0.8")  # leaf-node style
    numLeafs = getNumLeafs(myTree)  # the number of leaves determines the width
    depth = getTreeDepth(myTree)  # the number of levels
    firstStr = next(iter(myTree))  # the next dictionary
    cntrPt = (plotTree.xOff + (1.0 + float(numLeafs)) / 2.0 / plotTree.totalW, plotTree.yOff)  # center position
    plotMidText(cntrPt, parentPt, nodeTxt)  # label the edge with its feature value
    plotNode2(firstStr, cntrPt, parentPt, decisionNode)  # draw the node
    secondDict = myTree[firstStr]  # the next dictionary, i.e., continue drawing the children
    plotTree.yOff = plotTree.yOff - 1.0 / plotTree.totalD  # move down one level
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':  # if the value is not a dict, this child is a leaf
            plotTree(secondDict[key], cntrPt, str(key))  # internal node: recurse
        else:  # leaf: draw it and label the edge
            plotTree.xOff = plotTree.xOff + 1.0 / plotTree.totalW
            plotNode2(secondDict[key], (plotTree.xOff, plotTree.yOff), cntrPt, leafNode)
            plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
    plotTree.yOff = plotTree.yOff + 1.0 / plotTree.totalD

def createPlot(inTree):
    fig = plt.figure(1, facecolor='white')  # create the figure
    fig.clf()  # clear the figure
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)  # hide the x and y axes
    plotTree.totalW = float(getNumLeafs(inTree))  # number of leaves
    plotTree.totalD = float(getTreeDepth(inTree))  # number of levels
    plotTree.xOff = -0.5 / plotTree.totalW  # x offset
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')  # draw the tree
    plt.show()  # show the result

if __name__ == '__main__':
    dataSet, labels = createDataSet()
    feaLabels = []
    mytree = createTree(dataSet, labels, feaLabels)
    # print json.dumps(mytree, ensure_ascii=False)
    createPlot(mytree)

4 Building a decision tree with sklearn
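A minimal sketch of this approach: sklearn's DecisionTreeClassifier requires numeric features, so each string-valued column is label-encoded first. The rows below are illustrative stand-ins for lenses.txt (a real run would read the file with createDataSet), and criterion='entropy' selects the same information-gain-style splitting used in the hand-written code above.

```python
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier

# Illustrative rows in the lenses.txt format; a real run would load the file.
rows = [
    ['young', 'myope', 'no', 'reduced', 'no lenses'],
    ['young', 'myope', 'yes', 'normal', 'hard'],
    ['pre', 'hyper', 'no', 'normal', 'soft'],
    ['presbyopic', 'myope', 'no', 'reduced', 'no lenses'],
]
X_raw = [r[:4] for r in rows]
y = [r[4] for r in rows]

# Encode each feature column independently with its own LabelEncoder.
encoders = [LabelEncoder().fit(col) for col in zip(*X_raw)]
X = [[enc.transform([v])[0] for enc, v in zip(encoders, row)] for row in X_raw]

clf = DecisionTreeClassifier(criterion='entropy')  # entropy split criterion, as in ID3
clf.fit(X, y)
print(clf.predict([X[0]])[0])  # a training sample is predicted with its own label
```

Note that sklearn's tree treats the encoded integers as ordered numeric values rather than pure categories, which is a common practical compromise; one-hot encoding is an alternative when that ordering is undesirable.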
