
Andrew Ng -- Machine Learning ex2


This project contains a Python implementation of Andrew Ng's machine learning exercise 2. The main topics are logistic regression and regularization; the problem statement can be found in ex2.pdf in the dataset.

The code is adapted from the web (originally from 黄广海's GitHub), with translations of the problem statements added and the structure rearranged to match the exercises, to make it easier to follow.

In addition, the higher-order terms in the original code had some problems, and the plotting code was missing; both have been corrected and completed.

Links to the other exercises:

ex1: Linear Regression
ex2: Logistic Regression and Regularization

1 Logistic Regression

In the first part of the exercise, we build a logistic regression model to predict whether a student will be admitted to a university.

Suppose you are the administrator of a university department and you want to decide each applicant's admission based on their scores on two exams.

You have a training set of historical applicants that can be used to train logistic regression. For each training example, you have the applicant's scores on the two exams and the final admission decision.

1.1 Data Visualization

In[1]:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt


In[2]:

path = '/home/kesci/input/andrew_ml_ex22391/ex2data1.txt'
data = pd.read_csv(path, header=None, names=['Exam 1', 'Exam 2', 'Admitted'])
data.head()

In[3]:

positive = data[data['Admitted'].isin([1])]
negative = data[data['Admitted'].isin([0])]

fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['Exam 1'], positive['Exam 2'], s=50, c='b', marker='o', label='Admitted')
ax.scatter(negative['Exam 1'], negative['Exam 2'], s=50, c='r', marker='x', label='Not Admitted')
ax.legend()
ax.set_xlabel('Exam 1 Score')
ax.set_ylabel('Exam 2 Score')
plt.show()

1.2 Implementation

1.2.1 The Sigmoid Function

The logistic regression hypothesis is

$$h_\theta(x) = g(\theta^T x)$$

where $g$ is the commonly used logistic function, an S-shaped sigmoid function:

$$g(z) = \frac{1}{1 + e^{-z}}$$

Putting these together, the hypothesis of the logistic regression model is:

$$h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}$$

In[4]:

# Implement the sigmoid function
def sigmoid(z):
    return 1 / (1 + np.exp(-z))
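As a quick sanity check, a minimal sketch: $g(0)$ should be exactly 0.5, and because the function is written with np.exp, it also works elementwise on arrays.

# Sanity check (a sketch): sigmoid(0) = 0.5; large |z| saturates toward 0 or 1
print(sigmoid(0))                        # 0.5
print(sigmoid(np.array([-10, 0, 10])))   # ≈ [4.54e-05, 0.5, 0.99995]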

1.2.2 Cost Function and Gradient

The cost function is:

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left[-y^{(i)}\log\left(h_\theta(x^{(i)})\right) - \left(1 - y^{(i)}\right)\log\left(1 - h_\theta(x^{(i)})\right)\right]$$

The gradient is:

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$$

Although this gradient looks just like the one for linear regression, remember that $h_\theta(x)$ is different here.

Once the implementation is complete, evaluate the cost at the initial $\theta$ (all zeros); the result should be about 0.693 (that is, $\ln 2$, since with $\theta = 0$ every prediction is 0.5).

In[5]:

# Implement the cost function
def cost(theta, X, y):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    first = np.multiply(-y, np.log(sigmoid(X * theta.T)))
    second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T)))
    return np.sum(first - second) / (len(X))

Initialize X, y, and $\theta$:

In[6]:

# Add a column of ones (the intercept term)
data.insert(0, 'Ones', 1)

# Initialize X, y, θ
cols = data.shape[1]
X = data.iloc[:,0:cols-1]
y = data.iloc[:,cols-1:cols]
theta = np.zeros(3)

# Convert X and y to numpy arrays
X = np.array(X.values)
y = np.array(y.values)

In[7]:

# Check the matrix dimensions
X.shape, theta.shape, y.shape

Out[7]:

((100, 3), (3,), (100, 1))

In[8]:

# Evaluate the cost at the initial θ
cost(theta, X, y)

Out[8]:

0.6931471805599453

In[9]:

# Compute the gradient (this only evaluates it; it does not update θ)
def gradient(theta, X, y):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    parameters = int(theta.ravel().shape[1])
    grad = np.zeros(parameters)
    error = sigmoid(X * theta.T) - y
    for i in range(parameters):
        term = np.multiply(error, X[:,i])
        grad[i] = np.sum(term) / len(X)
    return grad
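For reference, the same gradient can be written without the loop as $\frac{1}{m}X^T\left(g(X\theta) - y\right)$. A minimal sketch (not used below; it assumes X and y are the plain numpy arrays initialized above):

# Vectorized gradient (a sketch): (1/m) · Xᵀ (sigmoid(Xθ) − y)
def gradient_vec(theta, X, y):
    theta = np.asarray(theta).reshape(-1)    # shape (n,)
    error = sigmoid(X @ theta) - y.ravel()   # shape (m,)
    return X.T @ error / len(X)              # shape (n,)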

1.2.3 Computing θ with an Optimization Library

In the earlier linear regression exercise we implemented gradient descent ourselves (part 2.2.4 of ex1): we wrote a cost function, computed its gradient, and then took gradient-descent steps. This time, instead of writing our own gradient descent, we call an existing optimization routine. That means we don't have to choose an iteration count or step size ourselves; the routine returns the optimal solution directly.

Andrew Ng uses Octave's "fminunc" function in the course. Since we are working in Python, we can use scipy.optimize.fmin_tnc to do the same thing.

(If you have questions about fminunc, see this Baidu Wenku document: /view/2f6ce65d0b1c59eef8c7b47a.html)

If all goes well, the cost at the optimal θ should be about 0.203.

In[10]:

import scipy.optimize as opt

result = opt.fmin_tnc(func=cost, x0=theta, fprime=gradient, args=(X, y))
result

Out[10]:

(array([-25.16131863,   0.20623159,   0.20147149]), 36, 0)
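fmin_tnc is SciPy's older interface; the same optimization can also be run through the newer scipy.optimize.minimize. A minimal sketch, using the same cost and gradient functions:

# Equivalent call via the newer interface; res.x is θ, res.fun is the final cost
res = opt.minimize(fun=cost, x0=theta, args=(X, y), method='TNC', jac=gradient)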

Let's see what the cost function evaluates to at this solution.

In[11]:

# Plug the optimized θ back into the cost function
cost(result[0], X, y)

Out[11]:

0.20349770158947458

Plot the decision boundary:

In[12]:

plotting_x1 = np.linspace(30, 100, 100)
plotting_h1 = (- result[0][0] - result[0][1] * plotting_x1) / result[0][2]

fig, ax = plt.subplots(figsize=(12,8))
ax.plot(plotting_x1, plotting_h1, 'y', label='Prediction')
ax.scatter(positive['Exam 1'], positive['Exam 2'], s=50, c='b', marker='o', label='Admitted')
ax.scatter(negative['Exam 1'], negative['Exam 2'], s=50, c='r', marker='x', label='Not Admitted')
ax.legend()
ax.set_xlabel('Exam 1 Score')
ax.set_ylabel('Exam 2 Score')
plt.show()

1.2.4 Evaluating the Logistic Regression Model

With the parameters determined, we can use the model to predict whether a student will be admitted. For a student who scores 45 on exam 1 and 85 on exam 2, the predicted probability of admission should be about 0.776.

In[13]:

# Implement hθ
def hfunc1(theta, X):
    return sigmoid(np.dot(theta.T, X))

hfunc1(result[0], [1, 45, 85])

Out[13]:

0.7762906238162848

Another way to evaluate θ is to check the model's accuracy on the training set. Write a predict function that, given the data and parameters, returns 1 or 0; then apply it to the training set and measure the accuracy.

In[14]:

# Define the prediction function
def predict(theta, X):
    probability = sigmoid(X * theta.T)
    return [1 if x >= 0.5 else 0 for x in probability]

In[15]:

# Compute the training-set accuracy
theta_min = np.matrix(result[0])
predictions = predict(theta_min, X)
correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in zip(predictions, y)]
accuracy = int(sum(map(int, correct)) / len(correct) * 100)   # fraction correct, as a percentage
print('accuracy = {0}%'.format(accuracy))

accuracy = 89%
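The same accuracy can be computed more directly with numpy. A minimal sketch:

# predictions is a list of 0/1 labels; y has shape (100, 1)
accuracy = np.mean(np.array(predictions) == y.ravel()) * 100   # 89.0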


2 Regularized Logistic Regression

In the second part of the exercise, we improve logistic regression by adding a regularization term.

Suppose you are the production supervisor of a factory and you have the results of two tests on some microchips, which determine whether each chip is accepted or rejected. You have historical data to help you build a logistic regression model.

2.1 Data Visualization

In[16]:

path = '/home/kesci/input/andrew_ml_ex22391/ex2data2.txt'
data_init = pd.read_csv(path, header=None, names=['Test 1', 'Test 2', 'Accepted'])
data_init.head()

In[17]:

positive2 = data_init[data_init['Accepted'].isin([1])]
negative2 = data_init[data_init['Accepted'].isin([0])]

fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive2['Test 1'], positive2['Test 2'], s=50, c='b', marker='o', label='Accepted')
ax.scatter(negative2['Test 1'], negative2['Test 2'], s=50, c='r', marker='x', label='Rejected')
ax.legend()
ax.set_xlabel('Test 1 Score')
ax.set_ylabel('Test 2 Score')
plt.show()

The plot above shows that this dataset cannot be separated by a straight line as before. Logistic regression on the raw features only produces a linear decision boundary, so it cannot be applied to this dataset directly.

2.2 Feature Mapping

A better way to use the dataset is to create more features from each example. We therefore map $x_1, x_2$ into all polynomial terms of $x_1$ and $x_2$ up to the sixth power. For degree 2, for example, the mapped terms would be $x_1, x_2, x_1^2, x_1 x_2, x_2^2$.

In[18]:

degree = 6
data2 = data_init
x1 = data2['Test 1']
x2 = data2['Test 2']

data2.insert(3, 'Ones', 1)

for i in range(1, degree+1):
    for j in range(0, i+1):
        data2['F' + str(i-j) + str(j)] = np.power(x1, i-j) * np.power(x2, j)
        # The original answer had several errors here; this is the corrected version

data2.drop('Test 1', axis=1, inplace=True)
data2.drop('Test 2', axis=1, inplace=True)

data2.head()

Out[18]:

5 rows × 29 columns
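The column count checks out: the degree-$d$ mapping produces $\sum_{i=1}^{d}(i+1)$ polynomial terms, which for $d = 6$ is 27; together with the 'Ones' and 'Accepted' columns that gives 29. A quick check, as a sketch:

# 27 polynomial features + 'Ones' + 'Accepted' = 29 columns
sum(i + 1 for i in range(1, degree + 1))   # 27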

2.3 Cost Function and Gradient

In this part you implement the cost function and gradient for regularized logistic regression. The cost function is:

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left[-y^{(i)}\log\left(h_\theta(x^{(i)})\right) - \left(1 - y^{(i)}\right)\log\left(1 - h_\theta(x^{(i)})\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

Remember that $\theta_0$ is not regularized; the regularization sum starts at $j = 1$.

The update rules for the elements of $\theta$ are:

$$\theta_0 := \theta_0 - \alpha\,\frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right]x_0^{(i)}$$

$$\theta_j := \theta_j - \alpha\left[\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} + \frac{\lambda}{m}\theta_j\right]$$

Rearranging the update for $j = 1, 2, \ldots, n$ gives:

$$\theta_j := \theta_j\left(1 - \alpha\,\frac{\lambda}{m}\right) - \alpha\,\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$$

Plugging in the initial $\theta$ (all zeros), the cost should again be about 0.693.

In[19]:

# Implement the regularized cost function
# (here the parameter named learningRate actually plays the role of λ)
def costReg(theta, X, y, learningRate):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    first = np.multiply(-y, np.log(sigmoid(X * theta.T)))
    second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T)))
    reg = (learningRate / (2 * len(X))) * np.sum(np.power(theta[:,1:theta.shape[1]], 2))
    return np.sum(first - second) / len(X) + reg

In[20]:

# Implement the regularized gradient (this does not update θ)
def gradientReg(theta, X, y, learningRate):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    parameters = int(theta.ravel().shape[1])
    grad = np.zeros(parameters)
    error = sigmoid(X * theta.T) - y
    for i in range(parameters):
        term = np.multiply(error, X[:,i])
        if (i == 0):
            grad[i] = np.sum(term) / len(X)   # θ0 is not regularized
        else:
            grad[i] = (np.sum(term) / len(X)) + ((learningRate / len(X)) * theta[:,i])
    return grad
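As in part 1, the loop can be replaced by a vectorized expression; a minimal sketch (not used below), with the bias term left unpenalized:

# Vectorized regularized gradient (a sketch)
def gradientReg_vec(theta, X, y, learningRate):
    theta = np.asarray(theta).reshape(-1)
    error = sigmoid(X @ theta) - y.ravel()
    grad = X.T @ error / len(X)
    reg = (learningRate / len(X)) * theta
    reg[0] = 0                                # θ0 is not regularized
    return grad + reg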

In[21]:

# Initialize X, y, θ
cols = data2.shape[1]
X2 = data2.iloc[:,1:cols]
y2 = data2.iloc[:,0:1]
theta2 = np.zeros(cols-1)

# Type conversion
X2 = np.array(X2.values)
y2 = np.array(y2.values)

# Set λ to 1
learningRate = 1

In[22]:

# Compute the initial cost
costReg(theta2, X2, y2, learningRate)

Out[22]:

0.6931471805599454

2.3.1 Solving for the Parameters with the Optimization Library

In[23]:

result2 = opt.fmin_tnc(func=costReg, x0=theta2, fprime=gradientReg, args=(X2, y2, learningRate))
result2

Out[23]:

(array([ 1.27271026,  0.62529965,  1.18111686, -2.01987398, -0.91743189,
        -1.43166928,  0.12393228, -0.36553118, -0.35725404, -0.17516291,
        -1.45817009, -0.05098418, -0.61558555, -0.27469165, -1.19271298,
        -0.24217841, -0.20603299, -0.04466178, -0.27778951, -0.29539514,
        -0.45645982, -1.04319155,  0.02779373, -0.2924487 ,  0.0155576 ,
        -0.32742405, -0.1438915 , -0.92467487]), 32, 1)

Finally, we can use the prediction function from part 1 to check the accuracy of this solution on the training data.

In[24]:

theta_min = np.matrix(result2[0])
predictions = predict(theta_min, X2)
correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in zip(predictions, y2)]
accuracy = int(sum(map(int, correct)) / len(correct) * 100)   # fraction correct, as a percentage
print('accuracy = {0}%'.format(accuracy))

accuracy = 83%
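As a cross-check, scikit-learn's LogisticRegression can be fit on the same mapped features; a sketch, assuming scikit-learn is available. Note that sklearn parameterizes L2 regularization through C (roughly $1/\lambda$), so its accuracy will not match exactly.

from sklearn.linear_model import LogisticRegression

model = LogisticRegression(penalty='l2', C=1.0)   # C ≈ 1/λ
model.fit(X2, y2.ravel())
model.score(X2, y2.ravel())   # fraction of training examples classified correctly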

2.4 Plotting the Decision Boundary

In[25]:

# Evaluate θᵀx under the same degree-6 feature mapping as above
def hfunc2(theta, x1, x2):
    temp = theta[0][0]
    place = 0
    for i in range(1, degree+1):
        for j in range(0, i+1):
            temp += np.power(x1, i-j) * np.power(x2, j) * theta[0][place+1]
            place += 1
    return temp

In[26]:

# Evaluate h on a dense grid and keep the points where h is close to 0
def find_decision_boundary(theta):
    t1 = np.linspace(-1, 1.5, 1000)
    t2 = np.linspace(-1, 1.5, 1000)
    coordinates = [(x, y) for x in t1 for y in t2]
    x_cord, y_cord = zip(*coordinates)
    h_val = pd.DataFrame({'x1': x_cord, 'x2': y_cord})
    h_val['hval'] = hfunc2(theta, h_val['x1'], h_val['x2'])

    decision = h_val[np.abs(h_val['hval']) < 2 * 10**-3]
    return decision.x1, decision.x2

In[27]:

fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive2['Test 1'], positive2['Test 2'], s=50, c='b', marker='o', label='Accepted')
ax.scatter(negative2['Test 1'], negative2['Test 2'], s=50, c='r', marker='x', label='Rejected')
ax.set_xlabel('Test 1 Score')
ax.set_ylabel('Test 2 Score')

x, y = find_decision_boundary(result2)
plt.scatter(x, y, c='y', s=10, label='Prediction')
ax.legend()
plt.show()
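An alternative to scatter-plotting near-zero grid points is to draw the $h = 0$ level set with plt.contour; a minimal sketch reusing the same hfunc2:

# Draw the decision boundary as the h = 0 contour (a sketch)
t = np.linspace(-1, 1.5, 250)
xx, yy = np.meshgrid(t, t)
zz = hfunc2(result2, xx.ravel(), yy.ravel()).reshape(xx.shape)
plt.contour(xx, yy, zz, levels=[0], colors='y')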

2.5 Varying λ and Observing the Decision Boundary

With $\lambda = 0$ the model overfits:

In[28]:

learningRate2 = 0
result3 = opt.fmin_tnc(func=costReg, x0=theta2, fprime=gradientReg, args=(X2, y2, learningRate2))
result3

Out[28]:

(array([   14.60193336,    21.20326682,     4.60748805,  -150.30636263,
          -70.51421716,   -65.71761632,  -167.22986423,  -100.93094956,
          -58.4583472 ,     9.35117823,   538.72438097,   445.25267052,
          633.43046793,   239.567217  ,    92.6608774 ,   300.20568543,
          362.78215934,   440.45538844,   196.63024035,    52.26698467,
          -13.32416223,  -639.85098768,  -782.82561038, -1230.55113233,
         -846.7737968 ,  -793.91524305,  -273.62741174,   -51.4586635 ]),
 280, 3)

In[29]:

fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive2['Test 1'], positive2['Test 2'], s=50, c='b', marker='o', label='Accepted')
ax.scatter(negative2['Test 1'], negative2['Test 2'], s=50, c='r', marker='x', label='Rejected')
ax.set_xlabel('Test 1 Score')
ax.set_ylabel('Test 2 Score')

x, y = find_decision_boundary(result3)
plt.scatter(x, y, c='y', s=10, label='Prediction')
ax.legend()
plt.show()

With $\lambda = 100$ the model underfits:

In[30]:

learningRate3 = 100
result4 = opt.fmin_tnc(func=costReg, x0=theta2, fprime=gradientReg, args=(X2, y2, learningRate3))

In[31]:

fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive2['Test 1'], positive2['Test 2'], s=50, c='b', marker='o', label='Accepted')
ax.scatter(negative2['Test 1'], negative2['Test 2'], s=50, c='r', marker='x', label='Rejected')
ax.set_xlabel('Test 1 Score')
ax.set_ylabel('Test 2 Score')

x, y = find_decision_boundary(result4)
plt.scatter(x, y, c='y', s=10, label='Prediction')
ax.legend()
plt.show()
