Earlier we saw that when an algorithm suffers from high variance, enlarging the training set helps drive down the validation error. Are there other situations where adding more data improves an algorithm?
Consider a learning problem: pick one of {to, too, two} to fill in the sentence "For breakfast, I ate __ eggs." Here the sentence itself carries enough information for the algorithm to reach the answer, so increasing the amount of training data is genuinely helpful. Contrast this with the housing-price problem: if we are given only the floor area and the price, predicting the true price from area alone is very hard, since the land value of the location also matters. When the features lack the information needed to predict the target, more data cannot make up for it.
In fact, problems of this kind come down to handling bias and variance. The recipe is to give the algorithm as many parameters as possible while also growing the training set: with many parameters, the training error J_train(θ) becomes very small, and with a huge amount of data the training error stays close to the validation error, so J_train(θ) ≈ J_cv(θ). Together these two facts guarantee that J_cv(θ) is also small, which is exactly the optimization we are after.
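Stated compactly in the course's notation (J_train for training error, J_cv for cross-validation error; this is only a restatement of the argument above, not a new result):

\begin{align*}
\text{many parameters} &\;\Rightarrow\; J_{\mathrm{train}}(\theta)\ \text{is small (low bias)} \\
\text{very large } m &\;\Rightarrow\; J_{\mathrm{train}}(\theta) \approx J_{\mathrm{cv}}(\theta)\ \text{(low variance)} \\
&\;\Rightarrow\; J_{\mathrm{cv}}(\theta)\ \text{is small}
\end{align*}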
Exercise: predict the amount of water flowing out of a dam from changes in the reservoir's water level
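For reference, the regularized linear regression cost and gradient that the code below implements are the standard ones (m examples, regularization parameter λ, with the bias term θ_0 excluded from the penalty):

J(\theta) = \frac{1}{2m}\Bigl[\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)^2 + \lambda\sum_{j=1}^{n}\theta_j^2\Bigr]

\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)x_j^{(i)} + \frac{\lambda}{m}\theta_j \qquad (j \geq 1;\ \theta_0 \text{ carries no penalty term})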
Code:
import numpy as np
import scipy.io as sio
import matplotlib.pyplot as plt
from scipy.optimize import minimize

def linear():  # scatter plot of the raw training data (uses the globals X_train, y_train)
    fig, ax = plt.subplots()
    ax.scatter(X_train[:, 1], y_train)
    ax.set(xlabel='water level', ylabel='flowing data')
    return fig, ax

def reg_cost(theta, X, y, l):  # regularized cost function
    cost = np.sum(np.power(X @ theta - y.flatten(), 2))
    reg = l * np.sum(np.power(theta[1:], 2))  # squared penalty; theta[0] is not regularized
    return (cost + reg) / (2 * len(X))

def reg_gradient(theta, X, y, l):  # regularized gradient
    grad = (X @ theta - y.flatten()) @ X
    reg = l * theta
    reg[0] = 0  # the bias term is not regularized
    return (grad + reg) / len(X)

def train_model(X, y, l):  # fit theta by minimizing the regularized cost
    theta = np.ones(X.shape[1])
    res = minimize(fun=reg_cost,
                   x0=theta,
                   args=(X, y, l),
                   method='TNC',
                   jac=reg_gradient)
    return res.x

def learning_curve(X_train, y_train, X_val, y_val, l):
    x = range(1, len(X_train) + 1)  # range() excludes the right endpoint, so +1 to include the last example
    training_cost = []  # training-set cost at each training-set size
    cv_cost = []  # validation-set cost at each training-set size
    for i in x:
        res = train_model(X_train[:i, :], y_train[:i, :], l)
        training_cost_i = reg_cost(res, X_train[:i, :], y_train[:i, :], l)
        cv_cost_i = reg_cost(res, X_val, y_val, l)
        training_cost.append(training_cost_i)
        cv_cost.append(cv_cost_i)
    plt.plot(x, training_cost, label='training cost')
    plt.plot(x, cv_cost, label='cv cost')
    plt.legend()
    plt.xlabel('training numbers')
    plt.ylabel('error')
    plt.show()

l = 1  # regularization parameter
data = sio.loadmat('ex5data1.mat')
print(data.keys())

X_train = data['X']
y_train = data['y']
print(X_train.shape)
print(y_train.shape)

X_val = data['Xval']
y_val = data['yval']
print(X_val.shape)
print(y_val.shape)

X_test = data['Xtest']
y_test = data['ytest']
print(X_test.shape)
print(y_test.shape)

X_train = np.insert(X_train, 0, 1, axis=1)  # prepend the bias column x0 = 1
X_val = np.insert(X_val, 0, 1, axis=1)
X_test = np.insert(X_test, 0, 1, axis=1)
fig, ax = linear()
plt.show()

theta = np.ones(X_train.shape[1])  # initial parameters for the cost/gradient check
initial_cost = reg_cost(theta, X_train, y_train, l)
print(initial_cost)

initial_gradient = reg_gradient(theta, X_train, y_train, l)
print(initial_gradient)

theta_final = train_model(X_train, y_train, l=0)
fig, ax = linear()
plt.plot(X_train[:, 1], X_train@theta_final, c='r')
plt.show()

# learning curve for linear regression: compare the training-set and validation-set
# errors to judge whether the model shows high bias or high variance
# (learning_curve plots directly and returns nothing, so its result is not printed)
learning_curve(X_train, y_train, X_val, y_val, l=0)
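One gap worth noting: X_test and y_test are loaded but never used above. As a minimal sketch (not part of the original run, so nothing for it appears in the output below), the held-out test error of the fitted line can be estimated with the functions already defined:

# Sketch: unregularized cost of the fitted theta_final on the untouched test split
test_cost = reg_cost(theta_final, X_test, y_test, 0)
print(test_cost)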
Output:
dict_keys(['__header__', '__version__', '__globals__', 'X', 'y', 'Xtest', 'ytest', 'Xval', 'yval'])
(12, 1)
(12, 1)
(21, 1)
(21, 1)
(21, 1)
(21, 1)
303.99319222
[-15.30301567 598.25074417]

(Figure: scatter plot of the raw data)

(Figure: linear regression fit)

(Figure: learning curve, training-set vs. validation-set cost)
Summary for today: compared with before, I am more comfortable organizing the work into functions, but keeping array dimensions consistent really matters. Only half of today's assignment is done; I had not accounted for the dimensions when adding regularization, and I am still fixing it.
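On the dimension troubles mentioned above: a numerical gradient check is a quick way to expose a mis-shaped array or a wrong regularization term. A minimal sketch (check_gradient is a made-up helper name, not part of the assignment), comparing reg_gradient against centered finite differences of reg_cost:

def check_gradient(theta, X, y, l, eps=1e-4):
    # Compare the analytic gradient with centered finite differences of the cost.
    analytic = reg_gradient(theta, X, y, l)
    numeric = np.zeros_like(theta)
    for j in range(len(theta)):
        e = np.zeros_like(theta)
        e[j] = eps
        numeric[j] = (reg_cost(theta + e, X, y, l) - reg_cost(theta - e, X, y, l)) / (2 * eps)
    return np.max(np.abs(analytic - numeric))  # should be tiny, around 1e-9 or smaller

# Example: check at theta = ones; a large value signals a bug in cost or gradient.
print(check_gradient(np.ones(X_train.shape[1]), X_train, y_train, 1))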
Assignment reference: https://www.bilibili.com/video/BV124411A75S?spm_id_from=333.788.videopod.episodes&vd_source=867b8ecbd62561f6cb9b4a83a368f691&p=8
