
[Deep Learning Project] An LSTM Model for Multivariate Time Series Forecasting with Keras

Date: 2022-02-23 12:57:19


This article walks through building an LSTM model for multivariate time series forecasting with Keras.

Project: Air Pollution Forecasting

1. Overview

- How to transform a raw dataset into one usable for time series forecasting.
- How to prepare the data and fit an LSTM to a multivariate time series forecasting problem.
- How to make a forecast and rescale the result back into the original units.

2. Downloading the Data

In this tutorial we use an air quality dataset that reports the weather and the level of pollution each hour for five years at the US embassy in China. The data includes the date-time, the pollution level (called the PM2.5 concentration), and weather information including dew point, temperature, pressure, wind direction, wind speed, and the cumulative hours of snow and rain. The complete list of features in the raw data is as follows:

- No: row number
- year: year of the data in this row
- month: month of the data in this row
- day: day of the data in this row
- hour: hour of the data in this row
- pm2.5: PM2.5 concentration
- DEWP: dew point
- TEMP: temperature
- PRES: pressure
- cbwd: combined wind direction
- Iws: cumulated wind speed
- Is: cumulated hours of snow
- Ir: cumulated hours of rain

We can use this data to frame a forecasting problem where, given the weather conditions and the pollution level for prior hours, we forecast the pollution level at the next hour.

Dataset download link

Download the dataset and save it as raw.csv.

3. Data Processing

The first step is to consolidate the scattered date-time information into a single date-time so that we can use it as the index in Pandas.

A quick check shows NA values for pm2.5 across the entire first day, so we need to drop the first 24 hours of data. There are also a few scattered NA values later in the dataset; for now we can mark them with 0 values.
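As a quick way to verify the missing first day yourself, here is a minimal sketch (assuming raw.csv is in the current working directory):

# Illustrative check: count missing pm2.5 values in the raw file.
from pandas import read_csv

raw = read_csv('raw.csv')
print(raw['pm2.5'].head(24).isna().sum())  # 24 -> the entire first day is missing
print(raw['pm2.5'].isna().sum())           # total missing values in the column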

The script below loads the raw dataset, parses the date-time information into the Pandas DataFrame index, drops the "No" column, and gives each column a clearer name. Finally, it replaces the NA values with 0 and removes the first 24 hours of data.

# -*- coding: utf-8 -*-
from datetime import datetime
from pandas import read_csv

# Convert a date string to a datetime object
def parse(x):
    return datetime.strptime(x, '%Y %m %d %H')

# Path to the raw data
data_path = r'D:\深度学习\数据集\raw.csv'

# Load the data, merging the year/month/day/hour columns into a single date-time
dataset = read_csv(data_path, sep=',',
                   parse_dates=[['year', 'month', 'day', 'hour']],
                   index_col=0, date_parser=parse)

# Drop the 'No' column
dataset.drop('No', axis=1, inplace=True)

# Give each column a clearer name
dataset.columns = ['pollution', 'dew', 'temp', 'press', 'wnd_dir', 'wnd_spd', 'snow', 'rain']

# Rename the index
dataset.index.name = 'date'

# Mark NA values with 0
dataset['pollution'].fillna(0, inplace=True)

# Drop the first 24 rows (the first day has no pm2.5 readings)
dataset = dataset[24:]

# Inspect the first 5 rows
print(dataset.head(5))

# Save the processed data
dataset.to_csv(r'D:\深度学习\数据集\pollution.csv')

                     pollution  dew  temp   press wnd_dir  wnd_spd  snow  rain
date
2010-01-02 00:00:00      129.0  -16  -4.0  1020.0      SE     1.79     0     0
2010-01-02 01:00:00      148.0  -15  -4.0  1020.0      SE     2.68     0     0
2010-01-02 02:00:00      159.0  -11  -5.0  1021.0      SE     3.57     0     0
2010-01-02 03:00:00      181.0   -7  -5.0  1022.0      SE     5.36     1     0
2010-01-02 04:00:00      138.0   -7  -5.0  1022.0      SE     6.25     2     0

4. Building the Multivariate LSTM Forecast Model
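Before diving into the full script, it helps to see what the supervised-learning transform does. The series_to_supervised() helper shifts the dataset so that each row pairs the previous hour's observations (t-1) with the current hour's values (t). Below is a toy sketch of the same idea on a tiny single-variable series (illustrative only; the real helper is defined in the full script that follows):

# Toy illustration of the supervised-learning transform:
# X = value at t-1, y = value at t, built by shifting the series one step.
from pandas import DataFrame, concat

df = DataFrame({'var1': [10, 20, 30, 40]})
supervised = concat([df.shift(1), df], axis=1)
supervised.columns = ['var1(t-1)', 'var1(t)']
supervised.dropna(inplace=True)  # the first row has no (t-1) value
print(supervised)
#    var1(t-1)  var1(t)
# 1       10.0       20
# 2       20.0       30
# 3       30.0       40

In the full script, only var1(t) (the pollution level at time t) is kept as the target; the other (t) columns are dropped.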

# -*- coding: utf-8 -*-
from math import sqrt
from numpy import concatenate
from matplotlib import pyplot
from pandas import read_csv
from pandas import DataFrame
from pandas import concat
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import mean_squared_error, r2_score
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM

# Convert a time series to a supervised learning dataset
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    df = DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j + 1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j + 1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j + 1, i)) for j in range(n_vars)]
    # put it all together
    agg = concat(cols, axis=1)
    agg.columns = names
    # drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)
    return agg

# Load the processed dataset
dataset = read_csv(r'D:\深度学习\数据集\pollution.csv', header=0, index_col=0)
values = dataset.values
# Integer-encode the categorical wind direction
encoder = LabelEncoder()
values[:, 4] = encoder.fit_transform(values[:, 4])
# Ensure all data is float
values = values.astype('float32')
# Normalize all features to the range [0, 1]
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
# Frame as a supervised learning problem
reframed = series_to_supervised(scaled, 1, 1)
# Drop the (t) columns we do not want to predict, keeping only var1(t)
reframed.drop(reframed.columns[[9, 10, 11, 12, 13, 14, 15]], axis=1, inplace=True)

# Split into training and test sets (first year of data for training)
values = reframed.values
n_train_hours = 365 * 24
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]
train_X, train_y = train[:, :-1], train[:, -1]
test_X, test_y = test[:, :-1], test[:, -1]
# Reshape the input (X) into the 3D format expected by LSTMs: [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)

# Define the LSTM model
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
# Fit the model
history = model.fit(train_X, train_y, epochs=100, batch_size=50,
                    validation_data=(test_X, test_y), verbose=2, shuffle=False)
# Plot training and validation loss
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()

# Make a prediction on the test set
yhat = model.predict(test_X)
# Invert scaling for the forecast
test_X = test_X.reshape((test_X.shape[0], test_X.shape[2]))
inv_yhat = concatenate((yhat, test_X[:, 1:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat = inv_yhat[:, 0]
# Invert scaling for the actual values
test_y = test_y.reshape((len(test_y), 1))
inv_y = concatenate((test_y, test_X[:, 1:]), axis=1)
inv_y = scaler.inverse_transform(inv_y)
inv_y = inv_y[:, 0]
# Evaluate the model: RMSE (in the original units) and R-squared
rmse = sqrt(mean_squared_error(inv_y, inv_yhat))
r2 = r2_score(inv_y, inv_yhat)
print('Test RMSE: %.3f' % rmse)
print('Test R2: %.3f' % r2)

Using TensorFlow backend.
(8760, 1, 8) (8760,) (35039, 1, 8) (35039,)
Train on 8760 samples, validate on 35039 samples
Epoch 1/100
 - 2s - loss: 0.0606 - val_loss: 0.0485
Epoch 2/100
 - 1s - loss: 0.0347 - val_loss: 0.0372
Epoch 3/100
 - 1s - loss: 0.0180 - val_loss: 0.0230
Epoch 4/100
 - 1s - loss: 0.0157 - val_loss: 0.0165
Epoch 5/100
 - 1s - loss: 0.0149 - val_loss: 0.0147
...
Epoch 98/100
 - 2s - loss: 0.0142 - val_loss: 0.0134
Epoch 99/100
 - 2s - loss: 0.0141 - val_loss: 0.0136
Epoch 100/100
 - 2s - loss: 0.0142 - val_loss: 0.0136
Test RMSE: 26.489
Test R2: 0.917

5. Evaluating the Model

After the model is fit, we can forecast over the entire test dataset. By comparing the rescaled forecasts with the actual values, we can compute error scores for the model. Here we calculate the root mean squared error (RMSE), which reports the error in the same units as the variable itself, as well as the R-squared coefficient of determination.
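One subtlety in the rescaling step: the MinMaxScaler was fit on all 8 columns, so to invert the forecast we must first rebuild an 8-column array around it. Here is a minimal sketch of that trick in isolation, using random stand-in arrays (the variable names mirror the full script above):

# Sketch: inverting a scaler fit on 8 columns when only column 0 was predicted.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 1))
data = np.random.rand(100, 8).astype('float32')  # stand-in for the scaled dataset
scaler.fit(data)

yhat = np.random.rand(10, 1).astype('float32')   # stand-in for model.predict(test_X)
other_cols = data[:10, 1:]                       # the 7 remaining (scaled) feature columns
inv = scaler.inverse_transform(np.concatenate((yhat, other_cols), axis=1))
inv_yhat = inv[:, 0]                             # forecast back in the original units
print(inv_yhat.shape)                            # (10,)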

Test RMSE: 26.489

Test R2: 0.917

The model performs quite well.
