
Multi-step Time Series Forecasting of Household Power Consumption with an Encoder-Decoder LSTM Model


In this section, we can update the vanilla LSTM to use an encoder-decoder model. This means that the model will not output a vector sequence directly. Instead, the model will be comprised of two sub-models: an encoder that reads and encodes the input sequence, and a decoder that reads the encoded input sequence and makes a one-step prediction for each element in the output sequence. The difference is subtle, since in practice both approaches predict a sequence output. The important difference is that an LSTM model is used in the decoder, allowing it both to know what was predicted for the prior day in the sequence and to accumulate internal state while outputting the sequence. Let's take a closer look at how this model is defined. As before, we define an LSTM hidden layer with 200 units. This is the encoder model, which will read the input sequence and output a 200-element vector (one output per unit) that captures features from the input sequence.

We will use 14 days of total daily power consumption as input.

```python
# define model
model = Sequential()
model.add(LSTM(200, activation='relu', input_shape=(n_timesteps, n_features)))
```
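To make the shapes concrete, here is a minimal sketch of what this encoder layer produces, assuming a TensorFlow 2.x-backed Keras where layers can be called eagerly on NumPy arrays:

```python
# sketch: the encoder turns 14 days of one feature into a 200-element vector
import numpy as np
from keras.layers import LSTM

x = np.zeros((1, 14, 1), dtype='float32')     # (batch, n_timesteps, n_features)
print(LSTM(200, activation='relu')(x).shape)  # -> (1, 200)
```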

We will use a simple encoder-decoder architecture that is easy to implement in Keras and that has a lot in common with the architecture of an LSTM autoencoder. First, the internal representation of the input sequence is repeated multiple times, once for each time step in the output sequence. This sequence of vectors will be presented to the LSTM decoder.

```python
model.add(RepeatVector(7))
```
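The effect of RepeatVector is easy to check in isolation; a small sketch under the same eager-Keras assumption:

```python
import numpy as np
from keras.layers import RepeatVector

vec = np.zeros((1, 200), dtype='float32')  # the encoder's (batch, units) output
seq = RepeatVector(7)(vec)                 # repeated once per day of the output week
print(seq.shape)                           # -> (1, 7, 200)
```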

We then define the decoder as an LSTM hidden layer with 200 units. Importantly, the decoder will output the entire sequence, not just the output at the end of the sequence as we did with the encoder. This means that each of the 200 units will output a value for each of the seven days, providing the basis for what to predict for each day in the output sequence.

```python
model.add(LSTM(200, activation='relu', return_sequences=True))
```
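The role of return_sequences=True can be verified the same way; a sketch:

```python
import numpy as np
from keras.layers import LSTM

seq = np.zeros((1, 7, 200), dtype='float32')       # output of RepeatVector
print(LSTM(200)(seq).shape)                        # -> (1, 200): final step only
print(LSTM(200, return_sequences=True)(seq).shape) # -> (1, 7, 200): one output per step
```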

We will then use a fully connected layer to interpret each time step in the output sequence before the final output layer. Importantly, the output layer predicts a single step in the output sequence, not all seven days at once. This means the same layers will be applied to each step in the output sequence: the same fully connected layer and output layer will be used to process each time step provided by the decoder. To achieve this, we wrap the interpretation layer and the output layer in a TimeDistributed wrapper, which applies the wrapped layers to every time step coming from the decoder.

```python
model.add(TimeDistributed(Dense(100, activation='relu')))
model.add(TimeDistributed(Dense(1)))
```

This allows the LSTM decoder to figure out the context required for each step in the output sequence, and the wrapped dense layers to interpret each time step separately while reusing the same weights to perform the interpretation. An alternative would be to flatten all of the structure created by the LSTM decoder and output the vector directly. You can try this as an extension to see how it compares. The network therefore outputs a three-dimensional vector with the same structure as the input, with the dimensions [samples, timesteps, features]. There is a single feature, the total power consumed each day, and there are always seven time steps, so a single one-week prediction will have the size [1, 7, 1]. Therefore, when training the model, we must restructure the output data (y) to have this three-dimensional structure instead of the two-dimensional [samples, features] structure used in the previous section.

```python
# reshape output into [samples, timesteps, features]
train_y = train_y.reshape((train_y.shape[0], train_y.shape[1], 1))
```
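As a quick sanity check on that reshape, a sketch with a made-up sample count:

```python
import numpy as np

train_y = np.zeros((1000, 7))  # 2D [samples, features], as in the previous section
train_y = train_y.reshape((train_y.shape[0], train_y.shape[1], 1))
print(train_y.shape)           # -> (1000, 7, 1): [samples, timesteps, features]
```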

```python
# train the model
def build_model(train, n_input):
    # prepare data
    train_x, train_y = to_supervised(train, n_input)
    # define parameters
    verbose, epochs, batch_size = 0, 20, 16
    n_timesteps, n_features, n_outputs = train_x.shape[1], train_x.shape[2], train_y.shape[1]
    # reshape output into [samples, timesteps, features]
    train_y = train_y.reshape((train_y.shape[0], train_y.shape[1], 1))
    # define model
    model = Sequential()
    model.add(LSTM(200, activation='relu', input_shape=(n_timesteps, n_features)))
    model.add(RepeatVector(n_outputs))
    model.add(LSTM(200, activation='relu', return_sequences=True))
    model.add(TimeDistributed(Dense(100, activation='relu')))
    model.add(TimeDistributed(Dense(1)))
    model.compile(loss='mse', optimizer='adam')
    # fit network
    model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size, verbose=verbose)
    return model
```
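One possible way to inspect the assembled network (a sketch; `train` is the windowed training array produced by split_dataset in the complete listing below, and note that build_model also fits the network before returning it):

```python
model = build_model(train, n_input=14)
model.summary()  # encoder LSTM -> RepeatVector -> decoder LSTM -> TimeDistributed Dense heads
```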

```python
# univariate multi-step encoder-decoder lstm
from math import sqrt
from numpy import split
from numpy import array
from pandas import read_csv
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import LSTM
from keras.layers import RepeatVector
from keras.layers import TimeDistributed

# split a univariate dataset into train/test sets
def split_dataset(data):
    # split into standard weeks
    train, test = data[1:-328], data[-328:-6]
    # restructure into windows of weekly data
    train = array(split(train, len(train)/7))
    test = array(split(test, len(test)/7))
    return train, test

# evaluate one or more weekly forecasts against expected values
def evaluate_forecasts(actual, predicted):
    scores = list()
    # calculate an RMSE score for each day
    for i in range(actual.shape[1]):
        # calculate mse
        mse = mean_squared_error(actual[:, i], predicted[:, i])
        # calculate rmse
        rmse = sqrt(mse)
        # store
        scores.append(rmse)
    # calculate overall RMSE
    s = 0
    for row in range(actual.shape[0]):
        for col in range(actual.shape[1]):
            s += (actual[row, col] - predicted[row, col])**2
    score = sqrt(s / (actual.shape[0] * actual.shape[1]))
    return score, scores

# summarize scores
def summarize_scores(name, score, scores):
    s_scores = ', '.join(['%.1f' % s for s in scores])
    print('%s: [%.3f] %s' % (name, score, s_scores))

# convert history into inputs and outputs
def to_supervised(train, n_input, n_out=7):
    # flatten data
    data = train.reshape((train.shape[0]*train.shape[1], train.shape[2]))
    X, y = list(), list()
    in_start = 0
    # step over the entire history one time step at a time
    for _ in range(len(data)):
        # define the end of the input sequence
        in_end = in_start + n_input
        out_end = in_end + n_out
        # ensure we have enough data for this instance
        if out_end < len(data):
            x_input = data[in_start:in_end, 0]
            x_input = x_input.reshape((len(x_input), 1))
            X.append(x_input)
            y.append(data[in_end:out_end, 0])
        # move along one time step
        in_start += 1
    return array(X), array(y)

# train the model
def build_model(train, n_input):
    # prepare data
    train_x, train_y = to_supervised(train, n_input)
    # define parameters
    verbose, epochs, batch_size = 0, 20, 16
    n_timesteps, n_features, n_outputs = train_x.shape[1], train_x.shape[2], train_y.shape[1]
    # reshape output into [samples, timesteps, features]
    train_y = train_y.reshape((train_y.shape[0], train_y.shape[1], 1))
    # define model
    model = Sequential()
    model.add(LSTM(200, activation='relu', input_shape=(n_timesteps, n_features)))
    model.add(RepeatVector(n_outputs))
    model.add(LSTM(200, activation='relu', return_sequences=True))
    model.add(TimeDistributed(Dense(100, activation='relu')))
    model.add(TimeDistributed(Dense(1)))
    model.compile(loss='mse', optimizer='adam')
    # fit network
    model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size, verbose=verbose)
    return model

# make a forecast
def forecast(model, history, n_input):
    # flatten data
    data = array(history)
    data = data.reshape((data.shape[0]*data.shape[1], data.shape[2]))
    # retrieve last observations for input data
    input_x = data[-n_input:, 0]
    # reshape into [1, n_input, 1]
    input_x = input_x.reshape((1, len(input_x), 1))
    # forecast the next week
    yhat = model.predict(input_x, verbose=0)
    # we only want the vector forecast
    yhat = yhat[0]
    return yhat

# evaluate a single model
def evaluate_model(train, test, n_input):
    # fit model
    model = build_model(train, n_input)
    # history is a list of weekly data
    history = [x for x in train]
    # walk-forward validation over each week
    predictions = list()
    for i in range(len(test)):
        # predict the week
        yhat_sequence = forecast(model, history, n_input)
        # store the predictions
        predictions.append(yhat_sequence)
        # get real observation and add to history for predicting the next week
        history.append(test[i, :])
    # evaluate predictions days for each week
    predictions = array(predictions)
    score, scores = evaluate_forecasts(test[:, :, 0], predictions)
    return score, scores

# load the new file
dataset = read_csv('household_power_consumption_days.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
# split into train and test
train, test = split_dataset(dataset.values)
# evaluate model and get scores
n_input = 14
score, scores = evaluate_model(train, test, n_input)
# summarize scores
summarize_scores('lstm', score, scores)
# plot scores
days = ['sun', 'mon', 'tue', 'wed', 'thr', 'fri', 'sat']
pyplot.plot(days, scores, marker='o', label='lstm')
pyplot.show()
```
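Once the walk-forward evaluation is done, the same helper functions can produce a genuine out-of-sample forecast for the week after the test period. A sketch (the model is refit here, since evaluate_model does not return the one it trained):

```python
# refit on the training data, then forecast from the full observed history
model = build_model(train, n_input)
history = [x for x in train] + [x for x in test]
yhat = forecast(model, history, n_input)  # shape (7, 1)
print(yhat[:, 0])                         # predicted daily total power for the next week
```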

————————————————

Copyright notice: This is an original article by CSDN blogger 「颠沛的小丸子」, distributed under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.

Original link: /Dian1pei2xiao3/article/details/91470157
